
Microsoft has developed specialized artificial intelligence (AI) chips for its cloud infrastructure

As demand for more advanced AI applications grows, technology companies are seeking cost-effective ways to make their infrastructure more efficient at training large language models (LLMs).

To reduce costs and avoid dependence on third-party chipmakers, Microsoft has introduced two custom-designed chips: the Microsoft Azure Maia 100 AI Accelerator, optimized for AI tasks and generative AI, and the Microsoft Azure Cobalt CPU, an Arm-based processor for general-purpose compute workloads on the Microsoft Cloud.

Microsoft describes these chips as a crucial component of its infrastructure systems, providing end-to-end optimization for internal and customer workloads.

The chips are expected to roll out in early 2024, initially powering Microsoft’s services like Microsoft Copilot and Azure OpenAI Service before becoming available to industry partners for integration into their systems.

Microsoft’s goal is to establish an Azure hardware system that maximizes flexibility and can be optimized for power, performance, sustainability, or cost.

The introduction of these chips reduces Microsoft’s reliance on third-party GPUs, such as Nvidia’s H100, giving the company greater control and flexibility over its hardware ecosystem.
