Microsoft’s 2nd gen AI chips revealed

Published on January 27, 2026

(Image credit: efes, CC0, via Wikimedia Commons)

Source: Reuters

Microsoft has introduced the second generation of its proprietary artificial intelligence chip, the Maia 200, alongside a suite of software tools designed to chip away at a key competitive advantage Nvidia holds among developers, as reported by Reuters. According to the company, the new chip is already operational in a data center in Iowa, with a second deployment planned in Arizona. The release builds on the original Maia chip launched in 2023.

The move reflects a broader trend among leading cloud providers, including Microsoft, Google, and Amazon Web Services, which are developing their own AI chips even as they remain significant Nvidia customers. Notably, Google has attracted interest from major Nvidia clients such as Meta Platforms, which is collaborating with Google to narrow the software gaps between their respective AI chip offerings.

In parallel, Microsoft is offering a package of software tools for programming the Maia 200. Among these is Triton, an open-source tool with substantial contributions from OpenAI, designed to perform tasks similar to Nvidia's CUDA software—widely regarded by analysts as a cornerstone of Nvidia's market advantage.

Manufactured by Taiwan Semiconductor Manufacturing Co on a 3-nanometer process, the Maia 200 uses high-bandwidth memory, though of an older, slower generation than that in Nvidia's upcoming Vera Rubin chips. To compensate, Microsoft has adopted a strategy seen among some emerging competitors by integrating a substantial amount of SRAM into the Maia 200. This type of memory can speed up AI applications such as chatbots when they handle high volumes of user requests.

This approach is also employed by Cerebras Systems, which recently secured a $10 billion agreement with OpenAI, and by Groq, a startup from which Nvidia has reportedly licensed technology in a deal valued at $20 billion.