Microsoft has rolled out a new AI-inference accelerator chip, the Maia 200, according to an official update posted on the company’s website on Monday.
The announcement follows NVIDIA’s unveiling of multiple AI-focused infrastructure products at CES 2026, including the six-chip Rubin platform, and suggests Microsoft is keen to compete in the market with a fresh set of infrastructure innovations.
In the official statement, the Maia 200 was described as part of the company’s heterogeneous AI infrastructure and is expected to serve AI applications such as OpenAI’s GPT-5.2, Microsoft Foundry, and Microsoft 365 Copilot.
“For synthetic data pipeline use cases, Maia 200’s unique design helps accelerate the rate at which high-quality, domain-specific data can be generated and filtered, feeding downstream training with fresher, more targeted signals,” wrote Cloud and AI executive vice president, Scott Guthrie.
At the time of writing, Microsoft shares were trading at $470.28, up 1.41%.
“Fabricated on TSMC’s cutting-edge 3-nanometer process, each Maia 200 chip contains over 140 billion transistors and is tailored for large-scale AI workloads while also delivering efficient performance per dollar,” wrote Guthrie.
The update marks the latest in a string of announcements from the Silicon Valley companies in the Magnificent Seven, all of which are racing to meet rising demand for compute chips and quality infrastructure to close widening gaps in supply, particularly for enterprise and LLM applications.