NVIDIA founder and CEO Jensen Huang opened 2026 with a keynote at CES on January 5, a speech that amounted to rolling out a red carpet for the AI industry.
This was the first significant set of announcements since the company rolled out an updated version of its Nemotron family of AI models.
His opening remarks suggested that the flow of investment into AI was not going to stop, with hundreds of billions yet to be poured into the industry; 2025 had already witnessed some of the largest funding rounds seen for any sector.
Over the course of his keynote, Huang outlined a slew of announcements confirming that the blueprint of AI now extends far beyond LLMs, the technology behind ChatGPT and the many other human-like assistants capable of carrying out conversations and simple tasks.
Huang did not address the potential issues that could arise from these innovations, but instead focused on all the benefits that AI could bring to the table.
NVIDIA was initially known for its prowess in graphics cards, bringing some of the most respected gaming models to market. Today, the company sits atop a growing stack of AI models and products of its own making.
Some of those were introduced at the CES tech event, a gathering that drew CEOs from companies such as Siemens (Dr Roland Busch), AMD (Lisa Su), and Lenovo (Yuanqing Yang), each giving their own speeches on the future of AI and other topics.
Alongside the event, an official string of bullish communication flowed from the NVIDIA newsroom. In its flurry of press releases, which perhaps paint too optimistic a picture of AI, NVIDIA also mentioned multiple emerging technologies its new models have touched, such as robotics, autonomous vehicles, and storage processing.
This multi-pronged reach pulled in industry leaders, if only in name for now, from various Silicon Valley companies, suggesting that NVIDIA is looking to hit the ground running in the AI sector by securing potential partnerships with the tech industry's bigwigs.
1. Six AI computing chips fused together. But for what?
NVIDIA has pulled together six different chips to make one super-set that can power some of the most power-intensive AI applications of today. The chips in question are:
- NVIDIA Vera CPU
- NVIDIA Rubin GPU
- NVIDIA NVLink 6 Switch
- NVIDIA ConnectX-9 SuperNIC
- NVIDIA BlueField-4 DPU
- NVIDIA Spectrum-6 Ethernet Switch
The chips are essentially meant to form a new architecture called the Rubin platform, which is expected to deliver a 10x reduction in token inference cost.
Token inference cost refers to the cost of running a trained model on input: text is broken down into tokens, processed according to a given objective, and used to generate an output, with generation typically requiring the most compute.
AI applications often demand tremendous amounts of compute, which means that such an innovation, if it delivers, would bring down the cost of running AI models at scale.
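To see how a 10x cut in token inference cost plays out per request, here is a minimal sketch. The per-token prices and the request size are made-up assumptions for illustration, not NVIDIA figures:

```python
# Illustrative sketch of "token inference cost" with made-up numbers.
# Neither the per-token prices nor the request sizes come from NVIDIA;
# they are assumptions chosen to show how a 10x reduction compounds.

def inference_cost(input_tokens: int, output_tokens: int,
                   price_in: float, price_out: float) -> float:
    """Total cost of one request: prompt tokens plus generated tokens.
    Output tokens are typically priced higher, since each generated
    token requires its own forward pass through the model."""
    return input_tokens * price_in + output_tokens * price_out

# Hypothetical per-token prices (in dollars), before and after a 10x cut.
before = inference_cost(1_000, 500, price_in=2e-6, price_out=8e-6)
after = inference_cost(1_000, 500, price_in=2e-7, price_out=8e-7)

print(f"before: ${before:.6f} per request")  # $0.006000
print(f"after:  ${after:.6f} per request")   # $0.000600
print(f"reduction: {before / after:.0f}x")   # 10x
```

Because the cost is linear in token count, a uniform 10x price cut translates directly into a 10x cheaper request regardless of the input/output split.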
2. The debut of the Alpamayo family of AI models for autonomous driving
Perhaps it was most apparent that NVIDIA was interested in more than just LLMs when Huang went on to speak about the advances the company has made in autonomous vehicles, or AVs.
The Alpamayo family of AI models was introduced as an upgrade in technology to the existing infrastructure that autonomous vehicles use.
“Robotaxis are among the first to benefit. Alpamayo brings reasoning to autonomous vehicles, allowing them to think through rare scenarios, drive safely in complex environments and explain their driving decisions — it’s the foundation for safe, scalable autonomy,” said Huang.
NVIDIA introduced three tools at CES for autonomous vehicles:
- The Alpamayo 1, a reasoning vision-language-action (VLA) model that can be used to accelerate the development of autonomous vehicles. It pairs a 10-billion-parameter architecture with video input so the vehicle can show the logic behind each decision it makes.
- AlpaSim, which is a simulation network accessible on GitHub
- An estimated 1,700 hours of driving data, stored as Physical AI open datasets. Physical AI is the technology used to develop reasoning about the external environment, made possible by cameras and sensors; it helps a system build its own logic for handling unexpected situations in the physical world.
3. NVIDIA aims to leverage Physical AI not just for AVs, but for robotics
NVIDIA is planting its own flag in an arena where Hyundai and Boston Dynamics have already collaborated to integrate robotics across the car company's manufacturing sites, an announcement also made at CES.
For this field, NVIDIA rolled out the Cosmos and GR00T open models, which will be used to develop robots and their reasoning.
Videos by Boston Dynamics now show that robots are capable of handling basic tasks in an assembly line, which means potential for automation in this field is very high.
The new models included as part of the robotics development wave were:
- NVIDIA Cosmos Transfer 2.5
- NVIDIA Cosmos Predict 2.5
- NVIDIA Isaac GR00T N1.6
In some cases, companies such as Boston Dynamics, Caterpillar, Franka Robotics, Humanoid, LG Electronics, and Neura Robotics are already using NVIDIA’s open-source models to develop their products.
4. The next level in storage processing for AI: BlueField-4
AI requires better storage processing, and NVIDIA aims to stay ahead of that curve with BlueField-4. AI models often generate tremendous amounts of data, and BlueField-4 will power an "Inference Context Memory Storage Platform", pitched as the next step in AI-specific storage infrastructure.
AI models rely on billions of parameters to function effectively, and with agentic AI on the rise in every sphere of the economy, the amount of data and parameters is expected to grow sharply. To help with this, the Inference Context Memory Storage Platform is claimed to deliver a 5x increase in tokens per second coupled with 5x greater power efficiency.
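Taken at face value, the two 5x figures can be combined into a single efficiency metric, tokens per joule. A minimal sketch, using assumed baseline numbers that do not come from NVIDIA:

```python
# Rough sketch of what the claimed 5x figures imply together.
# The baseline throughput and power-draw numbers are assumptions
# for illustration only; they are NOT NVIDIA's figures.

def tokens_per_joule(tokens_per_sec: float, watts: float) -> float:
    """Energy efficiency: tokens processed per joule of energy drawn
    (tokens/sec divided by watts, since 1 W = 1 J/s)."""
    return tokens_per_sec / watts

baseline = tokens_per_joule(tokens_per_sec=10_000, watts=1_000)  # 10.0 tok/J
# 5x tokens per second at 5x the power efficiency implies roughly
# the same power draw, since efficiency scales with throughput here:
claimed = tokens_per_joule(tokens_per_sec=50_000, watts=1_000)   # 50.0 tok/J

print(f"baseline: {baseline} tok/J, claimed: {claimed} tok/J")
print(f"efficiency gain: {claimed / baseline:.0f}x")  # 5x
```

Read this way, fivefold throughput at fivefold power efficiency means the platform does more work per watt rather than drawing less power overall.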
“AI is revolutionizing the entire computing stack — and now, storage,” said Huang. He also said AI is no longer about “one-shot chatbots” but intelligent collaborators with new, improved capabilities.
“With BlueField-4, NVIDIA and our software and hardware partners are reinventing the storage stack for the next frontier of AI.”
5. The gaming segment was not left out at CES, but it did look somewhat neglected
Graphics cards and frames per second, the two things that put NVIDIA on the map in the gaming world, did make it into the conversation. While gaming has clearly taken second place on the company's new list of priorities, NVIDIA made sure to mention improvements in what was once one of its core businesses.
NVIDIA rolled out a new upgrade to the DLSS system, DLSS 4.5, which introduces dynamic multi-frame generation promising better quality and visuals for gamers around the world. The number of titles supporting the technology has also grown to 250, up from 75 when it launched at last year's CES.
Alongside the upgrade in gaming performance, NVIDIA unveiled next-gen G-Sync Pulsar gaming monitors, the modding platform NVIDIA RTX Remix, and a new set of AI technologies meant to give conversational NPCs heightened autonomy, turning them into more interactive, human-like gaming characters.


