CES 2026: Nvidia launches Alpamayo, a family of AI models for autonomous driving
From the CES stage in Las Vegas, Jensen Huang presents Alpamayo, a new family of models, simulators and open source datasets designed to train autonomous cars and robots to reason in the real world.
LAS VEGAS - Nvidia lifts the bonnet of the autonomous car and installs a brain. It is called Alpamayo: a new open-source family of artificial intelligence models, simulators and datasets for training robots and autonomous vehicles. The goal is ambitious and very concrete.
After dominating the 2025 edition with the announcement of the RTX 5000 series GPUs and a desktop supercomputer, Jensen Huang returns to the Consumer Electronics Show in Las Vegas as the undisputed king of AI, a new Elvis Presley here to shape Physical AI: artificial intelligence that understands the laws of physics. No less. Jensen greets Las Vegas in a black crocodile-leather rock-star jacket.
For Jensen Huang, Nvidia's CEO, it is a historic step. "The ChatGPT moment of physical artificial intelligence has arrived," he says. Translation: machines no longer just see and react. They now begin to understand, reason and act in the real world. Like a human driver.
At the heart of the system is Alpamayo 1, a VLA model (vision, language and action) with 10 billion parameters. It not only recognises what is happening in front of the windscreen, it also strings thoughts together. It reasons in steps. It breaks down problems. It evaluates options. Then it chooses the safest. It is the famous chain of thought, applied to asphalt.
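To make the idea concrete, here is a minimal toy sketch of that loop: observe, reason in explicit steps, rule out risky options, then pick the safest viable manoeuvre. Alpamayo's actual architecture has not been published in this detail, so every name, threshold and score below is a hypothetical illustration, not Nvidia's code.

```python
# Toy illustration of chain-of-thought action selection for driving.
# All names, thresholds and scores are hypothetical, not from Alpamayo.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    risk: float      # estimated collision risk, 0.0 (safe) to 1.0
    progress: float  # how much the manoeuvre advances the route, 0.0 to 1.0

def reason_and_act(scene: str, candidates: list[Action]) -> tuple[list[str], Action]:
    """Return a chain of intermediate 'thoughts' and the chosen action."""
    chain = [f"Observation: {scene}"]
    # Step 1: discard any manoeuvre above an (assumed) risk threshold.
    safe = [a for a in candidates if a.risk < 0.3]
    chain.append(f"Ruled out {len(candidates) - len(safe)} risky option(s)")
    # Step 2: among the safe options, prefer the one that best advances the route.
    best = max(safe, key=lambda a: a.progress)
    chain.append(f"Chose '{best.name}' (risk {best.risk}, progress {best.progress})")
    return chain, best

chain, action = reason_and_act(
    "pedestrian near crossing, clear left lane",
    [Action("accelerate", risk=0.8, progress=0.9),
     Action("change lane left", risk=0.2, progress=0.7),
     Action("brake and wait", risk=0.05, progress=0.1)],
)
print(action.name)  # change lane left
```

The point of the sketch is the intermediate `chain` list: instead of jumping straight from pixels to pedals, the model produces inspectable reasoning steps before committing to a manoeuvre.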
Technically, he explained, Alpamayo is a complete ecosystem for reasoning-based autonomy: it integrates three fundamental pillars (open models, simulation frameworks and datasets) into a cohesive, open platform that any automotive developer or research team can build on.



