NVIDIA: Leading the New AI Industrial Revolution, Envisioning a Trillion-Dollar Intelligent Future
In the Q&A session of NVIDIA’s latest earnings call, CEO Jensen Huang not only revealed the company’s strong momentum but also painted a magnificent vision for the future of AI. This is not just a technological leap, but a profound industrial revolution that will completely change the world as we know it.
The Next Wave of AI: Reasoning Agentic AI and Physical AI
Huang pointed out that the most exciting growth driver in AI is the rise of “reasoning agentic AI.” Where past chatbots could only handle a single prompt and generate a single response, today’s AI systems can research, plan, and use tools, reasoning through long chains of thought. This advance increases the compute required per request by a hundred or even a thousand times, significantly reduces hallucinations, and opens up new domains such as enterprise applications, physical AI, and robotics.
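To make the scaling argument concrete, here is a minimal sketch of what an agentic reasoning loop looks like in code. It is illustrative only: call_model, pick_tool, and run_tool are hypothetical stand-ins rather than any real NVIDIA or model-provider API. The point is simply that each user request now triggers many model and tool calls instead of one.

```python
# Minimal sketch of a "reasoning agentic AI" loop, showing why multi-step
# agents need far more compute than a single prompt-and-response chatbot.
# call_model, pick_tool, and run_tool are hypothetical stubs, not a real API.

def call_model(prompt: str) -> str:
    # Stand-in for one LLM forward pass; replace with a real model client.
    return "stub response for: " + prompt[:40]

def pick_tool(thought: str):
    # Decide whether a tool call (web search, code execution, ...) is needed.
    return None  # placeholder: return (tool_name, tool_input) or None

def run_tool(tool_name: str, tool_input: str) -> str:
    # Execute the chosen tool and return its observation.
    return f"result of {tool_name}({tool_input})"

def answer_with_agent(task: str, max_steps: int = 20) -> str:
    # A chatbot reply costs one call_model() invocation; an agent plans,
    # calls tools, and reflects across many steps, so per-request compute
    # scales roughly with max_steps * tokens_per_step.
    scratchpad = f"Task: {task}\n"
    for _ in range(max_steps):
        thought = call_model(scratchpad + "\nThink step by step. Next action?")
        tool = pick_tool(thought)
        if tool is None:  # the model decided it can answer now
            return call_model(scratchpad + "\nWrite the final answer.")
        observation = run_tool(*tool)
        scratchpad += f"Thought: {thought}\nObservation: {observation}\n"
    return call_model(scratchpad + "\nWrite the best answer you can.")

if __name__ == "__main__":
    print(answer_with_agent("Summarize NVIDIA's latest earnings call."))
```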
At the same time, the maturation of physical AI and robotics technology is creating a brand-new industry, which translates to long-term, massive demand for NVIDIA’s data center platform.
NVIDIA’s Annual Innovation Cadence: The Dual Engines of Blackwell and Rubin Platforms
NVIDIA’s pace of innovation is unprecedented. The Blackwell platform, led by the Blackwell Ultra-based GB300, is ramping into mass production and showing striking performance gains. Compared with the previous Hopper generation, a GB300 NVL72 AI factory delivers a large improvement in tokens per watt, which translates directly into revenue for customers running power-constrained data centers.
The chips for the next-generation Rubin platform are already in the fab and are expected to enter mass production next year. Rubin will continue NVIDIA’s annual product update cadence, with each architectural innovation aimed at accelerating cost reduction, maximizing customers’ revenue-generating capabilities, and significantly enhancing AI performance. This confirms that NVIDIA is not only focused on short-term results but also on long-term, continuous innovation.
Full-Stack Co-Design: Tackling the Extreme Complexity of AI Factories
Facing the extreme complexity of AI factories and the challenge of rapidly evolving models, Huang emphasized that NVIDIA’s competitive advantage lies in its “full-stack co-design” approach. NVIDIA provides not just a GPU, but a complete AI supercomputing platform spanning CPUs, GPUs, SuperNICs, and NVLink switches for scale-up.
The ubiquity of the NVIDIA platform means the same programming model runs across every cloud, every computer maker’s systems, edge deployments, and robotics applications, accelerating the entire AI workflow from data processing to training and inference. This comprehensive solution ensures the long-term utility and value of customers’ data centers.
The Trillion-Dollar Opportunity in Global AI Infrastructure
Huang estimates that by the end of the decade, global AI infrastructure spending will run into the trillions of dollars. He observed that the capital expenditures of the large cloud service providers have doubled in the past two years, and that this is only the beginning of the AI build-out. NVIDIA’s goal is to remain an AI infrastructure company, helping customers maximize revenue in power-constrained environments by improving performance per watt: when a data center’s power budget is fixed, every gain in perf per watt means more tokens produced and therefore more revenue.
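A back-of-the-envelope model makes the perf-per-watt argument concrete. All of the numbers below (power budget, tokens per joule, price per million tokens) are illustrative assumptions, not figures from the call; the only point is that with power fixed, revenue scales linearly with efficiency.

```python
# Back-of-the-envelope model of why perf per watt drives revenue when the
# data center's power budget, not the chip count, is the limiting factor.
# All numbers below are illustrative assumptions, not earnings-call figures.

def annual_token_revenue(power_budget_mw: float,
                         tokens_per_joule: float,
                         usd_per_million_tokens: float) -> float:
    """Revenue per year for a power-constrained AI factory."""
    watts = power_budget_mw * 1_000_000          # MW -> W (joules per second)
    seconds_per_year = 365 * 24 * 3600
    tokens_per_year = watts * tokens_per_joule * seconds_per_year
    return tokens_per_year / 1_000_000 * usd_per_million_tokens

base = annual_token_revenue(power_budget_mw=100,    # fixed 100 MW site
                            tokens_per_joule=2.0,   # assumed efficiency
                            usd_per_million_tokens=0.50)
better = annual_token_revenue(100, 2.0 * 3, 0.50)   # 3x tokens per watt

print(f"baseline:  ${base:,.0f}/year")
print(f"3x perf/W: ${better:,.0f}/year")  # same power budget, 3x the revenue
```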
Strategic Networking Portfolio: Connecting Giant AI Super Factories
NVIDIA’s networking strategy is equally critical. The company offers three main networking technologies:
- NVLink: the scale-up fabric. The 72-GPU NVLink domain in the Blackwell platform (NVL72) greatly increases aggregate memory bandwidth and is crucial for inference systems.
- InfiniBand: the lowest-latency, lowest-jitter scale-out fabric, aimed at supercomputing and the top-tier model makers.
- Spectrum-X Ethernet: purpose-built for AI workloads on Ethernet, offering a high-throughput, low-latency scale-out network. The newly launched Spectrum-XGS Ethernet goes a step further, linking multiple data centers and AI factories into a single “giant AI super factory.” Huang stressed that choosing the right networking solution can improve an AI factory’s efficiency by tens of percentage points, an enormous benefit, as the rough calculation below illustrates.
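A rough sketch of that claim, using made-up per-step timings: if collective communication cannot be hidden behind compute, GPUs sit idle and the factory’s effective utilization drops by tens of points. The numbers are assumptions for illustration, not measurements of any specific fabric.

```python
# Rough illustration of how the scale-out network can change AI-factory
# efficiency by tens of percentage points. All timings are assumptions.

def effective_utilization(compute_time: float, comm_time: float,
                          overlap_fraction: float) -> float:
    """Fraction of wall-clock time GPUs spend computing, per training step.

    comm_time is collective-communication time (all-reduce etc.);
    overlap_fraction is how much of it is hidden behind compute.
    """
    exposed_comm = comm_time * (1.0 - overlap_fraction)
    return compute_time / (compute_time + exposed_comm)

# Hypothetical per-step times in milliseconds.
generic_ethernet = effective_utilization(compute_time=80, comm_time=60,
                                         overlap_fraction=0.3)
optimized_fabric = effective_utilization(compute_time=80, comm_time=35,
                                         overlap_fraction=0.8)

print(f"generic Ethernet fabric: {generic_ethernet:.0%} GPU utilization")
print(f"AI-optimized fabric:     {optimized_fabric:.0%} GPU utilization")
```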
The Potential and Advocacy for the China Market
Huang reiterated the importance of the China market, calling it the world’s second-largest computing market with nearly half of the world’s AI researchers. Although geopolitical issues still need to be navigated, NVIDIA has received initial licenses to ship the H20 to customers in China. The company is actively communicating with the US government, advocating for the approval of the Blackwell architecture for sale in China, as this would help American tech companies lead and win the AI race and make the US tech stack the global standard.
Unwavering Confidence in Future Growth
Huang is highly optimistic about future growth. He noted that current demand is extremely high, with “everything sold out.” AI startups are seeing significant growth in both funding and revenue, and open-source models are opening up new opportunities in large enterprises, SaaS, and industrial AI. With the AI revolution in full swing and the AI race well underway, he expects NVIDIA to keep delivering record growth in the coming years and through the end of the decade. The maturation of reasoning agentic AI and physical AI will open up huge enterprise markets and entirely new industries in robotics and industrial automation.
NVIDIA’s vision is clear: to continuously drive technological innovation, build a comprehensive AI infrastructure, and lead the world into a new industrial era full of intelligence and infinite possibilities.