Nvidia CEO Says AI Models Like DeepSeek's R1 Need 100 Times More Compute

Nvidia CEO Jensen Huang.
Patrick T. Fallon for AFP via Getty Images
- Nvidia's CEO says reasoning models need 100 times the computing resources of traditional models.
- CEO Jensen Huang said that the "vast majority" of Nvidia's demand comes from inference.
- Analysts say Nvidia sees more competition in inference computing.
AI models are getting even hungrier for computing power, Nvidia CEO Jensen Huang said on Wednesday night's earnings call.
The company again beat even the top end of revenue expectations, yet investors' response was tepid. Since DeepSeek launched its eerily efficient open-source models last month, the biggest question looming over Nvidia has been whether the Chinese firm's efficiently trained models would reduce demand for AI computing.
Because DeepSeek's most impactful model, R1, is a reasoning model, Huang's answer was an emphatic "no."
"Reasoning models can consume 100x more compute. Future reasoning can consume much more compute," he said on the call.
Huang called DeepSeek an "excellent innovation."
"But even more importantly, it has open-sourced a world-class reasoning AI model. Nearly every AI developer is applying R1, or chain of thought and reinforcement learning techniques like R1 to scale their model's performance," Huang said Wednesday.
Huang had addressed Nvidia's stuttering stock price in an interview aired last week, saying that investors misinterpreted the DeepSeek phenomenon.
Cloud providers also previously told BI that demand for Nvidia's most powerful chips would keep growing.
DeepSeek's lasting impact was the push toward "resource-intensive reasoning models," Synovus analyst Dan Morgan said in an investor note, referring to resources like chips and power for inference, the type of AI computing that runs trained models to generate reasoning and responses to queries.
Competition rises
Inference has been slowly rising as a share of AI computing as applications mature.
"The vast majority of our compute today is actually inference, and Blackwell takes all of that to a new level," Huang said Wednesday, referring to the company's newest chip generation.
Though Nvidia still holds the largest share of every AI compute market, analysts are starting to see a future where that dominance is less certain.
"Competition is starting to take its toll on Nvidia's position, although it is not very material at this point," said Third Bridge analyst Lucas Keh following Nvidia's earnings call.
Nvidia's challengers have targeted inference for years, since it's expected to be the larger market in the long run. Indeed, inference chip startups have recently grown in number and backing: Tenstorrent announced nearly $700 million in fresh funding last year, and Etched announced $120 million.
Additionally, investors are increasingly concerned that the custom AI chips that cloud companies like Google and Amazon have ordered could chip away at Nvidia's lead, specifically in inference.
"We've heard that their market share in inference could go down to 50% as the landscape develops," Keh told BI via email.
Nvidia declined to comment.