Why Trump’s AI Plan Made Elon Musk Flip Out

At the White House on Tuesday, SoftBank CEO Masayoshi Son confidently predicted that “artificial superintelligence” will kick off America’s “golden age.” President Donald Trump beamed as Son, OpenAI’s Sam Altman and Oracle’s Larry Ellison announced a $500 billion investment in an American scheme to unlock the potential of super-powerful AI.

The goodwill didn’t last 24 hours.

Elon Musk, the close Trump adviser who has his own AI company and was notably not at the press conference, erupted Wednesday with a relentless stream of online mockery. “They don’t actually have the money,” he posted on X.

He directed particular vitriol at Altman, whose OpenAI he co-founded and is currently suing; he reposted an image of a crack pipe with the joking allegation that Altman and his associates were smoking it. After hours of this, Altman finally slapped back, saying “i realize what is great for the country isn’t always what’s optimal for your companies, but in your new role i hope you’ll mostly put 🇺🇸 first.”

That’s a lot of drama for a feel-good photo op. Clearly, more is at stake than a few minutes onstage with the new president.

This story was adapted from Digital Future Daily, POLITICO’s afternoon newsletter about the power and politics of tech.

Politically, it’s part of Trump’s promise to build out American tech capacity and lock in geopolitical dominance over China. The group was there to promote a new collaboration between SoftBank, OpenAI, Oracle and MGX to build up to $500 billion in AI data centers over the next four years. Dubbed “Stargate,” it’s supposed to massively scale up existing operations to speed the development of godlike AI systems that Ellison promised would develop instant, bespoke cancer vaccines.

But Stargate quickly went from a political victory lap for Trump to an almost comical illustration of what billionaires will fight over in public. Altman and Musk are friends-turned-rivals in the high-stakes bid to develop ultra-powerful AI systems. Both of them have, at times, been Washington’s chosen authority on the human future.

What they’re really jostling for is bragging rights over the biggest and most impressive artificial minds being built — systems that claim to approach “artificial general intelligence,” the ambiguously defined, hypothetical AI system that can match or surpass human capabilities.

For a more in-depth look at what they’re fighting over, and why it matters so much, DFD called Ethan Mollick, a University of Pennsylvania Wharton School professor and author of “Co-Intelligence: Living and Working With AI.”

Mollick sounded a different note of skepticism about the project on X, wondering what all this competition is speeding us toward. “For those convinced they are making AGI soon,” he asked, “what does daily life look like 5-10 years later?”

He spoke to DFD about what, exactly, each party thinks they’re getting into with this massive investment, and what it means for America’s relationship to the AI industry.

He also discussed why “First Buddy” Elon Musk is so skeptical of this particular project, China’s role in the still-nebulous race to AGI, and how many people in the AI community sincerely believe that with projects like this they’re building a “machine god.” An edited and condensed version of our conversation follows:

This Stargate project seems to check so many boxes for everyone involved. Is there a “too good to be true” quality to it?

Part of this is we’ve got this weird R&D investment into a somewhat ill-defined but, if it happens, absolutely society-changing, economy-changing thing. So you want a piece of this next world. People are putting money in on that. But it is hard to imagine checking every box, especially because there’s no policy piece to go with it. The money investment is one piece, but what is the geopolitical element? It’ll be a U.S. company scaling faster. It’s a little bit of everything, and people are getting what they want, but it’s not clear that it’s a solution to every problem. There are questions about open versus closed models: is the government putting its finger on the scale for one over the other? It’s a little confusing. The Biden AI executive order got rescinded and not replaced by anything, and we’re not yet in a place where there’s any national direction.

What are the risks of investing this much money when the definition of “AGI” is still unsettled?

There is a fuzzy-ish definition of AGI, which is a machine that outperforms all human experts at all tasks. Or the average person at most tasks. We don’t really know. Increasingly, the way the AI companies have been announcing things, they view this as an intermediate step to ASI, a superintelligent machine, a sort of machine god. I don’t know what their vision for the future is with this product. Everyone has different views, but everyone thinks they’re ushering in a new era and wants to be there for it. People in these labs are very sincere, and there has been a vibe shift toward believing AGI is achievable.

Is China thinking about AGI in similar terms?

I’m confused on all sides by everyone’s strategy. China is releasing excellent models like DeepSeek. They’re releasing open models, they’re publishing lots of interesting papers. Part of the reason this is all happening, and everyone is speeding up, is that Meta is releasing its Llama models for free and publishing how to modify and create them. I don’t know what the geopolitical conflict around AGI looks like right now because we don’t even know what an AGI looks like. Is it the end of the road, or is it just going to be another five years of everyone adopting smarter machines?

Some people have compared Stargate to the Manhattan Project or the Apollo Program, but given that lack of a definable goal, those don’t seem like great comparisons. Is there one?

It’s hard to know, because in some ways it’s not really a research project. Those programs were comparable at least in size, if the $500 billion of funding actually happens. But those projects were, at peak, 0.5 percent of U.S. GDP. Meta spent more money last year on H100 chips than was spent, inflation-adjusted, on the Manhattan Project. So already it’s like comparing the size of Delaware to a banana. We shouldn’t confuse the yardstick of mass amounts of money with the scale of intent.

What’s confusing about this is it’s not clear that this is funding breakthrough research. It’s funding building new facilities. It’s almost an infrastructure project. The problem is, the larger the infrastructure, the higher your scale, the bigger your model, the smarter it is. So it’s kind of hard to know exactly what we’re building.

We have a bunch of competing labs that are all roughly at the same spot in terms of scaling. So is this a finger on the scale for OpenAI’s model? What does that mean for Google and X and all these other companies scaling up?

It’s a very hard comparison, because it’s not a single national effort, like you said. It’s not clear what the end goal is. It’s not clear that it funds fundamental research rather than scaling, and it’s a commercial project that people are going to make money from every step of the way.

Why do you think Elon Musk is so skeptical of this project?

Well, yes, there’s bad blood between him and OpenAI. But I also think he is winning the scaling war right now. Informally, from what I hear from people, he’s managed to get more chips online faster than anyone else. Grok models are scaling very quickly. He’s very critical of OpenAI because he wants to win the race, but again, no one’s articulated what the end point of the race looks like, so I don’t know what winning means for them.

The thing to me that’s always been weird about this is that no one’s articulated a vision for what the world will look like. They’re always willing to say it comes down to supercharged scientific research, which may very well be true, but also requires rethinking how science is organized to take advantage of these models. What does that look like? There are a lot of other social components that have to fit into it.

