
AI Promise and Chip Precariousness


Yesterday Anthropic released Claude Sonnet 3.7; Dylan Patel had the joke of the day about Anthropic’s seeming aversion to the number “4”, which means “die” in Chinese:

Anthropic is also a chinese ai company because of their aversion to the number four

— Dylan Patel (@dylan522p) February 24, 2025

Jokes aside, the correction on this post by Ethan Mollick suggests that Anthropic did not increment the main version number because Sonnet 3.7 is still in the GPT-4 class of models as far as compute is concerned.

After publishing this piece, I was contacted by Anthropic who told me that Sonnet 3.7 would not be considered a 10^26 FLOP model and cost a few tens of millions of dollars to train, though future models will be much bigger. I updated the post with that information. The only significant change is that Claude 3 is now referred to as an advanced model but not a Gen3 model.

I love Mollick’s work, but reject his neutral naming scheme: whoever gets to a generation first deserves the honor of the name. In other words, if Gen2 models are GPT-4 class, then Gen3 models are Grok 3 class.

And, whereas Sonnet 3.7 is an evolution of Sonnet 3.5’s fascinating mixture of personality and coding prowess, likely a result of some Anthropic special sauce in post-training, Grok 3 feels like a model that is the result of a step-order increase in compute capacity, with a much lighter layer of reinforcement learning with human feedback (RLHF). Its answers are far more in-depth and detailed (model good!), but frequently become too verbose (RLHF lacking); it gets math problems right (model good!), but its explanations are harder to follow (RLHF lacking). It is also much more willing to generate forbidden content, from erotica to bomb recipes, while having on the surface the political sensibilities of Tumblr, with something more akin to 4chan under the surface if you prod.1 Grok 3, more than any model yet, feels like the distilled Internet; it’s my favorite so far.

Grok 3 is also a reminder of how much speed matters, and, by extension, why base models are still important in a world of AIs that reason. Grok 3 is tangibly faster than the competition, which is a better user experience; more generally, conversation is the realm of quick wits, not deep thinkers. The latter is who I want doing research or other agentic-type tasks; the former makes for a better consumer user experience in a chatbot or voice interface.

ChatGPT, meanwhile, still has the best product experience — its Mac app in particular is dramatically better than Claude’s2 — and it handles more consumer-y use cases like math homework in a much more user-friendly way. Deep Research, meanwhile, is significantly better than all of its competitors (including Grok’s “Deep Search”), and, for me anyways, the closest experience yet to AGI.

OpenAI’s biggest asset, however, is the ChatGPT brand and associated mindshare; COO Brad Lightcap just told CNBC that the service had surpassed 400 million weekly active users, a 33% increase in less than 3 months. OpenAI is, as I declared four months after the release of ChatGPT, the accidental consumer tech company. Consumer tech companies are the hardest to build and have the potential to be the most valuable; they also require a completely different culture and value chain than a research organization with an API on the side. That is the fundamental reality that I suspect has driven much of the OpenAI upheaval over the last two-and-a-half years: long-time OpenAI employees didn’t sign up to be the next Google Search or Meta, nor is Microsoft interested in being a mere component supplier to a company that must own the consumer relationship to succeed.

In fact, though, OpenAI has moved too slowly: the company should absolutely have an ad-supported version by now, no matter how much the very idea might make AI researchers’ skin crawl; one of the takeaways from the DeepSeek phenomenon was how many consumers didn’t understand how good OpenAI’s best models were because they were not paying customers. It is very much in OpenAI’s competitive interest to make it cost-effective to give free users the best models, and that means advertising. More importantly, the only way for a consumer tech company to truly scale to the entire world is by having an ad model, which maximizes the addressable market while still making it possible to continually increase the average revenue per user (this doesn’t foreclose a subscription model of course; indeed, ads + subscriptions is the ultimate destination for a consumer content business).

DeepSeek, meanwhile, has been the biggest story of the year, in part because it is the yin to Grok 3’s yang. DeepSeek’s V3 and R1 models are excellent and worthy competitors in the GPT-4 class, and they achieved this excellence through extremely impressive engineering on both the infrastructure and model layers; Grok 3, on the other hand, simply bought the most top-of-the-line Nvidia chips, leveraging the company’s networking to build the biggest computing cluster yet, and came out with a model that is better, but not astronomically so.

The fact that DeepSeek is Chinese is critically important, for reasons I will get to below, but it is just as important that it is an open lab, regularly publishing papers, full model weights, and underlying source code. DeepSeek’s models — which are both better than Meta’s Llama models and more open (and unencumbered by an “openish” license) — set the bar for “minimum open capability”; any model at or below DeepSeek’s models has no real excuse to not be open. Safety concerns are moot when you can just run DeepSeek, while competitive concerns are dwarfed by the sacrifice in uptake and interest entailed in having a model that is both worse and closed.

Both DeepSeek and Llama, meanwhile, provide significant pressure on pricing; API costs in both the U.S. and China have come down in response to the Chinese research lab’s releases, and the only way to have a sustainable margin in the long run is to either have a cost advantage in infrastructure (i.e. Google), have a sustainable model capability advantage (potentially Claude and coding), or be an Aggregator (which is what OpenAI ought to pursue with ChatGPT).

The State of AI Chips

All of this is — but for those with high p-doom concerns — great news. AI at the moment seems to be in a Goldilocks position: there is sufficient incentive for the leading research labs to raise money and continue investing in new foundation models (in the hope of building an AI that improves itself), even as competition drives API prices down relentlessly, further incentivizing model makers to come up with differentiated products and capabilities.

The biggest winner, of course, continues to be Nvidia, whose chips are fabbed by TSMC: DeepSeek’s success is causing Chinese demand for the H20, Nvidia’s reduced-compute-and-reduced-bandwidth-to-abide-by-export-controls version of the H200, to skyrocket, even as xAI just demonstrated that the fastest way to compete is to pay for the best chips. DeepSeek’s innovations will make other models more efficient, but it’s reasonable to argue that those efficiencies are downstream from the chip ban, and that it’s understandable why companies who can just buy the best chips haven’t pursued — but will certainly borrow! — similar gains.

That latter point is a problem for AMD in particular: SemiAnalysis published a brutal breakdown late last year demonstrating just how poor the Nvidia competitor’s software is relative to its hardware; AMD promises to do better, but, frankly, great chips limited by poor software has been the story of AMD for its entire five decades of existence. Some companies, like Meta or Microsoft, might put in the work to write better software, but leading labs have neither the time nor the expertise.

The story is different for Huawei and its Ascend line of AI chips. Those chips are fabbed on China’s Semiconductor Manufacturing International Corporation’s (SMIC) 7nm process, using western-built deep ultraviolet lithography (DUV) and quad-patterning; that this is possible isn’t a surprise, but it’s reasonable to assume that the fab won’t progress further without a Chinese supplier developing extreme ultraviolet lithography (EUV) (and no, calling an evolution of the 7nm process 5.5nm doesn’t count).

Still, the primary limitation for AI chips — particularly when it comes to inference — isn’t necessarily chip speed, but rather memory bandwidth, and that can be improved at the current process level. Moreover, one way to (somewhat) overcome the necessity of using less efficient chips is to simply build more data centers with more power, something that China is much better at than the U.S. Most important, however, is that China’s tech companies have the motivation — and the software chops — to make the Ascend a viable contender, particularly for inference.
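To see why memory bandwidth, not raw compute, is the binding constraint for inference, consider a back-of-the-envelope roofline estimate; the numbers below are illustrative assumptions (a hypothetical 70-billion-parameter dense model at 8-bit weights, on a chip with roughly H100-class bandwidth), not vendor benchmarks:

```python
def max_tokens_per_second(params_billions: float,
                          bytes_per_param: float,
                          memory_bandwidth_gbps: float) -> float:
    """Upper bound on single-stream decode speed for a dense model.

    During autoregressive decoding, generating each token requires
    streaming the full set of model weights through memory, so
    throughput is capped at bandwidth divided by bytes-per-token.
    """
    bytes_per_token = params_billions * 1e9 * bytes_per_param
    return memory_bandwidth_gbps * 1e9 / bytes_per_token

# Hypothetical: 70B parameters, 1 byte per weight, 3,350 GB/s of HBM
print(round(max_tokens_per_second(70, 1, 3350)))  # ~48 tokens/second
```

The point of the sketch is that a faster process node doesn’t move this ceiling; more or faster memory does, which is why a fab stuck at 7nm can still field competitive inference chips.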

There is one more player who should be mentioned alongside Nvidia/TSMC and Huawei/SMIC, and that is the hyperscalers who design their own chips, either on their own (AWS with Trainium and Microsoft with Maia) or in collaboration with Broadcom (Google with TPUs and Meta with MTIA). The capabilities and importance of these efforts vary — Google has been investing in TPUs for a decade now, and trains its own models on them, while the next-generation Anthropic model is being trained on Trainium; Meta’s MTIA is about recommendations and not generative AI, while Microsoft’s Maia is a much more nascent effort — but what they all have in common is that their chips are fabbed by TSMC.

TSMC and Intel

That TSMC is dominant isn’t necessarily a surprise. Yes, much has been written, including on this site, about Intel’s stumbles and TSMC’s rise, but even if Intel had managed to stay on the leading edge — and 18A is looking promising — there is still the matter of the company needing to transform itself from an integrated device manufacturer (IDM) who designs and makes its own chips, to a foundry that has the customer service, IP library, and experience to make chips for 3rd parties like all of the entities I just discussed.

Nvidia, to take a pertinent example, was making its chips at TSMC (and Samsung) even when Intel had the leading process; indeed, it was the creation of TSMC and its pure-play foundry model that even made Nvidia possible.3 This also means that TSMC doesn’t just have leading edge capacity, but trailing edge capacity as well. There are a lot of chips in the world — both on AI servers and also in everything from cars to stereos to refrigerators — that don’t need to be on the cutting edge and which benefit from the low costs afforded by the fully depreciated foundries TSMC still maintains, mostly in Taiwan. And TSMC, in turn, can take that cash flow — along with increasing prices for the leading edge — and invest in new fabs on the cutting edge.

Those leading edge fabs continue to skyrocket in price, which means volume is critical. That is why it was clear to me back when this site started in 2013 that Intel needed to become a foundry; unfortunately the company didn’t follow my advice, preferring to see their stock price soar on the back of cloud server demand. Fast forward to 2021 and Intel — now no longer on the leading edge, and with its cloud server business bleeding share to a resurgent AMD on TSMC’s superior process — tried, under the leadership of Pat Gelsinger, to become a foundry; unfortunately the company’s cash position is diminishing faster than its foundry customer base is growing, a base which is mostly experimental chips or x86 variants.

Intel’s core problem goes back to the observation above: becoming a foundry is about more than having the leading edge process; Intel might have been able to develop those skills in conjunction with customers eager to be on the best process in the world, but once Intel didn’t even have that, it had nothing to offer. There simply is no reason for an Apple or AMD or Nvidia to take the massive risk entailed in working with Intel when TSMC is an option.

China and a Changing World

TSMC is, of course, headquartered in Taiwan; that is where the company’s R&D and leading edge fabs are located, along with most of its trailing edge capacity. SMIC, obviously, is in China; another foundry is Samsung, in South Korea. I told the story as to why so much of this industry ended up in Asia last fall in A Chance to Build:

Semiconductors are so integral to the history of Silicon Valley that they give the region its name, and, more importantly, its culture: chips require huge amounts of up-front investment, but they have, relative to most other manufactured goods, minimal marginal costs; this economic reality helped drive the development of the venture capital model, which provided unencumbered startup capital to companies who could earn theoretically unlimited returns at scale. This model worked even better with software, which was perfectly replicable.

That history starts in 1956, when William Shockley founded the Shockley Semiconductor Laboratory to commercialize the transistor that he had helped invent at Bell Labs; he chose Mountain View to be close to his ailing mother. A year later the so-called “Traitorous Eight”, led by Robert Noyce, left and founded Fairchild Semiconductor down the road. Six years after that Fairchild Semiconductor opened a facility in Hong Kong to assemble and test semiconductors. Assembly required manually attaching wires to a semiconductor chip, a labor-intensive and monotonous task that was difficult to do economically with American wages, which ran about $2.50/hour; Hong Kong wages were a tenth of that. Four years later Texas Instruments opened a facility in Taiwan, where wages were $0.19/hour; two years after that Fairchild Semiconductor opened another facility in Singapore, where wages were $0.11/hour.

In other words, you can make the case that the classic story of Silicon Valley isn’t completely honest. Chips did have marginal costs, but that marginal cost was, within single digit years of the founding of Silicon Valley, exported to Asia.

I recounted in that Article how this outsourcing was an intentional policy of the U.S. government, and launched into a broader discussion about the post-War Pax Americana global order that placed the U.S. consumer market at the center of global trade, denominated by the dollar, and why that led to an inevitable decline in American manufacturing and the rise of a country in China that, in retrospect, was simply too big, and thus too expensive, for America to bear.

That, anyways, is how one might frame many of the signals coming out of the 2nd Trump administration, including what appears to be a Monroe 2.0 Doctrine approach to North America, an attempt to extricate the U.S. from the Ukraine conflict specifically and Europe broadly, and, well, a perhaps tamer approach to China to start, at least compared to Trump’s rhetoric on the campaign trail.

One possibility is that Trump is actually following through on the “pivot to Asia” that U.S. Presidents have been talking about but failing to execute on for years; in this view the U.S. is girding itself up to defend Taiwan and other entities in Asia, and hopefully break up the burgeoning China-Russia relationship in the process.

The other explanation is more depressing, but perhaps more realistic: President Trump may believe that the unipolar U.S.-dominated world that has been the norm since the fall of the Soviet Union is drawing to a close, and it’s better for the U.S. to proactively shift to a new norm than to have it forced upon them.

The important takeaway that is relevant to this Article is that Taiwan is the flashpoint in both scenarios. A pivot to Asia is about gearing up to defend Taiwan from a potential Chinese invasion or embargo; a retrenchment to the Americas is about potentially granting — or acknowledging — China as the hegemon of Asia, which would inevitably lead to Taiwan’s envelopment by China.

This is, needless to say, a discussion where I tread gingerly, not least because I have lived in Taipei off and on for over two decades. And, of course, there is the moral component entailed in Taiwan being a vibrant democracy with a population that has no interest in reunification with China. To that end, the status quo has been simultaneously absurd and yet surprisingly sustainable: Taiwan is an independent country in nearly every respect, with its own border, military, currency, passports, and — pertinent to tech — economy, increasingly dominated by TSMC; at the same time, Taiwan has not declared independence, and the official position of the United States is to acknowledge that China believes Taiwan is theirs, without endorsing either that position or Taiwanese independence.

Chinese and Taiwanese do, in my experience, handle this sort of ambiguity much more easily than do Americans; still, gray zones only go so far. What has been just as important are realist factors like military strength (once in favor of Taiwan, now decidedly in favor of China), economic ties (extremely deep between Taiwan and China, and China and the U.S.), and war-waging credibility. Here the Ukraine conflict and the resultant China-Russia relationship looms large, thanks to the sharing of military technology and overland supply chains for oil and food that have resulted, even as the U.S. has depleted itself. That, by extension, gets at another changing factor: the hollowing out of American manufacturing under Pax Americana has been directly correlated with China’s dominance of the business of making things, the most essential war-fighting capability.

Still, there is — or rather was — a critical factor that might give China pause: the importance of TSMC. Chips undergird every aspect of the modern economy; the rise of AI, and the promise of the massive gains that might result, only make this need even more pressing. And, as long as China needs TSMC chips, they have a powerful incentive to leave Taiwan alone.

Trump, Taiwan, and TSMC

Anyone who has been following the news for the last few years, however, can surely see the problem: the various iterations of the chip ban, going back to the initial action against ZTE in 2018, have the perhaps-unintended effect of making China less dependent on TSMC. I wrote at the time of the ZTE ban:

What seems likely to happen in the long run is a separation at the hardware layer as well; China is already investing heavily in chips, and this action will certainly spur the country to focus on the sort of relatively low-volume high-precision components that other countries like the U.S., Taiwan, and Japan specialize in (to date it has always made more sense for Chinese companies to focus on higher-volume lower-precision components). To catch up will certainly take time, but if this action harms ZTE as much as it seems it will I suspect the commitment will be even more significant than it already is.

I added two years later, after President Trump barred Huawei from TSMC chips in 2020:

I am, needless to say, not going to get into the finer details of the relationship between China and Taiwan (and the United States, which plays a prominent role); it is less that reasonable people may disagree and more that expecting reasonableness is probably naive. It is sufficient to note that should the United States and China ever actually go to war, it would likely be because of Taiwan.

In this, TSMC specifically, and the Taiwan manufacturing base generally, are a significant deterrent: both China and the U.S. need access to the best chip maker in the world, along with a host of other high-precision pieces of the global electronics supply chain. That means that a hot war, which would almost certainly result in some amount of destruction to these capabilities, would be devastating…one of the risks of cutting China off from TSMC is that the deterrent value of TSMC’s operations is diminished.

Now you can see the fly in Goldilocks’ porridge! China would certainly like the best chips from TSMC, but they are figuring out how to manage with SMIC and the Ascend and surprisingly efficient state-of-the-art models; the entire AI economy in the U.S., on the other hand — the one that is developing so nicely, with private funding pursuing the frontier, and competition and innovation up-and-down the stack — is completely dependent on TSMC and Taiwan. We have created a situation where China is less dependent on Taiwan, even while we are more dependent on the island.

This is the necessary context for two more will-he-or-won’t-he ideas floated by President Trump; both are summarized in this Foreign Policy article:

U.S. President Donald Trump has vowed to impose tariffs on Taiwan’s semiconductor industry and has previously accused Taiwan of stealing the U.S. chip industry…The primary strategic goal for the administration is to revitalize advanced semiconductor manufacturing in the United States…As the negotiations between TSMC and the White House unfold, several options are emerging.

The most discussed option is a deal between TSMC, Intel, the U.S. government, and U.S. chip designers such as Broadcom and Qualcomm. Multiple reports indicate that the White House has proposed a deal that would have TSMC acquire a stake in Intel Foundry Services and take a leading role in its operations after IFS separated from Intel. Other reports suggest a potential joint venture involving TSMC, Intel, the U.S. government, and industry partners, with technology transfer and technical support from TSMC.

The motivation for such a proposal is clear: Intel’s board, who fired Gelsinger late last year, seems to want out of the foundry business, and Broadcom or Qualcomm are natural landing places for the design division; the U.S., however, is the entity that needs a leading edge foundry in the U.S., and the Trump administration is trying to compel TSMC to make it happen.

Unfortunately, I don’t think this plan is a good one. It’s simply not possible for one foundry to “take over” another: while the final output is the same — a microprocessor — nearly every step of the process is different in a multitude of ways. Transistors — even ones of the same class — can have different dimensions, with different layouts (TSMC, for example, packs its transistors more densely); production lines can be organized differently, to serve different approaches to lithography; chemicals are tuned to individual processes, and can’t be shared; equipment is tailored to a specific line, and can’t be switched out; materials can differ, throughout the chip, along with how exactly they are prepared and applied. Sure, most of the equipment could be repurposed, but one doesn’t simply layer a TSMC process onto an Intel fab! The best you could hope for is that TSMC could rebuild the fabs using the existing equipment according to their specifications.

That, though, doesn’t actually solve the Taiwan problem: TSMC is still headquartered in Taiwan, still has its R&D division there, and is still beholden to a Taiwanese government directive to not export its most cutting edge processes (and yes, there is truth to Trump’s complaints that Taiwan sees TSMC as leverage to guarantee that the U.S. defends Taiwan in the event of a Chinese invasion). Moreover, the U.S. chip problem isn’t just about the leading edge, but also the trailing edge. I wrote in Chips and China:

It’s worth pointing out, though, that this is producing a new kind of liability for the U.S., and potentially more danger for Taiwan…these aren’t difficult chips to make, but that is precisely why it makes little sense to build new trailing edge foundries in the U.S.: Taiwan already has it covered (with the largest marketshare in both categories), and China has the motivation to build more just so it can learn.

What, though, if TSMC were taken off the board?

Much of the discussion around a potential invasion of Taiwan — which would destroy TSMC (foundries don’t do well in wars) — centers around TSMC’s lead in high end chips. That lead is real, but Intel, for all of its struggles, is only a few years behind. That is a meaningful difference in terms of the processors used in smartphones, high performance computing, and AI, but the U.S. is still in the game. What would be much more difficult to replace are, paradoxically, trailing node chips, made in fabs that Intel long ago abandoned…

The more that China builds up its chip capabilities — even if that is only at trailing nodes — the more motivation there is to make TSMC a target, not only to deny the U.S. its advanced capabilities, but also the basic chips that are more integral to everyday life than we ever realized.

It’s good that the administration is focused on the issue of TSMC and Taiwan: what I’m not sure anyone realizes is just how deep the dependency goes, and just how vulnerable the U.S. — and our future in AI — really is.

What To Do

Everything that I’ve written until now has been, in some respects, trivial: it’s easy to identify problems and criticize proposed solutions; it’s much more difficult to come up with solutions of one’s own. The problem is less the need for creative thinking and more the courage to make trade-offs: the fact of the matter is that there are no good solutions to the situation the U.S. has gotten itself into with regards to Taiwan and chips. That is a long-winded way to say that the following proposal includes several ideas that, in isolation, I find some combination of distasteful, against my principles, and even downright dangerous. So here goes.

End the China Chip Ban

The first thing the U.S. should do — and, by all means, make this a negotiating plank in a broader agreement with China — is let Chinese companies, including Huawei, make chips at TSMC, and further, let Chinese companies buy top-of-the-line Nvidia chips.

The Huawei one is straightforward: Huawei’s founder may have told Chinese President Xi Jinping that Huawei doesn’t need external chip makers, but I think that the reality of having access to cutting edge TSMC fabrication would show that the company’s revealed preference would be for better chips than Huawei can get from SMIC — and the delta is only going to grow. Sure, Huawei would still work with SMIC, but the volume would go down; critically, so would the urgency of having no other choice. This, by extension, would restart China’s dependency on TSMC, thereby increasing the cost of making a move on Taiwan.

At the same time, giving Huawei access to cutting edge chips would be a significant threat to Nvidia’s dominance; the reason the company is so up-in-arms about the chip ban isn’t simply foregone revenue but the forced development of an alternative to their CUDA ecosystem. The best way to neuter that challenge — and it is in the U.S.’s interest to have Nvidia in control, not Huawei — is to give companies like Bytedance, Alibaba, and DeepSeek the opportunity to buy the best.

This does, without question, unleash China in terms of AI; preventing that has been the entire point of the various flavors of chip bans that came down from the Biden administration. DeepSeek’s success, however, should force a re-evaluation about just how viable it is to completely cut China off from AI.

It’s also worth noting that success in stopping China’s AI efforts has its own risks: another reason why China has held off from moving against Taiwan is the knowledge that every year they wait increases their relative advantages in all the real world realities I listed above; that makes it more prudent to wait. The prospect of the U.S. developing the sort of AI that matters in a military context, however, even as China is cut off, changes that calculus: now the prudent course is to move sooner rather than later, particularly if the U.S. is dependent on Taiwan for the chips that make that AI possible.

Double Down on the Semiconductor Equipment Ban

While I’ve continually made references to “chip bans”, that’s actually incomplete: the U.S. has also made moves to limit China’s access to semiconductor equipment necessary for making leading edge chips (SMIC’s 7nm process, for example, is almost completely dependent on western semiconductor equipment). Unfortunately, this effort has mostly been a failure, thanks to generous loopholes that are downstream from China being a large market for U.S. semiconductor equipment manufacturers.

It’s time for those loopholes to go away; remember, the overriding goal is for China to increase its dependence on Taiwan, and that means cutting SMIC and China’s other foundries off at the knees. Yes, this increases the risk that China will develop its own alternatives to western semiconductor manufacturers, leading to long-term competition and diminished money for R&D, but this is a time for hard choices and increasing Taiwan’s importance to China is more important.

Build Trailing Edge Fabs in the U.S.

The U.S.’s dependency on TSMC for trailing edge chip capacity remains a massive problem; if you think the COVID chip shortages were bad, then a scenario where the U.S. is stuck with GlobalFoundries and no one else is a disaster so great it is hard to contemplate. However, as long as TSMC exists, there is zero economic rationale for anyone to build more trailing edge fabs.

This, then, is a textbook example of where government subsidies are the answer: there is a national security need for trailing edge capacity, and no economic incentive to build it. And, as an added bonus, this helps fill in some of the revenue for semiconductor manufacturers who are now fully cut off from China. TSMC takes a blow, of course, but they are also being buttressed by orders from Huawei and other Chinese chip makers.

Intel and the Leading Edge

That leaves Intel and the need for native leading edge capacity, and this is in some respects the hardest problem to solve.

First, the U.S. should engineer a spin-off of Intel’s x86 chip business to Broadcom or Qualcomm at a nominal price; the real cost for the recipient company will be guaranteed orders for not just Intel chips but also a large portion of their existing chips for Intel Foundry. This will provide the foundational customer to get Intel Foundry off the ground.

Second, the U.S. should offer to subsidize Nvidia chips made at Intel Foundry. Yes, this is an offer worth billions of dollars, but it is the shortest, fastest route to ground the U.S. AI industry in U.S. fabs.

Third, if Nvidia declines — and they probably will, given the risks entailed in a foundry change — then the U.S. should make a massive order for Intel Gaudi AI accelerators, build data centers to house them, and make them freely available to companies and startups who want to build their own AI models, with the caveat that everything is open source.

Fourth, the U.S. should heavily subsidize chip startups to build at Intel Foundry, with the caveat that all of the resultant IP that is developed to actually build chips — the basic building blocks, that are separate from the “secret sauce” of the chip itself — is open-sourced.

Fifth, the U.S. should indemnify every model created on U.S.-manufactured chips against any copyright violations, with the caveat that the data used to train the model must be made freely available.


Here is the future state the U.S. wants to get to: a strong AI industry running on U.S.-made chips, along with trailing edge capacity that is beyond the reaches of China. Getting there, however, will take significant interventions into the market to undo the overwhelming incentives for U.S. companies to simply rely on TSMC; even then, such a shift will take time, which is why making Taiwan indispensable to China’s technology industry is the price that needs to be paid in the meantime.

AI is in an exciting place; it’s also a very precarious one. I believe this plan, with all of the risks and sacrifices it entails, is the best way to ensure that all of the trees that are sprouting have time to actually take root and change the world.


  1. This suggests a surprising takeaway: it’s possible that while RLHF on ChatGPT and especially Claude block off the 4chan elements, they also tamp down the Tumblr elements, which is to say the politics don’t come from the post-training, but from the dataset — i.e. the Internet. In other words, if I’m right about Grok 3 having a much lighter layer of RLHF, then that explains both the surface politics, and what is available under the surface. 

  2. Grok doesn’t yet have a Mac app, but its iPhone app is very good 

  3. Although Nvidia’s first chip was made by SGS-Thomson Microelectronics 

