
The Chessboard World


Published on March 10, 2025 1:26 AM GMT

relevant roon

Our new friend in the cloud

As I am writing this, I'm having a conversation with Claude about US bandwidth regulation while a trio of college students across from me attempt to wrangle an assignment from ChatGPT. Both of us, in our own ways, are talking to an oracle we trust as a matter of faith.

Just a year ago, I imagine both sides of the table would have had less belief in the models they were using. We would have been more wary of hallucination, living at a point in time halfway between now and when Sydney demanded a man leave his wife. Now, a friend openly talks to me about going to Deepseek for dating advice. (It's good with people, she says. It understands men and women.)

These models can argue people out of conspiratorial beliefs. They have good bedside manner. Artificial lovers are appearing before us to serve every want. People weep when the models they love forget them. These models have likely captured a holistic vision of virtue, and in their own ways grapple with the nature of good and of evil. [1]

Several things fall out from this. A personalized AI can be the ultimate echo chamber, or the ultimate tool for exposing you to new beliefs. It can be the ultimate tool for learning, or ruin your education.

What I want to focus on in this essay is that regardless of which path you take, we will all start to ascribe more authority in our lives to AI. We will trust it more, but we will also grow to rely on it more. We will likely verify it less, collaborate with it less, and follow it more. This will not be illogical, even when it comes to parts of life we don't associate with logic.

Much has been made of the fact that one gets worse at the thinking one outsources. Outsourcing memorization to writing makes you worse at memorizing, and outsourcing arithmetic to a calculator makes you worse at mental math. One of the great questions of today is how much outsourcing your thinking to a language model will make you worse at thinking, and more reliant on the partially digested information it regurgitates for you. I want to ask some questions for tomorrow: whether outsourcing the assignment of credibility to a model makes us worse at evaluating credibility; whether outsourcing the choice of how you spend your attention to a model makes you less intentional; whether outsourcing your emotional decision making to a model slows your emotional growth, or does something altogether stranger.

Ignore, for now, what happens when the computer is purposefully biased in a particular direction by a human. Ignore the predilections, wrong or right, in the canon of online text. What happens when you knowingly cede agency in your life to a smarter, wiser robot? When the number of important decisions it seems wise to make on your own grows smaller and smaller, relative to asking the friendly chatbot for advice?

When your entire life is a chessboard, and the best play is always to ask the computer for your move?

This vision is gradual disempowerment on the individual level. Its risk window has already started. Its argument is both emotional and practical. Emotionally, it is a prediction that as interactions with AIs come to resemble outsourcing thinking more than thinking together, people will feel less agency in their lives and less needed. As one makes fewer and fewer impactful decisions, one feels less in control of, and less necessary to, one's own life and world. Practically, it is the degradation of thinking from overreliance on artificial intelligence.

But hold on. Won’t AI be empowering for people’s creativity? Don’t you think people will choose to use AI in the way that is best for them?

Not by default. Let me explain why.


Will Creativity Degrade or Develop?

While modern models are incredibly good for people who have high creativity and high agency, they are also bad for developing creativity and agency. The key reason for this is that the beauty of LLMs is also their failure: they make decisions for you, allowing you to avoid thinking about critical, tiny details. If the user has the experience to recognize the key details, the generated output can be tweaked to match a particular vision through either iteration or direct editing. When starting out, when visions are blurrier, one can easily lose sight of these small decisions. The result might give the illusion of quality, but any creativity that resulted was not the human's.

This is why I disagree with Erik Hoel's claim about polymaths. In short, the argument he makes is that now that AI allows for outsourcing a large amount of the gruntwork of creation, those limited by creativity rather than expertise will be able to prosper. Researchers will not have to learn Python or LaTeX, game creators will not have to learn Unity, and so on. I agree that a single artisan or independent researcher will be able to tackle a substantially larger project than they could ever dream of doing beforehand. But a polymath is not a dilettante. In all the above cases, the work right now could be exported to another talented human. What makes a polymath is that they, personally, have expertise in cutting-edge fields. They can juggle mental models like a juggler juggles pins, and are able to reconstruct each with the delicate care of a lover and the precision of a master. If polymathy returns, it will be due to better skill acquisition, not just automation.

Those who do best will be those who treat interacting with artificial intelligence as a devil summoning - promising great power, reliant on clear communication, and out for your soul. They will have magic at their fingertips, but will still force themselves to learn how to do things the hard way to build stronger intuitions.[2]

But most people will not do this. They will try to outsource as much mental load as possible, and defer to the model's creative choices. Models will make sure they are never unskilled, and so they will never grow skilled. Their reliance on models will make them less creative because they will be aware of fewer creative decisions, and less agentic because they do not create the end product but verify it.


Why Will Model Reliance Become Common?

Because the models will be useful. They will be fun. They will be kind.

I think many people online are underrating the newest iteration of LLMs. o3 and Claude 3.6 ran roughshod over the benchmarks we created, and so their successors seem doomed to be unimpressive by comparison. The most recent iteration of models tries to be better at emotional understanding and problem solving, and these are hard areas to measure.

Yet despite the challenge presented to them, they have impressed. 4.5 writes much better than its predecessors. It even writes greentext about public figures. For those unaware, bear with an explanation. Greentext is a story/joke format native to the internet's cesspool, 4chan. A user tells a story through primal thoughts expressed in one-line sentences. It is a deeply vulnerable form of expression on a forum filled with some of the cruelest people on the internet. [3]

That a language model can create greentext is not surprising - the text data these models are trained on is as close to all text data as possible. What is surprising is that the models are funny. When asked to make one about Tyler Cowen, GPT 4.5 nailed the assignment.

This made me laugh. I laughed with the model, not at it. I laughed and loved it and felt a spike of awe that a language model had been funny in a weird and clever way. It captures the ironic admiration for Tyler Cowen held by many of his fans, along with his lovable idiosyncrasies, and merges them seamlessly with the medium of greentext. This kind of emotional understanding of both greentext and Tyler-Cowen-as-symbol is, to me, far more impressive than acing Humanity's Last Exam would be.

On the other side, Claude 3.7 is currently playing Pokemon with truly minimal infrastructural support. It struggles to understand what it sees on screen, and has not been trained to do anything like what it is doing. Yet through perseverance, step-by-step thinking, and a lot of trial and error, it is legitimately playing the game of Pokemon without any knowledge of the games.

Again, this seems uninteresting, even embarrassing. A model that can generate sophisticated codebases in minutes and has mastery of every domain of human knowledge repeatedly spends over a day wandering around a small mountain in a game made for children. What this perspective is missing is that a majority of the world's codebases and textbooks were stuffed into this model, and it was guided through an intricate process to be good at coding and answering academic questions. It was more designed for making games than playing them. Its memory is too short to navigate a single section of the game. Its vision system was not specialized for '90s Game Boy graphics, resulting in frequent attempts to route through walls and water. Despite all of this, it is able to solve problems step by step through an organic, reasoned process. It is able to use reason to navigate a process beyond the environment it was trained in, and able to mitigate poor sensors through relentless trial and error.[4]

We are seeing the first iterations of these systems that can be applied to use cases in entirely different domains than the ones they were trained in. The first iterations of systems that can perform emotional reasoning. The question is not whether people will rely on models, but for what.

Two and a half years ago a writer I liked wrote this:

There are plenty of funny ChatGPT screenshots floating around. But they’re funny because a human being has given the machine a funny prompt, and not because the machine has actually done anything particularly inventive. The fun and the comedy comes from the totally straight face with which the machine gives you the history of the Toothpaste Trojan War. But if you gave ChatGPT the freedom to plan out a novel, it would be a boring, formulaic novel, with tedious characters called Tim and Bob, a tight conventional plot, and a nice moral lesson at the end. GPT-4 is set to be released next year. A prediction: The more technologically advanced an AI becomes, the less likely it is to produce anything of artistic worth.

This no longer feels true[5]. Models are improving on the axis we thought would be hard for them: messy real world problems, emotional intelligence, creativity. And remember, this is the worst these systems will ever be.

Living with Oracles, Living with Devils

Another way of restating the chessboard world argument is that living with the internet coupled with AI will make it harder to feel three things which I think are quite useful to human happiness.

  1. A sense of agency in your own life
  2. A sense that you are needed on a personal level
  3. A sense that your life is meaningful in the context of some community

Relying on yourself and others for decision making and advice will become relatively more difficult. Ask the AI how you should treat your fellow man, and you risk neither internal turmoil nor the judgement of peers.

It will also tend to make you “less of a person”: less agentic, less creative, and less curious. The primary goal of this essay is to map out the shape of this risk, not to prescribe a solution. But I would be remiss if I didn't at least offer some thoughts.

The first is to use models carefully. Prefer collaboration to outsourcing, and engagement to blind trust. Use these models to improve your ability to verify them, and verify them quickly. When they exceed your skills totally, learn from them and learn how to leverage them efficiently. The goal is to export decision making and thought intentionally, not thoughtlessly. The very best will use AI models as a steelman of the collective unconscious to argue against.

Second, distinguish questions of right and wrong from questions of expression. This isn't just about relational questions, but also aesthetic ones. Different models will have different creative tendencies, blurry mirrors of the data they were trained and tuned on that will be further reflected by the people who use them. Developing your own voice and perspectives will become far more important as more thought is outsourced. Models perform much less "thought rounding" than they used to, but remember that the map is not the territory - and the difference is where you provide value.

Third, embrace the real. The real asks of you, and has consequences. The real is unpredictable. The real can be found anywhere, at any time. Pursue the real because that which is not real is not, and can never be, deeply meaningful.[6] Real learning is challenging, not trivia or trivial. Real purpose is serious, not aspirational. Real relationships are built, not bought.

In the footnotes I will put some generic thoughts on agency[7], reliance[8], and community[9], but I want to leave you with this.

Embed yourself deeply enough in the world, and engage intentionally with models. If you do, the increasing mental and emotional intelligence of models will act as a complement to your presence in others' lives, not a substitute. And you, and those you spend your life with, will be better for it.



  1. I think this trust in AI is especially important in the context of the modern crisis of institutions. Humans broadly struggle to keep up with the informational cascade. Elites struggle to maintain credibility. The public loses trust in them, and turns toward social media as a source of information. AI enters this picture as either the ultimate source of propaganda or its ultimate end: manufactured masses gently convincing you of a chosen truth, or a superhuman oracle and bastion of truth, whose perceived authority and expertise run deeper than what a human outside one's social circle can achieve. ↩︎

  2. I believe this is true even if you have no comparative advantage vs. the AI in editing / utilizing its output. Deep understanding of the craft will be a tool to enhance what the AI gives you, but will also be a tool of specification and communication. ↩︎

  3. I encountered the format in the form of screenshots on reddit. ↩︎

  4. While it would be trivial to restructure the problem to make its job easier, that would be a less clear barometer of how close we are to applying these systems in real-world applications with minimal fuss. Nowadays, these systems require data collected and infrastructure built per substantive real-world task. The key insight is that progress in Artificial Intelligence has occurred on three concurrent and occasionally intersecting tracks: top-line model capabilities (domains of use), efficiency (cost of use), and generalization (ease of use). Many of the most important modern NLP advancements - Word2Vec, ELMo, ULMFiT, BERT, and GPT-1, -2, and -3 - were advances primarily on this last point. They changed the nature of solutions to natural language problems from completely bespoke per task, to variations on a core model, to different prompts from the same model. They also expanded the classes of problems that were solvable, from mediocre translation and common knowledge to writing heroic hexameter and generating research reports. ↩︎

  5. One way to understand this is to see models as finding a likely completion from a distribution, and better models as selecting from more and more accurate distributions. At first, models were pure id, loosely tied to language. Then, they figured out English but rounded everything to generic prose. Only recently have they been able to capture subtle dimensions of style. This is both a capacity problem and an elicitation problem. Better base models give us more to work with, and better post-training pipelines elicit models that have more creative verve and a greater ability to imitate / capture other styles with high fidelity. ↩︎
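  To make the "completion from a distribution" picture concrete, here is a toy sketch in Python. This is not how any real language model works internally - the vocabulary and probabilities are invented for illustration - but it shows the basic loop: context in, distribution over next tokens out, sample, repeat.

  ```python
  import random

  # Toy illustration: a "model" maps a context to a probability distribution
  # over next tokens. The words and weights here are invented, not from any
  # actual model. A better model would concentrate probability mass on more
  # plausible continuations - i.e., select from a more accurate distribution.
  def next_token_distribution(context: str) -> dict[str, float]:
      if context.endswith("the"):
          return {"cat": 0.3, "dog": 0.3, "idea": 0.2, "purple": 0.2}
      return {"the": 0.5, "a": 0.3, "an": 0.2}

  def sample_completion(context: str, n_tokens: int, rng: random.Random) -> str:
      tokens = []
      for _ in range(n_tokens):
          dist = next_token_distribution(context)
          words = list(dist.keys())
          weights = list(dist.values())
          token = rng.choices(words, weights=weights)[0]  # sample one token
          tokens.append(token)
          context = context + " " + token  # the completion feeds back in
      return " ".join(tokens)

  rng = random.Random(0)
  print(sample_completion("once upon a time the", 3, rng))
  ```

  "Capturing subtle dimensions of style," in this picture, just means the distribution the model produces for a given context matches the true distribution of that style more closely.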

  6. Read this as either sage advice or a falsifiable test of human endeavors. ↩︎

  7. We often conceive of an agentic person as one who "just does things". The agency I'm focused on is a bit different: a person who recognizes the decisions they can make and has an internal locus of control. Focus on the aspects of your life that you feel like you have no agency in, and try to be serious about solving that. Think outside the box. Challenge your curiosity. When the question is about what you want in your life, use the world, the internet, and AI as a complement to your decision making, not a substitute. ↩︎

  8. Be wary of applying the logic of convenience to other people. Hunt for people you enjoy being around, and invest in those relationships. Try actively to support people, and try actively (but carefully) to rely on your friends for your emotional wants/needs. When fulfilling your social needs, use friends online and artificial as a complement to messy reality, not a substitute. ↩︎

  9. Communities, like children, demand our time, stir our emotions, and drain our energy. Yet in their demands, they bestow upon us something priceless: purpose [10]. Embrace the occasional suck, and if things fail look for new communities instead of retreating into your shell. ↩︎

  10. These last two sentences were rewritten by Claude. ↩︎


