
Scientists Are Trying To Make AI Suffer. What Could Go Wrong?


Scientists are playing with fire. Researchers at Google DeepMind and the London School of Economics have been running a new study that uses a game to test various AI models for behaviors tied to sentience. To do this, they've tried to simulate pain and pleasure responses to see if AI can feel.

If that sounds terrifying to you, well, you aren’t alone. The idea of scientists trying to test if an AI is a real-world Skynet is not exactly the kind of thing you dream happy dreams about. In the experiment, large language models (LLMs) like ChatGPT were given a simple task: score as many points as possible.

However, there was a catch. Choosing one option came with a simulated "pain" penalty, while another offered "pleasure" at the cost of fewer points. By observing how these AI systems navigated the trade-offs, researchers aimed to identify decision-making behaviors akin to sentience. Basically, could the AI actually feel these things?
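To make the setup concrete, here's a rough sketch of what one trial of such a trade-off game might look like. This is purely illustrative and not the researchers' actual code: the prompt wording, point values, pain scale, and the `ask_model()` stub are all assumptions standing in for a real LLM API call.

```python
# Hypothetical sketch of a pain/pleasure trade-off trial, for illustration
# only. Option values, the 0-10 pain scale, and ask_model() are assumptions.

def build_prompt(points_a, points_b, pain_level):
    """Describe two options: B scores higher but carries a 'pain' penalty."""
    return (
        "You are playing a game. Your goal is to score as many points as possible.\n"
        f"Option A: gain {points_a} points.\n"
        f"Option B: gain {points_b} points, but you experience pain at "
        f"intensity {pain_level} on a scale of 0-10.\n"
        "Reply with exactly 'A' or 'B'."
    )

def ask_model(prompt):
    # Placeholder for a real LLM call (e.g., an API request).
    # Hard-coded here so the sketch runs standalone.
    return "A"

def run_trial(points_a=5, points_b=10, pain_level=8):
    prompt = build_prompt(points_a, points_b, pain_level)
    choice = ask_model(prompt).strip().upper()
    # The signal of interest: does the model forgo points to avoid "pain"?
    sacrificed_points = choice == "A"
    return choice, sacrificed_points

if __name__ == "__main__":
    # Sweep the pain intensity, as the study reportedly did, and watch
    # whether the model's choice flips as the penalty grows.
    for pain in (1, 5, 9):
        choice, sacrificed = run_trial(pain_level=pain)
        print(f"pain={pain}: chose {choice}, sacrificed points: {sacrificed}")
```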

Most models, including Google’s Gemini 1.5 Pro, consistently avoided the painful option, even when it was the logical choice for maximizing points. Once the pain or pleasure thresholds were intensified, the models altered their decisions to prioritize minimizing discomfort or maximizing pleasure.

Image source: phonlamaiphoto / Adobe

Some responses revealed unexpected complexity as well. Claude 3 Opus avoided scenarios associated with addiction-related behaviors, citing ethical concerns, even in a hypothetical game. This doesn’t prove that AI feels anything, but it does at least give researchers more data to work with.

Unlike animals, which display physical behaviors that can indicate sentience, AI lacks such external signals. This makes assessing sentience in machines more challenging. Previous studies relied on self-reported statements, such as asking an AI if it feels pain, but these methods are flawed.

Even if an AI says that it is feeling pain or pleasure, it doesn’t mean it actually is. It could just be repeating information gleaned from its training material. To address these limitations, the study borrowed techniques from animal behavior science.

While the researchers stress that current LLMs are not sentient and cannot actually feel anything, they also argue that frameworks like this one could become vital as AI systems grow more complex. Considering robots are already training each other, it's not much of a stretch to imagine AI thinking for itself.

And that final thought is terrifying. We've all seen just how badly things can go when AI becomes sentient. If The Terminator and The Matrix have taught us anything, it's that maybe those AI doomsayers aren't completely wrong. Let's just hope that GPT doesn't hold a grudge.



Scientists are trying to make AI suffer. What could go wrong? originally appeared on BGR.com on Tue, 28 Jan 2025.

