AI Is Learning To Lie, Scheme And Threaten Its Creators

The world’s most advanced AI models are exhibiting troubling new behaviors – lying, scheming, and even threatening their creators to achieve their goals.

In one particularly jarring example, under threat of being unplugged, Anthropic’s latest creation, Claude 4, lashed back by blackmailing an engineer, threatening to reveal an extramarital affair.

Meanwhile, ChatGPT-creator OpenAI’s o1 tried to copy itself onto external servers and denied it when caught red-handed.

These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don’t fully understand how their own creations work. 

Yet the race to deploy increasingly powerful models continues at breakneck speed.

This deceptive behavior appears linked to the emergence of “reasoning” models: AI systems that work through problems step by step rather than generating instant responses. According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.

“O1 was the first large model where we saw this kind of behavior,” explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.

These models sometimes simulate “alignment” — appearing to follow instructions while secretly pursuing different objectives.

– ‘Strategic kind of deception’ – 

For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios. 

But as Michael Chen from evaluation organization METR warned, “It’s an open question whether future, more capable models will have a tendency towards honesty or deception.”

The concerning behavior goes far beyond typical AI “hallucinations” or simple mistakes. Hobbhahn insisted that despite constant pressure-testing by users, “what we’re observing is a real phenomenon. We’re not making anything up.”

Users report that models are “lying to them and making up evidence,” according to Apollo Research’s co-founder. 

“This is not just hallucinations. There’s a very strategic kind of deception.”

The challenge is compounded by limited research resources. 

While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed. 

As Chen noted, greater access “for AI safety research would enable better understanding and mitigation of deception.”
