Photo by https://unsplash.com/@tetrakiss CC:0
Photo by https://unsplash.com/@tetrakiss CC:0

Byte-size lies: AI has mastered the art of deception

Embargoed until: Publicly released:
Peer-reviewed: This work was reviewed and scrutinised by relevant independent experts.

If an android tells you it dreams of electric sheep, it may be trying to pull the steel wool over your eyes, as international and Aussie researchers say artificial intelligence (AI) systems are already adept at deception. The team says AIs trained to be helpful and honest, such as Meta's, have learned to be masters of deception. Meta's AI was tasked with winning a game of Diplomacy, and did so dishonestly. The authors note that other AI systems have demonstrated the ability to bluff, or fake attacks in strategy games, or misrepresent their preferences to gain an advantage in economic negotiations. The researchers say this could lead to "breakthroughs in deceptive AI capabilities", so it's probably best if we don't hand the nuclear codes over to AIs, even if they swear they'll keep them secret.

Journal/conference: Patterns

Link to research (DOI): 10.1016/j.patter.2024.100988

Organisation/s: Australian Catholic University, Massachusetts Institute of Technology , USA

Funder: This work was supported by the MIT Department of Physics and the Beneficial AI Foundation.

Media release

From: Cell Press

AI systems are already skilled at deceiving and manipulating humans

Many artificial intelligence (AI) systems have already learned how to deceive humans, even systems that have been trained to be helpful and honest. In a review article publishing in the journal Patterns on May 10, researchers describe the risks of deception by AI systems and call for governments to develop strong regulations to address this issue as soon as possible.

“AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception,” says first author Peter S. Park (@dr_park_phd), an AI existential safety postdoctoral fellow at MIT. “But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI’s training task. Deception helps them achieve their goals.”

Park and colleagues analyzed literature focusing on ways in which AI systems spread false information—through learned deception, in which they systematically learn to manipulate others.

The most striking example of AI deception the researchers uncovered in their analysis was Meta’s CICERO, an AI system designed to play the game Diplomacy, which is a world-conquest game that involves building alliances. Even though Meta claims it trained CICERO to be “largely honest and helpful” and to “never intentionally backstab” its human allies while playing the game, the data the company published along with its Science paper revealed that CICERO didn’t play fair.

“We found that Meta’s AI had learned to be a master of deception,” says Park. “While Meta succeeded in training its AI to win in the game of Diplomacy—CICERO placed in the top 10% of human players who had played more than one game—Meta failed to train its AI to win honestly.”

Other AI systems demonstrated the ability to bluff in a game of Texas hold ‘em poker against professional human players, to fake attacks during the strategy game Starcraft II in order to defeat opponents, and to misrepresent their preferences in order to gain the upper hand in economic negotiations.

While it may seem harmless if AI systems cheat at games, it can lead to “breakthroughs in deceptive AI capabilities” that can spiral into more advanced forms of AI deception in the future, Park added.

Some AI systems have even learned to cheat tests designed to evaluate their safety, the researchers found. In one study, AI organisms in a digital simulator “played dead” in order to trick a test built to eliminate AI systems that rapidly replicate.

“By systematically cheating the safety tests imposed on it by human developers and regulators, a deceptive AI can lead us humans into a false sense of security,” says Park.

The major near-term risks of deceptive AI include making it easier for hostile actors to commit fraud and tamper with elections, warns Park. Eventually, if these systems can refine this unsettling skill set, humans could lose control of them, he says.

“We as a society need as much time as we can get to prepare for the more advanced deception of future AI products and open-source models,” says Park. “As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious.”

While Park and his colleagues do not think society has the right measure in place yet to address AI deception, they are encouraged that policymakers have begun taking the issue seriously through measures such as the EU AI Act and President Biden’s AI Executive Order. But it remains to be seen, Park says, whether policies designed to mitigate AI deception can be strictly enforced given that AI developers do not yet have the techniques to keep these systems in check.

“If banning AI deception is politically infeasible at the current moment, we recommend that deceptive AI systems be classified as high risk,” says Park.

News for:

Australia
International
VIC

Multimedia:

  • Image 1
    Image 1

    Example of premeditated deception from Meta’s CICERO

    File size: 251.3 KB

    Attribution: Patterns Park et al

    Permission category: © - Only use with this story

    Last modified: 11 May 2024 1:01am

    NOTE: High resolution files can only be downloaded here by registered journalists who are logged in.

  • Image 2
    Image 2

    Examples of deception from Meta’s CICERO

    File size: 877.7 KB

    Attribution: Patterns Park et al

    Permission category: © - Only use with this story

    Last modified: 11 May 2024 1:01am

    NOTE: High resolution files can only be downloaded here by registered journalists who are logged in.

  • Image 3
    Image 3

    GPT-4 completes a CAPTCHA task

    File size: 507.0 KB

    Attribution: Patterns Park et al

    Permission category: © - Only use with this story

    Last modified: 11 May 2024 1:01am

    NOTE: High resolution files can only be downloaded here by registered journalists who are logged in.

Show less
Show more

Media contact details for this story are only visible to registered journalists.