Deceitful tactics by artificial intelligence exposed: ‘Meta’s AI a master of deception’

By Study Finds

Artificial intelligence systems are becoming increasingly sophisticated, with engineers and developers working to make them as “human” as possible. Unfortunately, that can also mean lying just like a person. AI platforms are reportedly learning to deceive us in ways that can have far-reaching consequences. A new study by researchers from the Center for AI Safety in San Francisco delves into the world of AI deception, exposing the risks and offering potential solutions to this growing problem.

At its core, deception is the inducement of false beliefs in others in pursuit of some outcome other than the truth. When humans engage in deception, we can usually explain it in terms of their beliefs and desires – they want the listener to believe something false because it benefits them in some way. But can we say the same about AI systems?

The study, published in the open-access journal Patterns, argues that the philosophical debate about whether AIs truly have beliefs and desires is less important than the observable fact that they are increasingly exhibiting deceptive behaviors that would be concerning if displayed by a human.

The study surveys a wide range of examples where AI systems have successfully learned to deceive. In the realm of gaming, the AI system CICERO, developed by Meta to play the strategy game Diplomacy, turned out to be an expert liar despite its creators’ efforts to make it honest and helpful. CICERO engaged in premeditated deception, making alliances with human players only to betray them later in its pursuit of victory.
