AI Can Persuade People to Make Wrong Decisions

Posted by Peter Rudin on 5. March 2021 in News

In July 2020, OpenAI unveiled GPT-3, a language model that was easily the largest known at the time. Put simply, GPT-3 is trained to predict the next word in a sentence, much like how a text message autocomplete feature works.
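To make the autocomplete analogy concrete, here is a minimal sketch of next-word prediction. This is not how GPT-3 works internally (GPT-3 is a large transformer neural network); it is a toy bigram model that simply predicts the most frequent follower of a word seen in training text, illustrating the same objective of predicting the next word.

```python
from collections import Counter, defaultdict

# Toy illustration only (not GPT-3's architecture): a bigram model
# that predicts the next word as the most frequent follower observed
# in the training text.
def train_bigram(text):
    words = text.lower().split()
    followers = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        followers[prev][nxt] += 1
    return followers

def predict_next(followers, word):
    counts = followers.get(word.lower())
    if not counts:
        return None  # word never seen as a predecessor
    return counts.most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

GPT-3 replaces these simple frequency counts with 175 billion learned parameters, but the task it is trained on is the same in spirit: given the words so far, predict what comes next.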

Academics are increasingly concerned that AI could be co-opted by malicious actors to foment discord by spreading misinformation, disinformation, and outright lies. DALL-E, a version of GPT-3 that generates images from text descriptions, could potentially be used to distort reality visually.

In a large-scale study leveraging OpenAI’s language model, researchers found that AI advice can “corrupt” people even when they are aware that the advice comes from an AI.
