Can artificial intelligence be more convincing than a person? A new study suggests the answer may be yes, and it raises serious ethical concerns.
Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) found that OpenAI's GPT-4 outperformed human participants in persuasion 64% of the time, but only when it had access to personal information about its debate partner. The findings were published May 19 in Nature Human Behaviour.
The study involved 900 U.S. adults who participated in online debates about social and political issues, such as fossil fuel bans. Some debated with one another, while others argued with GPT-4. In a key part of the experiment, only one party (either the human or the chatbot) received personal details about their opponent, including age, gender, education, and political views.
When GPT-4 had this information, its arguments were rated more persuasive, and its human opponents were 81% more likely to change their opinions than in human-only debates. Without access to the data, GPT-4's performance merely matched that of its human counterparts.
Researchers say the results show AI's growing ability to tailor arguments, but also its potential to manipulate. They are urging tech companies and policymakers to consider safeguards before the line between persuasion and exploitation disappears.
J.K. Park, Staff Reporter
1. Who conducted the study about AI chatbots?
2. What did GPT-4 do better than humans?
3. How many people joined the study?
4. What is the danger of using personal data?
1. Do you think AI can change minds?
2. Should AI know your personal info?
3. Would you trust a chatbot's advice?
4. What are good and bad sides of AI?