SC Connecticut News

Wednesday, October 16, 2024

Study reveals difficulty distinguishing between AI-generated and human-written restaurant reviews

Peter Salovey, President | Yale University

Balázs Kovács, a professor of organizational behavior at Yale School of Management, has found that artificial intelligence can now generate restaurant reviews that are indistinguishable from those written by humans. In his recent study, he discovered that human testers could not reliably differentiate between human-written reviews and those produced by GPT-4, the large language model behind the latest version of OpenAI’s ChatGPT.

“I don’t look at the numbers,” Kovács says about reading Yelp reviews. “I read to connect with the experience. It’s more personable if someone writes about their experience.” His research interest in large language models led him to explore how easily AI could create fake Yelp reviews.

Kovács conducted a series of experiments where participants were asked to identify whether a review was written by a human or an AI. The results showed that participants were more confident in the authenticity of AI-written reviews than those penned by humans. This phenomenon aligns with what is known as “AI hyperrealism.”

In his study, titled “The Turing test of online reviews: Can we tell the difference between human-written and GPT-4-written online reviews?”, Kovács collected Yelp reviews from 2019 and used them to prompt GPT-4 to produce similar content. He then mixed these AI-generated reviews with real ones and had participants attempt to classify them correctly.

Out of 151 participants, only six managed to accurately classify enough reviews to earn a bonus payment offered as an incentive. When asked to rate the likelihood of a review being human or AI on a five-point scale, participants identified human-written content correctly about half the time—akin to flipping a coin. However, they only identified AI-generated content correctly one-third of the time and displayed greater confidence when they incorrectly classified an AI review as human-authored.

Kovács acknowledges that while some applications of superior AI performance can be beneficial—such as self-driving cars—there are potential dangers in other areas. An influx of AI-generated content could undermine trust in crowdsourced platforms like Yelp and lead people back to traditional sources for recommendations.

The broader implications concern Kovács. He cites examples such as a fake robocall during the New Hampshire primary attributed falsely to President Biden and an incident involving fabricated audio purportedly from a high school principal in Maryland making racist remarks. These incidents highlight how advanced AI capabilities can be misused for malicious purposes.

“People have to know it’s very scary,” Kovács warns. “If people can’t tell anymore what’s written by an AI or a human, there will be more fake news than before.” With elections approaching, he suggests that this issue could significantly impact public trust.
