SC Connecticut News

Sunday, December 22, 2024

Research shows peer influence shapes professional evaluations

Peter Salovey, President | Yale University

Few people like to consider themselves followers, but when it comes to evaluating goods, services, and even colleagues, many are influenced by others. This phenomenon is significant because collective evaluation processes—whether for restaurant ratings on Yelp or performance reviews at work—can impact the success of a product, service, or career.

New research from Yale SOM’s Tristan Botelho reveals how the design of evaluation processes affects outcomes. The study shows that individuals who submit evaluations are influenced by previously seen ratings. “A lot of what we see in terms of evaluative outcomes is actually directly affected by the structure” of the evaluation process, Botelho says. “So these design choices actually have significant implications for the outcomes.”

Botelho highlights that people's livelihoods and careers may be at stake in these ratings, particularly for those working on gig platforms such as Upwork or Uber. “Why this is so fascinating to me is because evaluation processes dictate most of the resources in our economy and society,” he adds.

To understand how prior evaluations affect subsequent ones, Botelho used data from an online platform where investment professionals rate each other's recommendations. This setup allowed him to compare evaluations made before and after prior ratings were visible. The platform's participants all had professional expertise in evaluating investment recommendations.

Existing research suggests several hypotheses on why evaluations tend to converge: competitive threat, reputation and status management, and peer deference. However, Botelho's dataset eliminated some hypotheses since all participants were anonymous.

The platform allowed users to rate recommendations on a scale of 1 to 5 based on investment analysis quality (justification rating) and expected stock performance (return rating). The first four ratings were hidden; once the fourth was submitted, the average became public.
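The rating mechanism described above can be sketched in a few lines of code. This is an illustrative model only, not the study platform's actual implementation; all class and method names here are hypothetical.

```python
QUIET_PERIOD = 4  # average becomes public once this many ratings exist


class Recommendation:
    """Models a recommendation rated 1-5 on two dimensions, with a quiet period."""

    def __init__(self):
        self.ratings = []  # list of (justification, return) score pairs

    def add_rating(self, justification, expected_return):
        for score in (justification, expected_return):
            if not 1 <= score <= 5:
                raise ValueError("ratings must be on a 1-5 scale")
        self.ratings.append((justification, expected_return))

    def public_average(self):
        """Return (avg justification, avg return), or None during the quiet period."""
        if len(self.ratings) < QUIET_PERIOD:
            return None  # hidden: early raters cannot see prior scores
        n = len(self.ratings)
        return (sum(r[0] for r in self.ratings) / n,
                sum(r[1] for r in self.ratings) / n)
```

In this sketch, the first three raters see no prior scores; once a fourth rating arrives, `public_average` exposes the running average to everyone who rates afterward, which is the point at which the study observed convergence.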

Botelho found that seeing an existing rating made professionals less likely to rate the recommendation themselves. When prior ratings were hidden, 10% of viewers submitted their own rating, compared with just 4% when ratings were visible.

Private ratings tended to be distinct from one another. However, once a recommendation's score became public after four total ratings, subsequent evaluations converged significantly—54% to 63% closer to the prior average rating than when they were hidden. This convergence was immediate rather than gradual.

“This convergence occurred in a context that at first glance would seem relatively immune to peer pressure,” Botelho notes. Despite being anonymous professionals whose jobs involve evaluating similar recommendations daily, visibility into others' opinions affected whether they shared their own and increased their likelihood of aligning with crowd evaluations.

Initial private ratings were unrelated to the recommendations' subsequent stock performance, suggesting that higher early ratings did not reflect more accurate assessments.

Botelho warns that “the design choices by a lot of these platforms or organizations are creating a self-fulfilling prophecy.” He explains that initial high private ratings attract more attention from others—a valued outcome for professionals.

One mitigating factor against herd behavior was subject-area expertise. Evaluators with industry-specific knowledge resisted convergence, submitting ratings that diverged from the prior average.

Botelho’s findings suggest ways companies can adjust their evaluation processes for more meaningful results. For instance, extending "quiet periods" until more private submissions are collected or prioritizing input from subject-area experts could improve decision-making processes.

“A lot of companies are trying to open the decision-making process,” says Botelho. “However, these findings show that these processes could introduce issues if you’re not careful.”
