
SC Connecticut News

Monday, December 23, 2024

Shadow Banning: The Subtle Tool Shaping Opinions on Social Media


Peter Salovey, President | Yale University

The recent legislation compelling TikTok to either sell or close its U.S. operations has brought into focus the influence of social media platforms on public opinion. This development, triggered by concerns over potential data sharing with the Chinese government, is a stark reminder of the power these platforms wield. Tauhid Zaman of the Yale School of Management (SOM) argues that these platforms pose an even more subtle and potent threat: the ability to manipulate public sentiment through "shadow banning."

Zaman explains that social media platforms have the power to select and promote content, thereby influencing users' opinions. While most users are aware of content removal as a method for steering public opinion, Zaman points to a less obvious but more effective technique: shadow banning.

Shadow banning is a strategy in which the visibility of a user's content is limited without their knowledge. The content remains on the user's profile page but appears less frequently, or not at all, in other users' timelines. This tactic makes it nearly impossible for policymakers or outside software engineers to detect any bias.
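In software terms, a shadow ban can be as simple as a filter applied while assembling other users' feeds; the posting interface is left untouched, so the affected user notices nothing. The following is a minimal sketch of the mechanism, using entirely hypothetical names (shadow_banned, build_timeline) rather than any platform's actual code:

```python
# Minimal sketch of a feed filter implementing a shadow ban.
# All names are hypothetical; real ranking pipelines are far more
# complex and not public.

shadow_banned = {"user_42"}  # accounts whose reach is silently limited

posts = [
    {"author": "user_7",  "text": "Morning, everyone"},
    {"author": "user_42", "text": "My take on the election"},
    {"author": "user_9",  "text": "Lunch photos"},
]

def build_timeline(viewer, candidate_posts):
    """Assemble a user's feed, silently dropping shadow-banned authors."""
    return [
        p for p in candidate_posts
        if p["author"] == viewer or p["author"] not in shadow_banned
    ]

# The banned author still sees their own post on their profile...
print(build_timeline("user_42", posts))  # user_42's post is included
# ...but it never reaches anyone else's timeline, and no notice is sent.
print(build_timeline("user_7", posts))   # user_42's post is gone
```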

In collaboration with Yale SOM PhD student Yen-Shao Chen, Zaman has authored a paper exploring this phenomenon. The study uses a simulation of a real social network to demonstrate how shadow banning can shift users' opinions and alter polarization levels.

Zaman states that this form of content moderation can appear neutral from the outside because it can simultaneously reduce the reach of accounts on both sides of a debate. He likens the scenario to a frog sitting in slowly heating water: by the time the manipulation becomes apparent, it may be too late.

Zaman emphasizes that understanding how shadow banning operates is crucial for regulators and social media platforms alike. It could help identify malicious actors attempting to shape network opinions and improve content-recommendation algorithms to avoid inadvertent polarization.

For their study, the researchers developed an opinion dynamics model based on established findings from persuasion research. They then simulated large-scale social network conversations centered on specific hashtags, using real tweets about the 2016 U.S. presidential election and France's Yellow Vest protests.
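The article does not reproduce the model itself, but a standard DeGroot-style averaging update conveys the basic dynamic such simulations rely on: each user's opinion drifts toward the opinions they are actually shown, so muting a connection changes who influences whom. A self-contained sketch, with made-up parameters and not Chen and Zaman's specific model:

```python
import random

# Generic DeGroot-style opinion dynamics on a random follower graph.
# This is a textbook illustration, not the model from Chen and Zaman's
# paper; the graph, weights, and muting rule are all invented here.

random.seed(0)
n_users = 50
opinions = [random.uniform(-1.0, 1.0) for _ in range(n_users)]  # -1..+1 spectrum

# edges[i] lists the accounts whose posts user i would normally see.
edges = {i: random.sample(range(n_users), 5) for i in range(n_users)}

# Shadow banning modeled as muted (viewer, author) pairs: the author still
# posts, but the viewer's feed omits them. Here one side is muted.
muted = {(i, j) for i in range(n_users) for j in edges[i] if opinions[j] < -0.5}

def step(opinions, edges, muted, weight=0.1):
    """Move each user's opinion toward the average opinion they are shown."""
    updated = []
    for i in range(len(opinions)):
        visible = [opinions[j] for j in edges[i] if (i, j) not in muted]
        if visible:
            avg = sum(visible) / len(visible)
            updated.append(opinions[i] + weight * (avg - opinions[i]))
        else:
            updated.append(opinions[i])
    return updated

for _ in range(100):
    opinions = step(opinions, edges, muted)

# Muting one end of the spectrum drags the whole network's mean the other way.
print(f"mean opinion after 100 rounds: {sum(opinions) / len(opinions):+.3f}")
```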

The simulations revealed that it was possible to shift user opinions and influence polarization levels by strategically muting certain connections. However, Zaman and Chen also discovered a method to detect shadow banning by assigning scores to the connections between users.
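The scoring method itself is not described in the article, but the intuition can be illustrated: for each follower-author connection, compare how many of the author's posts the follower was actually shown against how many the author made, and flag connections with a large shortfall. The ratio test below is only a simplified stand-in, not Chen and Zaman's actual scoring:

```python
# Illustrative edge scoring for detecting shadow bans: an edge where a
# follower sees far fewer posts than the author published is suspicious.
# The data and the scoring rule here are both hypothetical.

posts_by_author = {"alice": 200, "bob": 180, "carol": 210}

# Observed impressions per (viewer, author) connection.
impressions = {
    ("dan", "alice"): 150,
    ("dan", "bob"):   20,   # suspiciously few for 180 posts
    ("dan", "carol"): 160,
}

def edge_scores(posts_by_author, impressions):
    """Score each edge by the shortfall between posts made and posts shown."""
    scores = {}
    for (viewer, author), shown in impressions.items():
        expected = posts_by_author[author]
        scores[(viewer, author)] = 1.0 - shown / expected  # 0 = fully visible
    return scores

for edge, score in edge_scores(posts_by_author, impressions).items():
    flag = "  <- possible shadow ban" if score > 0.5 else ""
    print(f"{edge}: score {score:.2f}{flag}")
```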

Zaman intends to share his findings with policymakers, highlighting the potential dangers of unchecked content algorithms on social media platforms. He suggests that regulation should focus on quantifying the behavior of these algorithms across all networks.
