Dec. 2, 2025

Posting in the Shadows: The Impact of Visibility Reduction in Social Media

Postdoctoral Scholar Dr. Afrouz Hojati discusses content moderation research recently published in Information Systems Research

From misinformation and harassment to cyberbullying, disturbing media, and hate speech, extreme content is prevalent across social media platforms. In response, platforms may reduce the visibility of extreme content without removing it and without notifying the user who posted it, a strategy known as shadowbanning.

Postdoctoral Scholar Dr. Afrouz Hojati and Distinguished Research Professor Dr. Barrie R. Nault examine how content moderation of this kind affects platform profit and user participation in their research paper “Content Moderation with Shadowbanning,” recently published in Information Systems Research.[1]

Hojati’s interest in the topic began with a desire to understand the impact of shadowbanning users without their knowledge, compared with strategies such as content removal, where the platform notifies the user.

“I was particularly intrigued by how user unawareness affects strategic interactions between platforms and users.” Hojati continues:

“With shadowbanning, platforms can potentially satisfy multiple stakeholders simultaneously: they can appear to protect users from harmful content while avoiding the backlash that often accompanies transparent moderation.” 

A major finding of the study is that, under certain conditions, shadowbanning can both increase platform profit through advertising revenue and improve user experience by reducing harmful content: a win-win scenario.

“This result emerges from shadowbanning’s unique ability to expand platform participation across the full spectrum of user types—something neither content removal nor no moderation can achieve,” states Hojati.

As the study shows, user engagement is shaped largely by the extent to which users believe that the moderation strategy is being implemented. As Hojati explains,

"We show that when shadowbanned users increase their beliefs about the likelihood of future shadowbanning, it can fundamentally alter the platform’s incentives. Moreover, as users' beliefs about shadowbanning increase, the strategic advantage of shadowbanning diminishes.” 

In response, the platform may apply less strict shadowbanning thresholds to retain usership.
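
This dynamic can be sketched with a toy model; the functional forms and parameters below are illustrative assumptions, not the paper’s formulation. Suppose users’ content extremeness is uniform on [0, 1], the platform shadowbans content above a threshold t, and a user above the threshold exits with probability equal to their belief p that they are being shadowbanned. The platform earns ad revenue from every active user and bears a cost for visible (un-banned) extreme content.

```python
import numpy as np

# Toy sketch, not the authors' model: extremeness x ~ Uniform(0, 1);
# content with x above threshold t is shadowbanned; a user above t
# exits with probability p (their belief that they are shadowbanned).
def profit(t, p, ad_rev=1.0, harm_cost=2.0):
    participation = t + (1 - t) * (1 - p)  # mass of users who stay active
    visible_harm = harm_cost * t**2 / 2    # un-banned extremeness stays visible
    return ad_rev * participation - visible_harm

ts = np.linspace(0, 1, 1001)
for p in (0.0, 0.25, 0.5, 0.75):
    t_star = ts[np.argmax(profit(ts, p))]
    print(f"belief p={p:.2f} -> optimal threshold t*={t_star:.3f}")
```

Under these assumptions the profit-maximizing threshold works out to t* = p × ad_rev / harm_cost, which rises with p: the stronger users’ beliefs about shadowbanning, the less strict the platform’s optimal policy, matching the dynamic described above.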

Postdoctoral Scholar Dr. Afrouz Hojati

Photo provided by Dr. Afrouz Hojati

Regulatory Scrutiny 

In February 2025, the US Federal Trade Commission (FTC) launched a public inquiry into content moderation practices like shadowbanning, citing public concerns that platforms such as X and Instagram censor users based on their political beliefs or sexual identity. While the FTC has yet to determine whether such practices are unlawful, Hojati believes that policymakers will need to find dynamic solutions.

“My primary hope is that policymakers recognize the complexity of content moderation and avoid one-size-fits-all approaches.” Hojati continues:

“Shadowbanning is inherently an opaque strategy, and requiring platforms to disclose its use would fundamentally undermine its effectiveness. It's like requiring magicians to explain their tricks while performing them.” 

However, Hojati also argues that the strategy should not be used without constraints, and that it should be avoided in cases involving political speech.

“I believe it should be reserved for specific types of harmful content where transparency isn't essential for due process, particularly automated content from bots and spam accounts.”

The authors recognize that platforms operate in a rapidly changing digital environment. As platforms continue to grow, generative AI plays an increasing role in digital content creation, fueling what Hojati calls an ‘arms race’ between AI-driven creation and moderation.

“The main challenges going forward will be addressing fairness and bias in AI systems while recognizing that platforms' incentives may not align with perfect detection accuracy. Equally important will be maintaining oversight and accountability as these systems become increasingly complex.” Hojati adds:

“We're likely to see increasingly sophisticated AI-generated harmful content (e.g., misinformation) that requires equally sophisticated detection and response mechanisms.”

Dr. Barrie R. Nault, BTMA Distinguished Professor and Hojati’s co-author on the Information Systems Research article, also serves as her supervisor at the Haskayne School of Business.

A Scalable Model

The study’s economic model measures user content extremeness on a scale rather than as a binary label, so that different levels of harm can be met with different levels of response.

As Hojati explains, the practicality of a non-binary approach “also allows us to examine how platforms optimally set moderation thresholds rather than making arbitrary categorical decisions.”
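
To make the non-binary idea concrete, here is a minimal sketch of a threshold policy over a continuous extremeness score. The threshold values and action tiers are hypothetical illustrations, not parameters from the paper.

```python
# Hypothetical thresholds for illustration; not values from the paper.
SHADOWBAN_THRESHOLD = 0.6  # above this, visibility is reduced silently
REMOVAL_THRESHOLD = 0.9    # above this, content is taken down with notice

def moderate(extremeness: float) -> str:
    """Map a continuous extremeness score in [0, 1] to a moderation action."""
    if extremeness >= REMOVAL_THRESHOLD:
        return "remove"
    if extremeness >= SHADOWBAN_THRESHOLD:
        return "shadowban"
    return "show"

for score in (0.2, 0.7, 0.95):
    print(f"extremeness {score:.2f} -> {moderate(score)}")
```

Choosing where these thresholds sit is the kind of decision the model optimizes over, rather than making an arbitrary yes/no call on each post.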

The authors have been further adapting their paper’s model to other scenarios where extreme content requires a more nuanced analysis.

“This allows for a more general and realistic model—and we have already obtained some interesting results,” states Hojati.

Yet the future development of this research could still face challenges in addressing the vast complexity of content moderation. As Hojati notes,

“Predicting the future is inherently difficult, and it will likely be far more complex and multifaceted than we can currently imagine. The one thing I do know is that content moderation has always been a double-edged sword—historically associated with censorship—and it likely always will be.

“As platforms continue to grow into extraordinarily powerful entities that shape public discourse, the need for effective governance becomes increasingly urgent. Addressing the challenges of content moderation will require a careful balance of technological, social, and cultural solutions.”

_________________________________________________________________________

[1] Hojati, A., & Nault, B. R. (July 2025). Content Moderation with Shadowbanning. Information Systems Research. doi: 10.1287/isre.2024.1140

Find more information about iRC research and activities here.