
A US court dismissed a lawsuit by Rohingya refugees against Meta, citing Section 230 protections for online platforms. The decision reveals significant legal barriers to holding technology companies accountable for algorithmic amplification of hate speech linked to genocide, with major implications for Indo-Pacific digital governance.
A US federal court has dismissed a landmark lawsuit brought by Rohingya refugees against Meta Platforms (formerly Facebook), marking a major setback for efforts to hold technology companies legally responsible for their role in facilitating ethnic violence. The dismissal highlights the substantial legal barriers facing victims of online-facilitated atrocities seeking redress through American courts, and raises critical questions about corporate accountability in the digital age.
The plaintiffs—Rohingya survivors and advocacy groups—had argued that Facebook’s algorithms, content moderation failures, and platform design actively facilitated the spread of dehumanising hate speech that “amounted to a substantial cause, and eventual perpetuation of, the Rohingya genocide” in Myanmar between 2012 and 2017. The case represented one of the most direct attempts to establish corporate liability for algorithmic amplification of ethnic violence.
The genocide against Myanmar’s Rohingya Muslim minority resulted in the deaths of an estimated 25,000 people and the displacement of over 700,000 refugees to Bangladesh between 2012 and 2017. During this period, Facebook became the primary information source for millions of the country’s roughly 9 million internet users, most of whom accessed the platform through mobile devices and low-cost data plans.
Investigative reporting and independent analyses documented how Facebook’s platform became a vector for coordinated hate speech campaigns. Military officials, Buddhist nationalist groups, and state-aligned actors weaponised the platform to dehumanise Rohingya populations, using terms that translated to “Bengali invaders” and “Muslim terrorists.” The platform’s algorithmic recommendation systems amplified this content, while Meta’s limited Burmese-language content moderation capacity—the company employed only two Burmese speakers to moderate content for a nation of 54 million people—meant hate speech circulated largely unchecked.
Meta itself acknowledged in a 2018 internal review that it had “not done enough to help prevent the harm.” The company subsequently increased its Burmese-language moderation capacity and implemented additional safeguards, but these measures came only after the genocide’s most acute phase had concluded.
The court’s dismissal rested primarily on Section 230 of the US Communications Decency Act, a 1996 provision that grants broad immunity to online platforms for user-generated content. Section 230 stipulates that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
US courts have interpreted Section 230 expansively, generally protecting platforms even when their algorithms amplify harmful content or their moderation practices are demonstrably inadequate. The Rohingya plaintiffs argued that Meta’s algorithmic curation and design choices were the company’s own conduct rather than the mere hosting of third-party content, and therefore fell outside the statute’s shield, but the court rejected this distinction. The ruling reflects the prevailing judicial consensus in American law: algorithmic recommendation, standing alone, does not strip a platform of immunity for the third-party content it distributes.
The dismissal underscores a fundamental asymmetry in tech regulation: American courts have constructed a legal framework that shields platforms from responsibility for content amplification, while simultaneously acknowledging that these platforms possess unprecedented power to shape information ecosystems and influence real-world violence.
With US courts effectively closed to Rohingya victims, other jurisdictions and mechanisms remain potentially viable. The International Court of Justice has accepted a genocide case against Myanmar, though that proceeding addresses state conduct rather than corporate responsibility. The International Criminal Court’s Office of the Prosecutor has opened a preliminary examination into crimes against humanity in Myanmar, but it similarly focuses on state and military actors.
The Gambia, acting with the backing of the Organisation of Islamic Cooperation, filed the ICJ case in 2019, alleging that Myanmar’s campaign against the Rohingya violated the Genocide Convention. The case may eventually produce findings on the systematic nature of the violence, which could indirectly inform arguments about corporate facilitation, but it does not directly address Meta’s conduct.
Separately, the UN Fact-Finding Mission on Myanmar (established in 2017) documented Facebook’s role extensively and recommended that the company be investigated for potential complicity in crimes against humanity. However, UN fact-finding mechanisms lack enforcement power. The responsibility now falls to individual states to pursue corporate accountability through their own legal systems, or to establish new international frameworks specifically addressing tech platform liability for atrocity facilitation.
The dismissal reflects a widening gap between technological capability and legal accountability. Meta possesses granular data on content distribution, user engagement, and algorithmic amplification; that data could definitively establish whether the company’s systems disproportionately promoted hate speech. Yet US law imposes no liability on the company for how it deploys those systems, even in cases involving genocide.
This legal vacuum has prompted renewed calls for Section 230 reform, though proposed amendments remain contentious. Some policymakers advocate for carving out exceptions for content linked to violence or atrocities; others propose holding platforms accountable for algorithmic amplification specifically. The EU has adopted a more interventionist approach through the Digital Services Act, which imposes content moderation and algorithmic transparency obligations on large platforms, though enforcement mechanisms remain nascent.
For the Indo-Pacific region, the Myanmar case establishes a cautionary precedent. As digital penetration accelerates across Southeast Asia, South Asia, and the Pacific, the risk of platform-facilitated communal violence increases. Governments in the region cannot rely on US legal mechanisms to constrain corporate conduct; they must develop regional standards and enforcement frameworks independently. ASEAN member states, in particular, should consider harmonised approaches to digital platform accountability within the ASEAN Regional Forum or through bilateral agreements with major tech companies.
Meta’s legal victory does not resolve the underlying policy question: should technology companies bear responsibility for algorithmic amplification of content linked to mass violence? The court’s decision reflects existing US law, but does not settle the normative or strategic question of whether that law remains appropriate.
For policymakers in Australia, New Zealand, and the Pacific, the Myanmar case underscores the necessity of proactive digital governance. Rather than awaiting American legal evolution, Indo-Pacific governments should establish clear expectations for platform conduct in their jurisdictions, including mandatory content moderation standards, algorithmic transparency requirements, and corporate liability frameworks specific to atrocity-related content. The Rohingya genocide represents a cautionary case study demonstrating that market-based self-regulation and reactive legal frameworks are insufficient to prevent tech-facilitated mass violence.
As Meta and other platforms expand operations throughout the Indo-Pacific, establishing clear accountability mechanisms now—before crises emerge—represents a strategic imperative for regional stability and human security.