Mekelle, Tigray – May 28, 2025
A new investigative study by the Distributed AI Research Institute (DAIR) has revealed a disturbing failure by global social media platforms to adequately moderate genocidal and inciting content during the brutal 2020–2022 Tigray war. The findings confirm long-standing claims by Tigrayan survivors and digital rights activists that online hate speech, atrocity denial, and calls for ethnic violence went largely unchecked while atrocities unfolded on the ground.
The DAIR report, led by computer scientist Dr. Timnit Gebru and a team of journalists, researchers, and former content moderators, condemns tech companies for prioritizing superficial language capabilities and ignoring the urgent need for cultural and dialectal expertise in content moderation during the war.
“In a country of over 128 million people speaking about 100 languages, Facebook supported content moderation in only two,” the researchers write, referencing leaked internal documents from whistleblower Frances Haugen.
The study examined more than 300 posts drawn from a 5.5-million-post dataset collected from Twitter (now X). Annotators fluent in Tigrinya, Amharic, Arabic, and English found that dialect and slang were key to identifying hate speech, yet even among trained experts, agreement was initially low.
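The report does not specify how annotator agreement was measured; a common choice for labeling tasks of this kind is a chance-corrected statistic such as Cohen's kappa. The sketch below, using entirely hypothetical labels and category names, illustrates the calculation for two annotators.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators (Cohen's kappa)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of posts both annotators labeled identically.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical labels for ten posts; the categories here are illustrative only.
annotator_1 = ["hate", "hate", "neutral", "offensive", "neutral",
               "hate", "neutral", "offensive", "hate", "neutral"]
annotator_2 = ["hate", "offensive", "neutral", "offensive", "hate",
               "hate", "neutral", "neutral", "offensive", "neutral"]

print(f"Cohen's kappa: {cohen_kappa(annotator_1, annotator_2):.2f}")
```

Values near 0 indicate agreement no better than chance, while values approaching 1 indicate near-perfect agreement.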
One alarming example cited a tweet using the hashtag #FakeAxumMassacre, categorically denying an atrocity that has been independently verified by Amnesty International, Human Rights Watch, and the Associated Press. Another post read, “Clean the cockroaches”—a genocidal euphemism used widely by Ethiopian and allied forces to refer to Tigrayans. Not all annotators recognized it as such, underlining the necessity of both cultural and situational familiarity in interpreting online hate.
The word “ወያነ” (“Woyane”), which in Eritrean discourse denotes the TPLF but is used broadly by opponents to label all Tigrayans, was another contested term. Annotators’ divergent interpretations led to the conclusion that understanding local, wartime weaponization of language is essential to preventing harm.
Interviews with 15 content moderators operating in African markets revealed the inner dysfunction of tech companies’ moderation systems. Workers faced impossible workloads, mental health strain, inflexible rules, and a lack of agency.
“We turn [moderators] into robots,” one quality analyst told researchers. Another described being hired without any real understanding of the psychological toll or responsibilities of the job.
Moderators were punished for error rates but not empowered to raise concerns over ambiguous content. In contrast, DAIR’s research setting allowed experts time and space for deliberation—producing more accurate and consistent judgments.
The study issues strong recommendations:
- Platforms must hire moderators with deep cultural and dialectal expertise.
- Punitive accuracy policies must be overhauled to encourage ethical disagreement and reflection.
- Annual transparency reports should be required, especially in conflict zones.
- Mental health resources must go beyond surface-level offerings and include peer networks and trauma-informed care.
- Moderators should be active participants in designing policies—not mere enforcers.
DAIR also reiterates its earlier warning about a resurgence of online incitement, this time threatening a new conflict between Ethiopia and Eritrea. The Institute has urged the African Union and United Nations to intervene to protect the fragile Pretoria peace agreement.