Chapter 1: The Anger Algorithm

How Facebook Made Outrage a Currency

Facebook earns its money from advertising, and advertising revenue grows with the time people spend on the platform. The algorithm therefore has one overriding purpose: to maximise dwell time.

The problem: anger, outrage and fear keep people in front of their screens longer than joy or neutral information. This is not speculation; it is documented both by internal Facebook research and by external science.

The emoji reactions: five times more for anger

In 2016, Facebook introduced emoji reactions: Love, Haha, Wow, Sad, Angry. Internally, each of these reactions was weighted five times as heavily as a simple "Like" in the ranking algorithm.

The logic behind this was understandable: someone who shows a reaction is more emotionally engaged than someone who merely likes. So the algorithm shows such posts to more people.

But Facebook's own data scientists quickly discovered that posts triggering Angry reactions contained disproportionately high levels of misinformation, spam or divisive content. One employee asked early on: "Does the 5× weighting cause the News Feed to show more controversial than pleasant content?" The internal research answered: yes.[1][2]

2016: Emoji reactions introduced; the Angry reaction counts 5× as much as a Like.
2017: Internal research shows that anger-driven posts contain disproportionately high levels of misinformation.
2018: The weighting of the Angry reaction is reduced to 4×; a mechanism to downrank anger-driven posts is introduced.
2020: All reactions are reduced to 1.5× a Like, four years after the problem was introduced.
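
To make the effect of such a weighting concrete, here is a minimal sketch in Python. Only the reaction weights (5, 4 and 1.5) come from the values reported above; the scoring function and the example numbers are invented for illustration and are not Meta's actual ranking code.

```python
# Toy ranking signal: a weighted sum of Likes and emoji reactions.
# Only the reaction weights mirror the reported 2016/2018/2020 values;
# the formula and the example posts are invented for illustration.

def engagement_score(likes: int, reactions: int, reaction_weight: float) -> float:
    return likes * 1.0 + reactions * reaction_weight

calm_post  = {"likes": 200, "reactions": 10}   # quiet approval, few emoji reactions
angry_post = {"likes": 50,  "reactions": 60}   # fewer Likes, many Angry reactions

for weight in (5.0, 4.0, 1.5):                 # the 2016, 2018 and 2020 weightings
    calm  = engagement_score(**calm_post,  reaction_weight=weight)
    angry = engagement_score(**angry_post, reaction_weight=weight)
    print(f"weight {weight}: calm={calm:.0f}, angry={angry:.0f}, "
          f"angry post ranks higher: {angry > calm}")
```

With the 5× weighting (and even the reduced 4×), the anger-heavy post outranks the calmer one despite having far fewer Likes; only at 1.5× does the ordering flip.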

Meta's own words: "We exploit the human brain's attraction to divisiveness"

In 2018, employees from Facebook's own Integrity teams produced a presentation for company leadership. One slide contained the following key statement:

"Our algorithms exploit the human brain's attraction to divisiveness. If left unchecked, Facebook will feed users more and more divisive content in an effort to gain user attention and increase time on the platform."

Internal Facebook presentation, 2018, as cited by the Wall Street Journal and the US Congress [4]

This is not a statement from a critic. This is the self-assessment of Facebook's own engineers in a presentation prepared for their own company leadership.

⚠️ What happened next: According to internal documents and witness statements, Zuckerberg and other executives largely set the research aside. Proposed measures were rejected or watered down to the point that they had no effect.[6]

Chapter 2: MSI and the Polarisation Spiral

The Restructuring That Made Everything Worse

In late 2017, Facebook decided to fundamentally change its News Feed, the central content stream on the platform: the home page where posts, videos and adverts appear, ordered not chronologically but by an algorithm optimised for maximum dwell time. The new guiding metric was "Meaningful Social Interactions" (MSI). Posts from friends and family were to become more prominent, content from media organisations and companies less so. Zuckerberg publicly framed this as a step towards a "healthier" Facebook. In practice, MSI had the opposite effect: provocative content received greater reach because it generated more comments.
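
How such a metric can tilt the feed towards provocation can be illustrated with a similarly minimal sketch. The weights and posts below are invented for illustration and are not Meta's actual MSI formula; the only assumption taken from the reporting is that downstream interactions such as comments and reshares counted for far more than a simple Like.

```python
# Hypothetical MSI-style scoring: comments and reshares count far more than Likes.
# Weights and example posts are invented; this is not Meta's actual MSI formula.

MSI_WEIGHTS = {"like": 1, "reaction": 5, "comment": 15, "reshare": 30}

def msi_score(interactions: dict[str, int]) -> int:
    return sum(MSI_WEIGHTS[kind] * count for kind, count in interactions.items())

factual_post     = {"like": 400, "reaction": 20, "comment": 10,  "reshare": 5}
provocative_post = {"like": 80,  "reaction": 60, "comment": 120, "reshare": 40}

print("factual:    ", msi_score(factual_post))       # 400 + 100 + 150 + 150 = 800
print("provocative:", msi_score(provocative_post))   # 80 + 300 + 1800 + 1200 = 3380
```

The factual post has five times as many Likes, yet the provocative one, which provokes argument in the comments, wins by a wide margin. That is exactly the incentive the internal memos describe below.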

What MSI produced in practice

An internal memo from November 2018 found that more negative comments on a post led to more clicks on that post. This was not an isolated case; it applied to at least 14 major pages studied. The memo's author wrote directly:

"Ethical concerns aside: empirically, the financial incentives our algorithms create are not compatible with our mission."

Internal Facebook memo, November 2018 [3]

A further data-scientist memo from 2019 recorded that MSI had "unhealthy side effects" on political content. Political parties in several European countries told Facebook that they now felt compelled to give "systematically provocative, qualitatively inferior content" more distribution, because factual, positive posts barely received any reach any more.

In Poland, the social media team of one party reported that its content mix had shifted from 50/50 positive/negative to 80% negative, not by choice, but because the algorithm rewarded it.[18]

Chapter 3: Recommended Into Radicalisation

64% Through Facebook's Own Recommendations

The algorithm determines not only which posts you see; it also recommends which groups you should join. And here lies a particularly well-documented problem.

The 64 per cent figure

As early as 2016, Facebook researcher Monica Lee conducted an internal study, prompted by concerns about extremist groups in Germany. The result:

64% of all joins to extremist groups on Facebook were attributable to Facebook's own recommendation tools, i.e. automatic suggestions such as "Groups you should know" or "Pages you might like". The "Groups you should know" and "Discover" features actively directed users into extremist groups.[7][8]

Facebook was therefore not merely passively exposing people to extremism. It was actively steering them towards it.
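
Group recommendations of the "people who joined X also joined Y" kind are typically built on co-membership signals. The sketch below is a hypothetical, heavily simplified recommender, not Facebook's actual system; it only illustrates that a ranking driven purely by overlap has no notion of whether the group it is promoting is harmful.

```python
from collections import Counter

# Hypothetical co-membership recommender ("people who joined X also joined Y").
# Not Facebook's actual system; it illustrates that a purely overlap-driven
# ranking recommends whatever a user's co-members have joined, with no notion
# of whether that group is harmful.

memberships = {                       # invented example data
    "alice": {"local_news", "gardening"},
    "bob":   {"local_news", "patriot_militia"},
    "carol": {"local_news", "patriot_militia", "gardening"},
    "dave":  {"patriot_militia"},
}

def recommend_groups(user: str, top_n: int = 3) -> list[str]:
    """Rank unseen groups by how many of the user's co-members belong to them."""
    own = memberships[user]
    scores = Counter()
    for other, groups in memberships.items():
        if other == user or not (groups & own):   # only consider overlapping users
            continue
        for g in groups - own:
            scores[g] += 1                        # one vote per co-member
    return [g for g, _ in scores.most_common(top_n)]

print(recommend_groups("alice"))  # ['patriot_militia'], recommended purely via overlap
```

In this toy example, "alice" never searched for anything extreme; the recommendation arrives solely because two of her co-members also sit in that group.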

The "Common Ground" project and its end

In 2017, Facebook launched the internal project "Common Ground", a company-wide attempt to make the platform less divisive. Teams developed concrete measures: limiting the reach of hyperactive extreme users, classifying and downranking polarising content, and suppressing clickbait on political topics (sensationalist headlines and preview images designed to provoke clicks that the actual content rarely delivers on, and which algorithms reward precisely because they generate clicks).

What happened: Joel Kaplan, Facebook's Vice President of Global Public Policy and a former deputy chief of staff under US President George W. Bush, blocked or diluted most of the proposals. His argument: the measures were "paternalistic" and would disproportionately affect conservative users and groups, which was politically risky.

Measures to reduce polarising content were internally classified as "antigrowth". Carlos Gomez Uribe, who led the News Feed Integrity Team, left Facebook and the tech industry within a year. He publicly confirmed that he had left the company out of frustration with leadership decisions.[4][6]

Chapter 4: Frances Haugen

The Woman Who Took the Documents

Frances Haugen is a data scientist. She worked at Facebook from 2019 until May 2021 as a product manager in the Civic Integrity Team, the team responsible for democracy, elections and disinformation. Before joining Facebook, she had already worked at Google, Pinterest and Yelp. She chose Facebook deliberately because, after losing a friend to conspiracy theories, she wanted to do something about online disinformation, meaning false or misleading information spread intentionally to manipulate public opinion (as opposed to misinformation, which is passed on unwittingly).

What she did

Haugen copied thousands of internal Facebook documents and passed them first to the Wall Street Journal (which published the "Facebook Files" series in September 2021), then to the US Congress and to the SEC, the US financial markets regulator, where she filed a complaint alleging that Facebook had misled investors about the risks of its algorithms. On 3 October 2021, she revealed herself as the whistleblower in an interview with CBS's 60 Minutes. On 5 October 2021, she testified before the US Senate.

What the documents showed

Facebook knew the algorithm was causing harm. An internal document from August 2019 recorded: "We have compelling evidence that our core product mechanics, such as virality, recommendations and engagement optimisation, are a substantial reason why hate speech, polarisation and disinformation thrive on the platform."[10] Engagement here means every user interaction (likes, comments, shares, clicks, dwell time): the more of it a post generates, the more people the algorithm shows it to; virality is the resulting explosive spread of content that triggers emotional reactions.

Safety measures were switched off after the 2020 election. Facebook had activated temporary safety systems against disinformation ahead of the 2020 US election. After the election and Biden's victory, these systems were rolled back or switched off in order to prioritise growth.

"When the election was over, they turned them back off. That felt like a betrayal of democracy to me."

Frances Haugen, CBS 60 Minutes, October 2021 [10]

The Civic Integrity Team was dissolved. Shortly after the 2020 election, Facebook dissolved the Civic Integrity Team, the team in which Haugen worked. Facebook said the work had been distributed to other departments. Two months later: the storming of the US Capitol on 6 January 2021.

"They said: 'Good, we got through the election. There were no riots. We can dissolve Civic Integrity now.' A few months later we had the insurrection."

Frances Haugen, CBS 60 Minutes [10]

"Nobody at Facebook is malicious. But the incentives are misaligned. Facebook makes more money when you consume more content. People like interacting with things that trigger an emotional response."

Frances Haugen, TIME [12]

Facebook's response to Haugen

Facebook disputed many of these accounts. Lena Pietsch, then Director of Policy Communications, stated that Haugen's criticisms were not credible because she had worked at the company for less than two years and had held no leadership responsibility.

Samidh Chakrabarti, former head of the Civic Integrity Team with over six years of experience and direct C-level access, publicly countered: he found Haugen's positions on algorithmic regulation and transparency "entirely valid for a public debate".[11]

Chapter 5: What the Research Says

Documented, Contested, and What the Difference Means

What is clearly documented

External research confirms the core of the problem: outrage and misinformation are algorithmically linked. A study in the journal Science (2024) showed that content which triggers outrage spreads further and faster than other content, both because algorithms favour emotionally charged material and because outrage is socially useful to the person sharing it (it strengthens group solidarity and signals moral stances), regardless of whether the content is true. This makes fact-checking alone a limited countermeasure.[19]

Meta's own 2023 studies, and why the conclusions Meta drew went too far

In July 2023, Meta and 17 external researchers published four studies in Science and Nature. The experiment ran during the 2020 US election with around 20,000 users and tested whether replacing the algorithmic feed with a chronological one changes users' political attitudes. Result: no significant effect on polarisation.

Nick Clegg, Meta's President of Global Affairs, presented this as evidence that Facebook's algorithm does not cause harmful polarisation. However, the editors of Science themselves published a rebuttal: the study was methodologically sound, but the conclusions Meta drew from it were too far-reaching.[22]

An independent study (2025, also in Science) on the X/Twitter algorithm, this time conducted without platform involvement, found clear causal evidence: algorithmic amplification of politically hostile content did genuinely change users' attitudes towards political opponents.[25]

Conclusion on the state of research: Whether Facebook's algorithm directly and permanently changes political attitudes has not yet been definitively established by science. What is unambiguously documented: the algorithm amplifies content that triggers anger and outrage. And Meta knew this from its own internal studies as early as 2017.

Chapter 6: The Invisible Workforce

Content Moderators: The People Behind the Algorithm

Before the algorithm decides what billions of people see, people decide what stays on the platform and what gets deleted. That is the work of content moderators, and Meta has systematically outsourced this work out of its own company: to poorer countries, at low wages, under strict confidentiality.

These individuals see the worst of what people upload to the internet every day: beheadings, child abuse, torture, suicides. They decide within seconds, often on thousands of images per day. The psychological consequences are theirs to bear.

Manila: The documentary that made everything visible

German filmmakers Hans Block and Moritz Riesewieck began searching for the "cleaners of the internet" in 2015. It took eight months before they found the first moderators; such was the secrecy. Anyone working for Facebook was contractually forbidden from speaking about it. Internally, the client was known at some companies simply as "The Honey Badger Project".

In 2018, their documentary "The Cleaners" premiered at the Sundance Film Festival and received international recognition. It showed moderators in Manila processing up to 25,000 images per day, for 1 to 3 US dollars per hour. Some reported suicides among colleagues. Many developed symptoms of post-traumatic stress disorder (PTSD): fear of public spaces, sleep disturbances, flashbacks.[28]

USA: Death at the desk in Phoenix and Tampa

In the US, thousands of moderators worked for Facebook via the IT services provider Cognizant, in Phoenix, Arizona, and Tampa, Florida.

In 2018, Selena Scola, who had moderated for Facebook for nine months, sued over the PTSD she had developed. The suit became a class action. Reporter Casey Newton of The Verge documented the working conditions in 2019: two 15-minute breaks and 9 minutes of "wellness time" per day; employees using cannabis at their desks to cope with the work; some who, through daily exposure, came to believe the conspiracy theories and far-right content they were reviewing.

On 9 March 2018, employee Keith Utley suffered a heart attack during his shift at Cognizant in Tampa. The office had no defibrillator. He died. What he was looking at on his screen at that moment was never made public.

In May 2020, Facebook agreed a settlement of $52 million covering over 11,000 current and former moderators in four US states. Cognizant withdrew entirely from the content moderation business and closed the sites. The affected moderators lost their jobs, without ongoing psychological support.[30]

Kenya: $1.50 per hour, and the landmark ruling

In 2022, Daniel Motaung, a South African moderator who worked for Facebook's contractor Sama in Nairobi, filed a lawsuit. Hourly rate: US $1.50. Work content: beheading videos, child abuse, terrorism. When Motaung tried to form a trade union, he was dismissed. His case grew into a class action of 185 moderators. An independent medical report diagnosed severe PTSD, anxiety disorders or major depression in more than 140 of the claimants.

Meta's position: the company had never directly employed the moderators; the employer was Sama alone. Therefore, no responsibility.

In September 2024, the Kenyan Court of Appeal ruled that Meta can be sued before the labour court in Kenya, even without a direct employment relationship. As the effective commissioning party, Meta bears legal responsibility. This is a landmark precedent: for the first time, a court in a developing country ruled conclusively that US tech companies cannot escape liability through outsourcing structures.[36]

Ghana: The next secret location

After the Kenyan case became public, Meta quietly withdrew the contract. New location: Accra, Ghana, operated via Majorel, a subsidiary of the French group Teleperformance. Meta did not confirm the location for months. The British investigative organisation The Bureau of Investigative Journalism exposed it in 2025.

What moderators there reported: an employee attempted suicide, and their contract was then terminated. Moderators shared beds in cramped employer-provided accommodation. Managers followed employees to the toilet. Anyone who insisted on better conditions risked dismissal.[34]

"These are the worst conditions I have seen in six years of working with social media content moderators."

Martha Dark, co-founder of the NGO Foxglove

The pattern

The geographical sequence is no coincidence. It follows a system: as soon as one location comes under pressure, Meta withdraws the contract and starts afresh elsewhere. The chain of subcontractors ensures that Meta remains as legally remote from events as possible.

Location | Contractor | Outcome
Manila, Philippines | Various third-party firms | No lawsuits; industry still active
Phoenix & Tampa, USA | Cognizant | $52m settlement in 2020; Cognizant exits the industry
Nairobi, Kenya | Sama/Samasource | Landmark ruling in 2024: Meta can be sued; proceedings ongoing
Accra, Ghana | Majorel (Teleperformance) | Lawsuits filed in 2025; ongoing

Meta has never directly employed these moderators. Meta has never acknowledged direct responsibility. And Meta has never publicly confirmed a location before external investigations exposed it.[37]

Chapter 7: What Meta Says

The Counter-Position, and Why the Documents Carry More Weight

Meta disputes the central allegations. Its public counter-arguments include the 2023 studies described in Chapter 5, which Nick Clegg presented as evidence that the algorithm does not cause harmful polarisation.

ℹ️ The company's own internal documents from 2018 and 2019 directly contradict this public portrayal. Meta has never disputed the authenticity of these documents; they were confirmed through the Haugen leak and congressional proceedings. What Meta knew internally, and what it did with that knowledge, is documented. How large the long-term societal damage is has not yet been definitively established by science.

Note: as the same internal documents show, Meta also knew what Instagram does to teenagers: to their body image, self-esteem and sleep. More on this in our Instagram article.

Conclusion

Wrong Incentives. Known Consequences. No Change.

Meta built a system that rewards anger. This is not the opinion of critics; it is written in Meta's own internal documents from 2016, 2018 and 2019. The engineers recognised the problem. They made proposals. Company leadership chose growth.

Frances Haugen made these documents public. She testified before the US Congress, the British Parliament and the European Parliament. Facebook's only serious response was to dissolve the Civic Integrity Team.

That is the story of the algorithm. Not malicious intent, but wrong incentives. Incentives the company knew about, and nonetheless kept in place.

38 Sources
  1. Nieman Journalism Lab – Facebook algorithm prioritized anger (Oct. 2021): niemanlab.org
  2. The Hill – 5 points for anger (Oct. 2021): thehill.com
  3. CNN – Facebook MSI News Feed math (Oct. 2021): cnn.com
  4. Wall Street Journal – Facebook Executives Shut Down Efforts to Make the Site Less Divisive (May 2020): wsj.com
  5. US Congressional document with WSJ citations: congress.gov
  6. Engadget – Facebook resisted being less divisive (May 2020): engadget.com
  7. E&T Magazine – Facebook did not act on extremism research: eandt.theiet.org
  8. MIT Technology Review – Haugen algorithms (Oct. 2021): technologyreview.com
  9. MIT Technology Review – Facebook's AI misinformation addiction (March 2021): technologyreview.com
  10. CBS News – 60 Minutes Haugen interview (Oct. 2021): cbsnews.com
  11. TIME – How Haugen's team forced a reckoning (Oct. 2021): time.com
  12. TIME – Frances Haugen whistleblower reveals identity (Oct. 2021): time.com
  13. CNN – Haugen Senate testimony live (Oct. 2021): cnn.com
  14. CNN – Haugen 60 Minutes background: cnn.com
  15. NPR – Facebook whistleblower renewing scrutiny (Oct. 2021): npr.org
  16. CNBC – Haugen reveals identity (Oct. 2021): cnbc.com
  17. NBC News – Haugen: Facebook asleep at the wheel (2023): nbcnews.com
  18. NBC News – 2018 algorithm change boosted GOP groups (June 2022): nbcnews.com
  19. Science – Misinformation exploits outrage (2024): science.org
  20. The Decision Lab – Social Media and Moral Outrage: thedecisionlab.com
  21. NPR – Meta algorithm studies (July 2023): npr.org
  22. Science/AAAS – Study on polarization, critique (2024): science.org
  23. Columbia Journalism Review – Meta studies on polarization: cjr.org
  24. CNBC – Meta algorithm studies (July 2023): cnbc.com
  25. Science Media Centre – X algorithm polarisation study (Nov. 2025): sciencemediacentre.es
  26. Today.com – Social media outrage culture studies (Oct. 2025): today.com
  27. Nature – Facebook echo chamber study (July 2023): nature.com
  28. The Cleaners (documentary, 2018), directed by Hans Block and Moritz Riesewieck: imdb.com
  29. NPR – Interview with the directors of The Cleaners (Nov. 2018): npr.org
  30. TechCrunch – Facebook to pay $52 million to content moderators (May 2020): techcrunch.com
  31. Tampa Bay Times – Facebook sued by Tampa workers (Feb. 2020): tampabay.com
  32. Lawsuit filing, Garrett et al. v. Facebook Inc., Case 8:20-cv-00585: digitalcommons.law.scu.edu
  33. Staffing Industry Analysts – Judge rules for Facebook and Cognizant (Sept. 2023): staffingindustry.com
  34. The Bureau of Investigative Journalism – Meta's new moderators face worst conditions yet (April 2025): thebureauinvestigates.com
  35. France24/AFP – Lawyers probe 'dire' conditions for Meta content moderators in Ghana (May 2025): france24.com
  36. Al Jazeera – African workers are taking on Meta (April 2025): aljazeera.com
  37. IHRB – Content moderation is a new factory floor of exploitation (Nov. 2025): ihrb.org
  38. Business and Human Rights Resource Centre – Ghana: Meta faces lawsuit: business-humanrights.org