What exactly is algorithmic discrimination?

An algorithm is a set of computational instructions: a sequence of rules a computer follows to calculate a result from input data. Modern algorithms, especially those using machine learning (a branch of AI in which software is not explicitly programmed but instead learns patterns independently from large datasets), derive these rules autonomously from vast amounts of data. They identify patterns in historical data and apply them to new cases. The more data, the better the prediction, but also the greater the risk of inheriting biases embedded in that data.
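To make the distinction concrete, here is a deliberately tiny sketch (all figures hypothetical): the first function is an explicitly programmed rule a human can audit; the second distils a comparable rule from nothing but past decisions, so whatever those decisions contained, including their biases, ends up in the rule.

```python
# Explicitly programmed: a human wrote this rule and can be held to it.
def approve(income: float, debt: float) -> bool:
    return income - debt > 20_000

# "Learned": the rule is distilled from past decisions instead. If those
# decisions were biased, the learned threshold inherits the bias.
history = [  # (income, debt, was_approved) -- invented historical decisions
    (55_000, 10_000, True), (30_000, 25_000, False),
    (48_000, 20_000, True), (35_000, 30_000, False),
]
margins_yes = [i - d for i, d, ok in history if ok]
margins_no = [i - d for i, d, ok in history if not ok]
threshold = (min(margins_yes) + max(margins_no)) / 2  # a one-feature decision stump
print(f"learned rule: approve if income - debt > {threshold:,.0f}")
```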

Algorithmic discrimination means that such systems systematically disadvantage certain groups of people. The critical difference from classical discrimination: there is no single person consciously discriminating. Instead, the software reproduces inequalities already embedded in the data it was trained on. If a hiring algorithm learns from ten years of recruitment data in which predominantly men were hired, it draws the conclusion: "Male applicants are more successful" — and from that point on, disadvantages women.

Why is this so insidious? Algorithmic decisions appear objective and neutral because they come from a machine. And the disadvantage is not confined to individual cases: a single system operates on millions of people simultaneously. This scaling effect was documented in detail as early as 2019 by Germany's Federal Anti-Discrimination Agency.

The connection to tracking and personal data is direct: the more data collected about us (location, browsing behaviour, purchases, social contacts), the more precisely algorithms can categorise us. Seemingly harmless data points become proxy variables, apparently neutral attributes that indirectly signal a protected characteristic: a postcode reveals the ethnic composition of a neighbourhood, search terms indicate health status, and a first name correlates with social background.
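How little it helps to simply omit the protected attribute can be shown in a few lines. The following sketch (entirely synthetic data; "postcode" and all effect sizes are invented for illustration) trains a model that never sees group membership, yet reproduces the historical disadvantage through the correlated postcode:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)              # protected attribute, NOT a model input
postcode = np.where(rng.random(n) < 0.8, group, 1 - group)  # 80%-correlated proxy
skill = rng.normal(size=n)                 # genuinely job-relevant signal

# Historical labels: equally skilled members of group 1 were hired less often.
hired = skill + rng.normal(size=n) - 1.5 * group > 0

X = np.column_stack([postcode, skill])     # protected attribute deliberately omitted
model = LogisticRegression().fit(X, hired)

proba = model.predict_proba(X)[:, 1]
for g in (0, 1):
    print(f"group {g}: mean predicted hire probability {proba[group == g].mean():.2f}")
# The gap persists: the model has rediscovered 'group' through the postcode.
```

Deleting the postcode as well would not settle the matter in practice: rich behavioural datasets typically contain many partial proxies at once.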

The five technical mechanisms

1. Biased training data: if the past was unjust, the software learns that injustice. This is the most common cause of algorithmic bias.
2. Proxy variables: protected characteristics are replaced by apparently neutral data, such as healthcare costs instead of race.
3. Feedback loops: a self-reinforcing cycle in which a decision generates new data that confirms the next decision. More police presence leads to more arrests, which validates the algorithm and produces even more presence (a runnable sketch follows this list).
4. Misguided optimisation: the algorithm optimises for a measurable goal that does not align with fairness.
5. Biased labels: even the "ground truth" in the data is not neutral, for example when "recidivism" is in fact measured through racially skewed policing patterns.

Documented cases: Where algorithms have discriminated

The following cases are substantiated by investigative journalism, academic studies, or regulatory investigations. They demonstrate that algorithmic discrimination is not a theoretical problem.

Credit — Apple Card (USA, 2019)

Financial Sector · USA · 2019
Apple Card: 20× higher limit for the husband
Tech entrepreneur David Heinemeier Hansson publicly reported that his Apple Card gave him a credit limit 20 times higher than his wife's — despite them filing joint tax returns and her having the better credit score. Apple co-founder Steve Wozniak reported a similar experience. Goldman Sachs repeatedly cited "the algorithm" without further explanation.
⚖ NYDFS investigation: No intentional discrimination found, but significant transparency failures identified (March 2021)

Recruiting — Amazon AI (USA, 2014–2017)

Recruiting · USA · 2014–2017
Amazon's secret AI tool systematically downgraded women
From 2014, Amazon developed an AI recruiting tool that rated applications on a 1–5 star scale. Trained on ten years of predominantly male application data, the system learned to penalise CVs containing the word "women's" and to downgrade graduates of all-women's colleges. Amazon dissolved the team in 2017 after it became clear the bias could not be corrected.
⚖ Revealed by Reuters exclusive investigation, October 2018

Advertising and housing — Meta (USA/EU)

Advertising · USA/EU · 2016–2025
Facebook enabled years of discrimination in housing ads
Facebook allowed housing advertisers to deliberately exclude users based on ethnicity, religion, and national origin; ProPublica exposed this in 2016. In June 2022, the US Department of Justice reached a settlement requiring Meta to shut down its "Special Ad Audience" tool, the first case prosecuted under the US Fair Housing Act for algorithmic discrimination. In February 2025, the Dutch Institute for Human Rights ruled against Meta's job ad algorithm, which had served job ads to audiences of 79% women or 91% men depending on the role. In October 2025, the French Défenseur des Droits followed with a formal ruling.
⚖ First European ruling against a social media algorithm for discrimination (Netherlands, February 2025); French ruling followed in October 2025

Criminal justice — COMPAS (USA, since 2016)

Criminal Justice · USA · since 2016
Recidivism algorithm gets it wrong for Black defendants twice as often
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), used in 46 US states to predict recidivism risk, was investigated by ProPublica in 2016. The finding: Black defendants who did not go on to reoffend were falsely flagged as high risk at a rate of 45%, compared to just 23% for white defendants, almost twice as often. Conversely, white defendants who later did reoffend were more often assessed as low risk. Researchers at Stanford, Cornell, and Carnegie Mellon subsequently proved mathematically that no algorithm can satisfy several common fairness criteria at once when base rates differ between groups.
⚖ ProPublica "Machine Bias", May 2016 — sparked worldwide academic debate
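One version of that impossibility result can be checked with a few lines of arithmetic: the identity usually attributed to Chouldechova (2017) pins the false-positive rate down once the base rate, the precision (PPV), and the miss rate (FNR) are fixed. Holding PPV and FNR equal across two groups with different base rates (all numbers below are hypothetical) forces their false-positive rates apart:

```python
# If precision (PPV) and miss rate (FNR) are equal across groups, the
# false-positive rate is algebraically determined by the base rate p:
#     FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
# so groups with different base rates cannot have equal FPRs as well.
def fpr(base_rate: float, ppv: float, fnr: float) -> float:
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

PPV, FNR = 0.6, 0.35          # hypothetical values, held equal for both groups
for name, p in [("group A", 0.5), ("group B", 0.3)]:   # different base rates
    print(f"{name}: base rate {p:.0%} -> false-positive rate {fpr(p, PPV, FNR):.0%}")
```

With these numbers, group A ends up with a 43% false-positive rate and group B with 19%: equalising one fairness criterion necessarily unbalances another.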

Healthcare — Optum/UnitedHealth (USA, 2019)

Healthcare · USA · 2019
200 million patients, systematically misclassified
An algorithm by Optum/UnitedHealth, applied annually to approximately 200 million Americans, used healthcare costs as a proxy for healthcare need. Because Black patients spent on average $1,800 less per year due to systemic access barriers, the algorithm systematically classified them as less sick. After correction, the share of Black patients receiving additional care would have risen from 17.7% to 46.5%.
⚖ Obermeyer et al., published in Science, October 2019
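The failure mode, optimising a measurable cost when the actual goal is health (mechanisms 2 and 4 above), fits in a few lines. A deliberately tiny sketch with invented figures: two equally sick patients are ranked by spending, and unequal access to care flips the priority:

```python
# Two equally sick patients; unequal access to care means unequal spending.
patients = [
    {"id": "A", "active_conditions": 4, "annual_cost": 6_000},
    {"id": "B", "active_conditions": 4, "annual_cost": 4_200},  # ~$1,800 less
]

by_cost = sorted(patients, key=lambda p: p["annual_cost"], reverse=True)
by_need = sorted(patients, key=lambda p: p["active_conditions"], reverse=True)

print("ranked by cost:", [p["id"] for p in by_cost])  # B is deprioritised
print("ranked by need:", [p["id"] for p in by_need])  # A and B tie, as they should
```

Relabelling the model on measures of health rather than cost is essentially the correction Obermeyer et al. proposed.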

Recent cases 2025: AI in recruiting

Recruiting · USA/DACH · 2025
Mobley v. Workday — first nationwide AI class action
Derek Mobley (Black, over 40, disabled) was rejected in more than 100 applications submitted via Workday's AI screening, often within minutes. The lawsuit was certified as a nationwide class action in May 2025, and Workday acknowledged 1.1 billion rejections in the relevant period. Workday is also used by companies across Germany, Austria, and Switzerland. A PNAS Nexus study (May 2025) using 361,000 fictitious CVs found that all five leading AI models tested systematically disadvantaged Black male applicants.
⚖ Class action certified May 2025 — verdict pending

Algorithmic discrimination in the DACH region

The Schufa: Germany's most powerful algorithm

The Schufa (Schutzgemeinschaft für allgemeine Kreditsicherung), Germany's largest credit bureau, holds data on approximately 68 million people and calculates creditworthiness scores (0–100%) that decide on loans, rental agreements, and phone contracts, often without the data subject's knowledge. The calculation formula was secret for decades, protected by Germany's Federal Court of Justice as a trade secret (BGH, 28 January 2014, ref. VI ZR 156/13).

The OpenSchufa project by AlgorithmWatch and the Open Knowledge Foundation (2018) attempted to decode the algorithm through crowdsourcing: over 30,000 data disclosures were requested, and around 3,000 people donated theirs. The findings: older and female individuals tended to receive better scores, while frequent moves had a negative impact. Consumer watchdog Stiftung Warentest found that only 11 out of 89 test participants had completely correct data stored at Schufa.

⚖️ Landmark ruling: the Court of Justice of the European Union (CJEU) held on 7 December 2023 (C-634/21) that Schufa scoring constitutes an "automated decision" within the meaning of Art. 22 GDPR when banks rely predominantly on the score. Those affected have the right to human review and the right to contest the decision. In February 2025, the CJEU clarified (C-203/22) that scoring providers must transparently explain their assessment logic; trade secrets alone are not sufficient justification.
Mar. 2025 Regional Court Bamberg (ref. 41 O 749/24): First German ruling: fully automated Schufa scoring is fundamentally unlawful under Art. 22 GDPR. €1,000 damages awarded.
Apr. 2025 Regional Court Bayreuth (ref. 31 O 593/24): €3,000 damages. Schufa must disclose in detail which data was weighted in what way.
Apr. 2025 Higher Regional Court Cologne (ref. 15 U 249/24): Three-year retention period for settled debts ruled unlawful — immediate deletion required. Federal Supreme Court appeal pending.
Sept. 2025 Hamburg: First significant German GDPR fine under Art. 22 — €492,000 for automated credit card rejection.
Sept. 2025 Austria — DPA bans KSV1870 scoring: Fully automated rejection within one minute ruled unlawful. Credit bureau qualifies as "decision-maker" under Art. 22 GDPR.

Predictive policing: Palantir software in German law enforcement

Predictive-policing software uses historical crime data to forecast where and when future offences are likely to occur; the central criticism is that it reproduces existing policing patterns and leads to over-policing of specific neighbourhoods. Several German federal states use data analysis software from Palantir, the US technology company co-founded by Peter Thiel that supplies analysis tools to intelligence agencies, militaries, and police forces worldwide. Hesse was a pioneer from 2017 with "hessenDATA" (Palantir Gotham). North Rhine-Westphalia followed with the "DAR" system. Bavaria launched the "VeRA" system in December 2024 and concluded a framework contract enabling other states to purchase it.

🏛️ Federal Constitutional Court, 16 February 2023: automated data analysis provisions in Hesse (§ 25a HSOG) and Hamburg (§ 49 HmbPolDVG) were declared unconstitutional, and open-ended statistical searches were ruled incompatible with fundamental rights. The Society for Civil Rights (GFF) had initiated both constitutional complaints and in 2025, together with the Chaos Computer Club, filed a complaint against Bavaria's VeRA system.

The AMS algorithm in Austria

Austria's Public Employment Service (AMS) developed an algorithm to classify jobseekers into three categories. The system systematically assigned lower scores to women, people without EU citizenship, those with disabilities, and those with caring responsibilities. Austria's data protection authority declared the system unlawful in 2020. In September 2025, however, the Federal Administrative Court overturned this ruling — on the grounds that advisors could intervene. The verdict has been sharply criticised by data protection experts and is not yet final.

What Art. 22 GDPR means for you

Article 22 of the General Data Protection Regulation is the most important legal basis against algorithmic discrimination. It establishes that you have the right not to be subject to a decision based solely on automated processing — if that decision has legal effect or similarly significantly affects you. This covers automatic credit rejections, algorithmic candidate screening, and automated insurance decisions.

⚖️ Your three rights under automated decisions
1. Request human involvement: you can demand that a human reviews the automated decision, not just a rubber stamp. (Art. 22(3) GDPR)
2. Express your point of view: you have the right to present your perspective before a final decision is made. (Art. 22(3) GDPR)
3. Contest the decision: you can formally contest the decision; if there is no response, file a free complaint with your national data protection authority. (Art. 22(3) GDPR, Art. 77 GDPR)
Important limitation: Art. 22 only applies to decisions that are "solely" automated. A token human inserted into the process can nullify the protection — as the Austrian AMS ruling of 2025 demonstrates. It also only covers decisions with "legal effect or similarly significant impact." Personalised pricing or news feeds often fall below this threshold.

Practical steps: How to protect yourself

Check your credit file

If you live in Germany, you can request a free data copy from meineschufa.de under Art. 15 GDPR. Check the stored entries for errors; the odds of finding one are high: according to consumer watchdog Stiftung Warentest, only 11 out of 89 test participants had completely accurate data on file. If you find errors, file an objection directly with Schufa. Outside Germany, check which credit bureau operates in your country (e.g. Experian, Equifax, or TransUnion in the UK/US); most offer a free annual disclosure.

Exercise your GDPR rights actively

The most effective tool is your right of access under Art. 15 of the GDPR (General Data Protection Regulation, the EU's data protection law in force since 2018; violations can be fined at up to 4% of annual global turnover). Every company must respond within one month, free of charge, stating what data it holds about you and whether automated decision-making is being used. Template letters are available from consumer advice centres and at datenanfragen.de.

Leave fewer data traces

Every data point that is never collected is one no algorithm can use against you; tools such as the EFF's Cover Your Tracks test (see sources) show how identifiable your browser alone is. For parents: under Art. 8 GDPR, online services offered to children under 16 in Germany require parental consent (other EU member states may set the threshold as low as 13). Those affected can, even as adults, request the deletion of data collected during their childhood. More information at klicksafe.de.

What the EU AI Act now changes

The EU AI Act (Regulation (EU) 2024/1689), which entered into force on 1 August 2024, is the world's first comprehensive law regulating artificial intelligence. It bans certain AI applications outright, regulates high-risk systems, and threatens fines of up to €35 million or 7% of annual turnover. Rather than only enabling individuals to defend themselves, it places systematic obligations on developers and operators of AI systems.

Prohibited AI practices — since February 2025

Since 2 February 2025, eight categories of AI systems are completely banned in the EU:

Banned immediately, among others: social scoring (state-run rating of citizens based on social behaviour) · subliminal manipulation · individual predictive policing based solely on profiling · emotion recognition in workplaces and educational institutions (scientifically contested, since facial expressions vary across cultures) · biometric categorisation by ethnicity, political opinion, or sexual orientation

Timeline at a glance

Date What applies
02.02.2025 Prohibited AI practices in force + AI competence obligations for operators
02.08.2025 Governance rules, obligations for general-purpose AI models (GPT, Gemini, etc.)
02.08.2026 Full enforcement powers: up to €35M or 7% of annual global turnover
02.08.2027 Full obligations for high-risk AI in recruiting, credit, and insurance

According to the Littler Survey 2025, fewer than 20% of European employers consider themselves "very well prepared." In November 2025, the European Commission proposed, via its "Digital Omnibus on AI," postponing certain high-risk deadlines, possibly to December 2027; the proposal is still under negotiation.

Germany: AI oversight in preparation

The German Federal Cabinet approved the AI Market Surveillance and Innovation Promotion Act (KI-MIG) on 11 February 2026. The Federal Network Agency (BNetzA), hitherto responsible for regulating telecommunications, energy, and postal services, will become Germany's central AI supervisory authority; it has already operated an AI Service Desk since July 2025. The law still requires approval from both houses of parliament, and the snap federal election of February 2025 had earlier delayed the process.

Important: reform of anti-discrimination law is overdue. AlgorithmWatch and 20 organisations are calling for algorithms to be explicitly included in anti-discrimination legislation, along with a right for civil society organisations to bring collective actions and a reversal of the burden of proof. Without reform of national equality laws, effective civil-law protection against algorithmic discrimination remains out of reach.

Conclusion

Algorithmic discrimination is not a niche problem of the tech industry — it affects everyday decisions from apartment hunting to credit applications. The year 2025 marks a turning point: for the first time, European authorities condemned social media algorithms for discrimination. Opaque credit scoring is being increasingly declared unlawful by courts across Germany and Austria.

The CJEU ruling on credit scoring, the German Federal Constitutional Court's ruling on predictive policing, and the EU AI Act mark a watershed moment: binding rules are emerging for the first time that not only strengthen individual rights, but impose systemic requirements on AI developers and operators.

For individuals, the tools exist today: GDPR provides effective mechanisms. Those who minimise their data traces, know their rights, and critically question which algorithms play a role in decisions about their lives are better protected than the vast majority.

Sources & References
[1]
Federal Anti-Discrimination Agency (Germany) – Orwat Study (2019)
"Discrimination risks from the use of algorithms" — 47 documented examples of algorithmic disadvantage. A foundational work for the German debate.
antidiskriminierungsstelle.de
[2]
ProPublica – "Machine Bias" (COMPAS, May 2016)
Angwin, Larson, Mattu & Kirchner: Black defendants flagged at a 45% false-positive rate by recidivism prediction algorithm COMPAS. Foundational investigative piece.
propublica.org
[3]
Obermeyer et al. – Healthcare Bias (Science, October 2019)
"Dissecting racial bias in an algorithm used to manage the health of populations" — 200 million patients systematically misclassified via healthcare costs as a proxy variable.
science.org/doi/10.1126/science.aax2342
[4]
Reuters – Amazon scraps secret AI recruiting tool (October 2018)
Jeffrey Dastin: exclusive investigation into Amazon's AI recruiting tool that systematically downgraded women — discontinued in 2017.
reuters.com
[5]
CJEU – Schufa Ruling C-634/21 (December 2023)
Schufa credit assessment constitutes an "automated decision" under Art. 22 GDPR where banks rely predominantly on it. Those affected have the right to contest.
curia.europa.eu
[6]
CJEU – Dun & Bradstreet C-203/22 (February 2025)
Scoring providers must transparently explain their assessment logic — trade secrets alone are insufficient. Clarification of the Schufa ruling.
curia.europa.eu
[7]
Federal Constitutional Court (Germany) – Predictive Policing (February 2023)
1 BvR 1547/19 and 1 BvR 2634/20: provisions on automated data analysis in Hesse and Hamburg declared unconstitutional. Open-ended statistical searches incompatible with fundamental rights.
bundesverfassungsgericht.de
[8]
EU AI Act – Full text (Regulation (EU) 2024/1689)
World's first comprehensive AI law. In force 1 August 2024. Prohibitions since 2 February 2025, full enforcement powers from 2 August 2026.
artificialintelligenceact.eu
[9]
US DOJ – Meta / Fair Housing Act (June 2022)
First case in which algorithmic bias was prosecuted under the Fair Housing Act. Meta was required to shut down its "Special Ad Audience" tool for housing ads.
justice.gov
[10]
Dutch Institute for Human Rights – Meta Job Ads (February 2025)
Job ads were served to 79% women or 91% men depending on role — Meta could not rebut the presumption of discrimination. First European ruling against a social media algorithm.
cnn.com
[11]
Regional Court Bamberg – Schufa scoring unlawful (March 2025, ref. 41 O 749/24)
First German ruling: fully automated Schufa scoring fundamentally unlawful under Art. 22 GDPR. €1,000 damages awarded.
lhr-law.de
[12]
noyb – KSV1870 scoring banned (September 2025)
Austrian data protection authority bans fully automated credit rejection within one minute. Credit bureau qualifies as "decision-maker" under Art. 22 GDPR.
noyb.eu
[13]
AlgorithmWatch – AutoCheck Guide on Discrimination
Practical guide to recognising algorithmic discrimination in everyday life and taking action. Includes a reporting portal for cases.
algorithmwatch.org/de/autocheck-ratgeber-diskriminierung/
[14]
GFF – Society for Civil Rights (Predictive Policing)
Initiator of constitutional complaints against predictive policing in Hesse and Hamburg. In 2025: new complaint against Bavaria's VeRA system, together with the CCC.
freiheitsrechte.org
[15]
Datenanfragen.de – GDPR Request Generator
Free generator for GDPR access requests with pre-filled contact details for hundreds of companies. Open-source project.
datenanfragen.de/generator/
[16]
EFF – Cover Your Tracks (Browser Fingerprinting Test)
Tests whether your browser is uniquely identifiable via fingerprinting. Developed by the Electronic Frontier Foundation.
coveryourtracks.eff.org
[17]
Meine SCHUFA – Request free data copy (Germany)
Free data copy under Art. 15 GDPR. Once per year at no cost — checking for incorrect entries is strongly recommended by consumer organisations.
meineschufa.de/de/datenkopie
[18]
CNN – Mobley v. Workday AI Class Action (May 2025)
First nationwide class action against AI-assisted recruiting. Workday acknowledged 1.1 billion rejections. Also used by companies in Germany, Austria, and Switzerland.
fairnow.ai/workday-lawsuit-resume-screening/