Generative AI Explained
Course modules: 1. What Is Generative AI? · 2. What Is Deep Learning? · 3. What Are AI Agents? · 4. What Is Agentic AI? · 5. AI Safety & Regulation

Module 5 · ~15 min

AI Safety & Regulation

AI regulation is here — the EU AI Act is law, and the US is catching up. Here's what you need to know about AI safety, bias, and compliance before August 2026.

The hiring algorithm that discriminated

In 2018, it came to light that Amazon's AI-powered hiring tool had been systematically downranking resumes from women. The model had been trained on 10 years of hiring data — a decade when the tech industry was overwhelmingly male. So the AI learned that male candidates were "better" and penalized resumes that mentioned women's colleges or included the word "women's."

Amazon scrapped the tool. But it took years to discover the bias — and no law required them to check.

Today, the EU AI Act makes that kind of unchecked AI deployment illegal. By August 2026, companies deploying high-risk AI systems in the EU (including hiring, lending, and law enforcement AI) must prove they've assessed and mitigated bias, or face fines of up to 3% of global annual revenue (up to 7% for deploying outright prohibited AI systems).

  • **3%** of global annual revenue: maximum fine for high-risk AI violations
  • **€35M** or 7% of global revenue (whichever is higher): maximum fine for prohibited AI
  • **$492M**: estimated size of the AI governance market (2025; analyst estimates vary)

Why AI safety matters

AI systems can cause harm in ways traditional software can't:

| Risk | Example | Who's affected |
|---|---|---|
| Bias | Loan approval AI that discriminates by race or gender | Minorities, protected groups |
| Hallucination | Medical chatbot confidently giving wrong medical advice | Patients |
| Privacy | AI trained on personal data without consent | Everyone |
| Manipulation | Deepfakes used to impersonate politicians | Democracies, public trust |
| Concentration of power | A few companies control the AI that shapes information | Society at large |
| Autonomous harm | Self-driving car making a fatal decision | Bystanders, passengers |
⚠️ This isn't hypothetical
In 2024, Air Canada's AI chatbot told a grieving customer he could book a full-price ticket and apply for a bereavement fare discount afterward — a refund process that did not exist. The British Columbia Civil Resolution Tribunal ruled Air Canada was responsible (Moffatt v. Air Canada, February 2024) — you can't blame the AI. And in 2023, lawyers were sanctioned by a US court for citing fake cases that ChatGPT had invented. AI mistakes have real legal consequences.

The EU AI Act — what you need to know

The EU AI Act is the world's first comprehensive AI law. It was adopted in 2024, with enforcement phased in through 2027.

The risk-based approach

**Unacceptable risk (BANNED)** — Social scoring by governments, real-time facial recognition in public spaces (with narrow exceptions), AI that manipulates people's behavior or exploits vulnerable groups
**High risk (HEAVILY REGULATED)** — AI in hiring, credit scoring, law enforcement, immigration, education, medical devices. Must have: risk assessments, human oversight, data governance, transparency, accuracy requirements
**Limited risk (TRANSPARENCY RULES)** — Chatbots, deepfakes, emotion recognition. Must disclose: "You are interacting with AI" and label AI-generated content
**Minimal risk (NO RULES)** — AI spam filters, AI in video games, recommendation algorithms. No specific requirements
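
To make the tiers concrete, here's a minimal sketch of how a team might label the systems it operates. The tier names follow the Act; the use-case mapping and the default-to-high-risk fallback are illustrative assumptions, not legal classifications — real answers depend on the Act's annexes and legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned"
    HIGH = "heavily regulated"
    LIMITED = "transparency rules"
    MINIMAL = "no specific rules"

# Illustrative mapping from use case to tier, mirroring the examples above.
USE_CASE_TIERS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "real-time public facial recognition": RiskTier.UNACCEPTABLE,
    "resume screening": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "deepfake generation": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
    "video game AI": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Unknown systems default to HIGH until someone qualified reviews them.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

for case in ("resume screening", "email spam filter", "warehouse robot scheduler"):
    print(f"{case}: {classify(case).name}")
```

Note the conservative default: anything the register doesn't recognize is treated as high risk until reviewed, which is cheaper than discovering a misclassification during an audit.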

Key deadlines

  • **Feb 2025 · Article 4: AI Literacy.** Already in effect; organizations must ensure staff have sufficient AI literacy.
  • **Feb 2025 · Banned AI.** Already passed; prohibited (unacceptable-risk) AI systems must have been discontinued.
  • **Aug 2026 · High-risk rules.** Full compliance required for high-risk AI systems.
  • **Aug 2027 · Full enforcement.** All provisions fully enforceable, with penalties.

Article 4: AI Literacy (already in effect)

The most broadly applicable provision: every organization deploying AI must ensure staff have "sufficient AI literacy" to use AI systems competently and understand their limitations.

This means: if your employees use ChatGPT, Copilot, or any AI tool, you need to document that they've been trained on how to use it responsibly.
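
The Act doesn't prescribe a format for that documentation. A minimal sketch of a training register — the field names and file path below are invented for illustration — could be as simple as:

```python
import csv
from dataclasses import dataclass
from datetime import date

@dataclass
class LiteracyTrainingRecord:
    """One row of evidence that an employee completed AI literacy training."""
    employee: str
    tool: str              # e.g. "ChatGPT", "Copilot"
    training_module: str
    completed_on: date

def append_record(path: str, record: LiteracyTrainingRecord) -> None:
    """Append a record to a simple CSV register."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([
            record.employee, record.tool,
            record.training_module, record.completed_on.isoformat(),
        ])

append_record("ai_literacy_register.csv",
              LiteracyTrainingRecord("a.jones", "ChatGPT",
                                     "Responsible AI use 101", date(2025, 3, 1)))
```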

💭 There Are No Dumb Questions

**Does the EU AI Act apply to me if I'm not in the EU?**

If your AI system is used by people in the EU or affects EU residents, yes — it applies regardless of where your company is based. Similar to how GDPR applies to any company handling EU citizens' data.

**What happens if my company doesn't comply?**

Fines range from €7.5M/1.5% (minor violations) to €15M/3% (high-risk AI non-compliance) to €35M/7% (prohibited AI) — in each case, the higher of the fixed amount or the percentage of global annual revenue applies (see the quick calculation sketch below). The Act also lets member states set penalties within those limits, and some may be stricter.

**My company just uses ChatGPT — does this apply?**

Article 4 (AI literacy) applies to everyone using AI. If you use AI for high-risk decisions (hiring, lending, medical), the stricter rules apply. If you just use ChatGPT for writing emails, you need literacy training but not a full risk assessment.
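
The penalty cap is simply the higher of the fixed amount and the revenue percentage. A tiny sketch, using a hypothetical company with €2B in global annual turnover (illustration only, not legal advice):

```python
def fine_cap(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    # The cap is the higher of the fixed amount or the percentage of turnover.
    return max(fixed_eur, pct * global_turnover_eur)

turnover = 2_000_000_000  # hypothetical EUR 2B global annual turnover
print(fine_cap(35_000_000, 0.07, turnover))   # prohibited AI      -> 140000000.0
print(fine_cap(15_000_000, 0.03, turnover))   # high-risk breaches ->  60000000.0
print(fine_cap(7_500_000, 0.015, turnover))   # minor violations   ->  30000000.0
```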

⚡ Exercise: Classify the AI risk level (25 XP)

Using the EU AI Act framework, classify each AI system:

1. An AI that scores citizens' trustworthiness for government benefits → ___
2. A chatbot on a retail website → ___
3. An AI that screens job applicants and ranks resumes → ___
4. A spam filter in your email → ___
5. An AI system that generates deepfake videos → ___

_Risk levels: Unacceptable (banned), High risk, Limited risk, Minimal risk_

AI safety beyond regulation

Bias and fairness

AI models inherit biases from their training data. If historical data reflects discrimination, the model will discriminate. Key areas:

  • Hiring AI — Can discriminate by gender, race, age, disability
  • Lending AI — Can redline neighborhoods or demographics
  • Healthcare AI — Can perform worse for underrepresented groups
  • Criminal justice AI — Can amplify existing racial disparities

What responsible organizations do: Test models across demographics, audit for disparate impact, maintain human oversight for consequential decisions.
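
What does "audit for disparate impact" look like in practice? Here's a minimal sketch of one common screening heuristic: comparing selection rates across groups against the "four-fifths" rule of thumb. The groups, outcomes, and threshold below are made-up illustrations, and the ratio is a screening signal, not a legal finding of discrimination.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs. Returns selection rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's rate.
    Ratios below ~0.8 (the 'four-fifths' rule of thumb) warrant investigation."""
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Toy, made-up outcomes purely to show the arithmetic:
sample = ([("A", True)] * 60 + [("A", False)] * 40 +
          [("B", True)] * 35 + [("B", False)] * 65)
print(disparate_impact(sample, reference_group="A"))  # {'A': 1.0, 'B': ~0.58}
```

A ratio of ~0.58 for group B would flag the system for a closer look: better data, a model fix, or human review of its decisions.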

Transparency and explainability

When an AI denies your loan or flags your content, you should be able to understand why. "The model said so" isn't acceptable for decisions that affect people's lives.
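
What might a usable explanation look like? A minimal sketch, assuming a toy linear scoring model with invented feature names and weights: surface the features that pushed the score toward denial as plain-language "reason codes."

```python
# Invented feature names and weights for a toy linear credit-scoring model.
WEIGHTS = {"debt_to_income": -2.0, "years_employed": 0.5, "missed_payments": -1.5}

def reason_codes(applicant: dict, top_n: int = 2) -> list:
    """Return the features that lowered the score the most, in plain language."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    worst = sorted(contributions.items(), key=lambda kv: kv[1])[:top_n]
    return [f"{name} lowered the score by {abs(value):.1f}" for name, value in worst]

print(reason_codes({"debt_to_income": 0.6, "years_employed": 1, "missed_payments": 2}))
# -> ['missed_payments lowered the score by 3.0', 'debt_to_income lowered the score by 1.2']
```

Real explainability for complex models is harder, but the bar is the same: an affected person should get a reason they can understand and contest.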

Human oversight

The principle that humans should remain in control of AI decisions — especially high-stakes ones. In practice, this means (see the routing sketch after this list):

  • Human review before final decisions in hiring, lending, healthcare
  • Kill switches for autonomous systems
  • Clear escalation paths when AI output seems wrong
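
Here's a minimal sketch of that routing logic. The fields, the 0.90 confidence threshold, and the queue are illustrative assumptions; the point is that consequential decisions never skip the human step.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    ai_recommendation: str   # e.g. "approve" or "reject"
    confidence: float        # the model's own confidence estimate
    high_stakes: bool        # hiring, lending, healthcare, ...

def route(decision: Decision, review_queue: list) -> str:
    """Send high-stakes or low-confidence decisions to a human reviewer;
    only low-stakes, high-confidence cases are applied automatically."""
    if decision.high_stakes or decision.confidence < 0.90:
        review_queue.append(decision)
        return "pending human review"
    return decision.ai_recommendation

queue = []
print(route(Decision("loan #1042", "reject", 0.97, high_stakes=True), queue))
# -> "pending human review": the model never gets the final say on a consequential decision
```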
🔑 The alignment problem
The fundamental challenge of AI safety: how do you ensure an AI system does what you actually want, not just what you literally asked for? A system told to "maximize engagement" might learn to show outrageous content because outrage is engaging. The goals you give AI matter as much as the AI itself.
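
A toy illustration of that gap, with made-up items and scores: an optimizer told only to maximize engagement dutifully picks the rage-bait, because that's the literal objective it was given.

```python
# Made-up content items scored on the proxy metric we asked for (engagement)
# and the side effect we didn't (outrage).
items = [
    {"title": "Calm explainer",     "engagement": 0.3, "outrage": 0.1},
    {"title": "Useful tutorial",    "engagement": 0.5, "outrage": 0.0},
    {"title": "Rage-bait headline", "engagement": 0.9, "outrage": 0.9},
]

# The optimizer does exactly what it was told: maximize engagement.
chosen = max(items, key=lambda item: item["engagement"])
print(chosen["title"])  # -> "Rage-bait headline": the stated goal, not the intended one
```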

What this means for you

If you use AI at work

  • Understand your company's AI policy
  • Never use AI for consequential decisions without human review
  • Document how AI is used in your workflows (for compliance)
  • Get AI literacy training (it may already be required)

If you build AI products

  • Conduct risk assessments before deployment
  • Test for bias across demographics
  • Build in transparency — explain how your AI makes decisions
  • Design human oversight into high-stakes workflows
  • Document everything — the EU AI Act requires extensive documentation

If you lead a team or organization

  • Assess which AI systems you use and their risk levels
  • Ensure Article 4 compliance — train staff on AI literacy
  • Appoint someone responsible for AI governance
  • Start compliance work now — August 2026 is coming fast

⚡ Exercise: AI governance assessment (50 XP)

Think about how AI is used at your organization (or one you know), then answer:

1. What AI systems does the organization use? (list them)
2. What risk level would each fall under the EU AI Act?
3. Have staff received AI literacy training?
4. Is there a human review process for AI-assisted decisions?
5. What's one step the organization should take NOW to prepare for compliance?

Key takeaways

  • AI safety covers bias, hallucination, privacy, manipulation, and autonomous harm
  • The EU AI Act is law — Article 4 (AI literacy) is already in effect; high-risk rules take effect in August 2026
  • AI systems are classified by risk: unacceptable (banned), high, limited, minimal
  • Fines reach 3% of global revenue for high-risk AI violations, and up to 7% for prohibited AI — this isn't optional
  • The Act applies to anyone whose AI affects EU residents, regardless of company location
  • Start now: train staff, assess AI risk levels, document usage, ensure human oversight

Knowledge Check

1. Under the EU AI Act, what is an example of 'unacceptable risk' AI (banned)?

2. What does Article 4 of the EU AI Act require?

3. Why did Amazon's AI hiring tool discriminate against women?

4. Does the EU AI Act apply to companies based outside the EU?
