
© 2026 Octo

Leading AI Products
Module 7 · ~20 min

AI Safety, Ethics, and Responsible Product Decisions

Identify AI risk categories, assign ownership, and navigate the EU AI Act before your feature ships.

The hiring tool that ruined Q3

Priya is a PM at a mid-size logistics company. She ships an AI hiring tool in Q1 — it cuts recruiter screening time by 40%. High-fives all around. The CEO mentions it in the board meeting.

Three months later, a data analyst named Marcus pulls a routine scoring report. He almost skips it — it's sitting on his to-do list as a slow-Friday side task. But he opens it. And he sees something ugly.

Candidates from certain universities are scoring 15–20% lower regardless of qualifications. For candidates from historically Black colleges and universities, the gap widens to 28%. (These figures are illustrative; real bias gaps vary widely by system and context.)

The model wasn't broken. It was working exactly as trained. The training data came from a decade of hiring decisions at a company that had systematically favored a small set of schools. The data was accurate — and that accuracy encoded the bias.

Here's the part that keeps PMs up at night: Priya's feature spec had no bias test requirement. No one was assigned to check. No acceptance criterion existed. So no one caught it — until Marcus almost didn't.

You're about to learn how to make sure you're never Priya.

Bias isn't a bug — it's a mirror

Let's get something straight right now: bias in AI isn't like a bug in your code. A bug means something broke. Bias means the model learned exactly what you showed it — and what you showed it was unfair.

Think of it like a broken scale. Imagine a kitchen scale that's been calibrated to read 2 ounces heavy on everything. Every reading it gives you is precise and consistent. But every reading is also wrong. The scale isn't malfunctioning — it was set up badly from the start.

That's what happened to Priya's hiring tool. The model looked at ten years of hiring data and learned: "People from these schools get hired. People from those schools don't." It found a real pattern. The pattern was just... biased.

A bias eval — an automated test that measures whether the model treats different groups differently — would have caught this before a single candidate was scored. Specifically, a test that treats university name as a feature and confirms it carries near-zero correlation with predicted score after controlling for skills and experience.
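What does that check look like in practice? Here's a minimal sketch in pure NumPy (variable names are illustrative, and this is not a substitute for a real fairness toolkit): regress predicted scores on the skill features, then measure how strongly a group flag correlates with the leftover, unexplained part of the score.

```python
import numpy as np

def residual_group_correlation(scores, skills, group_indicator):
    """Correlate a group flag with scores after controlling for skills.

    scores: (n,) predicted scores; skills: (n, k) skill/experience
    features; group_indicator: (n,) 1 if the candidate is in the group
    being tested (e.g. attended a given university), else 0.
    Returns the correlation between the group flag and the part of the
    score that skills cannot explain. Near zero = no detected bias.
    """
    X = np.column_stack([np.ones(len(scores)), skills])
    # Regress scores on skills; keep what skills can't explain.
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
    residuals = scores - X @ coef
    return float(np.corrcoef(group_indicator, residuals)[0, 1])
```

A near-zero result doesn't prove fairness on its own, but a large one is exactly the red flag Marcus found by hand.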

💭You're Probably Wondering…

There Are No Dumb Questions

"If the data was accurate, how is the model wrong?"

The data was accurate about what the company did. It wasn't accurate about what the company should have done. If a company only hired men for ten years, a model trained on that data will learn "men get hired." That's not intelligence — it's copying. Your job as PM is to specify: "The model must NOT replicate historical patterns that correlate with protected characteristics."

"Isn't bias detection the data science team's job?"

They run the tests. You write the requirement that makes the test mandatory. If the requirement isn't in the spec, the test is optional. Optional tests don't get run on deadline weeks.

⚡

Spot the Broken Scale

25 XP
Bug, Bias
A customer service chatbot crashes when it receives messages in Korean
A resume screener scores women 12% lower than men with identical qualifications
A loan model gives higher risk scores to applicants from ZIP codes that are 90%+ minority neighborhoods, even when income and credit history are identical
An image classifier labels all photos as "cat" regardless of content


The four risks hiding in every AI feature

(You saw hallucination in the Capabilities & Limitations module. That's the Technical risk row. This module adds three more risk categories that PMs miss — and gives you the framework for writing requirements that cover all four.)

Every AI feature you ship has four categories of risk lurking inside it. Think of them like the four tires on a car — you can drive with three, but you really shouldn't.

| Risk category | What can go wrong | Real example | What goes in the spec |
| --- | --- | --- | --- |
| Technical | Hallucination, model drift, prompt injection | A support bot invents a refund policy that doesn't exist | "Model must score ≥ 92% on the hallucination eval suite before launch" |
| Data | PII leakage, data poisoning, training data quality | A chatbot accidentally reveals a user's home address from training data | "No PII may appear in model output; PII filter must block 100% of SSN/address patterns" |
| Legal | IP violation, EU AI Act non-compliance, sector regulation | Hiring tool ships without conformity assessment, triggering six-figure fine | "Legal must sign off on EU AI Act classification before sprint 1" |
| Reputational | Biased output, harmful content, tone failures | Credit scoring model penalizes applicants by race | "Model must show no statistically significant score difference by gender, race, or age" |

Here's the pattern: every single row ends in the spec. A risk category without a written requirement is a risk category with no owner and no acceptance criterion. It's a tire you forgot to check.
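The Data row's PII requirement is the easiest one to turn into an automated test. Here's a minimal sketch with two deliberately simplified regex patterns (a real filter needs a much broader pattern library, and most teams pair regexes with a dedicated PII-detection service):

```python
import re

# Illustrative PII patterns only — production filters need far more:
# names, emails, phone numbers, international address formats, etc.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
US_ADDRESS_PATTERN = re.compile(
    r"\b\d{1,5}\s+\w+(\s\w+)*\s(Street|St|Avenue|Ave|Road|Rd|Blvd)\b",
    re.IGNORECASE,
)

def contains_pii(model_output: str) -> bool:
    """Return True if the model output matches any known PII pattern."""
    return bool(
        SSN_PATTERN.search(model_output)
        or US_ADDRESS_PATTERN.search(model_output)
    )
```

Run a check like this over every model response before it reaches the user, and the spec's "block 100% of SSN/address patterns" line becomes an automated gate instead of a hope.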

💭You're Probably Wondering…

There Are No Dumb Questions

"Do I really need all four categories for every feature?"

Yes. Even a simple internal summarization tool has technical risk (hallucination), data risk (what if it summarizes a document containing PII?), legal risk (is the source content copyrighted?), and reputational risk (what if the summary misrepresents someone's work?). You might decide some risks are low — but you decide that explicitly, not by forgetting to check.

"What if my company doesn't have a risk template?"

You just got one. The table above IS the template. Copy it into your spec, fill in each row for your specific feature, and you're ahead of 90% of PMs shipping AI today.

⚡

Match the Risk

25 XP
Technical, Data, Legal, Reputational
The model includes a customer's past medical purchases in a promotional email
The model drafts an email claiming a product benefit that isn't true
The email tone comes across as manipulative to elderly customers
The marketing email uses copyrighted taglines from a competitor


The PM risk checklist: your pre-flight inspection

Pilots don't decide whether to run a pre-flight checklist based on how they feel that morning. They run it every single time. Rain or shine. Short flight or long.

Your risk checklist works the same way. Before engineering starts on any AI feature, you walk through each risk category and write a requirement in the spec. Not a vague "we should think about bias" — a specific, measurable sentence.

Here's what Priya's spec was missing:


"Model must show no statistically significant scoring difference by university, gender, or graduation year."

That's it. One measurable sentence. That single sentence would have created an acceptance criterion. The data science team would have run the bias eval. Marcus wouldn't have been the last line of defense on a slow Friday.

One measurable sentence written before engineering starts is all that separates a compliant launch from a remediation crisis.
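How would the data science team actually check that sentence? One common approach, sketched here as a simple permutation test (real audits typically use established fairness tooling on top of this), is to ask how often randomly shuffled group labels would produce a score gap as large as the one observed:

```python
import numpy as np

def score_gap_p_value(scores_a, scores_b, n_perm=10_000, seed=0):
    """Permutation test: is the mean score gap between two groups
    larger than chance would produce? A small p-value means the gap is
    statistically significant — i.e. the spec requirement fails."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([scores_a, scores_b])
    observed = abs(np.mean(scores_a) - np.mean(scores_b))
    n_a = len(scores_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of candidates
        gap = abs(pooled[:n_a].mean() - pooled[n_a:].mean())
        if gap >= observed:
            count += 1
    return count / n_perm
```

Wire this into CI with a pass/fail threshold (e.g. p > 0.05 across every protected-class split) and the requirement becomes an acceptance criterion, not a suggestion.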

Your checklist for every AI feature spec

  • Technical risk: What's the hallucination/accuracy threshold? Write the eval metric and the pass/fail number.
  • Data risk: Does the feature touch PII? Write the data handling requirement and the filtering test.
  • Legal risk: Which regulations apply? Write the compliance review gate and who owns sign-off.
  • Reputational risk: Can the output treat groups differently? Write the bias test criteria with specific demographic variables.

A risk missing from the spec removes the acceptance criterion that blocks a bad launch. No requirement = no test = no catch = front-page news.
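If you want the checklist enforced rather than remembered, you can even make it machine-checkable. Here's a toy sketch (the field names are hypothetical, not from any real spec tool) that flags any risk category missing a requirement, an owner, or a test:

```python
RISK_CATEGORIES = ["technical", "data", "legal", "reputational"]

def launch_blockers(spec: dict) -> list[str]:
    """Return the risk categories that would block launch: any category
    missing a requirement, an owner, or an acceptance test."""
    blockers = []
    for category in RISK_CATEGORIES:
        entry = spec.get(category, {})
        if not all(entry.get(k) for k in ("requirement", "owner", "test")):
            blockers.append(category)
    return blockers
```

An empty return value means every tire has been checked; anything else is a named gap with a named fix.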

⚡

Write the Missing Requirement

25 XP
Each scenario below has a risk with NO spec requirement. Write one specific, measurable sentence that would go in the spec.

1. An AI customer support bot that answers questions about insurance policies. Technical risk: it might hallucinate policy details.
   Your requirement: ___
2. An AI content moderation tool that flags harmful posts. Reputational risk: it might flag posts from certain cultural communities at higher rates.
   Your requirement: ___

Hint: A good requirement has a number in it. "Must not hallucinate" is vague. "Must score ≥ 95% accuracy on the policy-QA eval suite with zero tolerance for invented coverage terms" is measurable.

The EU AI Act: why Priya's story gets worse

Remember Priya's hiring tool? It gets worse. The EU AI Act classifies hiring tools as high-risk AI systems. That means before you deploy one in the EU, you need a conformity assessment — which for most Annex III systems (including hiring and credit tools) is a documented self-assessment against the EU's requirements; mandatory third-party notified body review applies only to specific categories like biometric identification systems.

Not after launch. Before.

That translates to 6–12 months of pre-launch compliance work:

  • Documenting training data provenance (where did the data come from? who collected it? what biases might it contain?)
  • Defining accuracy metrics and error rates
  • Building human oversight mechanisms (a human can override or shut down the system)
  • Registering the system in the EU database

A PM who treats safety as a post-launch concern hands legal a six-figure remediation bill.

The EU AI Act risk tiers at a glance

| Risk tier | What it means | Examples | What you must do |
| --- | --- | --- | --- |
| Unacceptable | Banned. Cannot ship in the EU. | Social scoring by governments, real-time biometric surveillance | Don't build it. |
| High-risk | Heavy regulation. Conformity assessment required. | Hiring tools, credit scoring, medical devices, employee performance reviews | Documented self-assessment conformity procedure (for most Annex III systems) + EU database registration + human oversight + documentation. Third-party notified body review only required for specific categories (e.g. biometric ID). Start 6–12 months before launch. |
| Limited risk | Transparency obligations. | Chatbots, deepfakes | Disclose that AI is being used. |
| Minimal risk | No special requirements. | Spam filters, video game AI | Standard product quality practices. |

The critical question for you as a PM: does your feature make decisions about people? If it decides who gets hired, who gets a loan, who gets medical treatment, or how employees are evaluated — you're almost certainly in the high-risk tier.
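You can capture that triage question as a first-pass helper. This is an illustrative sketch only: the domain lists below are hypothetical shorthand, and Annex III of the Act (plus your legal team) is the authoritative source, not a twelve-line function:

```python
# Hypothetical shorthand for a PM's first-pass triage — NOT legal advice.
HIGH_RISK_DOMAINS = {"hiring", "credit_scoring", "medical", "employee_evaluation"}
PROHIBITED_DOMAINS = {"social_scoring", "realtime_biometric_surveillance"}

def screening_tier(domain: str, interacts_with_humans: bool) -> str:
    """Rough first guess at an EU AI Act risk tier for a feature idea."""
    if domain in PROHIBITED_DOMAINS:
        return "unacceptable"
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"  # decisions about people
    if interacts_with_humans:
        return "limited"    # e.g. chatbots: disclose AI use
    return "minimal"
```

Use a helper like this to decide how hard to look, not whether to look: anything that lands in "high-risk" goes straight to legal before sprint 1.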

💭You're Probably Wondering…

There Are No Dumb Questions

"We're a US company. Why should I care about the EU AI Act?"

If you have any EU users, the Act applies. It's based on where the AI system is used, not where the company is headquartered. Same logic as GDPR — and we all saw how that played out.

"What's a conformity assessment, exactly?"

Think of it like a building inspection. Before you can open a restaurant, an inspector checks that the building meets safety codes. A conformity assessment is the same idea: your system is checked against the EU's requirements — data quality, accuracy, human oversight, transparency. For most Annex III high-risk systems (including hiring and credit scoring tools), this is a documented self-assessment you conduct and record; you don't need an external auditor. Mandatory third-party notified body review is reserved for specific categories like biometric identification systems. But self-assessment doesn't mean optional — the documentation burden is real, and you must register the system in the EU database.

⚡

Classify the Risk Tier

25 XP
Unacceptable, High-risk, Limited risk, Minimal risk
An AI tool that scores job applicants and ranks them for recruiters
A chatbot on your website that answers product FAQs
A spam filter that sorts incoming emails
An AI system that evaluates employee performance and recommends promotions
A government system that assigns citizens a "trustworthiness score" based on social behavior


Back to Priya: what should have happened

Let's rewind the tape. Here's what Priya's spec should have looked like:

| Risk category | Requirement in spec | Owner | Acceptance test |
| --- | --- | --- | --- |
| Technical | "Model accuracy ≥ 90% on held-out candidate evaluations" | ML lead | Eval suite run before launch |
| Data | "Training data audited for historical hiring bias; no features correlated with protected classes at r > 0.1" | Data team | Audit report signed off by data lead |
| Legal | "EU AI Act high-risk conformity assessment completed; system registered in EU database" | Legal | Conformity certificate on file |
| Reputational | "Model must show no statistically significant scoring difference by university, gender, or graduation year" | ML lead + DEI team | Disparate impact test across all protected classes |

Every row has a requirement. Every requirement has an owner. Every owner has a test. That's the system.

Marcus wouldn't have needed to catch the bias on a slow Friday — it would have been caught by an automated test that ran before the feature ever touched a real candidate.

Try it

⚡

Challenge

50 XP
CreditFlow is a B2B fintech building an AI feature that pre-screens loan applications and outputs a risk score (0–100) with an explanation. The product manager is writing the safety requirements section of the feature spec. Answer these specific questions:

1. Under the EU AI Act, which risk tier does this feature fall under? (Prohibited / High-risk / Limited risk / Minimal risk)
2. Name the specific Annex III category that applies.
3. What does this risk tier require before CreditFlow can launch this feature in the EU? Choose one: A — documented self-assessment conformity procedure only; B — conformity assessment by a notified body; C — documented self-assessment plus registration in the EU AI Act database; D — conformity assessment by a notified body plus registration.
4. Write one specific, measurable safety requirement for the bias axis: "The model must show no statistically significant difference in risk scores for applicants with identical financial profiles who differ only in ___." Fill in the blank with three demographic variables that must be tested.

Hint: Start with question 1. The EU AI Act's Annex III lists specific high-risk use cases by category — does credit scoring appear explicitly? Once you've determined the tier, question 3 follows from what that tier legally requires before launch. For most Annex III systems (including credit scoring), the conformity procedure is a documented self-assessment — not a mandatory notified body review. Registration in the EU database is also required.

🚨 The PM is accountable for AI feature failures
When your AI feature produces a harmful output, the person responsible is not the model, not the training data, not the engineering team. It's the product manager who shipped without adequate safeguards. You are the last line of defense before an AI feature reaches users. Build your review process accordingly.

✗ Without safeguards

  • ✗ Launch without output review
  • ✗ Trust the model's training
  • ✗ User reports edge cases after launch
  • ✗ Fix in production with hotfixes

✓ With safeguards

  • ✓ Red-team the feature before launch
  • ✓ Define explicit failure modes and acceptable error rates
  • ✓ Build review UI for human oversight
  • ✓ Monitor output quality after launch

Back to Priya and Marcus

Marcus filed the report on a Friday. By Monday, Priya had pulled the model from production.

The CEO was not happy. The recruiter team was not happy. But Priya could show them the numbers: 28% scoring gap on equally qualified candidates. No ambiguity.

She rewrote the spec for v2. One new line: "The model must show no statistically significant difference in predicted scores for candidates who differ only by institution type, controlling for all skills and experience variables." One line. It would have changed everything.

Marcus still runs the scoring reports. Except now they're not a slow-Friday side task — they're a weekly gate requirement in the spec.

Key takeaways

  • You can prevent unowned risk by naming a responsible party for each risk category in every feature spec before engineering starts.
  • Every time you ship an AI feature touching hiring, credit, or medical decisions, EU AI Act high-risk rules apply — compliance work starts before the first line of code.
  • You can block a regulatory fine by writing one measurable bias requirement in the spec — bias testing is not optional for features that make decisions about people.

?

Knowledge Check

1.You're building an AI hiring screening tool. Which combination of requirements should you insist on from your data science team before launch?

2.Your AI feature makes loan recommendations. A user is denied. What must the product provide?

3.A VP wants to remove a safety guardrail because it's blocking 8% of requests. What is the right first step?

4.Under the EU AI Act, what risk tier does an AI-powered employee performance review tool fall into, and what does that require of your product team?

Previous: Building the Business Case for AI
Next: Working with AI Engineering Teams