Computer Science
Grade 11
20 min
AI Ethics
Tutorial Preview
1. Introduction & Learning Objectives
Learning Objectives
Analyze the societal impact of AI on employment, privacy, and social structures.
Evaluate real-world AI systems for potential biases and fairness issues.
Define and differentiate between key ethical frameworks (e.g., utilitarianism, deontology) as they apply to AI decision-making.
Propose ethical guidelines for the development of a hypothetical AI application.
Articulate the challenges of accountability and transparency in complex AI models (the 'black box' problem).
Assess the dual-use nature of AI technology in both beneficial and harmful applications.
What if an AI denied your loan application but couldn't explain why? 🤔 Let's explore the rules that should govern intelligent machines.
This lesson explores the profound impact of AI...
2. Key Concepts & Vocabulary
| Term | Definition | Example |
| --- | --- | --- |
| Algorithmic Bias | Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. | A hiring AI trained predominantly on resumes of male engineers consistently ranks female candidates lower, even with equivalent qualifications, because it learned to associate male-centric language with success. |
| Transparency (Explainability) | The degree to which a human can understand the cause of a decision made by an AI model. This is the opposite of a 'black box' model. | A transparent loan-denial AI would state, 'Loan denied due to a credit score of 550 and a high debt-to-income ratio,' whereas a non-transparent one would just say 'Denied'. |
| Accountability | The principle that there mu... | |
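The transparency example above can be sketched in code. This is a minimal illustration (not from the tutorial) contrasting a rule-based loan check that reports its reasons with a "black box" version of the same rule; the 600 credit-score and 40% debt-to-income thresholds are hypothetical.

```python
def transparent_loan_check(credit_score: int, debt_to_income: float) -> str:
    """Return a decision plus the reasons behind it."""
    reasons = []
    if credit_score < 600:
        reasons.append(f"credit score of {credit_score} is below 600")
    if debt_to_income > 0.40:
        reasons.append(f"debt-to-income ratio of {debt_to_income:.0%} exceeds 40%")
    if reasons:
        return "Denied: " + "; ".join(reasons)
    return "Approved"

def black_box_loan_check(credit_score: int, debt_to_income: float) -> str:
    """Same rule, but the applicant learns nothing about why."""
    denied = credit_score < 600 or debt_to_income > 0.40
    return "Denied" if denied else "Approved"

# The applicant from the table's example: score 550, high debt ratio.
print(transparent_loan_check(550, 0.55))
# -> Denied: credit score of 550 is below 600; debt-to-income ratio of 55% exceeds 40%
print(black_box_loan_check(550, 0.55))
# -> Denied
```

Both functions implement the identical rule; only the transparent one lets a human verify and contest the decision.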
3. Core Syntax & Patterns
Utilitarian Framework
The most ethical choice is the one that will produce the greatest good for the greatest number.
Used to evaluate an AI's decision by summing the total positive outcomes (utility) and subtracting the negative outcomes. The action with the highest net utility is considered the best. This is often applied in large-scale policy decisions, like traffic control or public health AI.
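The utilitarian calculus described above can be sketched as code: score each candidate action by its total benefits minus its harms and pick the highest net utility. The traffic-control actions and their utility numbers here are hypothetical.

```python
def net_utility(outcomes: list[float]) -> float:
    """Sum positive (benefit) and negative (harm) outcome scores."""
    return sum(outcomes)

# Hypothetical traffic-control options; each list mixes benefits (+) and harms (-).
actions = {
    "extend green lights on Main St": [8.0, 3.0, -4.0],  # faster commutes, but side-street congestion
    "prioritize buses": [6.0, 5.0, -2.0],                 # better transit, slight car delay
    "do nothing": [0.0],
}

best = max(actions, key=lambda name: net_utility(actions[name]))
print(best, net_utility(actions[best]))  # -> prioritize buses 9.0
```

Note that the framework's weakness is visible in the code: everything hinges on who assigns the numbers, and harms to a minority can be outvoted by benefits to the majority.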
Deontological Framework
An action is judged as right or wrong based on a set of rules or duties, regardless of the outcome.
Focuses on the inherent rightness of an action, not its consequences. In AI, this means programming the system to never violate certain core principles (e.g., 'never deceive a user,' 'never share private data without consent'), even if...
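By contrast, a deontological check can be sketched as a set of hard constraints: an action is rejected if it violates any core rule, no matter how much utility it promises. The rules below mirror the two examples in the text; the action dictionary and its fields are hypothetical.

```python
# Core duties the system may never violate (assumed encodings of the
# 'never deceive' and 'never share without consent' rules from the text).
CORE_RULES = {
    "no_deception": lambda action: not action.get("deceives_user", False),
    "no_unconsented_sharing": lambda action: not (
        action.get("shares_private_data", False)
        and not action.get("has_consent", False)
    ),
}

def permitted(action: dict) -> bool:
    """An action is permitted only if every rule holds; utility is ignored."""
    return all(rule(action) for rule in CORE_RULES.values())

# Sharing data without consent is forbidden even with a huge utility score.
risky = {"shares_private_data": True, "has_consent": False, "utility": 1000}
print(permitted(risky))  # -> False
```

The contrast with the utilitarian sketch is the point: here the `utility` field is never even read.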
4 more steps in this tutorial
Sample Practice Questions
Challenging
You are on a team developing a new AI application to summarize lengthy court case documents for lawyers. Based on the learning objectives, which of the following is the most critical ethical guideline to propose for its development?
A. The system must include a confidence score for each summary and highlight ambiguous legal terms to prevent over-reliance and ensure human oversight.
B. The system must be optimized to run on standard law office computers to ensure equitable access for smaller firms.
C. The system's user interface should be designed by lawyers to ensure it follows legal industry workflow standards.
D. The system must be trained on legal documents from at least 50 different countries to ensure a global perspective.
Challenging
A city must choose an AI to help allocate housing resources. Model A is 95% accurate overall but has a 20% error rate for a specific minority group. Model B is 90% accurate overall but has a 10% error rate for all groups. Based on the tutorial's discussion of fairness, what does this choice represent?
A. A clear choice for Model B, as lower overall accuracy is an acceptable price for fairness.
B. A complex trade-off between overall utility (Model A's accuracy) and equitable outcomes (Model B's fairness), with no single 'correct' answer.
C. A situation where a Utilitarian framework would choose Model A and a Deontological framework would choose Model B.
D. A demonstration that both models are unethical and a non-AI, human-only system should be used instead.
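The trade-off in this question can be made concrete with a small sketch. The minority-group error rates come from the question itself; the majority-group error rates are assumed figures chosen to be consistent with the stated overall accuracies.

```python
# Hypothetical per-group error rates for the two housing-allocation models.
models = {
    "Model A": {"overall_accuracy": 0.95,
                "group_error_rates": {"minority": 0.20, "majority": 0.03}},  # majority rate assumed
    "Model B": {"overall_accuracy": 0.90,
                "group_error_rates": {"minority": 0.10, "majority": 0.10}},  # equal by design
}

for name, stats in models.items():
    rates = stats["group_error_rates"]
    worst = max(rates.values())
    gap = worst - min(rates.values())
    print(f"{name}: accuracy={stats['overall_accuracy']:.0%}, "
          f"worst-group error={worst:.0%}, error gap={gap:.0%}")
```

Model A wins on overall accuracy but has a 17-point error gap between groups; Model B sacrifices accuracy to make the error gap zero. Which summary statistic should decide is exactly the open question.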
Challenging
The tutorial states, 'all data reflects the biases of the society and methods used to collect it.' Given this 'Data is Objective' fallacy, what is the most profound challenge for a developer trying to build a truly 'fair' AI system for university admissions using historical data?
A.There is not enough historical data available to train a sufficiently accurate model.
B.The computational cost of processing decades of admissions data is too high for most universities.
C.The historical data itself contains the results of past systemic biases, meaning a model trained on it will likely learn and perpetuate those same biases.
D.Privacy laws like GDPR make it illegal to use historical student data for training AI models.