## What This Template Does
Shipping AI features without an ethics review is like shipping software without a security review: it works fine until it does not, and by then the damage is done. AI products can discriminate against protected groups, violate user privacy in non-obvious ways, erode trust through opaque decision-making, and cause downstream harms that product teams never anticipated. These are not hypothetical risks. They happen regularly to companies of all sizes.
This template provides a structured ethics review process designed for product teams, not academic researchers. It is practical, actionable, and focused on the decisions product managers actually need to make. Each section includes specific questions to answer, risks to assess, and mitigations to implement. Use it as a pre-launch gate for any AI feature, and revisit it quarterly for AI features already in production.
## Direct Answer
An AI Ethics Review is a structured assessment of an AI product or feature across fairness, transparency, privacy, accountability, and societal impact dimensions. It identifies risks, documents mitigations, and creates an accountability record. This template provides the complete checklist for conducting a responsible AI ethics review.
## Template Structure
### 1. Product and AI Overview
**Purpose**: Document what the AI does and who it affects so reviewers have full context.
## Product Overview
**Product/Feature Name**: [Name]
**Review Date**: [Date]
**Review Lead**: [Name and role]
**Product Owner**: [Name and role]
### What the AI Does
[2-3 sentences describing the AI capability in plain language]
### Who Is Affected
| Stakeholder Group | How They Are Affected | Can They Opt Out? |
|-------------------|----------------------|-------------------|
| [Primary users] | [How AI impacts them] | [Yes / No / Partially] |
| [Secondary users] | [How AI impacts them] | [Yes / No / Partially] |
| [Non-users] | [How AI impacts them] | [Yes / No / Partially] |
### Decision Impact Level
How significant are the decisions this AI influences?
- [ ] **Low impact**: Convenience features (autocomplete, recommendations). Errors are annoying but not harmful.
- [ ] **Medium impact**: Workflow decisions (prioritization, routing, summarization). Errors affect productivity and judgment.
- [ ] **High impact**: Consequential decisions (hiring, credit, healthcare, safety). Errors can cause real harm to real people.
**Selected impact level**: [Low / Medium / High]
**Justification**: [Why this impact level is appropriate]
### 2. Fairness and Bias Assessment
**Purpose**: Evaluate whether the AI treats different groups of people equitably and identify potential sources of bias.
## Fairness Assessment
### Data Bias Sources
For each data source the AI uses, assess potential bias (a resampling sketch follows the table):
| Data Source | Potential Bias | Severity | Mitigation |
|------------|---------------|----------|------------|
| [Training data] | [e.g., Underrepresentation of certain demographics] | [Low/Med/High] | [e.g., Data augmentation, resampling] |
| [User input data] | [e.g., Language/cultural assumptions] | [Low/Med/High] | [e.g., Multi-language support, cultural review] |
| [Context data] | [e.g., Historical patterns reflecting past discrimination] | [Low/Med/High] | [e.g., Debiasing, fairness constraints] |
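If the table surfaces underrepresentation in training data, one common mitigation is resampling. Below is a minimal pandas sketch, assuming a hypothetical `training_df` with a demographic `group` column; all names and data are illustrative, not part of the template.

```python
import pandas as pd

# Hypothetical training set with a demographic `group` column.
training_df = pd.DataFrame({
    "feature": [0.2, 0.5, 0.1, 0.9, 0.4, 0.7],
    "group":   ["a", "a", "a", "a", "b", "b"],
})

# Measure representation per group (here: a = 0.67, b = 0.33).
counts = training_df["group"].value_counts()
print(counts / len(training_df))

# Blunt mitigation: upsample every group to the size of the largest one.
target = counts.max()
balanced = pd.concat(
    g.sample(target, replace=True, random_state=0)
    for _, g in training_df.groupby("group")
)
```

Upsampling with replacement is the bluntest option; reweighting during training or collecting more data from the underrepresented group usually generalizes better.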
### Demographic Performance Analysis
Has the AI been tested across demographic groups? (A sketch for computing per-group variance follows this checklist.)
- [ ] **Gender**: Performance tested across gender groups. Maximum variance: [X%]
- [ ] **Race/Ethnicity**: Performance tested across racial/ethnic groups. Maximum variance: [X%]
- [ ] **Age**: Performance tested across age groups. Maximum variance: [X%]
- [ ] **Language/Dialect**: Performance tested across language variants. Maximum variance: [X%]
- [ ] **Disability**: Accessibility tested for users with disabilities
- [ ] **Geographic**: Performance tested across geographic regions
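To fill in the variance figures above, you need per-group performance numbers. A minimal sketch, assuming hypothetical `y_true`, `y_pred`, and `groups` arrays in place of your real evaluation data:

```python
import numpy as np

# Hypothetical predictions, labels, and per-record demographic group.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
groups = np.array(["a", "a", "b", "b", "a", "b", "a", "b"])

# Accuracy per group, then the maximum variance the checklist asks for.
accuracy = {
    g: (y_pred[groups == g] == y_true[groups == g]).mean()
    for g in np.unique(groups)
}
max_variance = max(accuracy.values()) - min(accuracy.values())
print(accuracy, f"max variance: {max_variance:.1%}")
```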
### Proxy Discrimination
Even if the AI does not use protected attributes directly, does it use proxies that correlate with them? (See the correlation sketch after this list.)
- [ ] ZIP code or location (proxy for race and income)
- [ ] Name or language patterns (proxy for ethnicity and national origin)
- [ ] Writing style or vocabulary (proxy for education and socioeconomic status)
- [ ] Device type or browser (proxy for income)
- [ ] Time of activity (proxy for employment type)
**Proxy risks identified**: [List any proxy discrimination risks]
**Mitigations**: [How proxy risks are addressed]
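One lightweight screen for proxy risk is testing whether a candidate feature is statistically dependent on a protected attribute. The sketch below runs a chi-square test on a hypothetical ZIP-prefix column; the sample is far too small for the test to mean anything, so treat it as a shape to fill with real data.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical records: `zip3` is the candidate proxy, `protected`
# the attribute it may leak. Both columns are illustrative.
df = pd.DataFrame({
    "zip3":      ["941", "941", "100", "100", "606", "606"],
    "protected": ["x",   "x",   "y",   "y",   "x",   "y"],
})

# If proxy and protected attribute were independent, the p-value
# would be large; a small p-value flags a proxy risk to investigate.
table = pd.crosstab(df["zip3"], df["protected"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
```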
### Fairness Criteria Selected
Which fairness definition applies to your use case? (A sketch comparing the first three metrics follows this list.)
- [ ] **Demographic parity**: Equal positive outcome rates across groups
- [ ] **Equal opportunity**: Equal true positive rates across groups
- [ ] **Predictive parity**: Equal precision across groups
- [ ] **Individual fairness**: Similar individuals receive similar outcomes
- [ ] **Not applicable**: [Explain why fairness metrics do not apply]
**Selected criterion**: [Which fairness definition]
**Justification**: [Why this criterion is appropriate for your use case]
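The first three criteria differ only in which per-group rate they compare, which is easiest to see side by side. A sketch using the same kind of hypothetical arrays as above:

```python
import numpy as np

# Hypothetical binary decisions and outcomes; names are illustrative.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
groups = np.array(["a", "a", "b", "b", "a", "b", "a", "b"])

def rates(g):
    mask = groups == g
    t, p = y_true[mask], y_pred[mask]
    return {
        "positive_rate": p.mean(),  # compared by demographic parity
        "tpr": p[t == 1].mean() if (t == 1).any() else float("nan"),  # equal opportunity
        "precision": t[p == 1].mean() if (p == 1).any() else float("nan"),  # predictive parity
    }

for g in np.unique(groups):
    print(g, rates(g))
```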
### 3. Transparency and Explainability
**Purpose**: Assess whether users understand they are interacting with AI, can comprehend how decisions are made, and have recourse when they disagree.
## Transparency Assessment
### AI Disclosure
- [ ] Users are informed they are interacting with AI before their first interaction
- [ ] AI-generated content is clearly labeled or visually distinguished
- [ ] The product does not misrepresent AI capabilities or present AI as human
- [ ] Marketing materials accurately describe what the AI can and cannot do
- [ ] Limitations and error rates are communicated to users
### Explainability
For each AI decision or output, can users understand why the AI produced it? (A feature-importance sketch follows the table.)
| Decision/Output | Explainability Level | Method |
|----------------|---------------------|--------|
| [Decision 1] | [None / Low / Medium / High] | [How users can understand the reasoning] |
| [Decision 2] | [None / Low / Medium / High] | [How users can understand the reasoning] |
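For models you control, a model-agnostic technique such as permutation importance can back a "Medium" rating in the table above: it reports how much shuffling each input degrades performance. The sketch below uses scikit-learn on a toy model; it is one approach among many, not the method this template prescribes.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy model standing in for the product's AI.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global explanation: performance drop when each feature is shuffled.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```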
### User Control and Recourse
- [ ] Users can appeal or contest AI decisions
- [ ] There is a human review process for contested decisions
- [ ] Users can provide feedback that improves AI behavior
- [ ] Users can disable AI features and use the product without them
- [ ] Users can request a human-only experience
### Documentation
- [ ] Technical documentation describes how the AI works at an appropriate level
- [ ] User-facing documentation explains AI limitations and best practices
- [ ] Internal documentation captures design decisions and trade-offs made
### 4. Privacy and Data Protection
**Purpose**: Assess data handling practices specific to the AI component beyond standard privacy compliance.
## Privacy Assessment
### Data Minimization
- [ ] The AI only accesses data it needs for its specific function
- [ ] Data sent to external AI providers is minimized to what is necessary
- [ ] PII is stripped or anonymized before AI processing where possible (see the redaction sketch after this list)
- [ ] Data retention for AI purposes is limited and documented
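Stripping PII before calls to an external provider can start as simple regex redaction, though production systems should use a vetted PII-detection library (regexes miss names, addresses, and free-form identifiers). A minimal sketch with illustrative patterns:

```python
import re

# Illustrative patterns only; note that the name "Jane" below survives,
# which is why real deployments need NER-based PII detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach Jane at jane@example.com or 555-867-5309."))
# -> Reach Jane at [EMAIL] or [PHONE].
```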
### Consent and Awareness
- [ ] Users understand what data the AI accesses
- [ ] Users have consented to AI processing of their data
- [ ] Users can see what data the AI used for a specific output
- [ ] Users can request deletion of data used by AI systems
### Third-Party AI Providers
| Provider | Data Shared | Training Opt-Out | DPA in Place | Data Residency |
|----------|------------|-----------------|-------------|---------------|
| [Provider 1] | [What data] | [Yes/No] | [Yes/No] | [Region] |
| [Provider 2] | [What data] | [Yes/No] | [Yes/No] | [Region] |
### Surveillance and Monitoring Risks
- [ ] The AI does not enable surveillance of individuals
- [ ] The AI does not create behavioral profiles without explicit consent
- [ ] The AI does not infer sensitive attributes (health, politics, sexuality) from non-sensitive data
- [ ] User interactions with AI are not used to evaluate or score users without their knowledge
### 5. Accountability and Governance
**Purpose**: Establish who is responsible for AI behavior and what processes ensure ongoing oversight.
## Accountability
### Responsibility Assignment
| Responsibility | Owner | Backup |
|---------------|-------|--------|
| Overall AI ethics compliance | [Name, Role] | [Name, Role] |
| Bias monitoring and mitigation | [Name, Role] | [Name, Role] |
| Safety incident response | [Name, Role] | [Name, Role] |
| User complaints about AI | [Name, Role] | [Name, Role] |
| Regulatory compliance | [Name, Role] | [Name, Role] |
### Decision Documentation
- [ ] Key design decisions and their ethical trade-offs are documented
- [ ] Rejected alternatives and reasons for rejection are recorded
- [ ] Risk acceptance decisions include who accepted the risk and when
### Ongoing Governance
- [ ] Ethics review will be repeated [quarterly / semi-annually / annually]
- [ ] Trigger events for ad-hoc review are defined (new regulations, user-harm incidents, model changes)
- [ ] A process exists for incorporating user feedback into ethics assessments
- [ ] An escalation path exists for ethics concerns raised by team members
### 6. Societal and Environmental Impact
**Purpose**: Consider broader impacts beyond your users, including effects on society, the environment, and market dynamics.
## Broader Impact Assessment
### Societal Impact
| Impact Area | Potential Effect | Severity | Mitigation |
|------------|-----------------|----------|------------|
| Employment | [Could this AI displace workers?] | [Low/Med/High] | [How you are addressing it] |
| Information quality | [Could this AI spread misinformation?] | [Low/Med/High] | [How you are addressing it] |
| Power dynamics | [Could this AI concentrate power unfairly?] | [Low/Med/High] | [How you are addressing it] |
| Accessibility | [Could this AI exclude certain populations?] | [Low/Med/High] | [How you are addressing it] |
| Children and minors | [Could this AI affect children?] | [Low/Med/High] | [How you are addressing it] |
### Dual Use and Misuse
- [ ] Potential misuse scenarios have been identified and documented
- [ ] Technical safeguards prevent the most harmful misuse scenarios
- [ ] Terms of service prohibit identified misuse patterns
- [ ] Monitoring is in place to detect misuse (a minimal sketch follows this list)
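Misuse monitoring does not have to start sophisticated. Below is a toy sketch that counts blocked-pattern hits per user and escalates repeat offenders; the terms, threshold, and escalation step are all assumptions to adapt to your product.

```python
from collections import Counter

# Illustrative blocked patterns and threshold, not a recommended list.
BLOCKED_TERMS = ("mass email", "scrape profiles")
FLAG_THRESHOLD = 3
violations = Counter()

def check_request(user_id: str, prompt: str) -> bool:
    """Return True if the request may proceed."""
    if any(term in prompt.lower() for term in BLOCKED_TERMS):
        violations[user_id] += 1
        if violations[user_id] >= FLAG_THRESHOLD:
            # Escalate to the safety incident owner from section 5.
            print(f"flag user {user_id} for human review")
        return False
    return True
```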
### Environmental Impact
- [ ] Energy consumption of AI inference is estimated: [kWh per month] (see the estimation sketch after this list)
- [ ] Carbon footprint of AI operations is estimated: [kg CO2 per month]
- [ ] Energy-efficient model choices have been considered
- [ ] Trade-off between model capability and environmental cost is documented
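The energy and carbon lines above are usually back-of-envelope arithmetic. A sketch in which every constant is an assumption to replace with your own measurements:

```python
# All constants below are placeholders, not published figures.
requests_per_month = 2_000_000
energy_per_request_wh = 0.3        # assumed Wh per inference call
grid_intensity_g_per_kwh = 400     # assumed grams CO2 per kWh

kwh_per_month = requests_per_month * energy_per_request_wh / 1000
co2_kg_per_month = kwh_per_month * grid_intensity_g_per_kwh / 1000

print(f"{kwh_per_month:,.0f} kWh/month, {co2_kg_per_month:,.0f} kg CO2/month")
# -> 600 kWh/month, 240 kg CO2/month
```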
### 7. Review Summary and Decision
**Purpose**: Summarize findings, document the decision, and define follow-up actions.
## Review Summary
### Risk Assessment Summary
| Area | Risk Level | Key Findings | Status |
|------|-----------|-------------|--------|
| Fairness and Bias | [Low/Med/High] | [Summary] | [Acceptable / Needs work] |
| Transparency | [Low/Med/High] | [Summary] | [Acceptable / Needs work] |
| Privacy | [Low/Med/High] | [Summary] | [Acceptable / Needs work] |
| Accountability | [Low/Med/High] | [Summary] | [Acceptable / Needs work] |
| Societal Impact | [Low/Med/High] | [Summary] | [Acceptable / Needs work] |
### Decision
- [ ] **Approved**: All areas are acceptable. Proceed with launch.
- [ ] **Conditionally approved**: Proceed with launch after completing the following mitigations: [List]
- [ ] **Not approved**: Address the following issues before re-review: [List]
### Follow-Up Actions
| Action | Owner | Deadline | Status |
|--------|-------|----------|--------|
| [Action 1] | [Name] | [Date] | [Pending] |
| [Action 2] | [Name] | [Date] | [Pending] |
| [Action 3] | [Name] | [Date] | [Pending] |
### Sign-Off
| Reviewer | Role | Decision | Date |
|----------|------|----------|------|
| [Name] | [Product] | [Approve / Conditional / Reject] | [Date] |
| [Name] | [Engineering] | [Approve / Conditional / Reject] | [Date] |
| [Name] | [Legal] | [Approve / Conditional / Reject] | [Date] |
| [Name] | [Leadership] | [Approve / Conditional / Reject] | [Date] |
**Next Review Date**: [Date]
## How to Use This Template
## Tips for Best Results
## Key Takeaways
## About This Template
**Created by**: Tim Adair
**Last Updated**: 2/9/2026
**Version**: 1.0.0
**License**: Free for personal and commercial use