Workflow overview
Reference · liveworkflow:model-fairness-audit
Model Fairness Audit overview
Audits deployed ML models for demographic bias and fairness violations — computing disparate impact ratios, equalized odds, and demographic parity metrics across protected attributes using Fairlearn or AIF360; comparing subgroup performance differentials; documenting mitigation strategies applied (reweighing, threshold adjustment, post-processing); and certifying compliance with organizational fairness policies before production promotion. Excludes model training and feature engineering.
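The audit metrics named above have simple closed forms. A minimal sketch on hypothetical toy data (the group labels and predictions are illustrative only; Fairlearn exposes the same quantities as `demographic_parity_ratio` and `equalized_odds_difference`, computed here by hand to show the arithmetic):

```python
def selection_rate(y_pred):
    """Fraction of positive predictions."""
    return sum(y_pred) / len(y_pred)

def group_slices(values, groups, key):
    """Values belonging to one protected-attribute group."""
    return [v for v, g in zip(values, groups) if g == key]

def tpr(y_true, y_pred):
    """True-positive rate: positives correctly predicted positive."""
    pos = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    return sum(p for _, p in pos) / len(pos)

# Hypothetical labels, predictions, and protected-attribute groups
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]
group  = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = {g: selection_rate(group_slices(y_pred, group, g))
         for g in set(group)}

# Disparate impact ratio: min over max selection rate (1.0 = parity;
# the common "80% rule" flags values below 0.8).
di_ratio = min(rates.values()) / max(rates.values())

# Demographic parity difference: largest gap in selection rates.
dp_diff = max(rates.values()) - min(rates.values())

# Equalized odds, TPR component: largest gap in true-positive rates.
tprs = {g: tpr(group_slices(y_true, group, g),
               group_slices(y_pred, group, g))
        for g in set(group)}
eo_tpr_gap = max(tprs.values()) - min(tprs.values())
```

On this toy data, group "a" is selected at rate 0.4 and group "b" at 0.6, giving a disparate impact ratio of about 0.67, which the 80% rule would flag.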
Attributes
displayName
Model Fairness Audit
workflowKind
governance
triggerType
scheduled
typicalCadence
per-model
complexity
cross-team
description
Audits deployed ML models for demographic bias and fairness violations —
computing disparate impact ratios, equalized odds, and demographic parity
metrics across protected attributes using Fairlearn or AIF360; comparing
subgroup performance differentials; documenting mitigation strategies
applied (reweighing, threshold adjustment, post-processing); and certifying
compliance with organizational fairness policies before production
promotion. Excludes model training and feature engineering.
Outgoing edges
applies_to_domain (2)
- domain:data-science · Domain · Data Science
- domain:ml-ops · Domain · MLOps
involves_role (3)
- role:data-scientist · Role · Data Scientist
- role:ml-engineer · Role · Machine Learning Engineer
- role:security-reviewer · Role · Security Reviewer
performed_by_org_unit (3)
- org-unit:ml-team · OrgUnit · ML Team
- org-unit:ai-enablement · OrgUnit · AI Enablement
- org-unit:security-team · OrgUnit · Security Team
requires_skill_area (2)
- skill-area:ml-fine-tuning · SkillArea · ML Fine-Tuning
- skill-area:eval-driven-development · SkillArea · Eval-Driven LLM Development
triggers_responsibility (2)
- responsibility:ai-safety-guardrails · Responsibility
- responsibility:data-quality-monitoring · Responsibility · Data quality monitoring
Incoming edges
None.
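One mitigation the workflow documents, threshold adjustment, can be sketched as picking a per-group decision threshold so that selection rates match a target. This is a toy illustration on hypothetical scores; Fairlearn's `ThresholdOptimizer` performs this kind of post-processing rigorously against a chosen fairness constraint, and the data and target below are assumptions:

```python
def per_group_threshold(scores, target_rate):
    """Highest threshold whose selection rate reaches target_rate."""
    for t in sorted(set(scores), reverse=True):
        rate = sum(s >= t for s in scores) / len(scores)
        if rate >= target_rate:
            return t
    return min(scores)

# Hypothetical model scores per protected-attribute group
scores_by_group = {
    "a": [0.9, 0.7, 0.6, 0.4, 0.2],
    "b": [0.8, 0.5, 0.3, 0.3, 0.1],
}

target = 0.4  # desired selection rate for every group

thresholds = {g: per_group_threshold(s, target)
              for g, s in scores_by_group.items()}

# Group-specific thresholds yield equal selection rates across groups.
decisions = {g: [int(s >= thresholds[g]) for s in scores]
             for g, scores in scores_by_group.items()}
```

A single global threshold would select the two groups at different rates; the per-group thresholds (0.7 for "a", 0.5 for "b" on this data) bring both to the 0.4 target, which is the essence of threshold-adjustment mitigation.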