displayName: Model Explainability Review
workflowKind: governance
triggerType: event-driven
typicalCadence: per-model
complexity: single-team
description:
Reviews the interpretability and explainability posture of ML models before
deployment — generating SHAP/LIME feature importance explanations,
validating that global and local explanations are consistent, verifying
explanation latency meets serving SLAs, documenting known blind spots and
out-of-distribution behavior, and ensuring explanations are surfaced
appropriately in end-user interfaces. Excludes model training and fairness
auditing.
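The description's core checks (local attributions, global/local consistency, and per-explanation latency against a serving SLA) can be sketched in miniature. This is an illustrative example only, not the workflow's actual tooling: it uses a linear model, for which exact SHAP values reduce to `coef_i * (x_i - E[x_i])`, in place of a real SHAP/LIME explainer, and the 5 ms SLA threshold is an assumed placeholder.

```python
import time
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=500)

model = LinearRegression().fit(X, y)

# Local attribution: for a linear model, the exact SHAP value of feature i
# is coef_i * (x_i - E[x_i]); a real pipeline would call a SHAP/LIME explainer.
baseline = X.mean(axis=0)

def local_explanation(x):
    return model.coef_ * (x - baseline)

# Global importance: mean absolute local attribution over the dataset.
global_importance = np.abs(local_explanation(X)).mean(axis=0)

# Global/local consistency check: the global importance ranking should
# agree with the ranking implied by the model's own coefficients.
assert list(np.argsort(-global_importance)) == list(np.argsort(-np.abs(model.coef_)))

# Latency check against an assumed 5 ms per-explanation serving SLA.
SLA_MS = 5.0
t0 = time.perf_counter()
for x in X[:100]:
    local_explanation(x)
elapsed_ms = (time.perf_counter() - t0) / 100 * 1000
assert elapsed_ms < SLA_MS, f"explanation latency {elapsed_ms:.3f} ms exceeds SLA"
```

In a production review, the consistency and latency assertions would typically be recorded in the model's review artifact rather than raised as hard failures, so reviewers can document blind spots alongside passing checks.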