LibraryProcess overview
Reference · livelib-process:ai-agents-conversational--content-moderation-safety
content-moderation-safety overview
Content Moderation and Safety Filters - Process for implementing content filtering for both inputs and outputs, including toxicity detection, PII redaction, hallucination detection, and abuse prevention.
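One of the filters named above, PII redaction, can be sketched as a simple pattern-based pass over input text. This is an illustrative sketch only; the pattern names, regexes, and `redactPII` function are assumptions for demonstration, not the library's actual implementation.

```javascript
// Hypothetical PII redaction pass: replace common identifier patterns
// with placeholder tokens before text reaches the model.
// Patterns are illustrative, not exhaustive or production-grade.
const PII_PATTERNS = [
  { name: 'email', regex: /[\w.+-]+@[\w-]+\.[\w.]+/g, token: '[EMAIL]' },
  { name: 'phone', regex: /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/g, token: '[PHONE]' },
  { name: 'ssn',   regex: /\b\d{3}-\d{2}-\d{4}\b/g, token: '[SSN]' },
];

function redactPII(text) {
  let redacted = text;
  const found = [];
  for (const { name, regex, token } of PII_PATTERNS) {
    const after = redacted.replace(regex, token);
    if (after !== redacted) {
      found.push(name);       // record which pattern classes matched
      redacted = after;
    }
  }
  return { redacted, found };
}
```

A real deployment would typically combine rules like these with an ML-based entity recognizer, since regexes alone miss names, addresses, and context-dependent identifiers.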
Attributes
displayName
content-moderation-safety
description
Content Moderation and Safety Filters - Process for implementing content filtering for both inputs and outputs,
including toxicity detection, PII redaction, hallucination detection, and abuse prevention.
libraryPath
library/specializations/ai-agents-conversational/content-moderation-safety.js
specialization
ai-agents-conversational
references
- OpenAI Moderation: https://platform.openai.com/docs/guides/moderation
- Perspective API: https://developers.perspectiveapi.com/
- Azure Content Safety: https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety
example
const result = await orchestrate('specializations/ai-agents-conversational/content-moderation-safety', {
systemName: 'chat-moderation',
contentTypes: ['text', 'images'],
moderationLevel: 'strict'
});
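The `moderationLevel` option in the example above suggests threshold-based filtering. The sketch below shows one plausible way such a level could map to a toxicity threshold; the `LEVELS` table, the `moderate` function, and the `score` parameter (standing in for a classifier's toxicity probability) are all assumptions for illustration, not the process's real configuration.

```javascript
// Hypothetical mapping from moderation level to a toxicity threshold.
// Lower threshold = stricter filtering (blocks at lower toxicity scores).
const LEVELS = { lenient: 0.9, standard: 0.7, strict: 0.5 };

function moderate(text, { moderationLevel = 'standard', score } = {}) {
  const threshold = LEVELS[moderationLevel] ?? LEVELS.standard;
  // `score` stands in for a toxicity classifier's output in [0, 1];
  // a real pipeline would call a model or API (e.g. one of the
  // services in the references) to obtain it.
  const toxicity = score ?? 0;
  return toxicity >= threshold
    ? { allowed: false, reason: 'toxicity', score: toxicity }
    : { allowed: true, score: toxicity };
}
```

Under this scheme the same message can pass at `standard` but be blocked at `strict`, which is the practical meaning of a configurable moderation level.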
usesAgents
- safety-auditor
- toxicity-developer
- pii-developer
- hallucination-developer
- abuse-prevention-developer
- alert-developer
- pipeline-developer
Outgoing edges
lib_applies_to_domain (1)
- domain:software-engineering · Domain · Software Engineering
lib_belongs_to_specialization (1)
- specialization:ai-agents-conversational · Specialization
lib_implements_workflow (1)
- workflow:agent-evaluation-cycle · Workflow · Agent Evaluation Cycle
uses_agent (1)
- lib-agent:ai-agents-conversational--safety-auditor · LibraryAgent · safety-auditor
Incoming edges
None.