
Toxicity Prevention

Automatically detect and act on toxic language, slurs, harassment, and hostile messages.

How It Works

The toxicity filter analyses every message sent in monitored channels against a dictionary of known toxic patterns and contextual language models. When a message exceeds the configured sensitivity threshold, Exorion applies your chosen action.
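Conceptually, the flow combines a hard pattern match with a contextual score. A minimal sketch of that two-stage check (the pattern list, function names, and scores are illustrative, not Exorion's internals):

```python
import re

# Illustrative pattern dictionary; Exorion's real dictionary and
# contextual models are internal and far more extensive.
TOXIC_PATTERNS = [
    re.compile(r"\byou('re| are) (an? )?idiot\b", re.IGNORECASE),
]

def contextual_score(message: str) -> float:
    """Placeholder for a contextual language-model score in [0, 1]."""
    return 0.0

def score_message(message: str) -> float:
    """Stage 1: known toxic patterns (hard match scores 1.0).
    Stage 2: fall back to the contextual model."""
    if any(p.search(message) for p in TOXIC_PATTERNS):
        return 1.0
    return contextual_score(message)

print(score_message("you are an idiot"))  # 1.0
print(score_message("have a nice day"))   # 0.0
```

The resulting score is then compared against the threshold implied by your sensitivity level, and the configured action fires if it is exceeded.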

Sensitivity Levels

| Level | Description | Typical Use |
| --- | --- | --- |
| Low | Only catches explicit slurs and severe toxicity. | Public communities that want minimal interference. |
| Medium | Catches moderate insults, targeted harassment, and hate speech. | General-purpose servers. Recommended default. |
| High | Catches subtle toxicity, passive-aggressive language, and borderline content. | Strict communities, brand servers. |
| Strict | Flags almost any potentially negative language. | Child-safe or heavily moderated environments. |
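Each level effectively lowers the score a message needs to reach before it is flagged. A sketch of that mapping (the threshold values here are invented for illustration; Exorion's real cutoffs are internal):

```python
def is_flagged(toxicity_score: float, sensitivity: str) -> bool:
    """Return True when a message's toxicity score meets the
    threshold for the configured sensitivity level."""
    thresholds = {
        "low": 0.90,     # only explicit slurs and severe toxicity
        "medium": 0.70,  # recommended default
        "high": 0.50,    # subtle and borderline content
        "strict": 0.30,  # almost any negative language
    }
    return toxicity_score >= thresholds[sensitivity]

# The same score of 0.75 is flagged at Medium but passes at Low.
print(is_flagged(0.75, "medium"))  # True
print(is_flagged(0.75, "low"))     # False
```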

Actions

When a message is flagged, Exorion can perform one or more of the following actions:

Delete Message

Silently removes the flagged message from the channel.

Warn User

Sends a DM or inline warning. Logged to the infraction system.

Timeout User

Applies a Discord timeout for a configured duration.

Ban User

Immediately bans the user. Reserved for severe or repeat offences.
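Because actions can be combined, a single flagged message might, for example, be deleted and also earn the author a warning. A sketch of that dispatch logic, with every action recorded for the infraction system (the class and action names are assumptions for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class InfractionLog:
    """Records each action taken, mirroring the infraction system."""
    entries: list = field(default_factory=list)

    def apply(self, actions: list, user: str) -> None:
        # One flagged message can trigger one or more configured actions.
        for action in actions:
            if action == "delete":
                self.entries.append(f"deleted message from {user}")
            elif action == "warn":
                self.entries.append(f"warned {user}")
            elif action == "timeout":
                self.entries.append(f"timed out {user}")
            elif action == "ban":
                self.entries.append(f"banned {user}")

log = InfractionLog()
log.apply(["delete", "warn"], "alice")
print(log.entries)  # ['deleted message from alice', 'warned alice']
```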

Channel & Role Exemptions

You can exempt specific channels (e.g. staff-chat) or roles (e.g. Moderator) from the toxicity filter. Navigate to Dashboard → Auto Moderation → Toxicity and scroll to the exemptions section.
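The exemption check runs before any scoring: a message is skipped entirely if it was sent in an exempt channel or its author holds an exempt role. A minimal sketch (the channel and role names come from the examples above; the function itself is illustrative):

```python
def is_exempt(channel: str, user_roles: set,
              exempt_channels: set, exempt_roles: set) -> bool:
    """Skip filtering for exempt channels, or for users
    holding at least one exempt role."""
    return channel in exempt_channels or bool(user_roles & exempt_roles)

exempt_channels = {"staff-chat"}
exempt_roles = {"Moderator"}

print(is_exempt("staff-chat", set(), exempt_channels, exempt_roles))       # True
print(is_exempt("general", {"Moderator"}, exempt_channels, exempt_roles))  # True
print(is_exempt("general", {"Member"}, exempt_channels, exempt_roles))     # False
```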

Setting the sensitivity to Strict may cause false positives. We recommend starting at Medium and adjusting based on your community.