Overview
Rules define content moderation criteria in PixelPatrol. Each rule evaluates specific aspects of submitted content, and rules work together within Rule Groups to provide comprehensive content filtering. Important: All rule management is done through the dashboard interface - no API access is available for creating, editing, or managing rules.
How Rules Work
Rule Group Logic
Within each rule group, ALL rules must pass for content to be approved. If any single rule fails, the content is flagged for moderation. This ensures comprehensive content filtering without compromise (a minimal evaluation sketch follows the example below).
Example: A “Profile Photos” rule group with 3 rules:
- ✅ No explicit content (passes)
- ✅ Must have clear face (passes)
- ❌ No children in photos (fails)
- Result: Content is rejected because one rule failed
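The pass/fail behavior within a group can be pictured as a simple AND over the individual rule results. The sketch below is illustrative only, using hypothetical names; PixelPatrol evaluates rules server-side and does not expose this logic as an API.

```typescript
// Illustrative only: models the "all rules must pass" behavior described above.
interface RuleResult {
  ruleName: string;
  passed: boolean;
}

function groupPasses(results: RuleResult[]): boolean {
  // A single failing rule flags the content for moderation.
  return results.every((r) => r.passed);
}

const profilePhotoResults: RuleResult[] = [
  { ruleName: "No explicit content", passed: true },
  { ruleName: "Must have clear face", passed: true },
  { ruleName: "No children in photos", passed: false },
];

console.log(groupPasses(profilePhotoResults)); // false -> content is flagged for moderation
```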
Rule Types
Requirements
Content must satisfy this condition:
- Must have clear face
- Must be high quality
- Must match profile
Prohibitions
Content must avoid this condition:
- No explicit content
- No violence/weapons
- No children
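Conceptually, the two evaluation types invert the pass condition around whether the AI detects the condition. A minimal sketch, again with hypothetical names that are not part of the product:

```typescript
type EvaluationType = "requirement" | "prohibition";

interface RuleCheck {
  evaluationType: EvaluationType;
  conditionDetected: boolean; // did the AI find the condition in the content?
}

function rulePasses(check: RuleCheck): boolean {
  // Requirements pass when the condition is present;
  // prohibitions pass when the condition is absent.
  return check.evaluationType === "requirement"
    ? check.conditionDetected
    : !check.conditionDetected;
}

// "Must have clear face" (requirement) with a face detected -> passes
console.log(rulePasses({ evaluationType: "requirement", conditionDetected: true })); // true
// "No explicit content" (prohibition) with explicit content detected -> fails
console.log(rulePasses({ evaluationType: "prohibition", conditionDetected: true })); // false
```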
Available Rule Templates
PixelPatrol provides 16 pre-built rule templates optimized for different moderation needs:
Content Safety
- Explicit Content - Blocks NSFW/adult content
- Violence/Weapons - Prevents violent imagery
- Drugs/Alcohol - Filters substance-related content
- Offensive Gestures/Symbols - Blocks inappropriate symbols
Image Quality
- No Clear/Focused Face - Ensures face visibility
- Unclear/Blurry Image - Maintains quality standards
- Fake/Generated Image - Detects AI-generated content
- Screenshot Detected - Prevents screenshot uploads
Content Compliance
- Contains Children - Protects child safety
- Multiple Focal Points - Ensures single-person focus
- Celebrity Detected - Prevents celebrity photos
- Excessive Text - Limits text overlays
Privacy & Identity
- Consent/Privacy Violation - Protects user privacy
- Offensive Text/Images - Blocks inappropriate overlays
- Gender Mismatch - Validates gender matches profile (requires advanced parameters)
Custom Rules
Create completely custom rules for specific business requirements not covered by templates.
Rule Configuration
Each rule can be configured with the following settings (sketched as data after this list):
- Name: Descriptive identifier for the rule
- Description: Explanation of what the rule does
- Evaluation Type: Requirement or Prohibition
- Confidence Threshold: How confident the AI must be (50%-100%)
- Active Status: Enable/disable without deleting
- Advanced Parameters: Additional metadata requirements
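These fields map naturally onto a small configuration record. The shape below is a hedged sketch of how such a rule definition might look if expressed as data; the field names are illustrative, and in PixelPatrol you set these values through the dashboard form, not through code.

```typescript
// Illustrative shape only -- rules are created and edited in the dashboard.
interface RuleConfig {
  name: string;                                  // descriptive identifier
  description: string;                           // what the rule does
  evaluationType: "requirement" | "prohibition"; // evaluation type
  confidenceThreshold: number;                   // 0.50 - 1.00
  active: boolean;                               // enable/disable without deleting
  advancedParameters?: Record<string, string>;   // optional metadata requirements
}

const noChildrenRule: RuleConfig = {
  name: "Contains Children",
  description: "Flag photos that appear to contain minors",
  evaluationType: "prohibition",
  confidenceThreshold: 0.85,
  active: true,
};
```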
Confidence Thresholds
Set how confident the AI must be in its evaluation (a worked example follows this list):
- High (85-95%): Fewer false positives, more false negatives
- Medium (70-85%): Balanced approach (recommended starting point)
- Low (50-70%): More false positives, fewer false negatives
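As a worked example of how the threshold interacts with the AI's confidence, the sketch below shows only the comparison, not PixelPatrol's internal scoring; the function name and values are illustrative assumptions.

```typescript
// A prohibition rule "triggers" (flags the content) when the AI's confidence
// that the prohibited condition is present meets or exceeds the threshold.
function ruleTriggers(aiConfidence: number, threshold: number): boolean {
  return aiConfidence >= threshold;
}

// "Explicit Content" rule with an 85% threshold:
console.log(ruleTriggers(0.92, 0.85)); // true  -> flagged
console.log(ruleTriggers(0.72, 0.85)); // false -> borderline content slips through (false negative)
// Lowering the threshold to 70% catches the second case, at the cost of
// flagging more harmless content (more false positives):
console.log(ruleTriggers(0.72, 0.70)); // true
```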
Advanced Parameters
Some rule templates support advanced parameters that use metadata from your media submissions for more precise moderation.
Gender Matching Example
The “Gender Mismatch” rule requires gender information from media metadata:
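One way to picture this is a submission payload that carries the profile's stated gender alongside the media. The field names below are assumptions for illustration, not PixelPatrol's documented schema; consult your integration's actual metadata format.

```typescript
// Hypothetical submission payload -- field names are illustrative only.
interface MediaSubmission {
  mediaUrl: string;
  metadata: {
    gender?: "male" | "female" | "non-binary"; // the profile's stated gender
  };
}

const submission: MediaSubmission = {
  mediaUrl: "https://example.com/uploads/profile-123.jpg",
  metadata: { gender: "female" },
};

// With this metadata present, the "Gender Mismatch" prohibition can compare the
// person detected in the photo against the stated profile gender and flag the
// content when they appear to differ.
console.log(submission.metadata.gender); // "female"
```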
Industry Use Cases
Social Media Platform
Profile Photos Rule Group
- No Clear/Focused Face (requirement)
- Explicit Content (prohibition)
- Contains Children (prohibition)
- Celebrity Detected (prohibition)
- Gender Mismatch (prohibition) - with gender parameter
User Content Rule Group
- Explicit Content (prohibition)
- Violence/Weapons (prohibition)
- Offensive Gestures/Symbols (prohibition)
- Drugs/Alcohol (prohibition)
E-commerce Platform
Product Photos Rule Group
- Screenshot Detected (prohibition)
- Unclear/Blurry Image (prohibition)
- Excessive Text (prohibition)
- Offensive Text/Images (prohibition)
Review Images Rule Group
- Fake/Generated Image (prohibition)
- Consent/Privacy Violation (prohibition)
- Explicit Content (prohibition)
Dating Application
Main Profile Photos
- No Clear/Focused Face (requirement)
- Multiple Focal Points (prohibition)
- Explicit Content (prohibition)
- Contains Children (prohibition)
- Gender Mismatch (prohibition) - with gender parameter
- Fake/Generated Image (prohibition)
Creating Rules via Dashboard
- Navigate to Rule Groups: Sites → [Your Site] → Rule Groups
- Select Rule Group: Choose the group to add rules to
- Click “Add Rule”: Opens the rule creation dialog
- Select Template: Choose from pre-built templates or “Custom”
- Configure Settings:
  - Set evaluation type (requirement/prohibition)
  - Adjust confidence threshold
  - Add any advanced parameters
- Test & Save: Use the test interface to validate before saving
Best Practices
Rule Organization
- Group by purpose: Keep related rules together
- Limit group size: 3-7 rules per group for optimal performance
- Name clearly: Use descriptive names that explain the rule’s purpose
Threshold Tuning
- Start at 85%: Begin with higher confidence and adjust down if needed
- Test with real content: Use actual examples from your platform
- Monitor false positives: Review flagged content regularly
- Adjust gradually: Make small threshold changes (5-10%)
Advanced Parameters
- Validate availability: Ensure your app provides required metadata
- Handle missing data: Consider fallback behavior (see the sketch after this list)
- Privacy first: Only collect necessary information
- Document requirements: Clearly communicate metadata needs
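One way to handle missing metadata is a small guard in the submitting application that builds the metadata explicitly and records when a value is absent. This is a sketch of the fallback idea only; the types and field names are assumptions, and PixelPatrol itself manages rules solely through the dashboard.

```typescript
// Illustrative client-side guard before submitting media for moderation.
interface ProfileMetadata {
  gender?: "male" | "female" | "non-binary";
}

function buildSubmissionMetadata(profile: ProfileMetadata): Record<string, string> {
  const metadata: Record<string, string> = {};
  if (profile.gender) {
    metadata.gender = profile.gender;
  } else {
    // Fallback: omit the field and log it, so a missing value is a known,
    // auditable state rather than a silent moderation gap.
    console.warn("Profile is missing gender; the Gender Mismatch rule may not apply.");
  }
  return metadata;
}

console.log(buildSubmissionMetadata({ gender: "male" })); // { gender: "male" }
console.log(buildSubmissionMetadata({}));                 // {} plus a warning
```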
Monitoring Performance
Track rule effectiveness through the dashboard:
- Trigger Frequency: Which rules activate most often
- Confidence Distribution: How confident the AI is in its decisions
- False Positive Rate: Rules that may need adjustment
- Processing Time: Rule evaluation performance
Pro tip: Review your moderation logs weekly to identify patterns and optimize rule configurations. Rules that trigger too frequently or too rarely may need threshold adjustments.
Limitations
- No composite rules with OR logic - all rules must pass
- No variables or dynamic values in rule configuration
- No API access - all management through dashboard only
- Rules evaluate independently - no dependencies between rules
Related Concepts
- Rule Groups - How rules are organized
- Sites - Where rule groups are assigned
- Moderation - Overall moderation process