ZipLove Content Moderation Policy

Last Updated: November 6, 2025

Purpose

This Content Moderation Policy explains how ZipLove reviews, moderates, and takes action on user-generated content and reported behavior to maintain a safe and respectful community.

Our Approach to Moderation

Core Principles

  1. Safety First: User safety is our top priority
  2. Fair and Consistent: All users are treated equally under our policies
  3. Transparency: Clear processes and communication
  4. Privacy: Protect user privacy during investigations
  5. Human Review: Critical decisions involve human moderators, not just automated systems

What We Moderate

  • User profiles (photos, bios, information)
  • Messages and communications between users
  • Event behavior and conduct
  • Reports of off-platform behavior that affects community safety
  • User-uploaded content (photos, videos, comments)

Reporting System

How Users Can Report

In-App Reporting:

  • Profile Report: Tap three dots on any profile → "Report User"
  • Message Report: Long-press message → "Report Message"
  • Event Report: Event page → "Report Issue"

Email Reporting:

  • General concerns: support@ziplove.org
  • Safety concerns: safety@ziplove.org
  • Legal matters: legal@ziplove.org

Emergency Situations:

  • Call 911 immediately if you're in danger
  • Then report to ZipLove so we can take platform action

What to Include in Reports

To help us review quickly and effectively, please include:

  • Username of the person you're reporting
  • Description of the violation
  • Screenshots or evidence (if applicable)
  • Date and time of incident
  • Location/event (if applicable)

Report Categories

Users can report:

  • Inappropriate Content: Nudity, sexual content, violence, hate speech
  • Fake Profile: Catfishing, fake photos, impersonation
  • Harassment: Bullying, stalking, threatening behavior
  • Scam/Fraud: Financial scams, spam, commercial solicitation
  • Safety Concern: Dangerous behavior, threats, illegal activity
  • Underage User: Suspected minor on the platform
  • Other: Anything else that violates our Community Guidelines

Review Process

Initial Review (Automated)

Automated Detection Systems:

  • Scan for known explicit images using photo hashing
  • Flag certain keywords in profiles and messages
  • Detect spam patterns and bot behavior
  • Identify multiple reports against the same user
  • Flag accounts with suspicious patterns (creation, activity, etc.)

Automated Actions:

  • Remove clearly prohibited content (e.g., known child sexual abuse material)
  • Temporarily hide reported content pending human review
  • Prevent suspected spam from being sent
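As a rough illustration of the two detection techniques above (known-image hash matching and keyword flagging), here is a minimal sketch. The hash set and keyword list are hypothetical examples, not ZipLove's actual data or implementation:

```python
import hashlib

# Hypothetical set of SHA-256 digests of known prohibited images
# (illustrative placeholder value only).
KNOWN_PROHIBITED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

# Example keywords that might flag scam or solicitation messages.
FLAGGED_KEYWORDS = {"venmo", "cashapp", "wire transfer"}

def image_is_known_prohibited(image_bytes: bytes) -> bool:
    """Compare an upload's SHA-256 digest against the known-bad hash set."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_PROHIBITED_HASHES

def message_flags(text: str) -> set[str]:
    """Return any flagged keywords found in a message (case-insensitive)."""
    lowered = text.lower()
    return {kw for kw in FLAGGED_KEYWORDS if kw in lowered}
```

In practice, platforms typically use perceptual hashes (which survive resizing and re-encoding) rather than exact cryptographic hashes, but the matching pattern is the same.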

Human Review

All user reports receive human review within:

  • Critical safety issues: 2 hours
  • High-priority violations: 24 hours
  • Standard reports: 48-72 hours
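The review timelines above can be expressed as simple SLA deadlines. The sketch below uses the policy's stated hours; the function and priority names are illustrative, not part of any real ZipLove system:

```python
from datetime import datetime, timedelta

# SLA hours from the policy: critical = 2h, high = 24h, standard = 72h (outer bound).
REVIEW_SLA_HOURS = {"critical": 2, "high": 24, "standard": 72}

def review_deadline(received_at: datetime, priority: str) -> datetime:
    """Latest time a report of the given priority should reach a human moderator."""
    return received_at + timedelta(hours=REVIEW_SLA_HOURS[priority])
```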

Our Moderation Team:

  • Trained in safety, privacy, and community standards
  • Based in the US with understanding of cultural context
  • Review content based on our Community Guidelines and Terms of Service
  • Follow consistent decision-making frameworks
  • Escalate complex cases to senior moderators or legal team

Investigation Process

Step 1: Initial Assessment

  • Review the report and reported content
  • Check user's history and previous violations
  • Gather context (conversation history, event details, etc.)

Step 2: Evidence Collection

  • Screenshots and content in question
  • Account activity and patterns
  • Previous reports or warnings
  • Correspondence between parties (if relevant)

Step 3: Decision

  • Determine if a violation occurred
  • Assess severity and intent
  • Consider user history and context
  • Decide on appropriate action

Step 4: Action and Communication

  • Take enforcement action (see below)
  • Notify both parties as appropriate
  • Document decision for records

Step 5: Follow-Up

  • Monitor for continued violations
  • Track repeat offenders
  • Update systems to prevent similar issues

Enforcement Actions

Warning System

First Offense (Minor Violation):

  • Warning notification sent
  • Explanation of violation
  • Education on Community Guidelines
  • No account restrictions

Second Offense (Minor Violation):

  • Final warning
  • Probationary status for 30 days
  • Removal of violating content
  • May restrict certain features

Account Restrictions

Temporary Suspension:

  • Duration: 7-30 days depending on severity
  • No access to app or events
  • Active subscriptions continue (no refund)
  • Must acknowledge violation to reactivate

Feature Restrictions:

  • Messaging disabled: Can't send messages
  • Event restrictions: Can't RSVP to events
  • Profile visibility: Hidden from browse/search
  • Upload restrictions: Can't change photos

Permanent Ban

Immediate Permanent Ban for:

  • Sexual harassment or assault
  • Threats of violence
  • Child safety violations
  • Illegal activity (drugs, prostitution, etc.)
  • Severe hate speech or discrimination
  • Doxxing or revenge porn
  • Third violation of major Community Guidelines
  • Circumventing previous bans

Permanent Ban Process:

  • Account immediately deactivated
  • All content removed
  • Banned from creating new accounts
  • No refunds for subscriptions or events
  • May be reported to law enforcement
  • Device and email may be banned

Special Situations

False Reports

We take false reports seriously:

  • Malicious false reports violate Community Guidelines
  • May result in action against the reporter
  • Repeated false reports will lead to account suspension

However:

  • Good faith reports that don't result in action are not penalized
  • "When in doubt, report it" - we'd rather review and find no violation than miss a real issue

Off-Platform Behavior

We may take action for behavior outside ZipLove if:

  • It occurs at ZipLove events
  • It directly threatens ZipLove community members
  • It involves content from ZipLove (leaked messages, photos, etc.)
  • It represents ongoing danger to our community
  • We receive credible evidence of serious violations

Examples:

  • Assault at a ZipLove event
  • Stalking or harassing users outside the app
  • Sharing intimate photos from ZipLove connections without consent
  • Using information from ZipLove for fraud or harassment

Law Enforcement Requests

We cooperate with law enforcement when:

  • Required by valid legal process (subpoena, warrant, court order)
  • Emergency situations involving imminent harm
  • Child safety violations (mandatory reporting)

We will:

  • Verify legitimacy of requests
  • Provide only information specifically requested and legally required
  • Notify users when legally permitted
  • Maintain records of law enforcement cooperation

Account Recovery After Ban

Permanent bans are typically final, but we may consider appeals if:

  • New evidence shows the ban was in error
  • Identity theft or account compromise was involved
  • User provides compelling rehabilitation evidence (rare cases)

Appeal Process:

  • Email appeals@ziplove.org within 30 days
  • Provide detailed explanation and any new evidence
  • Senior team reviews within 14 business days
  • Decision after appeal is final

Content Categories and Actions

Profile Content

Violation            | First Offense          | Second Offense        | Third Offense
Fake/Old Photos      | Warning + Remove Photo | 7-day suspension      | 30-day suspension
Misleading Info      | Warning                | Warning + Restriction | 7-day suspension
Inappropriate Photos | Remove + Warning       | 7-day suspension      | Permanent ban
Hate Symbols         | Remove + 7-day ban     | Permanent ban         | N/A

Messaging Violations

Violation      | Action
Spam Messages  | Auto-block + Warning
Harassment     | Warning or temp suspension
Sexual Content | Temp suspension or ban
Threats        | Immediate permanent ban
Scam Attempts  | Immediate permanent ban

Event Violations

Violation           | Action
No-show (3x)        | Account review + Warning
Disruptive Behavior | Event ban + Temp suspension
Venue Damage        | Permanent event ban + Liability
Safety Violation    | Permanent ban

User Rights During Moderation

You Have the Right To:

  1. Know Why: Receive explanation of any action taken
  2. Appeal: Contest decisions you believe are wrong
  3. Privacy: Have your reports kept confidential
  4. Safety: Be protected from retaliation for reporting
  5. Fair Process: Have evidence reviewed by trained moderators

We Will Not:

  • Take action without review (except clear automated cases)
  • Share your identity with reported users (unless legally required)
  • Penalize good faith reports
  • Allow retaliation against reporters
  • Discriminate in enforcement

Moderator Training and Standards

Our Moderation Team Receives:

  • Comprehensive training on Community Guidelines
  • Regular updates on new policies and threats
  • Mental health support (reviewing difficult content is challenging)
  • Clear decision-making frameworks and guidelines
  • Regular audits and quality checks

Moderation Standards:

  • Consistent application of rules across all users
  • Cultural competency and context awareness
  • Trauma-informed approach to sensitive cases
  • Documentation of all decisions
  • Escalation protocols for complex cases

Transparency and Accountability

Regular Transparency Reports

We publish quarterly reports including:

  • Number of reports received by category
  • Actions taken (warnings, suspensions, bans)
  • Appeal outcomes
  • Trends and emerging issues
  • Policy improvements based on data

Community Feedback

We welcome feedback on moderation:

  • Email: moderation-feedback@ziplove.org
  • Quarterly community surveys
  • User advisory council (coming soon)

Proactive Moderation

Beyond Reports

We proactively monitor for:

  • New accounts (profile review)
  • Suspicious patterns (spam, bots, scams)
  • High-risk content (using automated tools)
  • Event behavior (venue partner reports)
  • Emerging threats and trends

Automated Systems:

  • Photo scanning for inappropriate content
  • Keyword filtering for messages
  • Spam and bot detection
  • Pattern recognition for scams
  • Risk scoring for new accounts
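To illustrate the risk-scoring idea in the list above, here is a toy heuristic. The signals, weights, and threshold are entirely hypothetical and exist only to show the pattern of combining signals into a score that routes accounts to human review:

```python
# Toy account risk scoring (hypothetical signals and weights,
# not ZipLove's real model).
def risk_score(account_age_hours: float, messages_sent: int,
               reports_received: int, profile_photo_count: int) -> int:
    score = 0
    if account_age_hours < 24 and messages_sent > 50:
        score += 3  # burst messaging from a brand-new account
    score += 2 * reports_received  # each user report adds weight
    if profile_photo_count == 0:
        score += 1  # empty profiles correlate with spam accounts
    return score

def needs_human_review(score: int, threshold: int = 3) -> bool:
    """Route high-scoring accounts to a moderator instead of auto-acting."""
    return score >= threshold
```

Note that the score only escalates to a person; consistent with the policy, the automated system does not itself ban accounts outside clear-cut cases.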

Human Review:

  • New profile verification
  • Flagged content review
  • Complex case decisions
  • Policy exceptions and gray areas

Privacy and Data

What We Collect for Moderation:

  • Reported content and context
  • User account history
  • Communications related to violations
  • Evidence submitted by reporters

How We Use It:

  • Enforce Community Guidelines
  • Improve safety systems
  • Train moderation AI and staff
  • Comply with legal obligations
  • Publish aggregate transparency data

How We Protect It:

  • Access limited to moderation team
  • Encrypted storage
  • Automatic deletion after resolution (except records needed for legal/safety reasons)
  • Never shared publicly without anonymization
  • Subject to our Privacy Policy

Cooperation with Others

Industry Cooperation

We share hashed identifiers of banned users with:

  • National Center for Missing and Exploited Children (NCMEC) - child safety
  • Tech Coalition - combating online abuse
  • Industry partners - preventing ban evasion

We do NOT share:

  • Personal information
  • Private messages
  • Photos (except illegal content to NCMEC/law enforcement)
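The point of sharing hashed identifiers rather than raw ones is that partners can match a banned identifier without ever learning the underlying value. A minimal sketch of the idea (the function name and normalization rule are illustrative assumptions):

```python
import hashlib

def hashed_identifier(email: str) -> str:
    """One-way SHA-256 hash of a normalized email address.

    A partner holding the same hash can detect a match (e.g., for
    ban-evasion prevention) without learning the address itself.
    """
    normalized = email.strip().lower()  # so trivial variants hash identically
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```

Real cross-platform programs often add a shared salt or use dedicated matching protocols, since unsalted hashes of low-entropy identifiers can be reversed by brute force; this sketch shows only the basic one-way-match concept.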

Updates to This Policy

We may update this Content Moderation Policy to:

  • Reflect new types of violations
  • Improve processes based on experience
  • Comply with legal requirements
  • Incorporate user feedback

We'll notify you of material changes via:

  • In-app notification
  • Email
  • Website notice

Contact Us

Questions about moderation?

  • Email: moderation@ziplove.org
  • In-app: Settings → Help & Support → Moderation Questions

Report a safety concern:

  • Email: safety@ziplove.org
  • Phone: [will add when available]

Legal matters:

  • Email: legal@ziplove.org

Thank you for helping us maintain a safe, respectful community. Your reports and vigilance make ZipLove better for everyone.

This Content Moderation Policy is part of and incorporated into our Terms of Service and Community Guidelines. By using ZipLove, you agree to these moderation practices.