This website is owned and operated by Bright Mobile B.V., hereinafter referred to as “we”.
1. Purpose and Scope
We are committed to providing a respectful, inclusive, and secure environment for users worldwide. This policy outlines our global approach to moderating user content, communications, and behavior in compliance with applicable regulations, including the UK Online Safety Act 2023, the EU Digital Services Act, and Section 230 of the U.S. Communications Decency Act.
This policy applies to all areas of the platform, including profiles, chat messages, multimedia content, user interactions, and reports submitted through our reporting channels.
2. Permissible Use and Community Standards
Users must adhere to the following standards:
- No content involving hate speech, harassment, discrimination, or threats
- No content involving grooming, trafficking, or sexual exploitation
- No impersonation or fraudulent profiles
- No uploading or distribution of explicit or violent material
- No spam, phishing, or malicious links
- No solicitation, promotion, or sale of illegal services or substances
- No underage use or facilitation of unlawful contact
3. Moderation Systems
We use a hybrid system that combines automated and manual methods. Support interactions, including those conducted via call centers or automated systems, are also monitored for abusive or exploitative behavior. Our moderation layers are:
- Pre-launch Manual Review: All user profiles are reviewed by trained moderators before they go live
- Real-Time AI Detection: Automated filters monitor user behavior and flag prohibited content
- Chat Moderation: Conversations are monitored using Google Firebase and proprietary safeguards
- User Reports: Easy-to-use reporting tools are embedded throughout the platform
- Moderator Triage: Flagged items are reviewed contextually by trained personnel
- Priority Escalation: Suspected CSAM, grooming, or trafficking content is reviewed immediately and escalated to law enforcement if applicable
4. Illegal Content Removal Timelines
- Content identified as clearly illegal (e.g., CSAM, incitement to violence, trafficking) will be removed within 24–48 hours of detection or report
- Borderline content is placed under temporary suspension and reviewed within 72 hours
5. Appeals and User Rights
Users may appeal moderation actions:
- Submit a written explanation via our appeals form
- Appeals are reviewed within 5 business days by a moderator who was not involved in the original decision
- Outcomes and justifications are recorded
6. Recordkeeping and Transparency
- All moderation actions and appeals are logged and stored securely
- We conduct quarterly internal audits of moderation accuracy
- Where required by law, we will issue an annual transparency report summarizing:
  - Number of reports received
  - Time taken to act on illegal content
  - Volume and outcome of appeals
7. Special Protections Against Grooming and Exploitation
We recognize the unique risks of grooming and exploitation on dating platforms. We:
- Prohibit all users under 18 years of age
- Use AI and keyword scanning to detect early grooming patterns
- Maintain a dedicated escalation protocol for such cases
- Cooperate with law enforcement as needed
8. Policy Governance
This policy extends to all communication channels, including customer service responses generated by automated, non-human systems. It is reviewed annually by our legal and compliance team, and updates are made to reflect:
- Regulatory developments
- Evolving online abuse threats
- Operational improvements