Introduction & Zero-Tolerance Policy
LINQ is committed to providing a safe environment for all users. We have a zero-tolerance policy for any content or behavior that sexually exploits, abuses, or endangers children.
Child Sexual Abuse and Exploitation (CSAE) includes, but is not limited to: child sexual abuse material (CSAM), grooming behaviors, sextortion, child trafficking, and any content or conduct that exploits or harms minors.
We actively work to prevent, detect, and remove such content from our platform and cooperate fully with law enforcement agencies and organizations dedicated to child protection.
Age Requirements
LINQ is a professional networking platform designed for adults. Our age requirements are as follows:
- Minimum Age: Users must be at least 16 years old to create an account and use our Service
- Age Verification: We implement age verification measures during account registration
- Prohibition: Users under 16 are strictly prohibited from using our platform
- Account Termination: Accounts found to belong to users under 16 will be immediately terminated
If you believe a user is under the minimum age requirement, please report them immediately using the methods described in this policy.
Prohibited Content
The following content and behaviors are strictly prohibited on LINQ:
- Child Sexual Abuse Material (CSAM): Any visual depiction of sexually explicit conduct involving a minor
- Sexual Content Involving Minors: Any content that depicts, describes, or promotes sexual activity involving individuals under 18
- Grooming: Any attempt to build a relationship with a minor for the purpose of sexual exploitation
- Sextortion: Threatening to share intimate images of minors or coercing minors for sexual content
- Child Trafficking: Any content or activity related to the trafficking of minors for sexual purposes
- Sexualized Comments: Sexual or romantic comments directed at or about minors
- Predatory Behavior: Any behavior intended to exploit, harm, or inappropriately engage with minors
Content Moderation
We employ multiple layers of protection to detect and remove prohibited content:
- Automated Detection: AI-powered systems continuously monitor content for potential CSAE content
- Human Review: Trained moderators review flagged content and user reports
- Hash Matching: We use industry-standard hash-matching technology to detect known CSAM
- Proactive Monitoring: Regular audits and monitoring of platform activity for suspicious patterns
- User Reports: Easy-to-use reporting mechanisms for users to flag concerning content or behavior
All content moderation is conducted in compliance with applicable laws and platform policies.
Reporting Violations
If you encounter any content or behavior that you believe violates our child safety standards, please report it immediately:
In-App Reporting
Use the “Report” feature available on user profiles and content to flag potential violations. Reports are reviewed promptly by our safety team.
Email Reporting
For urgent concerns or if you're unable to use in-app reporting, contact us directly:
LINQ Safety Team
Email: info@joinlinqapp.com
External Reporting
You may also report suspected child exploitation to:
- NCMEC CyberTipline: www.missingkids.org/gethelpnow/cybertipline
- Local Law Enforcement: Contact your local police department
- FBI IC3: www.ic3.gov
Law Enforcement & NCMEC Compliance
LINQ is committed to cooperating with law enforcement and child protection organizations:
- NCMEC Reporting: We report all confirmed CSAM to the National Center for Missing & Exploited Children (NCMEC) CyberTipline as required by law
- Law Enforcement Cooperation: We respond promptly to valid legal requests from law enforcement agencies investigating child exploitation
- Evidence Preservation: We preserve relevant evidence when we become aware of potential child exploitation
- Emergency Disclosure: In emergency situations involving imminent danger to a child, we may disclose information to appropriate authorities without delay
Account Enforcement
Violations of our child safety standards result in immediate and permanent consequences:
- Immediate Account Termination: Accounts that violate our CSAE policies are immediately and permanently terminated
- Permanent Ban: Violators are permanently banned from creating new accounts on our platform
- Reporting to Authorities: Violations are reported to NCMEC and relevant law enforcement agencies
- No Appeals for CSAE: There is no appeal process for CSAE-related violations
- Content Removal: All associated content is immediately removed and preserved for law enforcement
We reserve the right to take action against accounts that exhibit suspicious behavior patterns, even before confirmed violations occur.
Contact Us
For questions about our child safety standards or to report concerns:
LINQ Safety Team
Email: info@joinlinqapp.com
We are committed to responding to all child safety reports within 24 hours. Reports involving imminent danger to a child are prioritized and escalated immediately.