Trust & Safety

Effective: March 2026

1. OUR COMMITMENT

At REPLR, Inc. ("REPLR," "we," "us," or "our"), the safety of our users and the integrity of our platform are foundational to everything we build. We are committed to creating an environment where people can create, interact with, and share AI companions without fear of exploitation, abuse, or harm.

Our Trust & Safety program is designed to proactively identify and mitigate risks, respond swiftly to reports of harmful content and behavior, and continuously improve our safety systems as the platform evolves. We invest in a combination of automated detection systems, human review processes, and transparent policies to uphold these standards.

This page describes our safety practices, reporting mechanisms, enforcement framework, and the measures we take to protect vulnerable populations. For detailed rules on what is and is not permitted on the platform, please refer to our Acceptable Use Policy.

2. CONTENT MODERATION

REPLR employs a multi-layered approach to content moderation that combines automated systems with human oversight to detect and address policy-violating content across the platform.

  • Automated detection systems. We deploy machine learning classifiers and rule-based filters that operate in real time to identify potentially harmful content. These systems analyze text, images, and AI character configurations against our content policies, flagging material that may violate our Acceptable Use Policy. Automated systems are continuously trained and updated to address emerging patterns of abuse.
  • Human review. Content that is flagged by automated systems or reported by users is escalated to our Trust & Safety team for human review. Trained reviewers evaluate flagged content against our policies, taking into account context, intent, and potential for harm. Human review is the final decision point for all enforcement actions beyond automated filtering; a simplified illustration of this escalation flow appears after this list.
  • Proactive marketplace monitoring. All AI characters submitted to the REPLR Marketplace undergo a review process before publication. Our review team evaluates character configurations, descriptions, sample interactions, and behavioral parameters to ensure compliance with our content guidelines. Characters that do not meet our standards are rejected with feedback to the Creator, and may be re-submitted after the issues are resolved.
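
For illustration only, the escalation flow described above can be sketched as follows. This is a minimal, hypothetical example rather than a description of our production systems; the names and thresholds used here (Decision, Flag, triage, and the 0.98 and 0.60 confidence scores) are assumptions made for clarity.

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"
    ESCALATE = "escalate"   # route to human review (the final decision point)
    BLOCK = "block"         # automated block only at very high confidence


@dataclass
class Flag:
    category: str   # e.g. "harassment", "child_safety" (hypothetical labels)
    score: float    # classifier confidence between 0.0 and 1.0


def triage(flags: list[Flag],
           block_threshold: float = 0.98,
           escalate_threshold: float = 0.60) -> Decision:
    """First-pass triage: automated systems act alone only at very high
    confidence; anything above the escalation threshold is queued for a
    human reviewer, who makes the final enforcement decision."""
    if any(f.score >= block_threshold for f in flags):
        return Decision.BLOCK
    if any(f.score >= escalate_threshold for f in flags):
        return Decision.ESCALATE
    return Decision.ALLOW
```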

3. AI SAFETY MEASURES

Because REPLR is an AI-powered platform, we implement safety measures that address the unique risks associated with AI-generated content and adversarial use of AI systems.

  • Prompt injection prevention. We employ multiple layers of defense against prompt injection attacks, including input sanitization, system prompt isolation, context boundary enforcement, and adversarial testing. These measures are designed to prevent users from manipulating AI characters into generating content that violates their configured boundaries or our platform policies.
  • Output filtering. AI character responses pass through output filters that detect and block content categories prohibited under our Acceptable Use Policy, including but not limited to explicit violence, sexually explicit content involving minors, instructions for illegal activities, and personally identifiable information. Filtering operates on both text and voice outputs.
  • Rate limiting. We enforce rate limits on message volume, character creation, and API requests to prevent automated abuse, spam, and denial-of-service conditions. Rate limits are calibrated based on usage patterns and may be adjusted dynamically in response to detected abuse.
  • Content boundaries. Each REPLR character operates within content boundaries defined by its Creator and further constrained by platform-level safety policies. Creators can configure the topics and interaction styles their characters support, but cannot override platform-level prohibitions. Our system enforces a hierarchy where platform safety rules always take precedence over creator-configured settings, as illustrated in the sketch after this list.
  • Adversarial use monitoring. We actively monitor for patterns of adversarial use, including systematic attempts to bypass safety filters, coordinated campaigns to exploit AI characters, and novel attack vectors. Our security team conducts regular red-team exercises to identify vulnerabilities and strengthen defenses before they can be exploited at scale.
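
As a purely illustrative sketch of the precedence described under "Content boundaries," the check below evaluates platform-level prohibitions before any creator-configured settings. The category labels and function signature are hypothetical and do not reflect our actual implementation.

```python
# Platform-level prohibitions (illustrative labels only); these can never be
# enabled by a Creator, regardless of how a character is configured.
PLATFORM_PROHIBITED = {"csam", "credible_threats", "illegal_instructions"}


def is_topic_allowed(topic: str, creator_allowed_topics: set[str]) -> bool:
    """Return True only if the topic passes both layers of the hierarchy:
    platform safety rules first, then creator-configured boundaries."""
    if topic in PLATFORM_PROHIBITED:
        return False  # platform rule always takes precedence
    return topic in creator_allowed_topics
```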

4. USER REPORTING

We rely on our community to help us identify content and behavior that violates our policies. We provide multiple channels for reporting and are committed to reviewing every report we receive.

  • In-app report button. Every character page, Marketplace listing, user profile, and chat interface includes a report button that allows you to flag content or behavior directly from within the application. In-app reports are routed immediately to our Trust & Safety queue with full context, including the content in question, the reporting user, and relevant metadata.
  • Email reporting. You can submit detailed reports via email to safety@replr.ai. Email reports are particularly useful for complex situations that require additional context, supporting evidence, or descriptions of patterns of behavior that span multiple interactions.
  • What happens when you report. When a report is submitted, it is assigned to a member of our Trust & Safety team. The reviewer evaluates the reported content or behavior against our policies, considers any available context, and determines the appropriate course of action. The reporter will be notified of the outcome of their report, subject to privacy and legal constraints.
  • Review timelines. We prioritize reports based on the severity and urgency of the situation. Reports involving imminent threats to physical safety, child exploitation, or other emergency situations are reviewed within 24 hours. All other reports are typically reviewed within 72 hours of submission; during periods of high report volume, standard-priority reviews may take longer, but we strive to meet these timelines consistently. A simplified illustration of this prioritization appears after this list.
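
The prioritization above can be read as a simple mapping from report category to a target review window. The sketch below is illustrative only; the category labels are hypothetical.

```python
from datetime import datetime, timedelta

# Illustrative category labels: emergency reports target a 24-hour review,
# all other reports target 72 hours.
EMERGENCY_CATEGORIES = {"imminent_physical_threat", "child_exploitation"}

REVIEW_WINDOWS = {
    "emergency": timedelta(hours=24),
    "standard": timedelta(hours=72),
}


def review_deadline(category: str, submitted_at: datetime) -> datetime:
    """Return the target time by which a report should be reviewed."""
    priority = "emergency" if category in EMERGENCY_CATEGORIES else "standard"
    return submitted_at + REVIEW_WINDOWS[priority]
```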

5. ENFORCEMENT ACTIONS

When our Trust & Safety team confirms a policy violation, we take enforcement action proportionate to the severity of the violation, the user's history, and the potential for ongoing harm. Available enforcement actions include:

  • Content removal. The violating content is removed from the platform and, where applicable, deleted from our systems. The content owner is notified of the removal and the specific policy it violated.
  • Account warnings. The user receives a formal warning that documents the violation and outlines the expected corrective behavior. Warnings are permanently recorded and considered in evaluating future violations.
  • Temporary suspension. The user's access to all or part of the Service is suspended for a defined period, typically ranging from 24 hours to 30 days depending on the severity of the violation. During suspension, the user cannot access their account, create or interact with characters, or use the API.
  • Permanent ban. The user's account is permanently terminated and all associated content is removed from the platform. Permanent bans are reserved for the most serious violations, including child safety offenses, credible threats of violence, and repeated policy violations following prior enforcement actions. Users who are permanently banned are prohibited from creating new accounts.
  • Law enforcement referral. In cases involving illegal content or activity — including but not limited to child sexual abuse material, credible threats of violence, terrorism-related content, or other criminal conduct — we will report the matter to the appropriate law enforcement authorities and cooperate fully with any resulting investigation. We may preserve and produce user data in response to valid legal process.

6. APPEAL PROCESS

We recognize that enforcement decisions are consequential, and we are committed to providing a fair process for users who believe an action was taken in error.

  • How to appeal. If you believe an enforcement action against your account or content was made in error, you may submit an appeal by emailing appeals@replr.ai within thirty (30) days of the action. Your appeal should include your username, the enforcement action you are contesting, any reference numbers from your notification, and a clear explanation of why you believe the action was incorrect.
  • Review timeline. Appeals are reviewed by a senior member of our Trust & Safety team who was not involved in the original enforcement decision. We will acknowledge receipt of your appeal within two (2) business days and issue a determination within five (5) business days from the date of acknowledgment. Complex appeals that require additional investigation may take longer, and we will notify you if an extension is needed.
  • One appeal per enforcement action. Each enforcement action may be appealed once. The determination rendered on appeal is final and will not be revisited. We will communicate the outcome and reasoning in writing to the email address associated with your account.

7. PROTECTING MINORS

The safety of minors is a paramount priority. We take affirmative steps to prevent the exploitation of children on our platform and comply with all applicable laws governing the protection of minors online.

  • Age verification. REPLR requires all users to be at least 13 years of age (or the minimum age required in their jurisdiction, if higher) to create an account. We implement age verification measures during the registration process and reserve the right to request additional verification at any time. Users who are found to have misrepresented their age will have their accounts immediately terminated.
  • Parental controls. We are developing parental control features that will allow parents and guardians to manage and monitor their child's activity on the platform, including the ability to restrict access to certain types of characters and content. These features will be announced as they become available.
  • COPPA compliance. We comply with the Children's Online Privacy Protection Act (COPPA) and do not knowingly collect personal information from children under the age of 13 without verifiable parental consent. If we learn that we have collected personal information from a child under 13 without proper parental consent, we will take immediate steps to delete that information and terminate the associated account. Parents or guardians who believe their child has provided personal information to REPLR without consent should contact us at safety@replr.ai.
  • Reporting mechanisms. Any user who encounters content or behavior on the platform that they believe exploits, endangers, or otherwise harms a minor should report it immediately using the in-app report button or by emailing safety@replr.ai with the subject line "Child Safety Report." Reports involving minors are treated with the highest priority and reviewed within 24 hours. Confirmed child safety violations are reported to NCMEC and applicable law enforcement.

8. TRANSPARENCY

We believe that transparency is essential to building trust with our users and the broader public. We are committed to providing visibility into how we enforce our policies and how our safety systems operate.

  • Transparency reports. We will publish periodic transparency reports that provide aggregate data on our content moderation and enforcement activities. These reports will be made available on our website and will cover defined reporting periods.
  • Types of content actioned. Transparency reports will include breakdowns of enforcement actions by content category, including but not limited to violence, hate speech, harassment, child safety, spam, intellectual property violations, and AI-specific policy violations. This data will help the public understand the types of challenges we face and how we address them.
  • Volume of reports. Transparency reports will disclose the total volume of user reports received, the number of reports reviewed, the percentage of reports resulting in enforcement action, average response times, and the volume of proactive detections by our automated systems. We will also report on the number of appeals received and the rate at which enforcement actions were reversed on appeal.

9. CONTACT

If you have questions about our Trust & Safety practices, want to report a safety concern, or need to reach our team for any reason related to platform safety, please contact us at:

Trust & Safety Team

REPLR, Inc.

Email: safety@replr.ai

For general legal inquiries unrelated to safety, please contact legal@replr.ai. For copyright and DMCA-related matters, please refer to our DMCA & Copyright Policy.
