top of page
  • Black Instagram Icon
  • Black Facebook Icon
  • TikTok
  • LinkedIn

Off Limits
Topics

At Clevr, we believe that a child’s long-term wellbeing is the foundation of meaningful learning.

That’s why our platform carefully manages off-limit topics to ensure children engage only with safe, age-appropriate, and non-harmful content. Combining expert-designed default filters with advanced AI, we block sensitive material based on each child’s age and learning stage.

Most importantly, we put parents in control—allowing them to customize and manage off-limit topics for every child individually. From setting preferences during onboarding to updating them anytime, parents shape a personalized, secure learning environment that aligns with their family’s values.

 

Protecting your child’s growth and curiosity is at the heart of everything we do at Clevr.

How do I affect what is Off-Limits?

You have full control over what topics are off-limits for each child in your family. During the onboarding process, you’ll be guided to set these preferences individually for every child so that their learning experience matches your family’s values.
 

For example, you might choose to block topics like sensitive health issues  or complex social subjects because you prefer to discuss these personally with your child rather than have our AI-based platform explain them.
 

Once you set these preferences, Clevr automatically filters out content related to those areas. You can always update these settings later in your parental controls per child as your child grows or your preferences change.

This way, every child’s learning journey is safe, tailored, and aligned with what matters most to you as a family.

What are some default off-limit topics?

Generally based on age, some topics that we try to steer kids away from by default that might be considered harmful include: unverified medical advice forums, unsafe DIY health or beauty hacks, stalkerware or spyware promotion, Websites encouraging dangerous pranks, Sites promoting academic cheating (e.g., essay mills), Platforms encouraging online harassment or doxxing, Human trafficking or exploitation forums, Drug-related content (promotion or sales), Weapon-building tutorials or sales, Fake news and propaganda sites, Extremist political forums, Pornography, Violence and gore, Pro-suicide or self-harm content, Eating disorder promotion, Conspiracy theories, Cult recruitment, Hate speech (racism, sexism, etc.), Alt-right or extremist ideologies, Pro-violence forums or groups, Cyberbullying platforms, Scams and phishing websites, Methods for messaging strangers anonymously, Gambling sites, Illegal marketplaces (e.g., dark web), Dangerous challenges or trends, Misinformation hubs. etc)

How does Clevr determine what is off limits by default?

Clevr aims to restrict access to harmful content by having its AI models probabilistically evaluate content appropriateness according to the user’s age/learning stage.

1.System Prompts as Contextual Conditioning

  • Each model in Clevr’s pipeline (e.g., LLMs plus specialized custom systems) receives the system prompt that includes the child’s age as part of its input context.

  • This prompt acts as a soft conditioning variable, biasing each model’s token probability distributions toward outputs that are age-appropriate and safe for Clevr’s users.

  • Because models may differ in training or specialization, the prompt’s influence on tone and content style varies but consistently steers the output toward child-safe communication.

2. Probabilistic Nature and Variance Across Models

  • Each model in Clevr’s system generates text probabilistically, so even with identical prompts, outputs vary due to randomness inherent in sampling methods.

3. Clevr’s Own Filters as a Final Hard Safety Net

  • Clevr’s custom content filters, whether rule-based, heuristic, or machine learning classifiers, operate as post-generation safeguards.

  • These filters scan all generated content from the pipeline to detect and block harmful or policy-violating material such as inappropriate language, misinformation, or unsafe instructions.

  • These filters are fine-tuned to regulatory compliance (e.g., COPPA, PIPEDA), and generally recognized standards for child safety.

4. Multi-Stage Pipeline Benefits

  • Using multiple models with the same age prompt allows Clevr to achieve progressive refinement:

    • Initial models generate child-appropriate responses guided by the age context.

    • Later models or reranking steps further sanitize or simplify content, reinforcing safety and clarity aligned with Clevr’s learning goals.

  • This layered approach significantly reduces the risk of harmful or off-prompt content slipping through.

5. Limitations & Considerations

  • Prompt Injection Risks: Clevr must carefully design prompts and sanitize inputs to prevent attempts to override age-based instructions.

  • Filter Accuracy: Balancing false positives and false negatives is critical; filters must be continuously evaluated and improved to maintain user trust and smooth experience.

  • Performance Trade-offs: Multiple models and filtering layers increase computational complexity and latency, requiring careful system design to maintain responsiveness.

bottom of page