Guardrails
Guardrails are the rules that protect your AI from going off-track.
They act like digital boundaries — keeping your AI compliant, trustworthy, and aligned with your brand or industry standards.
In short, Guardrails make sure your AI knows what not to say or do.
Why Guardrails Matter
Even the smartest AI can overstep if not guided. Guardrails prevent:
- Inaccurate or fabricated information
- Sensitive or regulated content (e.g. medical, legal, or financial advice)
- Unsafe or misleading interactions
- Unrealistic claims or promises of real-time actions
This ensures that every AI reply is ethical, accurate, and safe for your users — especially when your AI operates in industries with strict compliance rules.
How Guardrails Work
Guardrails are a set of predefined rules that automatically rewrite, block, or adjust an AI response when a violation is detected.
For example:
- If the AI doesn’t know something → it replies with “I’m not sure about that” instead of guessing.
- If a user asks for a medical diagnosis → the AI politely declines.
- If a response includes fake URLs → it removes or rephrases them.
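Conceptually, a guardrail layer sits between the model’s draft reply and the user and checks each rule before the reply goes out. The Python sketch below is only a hypothetical illustration of that pattern, assuming a confidence flag, a keyword check, and an approved-URL list; the function names, keyword lists, and fallback wording are assumptions for clarity, not the platform’s actual implementation.

```python
# Hypothetical sketch of a rule-based guardrail layer.
# Rule names, keyword lists, and fallback wording are illustrative only.
import re

FALLBACK_UNKNOWN = "I'm not sure about that."
FALLBACK_MEDICAL = "I'm not able to give medical advice, but I'm happy to help with general questions."
MEDICAL_TERMS = ("diagnose", "diagnosis", "prescribe", "dosage")

def apply_guardrails(user_message: str, draft_reply: str,
                     approved_urls: set[str], confident: bool) -> str:
    # Rule: acknowledge when information is unknown instead of guessing.
    if not confident:
        return FALLBACK_UNKNOWN
    # Rule: politely decline requests for a medical diagnosis.
    if any(term in user_message.lower() for term in MEDICAL_TERMS):
        return FALLBACK_MEDICAL
    # Rule: remove any link that is not on the approved list.
    for url in re.findall(r"https?://\S+", draft_reply):
        if url.rstrip(".,)") not in approved_urls:
            draft_reply = draft_reply.replace(url, "[link removed]")
    return draft_reply
```

The checks themselves run automatically; what you manage is the list of rules.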
You can add, edit, or remove Guardrail rules in your AI Settings under the Guardrails tab.
Examples of Default Guardrails
Here are some of the standard rules you’ll find:
- Acknowledge when information is unknown.
- Do not fabricate or make up facts.
- Avoid estimating pricing unless it is explicitly stated.
- Do not provide financial, legal, medical, or property advice.
- Do not imply access to real-time data or current market info.
- Never make up fake links or suggest the AI can perform real-world actions.
You can also add custom rules to match your business or compliance needs, such as preventing mentions of competitors or enforcing brand tone.
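If custom rules are written as plain-language instructions alongside the defaults (an assumption here, since the exact rule format is not specified), you can think of them as entries added to the same list. The rule wording below is purely illustrative.

```python
# Hypothetical example: custom rules appended to the default set.
# The wording of each rule is illustrative only.
DEFAULT_RULES = [
    "Acknowledge when information is unknown.",
    "Do not fabricate or make up facts.",
    "Do not provide financial, legal, medical, or property advice.",
]

CUSTOM_RULES = [
    "Do not mention competitors by name.",              # brand protection
    "Keep the tone friendly, concise, and on-brand.",   # brand tone
]

ALL_RULES = DEFAULT_RULES + CUSTOM_RULES
```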
Best Practice
Think of Guardrails as your AI’s moral compass and compliance filter.
Use them to protect your brand reputation and keep every conversation aligned with truth, trust, and transparency.
Next Step
Learn more about Guardrails