# Debugging
Debugging helps you identify, analyze, and fix issues in your AI’s logic, responses, or goal paths.
It’s your behind-the-scenes view of what really happened inside a conversation — why your Agent responded a certain way, which block or rule it triggered, and how it processed user input.
Think of Debugging as the AI’s black box recorder — every interaction, every decision, and every trigger can be traced back to improve accuracy and reliability.
## Why It Matters
When your Agent doesn’t behave as expected — missing a goal, giving a weak response, or repeating lines — Debugging helps you uncover exactly why.
It allows you to:
- Trace the logic path the AI followed.
- Verify which Brain source or Journey block was used.
- Identify missing or incorrect context.
- Catch broken or misfired conditions before users do.
With Debugging, you’re not guessing what went wrong — you’re seeing it clearly.
## Where to Find It

You can open Debugging from Sessions → Reviewing: click the 🤖 robot icon next to any AI message. The Troubleshooting drawer will open, showing technical details about how that specific message was generated.
## What You’ll See in Debugging Mode

| Section | Description |
|---|---|
| Message Context | The system’s internal understanding of user intent and conversation state before the AI responded. |
| Knowledge Source | Which Brain document, collection, or Q&A the AI used to form its answer. |
| Journey Path | Which Journey block or condition was triggered (e.g., “Hook → Align → Conversion”). |
| Conditions Matched | A list of IF/THEN logic checks and their outcomes, matched or skipped (see the sketch below this table). |
| Version | The specific Agent version that handled the message, which is helpful when testing iterations. |
| Token Usage | (Advanced) How many tokens or steps the AI used to generate the response. |
| Error Logs | Warnings about missing context, failed goal triggers, or invalid actions. |
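To make the Conditions Matched section concrete, here is a minimal sketch of what that kind of IF/THEN evaluation amounts to. The function name and keyword-based condition format are assumptions for illustration; the platform’s actual rule engine isn’t exposed as code.

```python
# Hypothetical sketch of "Conditions Matched": each IF/THEN rule is checked
# against the user's message and reported as matched or skipped. The names
# and keyword-based format are illustrative, not the product's internals.

def evaluate_conditions(message: str, conditions: dict[str, list[str]]) -> dict[str, str]:
    """Return 'matched' or 'skipped' for each named condition."""
    text = message.lower()
    return {
        name: "matched" if any(keyword in text for keyword in keywords) else "skipped"
        for name, keywords in conditions.items()
    }

report = evaluate_conditions(
    "can we schedule a quick call?",
    {
        "Intent = Demo Booking": ["demo"],
        "Intent = Pricing": ["price", "cost"],
    },
)
print(report)
# {'Intent = Demo Booking': 'skipped', 'Intent = Pricing': 'skipped'}
```

A condition you expected to fire showing up as “skipped” is usually the fastest clue to a keyword or intent mismatch.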
## Common Debugging Scenarios

| Problem | Possible Cause | Debugging Fix |
|---|---|---|
| AI gave an irrelevant response | Wrong or outdated Brain source | Reassign the Knowledge collection or update the context. |
| Goal not triggered | Condition mismatch or missing path | Check the Journey logic; ensure the block transition is defined. |
| AI repeated messages | Recursive prompt or memory loop | Reset memory or add stop conditions. |
| No response generated | Invalid Action or API error | Check the integration, e.g., a webhook or calendar task (see the sketch below this table). |
| AI tone inconsistent | Conflicting Persona overrides | Verify the Persona per block and the global tone settings. |
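For the “No response generated” row, it often helps to test the integration endpoint outside the platform first. The sketch below posts a test payload to a placeholder webhook URL; a non-2xx status or a timeout points at the integration rather than the Agent. The URL and payload are assumptions, so substitute whatever your Action actually calls.

```python
# Sanity-check a webhook Action independently of the Agent.
# The URL and payload below are placeholders, not product defaults.
import json
import urllib.request

WEBHOOK_URL = "https://example.com/hooks/book-demo"  # placeholder endpoint

request = urllib.request.Request(
    WEBHOOK_URL,
    data=json.dumps({"intent": "demo_booking", "email": "test@example.com"}).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(request, timeout=10) as response:
        print(response.status, response.read()[:200])  # a 2xx status means the endpoint is healthy
except Exception as exc:  # HTTPError for non-2xx replies, URLError for network failures
    print("Webhook check failed:", exc)
```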
## How to Debug a Session

1. Open any active or completed session in the Sessions tab.
2. Identify the message that needs inspection.
3. Click the 🤖 icon beside it to open Troubleshooting Mode.
4. Review (one way to picture these fields as a single record is sketched after these steps):
   - Context – what the AI knew before replying.
   - Source – where it pulled its answer from.
   - Logic – which Journey block or action was triggered.
   - Conditions – which IF/THEN checks were matched.
5. Use this insight to adjust your Brain, Journey, or Persona.
6. Recreate the same conversation (using the Recreate Session button) to verify your fix.
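If it helps to picture what you are reviewing in step 4, the sketch below models those four fields as one record. The class and field names are made up for illustration and do not mirror the product’s actual data format.

```python
# Hypothetical shape of what Troubleshooting Mode surfaces for one message,
# mirroring the four review items above. All names are illustrative only.
from dataclasses import dataclass

@dataclass
class DebugRecord:
    context: str                # what the AI knew before replying
    source: str                 # Brain document or collection it pulled from
    logic: str                  # Journey block or action that was triggered
    conditions: dict[str, str]  # IF/THEN check -> "matched" or "skipped"

record = DebugRecord(
    context="Returning visitor asking about availability",
    source="Brain: Pricing & Booking FAQ",
    logic="Journey block: Align",
    conditions={"Intent = Demo Booking": "skipped"},
)
print(record)
```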
## Pro Debugging Techniques

| Technique | Description | Benefit |
|---|---|---|
| Version Comparison | Compare v1 vs. v2 session paths to test AI improvements (see the sketch below this table). | See measurable progress in tone, clarity, and conversions. |
| Tag Failed Goals | Use session tags like “Goal Missed” or “Debug” for tracking. | Organize your optimization workflow. |
| Recreate Session | Replay the same user messages after edits. | Confirm that your changes solved the issue. |
| Cross-Link with Knowledge | Check whether the AI missed relevant content in the Brain. | Fill Knowledge gaps to avoid future errors. |
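One low-tech way to run a Version Comparison is to export both session transcripts and diff them. The sketch below assumes two plain-text exports with placeholder file names and uses Python’s standard difflib; it is a workflow suggestion, not a built-in feature.

```python
# Diff two exported session transcripts (one per Agent version) to see
# exactly where the conversation paths diverge. File names are placeholders.
import difflib
from pathlib import Path

def diff_sessions(path_v1: str, path_v2: str) -> None:
    old = Path(path_v1).read_text().splitlines()
    new = Path(path_v2).read_text().splitlines()
    for line in difflib.unified_diff(old, new, fromfile="v1", tofile="v2", lineterm=""):
        print(line)

diff_sessions("session_v1.txt", "session_v2.txt")
```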
## Real-World Example
During a session, your AI was supposed to trigger the “Book Demo” goal after detecting intent.
Instead, it continued the conversation without completing it.
After clicking the 🤖 Debug icon, you find that the condition “Intent = Demo Booking” didn’t match because the user said “schedule a quick call,” not “demo.”
You update the condition to include synonyms (“book,” “call,” “schedule”).
You recreate the same session — and this time, the goal triggers perfectly.
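As a sketch of why the synonym fix works (purely illustrative; the real change happens in the Journey condition editor, and the names below are made up):

```python
# Before the fix, the condition only looked for "demo", so "schedule a quick
# call" was skipped. Widening it to a synonym set lets the intent match.
import re

DEMO_SYNONYMS = {"demo", "book", "call", "schedule"}

def matches_demo_intent(message: str) -> bool:
    words = set(re.findall(r"[a-z]+", message.lower()))
    return bool(words & DEMO_SYNONYMS)

print(matches_demo_intent("Can we schedule a quick call?"))  # True
print(matches_demo_intent("Tell me about pricing."))         # False
```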
That’s how Debugging turns insight into improvement — instantly.
## Best Practices
- Debug sessions after every major Journey or Brain update to catch regressions early.
- Always test multiple user variations of key phrases (“demo,” “meeting,” “talk”).
- Keep Knowledge sources clean and well-tagged for accurate context retrieval.
- Use Agent Version labels to organize iterative testing.
- Link Debugging findings back to your Playbooks and Guardrails.
## Why It’s Powerful
Debugging doesn’t just fix problems — it builds intelligence.
Each time you trace and correct your AI’s logic, you’re training it to think, respond, and sell smarter.
Your AI doesn’t learn from perfection — it learns from the fixes you make here.