Understanding Your First Test Result
Rihario results show what the AI found during exploration, not pass/fail status. You'll see pages explored, issues detected, step-by-step actions, and evidence. Here's how to make sense of it all.
Result Overview
At the top of your result page, you'll see a summary. The first thing to check is the status:
Status Values
- COMPLETED - Exploration finished successfully
- FAILED - Exploration stopped due to an error
- BLOCKED - Hit a blocker (CAPTCHA, MFA, cookie banner)
- SKIPPED - Step was skipped (usually due to safety guards)
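If you process results programmatically, the four statuses above map naturally to a small triage step. This is a minimal sketch; the `triage` function and the idea of a `blocker` field are illustrative assumptions, not a documented Rihario API:

```python
# Hypothetical status triage for an exported result.
# The status strings come from the docs above; everything else
# (function name, blocker argument) is an assumption.

def triage(status, blocker=None):
    """Map a result status to a short next-action note."""
    if status == "COMPLETED":
        return "Exploration finished; review the issues list."
    if status == "FAILED":
        return "Exploration stopped due to an error; check the step log."
    if status == "BLOCKED":
        # Blockers are things like CAPTCHA, MFA, or a cookie banner.
        return f"Hit a blocker: {blocker or 'unknown'}."
    if status == "SKIPPED":
        return "Step was skipped, usually by a safety guard."
    raise ValueError(f"Unknown status: {status}")

print(triage("BLOCKED", blocker="CAPTCHA"))
```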
See What FAILED vs BLOCKED vs SKIPPED Means for detailed explanations.
Step-by-Step Log
The step log shows every action the AI took, in order. Each step includes:
- Action - What the AI did (clicked button, filled form, navigated)
- Target - What element it interacted with
- Timestamp - When it happened
- Screenshot - What the page looked like
- Evidence - Console logs, network errors, or other data
Understanding Step Actions
- Navigate - AI loaded a new page
- Click - AI clicked an element
- Type - AI filled in a form field
- Wait - AI waited for something (page load, element to appear)
- Screenshot - AI captured the current state
- Check - AI verified something exists or looks correct
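The step log fields above (action, target, timestamp) lend themselves to a quick scan outside the UI. A minimal sketch, assuming you have the steps as a list of records; the field names mirror the log described above but the record shape is an assumption:

```python
# Hypothetical step-log entries; the record shape is illustrative only.

def format_step(index, step):
    """Render one step-log entry as a single readable line."""
    return f"{index}. {step['timestamp']}  {step['action']:<10} {step['target']}"

steps = [
    {"action": "Navigate", "target": "/login", "timestamp": "12:00:01"},
    {"action": "Type", "target": "input#email", "timestamp": "12:00:03"},
    {"action": "Click", "target": "button[type=submit]", "timestamp": "12:00:04"},
]

for i, step in enumerate(steps, start=1):
    print(format_step(i, step))
```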
Issues List
Issues are problems the AI detected during exploration. They're grouped by type:
Visual Issues
Layout problems, overlapping elements, broken CSS, visual regressions.
Console Errors
JavaScript errors logged to the browser console. These might break functionality.
Network Errors
Failed API requests, 404s, CORS errors, timeouts.
Accessibility Issues
Missing alt text, missing labels, contrast problems, keyboard navigation issues.
Performance Issues
Slow page loads, large bundle sizes, render-blocking resources.
See Console Errors vs Network Errors vs UI Issues for more details.
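Because issues are grouped by type, a per-type count is often the quickest first look at a result. A minimal sketch, assuming issues are available as a flat list of records; the `type` values and record shape are illustrative assumptions:

```python
from collections import Counter

# Hypothetical flat list of detected issues; the "type" labels loosely
# follow the groups above, but the record shape is an assumption.
issues = [
    {"type": "console_error", "severity": "HIGH"},
    {"type": "network_error", "severity": "MEDIUM"},
    {"type": "console_error", "severity": "LOW"},
    {"type": "accessibility", "severity": "MEDIUM"},
]

by_type = Counter(issue["type"] for issue in issues)
print(by_type)
```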
Reading Individual Issues
Each issue shows a severity level plus the evidence behind it:
Severity Levels
- HIGH - Likely breaks functionality or user experience
- MEDIUM - Might cause problems, worth checking
- LOW - Minor issue, probably fine to ignore
Severity is AI-determined based on context. Always verify yourself - false positives happen.
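If you gate anything (a CI check, a review step) on these severities, rank them explicitly rather than comparing strings. A minimal sketch; the severity labels come from the list above, while the functions and threshold logic are illustrative assumptions:

```python
# Rank the three severity labels so they can be compared.
SEVERITY_RANK = {"LOW": 0, "MEDIUM": 1, "HIGH": 2}

def worst_severity(issues):
    """Return the highest severity among issues, or None if empty."""
    if not issues:
        return None
    return max((i["severity"] for i in issues), key=SEVERITY_RANK.__getitem__)

def should_block_merge(issues, threshold="HIGH"):
    """Hypothetical CI gate: block when any issue meets the threshold."""
    worst = worst_severity(issues)
    return worst is not None and SEVERITY_RANK[worst] >= SEVERITY_RANK[threshold]
```

Since severity is AI-assigned and false positives happen, a gate like this is best treated as a prompt for human review, not an automatic verdict.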
Evidence Collection
Each issue includes evidence to help you understand what happened:
- Screenshots - What the page looked like when the issue occurred
- Console logs - JavaScript errors or warnings
- Network logs - Failed requests, response codes
- DOM snapshots - HTML structure at the time
See How Evidence Is Collected for technical details.
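One common use of this evidence is turning a finding into a bug report. A minimal sketch that assembles the evidence kinds listed above into report text; the record shape and field names are illustrative assumptions, not the actual export format:

```python
# Hypothetical issue record with attached evidence.
issue = {
    "title": "Console error on checkout",
    "severity": "HIGH",
    "evidence": {
        "screenshot": "step-12.png",
        "console": ["TypeError: cart is undefined"],
        "network": ["POST /api/checkout -> 500"],
    },
}

def to_bug_report(issue):
    """Flatten an issue and its evidence into plain report text."""
    ev = issue["evidence"]
    lines = [f"{issue['title']} ({issue['severity']})"]
    lines.append(f"Screenshot: {ev['screenshot']}")
    lines += [f"Console: {msg}" for msg in ev["console"]]
    lines += [f"Network: {msg}" for msg in ev["network"]]
    return "\n".join(lines)

print(to_bug_report(issue))
```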
Video Replay
At the bottom of the result page, you can watch a video replay of the entire exploration. This shows exactly what the AI saw and did, frame by frame.
Useful for:
- Understanding context around issues
- Seeing what the AI tried to do when something failed
- Verifying findings are real problems
Common Questions
"Why are there so many steps?"
The AI explores thoroughly: it might click multiple buttons, try different interactions, and scroll through pages. A higher step count usually means a more thorough exploration.
"Why didn't it find issue X?"
AI exploration is not exhaustive. It finds issues it notices, but it doesn't test every possible scenario. If you know about a specific issue, mention it in the test instructions.
"Is this a real bug?"
Always verify findings yourself. Look at the evidence (screenshot, logs), try to reproduce it manually, check if it's a false positive. AI makes mistakes.
"Why is this marked HIGH severity?"
Severity is AI-determined based on context. A console error during form submission is probably HIGH because it might break functionality. But it could be a false positive. Always verify.