Section 9: QA Process - How to Review an Agent
Why QA Matters
Building a great agent doesn't stop at setup; it's only done when the outputs are accurate.
Your QA process is what ensures your agent:
- Extracts the right data
- Formats it cleanly
- Grounds its outputs for traceability
- Scales confidently across files
This section walks through the tools and best practices available in Go to streamline the review process and make sure your agent delivers clean, accurate, and auditable results.
How to QA an Agent Like a Pro
Use the Review Tab to Organize Your Outputs
The Review tab gives you a visual, drag-and-drop layout where you can:
- Resize, move, and rearrange property widgets
- Create a custom output layout that aligns with your QA flow
- Group key metrics visually (e.g. “Financials” vs. “Risk Factors”)
This is your QA dashboard. Use it to:
- Focus attention on the most important outputs
- Quickly validate values across rows
- Organize views around specific personas (Analyst, Legal, Ops)
Leverage AI Citations for Verifiability
Text and Number properties support AI Citations, which allow you to:
- Highlight where an output came from in the source file
- Jump directly to that page or sentence
- Visually verify the bounding box in context
Best Practice:
For any property pulling directly from a file (e.g. NOI, Cap Rate, Address), always enable citations in the tool settings. It makes verifying outputs 10x faster.
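If you export agent outputs for bulk QA, you can also check citation coverage programmatically. The following is a minimal sketch, assuming a JSON export where each row carries a "properties" map with "value" and "citations" fields; those field names are illustrative assumptions, not Go's actual export schema.

```python
# Minimal sketch: flag extracted values that arrived without a citation.
# Assumes outputs were exported to JSON; the field names ("properties",
# "value", "citations") are hypothetical and will differ from your
# actual export schema.
import json

CITED_PROPERTIES = {"NOI", "Cap Rate", "Address"}  # properties that must cite a source

def missing_citations(export_path: str) -> list[tuple[str, str]]:
    """Return (row_id, property_name) pairs whose value has no citation."""
    with open(export_path) as f:
        rows = json.load(f)

    gaps = []
    for row in rows:
        for name, prop in row.get("properties", {}).items():
            if name in CITED_PROPERTIES and prop.get("value") and not prop.get("citations"):
                gaps.append((row.get("id", "?"), name))
    return gaps

if __name__ == "__main__":
    for row_id, prop in missing_citations("agent_export.json"):
        print(f"Row {row_id}: '{prop}' has a value but no citation to verify against")
```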
Prefer Single & Multi-Select Where Possible
Using Select properties improves QA in two major ways:
- Efficiency: Fewer possible outputs → faster review
- Clarity: Clearly defined dropdowns make errors obvious
You can also:
- Color code options for better scannability
- Route based on select values (e.g. “Needs Review” view)
Bonus: Select properties also reduce token usage and hallucination risk, improving performance and lowering cost.
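Because select options form a closed set, you can also sanity-check them outside the app. The sketch below assumes a CSV export of agent outputs; the column names and allowed option lists are placeholders for your own configuration, not values Go defines.

```python
# Minimal sketch: Select properties have a closed option set, so QA can be
# partly automated by checking each exported value against the allowed values.
# The option lists and CSV column names are examples, not your agent's config.
import csv

ALLOWED = {
    "Deal Stage": {"Sourcing", "Diligence", "Closed"},
    "Review Status": {"Approved", "Needs Review"},
}

def invalid_selects(csv_path: str) -> list[tuple[int, str, str]]:
    """Return (row_number, column, value) for any select value outside its option set."""
    problems = []
    with open(csv_path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f), start=1):
            for column, options in ALLOWED.items():
                value = (row.get(column) or "").strip()
                if value and value not in options:
                    problems.append((i, column, value))
    return problems

if __name__ == "__main__":
    for row_num, column, value in invalid_selects("agent_export.csv"):
        print(f"Row {row_num}: unexpected {column} value '{value}'")
```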
Use the “Expand Entity” View for Side-by-Side QA
Clicking Expand Entity on any row opens a split-screen review mode:
- Left: The source file (PDF, webpage, contract, etc.)
- Right: All extracted properties listed vertically
This is ideal for:
- Spot-checking values in full context
- Viewing citation targets inline
- Quickly editing outputs or reviewing model reasoning
- Validating collections or long-form extractions
It’s your go-to view when reviewing:
- A new agent for the first time
- Long documents or financial tables
- Multi-property summaries like executive memos
Suggested QA Best Practices
Combine these workflows for a scalable and reliable review loop:
| Best Practice | Why It Works |
|---|---|
| Start with Review View | Visually prioritize your most critical outputs |
| Always test with 3–5 diverse, content-heavy files you know inside and out | Surfaces edge cases early and accelerates QA by using files you’re already familiar with |
| Add Selects to flag rows | Route exceptions to “Needs Review” views for faster triage |
| Use JSON + Python for grouped logic (see the sketch below) | Simplifies debugging and makes structured outputs easier to validate |
| Document what “good” looks like | Aligns team members reviewing outputs and prevents subjective QA bottlenecks |
Pro Tip: Use files that differ in structure, length, and formatting — but that you’ve personally reviewed before — so you can immediately tell when something’s off.
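To make the "JSON + Python for grouped logic" row concrete, here is a minimal sketch that groups related financial outputs into one object and runs consistency checks across them in one place. The "financials" structure, field names, and thresholds are illustrative assumptions, not a schema Go requires.

```python
# Minimal sketch of "JSON + Python for grouped logic": group related properties
# (here, a hypothetical "financials" object) and run simple consistency checks
# in one pass instead of eyeballing each field separately.
# The structure, field names, and tolerance are illustrative only.
import json

def check_financials(row: dict) -> list[str]:
    """Return human-readable QA flags for one extracted row."""
    flags = []
    fin = row.get("financials", {})
    noi, value, cap_rate = fin.get("noi"), fin.get("value"), fin.get("cap_rate")

    if noi is not None and noi < 0:
        flags.append("NOI is negative; confirm against the source file")
    if noi and value and cap_rate:
        implied = noi / value
        if abs(implied - cap_rate) > 0.005:  # half a percentage point of slack
            flags.append(f"Cap rate {cap_rate:.2%} disagrees with NOI/value ({implied:.2%})")
    return flags

if __name__ == "__main__":
    with open("agent_export.json") as f:
        for row in json.load(f):
            for flag in check_financials(row):
                print(f"{row.get('id', '?')}: {flag}")
```

Grouping the checks this way means a reviewer sees one short list of flags per row instead of hunting across columns for inconsistencies.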
Summary: A QA Checklist
Before you call your agent “done,” ask:
- Are all key properties visible and organized in Review view?
- Are citations enabled on all text/number fields that pull from source documents?
- Are select fields being used wherever options are discrete and finite?
- Have you opened a few rows using Expand Entity to verify side-by-side behavior?
- Have you run real files and caught 1–2 edge cases?
- Are your outputs clear, traceable, and scalable?
- Do you have sub-workflows with human-in-the-loop review stages?
If the answer to all of these is yes — you’re ready to deploy.
