
Reliability Improvements

Explore how to enhance the reliability of AI research assistants by incorporating a quality review node between synthesis and formatting steps. Understand how to implement conditional routing for fallback responses, manage confidence labels, and deliver structured, source-attributed answers with LangGraph. This lesson helps you build robust AI workflows that ensure clear, verifiable outputs.

At the end of the previous lesson, the pipeline runs all the way from the clarity gate through planning and search. State contains a list of domain-sourced results and the source names that produced them. The two remaining stubs, synthesise_findings and format_response, return empty strings. This lesson replaces both stubs and adds a quality review node between them.

Adding a review between synthesis and formatting is a deliberate structural choice. The synthesis node's job is to produce an answer. The review node's job is to assess that answer. The format node's job is to package the final deliverable. Keeping these three responsibilities in three separate nodes means that each decision ("what should the answer say?", "is the answer good enough?", "how should the answer be presented?") can be inspected independently in the checkpoint history and changed without touching the others.
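The conditional edge after the review node boils down to a plain routing function that reads quality_passed. A minimal sketch is shown below; the target node names (format_response, fallback_response) are illustrative and should match whatever names the graph actually registers:

```python
def route_after_review(state: dict) -> str:
    """Pick the next node based on the review verdict.

    Returns the name of the node the conditional edge should route to.
    The node names here are illustrative placeholders.
    """
    if state["quality_passed"]:
        return "format_response"    # package the approved answer
    return "fallback_response"      # deliver a hedged fallback instead
```

In the graph, this function would be wired in with a conditional edge after review_synthesis, so the routing decision itself stays a one-line, easily testable piece of logic.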

Expanding the state schema

Two new fields are needed for the review step. Both are owned by the review_synthesis node.

```python
from typing import TypedDict
class ResearchState(TypedDict):
    user_question: str
    needs_clarification: bool
    clarification_question: str
    search_plan: list[str]
    max_steps: int
    step_count: int
    search_results: list[str]
    sources_used: list[str]
    skipped_count: int
    synthesis: str
    confidence_level: str
    quality_passed: bool  # written by review_synthesis
    quality_note: str  # written by review_synthesis
    formatted_response: str
```

Updated ResearchState with quality review fields
  • Lines 14–15: quality_passed is the routing signal for the conditional edge after review_synthesis. quality_note records the reason for failure and is included in the fallback response.
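To make the ownership of these fields concrete, here is a heuristic sketch of how a review node might populate them. This is an illustration only, not the lesson's implementation (the real review node may use an LLM call, and the ten-word threshold below is an arbitrary placeholder):

```python
def review_synthesis(state: dict) -> dict:
    """Heuristic quality review: a sketch, not the lesson's implementation.

    Fails the answer when it is suspiciously short or when synthesis
    confidence came back low; records the reason in quality_note.
    """
    if len(state["synthesis"].split()) < 10:
        return {"quality_passed": False,
                "quality_note": "Answer too short to be complete."}
    if state["confidence_level"] == "low":
        return {"quality_passed": False,
                "quality_note": "Low confidence: sources were mostly generic."}
    return {"quality_passed": True, "quality_note": ""}
```

Whatever the review logic is, the contract stays the same: the node writes exactly these two fields, and the conditional edge reads only quality_passed.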

The synthesis node

The synthesis node calls the LLM with the full set of search results and asks it to write a direct answer and assign a confidence level. The confidence level reflects how well the retrieved content covers the question, not the model's certainty about its own answer.

The prompt structure follows the same labelled-line approach used in the clarity check and planning nodes. ANSWER: and CONFIDENCE: are the two expected labels.
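The labelled-line format can be parsed with a small generic helper like the one below (the synthesis node inlines equivalent logic rather than calling a helper; this sketch just isolates the pattern):

```python
def parse_labelled_lines(text: str, labels: list[str]) -> dict[str, str]:
    """Extract 'LABEL: value' lines from an LLM reply.

    Matching is case-insensitive on the label; later occurrences
    overwrite earlier ones; missing labels are absent from the result.
    """
    found: dict[str, str] = {}
    for line in text.strip().splitlines():
        for label in labels:
            if line.upper().startswith(label.upper() + ":"):
                found[label] = line.split(":", 1)[1].strip()
    return found
```

Because missing labels simply do not appear in the result, the caller must still decide on fallbacks, which is exactly what the synthesis node does with its default answer and "medium" confidence.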

```python
from groq import Groq

def synthesise_findings(state: ResearchState) -> dict:
    if not state["search_results"]:
        return {
            "synthesis": "No search results were available to answer this question.",
            "confidence_level": "low",
        }
    results_block = "".join(
        f"Source {i} [{src}]:\n{res}\n\n"
        for i, (src, res) in enumerate(
            zip(state["sources_used"], state["search_results"]), 1
        )
    )
    # Count how many results contain specific content rather than the
    # generic fallback marker, so confidence can be capped later.
    fallback_marker = "No specific"
    covered = sum(1 for r in state["search_results"] if fallback_marker not in r)
    coverage_ratio = covered / len(state["search_results"])
    api_key = "{{GROQ_API_KEY}}"
    client = Groq(api_key=api_key)
    prompt = (
        "You are a research synthesiser for a product knowledge assistant.\n\n"
        "Using only the sources below, write a clear and direct answer to the "
        "research question. Do not add information not present in the sources.\n\n"
        f"Research question: {state['user_question']}\n\n"
        f"Sources:\n{results_block}"
        "Reply with exactly two labelled lines:\n"
        "ANSWER: <your complete answer in 3–5 sentences>\n"
        "CONFIDENCE: <high | medium | low>\n\n"
        "Use 'high' if all sources contained specific relevant information.\n"
        "Use 'medium' if most sources were relevant but some were generic.\n"
        "Use 'low' if most sources returned generic fallback content."
    )
    response = client.chat.completions.create(
        model="llama-3.1-8b-instant",
        messages=[{"role": "user", "content": prompt}],
    )
    answer, confidence = "", "medium"
    for line in response.choices[0].message.content.strip().splitlines():
        if line.upper().startswith("ANSWER:"):
            answer = line.split(":", 1)[1].strip()
        elif line.upper().startswith("CONFIDENCE:"):
            raw = line.split(":", 1)[1].strip().lower()
            if raw in {"high", "medium", "low"}:
                confidence = raw
    if not answer:
        answer = "The retrieved sources did not contain enough information for a complete answer."
        confidence = "low"
    # Guard against overconfidence: if fewer than half the results were
    # specific, cap the self-reported confidence at medium.
    if coverage_ratio < 0.5 and confidence == "high":
        confidence = "medium"
    return {"synthesis": answer, "confidence_level": confidence}
```

Synthesis node that combines search results into a structured answer
  • Lines 4–8: Early exit when search_results is empty. This handles the case where the executor was blocked entirely by max_steps = ...
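The coverage guard at the end of the node is worth testing in isolation. This sketch reproduces just that check as a standalone function (the function name is illustrative; the node inlines the same logic):

```python
def cap_confidence(results: list[str], confidence: str,
                   fallback_marker: str = "No specific") -> str:
    """Downgrade 'high' to 'medium' when under half the results are specific.

    Mirrors the coverage check in synthesise_findings: a result
    containing the fallback marker is treated as generic filler.
    """
    if not results:
        return "low"
    covered = sum(1 for r in results if fallback_marker not in r)
    if covered / len(results) < 0.5 and confidence == "high":
        return "medium"
    return confidence
```

The guard is deliberately one-directional: it never upgrades confidence, only caps a "high" self-assessment that the retrieval evidence does not support.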