# Prompting Patterns

This page documents practical prompting patterns that work well in Curiosity Workspace because they align with its retrieval-first, graph-centric architecture.

## Pattern: Retrieval-first Q&A (grounded answers)

Use when you want answers tied to workspace data.

  • Retrieve: search for relevant nodes/documents first.
  • Select: choose a small set of high-signal sources.
  • Answer: ask the LLM to answer strictly from the provided context.
  • Cite: include pointers back to sources (UIDs/links) for traceability.

Common guardrail: “If the answer isn’t in the context, say you don’t know.”
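
A minimal sketch of the four steps in Python. `search_nodes` and `call_llm` are hypothetical placeholders for your workspace's search endpoint and LLM client (not real Curiosity APIs); the point is the prompt shape: context first, a strict grounding instruction, and source UIDs carried through for citation.

```python
def grounded_answer(question, search_nodes, call_llm, top_k=5):
    """Retrieval-first Q&A: answer strictly from retrieved context."""
    # Retrieve: pull candidate nodes/documents for the question.
    hits = search_nodes(question)

    # Select: keep only a small set of high-signal sources.
    sources = hits[:top_k]

    # Build a context block that carries source UIDs for citation.
    context = "\n\n".join(f"[{s['uid']}] {s['text']}" for s in sources)

    prompt = (
        "Answer the question using ONLY the context below.\n"
        "Cite the [UID] of each source you rely on.\n"
        "If the answer isn't in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # Answer + Cite: the model responds with inline [UID] citations,
    # and the selected UIDs are returned alongside for traceability.
    return call_llm(prompt), [s["uid"] for s in sources]
```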

## Pattern: Structured output (classification/extraction)

Use when the output must be machine-consumable (labels, JSON).

Good practices:

  • Specify a strict schema for the output.
  • Include examples for ambiguous cases.
  • Validate output in code (endpoints) before acting on it, as in the sketch below.
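
One way to apply these practices, sketched in Python. The label set and the `call_llm` helper are assumptions made for illustration; the essential move is that code, not the prompt, is the final gate before anything acts on the output.

```python
import json

ALLOWED_LABELS = {"bug", "feature", "question"}  # hypothetical label set

def classify(ticket, call_llm):
    """Classify a ticket and validate the model's JSON before acting on it."""
    prompt = (
        "Classify the ticket. Respond with JSON only, for example:\n"
        '{"label": "bug", "confidence": 0.9}\n'
        'Allowed labels: "bug", "feature", "question".\n\n'
        "Ticket: " + ticket
    )
    raw = call_llm(prompt)

    # Validate in code before acting: malformed output fails loudly here
    # instead of propagating into downstream endpoints.
    data = json.loads(raw)  # raises on malformed JSON
    if data.get("label") not in ALLOWED_LABELS:
        raise ValueError(f"unexpected label: {data.get('label')!r}")
    conf = data.get("confidence")
    if not isinstance(conf, (int, float)) or not 0 <= conf <= 1:
        raise ValueError(f"confidence out of range: {conf!r}")
    return data
```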

## Pattern: Tool-using agent (endpoint orchestration)

Use when the assistant must do multi-step work.

Recommended architecture:

  • LLM decides which tool to call
  • Endpoint performs deterministic retrieval/logic
  • LLM synthesizes user-facing explanation

Keep tools small and composable (search, fetch neighbors, compute aggregates).
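
The sketch below shows one possible orchestration loop, assuming a JSON tool-call convention and a placeholder `call_llm`; none of these names are real Curiosity APIs. The LLM only chooses the tool, the endpoint does the deterministic work, and a final LLM call writes the explanation.

```python
import json

def run_agent(task, call_llm, tools):
    """tools: dict mapping tool name -> deterministic endpoint function."""
    # 1. LLM decides which tool to call (JSON keeps this parseable).
    decision = json.loads(call_llm(
        "Pick exactly one tool for the task. Respond with JSON only:\n"
        '{"tool": "<name>", "args": {...}}\n'
        f"Available tools: {list(tools)}\n\nTask: {task}"
    ))

    # 2. Endpoint performs the deterministic retrieval/logic.
    name = decision["tool"]
    if name not in tools:
        raise ValueError(f"unknown tool: {name!r}")
    result = tools[name](**decision["args"])

    # 3. LLM synthesizes a user-facing explanation from the result.
    return call_llm(
        f"Task: {task}\nTool result: {json.dumps(result)}\n"
        "Explain the result for the user."
    )
```

Here `tools` might look like `{"search": search_nodes, "neighbors": fetch_neighbors}`; keeping each tool small makes the decision step easy to validate and test.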

## Pattern: Summarize then link (graph enrichment)

Use when you want to create durable artifacts:

  • Summarize content into a stable “case summary”.
  • Extract key entities and link them into the graph.
  • Store summary + links for future retrieval (see the sketch below).
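
A sketch of the enrichment flow under the same assumptions: `call_llm`, `create_node`, and `link_nodes` are hypothetical stand-ins for an LLM client and graph-write endpoints, not real Curiosity APIs.

```python
import json

def summarize_and_link(doc, call_llm, create_node, link_nodes):
    # Summarize content into a stable "case summary".
    summary = call_llm(f"Write a short, stable case summary:\n\n{doc['text']}")

    # Extract key entities as JSON so linking stays deterministic.
    entities = json.loads(call_llm(
        "List the key entities as JSON only, for example "
        '[{"name": "Acme Corp", "type": "Organization"}]\n\n' + doc["text"]
    ))

    # Store summary + links for future retrieval.
    # (Assumes create_node returns the new node's UID.)
    summary_uid = create_node("CaseSummary",
                              {"text": summary, "source": doc["uid"]})
    for ent in entities:
        ent_uid = create_node(ent["type"], {"name": ent["name"]})
        link_nodes(summary_uid, "mentions", ent_uid)
    return summary_uid
```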

## Common pitfalls

  • Prompt-only logic: business rules should live in endpoints, not prompts.
  • Over-context: passing too much text dilutes relevance and reduces answer quality; retrieve and select carefully.
  • No traceability: always include source pointers for high-stakes workflows.

## Next steps