# AI & LLMs Overview
Curiosity Workspace supports AI-assisted features that are grounded in your workspace data. In practice, this usually means:
- retrieve relevant information from search + graph
- apply LLMs to summarize, answer, classify, or orchestrate workflows
- optionally write results back to the workspace (notes, links, actions) via endpoints/tasks
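The retrieve → apply → write-back loop above can be sketched as follows. This is a hypothetical illustration, not the Curiosity API: the keyword retrieval, `call_llm` placeholder, and index shape are all stand-ins for real search, graph, and provider calls.

```python
def retrieve_context(query: str, index: dict[str, str], top_k: int = 3) -> list[str]:
    """Naive keyword match, standing in for search + graph retrieval."""
    hits = [text for text in index.values() if query.lower() in text.lower()]
    return hits[:top_k]

def call_llm(prompt: str) -> str:
    """Placeholder for a real provider call (completion endpoint)."""
    return f"Summary based on: {prompt[:60]}"

def answer_with_grounding(query: str, index: dict[str, str]) -> dict:
    context = retrieve_context(query, index)
    prompt = f"Question: {query}\nContext:\n" + "\n".join(context)
    answer = call_llm(prompt)
    # A write-back step would post `answer` to a workspace note or
    # endpoint here; omitted in this sketch.
    return {"answer": answer, "sources": context}

docs = {
    "doc1": "Invoices are processed nightly.",
    "doc2": "Refund policy lasts 30 days.",
}
result = answer_with_grounding("refund", docs)
```

Returning `sources` alongside `answer` is what makes the response groundable and traceable in the UI.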
## What to use LLMs for
High-value, common use cases:
- Q&A with grounding (RAG-style experiences)
- summarization of long content (cases, conversations, documents)
- classification (routing, tagging, prioritization)
- tool-using assistants that call endpoints to retrieve or act
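As a concrete instance of the classification use case, here is a hypothetical routing sketch. The rule-based `classify_with_llm` is a stand-in for a real classification prompt; the point is the shape of the flow, including constraining free-form model output to a known label set.

```python
ALLOWED_LABELS = {"billing", "technical", "other"}

def classify_with_llm(text: str) -> str:
    """Stand-in for an LLM classification call returning a label string."""
    if "invoice" in text.lower():
        return "billing"
    if "error" in text.lower():
        return "technical"
    return "other"

def route_ticket(text: str) -> str:
    label = classify_with_llm(text)
    # Never trust the raw label: fall back to a safe default if the
    # model returns something outside the allowed set.
    return label if label in ALLOWED_LABELS else "other"
```

The same pattern covers tagging and prioritization: one prompt per decision, with the output validated before it drives any routing.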
## What not to use LLMs for
Avoid using an LLM as the only source of truth for:
- permission checks
- deterministic business rules
- data updates that require strict correctness
Instead, implement those in endpoints and validate any LLM output against them before acting on it.
## A recommended architecture pattern
- Search/Graph: retrieve candidates and context
- Endpoints: implement business logic and tool calls
- LLM: synthesize the user-facing response
- UI: present results with navigation and traceability (what sources were used?)
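The four layers above can be wired as a small pipeline where each layer is an injection point. All function names here are hypothetical; in practice the callables would wrap real search, endpoint, and provider clients.

```python
def run_pipeline(query, search_fn, endpoint_fn, llm_fn):
    """Compose the four layers; each callable is swappable."""
    candidates = search_fn(query)                       # Search/Graph: retrieve context
    enriched = [endpoint_fn(c) for c in candidates]     # Endpoints: business logic
    answer = llm_fn(query, enriched)                    # LLM: synthesize the response
    return {"answer": answer, "sources": candidates}    # UI: keep traceability

# Stub implementations, for illustration only:
result = run_pipeline(
    "open tickets",
    search_fn=lambda q: ["ticket-42"],
    endpoint_fn=lambda c: {"id": c, "status": "open"},
    llm_fn=lambda q, ctx: f"Found {len(ctx)} result(s) for '{q}'.",
)
```

Keeping `sources` in the return value is what lets the UI answer "what sources were used?" for every response.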
## Next steps
- Configure providers and templates: LLM Configuration
- Learn practical prompting patterns: Prompting Patterns