# Ranking Tuning

Ranking tuning is the process of making “the right results appear first” for your users. In Curiosity Workspace, tuning typically happens by configuring:

  • indexed fields and boosts
  • filters and facets
  • hybrid retrieval strategy (keyword + semantic)
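One common way to combine keyword and semantic retrieval is to fuse the two ranked lists. The sketch below uses reciprocal rank fusion (RRF) as an illustration; the document IDs and the constant `k=60` are assumptions for the example, not Curiosity Workspace internals.

```python
def rrf_merge(keyword_hits, semantic_hits, k=60):
    """Fuse two ranked lists of document IDs into one ranking.

    Each list contributes 1 / (k + rank) per document; documents that
    rank well in both lists accumulate the highest fused score.
    """
    scores = {}
    for hits in (keyword_hits, semantic_hits):
        for rank, doc_id in enumerate(hits, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

keyword = ["doc-a", "doc-b", "doc-c"]
semantic = ["doc-c", "doc-a", "doc-d"]
fused = rrf_merge(keyword, semantic)  # "doc-a" wins: high in both lists
```

Rank fusion rewards agreement between the two strategies without requiring their raw scores to be on the same scale.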

## Start with a relevance baseline

Before changing knobs:

  • define 20–50 representative queries
  • capture expected “good results” (gold set)
  • include edge cases (acronyms, short queries, long queries)
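The baseline above can be captured as a small harness: a gold set mapping queries to expected “good results”, plus a metric such as recall@k. The queries, IDs, and the `search` stub below are made up for illustration; plug in your real search call.

```python
def recall_at_k(ranked_ids, relevant_ids, k=10):
    """Fraction of the gold results that appear in the top-k."""
    return len(set(ranked_ids[:k]) & set(relevant_ids)) / len(relevant_ids)

GOLD = {
    "vpn setup": ["kb-12", "kb-40"],  # expected "good results"
    "Q3 roadmap": ["doc-7"],
    "sso": ["kb-3"],                  # edge case: acronym / short query
}

def evaluate(search, gold=GOLD, k=10):
    """search: callable(query) -> ranked list of document IDs."""
    per_query = {q: recall_at_k(search(q), rel, k) for q, rel in gold.items()}
    mean = sum(per_query.values()) / len(per_query)
    return mean, per_query

# Example with a stub search function standing in for the real one:
stub = {"vpn setup": ["kb-40", "kb-9"], "Q3 roadmap": ["doc-7"], "sso": ["doc-2"]}
mean, detail = evaluate(lambda q: stub.get(q, []), k=5)
```

Re-run the same harness after every change so before/after numbers are directly comparable.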

## High-leverage tuning levers

### Field selection

  • Index only fields that should participate in retrieval.
  • Remove fields that inject noise (e.g., boilerplate text).

### Field boosts

  • Boost high-signal fields (titles, summaries, identifiers).
  • Reduce boost for long body fields if they dominate rankings.
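The effect of boosts can be sketched as a weighted sum over per-field match scores. The field names and boost values here are illustrative assumptions, not product defaults.

```python
# Higher boost = more say in the final ranking; body text is damped
# so long fields don't dominate by sheer volume of matches.
FIELD_BOOSTS = {"title": 3.0, "summary": 2.0, "body": 0.5}

def combined_score(field_scores, boosts=FIELD_BOOSTS):
    """field_scores: raw match score per field for one document."""
    return sum(boosts.get(field, 1.0) * score
               for field, score in field_scores.items())

# A clean title match outranks a document that only matches in a long body,
# even when the body's raw score is higher:
title_hit = combined_score({"title": 1.0})  # 3.0
body_hit = combined_score({"body": 4.0})    # 2.0
```

Keeping the boost table small and ordered (title > summary > body) makes the intended priority easy to audit.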

### Facets and scoping

  • Add facets users actually use (status, type, owner).
  • Use facets derived from graph relationships when they express meaningful constraints.
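Scoping means filtering the candidate set before ranking it, so the scorer only sees documents that can possibly be right. A minimal sketch, with illustrative facet names:

```python
def apply_facets(docs, selected):
    """Keep only documents whose fields match every selected facet value."""
    return [d for d in docs
            if all(d.get(facet) == value for facet, value in selected.items())]

docs = [
    {"id": 1, "status": "open",   "type": "task"},
    {"id": 2, "status": "closed", "type": "task"},
    {"id": 3, "status": "open",   "type": "note"},
]
scoped = apply_facets(docs, {"status": "open", "type": "task"})  # only doc 1
```

No amount of score tuning rescues a ranking whose candidate pool is full of out-of-scope items; facets remove them up front.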

### Recency and freshness

  • For time-sensitive domains, prefer a sort mode that rewards recent items.
  • Consider separating “recent first” views from “relevance first” views.
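One way to reward recent items is to blend the relevance score with an exponential recency decay. The half-life and blend share below are tunable assumptions for illustration, not built-in settings.

```python
def recency_weight(age_days, half_life_days=30.0):
    """1.0 for brand-new items, 0.5 at the half-life, decaying toward 0."""
    return 0.5 ** (age_days / half_life_days)

def blended_score(relevance, age_days, recency_share=0.3):
    """Mix relevance and freshness; recency_share=0 is pure 'relevance first'."""
    return (1.0 - recency_share) * relevance + recency_share * recency_weight(age_days)

# A fresh, decent match can outrank a year-old strong match:
fresh = blended_score(relevance=0.6, age_days=1)
stale = blended_score(relevance=0.9, age_days=365)
```

Setting `recency_share` to 0 or 1 also gives you the two separate views mentioned above: “relevance first” and (approximately) “recent first”.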

### Vector tuning

For semantic retrieval:

  • choose which fields are embedded (usually long text)
  • tune similarity cutoffs (what qualifies as “related”?)
  • tune chunking (for long text fields)
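The last two levers can be sketched directly: fixed-size chunking with overlap for long text, and a cosine-similarity cutoff that decides what counts as “related”. The chunk sizes and the 0.75 cutoff are assumptions to tune, not product defaults.

```python
import math

def chunk_words(words, size=200, overlap=40):
    """Split a word/token list into overlapping chunks for embedding."""
    step = size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(words[start:start + size])
        if start + size >= len(words):
            break
    return chunks

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def is_related(vec_a, vec_b, cutoff=0.75):
    """The cutoff draws the line between 'related' and noise."""
    return cosine(vec_a, vec_b) >= cutoff
```

A lower cutoff surfaces more loosely related material; a higher one trades recall for precision. Overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.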

## Common pitfalls

  • Tuning without evaluation: always compare before/after on a query set.
  • Boosting everything: boosts should express a clear priority order.
  • Ignoring filters: the best ranking often comes from better scoping, not just scoring.
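“Always compare before/after” is worth making mechanical: run the same gold queries against both configurations and look at per-query regressions, not just the average. The scores below stand in for any metric (e.g., recall@k) and are fabricated for the example.

```python
def compare(before, after):
    """before/after: {query: metric score}. Returns (mean delta, regressed queries)."""
    deltas = {q: after[q] - before[q] for q in before}
    mean_delta = sum(deltas.values()) / len(deltas)
    regressions = sorted(q for q, d in deltas.items() if d < 0)
    return mean_delta, regressions

before = {"vpn setup": 0.5, "Q3 roadmap": 1.0, "sso": 0.0}
after  = {"vpn setup": 1.0, "Q3 roadmap": 0.5, "sso": 0.5}
mean_delta, regressed = compare(before, after)
# The average improves, yet "Q3 roadmap" regressed — worth investigating
# before shipping the new configuration.
```

An averaged metric can hide a regression on exactly the queries your users care about most.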

## Next steps