# Performance Tuning
Optimizing your Curiosity Workspace ensures fast search responses and efficient data ingestion, especially as your graph scales.
## Ingestion Performance
- Batch Sizes: Commit items to the graph in batches (typically 100-500 items) rather than one at a time.
- Parallelism: Run multiple ingestion tasks in parallel if the source system and network bandwidth allow.
- Skip Unchanged: Implement logic to skip records that haven't changed since the last sync.
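The batching and skip-unchanged points can be combined: hash each record's content, skip records whose hash matches the previous sync, and group the rest into batches. This is a minimal sketch; the `plan_ingestion` helper, the record shape, and the batch size of 250 are illustrative, not part of the Curiosity API.

```python
import hashlib
import json

BATCH_SIZE = 250  # within the typical 100-500 range


def record_hash(record: dict) -> str:
    """Stable content hash used to detect unchanged records."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


def plan_ingestion(records, previous_hashes):
    """Yield batches containing only records that changed since the last sync."""
    changed = [r for r in records
               if previous_hashes.get(r["id"]) != record_hash(r)]
    for i in range(0, len(changed), BATCH_SIZE):
        yield changed[i:i + BATCH_SIZE]
```

Each yielded batch would then be committed in one call; persisting the new hashes after a successful commit keeps the next sync incremental.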
## Search Optimization
- Indexing: Only index fields that are actually used for searching or filtering.
- Query Complexity: Simplify complex graph traversals where possible.
- Caching: Leverage endpoint caching for frequently requested data.
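Endpoint caching for frequently requested data can be as simple as a time-bounded memo around a handler. The decorator below is a generic sketch, not a Curiosity feature; the TTL value and handler name are placeholders.

```python
import time
from functools import wraps


def ttl_cache(seconds: float):
    """Cache an endpoint handler's result for a fixed time window."""
    def decorator(fn):
        store = {}  # args -> (expires_at, value)

        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]  # still fresh: skip recomputation
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator
```

A short TTL keeps hot queries cheap while bounding staleness; invalidate-on-write is the stricter alternative when results must always reflect the latest graph state.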
## Resource Allocation
- Memory: Ensure sufficient RAM for the graph engine and search indexes.
- CPU: Monitor CPU usage during heavy ingestion or complex AI workflows.
- Disk I/O: Use high-performance SSDs for storage to minimize latency.
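A quick capacity snapshot helps confirm the host meets these requirements before load shows up as latency. This stdlib-only sketch is illustrative; it reports raw numbers and sets no thresholds, since acceptable values depend on your graph size and workload.

```python
import os
import shutil


def resource_snapshot(path: str = "/") -> dict:
    """Report CPU count and disk capacity for the volume holding the workspace."""
    total, used, free = shutil.disk_usage(path)
    return {
        "cpu_count": os.cpu_count(),
        "disk_free_gb": free / 1e9,
        "disk_used_pct": 100 * used / total,
    }
```
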
## Scaling Strategies
As your data grows, consider:
- Vertical Scaling: Increasing CPU and RAM for the workspace service.
- Horizontal Scaling: Distributing search and ingestion load across multiple instances (for enterprise deployments). Curiosity supports a primary + read-replicas model for horizontal scalability.
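Under a primary + read-replicas model, the routing rule is: writes go to the primary, reads are spread across replicas. A minimal round-robin router might look like this; the class, operation names, and endpoint strings are hypothetical, and a real deployment would use Curiosity's own configuration rather than hand-rolled routing.

```python
import itertools


class GraphRouter:
    """Route writes to the primary and spread reads across replicas."""

    def __init__(self, primary: str, replicas: list):
        self.primary = primary
        # Fall back to the primary when no replicas are configured.
        self._reads = itertools.cycle(replicas or [primary])

    def endpoint_for(self, operation: str) -> str:
        if operation in ("write", "ingest", "commit"):
            return self.primary
        return next(self._reads)
```
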
## Monitoring for Bottlenecks
Regularly check the Monitoring dashboard for:
- High query latency
- Slow ingestion rates
- Memory pressure
- Failed background tasks
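The checklist above can be automated as a threshold check over the metrics you already collect. The metric names and threshold values below are illustrative assumptions, not Curiosity defaults; tune them to your workload.

```python
# Illustrative thresholds -- adjust for your own workload.
THRESHOLDS = {
    "query_latency_ms": 500.0,
    "ingest_rate_per_s": 50.0,   # alert when the rate falls BELOW this
    "memory_used_pct": 85.0,
    "failed_tasks": 0,
}


def check_metrics(metrics: dict) -> list:
    """Return the bottleneck signals from the checklist that fired."""
    alerts = []
    if metrics.get("query_latency_ms", 0) > THRESHOLDS["query_latency_ms"]:
        alerts.append("high query latency")
    if metrics.get("ingest_rate_per_s", float("inf")) < THRESHOLDS["ingest_rate_per_s"]:
        alerts.append("slow ingestion")
    if metrics.get("memory_used_pct", 0) > THRESHOLDS["memory_used_pct"]:
        alerts.append("memory pressure")
    if metrics.get("failed_tasks", 0) > THRESHOLDS["failed_tasks"]:
        alerts.append("failed background tasks")
    return alerts
```

Running such a check on a schedule turns the dashboard review into an alert you can act on before users notice degradation.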