v2.3.17

December 19, 2025

Semantic chunking now supports any embedder, including custom providers

SemanticChunking now works with all Agno embedders (e.g., Azure OpenAI, Mistral) and, via a lightweight wrapper, with custom chonkie BaseEmbeddings implementations; new parameters offer finer control over chunking behavior. This expands model choice, helps optimize cost and latency, and reduces vendor lock-in without requiring pipeline refactoring.

Details

  • Plug in your preferred embedding provider with minimal configuration
  • Tune chunk sizes and thresholds to match corpus and performance goals
  • Maintain consistent chunking strategies across environments
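The core idea behind threshold-based semantic chunking, with a pluggable embedder, can be sketched in a few lines of plain Python. This is an illustrative sketch only, not Agno's actual API; the function names, the toy embedder, and the threshold value are all assumptions for demonstration:

```python
# Illustrative sketch: split sentences into chunks wherever the
# embedding similarity between adjacent sentences drops below a threshold.
# Any embedder (a callable returning a vector) can be plugged in.
import math
from typing import Callable, List

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_chunks(
    sentences: List[str],
    embed: Callable[[str], List[float]],   # any embedding provider plugs in here
    similarity_threshold: float = 0.5,     # tune to your corpus
) -> List[List[str]]:
    """Start a new chunk whenever similarity to the previous sentence drops."""
    if not sentences:
        return []
    vectors = [embed(s) for s in sentences]
    chunks = [[sentences[0]]]
    for prev, cur, sent in zip(vectors, vectors[1:], sentences[1:]):
        if cosine(prev, cur) >= similarity_threshold:
            chunks[-1].append(sent)   # same topic: extend the current chunk
        else:
            chunks.append([sent])     # topic shift: open a new chunk
    return chunks
```

Swapping embedding providers then means swapping the `embed` callable, while the threshold (and, in a fuller implementation, chunk-size limits) stays under your control.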

Who this is for: RAG builders and platform teams optimizing retrieval quality and TCO.