Knowledge & RAG
The Knowledge layer is what stops 10ex agents from sounding generic. Every agent’s first sub-agent typically reads a project-specific subset of knowledge before doing anything else. That’s why filling the schema once carries through to every agent you hire.
What goes in
- Brand context. Name, offerings, USPs, brand_kit, brand_voice.
- ICP profiles. Audience definitions, used to filter and personalize.
- Past outputs. Successful blogs, ads, sequences for few-shot retrieval.
- Documents. PDFs, web pages, transcripts, chunked and embedded into Qdrant.
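As a rough mental model, the structured part of the workspace knowledge can be pictured as a nested record that crews project slices from. The field names and helper below are illustrative assumptions, not the product's actual schema:

```python
# Hypothetical workspace knowledge record; field names are illustrative.
workspace_knowledge = {
    "brand": {
        "name": "Acme Robotics",
        "offerings": ["fleet software", "support plans"],
        "usps": ["zero-downtime updates"],
        "brand_kit": {"primary_color": "#0B5FFF"},
        "brand_voice": "direct, technical, no hype",
    },
    "icp_profiles": [
        {"segment": "ops leads", "company_size": "50-500"},
    ],
    "past_outputs": [
        {"type": "blog", "title": "Cutting robot downtime"},
    ],
}

def slice_for_crew(knowledge: dict, fields: list) -> dict:
    """Project only the top-level fields a given crew needs."""
    return {k: knowledge[k] for k in fields if k in knowledge}

# A blog crew would see brand context and past outputs, but not ICP fields.
blog_slice = slice_for_crew(workspace_knowledge, ["brand", "past_outputs"])
```

Filling these fields once is what lets every crew pull a consistent, on-brand slice without re-entering context per agent.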
How it’s stored and retrieved
Knowledge is stored as Qdrant vectors per workspace. Agents retrieve top-k chunks at runtime, scoped to the slice each crew needs. A blog crew won’t read your ICP fields; a prospector won’t read your past blog drafts.
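Top-k retrieval boils down to ranking chunk vectors by similarity to the query vector within one workspace's collection. The sketch below shows the idea with plain cosine similarity over an in-memory list; Qdrant does the equivalent at scale, and the chunk payloads here are invented examples:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, chunks, k=2):
    """chunks: (vector, payload) pairs from one workspace's collection.
    Returns the k payloads most similar to the query vector."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c[0]), reverse=True)
    return [payload for _, payload in ranked[:k]]

# Toy 2-d vectors standing in for real embeddings.
chunks = [
    ([1.0, 0.0], {"text": "brand voice: direct"}),
    ([0.0, 1.0], {"text": "ICP: ops leads"}),
    ([0.9, 0.1], {"text": "past blog intro"}),
]
results = top_k([1.0, 0.0], chunks, k=2)
```

Scoping works the same way: each crew queries only its workspace's collection, and the schema slice it reads is filtered before it ever reaches the prompt.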
Each document gets an Indexed badge in the Knowledge browser once embedding finishes. Until then, the file is uploaded but not yet retrievable.
How knowledge gets used at runtime
- The crew starts. Its first sub-agent calls knowledge_organizer.
- knowledge_organizer projects only the schema slice this crew needs (e.g. brand voice plus past blogs for Marcus/Blog).
- For document RAG, the crew issues a Qdrant query against the workspace’s vector collection.
- Retrieved chunks land in the prompt context for the rest of the crew’s sub-agents.
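The last step, assembling retrieved knowledge into prompt context, can be sketched as below. The formatting and function name are assumptions for illustration; the actual prompt template is not documented here:

```python
def build_prompt_context(schema_slice: dict, retrieved_chunks: list) -> str:
    """Illustrative: fold the projected schema slice and retrieved document
    chunks into one context string for the crew's remaining sub-agents."""
    lines = ["## Workspace knowledge"]
    for field, value in schema_slice.items():
        lines.append(f"{field}: {value}")
    lines.append("## Retrieved documents")
    for chunk in retrieved_chunks:
        lines.append(f"- {chunk['text']}")
    return "\n".join(lines)

ctx = build_prompt_context(
    {"brand_voice": "direct, technical"},
    [{"text": "Past blog: Cutting robot downtime"}],
)
```

Every sub-agent after the first sees this context, which is why the projection step matters: a smaller, crew-specific slice keeps prompts focused.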
A common gotcha: uploading a 200-page PDF and expecting an agent to “know” it. The agent retrieves chunks, not the whole doc. If the relevant section is buried in a long document with weak headings, retrieval quality drops. Split long docs into focused files.
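One pragmatic way to split a long document before upload is to break it on its own headings, so each file covers one focused topic. This is a minimal sketch assuming markdown-style `#` headings; it is not the product's internal chunker:

```python
def split_by_headings(text: str) -> list:
    """Split a document into sections, starting a new section at each
    markdown-style heading line. Illustrative pre-upload splitting only."""
    sections, current = [], []
    for line in text.splitlines():
        if line.startswith("#") and current:
            sections.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        sections.append("\n".join(current))
    return sections

doc = "# Pricing\nPlans and tiers...\n# Support\nSLAs and contacts..."
sections = split_by_headings(doc)
```

Each resulting file then embeds as a tight cluster of chunks with a clear topic, which tends to retrieve better than one chunk buried at page 140 of a monolith.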
Common questions
Can I see which chunks an agent retrieved? Yes, via Langfuse traces. Each retrieval step logs the query and the returned chunks.
How long does indexing take? Most documents finish in under a minute. PDFs above 50MB can take longer.
What happens to stale knowledge? Nothing automatic. You delete and re-upload. Versioning isn’t in v1.
Related
- Brand knowledge schema: the canonical structured fields
- Knowledge guide: ingestion options (URL scrape, file upload, API)