Video ingestion active · Private pilot · cohort open

Retrieval, rebuilt from the evidence up.

Generic AI tools retrieve fragments and hope. Viore is built for knowledge where the wrong answer has a cost — and the right one has to be defensible.

19,281 moments · evidence indexed from active pilot corpora
Evaluative Fingerprints · arXiv preprint, published research
The gap

Your corpus has structure.
Generic AI tools treat it like a pile of text.

If you own a knowledge-dense corpus (hundreds of hours of expert material, thousands of pages of internal docs, a decade of institutional memory), you already know the problem. Your team can't retrieve it. Off-the-shelf tools hallucinate against it. Notebook-style products fragment it.

The chunk is the wrong unit. Experts don't reason in 500-token windows. They reason in claims, sources, and the consequences of being wrong.

Viore is built on evidence, not fragments. What that unlocks is a conversation best had on a call.

Research credibility

We measured the judge, not just the answer.

Before we built anything, we studied evaluation. Evaluative Fingerprints is our open research on LLM-as-judge reliability: near-zero inter-judge agreement (Krippendorff's α = 0.042) alongside high within-judge stability (ICC up to 0.872), with judge identity recoverable at 89.9% accuracy. That result changed how we think about trust, grounding, and what reliable retrieval should actually optimize for.
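For readers unfamiliar with the agreement statistic cited above, here is a minimal sketch of Krippendorff's α for interval scores, assuming complete data (every judge scores every item). This is an illustrative stand-in, not the paper's actual evaluation harness (that is released separately as 12 Angry Tokens); the function name and data layout are our own for this example.

```python
from itertools import combinations

def krippendorff_alpha_interval(ratings):
    """Krippendorff's alpha for interval data with complete ratings.

    ratings: list of per-item score lists, one score per judge.
    Returns 1 - D_o / D_e: 1.0 means perfect agreement,
    values near 0 mean agreement no better than chance.
    """
    # Observed disagreement D_o: mean squared difference between
    # pairs of scores given to the same item.
    within = [(a - b) ** 2 for item in ratings
              for a, b in combinations(item, 2)]
    d_o = sum(within) / len(within)

    # Expected disagreement D_e: mean squared difference between
    # all pairs of scores, pooled across every item and judge.
    pooled = [score for item in ratings for score in item]
    between = [(a - b) ** 2 for a, b in combinations(pooled, 2)]
    d_e = sum(between) / len(between)

    return 1.0 - d_o / d_e

# Judges who always agree yield alpha = 1.0; judges who
# systematically disagree can push alpha to zero or below.
print(krippendorff_alpha_interval([[3, 3], [1, 1], [5, 5]]))  # 1.0
```

A near-zero α, as reported in the paper, means the pooled judges disagree with each other about as much as randomly shuffled scores would, even when each judge is individually consistent.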

Paper
Evaluative Fingerprints
Preprint on arXiv · A reliability paradox in LLM-as-judge evaluation
Open source
12 Angry Tokens
Multi-judge evaluation harness, released alongside the paper
Recognition
ElevenLabs Grant
Awarded for voice-grounded retrieval research

Built for corpora that matter.

Viore is in private pilot with a curated cohort. We're selective about who we take next, and honest about who we're not for.

Intro call · 20 min · NDA available