Hyprcore indexes every transcript and every note for two kinds of search:
- Keyword. Match exact words and phrases. Fast and literal.
- Semantic. Match by meaning, not just words. Slower but smarter.
Keyword search
The fastest way is the ⌘K palette—open it, type a word, hit Enter. Hits show up grouped by page, with the matching line highlighted. This uses SQLite full-text search, so it’s instant even with thousands of meetings. Useful for:
- Finding a specific name, number, or technical term you remember saying.
- Re-locating a meeting where someone mentioned a particular project.
- Pulling all the times you talked about a competitor, a customer, or a person.
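To make the mechanism concrete, here is a minimal sketch of SQLite full-text search using Python's built-in sqlite3 module and the FTS5 extension. The table and column names are illustrative, not Hyprcore's actual schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An FTS5 virtual table indexes every token in its columns
conn.execute("CREATE VIRTUAL TABLE transcripts USING fts5(meeting, line)")
conn.executemany(
    "INSERT INTO transcripts VALUES (?, ?)",
    [("Q3 planning", "We should migrate billing to Stripe"),
     ("Standup", "Auth service deploy is blocked")],
)
# MATCH runs the full-text query; snippet() marks the matching word,
# which is how a palette can highlight the hit line
rows = conn.execute(
    "SELECT meeting, snippet(transcripts, 1, '[', ']', '…', 8) "
    "FROM transcripts WHERE transcripts MATCH ?",
    ("billing",),
).fetchall()
print(rows)  # [('Q3 planning', 'We should migrate [billing] to Stripe')]
```

Because the index stores token positions ahead of time, the query itself is a lookup rather than a scan, which is why it stays instant as the vault grows.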
Semantic search
Semantic search uses embeddings—vector representations of meaning—so the query “how should we handle authentication” finds meetings that talked about login, sign-in, OAuth, sessions, or auth even if they didn’t use the exact word. You’ll find it in the Search section of the sidebar. Type a question or a description. Results rank by similarity, not by exact match. Useful for:
- Finding meetings whose summary doesn’t quite match your search but the discussion did.
- Pulling all conversations about a vague theme—“customer complaints”, “scaling challenges”, “hiring decisions.”
- Working in a different language than the transcript was recorded in (semantic search is multilingual).
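The ranking step can be sketched in a few lines: embed the query, then sort documents by cosine similarity to it. The toy 3-dimensional vectors below stand in for real embeddings (which have hundreds of dimensions), and the document titles are invented:

```python
import math

def cosine(a, b):
    """Cosine similarity: dot product over the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings — a real model maps text to vectors so that
# related meanings land close together in the vector space
docs = {
    "OAuth rollout discussion": [0.9, 0.1, 0.2],
    "Office snack order":       [0.1, 0.8, 0.3],
}
query = [0.85, 0.15, 0.25]  # pretend embedding of "how should we handle authentication"

ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranked[0])  # OAuth rollout discussion
```

Nothing in this scoring depends on shared words, which is also why it works across languages: a multilingual model maps a query and a transcript in different languages to nearby vectors.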
Indexing
Hyprcore indexes meetings as you record them. For older meetings created before you turned on indexing, open Settings → Intelligence → Knowledge base and click Reindex. For large vaults this can take several minutes. You choose which folders are indexed. By default everything is, but for sensitive folders you can opt out per-folder.
Embedding provider
Semantic search uses an embedding model. Configure it in Settings → Intelligence → Embedding provider:
- Hyprcore Cloud (default if signed in). Hosted; uses credits. Best multilingual quality.
- Ollama. Run an embedding model locally. Free, fully private. Recommended models: nomic-embed-text, mxbai-embed-large.
- LM Studio. Same idea as Ollama; point Hyprcore at the LM Studio server.
- OpenRouter. Bring your own OpenRouter key.
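For a sense of what the local option involves, here is a sketch of calling Ollama's embeddings endpoint with only the Python standard library. It assumes Ollama's documented default port (11434) and its `/api/embeddings` route; this is not Hyprcore's internal code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # Ollama's default local port

def embed_request(model: str, text: str) -> urllib.request.Request:
    """Build the POST request Ollama's embeddings endpoint expects."""
    body = json.dumps({"model": model, "prompt": text}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

req = embed_request("nomic-embed-text", "how should we handle authentication")
# With a running Ollama server (after `ollama pull nomic-embed-text`), send it:
#   vec = json.loads(urllib.request.urlopen(req).read())["embedding"]
# and store the returned vector alongside the transcript for similarity search.
```

Since the model runs on your machine, the text never leaves it—the trade-off against the hosted provider is setup effort and local compute, not privacy.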
Performance
- Keyword search. Always instant.
- Semantic search. Local: 100–300 ms per query. Cloud: 200–500 ms with a network round-trip.
- Indexing. Roughly 5–10 seconds per meeting transcript on Apple Silicon with local Ollama.
Filters
Both search modes support filters:
- Folder. Limit to a specific vault folder.
- Date range. Last week, last month, custom.
- Meeting only. Search transcripts and skip notes.
- Speaker. Restrict to meetings where a particular person spoke.
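Filters like these compose naturally with a full-text index. Extending the earlier SQLite sketch, metadata columns can be stored in the same FTS5 table but marked UNINDEXED so they serve only as filter predicates, not search targets (again, the schema here is illustrative, not Hyprcore's):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# UNINDEXED columns are stored with each row but excluded from
# full-text matching — ideal for folder and date filters
conn.execute(
    "CREATE VIRTUAL TABLE notes "
    "USING fts5(body, folder UNINDEXED, day UNINDEXED)"
)
conn.executemany("INSERT INTO notes VALUES (?, ?, ?)", [
    ("Pricing review with Acme", "work",     "2024-06-03"),
    ("Pricing brainstorm",       "personal", "2024-06-04"),
])
# Full-text MATCH plus ordinary WHERE predicates in one query;
# ISO dates compare correctly as strings
rows = conn.execute(
    "SELECT body FROM notes WHERE notes MATCH ? AND folder = ? AND day >= ?",
    ("pricing", "work", "2024-06-01"),
).fetchall()
print(rows)  # [('Pricing review with Acme',)]
```

The same pattern carries over to semantic search: similarity scoring ranks the candidates, while folder, date, and speaker predicates narrow the candidate set first.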

