

Session chat is RAG (retrieval-augmented generation) over your meetings and notes. It lets you ask natural-language questions and get answers grounded in your own content, with citations to the source meeting.

Two scopes

  • Per-meeting chat. Open any meeting page → click Chat. The model has only that meeting’s transcript and notes as context.
  • Global chat. Open the Chat section in the sidebar. The model can pull from any meeting in your knowledge base.
Per-meeting chat is precise and cheap. Global chat is broader and uses more credits because it has to retrieve relevant chunks from across everything you’ve recorded.

How it works

  1. You ask a question.
  2. Hyprcore embeds the question and finds the most relevant chunks of transcripts and notes.
  3. Those chunks are passed to the LLM with your question.
  4. The LLM answers and cites which meetings it used.
  5. You can click any citation to jump to the source.
This means answers are grounded—the model doesn’t make things up about your meetings. If your knowledge base doesn’t contain the answer, the model says so instead of hallucinating.
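The retrieve-then-answer loop above can be sketched in a few lines. This is a minimal illustration, not Hyprcore's implementation: `embed()` here is a toy bag-of-characters stand-in for a real embedding model, and the chunk fields (`meeting`, `text`) are assumed names.

```python
import math

def embed(text: str) -> list[float]:
    # Toy character-frequency embedding, purely for illustration.
    # A real system would call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are unit-normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

def retrieve(question: str, chunks: list[dict], k: int = 2) -> list[dict]:
    # Step 2: embed the question and rank chunks by similarity.
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c["text"])), reverse=True)
    return ranked[:k]

chunks = [
    {"meeting": "Aug roadmap", "text": "We decided on tiered pricing."},
    {"meeting": "Standup", "text": "Action item: fix the login bug."},
]
top = retrieve("What did we decide about pricing?", chunks)
# Steps 3-4: the top chunks plus the question go to the LLM, and the
# answer cites each chunk's "meeting" field so you can jump to the source.
```

If none of the retrieved chunks actually contain the answer, the prompt instructs the model to say so rather than guess, which is what keeps answers grounded.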

Good questions to ask

  • “What did we decide about pricing in the August roadmap meeting?”
  • “List every action item assigned to me last week.”
  • “Which customers have asked about the API in the last 30 days?”
  • “Has anyone on the team talked about ClickHouse?”
  • “Summarize all my 1:1s with Sarah this quarter.”
For action items and structured queries, results are better when the underlying meetings have good summaries (since action items are extracted there).

Picking a model

The chat uses your default LLM, configured in Settings → Models → Post-processing. For session chat:
  • Claude Sonnet is the best default—long context, careful with citations.
  • GPT-4o is a close second.
  • Hyprcore Cloud routes to Claude or GPT depending on the model you pick; it’s the easiest option because everything draws from one credit pool.
  • Local Ollama works for short questions if you don’t want to spend credits, but quality drops with smaller models.
Each question costs credits if you’re on Hyprcore Cloud—roughly 5–30 credits depending on how much context the model has to read.
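To build intuition for that 5–30 range, here is a hedged sketch that scales the credit estimate with how much context the model reads. The linear model and the 100k-token ceiling are assumptions for illustration, not Hyprcore's actual billing formula.

```python
def estimate_credits(
    context_tokens: int,
    min_credits: int = 5,       # floor from the 5-30 range above
    max_credits: int = 30,      # ceiling from the 5-30 range above
    max_tokens: int = 100_000,  # assumed context budget, not a documented limit
) -> int:
    # Scale linearly with the fraction of the context budget consumed.
    frac = min(context_tokens / max_tokens, 1.0)
    return round(min_credits + frac * (max_credits - min_credits))
```

A short per-meeting question might land near the floor, while a global question over a quarter of meetings approaches the ceiling.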

Filters

In global chat, you can scope the search before asking:
  • By folder. Only meetings inside a specific folder.
  • By date. Last week, this quarter, custom range.
  • By speaker. Meetings where a specific person spoke.
  • By template. Only standups, only customer calls, etc.
This both improves answer quality and reduces credit cost.
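Conceptually, filters narrow the candidate set before any retrieval happens, which is why they cut both noise and cost. A minimal sketch, assuming hypothetical field names (`folder`, `date`, `speakers`, `template`) rather than Hyprcore's actual schema:

```python
from datetime import date

def scope(meetings, folder=None, after=None, speaker=None, template=None):
    # Apply each filter only if it was provided; retrieval then runs
    # over the smaller result set instead of the whole knowledge base.
    out = meetings
    if folder:
        out = [m for m in out if m["folder"] == folder]
    if after:
        out = [m for m in out if m["date"] >= after]
    if speaker:
        out = [m for m in out if speaker in m["speakers"]]
    if template:
        out = [m for m in out if m["template"] == template]
    return out

meetings = [
    {"folder": "Sales", "date": date(2024, 8, 2),
     "speakers": ["Sarah"], "template": "customer-call"},
    {"folder": "Eng", "date": date(2024, 7, 1),
     "speakers": ["Lee"], "template": "standup"},
]
scoped = scope(meetings, folder="Sales", after=date(2024, 8, 1))
# Only the scoped meetings are embedded and searched, so the model
# reads fewer chunks and the question costs fewer credits.
```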

What’s not yet supported

  • Asking the model to take actions (creating meetings, editing notes). Today, chat is read-only over your vault.
  • Citing audio timestamps. Citations point to the meeting and section, but not yet to a specific second of audio.
Both are on the roadmap.