Speech models turn audio into text. Post-processing LLMs turn that text into something polished: a clean dictation, a meeting summary, an answer to a question. Hyprcore lets you pick whichever LLM you trust.
## Configure in one place
Open Settings → Models → Post-processing. Pick a provider and a model. The same selection drives:

- Dictation post-processing
- Meeting summaries and templates
- Session chat answers
- Action item extraction
## Provider options
| Provider | Models | Where it runs | Cost |
|---|---|---|---|
| Hyprcore Cloud | Claude, GPT, Gemini (managed) | Hyprcore servers, OpenRouter under the hood | Plan credits |
| Anthropic Claude | Sonnet, Haiku | Anthropic API | BYOK |
| OpenAI | GPT-4o, GPT-4o-mini | OpenAI API | BYOK |
| Google Gemini | Flash, Pro | Google API | BYOK |
| Groq Llama | Llama Scout, Maverick | Groq | BYOK |
| Apple Intelligence | On-device foundation models | Your Mac (Apple Silicon, macOS 15.1+) | Free |
| Ollama | Whatever you’ve pulled | Your Mac | Free |
| LM Studio | Whatever you’ve loaded | Your Mac | Free |
## Picking a model for the job
- Dictation cleanup. Small and fast wins: GPT-4o-mini, Claude Haiku, Gemini Flash, or Apple Intelligence all do a great job. Avoid big, slow models; they make every dictation feel sluggish.
- Meeting summaries. Bigger models produce noticeably better summaries: Claude Sonnet, GPT-4o, or Gemini Pro. Worth the credits.
- Session chat. Same as summaries: use a big model.
- Code-heavy meetings. Claude Sonnet handles code in transcripts most reliably.
## Bring your own key
For BYOK providers:

1. Open Settings → Models → Post-processing.
2. Pick the provider.
3. Click Add API key.
4. Paste your key.
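If you want to sanity-check a key before pasting it in, you can call the provider's API directly. A minimal sketch, assuming your keys are exported as `OPENAI_API_KEY` and `ANTHROPIC_API_KEY` (the variable names here are just for the example):

```bash
# OpenAI key check: a 200 response with a model list means the key works.
curl -s https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY" | head

# Anthropic equivalent (note the x-api-key header and required version header).
curl -s https://api.anthropic.com/v1/models \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" | head
```

Google and Groq have analogous model-list endpoints if you want the same check for those keys.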
## Local LLMs (Ollama, LM Studio)
For maximum privacy, run an LLM locally:

1. Pull a model. For Ollama: `ollama pull llama3.1:8b` (or another model). For LM Studio: search and download from the LM Studio app.
2. Start the server. Ollama runs at http://localhost:11434 by default. LM Studio's local server is at http://localhost:1234.
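Before pointing Hyprcore at a local server, it's worth confirming the endpoint answers. A quick sketch against the default Ollama and LM Studio ports; the model names are placeholders for whatever you've pulled or loaded:

```bash
# Ollama: list the models you've pulled (the server must be running).
curl -s http://localhost:11434/api/tags

# Ollama: a one-off generation to confirm the model loads and responds.
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3.1:8b", "prompt": "Say hi", "stream": false}'

# LM Studio: OpenAI-compatible chat endpoint; "model" must match a loaded model.
curl -s http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "local-model", "messages": [{"role": "user", "content": "Say hi"}]}'
```

Both servers also speak the OpenAI-compatible chat API (Ollama exposes it at /v1/chat/completions as well), so the same request shape works against either port.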
