Raw transcripts are accurate but messy: ums, false starts, "like, I mean" trailing off. Post-processing runs each transcript through an LLM with a prompt template, so the final pasted text reads the way you'd write it.
When it runs
- Always, if you've bound a shortcut to Toggle with post-processing.
- Per-session, if you press the post-processing shortcut on this dictation but normally use the plain toggle.
- Never, if you turn post-processing off in Settings → Dictation.
Pick a template
Templates live in Settings → Post-processing → Templates. Hyprcore ships with a few defaults you can use as-is or duplicate and edit:
- Cleanup. Remove filler words and fix obvious typos. Closest to "what you actually said" without the ums.
- Professional tone. Cleanup plus a more polished voice. Good for emails, customer messages, and Slack to coworkers.
- Casual tone. Cleanup, but conversational. Good for personal messages, journal entries.
- Bullet points. Convert a stream-of-thought into a tight bulleted list. Good for meeting prep and quick notes.
Every template has three parts:
- A system prompt that defines tone and rules.
- A user prompt template that wraps your raw transcript with {{TRANSCRIPT}} substitution.
- A target LLM provider and model.
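Conceptually, the substitution is a simple string replacement. Here's a minimal Python sketch of the idea; the template text and function name are illustrative, not Hyprcore's internals:

```python
# Illustrative template, not one of Hyprcore's shipped defaults.
SYSTEM_PROMPT = (
    "You clean up dictated text. Remove filler words and fix obvious "
    "typos, but keep names, code, and URLs verbatim."
)

USER_TEMPLATE = "Clean up this transcript:\n\n{{TRANSCRIPT}}"

def render_user_prompt(template: str, transcript: str) -> str:
    """Substitute the raw transcript into the {{TRANSCRIPT}} placeholder."""
    return template.replace("{{TRANSCRIPT}}", transcript)

prompt = render_user_prompt(USER_TEMPLATE, "um, so, like, ship it Friday")
```

The rendered user prompt and the system prompt are then sent together to whichever provider and model the template targets.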
Pick an LLM provider
Configure providers in Settings → Models → Post-processing.

| Provider | Where it runs | Notes |
|---|---|---|
| Hyprcore Cloud | Managed via Hyprcore (uses OpenRouter) | Easiest. Pay with credits from your plan. |
| Anthropic Claude | Anthropic’s API | Bring your own API key. Sonnet for quality, Haiku for speed. |
| OpenAI | OpenAI’s API | GPT-4o or GPT-4o-mini. Bring your own key. |
| Google Gemini | Google’s API | Flash for speed, Pro for quality. Bring your own key. |
| Llama (Groq, Together) | Groq or Together | Fast and cheap. Llama Scout / Maverick. |
| Apple Intelligence | On-device, Apple Silicon only | Free, fully local, smaller models. macOS 15.1+. |
| Ollama | Your own machine | Fully local, fully private. Install Ollama separately and point Hyprcore at http://localhost:11434. |
| LM Studio | Your own machine | Same idea as Ollama, point Hyprcore at the LM Studio server. |
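With a local provider like Ollama, post-processing is an ordinary HTTP call under the hood. A rough sketch of what such a request looks like against Ollama's /api/generate endpoint; the model name and prompt strings are placeholders, and this is not Hyprcore's actual client code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default address

def build_request(model: str, system: str, transcript: str) -> dict:
    """Assemble a non-streaming generate request for Ollama's REST API."""
    return {
        "model": model,  # e.g. "llama3.2" -- whatever model you've pulled
        "system": system,
        "prompt": f"Clean up this transcript:\n\n{transcript}",
        "stream": False,  # return one JSON object instead of a stream
    }

def post_process(model: str, system: str, transcript: str) -> str:
    """Send the transcript to the local Ollama server (must be running)."""
    payload = json.dumps(build_request(model, system, transcript)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Nothing in this flow leaves your machine, which is the point of the local providers.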
Performance
Post-processing adds a second or two to dictation. If you're chatting in real time and that lag is annoying, leave the default toggle on plain transcription and reserve the post-processing shortcut for longer drafts.
Tips for good prompts
- Lead with what the output should be, not what the input is. “Output: a polished email opening” beats “the user dictated some text.”
- Tell the model what to keep verbatim—names, code, URLs, technical terms.
- For language-switching templates, add an explicit “do not translate proper nouns” instruction.
- Use Cleanup as a baseline and stack stylistic rules on top, rather than writing a giant single template.
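The last tip can be as simple as concatenation: keep one shared Cleanup baseline and append a style rule per template. A hypothetical sketch (these strings are not Hyprcore's shipped templates):

```python
# Hypothetical baseline and style rules, not Hyprcore's shipped defaults.
CLEANUP_BASELINE = (
    "Output: the same text, cleaned up. Remove filler words and fix "
    "obvious typos. Keep names, code, URLs, and technical terms verbatim."
)

STYLE_RULES = {
    "professional": "Use a polished, professional tone suitable for email.",
    "casual": "Keep it conversational, like a message to a friend.",
}

def build_system_prompt(style=None):
    """Stack an optional style rule on top of the shared Cleanup baseline."""
    rules = [CLEANUP_BASELINE]
    if style is not None:
        rules.append(STYLE_RULES[style])
    return "\n\n".join(rules)
```

Editing the baseline then updates every template at once, instead of hunting the same rule down in several giant prompts.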

