
Raw transcripts are accurate but messy: ums, false starts, sentences that trail off into “like, I mean.” Post-processing runs each transcript through an LLM with a prompt template, so the final pasted text reads the way you’d write it.

When it runs

  • Always, if you bound a shortcut to Toggle with post-processing.
  • Per-dictation, if you press the post-processing shortcut for that one dictation but normally use the plain toggle.
  • Never, if you turn post-processing off in Settings → Dictation.

Pick a template

Templates live in Settings → Post-processing → Templates. Hyprcore ships with a few defaults you can use as-is or duplicate and edit:
  • Cleanup. Remove filler words and fix obvious typos. Closest to “what you actually said” without the ums.
  • Professional tone. Cleanup plus a more polished voice. Good for emails, customer messages, and Slack messages to coworkers.
  • Casual tone. Cleanup, but conversational. Good for personal messages and journal entries.
  • Bullet points. Convert a stream-of-thought into a tight bulleted list. Good for meeting prep and quick notes.
Each template has:
  • A system prompt that defines tone and rules.
  • A user prompt template that wraps your raw transcript with {{TRANSCRIPT}} substitution.
  • A target LLM provider and model.
You can write your own templates—anything from “translate to Spanish” to “rewrite as a JIRA ticket” to “respond as if I’m dictating Markdown.”
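As a sketch of how these pieces fit together, the snippet below models a template as a plain dictionary and fills in the {{TRANSCRIPT}} placeholder. The field names, template text, and `render_user_prompt` helper are illustrative assumptions, not Hyprcore's internal API.

```python
# Hypothetical sketch of a post-processing template. Field names and the
# render_user_prompt() helper are assumptions, not Hyprcore's actual API.

CLEANUP_TEMPLATE = {
    "name": "Cleanup",
    "system_prompt": (
        "You clean up dictated text. Remove filler words and fix obvious "
        "typos. Keep names, code, URLs, and technical terms verbatim."
    ),
    "user_prompt": "Clean up this transcript:\n\n{{TRANSCRIPT}}",
    "provider": "anthropic",   # placeholder provider/model pair
    "model": "claude-haiku",
}

def render_user_prompt(template: dict, transcript: str) -> str:
    """Substitute the raw transcript into the user prompt template."""
    return template["user_prompt"].replace("{{TRANSCRIPT}}", transcript)

prompt = render_user_prompt(CLEANUP_TEMPLATE, "um, so, like, send the report")
```

The system prompt carries the tone and rules; the user prompt only wraps the transcript, so swapping templates never requires touching the transcript itself.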

Pick an LLM provider

Configure providers in Settings → Models → Post-processing.
| Provider | Where it runs | Notes |
| --- | --- | --- |
| Hyprcore Cloud | Managed via Hyprcore (uses OpenRouter) | Easiest. Pay with credits from your plan. |
| Anthropic Claude | Anthropic’s API | Bring your own API key. Sonnet for quality, Haiku for speed. |
| OpenAI | OpenAI’s API | GPT-4o or GPT-4o-mini. Bring your own key. |
| Google Gemini | Google’s API | Flash for speed, Pro for quality. Bring your own key. |
| Llama (Groq, Together) | Groq or Together | Fast and cheap. Llama Scout / Maverick. |
| Apple Intelligence | On-device, Apple Silicon only | Free, fully local, smaller models. macOS 15.1+. |
| Ollama | Your own machine | Fully local, fully private. Install Ollama separately and point Hyprcore at http://localhost:11434. |
| LM Studio | Your own machine | Same idea as Ollama: point Hyprcore at the LM Studio server. |
If you use Hyprcore Cloud, the cost is billed in credits—see Credits for the per-action rates.
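For the fully local route, the sketch below shows what a post-processing request to a local Ollama server could look like, using Ollama's `/api/chat` endpoint. The model name, prompts, and helper functions are placeholders, and this is an illustration of the local setup, not what Hyprcore sends under the hood.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama address

def build_payload(model: str, system_prompt: str, transcript: str) -> dict:
    """Assemble an Ollama chat request: system prompt + raw transcript."""
    return {
        "model": model,
        "stream": False,  # return one complete response, not chunks
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": transcript},
        ],
    }

def post_process(payload: dict) -> str:
    """Send the request to the local server (requires Ollama running)."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

payload = build_payload("llama3.2", "Remove filler words.", "um, ship it today")
```

Nothing here leaves your machine: the transcript goes to localhost and the cleaned text comes back from whatever model Ollama has loaded.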

Performance

Post-processing adds a second or two to dictation. If you’re chatting in real time and that lag is annoying, leave the default toggle on plain transcription and reserve the post-processing shortcut for longer drafts.

Tips for good prompts

  • Lead with what the output should be, not what the input is. “Output: a polished email opening” beats “the user dictated some text.”
  • Tell the model what to keep verbatim—names, code, URLs, technical terms.
  • For language-switching templates, add an explicit “do not translate proper nouns” instruction.
  • Use Cleanup as a baseline and stack stylistic rules on top, rather than writing a giant single template.