Hyprcore can run six different speech-to-text models on your Mac, with no internet required. You can also use cloud providers when you want speed or live streaming. You set the model in two places (Settings → Models → Dictation and Settings → Models → Meetings) because dictation and meetings often call for different tradeoffs.
## Local models
| Model | Size on disk | Best for | Notes |
|---|---|---|---|
| Parakeet V3 | 478 MB | Dictation | CPU-only, ~5× real-time, auto-detects language. Best default for dictation. |
| Parakeet V2 | 473 MB | Dictation (legacy) | Older. Stick with V3 unless you have a specific reason. |
| NVIDIA Canary | ~1 GB | Multilingual meetings | NVIDIA’s NeMo Canary model. Strong multilingual accuracy across English, Spanish, French, and German. Apple Silicon recommended. |
| Whisper Small | 487 MB | Light meetings | Fast, decent accuracy. |
| Whisper Medium | 492 MB | Meetings | Balanced. Good default for meetings. |
| Whisper Turbo | 1.6 GB | Long meetings | Fast inference on long audio. Apple Silicon recommended. |
| Whisper Large | 1.1 GB | Maximum accuracy | Slow on Intel; great on Apple Silicon. |
| Moonshine, SenseVoice, Breeze, GigaAM | varies | Specialized | Specialized models for specific languages and domains. Pick one in Settings → Models when its language fits yours better than Whisper. |
## Which one should I use?
- Just dictating? Parakeet V3. Auto-language, fastest CPU performance, no GPU dependency.
- Recording short meetings (under 30 minutes)? Whisper Medium. Good summary inputs.
- Recording long meetings or podcasts? Whisper Turbo on Apple Silicon. It scales better with hours of audio.
- Multilingual meetings? NVIDIA Canary. Best-in-class for English, Spanish, French, and German.
- Need maximum accuracy and you have an M-series Mac? Whisper Large.
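The recommendations above boil down to a small lookup. As a rough sketch (this helper is illustrative, not part of Hyprcore; the use-case labels are assumptions):

```python
def recommend_model(use_case: str, apple_silicon: bool = True) -> str:
    """Map a use case to the model suggested above (illustrative only)."""
    if use_case == "dictation":
        return "Parakeet V3"              # CPU-only, auto-detects language
    if use_case == "short-meeting":       # under ~30 minutes
        return "Whisper Medium"
    if use_case == "long-meeting":
        # Turbo scales better with hours of audio, but wants Apple Silicon.
        return "Whisper Turbo" if apple_silicon else "Whisper Medium"
    if use_case == "multilingual":
        return "NVIDIA Canary"            # English, Spanish, French, German
    if use_case == "max-accuracy":
        # Large is slow on Intel; fall back to the balanced default there.
        return "Whisper Large" if apple_silicon else "Whisper Medium"
    raise ValueError(f"unknown use case: {use_case}")
```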
## Cloud models
Cloud STT runs on a provider’s servers. It is faster, supports live streaming, and enables diarization (who said what). The tradeoff is that audio leaves your Mac, and you pay for usage.

| Provider | Modes | Diarization | Cost |
|---|---|---|---|
| Deepgram | Batch + live | Yes (best in class) | Hyprcore Cloud credits or BYOK |
| Groq Whisper | Batch | Limited | Hyprcore Cloud credits or BYOK |
| OpenAI Whisper API | Batch | No | BYOK only |
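Reading the table as a decision rule: live streaming or diarization points at Deepgram, and OpenAI is the only BYOK-only option. A minimal sketch of that logic (function name and flags are assumptions for illustration):

```python
def pick_cloud_provider(need_live: bool,
                        need_diarization: bool,
                        byok_only: bool) -> str:
    """Choose a cloud STT provider from the tradeoffs above (illustrative)."""
    if need_live or need_diarization:
        return "Deepgram"             # only provider with live mode + full diarization
    if byok_only:
        return "OpenAI Whisper API"   # batch only, no diarization, BYOK only
    return "Groq Whisper"             # batch; works with credits or BYOK
```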
## Provider policy
In Settings → Meetings → STT provider policy, choose how Hyprcore picks an engine:

- Prefer local (default). Use the local model unless you explicitly switch.
- Prefer cloud. Use the cloud provider when available; fall back to local on errors.
- Local only. Never call the cloud, even if a provider is configured.
- Cloud only. Always use the cloud; fail if it’s not available.
- Manual. Always ask which engine to use when starting a meeting.
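The five policies above can be sketched as one resolution function. This is a hedged approximation of the behavior described, not Hyprcore's actual implementation; the policy strings are assumptions:

```python
from typing import Callable, Optional

def resolve_engine(policy: str,
                   cloud_available: bool,
                   ask_user: Optional[Callable[[], str]] = None) -> str:
    """Pick 'local' or 'cloud' under an STT provider policy (illustrative)."""
    if policy == "local-only":
        return "local"                # never call the cloud
    if policy == "cloud-only":
        if not cloud_available:
            raise RuntimeError("cloud unavailable and policy is cloud-only")
        return "cloud"
    if policy == "prefer-cloud":
        return "cloud" if cloud_available else "local"   # fall back on errors
    if policy == "manual":
        return ask_user() if ask_user else "local"       # prompt each meeting
    return "local"                    # prefer-local, the default
```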
## Downloading
When you select a model that’s not yet on disk, Hyprcore downloads it from models.hyprcore.ai. Download progress shows in the model picker. Models are cached at:

