When you first open Hyprcore, the Local AI setup step asks you to install Ollama and pull a Gemma model. Ollama is a free third-party app that runs language models on your Mac. Hyprcore uses it for dictation cleanup, meeting summaries, and session chat — fully on-device, no cloud round-trip. You can skip the screen and use Hyprcore Cloud or your own API key instead, but if you want a local LLM, this page walks through it end to end.
## What the onboarding step does
Behind the scenes, the screen runs three checks in a loop every few seconds:

- Is Ollama installed? It looks for the binary on your Mac.
- Is Ollama running? It pings http://localhost:11434.
- Is the Gemma model pulled? It asks Ollama for the list of installed models.
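The three checks can be approximated from a terminal. This is an illustrative sketch, not Hyprcore's actual code; `/api/tags` is Ollama's standard model-list endpoint:

```shell
# 1. Installed? Look for the binary on PATH.
command -v ollama >/dev/null 2>&1 && echo "check 1: installed" || echo "check 1: not installed"
# 2. Running? Ping the local API.
curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1 && echo "check 2: running" || echo "check 2: not running"
# 3. Model pulled? Ask Ollama for its installed models and look for a Gemma tag.
curl -fsS http://localhost:11434/api/tags 2>/dev/null | grep -q '"gemma' && echo "check 3: model present" || echo "check 3: model missing"
```

If all three print their success branch, onboarding should show three green steps.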
## Install Ollama
### Download the macOS app
Go to ollama.com/download/mac and download the installer. The Install Ollama button on the onboarding screen opens the same link.
### Open the .dmg and drag Ollama into Applications
Standard macOS install. Approve the Gatekeeper prompt the first time.
### Launch Ollama
Open Ollama from Applications or Spotlight. The first launch starts the background service and adds a llama icon to the menu bar. Leave it running.
Ollama runs as a background service on http://localhost:11434. Quitting it from the menu bar will stop Hyprcore's local LLM features until you start it again.

## Pull a Gemma model
Once Ollama is running, the onboarding screen shows two choices:

| Model | Size on disk | Free RAM needed | Best for |
|---|---|---|---|
| Gemma 3 1B | ~815 MB | ~2 GB | Almost any Mac. Fast cleanup, short summaries. |
| Gemma 4 E2B | ~7.2 GB | ~4 GB | Recommended on 16 GB+ Macs. Noticeably better summaries. |
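You can also pull a model from a terminal instead of the onboarding button; the screen detects it once the download completes. The `gemma3:1b` tag below is Ollama's library name for Gemma 3 1B — substitute whichever tag matches your choice:

```shell
# Pull the smaller model over the Ollama CLI
ollama pull gemma3:1b
# Verify it landed; the tag should appear in this list
ollama list
```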
## What happens when you click Continue
Hyprcore writes three settings:

- Settings → Models → Post-processing → Provider: Ollama
- Settings → Models → Post-processing → Model: the Gemma tag you chose
- Settings → Meetings → Processing provider: Ollama
## Troubleshooting
### Step 1 stays gray after I install Ollama
Ollama has to be running, not just installed. Open it from Applications; you should see a llama icon in the menu bar. The onboarding screen polls every 3 seconds, so the step should turn green within a few seconds of launch. If the menu bar icon is there but Step 1 is still gray, your firewall or a VPN may be blocking localhost. Try curl http://localhost:11434/api/tags from a terminal — a working Ollama responds with a JSON list (possibly empty).
### The download for Gemma fails partway through
Ollama writes partial pulls into ~/.ollama/models. Click Download in Hyprcore again to retry, or Continue if Ollama already reports the model. Common causes are flaky Wi‑Fi and corporate proxies that interrupt long downloads — a wired connection helps.
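If the in-app retry keeps failing, you can drive the pull from a terminal instead. The `gemma3:1b` tag is an assumption — use whichever tag you chose; Ollama reuses partial data rather than starting the download over:

```shell
# Re-run the pull from a terminal; partial blobs in ~/.ollama/models are reused
ollama pull gemma3:1b
# Confirm the tag is fully downloaded before returning to Hyprcore
ollama list
```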
### Port 11434 is already in use
Another tool (a previous Ollama install, a dev server, or LM Studio's compatibility layer) is holding the port. Find the conflicting process, quit it, then relaunch Ollama. Hyprcore expects Ollama at the default http://localhost:11434 — changing the host or port via OLLAMA_HOST is not supported in onboarding.
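One way to find the process holding the port (lsof ships with macOS):

```shell
# List whatever is listening on TCP 11434; prints nothing if the port is free
lsof -nP -iTCP:11434 -sTCP:LISTEN
```

The PID column tells you what to quit; after that, relaunch Ollama from Applications.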
### My Mac runs hot or feels slow during a summary
Gemma 4 E2B uses significantly more RAM and CPU than Gemma 3 1B. On 8 GB Macs, switch back to Gemma 3 1B in Settings → Models → Post-processing. You can keep Gemma 3 1B for dictation cleanup and route meeting summaries through Hyprcore Cloud or a BYOK provider for higher quality.
### I already have other Ollama models pulled
Anything you’ve pulled in Ollama shows up automatically in Settings → Models → Post-processing → Model when the provider is Ollama. See Custom and local models for the full list of common alternatives, and embedding models for semantic search.
### I don't want to install Ollama at all
Click Skip on the onboarding screen. Hyprcore will use Hyprcore Cloud (powered by your plan credits) by default, and you can switch to BYOK (Anthropic, OpenAI, Google, Groq) or LM Studio in Settings → Models at any time. See Post-processing LLMs for the full provider list.
### Onboarding is stuck even though Ollama works
Quit Hyprcore from the tray menu (⌘Q) and relaunch it. The onboarding screen re-runs all three checks on each launch. If the issue persists, grab logs from Settings → Advanced → Open log directory and reach out (links below).

## Still stuck
- Live chat. The chat bubble on hyprcore.ai and inside the web dashboard.
- Discord. discord.gg/UEvrjftB.
- Email. support@hyprcore.ai with your Ollama version (ollama --version) and Hyprcore logs attached.

