When you first open Hyprcore, the Local AI setup step asks you to install Ollama and pull a Gemma model. Ollama is a free third-party app that runs language models on your Mac. Hyprcore uses it for dictation cleanup, meeting summaries, and session chat — fully on-device, no cloud round-trip. You can skip the screen and use Hyprcore Cloud or your own API key instead, but if you want a local LLM, this page walks through it end to end.

What the onboarding step does

Behind the scenes, the screen runs three checks in a loop every few seconds:
  1. Is Ollama installed? It looks for the binary on your Mac.
  2. Is Ollama running? It pings http://localhost:11434.
  3. Is the Gemma model pulled? It asks Ollama for the list of installed models.
When all three are green, the Continue button appears and Hyprcore wires Ollama as the default provider for both dictation and meeting post-processing.
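You can approximate the same three checks from a terminal. This is a sketch of equivalent commands, not the exact calls Hyprcore makes; only the Ollama CLI, its default port, and the root endpoint’s "Ollama is running" reply are standard Ollama behavior:

command -v ollama                  # 1. installed? finds the binary on your PATH
curl -sf http://localhost:11434/   # 2. running? prints "Ollama is running" when up
ollama list                        # 3. pulled? lists every local model, including Gemma tags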

Install Ollama

  1. Download the macOS app. Go to ollama.com/download/mac and download the installer; the onboarding screen’s Install Ollama button opens the same link.
  2. Open the .dmg and drag Ollama into Applications. This is a standard macOS install; approve the Gatekeeper prompt the first time.
  3. Launch Ollama. Open it from Applications or Spotlight. The first launch starts the background service and adds a llama icon to the menu bar. Leave it running.
  4. Return to Hyprcore. Within a few seconds the onboarding screen flips Step 1 to a green check; you don’t need to restart Hyprcore.
Ollama runs as a background service on http://localhost:11434. Quitting it from the menu bar will stop Hyprcore’s local LLM features until you start it again.
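If you’d rather not keep the menu bar app around, the same server can be started from a terminal. This is standard Ollama behavior, not anything Hyprcore-specific:

ollama serve

Hyprcore doesn’t care how the service was started, only that something answers on http://localhost:11434.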

Pull a Gemma model

Once Ollama is running, the onboarding screen shows two choices:
Model          Size on disk   Free RAM needed   Best for
Gemma 3 1B     ~815 MB        ~2 GB             Almost any Mac. Fast cleanup, short summaries.
Gemma 4 E2B    ~7.2 GB        ~4 GB             Recommended on 16 GB+ Macs. Noticeably better summaries.
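Not sure which to pick? A quick way to check your Mac’s total memory from a terminal (plain macOS commands, nothing Hyprcore-specific):

echo "$(($(sysctl -n hw.memsize) / 1073741824)) GB of RAM"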
Pick one and click Download. The progress bar shows a live percentage. When the pull finishes, the Continue button appears. If you’d rather pull manually from a terminal:
ollama pull gemma3:1b
# or
ollama pull gemma4:e2b
Hyprcore will pick up the model on its next status check.
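To confirm the pull from the same terminal:

ollama list

Your chosen Gemma tag should appear in the output alongside its size; once it does, Hyprcore’s next poll will see it.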

What happens when you click Continue

Hyprcore writes three settings:
  • Settings → Models → Post-processing → Provider: Ollama
  • Settings → Models → Post-processing → Model: the Gemma tag you chose
  • Settings → Meetings → Processing provider: Ollama
You can change any of these later in Settings → Models.

Troubleshooting

Ollama is installed but Step 1 stays gray

Ollama has to be running, not just installed. Open it from Applications; you should see a llama icon in the menu bar. The onboarding screen polls every 3 seconds, so it should turn green within a few seconds of launch. If the menu bar icon is there but Step 1 is still gray, your firewall or a VPN may be blocking localhost. Try this from a terminal:

curl http://localhost:11434/api/tags

A working Ollama responds with a JSON list of models (possibly empty).
A model download fails or stalls

Ollama writes partial pulls into ~/.ollama/models. To retry cleanly:

ollama rm gemma3:1b
ollama pull gemma3:1b

Then click Download in Hyprcore again, or Continue if Ollama already reports the model. Common causes are flaky Wi‑Fi and corporate proxies that interrupt long downloads; a wired connection helps.
Something else is using port 11434

Another tool (a previous Ollama install, a dev server, or LM Studio’s compatibility layer) may be holding the port. Find it with:

lsof -i :11434

Quit the conflicting process, then relaunch Ollama. Hyprcore expects Ollama at the default http://localhost:11434; changing the host or port via OLLAMA_HOST is not supported during onboarding.
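If you just need the port freed quickly, a one-line sketch (it kills whatever owns the port, so check the lsof output first):

lsof -ti :11434 | xargs kill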
Gemma 4 E2B is slow or strains your Mac

Gemma 4 E2B uses significantly more RAM and CPU than Gemma 3 1B. On 8 GB Macs, switch back to Gemma 3 1B in Settings → Models → Post-processing. You can keep Gemma 3 1B for dictation cleanup and route meeting summaries through Hyprcore Cloud or a BYOK provider for higher quality.
You want a model other than Gemma

Anything you’ve pulled in Ollama shows up automatically in Settings → Models → Post-processing → Model when the provider is Ollama. Common alternatives:
ollama pull llama3.1:8b      # 4.7 GB, decent quality
ollama pull qwen2.5:14b      # 9 GB, good for summaries
ollama pull llama3.1:70b     # 40 GB, near-cloud quality (32 GB+ Mac)
See Custom and local models for the full list and embedding models for semantic search.
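Pulling an embedding model works the same way. The tag below is a common community choice, not necessarily the one Hyprcore expects, so check the linked page first:

ollama pull nomic-embed-text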
You’d rather skip local AI

Click Skip on the onboarding screen. Hyprcore will use Hyprcore Cloud (powered by your plan credits) by default, and you can switch to BYOK Anthropic, OpenAI, Google, Groq, or LM Studio in Settings → Models at any time. See Post-processing LLMs for the full provider list.
The checks never turn green

Quit Hyprcore from the tray menu (⌘Q) and relaunch it. The onboarding screen re-runs all three checks on each launch. If the issue persists, grab logs from Settings → Advanced → Open log directory and reach out (links below).

Still stuck