Documentation Index
Fetch the complete documentation index at: https://hyperwhisper.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
Where your audio goes
HyperWhisper supports three transcription paths, each with different privacy implications:

| Path | Where audio goes | Who’s responsible for training opt-out |
|---|---|---|
| Local models (Whisper, Parakeet) | Never leaves your device | You — but it’s offline, so there’s nothing to opt out of |
| HyperWhisper Cloud | Routed through HyperWhisper to a backing provider (Grok STT, ElevenLabs, Deepgram, or Groq) | HyperWhisper — we opt you out on every upstream provider |
| Your own API key (BYOK) | Sent directly from your device to the provider | You — except Deepgram, where we opt out for you automatically |
HyperWhisper Cloud
When you use HyperWhisper Cloud, we forward your audio to a backing speech-to-text provider, and your transcribed text to a backing LLM for post-processing. We don’t store your audio on HyperWhisper’s own servers — it’s processed in memory at the edge and discarded after the response is returned. We’ve opted you out of model training on every upstream provider we use. That means:

- For providers that expose a per-request opt-out flag, we set it on every call we make on your behalf.
- For providers that only expose an account-level toggle, we’ve turned it off on the HyperWhisper account that handles your requests.
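The per-request half of this policy follows a simple pattern, sketched below. This is an illustrative sketch, not HyperWhisper’s actual code: the provider table and helper function are hypothetical, and only Deepgram’s `mip_opt_out` flag is a real parameter.

```python
# Hypothetical sketch of the per-request opt-out pattern described above.
# Only Deepgram's mip_opt_out is a real provider parameter; the table and
# helper are illustrative.
from urllib.parse import urlencode

# Providers that expose a per-request opt-out flag get it on every call;
# providers absent from this table rely on an account-level toggle instead.
PER_REQUEST_OPT_OUT = {
    "deepgram": {"mip_opt_out": "true"},
}

def with_opt_out(provider: str, params: dict) -> dict:
    """Merge the provider's training opt-out flag into the request params."""
    return {**params, **PER_REQUEST_OPT_OUT.get(provider, {})}

query = urlencode(with_opt_out("deepgram", {"model": "nova-2"}))
print(query)  # model=nova-2&mip_opt_out=true
```

Providers without a per-request flag fall through unchanged, which is why those accounts need the account-level toggle turned off instead.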
Your own API keys (BYOK)
When you configure your own API key in Settings → API Keys, your audio is sent directly from your device to that provider, using your account, on their terms.

Deepgram BYOK — we opt out for you automatically
For Deepgram specifically, HyperWhisper does the work for you on both macOS and Windows: every direct Deepgram request the app sends includes `mip_opt_out=true` in the query string. You don’t need to change any setting on your Deepgram dashboard for this to take effect — it applies on a per-request basis. Verify it on a recent request in console.deepgram.com under Usage → Logs: the request detail should show `mip_opt_out: true`.
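As a concrete illustration, a direct request to Deepgram’s `/v1/listen` endpoint carrying that flag looks roughly like this. The endpoint and `mip_opt_out` parameter are Deepgram’s; the model choice and surrounding code are illustrative, and actually sending the request requires your own key.

```python
# Sketch of a direct (BYOK-style) Deepgram request carrying the
# per-request training opt-out flag. The endpoint and mip_opt_out
# parameter are Deepgram's; everything else is illustrative.
from urllib.parse import urlencode, urlparse, parse_qs

params = {
    "model": "nova-2",       # illustrative model choice
    "mip_opt_out": "true",   # opt this request out of model training
}
url = "https://api.deepgram.com/v1/listen?" + urlencode(params)

# Sending it needs a real key, e.g. with the requests library:
#   requests.post(url,
#                 headers={"Authorization": f"Token {DEEPGRAM_API_KEY}",
#                          "Content-Type": "audio/wav"},
#                 data=open("audio.wav", "rb"))

# The flag sits in the query string, matching what Usage → Logs shows:
assert parse_qs(urlparse(url).query)["mip_opt_out"] == ["true"]
print(url)
```

The same query string is what you should see echoed back in the request detail under Usage → Logs.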
Other providers — quick reference
| Provider | Trains on API data by default? | How to opt out |
|---|---|---|
| Groq | No — inference data is not retained by default | Optional: enable Zero Data Retention in console.groq.com → Data Controls. Source |
| Anthropic (Claude) | No — commercial API inputs/outputs are not used for training | Nothing to do for default usage. Source |
| Cerebras | No — API content is not used to train or fine-tune models | Nothing to do. Source |
| OpenAI | No (since March 2023) — API data is not used for training by default | Nothing to do; verify under platform.openai.com → Settings → Data Controls |
| xAI Grok | No — API inputs and outputs are not used for training by default | Nothing to do for default usage; verify in the xAI data controls if your use case is sensitive |
| Google Gemini | Free tier (AI Studio): yes, paid API: no | Use a paid API key, or change settings in aistudio.google.com |
| ElevenLabs | Retention is enabled by default; Zero Retention Mode is enterprise-only | Contact ElevenLabs sales if your use case requires it |
| AssemblyAI | No by default — verify on their security & privacy page | Nothing to do for default usage |
| Fireworks AI, Mistral | Verify directly on their dashboards | See “Verifying any provider” below |
This table reflects publicly stated policies as of when this page was written. Providers can change their terms — for any provider whose policy is critical to your use case, verify directly using the prompt below.
Verifying any provider
Provider data policies move around — pages get renamed, settings get redesigned. The most reliable check is to ask a current language model. Open ChatGPT, Claude, or any LLM with web access and paste a prompt such as: “Does {PROVIDER} use data sent to its API to train models by default, and if so, how do I opt out? Check their current documentation and terms.”

If you’ve already used a key without opting out
A few things you can do:

- Opt out now — most providers stop using future requests for training the moment you flip the setting, even if past requests were used.
- Request deletion — many providers honor data deletion requests for previously sent content. The same LLM prompt above can ask “how do I request deletion of past API data on {PROVIDER}?”
- Rotate the key — if you want a hard line in the sand, generate a new key on the provider’s dashboard and replace the old one in Settings → API Keys.
Summary
- Local = nothing leaves your device.
- HyperWhisper Cloud = we opt you out of model training on every upstream provider, both at the request level and the account level.
- Your own API key = mostly your responsibility — except for Deepgram, which HyperWhisper auto-opts-out by adding `mip_opt_out=true` to every request the app sends.
Related documentation:
