# AI Settings
Workspace administrators have two controls in Settings > Workspace > AI:
- External LLM toggle — enable or disable all calls to external large language models. When off, no data is sent to any third-party AI provider. Internal ML models continue to run.
- BYOC provider configuration — route LLM calls through your own cloud provider account instead of GitGuardian's sub-processors. AWS Bedrock is the first supported provider, with more to follow.
## Customer options
| Option | External LLM calls | What you get |
|---|---|---|
| Default (GitGuardian sub-processors) | Yes — via declared sub-processors | Full LLM-powered features. See the AI Management Policy. |
| BYOC (Bring Your Own Cloud) | Yes — via your own cloud account | Full LLM-powered features, routed through your own infrastructure. At launch: AWS Bedrock only. |
| External LLM off | No | External LLM features are disabled. Internal ML models remain active. |
## What the toggle controls
"AI features" is an umbrella term that covers two distinct layers:
- Internal ML models — run entirely on GitGuardian infrastructure. No data leaves. These currently power false positive detection, prioritization scoring, and similar issues grouping.
- External LLM calls — calls to third-party large language models via GitGuardian's sub-processors (or your own cloud account with BYOC).
The workspace toggle controls only external LLM calls. Internal ML models are unaffected and continue to run regardless of the toggle's state.
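As a rough sketch of the routing logic described above (all names are hypothetical illustrations, not GitGuardian's actual API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class WorkspaceAISettings:
    # Hypothetical representation of the Settings > Workspace > AI controls.
    external_llm_enabled: bool = True
    byoc_provider: Optional[str] = None  # e.g. "aws_bedrock" when BYOC is configured

def route_ai_request(settings: WorkspaceAISettings, needs_external_llm: bool) -> str:
    """Decide where an AI-powered feature's request is processed."""
    if not needs_external_llm:
        # Internal ML models (false positive detection, prioritization
        # scoring, similar issues grouping) run entirely on GitGuardian
        # infrastructure, regardless of the toggle.
        return "internal_ml"
    if not settings.external_llm_enabled:
        # Toggle off: no data is sent to any third-party AI provider.
        return "disabled"
    # Toggle on: route through your own cloud account if BYOC is
    # configured, otherwise through GitGuardian's declared sub-processors.
    return settings.byoc_provider or "gitguardian_subprocessors"
```

Note how turning the toggle off disables only the external path; requests served by internal ML models are routed the same way either way.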
## Bring Your Own Cloud (BYOC)
> **Info:** At launch, the only supported BYOC provider is AWS Bedrock. Other cloud providers will follow.
BYOC lets you route LLM calls through your own cloud provider account, giving your security team full control over the infrastructure that processes AI requests while keeping all LLM-powered features.
GitGuardian offers BYOC (Bring Your Own Cloud), not BYOK (Bring Your Own Key). BYOC is the stronger guarantee: with BYOK you control only the encryption keys, whereas with BYOC the LLM calls themselves go through your infrastructure.
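To illustrate what "LLM calls through your own cloud account" means in practice with AWS Bedrock, here is a hedged sketch (the model ID is illustrative, the payload follows the Anthropic Messages format that Bedrock expects, and the actual invocation is shown as a comment because it requires AWS credentials):

```python
import json

def build_bedrock_request(
    prompt: str,
    model_id: str = "anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
) -> dict:
    """Build the arguments for a Bedrock invoke_model call."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    }
    return {"modelId": model_id, "body": json.dumps(body)}

# With BYOC, a call like the following runs under *your* AWS account, so the
# request path, logging, and billing stay inside infrastructure you control:
#
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.invoke_model(**build_bedrock_request("Summarize this incident."))
```

The point of the sketch is the trust boundary, not the payload details: under BYOC the Bedrock endpoint is reached with your own cloud credentials, so no prompt data transits GitGuardian's sub-processors.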