Why enable third-party LLMs
AI has been part of GitGuardian from the start. We built and hosted our own machine-learning models to power capabilities like false positive detection, risk scoring, incident enrichment, and guided remediation. These internal models served us well, but the landscape has changed.
Third-party large language models (LLMs) now deliver a quality leap that is transformational, not incremental. They understand code and context with a depth that purpose-built internal models cannot match, and they improve at a pace we could not sustain alone. We are progressively replacing our internal models with third-party LLMs to bring that value to every customer.
Our reliance on third-party LLMs will only grow: the improvements they deliver to false positive reduction, contextual analysis, and guided remediation are too significant to withhold from our customers.
Adopting third-party LLMs does not mean compromising on security. Before any data is sent to an external model, our secrets detection engine scans it and redacts every secret it finds. Customer data is never used for training and is never retained by any AI provider. For full details, see our AI management policy; to control how AI features operate in your workspace, see AI settings.