
Applicative metrics

Applicative metrics are gathered with Prometheus, an event monitoring and alerting tool that scrapes the metrics data exposed by the application.

Enable or disable applicative metrics

Applicative metrics are deactivated by default. Two steps are needed to activate them:

  • authorize collection of metrics by the application
  • activate Prometheus export

Authorize metrics collection

To authorize metrics collection, go to the Preferences section of the Admin Area, check the prometheus_metrics_active feature flag, and save the settings.


To disable it, uncheck this parameter and save the settings.

Activate Prometheus export using KOTS

Metrics are collected by Prometheus using the Prometheus Operator. This Operator is automatically installed within Embedded Clusters.

For Existing Clusters, you must install it manually (see the installation documentation).

To create the exporter resources and allow automatic discovery, go to the KOTS Admin Console and check the Activate Prometheus Exporter checkbox in the Prometheus section of the configuration.


Then save the configuration and Deploy the application to apply the new configuration.

To disable it, uncheck this parameter, save the configuration, and apply it through a new deployment.

Activate Prometheus export using Helm

The applicative metrics exporter can be enabled by setting observability.exporters.webAppExporter.enabled=true in your values file:

observability:
  exporters:
    webAppExporter:
      enabled: true

Please note that the Helm chart also provides a Celery exporter that monitors several Celery metrics, such as the count of active tasks by queue. The full list of metrics is available here.

You can activate Celery Exporter in your values file:

observability:
  exporters:
    webAppExporter:
      enabled: true
    statefulAppExporter:
      enabled: true

If you use the Prometheus Operator, you may want to use the ServiceMonitors provided for automatic discovery of the exporters by Prometheus:

observability:
  exporters:
    webAppExporter:
      enabled: true
    statefulAppExporter:
      enabled: true
  serviceMonitors:
    enabled: true

Otherwise, you can scrape the exporters manually.

  • Applicative metrics can be scraped from the app-exporter service at: http://app-exporter:9808/metrics
  • Celery metrics can be scraped from the celery-exporter service at: http://celery-exporter:9808/metrics
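If you scrape the exporters manually, the two endpoints above can be declared in a plain Prometheus configuration. This is a minimal sketch, assuming the exporter services are reachable from Prometheus under those in-cluster DNS names (prefix them with the namespace, e.g. app-exporter.my-namespace, if Prometheus runs elsewhere):

```yaml
# Minimal scrape configuration for manual scraping of both exporters.
# Job names are arbitrary examples; targets match the services above.
scrape_configs:
  - job_name: gitguardian-app
    metrics_path: /metrics
    static_configs:
      - targets: ["app-exporter:9808"]
  - job_name: gitguardian-celery
    metrics_path: /metrics
    static_configs:
      - targets: ["celery-exporter:9808"]
```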

How to collect metrics

For Embedded Clusters

Prometheus is installed on Embedded clusters and allows full observability of the cluster. For more information, read the Monitoring section on Replicated website.

This kind of installation uses the Kube-Prometheus Operator, so applicative metrics are directly available.

For Existing Clusters

On Existing Clusters, Prometheus must be installed and configured manually. If the Kube-Prometheus Operator is used, all applicative metrics are automatically discovered thanks to its service discovery.

Otherwise, manual configuration may be needed. Applicative metrics discovery is possible through the app-monitoring headless service, which exposes an exporter pod serving metrics at http://exporter-xxxxx-xxxxx:9808/metrics
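Because the exporter pod name changes on each deployment, a static target is impractical here; one way to handle this is Kubernetes service discovery. This is a hedged sketch, assuming Prometheus has RBAC access to list endpoints and that "gitguardian" stands in for your actual application namespace:

```yaml
# Example scrape job discovering the exporter through the
# app-monitoring headless service, regardless of the pod's name.
scrape_configs:
  - job_name: gitguardian-app-monitoring
    metrics_path: /metrics
    kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names: ["gitguardian"]  # replace with your namespace
    relabel_configs:
      # Keep only endpoints belonging to the app-monitoring service.
      - source_labels: [__meta_kubernetes_service_name]
        regex: app-monitoring
        action: keep
```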

Metrics available

The Prometheus exporter gives access to the following metrics:

| Metric | Type | Description | Dimensions |
| --- | --- | --- | --- |
| gim_version_info | Info | Version of the application | Application version, TokenScanner version |
| gim_active_users_total | Gauge | All users in the system | None |
| gim_issues_total | Gauge | All incidents in the system | Severity, Status |
| gim_occurrences_total | Gauge | All occurrences in the system | Hidden, Status |
| gim_commits_total | Gauge | Commits processed | Account, Scan type |
| gim_public_api_quota_total | Gauge | Maximum allowed usage of the Public API | Account |
| gim_public_api_usage_total | Gauge | Current usage of the Public API | Account |
| gim_public_api_token_total | Gauge | Count of active tokens for the Public API | Account, Type |
| gim_postgres_used_disk_bytes | Gauge | Disk space used by PostgreSQL data | None |
| gim_redis_used_memory_bytes | Gauge | Memory used by Redis data | None |
| gim_redis_available_memory_bytes | Gauge | Memory available for Redis data | None |
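To illustrate how these metrics can be consumed, here is a sketch of a Prometheus alerting rule built on the two Redis gauges above. The group name, alert name, and threshold are arbitrary examples (not shipped with the product), and the expression assumes gim_redis_available_memory_bytes reports the memory still free rather than the total:

```yaml
# Example Prometheus rules file (illustrative only).
groups:
  - name: gitguardian-examples
    rules:
      - alert: GitGuardianRedisMemoryHigh
        # Fires when used memory exceeds 90% of used + available,
        # assuming "available" means remaining free memory.
        expr: >
          gim_redis_used_memory_bytes
            / (gim_redis_used_memory_bytes + gim_redis_available_memory_bytes)
            > 0.9
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "Redis memory usage is above 90%"
```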

Usage data

GitGuardian collects usage data to improve the user experience and support. It can easily be deactivated by unchecking the custom_telemetry_active setting in the Preferences section of the Admin Area.

info

Why keep usage data enabled?

  • Continuous Product Improvement: usage data greatly helps us understand how our application is used in various environments. This allows us to specifically target areas needing improvements and direct our testing efforts. This ensures that our product evolves to meet our users' needs effectively while contributing to better quality and stability.

  • Targeted and Efficient Support: in case of technical problems, usage data enables GitGuardian to identify and resolve issues much more quickly. This means reduced downtime for you and a better overall user experience.

  • Security and Privacy: we want to reassure you that data privacy and security are our top priority. We do not collect any personal or sensitive data. Our goal is solely to improve user experience and the performance of our product.

Here are the categories and metrics we collect:

  • Replicated

    • Various deployment-related metrics such as cloud status, version, and uptime
  • System

    • SSO Provider
    • Network Gateway Type
    • Custom CA and proxy status
    • Prometheus Application metrics activation status
  • Users & Teams

    • Number of pending invitees, registered users with different roles, active users.
  • Historical Scan

    • Number of historical scans canceled, failed, and finished
    • Percentile durations of historical scans
    • Number of secrets found per day and source scans per day in historical scans
    • Number of historical scans considered too large
  • Integrations

    • Number of instances, installations, projects, sites for ADO, BitBucket, GitHub Enterprise, GitHub, GitLab, Slack, Jira.
    • Number of monitored and unmonitored sources, along with source size percentiles and estimated users per VCS
  • Secret

    • Number of deactivated detectors per category, registered and unregistered feedbacks, and various metrics related to secret validity checks and incidents
  • Public API

    • Number of calls for ggshield secrets scans, including different modes and repositories
    • Number of active personal access tokens and service accounts
    • Number of public API calls
