
# CI Integration

Run testgap in your CI pipeline to catch test regressions before they ship.

## GitHub Actions

Add testgap as a step in your GitHub Actions workflow:

`.github/workflows/testgap.yml`:

```yaml
name: Test Gap Analysis

on:
  pull_request:
    branches: [main]

jobs:
  testgap:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Rust
        uses: dtolnay/rust-toolchain@stable

      - name: Install testgap
        run: cargo install testgap

      - name: Check test gaps
        run: testgap analyze --format json --fail-on-critical --no-ai
```

**No API key needed**

With `--no-ai`, testgap uses pure static analysis; no API keys or external services are needed in CI.

## With AI Analysis in CI

If you want AI risk assessment in CI, add your Anthropic API key as a secret:

`.github/workflows/testgap.yml`:

```yaml
      - name: Check test gaps (with AI)
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
        run: testgap analyze --format json --fail-on-critical --ai-severity critical
```

**Cost control**

Use `--ai-severity critical` to send only critical gaps to the AI. This dramatically reduces API costs in CI, where the analysis may run on every PR.

## SARIF Output

JSON output from testgap can be transformed to SARIF format for integration with GitHub Code Scanning and other security tools:

`sarif-transform.sh`:

```bash
# Generate JSON output
testgap analyze --format json --no-ai > testgap-results.json

# Transform to SARIF (using jq)
jq '{
  "$schema": "https://raw.githubusercontent.com/oasis-tcs/sarif-spec/main/sarif-2.1/schema/sarif-schema-2.1.0.json",
  "version": "2.1.0",
  "runs": [{
    "tool": {
      "driver": {
        "name": "testgap",
        "version": "0.2.0"
      }
    },
    "results": [.gaps[] | {
      "ruleId": "testgap/\(.severity)",
      "level": (if .severity == "critical" then "error" elif .severity == "warning" then "warning" else "note" end),
      "message": { "text": .reason },
      "locations": [{
        "physicalLocation": {
          "artifactLocation": { "uri": .file },
          "region": { "startLine": .line }
        }
      }]
    }]
  }]
}' testgap-results.json > testgap.sarif
```

## CI Gate with `--fail-on-critical`

The --fail-on-critical flag makes testgap exit with code 1 when critical gaps are found. This is designed for use as a CI quality gate:

```bash
# Fails CI if any public+complex function is untested
testgap analyze --fail-on-critical --no-ai

# Check exit code
echo $?  # 0 = pass, 1 = critical gaps, 2 = error
```

See the Exit Code Reference for the full list of exit codes.

## Other CI Systems

testgap works with any CI system; the key flags for CI are `--format json`, `--fail-on-critical`, and `--no-ai`:

`other-ci.yml`:

```yaml
# GitLab CI
test-gaps:
  stage: test
  script:
    - cargo install testgap
    - testgap analyze --format json --fail-on-critical --no-ai

# CircleCI
- run:
    name: Check test gaps
    command: |
      cargo install testgap
      testgap analyze --format json --fail-on-critical --no-ai
```