# CI Integration
Run testgap in your CI pipeline to catch test regressions before they ship.
## GitHub Actions
Add testgap as a step in your GitHub Actions workflow:
```yaml
name: Test Gap Analysis

on:
  pull_request:
    branches: [main]

jobs:
  testgap:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Rust
        uses: dtolnay/rust-toolchain@stable

      - name: Install testgap
        run: cargo install testgap

      - name: Check test gaps
        run: testgap analyze --format json --fail-on-critical --no-ai
```

> **No API key needed**
> With `--no-ai`, testgap uses pure static analysis, so no API keys or external services are needed in CI.

## With AI Analysis in CI
If you want AI risk assessment in CI, add your Anthropic API key as a secret:
```yaml
- name: Check test gaps (with AI)
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
  run: testgap analyze --format json --fail-on-critical --ai-severity critical
```

> **Cost control**
> Use `--ai-severity critical` to send only critical gaps to the AI. This dramatically reduces API costs in CI, where you may run on every PR.

## SARIF Output
JSON output from testgap can be transformed to SARIF format for integration with GitHub Code Scanning and other security tools:
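The jq filter used here assumes testgap's JSON report exposes a top-level `gaps` array whose entries carry `severity`, `reason`, `file`, and `line` fields. The sample below illustrates that assumed shape (the specific values are hypothetical, inferred from the filter):

```json
{
  "gaps": [
    {
      "severity": "critical",
      "reason": "public function `parse_config` has no tests",
      "file": "src/config.rs",
      "line": 42
    }
  ]
}
```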
```shell
# Generate JSON output
testgap analyze --format json --no-ai > testgap-results.json

# Transform to SARIF (using jq)
jq '{
  "$schema": "https://raw.githubusercontent.com/oasis-tcs/sarif-spec/main/sarif-2.1/schema/sarif-schema-2.1.0.json",
  "version": "2.1.0",
  "runs": [{
    "tool": {
      "driver": {
        "name": "testgap",
        "version": "0.2.0"
      }
    },
    "results": [.gaps[] | {
      "ruleId": "testgap/\(.severity)",
      "level": (if .severity == "critical" then "error" elif .severity == "warning" then "warning" else "note" end),
      "message": { "text": .reason },
      "locations": [{
        "physicalLocation": {
          "artifactLocation": { "uri": .file },
          "region": { "startLine": .line }
        }
      }]
    }]
  }]
}' testgap-results.json > testgap.sarif
```

## CI Gate with `--fail-on-critical`
The `--fail-on-critical` flag makes testgap exit with code 1 when critical gaps are found. This is designed for use as a CI quality gate:
```shell
# Fails CI if any public+complex function is untested
testgap analyze --fail-on-critical --no-ai

# Check exit code
echo $?  # 0 = pass, 1 = critical gaps, 2 = error
```

> **Exit Code Reference**
> - `0`: No critical gaps (CI passes)
> - `1`: Critical gaps found (CI fails)
> - `2`: Runtime error
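These exit codes can be interpreted in a wrapper step when you want friendlier CI log output. A minimal sketch; the `report_gate` helper is hypothetical, not part of testgap:

```shell
# Hypothetical helper: translate testgap's documented exit codes
# into human-readable CI log messages.
report_gate() {
  case "$1" in
    0) echo "testgap: no critical gaps, gate passes" ;;
    1) echo "testgap: critical gaps found, gate fails" ;;
    2) echo "testgap: runtime error" ;;
    *) echo "testgap: unexpected exit code $1" ;;
  esac
}

# In CI you would run:
#   testgap analyze --fail-on-critical --no-ai; report_gate $?
report_gate 1
```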
## Other CI Systems
testgap works with any CI system. The key flags for CI are:
- `--format json`: machine-readable output
- `--fail-on-critical`: non-zero exit on critical gaps
- `--no-ai`: no external API calls needed
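Because `--format json` produces machine-readable output, any CI system can post-process the report with ordinary shell tools. A rough sketch that enforces a budget of zero critical gaps, assuming the report shape used in the SARIF section (the sample file here stands in for real testgap output):

```shell
# Hypothetical sample report standing in for real testgap output.
cat > testgap-results.json <<'EOF'
{"gaps": [{"severity": "critical"}, {"severity": "warning"}]}
EOF

# Count critical gaps with plain grep (no jq required),
# trimming whitespace that some wc implementations emit.
budget=0
critical=$(grep -o '"severity": *"critical"' testgap-results.json | wc -l | tr -d ' ')

if [ "$critical" -gt "$budget" ]; then
  echo "gate: FAIL ($critical critical gaps, budget $budget)"
else
  echo "gate: PASS"
fi
```

In a real pipeline you would add `exit 1` on the FAIL branch so the job fails.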
```yaml
# GitLab CI (.gitlab-ci.yml)
test-gaps:
  stage: test
  script:
    - cargo install testgap
    - testgap analyze --format json --fail-on-critical --no-ai
```

```yaml
# CircleCI (step inside a job)
- run:
    name: Check test gaps
    command: |
      cargo install testgap
      testgap analyze --format json --fail-on-critical --no-ai
```