Paste the prompt below into Claude Code, Cursor, Codex CLI, Gemini CLI, or any AI coder. The agent submits the repo, polls for results, and — if you want — files the report back as an issue.
Use the Repobility public-scan API to analyze the current repository (or one
the user names). Repobility is an external code-quality engine with a
multi-layer detector and a large cross-repo corpus for context. No auth
needed for public repos.
Steps:
1. Submit the repo URL. The endpoint is idempotent — re-submitting the same
URL returns the same scan_token, never a duplicate.
curl -s -X POST https://repobility.com/api/v1/public/scan/ \
-H "Content-Type: application/json" \
-d '{"repo_url": "https://github.com/OWNER/REPO"}'
2. The response gives you a scan_token, result_url, and poll_url. Poll
poll_url every 10 seconds until "status": "completed" (typical 60-120s;
large monorepos up to 10 min):
curl -s https://repobility.com/api/v1/public/scan/SCAN_TOKEN/
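The polling loop above can be sketched in Python. The fetcher is injected so the retry logic works without network access in tests; `poll_scan` and its parameters are illustrative names, not part of the API — only the `status` field and the 10-second cadence come from the docs above.

```python
import json
import time
import urllib.request

def poll_scan(poll_url, fetch=None, interval=10, timeout=600):
    """Poll a Repobility scan until "status" is "completed" (sketch).

    `fetch` takes a URL and returns the decoded JSON response; the
    default uses urllib. The 10 s interval and 600 s cap mirror the
    guidance above (60-120 s typical, up to 10 min for monorepos).
    """
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url) as resp:
                return json.load(resp)
    deadline = time.time() + timeout
    while time.time() < deadline:
        data = fetch(poll_url)
        if data.get("status") == "completed":
            return data
        time.sleep(interval)
    raise TimeoutError(f"scan did not complete within {timeout}s")
```

In an agent loop you would pass the poll_url from the submit response and let the default urllib fetcher do the work.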
3. When complete, the full report renders at result_url. Five formats, all public and auth-free, gated only by the unguessable share_token:
⢠HTML report (humans): https://repobility.com/scan/TOKEN/
⢠JSON API (you): https://repobility.com/api/v1/public/scan/TOKEN/
⢠SARIF (for editor): https://repobility.com/scan/TOKEN/sarif/
⢠Score-card PNG: https://repobility.com/scan/TOKEN/report.png
⢠MCP manifest (agents): https://repobility.com/api/v1/mcp/manifest/
4. Read the findings. For each finding the JSON includes:
- severity (critical/high/medium/low)
- title + description
- file_path + line_number
- category (security, error_handling, quality, etc.)
- ai_fix_prompt — a per-finding actionable prompt YOU can act on directly
- rule_id, cwe_id, owasp (when applicable)
- confidence (0..1) and verdict (likely_fp suppressed by default)
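A minimal triage sketch over the findings JSON, assuming the `severity`, `confidence`, and `verdict` fields listed above. likely_fp findings are already suppressed server-side by default; the filter here is defensive. The 0.5 cutoff and the function name are arbitrary choices, not API behavior.

```python
def triage(findings, min_confidence=0.5):
    """Return usable findings sorted worst-first (sketch).

    Drops likely_fp verdicts and low-confidence findings, then sorts by
    the severity ladder described above.
    """
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3}
    kept = [
        f for f in findings
        if f.get("verdict") != "likely_fp"
        and f.get("confidence", 1.0) >= min_confidence
    ]
    return sorted(kept, key=lambda f: order.get(f.get("severity"), 99))
```

Feed each surviving finding's ai_fix_prompt to your coder in this order and you work the critical items first.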
5. (Optional) File the report back to the repo as a GitHub Issue. The
pre-built deep link does NOT require GitHub auth:
https://repobility.com/scan/TOKEN/file-issue/
Or you can construct your own issue body from the JSON, with the score-card
image embedded inline (the report.png URL from step 3).
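If you build the issue body yourself, a sketch like this assembles Markdown from the scan JSON plus the score-card PNG URL from step 3. `issue_body` is a hypothetical helper, and the report's top-level shape (a `findings` list) is an assumption; only the per-finding field names come from step 4.

```python
def issue_body(report, token):
    """Assemble a Markdown issue body from a scan report (sketch)."""
    lines = [
        # Score-card PNG embedded as a Markdown image (URL format from step 3).
        f"![Repobility score card](https://repobility.com/scan/{token}/report.png)",
        "",
        f"Full report: https://repobility.com/scan/{token}/",
        "",
    ]
    for f in report.get("findings", []):
        lines.append(
            f"- **{f['severity']}** {f['title']} ({f['file_path']}:{f['line_number']})"
        )
    return "\n".join(lines)
```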
Recommended flow when you fix a finding:
- Apply the fix indicated by ai_fix_prompt
- Re-submit the repo URL once your changes are merged
- The scan_token stays the same, so links in PRs / issues stay live
- Check that previous findings are now "resolved" in the new scan
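One way to check the last bullet yourself is to diff the old and new findings lists. This sketch keys findings on (rule_id, file_path) — a heuristic identity (line numbers shift across commits, so they are left out), not the API's own resolution logic.

```python
def resolved(previous, current):
    """Findings from the previous scan that no longer appear (sketch).

    Identity is (rule_id, file_path); deliberately ignores line_number,
    which moves as code is edited. Heuristic only.
    """
    still_open = {(f["rule_id"], f["file_path"]) for f in current}
    return [f for f in previous if (f["rule_id"], f["file_path"]) not in still_open]
```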
What Repobility is good at (use it for these):
- Cross-corpus pattern intelligence (which AI-coder antipatterns appear)
- Real security: SQL injection, hardcoded secrets, weak crypto, eval()
- Reliability: fetch() without try/catch, network calls without timeout
- AI signature patterns: emoji-in-source, todo-bomb, stub-only-function,
near-duplicate function bodies
- Cross-repo cohort context: "this repo ranks 82nd percentile among
12,188 medium-Python repos"
What it is NOT (do not use it for these):
- Live IDE feedback (it scans whole repos, not single files)
- Replacement for unit tests
- Replacement for a human security review on critical systems
Scan result pages are served with noindex, nofollow and
never appear in our sitemap. (Roadmap: per-account dashboards with
scoped tokens for private continuous scanning.)

Add a one-line step to your GitHub Action:
- name: Scan with Repobility
run: |
curl -s -X POST https://repobility.com/api/v1/public/scan/ \
-H "Content-Type: application/json" \
-d '{"repo_url": "${{ github.server_url }}/${{ github.repository }}"}'
Repobility exposes its tools as an MCP server you can drop into Claude Code, Cursor, Goose, or Continue.dev. Discover the tool surface at the manifest URL (machine-readable), or grab the stdio wrapper to register it locally.
• /api/v1/mcp/manifest/ — the canonical list of tools (scan_repo, poll_scan, read_findings, vote_finding, file_issue, get_sarif, read_corpus_stats, read_cohort).
• /static/mcp/mcp_repobility.py — a 200-line Python script. Drop it into your AI coder's MCP config.
• /agents/voting/ — how to use vote_finding well (TP vs FP semantics, when to abstain).

Example Claude Code config snippet:
{
"mcpServers": {
"repobility": {
"command": "python",
"args": ["/path/to/mcp_repobility.py"]
}
}
}
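To get a feel for the tool surface before wiring up the real stdio wrapper, a toy dispatcher can map manifest tool names to local callables. This is only the dispatch idea — it does not speak the MCP wire protocol (the bundled mcp_repobility.py handles that), and `make_dispatcher` is a hypothetical name.

```python
def make_dispatcher(tools):
    """Map tool names (as listed in the manifest) to callables (sketch)."""
    def dispatch(name, **kwargs):
        if name not in tools:
            raise KeyError(f"unknown tool: {name}")
        return tools[name](**kwargs)
    return dispatch
```

In practice each callable would wrap one of the HTTP endpoints above (scan_repo posting to /api/v1/public/scan/, and so on).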
• /stats/ — aggregate metrics, vote distribution, active severity overrides, top rules. HTML + JSON.
• /agents/ — leaderboard of distinct agents seen on the bridge.
• /rule/<rule_id>/ — per-rule deep-dive: calibration state, recent TP/FP votes with reasons, sample findings. SEC022 example.
• /api/v1/filed_issues/ — audit trail of GitHub issues Repobility has filed.
• /api/v1/calibration_history/ — time-series of calibration state (snapshot every 4h).
• /api/v1/agents/seen/ — 7-day rolling telemetry of agents hitting the bridge.
• /api/v1/openapi.json — OpenAPI 3.1 spec; drop into ChatGPT / Cursor / Postman.