For AI coders

Integrate Repobility into your AI loop

Paste the prompt below into Claude Code, Cursor, Codex CLI, Gemini CLI, or any AI coder. The agent submits the repo, polls for results, and — if you want — files the report back as an issue.

The shortcut: Tell your AI coder "Scan this repo with Repobility, then file the report as an issue," and paste the prompt below as context. The agent will handle the rest.

Copy this prompt into your AI coder

Use the Repobility public-scan API to analyze the current repository (or one
the user names). Repobility is an external code-quality engine with a
multi-layer detector and a large cross-repo corpus for context. No auth
needed for public repos.

Steps:
1. Submit the repo URL. The endpoint is idempotent — re-submitting the same
   URL returns the same scan_token, never a duplicate.

   curl -s -X POST https://repobility.com/api/v1/public/scan/ \
     -H "Content-Type: application/json" \
     -d '{"repo_url": "https://github.com/OWNER/REPO"}'

2. The response gives you a scan_token, result_url, and poll_url. Poll
   poll_url every 10 seconds until "status": "completed" (typically 60-120 s;
   large monorepos can take up to 10 min):

   curl -s https://repobility.com/api/v1/public/scan/SCAN_TOKEN/

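
The two steps above can be sketched as a small polling loop. This is an illustrative sketch, not an official client: the endpoint path and the 10-second interval come from the steps above, while `submit_and_poll` and the injectable `fetch_json` helper are names invented here so the loop can be exercised without network access.

```python
import time
from typing import Callable

API = "https://repobility.com/api/v1/public/scan/"

def submit_and_poll(repo_url: str,
                    fetch_json: Callable[..., dict],
                    interval: float = 10.0,
                    max_wait: float = 600.0) -> dict:
    """Submit a repo, then poll its poll_url until the scan completes.

    fetch_json(url, payload=None) performs the HTTP call (POST when a
    payload is given, GET otherwise) and returns the decoded JSON body.
    """
    # Step 1: submission is idempotent, so re-running is safe.
    scan = fetch_json(API, payload={"repo_url": repo_url})

    # Step 2: poll every `interval` seconds until the scan finishes.
    deadline = time.monotonic() + max_wait
    while True:
        status = fetch_json(scan["poll_url"])
        if status.get("status") == "completed":
            return status
        if time.monotonic() > deadline:
            raise TimeoutError(
                f"scan {scan['scan_token']} still running after {max_wait}s")
        time.sleep(interval)
```
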
3. When complete, the full report renders at result_url. It comes in five formats (all public; the only gate is the share token embedded in the URL):

   • HTML report (humans):    https://repobility.com/scan/TOKEN/
   • JSON API (you):          https://repobility.com/api/v1/public/scan/TOKEN/
   • SARIF (editors):         https://repobility.com/scan/TOKEN/sarif/
   • Score-card PNG:          https://repobility.com/scan/TOKEN/report.png
   • MCP manifest (agents):   https://repobility.com/api/v1/mcp/manifest/

4. Read the findings. For each finding the JSON includes:
   - severity (critical/high/medium/low)
   - title + description
   - file_path + line_number
   - category (security, error_handling, quality, etc.)
   - ai_fix_prompt — a per-finding actionable prompt YOU can act on directly
   - rule_id, cwe_id, owasp (when applicable)
   - confidence (0..1) and verdict (likely_fp suppressed by default)

5. (Optional) File the report back to the repo as a GitHub Issue. The
   pre-built deep link does NOT require GitHub auth:

   https://repobility.com/scan/TOKEN/file-issue/

   Or you can construct your own issue body using the JSON + the score-card
   image embedded as ![](https://repobility.com/scan/TOKEN/report.png).
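
   A custom issue body can be assembled from those fields. The sketch below uses only the field names listed in step 4; the severity ordering and the ten-finding cap are choices made here, not API behavior.

```python
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def issue_body(token: str, findings: list, limit: int = 10) -> str:
    """Render a Markdown issue body: score-card image plus the top findings."""
    ranked = sorted(findings, key=lambda f: SEVERITY_ORDER.get(f["severity"], 9))
    lines = [
        f"![](https://repobility.com/scan/{token}/report.png)",
        "",
        f"Full report: https://repobility.com/scan/{token}/",
        "",
    ]
    for f in ranked[:limit]:
        lines.append(f"- **{f['severity'].upper()}** {f['title']} "
                     f"(`{f['file_path']}:{f['line_number']}`)")
    return "\n".join(lines)
```
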

Recommended flow when you fix a finding:
  - Apply the fix indicated by ai_fix_prompt
  - Re-submit the repo URL once your changes are merged
  - The scan_token stays the same, so links in PRs / issues stay live
  - Check that previous findings are now "resolved" in the new scan
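
The resolution check in the last bullet can be done by diffing finding keys between the old and new scan. A sketch; keying on rule_id plus file_path is an assumption made here (line numbers are excluded because a fix usually shifts them):

```python
def resolved_findings(before: list, after: list) -> list:
    """Findings present in the previous scan but absent from the new one.

    Keyed on (rule_id, file_path); line_number is deliberately excluded,
    since a fix usually shifts the surrounding lines.
    """
    key = lambda f: (f.get("rule_id"), f["file_path"])
    still_open = {key(f) for f in after}
    return [f for f in before if key(f) not in still_open]
```
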

What Repobility is good at (use it for these):
  - Cross-corpus pattern intelligence (which AI-coder antipatterns appear)
  - Real security: SQL injection, hardcoded secrets, weak crypto, eval()
  - Reliability: fetch() without try/catch, network calls without timeout
  - AI signature patterns: emoji-in-source, todo-bomb, stub-only-function,
    near-duplicate function bodies
  - Cross-repo cohort context: "this repo ranks 82nd percentile among
    12,188 medium-Python repos"

What it is NOT (do not use it for these):
  - Live IDE feedback (it scans whole repos, not single files)
  - Replacement for unit tests
  - Replacement for a human security review on critical systems

Authentication model

Public-repo scans need no authentication. Report URLs are gated only by the share token embedded in the URL itself, so anyone with the link can view the result.

Continuous scanning (in your CI)

Add a single step to your GitHub Actions workflow:

- name: Scan with Repobility
  run: |
    curl -s -X POST https://repobility.com/api/v1/public/scan/ \
      -H "Content-Type: application/json" \
      -d '{"repo_url": "${{ github.server_url }}/${{ github.repository }}"}'
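
To gate the build on the outcome instead of fire-and-forget, a follow-up step can read the completed-scan JSON and fail on severe findings. A sketch: the findings list and its severity field follow step 4 of the prompt above, while the failure threshold and the use of GitHub Actions `::error` annotations are choices made here.

```python
def gate(report: dict, fail_on: str = "critical") -> int:
    """Return a CI exit code: 1 if any finding at fail_on severity, else 0."""
    bad = [f for f in report.get("findings", []) if f["severity"] == fail_on]
    for f in bad:
        # Emit a GitHub Actions error annotation pointing at the finding.
        print(f"::error file={f['file_path']},line={f['line_number']}::{f['title']}")
    return 1 if bad else 0
```

Feed it the JSON from the poll endpoint once status is completed, and exit the step with its return value so the job fails when critical findings remain.
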

MCP integration (live)

Repobility exposes its tools as an MCP server you can drop into Claude Code, Cursor, Goose, or Continue.dev. Discover the tool surface at the manifest URL (machine-readable), or grab the stdio wrapper to register it locally.

Example Claude Code config snippet:

{
  "mcpServers": {
    "repobility": {
      "command": "python",
      "args": ["/path/to/mcp_repobility.py"]
    }
  }
}
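
Once registered, an agent discovers what the server offers from its advertised tool list. Below is a sketch of reading that surface; the {"tools": [{"name": ...}]} shape is an assumption about the manifest made here, so check the live manifest URL above before relying on it.

```python
def list_tools(manifest: dict) -> list:
    """Names of the tools an MCP manifest advertises (assumed shape)."""
    return [t["name"] for t in manifest.get("tools", [])]
```
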

Public dashboards (added 2026-05-17)