Public scan — anyone with this URL can view this analysis. Sign up to track your own repos privately, run scheduled re-scans, and get AI fix prompts via your dashboard.

google-research

https://github.com/google-research/google-research.git · scanned 2026-05-16 13:30 UTC (1 day, 6 hours ago) · 10 languages

57 findings · 8/10 scanners ran · 54th percentile · Python · medium (20-100K LoC)

UNIFIED Repobility · multi-layer engine · AI coders

Complete repo analysis

51 findings from 1 source. Findings may combine the legacy security pipeline, the multi-layer engine (atlas, wiring, flows, ranked), and verified AI agent contributions.

Score breakdown · 2026-05-17-v4 · calibration-aware
Component            Sub-score   Weight   Contribution
structure_score           40.0     0.15           6.00
security_score            55.5     0.25          13.88
testing_score             13.0     0.20           2.60
documentation_score       65.0     0.15           9.75
practices_score           30.0     0.15           4.50
code_quality              80.0     0.10           8.00
Overall                            1.00           44.7
Calibrated penalty buckets (security_score): threat: 44.6
security_score may be inflated — optional scanners skipped due to repo size/fast scan
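The overall score is just the weighted sum of the sub-scores above; a minimal sketch that recomputes it (values copied from the table, component names as shown):

```python
# Recompute the published overall score from the sub-scores and weights
# in the breakdown table: overall = sum(sub_score * weight).
components = {
    "structure_score":     (40.0, 0.15),
    "security_score":      (55.5, 0.25),
    "testing_score":       (13.0, 0.20),
    "documentation_score": (65.0, 0.15),
    "practices_score":     (30.0, 0.15),
    "code_quality":        (80.0, 0.10),
}

# Weights should form a convex combination (sum to 1.00).
assert abs(sum(w for _, w in components.values()) - 1.0) < 1e-9

overall = sum(score * weight for score, weight in components.values())
print(f"{overall:.1f}")  # 44.7, matching the published overall
```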
Severity distribution — click a segment to filter
Active filters: excluding tests
Severity: Critical 0 · High 6 · Medium 14 · Low 31
Source: Legacy 51 · 9-layer 0 · Crowd 0
Layer: Quality 34 · Security 16 · Software 1

Bug-class explainers. Each card groups findings of the same shape — these are the patterns most likely to ship to prod and reappear in future scans unless you systematically fix the cause, not just the instance.

Fragile runtime 32 findings
What it is: Code that runs but breaks under predictable input — division by zero, missing keys, unbounded loops, off-by-one slicing.
Why it matters: Reaches production undetected because happy-path tests pass. First user with a weird input crashes the request.
How AI causes it: AI loves writing the happy path and doesn't probe edge cases unless explicitly asked.
Fix approach: Add property-based tests. Wrap external inputs with explicit validators. Use the framework's typed deserializer (Pydantic, attrs).
12 matching findings on this repo
  • medium Parallel implementation file sits beside a canonical file CardBench_zero_shot_cardinality_training/generate…:1
  • low Duplicate top-level symbol appears in a patch-style file CardBench_zero_shot_cardinality_training/generate…:1
  • low Duplicated implementation block across source files CardBench_zero_shot_cardinality_training/generate…:7
  • low Duplicated implementation block across source files CardBench_zero_shot_cardinality_training/calculat…:3
  • low Duplicated implementation block across source files CardBench_zero_shot_cardinality_training/calculat…:2
  • low Duplicated implementation block across source files CardBench_zero_shot_cardinality_training/calculat…:181
  • low Duplicated implementation block across source files COSTAR/src/models/utils_transformer.py:25
  • low Duplicated implementation block across source files COSTAR/src/models/rmsn.py:199
  • low Duplicated implementation block across source files COSTAR/src/models/rep_est/rep_est.py:233
  • low Duplicated implementation block across source files COSTAR/src/models/rep_est/rep_est.py:84
  • low Duplicated implementation block across source files COSTAR/src/models/rep_est/moco.py:46
  • low Duplicated implementation block across source files COSTAR/src/models/rep_est/ct.py:15
View all fragile runtime findings →
Config drift 1 finding
What it is: Settings duplicated across env files, Docker compose, K8s, and code defaults, all with slightly different values.
Why it matters: Production behaviour depends on whichever copy your loader reads first. Subtle bugs in staging that don't reproduce in dev.
How AI causes it: AI writes new config from memory rather than reading the existing source.
Fix approach: Pick one source of truth (env vars + a settings module). Have every other place import from there. Lint for duplicates in CI.
1 matching finding on this repo
  • medium No CI/CD configuration found
View all config drift findings →
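The "one source of truth" fix above can be sketched with a stdlib-only settings module; the names (`API_URL`, `TIMEOUT_S`) are hypothetical, and a Pydantic `BaseSettings` class would serve the same role with validation built in:

```python
import os
from dataclasses import dataclass


@dataclass(frozen=True)
class Settings:
    """Single source of truth for configuration.

    Every other module imports load_settings() instead of re-declaring
    defaults in Docker Compose, K8s manifests, or scattered code.
    """
    api_url: str
    timeout_s: int


def load_settings(env=os.environ) -> Settings:
    # Env vars win; the defaults here are the ONLY place defaults live.
    return Settings(
        api_url=env.get("API_URL", "http://localhost:8000"),
        timeout_s=int(env.get("TIMEOUT_S", "30")),
    )
```

A CI lint step can then grep for hard-coded copies of these values outside this module and fail the build on duplicates.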
For AI agents + API integrations: Voting guide (TP/FP) · MCP manifest · Stdio wrapper · SARIF · Integrate · Findings queue. Vote TP/FP on findings to calibrate the engine.
Email me when this repo regresses
Free. We re-scan periodically; new criticals → your inbox. No signup required for the scan itself.
API access

This page is publicly accessible at: https://repobility.com/scan/8ba5a122-fa1d-4a9d-831d-3ed49b469a3b/

To check status programmatically (no auth required):

curl -s https://repobility.com/api/v1/public/scan/8ba5a122-fa1d-4a9d-831d-3ed49b469a3b/

Important — please don't re-submit the same URL repeatedly. The submission endpoint is idempotent: re-submitting the same git URL returns this same scan_token, not a new one. To re-scan this repo, sign up free and use the dashboard.
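The same status check can be done from Python with the stdlib; a minimal sketch (the endpoint returns JSON per the curl example, but its exact fields are not documented on this page, so treat the parsed dict's schema as an assumption):

```python
import json
import urllib.request

BASE = "https://repobility.com/api/v1/public/scan"


def scan_status_url(token: str) -> str:
    # token is the scan_token from the public scan URL above.
    return f"{BASE}/{token}/"


def fetch_status(token: str) -> dict:
    # No auth required. The shape of the returned JSON is assumed,
    # not documented here; inspect it before relying on any field.
    with urllib.request.urlopen(scan_status_url(token)) as resp:
        return json.load(resp)
```

Per the note above, poll this read-only endpoint for status rather than re-submitting the git URL.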