Public scan — anyone with this URL can view this analysis. Sign up to track your own repos privately, run scheduled re-scans, and get AI fix prompts via your dashboard.

numpy/numpy

https://github.com/numpy/numpy.git · scanned 2026-05-16 12:55 UTC (1 day, 7 hours ago) · 10 languages

273 findings (20 legacy + 253 scanner) · 2/10 scanners ran · 88th percentile · Python · huge (>500K LoC) · scanner score: 99 (lower by 17)

UNIFIED Repobility · multi-layer engine · AI coders

Complete repo analysis

Last scanned 1 day, 10 hours ago · v1 · 265 findings from 2 sources. Findings combine the legacy security pipeline, the multi-layer engine (atlas, wiring, flows, ranked), and verified AI agent contributions.

Severity distribution — click a segment to filter
Active filters: excluding tests
Severity: Critical 0 · High 14 · Medium 14 · Low 182
Source: Legacy 12 · 9-layer 253 · Crowd 0
Layer: Software 69 · Security 16 · Quality 177 · API 1 · Frontend 1 · CI/CD 1

Bug-class explainers. Each card groups findings of the same shape — these are the patterns most likely to ship to prod and reappear in future scans unless you systematically fix the cause, not just the instance.

Fragile runtime · 9 findings
What it is: Code that runs but breaks under predictable input — division by zero, missing keys, unbounded loops, off-by-one slicing.
Why it matters: Reaches production undetected because happy-path tests pass. First user with a weird input crashes the request.
How AI causes it: AI tends to write the happy path and doesn't probe edge cases unless explicitly asked.
Fix approach: Add property-based tests. Wrap external inputs with explicit validators. Use the framework's typed deserializer (Pydantic, attrs).
9 matching findings on this repo
  • medium [ERR001] Silent Exception Swallowing: Silently swallowing all exceptions hides … numpy/ma/core.py:1097
  • medium [ERR001] Silent Exception Swallowing: Silently swallowing all exceptions hides … tools/refguide_check.py:309
  • medium Average file size is 646 lines (recommend <300)
  • medium [ERR001] Silent Exception Swallowing: Silently swallowing all exceptions hides … numpy/_core/function_base.py:486
  • medium Network/subprocess call without timeout or try/except — benchmarks/asv_pip_nope…
  • medium Network/subprocess call without timeout or try/except — numpy/f2py/_backends/_m…
  • medium Network/subprocess call without timeout or try/except — tools/linter.py:26
  • medium Network/subprocess call without timeout or try/except — tools/write_release.py:…
  • medium Network/subprocess call without timeout or try/except — tools/check_python_h_fi…
View all fragile runtime findings →
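The fix approach above can be sketched as follows. `safe_mean` is a hypothetical helper, not numpy code, and the hand-enumerated edge cases stand in for what a property-based tool such as Hypothesis would generate automatically.

```python
# Sketch of the "guard predictable inputs" fix. `safe_mean` is a
# hypothetical helper; in a real suite a property-based tool (e.g.
# Hypothesis) would generate the edge cases automatically.
def safe_mean(values):
    # Guard the predictable failure mode (empty input) explicitly
    # instead of letting ZeroDivisionError reach production.
    if not values:
        return 0.0
    return sum(values) / len(values)

# Edge cases a happy-path test never exercises: empty, single element, zeros.
for case in ([], [0.0], [0.0, 0.0], [1.0, 2.0, 3.0]):
    assert isinstance(safe_mean(case), float)
```

The same pattern applies to the missing-timeout findings: every `subprocess.run` or network call gets an explicit `timeout=` and a handler for the failure you can predict.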
Legacy markers · 37 findings
What it is: TODO, FIXME, XXX, HACK comments, plus legacy-named symbols (`*_old`, `*_copy`, `*deprecated*`). They often mark a known-broken path the author meant to fix.
Why it matters: Each marker is an unfinished thought. Production code shouldn't ship with debt that's documented but not tracked.
How AI causes it: AI mirrors the style of the codebase, so existing TODOs propagate into new code.
Fix approach: Convert each into a ticket. Delete the comment when the fix lands. Use a pre-commit hook to block new TODOs without an issue link.
12 matching findings on this repo
  • low Legacy-named symbol `time_strided_copy` in benchmarks/benchmarks/bench_io.py:30
  • low Legacy-named symbol `time_array_no_copy` in benchmarks/benchmarks/bench_array_c…
  • low Legacy-named symbol `isintent_copy` in numpy/f2py/auxfuncs.py:32
  • low Legacy-named symbol `isintent_copy` in numpy/f2py/rules.py:96
  • low Legacy-named symbol `exp_copy` in numpy/f2py/tests/test_inplace.py:31
  • low Legacy-named symbol `_character_bc_old` in numpy/f2py/tests/test_character.py:5…
  • low Legacy-named symbol `obj_copy` in numpy/f2py/tests/test_array_from_pyobj.py:262
  • low Legacy-named symbol `a_copy` in numpy/lib/tests/test_function_base.py:310
  • low Legacy-named symbol `std_old` in numpy/lib/tests/test_nanfunctions.py:828
  • low Legacy-named symbol `test_drop_metadata_identity_and_copy` in numpy/lib/tests/t…
  • low Legacy-named symbol `is_deprecated` in numpy/lib/tests/test__datasource.py:320
  • low Legacy-named symbol `test_copy` in numpy/polynomial/tests/test_symbol.py:156
View all legacy markers findings →
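A pre-commit hook like the one the fix approach suggests can be a short script. This is a minimal sketch, not a finished hook: the marker list and the issue-link pattern (`#123` or `issues/123`) are assumptions to adapt to your tracker.

```python
# Sketch of a pre-commit check that rejects debt markers lacking an issue
# link. Marker names and the issue-URL pattern are assumptions; adapt them
# to your tracker.
import re
import sys

MARKER = re.compile(r"\b(TODO|FIXME|XXX|HACK)\b")
ISSUE_LINK = re.compile(r"#\d+|issues/\d+")

def check_line(line: str) -> bool:
    """True if the line is acceptable: no debt marker, or a marker
    accompanied by an issue reference."""
    return not (MARKER.search(line) and not ISSUE_LINK.search(line))

if __name__ == "__main__":
    ok = True
    for path in sys.argv[1:]:
        with open(path, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                if not check_line(line):
                    print(f"{path}:{lineno}: marker without issue link")
                    ok = False
    sys.exit(0 if ok else 1)
```

Wired into `.pre-commit-config.yaml` as a local hook, this blocks new untracked TODOs while leaving the 37 existing ones to be burned down via tickets.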
Commented-out code · 53 findings
What it is: Lines of source that were intentionally disabled but never deleted.
Why it matters: Git already remembers history — commented code rots, becomes wrong, and adds noise to diffs.
How AI causes it: AI sometimes comments out broken code instead of fixing it. Reviewers approve out of inertia.
Fix approach: Delete. Trust `git log`. If you really need to remember, save it in a notes file under `docs/`.
12 matching findings on this repo
  • info Commented-code block (5 lines) in benchmarks/benchmarks/bench_ufunc.py:529
  • info Commented-code block (6 lines) in numpy/_globals.py:100
  • info Commented-code block (5 lines) in numpy/__init__.py:663
  • info Commented-code block (5 lines) in numpy/testing/tests/test_utils.py:499
  • info Commented-code block (6 lines) in numpy/testing/_private/utils.py:81
  • info Commented-code block (7 lines) in numpy/tests/test_public_api.py:98
  • info Commented-code block (13 lines) in numpy/f2py/cfuncs.py:398
  • info Commented-code block (5 lines) in numpy/f2py/crackfortran.py:1439
  • info Commented-code block (8 lines) in numpy/f2py/symbolic.py:14
  • info Commented-code block (5 lines) in numpy/f2py/rules.py:955
  • info Commented-code block (5 lines) in numpy/f2py/capi_maps.py:151
  • info Commented-code block (5 lines) in numpy/lib/_datasource.py:305
View all commented-out code findings →
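For context on how blocks like these get flagged, here is a heuristic sketch of a commented-code detector: a run of comment lines whose combined content parses as Python statements is probably disabled code, not prose. The threshold and the parse test are assumptions, not this scanner's actual logic.

```python
# Heuristic sketch: flag runs of comment lines that parse as Python.
# The min_run threshold mirrors the "(5 lines)" findings above; it is an
# assumption, not the scanner's real rule.
import ast

def _uncomment(line: str) -> str:
    body = line.strip()[1:]                  # drop the leading '#'
    return body[1:] if body.startswith(" ") else body

def looks_like_commented_code(lines: list, min_run: int = 5) -> bool:
    """True when `lines` is a run of >= min_run comment lines whose
    combined content parses as Python statements (not just prose)."""
    stripped = [l.strip() for l in lines]
    if len(stripped) < min_run or not all(l.startswith("#") for l in stripped):
        return False
    try:
        tree = ast.parse("\n".join(_uncomment(l) for l in stripped))
    except SyntaxError:
        return False
    # Prose that happens to parse is usually bare expressions; real code
    # contains assignments, control flow, and the like.
    return any(not isinstance(node, ast.Expr) for node in tree.body)
```

Deleting a flagged block is safe because `git log -S "<snippet>"` can always recover it from history.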
For AI agents + API integrations: Voting guide (TP/FP) · MCP manifest · Stdio wrapper · SARIF · Integrate · Findings queue. Vote TP/FP on findings to calibrate the engine.
Email me when this repo regresses
Free. We re-scan periodically; new critical findings go to your inbox. No signup required for the scan itself.
API access

This page is publicly accessible at: https://repobility.com/scan/42d62344-be26-4abd-ae9b-1edcf3c5f360/

To check status programmatically (no auth required):

curl -s https://repobility.com/api/v1/public/scan/42d62344-be26-4abd-ae9b-1edcf3c5f360/
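The same check from Python, using only the standard library. The helper names here are our own, and the shape of the returned JSON is an assumption based on this page, not a documented schema.

```python
# Minimal status check, equivalent to the curl command above. Uses only
# the stdlib; the response schema is an assumption, not documented.
import json
from urllib.request import urlopen

SCAN_URL = "https://repobility.com/api/v1/public/scan/42d62344-be26-4abd-ae9b-1edcf3c5f360/"

def parse_scan(payload: str) -> dict:
    """Decode the scan JSON; reject payloads that aren't a JSON object."""
    data = json.loads(payload)
    if not isinstance(data, dict):
        raise ValueError("unexpected scan payload")
    return data

def fetch_scan(url: str = SCAN_URL, timeout: float = 10.0) -> dict:
    """Fetch the public scan status (no auth required). Note the explicit
    timeout -- the same advice this scan gives about network calls."""
    with urlopen(url, timeout=timeout) as resp:
        return parse_scan(resp.read().decode("utf-8"))
```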

Important — please don't re-submit the same URL repeatedly. The submission endpoint is idempotent: re-submitting the same git URL returns this same scan_token, not a new one. To re-scan this repo, sign up free and use the dashboard.