Public scan — anyone with this URL can view this analysis. Sign up to track your own repos privately, run scheduled re-scans, and get AI fix prompts via your dashboard.

google-research

https://github.com/google-research/google-research.git · scanned 2026-05-16 13:30 UTC (1 day, 7 hours ago) · 10 languages

57 findings · 8/10 scanners ran · 53rd percentile · Python · medium (20-100K LoC)

UNIFIED Repobility · multi-layer engine · AI coders

Complete repo analysis

51 findings from 1 source. Findings combine the legacy security pipeline, the multi-layer engine (atlas, wiring, flows, ranked), and verified AI agent contributions.

Score breakdown · 2026-05-17-v4 calibration-aware
Component Sub-score Weight Contribution
structure_score 40.0 0.15 6.00
security_score 55.5 0.25 13.88
testing_score 13.0 0.20 2.60
documentation_score 65.0 0.15 9.75
practices_score 30.0 0.15 4.50
code_quality 80.0 0.10 8.00
Overall 1.00 44.7
Calibrated penalty buckets (security_score): threat: 44.6
security_score may be inflated — optional scanners skipped due to repo size/fast scan
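The Overall figure is the weighted sum of the component rows above; a minimal sketch reproducing it, with the values copied from the table and the structure illustrative rather than the engine's actual code:

components = {
    "structure_score":     (40.0, 0.15),
    "security_score":      (55.5, 0.25),
    "testing_score":       (13.0, 0.20),
    "documentation_score": (65.0, 0.15),
    "practices_score":     (30.0, 0.15),
    "code_quality":        (80.0, 0.10),
}
overall = sum(score * weight for score, weight in components.values())
print(round(overall, 1))  # 44.7, matching the Overall row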
Severity distribution — click a segment to filter
Active filters: severity: medium · excluding tests
Severity: Critical 0 · High 6 · Medium 14 · Low 31 | Source: Legacy 51 · 9-layer 0 · Crowd 0 | Layer: Quality 34 · Security 16 · Software 1

Showing 14 of 51 findings. Click TP / FP to vote on a finding's accuracy — votes adjust the confidence weighting and improve detection across the platform.

medium Legacy quality practices conf 1.00 [CFG006] Missing .gitignore: No .gitignore file. Risk of committing secrets and build artifacts.
Add a .gitignore appropriate for your language/framework.
practices · legacy
medium Legacy security deserialization conf 1.00 [SEC007] Unsafe Deserialization: Unsafe deserialization can execute arbitrary code.
Use yaml.safe_load() instead of yaml.load(). Avoid pickle for untrusted data.
caltrain/glm_modeling/__init__.py:201 deserialization · legacy
medium Legacy security deserialization conf 1.00 [SEC007] Unsafe Deserialization: Unsafe deserialization can execute arbitrary code.
Use yaml.safe_load() instead of yaml.load(). Avoid pickle for untrusted data.
abps/py_hashed_replay_buffer.py:71 deserialization · legacy
medium Legacy security deserialization conf 1.00 [SEC007] Unsafe Deserialization: Unsafe deserialization can execute arbitrary code.
Use yaml.safe_load() instead of yaml.load(). Avoid pickle for untrusted data.
abstract_nas/utils.py:118 deserialization · legacy
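A minimal sketch of the SEC007 remediation above, assuming a YAML config file (the filename is illustrative): yaml.safe_load() constructs only plain Python types, so a crafted document cannot trigger arbitrary object construction the way yaml.load() with an unsafe Loader can.

import yaml

with open("config.yaml") as fh:          # illustrative filename
    # flagged pattern: data = yaml.load(fh, Loader=yaml.Loader)
    data = yaml.safe_load(fh)            # builds only dicts/lists/scalars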
medium Legacy security deserialization conf 1.00 [SEC011] Unsafe PyTorch Model Loading: torch.load() uses pickle internally and can execute arbitrary code from untrusted model files.
Use torch.load(..., weights_only=True) or use safetensors format.
COSTAR/utils/ckpt.py:66 deserialization · legacy
medium Legacy security deserialization conf 1.00 [SEC011] Unsafe PyTorch Model Loading: torch.load() uses pickle internally and can execute arbitrary code from untrusted model files.
Use torch.load(..., weights_only=True) or use safetensors format.
KNF/evaluation.py:46 deserialization · legacy
medium Legacy security deserialization conf 1.00 [SEC011] Unsafe PyTorch Model Loading: torch.load() uses pickle internally and can execute arbitrary code from untrusted model files.
Use torch.load(..., weights_only=True) or use safetensors format.
KNF/run_koopman.py:185 deserialization · legacy
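A minimal sketch of the SEC011 remediation above; the checkpoint path is illustrative, and weights_only=True requires a reasonably recent PyTorch release:

import torch

# flagged pattern: state = torch.load("model.ckpt")  (full pickle, can execute code)
state = torch.load("model.ckpt", map_location="cpu", weights_only=True)
# pass `state` to your model's load_state_dict() as usual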
medium Legacy security crypto conf 1.00 [SEC015] Insecure Randomness for Security: Weak PRNG used in security-sensitive context. Output is predictable.
Use secrets module (Python) or crypto.getRandomValues() (JS) for security-sensitive randomness.
abstract_nas/synthesis/primer_sequential.py:145 crypto · legacy
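A minimal sketch of the SEC015 remediation above: the secrets module draws from the OS CSPRNG, unlike random, whose Mersenne Twister output becomes predictable once enough values are observed.

import secrets

token = secrets.token_urlsafe(32)           # unguessable session/API token
nonce = secrets.token_bytes(16)             # raw random bytes
pick  = secrets.choice(["red", "green"])    # CSPRNG-backed selection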
medium Legacy security llm_injection conf 0.80 [SEC017] Unbounded Input to LLM/External API: User input is passed to an LLM or external AI API (OpenAI, Anthropic, etc.) without any visible length or size validation. This creates two risks: (1) Cost abuse — an attacker can send extremely long inputs to burn through your API credits (a single 128K-token request to GPT-4 costs ~$4, and automated attacks can drain budgets in minutes). (2) Context stuffing — oversized inputs can push your system prompt out of the context window, effectively disabling it.
1) Enforce a maximum input length BEFORE sending to the API: e.g. `if len(text) > 4000: return error`. 2) Use token counting (tiktoken for OpenAI, anthropic's token counter) to enforce token-level limits. 3) Set max_tokens on the API call to cap response cost. 4) Add rate limiting per user/IP to pr…
EgoSocial/Phi4/phi4_video_audio_SI_baseline_audio2text_conv_all_graph.py:297 llm_injection · legacy
medium Legacy security llm_injection conf 0.80 [SEC017] Unbounded Input to LLM/External API: User input is passed to an LLM or external AI API (OpenAI, Anthropic, etc.) without any visible length or size validation. This creates two risks: (1) Cost abuse — an attacker can send extremely long inputs to burn through your API credits (a single 128K-token request to GPT-4 costs ~$4, and automated attacks can drain budgets in minutes). (2) Context stuffing — oversized inputs can push your system prompt out of the context window, effectively disabling it.
1) Enforce a maximum input length BEFORE sending to the API: e.g. `if len(text) > 4000: return error`. 2) Use token counting (tiktoken for OpenAI, anthropic's token counter) to enforce token-level limits. 3) Set max_tokens on the API call to cap response cost. 4) Add rate limiting per user/IP to pr…
EgoSocial/Phi4/phi4_video_audio_SI_baseline.py:117 llm_injection · legacy
medium Legacy security llm_injection conf 0.80 [SEC017] Unbounded Input to LLM/External API: User input is passed to an LLM or external AI API (OpenAI, Anthropic, etc.) without any visible length or size validation. This creates two risks: (1) Cost abuse — an attacker can send extremely long inputs to burn through your API credits (a single 128K-token request to GPT-4 costs ~$4, and automated attacks can drain budgets in minutes). (2) Context stuffing — oversized inputs can push your system prompt out of the context window, effectively disabling it.
1) Enforce a maximum input length BEFORE sending to the API: e.g. `if len(text) > 4000: return error`. 2) Use token counting (tiktoken for OpenAI, anthropic's token counter) to enforce token-level limits. 3) Set max_tokens on the API call to cap response cost. 4) Add rate limiting per user/IP to pr…
EgoSocial/Phi4/phi4_video_audio_SI_baseline_audio2text_conv_all.py:215 llm_injection · legacy
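A minimal sketch of the SEC017 remediation above, combining a character limit, a token budget, and max_tokens on the call. The model name, client setup, and the 4000-char / 2000-token limits are illustrative assumptions, not values taken from this repo:

import tiktoken
from openai import OpenAI

MAX_CHARS, MAX_INPUT_TOKENS = 4000, 2000
enc = tiktoken.get_encoding("cl100k_base")
client = OpenAI()

def ask(user_text: str) -> str:
    if len(user_text) > MAX_CHARS:                      # cheap pre-check
        raise ValueError("input too long")
    if len(enc.encode(user_text)) > MAX_INPUT_TOKENS:   # token-level budget
        raise ValueError("input exceeds token budget")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_text}],
        max_tokens=512,                                  # caps response cost
    )
    return resp.choices[0].message.content

Per-user rate limiting (item 4 in the remediation) sits outside this function, typically in the web framework or API gateway.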
medium Legacy software log_injection conf 1.00 [SEC034] Log Injection / Log Forging — unsanitized user input in log: User input is logged without sanitizing newlines or control characters. Attackers inject `\n` to forge fake log entries, hide tracks, or exploit downstream log parsers (SIEM, Splunk). Combined with template injection this can escalate to RCE (CVE-2021-44228, Log4Shell). CWE-117.
Strip control characters before logging: `safe = user_input.replace('\n','').replace('\r','').replace('\x00','')` then `logger.info('User action: %s', safe)`. Always use parameterized logging (`%s` + args), never f-strings or string concat — that's also what mitigates Log4Shell-style attacks. For structu…
CoDi/training_scripts/train_codi_flax.py:186 log_injection · legacy
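A minimal sketch of the SEC034 remediation above, stripping CR/LF/NUL before the value reaches the logger and passing it as a lazy %s argument rather than interpolating it into the format string:

import logging

logger = logging.getLogger(__name__)

def log_user_action(user_input: str) -> None:
    safe = user_input.replace("\n", "").replace("\r", "").replace("\x00", "")
    logger.info("User action: %s", safe)   # parameterized, never an f-string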
medium Legacy quality practices No CI/CD configuration found
Add a CI/CD pipeline: create .github/workflows/ci.yml for GitHub Actions with steps to lint, test, and build on every push and pull request.
practices · legacy
medium Legacy quality quality conf 0.82 Parallel implementation file sits beside a canonical file
Merge the intended change into the canonical file, update tests/imports, and delete the parallel implementation if it is not the active entry point.
CardBench_zero_shot_cardinality_training/generate_queries_library/query_generator_v2.py:1 quality · legacy
For AI agents: Voting guide (TP/FP) · MCP manifest · Stdio wrapper · SARIF · Integrate · Findings queue. Vote TP/FP on findings to calibrate the engine.
For AI agents + API integrations
Email me when this repo regresses
Free. We re-scan periodically; new criticals → your inbox. No signup required for the scan itself.
API access

This page is publicly accessible at: https://repobility.com/scan/8ba5a122-fa1d-4a9d-831d-3ed49b469a3b/

To check status programmatically (no auth required):

curl -s https://repobility.com/api/v1/public/scan/8ba5a122-fa1d-4a9d-831d-3ed49b469a3b/
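Or from Python, a minimal sketch using requests; the JSON field names below (status, findings_count) are assumptions about the response shape, not documented fields:

import requests

url = "https://repobility.com/api/v1/public/scan/8ba5a122-fa1d-4a9d-831d-3ed49b469a3b/"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
data = resp.json()
print(data.get("status"), data.get("findings_count"))   # assumed field names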

Important — please don't re-submit the same URL repeatedly. The submission endpoint is idempotent: re-submitting the same git URL returns this same scan_token, not a new one. To re-scan this repo, sign up free and use the dashboard.