Prevent merging of malicious code in pull requests
Updated Jan 8, 2026 - Python
A focused malicious-code detection ruleset with a high protection-to-noise ratio.
Deterministic verification layer for LLMs | AI hallucination detection | Model output validation | Formal verification for AI | Python 🐍
AI code generation and improvement
Codeaudit - Modern Python source code security analyzer based on distrust.
Contexi lets you interact with an entire codebase or dataset, with context, using a local LLM on your system.
Automatically monitors GitHub for code similarities and potential plagiarism using GitHub API. Includes Slack & Email alerts and an AI-based scanning skeleton for advanced code similarity detection.
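A minimal sketch of the kind of GitHub API query such a monitor might build. The helper name is hypothetical, and in practice GitHub's code-search endpoint requires an authenticated token and rate-limit handling:

```python
import urllib.parse

def build_code_search_url(snippet, language="python"):
    """Build a GitHub code-search API URL for an exact snippet match.

    Illustrative helper only; real requests need an Authorization
    header with a GitHub token.
    """
    query = urllib.parse.quote(f'"{snippet}" language:{language}')
    return f"https://api.github.com/search/code?q={query}"

# Example: URL to search public code for a distinctive line.
url = build_code_search_url("def very_unusual_helper():")
```

A monitor could poll such URLs on a schedule and raise a Slack or email alert when a distinctive snippet appears in an unexpected repository.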
Defensive secret scanner for Git repositories. Prevent tokens, keys, and passwords from being committed.
PyGitGuard is a Git security scanner designed to prevent accidental commits of sensitive data.
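The core of such secret scanners is regex matching over staged content. A toy sketch, with illustrative patterns that are not any particular tool's actual ruleset:

```python
import re

# Illustrative patterns only -- real scanners ship far larger rulesets.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "Generic API key": re.compile(
        r"(?i)\bapi[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"
    ),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_text(text):
    """Return (line_number, pattern_name) pairs for suspected secrets."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Wired into a pre-commit hook, a non-empty result would block the commit before the secret ever reaches history.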
SAST Scanner Modified - Fully open-source SAST scanner supporting a range of languages and frameworks. Integrates with major CI pipelines and IDEs such as Azure DevOps, Google CloudBuild, VS Code, and Visual Studio. No server required!
Calculate context-aware confidence scores for security findings. Prioritize vulnerabilities based on actual exploitability in your codebase.
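One way to weight a finding by context is a simple additive heuristic. The factor names and weights below are assumptions for illustration, not any tool's actual scoring model:

```python
# Hypothetical heuristic: boost findings on reachable, tainted paths,
# discount findings in test code, and clamp to [0, 1].
def confidence_score(finding):
    score = finding.get("base_severity", 0.5)
    if finding.get("reachable_from_entrypoint"):
        score += 0.3  # the vulnerable code path is actually exercised
    if finding.get("input_is_user_controlled"):
        score += 0.2  # tainted data can reach the sink
    if finding.get("in_test_code"):
        score -= 0.4  # findings in test fixtures rarely matter
    return max(0.0, min(1.0, score))
```

Sorting findings by such a score lets a team triage the exploitable-looking ones first instead of working through raw severity alone.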
Static Python code vulnerability scanner powered by LLMs.
A Python-based AI agent for detecting insecure code patterns in Python projects and providing context-based remediation suggestions.
Automated triage of SAST vulnerabilities, integrated with GitHub via its API, using a local LLM (DeepSeek-R1 & Ollama).
A simple web-based tool to scan code for common security vulnerabilities (like SQL Injection, hardcoded passwords, and XSS) and auto-fix them. Upload your code, scan for issues, and download a fixed version instantly.
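A toy sketch of the pattern checks such a tool might run. These regexes are illustrative and far cruder than a production scanner, which would use proper parsing and taint analysis:

```python
import re

# Naive illustrative checks for two of the issue classes mentioned above.
CHECKS = {
    "hardcoded password": re.compile(r"(?i)\bpassword\s*=\s*['\"][^'\"]+['\"]"),
    "possible SQL injection": re.compile(
        r"(?i)\b(SELECT|INSERT|UPDATE|DELETE)\b.*[%+]\s*\w"
    ),
}

def scan_source(source):
    """Return (line_number, issue_label) pairs for suspicious lines."""
    issues = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for label, pattern in CHECKS.items():
            if pattern.search(line):
                issues.append((lineno, label))
    return issues
```

The "auto-fix" step would then rewrite flagged lines, e.g. replacing string-concatenated SQL with parameterized queries.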
From prompt to paste: evaluate AI / LLM output under a strict Python sandbox and get actionable scores across 7 categories, including security, correctness and upkeep.
Comprehensive security auditing tools for vibe coded projects. Protect against accidental API key leaks, private data exposure, and security vulnerabilities before publishing.
LLMGrep combines the precision of Semgrep's static analysis with the power of Large Language Models to deliver comprehensive security scanning, interactive vulnerability discussions, and intelligent rule generation capabilities.
A GitHub Security Lab initiative, providing an in-repo learning experience, where learners secure intentionally vulnerable code.