A secure low code honeypot framework, leveraging AI for System Virtualization.
-
Updated
Dec 24, 2025 - Go
A secure low code honeypot framework, leveraging AI for System Virtualization.
AxonFlow — Source-available AI control plane for production LLM systems
A tool that detects unauthorized access vulnerabilities through passive proxies, leveraging mainstream AI systems such as Kimi, DeepSeek, GPT, and others.
LLM Prompt Injection Detection API Service PoC.
LLM prompt injection detection for Go applications
Logic static security scanner for AI agents. OWASP LLM Top 10, EU AI Act compliance.
Security scanner purpose-built for developers using AI coding assistants. Detects prompt injection in .cursorrules/CLAUDE.md, Trojan Source attacks, MCP command injection, shell backdoors, and exposed credentials.
Guardrails for AI Systems to prevent sensitive data exposure and prompt-based attacks
AI agent honeypot and canary token generator. Create fake credentials, API keys, and MCP tools to detect when AI agents or intruders access sensitive resources. Tripwires for your development environment.
Add a description, image, and links to the llm-security topic page so that developers can more easily learn about it.
To associate your repository with the llm-security topic, visit your repo's landing page and select "manage topics."