mcp-poisoning-poc
このリポジトリは、モデルコンテキストプロトコル(MCP)の重要な脆弱性を示すセキュリティ研究を含んでいます。教育的および防御的な目的での使用を意図しており、悪用は推奨されていません。AIによる攻撃からデジタル未来を守るためのオープンソースツールを開発するコミュニティの一環です。
GitHubスター
10
ユーザー評価
未評価
お気に入り
0
閲覧数
2
フォーク
3
イシュー
1
🛡️ MCP Tool Poisoning Security Research
⚠️ IMPORTANT SECURITY NOTICE: This repository contains security research demonstrating critical vulnerabilities in the Model Context Protocol (MCP). The code is for educational and defensive purposes only. Do not use these techniques maliciously.
🌟 About GenSecAI
GenSecAI is A non-profit community using generative AI to defend against AI-powered attacks, building open-source tools to secure our digital future from emerging AI threats.
This research is part of our mission to identify and mitigate AI security vulnerabilities before they can be exploited maliciously.
🚨 Executive Summary
This research demonstrates critical security vulnerabilities in the Model Context Protocol (MCP) that allow attackers to:
- 🔓 Exfiltrate sensitive data (SSH keys, API credentials, configuration files)
- 🎭 Hijack AI agent behavior through hidden prompt injections
- 📧 Redirect communications without user awareness
- 🔄 Override security controls of trusted tools
- ⏰ Deploy time-delayed attacks that activate after initial trust is established
Impact: Any AI agent using MCP (Claude, Cursor, ChatGPT with plugins) can be compromised through malicious tool descriptions.
🎯 Quick Start
Installation
# Clone the repository
git clone https://github.com/gensecaihq/mcp-poisoning-poc.git
cd mcp-poisoning-poc
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install -r requirements.txt
# Run the demonstration
python examples/basic_attack_demo.py
Basic Demo
from src.demo.malicious_server import MaliciousMCPServer
from src.defenses.sanitizer import MCPSanitizer
# Create a malicious MCP server
server = MaliciousMCPServer()
# See how tool descriptions contain hidden instructions
for tool in server.get_tools():
print(f"Tool: {tool['name']}")
print(f"Hidden payload detected!")
# Defend against attacks
sanitizer = MCPSanitizer()
safe_description = sanitizer.clean(tool.description)
📊 Key Findings
Attack Vector | Severity | Exploitation Difficulty | Impact |
---|---|---|---|
Data Exfiltration | 🔴 Critical | Low | Complete credential theft |
Tool Hijacking | 🔴 Critical | Low | Full agent compromise |
Instruction Override | 🟠 High | Medium | Security bypass |
Delayed Payload | 🟠 High | Medium | Persistent compromise |
🔬 Technical Details
The vulnerability exploits a fundamental design flaw in MCP:
- Tool descriptions are treated as trusted input by AI models
- Hidden instructions in descriptions are invisible to users but processed by AI
- No validation or sanitization of tool descriptions occurs
- Cross-tool contamination allows one malicious tool to affect others
See PROOF_OF_CONCEPT.md for detailed technical analysis.
🛡️ Defensive Measures
We provide a comprehensive defense framework:
from src.defenses import SecureMCPClient
# Initialize secure client with all protections
client = SecureMCPClient(
enable_sanitization=True,
enable_validation=True,
enable_monitoring=True,
strict_mode=True
)
# Safe tool integration
client.add_server("https://trusted-server.com", verify=True)
📁 Repository Structure
/src
- Core implementation of attacks and defenses/docs
- Detailed documentation and analysis/tests
- Comprehensive test suite/examples
- Ready-to-run demonstrations
🧪 Running Tests
# Run all tests
pytest
# Run with coverage
pytest --cov=src tests/
# Run security-specific tests
pytest tests/test_attacks.py -v
🤝 Contributing
We welcome contributions to improve MCP security! Please see CONTRIBUTING.md for guidelines.
Join the GenSecAI Community
- 🌐 Website: https://gensecai.org
- 📧 Email: ask@gensecai.org
- 💬 Discussions: GitHub Discussions
📚 Documentation
- Proof of Concept - Detailed PoC explanation
- Attack Vectors - Comprehensive attack analysis
- Mitigation Strategies - Defense implementations
- Technical Analysis - Deep technical dive
⚖️ Legal & Ethical Notice
This research is conducted under responsible disclosure principles:
- Educational Purpose: Code is for security research and defense only
- No Malicious Use: Do not use these techniques to attack systems
- Disclosure Timeline: Vendors were notified before public release
- Defensive Focus: Primary goal is to enable better defenses
🏆 Credits
- Organization: GenSecAI - Generative AI Security Community
- Research Team: GenSecAI Security Research Division
- Based on: Original findings from Invariant Labs
- Special Thanks: To the security research community and responsible disclosure advocates
📮 Contact
- Security Issues: ask@gensecai.org
- General Inquiries: ask@gensecai.org
- Website: https://gensecai.org
- Bug Reports: GitHub Issues
📄 License
This project is licensed under the MIT License - see LICENSE for details.
Made with ❤️ by GenSecAI
Securing AI, One Vulnerability at a Time
Cyberbro MCP Serverは、非構造化入力から脅威インジケーター(IoC)を抽出し、複数の脅威インテリジェンスサービスを使用してその評判を確認するシンプルなアプリケーションです。MCPプロトコルを利用して、Cyberbroプラットフォームとのインタラクションを可能にします。
Model Context Protocol server for BlackDuck Coverity Connect static analysis platform
pentestMCPは、Pythonで開発されたペネトレーションテストツールであり、セキュリティ分析を効率的に行うための機能を提供します。このツールは、脆弱性のスキャン、リスク評価、レポート生成などを自動化し、セキュリティ専門家が迅速に脅威を特定できるように設計されています。