mcp-rag-scanner
mcp-rag-scanner is a security scanner developed in C# for detecting vulnerabilities in code. It analyzes user-provided code to identify potential risks and security holes, and its easy-to-use interface helps developers resolve issues quickly.
Intelligent Fact-Grounded RAG System
This project is a future-ready intelligent system that dynamically builds and updates knowledge from live URLs (HTML & PDF) into a vector database, enabling fact-based, real-time responses powered by Retrieval-Augmented Generation (RAG).
🧠 Main Architecture Flow
.NET Core Web API
↓
MCP Client
↓
MCP Server (for embedding generation)
↓
Qdrant Vector Database (stores embeddings + metadata)
↓
RAG (Retrieval from Qdrant)
↓
LLM (Large Language Model generates user response)
↓
User
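The ingestion half of this flow can be sketched as follows. This is a minimal, illustrative sketch in Python (the project itself is .NET); the class and function names are hypothetical stand-ins for the real services, and the embedding function is a toy vectorizer, not an actual MCP call.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the real services; the embedding function below is
# a toy hash-based vectorizer standing in for the MCP Server embedding step.
@dataclass
class Document:
    url: str
    text: str
    embedding: list[float] = field(default_factory=list)

def embed(text: str, dim: int = 8) -> list[float]:
    # Toy embedding: bag-of-words hashed into a fixed-size vector.
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    return vec

class VectorStore:
    # In-memory stand-in for the Qdrant vector database.
    def __init__(self) -> None:
        self.points: list[Document] = []

    def upsert(self, doc: Document) -> None:
        self.points.append(doc)

def ingest(pages: dict[str, str], store: VectorStore) -> None:
    # Web API -> MCP Client -> MCP Server (embedding) -> Qdrant
    for url, text in pages.items():
        store.upsert(Document(url=url, text=text, embedding=embed(text)))

store = VectorStore()
ingest({"https://example.com/a": "retrieval augmented generation"}, store)
```

Because ingestion only appends new points, fresh URLs can be added at any time without taking the query path offline.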
🚀 Key Concepts
- Dynamic Knowledge Updates: scrape new URLs anytime (HTML or PDF) → Parse → Embed → Save into Qdrant without system downtime.
- MCP-Based Embedding Generation: use Model Context Protocol (MCP) clients to communicate with LLM servers for embedding documents efficiently.
- Fact-Grounded Responses: instead of hallucinating answers, the system retrieves actual facts stored in vectors to generate responses.
- Scalable and Future-Proof: modular components (Web API, MCP Client, Qdrant, RAG, LLM) allow swapping or upgrading technologies easily.
- Metadata Preservation: each document vector stores not just embeddings but also critical metadata (e.g., URL, title, source type, scraped timestamp) for better retrieval and traceability.
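For instance, a stored point might carry a payload like the following. The field names here are assumptions chosen for illustration, not the project's actual schema.

```python
import json
from datetime import datetime, timezone

# Illustrative metadata payload stored alongside each embedding in Qdrant;
# these field names are assumptions for the example, not the real schema.
payload = {
    "url": "https://example.com/report.pdf",
    "title": "Quarterly Report",
    "source_type": "pdf",  # "html" or "pdf"
    "scraped_at": datetime(2024, 1, 15, tzinfo=timezone.utc).isoformat(),
    "chunk_index": 0,      # position of this chunk within the source document
}
print(json.dumps(payload, indent=2))
```

Keeping the source URL and timestamp with each vector is what makes retrieved answers traceable back to the original document.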
📚 How It Works
1. User provides a list of URLs (web pages or PDFs).
2. Scraper Service: downloads and extracts the raw content.
3. Document Parser Service: cleans the content depending on file type (HTML or PDF).
4. Embedding Generation: the content is sent to an MCP Server to generate numerical vector representations (embeddings).
5. Vector Store Service: embeddings + metadata are stored into the Qdrant Vector DB.
6. User Query (RAG Flow):
   - User asks a question.
   - The system queries Qdrant to find the most relevant document chunks.
   - Retrieved chunks are passed into the LLM as context.
   - The LLM answers based on real retrieved information — not guesses.
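The query side of the RAG flow can be sketched like this. It is an illustrative Python sketch: the real system queries Qdrant with MCP-generated embeddings, whereas here a deterministic bag-of-words vector stands in for both the stored and the query embeddings, and the chunk texts are made up for the example.

```python
import math
import string

def tokenize(text: str) -> list[str]:
    # Lowercase and strip punctuation so "metadata." matches "metadata".
    table = str.maketrans("", "", string.punctuation)
    return text.lower().translate(table).split()

# Made-up document chunks; in the real system these come from Qdrant.
chunks = [
    "Qdrant stores embeddings together with metadata.",
    "The scraper downloads HTML and PDF content.",
    "Bananas are yellow.",
]
vocab = sorted({word for chunk in chunks for word in tokenize(chunk)})

def embed(text: str) -> list[float]:
    # Deterministic bag-of-words vector over the corpus vocabulary, standing in
    # for the MCP-generated embeddings.
    tokens = tokenize(text)
    return [float(tokens.count(word)) for word in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    # Rank chunks by similarity to the query embedding and keep the top k.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# The retrieved chunks become the context passed to the LLM.
context = retrieve("Where are embeddings and metadata stored?")
prompt = "Answer using only this context:\n" + "\n".join(context)
```

Because the LLM is instructed to answer only from the retrieved context, its response is grounded in stored facts rather than model guesses.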
🔮 Why This Matters
- Traditional LLMs make up (hallucinate) information.
- Our system retrieves real documents and augments LLMs, ensuring trustworthy, verifiable, and updatable answers.
- This architecture represents the future of responsible AI: dynamic, modular, factual, and constantly learning.
🛠️ Technologies Used
- .NET Core Web API
- Model Context Protocol (MCP)
- Qdrant Vector Database
- Large Language Models (LLMs)
- Scraper (HTML/PDF Parsing)
- Newtonsoft.Json, HttpClient, MediatR, and more
📈 Future Enhancements (Vision)
- Support scraping and embedding of multi-language documents.
- Enable real-time ingestion pipelines (streaming URLs).
- Plug-in different LLM providers via MCP.
- Auto-refresh documents on schedule to keep vectors always up-to-date.
- Build a user-friendly dashboard to manage knowledge base easily.
📜 License - Apache License 2.0 (TL;DR)
This project follows the Apache License 2.0, which means:
- ✅ You can use, modify, and distribute the code freely.
- ✅ You must include the original license when distributing.
- ✅ You must include the NOTICE file if one is provided.
- ✅ You can use this in personal & commercial projects.
- ✅ No warranties – use at your own risk! 🚀
How to Use:
- Fork or clone this repo
- Build your solution based on the architecture
- Keep the LICENSE file intact
- Add attribution like:
“Built with components from the Intelligent Fact-Grounded RAG System (Apache 2.0)”