exp-llm-mcp-rag

A Python re-implementation of KelvinQiu802/llm-mcp-rag, for learning and practicing LLM, MCP, and RAG techniques.


Installation

Difficulty: Intermediate
Estimated Time: 10-20 minutes

Prerequisites

Python: 3.12 or higher

Installation Steps

1. Clone Repository

```bash
git clone https://github.com/StrayDragon/exp-llm-mcp-rag
cd exp-llm-mcp-rag
```

2. Set Environment Variables

Copy .env.example to create .env and fill in the necessary configuration.
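On a POSIX shell this is just a copy of the template (a minimal sketch; adjust for your platform):

```bash
cp .env.example .env
# then open .env and fill in the values described under Configuration
```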

3. Install Dependencies

```bash
uv sync
```

4. Run Sample

```bash
just help
```

Troubleshooting

Issue: Failed to install dependencies
Solution: Verify that your Python version is 3.12 or higher and rerun uv sync.
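A quick sanity check from the shell (assuming uv is on your PATH):

```bash
python --version   # expect Python 3.12 or newer
uv --version       # confirm uv itself is installed
uv sync            # retry the dependency installation
```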

Configuration

Basic Configuration

Edit the .env file to set the following environment variables:
OPENAI_API_KEY: OpenAI API key
OPENAI_BASE_URL: Base URL for OpenAI API (default is 'https://api.openai.com/v1')
DEFAULT_MODEL_NAME: Model name to use (default is 'gpt-4o-mini')

Configuration Example

```dotenv
OPENAI_API_KEY=your_openai_api_key
OPENAI_BASE_URL=https://api.openai.com/v1
DEFAULT_MODEL_NAME=gpt-4o-mini
```
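A minimal sketch of how these variables are typically read in Python (assuming the python-dotenv and openai packages; the project's actual loading code may differ):

```python
import os

from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()  # read variables from .env in the working directory

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url=os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1"),
)
model = os.getenv("DEFAULT_MODEL_NAME", "gpt-4o-mini")
```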

Security Settings

Store API keys in .env and keep the file out of version control.
Grant MCP servers access only to the files and directories they actually need.
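One way to enforce the first point (a sketch, assuming git):

```bash
echo ".env" >> .gitignore
git check-ignore .env   # prints .env once the file is ignored
```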

Examples

Programmatic Usage

```python
import requests

def call_mcp_tool(tool_name, params):
    """Call an MCP tool over the local HTTP endpoint and return its JSON result."""
    response = requests.post(
        'http://localhost:3000/mcp/call',
        json={
            'tool': tool_name,
            'parameters': params,
        },
    )
    response.raise_for_status()  # surface HTTP errors instead of parsing a bad body
    return response.json()

# Usage example
result = call_mcp_tool('analyze', {'input': 'sample data', 'options': {'format': 'json'}})
```

RAG Flow Example

```mermaid
sequenceDiagram
    participant User as User
    participant Agent as Agent
    participant LLM as LLM
    User->>Agent: Question
    Agent->>LLM: Send question
    LLM-->>Agent: Generate answer
    Agent-->>User: Return answer
```

Use Cases

An AI assistant responds to user queries by retrieving external data and generating answers.
Using an MCP client to perform file operations such as reading and writing specific files.
Fetching information from the web to generate answers based on user requests.
Using vector retrieval to quickly obtain relevant documents and enhance the context for the LLM (see the sketch after this list).
Integrating with other systems via APIs for data analysis and processing.
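To make the vector-retrieval use case concrete, here is a minimal sketch of retrieval-augmented generation. It assumes the openai and numpy packages and the .env settings above; the in-memory document list and the helper names embed and retrieve are illustrative, not this project's actual API:

```python
import os

import numpy as np
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY (and OPENAI_BASE_URL) from the environment

documents = [
    "MCP lets an LLM call external tools over a standard protocol.",
    "RAG retrieves relevant documents and injects them into the prompt.",
]

def embed(texts):
    """Embed a batch of texts with the OpenAI embeddings endpoint."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def retrieve(question, k=1):
    """Return the k documents most similar to the question (cosine similarity)."""
    q = embed([question])[0]
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "What does RAG do?"
context = "\n".join(retrieve(question))
answer = client.chat.completions.create(
    model=os.getenv("DEFAULT_MODEL_NAME", "gpt-4o-mini"),
    messages=[
        {"role": "system", "content": f"Answer using this context:\n{context}"},
        {"role": "user", "content": question},
    ],
)
print(answer.choices[0].message.content)
```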

Additional Resources
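
Upstream project: https://github.com/KelvinQiu802/llm-mcp-rag
This repository: https://github.com/StrayDragon/exp-llm-mcp-rag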