# Agentic-MCP
A high-performance, Rust-based MCP (Model Context Protocol) server that provides AI model access and agent functionality for agentic systems. This server acts as a bridge between MCP clients and various LLM providers, offering a unified interface for model interaction with customizable agent personas.
## Features

### 🤖 Multi-Provider LLM Support
- OpenAI Integration: GPT-4, GPT-3.5-turbo, and other OpenAI models
- Cohere Support: Command-R and other Cohere models
- Extensible Architecture: Easy to add new providers through the Rig framework
- Model Metadata: Each model includes descriptions and recommended use cases
### 🎭 Agent System
- Markdown-based Agents: Define agent personas using simple markdown files
- Dynamic Loading: Agents are loaded from the `agents/` directory at runtime
- Context Injection: Agent instructions are automatically prepended to user prompts
- Flexible Agent IDs: Use the filename (without `.md`) as the agent identifier
### 🔧 MCP Tools
- `list_models`: Retrieve available models with descriptions and use cases
- `execute_prompt`: Execute prompts with optional agent context and model selection
- Standard MCP Protocol: Full compatibility with the MCP specification
### ⚙️ Configuration Management
- TOML Configuration: Human-readable configuration format
- Environment Variables: Secure API key management
- Flexible Path Resolution: Multiple ways to specify config location
- Hot Configuration: Easy to modify without code changes
### 🔌 MCP Client Functionality
- Client Mode: Can connect to other MCP servers as a client
- Tool Chaining: Potential for complex workflows (future feature)
- Configurable Endpoints: Connect to any MCP-compatible server
## Installation

### Prerequisites
- Rust (nightly toolchain)
- API keys for desired LLM providers
### Build from Source
```bash
# Clone the repository
git clone <repository-url>
cd agentic-mcp

# Build the project
cargo +nightly check
cargo +nightly build

# Run the server
cargo +nightly run
```
## Configuration

### Configuration File Location
The server determines the `config.toml` file location using the following priority:

1. Command-line argument: `--config-path=/path/to/config.toml`
2. Environment variable: `CONFIG_PATH=/path/to/config.toml`
3. Default path: `./config.toml` in the current working directory
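This resolution order is simple to express in Rust; a minimal sketch, assuming a standalone helper (the actual implementation lives in `config.rs` and may differ in names and error handling):

```rust
use std::env;
use std::path::PathBuf;

/// Resolve the config file path: CLI flag first, then CONFIG_PATH,
/// then ./config.toml. (Illustrative sketch, not the project's exact code.)
fn resolve_config_path() -> PathBuf {
    // 1. `--config-path=/path/to/config.toml` on the command line
    if let Some(path) = env::args().find_map(|arg| {
        arg.strip_prefix("--config-path=").map(String::from)
    }) {
        return PathBuf::from(path);
    }
    // 2. CONFIG_PATH environment variable
    if let Ok(path) = env::var("CONFIG_PATH") {
        return PathBuf::from(path);
    }
    // 3. Default: ./config.toml in the current working directory
    PathBuf::from("config.toml")
}
```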
### Configuration Format
Create a `config.toml` file with the following structure:
```toml
# Main configuration for the MCP LLM Server

# Configuration for the MCP client functionality
[mcp_client]
enabled = true
server_url = "http://localhost:8080" # Example URL

# LLM Provider configurations
[[providers]]
name = "openai"
api_key_env = "OPENAI_API_KEY" # Environment variable for the API key

[[providers]]
name = "cohere"
api_key_env = "COHERE_API_KEY"

# Active model configurations
[[models]]
provider = "openai"
name = "gpt-4"
description = "Powerful model for complex reasoning."
recommended_use_case = "Code generation, content creation, complex problem solving."

[[models]]
provider = "openai"
name = "gpt-3.5-turbo"
description = "Fast and cost-effective model for general tasks."
recommended_use_case = "Summarization, translation, chatbots."

[[models]]
provider = "cohere"
name = "command-r"
description = "A model focused on conversational AI and RAG."
recommended_use_case = "Customer support bots, enterprise search."
```
### Environment Variables
Set the required API keys as environment variables:

```bash
export OPENAI_API_KEY="your-openai-api-key"
export COHERE_API_KEY="your-cohere-api-key"
```
## Agent System

### Creating Agents
1. Create a markdown file in the `agents/` directory
2. The filename (without `.md`) becomes the agent ID
3. Write the agent's persona and instructions in the file
Example: `agents/creative_writer.md`

```markdown
You are a creative writing assistant with expertise in storytelling, character development, and narrative structure. You help users craft compelling stories, develop interesting characters, and improve their writing style. Always encourage creativity while providing constructive feedback.
```
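Loading an agent then amounts to reading the matching file; a minimal sketch of the idea (the real logic lives in `agent.rs`; the function name here is illustrative):

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Load an agent persona by ID, i.e. read `agents/<agent_id>.md`.
/// Returns an error if no file with that name exists.
fn load_agent(agents_dir: &Path, agent_id: &str) -> io::Result<String> {
    let path = agents_dir.join(format!("{agent_id}.md"));
    fs::read_to_string(path)
}
```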
### Using Agents
Reference agents by their ID (the filename without extension) when calling the `execute_prompt` tool:
```json
{
  "tool_name": "execute_prompt",
  "arguments": {
    "model_name": "gpt-4",
    "prompt": "Write a short story about a robot who discovers music.",
    "agent_id": "creative_writer"
  }
}
```
## MCP Tools Reference

### `list_models`
Returns available LLM models with their metadata.
Input Schema: None
Output Example:
```json
{
  "models": [
    {
      "provider": "openai",
      "name": "gpt-4",
      "description": "Powerful model for complex reasoning.",
      "recommended_use_case": "Code generation, content creation, complex problem solving."
    },
    {
      "provider": "openai",
      "name": "gpt-3.5-turbo",
      "description": "Fast and cost-effective model for general tasks.",
      "recommended_use_case": "Summarization, translation, chatbots."
    }
  ]
}
```
### `execute_prompt`
Executes a prompt against a specified LLM model with optional agent context.
Input Schema:
- `model_name` (string, required): Name of the model to use
- `prompt` (string, required): The user's prompt
- `agent_id` (string, optional): ID of the agent to use for context
Usage Example:
```json
{
  "tool_name": "execute_prompt",
  "arguments": {
    "model_name": "gpt-4",
    "prompt": "Explain quantum computing in simple terms",
    "agent_id": "science_teacher"
  }
}
```
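On the server side these arguments deserialize naturally into a struct via `serde`; a hedged sketch (the actual type in `mcp_server.rs` may be shaped differently):

```rust
use serde::Deserialize;

/// Arguments for the execute_prompt tool. `agent_id` is optional,
/// matching the input schema above. (Illustrative type name.)
#[derive(Debug, Deserialize)]
struct ExecutePromptArgs {
    model_name: String,
    prompt: String,
    agent_id: Option<String>,
}
```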
## How It Works

### Architecture Overview
```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   MCP Client    │───▶│  Agentic-MCP    │───▶│  LLM Provider   │
│                 │    │    Server       │    │  (OpenAI,       │
│                 │    │                 │    │   Cohere, etc.) │
└─────────────────┘    └─────────────────┘    └─────────────────┘
                                │
                                ▼
                       ┌─────────────────┐
                       │     Agents      │
                       │   Directory     │
                       │   (Markdown)    │
                       └─────────────────┘
```
### Request Flow
1. MCP Client sends a tool request via stdio transport
2. Agentic-MCP Server receives and parses the request
3. Configuration Loading provides model and provider information
4. Agent Loading (if `agent_id` is specified) loads the agent context from its markdown file
5. LLM Client constructs the final prompt and calls the appropriate provider
6. Response is formatted and returned to the MCP client
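The context injection in step 5 is conceptually just string concatenation: if an agent was specified, its instructions are prepended to the user's prompt before the provider call. A minimal sketch (the exact separator and formatting in `llm_client.rs` may differ):

```rust
/// Build the final prompt sent to the LLM provider, prepending
/// agent instructions when an agent was specified.
fn build_prompt(agent_instructions: Option<&str>, user_prompt: &str) -> String {
    match agent_instructions {
        Some(instructions) => format!("{instructions}\n\n{user_prompt}"),
        None => user_prompt.to_string(),
    }
}
```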
### Key Components
- `main.rs`: Entry point; handles MCP transport and server initialization
- `config.rs`: Configuration loading with flexible path resolution
- `mcp_server.rs`: MCP protocol implementation and tool definitions
- `llm_client.rs`: LLM provider abstraction using the Rig framework
- `agent.rs`: Agent loading and management system
## Development

### Project Structure
```
agentic-mcp/
├── Cargo.toml           # Rust dependencies and metadata
├── config.toml          # Server configuration
├── agents/              # Agent definition directory
│   └── example_agent.md # Example agent persona
└── src/
    ├── main.rs          # Server entry point, MCP transport
    ├── config.rs        # Configuration loading logic
    ├── mcp_server.rs    # MCP server implementation
    ├── llm_client.rs    # LLM provider wrapper
    └── agent.rs         # Agent loading system
```
### Dependencies
- `rig-core`: LLM provider abstraction and client library
- `mcp-core`: Model Context Protocol implementation
- `tokio`: Async runtime for handling concurrent requests
- `serde`: Serialization/deserialization for JSON and TOML
- `toml`: Configuration file parsing
### Building and Testing
```bash
# Check code without building
cargo +nightly check

# Build in debug mode
cargo +nightly build

# Build optimized release
cargo +nightly build --release

# Run with custom config
cargo +nightly run -- --config-path=/path/to/config.toml

# Or using environment variable
CONFIG_PATH=/path/to/config.toml cargo +nightly run
```
## Future Enhancements
- Tool Chaining: Enable agents to call external MCP tools
- Provider Expansion: Add support for Anthropic, Google, and other providers
- Agent Templates: Pre-built agent personas for common use cases
- Streaming Responses: Real-time response streaming for better UX
- Metrics and Logging: Comprehensive observability features
- Authentication: Secure access control for production deployments
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.