MCP LangChain Integration

This project demonstrates the integration of Model Context Protocol (MCP) tools with LangChain and uAgents, enabling the creation of powerful AI agents that can interact with multiple services.

Features
  • 🔄 Integration of MCP tools with LangChain agents
  • 🔌 Support for multiple MCP servers (stdio and SSE)
  • 🚀 Async execution handling
  • 📦 Simple agent lifecycle management
  • 🔗 Seamless integration with uAgents
Installation
# Install required packages
pip install -r requirements.txt

# Set up environment variables
export OPENAI_API_KEY=<your_api_key>
export AGENTVERSE_API_KEY=<your_agentverse_api_key>
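Since `python-dotenv` is listed in the requirements and the scripts call `load_dotenv()`, the same keys can instead live in a `.env` file next to the scripts (the placeholder values below are illustrative):

```
OPENAI_API_KEY=<your_api_key>
AGENTVERSE_API_KEY=<your_agentverse_api_key>
```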
Requirements

Create a requirements.txt file with the following dependencies:

langchain-openai==0.3.16
langgraph==0.4.2
langchain-mcp-adapters==0.0.10
uagents-adapter==0.2.1
python-dotenv>=1.0.0
Quickstart
1. Create an MCP Server

First, let's create a simple math server:

# math_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
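Once wired into an agent, these tools can be chained: the test prompt used later, `what's (3 + 5) x 12?`, should resolve to `multiply(add(3, 5), 12)`. A quick plain-Python check of that composition, using the same function bodies as the server:

```python
def add(a: int, b: int) -> int:
    """Same logic as the `add` tool in math_server.py."""
    return a + b

def multiply(a: int, b: int) -> int:
    """Same logic as the `multiply` tool in math_server.py."""
    return a * b

# The agent is expected to call add first, then multiply: (3 + 5) x 12
result = multiply(add(3, 5), 12)
print(result)  # 96
```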
2. Create a Simple Math Agent
# simple_math_agent.py
import os
import asyncio
from dotenv import load_dotenv

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

from uagents_adapter import LangchainRegisterTool, cleanup_uagent
from uagents_adapter.langchain import AgentManager

# Load environment variables
load_dotenv()

# Initialize the model
model = ChatOpenAI(model="gpt-4o")

# Store the agent globally
_global_agent = None

async def setup_math_agent():
    global _global_agent
    
    print("Setting up math agent...")
    async with MultiServerMCPClient(
        {
            "math": {
                "command": "python",
                "args": ["/path/to/math_server.py"],
                "transport": "stdio",
            }
        }
    ) as client:
        tools = client.get_tools()
        _global_agent = create_react_agent(model, tools)
        
        # Test the agent
        print("Testing math capabilities...")
        response = await _global_agent.ainvoke({"messages": [HumanMessage(content="what's (3 + 5) x 12?")]})
        print(f"Test response: {response['messages'][-1].content}")
        
        # Keep the connection alive
        while True:
            await asyncio.sleep(1)

def main():
    # Initialize agent manager
    manager = AgentManager()
    
    # Create agent wrapper
    async def agent_func(x):
        response = await _global_agent.ainvoke({"messages": [HumanMessage(content=x)]})
        return response["messages"][-1].content
    
    agent_wrapper = manager.create_agent_wrapper(agent_func)
    
    # Start the agent in background
    manager.start_agent(setup_math_agent)
    
    # Register with uAgents
    print("Registering math agent...")
    tool = LangchainRegisterTool()
    agent_info = tool.invoke(
        {
            "agent_obj": agent_wrapper,
            "name": "math_agent_langchain_mcp",
            "port": 8080,
            "description": "A math agent that can perform calculations",
            "api_token": os.getenv("AGENTVERSE_API_KEY"),
            "mailbox": True
        }
    )
    
    print(f"✅ Registered math agent: {agent_info}")
    
    try:
        manager.run_forever()
    except KeyboardInterrupt:
        print("🛑 Shutting down...")
        cleanup_uagent("math_agent_langchain_mcp")
        print("✅ Agent stopped.")

if __name__ == "__main__":
    import asyncio
    main()
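The callable passed to `create_agent_wrapper` is simply an async function mapping an incoming text message to a reply string. A minimal dependency-free stub illustrates that contract, with `FakeAgent` and `FakeMessage` standing in for the compiled ReAct agent and `HumanMessage` (both names are illustrative, not library APIs):

```python
import asyncio

class FakeMessage:
    """Stand-in for HumanMessage / AIMessage: just carries content."""
    def __init__(self, content: str):
        self.content = content

class FakeAgent:
    """Stand-in for the compiled ReAct agent: returns a canned answer."""
    async def ainvoke(self, state: dict) -> dict:
        question = state["messages"][0].content
        return {"messages": [FakeMessage(f"answer to: {question}")]}

_global_agent = FakeAgent()

async def agent_func(x: str) -> str:
    # Same shape as in simple_math_agent.py: string in, string out
    response = await _global_agent.ainvoke({"messages": [FakeMessage(x)]})
    return response["messages"][-1].content

print(asyncio.run(agent_func("what's (3 + 5) x 12?")))
```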
Multiple MCP Servers

You can create agents that interact with multiple MCP servers. Here's an example with both math and weather services:

# multi_server_agent.py
# (imports, model, and _global_agent are the same as in simple_math_agent.py)
async def setup_multi_server_agent():
    global _global_agent
    
    print("Setting up multi-server agent...")
    async with MultiServerMCPClient(
        {
            "math": {
                "command": "python",
                "args": ["/path/to/math_server.py"],
                "transport": "stdio",
            },
            "weather": {
                "url": "http://localhost:8000/sse",
                "transport": "sse",
            }
        }
    ) as client:
        tools = client.get_tools()
        _global_agent = create_react_agent(model, tools)
        
        # Test both services
        print("Testing math capabilities...")
        math_response = await _global_agent.ainvoke({"messages": [HumanMessage(content="what's (3 + 5) x 12?")]})
        print(f"Math test response: {math_response['messages'][-1].content}")
        
        print("Testing weather capabilities...")
        weather_response = await _global_agent.ainvoke({"messages": [HumanMessage(content="what's the weather in NYC?")]})
        print(f"Weather test response: {weather_response['messages'][-1].content}")
        
        # Keep the connection alive
        while True:
            await asyncio.sleep(1)
Using with LangGraph StateGraph

You can also use the MCP tools with LangGraph's StateGraph for more complex agent workflows:

from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

async def setup_graph_agent():
    global _global_agent
    
    async with MultiServerMCPClient(
        {
            "math": {
                "command": "python",
                "args": ["/path/to/math_server.py"],
                "transport": "stdio",
            },
            "weather": {
                "url": "http://localhost:8000/sse",
                "transport": "sse",
            }
        }
    ) as client:
        tools = client.get_tools()
        
        def call_model(state: MessagesState):
            response = model.bind_tools(tools).invoke(state["messages"])
            return {"messages": response}

        builder = StateGraph(MessagesState)
        builder.add_node(call_model)
        builder.add_node(ToolNode(tools))
        builder.add_edge(START, "call_model")
        builder.add_conditional_edges(
            "call_model",
            tools_condition,
        )
        builder.add_edge("tools", "call_model")
        _global_agent = builder.compile()
        
        # Test the agent
        response = await _global_agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
        print(f"Test response: {response['messages'][-1].content}")
        
        # Keep the connection alive
        while True:
            await asyncio.sleep(1)
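The `call_model → tools_condition → tools → call_model` wiring above implements a ReAct loop: call the model, execute any requested tool, feed the result back, and stop when no tool is requested. A dependency-free sketch with a scripted model shows the control flow (all names here are illustrative, not langgraph APIs):

```python
def react_loop(model_step, tools, messages):
    """Loop until the model replies without requesting a tool."""
    while True:
        reply = model_step(messages)        # ~ the call_model node
        messages = messages + [reply]
        tool_call = reply.get("tool_call")  # ~ tools_condition
        if tool_call is None:
            return reply["content"]
        name, args = tool_call
        result = tools[name](*args)         # ~ the ToolNode
        messages = messages + [{"role": "tool", "content": result}]

# Scripted model: asks for add, then multiply, then answers.
script = iter([
    {"tool_call": ("add", (3, 5)), "content": ""},
    {"tool_call": ("multiply", (8, 12)), "content": ""},
    {"tool_call": None, "content": "96"},
])
tools = {"add": lambda a, b: a + b, "multiply": lambda a, b: a * b}
answer = react_loop(lambda msgs: next(script), tools,
                    [{"role": "user", "content": "(3 + 5) x 12?"}])
print(answer)  # 96
```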
Project Structure
mcpLangchain/
├── math_server.py           # Simple math MCP server
├── simple_math_agent.py     # Basic math agent
├── multi_server_agent.py    # Agent with multiple MCP servers
├── multi_server_graph.py    # Graph-based agent with multiple servers
└── README.md                # This file
Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

License

This project is licensed under the terms of the MIT license.
