AgentWeave - Multi-Agent AI Communication Framework
Overview
AgentWeave is a framework for creating and managing multi-agent AI systems with seamless Agent-to-Agent (A2A) communication. Built with integrated Model Context Protocol (MCP) and LangGraph support, AgentWeave enables developers to build sophisticated AI agent networks that can communicate, collaborate, and interact with external tools, using JWT authentication for secure operations.
🚀 Perfect for: AI researchers, developers building autonomous systems, chatbot networks, workflow automation, and enterprise AI solutions.
Note: This is an actively developed project. Multi-agent communication features are being continuously enhanced.
🌟 Key Features
✅ MCP Tool Registration: Easily register your MCP-compatible tools and spin up A2A server agents
✅ Multi-Provider LLM Support: Works with OpenAI, Azure OpenAI, and Google AI models
✅ JWT Authentication: Secure communication between agents with JSON Web Tokens
✅ Reactive UI: Real-time updates and streaming responses
✅ Dynamic Agent Creation: Create and configure agents on-the-fly through a user-friendly interface
✅ Modular Architecture: Easily extendable for custom agent behaviors and tools
✅ Docker Support: Containerized MongoDB for easy deployment
✅ REST API: Full-featured API for integration with existing systems
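Conceptually, the JWT-secured agent communication listed above comes down to signing and verifying compact tokens. The sketch below shows the HS256 mechanics using only the standard library; it is illustrative — a real deployment would use a maintained JWT library, and the payload fields shown are hypothetical:

```python
import base64
import hashlib
import hmac
import json

def _b64(data: bytes) -> str:
    # JWTs use unpadded, URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(payload: dict, secret: bytes) -> str:
    """Build a compact HS256 JWT: header.payload.signature."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = _b64(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_token(token: str, secret: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = _b64(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

# Hypothetical payload: one agent authenticating a message to another.
token = sign_token({"agent": "ResearchAssistant"}, b"shared-secret")
```

A receiving agent would call `verify_token(token, secret)` and reject the message if it returns `False`.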
System Architecture
AgentWeave uses a layered architecture for flexibility and scalability:
- The React frontend provides a user interface for creating agents and managing the engine
- The Express server handles API requests and proxies them to the Python backend
- The Engine is the core component that:
  - Starts the MongoDB database
  - Launches the Python backend server
  - Manages agent lifecycle and communication
- The Agent System consists of:
  - Agent configurations stored in MongoDB
  - MCP tool connections for each agent
  - LLM provider integrations via the configured API keys
- MCP Tool Servers run independently and are connected to agents based on configuration
Prerequisites
- Node.js (v16+) and npm for frontend
- Python (3.11+) for backend
- Docker for running MongoDB
- Make utility for running scripts
Installation
Clone the repository
git clone https://github.com/shanviinnovations/agentweave.git
cd agentweave
Environment Setup
All environment variables and LLM provider settings are now configured directly through the AgentWeave UI.
To get started, use the "Set Configuration" option in the Engine section of the UI to configure your LLM provider and related settings. This is the only supported method for setting up your environment.
Note: The previous approach of using a `.env` file is no longer supported or required.
Getting Started
Starting the Application
To get started with AgentWeave, you only need to run the frontend initially:
make start-frontend
This script will:
- Install npm dependencies
- Start the React development server
- Start the Express proxy server
Access the UI
Once the frontend is running, access the UI at:
- Frontend: http://localhost:9700
Important: Unlike traditional applications, AgentWeave's backend (engine) is started directly from the frontend UI. This design makes it easier for developers to manage the entire system from a single interface.
Starting the Engine
From the frontend UI:
- Click on the "Start Engine" button
- This will automatically start the backend components:
  - MongoDB database
  - Python backend server
  - Required agent services
If the engine starts successfully, you'll see a confirmation message in the UI.
Managing Agents
Once the engine is running, the UI will display:
- Any existing agents already stored in MongoDB
- Options to refresh or delete agents
Refreshing Agents
The "Refresh" option for an agent allows you to verify if the MCP tools integrated with the agent are functioning correctly. Use this option when:
- You've updated an MCP tool
- You suspect connectivity issues
- After server restarts
Configuring LLM Settings
To configure your LLM provider:
- Click on "Set Configuration" in the Engine section
- Select your preferred LLM provider (OpenAI, Azure, or Google)
- Enter the required API keys and endpoints
- Save your configuration
Creating a New Agent
To create a new agent:
- Click on "Add Agent" in the UI
- Fill in the required details:
  - Agent Name: A unique identifier for your agent
  - Agent Description: A brief description of the agent's purpose
  - Prompt: Define the agent's persona and behavior
  - MCP Configuration: Enter the MCP server details that this agent will connect to
- Click "Create" to launch your new agent
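For illustration, the details collected by the "Add Agent" form map naturally onto a small configuration record. The field names below mirror the UI labels, not AgentWeave's actual MongoDB schema:

```python
import json

# Hypothetical agent record; the values echo the example workflow later
# in this README, and the keys are illustrative, not the real schema.
agent_config = {
    "name": "ResearchAssistant",
    "description": "Helps with research and information gathering",
    "prompt": "You are a research assistant that helps users find information...",
    "mcp_address": "http://localhost:9500",
}

print(json.dumps(agent_config, indent=2))
```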
Working with MCP Tools
AgentWeave uses the Model Context Protocol (MCP) to enable seamless communication between agents and tools. Here's how the integration works:
MCP Tool Integration
- Tool Registration: When you configure an agent with an MCP address, it connects to that MCP server to register available tools
- Tool Discovery: The agent automatically discovers all tools exposed by the MCP server
- Tool Usage: When the agent needs a specific capability, it will invoke the appropriate MCP tool
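The register/discover/invoke cycle above can be sketched in miniature. This is a conceptual illustration only — real agents speak the MCP protocol to an external tool server rather than calling local functions, and all names here are hypothetical:

```python
# In-process stand-in for an MCP server's tool registry.
tools: dict = {}

def register_tool(name: str):
    """Registration: record a callable under a tool name."""
    def decorator(fn):
        tools[name] = fn
        return fn
    return decorator

@register_tool("web_search")
def web_search(query: str) -> str:
    return f"results for: {query}"

def discover_tools() -> list[str]:
    """Discovery: list every tool the 'server' exposes."""
    return sorted(tools)

def invoke_tool(name: str, **kwargs):
    """Usage: route an agent's call to the named tool."""
    if name not in tools:
        raise KeyError(f"unknown tool: {name}")
    return tools[name](**kwargs)
```

An agent would call `discover_tools()` once at connection time, then `invoke_tool("web_search", query=...)` whenever it needs that capability.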
MCP Tool Health Check
When you click "Refresh" on an agent, AgentWeave will:
- Attempt to reconnect to the MCP server
- Verify all tools are available and responding
- Update the agent's status in the UI
Troubleshooting MCP Connections
If your agent shows connectivity issues with MCP tools:
- Verify the MCP server is running
- Check network connectivity between the agent and the MCP server
- Ensure the MCP server has the expected tools registered
- Use the "Refresh" button to re-establish the connection
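As a first diagnostic for the steps above, a plain TCP connect tells you whether anything is listening at the configured MCP address at all. This helper is a hypothetical sketch, not part of AgentWeave, and it checks only reachability, not that the server speaks MCP:

```python
import socket
from urllib.parse import urlparse

def mcp_server_reachable(url: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the MCP address succeeds,
    e.g. mcp_server_reachable("http://localhost:9500")."""
    parsed = urlparse(url)
    host = parsed.hostname or "localhost"
    port = parsed.port or 80
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns `False`, fix the server or the network path before debugging tool registration.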
Example Workflow
Here's a step-by-step example of how to use AgentWeave:
Start the Frontend
make start-frontend
Open the UI
- Navigate to http://localhost:9700 in your browser
Start the Engine
- Click the "Start Engine" button in the UI
- Wait for confirmation that the engine has started successfully
Configure LLM Settings
- Click "Set Configuration" in the Engine section
- Choose OpenAI as the provider
- Enter your OpenAI API key
- Save the configuration
Create a Research Agent
- Click "Add Agent"
- Fill in the details:
  - Name: "ResearchAssistant"
  - Description: "Helps with research and information gathering"
  - Prompt: "You are a research assistant that helps users find information..."
  - MCP Address: "http://localhost:9500" (your MCP server address)
- Click "Create"
Test the Agent
- Once the agent is connected, you will see the agent and its tool information in the Agents tab
- Click on the "Interact" button in the agent's information inside the Agents tab
- This will open a conversation window where you can start interacting with the agent and test both the agent and its integrated MCP tools
- Try sending different requests to verify the agent's capabilities and MCP tool integration
Monitor and Manage
- Use the agent list to see all active agents
- Click "Refresh" to check agent connectivity
- Delete agents that are no longer needed
Development
Project Structure
- /frontend - React frontend and Express proxy server
- /backend - Python backend with FastAPI
- /scripts - Shell scripts for development workflows
- /utils - Shared utility functions
- /images - Documentation images and assets
🤝 Contributing
We welcome contributions! Please see our Contributing Guidelines for details on:
- How to submit bug reports and feature requests
- Development setup and coding standards
- Pull request process
🛣️ Roadmap
- Advanced agent orchestration
- Built-in monitoring and analytics
- Kubernetes deployment templates
- Plugin system for custom integrations
🏆 Awards and Recognition
This section will be updated as the project gains recognition in the AI community.
License
This project is licensed under the MIT License - see the LICENSE file for details.