LangGraph Multi-Agent Boilerplate
A robust boilerplate for building AI agent clusters efficiently using LangGraph with supervisor architecture, Model Context Protocol (MCP) integration, and comprehensive API.
🌟 Features
- Multi-Agent Architecture: Build AI agent clusters with supervisor coordination
- LangGraph Integration: Leverage LangGraph's powerful state management for agent workflows
- MCP Support: Integrate tools via Model Context Protocol servers
- Streaming API: Real-time streaming responses for interactive conversations
- Database Persistence: Store conversations, agent states, and activity logs in PostgreSQL
- Cloud Storage: File management with Cloudflare R2
- Comprehensive API: RESTful endpoints with FastAPI, including Swagger documentation
- Security: Authentication middleware, error handling, and security best practices
🚀 Getting Started
Prerequisites
- Python 3.10+
- PostgreSQL
- Cloudflare R2 account (optional, for cloud storage)
- OpenRouter AI API key (or other compatible AI provider)
Installation
- Clone the repository

```bash
git clone https://github.com/yourusername/langgraph-multiagent-boilerplate.git
cd langgraph-multiagent-boilerplate
```

- Set up a Python virtual environment

```bash
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

- Install dependencies

```bash
pip install -r requirements.txt
```

- Configure environment variables

```bash
cp .env.example .env
# Edit .env with your settings (database, API keys, etc.)
```

- Set up the database

```bash
# Create a PostgreSQL database
# Then run migrations (once implemented)
```

- Run the server

```bash
uvicorn app.main:app --reload
```

- Access the API documentation
  - Swagger UI: http://localhost:8000/api/docs
  - ReDoc: http://localhost:8000/api/redoc
📋 Project Structure
```
langgraph-multiagent-boilerplate/
├── app/
│   ├── api/
│   │   ├── exceptions.py      # Error handling
│   │   ├── middleware/        # Security & auth middleware
│   │   └── routes/            # API endpoints
│   ├── core/
│   │   ├── config.py          # Configuration management
│   │   └── langgraph/         # LangGraph components
│   ├── db/
│   │   └── base.py            # Database setup
│   ├── models/                # SQLAlchemy models
│   ├── schemas/               # Pydantic schemas
│   ├── services/              # Business logic
│   └── main.py                # Application entry point
├── tests/                     # Test suite
├── .env.example               # Environment template
├── pyproject.toml             # Python project metadata
├── requirements.txt           # Dependencies
├── README.md                  # This file
├── PROJECT_OVERVIEW.md        # Detailed project documentation
└── IMPLEMENTATION_TASKS.md    # Development roadmap
```
🧠 How It Works
Multi-Agent System Architecture
- AI Crews: Each AI agent cluster contains multiple crews, each led by a supervisor agent
- Supervisor Architecture: The supervisor agent analyzes user input, creates plans, and assigns tasks to other agents
- Tool Integration: Agents can access external tools via MCP servers
- Streaming Communication: Real-time responses with event streaming
- Persistence: All conversations, states, and activities are stored in the database
Example Flow
- User sends a message to a crew
- Supervisor agent receives the input via API call
- Supervisor analyzes the input and the crew's capabilities
- Supervisor either answers directly or creates a detailed plan
- If needed, supervisor assigns tasks to specialized agents
- Agents perform their tasks using attached MCP tools
- Supervisor collects results, analyzes them, and formulates a response
- Response is streamed back to the user
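As a plain-Python illustration of the supervisor's delegation decision (steps 3–5 above), the sketch below matches a message against each agent's declared specialties. The function name `route_message` and the keyword-matching heuristic are assumptions for illustration, not the boilerplate's actual routing logic:

```python
def route_message(message: str, agents: dict[str, list[str]]) -> list[str]:
    """Pick specialized agents whose keywords appear in the message.

    agents maps an agent name to the keywords it handles.
    An empty result means the supervisor answers directly.
    """
    text = message.lower()
    return [name for name, keywords in agents.items()
            if any(kw in text for kw in keywords)]

crew = {
    "Web Researcher": ["search", "web", "online", "trends"],
    "Data Analyst": ["statistics", "dataset", "analyze"],
}

# "trends" matches the Web Researcher's keywords
print(route_message("What are the latest trends in multi-agent AI?", crew))
```

In the real system this decision is made by the supervisor's LLM call rather than keyword matching, but the control flow (delegate vs. answer directly) is the same.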
🔌 API Reference
Core Endpoints
Crews and Agents
- `GET /api/crews` - List all crews
- `POST /api/crews` - Create a new crew
- `GET /api/crews/{crew_id}` - Get crew details
- `PUT /api/crews/{crew_id}` - Update a crew
- `DELETE /api/crews/{crew_id}` - Delete a crew
- `GET /api/agents` - List all agents
- `POST /api/agents` - Create a new agent
- `GET /api/agents/{agent_id}` - Get agent details
- `PUT /api/agents/{agent_id}` - Update an agent
- `DELETE /api/agents/{agent_id}` - Delete an agent
Conversations
- `GET /api/conversations` - List conversations
- `POST /api/conversations` - Create a new conversation
- `GET /api/conversations/{conversation_id}` - Get conversation details
- `POST /api/conversations/{conversation_id}/chat` - Send a message and get a response
- `POST /api/conversations/{conversation_id}/chat/stream` - Get a streaming response
See the Swagger documentation for the complete API reference.
📝 Usage Examples
Creating a Crew with Agents
```python
import httpx

# Create a new crew
crew_data = {
    "name": "Research Crew",
    "description": "A crew specialized in research tasks",
    "metadata": {"specialization": "research"},
}
response = httpx.post("http://localhost:8000/api/crews", json=crew_data)
crew = response.json()
crew_id = crew["id"]

# Create a supervisor agent
supervisor_data = {
    "crew_id": crew_id,
    "name": "Research Supervisor",
    "description": "Supervises research operations",
    "system_prompt": "You are a research supervisor responsible for coordinating research efforts.",
    "model": "google/gemini-2.5-flash",
    "is_supervisor": True,
    "metadata": {},
}
httpx.post("http://localhost:8000/api/agents", json=supervisor_data)

# Create specialized agents
web_researcher_data = {
    "crew_id": crew_id,
    "name": "Web Researcher",
    "description": "Specializes in web research",
    "system_prompt": "You are a web researcher that finds accurate information online.",
    "model": "claude-3-sonnet",
    "is_supervisor": False,
    "metadata": {"specialty": "web_search"},
}
httpx.post("http://localhost:8000/api/agents", json=web_researcher_data)
```
Starting a Conversation
```python
import json

import httpx

# Create a conversation with a crew (crew_id from the previous example)
conversation_data = {
    "user_id": "user123",
    "crew_id": crew_id,
    "title": "Research on AI trends",
}
response = httpx.post("http://localhost:8000/api/conversations", json=conversation_data)
conversation = response.json()
conversation_id = conversation["id"]

# Send a message to the crew
message_data = {
    "message": "What are the latest trends in multi-agent AI systems?",
    "metadata": {},
}

# For a non-streaming response
response = httpx.post(
    f"http://localhost:8000/api/conversations/{conversation_id}/chat",
    json=message_data,
)
print(response.json()["content"])

# For a streaming response (Server-Sent Events)
with httpx.stream(
    "POST",
    f"http://localhost:8000/api/conversations/{conversation_id}/chat/stream",
    json=message_data,
    timeout=60.0,
) as response:
    for chunk in response.iter_lines():
        if chunk.startswith("data: "):
            data = json.loads(chunk[6:])
            if "choices" in data and data["choices"][0]["delta"].get("content"):
                print(data["choices"][0]["delta"]["content"], end="")
```
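The line-by-line SSE handling in the streaming loop can be factored into a small helper that is easy to unit-test without a running server. This is a sketch: the OpenAI-style `choices`/`delta` payload shape is assumed from the example above, and the `[DONE]` terminator is a common SSE convention rather than something this boilerplate is documented to emit:

```python
import json

def extract_delta(line: str) -> str:
    """Return the text carried by one SSE 'data:' line, or ''.

    Expects OpenAI-style chunks: {"choices": [{"delta": {"content": ...}}]}.
    Non-data lines, terminators, and chunks without content yield ''.
    """
    if not line.startswith("data: "):
        return ""
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":  # common end-of-stream marker
        return ""
    data = json.loads(payload)
    choices = data.get("choices") or []
    if choices:
        return choices[0].get("delta", {}).get("content") or ""
    return ""

print(extract_delta('data: {"choices": [{"delta": {"content": "Hi"}}]}'))  # Hi
```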
🧪 Testing
Run the test suite with:

```bash
pytest
```
🔧 Configuration
Key environment variables:
- `DATABASE_URL`: PostgreSQL connection string
- `OPENROUTER_API_KEY`: OpenRouter API key
- `MCP_SERVER_URL`: URL of the MCP server
- `R2_ENDPOINT`, `R2_BUCKET_NAME`, etc.: Cloudflare R2 configuration
- `JWT_SECRET_KEY`: Secret for JWT authentication
- `DEBUG`: Enable debug mode

See `.env.example` for a complete list of configuration options.
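For local development, a minimal `.env` might look like the fragment below. All values are placeholders, and the exact variable names and connection-string format should be confirmed against `.env.example`:

```env
DATABASE_URL=postgresql://user:password@localhost:5432/multiagent
OPENROUTER_API_KEY=your-openrouter-key
MCP_SERVER_URL=http://localhost:9000
JWT_SECRET_KEY=change-me-in-production
DEBUG=true
```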
🧩 Extending the Boilerplate
Adding New MCP Tools
- Register a new MCP server in the database
- Discover and register tools from the server
- Assign tools to agents
Creating Custom Agent Types
- Create a new agent with specialized system prompt
- Assign relevant MCP tools to the agent
- Add the agent to a crew
Implementing Custom Workflows
- Modify the supervisor logic in `app/core/langgraph/supervisor.py`
- Adjust the state graph to implement your custom workflow
🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.