agent-builder-api
agent-builder-api is a Python library for building agents, designed so that developers can create AI agents easily. It provides an easy-to-use API and integrates a variety of features, enabling rapid prototyping and implementation.
Agent Builder API
🚀 Quick Start
You can start the API server using Docker containers or by manually cloning and building this repository.
Manual setup: Clone this repository on your local machine and start the Python FastAPI server. Optionally, install and set up MongoDB.
Dev Container setup: Dev Containers let you automate the environment setup.
Use this setup to install and run the container services in an isolated environment with extensions preinstalled.
You can also open GitHub Codespaces in a remote environment/browser, using secrets to pass model API keys.
Manual Setup
Set up a Python virtualenv and install dependencies:
```shell
python -m venv --prompt agent-builder-api venv
source venv/bin/activate  # venv/Scripts/activate (Windows PS)
pip install -r requirements.txt
```
Set the model name and API key in the .env file:
```
OPENAI_API_KEY="sk----"
MODEL_NAME="openai"
```
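For illustration, the .env file holds simple KEY="value" pairs. The sketch below is a hypothetical parser showing the format only; the server itself presumably loads these with a dotenv-style loader:

```python
# Hypothetical helper illustrating the KEY="value" format of a .env line.
# Not part of agent-builder-api; the project likely uses a dotenv loader.
def parse_env_line(line: str) -> tuple[str, str]:
    key, _, value = line.partition("=")
    return key.strip(), value.strip().strip('"')

print(parse_env_line('MODEL_NAME="openai"'))  # ('MODEL_NAME', 'openai')
```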
Start the server in a new terminal:
```shell
python -m agentbuilder.main
```
[Optional] Start using Poetry
For fine-grained dependency management, use Poetry to pick and choose dependency packs based on your LLM model provider and tool features.
Follow the official instruction guide to install Poetry.
Pick and choose dependency packs to install:
```shell
poetry install --extras "mcp openai gemini cohere anthropic mongodb togetherai ollama vectordb langgraph guardrails ui"
```
Set the model name and API key in the .env file:
```
OPENAI_API_KEY="sk----"
MODEL_NAME="openai"
```
Start the server in a new terminal:
```shell
poetry run start-server
```
[!NOTE]
Poetry creates the virtual environment for you.
MCP servers
Local MCP servers can be added in the MCP server folder.
Multiple MCP clients can be configured in the MCP client file:
```python
mcp_servers = {
    "math": {
        "command": sys.executable,
        "args": [str(current_dir / "servers" / "mcp_math.py")],
        "transport": "stdio",
    },
}
```
Vibe coding Agent
Projects run inside a Docker container (alpine_runner).
The pre-defined Docker Compose file can be used to bind host volumes and application ports.
Start the Docker container using the compose file.
All commands executed by the model can be viewed with docker logs:
```shell
docker logs alpine_runner -f
```
Project-related information can be configured in openvibe.py:
```python
@mcp.resource("config://app")
def get_config() -> dict:
    """Static configuration data"""
    return {
        "project_path": "~/projects/appname",
        "project_docs_path": "~/projects/appname/docs"
    }
```
CLI mode
Add aliases for the chat.sh and api_client.sh scripts in your shell config (~/.zshrc, ~/.bashrc):
```shell
alias chat=/pathto/chat.sh
alias api=/pathto/api_client.sh
```
Use the help option to list the available options:
```shell
chat -h
```
Custom UI (Gradio)
Start the Gradio UI:
```shell
poetry run start-ui
# OR
gradio ./agentbuilder/ui/app.py
```
More agents can be enabled by adding them to the dropdown choices in app.py.
[Optional] Enable MongoDB
By default, data is stored as JSON files. Enable MongoDB storage by setting the URL via an environment variable:
```
MONGODB_URL="mongodb://localhost:27017/llmdb"
```
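The value follows the standard MongoDB connection-string form, mongodb://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;. A quick sketch of how the parts decompose, using only the standard library:

```python
from urllib.parse import urlparse

# Decompose the example connection string from above into host, port, and
# database name. This illustrates the URL format only; the server itself
# connects with a MongoDB driver.
parsed = urlparse("mongodb://localhost:27017/llmdb")
print(parsed.hostname, parsed.port, parsed.path.lstrip("/"))  # localhost 27017 llmdb
```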
Dev Containers Setup
Enable Dev Containers in VS Code by following the steps in the official documentation.
Click the badge below to run the services in an isolated container environment on a local machine.
This will clone the repo and start the API and MongoDB container services.
[!TIP]
Use the URL mongodb://mongodb:27017/llmdb in the MongoDB VS Code extension to view stored data.
Execute F1 > Dev Containers: Attach to Running Container... and select agent-builder-container.
Set the model name and API key in the .env file:
```
OPENAI_API_KEY="sk----"
MODEL_NAME="openai"
```
Customization
Dependency packs and environment configuration
Dependency packs allow fine-grained package installations based on your requirements.
Use the EXTRA_DEPS environment variable in the docker-compose file to update them.
The install-extra-deps.sh script can be used in Dev Container mode if docker-compose is not available.
For example, the environment configuration below installs dependencies for the Gemini model, MongoDB, LangGraph, and VectorDB:
```
EXTRA_DEPS: "gemini,mongodb,langgraph,vectordb"
```
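The value is a plain comma-separated list of pack names. A hypothetical helper (not from the repository) showing how such a value decomposes:

```python
# Hypothetical illustration of splitting an EXTRA_DEPS-style value into
# individual dependency pack names, tolerating stray whitespace and commas.
def parse_extra_deps(value: str) -> list[str]:
    return [p.strip() for p in value.split(",") if p.strip()]

print(parse_extra_deps("gemini,mongodb,langgraph,vectordb"))
# ['gemini', 'mongodb', 'langgraph', 'vectordb']
```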
[!TIP]
Start with a basic dependency pack that supports your model and add other features incrementally.
Each of the following models is supported via its dependency pack:
| Model | Dependency pack | ENV key name |
|---|---|---|
| OpenAI | openai | OPENAI_API_KEY |
| Gemini | gemini | GOOGLE_API_KEY |
| Cohere | cohere | COHERE_API_KEY |
| Anthropic | anthropic | ANTHROPIC_API_KEY |
Some pre-configured tools require extra dependencies or API keys to be enabled.
| Tool | Dependency pack | ENV key name |
|---|---|---|
| internet_search | - | TAVILY_API_KEY |
| vectorstore_search | vectordb | EMBED_MODEL_NAME |
Adding Tools
Add custom tools or toolkits using the tool factory module (agentbuilder/factory/tool_factory).
agentbuilder/tools/my_custom_tool.py:
```python
from pathlib import Path
from langchain_core.tools import tool
from pydantic import BaseModel, Field

@tool
def my_custom_tool(a: int, b: int):
    """Custom Tool Description"""
    return a + b

my_custom_tool.name = "custom_tool_name"
my_custom_tool.description = "Custom Tool Description"

class field_inputs(BaseModel):
    a: int = Field(description="First input")
    b: int = Field(description="Second input")

my_custom_tool.args_schema = field_inputs
my_custom_tool.metadata = {"file_path": str(Path(__file__).absolute())}
```
Add your tool to the get_all_tools method in the tool_factory module.
agentbuilder/factory/tool_factory.py:
```diff
 def get_all_tools() -> Sequence[BaseTool]:
     return get_vectordb_tools() + get_websearch_tools() + json_tools + [
         directly_answer_tool,
         weather_clothing_tool,
         temperature_tool,
         temperature_sensor_tool,
         sum_tool,
         greeting_tool,
         git_diff_tool,
         repl_tool,
+        my_custom_tool
     ]
```
Adding Agents
Agents can be created using the extension UI or declared in code.
Add your agents using the agent factory module (agentbuilder/factory/agent_factory).
Create your agent:
```python
def my_agent():
    return AgentParams(
        name="my_agent",
        preamble="You are a powerful agent that uses tools to answer Human questions",
        tools=["my_custom_tool"],
        agent_type="tool_calling"
    )
```
Add your agent to the get_all_agents method:
```diff
 def get_all_agents():
     return [
         default_agent(),
         weather_agent(),
         python_agent(),
         git_agent(),
+        my_agent()
     ]
```
Custom Agent Builder and Graphs
Customize your Agent Workflow using custom prompts and graphs.
Filter on the agent name to apply customizations.
For example, the following code applies the graph builder workflow for the agent named "graph_agent":
```python
def get_agent_builder(params: AgentBuilderParams):
    agent_name = params.name
    match agent_name:
        case "graph_agent":
            from agentbuilder.agents.BaseGraphAgentBuilder import BaseGraphAgentBuilder
            return BaseGraphAgentBuilder(params)
        case _:
            return BaseAgentBuilder(params)
```
[!IMPORTANT]
The "langgraph" dependency pack must be installed for BaseGraphAgentBuilder.
Configuring models
Update the model configuration using environment variables.
The {Provider}/{ModelName} format is supported.
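A hypothetical sketch (not the library's own parsing code) of how a {Provider}/{ModelName} value splits into its two parts:

```python
# Hypothetical helper illustrating the {Provider}/{ModelName} convention used
# by MODEL_NAME and EMBED_MODEL_NAME; the library's internal parsing may differ.
def split_model_name(value: str) -> tuple[str, str]:
    provider, _, model = value.partition("/")
    return provider, model

print(split_model_name("openai/gpt-4o"))  # ('openai', 'gpt-4o')
```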
Gemini
Create API keys at https://aistudio.google.com/app/apikey
```
MODEL_NAME="gemini/gemini-pro"
EMBED_MODEL_NAME="gemini/embedding-001"
GOOGLE_API_KEY=<GOOGLE_API_KEY>
```
Cohere
Create API keys at https://dashboard.cohere.com/api-keys
```
MODEL_NAME="cohere/command"
EMBED_MODEL_NAME="cohere/embed-english-v3.0"
COHERE_API_KEY=<COHERE_API_KEY>
```
Open AI
Create API keys at https://platform.openai.com/docs/quickstart/account-setup
```
MODEL_NAME="openai/gpt-4o"
EMBED_MODEL_NAME="openai/text-embedding-3-large"
OPENAI_API_KEY=<OPENAI_API_KEY>
```
Anthropic
Create API keys at https://www.anthropic.com/ and https://www.voyageai.com/
```
MODEL_NAME="anthropic/claude-3-opus-20240229"
EMBED_MODEL_NAME="voyageai/voyage-2"
ANTHROPIC_API_KEY=<ANTHROPIC_API_KEY>
VOYAGE_API_KEY=<VOYAGE_API_KEY>
```
Ollama
Use local models for function calls.
[!TIP]
Use the JSON chat agent type for better compatibility with local models.
Install Ollama and pull the model:
```shell
ollama pull mistral:v0.3
```
Set the environment variable:
```
MODEL_NAME="ollama/mistral:v0.3"
```