agent-atlassian
Atlassian (Jira/Confluence) AI agent powered by a first-party MCP server (generated with OpenAPI codegen), LangGraph, and LangChain MCP Adapters. The agent is exposed over multiple agent transport protocols (AGNTCY Slim, Google A2A, MCP server).
# 🚀 Atlassian AI Agent

## 🧪 Evaluation Badges

| Claude | Gemini | OpenAI | Llama |
|---|---|---|---|
- 🤖 **Atlassian Agent** is an LLM-powered agent built using the LangGraph ReAct Agent workflow and MCP tools.
- 🌐 **Protocol Support:** Compatible with the A2A protocol for integration with external user clients.
- 🛡️ **Secure by Design:** Enforces Atlassian API token-based RBAC and supports external authentication for strong access control.
- 🔗 **Integrated Communication:** Uses langchain-mcp-adapters to connect with the Atlassian MCP server within the LangGraph ReAct Agent workflow.
- 🏗 **First-Party MCP Server:** The MCP server is generated by our first-party openapi-mcp-codegen utility, ensuring version/API compatibility and software supply chain integrity.
## 📦 Getting Started

### 1️⃣ Configure Environment

- Ensure your `.env` file is set up as described in the cnoe-agent-utils usage guide, based on your LLM provider.
- Refer to `.env.example` as an example.

Example `.env` file:
```bash
LLM_PROVIDER=

AGENT_NAME=atlassian

ATLASSIAN_TOKEN=
ATLASSIAN_EMAIL=
ATLASSIAN_API_URL=
ATLASSIAN_VERIFY_SSL=

########### LLM Configuration ###########
# Refer to: https://github.com/cnoe-io/cnoe-agent-utils#-usage
```
Use the following link to get your own Atlassian API Token:
https://id.atlassian.com/manage-profile/security/api-tokens
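Under the hood, `ATLASSIAN_EMAIL` and `ATLASSIAN_TOKEN` are sent to the Atlassian Cloud REST API as HTTP Basic credentials. A minimal sketch of how that `Authorization` header is formed (the credentials below are placeholders):

```python
import base64

def atlassian_auth_header(email: str, api_token: str) -> str:
    """Build the HTTP Basic Authorization header value that Atlassian
    Cloud REST APIs expect: base64("<email>:<api_token>")."""
    raw = f"{email}:{api_token}".encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")

# Placeholder credentials for illustration only:
print(atlassian_auth_header("me@example.com", "my-api-token"))
```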
### 2️⃣ Start the Agent (A2A Mode)

Run the agent in a Docker container using your `.env` file:

```bash
docker run -p 0.0.0.0:8000:8000 -it \
  -v "$(pwd)/.env:/app/.env" \
  ghcr.io/cnoe-io/agent-atlassian:a2a-stable
```
### 3️⃣ Run the Client

Use the agent-chat-cli to interact with the agent:

```bash
uvx https://github.com/cnoe-io/agent-chat-cli.git a2a
```
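The chat CLI speaks A2A over HTTP, where each request is a JSON-RPC 2.0 call. A rough sketch of the payload a client might POST to the agent at `http://localhost:8000`, assuming the A2A `message/send` method (field names follow the public A2A spec and are illustrative, not taken from this repo):

```python
import json
import uuid

def build_a2a_send_request(text: str) -> dict:
    """Assemble a JSON-RPC 2.0 request for the A2A `message/send` method.
    Illustrative sketch only; consult the A2A spec for the normative schema."""
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "messageId": str(uuid.uuid4()),
                # A text part carrying the user's prompt:
                "parts": [{"kind": "text", "text": text}],
            }
        },
    }

print(json.dumps(build_a2a_send_request("show my open Jira issues"), indent=2))
```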
## 🏗️ Architecture

```mermaid
flowchart TD
  subgraph Client Layer
    A[User Client A2A]
  end

  subgraph Agent Transport Layer
    B[Google A2A]
  end

  subgraph Agent Graph Layer
    C[LangGraph ReAct Agent]
  end

  subgraph Tools/MCP Layer
    D[LangGraph MCP Adapter]
    E[Atlassian MCP Server]
    F[Atlassian API Server]
  end

  A --> B --> C
  C --> D
  D -.-> C
  D --> E --> F --> E
```
## ✨ Features

- 🤖 LangGraph + LangChain MCP Adapter for agent orchestration
- 🧠 Pluggable LLM backend (e.g., Azure OpenAI GPT-4o) configured via cnoe-agent-utils
- 🔗 Connects to Atlassian via the dedicated first-party Atlassian MCP server
## 🧪 Usage

### ▶️ Test with an Atlassian Server

To exercise the agent's tools you need access to an Atlassian Cloud instance (Jira/Confluence). If you don't have one:

- Sign up for a free Atlassian Cloud site.
- Create an API token at https://id.atlassian.com/manage-profile/security/api-tokens.
- Set `ATLASSIAN_EMAIL`, `ATLASSIAN_TOKEN`, and `ATLASSIAN_API_URL` (e.g. `https://your-domain.atlassian.net`) in your `.env` file.
### 2️⃣ Run the A2A Client

To interact with the agent in A2A mode:

```bash
make run-a2a-client
```
#### Sample Streaming Output

When running in A2A mode, you'll see streaming responses similar to the following (illustrative):

```
============================================================
RUNNING STREAMING TEST
============================================================
--- Single Turn Streaming Request ---
--- Streaming Chunk ---
Here are the details of issue **PROJ-101**:

- **Summary:** Fix login redirect loop
- **Status:** In Progress
- **Assignee:** Jane Doe
- **Priority:** High
```
## 🧬 Internals

- 🛠️ Uses `create_react_agent` for tool-calling
- 🔌 Tools loaded from the Atlassian MCP server (submodule)
- ⚡ MCP server launched via `uv run` with `stdio` transport
- 🕸️ Single-node LangGraph for inference and action routing
## 📂 Project Structure

```
agent_atlassian/
│
├── agent.py          # LLM + MCP client orchestration
├── langgraph.py      # LangGraph graph definition
├── __main__.py       # CLI entrypoint
├── state.py          # Pydantic state models
└── atlassian_mcp/    # Git submodule: Atlassian MCP server
```
## 🧩 MCP Submodule (Atlassian Tools)

This project uses a first-party MCP module generated from the Atlassian OpenAPI specification using our openapi-mcp-codegen utility. The generated MCP server is included as a git submodule in `atlassian_mcp/`.

All Atlassian-related LangChain tools are defined by this MCP server implementation, ensuring up-to-date API compatibility and supply chain integrity.
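Conceptually, the codegen walks the OpenAPI document and emits one MCP tool per operation. A simplified, hypothetical sketch of that idea (this is not the actual openapi-mcp-codegen output; the spec fragment and descriptor fields are invented for illustration):

```python
# Hand-written fragment of an OpenAPI document, for illustration only.
SPEC = {
    "paths": {
        "/rest/api/3/issue/{issueIdOrKey}": {
            "get": {
                "operationId": "getIssue",
                "summary": "Returns a single Jira issue.",
            }
        },
        "/rest/api/3/search": {
            "get": {
                "operationId": "searchIssues",
                "summary": "Searches for issues using JQL.",
            }
        },
    }
}

def generate_tools(spec: dict) -> list[dict]:
    """Emit one tool descriptor per (path, HTTP method) operation in the spec.
    Real codegen also maps parameters and schemas; this shows only the skeleton."""
    tools = []
    for path, operations in spec["paths"].items():
        for method, op in operations.items():
            tools.append({
                "name": op["operationId"],
                "description": op.get("summary", ""),
                "method": method.upper(),
                "path": path,
            })
    return tools

for tool in generate_tools(SPEC):
    print(tool["name"], "->", tool["method"], tool["path"])
```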
## 🔌 MCP Integration

The agent uses `MultiServerMCPClient` to communicate with MCP-compliant services.

Example (stdio transport):
```python
async with MultiServerMCPClient(
    {
        "atlassian": {
            "command": "uv",
            "args": ["run", "/abs/path/to/atlassian_mcp/server.py"],
            "env": {
                "ATLASSIAN_TOKEN": atlassian_token,
                "ATLASSIAN_API_URL": atlassian_api_url,
                "ATLASSIAN_VERIFY_SSL": "false"
            },
            "transport": "stdio",
        }
    }
) as client:
    agent = create_react_agent(model, client.get_tools())
```
Example (SSE transport):

```python
async with MultiServerMCPClient(
    {
        "atlassian": {
            "transport": "sse",
            "url": "http://localhost:8000"
        }
    }
) as client:
    ...
```
## Evals

### Running Evals

This evaluation uses agentevals to perform strict trajectory match evaluation of the agent's behavior. To run the evaluation suite:

```bash
make evals
```

This will:

- Set up and activate the Python virtual environment
- Install evaluation dependencies (`agentevals`, `tabulate`, `pytest`)
- Run strict trajectory matching tests against the agent
### Example Output

```
=======================================
Setting up the Virtual Environment
=======================================
Virtual environment already exists.
=======================================
Activating virtual environment
=======================================
To activate venv manually, run: source .venv/bin/activate
. .venv/bin/activate
Running Agent Strict Trajectory Matching evals...
Installing agentevals with Poetry...
. .venv/bin/activate && uv add agentevals tabulate pytest
...
set -a && . .env && set +a && uv run evals/strict_match/test_strict_match.py
...
Test ID: atlassian_agent_1
Prompt: show atlassian version
Reference Trajectories: [['__start__', 'agent_atlassian']]
Note: Shows the version of the Atlassian Server Version.
...
Results:
{'score': True}
...
```
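The `{'score': True}` result above comes from strict trajectory matching: the agent's sequence of visited graph nodes must exactly equal one of the reference trajectories. The core idea can be sketched in plain Python (a simplified stand-in; the real suite uses the agentevals package):

```python
def strict_trajectory_match(actual: list[str], references: list[list[str]]) -> bool:
    """Return True if the agent's step sequence exactly equals at least one
    reference trajectory (same nodes, same order, same length).
    Simplified illustration of agentevals' strict matching mode."""
    return any(actual == ref for ref in references)

# The agent graph visited '__start__' then the 'agent_atlassian' node:
actual = ["__start__", "agent_atlassian"]
references = [["__start__", "agent_atlassian"]]
print({"score": strict_trajectory_match(actual, references)})  # {'score': True}
```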
### Evaluation Results

Latest Strict Match Eval Results
## 📄 License

Apache 2.0 (see LICENSE)
## 👥 Maintainers

See MAINTAINERS.md

Contributions welcome via PR or issue!
## 🙏 Acknowledgements

- LangGraph and LangChain for agent orchestration frameworks.
- langchain-mcp-adapters for MCP integration.
- AGNTCY Agent Gateway Protocol (AGP)
- AGNTCY Workflow Server Manager (WFSM) for deployment and orchestration.
- Model Context Protocol (MCP) for the protocol specification.
- Google A2A
- The open source community for ongoing support and contributions.