🧠 AI Research Assistant Using LangChain + Streamlit (with a Simple MCP Pattern)
This is a minimal AI-powered research assistant built using:
- LangChain
- OpenAI GPT
- DuckDuckGo Search Tool
- Streamlit for UI
- .env file for API key management
📦 Features
- 🤖 Planner Agent – Breaks down high-level tasks into subtasks using GPT
- 🔍 Executor Agent – Uses DuckDuckGo to research each subtask
- 🧠 Model Context Protocol (MCP) – Shared memory for agents
- 🖥️ Streamlit UI – Clean web interface to trigger and inspect results
🚀 Getting Started
1. Clone & Setup
git clone https://github.com/your-username/ai-research-assistant-mcp.git
cd ai-research-assistant-mcp
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
2. Add Your OpenAI API Key
Create a .env file in the root of the project:
OPENAI_API_KEY=sk-...
✅ Once your key is set in .env, there is no need to export it manually.
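Under the hood, the app is expected to pick the key up via python-dotenv; a minimal sketch, assuming python-dotenv is among the project's dependencies:

```python
# Sketch of .env loading (assumes python-dotenv is installed via requirements.txt)
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root into the process environment
api_key = os.getenv("OPENAI_API_KEY")  # then picked up by the OpenAI / LangChain clients
```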
3. Run the App
streamlit run ui.py
🗂 Project Structure
ai-research-assistant-mcp/
├── mcp_context.py # Shared agent memory (MCP)
├── planner_executor.py # Agent logic (Planner + Executor)
├── ui.py # Streamlit interface
├── requirements.txt # Python dependencies
├── .env # OpenAI API Key (not tracked by Git)
🧠 How It Works
Planner Agent:
- Reads current_task from mcp_context.py
- Breaks it into subtasks using GPT
Executor Agent:
- Searches the web for each subtask
- Stores results in mcp["agent_memory"]["executor_results"]
Streamlit UI:
- Displays results in real time with a single button click
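A matching Streamlit view might look like the sketch below; it relies on the hypothetical helpers from the previous sketch:

```python
# ui.py -- minimal Streamlit sketch; uses the hypothetical run_planner / run_executor above.
import streamlit as st
from langchain_openai import ChatOpenAI

from mcp_context import mcp
from planner_executor import run_planner, run_executor

st.title("AI Research Assistant (MCP)")
st.write(f"Current task: {mcp['current_task']}")

if st.button("Run research"):
    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    for subtask, result in run_executor(run_planner(llm)).items():
        st.subheader(subtask)
        st.write(result)
```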
✏️ Customize Your Task
Edit the current task in mcp_context.py:
"current_task": "Research LangChain Agents"
🛠 Future Ideas
- LangGraph or CrewAI integration
- Memory persistence (Redis, FAISS)
- CSV/PDF file ingestion with context-aware planning
- Role-based multi-agent delegation
📄 License
MIT – free to use, improve, and share.
🙋‍♀️ Author
Viplav Fauzdar – viplavfauzdar.com