AIVIS-MCP

A collection of resources, guides, and examples from AIVIS (AIML club at BNMIT) covering large language models (LLMs), AI APIs, Ollama, Groq.com, Gemini, workflow automation with n8n, and AI agents.

AIVIS_MCP Workspace Guide
Overview

This workspace provides a set of tools and demos for interacting with LLMs (Groq, Ollama), MCP tool servers, and n8n automation. It includes Python scripts, Streamlit apps, and server modules for math and weather tasks.


1. Prerequisites
  • Python 3.11+ (recommended)
  • Node.js & npm (for n8n)
  • Ollama (for local LLM inference)
  • Groq API Key (for Groq cloud LLMs)

2. Setting Up Python Virtual Environment
  1. Open a terminal in your project folder.
  2. Create a virtual environment:
    python -m venv .venv
    
  3. Activate the environment:
    • Windows:
      .venv\Scripts\activate
      
    • macOS/Linux:
      source .venv/bin/activate
      
  4. Upgrade pip:
    pip install --upgrade pip
    

3. Install Python Dependencies

Install all required packages from requirements.txt:

pip install -r requirements.txt

4. Setting Up Environment Variables

Create a .env file in the root directory:

GROQ_API_KEY="your_groq_api_key_here"

Replace with your actual Groq API key.
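
To confirm the key is picked up, you can load it from Python. A minimal check, assuming python-dotenv is installed (add it to requirements.txt if it is not already there):

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
if not os.getenv("GROQ_API_KEY"):
    raise RuntimeError("GROQ_API_KEY is not set; check your .env file")
print("Groq API key loaded")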


5. Download & Install Ollama

Ollama is required for local LLM inference.

  • Visit https://ollama.com/download and follow instructions for your OS.
  • After installation, start Ollama:
    ollama serve
    
  • Pull a model (example: llama3):
    ollama pull llama3
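
Once the model is pulled, you can sanity-check the server from Python. A small sketch using Ollama's REST API on its default port 11434 (model name llama3 taken from the pull command above; requires the requests package):

import requests

# Ollama's local HTTP API listens on port 11434 by default
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello in one sentence.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the generated text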
    

6. Running the Python Scripts
Weather & Math MCP Servers
  • Weather Server:
    Run with:
    python server/weather.py
    
  • Math Server:
    Run with:
    python server/math_server.py
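
The server files in this repo are the source of truth, but for orientation, a minimal MCP tool server built with FastMCP from the official mcp Python SDK looks roughly like this (the tool name here is illustrative, not necessarily what server/math_server.py defines):

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")  # server name shown to MCP clients

@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b

if __name__ == "__main__":
    mcp.run(transport="stdio")  # communicate over stdin/stdout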
    
MCP Client
  • Run the client to interact with the weather server:
    python mcp_client.py
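
For reference, a stdio MCP client in Python generally follows the pattern below (a sketch only; the tool discovery step is illustrative, see mcp_client.py for the real client):

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch the server as a subprocess and talk to it over stdio
    params = StdioServerParameters(command="python", args=["server/weather.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # discover available tools

asyncio.run(main())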
    
Streamlit Apps
  • Groq Chatbot:
    streamlit run streamlit_groq.py
    
  • Ollama Chatbot:
    streamlit run ollama_frontend.py
    
  • n8n Test Chatbot:
    streamlit run n8n_test.py
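
For a rough idea of how such a front end is wired, here is a minimal Streamlit chat loop against the local Ollama API (an illustration under the assumptions from section 5, not the contents of ollama_frontend.py):

import requests
import streamlit as st

st.title("Ollama Chatbot (sketch)")

# Keep the conversation across Streamlit reruns
if "messages" not in st.session_state:
    st.session_state.messages = []

for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.write(msg["content"])

if prompt := st.chat_input("Ask something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.write(prompt)
    # Ollama's chat endpoint accepts the full message history
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={"model": "llama3", "messages": st.session_state.messages, "stream": False},
        timeout=120,
    )
    answer = resp.json()["message"]["content"]
    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.write(answer)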
    

7. Using Jupyter Notebook
  • Open groq.ipynb in Jupyter or VS Code.
  • Run the cells to interact with the Groq API.
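
The calls in the notebook follow the standard Groq Python SDK pattern. A minimal sketch, assuming the groq package is installed and GROQ_API_KEY is set as in section 4 (the model name is an example; any model your key can access works):

import os
from dotenv import load_dotenv
from groq import Groq

load_dotenv()
client = Groq(api_key=os.getenv("GROQ_API_KEY"))

completion = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # example model name
    messages=[{"role": "user", "content": "Explain MCP in one sentence."}],
)
print(completion.choices[0].message.content)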

8. Setting Up n8n Locally & Using the Provided Workflow
Install n8n
  1. Install Node.js & npm
    Download and install Node.js from https://nodejs.org/.

  2. Install n8n globally

    npm install -g n8n
    
  3. Start n8n

    n8n
    

    By default, n8n runs at http://localhost:5678.


Import and Configure the Workflow
  1. Open n8n in your browser:
    Go to http://localhost:5678.

  2. Import the workflow template:

    • Click the menu (top right) → "Import workflow".
    • Copy the contents of N8N_workflow_template.json and paste it into the import dialog.
    • Click "Import".
  3. Configure Credentials:

    • Set up your Ollama and SerpAPI credentials in n8n:
      • Go to "Credentials" in n8n.
      • Add your Ollama API and SerpAPI keys.
      • Make sure the credential names match those referenced in the workflow (Ollama account, SerpAPI account).
  4. Activate the workflow:

    • Click "Activate" to enable the workflow.

How the Workflow Works
  • Webhook Trigger:
    The workflow listens for POST requests at /webhook/invoke_agent.
    You can send messages to this endpoint (see n8n_test.py for an example).

  • AI Agent Node:
    Handles user queries, uses Ollama for LLM responses, and can access Google search via SerpAPI.

  • Memory & Tools:
    Includes a memory buffer and SerpAPI integration for enhanced responses.


Example: Sending a Message to the Workflow

You can use the provided n8n_test.py Streamlit app to interact with the workflow, or send a POST request manually:

curl -X POST http://localhost:5678/webhook/invoke_agent \
  -H "Content-Type: application/json" \
  -d '{"sessionId": "your-session-id", "chatInput": "What is the weather in Delhi?"}'

Tip:
If you change the workflow, export it from n8n to update your local N8N_workflow_template.json.


9. File Overview

10. Troubleshooting
  • Ensure .venv is activated before running Python scripts.
  • Make sure Ollama is running for local LLM inference.
  • Check .env for correct API keys.
  • For n8n, ensure Node.js is installed and n8n is running (start it with the n8n command).