llm_to_mcp_integration_engine
This integration engine leverages large language models (LLMs) to build automation workflows. Implemented in Python, it is designed to make LLMs easy to use, and it particularly excels at data processing and API integrations.
🔍 What is llm_to_mcp_integration_engine?
llm_to_mcp_integration_engine is a new idea for a communication layer between LLMs and MCP servers or functions.
It enhances the reliability of tool calling by ensuring tools are selected, validated, and executed correctly before triggering any external process.
It searches for tool selection indicators (SELECTED_TOOLS, SELECTED_TOOL, NO_TOOLS_SELECTED) in the LLM's response and validates them against a predefined tool list.
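For illustration, the indicator scan and validation described above might be sketched like this. The function name, the regex, and the exact response format are assumptions for the sketch, not the engine's actual implementation:

```python
import re

# Tool-selection indicators the engine scans for in the raw LLM response.
INDICATORS = ("SELECTED_TOOLS", "SELECTED_TOOL", "NO_TOOLS_SELECTED")

def find_selected_tools(llm_response, known_tools):
    """Hypothetical sketch: locate a selection indicator in the response,
    extract the tool names that follow it, and validate them against the
    predefined tool list before anything is executed."""
    if "NO_TOOLS_SELECTED" in llm_response:
        return []  # the LLM explicitly declined to use tools
    match = re.search(r"SELECTED_TOOLS?\s*[:=]\s*\[?([\w,\s\"']+)\]?", llm_response)
    if not match:
        raise ValueError("no tool-selection indicator found")
    names = [n.strip().strip("\"'") for n in match.group(1).split(",") if n.strip()]
    unknown = [n for n in names if n not in known_tools]
    if unknown:
        raise ValueError(f"unknown tools selected: {unknown}")
    return names
```

Only after this validation passes would the engine hand the call to an MCP server or function.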
🚀 What is new about llm_to_mcp_integration_engine?
The llm_to_mcp_integration_engine distinguishes itself by effectively handling unstructured outputs and incorporating dynamic parsing and retry mechanisms (RETRY_PROMPT, CHANGE_LLM_IN_RETRY), offering a more flexible and resilient solution for LLM-tool integration.
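The retry flow could look roughly like the following. This is a hedged sketch: the `call_llm` parameter, the model names, and the validation hook are illustrative assumptions, not the engine's real API.

```python
def run_with_retries(call_llm, prompt, validate,
                     llms=("cheap-model", "strong-model"), max_retries=3):
    """Sketch of a retry loop: re-prompt on validation failure
    (RETRY_PROMPT) and escalate to another LLM (CHANGE_LLM_IN_RETRY)."""
    llm_index = 0
    for attempt in range(max_retries):
        response = call_llm(llms[llm_index], prompt)  # caller-supplied LLM call
        try:
            return validate(response)  # e.g. parse and check tool selection
        except ValueError:
            # RETRY_PROMPT: ask again with a corrective instruction
            prompt = prompt + "\nPlease answer with a valid SELECTED_TOOLS line."
            # CHANGE_LLM_IN_RETRY: switch to the next (stronger) model
            llm_index = min(llm_index + 1, len(llms) - 1)
    raise RuntimeError("LLM failed validation after all retries")
```

Starting with a cheap model and escalating only on failure is also how the tiered cost optimization mentioned later could work.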
❓ Why do we need llm_to_mcp_integration_engine?
- LLMs often misformat or misorder tool calls, leading to failures.
- Tool execution must be validated before triggering any MCP server or function.
- This protocol brings clarity, control, and reliability to LLM-tool integrations.
❌ Is there an existing communication layer?
No.
This is a novel invention. We introduced the LLM2MCP protocol, a first-of-its-kind communication framework that connects LLMs to MCP servers or functions in a structured, validated, and controllable way.
What makes it new:
- Dual Registration: Tools/functions are listed in both the LLM prompt and the engine, ensuring alignment and consistency.
- Non-JSON Tolerance: Even when the LLM response is not fully JSON, the engine can still extract valid tool selections using regex and logic-based checks.
- Retry Framework: If validation fails (missing tools, incorrect formats, etc.), the engine can retry with a new prompt or even switch to a different LLM.
- Fine-Grained Failure Detection: Developers can diagnose exactly where the LLM fails — whether in selecting the right tool, formatting parameters, or transitioning to tool execution.
- Execution Safety: The engine ensures no tool or MCP server is called unless the response is valid and verified.
This bundling of validation, fallbacks, control logic, and robustness into a single integration engine is what makes it a new invention.
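To illustrate the Non-JSON Tolerance point above, here is one way valid tool selections might be recovered from a messy response. The helper name and the fallback strategy are assumptions for the sketch:

```python
import json
import re

def extract_tool_call(llm_response):
    """Recover a tool call even when the response is not pure JSON.

    First try strict JSON; if that fails, fall back to a regex that
    pulls the first {...} object out of the surrounding chatter.
    """
    try:
        return json.loads(llm_response)
    except json.JSONDecodeError:
        pass
    match = re.search(r"\{.*\}", llm_response, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    return None  # nothing recoverable; leave retry handling to the engine
```

Returning `None` rather than raising lets the surrounding retry framework decide whether to re-prompt or switch LLMs.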
⚙️ How to Use It
📦 Install via pip
pip install llm_to_mcp_integration_engine
✅ Default Usage
from llm_to_mcp_integration_engine import llm_to_mcp_integration_default
llm_to_mcp_integration_default(
    tools_list=my_tools_list,
    llm_respons=response_from_llm,
    json_validation=True
)
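The README does not document the schema of `my_tools_list`; one plausible shape, shown purely as an assumption, is a list of dicts mirroring the tool definitions registered in the LLM prompt (Dual Registration):

```python
# Hypothetical shape for my_tools_list; the engine's real schema may differ.
my_tools_list = [
    {
        "name": "search",
        "description": "Search the web for a query.",
        "parameters": {"q": "string"},
    },
    {
        "name": "calculator",
        "description": "Evaluate an arithmetic expression.",
        "parameters": {"expression": "string"},
    },
]
```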
🔧 Advanced Usage
from llm_to_mcp_integration_engine import llm_to_mcp_integration_advance
llm_to_mcp_integration_advance(
    tools_list=my_tools_list,
    llm_respons=response_from_llm,
    json_validation=True,
    no_tools_selected=True,
    multi_stage_tools_select=True
)
🧠 Custom Usage (e.g., for agentic HTML/CSS tools)
from llm_to_mcp_integration_engine import llm_to_mcp_integration_custom
llm_to_mcp_integration_custom(
    tools_list=my_tools_list,
    llm_respons=response_from_llm,
    json_validation=True
)
✅ Benefits of Using llm_to_mcp_integration_engine
- Flexible Response Handling
- Reliable Tool Execution
- Reliable Programmatic Validation
- Improved Tool Chaining
- Synergy with Reasoning Techniques (e.g., Chain-of-Thought)
- Handles "No Tools Needed" Scenarios
- Error Detection and Retry Mechanism
- Failure Diagnostics & Monitoring
- Cost Optimization via Tiered LLM Usage
- Standardization of LLM-to-Tool Interfaces
💡 Also includes dynamic LLM switching on failure for enhanced robustness and cost-efficiency.
📜 License
You are free to use this engine for personal and research purposes.
However, you are not allowed to modify or distribute it without explicit permission from the author.