Turbular

FastAPI · Python · License

Turbular is an open-source Model Context Protocol (MCP) server that enables seamless database connectivity for Language
Models (LLMs). It provides a unified API interface to interact with various database types, making it perfect for AI
applications that need to work with multiple data sources.

✨ Features
  • 🔌 Multi-Database Support: Connect to various database types through a single API
  • 🔄 Schema Normalization: Automatically normalize database schemas to correct naming conventions for LLM
    compatibility
  • 🔒 Secure Connections: Support for SSL and various authentication methods
  • 🚀 High Performance: Optimizes your LLM-generated queries
  • 📝 Query Transformation: Lets the LLM generate queries against the normalized schema and transforms them
    back into their original, unnormalized form
  • 🐳 Docker Support: Easy deployment with Docker and Docker Compose
  • 🔧 Easy to Extend: New database providers can be added by extending the BaseDBConnector interface (see the
    sketch below)
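For example, a new provider might look like the following sketch. This is illustrative only: the import path and the method names (connect, get_schema, execute_query) are assumptions, not the actual BaseDBConnector contract; consult the source for the real interface.

# Illustrative sketch of a new provider. The import path and method
# names below are assumptions -- check the repository for the actual
# BaseDBConnector contract.
from app.connectors.base import BaseDBConnector  # hypothetical path


class DuckDBConnector(BaseDBConnector):
    """Hypothetical connector showing the extension pattern."""

    def connect(self, connection_info: dict) -> None:
        ...  # open a connection using the provider's driver

    def get_schema(self) -> dict:
        ...  # return tables/columns in the server's schema format

    def execute_query(self, query: str, max_rows: int) -> list:
        ...  # run the query and return up to max_rows rows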
🗄️ Supported Databases
  • PostgreSQL
  • MySQL
  • SQLite
  • BigQuery
  • Oracle
  • MS SQL
  • Redshift
🚀 Quick Start
Using Docker (Recommended)
  1. Clone the repository:

    git clone https://github.com/raeudigerRaeffi/turbular.git
    cd turbular
    
  2. Start the development environment:

    docker-compose -f docker-compose.dev.yml up --build
    
  3. Test the connection:

    ./scripts/test_connection.py
    
Manual Installation
  1. Install Python 3.11 or higher

  2. Install dependencies:

    pip install -r requirements.txt
    
  3. Run the server:

    uvicorn app.main:app --reload
    
🔌 API Reference
Database Operations
Get Database Schema
POST /get_schema

Retrieve the schema of a connected database for your LLM agent.

Parameters:

  • db_info: Database connection arguments
  • return_normalize_schema (optional): Return the schema in an LLM-friendly normalized format
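A minimal request sketch, assuming the server runs locally on the default uvicorn port (8000) and that the endpoint accepts a JSON body with these field names; verify the exact payload schema against the interactive docs.

import requests

BASE_URL = "http://localhost:8000"  # assumption: default local uvicorn address

connection_info = {
    "database_type": "PostgreSQL",
    "username": "user",
    "password": "password",
    "host": "localhost",
    "port": 5432,
    "database_name": "mydb",
    "ssl": False,
}

# Request the schema in normalized, LLM-friendly form.
response = requests.post(
    f"{BASE_URL}/get_schema",
    json={"db_info": connection_info, "return_normalize_schema": True},
)
print(response.json())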
Execute Query
POST /execute_query

Optimizes the query and then executes it on the connected database.

Parameters:

  • db_info: Database connection arguments
  • query: SQL query string
  • normalized_query: Boolean indicating if query is normalized
  • max_rows: Maximum number of rows to return
  • autocommit: Boolean for autocommit mode
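A corresponding request sketch, under the same assumptions as the /get_schema example (local server, JSON body matching the parameter names above); the table name is hypothetical.

import requests

BASE_URL = "http://localhost:8000"  # assumption: default local uvicorn address

connection_info = {"database_type": "PostgreSQL", "username": "user",
                   "password": "password", "host": "localhost", "port": 5432,
                   "database_name": "mydb", "ssl": False}

payload = {
    "db_info": connection_info,
    "query": "SELECT * FROM users;",  # hypothetical table
    "normalized_query": True,         # query was written against the normalized schema
    "max_rows": 100,
    "autocommit": False,
}
response = requests.post(f"{BASE_URL}/execute_query", json=payload)
print(response.json())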
File Management
Upload BigQuery Key
POST /upload-bigquery-key

Upload a BigQuery service account key file.

Parameters:

  • project_id: BigQuery project ID
  • key_file: JSON key file
Upload SQLite Database
POST /upload-sqlite-file

Upload a SQLite database file.

Parameters:

  • database_name: Name to identify the database
  • db_file: SQLite database file (.db or .sqlite)
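Both upload endpoints take a file, which suggests a multipart form upload; the sketch below assumes the documented parameter names are the form field names, which should be verified against the interactive docs.

import requests

BASE_URL = "http://localhost:8000"  # assumption: default local uvicorn address

# Upload a BigQuery service account key.
with open("/path/to/credentials.json", "rb") as f:
    requests.post(
        f"{BASE_URL}/upload-bigquery-key",
        data={"project_id": "my-project"},
        files={"key_file": f},
    )

# Upload a SQLite database file the same way.
with open("my_database.db", "rb") as f:
    requests.post(
        f"{BASE_URL}/upload-sqlite-file",
        data={"database_name": "my_database"},
        files={"db_file": f},
    )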
Utility Endpoints
Health Check
GET /health

Verify if the API is running.

List Supported Databases
GET /supported-databases

Get a list of all supported database types.
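A quick smoke test from Python, assuming the default local address:

import requests

BASE_URL = "http://localhost:8000"  # assumption: default local uvicorn address

print(requests.get(f"{BASE_URL}/health").json())               # liveness check
print(requests.get(f"{BASE_URL}/supported-databases").json())  # available database types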

🔧 Development Setup
  1. Fork and clone the repository

  2. Create a development environment:

    docker-compose -f docker-compose.dev.yml up --build
    
  3. The development server includes:

    • FastAPI server with hot reload
    • PostgreSQL test database
    • Pre-configured test data
  4. Access the API documentation: FastAPI serves interactive docs, typically at http://localhost:8000/docs

🤝 Contributing

We welcome contributions! Here's how you can help:

  1. Check out our contribution guidelines
  2. Look for open issues
  3. Submit pull requests with improvements
  4. Help with documentation
  5. Share your feedback
Development Guidelines
  • Follow PEP 8 style guide
  • Write tests for new features
  • Update documentation as needed
  • Use meaningful commit messages
Roadmap
  1. Add more testing, formatting, and commit hooks
  2. Add SSH support for database connections
  3. Add APIs as data sources using Steampipe
  4. Enable local schema saving for databases to which the server has already connected
  5. Add more data sources (Snowflake, MongoDB, Excel, etc.)
  6. Add authentication protection to routes
🧪 Testing

Run the test suite:

pytest

For development tests with the included PostgreSQL:

./scripts/test_connection.py
📚 Documentation
📝 Connection Examples
PostgreSQL
connection_info = {
    "database_type": "PostgreSQL",
    "username": "user",
    "password": "password",
    "host": "localhost",
    "port": 5432,
    "database_name": "mydb",
    "ssl": False
}
BigQuery
connection_info = {
    "database_type": "BigQuery",
    "path_cred": "/path/to/credentials.json",
    "project_id": "my-project",
    "dataset_id": "my_dataset"
}
SQLite
connection_info = {
    "type": "SQLite",
    "database_name": "my_database"
}
🙏 Acknowledgments
📞 Support

Made with ❤️ by the Turbular Team