blazing-art-mcp

blazing-art-mcp is an MCP (Model Context Protocol) server written in Rust. It serves structured memory to Large Language Models from an in-memory Adaptive Radix Tree (ART), offering microsecond-level exact-key lookups and prefix searches over JSON-RPC.

Blazing ART MCP Server

Rust · MCP · Docker · Kubernetes

Blazing-fast Adaptive Radix Tree (ART) powered MCP server delivering microsecond-latency structured memory access for Large Language Models. Built with Rust for zero V8 overhead and predictable performance.

This server implements the Model Context Protocol so any MCP-compatible LLM can query structured memory over JSON-RPC.

🚀 Performance Characteristics
Dataset Size | Lookup P95 | Prefix Scan (100 matches) | Memory Usage
100k keys    | 8 µs       | 35 µs                     | 12 MB
1M keys      | 11 µs      | 60 µs                     | 85 MB

Note: These numbers are dominated by JSON serialization rather than ART traversal, which underscores the efficiency of the core tree operations.

🏗️ Architecture
┌─────────────────┐    MCP Protocol     ┌──────────────────┐
│   LLM Host      │◄──────────────────►│  Memory Server   │
│ (Claude/etc.)   │   JSON-RPC 2.0     │   (Rust + ART)   │
└─────────────────┘                     └──────────────────┘
                                               │
                                               ▼
                                        ┌──────────────┐
                                        │ Adaptive     │
                                        │ Radix Tree   │
                                        │ In-Memory    │
                                        └──────────────┘
Key Technologies
  • 🦀 Rust: Memory-safe, zero-cost abstractions, predictable performance
  • 🌳 Adaptive Radix Tree: O(k) operations, 8-52 bytes per key, cache-friendly (see the sketch after this list)
  • 🔌 Model Context Protocol: Standardized LLM integration via JSON-RPC 2.0
  • 🐳 Docker: Static-linked, distroless containers (<10MB)
  • ☸️ Kubernetes: Production-ready with autoscaling, monitoring, security
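
To make the access patterns above concrete, here is a minimal, self-contained sketch of the two operations the server exposes. It uses std::collections::BTreeMap purely as a stand-in for the ART; the real server stores entries in an adaptive radix tree, but the interface shape (exact-key lookup plus bounded prefix scan) is the same idea.

use std::collections::BTreeMap;

// Illustration only: a BTreeMap stand-in for the ART-backed memory store.
struct MemoryStore {
    entries: BTreeMap<String, String>, // key -> JSON payload
}

impl MemoryStore {
    // Exact-key lookup, analogous to the lookupEntity tool.
    fn lookup(&self, key: &str) -> Option<&String> {
        self.entries.get(key)
    }

    // Bounded prefix scan, analogous to the findEvents tool.
    fn scan_prefix(&self, prefix: &str, limit: usize) -> Vec<(&String, &String)> {
        self.entries
            .range(prefix.to_string()..)
            .take_while(|(k, _)| k.starts_with(prefix))
            .take(limit)
            .collect()
    }
}

fn main() {
    let mut entries = BTreeMap::new();
    entries.insert("2023-11-01:meeting".to_string(), r#"{"category":"work"}"#.to_string());
    entries.insert("2023-11-03:review".to_string(), r#"{"category":"work"}"#.to_string());
    let store = MemoryStore { entries };

    assert!(store.lookup("2023-11-01:meeting").is_some());
    println!("{:?}", store.scan_prefix("2023-11", 64));
}
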
🛠 Prerequisites

Install Rust 1.76+ using rustup if you don't already have the toolchain:

curl https://sh.rustup.rs -sSf | sh -s -- -y
source $HOME/.cargo/env

Then you can build and test the project with cargo build and cargo test.

🎯 Quick Start
Local Development
# Clone and build
git clone https://github.com/JohnJBoren/blazing-art-mcp.git
cd blazing-art-mcp
cargo build --release

# Run with sample data
./target/release/blazing_art_mcp \
  --entities examples/entities.json \
  --events examples/events.json

# WebSocket mode for remote access
./target/release/blazing_art_mcp \
  --ws 0.0.0.0:4000 \
  --entities examples/entities.json
Docker Deployment
# Build optimized container
docker build -t blazing-art-mcp:latest .

# Run with STDIO (sidecar mode)
docker run -i --rm blazing-art-mcp:latest

# Run with WebSocket (service mode)
docker run -p 4000:4000 -p 3000:3000 blazing-art-mcp:latest \
  --ws 0.0.0.0:4000 --health-port 3000
Kubernetes Production
# Deploy to Kubernetes
kubectl apply -f k8s/deployment.yaml

# Check status
kubectl get pods -n mcp-memory
kubectl logs -f deployment/mcp-memory -n mcp-memory

# Health check
kubectl port-forward svc/mcp-memory-service 3000:3000 -n mcp-memory
curl http://localhost:3000/health/ready
🔧 Configuration
Command Line Options
blazing_art_mcp [OPTIONS]

Options:
  --entities <FILE>      JSON file with entity data to preload
  --events <FILE>        JSON file with event data to preload  
  --ws <ADDRESS>         WebSocket address (e.g., 0.0.0.0:4000)
  --event-limit <NUM>    Max events returned by prefix search [default: 64]
  --health-port <PORT>   Health check port [default: 3000]
  --telemetry           Enable OpenTelemetry tracing
  --health-check        Run health check and exit (for containers)
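
For orientation, here is a hedged sketch of how these options could be declared with clap's derive API; the flag names mirror the help text above, but this is not the repository's actual code.

use std::path::PathBuf;
use clap::Parser; // assumes the clap crate with the "derive" feature

// Hypothetical mirror of the CLI shown above, not the repository's own struct.
#[derive(Parser, Debug)]
struct Args {
    /// JSON file with entity data to preload
    #[arg(long)]
    entities: Option<PathBuf>,

    /// JSON file with event data to preload
    #[arg(long)]
    events: Option<PathBuf>,

    /// WebSocket address (e.g., 0.0.0.0:4000)
    #[arg(long)]
    ws: Option<String>,

    /// Max events returned by prefix search
    #[arg(long, default_value_t = 64)]
    event_limit: usize,

    /// Health check port
    #[arg(long, default_value_t = 3000)]
    health_port: u16,

    /// Enable OpenTelemetry tracing
    #[arg(long)]
    telemetry: bool,

    /// Run health check and exit (for containers)
    #[arg(long)]
    health_check: bool,
}

fn main() {
    let args = Args::parse();
    println!("{args:?}");
}
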
Environment Variables
# Logging
RUST_LOG=info                                    # Log level
OTEL_EXPORTER_OTLP_ENDPOINT=http://jaeger:4317  # Telemetry endpoint

# Performance tuning (set automatically)
MIMALLOC_LARGE_OS_PAGES=1                        # Use huge pages if available
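
RUST_LOG is the standard filter variable for the tracing ecosystem. Assuming the server initializes logging with tracing-subscriber (an assumption, not confirmed by this README), the setup typically looks something like this:

use tracing_subscriber::EnvFilter;

// Hypothetical logging setup; requires tracing-subscriber with the
// "env-filter" and "json" features enabled.
fn init_logging() {
    // Respect RUST_LOG (e.g. RUST_LOG=info), falling back to "info" if unset.
    let filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("info"));

    tracing_subscriber::fmt()
        .with_env_filter(filter)
        .json() // structured JSON logs, as listed under Observability
        .init();
}
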
📊 MCP Protocol Interface

The server exposes two primary tools via MCP:

1. Entity Lookup
{
  "tool": "lookupEntity",
  "arguments": {
    "name": "Albert Einstein"
  }
}

Response:

{
  "name": "Albert Einstein",
  "summary": "Theoretical physicist, Nobel Prize 1921...",
  "born": "1879",
  "tags": ["physicist", "relativity", "nobel"]
}
2. Event Search
{
  "tool": "findEvents", 
  "arguments": {
    "prefix": "2023-11"
  }
}

Response:

[
  {
    "id": "2023-11-01:meeting",
    "timestamp": "2023-11-01T10:00:00Z",
    "description": "Team standup meeting",
    "category": "work"
  }
]
🏭 Production Features
Security Hardening
  • Non-root containers with distroless base images
  • Read-only filesystems and dropped capabilities
  • SBOM generation for supply chain security
  • Vulnerability scanning with Trivy
  • Network policies for micro-segmentation
Observability
  • Health checks (/health/live, /health/ready); see the sketch after this list
  • Prometheus metrics (/metrics)
  • Structured logging with JSON output
  • OpenTelemetry tracing for distributed systems
  • Graceful shutdown with statistics logging
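
The README does not say which HTTP stack serves these endpoints, so the following is only a sketch of the idea, using axum and tokio as assumptions: expose /health/live and /health/ready on the health port (3000 by default).

use axum::{routing::get, Router};

// Hypothetical health endpoint wiring (axum 0.7 + tokio assumed).
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Liveness: the process is running. Readiness: data is loaded and queries can be served.
    let app = Router::new()
        .route("/health/live", get(|| async { "ok" }))
        .route("/health/ready", get(|| async { "ok" }));

    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await?;
    axum::serve(listener, app).await?;
    Ok(())
}
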
High Availability
  • Horizontal Pod Autoscaling (2-10 replicas)
  • Pod Disruption Budgets for rolling updates
  • Anti-affinity rules for zone distribution
  • Resource limits and quality of service
🔬 Performance Optimizations
Memory Allocator
// 2-6x performance improvement with custom allocator
#[global_allocator]
static GLOBAL: mimalloc::MiMalloc = mimalloc::MiMalloc;
Zero-Copy Serialization
// 10-50x faster than standard JSON with rkyv
use rkyv::{Archive, Serialize, Deserialize};
Cache-Aligned Data Structures
// Optimize for CPU cache lines
#[repr(align(64))]
struct AlignedMemory { ... }
📈 Benchmarking
# Run performance benchmarks
cargo bench

# Memory profiling
cargo run --release -- --entities large_dataset.json &
ps aux | grep blazing_art_mcp  # Check RSS memory

# Load testing with WebSocket
wrk -t12 -c400 -d30s http://localhost:4000/
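
cargo bench assumes a benchmark harness is present. The snippet below is a hedged sketch of what a criterion benchmark for exact-key lookup could look like (placed under benches/ with criterion as a dev-dependency); the repository's real benches may be organized differently and exercise the ART directly.

use std::collections::BTreeMap;
use std::hint::black_box;

use criterion::{criterion_group, criterion_main, Criterion};

// Stand-in data set; real benchmarks would exercise the ART-backed store.
fn build_store(n: usize) -> BTreeMap<String, String> {
    (0..n).map(|i| (format!("key-{i:06}"), format!("value-{i}"))).collect()
}

fn bench_lookup(c: &mut Criterion) {
    let store = build_store(100_000);
    c.bench_function("lookup_100k", |b| {
        b.iter(|| store.get(black_box("key-054321")))
    });
}

criterion_group!(benches, bench_lookup);
criterion_main!(benches);
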
🛠️ Development
Building from Source
# Development build
cargo build

# Optimized release build
cargo build --release

# Statically linked musl build (used for the hardened, distroless container)
cargo build --release --target x86_64-unknown-linux-musl
Testing
# Unit tests
cargo test

# Integration tests
cargo test --test integration

# Clippy linting
cargo clippy -- -D warnings

# Security audit
cargo audit
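
As an illustration of what an integration test might look like (hypothetical, relying only on the documented CLI flags and health port, and assuming a prior cargo build --release plus the bundled examples/ data):

use std::net::TcpStream;
use std::process::Command;
use std::thread::sleep;
use std::time::Duration;

// Hypothetical smoke test in tests/integration.rs: start the server with
// sample data, check that the health port accepts connections, then stop it.
#[test]
fn server_opens_health_port() {
    let binary = format!("{}/target/release/blazing_art_mcp", env!("CARGO_MANIFEST_DIR"));

    let mut child = Command::new(binary)
        .args(["--entities", "examples/entities.json", "--health-port", "3000"])
        .spawn()
        .expect("failed to launch blazing_art_mcp");

    // Give the server a moment to load data and bind the health port.
    sleep(Duration::from_secs(2));
    let probe = TcpStream::connect("127.0.0.1:3000");

    child.kill().expect("failed to stop the server");
    assert!(probe.is_ok(), "health port was not listening");
}
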
Container Development
# Development container with hot reload
docker build --target development -t mcp-memory:dev .
docker run -v $(pwd):/app mcp-memory:dev

# Security scanning
docker build --target security-scan -t mcp-memory:scan .
docker run --rm mcp-memory:scan cat /tmp/trivy-report.sarif
🔄 Data Management
Loading Data

Entities (entities.json):

[
  {
    "name": "Claude Shannon",
    "summary": "Father of information theory...",
    "born": "1916",
    "tags": ["mathematician", "information-theory"]
  }
]

Events (events.json):

[
  {
    "id": "2024-01-15:discovery",
    "timestamp": "2024-01-15T14:30:00Z", 
    "description": "Major breakthrough in quantum computing",
    "category": "science"
  }
]
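
These records map directly onto plain serde types. A minimal sketch (field names taken from the examples above; the repository's own structs may differ):

use serde::{Deserialize, Serialize};

// Entity record, matching the shape of entities.json above.
#[derive(Debug, Serialize, Deserialize)]
struct Entity {
    name: String,
    summary: String,
    born: String,
    tags: Vec<String>,
}

// Event record, matching the shape of events.json above.
#[derive(Debug, Serialize, Deserialize)]
struct Event {
    id: String,
    timestamp: String,
    description: String,
    category: String,
}

// Load a preload file such as the one passed to --entities.
fn load_entities(path: &str) -> Result<Vec<Entity>, Box<dyn std::error::Error>> {
    let data = std::fs::read_to_string(path)?;
    Ok(serde_json::from_str(&data)?)
}
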
Persistence Strategies
  1. Snapshot Loading: Mount JSON files for initial data load
  2. Runtime Updates: Use MCP tools for dynamic mutations
  3. Graceful Persistence: Flush to disk on shutdown signals (sketched below)
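
A hedged sketch of the third strategy, assuming a tokio runtime and serde_json (assumptions, not confirmed by this README): wait for Ctrl-C and write the current entries to a JSON snapshot before the process exits.

use std::collections::BTreeMap;

// Illustration: flush in-memory entries to a JSON snapshot on shutdown.
// BTreeMap stands in for the ART; requires tokio's "signal" and "fs" features.
async fn run_until_shutdown(
    store: BTreeMap<String, serde_json::Value>,
) -> Result<(), Box<dyn std::error::Error>> {
    // Block until SIGINT / Ctrl-C is received.
    tokio::signal::ctrl_c().await?;

    // Persist a snapshot before exiting (hypothetical file name).
    let snapshot = serde_json::to_vec_pretty(&store)?;
    tokio::fs::write("entities.snapshot.json", snapshot).await?;
    Ok(())
}
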
🐛 Troubleshooting
Common Issues

Container fails health check:

# Check health endpoint directly
docker exec -it <container> /blazing_art_mcp --health-check

# Verify port binding
docker ps | grep mcp-memory

High memory usage:

# Check ART statistics
curl http://localhost:3000/metrics

# Verify data size vs memory usage ratio

Performance degradation:

# Enable debug logging
RUST_LOG=debug ./blazing_art_mcp

# Check for JSON serialization bottlenecks in traces
📝 License

MIT License - see LICENSE for details.

🤝 Contributing
  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Commit changes: git commit -m 'Add amazing feature'
  4. Push to branch: git push origin feature/amazing-feature
  5. Open a Pull Request
🙏 Acknowledgments

Built with ❤️ for the future of AI-powered applications