Note: I'm having a hard time getting this whole MCP server to work with Cursor. I'm not sure why. If anyone has any suggestions, please let me know.
An implementation inspired by Google Research's paper "Generative AI for Programming: A Common Task Framework". This server provides a neural memory system that can learn and predict sequences while maintaining state through a memory vector, following principles outlined in the research for improved code generation and understanding.
Research Background
This implementation draws from the concepts presented in the Google Research paper (Muennighoff et al., 2024) which introduces a framework for evaluating and improving code generation models. The Titan Memory Server implements key concepts from the paper:
- Memory-augmented sequence learning
- Surprise metric for novelty detection
- Manifold optimization for stable learning
- State maintenance through memory vectors
These features align with the paper's goals of improving code understanding and generation through better memory and state management.
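As an illustration of the surprise-metric idea, here is a minimal sketch (not the server's actual implementation) that scores novelty as a squashed prediction error between the predicted and observed next-state vectors:

```typescript
// Hypothetical surprise metric: Euclidean prediction error squashed into [0, 1).
// 0 means the prediction matched exactly; values near 1 mean high novelty.
function surprise(predicted: number[], actual: number[]): number {
  const err = Math.sqrt(
    predicted.reduce((sum, p, i) => sum + (p - actual[i]) ** 2, 0)
  );
  return err / (1 + err);
}
```

With this shaping, a perfect prediction yields 0 and increasingly wrong predictions approach 1, matching the 0-1 surprise values referenced later in this document.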
Features
- Neural memory model with configurable dimensions
- Sequence learning and prediction
- Surprise metric calculation
- Model persistence (save/load)
- Memory state management
- Full MCP tool integration
Installation
Installing via Smithery
To install Titan Memory Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @henryhawke/mcp-titan --client claude
Manual Installation
# Install dependencies
npm install
# Build the project
npm run build
# Run tests
npm test
Available MCP Tools
1. init_model
Initialize the Titan Memory model with custom configuration.
{
inputDim?: number; // Input dimension (default: 64)
outputDim?: number; // Output/Memory dimension (default: 64)
}
2. train_step
Perform a single training step with current and next state vectors.
{
x_t: number[]; // Current state vector
x_next: number[]; // Next state vector
}
3. forward_pass
Run a forward pass through the model with an input vector.
{
x: number[]; // Input vector
}
4. save_model
Save the model to a specified path.
{
path: string; // Path to save the model
}
5. load_model
Load the model from a specified path.
{
path: string; // Path to load the model from
}
6. get_status
Get current model status and configuration.
{
} // No parameters required
7. train_sequence
Train the model on a sequence of vectors.
{
sequence: number[][]; // Array of vectors to train on
}
Example Usage
// Initialize model
await callTool("init_model", { inputDim: 64, outputDim: 64 });
// Train on a sequence
const sequence = [
[1, 0, 0 /* ... */],
[0, 1, 0 /* ... */],
[0, 0, 1 /* ... */],
];
await callTool("train_sequence", { sequence });
// Run forward pass
const result = await callTool("forward_pass", {
x: [1, 0, 0 /* ... */],
});
Technical Details
- Built with TensorFlow.js for efficient tensor operations
- Uses manifold optimization for stable learning
- Implements surprise metric for novelty detection
- Memory management with proper tensor cleanup
- Type-safe implementation with TypeScript
- Comprehensive error handling
Testing
The project includes comprehensive tests covering:
- Model initialization and configuration
- Training and forward pass operations
- Memory state management
- Model persistence
- Edge cases and error handling
- Tensor cleanup and memory management
Run tests with:
npm test
Implementation Notes
- All tensor operations are wrapped in tf.tidy() for proper memory management
- Implements proper error handling with detailed error messages
- Uses type-safe MCP tool definitions
- Maintains memory state between operations
- Handles floating-point precision issues with epsilon tolerance
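The epsilon-tolerance point can be illustrated with a small sketch; the constant 1e-6 and the helper name are assumptions for illustration, not the values used in this codebase:

```typescript
// Compare floats with an absolute tolerance instead of strict equality.
const EPSILON = 1e-6; // hypothetical tolerance
function nearlyEqual(a: number, b: number): boolean {
  return Math.abs(a - b) < EPSILON;
}
```

This avoids spurious failures from accumulated rounding error, e.g. 0.1 + 0.2 !== 0.3 in IEEE 754 arithmetic.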
License
MIT License - feel free to use and modify as needed!
This is a fixed version of the implementation originally done by
https://github.com/synthience/mcp-titan-cognitive-memory/
Titan Memory MCP Server
A Model Context Protocol (MCP) server implementation that provides automatic memory-augmented learning capabilities for Cursor. This server maintains a persistent memory state that evolves based on interactions, enabling contextual awareness and learning over time.
Features
- Automatic memory management and persistence
- Real-time memory updates based on input
- Memory state analysis and insights
- Seamless integration with Cursor via MCP
- Dynamic port allocation for HTTP endpoints
- Automatic state saving every 5 minutes
Installation
# Install from npm
npm install @henryhawke/mcp-titan
# Or clone and install locally
git clone https://github.com/henryhawke/mcp-titan.git
cd mcp-titan
npm install
Usage
As a Cursor MCP Server
- Add the following to your Claude Desktop config (~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"titan-memory": {
"command": "node",
"args": ["/path/to/mcp-titan/build/index.js"],
"env": {
"NODE_ENV": "production"
}
}
}
}
- Restart Claude Desktop
- Look for the hammer icon to confirm the server is connected
- Use the available tools:
  - process_input: Process text and update memory
  - get_memory_state: Retrieve current memory insights
As a Standalone Server
# Build and start the server
npm run build && npm start
# The server will run on stdio for MCP and start an HTTP server on a dynamic port
Development
Prerequisites
- Node.js >= 18.0.0
- npm >= 7.0.0
Setup
# Install dependencies
npm install
# Build the project
npm run build
# Run tests
npm test
Project Structure
src/
├── __tests__/ # Test files
├── index.ts   # Main server implementation
├── model.ts   # TitanMemory model implementation
└── types.ts   # TypeScript type definitions
Available Scripts
- npm run build: Build the project
- npm start: Start the server
- npm test: Run tests
- npm run clean: Clean build artifacts
API Reference
Tools
process_input
Process text input and update memory state.
interface ProcessInputParams {
text: string;
context?: string;
}
get_memory_state
Retrieve current memory state and statistics.
interface MemoryState {
memoryStats: {
mean: number;
std: number;
};
memorySize: number;
status: string;
}
HTTP Endpoints
- GET /: Server information and status
- POST /mcp: MCP protocol endpoint
Contributing
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
Development Guidelines
- Follow the existing code style
- Add tests for new features
- Update documentation as needed
- Ensure all tests pass before submitting PR
- Keep PRs focused and atomic
License
This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments
- Built with Model Context Protocol
- Uses TensorFlow.js for memory operations
Quick Start
Installation
You can run the Titan Memory MCP server directly using npx without installing it globally:
npx -y @smithery/cli@latest run @henryhawke/mcp-titan --config "{}"
Using with Cursor IDE
- Open Cursor IDE
- Create or edit your Cursor MCP configuration file:
  - macOS/Linux: ~/Library/Application Support/Cursor/cursor_config.json
  - Windows: %APPDATA%\Cursor\cursor_config.json
Add the following configuration:
{
"mcpServers": {
"titan": {
"command": "npx",
"args": [
"-y",
"@smithery/cli@latest",
"run",
"@henryhawke/mcp-titan",
"--config",
"{}"
]
}
}
}
- Restart Cursor IDE
The Titan Memory server will now be available in your Cursor IDE. You can verify it's working by looking for the hammer icon in the bottom right corner of your editor.
Configuration Options
You can customize the server behavior by passing configuration options in the JSON config string:
{
"port": 3000 // Optional: HTTP port for REST API (default: 0 - disabled)
// Additional configuration options can be added here
}
Example with custom port:
npx -y @smithery/cli@latest run @henryhawke/mcp-titan --config '{"port": 3000}'
Available Tools
The Titan Memory server provides the following MCP tools:
- init_model: Initialize the memory model with custom dimensions
- train_step: Train the model on code patterns
- forward_pass: Get predictions for next likely code patterns
- get_memory_state: Query the current memory state and statistics
LLM Integration Guide
When using the Titan Memory server with an LLM (like Claude), include the following information in your prompt or system context to help the LLM effectively use the memory tools:
Memory System Overview
The Titan Memory server implements a three-tier memory system:
- Short-term memory: For immediate context and recent patterns
- Long-term memory: For persistent patterns and learned behaviors
- Meta memory: For high-level abstractions and relationships
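One hypothetical way to model the three tiers in TypeScript (the server's internal representation may differ; the interface and helper below are purely illustrative):

```typescript
// Hypothetical shape of the three-tier memory state described above.
interface TieredMemory {
  shortTerm: number[]; // immediate context and recent patterns
  longTerm: number[]; // persistent patterns and learned behaviors
  meta: number[]; // high-level abstractions and relationships
}

// Create an empty state where every tier has the same dimension.
const emptyMemory = (dim: number): TieredMemory => ({
  shortTerm: new Array(dim).fill(0),
  longTerm: new Array(dim).fill(0),
  meta: new Array(dim).fill(0),
});
```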
Tool Usage Guidelines
Initialization
// Always initialize the model first
await init_model({
  inputDim: 768, // Match your embedding dimension
  outputDim: 768, // Memory state dimension
});
Training
- Use train_step when you have pairs of sequential states
- Input vectors should be normalized embeddings
- The surprise metric indicates pattern novelty
Prediction
- Use forward_pass to predict likely next states
- Compare predictions with actual outcomes
- High surprise values indicate unexpected patterns
Memory State Analysis
- Use get_memory_state to understand current context
- Monitor memory statistics for learning progress
- Use memory insights to guide responses
Example Workflow
- Initialize model at the start of a session
- For each new code or text input:
- Convert to embedding vector
- Run forward pass to get prediction
- Use prediction confidence to guide responses
- Train on actual outcome
- Check memory state for context
Best Practices
Vector Preparation
- Normalize input vectors to unit length
- Use consistent embedding dimensions
- Handle out-of-vocabulary tokens appropriately
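Normalizing to unit length can be done with a helper like the following sketch (a hypothetical utility, not part of the server's API):

```typescript
// Scale a vector to unit L2 norm; a zero vector is returned unchanged.
function normalize(v: number[]): number[] {
  const norm = Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return norm === 0 ? [...v] : v.map((x) => x / norm);
}
```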
Memory Management
- Monitor surprise metrics for anomaly detection
- Use memory state insights to maintain context
- Consider both short and long-term patterns
Error Handling
- Check if model is initialized before operations
- Handle missing or invalid vectors gracefully
- Monitor memory usage and performance
Integration Example
// 1. Initialize model
await init_model({ inputDim: 768, outputDim: 768 });
// 2. Process new input
const currentVector = embedText(currentInput);
const { predicted, surprise } = await forward_pass({ x: currentVector });
// 3. Use prediction and surprise for response
if (surprise > 0.8) {
// Handle unexpected pattern
} else {
// Use prediction for response
}
// 4. Train on actual outcome
const nextVector = embedText(actualOutcome);
await train_step({ x_t: currentVector, x_next: nextVector });
// 5. Check memory state
const { memoryStats } = await get_memory_state();
Memory Interpretation
The memory state provides several insights:
- Mean activation indicates general memory utilization
- Standard deviation shows pattern diversity
- Memory size reflects context capacity
- Surprise metrics indicate novelty detection
Use these metrics to:
- Gauge confidence in predictions
- Detect context shifts
- Identify learning progress
- Guide response generation
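The mean and standard deviation above are returned by get_memory_state, but they can also be computed directly from a raw memory vector, as in this illustrative sketch:

```typescript
// Compute mean activation and population standard deviation of a memory vector.
function memoryStats(memory: number[]): { mean: number; std: number } {
  const mean = memory.reduce((sum, x) => sum + x, 0) / memory.length;
  const variance =
    memory.reduce((sum, x) => sum + (x - mean) ** 2, 0) / memory.length;
  return { mean, std: Math.sqrt(variance) };
}
```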
LLM Prompt Template
To enable an LLM to effectively use the Titan Memory system, include the following prompt in your system context:
You have access to a Titan Memory system that provides advanced memory capabilities for maintaining context and learning patterns. This system uses a three-tier memory architecture and provides the following tools:
1. init_model: Initialize the memory model
- Required at start of session
- Parameters: {
inputDim: number (default: 768), // Must match your embedding dimension
outputDim: number (default: 768) // Size of memory state
}
- Call this FIRST before any other memory operations
2. train_step: Train on sequential patterns
- Parameters: {
x_t: number[], // Current state vector (normalized, length = inputDim)
x_next: number[] // Next state vector (normalized, length = inputDim)
}
- Use to update memory with new patterns
- Returns: {
cost: number, // Training cost
predicted: number[], // Predicted next state
surprise: number // Novelty metric (0-1)
}
3. forward_pass: Predict next likely state
- Parameters: {
x: number[] // Current state vector (normalized, length = inputDim)
}
- Use to get predictions
- Returns: {
predicted: number[], // Predicted next state
memory: number[], // Current memory state
surprise: number // Novelty metric (0-1)
}
4. get_memory_state: Query memory insights
- Parameters: {} (none required)
- Returns: {
memoryStats: {
mean: number, // Average activation
std: number // Pattern diversity
},
memorySize: number, // Memory capacity
status: string // Memory system status
}
WORKFLOW INSTRUCTIONS:
1. ALWAYS call init_model first in a new session
2. For each interaction:
- Convert input to normalized vector
- Use forward_pass to predict and get surprise metric
- If surprise > 0.8, treat as novel pattern
- Use predictions to guide your responses
- Use train_step to update memory with actual outcomes
- Periodically check memory_state for context
MEMORY INTERPRETATION:
- High surprise (>0.8) indicates unexpected patterns
- Low surprise (<0.2) indicates familiar patterns
- High mean activation (>0.5) indicates strong memory utilization
- High std (>0.3) indicates diverse pattern recognition
You should:
- Initialize memory at session start
- Monitor surprise metrics for context shifts
- Use memory state to maintain consistency
- Consider both short and long-term patterns
- Handle errors gracefully
You must NOT:
- Skip initialization
- Use non-normalized vectors
- Ignore surprise metrics
- Forget to train on outcomes
When using this prompt, the LLM will:
- Understand the complete tool set available
- Follow the correct initialization sequence
- Properly interpret memory metrics
- Maintain consistent memory state
- Handle errors appropriately
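The surprise thresholds from the prompt template can be expressed as a small helper (the 0.8 and 0.2 cutoffs come from the template above; the function itself is a hypothetical illustration):

```typescript
// Map a surprise value in [0, 1] to the categories used in the prompt template.
function classifySurprise(s: number): "novel" | "familiar" | "ambiguous" {
  if (s > 0.8) return "novel"; // unexpected pattern
  if (s < 0.2) return "familiar"; // well-known pattern
  return "ambiguous"; // in between: rely on other context
}
```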
Testing with MCP Inspector
You can test the Titan Memory server using the MCP Inspector tool:
# Install and run the inspector
npx @modelcontextprotocol/inspector node build/index.js
The inspector will be available at http://localhost:5173. You can use it to:
- Test all available tools
- View tool schemas and descriptions
- Monitor memory state
- Debug tool calls
- Verify server responses
Testing Steps
Build the project first:
npm run build
Make sure the build/index.js file is executable:
chmod +x build/index.js
Run the inspector:
npx @modelcontextprotocol/inspector node build/index.js
Open http://localhost:5173 in your browser
Test the tools in sequence:
- Initialize the model with init_model
- Train with sample data using train_step
- Test predictions with forward_pass
- Monitor memory state with get_memory_state
Troubleshooting Inspector
If you encounter issues:
- Ensure Node.js version >= 18.0.0
- Verify the build is up to date
- Check file permissions
- Monitor the terminal for error messages