mcp-streamable-http-quickstart

This repository extends the official MCP quickstart weather app so that client and server communicate over HTTP. It shows how to choose SSE or Streamable HTTP as the transport protocol and how to make LLM calls through the OpenAI API, and it explains each modification for real-world use.

MCP Quickstart Weather App Extensions

After reading the official MCP quickstart examples for the MCP server and client, do you wonder

  • How to upgrade the simple stdio-based example to an HTTP server/client for real-world use?
  • How to replace the Anthropic API with the OpenAI API, which is widely used by open-source inference servers like vllm?
Goal of This Repository
  1. Patch the official MCP quickstart weather app to use:
    • SSE or Streamable HTTP as the transport protocol between client and server
    • The OpenAI API for LLM calls
  2. Explain each modification so readers can understand these extensions
How to Run
  1. Install uv
  2. Choose a transport protocol, either sse or streamable-http
  3. Open two terminals on one host (the HTTP server address is hardcoded to localhost in this example)
  4. Term 1: run the server
    • Go to the server directory weather-server-python
    • Start the server: uv run server PROTOCOL_OF_YOUR_CHOICE
  5. Term 2: run the client
    • Go to the client directory mcp-client-python
    • Set up environment variables for the OpenAI endpoint and API key
      • export OPENAI_BASE_URL=http://xxx/v1
      • export OPENAI_API_KEY=yyy
    • Start the client: uv run client PROTOCOL_OF_YOUR_CHOICE
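Put together, the two terminal sessions might look like this (streamable-http chosen as an example; the base URL and key are placeholders as above):

```sh
# Terminal 1: start the weather server with your chosen protocol
cd weather-server-python
uv run server streamable-http   # or: uv run server sse

# Terminal 2: point the client at an OpenAI-compatible endpoint and start it
cd mcp-client-python
export OPENAI_BASE_URL=http://xxx/v1
export OPENAI_API_KEY=yyy
uv run client streamable-http   # or: uv run client sse
```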
Explanation of Modifications
Use SSE/Streamable-HTTP Instead of Stdio as the Transport Protocol
  • Server: call mcp.run(sys.argv[1]) instead of mcp.run('stdio'), where sys.argv[1] is either sse or streamable-http (see the server sketch after this list)
    • SSE protocol: the server's main endpoint is http://localhost:8000/sse
    • Streamable HTTP protocol: the server's only endpoint is http://localhost:8000/mcp
  • Client: obtain rs (read stream) and ws (write stream) from sse_client or streamablehttp_client instead of stdio_client used in the original MCP quickstart example (see the client sketch below)
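On the server side, the change amounts to passing the command-line protocol choice into FastMCP. A minimal sketch, assuming the quickstart's FastMCP weather server (its tool definitions are unchanged and omitted here):

```python
# Minimal server-side sketch of the transport switch
import sys

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

# ... get_alerts / get_forecast tool definitions from the quickstart ...

if __name__ == "__main__":
    # sys.argv[1] is "sse" or "streamable-http"; the original quickstart
    # hardcodes mcp.run(transport='stdio') instead.
    mcp.run(sys.argv[1])
```

On the client side, a minimal connection sketch, assuming the localhost endpoints above; the quickstart's chat loop and cleanup are omitted. Note that streamablehttp_client additionally yields a session-id getter, so only the first two yielded streams are used:

```python
# Minimal client-side sketch of connecting over SSE or Streamable HTTP
import asyncio
import sys

from mcp import ClientSession
from mcp.client.sse import sse_client
from mcp.client.streamable_http import streamablehttp_client


async def run(protocol: str) -> None:
    if protocol == "sse":
        client = sse_client("http://localhost:8000/sse")
    else:  # "streamable-http"
        client = streamablehttp_client("http://localhost:8000/mcp")

    async with client as streams:
        # sse_client yields (rs, ws); streamablehttp_client also yields a
        # session-id getter, so take the first two entries in either case
        rs, ws = streams[0], streams[1]
        async with ClientSession(rs, ws) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("connected, tools:", [t.name for t in tools.tools])


if __name__ == "__main__":
    asyncio.run(run(sys.argv[1]))
```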
Swap the Anthropic API for the OpenAI API in LLM Calls
  • Replace the LLM call function (see the sketch after this list)
    • self.anthropic.messages.create() -> self.client.chat.completions.create()
    • Use a dynamic model id for vllm
    • The tools argument uses a slightly different format
  • Replace the LLM response object handling
    • response -> response.choices[0].message
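The swap is local to the client's query handling. A minimal sketch, assuming an OpenAI-compatible endpoint (such as vllm) behind OPENAI_BASE_URL/OPENAI_API_KEY and an already-connected MCP ClientSession; the function name and message list here are illustrative, not the repository's exact code:

```python
# Minimal sketch of calling an OpenAI-compatible endpoint with MCP tools
import json

from mcp import ClientSession
from openai import OpenAI


async def ask_llm(session: ClientSession, messages: list[dict]) -> str:
    client = OpenAI()  # reads OPENAI_BASE_URL and OPENAI_API_KEY from the environment
    model_id = client.models.list().data[0].id  # dynamic model id for vllm

    # The tools argument needs OpenAI's function-call format instead of Anthropic's
    tools = [
        {
            "type": "function",
            "function": {
                "name": t.name,
                "description": t.description,
                "parameters": t.inputSchema,
            },
        }
        for t in (await session.list_tools()).tools
    ]

    # self.anthropic.messages.create() becomes chat.completions.create()
    response = client.chat.completions.create(
        model=model_id, messages=messages, tools=tools)

    # The assistant turn now lives in response.choices[0].message;
    # tool invocations, if any, appear in message.tool_calls
    message = response.choices[0].message
    for call in message.tool_calls or []:
        await session.call_tool(call.function.name,
                                json.loads(call.function.arguments))
    return message.content or ""
```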
Author
Xiaokui Shu (IBM Research)

My explanation of monad in Haskell: https://git.io/JP1e9