mcp-streamable-http-quickstart
This repository extends the MCP quickstart weather app to communicate between client and server over HTTP. Specifically, it uses either SSE or Streamable HTTP as the transport and the OpenAI API for LLM calls. It also explains each modification so readers can understand these extensions.
MCP Quickstart Weather App Extensions
After reading the official MCP quickstart examples for an MCP server and client, do you wonder:
- How to upgrade the simple stdio-based example to an HTTP server/client for real-world use?
  - The latest MCP documentation (June 2025) lists SSE as the default HTTP transport protocol
  - The latest MCP specification (March 2025) further upgrades SSE to the Streamable HTTP protocol
- How to replace the Anthropic API with the OpenAI API, widely used by open-source inference servers like vllm?
Goal of This Repository
- Patch the official MCP quickstart weather app to use:
- SSE or Streamable HTTP as the transport protocol between client and server
- OpenAI API for LLM calls
- Explain each modification for readers to understand these extensions
How to Run
- Install uv
- Choose a protocol, either `sse` or `streamable-http`
- Open two terminals on one host (the HTTP server is hardcoded to localhost in this example)
- Term 1: run the server
  - Go to the server directory `weather-server-python`
  - Start the server: `uv run server PROTOCOL_OF_YOUR_CHOICE`
- Term 2: run the client
  - Go to the client directory `mcp-client-python`
  - Set up environment variables for the OpenAI endpoint and API key:
    - `export OPENAI_BASE_URL=http://xxx/v1`
    - `export OPENAI_API_KEY=yyy`
  - Start the client: `uv run client PROTOCOL_OF_YOUR_CHOICE`
Explanation of Modifications
Use SSE/Streamable-HTTP Instead of Stdio for Transport Protocol
- Server: use `mcp.run(sys.argv[1])` instead of `mcp.run('stdio')`, given that `sys.argv[1]` is either `sse` or `streamable-http`
  - SSE protocol: the server's main endpoint is `http://localhost:8000/sse`
  - Streamable HTTP protocol: the server's only endpoint is `http://localhost:8000/mcp`
- Client: load `rs` (read stream) and `ws` (write stream) from `sse_client` or `streamablehttp_client` instead of `stdio_client` used in the original MCP quickstart example
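The transport choice above can be sketched as a small helper shared by both sides. This is an illustration only: `parse_transport` and the `TRANSPORTS` table are hypothetical names, and the actual `mcp.run(...)` / `sse_client(...)` / `streamablehttp_client(...)` call sites live in the repo's server and client code.

```python
import sys

# Transport names accepted on the command line, mapped to the
# corresponding server endpoint described above (assumed localhost:8000)
TRANSPORTS = {
    "sse": "http://localhost:8000/sse",              # SSE main endpoint
    "streamable-http": "http://localhost:8000/mcp",  # Streamable HTTP only endpoint
}

def parse_transport(argv):
    """Validate sys.argv-style input and return (transport, endpoint URL)."""
    if len(argv) != 2 or argv[1] not in TRANSPORTS:
        raise SystemExit("usage: server|client {sse|streamable-http}")
    return argv[1], TRANSPORTS[argv[1]]

# The server would then call mcp.run(transport), and the client would open
# sse_client(url) or streamablehttp_client(url) to obtain the (rs, ws) streams.
```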
Swap the Anthropic API for the OpenAI API for LLM Calls
- Replace the LLM call function: `self.anthropic.messages.create()` → `self.client.chat.completions.create()`
- Use a dynamic model id for vllm
- The `tools` argument uses a slightly different format
- Replace the LLM response object handling: `response` → `response.choices[0].message`
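The `tools` formatting difference can be illustrated with a small converter: an MCP tool exposes `name`, `description`, and a JSON Schema under `inputSchema`, while the OpenAI Chat Completions API expects each tool wrapped as `{"type": "function", "function": {...}}` with the schema under `parameters`. A sketch (the helper name is hypothetical; the repo's client may do this inline):

```python
def mcp_tool_to_openai(tool: dict) -> dict:
    """Wrap an MCP tool description in the OpenAI chat.completions `tools` shape."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # MCP calls the JSON Schema "inputSchema"; OpenAI calls it "parameters"
            "parameters": tool["inputSchema"],
        },
    }
```

For example, the quickstart's `get_forecast` tool would be wrapped this way before being passed in the `tools` list of `self.client.chat.completions.create()`.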