FastMCP.__init__ clobbers the root logger with an INFO-level RichHandler — hangs stdio servers under back-pressure
Summary
mcp.server.fastmcp.FastMCP.__init__ calls configure_logging("INFO") unconditionally, which uses logging.basicConfig to install a RichHandler(console=Console(stderr=True)) as a root-logger handler at INFO level. In a stdio MCP server whose stderr is a unix socket (typical when spawned by a Node.js MCP client like Claude Desktop, Cline, or qwen-code), this causes high-volume INFO logging from unrelated libraries (notably httpx — "HTTP Request: GET ...") to fill the kernel's stderr SNDBUF and block the asyncio main thread on the next write(2). The MCP client's read-side timeout then abandons the server.
Affected version: mcp==1.27.0 (and any earlier version with the same FastMCP.__init__ → configure_logging("INFO") call).
User-visible symptom: the stdio MCP server hangs after a handful of tool calls, then is killed by the client; the cycle repeats. No useful error appears in the server log because the server itself is alive — just blocked in write(2).
Reproduction
Minimal repro of the buffer-fill mechanic on macOS (net.local.stream.sendspace = 8192):
```python
import logging
import os
import socket
import sys

# Spawn a child whose stderr is a socketpair (mimics Node child_process.spawn)
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
pid = os.fork()
if pid == 0:
    parent.close()
    os.dup2(child.fileno(), 2)
    child.close()

    # FastMCP() does this for you:
    from mcp.server.fastmcp import FastMCP
    _ = FastMCP("test")

    # Now any INFO log from any library hits the unbuffered stderr socket.
    # Simulate ~5 retrieves' worth of httpx INFO output:
    for _ in range(8):
        logging.getLogger("httpx").info(
            "HTTP Request: GET http://127.0.0.1:6333/collections/foo/points "
            "\"HTTP/1.1 200 OK\""
        )  # ~120 bytes per call after Rich formatting
    sys.exit(0)
else:
    child.close()
    import time
    time.sleep(2)
    print("child poll:", os.waitpid(pid, os.WNOHANG))  # (0, 0) = blocked
```
Real-world repro: spawn a FastMCP server that uses qdrant_client (or any httpx-based client) inside a tool handler, from a Node.js MCP host that doesn't aggressively drain stderr. We observed this with qwen-code consuming the MCP server's stderr — the server hangs reproducibly after ~5 retrieve_standard_section calls.
Root cause (with stack trace)
py-spy dump of a hung MCP server child:
```text
Thread 0x2082C8800 (active): "MainThread"
    _write_buffer (rich/console.py:2124)          <-- blocks in write(2)
    _check_buffer (rich/console.py:2055)
    _exit_buffer (rich/console.py:820)
    __exit__ (rich/console.py:864)
    print (rich/console.py:1704)
    emit (rich/logging.py:186)                    <-- RichHandler installed by FastMCP
    handle (logging/__init__.py:1028)
    ...
    info (logging/__init__.py:1539)
    _send_single_request (httpx/_client.py:1025)  <-- httpx INFO log
    ...
    scroll (qdrant_client/qdrant_remote.py:956)   <-- inside our tool handler
    ...
    call_tool (mcp/server/fastmcp/server.py:346)
    handler (mcp/server/lowlevel/server.py:535)
```
The chain: tool handler runs → calls into qdrant_client.scroll → httpx logs the HTTP request at INFO → RichHandler.emit fires → Rich's Console(stderr=True) writes to stderr → kernel SNDBUF (8 KB on macOS) fills because the host isn't reading fast enough → write(2) blocks → asyncio event loop is now stuck inside the synchronous logger call.
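The buffer-fill step of this chain can be reproduced in isolation with nothing but the stdlib. The sketch below is illustrative (not SDK code): it shrinks a unix-socket send buffer toward the macOS default and writes with no reader draining the other end, using non-blocking mode so the demo surfaces BlockingIOError at the exact point where the server's blocking write(2) wedges:

```python
import socket

def fill_until_blocked(sndbuf: int = 8192, chunk: int = 1024) -> int:
    """Write into a unix socket nobody reads; return bytes accepted before EAGAIN."""
    w, r = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
    w.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, sndbuf)  # mimic macOS's 8 KB default
    w.setblocking(False)  # demo-only: observe EAGAIN instead of hanging like the server
    sent = 0
    try:
        while True:
            sent += w.send(b"x" * chunk)  # r is never drained, so this must stop
    except BlockingIOError:
        pass  # a *blocking* stderr write -- the server's situation -- hangs right here
    finally:
        w.close()
        r.close()
    return sent

print("kernel accepted", fill_until_blocked(), "bytes before blocking")
```

The exact byte count varies by platform (the kernel may round the requested SO_SNDBUF up or double it), but the write always stops once the unread buffer fills.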
The relevant SDK code (mcp/server/fastmcp/utilities/logging.py):
```python
def configure_logging(level: ... = "INFO") -> None:
    handlers: list[logging.Handler] = []
    try:
        from rich.console import Console
        from rich.logging import RichHandler

        handlers.append(RichHandler(console=Console(stderr=True), rich_tracebacks=True))
    except ImportError:
        pass
    if not handlers:
        handlers.append(logging.StreamHandler())
    logging.basicConfig(level=level, format="%(message)s", handlers=handlers)
```
And FastMCP.__init__ calls configure_logging(self.settings.log_level) (default "INFO") at server.py:242.
Why this is a footgun
- It's the root logger. logging.basicConfig configures the root, so every library imported anywhere in the process gets routed through Rich → stderr at INFO. The user didn't opt into this; instantiating FastMCP("my-server") is enough.
- INFO is too verbose for a non-interactive runtime. httpx, urllib3, asyncio, openai, anthropic, etc. all chatter at INFO. In a CLI that's fine; in an MCP server whose stderr is an 8 KB-buffered pipe, it's a back-pressure bomb.
- stdio MCP hosts vary in how aggressively they drain stderr. The SDK can't assume the host has a tight stderr-reader loop. Even when the host does drain, a transient pause is enough to fill the buffer.
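The first point is easy to verify with the stdlib alone. The snippet below is illustrative only: it mimics the SDK's call (minus Rich), and adds force=True — which the SDK does not pass — so the demo behaves the same even in an already-configured process:

```python
import logging

# basicConfig -- the call configure_logging ends with -- configures the ROOT logger.
logging.basicConfig(
    level="INFO",
    format="%(message)s",
    handlers=[logging.NullHandler()],  # NullHandler keeps this demo silent
    force=True,  # demo-only: replace any pre-existing root configuration
)

assert logging.getLogger().level == logging.INFO
# Every library inherits it: httpx's "HTTP Request: ..." lines now pass the level check.
assert logging.getLogger("httpx").getEffectiveLevel() == logging.INFO
```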
Proposed fixes (any of these resolves the issue)
- Don't touch the root logger. Configure logging.getLogger("mcp") (or "FastMCP") instead of calling logging.basicConfig. Library code shouldn't claim the root namespace.
- Default to WARNING, not INFO. INFO chatter from transitive deps is rarely useful in a server runtime.
- Don't auto-install on instantiation. Move configure_logging to a separately-callable helper users opt into; FastMCP itself shouldn't reach for the root logger as a side effect of construction.
- At minimum: warn or document. The current default is surprising and the docs don't mention that creating a FastMCP instance reconfigures global logging.
A minimal patch that picks #1 + #2:
```python
def configure_logging(level: ... = "WARNING") -> None:
    logger = logging.getLogger("mcp")
    if logger.handlers:
        return  # idempotent; user can override
    try:
        from rich.console import Console
        from rich.logging import RichHandler

        logger.addHandler(RichHandler(console=Console(stderr=True), rich_tracebacks=True))
    except ImportError:
        logger.addHandler(logging.StreamHandler())
    logger.setLevel(level)
    logger.propagate = False
```
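An illustrative, stdlib-only check of the isolation this buys (not part of the patch itself): after configuring just the "mcp" namespace, third-party loggers are exactly as the application left them.

```python
import logging

# Configure only the "mcp" namespace, as the patch above does.
mcp_logger = logging.getLogger("mcp")
mcp_logger.addHandler(logging.StreamHandler())  # stderr by default
mcp_logger.setLevel(logging.WARNING)
mcp_logger.propagate = False  # "mcp.*" records never bubble up to root handlers

# Third-party loggers are untouched: no handler attached, default effective level.
assert not logging.getLogger("httpx").handlers
# Children of "mcp" inherit the configured level via the dotted hierarchy.
assert logging.getLogger("mcp.server").getEffectiveLevel() == logging.WARNING
```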
Workaround for affected users today
Call this at the top of your MCP server's main(), after importing FastMCP but before mcp.run():
```python
import logging
import sys

def _quarantine_root_logging():
    root = logging.getLogger()
    for h in list(root.handlers):
        root.removeHandler(h)
    root.addHandler(logging.StreamHandler(sys.stderr))
    root.setLevel(logging.WARNING)
    for noisy in ("httpx", "httpcore", "qdrant_client"):
        logging.getLogger(noisy).setLevel(logging.WARNING)

_quarantine_root_logging()  # call once, before mcp.run()
```
We additionally bump SO_SNDBUF on stdout/stderr to 8 MB as belt-and-suspenders, so a future stray writer can't recreate the same hang.
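A hedged sketch of that SO_SNDBUF bump (the helper name bump_sndbuf is ours, not a real API; the kernel may clamp or refuse the requested size, so treat it as strictly best-effort):

```python
import socket

def bump_sndbuf(fd: int, size: int = 8 * 1024 * 1024) -> None:
    """Best-effort SO_SNDBUF bump; no-op when fd is not a socket."""
    try:
        sock = socket.socket(fileno=fd)  # wraps the existing fd (no dup)
    except OSError:
        return  # fd is a pipe or tty -- nothing to tune
    try:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, size)
    except OSError:
        pass  # kernel refused (e.g. above kern.ipc.maxsockbuf); keep going
    finally:
        sock.detach()  # release the fd without closing it

# In main(), before mcp.run():
# bump_sndbuf(1)  # stdout
# bump_sndbuf(2)  # stderr
```

Note that socket.socket(fileno=...) takes ownership of the descriptor, so the detach() is what keeps the process's stdout/stderr open after the helper returns.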
Environment
- macOS (sysctl net.local.stream.sendspace = 8192)
- Python 3.12.12
- mcp 1.27.0
- Stdio host: qwen-code 0.14.5 (Node.js child_process.spawn → AF_UNIX socketpair stdio)
The 8 KB unix-socket SNDBUF is macOS-specific (Linux pipes default to 64 KB), but the underlying issue — an unread stderr buffer eventually filling and blocking the writer — applies anywhere if log volume is high enough relative to the host's drain rate.