perf(streaming): coalesce per-token publishes to Redis (50ms / 128-char window)#333
Conversation
perf(streaming): coalesce per-token publishes to Redis (50ms / 128-char window)

Per-token Redis publishes from TemporalStreamingModel were adding ~45s (56-62%) overhead to agent response latency, mostly from head-of-line blocking on the model's event loop: each `await streaming_context.stream_update(...)` inside the OpenAI stream `async for` paused token consumption until the publish round-trip completed.

This change introduces a `CoalescingBuffer` driven by an `asyncio.Event`, so the producer never awaits on Redis. Deltas are merged consecutive-only (preserving character order in every (type, index) channel) and flushed on a 50ms timer, on a 128-char size threshold, or immediately for the first delta to keep perceived responsiveness high. The buffer's `close()` drains remaining deltas before the DONE event, so consumers see the full sequence in order.

A new `StreamingMode = Literal["off", "per_token", "coalesced"]` lives in `streaming.py` as the single source of truth and is plumbed through the adk streaming module, `StreamingService.streaming_task_message_context`, and `StreamingTaskMessageContext`. The default is `"coalesced"` everywhere, so all 13+ existing context callers (claude_agents, langgraph, litellm provider, openai sync provider, etc.) benefit automatically.
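For illustration, a minimal sketch of the consecutive-only merge, with deltas reduced to (channel, text) tuples; the real `_merge_consecutive` operates on the TaskMessageDelta variants:

```python
# Minimal sketch: deltas simplified to ((type, index) channel key, text).
from typing import List, Tuple

Delta = Tuple[tuple, str]

def merge_consecutive(deltas: List[Delta]) -> List[Delta]:
    """Collapse only adjacent deltas that share a channel, so character
    order inside each channel and the cross-channel order both survive."""
    merged: List[Delta] = []
    for channel, text in deltas:
        if merged and merged[-1][0] == channel:
            merged[-1] = (channel, merged[-1][1] + text)
        else:
            merged.append((channel, text))
    return merged

# A tool delta interleaved between text deltas blocks the merge across it:
assert merge_consecutive(
    [(("text", 0), "He"), (("text", 0), "llo"), (("tool", 0), "{"), (("text", 0), "!")]
) == [(("text", 0), "Hello"), (("tool", 0), "{"), (("text", 0), "!")]
```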
fix(streaming): address greptile review findings

- _run: when CancelledError is raised mid-flush in the for-loop, re-enqueue the in-flight item plus any remaining items in the local `drained` list back into `self._buf` so `close()`'s final drain can recover them. Previously the local `drained` list was unreachable after CancelledError exited the for-loop, causing the last coalesced batch to be silently dropped on close-during-flush races. Trade-off: the in-flight item may be duplicated on the consumer side (the Redis publish may have completed before the cancel was delivered), which is preferable to silent loss for streaming UX.
- _merge_pair: replace the `return b` fallback with an AssertionError. All six current TaskMessageDelta variants have explicit isinstance branches, so the fallback is unreachable today. But `_can_merge` returns True for any same-type pair, so adding a 7th delta variant without updating `_merge_pair` would silently drop `a`'s accumulated content. Asserting turns a future silent data loss into an immediate, diagnosable crash.
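A hedged sketch of the re-enqueue pattern, simplified to omit locking and windowing (`_drain` stands in for the real `_drain_locked`):

```python
import asyncio

class CoalescingBuffer:
    def __init__(self, on_flush):
        self.on_flush = on_flush
        self._buf: list = []

    def _drain(self) -> list:
        drained, self._buf = self._buf, []
        return drained

    async def _flush_once(self) -> None:
        drained = self._drain()
        for i, item in enumerate(drained):
            try:
                await self.on_flush(item)  # Redis publish round-trip
            except asyncio.CancelledError:
                # close() cancelled us mid-publish: prepend the in-flight
                # item and everything not yet flushed back onto the buffer
                # so the final drain in close() can recover them. The
                # in-flight item may already have reached Redis, so a
                # consumer-side duplicate is possible; that beats silently
                # dropping the batch.
                self._buf[:0] = drained[i:]
                raise
```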
Addressed both Greptile findings in 0258aa5:
test(streaming): add coalescing-layer tests; loosen one model assertion

After merging the test-suite repair from main (#334) into this branch, one model test (test_responses_api_streaming) regressed: its assert_called_with strict-matched all kwargs of streaming_task_message_context and didn't tolerate the new `streaming_mode='coalesced'` kwarg this PR adds. Switched to assert_called() plus targeted kwarg checks so the test verifies what it cares about (task_id threading) without locking in implementation details.

Replaced the ad-hoc smoke scripts that lived in conversation with a real pytest module at tests/lib/core/services/adk/test_streaming.py covering:

- _delta_char_len, _can_merge, _merge_pair: per-channel correctness + None-handling
- _merge_consecutive: pure-text collapse, cross-channel order preservation, per-channel reconstruction matches per-token semantics
- CoalescingBuffer: first-delta-immediate flush within ~20ms, size-threshold flush before the timer fires, multi-delta coalescing within one window, idle close, add-after-close no-op
- CoalescingBuffer cancel-during-flush regression test for the P1 fix: five queued chunks must all surface across publishes when close() cancels mid-flush (asserts substring presence rather than exact ordering, since the documented trade-off allows duplicates of the in-flight item)
- StreamingTaskMessageContext mode dispatch: "off" suppresses publishes but persists full content, "per_token" publishes each delta synchronously, "coalesced" batches and persists full content
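The cancel-during-flush test shape, sketched against a simplified string-based buffer (the real module exercises TaskMessageDelta variants; this sketch assumes the pytest-asyncio plugin and a `CoalescingBuffer` like the one sketched under Design below):

```python
import asyncio

import pytest

@pytest.mark.asyncio  # needs the pytest-asyncio plugin
async def test_close_during_flush_loses_nothing():
    published: list[str] = []

    async def slow_publish(chunk: str) -> None:
        await asyncio.sleep(0.01)  # widen the window for a mid-flush cancel
        published.append(chunk)

    # Assumed: a string-based CoalescingBuffer, not the real class.
    buf = CoalescingBuffer(on_flush=slow_publish)
    chunks = ["alpha", "bravo", "charlie", "delta", "echo"]
    for chunk in chunks:
        buf.add(chunk)
    await asyncio.sleep(0.06)  # let at least one flush begin
    await buf.close()

    # The documented trade-off allows duplicates of the in-flight item,
    # so assert substring presence rather than exact ordering or counts.
    joined = "".join(published)
    for chunk in chunks:
        assert chunk in joined
```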
chore(streaming): route TemporalStreamingModel logger through make_logger
The model file used raw ``logging.getLogger("agentex.temporal.streaming")``,
which returns a logger with no handler attached and no level configured —
so the existing ``[TemporalStreamingModel] Initialized ... streaming_mode=...``
INFO log was silently dropped, making it impossible to verify at runtime
that a coalesced (or any) streaming mode was actually wired.
Switch to the SDK's ``make_logger`` helper (level=INFO, RichHandler in
local mode, StreamHandler otherwise) used everywhere else in the SDK.
The explicit logger name ``agentex.temporal.streaming`` is preserved so
any external logging configuration targeting that name keeps working.
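Illustrative before/after; the `make_logger` import path and single-argument signature shown here are assumptions:

```python
import logging

# Before: a bare logger with no handler and no level configured, so INFO
# records emitted through it never reach any output.
logger = logging.getLogger("agentex.temporal.streaming")

# After: route through the SDK helper, which attaches a handler
# (RichHandler in local mode, StreamHandler otherwise) and sets level=INFO,
# while keeping the explicit logger name.
from agentex.lib.utils.logging import make_logger  # import path is an assumption

logger = make_logger("agentex.temporal.streaming")
logger.info("[TemporalStreamingModel] Initialized streaming_mode=coalesced")
```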
* feat(api): api update
* chore(internal): more robust bootstrap script
* fix: use correct field name format for multipart file arrays
* feat: support setting headers via env
* fix: allow litellm security patch (#336)
* fix(adk): Always inject headers on execute activity (#337)
* perf(streaming): coalesce per-token publishes to Redis (50ms / 128-char window) (#333)
* chore(streaming): fix import ordering (ruff I001)
* fix(streaming): address greptile review findings
* test(streaming): add coalescing-layer tests; loosen one model assertion
* chore(streaming): route TemporalStreamingModel logger through make_logger
* release: 0.10.3

Co-authored-by: stainless-app[bot] <142633134+stainless-app[bot]@users.noreply.github.com>
Co-authored-by: Brandon Allen <brandon.allen@scale.com>
Co-authored-by: Declan Brady <declan.brady@scale.com>
Co-authored-by: Stas Moreinis <stas.moreinis@scale.com>
Summary
Per-token Redis publishes from `TemporalStreamingModel` were adding ~45s (56-62%) overhead to agent response latency. Root cause: each `await streaming_context.stream_update(...)` inside the OpenAI stream `async for` blocked token consumption until the Redis publish round-trip completed (head-of-line blocking via TCP backpressure on the SSE connection).

This PR replaces per-token publishes with a coalescing buffer that runs on a background ticker driven by `asyncio.Event` — so the producer's event loop never awaits on Redis, even on size-threshold flushes.

Performance context (from the original report)
Cost model with 1000 tokens at the chosen thresholds:
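An illustrative back-of-envelope at those thresholds; the chars-per-token and throughput figures below are assumptions for the example, not numbers from the report:

```python
# Rough flush counts for 1000 tokens at the 50ms / 128-char thresholds.
tokens = 1000
chars_per_token = 4                      # rough average for English text
tokens_per_second = 40                   # assumed model throughput
duration_s = tokens / tokens_per_second  # 25s stream

per_token_publishes = tokens                   # one awaited publish per token
size_flushes = tokens * chars_per_token / 128  # ~31 if the size cap fires first
timer_flushes = duration_s / 0.050             # ~500 if the 50ms tick fires first

# Whichever condition fires first in each window sets the flush count, so a
# slow stream is timer-bound; either way the producer never awaits a publish.
print(per_token_publishes, round(size_flushes), round(timer_flushes))
```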
Design
StreamingMode = Literal["off", "per_token", "coalesced"]

Single source of truth in `streaming.py`. Every layer takes it as a parameter (model, provider, service, adk module, context).

- `off`: `start` → `done` only.
- `per_token`: one publish per delta (the previous behavior).
- `coalesced` (default): batched publishes through `CoalescingBuffer`.

CoalescingBuffer

- Background ticker (driven by `asyncio.Event`) owns all Redis publishes; `add()` only buffers — producer never awaits on Redis.
- Character order in every `(type, index)` channel is preserved exactly. Cross-channel order is preserved too.
- `close()` drains the remaining buffer before `StreamTaskMessageDone` so consumers see the full sequence.
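For orientation, a minimal sketch of the add()/ticker split, with deltas reduced to plain strings; the real buffer carries TaskMessageDelta variants, merges per (type, index) channel, and handles cancellation as described above. Names beyond those in this PR description are assumptions:

```python
import asyncio
from typing import Awaitable, Callable, List

class CoalescingBuffer:
    def __init__(
        self,
        on_flush: Callable[[str], Awaitable[None]],
        window_s: float = 0.050,
        max_chars: int = 128,
    ) -> None:
        self.on_flush = on_flush
        self._window_s = window_s
        self._max_chars = max_chars
        self._buf: List[str] = []
        self._buf_chars = 0
        self._first_flushed = False
        self._flush_signal = asyncio.Event()
        self._ticker = asyncio.create_task(self._run())  # needs a running loop

    def add(self, delta: str) -> None:
        """Producer side: synchronous, never awaits on Redis."""
        self._buf.append(delta)
        self._buf_chars += len(delta)
        # First delta and the size threshold wake the ticker without
        # waiting out the 50ms window.
        if not self._first_flushed or self._buf_chars >= self._max_chars:
            self._flush_signal.set()

    async def _run(self) -> None:
        while True:
            try:  # flush on signal, or when the 50ms window elapses
                await asyncio.wait_for(self._flush_signal.wait(), self._window_s)
            except asyncio.TimeoutError:
                pass
            self._flush_signal.clear()
            if self._buf:
                self._first_flushed = True
                merged, self._buf, self._buf_chars = "".join(self._buf), [], 0
                await self.on_flush(merged)

    async def close(self) -> None:
        """Cancel the ticker, then drain whatever is still buffered."""
        self._ticker.cancel()
        try:
            await self._ticker
        except asyncio.CancelledError:
            pass
        if self._buf:
            merged, self._buf, self._buf_chars = "".join(self._buf), [], 0
            await self.on_flush(merged)
```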
Default flip

The default for `streaming_mode` is `"coalesced"` at every layer. All 13+ existing callers of `streaming_task_message_context()` (claude_agents, langgraph, litellm provider, openai sync provider, etc.) benefit automatically without code changes.

Risks / caveats
- To roll back, set `streaming_mode="per_token"` on the model/provider/context constructor — no other code changes required.
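For example, a single call site could opt back out like this (the surrounding call shape is illustrative, not the exact API):

```python
# task_id and the async-with shape mirror how the PR describes the context.
async with streaming_service.streaming_task_message_context(
    task_id=task_id,
    streaming_mode="per_token",  # default is "coalesced"
) as streaming_context:
    await streaming_context.stream_update(delta)
```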
Test plan

- `tests/test_streaming.py` (20/20 pass)
- `close()` drains buffer; persisted message body is the full assembled content
- `"off"` mode produces zero per-delta publishes but still persists complete content

Greptile Summary
This PR replaces per-token Redis publishes in `TemporalStreamingModel` with a `CoalescingBuffer` that merges consecutive same-channel deltas in 50ms / 128-char windows on a background `asyncio.Task`, eliminating the head-of-line blocking that was adding ~45s to agent response latency. A new `StreamingMode` literal ("off"/"per_token"/"coalesced") threads through every layer; the default flips to "coalesced" at all 13+ call sites automatically.

The previously-flagged silent data loss on `CancelledError` mid-flush has been addressed: items are now re-enqueued before re-raising so `close()`'s final drain recovers them. Two minor style points remain (see inline comments), but neither affects correctness.

Confidence Score: 5/5
Safe to merge; the core correctness concern from the prior review thread is resolved, and all remaining findings are P2 style suggestions.
The critical silent-drop bug (CancelledError during flush leaving items unrecovered) is fixed with the re-enqueue pattern. The two remaining comments are defensive coding suggestions — a TOCTOU on `_closed` that only matters for concurrent callers (not the sequential streaming model), and a `_buf_chars` accounting gap that is benign given `_closed=True` gates any threshold checks. Neither affects observable behavior. Test coverage is thorough (20 unit tests across helpers, buffer windowing, cancellation recovery, and mode dispatch).
Important Files Changed

src/agentex/lib/core/services/adk/streaming.py — the two P2 style items in CoalescingBuffer._run and add()
Sequence Diagram
```mermaid
sequenceDiagram
    participant P as Producer (OpenAI stream)
    participant C as StreamingTaskMessageContext
    participant B as CoalescingBuffer
    participant R as Redis (StreamingService)
    P->>C: stream_update(delta1) [first]
    C->>B: add(delta1)
    Note over B: _first_flushed=False → set _flush_signal
    B-->>R: on_flush(delta1) [immediate via ticker]
    P->>C: stream_update(delta2)
    P->>C: stream_update(delta3)
    Note over B: buffering in _buf...
    loop Every 50ms or 128 chars
        B->>B: _drain_locked() → _merge_consecutive()
        B-->>R: on_flush(merged delta2+delta3)
    end
    P->>C: close()
    C->>B: close()
    B->>B: cancel background ticker
    B->>B: final _drain_locked()
    B-->>R: on_flush(remaining merged)
    C-->>R: stream_update(StreamTaskMessageDone)
```