release: 0.10.3 #330
Merged
declan-scale merged 119 commits into main on Apr 30, 2026
Conversation
78f998a to 48d5770
Release version edited manually. The Pull Request version has been manually set to …. If you instead want to use the version number …
Review the following changes in direct dependencies. Learn more about Socket for GitHub.
perf(streaming): coalesce per-token publishes to Redis (50ms / 128-char window) (#333)

* perf(streaming): coalesce per-token publishes to Redis (50ms / 128-char window)

  Per-token Redis publishes from TemporalStreamingModel were adding ~45s (56-62%) overhead to agent response latency, mostly from head-of-line blocking on the model's event loop: each `await streaming_context.stream_update(...)` inside the OpenAI stream `async for` paused token consumption until the publish round-trip completed.

  This change introduces a `CoalescingBuffer` driven by an `asyncio.Event`, so the producer never awaits on Redis. Deltas are merged consecutive-only (preserving character order in every (type, index) channel) and flushed on a 50ms timer, on a 128-char size threshold, or immediately for the first delta to keep perceived responsiveness high. The buffer's `close()` drains remaining deltas before the DONE event, so consumers see the full sequence in order. (A minimal sketch of this mechanism follows after this message.)

  A new `StreamingMode = Literal["off", "per_token", "coalesced"]` lives in `streaming.py` as the single source of truth and is plumbed through the adk streaming module, `StreamingService.streaming_task_message_context`, and `StreamingTaskMessageContext`. The default is `"coalesced"` everywhere, so all 13+ existing context callers (claude_agents, langgraph, litellm provider, openai sync provider, etc.) benefit automatically.

* chore(streaming): fix import ordering (ruff I001)

* fix(streaming): address greptile review findings

  - _run: when CancelledError is raised mid-flush in the for-loop, re-enqueue the in-flight item plus any remaining items in the local `drained` list back into self._buf so close()'s final drain can recover them. Previously the local `drained` list was unreachable after CancelledError exited the for-loop, causing the last coalesced batch to be silently dropped on close-during-flush races. Trade-off: the in-flight item may be duplicated on the consumer side (the Redis publish may have completed before the cancel was delivered), which is preferable to silent loss for streaming UX.
  - _merge_pair: replace the `return b` fallback with an AssertionError. All six current TaskMessageDelta variants have explicit isinstance branches, so the fallback is unreachable today. But _can_merge returns True for any same-type pair, so adding a 7th delta variant without updating _merge_pair would silently drop `a`'s accumulated content. Asserting turns a future silent data loss into an immediate, diagnosable crash.

* test(streaming): add coalescing-layer tests; loosen one model assertion

  After merging the test-suite repair from main (#334) into this branch, one model test (test_responses_api_streaming) regressed because its assert_called_with strict-matched all kwargs of streaming_task_message_context and didn't tolerate the new `streaming_mode='coalesced'` kwarg this PR adds. Switched to assert_called() + targeted kwarg checks so the test verifies what it cares about (task_id threading) without locking in implementation details.

  Replaced the ad-hoc smoke scripts that lived in the conversation with a real pytest module at tests/lib/core/services/adk/test_streaming.py covering:

  - _delta_char_len, _can_merge, _merge_pair: per-channel correctness + None-handling
  - _merge_consecutive: pure-text collapse, cross-channel order preservation, per-channel reconstruction matches per-token semantics
  - CoalescingBuffer: first-delta-immediate flush within ~20ms, size-threshold flush before the timer fires, multi-delta coalescing within one window, idle close, add-after-close no-op
  - CoalescingBuffer cancel-during-flush regression test for the P1 fix: five queued chunks must all surface across publishes when close() cancels mid-flush (asserts substring presence rather than exact ordering, since the documented trade-off allows duplicates of the in-flight item)
  - StreamingTaskMessageContext mode dispatch: "off" suppresses publishes but persists full content, "per_token" publishes each delta synchronously, "coalesced" batches and persists full content

* chore(streaming): route TemporalStreamingModel logger through make_logger

  The model file used a raw ``logging.getLogger("agentex.temporal.streaming")``, which returns a logger with no handler attached and no level configured, so the existing ``[TemporalStreamingModel] Initialized ... streaming_mode=...`` INFO log was silently dropped, making it impossible to verify at runtime that a coalesced (or any) streaming mode was actually wired. Switch to the SDK's ``make_logger`` helper (level=INFO, RichHandler in local mode, StreamHandler otherwise) used everywhere else in the SDK. The explicit logger name ``agentex.temporal.streaming`` is preserved so any external logging configuration targeting that name keeps working.
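The commit message above fully specifies the coalescing behavior; for readers who want to see the shape of it, here is a minimal, self-contained sketch. It is not the SDK's implementation: the real buffer merges typed TaskMessageDelta objects per (type, index) channel and publishes through the streaming context, whereas this sketch coalesces plain strings and takes an arbitrary async `on_flush` callback. Only `CoalescingBuffer`, `add`, `close`, and the 50 ms / 128-char thresholds come from the commit message; every other name and detail is an illustrative assumption.

```python
import asyncio
from typing import Awaitable, Callable


class CoalescingBuffer:
    """Minimal sketch of the coalescing idea described above (strings only;
    the real buffer merges typed TaskMessageDelta objects per channel)."""

    def __init__(
        self,
        on_flush: Callable[[str], Awaitable[None]],  # e.g. a Redis publish
        flush_interval: float = 0.05,  # 50 ms window
        size_threshold: int = 128,     # flush early once 128 chars are buffered
    ) -> None:
        self._on_flush = on_flush
        self._flush_interval = flush_interval
        self._size_threshold = size_threshold
        self._buf: list[str] = []
        self._buf_chars = 0
        self._seen_first = False
        self._closed = False
        self._flush_event = asyncio.Event()
        # Must be constructed inside a running event loop.
        self._runner = asyncio.create_task(self._run())

    def add(self, delta: str) -> None:
        """Producer side: never awaits, so token consumption is never blocked."""
        if self._closed:
            return  # add-after-close is a no-op
        self._buf.append(delta)
        self._buf_chars += len(delta)
        # Flush immediately for the first delta (perceived responsiveness)
        # or once the size threshold is crossed; otherwise wait for the timer.
        if not self._seen_first or self._buf_chars >= self._size_threshold:
            self._seen_first = True
            self._flush_event.set()

    async def _run(self) -> None:
        while True:
            try:
                await asyncio.wait_for(self._flush_event.wait(), self._flush_interval)
            except asyncio.TimeoutError:
                pass  # window expired; flush whatever accumulated
            self._flush_event.clear()
            if not self._buf:
                continue
            drained, self._buf, self._buf_chars = self._buf, [], 0
            merged = "".join(drained)  # consecutive deltas collapse into one publish
            try:
                await self._on_flush(merged)
            except asyncio.CancelledError:
                # Cancelled mid-flush: re-enqueue so close() can recover it.
                # The consumer may see a duplicate; it will never see a gap.
                self._buf = drained + self._buf
                raise

    async def close(self) -> None:
        """Cancel the ticker, then drain anything still buffered before DONE."""
        self._closed = True
        self._runner.cancel()
        try:
            await self._runner
        except asyncio.CancelledError:
            pass
        if self._buf:
            remainder, self._buf = "".join(self._buf), []
            await self._on_flush(remainder)
```

In this model the producer calls `add()` synchronously from inside the OpenAI `async for`, which is what removes the head-of-line blocking the commit message measures; only the background task and `close()` ever await the publish.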
declan-scale approved these changes on Apr 30, 2026
🤖 Release is at https://github.com/scaleapi/scale-agentex-python/releases/tag/v0.10.3 🌻
Automated Release PR
0.10.3 (2026-04-30)
Full Changelog: v0.10.2...v0.10.3
Features
Bug Fixes
Performance Improvements
Chores
This pull request is managed by Stainless's GitHub App.
The semver version number is based on included commit messages. Alternatively, you can manually set the version number in the title of this pull request.
For a better experience, it is recommended to use either rebase-merge or squash-merge when merging this pull request.
🔗 Stainless website
📚 Read the docs
🙋 Reach out for help or questions
Greptile Summary
This release bundles several independent fixes and one performance feature: a CoalescingBuffer that batches per-token Redis publishes into 50 ms / 128-char windows, three execute_activity_method → execute_activity corrections so headers are always injected, a BaseHTTPMiddleware → pure ASGI swap that fixes streaming response buffering, and an AGENTEX_CUSTOM_HEADERS env-var feature. context_interceptor.py now fires logger.warning for every non-agentex activity that lacks _task_id, which will flood logs in multi-activity workflows. send_message (sync + async) silently discards JSON-RPC error payloads when the server returns an error envelope, returning an empty result list instead.
Confidence Score: 4/5
Safe to merge with minor follow-up; no data loss or critical runtime failures introduced.
Both findings are P2: the warning log noise in the interceptor is annoying but not functionally breaking, and the silent error discard in send_message was a pre-existing gap now made slightly more likely to hit. All P0/P1 bugs (execute_activity fix, BaseHTTPMiddleware streaming fix, multipart field naming) are addressed with tests.
The P2 findings are in src/agentex/resources/agents.py (silent error swallowing in send_message) and src/agentex/lib/core/temporal/plugins/openai_agents/interceptors/context_interceptor.py (warning log level).
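As context for the send_message finding, the usual remedy for a discarded JSON-RPC error envelope is to check the error member before reading the result. A hedged sketch follows: the envelope field names come from the JSON-RPC 2.0 spec, but the function name, exception class, and surrounding parsing are illustrative assumptions, not the SDK's actual code.

```python
from typing import Any


class JsonRpcError(RuntimeError):
    """Hypothetical exception type; the SDK's real error class may differ."""

    def __init__(self, code: int, message: str, data: Any = None) -> None:
        super().__init__(f"JSON-RPC error {code}: {message}")
        self.code = code
        self.data = data


def parse_send_message_response(envelope: dict[str, Any]) -> list[Any]:
    # Surface the error payload instead of silently falling through to an
    # empty result list, which is the behavior flagged above.
    if "error" in envelope:
        err = envelope["error"]
        raise JsonRpcError(err.get("code", -1), err.get("message", ""), err.get("data"))
    return envelope.get("result", [])
```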
Important Files Changed
Flowchart
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[stream_update called] --> B{streaming_mode?}
    B -->|off| C[Feed accumulator only\nno publish]
    B -->|per_token| D[Publish immediately\nvia stream_update]
    B -->|coalesced| E[CoalescingBuffer.add]
    E --> F{first delta OR\nbuf_chars >= 128?}
    F -->|yes| G[Signal flush_event]
    F -->|no| H[Wait for ticker]
    G --> I[_run background task\nawakens immediately]
    H --> J[50ms timeout\nexpires]
    I --> K[_drain_locked:\nmerge consecutive same-channel deltas]
    J --> K
    K --> L[Publish merged batch\nvia on_flush]
    M[context.close] --> N[buffer.close:\ncancel ticker\ndrain remainder]
    N --> L
    L --> O[stream_update:\nStreamTaskMessageDone]
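The same dispatch the flowchart describes, in code form. `StreamingMode` and its three values are quoted from the commit message; the class below is a simplified stand-in for `StreamingTaskMessageContext`, and its constructor arguments, accumulator, and method bodies are assumptions for illustration only.

```python
from typing import Awaitable, Callable, Literal

# Single source of truth per the commit message (lives in streaming.py).
StreamingMode = Literal["off", "per_token", "coalesced"]


class StreamingContextSketch:
    """Illustrative stand-in for StreamingTaskMessageContext, not the SDK class."""

    def __init__(
        self,
        mode: StreamingMode,
        buffer,                                     # CoalescingBuffer-like object
        publish: Callable[[str], Awaitable[None]],  # per-token publish (e.g. Redis)
    ) -> None:
        self._mode = mode
        self._buffer = buffer
        self._publish = publish
        self._accumulated = ""  # full content is kept and persisted in every mode

    async def stream_update(self, delta: str) -> None:
        self._accumulated += delta
        if self._mode == "off":
            return                      # feed the accumulator only, no publish
        if self._mode == "per_token":
            await self._publish(delta)  # publish each delta synchronously
        else:                           # "coalesced"
            self._buffer.add(delta)     # never awaits; batched by the buffer

    async def close(self) -> None:
        if self._mode == "coalesced":
            await self._buffer.close()  # drain the remainder before DONE
        # then emit StreamTaskMessageDone and persist self._accumulated
```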
Reviews (113). Last reviewed commit: "release: 0.10.3".