release: 0.11.0 #343
Conversation
```python
event_type = items[0].event_type
assert all(i.event_type == event_type for i in items), (
    "_process_items requires all items to share the same event_type; "
    "callers must split START and END batches before dispatching."
)
```
assert in production guard defeats data-corruption protection
The code comment correctly identifies this as a potential "silent data-corruption bug," but using assert for the guard means it is silently stripped when Python runs with the -O (optimize) flag. If a caller ever passes a mixed-event-type list, START and END spans would be fed to the wrong batched method with no warning. Use an explicit if/raise instead.
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/agentex/lib/core/tracing/span_queue.py
Line: 107-111
Comment:
**`assert` in production guard defeats data-corruption protection**
The code comment correctly identifies this as a potential "silent data-corruption bug," but using `assert` for the guard means it is silently stripped when Python runs with the `-O` (optimize) flag. If a caller ever passes a mixed-event-type list, START and END spans would be fed to the wrong batched method with no warning. Use an explicit `if/raise` instead.
How can I resolve this? If you propose a fix, please make it concise.

```diff
 sgp_spans: list[SGPSpan] = []
 for span in spans:
     self._add_source_to_span(span)
     sgp_span = create_span(
         name=span.name,
         span_type=_get_span_type(span),
         span_id=span.id,
         parent_id=span.parent_id,
         trace_id=span.trace_id,
         input=span.input,
         output=span.output,
         metadata=span.data,
     )
     sgp_span.start_time = span.start_time.isoformat()  # type: ignore[union-attr]
     self._spans[span.id] = sgp_span
     sgp_spans.append(sgp_span)

 if self.disabled:
     logger.warning("SGP is disabled, skipping span upsert")
     return
 # TODO(AGX1-198): Batch multiple spans into a single upsert_batch call
 # instead of one span per HTTP request.
 # https://linear.app/scale-epd/issue/AGX1-198/actually-use-sgp-batching-for-spans
 await self.sgp_async_client.spans.upsert_batch(  # type: ignore[union-attr]
-    items=[sgp_span.to_request_params()]
+    items=[s.to_request_params() for s in sgp_spans]
 )
```
_spans populated before upsert — stale entries on HTTP failure
Spans are added to self._spans before the upsert_batch HTTP call (lines 155–156). If the batch upsert throws (network error, server 5xx), the exception is caught upstream by the queue's _handle, but _spans already holds entries for spans whose start event was never delivered to SGP. A subsequent on_spans_end will find those spans, update them, and send end-only upserts — orphaned end events with no matching start on the server.
The old single-span code registered the span in _spans only after a successful upsert, so failures were cleanly skipped on the end path. Consider populating _spans only after confirming the batch call succeeded, or rolling back entries on exception.
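The suggested ordering can be sketched with a stand-in client in place of the real SGP SDK (all names here are illustrative, not the PR's actual code): new entries are staged locally and committed to the shared registry only after `upsert_batch` returns.

```python
import asyncio

class _FakeBatchClient:
    """Stand-in for sgp_async_client.spans; fails on demand to simulate a 5xx."""
    def __init__(self, fail: bool):
        self.fail = fail
    async def upsert_batch(self, items):
        if self.fail:
            raise RuntimeError("simulated server 5xx")

async def start_spans(registry: dict, client, span_ids: list) -> None:
    # Stage locally; commit to the shared registry only on success, so a
    # failed HTTP call leaves no stale entries for on_spans_end to find.
    staged = {span_id: {"id": span_id} for span_id in span_ids}
    await client.upsert_batch(items=list(staged.values()))
    registry.update(staged)  # reached only if the batch call succeeded

registry: dict = {}
asyncio.run(start_spans(registry, _FakeBatchClient(fail=False), ["a"]))
try:
    asyncio.run(start_spans(registry, _FakeBatchClient(fail=True), ["b"]))
except RuntimeError:
    pass  # the queue's _handle would catch and log this upstream
# registry now holds "a" but not the failed "b"
```

The alternative (try/except with a rollback that deletes the staged ids from `_spans`) is equivalent; commit-after-success just avoids the cleanup path entirely.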
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/agentex/lib/core/tracing/processors/sgp_tracing_processor.py
Line: 141-163
Comment:
**`_spans` populated before upsert — stale entries on HTTP failure**
Spans are added to `self._spans` before the `upsert_batch` HTTP call (lines 155–156). If the batch upsert throws (network error, server 5xx), the exception is caught upstream by the queue's `_handle`, but `_spans` already holds entries for spans whose start event was never delivered to SGP. A subsequent `on_spans_end` will find those spans, update them, and send end-only upserts — orphaned end events with no matching start on the server.
The old single-span code registered the span in `_spans` only after a successful upsert, so failures were cleanly skipped on the end path. Consider populating `_spans` only after confirming the batch call succeeded, or rolling back entries on exception.
How can I resolve this? If you propose a fix, please make it concise.
```diff
 sgp_span.start_time = span.start_time.isoformat()  # type: ignore[union-attr]
 self._spans[span.id] = sgp_span
 sgp_spans.append(sgp_span)

 if self.disabled:
     logger.warning("SGP is disabled, skipping span upsert")
     return
 # TODO(AGX1-198): Batch multiple spans into a single upsert_batch call
 # instead of one span per HTTP request.
 # https://linear.app/scale-epd/issue/AGX1-198/actually-use-sgp-batching-for-spans
 await self.sgp_async_client.spans.upsert_batch(  # type: ignore[union-attr]
-    items=[sgp_span.to_request_params()]
+    items=[s.to_request_params() for s in sgp_spans]
 )
```
shutdown() crashes with AttributeError when disabled=True and spans are in-flight
on_spans_start now populates self._spans (line 155) before the if self.disabled: return guard (line 158). If any spans are started but not yet ended when shutdown() is called in disabled mode, it reaches self.sgp_async_client.spans.upsert_batch(...) where self.sgp_async_client is None, triggering an AttributeError. Before this PR the disabled path returned before populating _spans, so _spans was always empty at shutdown time and this was never triggered in practice. The fix is to either move the self._spans[span.id] = sgp_span assignment after the if self.disabled guard, or add an early if self.disabled: return check at the top of shutdown() (mirroring how on_spans_end handles it at line 184).
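The second suggested fix, an early guard at the top of `shutdown()`, can be sketched like this. The class and attributes are a simplified stand-in for `SGPAsyncTracingProcessor`, not its real implementation:

```python
# Sketch: mirror on_spans_end's disabled guard at the top of shutdown(),
# so disabled-mode runs never touch the missing client.
class SGPProcessorSketch:
    def __init__(self, disabled: bool):
        self.disabled = disabled
        self.sgp_async_client = None if disabled else object()
        self._spans = {"in-flight": object()}  # simulate a started, unended span

    def shutdown(self) -> None:
        if self.disabled:
            return  # sgp_async_client is None; nothing to flush
        # ... real code would flush remaining self._spans via
        # self.sgp_async_client.spans.upsert_batch(...)

SGPProcessorSketch(disabled=True).shutdown()  # no AttributeError
```

Moving the `self._spans[span.id] = sgp_span` assignment below the `if self.disabled` guard in `on_spans_start` would also work, and keeps `_spans` empty in disabled mode as before this PR.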
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/agentex/lib/core/tracing/processors/sgp_tracing_processor.py
Line: 154-163
Comment:
**`shutdown()` crashes with `AttributeError` when `disabled=True` and spans are in-flight**
`on_spans_start` now populates `self._spans` (line 155) **before** the `if self.disabled: return` guard (line 158). If any spans are started but not yet ended when `shutdown()` is called in disabled mode, it reaches `self.sgp_async_client.spans.upsert_batch(...)` where `self.sgp_async_client` is `None`, triggering an `AttributeError`. Before this PR the disabled path returned before populating `_spans`, so `_spans` was always empty at shutdown time and this was never triggered in practice. The fix is to either move the `self._spans[span.id] = sgp_span` assignment after the `if self.disabled` guard, or add an early `if self.disabled: return` check at the top of `shutdown()` (mirroring how `on_spans_end` handles it at line 184).
How can I resolve this? If you propose a fix, please make it concise.
Automated Release PR
0.11.0 (2026-05-05)
Full Changelog: v0.10.4...v0.11.0
Features
- `usage`, `response_id`, plumb `previous_response_id`, opt-in `prompt_cache_key` for stateful responses and prompt caching (#335) (ba5d64b)

Chores
This pull request is managed by Stainless's GitHub App.
The semver version number is based on included commit messages. Alternatively, you can manually set the version number in the title of this pull request.
For a better experience, it is recommended to use either rebase-merge or squash-merge when merging this pull request.
🔗 Stainless website
📚 Read the docs
🙋 Reach out for help or questions
Greptile Summary
This release (0.11.0) introduces true HTTP batching for the SGP tracing processor:
- `on_spans_start`/`on_spans_end` batch methods are added to `AsyncTracingProcessor`, the span queue now groups spans by processor and dispatches an entire drain cycle in one call, and `SGPAsyncTracingProcessor` overrides the batched methods to issue a single `upsert_batch` HTTP request per drain cycle instead of one request per span.
- `shutdown()` crash when disabled with in-flight spans: `on_spans_start` now populates `self._spans` before the `if self.disabled: return` guard, so spans accumulate when the processor is disabled. `shutdown()` unconditionally accesses `self.sgp_async_client.spans` (which is `None` when disabled), causing `AttributeError` for any run where a span was started but not yet ended at shutdown time, a regression from pre-PR behavior where `_spans` was always empty in disabled mode.

Confidence Score: 3/5
Not safe to merge as-is — the disabled-mode _spans population change causes shutdown() to crash with an AttributeError in any run that starts spans but is shut down before all spans complete.
One P1 defect (shutdown AttributeError regression in disabled mode) and one unresolved P1 from a previous review thread (_spans populated before upsert, orphaning end events on HTTP failure) both affect the core tracing path changed in this PR.
src/agentex/lib/core/tracing/processors/sgp_tracing_processor.py — specifically on_spans_start ordering and shutdown() guard
Important Files Changed
Sequence Diagram
```mermaid
sequenceDiagram
    participant Caller
    participant SpanQueue
    participant SGPAsyncTracingProcessor
    Caller->>SpanQueue: enqueue(START, span, [proc])
    SpanQueue->>SpanQueue: _drain_loop() batches items
    SpanQueue->>SGPAsyncTracingProcessor: on_spans_start([span1, span2, ...])
    SGPAsyncTracingProcessor->>SGPAsyncTracingProcessor: populate _spans[id] (before disabled check)
    alt disabled == False
        SGPAsyncTracingProcessor->>SGP API: upsert_batch([span1, span2, ...])
    else disabled == True
        SGPAsyncTracingProcessor-->>SGPAsyncTracingProcessor: early return (_spans still populated!)
    end
    Caller->>SpanQueue: enqueue(END, span, [proc])
    SpanQueue->>SGPAsyncTracingProcessor: on_spans_end([span1, span2, ...])
    SGPAsyncTracingProcessor->>SGPAsyncTracingProcessor: pop from _spans
    alt disabled == False and to_upsert not empty
        SGPAsyncTracingProcessor->>SGP API: upsert_batch([span1, span2, ...])
    end
    Caller->>SGPAsyncTracingProcessor: shutdown()
    alt disabled == True and _spans non-empty
        SGPAsyncTracingProcessor--xSGP API: AttributeError (sgp_async_client is None)
    else disabled == False
        SGPAsyncTracingProcessor->>SGP API: upsert_batch(remaining _spans)
    end
```

Prompt To Fix All With AI
Reviews (8): Last reviewed commit: "release: 0.11.0" | Re-trigger Greptile