
release: 0.11.0#343

Open
stainless-app[bot] wants to merge 58 commits into main from
release-please--branches--main--changes--next

Conversation

Contributor

@stainless-app stainless-app Bot commented May 4, 2026

Automated Release PR

0.11.0 (2026-05-05)

Full Changelog: v0.10.4...v0.11.0

Features

  • openai_agents: expose real usage, response_id, plumb previous_response_id, opt-in prompt_cache_key for stateful responses and prompt caching (#335) (ba5d64b)

Chores

  • internal: reformat pyproject.toml (76e0299)
  • internal: version bump (0d318ad)

This pull request is managed by Stainless's GitHub App.

The semver version number is based on included commit messages. Alternatively, you can manually set the version number in the title of this pull request.

For a better experience, it is recommended to use either rebase-merge or squash-merge when merging this pull request.

🔗 Stainless website
📚 Read the docs
🙋 Reach out for help or questions

Greptile Summary

This release (0.11.0) introduces true HTTP batching for the SGP tracing processor: on_spans_start/on_spans_end batch methods are added to AsyncTracingProcessor, the span queue now groups spans by processor and dispatches an entire drain cycle in one call, and SGPAsyncTracingProcessor overrides the batched methods to issue a single upsert_batch HTTP request per drain cycle instead of one request per span.
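The per-processor grouping described above can be sketched as follows. This is a hypothetical illustration, not the actual `span_queue.py` code: `Item` and the string processor keys are stand-ins for the queue's real item and processor types.

```python
# Sketch: group drained queue items by processor so the drain cycle can make
# one batched on_spans_start call per processor instead of one call per span.
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Item:
    processor: str  # stand-in for an AsyncTracingProcessor reference
    span: str       # stand-in for a span object


def group_by_processor(items: list[Item]) -> dict[str, list[str]]:
    batches: dict[str, list[str]] = defaultdict(list)
    for item in items:
        batches[item.processor].append(item.span)
    return dict(batches)
```

Each value list then becomes the argument to a single batched dispatch, which is what turns N HTTP requests into one `upsert_batch` call per drain cycle.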

  • **P1 — `shutdown()` crash when disabled with in-flight spans:** `on_spans_start` now populates `self._spans` before the `if self.disabled: return` guard, so spans accumulate when the processor is disabled. `shutdown()` unconditionally accesses `self.sgp_async_client.spans` (which is `None` when disabled), causing an `AttributeError` for any run where a span was started but not yet ended at shutdown time — a regression from pre-PR behavior where `_spans` was always empty in disabled mode.

Confidence Score: 3/5

Not safe to merge as-is — the disabled-mode _spans population change causes shutdown() to crash with an AttributeError in any run that starts spans but is shut down before all spans complete.

One P1 defect (shutdown AttributeError regression in disabled mode) and one unresolved P1 from a previous review thread (_spans populated before upsert, orphaning end events on HTTP failure) both affect the core tracing path changed in this PR.

Primary fix location: src/agentex/lib/core/tracing/processors/sgp_tracing_processor.py — specifically the `on_spans_start` ordering and the `shutdown()` guard.

Important Files Changed

| Filename | Overview |
| --- | --- |
| src/agentex/lib/core/tracing/processors/sgp_tracing_processor.py | Refactored to batched `on_spans_start`/`on_spans_end`; introduces a P1 bug where `_spans` is populated before the disabled guard, causing `AttributeError` in `shutdown()` when disabled with in-flight spans. |
| src/agentex/lib/core/tracing/processors/tracing_processor_interface.py | Adds default `on_spans_start`/`on_spans_end` implementations that fan out to per-span methods; the mutual delegation with SGP's `on_span_start` creates a footgun for future subclasses. |
| src/agentex/lib/core/tracing/span_queue.py | `_process_items` now groups spans by processor and dispatches the full batch in one `on_spans_start`/`on_spans_end` call per processor; the `assert` guard for mixed event types is a known footgun (previously flagged). |
| tests/lib/core/tracing/processors/test_sgp_tracing_processor.py | Adds two new batch tests verifying a single `upsert_batch` call for N spans; coverage is good but doesn't exercise the disabled + in-flight shutdown scenario. |
| tests/lib/core/tracing/test_span_queue.py | Adds batched dispatch tests and a precondition test for mixed event types; the mock helper is updated to fan out batched calls to per-span mocks for backward compatibility. |

Sequence Diagram

```mermaid
sequenceDiagram
    participant Caller
    participant SpanQueue
    participant SGPAsyncTracingProcessor
    participant SGPAPI as SGP API

    Caller->>SpanQueue: enqueue(START, span, [proc])
    SpanQueue->>SpanQueue: _drain_loop() batches items

    SpanQueue->>SGPAsyncTracingProcessor: on_spans_start([span1, span2, ...])
    SGPAsyncTracingProcessor->>SGPAsyncTracingProcessor: populate _spans[id] (before disabled check)
    alt disabled == False
        SGPAsyncTracingProcessor->>SGPAPI: upsert_batch([span1, span2, ...])
    else disabled == True
        SGPAsyncTracingProcessor-->>SGPAsyncTracingProcessor: early return (_spans still populated!)
    end

    Caller->>SpanQueue: enqueue(END, span, [proc])
    SpanQueue->>SGPAsyncTracingProcessor: on_spans_end([span1, span2, ...])
    SGPAsyncTracingProcessor->>SGPAsyncTracingProcessor: pop from _spans
    alt disabled == False and to_upsert not empty
        SGPAsyncTracingProcessor->>SGPAPI: upsert_batch([span1, span2, ...])
    end

    Caller->>SGPAsyncTracingProcessor: shutdown()
    alt disabled == True and _spans non-empty
        SGPAsyncTracingProcessor--xSGPAPI: AttributeError (sgp_async_client is None)
    else disabled == False
        SGPAsyncTracingProcessor->>SGPAPI: upsert_batch(remaining _spans)
    end
```


Prompt To Fix All With AI
Fix the following 2 code review issues. Work through them one at a time, proposing concise fixes.

---

### Issue 1 of 2
src/agentex/lib/core/tracing/processors/sgp_tracing_processor.py:154-163
**`shutdown()` crashes with `AttributeError` when `disabled=True` and spans are in-flight**

`on_spans_start` now populates `self._spans` (line 155) **before** the `if self.disabled: return` guard (line 158). If any spans are started but not yet ended when `shutdown()` is called in disabled mode, it reaches `self.sgp_async_client.spans.upsert_batch(...)` where `self.sgp_async_client` is `None`, triggering an `AttributeError`. Before this PR the disabled path returned before populating `_spans`, so `_spans` was always empty at shutdown time and this was never triggered in practice. The fix is to either move the `self._spans[span.id] = sgp_span` assignment after the `if self.disabled` guard, or add an early `if self.disabled: return` check at the top of `shutdown()` (mirroring how `on_spans_end` handles it at line 184).
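A minimal sketch of the suggested reordering, assuming a simplified processor: `SketchProcessor` and its attributes are stand-ins for the real `SGPAsyncTracingProcessor`, not the actual implementation. Checking `disabled` before touching `self._spans` means disabled mode never accumulates in-flight spans for `shutdown()` to trip over.

```python
class SketchProcessor:
    def __init__(self, disabled: bool):
        self.disabled = disabled
        self._spans: dict[str, str] = {}
        # Mirrors the review: the client is None when SGP is disabled.
        self.sgp_async_client = None if disabled else object()

    def on_spans_start(self, span_ids: list[str]) -> None:
        if self.disabled:
            # Guard first: nothing is registered in disabled mode, so
            # shutdown() can never see a non-empty _spans with a None client.
            return
        for span_id in span_ids:
            self._spans[span_id] = span_id

    def shutdown(self) -> None:
        # Belt-and-braces: the early return mirrors on_spans_end's guard.
        if self.disabled:
            return
        assert self.sgp_async_client is not None
        # ... flush remaining self._spans via upsert_batch ...
```

Either half of the fix alone (the guard-first ordering or the `shutdown()` early return) closes the crash; applying both keeps the invariants symmetric.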

### Issue 2 of 2
src/agentex/lib/core/tracing/processors/tracing_processor_interface.py:43-57
**Mutual-recursion footgun for future implementers**

The base `on_spans_start` fans out to `self.on_span_start(s)`, and `SGPAsyncTracingProcessor.on_span_start` delegates back to `self.on_spans_start([span])`. This is safe today because `SGPAsyncTracingProcessor` also overrides `on_spans_start`, breaking the cycle. However, any future processor that copies SGP's single-span delegation pattern — overriding `on_span_start` to call `self.on_spans_start([span])` — but forgets to also override `on_spans_start` will hit unbounded recursion at runtime. Consider adding a docstring warning that subclasses delegating `on_span_start` to `on_spans_start` must also override `on_spans_start` to avoid mutual recursion.
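The footgun can be demonstrated with a toy base class. `Base` and `BadSubclass` are illustrative stand-ins, not the real interface: a subclass that delegates `on_span_start` to `on_spans_start` without also overriding `on_spans_start` recurses without bound.

```python
class Base:
    def on_span_start(self, span):
        raise NotImplementedError

    def on_spans_start(self, spans):
        """Default batched hook: fans out to per-span on_span_start.

        Warning: subclasses that implement on_span_start by delegating to
        self.on_spans_start([span]) MUST also override on_spans_start,
        otherwise the two methods call each other forever.
        """
        for span in spans:
            self.on_span_start(span)


class BadSubclass(Base):
    # Copies SGP's single-span delegation pattern but forgets to override
    # on_spans_start -> mutual recursion at runtime.
    def on_span_start(self, span):
        self.on_spans_start([span])
```

A docstring warning like the one above is cheap insurance; a stricter option is to make exactly one of the two methods abstract so the cycle cannot be formed by accident.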

Reviews (8): Last reviewed commit: "release: 0.11.0"

Greptile also left 1 inline comment on this PR.

stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from 364e2b2 to b1d20d6 on May 4, 2026 19:56
stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from b1d20d6 to b837eeb on May 4, 2026 20:22
stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from b837eeb to ac067a6 on May 4, 2026 22:16
stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from ac067a6 to 16b956f on May 4, 2026 22:51
stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from 16b956f to 2ea4386 on May 4, 2026 23:22
stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from 2ea4386 to 3b9a668 on May 5, 2026 00:22
Comment on lines +107 to +111
```python
event_type = items[0].event_type
assert all(i.event_type == event_type for i in items), (
    "_process_items requires all items to share the same event_type; "
    "callers must split START and END batches before dispatching."
)
```

**P1: `assert` in production guard defeats data-corruption protection**

The code comment correctly identifies this as a potential "silent data-corruption bug," but using `assert` for the guard means it is silently stripped when Python runs with the `-O` (optimize) flag. If a caller ever passes a mixed-event-type list, START and END spans would be fed to the wrong batched method with no warning. Use an explicit `if/raise` instead.
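The suggested `if/raise` guard could look like the sketch below. `Item` and the helper name are stand-ins for the queue's real types, not the actual `span_queue.py` code; the key point is that `raise` survives `python -O`, while `assert` does not.

```python
from dataclasses import dataclass


@dataclass
class Item:
    event_type: str  # e.g. "START" or "END"


def check_uniform_event_type(items: list[Item]) -> str:
    """Return the batch's event type, raising if the batch is mixed."""
    event_type = items[0].event_type
    if any(i.event_type != event_type for i in items):
        # Explicit raise: unlike assert, this is never stripped by -O.
        raise ValueError(
            "_process_items requires all items to share the same event_type; "
            "callers must split START and END batches before dispatching."
        )
    return event_type
```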


Comment on lines +141 to 163
```python
sgp_spans: list[SGPSpan] = []
for span in spans:
    self._add_source_to_span(span)
    sgp_span = create_span(
        name=span.name,
        span_type=_get_span_type(span),
        span_id=span.id,
        parent_id=span.parent_id,
        trace_id=span.trace_id,
        input=span.input,
        output=span.output,
        metadata=span.data,
    )
    sgp_span.start_time = span.start_time.isoformat()  # type: ignore[union-attr]
    self._spans[span.id] = sgp_span
    sgp_spans.append(sgp_span)

if self.disabled:
    logger.warning("SGP is disabled, skipping span upsert")
    return
await self.sgp_async_client.spans.upsert_batch(  # type: ignore[union-attr]
    items=[s.to_request_params() for s in sgp_spans]
)
```

**P1: `_spans` populated before upsert — stale entries on HTTP failure**

Spans are added to `self._spans` before the `upsert_batch` HTTP call (lines 155–156). If the batch upsert throws (network error, server 5xx), the exception is caught upstream by the queue's `_handle`, but `_spans` already holds entries for spans whose start event was never delivered to SGP. A subsequent `on_spans_end` will find those spans, update them, and send end-only upserts — orphaned end events with no matching start on the server.

The old single-span code registered the span in `_spans` only after a successful upsert, so failures were cleanly skipped on the end path. Consider populating `_spans` only after confirming the batch call succeeded, or rolling back entries on exception.
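The populate-after-success ordering could be sketched as below. `SafeProcessor` and its client are hypothetical stand-ins mirroring the review's description, not the real SGP client API: the batch is sent first, and `_spans` is only written once `upsert_batch` has returned without raising.

```python
import asyncio


class SafeProcessor:
    def __init__(self, client):
        self.client = client
        self._spans: dict[str, str] = {}

    async def on_spans_start(self, span_ids: list[str]) -> None:
        # Send the whole batch first; do not touch self._spans yet.
        await self.client.upsert_batch(items=list(span_ids))
        # Register spans only after the batch call succeeded, so a failed
        # HTTP call leaves no stale entries for on_spans_end to find.
        for span_id in span_ids:
            self._spans[span_id] = span_id
```

The trade-off versus rollback-on-exception is all-or-nothing registration: if the server partially applied the batch before failing, neither approach knows which spans landed, so this sketch simply treats the whole batch as undelivered.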


stainless-app[bot] force-pushed the release-please--branches--main--changes--next branch from 3b9a668 to b702eb9 on May 5, 2026 01:22
Comment on lines +154 to 163
```python
    sgp_span.start_time = span.start_time.isoformat()  # type: ignore[union-attr]
    self._spans[span.id] = sgp_span
    sgp_spans.append(sgp_span)

if self.disabled:
    logger.warning("SGP is disabled, skipping span upsert")
    return
await self.sgp_async_client.spans.upsert_batch(  # type: ignore[union-attr]
    items=[s.to_request_params() for s in sgp_spans]
)
```

**P1: `shutdown()` crashes with `AttributeError` when `disabled=True` and spans are in-flight**

`on_spans_start` now populates `self._spans` (line 155) **before** the `if self.disabled: return` guard (line 158). If any spans are started but not yet ended when `shutdown()` is called in disabled mode, it reaches `self.sgp_async_client.spans.upsert_batch(...)` where `self.sgp_async_client` is `None`, triggering an `AttributeError`. Before this PR the disabled path returned before populating `_spans`, so `_spans` was always empty at shutdown time and this was never triggered in practice. The fix is to either move the `self._spans[span.id] = sgp_span` assignment after the `if self.disabled` guard, or add an early `if self.disabled: return` check at the top of `shutdown()` (mirroring how `on_spans_end` handles it at line 184).


