
DrawingCanvas API: Replace imperative extension methods with stateful canvas-based drawing model#377

Open
JimBobSquarePants wants to merge 303 commits into main from js/canvas-api

Conversation

@JimBobSquarePants
Member

JimBobSquarePants commented Mar 1, 2026

Prerequisites

  • I have written a descriptive pull-request title
  • I have verified that there are no overlapping pull-requests open
  • I have verified that my changes match the existing coding patterns and practices as demonstrated in the repository. These follow strict Stylecop rules 👮.
  • I have provided test coverage for my change (where applicable)

Breaking Changes: DrawingCanvas API

Fix #106
Fix #244
Fix #344
Fix #367

This is a major breaking change. The library's public drawing API has been redesigned around a canvas-based model, replacing the previous collection of imperative drawing extension methods.

What changed

The old API surface — dozens of IImageProcessingContext extension methods like DrawLine(), DrawPolygon(), FillPolygon(), DrawBeziers(), DrawImage(), DrawText(), etc. — has been removed. These methods were individually simple, but had several architectural limitations:

  • Each call was an independent image processor that rasterized and composited in isolation, making batching impossible.
  • State such as blending, clipping, and transforms had to be supplied repeatedly.
  • Alternate rendering backends had no clean way to intercept or accelerate a sequence of draw calls.
  • Reusing complex geometry across frames was not possible through the public drawing API.

The new model: DrawingCanvas

All drawing now goes through DrawingCanvas, a stateful canvas that records drawing commands into an ordered timeline.

DrawingCanvas<TPixel> remains the typed implementation used internally where the pixel format is required for brush normalization, readback, and backend execution. Public factory methods return DrawingCanvas, so CPU and WebGPU entry points expose the same canvas-facing API.

Via Image.Mutate() (most common)

using SixLabors.ImageSharp.Drawing;
using SixLabors.ImageSharp.Drawing.Processing;

image.Mutate(ctx => ctx.Paint(canvas =>
{
    canvas.Fill(Brushes.Solid(Color.Red), new EllipsePolygon(200, 200, 100));
    canvas.Draw(Pens.Solid(Color.Blue, 3), new RectangularPolygon(50, 50, 200, 100));
    canvas.DrawLine(Pens.Solid(Color.Green, 2), new PointF(0, 0), new PointF(100, 100));

    canvas.DrawText(
        new RichTextOptions(font) { Origin = new PointF(10, 10) },
        "Hello, World!",
        brush: Brushes.Solid(Color.Black),
        pen: null);

    canvas.DrawImage(sourceImage, sourceRect, destinationRect);

    canvas.Save(new DrawingOptions
    {
        GraphicsOptions = new GraphicsOptions { BlendPercentage = 0.5f }
    });

    canvas.Fill(brush, path);
    canvas.Restore();

    canvas.Apply(path, inner => inner.Brightness(0.5f));

    // Commands are rendered when the Paint canvas is disposed.
}));

Canvas state management

The canvas supports a save/restore stack, similar to HTML Canvas or SkCanvas:

int saveCount = canvas.Save();              // Push current state.
canvas.Save(options, clipPath1, clipPath2); // Push and replace state.

canvas.Restore();            // Pop one level.
canvas.RestoreTo(saveCount); // Pop to a specific level.

State includes DrawingOptions (graphics options, shape options, transform) and clip paths. SaveLayer(...) creates an isolated layer entry in the canvas timeline. The layer is closed by Restore() or RestoreTo(...) and is composited when the canvas timeline is rendered.
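For instance, a layer could be used to draw a group of shapes at reduced opacity and composite them as a single unit. This is a sketch only: the exact `SaveLayer(...)` overload and the `glowPath` geometry are assumptions, not taken from the PR.

```csharp
// Sketch only: the SaveLayer overload shown here is assumed; the PR
// only states that SaveLayer(...) creates an isolated layer entry.
canvas.SaveLayer(new DrawingOptions
{
    GraphicsOptions = new GraphicsOptions { BlendPercentage = 0.25f }
});

// Drawn into the isolated layer, not directly into the target.
canvas.Fill(Brushes.Solid(Color.White), glowPath);

// Closes the layer; it is composited as one unit when the
// canvas timeline is rendered.
canvas.Restore();
```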

Apply(...) is also represented in the same timeline. It acts as a barrier: drawing before the barrier is rendered first, the requested image operation is applied to the target region, and drawing after the barrier continues in order.
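As a sketch of that ordering, using the `Apply` shape from the example above (`backdrop`, `region`, `outline`, and `pen` are hypothetical names):

```csharp
canvas.Fill(Brushes.Solid(Color.Red), backdrop);       // rendered before the barrier
canvas.Apply(region, inner => inner.Brightness(0.5f)); // operates on the pixels drawn so far
canvas.Draw(pen, outline);                             // rendered after the barrier, in order
```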

Retained scenes

The canvas can create reusable retained scenes:

DrawingBackendScene background;

using (DrawingCanvas canvas = image.Frames.RootFrame.CreateCanvas(image.Configuration, new DrawingOptions()))
{
    DrawBackground(canvas);
    background = canvas.CreateScene();
}

image.Mutate(ctx => ctx.Paint(canvas =>
{
    canvas.RenderScene(background);
    DrawMovingObjects(canvas);
}));

CreateScene() converts the currently queued drawing commands into a backend scene. It does not render to the target.

RenderScene(scene) records an existing retained scene into the current canvas timeline. It does not render immediately. Any pending commands are sealed first, so normal drawing, retained scene replay, Flush(), layers, and Apply(...) barriers all preserve submission order.

This enables scenarios such as rendering a static background scene once, then replaying it repeatedly while drawing changing foreground content over it.

Flush()

Flush() seals the currently queued drawing commands into the canvas timeline. It does not write to the target by itself.

The root canvas renders the timeline when disposed. Paint(...) owns that disposal for the common Image.Mutate(...) path.

IDrawingBackend — bring your own renderer

Rasterization and composition are abstracted behind IDrawingBackend.

The canvas owns command ordering. Backends do not receive individual drawing calls; they receive prepared command batches and turn them into retained backend scenes.

| Method | Purpose |
| --- | --- |
| `CreateScene` | Converts one prepared `DrawingCommandBatch` into a retained backend scene. This does not render to the target. |
| `RenderScene<TPixel>` | Renders a retained backend scene into the target frame. |
| `ReadRegion<TPixel>` | Reads pixels back from the target for operations that need current destination pixels, such as `Apply(...)`. |

The library ships with two backend implementations: DefaultDrawingBackend, the CPU backend built around a tiled fixed-point rasterizer, and WebGPUDrawingBackend, the WebGPU backend for native GPU surfaces. Both implement the same retained-scene contract, so callers use the same canvas API whether rendering through CPU memory or WebGPU targets.

Backends are registered on Configuration:

configuration.SetDrawingBackend(myCustomBackend);

The public DrawingCanvas API stays backend-neutral. Backend-specific retained data is hidden behind DrawingBackendScene.
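A custom backend would follow the same retained-scene contract. The skeleton below is illustrative only — the exact `IDrawingBackend` member signatures are not shown in this description, so the parameter lists are assumptions, not the real contract:

```csharp
// Illustrative skeleton; signatures are assumed, not taken from the PR.
public sealed class MyCustomBackend : IDrawingBackend
{
    public DrawingBackendScene CreateScene(DrawingCommandBatch batch)
    {
        // Convert the prepared batch into retained, backend-specific data.
        // No rendering to the target happens here.
        throw new NotImplementedException();
    }

    public void RenderScene<TPixel>(DrawingBackendScene scene, ImageFrame<TPixel> target)
        where TPixel : unmanaged, IPixel<TPixel>
    {
        // Rasterize and composite the retained scene into the target frame.
        throw new NotImplementedException();
    }

    public void ReadRegion<TPixel>(ImageFrame<TPixel> target, Rectangle region, Span<TPixel> destination)
        where TPixel : unmanaged, IPixel<TPixel>
    {
        // Read destination pixels back for operations such as Apply(...).
        throw new NotImplementedException();
    }
}

// Registration as shown above:
configuration.SetDrawingBackend(new MyCustomBackend());
```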

Migration guide

| Old API | New API |
| --- | --- |
| `ctx.Fill(color, path)` | `ctx.Paint(c => c.Fill(Brushes.Solid(color), path))` |
| `ctx.Fill(brush, path)` | `ctx.Paint(c => c.Fill(brush, path))` |
| `ctx.Draw(pen, path)` | `ctx.Paint(c => c.Draw(pen, path))` |
| `ctx.DrawLine(pen, points)` | `ctx.Paint(c => c.DrawLine(pen, points))` |
| `ctx.DrawPolygon(pen, points)` | `ctx.Paint(c => c.Draw(pen, new Polygon(new LinearLineSegment(points))))` |
| `ctx.FillPolygon(brush, points)` | `ctx.Paint(c => c.Fill(brush, new Polygon(new LinearLineSegment(points))))` |
| `ctx.DrawText(text, font, color, origin)` | `ctx.Paint(c => c.DrawText(new RichTextOptions(font) { Origin = origin }, text, Brushes.Solid(color), null))` |
| `ctx.DrawImage(overlay, opacity)` | `ctx.Paint(c => c.DrawImage(overlay, sourceRect, destRect))` |
| Multiple independent draw calls | Single `Paint(...)` block; commands are ordered through one canvas timeline |

Other breaking changes in this PR

  • AntialiasSubpixelDepth removed — The rasterizer now uses fixed 24.8 coordinate precision. The old property controlled vertical subpixel sampling depth, but the new fixed-point scanline rasterizer integrates area/cover analytically per cell rather than sampling discrete subpixel rows.
  • GraphicsOptions.Antialias — now controls RasterizationMode (antialiased vs aliased). When false, coverage is snapped to binary using AntialiasThreshold.
  • GraphicsOptions.AntialiasThreshold — new property (0–1, default 0.5) controlling the coverage cutoff in aliased mode.
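Put together, opting into aliased rendering would look roughly like this (a sketch based on the properties listed above; `path` is a hypothetical geometry):

```csharp
image.Mutate(ctx => ctx.Paint(canvas =>
{
    canvas.Save(new DrawingOptions
    {
        GraphicsOptions = new GraphicsOptions
        {
            Antialias = false,        // aliased RasterizationMode
            AntialiasThreshold = 0.5f // coverage at or above 0.5 snaps to fully on
        }
    });

    canvas.Fill(Brushes.Solid(Color.Black), path); // hard, binary edges
    canvas.Restore();
}));
```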

Benchmarks

All benchmarks run under the following environment.

BenchmarkDotNet=v0.13.1, OS=Windows 10.0.26200
Unknown processor
.NET SDK=10.0.103
  [Host] : .NET 8.0.24 (8.0.2426.7010), X64 RyuJIT

Toolchain=InProcessEmitToolchain  InvocationCount=1  IterationCount=40
LaunchCount=3  UnrollFactor=1  WarmupCount=40

DrawPolygonAll - Renders a 7200x4800px path of the state of Mississippi with a 2px stroke.

| Method | Mean | Error | StdDev | Median | Ratio | RatioSD |
| --- | --- | --- | --- | --- | --- | --- |
| SkiaSharp | 42.20 ms | 2.197 ms | 6.976 ms | 38.18 ms | 1.00 | 0.00 |
| SystemDrawing | 44.10 ms | 0.172 ms | 0.538 ms | 44.05 ms | 1.07 | 0.16 |
| ImageSharp | 12.09 ms | 0.083 ms | 0.269 ms | 12.06 ms | 0.29 | 0.05 |
| ImageSharpWebGPU | 12.47 ms | 0.291 ms | 0.940 ms | 12.71 ms | 0.30 | 0.05 |

FillParis - Renders a 1096x1060px scene containing 50K fill paths.

| Method | Mean | Error | StdDev | Median | Ratio | RatioSD |
| --- | --- | --- | --- | --- | --- | --- |
| SkiaSharp | 103.165 ms | 0.4633 ms | 1.4574 ms | 103.323 ms | 1.00 | 0.00 |
| SystemDrawing | 143.026 ms | 0.6269 ms | 1.9901 ms | 142.638 ms | 1.39 | 0.02 |
| ImageSharp | 48.544 ms | 0.7920 ms | 2.3353 ms | 48.342 ms | 0.47 | 0.02 |
| ImageSharpRetainedScene | 17.879 ms | 0.4229 ms | 1.3728 ms | 18.168 ms | 0.17 | 0.01 |
| ImageSharpWebGPU | 24.841 ms | 0.4839 ms | 1.5006 ms | 24.857 ms | 0.24 | 0.02 |
| ImageSharpWebGPURetainedScene | 5.476 ms | 0.3023 ms | 0.9813 ms | 4.933 ms | 0.05 | 0.01 |

@antonfirsov
Member

antonfirsov commented Mar 3, 2026

I want to take a look at this, but it looks massive so the earliest time I can really get to it is around next weekend.

Before jumping into the code there is an important general question, however: how would you describe typical use-cases for this feature?

There are two that come to mind; if you are envisioning the same ones, how do you weigh their importance?
(1) Is it more about GPU-accelerated offscreen rendering for services?
(2) Or is this new API more for desktop (and maybe mobile and WASM) apps to be used for drawing?

@JimBobSquarePants
Member Author

> I want to take a look at this, but it looks massive so the earliest time I can really get to it is around next weekend.
>
> Before jumping into the code there is an important general question, however: how would you describe typical use-cases for this feature?
>
> There are two that come to mind; if you are envisioning the same ones, how do you weigh their importance? (1) Is it more about GPU-accelerated offscreen rendering for services? (2) Or is this new API more for desktop (and maybe mobile and WASM) apps to be used for drawing?

Thanks, mate.

I see this as a complete re-envisioning of the library and how it should work, with both of those targets in mind. There are MAJOR breaking changes*.

The new API can support both scenarios but the WebGPU IDrawingBackend implementation is purely experimental and currently only exists to validate my design (as you recommended) - it works but would take someone who really knows what they are doing in that space to make it perform as well as I'm sure it could. (That said it's not slow!!)

So, in short: Target 2, then 1.

I'm changing the entire shape of the library to center on the new DrawingCanvas<TPixel> type, which can operate separately from Image<TPixel> or together with it via a new extension method, ProcessWithCanvas.

The canvas design is based on the best features of both System.Drawing's Graphics and Skia's SkCanvas and provides all the methods and state management that users should require. It's IMO a joy to use and far less confusing than the current API. It's also really fast!

*I'm confident that these changes are both necessary and beneficial. This moves the library squarely into the expectations bracket for users.

@codecov

codecov Bot commented Mar 4, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 86%. Comparing base (6d2010b) to head (eea8ce1).

Additional details and impacted files
@@          Coverage Diff          @@
##           main   #377     +/-   ##
=====================================
+ Coverage    84%    86%     +1%     
=====================================
  Files       101    107      +6     
  Lines      4529   8285   +3756     
  Branches    654   1066    +412     
=====================================
+ Hits       3849   7140   +3291     
- Misses      540    899    +359     
- Partials    140    246    +106     
| Flag | Coverage Δ |
| --- | --- |
| unittests | 86% <ø> (+1%) ⬆️ |


@antonfirsov
Copy link
Copy Markdown
Member

@JimBobSquarePants assuming we are still working in shifts I may push trivial fixes here and there (like my previous commit). Let me know if it breaks your workflow!

Member

antonfirsov left a comment


Still looking at the backend APIs and top-level behaviors.

I would still consider plumbing DeviceSetUncapturedErrorCallback through to some visible API surface. If users encounter issues with WebGPU (which has a very realistic chance of happening), that would enable diagnostics both for us and for them.

nameof(destination));
}

NativeCanvasFrame<TPixel> frame = WebGPUCanvasFactory.CreateFrame<TPixel>(this.Bounds, this.Surface);
Member


What will happen if TPixel is not compatible with this.Format? Do we have a test for that case?

Member Author


If TPixel is not compatible with the target texture format, readback fails with NotSupportedException before any GPU copy/readback work is submitted.

WebGPURenderTarget.ReadbackInto<TPixel>(...) delegates to WebGPUDrawingBackend.ReadRegion<TPixel>(...), and that path maps TPixel to the expected WebGPUTextureFormat. It then compares that expected format with the target's native TargetFormat and throws if they differ.

We have coverage in WebGPUDeviceContextTests.CreateCanvas_RejectsInvalidHandles_AndReadbackRejectsMismatchedFormat: it creates the default Rgba8Unorm render target and attempts to read it into Image<Bgra32>, asserting NotSupportedException.

@JimBobSquarePants
Member Author

JimBobSquarePants commented Apr 30, 2026

> Still looking at the backend APIs and top-level behaviors.
>
> I would still consider plumbing DeviceSetUncapturedErrorCallback through to some visible API surface. If users encounter issues with WebGPU (which has a very realistic chance of happening), that would enable diagnostics both for us and for them.

@antonfirsov I've already wired this up.

The public API surface is WebGPUEnvironment.UncapturedError (line 19).

The native hook is wired here:

  • WebGPURuntime.DeviceSharedState.cs (line 74) calls DeviceSetUncapturedErrorCallback.
  • WebGPURuntime.DeviceSharedState.cs (line 133) maps the Silk ErrorType to the public WebGPUErrorType.
  • WebGPUDeviceContext.cs (line 110) forces device shared-state creation so the callback is installed early for render targets and surfaces.

@antonfirsov
Member

antonfirsov commented Apr 30, 2026

> I've already wired this up.

@JimBobSquarePants sorry, I somehow missed it!

Regarding the failures I see locally, I'm afraid they are showing a real bug. For example:

FillPath_WithGraphicsOptionsModes_ImageBrush_MatchesDefaultOutput_FillPath_GraphicsOptions_ImageBrush_Overlay_SrcIn_WebGPU_NativeSurface

Reference output

image

Actual output

image

It's pretty random whether it appears or not, but on my system it's pretty frequent. Any ideas on whether / how I could help debug this? I have a GeForce RTX 3070 Laptop GPU.

PS: I don't really understand why there is a strip on the left in the correct output.

PS2: I have a theory that when these tests succeed, it's because RequestAdapter returns my integrated AMD Radeon GPU instead of the RTX. If this is true, the failure is consistent on the GeForce.

using PfnRequestAdapterCallback callbackPtr = PfnRequestAdapterCallback.From(Callback);
RequestAdapterOptions options = new()
{
PowerPreference = PowerPreference.HighPerformance
Member

antonfirsov commented May 1, 2026


If we wanted to make this (or any other wgpu setup details) user-configurable in the future, how would we do it?

Member Author


I'd treat adapter/device setup as environment-level configuration, not per-window or per-surface configuration.

For example, we could add a WebGPUEnvironmentOptions type and a WebGPUEnvironment.Configure(...) entry point, using ImageSharp.Drawing-owned enums/options rather than exposing Silk types. WebGPURuntime would read those options when it first creates the library-managed instance/adapter/device.

The important constraint is that these settings have to be supplied before the shared WebGPU device is created. Once ProbeAvailability(), WebGPURenderTarget, WebGPUWindow, etc. have caused device creation, changing adapter/device options would require resetting or recreating the environment.

Surface-specific settings, like format and present mode, should stay on WebGPUWindowOptions / WebGPUExternalSurfaceOptions; device-selection and wgpu setup details belong on the environment.
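As a concrete sketch of that shape — none of these types exist in the PR; `WebGPUEnvironmentOptions`, `WebGPUPowerPreference`, and `Configure(...)` here only illustrate the proposal above:

```csharp
// Hypothetical API sketch; these types do not exist in this PR.
public sealed class WebGPUEnvironmentOptions
{
    public WebGPUPowerPreference PowerPreference { get; init; } = WebGPUPowerPreference.HighPerformance;
}

// Must run before anything (ProbeAvailability(), WebGPURenderTarget,
// WebGPUWindow, ...) triggers creation of the shared device; changing
// it afterwards would require resetting the environment.
WebGPUEnvironment.Configure(new WebGPUEnvironmentOptions
{
    PowerPreference = WebGPUPowerPreference.LowPower
});
```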

Member Author


Added options for power. We read everything else from the device.

@JimBobSquarePants
Member Author

> > I've already wired this up.
>
> @JimBobSquarePants sorry, I somehow missed it!
>
> Regarding the failures I see locally, I'm afraid they are showing a real bug. For example:
>
> FillPath_WithGraphicsOptionsModes_ImageBrush_MatchesDefaultOutput_FillPath_GraphicsOptions_ImageBrush_Overlay_SrcIn_WebGPU_NativeSurface
>
> Reference output
>
> image
>
> Actual output
>
> image
>
> It's pretty random whether it appears or not, but on my system it's pretty frequent. Any ideas on whether / how I could help debug this? I have a GeForce RTX 3070 Laptop GPU.
>
> PS: I don't really understand why there is a strip on the left in the correct output.
>
> PS2: I have a theory that when these tests succeed, it's because RequestAdapter returns my integrated AMD Radeon GPU instead of the RTX. If this is true, the failure is consistent on the GeForce.

Thanks, this is really helpful.

I still can’t reproduce it locally, but the screenshot makes it look like the ImageBrush repeat path is the culprit. The strip on the left in the reference output is expected: it comes from the brush source region + offset repeating before the brush origin.

The actual output looks like negative repeat coordinates are not wrapping correctly on your RTX path. The WebGPU shader still had some Vello-style generic image sampling code in this area, including repeat handling intended for broader filtered image sampling. For our ImageBrush, that was the wrong shape: the CPU implementation samples by integer pixel index and wraps with positive modulo semantics.

I’ve updated the WGSL path so ImageBrush repeat now mirrors the CPU calculation explicitly:

((value % length) + length) % length

I also removed the unused image-quality/bilinear branch from the shader. There is nothing “low quality” about nearest sampling here; it is the current ImageBrush contract.
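The distinction matters because C#'s `%` operator keeps the sign of the dividend, so a plain `value % length` yields a negative result for negative coordinates, which is not a valid pixel index. A minimal self-contained illustration of the wrap above:

```csharp
using System;

// Positive-modulo wrap matching the CPU ImageBrush sampling semantics.
static int Wrap(int value, int length) => ((value % length) + length) % length;

Console.WriteLine(-3 % 8);      // -3: C# keeps the dividend's sign
Console.WriteLine(Wrap(-3, 8)); //  5: wraps into the brush tile correctly
Console.WriteLine(Wrap(11, 8)); //  3: positive values wrap as before
```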

Could you please retry the failing WebGPU tests on the GeForce adapter when you get a chance?

@antonfirsov
Member

antonfirsov commented May 1, 2026

Looks like the fix worked! Now it's down to 9 failures where the pixel differences don't seem to be visually significant; it's probably a floating-point difference between hardware. The tolerance values need to be tuned in these tests, though.

DrawText_WithWebGPUCoverageBackend_RendersAndReleasesPreparedCoverage_DrawText_WebGPU_NativeSurface.png

image

Reference

DrawText_WithWebGPUCoverageBackend_RendersAndReleasesPreparedCoverage_DrawText_WebGPU_NativeSurface

Actual

DrawText_WithWebGPUCoverageBackend_RendersAndReleasesPreparedCoverage_DrawText_WebGPU_NativeSurface

Comment on lines +949 to +951
DebugSaveBackendPair(provider, $"DrawPath_PointStroke_LineCap_{lineCap}", defaultImage, nativeSurfaceImage);
AssertBackendPairSimilarity(defaultImage, nativeSurfaceImage, 0.03F);
AssertBackendPairReferenceOutputs(provider, $"DrawPath_PointStroke_LineCap_{lineCap}", defaultImage, nativeSurfaceImage);
Member


Nit: some of the tests are adding these redundant strings, which makes the output file names longer and less convenient to work with.

@antonfirsov
Member

antonfirsov commented May 1, 2026

@JimBobSquarePants I think we are getting very close in the sense that major user-facing issues and maintainability concerns are probably gone. This weekend I will be away from computers, but I'm planning to give this a final run next week.

Amongst other things I'll be looking for dead code and unnecessary or weird public APIs. I'd recommend you do a thorough file-by-file self-review hunting for these kinds of leftovers to get to the finish line faster!

PS: I will also tune the tolerances myself where needed.

@JimBobSquarePants
Member Author

> @JimBobSquarePants I think we are getting very close in the sense that major user-facing issues and maintainability concerns are probably gone. This weekend I will be away from computers, but I'm planning to give this a final run next week.
>
> Amongst other things I'll be looking for dead code and unnecessary or weird public APIs. I'd recommend you do a thorough file-by-file self-review hunting for these kinds of leftovers to get to the finish line faster!
>
> PS: I will also tune the tolerances myself where needed.

That's great to hear!

I've updated the tolerances, cleaned up diagnostic properties from the public API, and removed some dead code left over from trying to support more pixel formats early on (before I learned about the restrictions). I've also trimmed the test output names.

Comment on lines +107 to +162
```csharp
private void FillIndexedTriangles(DrawingMesh mesh)
{
    for (int i = 0; i < mesh.Indices.Length; i += 3)
    {
        MeshVertex v0 = mesh.Vertices[mesh.Indices[i]];
        MeshVertex v1 = mesh.Vertices[mesh.Indices[i + 1]];
        MeshVertex v2 = mesh.Vertices[mesh.Indices[i + 2]];

        this.FillTriangle(v0, v1, v2);
    }
}
```

`TriangleStrip` consumes one new vertex per triangle and flips the first two vertices on alternating triangles so the winding remains consistent:

```csharp
private void FillTriangleStrip(DrawingMesh mesh)
{
    for (int i = 0; i <= mesh.Indices.Length - 3; i++)
    {
        int i0 = mesh.Indices[i];
        int i1 = mesh.Indices[i + 1];
        int i2 = mesh.Indices[i + 2];

        if ((i & 1) != 0)
        {
            (i0, i1) = (i1, i0);
        }

        this.FillTriangle(mesh.Vertices[i0], mesh.Vertices[i1], mesh.Vertices[i2]);
    }
}
```

Both modes should share one helper:

```csharp
private void FillTriangle(MeshVertex v0, MeshVertex v1, MeshVertex v2)
{
    PointF[] points =
    [
        v0.Position,
        v1.Position,
        v2.Position
    ];

    Color[] colors =
    [
        v0.Color,
        v1.Color,
        v2.Color
    ];

    this.Fill(new PathGradientBrush(points, colors), new Polygon(points));
}
```
Member

antonfirsov commented May 2, 2026


Is this a spec for shader code generation? (Otherwise it would be pretty expensive.)

Rectangle textRegion = Rectangle.Intersect(
new Rectangle(0, 0, defaultImage.Width, defaultImage.Height),
new Rectangle(8, 12, defaultImage.Width - 16, Math.Min(220, defaultImage.Height - 12)));
AssertBackendPairSimilarityInRegion(defaultImage, nativeSurfaceImage, textRegion, 0.0157F);
Member


Suggested change
AssertBackendPairSimilarityInRegion(defaultImage, nativeSurfaceImage, textRegion, 0.0157F);
AssertBackendPairSimilarityInRegion(defaultImage, nativeSurfaceImage, textRegion, 0.02F);

Last remaining failure with

Image difference is over threshold!
      Report ImageFrame 0:
      Total difference: 0.0167%

