<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Sentry Blog RSS]]></title><description><![CDATA[Product, Engineering, and Marketing updates from the developers of Sentry.]]></description><link>https://blog.sentry.io</link><generator>GatsbyJS</generator><lastBuildDate>Fri, 06 Mar 2026 22:06:42 GMT</lastBuildDate><item><title><![CDATA[Routing OpenTelemetry logs to Sentry using OTLP]]></title><description><![CDATA[If you've already instrumented your app with OpenTelemetry, you don't have to rip it out to use Sentry. Two environment variables and your logs start flowing in...]]></description><link>https://blog.sentry.io/structured-logging-opentelemetry/</link><guid isPermaLink="false">https://blog.sentry.io/structured-logging-opentelemetry/</guid><pubDate>Thu, 05 Mar 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;If you&amp;#39;ve already instrumented your app with OpenTelemetry, you don&amp;#39;t have to rip it out to use Sentry. Two environment variables and your logs start flowing into Sentry, no SDK changes, no re-instrumentation. Here&amp;#39;s how to set it up in a sample app, and when the native Sentry SDK might be the better call.&lt;/p&gt;&lt;h2&gt;Why you&amp;#39;d use OTLP instead of the native SDK&lt;/h2&gt;&lt;p&gt;The main advantage of OTLP is that your logging code stays decoupled from any specific observability backend. You can switch where logs go by changing a few config lines. 
That&amp;#39;s useful if you:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Already have OpenTelemetry logging in place&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Want to send logs to multiple backends&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Need vendor-neutral instrumentation&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Work with AI or LLM frameworks that use OpenTelemetry by default&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Want to use the broader OpenTelemetry ecosystem&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;If you&amp;#39;re starting from scratch and only need Sentry, the &lt;a href=&quot;https://docs.sentry.io/platforms/node/logs/&quot;&gt;native Sentry SDK&lt;/a&gt; is probably the better call. With the native SDK, you get issue creation from &lt;a href=&quot;https://sentry.io/product/logs/&quot;&gt;logs&lt;/a&gt;, &lt;a href=&quot;https://sentry.io/product/session-replay/&quot;&gt;session replay&lt;/a&gt; integration, automatic breadcrumbs, and built-in error correlation. &lt;a href=&quot;https://docs.sentry.io/concepts/otlp/&quot;&gt;Ingesting OpenTelemetry traces and logs&lt;/a&gt; with Sentry via OTLP endpoints is still in beta and currently lacks these integrated features.&lt;/p&gt;&lt;h2&gt;Guide prerequisites&lt;/h2&gt;&lt;p&gt;Before we start, you need:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;A &lt;a href=&quot;https://sentry.io/signup/&quot;&gt;Sentry account&lt;/a&gt; (the free tier works)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Node.js 18 or later installed&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Basic familiarity with Express.js&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;If you don&amp;#39;t have a Sentry project yet, create one now. Select &lt;b&gt;Express&lt;/b&gt; as the platform. You can skip the DSN setup instructions because you&amp;#39;ll use the OTLP endpoint instead.&lt;/p&gt;&lt;h2&gt;Get your Sentry OTLP credentials&lt;/h2&gt;&lt;p&gt;Sentry exposes separate OTLP endpoints for logs and traces. 
In this guide, we&amp;#39;re focusing on the &lt;b&gt;Logs endpoint&lt;/b&gt;. To find your OTLP credentials:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Click &lt;b&gt;Settings&lt;/b&gt; in the left sidebar.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Under the &lt;b&gt;Organization&lt;/b&gt; section in the &lt;b&gt;Settings&lt;/b&gt; sidebar, click &lt;b&gt;Projects&lt;/b&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Find your project in the list and click on it to open the project settings.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;In the project settings sidebar, click &lt;b&gt;Client Keys (DSN)&lt;/b&gt; under the &lt;b&gt;SDK Setup&lt;/b&gt; section.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Select the &lt;b&gt;OpenTelemetry&lt;/b&gt; tab. Click the &lt;b&gt;Expand&lt;/b&gt; button to see all OTLP endpoint values.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Keep this tab open. We&amp;#39;ll use the following values in the next step:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;OTLP Logs Endpoint:&lt;/b&gt; The URL where Sentry receives logs (which looks like &lt;code&gt;https://o&lt;/code&gt;&lt;b&gt;&lt;code&gt;{ORG_ID}&lt;/code&gt;&lt;/b&gt;&lt;code&gt;.ingest.us.sentry.io/api/&lt;/code&gt;&lt;b&gt;&lt;code&gt;{PROJECT_ID}&lt;/code&gt;&lt;/b&gt;&lt;code&gt;/integration/otlp/v1/logs&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;OTLP Logs Endpoint Headers:&lt;/b&gt; The authentication header (which looks like &lt;code&gt;x-sentry-auth=sentry sentry_key=&lt;/code&gt;&lt;b&gt;&lt;code&gt;{YOUR_PUBLIC_KEY}&lt;/code&gt;&lt;/b&gt;)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;One thing worth knowing: most OTLP exporters expect headers as raw key/value pairs, not full header strings. You&amp;#39;ll need to parse the header in your app. We&amp;#39;ll handle this in the setup below.&lt;/p&gt;&lt;h2&gt;Connect your OpenTelemetry app to Sentry&lt;/h2&gt;&lt;p&gt;We&amp;#39;ll use a sample payment processing service that already has OpenTelemetry logging instrumentation. 
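&lt;/p&gt;&lt;p&gt;If you configure the exporter in code rather than through environment variables, here is one way to handle the header parsing mentioned earlier (the helper name is ours):&lt;/p&gt;

```javascript
// Turn an OTLP header string such as
//   "x-sentry-auth=sentry sentry_key={YOUR_PUBLIC_KEY}"
// into the { name: value } object most OTLP exporters expect.
// Only the first "=" separates the header name from its value,
// because the value itself may contain "=" characters.
function parseOtlpHeaders(raw) {
  return Object.fromEntries(
    raw.split(',').map((pair) => {
      const eq = pair.indexOf('=');
      return [pair.slice(0, eq).trim(), pair.slice(eq + 1).trim()];
    })
  );
}
```

&lt;p&gt;Splitting on commas also covers the multi-header case, since the OpenTelemetry environment-variable convention formats multiple headers as comma-separated &lt;code&gt;key=value&lt;/code&gt; pairs.&lt;/p&gt;&lt;p&gt;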
&lt;b&gt;You don&amp;#39;t need to touch the logging code itself.&lt;/b&gt; Just point it at Sentry&amp;#39;s OTLP endpoint.&lt;/p&gt;&lt;h3&gt;Clone the starter app&lt;/h3&gt;&lt;p&gt;Run the following commands to clone the payment processing app:&lt;/p&gt;&lt;p&gt;This app includes the OpenTelemetry SDK already configured, structured logging throughout, multiple log severity levels (&lt;code&gt;INFO&lt;/code&gt;, &lt;code&gt;DEBUG&lt;/code&gt;, &lt;code&gt;WARN&lt;/code&gt;, and &lt;code&gt;ERROR&lt;/code&gt;), and rich log attributes for every entry.&lt;/p&gt;&lt;h3&gt;Configure Sentry as the OTLP destination&lt;/h3&gt;&lt;p&gt;Create a &lt;code&gt;.env&lt;/code&gt; file in the project root:&lt;/p&gt;&lt;p&gt;Now edit &lt;code&gt;.env&lt;/code&gt; and add your Sentry OTLP credentials from the previous step:&lt;/p&gt;&lt;p&gt;Replace the placeholders with your actual Sentry credentials. The &lt;code&gt;OTEL_SERVICE_NAME&lt;/code&gt; value will let you filter logs by service in Sentry later.&lt;/p&gt;&lt;p&gt;That&amp;#39;s it. Two config lines and OpenTelemetry logs are flowing to Sentry.&lt;/p&gt;&lt;h2&gt;Test the integration&lt;/h2&gt;&lt;p&gt;Start the app:&lt;/p&gt;&lt;p&gt;You should see:&lt;/p&gt;&lt;h3&gt;Generate some logs&lt;/h3&gt;&lt;p&gt;In a new terminal window, send a request to process a payment:&lt;/p&gt;&lt;p&gt;You&amp;#39;ll get a JSON response confirming the payment:&lt;/p&gt;&lt;h3&gt;View the logs in Sentry&lt;/h3&gt;&lt;p&gt;Now let&amp;#39;s see what this looks like in &lt;a href=&quot;https://docs.sentry.io/product/explore/logs/&quot;&gt;Sentry&amp;#39;s Logs view&lt;/a&gt;:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Go to your Sentry project.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Navigate to &lt;b&gt;Explore&lt;/b&gt; in the left sidebar, then click &lt;b&gt;Logs&lt;/b&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;You&amp;#39;ll see a list of log entries from your payment processing workflow. 
Each log shows a timestamp, severity indicator (colored dot), and message.&lt;/p&gt;&lt;h3&gt;Explore log attributes&lt;/h3&gt;&lt;p&gt;Click on any log entry to expand it and see all its attributes.&lt;/p&gt;&lt;p&gt;For example, the &lt;b&gt;High-risk transaction detected&lt;/b&gt; log includes attributes like the following:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;&lt;b&gt;fraud_check.score&lt;/b&gt;&lt;/code&gt;&lt;b&gt;:&lt;/b&gt; &lt;code&gt;97.98&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;&lt;b&gt;fraud_check.threshold&lt;/b&gt;&lt;/code&gt;&lt;b&gt;:&lt;/b&gt; &lt;code&gt;70&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;&lt;b&gt;fraud_check.reason&lt;/b&gt;&lt;/code&gt;&lt;b&gt;:&lt;/b&gt; &lt;code&gt;unusual_amount_pattern&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;&lt;b&gt;user.id&lt;/b&gt;&lt;/code&gt;&lt;b&gt;:&lt;/b&gt; &lt;code&gt;user123&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;&lt;b&gt;transaction.id&lt;/b&gt;&lt;/code&gt;&lt;b&gt;:&lt;/b&gt; &lt;code&gt;txn_1762164637756_0hscczobm&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;&lt;b&gt;severity&lt;/b&gt;&lt;/code&gt;&lt;b&gt;:&lt;/b&gt; &lt;code&gt;warn&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;All of these are searchable. 
To add any attribute as a filter, hover over it, click the overflow menu (three dots), and select &lt;b&gt;Add to filter&lt;/b&gt;.&lt;/p&gt;&lt;h2&gt;How OpenTelemetry logging works&lt;/h2&gt;&lt;p&gt;Here&amp;#39;s what&amp;#39;s happening under the hood, in case you&amp;#39;re applying these patterns to your own app.&lt;/p&gt;&lt;h3&gt;OpenTelemetry SDK initialization&lt;/h3&gt;&lt;p&gt;The &lt;code&gt;instrument.js&lt;/code&gt; file configures the OTLP exporter and wires up the logger provider:&lt;/p&gt;&lt;p&gt;These are the key parts:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://opentelemetry.io/docs/specs/otel/protocol/exporter/&quot;&gt;OTLPLogExporter&lt;/a&gt; sends logs to the OTLP endpoint.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://opentelemetry.io/docs/specs/otel/logs/sdk/#loggerprovider&quot;&gt;LoggerProvider&lt;/a&gt; manages the logging system.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://opentelemetry.io/docs/specs/otel/logs/sdk/#batching-processor&quot;&gt;BatchLogRecordProcessor&lt;/a&gt; groups log records before export, which reduces network overhead at scale.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Emitting structured logs&lt;/h3&gt;&lt;p&gt;The &lt;code&gt;index.js&lt;/code&gt; file imports &lt;code&gt;instrument.js&lt;/code&gt; first, then creates a logger and emits records:&lt;/p&gt;&lt;p&gt;Here&amp;#39;s how we emit a structured log:&lt;/p&gt;&lt;p&gt;Each call to &lt;code&gt;logger.emit()&lt;/code&gt; takes a severity level, a message body, and a set of attributes. 
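&lt;/p&gt;&lt;p&gt;Putting those pieces together, a condensed sketch of both files looks roughly like this (package entry points and constructor options shift between OpenTelemetry JS releases, so treat this as illustrative rather than exact):&lt;/p&gt;

```javascript
// instrument.js -- wire the OTLP exporter into a logger provider.
// By default, OTLPLogExporter reads OTEL_EXPORTER_OTLP_LOGS_ENDPOINT
// and OTEL_EXPORTER_OTLP_LOGS_HEADERS from the environment.
const { logs, SeverityNumber } = require('@opentelemetry/api-logs');
const { LoggerProvider, BatchLogRecordProcessor } = require('@opentelemetry/sdk-logs');
const { OTLPLogExporter } = require('@opentelemetry/exporter-logs-otlp-http');

const provider = new LoggerProvider();
provider.addLogRecordProcessor(new BatchLogRecordProcessor(new OTLPLogExporter()));
logs.setGlobalLoggerProvider(provider);

// index.js -- emit a structured record. OpenTelemetry's named severity
// levels are TRACE, DEBUG, INFO, WARN, ERROR, and FATAL.
const logger = logs.getLogger('payment-service');
logger.emit({
  severityNumber: SeverityNumber.WARN,
  severityText: 'WARN',
  body: 'High-risk transaction detected',
  attributes: {
    'fraud_check.score': 97.98, // attribute names mirror the example above
    'user.id': 'user123',
  },
});
```

&lt;p&gt;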
The attributes are what make logs searchable — the more context you add here, the easier it is to find specific events later.&lt;/p&gt;&lt;h3&gt;Log severity levels&lt;/h3&gt;&lt;p&gt;OpenTelemetry supports &lt;a href=&quot;https://opentelemetry.io/docs/specs/otel/logs/data-model/#field-severitynumber&quot;&gt;six severity levels&lt;/a&gt;:&lt;/p&gt;&lt;h3&gt;Adding rich attributes&lt;/h3&gt;&lt;p&gt;The more attributes you add, the easier it is to debug issues. Here&amp;#39;s an example from the fraud detection path:&lt;/p&gt;&lt;p&gt;All these attributes are searchable in Sentry, so you can find specific transactions quickly without scanning log text.&lt;/p&gt;&lt;h2&gt;OTLP vs native Sentry SDK&lt;/h2&gt;&lt;p&gt;Both approaches send logs to Sentry. The difference is in what you get automatically.&lt;/p&gt;&lt;h3&gt;Setup and configuration&lt;/h3&gt;&lt;p&gt;&lt;b&gt;OTLP&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Native Sentry SDK&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Note that &lt;code&gt;Sentry.logger&lt;/code&gt; requires Sentry JavaScript SDK v9.41.0 or above.&lt;/p&gt;&lt;h3&gt;Emitting logs&lt;/h3&gt;&lt;p&gt;&lt;b&gt;OTLP&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Native Sentry SDK&lt;/b&gt;&lt;/p&gt;&lt;p&gt;With OpenTelemetry, you specify both &lt;code&gt;severityNumber&lt;/code&gt; and &lt;code&gt;severityText&lt;/code&gt; manually. The Sentry SDK infers both from the method you call (&lt;code&gt;info()&lt;/code&gt;, &lt;code&gt;warn()&lt;/code&gt;, and so on). The SDK also associates logs with errors, transactions, and user sessions automatically, without any extra setup.&lt;/p&gt;&lt;h3&gt;Log levels&lt;/h3&gt;&lt;p&gt;&lt;b&gt;OTLP&lt;/b&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Native Sentry SDK&lt;/b&gt;&lt;/p&gt;&lt;h2&gt;What&amp;#39;s next&lt;/h2&gt;&lt;p&gt;You now have OpenTelemetry logs flowing into Sentry. 
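&lt;/p&gt;&lt;p&gt;As a point of reference for the comparison above, the native-SDK version of the emit call is a one-liner (a sketch; check the Sentry logs docs for the current API):&lt;/p&gt;

```javascript
// Native Sentry SDK (v9.41.0+): the severity comes from the method name,
// and logs are linked to errors, traces, and sessions automatically.
// Assumes logs were enabled in Sentry.init() earlier in the app.
const Sentry = require('@sentry/node');

Sentry.logger.warn('High-risk transaction detected', {
  'fraud_check.score': 97.98, // same illustrative attributes as above
  'user.id': 'user123',
});
```

&lt;p&gt;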
A few ways to get more value from here:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Add context to your logs.&lt;/b&gt; The more attributes you add, the easier it is to debug issues. Add user IDs, request IDs, transaction IDs, feature flags, or any relevant business context to every log entry.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Use consistent attribute naming.&lt;/b&gt; Follow &lt;a href=&quot;https://opentelemetry.io/docs/specs/semconv/&quot;&gt;OpenTelemetry Semantic Conventions&lt;/a&gt; for standardized attribute names. This keeps your logs consistent and easier to search across services.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Set up alerts.&lt;/b&gt; Configure &lt;a href=&quot;https://docs.sentry.io/product/alerts/&quot;&gt;Sentry alerts&lt;/a&gt; to notify you when certain log patterns appear — &lt;code&gt;ERROR&lt;/code&gt; logs exceeding a threshold, or high-risk transactions crossing a fraud score cutoff.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Combine logs with traces.&lt;/b&gt; If you&amp;#39;re also sending &lt;a href=&quot;https://sentry.io/product/tracing/&quot;&gt;traces&lt;/a&gt; to Sentry, you can correlate them with logs to get a complete picture of your application&amp;#39;s behavior.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;OTLP logging support is still in open beta. If you run into a limitation not listed here, &lt;a href=&quot;https://github.com/getsentry/sentry&quot;&gt;open an issue on GitHub&lt;/a&gt;. That&amp;#39;s the fastest way to get it on our radar.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[React Native SDK 8.0.0 is here]]></title><description><![CDATA[We just released React Native SDK 8.0.0, here's what's new, and what's changed. It's been a while since the last major version. 
The last major release, 7.0.0, s...]]></description><link>https://blog.sentry.io/react-native-sdk-8-is-here/</link><guid isPermaLink="false">https://blog.sentry.io/react-native-sdk-8-is-here/</guid><pubDate>Thu, 19 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We just released &lt;a href=&quot;https://github.com/getsentry/sentry-react-native/releases/tag/8.0.0&quot;&gt;React Native SDK 8.0.0&lt;/a&gt;, here&amp;#39;s what&amp;#39;s new, and what&amp;#39;s changed.&lt;/p&gt;&lt;p&gt;It&amp;#39;s been a while since the last major version. The last major release, 7.0.0, shipped on &lt;b&gt;September 2, 2025&lt;/b&gt;. After 13 minor and 2 patch releases, it&amp;#39;s finally time for a new major version to land: &lt;b&gt;8.0.0&lt;/b&gt;. This version is a maintenance and capability major. This means we:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Added app start error capture with native initialization&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Upgraded core native dependencies&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Bumped minimum version requirements&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;It should be straightforward to upgrade, but check the &lt;a href=&quot;https://docs.sentry.io/platforms/react-native/migration/v7-to-v8/&quot;&gt;migration guide&lt;/a&gt; for your setup.&lt;/p&gt;&lt;h2&gt;Changes you&amp;#39;ll want to know about&lt;/h2&gt;&lt;p&gt;Most of the changes in version 8 fall into two buckets: new capabilities and updated dependencies. Here are the essentials:&lt;/p&gt;&lt;h3&gt;App start error capture&lt;/h3&gt;&lt;p&gt;Sentry can now capture crashes and errors during React Native bridge setup, bundle loading, and native module initialization — not just after &lt;code&gt;Sentry.init()&lt;/code&gt; runs on the JavaScript side. 
In previous versions, this required complicated configuration and manual native initialization.&lt;/p&gt;&lt;p&gt;In version 8, you can initialize Sentry at the native layer using a &lt;code&gt;sentry.options.json&lt;/code&gt; configuration file and new native APIs. This lets the SDK capture app start errors and native crashes from the very beginning of the app lifecycle. You can find more information in the &lt;a href=&quot;https://docs.sentry.io/platforms/react-native/&quot;&gt;React Native documentation&lt;/a&gt;.&lt;/p&gt;&lt;h3&gt;Updated native dependencies&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://github.com/getsentry/sentry-cocoa&quot;&gt;Cocoa SDK v9&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/cli/&quot;&gt;Sentry CLI v3&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Sentry Android Gradle Plugin v6&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Updated minimum supported versions&lt;/h3&gt;&lt;p&gt;With the release of 8.0.0, the minimum version requirements have changed.&lt;/p&gt;&lt;h4&gt;Apple Platforms&lt;/h4&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;iOS&lt;/b&gt;: 15.0+ (previously 11.0+)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;macOS&lt;/b&gt;: 10.14+ (previously 10.13+)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;tvOS&lt;/b&gt;: 15.0+ (previously 11.0+)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h4&gt;Android&lt;/h4&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Android Gradle Plugin&lt;/b&gt;: 7.4.0+&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Kotlin&lt;/b&gt;: 1.8+&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Upgrading to version 8&lt;/h2&gt;&lt;p&gt;Update &lt;code&gt;@sentry/react-native&lt;/code&gt; to the latest 8.x release with your package manager, then check the migration guide to see if you need to change anything. That&amp;#39;s all there is to it for most setups. 
It would look something like this:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Update &lt;code&gt;@sentry/react-native&lt;/code&gt; to the latest 8.x version in your package manager.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Follow the &lt;a href=&quot;https://docs.sentry.io/platforms/react-native/migration/v7-to-v8/&quot;&gt;v7 to v8 migration guide&lt;/a&gt; to adjust minimum versions and build configuration (iOS/macOS/tvOS deployment targets, Android Gradle Plugin, Kotlin, and self-hosted Sentry if applicable).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Optionally enable &lt;a href=&quot;https://docs.sentry.io/platforms/react-native/manual-setup/app-start-error-capture/&quot;&gt;app start error capture&lt;/a&gt; using &lt;code&gt;sentry.options.json&lt;/code&gt; and native initialization.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3&gt;What about version 7?&lt;/h3&gt;&lt;p&gt;We have stopped feature development for version 7 and will only ship critical bug fixes. You can still use version 7 and aren&amp;#39;t forced to upgrade, but we recommend moving to the latest major version when possible.&lt;/p&gt;&lt;h2&gt;If you&amp;#39;re ready to upgrade&lt;/h2&gt;&lt;p&gt;Give version 8 a try. Enable native initialization if you want full coverage from app launch. And if something breaks, or even just feels off, &lt;a href=&quot;https://github.com/getsentry/sentry-react-native/issues&quot;&gt;open an issue&lt;/a&gt;. You can also find us on &lt;a href=&quot;https://discord.gg/sentry&quot;&gt;Discord&lt;/a&gt;. We&amp;#39;ll help you sort it out.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[From random chunks to real code — wiring up Next.js source maps in Sentry]]></title><description><![CDATA[When you ship a Next.js app, the React and TypeScript you write aren’t what your users actually download. 
Next.js compiles, minifies, splits, and shuffles your ...]]></description><link>https://blog.sentry.io/setting-up-next-js-source-maps-sentry/</link><guid isPermaLink="false">https://blog.sentry.io/setting-up-next-js-source-maps-sentry/</guid><pubDate>Tue, 17 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;When you ship a &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/nextjs/&quot;&gt;Next.js&lt;/a&gt; app, the React and TypeScript you write aren’t what your users actually download. Next.js compiles, minifies, splits, and shuffles your code into chunks in ways that are great for performance and terrible for debugging.&lt;/p&gt;&lt;p&gt;This post shows you how that pipeline works, how &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/nextjs/sourcemaps/&quot;&gt;source maps&lt;/a&gt; and &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/nextjs/sourcemaps/troubleshooting_js/debug-ids/&quot;&gt;debug IDs&lt;/a&gt; connect it all back to your original code, and how to wire things up so Sentry shows you real file names and line numbers instead of an unreadable stack trace.&lt;/p&gt;&lt;h2&gt;What actually happens to your code&lt;/h2&gt;&lt;p&gt;In a typical Next.js app, your React + TypeScript source goes through a build pipeline that compiles it to JavaScript, HTML, and CSS, minifies that output, and splits it into chunks so users only download what they need.&lt;/p&gt;&lt;p&gt;All of this is good for page load. It&amp;#39;s less good for you when an error happens and your stack trace now points into &lt;code&gt;static/chunks/12345-something.js&lt;/code&gt; instead of &lt;code&gt;app/page.tsx&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Your code goes from something you recognize to something you really do not. 
That&amp;#39;s where source maps step in.&lt;b&gt; &lt;/b&gt;Each of those compiled bundle chunks gets two important pieces of metadata — a debug ID and a &lt;code&gt;sourceMappingURL&lt;/code&gt; that points to the corresponding source map.&lt;/p&gt;&lt;p&gt;Sentry uses the debug ID on the uploaded minified file to find the matching debug ID on the uploaded source map. Once it has that pair, it can de-minify the stack trace, map it back to the original file, line, and column, and show you the code you actually wrote, not just the code your bundler generated.&lt;/p&gt;&lt;h2&gt;Why dev tools show nice stack traces but Sentry shows chunks&lt;/h2&gt;&lt;p&gt;In development, things look fine in the browser. You run the app in dev, you throw a sample error, and your browser devtools show a readable stack trace with your real filenames and source.&lt;/p&gt;&lt;p&gt;That&amp;#39;s because when you run &lt;code&gt;next dev&lt;/code&gt;, the browser’s got direct access to local source maps and your original source files. It can resolve everything on its own, without Sentry.&lt;/p&gt;&lt;p&gt;If you point that same dev build at Sentry, Sentry only sees what you send it. Without uploaded source maps, it just sees whatever bundles and chunk files the dev build produced. 
The result is a browser with a nice, human-readable stack trace and Sentry with a stack trace full of randomly named chunks.&lt;/p&gt;&lt;p&gt;Those chunk names change across builds, which means you can easily end up with the same logical error appearing multiple times as different issues in Sentry.&lt;/p&gt;&lt;h2&gt;Use Sentry for production, Spotlight for local debugging&lt;/h2&gt;&lt;p&gt;If you&amp;#39;re actively working on a feature and throwing errors on every refresh, you&amp;#39;re going to generate fresh chunks, send lots of near-duplicate errors, and burn through Sentry quota on problems only you ever saw.&lt;/p&gt;&lt;p&gt;For that local dev loop, use a tool like&lt;a href=&quot;https://spotlightjs.com/&quot;&gt; Spotlight&lt;/a&gt;. It&amp;#39;s built for local development, works similarly to Sentry, and keeps all that debug noise out of your Sentry org so you can save Sentry for production issues from actual users.&lt;/p&gt;&lt;h2&gt;Simulating a production build so Sentry gets your source maps&lt;/h2&gt;&lt;p&gt;Sentry cares about the production artifacts — the optimized bundles and matching source maps that your real users hit. To simulate a production environment locally and make sure Sentry sees what it needs, build your app and start it:&lt;/p&gt;&lt;p&gt;&lt;code&gt;npm run build&lt;/code&gt; creates the optimized production build where Sentry hooks in, and &lt;code&gt;npm run start&lt;/code&gt; serves that build as if it were running in production.&lt;/p&gt;&lt;p&gt;This run goes through the same pipeline as your deployed app, and it&amp;#39;s during this process that Sentry hooks in to upload your bundles and source maps.&lt;/p&gt;&lt;h3&gt;What Sentry does during the build step&lt;/h3&gt;&lt;p&gt;During the production build (when you run &lt;code&gt;npm run build&lt;/code&gt;), Sentry hooks into the &amp;quot;after production compile&amp;quot; step. 
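&lt;/p&gt;&lt;p&gt;That hook is enabled by wrapping your Next.js config with &lt;code&gt;withSentryConfig&lt;/code&gt; in &lt;code&gt;next.config.js&lt;/code&gt;. A minimal sketch, with placeholder option values:&lt;/p&gt;

```javascript
// next.config.js -- the Sentry wizard generates a version of this for you.
const { withSentryConfig } = require('@sentry/nextjs');

/** @type {import('next').NextConfig} */
const nextConfig = {};

module.exports = withSentryConfig(nextConfig, {
  org: 'your-org-slug',         // placeholder
  project: 'your-project-slug', // placeholder
  // Read from the environment instead of hardcoding a secret.
  authToken: process.env.SENTRY_AUTH_TOKEN,
});
```

&lt;p&gt;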
In that phase, Sentry collects the generated chunks, finds their source maps, and uploads both to Sentry.&lt;/p&gt;&lt;p&gt;After uploading, Sentry deletes the source maps from the build output so they don&amp;#39;t ship to the browser. That gives you readable stack traces in Sentry, without handing over your original source code to every user who opens devtools.&lt;/p&gt;&lt;p&gt;If you want to see the more general ways to upload source maps (beyond Next.js), check out the&lt;a href=&quot;https://docs.sentry.io/platforms/javascript/sourcemaps/uploading/&quot;&gt; uploading source maps&lt;/a&gt; docs.&lt;/p&gt;&lt;h3&gt;Verifying that source maps are wired up correctly&lt;/h3&gt;&lt;p&gt;Once you&amp;#39;ve run &lt;code&gt;npm run build&lt;/code&gt; and &lt;code&gt;npm run start&lt;/code&gt;, trigger that same sample error again and compare. In the browser console, you should now see a long, minified stack trace. That&amp;#39;s expected. The browser no longer has access to your source maps in this production-style setup.&lt;/p&gt;&lt;p&gt;In Sentry, refresh the Issues page and open the new error. The stack trace should now point to your real Next.js files with the code you actually wrote. If the Sentry stack trace looks readable and matches your source, your source maps are wired up correctly.&lt;/p&gt;&lt;p&gt;In Sentry, source maps live in your project settings. Go to Issues, use the project selector to pick your Next.js project, click Project Settings, and in the sidebar, open the Source Maps page.&lt;/p&gt;&lt;p&gt;You’ll see all source maps uploaded for that project. For a Next.js app, you might see a client bundle, a server bundle, and an edge bundle. Seeing multiple entries for the same release is normal. 
Next.js just builds different bundles for different runtimes.&lt;/p&gt;&lt;p&gt;If source maps aren&amp;#39;t behaving the way you expect (for example, Sentry still shows chunked or minified frames), go to the Source Maps page for your project, delete the existing source maps, rerun your production build with &lt;code&gt;npm run build&lt;/code&gt;, and start the app again with &lt;code&gt;npm run start&lt;/code&gt; and trigger an error.&lt;/p&gt;&lt;p&gt;This triggers a clean upload and often fixes issues caused by stale or mismatched files.&lt;/p&gt;&lt;h3&gt;Double-checking your Next.js and Sentry configuration&lt;/h3&gt;&lt;p&gt;If you&amp;#39;re building your app but no source maps are appearing in Sentry, it usually comes down to configuration.&lt;/p&gt;&lt;p&gt;In your Next.js + Sentry setup, double-check that your Sentry organization is set correctly, the Sentry project is the one you actually expect, and you&amp;#39;re providing a valid auth token.&lt;/p&gt;&lt;p&gt;You can set the auth token either directly in your Sentry-related config or via the &lt;code&gt;SENTRY_AUTH_TOKEN&lt;/code&gt; environment variable in whatever environment is running your builds (local, CI, Vercel, etc.).&lt;/p&gt;&lt;p&gt;As long as the environment variable is in place, you don&amp;#39;t need to hardcode it in your config.&lt;/p&gt;&lt;h2&gt;Where to go when you&amp;#39;re still stuck&lt;/h2&gt;&lt;p&gt;If you&amp;#39;ve rebuilt the app in production mode, cleaned up and reuploaded source maps, and double-checked your Sentry config and environment variables, and your stack traces still look wrong, check the docs for&lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/nextjs/sourcemaps/&quot;&gt; Next.js source maps&lt;/a&gt; and the &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/nextjs/sourcemaps/troubleshooting_js/&quot;&gt;troubleshooting section &lt;/a&gt;under that page.&lt;/p&gt;&lt;p&gt;Those walk through common misconfigurations, show 
known-good examples, and give you a checklist to compare against your own setup.&lt;/p&gt;&lt;p&gt;And if you still have questions after that, let us know. We&amp;#39;re happy to help you get from random chunk names back to the code you actually wrote. Find us on &lt;a href=&quot;https://discord.gg/sentry&quot;&gt;Discord&lt;/a&gt;, or if you&amp;#39;re new to Sentry, you can explore our &lt;a href=&quot;https://sandbox.sentry.io/&quot;&gt;interactive Sentry sandbox&lt;/a&gt; or &lt;a href=&quot;https://sentry.io/signup/&quot;&gt;sign up for free&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[AI-driven caching strategies and instrumentation]]></title><description><![CDATA[The things that separate a minimum viable product (MVP) from a production-ready app are polish, final touches, and the Pareto 'last 20%' of work. Most bugs, edg...]]></description><link>https://blog.sentry.io/ai-driven-caching-strategies-instrumentation/</link><guid isPermaLink="false">https://blog.sentry.io/ai-driven-caching-strategies-instrumentation/</guid><pubDate>Fri, 13 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The things that separate a minimum viable product (MVP) from a production-ready app are polish, final touches, and the Pareto &amp;#39;last 20%&amp;#39; of work. Most bugs, edge cases, and &lt;a href=&quot;https://sentry.io/solutions/application-performance-monitoring/&quot;&gt;performance issues&lt;/a&gt; won&amp;#39;t show up until after launch, when real users start hammering your application. 
If you&amp;#39;re reading this, you&amp;#39;re probably at the 80% mark, ready to tackle the rest.&lt;/p&gt;&lt;p&gt;This article covers application caching: how to use it for cutting tail latency, protecting databases, and handling traffic spikes, plus how to monitor it once it&amp;#39;s running in production.&lt;/p&gt;&lt;p&gt;This article is part of a series of common pain points when bringing an MVP to production:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://blog.sentry.io/paginating-large-datasets-in-production-why-offset-fails-and-cursors-win/&quot;&gt;Paginating Large Datasets in Production: Why OFFSET Fails and Cursors Win&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;AI-driven caching strategies and instrumentation (this one)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Building a mental model for caching&lt;/h2&gt;&lt;p&gt;Good caching multiplies your performance, scalability, and cost efficiency. Done right, it gives you sub-millisecond responses and absorbs traffic spikes without crushing your origin servers. Done wrong (aggressive caching, bad invalidation, wrong strategies) it creates subtle bugs, stale data, and degraded user experience (UX) that&amp;#39;s hard to debug and usually only shows up after it&amp;#39;s already affected a lot of users.&lt;/p&gt;&lt;p&gt;Before looking for caching opportunities, you need a mental model for what should and shouldn&amp;#39;t be cached. 
Here&amp;#39;s a checklist:&lt;/p&gt;&lt;h3&gt;✅ Cache if most are true:&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Expensive&lt;/b&gt;: slow CPU, slow input/output (IO), heavy DB, big joins/aggregates, external application programming interface (API)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Frequent&lt;/b&gt;: called a lot (high requests per minute (RPM)) or sits on hot paths (page load, core API)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Reusable&lt;/b&gt;: same inputs repeat (low key cardinality)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Stable-ish&lt;/b&gt;: data doesn&amp;#39;t change every second (or can tolerate staleness)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Spiky load&lt;/b&gt;: bursty traffic where cache absorbs thundering herds&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Tail hurts&lt;/b&gt;: P95/P99 is bad, and misses correlate with slow requests&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Safe to serve stale&lt;/b&gt;: user impact low, or can use stale-while-revalidate (SWR)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Invalidation is easy&lt;/b&gt;: time to live (TTL) works, or updates have clear triggers&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Small-ish payload&lt;/b&gt;: memory cost reasonable, serialization cheap&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;❌ Don&amp;#39;t cache (or be very careful) if any are true:&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;High cardinality keys&lt;/b&gt;: per-user / per-page / per-filter explosion → mostly misses (pagination is a special case - see note below)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Highly mutable&lt;/b&gt;: correctness demands freshness&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Personalized / permissioned&lt;/b&gt;: easy to leak data via key mistakes&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Hard invalidation&lt;/b&gt;: no clear TTL, updates unpredictable&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Already 
fast&lt;/b&gt;: saving 5ms isn&amp;#39;t worth complexity&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Cache stampede risk&lt;/b&gt;: expensive recompute + synchronized expiry (needs locking / jitter)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;There&amp;#39;s a special rule for caching paginated endpoints - &lt;b&gt;cache page 1 + common filters first&lt;/b&gt;. Page 1 and a small set of common filters are usually hot and reused, so caching pays off. As page numbers increase, key cardinality explodes and reuse collapses, so deep pages will naturally miss and that&amp;#39;s fine. Optimize for protecting the backend and reducing tail latency on the entry points, not for achieving uniform hit rates across all pages.&lt;/p&gt;&lt;h2&gt;Finding caching opportunities in production&lt;/h2&gt;&lt;p&gt;Once you know what &lt;i&gt;should&lt;/i&gt; be cached, the next question is where caching will actually matter. In production systems, good caching candidates show up through pain, usually in three forms.&lt;/p&gt;&lt;h3&gt;Backend pain (start here)&lt;/h3&gt;&lt;p&gt;For &lt;a href=&quot;https://docs.sentry.io/product/insights/backend/&quot;&gt;backend and full-stack systems,&lt;/a&gt; this is the most actionable signal.&lt;/p&gt;&lt;p&gt;Look for:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Transactions with bad P95/P99&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Endpoints with heavy database (DB) time&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Repeated queries, joins, aggregates&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Fan-out (one request triggering many downstream calls)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Lock contention or connection pool pressure&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;These are places where caching immediately reduces real work.&lt;/p&gt;&lt;h3&gt;User pain (confirmation)&lt;/h3&gt;&lt;p&gt;Slow page loads, janky interactions, timeouts. 
&lt;a href=&quot;https://sentry.io/for/web-vitals/&quot;&gt;Web Vitals&lt;/a&gt; like Time to First Byte (&lt;a href=&quot;https://webvitals.com/ttfb&quot;&gt;TTFB&lt;/a&gt;), Largest Contentful Paint (&lt;a href=&quot;https://webvitals.com/lcp&quot;&gt;LCP&lt;/a&gt;), and Interaction to Next Paint (&lt;a href=&quot;https://webvitals.com/inp&quot;&gt;INP&lt;/a&gt;) help confirm that backend slowness is actually affecting users. They&amp;#39;re most useful once you already suspect a backend bottleneck.&lt;/p&gt;&lt;h3&gt;Cost pain (the long-term signal)&lt;/h3&gt;&lt;p&gt;Even if your users aren&amp;#39;t complaining yet, repetition is expensive:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;High DB read volume&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Paid external API calls&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Recomputed rollups and counts&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Cost often lags behind performance problems, but it&amp;#39;s a strong motivator once traffic grows.&lt;/p&gt;&lt;p&gt;A simple prioritization heuristic is &lt;b&gt;cost density&lt;/b&gt;:&lt;/p&gt;&lt;p&gt;&lt;code&gt;cost density = requests per minute * time saved per request&lt;/code&gt;&lt;/p&gt;&lt;p&gt;An endpoint that&amp;#39;s moderately slow but hit consistently is usually a better caching target than a pathological endpoint nobody touches.&lt;/p&gt;&lt;h2&gt;Example: a slow paginated endpoint&lt;/h2&gt;&lt;p&gt;Consider a paginated endpoint performing a heavy database query with no caching.&lt;/p&gt;&lt;p&gt;In &lt;b&gt;Sentry &amp;gt; Insights &amp;gt; Backend&lt;/b&gt;, filtering by API transactions (above the table) surfaces this:&lt;/p&gt;&lt;p&gt;The &lt;code&gt;GET /admin/order-items&lt;/code&gt; endpoint has potential for caching. Let&amp;#39;s dive into it. 
I&amp;#39;ll pick a slower event and inspect the &lt;a href=&quot;https://docs.sentry.io/concepts/key-terms/tracing/trace-view/&quot;&gt;trace view&lt;/a&gt;:&lt;/p&gt;&lt;p&gt;From the screenshot, we can see:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;776ms total duration&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;731ms spent in a single DB span&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Multiple joins&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;LIMIT + OFFSET pagination&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Poor TTFB in Web Vitals&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Against the checklist:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;✅ Expensive (heavy DB query, joins)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;✅ Frequent (high throughput)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;✅ Stable-ish (can tolerate brief staleness)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;✅ Tail hurts (bad P95)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;✅ Invalidation is easy (writes are controlled)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;⚠️ High cardinality key (pagination)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This is a strong candidate for &lt;b&gt;selective caching&lt;/b&gt;, not blanket caching.&lt;/p&gt;&lt;h2&gt;Applying and instrumenting caching&lt;/h2&gt;&lt;p&gt;Sentry comes with &lt;a href=&quot;https://docs.sentry.io/product/insights/backend/caches/&quot;&gt;Cache Monitoring&lt;/a&gt; too. It helps you see your cache hit/miss rates across your application, and inspect specific events captured in production when the cache was either hit or missed.&lt;/p&gt;&lt;p&gt;Instrumenting caches can be done both automatically and manually. If you&amp;#39;re using Redis, you can leverage the automatic instrumentation. If not, manual instrumentation is just as easy.&lt;/p&gt;&lt;p&gt;The most straightforward approach is to just ask Seer to do it for you. 
At the time of publishing this article, Seer&amp;#39;s &amp;quot;open-ended questions&amp;quot; feature is private access only, but I&amp;#39;ll give you a little sneak peek. You can access it with &lt;code&gt;Cmd + /&lt;/code&gt; and straight up ask it to instrument caches for you:&lt;/p&gt;&lt;p&gt;Seer will then open a PR on your repo, so you can merge it and be done with it.&lt;/p&gt;&lt;p&gt;In case you don&amp;#39;t have access to this Seer feature yet, this is all you need to do to instrument your caches:&lt;/p&gt;&lt;p&gt;That&amp;#39;s it. All we need to do is wrap the &lt;code&gt;redis.get&lt;/code&gt; and &lt;code&gt;redis.setex&lt;/code&gt; calls with &lt;code&gt;Sentry.startSpan&lt;/code&gt; and provide caching-specific span attributes. If you&amp;#39;re not using JavaScript on your backend, you can simply rewrite these functions in your language of choice. As long as you&amp;#39;re sending spans that have the correct &lt;code&gt;op&lt;/code&gt; and &lt;code&gt;attributes&lt;/code&gt;, you&amp;#39;ll get cache instrumentation right.&lt;/p&gt;&lt;p&gt;Now we can just use these two functions:&lt;/p&gt;&lt;h2&gt;Monitoring and optimizing caches&lt;/h2&gt;&lt;p&gt;Once we deploy this, we start seeing cache data coming in:&lt;/p&gt;&lt;p&gt;The data shows a 75% Miss Rate on that endpoint. That&amp;#39;s neither good nor bad by itself. The goal is not to reach a 0% miss rate. If you do, you&amp;#39;re probably hiding bugs. There is no universal &amp;quot;goal value&amp;quot; for the miss rate. The miss rate simply needs to align with your expectations. A 75% Miss Rate on this endpoint might make sense, but there also might be room for optimization. Let&amp;#39;s click into the transaction to see actual events:&lt;/p&gt;&lt;p&gt;From the screenshot above we can see that cache hits happen only on page 1, and misses on the other pages. And that&amp;#39;s because we followed the caching paginated endpoint advice - only cache page 1 and common filters. 
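&lt;/p&gt;&lt;p&gt;For reference, the wrapper functions described earlier can be sketched roughly like this. This is a hedged sketch, not the exact code from the app: it assumes the Node Sentry SDK&amp;#39;s &lt;code&gt;Sentry.startSpan&lt;/code&gt; and an ioredis-style client, names like &lt;code&gt;cachedGet&lt;/code&gt; and the key format are illustrative, and the clients are passed in as parameters only to keep the sketch self-contained:&lt;/p&gt;

```javascript
// Wrap cache reads/writes in Sentry spans so Cache Monitoring can compute
// hit/miss rates. `sentry` is your initialized @sentry/node module and
// `redis` an ioredis-style client (both illustrative parameters here).
async function cachedGet(sentry, redis, key) {
  return sentry.startSpan(
    { name: key, op: "cache.get", attributes: { "cache.key": key } },
    async (span) => {
      const raw = await redis.get(key);
      span.setAttribute("cache.hit", raw !== null); // miss when Redis has no entry
      return raw === null ? null : JSON.parse(raw);
    }
  );
}

async function cachedSet(sentry, redis, key, value, ttlSeconds) {
  return sentry.startSpan(
    { name: key, op: "cache.put", attributes: { "cache.key": key } },
    async () => redis.setex(key, ttlSeconds, JSON.stringify(value))
  );
}

// Selective caching per the advice above: every page is looked up (so misses
// are recorded too), but only page 1 is ever written, so deeper pages show
// up as expected misses.
async function getOrderItems(sentry, redis, db, page) {
  const key = `order-items:page:${page}`;
  const cached = await cachedGet(sentry, redis, key);
  if (cached !== null) return cached;
  const rows = await db.query(page);
  if (page === 1) await cachedSet(sentry, redis, key, rows, 60);
  return rows;
}
```

&lt;p&gt;The &lt;code&gt;cache.get&lt;/code&gt;/&lt;code&gt;cache.put&lt;/code&gt; ops and the &lt;code&gt;cache.key&lt;/code&gt;/&lt;code&gt;cache.hit&lt;/code&gt; attributes are what Cache Monitoring keys off, so any language works as long as your spans carry them.&lt;/p&gt;&lt;p&gt;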
Users were visiting multiple pages, but Page 1 accounted for 25% of the visits, hence the 75% Miss Rate. From the Transaction Duration column we can see that Page 1 loaded in under 40ms, while for other pages the users had to wait &amp;gt;700ms.&lt;/p&gt;&lt;p&gt;So our caching implementation is working, and users are experiencing faster page loads. From this point on we&amp;#39;ll know that for our &lt;code&gt;/admin/order-items&lt;/code&gt; endpoint the normal miss rate sits around 75%. If we introduce a bug later on, for example buggy cache keys (missing params, extra params), new filters or sorting, per-user or per-flag keys creeping in, accidentally including volatile data in keys (timestamps, request IDs, locale), or a misconfigured TTL, this number is going to shoot up, and we&amp;#39;ll see it in the chart. A spike in the chart will tell us that we broke caching and users are experiencing slowdowns.&lt;/p&gt;&lt;h2&gt;AI-assisted cache expansion in production&lt;/h2&gt;&lt;p&gt;Remember the &amp;quot;cache only page 1 + common filters&amp;quot; rule? We&amp;#39;re going to bend it a little bit. If we want to bring down the 75% Miss Rate above, we&amp;#39;ll need to expand caching to cover more pages than just page 1, but we have to be careful not to over-expand because we&amp;#39;ll bloat our Redis instance.&lt;/p&gt;&lt;p&gt;Here&amp;#39;s a practical AI-assisted approach to help make a good cache expansion decision:&lt;/p&gt;&lt;p&gt;You can use &lt;a href=&quot;https://mcp.sentry.dev/&quot;&gt;Sentry Model Context Protocol (MCP)&lt;/a&gt; to pull all the &lt;code&gt;cache.get&lt;/code&gt; spans from your project and group them by the &lt;code&gt;cache.key&lt;/code&gt; property, and then ask the agent to suggest how to expand the caching. Looking at the screenshot, we can see that Page 1 remains with the most hits, but Pages 2 - 6 have significant traffic too. 
Long-tail pages like 7 and 10 have minimal traffic, so there&amp;#39;s no need to cache them, and there&amp;#39;s also some test data that it discarded. It suggested expanding the cache to Page 3. Let&amp;#39;s see how that affects the Miss Rate:&lt;/p&gt;&lt;p&gt;Would you look at that! We&amp;#39;re now at a 30% Miss Rate, down from 75%. This means roughly only 1 in 3 requests will hit the database. But it&amp;#39;s important to keep an eye on Redis memory as well. Pushing caching from Page 1 to Page 3 might bloat our Redis instance, and in that case caching won&amp;#39;t be worth it. Redis bloat means hot-path evictions, which would undo the performance gains we got from caching in the first place.&lt;/p&gt;&lt;h2&gt;Alerting on miss rate deviations&lt;/h2&gt;&lt;p&gt;The last step is to set up an alert that notifies you (email, Slack) when there&amp;#39;s an anomaly in cache misses. Head to &lt;b&gt;Sentry &amp;gt; Issues &amp;gt; Alerts&lt;/b&gt;, pick &lt;b&gt;Performance Throughput&lt;/b&gt;, and &lt;a href=&quot;https://docs.sentry.io/product/alerts/&quot;&gt;create an alert&lt;/a&gt; with the following options:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Make sure you pick your project and environment correctly&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;You&amp;#39;d want to filter on &lt;code&gt;cache.hit&lt;/code&gt; being &lt;code&gt;False&lt;/code&gt;, and on your &lt;code&gt;cache.key&lt;/code&gt; as well&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Set the thresholds to &lt;b&gt;Anomaly&lt;/b&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Start with a &lt;b&gt;High&lt;/b&gt; level of responsiveness, and tune later&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;For &lt;b&gt;Direction of anomaly movement&lt;/b&gt; you&amp;#39;d want &lt;b&gt;Above bounds only&lt;/b&gt; so you only get notified on cache miss increases&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Lastly, define your action, whether you want an email to yourself or your team, or a Slack message in 
a specific channel&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Name it and hit &amp;quot;Save Rule&amp;quot;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;That&amp;#39;s it. Now if you accidentally break the caching mechanism, it&amp;#39;ll result in a flood of cache misses, and Sentry will pick it up and notify you about it. You&amp;#39;re free to filter however you like, and create as many alerts as you need. &lt;code&gt;cache.hit&lt;/code&gt; and &lt;code&gt;cache.key&lt;/code&gt; are not the only attributes you can filter on. Play with the filter bar to discover everything you can filter on.&lt;/p&gt;&lt;h2&gt;Where to go from here&lt;/h2&gt;&lt;p&gt;At this point, caching is working. The endpoint is faster, the database is protected, and you have a baseline Miss Rate that reflects normal behaviour. From here on, the work is less about adding caching, and more about making sure it keeps doing what it&amp;#39;s supposed to do.&lt;/p&gt;&lt;p&gt;The first thing to watch is &lt;b&gt;Miss Rate deviations&lt;/b&gt;, not the absolute number. A stable line that suddenly jumps usually means something changed: a cache key bug, new filters or sorting, increased cardinality, or a TTL or invalidation mistake introduced during a deploy. Those changes tend to show up in cache metrics before users start complaining.&lt;/p&gt;&lt;p&gt;Next, always &lt;b&gt;read Miss Rate together with latency&lt;/b&gt;. A higher Miss Rate that doesn&amp;#39;t affect P95/P99 is usually harmless. A higher Miss Rate that brings the database spans back into the critical path is a regression worth acting on.&lt;/p&gt;&lt;p&gt;As you expand caching, &lt;b&gt;keep an eye on Redis memory and evictions&lt;/b&gt;. Improving hit rates by caching more pages only helps if hot keys stay resident. Memory pressure that causes frequent evictions can quietly undo your gains and make cache behaviour unpredictable.&lt;/p&gt;&lt;p&gt;Finally, &lt;b&gt;revisit cache boundaries as traffic evolves&lt;/b&gt;. 
Usage patterns change. What was a long-tail page last month may become hot after a product change or a new workflow. Cache strategies should evolve with real traffic, not stay frozen around initial assumptions.&lt;/p&gt;&lt;p&gt;If you treat cache metrics as guardrails (baseline Miss Rates, latency correlations, and post-deploy checks) caching becomes a stable part of your system instead of a fragile optimization you&amp;#39;re afraid to touch.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Sentry acquires XcodeBuildMCP]]></title><description><![CDATA[Today we're announcing that Sentry has acquired XcodeBuildMCP, an open source MCP server that gives AI agents the ability to build, test, and debug native iOS a...]]></description><link>https://blog.sentry.io/sentry-acquires-xcodebuildmcp/</link><guid isPermaLink="false">https://blog.sentry.io/sentry-acquires-xcodebuildmcp/</guid><pubDate>Wed, 11 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Today we&amp;#39;re announcing that Sentry has acquired &lt;a href=&quot;https://www.xcodebuildmcp.com/&quot;&gt;XcodeBuildMCP&lt;/a&gt;, an open source MCP server that gives AI agents the ability to build, test, and debug native iOS and macOS apps.&lt;/p&gt;&lt;p&gt;XcodeBuildMCP has become a go-to tool for agentic Apple-platform development, with more than 4,000 GitHub stars and an active community. 
It unlocks the full developer loop: build, run, debug, interact, and verify, allowing users to stay in their preferred agentic development environment.&lt;/p&gt;&lt;p&gt;As part of this acquisition, the creator and maintainer &lt;a href=&quot;https://www.linkedin.com/in/cameroncooke1/&quot;&gt;Cameron Cooke&lt;/a&gt; will also join the Sentry team to help us continue to improve Sentry&amp;#39;s mobile tooling and support the new agentic development landscape.&lt;/p&gt;&lt;h2&gt;Why this fits Sentry&lt;/h2&gt;&lt;p&gt;Sentry is focused on making software more reliable and giving developers the fastest path from idea to production. For mobile teams, that path is still harder than it should be and was one of the reasons we also &lt;a href=&quot;https://sentry.io/about/press-releases/sentry-acquires-emerge-tools-to-enhance-its-mobile-app-monitoring-solution/&quot;&gt;acquired Emerge Tools in 2025&lt;/a&gt;. &lt;/p&gt;&lt;p&gt;Apple platform tooling has been slow to embrace agentic workflows, and developers are increasingly working in tools like Cursor, Claude Code, and Codex CLI rather than heavyweight IDEs.&lt;/p&gt;&lt;p&gt;XcodeBuildMCP helps close that gap. 
It gives those agents the same real-world capabilities a developer has, which means they can iterate autonomously and verify changes instead of constantly handing control back to a human.&lt;/p&gt;&lt;h2&gt;What XcodeBuildMCP enables&lt;/h2&gt;&lt;p&gt;Key capabilities include:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Build, run, and test apps on devices and simulators&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Attach a debugger, inspect stack traces, and execute code&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Capture simulator screenshots&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Interact with running apps by tapping, swiping, and typing&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Capture and stream runtime logs&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This is the closed loop developer workflow that makes agentic coding practical on Apple platforms.&lt;/p&gt;&lt;p&gt;To get started, all you have to do is add this configuration to your MCP client of choice:&lt;/p&gt;&lt;h3&gt;Example workflow&lt;/h3&gt;&lt;p&gt;XcodeBuildMCP turns high-level requests into working features. 
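&lt;/p&gt;&lt;p&gt;For the configuration step mentioned above, most MCP clients (Cursor, Claude Code, and similar) accept a server entry along these lines. Treat the exact fields as client-specific and check your client&amp;#39;s docs; this sketch assumes the npx-published package name from the project&amp;#39;s README:&lt;/p&gt;

```json
{
  "mcpServers": {
    "XcodeBuildMCP": {
      "command": "npx",
      "args": ["-y", "xcodebuildmcp@latest"]
    }
  }
}
```

&lt;p&gt;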
Here&amp;#39;s what a typical interaction looks like when an agent has access to the full development loop:&lt;/p&gt;&lt;p&gt;&lt;b&gt;User&lt;/b&gt;: &amp;quot;&lt;i&gt;Add dark mode support to my app.&lt;/i&gt;&amp;quot;&lt;/p&gt;&lt;p&gt;&lt;b&gt;Agent&lt;/b&gt;:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Edited &lt;code&gt;Theme.swift&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Edited &lt;code&gt;Settings.swift&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;XcodeBuildMCP: Found app project and build scheme&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;XcodeBuildMCP: Built and launched the app in the simulator&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;XcodeBuildMCP: Navigated to Settings&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;XcodeBuildMCP: Toggled the dark mode switch&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;XcodeBuildMCP: Captured screenshot&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Verified dark mode is enabled&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Agent Response&lt;/b&gt;: &amp;quot;&lt;i&gt;I&amp;#39;ve added dark mode support and verified it in the simulator.&lt;/i&gt;&amp;quot;&lt;/p&gt;&lt;h2&gt;How this fits alongside Apple&amp;#39;s MCP tooling&lt;/h2&gt;&lt;p&gt;Apple has just started to support agentic development with Xcode&amp;#39;s MCP tooling and agent integrations in the IDE. That&amp;#39;s a positive move, but it still assumes a heavyweight IDE-first workflow.&lt;/p&gt;&lt;p&gt;XcodeBuildMCP is IDE-agnostic. 
It supports developers who want the speed and flexibility of modern agentic tools while still building first-class Apple apps.&lt;/p&gt;&lt;p&gt;In practice, XcodeBuildMCP also provides a broader and more complete capability set, especially for runtime debugging, simulator interaction, and automation, than Apple&amp;#39;s current MCP tooling.&lt;/p&gt;&lt;h2&gt;Commitment to open source&lt;/h2&gt;&lt;p&gt;Sentry is committed to open source and to the community that built XcodeBuildMCP.&lt;/p&gt;&lt;p&gt;In 2024, we helped launch the &lt;a href=&quot;https://opensourcepledge.com/&quot;&gt;Open Source Pledge&lt;/a&gt;, a program that asks companies to contribute $2,000 per developer per year to the open source projects they depend on. We created the pledge because the world runs on open source software, but the people maintaining it are often unpaid and burned out.&lt;/p&gt;&lt;p&gt;The pledge is simple: pay the maintainers. We don&amp;#39;t think it&amp;#39;s the only way to give back, but direct funding is a good way to recognize the work maintainers do and the value they create.&lt;/p&gt;&lt;p&gt;Last year, Sentry gave &lt;a href=&quot;https://blog.sentry.io/another-year-another-750-000-to-open-source-maintainers/&quot;&gt;$750,000 to open source maintainers&lt;/a&gt;, our fifth year in a row of direct funding. More than 25 companies have joined the pledge, collectively contributing over $6.8 million to open source since launch. The more who join, the more who will join, and the stronger the open source ecosystem will be.&lt;/p&gt;&lt;p&gt;XcodeBuildMCP joins that ecosystem as a maintained, supported project that developers can rely on. &lt;/p&gt;&lt;h2&gt;Looking ahead&lt;/h2&gt;&lt;p&gt;We&amp;#39;re continuing to invest in &lt;a href=&quot;https://sentry.io/solutions/mobile-developers/&quot;&gt;mobile&lt;/a&gt; and in the tooling that accelerates modern software teams. 
XcodeBuildMCP is now part of that mission, and we&amp;#39;re just getting started.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Size Analysis is generally available in Sentry]]></title><description><![CDATA[Sentry acquired Emerge Tools in May 2025 to bring best-in-class mobile tooling to dev teams. Today, we’re officially bringing Size Analysis - one of their flags...]]></description><link>https://blog.sentry.io/size-analysis-generally-available/</link><guid isPermaLink="false">https://blog.sentry.io/size-analysis-generally-available/</guid><pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Sentry acquired &lt;a href=&quot;https://blog.sentry.io/emerge-tools-is-now-a-part-of-sentry/&quot;&gt;Emerge Tools in May 2025&lt;/a&gt; to bring best-in-class mobile tooling to dev teams. Today, we’re officially bringing &lt;a href=&quot;https://docs.sentry.io/product/size-analysis/&quot;&gt;Size Analysis&lt;/a&gt; - one of their flagship products - to all Sentry users, so you never have to worry about app size again. &lt;/p&gt;&lt;h2&gt;Automated monitoring in your CI pipeline&lt;/h2&gt;&lt;p&gt;The most common way app size grows is incrementally. Small changes add up over time and suddenly you’re getting warnings about being over the cellular download limit. Those small changes are easy to optimize as you make them, but try to address them a year later and suddenly it’s a more difficult task.&lt;/p&gt;&lt;p&gt;Size Analysis integrates into your CI workflow so you constantly have a pulse on your app size. Every build can be uploaded and diffed. 
Any time size changes, you won’t just see that it changed, you’ll see &lt;i&gt;why&lt;/i&gt; it changed, and whether there are any recommended fixes you can apply to make it smaller.&lt;/p&gt;&lt;p&gt;Let’s look at a common scenario: &lt;b&gt;adding an SDK&lt;/b&gt;.&lt;/p&gt;&lt;p&gt;Here’s our &lt;a href=&quot;https://github.com/EmergeTools/hackernews/pull/726/checks?check_run_id=62253281125&quot;&gt;status check&lt;/a&gt; for adding Kingfisher. With Size Analysis, we immediately see:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;The PR is adding ~500 kB to download size and ~1.5 MB to install size&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The PR failed because we had a preconfigured threshold to fail any check where the Install Size diff is more than 1 MB&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;We can double-check our “Comparison Page” to confirm the diff is as expected and then approve the PR. From this page we are able to see the overall size change as well as both a tabular and visual view of every file that changed size. &lt;/p&gt;&lt;p&gt;In this case, it’s clear this is the intended diff so we can approve our status check and merge the PR 🎉.&lt;/p&gt;&lt;p&gt;Let’s look at another scenario: &lt;b&gt;adding a new hero image&lt;/b&gt;.&lt;/p&gt;&lt;p&gt;We again see the overall size difference, but this time we also see two Insights. Not only were the images we added not optimized, they’re also being duplicated. Clicking in, we can see the exact files that can be optimized + how much space that would save. &lt;/p&gt;&lt;p&gt;Instead of adding 9 MB to the app, we can make this PR closer to 3 MB. Now imagine this insight is on every PR. 
Rather than having to spend cycles retroactively addressing size issues, Size Analysis prevents them from ever happening in the first place.&lt;/p&gt;&lt;p&gt;Whether you want to upload every single build or upload once a week with your release builds, Size Analysis will bring visibility and actionable insights to your app’s size.&lt;/p&gt;&lt;h2&gt;Reducing your app’s size&lt;/h2&gt;&lt;p&gt;Automated monitoring is critical to make sure your app size doesn’t creep up, but in reality, your app might already need slimming down. For every build that you upload, Size Analysis will give you a detailed breakdown of where size is coming from + how you can reduce its size.&lt;/p&gt;&lt;p&gt;Here we have the CalAI app (analyzed from a public App Store build). CalAI has an install size of ~250 MB. Looking at the Size Analysis, we can see very quickly that a large chunk of the app size (30%) comes from an &lt;code&gt;asset.car&lt;/code&gt; file being included twice.&lt;/p&gt;&lt;p&gt;We can open the Insight details to see a list of all insights: &lt;/p&gt;&lt;p&gt;Or we can even highlight all Insights on the treemap itself and hover over nodes to see how their size can be reduced:&lt;/p&gt;&lt;p&gt;Looking at a build’s details is great for seeing easy size win opportunities, but it also helps you understand where size is coming from. Below is the &lt;a href=&quot;http://chess.com/&quot;&gt;Chess.com&lt;/a&gt; app (also analyzed from a public App Store build):&lt;/p&gt;&lt;p&gt;We see a number of Insights for duplicate files and unnecessary binary data, but we can also see a fairly large &lt;code&gt;openingbook.json&lt;/code&gt; node.&lt;/p&gt;&lt;p&gt;21 MB is a lot of JSON and something to be immediately suspicious of. Inspecting this file, we can see it’s actually a list of all the possible openings that Chess.com highlights (if you’ve ever played a game on Chess.com and seen the esoteric opening names, this is where it’s coming from). 
&lt;/p&gt;&lt;p&gt;The problem here is that the JSON is not being minified, so a simple minification takes it from 21 MB → 13 MB. Size could be further reduced by making this a SQLite file, or by more aggressively de-duping keys, similar to what we described &lt;a href=&quot;https://x.com/emergetools/status/1610700661984739328&quot;&gt;here&lt;/a&gt; with localization size.&lt;/p&gt;&lt;p&gt;Chess JSON rabbit-hole aside, the point is that Size Analysis makes it obvious where your size is coming from and how you can reduce it. Applying Size Analysis wins is as easy as:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Automate monitoring so no extra cruft gets in the app&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Pick off size reduction opportunities as you have bandwidth (or soon, just ask &lt;a href=&quot;https://sentry.io/product/seer/&quot;&gt;Seer&lt;/a&gt; 😉)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Track over time&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h2&gt;Getting started with Size Analysis&lt;/h2&gt;&lt;p&gt;Every Sentry plan includes 100 build uploads per monthly billing period (higher upload volumes are available via our Enterprise plan). You can view uploaded builds on the Releases page under Mobile Builds, and optionally add automated size change notifications to PRs. For the exact setup steps, &lt;a href=&quot;https://docs.sentry.io/product/size-analysis/&quot;&gt;see the Size Analysis docs&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;If you don’t have a Sentry account, &lt;a href=&quot;https://sentry.io/signup/&quot;&gt;start one for free&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Watching everything is watching nothing: Sampling strategy for Sentry]]></title><description><![CDATA[TL;DR - Blanket sampling rates can be wasteful or inefficient. 
Capture 100% of the signal with less of the noise and fine-tune how you monitor your applications...]]></description><link>https://blog.sentry.io/sampling-strategy-sentry/</link><guid isPermaLink="false">https://blog.sentry.io/sampling-strategy-sentry/</guid><pubDate>Mon, 02 Feb 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;b&gt;TL;DR - Blanket sampling rates can be wasteful or inefficient. Capture 100% of the signal with less of the noise and fine-tune how you monitor your applications with custom sampling logic&lt;/b&gt;&lt;/p&gt;&lt;p&gt;In a high-traffic production environment, telemetry is your most direct link to the user experience. Every &lt;a href=&quot;https://docs.sentry.io/concepts/key-terms/tracing/#whats-a-span&quot;&gt;Span&lt;/a&gt;, &lt;a href=&quot;https://sentry.io/product/tracing/&quot;&gt;Trace&lt;/a&gt;, &lt;a href=&quot;https://sentry.io/product/logs/&quot;&gt;Log&lt;/a&gt;, and &lt;a href=&quot;https://sentry.io/product/session-replay/&quot;&gt;Replay&lt;/a&gt; sent to Sentry gives you high-fidelity visibility into what is actually happening in production.&lt;/p&gt;&lt;p&gt;But to extract the most value out of that visibility, you have to know how to filter signal from noise. If you treat a routine &amp;quot;page load&amp;quot; on a stable legacy route with the same intensity as a critical experience, like a checkout flow, or a brand-new feature launch, you aren&amp;#39;t optimizing the data you collect.&lt;/p&gt;&lt;p&gt;To build an observability strategy that survives scale and doesn’t break quotas, you need to move past &amp;quot;blanket sampling.&amp;quot; You want to prioritize high-resolution data where things are critical or changing fast, and optimize your setup where the system is stable.&lt;/p&gt;&lt;h2&gt;Why not sample 100% of everything?&lt;/h2&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;You can!&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;And if your app is small or brand new, that might actually be the right plan. 
But as you scale, &amp;quot;100% of everything&amp;quot; usually stops being a practical option, for a couple of reasons:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Signal-to-Noise:&lt;/b&gt; Telemetry data is more useful when you know what happened at a glance. Needing to parse through 1 million “user clicked button” spans to discover the 100 times users experienced issues during checkout isn’t efficient for you, for queries, or for any LLMs consuming the same information.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;The Network Footprint:&lt;/b&gt; While the SDK is highly optimized, every interaction and every function call adds up. By sampling only what you need, performance stays high while still collecting valuable information.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;The basic dials: Static sample rates&lt;/h2&gt;&lt;p&gt;When you initialize Sentry, you get four main options for adjusting how much data you send. Understanding how these interact is the first step:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;&lt;b&gt;sampleRate&lt;/b&gt;&lt;/code&gt;: This is for errors. We almost always leave this at 1.0 because if something breaks, we want to know every single time.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;&lt;b&gt;tracesSampleRate&lt;/b&gt;&lt;/code&gt;: This provides a cross-section of your traffic. It’s your primary lever for managing the volume of performance data.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;&lt;b&gt;replaysSessionSampleRate&lt;/b&gt;&lt;/code&gt;: This records the whole session from the start. It&amp;#39;s high-fidelity, so you usually only need a small percentage to see how the &amp;quot;average&amp;quot; user navigates.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;&lt;b&gt;replaysOnErrorSampleRate&lt;/b&gt;&lt;/code&gt;: This is a buffer. 
It only sends the replay if an error occurs, capturing the 60 seconds of activity leading up to the error.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h2&gt;Precision control: The &lt;code&gt;tracesSampler&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;Traces are potentially the most important telemetry we have, responsible for monitoring performance, errors, and connecting all of our data together. In a production environment, you almost always want to sample 100% of traces. However, especially when dealing with very high-traffic applications, you don’t &lt;i&gt;need&lt;/i&gt; to collect all of your traces if you trace strategically.&lt;/p&gt;&lt;p&gt;Instead of a blanket percentage, Sentry lets you pass a &lt;code&gt;tracesSampler&lt;/code&gt; function in place of a static &lt;code&gt;&lt;b&gt;tracesSampleRate&lt;/b&gt;&lt;/code&gt;. This allows you to make a decision in real-time based on the context of the request.&lt;/p&gt;&lt;h3&gt;What’s in the sampling context?&lt;/h3&gt;&lt;p&gt;When a span starts, the sampler receives a &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/react/tracing/configure-sampling/#the-sampling-context-object&quot;&gt;&lt;code&gt;samplingContext&lt;/code&gt; object&lt;/a&gt;. 
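&lt;/p&gt;&lt;p&gt;To make that concrete, here&amp;#39;s a minimal sketch of a sampler. The route names and rates are illustrative, and the surrounding &lt;code&gt;Sentry.init&lt;/code&gt; wiring is shown in comments rather than run:&lt;/p&gt;

```javascript
// Hedged sketch of a tracesSampler: it receives the samplingContext and
// returns a sample rate between 0 and 1 (route names and rates are made up).
function tracesSampler({ name, inheritOrSampleWith }) {
  if (name.includes("/health")) return 0;     // drop noisy health checks
  if (name.includes("/checkout")) return 1.0; // full resolution on the critical flow
  return inheritOrSampleWith(0.1);            // inherit an upstream decision, else 10%
}

// Wired into init alongside the static dials, roughly:
//
// Sentry.init({
//   sampleRate: 1.0,                 // errors: keep every one
//   tracesSampler,                   // replaces a flat tracesSampleRate
//   replaysSessionSampleRate: 0.05,  // small slice of ordinary sessions
//   replaysOnErrorSampleRate: 1.0,   // always keep the error buffer
// });
```

&lt;p&gt;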
You’ll have this data automatically and can use it, along with anything else you may pass in, to decide if you should sample.&lt;/p&gt;&lt;p&gt;In your &lt;code&gt;tracesSampler&lt;/code&gt; function, either read values off the context object or destructure the ones you need directly, and return a value ranging from &lt;code&gt;0&lt;/code&gt; to &lt;code&gt;1&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;You can find a few&lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/react/tracing/configure-sampling/#traces-sampler-examples&quot;&gt; useful example sampling functions&lt;/a&gt; in our docs.&lt;/p&gt;&lt;h3&gt;Staying in sync with &lt;code&gt;inheritOrSampleWith&lt;/code&gt;&lt;/h3&gt;&lt;p&gt;If the backend has already decided to sample a trace, the frontend should usually follow suit. You can use the &lt;code&gt;inheritOrSampleWith&lt;/code&gt; utility to handle this. Destructure it from the &lt;code&gt;samplingContext&lt;/code&gt; and call it to inherit the backend&amp;#39;s sampling decision, or fall back to another value.&lt;/p&gt;&lt;h2&gt;Smart sampling with Session Replay&lt;/h2&gt;&lt;p&gt;Traces tell you &lt;i&gt;what&lt;/i&gt; happened and &lt;i&gt;where&lt;/i&gt; (e.g., &amp;quot;The database was slow&amp;quot;). Replays &lt;i&gt;show&lt;/i&gt; you &lt;i&gt;how&lt;/i&gt; it happened (e.g., &amp;quot;The user clicked the button five times because the loading spinner didn&amp;#39;t show up&amp;quot;).&lt;/p&gt;&lt;p&gt;By default, Session Replay is configured to record a percentage of all user sessions, based on &lt;code&gt;replaysSessionSampleRate&lt;/code&gt;. There is also a separate sample rate that runs in a buffer and only submits if the user encounters an error; we typically leave that one at &lt;code&gt;1.0&lt;/code&gt; or a very high percentage.&lt;/p&gt;&lt;p&gt;While a typical Sentry plan comes with 5 million spans, it includes only 50 Session Replays (though you can purchase more of both). 
Not only do we have our plan quotas to worry about, but recording a Session Replay is also a little more taxing on our users.&lt;/p&gt;&lt;p&gt;While setting a sample rate is a good start, in a production application, we may want to be a bit more intentional about when we record. We may want to record certain parts of our app with greater frequency, like new features or parts of the critical experience flow.&lt;/p&gt;&lt;p&gt;While there isn&amp;#39;t a &lt;code&gt;replaysSampler&lt;/code&gt; yet, it is possible to manually initiate and manage replays using &lt;code&gt;replay.start()&lt;/code&gt;.&lt;/p&gt;&lt;h3&gt;Custom &lt;code&gt;useSessionReplay&lt;/code&gt; Hook&lt;/h3&gt;&lt;p&gt;Session replays can be configured with an overall sample rate, but we can also choose to &lt;i&gt;manually&lt;/i&gt; instrument Session Replay captures ourselves.&lt;/p&gt;&lt;p&gt;Let’s take a look at how we could build a custom React Hook for our React/Next.js projects that helps us dynamically control how often we sample Session Replays per page.&lt;/p&gt;&lt;p&gt;First, remember to set &lt;code&gt;replaysSessionSampleRate&lt;/code&gt; to &lt;code&gt;0&lt;/code&gt;, since we&amp;#39;ll be implementing our own sampling manually.&lt;/p&gt;&lt;p&gt;Then, add this wherever you store your hooks in your React app. We’ll use this in our app’s pages to define where we want to sample and how often, and we’ll even be able to inject some additional context.&lt;/p&gt;&lt;p&gt;This will let you dial in session replays to where you need them most.&lt;/p&gt;&lt;p&gt;It’s worth noting this method is not 100% free of downsides. It will only start recording after the component is mounted, so you may miss out on hydration issues that happen in the earliest part of the render. 
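&lt;/p&gt;&lt;p&gt;As a rough sketch of the idea (the names and shape here are illustrative, not the exact hook from this post): in React, you would call something like this from a &lt;code&gt;useEffect&lt;/code&gt; on mount, passing in the replay instance from &lt;code&gt;Sentry.getReplay()&lt;/code&gt;:&lt;/p&gt;

```javascript
// Illustrative helper behind a useSessionReplay-style hook.
// `replay` is the Session Replay instance, `setTag` attaches extra context
// (e.g. Sentry.setTag), and `random` is injectable for testing.
function sampleSessionReplay({ page, sampleRate, replay, setTag, random = Math.random }) {
  // Roll the dice once for this page visit.
  const shouldRecord = random() < sampleRate;
  if (shouldRecord) {
    if (setTag) setTag("replay.page", page); // inject additional context
    replay.start(); // manually begin recording this session
  }
  return shouldRecord;
}
```

&lt;p&gt;A real hook would also guard against starting a replay twice. And note again that recording only begins once the component mounts.&lt;/p&gt;&lt;p&gt;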
Though the replay may not show those early-render issues, we will still capture them in&lt;a href=&quot;https://docs.sentry.io/product/insights/frontend/web-vitals/&quot;&gt; Web Vitals&lt;/a&gt;.&lt;/p&gt;&lt;h2&gt;What about Logs?&lt;/h2&gt;&lt;p&gt;Logs are a&lt;a href=&quot;https://sentry.io/product/logs/&quot;&gt; recent addition&lt;/a&gt; to Sentry. If you set &lt;code&gt;enableLogs&lt;/code&gt; in your&lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/react/logs/#setup&quot;&gt; Sentry init config&lt;/a&gt;, you can start sending logs to Sentry – either via the Sentry Logger from the SDK, or an integration like the&lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/react/logs/#console-logging-integration&quot;&gt; Console Logging Integration&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;Sentry receives 100% of logs by default if the integration is enabled. However, in high-traffic environments, you should filter logs at the source, using Sentry’s&lt;a href=&quot;https://docs.sentry.io/platforms/javascript/logs/#beforesendlog&quot;&gt; &lt;code&gt;beforeSendLog&lt;/code&gt;&lt;/a&gt; or another logging tool, like&lt;a href=&quot;https://blog.sentry.io/trace-connected-structured-logging-with-logtape-and-sentry/&quot;&gt; LogTape&lt;/a&gt; with &lt;i&gt;structured logging&lt;/i&gt;.&lt;/p&gt;&lt;p&gt;We recently talked about using&lt;a href=&quot;https://blog.sentry.io/trace-connected-structured-logging-with-logtape-and-sentry/&quot;&gt; LogTape with the Sentry sink&lt;/a&gt; to elevate how we approach logging in our production apps.&lt;/p&gt;&lt;p&gt;At the most basic level, we can limit what logs are sent, per sink, based on the “severity” level. We may instrument our app with debug logs, but at scale, these may be noisy and eat into our quotas. Instead, we can filter out the lowest levels of logs by configuring &lt;code&gt;lowestLevel&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;With well-planned, structured logging, we can easily filter on specific data attributes. 
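&lt;/p&gt;&lt;p&gt;For instance, a &lt;code&gt;beforeSendLog&lt;/code&gt; callback can combine severity filtering with attribute-based filtering and scrubbing. This is only a sketch; the attribute names are hypothetical:&lt;/p&gt;

```javascript
// Sketch of a beforeSendLog callback: return the log to send it,
// or null to drop it.
function beforeSendLog(log) {
  // Drop noisy debug logs at the source.
  if (log.level === "debug") return null;

  // Drop routine info logs from a chatty (hypothetical) subsystem.
  if (log.level === "info" && log.attributes?.subsystem === "heartbeat") {
    return null;
  }

  // Scrub a sensitive attribute before it leaves the app.
  if (log.attributes?.["user.email"]) {
    log.attributes["user.email"] = "[redacted]";
  }
  return log;
}
```

&lt;p&gt;Pass a function like this as the &lt;code&gt;beforeSendLog&lt;/code&gt; option in your Sentry init config.&lt;/p&gt;&lt;p&gt;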
This is useful not only for limiting and processing what data we send (and where) for signal-to-noise reasons, but also for data scrubbing.&lt;/p&gt;&lt;p&gt;Using&lt;a href=&quot;https://logtape.org/manual/filters&quot;&gt; LogTape filters&lt;/a&gt;, we can implement our own filtering logic, and even our own sampling logic if we wished.&lt;/p&gt;&lt;p&gt;Read more about &lt;a href=&quot;https://blog.sentry.io/trace-connected-structured-logging-with-logtape-and-sentry/&quot;&gt;filtering and querying logs using LogTape&lt;/a&gt;.&lt;/p&gt;&lt;h2&gt;Summary: Sampling telemetry at scale&lt;/h2&gt;&lt;p&gt;Setting up Sentry for the first time is simple: instrument your SDK, set your sample rates to 100%, and watch all your errors, performance data, logs, session replays, and more come in. But as traffic ramps up and monitoring priorities change, you ideally want to separate the noise from the signal and focus attention where it is needed.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Traces:&lt;/b&gt; Traces map and connect events in your apps. While you typically want to record most if not all of your traces, with enough traffic, it makes sense to prioritize high-impact targets.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Session Replays:&lt;/b&gt; Replays are your highest-fidelity tool, but they carry the most weight. Instead of a blanket percentage, use &lt;code&gt;replay.start()&lt;/code&gt; to trigger recordings only during critical user flows, like at checkout or where new features are enabled, and the visual context is highly valuable.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Logs:&lt;/b&gt; Use &lt;code&gt;beforeSendLog&lt;/code&gt; to filter by severity or custom metadata. Stop sending &amp;quot;Info&amp;quot; logs for every routine event; adopt structured logging so that when a log does hit Sentry, it’s already formatted to be searchable and high-signal.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;New to Sentry? 
&lt;a href=&quot;https://sentry.io/signup/&quot;&gt;Sign up&lt;/a&gt; to get started, or check out our quickstart guides for &lt;a href=&quot;https://sentry.io/quickstart/logs/&quot;&gt;Logs&lt;/a&gt; and &lt;a href=&quot;https://sentry.io/quickstart/session-replay/&quot;&gt;Session Replay&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Less code, faster builds, same telemetry: Turbopack support for the Next.js SDK]]></title><description><![CDATA[TL;DR - Turbopack became the default in Next.js, so we reworked our SDK to stop depending on bundlers. The result is less code, faster builds, and the same tele...]]></description><link>https://blog.sentry.io/turbopack-support-next-js-sdk/</link><guid isPermaLink="false">https://blog.sentry.io/turbopack-support-next-js-sdk/</guid><pubDate>Thu, 29 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&lt;b&gt;TL;DR - Turbopack became the default in Next.js, so we reworked our SDK to stop depending on bundlers. The result is less code, faster builds, and the same telemetry. This blog explains how we got there.&lt;/b&gt;&lt;/p&gt;&lt;p&gt;You know the feeling when you spend years building tooling that supports something, and all of a sudden that something becomes deprecated and you have to rethink your full approach?&lt;/p&gt;&lt;p&gt;And no, this isn’t a post about Ralph Wiggum, the &lt;a href=&quot;https://ghuntley.com/ralph/&quot;&gt;recursive agent practice&lt;/a&gt; that we all as a community decided was okay to name that way and roll with it.&lt;/p&gt;&lt;p&gt;This is about Next.js rolling out Turbopack, deprecating Webpack (as of Next.js v16, Turbopack is the default), and us rethinking our telemetry approach in the SDK.&lt;/p&gt;&lt;h2&gt;What we were doing before&lt;/h2&gt;&lt;p&gt;When you ran &lt;code&gt;next build&lt;/code&gt;, our Webpack loader intercepted every page, API route, middleware, and server component. 
It would:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Parse your file to determine its type&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Bundle it with a Sentry wrapper template using Rollup&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Replace your original code with the instrumented version&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;This worked. It also required maintaining six different wrapper templates, a 360-line webpack loader, and roughly 1,667 lines of instrumentation code. Every new Next.js feature — server components, route handlers, the App Router — meant writing another template and hoping Webpack&amp;#39;s internals hadn&amp;#39;t changed since last time we looked.&lt;/p&gt;&lt;p&gt;We were building a parallel universe where every file had a Sentry-wrapped doppelgänger. It&amp;#39;s the kind of architecture that works until it doesn&amp;#39;t, and when it doesn&amp;#39;t, good luck figuring out which layer broke.&lt;/p&gt;&lt;h2&gt;What we do now&lt;/h2&gt;&lt;p&gt;Next.js has built-in OpenTelemetry instrumentation. It emits spans for every request, middleware execution, and render operation — complete with route information. Instead of wrapping your code, we listen.&lt;/p&gt;&lt;p&gt;The snippet above is the clean version. In reality, we overwrite the standard Next.js OTel config with Sentry-specific parts (span processors, context manager, etc.) and customize our HTTP integration to disable incoming Next.js-generated spans so we can enrich them ourselves. Still simpler than six Rollup templates.&lt;/p&gt;&lt;p&gt;The Turbopack-specific code is about 164 lines. 
That&amp;#39;s a 10x reduction from the Webpack approach, and most of those lines are config handling, not instrumentation logic.&lt;/p&gt;&lt;h2&gt;What this actually means for your builds&lt;/h2&gt;&lt;p&gt;Charly, an engineer on our Next.js SDK team, ran a test against the &lt;a href=&quot;https://github.com/getsentry/sentry-changelog&quot;&gt;Sentry Changelog repo&lt;/a&gt; to see the difference:&lt;/p&gt;&lt;p&gt;The SDK no longer runs Rollup on every file in your application during compilation. If you&amp;#39;ve ever wondered why &lt;code&gt;next build&lt;/code&gt; was taking a while, some of that was us. (&lt;i&gt;Sorry.&lt;/i&gt;)&lt;/p&gt;&lt;p&gt;The SDK automatically detects whether you&amp;#39;re using Turbopack or Webpack and adjusts. If you&amp;#39;re on Next.js 15.4.1 or later, Turbopack just works. The tracing data you see in Sentry looks the same — route handlers, middleware, server components, data fetching. All still there.&lt;/p&gt;&lt;h2&gt;What changes&lt;/h2&gt;&lt;p&gt;Some Webpack configuration options no longer apply when using Turbopack:&lt;/p&gt;&lt;p&gt;If you were excluding specific routes from instrumentation, you&amp;#39;ll need to filter them via Sentry&amp;#39;s &lt;code&gt;beforeSendTransaction&lt;/code&gt; hook instead. The SDK relies on Next.js&amp;#39;s OpenTelemetry instrumentation, so there&amp;#39;s no build-time wrapping to opt out of.&lt;/p&gt;&lt;p&gt;Server Actions still require manual instrumentation:&lt;/p&gt;&lt;p&gt;Server Actions don&amp;#39;t emit OTel spans we can hook into. We&amp;#39;re watching Next.js development here — if they expose the telemetry, we&amp;#39;ll add automatic instrumentation.&lt;/p&gt;&lt;h2&gt;Why this matters beyond our codebase&lt;/h2&gt;&lt;p&gt;Instead of every APM vendor maintaining their own build plugins, loaders, and bundler integrations, frameworks are adopting OpenTelemetry as a standard telemetry interface. Next.js emits spans. We consume them. 
The framework handles the &amp;quot;how,&amp;quot; we handle the &amp;quot;where it goes.&amp;quot;&lt;/p&gt;&lt;p&gt;This is where framework instrumentation is heading. We just got there by deleting code instead of writing more of it.&lt;/p&gt;&lt;h2&gt;Requirements&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Next.js 15.4.1+ for Turbopack production builds&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Next.js 15.6+ for native Debug IDs (improves source map resolution)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;No changes to your &lt;code&gt;instrumentation.ts&lt;/code&gt; or &lt;code&gt;instrumentation-client.ts&lt;/code&gt; files&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Next.js 16 makes Turbopack the default bundler and the SDK is ready for it.&lt;/p&gt;&lt;p&gt;The 15-month journey from &amp;quot;Turbopack: unsupported&amp;quot; to &amp;quot;Turbopack: default&amp;quot; involved 19 PRs, a complete architectural rethink, and the realization that the best way to support a new bundler was to stop depending on bundlers altogether.  That approach now powers our &lt;a href=&quot;https://sentry.io/for/nextjs/&quot;&gt;Next.js SDK&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Log Drains now available: Bringing your platform logs directly into Sentry]]></title><description><![CDATA[Sentry now supports log drains, making it easy to forward logs into Sentry without any application code changes or manual project-key lookups needed. If your lo...]]></description><link>https://blog.sentry.io/log-drains-now-available/</link><guid isPermaLink="false">https://blog.sentry.io/log-drains-now-available/</guid><pubDate>Wed, 28 Jan 2026 08:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Sentry now supports log drains, making it easy to forward logs into Sentry without any application code changes or manual project-key lookups needed. 
If your logs already exist somewhere else, you can now see them alongside errors and traces in Sentry, no code changes required.&lt;/p&gt;&lt;p&gt;Already want to get started? The &lt;a href=&quot;https://sentry.io/quickstart/logs/&quot;&gt;quickstart guide&lt;/a&gt; is one click away.&lt;/p&gt;&lt;h2&gt;Get all your logs in one place connected to issue context&lt;/h2&gt;&lt;p&gt;When we made logs generally available in Sentry back in September 2025, the goal was to enable developers to view logs, traces, errors, and replays in a single platform. And the feedback was largely about having the right logs attached to the right issues by default.&lt;/p&gt;&lt;p&gt;Now with &lt;a href=&quot;https://docs.sentry.io/product/drains/&quot;&gt;log drains&lt;/a&gt;, your platform logs (and traces) automatically flow into Sentry so the same “extra set of eyes” extends to platform-level events outside your application code. &lt;/p&gt;&lt;p&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=LQBvbQcbpIY&quot;&gt;If you are thinking “just show me an example,” watch our DevEx team walk through Vercel logs in Sentry.&lt;/a&gt;&lt;/p&gt;&lt;p&gt;
By pulling platform logs into the same place as your application errors and traces, teams get a complete picture of how systems behave across builds, deploys, edge runtimes, databases, and auth layers—without running additional agents or touching application code. &lt;/p&gt;&lt;p&gt;Instead of jumping between dashboards or losing logs to short retention windows, engineers can investigate issues end-to-end in Sentry.&lt;/p&gt;&lt;p&gt;&lt;a href=&quot;https://sentry.io/quickstart/logs/&quot;&gt;Get started&lt;/a&gt; with 5GB of logs included on every plan (with additional usage at $0.50 per GB). &lt;/p&gt;&lt;h2&gt;How are teams already using drains?&lt;/h2&gt;&lt;h3&gt;Debugging a Vercel deployment without leaving Sentry&lt;/h3&gt;&lt;p&gt;After a deploy, an ecommerce team sees a spike in client-side failures in Sentry. Browser events and logs captured by the Sentry SDK, like &lt;code&gt;ui.render_failed &lt;/code&gt;and&lt;code&gt; api.fetch_failed&lt;/code&gt;, cluster around the billing page &lt;code&gt;route=/settings/billing&lt;/code&gt;, mostly affecting Safari users in one region. The SDK gives them the who and where, with route, user agent, region, and release already attached. And because they add &lt;code&gt;vercel.deployment_id&lt;/code&gt; as a custom tag in Sentry, it’s easy to see the spike lines up with a single deploy rather than a broader issue.&lt;/p&gt;&lt;p&gt;From there, the team pivots to &lt;b&gt;Vercel logs in Sentry&lt;/b&gt;, filtering to log drain events using &lt;code&gt;origin:auto.log_drain.vercel &lt;/code&gt;for the same time window. Grouping runtime logs by the resolved function path &lt;code&gt;vercel.path&lt;/code&gt; and where the code actually ran &lt;code&gt;vercel.execution_region&lt;/code&gt; reveals a clear hotspot. Requests to /api/billing/subscription are returning 5xx responses, concentrated in a single region.&lt;/p&gt;&lt;p&gt;Now the same failure is visible from two useful angles. 
The SDK view shows what went wrong inside the application, with stack traces and app context. The Vercel log drain view adds the surrounding runtime details like request IDs, duration, memory usage, and stderr output. Switching between the two makes it easier to understand not just the error, but how it behaved in production.&lt;/p&gt;&lt;p&gt;Build logs for the deploy using &lt;code&gt;vercel.source:build&lt;/code&gt; are clean, confirming the deploy itself succeeded. Looking next at &lt;b&gt;Vercel firewall logs&lt;/b&gt; using &lt;code&gt;vercel.source:firewall&lt;/code&gt; fills in the final piece. There is a spike in deny actions for the same route at the edge path (&lt;code&gt;vercel.proxy.path&lt;/code&gt;) in the affected region. These platform signals explain why some requests never reach application code.&lt;/p&gt;&lt;p&gt;Putting it all together, the team sees that the billing page fails because its backing API intermittently fails and in some cases is blocked within a specific region. They add log-based alerts on runtime 5xxs and firewall actions, grouped by path and region, so future regressions are immediately tied back to a specific deploy and blast radius.&lt;/p&gt;&lt;h3&gt;Debugging Supabase auth and database issues&lt;/h3&gt;&lt;p&gt;A team using Supabase for Postgres relied on Sentry SDKs in their application services, but had limited visibility into issues originating inside Supabase itself. Database errors were only available in the Supabase dashboard with limited retention, making post-incident investigation difficult.&lt;/p&gt;&lt;p&gt;By enabling a Supabase Log Drain, the team forwarded Supabase Postgres logs into Sentry without changing application code. This surfaced database activity in the same place as their application telemetry, searchable with queries like:&lt;/p&gt;&lt;p&gt;&lt;code&gt;service:supabase AND message:*error*&lt;/code&gt;&lt;/p&gt;&lt;p&gt;In one incident, an increase in login failures lined up with Supabase database logs showing repeated errors related to expired tokens (&lt;code&gt;message:*JWT*expired*&lt;/code&gt;). 
With those logs retained in Sentry, the team quickly identified a misconfigured token lifetime rather than an application issue, avoided unnecessary code changes, and resolved the problem directly in Supabase.&lt;/p&gt;&lt;h3&gt;Bringing Cloudflare Worker logs into Sentry&lt;/h3&gt;&lt;p&gt;A team running an API behind Cloudflare Workers used Sentry SDKs in their core services, but Worker behavior remained a blind spot. Requests were occasionally failing due to routing, caching, or request-size issues, yet Cloudflare Worker logs only lived in the Cloudflare dashboard and were often unavailable during incident reviews.&lt;/p&gt;&lt;p&gt;After enabling a Cloudflare Log Drain, the team streamed Cloudflare Worker application logs into Sentry without deploying agents or modifying application code. They were able to search Worker errors using queries like:&lt;/p&gt;&lt;p&gt;&lt;code&gt;service:cloudflare AND message:*error*&lt;/code&gt;&lt;/p&gt;&lt;p&gt;During one incident, a spike in 4xx errors aligned with Worker logs showing repeated request-size rejections (&lt;code&gt;message:*request body too large*&lt;/code&gt;) from a single region. With these logs visible in Sentry, the team identified the issue as an edge configuration problem rather than a backend failure, avoided unnecessary service changes, and fixed the issue directly in Cloudflare.&lt;/p&gt;&lt;h2&gt;Ready to get started? &lt;/h2&gt;&lt;p&gt;Logs are available for all plans. Every plan includes 5GB of logs, with additional usage at $0.50 per GB and a 30-day log lookback  (&lt;i&gt;plus an unlimited 14-day trial you can start anytime&lt;/i&gt;).

For setup details, see our &lt;a href=&quot;https://docs.sentry.io/product/explore/logs/&quot;&gt;logs&lt;/a&gt; and &lt;a href=&quot;https://docs.sentry.io/product/drains/&quot;&gt;log drains&lt;/a&gt; documentation or choose your platform below: &lt;/p&gt;&lt;p&gt;Platform drains&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/product/drains/#:~:text=Traces-,Vercel,-%E2%9C%85&quot;&gt;Vercel&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/product/drains/integration/cloudflare/&quot;&gt;Cloudflare&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/product/drains/integration/heroku/&quot;&gt;Heroku&lt;/a&gt; &lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/product/drains/integration/supabase/&quot;&gt;Supabase&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Forwarders&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/product/drains/integration/opentelemetry-collector/&quot;&gt;OpenTelemetry Collector&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/product/drains/integration/vector/&quot;&gt;Vector&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/product/drains/integration/fluentbit/&quot;&gt;Fluent Bit&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Once enabled, logs typically show up within seconds and are automatically associated with related errors and traces—no extra configuration required.&lt;/p&gt;&lt;p&gt;Not a Sentry User? &lt;a href=&quot;https://sentry.io/signup/&quot;&gt;Start your free trial. 
&lt;/a&gt;&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Seer: debug with AI at every stage of development]]></title><description><![CDATA[When we launched Seer, our AI debugging agent, we built it on a core belief: production context is essential for understanding the complex failure modes of real...]]></description><link>https://blog.sentry.io/seer-debug-with-ai-at-every-stage-of-development/</link><guid isPermaLink="false">https://blog.sentry.io/seer-debug-with-ai-at-every-stage-of-development/</guid><pubDate>Tue, 27 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;When we launched Seer, our AI debugging agent, we built it on a core belief: production context is essential for understanding the complex failure modes of real-world software. Seer uses the detailed telemetry that Sentry collects (errors, spans, logs, metrics, and more) to accurately root cause and fix bugs. Because this telemetry is trace-connected, Seer can deterministically traverse all the data relevant to a problem rather than relying exclusively on imprecise time-range searches.&lt;/p&gt;&lt;p&gt;Coding agents can find some bugs by reading source code, but others are only reliably identifiable by observing runtime behavior. In distributed systems, failures often cross network boundaries: an unhealthy service can trigger timeouts or cascading failures elsewhere, and some issues only occur under load. The same applies to understanding performance characteristics. A p95 latency spike might stem from lock contention, a saturated connection pool, or other root causes not obvious from the code. Runtime context provides the evidence Seer needs to accurately diagnose and fix these real-world problems.&lt;/p&gt;&lt;p&gt;While fixing bugs in production will always be a critical use case, the best bugs are those you never ship. 
&lt;/p&gt;&lt;p&gt;Today, we&amp;#39;re shifting left and expanding Seer&amp;#39;s capabilities to help you debug during local development and code review, alongside a new flat price for unlimited use.&lt;/p&gt;&lt;h2&gt;Debug as you build locally&lt;/h2&gt;&lt;p&gt;Bugs are easiest to fix at the moment they&amp;#39;re introduced. The &lt;a href=&quot;https://mcp.sentry.dev/&quot;&gt;&lt;u&gt;Sentry MCP server&lt;/u&gt;&lt;/a&gt; connects your local coding agent to a powerful debugging feedback loop, enabling you to catch and resolve issues during development rather than in code review or production. As you reproduce bugs locally, telemetry flows from your application to Sentry, where the agent can access raw events for context or invoke Seer to run a full root cause analysis. Your coding agent gets everything it needs to generate a patch before the code leaves your local environment.&lt;/p&gt;&lt;h2&gt;Code review that catches real defects&lt;/h2&gt;&lt;p&gt;Still, no local environment catches everything. For bugs that slip through, Seer steps in at code review time, identifying issues in your pull request before they&amp;#39;re merged. Seer focuses on finding real bugs that risk breaking production, not low-signal suggestions or stylistic nitpicks. Catching these defects at review time means fewer incidents, faster releases, and less time spent debugging in production.&lt;/p&gt;&lt;p&gt;If you already use Seer, set up code review by installing the &lt;a href=&quot;https://docs.sentry.io/organization/integrations/source-code-mgmt/github/&quot;&gt;&lt;u&gt;GitHub integration&lt;/u&gt;&lt;/a&gt; and connecting GitHub repos in &lt;a href=&quot;https://sentry.io/orgredirect/settings/organziations/orgredirect/seer/repos&quot;&gt;&lt;u&gt;Seer Settings&lt;/u&gt;&lt;/a&gt;.&lt;/p&gt;&lt;h2&gt;Automate root cause analysis in production&lt;/h2&gt;&lt;p&gt;Even with these safeguards, some bugs will reach production. 
When they do, Seer identifies the most actionable issues and automatically uses your runtime telemetry to determine the root cause in the background. When Seer is highly confident that an issue is actionable, it can go further and generate code changes to fix the bug, or delegate to coding agents like &lt;a href=&quot;https://blog.sentry.io/seer-can-now-trigger-cursor-agents-to-fix-your-bugs/&quot;&gt;&lt;u&gt;Cursor&lt;/u&gt;&lt;/a&gt; to implement the fix on your behalf.&lt;/p&gt;&lt;h2&gt;Investigate the unknown&lt;/h2&gt;&lt;p&gt;Seer&amp;#39;s automated root cause analysis works well when Sentry has already identified an issue. But sometimes the problem isn&amp;#39;t a bug that Sentry has flagged. A customer reports that something feels off, or a dashboard shows a metric trending in the wrong direction. For these less structured investigations, we&amp;#39;re building a new experimental capability: the ability to ask Seer open-ended questions about your data or anything you see across Sentry. Describe what you&amp;#39;re observing, and Seer will query across your telemetry to surface relevant patterns and anomalies, helping you turn your hunch into an actionable root cause.&lt;/p&gt;&lt;p&gt;This feature is currently in development, but it’s available in early preview for select Seer customers. &lt;a href=&quot;https://github.com/getsentry/sentry/discussions/105737&quot;&gt;&lt;u&gt;Join this GitHub conversation&lt;/u&gt;&lt;/a&gt; to request access.&lt;/p&gt;&lt;h2&gt;Unlimited debugging for one flat price&lt;/h2&gt;&lt;p&gt;Along with these expanded capabilities, we&amp;#39;ve simplified pricing. Seer is now $40 per active contributor per month, with unlimited use. There&amp;#39;s no need to manage seats or worry about overages; just connect the GitHub repos you want Seer to cover, and it will keep track of contributors automatically. 
Anyone who creates at least 2 pull requests in a connected repository during the month counts as an active contributor.&lt;/p&gt;&lt;p&gt;Debugging doesn&amp;#39;t happen at a single point in your workflow, and neither should your tools. Seer meets you where you are: in your development environment, in code review, and in production. Existing Sentry customers who use GitHub or GitHub Enterprise can &lt;a href=&quot;https://sentry.io/orgredirect/organizations/:orgslug/settings/seer&quot;&gt;&lt;b&gt;&lt;u&gt;activate a 14-day free trial in settings&lt;/u&gt;&lt;/b&gt;&lt;/a&gt;. New to Sentry? &lt;a href=&quot;https://sentry.io/signup/&quot;&gt;&lt;b&gt;&lt;u&gt;Here&amp;#39;s how to get started&lt;/u&gt;&lt;/b&gt;&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Monitoring microservices and distributed systems with Sentry]]></title><description><![CDATA[If you’ve ever tried to debug a request that touched five services, a queue, and a database you don’t own, you already know why monitoring distributed systems i...]]></description><link>https://blog.sentry.io/monitoring-microservices-distributed-systems-with-sentry/</link><guid isPermaLink="false">https://blog.sentry.io/monitoring-microservices-distributed-systems-with-sentry/</guid><pubDate>Thu, 22 Jan 2026 01:00:00 GMT</pubDate><content:encoded>&lt;p&gt;If you’ve ever tried to debug a request that touched five services, a queue, and a database you don’t own, you already know why monitoring distributed systems is hard.&lt;/p&gt;&lt;p&gt;Logs live in different places, requests disappear halfway through a flow, and when something breaks in production, you’re reconstructing what happened from fragments.&lt;/p&gt;&lt;p&gt;Microservices make this worse by design. A single request fans out across small, independently deployed services, often communicating asynchronously. 
And the moment a request leaves a service you control, your visibility usually drops off a cliff.&lt;/p&gt;&lt;p&gt;This guide shows how to use Sentry tracing and logging to follow a request end to end, so you can answer the questions that usually take far too long in production:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Where did this request actually go?&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Which service slowed it down or failed?&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;How do I see that without stitching logs together by hand?&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Prerequisites&lt;/h2&gt;&lt;p&gt;You need no experience with microservices to understand this article. Experience writing a web service is useful.&lt;/p&gt;&lt;p&gt;To follow the tutorial, you need:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.docker.com/get-started/get-docker&quot;&gt;&lt;b&gt;Docker&lt;/b&gt;&lt;/a&gt;&lt;b&gt;:&lt;/b&gt; We’ll use Docker to run the example app. Docker guarantees that the app runs on any operating system, without needing to install any programming language versions, in a secure sandbox safely isolated from your personal files.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://sentry.io/signup/&quot;&gt;&lt;b&gt;A Sentry account&lt;/b&gt;&lt;/a&gt;&lt;b&gt;:&lt;/b&gt; You need a Sentry account if you want to connect the example application to one of your Sentry projects.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;And to make things a little bit easier, actions you need to perform are marked with ▶️.&lt;/p&gt;&lt;h2&gt;The example case study&lt;/h2&gt;&lt;p&gt;This example is intentionally simple. Real systems look a bit busier.&lt;/p&gt;&lt;p&gt;But the failure modes are the same: requests fan out, work happens asynchronously, and when something breaks, the original context is usually gone.&lt;/p&gt;&lt;p&gt;Let’s review how and why a microservice design works using a simple example. 
Imagine you have a website where a user can place an order for an item that needs to be made. The item could be anything from a physical 3D-printed object to a digital tax certificate.&lt;/p&gt;&lt;p&gt;You currently have a monolithic web server that handles the entire process and stores all data in one database. This is its design:&lt;/p&gt;&lt;p&gt;You have different teams working on the website, order management, and factory production of the items — and they each want to deploy improvements to their code and database tables independently, without breaking the rest of the system.&lt;/p&gt;&lt;p&gt;So you decide to separate your single service and database into three separate services (web, order, and factory). Your system now looks like this:&lt;/p&gt;&lt;p&gt;Each service knows the address (URL) of the other services. So if the order service wants the factory service to start making an item, the order service calls the factory service using an HTTP POST request.&lt;/p&gt;&lt;p&gt;Then the website team gets upset that the order service isn’t responding to orders fast enough, and is blocking the website from responding to user requests.&lt;/p&gt;&lt;p&gt;So instead of letting services call each other directly and synchronously, you decide to use a message queue, like &lt;a href=&quot;https://www.rabbitmq.com&quot;&gt;RabbitMQ&lt;/a&gt;, for all communication. To demonstrate how a message queue works, consider an example: The web server places a “create order” message on the order service’s queue without waiting for a response. The order service takes the message off the queue when the service is ready, and puts a response message on the web service’s queue when the order is ready for collection. No service needs to know the address or status of any other service — each service talks only to RabbitMQ.&lt;/p&gt;&lt;p&gt;Your system now looks like this:&lt;/p&gt;&lt;p&gt;This design now meets the microservice architecture criteria. 
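&lt;/p&gt;&lt;p&gt;The fire-and-forget queue pattern described above can be sketched with a toy in-memory queue (the real app uses RabbitMQ; the queue and message names here are illustrative):&lt;/p&gt;

```javascript
// Toy in-memory stand-in for RabbitMQ, to illustrate the messaging pattern:
// producers publish without waiting, and consumers drain their queue when ready.
const queues = new Map();

const publish = (queue, message) => {
  if (!queues.has(queue)) queues.set(queue, []);
  queues.get(queue).push(message);
};

const consume = (queue) => (queues.get(queue) || []).shift();

// Web service: place a "create order" message without waiting for a response.
publish("orders", { type: "create-order", orderId: "alice" });

// Order service: take the message off the queue when it's ready...
const order = consume("orders");

// ...and reply on the web service's queue when the order is done.
publish("web", { type: "order-ready", orderId: order.orderId });

console.log(consume("web").type); // → "order-ready"
```

&lt;p&gt;No service in this sketch knows another service’s address; each talks only to the queue, just as the services in the example talk only to RabbitMQ.&lt;/p&gt;&lt;p&gt;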
Each service is small and focused, independently deployable by having a separate database and a separate Git repository, and autonomous by using an asynchronous message queue.&lt;/p&gt;&lt;h3&gt;Even more flexible designs&lt;/h3&gt;&lt;p&gt;You can make the design even more flexible. For example, your factory and order teams realize they need to start additional instances of their services when the number of requests increases. So you might have three factory services running simultaneously, all taking orders from the queue and writing to the same shared factory database.&lt;/p&gt;&lt;p&gt;Then, you need a central repository of URLs for each system component, like the order database and RabbitMQ, so that each new service knows where to find everything as containers start and stop, and URLs and ports change. To support this service discovery, you might use a simple key-value store in a container, like &lt;a href=&quot;https://etcd.io&quot;&gt;etcd&lt;/a&gt;, or you might want something more powerful, like &lt;a href=&quot;https://developer.hashicorp.com/consul&quot;&gt;Consul&lt;/a&gt;, or even a container orchestrator like &lt;a href=&quot;https://kubernetes.io&quot;&gt;Kubernetes&lt;/a&gt;.&lt;/p&gt;&lt;h2&gt;The example app&lt;/h2&gt;&lt;p&gt;In the &lt;a href=&quot;https://github.com/ritza-co/sentry-microservice-example&quot;&gt;GitHub repository&lt;/a&gt; that comes with this guide, we’ve created a minimalist microservice app that runs the services discussed in the case study.&lt;/p&gt;&lt;p&gt;▶️ Clone, or download and unzip, the &lt;a href=&quot;https://github.com/ritza-co/sentry-microservice-example&quot;&gt;repository&lt;/a&gt; onto your computer.&lt;/p&gt;&lt;p&gt;There are two folders in the repository: &lt;code&gt;withSentry&lt;/code&gt; and &lt;code&gt;withoutSentry&lt;/code&gt;. 
This guide runs the &lt;code&gt;withSentry&lt;/code&gt; app to demonstrate monitoring, but if you want to see an even simpler microservice design without any monitoring, you can look at the code in &lt;code&gt;withoutSentry&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Below is a simplified diagram of the design used in both folders. Each component in the backend runs in a separate Docker container, configured by &lt;code&gt;docker-compose.yaml&lt;/code&gt;. There are:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Three &lt;a href=&quot;https://hub.docker.com/_/node&quot;&gt;Node.js&lt;/a&gt; services (&lt;code&gt;3_web.ts&lt;/code&gt;, &lt;code&gt;4_order.ts&lt;/code&gt;, &lt;code&gt;5_factory.ts&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Three &lt;a href=&quot;https://hub.docker.com/_/mongo&quot;&gt;MongoDB&lt;/a&gt; databases, which you can see at the top of the Docker Compose file (&lt;code&gt;msWebDb&lt;/code&gt;, &lt;code&gt;msOrderDb&lt;/code&gt;, and &lt;code&gt;msFactoryDb&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;a href=&quot;https://hub.docker.com/_/rabbitmq&quot;&gt;RabbitMQ&lt;/a&gt; software, which you can see in the middle of the Docker Compose file&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Using Node.js and MongoDB keeps this demonstration project as simple as possible, as Node.js code doesn’t need compilation (like Go) and MongoDB doesn’t need table creation scripts (like PostgreSQL).&lt;/p&gt;&lt;h2&gt;Configure the app to use Sentry Tracing&lt;/h2&gt;&lt;p&gt;Now that you’ve downloaded the app, let’s configure it to send traces to Sentry.&lt;/p&gt;&lt;p&gt;▶️ Open the Sentry web interface and use the sidebar to navigate to &lt;b&gt;Settings —&amp;gt; Projects&lt;/b&gt;.&lt;/p&gt;&lt;p&gt;▶️ Select the project you want to use for this test. 
If you have only a real production project available, first &lt;a href=&quot;https://docs.sentry.io/product/sentry-basics/integrate-frontend/create-new-project/&quot;&gt;create a Node.js project&lt;/a&gt; for the demo app, then select it.&lt;/p&gt;&lt;p&gt;The sidebar contents will change to show the project details.&lt;/p&gt;&lt;p&gt;▶️ In the sidebar, navigate to &lt;b&gt;Client Keys (DSN)&lt;/b&gt; and copy your DSN.&lt;/p&gt;&lt;p&gt;▶️ In your &lt;code&gt;withSentry&lt;/code&gt; project directory, open the &lt;code&gt;.env&lt;/code&gt; file and enter the copied DSN as the value of the &lt;code&gt;SENTRY_DSN&lt;/code&gt; environment variable.&lt;/p&gt;&lt;p&gt;This setting instructs all services in the app to use your Sentry project. Docker Compose pulls the &lt;code&gt;SENTRY_DSN&lt;/code&gt; value from &lt;code&gt;.env&lt;/code&gt; and sends it to the containers that have the &lt;code&gt;SENTRY_DSN&lt;/code&gt; environment variable.&lt;/p&gt;&lt;p&gt;▶️ In Sentry, navigate to &lt;b&gt;Loader Script&lt;/b&gt; at the bottom of the sidebar and copy the script shown at the top of the page.&lt;/p&gt;&lt;p&gt;▶️ Open &lt;code&gt;withSentry/index.html&lt;/code&gt; and replace the &lt;code&gt;script&lt;/code&gt; line near the top of the file (below &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt;) with the copied loader script.&lt;/p&gt;&lt;p&gt;This setting links the app’s frontend webpage to your Sentry project. If you want to further configure an app, for example, to send only a fraction of traces to Sentry, refer to the &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/install/loader/#sdk-configuration&quot;&gt;Loader Script documentation&lt;/a&gt;.&lt;/p&gt;&lt;h2&gt;Run the app&lt;/h2&gt;&lt;p&gt;Configuration is complete. 
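&lt;/p&gt;&lt;p&gt;For reference, the DSN line you added to &lt;code&gt;.env&lt;/code&gt; has this shape (the value below is Sentry’s documented placeholder, not a real key):&lt;/p&gt;

```shell
# withSentry/.env — paste the DSN copied from Client Keys (DSN)
SENTRY_DSN=https://examplePublicKey@o0.ingest.sentry.io/0
```

&lt;p&gt;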
Now you can run the app and see traces arrive in Sentry.&lt;/p&gt;&lt;p&gt;▶️ Open a terminal (command prompt) in the &lt;code&gt;withSentry&lt;/code&gt; folder, and run the following command:&lt;/p&gt;&lt;p&gt;&lt;code&gt;docker compose up&lt;/code&gt;&lt;/p&gt;&lt;p&gt;If you run &lt;code&gt;docker ps&lt;/code&gt; in another terminal, all containers should show as healthy after ten seconds to a couple of minutes. Docker image and npm package downloads might take a while.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Note:&lt;/b&gt; The container names start with &lt;code&gt;ms&lt;/code&gt;, for microservice, to separate them clearly from any other containers you might run.&lt;/p&gt;&lt;p&gt;▶️ In your web browser, open the app at &lt;code&gt;http://localhost:8006&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;▶️ Disable any advertisement or tracker blockers and reload the page to ensure that Sentry is available.&lt;/p&gt;&lt;p&gt;&lt;i&gt;Unblock Sentry&lt;/i&gt;&lt;/p&gt;&lt;p&gt;A new UUID is set in the &lt;b&gt;Create order&lt;/b&gt; line whenever you refresh the page, but you can enter your own order name, like &lt;code&gt;alice&lt;/code&gt; or &lt;code&gt;2&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;▶️ Click &lt;b&gt;Submit&lt;/b&gt; to start an order.&lt;/p&gt;&lt;p&gt;Notice the order ID is set in the &lt;b&gt;Check order&lt;/b&gt; line, and the &lt;b&gt;Order status&lt;/b&gt; updates with a single call to check on the web service.&lt;/p&gt;&lt;p&gt;▶️ Click &lt;b&gt;Check&lt;/b&gt; repeatedly until the &lt;b&gt;Order status&lt;/b&gt; changes to &lt;b&gt;finished&lt;/b&gt; in about ten seconds.&lt;/p&gt;&lt;p&gt;&lt;i&gt;The microservice website&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Here’s what happened:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The &lt;b&gt;webpage&lt;/b&gt; called the &lt;b&gt;web service&lt;/b&gt;, which created the order in the web database.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;b&gt;web service&lt;/b&gt; then sent the order ID to the &lt;b&gt;order service&lt;/b&gt; via a message on 
RabbitMQ.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;b&gt;order service&lt;/b&gt; then received, saved, and passed the order to the &lt;b&gt;factory service&lt;/b&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;b&gt;factory service&lt;/b&gt; received the order, waited five to ten seconds, then passed a message to RabbitMQ saying the item was made.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The &lt;b&gt;order service&lt;/b&gt; passed the status update back to the &lt;b&gt;web service&lt;/b&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;By clicking the &lt;b&gt;Check&lt;/b&gt; button, you requested the status of the order from the &lt;b&gt;web service&lt;/b&gt;, which looked in the web database.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Let’s see if that process is clearly shown in Sentry.&lt;/p&gt;&lt;p&gt;▶️ Navigate to &lt;b&gt;Explore —&amp;gt; Traces&lt;/b&gt; in the Sentry sidebar and ensure your test project is selected at the top of the traces page.&lt;/p&gt;&lt;p&gt;If you see traces, jump ahead to the next section on understanding tracing.&lt;/p&gt;&lt;p&gt;If you don’t see any traces after a minute, check for app configuration problems by following the troubleshooting instructions below. If it’s a new Sentry project, first skip through the steps in the &lt;b&gt;Set up the Sentry SDK&lt;/b&gt; section using the &lt;b&gt;Next&lt;/b&gt; button on each step and then, lastly, click the &lt;b&gt;Take me to my trace&lt;/b&gt; button.&lt;/p&gt;&lt;h3&gt;Troubleshooting&lt;/h3&gt;&lt;p&gt;The application needs ports &lt;code&gt;8000&lt;/code&gt; to &lt;code&gt;8006&lt;/code&gt; to be free on your computer. In the unlikely event that any other application uses them, you should stop that application.&lt;/p&gt;&lt;p&gt;▶️ Open a new terminal and run the code below to see the service logs.&lt;/p&gt;&lt;p&gt;The output should be similar to the following:&lt;/p&gt;&lt;p&gt;If you notice any errors, fix them first, before trying to see traces on Sentry. 
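&lt;/p&gt;&lt;p&gt;To view the service logs, Docker Compose’s built-in log command works (run it from the &lt;code&gt;withSentry&lt;/code&gt; folder; if the repository provides its own wrapper script, prefer that):&lt;/p&gt;

```shell
# Stream the logs of every container defined in docker-compose.yaml
docker compose logs --follow
```

&lt;p&gt;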
The most likely problem is that your project DSN in the &lt;code&gt;.env&lt;/code&gt; file doesn’t match the one in Sentry. Otherwise, it’s likely that npm couldn’t connect to the internet to download packages – in which case, try disabling any firewalls or VPNs temporarily and restarting Docker in the project folder with this command:&lt;/p&gt;&lt;p&gt;You can also check the contents of the databases using the commands below:&lt;/p&gt;&lt;h2&gt;Understand Sentry Tracing and Logs&lt;/h2&gt;&lt;p&gt;▶️ At the bottom of the &lt;b&gt;Traces&lt;/b&gt; page, click any of the span IDs.&lt;/p&gt;&lt;p&gt;You should see a trace similar to the one below. Each &lt;a href=&quot;https://docs.sentry.io/concepts/key-terms/tracing/#whats-a-trace&quot;&gt;trace&lt;/a&gt; represents a connected series of operations and actions, and is made up of &lt;a href=&quot;https://docs.sentry.io/concepts/key-terms/tracing/#whats-a-span&quot;&gt;spans&lt;/a&gt;. There are red annotations to show you which span corresponds to which service.&lt;/p&gt;&lt;p&gt;&lt;i&gt;Distributed trace&lt;/i&gt;&lt;/p&gt;&lt;p&gt;This is the moment distributed tracing pays off: every service call shows up in one place. Sentry passes the trace ID with every call made by a service, and so can follow the flow of service and database calls (even through RabbitMQ messages) from the website all the way down to the factory and back again. You can see this flow by reading down the call stack on the left of the page.&lt;/p&gt;&lt;p&gt;The span from the webpage shows everything from the page load to individual button clicks. While RabbitMQ itself isn’t instrumented with Sentry, the JavaScript that calls it is, so you can see all messages sent to and received from RabbitMQ. Similarly, MongoDB isn’t instrumented with Sentry, but calls to it are.&lt;/p&gt;&lt;p&gt;If you look at a database call, you can see that the parameters aren’t recorded. For example:&lt;/p&gt;&lt;p&gt;This is called query scrubbing. 
Sentry uses it to prevent sensitive data, like credit card numbers or password hashes, from being recorded. If you need the exact query details, Sentry Logs can capture them instead.&lt;/p&gt;&lt;p&gt;So what is this trace useful for?&lt;/p&gt;&lt;p&gt;First, it shows whether the system is behaving as expected. You can see if the control flow is correct, whether messages are duplicated, or if failures or database writes are missing.&lt;/p&gt;&lt;p&gt;Once the logic looks right, you can look at performance. How long does the full order take? Are there slow or inconsistent requests? Which service is responsible?&lt;/p&gt;&lt;p&gt;And when a user has a question, you can find the trace for their order ID and see exactly what happened.&lt;/p&gt;&lt;p&gt;In the example above, the flow jumps back out of the indentation near the bottom. That’s when the factory waits a few seconds to “manufacture” the item before sending a new message to the queue:&lt;/p&gt;&lt;p&gt;Real systems don’t respond in seconds. Updates often arrive long after the original trace ends. The way to connect them is with a shared identifier.&lt;/p&gt;&lt;p&gt;Here, that identifier is the order ID. Because it’s attached to spans across services, you can search for it in Sentry and see the entire lifecycle in one place.&lt;/p&gt;&lt;p&gt;▶️ Copy your order ID from the textbox on the app webpage to the filter in the Sentry &lt;b&gt;Traces&lt;/b&gt; page, as shown below. (You cannot type &lt;code&gt;is&lt;/code&gt; in the filter textbox. 
You have to first type &lt;code&gt;orderId&lt;/code&gt;, then click for more options.)&lt;/p&gt;&lt;p&gt;&lt;i&gt;Tracing an order ID&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Click the &lt;b&gt;Edit Table&lt;/b&gt; button on the right to include any attributes you’re curious about in the filter results.&lt;/p&gt;&lt;h3&gt;Logs&lt;/h3&gt;&lt;p&gt;This article focuses on distributed tracing, but Sentry also supports standard monitoring tasks like capturing errors and exceptions with &lt;code&gt;Sentry.captureException(e)&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Because an exception stack trace doesn’t include cross-service context, it’s important to attach an identifier — like &lt;code&gt;orderId&lt;/code&gt; — before capturing the error. One way to do that is with a breadcrumb.&lt;/p&gt;&lt;p&gt;A breadcrumb is lightweight context that’s recorded locally and only sent to Sentry if an event, such as an error, occurs. For example, when the factory starts creating an item, you might add:&lt;/p&gt;&lt;p&gt;If that function later throws an error and you capture it, the breadcrumb appears alongside the stack trace in Sentry.&lt;/p&gt;&lt;p&gt;Structured logs are similar, but more powerful. Instead of a single text message, they record key-value pairs, which makes filtering and searching easier in the dashboard. Unlike breadcrumbs, structured logs are sent immediately and aren’t tied to an error.&lt;/p&gt;&lt;p&gt;The microservices example uses logs alongside traces to provide this additional context.&lt;/p&gt;&lt;p&gt;▶️ In the Sentry sidebar, navigate to &lt;b&gt;Explore —&amp;gt; Logs&lt;/b&gt;.&lt;/p&gt;&lt;p&gt;&lt;i&gt;Distributed logs&lt;/i&gt;&lt;/p&gt;&lt;p&gt;In the screenshot above, the table includes two attributes added to the structured log: &lt;code&gt;orderId&lt;/code&gt; and &lt;code&gt;service&lt;/code&gt;. 
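&lt;/p&gt;&lt;p&gt;A structured log call that produces rows like these could look roughly as follows. This is a runnable sketch: a tiny stub stands in for the SDK’s logger, and the attribute values are illustrative.&lt;/p&gt;

```javascript
// Sketch of a structured log: a text message plus key-value attributes.
// A stub logger stands in for the Sentry SDK so the shape is runnable.
const logger = {
  entries: [],
  info(message, attributes) {
    this.entries.push({ level: "info", message, attributes });
  },
};

// Each service logs when it receives a message from RabbitMQ.
logger.info("Received message from RabbitMQ", {
  orderId: "alice", // illustrative order name
  service: "order", // which service emitted the log
});

console.log(logger.entries[0].attributes.service); // → "order"
```

&lt;p&gt;With the real SDK the call has the same shape: a message plus an attributes object, and it’s those key-value attributes that make filtering and searching possible in the dashboard.&lt;/p&gt;&lt;p&gt;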
You can see each service logging when it receives a message from RabbitMQ.&lt;/p&gt;&lt;p&gt;In this simplified example, traces and logs look similar because both show the flow of an order through the system. In a real application, they serve different purposes.&lt;/p&gt;&lt;p&gt;Traces show how execution moves between components. Logs let you record whatever context you need inside your own business logic. You can add logs at specific steps, adjust them temporarily while debugging, and remove them when you’re done.&lt;/p&gt;&lt;p&gt;Logs also support severity levels (&lt;code&gt;trace&lt;/code&gt;, &lt;code&gt;debug&lt;/code&gt;, &lt;code&gt;info&lt;/code&gt;, &lt;code&gt;warn&lt;/code&gt;, &lt;code&gt;error&lt;/code&gt;, and &lt;code&gt;fatal&lt;/code&gt;), which make them useful across development, testing, and production environments. Read the guide to &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/node/logs&quot;&gt;setting up logs in Node.js&lt;/a&gt; to learn more.&lt;/p&gt;&lt;h2&gt;How to monitor a distributed app with Sentry&lt;/h2&gt;&lt;p&gt;Now that you’ve seen what monitoring looks like in Sentry, let’s add it to your app. The examples in this section work in any Node.js application, not just microservices. From Sentry’s point of view, there’s no difference, and the same ideas apply in other languages like Python or .NET. Only the syntax changes.&lt;/p&gt;&lt;h3&gt;Monitor a webpage&lt;/h3&gt;&lt;p&gt;The &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/install/loader&quot;&gt;loader script&lt;/a&gt; import you added to &lt;code&gt;index.html&lt;/code&gt; in the configuration section is all you need to start automatic monitoring of any webpage. 
It looked like this (remember to change [YOUR_ID]):&lt;/p&gt;&lt;p&gt;▶️ If you need to configure Sentry differently from the defaults in this script, add a &lt;code&gt;Sentry.init()&lt;/code&gt; call to create a &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/install/loader/#custom-configuration&quot;&gt;custom configuration&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;You can set your DSN inside the &lt;code&gt;init&lt;/code&gt; function instead of hardcoding it into the script import URL above. &lt;a href=&quot;https://docs.sentry.io/concepts/key-terms/dsn-explainer&quot;&gt;You don’t have to hide your DSN&lt;/a&gt; from the public, as cases of abuse are very rare and Sentry can handle them.&lt;/p&gt;&lt;p&gt;Sentry &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/tracing/instrumentation/automatic-instrumentation&quot;&gt;automatically collects traces for page navigation&lt;/a&gt; but not for fetch requests.&lt;/p&gt;&lt;p&gt;▶️ To record detailed information (such as the order ID), you need to &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/tracing/instrumentation/requests-module&quot;&gt;manually instrument your HTTP calls&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;The following &lt;code&gt;Sentry.startSpan&lt;/code&gt; code is from the create order function in &lt;code&gt;index.html&lt;/code&gt;:&lt;/p&gt;&lt;p&gt;The &lt;code&gt;startSpan()&lt;/code&gt; function manually creates a span that records any call made within it. In this case, it records a call to &lt;code&gt;fetch(&amp;#39;http://localhost:8006/order&amp;#39;)&lt;/code&gt;. 
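&lt;/p&gt;&lt;p&gt;In outline, the span looks something like this. It’s a runnable sketch with a minimal stub standing in for the SDK, and the span and attribute names are illustrative; the exact code is in the repository’s &lt;code&gt;index.html&lt;/code&gt;.&lt;/p&gt;

```javascript
// Sketch of Sentry.startSpan wrapping the create-order call. A minimal stub
// stands in for the Sentry SDK so the shape is runnable.
const Sentry = {
  startSpan({ name, attributes = {} }, callback) {
    const span = {
      name,
      attributes: { ...attributes },
      setAttribute(key, value) { this.attributes[key] = value; },
    };
    callback(span); // the span records whatever runs inside the callback
    return span;
  },
};

const orderId = "alice"; // hypothetical order name
const span = Sentry.startSpan({ name: "POST /order", attributes: { orderId } }, (span) => {
  // The real code calls fetch("http://localhost:8006/order") here, then records
  // the response status once the call completes.
  span.setAttribute("http.response.status_code", 200); // simulated response
});

console.log(span.attributes.orderId); // → "alice"
```

&lt;p&gt;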
Only the &lt;code&gt;name&lt;/code&gt; parameter is mandatory, but the code includes the order ID as an attribute, and later adds the response status code as an attribute too, after the call completes.&lt;/p&gt;&lt;h3&gt;Monitor a web service&lt;/h3&gt;&lt;p&gt;▶️ To instrument a web service automatically, you need only &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/node/install/esm/&quot;&gt;load the Sentry configuration before your application code&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;This &lt;code&gt;import&lt;/code&gt; is shown in the following Docker Compose file command:&lt;/p&gt;&lt;p&gt;The file &lt;code&gt;1_sentry.ts&lt;/code&gt; &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/node/configuration/options&quot;&gt;configures Sentry&lt;/a&gt;. It contains the following content:&lt;/p&gt;&lt;p&gt;Without your DSN, Sentry will not work. Without &lt;code&gt;enableLogs&lt;/code&gt;, logs will not be sent to Sentry.&lt;/p&gt;&lt;p&gt;Sentry automatically enables several integrations (monitoring plugins) by default, including MongoDB and RabbitMQ.&lt;/p&gt;&lt;p&gt;▶️ If your app uses other tools, look at the &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/node/configuration/integrations&quot;&gt;integration documentation&lt;/a&gt; to learn how to enable them.&lt;/p&gt;&lt;p&gt;A single configuration file is all you need for Sentry to automatically monitor your service. 
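&lt;/p&gt;&lt;p&gt;Its essentials look something like this (a sketch of the configuration, not the file verbatim — check &lt;code&gt;1_sentry.ts&lt;/code&gt; in the repository for the exact contents):&lt;/p&gt;

```javascript
// 1_sentry.ts (sketch) — loaded before application code so Sentry can
// instrument Node.js, MongoDB, and RabbitMQ calls automatically.
import * as Sentry from "@sentry/node";

Sentry.init({
  dsn: process.env.SENTRY_DSN, // without the DSN, Sentry will not work
  tracesSampleRate: 1.0,       // trace every request (sample lower in production)
  enableLogs: true,            // without this, logs are not sent to Sentry
});
```

&lt;p&gt;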
However, if you want to add custom attributes, like &lt;code&gt;orderId&lt;/code&gt;, across services and to use logging, you need to import the Sentry library in your code and add some manual instrumentation too.&lt;/p&gt;&lt;p&gt;▶️ Import Sentry using the following line:&lt;/p&gt;&lt;p&gt;▶️ To send a log entry, you can use a single line:&lt;/p&gt;&lt;p&gt;This call has a text message, and two attributes sent as JSON.&lt;/p&gt;&lt;p&gt;▶️ To add an attribute to a span, use the following code:&lt;/p&gt;&lt;p&gt;This line adds an attribute to the span created by Sentry’s automatic instrumentation.&lt;/p&gt;&lt;p&gt;If you examine all the spans in the trace in the Sentry website, you may notice that some spans don’t have order ID attributes. If you need an ID attribute and Sentry hasn’t automatically created a span for you to attach to, you need to create a span manually using &lt;code&gt;startSpan()&lt;/code&gt;, as the website code does.&lt;/p&gt;&lt;h2&gt;Tips for monitoring microservices and distributed systems&lt;/h2&gt;&lt;p&gt;Here’s the short version:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Sentry automatically creates spans for most operations without manual instrumentation.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;To add logs or custom attributes, you’ll need to instrument those explicitly.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;For tools without built-in integrations (like some message queues or databases), you’ll need to enable integrations or add manual instrumentation.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Because distributed systems don’t have a single call stack, you need a shared identifier, like &lt;code&gt;orderId&lt;/code&gt;, to link asynchronous work across services. UUIDs work well, as long as they’re easy to search for later.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Monitoring also looks different once services are independent. In this guide, all traces go to a single Sentry project. 
In practice, teams often use separate projects so they can own their own alerts, data, and workflows. This improves separation of concerns but adds operational complexity. Administrators can still investigate traces across projects when needed.&lt;/p&gt;&lt;p&gt;That independence means teams also need to agree on shared conventions. Centralized configuration and consistent message formats make it much easier to follow a request across services when something breaks.&lt;/p&gt;&lt;p&gt;Finally, microservices generate a lot of traffic. Start with a low trace sampling rate, around 10%, to understand system behavior without overwhelming yourself. As your application scales, keep an eye on service load and request latency so you know when it’s time to scale up.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Green dashboards, red flags]]></title><description><![CDATA[A VP of Engineering (from a company I’m not allowed to name) told me recently: "You helped us find and fix real user-facing issues. Now we need to convince our ...]]></description><link>https://blog.sentry.io/green-dashboards-red-flags/</link><guid isPermaLink="false">https://blog.sentry.io/green-dashboards-red-flags/</guid><pubDate>Wed, 21 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;A VP of Engineering (from a company I’m not allowed to name) told me recently: &amp;quot;You helped us find and fix real user-facing issues. Now we need to convince our CTO why that matters more than the standard SLO’s and systems.&amp;quot;&lt;/p&gt;&lt;p&gt;Here&amp;#39;s the thing: your CTO is not wrong in measuring the systems and basic uptime. That’s the baseline though. They’re all trying to watch everything, but they’re&lt;i&gt; seeing&lt;/i&gt; nothing as it relates to users.&lt;/p&gt;&lt;h2&gt;The traditional monitoring trap&lt;/h2&gt;&lt;p&gt;Uptime looks great. Latency is within SLOs. Error budgets are fine. Dashboards are green.&lt;/p&gt;&lt;p&gt;And yet: your users are still failing. 
Not because your system is down. Your system is fine. But somewhere between &amp;quot;user clicks button&amp;quot; and &amp;quot;user gets what they wanted,&amp;quot; something broke. Silently. No alert. No threshold crossed. Just a user who gave up and left.&lt;/p&gt;&lt;p&gt;Code breaks. That&amp;#39;s not the scandal. The scandal is finding out three weeks later, when the sales team escalates it because a big customer is frustrated and leaving. &lt;/p&gt;&lt;h2&gt;Money moments&lt;/h2&gt;&lt;p&gt;Every product has a handful of “money moments” — the specific hyper-crucial parts of your product where the user succeeds or you lose money. Not &amp;quot;is the API up.&amp;quot; Not &amp;quot;did the page load.&amp;quot; The actual thing they came to do.&lt;/p&gt;&lt;p&gt;Just a few examples we’ve heard of recently:&lt;/p&gt;&lt;p&gt;&lt;b&gt;A retailer hit perfect uptime through Black Friday, &lt;/b&gt;&lt;b&gt;&lt;i&gt;but…&lt;/i&gt;&lt;/b&gt; Conversion dropped 12%. The money moment — a user completing checkout — was broken for anyone with a specific browser extension. No server errors. No alerts. Three weeks of lost revenue. The fix took an hour. Realizing the issue existed took forever.&lt;/p&gt;&lt;p&gt;&lt;b&gt;A payments company met every SLO, &lt;/b&gt;&lt;b&gt;&lt;i&gt;but&lt;/i&gt;&lt;/b&gt;&lt;b&gt;… &lt;/b&gt; Customers complained about &amp;quot;random&amp;quot; failures. The money moment — a transfer that actually completes and confirms — was failing intermittently for cross-border payments. A timeout edge case; dashboards only showed averages. Users felt pain. Six lines of code, buried under months of noise.&lt;/p&gt;&lt;p&gt;&lt;b&gt;A B2B platform looked healthy by every metric, &lt;/b&gt;&lt;b&gt;&lt;i&gt;but… &lt;/i&gt;&lt;/b&gt; the money moment — a new customer hitting their &amp;quot;aha&amp;quot; moment — was broken for enterprise accounts with a specific config. Sales found it before monitoring did. 
Dashboards all said the system was &amp;quot;up&amp;quot; but the product was broken.&lt;/p&gt;&lt;p&gt;Same pattern. Every time.&lt;/p&gt;&lt;h2&gt;You&amp;#39;re measuring the wrong thing&lt;/h2&gt;&lt;p&gt;Here&amp;#39;s the difference:&lt;/p&gt;&lt;p&gt;&lt;b&gt;What most teams measure:&lt;/b&gt; Is every service running? Are metrics within thresholds? Is the system healthy?&lt;/p&gt;&lt;p&gt;&lt;b&gt;What actually matters:&lt;/b&gt; Did the user succeed? If not, what code broke? How fast can we fix it?&lt;/p&gt;&lt;p&gt;One is horizontal. Watch everything, hope you catch something.&lt;/p&gt;&lt;p&gt;The other is vertical. Follow the money moment end-to-end. Know immediately when it breaks. Trace it to the release. Fix it.&lt;/p&gt;&lt;p&gt;You can do both. But if your dashboards are green yet users are failing, you know which one you&amp;#39;re missing.&lt;/p&gt;&lt;h2&gt;What to actually do&lt;/h2&gt;&lt;p&gt;This isn&amp;#39;t complicated.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Name your money moments.&lt;/b&gt; Not fifty different flows. Identify the three to five that determine if your product works — the ones that matter to your user, and whether or not they can do what they came to do. What are the moments where users succeed or you lose them?&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Monitor them by segment.&lt;/b&gt; Not averages. By customer tier, by region, by device, by release. The bug that breaks your biggest customer doesn&amp;#39;t show up in aggregate metrics.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Tie them to releases.&lt;/b&gt; When a money moment fails, the first question is &amp;quot;what changed?&amp;quot; If you can&amp;#39;t answer that in minutes, you&amp;#39;re flying with your eyes closed. &lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Measure time-to-fix, not time-to-alert.&lt;/b&gt; Nobody cares how fast your dashboards turned red. 
They care how fast you found the broken code and shipped the fix.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;This isn&amp;#39;t theory&lt;/h2&gt;&lt;p&gt;Teams that monitor their money moments instead of just their dashboards keep finding the same thing.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;One SaaS company had three APM tools and zero visibility into actual user flows. A dev spotted a call firing &lt;i&gt;eight times&lt;/i&gt; instead of once buried across async queues, messaging layers, and persistence. &lt;b&gt;Once they could trace the money moment end-to-end, they fixed it in minutes.&lt;/b&gt; Before? That would have taken days of log spelunking.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;An influencer marketing platform found a 50-second page load that &lt;i&gt;only&lt;/i&gt; impacted power users, and kept them from accessing one of the most critical workflows. By tracing the user flow instead of just watching service health, &lt;b&gt;they identified the broken release in five minutes and had it fixed within the hour.&lt;/b&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;A language learning app with 200+ microservices cut debugging time by 12x. &lt;b&gt;One engineer spotted bot activity and confirmed it in under ten minutes&lt;/b&gt; — before it could slow the site to a halt and lose them users. &amp;quot;This has saved years of my life.&amp;quot;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;One of the world’s biggest AI research companies learned to better detect otherwise-quiet crash loops during model training, &lt;b&gt;dramatically speeding up the process of evolving from model to model.&lt;/b&gt; Monitoring the actual user experience instead of just infrastructure metrics made the difference.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Same pattern every time: the fix was straightforward once they could see where the money moment broke.&lt;/p&gt;&lt;h2&gt;The bottom line&lt;/h2&gt;&lt;p&gt;Code breaks. Always has. 
Always will.&lt;/p&gt;&lt;p&gt;The teams that win aren&amp;#39;t the ones with the greenest dashboards. They&amp;#39;re the ones who find the broken stuff fast and fix it before users notice.&lt;/p&gt;&lt;p&gt;Your uptime is not your product. Users succeeding is your product. Everything else is noise.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Paginating large datasets in production: Why OFFSET fails and cursors win]]></title><description><![CDATA[The things that separate an MVP from a production-ready app are polish, final touches, and the Pareto ‘last 20%’ of work. Many of the bugs, edge cases, and perf...]]></description><link>https://blog.sentry.io/paginating-large-datasets-in-production-why-offset-fails-and-cursors-win/</link><guid isPermaLink="false">https://blog.sentry.io/paginating-large-datasets-in-production-why-offset-fails-and-cursors-win/</guid><pubDate>Thu, 15 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;The things that separate an MVP from a production-ready app are polish, final touches, and the Pareto ‘last 20%’ of work. Many of the bugs, edge cases, and performance issues will come to the surface after you launch, when the user stampede puts a serious strain on your application. 
If you’re reading this, you’re probably sitting at the 80% mark, ready to tackle the rest.&lt;/p&gt;&lt;p&gt;In this article, we’ll look at how to paginate large datasets at scale, where things can go wrong, and how database indexes shape the outcome.&lt;/p&gt;&lt;p&gt;This article is part of a series covering common pain points when bringing an MVP to production:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Paginating Large Datasets in Production: Why OFFSET Fails and Cursors Win (this one)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://blog.sentry.io/ai-driven-caching-strategies-instrumentation/&quot;&gt;AI-driven caching strategies and instrumentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;The pain (post-launch reality)&lt;/h2&gt;&lt;p&gt;Everything was fine during testing, but after a while pages started taking seconds to load. The smart move is to have Sentry set up before you launch. Even if you don’t do any custom instrumentation, Sentry will let you know when your database queries get slow:&lt;/p&gt;&lt;p&gt;The screenshot shows a &lt;a href=&quot;https://docs.sentry.io/product/issues/issue-details/performance-issues/slow-db-queries/&quot;&gt;Slow DB Query issue&lt;/a&gt; in Sentry. Because I used Drizzle ORM with &lt;code&gt;node-postgres&lt;/code&gt;, Sentry automatically instrumented all database queries for me. 
With that telemetry data, Sentry is able to surface slow database queries.&lt;/p&gt;&lt;p&gt;If we scroll a bit lower, we’ll see the request info:&lt;/p&gt;&lt;p&gt;From this, we can see:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The query was invoked in the &lt;code&gt;GET /admin/tickets&lt;/code&gt; request&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The query contains &lt;code&gt;OFFSET&lt;/code&gt;, so it’s an offset-based pagination query&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The query was invoked on page 321&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The query took 3.85s&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://sentry.io/product/seer/&quot;&gt;Seer&lt;/a&gt; guessed that it’s likely a missing index&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Let’s confirm that we’re really missing indexes:&lt;/p&gt;&lt;p&gt;Seer was right! The combination of a large dataset, missing database indexes, and offset-based pagination is amplifying this issue. Because there’s no database index, and the pagination is offset-based, navigating to a higher-numbered page (321 in this case) forces the database to scan 321 × page_size rows. That’s where the slowdown happens.&lt;/p&gt;&lt;h2&gt;The solution (indexes and cursors)&lt;/h2&gt;&lt;p&gt;We can fix this issue by refactoring the pagination from offset-based to cursor-based, and for that we’ll also need to add a database index.&lt;/p&gt;&lt;h3&gt;Database indexes&lt;/h3&gt;&lt;p&gt;Database indexes are lookup structures that let the database find rows without scanning the whole table. Think of them as a sorted map the engine can jump through quickly instead of going through the data row by row. 
It’s not free: you pay for extra disk space (not a lot, really) and write cost (indexes need to be updated after every write), but reads become &lt;i&gt;dramatically&lt;/i&gt; faster, especially at scale.&lt;/p&gt;&lt;h3&gt;Cursor-based pagination&lt;/h3&gt;&lt;p&gt;Offset-based pagination forces the engine to scan and skip N rows each time, which gets slower as N increases. Cursor-based pagination, on the other hand, uses a stable value from the last retrieved row (the “cursor”) to fetch the next page, letting the database jump directly to where it left off. The cursor is usually an indexed column, like &lt;code&gt;created_at&lt;/code&gt; or &lt;code&gt;id&lt;/code&gt;, or a combination of both. Cursor-based pagination wins by a landslide in real-world performance and stability.&lt;/p&gt;&lt;h3&gt;Applying the fix&lt;/h3&gt;&lt;p&gt;Let’s create a composite index that combines &lt;code&gt;created_at&lt;/code&gt; and &lt;code&gt;id&lt;/code&gt;:&lt;/p&gt;&lt;p&gt;We’re adding &lt;code&gt;CONCURRENTLY&lt;/code&gt; to avoid write locks, and &lt;code&gt;IF NOT EXISTS&lt;/code&gt; to keep migrations idempotent.&lt;/p&gt;&lt;p&gt;Now let’s modify our SQL to use the &lt;code&gt;id&lt;/code&gt; and &lt;code&gt;created_at&lt;/code&gt; derived from the cursor to fetch the next page:&lt;/p&gt;&lt;p&gt;Since we now need to calculate and pass the cursor when paginating, our URLs will change from &lt;code&gt;?page=321&lt;/code&gt; to &lt;code&gt;?cursor={nextCursor}&amp;amp;prevCursor={prevCursor}&lt;/code&gt;. Our backend will parse the &lt;code&gt;nextCursor&lt;/code&gt;, extract the &lt;code&gt;created_at&lt;/code&gt; and &lt;code&gt;id&lt;/code&gt; from it, and properly construct the &lt;code&gt;WHERE&lt;/code&gt; clause we see in the SQL above. 
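&lt;/p&gt;&lt;p&gt;To make the cursor round-trip concrete, here’s a minimal Node sketch of encoding and decoding an opaque cursor. Treat it as illustrative rather than the article’s exact code: it assumes each row exposes &lt;code&gt;created_at&lt;/code&gt; and &lt;code&gt;id&lt;/code&gt;, matching the composite index, and the keyset comparison shown in the comment assumes ascending order.&lt;/p&gt;

```javascript
// Hedged sketch: build and parse an opaque pagination cursor.
// Assumes rows expose `created_at` (ISO string) and `id`, matching the
// composite (created_at, id) index; all names here are illustrative.
const encodeCursor = (row) =>
  Buffer.from(JSON.stringify({ createdAt: row.created_at, id: row.id }))
    .toString('base64url');

const decodeCursor = (cursor) =>
  JSON.parse(Buffer.from(cursor, 'base64url').toString('utf8'));

// The decoded values feed a keyset WHERE clause, e.g. with ascending order:
//   WHERE (created_at, id) > ($1, $2) ORDER BY created_at, id LIMIT $3
const lastRow = { created_at: '2026-01-15T12:00:00.000Z', id: 4821 };
const nextCursor = encodeCursor(lastRow);
const next = decodeCursor(nextCursor); // { createdAt: '2026-01-15T12:00:00.000Z', id: 4821 }
```

&lt;p&gt;Because the cursor is a single opaque token, the URL stays tidy and clients never manipulate raw column values. 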
The &lt;code&gt;prevCursor&lt;/code&gt; is just for navigating to the previous page.&lt;/p&gt;&lt;h2&gt;The proof (watching duration drop)&lt;/h2&gt;&lt;p&gt;A few minutes after applying the changes, it’s time to check the results. We’ll go to Sentry &amp;gt; Insights &amp;gt; Backend &amp;gt; Queries, click on the “+” button to add a filter, select “Spans &amp;gt; &lt;code&gt;sentry.normalized_description&lt;/code&gt;”, add it, and modify it to contain the start of our query: &lt;code&gt;SELECT … FROM tickets LEFT JOIN&lt;/code&gt;. This filter will capture both the old offset-based query and the new one. This will update the charts below, and if we look at the “Average Duration” chart, we’ll easily spot when we deployed the fix.&lt;/p&gt;&lt;p&gt;It’s easy to spot. And no, the chart doesn’t zero out. It dropped from ~8s to ~13ms. Mind you, this is running locally, but even in production it’s still going to be significantly faster.&lt;/p&gt;&lt;p&gt;That’s the effect of having database indexes and refactoring the pagination from offset-based to cursor-based. Our database doesn’t need to scan through thousands or millions of rows each time we want to navigate to a page. The index allows it to jump to a specific row instead of “walking” to it, and the cursor tells it which row it needs to jump to. Efficient!&lt;/p&gt;&lt;h2&gt;The takeaway (what production actually demands)&lt;/h2&gt;&lt;p&gt;This wasn’t a fancy optimization. No exotic data structures. No caching layer. No AI. Just respecting how databases work when the testing data is gone and reality shows up.&lt;/p&gt;&lt;p&gt;Offset-based pagination plus missing indexes is a tax you don’t notice until traffic forces you to pay it. The bill arrives late, compounds fast, and lands directly on user experience. 
Cursor-based pagination with proper indexes flips the cost model entirely: predictable, stable, and boring in the best possible way.&lt;/p&gt;&lt;p&gt;The important part isn’t that the query went from seconds to milliseconds. It’s that the shape of the performance curve changed. Offset pagination degrades linearly as your data grows. Cursor-based pagination stays flat. &lt;i&gt;Flat curves are how systems survive growth&lt;/i&gt;.&lt;/p&gt;&lt;p&gt;Sentry’s role here isn’t just “finding a slow query.” It closes the feedback loop between theory and reality. You make a change, deploy it, and immediately see whether the system agrees with your mental model. In this case, the database nodded enthusiastically.&lt;/p&gt;&lt;p&gt;Production readiness lives in these details. Indexes. Access patterns. Measurement. Not glamorous, but decisive. MVPs prove ideas. Production systems prove discipline.&lt;/p&gt;&lt;h2&gt;Further reading and references&lt;/h2&gt;&lt;p&gt;You can learn more about this topic in our docs about &lt;a href=&quot;https://docs.sentry.io/product/insights/backend/queries/&quot;&gt;monitoring database queries&lt;/a&gt;, and our &lt;a href=&quot;https://docs.sentry.io/product/insights/backend/&quot;&gt;backend performance insights module&lt;/a&gt;. You can also check out this blog on &lt;a href=&quot;https://blog.sentry.io/fix-n-plus-one-database-issues-with-sentry-seer/&quot;&gt;how to eliminate N+1 database query issues&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;If you&amp;#39;re new to Sentry, you can explore our &lt;a href=&quot;https://sandbox.sentry.io/issues/&quot;&gt;interactive Sentry sandbox&lt;/a&gt; or &lt;a href=&quot;https://sentry.io/signup/&quot;&gt;sign up for free&lt;/a&gt;. 
You can also join us on &lt;a href=&quot;https://discord.gg/sentry&quot;&gt;Discord&lt;/a&gt; to ask any questions you may have.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Logging in React Native with Sentry]]></title><description><![CDATA[Logs are often the first place dev teams look when they investigate an issue. But logs are often added as an afterthought, and developers struggle with the bala...]]></description><link>https://blog.sentry.io/logging-react-native-with-sentry/</link><guid isPermaLink="false">https://blog.sentry.io/logging-react-native-with-sentry/</guid><pubDate>Wed, 14 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Logs are often the first place dev teams look when they investigate an issue. But logs are often added as an afterthought, and developers struggle with the balance of logging too much or too little.&lt;/p&gt;&lt;p&gt;As a seasoned developer, you may remember a time when you were asked to investigate an issue and then handed a 200 MB plaintext log file. Three hours and four Python scripts later, you would realize that the problem was in a different component.&lt;/p&gt;&lt;p&gt;Over time, many standards and libraries have been developed to make logging easier, and React Native has plenty of logging functions. Our initial &lt;a href=&quot;https://blog.sentry.io/a-guide-to-logging-in-react-native/&quot;&gt;guide to logging in React Native&lt;/a&gt; is a good starting point. However, if you want to take things to the next level, this guide shows you how to use Sentry’s logging features to ensure your logs are useful and all the information you need is readily available.&lt;/p&gt;&lt;h2&gt;Getting Logs into Sentry&lt;/h2&gt;&lt;p&gt;This guide uses a simple React Native (Expo) app to show examples of Sentry’s various logging features. The app is a contact form with some basic validation on the fields. 
If you want to follow along and try it for yourself, you can find the app in the &lt;a href=&quot;https://github.com/getsentry/sentry-react-native-logging-form-example&quot;&gt;demo repository&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;i&gt;Demo React Native app interface&lt;/i&gt;&lt;/p&gt;&lt;p&gt;If you have an existing React Native application, or plan to build one, this guide provides the steps required to take full advantage of logging with Sentry.&lt;/p&gt;&lt;h3&gt;Setup&lt;/h3&gt;&lt;p&gt;To start using Sentry features, we first need to install Sentry into our project. Start by creating a new project in Sentry and choosing React Native as the platform. Then, run the installation wizard to set up the necessary configurations in your local project:&lt;/p&gt;&lt;p&gt;Enable logging when prompted. When the configuration completes, you should see code like the following added to your main &lt;code&gt;App.js&lt;/code&gt; file:&lt;/p&gt;&lt;p&gt;This initializes the Sentry SDK and enables logging with the Sentry logging library.&lt;/p&gt;&lt;h3&gt;Sentry Logger API&lt;/h3&gt;&lt;p&gt;Now that we’ve initialized Sentry and enabled logs, we can use the &lt;code&gt;Sentry.logger&lt;/code&gt; namespace to send logs to our Sentry dashboard.&lt;/p&gt;&lt;p&gt;We can log messages with different log levels:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;trace&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;debug&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;info&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;warn&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;error&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;fatal&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;We can also use the &lt;code&gt;Sentry.logger.fmt&lt;/code&gt; function to add properties to the log message.&lt;/p&gt;&lt;p&gt;In the example app, add a new log call to the &lt;code&gt;handleSubmit&lt;/code&gt; 
function:&lt;/p&gt;&lt;p&gt;After we click the &lt;b&gt;Submit&lt;/b&gt; button, we will see an info-level log on our Sentry dashboard.&lt;/p&gt;&lt;p&gt;We can manually add additional attributes to the logs. These don’t form part of the main log message but instead attach to the log and allow us to search or filter by them later.&lt;/p&gt;&lt;p&gt;To try this out, update the log that we added previously:&lt;/p&gt;&lt;p&gt;When we submit our form, we will see both the new &lt;code&gt;filterID&lt;/code&gt; and &lt;code&gt;extraMessage&lt;/code&gt; fields in the log payload.&lt;/p&gt;&lt;h3&gt;Integrations&lt;/h3&gt;&lt;p&gt;We have the Sentry Logger API working, but our app already has logging built with the default JavaScript &lt;code&gt;console&lt;/code&gt; object. Luckily, we don’t have to rewrite all our logging because Sentry also provides a way to integrate with default JavaScript logging.&lt;/p&gt;&lt;p&gt;Update the &lt;code&gt;Sentry.init&lt;/code&gt; call with the following code after the &lt;code&gt;enableLogs: true&lt;/code&gt; line in &lt;code&gt;App.js&lt;/code&gt;:&lt;/p&gt;&lt;p&gt;This causes all our existing logging to be sent to Sentry. We can exclude certain log levels if we like. For example, if we want to reduce clutter on our dashboard, we could opt to send only &lt;code&gt;error&lt;/code&gt; level logs.&lt;/p&gt;&lt;p&gt;However, as you’ll see later in this guide, we can also reduce clutter using the filtering in the dashboard itself, so it’s often better to send Sentry as much useful information as possible.&lt;/p&gt;&lt;p&gt;Let’s restart the app, fill in the form, and take a look at the logs that are sent.&lt;/p&gt;&lt;p&gt;Even with these integrated logs, Sentry links all the extra information available, such as the originating environment and browser type. 
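&lt;/p&gt;&lt;p&gt;Putting the pieces from the Integrations section together, the resulting &lt;code&gt;Sentry.init&lt;/code&gt; call might look roughly like this. Treat it as a hedged sketch: the option and integration names follow Sentry’s public JavaScript SDK docs, while the DSN and level list are placeholders.&lt;/p&gt;

```javascript
// Hedged sketch of the initialization described above; values are illustrative.
import * as Sentry from '@sentry/react-native';

Sentry.init({
  dsn: 'YOUR_DSN_HERE', // placeholder, use your project DSN
  enableLogs: true,
  integrations: [
    // Forward existing console.log/warn/error calls to Sentry Logs.
    Sentry.consoleLoggingIntegration({ levels: ['log', 'warn', 'error'] }),
  ],
});
```

&lt;p&gt;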
All these data points can be used to filter, search, or group our logs.&lt;/p&gt;&lt;h2&gt;Searching, filtering, and grouping&lt;/h2&gt;&lt;p&gt;The Logs dashboard gives us access to all our logs in one place, as well as to tools for finding exactly what we need.&lt;/p&gt;&lt;p&gt;At the top of the dashboard, we can use the search bar to find specific log messages. We can search for text that appears in the log message itself or use Sentry’s query syntax to search by specific fields.&lt;/p&gt;&lt;p&gt;For example, we can get all the logs relating to submitting a form by typing &lt;code&gt;sub&lt;/code&gt; in the search bar.&lt;/p&gt;&lt;p&gt;We can also search by log level. To see only &lt;code&gt;warning&lt;/code&gt; logs, use the query &lt;code&gt;severity:warn&lt;/code&gt; in the search bar.&lt;/p&gt;&lt;p&gt;We can also filter by the extra attributes that we added earlier. Type &lt;code&gt;filterID:01&lt;/code&gt; in the search bar.&lt;/p&gt;&lt;p&gt;When we have logs coming from multiple parts of our application, we can use custom attributes to quickly isolate logs from specific components or flows.&lt;/p&gt;&lt;p&gt;We can combine multiple filters together. For example, we could filter by both filterID and log level to see only warning logs from a specific part of our app.&lt;/p&gt;&lt;p&gt;Sentry also lets us group logs by different attributes. To do so, click the &lt;b&gt;&amp;gt;&amp;gt; Advanced&lt;/b&gt; button, then open the &lt;b&gt;Group By&lt;/b&gt; dropdown and select the type &lt;code&gt;severity&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Our logs are now organized by their log level, making it easier to see how many logs we have at each severity. We can also group by environment, browser type, or any custom attribute we’ve added to our logs.&lt;/p&gt;&lt;p&gt;This becomes particularly useful when debugging an issue that affects specific users or environments. 
We can quickly group by browser or operating system to see if a problem is isolated to certain platforms.&lt;/p&gt;&lt;h2&gt;Debugging a real application&lt;/h2&gt;&lt;p&gt;To see the real value of Sentry logging, let’s look at a more complete example. We’ve built a cat voting app, where users can upvote or downvote cat pictures. The app has a React Native frontend that talks to an Express.js backend with a SQLite database. The frontend fetches cat images from an external API and stores them in the database.&lt;/p&gt;&lt;p&gt;If you want to explore the logging features for yourself, you can clone the &lt;a href=&quot;https://github.com/ritza-co/sentry-react-native-logging-cats-example&quot;&gt;app repository&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;We’ve only set up Sentry on the frontend, as this guide is focused on React Native. We could, and should, create an Express.js Sentry project for our backend code as well.&lt;/p&gt;&lt;h3&gt;The app structure&lt;/h3&gt;&lt;p&gt;The app consists of:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;A voting screen that displays cat images with upvote and downvote buttons&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;A winner screen that shows the most popular cat&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;A context provider that manages data fetching and state&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;An API service that handles all backend communication&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This is what the Sentry setup looks like in the &lt;code&gt;frontend/App.js&lt;/code&gt; file:&lt;/p&gt;&lt;h3&gt;Tracking user actions&lt;/h3&gt;&lt;p&gt;In &lt;code&gt;frontend/src/screens/CatListScreen.js&lt;/code&gt;, we log when users vote on cats. 
We use both the logger API and &lt;a href=&quot;https://docs.sentry.io/platforms/react-native/enriching-events/breadcrumbs/&quot;&gt;breadcrumbs&lt;/a&gt; to create a trail of what the user does:&lt;/p&gt;&lt;h3&gt;Debugging a backend issue&lt;/h3&gt;&lt;p&gt;Let’s suppose some users report getting a &lt;code&gt;500 Internal Server Error&lt;/code&gt; (an API error) when they vote on specific cats. We turn to our Logs dashboard to investigate.&lt;/p&gt;&lt;p&gt;First, we check the dashboard for recent error logs. We filter by &lt;code&gt;severity:error&lt;/code&gt; and see several &lt;b&gt;Vote submission failed&lt;/b&gt; logs. We click on one of them to see the details.&lt;/p&gt;&lt;p&gt;The log shows us:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The cat the user tried to vote on (&lt;code&gt;catId: &amp;quot;133&amp;quot;&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The type of vote it was (&lt;code&gt;voteType: &amp;quot;upvote&amp;quot;&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The error message from the backend (&lt;code&gt;errorMessage: &amp;quot;API Error: 500&amp;quot;&lt;/code&gt;)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;The timestamp of when the error happened&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Because this is an &lt;code&gt;error&lt;/code&gt; level event, Sentry also logs an &lt;code&gt;Issue&lt;/code&gt;. We navigate to our Issues dashboard and click on the corresponding error event.&lt;/p&gt;&lt;p&gt;In the issue entry, we can scroll down and see the breadcrumb trail leading up to the error. The logs show the user loaded the cat list, clicked the upvote button, and then the API request failed with a &lt;code&gt;500&lt;/code&gt; HTTP status code.&lt;/p&gt;&lt;p&gt;Now we know the problem is on the backend. We check the backend server logs around the same timestamp and find a database constraint error. 
The backend attempted to insert a vote, but the database rejected it due to a foreign key constraint violation (the cat ID doesn’t exist in the database).&lt;/p&gt;&lt;p&gt;Looking back at the Sentry logs, we search for when this cat was added. We filter by &lt;code&gt;operation:fetchCats&lt;/code&gt; and see that the app fetched cats from the external API, but when it tried to save them to the database, one of the requests failed. The Sentry log shows:&lt;/p&gt;&lt;p&gt;The problem is clear: When the app fetched new cats, some were successfully added to the database while others failed due to duplicate IDs. Users could see cats that weren’t in the database, and when they tried to vote on those cats, the backend rejected the vote.&lt;/p&gt;&lt;p&gt;We can fix the issue by improving error handling in the cat fetching logic. When cats fail to save, we remove them from the UI so that users can’t vote on cats that aren’t in the database:&lt;/p&gt;&lt;h3&gt;Performance monitoring&lt;/h3&gt;&lt;p&gt;In addition to errors, we can also track performance using Sentry’s spans feature. In &lt;code&gt;frontend/src/context/CatsContext.js&lt;/code&gt;, we can monitor how long it takes to load cats:&lt;/p&gt;&lt;p&gt;Using spans provides a better developer experience for performance tracking. You can view span data in the Performance tab of your Sentry dashboard, where you can easily identify slow operations and bottlenecks in your application.&lt;/p&gt;&lt;h2&gt;The &lt;code&gt;beforeSendLog&lt;/code&gt; function&lt;/h2&gt;&lt;p&gt;We can use the &lt;code&gt;beforeSendLog&lt;/code&gt; function to filter logs before they’re sent to Sentry. 
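&lt;/p&gt;&lt;p&gt;As a hedged sketch (the hook name comes from Sentry’s SDK docs; the level and attribute names here are illustrative), such a filter might look like:&lt;/p&gt;

```javascript
// Hedged sketch of a beforeSendLog hook: drop debug noise and redact a
// sensitive attribute before the log leaves the device. Names are illustrative.
function beforeSendLog(log) {
  // Drop debug-level logs entirely.
  if (log.level === 'debug') {
    return null;
  }
  // Redact a sensitive attribute if present.
  if (log.attributes) {
    if ('email' in log.attributes) {
      log.attributes.email = '[redacted]';
    }
  }
  // Returning the (possibly modified) log sends it to Sentry.
  return log;
}
```

&lt;p&gt;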
This is useful for controlling exactly which data reaches Sentry, helping us balance between having enough logs and not overwhelming our dashboard with noise.&lt;/p&gt;&lt;p&gt;For example, we can filter out debug-level logs in production, or remove sensitive information before sending logs to Sentry:&lt;/p&gt;&lt;p&gt;By returning &lt;code&gt;null&lt;/code&gt;, we prevent the log from being sent to Sentry. This approach helps us maintain clean, relevant logs while protecting sensitive user information.&lt;/p&gt;&lt;h2&gt;Going beyond Logs&lt;/h2&gt;&lt;p&gt;Sentry logging turns logs from raw data into a useful and intuitive tool that could save us countless hours when diagnosing an issue. If you are still unclear about any of the steps required to set up Sentry’s logging features, you can turn to our documentation on &lt;a href=&quot;https://docs.sentry.io/platforms/react-native/logs/&quot;&gt;setting up logs &lt;/a&gt;or &lt;a href=&quot;https://docs.sentry.io/product/drains/&quot;&gt;setting up drains and forwarders&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;As we touched on in this guide, logging goes hand-in-hand with Sentry’s other features, such as &lt;a href=&quot;https://docs.sentry.io/platforms/react-native/tracing/instrumentation/automatic-instrumentation/&quot;&gt;tracing&lt;/a&gt; and &lt;a href=&quot;https://docs.sentry.io/platforms/react-native/profiling/&quot;&gt;profiling&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;With all these features set up, you can begin to take full advantage of the Sentry toolbox.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Not everything that breaks is an error: a Logs and Next.js story]]></title><description><![CDATA[Stack traces are great, but they only tell you what broke. They rarely tell you why. 
When an exception fires, you get a snapshot of the moment things went sidew...]]></description><link>https://blog.sentry.io/not-everything-that-breaks-is-an-error-a-logs-and-next-js-story/</link><guid isPermaLink="false">https://blog.sentry.io/not-everything-that-breaks-is-an-error-a-logs-and-next-js-story/</guid><pubDate>Tue, 13 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Stack traces are great, but they only tell you &lt;i&gt;what&lt;/i&gt; broke. They rarely tell you &lt;i&gt;why&lt;/i&gt;. When an exception fires, you get a snapshot of the moment things went sideways, but the context leading up to that moment? Gone.&lt;/p&gt;&lt;p&gt;That&amp;#39;s where &lt;a href=&quot;https://sentry.io/product/logs/&quot;&gt;logs&lt;/a&gt; come in. A well-placed log can be the difference between hours of head-scratching and a five-minute fix. Let me show you what I mean with a real bug I encountered recently.&lt;/p&gt;&lt;h2&gt;Protecting an AI-powered Next.js endpoint from bots&lt;/h2&gt;&lt;p&gt;I&amp;#39;ve been working on &lt;a href=&quot;https://webvitals.com/&quot;&gt;WebVitals&lt;/a&gt;, a Next.js application powered by AI. You enter a domain, and it runs a series of tool calls to fetch performance data, then uses an AI agent to parse the results and give you actionable suggestions for improving your web vitals.&lt;/p&gt;&lt;p&gt;On the frontend, I&amp;#39;m using the AI SDK&amp;#39;s &lt;code&gt;useChat&lt;/code&gt; hook to handle the conversation:&lt;/p&gt;&lt;p&gt;The &lt;code&gt;/api/chat&lt;/code&gt; endpoint is a standard Next.js API route, which means anyone can hit it from anywhere. Since each request costs money (OpenAI isn&amp;#39;t free), I needed some protection against bots and malicious actors trying to spike my bill.&lt;/p&gt;&lt;p&gt;Vercel has a neat solution for this: bot protection via their &lt;code&gt;checkBotId&lt;/code&gt; function. It looks at the incoming request and determines if it&amp;#39;s coming from a bot. 
Simple, effective, and no CAPTCHAs asking users to identify crosswalks.&lt;/p&gt;&lt;h2&gt;A production bug that only affected Firefox and Safari&lt;/h2&gt;&lt;p&gt;Everything worked perfectly in local development. Deployed to production, tested in Chrome. Still perfect. Then I opened Firefox.&lt;/p&gt;&lt;p&gt;&amp;quot;Access denied.&amp;quot; The same request that worked in Chrome was getting blocked in Firefox. Safari had the same issue.&lt;/p&gt;&lt;p&gt;I checked Sentry. The error was showing up repeatedly, but only Firefox and Safari were affected. Chrome users were fine.&lt;/p&gt;&lt;p&gt;I tried fixing it. Multiple releases, multiple attempts. The error kept coming back. The stack trace wasn&amp;#39;t helpful; it just showed me that the bot check was returning &lt;code&gt;true&lt;/code&gt; for these browsers. But &lt;i&gt;why&lt;/i&gt; would Firefox and Safari be flagged as bots when Chrome wasn&amp;#39;t?&lt;/p&gt;&lt;p&gt;The stack trace couldn&amp;#39;t answer that question.&lt;/p&gt;&lt;h2&gt;Adding logs to capture the missing context&lt;/h2&gt;&lt;p&gt;This is the kind of problem where you need more context than an error alone can provide. I needed to see what data the &lt;code&gt;checkBotId&lt;/code&gt; function was working with when it made its decision.&lt;/p&gt;&lt;p&gt;So I added a log:&lt;/p&gt;&lt;p&gt;Nothing fancy. Just log the bot check result along with the user agent string that was passed to the function. Bot protection typically works by examining the user agent, so this seemed like the right data to capture.&lt;/p&gt;&lt;p&gt;The key here is that Sentry logs are high-cardinality. You can pass any attributes you want, and you&amp;#39;ll be able to search and filter by them later. No need to decide upfront which attributes are &amp;quot;important&amp;quot;. 
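&lt;/p&gt;&lt;p&gt;The attribute payload itself is just a plain object. As an illustrative, self-contained sketch (the helper and values are hypothetical, not the app’s actual code), building it from the incoming request might look like this:&lt;/p&gt;

```javascript
// Hedged sketch: collect the attributes attached to the "Bot ID check result"
// log. `result` stands in for whatever the bot check returned; names are
// illustrative, and Request is the standard Fetch API Request (Node 18+).
function buildBotLogAttributes(result, request) {
  return {
    isBot: result.isBot,
    userAgent: request.headers.get('user-agent'),
  };
}

const attrs = buildBotLogAttributes(
  { isBot: true },
  new Request('https://example.com/api/chat', {
    method: 'POST',
    headers: { 'user-agent': 'ai-sdk/example' },
  })
);
// attrs.isBot is true; attrs.userAgent is 'ai-sdk/example'
```

&lt;p&gt;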
Just log what might be useful and let Sentry handle the rest.&lt;/p&gt;&lt;h2&gt;Using Sentry Logs to identify the root cause&lt;/h2&gt;&lt;p&gt;With logs in place, I headed over to Sentry&amp;#39;s Logs view and searched for my &amp;quot;Bot ID check result&amp;quot; messages. I added the &lt;code&gt;isBot&lt;/code&gt; attribute as a column so I could quickly scan the results. (In Sentry, boolean values show as 0 for false and 1 for true.)&lt;/p&gt;&lt;p&gt;I found a request that passed the bot check: &lt;code&gt;isBot: 0&lt;/code&gt;. Looking at the details, the user agent was exactly what you&amp;#39;d expect: a standard Chrome user agent string.&lt;/p&gt;&lt;p&gt;Then I looked at a request that failed: &lt;code&gt;isBot: 1&lt;/code&gt;. The user agent was... not what I expected.&lt;/p&gt;&lt;p&gt;Instead of the browser&amp;#39;s user agent, I was seeing &lt;code&gt;ai-sdk&lt;/code&gt;. The AI SDK was sending its own user agent string instead of the browser&amp;#39;s.&lt;/p&gt;&lt;p&gt;This explained everything. When the AI SDK makes requests to the backend, it uses its own user agent. Vercel&amp;#39;s bot protection sees &lt;code&gt;ai-sdk&lt;/code&gt; and thinks, reasonably, that it&amp;#39;s not a real browser. Bot detected. Access denied.&lt;/p&gt;&lt;p&gt;But why only Firefox and Safari? Because something about how those browsers (or my setup in those browsers) handled the request was causing the AI SDK&amp;#39;s user agent to be used instead of the browser&amp;#39;s. Chrome happened to pass through the correct user agent.&lt;/p&gt;&lt;p&gt;To confirm my hunch, I used Sentry&amp;#39;s trace connection feature. Everything in Sentry is linked by trace, so I could navigate from the log entry back to the full trace view and see the broader context of the request.&lt;/p&gt;&lt;p&gt;Sure enough, the trace confirmed this was coming from Firefox. Mystery solved.&lt;/p&gt;&lt;h2&gt;Fixing the issue once the data told the story&lt;/h2&gt;&lt;p&gt;The solution was straightforward. 
In Vercel&amp;#39;s firewall settings, I added a rule to bypass bot protection for requests where the user agent contains &lt;code&gt;ai-sdk&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;Saved the rule, published the changes, and tried again in Firefox.&lt;/p&gt;&lt;p&gt;It worked. No more access denied errors. It’s also being tracked in a &lt;a href=&quot;https://github.com/vercel/ai/issues/9256&quot;&gt;GitHub issue&lt;/a&gt; on the AI SDK for those who are curious.&lt;/p&gt;&lt;h2&gt;What this bug clarified about logging and debugging&lt;/h2&gt;&lt;p&gt;This bug would have taken much longer to diagnose without logs. The error itself, &amp;quot;Access denied&amp;quot;, told me nothing about &lt;i&gt;why&lt;/i&gt; the request was being denied. The stack trace showed me &lt;i&gt;where&lt;/i&gt; it happened, but not the data that caused it.&lt;/p&gt;&lt;p&gt;A few takeaways:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Logs provide context that stack traces can&amp;#39;t.&lt;/b&gt; When you&amp;#39;re debugging, you often need to know what the data looked like at a specific point in time. Errors capture the moment of failure; logs capture the journey.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;High-cardinality attributes are powerful.&lt;/b&gt; Being able to search logs by any attribute (&lt;code&gt;isBot&lt;/code&gt;, &lt;code&gt;userAgent&lt;/code&gt;) makes it trivial to slice and dice your data. You don&amp;#39;t have to predict which attributes will be useful ahead of time.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Trace connection ties everything together.&lt;/b&gt; Seeing a log in isolation is useful, but being able to jump from a log to the full trace (and vice versa) gives you the complete picture. 
In this case, it let me confirm that the AI SDK user agent was indeed coming from Firefox requests.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;If you&amp;#39;re already using Sentry for &lt;a href=&quot;https://sentry.io/product/error-monitoring/&quot;&gt;error tracking&lt;/a&gt;, adding logs is a natural next step. For new projects, you can use the &lt;code&gt;Sentry.logger&lt;/code&gt; API directly. If you have existing logging with something like Pino, check out the &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/nextjs/logs/#integrations&quot;&gt;logging integrations&lt;/a&gt; to pipe those logs into Sentry automatically.&lt;/p&gt;&lt;p&gt;Head on over to our &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/nextjs/logs/&quot;&gt;Next.js Logs docs&lt;/a&gt; to learn more about how to send structured logs from your application to Sentry for debugging and observability. Or just check out our &lt;a href=&quot;https://sentry.io/quickstart/logs/?sdk=nextjs&quot;&gt;Logs quickstart guide&lt;/a&gt; and get up and running in no time.&lt;/p&gt;&lt;p&gt;Not everything that breaks throws an error. Sometimes you just need to see what was happening.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Meet Sentry’s 2025 Fall Interns]]></title><description><![CDATA[At Sentry, interns aren’t just observers, they’re teammates who ship meaningful work. This fall, our software engineering interns across our San Francisco and T...]]></description><link>https://blog.sentry.io/meet-sentrys-2025-fall-interns/</link><guid isPermaLink="false">https://blog.sentry.io/meet-sentrys-2025-fall-interns/</guid><pubDate>Mon, 12 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;At &lt;a href=&quot;https://sentry.io/welcome/&quot;&gt;Sentry&lt;/a&gt;, interns aren’t just observers, they’re teammates who ship meaningful work. 
This fall, our software engineering interns across our San Francisco and Toronto offices jumped right in, contributing to real projects that made a tangible impact. They built features, fixed tricky bugs, improved performance, and brought fresh perspectives that elevated our teams. Beyond sharpening their technical skills, they showed us exactly what the next generation of engineering talent can do. Here’s a look at their stories ✨.&lt;/p&gt;&lt;h2&gt;Why Sentry?&lt;/h2&gt;&lt;h3&gt;Max&lt;/h3&gt;&lt;p&gt;I really wanted to try working at a smaller company which Sentry provided. I thought that it was really cool being able to work in such a tight knit environment where you can see the decisions being made in a way that bigger companies don’t allow.&lt;/p&gt;&lt;h3&gt;Jerry&lt;/h3&gt;&lt;p&gt;I manage a course review platform (&lt;a href=&quot;http://uwflow.com&quot;&gt;UWFlow.com&lt;/a&gt;) on the side, and one of the core maintainers told me to integrate Sentry because they used it at work. I begrudgingly did it, only to find it&amp;#39;s actually not a bad tool. I later saw they were hiring and figured why not. &lt;/p&gt;&lt;h3&gt;Cliff&lt;/h3&gt;&lt;p&gt;When I previously interned at larger tech companies in the past, I felt that I had no real impact or ownership on any of the projects I worked on. The opportunity to receive a larger scope and ship impactful code that contributes to Sentry’s product stood out to me, which is why I chose to intern here.&lt;/p&gt;&lt;h2&gt;What did you work on? &lt;/h2&gt;&lt;h3&gt;Max&lt;/h3&gt;&lt;p&gt;I was on the Codecov team and I worked primarily on enabling reruns for tasks. This feature allowed admin users and support staff to much more easily help our customers best use the Codecov product.&lt;/p&gt;&lt;h3&gt;Jerry&lt;/h3&gt;&lt;p&gt;I was on the &lt;a href=&quot;https://sentry.io/product/session-replay/&quot;&gt;Session Replay&lt;/a&gt; team. 
I shipped live streaming replays, in addition to redesigning the Replay UI with playlists, and previous/next navigation. &lt;/p&gt;&lt;h3&gt;Cliff&lt;/h3&gt;&lt;p&gt;On the Session Replay team, I mainly worked on migrating Replay’s breadcrumb data from its own Snuba dataset to EAP (Events Analytics Platform), Sentry’s unified storage for observability data. &lt;/p&gt;&lt;h2&gt;What was the internship like?&lt;/h2&gt;&lt;h3&gt;Max&lt;/h3&gt;&lt;p&gt;I feel this internship provided a great opportunity to work on impactful projects where I could see the final business impact. A lot of times, I feel internships are defined completely by your manager/mentors, but here, I felt I was able to decide my own experience. I got to work on important issues as they came up, and helped the team in deciding what direction to take next. I feel like I was able to achieve a ton and learn a lot. Alongside the technical aspect, I really enjoyed doing all the internship events with the rest of the interns. We did an escape room and Chinatown food tour, and they were so fun.&lt;/p&gt;&lt;h3&gt;Jerry&lt;/h3&gt;&lt;p&gt;One of the most talent dense companies I’ve worked at in terms of engineering capability. My team was responsible for foundational projects that power Session Replay, and it was really interesting to be able to work alongside them and understand how they do things. In addition, there is a focus on work-life-balance that’s rare in tech. At the end of the day, it’s the people for me. &lt;/p&gt;&lt;h3&gt;Cliff&lt;/h3&gt;&lt;p&gt;I was able to work on challenging and high impact tasks during my internship here, which allowed me to solve really interesting problems and learn new concepts. My team was really friendly and gave me the opportunity to pursue projects that I was interested in (for example I was initially assigned a frontend project, but because of my interest in working on backend tasks, I was assigned a project that aligned with what I was interested in). 
&lt;/p&gt;&lt;h2&gt;Looking ahead&lt;/h2&gt;&lt;p&gt;If you&amp;#39;re eager to explore our ongoing internship programs and gain valuable experience, we invite you to check out our &lt;a href=&quot;https://sentry.io/careers/&quot;&gt;careers page&lt;/a&gt; for more information. You&amp;#39;ll find insights into our internship programs, application processes, and the benefits of joining our team.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Unity SDK 4.0.0: Console support, logs, user feedback and more]]></title><description><![CDATA[We just released the Sentry SDK for Unity 4.0.0, our biggest update yet. This major release brings comprehensive gaming console support, structured logging, us...]]></description><link>https://blog.sentry.io/introducing-gaming-console-support-logs-user-feedback-unity/</link><guid isPermaLink="false">https://blog.sentry.io/introducing-gaming-console-support-logs-user-feedback-unity/</guid><pubDate>Thu, 08 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We just released the Sentry SDK for Unity &lt;a href=&quot;https://github.com/getsentry/sentry-unity/releases/tag/4.0.0&quot;&gt;&lt;code&gt;4.0.0&lt;/code&gt;&lt;/a&gt;, our biggest update yet. This major release brings comprehensive gaming console support, structured logging, user feedback capabilities, and significant improvements to help you build better games across all platforms. Here&amp;#39;s what&amp;#39;s new:&lt;/p&gt;&lt;h2&gt;Gaming console support&lt;/h2&gt;&lt;p&gt;The Sentry SDK for Unity now provides native support for Xbox and PlayStation, bringing the full scope of Sentry&amp;#39;s error tracking to gaming consoles. The SDK automatically syncs the scope to the native layer, so that when the game crashes on a console, the captured issue has full stack traces with proper C# line numbers, along with custom contexts, tags, and breadcrumbs. 
This unified experience across all platforms makes it easier to triage and fix issues regardless of where they occur.&lt;/p&gt;&lt;h2&gt;Structured logs&lt;/h2&gt;&lt;p&gt;Structured logs are now production-ready in the Sentry SDK for Unity. This means that log output is directly connected to errors, crashes, and performance issues in your game.&lt;/p&gt;&lt;p&gt;The SDK automatically captures debug log output based on your configuration, creating structured log entries that you can browse and search on Sentry. When a player experiences a crash during scene loading or gets stuck in a loading screen, you&amp;#39;ll have the complete log trail leading up to the problem, making it much easier to diagnose hard-to-reproduce issues.&lt;/p&gt;&lt;h2&gt;User feedback&lt;/h2&gt;&lt;p&gt;User Feedback support is now available in the Sentry SDK for Unity, enabling players to report issues or share general feedback about their gameplay experience. This helps bridge the gap between technical error data and player perspective, allowing you to see real player insights alongside stack traces and technical diagnostics.&lt;/p&gt;&lt;p&gt;The SDK includes a ready-to-use &lt;code&gt;SentryUserFeedback&lt;/code&gt; prefab that you can drag and drop into your scenes or customize by creating your own prefab variant. Players can provide written messages with optional screenshots attached, helping you understand issues from their point of view. Of course, if you want to build your own custom feedback UI, you can submit User Feedback programmatically using the static API. Learn more about implementing User Feedback in the &lt;a href=&quot;https://docs.sentry.io/product/user-feedback/&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;&lt;h2&gt;Simplified configuration&lt;/h2&gt;&lt;p&gt;The &lt;code&gt;4.0.0&lt;/code&gt; release streamlines SDK configuration by consolidating Runtime and BuildTime configurations into a single &lt;code&gt;OptionsConfiguration&lt;/code&gt; script. 
This simplification makes it easier to configure platform-specific options using preprocessor directives. It also allows you to opt out of the auto-initialization behavior, enabling you to call &lt;code&gt;Init&lt;/code&gt; programmatically at any point during the game’s lifecycle without sacrificing any features or support, such as native error coverage on mobile.&lt;/p&gt;&lt;p&gt;Check out the &lt;a href=&quot;https://docs.sentry.io/platforms/unity/migration/#changes-to-the-programmatic-configuration&quot;&gt;Migration Guide&lt;/a&gt; for details on updating your configuration.&lt;/p&gt;&lt;h2&gt;Performance and reliability improvements&lt;/h2&gt;&lt;p&gt;This release includes numerous fixes that improve stability across all platforms, most notably:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Fixed Android race conditions&lt;/b&gt;&lt;/p&gt;&lt;p&gt; that could cause crashes, especially in concurrent scenarios, by replacing the ThreadPool with a dedicated background worker thread that properly manages the JNI lifecycle&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Improved WebGL exception capture&lt;/b&gt;&lt;/p&gt;&lt;p&gt; through the logging integration for better stack trace support&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Additional improvements include better debug symbol discovery, correct scene name reporting, improved Burst support, and thread-safe screenshot capture.&lt;/p&gt;&lt;h2&gt;Enhanced context and debugging&lt;/h2&gt;&lt;p&gt;The SDK now provides richer context automatically:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Screenshots on crashes&lt;/b&gt; - When targeting Windows with screenshot capture enabled, the SDK will now capture and attach screenshots to native crashes&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Memory reporting&lt;/b&gt; - Allocated memory is now reported with every event and transaction&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Low memory breadcrumbs&lt;/b&gt; - Automatic breadcrumbs for 
&lt;code&gt;Application.lowMemory&lt;/code&gt; events&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Lifecycle breadcrumbs&lt;/b&gt; - Automatic tracking when the game loses and regains focus&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Session improvements&lt;/b&gt; - Exceptions are correctly marked as unhandled rather than crashed, and sessions now persist from game start to end&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;For third-party plugin users, the SDK automatically marks stack frames from popular libraries like Cysharp and DOTween as non-in-app, significantly improving stack trace readability and issue grouping.&lt;/p&gt;&lt;h2&gt;Breaking changes&lt;/h2&gt;&lt;p&gt;This major release drops support for Unity 2020, which reached End of Life in 2023. The minimum supported version is now Unity 2021. Additionally, if you&amp;#39;re running your game or your server on Ubuntu 20.04, you should update to Ubuntu 22.04 before upgrading to this SDK version, as sentry-native is now built against the newer Ubuntu version. For a complete list of breaking changes and migration guidance, see the &lt;a href=&quot;https://github.com/getsentry/sentry-unity/blob/main/CHANGELOG.md#400&quot;&gt;full changelog&lt;/a&gt;.&lt;/p&gt;&lt;h2&gt;Platform support and getting started&lt;/h2&gt;&lt;p&gt;The Sentry Unity SDK supports Windows, Linux, macOS, iOS, Android, Xbox, and PlayStation.&lt;/p&gt;&lt;p&gt;To get started:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Download from &lt;a href=&quot;https://github.com/getsentry/sentry-unity/releases&quot;&gt;GitHub Releases&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Read the &lt;a href=&quot;https://docs.sentry.io/platforms/unity/&quot;&gt;Unity SDK documentation&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Explore the updated sample scenes&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Have questions or feedback? Join the conversation in &lt;a href=&quot;https://github.com/getsentry/sentry-unity/discussions&quot;&gt;Discussions&lt;/a&gt;! 
If you&amp;#39;re new to Sentry, you can explore our &lt;a href=&quot;https://sandbox.sentry.io/issues/&quot;&gt;interactive Sentry sandbox&lt;/a&gt; or &lt;a href=&quot;https://sentry.io/signup/&quot;&gt;sign up for free&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Trace-connected structured logging with LogTape and Sentry]]></title><description><![CDATA[As our applications grow from simple side projects into complex distributed systems with many users, the “old way” of console.log debugging isn’t going to hold ...]]></description><link>https://blog.sentry.io/trace-connected-structured-logging-with-logtape-and-sentry/</link><guid isPermaLink="false">https://blog.sentry.io/trace-connected-structured-logging-with-logtape-and-sentry/</guid><pubDate>Wed, 07 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;As our applications grow from simple side projects into complex distributed systems with many users, the “old way” of &lt;code&gt;console.log&lt;/code&gt; debugging isn’t going to hold up. To build truly observable systems, we have to transition from simple text logs to structured, queryable, trace-connected events.&lt;/p&gt;&lt;p&gt;&lt;i&gt;Would you rather watch this blog in a video format? Check out &lt;/i&gt;&lt;a href=&quot;https://www.youtube.com/watch?v=k_qhTXAyiUs&quot;&gt;&lt;i&gt;Production Logging for JS with LogTape + Sentry on YouTube&lt;/i&gt;&lt;/a&gt;&lt;i&gt;.&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;b&gt;TL;DR: The Logging Strategy Shift&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Many of us treat logs like a breadcrumb trail, verifying that each line executes, and logging the outputs for debugging. In production, that breadcrumb trail turns into a mountain of noise. 
We need to move from logging the process to logging the milestone.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Clear the noise:&lt;/b&gt; Move away from &amp;quot;thin&amp;quot; logs that create noise and are difficult to query and correlate.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Embrace high cardinality:&lt;/b&gt; Pack your logs with &amp;quot;fat&amp;quot; context that builds up over a task. Include User IDs, Order IDs, Cart information, and more so you can query for the data you need for any given event.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Connect the dots:&lt;/b&gt; Use Sentry to keep logs trace-connected, linking every log to the specific request that triggered it.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;The Log Dump: Why &lt;code&gt;console.log&lt;/code&gt; fails in production&lt;/h2&gt;&lt;p&gt;Once logs stop being centralized and chronological, &lt;code&gt;console.log&lt;/code&gt; breaks down. In a production environment with multiple users and services, your logs quickly turn into an interleaved stream of events with no clear way to reconstruct what happened for any single request.&lt;/p&gt;&lt;p&gt;Without a shared trace to connect related logs and useful, filterable data, these logs become essentially useless in production.&lt;/p&gt;&lt;h2&gt;Implementing production-grade logging with LogTape and Sentry&lt;/h2&gt;&lt;p&gt;Sentry provides &lt;a href=&quot;https://docs.sentry.io/product/explore/logs/#trace-connected-debugging-flow&quot;&gt;trace-connected logging&lt;/a&gt;. With traces, you can see the full context of a request, including all of the logs associated with it. This will give us an easy way to query for the traces and logs associated with an issue or request.&lt;/p&gt;&lt;p&gt;Additionally, Sentry provides a powerful query engine that we can use to search our logs based on attributes and structured data. 
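A quick sketch of what "structured data" means here: the same checkout event as an unqueryable string versus an object whose properties a query engine can filter on (the field names below are illustrative, not a required schema):

```typescript
// A plain string log: every detail is baked into prose, so finding all
// large orders later means parsing text.
const stringLog = "User 42 checked out order 9001 for $129.99";

// A structured log: the same event as an object with defined properties.
const structuredLog = {
  message: "Order checkout completed",
  userId: 42,
  orderId: 9001,
  totalAmount: 129.99,
};

// With structure, filtering on any attribute is a one-line expression.
const isLargeOrder = structuredLog.totalAmount > 100;
console.log(isLargeOrder); // true
```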
From there, we can create alerts and dashboards based on our results.&lt;/p&gt;&lt;p&gt;&lt;b&gt;&lt;/b&gt;&lt;a href=&quot;https://logtape.org/&quot;&gt;&lt;b&gt;LogTape&lt;/b&gt;&lt;/a&gt; is a lightweight logging library for all JavaScript runtimes. Logging libraries like LogTape provide a way for us to instrument our code with automatic, rich structured logging and send those logs to Sentry using a &amp;quot;log sink&amp;quot;.&lt;/p&gt;&lt;p&gt;Structured logging is a format where, instead of simple strings, we treat a log as a structured object with defined properties.&lt;/p&gt;&lt;p&gt;That allows us to write powerful queries and filters to find and surface the data we need from our logs for debugging in production.&lt;/p&gt;&lt;p&gt;&lt;i&gt;Example from the &lt;/i&gt;&lt;a href=&quot;https://logtape.org/manual/struct&quot;&gt;&lt;i&gt;LogTape Structured Logging manual&lt;/i&gt;&lt;/a&gt;&lt;/p&gt;&lt;h3&gt;&lt;b&gt;Quick start: Next.js setup&lt;/b&gt;&lt;/h3&gt;&lt;p&gt;We&amp;#39;ll be using the Next.js framework for this example, but the concepts can be applied to any JavaScript framework.&lt;/p&gt;&lt;p&gt;You can follow the Quick Start guide for your framework of choice to initialize Sentry in your project. 
For &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/nextjs/&quot;&gt;Next.js&lt;/a&gt;, we&amp;#39;ll use the &lt;code&gt;@sentry/wizard&lt;/code&gt; to initialize Sentry in the project.&lt;/p&gt;&lt;p&gt;After the wizard completes, you should see that a few files were created in your project, which will automatically instrument your project with error monitoring and begin capturing traces and logs.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;instrumentation-client.ts&lt;/code&gt; — Runs in the browser&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;sentry.server.config.ts&lt;/code&gt; — Runs in Node.js&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;sentry.edge.config.ts&lt;/code&gt; — Runs in edge runtimes&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Your instrumentation-client.ts file should look similar to this:&lt;/p&gt;&lt;p&gt;You&amp;#39;ll also have a new &lt;code&gt;/sentry-example-page&lt;/code&gt; route that will help you test that Sentry is working.&lt;/p&gt;&lt;p&gt;If you start your development server, navigate to the example page, and click the &amp;quot;Throw error&amp;quot; button, you&amp;#39;ll see the issue captured in Sentry, and the associated trace.&lt;/p&gt;&lt;p&gt;This &lt;a href=&quot;https://docs.sentry.io/concepts/key-terms/tracing/&quot;&gt;trace&lt;/a&gt; links everything related to a request, including issues, session replays, and logs.&lt;/p&gt;&lt;h2&gt;Configuring LogTape with Sentry&lt;/h2&gt;&lt;p&gt;We are already configured to receive logs on Sentry, but we need to define &lt;i&gt;how&lt;/i&gt; we want to send those logs.&lt;/p&gt;&lt;p&gt;To receive structured logs with LogTape, we&amp;#39;ll take advantage of the &amp;quot;&lt;a href=&quot;https://logtape.org/manual/sinks&quot;&gt;log sink&lt;/a&gt;&amp;quot;. 
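As a reference point, the wizard-generated instrumentation-client.ts mentioned above is typically just a Sentry.init call along these lines. This is a minimal sketch: the DSN is a placeholder, and exact option names can vary by SDK version.

```typescript
// instrumentation-client.ts -- minimal sketch of the wizard-generated config.
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  // Placeholder DSN; the wizard fills in your project's real one.
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0",
  // Capture every trace while developing; lower this in production.
  tracesSampleRate: 1.0,
  // Allow structured logs to be sent to Sentry.
  enableLogs: true,
});
```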
Rather than (or in addition to) sending the logs to the console, we can send our trace-connected logs &lt;i&gt;directly&lt;/i&gt; to Sentry.&lt;/p&gt;&lt;p&gt;First, install the LogTape and LogTape Sentry packages:&lt;/p&gt;&lt;h3&gt;Client-side configuration&lt;/h3&gt;&lt;p&gt;In your &lt;code&gt;instrumentation-client.ts&lt;/code&gt; file that we looked at above, we&amp;#39;ll add the LogTape configuration.&lt;/p&gt;&lt;p&gt;We&amp;#39;ll break this down after we configure the rest, but in short, we&amp;#39;re configuring LogTape so that when we use the &lt;code&gt;logger&lt;/code&gt; object, it will send the logs to the console &lt;i&gt;and&lt;/i&gt; Sentry.&lt;/p&gt;&lt;p&gt;We are also setting a &lt;a href=&quot;https://logtape.org/manual/categories&quot;&gt;category&lt;/a&gt; for the logs, which will be used to group logs together in Sentry by domain.&lt;/p&gt;&lt;h3&gt;Server-side configuration&lt;/h3&gt;&lt;p&gt;LogTape has a special feature called &lt;a href=&quot;https://logtape.org/manual/contexts&quot;&gt;contexts&lt;/a&gt; that allows us to pass data down the call stack. Meaning, we can easily append contextual data to the log stack at any point, to be included in any logs submitted.&lt;/p&gt;&lt;p&gt;On the backend, we can take advantage of &lt;a href=&quot;https://logtape.org/manual/contexts#implicit-contexts&quot;&gt;implicit contexts&lt;/a&gt;, which will allow us to append data to all logs in the stack. 
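On the server, that setup might look like the following sketch. The package names, the getSentrySink export, and the contextLocalStorage option follow LogTape's documentation, but treat the exact names as assumptions to verify against the current docs.

```typescript
// sentry.server.config.ts (sketch) -- register LogTape sinks and enable
// implicit contexts so withContext() data is inherited down the call stack.
import { AsyncLocalStorage } from "node:async_hooks";
import { configure, getConsoleSink, getLogger, withContext } from "@logtape/logtape";
import { getSentrySink } from "@logtape/sentry";

await configure({
  sinks: {
    console: getConsoleSink(),
    sentry: getSentrySink(), // adapter that forwards logs to Sentry
  },
  // Backing store for implicit contexts on Node.js.
  contextLocalStorage: new AsyncLocalStorage(),
  loggers: [
    { category: "nextapp-demo", lowestLevel: "debug", sinks: ["console", "sentry"] },
  ],
});

// Later, in a request handler: every log emitted inside the callback
// (including logs from functions it calls) carries the requestId property.
withContext({ requestId: "req_123" }, () => {
  getLogger(["nextapp-demo", "api"]).info("Handling request");
});
```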
By adding the configuration below, we can utilize the &lt;a href=&quot;https://logtape.org/manual/contexts#basic-usage&quot;&gt;&lt;code&gt;withContext&lt;/code&gt;&lt;/a&gt; method to automatically insert data at the current scope, and all subroutines will inherit that information automatically.&lt;/p&gt;&lt;p&gt;In your &lt;code&gt;sentry.server.config.ts&lt;/code&gt; file, we&amp;#39;ll add the LogTape configuration for the server.&lt;/p&gt;&lt;p&gt;We&amp;#39;ll use &lt;a href=&quot;https://logtape.org/manual/contexts#implicit-contexts&quot;&gt;implicit context inheritance&lt;/a&gt; to build up and collect data throughout our app in a partially automated way. Then when we do finally log a message, it will contain debugging information from the whole stack until that point.&lt;/p&gt;&lt;h2&gt;Using LogTape in your Next.js app with Sentry&lt;/h2&gt;&lt;p&gt;With Sentry and LogTape configured, we can start using the &lt;code&gt;logger&lt;/code&gt; object to log messages.&lt;/p&gt;&lt;p&gt;We use &lt;code&gt;getLogger&lt;/code&gt; to fetch the &lt;a href=&quot;https://logtape.org/manual/categories#root-logger&quot;&gt;root logger&lt;/a&gt; for the given category, which we set in the &lt;code&gt;instrumentation-client.ts&lt;/code&gt;, and &lt;code&gt;sentry.server.config.ts&lt;/code&gt; files for the frontend and backend, respectively.&lt;/p&gt;&lt;p&gt;You&amp;#39;ll notice here we have also set an additional category &lt;code&gt;api&lt;/code&gt;, which will ensure all logs on this scoped &lt;a href=&quot;https://logtape.org/manual/categories#child-loggers&quot;&gt;child logger&lt;/a&gt; will be tagged with &lt;code&gt;category: nextapp-demo.api&lt;/code&gt; , which we can query later.&lt;/p&gt;&lt;p&gt;You can continue to scope this further as needed with &lt;a href=&quot;https://logtape.org/manual/categories#nesting&quot;&gt;nesting&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;In &lt;a href=&quot;https://sentry.io/explore/logs/&quot;&gt;Explore &amp;gt; Logs&lt;/a&gt;, this 
query is searching for logs where &lt;code&gt;category&lt;/code&gt; &amp;quot;contains&amp;quot; &lt;code&gt;nextapp-demo.api&lt;/code&gt;, meaning it shows &lt;i&gt;all&lt;/i&gt; API logs, including those under the nested &lt;code&gt;.api.posts&lt;/code&gt; category.&lt;/p&gt;&lt;h2&gt;Querying LogTape in Sentry&lt;/h2&gt;&lt;p&gt;After collecting a few logs from your frontend and backend in your app, navigate to the &lt;a href=&quot;https://sentry.io/explore/logs/&quot;&gt;Log Explorer in Sentry&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;We saw that we can set categories, like &lt;code&gt;api&lt;/code&gt;, or even an array of nested categories. Combined with the structured data sent in our logs, we can build queries based on any available attributes.&lt;/p&gt;&lt;p&gt;Each log created with our logger will contain the inherited attributes, all of which we can query on.&lt;/p&gt;&lt;h3&gt;From queries to alerts and dashboards&lt;/h3&gt;&lt;p&gt;Once your logs are structured and queryable, you can turn those same queries into &lt;a href=&quot;https://docs.sentry.io/product/alerts/&quot;&gt;alerts&lt;/a&gt; and &lt;a href=&quot;https://docs.sentry.io/product/dashboards/custom-dashboards/&quot;&gt;dashboards&lt;/a&gt; in Sentry.&lt;/p&gt;&lt;p&gt;You can configure an alert, for example, when there is a higher-than-average number of &lt;code&gt;warn&lt;/code&gt; logs in a given component or service, and have that alert notify a specific team or developer.&lt;/p&gt;&lt;h2&gt;Strategy: What (and when) to log&lt;/h2&gt;&lt;p&gt;Before we instrument every function, we need a plan. High-volume production apps can generate millions of logs, creating noise that&amp;#39;s difficult to sift through and eats through storage limits.&lt;/p&gt;&lt;h3&gt;Choosing the right level&lt;/h3&gt;&lt;p&gt;We use log levels as a top-level filter to reduce the noise. 
In production, we typically set our &lt;b&gt;Sentry sink&lt;/b&gt; to &lt;code&gt;info&lt;/code&gt; or &lt;code&gt;warn&lt;/code&gt;, while keeping the &lt;b&gt;console sink&lt;/b&gt; at &lt;code&gt;debug&lt;/code&gt; for local work.&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Debug:&lt;/b&gt; High-volume data (e.g., &amp;quot;Rendering PostItem ID: 123&amp;quot;).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Info:&lt;/b&gt; Major lifecycle events (e.g., &amp;quot;User login,&amp;quot; &amp;quot;Payment processed&amp;quot;).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Warn:&lt;/b&gt; Recoverable issues (e.g., &amp;quot;API timeout, retrying&amp;quot;).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Error:&lt;/b&gt; Critical failures that require immediate attention.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;When to log: The event-driven approach&lt;/h3&gt;&lt;p&gt;To keep &lt;i&gt;our&lt;/i&gt; signal-to-noise ratio healthy, we don&amp;#39;t log every line of code. Instead, we log transitions:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;The &amp;quot;Happy Path&amp;quot; Boundaries:&lt;/b&gt; Log when a major process starts and ends (e.g., &lt;code&gt;checkout_started&lt;/code&gt; -&amp;gt; &lt;code&gt;checkout_completed&lt;/code&gt;).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;External Dependencies:&lt;/b&gt; Calling an external dependency has a high potential for failure. You may want to add a log before calling any external dependency, capturing data from the request payload that will help you debug future failures.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Recoverable Errors (Warn):&lt;/b&gt; Use the &lt;code&gt;warn&lt;/code&gt; level for things that didn&amp;#39;t break the app but aren&amp;#39;t &amp;quot;normal,&amp;quot; like a cache miss that required a heavy database rebuild. 
These types of queries will be especially useful for monitoring performance or creating alerts.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h3&gt;Fewer logs, more cardinality&lt;/h3&gt;&lt;p&gt;A common trap is thinking you shouldn&amp;#39;t “stuff” your logs with data because it might bloat log sizes or increase costs. In reality, a thousand logs won&amp;#39;t help you if they don’t contain the specific data needed to reproduce a bug.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Cardinality&lt;/b&gt; refers to the uniqueness of the data within your logs. High-cardinality data includes things like &lt;code&gt;userId&lt;/code&gt;, &lt;code&gt;sessionId&lt;/code&gt;, &lt;code&gt;orderId&lt;/code&gt;, and &lt;code&gt;requestId&lt;/code&gt;. While older self-managed logging stacks struggled with high-cardinality data (it made queries slow), modern tools like Sentry thrive on it.&lt;/p&gt;&lt;h3&gt;The shift: From &amp;quot;chatty&amp;quot; to &amp;quot;contextual&amp;quot;&lt;/h3&gt;&lt;p&gt;We want to move away from logging every individual line of execution and instead move toward logging &lt;i&gt;milestones&lt;/i&gt; with accumulated context.&lt;/p&gt;&lt;p&gt;Consider a cart checkout flow. In the &amp;quot;old way,&amp;quot; you might have several distinct log lines:&lt;/p&gt;&lt;p&gt;&lt;b&gt;The &amp;quot;Chatty&amp;quot; trace (Distributed but thin)&lt;/b&gt;&lt;/p&gt;&lt;p&gt;You have multiple “thin” logs, each trace-connected, but the data within can’t easily be queried together. 
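Sketched with a toy in-memory collector standing in for a real logger (the checkout field names are illustrative), the chatty flow looks like this:

```typescript
// Toy collector standing in for a real logger, to show the data's shape.
const events: any[] = [];
const log = (message: string, props = {}) => {
  events.push({ message, ...props });
};

// Four thin, trace-connected logs, each holding only a slice of the state.
log("Checkout started", { userId: 42 });
log("Cart validated", { itemCount: 3 });
log("Payment processed", { orderId: 9001 });
log("Checkout completed", { totalAmount: 129.99 });

// No single event carries both orderId and totalAmount, so no one query
// can correlate them without stitching the trace back together.
const joined = events
  .filter((e) => e.orderId !== undefined)
  .filter((e) => e.totalAmount !== undefined);
console.log(joined.length); // 0
```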
To see the &lt;code&gt;orderId&lt;/code&gt; and the &lt;code&gt;totalAmount&lt;/code&gt;, you have to click through the trace, find the specific &amp;quot;Payment processed&amp;quot; log, and hope the developer included the ID there.&lt;/p&gt;&lt;p&gt;&lt;b&gt;The &amp;quot;High-Cardinality&amp;quot; event (Rich and actionable)&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Instead of relying on the trace to &amp;quot;stitch together&amp;quot; a story from thin logs, we log &lt;i&gt;milestones&lt;/i&gt; with accumulated context.&lt;/p&gt;&lt;p&gt;Even with tracing, high-cardinality logs are superior for two reasons:&lt;/p&gt;&lt;p&gt;&lt;b&gt;1.  Global searchability&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Tracing helps you debug &lt;i&gt;one specific&lt;/i&gt; request. High-cardinality logs help you debug &lt;i&gt;the entire system&lt;/i&gt;. You can go to the Sentry Log Explorer and ask:&lt;/p&gt;&lt;p&gt;&lt;i&gt;&amp;quot;Show me every &amp;#39;Purchase Completed&amp;#39; event in the last 24 hours where &lt;/i&gt;&lt;code&gt;&lt;i&gt;discountCode&lt;/i&gt;&lt;/code&gt;&lt;i&gt; was &amp;#39;SAVE20&amp;#39; and &lt;/i&gt;&lt;code&gt;&lt;i&gt;latencyMs&lt;/i&gt;&lt;/code&gt;&lt;i&gt; was &amp;gt; 2000.&amp;quot;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;You can&amp;#39;t do that if the discount code was buried in a different log line three steps back in the trace.&lt;/p&gt;&lt;p&gt;&lt;b&gt;2. 
Reduced mental overhead&lt;/b&gt;&lt;/p&gt;&lt;p&gt;When you open an issue in Sentry, seeing one &amp;quot;fat&amp;quot; log entry with all relevant IDs and state is significantly faster than hunting through a list of 20 &amp;quot;thin&amp;quot; logs to piece together what the user was doing.&lt;/p&gt;&lt;p&gt;By focusing on &amp;quot;Event-Driven&amp;quot; logs with high cardinality, you turn your logs from a simple diagnostic trail into a powerful internal analytics engine.&lt;/p&gt;&lt;h2&gt;Real-world usage of LogTape in React&lt;/h2&gt;&lt;p&gt;Unlike the backend, the browser lacks the &lt;code&gt;AsyncLocalStorage&lt;/code&gt; API for implicit context. To avoid manually attaching user data to every log, we can use React Context to automate inheritance.&lt;/p&gt;&lt;p&gt;Let’s take a look at how to structure our project with React Contexts to automatically instrument our logs with useful contextual data.&lt;/p&gt;&lt;h3&gt;Create a logger context&lt;/h3&gt;&lt;p&gt;In &lt;code&gt;lib/logger-context.tsx&lt;/code&gt;, or wherever you define your context providers, create this file.&lt;/p&gt;&lt;p&gt;Then, wrap your app with the provider. 
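The context file from the previous step might look like this sketch. The component and hook names are illustrative, and the logger.with() call for binding properties is per LogTape's explicit-context API; verify against the current docs.

```typescript
// lib/logger-context.tsx (sketch) -- hands out a LogTape child logger bound
// to the signed-in user, so every log from a component carries { user }.
import { createContext, createElement, useContext, useMemo } from "react";
import { getLogger } from "@logtape/logtape";

// Fallback logger for components rendered outside the provider.
const LoggerContext = createContext(getLogger(["nextapp-demo", "web"]));

export function LoggerProvider(props: { user: any; children?: any }) {
  // Bind the user once per user change; with() returns a logger whose
  // logs all include the bound properties.
  const logger = useMemo(
    () => getLogger(["nextapp-demo", "web"]).with({ user: props.user }),
    [props.user],
  );
  // createElement instead of JSX keeps the sketch usable in a plain .ts file.
  return createElement(LoggerContext.Provider, { value: logger }, props.children);
}

// Components call useLogger() instead of importing a global logger.
export function useLogger() {
  return useContext(LoggerContext);
}
```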
This is usually done with layouts.&lt;/p&gt;&lt;p&gt;Now every component gets a logger with user context automatically:&lt;/p&gt;&lt;p&gt;Every log automatically includes &lt;code&gt;{ user: { id, email, name } }&lt;/code&gt; without manual configuration!&lt;/p&gt;&lt;h2&gt;The .LOG (Recap)&lt;/h2&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Structured data &amp;gt; Strings:&lt;/b&gt; Use objects to make logs searchable.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Nested organization:&lt;/b&gt; Use nesting categories to easily query up and down a stack.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Sink strategy:&lt;/b&gt; Utilize filtered sinks to send only the data you want to send, where you want to send it.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Context is King:&lt;/b&gt; Use &lt;code&gt;AsyncLocalStorage&lt;/code&gt; (Server) and &lt;code&gt;React Context&lt;/code&gt; (Client) to stop repeating yourself in log statements, and ensure the data you need is included.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Traceability:&lt;/b&gt; Easily discover all logs in a single request via traces.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;High cardinality:&lt;/b&gt; Use high cardinality logs at the end of events to create rich metrics from queries.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Monitoring:&lt;/b&gt; Use &lt;a href=&quot;https://docs.sentry.io/product/alerts/&quot;&gt;alerts&lt;/a&gt; and &lt;a href=&quot;https://docs.sentry.io/product/dashboards/custom-dashboards/&quot;&gt;custom dashboards&lt;/a&gt; to keep an eye on important metrics.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;You can read more about LogTape and structured logging in the &lt;a href=&quot;https://logtape.org/manual/install&quot;&gt;LogTape manual&lt;/a&gt;. 
Get more acquainted with Sentry’s logs with our &lt;a href=&quot;https://docs.sentry.io/product/explore/logs/&quot;&gt;interactive demos&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Another year, another $750,000 to Open Source maintainers]]></title><description><![CDATA[Bored yet? 2025 was the fifth year in a row (2024, 2023, 2022, 2021) that Sentry gave a pretty hefty chunk of change to the maintainers of the Open Source softw...]]></description><link>https://blog.sentry.io/another-year-another-750-000-to-open-source-maintainers/</link><guid isPermaLink="false">https://blog.sentry.io/another-year-another-750-000-to-open-source-maintainers/</guid><pubDate>Tue, 06 Jan 2026 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Bored yet? 2025 was the fifth year in a row (&lt;a href=&quot;https://blog.sentry.io/we-just-gave-750-000-dollars-to-open-source-maintainers/&quot;&gt;2024&lt;/a&gt;, &lt;a href=&quot;https://blog.sentry.io/we-just-gave-500-000-dollars-to-open-source-maintainers/&quot;&gt;2023&lt;/a&gt;, &lt;a href=&quot;https://blog.sentry.io/we-just-gave-260-028-dollars-to-open-source-maintainers/&quot;&gt;2022&lt;/a&gt;, &lt;a href=&quot;https://blog.sentry.io/we-just-gave-154-999-dollars-and-89-cents-to-open-source-maintainers/&quot;&gt;2021&lt;/a&gt;) that Sentry gave a pretty hefty chunk of change to the maintainers of the Open Source software that we rely on and love.&lt;/p&gt;&lt;p&gt;This is our first report since we launched the &lt;a href=&quot;https://opensourcepledge.com/&quot;&gt;Open Source Pledge&lt;/a&gt;, which brings together companies that share our respect for the independent maintainers in the community. Pledge members have collectively paid $4.5M to Open Source maintainers and foundations since launch. No more excuses! Companies paying maintainers is real. You should &lt;a href=&quot;https://opensourcepledge.com/join/&quot;&gt;join the party&lt;/a&gt;. 
:-)&lt;/p&gt;&lt;p&gt;As always, you can see the &lt;a href=&quot;https://thanks.dev/o/sentry-251112&quot;&gt;details of our primary distribution&lt;/a&gt; ($375k) on &lt;a href=&quot;https://thanks.dev/home&quot;&gt;thanks.dev&lt;/a&gt; (TD). They’re the easy button. You have no excuse. Go &lt;a href=&quot;https://thanks.dev/home&quot;&gt;sign up your company for thanks.dev&lt;/a&gt; and pay the maintainers of the projects you rely on. If you meet the Pledge minimum ($2000/dev/yr) and blog about it, we’ll &lt;a href=&quot;https://opensourcepledge.com/members/&quot;&gt;add your company to the list&lt;/a&gt;. The more who join, the more who &lt;i&gt;will&lt;/i&gt; join, and the stronger and more resilient the Open Source ecosystem will be.&lt;/p&gt;&lt;p&gt;We also continue to work with &lt;a href=&quot;https://oscollective.org/&quot;&gt;Open Source Collective&lt;/a&gt; / &lt;a href=&quot;https://funds.ecosyste.ms/&quot;&gt;Ecosyste.ms Funds&lt;/a&gt; ($75k) as well as &lt;a href=&quot;https://github.com/sponsors&quot;&gt;GitHub Sponsors&lt;/a&gt; ($50k). Like thanks.dev, Ecosyste.ms Funds makes it easy to support a number of projects at once, but they don’t look at our dependencies to see what projects we’re actually using in a given ecosystem. Their overall data is top-notch, though, and so we give 10% through them to ensure broad support for the ecosystems we rely on. Sadly, Microsoft mothballed Sponsors this past year. There were no Sponsor-related announcements at Universe, and they’ve stopped new feature development on the product with the shift to AI.&lt;/p&gt;&lt;p&gt;Speaking of shifts to AI, Sentry is of course &lt;a href=&quot;https://x.com/brexHQ/status/2005733057244913818&quot;&gt;selling shovels&lt;/a&gt; as fast as we can, while also &lt;a href=&quot;https://blog.sentry.io/sentry-just-got-an-upgrade-and-its-all-free/&quot;&gt;shoveling&lt;/a&gt;, ourselves. 
&lt;a href=&quot;https://numfocus.org/&quot;&gt;NumFOCUS&lt;/a&gt; manages &lt;a href=&quot;https://numfocus.org/sponsored-projects&quot;&gt;a lot of the Python projects&lt;/a&gt; at the heart of the modern AI stack that most companies rely on, including Sentry, so this year we’ve added them to the list of foundations we support. Thanks.dev also handles all these logistics for me, which is a big help. Really, you have no excuse.&lt;/p&gt;&lt;table&gt;&lt;tr&gt;&lt;th&gt;&lt;p&gt;&lt;b&gt;Recipient&lt;/b&gt;&lt;/p&gt;&lt;/th&gt;&lt;th&gt;&lt;p&gt;&lt;b&gt;Amount ($)&lt;/b&gt;&lt;/p&gt;&lt;/th&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;Django Software Foundation&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;30,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;Outreachy&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;25,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;Open Source Initiative&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;20,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;Python Software Foundation&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;16,750&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;Geomys&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;15,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;OpenJS Foundation&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;15,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;PostgreSQL&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;15,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;rrweb&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;15,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;Rust Software Foundation&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;15,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;PHP Foundation&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;12,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;Ruby Central&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;10,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;.NET 
Foundation&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;10,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;NumFOCUS&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;10,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;Apache Software Foundation&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;10,000&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;&lt;b&gt;TOTAL&lt;/b&gt;&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;&lt;b&gt;218,750&lt;/b&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td&gt;&lt;p&gt;&lt;b&gt;% of 750k&lt;/b&gt;&lt;/p&gt;&lt;/td&gt;&lt;td&gt;&lt;p&gt;&lt;b&gt;29&lt;/b&gt;&lt;/p&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;&lt;p&gt;As ever, a hearty thank you to all you Open Source maintainers out there. Keep up the great work!&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Building a Code Review system that uses prod data to predict bugs]]></title><description><![CDATA[This post takes a closer look at how Sentry’s AI Code Review actually works. As part of Seer, Sentry’s AI debugger, it uses Sentry context to accurately predict...]]></description><link>https://blog.sentry.io/building-a-code-review-system-that-uses-prod-data-to-predict-bugs/</link><guid isPermaLink="false">https://blog.sentry.io/building-a-code-review-system-that-uses-prod-data-to-predict-bugs/</guid><pubDate>Thu, 18 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;This post takes a closer look at how Sentry’s &lt;a href=&quot;https://sentry.io/product/ai-code-review/&quot;&gt;AI Code Review&lt;/a&gt; actually works.&lt;/p&gt;&lt;p&gt;As part of &lt;a href=&quot;https://sentry.io/product/seer/&quot;&gt;Seer&lt;/a&gt;, Sentry’s AI debugger, it uses Sentry context to accurately predict bugs. It runs automatically or on-demand, pointing out issues and suggesting fixes before you ship.&lt;/p&gt;&lt;p&gt;We know AI tools can be noisy, so this system focuses on finding real bugs in your actual changes—not spamming you with false positives and unhelpful style tips. 
By combining AI with your app’s Sentry data—how it runs and where it’s broken before—it helps you avoid shipping new bugs in the future.&lt;/p&gt;&lt;h2&gt;High-level architecture&lt;/h2&gt;&lt;p&gt;The code review system detects bugs using both code analysis and Sentry data to deliver suggestions to your PR.&lt;/p&gt;&lt;p&gt;Here’s an overview of AI Code Review’s architecture:&lt;/p&gt;&lt;h2&gt;Bug prediction pipeline&lt;/h2&gt;&lt;p&gt;To predict bugs with as much precision as possible, we employ a multi-step pipeline based on hypothesis and verification:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Filtering&lt;/b&gt; - In this step we gather PR information and filter down PR files to the most error-prone. This is especially important for large PRs.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Predicting&lt;/b&gt; - The exciting part. Here we run multiple agents that draft bug hypotheses and verify them.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Packaging &amp;amp; Shipping&lt;/b&gt; - Aggregate suggestions, filter and parse them into comments, then send them to your PR.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Later on, we will look at some traces from the &lt;a href=&quot;https://github.com/getsentry/sentry&quot;&gt;&lt;code&gt;getsentry/sentry&lt;/code&gt;&lt;/a&gt; repo that show the pipeline in action.&lt;/p&gt;&lt;h3&gt;Filtering&lt;/h3&gt;&lt;p&gt;For PRs with only a few changes, we add all files into the agent’s context. But to prevent the agent from being overwhelmed by large changes, if more than five files are changed, we narrow the set down to the most error-prone files.&lt;/p&gt;&lt;p&gt;This is done using an LLM that is instructed to drop testing files and doc changes, as well as files that superficially look less error-prone. 
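As a rough sketch of the three pipeline steps, assuming nothing about Sentry's internal code (every name below is hypothetical and the verification check is a stand-in for the real agents):

```javascript
// Illustrative sketch of the filter / predict / package pipeline.
// All names are invented for this example, not Sentry's actual code.

// Filtering: small PRs pass through; large ones are narrowed down.
// (The real system uses an LLM to rank files by error-proneness.)
function filterFiles(files, limit = 5) {
  return files.length > limit ? files.slice(0, limit) : files;
}

// Predicting: each hypothesis gets its own concurrent verification pass.
async function verify(hypothesis) {
  const verified = hypothesis.includes("error handling"); // stand-in check
  return { hypothesis, verified };
}

async function runPipeline(files, draft) {
  const candidates = filterFiles(files);
  const hypotheses = draft(candidates).slice(0, 3); // at most 3 hypotheses
  const results = await Promise.all(hypotheses.map(verify));
  // Packaging and shipping: keep only verified predictions for PR comments.
  return results.filter((r) => r.verified);
}
```

The point of the shape, mirrored from the description above, is that drafting is one pass, verification fans out per hypothesis, and only verified results survive to become comments.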
That said, test files are searched during both draft and verify agent runs (in the step below).&lt;/p&gt;&lt;h3&gt;Predicting&lt;/h3&gt;&lt;p&gt;Because context is king, all agents have access to different tools that provide them with rich context to understand the code being analyzed.&lt;/p&gt;&lt;p&gt;Our predictions are ultimately based on:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Actual code change&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;PR description&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Commit messages&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Sentry historical data for this repository (see &lt;a href=&quot;#how-sentry-context-is-used-5&quot;&gt;How Sentry context is used&lt;/a&gt; for details)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Code from the repository (via code-search tools)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Web information (via web-search tools)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;“Memories” gathered for the repository across PRs. These are specific details about your repository that the system learns over time with every PR it analyzes. We keep a list of up-to-date, relevant tidbits of information about the repository that are updated with every new PR analyzed. These are things like “tests for this repository use &lt;code&gt;pytest&lt;/code&gt; assertions” or “The &lt;code&gt;root_task&lt;/code&gt; in &lt;code&gt;tasks.py&lt;/code&gt; has a hard time limit of 15 minutes (900 seconds).”&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The agent workflow generates hypotheses and attempts to verify each one.&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Draft hypotheses&lt;/b&gt;: A drafting agent creates a report containing an initial analysis of potential bugs. This report is split into at most 3 bug hypotheses.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Verify hypotheses&lt;/b&gt;: Concurrently, each hypothesis is analyzed by a dedicated agent. Each verify agent has access to tools to fetch Sentry issues and event details. 
See &lt;a href=&quot;#how-sentry-context-is-used-5&quot;&gt;How Sentry context is used&lt;/a&gt; for details.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Gather results&lt;/b&gt;: A final agent gathers all the results and aggregates them into a list of verified bug predictions.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;By focusing each verifying agent on a single hypothesis, it can dive deep and more accurately determine whether that hypothesis is a valid bug.&lt;/p&gt;&lt;p&gt;That’s how we keep the signal-to-noise ratio high for bug predictions.&lt;/p&gt;&lt;h3&gt;Example&lt;/h3&gt;&lt;p&gt;Let’s look at another example trace for &lt;a href=&quot;https://github.com/getsentry/sentry/pull/100005&quot;&gt;this PR&lt;/a&gt; in &lt;code&gt;getsentry/sentry&lt;/code&gt;, specifically for commit &lt;code&gt;596e046c05cb6ec43ebf924dbc40fd1e3f40db26&lt;/code&gt;.&lt;/p&gt;&lt;hr/&gt;&lt;h4&gt;&lt;i&gt;File filtering&lt;/i&gt;&lt;/h4&gt;&lt;hr/&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Draft Hypothesis&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;Now I have a comprehensive understanding of the code changes. Let me analyze the potential issues:&lt;/i&gt;&lt;/p&gt;&lt;h4&gt;&lt;i&gt;&lt;b&gt;Analysis&lt;/b&gt;&lt;/i&gt;&lt;/h4&gt;&lt;p&gt;&lt;i&gt;This code change is refactoring &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; calls from using string-based event names to using class-based event objects. 
The repository context mentions that &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; calls should be wrapped in &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/except&lt;/code&gt;&lt;/i&gt;&lt;i&gt; blocks to prevent failures from propagating, as analytics recording is considered non-critical.&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;Looking at the changes:&lt;/i&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Most changes lack error handling&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: In files like &lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;code&gt;src/sentry/integrations/slack/webhooks/event.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;, &lt;/i&gt;&lt;i&gt;&lt;code&gt;src/sentry/integrations/discord/webhooks/message_component.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;, &lt;/i&gt;&lt;i&gt;&lt;code&gt;src/sentry/integrations/discord/webhooks/base.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;, and &lt;/i&gt;&lt;i&gt;&lt;code&gt;src/sentry/sentry_apps/installations.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;, the refactoring changes the &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; calls but doesn&amp;#39;t add any &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/except&lt;/code&gt;&lt;/i&gt;&lt;i&gt; error handling.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Only one file adds proper error handling&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: In &lt;/i&gt;&lt;i&gt;&lt;code&gt;src/sentry/rules/actions/integrations/base.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;, the code properly wraps the new &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; call in a &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/except&lt;/code&gt;&lt;/i&gt;&lt;i&gt; block with &lt;/i&gt;&lt;i&gt;&lt;code&gt;sentry_sdk.capture_exception(e)&lt;/code&gt;&lt;/i&gt;&lt;i&gt;.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Missing imports&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: The new event classes are being imported, but I need to verify if these could fail during 
instantiation.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h4&gt;&lt;i&gt;&lt;b&gt;Potential Bugs&lt;/b&gt;&lt;/i&gt;&lt;/h4&gt;&lt;p&gt;&lt;i&gt;Based on my analysis, here are the potential bugs I&amp;#39;ve identified:&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;1. Missing error handling for &lt;/b&gt;&lt;/i&gt;&lt;i&gt;&lt;code&gt;&lt;b&gt;analytics.record()&lt;/b&gt;&lt;/code&gt;&lt;/i&gt;&lt;i&gt;&lt;b&gt; calls&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Location&lt;/b&gt;&lt;/i&gt;&lt;i&gt;:&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Description&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: The repository context specifically states that &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; calls should be wrapped in &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/except&lt;/code&gt;&lt;/i&gt;&lt;i&gt; blocks to prevent failures from propagating and causing user-facing errors. However, most of the refactored &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; calls in this change do not include this error handling, while they&amp;#39;re changing from string-based to class-based event recording.&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;The new class-based approach could potentially introduce new failure modes during event class instantiation that weren&amp;#39;t present with the old string-based approach. 
For example, if the event class constructors perform validation on the input parameters or if there are issues with the class registration, these could raise exceptions that would now propagate up the call stack and potentially crash the request.&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Important things to investigate further&lt;/b&gt;&lt;/i&gt;&lt;i&gt;:&lt;/i&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Whether the new event classes perform any validation during instantiation that could raise exceptions&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Whether the old string-based &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; calls had any implicit error handling that&amp;#39;s now missing&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Whether these analytics calls are in critical request paths where an exception would cause user-facing errors&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;2. Potential field mismatch in refactored analytics calls&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Location&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: &lt;/i&gt;&lt;i&gt;&lt;code&gt;src/sentry/integrations/slack/webhooks/event.py:205~211&lt;/code&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Description&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: The original analytics call included a &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; field implicitly, but the new &lt;/i&gt;&lt;i&gt;&lt;code&gt;IntegrationSlackChartUnfurl&lt;/code&gt;&lt;/i&gt;&lt;i&gt; class shows &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id: int | None = None&lt;/code&gt;&lt;/i&gt;&lt;i&gt; as an optional field. The refactored code doesn&amp;#39;t pass &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt;, so it will default to &lt;/i&gt;&lt;i&gt;&lt;code&gt;None&lt;/code&gt;&lt;/i&gt;&lt;i&gt;. 
I need to check if this was intentional or if &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; should be passed from the request context.&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Important things to investigate further&lt;/b&gt;&lt;/i&gt;&lt;i&gt;:&lt;/i&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Whether the original call was supposed to include a &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; parameter&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Whether the missing &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; field affects analytics data collection in a meaningful way&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;i&gt;Based on the repository context emphasizing that &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; calls should be wrapped in &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/except&lt;/code&gt;&lt;/i&gt;&lt;i&gt; blocks, and the fact that most of the changes in this PR don&amp;#39;t include this protection, I consider the missing error handling to be the primary potential bug that could cause unexpected crashes in production.&lt;/i&gt;&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;You’ll notice that in what becomes hypothesis 1 the agent says “The repository context specifically states that &lt;code&gt;analytics.record()&lt;/code&gt; calls should be wrapped in try/except blocks to prevent failures from propagating and causing user-facing errors”. This comes from the “memories” that we generated for the repository.&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Memory context&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;The exact piece of memory passed to the agent:

When calling &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; in API endpoints or other critical request paths, wrap the call in a &lt;/i&gt;&lt;i&gt;&lt;code&gt;try...except Exception as e: sentry_sdk.capture_exception(e)&lt;/code&gt;&lt;/i&gt;&lt;i&gt;. This ensures that failures in analytics event recording (e.g., due to invalid event data or internal analytics system issues) do not propagate and cause user-facing errors or block the main request flow. Analytics recording is considered non-critical for the primary user experience.&lt;/i&gt;&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Analysis of Hypothesis 1&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;The initial analysis of the bug before evaluating it against the bug guidelines:
&lt;/i&gt;&lt;i&gt;&lt;code&gt;&amp;lt;initial_followup&amp;gt;&lt;/code&gt;&lt;/i&gt;&lt;i&gt;
Based on my thorough investigation of the codebase, I now have a clear understanding of the hypothesis and the evidence. Let me analyze this systematically:&lt;/i&gt;&lt;/p&gt;&lt;h4&gt;&lt;i&gt;Evidence Analysis&lt;/i&gt;&lt;/h4&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;1. Repository Context Verification&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;The repository context correctly states that &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; calls should be wrapped in &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/except&lt;/code&gt;&lt;/i&gt;&lt;i&gt; blocks. However, my investigation shows this guidance is inconsistently followed across the codebase.&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;2. Existing Error Handling Patterns&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;Looking at actual implementations:&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Examples WITH proper error handling:&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;code&gt;src/sentry/middleware/devtoolbar.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;: Wraps &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; in &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/except&lt;/code&gt;&lt;/i&gt;&lt;i&gt; with &lt;/i&gt;&lt;i&gt;&lt;code&gt;sentry_sdk.capture_exception(e)&lt;/code&gt;&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;code&gt;src/sentry/rules/actions/integrations/base.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;: The code change itself shows proper error handling for &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; calls&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;code&gt;src/sentry/api/endpoints/organization_events_stats.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;: Contains &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; wrapped in &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/except&lt;/code&gt;&lt;/i&gt;&lt;i&gt; with 
&lt;/i&gt;&lt;i&gt;&lt;code&gt;sentry_sdk.capture_exception(e)&lt;/code&gt;&lt;/i&gt;&lt;i&gt; specifically for agent monitoring queries&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Examples WITHOUT error handling:&lt;/b&gt;&lt;/i&gt;&lt;i&gt;
From my search results, I found hundreds of &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; calls throughout the codebase that lack &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/except&lt;/code&gt;&lt;/i&gt;&lt;i&gt; protection, suggesting this is a widespread pattern, not an isolated issue.&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;3. Sentry Issues Evidence&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;The Sentry issues I examined show:&lt;/i&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Issue 6871320642&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: A KeyError occurring during analytics event serialization, specifically in  &lt;/i&gt;&lt;i&gt;&lt;code&gt;SentryAppSchemaValidationError.serialize()&lt;/code&gt;&lt;/i&gt;&lt;i&gt;. This demonstrates that &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; can indeed fail with runtime exceptions.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Issue 6847278399&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: A ValueError occurring when analytics event validation fails for missing required attributes in &lt;/i&gt;&lt;i&gt;&lt;code&gt;TrialUsageEndNotification&lt;/code&gt;&lt;/i&gt;&lt;i&gt;. This shows another way analytics events can throw exceptions.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;i&gt;Both of these issues show that the analytics system can fail in various ways, supporting the hypothesis that &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; calls can raise exceptions.&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;4. 
New Class-Based Events Risk Assessment&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;Looking at the code change, the refactoring moves from string-based events to class-based events:&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Old approach:&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;New approach:&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;The new approach introduces event class instantiation, which could potentially fail during:&lt;/i&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Class constructor validation&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Field validation&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Dataclass transformation (as evidenced by the Sentry issues)&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;5. Critical Request Path Analysis&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;The files being modified include:&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;These are critical request paths where an unhandled exception would cause user-facing errors.&lt;/i&gt;&lt;/p&gt;&lt;h4&gt;&lt;i&gt;Bottom-Up Assessment&lt;/i&gt;&lt;/h4&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Evidence supporting the hypothesis:&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;✅ Repository context explicitly states &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; should be wrapped in &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/catch&lt;/code&gt;&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;✅ Real Sentry issues show &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; can fail with various exceptions&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;✅ Only one of the modified files (&lt;/i&gt;&lt;i&gt;&lt;code&gt;src/sentry/rules/actions/integrations/base.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;) includes proper error handling&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;✅ The other files lack error handling despite being in 
critical request paths&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;✅ The new class-based approach potentially introduces new failure modes&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;✅ Similar patterns in other parts of the codebase already include &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/catch&lt;/code&gt;&lt;/i&gt;&lt;i&gt; for &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;What I couldn&amp;#39;t verify:&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;The exact likelihood of the new event classes failing during instantiation&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Whether the specific event classes used in the code change have known validation issues&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h4&gt;&lt;i&gt;Conclusion&lt;/i&gt;&lt;/h4&gt;&lt;p&gt;&lt;i&gt;The hypothesis is &lt;/i&gt;&lt;i&gt;&lt;b&gt;VALID&lt;/b&gt;&lt;/i&gt;&lt;i&gt; as a real bug. 
The evidence strongly supports that:&lt;/i&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;The repository explicitly requires &lt;/b&gt;&lt;/i&gt;&lt;i&gt;&lt;code&gt;&lt;b&gt;analytics.record()&lt;/b&gt;&lt;/code&gt;&lt;/i&gt;&lt;i&gt;&lt;b&gt; calls to be wrapped in &lt;/b&gt;&lt;/i&gt;&lt;i&gt;&lt;code&gt;&lt;b&gt;try/except&lt;/b&gt;&lt;/code&gt;&lt;/i&gt;&lt;i&gt;&lt;b&gt; blocks&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Real production issues demonstrate that &lt;/b&gt;&lt;/i&gt;&lt;i&gt;&lt;code&gt;&lt;b&gt;analytics.record()&lt;/b&gt;&lt;/code&gt;&lt;/i&gt;&lt;i&gt;&lt;b&gt; can and does fail with various exceptions&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;The code change violates this guideline in multiple critical request paths&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;The new class-based approach could introduce additional failure modes compared to the old string-based approach&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;i&gt;This is a genuine bug that could cause unexpected crashes in production when analytics events fail to record. The fact that only one of the modified files (&lt;/i&gt;&lt;i&gt;&lt;code&gt;src/sentry/rules/actions/integrations/base.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;) includes proper error handling while the others don&amp;#39;t makes this a clear violation of the established pattern and could lead to user-facing errors in critical integration workflows.
&lt;/i&gt;&lt;i&gt;&lt;code&gt;&amp;lt;/initial_followup&amp;gt;&lt;/code&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;The revised followup findings after applying the bug guidelines:
&lt;/i&gt;&lt;i&gt;&lt;code&gt;&amp;lt;revised_followup&amp;gt;&lt;/code&gt;&lt;/i&gt;&lt;i&gt;
Looking at my analysis against the specific filtering guidelines for missing error handling:&lt;/i&gt;&lt;/p&gt;&lt;h4&gt;&lt;i&gt;Filtering Against Missing Error Handling Guidelines&lt;/i&gt;&lt;/h4&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;1. Evidence that the block could raise an error:&lt;/b&gt;&lt;/i&gt;&lt;i&gt; ✅ &lt;/i&gt;&lt;i&gt;&lt;b&gt;STRONG EVIDENCE&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Two concrete Sentry issues (6871320642, 6847278399) demonstrate &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; failures with &lt;/i&gt;&lt;i&gt;&lt;code&gt;KeyError&lt;/code&gt;&lt;/i&gt;&lt;i&gt; and &lt;/i&gt;&lt;i&gt;&lt;code&gt;ValueError&lt;/code&gt;&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Repository context explicitly states analytics calls should be wrapped in &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/except&lt;/code&gt;&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Multiple existing examples in the codebase already wrap &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; in &lt;/i&gt;&lt;i&gt;&lt;code&gt;try/except&lt;/code&gt;&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;2. Graceful handling is the right thing to do:&lt;/b&gt;&lt;/i&gt;&lt;i&gt; ✅ &lt;/i&gt;&lt;i&gt;&lt;b&gt;CONFIRMED&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Repository context explicitly states analytics recording is &amp;quot;non-critical for the primary user experience&amp;quot;&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Failures should not propagate to cause user-facing errors&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Analytics is clearly intended to be non-blocking&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;3. 
Code execution context - end-user vs backend:&lt;/b&gt;&lt;/i&gt;&lt;i&gt; ✅ &lt;/i&gt;&lt;i&gt;&lt;b&gt;USER-FACING CODE&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;These are all user-facing integration endpoints where unhandled exceptions would:&lt;/i&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Cause webhook failures visible to users&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Break integration functionality&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Return error responses to external services (Slack, Discord)&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;4. Tracing execution to bottom of stack:&lt;/b&gt;&lt;/i&gt;&lt;i&gt; ✅ &lt;/i&gt;&lt;i&gt;&lt;b&gt;CONFIRMED&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Webhook failures would propagate to external services&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Installation failures would be visible to users in the UI&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;No higher-level error handling wraps these analytics calls&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h4&gt;&lt;i&gt;Revised Analysis&lt;/i&gt;&lt;/h4&gt;&lt;p&gt;&lt;i&gt;The hypothesis is &lt;/i&gt;&lt;i&gt;&lt;b&gt;VALID&lt;/b&gt;&lt;/i&gt;&lt;i&gt; as a real bug. 
This meets all criteria for missing error handling:&lt;/i&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Strong evidence of potential failures&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: Real production Sentry issues demonstrate &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; can fail&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;User-facing code paths&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: These are integration endpoints where failures directly impact users&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Repository guidelines violated&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: Explicit guidance to wrap &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; calls is not followed&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Graceful degradation required&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: Analytics failures should not break core functionality&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;i&gt;The bug could cause:&lt;/i&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Webhook integration failures for Slack/Discord users&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Sentry App installation failures&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;Broken integration workflows&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;i&gt;This is a genuine missing error handling bug in critical user-facing code paths.
&lt;/i&gt;&lt;i&gt;&lt;code&gt;&amp;lt;/revised_followup&amp;gt;&lt;/code&gt;&lt;/i&gt;&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;As part of the evidence gathered by the agent to validate hypothesis 1, you’ll notice that it points to past Sentry issues in the codebase. They stem from code patterns similar to the one the agent was analyzing.&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Analysis of Hypothesis 2&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;The initial analysis of the bug before evaluating it against the bug guidelines:
&lt;/i&gt;&lt;i&gt;&lt;code&gt;&amp;lt;initial_followup&amp;gt;&lt;/code&gt;&lt;/i&gt;&lt;i&gt;
Based on my analysis, I can now provide a definitive assessment of the hypothesis. Let me examine the evidence:&lt;/i&gt;&lt;/p&gt;&lt;h4&gt;&lt;i&gt;Analysis of the Hypothesis&lt;/i&gt;&lt;/h4&gt;&lt;p&gt;&lt;i&gt;The hypothesis states that &amp;quot;The refactored analytics call for &lt;/i&gt;&lt;i&gt;&lt;code&gt;IntegrationSlackChartUnfurl&lt;/code&gt;&lt;/i&gt;&lt;i&gt; does not pass a &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt;, causing it to default to &lt;/i&gt;&lt;i&gt;&lt;code&gt;None&lt;/code&gt;&lt;/i&gt;&lt;i&gt;. The original string-based call may have implicitly included this field, so its omission in the new class-based call could result in a loss of important analytics data.&amp;quot;&lt;/i&gt;&lt;/p&gt;&lt;h4&gt;&lt;i&gt;Evidence from the Code&lt;/i&gt;&lt;/h4&gt;&lt;p&gt;&lt;i&gt;Looking at the exact code change in &lt;/i&gt;&lt;i&gt;&lt;code&gt;src/sentry/integrations/slack/webhooks/event.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;:&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Original code:&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;New code:&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Key findings:&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;The original call did NOT include &lt;/b&gt;&lt;/i&gt;&lt;i&gt;&lt;code&gt;&lt;b&gt;user_id&lt;/b&gt;&lt;/code&gt;&lt;/i&gt;&lt;i&gt;: The original &lt;/i&gt;&lt;i&gt;&lt;code&gt;analytics.record()&lt;/code&gt;&lt;/i&gt;&lt;i&gt; call only passed &lt;/i&gt;&lt;i&gt;&lt;code&gt;organization_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; and &lt;/i&gt;&lt;i&gt;&lt;code&gt;unfurls_count=0&lt;/code&gt;&lt;/i&gt;&lt;i&gt;. 
There was no &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; parameter in the original call.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;The &lt;/b&gt;&lt;/i&gt;&lt;i&gt;&lt;code&gt;&lt;b&gt;IntegrationSlackChartUnfurl&lt;/b&gt;&lt;/code&gt;&lt;/i&gt;&lt;i&gt;&lt;b&gt; class definition&lt;/b&gt;&lt;/i&gt;&lt;i&gt; (from &lt;/i&gt;&lt;i&gt;&lt;code&gt;src/sentry/integrations/slack/analytics.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;):&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Context where &lt;/b&gt;&lt;/i&gt;&lt;i&gt;&lt;code&gt;&lt;b&gt;user_id&lt;/b&gt;&lt;/code&gt;&lt;/i&gt;&lt;i&gt;&lt;b&gt; is available&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: In the &lt;/i&gt;&lt;i&gt;&lt;code&gt;on_link_shared&lt;/code&gt;&lt;/i&gt;&lt;i&gt; method where this analytics call occurs, there is access to &lt;/i&gt;&lt;i&gt;&lt;code&gt;slack_request.user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; and &lt;/i&gt;&lt;i&gt;&lt;code&gt;request.user&lt;/code&gt;&lt;/i&gt;&lt;i&gt;. 
Throughout the file, &lt;/i&gt;&lt;i&gt;&lt;code&gt;slack_request.user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; is used extensively for logging and other purposes.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Different usage in another file&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: In &lt;/i&gt;&lt;i&gt;&lt;code&gt;src/sentry/integrations/slack/unfurl/discover.py&lt;/code&gt;&lt;/i&gt;&lt;i&gt;, the same &lt;/i&gt;&lt;i&gt;&lt;code&gt;IntegrationSlackChartUnfurl&lt;/code&gt;&lt;/i&gt;&lt;i&gt; class IS used with a &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt;:&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h4&gt;&lt;i&gt;Conclusion&lt;/i&gt;&lt;/h4&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;The hypothesis is INVALID.&lt;/b&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;Here&amp;#39;s why:&lt;/i&gt;&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;No implicit &lt;/b&gt;&lt;/i&gt;&lt;i&gt;&lt;code&gt;&lt;b&gt;user_id&lt;/b&gt;&lt;/code&gt;&lt;/i&gt;&lt;i&gt;&lt;b&gt; in original call&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: The fundamental assumption of the hypothesis—that the original string-based call &amp;quot;may have implicitly included&amp;quot; a &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; field—is factually incorrect. The original call explicitly did NOT include any &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; parameter.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Consistent behavior&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: Both the old and new implementations omit &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt;, so there&amp;#39;s no change in behavior. 
The new call defaulting &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; to &lt;/i&gt;&lt;i&gt;&lt;code&gt;None&lt;/code&gt;&lt;/i&gt;&lt;i&gt; is the same outcome as the original call not including &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; at all.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;i&gt;&lt;b&gt;Not a data loss&lt;/b&gt;&lt;/i&gt;&lt;i&gt;: Since the original call never captured &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt;, there&amp;#39;s no &amp;quot;loss of important analytics data&amp;quot; as the hypothesis suggests. The analytics were already missing this information.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;&lt;i&gt;While it&amp;#39;s true that the refactored code could potentially include the &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; (since &lt;/i&gt;&lt;i&gt;&lt;code&gt;slack_request.user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; is available in the context), the absence of this field is not a bug introduced by the refactoring—it&amp;#39;s a preservation of the existing behavior.&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;This is not a bug that would cause server crashes, major logical issues, or performance problems. At most, it could be considered a missed opportunity for enhanced analytics, but that would be a separate enhancement request, not a bug introduced by this refactoring.
&lt;/i&gt;&lt;i&gt;&lt;code&gt;&amp;lt;/initial_followup&amp;gt;&lt;/code&gt;&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;The revised followup findings after applying the bug guidelines:
&lt;/i&gt;&lt;i&gt;&lt;code&gt;&amp;lt;revised_followup&amp;gt;&lt;/code&gt;&lt;/i&gt;&lt;i&gt;
Since I concluded in my followup analysis that the predicted bug is &lt;/i&gt;&lt;i&gt;&lt;b&gt;not valid&lt;/b&gt;&lt;/i&gt;&lt;i&gt;, I&amp;#39;m skipping this filtering step as instructed.&lt;/i&gt;&lt;/p&gt;&lt;p&gt;&lt;i&gt;The hypothesis was invalid because the original analytics call never included a &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; parameter, so the refactored call maintaining this same behavior (with &lt;/i&gt;&lt;i&gt;&lt;code&gt;user_id&lt;/code&gt;&lt;/i&gt;&lt;i&gt; defaulting to &lt;/i&gt;&lt;i&gt;&lt;code&gt;None&lt;/code&gt;&lt;/i&gt;&lt;i&gt;) is not a bug - it&amp;#39;s preserving the existing functionality without any data loss.
&lt;/i&gt;&lt;i&gt;&lt;code&gt;&amp;lt;/revised_followup&amp;gt;&lt;/code&gt;&lt;/i&gt;&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;In the end, Hypothesis 1 is marked as a potential bug and shown to the user &lt;a href=&quot;https://github.com/getsentry/sentry/pull/100005#discussion_r2369093715&quot;&gt;here&lt;/a&gt;. It includes a reference to one of the Sentry issues used to validate the hypothesis.&lt;/p&gt;&lt;h2&gt;How Sentry context is used&lt;/h2&gt;&lt;p&gt;The verify agent is instructed to adopt a “click-into” flow for analyzing issues. First, it searches for a large set of issues, receiving a short summary of each. Next, if the agent believes a particular issue is relevant to the code change, it fetches details about that issue. This flow allows the agent to decide on its own whether it needs runtime knowledge to analyze the bug, and keeps its context focused. Here are the search tools the agent may call:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Fetch past Sentry Issues by keywords. This tool is akin to giving the agent the ability to search the Issue Feed in the Sentry UI.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Fetch past Sentry Issues by error type. This tool is useful when the agent can guess the bug’s specific error type (e.g., KeyError) and wants to see if similar bugs have occurred in the past. For example, is the code attempting to access data which—upon inspection of the object in production—is missing keys/attributes?&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Fetch past Sentry Issues by file name and function name. This tool returns issues with an error whose stack trace overlapped with a specific function. 
The agent is instructed to use this tool to inspect variable values in production, or to determine if certain functions are known to be part of error-prone paths in the code.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Each of these searches returns, for each matching issue, the issue ID, title, message, code location, and (if it exists) &lt;a href=&quot;https://docs.sentry.io/product/ai-in-sentry/seer/#issue-scan&quot;&gt;Seer’s summary&lt;/a&gt; of the issue, which is based on the error event details and breadcrumbs. With this information, the agent can determine if any of the returned issues are relevant enough to warrant a closer look. If so, it can call a tool to fetch the event details, which includes the stack trace and variable values for that issue. This tool also returns (if it exists) &lt;a href=&quot;https://docs.sentry.io/product/ai-in-sentry/seer/#issue-fix&quot;&gt;Seer’s root cause analysis&lt;/a&gt; of the issue. At this point, the agent may incorporate this production/runtime context into its analysis. If it does, the bug prediction comment contains links to relevant Sentry issues.&lt;/p&gt;&lt;h2&gt;Maintaining quality predictions&lt;/h2&gt;&lt;p&gt;After we gather all suggestions, they go through an extra filtering stage to ensure you only receive suggestions that are relevant to you. We filter on a number of criteria, but the main ones are:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt; &lt;b&gt;Confidence and severity&lt;/b&gt;
The agents that generate suggestions also provide an estimate of their confidence that the issue should be addressed and an assessment of its severity. The scale is 0.000 to 1.000, with 1.000 being a guaranteed production crash. Suggestions with low confidence or low severity are discarded.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Similarity with past suggestions&lt;/b&gt;&lt;/p&gt;&lt;p&gt;For suggestion similarity, we look at past suggestions for a repository and how that team reacted to them (by tracking 👍/👎 reactions in the comments). We use embeddings and cosine similarity (aka vector search) to filter out suggestions that are too similar to downvoted suggestions sent in the past. This empowers teams to select what kinds of suggestions they want to receive from the reviews.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Agent evaluation&lt;/h2&gt;&lt;p&gt;To validate our system’s performance, we periodically run a set of PRs with known bugs, along with known-safe PRs, through our bug predictor. We do this to evaluate performance and cost, as well as prompt changes, model changes, new context sources, and architectural changes. Each run gives us numbers for precision, recall, and accuracy that we&amp;#39;ve consistently improved over time. This also provides an early signal if we&amp;#39;re about to introduce regressions to the system. That said, while these metrics provide a useful directional signal, we recognize their limitations and avoid over-indexing on them. This controlled, consistent evaluation has been key to maintaining confidence in the system as it evolves. And while building this evaluation pipeline was costly upfront, it’s more than paid off in the reliability and trust we now have in the product.&lt;/p&gt;&lt;h3&gt;Dataset collection&lt;/h3&gt;&lt;p&gt;All datasets are stored in Langfuse. Each dataset item specifies an existing PR and the expected bug(s), which can also be none. 
Currently we have several datasets we run our evals against:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;bug-prediction&lt;/code&gt; - the giant set of baseline tests. This was manually curated by the team when we initially started building the AI agent. Today it contains around 70 items. Typically we use this to report our scores (precision, recall, accuracy).&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;bug-prediction-performance-issues&lt;/code&gt; - a dataset dedicated to performance issues. As we improved the agent to predict performance bugs, we introduced a set of PRs containing performance issues.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Context mocking&lt;/h3&gt;&lt;p&gt;One of the core features of the bug prediction agent is its ability to analyze Sentry context, so our agent evaluations need to assess the quality of that analysis.&lt;/p&gt;&lt;p&gt;The simplest approach would be to fetch Sentry context directly from the live Sentry API, just as the production system does. However, doing so would introduce three major problems for evals:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;The results from the Sentry API would be inconsistent between runs, because new Sentry contexts might be introduced (e.g., a new Sentry Issue is created) or existing Sentry contexts might be updated between the first and second run.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Depending on the dataset item (i.e., the PR used for the eval) and when the evaluation was run, there is a chance of data leakage, meaning information that should not have been available to the test was made available. An example of this would be if the PR being tested was able to fetch a Sentry Issue that was created as a result of that same PR. 
In production this would not be an issue, because the changes in the PR have not yet made it into the system when the agent runs; in our evaluation dataset, however, the PRs we selected might have been merged already.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Querying the live API is not very performant, and the evaluations we currently have are aimed at understanding the quality of the predictions, not so much their performance (performance testing is separate from evaluations).&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;To solve each of the problems above, we opted for a “context mocking” solution where Sentry context is snapshotted, cached, and fetched locally during evaluation runs.&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;To snapshot the existing DB of Sentry contexts, we run a tool whenever a new dataset of PRs is created. This tool fetches all Sentry contexts up to the timestamp of the PRs, ensuring nothing is fetched from timestamps after the creation of those PRs.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;All the data is sanitized and indexed into a small SQLite file that is uploaded to our cloud storage and downloaded at the start of each evaluation run.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;When running the agent in evaluation mode, instead of fetching from the live Sentry API, we route those tool calls to a set of mocked tools that fetch Sentry context directly from the local SQLite DB.&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;h2&gt;Where AI Code Review is going next &lt;/h2&gt;&lt;p&gt;Like any software, we’re constantly working to make AI Code Review better and more useful. We continue to improve it through evaluations, investigating new context to add, changing prompts, and a lot more. 
We’re exploring more context sources to find the most impactful context, and working on making the predictions more actionable so you can fix issues as easily as the review brings them up.&lt;/p&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/product/ai-in-sentry/ai-code-review/&quot;&gt;Try it out&lt;/a&gt; and &lt;a href=&quot;https://discord.com/channels/621778831602221064/1415468783648247870&quot;&gt;let us know how it’s going&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Cocoa SDK 9.0.0 has landed]]></title><description><![CDATA[We just released Cocoa SDK 9.0.0. Here's what's new and what's changed. It's been a while since the last major version. The last major release, 8.0.0, shipped ...]]></description><link>https://blog.sentry.io/cocoa-sdk-9-0-0-has-landed/</link><guid isPermaLink="false">https://blog.sentry.io/cocoa-sdk-9-0-0-has-landed/</guid><pubDate>Tue, 16 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We just released &lt;a href=&quot;https://github.com/getsentry/sentry-cocoa/releases/tag/9.0.0&quot;&gt;Cocoa SDK 9.0.0&lt;/a&gt;. Here&amp;#39;s what&amp;#39;s new and what&amp;#39;s changed.&lt;/p&gt;&lt;p&gt;It&amp;#39;s been a while since the last major version. The last major release, 8.0.0, shipped on &lt;b&gt;January 16, 2023&lt;/b&gt;. After 57 minor and 47 bug fix releases, it’s finally time for a new major version to land: &lt;b&gt;9.0.0&lt;/b&gt;.&lt;/p&gt;&lt;h2&gt;Why now&lt;/h2&gt;&lt;p&gt;Our minimum supported OS versions had drifted &lt;b&gt;low&lt;/b&gt; enough that some users started seeing more Xcode warnings than they’d like. A bit too much &amp;quot;&lt;a href=&quot;https://develop.sentry.dev/sdk/philosophy/#compatibility-is-king&quot;&gt;Compatibility is King&lt;/a&gt;&amp;quot;. So it’s really time to do a major update and bump the minimum supported OS versions.&lt;/p&gt;&lt;h2&gt;What kind of release this is&lt;/h2&gt;&lt;p&gt;Version 9 is a maintenance major. 
This means:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;We bumped minimum OS versions&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;We enabled a few features by default&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;We cleaned up a bunch of small API issues&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;It should be easy for you to upgrade. &lt;i&gt;Famous last words.&lt;/i&gt;&lt;/p&gt;&lt;h2&gt;Changes you’ll want to know about&lt;/h2&gt;&lt;p&gt;Most of the changes in version 9 fall into three buckets: platform requirements, defaults we’ve turned on for you, and cleanup we’ve been meaning to do for a while. Here are the essentials:&lt;/p&gt;&lt;h3&gt;Updated minimum supported OS versions&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;We bumped the minimum supported OS versions:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;iOS&lt;/b&gt;: from 11.0 to 15.0&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;tvOS&lt;/b&gt;: from 11.0 to 15.0&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;macOS&lt;/b&gt;: from 10.13 to 10.14&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;watchOS&lt;/b&gt;: from 4.0 to 8.0&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;visionOS&lt;/b&gt;: 1.0 (unchanged)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Tooling and defaults&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;We now use Xcode 16 for building the precompiled XCFramework, and we set the &lt;code&gt;swift-tools-version&lt;/code&gt; to 6.0.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/platforms/apple/logs/&quot;&gt;Structured logs&lt;/a&gt; are no longer experimental.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;HTTP client errors now mark sessions as errored. 
This provides better visibility into failed network requests in the release health data.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/platforms/apple/configuration/app-hangs/#app-hangs-v2&quot;&gt;App Hang Tracking V2&lt;/a&gt; is now the default on iOS, tvOS, and macCatalyst.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/platforms/apple/tracing/&quot;&gt;Tracing&lt;/a&gt;: We enabled &lt;a href=&quot;https://docs.sentry.io/platforms/apple/tracing/instrumentation/automatic-instrumentation/#prewarmed-app-start-tracing&quot;&gt;pre-warmed app start tracing&lt;/a&gt; by default, and the app start duration now finishes when the first frame is drawn instead of when the OS posts the UIWindowDidBecomeVisibleNotification.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/platforms/apple/profiling/&quot;&gt;Profiling&lt;/a&gt;: We removed the deprecated &lt;a href=&quot;https://docs.sentry.io/platforms/apple/profiling/#transaction-based-profiling-removed-in-900&quot;&gt;transaction-based profiling&lt;/a&gt;. You now have to use &lt;a href=&quot;https://docs.sentry.io/platforms/apple/profiling/#enable-ui-profiling&quot;&gt;UIProfiling&lt;/a&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Package managers and SDKs&lt;/h3&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Compiling from source via SPM works again, but we don’t officially support it yet.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Our &lt;a href=&quot;https://github.com/Carthage/Carthage&quot;&gt;Carthage&lt;/a&gt; support has been broken for a while, and at this point it’s clear it’s not something most teams are relying on. 
So we’re officially dropping it.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;We added a new SDK, SentryDistribution, that keeps internal builds up to date; it’s the Sentry version of Emerge Tools’ &lt;a href=&quot;https://github.com/EmergeTools/ETDistribution&quot;&gt;ETDistribution&lt;/a&gt; 🚀.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The complete list of all changes is even longer. If you want to see everything, check out the &lt;a href=&quot;https://docs.sentry.io/platforms/apple/migration/#breaking-changes&quot;&gt;migration guide&lt;/a&gt; or the &lt;a href=&quot;https://github.com/getsentry/sentry-cocoa/releases/tag/9.0.0&quot;&gt;9.0.0 changelog&lt;/a&gt;.&lt;/p&gt;&lt;h2&gt;Upgrading to version 9&lt;/h2&gt;&lt;p&gt;Simply update your package manager to use the latest release of version 9, then check the &lt;a href=&quot;https://docs.sentry.io/platforms/apple/migration/#breaking-changes&quot;&gt;migration guide&lt;/a&gt; to see if you need to change anything. That&amp;#39;s all there is to it for most setups.&lt;/p&gt;&lt;h2&gt;What about version 8?&lt;/h2&gt;&lt;p&gt;We have stopped feature development for version 8 and will only ship critical bug fixes. You can still use version 8 and aren’t forced to upgrade, but we recommend updating to the latest major version if possible.&lt;/p&gt;&lt;h2&gt;If you&amp;#39;re ready to upgrade&lt;/h2&gt;&lt;p&gt;If you’re able to, give version 9 a try. We’re excited to see what you build with it. And if something breaks, or even just feels off, &lt;a href=&quot;https://github.com/getsentry/sentry-cocoa/issues&quot;&gt;open an issue&lt;/a&gt;. We’ll help you sort it out.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[A better way to monitor your AI agents in .NET apps]]></title><description><![CDATA[We launched agent monitoring earlier this year, allowing our users to instrument LLM usage and tool calls in their applications. 
However, we only had Agent Moni...]]></description><link>https://blog.sentry.io/agent-monitoring-net-apps/</link><guid isPermaLink="false">https://blog.sentry.io/agent-monitoring-net-apps/</guid><pubDate>Thu, 11 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We &lt;a href=&quot;https://blog.sentry.io/sentrys-updated-agent-monitoring/&quot;&gt;launched agent monitoring&lt;/a&gt; earlier this year, allowing our users to instrument LLM usage and tool calls in their applications. However, we only had Agent Monitoring support for Python and JavaScript. We’ve been working on creating an Agent Monitoring SDK for .NET — specifically for &lt;code&gt;Microsoft.Extensions.AI.Abstractions&lt;/code&gt;.&lt;/p&gt;&lt;h2&gt;Introducing &lt;code&gt;Sentry.Extensions.AI&lt;/code&gt;&lt;/h2&gt;&lt;p&gt;&lt;a href=&quot;https://www.nuget.org/packages/Sentry.Extensions.AI&quot;&gt;&lt;code&gt;Sentry.Extensions.AI&lt;/code&gt;&lt;/a&gt; is our drop-in instrumentation layer for .NET LLM packages that are based on &lt;code&gt;Microsoft.Extensions.AI.Abstractions&lt;/code&gt;. You can instrument your LLM usage, including:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;LLM calls&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Inputs and outputs&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Token count&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Model name&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Tool calls input/output&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Issues related to the LLM call&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Total cost&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;All of this is available to see in Sentry as spans and events, so you can correlate AI behaviour with the rest of your application: HTTP requests, background jobs, database queries, and more.&lt;/p&gt;&lt;h2&gt;What is &lt;code&gt;Microsoft.Extensions.AI.Abstractions&lt;/code&gt;?&lt;/h2&gt;&lt;p&gt;The &lt;code&gt;AI.Abstractions&lt;/code&gt; package is a low-level contract layer for many other libraries. 
It contains pure interfaces and data models for generative AI in .NET. It is intended for other libraries to implement, and it has minimal dependencies so it can serve as the base for this ecosystem of libraries.&lt;/p&gt;&lt;p&gt;This is not to be confused with &lt;code&gt;Microsoft.Extensions.AI&lt;/code&gt;, which includes utilities such as &lt;code&gt;ChatClientBuilder&lt;/code&gt;, and built-in capabilities such as logging and tool invocation. The relationship between the abstraction package and &lt;code&gt;Microsoft.Extensions.AI&lt;/code&gt; is very similar to the relationship between &lt;code&gt;Microsoft.Extensions.Logging&lt;/code&gt; and its abstraction package.&lt;/p&gt;&lt;p&gt;Building our agent monitoring around &lt;code&gt;Microsoft.Extensions.AI.Abstractions&lt;/code&gt; allows our users to use any LLM library they want, as long as it implements &lt;code&gt;IChatClient&lt;/code&gt; from the abstractions package. For example, our ASP.NET Core sample project uses &lt;code&gt;Microsoft.Extensions.AI.OpenAI&lt;/code&gt;, which provides an &lt;code&gt;IChatClient&lt;/code&gt; implementation backed by the OpenAI APIs. You can just as easily swap LLM providers by using a different library that implements &lt;code&gt;IChatClient&lt;/code&gt;.&lt;/p&gt;&lt;h3&gt;How it works&lt;/h3&gt;&lt;p&gt;&lt;code&gt;Sentry.Extensions.AI&lt;/code&gt; works by wrapping your existing &lt;code&gt;IChatClient&lt;/code&gt; and tools, so that every LLM call and tool invocation is automatically instrumented without changing your application logic.&lt;/p&gt;&lt;p&gt;In code, it looks roughly like this:&lt;/p&gt;&lt;p&gt;&lt;code&gt;AddSentry&lt;/code&gt; wraps the OpenAI &lt;code&gt;IChatClient&lt;/code&gt;, and &lt;code&gt;AddSentryToolInstrumentation&lt;/code&gt; instruments tool calls. 
We intercept requests and responses, measure how long operations take, capture token usage and errors, and then pass everything through to the underlying client so the behaviour of your app doesn’t change.&lt;/p&gt;&lt;p&gt;Most of the work in this library was about doing that as transparently and cheaply as possible, while still handling tricky cases like streaming responses and multi-step tool-call loops.&lt;/p&gt;&lt;h3&gt;Handling streaming responses without breaking error handling&lt;/h3&gt;&lt;p&gt;One of the trickiest parts was instrumenting &lt;code&gt;IChatClient.GetStreamingResponseAsync&lt;/code&gt;, which returns an &lt;code&gt;IAsyncEnumerable&amp;lt;ChatResponseUpdate&amp;gt;&lt;/code&gt;. I wanted to:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Wrap the streaming loop with Sentry spans&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Keep overhead minimal&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Catch &lt;i&gt;any&lt;/i&gt; exception thrown while fetching the next token, record it, and still re-throw it to the caller&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;But C# doesn’t let you &lt;code&gt;yield return&lt;/code&gt; from inside a &lt;code&gt;try/catch&lt;/code&gt; that needs to cover &lt;code&gt;MoveNextAsync&lt;/code&gt;, and using &lt;code&gt;foreach&lt;/code&gt; would implicitly wrap &lt;code&gt;MoveNextAsync&lt;/code&gt; and &lt;code&gt;yield return&lt;/code&gt; together.&lt;/p&gt;&lt;p&gt;The solution was to work with the async enumerator directly and separate the logic between advancing the stream and yielding the value:&lt;/p&gt;&lt;p&gt;By calling &lt;code&gt;MoveNextAsync()&lt;/code&gt; inside the &lt;code&gt;try/catch&lt;/code&gt; and doing &lt;code&gt;yield return&lt;/code&gt; &lt;b&gt;afterwards&lt;/b&gt;, we can:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Preserve the original streaming behavior&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Capture any exceptions thrown by the provider’s enumerator&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Enrich and finish our spans 
once the stream ends or fails&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The result is full visibility into streaming responses with essentially no extra overhead for the caller.&lt;/p&gt;&lt;h3&gt;Capturing the &lt;i&gt;whole&lt;/i&gt; tool-call loop in a single span&lt;/h3&gt;&lt;p&gt;Another challenge was capturing &lt;b&gt;one span that represents the entire “LLM + tools” loop&lt;/b&gt;, not just individual model calls or tool invocations. In the screenshot below, you can see that one span is a parent of all these other spans. This is what we call an agent span.&lt;/p&gt;&lt;p&gt;The agent span shows the duration of the whole LLM interaction, including any tool calls and text generation. It also contains the original input and the final output.&lt;/p&gt;&lt;p&gt;When you use the &lt;code&gt;FunctionInvokingChatClient&lt;/code&gt; — or &lt;code&gt;UseFunctionInvocation&lt;/code&gt; — from &lt;code&gt;Microsoft.Extensions.AI&lt;/code&gt;, the LLM call flow looks roughly like this:&lt;/p&gt;&lt;p&gt;We wanted one span that covered this entire loop. From the first LLM call, through all tool calls, to the final response. The problem was &lt;code&gt;FunctionInvokingChatClient&lt;/code&gt; lives in &lt;code&gt;Microsoft.Extensions.AI&lt;/code&gt;, not in &lt;code&gt;Microsoft.Extensions.AI.Abstractions&lt;/code&gt;, and my instrumentation is built around the abstractions layer. 
There was no obvious hook at the “whole loop” level.&lt;/p&gt;&lt;p&gt;The workaround was to piggyback on &lt;code&gt;FunctionInvokingChatClient&lt;/code&gt;&amp;#39;s existing telemetry:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;code&gt;FunctionInvokingChatClient&lt;/code&gt; starts an &lt;code&gt;Activity&lt;/code&gt; when its tool-call loop begins and stops it when the loop finishes.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;We created an &lt;code&gt;ActivityListener&lt;/code&gt; that taps into the &lt;code&gt;Activity&lt;/code&gt;, with its &lt;code&gt;ActivityStarted&lt;/code&gt; and &lt;code&gt;ActivityStopped&lt;/code&gt; callback functions set to create Sentry spans.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Inside that span, we still record the individual LLM calls and tool calls as child spans.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This gives us exactly what we wanted: a single top-level span that represents the full agent/tool orchestration, without needing direct access to &lt;code&gt;FunctionInvokingChatClient&lt;/code&gt; from the abstractions layer.&lt;/p&gt;&lt;h2&gt;Future of Agent Monitoring in .NET&lt;/h2&gt;&lt;p&gt;Because &lt;code&gt;Microsoft.Extensions.AI.Abstractions&lt;/code&gt; sits at the base of many AI libraries, this integration is just the beginning.&lt;/p&gt;&lt;p&gt;Microsoft’s new agent framework, &lt;code&gt;Microsoft.Agents.AI&lt;/code&gt;, builds on these abstractions, and so do other higher-level frameworks like Semantic Kernel. 
That means the same concepts we use today for instrumenting raw &lt;code&gt;IChatClient&lt;/code&gt; calls can be extended to:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Track multi-step agent workflows&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Visualize tool and plugin orchestration&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Add &lt;a href=&quot;https://sentry.io/solutions/ai-observability/&quot;&gt;observability&lt;/a&gt; to Semantic Kernel pipelines, planners, and skills&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Our goal is for &lt;code&gt;Sentry.Extensions.AI&lt;/code&gt; to become the standard way to monitor .NET AI workloads — whether you’re calling a single model directly or orchestrating complex agentic systems on top of &lt;code&gt;Microsoft.Extensions.AI.Abstractions&lt;/code&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Meet Web Vitals Performance Issues]]></title><description><![CDATA[We’ve introduced a new type of Performance Issue, Web Vitals Performance Issues. These issues will be opened for the highest opportunity pages in your applicat...]]></description><link>https://blog.sentry.io/meet-web-vitals-performance-issues/</link><guid isPermaLink="false">https://blog.sentry.io/meet-web-vitals-performance-issues/</guid><pubDate>Mon, 08 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We’ve introduced a new type of Performance Issue, &lt;a href=&quot;https://docs.sentry.io/product/issues/issue-details/performance-issues/web-vitals/&quot;&gt;Web Vitals Performance Issues&lt;/a&gt;. 
These issues will be opened for the &lt;a href=&quot;https://docs.sentry.io/product/insights/frontend/web-vitals/#opportunity&quot;&gt;highest opportunity pages&lt;/a&gt; in your application if your Web Vitals metrics drop into our &lt;i&gt;meh&lt;/i&gt; or &lt;i&gt;poor&lt;/i&gt; thresholds for performance.&lt;/p&gt;&lt;p&gt;We’ve built these issues with &lt;a href=&quot;https://docs.sentry.io/product/ai-in-sentry/seer/#issue-fix&quot;&gt;Seer Issue Fix&lt;/a&gt; specifically in mind. Our goal is not just to alert you about low vitals scores; we want to give you actionable steps you can take to improve your scores and, when possible, fix the problem for you.&lt;/p&gt;&lt;h2&gt;What are Web Vitals, and why should you care?&lt;/h2&gt;&lt;p&gt;&lt;a href=&quot;https://web.dev/articles/vitals&quot;&gt;Web Vitals&lt;/a&gt; are standard metrics for understanding how your site’s user experience compares to benchmarks for performance. A website with good vitals scores isn’t guaranteed to have a great user experience, but a website with bad vitals most likely delivers a bad one.&lt;/p&gt;&lt;p&gt;But hey, even if you don’t care about providing a delightful user experience, Web Vitals can affect &lt;a href=&quot;https://developers.google.com/search/docs/appearance/core-web-vitals&quot;&gt;your ranking in Google’s search&lt;/a&gt;, &lt;a href=&quot;https://shopify.dev/docs/apps/launch/built-for-shopify/requirements#performance&quot;&gt;your listings on Shopify&lt;/a&gt;, etc.
&lt;i&gt;tl;dr: they affect your bottom line&lt;/i&gt; 🤑.&lt;/p&gt;&lt;p&gt;The three &lt;a href=&quot;https://web.dev/articles/vitals#core-web-vitals&quot;&gt;Core Web Vitals&lt;/a&gt; (&lt;i&gt;the ones you should care about most&lt;/i&gt;) are:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Largest Contentful Paint (&lt;/b&gt;&lt;a href=&quot;https://webvitals.com/lcp&quot;&gt;&lt;b&gt;LCP&lt;/b&gt;&lt;/a&gt;&lt;b&gt;):&lt;/b&gt; How long does it take the largest element on your page to render? &lt;i&gt;Measuring loading&lt;/i&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Interaction to Next Paint (&lt;/b&gt;&lt;a href=&quot;https://webvitals.com/inp&quot;&gt;&lt;b&gt;INP&lt;/b&gt;&lt;/a&gt;&lt;b&gt;):&lt;/b&gt; When interacting with a page element, how long does it take the next frame to render? &lt;i&gt;Measuring interactivity&lt;/i&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Cumulative Layout Shift (&lt;/b&gt;&lt;a href=&quot;https://webvitals.com/cls&quot;&gt;&lt;b&gt;CLS&lt;/b&gt;&lt;/a&gt;&lt;b&gt;): &lt;/b&gt;How much do elements shift during pageload?
&lt;i&gt;Measuring page stability&lt;/i&gt;.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Web Vitals and Sentry&lt;/h2&gt;&lt;p&gt;Sentry already has &lt;a href=&quot;https://docs.sentry.io/product/insights/frontend/web-vitals/&quot;&gt;a great out-of-the-box dashboard&lt;/a&gt; for measuring Web Vitals metrics across your application — &lt;i&gt;we talk about the feature &lt;/i&gt;&lt;a href=&quot;https://sentry.io/for/web-vitals/&quot;&gt;&lt;i&gt;here&lt;/i&gt;&lt;/a&gt;&lt;i&gt;, and &lt;/i&gt;&lt;a href=&quot;https://blog.sentry.io/your-bad-lcp-score-might-be-a-backend-issue/&quot;&gt;&lt;i&gt;here&lt;/i&gt;&lt;/a&gt;&lt;i&gt;, and also &lt;/i&gt;&lt;a href=&quot;https://blog.sentry.io/performance-monitoring-for-every-developer-web-vitals-and-function/&quot;&gt;&lt;i&gt;here&lt;/i&gt;&lt;/a&gt;&lt;i&gt;.&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Here are some things that make Sentry’s existing Web Vitals functionality cool:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;We collect metrics from real user sessions.
&lt;i&gt;As developers are well aware, things usually run perfectly on your own computer.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;We collect metrics for authenticated pages without additional hoops to jump through.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;You can see your Web Vitals performance in the context of other telemetry — &lt;i&gt;maybe a page was slow to load because an exception was thrown and an operation retried.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Today we’re releasing a slick update to Sentry’s built-in Web Vitals features, &lt;i&gt;Web Vitals Performance Issues.&lt;/i&gt;&lt;/p&gt;&lt;h2&gt;Introducing Web Vitals Performance Issues&lt;/h2&gt;&lt;p&gt;Web Vitals Performance Issues are a new type of Performance Issue that will be triggered when the highest traffic pages of your application are exhibiting poor vitals metrics for an extended period of time.&lt;/p&gt;&lt;p&gt;&lt;i&gt;In my opinion, the killer feature?&lt;/i&gt; With Sentry’s &lt;a href=&quot;https://blog.sentry.io/seer-sentrys-ai-debugger-is-generally-available/&quot;&gt;Seer Agent&lt;/a&gt;, these issues won’t just warn you about a bad score; they’ll help you root cause and actually &lt;i&gt;fix&lt;/i&gt; the underlying problem.&lt;/p&gt;&lt;h3&gt;Why didn’t we do this sooner?&lt;/h3&gt;&lt;p&gt;Up until now, we opted not to alert users about poor Web Vitals scores (&lt;i&gt;even though &lt;/i&gt;&lt;a href=&quot;https://blog.sentry.io/performance-issues-slow-you-can-act-on-quickly/&quot;&gt;&lt;i&gt;Performance Issues aren’t new&lt;/i&gt;&lt;/a&gt;&lt;i&gt;)&lt;/i&gt;. What was preventing us?&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;We didn’t want to create busy work for developers. &lt;i&gt;Although important, poor Web Vitals measurements are secondary to signals like errors in an application.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;We didn’t want to spam users with alerts.
&lt;i&gt;A bad Web Vitals metric may be a problem across dozens of pages.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;We didn’t want to create issues that weren’t actionable. &lt;i&gt;“Okay, I have a bad CLS score, now what?”, we imagined users asking.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;Here’s how the approach we’ve taken addresses these concerns…&lt;/p&gt;&lt;h3&gt;Avoiding spam and busy work&lt;/h3&gt;&lt;p&gt;We’ve taken a few steps to avoid overwhelming engineers with a wall of Web Vitals issues:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Issues are created at a lower priority than critical errors, so they can be filtered out of the issue stream.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Issues are only opened for the top 5 highest opportunity pages.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;We only reevaluate Web Vitals scores every two weeks, and won’t open new issues for the same category of problem on the same page.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;With Seer Root Cause Analysis, we hope that we can give developers a helping hand fixing poor Web Vitals scores (&lt;i&gt;not just alert them&lt;/i&gt;).&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h3&gt;Making issues actionable&lt;/h3&gt;&lt;p&gt;How are we making sure these issues are actionable?&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;We’ll only open Web Vitals issues if you have &lt;a href=&quot;https://sentry.io/product/seer/&quot;&gt;Seer&lt;/a&gt; enabled on your account.
&lt;i&gt;We didn’t want to create alerts without also helping you fix them&lt;/i&gt;.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;We attach various signals to the Web Vitals issues, making it possible for our AI agent to root cause and fix the problem: &lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Seer has access to the codebase exhibiting the poor web vitals score.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Seer has access to traces representing typical user sessions on the page.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Seer has access to the historical Web Vitals metrics for the given page.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Wrap things up already…&lt;/h2&gt;&lt;p&gt;It’s my hope that, with these design decisions, we’ve built something that:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;Will truly help people better diagnose and fix gnarly performance issues on their websites.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Won’t introduce annoying new tedium to engineers that makes them want to throw their laptop out the window 😅.
&lt;i&gt;Note, if you do find yourself wanting to throw something, the feature can be turned off &lt;/i&gt;&lt;a href=&quot;https://docs.sentry.io/product/issues/issue-details/performance-issues/#configuration&quot;&gt;&lt;i&gt;here&lt;/i&gt;&lt;/a&gt;&lt;i&gt;.&lt;/i&gt;&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;Read more about &lt;a href=&quot;https://docs.sentry.io/product/issues/issue-details/performance-issues/web-vitals/&quot;&gt;Web Vitals Performance Issues in our documentation&lt;/a&gt;, and please be liberal with your feedback.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[AI updates for all Sentry users]]></title><description><![CDATA[Instead of giving you yet another chatbot, we built AI straight into the parts of Sentry where teams lose time, turning your existing data into instant context ...]]></description><link>https://blog.sentry.io/sentry-just-got-an-upgrade-and-its-all-free/</link><guid isPermaLink="false">https://blog.sentry.io/sentry-just-got-an-upgrade-and-its-all-free/</guid><pubDate>Thu, 04 Dec 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Instead of giving you yet another chatbot, we built AI straight into the parts of Sentry where teams lose time, turning your existing data into instant context — and it’s now available to all Sentry users.&lt;/p&gt;&lt;h2&gt;Just ask your question with natural-language queries in Trace Explorer&lt;/h2&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/product/explore/trace-explorer/&quot;&gt;Trace Explorer&lt;/a&gt; now accepts plain-language questions and turns them into real queries. Instead of remembering operators or field names, you can describe what you want directly.&lt;/p&gt;&lt;p&gt;A question like &lt;b&gt;“What’s the p90 latency of my DB?”&lt;/b&gt; generates the underlying query, identifies the relevant spans, and surfaces the latency distribution without requiring you to write any syntax. 
You get the same depth of trace data as before, with much less effort spent forming the query.&lt;/p&gt;&lt;h2&gt;Get a first take on what’s causing an issue&lt;/h2&gt;&lt;p&gt;When you hit a new issue, the first question is usually &lt;i&gt;“Where do I start?”&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Initial guess, located on each issue, automatically runs an initial analysis of the issue context to determine what the problem could be. It provides a starting point for you before running a deeper analysis with &lt;a href=&quot;https://docs.sentry.io/product/ai-in-sentry/seer/&quot;&gt;Seer&lt;/a&gt;, which looks at your source code and additional Sentry telemetry to more accurately determine the root cause.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;//images.ctfassets.net/em6l9zw4tzag/5Svvmk80u56Rd2NM0oeLcc/0841a581efbcfd05023f8564c88251e7/seer-initial-guess.png&quot; alt=&quot;Seer panel showing an “Initial Guess” message: “Backend failed to validate inventory for 3 units of Product ID 4,” with a prominent “Find Root Cause” button.&quot; /&gt;&lt;/p&gt;&lt;h2&gt;Get to the insight faster with Session Replay Summaries&lt;/h2&gt;&lt;p&gt;Jumping to the moment an error occurred is useful, but understanding how the user got there still takes time. Long sessions often involve dozens of interactions, network calls, or UI states that matter but aren’t visible from the error timestamp alone.
&lt;a href=&quot;https://docs.sentry.io/product/explore/session-replay/web/&quot;&gt;Replay Summaries&lt;/a&gt; analyze the replay’s metadata—DOM events, network requests, console logs—and generate a short explanation of the events that actually contributed to the failure, along with timestamps linking you to each event.&lt;/p&gt;&lt;p&gt;For example, if a user clicks the checkout button, the request returns a 500, and your frontend fails and shows the user an error state, the summary will combine these into a single narrative:&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;//images.ctfassets.net/em6l9zw4tzag/25KyMpqlqeTXhOBVWsYFGw/831bfacd94c3f200bb607c7fe631cca9/replay-summary.png&quot; alt=&quot;Replay Summary panel showing a user flow: navigated to products, clicked add to cart with N+1 query and slow DB errors, then repeatedly attempted checkout and hit 500 server errors, with timestamps for each step.&quot; /&gt;&lt;/p&gt;&lt;p&gt;Head to &lt;a href=&quot;http://sentry.sentry.io/orgredirect/explore/replays&quot;&gt;Explore &amp;gt; Replays&lt;/a&gt; and click into a specific replay to see the &lt;b&gt;AI summary&lt;/b&gt; tab.&lt;/p&gt;&lt;h2&gt;Translate frustration into something you can actually use with User Feedback Summaries&lt;/h2&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/platforms/javascript/user-feedback/&quot;&gt;User feedback&lt;/a&gt; is helpful, but reading it at scale is slow and often inconsistent. People describe the same issue in completely different ways, mix in frustration, or focus on symptoms rather than what actually went wrong.
Located at the top of the User Feedback view, &lt;a href=&quot;https://docs.sentry.io/product/user-feedback/#ai-powered-user-feedback-summaries&quot;&gt;user feedback summaries&lt;/a&gt; process all incoming feedback and generate a concise, high-level explanation of what users are collectively experiencing across the projects and date ranges you have selected.&lt;/p&gt;&lt;p&gt;The system looks across every submission in your project and identifies the dominant themes: what users were trying to do, what failed, and how those failures cluster. You can still dive into specific submissions when needed, but you no longer have to manually read and categorize them to understand the broader problem.&lt;/p&gt;&lt;p&gt;&lt;img src=&quot;//images.ctfassets.net/em6l9zw4tzag/ARf21mShxpRGczllPjvyK/918f709e45f5d704482470702beb9bbd/user-feedback-summary.png&quot; alt=&quot;User feedback page with an “Experimental” summary noting system slowness and checkout failures, tag chips below, and a list of inbox items from Angelo with timestamps and issue IDs.&quot; /&gt;&lt;/p&gt;&lt;h2&gt;Bring Sentry context into your AI tools with Sentry MCP&lt;/h2&gt;&lt;p&gt;Context switching takes you out of flow and disrupts your thinking, so we created an &lt;a href=&quot;https://docs.sentry.io/product/sentry-mcp/&quot;&gt;MCP server&lt;/a&gt; that lets you interact with Sentry data without leaving Cursor, Claude Code, Codex, or your favorite client. Sentry’s MCP can access data about your organizations, projects, teams, issues, errors, releases, performance and more.&lt;/p&gt;&lt;p&gt;For example, you can ask it to identify and fix the most critical issue in a given project, as shown here. Sentry’s MCP identifies a missing null check that would cause a 500 error. It shows what’s broken and why, and implements the fix, all through Cursor.&lt;/p&gt;&lt;h2&gt;Less hunting, more fixing&lt;/h2&gt;&lt;p&gt;These updates are available today for all Sentry users.
You can try them directly and we’ll continue improving them as we get more real-world usage and feedback. Give it a try and &lt;a href=&quot;https://discord.com/channels/621778831602221064/1211800107855646783&quot;&gt;let us know what you think&lt;/a&gt;.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[<100ms E-commerce: Instant loads with Speculation Rules API]]></title><description><![CDATA[In e-commerce, we all know that speed = money. I know it, you know it, Amazon knows it, eBay knows it, Shopify knows it, everyone knows it. In this article we’l...]]></description><link>https://blog.sentry.io/less-than-100ms-e-commerce-instant-loads-with-speculation-rules-api/</link><guid isPermaLink="false">https://blog.sentry.io/less-than-100ms-e-commerce-instant-loads-with-speculation-rules-api/</guid><pubDate>Mon, 24 Nov 2025 01:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In e-commerce, we all know that speed = money. I know it, you know it, &lt;a href=&quot;https://www.conductor.com/academy/page-speed-resources/faq/amazon-page-speed-study/&quot;&gt;Amazon knows it&lt;/a&gt;, &lt;a href=&quot;https://web.dev/case-studies/shopping-for-speed-on-ebay&quot;&gt;eBay knows it&lt;/a&gt;, &lt;a href=&quot;https://performance.shopify.com/blogs/blog/how-sunday-citizen-improved-conversions-by-focusing-on-performance&quot;&gt;Shopify knows it&lt;/a&gt;, everyone knows it. In this article we’ll see how we can improve the perceived performance of our site’s critical pages, like the Product Details page, the Cart page, the Checkout page. 
We’re going to use the &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/script/type/speculationrules&quot;&gt;Speculation Rules API&lt;/a&gt; (SRA) to prerender/prefetch them, and also explain how certain frameworks like Next.js offer their own prefetching mechanisms.&lt;/p&gt;&lt;h2&gt;Speculation Rules API&lt;/h2&gt;&lt;p&gt;The SRA is an &lt;b&gt;experimental feature&lt;/b&gt; available only in &lt;b&gt;Chromium browsers&lt;/b&gt; that allows websites to hint to the browser which pages a user is likely to visit next, so the browser can start either prefetching or prerendering them ahead of time. This makes navigating to those pages feel instant.&lt;/p&gt;&lt;p&gt;&lt;code&gt;prerender&lt;/code&gt; rules make the browser fully download and render the target pages in an invisible tab. This will load all subresources, run all JavaScript, and even perform data fetches started by JavaScript. Navigating to a prerendered page is instant. It’s literally the browser “switching tabs” to a completely loaded page.&lt;/p&gt;&lt;p&gt;&lt;code&gt;prefetch&lt;/code&gt; rules make the browser download the page’s document only. It doesn’t render the page in the background, or execute any JavaScript, or load any subresources. It just skips the main &lt;code&gt;document&lt;/code&gt; HTTP request upon navigation, but the page will still need to be rendered, and resources loaded. This still leads to major performance improvements, but it’s significantly lighter than the &lt;code&gt;prerender&lt;/code&gt; rules.&lt;/p&gt;&lt;p&gt;These rules are added on the page through a &lt;code&gt;&amp;lt;script type=&amp;quot;speculationrules&amp;quot;&amp;gt;&lt;/code&gt; element, and are defined with JSON:&lt;/p&gt;&lt;p&gt;The speculation rules above will prefetch all of the &lt;code&gt;/products/*&lt;/code&gt; matching URLs that are present on that page &lt;i&gt;eagerly&lt;/i&gt;, and also the &lt;code&gt;/cart&lt;/code&gt; page.
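Rules like the ones just described can also be generated at runtime. The sketch below is an illustration, not the post's exact code: the URL patterns come from the example above, and the feature detection uses the standard HTMLScriptElement.supports() check. Inlining the same JSON in a script element with type "speculationrules" works equally well.

```javascript
// Inject speculation rules at runtime (a sketch; rule contents are
// illustrative, matching the /products/* and /cart example above).
function addSpeculationRules(rules) {
  // Feature-detect: HTMLScriptElement.supports() is the standard check.
  if (typeof HTMLScriptElement === "undefined") return false;
  if (typeof HTMLScriptElement.supports !== "function") return false;
  if (!HTMLScriptElement.supports("speculationrules")) return false;

  const script = document.createElement("script");
  script.type = "speculationrules";
  script.textContent = JSON.stringify(rules);
  document.head.append(script); // the browser acts as soon as it sees the rules
  return true;
}

addSpeculationRules({
  prefetch: [
    {
      where: { or: [{ href_matches: "/products/*" }, { href_matches: "/cart" }] },
      eagerness: "eager",
    },
  ],
});
```

The function returns false in browsers without SRA support, which makes it safe to call unconditionally.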
It doesn’t matter how you add these rules - you can hardcode them at the bottom of the &lt;code&gt;&amp;lt;head&amp;gt;&lt;/code&gt;, or you can dynamically generate them with JavaScript at runtime. There’s no “deadline” for adding these rules. The browser will start prefetching/prerendering the moment it sees them in the DOM.&lt;/p&gt;&lt;p&gt;There are options for the eagerness, different types of matchers and selectors, and other gotchas, and we won’t cover them in this article, so make sure to check out the official &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/script/type/speculationrules&quot;&gt;Speculation rules API doc on MDN&lt;/a&gt;.&lt;/p&gt;&lt;h2&gt;Speculation rules API fallback for different browsers&lt;/h2&gt;&lt;p&gt;Since SRA is only available in modern versions of Chromium-based browsers, our Safari and Firefox users will miss out on that love. But there’s still something we can do for them too. We can’t really replicate the SRA functionality, but we can &lt;b&gt;leverage the framework-specific prefetching&lt;/b&gt; feature that most modern frameworks provide. Usually it’s in the form of a prop or element attribute that you set on a link to make the framework prefetch that page on page load or on hover, or to skip prefetching it altogether.&lt;/p&gt;&lt;p&gt;Most of the frameworks automatically prefetch pages either on page load, or as they get scrolled in. For example, both &lt;b&gt;Next.js&lt;/b&gt; (&lt;a href=&quot;https://en.nextjs.im/docs/app/api-reference/components/link#prefetch&quot;&gt;docs&lt;/a&gt;) and &lt;b&gt;Nuxt 3&lt;/b&gt; (&lt;a href=&quot;https://nuxt.com/docs/3.x/api/components/nuxt-link#prefetch-links&quot;&gt;docs&lt;/a&gt;) prefetch pages by default.
&lt;b&gt;SvelteKit&lt;/b&gt; (&lt;a href=&quot;https://svelte.dev/docs/kit/link-options&quot;&gt;docs&lt;/a&gt;) and &lt;b&gt;Remix&lt;/b&gt; (&lt;a href=&quot;https://v2.remix.run/docs/components/link#prefetch&quot;&gt;docs&lt;/a&gt;) do not by default, but give you options to define prefetching. Visit the docs links of your framework of choice to see how you can configure prefetching.&lt;/p&gt;&lt;p&gt;Having framework-level prefetching enabled is a great fallback because it still improves the navigation performance of non-Chromium browsers, but also won’t hurt Chromium browsers. In cases where a Chromium browser loads a page with both speculation rules and framework-specific prefetches, the speculation rules will take precedence. For example, let’s say all &lt;code&gt;/product/*&lt;/code&gt; pages are prefetched through speculation rules, and the framework is also set to prefetch them on hover. When the user lands on the page, the speculation rules will make the browser prefetch all product pages and cache them, so when the user hovers on one of the products the browser won’t prefetch it again since it’s already in cache.&lt;/p&gt;&lt;h2&gt;How SRA improves storefront performance&lt;/h2&gt;&lt;p&gt;To put all of this to the test, I developed (&lt;i&gt;ahem, vibe coded&lt;/i&gt;) a &lt;a href=&quot;https://github.com/nikolovlazar/demo-optimize-cx&quot;&gt;demo Next.js storefront&lt;/a&gt; and instrumented it with &lt;a href=&quot;https://sentry.io/for/nextjs/&quot;&gt;Sentry’s Next.js SDK&lt;/a&gt;.
The SDK automatically measures page loads, but to distinguish the prerender metrics from the prefetch ones, I just needed to add a tag with the value of the optimization mode whenever it changed:&lt;/p&gt;&lt;p&gt;After this, all I needed to do was create a custom widget that charts the navigations grouped by the optimization method:&lt;/p&gt;&lt;p&gt;(widget specs - spans dataset, p90 of span.duration visualization, filter: &lt;code&gt;span.description contains /products/:id&lt;/code&gt; and &lt;code&gt;span.name contains navigation&lt;/code&gt;, grouped by &lt;code&gt;optimization_mode&lt;/code&gt;)&lt;/p&gt;&lt;p&gt;And look at that! Quite the difference between prefetching/prerendering and no optimization at all. &lt;i&gt;No optimization also means the framework-level prefetch is disabled&lt;/i&gt;. This data is obtained from &lt;a href=&quot;https://demo-optimize-cx.vercel.app/&quot;&gt;a Vercel deployment&lt;/a&gt;, triggered by an &lt;a href=&quot;https://github.com/nikolovlazar/demo-optimize-cx/blob/main/scripts/k6-browser-load.js&quot;&gt;automated script&lt;/a&gt;. Even on my painfully simple demo app the improvement is significant. I would anticipate the improvement to be much bigger on a real e-commerce storefront with tons of other features like analytics, widgets, all sorts of scripts.&lt;/p&gt;&lt;p&gt;Now from this chart we can’t really say for sure that prerendering is faster/better than prefetching, or vice versa. But one thing is for sure - any optimization is far better than no optimization at all. I would encourage you to &lt;a href=&quot;https://docs.sentry.io/platforms/javascript/guides/nextjs/&quot;&gt;set up Sentry’s Next.js SDK&lt;/a&gt; in your application to measure and see the difference in your app.
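For reference, the optimization-mode tag from the measurement setup above can be sketched with a small helper. This is an assumption-laden sketch, not the demo's actual code: the "prerendered"/"prefetched" flags are hypothetical inputs the app would determine itself, and the Sentry SDK is assumed to be initialized already.

```javascript
// Derive which optimization handled the current page (a sketch; the demo's
// actual logic lives in the linked repo and may differ).
function optimizationMode({ prerendered, prefetched }) {
  if (prerendered) return "prerender";
  if (prefetched) return "prefetch";
  return "none";
}

// With the Sentry SDK initialized, attach the mode as a tag so spans can be
// grouped by it in a custom widget:
//   Sentry.setTag("optimization_mode", optimizationMode(state));
```

`Sentry.setTag` applies the tag to subsequent events and spans on the current scope, which is what lets the widget group navigations by `optimization_mode`.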
Your values will definitely look different than mine above, and the difference between prerendering and prefetching could be more obvious.&lt;/p&gt;&lt;h2&gt;Speculation rules gotchas&lt;/h2&gt;&lt;p&gt;The SRA does not represent a free performance boost. Just like anything, it comes with some caveats that you should be mindful of. Here are some of them:&lt;/p&gt;&lt;p&gt;&lt;b&gt;Server load and bills&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;When prerendering, we’re running full SSR + data calls even if the user never clicks on those pages. To fix this, prefer &lt;b&gt;prefetch&lt;/b&gt; over &lt;b&gt;prerender&lt;/b&gt;, and use &lt;b&gt;prerender&lt;/b&gt; sparingly, only for the links users are most likely to follow.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If there are many links in the viewport, there are going to be many prefetches/prerenders. This is a classic &lt;a href=&quot;https://en.wikipedia.org/wiki/Thundering_herd_problem&quot;&gt;thundering herd&lt;/a&gt; scenario. To fix this, use conservative eagerness in SRA, and set &lt;code&gt;prefetch={false}&lt;/code&gt; on low-probability links.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Analytics / experiments&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Since prerendering also invokes all JavaScript on all of the prerendered pages, that means analytics SDKs will also be invoked, resulting in inflated pageviews / conversions.
To fix this, you can fire analytics events &lt;i&gt;on activation&lt;/i&gt; - check &lt;code&gt;document.visibilityState === &amp;#39;visible&amp;#39;&lt;/code&gt; after the &lt;code&gt;visibilitychange&lt;/code&gt; event, or if you’re doing analytics on the server side, set a &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Sec-Speculation-Tags#speculation_from_a_rule_with_a_tag&quot;&gt;Speculation Rule Tag&lt;/a&gt; and look for that header, or the &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/Sec-Purpose&quot;&gt;Sec-Purpose&lt;/a&gt; header.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;If you’re also running A/B tests, you’ll see skewed results as well. To fix this, assign experiments &lt;i&gt;on activation&lt;/i&gt; and defer experiment beacons until visible.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;b&gt;Product behavior quirks&lt;/b&gt;&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Prerendering may use current cookies. Tokens can expire before activation, which will result in an auth mismatch / stale session. To fix this, use short-lived cookies and refresh on activation.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Rendering can also trigger side effects, like inventory holds, impressions, “last seen” tracking, etc.
To fix this, make SSR/data idempotent + read only, and move any side-effects to activation or click.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;This is far from an exhaustive list of prerendering/prefetching gotchas, so make sure you do your due diligence - profile real traffic, monitor server load, verify analytics accuracy, and test under production conditions before you roll it out broadly.&lt;/p&gt;&lt;h2&gt;Speculation rules in your application&lt;/h2&gt;&lt;p&gt;Now that we know the benefits and caveats, let’s talk about how to use the Speculation Rules API in a balanced and strategic way.&lt;/p&gt;&lt;p&gt;We know that prerendering is heavier than prefetching, but it does more of the loading upfront, which should result in better performance. With that in mind, we should use prerendering sparingly. In an e-commerce storefront scenario, I would use prerendering only on a &lt;b&gt;smaller products section&lt;/b&gt;, like a “Featured products” section at the top that lists at most 4-5 products, or on event-based sections like a Black Friday banner that’s too good to pass on. These types of sections usually have big CTAs and draw more clicks to them.&lt;/p&gt;&lt;p&gt;Prefetching, on the other hand, is lighter, but still an important optimization approach. I would use prefetching on secondary (but still valuable) product cards, or category pages linked in a navigation bar. I would even configure the prefetch to be on “hover” instead of page load just to be safe. If the Black Friday section is first-class, these would be second-class.
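The activate-then-report pattern from the gotchas above is small enough to sketch. One approach (an assumption on my part, not shown in the post) uses the standard document.prerendering flag and prerenderingchange event rather than visibilitychange:

```javascript
// Run a callback only once the page is actually shown to the user, so
// analytics beacons and experiment assignments don't fire during
// prerendering (a sketch of the "on activation" advice above).
function onActivation(callback) {
  if (typeof document === "undefined" || !document.prerendering) {
    callback(); // not prerendering: the page is already active
    return;
  }
  // prerenderingchange fires when a prerendered page is activated.
  document.addEventListener("prerenderingchange", () => callback(), { once: true });
}

onActivation(() => {
  // e.g. analytics.track("pageview") would go here — a hypothetical call,
  // only counted after activation
});
```

For normal (non-prerendered) loads the callback runs immediately, so the same code path works whether or not speculation rules are in play.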
Prefetch them, but be mindful of how many pages you’re prefetching (unless you prefetch on hover).&lt;/p&gt;&lt;p&gt;For all the other pages that aren’t that important, you can either disable the framework-imposed automatic prefetching to ease up your server load and bills, or keep hover-only prefetching.&lt;/p&gt;&lt;p&gt;Make sure to use prerendering on the &lt;i&gt;&lt;b&gt;critical flow&lt;/b&gt;&lt;/i&gt; - the flow that &lt;i&gt;“makes money”&lt;/i&gt; - user lands on homepage, visits a product, adds to cart, goes to checkout... Strategically placing prerendering rules on pages on this flow will significantly improve the performance of the critical flow, and result in more conversions, more checkouts, more money spent on your website. That’s what you ultimately want, right? With the Speculation Rules API, you can be &lt;i&gt;really smart&lt;/i&gt; about this.&lt;/p&gt;&lt;p&gt;Most importantly, make sure to keep an eye on the performance in production after you deploy the improvements. Set up Sentry in your project, and create a &lt;a href=&quot;https://www.youtube.com/watch?v=PooH-yP1IiE&quot;&gt;Critical Experience Monitoring Dashboard&lt;/a&gt; that gives you an overview of how the &lt;i&gt;critical flow&lt;/i&gt; is performing, and if you need to improve a certain part of it:&lt;/p&gt;&lt;h2&gt;Bringing it all together&lt;/h2&gt;&lt;p&gt;In this article we saw how the &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/Elements/script/type/speculationrules&quot;&gt;Speculation Rules API&lt;/a&gt; (and framework-level prefetching) can drastically improve the perceived performance of your &lt;a href=&quot;https://sentry.io/solutions/ecommerce/&quot;&gt;e-commerce&lt;/a&gt; storefront by doing the loading upfront. 
We used Sentry to measure the performance boost in a production environment and concluded that prerendering/prefetching pages significantly improves the page load performance compared to no optimization at all.&lt;/p&gt;&lt;p&gt;But all that improvement doesn’t come without gotchas. We saw how SRA / prefetching can increase your server load, skew your analytics, and even prerender pages with stale auth cookies. Nothing to be afraid of though! If you have eyes on your app in production (&lt;i&gt;ahem, &lt;/i&gt;&lt;a href=&quot;https://sentry.io/signup/&quot;&gt;&lt;i&gt;use Sentry&lt;/i&gt;&lt;/a&gt;) you can react fast and fix any issue before it affects a large number of users.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Eliminating N+1 Queries with Seer’s Automated Root Cause Analysis]]></title><description><![CDATA[When I was working at Shopify, major traffic moments were our Super Bowl. We initiated a code freeze weeks before to make sure merchants wouldn’t have any unexpect...]]></description><link>https://blog.sentry.io/fix-n-plus-one-database-issues-with-sentry-seer/</link><guid isPermaLink="false">https://blog.sentry.io/fix-n-plus-one-database-issues-with-sentry-seer/</guid><pubDate>Mon, 24 Nov 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;When I was working at Shopify, major traffic moments were our Super Bowl. We initiated a code freeze weeks before to make sure merchants wouldn’t have any unexpected issues during one of the most important times of the year. Sometimes, though, you need to ship updates last minute.&lt;/p&gt;&lt;p&gt;Picture this: It’s 11:47 PM on the day before your massive sale goes live. You’ve just deployed a new &lt;code&gt;/sale&lt;/code&gt; page with 50+ products at discounted prices. Marketing is about to email 500,000 subscribers.
Everything tested fine with your sample data.&lt;/p&gt;&lt;p&gt;At 12:13 AM, you get your first Sentry alert.&lt;/p&gt;&lt;h2&gt;The problem&lt;/h2&gt;&lt;p&gt;Your &lt;code&gt;/sale&lt;/code&gt; endpoint is averaging 4+ seconds per request. Users are experiencing timeouts. You need to fix it now.&lt;/p&gt;&lt;h2&gt;Sentry catches the problem&lt;/h2&gt;&lt;p&gt;You open Sentry and see it&amp;#39;s already identified the issue: &lt;a href=&quot;https://docs.sentry.io/product/issues/issue-details/performance-issues/n-one-queries/&quot;&gt;&lt;b&gt;N+1 Query&lt;/b&gt;&lt;/a&gt;. Sentry automatically analyzed your transaction spans and found that your &lt;code&gt;/api/sale&lt;/code&gt; endpoint is making 150+ sequential database queries per request.&lt;/p&gt;&lt;p&gt;The issue details show the characteristic pattern: one initial query to fetch all products, followed by repeated queries for each product&amp;#39;s sale price, metadata, and category information. Classic N+1.&lt;/p&gt;&lt;p&gt;You implemented it this way because it was straightforward: get all products, then loop through and fetch their sale data. It looked clean. It worked perfectly with 5 test products. But with 50 products on sale? That&amp;#39;s 151 queries per page load.&lt;/p&gt;&lt;h2&gt;Using Seer&lt;/h2&gt;&lt;p&gt;You open the issue in Sentry and click &amp;quot;Find Root Cause&amp;quot;.&lt;/p&gt;&lt;p&gt;Seer analyzes the trace data and your codebase, then provides a root cause analysis:&lt;/p&gt;&lt;p&gt;&lt;i&gt;Sequential database calls inside a product loop create an N+3 query pattern, resulting in 54 queries and 10+ seconds latency.&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Seer pinpoints the exact code:&lt;/p&gt;&lt;h2&gt;The fix&lt;/h2&gt;&lt;p&gt;You ask Seer to generate a fix. It provides an optimized solution:&lt;/p&gt;&lt;p&gt;One query instead of 150. You approve the fix and Seer opens a pull request with the changes.&lt;/p&gt;&lt;p&gt;You merge and deploy.
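The shape of that change is the classic join rewrite. Here's a sketch with a hypothetical `db` helper and schema — not the post's actual code, just an illustration of the pattern:

```javascript
// N+1: one query for the products, then one more query per product.
async function getSaleProductsSlow(db) {
  const products = await db.query("SELECT * FROM products WHERE on_sale = true");
  for (const p of products) {
    // Each loop iteration round-trips to the database again.
    const rows = await db.query("SELECT * FROM sale_prices WHERE product_id = ?", [p.id]);
    p.sale = rows[0];
  }
  return products;
}

// Fix: a single query with a join fetches everything at once.
async function getSaleProductsFast(db) {
  return db.query(
    "SELECT p.*, s.discount_price FROM products p " +
      "JOIN sale_prices s ON s.product_id = p.id WHERE p.on_sale = true"
  );
}
```

With 50 products the slow version issues 51 queries; the fast version always issues one, regardless of how many products are on sale.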
P95 response times drop from 7 seconds to just under 2 seconds, with P50 going from 3 seconds to 275 milliseconds.&lt;/p&gt;&lt;h2&gt;The difference&lt;/h2&gt;&lt;p&gt;The entire process from &amp;quot;something&amp;#39;s wrong&amp;quot; to &amp;quot;fix deployed&amp;quot; took 6 minutes:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Automatic Detection&lt;/b&gt; (0 minutes): Sentry identified the N+1 issue as soon as it happened&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Root Cause Analysis&lt;/b&gt; (2 minutes): Seer analyzed the trace data and pinpointed the exact problem&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Solution Generation&lt;/b&gt; (1 minute): Seer provided production-ready code with proper SQL joins&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;PR and Deploy&lt;/b&gt; (3 minutes): Review, merge, and ship&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;Without Seer, you’d spend hours digging through traces and logs. And when user traffic is surging, every minute spent debugging is one your users spend waiting.&lt;/p&gt;&lt;p&gt;Seer didn&amp;#39;t just identify the problem; it explained the pattern, showed exactly where it was happening, provided production-ready code, and opened a PR.&lt;/p&gt;&lt;h2&gt;Why it matters&lt;/h2&gt;&lt;p&gt;During high-traffic moments or &lt;a href=&quot;https://sentry.io/resources/holiday-e-commerce-checklist/&quot;&gt;key launches&lt;/a&gt;, slow debugging is expensive. When users are experiencing issues, you need answers fast.&lt;/p&gt;&lt;p&gt;Seer provides those answers. It analyzes your performance data, explains issues clearly, and generates concrete solutions. It combines Sentry&amp;#39;s automatic issue detection with AI-powered root cause analysis and code generation.&lt;/p&gt;&lt;p&gt;The next time you hit a performance issue during a critical moment, Sentry will catch it and Seer can help you fix it.&lt;/p&gt;&lt;hr/&gt;&lt;p&gt;See how Seer could fix your next N+1 issue before users notice. 
&lt;a href=&quot;https://sentry.io/product/seer/&quot;&gt;Learn more about Seer&lt;/a&gt; and get started with AI-powered debugging. &lt;/p&gt;</content:encoded></item><item><title><![CDATA[Seer can now trigger Cursor Agents to fix your bugs]]></title><description><![CDATA[We just launched our Cursor Cloud Agent integration. Now when Seer finds a bug, it can hand it off to Cursor—replete with all the context Sentry has about the i...]]></description><link>https://blog.sentry.io/seer-can-now-trigger-cursor-agents-to-fix-your-bugs/</link><guid isPermaLink="false">https://blog.sentry.io/seer-can-now-trigger-cursor-agents-to-fix-your-bugs/</guid><pubDate>Thu, 20 Nov 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;We just launched our &lt;a href=&quot;https://docs.sentry.io/organization/integrations/cursor/&quot;&gt;Cursor Cloud Agent integration&lt;/a&gt;&lt;b&gt;. &lt;/b&gt;Now when &lt;a href=&quot;https://sentry.io/product/seer/&quot;&gt;Seer&lt;/a&gt; finds a bug, it can hand it off to Cursor—replete with all the context Sentry has about the issue—to write the fix and create a PR for you.&lt;/p&gt;&lt;h2&gt;Fully automated, validated, code fixes&lt;/h2&gt;&lt;p&gt;You can now autonomously run a coding agent within your full running codebase environment, all in the background.&lt;/p&gt;&lt;p&gt;When Seer detects and analyzes an issue to find the root cause, it can now send the root cause and the issue context to a Cursor Cloud Agent. The agent gets:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Full issue context:&lt;/b&gt; stack traces, breadcrumbs, user impact, the works&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Seer&amp;#39;s Root Cause Analysis:&lt;/b&gt; our deep analysis into what actually broke&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;b&gt;Your running codebase: &lt;/b&gt;the Cursor Cloud Agent has your full running codebase, and can run your code&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Then it gets to work. Autonomously. 
While you grab coffee.&lt;/p&gt;&lt;p&gt;You’ll come back to a pull request from the agent, ready and waiting in your repo.&lt;/p&gt;&lt;h2&gt;Two ways to use it&lt;/h2&gt;&lt;h3&gt;Manual trigger&lt;/h3&gt;&lt;p&gt;From any issue&amp;#39;s Seer Root Cause Analysis card, click the dropdown next to &lt;b&gt;Find Solution&lt;/b&gt; and launch a Cursor Cloud Agent. Perfect for when you want to delegate specific bugs.&lt;/p&gt;&lt;h3&gt;Automated workflow&lt;/h3&gt;&lt;p&gt;Set up Seer Automation to auto-trigger Cursor for certain types of issues. Configure this in your Seer settings by selecting Cursor Cloud Agent as the stopping point. Now your most critical (or most annoying) bugs get queued for fixes automatically.&lt;/p&gt;&lt;h2&gt;What can you do with it?&lt;/h2&gt;&lt;p&gt;We’ve seen a lot of people copy our Root Cause Analysis to local Cursor agents to continue debugging. Now you don’t have to copy-paste: Cursor Cloud Agents show up in the agents tab of your local Cursor IDE, ready for you to iterate on as part of your debugging workflow.&lt;/p&gt;&lt;p&gt;Also, because Cursor Cloud Agents spin up with your fully working code environment, they can run type checks, linting, and tests. 
With this, you can get issues automatically triaged and merge-ready.&lt;/p&gt;&lt;h2&gt;Get started in 3 steps&lt;/h2&gt;&lt;p&gt;Setting up takes a few minutes:&lt;/p&gt;&lt;ol&gt;&lt;li&gt;&lt;p&gt;In Sentry, navigate to &lt;a href=&quot;http://sentry.io/orgredirect/settings/integrations&quot;&gt;&lt;b&gt;Settings &amp;gt; Integrations&lt;/b&gt;&lt;/a&gt; and find Cursor Agent&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Click Install and provide your Cursor API key (find it in Cursor Account Settings under Integrations &amp;gt; User API Keys)&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Go to Settings &amp;gt; Projects &amp;gt; [Your Project] &amp;gt; Seer, then configure your stopping point to be “Hand off to Cursor Cloud Agent”&lt;/p&gt;&lt;/li&gt;&lt;/ol&gt;&lt;p&gt;Now you’ll start seeing PRs from your Cursor Cloud Agent pop up in your repos.&lt;/p&gt;&lt;p&gt;Check out the &lt;a href=&quot;https://docs.sentry.io/organization/integrations/cursor/&quot;&gt;docs&lt;/a&gt; for more. And if you&amp;#39;re new to Sentry, &lt;a href=&quot;https://sentry.io/signup/&quot;&gt;sign up for free&lt;/a&gt; to get started.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[The metrics product we built worked — But we killed it and started over anyway]]></title><description><![CDATA[Two years ago, Sentry built a metrics product that worked great on paper. But when we dogfooded it, we realized it was not what our customers really needed. Two...]]></description><link>https://blog.sentry.io/the-metrics-product-we-built-worked-but-we-killed-it-and-started-over-anyway/</link><guid isPermaLink="false">https://blog.sentry.io/the-metrics-product-we-built-worked-but-we-killed-it-and-started-over-anyway/</guid><pubDate>Wed, 19 Nov 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Two years ago, Sentry built a metrics product that worked great on paper. But when we dogfooded it, we realized it was not what our customers really needed. 
Two weeks before launch, we killed the whole thing. Here’s what we learned, why classical time-series metrics break down for debugging modern applications, and how we rebuilt the system from scratch.&lt;/p&gt;&lt;h2&gt;Why our first metrics product was not good&lt;/h2&gt;&lt;p&gt;We set out to build a metrics product for developers more than two years ago–before I even joined Sentry. We ended up following the path most observability platforms take: pre-aggregating metrics into time series. This approach promised to make tracking things like endpoint latency or request volume efficient and fast. And our team succeeded; tracking an individual metric was fast and cheap.&lt;/p&gt;&lt;p&gt;But we have a culture of dogfooding at Sentry, and as we started to put the system to real use, we started to feel the pain. It’s the same problem that plagues every similarly designed metrics product: the age-old &lt;a href=&quot;https://en.wikipedia.org/wiki/Cartesian_product&quot;&gt;Cartesian product&lt;/a&gt; problem. If you didn’t know, in traditional metrics, a new time series needs to be stored for every unique value of an attribute you add (e.g. for every individual server name you store under the “server” attribute.) And if you want to do &lt;i&gt;multiple&lt;/i&gt; attributes, you need to deal with every &lt;i&gt;combination&lt;/i&gt; of values. All of this comes from the fundamental issue that &lt;i&gt;when you are pre-aggregating you need to define your questions up front&lt;/i&gt;.&lt;/p&gt;&lt;p&gt;Now sometimes this isn’t a big issue. If you’re just tracking memory usage on 30 servers, or CPU% of each of the 16 cores on those servers, it’s all good.&lt;/p&gt;&lt;p&gt;But the problem is that Sentry’s goal is to give developers rich context to debug and fix code when it breaks. To this end, we know developers &lt;i&gt;love&lt;/i&gt; to add lots of context/attributes to make it easier to understand what’s going on. 
But when the extra context explodes the cost, it incentivizes developers to &lt;i&gt;not track&lt;/i&gt; all the context they want. Our engineers dogfooding our product continuously found themselves stuck between a rock and a hard place… Not good.&lt;/p&gt;&lt;h2&gt;Cardinality is a harsh mistress&lt;/h2&gt;&lt;p&gt;Let’s look at an example. Assume that each individual time series costs $0.01/mo. (That’s the right order of magnitude for the industry.) And let’s say you are tracking the latency of a specific endpoint called 100,000 times/day. Reasonable situation. Now let’s say one developer is interested in which (of 8) servers it’s being served on and adds an attribute for that. And another is interested in which (of 200) customer accounts it’s for. Then another wants to know which (of 12) sub-request types it is. Then the kicker–a developer wants to know which (of 5,000) users it’s for. I’ll take your guesses for what the cost is up to… OK, did you guess ~$1M/mo? Yes, that’s the real number.&lt;/p&gt;&lt;p&gt;And, honestly, this is an easy example. It’s not unusual for a developer to want to capture 10-20 attributes.&lt;/p&gt;&lt;p&gt;Of course you can play games like trying to avoid certain combinations of filters to keep things in check, but the management of it all is a pain. Long story short, we became convinced that we were about to ship a bunch of painful tradeoffs to our users that would make it hard for them to succeed–especially for Sentry’s mission of providing the richest possible debugging context. And ultimately it was me, just two weeks before our planned ship date (and two weeks after I joined), who decided to pull the plug on the product. I was not popular.&lt;/p&gt;&lt;h2&gt;Didn’t we see this coming?&lt;/h2&gt;&lt;p&gt;I know what you’re saying: Are you guys dumb? Surely this was totally predictable. You’re right. When we started the metrics project, there were significant debates up front. 
And indeed, the original vision was about trying to connect our metrics to the existing tracing telemetry that our users have. But in a series of scope drifts, miscommunications, and fears about cost structure and being different, we ended up making some sacrifices. It was a painful lesson we had to re-learn: When building a new product you have to make sure to be very clear up front about the vision, and stay diligent that you’re not compromising or straying from it as you build.&lt;/p&gt;&lt;h2&gt;Why was the vision “trace connected”?&lt;/h2&gt;&lt;p&gt;Wait, you just said the problem was cost blow-up. What’s this about connecting to traces? Well, dear reader, the problems with v1 went even deeper…&lt;/p&gt;&lt;p&gt;Let’s put aside all considerations about cardinality and cost. Assume infinite compute. The other problem with time series is that taking a bunch of &lt;code&gt;increment()&lt;/code&gt; or &lt;code&gt;gauge()&lt;/code&gt; calls in your code and aggregating them to 1-second granularity sucks, because it destroys the connection between data and code. You can graph the metrics over time, sure, but that graph lives in a parallel universe. It is connected to your code (and your traces, logs, errors, etc.) only indirectly via timestamp.&lt;/p&gt;&lt;p&gt;And that’s a problem, because, again, our goal isn’t to give developers dashboards, it’s to give them context—direct, actionable, connected context—so when something goes wrong, they can see &lt;i&gt;why&lt;/i&gt;.&lt;/p&gt;&lt;p&gt;Correlating indirectly via timestamp simply doesn&amp;#39;t do that very well. You look at a spike in latency, or an increase in error rate, and then jump over to a separate system, hunting through traces or logs, trying to reconcile timelines. 
This kind of “roughly around that time” debugging makes you pull your hair out, and is the (other) Achilles’ heel of traditional metrics.&lt;/p&gt;&lt;h2&gt;Can our customers afford the best?&lt;/h2&gt;&lt;p&gt;&lt;i&gt;So what would a system look like that didn’t blow up in cardinality, and didn’t aggregate to 1-second granularity?&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Dave, you say, I know where you are going, and it sounds expensive… Yes, let’s talk about what it costs to keep all the raw data and aggregate it on the fly, and the tradeoffs.&lt;/p&gt;&lt;p&gt;But, first, a bit of context: most observability and analytics systems have moved away from pre-aggregation and toward raw-event storage with on-demand aggregation. This has been enabled by cheaper storage, massively parallel compute, and the rise of columnar query engines. It’s been a mega trend that started in the early 2000s with Hadoop, and has slowly swept across the industry ever since. Logs, tracing, BI (do you remember OLAP cubes?), even security are all domains where early tools relied heavily on pre-defined aggregations and now almost exclusively store raw data and analyze it on demand. &lt;i&gt;Is it metrics’ time?&lt;/i&gt;&lt;/p&gt;&lt;p&gt;Let’s use the previous example for endpoint latency. Sorry, more math… Let’s say that the attributes we used in the example above (server, customer, sub type, user email) amount to 250 bytes of raw data. And, let’s just use the price of Sentry logs as a cost proxy ($0.50/GB/mo). At the stated rate of 100,000 times/day, we are talking about a cost of, drumroll… $0.37/mo. Yes, that’s right. It’s way more ($0.36/mo more) than it costs to track an &lt;i&gt;individual&lt;/i&gt; time series, but way less (~$1M/mo less) than if you wanted to track those four attributes too. 
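The arithmetic in both examples is easy to verify (a quick sanity check using the article's own stated assumptions of $0.01 per series per month and $0.50/GB/mo; no Sentry-specific code involved):

```python
# Pre-aggregated time series: every combination of attribute values becomes
# its own series (the Cartesian product), at ~$0.01/series/mo.
servers, customers, sub_types, users = 8, 200, 12, 5_000
series = servers * customers * sub_types * users   # 96,000,000 combinations
preagg_cost = series * 0.01                        # ~$960,000/mo, i.e. "~$1M/mo"

# Raw events: ~250 bytes per event, 100,000 events/day,
# priced like Sentry logs at $0.50/GB/mo (the article's cost proxy).
bytes_per_event, events_per_day = 250, 100_000
gb_per_month = bytes_per_event * events_per_day * 30 / 1e9   # 0.75 GB
raw_cost = gb_per_month * 0.50                               # ~$0.37/mo

print(f"pre-aggregated: ${preagg_cost:,.0f}/mo, raw events: ${raw_cost:.2f}/mo")
```

Four innocuous-looking attributes multiply into nearly a hundred million series, while the raw-event cost scales only with event volume and bytes per event.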
The lesson is: though your metric vendor’s sales rep thinks otherwise, you &lt;i&gt;really&lt;/i&gt; don’t want to be on the wrong side of an exponential.&lt;/p&gt;&lt;p&gt;Now, the astute among you probably realize that the 100,000 times/day is a key variable here as well. Traditional metrics &lt;i&gt;don’t&lt;/i&gt; scale in cost with event volume, but raw-data-based metrics do. So, if you have a metric that fires 1M, or 10M times per day, we’re talking about $3.70, or $37/mo if you want to keep everything. Which is a lot for 1 metric, but honestly, if you have an important user-facing endpoint firing 10M times per day, congratulations! And if you want to limit costs, you can always sample (my personal go-to), or just find a higher-level aggregate counter to log.&lt;/p&gt;&lt;p&gt;What’s that, you hate sampling or anything lossy? Well, I hate to break it to you, but if you have an endpoint firing 10M/day, it’s firing 100+ times per second on average, and that 1-second-bucket time series is already aggregating a ton of detail away anyway.&lt;/p&gt;&lt;h2&gt;Starting over with trace-connected telemetry&lt;/h2&gt;&lt;p&gt;Back to our story. We pulled the plug. The next steps were straightforward, but ended up taking us a while. We went back to the drawing board to build event-based metrics.&lt;/p&gt;&lt;p&gt;That decision actually led to a much deeper rearchitecture at Sentry—and the creation of a new, generic telemetry analytics system we internally call the Event Analytics Platform (EAP). It’s based on ClickHouse, which is an excellent modern columnar store. We’ve given a couple of talks about it, but this work probably deserves its own blog post. We first used EAP to power up our existing tracing product, building new slicing and querying capabilities. We next used it to deliver our recently-shipped logging product. 
Now, more than 18 months after pulling the plug on metrics 1.0, it’s backing our &lt;a href=&quot;https://docs.sentry.io/product/explore/metrics/&quot;&gt;next-generation Metrics product&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;What’s new about Metrics this time around is that every metric event is stored independently in EAP &lt;i&gt;and&lt;/i&gt; connected to its trace ID. This solves both the cardinality problem &lt;i&gt;and&lt;/i&gt; the connectedness problem. It means that we can slice and dice metrics dynamically and link them to other telemetry. It also means that you can keep adding more and more contextual tags, even ones with super-high cardinality, without fear of blowing up costs.&lt;/p&gt;&lt;h2&gt;Metrics with context&lt;/h2&gt;&lt;p&gt;We’ve been back to dogfooding, and this time we’re pretty happy. The trace-connected model unlocks debugging workflows that weren’t possible before. Instead of jumping between dashboards and guessing at time correlations, you can follow the data directly:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;See a spike in checkout failures? Jump from that metric to the exact trace where a failure happened; jump from the trace to an attached Sentry error.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Notice an increase in retries? Break down the metrics by span to see which service is at fault.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Watching p95 latency climb? Drill into the worst offenders and find related session replays to see what the user is experiencing.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;Metrics about your application&lt;/h2&gt;&lt;p&gt;What about infra monitoring? Well, we expect some of our users will do it, but we’re not really pushing that use case for &lt;a href=&quot;https://docs.sentry.io/product/explore/metrics/&quot;&gt;Sentry metrics&lt;/a&gt;. This point created a good deal of internal discussion since it’s so hard to decouple the idea of metrics from the idea of infrastructure. 
And time-series pre-aggregation is sooo tempting for infra because machine-level metrics are &lt;i&gt;already&lt;/i&gt; disconnected from your code, and it could be cheap, and, look, this simple little daemon can slurp up all of these signals, and, look, charts!&lt;/p&gt;&lt;p&gt;I won’t lie, we use infra metrics, and we know they aren’t going away. But for us, metrics are much more powerful when they are &lt;i&gt;application-level&lt;/i&gt; signals, connected to the underlying code that produced them. Traditional CPU and memory graphs have their place, but modern developers, especially ones building on high-level platform abstractions, care less about infra and more about higher-level application health: login failures, payment errors, request latencies. Those are the metrics that tell you what’s really happening to your users.&lt;/p&gt;&lt;p&gt;As we add modern metrics to our own product, we find that we are using the dashboards-of-1000-time-series a lot less, and I think you will too.&lt;/p&gt;&lt;h2&gt;Built for Seer&lt;/h2&gt;&lt;p&gt;Since I’m officially a CTO, I’d be remiss to go an entire blog post without mentioning AI. And, even though some of our efforts were just getting going when we made the call to reboot metrics, it turns out it was the right call for our AI debugging ambitions as well. &lt;/p&gt;&lt;p&gt;If you don’t know, we have an &lt;a href=&quot;https://docs.sentry.io/product/ai-in-sentry/seer/&quot;&gt;agent we call Seer&lt;/a&gt; built into Sentry. It root-causes issues, proposes fixes, and will do even more soon. 
Behind the scenes, Seer uses tool calls to traverse all of the connected telemetry in Sentry—&lt;a href=&quot;https://docs.sentry.io/product/issues/&quot;&gt;errors&lt;/a&gt;, &lt;a href=&quot;https://docs.sentry.io/concepts/key-terms/tracing/&quot;&gt;traces&lt;/a&gt;, &lt;a href=&quot;https://docs.sentry.io/product/explore/logs/&quot;&gt;logs&lt;/a&gt;, and now &lt;a href=&quot;https://docs.sentry.io/product/explore/metrics/&quot;&gt;metrics&lt;/a&gt;. What we keep finding with our testing is that data volume (a million examples) &lt;i&gt;isn’t&lt;/i&gt; very important to make Seer debug well, but every additional type of connected data pays &lt;i&gt;huge&lt;/i&gt; dividends. So, by making metrics first-class citizens in our telemetry graph, we ended up giving Seer a lot more connected context to work with. &lt;/p&gt;&lt;p&gt;I think this was a good demonstration for our team of the importance of sticking to your original vision for something. The original “make it all trace-connected” really did pay dividends vs. just shipping a traditional metrics solution as a separate feature, and even for a feature we hadn’t fully envisioned at the time.&lt;/p&gt;&lt;h2&gt;The long road was worth it&lt;/h2&gt;&lt;p&gt;If you’re reading this, you probably know how hard it is to kill a product you’ve already built. It’s harder still to admit that what you have, while functional, isn’t &lt;i&gt;right&lt;/i&gt;. The sunk cost fallacy is a mighty foe. And I’m really sorry to all of the beta testers who were enjoying metrics v1, but we’re convinced the wait was worth it for the new version.&lt;/p&gt;&lt;p&gt;So, this wasn’t exactly a traditional marketing blog post announcing a product, but since we’re taking a different tack on this one, I thought it would be interesting to share the thought process. 
I hope you get a chance to try our &lt;a href=&quot;https://docs.sentry.io/product/explore/metrics/&quot;&gt;new metrics product&lt;/a&gt; as we continue to build it and improve it, and hopefully this story also gives you a little inspiration for your own software projects to not be afraid to make the tough calls.&lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introducing Logs, User Feedback, and more in the Sentry Godot SDK]]></title><description><![CDATA[With the first stable releases out of the gate, we’re happy to announce that Sentry’s Godot SDK is now ready for general use, supporting Windows, Linux, macOS, ...]]></description><link>https://blog.sentry.io/introducing-logs-user-feedback-godot-sdk/</link><guid isPermaLink="false">https://blog.sentry.io/introducing-logs-user-feedback-godot-sdk/</guid><pubDate>Tue, 18 Nov 2025 10:00:00 GMT</pubDate><content:encoded>&lt;p&gt;With the first stable releases out of the gate, we’re happy to announce that &lt;a href=&quot;https://docs.sentry.io/platforms/godot/&quot;&gt;Sentry’s Godot SDK&lt;/a&gt; is now ready for general use, supporting Windows, Linux, macOS, iOS and Android. We started full-time development a year ago with just a few prototypes, and now it&amp;#39;s finally here - built on top of the mature Sentry platform SDKs, it comes as a GDExtension add-on that you can easily add to your Godot projects.&lt;/p&gt;&lt;p&gt;If you’re new to Sentry, it’s an &lt;a href=&quot;https://sentry.io/solutions/game-developers/&quot;&gt;application monitoring software&lt;/a&gt; that helps you manage your game&amp;#39;s health during development, QA and after release. It can automatically track crashes, runtime errors, logs, and even player feedback so you can spend less time chasing bugs and more time creating awesome experiences. 
Keep reading to explore the key features in the Godot SDK.&lt;/p&gt;&lt;h2&gt;Crashes and errors&lt;/h2&gt;&lt;p&gt;Sentry gathers error and crash reports, grouping them into issues in a convenient dashboard, helping you prioritize which issues need fixing first. It automatically captures Godot Engine runtime and script errors, showing detailed stack traces with optional local and member variable information and even surrounding script source context lines when available.&lt;/p&gt;&lt;p&gt;For crashes in the C++ layer, Sentry can collect and send minidumps, letting you know when your game crashes and where in the Godot source code it happens. This information can help you understand the cause of the crash, and share the stack trace (or minidump file) with Godot developers, or even fix the issue yourself.&lt;/p&gt;&lt;h2&gt;Bringing more context to debugging your game&lt;/h2&gt;&lt;p&gt;There are almost infinite possible hardware configurations out there. Finding out why some players experience a bug that you can&amp;#39;t reproduce locally can feel daunting. Sentry can help by giving detailed context about the hardware and software your players are using, and every issue includes this information for the specific event, so you can see the exact configuration that triggered the problem. And that&amp;#39;s not all - our SDK can send runtime logs, engine statistics, scene tree snapshots, and even in-game screenshots from the moment just before an issue is triggered.&lt;/p&gt;&lt;h2&gt;Structured logs&lt;/h2&gt;&lt;p&gt;&lt;a href=&quot;https://docs.sentry.io/platforms/godot/logs/&quot;&gt;Structured logs&lt;/a&gt; are now available in the Godot SDK as well, which means you can capture and link log output to crashes and performance issues in your game. The SDK can capture log output automatically: any time you call &lt;code&gt;print()&lt;/code&gt; in code or the engine spits out something, Sentry captures it in structured logs. You can then browse and search those log entries directly from the issue. 
So when a player gets stuck during a scene change or a crash occurs right after loading a resource, you’ll see the exact log trail leading up to the failure, available directly from the issue or trace view:&lt;/p&gt;&lt;h2&gt;User feedback &lt;/h2&gt;&lt;p&gt;We’ve also added &lt;a href=&quot;https://docs.sentry.io/platforms/godot/user-feedback/&quot;&gt;User Feedback&lt;/a&gt; support, so players can share details about errors or just give general feedback on their gameplay experience. This means you can see real player insights alongside the usual technical details, helping you prioritize fixes and understand the player experience beyond the stack trace.

There&amp;#39;s a ready-to-use User Feedback UI in the &lt;code&gt;addons/sentry/user_feedback&lt;/code&gt; folder that you can use or customize to your liking. Here&amp;#39;s what it looks like in action.&lt;/p&gt;&lt;p&gt;Of course, if you want to get creative, you can build your own interface and submit feedback from code using our API.&lt;/p&gt;&lt;h2&gt;Testing improvements&lt;/h2&gt;&lt;p&gt;While preparing the SDK for general availability, we also didn&amp;#39;t forget about testing. Our biggest achievement in this area was adding deep validation of event JSON content, which has already helped us iron out small inconsistencies between platforms and catch several bugs.&lt;/p&gt;&lt;p&gt;To make this possible, we built a GDScript module that adds a fluent API for sophisticated &lt;a href=&quot;https://github.com/getsentry/gdunit-json-assert&quot;&gt;JSON content testing&lt;/a&gt;. It&amp;#39;s important to start testing from GDScript, since this is how most users interact with the SDK&amp;#39;s API. This approach increases our confidence in the release process, knowing that we have a good foundation for catching potential regressions.&lt;/p&gt;&lt;p&gt;This expressive syntax makes it easy to define precise expectations for JSON structures while keeping tests readable and concise. The test output is also quite informative, showing which steps fail in the call chain, along with helpful bits of JSON content.&lt;/p&gt;&lt;h2&gt;Major version changes&lt;/h2&gt;&lt;p&gt;Every major release is an opportunity to look back and improve things. The biggest change is the redesigned SDK initialization process. Instead of relying on a configuration script with a somewhat unclear lifecycle, you can now simply initialize and configure the SDK manually whenever it makes the most sense in your game.&lt;/p&gt;&lt;p&gt;Now that you can initialize the SDK manually, our advice is quite simple: do it as early as possible! The best place to do that is in your project’s main loop script. 
We also took the opportunity to add a method for cleanly closing the SDK connection if it’s no longer needed. This rounds out our improvements to the SDK lifecycle.&lt;/p&gt;&lt;h2&gt;Platform support and getting started&lt;/h2&gt;&lt;p&gt;The Sentry Godot SDK comes with support for Windows, Linux, macOS, iOS and Android. Coming soon: support for Web and C# exports, as well as support for W4 console forks. 
Crash reporting for &lt;a href=&quot;https://docs.sentry.io/platforms/playstation/&quot;&gt;PlayStation&lt;/a&gt; and &lt;a href=&quot;https://docs.sentry.io/platforms/nintendo-switch/&quot;&gt;Switch&lt;/a&gt; is already supported by Sentry, no SDK needed. &lt;/p&gt;&lt;p&gt;To get started, you can &lt;a href=&quot;https://github.com/getsentry/sentry-godot&quot;&gt;download it from GitHub&lt;/a&gt;, and &lt;a href=&quot;https://docs.sentry.io/platforms/godot/&quot;&gt;refer to the documentation here&lt;/a&gt;. Got any questions or feedback? You can reach out in &lt;a href=&quot;https://github.com/getsentry/sentry-godot/discussions&quot;&gt;Discussions&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;If you’re new to Sentry, you can explore our &lt;a href=&quot;https://sandbox.sentry.io/issues/?project=4509331580780549&amp;statsPeriod=14d&quot;&gt;interactive Sentry sandbox&lt;/a&gt; or &lt;a href=&quot;https://sentry.io/signup/&quot;&gt;sign up for free&lt;/a&gt;. &lt;/p&gt;</content:encoded></item><item><title><![CDATA[Introducing webvitals.com: Find out what’s slowing down your site]]></title><description><![CDATA[Developers don’t need another “run this tool, stare at a number, and feel bad about it” website. So we built something different. WebVitals helps you analyze, o...]]></description><link>https://blog.sentry.io/introducing-webvitals-com/</link><guid isPermaLink="false">https://blog.sentry.io/introducing-webvitals-com/</guid><pubDate>Tue, 18 Nov 2025 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Developers don’t need another “run this tool, stare at a number, and feel bad about it” website. So we built something different.&lt;/p&gt;&lt;p&gt;&lt;a href=&quot;https://webvitals.com/&quot;&gt;WebVitals&lt;/a&gt; helps you analyze, optimize, and ship faster websites, all in one place. 
Built by the same folks who obsess over stack traces and slow queries, it connects the dots between performance metrics and what’s actually slowing your users down.&lt;/p&gt;&lt;p&gt;In one place, you can:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;See how real users experience your site&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Spot the biggest slowdowns instantly&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Get clear next steps (no jargon, no guessing)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2&gt;How webvitals.com works&lt;/h2&gt;&lt;p&gt;Enter your domain, hit go, and we’ll take it from there. Under the hood, WebVitals runs a couple of tool calls using the Vercel AI SDK:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;One to &lt;b&gt;Google’s PageSpeed API&lt;/b&gt; - to pull real-user performance data from the last 28 days, so-called &lt;a href=&quot;https://developer.chrome.com/docs/crux&quot;&gt;CrUX data&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;One to &lt;b&gt;Cloudflare’s URL Scanner&lt;/b&gt; - to detect your site’s technology stack&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Those results get combined and analyzed to surface the stuff that actually matters. You’ll see what’s solid and where things could use a little love. Each report provides clear, contextual steps for improving your metrics, using CrUX data for additional guidance. 
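For context, that real-user data is available from Google's public PageSpeed Insights v5 API, which bundles CrUX field data with each report. A minimal request URL can be composed like this (a hypothetical helper; the endpoint and parameter names follow Google's published API, but verify against the official docs before relying on them):

```python
from urllib.parse import urlencode

# Public PageSpeed Insights v5 endpoint; the JSON response includes a
# `loadingExperience` section with CrUX field data (75th-percentile values
# over the trailing 28 days) alongside the lab-based Lighthouse result.
PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def psi_url(site: str, strategy: str = "mobile", api_key: str = "") -> str:
    """Build a PageSpeed Insights request URL for a site."""
    params = [("url", site), ("strategy", strategy), ("category", "PERFORMANCE")]
    if api_key:  # optional; unauthenticated requests are rate-limited
        params.append(("key", api_key))
    return f"{PSI_ENDPOINT}?{urlencode(params)}"

# Usage: urllib.request.urlopen(psi_url("https://example.com")) returns the report JSON.
```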
CrUX data reports the 75th percentile of each metric, measured across real users.&lt;/p&gt;&lt;h2&gt;The Core Web Vitals report&lt;/h2&gt;&lt;p&gt;You get real, actionable insight into your Core Web Vitals:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://webvitals.com/lcp&quot;&gt;&lt;b&gt;Largest Contentful Paint (LCP)&lt;/b&gt;&lt;/a&gt; – how fast your main content loads&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://webvitals.com/inp&quot;&gt;&lt;b&gt;Interaction to Next Paint (INP)&lt;/b&gt;&lt;/a&gt; – how responsive your page feels when users interact&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://webvitals.com/fcp&quot;&gt;&lt;b&gt;First Contentful Paint (FCP)&lt;/b&gt;&lt;/a&gt; – when &lt;i&gt;something&lt;/i&gt; finally shows up&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://webvitals.com/ttfb&quot;&gt;&lt;b&gt;Time to First Byte (TTFB)&lt;/b&gt;&lt;/a&gt; – how long it takes your server to respond&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;&lt;a href=&quot;https://webvitals.com/cls&quot;&gt;&lt;b&gt;Cumulative Layout Shift (CLS)&lt;/b&gt;&lt;/a&gt; – how stable your layout is (or isn’t)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Each metric page breaks down which browsers track it, what “good” scores look like, and why it matters. We even added motion animations (made with the &lt;a href=&quot;https://motion.dev/&quot;&gt;Motion&lt;/a&gt; animation library, because why not) to make all that data a little easier on the eyes. &lt;/p&gt;&lt;h2&gt;Why Core Web Vitals matter&lt;/h2&gt;&lt;p&gt;Core Web Vitals are a reflection of what real people feel when they use your site.&lt;/p&gt;&lt;p&gt;If your page takes too long to paint, shifts around while loading, or lags when someone clicks a button, users notice, and they bounce. A slow site hurts more than just your CWV metrics. 
It affects your &lt;a href=&quot;https://developers.google.com/search/docs/appearance/core-web-vitals&quot;&gt;search engine rankings&lt;/a&gt;, conversion rate, and, in the long run, &lt;a href=&quot;https://web.dev/case-studies/vitals-business-impact&quot;&gt;your revenue&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;These metrics turn vague “feels slow” feedback into measurable, fixable problems. They’re the closest thing to a shared language between your performance graphs and your users’ frustration levels.&lt;/p&gt;&lt;h2&gt;Go deeper with Sentry&lt;/h2&gt;&lt;p&gt;Tools like Google’s PageSpeed Insights and Lighthouse are great for public pages, but they don’t cover everything. Pages hidden behind logins, like authenticated dashboards, admin panels, and internal tools, need a different approach, and that’s where Sentry enters the picture.&lt;/p&gt;&lt;p&gt;With &lt;a href=&quot;https://docs.sentry.io/product/insights/frontend/web-vitals/&quot;&gt;Sentry’s Web Vitals dashboard&lt;/a&gt;, you can track performance across &lt;i&gt;every&lt;/i&gt; page of your app, not just the public ones. Sentry has an SDK for just about every language and framework. You’ll see detailed breakdowns with real user data, so you can understand and fix what’s slowing you down.&lt;/p&gt;&lt;p&gt;For example, the following Next.js image gallery app has some performance issues:&lt;/p&gt;&lt;p&gt;It loads slowly and shows a blank page while the image data is being fetched.
The hero image also causes a cumulative layout shift (CLS).&lt;/p&gt;&lt;p&gt;To fix these issues, you can quickly set up Sentry in your Next.js application to start capturing performance data by running &lt;code&gt;npx @sentry/wizard@latest -i nextjs&lt;/code&gt; within your project, then following the installation wizard, which guides you through the whole setup process.&lt;/p&gt;&lt;p&gt;Once you’re done setting it up, the Web Vitals page in Sentry is a great starting point for investigating poor Web Vitals and figuring out which pages are affecting your web performance the most.&lt;/p&gt;&lt;p&gt;The &lt;b&gt;Performance Score&lt;/b&gt; ring summarizes the overall perceived performance of your app. The Core Web Vitals metrics make up the ring and show their relative weight and impact on the score. Hovering your mouse over a section of the ring shows you the score and opportunity for that metric.&lt;/p&gt;&lt;p&gt;The opportunity is the difference between the current score and the maximum possible score (which is 100) for that metric.&lt;/p&gt;&lt;p&gt;The &lt;b&gt;Score Breakdown&lt;/b&gt; to the right of the Performance Score shows an area chart of your Performance Score over time, which makes it easy to track Core Web Vitals and identify regressions. The filters at the top of the area chart let you filter by environment, time range, and browser type.&lt;/p&gt;&lt;p&gt;The Core Web Vitals are shown below the Performance Score. The metric values are shown in seconds or milliseconds. The performance score is also displayed and color coded: good is 90+, meh is 50–90, and bad is below 50.
The table at the bottom of the page shows the Core Web Vitals per page.&lt;/p&gt;&lt;p&gt;Click on a page route in a table row to get web vitals for that page.&lt;/p&gt;&lt;p&gt;The table at the bottom of this page shows traces where the web vitals were measured, as well as the source element for the largest contentful paint (LCP), which helps you find the lines of code causing the slowdown. You can filter by web vital or search for a specific trace.&lt;/p&gt;&lt;p&gt;Another issue you’ve noticed is that the website&amp;#39;s homepage is taking a long time to load. This could indicate that your server is struggling to keep up with requests. Looking again at the Web Vitals page, you see that the page has a very low Time To First Byte (TTFB) score.&lt;/p&gt;&lt;p&gt;You can then navigate to the page summary for the home page and use the Web Vital drop-down to show traces by TTFB score. Then open a specific trace by clicking on the appropriate link.&lt;/p&gt;&lt;p&gt;Looking at this trace, you can quickly see there is a two-second delay between the user sending the initial GET request and the response from the Page Server Component. TTFB is calculated from the following request phases:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;&lt;p&gt;Redirect time.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Service worker startup time.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;DNS lookup.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Connection and TLS negotiation.&lt;/p&gt;&lt;/li&gt;&lt;li&gt;&lt;p&gt;Request, up until the point at which the first byte of the response has arrived.&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The trace shows no evidence of the first four phases, so you can determine that the request phase itself is taking the full two seconds.
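On the browser side, those same phases are exposed by the standard Navigation Timing API, so you can compute the breakdown yourself. This is a sketch using `PerformanceNavigationTiming` fields from the spec, not how Sentry builds its trace waterfall internally:

```javascript
// Sketch: split TTFB into the request phases listed above, using the
// fields of a PerformanceNavigationTiming entry (all values in ms,
// relative to navigation start).
function ttfbBreakdown(nav) {
  return {
    redirect: nav.redirectEnd - nav.redirectStart,
    // workerStart is 0 when no service worker intercepted the request
    serviceWorker: nav.workerStart > 0 ? nav.fetchStart - nav.workerStart : 0,
    dns: nav.domainLookupEnd - nav.domainLookupStart,
    connect: nav.connectEnd - nav.connectStart, // includes TLS negotiation
    request: nav.responseStart - nav.requestStart, // until the first byte arrives
    total: nav.responseStart - nav.startTime, // TTFB as a whole
  };
}

// In a browser you would feed it the page's navigation entry:
// const [nav] = performance.getEntriesByType("navigation");
// console.table(ttfbBreakdown(nav));
```

A breakdown where `request` dominates `total`, as in the trace above, points at the server rather than the network.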
Selecting this phase and clicking &lt;b&gt;More Samples&lt;/b&gt; lets you see other examples of the same action and compare the request times across traces.&lt;/p&gt;&lt;p&gt;In this example, a suspiciously consistent two-second delay shows up in every sample. This is because an artificial slowdown was added to the example web app using &lt;code&gt;setTimeout()&lt;/code&gt;. In a production environment, this may be harder to diagnose, as there can be &lt;a href=&quot;https://web.dev/articles/optimize-ttfb&quot;&gt;many reasons&lt;/a&gt; for a slow web server response, but tracing should give you a good idea of where to focus your optimization efforts.&lt;/p&gt;&lt;p&gt;Removing the artificial delay solves your TTFB issue, but to improve the performance of the image gallery app, you need additional fixes. First, you could compress the hero image using &lt;a href=&quot;https://tinypng.com/&quot;&gt;Tinify&lt;/a&gt;, and then replace the &lt;code&gt;&amp;lt;img&amp;gt;&lt;/code&gt; elements with Next.js &lt;a href=&quot;https://nextjs.org/docs/app/api-reference/components/image&quot;&gt;&lt;code&gt;Image&lt;/code&gt;&lt;/a&gt; components and provide a width and height to prevent cumulative layout shift (CLS):&lt;/p&gt;&lt;p&gt;&lt;code&gt;Image&lt;/code&gt; components could be used for the image gallery images as well, so that the images are lazy-loaded. This means that they are fetched and rendered only when they are close to entering the user&amp;#39;s viewport.
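That swap might look something like this. A minimal sketch, assuming a hypothetical compressed &lt;code&gt;/hero.jpg&lt;/code&gt; asset; the path, dimensions, and component name are placeholders, not the demo app’s actual code:

```typescript
import Image from "next/image";

// Giving the browser explicit width/height lets it reserve space for the
// image before the file loads, which prevents layout shift (CLS).
export function Hero() {
  return (
    <Image
      src="/hero.jpg" // hypothetical compressed hero asset
      alt="Gallery hero"
      width={1200}
      height={600}
      priority // hero is above the fold, so opt out of lazy-loading
    />
  );
}
```

Gallery images would use the same component without `priority`, so they keep `next/image`'s default lazy-loading.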
&lt;/p&gt;&lt;p&gt;The page could also be converted from a client component to a &lt;a href=&quot;https://react.dev/reference/rsc/server-components&quot;&gt;server component&lt;/a&gt; to avoid the data fetching being render blocking.&lt;/p&gt;&lt;p&gt;With those fixes implemented, the Core Web Vital metrics look a lot better than before:&lt;/p&gt;&lt;p&gt;You can see the improvement over time in the &lt;b&gt;Score Breakdown&lt;/b&gt; chart for the images page:&lt;/p&gt;&lt;p&gt;Think of Sentry as the logical next step after tools like WebVitals. Web tools are great at giving you a quick overview, while Sentry shows you everything underneath: what’s happening, where, and why.&lt;/p&gt;&lt;h2&gt;Try WebVitals out&lt;/h2&gt;&lt;p&gt;Head to &lt;a href=&quot;https://webvitals.com&quot;&gt;webvitals.com&lt;/a&gt;, test your site, and see how it stacks up. You’ll get fast, clear insights, and maybe even a few things to fix before your next deploy.&lt;/p&gt;&lt;p&gt;Then, when you’re ready to go beyond surface-level metrics, bring it home with &lt;a href=&quot;https://sentry.io/signup/&quot;&gt;Sentry&lt;/a&gt;. Because knowing your site is slow is fine. Knowing &lt;i&gt;why&lt;/i&gt;, and &lt;i&gt;how&lt;/i&gt; to fix it, is better.&lt;/p&gt;</content:encoded></item></channel></rss>