Fixing JavaScript observability, one library at a time
Over the past few weeks, we have been driving a cross-ecosystem effort to replace the “monkey-patching” that powers all JavaScript APM tools today with something built into the runtime. Here is why, how, and where it stands.
This applies to server-side JavaScript only (Node.js, Bun, Deno, Cloudflare Workers). Browsers do not have diagnostics_channel and lack the async context propagation primitives needed to polyfill it.
Monkey-patching does not scale
My teammate Sigrid wrote a detailed breakdown of why monkey-patching is failing and how TracingChannel solves it.
The short version: every JavaScript APM tool, including Sentry’s, instruments libraries by intercepting require() and import calls at runtime using import-in-the-middle (IITM) and require-in-the-middle (RITM). This breaks with ECMAScript Modules (ESM), does not work in non-Node runtimes, conflicts with bundlers, and couples us to internal implementation details we do not control. The SDK must also load before the library it instruments, or instrumentation silently does nothing.
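To make that fragility concrete, here is a simplified sketch of what runtime patching amounts to. The `connection` object is a stand-in, not the real mysql2 API, and real tools reach this wrapping step through require-in-the-middle / import-in-the-middle loader hooks rather than direct assignment:

```javascript
// Simplified stand-in for a database driver's API surface.
const connection = {
  async query(sql) {
    return `rows for: ${sql}`;
  },
};

// What an APM effectively does today: wrap the method and hope the
// library's internal shape never changes. Real tools hook require()/
// import() via RITM/IITM to reach this point, which is exactly where
// ESM, bundler, and load-order problems creep in.
const exportedSpans = [];
const originalQuery = connection.query;
connection.query = async function patchedQuery(sql) {
  const span = { name: 'db.query', sql, start: Date.now() };
  try {
    return await originalQuery.call(this, sql);
  } finally {
    span.end = Date.now();
    exportedSpans.push(span); // hand off to the tracer here
  }
};
```

If the application grabs a reference to `query` before the SDK runs, the patch never sees those calls — the initialization-ordering failure mode mentioned above.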
This is not a Sentry-specific problem. Every APM vendor maintaining JavaScript instrumentation deals with the same fragility. The ecosystem is stuck.
Most library maintainers do not think about observability. They do not know what they would need to expose, and adopting something like OpenTelemetry means taking on an implementation burden, not just adding a standard. APMs managed to patch their way around this for years, so nobody on the library side ever had to figure it out.
But there’s a better way.
TracingChannel - observability without patching
In late 2025, we were working with Pooya Parsa (creator of Nitro, h3, and the unjs ecosystem) on the best way to build a Sentry SDK for the Nitro framework. During that conversation, my teammate Sigrid suggested we look into TracingChannel, a built-in API from Node’s diagnostics_channel module. Sigrid’s blog post covers that API in depth, but the core idea is simple: if a library publishes structured events on a TracingChannel, any APM tool can subscribe to those events without patching anything. The library just says “a query started” and “a query ended,” and whoever is listening can create spans from that.
```js
// Library side (e.g. inside mysql2) -- connection, sql, host, and port
// come from the surrounding driver code.
import { tracingChannel } from 'diagnostics_channel';

const queryChannel = tracingChannel('mysql2:query');

// tracePromise() publishes start/end/asyncStart/asyncEnd events around
// the promise, with the second argument as the event payload.
queryChannel.tracePromise(async () => {
  return await connection.query(sql);
}, { query: sql, serverAddress: host, serverPort: port });
```
The cost of this added code is minimal, so this is an easy sell for library maintainers. On the APM side, we just subscribe to that tracing channel and receive the events. No IITM, no RITM, no loader hooks, no initialization ordering. Zero overhead when nobody is listening. Works across Node, Bun, and Deno. Bundler-safe. The API has been available since Node 18, which already matches our support range, and dc-polyfill covers runtimes that lack it.
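On the subscriber side, the channel name is all an APM needs. Here is a minimal sketch of a subscriber that turns those events into span-like records (the channel name and payload fields mirror the hypothetical mysql2 example above; a real integration would hand the finished records to an OTel tracer):

```javascript
// APM side: look up the same named channel the library publishes on.
import { tracingChannel } from 'diagnostics_channel';

const queryChannel = tracingChannel('mysql2:query');

const active = new Map(); // in-flight spans, keyed by the event payload object
const finished = [];      // completed spans, ready to hand to a tracer

queryChannel.subscribe({
  // start fires synchronously when tracePromise() is entered; the
  // message is the context object the library passed in.
  start(message) {
    active.set(message, {
      name: 'db.query',
      query: message.query,
      startTime: Date.now(),
    });
  },
  // error fires if the traced function throws or its promise rejects.
  error(message) {
    const span = active.get(message);
    if (span) span.error = message.error;
  },
  // asyncEnd fires once the traced promise settles.
  asyncEnd(message) {
    const span = active.get(message);
    if (!span) return;
    active.delete(message);
    span.endTime = Date.now();
    finished.push(span);
  },
});
```

Because channels are looked up by name, the subscriber needs no reference to the library itself, and when nothing is subscribed the library’s publish calls are effectively no-ops.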
Everyone agrees, nobody is pushing
Once we had learned enough about the TracingChannel API and how to make it work with OpenTelemetry, I opened an issue on OTel JS in November 2025 to discuss TracingChannel support.
The response was positive. A while later, someone on the OTel team even created a draft API approach for integrating TracingChannel into the OTel SDK.
But there is no significant push to drive ecosystem adoption. The draft exists; the ecosystem work does not.
Everyone agrees that TracingChannel is the future of JavaScript observability, but nobody is doing the work of getting libraries to adopt it. We have many instrumentations across databases, web frameworks, message queues, and AI providers that need TracingChannel support. That is a mountain of upstream PRs, each requiring understanding the library’s internals, writing a proposal that maintainers will accept, implementing the changes, and iterating on review feedback.
So I thought “fine, why not just get the ball rolling?”
The first step was proving the pattern works. I had already built TracingChannel support by hand in h3, srvx, unstorage, db0, and Nitro as part of the earlier SDK work. The unjs ecosystem was receptive and moved fast, which gave us shipped examples to point to and an end-to-end mental model: how events should be shaped, how context propagation flows, how to make it work with OTel, and what semantic conventions to follow.
We also learned early that you cannot just tell maintainers “hey, you should use TracingChannel”; a request like that gets shelved to collect dust. Instead, as we did with Nitro, we say “we will do it for you and help you own it.” Accepting code into a repository adds a maintenance burden, so we offer to shoulder that work and make the result part of the library.
With that in mind, I reached out to pg, mysql2, and redis to gauge their interest, offering to fully own the work until it ships and to provide support afterward. These are the top database driver libraries in the ecosystem, accounting for over 60 million downloads per week combined. If we can get TracingChannel in them, we can get other libraries. All three said yes and were open to receiving a PR.
I also reached out to Stephen Belanger, the creator of the diagnostics_channel API in Node.js core. He is now helping push this forward, providing feedback on proposals and lending the voice of authority that is sometimes needed to convince maintainers.
So one by one, we’re making this happen across the ecosystem.
For context on how this fits into the bigger picture: my team is working on making our SDK runtime-agnostic, and we are pursuing multiple paths in parallel, most of which have an immediate effect. The TracingChannel initiative is the long-term play. We cannot expect users to upgrade to new library versions overnight, and we probably will not convince every maintainer to adopt it at the same time, so the migration will be gradual.
Scaling it with AI
Here is the practical reality: one person adding TracingChannel support to 44 libraries by hand is just not going to happen. I do not know the internals of any of them. I have never looked at the Redis protocol implementation or mysql2’s query pipeline before this project.
So I built a feedback loop using Claude Code skills that handles the per-library heavy lifting:
- Research and Propose. Given a library name, Claude researches its async model, existing OTel instrumentation, maintenance status, and internal architecture, then drafts a proposal following all the patterns we have established. I review and adjust before it goes anywhere.
- Implement. Given an approved proposal, Claude produces a working implementation with tests, handling `tracePromise`/`traceCallback` selection, `hasSubscribers` guards, Node 18 compatibility, and integration tests against real services via Docker.
- Capture Review Feedback. When a PR gets reviewed upstream, Claude triages every comment, assesses validity, suggests responses, and flags patterns that should inform future proposals. I decide what to act on and handle all communication with maintainers myself.
- Update the Tracker. Claude fetches the latest status of every upstream PR and keeps the migration tracker current.
Each cycle feeds the next one. Learnings from one library’s review process improve the next library’s proposal. The knowledge compounds and is dumped into a LEARNING.md file to guide future work.
To clarify the human/AI split: Claude handles research, boilerplate implementation, and pattern application. I handle architecture decisions, insertion point identification, all maintainer communication, and final review of every line before it ships. Critically, every commit is co-authored and AI involvement is made transparent. Library maintainers interact with a human, not with an AI. I kept certain parts human-led because that shows respect to the maintainer’s work, which is critical to convincing them to adopt code into their library.
This approach turned what would be a multi-year solo effort into a production line where I can keep dishing out proposals every day, start implementations in parallel, learn from them all and integrate the learnings into pending and future work.
10 merged, 34 to go
We are tracking 44 instrumentations across four categories. Here is where things stand:
| Category | Total | Merged | PR Open | In Discussion | Not Started |
|---|---|---|---|---|---|
| OTel-provided | 24 | 4 | 2 | 6 | 12 |
| Sentry-built | 10 | 0 | 0 | 1 | 9 |
| Other ecosystem | 8 | 5 | 2 | 1 | 0 |
| Logging | 2 | 1 | 0 | 0 | 1 |
| Total | 44 | 10 | 4 | 8 | 22 |
Notable wins:
- mysql2 - Merged. One of the most popular database drivers in the npm ecosystem.
- node-redis and ioredis - Both merged. The two dominant Redis clients now ship TracingChannel support.
- h3, srvx, unstorage - All merged. The unjs ecosystem was early and enthusiastic. This touches Nitro, which in turn touches Nuxt and other downstream frameworks.
We also helped establish ecosystem coordination through an e18e umbrella issue and the untracing spec that standardizes TracingChannel usage for library authors.
What this means for Sentry
This flips the instrumentation model. Libraries own the contract, and we subscribe to it. Every problem described above (ESM breakage, init ordering, runtime lock-in, bundler conflicts) goes away. Our instrumentation code gets simpler, and we stop maintaining runtime-specific hacks.
This also benefits every APM tool, not just Sentry. Driving it builds trust with library maintainers and the broader community, sure, but several maintainers have specifically called out that they appreciate the approach because it helps everyone and is not biased towards any one APM provider.
The flywheel is starting
Take node-redis as a case study. During our collaboration with the Redis team, they were already working on their own first-party OpenTelemetry instrumentation. They wanted our TracingChannel proposal to align with and power that instrumentation. We re-implemented their already shipped metrics plugin using tracing channels and it worked without changing a single test. Now, we are helping them with traces.
Shortly after mysql2 shipped TracingChannel support, someone independently built mysql2-otel-instrumentation, a pure diagnostics_channel subscriber that replaces OTel’s monkey-patched @opentelemetry/instrumentation-mysql2. The motivation was exactly the problem we are solving: RITM was not working. A library adds TracingChannel support, and the subscribers manifest on their own.
What’s next
We have open PRs against Express, PostgreSQL (pg), Knex, and GraphQL, the kind of libraries where TracingChannel support means millions of applications get better observability without changing a line of their own code. MongoDB, Mongoose, Prisma, and Hono are in active discussion, and we have drafted proposals for Koa and Consola. There are still 20+ libraries on the list we have not reached out to yet, including Node’s built-in HTTP module, Kafka clients, and AI provider SDKs.
Beyond individual library adoption, the next layer is reducing duplication on the consumer side. Right now, every APM tool that subscribes to a TracingChannel has to independently map library payloads to OpenTelemetry semantic conventions. We are designing a shared mapper registry, a set of co-maintained modules that translate TracingChannel events into standardized spans and attributes. The goal is to build and prove this internally at Sentry first, then open-source it so any APM vendor can plug in. If a library ships TracingChannel support and a mapper exists, instrumentation becomes automatic.
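To make the idea concrete, a mapper module might look something like the sketch below. This is purely illustrative, since the registry does not exist yet: the module shape, names, and attribute keys are assumptions, loosely following OTel database semantic conventions.

```javascript
// Hypothetical mapper: one module per channel, co-maintained by the
// community. It knows nothing about any particular APM; it only
// translates a library's TracingChannel payload into a standardized
// span name and OTel-style attributes.
export const mysql2QueryMapper = {
  channel: 'mysql2:query',
  toSpan(message) {
    return {
      name: 'mysql2.query',
      attributes: {
        'db.system': 'mysql',
        'db.query.text': message.query,
        'server.address': message.serverAddress,
        'server.port': message.serverPort,
      },
    };
  },
};
```

An APM would subscribe to `mysql2:query`, run each event payload through the mapper, and attach the result to whatever span implementation it uses — the mapping logic is written once instead of once per vendor.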
The long-term picture is an ecosystem where libraries emit events as a first-class concern, mappers are community-maintained, and APM tools compete on what they do with the data rather than on how creatively they can patch your dependencies. We are not there yet, but the flywheel is turning.
You can help by talking about tracing channels and advocating for their adoption in the libraries you use. If you maintain a library and want to add TracingChannel support, the untracing conventions and our published proposals are a good starting point.