
Tracing Just Got a Whole Lot More Useful: Search, Visualize, and Alert with Sentry’s new Query Engine


Will McMullen, Sasha Blumenfeld


For a while, tracing in Sentry was... fine. You could open up a slow transaction, poke around, find the N+1, and feel like a hero. But if you wanted to answer more complex questions - like why your payment API was getting slower in Europe, or which CDN was silently tanking your image loads - things got harder.

We didn't really build it to answer broad questions. It was built for “What happened to this request?”, not “What’s happening across all of them?”

So we rebuilt it.

With the new Trace Explorer and Span Metrics, you can turn your raw tracing data into something you can actually work with. Query your raw data to spot outliers, identify patterns, create alerts, find root causes, and, most importantly, start using tracing proactively.

Let’s go over the new features, and how you can use them to answer complex questions about your application’s performance and reliability across all your traces. 

Trace Explorer + Span Metrics = Actual Answers

Issues in distributed apps don’t start with a bang; they start with a whimper. A bloated API response, a misconfigured CDN, a spike in auth latency: they slip by unnoticed in your logs and error rates. By the time you hear about it, users are already annoyed and someone’s asking for a retro.

With Trace Explorer, you can now:

  • Query spans by operation (http.client, auth, etc.), service name, or custom attributes to narrow your data set down to the relevant spans

  • Calculate metrics (p95, count, avg) on any attribute at query time to visualize broad-stroke trends and spikes for any metric you might find useful, like span.duration or token_usage

  • Group by things like user.region, cdn.provider, or span.description to spot common patterns and outliers

  • Compare queries side-by-side to see if feature flags, tags, or user attributes impact your metrics

  • Build dashboards and alerts to stay proactive

And unlike full-blown metrics platforms, this doesn't take a six-month rollout or a separate team to manage. It just works… straight from your existing tracing data.

For a quick step-by-step guide, check out our last post for the early adopter program, or read the docs on how to send span metrics.

What questions can you answer with Trace Explorer?

Users in Japan are dropping off early.

You don’t need a war room. Instead, you query your spans by span.op = http.client, then filter for user.geo.country_code = JP. Sort by p95(span.duration), and group by span.description. Instantly, one endpoint jumps out: you click into the trace samples and get to debugging.

You’re getting flooded with 404s. No idea from where.

You filter for status_code = 404, sort by count(spans), group by span.description, and instantly see which endpoints are broken and how often they’re breaking. Turns out someone shipped a frontend route mismatch.

You want to see which endpoints are using caches.

You’re looking for opportunities to optimize performance, so you create and save a query for the count and performance of all your cached and uncached endpoints. Search for http.response_delivery_type is cache, group by span.description, select Compare Queries and adjust the second query to is not cache, and finally save the query for later use.

Users are complaining about login. Again.

You filter by span.op = auth, group by auth.provider, and see one vendor spiked latency last Thursday. Want to track it next time? Save the query and turn it into an alert. Done.
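If your login flow isn’t already emitting an auth span with the provider attached, here’s a minimal sketch using the JavaScript SDK’s Sentry.startSpan API. The auth.provider attribute name and the loginWithProvider helper are illustrative choices for this example, not something the SDK defines:

import * as Sentry from "@sentry/browser"; // or @sentry/node, depending on where login runs
import { loginWithProvider } from "./auth"; // hypothetical auth client, stands in for your own

async function login(provider) {
  // Wrap the call in a span with op "auth" so it shows up when you filter
  // by span.op = auth, and attach the provider as a queryable attribute.
  return Sentry.startSpan(
    { name: "user login", op: "auth", attributes: { "auth.provider": provider } },
    () => loginWithProvider(provider)
  );
}

With that attribute in place, the group-by above has something to pivot on.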

Marketing says the homepage feels slow overseas.

You add cdn.provider and image_url as span attributes. Group by cdn.provider. Boom: your fallback CDN is consistently dragging load times in Asia. Time to have a very fun conversation with your infra team.
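In case you’re wondering what that instrumentation looks like, here’s a minimal sketch using the SDK’s setAttribute API. The attribute names and the resolveCdnProvider helper are choices made for this example:

import * as Sentry from "@sentry/browser";

// Illustrative helper: derive a CDN name from the image host.
function resolveCdnProvider(imageUrl) {
  return new URL(imageUrl).hostname;
}

function tagImageSpan(imageUrl) {
  // Attach the CDN and image URL to the active span so Trace Explorer
  // can group and compare on them later.
  const span = Sentry.getActiveSpan();
  if (span) {
    span.setAttribute("cdn.provider", resolveCdnProvider(imageUrl));
    span.setAttribute("image_url", imageUrl);
  }
}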

Pro tips

  • Suggested Queries on the left sidebar help you get started fast—no config needed.

  • Use "See Full List" to browse all available span attributes and start experimenting.

  • Dig into individual spans - you might find custom instrumentation your team already added that unlocks more insights.

Your current setup is the tip of the iceberg

If you're already using Sentry for tracing, you can start querying immediately. No metric definition files. No agent magic. No storage tax. Everything’s calculated on the fly. And if you want to get deeper, you can attach custom attributes to spans like this:

const span = Sentry.getActiveSpan();
if (span) {
  // Add individual metrics
  span.setAttribute("database.rows_affected", 42);
  span.setAttribute("cache.hit_rate", 0.85);

  // Add multiple metrics at once
  span.setAttributes({
    "memory.heap_used": 1024000,
    "queue.length": 15,
    "processing.duration_ms": 127,
  });
}

Custom attributes unlock a whole new world of potential. We have a whole page dedicated to documenting examples like a file upload and processing pipeline, LLM monitoring, e-commerce transactions, and more on our Guides page.
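As one hedged example, LLM monitoring can be as simple as attaching the token count to the span that wraps the model call. The span name, op, and attribute names below are illustrative; see the Guides page for the conventions Sentry documents:

import * as Sentry from "@sentry/node";
import { callModel } from "./llm"; // hypothetical LLM client for this sketch

async function chat(prompt) {
  return Sentry.startSpan({ name: "llm.chat", op: "ai.run" }, async (span) => {
    const response = await callModel(prompt);

    // Illustrative attribute names; once sent, you can chart p95(span.duration)
    // or sum(token_usage) grouped by model in Trace Explorer.
    span.setAttributes({
      "model": response.model,
      "token_usage": response.usage.total_tokens,
    });

    return response;
  });
}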

TL;DR

Tracing doesn’t stop at debugging. With Trace Explorer and Span Metrics, you can finally use your span data like real analytics. Query anything, group by anything, visualize and alert in minutes, not months.

→ Get started with our Docs

→ Already have tracing? Check it out under Explore → Traces

→ Join the Sentry Discord to chat with the team

→ Start a free trial if you’re not already using Sentry
