Watching everything is watching nothing: Sampling strategy for Sentry
TL;DR - Blanket sampling rates can be wasteful and inefficient. With custom sampling logic, you can capture 100% of the signal with less of the noise and fine-tune how you monitor your applications.
In a high-traffic production environment, telemetry is your most direct link to the user experience. Every Span, Trace, Log, and Replay sent to Sentry gives you high-fidelity visibility into what is actually happening in production.
But to extract the most value from that visibility, you have to know how to separate signal from noise. If you treat a routine page load on a stable legacy route with the same intensity as a critical experience, like a checkout flow or a brand-new feature launch, you aren't optimizing the data you collect.
To build an observability strategy that survives scale and doesn’t break quotas, you need to move past "blanket sampling." You want to prioritize high-resolution data where things are critical or changing fast, and optimize your setup where the system is stable.
Why not sample 100% of everything?
You can!
And if your app is small or brand new, that might actually be the right plan. But as you scale, "100% of everything" usually stops being a practical option, for a couple reasons:
Signal-to-noise: Telemetry data is most useful when you can tell what happened at a glance. Parsing through 1 million “user clicked button” spans to find the 100 times users hit issues during checkout isn’t efficient for you, for your queries, or for any LLMs consuming the same information.
Network footprint: The SDK is highly optimized, but instrumenting every interaction and every function call adds up. By sampling only what you need, performance stays high while you still collect valuable information.
The basic dials: Static sample rates
When you initialize Sentry, you get four main options for adjusting how much data you send. Understanding how these interact is the first step:
// Sentry.client.config.ts
import * as Sentry from "@sentry/nextjs";
Sentry.init({
dsn: process.env.SENTRY_DSN,
// 1. Errors: We want to capture 100% of errors.
// 1.0 is the default, and this is usually omitted.
sampleRate: 1.0,
// 2. Tracing: A uniform slice of your traffic.
tracesSampleRate: process.env.NODE_ENV === "development" ? 1.0 : 0.05,
// 3. Full Replays: Great for UX audits.
replaysSessionSampleRate: 0.01,
// 4. Error Replays: The "Flight Recorder" that triggers on crashes.
replaysOnErrorSampleRate: 1.0,
});
sampleRate: This is for errors. We almost always leave this at 1.0 because if something breaks, we want to know every single time.
tracesSampleRate: This provides a cross-section of your traffic. It’s your primary lever for managing the volume of performance data.
replaysSessionSampleRate: This records the whole session from the start. It's high-fidelity, so you usually only need a small percentage to see how the "average" user navigates.
replaysOnErrorSampleRate: This is a buffer. It only sends the replay if an error occurs, capturing the 60 seconds of activity leading up to the error.
Precision control: The tracesSampler
Traces are potentially the most important signal we have, responsible for monitoring performance, surfacing errors, and connecting all of our data together. In an ideal world, you’d sample 100% of traces. However, in very high-traffic applications, you don’t need to collect every trace, as long as you trace strategically.
Rather than a blanket percentage, Sentry lets you provide a tracesSampler function in place of tracesSampleRate. This allows you to make a sampling decision in real time based on the context of each request.
What’s in the sampling context?
When a span starts, the sampler receives a samplingContext object. This data is available automatically, and you can use it, along with anything else you pass in, to decide whether to sample.
interface SamplingContext {
name: string; // e.g. "GET /api/v1/checkout"
attributes: SpanAttributes | undefined; // feature flags, user tiers, etc.
parentSampled: boolean | undefined; // Did the upstream service sample this?
parentSampleRate: number | undefined; // Sample rate from incoming trace
inheritOrSampleWith: (fallbackRate: number) => number; // Inherit parent decision or fallback
}
In your tracesSampler function, destructure the values you need from the sampling context (or use the context object directly) and return a sample rate between 0 and 1.
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampler: ({ name, attributes }) => {
    // Sample 50% of login transactions
    if (name.includes("/login") || attributes?.flow === "login") {
      return 0.5;
    }
    // Fall back to a low default rate for everything else
    return 0.05;
  },
});
You can find a few useful example sampling functions in our docs.
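For instance, here’s a minimal sketch of a sampler that drops noisy health-check traffic entirely, keeps every trace for a critical checkout flow, and samples a small slice of everything else (the route names here are illustrative):
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampler: ({ name }) => {
    // Drop health checks and other known noise entirely
    if (name.includes("/health") || name.includes("/ping")) {
      return 0;
    }
    // Keep every trace for the critical checkout flow
    if (name.includes("/checkout")) {
      return 1.0;
    }
    // A small baseline slice of everything else
    return 0.05;
  },
});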
Staying in sync with inheritOrSampleWith
If the backend has already decided to sample a trace, the frontend should usually follow suit. You can use the inheritOrSampleWith utility to handle this: destructure it from the samplingContext and call it to inherit the parent's sampling decision, or fall back to the rate you pass in.
Sentry.init({
tracesSampler: ({ name, attributes, inheritOrSampleWith }) => {
// New features: Keep 100% of the data for a new UI launch.
if (attributes?.['feature_flag.new_v2_ui'] === true) {
return 1.0;
}
// Fallback: Respect the backend's choice, or default to 5%.
return inheritOrSampleWith(0.05);
},
});
Smart sampling with Session Replay
Traces tell you what happened and where (e.g., "The database was slow"). Replays show you how it happened (e.g., "The user clicked the button five times because the loading spinner didn't show up").
By default, Session Replay records a percentage of all user sessions, based on replaysSessionSampleRate. There is also a separate error sample rate, replaysOnErrorSampleRate, which records into a buffer and only sends the replay if the user encounters an error; we typically leave that one at 1.0 or a very high percentage.
Sentry.init({
dsn: process.env.SENTRY_DSN,
// Define how likely Replay events are sampled.
replaysSessionSampleRate: 0.1,
// Define how likely Replay events are sampled when an error occurs.
replaysOnErrorSampleRate: 1.0,
});
While a typical Sentry plan comes with 5 million spans, it starts with only 50 Session Replays (though you can add more of both). Not only do we have plan quotas to think about, but recording a Session Replay is also a little more taxing on the user's browser.
While setting a sample rate is a good start, in a production application we may want to be more intentional about when we record - capturing certain parts of our app with greater frequency, like new features or steps in the critical experience flow.
While there isn't a replaysSampler yet, it is possible to manually initiate and manage replays using replay.start().
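At its simplest, that can look like the following sketch: grab the Replay instance and start recording when a user enters a critical flow (the checkout path check is just an illustration, and it mirrors the stop-then-start pattern used in the hook below):
const replay = Sentry.getReplay();

if (replay && window.location.pathname.startsWith("/checkout")) {
  // Stop any buffered error-replay recording, then start a full session replay
  replay.stop();
  replay.start();
}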
Custom useSessionReplay Hook
Session replays can be configured with an overall sample rate, but we can also choose to manually instrument Session Replay captures ourselves.
Let’s take a look at how we could build a custom React Hook to implement in our React/Next.js projects that will help us dynamically control how often we sample Session Replays per page.
First, disable automatic session sampling by setting replaysSessionSampleRate to 0, since we'll be implementing our own sampling manually.
Sentry.init({
dsn: process.env.SENTRY_DSN,
// Disable - Using useSessionReplay instead
replaysSessionSampleRate: 0,
// Define how likely Replay events are sampled when an error occurs.
replaysOnErrorSampleRate: 1.0,
});
Then, add this hook wherever you keep hooks in your React app. We’ll use it in our app’s pages to define where we want to sample and how often, and we’ll even be able to inject some additional context.
// lib/use-session-replay.ts
"use client";
import { useEffect } from "react";
import * as Sentry from "@sentry/nextjs";
export interface UseSessionReplayOptions {
enabled?: boolean;
sampleRate?: number;
tags?: Record<string, string>;
}
export function useSessionReplay(options: UseSessionReplayOptions = {}) {
const { enabled = true, sampleRate = 1.0, tags } = options;
useEffect(() => {
if (!enabled) return;
// Apply sample rate - randomly decide if this session should be recorded
const shouldRecord = Math.random() < sampleRate;
if (!shouldRecord) return;
const replay = Sentry.getReplay?.();
if (!replay) {
console.warn("Sentry replay integration not found. Ensure replayIntegration() is configured.");
return;
}
// Add any custom tags before starting
if (tags) {
Object.entries(tags).forEach(([key, value]) => {
Sentry.setTag(key, value);
});
}
// Stop any existing buffered session to avoid including data from other pages
// Then start a fresh recording that only captures post-hydration events
replay.stop();
replay.start();
// Stop and flush when component unmounts, then restart buffer mode
// to ensure error replay continues working on subsequent pages
return () => {
replay.stop();
// Restart buffer mode for error replay capture
// This ensures errors on other pages are still captured
replay.startBuffering();
};
}, [enabled, sampleRate, tags]);
}
This will let you dial in session replays where you need them most, and even add important additional context.
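For example, a checkout page could opt into heavier replay sampling and tag the recording with the flow it came from. This is a sketch (the page and import path are hypothetical), with the options hoisted to a stable constant so the hook's effect doesn't re-run on every render:
// app/checkout/page.tsx
"use client";

import { useSessionReplay, type UseSessionReplayOptions } from "@/lib/use-session-replay";

// A stable options object, so the effect inside useSessionReplay isn't re-triggered each render
const replayOptions: UseSessionReplayOptions = {
  sampleRate: 0.5, // record half of all checkout sessions
  tags: { flow: "checkout" },
};

export default function CheckoutPage() {
  useSessionReplay(replayOptions);

  return <main>{/* checkout UI */}</main>;
}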
It’s worth noting this method isn’t entirely free of downsides. Recording only starts after the component mounts, so you may miss hydration issues that happen in the earliest part of the render. Even if we can’t see those moments in the replay, they’re still captured in Web Vitals.
What about Logs?
Logs are a recent addition to Sentry. If you enable Logs (enableLogs) in your Sentry init config, you can start sending logs to Sentry – either via the Sentry logger from the SDK, or an integration like the Console Logging Integration.
Sentry receives 100% of logs by default if the integration is enabled. However, in high-traffic environments, you should filter logs at the source, using Sentry’s beforeSendLog hook or another logging tool like LogTape with structured logging.
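As a rough sketch (double-check the exact option names against your SDK version), enabling logs and dropping low-severity entries at the source might look like this:
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  // Turn on Sentry Logs
  enableLogs: true,
  // Filter at the source: drop debug-level logs before they leave the app
  beforeSendLog: (log) => {
    if (log.level === "debug") {
      return null; // returning null drops the log
    }
    return log;
  },
});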
We recently talked about using LogTape with the Sentry sink to elevate how we approach logging in our production apps.
import { configure, getConsoleSink } from "@logtape/logtape";
import { getSentrySink } from "@logtape/sentry";
await configure({
sinks: {
console: getConsoleSink(),
sentry: getSentrySink()
},
loggers: [
{ category: "app", lowestLevel: "debug", sinks: ["console", "sentry"] }
],
});
At the most basic level, we can limit which logs are sent based on severity level. We may instrument our app with debug logs, but at scale these can be noisy and eat into our quotas. Instead, we can filter out the lowest levels of logs by configuring lowestLevel.
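For instance, here’s a small sketch (same sinks as above) that raises lowestLevel in production so debug logs never leave local development:
import { configure, getConsoleSink } from "@logtape/logtape";
import { getSentrySink } from "@logtape/sentry";

await configure({
  sinks: {
    console: getConsoleSink(),
    sentry: getSentrySink(),
  },
  loggers: [
    {
      category: "app",
      // Drop debug logs entirely in production; keep them while developing
      lowestLevel: process.env.NODE_ENV === "production" ? "info" : "debug",
      sinks: ["console", "sentry"],
    },
  ],
});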
With well-planned, structured logging, we can easily filter on specific data attributes. This is useful not only for limiting and processing what data we send (and where) for noise to signal reasons, but also for data scrubbing.
import { getLogger } from "@logtape/logtape";

const logger = getLogger(['app', 'api']);
// This stays local (low signal)
logger.debug('User opened menu');
// This is sent to Sentry (explicitly marked)
logger.info('Payment processed', {
telemetry: true,
amount: 50,
currency: 'USD'
});
// This is sent to Sentry (automatic due to error level)
logger.error('Database connection failed');
Using LogTape filters, we can implement our own filtering logic, including our own sampling logic if we wish.
import { configure, getConsoleSink, type LogRecord } from "@logtape/logtape";
import { getSentrySink } from "@logtape/sentry";

// 1. High signal: Always send errors or explicit telemetry
const isHighSignal = (record: LogRecord) =>
  record.properties.telemetry === true ||
  record.level === "error" ||
  record.level === "fatal";

// 2. Random sampling: Keep a 5% slice of regular traffic for context
const sampleRate = (_record: LogRecord) => Math.random() < 0.05;

// 3. Env guard: Prevent dev noise in production
const noDev = (_record: LogRecord) => process.env.NODE_ENV === "production";

await configure({
  sinks: {
    console: getConsoleSink(),
    sentry: getSentrySink(),
  },
  filters: {
    // Use a logical combination: (isHighSignal OR sampleRate) AND noDev
    highSignalOrSampled: (record: LogRecord) =>
      (isHighSignal(record) || sampleRate(record)) && noDev(record),
  },
  loggers: [
    {
      category: ["app", "api"],
      lowestLevel: "debug",
      sinks: ["console", "sentry"],
      filters: ["highSignalOrSampled"],
    },
  ],
});
Read more about filtering and querying logs using LogTape.
Summary: Sampling telemetry at scale
Setting up Sentry for the first time is simple: instrument your SDK, set your sample rates to 100%, and watch all your errors, performance data, logs, session replays, and more come in. But as traffic ramps up and monitoring priorities change, you'll want to separate the signal from the noise and focus attention where it is needed.
Traces: Traces map and connect events in your apps. While you typically want to record most if not all of your traces, with enough traffic it makes sense to prioritize high-impact targets.
Session Replays: Replays are your highest-fidelity tool, but they carry the most weight. Instead of a blanket percentage, use replay.start() to trigger recordings only during critical user flows, like at checkout, or where new features are enabled and the visual context is highly valuable.
Logs: Use beforeSendLog to filter by severity or custom metadata. Stop sending "Info" logs for every routine event; adopt structured logging so that when a log does hit Sentry, it’s already formatted to be searchable and high-signal.
New to Sentry? Sign up to get started, or check out our quickstart guides for Logs and Session Replay.

