Next.js Observability Gaps & How to Close Them

Next.js runs code across three distinct environments: client (browser), server (Node.js), and edge runtimes. The Sentry wizard creates separate initialization files for each runtime:

  • instrumentation-client.ts for browser code
  • sentry.server.config.ts for Node.js backend
  • sentry.edge.config.ts for edge middleware

Running the setup command generates these files automatically:

npx @sentry/wizard@latest -i nextjs

The wizard also creates a global error boundary component, wraps your Next.js configuration with withSentryConfig, and handles source map uploads for readable stack traces.
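The wrapped configuration looks roughly like this sketch; the org and project values are placeholders for whatever the wizard writes into your project:

```typescript
// next.config.mjs — sketch of the wrapper the wizard adds
import { withSentryConfig } from "@sentry/nextjs";

const nextConfig = {
  // your existing Next.js configuration
};

export default withSentryConfig(nextConfig, {
  org: "your-org",         // placeholder
  project: "your-project", // placeholder
  // Upload a wider set of source maps so production stack traces are readable
  widenClientFileUpload: true,
});
```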

Key configuration considerations:

  • Set tracesSampleRate to 1.0 during development but reduce to 10-20% in production to manage quota consumption
  • The sendDefaultPii option attaches user IP addresses to sessions and events, enabling better user correlation
  • Edge runtime configuration can be simplified if middleware only handles routing, reducing unnecessary trace data
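Under that last assumption, a pared-down edge config might look like the following sketch:

```typescript
// sentry.edge.config.ts — minimal sketch for middleware that only routes
import * as Sentry from "@sentry/nextjs";

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  // Routing-only middleware rarely needs traces; drop sampling to avoid noise
  tracesSampleRate: 0,
});
```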

Always invoke Sentry.setUser() after authentication to ensure user context flows across all observability signals.
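A minimal sketch of that pattern, assuming a `user` object from your own auth layer (the field names here are illustrative):

```typescript
import * as Sentry from "@sentry/nextjs";

// Hypothetical result of your authentication flow
const user = { id: "user_123", email: "ada@example.com" };

// Attach user context so errors, traces, and replays share it
Sentry.setUser({
  id: user.id,
  email: user.email,
});

// On logout, clear the context so later events aren't misattributed
Sentry.setUser(null);
```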

Hydration Errors: The Problem

Hydration occurs when React attaches event listeners to server-rendered HTML. Mismatches between server and client renders trigger hydration errors—a frequent source of production bugs that provide minimal debugging information.

A common scenario: components reading from browser APIs like localStorage. The server renders with default values while the client renders with stored preferences, creating a mismatch React cannot reconcile.

Production error messages offer no useful context—typically just minified React decoder URLs and chunk references.

The HTML Diff Solution

Sentry provides a side-by-side diff showing server-rendered HTML (marked in red) versus client-rendered markup (marked in green). This visual comparison reveals exactly which DOM nodes diverged between renders.

[Image: Sentry hydration error diff with Session Replay]

When Session Replay is enabled, Sentry automatically groups hydration errors into issues without consuming your error quota, since they’re derived from replay data.

Typical fix pattern: Defer browser API reads to useEffect hooks so initial renders match between server and client, then apply stored preferences after hydration completes.
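A sketch of that fix, using a hypothetical theme toggle: both server and client render the default on the first pass, and the stored preference is applied only after hydration:

```typescript
"use client";

import { useEffect, useState } from "react";

export function ThemeToggle() {
  // Default value matches what the server rendered
  const [theme, setTheme] = useState("light");

  useEffect(() => {
    // localStorage is browser-only; this runs after hydration completes,
    // so the initial markup on server and client is identical
    const stored = localStorage.getItem("theme");
    if (stored) setTheme(stored);
  }, []);

  return <button>{theme === "light" ? "Light" : "Dark"}</button>;
}
```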

Server Actions: Manual Instrumentation Required

Server actions implement form handling and mutations in Next.js. Unlike most operations, they don’t emit OpenTelemetry spans, requiring explicit instrumentation.

The reason: server actions aren’t exposed through standard instrumentation hooks due to how Turbopack bundles them. Building a compiler to auto-instrument them would be unreasonably complex.

Wrapping Server Actions

Use Sentry.withServerActionInstrumentation() to instrument each action:

"use server";

import * as Sentry from "@sentry/nextjs";
import { headers } from "next/headers";

export async function login(formData: FormData) {
  return Sentry.withServerActionInstrumentation(
    "login",
    {
      headers: await headers(),
      formData,
      recordResponse: true,
    },
    async () => {
      const result = await authenticateUser(formData);
      return result;
    },
  );
}

The headers parameter enables distributed tracing by reading trace IDs and baggage metadata, connecting client-initiated requests with server-side execution into one continuous trace.
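On the client side, nothing extra is required; a plain form invoking the instrumented action is enough for the SDK to propagate trace headers. A sketch, assuming the action above lives in `./actions`:

```typescript
// app/login/page.tsx — hypothetical form invoking the instrumented action
import { login } from "./actions";

export default function LoginPage() {
  return (
    <form action={login}>
      <input name="email" type="email" required />
      <input name="password" type="password" required />
      <button type="submit">Log in</button>
    </form>
  );
}
```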

Production Error Context

Next.js intentionally strips error details from server-side failures in production builds to prevent sensitive data leakage. The client receives generic messages like “An error occurred in a server component render.”

Sentry captures the complete server-side exception with full stack traces, providing the debugging context that sanitized client messages cannot.
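The global error boundary the wizard generates follows roughly this shape; it reports the client-visible error while the server-side SDK separately captures the full exception:

```typescript
"use client";

// app/global-error.tsx — sketch of the wizard-generated boundary
import * as Sentry from "@sentry/nextjs";
import { useEffect } from "react";

export default function GlobalError({ error }: { error: Error }) {
  useEffect(() => {
    // Report the error; server-side context is captured by the Node SDK
    Sentry.captureException(error);
  }, [error]);

  return (
    <html>
      <body>
        <h2>Something went wrong</h2>
      </body>
    </html>
  );
}
```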

Logs and Metrics: Strategic Choices

Different telemetry types serve different purposes:

  • Errors: Indicate broken functionality requiring fixes; trigger alerts and issue creation
  • Logs: Contextual breadcrumbs attached to traces; queryable and high-cardinality
  • Metrics: Counters, durations, gauges for dashboards and aggregate pattern detection

Enable logs by setting enableLogs: true in each Sentry init file:

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  tracesSampleRate: 0.1,
  enableLogs: true,
});

Structured logging captures contextual information:

import * as Sentry from "@sentry/nextjs";

Sentry.logger.info("User added talk to schedule", {
  userId: session.user.id,
  talkId: talk.id,
  action: "add_to_schedule",
});

Critical distinction: logs and metrics are not sampled. While traces might be sampled at 10%, you receive 100% of your log and metric data, making them ideal for signals that cannot tolerate gaps.

Database Query Visibility

ORMs like Drizzle abstract database operations, making queries invisible to distributed tracing by default. Traces show that a server action took 850ms without revealing why.

Adding a database client integration surfaces each query as a span with actual SQL statements. For Turso + libSQL, use the libsqlIntegration:

import { libsqlIntegration } from "@sentry/node";

Sentry.init({
  integrations: [libsqlIntegration()],
});

This enables Query Insights, automatically surfacing N+1 query patterns and identifying slow calls.
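For an N+1 pattern that Query Insights surfaces, the usual fix is batching the per-row lookups into a single query. A Drizzle sketch, assuming a hypothetical `talks` table and `db` instance:

```typescript
import { inArray } from "drizzle-orm";
// Hypothetical schema and client from your own project
import { db } from "./db";
import { talks } from "./schema";

const talkIds = ["t1", "t2", "t3"];

// N+1: one query (and one span) per id
// for (const id of talkIds) {
//   await db.select().from(talks).where(eq(talks.id, id));
// }

// Fix: one batched query, one span in the trace
const rows = await db.select().from(talks).where(inArray(talks.id, talkIds));
```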

AI Token Usage and Costs

The Vercel AI SDK integration (enabled by default in Node runtime) tracks AI model usage when you set experimental_telemetry on function calls:

import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const result = await generateObject({
  model: openai("gpt-4"),
  schema: z.object({...}),
  experimental_telemetry: {
    isEnabled: true,
  },
});

Sentry captures per-model token usage, cost breakdowns, and tool call traces. When multiple AI agents use different models, each receives its own named span, clarifying whether slowness originates from model latency or downstream queries.

This article is based on a live workshop; the full livestream is available on YouTube.

FAQs

Can Sentry track AI token usage and costs in a Next.js app?

Yes, through the Vercel AI SDK integration. Enable experimental_telemetry on AI function calls to capture per-model token usage, cost data, and tool traces. With multiple agents using different models, each gets its own named span showing whether delays come from the model or database queries.

Does Sentry work across all three Next.js runtimes?

Yes, but each runtime requires its own config file. The wizard creates all three automatically. Edge configuration can be simplified if middleware only handles routing to reduce noise.

Do Sentry logs get sampled like traces?

No. Logs and metrics receive 100% of your data regardless of tracesSampleRate settings. Enable them by setting enableLogs: true in init files.

Are database queries visible in Sentry traces with Drizzle or other ORMs?

Not by default. ORMs abstract SQL, so traces show operation duration without revealing queries. Adding database client integrations (like libsqlIntegration for Turso) surfaces each query as a span with SQL statements and enables Query Insights.

Why are Next.js hydration errors so hard to debug in production?

Minified React error messages point to decoder URLs providing almost no information. Sentry's HTML diff tool shows before/after DOM comparisons, revealing exactly which elements or attributes caused mismatches. Session Replay integration creates automatic grouped issues without affecting error quota.

Does Sentry work with Next.js App Router and server actions?

Yes, but server actions require manual instrumentation with Sentry.withServerActionInstrumentation(). Pass the headers() value to connect client and server traces. Without this, you get two disconnected traces instead of one unified view.
