Better Auth

Performance & Composition

Watch session resolution time, enable session caching, wire the standalone Nitro hook, and combine with the AI SDK integration.

getSession() costs a database query on every request. The integration measures it for you and exposes the timing as auth.resolvedIn so you can spot regressions before users do.

Watch session resolution time

Wide Event — slow session resolution
{
  "auth": { "resolvedIn": 245, "identified": true },
  "duration": "312ms"
}

When auth.resolvedIn is high relative to duration, your auth backend is the bottleneck.

Tune for high traffic

  1. Enable cookie caching in Better Auth so session lookups don't hit the database every time.
  2. Use exclude on createAuthMiddleware to skip public routes that don't need user context.
  3. Use include to limit resolution to specific route patterns instead of the entire app.

A common P95 target after caching: auth.resolvedIn < 5ms.
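Step 1 above maps to Better Auth's cookieCache session option. A minimal sketch of enabling it (the maxAge value is illustrative; choose one that matches how fresh your sessions need to be):

```typescript
// lib/auth.ts: Better Auth instance with cookie caching enabled
import { betterAuth } from 'better-auth'

export const auth = betterAuth({
  session: {
    cookieCache: {
      enabled: true,  // serve session data from a signed cookie
      maxAge: 5 * 60, // re-validate against the database every 5 minutes
    },
  },
})
```

With the cookie cache warm, getSession() skips the database round trip, which is what drives auth.resolvedIn down toward the single-digit-millisecond target.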

Standalone Nitro

createAuthIdentifier is a factory that creates a Nitro request hook. It is designed for standalone Nitro apps, where the evlog Nitro module handles hook ordering.

For Nuxt, use createAuthMiddleware in a server middleware instead: Nitro plugin hook ordering can mean the logger is not yet available when the request hook runs.
server/plugins/evlog-auth.ts
import { createAuthIdentifier } from 'evlog/better-auth'
import { auth } from './lib/auth'

export default defineNitroPlugin((nitroApp) => {
  nitroApp.hooks.hook('request', createAuthIdentifier(auth, {
    exclude: ['/api/auth/**', '/api/public/**'],
  }))
})

It accepts the same options as createAuthMiddleware.
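For the Nuxt path mentioned above, here is a sketch of the equivalent server middleware, assuming createAuthMiddleware(auth, options) returns an H3 event handler and takes the same include/exclude options as createAuthIdentifier (the route patterns are illustrative):

```typescript
// server/middleware/evlog-auth.ts (Nuxt server middleware)
import { createAuthMiddleware } from 'evlog/better-auth'
import { auth } from '../lib/auth'

export default createAuthMiddleware(auth, {
  // resolve sessions only for API routes...
  include: ['/api/**'],
  // ...but skip auth endpoints and public routes that need no user context
  exclude: ['/api/auth/**', '/api/public/**'],
})
```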

Combine with the AI SDK

When you also use evlog/ai, your wide events include both user identity and AI metrics in a single event:

Wide Event — AI + User
{
  "method": "POST",
  "path": "/api/chat",
  "status": 200,
  "duration": "4.5s",
  "userId": "QBX9tPjJQExWawAbNll75",
  "user": {
    "id": "QBX9tPjJQExWawAbNll75",
    "name": "Hugo Richard",
    "email": "hugo@example.com"
  },
  "auth": { "resolvedIn": 8, "identified": true },
  "ai": {
    "calls": 1,
    "model": "claude-sonnet-4.6",
    "provider": "anthropic",
    "inputTokens": 3312,
    "outputTokens": 814,
    "totalTokens": 4126,
    "msToFirstChunk": 234,
    "msToFinish": 4500,
    "tokensPerSecond": 180
  }
}

This is the power of wide events — one event per request, all context in one place: who made the request, what they did, how the AI responded, and how it performed.