Core Concepts

Performance

evlog adds ~3µs per request. Faster than pino, consola, and winston in most scenarios while emitting richer, more useful events.

evlog adds ~3µs of overhead per request; that's 0.003ms, orders of magnitude below the cost of any HTTP framework hop or database call. Performance is tracked on every pull request via CodSpeed.

evlog vs alternatives

All benchmarks run with JSON output to no-op destinations. pino writes to /dev/null (sync), winston writes to a no-op stream, consola uses a no-op reporter, evlog uses silent mode.

Results

[Interactive benchmark widget: wide event lifecycle throughput (create + 3× set + emit), showing live ops/s for evlog, pino, consola, and winston.]
consola has no equivalent (no wide-event API) — see the table below for fair scenarios.
Why: 4 lines → 1 event

Traditional logger (4 separate lines):

child.info({ user: { id, plan } }, 'user context')
child.info({ cart: { items, total } }, 'cart context')
child.info({ payment: { method } }, 'payment context')
child.info({ status: 200 }, 'request complete')

evlog wide event (1 line):

{
  user:    { id, plan },
  cart:    { items, total },
  payment: { method },
  status:  200
}

75% less data on the wire · 1 row to query
vs pino: 7.7× faster · vs winston: 14.1× faster · CI tracking: CodSpeed, per PR

| Scenario | evlog | pino | consola | winston |
| --- | --- | --- | --- | --- |
| Simple string log | 1.83M ops/s | 1.09M | 2.79M | 1.20M |
| Structured (5 fields) | 1.64M ops/s | 716.1K | 1.71M | 431.6K |
| Deep nested log | 1.55M ops/s | 464.9K | 1.01M | 164.0K |
| Child / scoped logger | 1.70M ops/s | 845.0K | 280.4K | 430.0K |
| Wide event lifecycle | 1.58M ops/s | 205.8K | n/a | 111.9K |
| Burst (100 logs) | 17.8K ops/s | 10.3K | 39.4K | 7.5K |
| Logger creation | 16.85M ops/s | 7.50M | 310.3K | 5.38M |

evlog wins 4 out of 7 head-to-head comparisons, and the wins that matter most are decisive: 7.7× faster than pino in the wide event pattern, 2.3× faster logger creation, and 3.3× faster deep nested logging. consola edges ahead on simple strings and burst (it uses a no-op reporter with no serialization), but evlog produces a single correlated event per request where traditional loggers emit N separate lines.

Why this matters: in the wide event pattern (one event per request, the real-world API shape), evlog is 7.7x faster than pino and 14.1x faster than winston while sending 75% less data to your log drain and giving you one queryable event instead of 4 disconnected lines. The 7.7x is not a brute-force win — pino doesn't try to accumulate context, so the comparison reflects an architectural difference, not a fairness issue. See When evlog might not win for the honest gaps.

What is the "wide event lifecycle"?

This benchmark simulates a real API request:

const log = createLogger({ method: 'POST', path: '/api/checkout', requestId: 'req_abc' })
log.set({ user: { id: 'usr_123', plan: 'pro' } })
log.set({ cart: { items: 3, total: 9999 } })
log.set({ payment: { method: 'card', last4: '4242' } })
log.emit({ status: 200 })

Same CPU cost, but evlog gives you everything in one place.

Why is evlog faster?

The numbers above aren't magic; they come from deliberate architectural choices:

In-place mutations, not copies. log.set() writes directly into the context object via a recursive mergeInto function. Other loggers clone objects on every call (object spread, Object.assign). evlog never allocates intermediate objects during context accumulation.
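
A minimal sketch of the idea (the real mergeInto may differ): recurse into nested plain objects and write into the target directly, allocating nothing along the way.

```javascript
// Sketch of an in-place recursive merge; evlog's actual mergeInto may differ.
function mergeInto(target, source) {
  for (const key of Object.keys(source)) {
    const value = source[key]
    const existing = target[key]
    const bothPlainObjects =
      value && typeof value === 'object' && !Array.isArray(value) &&
      existing && typeof existing === 'object' && !Array.isArray(existing)
    if (bothPlainObjects) {
      mergeInto(existing, value) // descend: no clone, no spread
    } else {
      target[key] = value // primitives, arrays, and new keys: direct write
    }
  }
  return target
}
```

With this shape, a call like log.set({ user: { plan: 'pro' } }) can extend an existing user object in place instead of copying it.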

No serialization until drain. Context stays as plain JavaScript objects throughout the request lifecycle. JSON.stringify runs exactly once, at emit time. Traditional loggers serialize on every .info() call; that's 4× serialization for 4 log lines.

Lazy allocation. Timestamps, sampling context, and override objects are only created when actually needed. If tail sampling is disabled (the common case), its context object is never allocated. The Date instance used for ISO timestamps is reused across calls.
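
For the reused Date, a plausible shape (assumed, not evlog's exact code) is:

```javascript
// One shared Date instance, mutated per call instead of re-allocated.
// Assumed shape; evlog's internals may differ.
const sharedDate = new Date()

function isoNow() {
  sharedDate.setTime(Date.now()) // overwrite in place
  return sharedDate.toISOString()
}
```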

One event, not N lines. For a typical request, pino emits 4+ JSON lines that all need serializing, transporting, and indexing. evlog emits one. That's 75% less work for your log drain, fewer bytes on the wire, and one row to query instead of four.
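
The byte savings are easy to verify with a back-of-envelope comparison (field names below are illustrative, not real evlog output): four separate lines each repeat the base request fields, while the wide event pays for them once.

```javascript
// Compare serialized size: 4 separate log lines vs 1 combined wide event.
// Field names are illustrative, not actual evlog output.
const base = { level: 'info', time: '2026-01-01T00:00:00.000Z', requestId: 'req_abc' }

const lines = [
  { ...base, msg: 'user context', user: { id: 'usr_123', plan: 'pro' } },
  { ...base, msg: 'cart context', cart: { items: 3, total: 9999 } },
  { ...base, msg: 'payment context', payment: { method: 'card' } },
  { ...base, msg: 'request complete', status: 200 },
]
const wide = {
  ...base,
  user: { id: 'usr_123', plan: 'pro' },
  cart: { items: 3, total: 9999 },
  payment: { method: 'card' },
  status: 200,
}

const lineBytes = lines.reduce((n, line) => n + JSON.stringify(line).length + 1, 0) // +1 newline each
const wideBytes = JSON.stringify(wide).length + 1
console.log(`4 lines: ${lineBytes} B · wide event: ${wideBytes} B`)
```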

RegExp caching. Glob patterns (used in sampling and route matching) are compiled once and cached. Repeated evaluations hit the cache instead of recompiling.
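
A sketch of what such a cache can look like (assumed shape, not evlog's actual implementation):

```javascript
// Glob patterns compile to a RegExp once; later lookups hit the cache.
// Assumed shape, not evlog's actual implementation.
const patternCache = new Map()

function globToRegExp(glob) {
  let regex = patternCache.get(glob)
  if (!regex) {
    // Escape regex metacharacters, then expand '*' into a wildcard.
    const escaped = glob.replace(/[.+?^${}()|[\]\\]/g, '\\$&')
    regex = new RegExp('^' + escaped.replace(/\*/g, '.*') + '$')
    patternCache.set(glob, regex)
  }
  return regex
}

function matchesRoute(pattern, path) {
  return globToRegExp(pattern).test(path)
}
```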

When evlog might not win

The benchmarks above measure CPU + serialization cost on the main thread, with no real I/O. That's the standard setup pino, winston, and logtape use for their own benchmarks — but it leaves out a few scenarios where another logger can edge ahead. In the interest of fairness, here they are:

Fire-and-forget hot paths with pino-via-worker-thread. In production, pino is typically configured with a worker-thread transport (pino-pretty, pino-loki, vendor-specific transports). The serialization and I/O move off the main thread entirely. For a workload that emits hundreds of thousands of log.info('foo') lines per second with no context accumulation, pino-via-worker can hit ~2-3M ops/s on the main thread because it's just queueing. We can't benchmark that mode fairly inside a single-threaded vitest process, so it's not in our table — but it's a real scenario where pino is faster.

CLI / pretty-only output without serialization. consola's no-op reporter mode in our benchmarks (level: 4, reporters: [{ log: () => {} }]) skips JSON serialization entirely. That's realistic if you're using consola for a CLI with terminal-only output, but it's why consola wins "simple string" and "burst" — it's not doing the same work. evlog and pino both serialize to JSON; consola in those benchmarks does not. If your use case is "pretty terminal output, no shipping logs anywhere", consola is genuinely lighter.

Single log.info calls, no context accumulation. On a bare pino.info('hello') vs evlog.info('hello'), evlog leads but the gap is modest (1.83M vs 1.09M ops/s in our run, and it closes further if pino runs in async mode). evlog's ~7.7× advantage shows up specifically when you'd otherwise emit N separate lines for one logical operation. If you genuinely log one line per call and don't accumulate context, the speed delta is much smaller — pick evlog for the API ergonomics (log.set + structured errors), not raw throughput.

Wall-clock variance is real. Vitest bench numbers shift ±5-10% between runs on the same machine (thermal throttling, GC, other processes). The numbers above come from a single run on a MacBook; CI tracks regressions via CodSpeed's CPU-instruction counting (deterministic, ±0.5% noise floor), but the absolute ops/s values on this page are a wall-clock snapshot, not a guaranteed floor.

The takeaway: the wins are real for the wide event pattern, but if your stack is "pure fire-and-forget pino with a worker transport", that's the one place we don't claim to beat.

Real-world overhead

For a typical API request:

| Component | Cost |
| --- | --- |
| Logger creation | 52ns |
| 3× set() calls | 105ns |
| emit() | 588ns |
| Sampling | 22ns |
| Enricher pipeline | 2.14µs |
| Total | ~2.9µs |

For context, a database query takes 1-50ms and an HTTP call 10-500ms. evlog's overhead is invisible at that scale.

Bundle size

Every entry point is tree-shakeable. You only pay for what you import.

| Entry | Gzip |
| --- | --- |
| core (evlog) | 510 B |
| toolkit (evlog/toolkit) | 720 B |
| utils | 1.58 kB |
| error | 1.46 kB |
| enrichers | 1.99 kB |
| pipeline | 1.35 kB |
| http | 1.22 kB |
| browser | 289 B |
| workers | 1.30 kB |
| client | 128 B |

A typical Node.js bundle (initLogger + createLogger) measures ~6.3 kB gzip end-to-end after tree-shaking; adding createRequestLogger, createError, parseError, and useLogger brings the bundle to ~7.2 kB gzip. Adapters and framework integrations sit on top: Hono is 617 B, Express 734 B, Axiom 1.48 kB. Bundle size is tracked on every PR and compared against the main baseline.

Detailed benchmarks

Logger creation

| Operation | ops/sec | Mean |
| --- | --- | --- |
| createLogger() (no context) | 19.20M | 52ns |
| createLogger() (shallow context) | 18.74M | 53ns |
| createLogger() (nested context) | 17.70M | 56ns |
| createRequestLogger() (method + path) | 16.91M | 59ns |
| createRequestLogger() (method + path + requestId) | 12.67M | 79ns |

Context accumulation (log.set())

| Operation | ops/sec | Mean |
| --- | --- | --- |
| Shallow merge (3 fields) | 9.56M | 105ns |
| Shallow merge (10 fields) | 4.79M | 209ns |
| Deep nested merge | 8.04M | 124ns |
| 4 sequential calls | 7.05M | 142ns |

Event emission (log.emit())

| Operation | ops/sec | Mean |
| --- | --- | --- |
| Emit minimal event | 1.93M | 519ns |
| Emit with context | 1.70M | 588ns |
| Full lifecycle (create + 3 sets + emit) | 1.59M | 628ns |
| Emit with error | 65.9K | 15.17µs |
Emit with error is slower because Error.captureStackTrace() is an expensive V8 operation (~15µs). The cost is only paid on the error path.
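
To see where that cost comes from, here is an illustration of the underlying V8 API (not evlog's actual code): Error.captureStackTrace walks the live call stack to fill in a .stack property, and that walk is the expensive part.

```javascript
// Illustration only: Error.captureStackTrace fills in .stack for any object.
// The stack walk it performs is the ~15µs cost mentioned above.
function buildErrorEvent(message) {
  const event = { message }
  // The second argument hides buildErrorEvent itself from the trace.
  Error.captureStackTrace(event, buildErrorEvent)
  return event
}
```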

Payload scaling

| Payload | ops/sec | Mean |
| --- | --- | --- |
| Small (2 fields) | 1.72M | 581ns |
| Medium (50 fields) | 569.8K | 1.76µs |
| Large (200 nested fields) | 131.2K | 7.62µs |

Sampling

| Operation | ops/sec | Mean |
| --- | --- | --- |
| Tail sampling (shouldKeep) | 44.97M | 22ns |
| Full emit with head + tail | 7.01M | 143ns |

Enrichers

| Enricher | ops/sec | Mean |
| --- | --- | --- |
| User Agent (Chrome) | 2.61M | 384ns |
| Geo (Vercel) | 3.88M | 258ns |
| Request Size | 12.37M | 81ns |
| Trace Context | 4.35M | 230ns |
| All combined (all headers) | 466.7K | 2.14µs |

Error handling

| Operation | ops/sec | Mean |
| --- | --- | --- |
| createError() | 232.2K | 4.31µs |
| parseError() | 45.48M | 22ns |
| Round-trip (create + parse) | 231.4K | 4.32µs |

Middleware pipeline

| Operation | ops/sec | Mean |
| --- | --- | --- |
| resolveMiddlewarePluginRunner (no plugins) | 37.70M | 27ns |
| resolveMiddlewarePluginRunner (2 plugins, cached) | 32.26M | 31ns |
| createMiddlewareLogger (no plugins, safe headers) | 4.41M | 227ns |
| createMiddlewareLogger (2 plugins, cached merge) | 4.13M | 242ns |
| Full request lifecycle (no plugins, no drain) | 993.7K | 1.01µs |
| Full request lifecycle (2 plugins, sync drain) | 621.2K | 1.61µs |

Methodology & trust

Can you trust these numbers?

Every benchmark on this page is open source and reproducible. The benchmark files live in packages/evlog/bench/. You can read the exact code, run it on your machine, and verify the results.

All libraries are tested under the same conditions:

  • Same output mode: JSON to a no-op destination (no disk or network I/O measured)
  • Same warmup: each benchmark runs for 500ms after JIT stabilization
  • Same tooling: Vitest bench powered by tinybench
  • Same machine: when comparing libraries, all benchmarks run in the same process on the same hardware

CI regression tracking

Performance regressions are tracked on every pull request via two systems:

  • CodSpeed runs all benchmarks using CPU instruction counting (not wall-clock timing). This eliminates noise from shared CI runners and produces deterministic, reproducible results. Regressions are flagged directly on the PR.
  • Bundle size comparison measures all entry points against the main baseline and posts a size delta report as a PR comment.

Run it yourself

Terminal
cd packages/evlog

pnpm run bench                          # all benchmarks
pnpm exec vitest bench bench/comparison/ # vs alternatives only
pnpm exec tsx bench/scripts/size.ts     # bundle size