Edge runtime vs Node runtime: when each one wins for SaaS
Edge wins on cold start and global latency. Node wins on libraries, duration, and database work. The decision is not binary, and most production SaaS use both.
The Edge runtime is a V8 isolate environment that runs your code in 300+ Points of Presence with sub-100ms cold starts; the Node runtime is a full Node.js process that runs in regional data centers with broader API access and higher startup cost. They solve different problems, and the right answer for a SaaS is almost never "all Edge" or "all Node".
This piece is for engineering leads picking a runtime per route in Next.js 16 or any framework that exposes the same split. We focus on what actually breaks in production, not the marketing copy.
TL;DR
Pick Edge for routes that are short, latency-sensitive, and use only Web APIs: middleware, auth checks, A/B test routing, geolocation rewrites, streaming token relays. Pick Node for routes that pull a heavy npm dependency, talk to a SQL database with a persistent driver, run for more than a few seconds, generate PDFs, or rely on the file system. Most production SaaS we ship runs middleware on Edge and CRUD APIs on Node, and that split alone covers ~95% of decisions.
Comparison at a glance
| Axis | Edge runtime | Node runtime |
| --- | --- | --- |
| Engine | V8 isolate (Web APIs subset) | Full Node.js process |
| Cold start (P50) | ~100ms | ~250ms warm, ~860ms cold |
| Geographic distribution | Global PoPs | One or a few regions |
| npm packages | Edge-compatible only | Anything on npm |
| Bundle size limit | 1 to 4 MB on Vercel | Up to 250 MB unzipped |
| Max duration | 300s (must start streaming in 25s) | Up to 800s on Vercel Pro |
| File system | None | Full |
| Native modules | None | Yes |
| Persistent connections | No | Yes (with caveats) |

Where Edge wins
Cold-start latency
The most-cited benchmark on the topic, run by openstatus, measured Edge functions at 106ms P50 versus Vercel serverless at 246ms warm and 859ms cold: roughly an 8x improvement on cold paths and a 2x improvement when warm (openstatus, 2024). The reason is structural: V8 isolates share a single OS process, so booting one is a sub-millisecond memory allocation. A Node serverless function has to spin up a Node process, parse node_modules, and run your application code, which takes hundreds of milliseconds even on a small bundle.
For middleware that runs on every request, this matters. Even a 200ms cold-start tax on a request-gating middleware is enough to wreck your TTFB targets and push Largest Contentful Paint into the yellow.
Global proximity
Edge functions execute at the PoP closest to the user. A request from São Paulo to a Frankfurt-only Node region takes 200+ms in network alone before any code runs. The same request to an Edge function lands in a São Paulo PoP and the round-trip drops to 30ms. For routes whose answer is "set a cookie, redirect, or rewrite", this is the entire latency budget.
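As a concrete sketch, here is the kind of geolocation rewrite that fits this budget. It assumes deployment on Vercel, where the `geolocation` helper from `@vercel/functions` reads the PoP-provided headers; the EU country list and `/eu` path prefix are illustrative.

```ts
// middleware.ts — a sketch of an Edge geolocation rewrite on Vercel.
// `geolocation` reads the PoP-provided headers; the EU list is illustrative.
import { NextResponse, type NextRequest } from 'next/server';
import { geolocation } from '@vercel/functions';

const EU_COUNTRIES = new Set(['DE', 'FR', 'NL', 'ES', 'IT']);

export function middleware(request: NextRequest) {
  const { country } = geolocation(request);

  // Rewrite EU visitors to a locale-prefixed tree without a client redirect.
  if (country && EU_COUNTRIES.has(country)) {
    const url = request.nextUrl.clone();
    url.pathname = `/eu${url.pathname}`;
    return NextResponse.rewrite(url);
  }
  return NextResponse.next();
}

export const config = { matcher: ['/((?!_next|api).*)'] };
```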
Streaming tokens from an LLM
Edge runtimes implement ReadableStream natively and the Vercel platform pipes them straight to the client without buffering. For chat UIs streaming tokens from Claude, GPT, or Gemini, Edge keeps perceived latency low because tokens hit the user as they arrive at the PoP, not after a Node region rebuffers them.
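A minimal relay sketch: the Edge route forwards the upstream ReadableStream untouched. The endpoint, headers, and body shape below follow Anthropic's Messages API purely as an illustration; swap in your provider.

```ts
// app/api/chat/route.ts — a pass-through token relay on the Edge runtime.
export const runtime = 'edge';

export async function POST(req: Request) {
  const upstream = await fetch('https://api.anthropic.com/v1/messages', {
    method: 'POST',
    headers: {
      'content-type': 'application/json',
      'x-api-key': process.env.ANTHROPIC_API_KEY!,
      'anthropic-version': '2023-06-01',
    },
    body: JSON.stringify({ ...(await req.json()), stream: true }),
  });

  // No rebuffering: the upstream ReadableStream is forwarded as-is, so
  // tokens reach the client as they arrive at the PoP.
  return new Response(upstream.body, {
    status: upstream.status,
    headers: { 'content-type': 'text/event-stream' },
  });
}
```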
Where Node wins
Library compatibility
Edge runs a strict subset of Web APIs. There is no fs, no child_process, no net, no Node crypto module (only Web Crypto), no eval, and no new Function(). Any npm package that touches those, directly or transitively, will not run (Next.js Edge API reference).
The list of incompatible packages is long: jsonwebtoken, bcrypt, sharp, puppeteer, pdfkit, most ORMs that ship native drivers, and any package that uses Node-specific globals. Some have edge-compatible siblings (jose for JWT, @node-rs/argon2 for hashing, Prisma's edge client), but the migration is not free, especially when you discover the rewrite halfway through a feature.
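For example, session verification that would normally reach for jsonwebtoken can move to jose, which relies on Web Crypto and runs in both runtimes. A sketch, with key handling left to your setup:

```ts
// Edge-safe session check with jose, which uses Web Crypto instead of
// Node's crypto module. A sketch; key management is up to your setup.
import { jwtVerify, type JWTPayload } from 'jose';

const secret = new TextEncoder().encode(process.env.JWT_SECRET!);

export async function verifySession(token: string): Promise<JWTPayload | null> {
  try {
    const { payload } = await jwtVerify(token, secret, { algorithms: ['HS256'] });
    return payload; // e.g. { sub, exp, ... }
  } catch {
    return null; // bad signature or expired token
  }
}
```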
Database access
Persistent TCP connections to a SQL database do not exist in Edge. The model is HTTP-based: every query opens a fresh connection through a proxy or a serverless driver. That works for small read-heavy patterns, but transactions, prepared statements, and long-running queries are slower and more expensive than the Node equivalent. We see this in practice on Supabase: a Node serverless function with a pooled postgres client routinely handles a 6-statement transaction in 40ms; the equivalent through the Supabase HTTP API from Edge is 120 to 200ms.
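The Node side of that comparison looks like this: a pooled pg client holding one TCP connection across a multi-statement transaction. Table and column names are illustrative; the pattern itself requires the Node runtime, because the pg driver opens persistent connections that Edge cannot hold.

```ts
// Node-only sketch: a pooled pg client running a transaction on one connection.
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

export async function transferCredits(from: string, to: string, amount: number) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query(
      'UPDATE accounts SET credits = credits - $1 WHERE id = $2', [amount, from]);
    await client.query(
      'UPDATE accounts SET credits = credits + $1 WHERE id = $2', [amount, to]);
    await client.query('COMMIT');
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release(); // return the connection to the pool, don't close it
  }
}
```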
Long-running and CPU-heavy work
Edge functions on Vercel must start streaming a response within 25 seconds, even though the total invocation can run for up to 300s (Vercel, March 2025). For PDF generation, image transforms, ZIP packaging, or any synchronous CPU work that blocks before producing output, Node is the safer runtime. The CPU itself is also weaker on Edge: V8 isolates are scheduled aggressively and can be preempted, while Node functions get a dedicated execution slot.
Bundle size
Vercel's Edge bundle ceiling is between 1 MB and 4 MB depending on plan and platform, and that includes everything imported transitively (Vercel Functions Limits). One mid-size SDK is enough to blow it. Node functions ship up to 250 MB unzipped, which is rarely a constraint.
The middleware question (Next.js 15.5+)
For two years, Next.js middleware was Edge-only. That forced teams to do auth checks against an HTTP-based session store (Upstash, Clerk, Auth.js JWT) and to avoid any Node-specific dependency in the middleware path. Next.js 15.5 made Node.js runtime support for middleware stable, after an experimental phase in 15.2 (Next.js 15.5 release notes). Next.js 16 carries this forward.
The practical effect: you can now run a Node-only library (a session validator that needs crypto, an ORM client, a feature-flag SDK that uses fs) in middleware without juggling Edge polyfills. The trade-off is the cold-start regression: Node middleware is slower to boot than Edge middleware, so the wins from "keep middleware on Edge" still hold for high-traffic gating routes.
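As we read the 15.5 API, the opt-in is a one-line config change; everything else about the middleware file stays the same:

```ts
// middleware.ts — opting middleware into the Node runtime (Next.js 15.5+).
import { NextResponse, type NextRequest } from 'next/server';

export const config = {
  runtime: 'nodejs', // default is 'edge'; flip only when you need Node APIs
  matcher: ['/dashboard/:path*'],
};

export function middleware(request: NextRequest) {
  // Node-only dependencies (Node crypto, fs-backed SDKs, ORM clients)
  // are now legal on this path.
  return NextResponse.next();
}
```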
Our rule of thumb on this: keep Edge middleware as the default; switch to Node middleware only when the Edge alternative requires three Edge-compatible package swaps to satisfy one feature.
What we actually run in production
The split we ship on most Studio SaaS projects:
- Edge: middleware (i18n routing, auth gate, A/B routing, geolocation), /api/health, image rewrite endpoints, streaming LLM relays, low-stakes read endpoints that hit Upstash Redis.
- Node: all admin CRUD against Supabase, file uploads to R2, Stripe webhook handlers, PDF generation, anything that touches the file system, scheduled jobs.
The default for a new route is Node, and we promote a route to Edge only when we measure a latency win that justifies the constraint surface. "Default to Edge" is a common mistake we see in early-stage SaaS: it pushes auth and DB access into a runtime where they are slower and more expensive, then pushes the team into a 2-week port back to Node when a Stripe webhook needs to run for 12 seconds.
The decision tree
Faced with a new route, we walk this in order:
- Does the route need to read the request and act in under 50ms (geolocation, auth gate, rewrite)? Edge.
- Does the route stream tokens from an LLM? Edge.
- Does the route depend on even one npm package that pulls in Node APIs? Node.
- Does the route do a multi-statement DB transaction? Node.
- Could the route run for more than 25 seconds before starting to stream? Node.
- Is the bundle for this route already past 1 MB? Node.
- None of the above, and the route is high-traffic? Edge, measured.
- Otherwise: Node, default.
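Every leaf of that tree resolves to a one-line route-segment config in Next.js. A minimal sketch, with maxDuration shown for the Node case:

```ts
// app/api/report/route.ts — per-route runtime selection in Next.js.
export const runtime = 'nodejs'; // or 'edge'
export const maxDuration = 60;   // seconds; honored for Node functions on Vercel

export async function GET() {
  return Response.json({ ok: true });
}
```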
Decisions made this way are reversible. Decisions made by following "all Edge, it's faster" are usually not, because by the time the team hits the wall, half the codebase has been written against Web APIs only.
Frequently asked questions
- Can I mix Edge and Node runtimes in the same Next.js app?
- Yes, and most production apps do. You set the runtime per route segment with the route-segment config (export const runtime = 'edge' or 'nodejs'), and Next.js builds and deploys each route accordingly. Middleware is a separate decision since Next.js 15.5 made Node middleware stable, but you still pick one runtime for the middleware as a whole.
- Is Edge always cheaper than Node on Vercel?
- Not always. Edge invocations are billed differently and the GB-Hours math depends on your traffic shape. For high-traffic short routes, Edge tends to be cheaper because of faster cold starts and less compute time per request. For long-running CPU-bound work, Node is usually cheaper because Edge would either time out or burn longer at higher rates.
- What happens if my Edge function exceeds the bundle size limit?
- The deploy fails. Vercel surfaces a build error naming the route and the bundle size. The fix is either to swap an Edge-incompatible package for a lighter alternative, lazy-load on the client, or move the route to the Node runtime. There is no soft degradation; the route does not ship until the bundle is under the limit.
- Should I use Edge for my Stripe webhook?
- No. Stripe webhooks need raw body access, signature verification with crypto primitives that some libraries implement only in Node, and tolerance for occasional 5 to 15 second processing windows. Run them on Node with maxDuration set generously, and let the queue or DB write absorb retries.
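A sketch of that setup. The env var names and the event handled are illustrative; the raw-body requirement and stripe.webhooks.constructEvent are the real constraint and API.

```ts
// app/api/stripe/webhook/route.ts — a Node-runtime webhook sketch.
import Stripe from 'stripe';

export const runtime = 'nodejs';
export const maxDuration = 60; // generous window for slow downstream writes

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function POST(req: Request) {
  const signature = req.headers.get('stripe-signature');
  if (!signature) return new Response('Missing signature', { status: 400 });

  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(
      await req.text(), // raw body, not parsed JSON — signature covers the bytes
      signature,
      process.env.STRIPE_WEBHOOK_SECRET!,
    );
  } catch {
    return new Response('Invalid signature', { status: 400 });
  }

  // Handle the event types you care about; ack fast, do heavy work async.
  if (event.type === 'checkout.session.completed') {
    // ... persist to DB, enqueue fulfillment, etc.
  }
  return new Response('ok', { status: 200 });
}
```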