Networking
March 18, 2026 · 2 min read

WebSockets vs HTTP — connection economics, backpressure, and when streaming wins

Framing the transport choice as an operations problem: fan-out, heartbeats, scaling stateful sockets, and falling back to SSE or polling without shame.

Tags: WebSocket · HTTP/2 · SSE · scaling

Choosing between HTTP request/response and persistent sockets is less about novelty and more about who owns buffering, authentication refresh, reconnect storms, and fan-out.

HTTP excels when the work is fundamentally named resources fetched on demand: cache headers, CDN semantics, retries, idempotency keys, chunked transfer, and a gigantic middleware ecosystem already speak this dialect.

Persistent connections shine when workloads are streaming, duplex, latency-sensitive coordination: fleet telemetry, collaborative cursors (with sane CRDT overlays), realtime monitors. They trade convenient request boundaries for continuously open control loops.

WebSockets specifically

After the HTTP Upgrade handshake, frames flow both ways over a single TCP connection, with client-to-server frames masked (historically a defense against cache-poisoning intermediaries). Proxies sometimes mishandle idle sockets, so heartbeats (ping/pong) and timeouts must be deliberate: half-open connections should be detected and closed instead of pinning memory forever.
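A liveness check can be as small as tracking the last pong and a grace multiplier. A minimal sketch (class and parameter names are illustrative, not any library's API):

```python
import time

class Heartbeat:
    """Track ping/pong liveness for one socket (illustrative, not a real API)."""

    def __init__(self, interval=20.0, grace=2):
        self.interval = interval        # seconds between pings
        self.grace = grace              # missed pongs tolerated before declaring dead
        self.last_pong = time.monotonic()

    def on_pong(self):
        # Any pong frame resets the liveness clock.
        self.last_pong = time.monotonic()

    def is_dead(self, now=None):
        # Dead once the silence exceeds `grace` ping intervals.
        now = time.monotonic() if now is None else now
        return (now - self.last_pong) > self.interval * self.grace
```

On each timer tick the server sends a ping and calls `is_dead()`; a dead socket gets closed explicitly rather than lingering half-open.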

Operational checklist:

| Concern | Pattern |
| --- | --- |
| Auth/session refresh | Re-auth on reconnect; never trust a dormant socket forever |
| Reconnect backoff | Add jitter to avoid simultaneous stampedes across a redeploy |
| Backpressure | App-level quotas; naive TCP buffering hides bursts until kernel buffers fill and latency explodes |
| Multi-node fan-out | Pub/sub backbone bridging nodes (Redis, NATS, managed buses) |
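The reconnect-backoff row is worth making concrete: full-jitter exponential backoff draws each delay uniformly from a growing window, so a fleet that disconnects at the same instant does not retry at the same instant. A sketch:

```python
import random

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Full-jitter exponential backoff.

    The delay is drawn uniformly from [0, min(cap, base * 2**attempt)],
    spreading a fleet's reconnects instead of synchronizing them.
    """
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

Clients sleep for `backoff_delay(attempt)` before each retry and reset `attempt` to zero after a healthy connection survives for a while.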

Server-Sent Events and long polling

SSE is gloriously simple for server-to-browser push over HTTP: not duplex, yet extremely debuggable, and in some setups more CDN-friendly than raw sockets.
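The wire format is part of why SSE is so debuggable: an event is just text fields terminated by a blank line. A minimal serializer for that framing might look like this (the helper name is made up; the field syntax follows the EventSource spec):

```python
def sse_event(data, event=None, event_id=None):
    """Serialize one Server-Sent Events frame.

    Emits optional `event:` and `id:` fields, one `data:` line per
    payload line, and the blank-line terminator the spec requires.
    """
    lines = []
    if event:
        lines.append(f"event: {event}")
    if event_id:
        lines.append(f"id: {event_id}")
    for part in str(data).splitlines() or [""]:
        lines.append(f"data: {part}")
    return "\n".join(lines) + "\n\n"
```

Because it is plain text, `curl -N` against an SSE endpoint shows exactly these frames as they arrive.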

Long polling is fallback archaeology that still survives corporate proxies allergic to Upgrade headers. Respect it pragmatically rather than mocking it—it may be someone’s survivable outage mitigation.

When HTTP/2 multiplexing steals partial wins

If your “realtime-ish” UX is periodic small updates, multiplexed HTTP streams with short-lived compression contexts may suffice without socket state sprawled everywhere. Weigh the engineering hours spent defending socket clusters against simply tightening a polling cadence.

There is no laurel wreath exclusively for sockets. A winning architecture names the buffering contract honestly: bounded queues, pressure surfaced to clients, and observability that correlates WebSocket sessions with tracing spans.
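One way to make that buffering contract explicit is a bounded per-client outbox that drops the oldest message and counts the drop, rather than letting TCP buffers absorb the burst invisibly. A sketch (names are illustrative):

```python
from collections import deque

class BoundedOutbox:
    """Per-client send queue with an explicit cap (illustrative sketch).

    When full, the oldest message is dropped and the drop is counted,
    so pressure is observable in metrics instead of hiding in kernel
    socket buffers.
    """

    def __init__(self, maxlen=100):
        self.queue = deque()
        self.maxlen = maxlen
        self.dropped = 0    # surface this to metrics and/or the client

    def push(self, msg):
        if len(self.queue) >= self.maxlen:
            self.queue.popleft()    # shed the oldest, not the newest
            self.dropped += 1
        self.queue.append(msg)

    def pop(self):
        return self.queue.popleft() if self.queue else None
```

Drop-oldest suits telemetry-style streams where the latest value supersedes earlier ones; for must-deliver messages you would block or disconnect instead, but either way the policy is named, not accidental.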
