
Breaking the Speed Limit: How Mezon-Proto & Raw QUIC are Redefining Our Architecture

At Mezon, we’ve always been obsessed with performance. For a long time, our stack—Nginx + Unix Sockets + Golang (fasthttp)—served us well. But as our traffic scaled and latency requirements tightened, we realized that even the most optimized HTTP/1.1 or H2 stacks carry a “protocol tax” we were no longer willing to pay.

Today, we are excited to share a major architectural shift: the move to Mezon-Proto powered by Raw QUIC.

By stripping away legacy layers and moving toward a custom transport implementation, we’ve effectively decoupled our high-speed “Data Plane” from our complex “Control Plane.”


The Problem: The “HTTP Tax”

Even with fasthttp in Go, standard web traffic requires significant CPU cycles for:

  1. Header Parsing: Validating and processing bulky HTTP headers.
  2. Context Switching: Moving data between Nginx and our Go backend.
  3. TCP Overhead: Dealing with head-of-line blocking and multi-step handshakes.
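To illustrate point 1, a plain-text HTTP request line has to be scanned, split, and validated string-by-string, while a binary protocol reads a fixed-offset opcode with a single bounds check. This is a toy comparison, not our actual parser:

```go
package main

import (
	"fmt"
	"strings"
)

// parseRequestLine does the kind of string scanning HTTP/1.1 requires:
// split on spaces, validate the shape, allocate substrings.
func parseRequestLine(line string) (method, path string, ok bool) {
	parts := strings.SplitN(line, " ", 3)
	if len(parts) != 3 || !strings.HasPrefix(parts[2], "HTTP/") {
		return "", "", false
	}
	return parts[0], parts[1], true
}

// A binary protocol replaces all of that with a fixed-offset read.
func parseBinaryOp(frame []byte) (opcode byte, ok bool) {
	if len(frame) == 0 {
		return 0, false
	}
	return frame[0], true
}

func main() {
	m, p, _ := parseRequestLine("GET /users/42 HTTP/1.1")
	fmt.Println(m, p) // string splitting, validation, allocations

	op, _ := parseBinaryOp([]byte{0x01, 0x00, 0x07})
	fmt.Printf("opcode=0x%02x\n", op) // one bounds check, one byte read
}
```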

We needed a way to serve the majority of our requests (the “Hot Path”) with near-zero processing overhead.


The Solution: A Dual-Path Architecture

Our new architecture splits traffic into two lanes: the Hot Path for lightning-fast data retrieval and the Cold Path for complex business logic.

1. The Hot Path: Mezon-Proto over Raw QUIC

For data-heavy requests, the mezon-proto-client talks directly to our new Mezon-Quic Server (written in C).

  • Zero HTTP Parsing: We use a lightweight, binary protocol.
  • Tiered Caching: The server checks an L1 Local Cache first. If it misses, it queries Valkey (L2), populates L1, and streams raw bytes back to the client.
  • The Result: The CPU spends its time moving bytes, not parsing strings.
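The hot-path flow above can be sketched as follows. The frame layout (a 1-byte opcode plus a length-prefixed key) and the in-process L2 stub standing in for Valkey are assumptions for illustration, not the actual Mezon-Proto wire format:

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
	"sync"
)

// Hypothetical Mezon-Proto GET frame: [opcode:1][keyLen:2 BE][key:keyLen].
const opGet = 0x01

func parseGetFrame(frame []byte) (string, error) {
	if len(frame) < 3 || frame[0] != opGet {
		return "", errors.New("malformed frame")
	}
	n := int(binary.BigEndian.Uint16(frame[1:3]))
	if len(frame) < 3+n {
		return "", errors.New("truncated key")
	}
	return string(frame[3 : 3+n]), nil
}

// tieredCache checks L1 first; on a miss it queries L2 and populates L1.
type tieredCache struct {
	mu sync.RWMutex
	l1 map[string][]byte
	l2 func(key string) ([]byte, bool) // stand-in for a Valkey lookup
}

func (c *tieredCache) get(key string) ([]byte, bool) {
	c.mu.RLock()
	v, ok := c.l1[key]
	c.mu.RUnlock()
	if ok {
		return v, true // L1 hit: no network hop, no parsing
	}
	v, ok = c.l2(key) // L2 (Valkey) on miss
	if !ok {
		return nil, false
	}
	c.mu.Lock()
	c.l1[key] = v // populate L1 for subsequent requests
	c.mu.Unlock()
	return v, true
}

func main() {
	cache := &tieredCache{
		l1: map[string][]byte{},
		l2: func(key string) ([]byte, bool) {
			if key == "user:42" {
				return []byte("raw-bytes"), true
			}
			return nil, false
		},
	}
	// Frame for GET "user:42": opcode + 2-byte length + key bytes.
	frame := append([]byte{opGet, 0x00, 0x07}, []byte("user:42")...)
	key, err := parseGetFrame(frame)
	if err != nil {
		panic(err)
	}
	v, _ := cache.get(key)
	fmt.Printf("%s -> %s\n", key, v)
}
```

Note that once the key is extracted, the response is just the cached byte slice streamed back as-is: the CPU moves bytes rather than parsing strings.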

2. The Cold Path: Lean IPC

When a request requires complex logic, it hits our Mezon-API (Golang) via a Unix Domain Socket.

  • Goodbye fasthttp: Since we are no longer serving public HTTP traffic directly from Go, we’ve removed fasthttp.
  • Raw Sockets: The Go backend now receives pre-parsed protocol data through the Unix socket, reducing memory allocations and GC pressure significantly.

Why This Matters

By moving the “Hot Path” to a C-based QUIC implementation and using Unix sockets for internal communication, we’ve achieved:

  • Drastic Latency Reduction: Bypassing Nginx and the TCP stack saves milliseconds on every round trip.
  • Memory Efficiency: Removing HTTP header maps and string parsing in Go reduces our memory footprint and makes Garbage Collection (GC) pauses almost non-existent.
  • Simplified Backend: Our Go engineers can focus on business logic without worrying about tuning a high-performance web server.