My attempt at understanding Redis internals
NOTE: This is not a Redis tutorial. This is only relevant if you already use Redis and want to understand why it behaves the way it does.
Below are my problems with Redis, and the realizations that resolved them.
1. “Redis is single-threaded” didn’t make sense
One of the first things everyone hears about Redis is that it is single-threaded.
At first, that sounded like a massive limitation.
In most backend systems, single-threaded servers are slow, block easily, and don’t scale. Yet Redis is somehow able to handle hundreds of thousands—or even millions—of requests per second. That contradiction bothered me.
The key realization was that Redis is single-threaded only for command execution, not for I/O (and since Redis 6.0, socket reads and writes can optionally be offloaded to dedicated I/O threads).
Redis uses a single-threaded event loop where:
- Network I/O is non-blocking (epoll / kqueue)
- Commands are executed sequentially
- Each command is extremely fast (mostly O(1) or O(log n))
This design avoids locks entirely: no mutexes, no race conditions, no context-switching overhead. Serialized command execution trades parallelism for predictability and cache efficiency.
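The loop above can be sketched in miniature. This is a toy, not real Redis: the `SET k v` / `GET k` protocol, the `handle` and `loop_once` helpers, and the reply format are all made up for illustration (real Redis speaks RESP and multiplexes sockets with epoll/kqueue in its own `ae` event loop). The point it demonstrates is that one selector drives all sockets and every command mutates one shared dict sequentially, so no locks are needed.

```python
import selectors
import socket

sel = selectors.DefaultSelector()
store = {}  # shared state: safe without locks, since only one thread touches it

def handle(conn):
    # Sequential command execution: one command runs to completion at a time.
    data = conn.recv(1024)
    if not data:
        sel.unregister(conn)
        conn.close()
        return
    parts = data.decode().split()
    if parts[0] == "SET":
        store[parts[1]] = parts[2]
        conn.sendall(b"+OK\r\n")
    elif parts[0] == "GET":
        value = store.get(parts[1])
        conn.sendall(("$%s\r\n" % value).encode() if value is not None else b"$-1\r\n")

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
listener.setblocking(False)  # non-blocking I/O, readiness reported by the selector
sel.register(listener, selectors.EVENT_READ, data="accept")

def loop_once(timeout=1.0):
    # One iteration of the event loop: wait for readiness, then dispatch.
    for key, _ in sel.select(timeout):
        if key.data == "accept":
            conn, _ = key.fileobj.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, data="client")
        else:
            handle(key.fileobj)

# Drive it from a client in the same process.
client = socket.create_connection(listener.getsockname())
client.sendall(b"SET answer 42")
loop_once()   # accept the connection
loop_once()   # execute SET
client.sendall(b"GET answer")
loop_once()   # execute GET
reply = b""
while reply.count(b"\r\n") < 2:
    reply += client.recv(1024)
```

Even in this sketch you can see the tradeoff: a slow command in `handle` would stall every connected client, which is exactly why Redis keeps individual commands fast.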
Once I understood this, Redis stopped feeling “limited” and started feeling intentionally constrained. It is fast because it does less, not more.
2. Persistence felt unreliable (until I understood the tradeoff)
Redis is marketed as an in-memory datastore, yet it offers persistence. That confused me.
If data lives in memory:
- Why does Redis persist to disk?
- Why are there multiple persistence modes?
- Why can data still be lost even when persistence is enabled?
The answer is that Redis does not pretend to be a fully durable database.
Redis gives you explicit control over durability.
- RDB (snapshots) are fast, compact, and cheap—but you can lose recent writes.
- AOF (append-only file) is safer—but increases write latency and disk usage.
- You can even combine both.
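These tradeoffs map directly onto redis.conf directives. A "combine both" setup might look like this (the specific thresholds here are illustrative, not recommendations):

```conf
# RDB: snapshot if at least 1 key changed in 900s, or 10 keys in 300s
save 900 1
save 300 10

# AOF: log every write, fsync once per second
# (crash loses at most ~1 second of acknowledged writes)
appendonly yes
appendfsync everysec

# Rewrite the AOF with an RDB preamble for faster restarts
aof-use-rdb-preamble yes
```

`appendfsync` is the durability dial: `always` fsyncs every write (safest, slowest), `everysec` is the common middle ground, and `no` leaves flushing to the OS.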
This is not accidental complexity. Redis is honest about the tradeoff:
You choose how much data loss you are willing to tolerate.
Once I stopped expecting Redis to behave like Postgres, its persistence model made sense. It is not unsafe—it is explicit.
3. Redis data structures are not abstractions
This was the biggest shift in how I thought about Redis.
In most databases, data structures are abstract. You don’t think about how a list or map is implemented internally. In Redis, the data structure is the API.
A Redis:
- LIST is not just a list
- HASH is not just a hashmap
- ZSET is not just a sorted collection
Each data structure has:
- Multiple internal encodings
- Different memory layouts depending on size
- Different performance characteristics
For example:
- Small hashes are stored as a compact, contiguous block of memory (the ziplist/listpack encoding)
- Sorted sets use both hash tables and skiplists
- Streams embed consumer offsets directly into the data structure
This is why Redis feels so powerful but also opinionated. You are not just choosing an API—you are choosing a memory and performance model.
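The size-dependent encoding switch can be sketched like this. The `TinyHash` class and its tiny threshold are invented for illustration; real Redis controls the same behavior for hashes with `hash-max-listpack-entries` (default 128) and reports the active encoding via `OBJECT ENCODING`.

```python
THRESHOLD = 4  # deliberately tiny; Redis's default is 128 entries

class TinyHash:
    """Flat list of (field, value) pairs while small (listpack-like),
    upgraded to a dict (hashtable) once it outgrows THRESHOLD."""

    def __init__(self):
        self.encoding = "listpack"
        self._pairs = []    # compact and cache-friendly, but O(n) lookup
        self._table = None  # O(1) lookup, more memory per entry

    def set(self, field, value):
        if self.encoding == "listpack":
            for i, (f, _) in enumerate(self._pairs):
                if f == field:
                    self._pairs[i] = (field, value)
                    return
            self._pairs.append((field, value))
            if len(self._pairs) > THRESHOLD:
                # One-way conversion, exactly like Redis: once a structure
                # outgrows the compact encoding, it never converts back.
                self._table = dict(self._pairs)
                self._pairs = None
                self.encoding = "hashtable"
        else:
            self._table[field] = value

    def get(self, field):
        if self.encoding == "listpack":
            return next((v for f, v in self._pairs if f == field), None)
        return self._table.get(field)

h = TinyHash()
for i in range(3):
    h.set(f"f{i}", i)
small_encoding = h.encoding   # still compact
for i in range(10):
    h.set(f"f{i}", i)
big_encoding = h.encoding     # upgraded after crossing the threshold
```

This is the sense in which you choose a memory model: keeping hashes under the threshold keeps them in the compact encoding, which is why field-count limits show up in real-world Redis schema advice.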
4. Replication is simple—and intentionally weak
Redis replication initially felt too simple.
- Asynchronous replication
- Replicas can lag
- Failover can lose writes
At first, this felt unsafe. But Redis is not trying to be a strongly consistent database. It optimizes for:
- Low latency
- High availability
- Operational simplicity
Redis makes a clear statement:
If you need strict correctness, use something else. If you need speed and control, Redis is the right tool.
Once I accepted that Redis is not strongly consistent by default, its replication model became easier to reason about.
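Within that model, redis.conf still gives you knobs to bound how much an asynchronous failover can lose (the values here are illustrative):

```conf
# Refuse writes on the primary unless at least 1 replica
# is connected and lagging by no more than 10 seconds.
min-replicas-to-write 1
min-replicas-max-lag 10
```

There is also the `WAIT` command, which lets a client block until a write has been acknowledged by a given number of replicas. Neither turns Redis into a strongly consistent store, but both let you trade latency for a smaller loss window.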
What Redis taught me
Understanding Redis internals changed how I think about system design:
- Constraints can improve performance
- Single-threaded does not mean slow
- Data structures are system boundaries
- Explicit tradeoffs are better than hidden ones
Redis works not because it is magical, but because it refuses to do too much.
Final thoughts
This post barely scratches the surface. There is still a lot more to Redis internals:
- Event loop design
- Memory eviction policies
- Pub/Sub vs Streams
- Lua scripting
- Cluster hash slots
But even this much understanding made me a better Redis user—and a better systems engineer.
If you are using Redis in production and treating it like “just a cache”, you are probably leaving performance—and correctness—on the table.