Redis, Memcached, cache-aside, TTL, and invalidation strategies for fast reads across instances.
An in-memory cache inside a single app instance is fast but not shared: with multiple instances, each keeps its own copy, so the same key may be fetched from the DB once per instance. A distributed cache (e.g. Redis, Memcached) is shared across instances: all of them read and write the same key-value store, usually over the network. Reads that hit the cache skip the DB, cutting both latency and load.
Use for: session data, computed or aggregated results, API responses from external services, and rate limit counters.
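The rate-limit case can be sketched as a fixed-window counter. This is a minimal illustration, not production code: a plain dict stands in for the shared cache, and `allow_request` is a hypothetical helper. With Redis you would INCR a key like `rl:{user}:{window}` and set an EXPIRE on it, so all instances share the same counter.

```python
import time

# (user, window_start) -> request count; a dict standing in for Redis.
_counters = {}

def allow_request(user, limit=5, window_seconds=60, now=None):
    """Return True if `user` is under `limit` requests in the current window."""
    now = time.time() if now is None else now
    window_start = int(now // window_seconds)   # bucket requests by window
    key = (user, window_start)
    _counters[key] = _counters.get(key, 0) + 1  # INCR in Redis
    return _counters[key] <= limit
```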
Cache-aside (lazy loading): the app checks the cache first. On miss, load from the DB, write to the cache, then return. On hit, return from the cache. The app owns the logic; the cache is a side store. Simple and common. Write-through (write to cache and DB together) or write-behind (write to cache, async to DB) are alternatives when you need different consistency or write performance.
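The cache-aside read path can be sketched in a few lines. A dict stands in for the shared cache here, and `load_from_db` is a placeholder for the real (slow) query:

```python
cache = {}  # stand-in for a shared cache like Redis or Memcached

def load_from_db(key):
    # Placeholder for the real DB query.
    return f"value-for-{key}"

def get(key):
    if key in cache:           # 1. check the cache first
        return cache[key]      #    hit: return cached value
    value = load_from_db(key)  # 2. miss: load from the DB
    cache[key] = value         # 3. populate the cache for later readers
    return value               # 4. return to the caller
```

The app owns all four steps; the cache never talks to the DB itself.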
Invalidation: when data changes, delete the key (or update it). Next read will miss and repopulate from DB. Be careful with stale reads if you update the DB but forget to invalidate the cache.
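A sketch of the write path, again with dicts standing in for the cache and the DB. Deleting the key (rather than updating it in place) keeps the logic simple: the next read misses and repopulates from the fresh row.

```python
cache = {"user:1": "old"}   # stand-in for the shared cache
db = {"user:1": "old"}      # stand-in for the database

def update(key, new_value):
    db[key] = new_value      # 1. write the source of truth first
    cache.pop(key, None)     # 2. invalidate; next read misses and repopulates
```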
TTL (time-to-live): set an expiration on each key (e.g. 5 minutes, 1 hour). After TTL, the key is gone; the next read is a miss and repopulates. TTL limits staleness and saves memory. Use short TTL for volatile data, longer for stable data.
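TTL can be sketched by storing an expiry timestamp next to each value and treating expired entries as misses. Redis does this server-side (e.g. `SET key value EX 300`); the dict and helper names below are only for illustration.

```python
import time

cache = {}  # key -> (value, expires_at)

def set_with_ttl(key, value, ttl_seconds, now=None):
    now = time.time() if now is None else now
    cache[key] = (value, now + ttl_seconds)

def get(key, now=None):
    now = time.time() if now is None else now
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if now >= expires_at:    # expired: drop it and behave like a miss
        del cache[key]
        return None
    return value
```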
When the cache is full, the server evicts keys (e.g. LRU—least recently used). Configure max memory and eviction policy (Redis: maxmemory-policy). Prefer evicting less critical keys (e.g. by prefix or type).
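In Redis this is a configuration concern. The values below are examples, not recommendations:

```
# redis.conf -- cap memory and pick an eviction policy
maxmemory 256mb
maxmemory-policy allkeys-lru   # evict least-recently-used keys, any key

# volatile-lru instead evicts only keys that have a TTL set, which lets
# you protect critical keys by giving only cache-type keys an expiration.
```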
Redis: rich data structures (strings, hashes, lists, sets, sorted sets), pub/sub, Lua scripting, persistence (optional). Good for sessions, leaderboards, rate limits, and when you need more than plain key-value. Memcached: simple key-value, multi-threaded, often used for plain cache. Both are in-memory and fast; Redis is more featureful, Memcached is simpler. Choose Redis when you need structures or persistence; Memcached when you want minimal complexity.
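To make the "more than plain key-value" point concrete, here is what a Redis sorted set buys you for a leaderboard, simulated with a plain dict (in Redis this would be `ZADD` to write and `ZREVRANGE ... WITHSCORES` to read; the function names below just mirror those commands):

```python
scores = {}  # member -> score; stand-in for a Redis sorted set

def zadd(member, score):
    scores[member] = score

def top(n):
    # Highest scores first, like ZREVRANGE 0 n-1 WITHSCORES.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

Redis keeps the set ordered server-side, so `top` is cheap even with millions of members; with Memcached you would have to store and re-sort the whole list yourself.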
Caching helps when: read-heavy, expensive to compute or fetch, and tolerates staleness (or you invalidate on write). It does not help when: write-heavy (cache keeps getting invalidated), highly personalized (low reuse per key), or strong consistency required (cache might be stale). Monitor hit rate; low hit rate may mean wrong keys, too short TTL, or a workload that does not benefit.
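Hit rate is simple to track by wrapping reads with counters, sketched below with a dict-backed cache. Redis reports the same numbers itself as `keyspace_hits` and `keyspace_misses` in the `INFO stats` output.

```python
hits = 0
misses = 0
cache = {}  # stand-in for the shared cache

def get(key):
    global hits, misses
    if key in cache:
        hits += 1
        return cache[key]
    misses += 1
    return None       # caller falls back to the DB on a miss

def hit_rate():
    total = hits + misses
    return hits / total if total else 0.0
```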