[Tech Simplified] Caching Strategies

A short article about the different types of Caching Strategies

2 min read · Sep 9, 2023

Cache Aside pattern

If the data is available in the cache, it is returned directly (a cache hit). If it isn’t (a cache miss), the application queries the database itself and then writes the result into the cache for subsequent reads.

  • Introduces extra network hops.
  • Code complexity: the developer is responsible for writing the cache/database synchronisation logic as well as the code to update and read the database.
  • Suitable when READ operations outnumber WRITE operations and the cached data does not change frequently.
  • Commonly used with Redis.
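The flow above can be sketched in a few lines. This is a minimal illustration using a plain dict as the cache and a hypothetical `fetch_from_db()` standing in for a real database query:

```python
cache = {}

def fetch_from_db(key):
    # Hypothetical stand-in for a real database query.
    return f"value-for-{key}"

def get(key):
    if key in cache:            # cache hit: return directly
        return cache[key]
    value = fetch_from_db(key)  # cache miss: the APPLICATION queries the DB
    cache[key] = value          # then populates the cache for next time
    return value
```

Note that both the database call and the cache update live in application code, which is exactly the "code complexity" trade-off listed above.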

Read Through pattern

The application always requests data only from the cache. If the cache does not hold the requested data, the cache itself queries the database on behalf of the application, using an underlying provider plugin.

After retrieving the data, the cache will update itself and return the data to the application.

  • The application is not aware of the database, which makes the code cleaner and more readable.
  • This method requires writing a provider plugin that fetches from the database.
  • Suitable when READ operations are frequent and data retrieval from the underlying data source is slow.

Write Through pattern

When the application updates a piece of data in the cache (e.g., calls put to change a cache entry), the operation does not complete until the data has also been stored in the underlying database.

  • Suitable for frequent WRITE Operations. Ensures data consistency in Cache and Data source.
  • May introduce latency for write operations due to the need to update two ends.
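As a sketch, a write-through put writes to the database before acknowledging; `db_writer` here is a hypothetical callable standing in for the real persistence layer:

```python
class WriteThroughCache:
    def __init__(self, db_writer):
        self._store = {}
        self._db_writer = db_writer  # callable that persists to the database

    def put(self, key, value):
        # The database write happens synchronously: put() does not
        # return until the data source has been updated too.
        self._db_writer(key, value)
        self._store[key] = value

    def get(self, key):
        return self._store.get(key)
```

The extra synchronous hop is where the write latency mentioned above comes from.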

Write Behind pattern

Modified cache entries are asynchronously written to the data source after a configured delay, whether that is 10 seconds or 20 minutes.

Write operations are initially stored in a queue (e.g., message queue), and acknowledgments are sent immediately to the application. A background process asynchronously flushes the queued writes to the data source.

Note that this only applies to cache inserts and updates — cache entries are removed synchronously from the data source.

  • Suitable when WRITE operations are frequent and low-latency writes are critical. It helps absorb write spikes and decouples write operations from the data source.
  • Possible data inconsistencies while the cache is briefly out of sync with the data source, and increased complexity due to queuing.
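The queue-and-flush flow can be sketched as below. For simplicity the background process is modelled as an explicit `flush()` call rather than a real worker thread or message queue, and `db_writer` is again a hypothetical persistence callable:

```python
from collections import deque

class WriteBehindCache:
    def __init__(self, db_writer):
        self._store = {}
        self._queue = deque()        # pending writes, acknowledged immediately
        self._db_writer = db_writer

    def put(self, key, value):
        self._store[key] = value     # cache updated right away
        self._queue.append((key, value))  # database write is deferred
        # put() returns here -- the application is not blocked on the DB

    def flush(self):
        # In production, a background process runs this on a schedule.
        while self._queue:
            self._db_writer(*self._queue.popleft())
```

Between a `put()` and the next `flush()`, the cache and the data source disagree: that window is the inconsistency trade-off listed above.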

The end!



