Cache (GitHub System Design Primer)


Summary

This document covers caching in system design. It focuses on caching strategies such as cache-aside and write-through and their application in real-world scenarios. It also lists applications that may or may not need query-level caching and explains the workflow of each pattern.



**Cache**

*Source: Scalable system design patterns*

Caching improves page load times and can reduce the load on your servers and databases. In this model, the dispatcher first looks up whether the request has been made before and tries to find the previous result to return, in order to save the actual execution. (Otherwise, the dispatcher forwards the request to a worker.)

Databases often benefit from a uniform distribution of reads and writes across their partitions. Popular items can skew the distribution, causing bottlenecks. Putting a cache in front of a database can help absorb uneven loads and spikes in traffic.

**Client caching**

Caches can be located on the client side (OS or browser), on the server side, or in a distinct cache layer.

**CDN caching**

CDNs are considered a type of cache.

**Web server caching**

Reverse proxies and caches such as Varnish can serve static and dynamic content directly. Web servers can also cache requests, returning responses without having to contact application servers.

**Database caching**

Your database usually includes some level of caching in a default configuration, optimized for a generic use case. Tweaking these settings for specific usage patterns can further boost performance.

**Application caching**

In-memory caches such as Memcached and Redis are key-value stores between your application and your data storage. Since the data is held in RAM, it is much faster than typical databases where data is stored on disk. RAM is more limited than disk, so cache invalidation algorithms such as least recently used (LRU) can help invalidate 'cold' entries and keep 'hot' data in RAM.
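To make the LRU idea concrete, here is a minimal sketch of an LRU cache built on Python's `collections.OrderedDict`; the class name and default capacity are illustrative, not part of the original primer.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry when full."""

    def __init__(self, capacity=1000):
        self.capacity = capacity
        self._entries = OrderedDict()

    def get(self, key):
        if key not in self._entries:
            return None  # cache miss
        self._entries.move_to_end(key)  # mark as most recently used ('hot')
        return self._entries[key]

    def set(self, key, value):
        if key in self._entries:
            self._entries.move_to_end(key)
        self._entries[key] = value
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict the 'coldest' entry
```

In practice, production systems usually rely on the eviction policies built into Memcached or Redis (e.g., Redis's `maxmemory-policy allkeys-lru`) rather than hand-rolled structures.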
Redis has the following additional features:

* Persistence option
* Built-in data structures such as sorted sets and lists

There are multiple levels you can cache that fall into two general categories: **database queries and objects**:

* Row level
* Query-level
* Fully-formed serializable objects
* Fully-rendered HTML

Generally, you should try to avoid file-based caching, as it makes cloning and auto-scaling more difficult.

**Why does file-based caching make cloning and auto-scaling more difficult?**

File-based caching makes cloning and auto-scaling more difficult for the following key reasons:

1. **Lack of shared state across instances.** File-based caches are stored locally on the filesystem of a single server instance. In a distributed system, new instances cloned or added during auto-scaling do not have access to the cached data stored on other instances. Each instance ends up with its own isolated cache, leading to cache fragmentation.
2. **Cache inconsistency.** Since the cache is stored locally, different instances might hold different versions of cached data, so requests handled by different instances may yield inconsistent results. Synchronizing file-based caches across multiple instances is non-trivial and requires additional mechanisms, such as file replication or a shared storage solution.
3. **Increased latency for new instances.** New instances cloned during auto-scaling start with an empty cache because the cached files are not shared. This results in a "cold start", where the new instances must fetch all data from the original source (e.g., the database) again, degrading performance until the cache warms up.
4. **Storage limitations.** In auto-scaling environments, the local disk space available for caching on each instance might become a bottleneck: if the cache grows too large, the system risks running out of disk space on some instances. This makes file-based caching harder to manage at scale.
5. **Difficulty in load balancing.** With multiple clones of an application, a load balancer directs requests to different instances. Since each instance has its own isolated file-based cache, there is no guarantee that a cache hit on one instance will be valid on another. This reduces the overall cache hit rate, increasing the load on backend systems.
6. **Challenges with deployment and consistency.** When deploying updates or scaling, maintaining cache state across instances becomes a challenge: cache invalidation becomes complex, as the file-based cache on all instances needs to be updated simultaneously. If not done correctly, this can lead to stale or inconsistent data being served.

**Solutions to address these challenges**

To mitigate these issues, distributed or centralized caching systems are preferred for environments that require cloning and auto-scaling. Examples include:

1. **In-memory distributed caching:** Tools like Redis, Memcached, or Amazon ElastiCache store cache data in memory and share it across instances, ensuring consistency and availability.
2. **Shared network file systems:** Solutions like AWS EFS (Elastic File System) or Google Cloud Filestore allow multiple instances to access the same cached files.
3. **Stateless applications:** Design the application to be stateless and rely on centralized caching instead of local file storage.

**Summary:** File-based caching is challenging in auto-scaling and cloning scenarios because it leads to isolated caches, inconsistent data, cold starts, and increased management complexity. To handle these environments effectively, use centralized or distributed caching systems that can scale dynamically and maintain shared state across instances.

**Caching at the database query level**

Whenever you query the database, hash the query as a key and store the result to the cache (a minimal sketch appears after the lists below). This approach suffers from expiration issues:

* It is hard to delete a cached result for complex queries
* If one piece of data changes, such as a table cell, you need to delete all cached queries that might include the changed cell

**Applications needing query-level caching:**

1. **E-commerce websites:** Product catalogs or pricing data that rarely change, reducing database load for high-traffic queries.
2. **Social media platforms:** User profiles or feed data that doesn't update frequently, improving response times for repeated queries.

**Applications that may not need query-level caching:**

1. **Banking applications:** Real-time account balances must always reflect up-to-date information to ensure accuracy.
2. **Stock trading platforms:** Live stock prices change rapidly, requiring real-time updates for precision.
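As a rough illustration of the query-hashing approach, the sketch below derives the cache key from the SQL text; the `cache` and `db` objects follow the primer's pseudocode conventions and are assumed interfaces, not a specific library's API.

```python
import hashlib
import json

def cached_query(sql):
    # Hash the query text to produce a stable cache key.
    key = "query." + hashlib.sha256(sql.encode("utf-8")).hexdigest()
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    result = db.query(sql)              # cache miss: run the actual query
    cache.set(key, json.dumps(result))  # store the result for repeat queries
    return result
```

Note the expiration issue described above: updating a single table cell requires invalidating every cached key whose query might include it, which is why query-level keys are often invalidated in bulk or given short TTLs.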
**Caching at the object level**

See your data as an object, similar to what you do with your application code. Have your application assemble the dataset from the database into a class instance or a data structure (a sketch follows below):

* Remove the object from cache if its underlying data has changed
* Allows for asynchronous processing: workers assemble objects by consuming the latest cached object
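As an illustrative sketch of object-level caching (the profile structure, key naming, and `cache`/`db` helpers are assumptions in the primer's pseudocode style), the application caches a fully assembled object and invalidates it when the underlying data changes:

```python
import json

def get_user_profile(user_id):
    key = "user_profile.{0}".format(user_id)
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    # Assemble the full object from multiple queries, as the application would.
    profile = {
        "user": db.query("SELECT * FROM users WHERE id = {0}", user_id),
        "posts": db.query("SELECT * FROM posts WHERE user_id = {0}", user_id),
    }
    cache.set(key, json.dumps(profile))
    return profile

def on_user_updated(user_id):
    # Remove the object from cache when its underlying data changes.
    cache.delete("user_profile.{0}".format(user_id))
```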
**How is it possible to cache an activity stream?**

Caching an activity stream involves storing precomputed or partially computed streams of user activities (e.g., posts, comments, likes) to improve performance and scalability. Instead of relying on live, real-time connections like an event bus, activity streams are cached as individual objects (e.g., posts or events) or as preassembled streams, allowing for faster retrieval and reduced backend load. Updates to the stream trigger cache invalidation or incremental updates, ensuring freshness, while time-based expiration or event-driven invalidation keeps data consistent. This approach enhances scalability and user experience, particularly for systems like social media platforms or dashboards, by serving cached results efficiently without the need to recompute the entire stream on every request.
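One common way to realize such a cached stream (a judgment call, not prescribed by the source) is a capped Redis list per user via redis-py; the key naming and 1,000-item cap are illustrative:

```python
import json
import redis

r = redis.Redis()  # assumes a local Redis instance

def push_activity(user_id, event):
    key = "stream.{0}".format(user_id)
    r.lpush(key, json.dumps(event))  # newest event at the head of the list
    r.ltrim(key, 0, 999)             # keep the preassembled stream capped

def read_activity(user_id, count=50):
    key = "stream.{0}".format(user_id)
    return [json.loads(e) for e in r.lrange(key, 0, count - 1)]
```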
**When to update the cache**

Since you can only store a limited amount of data in cache, you'll need to determine which cache update strategy works best for your use case.

**Cache-aside**

*Diagram of cache-aside caching. Source: From cache to in-memory data grid*

This diagram illustrates the cache-aside caching pattern, where a client retrieves data by first checking a cache and, if necessary, falls back to storage (often a database or persistent data store) to retrieve the data and update the cache.

Components of cache-aside caching:

1. **Client:** The client is the primary user or application that requests data. In this pattern, the client first interacts with the cache to try to retrieve the data it needs.
2. **Cache:** The cache is a fast, in-memory storage layer that temporarily holds frequently accessed data to reduce load on the main storage. If the client finds the requested data in the cache, it can be returned immediately, reducing latency and improving performance.
3. **Storage:** Storage represents the main data source, such as a database or persistent storage system. If the requested data is not found in the cache (a "cache miss"), the client fetches it from storage.

Workflow of the cache-aside pattern:

1. **Cache lookup:** When the client needs data, it first checks the cache. If the data is in the cache (a "cache hit"), it is returned to the client immediately.
2. **Cache miss and data retrieval:** If the data is not in the cache (a "cache miss"), the client retrieves the data from the storage.
3. **Cache update:** After fetching the data from storage, the client updates the cache with this data. Subsequent requests for the same data can then be served from the cache.
4. **Data expiration:** Cached data may have an expiration time or time-to-live (TTL), after which it is removed from the cache to ensure freshness. When a cache entry expires, the next client request follows the cache-aside workflow again, fetching updated data from storage if necessary.

Benefits of cache-aside:

* **Reduced latency:** Frequently requested data can be served quickly from the cache, reducing database load and improving response time.
* **Data freshness:** Cached data can be configured to expire, ensuring that the cache eventually reflects updated values from storage.
* **Flexibility:** This pattern allows selective caching of data, providing flexibility based on usage patterns.

The cache-aside pattern is commonly used in applications that require high performance and quick access to frequently requested data, while still maintaining an up-to-date and reliable primary data store.

The application is responsible for reading and writing from storage. The cache does not interact with storage directly. The application does the following:

* Look for entry in cache, resulting in a cache miss
* Load entry from the database
* Add entry to cache
* Return entry

```python
def get_user(self, user_id):
    user = cache.get("user.{0}".format(user_id))
    if user is None:
        # Cache miss: load from the database and populate the cache.
        user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)
        if user is not None:
            key = "user.{0}".format(user_id)
            cache.set(key, json.dumps(user))
    return user
```

Memcached is generally used in this manner.

Subsequent reads of data added to cache are fast. Cache-aside is also referred to as lazy loading. Only requested data is cached, which avoids filling up the cache with data that isn't requested.

**Disadvantage(s): cache-aside**

* Each cache miss results in three trips, which can cause a noticeable delay.
* Data can become stale if it is updated in the database. This issue is mitigated by setting a time-to-live (TTL), which forces an update of the cache entry, or by using write-through (a TTL sketch follows this list).
* When a node fails, it is replaced by a new, empty node, increasing latency.
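To make the TTL mitigation concrete, here is a hedged variant of `get_user` using redis-py; the 600-second TTL, the client variable `r`, and the `db` helper are assumptions, not part of the original primer.

```python
import json
import redis

r = redis.Redis()

def get_user_with_ttl(user_id):
    key = "user.{0}".format(user_id)
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    user = db.query("SELECT * FROM users WHERE user_id = {0}", user_id)  # assumed db helper
    if user is not None:
        r.set(key, json.dumps(user), ex=600)  # ex=600: entry expires after 10 minutes
    return user
```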
**Applications needing cache-aside caching:**

1. **E-commerce platforms:** Product details or inventory data that is frequently accessed but updated infrequently, improving response time while keeping data reasonably current.
2. **Content management systems (CMS):** Popular blog posts or static pages that are read often but edited rarely, balancing performance with the ability to refresh stale data when needed.

**Applications that may not need cache-aside caching:**

1. **Real-time messaging apps:** Chat messages and notifications require immediate updates and are better suited to in-memory or streaming-based caching.
2. **Stock trading platforms:** Live market data changes rapidly and caching introduces delays, making it unsuitable for cache-aside patterns.

**How is cache-aside caching typically configured?**

Cache-aside caching is typically coded in the application layer, where the application explicitly interacts with both the cache and the database. Here's how it works in practice:

1. **Application logic:** The application first checks the cache for data. If the data exists, it's returned from the cache. If not, the application fetches it from the database, updates the cache, and then returns the data.

However, branded caching technologies can provide APIs or configurations to streamline this process. Examples include:

* **Redis:** Widely used for cache-aside implementations, with libraries like Redisson for Java.
* **Memcached:** Lightweight solution where developers integrate cache-aside logic into their applications.
* **AWS ElastiCache:** Managed caching service (supports Redis and Memcached) that integrates with application code for cache-aside patterns.

In summary, cache-aside caching is primarily implemented in application code, but technologies like Redis or Memcached offer tools to simplify this.

**Write-through caching**

*Diagram of write-through caching. Source: Scalability, availability, stability, patterns*

This diagram illustrates the write-through caching pattern, where data written by a user is immediately stored in both the cache and the database (DB), ensuring data consistency between the two layers.

Components and workflow of write-through caching:

1. **User:** The user initiates a write operation, such as updating or creating data.
2. **Step 1 – Write to cache:** The data is first written to the cache. This ensures that the updated data is immediately available in the cache for quick read access.
3. **Step 2 – Store in database:** After writing to the cache, the data is also stored in the database (DB). This guarantees that both the cache and the database contain the same, up-to-date information.
4. **Step 3 – Return to user:** Once the data has been written to both the cache and the database, a response is sent back to the user, confirming that the write operation is complete.

Key characteristics of write-through caching:

* **Consistency:** This pattern ensures data consistency between the cache and the database, since all writes go through both layers.
* **Immediate availability in cache:** Newly written data is available in the cache immediately after the write operation, allowing for faster access on subsequent reads.
* **Slight latency in writes:** Since data is written to both the cache and the database, write operations might experience slight latency but provide high data consistency.

Benefits of write-through caching:

* **Data consistency:** By writing to both the cache and the database simultaneously, this pattern maintains consistent data across both layers, reducing the chances of cache misses.
* **Cache as primary source for reads:** With data immediately available in the cache, read operations can be faster, enhancing overall system performance for frequently accessed data.

The write-through caching pattern is commonly used in applications where data consistency is crucial, as it ensures that data in the cache always reflects the latest updates made to the database. This approach is particularly useful for systems where read performance is prioritized and where consistency between the cache and database needs to be maintained at all times.

The application uses the cache as the main data store, reading and writing data to it, while the cache is responsible for reading and writing to the database:

1. Application adds/updates entry in cache
2. Cache synchronously writes entry to data store
3. Return

Application code:

```python
set_user(12345, {"foo": "bar"})
```

Cache code:

```python
def set_user(user_id, values):
    # Synchronously write to the data store, then update the cache.
    user = db.query("UPDATE Users WHERE id = {0}", user_id, values)
    cache.set(user_id, user)
```

Write-through is a slow overall operation due to the write, but subsequent reads of just-written data are fast. Users are generally more tolerant of latency when updating data than when reading data. Data in the cache is not stale.

**Disadvantage(s): write-through caching**

* When a new node is created due to failure or scaling, the new node will not cache entries until the entry is updated in the database. Cache-aside (lazy loading from the database) in conjunction with write-through can mitigate this issue.
* Most data written might never be read (a waste of caching resources), which can be minimized with a TTL.

**Applications needing write-through caching:**

1. **E-commerce shopping carts:** Every update to the cart is immediately written to both the cache and the database for consistency and reliability.
2. **User profile management systems:** Updates to user profiles (e.g., name or settings) require real-time synchronization between cache and database to prevent stale data.

**Applications that may not need write-through caching:**

1. **Analytics platforms:** Aggregated data can tolerate eventual consistency, making write-behind or batch processing more suitable.
2. **Log management systems:** High-frequency logging doesn't require immediate cache-database synchronization, as logs are rarely queried immediately after being written.

**How is write-through caching typically configured?**

Write-through caching is typically implemented and managed by branded caching technologies, rather than being coded in the application. These technologies handle the synchronization between the cache and the database automatically. When the application writes data, it is written to the cache first, and the cache synchronously writes the same data to the underlying database. The application does not need to directly manage database writes, as the caching solution ensures consistency.

Branded technology solutions for write-through caching:

1. **Redis (with persistence enabled):** Supports write-through caching with Redis persistence (RDB snapshots or AOF logs) to ensure data is synchronized to storage.
2. **AWS ElastiCache (Redis or Memcached):** Provides managed solutions for write-through caching with minimal configuration.
3. **Hazelcast:** In-memory caching solution with built-in write-through support for databases.
4. **Ehcache:** Java-based caching solution that supports write-through mechanisms.

Write-through caching is usually configured in branded caching solutions like Redis, AWS ElastiCache, Hazelcast, or Ehcache, which handle synchronization automatically, reducing the need for custom application code.
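Expanding the pseudocode above into a single layer, a minimal write-through wrapper might look like the following sketch; the class and the injected `cache`/`db` interfaces are hypothetical stand-ins, not a specific product's API.

```python
class WriteThroughCache:
    """Sketch: the application talks only to this layer, which writes
    synchronously to both the cache and the underlying data store."""

    def __init__(self, cache, db):
        self.cache = cache  # assumed cache client with get/set
        self.db = db        # assumed database wrapper with an update_user helper

    def set_user(self, user_id, values):
        user = self.db.update_user(user_id, values)  # synchronous DB write first
        self.cache.set("user.{0}".format(user_id), user)
        return user  # return to the caller only after both writes succeed

    def get_user(self, user_id):
        # Reads are served from the cache, which always holds the latest writes.
        return self.cache.get("user.{0}".format(user_id))
```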
**Write-behind (write-back) caching**

*Diagram of write-behind caching. Source: Scalability, availability, stability, patterns*

This diagram illustrates the write-behind (or write-back) caching pattern, where data written by the user is initially stored in the cache and then asynchronously persisted to the database (DB). This approach reduces latency for the user by deferring the database write.

Components and workflow of write-behind caching:

1. **User:** The user initiates a write operation, such as updating or creating data.
2. **Step 1 – Write to cache:** The data is first written to the cache, making it immediately available for read operations. This provides a low-latency response time, as the user doesn't have to wait for the data to be written to the database.
3. **Step 2 – Add event to queue:** After writing to the cache, an event representing the write operation is added to a queue. This event serves as a record of the pending database write.
4. **Step 3 – Return to user:** Once the data is stored in the cache and the event is added to the queue, the system immediately returns a response to the user, completing the user's interaction.
5. **Step 4 – Asynchronous write to database:** An event processor picks up events from the queue asynchronously and performs the actual write operation to the database (DB) based on the information in the event. This deferred database write reduces immediate load on the database, allowing it to process writes in batches or at intervals.

Key characteristics of write-behind caching:

* **Asynchronous database writes:** By writing to the cache first and deferring the database write, the pattern reduces the response time for the user.
* **Event queue:** The queue ensures that all write operations are logged and eventually processed, maintaining eventual consistency between the cache and database.
* **Optimized database load:** Asynchronous writes can be batched or spaced out, reducing the frequency of database write operations and improving database performance.

Benefits of write-behind caching:

* **Improved user response time:** Users receive a quick response, as the system doesn't wait for the database write.
* **Database load reduction:** Asynchronous writes allow the database to handle fewer, potentially batched writes, optimizing performance for other operations.
* **Eventual consistency:** The system ensures eventual consistency between the cache and database, meaning data in the database will eventually match the cache.

The write-behind caching pattern is commonly used in applications where fast write response times are important and immediate consistency with the database is not required. It allows for optimized performance and a smoother user experience, with data being written to the database asynchronously in the background.

In write-behind, the application does the following (a sketch follows the lists below):

* Add/update entry in cache
* Asynchronously write entry to the data store, improving write performance

**Disadvantage(s): write-behind**

* There could be data loss if the cache goes down prior to its contents hitting the data store.
* It is more complex to implement write-behind than it is to implement cache-aside or write-through.

**Applications needing write-behind caching:**

1. **Billing systems:** Frequently updated billing data can be written to the cache first and asynchronously persisted to the database to improve write performance.
2. **Event logging systems:** High-frequency logs are cached and written to the database in batches, reducing database write overhead.

**Applications that may not need write-behind caching:**

1. **Banking applications:** Transactions require immediate and consistent updates to both the cache and the database to prevent discrepancies.
2. **Real-time monitoring systems:** Sensor data or live metrics need to be written directly to ensure accuracy and immediate availability.
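A minimal sketch of the write-behind flow described above, using Python's standard `queue` and a background worker thread; in production the queue would typically be a durable broker, and the `cache`/`db` helpers are assumed interfaces in the primer's pseudocode style.

```python
import queue
import threading

write_queue = queue.Queue()

def set_user(user_id, values):
    cache.set("user.{0}".format(user_id), values)  # Step 1: write to cache
    write_queue.put((user_id, values))             # Step 2: enqueue pending DB write
    return "OK"                                    # Step 3: return to user immediately

def event_processor():
    # Step 4: asynchronously drain the queue and persist writes to the database.
    while True:
        user_id, values = write_queue.get()
        db.update_user(user_id, values)  # assumed db helper
        write_queue.task_done()

threading.Thread(target=event_processor, daemon=True).start()
```

Note the data-loss disadvantage above: anything still in `write_queue` (or only in the cache) is lost if the process dies before the event processor runs.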
**How is write-behind caching typically configured?**

Write-behind caching is typically implemented through branded caching technologies, which handle asynchronous writes to the underlying database, offloading this responsibility from the application. The application writes data to the cache, and the caching solution asynchronously writes the data to the database in the background, often in batches or with a delay, to improve performance and reduce database load.

Branded technology solutions:

1. **Redis:** Supports custom write-behind patterns through extensions or libraries, often combined with message queues for asynchronous processing.
2. **Hazelcast:** Provides built-in write-behind caching with support for delayed, batched writes to the database.
3. **Ehcache:** Includes write-behind caching out of the box, configurable to handle asynchronous writes efficiently.
4. **AWS ElastiCache (Redis or Memcached):** Can be paired with custom Lambda functions or other services for write-behind implementations.

Write-behind caching is usually configured in branded caching solutions like Redis, Hazelcast, or Ehcache, which manage asynchronous writes efficiently, reducing the need for custom application logic.

**Refresh-ahead caching**

*Diagram of refresh-ahead caching. Source: From cache to in-memory data grid*

This diagram illustrates the refresh-ahead caching pattern, where a cache proactively refreshes or preloads data before it expires, ensuring that frequently accessed data remains up-to-date for the client without requiring constant access to the main storage.

Components and workflow of refresh-ahead caching:

1. **Client:** The client is the application or user requesting data. When the client requests data, it first checks the cache. If the data is in the cache, it is returned immediately, providing a low-latency response.
2. **Cache:** The cache is an in-memory storage layer that holds frequently accessed data to reduce the need for direct access to the storage. With the refresh-ahead pattern, the cache periodically refreshes its data before it expires, based on expected access patterns. This proactive approach keeps data in the cache fresh, reducing cache misses.
3. **Storage:** Storage is the main data source, such as a database or persistent storage system. The cache interacts with the storage (indicated by the dashed line) to retrieve or refresh data when necessary. In a refresh-ahead model, the cache refreshes data from storage before it expires, ensuring that the most current data is available to the client.

Key characteristics of refresh-ahead caching:

* **Proactive data refresh:** The cache refreshes data from storage based on anticipated client access, reducing the likelihood of serving stale data.
* **Low latency for clients:** Clients benefit from faster response times because data is already cached and updated before being requested.
* **Reduced storage load:** Since data is refreshed in the cache in advance, there are fewer direct requests to the storage, optimizing storage performance.
Benefits of refresh-ahead caching:

* **High availability of fresh data:** Frequently accessed data remains up-to-date in the cache, improving the quality of responses to clients.
* **Optimized performance:** By proactively updating the cache, this pattern minimizes cache misses and reduces the load on storage, enhancing overall system performance.
* **Improved user experience:** Users receive the latest data from the cache without experiencing the delays associated with cache misses and data fetching from the main storage.

The refresh-ahead caching pattern is ideal for applications with predictable access patterns, where certain data is frequently requested and it's beneficial to keep this data fresh and readily available in the cache. This approach enhances performance and ensures a consistent, low-latency experience for clients.

You can configure the cache to automatically refresh any recently accessed cache entry prior to its expiration.

Refresh-ahead can result in reduced latency vs. read-through if the cache can accurately predict which items are likely to be needed in the future.

**Disadvantage(s): refresh-ahead**

* Not accurately predicting which items are likely to be needed in the future can result in worse performance than without refresh-ahead.

**Applications needing refresh-ahead caching:**

1. **Streaming platforms:** Frequently accessed recommendations or trending content lists can be preemptively refreshed to ensure users always get updated results without delay.
2. **Weather forecasting systems:** Regularly updated weather data can be preloaded into the cache based on expected access patterns, avoiding stale results during peak usage.

**Applications that may not need refresh-ahead caching:**

1. **Stock trading platforms:** Rapidly changing stock prices make preemptive refresh impractical due to the high volatility and need for real-time accuracy.
2. **Chat applications:** Conversations and notifications are user-triggered, and preemptively refreshing cache entries provides little benefit.

**How is refresh-ahead caching typically configured?**

Refresh-ahead caching is typically configured in branded caching technologies, which manage the proactive refreshing of cache entries before they expire, reducing the need for application-level coding. The caching solution monitors access patterns and preemptively refreshes frequently accessed or soon-to-expire cache entries by fetching updated data from the database. The application simply reads from the cache, benefiting from always-fresh data.

Branded technology solutions:

1. **Redis (with custom scripts):** Allows implementing refresh-ahead logic using Lua scripts or libraries to refresh keys before expiration.
2. **Hazelcast:** Provides built-in refresh-ahead caching features with configurable refresh policies.
3. **Ehcache:** Includes support for refresh-ahead caching, enabling preemptive refreshes of expiring entries.
4. **AWS ElastiCache:** Can be paired with automated triggers or Lambda functions to implement refresh-ahead strategies.

Refresh-ahead caching is primarily configured in branded caching solutions like Redis, Hazelcast, or Ehcache, which handle proactive refreshing with minimal application code involvement.
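For illustration, here is a simplified refresh-ahead sketch in application code (most deployments would lean on the branded solutions above instead): a background loop re-fetches recently stored keys shortly before their TTL expires. The thresholds and the `cache`/`db` helpers are illustrative assumptions.

```python
import time
import threading

TTL_SECONDS = 300
REFRESH_MARGIN = 30   # refresh entries this many seconds before they expire
expiry_times = {}     # key -> absolute expiry timestamp

def set_with_ttl(key, value):
    cache.set(key, value)
    expiry_times[key] = time.time() + TTL_SECONDS

def get(key):
    value = cache.get(key)
    if value is None:
        value = db.load(key)  # assumed loader
        set_with_ttl(key, value)
    return value

def refresher():
    # Proactively refresh soon-to-expire entries so clients rarely see a miss.
    while True:
        now = time.time()
        for key, expires_at in list(expiry_times.items()):
            if expires_at - now < REFRESH_MARGIN:
                set_with_ttl(key, db.load(key))
        time.sleep(5)

threading.Thread(target=refresher, daemon=True).start()
```

If access patterns are not predictable, this loop wastes storage reads on entries nobody asks for again, which is exactly the disadvantage noted above.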
**(General) Disadvantage(s): cache**

* Need to maintain consistency between caches and the source of truth, such as the database, through cache invalidation.
* Cache invalidation is a difficult problem; there is additional complexity associated with when to update the cache.
* Need to make application changes, such as adding Redis or Memcached.

**Source(s) and further reading**

* From cache to in-memory data grid
* Scalable system design patterns
* Introduction to architecting systems for scale
* Scalability, availability, stability, patterns
* Scalability
* AWS ElastiCache strategies
* Wikipedia: cache
