Using Redis as a cache greatly improves application performance and efficiency, especially for data queries. But it also introduces problems. The most important of these is data consistency between the cache and the database, which strictly speaking cannot be fully solved: if strong consistency is required, caching cannot be used.
Other typical problems are cache penetration, cache avalanche, and cache breakdown, and the industry already has popular solutions for all three. This article does not aim to solve these problems more perfectly, nor to overturn the popular solutions. Instead, it demonstrates the three phenomena with actual running code. The reason is that reading only the textbook explanations rarely leaves a vivid picture in your head; working through real code deepens your understanding of these problems.
Cache penetration
Cache penetration refers to querying data that does not exist in the database. In the usual cache-aside flow, if the key is missing from the cache or has expired, the database is queried and the retrieved object is put into the cache; if the database query returns nothing, nothing is cached.
Code flow
- The caller passes the primary key ID of the object
- Look up the object in the cache by that key
- If the object is found, return it directly
- If it is not found, query the database
- If the database returns a non-empty object, put it into the cache (with an expiration time)

Now imagine: what happens if the parameter passed in is -1? An object with ID -1 can never exist, so every request queries the database, every query comes back empty, and nothing is ever cached. A malicious attacker can exploit this vulnerability to put pressure on the database or even bring it down. Even if UUIDs are used as keys, it is easy to fabricate non-existent keys and attack.
In my own work I use the method of caching null values: in the last step of the flow above, if the object queried from the database is empty, it is still put into the cache, but with a short expiration time, for example 60 seconds.
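A minimal sketch of the null-value-caching fix, again using an in-memory dict as a stand-in for Redis (the `NULL` sentinel and the 60-second miss TTL follow the approach described above; the other numbers are illustrative):

```python
import time

cache = {}
db = {1: "widget"}
db_queries = 0
NULL = object()   # sentinel meaning "the database has no such row"

def query_db(product_id):
    global db_queries
    db_queries += 1
    return db.get(product_id)

def get_product(product_id):
    entry = cache.get(product_id)
    if entry is not None and entry[1] > time.time():
        return None if entry[0] is NULL else entry[0]    # hit (maybe a cached miss)
    value = query_db(product_id)
    if value is not None:
        cache[product_id] = (value, time.time() + 300)   # normal TTL: 5 minutes
    else:
        cache[product_id] = (NULL, time.time() + 60)     # cache the miss briefly
    return value

get_product(-1)   # first call hits the database...
get_product(-1)   # ...the second is served from the cached null
```

With a real Redis client the same idea is usually implemented by storing a placeholder string with a short `EXPIRE`, so repeated attacks on a non-existent key hit only the cache.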
Cache avalanche
Cache avalanche refers to a large batch of cached keys expiring within the same short period of time.
One cause of an avalanche: as I write this, it is almost midnight before Double Twelve (the December 12 shopping festival), and a wave of rush buying is about to begin. If the cache for that batch of goods expires at one o'clock in the morning, all the queries for those goods fall on the database at once, producing a periodic pressure spike.
In my e-commerce projects I generally give different product categories different cache lifetimes, and add a random factor for goods within the same category. This spreads the expiration times out as much as possible. Products in popular categories get longer cache times and products in unpopular categories shorter ones, which also saves resources on the caching service.
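The per-category TTL plus random jitter can be sketched like this. The category names and base TTL values are illustrative assumptions, not figures from the article:

```python
import random

# Assumed base TTLs in seconds per product category (illustrative values).
BASE_TTL = {"hot": 24 * 3600, "normal": 4 * 3600, "cold": 600}

def ttl_for(category):
    """Return the category's base TTL plus up to 10% random jitter,
    so keys cached in the same batch do not all expire together."""
    base = BASE_TTL[category]
    return base + random.randint(0, base // 10)

# With a real client this would be used roughly as:
#   redis_client.setex(key, ttl_for("hot"), value)
```

Even a small jitter window is enough to turn one sharp expiration spike into a smear of misses the database can absorb.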
In fact, concentrated expiration is not the truly fatal case. The more fatal cache avalanche happens when a cache-server node goes down or drops off the network. A naturally occurring avalanche is spread over some window of time, so the database can usually withstand the pressure; it amounts to periodic load. But a cache node going down puts unpredictable pressure on the database server and can easily crush the database in an instant.
Cache breakdown
Cache breakdown refers to a single very hot key that constantly carries heavy, concentrated concurrency. The moment that key expires, the continuing flood of concurrent requests breaks through the cache and hits the database directly, like drilling a hole through a barrier.
In most cases, though, this kind of surge is actually hard-pressed to crush a database server; few companies ever reach that level of traffic. So the pragmatic approach I take on e-commerce projects is to prepare for the main (hit) products early by setting their cache to never expire. Even if some other product organically ferments into a hit, it can be switched to never-expire as well.
The simplest path is often the best: in practice I have rarely needed the popular mutex-lock ("mutex key") solution for cache breakdown.
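For completeness, here is a minimal sketch of that mutex-lock approach: when the hot key expires, only one caller rebuilds the cache entry while the rest wait. A local `threading.Lock` stands in for what would be a distributed lock (e.g. Redis `SET key value NX EX ...`) in a multi-server deployment; the key name and query function are hypothetical:

```python
import threading
import time

cache = {}
db_queries = 0
rebuild_lock = threading.Lock()   # stand-in for a distributed lock (e.g. Redis SET NX)

def query_db(key):
    global db_queries
    db_queries += 1
    time.sleep(0.05)              # simulate a slow database query
    return f"value-of-{key}"

def get_hot_key(key):
    value = cache.get(key)
    if value is not None:
        return value                      # fast path: cache hit
    with rebuild_lock:                    # only one caller rebuilds the entry
        value = cache.get(key)            # double-check inside the lock
        if value is None:
            value = query_db(key)
            cache[key] = value
        return value

threads = [threading.Thread(target=get_hot_key, args=("sku:42",)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# ten concurrent readers, but only one database query
```

The double-check inside the lock is what prevents the waiting threads from each re-querying the database once the first rebuilder releases the lock.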