【Preface】
When the volume of data reaches a certain level, the database becomes the bottleneck of the whole system. The usual optimization is read/write splitting: the database is divided into a master and one or more slaves, with write requests going to the master and read requests going to the slaves.
【Ideas for solving the data delay problem】
1. The principle of master-slave synchronization (taking MySQL, the most popular choice, as an example)
The classic MySQL master-slave synchronization flow via binlog works as follows: the master records every data change in its binary log (binlog), the slave's I/O thread pulls those events into a local relay log, and the slave's SQL thread replays the relay log to apply the same changes to the slave's data.
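The resulting lag can be observed directly on the replica. Below is a minimal sketch, assuming the pymysql driver and purely illustrative connection settings, that reads Seconds_Behind_Master from SHOW SLAVE STATUS:

```python
import pymysql

# Illustrative connection settings for the replica; adjust to your environment.
SLAVE_CONFIG = dict(host="slave-db.internal", user="monitor",
                    password="secret", charset="utf8mb4")

def replication_lag_seconds():
    """Return the replica's reported lag, or None if replication is not running."""
    conn = pymysql.connect(cursorclass=pymysql.cursors.DictCursor, **SLAVE_CONFIG)
    try:
        with conn.cursor() as cur:
            cur.execute("SHOW SLAVE STATUS")
            row = cur.fetchone()
            # Seconds_Behind_Master is NULL when the replication SQL thread is stopped.
            return row["Seconds_Behind_Master"] if row else None
    finally:
        conn.close()

if __name__ == "__main__":
    print("replication lag (s):", replication_lag_seconds())
```

Note that Seconds_Behind_Master is reported in whole seconds, so the millisecond-level lag discussed below normally shows up as 0.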
2. How do problems arise?
1. From the flow above it is easy to see that master-slave synchronization always involves some delay. The size of that delay depends on:
(1) The amount of data produced since the last synchronization
(2) The current network conditions between the master and slave servers
(3) The load on the master and slave servers themselves (CPU, memory, I/O, etc.)
2. Since database servers usually sit on the same intranet and are provisioned with more capacity than is actually needed, synchronization is normally very fast, typically on the order of milliseconds;
3. In most business scenarios, millisecond-level latency can simply be ignored;
4. Some special cases, however, need reads to reflect the latest write immediately and cannot tolerate even a millisecond-level gap. The following are common solutions for those situations.
3. Data latency solutions:
1. Scheme 1: Double write in the application (write to both the master and the slave at the same time); see the sketch after this list.
2. Scheme 2: Route reads that must be up to date directly to the master; also shown in the sketch after this list.
3. Scheme 3: Write to the master and also write the value to a cache (with an expiration time slightly longer than the maximum replication delay); the read path checks the cache first and falls back to the slave on a cache miss. A sketch follows after the pros and cons below.
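Below is a minimal sketch of Schemes 1 and 2, assuming the pymysql driver; the connection settings, the orders table, and the function names (save_order_double_write, get_order_from_master) are illustrative, not from the original post:

```python
import pymysql

# Illustrative connection settings; adjust to your environment.
MASTER_CONFIG = dict(host="master-db.internal", user="app", password="secret", database="shop")
SLAVE_CONFIG = dict(host="slave-db.internal", user="app", password="secret", database="shop")

def _execute(config, sql, args=None, fetch=False):
    """Run one statement against the given database and optionally fetch one row."""
    conn = pymysql.connect(cursorclass=pymysql.cursors.DictCursor, **config)
    try:
        with conn.cursor() as cur:
            cur.execute(sql, args)
            row = cur.fetchone() if fetch else None
        conn.commit()
        return row
    finally:
        conn.close()

# Scheme 1: the application writes the same data to the master and the slave.
# REPLACE keeps the statement idempotent, but replication must still be configured
# to tolerate the duplicate apply when the binlog event later reaches the slave.
def save_order_double_write(order_id, amount):
    sql = "REPLACE INTO orders (id, amount) VALUES (%s, %s)"
    _execute(MASTER_CONFIG, sql, (order_id, amount))
    _execute(SLAVE_CONFIG, sql, (order_id, amount))  # direct write bypasses the replication lag

# Scheme 2: reads that must see the latest data go straight to the master.
def get_order_from_master(order_id):
    return _execute(MASTER_CONFIG,
                    "SELECT id, amount FROM orders WHERE id = %s",
                    (order_id,), fetch=True)
```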
4. Advantages and disadvantages of the three schemes:
1. Scheme 1: Double writing costs some write performance; it is relatively simple to implement, but it is not suitable for scenarios with highly concurrent writes;
2. Scheme 2: Reads that go to the master consume the master's resources; it is relatively simple to implement, but it is not suitable for scenarios with highly concurrent reads;
3. Scheme 3: The extra cache write costs a little write performance and the implementation is more complex, but it suits both highly concurrent reads and writes (cache reads and writes are very fast).
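A minimal sketch of Scheme 3 follows, assuming pymysql for MySQL and the redis client for the cache; the connection settings, the orders table, and the 5-second TTL are illustrative, the only requirement being that the TTL is slightly longer than the worst replication delay you observe:

```python
import json

import pymysql
import redis

# Illustrative settings; adjust to your environment.
MASTER_CONFIG = dict(host="master-db.internal", user="app", password="secret", database="shop")
SLAVE_CONFIG = dict(host="slave-db.internal", user="app", password="secret", database="shop")
cache = redis.Redis(host="cache.internal", port=6379, decode_responses=True)

# Expiration slightly longer than the maximum replication delay seen in practice.
CACHE_TTL_SECONDS = 5

def _query(config, sql, args=None, fetch=False):
    """Run one statement against the given database and optionally fetch one row."""
    conn = pymysql.connect(cursorclass=pymysql.cursors.DictCursor, **config)
    try:
        with conn.cursor() as cur:
            cur.execute(sql, args)
            row = cur.fetchone() if fetch else None
        conn.commit()
        return row
    finally:
        conn.close()

def save_order(order_id, amount):
    """Scheme 3 write path: write the master, then put the fresh value into the cache."""
    _query(MASTER_CONFIG, "REPLACE INTO orders (id, amount) VALUES (%s, %s)", (order_id, amount))
    cache.set(f"order:{order_id}",
              json.dumps({"id": order_id, "amount": amount}),
              ex=CACHE_TTL_SECONDS)

def get_order(order_id):
    """Scheme 3 read path: read the cache first, fall back to the slave on a miss."""
    cached = cache.get(f"order:{order_id}")
    if cached is not None:
        return json.loads(cached)
    # By the time the cache entry has expired, the slave should have caught up.
    return _query(SLAVE_CONFIG, "SELECT id, amount FROM orders WHERE id = %s",
                  (order_id,), fetch=True)
```

The write path keeps the latest value readable from the cache during the replication window, which is why the expiration time only needs to cover the worst-case delay.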
【Summary】
1. Implementation is important, but more important is the idea;
2. Many underlying principles and ideas are universal.
Original: https://blog.csdn.net/zhanghan18 ... le/details/91638443