Test objective: measure the impact on a SQL instance of reducing its memory.
Test instance: an online instance with 100 GB of memory.
Test steps:
18:20  Start recording PERF data
18:30  Reduce SQL max server memory to 3/4 of the original (75 GB) - the change took 3.9 seconds
18:40  Restore SQL max server memory to the original 100 GB - the change took 3.8 seconds
18:50  Stop recording PERF data
(wait interval)
19:20  Start recording PERF data
19:30  Stop recording PERF data
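The memory adjustments in the steps above can be made with sp_configure. A minimal sketch, assuming sysadmin permissions and that 'max server memory (MB)' is set in megabytes (76800 MB = 75 GB, 102400 MB = 100 GB):

```sql
-- Sketch: lower max server memory to 75 GB, then restore it to 100 GB.
-- 'show advanced options' must be enabled for this setting to be visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'max server memory (MB)', 76800;  -- 3/4 of 100 GB
RECONFIGURE;

-- ... run the low-memory test window, then restore ...
EXEC sp_configure 'max server memory (MB)', 102400; -- back to 100 GB
RECONFIGURE;
```

The change takes effect immediately without a restart, which matches the few-second adjustment times recorded above.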
Recorded content: 4 x 10 minutes of performance data.
18:20-18:30  Phase 1, normal-memory period
18:30-18:40  Phase 2, low-memory period
18:40-18:50  Phase 3, recovery period 1
19:20-19:30  Phase 4, recovery period 2
Counters recorded:
Process(_Total)\% Processor Time
Process(sqlservr)\% Processor Time
Processor(_Total)\% Processor Time
Processor(_Total)\% User Time
Processor(_Total)\% Privileged Time
PhysicalDisk(_Total)\% Idle Time
PhysicalDisk(_Total)\% Disk Time
PhysicalDisk(_Total)\Avg. Disk Queue Length
PhysicalDisk(_Total)\Current Disk Queue Length
Memory\Page Faults/sec
Memory\Available MBytes
Memory\Pages/sec
Databases(_Total)\Active Transactions
General Statistics\User Connections
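The Processor, PhysicalDisk, and Memory counters above come from Windows Performance Monitor, but the SQL-side counters (Active Transactions, User Connections) can also be sampled directly from inside the instance. A sketch using the sys.dm_os_performance_counters DMV, which exposes the same SQL Server counter values:

```sql
-- Sample the SQL-side counters recorded in this test, plus buffer-related
-- counters useful for the memory phases (names assume a default instance).
SELECT RTRIM([object_name]) AS [object_name],
       RTRIM(counter_name)  AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Active Transactions',
                       N'User Connections',
                       N'Lazy writes/sec',
                       N'Page life expectancy');
```

Polling this query on a fixed interval during each 10-minute phase gives a per-phase time series comparable to the PERF recordings.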
Test results:
CPU: no obvious change; the impact is negligible.
IO: significant changes - % Idle Time dropped by about 1%, and both the disk queue length and % Disk Time roughly doubled. Recovery afterwards was slow (more than 1 hour).
SQL: active transactions increased noticeably (about 3x), memory and IO paging also increased (about 2x), and execution-plan recompilation was especially pronounced, as were lazy writer writes.
Test data: attached Excel file.
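The recompilation effect noted in the results can be spot-checked from the plan cache: sys.dm_exec_query_stats exposes plan_generation_num, which increments each time a cached plan is recompiled. A diagnostic sketch (TOP 10 and the filter threshold are illustrative choices, not part of the original test):

```sql
-- List the statements whose plans have been recompiled most often,
-- to confirm the recompilation spike seen during the low-memory phase.
SELECT TOP 10
       qs.plan_generation_num,          -- number of plan (re)compilations
       qs.execution_count,
       st.[text] AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.plan_generation_num > 1        -- only plans recompiled at least once
ORDER BY qs.plan_generation_num DESC;
```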
Attachment: selected charts