The following is a summary of the problems encountered when implementing a high-performance socket component. If you only need to handle a few thousand concurrent connections, careful code writing is usually enough; but if you must face tens of thousands of concurrent connections or more, the issues summarized below should be of great help in writing such an application.
SocketAsyncEventArgs
This object was introduced in .NET 2.0 SP1 and is mainly used to implement high-performance socket send and receive processing (see MSDN for a more detailed introduction). It provides three ways to set the send and receive buffers: SetBuffer(Int32, Int32), SetBuffer(Byte[], Int32, Int32), and the BufferList property; the first two cannot coexist with the latter (MSDN explains why). When you set a buffer, whether through SetBuffer(Byte[], Int32, Int32) or BufferList, try to set it only once per SocketAsyncEventArgs instance for the lifetime of the program, because this operation is resource-intensive. It is recommended to bind the data buffer with SetBuffer(Byte[], Int32, Int32) when the SocketAsyncEventArgs is constructed, and afterwards adjust only the offset and count with SetBuffer(Int32, Int32). When you use a BufferList, it is best not to change the byte[] sources referenced by the IList&lt;ArraySegment&lt;byte&gt;&gt;; changing them forces SocketAsyncEventArgs to rebind the buffers and hurts efficiency.
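The recommended pattern can be sketched as follows; the 4 KB buffer size is an illustrative assumption:

```csharp
using System;
using System.Net.Sockets;

class BufferSetupDemo
{
    static void Main()
    {
        const int bufferSize = 4096; // illustrative size
        var args = new SocketAsyncEventArgs();

        // Bind the byte[] once, at construction time -- this is the
        // expensive call that should not be repeated.
        args.SetBuffer(new byte[bufferSize], 0, bufferSize);

        // Later, only move the window inside the already-bound buffer.
        // SetBuffer(Int32, Int32) does not rebind the array, so it is cheap.
        args.SetBuffer(0, 1024);

        Console.WriteLine(args.Count); // the active window is now 1024 bytes
    }
}
```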
SocketAsyncEventArgs pool
As mentioned above, avoid changing the buffer referenced by a SocketAsyncEventArgs. To achieve this, build a SocketAsyncEventArgs pool and initialize as many SocketAsyncEventArgs objects as possible when the program starts. Besides reducing object creation, pooling also saves a great deal of memory. The main reason is that you cannot know in advance how large each message will be. You could, of course, define a maximum message size up front and size every SocketAsyncEventArgs buffer accordingly, but that wastes a great deal of memory, because not all messages reach the maximum length. Instead, allocate a moderate buffer size per SocketAsyncEventArgs, serve the instances from the pool, and flexibly write one message across one or more SocketAsyncEventArgs objects, or pack multiple messages into a single SocketAsyncEventArgs for processing.
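A minimal pool sketch along these lines; the class and method names (SaeaPool, Rent, Return) and the pre-allocation strategy are illustrative assumptions, not a prescribed design:

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

// A minimal SocketAsyncEventArgs pool: every instance gets its buffer bound
// exactly once, up front, and is then reused for the program's lifetime.
class SaeaPool
{
    private readonly ConcurrentStack<SocketAsyncEventArgs> _items =
        new ConcurrentStack<SocketAsyncEventArgs>();

    public SaeaPool(int count, int bufferSize)
    {
        for (int i = 0; i < count; i++)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[bufferSize], 0, bufferSize); // bind once
            _items.Push(args);
        }
    }

    public SocketAsyncEventArgs Rent()
    {
        SocketAsyncEventArgs args;
        // Fall back to a fresh instance if the pool is exhausted.
        return _items.TryPop(out args) ? args : new SocketAsyncEventArgs();
    }

    public void Return(SocketAsyncEventArgs args)
    {
        args.AcceptSocket = null; // drop per-operation state, keep the buffer
        _items.Push(args);
    }
}
```

A caller rents an instance for each receive or send and returns it in the operation's Completed handler, so the bound buffers are recycled rather than re-created.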
Queue
I have seen many implementations that spawn a thread, or dispatch to the thread pool, directly after receiving data. This is bad practice, because it gives you no control over the threads' work, including how they wait. With custom threads plus a queue, you control how many threads are responsible for which work; pending work simply sits in the queue, so there is never a large number of threads running or waiting, which would cost the operating system resources in thread scheduling.
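This pattern can be sketched with a fixed set of worker threads consuming from one blocking queue; the worker count of 2 and the integer work items are illustrative assumptions:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

class WorkQueueDemo
{
    // A fixed number of worker threads drain a single shared queue.
    static int Run(int workerCount, int itemCount)
    {
        var queue = new BlockingCollection<int>();
        int processed = 0;
        var workers = new Thread[workerCount];
        for (int i = 0; i < workers.Length; i++)
        {
            workers[i] = new Thread(() =>
            {
                // GetConsumingEnumerable blocks until work arrives and
                // exits cleanly once CompleteAdding has been called,
                // so idle workers wait in the queue, not in ad-hoc spins.
                foreach (var item in queue.GetConsumingEnumerable())
                    Interlocked.Add(ref processed, item);
            });
            workers[i].Start();
        }

        for (int i = 1; i <= itemCount; i++) queue.Add(i); // producers enqueue
        queue.CompleteAdding();
        foreach (var w in workers) w.Join();
        return processed;
    }

    static void Main()
    {
        Console.WriteLine(Run(2, 100)); // 1 + 2 + ... + 100 = 5050
    }
}
```

In a socket server, the receive callback would only enqueue the received message and return, leaving parsing and business logic to the fixed workers.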
Delayed data merging
Delayed data merging is a means of reducing excessive network I/O operations. It is not needed in many scenarios, but it is common in game servers. Someone once asked me: if there are 400 users in a scene, and every change in one user's environment must be broadcast to all the other users, then without merging the data this produces a frightening number of network I/O operations (roughly 400 × 399 sends per update round), far more than the system can carry. Therefore, for such an application it is necessary to merge the data and send it at an appropriate delay interval.
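The idea can be sketched as follows: updates are buffered and flushed as one payload per interval instead of being sent one by one. The class name DelayedSender, the byte-count threshold, and the message contents are illustrative assumptions; a real server would hand the merged payload to the socket rather than return it:

```csharp
using System;
using System.Collections.Generic;
using System.Text;

// Buffers small messages and merges them into one payload, trading a small
// delay for far fewer socket sends.
class DelayedSender
{
    private readonly List<byte[]> _pending = new List<byte[]>();
    private int _pendingBytes;
    private readonly int _maxBytes;

    public DelayedSender(int maxBytes) { _maxBytes = maxBytes; }

    // Queue one message; returns a merged payload early if the threshold
    // is reached, otherwise null (the timer will flush later).
    public byte[] Enqueue(byte[] message)
    {
        _pending.Add(message);
        _pendingBytes += message.Length;
        return _pendingBytes >= _maxBytes ? Flush() : null;
    }

    // Called by a timer at the chosen delay interval (e.g. every 50 ms).
    public byte[] Flush()
    {
        if (_pending.Count == 0) return null;
        var merged = new byte[_pendingBytes];
        int offset = 0;
        foreach (var m in _pending)
        {
            Buffer.BlockCopy(m, 0, merged, offset, m.Length);
            offset += m.Length;
        }
        _pending.Clear();
        _pendingBytes = 0;
        return merged; // one socket send instead of many
    }
}

class Demo
{
    static void Main()
    {
        var sender = new DelayedSender(1024);
        sender.Enqueue(Encoding.ASCII.GetBytes("move:1"));
        sender.Enqueue(Encoding.ASCII.GetBytes("move:2"));
        Console.WriteLine(sender.Flush().Length); // 12: two 6-byte updates merged
    }
}
```

The delay interval is a tuning knob: longer intervals merge more updates per send, but add latency to each user's view of the scene.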