I'd been using HttpClient incorrectly for years, and eventually it caught up with me: my site became unstable and my customers were angry. A simple fix eliminated the instability, and as a bonus, the more efficient socket usage actually improved the application's performance.
Microservices can be a difficult thing to deal with. As more services are added and monolithic applications are decomposed, the number of communication paths between services tends to grow and grow. There are many options for communication, but HTTP is a perennially popular one. If your microservices are built in C# or any other .NET language, chances are you're already using HttpClient.
The problem
The using statement is a C# feature that handles disposable objects. Once the using block completes, the disposable object (in this case HttpClient) goes out of scope and is disposed: its Dispose method is called, cleaning up any resources in use. This is a very typical pattern in .NET, and we use it for everything from database connections to stream writers. Any object holding an external resource that must be cleaned up implements the IDisposable interface for exactly this purpose.
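For instance, the familiar pattern looks like this (a StreamWriter writing to a hypothetical log.txt, used here purely for illustration):

```csharp
using System.IO;

class Example
{
    static void Main()
    {
        // The using block guarantees Dispose() is called when the
        // writer goes out of scope, even if an exception is thrown.
        using (var writer = new StreamWriter("log.txt"))
        {
            writer.WriteLine("Hello");
        } // writer.Dispose() runs here, flushing and releasing the file handle
    }
}
```

For most IDisposable types, this is exactly the right thing to do.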
And you can't be blamed for wanting to wrap HttpClient in a using block. First, it's considered good practice to do so. In fact, Microsoft's own documentation for using says:

As a rule, when you use an IDisposable object, you should declare and instantiate it in a using statement.

Second, just about every example of HttpClient you may have seen wraps it in a using block, including recent documentation on the ASP.NET site itself. Articles all over the internet say the same.
But HttpClient is different. Although it implements the IDisposable interface, it is actually a shared object. Behind the scenes it is reentrant and thread-safe. Instead of creating a new instance of HttpClient for each request, you should share a single instance for the entire lifetime of the application. Let's look at why.
See for yourself
Here's a simple program to demonstrate HttpClient:
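The original code listing isn't reproduced here, but a minimal sketch of the kind of program being described, ten GET requests, each with its own HttpClient wrapped in a using block, would look something like this:

```csharp
using System;
using System.Net.Http;

class Program
{
    static void Main()
    {
        Console.WriteLine("Starting connections");
        for (int i = 0; i < 10; i++)
        {
            // A new HttpClient per request -- the pattern under test.
            using (var client = new HttpClient())
            {
                var result = client.GetAsync("http://aspnetmonsters.com").Result;
                Console.WriteLine(result.StatusCode);
            }
        }
        Console.WriteLine("Connections done");
    }
}
```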
This opens 10 requests to http://aspnetmonsters.com and performs a GET against each; we just print the status code so we know it's working. The output is simply a series of OK status codes.
Wait, there's more!
All is well, and everything is right with the world. Except it isn't. If we pull up the netstat tool and examine the state of the sockets on the machine while the program runs, we see:
```
C:\code\socket>NETSTAT.EXE
...
  Proto  Local Address          Foreign Address        State
  TCP    10.211.55.6:12050      waws-prod-bay-017:http TIME_WAIT
  TCP    10.211.55.6:12051      waws-prod-bay-017:http TIME_WAIT
  TCP    10.211.55.6:12053      waws-prod-bay-017:http TIME_WAIT
  TCP    10.211.55.6:12054      waws-prod-bay-017:http TIME_WAIT
  TCP    10.211.55.6:12055      waws-prod-bay-017:http TIME_WAIT
  TCP    10.211.55.6:12056      waws-prod-bay-017:http TIME_WAIT
  TCP    10.211.55.6:12057      waws-prod-bay-017:http TIME_WAIT
  TCP    10.211.55.6:12058      waws-prod-bay-017:http TIME_WAIT
  TCP    10.211.55.6:12059      waws-prod-bay-017:http TIME_WAIT
  TCP    10.211.55.6:12060      waws-prod-bay-017:http TIME_WAIT
  TCP    10.211.55.6:12061      waws-prod-bay-017:http TIME_WAIT
  TCP    10.211.55.6:12062      waws-prod-bay-017:http TIME_WAIT
  TCP    127.0.0.1:1695         SIMONTIMMS742B:1696    ESTABLISHED
...
```

Well, that's weird... The application has exited, and yet there are still a bunch of connections open to the Azure machine that hosts the ASP.NET Monsters website. They are in the TIME_WAIT state, which means the connection has been closed on one side (ours), but we're still waiting to see whether any additional packets come in, because they may have been delayed somewhere on the network. [The original post includes a diagram of the TCP connection states at this point.]
Windows holds a connection in this state for 240 seconds (controlled by the registry value [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpTimedWaitDelay]). There is a limit to how quickly Windows can open new sockets, so if you exhaust the connection pool you may see an error like this:
```
Unable to connect to the remote server
System.Net.Sockets.SocketException: Only one usage of each socket address (protocol/network address/port) is normally permitted.
```

Searching for this error will turn up plenty of bad advice about lowering the connection timeout. In fact, lowering the timeout can lead to other harmful consequences when an application that uses HttpClient correctly, or is built similarly, runs on the server. We need to understand what "correctly" means and fix the underlying problem, rather than tinkering with machine-level settings.
The fix
If we share a single instance of HttpClient, we can reduce the waste of sockets by reusing them:
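A minimal sketch of the shared-instance version (again, a reconstruction rather than the original listing) might look like:

```csharp
using System;
using System.Net.Http;

class Program
{
    // One HttpClient, shared for the lifetime of the application.
    private static readonly HttpClient Client = new HttpClient();

    static void Main()
    {
        Console.WriteLine("Starting connections");
        for (int i = 0; i < 10; i++)
        {
            // All ten requests go through the same client,
            // so the underlying socket can be reused.
            var result = Client.GetAsync("http://aspnetmonsters.com").Result;
            Console.WriteLine(result.StatusCode);
        }
        Console.WriteLine("Connections done");
    }
}
```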
Note that the entire application has just one shared instance of HttpClient. It still works exactly as before (in fact, slightly faster thanks to socket reuse). netstat now shows only:
```
  TCP    10.211.55.6:12254      waws-prod-bay-017:http ESTABLISHED
```

In our production scenario, the socket count averaged around 4,000 and peaked above 5,000, effectively exhausting the resources available on the server and causing our services to fall over. After implementing the change, sockets in use dropped from an average of more than 4,000 to consistently under 400, typically around 100.
This is a portion of a chart from our monitoring tool, showing what happened after we deployed the fix as a proof of concept to a small number of microservices.
The drop was dramatic. If your services are under any kind of load, keep these two things in mind:
1. Make your HttpClient instance static.
2. Do not dispose of it or wrap it in a using block, unless you are explicitly looking for that particular behavior (such as making your services fail).
Summary
The socket exhaustion problem we'd been fighting for months is gone, and our customers held a virtual parade. I can't overstate how non-obvious this mistake is. Over the years we've been trained to dispose of objects that implement IDisposable, and many refactoring tools such as R# and CodeRush will actually warn you if you don't. In this case, though, disposing of HttpClient is the wrong thing to do. It's unfortunate that HttpClient implements IDisposable at all, because it encourages exactly the wrong behavior.