iPerf3 is a tool for actively measuring the maximum achievable bandwidth on an IP network. It supports tuning of various parameters related to timing, buffers, and protocols (TCP, UDP, and SCTP, over IPv4 and IPv6), and for each test it reports the measured throughput, loss, and other statistics.
iPerf3 (often written iperf3) is a redesign of the original iPerf developed by NLANR/DAST; the current version is developed primarily by ESnet / Lawrence Berkeley National Laboratory. It is a new implementation from the ground up that shares no code with the original, aiming for a smaller, simpler codebase and a library version that can be used in other programs. iperf3 also incorporates features found in other tools such as nuttcp and netperf that are missing from the original iPerf, for example zero-copy mode and optional JSON output. Note that iperf3 is not backwards compatible with the original iPerf.
iPerf3 can be used to test the network communication speed between devices: install iPerf3 on two machines, run one as the server and the other as the client, and measure the communication speed by having them send traffic to each other.
The iPerf3 parameters are as follows:
Usage: iperf [-s|-c host] [options]
       iperf [-h|--help] [-v|--version]

Server or Client:
  -p, --port      #           server port to listen on/connect to
  -f, --format    [kmgKMG]    format to report: Kbits, Mbits, KBytes, MBytes
  -i, --interval  #           seconds between periodic bandwidth reports
  -F, --file name             xmit/recv the specified file
  -B, --bind      <host>      bind to a specific interface
  -V, --verbose               more detailed output
  -J, --json                  output in JSON format
  --logfile f                 send output to a log file
  -d, --debug                 emit debugging output
  -v, --version               show version information and quit
  -h, --help                  show this message and quit
Server specific:
  -s, --server                run in server mode
  -D, --daemon                run the server as a daemon
  -I, --pidfile file          write PID file
  -1, --one-off               handle one client connection then exit
Client specific:
  -c, --client    <host>      run in client mode, connecting to <host>
  -u, --udp                   use UDP rather than TCP
  -b, --bandwidth #[KMG][/#]  target bandwidth in bits/sec (0 for unlimited)
                              (default 1 Mbit/sec for UDP, unlimited for TCP)
                              (optional slash and packet count for burst mode)
  -t, --time      #           time in seconds to transmit for (default 10 secs)
  -n, --bytes     #[KMG]      number of bytes to transmit (instead of -t)
  -k, --blockcount #[KMG]     number of blocks (packets) to transmit (instead of -t or -n)
  -l, --len       #[KMG]      length of buffer to read or write
                              (default 128 KB for TCP, 8 KB for UDP)
  --cport         <port>      bind to a specific client port (TCP and UDP, default: ephemeral port)
  -P, --parallel  #           number of parallel client streams to run
  -R, --reverse               run in reverse mode (server sends, client receives)
  -w, --window    #[KMG]      set window size / socket buffer size
  -M, --set-mss   #           set TCP/SCTP maximum segment size (MTU - 40 bytes)
  -N, --no-delay              set TCP/SCTP no delay, disabling Nagle's Algorithm
  -4, --version4              only use IPv4
  -6, --version6              only use IPv6
  -S, --tos N                 set the IP 'type of service'
  -Z, --zerocopy              use a 'zero copy' method of sending data
  -O, --omit N                omit the first n seconds
  -T, --title str             prefix every output line with this string
  --get-server-output         get results from server
  --udp-counters-64bit        use 64-bit counters in UDP test packets
[KMG] indicates options that support a K/M/G suffix for kilo-, mega-, or giga-
iperf3 homepage: http://software.es.net/iperf/
Report bugs to: https://github.com/esnet/iperf
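To give a feel for how these options combine, here are two illustrative client invocations; <server-ip> is a placeholder for the server's address:

    # UDP test at 100 Mbit/s for 30 seconds with 4 parallel streams
    iperf3 -c <server-ip> -u -b 100M -t 30 -P 4

    # reverse-mode TCP test (server sends, client receives) with JSON output
    iperf3 -c <server-ip> -R -J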
On a Linux server (taking CentOS as an example), the iPerf3 tool can be installed with the yum command:
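A typical installation looks like the following, assuming the iperf3 package is available in the configured repositories (on older CentOS releases it comes from EPEL):

    # install iperf3 (run as root or with sudo)
    yum install -y iperf3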
Server
With the Linux server as the server side, execute the following command:
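A minimal server invocation listens on TCP port 5201 by default:

    # run in server mode; add -p <port> to change the port, or -D to run as a daemon
    iperf3 -s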
Client
Using my local computer as the client side, I executed the following command:
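Judging from the remark and the output below, the client command takes this form:

    # connect to the server at 192.168.50.227; the test runs for 10 seconds by default
    iperf3 -c 192.168.50.227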
Remark: 192.168.50.227 is the IP address of the server side.
Summary
The server log shows that a test request was received from 192.168.50.243 (source port 22376). The client runs a continuous test for 10 seconds, reporting the number of bytes transferred and the bandwidth for each second. After the test completes, sender and receiver statistics are summarized, and the server keeps listening on port 5201 after the client connection is closed.
Connecting to host 192.168.50.227, port 5201
[ 4] local 192.168.50.243 port 22377 connected to 192.168.50.227 port 5201
[ ID] Interval           Transfer     Bandwidth
[ 4]   0.00-1.00   sec   112 MBytes   943 Mbits/sec
[ 4]   1.00-2.00   sec   112 MBytes   940 Mbits/sec
[ 4]   2.00-3.00   sec   112 MBytes   941 Mbits/sec
[ 4]   3.00-4.00   sec   112 MBytes   940 Mbits/sec
[ 4]   4.00-5.00   sec   112 MBytes   941 Mbits/sec
[ 4]   5.00-6.00   sec   112 MBytes   941 Mbits/sec
[ 4]   6.00-7.00   sec   112 MBytes   942 Mbits/sec
[ 4]   7.00-8.00   sec   112 MBytes   941 Mbits/sec
[ 4]   8.00-9.00   sec   112 MBytes   942 Mbits/sec
[ 4]   9.00-10.00  sec   112 MBytes   942 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[ 4]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  sender
[ 4]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec                  receiver
iperf Done.

Both the server and the client have Gigabit Ethernet ports, and the router is also Gigabit Ethernet, so a bandwidth of 941 Mbits/sec is normal.
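A quick sanity check of that figure, assuming the default 1500-byte MTU and TCP timestamps enabled:

    line rate                   = 1000 Mbit/s
    bytes on the wire per frame = 1500 + 38 (preamble + Ethernet header + FCS + inter-frame gap) = 1538
    TCP payload per frame       = 1500 - 20 (IP header) - 32 (TCP header + timestamp option)     = 1448
    expected TCP goodput        ≈ 1448 / 1538 × 1000 Mbit/s ≈ 941 Mbit/s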
Testing virtual machines under ESXi
Both VMs run CentOS and have private IP addresses assigned by the physical router. Testing between the private IPs gives the following result:
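The client command presumably takes the same form as before, this time run from the VM with the private address 192.168.50.131:

    iperf3 -c 192.168.50.227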
Connecting to host 192.168.50.227, port 5201
[ 5] local 192.168.50.131 port 35394 connected to 192.168.50.227 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[ 5]   0.00-1.00   sec  2.72 GBytes  23.3 Gbits/sec    0   1.39 MBytes
[ 5]   1.00-2.00   sec  2.74 GBytes  23.5 Gbits/sec    0   1.48 MBytes
[ 5]   2.00-3.00   sec  2.60 GBytes  22.3 Gbits/sec    0   1.48 MBytes
[ 5]   3.00-4.00   sec  2.58 GBytes  22.2 Gbits/sec    0   1.48 MBytes
[ 5]   4.00-5.00   sec  2.67 GBytes  23.0 Gbits/sec    0   1.48 MBytes
[ 5]   5.00-6.00   sec  2.65 GBytes  22.7 Gbits/sec    0   1.48 MBytes
[ 5]   6.00-7.00   sec  2.67 GBytes  23.0 Gbits/sec    0   1.48 MBytes
[ 5]   7.00-8.00   sec  2.64 GBytes  22.7 Gbits/sec    0   1.48 MBytes
[ 5]   8.00-9.00   sec  2.63 GBytes  22.6 Gbits/sec    0   1.48 MBytes
[ 5]   9.00-10.00  sec  2.67 GBytes  22.9 Gbits/sec    0   1.48 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[ 5]   0.00-10.00  sec  26.6 GBytes  22.8 Gbits/sec    0             sender
[ 5]   0.00-10.04  sec  26.6 GBytes  22.7 Gbits/sec                  receiver
iperf Done.

This result looks abnormal at first: the router is only Gigabit, yet the measured speed is 22.7 Gbits/sec. Does the traffic not go through the physical network card?
Reference: https://communities.vmware.com/t ... Routes/ta-p/2783083
VM1 and VM2 are connected to the same vSwitch ("vSwitch1"), the same port group (Production), and the same VLAN (VLAN 20), and both run on the same ESXi host (ESX1). Network traffic between these VMs (VM1 and VM2) does not go to the physical NICs on the ESXi host, and the frames are not forwarded to the physical network (physical switches and routers), because the VMs communicate within the vSwitch. This results in higher network speed and lower network latency.
I tested this in my own environment: the two VMs are on the same host and the same vSwitch but not in the same port group, and the traffic still does not appear to be forwarded to the physical network card or the physical network.