path: root/test/perf/pasta_tcp
* test: Speed up by cutting on eye candy and performance test duration (Stefano Brivio, 2024-08-15, 1 file, -17/+17)
  We have a number of delays when we switch to new layouts that were added
  to make the tests visually easier to follow, together with blinking
  status bars. Shorten the delays and avoid blinking the status bar if
  $FAST is set to 1 (no demo mode).

  Shorten delays in busy loops to 10ms, instead of 100ms, and skip the
  one-second fixed delay when we wait for the status of a command.

  Cut the duration of throughput and latency tests to one second, down
  from ten. Somewhat surprisingly, the results we get are rather
  consistent, and not significantly different from what we'd get with 10
  seconds.

  This, together with Podman's commit 20f3e8909e3a ("test/system:
  pasta_test_do add explicit port check"), cuts the time needed on my
  setup for a full test run from approximately 37 minutes to...:

    $ time ./run
    [exited]
    PASS: 165, FAIL: 0
    Log at /home/sbrivio/passt/test/test_logs/test.log

    real    15m34.253s
    user    0m0.011s
    sys     0m0.011s

  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Tested-by: David Gibson <david@gibson.dropbear.id.au>

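  A minimal sketch of the kind of gating described above; the helper name,
  the marker file and the exact delays are hypothetical, not the test
  library's actual code:

    # cosmetic pause between layouts: skipped entirely in fast (non-demo) runs
    pause_for_layout() {
        [ "${FAST:-1}" = "1" ] && return 0
        sleep 1                       # demo mode only: let viewers follow
    }
    pause_for_layout

    # poll for a (hypothetical) completion marker in 10ms steps, not 100ms
    i=0
    while [ ! -e /tmp/cmd_done ] && [ "$i" -lt 100 ]; do
        sleep 0.01
        i=$((i + 1))
    done
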
* test: iperf3 3.16 introduces multiple threads, drop our own implementation of that (Stefano Brivio, 2024-07-25, 1 file, -30/+28)
  Starting from iperf3 version 3.16, -P / --parallel spawns multiple
  clients as separate threads, instead of multiple streams serviced by the
  same thread.

  So we can drop our lib/test implementation to spawn several iperf3
  client and server processes and finally simplify things quite a bit.

  Adjust the number of threads and the UDP sending bandwidth to values
  that seem to more or less match previous throughput tests on my setup.

  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
  Tested-by: David Gibson <david@gibson.dropbear.id.au>

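  With 3.16 or later, a single client invocation drives the parallel
  streams itself, roughly like this (address, port, thread count and
  duration are placeholders):

    # one iperf3 client process, four sender threads (iperf3 >= 3.16)
    iperf3 -c 127.0.0.1 -p 10001 -P 4 -t 1 -J
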
* test/perf: Simplify calculation of "omit" time for TCP throughput (David Gibson, 2023-11-07, 1 file, -1/+1)
  For the TCP throughput tests, we use iperf3's -O "omit" option which
  ignores results for the given time at the beginning of the test.
  Currently we calculate this as 1/6th of the test measurement time.

  The purpose of -O, however, is to skip over the TCP slow start period,
  which in no way depends on the overall length of the test. The slow
  start time is, roughly speaking,

    log_2(max_window_size / MSS) * round_trip_time

  These factors all vary between tests and machines we're running on, but
  we can estimate some reasonable bounds for them:

  * The maximum window size is bounded by the buffer sizes at each end,
    which shouldn't exceed 16MiB
  * The MSS varies with the MTU we use, but the smallest we use in tests
    is ~256 bytes
  * Round trip time will vary with the system, but with these essentially
    local transfers it will typically be well under 1ms (on my laptop it
    is closer to 0.03ms)

  That gives a worst case slow start time of about 16ms. Setting an omit
  time of 0.1s uniformly is therefore more than enough, and substantially
  smaller than what we calculate now for the default case (10s / 6 ~=
  1.7s). This reduces total time for the standard benchmark run by around
  30s.

  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

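  A quick check of that worst-case figure, plugging in the bounds quoted
  above (assumed bounds, not measurements):

    # slow start ~= log2(max_window / MSS) * RTT
    awk 'BEGIN {
        window = 16 * 1024 * 1024     # 16 MiB maximum window
        mss    = 256                  # smallest MSS used in the tests
        rtt_ms = 1                    # generous local round-trip time
        printf "%.1f ms\n", log(window / mss) / log(2) * rtt_ms
    }'
    # prints 16.0 ms, well under the uniform 0.1s omit time
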
* test/perf: Remove unnecessary --pacing-timer options (David Gibson, 2023-11-07, 1 file, -2/+2)
  We always set --pacing-timer when invoking iperf3. However, the iperf3
  man page implies this is only relevant for the -b option. We only use
  the -b option for the UDP tests, not TCP, so remove --pacing-timer from
  the TCP cases.

  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

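  In other words, the option stays only where -b gives it something to
  pace. A sketch of the two cases, with placeholder targets and rates, and
  assuming an iperf3 version that still accepts --pacing-timer:

    # UDP: -b sets a target bitrate, so a pacing timer is meaningful
    iperf3 -c 127.0.0.1 -u -b 3G --pacing-timer 1000 -t 1
    # TCP: no -b, so no --pacing-timer
    iperf3 -c 127.0.0.1 -t 1
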
* test/perf: Small MTUs for spliced TCP aren't interesting (David Gibson, 2023-11-07, 1 file, -52/+1)
  Currently we make TCP throughput measurements for spliced connections
  with a number of different MTU values. However, the results from this
  aren't really interesting.

  Unlike with tap connections, spliced connections only involve the
  loopback interface on host and container, not a "real" external
  interface. lo typically has an MTU of 65535 and there is very little
  reason to ever change that. So, the measurements for smaller MTUs are
  rarely going to be relevant. In addition, the fact that we can offload
  all the {de,}packetization to the kernel with splice(2) means that the
  throughput difference between these MTUs isn't very great anyway.

  Remove the short MTUs and only show spliced throughput for the normal
  65535 byte loopback MTU. This reduces runtime of the performance tests
  on my laptop by about 1 minute (out of ~24 minutes).

  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

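  To see the loopback MTU actually in effect on a given host or namespace
  (a quick check, not part of the tests):

    cat /sys/class/net/lo/mtu
    # or, via ip(8):
    ip link show lo
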
* test/perf: Start iperf3 server less often (David Gibson, 2023-11-07, 1 file, -25/+52)
  Currently we start both the iperf3 server(s) and client(s) afresh each
  time we want to make a bandwidth measurement. That's not really
  necessary, as usually a whole batch of bandwidth measurements can use
  the same server.

  Split up the iperf3 directive into 3 directives: iperf3s to start the
  server, iperf3 to make a measurement and iperf3k to kill the server, so
  that we can start the server less often. This - and more importantly,
  the reduced number of waits for the server to be ready - reduces runtime
  of the performance tests on my laptop by about 4m (out of ~28 minutes).

  For now we still restart the server between IPv4 and IPv6 tests. That's
  because in some cases the latency measurements we make in between use
  the same ports.

  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

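  The shape of the split, expressed in plain shell rather than the test
  DSL (ports and paths are placeholders): one long-lived server per batch,
  several measurements against it, then an explicit kill.

    # start one server once and reuse it for a batch of measurements
    iperf3 -s -p 10002 -D --pidfile /tmp/iperf3s.pid
    iperf3 -c 127.0.0.1 -p 10002 -t 1 -J > run1.json
    iperf3 -c 127.0.0.1 -p 10002 -t 1 -J > run2.json
    # kill the server only when the batch is done
    kill "$(cat /tmp/iperf3s.pid)"
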
* test/perf: Remove stale iperf3c/iperf3s directives (David Gibson, 2023-11-07, 1 file, -2/+1)
  Some older revisions used separate iperf3c and iperf3s test directives
  to invoke the iperf3 client and server. Those were combined into a
  single iperf3 directive some time ago, but a couple of places still have
  the old syntax.

  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

* passt: Relicense to GPL 2.0, or any later version (Stefano Brivio, 2023-04-06, 1 file, -1/+1)
  In practical terms, passt doesn't benefit from the additional protection
  offered by the AGPL over the GPL, because it's not suitable to be
  executed over a computer network.

  Further, restricting the distribution under version 3 of the GPL
  wouldn't provide any practical advantage either, as far as the passt
  codebase is concerned, and might cause unnecessary compatibility
  dilemmas.

  Change licensing terms to the GNU General Public License Version 2, or
  any later version, with written permission from all current and past
  contributors, namely: myself, David Gibson, Laine Stump, Andrea
  Bolognani, Paul Holzinger, Richard W.M. Jones, Chris Kuhn, Florian
  Weimer, Giuseppe Scrivano, Stefan Hajnoczi, and Vasiliy Ulyanov.

  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

* test/perf/pasta_tcp: Add host to namespace cases for traffic via tap (Stefano Brivio, 2023-01-05, 1 file, -0/+57)
  Similarly to the UDP cases, these were missing as it wasn't clear, when
  the other tests were introduced, if using the global address of a
  namespace, from the host, should have resulted in connections being
  routed via the tap interface.

  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

* test/perf: Disable periodic throughput reports to avoid vhost hang (Stefano Brivio, 2022-09-22, 1 file, -1/+1)
  It appears that if we run throughput tests with one-second periodic
  reports, the sending side of the vhost channel used for SSH-based
  command dispatch occasionally stops working altogether. I haven't
  investigated this further; all I see is that output is truncated at some
  point, and doesn't resume.

  If we use gzip compression (ssh -C) this happens less frequently, but it
  still happens, seemingly indicating the issue is probably related to
  vhost itself.

  Disable periodic reports in iperf3 clients. The -i options were actually
  redundant, so remove them from both test files as well as from
  test_iperf3().

  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

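  For reference, iperf3's periodic reports are controlled by -i, and
  passing -i 0 suppresses them entirely (address, port and duration below
  are placeholders):

    # one throughput run, JSON output, no interval reports along the way
    iperf3 -c 127.0.0.1 -p 10001 -t 10 -i 0 -J
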
* test/perf: Switch performance test duration to 10 seconds instead of 30 (Stefano Brivio, 2022-09-22, 1 file, -1/+1)
  It looks like the workaround for the virtio_net TX hang issue is working
  less reliably with the new command dispatch mechanism; I'm not sure why.
  Switch to 10 seconds, at least for the moment.

  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

* test/perf: Always use /sbin/sysctl in tcp test (Stefano Brivio, 2022-09-22, 1 file, -3/+3)
  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

* test/perf: Check for /sbin/sysctl with which(1), not simply sysctl (Stefano Brivio, 2022-09-22, 1 file, -1/+1)
  Otherwise, we're depending on having /sbin in $PATH. For some reason I
  didn't completely grasp, with the new command dispatch mechanism that's
  not the case anymore, even if I have /sbin in $PATH in the parent shell.

  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
  Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

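  A sketch of the idea, resolving the binary explicitly instead of
  assuming /sbin is in $PATH (the variable name and sysctl key are just
  examples, not the test's actual code):

    sysctl_bin="$(which sysctl 2>/dev/null || echo /sbin/sysctl)"
    "$sysctl_bin" -n net.ipv4.tcp_rmem
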
* test: Rewrite test_iperf3 (David Gibson, 2022-09-07, 1 file, -1/+1)
  test_iperf3() is a pretty inscrutable mess of nested background
  processes. It has a number of ugly sleeps needed to wait for things to
  complete. Rewrite it to be cleaner:

  * Use the construct (a & b & wait) to run 'a' and 'b' in parallel, but
    then wait for them both to complete before continuing
  * This allows us to wait for both the server and client to finish,
    rather than sleeping
  * Use jq to do all the math we need to get the final result, rather than
    jq followed by some complicated 'bc' mangling

  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

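  A standalone sketch of that pattern, assuming iperf3 servers are already
  listening on the two placeholder ports:

    # run two clients in parallel and wait for both, instead of sleeping
    ( iperf3 -c 127.0.0.1 -p 10001 -t 1 -J > c1.json &
      iperf3 -c 127.0.0.1 -p 10002 -t 1 -J > c2.json &
      wait )
    # let jq do the arithmetic: sum the received throughput of both runs
    jq -rs 'map(.end.sum_received.bits_per_second) | add' c1.json c2.json
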
* test: Parameterize run time for throughput performance tests (David Gibson, 2022-09-07, 1 file, -20/+22)
  Currently all the throughput tests are run for 30s. This is reflected
  both in the actual parameters given to the iperf commands and in the
  matching sleeps in test_iperf3. Allow this to be adjusted more easily
  with a new parameter to test_iperf3.

  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  [sbrivio: Reflect new parameter in comment to test_iperf3()]
  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

* test: Combine iperf3c and iperf3s into a single DSL command (David Gibson, 2022-09-07, 1 file, -34/+17)
  These two commands in the DSL to run an iperf client and server are
  always used together, and some of the parameters must match between
  them. The iperf3s must also be run more or less immediately after
  iperf3c, since iperf3c will run a client in the background after a sleep
  and requires a server to be running before it will work.

  A bunch of things can be made cleaner if we make a single DSL command
  that runs both sides of the test. For now make the combined command work
  exactly like the two commands together did, warts and all.

  This does lose the ability for the DSL scripts to give additional
  options to the iperf3 server, but we weren't using that anyway.

  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

* tests: Explicitly list test files in test/run, remove "onlyfor" support (David Gibson, 2022-07-14, 1 file, -1/+0)
  Currently test/run uses wildcards to run all of the tests in a
  directory. However, that wildcard list is filtered down by the "onlyfor"
  directives in the test files... usually to a single file. Therefore,
  just explicitly list the files we *really* want to run for this test
  mode. This makes it easier to see at the top level what tests will be
  executed, and to change that list temporarily while debugging specific
  failures.

  This means the "onlyfor" directive no longer has any purpose, and we can
  remove it. "onlyfor" was also the only user of the $MODE variable, so we
  can remove that too.

  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

* Don't abbreviate ip(8) arguments in examples and tests (David Gibson, 2022-06-15, 1 file, -3/+3)
  ip(8)'s ability to take abbreviated arguments (e.g. "li sh" instead of
  "link show") is very handy when using it interactively, but it doesn't
  make for very readable scripts and examples when shown that way.

  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>

* test/perf: Try sourcing maximum scaling frequency from cpufreq (Stefano Brivio, 2021-10-21, 1 file, -1/+4)
  On most recent CPUs, that's a better indication of all-core turbo
  frequency, or non-turbo frequency, than /proc/cpuinfo.

  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

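  A sketch of the approach, preferring cpufreq and falling back to
  /proc/cpuinfo; the sysfs path is the usual one, but its availability
  depends on kernel and platform:

    f=/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
    if [ -r "$f" ]; then
        echo "$(( $(cat "$f") / 1000 )) MHz"    # sysfs value is in kHz
    else
        awk '/cpu MHz/ { printf "%.0f MHz\n", $4; exit }' /proc/cpuinfo
    fi
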
* test/perf: Use CPU frequency from /proc/cpuinfo instead of cpupower(1) (Stefano Brivio, 2021-10-19, 1 file, -2/+2)
  Get it to work also in nested virtualisation environments.

  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>

* test: Add CI/demo scripts (Stefano Brivio, 2021-09-27, 1 file, -0/+256)
  Not really quick, definitely dirty.

  Signed-off-by: Stefano Brivio <sbrivio@redhat.com>