path: root/test/lib/test
Commit message | Author | Date | Files | Lines
* test: Create common state directories for temporary files | David Gibson | 2022-09-13 | 1 | -0/+4

The test scripts create a bunch of temporary files to keep track of internal state. Some are made in /tmp with individual mktemp calls, some go in the passt source directory, and some go in $LOGDIR. This can sometimes make it messy to clean up after failed test runs.

Start cleaning this up by creating a single "state" directory ($STATEBASE) in /tmp for all the state or temporary files used by a single test run. Clean it up automatically in cleanup() - except when DEBUG==1, because those files can be useful for debugging test script failures.

We create subdirectories under $STATEBASE for each setup function, exposed as $STATESETUP. We also create subdirectories for each test script and expose those to the scripts as __STATEDIR__.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
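A minimal sketch of how such a layout could be wired up; $STATEBASE, $STATESETUP, DEBUG and cleanup() are named in the commit message above, but the mktemp/trap arrangement here is an assumption, not the actual implementation:

	# Sketch only: one state tree per test run
	STATEBASE="$(mktemp -d /tmp/passt-tests-XXXXXX)"

	cleanup() {
		# Keep state files around when debugging script failures
		[ "${DEBUG}" = "1" ] || rm -rf "${STATEBASE}"
	}
	trap cleanup EXIT

	setup() {
		# One subdirectory per setup function
		STATESETUP="${STATEBASE}/$1"
		mkdir -p "${STATESETUP}"
	}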
* test: Integration of old-style pane execution and new context execution | David Gibson | 2022-09-13 | 1 | -81/+57

We're creating a system for tests to more reliably execute commands in various contexts (e.g. host, guest, namespace). That transition is going to happen over a number of steps though, so in the meantime we need to deal with both the old-style issuing of commands via typing into and screen scraping tmux panels, and the new-style system for executing commands in context.

Introduce some transitional helpers which will issue a command via context if the requested context is initialized, but will otherwise fall back to the old style tmux panel based method. Re-implement the various test DSL commands in terms of these new helpers.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
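A hedged sketch of what a transitional helper of this shape might look like; context_exists and context_run are stand-ins for the new-style helpers, and everything here is illustrative rather than the actual code:

	# Prefer context execution when the context is initialized,
	# otherwise fall back to typing into the tmux pane
	run_in() {
		__ctx="$1"; shift
		if context_exists "${__ctx}"; then	# assumed helper
			context_run "${__ctx}" "$@"	# new-style
		else
			tmux send-keys -t "${__ctx}" "$*" C-m	# old-style
		fi
	}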
* test: Rewrite test_iperf3 | David Gibson | 2022-09-07 | 1 | -41/+27

test_iperf3() is a pretty inscrutable mess of nested background processes. It has a number of ugly sleeps needed to wait for things to complete. Rewrite it to be cleaner:

* Use the construct (a & b & wait) to run 'a' and 'b' in parallel, but then wait for them both to complete before continuing
* This allows us to wait for both the server and client to finish, rather than sleeping
* Use jq to do all the math we need to get the final result, rather than jq followed by some complicated 'bc' mangling

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
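The (a & b & wait) construct is quoted from the commit message; the iperf3 invocations and file names around it below are illustrative assumptions:

	# Run server and client in parallel, then wait for both
	(
		iperf3 -s -p 10001 -1 --json > server.json &
		{ sleep 1; iperf3 -c 127.0.0.1 -p 10001 -t 10 >/dev/null; } &
		wait
	)
	# jq does all the math, no 'bc' mangling needed
	jq -r '.end.sum_received.bits_per_second / 1000000000' server.json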
* test: Parameterize run time for throughput performance tests | David Gibson | 2022-09-07 | 1 | -4/+6

Currently all the throughput tests are run for 30s. This is reflected both in the actual parameters given to the iperf commands and in the matching sleeps in test_iperf3. Allow this to be adjusted more easily with a new parameter to test_iperf3.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: Reflect new parameter in comment to test_iperf3()]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
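One plausible shape for the change, shown only as a sketch; the argument position and the 30s default are assumptions:

	test_iperf3() {
		# ... other arguments ...
		__time="${5:-30}"	# run time, defaulting to the old 30s
		# the client side then runs: iperf3 -c ... -t "${__time}"
	}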
* test: Combine iperf3c and iperf3s into a single DSL command | David Gibson | 2022-09-07 | 1 | -47/+43

These two commands in the DSL to run an iperf client and server are always used together, and some of the parameters must match between them. The iperf3s must also be run more or less immediately after iperf3c, since iperf3c will run a client in the background after a sleep and requires a server to be running before it will work.

A bunch of things can be made cleaner if we make a single DSL command that runs both sides of the test. For now make the combined command work exactly like the two commands together did, warts and all.

This does lose the ability for the DSL scripts to give additional options to the iperf3 server, but we weren't using that anyway.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
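Schematically, in a test script, the change looks something like this; the argument layouts are guesses for illustration, since the commit message does not spell out the signatures:

	# Old style: two directives whose parameters had to match
	iperf3c	guest	__GW__	10001	__THREADS__	__TIME__
	iperf3s	BW	10001	__THREADS__

	# New style: one combined directive running both sides
	iperf3	BW	guest	__GW__	10001	__THREADS__	__TIME__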
* tests: Clean up better after iperf tests | David Gibson | 2022-07-22 | 1 | -1/+1

The iperf based test commands create a bunch of .bw and .pid files for each iperf client and server. The server side .bw files are cleaned up afterwards, but the pid files are not, and none of the client side files are cleaned up. The latter doesn't really matter when the client is run on ephemeral guests, but sometimes we run it in a namespace that shares the filesystem with the host.

Clean up all of these files after the tests.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
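In outline, the fix amounts to removing both sides' scratch files once the results have been collected; the exact globs here are guesses based on the description:

	# Remove client- and server-side state files alike
	rm -f c*.bw c*.pid s*.bw s*.pid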
* tests: Explicitly list test files in test/run, remove "onlyfor" support | David Gibson | 2022-07-14 | 1 | -9/+0

Currently test/run uses wildcards to run all of the tests in a directory. However, that wildcard list is filtered down by the "onlyfor" directives in the test files... usually to a single file. Therefore, just explicitly list the files we *really* want to run for this test mode. This makes it easier to see at the top level what tests will be executed, and to change that list temporarily while debugging specific failures.

This means the "onlyfor" directive no longer has any purpose, and we can remove it. "onlyfor" was also the only user of the $MODE variable, so we can remove that too.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* tests: Don't automatically traverse directories of test files | David Gibson | 2022-07-14 | 1 | -2/+2

The top level control of which tests to run is in test/run; however, it uses the test() function, which runs an entire directory of test files, filtered by some criteria. This makes it awkward to narrow down to a subset of tests when debugging a specific failure. To make this easier, have test() take an explicit list of test files to run, and have the caller in test/run handle the directory traversal.

The construct we use for this is pretty awkward to handle the fact that we're in the source tree root directory rather than test/ at this point in test/run. Later cleanups will improve that.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
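A hedged sketch of the resulting shape; the file names and the find/sed walk are illustrative, standing in for the "awkward construct" the commit mentions:

	# test() now takes an explicit list of test files...
	test build/all build/links	# hypothetical file names
	# ...and the caller walks the directory itself, adjusting for
	# being in the source tree root rather than test/
	test $(find test/distro -type f | sed 's%^test/%%' | sort)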
* tests: Remove not-very-useful "req" directive | David Gibson | 2022-07-14 | 1 | -18/+10

The test scripts support a "req" directive which requires one test script to be run before another. It's implemented by doing a topological sort based on these directives in the runner scripts, which is about as awkward as you'd expect in Bourne shell.

It turns out we only use this functionality in one place - to make the "make install" test run after the plain "make" test. We also already have a simpler way of making sure tests run in a specific order: just put them into the same test script file.

So, remove support for the "req" directive and just fold the build/all and build/install test scripts together.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* tests: Introduce makefile for building test assets | David Gibson | 2022-07-14 | 1 | -1/+1

A number of passt/pasta testcases have initial steps which are just about building images or other assets we need for the test proper. Repeating these for each test run can be quite costly.

This patch makes a start on moving this sort of test asset building to a separate phase before running the tests proper. For now just add a Makefile to handle the asset building (although it doesn't build anything yet), and make the path where we'll be building the assets available to the tests.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
* tests: Don't count number of test units for demos | Stefano Brivio | 2022-05-19 | 1 | -2/+4

...there are no 'test' directives in demos, and this causes a script failure.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
* tests: Simplify *tools commands using pane_status | David Gibson | 2022-05-19 | 1 | -15/+10

Now that we have pane_status to check the success of commands issued to panes, we can more easily check for the success of the 'which' commands used to check tool availability, rather than constructing, then parsing, special "skip" output.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
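In spirit, the availability check becomes an ordinary command judged by its exit status alone; pane_run is an assumed helper name here, while pane_status is the command introduced in the following entry:

	pane_run HOST 'which socat iperf3 jq'	# plain command...
	pane_status HOST	# ...checked via its exit status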
* tests: Add pane_status command to check for success of issued commands | David Gibson | 2022-05-19 | 1 | -19/+19

When we use pane_wait to wait for a command issued to a tmux pane to finish, we have no idea whether the command succeeded or not. This means that the test scripts can keep running long after the point something vital has failed, making it difficult to work out what went wrong.

Add a new pane_status command that checks for success of the issued command and use it in most places instead of pane_wait. We still need explicit pane_wait where we're gathering explicit output with pane_parse, because the way we check the status with 'echo $?' means we lose track of that output.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio:
 - instead of quitting the script, make a test fail if a command issued in a pane fails during a test, and loop until the status code is numeric in pane_status() as a hack to make it a bit more robust
 - retain usage of pane_wait() in iperf3 and teardown functions as we interrupt iperf3, passt, and pasta, so a non-zero exit code is expected
 - drop bogus ns_{1,2}_wait() calls in teardown_two_guests(), those functions were never implemented
 - use pane_status() for "guest" test directives too
]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
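A hedged reconstruction of the idea (pane_run and pane_parse are assumed helper names): print the last exit code into the pane, scrape it back, and loop until the scraped value is actually numeric, as per the note above:

	pane_status() {
		pane_run "$1" 'echo $?'
		pane_wait "$1"
		__st="$(pane_parse "$1")"
		# Retry until we scraped a number, not stray pane output
		while ! [ "${__st}" -eq "${__st}" ] 2>/dev/null; do
			sleep 1
			__st="$(pane_parse "$1")"
		done
		return "${__st}"
	}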
* tests: Don't ignore errors during script | David Gibson | 2022-05-19 | 1 | -5/+5

Most commands issued during the testing scripts aren't explicitly checked for errors. Therefore, if they fail, the shell will just keep on executing. This makes it difficult to figure out where things started going wrong if things fall over.

Run the whole script with the set -e mode so that it will exit in the case of any (unchecked) failing command. To make this work we do need to add explicit checks / fallbacks for some commands which we expect to fail.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: use sh -e instead of setting -e later, so that we don't miss anything before set -e is issued]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
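Concretely, the pattern looks like this; the fallback line is an illustrative example:

	#!/bin/sh -e
	# errexit is active from the interpreter line itself, so no
	# command runs unchecked before a later "set -e" could apply;
	# commands expected to fail get explicit fallbacks:
	false || :	# allowed to fail without aborting the script
	echo "still running"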
* tests: Add some debugging output for the test scripts themselves | David Gibson | 2022-05-19 | 1 | -0/+2

The DEBUG option for test/run enables debugging options for passt/pasta, however that doesn't help with debugging the test scripts themselves, which are fairly fragile. Extend the DEBUG option so it also prints information on each command in the test scripts, to make it easier to work out where things are falling over.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
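The natural shell mechanism for this, assuming (the commit doesn't say so explicitly) that command tracing is what's meant, is xtrace:

	# Print each command as it executes when DEBUG is set
	[ "${DEBUG}" = "1" ] && set -x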
* test: Add demo for Podman with pasta | Stefano Brivio | 2022-02-22 | 1 | -0/+35

...showing setup steps, some peculiarities such as the --net option, and a general side-by-side comparison with slirp4netns(1), including "quick" TCP and UDP throughput and latency benchmarks.

Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
* test/lib/test: Introduce 'def' directive for frequently used patterns | Stefano Brivio | 2022-01-28 | 1 | -229/+267

For distribution tests, we'll repeat some tests frequently. Add a 'def' directive that starts a block, ended by 'endef', whose execution can then be triggered by simply giving its name as a directive itself.

Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
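For instance, a test script could define and reuse a block like this; the block name and its contents are made up for illustration:

	def	install_tools
	host	apt-get install -y iperf3 jq
	endef

	# ...later, replay the whole block by naming it:
	install_tools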
* test/lib/test: Wait a bit longer before terminating iperf3 processes | Stefano Brivio | 2021-10-21 | 1 | -3/+3

Sometimes tests run a few seconds longer than expected; wait a few more seconds before terminating the iperf3 processes.

Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
* test: Add CI/demo scripts | Stefano Brivio | 2021-09-27 | 1 | -0/+378

Not really quick, definitely dirty.

Signed-off-by: Stefano Brivio <sbrivio@redhat.com>