Commit messages
If we don't, guest command dispatch will fail altogether, given that
we use cat(1) on the enter file, which contains spaces.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
We use the [ "$x" -eq "$x" ] syntax to check if $x is a number. The
behaviour is clearly implied by POSIX, but some shells might actually
report the (intended) error, and dash floods script.log with
"Illegal number" error messages. Hide them.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
The tests generate a performance report in $BASEPATH/perf.js and
hooks/pre-push copies it to the website. To avoid cluttering the working
directory, instead put perf.js in $LOGDIR/web, since it's a test output
artefact. Update hooks/pre-push to copy from its new location.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The asciinema video handling creates a number of temporary files (.uncat,
.start, .stop) which currently go into the source tree. Put them in the
temporary state directory to avoid clutter.
The final processed output is now placed into test_logs/web/ along with the
corresponding .js file with links, since they're essentially test
artefacts. hooks/pre-push is updated to look for those files in the new
location when updating the web site.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently they go in the passt source tree with fixed names, which means
their presence can mess with subsequent test runs.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The capture files are more or less a different form of log output from the
tests, so place them in $LOGDIR.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead of using the 'temp' and 'tempdir' DSL directives to create
temporary files, use fixed paths relative to __STATEDIR__. This has two
advantages:
1) The files are automatically cleaned up if the tests fail (and even if
that doesn't work, they're easier to clean up manually)
2) When debugging tests it's easier to figure out which of the temporary
files are relevant to whatever's going wrong
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently the context command dispatch subsystem creates a bunch of
temporary files in $LOGDIR, which is messy. Store them in $STATEDIR, which
exists for precisely this purpose. The logs from each context still go into
$LOGDIR.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We use this fifo to send messages to the information pane. Put it in the
state directory so it doesn't need its own cleanup.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The test scripts create a bunch of temporary files to keep track of
internal state. Some are made in /tmp with individual mktemp calls, some
go in the passt source directory, and some go in $LOGDIR. This can
sometimes make it messy to clean up after failed test runs.
Start cleaning this up by creating a single "state" directory ($STATEBASE)
in /tmp for all the state or temporary files used by a single test run.
Clean it up automatically in cleanup() - except when DEBUG==1, because
those files can be useful for debugging test script failures.
We create subdirectories under $STATEBASE for each setup function, exposed
as $STATESETUP. We also create subdirectories for each test script and
expose those to the scripts as __STATEDIR__.
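A minimal sketch of the resulting layout and cleanup, with illustrative
directory names; the real scripts wire this into their existing setup and
cleanup paths:
  # one state directory per test run, removed on cleanup unless debugging
  STATEBASE="$(mktemp -d /tmp/passt-tests-XXXXXX)"
  STATESETUP="${STATEBASE}/setup_passt"      # one subdirectory per setup
  STATEDIR="${STATEBASE}/test_pasta_tcp"     # per test script (__STATEDIR__)
  mkdir -p "${STATESETUP}" "${STATEDIR}"
  cleanup() {
      [ "${DEBUG}" = "1" ] || rm -rf "${STATEBASE}"
  }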
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
FFPMPEG_PID_FILE is set (creating a temporary file), then never used.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Put the pieces together to use the new style context based dispatch for
the passt_in_pasta tests.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Now that we have all the pieces we need for issuing commands both into
namespaces and into guests, we can use them to convert the two_guests tests
to use only the new-style context command dispatch.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Extend the context system in the test scripts to allow executing commands
within a guest. Do this without requiring an existing network in the guest
by using socat to run ssh via a vsock connection.
We do need some additional "sleep"s in the tests, because the new
faster dispatch means that sometimes we attempt to connect before
socat has managed to listen.
For now, only use this for the plain "passt" tests. The "passt_in_ns" and
other tests have additional complications we still need to deal with.
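A sketch of the host side of the idea, assuming the guest makes its sshd
reachable on a vsock port (for instance via its own socat forward); the CID,
port and user are illustrative:
  # bridge ssh's connection into the guest over AF_VSOCK via socat
  ssh -o "ProxyCommand=socat - VSOCK-CONNECT:${GUEST_CID}:22" \
      -o StrictHostKeyChecking=no root@guest true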
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Extend the context system to allow commands to be run in a namespace
created with unshare, and use it for the namespace used in the pasta tests.
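The underlying mechanism is roughly the following; the flags are illustrative,
and the tests keep one namespace around rather than creating a fresh one per
command:
  # run a command in a new user+network namespace, mapped to root inside it
  unshare -U -r -n sh -c 'ip link show'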
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Convert the pasta and passt tests to use new-style context execution
for the things that run in the "passt" frame. Don't touch the
passt_in_ns or two_guests tests yet, because they run passt inside a
namespace which introduces some additional complications we have yet
to handle.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Convert most of the tests to use the new-style system for issuing commands
for all host commands. We leave the distro tests for now: they use
the same pane for both host and guest commands, which will need some more
work to deal with.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We're creating a system for tests to more reliably execute commands in
various contexts (e.g. host, guest, namespace). That transition is going
to happen over a number of steps though, so in the meantime we need to deal
with both the old-style issuing of commands via typing into and screen
scraping tmux panes, and the new-style system for executing commands in
context.
Introduce some transitional helpers which will issue a command via context
if the requested context is initialized, but will otherwise fall back to
the old-style tmux pane based method. Re-implement the various test DSL
commands in terms of these new helpers.
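A sketch of the transitional shape; the function and the existence check are
illustrative, not the test library's actual names:
  # issue a host command via the context if it exists, else via the tmux pane
  host_cmd() {
      if [ -d "${STATESETUP}/context_host" ]; then
          context_run host "$@"
      else
          pane_run HOST "$@"
          pane_status HOST
      fi
  }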
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We're moving to a new way for the tests to dispatch commands to run in
contexts (host, guest, namespace, etc.). As we make this transition,
though, we still want the user to be able to watch the commands running
in a context, as they previously could from the commands issued in the
pane.
Add a helper to set up a pane to watch a context's log to allow this. In
some cases we currently issue commands from several different logical
contexts in the same pane, so allow a pane to watch several contexts at
once. Also use tail's --retry option to allow starting the watch before
we've initialized the context, which will be useful in some cases.
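A sketch of the watch, with illustrative pane and log names; --retry lets
tail keep trying until the context log actually exists:
  tmux send-keys -t "${PANE_HOST}" \
      "tail --retry -f ${LOGDIR}/context_host.log ${LOGDIR}/context_ns.log" C-m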
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For the tests, we need to run commands in various contexts: in the host,
in a guest or in a namespace. Currently we do this by running each context
in a tmux pane, and using tmux commands to type the commands into the
relevant pane, then screen-scrape the output for the results if we need
them.
This is very fragile, because we have to make various assumptions to parse
the output. Those can break if a shell doesn't have the prompt we expect,
if the tmux pane is too small or in various other conditions.
This starts some library functions for a new "context" system, which
provides a common way to invoke commands in a given context, in a way that
properly preserves stdout, stderr and the process return code.
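The core of the idea is simply to run the command through a shell in the
target context and keep all three channels, roughly (file names illustrative):
  # run $cmd in the context, preserving stdout, stderr and the exit status
  sh -c "$cmd" >"${ctx}.stdout" 2>"${ctx}.stderr"
  echo $? >"${ctx}.status"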
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
test_iperf3() is a pretty inscrutable mess of nested background processes.
It has a number of ugly sleeps needed to wait for things to complete.
Rewrite it to be cleaner (a sketch of the resulting shape follows the list):
* Use the construct (a & b & wait) to run 'a' and 'b' in parallel, but
then wait for them both to complete before continuing
* This allows us to wait for both the server and client to finish, rather
than sleeping
* Use jq to do all the math we need to get the final result, rather than
jq followed by some complicated 'bc' mangling
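The resulting shape, with an illustrative port, duration and jq expression;
in the real tests the client and server run in different contexts:
  (
      iperf3 -s -p 5221 -1 -J > /tmp/iperf3.json &
      # the small delay is only so the client doesn't race the server's
      # listen in this self-contained sketch
      { sleep 1; iperf3 -c 127.0.0.1 -p 5221 -t 10 > /dev/null; } &
      wait    # both server and client have finished at this point
  )
  # all the arithmetic in jq: received throughput in Gbps
  jq '.end.sum_received.bits_per_second / 1000000000' /tmp/iperf3.json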
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently all the throughput tests are run for 30s. This is reflected both
in the actual parameters given to the iperf commands and in the
matching sleeps in test_iperf3.
Allow this to be adjusted more easily with a new parameter to test_iperf3.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: Reflect new parameter in comment to test_iperf3()]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
These two commands in the DSL to run an iperf client and server are always
used together, and some of the parameters must match between them. The
iperf3s must also be run more or less immediately after iperf3c, since
iperf3c will run a client in the background after a sleep and requires a
server to be running before it will work.
A bunch of things can be made cleaner if we make a single DSL command that
runs both sides of the test. For now make the combined command work
exactly like the two commands together did, warts and all.
This does lose the ability for the DSL scripts to give additional options
to the iperf3 server, but we weren't using that anyway.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently in at least some of the testcases we kill qemu processes we're
done with by issuing a Control-C to the tmux pane it's running in. That
makes things harder as we try to move towards allowing "headless" testing
without tmux.
So, instead always use an explicit kill on a pid derived from a pidfile
for killing qemu. Note that we don't need to remove the pidfiles
afterwards, because qemu does that itself when terminated.
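The shutdown then reduces to something like this, with an illustrative
pidfile path:
  # no tmux interaction needed; qemu removes its own pidfile on exit
  kill "$(cat "${STATESETUP}/qemu.pid")"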
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
For the passt and passt_in_ns tests we have a "shutdown" testcase that
checks for any errors from the passt process we were using (including
valgrind warnings). Do the same for pasta tests, so that we catch any
error codes from the pasta process.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The "valgrind" test cases are designed to pick up errors reported when
passt is running under valgrind. But what it actually does is just kill
the passt process, then see if it had a non-zero exit code. That means it
will equally well pick up any other problems which caused passt to exit
with an error status: either something detected within passt or as a result
of passt being killed by an unexpected signal.
The fact that the "valgrind" test is actually responsible for shutting down
the passt process is non-obvious and can lead to problems when selectively
running tests during debugging.
Rename the "valgrind" tests to "shutdown" tests and run it regardless of
whether we're using valgrind or not. This allows us to remove an ugly
speacial case in the passt_in_ns teardown code.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently the build tests and distro tests share a common setup function.
That works for now, but changes we want to make will mean they need
slightly different setup, so split the setup functions in preparation.
Currently, neither build nor distro tests have any teardown function.
Again, future changes are going to mean we need to do some teardown, so
create some teardown functions, empty for now, in preparation.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The DEMO_XTERM and CI_XTERM variables defined in test/lib/term aren't used
anywhere. Remove them.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The iperf based test commands create a bunch of .bw and .pid files for
each iperf client and server. The server side .bw files are cleaned
up afterwards, but the pid files are not, and none of the client side
files are cleaned up. The latter doesn't really matter when the
client is run on ephemeral guests, but sometimes we run it in a
namespace that shares the filesystem with the host.
Clean up all of these files after the tests.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Before starting the guests, these tests configure addresses in a pasta
namespace using dhclient. However, because it's a user namespace, it's
not running as "real" root and can't write to the dhclient pid file.
This doesn't stop it working, but causes an ugly error message which we
can avoid by using the --no-pid option.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
teardown_passt_in_ns() sends a ^D to the NS pane, which appears to be
intended to terminate the nsenter running there, leaving the namespace.
However, we've also sent a ^D to the PASST pane which will exit the pasta
instance which created the namespace. With the namespace destroyed the
nsenter in the NS pane will be killed, so it does not need to be exited
explicitly.
In fact sending the extra ^D can be harmful, since it will exit the shell
in which the nsenter was run, causing the whole pane to be closed. That
can then mean that the "pane_wait NS" hangs indefinitely. I believe this
will sometimes work, because there's a race between the various options
here, but it should be more reliable without the extra ^D.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The dhcp/passt and dhcp/passt_in_ns tests at least, and maybe others,
use 'hout' commands that need to be able to detect empty output.
However, we don't set PS1, which means the screen-scraping logic which
detects this may not be reliable. In addition, if the host is using a
recent bash, it will have bracketed paste mode enabled which will also
add escape codes which will mess up the empty output detection.
Set the prompt and disable bracketed paste mode from the passt and
passt_in_ns setups to avoid these problems.
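What gets typed into the panes is roughly the following; the prompt value is
illustrative, and the readline variable is the stock bash/readline one:
  PS1='$ '
  bind 'set enable-bracketed-paste off'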
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Similar case as the one fixed by David's patch "tests: Remove
unnecessary ^D in passt_in_ns teardown": we happen to pseudo-randomly
close panes by unnecessarily exiting the parent shells there, and
subsequent pane_wait directives hang.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Currently test/run uses wildcards to run all of the tests in a directory.
However, that wildcard list is filtered down by the "onlyfor" directives
in the test files... usually to a single file.
Therefore, just explicitly list the files we *really* want to run for this
test mode. This makes it easier to see at the top level what tests will
be executed, and to change that list temporarily while debugging specific
failures.
This means the "onlyfor" directive no longer has any purpose, and we can
remove it. "onlyfor" was also the only used of the $MODE variable, so we
can remove that too.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The top-level control of which tests to run is in test/run; however, it
uses the test() function, which runs an entire directory of test files,
filtered by some criteria. This makes it awkward to narrow down to a
subset of tests when debugging a specific failure.
To make this easier, have test() take an explicit list of test files to
run, and have the caller in test/run handle the directory traversal. The
construct we use for this is pretty awkward to handle the fact that we're
in the source tree root directory rather than test/ at this point in
test/run. Later cleanups will improve that.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The test scripts support a "req" directive which requires one test script
to be run before another. It's implemented by doing a topological sort
based on these directives in the runner scripts, which is about as awkward
as you'd expect in Bourne shell.
It turns out we only use this functionality in one place - to make the
"make install" test run after the plain "make" test. We also already have
a simpler way of making sure tests run in a specific order: just put them
into the same test script file.
So, remove support for the "req" directive and just fold the build/all and
build/install test scripts together.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This utility function is never called.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Move the download of mbuto, and its use to create a sample initramfs, to
the asset build Makefile, rather than embedding these steps in the test
scripts themselves.
The two_guests tests used to use two separate copies of the mbuto
image. As an initramfs the mbuto image is strictly readonly though,
so that's not necessary. So, also use the same image for both guests.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A number of passt/pasta testcases have initial steps which are just about
building images or other assets we need for the test proper. Repeating
these for each test run can be quite costly.
This patch makes a start on moving this sort of test asset building to
a separate phase before running the tests proper. For now just add a
Makefile to handle the asset building (although it doesn't build
anything yet), and make the path where we'll be building the assets
available to the tests.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A lot of tests and examples invoke qemu with the command "kvm". However,
as far as I can tell, "kvm" being aliased to the appropriate qemu system
binary is Debian specific. The binary names from qemu upstream -
qemu-system-$ARCH - also aren't universal, but they are more common (they
should be good for both Debian and Fedora at least).
In order to still get KVM acceleration when available, we use the option
"-M accel=kvm:tcg" to tell qemu to try using either KVM or TCG in that
order.
A number of the places we invoked "kvm" are expecting specifically an x86
guest, and so it's also safer to explicitly invoke qemu-system-x86_64.
Some others appear to be independent of the target arch (just wanting the
same arch as the host to allow KVM acceleration). Although I suspect there
may be more subtle x86 specific options in the qemu command lines, attempt
to preserve arch independence by using $(uname -m).
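The invocations then look roughly like this; everything beyond the binary
name and the accelerator option is illustrative:
  # explicitly x86 where the guest image is x86-specific
  qemu-system-x86_64 -M accel=kvm:tcg -m 1024 -display none -kernel "$KERNEL"
  # host architecture where only matching the host (for KVM) matters
  qemu-system-"$(uname -m)" -M accel=kvm:tcg -m 1024 -display none -kernel "$KERNEL"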
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
This depends on a future change in mbuto to accept external profile
files. Add a file defining what we need for tests and demos, dropping
udhcpc and script as they're not needed anymore, and switch to it.
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
For some reason, the passt/pasta tests and examples use dhclient for
DHCPv6, but in most cases use udhcpc for DHCPv4. Change it to use dhclient
for both DHCPv4 and DHCPv6. This means one less tool we need for testing,
plus dhclient is easily available on Fedora whereas udhcpc is not.
Note that the passt tests still rely on udhcpc indirectly because mbuto
wants to put it into the guest images it generates.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
A number of tests and examples use dhclient in both IPv4 and IPv6 modes.
We use "dhclient -6" for IPv6, but usually just "dhclient" for IPv4. Add
an explicit "-4" argument to make it more clear and explicit.
In addition, when dhclient is run from within pasta it usually won't be
"real" root, and so will not have access to write the default global pid
file. This results in a mostly harmless but irritating error:
Can't create /var/run/dhclient.pid: Permission denied
We can avoid that by using the --no-pid flag to dhclient.
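The resulting invocations, with an illustrative interface name:
  dhclient -4 --no-pid "$IFNAME"
  dhclient -6 --no-pid "$IFNAME"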
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
ip(8)'s ability to take abbreviated arguments (e.g. "li sh" instead of
"link show") is very handy when using it interactively, but it doesn't make
for very readable scripts and examples when shown that way.
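For instance, ip(8) accepts both of the following, but only the spelled-out
form reads well in a script or example:
  ip li sh
  ip link show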
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Having all those 'echo $?' commands around is rather distracting in demos.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...there are no 'test' directives in demo, and this causes a
script failure.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Now that we have pane_status to check the success of commands issued to
panes, we can more easily check for the success of the 'which' commands
used to check tool availability, rather than constructing, then parsing
special "skip" output.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
When we use pane_wait to wait for a command issued to a tmux pane to finish
we have no idea whether the command succeeded or not. This means that the
test scripts can keep running long after the point something vital has
failed, making it difficult to work out what went wrong.
Add a new pane_status command that checks for success of the issued command
and use it in most places instead of pane_wait. We still need explicit
pane_wait where we're gathering explicit output with pane_parse, because
the way we check the status with 'echo $?' means we lose track of that
output.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio:
- instead of quitting the script, make a test fail if a command
issued in a pane fails during a test, and loop until the status code is
numeric in pane_status() as a hack to make it a bit more robust
- retain usage of pane_wait() in iperf3 and teardown functions as we
interrupt iperf3, passt, and pasta, so a non-zero exit code is expected
- drop bogus ns_{1,2}_wait() calls in teardown_two_guests(), those
functions were never implemented
- use pane_status() for "guest" test directives too
]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Most commands issued during the testing scripts aren't explicitly checked
for errors. Therefore, if they fail, the shell will just keep on
executing. This makes it difficult to figure out where things started
going wrong if things fall over.
Run the whole script with the set -e mode so that it will exit in the case
of any (unchecked) failing command. To make this work we do need to add
explicit checks / fallbacks for some commands which we expect to fail.
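A sketch of the pattern; the command shown is illustrative:
  #!/bin/sh -e
  # with -e, any unchecked failing command aborts the script, so a command
  # that is allowed to fail gets an explicit fallback:
  rmdir /tmp/old-test-state 2>/dev/null || true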
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: use sh -e instead of setting -e later, so that we don't miss
anything before set -e is issued]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>