Just like we do for PCAP, DEBUG and KERNEL. Otherwise, running tests
with TRACE=1 will not actually enable tracing output.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
...instead of echo: otherwise, bash won't handle escape sequences we
use to colour messages (and 'echo -e' is not specified by POSIX).
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
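A rough illustration of the difference (the colour code and message are
placeholders, not what the scripts actually print):

  # a plain 'echo' prints the \033 sequence literally; 'echo -e' isn't POSIX
  printf '\033[32mPASS\033[0m %s\n' "some test"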
This is quite useful, at least for me, as I usually run tests with a
guest kernel that's not the same as the one on the host.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
It's qemu-system-i386, but uname -m reports i686. I didn't test i486
and i586.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
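A sketch of the mapping this implies, with illustrative variable names
(only the i386 case is confirmed here; i486 and i586 are assumed to
follow the same pattern):

  # qemu names its binary after i386 even when uname -m reports i686
  ARCH="$(uname -m)"
  case ${ARCH} in
  i?86) QEMU_BIN="qemu-system-i386" ;;
  *)    QEMU_BIN="qemu-system-${ARCH}" ;;
  esac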
Because the host and guest share the same IP address with passt/pasta, it's
not possible for the guest to directly address the host. Therefore we
allow packets from the guest going to a special "NAT to host" address to be
redirected to the host, appearing there as though they have both source and
destination address of loopback.
Currently that special address is always the address of the default
gateway (or none). That can be a problem if we want that gateway to be
addressable by the guest. Therefore, allow the special "NAT to host"
address to be overridden on the command line with a new --map-host-loopback
option.
In order to exercise and test it, update the passt_in_ns and perf
tests to use this option and give different mapping addresses for the
two layers of the environment.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
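As a usage sketch (192.0.2.1 is just a documentation address, not a
default): with the invocation below, guest traffic sent to 192.0.2.1
shows up on the host with loopback as both source and destination, and
the default gateway address no longer has to double as the host
mapping:

  passt --map-host-loopback 192.0.2.1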
We have a number of delays when we switch to new layouts that were
added to make the tests visually easier to follow, together with
blinking status bars. Shorten the delays and avoid blinking the
status bar if $FAST is set to 1 (no demo mode).
Shorten delays in busy loops to 10ms, instead of 100ms, and skip the
one-second fixed delay when we wait for the status of a command.
Cut the duration of throughput and latency tests to one second, down
from ten. Somewhat surprisingly, the results we get are rather
consistent, and not significantly different from what we'd get with
10 seconds.
This, together with Podman's commit 20f3e8909e3a ("test/system:
pasta_test_do add explicit port check"), cuts the time needed for a
full test run on my setup from approximately 37 minutes to:
$ time ./run
[exited]
PASS: 165, FAIL: 0
Log at /home/sbrivio/passt/test/test_logs/test.log
real 15m34.253s
user 0m0.011s
sys 0m0.011s
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Tested-by: David Gibson <david@gibson.dropbear.id.au>
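The gist of the switch, as a minimal sketch with illustrative names:

  # cosmetic delays and status bar blinking only make sense in demo mode
  [ "${FAST}" = "1" ] || sleep 1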
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Starting from iperf3 version 3.16, -P / --parallel spawns multiple
clients as separate threads, instead of multiple streams serviced by
the same thread.
So we can drop our lib/test implementation that spawned several iperf3
client and server processes, and finally simplify things quite a bit.
Adjust the number of threads and the UDP sending bandwidth to values
that more or less match previous throughput tests on my setup.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Tested-by: David Gibson <david@gibson.dropbear.id.au>
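A sketch of what a single client invocation covers with iperf3 >= 3.16
(host, port, thread count and duration are example values, not the
ones used by the tests):

  # one client process, eight sender threads, one-second run, JSON output
  iperf3 -c 127.0.0.1 -p 10001 -P 8 -t 1 --json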
When using the old-style "pane" methods of executing commands during the
tests, we need to scan the shell output for prompts in order to tell when
commands have finished. This is inherently unreliable because commands
could output things that look like prompts, and prompts might not look like
we expect them to. The only way to really fix this is to use a better way
of dispatching commands, like the newer "context" system.
However, it's awkward to convert everything to "context" right at the
moment, so some tests still rely on prompt scanning, which does work
most of the time. It is, though, particularly sensitive to fancy
coloured prompts using
escape sequences. Currently we try to handle this by stripping actual
ESC characters with tr, then looking for some common variants.
We can do a bit better: instead strip all escape sequences using sed before
looking for our prompt. Or, at least, any one using [a-zA-Z] as the
terminating character. Strictly speaking ANSI escapes can be terminated by
any character in 0x40..0x7e, which isn't easily expressed in a regexp.
This should capture all common ones, though.
With this transformation we can simplify the list of patterns we then look
for as a prompt, removing some redundant variants.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
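The idea, sketched with GNU sed (the exact expression used by the
scripts may differ):

  # match ESC, then any run of non-letters, then the terminating letter, drop it
  sed 's/\x1b[^a-zA-Z]*[a-zA-Z]//g'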
Commit 29269705239f ("test/perf: Small MTUs for spliced TCP aren't
interesting") drops all columns for TCP test MTUs except one in the
throughput test for pasta's local flows, so the first column we need
to highlight in that table is now the second one.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Currently we start both the iperf3 server(s) and client(s) afresh each time
we want to make a bandwidth measurement. That's not really necessary as
usually a whole batch of bandwidth measurements can use the same server.
Split up the iperf3 directive into 3 directives: iperf3s to start the
server, iperf3 to make a measurement and iperf3k to kill the server, so
that we can start the server less often. This - and, more importantly,
the reduced number of waits for the server to be ready - reduces the
runtime of the performance tests on my laptop by about 4 minutes (out
of ~28 minutes).
For now we still restart the server between IPv4 and IPv6 tests. That's
because in some cases the latency measurements we make in between use the
same ports.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
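In plain shell terms, rather than the test DSL, the new flow amounts to
something like this (port and address are illustrative):

  iperf3 -s -p 10001 -D -I server.pid           # iperf3s: start the server once
  iperf3 -c 127.0.0.1 -p 10001 -t 1 --json      # iperf3: one measurement...
  iperf3 -c 127.0.0.1 -p 10001 -t 1 -u --json   # ...and another, same server
  kill "$(cat server.pid)"                      # iperf3k: kill it at the end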
iperf3 generates statistics about its run on both the client and server
sides. They don't have exactly the same information, but both have the
pieces we need (AFAICT the server communicates some information to the
client over the control socket, so the most important information is in the
client side output, even if measured by the server).
Currently we use the server side information for our measurements. Using
the client side information has several advantages though:
* We can directly wait for the client to complete and we know we'll have
the output we want. We don't need to sleep to give the server time to
write out the results.
* That in turn means we can wrap up as soon as the client is done; we
don't need to wait overlong to make sure everything is finished.
* The slightly different organisation of the data in the client output
means that we always want the same json value, rather than requiring
slightly different ones for UDP and TCP.
The fact that we avoid some extra delays speeds up the overall run of the
perf tests by around 7 minutes (out of around 35 minutes) on my laptop.
The fact that we no longer unconditionally kill client and server after
a certain time means that the client could run indefinitely if the server
doesn't respond. We mitigate that by setting a 1s connect timeout on the
client. This isn't foolproof - if we get an initial response, but then
lose connectivity this could still run indefinitely, however it does cover
by far the most likely failure cases. --snd-timeout would provide more
robustness, but I've hit odd failures when trying to use it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
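A rough sketch of reading the client side; the jq path is an example of
a throughput field found in the client's JSON report, not necessarily
the exact one the tests use:

  # 1000ms connect timeout so a dead server can't hang the client forever
  iperf3 -c 127.0.0.1 -t 1 --connect-timeout 1000 --json > client.json
  jq '.end.sum_received.bits_per_second' client.json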
Using this, rather than using "nstool info" to get the pid and then
manually connecting with nsenter, makes things a little simpler.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The new subcommand gives more information about the holder process and its
namespace, and may be further extended in future. Add some options which
give the old behaviour for existing scripts.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Having the "subcommand" first is more conventional and will make it more
natural for future extensions I have planned.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
In preparation for extending what it does.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
context_run() has a race condition if two commands are run in close
proximity (generally involving at least one in the background). Because we
always use the same name for the temporary fifo files, if another command
is issued while the fifos for the first still exist, mkfifo will fail,
typically causing the entire test script to jam.
Create unique names for the temporary fifos to avoid this problem.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
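A minimal sketch of the fix, assuming mktemp(1) is available (the
actual naming scheme in the script may differ):

  # a unique prefix per dispatched command avoids mkfifo failing on leftovers
  prefix="$(mktemp -u)"
  mkfifo "${prefix}.stdout" "${prefix}.stderr"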
In practical terms, passt doesn't benefit from the additional
protection offered by the AGPL over the GPL, because it's not
suitable to be executed over a computer network.
Further, restricting distribution under version 3 of the GPL wouldn't
provide any practical advantage either, as far as the passt codebase
is concerned, and might cause unnecessary compatibility dilemmas.
Change licensing terms to the GNU General Public License Version 2,
or any later version, with written permission from all current and
past contributors, namely: myself, David Gibson, Laine Stump, Andrea
Bolognani, Paul Holzinger, Richard W.M. Jones, Chris Kuhn, Florian
Weimer, Giuseppe Scrivano, Stefan Hajnoczi, and Vasiliy Ulyanov.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...instead of doing it after the test. Now that we have pre-built
guest images, we might also have old JSON files from previous,
interrupted test runs.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
qemu commit 13c6be96618c ("net: stream: add unix socket") introduces
native AF_UNIX support, finally making qrap useless.
We can't quite drop qrap until a qemu release includes that commit, and
then we'll need to wait a while for users to switch anyway, but at
least for tests, we can use the new support.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
As pasta now configures the target network namespace with
--config-net, we need to wait for addresses and routes to be actually
present. Just sending netlink messages doesn't mean this is done
synchronously.
A more elegant alternative, which probably makes sense regardless of
this test setup, would be to query, from pasta, addresses and routes
we added, and wait until they're there, before proceeding.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
These show a summary of memory usage in kernel and userspace with
different port forwarding configurations, details of userspace usage
using 'nm' (passt only uses statically allocated memory), and details
of kernel memory from slab reporting facilities.
This adds a new test image, mbuto.mem.img, with hardcoded IPv4 and
IPv6 addresses and routes, and just the tools we need to start and
stop passt, to report from /proc/slabinfo, /proc/meminfo, and to
print and parse symbol sizes using nm(1).
passt can't pivot_root() for sandboxing purposes on ramfs, so we need
to create another filesystem and chroot into it, first.
We don't want to use pane context functions, as we're checking memory
usage for sockets: resort to screen-scraping.
Configure a dummy interface to provide passt with an appearance of
working IPv4 and IPv6 connectivity, contributed by David.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
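As a rough sketch of the nm-based part, assuming GNU nm and ignoring
symbols that lack a size field (the parsing done by the test itself is
more involved):

  # sum the sizes of .data/.bss symbols: memory the static binary uses at runtime
  nm -S -t d passt | awk '$3 ~ /^[bBdD]$/ { total += $2 } END { print total }'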
This can be used for generic cell values with an arbitrary scale.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Instead of just disabling performance reports if running in demo
mode. This allows us to use table functions outside of performance
reports.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
I'm going to add yet another one of those, for which I have no quick
solution. It's a regression in some sense, but at least if we make
this regression more observable and defined, it should be easier to
find a comprehensive solution later, within this or another testing
framework.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
To test log files on a tmpfs mount, we need to unshare the mount
namespace, which means using a context for the passt pane is not
really practical at the moment, as we can't open a shell there, so
we would have to encapsulate all the commands under 'unshare -rUm',
plus the "inner" pasta command, running in turn a tcp_rr server.
It might be worth fixing this by e.g. detecting we are trying to
spawn an interactive shell and adding a special path in the context
setup with some form of stdin redirection -- I'm not sure it's doable
though.
For this reason, add a new layout, using a context only for the host
pane, while keeping the old command dispatch mechanism for the passt
pane.
We also need a new setup function that doesn't start pasta: we want
to start and restart it with different options.
Further, we need a 'pint' directive, to send an interrupt to the
passt pane: add that in lib/test.
All the tests before the one involving tmpfs and a detached mount
namespace were also tested with the context mechanism. To make an
eventual conversion easier, pass tcp_crr directly as a command on
pasta's command line where feasible.
While at it, fix the comment to the teardown_pasta() function.
The new test set can be semi-conveniently run as:
./run pasta_options/log_to_file
and it checks basic log creation, size of the log file after flooding
it with debug entries, rotations, and basic consistency after
rotations, on both an existing filesystem and a tmpfs, chosen as
it doesn't support collapsing data ranges via fallocate(), hence
triggering the fall-back mechanism for logging rotation.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
To keep this simple, only support tests that have corresponding setup
and teardown functions implied by their path. For example:
./run passt/ndp
will trigger the 'passt' setup and teardown functions.
This is not really elegant, but it looks robust, and while David is
considering proper alternatives, it should be quite useful.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
This loop goes through and gives a numeric label to each pane, even though
we name the panes properly shortly thereafter. Looks like a leftover from
some earlier version. Remove it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The setup functions for passt_in_ns and two_guests perform some fairly slow
dhclient calls to configure the network in the namespace before starting
the guest. This isn't really part of the tests, just necessary for the
operations later.
We can simplify and speed this up a bit by using pasta's '--config-net'
option to configure the networking for us. As a bonus this means we have
at least a minimal test of the --config-net option itself.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
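For reference, the option being exercised boils down to letting pasta
copy the host configuration into the namespace it creates, e.g.:

  # spawn a shell in a new namespace with addresses and routes copied from the host
  pasta --config-net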
When we start passt or pasta, it may take a short time to be ready to
handle packets, especially if running under valgrind. We have a
number of semi-arbitrary fixed sleeps to account for this.
We can do this more robustly by exploiting the fact that pasta/passt
doesn't write its pidfile until it's ready to go, so if we wait for
the pidfile to be created, we can proceed with confidence.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Add a shell helper function to wait for some command to succeed - typically
a test for something to be done by a background process. Use it in the
context code which waits for the guest to respond to ssh-over-vsock
connections.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
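The helper boils down to something like the sketch below (the real one
may differ in details such as the polling interval); the pidfile check
from the patch above is another natural user. PASST_PID_FILE is just an
illustrative name:

  # retry a command until it succeeds, polling briefly in between
  wait_for() {
      while ! "$@"; do sleep 0.1; done
  }
  wait_for [ -f "${PASST_PID_FILE}" ]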
...it doesn't actually exist, and this error now causes the demo to
stop.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
It's not used anymore. While at it, fix the function name in the
comment to perf_report_append_js().
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
I'm not sure why, but dhclient hangs otherwise. This reflects what we
do in the passt_in_ns setup steps.
Eventually, this whole block could go away if we let pasta configure
this network namespace with --config-net.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If we start another server on the same port right away, we might fail
to bind the port. A small delay appears to be needed -- I'm not
entirely sure why at this point.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Unfortunately, this partially counters recent efforts by David to
speed up these tests, but it looks like iperf3 clients don't reliably
terminate, in some rare cases I couldn't isolate yet.
For the time being, reintroduce the time-based wait approach, now
using the configurable test duration, and terminate the servers at
the end of it, in case they're stuck. There's no point in keeping
the 'sleep 2' later, so drop that, and while at it, make sure that
the stuck servers have time to flush the JSON output before we use
it.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
It appears that if we run throughput tests with one-second periodic
reports, the sending side of the vhost channel used for SSH-based
command dispatch occasionally stops working altogether. I haven't
investigated this further, all I see is that output is truncated
at some point, and doesn't resume.
If we use gzip compression (ssh -C) this happens less frequently,
but it still happens, seemingly indicating the issue is probably
related to vhost itself.
Disable periodic reports in iperf3 clients. The -i options were
actually redundant, so remove them from both test files as well as
from test_iperf3().
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
An iperf3 client might fail to send the control message indicating
the end of the test, if the kernel buffer doesn't accept it, and exit
without having sent it, as the control socket is non-blocking. Should
this happen, the server will just wait forever for this message,
instead of terminating.
Restore some of the behaviour that went away with the
"test: Rewrite test_iperf3" patch: instead of waiting on servers to
terminate, wait on the clients. When they are done, wait 2 seconds,
and then send SIGINT to the servers, which still makes them write
out the JSON report before terminating.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If we don't, guest command dispatch will fail altogether, given that
we use cat(1) on the enter file, which contains spaces.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
We use the [ "$x" -eq "$x" ] syntax to check if $x is a number. The
behaviour is clearly implied by POSIX, but some shells might actually
report the (intended) error, and dash floods script.log with
"Illegal number" error messages. Hide them.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
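That is, roughly:

  # the exit status still tells us whether $x is numeric; dash's complaint is hidden
  if [ "$x" -eq "$x" ] 2>/dev/null; then
      echo "numeric"
  fi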
The tests generate a performance report in $BASEPATH/perf.js and
hooks/pre-push copies it to the website. To avoid cluttering the working
directory, instead put perf.js in $LOGDIR/web, since it's a test output
artefact. Update hooks/pre-push to copy from its new location.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The asciinema video handling creates a number of temporary files (.uncat,
.start, .stop) which currently go into the source tree. Put them in the
temporary state directory to avoid clutter.
The final processed output is now placed into test_logs/web/ along with the
corresponding .js file with links, since they're essentially test
artefacts. hooks/pre-push is updated to look for those files in the new
location when updating the web site.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently they go in the passt source tree with fixed names, which means
their presence can mess with subsequent test runs.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The capture files are more or less a different form of log output from the
tests, so place them in $LOGDIR.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Instead of using the 'temp' and 'tempdir' DSL directives to create
temporary files, use fixed paths relative to __STATEDIR__. This has two
advantages:
1) The files are automatically cleaned up if the tests fail (and even if
that doesn't work, they're easier to clean up manually)
2) When debugging tests it's easier to figure out which of the temporary
files are relevant to whatever's going wrong
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Currently the context command dispatch subsystem creates a bunch of
temporary files in $LOGDIR, which is messy. Store them in $STATEDIR which
is for precisely this purpose. The logs from each context still go into
$LOGDIR.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
We use this fifo to send messages to the information pane. Put it in the
state directory so it doesn't need its own cleanup.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
The test scripts create a bunch of temporary files to keep track of
internal state. Some are made in /tmp with individual mktemp calls, some
go in the passt source directory, and some go in $LOGDIR. This can
sometimes make it messy to clean up after failed test runs.
Start cleaning this up by creating a single "state" directory ($STATEBASE)
in /tmp for all the state or temporary files used by a single test run.
Clean it up automatically in cleanup() - except when DEBUG==1, because
those files can be useful for debugging test script failures.
We create subdirectories under $STATEBASE for each setup function, exposed
as $STATESETUP. We also create subdirectories for each test script and
expose those to the scripts as __STATEDIR__.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
FFPMPEG_PID_FILE is set (creating a temporary file), then never used.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>