Packet size can make a big difference to UDP throughput, so it makes sense
to measure it for a variety of different sizes. Currently we do this by
adjusting the MTU on the relevant interface before running iperf3.
However, the UDP packet size has no inherent connection to the MTU - it's
controlled by the sender, and the MTU just affects whether the packet will
make it through or be fragmented. The only reason adjusting the MTU works
is because iperf3 bases its default packet size on the (path) MTU.
We can test this more simply by using the -l option to the iperf3 client
to directly control the packet size, instead of adjusting the MTU.
As well as simplifying the tests, this lets us measure different packet
sizes for host to ns traffic, which we couldn't do previously because we
don't have permission to change the MTU on the host.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
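As an illustration (a sketch only: the address, rate and sizes below are
placeholders, not values taken from the test scripts), the client-side
option looks like this:

    # send 256 B, 1 KiB and 1500 B UDP datagrams without touching the MTU
    for size in 256 1024 1500; do
        iperf3 -c 10.0.0.2 -u -b 10G -l "$size" -t 10
    done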
Currently we make TCP throughput measurements for spliced connections with
a number of different MTU values. However, the results from this aren't
really interesting.
Unlike with tap connections, spliced connections only involve the loopback
interface on host and container, not a "real" external interface. lo
typically has an MTU of 65535 and there is very little reason to ever
change that. So, the measurements for smaller MTUs are rarely going to be
relevant.
In addition, because splice(2) lets us offload all the packetization and
depacketization to the kernel, the throughput difference between these
MTUs isn't very great anyway.
Remove the smaller MTUs and only show spliced throughput for the normal
65535 byte loopback MTU. This reduces the runtime of the performance
tests on my laptop by about 1 minute (out of ~24 minutes).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Currently we start both the iperf3 server(s) and client(s) afresh each time
we want to make a bandwidth measurement. That's not really necessary as
usually a whole batch of bandwidth measurements can use the same server.
Split up the iperf3 directive into three directives: iperf3s to start the
server, iperf3 to make a measurement and iperf3k to kill the server, so
that we can start the server less often. This - and more importantly, the
reduced number of waits for the server to be ready - reduces the runtime
of the performance tests on my laptop by about 4 minutes (out of
~28 minutes).
For now we still restart the server between IPv4 and IPv6 tests. That's
because in some cases the latency measurements we make in between use the
same ports.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
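The underlying pattern, sketched in plain shell rather than in the test
directive syntax (address and port are placeholders):

    iperf3 -s -p 10002 &                   # iperf3s: start the server once
    server_pid=$!
    iperf3 -c 10.0.0.2 -p 10002 -t 10      # iperf3: one measurement...
    iperf3 -c 10.0.0.2 -p 10002 -t 10 -R   # ...and another, reusing the server
    kill "$server_pid"                     # iperf3k: kill the server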
iperf3 generates statistics about its run on both the client and server
sides. They don't have exactly the same information, but both have the
pieces we need (AFAICT the server communicates some information to the
client over the control socket, so the most important information is in
the client-side output, even if measured by the server).
Currently we use the server side information for our measurements. Using
the client side information has several advantages though:
* We can directly wait for the client to complete and we know we'll have
the output we want. We don't need to sleep to give the server time to
write out the results.
* That in turn means we can wrap up as soon as the client is done; we
don't need to wait overlong to make sure everything is finished.
* The slightly different organisation of the data in the client output
means that we always want the same json value, rather than requiring
slightly different ones for UDP and TCP.
The fact that we avoid some extra delays speeds up the overall run of the
perf tests by around 7 minutes (out of around 35 minutes) on my laptop.
The fact that we no longer unconditionally kill client and server after
a certain time means that the client could run indefinitely if the server
doesn't respond. We mitigate that by setting a 1s connect timeout on the
client. This isn't foolproof - if we get an initial response but then
lose connectivity, this could still run indefinitely; however, it does
cover by far the most likely failure cases. --snd-timeout would provide
more robustness, but I've hit odd failures when trying to use it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
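For instance (a sketch: the port, the exact JSON path and the use of jq
are assumptions, not necessarily what the test scripts do), the client
report can be used directly:

    # --connect-timeout takes milliseconds, so 1000 is roughly the 1s
    # mentioned above; --json writes the client-side statistics
    iperf3 -c 10.0.0.2 -p 10002 -t 10 --connect-timeout 1000 --json > bw.json
    jq '.end.sum_received.bits_per_second' bw.json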
Some older revisions used separate iperf3c and iperf3s test directives to
invoke the iperf3 client and server. Those were combined into a single
iperf3 directive some time ago, but a couple of places still have the old
syntax.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Ugly as hell, but we keep breaking things otherwise, and I keep
forgetting to run this manually (while it's based on my local Podman
setup, that's the only alternative).
We need to clone the Podman repository as distribution packages
typically don't contain test scripts. While at it, build the latest
version, which is what really matters.
As we're planning to revamp the test framework anyway, I'd be inclined
to just add this without too much thought, and keep it as a
nice-to-have requirement reminder for the new framework.
Link: https://github.com/containers/podman/pull/19699
Suggested-by: Paul Holzinger <pholzing@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
nstool loops on accept(), but fails to close the accepted socket fds
before continuing. So, with repeated commands it would eventually die
with EMFILE.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Normal filesystem paths can be very long (PATH_MAX is 4096 on Linux),
whereas Unix domain sockets can only use relatively short paths
(UNIX_PATH_MAX is 108 on Linux). Currently nstool will simply truncate
paths that are too long, leading to failures that are difficult to
understand.
Make such failures clearer, with an explicit error message if given a path
that's too long.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If we enter a mount namespace with nstool exec, our working directory
will be changed to / in the new mount ns. This is surprising if we
haven't actually altered any mounts yet in the new ns. Instead, change
the working directory to match that of the holder process in this
situation.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
This is possibly useful in nstool info and has further uses for nstool
exec.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Using this, rather than using "nstool info" to get the pid and then
manually connecting with nsenter, makes things a little simpler.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Unlike ${DEBUG}, we don't initialize ${TRACE} to 0 if not set, which
causes failures when testing it later. The failure acts as though the
variable is false, but it emits spurious errors in script.log, which can
make it harder to spot real errors.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
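A minimal sketch of the initialization meant here (the default value is
an assumption):

    # default to 0 when unset, so later checks such as
    # [ "${TRACE}" = "1" ] don't trip over an empty value
    TRACE="${TRACE:-0}"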
This allows you to run commands within a user namespace with the
privilege that comes from owning that userns.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
This combines nstool info -pw <sock> with nsenter with various options,
for a more convenient and less verbose way of entering existing
nstool-managed namespaces.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
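Roughly the equivalence being described (the nsenter flags and the exact
nstool exec argument order here are illustrative assumptions):

    # before: query the holder's pid, then enter its namespaces by hand
    pid="$(nstool info -pw /tmp/ns.hold)"
    nsenter -t "$pid" -U -n -- ip addr show

    # after: one step
    nstool exec /tmp/ns.hold -- ip addr show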
This will make things a bit less verbose in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
So that we'll probably give a better error if you point it at something
that's not an nstool hold control socket.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Give nstool the ability to detect what namespaces the target process is in,
relative to where it's called. That is, those namespace types for which
the target is not in the same namespace as the caller. For now, just
print this information with "info", which can be useful for debugging.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The new subcommand gives more information about the holder process and its
namespace, and may be further extended in future. Add some options which
give the old behaviour for existing scripts.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
This will make it easier to further differentiate the options to those
commands in future.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Easier to see it there.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Having the "subcommand" first is more conventional and will make it more
natural for future extensions I have planned.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
In preparation for extending what it does.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
context_run() has a race condition if two commands are run in close
proximity (generally involving at least one in the background). Because we
always use the same name for the temporary fifo files, if another command
is issued while the fifos for the first still exist, mkfifo will fail,
typically causing the entire test script to jam.
Create unique names for the temporary fifos to avoid this problem.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
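A minimal sketch of the fix described above (the use of mktemp is an
assumption; the point is simply that each invocation gets fresh names):

    # unique, per-invocation fifo names instead of a fixed pair
    fifo_in="$(mktemp -u)"
    fifo_out="$(mktemp -u)"
    mkfifo "$fifo_in" "$fifo_out"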
In practical terms, passt doesn't benefit from the additional
protection offered by the AGPL over the GPL, because it's not
suitable to be executed over a computer network.
Further, restricting distribution under version 3 of the GPL wouldn't
provide any practical advantage either, as far as the passt codebase is
concerned, and might cause unnecessary compatibility dilemmas.
Change licensing terms to the GNU General Public License Version 2,
or any later version, with written permission from all current and
past contributors, namely: myself, David Gibson, Laine Stump, Andrea
Bolognani, Paul Holzinger, Richard W.M. Jones, Chris Kuhn, Florian
Weimer, Giuseppe Scrivano, Stefan Hajnoczi, and Vasiliy Ulyanov.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Fedora 32-35 are now old enough that they're not on all mirrors. Fetch
them from the archive server instead.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The current Debian cloud images no longer include ppc64. Change to using
the latest snapshot, which does include ppc64.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
On shell 'exit' commands, running shells from pasta, we might get:
Cannot set tty process group (No such process)
as some TTY devices might be inaccessible. This is harmless, but
after commit "pasta: propagate exit code from child command", we'll
get test failures there, at least with dash.
Ignore those explicitly with an ugly workaround: we can't simply do
something like:
exit || :
because the failure is reported by the shell itself once it exits,
regardless of the command evaluation.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Similarly to the UDP cases, these were missing because it wasn't clear,
when the other tests were introduced, whether using the global address
of a namespace from the host should result in connections being routed
via the tap interface.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
These were missing because it wasn't clear, when the other tests were
introduced, whether using the global address of a namespace from the
host should result in traffic being routed via the tap interface (as
opposed to the loopback interface). We have now clarified that this is
actually the case.
Use the same values and thresholds as the tests for loopback traffic,
as throughput figures currently indicate there isn't much difference.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
...instead of doing it after the test. Now that we have pre-built
guest images, we might also have old JSON files from previous,
interrupted test runs.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Now that we install the binary in /bin, and we have a link from
/usr/bin, change the path in the test itself as well. Otherwise
it works with bash but not with dash for some reason.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Now that we require 13c6be96618c ("net: stream: add unix socket")
in qemu to run the tests, we can also assume that commit df8d07081718
("virtio-net: fix bottom-half packet TX on asynchronous completion")
is present, as it was merged before that one.
This fixes the issue we attempted to work around in the passt TCP and
UDP performance tests: finally drop that workaround.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
qemu commit 13c6be96618c ("net: stream: add unix socket") introduces
native AF_UNIX socket support, finally making qrap useless.
We can't quite drop qrap until a qemu release includes that commit, and
then we'll need to wait a while for users to switch anyway, but at
least for tests, we can use the new support.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
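As an illustration (socket path, id and device are placeholders, not
the exact options used by the tests), the new qemu support looks
roughly like this on the command line:

    qemu-system-x86_64 ... \
        -netdev stream,id=net0,server=off,addr.type=unix,addr.path=/tmp/passt_1.socket \
        -device virtio-net-pci,netdev=net0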
As pasta now configures the target network namespace with
--config-net, we need to wait for addresses and routes to actually be
present: having sent the netlink messages doesn't mean the
configuration has been applied synchronously.
A more elegant alternative, which probably makes sense regardless of
this test setup, would be to query, from pasta, addresses and routes
we added, and wait until they're there, before proceeding.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
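A minimal sketch of the kind of wait meant here, run inside the target
namespace (the polling interval and the checks are assumptions):

    # proceed only once a global address and a default route are visible
    while ! ip -4 addr show | grep -q 'scope global'; do sleep 0.1; done
    while ! ip -4 route show default | grep -q .; do sleep 0.1; done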
These show a summary of memory usage in kernel and userspace with
different port forwarding configurations, details of userspace usage
using 'nm' (passt only uses statically allocated memory), and details
of kernel memory from slab reporting facilities.
This adds a new test image, mbuto.mem.img, with hardcoded IPv4 and
IPv6 addresses and routes, and just the tools we need to start and
stop passt, to report from /proc/slabinfo and /proc/meminfo, and to
print and parse symbol sizes using nm(1).
passt can't pivot_root() for sandboxing purposes on ramfs, so we need
to create another filesystem and chroot into it first.
We don't want to use pane context functions, as we're checking memory
usage for sockets: resort to screen-scraping.
Configure a dummy interface to provide passt with an appearance of
working IPv4 and IPv6 connectivity, contributed by David.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
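For instance (a sketch, not the script shipped in the image: the binary
path and the awk aggregation are assumptions), static memory usage can
be summarised from the symbol table with nm:

    # sum the sizes of data and BSS symbols in the passt binary
    nm --print-size --size-sort -t d /usr/bin/passt | \
        awk '$3 ~ /[bBdD]/ { total += $2 } END { print total " bytes" }'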
This can be used for generic cell values with an arbitrary scale.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Instead of just disabling performance reports if running in demo
mode. This allows us to use table functions outside of performance
reports.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
I'm going to add yet another one of those, for which I have no quick
solution. It's a regression in some sense, but at least if we make
this regression more observable and defined, it should be easier to
find a comprehensive solution later, within this or another testing
framework.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
They're too slow to cope with current release cycles, and they
haven't found bugs in months, also because clang-tidy and cppcheck
would find most of them earlier.
Disable them for the moment. We should pre-install gcc and make in
non-x86 images, as those run on my test machine with qemu TCG, and
that's the real slow-down here. Then we can re-enable them.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
To test log files on a tmpfs mount, we need to unshare the mount
namespace, which means using a context for the passt pane is not
really practical at the moment, as we can't open a shell there, so
we would have to encapsulate all the commands under 'unshare -rUm',
plus the "inner" pasta command, running in turn a tcp_rr server.
It might be worth fixing this by e.g. detecting we are trying to
spawn an interactive shell and adding a special path in the context
setup with some form of stdin redirection -- I'm not sure it's doable
though.
For this reason, add a new layout, using a context only for the host
pane, while keeping the old command dispatch mechanism for the passt
pane.
We also need a new setup function that doesn't start pasta: we want
to start and restart it with different options.
Further, we need a 'pint' directive, to send an interrupt to the
passt pane: add that in lib/test.
All the tests before the one involving tmpfs and a detached mount
namespace were also tested with the context mechanism. To make an
eventual conversion easier, pass tcp_crr directly as a command on
pasta's command line where feasible.
While at it, fix the comment to the teardown_pasta() function.
The new test set can be semi-conveniently run as:
./run pasta_options/log_to_file
and it checks basic log creation, the size of the log file after
flooding it with debug entries, log rotation, and basic consistency
after rotation, on both an existing filesystem and a tmpfs. The latter
is chosen because it doesn't support collapsing data ranges via
fallocate(), hence triggering the fall-back mechanism for log rotation.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The distro and performance tests are by far the slowest part of the passt
testsuite. Move them to the end of the testsuite run, so that it's easier
to do a quick test during development by letting the other tests run and
then interrupting the test runner.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
To keep this simple, only support tests that have corresponding setup
and teardown functions implied by their path. For example:
./run passt/ndp
will trigger the 'passt' setup and teardown functions.
This is not really elegant, but it looks robust, and while David is
considering proper alternatives, it should be quite useful.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Add the -Wextra -pedantic and -std=c99 flags when compiling the nsholder
test helper to get extra compiler checks, like we already use for the
main source code.
While we're there, fix some %d (signed) printf conversion specifiers
being used for unsigned values (uid_t and gid_t). Pointed out by
cppcheck.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
This loop goes through and gives a numeric label to each pane, even though
we name the panes properly shortly thereafter. Looks like a leftover from
some earlier version. Remove it.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Many of our tests are based around performing transfers of sample data
across passt/pasta created links. The data flow here can be a bit
hard to follow since, e.g., we create a file, transfer it to the guest,
then transfer it back to the host across several different tests.
This also means that the test cases aren't independent of each other.
Because we don't have the original file available at both ends in some
cases, we check transfers by generating md5sums at each end and
comparing them, which is a bit complicated.
Make a number of changes to simplify this:
1. Pre-generate the sample data files as a test asset, rather than
building them on the fly during the tests proper
2. Include the sample data files in the mbuto guest image
3. Because we have good copies of the original data available in all
contexts, we can now simply use 'cmp' to check if the transfer
has worked, avoiding md5sum complications.
4. Similarly we can always use the original copy of the sample data
on the send side of each transfer, meaning that the tests become
more independent of each other.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
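As an illustration of point 3 (file names are placeholders):

    # before: hash on each side, then compare the two sums by hand/script
    md5sum /root/big.img    # ...and match it against the sender's md5sum

    # after: both sides have the original asset, so a byte-wise
    # comparison is enough
    cmp /root/big.img big.img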
The setup functions for passt_in_ns and two_guests perform some fairly slow
dhclient calls to configure the network in the namespace before starting
the guest. This isn't really part of the tests, just necessary for the
operations later.
We can simplify and speed this up a bit by using pasta's '--config-net'
option to configure the networking for us. As a bonus, this means we
have at least a minimal test of the --config-net option itself.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
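A rough sketch of the difference (the command lines are illustrative,
not the actual setup code):

    # before: spawn the namespace, then configure it with slow dhclient calls
    pasta -- sh -c 'dhclient -4; dhclient -6; exec my_setup'

    # after: let pasta configure addresses and routes itself
    pasta --config-net -- my_setup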
When we start passt or pasta, it may take a short time to be ready to
handle packets, especially if running under valgrind. We have a
number of semi-arbitrary fixed sleeps to account for this.
We can do this more robustly by exploiting the fact that pasta/passt
doesn't write its pidfile until it's ready to go, so if we wait for
the pidfile to be created, we can proceed with confidence.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
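A minimal sketch of the pidfile wait (the path is a placeholder; both
passt and pasta accept a --pid option to write one):

    passt --pid /tmp/passt.pid ... &
    # the pidfile only appears once passt is ready to handle packets
    while [ ! -s /tmp/passt.pid ]; do sleep 0.1; done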
These are hangovers from older ways of shutting down the pasta/passt
processes and no longer serve any purpose.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Add a shell helper function to wait for some command to succeed - typically
a test for something to be done by a background process. Use it in the
context code which waits for the guest to respond to ssh-over-vsock
connections.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
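A minimal sketch of such a helper (the name and polling interval are
assumptions, not the actual implementation):

    # retry "$@" until it succeeds
    wait_for() {
        while ! "$@"; do
            sleep 0.1
        done
    }

    # e.g. block until a file created by a background process shows up
    wait_for test -e /tmp/guest_ready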
...it doesn't actually exist, and this error now causes the demo to
stop.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>