If stderr is closed, after we fork to background, glibc's
implementation of perror() will try to re-open it by calling dup(),
upon which the seccomp filter causes the process to terminate,
because dup() is not included in the list of allowed syscalls.
Replace perror() calls that might happen after isolation_postfork().
We could probably replace all of them, but early ones need a bit more
attention as we have to check whether log.c functions work in early
stages.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
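To illustrate the kind of change this implies, here is a minimal sketch,
assuming err() as the printf-style helper from log.c (actual call sites
and message wording differ):

  /* Before: on a closed stderr, perror() may call dup(), which seccomp kills */
  perror("read");
  exit(EXIT_FAILURE);

  /* After: log via log.c, which never tries to re-open stderr */
  err("read: %s", strerror(errno));
  exit(EXIT_FAILURE);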
The development of the Debian package is now at:
https://salsa.debian.org/sbrivio/passt
Drop contrib/debian, it's finally obsolete.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
AppArmor resolves executable links before profile attachment rules
are evaluated, so, as long as pasta is installed as a link to passt,
there's no way to differentiate the two cases. Merge the two profiles
and leave a TODO note behind, explaining two possible ways forward.
Update the rules so that passt and pasta are actually usable, once
the profile is installed. Most required changes are related to
isolation and sandboxing features.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
The AUDIT_ARCH defines in seccomp.h corresponding to HPPA are
AUDIT_ARCH_PARISC and AUDIT_ARCH_PARISC64.
Build error spotted in Debian's buildd log on
phantom.physik.fu-berlin.de.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
On mips64el, gcc -dumpmachine correctly reports mips64el as
architecture prefix, but for some reason seccomp.h defines
AUDIT_ARCH_MIPSEL64 and not AUDIT_ARCH_MIPS64EL. Mangle AUDIT_ARCH
accordingly.
Build error spotted in Debian's buildd logs from Loongson build.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Drop it from the internal FLAGS variable, but honour -O2 if passed in
CFLAGS. In Debian packages, dpkg-buildflags uses it as a hardening
flag, and we get a QA warning if we drop it:
https://qa.debian.org/bls/bytag/W-dpkg-buildflags-missing.html
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
CPPFLAGS allow the user to pass pre-processor flags. This is unlikely
to be needed at the moment, but the Debian Hardening Walkthrough
reasonably requests it to be handled in order to fully support
hardened build flags:
https://wiki.debian.org/HardeningWalkthrough#Handling_dpkg-buildflags_in_your_upstream_build_system
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Given that, when --dns-forward is used, we forward DNS queries to just
the first valid resolver address, either explicitly configured or read
from resolv.conf(5) on the host, we don't need to duplicate dns[] into
dns_send[]:
- rename dns_send[] back to dns[]: those are the resolvers we
advertise to the guest/container
- for forwarding purposes, instead of dns[], use a single field (for
each protocol version): dns_host
- and rename dns_fwd to dns_match, so that it's clear this is the
address we are matching DNS queries against, to decide if they need
to be forwarded
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
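A rough sketch of the resulting per-IP-version fields (the array size
and the struct they live in are assumptions, only the names come from
this message):

  struct in_addr dns[MAXNS + 1];   /* resolvers we advertise to the guest */
  struct in_addr dns_match;        /* address we match DNS queries against */
  struct in_addr dns_host;         /* host resolver we forward queries to */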
Reported-by: Paul Holzinger <pholzing@redhat.com>
Fixes: dd09cceaee21 ("Minor improvements to IPv4 netmask handling")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
If we disable a given IP version automatically (no corresponding
default route on host) or administratively (--ipv4-only or
--ipv6-only options), we don't initialise related buffers and
services (DHCP for IPv4, NDP and DHCPv6 for IPv6). The "tap"
handlers will also ignore packets with a disabled IP version.
However, in commit 3c6ae625101a ("conf, tcp, udp: Allow address
specification for forwarded ports") I happily changed socket
initialisation functions to take AF_UNSPEC meaning "any enabled
IP version", but I forgot to add checks back for the "enabled"
part.
Reported by Paul: on a host with IPv6 enabled but no default IPv6
route, connect over IPv6 to a port handled by pasta; pasta then tries
to send data to the tap device without initialised buffers for that IP
version, and exits because the resulting write() fails.
A simpler way to reproduce this: pasta -6 and an inbound IPv4
connection, or pasta -4 and an inbound IPv6 connection.
Reported-by: Paul Holzinger <pholzing@redhat.com>
Fixes: 3c6ae625101a ("conf, tcp, udp: Allow address specification for forwarded ports")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
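Conceptually, the missing check has this shape (a sketch: the actual
flags recording which IP versions are enabled, and the exact spot in
the socket setup, may differ):

  if ((af == AF_INET && !c->v4) || (af == AF_INET6 && !c->v6))
          return;         /* IP version disabled: don't initialise the socket */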
...so that we avoid printing some lines twice because log-level is
still set to LOG_EMERG, as if logging configuration hadn't happened
yet.
While at it, note that logging to stderr doesn't really depend on
whether debug mode is enabled or not.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
While it's important to fail in that case, it makes little sense to
fail quietly: it's better to tell qemu explicitly that something went
wrong and that we won't recover, by closing the socket.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
I got all paranoid after triggering a divide-by-zero general
protection fault in passt with a qemu version without the virtio_net
TX hang fix, while flooding UDP. I now think this was actually
coming from some random changes I was playing with, but before
reaching that conclusion I reviewed once more the relatively short
path in tap_handler_passt() before we start using packet_*()
functions, and found this.
Never observed in practice, but artificially reproduced with changes
in qemu's socket interface: if we don't receive from qemu a complete
length descriptor in one recv() call, or if we receive a partial one
at the end of one call, we currently disregard the rest, which would
make the stream inconsistent.
Nothing really bad happens, except that from that point on we would
disregard all the packets we get until, if ever, we get the stream
back in sync by chance.
Force reading a complete packet length descriptor with a blocking
recv(), if needed -- not just a complete packet later.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
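A minimal sketch of the idea, assuming the 4-byte network-order length
descriptor used by the qemu socket framing (the actual code keeps
partial-read state instead of relying on MSG_WAITALL alone):

  uint32_t len;
  ssize_t n;

  n = recv(fd, &len, sizeof(len), MSG_WAITALL);   /* block for the whole descriptor */
  if (n < (ssize_t)sizeof(len))
          return;                                 /* EOF or error */
  len = ntohl(len);                               /* frame of 'len' bytes follows */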
Now that we install the binary in /bin, and we have a link from
/usr/bin, change the path in the test itself as well. Otherwise
it works with bash but not with dash for some reason.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
We can't get rid of qrap quite yet, but at least we should start
telling users it's not going to be needed anymore starting from qemu
7.2.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Now that we require 13c6be96618c ("net: stream: add unix socket")
in qemu to run the tests, we can also assume that commit df8d07081718
("virtio-net: fix bottom-half packet TX on asynchronous completion")
is present, as it was merged before that one.
This fixes the issue we attempted to work around in passt TCP and
UDP performance tests: finally drop that stuff.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
qemu commit 13c6be96618c ("net: stream: add unix socket") introduces
native AF_UNIX socket support, finally making qrap useless.
We can't quite drop qrap until a qemu release includes that commit,
and then we'll need to wait a while for users to switch anyway, but at
least for tests, we can use the new support.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
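With that support, the tests can start qemu against a passt socket
along these lines (paths and identifiers are just examples):

  passt -s /tmp/passt_1.socket
  qemu-system-x86_64 ... -device virtio-net-pci,netdev=s \
          -netdev stream,id=s,server=off,addr.type=unix,addr.path=/tmp/passt_1.socket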
As pasta now configures that target network namespace with
--config-net, we need to wait for addresses and routes to be actually
present. Just sending netlink messages doesn't mean this is done
synchronously.
A more elegant alternative, which probably makes sense regardless of
this test setup, would be to query, from pasta, addresses and routes
we added, and wait until they're there, before proceeding.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Now that we allow loopback DNS addresses to be used as targets for
forwarding, we need to check if DNS answers come from those targets,
before deciding to eventually remap traffic for local redirects.
Otherwise, the source address won't match the one configured as
forwarder, which means that the guest or the container will refuse
those responses.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
With --dns-forward, if the host has a loopback address configured as
DNS server, we should actually use it to forward queries, but, if
--no-map-gw is passed, we shouldn't offer the same address via DHCP,
NDP and DHCPv6, because it's not going to be reachable.
Problematic configuration:
* systemd-resolved configuring the usual 127.0.0.53 on the host: we
read that from /etc/resolv.conf
* --dns-forward specified with an unrelated address, for example
198.51.100.1
We still want to forward queries to 127.0.0.53, if we receive one
directed to 198.51.100.1, so we can't drop 127.0.0.53 from our list:
we want to use it for forwarding. At the same time, we shouldn't
offer 127.0.0.53 to the guest or container either.
With this change, I'm only covering the case of automatically
configured DNS servers from /etc/resolv.conf. We could extend this to
addresses configured with command-line options, but I don't really
see a likely use case at this point.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Seen in a Google Compute Engine environment with a machine configured
via cloud-init-dhcp, while testing Podman integration for pasta: the
assigned address has a /32 netmask, and there's a default route,
which can be added on the host because there's another route, also
/32, pointing to the default gateway. For example, on the host:
ip -4 address add 10.156.0.2/32 dev eth0
ip -4 route add 10.156.0.1/32 dev eth0
ip -4 route add default via 10.156.0.1
This is not a valid configuration as far as I can tell: if the
address is configured as /32, it shouldn't be used to reach a gateway
outside its derived netmask. However, Linux allows that, and
everything works.
The problem comes when pasta --config-net sources address and default
route from the host, and it can't configure the route in the target
namespace because the gateway is invalid. That is, we would skip
configuring the first route in the example, which results in the
equivalent of doing:
ip -4 address add 10.156.0.2/32 dev eth0
ip -4 route add default via 10.156.0.1
where, at this point, 10.156.0.1 is unreachable, and hence invalid
as a gateway.
Sourcing more routes than just the default is doable, but probably
undesirable: pasta users want to provide connectivity to a container,
not reflect exactly whatever trickery is configured on the host.
Add a consistency check and an adjustment: if the configured default
gateway is not reachable, shrink the given netmask until we can reach
it.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
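The adjustment amounts to something like this sketch, working on
host-order copies of the address and gateway (variable names are
illustrative):

  uint32_t addr = ntohl(addr4.s_addr), gw = ntohl(gw4.s_addr);

  /* Widen the network (shrink the prefix) until it contains the gateway */
  while (prefix_len > 0 &&
         ((addr ^ gw) & ~((1U << (32 - prefix_len)) - 1)))
          prefix_len--;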
A number of functions describe themselves as taking a pointer to 'sin_addr
or sin6_addr'. Those are field names, not type names. Replace them with
the correct type names, in_addr or in6_addr.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
We recently converted to using struct in_addr rather than bare in_addr_t
or uint32_t to represent IPv4 addresses in network order. This makes it
harder to forget to apply the correct endian conversions.
We omitted the IPv4 addresses stored in struct tap4_l4_t, however. Convert
those as well.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
We recently corrected some errors handling the endianness of IPv4
addresses. These are very easy errors to make since although we mostly
store them in network endianness, we sometimes need to manipulate them in
host endianness.
To reduce the chances of making such mistakes again, change to always using
a (struct in_addr) instead of a bare in_addr_t or uint32_t to store network
endian addresses. This makes it harder to accidentally do arithmetic or
comparisons on such addresses as if they were host endian.
We introduce a number of IN4_IS_ADDR_*() helpers to make it easier to
directly work with struct in_addr values. This has the additional benefit
of making the IPv4 and IPv6 paths more visually similar.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
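One of those helpers could look like this sketch (the definitions
actually added may differ in detail):

  #define IN4_IS_ADDR_UNSPECIFIED(a) \
          (((const struct in_addr *)(a))->s_addr == htonl(INADDR_ANY))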
This macro checks if an IPv4 address is in the loopback network
(127.0.0.0/8). There are two places where we open code an identical check,
use the macro instead.
There are also a number of places we specifically exclude the loopback
address (127.0.0.1), but we should actually be excluding anything in the
loopback network. Change those sites to use the macro as well.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
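A sketch of the macro itself, matching the whole 127.0.0.0/8 network
(IN_LOOPBACKNET, from <netinet/in.h>, is the loopback network number,
127):

  #define IN4_IS_ADDR_LOOPBACK(a) \
          ((ntohl(((const struct in_addr *)(a))->s_addr) >> 24) == IN_LOOPBACKNET)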
There are several minor problems with our parsing of IPv4 netmasks (-n).
First, we don't reject nonsensical netmasks like 0.255.0.255. Address this
structurally by using prefix length instead of netmask as the primary
variable, only converting (and validating) when we need to. This has the
added benefit of making some things more uniform with the IPv6 path.
Second, when the user specifies a prefix length, we truncate the output
from strtol() to an integer, which means we would treat -n 4294967320 as
valid (equivalent to 24). Fix types to check for this.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
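The structural part, validating by converting to a prefix length, can
be sketched as follows (the function name is illustrative and the code
in conf.c is organised differently):

  /* Return the prefix length for a valid netmask, -1 for masks with
   * non-contiguous bits such as 0.255.0.255
   */
  static int netmask_to_prefix(struct in_addr mask)
  {
          uint32_t m = ntohl(mask.s_addr);

          if ((m | (m - 1)) != UINT32_MAX)
                  return -1;

          return __builtin_popcount(m);
  }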
The INADDR_LOOPBACK constant is in host endianness, and similarly the
IN_MULTICAST macro expects a host endian address. However, there are some
places in passt where we use those with network endian values. This means
that passt will incorrectly allow you to set 127.0.0.1 or a multicast
address as the guest address or DNS forwarding address. Add the necessary
conversions to correct this.
INADDR_ANY and INADDR_BROADCAST logically behave the same way, although
because they're palindromes it doesn't have an effect in practice. Change
them to be logically correct while we're there, though.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
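The kind of conversion this adds, sketched on a network-order address
(exact call sites differ):

  /* addr.s_addr is network order; INADDR_LOOPBACK and IN_MULTICAST()
   * expect host order, so convert before comparing
   */
  if (ntohl(addr.s_addr) == INADDR_LOOPBACK || IN_MULTICAST(ntohl(addr.s_addr)))
          return -1;      /* reject as guest or DNS forwarding address */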
These show a summary of memory usage in kernel and userspace with
different port forwarding configurations, details of userspace usage
using 'nm' (passt only uses statically allocated memory), and details
of kernel memory from slab reporting facilities.
This adds a new test image, mbuto.mem.img, with hardcoded IPv4 and
IPv6 addresses and routes, and just the tools we need to start and
stop passt, to report from /proc/slabinfo, /proc/meminfo, and to
print and parse symbol sizes using nm(1).
passt can't pivot_root() for sandboxing purposes on ramfs, so we need
to create another filesystem and chroot into it, first.
We don't want to use pane context functions, as we're checking memory
usage for sockets: resort to screen-scraping.
Configure a dummy interface to provide passt with an appearance of
working IPv4 and IPv6 connectivity, contributed by David.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
This can be used for generic cell values with an arbitrary scale.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Instead of just disabling performance reports if running in demo
mode. This allows us to use table functions outside of performance
reports.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
On ramfs, connecting to a non-existent UNIX domain socket yields
EACCES, instead of ENOENT. This is visible if we use passt directly
on rootfs (a ramfs instance) from an initramfs image.
It's probably wrong for ramfs to return EACCES, but given the
simplicity of the filesystem, I doubt we should try to fix it there
at the possible cost of added complexity.
Also, this whole beauty should go away once qrap-less usage is
established, so just accept EACCES as indication that a conflicting
socket does not, in fact, exist.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
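In the probe for a conflicting socket, the two errno values are then
treated the same way, roughly as in this sketch (assuming the existing
loop over candidate socket paths):

  if (connect(ex, (const struct sockaddr *)&addr, sizeof(addr))) {
          if (errno == ENOENT || errno == EACCES)
                  break;          /* nothing there: we can use this path */
  } else {
          close(ex);              /* something is already listening */
  }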
I'm going to add yet another one of those, for which I have no quick
solution. It's a regression in some sense, but at least if we make
this regression more observable and defined, it should be easier to
find a comprehensive solution later, within this or another testing
framework.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Unfortunately, Bugzilla doesn't allow sharing queries with
unregistered users:
https://bugzilla.mozilla.org/show_bug.cgi?id=400063
...but we can still use ugly search links.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
In pasta mode, ICMP and ICMPv6 echo sockets relay back to us any
reply we send: we're on the same host as the target, after all. We
discard them by comparing the last sequence we sent with the sequence
we receive.
However, on the first reply for a given identifier, the sequence
might be zero, depending on the implementation of ping(8): we need
another value to indicate we haven't sent any sequence number, yet.
Use -1 as initialiser in the echo identifier map.
This is visible with Busybox's ping, and was reported by Paul on the
integration at https://github.com/containers/podman/pull/16141, with:
$ podman run --net=pasta alpine ping -c 2 192.168.188.1
...where only the second reply would be routed back.
Reported-by: Paul Holzinger <pholzing@redhat.com>
Fixes: 33482d5bf293 ("passt: Add PASTA mode, major rework")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
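In other words, the per-identifier state starts out as in this sketch
(struct and array names here are assumptions; the point is just the -1
sentinel):

  struct icmp_id_sock {
          int sock;       /* echo socket, -1 if unused */
          int seq;        /* last sequence number sent, -1 if none yet */
  };

  icmp_id_map[i].seq = -1;        /* don't discard a first reply with seq 0 */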
...instead of just reporting errors.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
This only worked for ICMPv6: ICMP packets have no TCP-style header,
so they are handled as a special case before packet sequences are
formed, and the call to tap_packet_debug() was missing.
Fixes: bb708111833e ("treewide: Packet abstraction with mandatory boundary checks")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Having -f implied by -d (and --trace) usually saves some typing, but
debug mode in background (with a log file) is quite useful if pasta
is started by Podman, and is probably going to be handy for passt
with libvirt later, too.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
They're too slow to cope with current release cycles, and they
haven't found bugs in months, also because clang-tidy and cppcheck
would find most of them earlier.
Disable them for the moment. We should pre-install gcc and make in
non-x86 images, as those run on my test machine with qemu TCG, and
that's the real slow-down here. Then we can re-enable them.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The out-of-tree Podman patch needs to be rebased every second week or
so, and I'm currently trying to get that upstream:
https://github.com/containers/podman/pull/16141
Disable demo generation for the moment, so that I avoid wasting time
with those rebases. We'll re-enable it later.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
To test log files on a tmpfs mount, we need to unshare the mount
namespace, which means using a context for the passt pane is not
really practical at the moment, as we can't open a shell there, so
we would have to encapsulate all the commands under 'unshare -rUm',
plus the "inner" pasta command, running in turn a tcp_rr server.
It might be worth fixing this by e.g. detecting we are trying to
spawn an interactive shell and adding a special path in the context
setup with some form of stdin redirection -- I'm not sure it's doable
though.
For this reason, add a new layout, using a context only for the host
pane, while keeping the old command dispatch mechanism for the passt
pane.
We also need a new setup function that doesn't start pasta: we want
to start and restart it with different options.
Further, we need a 'pint' directive, to send an interrupt to the
passt pane: add that in lib/test.
All the tests before the one involving tmpfs and a detached mount
namespace were also tested with the context mechanism. To make an
eventual conversion easier, pass tcp_crr directly as a command on
pasta's command line where feasible.
While at it, fix the comment to the teardown_pasta() function.
The new test set can be semi-conveniently run as:
./run pasta_options/log_to_file
and it checks basic log creation, size of the log file after flooding
it with debug entries, rotations, and basic consistency after
rotations, on both an existing filesystem and a tmpfs, chosen as
it doesn't support collapsing data ranges via fallocate(), hence
triggering the fall-back mechanism for logging rotation.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
We need to zero out the checksum field before calculating the
checksum, of course. I have no idea how this passed the "icmp" test
set, looking into it.
Reported-by: Paul Holzinger <pholzing@redhat.com>
Fixes: 67ab6171729c ("Add csum_icmp4() helper for calculating ICMP checksums")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
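That is, the helper has to clear the field before summing, along these
lines (csum() stands in for the actual Internet-checksum routine in
checksum.c):

  ih->checksum = 0;       /* the field must read as zero while summing */
  ih->checksum = csum(ih, sizeof(*ih) + dlen);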
Commit 84fec4e998b6 ("Clean up parsing of port ranges") drops the
strspn() call before the parsing of excluded port ranges, because now
we're checking against any stray characters at every step.
However, that also has the effect of passing ~ as first character to
the new parse_port_range(), which makes no sense: we already checked
that ~ is the first character before the call, so skip it.
Alona reported this output:
Invalid port specifier ~15000,~15001,~15006,~15008,~15020,~15021,~15090
while the whole specifier is indeed valid.
Reported-by: Alona Paz <alkaplan@redhat.com>
Fixes: 84fec4e998b6 ("Clean up parsing of port ranges")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
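The fix is simply to step past the marker before parsing the range
itself (a sketch; the real parse_port_range() arguments differ from
this simplified form):

  if (*spec == '~')       /* exclusion already noted by the caller */
          spec++;

  ret = parse_port_range(spec, &range);   /* simplified call */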
...instead of one fourth. On the main() -> conf() -> nl_sock_init()
call path, LTO from gcc 12 on (at least) x86_64 decides to inline...
everything: nl_sock_init() is effectively part of main(), after
commit 3e2eb4337bc0 ("conf: Bind inbound ports with
CAP_NET_BIND_SERVICE before isolate_user()").
This means we exceed the maximum stack size, and we get SIGSEGV,
under any condition, at start time, as reported by Andrea on a recent
build for CentOS Stream 9.
NS_FN_STACK_SIZE, the stack size we reserve for clones, was
previously obtained by dividing the maximum
stack size by two, to avoid an explicit check on architecture (on
PA-RISC, also known as hppa, the stack grows up, so we point the
clone to the middle of this area), and then further divided by two
to allow for any additional usage in the caller.
Well, if there are essentially no function calls anymore, this is
not enough. Divide it by eight, which is anyway much more than
possibly needed by any clone()d callee.
I think this is robust, so it's a fix in some sense. Strictly
speaking, though, we have no formal guarantees that this isn't
either too little or too much.
What we should do, eventually: check clone()d callees, there are just
thirteen of them at the moment. Note down any stack usage (they are
mostly small helpers), bonus points for an automated way at build
time, quadruple that or so, to allow for extreme clumsiness, and use
as NS_FN_STACK_SIZE. Perhaps introduce a specific condition for hppa.
Reported-by: Andrea Bolognani <abologna@redhat.com>
Fixes: 3e2eb4337bc0 ("conf: Bind inbound ports with CAP_NET_BIND_SERVICE before isolate_user()")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
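For reference, the resulting arrangement looks roughly like this
(RLIMIT_STACK_VAL standing for the stack limit in KiB known at build
time; names are taken from this message or assumed):

  /* One eighth of the stack limit, in bytes, reserved for clone()d callees */
  #define NS_FN_STACK_SIZE        (RLIMIT_STACK_VAL * 1024 / 8)

  char stack[NS_FN_STACK_SIZE] __attribute__ ((aligned(16)));

  /* Point clone() to the middle of the area: the stack grows down on most
   * architectures, but up on PA-RISC (hppa)
   */
  clone(fn, stack + NS_FN_STACK_SIZE / 2, flags | SIGCHLD, arg);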
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Starting with version 8.1.0, libvirt uses JSON syntax when
generating the arguments to -device, so they will now look like
{"driver":"virtio-scsi-pci","bus":"pci.3","addr":"0x0"}
instead of
virtio-scsi-pci,bus=pci.3,addr=0x0
qrap needs to parse these arguments and extract the bus number
in order to figure out what address to use for the virtio-net
device it adds, and the libvirt change described above has
broken this parsing logic.
Tweak the code so that both styles are accepted and handled
correctly.
Note that, when JSON is in use, qrap needs to generate its own
command line options in that format as well or things will not
work as expected.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The IPv4-specific dhcp() manually constructs L2 and IP headers to send its
DHCP reply packet, unlike its IPv6 equivalent in dhcpv6.c, which uses the
tap_udp6_send() helper. Now that we've broadened the parameters of
tap_udp4_send(), we can use it in dhcp() to avoid some duplicated logic.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
tap_ip4_send() has special case logic to compute the checksums for UDP
and ICMP packets, which is a mild layering violation. By using a suitable
helper we can split it into tap_udp4_send() and tap_icmp4_send() functions
without greatly increasing the code size, thus removing that layering
violation.
We make some small changes to the interface while there. In both cases
we make the destination IPv4 address a parameter, which will be useful
later. For the UDP variant we make it take just the UDP payload, and it
will generate the UDP header. For the ICMP variant we pass in the ICMP
header as before. The inconsistency is because that's what seems to be
the more natural way to invoke the function in the callers in each case.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
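The resulting interface looks roughly like this sketch (parameter names
and order are guesses; the point is the split and the explicit
destination address):

  void tap_udp4_send(const struct ctx *c, struct in_addr src, in_port_t sport,
                     struct in_addr dst, in_port_t dport,
                     const void *in, size_t len);      /* UDP payload only */

  void tap_icmp4_send(const struct ctx *c, struct in_addr src,
                      struct in_addr dst,
                      const void *in, size_t len);     /* ICMP header included */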
We send ICMPv6 packets to the guest from both icmp.c and ndp.c. The
code in ndp() manually constructs L2 and IPv6 headers, unlike the version
in icmp.c, which uses the tap_icmp6_send() helper from tap.c. Now that we've
broadened the parameters of tap_icmp6_send(), we can use it in ndp() as
well, saving some duplicated logic.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>