| Commit message | Author | Age | Files | Lines |
| |
When I reworked udp_init() to move most of the port binding logic
to conf_ports, I accidentally dropped this bit of automatic port
detection (and binding) at start-up.
On -U auto, in pasta mode, udp_sock_init_ns() binds ports in the
namespace that correspond to ports bound in the init namespace,
but on -u auto, nothing actually happens after port detection.
Add udp_sock_init_init() to deal with this, and while at it fix
the comment to udp_sock_init_ns(): the latter takes care of
outbound "connections".
This is currently not covered by tests, and the UDP port needs to
be already bound in the namespace when pasta starts (periodic
detection for UDP is a missing feature at the moment). It can be
checked like this:
$ unshare -rUn
# echo $$
590092
# socat -u UDP-LISTEN:5555 STDOUT
$ pasta -q -u auto 590092
$ echo "test" | socat -u STDIN UDP:localhost:5555
Reported-by: Paul Holzinger <pholzing@redhat.com>
Fixes: 3c6ae625101a ("conf, tcp, udp: Allow address specification for forwarded ports")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
The logic in tcp_timer() was inverted. fwd_out should expose the host
ports in the ns. Therefore, it must read the ports on the host and then
bind them in the netns. The same goes for fwd_in, which checks ports in
the ns and then exposes them on the host.
Note that this only fixes TCP ports; UDP does not seem to work at all
right now with the auto mode.
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Fixes: 1128fa03fe73 ("Improve types and names for port forwarding configuration")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Instead of:
https://fedoraproject.org/wiki/SELinux_Policy_Modules_Packaging_Draft
follow this:
https://fedoraproject.org/wiki/PackagingDrafts/SELinux_Independent_Policy
which seems to make more sense and fixes the issue that, on a fresh
install, without a reboot, the file contexts for the binaries are not
actually updated.
In detail:
- labels are refreshed using the selinux_relabel_pre and
selinux_relabel_post macros on install, upgrade, and uninstall
- use the selinux_modules_install and selinux_modules_uninstall
macros, instead of calling 'semodule' directly (no functional
changes in our case)
- require the -selinux package on SELinux-enabled environments and if
the current system policy is "targeted"
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
A cross-architecture build might pass a target-specific CC on 'make',
and not on 'make install', and this is what happens in Debian
cross-qa tests.
Given that we select binaries to be installed depending on the target
architecture, this means we would build AVX2 binaries in any case on
an x86_64 build machine.
By overriding TARGET in package build rules, we can tell the Makefile
about the target architecture, also for the 'install' (Makefile)
target.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
By default, 65520 bytes are advertised, and a value of zero disables
the MTU options in DHCP and NDP.
Fixes: ec2b58ea4dc4 ("conf, dhcp, ndp: Fix message about default MTU, make NDP consistent")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
On X32 (ILP32 using AMD64 system call ABI) and glibc,
struct timespec::tv_nsec is __syscall_slong_t and not a long int, see
also https://sourceware.org/bugzilla/show_bug.cgi?id=16437 and
timespec(3type). Fine, we could cast that down to long and be done
with it.
But it turns out that time_t (which isn't guaranteed to be equivalent
to any particular type) is also a long long int, and that one we can't
safely downcast.
To keep it simple, cast both to long long int, and change formats to
%lli, to avoid format warnings from gcc.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
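For reference, a minimal self-contained sketch of the pattern, with
illustrative names (none of this is taken from the passt sources):
  #include <stdio.h>
  #include <time.h>

  /* Print a timestamp without format warnings on LP64, ILP32 and X32:
   * cast both tv_sec and tv_nsec to long long int and use %lli
   */
  static void print_now(void)
  {
          struct timespec now;

          clock_gettime(CLOCK_MONOTONIC, &now);
          printf("%lli.%09lli\n",
                 (long long int)now.tv_sec, (long long int)now.tv_nsec);
  }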
| |
Link: https://github.com/fedora-selinux/selinux-policy/pull/1613
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
...to fit accepted Fedora practices.
Link: https://github.com/fedora-selinux/selinux-policy/pull/1613
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
...it's been a while.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
...and in any case, this patch doesn't offer any advantage over the
current upstream integration.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
Native support was introduced with commit 13c6be96618c, QEMU 7.2.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
See https://github.com/containers/podman/pull/16141, shipped in
Podman 4.4.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
Tom reports that a pattern of repeated ~1 MiB chunk downloads over
NNTP over TLS, on Podman 4.4 using pasta as network back-end, results
in pasta taking one full CPU thread after a while, with the download
never succeeding.
On that setup, we end up re-sending the same frame over and over,
with a consistent 65 534 bytes size, and never get an
acknowledgement from the tap-side client. This only happens for the
default MTU value (65 520 bytes) or for values that are slightly
smaller than that (down to 64 499 bytes).
We hit this condition because the MSS value we use in
tcp_data_from_sock(), only in pasta mode, is simply clamped to
USHRT_MAX, and not to the actual size of the buffers we pre-cooked
for sending, which is a bit less than that.
It looks like we got away with it until commit 0fb7b2b9080a ("tap:
Use different io vector bases depending on tap type") fixed the
setting of iov_len.
Luckily, since it's pasta, we're queueing up to two frames at a time,
so the worst that can happen is a badly segmented TCP stream: we
always have some space at the tail of the buffer.
Clamp the MSS value to the appropriate maximum given by struct
tcp{4,6}_buf_data_t, no matter if we're running in pasta or passt
mode.
While at it, fix the comments on those structs to reflect the current
struct sizes. This is not really relevant for any further calculation
or consideration, but it's convenient to know while debugging this
kind of issue.
Thanks to Tom for reporting the issue in a very detailed way and for
providing a test setup.
Reported-by: Tom Mombourquette <tom@devnode.com>
Link: https://github.com/containers/podman/issues/17703
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
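A self-contained sketch of the idea; the struct layout and all names
below are made up for illustration and don't match the actual
definitions in tcp.c:
  #include <stddef.h>
  #include <stdint.h>

  #define MIN(a, b)       (((a) < (b)) ? (a) : (b))

  /* Hypothetical stand-in for struct tcp4_buf_data_t */
  struct buf_data {
          uint8_t header[66];
          uint8_t payload[65535 - 66];
  };

  /* Clamp the MSS to what a pre-cooked buffer can actually hold, not
   * just to USHRT_MAX, regardless of passt or pasta mode
   */
  static size_t clamp_mss(size_t mss)
  {
          return MIN(mss, sizeof(((struct buf_data *)0)->payload));
  }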
| |
In general, we don't terminate or report failures if we fail to bind
to some ports out of a given port range specifier, to allow users to
conveniently specify big port ranges (or "all") without having to
care about ports that might already be in use.
However, running out of the open file descriptor quota is a
different story: we then fall short of what the user requested in a
very substantial way.
For example, if the user specifies '-t all' and we can only bind
1024 sockets, the behaviour is rather unexpected.
Fail whenever socket creation returns -ENFILE or -EMFILE.
Link: https://bugs.passt.top/show_bug.cgi?id=27
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
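A sketch of the intended behaviour, with a made-up helper standing in
for the real socket creation path:
  #include <errno.h>
  #include <unistd.h>
  #include <sys/socket.h>

  /* Skip ports that fail for ordinary reasons (e.g. already in use),
   * but treat file descriptor exhaustion as fatal
   */
  static int try_port_sock(int family, int type)
  {
          int s = socket(family, type, 0);

          if (s < 0) {
                  if (errno == ENFILE || errno == EMFILE)
                          return -errno;  /* fatal: caller terminates */

                  return 0;               /* non-fatal: skip this port */
          }

          /* ... bind(), add to epoll ... */
          close(s);
          return 0;
  }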
| |
The comments say we should return 0 on partial success, and an error
code on complete failure. Rationale: if the user configures a port
forwarding, and we only succeed in binding that port for IPv4 or for
IPv6, that might actually be what the user intended.
Adjust the two functions to reflect the comments.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
...starting from sock_l4(), pass negative error (errno) codes instead
of -1. They will only be used two commits from now; no functional
changes are intended here.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
musl doesn't define those, use our own definition there. This is a
trivial implementation, similar to what's shipped by e.g. uClibc,
glibc, libiio.
Reported-by: Chris Kuhn <kuhnchris+github@kuhnchris.eu>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
Roughly inspired by a patch from Chris Kuhn: fix up includes so that
we can build against musl. glibc is more lenient, as its headers
generally pull in a larger number of other headers.
Compared to the original patch, I only included what was needed
directly in C files, instead of adding blanket includes in local
header files. It's a bit more involved, but more consistent with the
current (not ideal) situation.
Reported-by: Chris Kuhn <kuhnchris+github@kuhnchris.eu>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
While building against musl, gcc informs us that 'stderr' is a
protected keyword. This probably comes from a #define stderr (stderr)
in musl's stdio.h, to avoid a clash with extern FILE *const stderr,
but I didn't really track it down. Just rename it to force_stderr;
that makes more sense anyway.
[sbrivio: Added commit message]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
...instead of BUFSIZ. On musl, BUFSIZ is 1024, so we'll typically
truncate the response to the request we send in nl_link(). It's
usually 8192 or more with glibc.
There doesn't seem to be any macro defining the rtnetlink maximum
message size, and iproute2 just hardcodes 1024 * 1024 for the receive
buffer, but the example in netlink(7) makes some sense, looking at
the kernel implementation.
It's not very clean, but we're very unlikely to hit that limit,
and if we do, we'll find out painlessly, because NLA_OK() will tell
us right away.
Reported-by: Chris Kuhn <kuhnchris+passt@kuhnchris.eu>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
I didn't notice earlier: libslirp (and slirp4netns) supports binding
outbound sockets to specific IPv4 and IPv6 addresses, to force the
source address selection. If we want to claim feature parity, we
should implement that as well.
Further, Podman supports specifying outbound interfaces as well, but
this is simply done by resolving the primary address for an interface
when the network back-end is started. However, since kernel version
5.7, commit c427bfec18f2 ("net: core: enable SO_BINDTODEVICE for
non-root users"), we can actually bind to a specific interface name,
which doesn't need to be validated in advance.
Implement -o / --outbound ADDR to bind to IPv4 and IPv6 addresses,
and --outbound-if4 and --outbound-if6 to bind IPv4 and IPv6 sockets
to given interfaces.
Given that it probably makes little sense to select addresses and
routes from interfaces different than the ones given for outbound
sockets, also assign those as "template" interfaces, by default,
unless explicitly overridden by '-i'.
For ICMP and UDP, we call sock_l4() to open outbound sockets, as we
already needed to bind to given ports or echo identifiers, and we
can bind() a socket only once: there, pass address (if any) and
interface (if any) for the existing bind() and setsockopt() calls.
For TCP, in general, we wouldn't otherwise bind sockets. Add a
specific helper to do that.
For UDP outbound sockets, we need to know if the final destination
of the socket is a loopback address, before we decide whether it
makes sense to bind the socket at all: move the block mangling the
address destination before the creation of the socket in the IPv4
path. This was already the case for the IPv6 path.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
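A self-contained sketch of binding an outbound IPv4 socket to a source
address and to an interface by name; the helper and its signature are
illustrative, not passt's actual code:
  #include <string.h>
  #include <arpa/inet.h>
  #include <sys/socket.h>
  #include <netinet/in.h>

  static int bind_outbound4(int s, const char *addr, const char *ifname)
  {
          struct sockaddr_in sa = { .sin_family = AF_INET };

          /* Source address selection: a socket can be bound only once */
          if (addr) {
                  if (inet_pton(AF_INET, addr, &sa.sin_addr) != 1)
                          return -1;
                  if (bind(s, (struct sockaddr *)&sa, sizeof(sa)))
                          return -1;
          }

          /* Since Linux 5.7, this no longer needs extra capabilities */
          if (ifname && setsockopt(s, SOL_SOCKET, SO_BINDTODEVICE,
                                   ifname, strlen(ifname) + 1))
                  return -1;

          return 0;
  }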
| |
In preparation for the next patch, make it clear that the first
routable interface fetched via netlink, or the one configured via
-i/--interface, is simply used as a template to copy addresses and
routes from, not as an interface we actually use to derive the source
address (which will be _bound to_) for outgoing packets.
The man page and usage message appear to be already clear enough.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Even libvirt itself will configure passt to write log, PID and socket
files to different locations depending on whether the domain is
started as root (/var/log/libvirt/...) or as a regular user
(/var/log/<PID>/libvirt/...), and user_tmp_t would only cover the
latter.
Create interfaces for log and PID files, so that callers can specify
different file contexts for those, and modify the interface for the
UNIX socket file to allow different paths as well.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Tested-by: Laine Stump <laine@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
| |
Laine reports that with a simple:
<portForward proto='tcp'>
<range start='2022' to='22'/>
</portForward>
in libvirt's domain XML, passt won't start as it fails to bind
arbitrary ports. That was actually the intention behind passt_port_t:
the user or system administrator should have explicitly configured
allowed ports on a given machine. But it's probably not realistic, so
just allow any port to be bound and forwarded.
Also fix up some missing operations on sockets.
Reported-by: Laine Stump <laine@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Tested-by: Laine Stump <laine@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
| |
Otherwise, it's unusable as a stand-alone tool, or in foreground mode,
and it's also impossible to get output from --help or --version,
because to SELinux it's just a daemon.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Tested-by: Laine Stump <laine@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
| |
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Tested-by: Laine Stump <laine@redhat.com>
Reviewed-by: Laine Stump <laine@redhat.com>
| |
When a ssize_t is an int:
udp.c: In function ‘udp_sock_handler’:
udp.c:774:23: warning: comparison of integer expressions of different signedness: ‘unsigned int’ and ‘ssize_t’ {aka ‘int’} [-Wsign-compare]
774 | for (i = 0; i < n; i += m) {
| ^
udp.c:781:43: warning: comparison of integer expressions of different signedness: ‘unsigned int’ and ‘ssize_t’ {aka ‘int’} [-Wsign-compare]
781 | for (m = 1; i + m < n; m++) {
|
Change 'i' and 'm' counters in udp_sock_handler() to signed versions,
to match ssize_t n.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
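The shape of the fix, as a sketch with the loop bodies omitted:
  #include <sys/types.h>

  static void scan_messages(ssize_t n)
  {
          ssize_t i, m;   /* signed, to match n even where ssize_t is int */

          for (i = 0; i < n; i += m) {
                  for (m = 1; i + m < n; m++)
                          ;       /* count datagrams for the same flow */
          }
  }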
| |
sh4 builds currently fail because of this:
https://buildd.debian.org/status/fetch.php?pkg=passt&arch=sh4&ver=0.0%7Egit20230216.4663ccc-1&stamp=1677091350&raw=0
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Debian cross-building automatic checks:
http://crossqa.debian.net/src/passt
currently fail because we don't use the right target architecture and
compiler while building the system call lists and resolving their
numbers in seccomp.sh. Pass ARCH and CC to seccomp.sh and use them.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
ia64 needs to use __clone2() as clone() is not available, but glibc
doesn't export the prototype. Take it from clone(2) to avoid an
implicit declaration:
util.c: In function ‘do_clone’:
util.c:512:16: warning: implicit declaration of function ‘__clone2’ [-Wimplicit-function-declaration]
512 | return __clone2(fn, stack_area + stack_size / 2, stack_size / 2,
| ^~~~~~~~
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
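For reference, the prototype as documented in clone(2):
  #include <stddef.h>

  int __clone2(int (*fn)(void *), void *stack_base, size_t stack_size,
               int flags, void *arg, ...
               /* pid_t *parent_tid, struct user_desc *tls,
                  pid_t *child_tid */ );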
| |
One day, libvirt might actually support running passt to provide
guest connectivity. Should libvirtd (or virtqemud) start passt, it
will need to access socket and PID files in specific locations, and
passt needs to accept SIGTERM in case QEMU fails to start after passt
is already started.
To make this more convenient, split the current profile into two
abstractions, for passt and for pasta, so that external programmes
can include the bits they need (and especially not include the pasta
abstraction if they only need to start passt), plus whatever specific
adaptation is needed.
For stand-alone usage of passt and pasta, the 'passt' profile simply
includes both abstractions, plus rules to create and access PID and
capture files in default or reasonable ($HOME) locations.
Tested on Debian with libvirt 9.0.0 together with a local fix to start
passt as intended, namely libvirt commit c0efdbdb9f66 ("qemu_passt:
Avoid double daemonizing passt"). This is an example of how the
libvirtd profile (or virtqemud abstraction, or virtqemud profile) can
use this:
# support for passt network back-end
/usr/bin/passt Cx -> passt,
profile passt {
/usr/bin/passt r,
owner @{run}/user/[0-9]*/libvirt/qemu/run/passt/* rw,
signal (receive) set=("term") peer=/usr/sbin/libvirtd,
signal (receive) set=("term") peer=libvirtd,
include if exists <abstractions/passt>
}
translated:
- when executing /usr/bin/passt, switch to the subprofile "passt"
(not the "discrete", i.e. stand-alone profile), described below.
Scrub the environment (e.g. LD_PRELOAD is dropped)
- in the "passt" subprofile:
- allow reading the binary
- allow read and write access to PID and socket files
- make passt accept SIGTERM from /usr/sbin/libvirtd, and
libvirtd peer names
- include anything else that's needed by passt itself
Suggested-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
While generating -device as JSON when JSON is in use is mandatory,
because not doing so can often prevent the VM from starting up, using
JSON for -netdev is not strictly necessary: it simply makes things a
bit nicer. No reason not to do it, though.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
For pc machines, devices are placed directly on pci.0 with
addresses like
bus=pci.0,addr=0xa
and in this case the existing code works correctly.
For q35 machines, however, a separate PCI bus is created for
each device using a pcie-root-port, and the resulting
addresses look like
bus=pci.9,addr=0x0
In this case, we need to treat PCI addresses as decimal, not
hexadecimal, both when parsing and generating them.
This issue has gone unnoticed for a long time because it only
shows up when enough PCI devices are present: for small
numbers, decimal and hexadecimal overlap, masking the issue.
Reported-by: Alona Paz <alkaplan@redhat.com>
Fixes: 5307faa05997 ("qrap: Strip network devices from command line, set them up according to machine")
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
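A minimal sketch of the parsing side, with a made-up helper name; the
point is that the bus index after "pci." must be read as decimal,
otherwise e.g. "pci.10" would silently become bus 16:
  #include <stdlib.h>
  #include <string.h>

  static unsigned long bus_index(const char *bus)
  {
          /* "pci.9" -> 9: base 10, not 16 */
          return strtoul(bus + strlen("pci."), NULL, 10);
  }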
| |
When JSON support was introduced, the drop_args array was not
updated accordingly.
Fixes: b944ca185587 ("qrap: Support JSON syntax for -device")
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
The JSON version of the template is incorrect, making qrap
completely unusable when JSON arguments are used together
with the pc machine type.
Fixes: b944ca185587 ("qrap: Support JSON syntax for -device")
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
The pci.0 bus on a pc machine has 32 slots.
For q35 machines, we don't expect devices to be plugged into
pcie.0 directly, so technically we could have a very large
number of slots by adding many pcie-root-ports, but even in
this scenario 32 is a reasonable number.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
If we define die() as a variadic macro, passing __VA_ARGS__ to err(),
and calling exit() outside err() itself, we can drop the workarounds
introduced in commit 36f0199f6ef4 ("conf, tap: Silence two false
positive invalidFunctionArg from cppcheck").
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
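A sketch of the approach; err() here stands for passt's own logging
function, not err(3):
  #include <stdlib.h>

  /* Forward everything to the logging function, then exit() right
   * here, so static checkers see the termination path directly
   */
  #define die(...)                              \
          do {                                  \
                  err(__VA_ARGS__);             \
                  exit(EXIT_FAILURE);           \
          } while (0)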
| |
ShellCheck reports (SC2034) that __qemu_arch is not used. Use it,
and silence the resulting SC2086 warning as we want word splitting on
options we pass with it.
While at it, silence SC2317 warnings for commands in cleanup() that
appear to be unreachable: cleanup() is only called as a trap handler.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
...and, given that I keep getting this wrong, add a convenience
macro, MAX_FROM_BITS().
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
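The convenience macro presumably boils down to something like this
sketch (the exact definition in util.h may differ):
  /* Largest value representable in 'n' bits, assuming n < 32 */
  #define MAX_FROM_BITS(n)        ((1U << (n)) - 1)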
| |
If tcp_timer_ctl() gets a socket number greater than SOCKET_MAX
(2 ^ 24), we return an error but we don't close the socket. This is a
rather formal issue given that, at least on Linux, socket numbers are
monotonic and we're in general not allowed to open so many sockets.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
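The gist of the fix, as a fragment; the error code returned here is
illustrative:
  if (s > SOCKET_MAX) {
          close(s);       /* don't leak the socket */
          return -EIO;    /* illustrative error code */
  }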
| |
If there are no TCP options in the header, tcp_tap_handler() will
pass the corresponding pointer, fetched via packet_get(), as NULL to
tcp_conn_from_sock_finish(), which in turn indirectly calls
tcp_opt_get().
If there are no options, tcp_opt_get() will stop right away because
the option length is indicated as zero. However, the logic is
complicated enough for static checkers to follow that adding an
explicit check against NULL in tcp_opt_get() is probably a good idea.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
We use the return value of fls() as array index for debug strings.
While fls() can return -1 (if no bit is set), Coverity Scan doesn't
see that we're first checking the return value of another fls() call
with the same bitmask, before using it.
Call fls() once, store its return value, check it, and use the stored
value as array index.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
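In pattern form, with made-up array and variable names:
  /* Call fls() once, keep the result, and only use it as an index
   * once we know it isn't -1
   */
  int flag_bit = fls(flags);

  if (flag_bit >= 0)
          debug("flag: %s", flag_name[flag_bit]);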
| |
Recently, commit 4ddbcb9c0c55 ("tcp: Disable optimisations
for tcp_hash()") worked around yet another issue we hit with gcc 12
and '-flto -O2': some stores affecting the input data to siphash_20b()
were omitted altogether, and tcp_hash() wouldn't get the correct hash
for incoming connections.
Digging further into this revealed that, at least according to gcc's
interpretation of C99 aliasing rules, passing pointers to functions
with different types compared to the effective type of the object
(for example, a uint8_t pointer to an anonymous struct, as it happens
in tcp_hash()), doesn't guarantee that stores are not reordered
across the function call.
This means that, in general, our checksum and hash functions might
not see parts of input data that was intended to be provided by
callers.
Not even switching from uint8_t to character types, which should be
appropriate here, according to C99 (ISO/IEC 9899, TC3, draft N1256),
section 6.5, "Expressions", paragraph 7:
An object shall have its stored value accessed only by an lvalue
expression that has one of the following types:
[...]
— a character type.
does the trick. I guess this is also subject to interpretation:
casting passed pointers to character types, and then using those as
different types, might still violate (dubious) aliasing rules.
Disable gcc strict aliasing rules for potentially affected functions,
which, in turn, disables gcc's Type-Based Alias Analysis (TBAA)
optimisations based on those function arguments.
Drop the existing workarounds. Also the (seemingly?) bogus
'maybe-uninitialized' warning on the tcp_tap_handler() > tcp_hash() >
siphash_20b() path goes away with -fno-strict-aliasing on
siphash_20b().
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
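With gcc, strict aliasing (and thus TBAA) can be disabled for a single
function via the optimize attribute; a self-contained sketch follows,
using an illustrative function that merely sums 16-bit words (whether
the actual change applies the attribute directly or through a macro is
not shown here):
  #include <stddef.h>
  #include <stdint.h>

  __attribute__((optimize("-fno-strict-aliasing")))
  static uint32_t partial_csum(const void *buf, size_t len)
  {
          const uint16_t *p = buf;
          uint32_t sum = 0;

          while (len >= 2) {
                  sum += *p++;
                  len -= 2;
          }

          if (len)
                  sum += *(const uint8_t *)p;

          return sum;
  }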
| |
cppcheck 2.10 reports:
tcp.c:1815:12: style: Condition 'wnd>prev_scaled' is always false [knownConditionTrueFalse]
if ((wnd > prev_scaled && wnd * 99 / 100 < prev_scaled) ||
^
tcp.c:1808:8: note: Assignment 'wnd=((1<<(16+8))<(wnd))?(1<<(16+8)):(wnd)', assigned value is less than 1
wnd = MIN(MAX_WINDOW, wnd);
^
tcp.c:1811:19: note: Assuming condition is false
if (prev_scaled == wnd)
^
tcp.c:1815:12: note: Condition 'wnd>prev_scaled' is always false
if ((wnd > prev_scaled && wnd * 99 / 100 < prev_scaled) ||
^
but this is not actually the case: wnd is typically greater than 1,
and might very well be greater than prev_scaled as well.
I bisected this down to cppcheck commit b4d455df487c ("Fix 11349: FP
negativeIndex for clamped array index (#4627)") and reported findings
at https://github.com/danmar/cppcheck/pull/4627.
Suppress the warning for the time being.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
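For reference, cppcheck also accepts inline suppressions of this form
(whether the warning is suppressed inline or via the command line is
not shown here):
  /* cppcheck-suppress knownConditionTrueFalse */
  if ((wnd > prev_scaled && wnd * 99 / 100 < prev_scaled) ||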
| |
openlog() can be used to set "ident" and have all the log messages
prefixed by it, but only if we call syslog(): the prefixing is
implemented by the C library.
We don't log messages with syslog(), though, as we have a custom
implementation to ensure we don't need dynamic memory allocation.
This means that it's perfectly useless to call openlog(), and that we
have to prefix every message we log with the identifier on our own.
Reported-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
Andrea reports that with a Fedora 37 guest running on a Fedora 37
host, both using systemd-resolved, with passt connecting them,
running with default options, DNS queries don't work.
systemd-resolved on the host is reachable only at the loopback
address 127.0.0.53.
We advertise the default gateway address to the guest as resolver,
because our local address is of course unreachable from there, which
means we see DNS queries directed to the default gateway, and we
redirect them to 127.0.0.1. However, systemd-resolved doesn't answer
on 127.0.0.1.
To fix this, set @dns_match to the address of the default gateway,
unless a different resolver address is explicitly configured, so that
we know we explicitly have to map DNS queries, in this case, to the
address of the local resolver.
This means that in udp_tap_handler() we need to check, first, if
the destination address of packets matches @dns_match: even if it's
the address of the local gateway, we want to map that to a specific
address, which isn't necessarily 127.0.0.1.
Do the same for IPv6 for consistency, even though IPv6 defines a
single loopback address.
Reported-by: Andrea Bolognani <abologna@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
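Conceptually, the check in udp_tap_handler() becomes something like
this fragment; only dns_match appears in the text above, the other
names are illustrative (assumes <string.h>):
  /* If the query targets the advertised resolver, rewrite it to the
   * actual local resolver, which isn't necessarily 127.0.0.1
   */
  if (!memcmp(&dst, &c->ip4.dns_match, sizeof(dst)))
          dst = c->ip4.dns_host;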
| |
The logic handling which resolvers we add, and whether to add them,
is getting rather cramped in get_dns(): split it into separate
functions.
No functional changes intended.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
Instead of the address of the first resolver we advertise to
the guest or namespace.
This was one of the intentions behind commit 3a2afde87dd1 ("conf,
udp: Drop mostly duplicated dns_send arrays, rename related fields"),
but I forgot to implement this part. In practice, they are usually
the same thing, unless /etc/resolv.conf points to a loopback address.
Fixes: 3a2afde87dd1 ("conf, udp: Drop mostly duplicated dns_send arrays, rename related fields")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Tested-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>