Port numbers are unsigned values, but we're storing them in (signed) int
variables in some places. This isn't actually harmful, because int is
large enough to hold the entire range of ports. However, in places where
we don't want to use an in_port_t (usually to avoid overflow on the last
iteration of a loop), it makes more conceptual sense to use an unsigned
int. This will also avoid some problems with later cleanups.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
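
As a minimal illustration of the loop in question (the helper below is made up, not code from the tree):

#include <stdint.h>
#include <netinet/in.h>

/* With an in_port_t (uint16_t) counter, the increment from 65535 would
 * wrap to 0 and the loop would never terminate; a wider unsigned int
 * avoids that.
 */
static void for_each_port(void (*fn)(in_port_t port))
{
    unsigned int p;    /* deliberately not in_port_t */

    for (p = 0; p <= UINT16_MAX; p++)
        fn((in_port_t)p);    /* narrow only at the point of use */
}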
 
Now that we've delayed initialization of the UDP-specific "reverse" map
until udp_init(), the only difference between the various 'remap'
functions used in conf_ports() is which array they target. So, simplify
by open-coding the logic in conf_ports(), using a pointer to the correct
mapping array.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
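
A rough sketch of the idea, with made-up array and function names:

#include <netinet/in.h>

#define NUM_PORTS (1U << 16)

static in_port_t tcp_fwd_delta[NUM_PORTS];    /* made-up names */
static in_port_t udp_fwd_delta[NUM_PORTS];

/* Select the target array once, then share the remap logic */
static void remap(int proto, in_port_t port, in_port_t mapped)
{
    in_port_t *delta = proto == IPPROTO_TCP ? tcp_fwd_delta : udp_fwd_delta;

    delta[port] = (in_port_t)(mapped - port);    /* delta, modulo 65536 */
}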
 
The configuration for how to forward ports in and out of the guest/ns is
divided between several different variables. For each connection
direction and protocol we have: a mode in the udp/tcp context structure,
a bitmap of which ports to forward (also in the context structure), and
an array of deltas to apply when the outward-facing and inward-facing
port numbers differ. This last one is a separate global variable, rather
than being in
the context structure, for no particular reason. UDP also requires an
additional array which has the reverse mapping used for return packets.
Consolidate these into a re-used substructure in the context structure.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
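
Something along these lines (field and type names here are illustrative, not the ones actually used):

#include <stdint.h>
#include <netinet/in.h>

#define NUM_PORTS        (1U << 16)
#define PORT_BITMAP_SIZE (NUM_PORTS / 8)

struct port_fwd_sketch {
    int       mode;                     /* how ports are selected */
    uint8_t   map[PORT_BITMAP_SIZE];    /* which ports to forward */
    in_port_t delta[NUM_PORTS];         /* outward vs. inward port */
    in_port_t rdelta[NUM_PORTS];        /* reverse mapping (UDP only) */
};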
 
enum conf_port_type is local to conf.c and is used to track the port
forwarding mode during configuration. We don't keep it around in the
context structure; however, the 'init_detect_ports' and 'ns_detect_ports'
fields in the context are based solely on it. Rather than changing that
encoding, just include the forwarding mode in the context structure.
Move the type definition to a new port_fwd.h, which is kind of trivial at
the moment but will have more stuff later.
While we're there, "conf_port_type" doesn't really convey that this enum is
describing how port forwarding is configured. Rename it to port_fwd_mode.
The variables (now fields) of this type also have mildly confusing names
since it's not immediately obvious whether 'ns' and 'init' refer to the
source or destination of the packets. Use "in" (host to guest / init to
ns) and "out" (guest to host / ns to init) instead.
This has the added bonus that we no longer have locals 'udp_init' and
'tcp_init' which shadow global functions.
In addition, add a typedef 'port_fwd_map' for a bitmap of each port number,
which is used in several places.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
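
A sketch of what such a header might look like; the exact names and values in the sources may differ:

/* port_fwd.h (sketch) */
#ifndef PORT_FWD_H
#define PORT_FWD_H

#include <stdint.h>

enum port_fwd_mode {
    FWD_SPEC,    /* an explicit list of ports */
    FWD_NONE,    /* no forwarding */
    FWD_AUTO,    /* detect bound ports at runtime */
    FWD_ALL,     /* forward the whole range */
};

#define PORT_BITMAP_SIZE ((1U << 16) / 8)
typedef uint8_t port_fwd_map[PORT_BITMAP_SIZE];

#endif /* PORT_FWD_H */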
 
The context structure contains a batch of fields specific to IPv4 and to
IPv6 connectivity. Split those out into a sub-structure.
This allows the conf_ip4() and conf_ip6() functions, which take the
entire context but touch very little of it, to be given more specific
parameters, making it clearer what they affect without stepping through the
code.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
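
Roughly along these lines (struct and field names are illustrative):

#include <netinet/in.h>

/* Per-protocol sub-structures split out of the main context */
struct ip4_ctx_sketch {
    struct in_addr addr, mask, gw, dns[4];
};

struct ip6_ctx_sketch {
    struct in6_addr addr, gw, dns[4];
};

struct ctx_sketch {
    /* ... */
    struct ip4_ctx_sketch ip4;
    struct ip6_ctx_sketch ip6;
};

/* conf_ip4()/conf_ip6() can then take just the part they touch, e.g.: */
int conf_ip4_sketch(unsigned int ifi, struct ip4_ctx_sketch *ip4);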
 
After recent changes, conf_ip() now has essentially disjoint paths for
IPv4 and IPv6 configuration. So, it's cleaner to split them out into
separate functions, conf_ip4() and conf_ip6().
Splitting these out also lets us make the interface a bit nicer, having
them return success or failure directly, rather than manipulating c->v4
and c->v6 to indicate success/failure of the two versions.
Since these functions may also initialize the interface index for each
protocol, it turns out we can then drop c->v4 and c->v6 entirely, replacing
tests on those with tests on whether c->ifi4 or c->ifi6 is non-zero (since
a 0 interface index is never valid).
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[sbrivio: Whitespace fixes]
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
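
A minimal sketch of the resulting check, with a made-up context type:

struct ctx_min {
    unsigned int ifi4, ifi6;    /* minimal stand-in for the real context */
};

/* A zero interface index is never valid, so it can double as the old
 * c->v4 / c->v6 "enabled" flags.
 */
static int ipv4_enabled(const struct ctx_min *c) { return c->ifi4 != 0; }
static int ipv6_enabled(const struct ctx_min *c) { return c->ifi6 != 0; }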
 
It's quite plausible for a host to have both IPv4 and IPv6 connectivity,
but only via different interfaces. For example, this will happen in the
case that IPv6 connectivity is via a tunnel (e.g. 6in4 or 6rd). It would
also happen in the case that IPv4 access is via a tunnel on an otherwise
IPv6-only local network, a setup that might become more common in the
post-IPv4-address-exhaustion world.
It turns out there's no real need for passt/pasta to get its IPv4 and IPv6
connectivity via the same interface, so we can handle this situation fairly
easily. Change the core to allow separate external interfaces for IPv4 and
IPv6. We don't actually set these separately for now.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
 
If the first packet_get() call doesn't assign len, the second one
will also return NULL, but gcc doesn't see this.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
gcc 12.1.x (e.g. current OpenSUSE Tumbleweed, x86_64 only,
gcc-12-1.4.x86_64) reports:
tcp.c: In function ‘tcp_send_flag’:
tcp.c:1014:9: warning: writing 16 bytes into a region of size 0 [-Wstringop-overflow=]
1014 | memcpy(low_rtt_dst + hole++, &conn->a.a6, sizeof(conn->a.a6));
| ^
tcp.c:559:24: note: at offset -16 into destination object ‘low_rtt_dst’ of size 128
559 | static struct in6_addr low_rtt_dst[LOW_RTT_TABLE_SIZE];
|
but 'hole' can't be -1, because the low_rtt_dst table is guaranteed
to have a hole: if we happened to write to the last entry, we'll go
back to index 0 and clear that one.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
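
A simplified reconstruction of the invariant, for illustration only (the real function differs in detail):

#include <string.h>
#include <netinet/in.h>

#define LOW_RTT_TABLE_SIZE 8

static struct in6_addr low_rtt_dst[LOW_RTT_TABLE_SIZE];

/* After writing to the last slot we wrap and clear slot 0, so a scan
 * for an all-zero "hole" always finds one (the table starts out all
 * zero), and 'hole' can never stay -1.
 */
static void low_rtt_store(const struct in6_addr *addr)
{
    static const struct in6_addr zero;
    int hole = -1, i;

    for (i = 0; i < LOW_RTT_TABLE_SIZE; i++) {
        if (!memcmp(&low_rtt_dst[i], &zero, sizeof(zero))) {
            hole = i;
            break;
        }
    }

    memcpy(low_rtt_dst + hole++, addr, sizeof(*addr));
    if (hole == LOW_RTT_TABLE_SIZE)
        hole = 0;
    memset(low_rtt_dst + hole, 0, sizeof(*addr));    /* keep a hole */
}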
 
This feature is available in slirp4netns but was missing in passt and
pasta.
Given that we don't do dynamic memory allocation, we need to bind
sockets while parsing port configuration. This means we need to
process all other options first, as they might affect addressing and
IP version support. It also implies a minor rework of how TCP and UDP
implementations bind sockets.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
Reported by Coverity: it doesn't see that tcp{4,6}_l2_buf_used are
set to zero by tcp_l2_data_buf_flush(), repeat that explicitly here.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
A flag or event bit is always set by callers. Reported by Coverity.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
Not an issue with a sane kernel behaviour. Reported by Coverity.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
All instances were harmless, but it might be useful to have some
debug messages here and there. Reported by Coverity.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
Field doff in struct tcphdr is 4 bits wide, so optlen in
tcp_tap_handler() is already bounded, but make that explicit.
Reported by Coverity.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
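
For illustration, a stand-alone sketch of the bound (not the actual code):

#include <stddef.h>
#include <linux/tcp.h>

/* doff is the header length in 32-bit words and is 4 bits wide, so the
 * options can never exceed 60 - 20 = 40 bytes; clamping makes the bound
 * explicit for static checkers.
 */
static size_t tcp_opt_len(const struct tcphdr *th)
{
    long optlen = (long)th->doff * 4L - (long)sizeof(struct tcphdr);

    if (optlen < 0)
        optlen = 0;
    if (optlen > 40)
        optlen = 40;

    return (size_t)optlen;
}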
 
Harmless except for two bad debugging prints.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
The existing sizes provide no measurable differences in throughput
and packet rates at this point. They were probably needed as batched
implementations were not complete, but they can be decreased quite a
bit now.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
...we don't really need two extra bits, but it's easier to organise
things differently than to silence this.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
Implement a packet abstraction providing boundary and size checks
based on packet descriptors: packets stored in a buffer can be queued
into a pool (which has no storage of its own), and data can be retrieved
by referring to an index in the pool, specifying an offset and a length.
Checks ensure data is not read outside the boundaries of buffer and
descriptors, and that packets added to a pool are within the buffer
range with valid offset and indices.
This implies a wider rework: usage of the "queueing" part of the
abstraction mostly affects tap_handler_{passt,pasta}() functions and
their callees, while the "fetching" part affects all the guest or tap
facing implementations: TCP, UDP, ICMP, ARP, NDP, DHCP and DHCPv6
handlers.
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
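
A much-simplified sketch of the kind of checks involved; names and layout are illustrative, not the real API:

#include <stddef.h>

struct desc {
    char   *start;
    size_t len;
};

struct pool_sketch {
    char        *buf;         /* backing buffer, owned by the caller */
    size_t      buf_size;
    size_t      count, size;  /* used and total descriptor slots */
    struct desc pkt[];
};

/* Return 'len' bytes at 'offset' into packet 'idx', or NULL if the
 * request falls outside the descriptor or the backing buffer.
 */
static void *packet_get_sketch(const struct pool_sketch *p, size_t idx,
                               size_t offset, size_t len)
{
    const struct desc *d;

    if (idx >= p->count)
        return NULL;

    d = &p->pkt[idx];
    if (d->start < p->buf || d->start + d->len > p->buf + p->buf_size)
        return NULL;

    if (offset > d->len || len > d->len - offset)
        return NULL;

    return d->start + offset;
}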
 
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
...by:
- storing the chained-hash next connection pointer as a numeric
reference rather than as a pointer
- storing the MSS as a 14-bit value, and rounding it
- using only the number of bits actually needed to store the hash
bucket number
- explicitly limiting window scaling factors to 4-bit values
(the maximum factor is 14, from RFC 7323)
- scaling SO_SNDBUF values, and using an 8-bit representation for
the duplicate ACK sequence
- keeping window values unscaled, as received and sent
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
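
A sketch of the kind of packing this describes; the widths follow the list above, but field names and the exact layout are illustrative:

#include <stdint.h>

struct tcp_conn_sketch {
    uint32_t     next_index;         /* numeric reference, not a pointer */
    unsigned int mss         :14;    /* rounded MSS */
    unsigned int hash_bucket :18;    /* only the bits the table needs */
    unsigned int ws_from_tap :4;     /* max scaling factor is 14 (RFC 7323) */
    unsigned int ws_to_tap   :4;
    uint8_t      snd_buf;            /* scaled SO_SNDBUF */
    uint8_t      dup_acks;           /* 8-bit duplicate ACK count */
    uint16_t     wnd_from_tap;       /* kept unscaled, as received */
    uint16_t     wnd_to_tap;         /* kept unscaled, as sent */
};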
 
We can't take for granted that the hard limit on open files is
big enough to allow deferring the closing of sockets to a timer.
Store the value of RLIMIT_NOFILE we set at start, and use it to
understand if we're approaching the limit with pending, spliced
TCP connections. If that's the case, close sockets right away as
soon as they're not needed, instead of deferring this task to a
timer.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
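
For illustration, a sketch of saving the limit at start-up (names are made up, error handling omitted):

#include <sys/resource.h>

static unsigned long nofile_limit;    /* saved at start-up */

/* Raise the soft limit to the hard limit and remember the value, so
 * later code can decide between deferring close() to a timer and
 * closing sockets as soon as they're no longer needed.
 */
static void save_nofile_limit(void)
{
    struct rlimit lim;

    getrlimit(RLIMIT_NOFILE, &lim);
    lim.rlim_cur = lim.rlim_max;
    setrlimit(RLIMIT_NOFILE, &lim);

    nofile_limit = (unsigned long)lim.rlim_cur;
}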
 
With a lot of concurrent connections, the bitmap scan approach is
not really sustainable.
Switch to per-connection timerfd timers, set based on events and on
two new flags, ACK_FROM_TAP_DUE and ACK_TO_TAP_DUE. Timers are added
to the common epoll list, and implement the existing timeouts.
While at it, drop the CONN_ prefix from flag names, otherwise they
get quite long, and fix the logic to decide if a connection has a
local, possibly unreachable endpoint: we shouldn't go through the
rest of tcp_conn_from_tap() if we reset the connection due to a
successful bind(2), and we'll get EACCES if the port number is low.
Suggested-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
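
A stand-alone sketch of the per-connection timer idea, assuming a shared epoll file descriptor (details differ in the sources):

#include <stdint.h>
#include <unistd.h>
#include <sys/epoll.h>
#include <sys/timerfd.h>

/* One timer per connection, registered in the same epoll instance as
 * the sockets; how the connection is encoded in epoll_data is an
 * assumption here.
 */
static int conn_timer_arm(int epollfd, uint64_t conn_ref, int timeout_ms)
{
    struct itimerspec it = { .it_value.tv_sec  = timeout_ms / 1000,
                             .it_value.tv_nsec = (timeout_ms % 1000) * 1000000L };
    struct epoll_event ev = { .events = EPOLLIN, .data.u64 = conn_ref };
    int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK | TFD_CLOEXEC);

    if (tfd < 0)
        return -1;

    if (epoll_ctl(epollfd, EPOLL_CTL_ADD, tfd, &ev) ||
        timerfd_settime(tfd, 0, &it, NULL)) {
        close(tfd);
        return -1;
    }

    return tfd;
}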
 
This should never happen, but there are no formal guarantees: ensure
socket numbers are below SOCKET_MAX.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
Using events and flags instead of states makes the implementation
much more straightforward: actions are mostly centered on events
that occurred on the connection rather than states.
An example is given by the ESTABLISHED_SOCK_FIN_SENT and
FIN_WAIT_1_SOCK_FIN abominations: we don't actually care about
which side started closing the connection to handle closing of
connection halves.
Split out the spliced implementation, as it has very little in
common with the "regular" TCP path.
Refactor things here and there to improve clarity. Add helpers
to trace where resets and flag settings come from.
No functional changes intended.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
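
A small illustration of why event bits simplify half-close handling (flag names are made up):

#include <stdint.h>

/* Illustrative event bits, not the real ones */
#define SOCK_FIN_RCVD    (1U << 0)
#define TAP_FIN_RCVD     (1U << 1)
#define SOCK_FIN_SENT    (1U << 2)
#define TAP_FIN_SENT     (1U << 3)

/* One predicate covers both close orderings: no distinct
 * ESTABLISHED_SOCK_FIN_SENT vs. FIN_WAIT_1_SOCK_FIN states needed.
 */
static int conn_closing(uint8_t events)
{
    return !!(events & (SOCK_FIN_RCVD | TAP_FIN_RCVD));
}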
 
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
In pasta mode, when we get data from sockets and write it as single
frames to the tap device, we batch receive operations considerably,
and then (conceptually) split the data into many smaller writes.
It looked like an obvious choice, but performance is actually better
if we receive data in many small, frame-sized recvmsg()/recvmmsg() calls.
The syscall overhead with the previous behaviour, observed by perf,
comes predominantly from write operations, but receiving data in
shorter chunks probably improves cache locality by a considerable
amount.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
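
For illustration, a sketch of receiving in frame-sized chunks with recvmmsg() (sizes and names are arbitrary):

#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define FRAMES     32
#define FRAME_SIZE 1500    /* roughly one Ethernet frame */

/* Receive in many small, frame-sized chunks instead of one large
 * buffer that is split afterwards.
 */
static int recv_frames(int s, char bufs[FRAMES][FRAME_SIZE])
{
    struct mmsghdr mh[FRAMES];
    struct iovec iov[FRAMES];
    int i;

    memset(mh, 0, sizeof(mh));
    for (i = 0; i < FRAMES; i++) {
        iov[i].iov_base = bufs[i];
        iov[i].iov_len  = FRAME_SIZE;
        mh[i].msg_hdr.msg_iov    = &iov[i];
        mh[i].msg_hdr.msg_iovlen = 1;
    }

    return recvmmsg(s, mh, FRAMES, MSG_DONTWAIT, NULL);
}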
 
To reach (at least) a conceptually equivalent security level as
implemented by --enable-sandbox in slirp4netns, we need to create a
new mount namespace and pivot_root() into a new (empty) mountpoint, so
that passt and pasta can't access any filesystem resource after
initialisation.
While at it, also detach IPC, PID (only for passt, to prevent
vulnerabilities based on the knowledge of a target PID), and UTS
namespaces.
With this approach, if we apply the seccomp filters right after the
configuration step, the number of allowed syscalls grows further. To
prevent this, defer the application of seccomp policies until after the
initialisation phase, right before the main loop: that's where we expect
bad things to potentially happen. This way, we get back to 22 allowed
syscalls for passt and 34 for pasta, on x86_64.
While at it, move #syscalls notes to specific code paths wherever it
conceptually makes sense.
We have to open all the file handles we'll ever need before
sandboxing:
- the packet capture file can only be opened once, drop instance
numbers from the default path and use the (pre-sandbox) PID instead
- /proc/net/tcp{,v6} and /proc/net/udp{,v6}, for automatic detection
of bound ports in pasta mode, are now opened only once, before
sandboxing, and their handles are stored in the execution context
- the UNIX domain socket for passt is also bound only once, before
sandboxing: to reject clients after the first one, instead of
closing the listening socket, keep it open, and accept and immediately
discard new connections if we already have a valid one
Clarify the (unchanged) behaviour for --netns-only in the man page.
To actually make passt and pasta processes run in a separate PID
namespace, we need to unshare(CLONE_NEWPID) before forking to
background (if configured to do so). Introduce a small daemon()
implementation, __daemon(), that additionally saves the PID file
before forking. While running in foreground, the process itself can't
move to a new PID namespace (a process can't change the notion of its
own PID): mention that in the man page.
For some reason, fork() in a detached PID namespace causes SIGTERM
and SIGQUIT to be ignored, even if the handler is still reported as
SIG_DFL: add a signal handler that just exits.
We can now drop most of the pasta_child_handler() implementation,
that took care of terminating all processes running in the same
namespace, if pasta started a shell: the shell itself is now the
init process in that namespace, and all children will terminate
once the init process exits.
Issuing 'echo $$' in a detached PID namespace won't return the
actual namespace PID as seen from the init namespace: adapt
demo and test setup scripts to reflect that.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
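
For the mount-namespace part, the usual pattern looks roughly like this (a sketch, with error reporting and the other namespaces left out):

#define _GNU_SOURCE
#include <sched.h>
#include <unistd.h>
#include <sys/mount.h>
#include <sys/syscall.h>

/* Detach from the host mount table, make an empty directory the new
 * root, and drop the old one.
 */
static int sandbox_fs(const char *empty_dir)
{
    if (unshare(CLONE_NEWNS))
        return -1;

    /* Don't let mount events propagate back to the host */
    if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL))
        return -1;

    /* pivot_root() needs the new root to be a mount point */
    if (mount(empty_dir, empty_dir, NULL, MS_BIND, NULL) || chdir(empty_dir))
        return -1;

    if (syscall(SYS_pivot_root, ".", "."))
        return -1;

    return umount2(".", MNT_DETACH);
}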
 
clang-tidy from LLVM 13.0.1 reports some new warnings from these
checkers:
- altera-unroll-loops, altera-id-dependent-backward-branch: ignore
for the time being, add a TODO item
- bugprone-easily-swappable-parameters: ignore, nothing to do about
those
- readability-function-cognitive-complexity: ignore for the time
being, add a TODO item
- altera-struct-pack-align: ignore, alignment is forced in protocol
headers
- concurrency-mt-unsafe: ignore for the time being, add a TODO
item
Fix the bugprone-implicit-widening-of-multiplication-result warnings,
though: that's doable, and they seem to make sense.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
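
An example of the pattern behind that last checker, not taken from the tree:

#include <stddef.h>

/* The product is computed in 'int' and only then widened to size_t, so
 * it can overflow before the widening; casting an operand first fixes
 * both the warning and the potential overflow (assuming non-negative
 * arguments).
 */
static size_t bytes_needed(int entries, int entry_size)
{
    /* return entries * entry_size;    -- this is what would warn */
    return (size_t)entries * (size_t)entry_size;
}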
 
The existing behaviour is not really practical: an automated agent in
charge of starting both qemu and passt would need to fork itself to
start passt, because passt won't fork to background until qemu
connects, and the agent needs to unblock to start qemu.
Instead of waiting for a connection to daemonise, do it right away as
soon as a socket is available: that can be considered an initialised
state already.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
Depending on the C library, but not necessarily in all the
functions we use, statx() might be used instead of stat(),
getdents() instead of getdents64(), readlinkat() instead of
readlink(), openat() instead of open().
On aarch64, it's clone() and not fork(), and dup3() instead of
dup2() -- just allow the existing alternative instead of dealing
with per-arch selections.
Since glibc commit 9a7565403758 ("posix: Consolidate fork
implementation"), we need to allow set_robust_list() for
fork()/clone(), even in a single-threaded context.
On some architectures, epoll_pwait() is provided instead of
epoll_wait(), but never both. Same with newfstat() and
fstat(), sigreturn() and rt_sigreturn(), getdents64() and
getdents(), readlink() and readlinkat(), unlink() and
unlinkat(), whereas pipe() might not be available, but
pipe2() always is, exclusively or not.
Seen on Fedora 34: newfstatat() is used on top of fstat().
syslog() is an actual system call on some glibc/arch combinations,
instead of a connect()/send() implementation.
On ppc64 and ppc64le, _llseek(), recv(), send() and getuid()
are used. For ppc64 only: ugetrlimit() for the getrlimit()
implementation, plus sigreturn() and fcntl64().
On s390x, additionally, we need to allow socketcall() (on top
of socket()), and sigreturn() also for passt (not just for
pasta).
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
On some architectures, the page size is bigger than the maximum size
of an Ethernet frame.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
Bitmap manipulating functions would otherwise refer to inconsistent
sets of bits on big-endian architectures. While at it, fix up a
couple of casts.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
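
A sketch of the byte-granularity approach that avoids the endianness problem (not the actual helpers):

#include <stdint.h>

/* Byte granularity: bit 'n' lands in the same byte on little- and
 * big-endian hosts, which wouldn't hold if the same memory were also
 * addressed as native-endian words.
 */
static void bitmap_set(uint8_t *map, unsigned int bit)
{
    map[bit / 8] |= (uint8_t)(1U << (bit % 8));
}

static int bitmap_isset(const uint8_t *map, unsigned int bit)
{
    return !!(map[bit / 8] & (1U << (bit % 8)));
}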
 
tcpi_bytes_acked and tcpi_min_rtt are only available on recent
kernel versions: provide fall-back paths (incurring some degree of
performance penalty).
Support for getrandom() was introduced in Linux 3.17 and glibc 2.25:
provide an alternate mechanism for that as well, reading from
/dev/random.
Also check if NETLINK_GET_STRICT_CHK is defined before using it:
it's not strictly needed, we'll filter out irrelevant results from
netlink anyway.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
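
A sketch of the fall-back idea for getrandom(); HAS_GETRANDOM here stands for whatever the build system would define:

#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>

#ifdef HAS_GETRANDOM    /* hypothetical build-time flag */
#include <sys/random.h>
#endif

/* Use getrandom(2) where available, otherwise read the same number of
 * bytes from /dev/random.
 */
static ssize_t get_random_bytes(void *buf, size_t len)
{
#ifdef HAS_GETRANDOM
    return getrandom(buf, len, 0);
#else
    int fd = open("/dev/random", O_RDONLY);
    ssize_t n = -1;

    if (fd >= 0) {
        n = read(fd, buf, len);
        close(fd);
    }

    return n;
#endif
}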
 
This is the only remaining Linux-specific include -- drop it to avoid
clang-tidy warnings and to make code more portable.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
For some reason, those are not reported by recent versions of gcc.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
...I forgot two of them.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
This was actually fine "on the wire", but it's inconsistent with the
way we hash other addresses/protocols and also ends up with a wrong
endianness in captures in case we replace the address with our
default gateway.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
...mostly false positives, but a number of very relevant ones too,
in tcp_get_sndbuf(), tcp_conn_from_tap(), and siphash PREAMBLE().
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
Unions and structs, you all have names now.
Take the chance to enable bugprone-reserved-identifier,
cert-dcl37-c, and cert-dcl51-cpp checkers in clang-tidy.
Provide a ffsl() weak declaration using gcc built-in.
Start reordering includes, but that's not enough for the
llvm-include-order checker yet.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
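
The weak fall-back can look roughly like this:

/* If the C library already provides ffsl(), the strong definition wins
 * and this weak one is ignored; otherwise the gcc built-in fills the
 * gap.
 */
__attribute__((weak)) int ffsl(long int i)
{
    return __builtin_ffsl(i);
}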
 
Detect missing tcpi_snd_wnd in struct tcp_info at build time,
otherwise build fails with a pre-5.3 linux/tcp.h header.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
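
For illustration, a guarded use; HAS_SND_WND stands for whatever macro the build check would define:

#include <linux/tcp.h>

static unsigned int snd_wnd_from_info(const struct tcp_info *info)
{
#ifdef HAS_SND_WND    /* set by the build system when the field exists */
    return info->tcpi_snd_wnd;
#else
    (void)info;
    return 0;    /* conservative fall-back */
#endif
}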
 
This shouldn't happen on any sane configuration, but I just met an
example of that: the default IPv6 gateway on the host is configured
with a global unicast address, we use that as the source for RA and
DHCPv6 replies, and the guest ignores them. The same happens later on
if we talk TCP or UDP and the guest has no idea where that address
comes from.
Use our link-local address in case the gateway address is global.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
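
A minimal sketch of the address selection described above (function and parameter names are made up):

#include <netinet/in.h>

/* Pick the source address for RA/DHCPv6 replies: if the gateway
 * address we'd normally mirror is a global unicast address, fall back
 * to our link-local address instead.
 */
static const struct in6_addr *ra_source(const struct in6_addr *gw,
                                        const struct in6_addr *addr_ll)
{
    if (IN6_IS_ADDR_LINKLOCAL(gw))
        return gw;

    return addr_ll;
}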
 
Most are just about style and form, but a few were actually
serious mistakes (NDP-related).
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
All false positives so far.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
A mix of unchecked return values, a missing permission mask for
open(2) with O_CREAT, and some false positives from
-Wstringop-overflow and -Wmaybe-uninitialized.
Reported-by: Martin Hauke <mardnh@gmx.de>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
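
As an example of the open(2) point, with an illustrative helper:

#include <fcntl.h>

/* With O_CREAT, open(2) needs the third (mode) argument; 0644 here is
 * just an example.
 */
static int open_capture(const char *path)
{
    return open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
}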
 
For some reason, on 4.19, splice() doesn't honour SOCK_NONBLOCK from
accept4() while reading from a TCP socket. Pass SPLICE_F_NONBLOCK
explicitly in all cases.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
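
A sketch of passing the flag explicitly (the wrapper name is made up):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/types.h>

/* Don't rely on O_NONBLOCK/SOCK_NONBLOCK being honoured by splice() on
 * older kernels; request non-blocking behaviour for the splice
 * operation itself.
 */
static ssize_t splice_nonblock(int from, int to, size_t len)
{
    return splice(from, NULL, to, NULL, len,
                  SPLICE_F_MOVE | SPLICE_F_NONBLOCK);
}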
 
If the window isn't updated on !c->tcp.kernel_snd_wnd, we still
have to send ACKs if the ACK sequence was updated, or if an error
occurred while querying TCP_INFO.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
 
...and while at it, reverse the operands in the window equality
comparison to detect the need for fast re-transmit: it's easier
to read this way.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>