| Commit message | Author | Age | Files | Lines |
| |
This is an example interface, currently unused, so it went undetected:
m4 macros need a backtick at the beginning of a block instead of a
single quote.
Fixes: 1f4b7fa0d75d ("passt, pasta: Add examples of SELinux policy modules")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Ouch, I accidentally pushed the previous change without running the
tests:
- we need to check, in conf_ports(), that udp_sock_init()
managed to bind at least one port, not the opposite
- for -T and -U, we have no way to know if we'll manage to bind
the port later, so never report an error for those
Fixes: 3d0de2c1d727 ("conf, tcp, udp: Exit if we fail to bind sockets for all given ports")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
The newly introduced die() calls exit(), but cppcheck doesn't see it
and warns about possibly invalid arguments used after the check which
triggers die(). Add return statements to silence the warnings.
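A hypothetical illustration of the resulting pattern (a sketch, not an actual hunk from this change):

    #include <arpa/inet.h>
    #include <netinet/in.h>

    void die(const char *format, ...);	/* logs the message, then exit()s */

    static void conf_addr(struct in_addr *addr, const char *arg)
    {
        if (inet_pton(AF_INET, arg, addr) != 1) {
            die("Invalid address: %s", arg);
            return;	/* unreachable: only here so cppcheck sees the flow end */
        }
    }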
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
This goto exists purely to move this exception case out of line. Although
that does make the "normal" path a little clearer, it comes at the cost of
not knowing where control will flow after jumping to the zero_len
label. The exceptional case isn't that long, so just put it inline.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
This goto can be handled just as simply and more clearly with a do while.
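For illustration, the general shape of such a rewrite (a generic sketch, not the actual hunk):

    #include <errno.h>
    #include <unistd.h>

    ssize_t read_retry_goto(int fd, void *buf, size_t len)
    {
        ssize_t n;
    retry:
        n = read(fd, buf, len);
        if (n < 0 && errno == EINTR)
            goto retry;
        return n;
    }

    ssize_t read_retry_do(int fd, void *buf, size_t len)
    {
        ssize_t n;

        do
            n = read(fd, buf, len);
        while (n < 0 && errno == EINTR);

        return n;
    }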
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
The goto here really doesn't improve clarity or brevity at all. Use a
clearer construct.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
In tap_send_frames() we send a number of frames to the tap device, then
also write them to the pcap capture file (if configured). However the tap
send can partially fail (short write()s or similar), meaning that some
of the requested frames weren't actually sent, but we still write those
frames to the capture file.
We do give a debug message in this case, but it's misleading to add frames
to the capture file that we know were never sent. Rework to avoid this.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Based on a patch from Laine, and reports from Laine and Yalan: fix
the "22-80:32-90" example, and improve wording for the other ones:
instead of using "to" to denote the end of a range, use "between ...
and", so that it's clear we're *not* referring to target ports.
Reported-by: Laine Stump <laine@redhat.com>
Reported-by: Yalan Zhang <yalzhang@redhat.com>
Fixes: da20f57f19dc ("passt, qrap: Add man pages")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Similar to the conf_print() fix from commit 4129764ecaeb ("conf: Fix
mask calculation from prefix_len in conf_print()"): to calculate an
IPv4 netmask from the prefix length, we need to left shift 32 all-one
bits by 32 minus the prefix length -- not by the prefix length
itself.
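In code, the corrected calculation looks like this (a minimal standalone illustration, valid for prefix lengths 1 to 32; not the passt source itself):

    #include <arpa/inet.h>
    #include <stdio.h>

    int main(void)
    {
        int prefix_len = 24;
        struct in_addr mask;

        /* 32 one-bits, shifted left by (32 - prefix_len): /24 gives 255.255.255.0 */
        mask.s_addr = htonl(~0U << (32 - prefix_len));
        printf("%s\n", inet_ntoa(mask));
        return 0;
    }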
Reported-by: Yalan Zhang <yalzhang@redhat.com>
Fixes: dd09cceaee21 ("Minor improvements to IPv4 netmask handling")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
David points out that using multiple counters to go over the iov
array, namely 'i' and 'iov', makes mistakes easier. We can't just use
'iov', unless we reserve an element with zero iov_len at the end,
which isn't really justified.
Simply use 'i' to iterate over the array.
Link: https://archives.passt.top/passt-dev/Y+mfenvLn3VJ7Dg5@yekko/
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
passt supports ranges of forwarded ports as well as 'all' for TCP and
UDP, so it might be convenient to proceed if we fail to bind only
some of the desired ports.
But if we fail to bind even a single port for a given specification,
we're clearly, unexpectedly, conflicting with another network
service. In that case, report failure and exit.
Reported-by: Yalan Zhang <yalzhang@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
Now that logging functions force printing messages to stderr before
passt forks to background, we'll have duplicate messages when running
from an interactive terminal, or if --stderr is passed, because at
some point we set LOG_PERROR in our __openlog() wrapper.
We could defer setting LOG_PERROR, but that would change option
semantics in other, unexpected ways. We could force calling
passt_vsyslog() as long as the mask is set to LOG_EMERG, but that
complicates the logic in logging functions even further.
Go the easy way for now: don't force printing to stderr with
LOG_EMERG if LOG_PERROR is already set. We should seriously consider a
rework of those logging functions at this point.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
| |
This actually leaves us with 0 uses of err(), but someone could want
to use it in the future, so we may as well leave it around.
Signed-off-by: Laine Stump <laine@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Signed-off-by: Laine Stump <laine@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
...and return void to simplify the caller.
Signed-off-by: Laine Stump <laine@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Again, it can then be made to return void, simplifying the caller.
Signed-off-by: Laine Stump <laine@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
As with conf_ports, this allows us to make the function return void.
Signed-off-by: Laine Stump <laine@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Rather than having conf_ports() (possibly) log an error, and then
letting the caller log the entire usage() message and exit, just log
the errors and exit immediately (using die()).
For some errors, conf_ports would previously not log any specific
message, leaving it up to the user to determine the problem by
guessing. We replace all of those silent returns with die()
(logging a specific error), thus permitting us to make conf_ports()
return void, which simplifies the caller.
While modifying the two callers to conf_ports() to not check for a
return value, we can further simplify the code by removing the check
for a non-null optarg, as that is guaranteed to never happen (due to
prior calls to getopt_long() with "argument required" for all relevant
options - getopt_long() would have already caught this error).
Signed-off-by: Laine Stump <laine@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Nearly all of the calls to usage() in conf() occur immediately after
logging a more detailed error message, and the fact that these errors
are occurring indicates that the user has already seen the passt usage
message (or read the manpage). Spamming the logfile with the complete
contents of the usage message serves only to obscure the more detailed
error message. The only time when the full usage message should be output
is if the user explicitly asks for it with -h (or its synonyms).
As a start to eliminating the excessive calls to usage(), this patch
replaces most calls to err() followed by usage() with a call to die()
instead. A few other usage() calls remain, but their removal involves a
bit more nuance that should be properly explained in separate commit
messages.
Signed-off-by: Laine Stump <laine@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Almost all occurrences of err() are either immediately followed by
exit(EXIT_FAILURE) or usage(argv[0]) (which itself then calls
exit(EXIT_FAILURE)), or that is what's done immediately after returning
from the function that calls err(). Modify the errfn macro so that its
instantiations can include exit(EXIT_FAILURE) at the end, and use that
to create a new function die() that will log an error and then
exit.
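In sketch form, the idea looks something like this (simplified: the real errfn macro also handles the log file and syslog paths, so this is an illustration rather than the actual definition):

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define logfn(name, after)				\
    static void name(const char *format, ...)		\
    {							\
        va_list args;					\
        va_start(args, format);				\
        vfprintf(stderr, format, args);			\
        va_end(args);					\
        fputc('\n', stderr);				\
        after						\
    }

    logfn(err, )			/* log and continue */
    logfn(die, exit(EXIT_FAILURE);)	/* log, then exit */

    int main(int argc, char **argv)
    {
        if (argc < 2)
            die("missing argument");

        err("%d extra argument(s)", argc - 2);
        return 0;
    }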
Signed-off-by: Laine Stump <laine@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Once a log file (specified on the commandline) is opened, the logging
functions will stop sending error logs to stderr, which is annoying if
passt has been started by another process that wants to collect error
messages from stderr so it can report why passt failed to start. It
would be much nicer if passt continued sending all log messages to
stderr until it daemonizes itself (at which point the process that
started passt can assume that it was started successfully).
The system log mask is set to LOG_EMERG when the process starts, and
we're already using that to do "special" logging during the period
from process start until the log level requested on the commandline is
processed (setting the log mask to something else). This period
*almost* matches with "the time before the process is daemonized"; if
we just delay setting the log mask a tiny bit, then it will match
exactly, and we can use it to determine if we need to send log
messages to stderr even when a log file has been specified and opened.
This patch delays the setting of the log mask until immediately before
the call to __daemon(). It also modifies logfn() slightly, so that it
will log to stderr any time the log mask is LOG_EMERG, even if a log file
has been opened.
Signed-off-by: Laine Stump <laine@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Fedora 32-35 are now old enough that they're not on all mirrors. Fetch
them from the archive server instead.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
The current Debian cloud images no longer include ppc64. Change to using
the latest snapshot, which does include ppc64.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
When creating a new spliced connection, we need to get a socket in the
other ns from the originating one. To avoid excessive ns switches we
usually get these from a pool refilled on a timer. However, if the pool
runs out we need a fallback. Currently that's done by passing -1 as the
socket to tcp_splice_connect() and running it in the target ns.
This means that tcp_splice_connect() itself needs to have different cases
depending on whether it's given an existing socket or not, which is
a separate concern from what it's mostly doing. We change it to require
a suitable open socket to be passed in, and ensure in the caller that we
have one.
This requires adding the fallback paths to the caller, tcp_splice_new().
We use slightly different approaches for a socket in the init ns versus the
guest ns.
This also means that we no longer need to run tcp_splice_connect() itself
in the guest ns, which allows us to remove a bunch of boilerplate code.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
tcp_conn_new_sock() first looks for a socket in a pre-opened pool, then if
that's empty creates a new socket in the init namespace. Both parts of
this are duplicated in other places: the pool lookup logic is duplicated in
tcp_splice_new(), and the socket opening logic is duplicated in
tcp_sock_refill_pool().
Split the function into separate parts so we can remove both these
duplications.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
tcp_splice.c has some explicit extern declarations to access the
socket pools. This is pretty dangerous - if we changed the type of
these variables in tcp.c, we'd have tcp.c and tcp_splice.c using the
same memory in different ways with no compiler error. So, move the
extern declarations to tcp_conn.h so they're visible to both tcp.c and
tcp_splice.c, but not the rest of pasta.
In fact the pools for the guest namespace are necessarily only used by
tcp_splice.c - we have no sockets on the guest side if we're not
splicing. So move those declarations and the functions that deal
exclusively with them to tcp_splice.c
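Concretely, declarations of roughly this shape (names and pool size here are illustrative) now live in tcp_conn.h, visible to both tcp.c and tcp_splice.c:

    /* tcp_conn.h: shared between tcp.c and tcp_splice.c */
    #define TCP_SOCK_POOL_SIZE	32

    extern int init_sock_pool4[TCP_SOCK_POOL_SIZE];
    extern int init_sock_pool6[TCP_SOCK_POOL_SIZE];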
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
With the creation of the tcp_sock_refill_pool() helper, the ns==true and
ns==false paths for tcp_sock_refill() now have almost nothing in common.
Split the two versions into tcp_sock_refill_init() and tcp_sock_refill_ns()
functions.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
tcp_sock_refill() contains two near-identical loops to refill the IPv4
and IPv6 socket pools. In addition, if we get an error on the IPv4
pool we exit early and won't attempt to refill the IPv6 pool. At
least theoretically, these are independent from each other and there's
value to filling up either pool without the other. So, there's no
strong reason to give up on one because the other failed.
Address both of these with a helper function 'tcp_sock_refill_pool()' to
refill a single given pool.
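The per-pool helper has roughly this shape (a simplified sketch: the real function also deals with the namespace pools and socket options, and the names below are illustrative):

    #include <stddef.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    static void sock_pool_refill(int pool[], size_t n, sa_family_t af)
    {
        size_t i;

        for (i = 0; i < n; i++) {
            if (pool[i] >= 0)	/* slot still holds an unused socket */
                continue;

            pool[i] = socket(af, SOCK_STREAM | SOCK_NONBLOCK, IPPROTO_TCP);
        }
    }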
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Future compilers will not support implicit ints by default, causing
the probe to always fail.
Link: https://bugs.passt.top/show_bug.cgi?id=42
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
On shell 'exit' commands, running shells from pasta, we might get:
Cannot set tty process group (No such process)
as some TTY devices might be inaccessible. This is harmless, but
after commit "pasta: propagate exit code from child command", we'll
get test failures there, at least with dash.
Ignore those explicitly with an ugly workaround: we can't simply do
something like:
exit || :
because the failure is reported by the shell itself once it exits,
regardless of the command evaluation.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Exit codes are very useful for scripts: when the pasta child's execvp()
call fails with ENOENT, the parent should also exit with a code > 0. In
short, the parent should always exit with the code from the child to
make it useful in scripts.
It is easy to test with: `pasta -- bash -c "exit 3"; echo $?`
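The usual shape of that propagation, as a sketch (not the actual pasta code):

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <stdlib.h>

    static void exit_with_child_status(pid_t child)
    {
        int status;

        if (waitpid(child, &status, 0) < 0)
            exit(EXIT_FAILURE);

        if (WIFEXITED(status))
            exit(WEXITSTATUS(status));	/* propagate the child's code */

        exit(EXIT_FAILURE);		/* e.g. child killed by a signal */
    }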
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
By default, clone() will create a child that does not send SIGCHLD when
it exits. The caller has to specify the signal it should get in the flag
bitmask; see clone(2) under "The child termination signal".
This fixes the problem where pasta would not exit when the execvp()
call failed, i.e. when the command does not exist.
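A minimal standalone illustration of the flag in question (pasta's actual clone() call also passes CLONE_NEW* flags and uses its own stack handling):

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <sys/wait.h>
    #include <stdio.h>

    static char stack[1024 * 1024];

    static int child_fn(void *arg)
    {
        (void)arg;
        return 3;
    }

    int main(void)
    {
        /* SIGCHLD in the flags is what makes the child's exit visible to a
         * plain waitpid(); without it, the parent is never notified in the
         * usual way.
         */
        int pid = clone(child_fn, stack + sizeof(stack), SIGCHLD, NULL);
        int status;

        if (pid < 0 || waitpid(pid, &status, 0) < 0)
            return 1;

        printf("child exited with %d\n", WEXITSTATUS(status));
        return 0;
    }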
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
When spawning a child command with pasta command... pasta should not
leak fds that it opened. Only the fds that were already open should be
given to the child.
Run `pasta --config-net -- ls -l /proc/self/fd` from a terminal where
only stdin/out/err are open. fd 3 was opened by ls to read the
/proc/self/fd dir, but fd 5 is the netlink socket that was opened in
pasta. To prevent such a leak, open the socket with SOCK_CLOEXEC.
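In sketch form (the real code in netlink.c also binds and configures the socket):

    #include <sys/socket.h>
    #include <linux/netlink.h>

    int nl_sock_open(void)
    {
        /* SOCK_CLOEXEC: the fd is closed across execvp(), so the spawned
         * command never inherits it
         */
        return socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC, NETLINK_ROUTE);
    }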
Signed-off-by: Paul Holzinger <pholzing@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
There are some places in passt/pasta which #include <assert.h> and make
various assertions. If we hit these, something has already gone wrong, but
they're there so that we get a useful message instead of cryptic misbehaviour
if assumptions we thought were correct turn out not to be.
Except.. the glibc implementation of assert() uses syscalls that aren't in
our seccomp filter, so we'll get a SIGSYS before it actually prints the
message. Work around this by adding our own ASSERT() implementation using
our existing err() function to log the message, and an abort(). The
abort() probably also won't work exactly right with seccomp, but once we've
printed the message, dying with a SIGSYS works just as well as dying with
a SIGABRT.
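The macro has roughly this shape (a sketch; the actual definition may differ in detail):

    #include <stdlib.h>

    void err(const char *format, ...);	/* existing logging function */

    #define ASSERT(expr)						\
        do {							\
            if (!(expr)) {						\
                err("ASSERT(%s) failed in %s() at %s:%d",		\
                    #expr, __func__, __FILE__, __LINE__);		\
                abort();						\
            }							\
        } while (0)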
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
David reports that TCP transfers might stall, especially with smaller
socket buffer sizes, because we reset the ACK_FROM_TAP_DUE flag, in
tcp_tap_handler(), whenever we receive an ACK segment, regardless of
its sequence number and the fact that we might still be waiting for
one. This way, we might fail to re-transmit frames on ACK timeouts.
We need, instead, to:
- indicate with the @retrans field only re-transmissions for the same
data sequences. If we make progress, it should be reset, given that
it's used to abort a connection when we exceed a given number of
re-transmissions for the same data
- unset the ACK_FROM_TAP_DUE flag if and only if the acknowledged
sequence is the same as the last one we sent, as suggested by David
- keep it set otherwise, if progress was done but not all the data we
sent was acknowledged, and update the expiration of the ACK timeout
Add a new helper for these purposes, tcp_update_seqack_from_tap().
To extend the ACK timeout, the new helper sets the ACK_FROM_TAP_DUE
flag, even if it was already set, and conn_flag_do() triggers a timer
update. This part should be revisited at a later time, because,
strictly speaking, ACK_FROM_TAP_DUE isn't a flag anymore. One
possibility might be to introduce another connection attribute for
events affecting timer deadlines.
Reported-by: David Gibson <david@gibson.dropbear.id.au>
Link: https://bugs.passt.top/show_bug.cgi?id=41
Suggested-by: David Gibson <david@gibson.dropbear.id.au>
Fixes: be5bbb9b0681 ("tcp: Rework timers to use timerfd instead of periodic bitmap scan")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
...instead of repeatedly sending out the first one in iov.
Fixes: e21ee41ac35a ("tcp: Combine two parts of pasta tap send path together")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Adapted from a patch by Paul Holzinger: when pasta spawns a command,
operating without a pre-existing user and network namespace, it needs
to wait for the tap device to be configured and its handler ready,
before the command is actually executed.
Otherwise, something like:
pasta --config-net nslookup passt.top
usually fails as the nslookup command is issued before the network
interface is ready.
We can't adopt a simpler approach based on SIGSTOP and SIGCONT here:
the child runs in a separate PID namespace, so it can't send SIGSTOP
to itself as the kernel sees the child as init process and blocks
the delivery of the signal.
We could send SIGSTOP from the parent, but this wouldn't avoid the
possible condition where the child isn't ready to wait for it when
the parent sends it, also raised by Paul -- and SIGSTOP can't be
blocked, so it can never be pending.
Use SIGUSR1 instead: mask it before clone(), so that the child starts
with it blocked, and can safely wait for it. Once the parent is
ready, it sends SIGUSR1 to the child. If SIGUSR1 is sent before the
child is waiting for it, the kernel will queue it for us, because
it's blocked.
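The synchronisation scheme in miniature (a sketch using fork() for brevity; pasta does the equivalent around its clone() call):

    #include <signal.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        sigset_t set;
        pid_t pid;
        int sig;

        sigemptyset(&set);
        sigaddset(&set, SIGUSR1);
        sigprocmask(SIG_BLOCK, &set, NULL);	/* child inherits the mask */

        pid = fork();
        if (!pid) {
            sigwait(&set, &sig);	/* queued even if sent before we get here */
            printf("child: interface is ready, exec the command here\n");
            return 0;
        }

        /* ...parent sets up the tap device and handlers... */
        kill(pid, SIGUSR1);
        waitpid(pid, NULL, 0);
        return 0;
    }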
Reported-by: Paul Holzinger <pholzing@redhat.com>
Fixes: 1392bc5ca002 ("Allow pasta to take a command to execute")
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
To send frames on the tap interface, the UDP code uses fairly complicated
two-level batching. First, multiple frames are gathered into a single
"message" for the qemu stream socket, then multiple messages are sent with
sendmmsg(). We now have tap_send_frames(), which already deals with sending
a number of frames, including batching and handling partial sends. Use
that to considerably simplify things.
This does make a couple of behavioural changes:
* We used to split messages to keep them under 32kiB (except when a
single frame was longer than that). The comments claim this is
needed to stop qemu from closing the connection, but we don't have any
equivalent logic for TCP. I wasn't able to reproduce the problem with
this series, although it was apparently easy to reproduce earlier.
My suspicion is that there was never an inherent need to keep messages
small, however with larger messages (and default kernel buffer sizes)
the chances of needing more than one resend for partial send()s is
greatly increased. We used not to correctly handle that case of
multiple resends, but now we do.
* Previously when we got a partial send on UDP, we would resend the
remainder of the entire "message", including multiple frames. The
common code now only resends the remainder of a single frame, simply
dropping any frames which weren't even partially sent. This is what
TCP always did and is probably a better idea for UDP too.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
In passt mode, when writing frames to the qemu socket, we might get a short
send. If we ignored this and carried on, the qemu socket would get out of
sync, because the bytes we actually sent wouldn't correspond to the length
header we already sent. tap_send_frames_passt() handles that by doing
a blocking send to complete the message, but it has a few flaws:
* We only attempt to resend once: although it's unlikely in practice,
nothing prevents the blocking send() from also being short
* We print a debug error if send() returns non-zero... but send() returns
the number of bytes sent, so we actually want it to return the length
of the remaining data.
Correct those flaws and also be a bit more thorough about reporting
problems here.
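The resend loop now has roughly this shape (simplified to a plain buffer; the real function works on the message iovec and reports failures):

    #include <sys/socket.h>
    #include <sys/types.h>

    static int send_remainder(int fd, const char *buf, size_t len)
    {
        size_t sent = 0;

        while (sent < len) {	/* keep going as long as send() is short */
            ssize_t n = send(fd, buf + sent, len - sent, 0);

            if (n < 0)
                return -1;
            sent += n;
        }

        return 0;
    }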
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Update the UDP code to use the tap layer abstractions for initializing and
updating the L2 and lower headers. This will make adding other tap
backends in future easier.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Currently tap_send_frames() expects the frames it is given to include the
vnet_len field, even in pasta mode which doesn't use it (although it need
not be initialized in that case). To match, tap_iov_base() and
tap_iov_len() construct the frame in that way.
This will inconvenience future changes, so alter things to set the buffers
to include just the frame needed by the tap backend type.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Update the TCP code to use the tap layer abstractions for initializing and
updating the L2 and lower headers. This will make adding other tap
backends in future easier.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Currently both the TCP and UDP code need to deal in various places with the
details of the L2 headers, and also the tap-specific "vnet_len" header.
This makes abstracting the tap interface to new backends (e.g. vhost-user
or tun) more difficult.
To improve this abstraction, create a new 'tap_hdr' structure which
represents both L2 (always Ethernet at the moment, but might vary in
future) and any additional tap specific headers (such as the qemu socket's
vnet_len field). Provide helper functions and macros to initialize, update
and use it.
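Roughly, the new structure looks like this (field names here are illustrative; see the tap code for the real definition and its helpers):

    #include <stdint.h>
    #include <linux/if_ether.h>

    struct tap_hdr {
        uint32_t vnet_len;	/* frame length, qemu socket backend only */
        struct ethhdr eh;	/* L2 (Ethernet) header */
    } __attribute__((packed));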
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
tcp_l2_buf_fill_headers() returns the size of the generated frame including
the ethernet header. The caller then adds on the size of the vnet_len
field to get the total frame size to be passed to the tap device.
Outside the tap code, though, we never care about the ethernet header size,
only the final total size we need to put into an iovec. So, consolidate
the total frame size calculation within tcp_l2_buf_fill_headers().
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
tcp_sock[46]_iov_init() initialize the length of each iovec buffer to
MSS_DEFAULT. That will always be overwritten before use in
tcp_data_to_tap, so it's redundant. It also wasn't correct, because it
didn't correctly account for the header lengths in all cases.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
We have separate IPv4 and IPv6 versions of a macro to construct an
initializer for ethernet headers. However, now that we have htons_constant
it's easy to simply parametrize this with the ethernet protocol number.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
Both the TCP and UDP iov_init functions have some large structure literals
defined in "field order" style. These are pretty hard to read since it's
not obvious what value corresponds to what field. Use named field style
initializers instead to make this clearer.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
We have several places where we have fairly ugly #ifdefs on __BYTE_ORDER
where we need network order values in a constant expression (so we can't
use htons() or htonl()). We can do this more cleanly by using a single
__BYTE_ORDER ifdef to define htons_constant() and htonl_constant()
macros, then using those in all the other places.
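A minimal sketch of what such macros can look like, following glibc's constant byte-swap helpers (the definitions actually added may differ in detail):

    #include <byteswap.h>	/* __bswap_constant_16(), __bswap_constant_32() */
    #include <endian.h>		/* __BYTE_ORDER, __BIG_ENDIAN */

    #if __BYTE_ORDER == __BIG_ENDIAN
    #define htons_constant(x)	(x)
    #define htonl_constant(x)	(x)
    #else
    #define htons_constant(x)	__bswap_constant_16(x)
    #define htonl_constant(x)	__bswap_constant_32(x)
    #endif

    /* usable in constant expressions and static initializers, unlike htons() */
    static const unsigned short eth_p_ip = htons_constant(0x0800);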
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
The functions which do the final steps of sending TCP packets on through
the tap interface - tcp_l2_buf_flush*() - no longer have anything that's
actually specific to TCP in them, other than comments and names. Move them
all to tap.c.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
| |
tcp_l2_buf_flush() open codes the loop across each frame in a group, but
calls tcp_l2_buf_write_one() to send each frame to the pasta tuntap
device. Combine these two pasta-specific operations into
tcp_l2_buf_flush_pasta() which is a little cleaner and will enable further
cleanups.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>