With an increased sending buffer size for the AF_UNIX socket, we
can get slightly lower overhead.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
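
A minimal sketch of the change, assuming a plain setsockopt() call; the
256 KiB figure is illustrative rather than the commit's value, and the
kernel doubles the request for bookkeeping and clamps it to
net.core.wmem_max:

    #include <stdio.h>
    #include <sys/socket.h>

    /* Ask for a larger send buffer and report what the kernel granted:
     * the requested value is doubled internally and clamped to
     * net.core.wmem_max.
     */
    static void bump_sndbuf(int fd)
    {
        int v = 256 * 1024; /* illustrative, not the commit's value */
        socklen_t sl = sizeof(v);

        if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &v, sizeof(v)))
            perror("setsockopt SO_SNDBUF");

        if (!getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &v, &sl))
            printf("effective SO_SNDBUF: %i\n", v);
    }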
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...to spare some syscalls. If it's not enough, the timer will take
care of it.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
The receiver might otherwise take this as a duplicate ACK.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Now that we've fixed the issue with small receiving buffers, we can
safely increase this again and get slightly lower syscall overhead.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If the net.core.rmem_max and net.core.wmem_max sysctls are set low,
we get bigger buffers by not trying to set sizes above them: the
kernel would lock the buffer sizes to the clamped values and stop
auto-tuning them.
Try, instead, to get bigger buffers by queueing as much as possible;
if the maximum values in tcp_wmem and tcp_rmem are bigger than those
limits, that will work.
While at it, drop the TCP_QUICKACK option for non-spliced sockets: I
set that earlier by mistake.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If the connection is local or the RTT was comparable to the time it
takes to queue a batch of messages, we can safely use a large MSS
regardless of the sending buffer, but otherwise not.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If we start with a very small sending buffer, the kernel can expand
it as we make the congestion window grow, but that won't reliably
happen if we only ever use half of it (the other half is accounted
as overhead).
Scale usage with the buffer's own size instead: we might eventually
see some retransmissions because we can't queue everything the
sender sends us in-window, but that's better than keeping the small
buffer forever.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
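
A sketch of the scaling idea, with an assumed threshold and ratio (the
actual cut-offs may differ): use the whole buffer while it's small, so
the congestion window -- and with it the buffer -- can actually grow,
then fall back to half once it's large enough:

    /* Illustrative threshold below which the buffer counts as small */
    #define SNDBUF_SMALL    (128 * 1024)

    /* How much of the socket sending buffer we allow ourselves to use */
    static int sndbuf_usable(int sndbuf)
    {
        /* Small buffer: use all of it, accepting the odd
         * retransmission, so the congestion window can grow and the
         * kernel expands the buffer.
         */
        if (sndbuf < SNDBUF_SMALL)
            return sndbuf;

        /* Big enough: leave half for the kernel's overhead accounting */
        return sndbuf / 2;
    }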
...and from the sending socket only if the MTU is not configured.
Otherwise, a connection to a host from a local guest, with a
non-loopback destination address, will get its MSS from the MTU of the
outbound interface with that address, which is needlessly restrictive:
we know the guest can send us larger segments.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
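
A sketch of the resulting MSS selection, assuming IPv4 header sizes, a
TCP_MAXSEG fallback and a hypothetical function name:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* MSS for the guest: derive it from the configured MTU if there is
     * one, and only otherwise ask the sending socket.
     */
    static int guest_mss(int s, int mtu)
    {
        int mss;
        socklen_t sl = sizeof(mss);

        if (mtu)
            return mtu - 20 - 20; /* IPv4 and TCP headers, no options */

        if (getsockopt(s, IPPROTO_TCP, TCP_MAXSEG, &mss, &sl))
            return 536; /* conservative IPv4 default */

        return mss;
    }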
Detecting bound ports only at start-up isn't terribly useful: do it
periodically instead, if configured.
This is only implemented for TCP at the moment; UDP is somewhat more
complicated, so leave a TODO there.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
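
Periodic detection presumably boils down to rescanning the kernel's
socket tables from the timer; a sketch for IPv4, parsing listening
entries (state 0x0a) out of /proc/net/tcp, with an assumed
one-bit-per-port bitmap -- /proc/net/tcp6 would need the same
treatment:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Rebuild the bitmap of bound TCP ports from listening sockets
     * (state 0x0a) in /proc/net/tcp: called periodically from a timer,
     * not just at start-up.
     */
    static void tcp_scan_bound(uint8_t map[65536 / 8])
    {
        char line[256];
        FILE *f = fopen("/proc/net/tcp", "r");

        if (!f)
            return;

        memset(map, 0, 65536 / 8);

        if (!fgets(line, sizeof(line), f)) { /* skip the header line */
            fclose(f);
            return;
        }

        while (fgets(line, sizeof(line), f)) {
            unsigned int port, state;

            if (sscanf(line, " %*d: %*x:%x %*x:%*x %x",
                       &port, &state) == 2 && state == 0x0a)
                map[port / 8] |= 1 << (port % 8);
        }

        fclose(f);
    }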
This introduces a number of fundamental changes that would be quite
messy to split. Summary:
- advertised window scaling can be as big as we want, we just need
to clamp window sizes to avoid exceeding the size of our "discard"
buffer for unacknowledged data from socket
- add macros to compare sequence numbers (see the sketch after this
message)
- force sending ACK to guest/tap on PSH segments, always in pasta
mode, whenever we see an overlapping segment, or when we reach a
given threshold compared to our window
- we don't actually use recvmmsg() here, fix comments and label
- introduce pools for pre-opened sockets and pipes, to decrease
latency on new connections
- set receiving and sending buffer sizes to the maximum allowed,
kernel will clamp and round appropriately
- defer clean-up of spliced and non-spliced connections to a timer
- in tcp_send_to_tap(), there's no need anymore to keep a large
buffer, shrink it down to what we actually need
- introduce SO_RCVLOWAT setting and activity tracking for spliced
connections, to coalesce data moved by splice() calls as much as
possible
- as we now have a compacted connection table, there's no need to
keep sparse bitmaps tracking connection activity -- simply go
through active connections with a loop in the timer handler
- always clamp the advertised window to half our sending buffer,
too, to minimise retransmissions from the guest/tap
- set TCP_QUICKACK for originating socket in spliced connections,
there's no need to delay them
- fix up timeout for unacknowledged data from socket
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
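
For the sequence number macros mentioned in the list above, the usual
approach is modular comparison through a signed 32-bit difference; a
sketch of what they plausibly look like:

    #include <stdint.h>

    /* TCP sequence numbers wrap around: compare them via a signed
     * difference, valid as long as the values are less than 2^31
     * apart.
     */
    #define SEQ_LT(a, b)    ((int32_t)((a) - (b)) < 0)
    #define SEQ_LE(a, b)    ((int32_t)((a) - (b)) <= 0)
    #define SEQ_GT(a, b)    ((int32_t)((a) - (b)) > 0)
    #define SEQ_GE(a, b)    ((int32_t)((a) - (b)) >= 0)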
A random initial sequence number based on a secret has already been
in place for a while.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
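
The scheme is presumably in the spirit of RFC 6528: a secret-keyed hash
over the connection's 4-tuple, offset by a clock. A sketch, with hash()
as a placeholder for whatever keyed hash and secret are actually used:

    #include <stddef.h>
    #include <stdint.h>
    #include <time.h>

    uint64_t hash(const void *buf, size_t len); /* placeholder */

    /* RFC 6528-style ISN: keyed hash of the 4-tuple, plus a 4 us
     * timer so sequence spaces keep moving forward.
     */
    static uint32_t tcp_isn(const void *tuple, size_t len)
    {
        struct timespec now;

        clock_gettime(CLOCK_MONOTONIC, &now);

        return (uint32_t)hash(tuple, len) +
               (uint32_t)(now.tv_sec * 250000 + now.tv_nsec / 4000);
    }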
Until now, messages would be passed to protocol handlers in a single
batch only if they happened to be dequeued in a row. Packets
interleaved between different connections would result in multiple
calls to the same protocol handler for a single connection.
Instead, keep track of incoming packet descriptors, arrange them
into sequences, and call protocol handlers only once the input
messages are fully sorted into batches.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
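
A sketch of the sorting step, with hypothetical types and names (the
real descriptor layout certainly differs): sweep the batch once per
connection, gather that connection's descriptors in order, and make a
single handler call for each:

    #include <stddef.h>

    struct desc {           /* hypothetical packet descriptor */
        int conn;           /* connection the packet belongs to */
        void *data;
        size_t len;
    };

    /* Deliver one batch per connection, no matter how packets from
     * different connections were interleaved on the queue.
     */
    static void deliver_sorted(struct desc *d, int n,
                               void (*handler)(int conn,
                                               struct desc **seq,
                                               int cnt))
    {
        struct desc *seq[64]; /* arbitrary batch limit for the sketch */
        int i, j, cnt;

        for (i = 0; i < n; i++) {
            if (!d[i].data)   /* already delivered */
                continue;

            for (cnt = 0, j = i; j < n && cnt < 64; j++) {
                if (d[j].data && d[j].conn == d[i].conn)
                    seq[cnt++] = &d[j];
            }

            handler(d[i].conn, seq, cnt);

            while (cnt--)     /* mark as delivered */
                seq[cnt]->data = NULL;
        }
    }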
This significantly improves fairness in serving concurrent connections.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...we now have SPLICE_FIN_{FROM,TO,BOTH} too.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
That might just mean we shut down the socket -- but we still have to
go through the other states to ensure an orderly shutdown guest-side.
While at it, drop the EPOLLHUP check for unhandled states: we should
never hit that, but if we do, resetting the connection at that point
is probably the wrong thing to do.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Now that we dropped EPOLLET, we'll keep getting EPOLLRDHUP, and
possibly EPOLLIN, even if there's nothing to read anymore.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
That's a guarantee that we don't need to retry writing.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
EPOLLHUP just means we shut down one side of the connection on
*one* socket: remember, we have two sockets here.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...throughput isn't everything: this leads (of course) to horrible
latency with small, sparse messages. As a consequence, there's no
need to set TCP_NODELAY either.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If we're at the first message in a batch, it's safe to take the
window value from it, and there's no need to subtract anything for a
comparison that's not even done -- we'll override the value later in
any case if we find messages with a higher ACK sequence number.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...so that we don't try to close them again, even if harmless.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...tcp_handler_splice() doesn't guarantee we read all the available
data: the sending buffer might be full.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Checking it only when the cached value is smaller than the current
window of the receiver is not enough: it might shrink further while
the receiver window is growing.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...otherwise, we'll mix indices with non-spliced connections.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
If we couldn't write the whole batch of received packets to the socket,
and we have missing segments, we still need to request their
retransmission right away, otherwise it will take ages for the guest to
figure out we're missing them.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...instead of waiting for the remote peer to do that -- it's
especially important in case we request retransmissions from the
guest, but it also helps speed up slow start. This should
probably be a configurable behaviour in the future.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Seen with iperf3: a control connection is established, no data flows
for a while, all segments are acknowledged. The socket starts closing
it, and we immediately time out because the last ACK from tap was one
minute before that.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
It carries no data and usually duplicates the previous ACK sequence,
but it's just a FIN.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Seen with iperf3: the first packet from socket (data connection) is
65520 bytes and doesn't fit in the window.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
This fixes a number of issues found with some heavier testing with
uperf and neper:
- in most closing states, we can still accept data, check for EPOLLIN
when appropriate
- introduce a new state, ESTABLISHED_SOCK_FIN_SENT, to track the fact
we already sent a FIN segment to the tap device, for proper sequence
number bookkeeping
- for pasta mode only: spliced connections also need tracking of
(inferred) FIN segments and clean half-pipe shutdowns
- streamline resetting epoll_wait bitmaps with a new function,
tcp_tap_epoll_mask(), instead of repeating the logic all over the
place
- set EPOLLET for tap connections too, whenever we are waiting for
EPOLLRDHUP or an event from the tap to proceed with data transfer,
to avoid useless loops with EPOLLIN set
- impose an additional limit on the sending window advertised to the
guest, given by SO_SNDBUF: it makes no sense to fill the sending
buffer completely and then advertise a zero window -- stop a bit
before we hit that (see the sketch after this message)
- handle *all* interrupted system calls as needed
- simplify the logic for reordering of out-of-order segments received
from tap: it's not a corner case, and the previous logic allowed
for deadloops
- fix comparison of seen IPv4 address when we get a new connection
from a socket directed to the configured guest address
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
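
For the SO_SNDBUF limit on the advertised window, a sketch of the
clamp: SIOCOUTQ reports the bytes still queued on the socket, and the
one-eighth margin here is an assumption, not the actual figure:

    #include <linux/sockios.h>
    #include <sys/ioctl.h>

    /* Clamp the window advertised to the guest so that the socket
     * sending buffer is never filled completely: stop one eighth
     * (illustrative margin) before we'd have to advertise zero.
     */
    static unsigned int adv_window(int s, int sndbuf, unsigned int window)
    {
        int queued = 0, room;

        ioctl(s, SIOCOUTQ, &queued);

        room = sndbuf - queued - sndbuf / 8;
        if (room < 0)
            room = 0;

        return window < (unsigned int)room ? window : (unsigned int)room;
    }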
This got lost in a recent rework: if the guest wants to connect
directly to the host, it can use the address of the default gateway.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
More details here after rebase.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
As data from socket is forwarded to the guest, sendmmsg() might send
fewer bytes than requested in three different ways:
- failing altogether with a negative error code -- ignore that,
we'll get an error on the UNIX domain socket later if there's
really an issue with it and reset the connection to the guest
- sending less than 'vlen' messages -- instead of assuming success
in that case and waiting for the guest to send a duplicate ACK
indicating missing data, update the sequence number according to
what was actually sent and spare some retransmissions
- somewhat unexpectedly to me, sending 'vlen' or less than 'vlen'
messages, returning up to 'vlen', with the last message being
partially sent, and no further indication of errors other than
the returned msg_len for the last partially sent message being
less than iov_len.
In this case, we would assume success and proceed as nothing
happened. However, qemu would fail to parse any further message,
having received a partial descriptor, and eventually close the
connection, logging:
serious error: oversized packet received,connection terminated.
as the length descriptor for the next message would be sourced
from the middle of the next successfully sent message, not from
its header.
Handle this by checking the msg_len returned for the last (even
partially) sent message, and force re-sending the missing bytes,
if any, with a blocking sendmsg() -- qemu must not receive
anything else than that anyway.
While at it, allow sending up to 64KiB for each message -- the
previous 32KiB limit isn't actually required -- and just switch to a
new message at each iteration over the sending buffers: they are
already MSS-sized anyway, so the check in the loop isn't really
needed.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
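
A sketch of the handling for the third case, assuming one buffer per
message and simplified names: check msg_len of the last message
sendmmsg() counted, and if it went out short, push the tail with a
follow-up sendmsg() so qemu never sees a truncated length-prefixed
frame:

    #define _GNU_SOURCE
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send a batch to the qemu socket. If the last reported message
     * was sent only partially, qemu would misparse every following
     * length descriptor, so force the remaining bytes out before
     * anything else.
     */
    static int tap_sendmmsg(int fd, struct mmsghdr *mh, unsigned int n)
    {
        int sent = sendmmsg(fd, mh, n, MSG_NOSIGNAL);
        struct msghdr *last;
        size_t full, done;

        if (sent <= 0)  /* error: the socket will tell us again later */
            return sent;

        last = &mh[sent - 1].msg_hdr;
        full = last->msg_iov->iov_len;  /* assumes one buffer per msg */
        done = mh[sent - 1].msg_len;

        if (done < full) {
            struct iovec rest = {
                .iov_base = (char *)last->msg_iov->iov_base + done,
                .iov_len = full - done,
            };
            struct msghdr tail = { .msg_iov = &rest, .msg_iovlen = 1 };

            sendmsg(fd, &tail, 0); /* qemu needs these bytes first */
        }

        return sent;
    }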
With a kernel older than 5.3 (no_snd_wnd set), ack_pending in
tcp_send_to_tap() might be true at the beginning of a new connection
initiated by a socket. This means we send the first SYN segment to the
tap together with ACK set, which is clearly invalid and triggers the
receiver to reply with an RST segment right away.
Set ack_pending to 0 whenever we're sending a SYN segment. In case of a
SYN, ACK segment sent by the caller, the caller passes the ACK flag
explicitly.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Socket-facing functions don't guarantee that all data is handled before
they return: stick to level-triggered mode for TCP sockets.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
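
In epoll terms this just means not setting EPOLLET when the socket is
registered -- a minimal sketch:

    #include <sys/epoll.h>

    /* TCP handlers may return with data still queued on the socket
     * (window or buffer full), so stay level-triggered: the kernel
     * keeps reporting EPOLLIN until we actually drain it, while
     * EPOLLET would notify us only once.
     */
    static int tcp_sock_add(int epollfd, int s)
    {
        struct epoll_event ev = {
            .events = EPOLLIN | EPOLLRDHUP, /* note: no EPOLLET */
            .data.fd = s,
        };

        return epoll_ctl(epollfd, EPOLL_CTL_ADD, s, &ev);
    }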
...and while at it, fix an issue in the calculation of the last IOV
buffer size: if we can't receive enough data to fill up the window,
the last buffer can be filled completely.
Also streamline the code setting iovec lengths when cached values
don't match.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
We won't necessarily get another chance to ACK in a timely fashion
if we skip ACKs from a number of states (including ESTABLISHED) when
there's enough window left. Check for ACKed bytes as soon as it makes
sense.
If the sending window is not reported by the kernel, ACK as soon as
we queue onto the socket, given that we're forced to use a rather
small window.
In FIN_WAIT_1_SOCK_FIN, we also have to account for the FIN flag sent
by the peer in the sequence.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Sending 64 frames in a batch looks quite bad when a duplicate ACK
comes right at the beginning of it. Lowering this to 32 doesn't
affect performance noticeably; with 16, the impact is more apparent.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Similar to UDP, but using a simple sendmsg() on iovec-style buffers
from tap instead, as we don't need to preserve message boundaries.
A quick test in PASTA mode, from namespace to init via tap:
# ip link set dev pasta0 mtu 16384
# iperf3 -c 192.168.1.222 -t 60
[...]
[ ID] Interval Transfer Bitrate
[ 5] 0.00-60.00 sec 80.4 GBytes 11.5 Gbits/sec receiver
# iperf3 -c 2a02:6d40:3cfc:3a01:2b20:4a6a:c25a:3056 -t 60
[...]
[ ID] Interval Transfer Bitrate
[ 5] 0.00-60.01 sec 39.9 GBytes 5.71 Gbits/sec receiver
# ip link set dev pasta0 mtu 65520
# iperf3 -c 192.168.1.222 -t 60
[...]
[ ID] Interval Transfer Bitrate
[ 5] 0.00-60.01 sec 88.7 GBytes 12.7 Gbits/sec receiver
# iperf3 -c 2a02:6d40:3cfc:3a01:2b20:4a6a:c25a:3056 -t 60
[...]
[ ID] Interval Transfer Bitrate
[ 5] 0.00-60.00 sec 79.5 GBytes 11.4 Gbits/sec receiver
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
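
The forwarding call itself can be as simple as this sketch (names are
illustrative): TCP is a byte stream, so the frames collected from the
tap go out in a single vectored send, boundaries ignored:

    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Forward a batch of payload buffers from the tap to the socket:
     * no message boundaries to preserve, one sendmsg() covers the
     * whole batch.
     */
    static ssize_t tap_to_sock(int s, struct iovec *iov, int n)
    {
        struct msghdr mh = { .msg_iov = iov, .msg_iovlen = n };

        return sendmsg(s, &mh, MSG_NOSIGNAL | MSG_DONTWAIT);
    }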
There's no need to constantly query the socket for the number of
acknowledged bytes if we're far from exhausting the sending window:
just do it once we're at least down to 90% of it.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
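
A sketch of the check, with TCP_INFO's tcpi_bytes_acked standing in for
whatever counter the code actually reads; the 90% figure is the one
from the message above:

    #include <netinet/in.h>
    #include <linux/tcp.h>  /* struct tcp_info with tcpi_bytes_acked */
    #include <stdint.h>
    #include <sys/socket.h>

    /* Refresh the acknowledged-bytes counter only once at least 90% of
     * the sending window is in use: before that, the answer can't
     * change anything.
     */
    static void maybe_update_acked(int s, uint32_t in_flight,
                                   uint32_t wnd, uint64_t *acked)
    {
        struct tcp_info ti;
        socklen_t sl = sizeof(ti);

        if (in_flight < wnd / 10 * 9)
            return;

        if (!getsockopt(s, IPPROTO_TCP, TCP_INFO, &ti, &sl))
            *acked = ti.tcpi_bytes_acked;
    }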
...instead of just 127.0.0.1.
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>
...similarly to what was done for UDP. Quick performance test with
32KiB buffers, host to VM:
$ iperf3 -c 192.0.2.2 -N
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 8.47 GBytes 7.27 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 8.45 GBytes 7.26 Gbits/sec receiver
$ iperf3 -c 2a01:598:88ba:a056:271f:473a:c0d9:abc1
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-10.00 sec 8.43 GBytes 7.24 Gbits/sec 0 sender
[ 5] 0.00-10.00 sec 8.41 GBytes 7.22 Gbits/sec receiver
Signed-off-by: Stefano Brivio <sbrivio@redhat.com>