<feed xmlns='http://www.w3.org/2005/Atom'>
<title>passt, branch 2025_12_15.b40f5cd</title>
<subtitle>Plug A Simple Socket Transport</subtitle>
<link rel='alternate' type='text/html' href='https://passt.top/passt/'/>
<entry>
<title>tcp: Use less-than-MSS window on no queued data, or no data sent recently</title>
<updated>2025-12-15T07:11:54+00:00</updated>
<author>
<name>Stefano Brivio</name>
<email>sbrivio@redhat.com</email>
</author>
<published>2025-12-13T13:19:13+00:00</published>
<link rel='alternate' type='text/html' href='https://passt.top/passt/commit/?id=b40f5cd8c8e16c6eceb1f26eb895527fda84068b'/>
<id>b40f5cd8c8e16c6eceb1f26eb895527fda84068b</id>
<content type='text'>
We limit the window advertised to guests and containers to the
available length of the sending buffer and, since commit cf1925fb7b77
("tcp: Don't limit window to less-than-MSS values, use zero instead"),
if that's less than the MSS, we approximate the limit to zero.

This way, we'll trigger a window update as soon as we realise that we
can advertise a larger value, just like we do in all other cases where
we advertise a zero-sized window.

By doing that, we don't wait for the peer to send us data before we
update the window. This matters because the guest or container might
be trying to aggregate more data and won't send us anything at all if
the advertised window is too small.

However, this might be problematic in two situations:

1. one, reported by Tyler, where the remote (receiving) peer
   advertises a window that's smaller than what we usually get and
   very close to the MSS, causing the kernel to give us a starting
   size of the buffer that's less than the MSS we advertise to the
   guest or container.

   If this happens, we'll never advertise a non-zero window after
   the handshake, and the container or guest will never send us any
   data at all.

   With a simple 'curl https://cloudflare.com/', we get, with default
   TCP memory parameters, a 65535-byte window from the peer, and 46080
   bytes of initial sending buffer from the kernel. But we advertised
   a 65480-byte MSS, and we'll never actually receive the client
   request.

   This seems to be specific to Cloudflare for some reason, probably
   deriving from a particular tuning of TCP parameters on their
   servers.

2. another one, hypothesised by David, where the peer might only be
   willing to process (and acknowledge) data in batches.

   We might have queued outbound data which is, at the same time, not
   enough to fill one of these batches and be acknowledged and removed
   from the sending queue, but enough to make our available buffer
   smaller than the MSS, and the connection will hang.

Take care of both cases by:

a. not approximating the sending buffer to zero if we have no outbound
   queued data at all, because in that case we don't expect the
   available buffer to increase if we don't send any data, so there's
   no point in waiting for it to grow larger than the MSS.

   This fixes problem 1. above.

b. also using the full sending buffer size if we haven't sent data to
   the socket for a while (reported by tcpi_last_data_sent). This part
   was already suggested by David in:

     https://archives.passt.top/passt-dev/aTZzgtcKWLb28zrf@zatzit/

   and I'm now picking ten times the RTT as a somewhat arbitrary
   threshold.

   This is meant to take care of potential problem 2. above, but it
   also happens to fix 1.
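
   The decision in a. and b. above can be sketched as a pure function
   (names and signature are illustrative, not the actual passt code;
   tcpi_last_data_sent is the tcp_info field, in milliseconds):

   ```c
   #include <stdint.h>

   /* Hypothetical sketch of the advertised-window clamping decision:
    *
    * sendbuf:      available space in the sending buffer (bytes)
    * mss:          MSS advertised to the guest or container
    * queued:       outbound bytes still queued on the socket
    * last_sent_ms: time since we last sent data (tcpi_last_data_sent)
    * rtt_ms:       current RTT estimate
    */
   static uint32_t adv_window(uint32_t sendbuf, uint32_t mss,
   			   uint32_t queued, uint32_t last_sent_ms,
   			   uint32_t rtt_ms)
   {
   	if (sendbuf >= mss)
   		return sendbuf;	/* no clamping needed */

   	/* a. no queued data: the buffer won't grow on its own, so
   	 * advertise what we have instead of zero */
   	if (!queued)
   		return sendbuf;

   	/* b. nothing sent for ~10 RTTs: the peer might be waiting to
   	 * batch-process what's queued, don't starve it with zero */
   	if (last_sent_ms > 10 * rtt_ms)
   		return sendbuf;

   	/* otherwise, approximate to zero, and send a window update
   	 * once the buffer grows past the MSS */
   	return 0;
   }
   ```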

Reported-by: Tyler Cloud &lt;tcloud@redhat.com&gt;
Link: https://bugs.passt.top/show_bug.cgi?id=183
Suggested-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
Reviewed-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
We limit the window advertised to guests and containers to the
available length of the sending buffer and, since commit cf1925fb7b77
("tcp: Don't limit window to less-than-MSS values, use zero instead"),
if that's less than the MSS, we approximate the limit to zero.

This way, we'll trigger a window update as soon as we realise that we
can advertise a larger value, just like we do in all other cases where
we advertise a zero-sized window.

By doing that, we don't wait for the peer to send us data before we
update the window. This matters because the guest or container might
be trying to aggregate more data and won't send us anything at all if
the advertised window is too small.

However, this might be problematic in two situations:

1. one, reported by Tyler, where the remote (receiving) peer
   advertises a window that's smaller than what we usually get and
   very close to the MSS, causing the kernel to give us a starting
   size of the buffer that's less than the MSS we advertise to the
   guest or container.

   If this happens, we'll never advertise a non-zero window after
   the handshake, and the container or guest will never send us any
   data at all.

   With a simple 'curl https://cloudflare.com/', we get, with default
   TCP memory parameters, a 65535-byte window from the peer, and 46080
   bytes of initial sending buffer from the kernel. But we advertised
   a 65480-byte MSS, and we'll never actually receive the client
   request.

   This seems to be specific to Cloudflare for some reason, probably
   deriving from a particular tuning of TCP parameters on their
   servers.

2. another one, hypothesised by David, where the peer might only be
   willing to process (and acknowledge) data in batches.

   We might have queued outbound data which is, at the same time, not
   enough to fill one of these batches and be acknowledged and removed
   from the sending queue, but enough to make our available buffer
   smaller than the MSS, and the connection will hang.

Take care of both cases by:

a. not approximating the sending buffer to zero if we have no outbound
   queued data at all, because in that case we don't expect the
   available buffer to increase if we don't send any data, so there's
   no point in waiting for it to grow larger than the MSS.

   This fixes problem 1. above.

b. also using the full sending buffer size if we haven't sent data to
   the socket for a while (reported by tcpi_last_data_sent). This part
   was already suggested by David in:

     https://archives.passt.top/passt-dev/aTZzgtcKWLb28zrf@zatzit/

   and I'm now picking ten times the RTT as a somewhat arbitrary
   threshold.

   This is meant to take care of potential problem 2. above, but it
   also happens to fix 1.

Reported-by: Tyler Cloud &lt;tcloud@redhat.com&gt;
Link: https://bugs.passt.top/show_bug.cgi?id=183
Suggested-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
Reviewed-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>conf, fwd: Move initialisation of auto port scanning out of conf()</title>
<updated>2025-12-12T21:38:56+00:00</updated>
<author>
<name>David Gibson</name>
<email>david@gibson.dropbear.id.au</email>
</author>
<published>2025-12-12T07:10:35+00:00</published>
<link rel='alternate' type='text/html' href='https://passt.top/passt/commit/?id=35fa86a7871767d6a382b13e71c429abf47f88ab'/>
<id>35fa86a7871767d6a382b13e71c429abf47f88ab</id>
<content type='text'>
We call fwd_scan_ports_init() at (almost) the end of conf().  It's a bit
odd to do actual work from a function that's ostensibly about getting our
configuration.  It's not the only instance of this, but to make things a
bit clearer move the call to main(), right after flow_init().

Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
We call fwd_scan_ports_init() at (almost) the end of conf().  It's a bit
odd to do actual work from a function that's ostensibly about getting our
configuration.  It's not the only instance of this, but to make things a
bit clearer move the call to main(), right after flow_init().

Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tcp: Remove extra space from TCP_INFO debug messages (trivial)</title>
<updated>2025-12-12T21:38:53+00:00</updated>
<author>
<name>David Gibson</name>
<email>david@gibson.dropbear.id.au</email>
</author>
<published>2025-12-12T07:10:34+00:00</published>
<link rel='alternate' type='text/html' href='https://passt.top/passt/commit/?id=5be1a224d35991ac491e3da851e42c5965fbc5d7'/>
<id>5be1a224d35991ac491e3da851e42c5965fbc5d7</id>
<content type='text'>
Debug messages about which tcp_info fields are supported contained an
extra space, always ending with "  supported".

Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
Debug messages about which tcp_info fields are supported contained an
extra space, always ending with "  supported".

Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>pasta: Clean up waiting pasta child on failures</title>
<updated>2025-12-12T21:23:14+00:00</updated>
<author>
<name>David Gibson</name>
<email>david@gibson.dropbear.id.au</email>
</author>
<published>2025-12-11T03:54:36+00:00</published>
<link rel='alternate' type='text/html' href='https://passt.top/passt/commit/?id=87f1a917d881d0881d6db5fdc2345f345a0e30d1'/>
<id>87f1a917d881d0881d6db5fdc2345f345a0e30d1</id>
<content type='text'>
When pasta is invoked with a command rather than an existing namespace to
attach to, it spawns a child process to run a shell or other command.  We
create that process during conf(), since we need the namespace to exist for
much of our setup.  However, we don't want the specified command to run
until the pasta network interface is ready for use.  Therefore,
pasta_spawn_cmd() executing in the child waits before exec()ing.  main()
signals the child to continue with SIGUSR1 shortly before entering the
main forwarding loop.

This has the downside that if we exit due to any kind of failure between
conf() and the SIGUSR1, the child process will be around waiting
indefinitely.  The user must manually clean this up.

Make this cleaner, by having the child use PR_SET_PDEATHSIG to have
itself killed if the parent dies during this window.  Technically
speaking this is racy: if the parent dies before the child can call
the prctl() it will be left zombie-like as before.  However, as long
as the parent completes pasta_wait_for_ns() before dying, I wasn't
able to trigger the race.  Since the consequences of this going wrong
are merely a bit ugly, I think that's good enough.

Suggested-by: Paul Holzinger &lt;pholzing@redhat.com&gt;
Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Reviewed-by: Paul Holzinger &lt;pholzing@redhat.com&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
When pasta is invoked with a command rather than an existing namespace to
attach to, it spawns a child process to run a shell or other command.  We
create that process during conf(), since we need the namespace to exist for
much of our setup.  However, we don't want the specified command to run
until the pasta network interface is ready for use.  Therefore,
pasta_spawn_cmd() executing in the child waits before exec()ing.  main()
signals the child to continue with SIGUSR1 shortly before entering the
main forwarding loop.

This has the downside that if we exit due to any kind of failure between
conf() and the SIGUSR1, the child process will be around waiting
indefinitely.  The user must manually clean this up.

Make this cleaner, by having the child use PR_SET_PDEATHSIG to have
itself killed if the parent dies during this window.  Technically
speaking this is racy: if the parent dies before the child can call
the prctl() it will be left zombie-like as before.  However, as long
as the parent completes pasta_wait_for_ns() before dying, I wasn't
able to trigger the race.  Since the consequences of this going wrong
are merely a bit ugly, I think that's good enough.

Suggested-by: Paul Holzinger &lt;pholzing@redhat.com&gt;
Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Reviewed-by: Paul Holzinger &lt;pholzing@redhat.com&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>treewide: Introduce passt_exit() helper</title>
<updated>2025-12-12T21:20:02+00:00</updated>
<author>
<name>David Gibson</name>
<email>david@gibson.dropbear.id.au</email>
</author>
<published>2025-12-11T03:54:35+00:00</published>
<link rel='alternate' type='text/html' href='https://passt.top/passt/commit/?id=e6612fe0a7cf4860b0d81d3b886f95273d979d1d'/>
<id>e6612fe0a7cf4860b0d81d3b886f95273d979d1d</id>
<content type='text'>
In d0006fa78 ("treewide: use _exit() over exit()"), we replaced use of
the normal exit(3) with direct calls to _exit(2).  That was because glibc
exit(3) made some unexpected futex() calls, which hit our seccomp profile.

We've since had some bugs due to missing the extra cleanup that exit(3)
implemented, for which we've added explicit cleanup calls.  Specifically,
we have fflush() calls in some places to avoid leaving incomplete messages
on stdout/stderr, and in other places fsync_pcap_and_log() to avoid
leaving incomplete log or pcap files.

It's easy to forget these when adding new error paths, so instead,
implement our own passt_exit() wrapper to perform vital cleanup then call
_exit(2).  This also provides an obvious place to add any additional
cleanups we discover we need in future.

Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
In d0006fa78 ("treewide: use _exit() over exit()"), we replaced use of
the normal exit(3) with direct calls to _exit(2).  That was because glibc
exit(3) made some unexpected futex() calls, which hit our seccomp profile.

We've since had some bugs due to missing the extra cleanup that exit(3)
implemented, for which we've added explicit cleanup calls.  Specifically,
we have fflush() calls in some places to avoid leaving incomplete messages
on stdout/stderr, and in other places fsync_pcap_and_log() to avoid
leaving incomplete log or pcap files.

It's easy to forget these when adding new error paths, so instead,
implement our own passt_exit() wrapper to perform vital cleanup then call
_exit(2).  This also provides an obvious place to add any additional
cleanups we discover we need in future.

Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tcp: Suppress new instance of cppcheck bug 14191</title>
<updated>2025-12-12T21:19:46+00:00</updated>
<author>
<name>Laurent Vivier</name>
<email>lvivier@redhat.com</email>
</author>
<published>2025-12-11T03:54:34+00:00</published>
<link rel='alternate' type='text/html' href='https://passt.top/passt/commit/?id=d6c5b6ee1ddeafdede00a5745ee0d72fea565356'/>
<id>d6c5b6ee1ddeafdede00a5745ee0d72fea565356</id>
<content type='text'>
ee9b2361d ("cppcheck: Suppress a buggy cppcheck warning") added a
suppression for a cppcheck bug, since filed (and fixed) in upstream
cppcheck as https://trac.cppcheck.net/ticket/14191.  9139e60fd ("tcp:
Acknowledge everything if it looks like bulk traffic, not interactive")
introduced a new point which triggers the same cppcheck bug.

Add a suppression for the new instance.  This is a revision of Laurent's
earlier patch, updating the comments to make the connection between the
two instances clear, and adding unmatchedSuppression so it doesn't cause
a bogus warning with unaffected cppcheck versions.

Signed-off-by: Laurent Vivier &lt;lvivier@redhat.com&gt;
Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
ee9b2361d ("cppcheck: Suppress a buggy cppcheck warning") added a
suppression for a cppcheck bug, since filed (and fixed) in upstream
cppcheck as https://trac.cppcheck.net/ticket/14191.  9139e60fd ("tcp:
Acknowledge everything if it looks like bulk traffic, not interactive")
introduced a new point which triggers the same cppcheck bug.

Add a suppression for the new instance.  This is a revision of Laurent's
earlier patch, updating the comments to make the connection between the
two instances clear, and adding unmatchedSuppression so it doesn't cause
a bogus warning with unaffected cppcheck versions.

Signed-off-by: Laurent Vivier &lt;lvivier@redhat.com&gt;
Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>pif: Correctly set scope_id for guest-side link local addresses</title>
<updated>2025-12-10T07:37:29+00:00</updated>
<author>
<name>David Gibson</name>
<email>david@gibson.dropbear.id.au</email>
</author>
<published>2025-12-10T07:02:57+00:00</published>
<link rel='alternate' type='text/html' href='https://passt.top/passt/commit/?id=d04c48032bcf724550d0b8f652fd00efcd2dfad0'/>
<id>d04c48032bcf724550d0b8f652fd00efcd2dfad0</id>
<content type='text'>
pif_sockaddr() is supposed to generate a suitable socket address, either
for the host, or for the guest, depending on the 'pif' parameter.  When
given a link-local address, this means it needs to generate a suitable
scope_id to specify which link.  It does this for the host, but not for the
guest.

I think this was done on the assumption that we won't ever generate guest
side link local addresses when forwarding connections.  That, however, is
not the case, at least with the recent extensions to "local mode".  Fix the
problem by properly populating the scope_id field for guest addresses.
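
The fix amounts to something like the following sketch (names are
illustrative, not the actual pif_sockaddr() code; ifi stands for the
interface index of the pif in question):

```c
#include <netinet/in.h>

/* For a link-local IPv6 address, sin6_scope_id must name the link,
 * or later bind()/connect() calls fail; for other addresses it must
 * stay zero.
 */
static void set_scope(struct sockaddr_in6 *sa6, unsigned int ifi)
{
	if (IN6_IS_ADDR_LINKLOCAL(&sa6->sin6_addr))
		sa6->sin6_scope_id = ifi;
	else
		sa6->sin6_scope_id = 0;
}
```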

Link: https://bugs.passt.top/show_bug.cgi?id=181
Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
pif_sockaddr() is supposed to generate a suitable socket address, either
for the host, or for the guest, depending on the 'pif' parameter.  When
given a link-local address, this means it needs to generate a suitable
scope_id to specify which link.  It does this for the host, but not for the
guest.

I think this was done on the assumption that we won't ever generate guest
side link local addresses when forwarding connections.  That, however, is
not the case, at least with the recent extensions to "local mode".  Fix the
problem by properly populating the scope_id field for guest addresses.

Link: https://bugs.passt.top/show_bug.cgi?id=181
Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tcp: Correct timer expiry value in trace message</title>
<updated>2025-12-10T07:37:06+00:00</updated>
<author>
<name>David Gibson</name>
<email>david@gibson.dropbear.id.au</email>
</author>
<published>2025-12-10T07:02:56+00:00</published>
<link rel='alternate' type='text/html' href='https://passt.top/passt/commit/?id=696709d74b240088ffeda7f2c72b16e75879c689'/>
<id>696709d74b240088ffeda7f2c72b16e75879c689</id>
<content type='text'>
000601ba8 ("tcp: Adaptive interval based on RTT for socket-side
acknowledgement checks") added (amongst other things) a new trace message
showing the expiry time for the TCP timer in ms rather than s.

Unfortunately there were some arithmetic errors in the message, meaning it
will print incorrect numbers.  Correct them.

Fixes: 000601ba86da ("tcp: Adaptive interval based on RTT for socket-side acknowledgement checks")
Link: https://bugs.passt.top/show_bug.cgi?id=182
Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
000601ba8 ("tcp: Adaptive interval based on RTT for socket-side
acknowledgement checks") added (amongst other things) a new trace message
showing the expiry time for the TCP timer in ms rather than s.

Unfortunately there were some arithmetic errors in the message, meaning it
will print incorrect numbers.  Correct them.

Fixes: 000601ba86da ("tcp: Adaptive interval based on RTT for socket-side acknowledgement checks")
Link: https://bugs.passt.top/show_bug.cgi?id=182
Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>tcp_splice, flow: Add socket to epoll set before connect(), drop assert</title>
<updated>2025-12-09T00:27:24+00:00</updated>
<author>
<name>Stefano Brivio</name>
<email>sbrivio@redhat.com</email>
</author>
<published>2025-12-08T21:18:01+00:00</published>
<link rel='alternate' type='text/html' href='https://passt.top/passt/commit/?id=c3f1ba70237a9e66822aff3aa5765d0adf6f6307'/>
<id>c3f1ba70237a9e66822aff3aa5765d0adf6f6307</id>
<content type='text'>
...otherwise, if we have a real error on connect() (that is, not
EINPROGRESS), we'll return early from tcp_splice_connect() and later
try to fetch the epoll file descriptor:

  ASSERTION FAILED in flow_epollfd (flow.c:362): f-&gt;epollid &lt; ((1 &lt;&lt; 8) - 1)

which is still (correctly) EPOLLFD_ID_INVALID.

Replace the ASSERT() in flow_epollfd() with a warning, as it looks
like there might be harmless cases where the socket is not in the
epoll set yet, and we'll just crash for nothing. We can turn this back
to an ASSERT() once we audit these paths in more detail.

Link: https://bodhi.fedoraproject.org/updates/FEDORA-2025-93b4eb64c3#comment-4473411
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
Reviewed-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
...otherwise, if we have a real error on connect() (that is, not
EINPROGRESS), we'll return early from tcp_splice_connect() and later
try to fetch the epoll file descriptor:

  ASSERTION FAILED in flow_epollfd (flow.c:362): f-&gt;epollid &lt; ((1 &lt;&lt; 8) - 1)

which is still (correctly) EPOLLFD_ID_INVALID.

Replace the ASSERT() in flow_epollfd() with a warning, as it looks
like there might be harmless cases where the socket is not in the
epoll set yet, and we'll just crash for nothing. We can turn this back
to an ASSERT() once we audit these paths in more detail.

Link: https://bodhi.fedoraproject.org/updates/FEDORA-2025-93b4eb64c3#comment-4473411
Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
Reviewed-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
</pre>
</div>
</content>
</entry>
<entry>
<title>fedora: Fix build on Fedora 43, selinux_requires_min not available on Copr builders</title>
<updated>2025-12-08T10:17:14+00:00</updated>
<author>
<name>Stefano Brivio</name>
<email>sbrivio@redhat.com</email>
</author>
<published>2025-12-08T10:17:14+00:00</published>
<link rel='alternate' type='text/html' href='https://passt.top/passt/commit/?id=e8b56a3d2456a62eed5ce4297134b26427c2e5b6'/>
<id>e8b56a3d2456a62eed5ce4297134b26427c2e5b6</id>
<content type='text'>
For some reason, on Copr:

  Building target platforms: aarch64
  Building for target aarch64
  error: line 42: Unknown tag: %selinux_requires_min
  Child return code was: 1

Only use %selinux_requires_min starting from current Rawhide /
Fedora 44, where it works.

Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</content>
<content type='xhtml'>
<div xmlns='http://www.w3.org/1999/xhtml'>
<pre>
For some reason, on Copr:

  Building target platforms: aarch64
  Building for target aarch64
  error: line 42: Unknown tag: %selinux_requires_min
  Child return code was: 1

Only use %selinux_requires_min starting from current Rawhide /
Fedora 44, where it works.

Signed-off-by: Stefano Brivio &lt;sbrivio@redhat.com&gt;
</pre>
</div>
</content>
</entry>
</feed>
