Question about IPQoS Defaults

Chris Rapier rapier at psc.edu
Wed Jul 30 04:41:53 AEST 2025


I know that this is likely a very niche question, but I was hoping to 
understand things a little better.

Background:
I'm implementing RFC 8305 (Happy Eyeballs) for SSH. When connecting to 
dual-stack targets it starts a race between an IPv6 connection and an 
IPv4 connection and uses whichever connects first. To test this I set 
up tc qdisc filters to impose a 600ms delay on IPv6 connections to my 
target, the assumption being that the excessive delay would favor IPv4 
connections.
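For context, the race is essentially the following (a minimal Python 
sketch of the idea, not the actual SSH implementation; note that RFC 
8305 proper staggers the attempts rather than starting them 
simultaneously):

```python
import concurrent.futures
import socket

def try_connect(family, host, port):
    # Resolve the host for the requested address family and attempt
    # a TCP connection to the first result.
    info = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    af, socktype, proto, _, addr = info[0]
    s = socket.socket(af, socktype, proto)
    s.connect(addr)
    return s

def happy_eyeballs(host, port):
    # Race an IPv6 attempt against an IPv4 attempt and return the
    # first one that succeeds.  The losing socket (if it also
    # connects) is simply abandoned in this sketch.
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [
            pool.submit(try_connect, socket.AF_INET6, host, port),
            pool.submit(try_connect, socket.AF_INET, host, port),
        ]
        for fut in concurrent.futures.as_completed(futures):
            try:
                return fut.result()  # first successful connection wins
            except OSError:
                continue             # this family failed; wait for the other
    raise OSError("both connection attempts failed")
```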

Problem:
In my tests the IPv6 connection *always* ended up being the connection 
used, even though its RTT was 600ms higher than the IPv4 connection's. 
I then noticed the same issue when using an OpenSSH client under the 
same circumstances. Even with "ssh -4 target.host" I would see a 600ms 
delay on the path, while a "ping -4 target.host" returned a 2ms RTT. 
Both interactive and bulk-data sessions over SSH would see that 
excessive delay. The only setup in which that was *not* the case was 
the ssh package under Ubuntu.

After a bunch of testing I found that Ubuntu reverts the IPQoS default 
changes made in commit 5ee8448a. I absolutely understand why those 
changes were made, and I have a way to resolve the issue in my code. 
The problem is that I don't understand why I'm seeing the behaviour 
that I am. Why does setting IPQoS to lowdelay work in my, admittedly 
unique, situation, while using the default of AF21 seems to produce 
this excessive delay even on IPv4 connections?
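For anyone reproducing this: the difference between the two settings 
is just the TOS/traffic-class byte set on the socket. A minimal Python 
sketch (the numeric values come from the classic IP TOS bits and the 
DSCP definition of AF21; this is not OpenSSH's actual code):

```python
import socket

# "lowdelay" is the classic IPTOS_LOWDELAY bit in the TOS byte.
IPTOS_LOWDELAY = 0x10
# AF21 is DSCP 18; the DSCP occupies the upper six bits of the
# TOS byte, so the byte value is 18 << 2 = 0x48.
IPTOS_DSCP_AF21 = 18 << 2

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, IPTOS_DSCP_AF21)

# For an IPv6 socket the equivalent option is IPV6_TCLASS, e.g.:
# s6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_TCLASS, IPTOS_DSCP_AF21)
```

Swapping IPTOS_DSCP_AF21 for IPTOS_LOWDELAY above is effectively what 
the Ubuntu default does for interactive traffic.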

I set up the filter using:
tc qdisc add dev enp0s5 root handle 1: prio
tc qdisc add dev enp0s5 parent 1:3 handle 30: netem delay 600ms
tc filter add dev enp0s5 parent 1: prio 3 u32 match u16 0x6000 0xf000 at 0 flowid 1:3  # delay all IPv6 (match version nibble 6)

Maybe my test environment is faulty?

Chris

