zzz
fired up the testnet, and performance is really really bad, so I'll be looking to see what broke
zlatinb
you didn't revert bandwidth limits by any chance?
zzz
no, first thing I checked
zzz
found it, just broke it this morning, but will definitely do testnet more often
zlatinb
regarding the interop testing before next release, I think I'll stick to ssu1 and ntcp2
zlatinb
unless the decision is to enable ssu2 by default
zzz
definitely not default in next release
zlatinb
that's a good one, removes the need for JNA: openjdk.java.net/jeps/419
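(For context, a minimal sketch of calling native code without JNA. This uses the java.lang.foreign names as finalized in JDK 22; the JEP 419 incubator version in JDK 18 lives under jdk.incubator.foreign with slightly different names. The strlen example is purely illustrative, not anything I2P actually calls.)

    // Minimal foreign-function sketch: look up libc's strlen and call it,
    // no JNA and no hand-written JNI glue. JDK 22 final API assumed.
    import java.lang.foreign.*;
    import java.lang.invoke.MethodHandle;

    public class StrlenDemo {
        public static void main(String[] args) throws Throwable {
            Linker linker = Linker.nativeLinker();
            MemorySegment strlenAddr = linker.defaultLookup().find("strlen").orElseThrow();
            MethodHandle strlen = linker.downcallHandle(
                    strlenAddr,
                    FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
            try (Arena arena = Arena.ofConfined()) {
                MemorySegment cString = arena.allocateFrom("i2p"); // NUL-terminated C string
                long len = (long) strlen.invokeExact(cString);
                System.out.println(len); // 3
            }
        }
    }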
zzz
Testnet 0-hop eepget over SSU2, 25 ms latency one-way
zzz
Drop % KBytes/sec
zzz
------ ----------
zzz
0 1407
zzz
0.1 638
zzz
0.2 383
zzz
0.5 283
zzz
1 146
zzz
2 95
zzz
5 34
zlatinb
I've never tested drops above 1% but the speeds up to that level are reasonable
zzz
sure, drops above 1% with 3 hops would be hopeless
zzz
since there's eepget on top of it, it's not really a test of "native" transport-layer throughput
zzz
I'm going to do a quick test of SSU 1 with the same setup
zlatinb
looking at that last commit, it's a risky business not doing a full equals() after a hashcode collision
zlatinb
and since you're not putting the hc in any collection why not just use long?
zzz
sure, might be a bad idea
zzz
it's a hashcode of a 4-byte long, a byte, and an (optional) byte array or null
zlatinb
but are you using hc in a way that requires it to be an int?
zlatinb
intellij says no :)
zzz
no
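(A hedged sketch of zlatinb's suggestion: widen the collision-prone int hashCode to a 64-bit token, since the value is never used as a HashMap/HashSet key and is only compared directly. The class and field names are invented for illustration; even a 64-bit token is not a substitute for a full equals() check if a collision would be harmful.)

    import java.util.Arrays;

    // Illustrative only: a 4-byte value held in a long, a byte, and an
    // optional byte array, matching the fields described above.
    final class PeerStateKey {
        private final long id;
        private final byte type;
        private final byte[] data;   // may be null

        PeerStateKey(long id, byte type, byte[] data) {
            this.id = id;
            this.type = type;
            this.data = data;
        }

        // 64-bit token: far fewer accidental collisions than a 32-bit int,
        // but still only a fast pre-check, not proof of equality.
        long token() {
            long h = id * 0x9E3779B97F4A7C15L;            // golden-ratio mix
            h ^= (type & 0xFFL) * 0xC2B2AE3D27D4EB4FL;
            h ^= (data == null) ? 0 : (long) Arrays.hashCode(data) << 32;
            return h;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof PeerStateKey)) return false;
            PeerStateKey k = (PeerStateKey) o;
            return id == k.id && type == k.type && Arrays.equals(data, k.data);
        }

        @Override
        public int hashCode() {
            long t = token();
            return (int) (t ^ (t >>> 32));
        }
    }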
zzz
Testnet 0-hop eepget over SSU1, 25 ms latency one-way
zzz
Drop % KBytes/sec
zzz
------ ----------
zzz
0 2444
zzz
0.1 943
zzz
0.2 743
zzz
0.5 485
zzz
1 451
zzz
2 214
zzz
5 105
zlatinb
that's a bit unexpected
zzz
not really when you think about it
zzz
haven't shaken out all the ack issues, NACK processing, fast retx, etc.
zzz
the way it all interacts with the last couple years of SSU 1 changes is a bit fuzzy
zzz
I also think we need an immediate-ack-request flag, and maybe an ECN flag, we don't have either now
zlatinb
well, all that stuff was in the NORM specs :)
zzz
baby steps
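(Purely hypothetical sketch of the two flags discussed above; as zzz notes, neither exists today, and the bit positions and names here are invented for illustration, not taken from any SSU2 spec.)

    // Hypothetical flag bits in a per-packet flags byte.
    final class AckFlags {
        static final int FLAG_IMMEDIATE_ACK_REQUEST = 0x01; // "please ack now, don't batch"
        static final int FLAG_ECN_CE                = 0x02; // congestion-experienced mark

        static boolean wantsImmediateAck(int flags) {
            return (flags & FLAG_IMMEDIATE_ACK_REQUEST) != 0;
        }

        static boolean congestionExperienced(int flags) {
            return (flags & FLAG_ECN_CE) != 0;
        }
    }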
zzz
but the plan is to fix up and optimize the interactions between the SSU2 ack handling and the SSU1 congestion control
zzz
right now it's a little ad hoc and in particular NACKs aren't handled at all
zzz
and I think there's some stuff with how individual fragments are acked vs. the whole i2np msg
zzz
but simply an immediate-ack flag, or the receiver figuring out that the sender's window is probably full and acking immediately, would be a quick fix
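(A rough sketch of that receiver-side quick fix. The thresholds and field names are assumptions; real code would hook into the existing SSU2 peer state and delayed-ack timer.)

    // Ack immediately if the sender has probably stalled waiting for us,
    // otherwise let the normal delayed-ack timer batch acks together.
    final class AckPolicy {
        private static final long ACK_DELAY_MILLIS = 40;          // batched-ack delay (assumed)
        private static final int WINDOW_GUESS_BYTES = 64 * 1024;  // rough guess at sender cwnd (assumed)
        private int unackedBytes;
        private long oldestUnackedTime;

        /** Called for each received data fragment; returns true if we should ack now. */
        boolean onFragmentReceived(int fragmentBytes, long now) {
            if (unackedBytes == 0)
                oldestUnackedTime = now;
            unackedBytes += fragmentBytes;
            return unackedBytes >= WINDOW_GUESS_BYTES / 2
                || now - oldestUnackedTime >= ACK_DELAY_MILLIS;
        }

        void onAckSent() {
            unackedBytes = 0;
        }
    }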
zzz
need to go back and read the QUIC window stuff again, it seemed really bad last time I looked, but worth another pass
zzz
what you don't see from speed numbers is how much more efficient SSU 2 is
zzz
and do we even have an explicit goal to improve the speed over SSU 1?
zlatinb
improve no, but there shouldn't be deterioration
zlatinb
if SSU2 is more efficient, it should score higher on a zero-delay test
zlatinb
on my testnet the zero-delay is CPU-bound
zzz
I mean efficiency as in overhead/payload ratio
zzz
not cpu. and I didn't do any 0-delay tests
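(A toy illustration of "efficiency as overhead/payload ratio"; the byte counts are made-up placeholders, not measured SSU1 or SSU2 numbers.)

    // Compare per-packet overhead ratios for two hypothetical protocols.
    public class OverheadRatio {
        public static void main(String[] args) {
            int payload = 1000;   // application bytes per packet (assumed)
            int overheadA = 80;   // framing + MAC per packet, protocol A (assumed)
            int overheadB = 40;   // framing + MAC per packet, protocol B (assumed)
            System.out.printf("A: %.1f%% overhead%n", 100.0 * overheadA / (payload + overheadA));
            System.out.printf("B: %.1f%% overhead%n", 100.0 * overheadB / (payload + overheadB));
        }
    }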
zzz
agreed, shouldn't be deterioration, I'll add a goal if there isn't one