orignal
two questions
orignal
even if your dest is a server but you establish an outgoing connection, what's the reason not to bundle the LS?
orignal
if the LS comes after the SYN block, can I assume it's in the same msg?
orignal
I didn't get the part about the outgoing SYN
orignal
I create an outgoing streaming one when I get the remote LS
orignal
what's your limit for pending lookups? It's not in the params
dr|z3d
bundling LS adds overhead.
dr|z3d
which is why it's normally only enabled for multihomed servers.
dr|z3d
or maybe you mean bundling the LS on connection, not continually?
orignal
I bundle the LeaseSet whenever it's updated
orignal
and stop when I receive confirmation
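[Sketch: the bundling behaviour described above, assuming a simple version counter and an explicit confirmation callback; all names are illustrative, not taken from i2pd or the Java router.]

    #include <cstdint>

    // Illustrative only: bundle the local LeaseSet with outgoing messages
    // until the peer confirms the current version.
    struct LeaseSetBundler {
        std::uint64_t currentVersion = 0;    // bumped whenever the local LeaseSet changes
        std::uint64_t confirmedVersion = 0;  // last version the peer acknowledged

        void OnLeaseSetUpdated() { ++currentVersion; }

        // Attach the LeaseSet to an outgoing message only while unconfirmed.
        bool ShouldBundleLeaseSet() const { return confirmedVersion < currentVersion; }

        // Called on confirmation from the remote side; stop bundling afterwards.
        void OnConfirmation(std::uint64_t ackedVersion) {
            if (ackedVersion > confirmedVersion) confirmedVersion = ackedVersion;
        }
    };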
orignal
what? are you saying that a Java server doesn't send the LS to the client when it's updated?
orignal
Now I see why I keep disconnecting from here and never from Ilita
orignal
rebuild
orignal
LeaseSet is cheap
dr|z3d
no, I was thinking about multi-homing, where the LS is bundled.
dr|z3d
maybe I got that wrong, details are fuzzy.
orignal
if clients always need to re-request the server's leaseset, it's dumb
dr|z3d
true.
dr|z3d
every 10m should be sufficient.
orignal
tunnels might die earlier, etc.
orignal
we need clarification from zzz
dr|z3d
I say 10m; if you want to make sure you don't have stale leases (for example, if a multihome you're using goes down), then every 5m is probably fine.
orignal
I re-request only if it has expired
orignal
and expect to receive it with data
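[Sketch: the "re-request only if expired" rule, under the assumption that a cached LeaseSet stays usable while any of its leases is still valid; names are illustrative.]

    #include <chrono>
    #include <vector>

    // Illustrative only: keep using a cached remote LeaseSet while at least
    // one lease is still valid; trigger a lookup only once all have expired.
    struct Lease { std::chrono::steady_clock::time_point expiration; };

    struct CachedLeaseSet {
        std::vector<Lease> leases;

        bool IsExpired(std::chrono::steady_clock::time_point now) const {
            for (const auto& l : leases)
                if (l.expiration > now) return false;  // still have a usable lease
            return true;
        }
    };

    bool NeedLookup(const CachedLeaseSet* ls) {
        return ls == nullptr || ls->IsExpired(std::chrono::steady_clock::now());
    }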
zzz
yes we send LS when updated
zzz
those were just brainstorming examples of mine of things that are legal
zzz
drz is right, for the p2p example it might save a little bandwidth to not send the LS, at the cost of latency, but yes LS is cheap
zzz
correction, we do not have a limit on pending lookups, I was thinking of something else
orignal
if you don't have such a limit, you can be attacked this way
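[Sketch: one possible mitigation, assuming lookups are tracked per destination hash; the cap value and names are assumptions, not defaults from either router.]

    #include <cstddef>
    #include <string>
    #include <unordered_set>

    // Illustrative only: cap the number of outstanding lookups so incoming
    // SYNs from unknown destinations cannot create unbounded lookup state.
    class PendingLookups {
        static constexpr std::size_t kMaxPending = 100;  // assumed limit, not a real default
        std::unordered_set<std::string> pending_;        // keyed by destination hash

    public:
        // Returns false (no lookup started) once the cap is reached.
        bool TryStart(const std::string& destHash) {
            if (pending_.size() >= kMaxPending) return false;
            return pending_.insert(destHash).second;
        }
        void Complete(const std::string& destHash) { pending_.erase(destHash); }
    };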
zzz
yeah I'll think about it, but our streaming new connection limits provide a lot of mitigation
dr|z3d
welcome to #saltr, Over1
dr|z3d
we throttle lookups per client dest, zzz, perhaps that was what you were thinking about?
dr|z3d
or rather, per peer.
orignal
it's the opposite
orignal
you throttle what you receive
orignal
but in this scenario you need to throttle what you send
zzz
yeah it's per peer, not total
zzz
our streaming new conn throttle is pretty strict by default, so there's no way we're getting anywhere close to ddos levels of lookups, because the syn acks or resets will start getting dropped pretty quickly
hk
zzz: I've heard that the nature of i2p makes it resistant to DDoS (compared to tor at least). What mechanisms exactly enable that?
zzz
hk, the general philosophy is that everything has limits, so the mechanisms are max size queues, max latencies before dropping (anti-buffer-bloat), max size for all tables, max connections, max peers, etc., and to apply those mechanisms everywhere necessary
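[Sketch: one of the mechanisms listed above, a queue bounded both by size and by how long an entry may wait before it is dropped (anti-buffer-bloat); the limits and names are illustrative.]

    #include <chrono>
    #include <cstddef>
    #include <deque>
    #include <utility>

    // Illustrative only: a queue with a maximum size and a maximum time an
    // entry may wait before being dropped.
    template <typename Msg>
    class BoundedQueue {
        using Clock = std::chrono::steady_clock;
        struct Entry { Msg msg; Clock::time_point enqueued; };

        std::deque<Entry> q_;
        std::size_t maxSize_;
        std::chrono::milliseconds maxLatency_;

    public:
        BoundedQueue(std::size_t maxSize, std::chrono::milliseconds maxLatency)
            : maxSize_(maxSize), maxLatency_(maxLatency) {}

        bool Push(Msg m) {
            if (q_.size() >= maxSize_) return false;        // max size queue
            q_.push_back({std::move(m), Clock::now()});
            return true;
        }

        bool Pop(Msg& out) {
            const auto now = Clock::now();
            while (!q_.empty()) {
                if (now - q_.front().enqueued > maxLatency_) {  // waited too long: drop
                    q_.pop_front();
                    continue;
                }
                out = std::move(q_.front().msg);
                q_.pop_front();
                return true;
            }
            return false;
        }
    };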
hk
oohhh nice nice okay
zzz
I don't think that i2p's "nature" makes it ddos-resistant; in fact, it makes it more vulnerable in a lot of ways, because you often don't know who is generating the traffic
hk
right I guess we've seen that quite a lot lately
orignal
what's your limit for pending incoming connections?
zzz
at the streaming layer?
orignal
yes
orignal
for a streaming dest, say per port
zzz
it's not a pending limit, but a limit on new connections per minute and hour, both per-peer and total, and it's configurable
zzz
defaults are pretty low, to protect users running small sites. If you have a big site, you have to change the limits
orignal
I will implement this param too
zzz
we have 6 params
zzz
            Per Minute  Per Hour  Per Day
zzz
per client      30         80       200
zzz
total           50          0         0
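[Sketch: one way to hold the six defaults quoted above, with 0 treated as "no limit"; field names are illustrative, not the Java router's actual option names.]

    #include <cstdint>

    // Illustrative only: the six limits quoted above.
    struct ConnLimits {
        std::uint32_t perClientPerMinute = 30;
        std::uint32_t perClientPerHour   = 80;
        std::uint32_t perClientPerDay    = 200;
        std::uint32_t totalPerMinute     = 50;
        std::uint32_t totalPerHour       = 0;   // 0 = unlimited
        std::uint32_t totalPerDay        = 0;   // 0 = unlimited
    };

    // A new connection is allowed while the count in a window is under its limit.
    inline bool UnderLimit(std::uint32_t count, std::uint32_t limit) {
        return limit == 0 || count < limit;
    }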
orignal
do you have a limit for the number of NS messages per minute?
zzz
those look like the defaults for http server tunnels, but I think defaults are different for different tunnel types
orignal
NS messages can also eat up all the CPU quickly
orignal
x25519+elligator
zzz
looking...
zzz
we don't have an explicit limit but all garlic message handling goes on a job queue and we start taking various actions if the job queue falls behind
orignal
got it
orignal
implementing something similar
orignal
if the queue is long, drop new messages
zzz
our design was done by jrandom on 2004 computers and ElGamal, so 2024 computers with x25519 is probably 100x faster; anything can be overloaded in theory but I haven't gotten any reports in a long time
zzz
but I'm pretty sure all our asymmetric crypto for all protocols is done in threads + queues dedicated for that, more or less
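[Sketch: the two ideas above combined, asymmetric crypto handled by a dedicated worker thread with its own queue, and new work dropped once the queue falls behind; the threshold and names are assumptions.]

    #include <condition_variable>
    #include <cstddef>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>

    // Illustrative only: expensive DH / decryption work runs off the I/O thread.
    class CryptoWorker {
        static constexpr std::size_t kMaxQueued = 1000;  // assumed threshold
        std::queue<std::function<void()>> jobs_;
        std::mutex m_;
        std::condition_variable cv_;
        bool stop_ = false;
        std::thread worker_{[this] { Run(); }};

        void Run() {
            for (;;) {
                std::function<void()> job;
                {
                    std::unique_lock<std::mutex> lk(m_);
                    cv_.wait(lk, [this] { return stop_ || !jobs_.empty(); });
                    if (stop_ && jobs_.empty()) return;
                    job = std::move(jobs_.front());
                    jobs_.pop();
                }
                job();  // e.g. an X25519 handshake for an incoming NS message
            }
        }

    public:
        ~CryptoWorker() {
            { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
            cv_.notify_all();
            worker_.join();
        }

        // Returns false (message dropped) when the queue is already too long.
        bool Submit(std::function<void()> job) {
            {
                std::lock_guard<std::mutex> lk(m_);
                if (jobs_.size() >= kMaxQueued) return false;
                jobs_.push(std::move(job));
            }
            cv_.notify_one();
            return true;
        }
    };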
hk
dr|z3d: Have you considered getting a wikipedia article written for I2PSnark?
hk
you might get more users through articles like en.wikipedia.org/wiki/Comparison_of_BitTorrent_clients
dr|z3d
Never considered it, hk. Feel free to update the page at your leisure.
hk
understood :)
orignal
we saw that the thread was busy with x25519
orignal
for lookups
uop23ip
dr|z3d, I have the port open for both udp/tcp. Looking in /peers I see 0 inbound ssu2 and 100+ outbound. ntcp is ok with higher numbers in/out. In /info I see ssu2 with the assigned port. No errors in the logs. I deactivated inbound ipv6; ipv4 is open. Is something wrong with my config? Is 0 inbound ssu2 ok and normal?
dr|z3d
Should be normal, uop23ip, peers prefer NTCP when available.