orignal
I will check if there is a problem
orignal
<html><body>You are being <a href="http://git.idk.i2p/users/sign_in">redirected</a>.</body></html>
orignal
and this link doesn't respond
orignal
hanging forever
R4SAS
pulling i2p packages to my repo
orignal
eyedeekay the issue has been identified
orignal
git.idk.i2p sends giant HTTP header
orignal
and we didn't merge them properly
orignal
the fix is coming in i2pd
orignal
but please check why you do it
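[editor's note] The bug orignal describes — header data arriving split across several streaming packets and not being merged before parsing — amounts to not buffering until the blank line that terminates the header block. A minimal sketch of the needed reassembly, assuming a hypothetical `recv`-style callable standing in for the streaming API (names here are illustrative, not i2pd's):

```python
def read_header_block(recv, max_total=65536):
    """Accumulate bytes until the CRLFCRLF that ends the HTTP header
    block, even when it spans several reads/packets. `recv` is any
    callable returning the next chunk of bytes (b'' on EOF)."""
    buf = b""
    while b"\r\n\r\n" not in buf:
        chunk = recv()
        if not chunk:
            raise ConnectionError("stream closed before end of headers")
        buf += chunk
        if len(buf) > max_total:  # cap so a peer can't grow the buffer forever
            raise ValueError("header block exceeds limit")
    head, _sep, rest = buf.partition(b"\r\n\r\n")
    return head, rest  # header bytes, plus any body bytes already received

# Simulate headers arriving split across two "packets":
packets = iter([
    b"HTTP/1.1 200 OK\r\nContent-Security-Policy: base-uri 'self'; child-src",
    b" recaptcha.net ...\r\nContent-Length: 0\r\n\r\n",
])
head, rest = read_header_block(lambda: next(packets, b""))
```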
eyedeekay
I'll have a look, if there's anything I can do on my end I will make it happen
eyedeekay
Likely that enormous CSP header
zzz
how big?
R4SAS
Content-Security-Policy: base-uri 'self'; child-src google.com/recaptcha recaptcha.net content.googleapis.com content-compute.googleapis.com ...
R4SAS
all google stuff listed
zzz
ok, just saw it with firefox inspect
zzz
about 1053 chars FYI
R4SAS
on the issues page, with authentication?
zzz
here's the whole thing:
zzz
Content-Security-Policy
zzz
base-uri 'self'; child-src google.com/recaptcha recaptcha.net content.googleapis.com content-compute.googleapis.com content-cloudbilling.googleapis.com content-cloudresourcemanager.googleapis.com googletagmanager.com/ns.html i2pgit.org/admin i2pgit.org/assets i2pgit.org/-/speedscope/index.html i2pgit.org/-/sandbox/mermaid i2pgit.org/assets blob: data:; connect-src 'self' ws://i2pgit.org cdn.co…gleapis.com content-cloudresourcemanager.googleapis.com; img-src * data: blob: *.google-analytics.com *.googletagmanager.com; manifest-src 'self'; media-src 'self' data:; object-src 'none'; script-src 'self' 'unsafe-inline' 'unsafe-eval' recaptcha.net apis.google.com cdn.cookielaw.org *.onetrust.com cdn.bizible.com/scripts/bizible.js *.googletagmanager.com 'nonce-/R7rvSnQ6HY+U2F50/7dGw=='; style-src 'self' 'unsafe-inline'; worker-src 'self' 'unsafe-inline'
R4SAS
orignal is talking about the whole header
R4SAS
with Connection, Permissions, etc, etc
zzz
most of that is crap for the in-net site anyway
dr|z3d
there's probably a setting you can bump to increase the max header size.
zzz
setting for what?
dr|z3d
max size of header.
zzz
or else what? truncate?
dr|z3d
or else fail. at least in nginx.
zzz
there is no max size in the standard iirc, just best practices
R4SAS
dr|z3d: problem on our http proxy side
dr|z3d
that's as may be, but the webserver usually has a hard limit you need to increase for heavy headers.
zzz
the old school rfc that allowed headers to be split and glued back together is deprecated because it's such a security mess
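[editor's note] The "split and glued back together" rule zzz mentions is obs-fold from RFC 7230 §3.2.4: a header line beginning with a space or tab used to mean "continuation of the previous header field", and the spec now says to reject or normalize it. A hedged sketch of the rejection path (helper name is mine, not from either router):

```python
def split_header_lines(header_block: bytes) -> list[bytes]:
    """Split a raw header block into lines, rejecting deprecated
    line folding (obs-fold), which RFC 7230 deprecates because it
    enables request-smuggling tricks."""
    lines = header_block.split(b"\r\n")
    for line in lines[1:]:  # lines[0] is the request/status line
        if line[:1] in (b" ", b"\t"):
            raise ValueError("obs-fold (folded header line) rejected")
    return lines

ok = split_header_lines(b"GET / HTTP/1.1\r\nHost: example.i2p")

try:  # a folded continuation line should be refused outright
    split_header_lines(b"GET / HTTP/1.1\r\nX-A: 1\r\n\tfolded")
    folded_rejected = False
except ValueError:
    folded_rejected = True
```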
R4SAS
which can't handle such a long header
zzz
let me see what our max is
dr|z3d
R4SAS: if nginx, could be you need to check proxy_headers_hash_max_size
R4SAS
dr|z3d: huh?
R4SAS
we're connecting to git.idk.i2p
R4SAS
why are you talking about nginx?
dr|z3d
if nginx is handling the proxying..
dr|z3d
otherwise, nevermind.
dr|z3d
nginx can do many things, I thought you might be using nginx to reverse proxy.
R4SAS
idk what eyedeekay uses as reverse proxy
eyedeekay
It's nginx
dr|z3d
eyedeekay: if nginx, could be you need to check proxy_headers_hash_max_size :)
dr|z3d
there may be other related settings you can tweak, depends how you're handling things, but it sounds like you need to increase the header buffers one way or another.
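[editor's note] For reference, the nginx directives usually involved when headers overflow look like the following. The directive names are real nginx settings; the values are illustrative only, and which side needs raising depends on whether the oversized headers are in the request or the upstream response:

```nginx
# Request side: buffers for a large request line and request headers.
large_client_header_buffers 4 16k;

# Upstream (reverse-proxy) side: the first buffer must be able to hold
# the entire response header block from the backend.
proxy_buffer_size  16k;
proxy_buffers      8 16k;
```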
R4SAS
he can just cut the whole CSP header with the google stuff
zzz
ok, research done:
dr|z3d
yeah, the google stuff can be removed, but that's probably doing stats or something.
dr|z3d
(all the more reason for it to be dropped)
zzz
orignal, R4SAS - what we do:
zzz
Blinded message
zzz
if that's exceeded:
zzz
Blinded message
zzz
we also have a total size limit for all headers:
zzz
Blinded message
dr|z3d
except it's probably the _response_ headers that are too large, not the _request_ headers.
dr|z3d
minor thing.
zzz
correction:
zzz
Blinded message
zzz
we also return a 431 for that
zzz
we also have a check for the request line being too long
zzz
that's the same 8K limit
zzz
but the error return for that is:
zzz
Blinded message
dr|z3d
ok, so basically i2pd needs more generous buffers for the headers; java i2p does fine as is. that seems to be the issue, in summary :)
zzz
tl;dr your limits can be whatever is reasonable, but return an error code if exceeded
dr|z3d
and what zzz said.
zzz
"more generous" is not sufficient.
R4SAS
orignal added continuation reading
zzz
you have to make sure you don't let an attacker do line splitting and getting split headers through
zzz
or OOM or crash or buffer overflow
dr|z3d
Not seen anything on the response header front requiring more than 128KB, if that helps.
dr|z3d
I may have seen something pushing ~60KB headers recently.
zzz
you can't let somebody send you a 100 MB header line
dr|z3d
KB, not MB.
dr|z3d
lol @ 100MB.
zzz
or a million 100-byte header lines
zzz
or a 100 MB request line
dr|z3d
absolutely not, I agree.
orignal
zzz, it's around 3800 bytes
R4SAS
orignal: ping )))
orignal
just the header
orignal
huh?
dr|z3d
what I was driving at is that 128KB is probably about as generous as you need to be.
orignal
zzz, the first header was like 1800, but after the redirect the second one was 3800
orignal
it's fixed already
dr|z3d
3.8K is nothing special, anyways.
orignal
just never hit the situation where the header is bigger than one streaming packet
orignal
ofc, just never saw it before
orignal
zzz, good point about OOM
orignal
just 128K max. right?
dr|z3d
yeah, if you want a hard limit that should never be exhausted.
orignal
no problem for me
orignal
to implement this check
dr|z3d
unlikely you'll see anything near that, but as I mentioned, I've seen ~60K headers before.
R4SAS
do we really need a 60K header in i2p?
dr|z3d
not really needed anywhere, but that doesn't stop people :)
dr|z3d
java i2p strips a few unneeded headers which may or may not help keep things sane.
R4SAS
eyedeekay: don't know why, but related_branches always returns 404
dr|z3d
X-Powered-By and junk like that.
zzz
I don't remember where I got the 8K per-header limit but I believed that I researched chrome/firefox/microsoft
orignal
an attacker can make a single line very long
zzz
right, the main point is that crashes, request splitting/smuggling, and memory corruption may all be possible if it isn't done right, and the attack vector is super-easy
dr|z3d
that's 2017, but still, that suggests an 8K limit
orignal
why not just limit it to 8K?
dr|z3d
> One is likely to run into these limits when using cookies to track some attribute of a visitor that has no upper limit. For example, imagine an e-commerce application stored the contents of a visitor's shopping cart in a cookie. This would be problematic, as, when a user's shopping cart exceeds some certain amount, the Set-Cookie header containing the contents of the cart would exceed the HTTP response header size limit.
dr|z3d
pick a value, see if it causes issues; maybe 128K is too generous.
zzz
orignal, pick whatever limit you want, the important thing is to handle it correctly and return an error if exceeded
zzz
for each of three limits: request line length, header line length, total header size
orignal
why should I return an error rather than just close the socket?
zzz
that's the standard
zzz
also, possibly a 4th limit: total number of header lines
zzz
Blinded message
zzz
Blinded message
zzz
and it also would have made debugging the problem a lot easier :)
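[editor's note] The three-or-four limits zzz enumerates (request line length, per-header-line length, total header size, header line count) could be enforced along these lines. This is a sketch only: the constants and status codes are illustrative choices, not what either router actually uses; 431 Request Header Fields Too Large comes from RFC 6585, and 414 URI Too Long is the conventional code for an oversized request line.

```python
# Illustrative limits, not i2p's real values.
MAX_REQUEST_LINE = 8 * 1024
MAX_HEADER_LINE  = 8 * 1024
MAX_TOTAL_BYTES  = 64 * 1024
MAX_HEADER_LINES = 100

class HeaderLimitError(Exception):
    """Carries the HTTP status to send back before closing the socket."""
    def __init__(self, status: int, reason: str):
        super().__init__(reason)
        self.status = status

def check_limits(request_line: bytes, header_lines: list) -> None:
    if len(request_line) > MAX_REQUEST_LINE:
        raise HeaderLimitError(414, "request line too long")
    if len(header_lines) > MAX_HEADER_LINES:
        raise HeaderLimitError(431, "too many header lines")
    total = len(request_line)
    for line in header_lines:
        if len(line) > MAX_HEADER_LINE:
            raise HeaderLimitError(431, "header line too long")
        total += len(line)
    if total > MAX_TOTAL_BYTES:
        raise HeaderLimitError(431, "headers too large")

check_limits(b"GET / HTTP/1.1", [b"Host: git.idk.i2p"])  # within limits

try:  # a single oversized header line trips the per-line limit
    check_limits(b"GET / HTTP/1.1", [b"X-Big: " + b"a" * 10000])
    status = None
except HeaderLimitError as e:
    status = e.status
```

Returning a status before closing, as zzz notes, both follows the standard and makes debugging far easier than a silent disconnect.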
R4SAS
whoever wrote this manual is crazy
zzz
Author: idk
R4SAS
about the third part, with creating a tunnel for SSH
orignal
I guessed so ))
eyedeekay
it works, I guess SOCKS would work too, and http_proxy works for cloning via HTTP
R4SAS
eyedeekay: ~/.ssh/config
zzz
you said 3800 bytes but according to firefox the headers total 2109 bytes for git.idk.i2p/users/sign_in - but it depends where you look I guess because there's internal i2p headers
R4SAS
a smmaaaaaaaalll hint for you
eyedeekay
Sure
zzz
eyedeekay, I questioned doing SSH at the time instead of just HTTP, but I think it was the right call to separate SSH and not really even support/advertise HTTP
zzz
better to keep the traffic on different tunnels, for throttling, etc.
orignal
I saw it when I printed out what we received
orignal
the point is that it exceeds one streaming packet size
orignal
so it doesn't matter whether it's 2100 or 3800
orignal
the result is the same
zzz
sure
eyedeekay
I leave it enabled so that other instances inside I2P can import code from us using the http clone URL, otherwise SSH is the normal thing on i2pgit
zzz
no matter what's hammering the website or whatever rewriting shenanigans might be broken, SSH has been super solid
zzz
definitely was the right way to go
eyedeekay
Yeah, if I had it to do over again I might have fought harder for gitea; gitlab's ongoing issue has been keeping up with CSP, and in general its absolutely behemoth size
eyedeekay
But it's really good at a ton of other stuff and moving would be a mess so I'm just going to keep up with it
eyedeekay
gitlab's got this ongoing project to fix all the CSP issues it can have which makes everything a moving target
R4SAS
yes... same thing about gitlab's size and its CPU/MEM usage
R4SAS
that's why I moved to gitea
eyedeekay
Yeah it's quite big. gitea is much easier to manage for the most part, but a whole other migration and all that goes into that just isn't something I have time to oversee right now
dr|z3d
yeah, gitea is like butter compared to gitlab. much less taxing on the cpu as well.
zzz
I guess it was all the tor trac migration scripts that clinched it for gitlab
dr|z3d
and gitea has a handy mirror feature which gitlab appears to lack.
zzz
anyway, congrats to the i2pd team for chasing down a nasty bug :)
dr|z3d
chocolate potatoes all round!
dr|z3d
it's a delicacy in Russia, apparently.
R4SAS
heh