eche|on
last time I built a news file was jan 2023
eche|on
so lets see if it breaks now
zzz
eche|on, I believe eyedeekay fixed it yesterday, see above ^^^
eche|on
no scroll buffer here
eche|on
so too late
eche|on
and as I cannot read i2p mail, I do not know anything
eche|on
but let's build debian files
eche|on
(but I archived the files which were in place in news, easy to go back in case)
zzz
ok well you two need to figure out how to coordinate on this type of stuff
eche|on
as we both have less time and I am hard to reach... as you see, last built jan 2023, 9 months ago
zzz
so he just does it for you and doesn't tell you I guess
zzz
so I ask you both because I don't know what your agreement is and now you both do it
eche|on
usually I let him do it, but today I did it as you asked and did not verify the files already in place
zzz
ok
eche|on
2.3.0-8 is now available on files.i2p-projekt.de/update/bullseye
eche|on
and running locally here
eche|on
is there a point in running -8 on main routers?
zzz
do not recommend
zzz
still too unstable
zzz
but we're getting close
eche|on
ok, good
eche|on
it's hard to reach postman's services at all, any of them
eche|on
seems like email is affected, too
zzz
are you running a -x build?
zzz
eyedeekay reported everything from -1 to -4 was broken for reaching postman, fixed in -5 (I think)
eche|on
I did build the 2.3.0-1 for my routers, running since that day
eche|on
and it worked so-so the last weeks, but for the last 4 days, nothing
eche|on
and a strange issue: I get every mail twice in postman's inbox, with timestamps 30 min apart and different mail IDs ?!?
zzz
oh you didn't say that. -8 is 1000x better than -1
eche|on
sorry, 2.3.0.0-1
zzz
re: dup email, I reported to postman; he says it's your problem not his and he reported it to you
zzz
so he is awaiting your response
eche|on
yeah, but I am rather limited these days with time
eche|on
is anyone else getting double emails?
zzz
ok, it's been happening for months, so pretty annoying
zzz
somebody said that R4SAS has seen it also
zzz
please investigate
eche|on
Ok, will dig deeper after I get healthy again
zzz
thanks
eche|on
I did not change anything in the config, just upgraded to bookworm (which is somewhat strange on my desktop, too)
zzz
back to the can't-reach-postman bug, you're the first to report that it's a problem on the 2.3.0 release, not the dev build...
zzz
eyedeekay reported the 2.3.0 release was fine
zzz
so you can discuss it with him, I don't know any more
eche|on
got the email currently in webmail, will look later on.
eche|on
and got some idea about the double email now
eche|on
back to couch for getting healthy
zzz
ok, get well soon :)
dr|z3d
if you still have a dsa key for any of postman's services in your addressbook, purge them all.
dr|z3d
in theory not having dual leasesets should make access less problematic, but who knows what latent bugs are lurking when you remove your old keys and rebuild your tunnel.
dr|z3d
-1 is where subdbs were introduced iirc, eche|off should probably be running -8.
zzz
I don't know what 2.3.0.0-1 is, but I assumed the debian 2.3.0 release, not the 2.3.0-1 dev build
dr|z3d
I *think* it's the dev build where subdbs were introduced, but I could be wrong.
dr|z3d
but yeah, maybe it's different on debian.
dr|z3d
or I'm confusing 2.3.0-1 and 2.3.0.0-1 :)
zzz
tl;dr ech needs to talk to idk, which seems to be the answer for lots of problems these days
zzz
ping eyedeekay, I'm stuck
eyedeekay
Ack I am here sorry was fixing breakfast
eyedeekay
2.3.0 did not have subdbs which were the root of the reaching postman problem
eyedeekay
It was a lot less complicated set of changes
eyedeekay
2.3.0-1 was the dev build where subDbs were added and postman services were broken
eyedeekay
-5 fixed about half of the problems, then there was the subdb clobbering issue spotted yesterday, but to me it doesn't make a ton of sense that it would be causing/related to the doubled messages
zzz
ok eyedeekay I'm off on a new topic that could be a big one and I need to understand to make more review and MR progress
zzz
what was/is/will be your design intent:
zzz
can subdbs have RIs in them or not?
eyedeekay
As far as I can tell from every angle I've looked at it, no, it does not seem that they need them or get them naturally.
zzz
ok but from the code I don't see that clarity or intention from the beginning
zzz
not that long ago subdbs were reseeding and exploring... even today:
zzz
you have RI subdb tabs in the console; subdbs have ExpireRoutersJobs; they have kbuckets;
zzz
there's some convoluted changes for FloodfillPeerSelector creation/use;
zzz
I don't see a single comment or line of code saying 'RIs not allowed in subdbs', or catching that case explicitly;
eyedeekay
Much of that relates to the worst of the bugs you filed, which I didn't know how to catch yet
eyedeekay
But the reason for that is because when we were working on it I didn't know at first if we would need them, specifically for searching for leaseSets
zzz
so you've decided that all tunnel builds and all lookups/stores all select peers from the main db
eyedeekay
The answer became a clearer no after that internal consistency checking process from the email
zzz
right, so this was a journey, not so much a plan, and the current code shows that, and there's more cleanup required to have it all make sense
eyedeekay
Fair assessment
zzz
so there's no plan to add RIs-in-subdbs in some future release
eyedeekay
Not right now; related to the issue you filed yesterday, got some functions to rip out
zzz
I'm working on a big cleanup MR but I got stuck not knowing the answers I now have
zzz
it's a little messy and is 1200 lines that are mostly deletions, but it's going to get tangled up in whatever you're up to right now
zzz
so it probably won't make it to you in its current form
eyedeekay
Much appreciated, I already have to rebase some of the stuff I am testing so don't worry about it too much, if I need help merging them I will ask :)
zzz
manual merging = more risk, better avoided
eyedeekay
Ok
zzz
I believe further cleanup needs to await your multihome changes; you have a guess on when those may be coming?
eyedeekay
I will file the MR for it within the next 4 hours
zzz
woo, that's good news and much faster than I feared
eyedeekay
I got it mostly removed yesterday, the rest is reverts
zzz
ok, test test test :)
zzz
I'm actually sitting on like 6000 lines of diff in my workspace and I'm losing it a little
zzz
so as stuff gets fixed and merged hopefully I can regain control of my own thrashing
eyedeekay
Yeah I'll get the multihome and useSubDbs MRs filed today (separately) and that should help a bunch once they're merged
zzz
so in retrospect, you should have started with a clear design goal: "I need a place to store LSes that come down tunnels"
zzz
and then consider how best to accomplish that
eyedeekay
Yes that would have been better, I just didn't know enough to know that for sure when I started. That's how my research process ended up in the checkin history
eyedeekay
More or less, the 'journey' as you said
zzz
yeah, I get it. However the implementation of 'store LSes somewhere else' sounds like a few hundred lines of code
zzz
and putting "research" in the trunk of a 20 year old project had predictable results :(
zzz
but I see a path now, it will all be fine
eyedeekay
Yeah, what I checked in as -1 was ultimately only useful for getting an understanding of what was going on and should not have gone in
eyedeekay
I definitely see that now
eyedeekay
Like having persistence for subdbs, that existed to help me answer whether it was substantially affecting performance to rebuild them on reboot, and to inspect the contents so I could see what was in them. Bunch of stuff was like that. Not organized enough, should have taken much more time on it.
RN
git.idk.i2p/i2p-hackers/i2p.i2p/-/issues/464 ◀━━ for the jbigi - stack guard issue
RN
eche|on, I also get the double emails
eyedeekay
All right, both MRs are up, the one that closes 461 is on top of the one that closes 409
RN
re: issues/464 should I upload a copy of my i2pupdate.zip for 2.3.0-8?
RN
I don't keep copies when I build a new one...
eyedeekay
Probably couldn't hurt, I have the disk space if you want to
RN
" File is too big (10.85MiB). Max filesize: 10MiB"
RN
lol-oops
eyedeekay
lol I'll find the setting and turn it up to 11
RN
hehe
RN
trying to clean it up a little and getting 504 - someone edited at the same time
eyedeekay
OK I increased the limit to 15mb
eyedeekay
It might make me restart
RN
not letting me make the edits, so if you need to restart go ahead and I'll re do the edits/additions
RN
should I try the upload now, or wait?
eyedeekay
It takes about 5 minutes for gitlab to restart, if anybody's using it it's pretty disruptive to do, let me see whether I'm going to also version-upgrade when I do this...
eyedeekay
I gotta do some checking before I restart gitlab manually, need a few minutes
RN
ok... still making minor cleanups... please let me know when you're gonna pull the trigger
dr|z3d
5m.. lol
dr|z3d
I can rebuild gitea *and* restart it in less time.
dr|z3d
a restart alone takes ~15s
RN
he's doing research to decide if he wants to do the upgrade.
RN
top of the $DAYPART to ya Dr
dr|z3d
the only thing you have to be aware of with gitea is its default tendency to archive stuff on demand and then keep those archives around unless you configure the resident cron job to clean up. can fill up a disk pretty quick.
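(For reference, the cleanup dr|z3d mentions is gitea's built-in `archive_cleanup` cron task, configured in `app.ini`; a minimal sketch, with illustrative values:)

```ini
; app.ini -- periodically delete the zip/tar.gz repo archives that
; gitea generates on demand and otherwise keeps on disk
[cron.archive_cleanup]
ENABLED = true
; run once a day, removing archives older than 24 hours
SCHEDULE = @every 24h
OLDER_THAN = 24h
```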
dr|z3d
salutations, RN
RN
eyedeekay, the size limit change did something... but now I get 413 request entity too large... so yeah a restart is probably needed.
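(A 413 after raising GitLab's own attachment limit usually means a reverse proxy in front is still capping the request body; assuming this instance is an Omnibus install with the bundled nginx, the cap would be raised in `/etc/gitlab/gitlab.rb`:)

```ruby
# /etc/gitlab/gitlab.rb -- raise nginx's request-body limit to match
# the attachment size set in the GitLab admin UI,
# then run `gitlab-ctl reconfigure` to apply
nginx['client_max_body_size'] = '20m'
```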
dr|z3d
upload it to cake, RN. I think cake has a 20MB limit.
dr|z3d
though it may be less.
dr|z3d
nope, 20MB it is.
RN
I was considering that, or putting the file aside on my site, but since he is making this change to git I think I'll wait
RN
btw, the router in question for this issue is not mine, so please no grief about reverting from plus. :)
dr|z3d
no grief was forthcoming, sorry to disappoint.
RN
Σ:Đ
eyedeekay
cake might be quickest. I have to freeze versions before I restart, it's annoying and nerve-wracking
RN
ok... cake it is...
RN
eyedeekay, when you got a sec... PM