Good day to all! Over the last 30 minutes or so, I’ve been having issues loading beehaw.org. Sometimes the CSS is missing and the page layout is broken; at other times there’s a server-side NGINX error.
Just wanted to make the admins aware this is happening. There are NGINX settings that can be adjusted to give it more worker connections if it is hitting a worker limit.
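For anyone curious, this is roughly the sort of tuning I mean — a sketch only, with illustrative values; the actual limits have to be sized for Beehaw’s hardware and traffic:

```nginx
# /etc/nginx/nginx.conf — illustrative values only, not a recommendation
worker_processes auto;          # one worker process per CPU core
worker_rlimit_nofile 65535;     # raise the per-worker open-file limit

events {
    worker_connections 8192;    # connections each worker may hold open
    multi_accept on;            # accept multiple new connections per event
}
```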
It’s been 8 days and this is still ongoing across multiple instances, and I do not see any open issue about ‘nginx 50x’ errors on the Lemmy project on GitHub. See my public cry: https://lemmy.ml/post/1453121
Yes, Beehaw is struggling with uptime. From talking with the admins, this really isn’t an nginx issue. It’s more that the Lemmy code itself is immature, with memory leaks and SQL performance issues, and those issues are becoming more disruptive as the usage explodes.
If you’ve got development skills, helping out the Lemmy project on Github is probably the best way to help. If not, then just press F5 with the rest of us when the site goes down for a bit.
I have been; I’m RocketDerp on GitHub. I’ve been watching for weeks as none of the people running the major sites opened an issue on observable problems, so I have done so myself:
Major data integrity issues, ignored since the issue was opened June 14: https://github.com/LemmyNet/lemmy/issues/3101
Obvious user-interface signs of the same problem reported June 19: https://github.com/LemmyNet/lemmy/issues/3203
The problems were going on for weeks before I created these issues, and they are still being ignored. They weren’t even mentioned in the 0.18 announcement today.
I’m not an official spokescritter, but I can assure you the Beehaw admins aren’t ignoring the issues. Ultimately, though, it’s going to come down to someone getting PRs into the code. I hope someone gets some performance-focused PRs in soon.
They are not informing end-users of the problem; they are leaving people like me wasting their time calling it out. Denial isn’t just a river in Egypt. Lemmy isn’t scaling, it’s falling flat on its face, and the federation protocol’s pattern of sending one single like per HTTPS transaction is causing servers to overload peer servers.
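To make that concrete: in ActivityPub terms, every single upvote is delivered as its own HTTPS POST to each peer instance’s inbox, carrying a small Like activity like the sketch below. The URLs and `id` here are made up for illustration:

```json
{
  "@context": "https://www.w3.org/ns/activitystreams",
  "id": "https://lemmy.ml/activities/like/abc123",
  "type": "Like",
  "actor": "https://lemmy.ml/u/alice",
  "object": "https://beehaw.org/post/12345"
}
```

Multiply one of these per vote, per subscribed peer instance, and the request volume between servers adds up fast.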
What are you asking for? I’m not smart enough to know what is going on here, but I can relay the request to someone who is, if you’re willing to dumb it down for me and ask nicely.
Right out of the Lemmy documentation for servers:
Log them to a file and dump them somewhere public, like a GitHub repository. What is going on in these logs when 500 errors are happening?
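As a starting point, something like this would pull the 5xx errors out of an nginx access log and bucket them by minute — a sketch only, using made-up sample lines in place of the real log (usually `/var/log/nginx/access.log`), since I obviously can’t see the server:

```shell
# Sample lines standing in for a real access log in the default "combined"
# format, where field $9 is the HTTP status and $4 the timestamp.
cat > sample_access.log <<'EOF'
1.2.3.4 - - [27/Jun/2023:10:15:01 +0000] "GET / HTTP/1.1" 200 1234 "-" "curl"
1.2.3.4 - - [27/Jun/2023:10:15:02 +0000] "POST /inbox HTTP/1.1" 500 0 "-" "lemmy"
1.2.3.4 - - [27/Jun/2023:10:15:03 +0000] "GET /api/v3/site HTTP/1.1" 502 0 "-" "lemmy-ui"
EOF

# Count 5xx responses per minute (here: 2 in minute 27/Jun/2023:10:15).
awk '$9 ~ /^5[0-9][0-9]$/ {print substr($4, 2, 17)}' sample_access.log \
  | sort | uniq -c | sort -rn
```

Run against the real log file, a spike in one minute is a good place to start correlating with the error log and container logs.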
Thanks for the suggestions. We are aware of how to review system logs, and we are working to solve the issues. Right now there are a lot of moving parts, some of which we control and are responsible for, but a lot that we do not.
As you know, an NGINX 500 error is a server-side issue, not a client-side (your) one. For our stack, that could be a problem anywhere along the path: Varnish, nginx, firewall rules, security/HIDS, host networking, Docker networking, one or more services in the six containers, or the Docker daemon itself.
The issues are being addressed as we are able to troubleshoot them, prove the cause, and verify a solution.
Do you consider 0.17.4 a “stable” release of Lemmy that is proven and production ready, or more like an experimental project under active development?
I do not grasp why no GitHub issues are being opened to openly discuss these problems with the Lemmy platform, which I have seen on many instances.
sent this along
Thank you. It’s what’s going on inside of lemmy.ml that concerns me the most, and I just don’t grasp why the people running that server aren’t opening issues about the precise logged errors on their server, so that newcomers to the project have an idea what is happening.
Because not every issue we’re experiencing, even the 500s, is a result of Lemmy or their code. There is no reason to share that with them.
Then what are they, when nginx is failing to talk to the NodeJS app? I also consider this more than code, since they are also giving recommendations for performance tuning various components.
I strongly suspect that federation activity is causing the 500 and other errors, due to how it queues (swarms) requests to peer servers. It isn’t just the lemmy-ui webapp and end-users.
If you aren’t aware, lemmy.ml has been down for the past 45 minutes, and that is likely causing your lemmy_server code to back up with all kinds of problems.
I’m actually working on these issues 10+ hours a day, for the past two weeks.