I’m aware that some of you have been getting errors loading this instance. This was caused by a configuration setting that needed to be adjusted, which has since been done.
Do be patient if we run into other issues; I’ll be continuously working on the back-end with others to improve your overall experience.
Still a lot of capacity available!
edit: For those interested in the configuration changes: I was using the default Lemmy nginx configuration, and the worker_connections value needed to be raised.
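For context, the directive lives in the events block of the nginx config. A minimal sketch of the change, with the number itself purely illustrative (tune it to your own traffic):

```nginx
events {
    # the stock value was too low for the request volume this instance sees;
    # 16384 here is only an example, size it to your own load
    worker_connections 16384;
}

# raising worker_connections may also require raising the per-worker open
# file descriptor limit, since each connection needs at least one descriptor
worker_rlimit_nofile 32768;
```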
Shout out to @ruud@lemmy.world over at lemmy.world for helping me out.
Funny enough I’d say Shit Doesn’t Just Work - it takes your hard work. Thanks for helping fill the Reddit void @TheDude@sh.itjust.works
Just out of curiosity, would it help if, when posting images, we used services such as Imgur (or alternatives)? I’m assuming that if there are storage issues, those types of posts are the biggest culprit.
Thank you for hosting this server @TheDude :)
Yes, 100%. Using an external image hosting service would reduce how much storage is being consumed.
Would be cool if there was a plugin or something to link with imgur, etc.
Have you considered disabling image upload?
Yes, this is an option; however, lemmy-ui, which is responsible for displaying this page, would still show the upload option. It could be manually removed, and that might be something I look into more at a later time.
As someone who is still hosting a forum, I would not suggest this. I have also tried to reduce the footprint of my forum by using external image hosting services, and it has always ended up badly (images get lost as a hosting service changes policy and maybe disables direct linking, or just closes its doors, whatever).
This is one of the reasons why I’m not tempted to open up a Lemmy server, even though my hosting plan allows 3 subdomains. Used space will rise quickly, even if it’s just images… if videos are allowed as well, then all hell will break loose, even if they are processed and reduced in quality and resolution server-side. I’ve seen it happen before; it’s a nightmare to revert things afterwards (users complain), not to mention you can’t revert the damage: that space is taken and that’s that.
My estimate is that, if only images are allowed and images are processed in webp, it’ll take about a week for a quite busy instance to reach the 1GB mark… probably a lot faster if it’s an NSFW instance (a few days). Think about this from a migration perspective after 1 year - it will be a nightmare.
Congratulations, you have now beaten Reddit itself for uptime, lmfao.
Thank you for hosting this and keeping us informed!
Are things scaling well then? Is it still alright to recommend new people come here or would you rather hold off for a bit now?
I figured out the storage issue last night. The instance is only at about 20% utilization, so we should be good to take on a good amount more. We’ll probably need to do some more tweaking as we grow, but for now it’s looking pretty good!
In Montreal, do we need to send you some hard drives?
Wow. 20% usage at 2.4k users is pretty damn amazing.
It’s not so much about how many registered users as it is about how many online users. The online user count is what adds fuel to the fire!
Oh wow so it’s really the transactions per minute, not so much the baseline set of data?
Yeah, basically the problem is number of calls (requests) per minute. If it’s not too many, everything works fine, if it overloads, you get 500 errors 🤷.
Including calls to federated communities, yes?
I’m not sure about that, but most probably yes, since if you post on federated communities, your username pops up as someone@notonthisserver.com and the post is saved on your instance (from what I know).
Good to know. Keep up the good work, you’re the man.
Think long term. Don’t take too many new users or you’ll end up with loads of new content (mainly images). Storage is not expensive nowadays, but you can’t keep adding new disks to the arrays indefinitely.
Can’t we just link to images hosted elsewhere? Like a certain site I once knew
We can, but then you might lose history. This is a long-known problem on forums: trying to cut down on disk space by linking to image hosts. Sometimes the images survive; most of the time they don’t. If that doesn’t bother most of the user base, fine, but certain communities would like to keep a history of the media, since they sometimes share valuable information, like schematics or art.
I guess it’s a balance. I remember the imageshack purge but I also don’t want to overload the instance.
Photobucket was next after that, not to mention countless others. Imgur has been pretty consistent over the years, but after 2 purges and hundreds of lost images (schematics in most cases) on my forum, I’m scared to death of trusting another image hosting company.
Basically, we still allow externally hosted images on the forum, but only for temp things, like buy/sell, stuff like that. Everything else is attached on the forum.
Imgur is deleting all images that were uploaded without being signed into an account.
If anyone starts getting any weird errors, please do let me know, but everything still seems to be running smoothly from my side.
I tried to create a new community a couple of times a bit ago and it hung. I’ll try again tomorrow.
That’s not normal. Let me know if it happens again
Yeah, still hanging for me (even on a different computer) :/ Looking at the dev tools, it doesn’t seem to do a POST or anything. The only thing it does is load the SVG for the ‘thinking’ icon on the button.
Yeah, it hangs sometimes. Log out, close the tab, clean cache, then log in back again, should work 👍.
Thanks for addressing it so quickly!
Thanks for the update, I really like the transparency.
Thanks for the update! Any plans for a separate channel outside of sh.itjust.works like Discord or Mastodon, just in case to give people a heads up if the server goes down or is in maintenance?
I like the idea of a status account on Mastodon. That’s probably a little more accessible than Discord.
Matrix would also be a viable alternative to Discord.
Great to hear, and thanks for the update!
Would love to hear some more details on the misconfiguration if you want to share @TheDude@sh.itjust.works
Just updated the thread post to include more information for your curious, beautiful mind.
Not sure if you already did this, or if the default config does it, but if you only have one backend in an nginx reverse proxy, it might also make sense to configure the max_fails and possibly the fail_timeout options, so nginx won’t consider your backend down for a few seconds every time it receives a TCP error connecting to that (single) backend. max_fails=0 in the named upstream section (see http://nginx.org/en/docs/http/ngx_http_upstream_module.html#upstream) is what you want here.
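Roughly what that looks like, with the address/port just a placeholder for wherever the Lemmy backend actually listens:

```nginx
upstream lemmy-backend {
    # with a single backend there is nothing to fail over to, so don't let
    # transient connection errors mark it unavailable; max_fails=0 disables
    # the failure accounting entirely
    server 127.0.0.1:8536 max_fails=0;
}

server {
    # rest of the server block (listen, TLS, headers) omitted for brevity
    location / {
        proxy_pass http://lemmy-backend;
    }
}
```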
And through it all, the Dude abides.
Thanks for hosting, thanks for the expertise, thanks for the response time!
I joined for the server URL, and definitely stayed once I saw his username. Long live the Dude!
Thanks dude. The performance here is really excellent in my opinion. Had a couple of page errors but nothing that couldn’t be fixed by reloading.
Thanks @TheDude! Scaling systems like this is always a challenge and you really get to learn the performance quirks of the code. Thanks for all your work.
@TheDude@sh.itjust.works does Lemmy support a distributed configuration with multiple database and app servers, or are you limited to a single instance of everything?
The officially supported deployments are single-instance based; that being said, things are already broken up into separate Docker containers, so it should be pretty easy to do. I would need to do some testing beforehand. If this instance continues growing this way I’ll need to look into scaling horizontally instead of vertically.
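If it ever comes to that, the rough shape (purely a sketch; the container names are made up and this isn’t an officially documented setup) would be several identical backend containers behind the same nginx:

```nginx
# hypothetical horizontally-scaled layout: multiple identical Lemmy backend
# containers behind one reverse proxy, sharing the same database and image store
upstream lemmy-backend {
    server lemmy-app-1:8536;
    server lemmy-app-2:8536;
    server lemmy-app-3:8536;
}
```

The database and image store would still need their own scaling story, which is the part that needs testing first.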
If this instance continues growing this way I’ll need to look into scaling horizontally instead of vertically
Could you elaborate what this means?
It’s better to have more servers as opposed to one ultra-powerful server, because the ultra-powerful server tends to be more expensive than an equivalent-strength collection of weaker servers. Also, you pay for all of that power even during times you don’t need it, whereas you can take down or add more of the weaker servers as necessary.
Ah, that makes sense. Thank you for sharing.
Check out CockroachDB, a distributed SQL database compatible with PostgreSQL clients.
Thanks for the info! Keep up the great work man.