A few weeks ago Lemmy was buggy on desktop and there were no good mobile clients out there; now the site is pretty stable and fast on PC, and there are some pretty good iOS/Android clients too. Thanks to all the people who made this possible!

  • SneakyWaffles@vlemmy.net

    You sound like an old script kiddie who says they’re a hacker cause they ran a script from a forum. If it wasn’t obvious, I’m talking about actual web architecture. Not hobby junk. Managing to stand up a tiny virtual instance for a few people does not mean that you understand anything.

    As I said, this is basic architecture shit. Like, the kind of basic an intern would understand.

    talking about “load balance” as a guarantee of uptime is the same as justifying using Mongo because it is web scale

    ??? Are you unironically implying that a site with a backend that has multiple servers stood up to spread the load won’t have tremendously better capacity, redundancy, and as a result better uptime than a single hobby PC in your living room or whatever you have set up?
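
    Rough sketch of what I mean by “spread the load” - nothing to do with Lemmy’s actual code, just a toy round-robin with made-up addresses:

    ```python
    # Toy round-robin balancer, purely illustrative; the backend addresses
    # are invented for the example.
    from itertools import cycle

    backends = cycle([
        "10.0.0.1:8080",
        "10.0.0.2:8080",
        "10.0.0.3:8080",
    ])

    def pick_backend() -> str:
        # Each request goes to the next server in rotation, so no single box
        # absorbs all the traffic and losing one doesn't take the site down.
        return next(backends)

    for request_id in range(6):
        print(f"request {request_id} -> {pick_backend()}")
    ```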

    • rglullis@communick.news

      Can you please stop with the unnecessary snark and this silly attempt at dick-measuring? Are you upset at something?

      Are you unironically implying that a site with a backend that has multiple servers stood up to spread the load won’t have tremendously better capacity, redundancy…

      No. I am saying that the majority of websites out there don’t need to pay the costs or worry about this.

      Good engineering is about understanding trade-offs. We can talk all day about the different strategies for getting 4, 5 or 6 nines of availability, but all of that is pointless if the conversation is not anchored in how much it will cost to implement and operate such a solution.
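
      To put rough numbers on that (just the standard downtime-budget arithmetic, nothing Lemmy-specific):

      ```python
      # How much downtime each "nines" level actually tolerates in a year.
      MINUTES_PER_YEAR = 365.25 * 24 * 60

      for nines in (4, 5, 6):
          availability = 1 - 10 ** -nines            # e.g. 4 nines = 99.99%
          downtime_minutes = MINUTES_PER_YEAR * (1 - availability)
          print(f"{availability:.4%} uptime -> ~{downtime_minutes:.1f} minutes of downtime per year")
      ```

      Every extra nine cuts the allowed downtime by 10x; the effort and cost to actually hit it only go up.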

      Lemmy - like all other social media software - does not need that. There is nothing critical about it. No one dies if the server goes offline for a couple of minutes a month. No business will stop making money if we take the database down to do a migration instead of using blue-green deployments. Even the busiest instances are not seeing enough load to warrant more servers, and they are able to scale by simply (1) fine-tuning the database (which is the real bottleneck) and (2) launching more processes.
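
      For what “launching more processes” looks like in practice, here is a generic sketch - the binary name and the --port flag are placeholders, not Lemmy’s real CLI:

      ```python
      # Hypothetical single-box scale-up: run several copies of the same backend
      # on consecutive ports and let the reverse proxy spread requests over them.
      # "./backend" and "--port" are stand-ins, not Lemmy's actual interface.
      import subprocess

      BASE_PORT = 8000
      WORKERS = 4

      processes = [
          subprocess.Popen(["./backend", "--port", str(BASE_PORT + i)])
          for i in range(WORKERS)
      ]

      for proc in processes:
          proc.wait()
      ```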

      Anyone who criticizes Lemmy because “it cannot scale out” is either talking out of their ass or a bad engineer. Possibly both.