I just saw some posts from beehaw.org and lemmy.world; looks like we’re back in business!

  • Teali0@kbin.social · 1 year ago

    I mean, seriously? This dude’s project has had hardly any downtime (as in fully inaccessible downtime) in the last few days, during a massive migration. How impressive is that? He found a solution with Cloudflare where, sure, things were a little slow to load and didn’t federate, but I never found myself unable to access kbin. On top of that, he’s communicating clearly and often. I hope this succeeds. @ernest has absolutely earned it.

    • Sausage@kbin.social · 1 year ago

      There have been a few times I’ve been unable to log in, hit unresponsive pages, etc., but considering how recently kbin.social was created, the massive influx of users, and the fact that @ernest is managing all this himself, it’s an absolutely phenomenal job.

      • CoderKat@kbin.social · 1 year ago

        I haven’t looked into what the backend architecture is like, but I’ve seen comments suggesting it may be a single physical server? If so, some short periods of downtime are unavoidable. I do high-availability backend dev, and it’s no easy task to achieve near-perfect uptime. Distributing servers across multiple locations is essential for that, but it generally requires careful design (see the sketch below for the basic idea). Databases also get more complicated once they’re distributed (but I swear by them).
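
        To make “careful design” a bit more concrete, here’s a minimal sketch of client-side failover across servers in multiple locations: probe each replica’s health endpoint and route to the first one that responds. The replica URLs and the /health path are made up for illustration; this is not kbin’s actual setup.

        ```python
        # Minimal multi-location failover sketch (illustrative only).
        # The replica URLs and /health endpoint below are hypothetical.
        import urllib.request
        import urllib.error

        REPLICAS = [
            "https://eu.example-instance.social/health",
            "https://us.example-instance.social/health",
        ]

        def first_healthy(replicas, timeout=2.0):
            """Return the first replica whose health check succeeds, or None."""
            for url in replicas:
                try:
                    with urllib.request.urlopen(url, timeout=timeout) as resp:
                        if resp.status == 200:
                            return url
                except (urllib.error.URLError, TimeoutError):
                    continue  # this location is down or slow; try the next one
            return None

        if __name__ == "__main__":
            target = first_healthy(REPLICAS)
            print(f"routing traffic to: {target or 'no healthy replica'}")
        ```

        In a real deployment you’d usually push this logic into a load balancer or DNS failover rather than client code, and that’s where the “careful design” comes in: health-check intervals, failover thresholds, and keeping the replicas’ data in sync all have to be tuned together.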