Both Lemmy and Mbin have a shitty way of treating authors of content that is censored by a moderator.

Lemmy: if your post is removed from a community timeline, you still have the content. In fact, your logged-in profile looks no different, as if the message were still there. It’s quite similar to shadow banning. Slightly better, though, because if you pay attention or dig around, you can at least discover that you were censored. But it’s shitty nonetheless that you get no notification of the censorship.

Mbin: if your post is removed, you are subjected to data loss. I just wrote a high-effort post to europe@feddit.org and it was censored for not being “news”. There is no rule that your post must be news, only a passing mention of news in the community topic. In fact they delete posts that are not news, despite having no rule along those lines. So my article is lost to this heavy-handed moderation style. Mbin authors are not deceived about the status of their post like on Lemmy, but they suffer data loss: they do not get a copy of what they wrote, so they cannot recover it and post it elsewhere.

It’s really disgusting that a moderator’s trigger-happy delete button causes data loss for someone else. I probably spent 30 minutes writing the post, only to have that effort thrown away by a couple of clicks. Data loss is obviously a significant software defect.
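The design difference between the two platforms described above boils down to soft deletion versus hard deletion. A minimal sketch of the principle, in Python (all class and field names here are hypothetical illustrations, not Lemmy’s or Mbin’s actual schema):

```python
# Sketch of the design principle at issue: moderator "removal" should be a
# soft delete (a visibility flag) rather than destruction of the record.
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    body: str
    removed: bool = False  # soft-delete flag set by moderators


class Community:
    def __init__(self) -> None:
        self.posts: list[Post] = []

    def submit(self, author: str, body: str) -> Post:
        post = Post(author, body)
        self.posts.append(post)
        return post

    def moderator_remove(self, post: Post) -> None:
        # A hard delete (self.posts.remove(post)) would destroy the author's
        # only copy. A soft delete hides the post but keeps the data:
        post.removed = True

    def timeline(self) -> list[Post]:
        # Readers no longer see removed posts...
        return [p for p in self.posts if not p.removed]

    def author_view(self, author: str) -> list[Post]:
        # ...but the author can still retrieve the text and repost elsewhere.
        return [p for p in self.posts if p.author == author]
```

With this approach the moderator still fully controls the community timeline, but a removal can never destroy the author’s only copy of their own writing.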

    • ciferecaNinjo@fedia.io (OP) · 3 months ago

      > Who cares?

      Anyone who values their own time and suffers from data loss cares about data loss, obviously.

      > This is a serious question.

      Bizarre.

      > Anything that is important to you should be backed up and/or archived. Relying on a third party social media app is folly.

      This is a bug report on faulty software. If you have a clever workaround for the bug, specifics would be welcome. A bug report is not the place for general life coaching or personal advice. If there is an Emacs mode that stores posts locally, copies them into a Lemmy or Mbin community, and keeps a synchronised history of the two versions, feel free to share the details. But note that even such a tool would still just be a workaround for the software defect at hand.
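In the absence of such a tool, a minimal sketch of the workaround in Python: always write a timestamped local copy of the draft before submitting it anywhere. The archive directory and filename scheme below are arbitrary choices of mine, and the submission step is deliberately omitted because it is platform-specific:

```python
# Workaround sketch only: keep a local, timestamped copy of every post
# before submitting it, so a moderator's delete cannot destroy your only
# copy. The directory layout and filenames are arbitrary choices.
import datetime
import pathlib


def save_draft(title: str, body: str, archive_dir: str = "post-archive") -> pathlib.Path:
    """Write the draft to a local archive and return the file path."""
    directory = pathlib.Path(archive_dir)
    directory.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    path = directory / f"{stamp}.md"
    path.write_text(f"# {title}\n\n{body}\n", encoding="utf-8")
    return path

# Submission itself is platform- and version-specific and omitted here; the
# point is simply that save_draft() runs first, so a local copy always exists.
```

As noted above, this is a mitigation on the user’s side, not a fix for the defect itself.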

        • ciferecaNinjo@fedia.io (OP) · 3 months ago

          Exactly. You’ve made my point for me. This is precisely why this defect is a defect. The user’s view should be separate and disjoint from the timeline; Lemmy proves the wisdom of that philosophy. But again, it’s a failure of software design to create a fragile system with the expectation that human users will manually compensate for a lack of availability and integrity. You were inadvertently attempting, once again, to blame the user (and victim) for poor software design.

          It’s a shame that the way kids are now taught to produce software has lost sight of good design principles. That it’s okay to write software that suffers from data loss because someone should have another copy anyway (without realising that that other copy is also subject to failure).

          • ֆᎮ⊰◜◟⋎◞◝⊱ֆᎮ@lemmy.dbzer0.com · 3 months ago

            How have I made your point at all?

            You’re a bit incoherent with what you’re talking about. This has nothing to do with software design or anything else along those lines. This is a simple thing. If your data is valuable you secure it yourself.

            Thinking that a federated service is going to have a uniform or homogenous approach to things is folly on your end and a failure of understanding what the technology is.

            • ciferecaNinjo@fedia.io (OP) · 3 months ago

              > How have I made your point at all?

              You have acknowledged the importance of redundancy. It’s a good start, because the defect at hand is software that leaves the author’s copy as a single point of failure.

              > You’re a bit incoherent with what you’re talking about.

              I suppose I assumed I was talking to someone with a bit of engineering history. It’s becoming clear that you don’t grasp software design. You’ve apparently not had any formal training in engineering, and likely (at best) you’ve just picked up how to write a bit of code along the way. Software engineering is so much more than that. You are really missing the big picture.

              > This has nothing to do with software design or anything else along those lines.

              What an absurd claim to make. Of course it does. When software fails to protect the data it’s entrusted with, it’s broken. Either the design is broken, or the implementation is broken (the design, in the case at hand). Data integrity is paramount to infosec and critical to the duty of an application. Integrity is basically infosec 101; if you ever enter an infosec program, it’s one of the very first concepts you’ll be taught. Then later on you might be taught that good software is built with security integrated into the design from the early stages, as opposed to bolted on as an afterthought. Another concept you’ve not yet encountered is the principle of defence in depth, which basically means it’s a bad idea to rely on a single mechanism. E.g. if you rely on the user to make a backup copy but then fail to protect the primary copy, you’ve failed to create defence in depth, which requires having BOTH a primary copy AND a secondary copy.

              > This is a simple thing. If your data is valuable you secure it yourself.

              That has nothing to do with the software defect being reported. While it is indeed a good idea to create backups, this does not excuse or obviate a poor software design that entails data loss and ultimately triggers a need for data recovery. When a software defect triggers the need for data recovery, you have in effect lost one of the redundant copies you advocated for.

              When you reach the university level, hopefully you will be given a human-factors class of some kind. Or if your first tech job is in aerospace or a notably non-sloppy project, you’ll hopefully at least learn human factors on the job. If you write software that’s intolerant of human error and fails to account for human characteristics, you’ve created a poor design (or most likely no design… just straight to code). When you blame the user, you’ve failed not only as an engineer but also in accountability. If a user suffers data loss because your software failed to protect the data, and you blame the user, any respectable org will either sack you or correct you. It is the duty of tech creators to assume that humans fuck up and to produce tools that are resilient to that (maybe not in the gaming industry, but in just about any other type of project).

              Good software is held to a higher standard than your underdeveloped understanding of technology suggests.

              > Thinking that a federated service is going to have a uniform or homogenous approach to things is folly

              Where do you get /uniform/ from? Where do you get /homogenous approach/ from? Mbin has a software defect that Lemmy does not. Reporting Mbin’s defect in no way implies an expectation that Mbin mirror Lemmy. Lemmy is merely an example of a tool that does not have the particular defect herein; it demonstrates one possible way to protect against data loss. There are many different ways Mbin could solve this problem, but it has wholly failed because it did fuck all. It did nothing to protect against data loss.

              > on your end and a failure of understanding what the technology is.

              It’s a failure on your part to understand how to design quality software. Judging from the quality of apps over the past couple of decades, it seems kids are no longer getting instruction on how to build quality technology, and you have been conditioned by this shift toward poorly designed technology. It’s really sad to see.

                • ciferecaNinjo@fedia.io (OP) · 3 months ago

                  Yikes. I am disturbed to hear that. I was likewise appalled by what I saw on a recent visit to a university. It’s baffling that someone could acquire those degrees without grasping the discipline. Obviously it ties in with the fall of software quality that began around the same time the DoD lifted the Ada mandate. But indeed, you would have to mention your credentials, because nothing else you’ve written indicates any tech background at all.