I’m curious how software can be created and evolve over time. I’m afraid that at some point, we’ll realize there are issues with the software we’re using that can only be remedied by massive changes or a complete rewrite.

Are there any instances of this happening? Where something is designed with a flaw that doesn’t get realized until much later, necessitating scrapping the whole thing and starting from scratch?

  • Björn Tantau@swg-empire.de · 8 months ago

    Happens all the time on Linux. The current instance would be the shift from X11 to Wayland.

    The first thing I noticed was when the audio system switched from OSS to ALSA.

    • Max-P@lemmy.max-p.me · 8 months ago

      And then from ALSA to all those barely functional audio daemons, to PulseAudio, and then again to PipeWire. That one sure took a few tries to get right.

      • Björn Tantau@swg-empire.de · 8 months ago

        And the strangest thing about that is that neither PulseAudio nor PipeWire is replacing anything. ALSA and PulseAudio are still there while I handle my audio through PipeWire.

        • angel@iusearchlinux.fyi · 8 months ago

          How is PulseAudio still there? I mean, sure the protocol is still there, but it’s handled by pipewire-pulse on most systems nowadays (KDE specifically requires PipeWire).

          Also, PulseAudio was never designed to replace ALSA; it sits on top of ALSA to hide some of the complexity that programs would otherwise face if they used ALSA directly.

          • lemmyvore@feddit.nl · 8 months ago

            Pulse itself is not there but its functionality is (and they even preserved its interface and pactl). PipeWire is a superset of audio features from Pulse and Jack combined with video.

            • tetris11@lemmy.ml · 8 months ago

              For anyone wondering: ALSA does sound-card detection and basic I/O at the kernel level, PulseAudio takes ALSA devices and does audio mixing at the user/session level, and PipeWire does what PulseAudio does but more, and even includes video devices.

  • Strit@lemmy.linuxuserspace.show · 8 months ago

    there are issues with the software we’re using that can only be remedied by massive changes or a complete rewrite.

    I think this was the main reason for the Wayland project. So many issues with Xorg that it made more sense to start over, instead of trying to fix it in Xorg.

    • Phoenixz@lemmy.ca · 8 months ago

      And as I’ve understood and read about it, Wayland has been a nearly 10-year mess that ended up with a product as bad as, or perhaps worse than, Xorg.

      Not trying to rain on either parade, but X is like the Hubble telescope if we added new upgrades to it every 2 months. It’s way past its end of life, doing things it was never designed for.

      Wayland seems… to be missing direction?

      • LeFantome@programming.dev · 8 months ago

        I do not want to fight and say you misunderstood. Let’s just say you have been very influenced by one perspective.

        Wayland has taken a while to fully flesh out. Part of that has been delay caused by the original designers not wanting to compromise their vision. Most of it is just the time it takes to replace something as mature as X11 (which is 40 years old). A lot of what feels like Wayland problems actually stems from applications not having migrated yet.

        While there are things yet to do, the design of Wayland is proving itself to be better fundamentally. There are already things Wayland can do that X11 likely never will ( like HDR ). Wayland is significantly more secure.

        At this point, Wayland is either good enough or even superior for many people. It does not yet work perfectly for NVIDIA users which has more to do with NVIDIA’s choices than Wayland. Thankfully, it seems the biggest issues have been addressed and will come together around May.

        The desktop environments and toolkits used in the most popular distros already default to Wayland and will be Wayland-only soon. Pretty much all the second-tier desktop environments have plans to get to Wayland.

        We will exit 2024 with almost all distros using Wayland and the majority of users enjoying Wayland without issue.

        X11 is going to be around for a long time but, on Linux, almost nobody will run it directly by 2026.

        Wayland is hardly the Hubble.

        • Phoenixz@lemmy.ca · 8 months ago

          Well, as I said, it’s what I read. If it’s better than that, great. Thanks for correcting me

          Also, X is Hubble, not Wayland :)

      • ComradeKhoumrag@infosec.pub · 8 months ago

        I’ve been using Wayland on Plasma 5 for a year or so now, and it looks like the recent NVIDIA driver changes have been merged, so it should be getting even better any minute now.

        I’ve used it for streaming on Linux with pipewire, overall no complaints.

      • UnfortunateShort@lemmy.world · 8 months ago

        Wayland is the default for GNOME and KDE now, meaning before long it will become the default for the majority of all Linux users. And in addition, Xfce, Cinnamon and LXQt are also going to support it.

    • leanleft@lemmy.ml · 7 months ago

      according to kagiGPT…
      ~~i have determined that wayland is the successor and technically minimal:
      *Yes, it is possible to run simple GUI programs without a full desktop environment or window manager. According to the information in the memory:

      You can run GUI programs with just an X server and the necessary libraries (such as QT or GTK), without needing a window manager or desktop environment installed. [1][2]

      The X server handles the basic graphical functionality, like placing windows and handling events, while the window manager is responsible for managing the appearance and behavior of windows. [3][4]

      Some users prefer this approach to avoid running a full desktop environment when they only need to launch a few GUI applications. [5][6]

      However, the practical experience may not be as smooth as having a full desktop environment, as you may need to manually configure the environment for each GUI program. [7][8]*~~

      however… firefox will not run without the full wayland compositor.

      correction:

      1. Wayland is not a display server like X11, but rather a protocol that describes how applications communicate with a compositor directly. [1]

      2. Display servers using the Wayland protocol are called compositors, as they combine the roles of the X window manager, compositing manager, and display server. [2]

      3. A Wayland compositor combines the roles of the X window manager, compositing manager, and display server. Most major desktops support Wayland compositors. [3]

  • lil@lemy.lol · 8 months ago

    There is some Rust code that needs to be rewritten in C.

    • Lodra@programming.dev · 8 months ago

      Strange. I’m not exactly keeping track, but isn’t the current trend going in just the opposite direction? It seems like tons of utilities are being rewritten in Rust to avoid memory-safety bugs.

      • Khanzarate@lemmy.world · 8 months ago

        The more the code is used, the faster it ought to be. A function for an OS kernel shouldn’t be written in Python, but a calculator doesn’t need to be written in assembly, that kind of thing.

        I can’t really speak for Rust myself but to explain the comment, the performance gains of a language closer to assembly can be worth the headache of dealing with unsafe and harder to debug languages.

        Linux, for instance, uses some assembly for the parts of it that need to be blazing fast. Confirming that the assembly code is bug-free, with no leaks and all that, is just worth it for the performance sometimes.

        But yeah I dunno in what cases rust is faster than C/C++.

          • 0x0@programming.dev · 8 months ago

            C/C++ isn’t

            You’re talking about two languages, one is C, the other is C++. C++ is not a superset of C.

        • Nibodhika@lemmy.world · 8 months ago

          But yeah I dunno in what cases rust is faster than C/C++.

          First of all, C and C++ are very different; C is faster than C++. Rust is not intrinsically faster than C in the same way that C is faster than C++, but there is a huge difference: safety.

          Imagine the following C function:

          void do_something(Person* person);
          

          Are you sure that you can pass NULL? Or that it won’t delete your object? Or delete later? Or anything, you need to know what the function does to be sure and/or perform lots of tests, e.g. the proper use of that function might be something like:

          if( person ) {
            person_uses++;          /* manual reference count */
            do_something(person);
          }

          ...

          if( --person_uses == 0 )
            free( person );         /* free only once the last user is done */
          
          

          That’s a lot more calls than just calling the function, but it’s also a lot more secure.

          In C++ this is somewhat solved by using smart pointers, e.g.

          void do_something(std::unique_ptr<Person> person);
          void something_else(std::shared_ptr<Person> person);
          

          That’s a lot more secure and readable, but also a lot slower. Rust achieves the C++ level of safety and readability using only the equivalent of a single C call, by performing the analysis at compile time and making the generated assembly both fast and secure.

          Can the same thing be done in C? Absolutely, you could use macros instead of ifs and counters and have very fast and safe code, but it would not be easy to read at all. The thing is, Rust makes it easy to write fast and safe code; C is faster, but safe C is slower, and since you always want safe code, Rust ends up being faster for most applications.

        • flying_sheep@lemmy.ml · 8 months ago

          Rust is faster than C. Iterators and mutable noalias can be optimized better. There’s still FORTRAN code in use because it’s noalias and therefore faster
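
        A minimal C sketch of what that aliasing guarantee buys (the function names are made up for illustration): Rust’s &mut references and Fortran arguments are guaranteed not to alias, which in C you can only promise explicitly with restrict.

        #include <stddef.h>

        /* The compiler must assume dst and src may overlap, so writes through
         * dst could change later reads through src; that blocks some
         * vectorization. */
        void scale_may_alias(float *dst, const float *src, size_t n, float k) {
            for (size_t i = 0; i < n; i++)
                dst[i] = src[i] * k;
        }

        /* With restrict (the guarantee Rust and Fortran give by default),
         * the optimizer is free to vectorize aggressively. */
        void scale_noalias(float *restrict dst, const float *restrict src,
                           size_t n, float k) {
            for (size_t i = 0; i < n; i++)
                dst[i] = src[i] * k;
        }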

    • Spectranox@lemmy.dbzer0.com · 8 months ago

      Agreed, call me unreasonable or whatever, but I just don’t like Rust or the community behind it. Stop trying to reinvent the wheel! Rust makes everything complicated.

      On the other hand… Zig 😘

  • nycki@lemmy.world · 8 months ago

    Starting anything from scratch is a huge risk these days. At best you’ll have something like the Python 2 -> 3 rewrite (leaving scraps of legacy code all over the place); at worst you’ll have something like GNOME/KDE (where the community schisms rather than adopting a new standard). I would say that most of the time, there are only two ways to get a new standard to reach mass adoption.

    1. Retrofit everything. Extend old APIs where possible. Build your new layer on top of https, or javascript, or ascii, or something else that already has widespread adoption. Make a clear upgrade path for old users, but maintain compatibility for as long as possible.

    2. Buy 99% of the market and declare yourself king (cough cough chromium).

    • zaphod · 8 months ago

      Python 3 wasn’t a rewrite, it just broke compatibility with Python 2.

      • flying_sheep@lemmy.ml · 8 months ago

        In a good way. Using a non-verified bytes type for strings was such a giant source of bugs. Text is complicated and pretending it isn’t won’t get you far.

  • biribiri11@lemmy.ml · 8 months ago

    The entire thing. It needs to be completely rewritten in rust, complete with unit tests and Miri in CI, and converted to a high performance microkernel. Everything evolves into a crab /s

  • smileyhead@discuss.tchncs.de · 8 months ago

    Maybe not exactly Linux, sorry for that, but it was the first thing that came to my mind.
    Web browsers really should be rewritten to be more modular and easier to modify. The web was supposed to be bulletproof and keep working even if some features are not present, but all websites are now built on the assumption that every browser has 99% of Chromium’s features implemented, and they won’t work in any browser written from scratch today.

  • LeFantome@programming.dev · 8 months ago

    Linux does this all the time.

    ALSA -> Pulse -> Pipewire

    Xorg -> Wayland

    GNOME 2 -> GNOME 3

    Every window manager, compositor, and DE

    GIMP 2 -> GIMP 3

    SysV init -> SystemD

    OpenSSL -> BoringSSL

    Twenty different kinds of package manager

    Many shifts in popular software

    • loutr@sh.itjust.works · 8 months ago

      BoringSSL is not a drop-in replacement for openssl though:

      BoringSSL is a fork of OpenSSL that is designed to meet Google’s needs.

      Although BoringSSL is an open source project, it is not intended for general use, as OpenSSL is. We don’t recommend that third parties depend upon it. Doing so is likely to be frustrating because there are no guarantees of API or ABI stability.

    • embed_me@programming.dev · 7 months ago

      Aren’t different kinds of package managers required due to the different stability requirements of a distro?

  • limelight79@lemm.ee · 8 months ago

    We haven’t rewritten the firewall code lately, right? checks Oh, it looks like we have. Now it’s nftables.

    I learned ipfirewall, then ipchains, then iptables came along, and I was like, oh hell no, not again. At that point I found software to set up the firewall for me.

    • catloaf@lemm.ee · 8 months ago

      Damn, you’re old. iptables came out in 1998. That’s what I learned in (and I still don’t fully understand it).

    • XTL · 8 months ago

      I was just thinking that iptables lasted a good 20 years. Over twice that of ipchains. Was it good enough or did it just have too much inertia?

      Nf is probably a welcome improvement in any case.

  • gnuhaut@lemmy.ml · 8 months ago

    GUI toolkits like Qt and Gtk. I can’t tell you how to do it better, but something is definitely wrong with the standard class hierarchy framework model these things adhere to. Someday someone will figure out a better way to write GUIs (or maybe that already exists and I’m unaware) and that new approach will take over eventually, and all the GUI toolkits will have to be scrapped or rewritten completely.

    • Lung@lemmy.world · 8 months ago

      Idk man, I’ve used a lot of UI toolkits, and I don’t really see anything wrong with GTK (though they do basically rewrite it from scratch every few years it seems…)

      The only thing that comes to mind is the React-ish world of UI systems, where model-view-controller patterns are more obvious to use. I.e. a concept of state where the UI automatically re-renders based on the data backing it

      But generally, GTK is a joy, and imo the world of HTML has long been trying to catch up to it. It’s only kinda recently that we got flexbox, and that was always how GTK layouts were. The tooling, design guidelines, and visual editors have been great for a long time

    • KindaABigDyl@programming.dev · 8 months ago

      I’ve really fallen in love with the Iced framework lately. It just clicks.

      A modified version of it is what System76 is using for the new COSMIC DE

      • Joe Breuer@lemmy.ml · 8 months ago

        Which - in my considered opinion - makes them so much worse.

        Is it because writing native UI on all current systems I’m aware of is still worse than in the times of NeXTStep with Interface Builder, Objective C, and their class libraries?

        And/or is it because it allows (perceived) lower-cost “web developers” to be tasked with “native” client UI?

    • MonkderDritte@feddit.de · 8 months ago

      and all the GUI toolkits will have to be scrapped or rewritten completely

      Dillo is the only tool I know of that still uses FLTK.

    • XTL · 8 months ago

      Newer toolkits all seem to be going immediate mode. Which I kind of hate as an idea personally.

  • MrAlternateTape@lemm.ee · 8 months ago

    It’s actually a classic programmer move to start over again. I’ve read the book “Clean Code” and it talks about it a bit.

    Apparently it would not be the first time that a fresh start turned into the same mess as the old codebase it was supposed to replace. While starting over can be tempting, refactoring is in my opinion better.

    If you refactor a lot, you start thinking the same way about the new code you write. So any new code you write will probably be better and you’ll be cleaning up the old code too. If you know you have to clean up the mess anyways, better do it right the first time …

    However it is not hard to imagine that some programming languages simply get too old and the application has to be rewritten in a new language to ensure continuity. So I think that happens sometimes.

    • teawrecks · 8 months ago

      Yeah, this was something I recognized about myself in the first few years out of school. My brain always wanted to say “all of this is a mess, let’s just delete it all and start from scratch” as though that was some kind of bold/smart move.

      But I now understand that it’s the mark of a talented engineer to see where we are as point A, where we want to be as point B, and be able to navigate from A to B before some deadline (and maybe you have points/deadlines C, D, E, etc.). The person who has that vision is who you want in charge.

      Chesterton’s Fence is the relevant analogy: “you should never destroy a fence until you understand why it’s there in the first place.”

      • 0x0@programming.dev · 8 months ago

        I’d counter that with monolithic legacy apps without any testing, trying to refactor can be a real pain.

        I much prefer starting from scratch while trying to avoid past mistakes, and maintaining the old app until the new one is ready. Then management starts managing, and the new app becomes the old app. Rinse and repeat.

        • teawrecks · 8 months ago

          I made a thing.

          The difference between the idiot and the expert is that the expert knows why the fences are there and can do the rewrite without having to relearn those lessons. But if you’re supporting a package you didn’t originally write, a rewrite is much harder.

          • msage@programming.dev · 8 months ago

            Which is something I always try to explain to juniors: writing code is cool, but for your sake learn how to READ code.

            Not just understanding what it does, but what it was all meant to do. Even reading your own code is a skill that needs some focus.

            Side note: I hate it to my core when people copy code mindlessly. Sometimes it’s not even a bug, or a performance issue, but something utterly stupid and much harder to read. But because they didn’t understand it, and didn’t even try, they just copy-pasted it and went on. Ugh.

            • teawrecks · 8 months ago

              Side note: I hate it to my core when people copy code mindlessly

              Get ready for the world of AI code assist 😬

            • teawrecks · 7 months ago

              Hah yeah, this was in the back of my mind. I forgot the context of it, though, thanks.

      • sepulcher@lemmy.ca (OP) · 8 months ago

        “you should never destroy a fence until you understand why it’s there in the first place.”

        I like that; really makes me think about my time in building-games.

  • MonkderDritte@feddit.de · 8 months ago

    Alsa > Pulseaudio > Pipewire

    About 20 xdg-open alternatives (which is, btw, just a wrapper around gnome-open, exo-open, etc.)

    My session scripts, after a deep dive. Seriously, startxfce4 has workarounds from the ’80s, and software rot has already affected the formatting.

    Turnstile instead of elogind (which is bound to systemd releases)

    mingetty, because who uses a modem nowadays?

  • sunbeam60@lemmy.one · 8 months ago

    Be careful what you wish for. I’ve been part of some rewrites that turned out worse than the original in every way. Not even code quality was improved.

      • sunbeam60@lemmy.one · 8 months ago

        Funnily enough the current one is actually the one where we’ve made the biggest delta and it’s been worthwhile in every way. When I joined the oldest part of the platform was 90s .net and MSSQL. This summer we’re turning the last bits off.

  • Hector@lemmy.ca · 8 months ago

    Some form of stable, modernized bluetooth stack would be nice. Every other bluetooth update breaks at least one of my devices.

    • daq@lemmy.sdf.org · 8 months ago

      I realize that’s not exactly what you asked for, but PipeWire has been incredibly stable for me. The difference between the absolute nightmare of using BT devices with ALSA and the super smooth experience in PipeWire is night and day.

  • taladar@sh.itjust.works · 8 months ago

    I would say the whole set of C-based assumptions underlying most modern software, specifically errors being just an integer constant that is translated into text, so it carries no details about the operation that was attempted (who tried to do what to which object, and why it failed).

    • smileyhead@discuss.tchncs.de · 8 months ago

      You have stderr to throw errors into. And the constants are just error codes, like HTTP status codes. Without them, how would the computer know whether the program executed correctly?

      • atzanteol@sh.itjust.works · 8 months ago

        You throw an exception like a gentleman. But C doesn’t support them. So you need to abuse the return type to also indicate “success” as well as a potential value the caller wanted.

        • uis@lemm.ee · 8 months ago

          So you need to abuse the return type to also indicate “success” as well as a potential value the caller wanted.

          You don’t need to.

          Returning structs, returning by pointer, signals, error flags, setjmp/longjmp, using cxa for exceptions (lol, now THIS is real abuse).
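
          For instance, a rough sketch of the “returning structs” option (the names here are hypothetical, just to show the shape): the value and the error detail travel together instead of being squeezed into one int.

          #include <errno.h>
          #include <fcntl.h>
          #include <stdio.h>
          #include <string.h>
          #include <unistd.h>

          /* Hypothetical result type: value and error are separate fields. */
          struct open_result {
              int fd;           /* valid only when err == 0 */
              int err;          /* errno value on failure, 0 on success */
              const char *path; /* kept around for diagnostics */
          };

          static struct open_result open_readonly(const char *path) {
              struct open_result r = { open(path, O_RDONLY), 0, path };
              if (r.fd < 0)
                  r.err = errno;
              return r;
          }

          int main(void) {
              struct open_result r = open_readonly("/etc/shadow");
              if (r.err != 0) {
                  /* The caller can report what was attempted on which object. */
                  fprintf(stderr, "open(\"%s\") failed: %s\n",
                          r.path, strerror(r.err));
                  return 1;
              }
              close(r.fd);
              return 0;
          }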

        • 0x0@programming.dev · 8 months ago

          Exceptions are bad coding, and what’s abusive about using the full range of an integer? 0 is success, everything else is an error; check the API for details or call strerror.
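
          A minimal sketch of that convention (mkdir is just an arbitrary example call): check the return value, and on failure translate errno with strerror.

          #include <errno.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/stat.h>

          int main(void) {
              /* Classic C convention: 0 on success, -1 on error with errno set. */
              if (mkdir("/tmp/example-dir", 0755) != 0) {
                  /* strerror turns the integer code into readable text. */
                  fprintf(stderr, "mkdir failed: %s\n", strerror(errno));
                  return 1;
              }
              return 0;
          }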

          • taladar@sh.itjust.works · 8 months ago

            Returning error codes in-band is the reason for a significant percentage of C bugs and security holes when the return value is used without checking. Something like Rust’s Result type that forces you to distinguish the two cases is much better design here. And no, you are not working with a whole language ecosystem of “sufficiently disciplined programmers” so that nobody ever forgets to check a return value.

            Not to mention that errno is just a very broken design in the times of modern thread and event systems, signals, interrupts and all kinds of other ways to produce race conditions and overwrite the errno value before it is checked.

            • uis@lemm.ee · 8 months ago

              errno is not shared between threads. Also:

              signal handlers that call functions that may set errno or modify the floating-point environment must save their original values, and restore them before returning.

              It does not add more race conditions, because signal handlers execute in one of the regular threads. In a single-threaded program, signals are functions that can be called by the OS at any point of execution, but they do not execute at the same time as the thread.
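
              A small sketch of the save/restore rule quoted above (the SIGCHLD/waitpid pairing is just an assumed example, not from the quote):

              #include <errno.h>
              #include <signal.h>
              #include <string.h>
              #include <sys/wait.h>
              #include <unistd.h>

              /* The handler calls waitpid, which may clobber errno, so the
               * original value is saved and restored around it. */
              static void on_sigchld(int signo) {
                  int saved_errno = errno;
                  (void)signo;
                  while (waitpid(-1, NULL, WNOHANG) > 0)
                      ;  /* reap any exited children */
                  errno = saved_errno;
              }

              int main(void) {
                  struct sigaction sa;
                  memset(&sa, 0, sizeof sa);
                  sa.sa_handler = on_sigchld;
                  sigaction(SIGCHLD, &sa, NULL);

                  if (fork() == 0)
                      _exit(0);  /* child exits; parent receives SIGCHLD */
                  sleep(1);
                  return 0;
              }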

    • teawrecks · 8 months ago

      You mean 0 indicating success and any other value indicating some arbitrary meaning? I don’t see any problem with that.

      Passing around extra error handling info for the worst case isn’t free, and the worst case doesn’t happen 99.999% of the time. No reason to spend extra cycles and memory hurting performance just to make debugging easier. That’s what debug/instrumented builds are for.

      • taladar@sh.itjust.works · 8 months ago

        Passing around extra error handling info for the worst case isn’t free, and the worst case doesn’t happen 99.999% of the time.

        The case “I want to know why this error happened” is basically 100% of the time when an error actually happens.

        And the case of “Permission denied” or similar useless nonsense without any details, costing me hours of debugging time that wouldn’t be necessary if it just told me permission for whom to do what to which object, happens quite regularly.

        • teawrecks · 8 months ago

          “0.001% of the time, I wanna know every time 👉😎👉”

          Yeah, I get that. But are we talking about during development (which is why we’re choosing between C and something else)? In that case, you should be running instrumented builds, or with debug functionality enabled. I agree that most programs just fail and don’t tell you how to go about enabling debug info or anything, and that could be improved.

          For the “Permission Denied” example, I also assume we’re making system calls and having them fail? In that case it seems straightforward: the user you’re running as can’t access the resource you were actively trying to access. But if we’re talking about some random log file just saying “Error: permission denied” and leaving you nothing to go on, that’s on the program dumping the error to produce more useful information.

          In general, you often don’t want to leak more info than just Worked or Didn’t Work for security reasons. Or a mix of security/performance reasons (possible DOS attacks).

          • taladar@sh.itjust.works · 8 months ago

            During development is just about the only time when that doesn’t matter because you have direct access to the source code to figure out which function failed exactly. As a sysadmin I don’t have the luxury of reproducing every issue with a debug build with some debugger running and/or print statements added to figure out where exactly that value originally came from. I really need to know why it failed the first time around.

            • teawrecks · 8 months ago

              Yeah, so it sounds like your complaint is actually with applications not propagating relevant error-handling information to where it’s most convenient for you to read it. Linux is not at fault in your example because, as you said, it returns all the information needed to fix the issue to the one who developed the code, and then they just dropped the ball.

              Maybe there’s a flag you can set to dump those kinds of errors to a log? But even then, some apps use the fail case as part of normal operation (try to open a file, if we can’t, do this other thing). You wouldn’t actually want to know about every single failure, just the ones that the application considers fatal.

              As long as you’re running on a turing complete machine, it’s on the app itself to sufficiently document what qualifies as an error and why it happened.

              • taladar@sh.itjust.works · 8 months ago

                The whole point of my complaint is that shitty C conventions produce shitty error messages. If I could rely on the programmer to work around those stupid conventions every time by actually checking the error and then enriching it with all relevant information I would have no complaints.

              • taladar@sh.itjust.works · 8 months ago

                I know about strace, strace still requires me to reproduce the issue and then to look at backtraces if nobody bothered to include any detail in the error.

                • uis@lemm.ee · 8 months ago

                  Somehow a (lack of) backtrace and details in the error message is a “C-based assumption”.

      • atzanteol@sh.itjust.works · 8 months ago

        Ugh, I do not miss C…

        Errors and return values are, and should be, different things. Almost every other language figured this out and handles it better than C.

        • teawrecks · 8 months ago

          It’s more of an ABI thing though, C just doesn’t have error handling.

          And if you do exception handling wrong in most other languages, you hamstring your performance.

        • uis@lemm.ee · 8 months ago

          Errors and return values are, and should be, different things.

          That’s why errno and return value are different things.

      • taladar@sh.itjust.works · 8 months ago

        It does very much have the concept of objects as in subject, verb, object of operations implemented in assembly.

        As in who (user foo) tried to do what (open/read/write/delete/…) to which object (e.g. which socket, which file, which Linux namespace, which memory mapping,…).

        • uis@lemm.ee · 8 months ago

          implemented in assembly.

          Indeed. Assembly is (can be) used to implement them.

          As in who (user foo) tried to do what (open/read/write/delete/…) to which object (e.g. which socket, which file, which Linux namespace, which memory mapping,…).

          The kernel implements them in software (except memory mappings, which are implemented in the MMU). There are no sockets, files, or namespaces in the ISA.

          • taladar@sh.itjust.works · 8 months ago

            You were the one who brought up assembly.

            And stop acting like you don’t know what I am talking about. Syscalls implement operations that are called by someone who has certain permissions and operate on various kinds of objects. Nobody who wants to debug why that call returned “Permission denied” or “File does not exist” without any detail cares that there is hardware several layers of abstraction deeper down that doesn’t know anything about those concepts. Nothing in the hardware forces people to make APIs with bad error reporting.

              • taladar@sh.itjust.works · 8 months ago

                Because if a program dies and just prints strerror(errno), it gives me “Permission denied” without any detail on which operation had permission denied to do what. So basically I don’t have enough information to fix the issue, or in many cases even to reproduce it.
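
                To illustrate (a small hypothetical sketch, not from any real program): the bare strerror(errno) message versus one enriched with who tried to do what to which object.

                #include <errno.h>
                #include <fcntl.h>
                #include <stdio.h>
                #include <string.h>
                #include <unistd.h>

                int main(void) {
                    const char *path = "/etc/shadow";
                    int fd = open(path, O_RDONLY);
                    if (fd < 0) {
                        int err = errno;  /* save before anything else can change it */

                        /* Bare version: all the sysadmin sees is the code's text. */
                        fprintf(stderr, "%s\n", strerror(err));

                        /* Enriched version: who tried to do what to which object. */
                        fprintf(stderr, "uid %d: open(\"%s\", O_RDONLY): %s\n",
                                (int)getuid(), path, strerror(err));
                        return 1;
                    }
                    close(fd);
                    return 0;
                }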

                  • uis@lemm.ee · 8 months ago

                  It may just not print anything at all. That is a logging issue, not a “C-based assumption”. I wouldn’t be surprised if you called “403 Forbidden” a “C-based assumption” too.

                  But since we are talking about a local program, a competent sysadmin can strace it. That will print the arguments and error codes.