Tesla Whistleblower Says ‘Autopilot’ System Is Not Safe Enough To Be Used On Public Roads::“It affects all of us because we are essentially experiments in public roads.”

  • JohnEdwa
    45
    edit-2
    5 months ago

    Ah, but you see, his reasoning is that what if the camera and lidar disagree, then what? With only a camera based system, there is only one truth with no conflicts!

    Like when the camera sees the broad side of a white truck as clear skies and slams right at it, there was never any conflict anywhere, everything went just as it was suppo… Wait, shit.

    • @brbposting@sh.itjust.works
      30
      5 months ago

      sees the broad side of a white truck as clear skies and slams right at it

      RIP Joshua Brown:

      The truck driver, Frank Baressi, 62, told the Associated Press that the Tesla driver Joshua Brown, 40, was “playing Harry Potter on the TV screen” during the collision and was driving so fast that “he went so fast through my trailer I didn’t see him”.

      • @girthero@lemmy.world
        13
        5 months ago

        he went so fast through my trailer I didn’t see him”.

        Lidar would still prevail over stupidity in this situation. It does a better job detecting massive objects cars can’t go through.

    • @DreadPotato
      0
      5 months ago

      what if the camera and lidar disagree, then what?

      This (sensor fusion) is a valid issue in mobile robotics. Adding more sensors doesn’t necessarily improve stability or reliability.

      • @ZapBeebz_@lemmy.world
        25
        5 months ago

        After a point, yes. However, that point only comes once you’re adding more than a second sensor type to the system. The correct answer is to work a weighting system into your algorithm so the car can decide which sensor it trusts not to kill the driver, i.e. if the LIDAR sees the broadside of a trailer and the camera doesn’t, the car should believe the LIDAR over the camera, as braking unnecessarily is likely safer than speeding into the obstacle at 60 mph.
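
        In pseudo-Python, the kind of weighting I mean looks something like this (a toy sketch with made-up names and weights, nowhere near a real AV stack):

        ```python
        # Toy safety-biased fusion rule (hypothetical names and weights):
        # when the sensors disagree about an obstacle, lean toward the one
        # reporting danger, because a phantom brake is cheaper than
        # driving through a real trailer.

        def fuse_obstacle_detections(lidar_sees_obstacle: bool,
                                     camera_sees_obstacle: bool,
                                     lidar_weight: float = 0.7,
                                     camera_weight: float = 0.3) -> bool:
            """Return True if the car should brake."""
            score = (lidar_weight * lidar_sees_obstacle
                     + camera_weight * camera_sees_obstacle)
            return score >= 0.5

        # LIDAR sees the trailer, camera sees "clear skies": brake anyway.
        print(fuse_obstacle_detections(True, False))  # -> True
        ```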

        • @DreadPotato
          2
          5 months ago

          Yes, the solution is fairly simple in theory, but implementing it is significantly harder, which is why this is not a trivial issue to solve in robotics.

          I’m not saying their decision was the right one, just that the argument that multiple sensors create noise in the decision-making is a completely valid one.

          • @lightnsfw@reddthat.com
            3
            5 months ago

            Doesn’t seem too complicated… if ANY of the sensors sees something in the way that the system can’t resolve, then it should stop the vehicle or force the driver to take over
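
            The rule I mean is literally just this (toy sketch, made-up names, not any real system):

            ```python
            # Toy version of "any unresolved detection -> stop / hand over"
            # (hypothetical names, not how any vendor actually does it).
            def plan_action(detections: list[bool]) -> str:
                if all(detections):
                    return "brake"      # every sensor agrees: obstacle ahead
                if any(detections):
                    return "handover"   # sensors disagree: alert the driver
                return "continue"       # all sensors agree: path is clear

            print(plan_action([True, False]))  # -> handover
            ```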

            • @DreadPotato
              2
              edit-2
              5 months ago

              Then you have a very unreliable system that stops without actual reason all the time, causing immense frustration for the user. Is it safe? I guess; cars that don’t move generally are. Is it functional? No, not at all.

              I’m not advocating unsafe implementations here, I’m just pointing out that your suggestion doesn’t actually solve the issue, as it leaves a solution that’s not functional.

              • @lightnsfw@reddthat.com
                2
                5 months ago

                If they’re using sensors so unreliable that they get false positives all the time, the system isn’t going to be functional in the first place.

                • @DreadPotato
                  2
                  edit-2
                  5 months ago

                  All sensors throw a shitload of false positives (or negatives) when used in the real world. That’s why the filtering and unification between sensors is so important, and also so hard to solve while still getting a consistent and reliable result.
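
                  One standard trick on the false-positive side, just to illustrate (a sketch, not how any vendor actually does it): only report an obstacle once it persists for N consecutive frames.

                  ```python
                  from collections import deque

                  # Sketch of a persistence filter: a single-frame false
                  # positive gets ignored; only N consecutive detections
                  # count as a real obstacle.
                  class PersistenceFilter:
                      def __init__(self, n_frames: int = 3):
                          self.history = deque(maxlen=n_frames)

                      def update(self, detected: bool) -> bool:
                          self.history.append(detected)
                          return (len(self.history) == self.history.maxlen
                                  and all(self.history))

                  f = PersistenceFilter(3)
                  print([f.update(d) for d in [True, False, True, True, True]])
                  # -> [False, False, False, False, True]
                  ```

                  The obvious cost is latency: at highway speed, waiting three frames before believing a detection is distance travelled, which is exactly the kind of trade-off that makes this hard.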

            • Kogasa
              1
              5 months ago

              “Seeing an obstacle” is a high-level abstraction. Sensor fusion is a lower-level problem. It’s fundamentally kinda tricky to get coherent information out of multiple sensors looking partially at the same thing in different ways. Not impossible, but the basic model is less “just check each camera” and more sheaves