• ShadowRam@kbin.social · 1 year ago

    won’t add much to an existing array of visible spectrum cameras.

    You do realize LIDAR is essentially a camera that also records an accurate distance per pixel, right?

    It absolutely adds everything.

    But its surroundings are reliably captured by functional sensors

    No it’s not. That’s the point. LIDAR is the functional sensor required.

    You cannot rely on stereoscopic cameras.
    The depth resolution is not there.
    It’s not there for humans.
    It’s not there for the simple reason of physics: depth error from a stereo pair grows with the square of distance.

    Unless you spread those cameras out to an impractical width, and even then it STILL wouldn’t be as accurate as LIDAR.
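    The depth-resolution point can be sketched numerically. This is my own illustration, not from the thread; the baseline, focal length, and half-pixel matching error are assumed numbers.

```python
# Sketch of stereo depth uncertainty (illustrative; all numbers assumed).
# For a stereo pair: dZ ~= Z^2 * disparity_error / (focal_length * baseline).
def stereo_depth_error(z_m, baseline_m, focal_px, disparity_err_px=0.5):
    """Approximate depth uncertainty (meters) at range z_m."""
    return (z_m ** 2) * disparity_err_px / (focal_px * baseline_m)

# Assumed rig: 0.3 m baseline, 1000 px focal length, half-pixel match error.
for z in (10, 50, 100):
    print(f"{z:>3} m -> +/- {stereo_depth_error(z, 0.3, 1000):.2f} m")
# Error grows quadratically: ~0.17 m at 10 m, but ~16.67 m at 100 m.
```

    A LIDAR return, by contrast, has a roughly constant range error (typically centimeters) regardless of distance, which is the asymmetry being pointed at here.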

    You are more than welcome to try it yourself.
    You can even be as stupid as Elon and dump money and reputation into believing it’s easier or cheaper without LIDAR.

    It doesn’t work, and it’ll never work as well as a LIDAR system.
    Stereoscopic cameras will always be more expensive than LIDAR from a computational standpoint.

    AI will do a hell of a lot better at recognizing things from a LIDAR camera than from a stereoscopic camera.

  • Eager Eagle@lemmy.world · 1 year ago

      This assumes depth information is required for self-driving; I think this is where we disagree. Tesla is able to reconstruct its surroundings from visual data alone. In biology, most animals don’t have explicit depth information and are still able to navigate their environments. Requiring LIDAR is a crutch.

    • Geek_King@lemmy.world · 1 year ago

        I disagree with you; I don’t think visual cameras alone are up to the task. There was an instance of a Tesla in Autopilot mode driving at night with a drunk driver. This took place on a highway in Texas. The car’s camera footage was released, and it showed Autopilot failing to identify a police car in its lane, red/blue lights flashing, as a stationary obstacle. Instead, it only registered there was a car in the way about 1 second before the 55 mph impact, and it turned off Autopilot in that final second.

        Having multiple layers of sensors, some good at actually sensing a stationary obstacle, plus accurate range finding, plus visual analysis to pick out people and animals, that’s the way to go.

        Visible-spectrum-only cameras were also just reported to have a harder time recognizing people of color and children.

      • Eager Eagle@lemmy.world · 1 year ago

          the car’s camera footage was released and it showed the autopilot failing to identify the police car in the lane with its red/blue lights flashing

          If the obstacle was visible in the footage, the incident could have been avoided with visible spectrum cameras alone. Once again, a problem with the data processing, not acquisition.

        • Geek_King@lemmy.world · 1 year ago

            If we’re talking about the safety of the driver and the people around them, why not both types of sensors? LIDAR has things it excels at, and visible-spectrum cameras have things they do well too. That way the data-processing side has more to rely on, instead of putting all its eggs in one basket.

          • Eager Eagle@lemmy.world · 1 year ago

              why not both types of sensors

              Cost seems to be a pretty good reason. Admittedly, until I looked it up 5 minutes ago I thought it was just 100-200% more expensive than cameras, but it seems to be much more than that.

              On top of that, there are the problems of weather and high energy usage. This is more than just “not working in rain”: if the autonomous driving system is designed to rely on data from a sensor that stops working when it rains, that can be worse than not having the sensor in the first place. This is what I mean when I say LIDAR is a crutch.

            • Geek_King@lemmy.world · 1 year ago

                That’s a pretty good point: if it’s raining or snowing, LIDAR can’t be used, which could leave the system in a much worse spot. It’s getting to the point where I’m beginning to think fully self-driving cars just won’t be 100% possible in all conditions in all locations.

                For instance, where I live we can have some bad winters: snow, ice, slippery conditions. People have a tough time in these conditions, and I’d imagine it’d be even harder for a self-driving car, especially given how the sensor suites work. My car has that intelligent cruise control where it’ll slow down when it senses a car ahead of me, then match its speed. That feature stops working if too much snow accumulates on the sensors.

            • degrix@lemmy.hqueue.dev · 1 year ago

                Optical cameras alone have issues as well that can’t be engineered around. It’s the combination of the two, along with other things like ultrasonic sensors, that makes them safe. More sensors are generally better because they reduce the computational burden and provide redundancy, even if that redundancy just means stopping safely.
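                The redundancy argument can be sketched as a toy cross-check. This is my own simplified illustration, not any vendor’s actual logic; the function name and thresholds are made up:

```python
# Toy sensor-fusion sketch (illustrative only; names and thresholds assumed).
# Two independent sensors must agree before a confident action is taken;
# disagreement degrades safely instead of trusting either sensor blindly.
def plan_action(camera_sees_obstacle, lidar_range_m, speed_mps,
                stop_margin_s=2.0):
    """Cross-check a camera detection against a LIDAR range reading."""
    danger_radius_m = speed_mps * stop_margin_s
    lidar_sees_obstacle = (lidar_range_m is not None
                           and lidar_range_m < danger_radius_m)
    if camera_sees_obstacle and lidar_sees_obstacle:
        return "brake"              # both sensors agree: act decisively
    if camera_sees_obstacle != lidar_sees_obstacle:
        return "slow_and_alert"     # sensors disagree: fail safe
    return "continue"               # neither sensor sees anything
```

                The point is the disagreement branch: with only one sensor type there is nothing to disagree with, so a single acquisition or processing failure goes straight to “continue.”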

                Cost is certainly an issue, but on $40k+ vehicles it’s cheap enough for other EV makers to include it. Volvo, for instance, is using Luminar’s version at a cost of about $500 (https://www.wired.com/story/sleeker-lidar-moves-volvo-closer-selling-self-driving-car/).

                Image processing is expensive even with dedicated hardware, and LiDAR provides enough extra information to avoid making certain calculations from images alone (like computing deltas between an image series to estimate distance). Those calculations are further strained in conditions where images alone don’t provide enough information, similar to how there are conditions where LiDAR data alone wouldn’t be sufficient.
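                To put rough numbers on that (my own back-of-the-envelope figures; the resolution, disparity range, and window size are all assumed): a naive stereo block-match touches every pixel once per disparity candidate, while a LIDAR return carries its range directly.

```python
# Back-of-the-envelope cost of naive stereo block matching
# (all numbers assumed for illustration).
width, height = 1280, 720      # assumed camera resolution
disparities = 128              # assumed disparity search range (pixels)
window = 9 * 9                 # assumed matching window size
ops_per_frame = width * height * disparities * window
print(f"{ops_per_frame:,} ops/frame")  # 9,555,148,800 for these numbers
```

                Real implementations are far smarter than this naive count, but the per-frame work is still substantial, whereas a LiDAR point cloud needs no per-pixel search to yield distance.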

              • Eager Eagle@lemmy.world · 1 year ago

                  Image processing is expensive

                  and you’re suggesting using LIDAR, which is more expensive and power hungry, as a replacement for those computations?

                • degrix@lemmy.hqueue.dev · 1 year ago

                    I meant that the computations are expensive, i.e. slow to perform even with good processors. When you need to do something millions of times, anything that makes it faster helps the overall safety of the system.