EagleEye is an AI-powered mixed-reality (MR) system designed to be built into soldiers’ helmets.

The modular hardware is a “family of systems,” according to Anduril’s announcement, including a heads-up display, spatial audio, and radio frequency detection. It can display mission briefings and orders, overlay maps and other information during combat, and control drones and military robotics.

“We don’t want to give service members a new tool—we’re giving them a new teammate,” says Luckey. “The idea of an AI partner embedded in your display has been imagined for decades. EagleEye is the first time it’s real.”

      • I know about AR, I mean the video-game-style aim assist stuff. It just doesn’t seem practical. It seems roughly the same as using a sight with a bunch of extra sensors in the middle. Any calibration issues with sights would still exist, plus all the work of integrating the gun (identification, sensors for precise positioning, data transmission, another fucking battery). I don’t doubt the MIC wants something like this, but there are always tradeoffs, and in this case it seems like you add a lot of weight, complexity, and maintenance for limited benefit. Then again, maybe they have a really nice aim assist system and the only thing holding them back was that the helmet was too heavy.

        • Awoo [she/her]@hexbear.net · 6 days ago

          “sensors for precise positioning”

          No, you’re overthinking it. The “sensor” already exists on the headset in the form of multiple cameras set apart at known distances, which lets the system combine images taken from different positions to reconstruct a 3D view. You can see this clearly on the current version of the headset (three cameras in the middle of the helmet).
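
          To make the geometry concrete, here’s a minimal sketch of the standard pinhole relation that turns the pixel shift between two cameras into distance. Nothing here is from Anduril; the focal length and baseline are made-up placeholder numbers:

          ```python
          # Minimal sketch: metric depth from stereo disparity (pinhole model).
          # focal_length_px and baseline_m are illustrative placeholders.
          def depth_from_disparity(disparity_px: float,
                                   focal_length_px: float = 700.0,
                                   baseline_m: float = 0.10) -> float:
              """Distance (m) to a point whose image shifts `disparity_px`
              pixels between two cameras mounted `baseline_m` apart."""
              if disparity_px <= 0:
                  raise ValueError("disparity must be positive")
              return focal_length_px * baseline_m / disparity_px

          # A feature shifted 35 px between the two views, 10 cm baseline:
          print(depth_from_disparity(35.0))  # -> 2.0 (metres)
          ```

          The wider the camera spacing, the larger the disparity at a given distance, which is exactly why fixed, known camera positions on the helmet matter.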

          The older prototypes they were working with had even more cameras.

          This can actually be done with just 2 cameras: https://youtu.be/5LWVtC4ZbK4

          The technique is simple stereo depth measurement. I’m sure you understand that once you have a 3D reconstruction of everything in frame, working out where things are is straightforward and accurate. You can probably assume these are wide fisheye lenses, so they have an extremely wide, clear view of everything.
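
          For a sense of how little code basic two-camera depth takes, here’s a rough sketch using OpenCV’s stock block matcher. The file names and tuning numbers are placeholders, and a real pipeline would undistort and rectify the fisheye images first:

          ```python
          import cv2
          import numpy as np

          # left.png / right.png are placeholder names for a rectified stereo pair.
          left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
          right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

          # numDisparities must be a multiple of 16; blockSize must be odd.
          stereo = cv2.StereoBM_create(numDisparities=96, blockSize=15)
          disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px

          # Disparity -> metric depth via the pinhole relation (illustrative numbers).
          FOCAL_PX, BASELINE_M = 700.0, 0.10
          depth_m = np.full_like(disp, np.inf)
          valid = disp > 0
          depth_m[valid] = FOCAL_PX * BASELINE_M / disp[valid]
          ```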

            • Awoo [she/her]@hexbear.net · 6 days ago

              All you really need are:

              1. A real-time 3D model of what is currently being seen, achieved with multiple cameras.
              2. A real-time 3D model of the rifle being aimed, with the ability to recognise where that rifle’s barrel is pointing. This can be achieved with a laser on the rifle, or with image recognition that already exists today, of the kind used to track a hand pointing at something accurately enough for mouse-style pointing and clicking inside VR. All of this will work fine as long as the rifle is in frame of the camera, which it will be on such a wide-FOV camera (see the sketch after this list).
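
              As a hypothetical sketch of how those two pieces combine (none of these names or tolerances come from Anduril): once the scene geometry and the barrel’s origin and direction are in the same coordinate frame, the predicted impact point is just the first scene point the barrel ray meets.

              ```python
              import numpy as np

              # Hypothetical sketch: march the barrel ray through the
              # camera-reconstructed scene until it hits geometry.
              def aim_point(scene_points: np.ndarray,   # (N, 3) reconstructed scene
                            barrel_origin: np.ndarray,  # (3,) muzzle position
                            barrel_dir: np.ndarray,     # (3,) aim direction
                            step_m: float = 0.05,
                            hit_radius_m: float = 0.05,
                            max_range_m: float = 300.0):
                  """Return the first scene point within hit_radius of the barrel ray."""
                  d = barrel_dir / np.linalg.norm(barrel_dir)
                  t = step_m
                  while t < max_range_m:
                      probe = barrel_origin + t * d
                      dists = np.linalg.norm(scene_points - probe, axis=1)
                      i = int(np.argmin(dists))
                      if dists[i] <= hit_radius_m:
                          return scene_points[i]  # draw the reticle here
                      t += step_m
                  return None  # ray left the reconstructed scene
              ```

              A production system would use a proper ray–mesh intersection and account for ballistics, but the core idea really is that simple.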