• Legianus@programming.dev · 9 points · 22 hours ago (edited)

    This is one team that disagrees out of many that agree.

    To explain what you are seeing: the image above is the inverse Fourier transform (FT) of the different spatial frequencies (sine waves) that compose an image.

    Very long baseline interferometry (VLBI), as applied in the Event Horizon Telescope (EHT), combines telescopes all over the world, in a technique called interferometry, to achieve high enough resolution to observe the different frequencies in Fourier space that make up an image. If you observed all of them, you could recreate the full image perfectly. They did not, but they observed for a long time and thus collected a hefty amount of these “spatial” frequencies. They then apply techniques that constrain the image to physical reality (e.g. no negative intensities/fluxes) and clean it of artefacts, before transforming it back to image space (via the inverse FT).
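To make the idea concrete, here is a minimal toy sketch in NumPy of sparse Fourier sampling plus a non-negativity constraint. This is not the EHT pipeline; the ring image, the 25 % sampling fraction, and the clipping step are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "sky": a bright ring on a 64x64 grid (a stand-in for a real image).
n = 64
y, x = np.mgrid[:n, :n]
r = np.hypot(x - n / 2, y - n / 2)
sky = np.exp(-((r - 12) ** 2) / 8)

# Full Fourier transform of the image ("visibilities" in interferometry).
vis = np.fft.fft2(sky)

# An interferometer samples only some spatial frequencies: mask out the rest.
mask = rng.random((n, n)) < 0.25          # keep roughly 25% of the Fourier plane
sparse_vis = np.where(mask, vis, 0)

# Naive reconstruction: inverse FT of the sparse data...
dirty = np.fft.ifft2(sparse_vis).real

# ...then a physical-reality constraint: intensities cannot be negative.
clean = np.clip(dirty, 0, None)
```

With the full `vis` instead of `sparse_vis`, the inverse FT would reproduce `sky` exactly; the sparse version is an approximation, which is why the cleaning and constraint steps matter.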

    Thereby, they get an actual image that approximates reality. There is no AI used at all. The researchers from Japan argued for a different approach to the data, getting a slightly different inclination in that image. This may well be because the data are still too sparse to determine the shape with certainty, but it looks to me more like they chose very different assumptions (which many other researchers do not agree with).

    Edit: They did use ML for simulations to compare their sampling of the Fourier space against.

    • Tamo240@programming.dev · 4 points · 23 hours ago

      Most of what you said is correct, but there is a final step you are missing: the image is not entirely constructed from raw data. The interferometry data are sparse, and the “gaps” are filled with mathematical solutions from theoretical models and with statistical models trained on simulation data.

      Paper: https://arxiv.org/pdf/2408.10322

      We recently developed PRIMO (Principal-component Interferometric Modeling; Medeiros et al. 2023a) for interferometric image reconstruction and used it to obtain a high-fidelity image of the M87 black hole from the 2017 EHT data (Medeiros et al. 2023b). In this approach, we decompose the image into a set of eigenimages, which the algorithm “learned” using a very large suite of black-hole images obtained from general relativistic magnetohydrodynamic (GRMHD) simulations
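The eigenimage idea the paper describes can be sketched with plain principal-component analysis. Everything below is an illustrative assumption, not PRIMO's actual code: the random ring "library" stands in for GRMHD simulation snapshots, and the 20-component cutoff is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for a library of GRMHD simulation snapshots: rings with
# varying radius and width (NOT real simulations -- illustration only).
n, n_train = 32, 200
y, x = np.mgrid[:n, :n]
r = np.hypot(x - n / 2, y - n / 2)
library = np.stack([
    np.exp(-((r - rng.uniform(6, 12)) ** 2) / rng.uniform(2, 8))
    for _ in range(n_train)
])

# "Learn" eigenimages: principal components of the flattened library.
flat = library.reshape(n_train, -1)
mean = flat.mean(axis=0)
_, _, vt = np.linalg.svd(flat - mean, full_matrices=False)
eigenimages = vt[:20]                      # keep the top 20 components

# Reconstruct a new image as the mean plus a weighted sum of eigenimages.
# With real sparse Fourier data, the coefficients would instead be fitted
# to the observed visibilities.
target = np.exp(-((r - 9) ** 2) / 4).ravel()
coeffs = eigenimages @ (target - mean)
recon = (mean + coeffs @ eigenimages).reshape(n, n)
```

The point of the decomposition is that the fit is constrained to images that look like the simulation library, which is exactly the "gap-filling" prior discussed above.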

      • Legianus@programming.dev · 3 points · 23 hours ago (edited)

        Thanks for sharing that paper. I was indeed missing that information and now agree with your earlier statement.

        I think using magnetohydrodynamic black-hole models as a basis for the ML is a better approach than the standard CLEAN the Japanese team used, though. However, both only approximate reality.
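For contrast, the CLEAN approach mentioned above can be sketched as a textbook Högbom loop: repeatedly find the brightest pixel in the residual image, record a fraction of it as a point source, and subtract the correspondingly shifted point-spread function (PSF). The Gaussian PSF, the gain value, and the single point source in the demo are illustrative assumptions, not the Japanese team's actual setup:

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, n_iter=200, threshold=1e-3):
    """Textbook Hoegbom CLEAN: repeatedly subtract a scaled, shifted PSF
    at the brightest residual pixel and record it as a point source."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    n = dirty.shape[0]
    for _ in range(n_iter):
        iy, ix = np.unravel_index(np.argmax(residual), residual.shape)
        peak = residual[iy, ix]
        if peak < threshold:
            break
        model[iy, ix] += gain * peak
        # Shift the PSF (centred in its array) onto the peak and subtract.
        shifted = np.roll(np.roll(psf, iy - n // 2, axis=0),
                          ix - n // 2, axis=1)
        residual -= gain * peak * shifted
    return model, residual

# Demo: one point source at (10, 20) observed with a Gaussian PSF.
n = 32
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((xx - n // 2) ** 2 + (yy - n // 2) ** 2) / 4.0)
dirty = np.roll(np.roll(psf, 10 - n // 2, axis=0), 20 - n // 2, axis=1)
model, residual = hogbom_clean(dirty, psf)
```

Note the contrast with the PRIMO-style approach: CLEAN assumes the sky is a collection of point sources, while the eigenimage method assumes it looks like simulated black-hole images; both are priors, which is the sense in which both "only" approximate reality.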

        • Tamo240@programming.dev · 3 points · 22 hours ago

          You’re welcome. I think calling it the output of an “AI model” triggers thoughts of the current generative image models, i.e. entirely fictional output, which is not accurate here; but it is important to recognise the difference between an image and a photo.

          I also by no means want to downplay the achievement the image represents; it’s an amazing result and deserves the praise. Defending results against criticism and confirming conclusions will always be vital parts of the scientific method.

          • Legianus@programming.dev · 3 points · 15 hours ago (edited)

            True, ML and the like used to fall under the umbrella term of AI, but I feel that with most people now using the term mostly for LLMs (or things like diffusion models, etc.), it has kinda lost that meaning to some extent…

            • wewbull@feddit.uk · 4 points · 10 hours ago

              @Tamo240@programming.dev and yourself.

              Having kicked this conversation off, I’ll just congratulate you both on a quality discussion. I’ll admit I used loose terminology in my original post, but that was mainly to get my point across to a general audience. The specificity you both went into is laudable.

              • Tamo240@programming.dev · 1 point · 5 hours ago

                Thanks, being a software engineer and working in interferometry I was familiar with some of the details - enough to want to jump in when you were getting downvoted - but I will admit I only found and read the actual paper for the first time because of this thread, as I wanted to be sure on the facts!