So, I’m self-hosting Immich. The issue is that we tend to take a lot of pictures of the same scene or thing to later pick the best one, so we can easily end up with 5–10 photos that are basically duplicates, but not quite.
Some duplicate-finding programs rate those images at 95% or higher similarity.

I’m wondering if there’s any way, probably at the file-system level, for these almost-identical images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?
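
A quick way to sanity-check the deduplication idea: block-level dedup (ZFS/btrfs style) only merges byte-identical blocks, and two separately compressed JPEGs of almost the same scene share essentially none. A minimal sketch, with made-up file names:

```python
# Count how many 4 KiB blocks two burst shots actually have in common.
# Block-level dedup can only reclaim space on byte-identical blocks.
import hashlib

BLOCK = 4096  # a typical dedup block size

def block_hashes(path):
    with open(path, "rb") as f:
        return {hashlib.sha256(chunk).digest()
                for chunk in iter(lambda: f.read(BLOCK), b"")}

a = block_hashes("IMG_0001.jpg")  # hypothetical file names
b = block_hashes("IMG_0002.jpg")
print(f"{len(a & b)} shared blocks out of {min(len(a), len(b))}")
# For two separately encoded JPEGs this is almost always 0, so
# filesystem dedup gains nothing on "almost duplicate" photos.
```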

  • tehnomad@lemm.ee · 3 months ago

    Yeah, I think the duplicate finder uses a neural network to find duplicates. I went through my wedding album, which had a lot of burst shots, and it was able to detect similar images well.

    • ShortN0te@lemmy.ml · 3 months ago

      I’d be surprised if there’s any AI involved. Finding duplicates is a solved problem.

      AI is only involved in object detection and face recognition.
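
      For example, the classic non-AI approach for near duplicates is perceptual hashing: similar images produce hashes that differ in only a few bits. A minimal sketch with the imagehash library (file names made up):

      ```python
      # pip install pillow imagehash
      from PIL import Image
      import imagehash

      h1 = imagehash.phash(Image.open("IMG_0001.jpg"))
      h2 = imagehash.phash(Image.open("IMG_0002.jpg"))

      # Subtracting ImageHash objects yields the Hamming distance; for the
      # default 64-bit hash, a distance of roughly <= 8 suggests near duplicates.
      print(h1 - h2)
      ```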

      • tehnomad@lemm.ee · 3 months ago

        I wasn’t sure if it was AI or not. According to the description on GitHub:

        > Utilizes state-of-the-art algorithms to identify duplicates with precision based on hashing values and FAISS Vector Database using ResNet152.

        Isn’t ResNet152 a neural network model? I was careful to say neural network instead of AI or machine learning.
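
        Here’s roughly what I picture that description meaning, as a sketch (not Immich’s actual code; the file names, stripping the classifier head, and cosine-via-inner-product are my assumptions):

        ```python
        # pip install torch torchvision faiss-cpu pillow
        import faiss
        import numpy as np
        import torch
        from PIL import Image
        from torchvision import models

        weights = models.ResNet152_Weights.DEFAULT
        model = models.resnet152(weights=weights)
        model.fc = torch.nn.Identity()  # drop the classifier; keep 2048-d features
        model.eval()
        preprocess = weights.transforms()

        def embed(path):
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            with torch.no_grad():
                v = model(x).numpy().astype("float32")
            faiss.normalize_L2(v)  # normalized vectors: inner product == cosine
            return v

        paths = ["IMG_0001.jpg", "IMG_0002.jpg", "IMG_0003.jpg"]
        index = faiss.IndexFlatIP(2048)  # exact inner-product search
        index.add(np.vstack([embed(p) for p in paths]))

        # Neighbors of the first image; scores near 1.0 suggest the same scene.
        scores, ids = index.search(embed(paths[0]), 3)
        print(scores[0], [paths[i] for i in ids[0]])
        ```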

        • ShortN0te@lemmy.ml · 3 months ago

          Thanks for that link.

          AI is the umbrella term for ML, neural networks, etc.

          ResNet152 seems to be used only to recognize objects in the image to help when comparing images. I was not aware of that, and I’m not sure I would classify it as an actual tool for image deduplication, but I haven’t looked at the code to determine how much they do with it.

          As of now, they still state that they want to use ML technologies in the future to help, so either they forgot to edit the README or they don’t use it.