AlphaDev uncovered new sorting algorithms that led to improvements in the LLVM libc++ sorting library: up to 70% faster for shorter sequences and about 1.7% faster for sequences exceeding 250,000 elements.

  • simple@lemmy.ml
    18 points · 1 year ago

    This is the stuff that makes me really excited for AI. Sure, having a personal assistant is nice. Generating images and music is also very cool. Optimizing software and hardware though, this is where things get amazing.

    Modern software is pretty abysmal when you think about it. In the last 20 years we’ve focused more on shipping things quickly than on making them optimized. We ended up with ultra-bloated operating systems, apps that are regularly 100 MB+, RAM-hungry programs like web browsers, and background apps that make even 8 GB of RAM not enough, and so on.

    I’m waiting for the point where AI can start optimizing legacy code and say, “Wait, this is really dumb and wastes so much energy.” Imagine Windows running on only 100 MB of RAM. Imagine apps and websites being 10x more efficient than they are now. It’s not that crazy a concept, only a matter of time.

    • Knighthawk 0811
      5 points · 1 year ago

      To be fair, sometimes the bloat is for security. But yeah, I’m with you here.

    • ericjmorey@lemmy.world (OP)
      4 points · 1 year ago

      I agree. There’s a lot we generally know automation could do but don’t pursue, because the limited development resources are allocated towards easier payouts.

  • Sibbo@sopuli.xyz
    3 points · 1 year ago

    That is 70% faster on sequences of five elements. Using this to handle the end of the recursion in LLVM’s current std::sort implementation results in a 1.7% speedup on sequences exceeding 250k elements.
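
    To make that concrete, here’s a minimal toy sketch of the idea: a fixed-size compare-exchange network used as the base case of a quicksort-style recursion. This is illustrative only, not libc++’s actual code or AlphaDev’s generated instruction sequence.

    ```cpp
    #include <iostream>
    #include <utility>
    #include <vector>

    // 3-element sorting network: three compare-exchange steps.
    // (AlphaDev's contribution was a shorter/faster instruction
    // sequence for tiny fixed-size sorts like this one.)
    template <typename T>
    void sort3(T* a) {
        if (a[1] < a[0]) std::swap(a[0], a[1]);
        if (a[2] < a[1]) std::swap(a[1], a[2]);
        if (a[1] < a[0]) std::swap(a[0], a[1]);
    }

    // Toy quicksort whose recursion bottoms out in the fixed-size sort,
    // roughly where sort3/sort4/sort5 plug into libc++'s std::sort.
    template <typename T>
    void quicksort(T* a, int n) {
        if (n <= 1) return;
        if (n == 2) { if (a[1] < a[0]) std::swap(a[0], a[1]); return; }
        if (n == 3) { sort3(a); return; }   // fixed-size base case
        T pivot = a[n / 2];
        int i = 0, j = n - 1;
        while (i <= j) {                    // Hoare-style partition
            while (a[i] < pivot) ++i;
            while (pivot < a[j]) --j;
            if (i <= j) { std::swap(a[i], a[j]); ++i; --j; }
        }
        quicksort(a, j + 1);
        quicksort(a + i, n - i);
    }

    int main() {
        std::vector<int> v{5, 3, 9, 1, 4, 8, 2, 7, 6};
        quicksort(v.data(), static_cast<int>(v.size()));
        for (int x : v) std::cout << x << ' ';   // 1 2 3 4 5 6 7 8 9
    }
    ```

    Since a divide-and-conquer sort on a large input hits these tiny base cases many times, speeding them up can still show up as a measurable win even on 250k-element sorts.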

    • greysemanticist
      2 points · 1 year ago

      I wonder if their paper has a plot of the speedups against number of elements. Did they just stop measuring at 250k? What was the shape of the curve?

      • Sibbo@sopuli.xyz
        1 point · 1 year ago

        I didn’t bother reading, not so interested in AI stuff. But since they highlight these results, it likely does not get better for higher numbers.

        • greysemanticist
          1 point · 1 year ago

          I’ll bet it just ends up limited by the memory bandwidth needed to stuff things into registers for the optimized algorithm. Or something like Mojo’s autotuning finds the best way to partition the work for the hardware.

  • Hexorg@beehaw.org (mod)
    3 points · 1 year ago

    This write-up talks about sequences of 3 and 4 items… does their full paper generalize to variable-sized lists?

    • ericjmorey@lemmy.world (OP)
      3 points · edited · 1 year ago

      From the main section of the paper published in Nature (which is available for free):

      Using AlphaDev, we have discovered fixed and variable sort algorithms from scratch that are both new and more efficient than the state-of-the-art human benchmarks. The fixed sort solutions for sort 3, sort 4 and sort 5 discovered by AlphaDev have been integrated into the standard sort function in the LLVM standard C++ library

      It seems they did find improvements for sorting variable-sized lists, but only the sort 3, sort 4 and sort 5 algorithms got implemented in LLVM.
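
      As a rough illustration of that fixed vs. variable distinction (the name and structure here are illustrative, not the paper’s): a fixed sort always receives exactly N elements, while a variable sort branches on the runtime length up to some small cap.

      ```cpp
      #include <utility>

      // Hypothetical "variable sort" for up to 3 elements: branch on the
      // length, then fall through to the same compare-exchange network a
      // fixed sort3 would use for exactly 3 elements.
      template <typename T>
      void var_sort_up_to_3(T* a, int n) {
          if (n < 2) return;                     // 0 or 1 elements: already sorted
          if (n == 2) {
              if (a[1] < a[0]) std::swap(a[0], a[1]);
              return;
          }
          // n == 3
          if (a[1] < a[0]) std::swap(a[0], a[1]);
          if (a[2] < a[1]) std::swap(a[1], a[2]);
          if (a[1] < a[0]) std::swap(a[0], a[1]);
      }
      ```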

      • Hexorg@beehaw.org (mod)
        3 points · edited · 1 year ago

        Oh I see! I didn’t realize LLVM had length-specific implementations. Thanks!!