Barack Obama: “For elevator music, AI is going to work fine. Music like Bob Dylan or Stevie Wonder, that’s different”

Barack Obama has weighed in on AI’s impact on music creation in a new interview, saying, “For elevator music, AI is going to work fine”.

  • Inmate@lemmy.world · 13 points · 1 year ago

    Because you can teach a teen to do it in two weeks. He was a constitutional law professor, as well as the first African-American elected president of the United States. I learned LLMs in a couple of months, and I never even used a computer until 2021. Why are you gatekeeping?

    • Daxtron2@lemmy.ml · 17 points · 1 year ago

      Using the end product and having any idea how it works are two VERY different things.

      • Inmate@lemmy.world · 7 points · 1 year ago

        I agree; my argument is that neither is challenging for even the average person, if they really want or need to understand how these models produce refined noise informed by human patterns.

        There are electricians everywhere you know.

        This isn’t a random person thoughtlessly yelling one-sentence nonsense pablum on the Internet like you.

        You think this person can’t understand something as straightforward as programming, coming from law?

        https://en.wikipedia.org/wiki/Barack_Obama

        Please link your Wikipedia below 🫠

        • Daxtron2@lemmy.ml · 8 points · 1 year ago

          It’s a bit more complicated than you’re making it out to be, lmfao. There’s a reason it’s only really been viable for the past few years.

          • skulkingaround@sh.itjust.works · 6 points · 1 year ago

            The principles are really easy, though. At their core, neural nets are just a bunch of big matrix multiplication operations. Training is still fundamentally gradient descent, which, while a fairly new concept in the grand scheme of things, isn’t super hard to understand.
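            To make that concrete, here’s a minimal sketch (not anyone’s production code): a two-layer net trained on XOR, where the whole “network” is two weight matrices, the forward pass is two matrix multiplications, and training is a hand-written gradient descent loop. The layer sizes, learning rate, and seed are arbitrary choices for illustration.

            ```python
            import numpy as np

            rng = np.random.default_rng(0)

            # XOR toy data: inputs and target outputs
            X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
            y = np.array([[0], [1], [1], [0]], dtype=float)

            # The entire "network" is just these two weight matrices
            W1 = rng.normal(0, 1, (2, 8))
            W2 = rng.normal(0, 1, (8, 1))

            def sigmoid(z):
                return 1.0 / (1.0 + np.exp(-z))

            lr = 1.0  # learning rate, chosen by hand for this toy problem
            for step in range(5000):
                # Forward pass: matrix multiply, nonlinearity, matrix multiply
                h = sigmoid(X @ W1)
                out = sigmoid(h @ W2)

                # Backward pass: chain rule for the squared-error loss
                d_out = (out - y) * out * (1 - out)
                d_h = (d_out @ W2.T) * h * (1 - h)

                # Gradient descent: nudge each weight matrix downhill
                W2 -= lr * (h.T @ d_out)
                W1 -= lr * (X.T @ d_h)

            print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
            ```

            Everything here is matmuls and a derivative rule; the “hard part” in practice is doing this at the scale of billions of parameters, which is where the hardware point below comes in.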

            The progress in recent years is primarily due to better hardware and optimizations at the low levels that don’t directly have anything to do with machine learning.

            We’ve also gotten a lot better at combining those fundamentals in creative ways to do stuff like GANs.