Meta says its new speech-generating AI model is too dangerous for public release.

Meta announced a new AI model called Voicebox yesterday, which it says is its most versatile speech-generation model yet, but it isn’t releasing it for now:

There are many exciting use cases for generative speech models, but because of the potential risks of misuse, we are not making the Voicebox model or code publicly available at this time. 

The model is still only a research project, but Meta says it can generate speech in six languages from samples as short as two seconds and could eventually be used for “natural, authentic” translation, among other things.

  • zekiz@kbin.social · 5 points · 2 years ago

    Why do companies that develop AI models always say such bullshit? In the end it gets released anyway.

    • eee@lemm.ee (OP) · 4 points · 2 years ago

      They say that because it’s not ready yet. Once it is ready, you can bet they’ll try to release and monetize it somehow.

    • Banzai51@midwest.social · 1 point · 2 years ago

      It’s like a car company saying its hot rod is way too fast for the public. Now lots and lots of people want it, especially those willing to pay through the nose.

  • LostCause@kbin.social · 2 points · 2 years ago

    “You can be unethical and still be legal; that’s the way I live my life.” - Mark Zuckerberg, March 5, 2004.

    So knowing that, what is the danger? I’d guess it’s that this could make news even more untrustworthy or be used to scam people, but that can’t be it, because that is merely unethical and he doesn’t care about that.

    How would releasing it hurt their bottom line? Maybe I’m putting too much thought into this and it’s just stupid hype-building.

  • Vilian@lemmy.ca · 2 points · 2 years ago

    Basically: “We created a new AI with no concern for morality and it became as soulless as our company, and we don’t want to be sued for creating something that makes 4chan look like an uplifting place to hang out with friends and family.”

    • BrooklynMan@lemmy.ml · 1 point · 2 years ago

      I think you’re close, but having seen firsthand how Facebook devs handle AI when I was in university, it’s probably more like:

      “We were screwing around with something we didn’t understand and made something more powerful than we could control, so we’ve had to shut it down for now until we figure out how to limit what it can do so we don’t set loose the universe’s Nazi-est chatbot. Also, we trained it on FB user data, so you know it’s really bad.”

  • Th4tGuyII@kbin.social · 1 point · 2 years ago

    This is literally the “She goes to another school” trope, but with AI.

    They’ve even got the demo equivalent of that “totally real” picture of “her” scraped from Google (or, in their case, likely engineered together).