• blame [they/them]
    link
    fedilink
    English
    35
    2 months ago

    Not that I disagree with your conclusion, since there’s an even simpler way to check if an app is listening: iOS and Android will tell you when the mic is being used… Anyway, we do have always-on NNs listening for keywords (“Siri”, “Hey Google”, “Alexa”), so while I agree that full-ass voice transcription like Whisper would run like dogshit on your phone, they can certainly run a much, much lighter model to pick up a handful of keywords.
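    The always-on setup being described here is roughly this shape: a tiny model scores short audio frames against a fixed keyword list, instead of transcribing anything. A minimal sketch, where `score_frame` stands in for the real neural net and all names are made up:

```python
from typing import Optional

# Illustrative keyword list; real spotters are trained on a fixed set like this.
KEYWORDS = ("siri", "hey google", "alexa")

def score_frame(frame: dict, keyword: str) -> float:
    # Stand-in for the tiny neural net: a real spotter scores ~30 ms audio
    # frames with a model small enough to run continuously on-device.
    return frame.get(keyword, 0.0)

def detect(frames, threshold: float = 0.8) -> Optional[str]:
    """Return the first keyword whose confidence crosses the threshold."""
    for frame in frames:
        for kw in KEYWORDS:
            if score_frame(frame, kw) >= threshold:
                return kw
    return None
```

    The point is that the per-frame work is a fixed, cheap classification over a handful of labels, nothing like running a general ASR model continuously.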

      • blame [they/them]
        21
        2 months ago

        To Camdat’s point, general transcription is definitely not low-power, even if you have some kind of gating on when it transcribes. Obviously Apple, Google, Samsung, or whoever makes the phone can turn on the mic without you knowing (otherwise how would their voice assistants work?), but Apple probably isn’t letting Facebook access the mic without throwing something up on the status bar.

    • Camdat [none/use name]
      13
      2 months ago

      Sure, this is definitely true. I should clarify that single-word NNs do run on-device all the time, but those require specialized models trained only on those keywords. Once a model triggers, it needs to send everything else to the cloud.
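      That trigger-then-upload split can be sketched as a gate: the cheap spotter runs on every frame, and only audio captured after a trigger ever leaves the device. `spot` and `send_to_cloud` are hypothetical stand-ins for the on-device model and the cloud ASR call:

```python
def handle_audio(frames, spot, send_to_cloud):
    """Gate the expensive path: nothing leaves the device until the
    on-device spotter fires; only post-trigger audio is uploaded."""
    triggered = False
    after_trigger = []
    for frame in frames:
        if not triggered:
            triggered = spot(frame)  # cheap keyword check on every frame
        else:
            after_trigger.append(frame)  # buffered for the cloud ASR
    return send_to_cloud(after_trigger) if triggered else None
```

      So the power cost is dominated by the tiny spotter; the heavy transcription only runs (remotely) on the slice of audio after a wake word.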

      • blame [they/them]
        15
        edit-2
        2 months ago

        I agree. If I were going to do something like this for advertising, though, I wouldn’t care much about everything people were saying; instead I’d just listen for some limited set of keywords (maybe for some of my top-paying advertisers) and serve ads for keywords that hit recently. Keep it all on-device until an ad actually needs to be served.
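        The hypothetical scheme above could look something like this: a small on-device table of advertiser keywords, a log of recent hits that never leaves the phone, and a lookup only when an ad slot opens. The keyword list, time window, and function names are all invented for illustration:

```python
# Hypothetical advertiser keyword -> ad category mapping, stored on-device.
ADVERTISER_KEYWORDS = {"vacation": "travel_ads", "mortgage": "bank_ads"}
WINDOW = 24 * 3600  # only keywords heard in the last day count

recent_hits = {}  # keyword -> timestamp of last hit; never leaves the device

def hear(keyword, now):
    """Called by the on-device spotter when a paid keyword fires."""
    if keyword in ADVERTISER_KEYWORDS:
        recent_hits[keyword] = now

def pick_ad(now):
    """Called only when an ad actually needs to be served."""
    fresh = [k for k, t in recent_hits.items() if now - t <= WINDOW]
    if not fresh:
        return None
    # Serve the category for the most recently heard keyword.
    return ADVERTISER_KEYWORDS[max(fresh, key=recent_hits.get)]
```

        The only thing that ever leaks off-device is the ad request itself, which is the point of the design being described.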