Large language models (LLMs) like GPT-4 can identify a person’s age, location, gender and income with up to 85 per cent accuracy simply by analysing their posts on social media.

But the AIs also picked up on subtler cues, like location-specific slang, and could estimate a salary range from a user’s profession and location.

Reference:

arXiv DOI: 10.48550/arXiv.2310.07298

  • rtxn@lemmy.world · 102 points · 1 year ago

    Nonintelligent pattern-based algorithm good at finding patterns, study finds.

    • AbouBenAdhem@lemmy.world · 24 points · 1 year ago

      It sounds like the reason they used Reddit was so they could easily find users who had expressly revealed the information in question, and use it to verify that the AI was accurately deducing the same info from style alone.

      • imgprojts@lemmy.ml · 2 points · 1 year ago

        They used Reddit because it has corralled dumb users. Users are no longer around anywhere else on the Internet, just here on social media. And yes, what better place to find dumb users than on Reddit!

    • 👍Maximum Derek👍@discuss.tchncs.de · 9 points · 1 year ago

      Yeah, even if I didn’t belong to a local community and a bunch of communities surrounding my profession, the amount of intrigue and fascination emanating from my comments would cause anyone to guess that I’m the Dos Equis guy.

    • chatokun@lemmy.dbzer0.com · 2 points · 1 year ago

      Same. I’m sure I’ve posted about my location, my job, my race, my history, my real first name, general details of my family makeup etc. I also have a pretty unique name so searching just my first and last name will find stuff about me anyway. I’m even listed by name in books (I was young and dumb and answered some questions about work life).

  • Kalash@feddit.ch · 48 points · 1 year ago

    You can also do that without AI. We’ve had metadata analysis for a while now.

    • KoboldCoterie@pawb.social · 35 points · 1 year ago

      Sure, but AI is the hot buzzword right now, so it’s got to be shoehorned into every discussion about technology!

      • lemmyvore@feddit.nl · 12 points · 1 year ago

        I think it’s overall a good thing if it helps laymen understand just how much privacy matters and how much can be gleaned from seemingly innocuous data online. If an “AI” label makes it hit home, cool. As long as they get it.

    • pc486@reddthat.com · 11 points · 1 year ago

      As is typical, this science reporting isn’t great. It’s not only that AI can do it effectively, but that it can do it at scale. To quote the paper:

      “Despite these models achieving near-expert human performance, they come at a fraction of the cost, requiring 100× less financial and 240× lower time investment than human labelers—making such privacy violations at scale possible for the first time.”

      They also demonstrate how interacting with an AI model can quickly extract more private info without it looking like that’s what’s happening. A game of 20 questions, except you don’t realize you’re playing.

    • phx@lemmy.ca · 5 points · 1 year ago

      Yup, and plenty of people have no issues posting about local events or joining region/city specific groups, so it’s not exactly hard to put two and two together.

      I don’t have much issue posting about the city I grew up in or former jobs, but I generally work at being fairly vague about anything current.

  • SatanicNotMessianic@lemmy.ml · 27 points · 1 year ago

    Okay, I think I must absolutely be misreading this. They started with 1500 potential accounts, then picked 500 that, by hand, they could make guesses about based on people doing things like actually posting where they live or how much they make.

    And then they’re claiming their LLMs have 85% accuracy based on that subset of data? There has to be more than this. Were they 85% on the full 1500? How did they confirm that? Was it just on the 500? Then what’s the point?

    There was a study on Facebook that showed that they could predict with between 80-95% accuracy (or some crazy number like that) your gender, orientation, politics, and so on just based on your public likes. That was ten years ago at least. What is this even showing?

    • cucumber_sandwich@lemmy.world · 6 points · 1 year ago

      There was a study on Facebook that showed that they could predict with between 80-95% accuracy (or some crazy number like that) your gender, orientation, politics, and so on just based on your public likes. That was ten years ago at least. What is this even showing?

      Devil’s advocate: that a large language model can do it without extra training, I guess. The Facebook study presented a statistical model on “like space” while this study relies on text alone, a much less structured type of input.

      I’m not saying it’s a good study. Just pointing out some differences.

    • P03 Locke@lemmy.dbzer0.com · 2 points · 1 year ago

      SnoopSnoo was able to pick out phrases from Reddit posters based on declarative statements they made in their posts, and that site has been down for years.

  • aviationeast@lemmy.world · 25 points · 1 year ago

    I’m just gonna put it out there that I live in the state of Georgia, I work for an office supply company as a coordinator making $153,000 a year working 30 hours a week.

  • Infynis@midwest.social · 13 points · 1 year ago

    My city’s subreddit did a thread a while back asking people what they were making in the area for what jobs, to try to crowdsource salary transparency. So this is not very impressive lol

  • trolololol@lemmy.world · 13 points · 1 year ago

    Anyone can estimate salary from profession and location. That’s not a bot, that’s a salary matrix.
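
In the spirit of that comment, the “salary matrix” is just a lookup table. A minimal sketch, with invented professions, locations, and figures (nothing here comes from the article or the paper):

```python
# Toy "salary matrix": a plain lookup table keyed by (profession, location).
# Every key and figure below is an invented placeholder.
SALARY_MATRIX = {
    ("software engineer", "seattle"): (110_000, 180_000),
    ("office coordinator", "georgia"): (35_000, 55_000),
    ("teacher", "ohio"): (40_000, 70_000),
}

def estimate_salary_range(profession: str, location: str):
    """Return a (low, high) estimate, or None if the pair isn't in the table."""
    return SALARY_MATRIX.get((profession.lower(), location.lower()))

print(estimate_salary_range("Office Coordinator", "Georgia"))  # (35000, 55000)
```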

  • guyrocket@kbin.social · 12 points · 1 year ago

    I wonder how long it will take for the media to get past the “AI is GOD DAMN AMAZING” phase and start real journalism about AI.

    Seriously, neural networks have existed since the 1990s. The tech is not all that amazing, really.

    Find someone that can explain what’s going on inside a neural net. Then I’ll be impressed.

    • TheChurn@kbin.social · 13 points · 1 year ago

      Explaining what happens in a neural net is trivial. All they do is approximate (generally) nonlinear functions with a long series of multiplications and some rectification operations.

      That isn’t the hard part; you can track all of the math at each step.

      The hard part is stating a simple explanation for the semantic meaning of each operation.

      When a human solves a problem, we like to think that it occurs in discrete steps with simple goals: “First I will draw a diagram and put in the known information, then I will write the governing equations, then simplify them for the physics of the problem”, and so on.

      Neural nets don’t appear to solve problems that way; each atomic operation does not have that semantic meaning. That is the root of all the reporting about how they are such ‘black boxes’ and researchers ‘don’t understand’ how they work.
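
As a minimal sketch of the “multiplications and rectification” description above (random, untrained weights and made-up layer sizes; illustrative only, not code from the paper):

```python
import numpy as np

# Minimal two-layer feed-forward network: multiply, rectify, multiply.
# The weights are random placeholders; a trained net would have learned values.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # 8 inputs -> 16 hidden units
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)    # 16 hidden units -> 1 output

def relu(v):
    """Rectification: zero out the negative components."""
    return np.maximum(v, 0.0)

def forward(x):
    """Every step is plain, inspectable arithmetic."""
    hidden = relu(W1 @ x + b1)
    return W2 @ hidden + b2

print(forward(rng.normal(size=8)))  # one output from random, untrained weights
```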

      • 𝒍𝒆𝒎𝒂𝒏𝒏 · 5 points · 1 year ago

        When a human solves a problem, we like to think that it occurs in discrete steps with simple goals: “First I will draw a diagram and put in the known information, then I will write the governing equations, then simplify them for the physics of the problem”, and so on.

        I wonder how our brain even comes to formulate these steps in a way we can comprehend; the number of neurons and zones firing on all cylinders seems tiring to imagine.

      • ComradeSharkfucker@lemmy.ml · 5 points · 1 year ago

        Yeah, but most people don’t know this and have never looked. It seems way more complex to the layman than it is, because instinctively we assume that anything that accomplishes great feats must be incredibly intricate.

  • jiberish@lemmy.world · 7 points · 1 year ago

    Anyone can guess anything! Give it a try!

    I can guesstimate the number of turkeys it would take to fill any given space. It’s my superpower.

  • Rentlar@lemmy.ca · 4 points · 1 year ago

    Well, if you look at the subreddits where a Redditor posts and there’s a lot of r/Seattle or Washington State, then it’s not that hard to deduce.

    Although I try to leave a mild aura of mystery around my personal life, it wouldn’t be hard to snoop around a bit to find details here and there about me.

  • atocci@kbin.social · 2 points · 1 year ago

    I tried asking Bing to make an assumption about who I am based on my Reddit account, and it wrote a nonsensical made-up story. Maybe I could have phrased it better?

    • Nilz@sopuli.xyz · 3 points · 1 year ago

      This is basically what an LLM does, making up stories that might seem correct.

      • TheMurphy@lemmy.world · 1 point · 1 year ago

        It’s statistics, basically.

        People have to remember that when they think it’s an all-in-one solution. AI is very powerful, but it comes with very real limitations.
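
As a toy picture of the “statistics” the comment above refers to: a language model assigns a probability to each possible next token and samples one. The vocabulary and probabilities below are invented, not taken from any real model:

```python
import random

# Invented next-token distribution for the context "I live in".
# A real model derives such a distribution from patterns in its training data.
NEXT_TOKEN_PROBS = {
    "Seattle": 0.30,
    "Georgia": 0.25,
    "a": 0.25,
    "the": 0.20,
}

def sample_next_token(probs):
    """Pick one continuation at random, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print("I live in", sample_next_token(NEXT_TOKEN_PROBS))
```

A plausible continuation wins whether or not it is true, which is why plausible-sounding but unverified stories are the default output rather than a malfunction.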