Here’s a brief summary, though you’ll miss something if you don’t read the study (trigger warning: stats):

  • The researchers propose a novel incentive structure that significantly reduced the spread of misinformation in their experiments, and they provide insights into the cognitive mechanisms that make it work. The structure could be adopted by social media platforms at no cost.

  • The key was to offer reaction buttons that participants were likely to use in a way that distinguished between true and false information. Users who found themselves in such an environment shared more true than false posts.

  • In particular, the study used ‘trust’ and ‘distrust’ reaction buttons, which, in contrast to ‘likes’ and ‘dislikes’, are by definition associated with veracity. For example, the study authors note, a person may dislike a post about Joe Biden winning the US presidential election, but that does not necessarily mean they think it is untrue.

  • Study participants used the ‘trust’ and ‘distrust’ buttons in a more discerning manner than the ‘like’ and ‘dislike’ buttons. This created an environment in which the number of social rewards and punishments, in the form of clicks, was strongly associated with the veracity of the information shared (a toy sketch of this reward signal follows the list below).

  • The findings also held across a wide range of topics (e.g., politics, health, science) and a diverse sample of participants, suggesting that the intervention is not limited to a particular set of topics or users but instead relies more broadly on the underlying mechanism of associating veracity with social rewards.

  • The researchers conclude that the new structure reduces the spread of misinformation and may help correct false beliefs. Because it still relies on user engagement, it does so without drastically diverging from the existing incentive structure of social media networks. Thus, this intervention may be a powerful addition to existing interventions such as educating users on how to detect misinformation.
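
To make that reward signal concrete, here’s a toy sketch of my own (not the study’s code or design), assuming made-up per-post reaction counts: it computes net trust as trusts minus distrusts, which tracks veracity only insofar as users press those buttons discerningly.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    is_true: bool        # ground-truth veracity, known only in an experimental setting
    likes: int = 0
    dislikes: int = 0
    trusts: int = 0
    distrusts: int = 0

def social_reward(post: Post) -> int:
    """Net feedback from the veracity-linked buttons (trusts minus distrusts)."""
    return post.trusts - post.distrusts

# Hypothetical posts with made-up reaction counts.
posts = [
    Post("Accurate news report", is_true=True, likes=120, dislikes=30, trusts=90, distrusts=5),
    Post("Viral false claim", is_true=False, likes=150, dislikes=40, trusts=10, distrusts=80),
]

for p in posts:
    # Net likes favor the false post here, while net trust favors the true one.
    print(f"{p.text}: net likes = {p.likes - p.dislikes}, net trust = {social_reward(p)}")
```

The study’s claim, as I read it, is that because this kind of feedback lines up with veracity, sharing true content is what gets socially rewarded.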

  • alyaza [they/she]
    1 year ago

    For example, if a user comes across a post that’s been voted as 90% true, they’ll probably be like “I don’t need to think critically about this because the community says it’s true, which means it must be true.”

    yeah, it’s an interesting (and i’m not necessarily sure solvable) question of how you can design something to usefully combat misinformation which won’t itself eventually enshrine or be gamed to promote misinformation in a website context. twitter’s context feature is only selectively useful and a drop in the bucket. youtube has those banners on certain subjects, but i’d describe them as basically useless to anyone who already believes misinfo.

    • @BurningnnTree
      1 year ago

      I read a really good book called The Chaos Machine by Max Fisher, which talked about how political division in America (and the rest of the world) has been shaped by social media companies. He argued that it mostly comes down to content recommendation algorithms. Social media companies like to promote divisive and controversial content because it leads to increased engagement and ad revenue. Labeling news as fake isn’t going to help, when the algorithm itself is designed to promote attention-grabbing (fake) news.

      If Twitter wants to solve the issue of misinformation, the solution is simple: turn off all content recommendation and just show people posts from the people they follow, sorted from newest to oldest. But unfortunately that will never happen, because it would cause a massive decline in user engagement.
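
      To illustrate, here’s a rough sketch of that kind of feed, assuming a hypothetical follow graph and post list rather than any real platform’s API: it just filters to accounts the viewer follows and sorts newest-first, with no ranking or recommendation step.

      ```python
      from dataclasses import dataclass
      from datetime import datetime

      @dataclass
      class Post:
          author: str
          text: str
          created_at: datetime

      def chronological_feed(viewer: str, follows: dict, posts: list) -> list:
          """Posts by accounts the viewer follows, newest first -- no recommendation logic."""
          followed = follows.get(viewer, set())
          return sorted(
              (p for p in posts if p.author in followed),
              key=lambda p: p.created_at,
              reverse=True,
          )

      # Hypothetical usage
      follows = {"alice": {"bob", "carol"}}
      posts = [
          Post("bob", "hello", datetime(2023, 7, 1)),
          Post("mallory", "viral outrage bait", datetime(2023, 7, 2)),  # not followed, never shown
          Post("carol", "news link", datetime(2023, 7, 3)),
      ]
      for p in chronological_feed("alice", follows, posts):
          print(p.created_at.date(), p.author, p.text)
      ```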