I think this has happened before. There are accounts of people who completely lost touch with reality after getting involved with certain scammers, cult leaders, self-help gurus, “life coaches”, fortune tellers, or the like. However, those perpetrators were real people who could only handle a limited number of victims at any given time, and they presumably had their own specific methods and strategies that wouldn’t work on everybody, not even on all of the people who might have been most susceptible. ChatGPT, on the other hand, can do this at scale. It was also probably trained on the websites and public utterances of every scammer, self-help author, (wannabe) cult leader, life coach, cryptobro, MLM peddler, etc. available, which allows it to generate whatever response works best to keep people “hooked”. In my view, this alone is a cause for concern.
It’s also a case where I think the lack of intentionality hurts. I’m reminded of the way the YouTube algorithm contributed to radicalization by feeding people steadily more extreme versions of what they had already selected. The algorithm was (and is) just trying to pick the video you would be most likely to click on next, but in doing so it ended up pushing people down the sales funnel toward outright white supremacy, because which videos you were shown actually shaped which video you would choose to click next. Of course, since the videos were user-supplied content, creators started taking advantage of that tendency with varying degrees of success, but the algorithm itself wasn’t “secretly fascist”; in the same way it would, over time, push people deeper into other rabbit holes, whether that meant obscure horror games, increasingly unhinged rage video collections, or generally everything that was once called “the weird part of YouTube.”
ChatGPT and other bots don’t have failed academics and comedians trying to turn people into Nazis, but they have a similar lack of underlying anything, and that means that, unlike a cult with a specific ideology, they are always trying to create the next part of the story you most want to hear. We’ve seen versions of this that go down a conspiracy thriller route, a cyberpunk route, a Christian eschatology route, even a romance route. Like, it’s pretty well known that there are ‘cult hoppers’ who will join a variety of different fringe groups because there’s something about being in a fringe group that attracts them. But there are also people who would never join Scientology, or the Branch Davidians, or CrossFit, yet might sign on with Jonestown or QAnon with the right prompting. LLMs, by virtue of trying to predict the next series of tokens rather than actually having any underlying thoughts, will, on a long enough timeframe, lead people down whatever rabbit hole they might be inclined to follow, and for a lot of people, even otherwise mentally healthy people, that includes some very dark and dangerous places.