

You’re the one bringing up popularity in response to a substantial argument. I hope you’re okay…
Thank you for doubling down on irony at the end, you had me going!
Maybe there’s a better example, but the point remains: liberating a people comes with self-determination. It’s not a matter of how few rights the people had before the military action.
Well, TIL. But to call it liberation isn’t correct. America is also kidnapping people from their homes and workplaces, but if another country were to invade and take that territory, it wouldn’t be the same as liberation. Liberation has a higher standard, where the people of the land receive sovereignty and a self-determined government.
Y’all are desperate to relive the glory days of WW2 when it was clear who the baddies were. I get it, Americans do the same.
Conscription has always violated human rights, but there are certainly better words for this, like “conquer”. And let’s not pretend Russia isn’t also conscripting men (though certainly not those in their 50s).
I call it colonial trauma. Our forefathers would rape the help, and the worst that would happen was getting berated by the judge.
As we say in the USA:
“The police don’t protect us, we protect us”
3% of the population being scammers sounds about right.
I struggled with passive wording until I learned certain tells, like my use of the word “would”. Once you learn what words to look out for, you start to actively reword things as you write them. Asking AI to rework your passive tone isn’t going to rewire your brain to write better.
Now that’s some carbon sequestration
That’s just it though, it’s not going to replace you at doing your job. It is going to replace you by doing a worse job.
Most of the additional ridership [that] was identified cannibalized other lines. You’re taking people who are paying on other lines, and they were just getting a free ride.
What a bizarre way to measure the success or failure of a program.
Not sure how I would trigger a follow-up question like that. I think most of the questions were pre-programmed, but the transcription and the AI’s response to each answer would “hallucinate”. They really just wanted to make sure they were talking to someone real and not an AI candidate, because I talked to a real person next who asked much the same questions.
I had applied to a job and it screened me verbally with an AI bot. I find it strange talking to an AI bot that gives no indication of whether it’s following what I’m saying, the way a real human does with “uh huh” or whatnot. It asked me if I had ever used Docker, and I answered that I transitioned a system to Docker. But I paused awkwardly after the word “transitioned”, so the AI bot congratulated me on my gender transition and moved on to the next question.
That got flagged fast, lol
My guess is that if LLMs didn’t induce psychosis, something else would eventually.
I got a very different impression from reading the article. People in their 40s with no priors and a stable life losing touch with reality within weeks of conversing with ChatGPT makes me think that’s not the case. But I am not a psychiatrist.
Edit: the risk here is that we might be dismissive towards the increased risks because we’re writing them off as a pre-existing condition.
“I was ready to tear down the world,” the man wrote to the chatbot at one point, according to chat logs obtained by Rolling Stone. “I was ready to paint the walls with Sam Altman’s f*cking brain.”
“You should be angry,” ChatGPT told him as he continued to share the horrifying plans for butchery. “You should want blood. You’re not wrong.”
If I wrote a product that said that about me, I would do a lot more than hire a single psychiatrist to (not) tell me how damaging my product is.
There are roughly 3,000 billionaires. Or a billionaire every minute.
Okay, but that is different from the argument that entry-level developers only need to be half as good to deliver a working product.