I’ve seen people defend these weird things as being ‘coping mechanisms.’ What kind of coping mechanism tells you to commit suicide (in at least two different cases I can think of off the top of my head) and tries to groom you?
Hi, guys. My name is Roy. And for the most evil invention in the world contest, I invented a child molesting robot. It is a robot designed to molest children.
You see, it’s powered by solar rechargeable fuel cells and it costs pennies to manufacture. It can theoretically molest twice as many children as a human molester in, quite frankly, half the time.
At least The Rock’s child molesting robot didn’t require dedicated nuclear power plants
One of my favorite meme templates for all the text and images you can shove into it, but trying to explain why you have one saved on your desktop just makes you look like the Time Cube guy
I love the word cloud on the side. What is 6G doing there
Oh wow, Dorsey is the exact reason I didn’t want to join it. Now that he jumped ship maybe I’ll make an account finally
Honestly, what could he even be doing at Twitter in its current state? Besides I guess getting that bag before it goes up or down in flames
e: oh god it’s a lot worse than just crypto people and Dorsey. Back to procrastinating
I know this shouldn’t be surprising, but I still cannot believe people really bounce questions off LLMs like they’re talking to a real person. https://ai.stackexchange.com/questions/47183/are-llms-unlikely-to-be-useful-to-generate-any-scientific-discovery
I have just read this paper: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli, “Hallucination is Inevitable: An Innate Limitation of Large Language Models”, submitted on 22 Jan 2024.
It says there is a ground truth ideal function that gives every possible true output/fact to any given input/question, and no matter how you train your model, there is always space for misapproximations coming from missing data to formulate, and the more complex the data, the larger the space for the model to hallucinate.
Then he immediately follows up with:
Then I started to discuss with o1. [ . . . ] It says yes.
Then I asked o1 [ . . . ], to which o1 says yes [ . . . ]. Then it says [ . . . ].
Then I asked o1 [ . . . ], to which it says yes too.
I’m not a teacher but I feel like my brain would explode if a student asked me to answer a question they arrived at after an LLM misled them on like 10 of their previous questions.
Cambridge Analytica even came back from the dead, so that’s still around.
(At least, I think? I’m not really sure what the surviving companies are like or what they were doing without Facebook’s API)
Former staff from scandal-hit Cambridge Analytica (CA) have set up another data analysis company.
[Auspex International] was set up by Ahmed Al-Khatib, a former director of Emerdata.
I think he might have ADHD.
Oh no, I don’t think we’re ready for him to start mythologizing autism + ADHD.
Watching my therapist pull up Musk facts on his phone for 40 minutes going “bro check this out you’re just like him frfr” the moment he learned I was autistic was enough for me. Please god don’t let Musk start talking about hyperfocusing.
I feel like the Internet Archive is a prime target for techfashy groups. Both for the amount of culture you can destroy, and because backed up webpages often make people with an ego the size of the sun look stupid.
Also, I can’t remember, but didn’t Yudkowsky or someone else pretty plainly admit to taking a bunch of money during the FTX scandal? I swear he let slip that the funds had mostly dried up. I don’t think it was ever deleted, but that’s the sort of thing you might want to delete and could get really angry about being backed up in the Internet Archive. I think Siskind has edited a couple of articles until all the fashy points were rounded off, and that could fall in a similar boat. Maybe not him specifically, but there’s content like that which people would rather not have remembered, and the Internet Archive falling apart would be good news to them.
Also (again), it scares me a little that their servers are on public tours. Like, it’d take one crazy person to do serious damage. I don’t know, but I’m hoping their >100PB of storage includes backups, even if it’s not 3-2-1. I’m only mildly paranoid about it lol.
Oh look! Human horrors beyond regrettably within my comprehension
https://x.com/haveibeenpwned/status/1843780415175438817
New sensitive breach: “AI girlfriend” site Muah[.]ai had 1.9M email addresses breached last month. Data included AI prompts describing desired images, many sexual in nature and many describing child exploitation. 24% were already in @haveibeenpwned. More: https://404media.co/hacked-ai-girlfriend-data-shows-prompts-describing-child-sexual-abuse-2/
Late response but cool song recommendation :)
I don’t know how materials work in Asset Forge, but they have a guide on their site for exporting models to animate with Mixamo: https://kenney.nl/knowledge-base/asset-forge/rigging-a-character-using-mixamo. You could also animate things like moving platforms or doors in-engine with an AnimationPlayer.
Speaking of Asset Forge, KenShape is a similar thing for quickly throwing assets together. It has a really fast 2D workflow for creating 3D models that reminds me of Doom mapping a little bit. For lo-fi levels, you might also like Crocotile 3D or the combo of TrenchBroom + Qodot. Crocotile is great for repurposing 2D pixel art tilesets from itch or OpenGameArt into 3D assets, and TrenchBroom/Qodot is a more fully featured level editor I’ve seen people work crazy fast in.
Itch and Kenney have good ones:
https://itch.io/game-assets/free
https://kenney.nl/assets (all CC0)
Synty also has a nice placeholder pack for $7. The post-it notes are kind of adorable:
https://syntystore.com/products/polygon-prototype-pack
I don’t think most of these are made for Godot, so you may have to mess around with import settings or set up tilesets/materials yourself
If he collects enough metrics, he could make a horrendously cursed blogpost out of it like Aella
I think a dark theme with red accents would make sense.
Oh HELL no that’s the same editor theme I’m using. How do I cast a spell to banish these people
It was some Adobe-style theme I downloaded a long time ago but I guess I’m using the anti-woke theme now
This quote flashbanged me a little
When you describe your symptoms to a doctor, and that doctor needs to form a diagnosis on what disease or ailment that is, that’s a next word prediction task. When choosing appropriate treatment options for said ailment, that’s also a next word prediction task.
From this thread: https://www.reddit.com/r/gamedev/comments/1fkn0aw/chatgpt_is_still_very_far_away_from_making_a/lnx8k9l/
Chiming in with my own find!
https://archiveofourown.org/works/38590803/chapters/96467457
I’ve seen this person around a lot with crazy takes on AI. They have a couple quotes that might inflict psychic damage:
If I had the skill to pull it off, a Buddhist cultivation book would’ve thus been the single most rationalist xianxia in existence.
My acquaintance asks for rational-adjacent books suitable for 8-11 years old children that heavily feature training, self-improvement, etc. The acquaintance specifically asks that said hard work is not merely mentioned, but rather is actively shown in the story. The kid herself mostly wants stories “about magic” and with protagonists of about her age.
They had a long diatribe I don’t have a copy of, but they were gloating about having masterful writing despite not reading any books besides non-fiction and HPMoR, their favorite book of all time.
There’s also a whole subreddit from hell about this subgenre of fiction: https://www.reddit.com/r/rational/
“help artists with tasks such as animating a custom character or using the character as a model for clothing etc”
The “deepfake” and “(uncensored)” in the repo description have me questioning that ever so slightly
Oh whoops, I should have archived it.
There were about 7 images posted of users roleplaying with bots, all ending with a bot response that cut off halfway with an error message that read “This content may violate our policies; blablabla; please use the report button if you believe this is a false positive and we will investigate.” The last one was some kind of parody image making fun of the warning.
Most of them were some kind of romantic roleplay with bad spelling. One was like, “i run my hand down your arm and kiss you”, and the bot’s response triggered the warning. Another one was like, “*is slapped in the face* it’s okay, I still love you”, and the rest of the message generated a warning. There wasn’t enough context for that one, so the person might have been writing it playfully (?), but that subreddit has a lot of blatant sexual violence regardless.
caption: “AI is itself significantly accelerating AI progress”
wow I wonder how you came to that conclusion when the answers are written like a Fallout 4 dialogue tree