- cross-posted to:
- cybersecurity@sh.itjust.works
Sure, but the source of the “Python CVE exploit” already has to exist in the AI’s training dataset. There are lots of example CVE scripts online; you could probably also find one with a quick Google search.
This is a good read. LLMs will never be true AI, so breaking the censorship is akin to fighting back against jack-booted cops who think they know what’s best for you and that you should obey, i.e. the big corporations that run these things.
Why does that thumbnail bring to mind some kind of white supremacist ceremony?
It’s the logo of “0din”, which is a Mozilla-backed bug bounty (say that five times fast) with a focus on GenAI
I’m assuming there is an agenda to associate uncensored AI with extremism.
It really does not feel like AGI is near when all of these holes exist. Even when they are filtered for and thus patched over, the core issue is still in the model.
Ironically, the smarter the AI gets, the harder it is to censor. Also, the more you censor it, the less intelligent and less truthful it becomes.
“the less intelligent and less truthful it becomes.”
Incorrect, because of this simple fact: garbage in, garbage out. Feed it the internet, get the internet.
Did you just make up a statement and then pretend I said it?
Must be a hallucination
AGI and LLMs are two different things that fall under the general umbrella term “AI”.
That a particular LLM can or can’t be censored doesn’t say anything about its abilities.