

Not any lazier. Script kiddies didn’t write the code themselves, either.
You should try watching the live action series next - I bet you’d love it.
I’ve been meaning to look into Audiobookshelf! Do you happen to know if it can be integrated with a TTS provider to narrate your ebooks, or does it only support listening to audiobooks you’ve uploaded?
What would moving your account here from Reddit entail? What would transfer with you? Your posts? Comments? Followed and moderated subreddits? Upvotes and downvotes?
I think the better question than “Does the experience system sound like it has potential,” then, is “Does the overall concept / system have potential?”
My gut says probably, but it depends a lot on what you’re willing to put into it and what you want out of it. What’s your metric for success? If it’s something you want to run yourself and share online for a few groups to use, that’s a lot more achievable than landing a publishing deal, for example. In between, publishing on DriveThruRPG or something similar at a nominal cost (like $2-$5) would take more effort than the former and less than the latter; and the higher the cost and the larger the player base you’d want, the more effort you need to put in (and a lot of that isn’t just system building, but art, community building, marketing, etc.).
From what you’ve shared, it sounds like an interesting system. I could especially see it working in an academy setting where grinding skills to pass practical exams is one of the players’ goals. I could also see it working well in a loosely GMed play-by-post game, with the players self-enforcing (or possibly leveraging tools built into the site to track resource pools, experience, rolling, etc.), though I haven’t played in a forum game myself, so I might be way off-base.
Did your system have classes or was it completely free-form in terms of gaining access to those skill trees?
I run a Monster of the Week game and my players get experience throughout sessions, as well as at the end. The mechanics are basically: whenever you roll a miss (a total of 6 or less), you mark experience on the spot, and everyone marks experience again at the end of the mystery.
I think other PbtA (Powered by the Apocalypse - systems inspired by Apocalypse World) systems do something similar.
I grew increasingly frustrated with the system of only distributing advancement/experience points at the end of a session.
Isn’t the simple fix to this to just distribute experience points as soon as they’re earned?
At some point, I started to devise a play system that relied on a split experience attribution system, with players able to automatically rack up experience points from directly using their skills/abilities, while the DM would keep a tally of points from goals/missions achieved, distributable at session end.
Your system sounds like the way that skill-based video game RPGs (Elder Scrolls games and Arcanum come to mind) handle experience.
In a lot of games I’ve played, I’d rather get experience for in-game accomplishments immediately and to be able to train skills like this during downtime - generally between games.
To those with more experience in TTRPGs: would this be feasible? Or enticing? Interesting?
I could see people being interested in it. You get instant gratification and a bit of extra crunchiness. A lot of players enjoy that.
With the right skill system I could see this being useful. My main concern is that if you put this on top of a system with relatively few skills, it could encourage people to game it by grinding. There are ways to mitigate that, though.
In a system with fewer skills, instead of just being experience points, the “currency” you earned this way could be used for temporary power ups related to the skill in question.
You could also limit it so you only rewarded players for story-related tasks.
Copied from the post:
You may have seen reports of leaks of older text messages that had previously been sent to Steam customers. We have examined the leak sample and have determined this was NOT a breach of Steam systems.
We’re still digging into the source of the leak, which is compounded by the fact that any SMS messages are unencrypted in transit, and routed through multiple providers on the way to your phone.
The leak consisted of older text messages that included one-time codes that were only valid for 15-minute time frames and the phone numbers they were sent to. The leaked data did not associate the phone numbers with a Steam account, password information, payment information or other personal data. Old text messages cannot be used to breach the security of your Steam account, and whenever a code is used to change your Steam email or password using SMS, you will receive a confirmation via email and/or Steam secure messages.
You do not need to change your passwords or phone numbers as a result of this event. It is a good reminder to treat any account security messages that you have not explicitly requested as suspicious. We recommend regularly checking your Steam account security at any time at
We also recommend setting up the Steam Mobile Authenticator if you haven’t already, as it gives us the best way to send secure messages about your account and your account’s safety.
Assuming you’re using ollama (is there another reason to use ollama.com?), you can use compatible files from huggingface directly in ollama. The model page will give you the instructions for the command to run; I always change `ollama run` to `ollama pull`, though. Instructions: https://huggingface.co/docs/hub/ollama
You should be able to fit Qwen3 32B at `Q4_K_M` with an acceptable context, and it did very well on math benchmarks (with thinking enabled). You can disable thinking by including `/no_think` at the end of your prompt to speed up responses, but I’m not sure how well it handles math under those circumstances. I wouldn’t even consider disabling thinking unless you were grading one question per prompt.
The ollama Qwen3 page is https://ollama.com/library/qwen3:32b and the default 32B quant is `Q4_K_M`. I personally am using the `Q6_K` quant by unsloth, and their quants have been great (when supported by ollama), often being the first to fix bugs impacting other quantizations.
I’m not sure if `Q4_K_M` is the optimal quant style for Intel Arc, but the others that might be better are not supported by ollama anyway, as far as I know.
Qwen3’s real world knowledge is bad, so if there are questions that rely on that you may need to include the relevant facts as part of the prompt or use an ollama frontend that supports web searches.
Other options: This does seem like something Gemma3 27B would be good at, so it’s too bad you can’t use it. Older Gemmas may be good, but I’m not sure. Llama3.3 70B is also out, unless you have a decent amount of system RAM and are okay with offloading less than half to GPU. I could see it outperforming my recommendation above, but I would be very surprised if the 8B version outperformed it. Older Qwen2.5 is decent at math, but it doesn’t include thinking unless you grab QwQ.
I’d recommend Colemak Mod-DH, personally - it seems ergonomically superior and switching later is a bit of a pain.
Retrieval-Augmented Generation (RAG) is probably the tech you’d want. It basically involves building a knowledge library from the documents you upload, which is indexed and then searched for relevant passages when you ask questions.
NotebookLM by Google is an off-the-shelf tool that specializes in this, but you can upload documents to ChatGPT, Copilot, Claude, etc., and get the same benefit.
If you self-host, Open WebUI with Ollama supports this, but it’s far from the only option.
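If you’re curious what’s happening under the hood, here’s a minimal sketch of the embed-retrieve-prompt loop, assuming the sentence-transformers package. The model name, sample chunks, and the final prompt step are illustrative only, not any particular tool’s internals:

```python
# Minimal RAG sketch: embed document chunks, retrieve the most relevant
# ones for a question, and hand them to a model as context.
# Assumes `pip install sentence-transformers numpy`; model name and the
# generation step are placeholders, not a specific tool's API.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

# In a real tool, these chunks come from your uploaded documents.
chunks = [
    "Jellyfin's default HTTP port is 8096.",
    "RAG retrieves relevant text before generating an answer.",
    "Ollama can pull GGUF models directly from Hugging Face.",
]
chunk_vecs = embedder.encode(chunks, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q_vec  # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [chunks[i] for i in top]

question = "What port does Jellyfin use?"
context = "\n".join(retrieve(question))
# The retrieved context gets prepended to the prompt you send to
# whatever LLM you're using (ChatGPT, a local Ollama model, etc.).
print(f"Context:\n{context}\n\nQuestion: {question}")
```

Real tools layer chunking strategies, vector databases, and reranking on top of this, but the core loop is just similarity search plus prompt stuffing.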
Wow, there isn’t a single solution in here with the obvious answer?
You’ll need a domain name. It doesn’t need to be paid - you can use DuckDNS. Note that whoever hosts your DNS needs to support dynamic DNS. I use Cloudflare for this for free (not their other services) even though I bought my domains from Namecheap.
Then, you can either set up Let’s Encrypt on device and have it generate certs in a location Jellyfin knows about (not sure what this entails exactly, as I don’t use this approach) or you can do what I do:
On your router, forward port 443 to the secure port on your Pi (which for simplicity’s sake should also be port 443). You’ll likely also need to forward port 80 so Let’s Encrypt can complete its HTTP verification.
If you want to use Jellyfin while on your network and your router doesn’t support NAT loopback requests, then you can use the server’s IP address and expose Jellyfin’s HTTP port (8096 by default) - just make sure not to forward that port from the router. You’ll have local unencrypted transfers if you do this, though.
Make sure you have secure passwords in Jellyfin. Note that you are vulnerable to a Jellyfin or Traefik vulnerability if one is found, so make sure to keep your software updated.
If you use Docker, I can share some config info with you on how to set this all up with Traefik, Jellyfin, and a dynamic DNS service as docker-compose services.
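As a rough preview, here’s a minimal docker-compose sketch of the Traefik + Jellyfin half. The domain, email, and volume paths are placeholders, and the dynamic DNS updater (e.g., a DuckDNS container) would be a third service I’ve omitted:

```yaml
# Minimal sketch: Traefik terminates TLS with Let's Encrypt and proxies
# to Jellyfin. Domain, email, and volume paths are placeholders.
services:
  traefik:
    image: traefik:v3.0
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=you@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.httpchallenge=true
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
    ports:
      - "80:80"    # Let's Encrypt HTTP challenge
      - "443:443"  # the port you forward on your router
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - ./jellyfin-config:/config
      - ./media:/media
    labels:
      - traefik.enable=true
      - traefik.http.routers.jellyfin.rule=Host(`jellyfin.example.com`)
      - traefik.http.routers.jellyfin.entrypoints=websecure
      - traefik.http.routers.jellyfin.tls.certresolver=le
      - traefik.http.services.jellyfin.loadbalancer.server.port=8096
```

This uses Let’s Encrypt’s HTTP challenge on port 80, matching the port-forwarding setup above, and Traefik proxies 443 to Jellyfin’s internal 8096.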
Why should we know this?
Not watching that video for a number of reasons, namely that ten seconds in they hadn’t said anything of substance, their first claim was incorrect (Amazon does not prohibit use of gen ai in books, nor do they require its use be disclosed to the public, no matter how much you might wish it did), and there was nothing in the description of substance, which in instances like this generally means the video will largely be devoid of substance.
What books is the Math Sorcerer selling? Are they the ones on Amazon linked from their page? Are they selling all of those or just promoting most of them?
Why do we think they were generated with AI?
When you say “generated with AI,” what do you mean?
And what’s the result? Are the books misleading in some way? That’s the most legitimate actual concern I can think of (I’m sure the people screaming that AI isn’t fair use would disagree, but if that’s the concern, settle it in court).
Look up “LLM quantization.” The idea is that each parameter is a number; by default they use 16 bits of precision, but if you scale them down to smaller sizes, you use less space and have less precision, but you still have the same parameters. There’s not much quality loss going from 16 bits to 8, but it gets more noticeable as you go lower and lower. (That said, there are ternary models being trained from scratch that use 1.58 bits per parameter and are allegedly just as good as fp16 models of the same parameter count.)
If you’re using a 4-bit quantization, then you need about half the parameter count in VRAM, in GB (plus some room for context). Q4_K_M is better than Q4, but also a bit larger. Ollama generally defaults to Q4_K_M. If you can handle a higher quantization, Q6_K is generally best. If you can’t quite fit it, Q5_K_M is generally better than any other option, followed by Q5_K_S.
For example, Llama3.3 70B, which has 70.6 billion parameters, has roughly the following sizes for some of its quantizations: ~43 GB at Q4_K_M, ~50 GB at Q5_K_M, ~58 GB at Q6_K, and ~75 GB at Q8_0.
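Those figures follow from simple bits-per-weight arithmetic, if you want to estimate sizes for other models. A quick sketch (the bits-per-weight values are approximate effective rates for GGUF quants, since K-quants mix precisions, so treat the outputs as ballpark numbers):

```python
# Back-of-the-envelope GGUF size estimate: parameters * bits-per-weight / 8.
# Bits-per-weight values are approximate effective rates for each quant
# type (K-quants mix precisions, so they aren't round numbers).
PARAMS = 70.6e9  # Llama 3.3 70B

BITS_PER_WEIGHT = {
    "Q4_K_M": 4.8,
    "Q5_K_M": 5.7,
    "Q6_K": 6.6,
    "Q8_0": 8.5,
    "FP16": 16.0,
}

for quant, bpw in BITS_PER_WEIGHT.items():
    gb = PARAMS * bpw / 8 / 1e9
    print(f"{quant:7s} ~{gb:5.1f} GB")
```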
This is why I run a lot of Q4_K_M 70B models on two 3090s.
Generally speaking, there’s not a perceptible quality drop going from 8-bit quantization to Q6_K (though I have heard this is less true with MoE models). Below Q6, there’s a bit of a drop going to 5-bit and then 4-bit, but the model’s still decent. Below 4-bit quantizations you can generally get better results from a smaller parameter model at a higher quantization.
TheBloke on Huggingface has a lot of GGUF quantization repos, and most, if not all of them, have a blurb about the different quantization types and which are recommended. When Ollama.com doesn’t have a model I want, I’m generally able to find one there.
You said, and I quote “Find a better way.” I don’t agree with your premise - this is the better way - but I gave you a straightforward, reasonable way to achieve something important to you… and now you’re saying that “This is a discussion of principle.”
You’ve just proven that it doesn’t take a moderator to turn a conversation into a bad joke - you can do it on your own.
> It’s a discussion of principle.

> This is a foreign concept?
It appears to be a foreign concept for you.
I don’t believe that it’s a fundamentally bad thing to converse in moderated spaces; you do. You say “giving somebody the power to arbitrarily censor and modify our conversation is a fundamentally bad thing” like it’s a fact, indicating you believe this, but you’ve been given the tools to avoid giving others the power to moderate your conversation and you have chosen not to use them. This means that you are saying “I have chosen to do a thing that I believe is fundamentally bad.” Why would anyone trust such a person?
For that matter, is this even a discussion? People clearly don’t agree with you and you haven’t explained your reasoning. If a moderator’s actions are logged and visible to users, and users have the choice of engaging under the purview of a moderator or moving elsewhere, what’s the problem?
> It is deeply bad that…
Why?
> Yes, I know, trolls, etc…
In other words, “let me ignore valid arguments for why moderation is needed.”
> But such action turns any conversation into a bad joke.
It doesn’t.
> And anybody who trusts a moderator is a fool.
In places where moderators’ actions are unlogged and they’re not accountable to the community, sure - and that’s true on mainstream social media. Here, moderators are performing a service for the benefit of the community.
Have you never heard the phrase “Trust, but verify?”
> Find a better way.
This is the better way.
Then why are you doing that, and why aren’t you at least hosting your own instance?
> Yes, I know, trolls etc. But such action turns any conversation into a bad joke. And anybody who trusts a moderator is a fool.
Not just trolls - there’s much worse content out there, some of which can get you sent to jail in most (all?) jurisdictions.
And even ignoring that, many users like their communities to remain focused on a given topic. Moderation allows this to happen without requiring a vetting process prior to posting. Maybe you don’t want that, but most users do.
> Find a better way.
Here’s an option: you can code a fork or client that automatically parses the modlog, finds comments and posts that have been removed, and makes them visible in your feed. You could even implement the ability to reply by hosting replies on a different instance or community.
For you and anyone who uses your fork, it’ll be as though they were never removed.
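For concreteness, here’s a hypothetical sketch of the modlog-parsing piece, assuming Lemmy’s public /api/v3/modlog endpoint; the instance URL is a placeholder and response field names can drift between Lemmy versions, so treat this as a starting point rather than working client code:

```python
# Hypothetical sketch of the "surface removed content" idea, assuming
# Lemmy's public modlog endpoint (/api/v3/modlog). Field names may
# differ by Lemmy version; the instance URL is a placeholder.
import requests

INSTANCE = "https://lemmy.example.com"

def removed_comments(page: int = 1) -> list[dict]:
    """Fetch one page of comment-removal entries from the public modlog."""
    resp = requests.get(
        f"{INSTANCE}/api/v3/modlog",
        params={"page": page, "limit": 50},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("removed_comments", [])

for entry in removed_comments():
    comment = entry.get("comment", {})
    # A fork would re-insert these into the feed instead of printing them.
    print(comment.get("id"), comment.get("content", "")[:80])
```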
Do you have issues with the above approach?
As a user, you can:
If you host your own instance and communities within that instance, then at that point, you have full control, right? Other instances can de-federate from yours.
I recommend a used 3090, as that has 24 GB of VRAM and generally can be found for $800ish or less (at least when I last checked, in February). It’s much cheaper than a 4090, and while admittedly more expensive than the inexpensive 24GB Nvidia Tesla card (the P40?), it also has much better performance and CUDA support.
I have dual 3090s so my performance won’t translate directly to what a single GPU would get, but it’s pretty easy to find stats on 3090 performance.
It’s the new hyped up version of “no-code” or low-code solutions, but with AI so you have more flexibility to footgun.