- cross-posted to:
- hackernews@lemmy.bestiver.se
The main Calibre dev is notoriously opinionated about not adding basic features or a proper web server, but they add AI integrations? lmao
The AI was in fact not stripped out. The only change so far is an updated readme.
Sounds like another “I’ll make the logo” project
I mean, maybe she does something in the future, but I think it’s weird and irresponsible for others to promote the repo when nothing has been done.
That’s what I mean. People have these ideas for projects and forks, but no actual coding gets done, all they make is a logo hah
Yeah, there’s not even a working branch yet. And she’ll need to set up a CI pipeline to keep it synced with the upstream while making sure the AI stuff doesn’t get back in. It’s already 9 commits behind and nothing has even been done.
There’s AI in Calibre? What the fuck?
First I’d heard of it too, but yeah, there is now.

I’m a little curious how the AI version could work with any accuracy. A context window of 1M tokens sounds like a lot, but it would have to tokenize the whole book, and that book gets fed into the prompt behind the scenes with every question. Every time you ask a question, the LLM isn’t just processing your question; it gets fed the entire prompt plus the book, plus all your previous conversation for the session. If you’re asking it questions about a Dostoevsky book, you’re probably going to fill up the context window pretty fast, if not immediately. Then it will just start hallucinating answers because it can’t process all the context.
If they’re doing something fancy with tokenization or some kind of memory scheme, then it seems better suited to a standalone program. But it also says they’re using local LLMs on your computer? Those are going to have small context windows for sure. It seems like bloat as well: to run those models locally you need to download the weights, and those are several GB in size. I don’t want my e-book library management software to do that.
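For a rough sense of the math, here’s a back-of-envelope sketch in Python. Both numbers are assumptions, not measurements: the common ~1.3 tokens-per-word heuristic for English prose and a Dostoevsky-length novel:

```python
# Back-of-envelope sketch: how fast does "stuff the whole book into the
# prompt" eat a context window? All numbers are rough assumptions.

TOKENS_PER_WORD = 1.3      # common heuristic for English prose
BOOK_WORDS = 364_000       # roughly The Brothers Karamazov

book_tokens = int(BOOK_WORDS * TOKENS_PER_WORD)  # ~473k tokens

for window in (8_192, 32_768, 128_000, 1_000_000):
    verdict = "fits" if book_tokens <= window else "does NOT fit"
    print(f"{window:>9,}-token window: book alone ({book_tokens:,} tokens) {verdict}")

# The book is re-sent with every turn, plus the growing conversation:
history = 0
for turn in range(1, 6):
    question, answer = 50, 400  # assumed per-turn token costs
    print(f"turn {turn}: prompt ≈ {book_tokens + history + question:,} tokens")
    history += question + answer
```

Even a 1M-token window technically holds the book, but every turn re-sends it, so cost and latency scale with the whole book plus the growing conversation.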
> I don’t want my e-book library management software to do that.
It looks like it can just interface with a different program that someone would have to set up, find weights for, and get working themselves.
Interesting that unlike most of the other fixes and features, there is no issue # associated with these.
I guess they were done at purely the initiative of the dev.
deleted by creator
All models were made via disgusting violations of trust, disregard of the trust model of robots.txt, disregard of cultural artefacts and the consequences to culture and disgusting use of resources.
They are gross, their promotion is gross, and their use is gross.
deleted by creator
I don’t want or need a slop summary of my books, but as long as it’s strictly opt-in then it’s probably fine that it exists.
Not only is it opt-in, you need to jump through a shit ton of hoops to opt in. I’m pretty sure it’s bring your own key, so you need to make an account with a model provider, give them your credit card, pay for credits, get an API key, save the API key, set the endpoint and model settings in Calibre, add your key, and then you can use it. This won’t make Calibre worse for anyone.
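For anyone wondering what all that configuring amounts to, here’s a minimal sketch of the kind of OpenAI-compatible request a bring-your-own-key setup boils down to. The endpoint, model name, and env-var name are illustrative placeholders, not Calibre’s actual internals:

```python
# Sketch of a bring-your-own-key call: one OpenAI-compatible HTTP request.
# Endpoint/model/key are the settings you'd configure; nothing below is
# Calibre's actual code.
import os
import requests

ENDPOINT = "https://api.openai.com/v1/chat/completions"  # or OpenRouter, DeepSeek, ...
MODEL = "gpt-4o-mini"                                    # whatever your provider offers
API_KEY = os.environ["LLM_API_KEY"]                      # the key you paid for (placeholder name)

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Summarize this chapter: ..."}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```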
Some people in the community have been asking for AI plugins and features for a while. I don’t care for it myself, but if it is an option you can disable/enable then I don’t mind much. And it seems they’re supporting local models and whatnot.
The bigger issue would be if it leaked info to an AI by default without asking you or something. If it isn’t doing that, then I’ll just not use the AI features if I don’t find them useful tbh.
Thankfully, AI is super expensive, and Calibre will probably never make calls by default because someone needs to pay for it (and the current implementation is bring-your-own-key anyway). I can’t imagine the use case of AI in an ebook reader other than textbooks without answer keys, or maybe indexing dense prose.
Calibre is much, much more than an ebook reader; if anything, the reader is one of its weaker parts imo. It’s a whole ebook library management and editing suite. And reading the stuff about AI that people have been asking for, it sounds like they want it to organize their library, do “smart” batch operations across hundreds of books, and so on. But also just “explain to me what this means” type of stuff.
To me using Calibre as just an ebook reader feels like using Photoshop as just an image viewer
I can see the use case; I just don’t find it useful myself.
I hate that the reader part registers itself as the default program for every file format. I didn’t mean to open these txt files with Calibre 😭
Calibre is also kind of a pain to use. I tried to use it once and went back to my syncthing folder of pirated epubs. The idea seems great but it doesn’t seem like it would make my life any easier.
It’s a massive beast and the UI is messy, ugly and daunting. But nothing else just does all the things it does. It works for my purpose (metadata editing, book jacket, library management, conversions etc). I don’t enjoy using it but there’s just nothing to replace it for me.
You can avoid the GUI entirely by using the included command-line tools. Haven’t done it myself, though.
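For anyone curious, a rough sketch of what that looks like. calibredb, ebook-convert, and ebook-meta do ship with Calibre; the Python wrapper and file names are just for illustration:

```python
# Sketch of driving Calibre headlessly via its real CLI tools (calibredb,
# ebook-convert, ebook-meta ship with Calibre). Shelling out from Python
# is just for illustration; you'd normally run these in a terminal.
import subprocess

# List the library's contents (real command: `calibredb list`).
subprocess.run(["calibredb", "list", "--fields", "title,authors"], check=True)

# Convert a book between formats (real command: `ebook-convert in out`).
subprocess.run(["ebook-convert", "book.epub", "book.mobi"], check=True)

# Inspect embedded metadata (real command: `ebook-meta file`).
subprocess.run(["ebook-meta", "book.epub"], check=True)
```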
> I can’t imagine the use case of AI in an ebook reader
I could see a hypothetical machine translation suite integrated directly into the reader being a useful tool, especially if it allowed interrogation, correction, and revision by the user in a way that an LLM could actually almost sort of do well enough for a casual context. I mean it would still be frustrating and error prone, but for a book without extant translations it could potentially be better than trying to bulk process it with a separate translation tool.
Although that’s not what they added. If I’m reading this right, what they added was the ability to make API calls to LM Studio, which is an app for running text models locally (the app itself is proprietary freeware, though the model weights it runs are typically open). The current integration features are something about being able to “discuss selected books” with that local chatbot or ask it for recommendations, although I have no idea how any of that is supposed to work in practice. Since it adds backend compatibility with local models, the machine translation angle I mentioned is at least a feasible addition that a plugin could add.
The whole thing’s silly and has extremely limited actual use cases, but anyone getting up in arms over it allowing compatibility with other, entirely locally-run programs is being even sillier. It’s not like they’re replacing existing functionality with ChatGPT API calls or some nonsense, just enabling hobbyists who go through the trouble of setting up this entire other suite of unrelated shit, and manage to get it working, to then do something sort of silly with it.
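LM Studio’s local server speaks the same OpenAI-style protocol as the hosted providers (it listens on localhost:1234 by default), so “integration” plausibly means little more than pointing the same kind of request at your own machine. A minimal sketch; the model name is a placeholder for whatever you’ve loaded:

```python
# Sketch: LM Studio exposes an OpenAI-compatible API on localhost:1234
# by default, so the "AI integration" is just an HTTP request that never
# leaves your machine. Model name is a placeholder.
import requests

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",
    json={
        "model": "local-model",  # whatever model you loaded in LM Studio
        "messages": [{"role": "user", "content": "Recommend books similar to Dune."}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```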
I love forks like this because they almost always last a long time.
Yeah, unless a core maintainer moves over too, they’re nothingburgers sadly. “Anyone can make a fork” has its benefits, but not always!
forks are the software equivalent of party splits except somehow even pettier.
It can be more productive than trying to resolve conflicts internally. But yeah someone has to actually do the work.
In this case the fork was created by someone who has never worked on Calibre before, and they haven’t even cherry-picked out the AI commits yet, only updated the readme.
See, this is how you know open source is communist, your favourite piece of software has the exact same sectarianism problems as your local Trots!
And the readmes, issue posts, and notes on merges are sort of like newspapers if you squint hard enough, too.
Some are certainly long enough, or quibble-over-terminology enough, to make a Trot paper blush… though they’re usually a fair bit better on the signal-to-noise ratio; unless a maintainer’s having an ideological rant, they’re often only long because software is complicated.
Not always: most PRs start as forks, unless the person is part of the project and can do their work on a branch.
That’s just a Githubism
Yeah, but it also makes “forking” a more ambiguous term overall. A GitHub fork can be a clone with a working branch or an actual fork, and it’s not immediately obvious which until you look at the code.
That’s why I use the README test to see whether a forked repo is an actual fork, since almost no one will modify the README if their GitHub fork is really just a work branch.
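The test is easy to automate, for what it’s worth. A sketch against the real GitHub API (git blob SHAs depend only on content, so identical READMEs share a SHA); the fork name below is hypothetical:

```python
# Sketch of the "README test": if a GitHub fork's README blob is
# byte-identical to upstream's, the "fork" is probably just a work branch.
# The /readme endpoint is real GitHub API; the fork name is made up.
import requests

def readme_sha(repo: str) -> str:
    r = requests.get(f"https://api.github.com/repos/{repo}/readme", timeout=30)
    r.raise_for_status()
    return r.json()["sha"]  # git blob SHA, determined purely by content

upstream_sha = readme_sha("kovidgoyal/calibre")  # the real upstream
fork_sha = readme_sha("someuser/calibre")        # hypothetical fork

if fork_sha != upstream_sha:
    print("README modified: probably a real fork")
else:
    print("README identical: probably just a work branch")
```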
Especially when it’s a massive and, from what I hear, messily coded thing like Calibre. I’ve been looking for alternatives for a long time, but nothing has the massive variety of features that Calibre has for library and book management.
Booklore is looking promising. It’s more like Calibre Web than plain Calibre, though, in that it’s an app you host and access via browser rather than just a local install. It’s still got some bugs when it comes to book ingestion, but it doesn’t rely on a Calibre database whatsoever and just uses (and optionally embeds) book metadata, so you can point it at your book folder and go.
How can Calibre be distributed under the GPL and have AI mixed up in its code? That stuff can’t be GPL, can it?
From the git history, it’s AI integrations rather than distributing AI-written code or the like.
Then again, Torvalds has greenlit AI code in the Linux kernel, which alone sets a massive precedent, as it’s by far the biggest GPL-based project.
Hmmm, I guess my understanding of the GPL is that it can’t rely on incompatibly-licensed components; it must be buildable and runnable with an all-GPL-compatible toolchain.
Since the AI blobs/components/dependencies can’t be inspected prior to compiling, don’t they make the software non-GPL?
I could be misunderstanding something.
Models never touch the GPL-licensed code. Invidious and yt-dlp are GPL-compliant even though they pull data from a proprietary service, because everything that happens with that data on their side is open source. What Calibre does is offer an option (one users can easily ignore) to make a request to OpenAI/Anthropic/OpenRouter/DeepSeek/etc. to perform actions. None of the code from any model provider touches the Calibre codebase; it’s just an API call (again, like yt-dlp or whatever frontend of the week).
Oh ya that makes sense.
Too bad the Calibre dev didn’t just make this feature an add-on.
Thankfully it’s very easy to not use
deleted by creator