There’s AI in Calibre? What the fuck?
First I’d heard of it too, but yeah, there is now.
I’m a little curious how the AI feature works with any accuracy. If the context window is something like 1M tokens, that sounds like a lot, but the whole book has to be tokenized and fed into the prompt behind the scenes with every question. Every time you ask something, the LLM isn’t just processing your question; it’s also being fed the entire prompt plus the book, plus all of your previous conversation for the session. If you’re asking it questions about a Dostoevsky novel, you’re probably going to fill up the context window pretty fast, if not immediately, and then it will just start hallucinating answers because it can’t process all of the context.
If they’re doing something fancy with tokenization or some kind of memory scheme, then it seems like it would be better suited to a standalone program. But it also says they’re using local LLMs on your computer? Those are going to have small context windows for sure. It seems like bloat as well: to run those models locally you need to download the model weights, which are several GB in size. I don’t want my e-book library management software doing that.
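To put rough numbers on that (these are my ballpark assumptions, not anything calibre or a model provider documents: ~350k words for a long Dostoevsky novel, ~1.3 tokens per English word, ~400 tokens per question-and-answer exchange), here’s a quick sketch of how fast the window fills if the whole book is re-sent with every question:

```python
# Back-of-the-envelope sketch of how fast the context fills up if the whole
# book is re-sent with every question. All numbers here are rough assumptions.

WORDS_IN_BOOK = 350_000   # assumed length of a long Dostoevsky novel
TOKENS_PER_WORD = 1.3     # rough average for English text
BOOK_TOKENS = int(WORDS_IN_BOOK * TOKENS_PER_WORD)

TOKENS_PER_TURN = 400     # assumed size of one question + answer exchange

def tokens_needed(num_turns: int) -> int:
    """Total prompt size if the book plus the whole chat history is re-sent each turn."""
    history = num_turns * TOKENS_PER_TURN
    return BOOK_TOKENS + history

for limit_name, limit in [("8k local model", 8_000),
                          ("128k hosted model", 128_000),
                          ("1M hosted model", 1_000_000)]:
    fits = tokens_needed(1) <= limit
    print(f"{limit_name}: book alone is {BOOK_TOKENS:,} tokens -> "
          f"{'fits' if fits else 'does not fit'} in {limit:,}")
```

With those assumptions the book alone comes out around 455k tokens, so it blows straight past an 8k local window and even a 128k hosted one; only the very largest advertised context windows could take the whole novel in one shot.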
It looks like it can just interface with a different program that someone would have to set up, find weights for, and get working themselves.
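For anyone wondering what that interfacing might look like: assuming it just talks to a locally running, OpenAI-compatible server such as Ollama or llama.cpp’s llama-server (the endpoint, port, and model name below are my illustrative guesses, not calibre’s actual defaults), the exchange is roughly:

```python
# Minimal sketch, assuming calibre talks to a locally running,
# OpenAI-compatible server like Ollama. The endpoint, port, and model
# name are assumptions for illustration.

import json
import urllib.request

ENDPOINT = "http://localhost:11434/v1/chat/completions"  # Ollama's OpenAI-compatible route

payload = {
    "model": "llama3.1:8b",  # whatever model the user pulled themselves
    "messages": [
        {"role": "system", "content": "Answer questions about the provided book excerpt."},
        {"role": "user", "content": "Who narrates the first chapter?"},
    ],
}

req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read())

print(reply["choices"][0]["message"]["content"])
```

The several-GB weights live with that external program, not with calibre, which matches the “you set it up and find the weights yourself” reading.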
Interesting that unlike most of the other fixes and features, there is no issue # associated with these.
I guess they were done purely at the initiative of the dev.
All models were made via disgusting violations of trust, disregard of the trust model of robots.txt, disregard of cultural artefacts and the consequences to culture, and disgusting use of resources.
They are gross, their promotion is gross, and their use is gross.
I don’t want or need a slop summary of my books, but as long as it’s strictly opt-in then it’s probably fine that it exists.
Not only is it opt in, you need to jump through a shit ton of hoops to opt in. I’m pretty sure it’s bring your own key, so you need to make an account with a model provider, give them your credit card, pay for credits, get an API key, save the API key, set the endpoint and model settings in calibre, add your key, and then you can use it. This won’t make calibre worse for anyone.
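For the curious, those hoops boil down to the standard bring-your-own-key pattern: get a key from a provider, point the client at an OpenAI-compatible endpoint, and send the key in an Authorization header with every request. The provider URL, model name, and environment variable below are placeholders I picked for illustration, not calibre’s real settings:

```python
# Rough sketch of what "bring your own key" implies mechanically. The
# provider URL, model name, and environment variable are assumptions
# for illustration only.

import json
import os
import urllib.request

API_KEY = os.environ["MY_PROVIDER_API_KEY"]   # the key you paid the provider for
BASE_URL = "https://api.openai.com/v1"         # or any OpenAI-compatible provider
MODEL = "gpt-4o-mini"                          # whichever model your credits cover

payload = {
    "model": MODEL,
    "messages": [{"role": "user", "content": "Summarise chapter 3 of this book."}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # nothing works until this is set
    },
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

If the key or the paid credits aren’t there, the request simply fails, which is why it’s hard to trip over this feature by accident.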