• I’m a little curious how accurate the AI version can actually be. A 1M-token context window sounds like a lot, but the whole book has to be tokenized and fed into the prompt behind the scenes with every question. Every time you ask something, the LLM isn’t just processing your question; it’s re-processing the entire prompt plus the book, plus all of your previous conversation for the session. If you’re asking it questions about a Dostoevsky novel, you’re probably going to fill up the context window pretty fast, if not immediately, and then it will just start hallucinating answers because it can’t keep track of all that context (rough arithmetic below).
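
    For scale, a quick back-of-envelope sketch. Every number here is an assumption (word count, tokens-per-word ratio, turn sizes), not a measurement of any actual product:

    ```python
    # Rough arithmetic, assuming ~0.75 words per token, a common rule of
    # thumb for English text; the real ratio depends on the tokenizer.
    WORDS_PER_TOKEN = 0.75

    book_words = 364_000                             # a long Dostoevsky novel
    book_tokens = int(book_words / WORDS_PER_TOKEN)  # ~485,000 tokens

    # Does the book even fit in common context windows?
    for window in (1_000_000, 128_000, 32_000, 8_000):
        verdict = "fits" if book_tokens <= window else "does NOT fit"
        print(f"{window:>9,}-token window: book {verdict}")

    # Every question re-processes prompt + book + all prior turns,
    # so the per-question cost grows over the session.
    history = 0
    for turn in range(1, 4):
        processed = book_tokens + history + 50       # ~50-token question
        print(f"turn {turn}: ~{processed:,} tokens processed")
        history += 50 + 500                          # question + ~500-token answer
    ```

    Even with a 1M window, the book alone eats roughly half of it before you’ve asked anything.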

    If they’re doing something fancier, like retrieval or some kind of memory scheme, then it seems like it would be better suited to a standalone program; a sketch of what that could look like follows below. But it also says they’re using local LLMs on your computer? Those are definitely going to have small context windows. It seems like bloat as well: to run those models locally you have to download the weights, which are several GB in size. I don’t want my e-book library management software to do that.
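
    The “memory thing” usually means retrieval-augmented generation: chunk the book, score each chunk against the question, and send only the best few into the prompt. A minimal sketch, assuming plain word overlap in place of the embedding models real systems use; this is a guess at the general technique, not a claim about how this product works:

    ```python
    # Minimal retrieval-augmented generation (RAG) sketch. Real systems
    # score chunks with embedding models; word overlap stands in here.

    def chunk(text: str, size: int = 1000) -> list[str]:
        """Split the book into fixed-size word chunks."""
        words = text.split()
        return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

    def overlap(question: str, passage: str) -> int:
        """Crude relevance score: count of shared words."""
        q = set(question.lower().split())
        return sum(1 for w in set(passage.lower().split()) if w in q)

    def retrieve(question: str, book_text: str, k: int = 3) -> list[str]:
        """Return the k chunks most relevant to the question."""
        chunks = chunk(book_text)
        return sorted(chunks, key=lambda c: overlap(question, c), reverse=True)[:k]

    # The prompt then carries ~3,000 words of context per question,
    # no matter how long the book or the conversation gets.
    ```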

    • KobaCumTribute [she/her]@hexbear.net · 2 days ago

      I don’t want my e-book library management software to do that.

      It looks like it can just interface with a different program that someone would have to set up, find weights for, and get working themselves.
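
      The usual pattern there is that the app just POSTs to a local, OpenAI-compatible HTTP endpoint the user runs themselves (llama.cpp’s server and Ollama both expose one). A sketch assuming an Ollama-style default setup; the URL, port, and model name all depend on what the user actually installed:

      ```python
      # Querying a user-run local model server over its OpenAI-compatible
      # API. Endpoint and model name are assumptions about the local setup.
      import requests

      resp = requests.post(
          "http://localhost:11434/v1/chat/completions",  # Ollama's default port
          json={
              "model": "llama3.1:8b",  # whatever weights the user pulled themselves
              "messages": [
                  {"role": "system", "content": "Answer questions about the excerpt below."},
                  {"role": "user", "content": "Who narrates this passage? <excerpt here>"},
              ],
          },
          timeout=120,
      )
      print(resp.json()["choices"][0]["message"]["content"])
      ```

      That keeps the multi-GB weights out of the library software itself; they live with whatever server program the user set up.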