[deleted by user]

  • BaroqueInMind@piefed.social · 79 points · 3 months ago

    I have no remarks, just really amused with your writing in your repo.

    Going to build a Docker image and self-host this shit you made and enjoy your hard work.

    Thank you for this!

  • FrankLaskey@lemmy.ml · 29 points · 3 months ago

    This is very cool. Will dig into it a bit more later but do you have any data on how much it reduces hallucinations or mistakes? I’m sure that’s not easy to come by but figured I would ask. And would this prevent you from still using the built-in web search in OWUI to augment the context if desired?

  • WolfLink@sh.itjust.works · 22 points · 3 months ago

    I’m probably going to give this a try, but I think you should make it clearer for those who aren’t going to dig through the code that it’s still LLMs all the way down and can still have issues. It’s just that there are LLMs double-checking other LLMs’ work to try to catch those issues (a sketch of the pattern follows below). There are still no guarantees, since it’s still all LLMs.
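
    For anyone skimming, a minimal sketch of what that draft-then-verify pattern looks like. The `llm.ask()` helper here is hypothetical, not this project's actual API:

    ```python
    # Minimal sketch of the "LLMs checking LLMs" pattern.
    # `llm.ask()` is a hypothetical helper, not this project's API.
    def checked_answer(question: str, source: str, llm) -> str:
        draft = llm.ask(f"Answer ONLY from this source:\n{source}\n\nQ: {question}")
        verdict = llm.ask(
            f"Source:\n{source}\n\nAnswer:\n{draft}\n\n"
            "Does the source fully support the answer? Reply SUPPORTED or UNSUPPORTED."
        )
        # Still LLMs all the way down: the checker can be wrong too.
        return draft if verdict.strip().startswith("SUPPORTED") else "Could not verify."
    ```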

    • skisnow@lemmy.ca · 11 points · 3 months ago

      I haven’t tried this tool specifically, but I do on occasion ask both Gemini and ChatGPT’s search-connected models to cite sources for their claims, and it doesn’t seem to even slightly stop them bullshitting and claiming a source says something it doesn’t.

        • skisnow@lemmy.ca · 4 points · edited · 3 months ago

          How does having a key solve anything? It’s not that the source doesn’t exist, it’s that the source says something different to the LLM’s interpretation of it.

            • skisnow@lemmy.ca · 2 points · 3 months ago

              The hash proves which bytes the answer was grounded in, should I ever want to check it. If the model misreads or misinterprets, you can point to the source and say “the mistake is here, not in my memory of what the source was.”
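
              (For the curious, a minimal sketch of that grounding-hash idea, assuming SHA-256 over the raw source bytes; the record layout is made up for illustration:)

              ```python
              import hashlib

              def grounding_record(source_bytes: bytes, answer: str) -> dict:
                  """Pin an answer to the exact bytes it was grounded in."""
                  return {
                      "source_sha256": hashlib.sha256(source_bytes).hexdigest(),
                      "answer": answer,
                  }

              # To audit later: re-hash the stored source. If the digest matches,
              # any mistake lies in the model's reading of that source, not in a
              # dispute over which version of the source was used.
              ```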

              Eh. This reads very much like your headline is massively over-promising clickbait. If your fix for an LLM bullshitting is that you have to check all its sources, then you haven’t fixed LLM bullshitting.

              If it does that more than twice, straight in the bin. I have zero chill any more.

              That’s… not how any of this works…

  • Angel Mountain@feddit.nl · 15 points · 3 months ago

    Super interesting build

    And if programming doesn’t pan out, please start writing for a magazine; love your style (or was this written with your AI?)

      • Karkitoo@lemmy.ml · 8 points · edited · 3 months ago

        meat popsicle

        ( ͡° ͜ʖ ͡°)

        Anyway, the other person is right. Your writing style is great!

        I successfully read your whole post and even the README. The random outbursts probably kept grabbing my attention back to the text.

        Anyway, version 2: this is a very cool idea! I cannot wait to either:

        • incorporate it into my workflows
        • let it sit in a tab, never to be touched again
        • theorycraft, run tests, and request features so much that I burn out

        Last but not least, thank you for not using GitHub as your primary repo.

  • floquant@lemmy.dbzer0.com · 15 points · edited · 3 months ago

    Holy shit I’m glad to be on the autistic side of the internet.

    Thank you for proving that fucking JSON text files are all you need and not “just a couple billion more parameters bro”

    Awesome work, all the kudos.

  • sp3ctr4l@lemmy.dbzer0.com · 13 points · 3 months ago

    This seems astonishingly more useful than the current paradigm; this is genuinely incredible!

    I mean, fellow Autist here, so I guess I am also… biased towards… facts…

    But anyway, … I am currently uh, running on Bazzite.

    I have been using Alpaca so far, and have been successfully running Qwen3 8B through it… your system would address a lot of problems I have had to figure out my own workarounds for.

    I am guessing this is not available as a flatpak, lol.

    I would feel terrible asking you to do anything more after all of this work, but if anyone does actually set up a Podman-installable container for this that properly grabs all required dependencies, please let me know!

      • sp3ctr4l@lemmy.dbzer0.com · 3 points · edited · 3 months ago

        Oh I entirely believe you.

        Hell hath no wrath like an annoyed high functioning autist.

        I’ve … had my own 6 month black out periods where I came up with something extremely comprehensive and ‘neat’ before.

        Seriously, bootstrapping all this is incredibly impressive.

        I would… hope that you can find collaborators, to keep this thing alive in the event you get into a car accident (metaphorical or literal), or, you know, are completely burnt out after this.

        … but yeah, it is… yet another immensely ironic aspect of being autistic: we’ve been treated and maligned as robots our whole lives, and then, when the normies think they’ve actually built the AI from sci-fi, it turns out it’s basically extremely talented at making up bullshit, fudging the details, and being a hypocrite, which… appalls the normies when they have to look into a hyperpowered mirror of themselves.

        And then, of course, the one to actually fix this is some random autist no one has ever heard of (apologies if you are famous and I am unaware of it), putting in an enormous amount of effort that… most likely will not be widely recognized.

        … fucking normies man.

  • bilouba@jlai.lu · 13 points · 3 months ago

    Very impressive! Do you have a benchmark to test the reliability? A paper would be awesome, to contribute to the science.

      • bilouba@jlai.lu · 2 points · 3 months ago

        I understand; I have no idea how to do it either. I heard about SWE‑Bench‑Lite, which seems to focus on real-world usage. Maybe try contacting “AI Explained” on YT; he’s the best IMO. Your solution might be novel or not, but he might help you figure that out. If it is indeed novel, it might be worth sharing with the larger community. Of course, I totally get that you might not want to do any of that. Thank you for your work!

  • ThirdConsul@lemmy.zip · 13 points · 3 months ago

    I want to believe you, but that would mean you solved hallucination.

    Either:

    A) you’re lying

    B) you’re wrong

    C) KB is very small

          • ThirdConsul@lemmy.zip · 3 points · 3 months ago

            The system summarizes and hashes docs. The model can only answer from those summaries in that mode

            Oh boy. So hallucination will occur here, and all further retrievals will be deterministically poisoned?

              • ThirdConsul@lemmy.zip · 3 points · 3 months ago

                Huh? That is the literal opposite of what I said. Like, diametrically opposite.

                The system summarizes and hashes docs. The model can only answer from those summaries in that mode. There’s no semantic retrieval step.

                No, that’s exactly what you wrote.

                Now, with this change:

                SUMM -> human reviews

                that would be fixed, but it will only work for small KBs, as otherwise reviewing the summaries would be exhausting.

                Case in point: assume a Person model with 3-7 facts per Person, and a small set of 3000 Persons. How would the SUMM of that work? Do you expect a human to verify that SUMM? How are you going to converse with your system to get the data from that Person KB? Because to me that sounds like case C: it only works for small KBs. (A rough scale check follows below.)

                Again: the proposition is not “the model will never hallucinate.” It’s “it can’t silently propagate hallucinations without a human explicitly allowing it to, and when it does, you can trace it back to the source version.”

                Fair. Except that you are still left with the original problem: you don’t know WHEN the information is incorrect if you missed it at SUMM time.
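
                (A back-of-envelope check of the “only works for small KBs” point, with every number hypothetical: the 3000 Persons and 3-7 facts from above, plus assumed fact size and reading speed:)

                ```python
                # Rough scale check: how long would a human take to review
                # the summaries of a 3000-Person KB with 3-7 facts each?
                persons = 3000
                facts_per_person = 5       # midpoint of 3-7
                words_per_fact = 10        # assumed size of one summarized fact
                review_wpm = 200           # typical careful reading speed

                facts = persons * facts_per_person
                minutes = facts * words_per_fact / review_wpm
                print(f"{facts} facts -> ~{minutes / 60:.1f} hours of human review")
                # 15000 facts -> ~12.5 hours of human review
                ```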

            • PolarKraken@lemmy.dbzer0.com · 3 points · 3 months ago

              Woof, after reading your “contributions” here, are you this fucking insufferable IRL or do you keep it behind a keyboard?

              Goddamn. I’m assuming you work in tech in some capacity? Shout-out to anyone unlucky enough to white-knuckle through a workday with you, avoiding an HR incident would be a legitimate challenge, holy fuck.

    • Kobuster@feddit.dk · 9 points · 3 months ago

      Hallucination isn’t nearly as big a problem as it used to be. Newer models aren’t perfect, but they’re better.

      The problem addressed by this isn’t hallucination, it’s the training to avoid failure states. Instead of guessing (which is different from hallucinating), the system forces a negative response. That’s easy, and any company big or small could do it; the big companies just like the bullshit. (A sketch of such a gate follows below.)
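
      A minimal sketch of a forced-negative gate like that, assuming a hypothetical kb.retrieve() returning (passage, score) pairs and an llm.generate() helper; neither is this project's actual API:

      ```python
      # Sketch of a forced-abstention gate. `kb.retrieve()` and
      # `llm.generate()` are hypothetical stand-ins, not this project's API.
      def answer(question: str, kb, llm, threshold: float = 0.75) -> str:
          hits = kb.retrieve(question)                  # [(passage, score), ...]
          grounded = [p for p, score in hits if score >= threshold]
          if not grounded:
              # Refuse instead of guessing: a fixed negative beats fluent fabrication.
              return "No grounded source found. I can't answer that."
          context = "\n---\n".join(grounded)
          return llm.generate(f"Answer ONLY from these sources:\n{context}\n\nQ: {question}")
      ```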

  • Disillusionist@piefed.world · 11 points · 3 months ago

    Awesome work. And I agree that we can have good and responsible AI (and other tech) if we start seeing it for what it is and isn’t, and get serious about addressing its problems and limitations. It’s projects like yours that can demonstrate pathways toward achieving better AI.

  • UNY0N@lemmy.wtf · 11 points · 3 months ago

    THIS IS AWESOME!!! I’ve been working on using an Obsidian vault and a Podman Ollama container to do something similar, with VSCodium + Continue as middleware. But this! This looks to me far superior to what I have cobbled together.

    I will study your codeberg repo, and see if I can use your conductor with my ollama instance and vault program. I just registered at codeberg, if I make any progress I will contact you there, and you can do with it what you like.

    On an unrelated note, you can download Wikipedia. It might work well in conjunction with your conductor.

    https://en.wikipedia.org/wiki/Wikipedia:Database_download

  • Terces@lemmy.world · 10 points · 3 months ago

    Fuck yeah… good job. This is how I would like to see “AI” implemented. Is there some way to attach other data sources? Something like a locally hosted wiki?