• 3 Posts
  • 16 Comments
Joined 6 days ago
Cake day: November 10th, 2025

  • As a non-coder interested in self hosting and somewhat aware of cybersecurity, this is the most relevant take for me.

    An application that facilitates safe self-hosting of many different services is great. However, for it to be actually safe and useful, it must either be a cybersecurity service keeping pace with threats (which is essentially the corporate closed-source model) or be, from the ground up, as much an educational platform as an application. Documentation needs to be not only comprehensive but also self-explanatory to a non-technical audience. It is not enough to state that a setting or feature exists; it must also be made clear why it should be used and what the consequences of different configurations are.

    Unfortunately, this approach is almost never done effectively by FOSS projects. Fortunately, I think we are at the point where it is completely feasible for this kind of educational approach to be replicated and adapted from a Creative Commons source into the specific content structure of an application's user manual using LLMs (local ones). The big question is: what is the trusted commons source for this information? I suppose MIT and other top universities publish courses for open use online that could serve as source material, but it seems likely there is a better-formatted “IT User Guide Wiki” or “Cybersecurity Risk and Exploit Alert List” with frequent updates out there that I'm not aware of; perhaps the annals of various cybersecurity and IT associations?

    Anyway I’m aware this is basically calling for another big FOSS project to build a modular documentation generator, but man would it help a lot of these projects be viable for a wider audience and build a more literate public.




  • I remain convinced they have held back their AI budget because they are waiting for the bubble to burst so they can buy one of the bigger developers, like Anthropic. Why burn a pile of cash now just to lose the race, when open-source options might come out competitive in the end, or one of the leaders in the space can be bought out once valuations hit a reality check?



  • Well, there are 15 days left on the Kickstarter, but it has been up for a while. I didn't catch the medical office thing before, but it makes perfect sense: they are clearly a commercial/enterprise-targeted business and this is their first Kickstarter. They just don't know how to market to the masses.

    I agree the software documentation is lacking; they claim it is easy to set up, but they don't show what that actually looks like.

    I get the sense this could be a diamond in the rough, but to your point about drivers, I agree support is going to make or break this device. There are some indications it could be decent: the company itself appears to be software-first and targets highly regulated industries (medical and transport) that require zero downtime. As long as the company itself survives, I would guess the drivers will stay updated.

    To that point, it seems like this Kickstarter is a line in the water for rebranding their enterprise “private cloud” hardware for general use, but they half-baked the launch.

    IDK, I’m tempted, but without better documentation it’s hard to spend that cash.



  • Makes sense to me. I have always thought that if the goal is to emulate human-level intelligence, then developers should look to the human brain, which not only has multiple centers of cognition dedicated to different functions but is also massively parallel, with mirroring as a fundamental part of the cognitive process. Essentially, LLMs are the language centers being forced to do the work of the entire brain.

    More functional systems will develop a top-level information and query routing layer over many specialized sub-models, including ongoing learning and integration functions. The mirroring piece is key there: it lets the cognitive system keep a “stable” copy of a sub-model serving in place while a redundant copy is modified and tested by the learning and integration models. Cross-checks between the new and old versions then decide which one gains “stable” status for the next round of integration.
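    Purely as an illustration of what I mean (all class and function names here are hypothetical, not any real framework's API), the router-plus-mirrored-sub-model idea could be sketched like this:

    ```python
    # Hypothetical sketch: a top-level router over specialized sub-models,
    # each kept in a "mirrored slot" so a stable copy serves queries while
    # a redundant candidate copy is updated and cross-checked offline.

    class SubModel:
        def __init__(self, name, version=0):
            self.name = name
            self.version = version

        def answer(self, query):
            # Stand-in for real model inference.
            return f"{self.name}-v{self.version}: {query}"

    class MirroredSlot:
        """Holds the stable sub-model plus an optional candidate copy."""
        def __init__(self, model):
            self.stable = model
            self.candidate = None

        def train_candidate(self):
            # Clone the stable model and "improve" the redundant copy.
            self.candidate = SubModel(self.stable.name, self.stable.version + 1)

        def promote_if_better(self, passes_cross_check):
            # Cross-check old vs. new; the winner becomes the next stable version.
            if self.candidate and passes_cross_check(self.stable, self.candidate):
                self.stable = self.candidate
            self.candidate = None

    class Router:
        """Top-level layer that routes each query to a specialized sub-model."""
        def __init__(self):
            self.slots = {}

        def register(self, topic, model):
            self.slots[topic] = MirroredSlot(model)

        def route(self, topic, query):
            # Queries always hit the stable copy, even mid-update.
            return self.slots[topic].stable.answer(query)

    router = Router()
    router.register("language", SubModel("lang"))
    print(router.route("language", "hello"))  # served by the stable lang-v0

    slot = router.slots["language"]
    slot.train_candidate()
    slot.promote_if_better(lambda old, new: new.version > old.version)
    print(router.route("language", "hello"))  # now served by lang-v1
    ```

    The point of the slot abstraction is that queries never see a half-trained model: the stable copy keeps serving until the cross-check promotes the candidate.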

    Anyway, thanks for sharing.