I thought we were past this as a society 😔
Not until after you convince these projects to stop using discord
As long as they just use it for their community and don’t fucking lock documentation behind discord I don’t really care. But this trend has been so annoying. Due to this I’m in so many servers I have to quit a server just to join a new one
https://zed.dev/docs/linux#installing-via-a-package-manager
ooh, available for “x86_65” on Alpine
(and they’ve fixed that now)
Have you really not heard of it? It is a new architecture that is a bit better than x86_64.
Finally. 65 bit processor.
x86_64++
Plus ultra!
imagine the nightmare of writing a 65 bit instruction set
I don’t think it has to be a nightmare per se if you start from scratch.
Instead of 8-bit bytes, you have 5-bit “bytes” (fyves?) Hoozah! Done.
only if double precision can be called high fyves
This is a mandatory rule now.
Now imagine designing a 65 bit computer. The bus, registers, ALU…
You’ll probably waste a lotta chips since most of them are designed for working with powers of 2
I mean it's already in the nix repos as well as homebrew, which means it's essentially taken care of
So it should say: hey, check your distro's package repos first.
Yeah. Especially rather than saying “curl/bash” is the preferred way of installing.
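For what it's worth, "check your package repos first" looks roughly like this (a sketch only; the package/attribute names below are my guesses, so verify them against your distro and the Zed docs):

```
# nixpkgs (attribute name assumed to be zed-editor)
nix-shell -p zed-editor

# Homebrew (cask name assumed to be zed)
brew install --cask zed
```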
I’ve been using it with the nix package manager. It’s awesome how easily nix works
It appears to be a couple of versions behind … and has some issues with dynamically linked libraries that hinder LSPs. Neither of these is Zed’s fault. I’m sure the packaged version will be up to date momentarily (given the interest in Zed, sooner rather than later). Not sure how easy the LSP thing will be to fix, though there are some workarounds in the GitHub issue.
Yeah, the editor is being updated way too fast for nix to keep up. I’m sure it’ll be easier once it has its stable release. I see they have a nix flake in the repo; it would be great if they added a package to the outputs instead of just a devshell, so nix users could easily build it from master or whichever tag they want.
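If they did expose a package output, consuming it would presumably look something like this (hypothetical: it only works once such an output exists, and the tag is a placeholder):

```
# Hypothetical: requires the repo's flake to expose a package output
nix build github:zed-industries/zed          # build from master
nix build github:zed-industries/zed/<tag>    # or pin whichever tag you want
```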
There are solutions to the LSP problem in that GitHub issue. The editor would need to be built in an fhs-env, or they will need to find a way to make it use binaries installed with nix instead of the ones it downloads itself. VSCode had a similar issue, so there is a version of the package that lets you install extensions through nix, and another that uses an fhs-env that allows extensions to work out of the box.
A curl piped into a shell or some unofficial packages from various distros.
At this point I don’t get why these projects are not Flatpak-first.
Flatpak is worse for debugging, development, and reproducibility.
It's good for user-friendly sandboxing, portability, and convenience.
Is it really worse tho? A single build, against a single runtime, free from distro specificities, packaged by the devs themselves instead of offloading the work on distro maintainers?
It is. Security problem in core library? Good luck waiting for 27 randos releasing an update. Whereas the distro updates it even before the issue becomes public.
I’ll have to come up with some examples and write something more detailed I think to explore this.
Until NixOS I was very in favor of language specific package managers and things like flatpak.
Flatpaks are reproducible https://ranfdev.com/blog/flatpak-builds-are-not-reproducible/
You see that the conclusion of that article is that flatpaks are not reproducible, even after presenting solutions to make them reproducible, right?
There are various package manager vectors for installation listed in the docs
Can’t we basically call this a remote access trojan?
Security-wise it doesn’t matter, you run the code they wrote in any case. So either trust them or don’t. Where it matters is making a mess on your computer and possibly leaving cruft behind when uninstalling. But packages are in the works; Arch has even had it since before Linux support was officially announced.
This isn’t true because, until the PR fixing it goes through, it downloads other binaries without user consent.
I think you slipped in the discussion indentations somewhere; this branch of the discussion tree is about the implications of piping curl into bash vs. installing packages
So did Fedora and nix
That was my first thought as well, but I will say that uBlue distros had a signing issue preventing updates recently, due to an oversight with how they rotated their image signing keys, and the easiest (maybe only?) solution was to pipe a curl command to sh. Even though uBlue is trustworthy, they still recommended inspecting the script, which was only a few lines of code.
In this case, though, I dunno why they don’t just package it as a flatpak or appimage or put it up on cargo.
Edit: nvm, they have some package manager options.
It is worrisome that all the smug elitists are too incompetent to just leave off the pipe and review from stdout, or redirect to a file for further analysis.
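For anyone wondering what that looks like in practice, a minimal sketch (the URL here is a placeholder, not the real install script):

```
# Instead of piping straight into a shell:
#   curl -fsSL https://example.com/install.sh | sh
curl -fsSL https://example.com/install.sh -o install.sh   # download only
less install.sh                                           # read what it actually does
sh install.sh                                             # run it once you're satisfied
```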
Same people will turn around and full throat the aur screaming ‘btw’ to anyone who dares look in their direction.
By that logic you have to review the Zed source code as well. Either you trust Zed devs or you don’t - decide! If you suspect their install script does something fishy, they could do it just as well as part of the editor. If you run their editor you execute their code, if you run the install script you execute their code - it’s the same thing.
Aur is worse because usually somebody else writes the PKGBUILD, and then you have to either decide whether to trust that person as well, or be confident enough to vet their work yourself.
Eh, using the aur is a bit different since most of them pull the project's git repo directly anyway. Yeah, the project might have vulns, but that's on you to inspect before building it, as well as the PKGBUILD itself.
There’s a reason why GUIs don’t render fonts in the GPU.
Because it’s a pain, there’s not much more to it really…
AFAIK it’s the copy cost for the memory. GPU makes sense only when the hardware allows this copy to go away. Generally, desktop PCs don’t have such specialized hardware.
I don’t see why you’d have to copy all that much. Depending on the rendering architecture, once all the glyphs are there you’d only need to send the relevant text data to be rendered. I don’t see that being much of a problem even when using SDFs.
It’s an extremely small amount of data by today’s standards and it can be updated on demand, but even if it couldn’t it would still be extremely fast to send over every frame.
If games do it, so can text editors. Real time text rendering on the GPU is a fairly common practice nowadays, unfortunately not in most GUI applications…
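As a back-of-the-envelope illustration (my numbers, not anything Zed-specific), even a generous per-frame upload is tiny compared to PCIe bandwidth:

```
# Assume a 300x100 cell grid and ~16 bytes per cell (glyph index, colors, flags)
echo $(( 300 * 100 * 16 ))   # 480000 bytes, i.e. ~0.5 MB per frame
# At 120 fps that's roughly 58 MB/s, versus ~16 GB/s for a PCIe 3.0 x16 link.
```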
At this point I’m not expert enough to explain more details. You can check font renderers.
Below is what’s in my mind but it’s just a guess.
In typical PC architectures you have IO between the storage and the RAM, and then there’s the copying from the RAM to the VRAM, and editors maybe also want copying from the VRAM to RAM for decoration purposes etc.
I am familiar with the current PC and GPU architectures.
IO is a non-issue. Even a massive file can be trivially memory-mapped and parsed without much hassle, and in the case of a text editor you’d have to deal with IO only when opening or saving said file, not during rendering.
As for the rendering side, again, the amount of memory you’d have to transfer between RAM and VRAM would be minimal. The issue is latency, not speed, but that can be mitigated through asynchronous transfer operations, so if done properly stutters are unlikely.
Rendering monospaced fonts (with decorators and control characters) at thousands of frames a second nowadays is computationally trivial, take a look at refterm for an example. I suspect non-monospaced fonts would require more effort, but it’s doable.
As I said at the beginning, it’s not impossible, just a pain. But so is font rendering in general honestly :/
As I indicated, please check (articles on and the documentation of) font renderers at this point.
It’s made in rust, therefore it must be safe!