Rust analyzer and compilation are very slow. My system is heating up, running out of ram and disk space. I have 8 GB ram.

I use helix editor.

  • BB_C@programming.dev · 13 hours ago
    • Use zram so swapping doesn’t immediately slow things to a crawl.
    • Use cargo check, often. You don’t need to always compile.
    • Add a release-dev profile that inherits release, use cranelift for codegen in it, and turn off lto.
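    For the third point, a minimal sketch of what such a profile could look like (the codegen-backend profile key is still unstable, so this assumes a nightly toolchain with the cranelift component installed):

    ```toml
    # Cargo.toml -- opt in to the unstable codegen-backend profile key
    cargo-features = ["codegen-backend"]

    [profile.release-dev]
    inherits = "release"            # start from the release settings
    lto = "off"                     # skip LTO for faster iteration
    codegen-backend = "cranelift"   # faster codegen, less optimised output
    ```

    Build with something like cargo +nightly build --profile release-dev, after installing the backend with rustup component add rustc-codegen-cranelift-preview --toolchain nightly.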

    Otherwise, it would be useful to know what kind of system you’re running, and what the system load is without any rust dev involvement. It would also help to provide specifics: your descriptions are very generic and could be entirely constructed from rust memes.

  • sga@piefed.social · 16 hours ago (edited)

    editor does not matter for practical purposes (in my naive tests, helix is within +/- 5% of vim/nvim on most files in terms of memory usage; i do not use any lsps). i generally do not use lsps (too much feedback for me), but even when i have, i get about 1-2% constant cpu usage while working, versus 0% without any lsp (averaged over 30 sec intervals).

    while compiling, you have 2 options: just for testing, you can run a debug build and see if it works, and once you have something reasonable, try a release build. it should be feature-wise practically the same, just faster/more optimised (cargo build vs cargo build --release).

    in terms of crates, try not to use many. i just try to not import if i can write a shitty wrapper myself, but if you do include something, check cargo tree for the heaviest crates and try to make the tree leaner. also, prefer native rust libs over wrappers around system ones (in my experience, any wrapper around sys stuff pulls in make/cmake and gets slower).
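    for finding the heaviest crates, a few commands along these lines help (cargo tree and the timings report are built into cargo):

    ```sh
    cargo tree                   # full dependency tree
    cargo tree --duplicates      # crates pulled in at more than one version
    cargo build --timings        # writes an html report of per-crate compile times
    ```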

    now these are all things that change nothing about runtime. if you are willing to trade storage space for more performance, you should do 2 things: use sccache and a shared system target dir.

    sccache (https://github.com/mozilla/sccache) is a wrapper around the compiler that caches build files. if a file’s hash remains the same, and you have the same rust toolchain installed (it has not updated in breaking ways), it will reuse the earlier build files. this often helps by reusing something like 50-70% of the crates that stay the same (even across projects).
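    to check whether the cache is actually being hit, sccache ships a stats command:

    ```sh
    sccache --zero-stats     # reset counters before a measurement
    cargo build              # do a build through the wrapper
    sccache --show-stats     # compile requests, cache hits/misses, cache size
    ```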

    after installing, you just go to the cargo dir ($CARGO_HOME) and edit config.toml:

    
    [build]  
    rustc-wrapper = "sccache"  
    target-dir = "<something>/cargo/target"  
    
    [profile.release]  
    lto = true  
    strip = true  # Automatically strip symbols from the binary  
    
    [profile.release.build-override]  
    opt-level = 3  
    codegen-units = 16  
    
    [target.'cfg(target_os = "linux")']  
    # linker = "wild"  
    # rustflags = ["-Clink-arg=-melf_x86_64"]  
    linker = "clang"  
    rustflags = ["-Clink-arg=--ld-path=wild", "-Ctarget-cpu=native"]  
    
    

    target-dir = "<something>/cargo/target" makes it so that instead of each rust project having a separate target dir, all projects share the same one. it can lead to some problems (essentially only 1 cargo compile can run at a time, among others, but you likely do not want to compile multiple projects at once anyway, since each crate is already compiled in parallel wherever possible). it repeats a bit of what sccache does (reusing build files), but this way, if the same version of a crate is used elsewhere, it will not even require a recompile (which sccache would merely have made faster), and storage use is reduced as well.

    other than that, you may see i have added options to perform lto and strip on release builds (they make compile times even longer, but you can get a bit more performance out). i have also changed the linker (the default is gcc); i use wild (which currently requires clang on x86 stuff), and it can be a tiny bit faster. try to see if you need lto or not, as it is no silver bullet (it can even reduce performance, though in the worst cases it stays within 2-5% of builds without it, and in the best cases it gains more). and just generally check the config params for the cargo debug and release profiles (play around with codegen-units; the defaults are 256 for dev and 16 for release).

  • Supercrunchy@programming.dev · 18 hours ago

    8GB is not a lot to work with. It mostly depends on what crates you work with (unfortunately, by default rust statically links all code in a single final step, and that can consume a lot of RAM). Modern editors consume a lot of RAM too, and if you have rust-analyzer running, it’s going to use a lot more on top of the editor.

    Tips:

    • trim down your dependencies (check out cargo tree, cargo depgraph, and cargo build --timings). Instead of enabling all features on crates, enable only what you need.
    • to free up disk space, run cargo clean on the projects you are not currently working on, and regularly on your current project. If you change dependencies often, the build dir is going to eat up many GBs of disk space. You’ll need to recompile all the dependencies every time you run it though, so don’t do it too often.
    • fully close other programs if possible (browsers, chat apps, etc). Look at your task manager to see what’s using up memory and kill as much as possible. Consider browsing from your phone instead, and using the PC browser just as needed.
    • emacs and vim have a very steep learning curve, so I would suggest against switching editors. If it’s still too frustrating to code like this, you can disable the language server (rust-analyzer), but it’s going to make coding harder. You’ll lose edit suggestions and error highlighting. It’s still possible to code without rust-analyzer, but you’ll need to run cargo check very often and read up on method names on docs.rs a lot more.
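    The disk-space tip can look something like this (the crate name in the last line is just a placeholder):

    ```sh
    du -sh target/                 # see how much the build dir is eating
    cargo clean                    # wipe all build artifacts for this project
    cargo clean --release          # or only drop the release artifacts
    cargo clean -p some-heavy-dep  # or only one package's artifacts
    ```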
  • mina86@lemmy.wtf · 19 hours ago (edited)
    • My rule of thumb is at least 2GB of RAM per compilation job. Even if you have more cores, the jobs may start swapping or crashing, slowing down the build in the end. This of course depends on the size of the project.
    • Disable LTO during development. Use it only when you’re ready to release your binary.
    • If your editor doesn’t keep up, disable fancy IDE features such as rust-analyzer. Run checks periodically, the same way you run tests.
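    As a sketch of the first point, cargo can be told to run fewer parallel jobs so the ~2GB-per-job estimate fits in available RAM (3 jobs is an assumed value for an 8GB machine):

    ```toml
    # .cargo/config.toml -- cap parallel compilation jobs
    [build]
    jobs = 3
    ```

    The same can be done per-invocation with the -j flag or the CARGO_BUILD_JOBS environment variable.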
  • infinitevalence@discuss.online · 19 hours ago

    Ahhh, Write != Compile…

    maybe get a 2nd low-end computer, like a cast-off corporate PC, and have it do all your compiling while you keep working on your main PC.

  • TehPers@beehaw.org · 19 hours ago (edited)

    8GB RAM isn’t a small amount (though by no means a lot). As far as RAM usage goes, the amount you need will scale with project+dependencies size, so for smaller projects, it shouldn’t be a problem at all.

    8GB RAM doesn’t tell us about the rest of your system though. What CPU do you have? Is your storage slow? Performance is affected by a lot of factors. A slow CPU will naturally run programs slower, fewer hardware threads means less running in parallel, and slower storage means that reading incremental build data and writing it could be a bottleneck.

    • FizzyOrange@programming.dev · 15 hours ago

      8 GB is a really small amount. Even phones have had that much RAM for several years. The average desktop I built in 2012 had 16 GB of RAM.

      Plenty of modern computers only come with a small amount of RAM, because most people only need a small amount, but 8 GB is still a small amount.

      • TehPers@beehaw.org · 13 hours ago

        “Small amount” is relative here. Not everybody needs to play Battlefield 6.

        For just programming, 8GB is way more than enough. Even my old work laptop had only 8GB and the issues didn’t show up until I had multiple Office products + Teams + a browser open, not from any of my dev software.

    • silly_goose@lemmy.today (OP) · 19 hours ago

      My system has a midrange amd cpu with 6 cores. I have an ssd.

      I think the issue has to do with procedural macros in maud. The project has about 1000 dependencies and 10k loc.

      Combined with tailwind classes and a complex ui, the templates get bloated. I have to test this theory.
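      One way to test that theory is to look at where compile time actually goes and at how much code the macros expand into (cargo-expand is a separate install, and the binary name below is a placeholder):

      ```sh
      cargo build --timings                    # per-crate timeline; shows if the maud-heavy crates dominate
      cargo install cargo-expand               # one-time setup
      cargo expand --bin myapp > expanded.rs   # dump the macro-expanded source
      wc -l expanded.rs                        # compare against the ~10k handwritten loc
      ```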

      • sga@piefed.social · 16 hours ago

        1000 dependencies

        that is too much. i use something with 900ish, and even with cached builds, it takes like 20 mins. we just have too much linking going on

        • TehPers@beehaw.org · 15 hours ago

          Couldn’t agree more here, 1k dependencies would take a while to build even on my 9950x3d if only due to linking.

          It seems to me like the issue is that the project is either too bloated, or large enough to justify a workstation build. Breaking it into smaller, independent parts would also help here.

  • Paragone@lemmy.world · 19 hours ago

    If you have the ability to recompile your system (Gentoo, changing the optimization from -O2 to -Os, i.e. optimizing for size), that may significantly free up your system’s resources.
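    On Gentoo that would be a make.conf change along these lines (a sketch of the stock file, not a full config):

    ```sh
    # /etc/portage/make.conf -- optimize for size instead of speed
    COMMON_FLAGS="-Os -pipe"
    CFLAGS="${COMMON_FLAGS}"
    CXXFLAGS="${COMMON_FLAGS}"
    ```

    followed by rebuilding with emerge -e @world (which is itself a very long compile, so weigh the trade-off).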

    Also, if you CAN’T add RAM, can you speed up your storage?

    Going from spinning-platter to SATA-SSD, or from SATA-SSD to NVMe, or from normal-NVMe to FAST-NVMe, would massively help your system’s speed.

    _ /\ _