• 0 Posts
  • 1.54K Comments
Joined 3 years ago
Cake day: June 21st, 2023


  • While I agree with your post, I do want to call out that Rust’s standard library does use a lot of unstable features and calls compiler intrinsics. I believe anyone can use the unstable features on a nightly compiler with just #![feature(...)], but not the intrinsics (not that there’s much reason to call intrinsics directly anyway).


  • You can design a language where you don’t need to generate code to accomplish this.

    Depending on what you mean by “generate code”, the only language at the level of C or C++ that I can think of that does this is Zig. Zig is an odd case though: its comptime is functionally compile-time reflection, so you’re still generating code, just through a different mechanism.

    If you’re comparing to Python, JS, or even C#, those all come with runtimes that can compile/interpret new code at runtime. None of those languages are comparable here. Rust, C, C++, Zig, etc compile into assembly, and type information, impl information, etc are all lost after compilation (ignoring symbol names or anything tracked as debug info).

    If you’re specifically referring to Debug, Display, PartialEq, etc then the compiler doesn’t do that for you because Rust doesn’t assume that those traits are valid for everything.

    Unlike Java where new Integer(1) != new Integer(1) or JS where "" == 0, Rust requires you to specify when equality comparisons can be made, and requires you to write out the implementation (or use the derive for a simple, common implementation).
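    To make that concrete, here’s a minimal sketch of what opting in looks like (`Meters` is a made-up type for illustration): equality simply doesn’t exist for the type until you derive or implement it.

```rust
// Hypothetical newtype; `==` would not compile without the derive
// (or a manual `impl PartialEq`).
#[derive(PartialEq, Debug)]
struct Meters(u32);

fn main() {
    // Derived PartialEq compares field-by-field, i.e. value equality.
    assert_eq!(Meters(1), Meters(1));
    assert!(Meters(1) != Meters(2));
}
```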

    Unlike C# where record class Secret(String Value); will print out the secret into your logs when it inevitably gets logged, Rust requires you to specify when a type can be formatted into a string, and how it should be formatted.
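    Because formatting is opt-in, a secret-holding type can even choose to redact itself. A minimal sketch (the `Secret` wrapper and its output string are my own invention):

```rust
use std::fmt;

// Hypothetical wrapper around a sensitive value.
struct Secret(String);

// Manually implement Debug so the inner value never reaches logs.
impl fmt::Debug for Secret {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("Secret(***)")
    }
}

fn main() {
    let s = Secret("hunter2".into());
    // The actual secret is redacted in the Debug output.
    assert_eq!(format!("{s:?}"), "Secret(***)");
}
```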

    Just because a language does things one way doesn’t mean every language ever should do things that same way. If you want it to work like another language you like to use, use the language you like to use instead. Rust language designers made explicit decisions to not be the same as other languages because they wanted to solve problems they had with those languages. Those other languages are still usable though, and many solved the same problems in other ways (C#'s nullable reference types, Python’s type hints, TypeScript, C++'s concepts, etc).




  • Part of why Python can do this is that it runs in a fundamentally different way from Rust. Python is interpreted and can run arbitrary code at runtime; it’s possible to exec arbitrary Python.

    Rust is compiled ahead of time. Once compiled, aside from inspecting how the output looks and what symbol names it uses, there’s nothing that ties the output to Rust. At runtime, there is nothing to compile new arbitrary code, and compiling at runtime would be slow anyway. There is no interpreter built into the application or required to run it either.

    This is also why C, C++, and many other compiled languages can’t execute new arbitrary code at runtime.


  • Through macros? The term “meta-programming” had me lost, since I’m only familiar with it in reference to C++ templates (and Rust’s generics are the closest analogue to templates, though they’re type-checked up front via trait bounds).

    println! and format! are macros because they use custom syntaxes and can reference local variables in a string literal provided to the macro:

    let a = 2;
    println!("{a:?} {b}", b=a);
    

    I don’t know how the derive macros would be function calls. They generate whole impls.

    Macros generate new code. It’s the same basic idea as C macros, except Rust macros operate on token trees and are hygienic instead of doing blind textual substitution, which makes them far less error-prone.

    So to answer your question as to why there are macros, it’s because you need to generate code based on the input. A function call can’t do that.
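    As a sketch of “generating code based on the input”, here’s a declarative macro that expands into a whole struct plus an impl, which no function call could do (the macro name and `UserId` type are made up for the example):

```rust
// A macro that generates a newtype struct and an accessor impl.
macro_rules! make_newtype {
    ($name:ident, $inner:ty) => {
        struct $name($inner);

        impl $name {
            fn get(&self) -> &$inner {
                &self.0
            }
        }
    };
}

// Expands into new items at compile time.
make_newtype!(UserId, u64);

fn main() {
    let id = UserId(42);
    assert_eq!(*id.get(), 42);
}
```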






  • Wanna know what’s cheap, nutritious, and tasty? Black beans, white rice, and a corn (or flour) tortilla with a little butter (or butter alternative). Add some meat if you have it and some cilantro if you want, but the beans and rice are super cheap in bulk, and beans are great. You can also refry the beans (less healthy ofc) or cook the leftovers into gallo pinto. There’s also other ways to prepare them, like soups, tacos, burritos, etc. You can even turn the rice into horchata (just be careful to find a recipe that’s food safe!).

    Look, I grew up on this food, and it’s damn good. Also, the base ingredients are as cheap as it gets.

    Ironically, my partner is allergic to broccoli, so the suggested meal wouldn’t even work for us. And that’s of course ignoring vegans and vegetarians.


  • If he doesn’t care or need to verify it, then it doesn’t really matter.

    These tools are great at creating demoable MVPs. They’re terrible at creating maintainable codebases, and cannot be relied on to generate correct code. But if all you need is a demo or MVP, then it’s likely you don’t care, and that’s often the case for personal tools that non-coders want to use.

    The people using it to manage their personal finances are nuts though.


  • If your “friend” doesn’t currently serve in a relevant military, then their efforts may be best spent at home for now.

    For a US person, the obvious answer would be protesting, reaching out to representatives, and advocating against more unnecessary violence. For non-US, the first two don’t have the same effect, though your country could politically pressure Trump via threats of sanctions or such.

    If they request volunteers and your “friend” can do that, then that’s how they can use their experience, assuming they want to of course and understand potential consequences of doing so if their government doesn’t approve of it.


  • Ironically, it felt to me like the post itself deified algorithms, but this is the main takeaway:

    We should neither mystify, nor deify these systems, because it makes us forget that we have built them ourselves and infused them with meaning.

    An “algorithm” is nothing more than a set of instructions to follow to complete some kind of task. For example (and closely related), a sorting algorithm might attempt to sort a list by randomizing the list, then checking if it’s sorted and repeating if not (bogosort).
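    Bogosort really is just a few lines of instructions. A toy sketch (using a tiny hand-rolled LCG for the shuffle so it stays dependency-free; the constants and function names are my own choices, not a real library):

```rust
// True if the slice is in non-decreasing order.
fn is_sorted(v: &[u32]) -> bool {
    v.windows(2).all(|w| w[0] <= w[1])
}

// Bogosort: shuffle until sorted. The shuffle is a Fisher-Yates pass
// driven by a simple linear congruential generator.
fn bogosort(v: &mut Vec<u32>, seed: &mut u64) {
    while !is_sorted(v) {
        for i in (1..v.len()).rev() {
            *seed = seed
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            let j = (*seed >> 33) as usize % (i + 1);
            v.swap(i, j);
        }
    }
}

fn main() {
    let mut v = vec![3, 1, 2];
    let mut seed = 7u64;
    bogosort(&mut v, &mut seed);
    // The loop only exits once the list is sorted.
    assert_eq!(v, vec![1, 2, 3]);
}
```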

    Lemmy uses an algorithm to sort posts by “most recent”, for example, and I think that having a “most recent” sorting option is noncontroversial.

    Where algorithmic feeds become problematic, in my opinion, is when they start becoming invasive or manipulative. This is also usually when they become personalized. Lemmy, Reddit (within a subreddit), and other kinds of forums usually do not have personalized feeds, and the sorting algorithms for “hot” are usually noncontroversial (maybe there’s debate about effectiveness, but none usually about harm). Platforms like FB, Twitter, TikTok, Instagram, YT, etc all have personalized feeds that they use personal data to generate. They also are the most controversial, and usually what is referred to as “algorithmic” feeds.

    These personalized feeds are not magic. They often include ML black boxes in them, but training a model isn’t sorcery, nor are any of the other components to these algorithms. Like the article mentioned, they are written by people, and can be understood (for the most part), updated, and removed by people. There is no reason a personalized feed is required to invade your privacy or manipulate you. The only reason they do is because these companies are incentivized to do so to maximize how much ad revenue they make off you by keeping you engaged for longer.