

Is this an esolang? Could be nice for code golf maybe.


Shh you’ll pop the bubble if you start talking sensibly. It’s not an ASIC—it’s a specialized piece of hardware optimized to execute a model with unparalleled performance. Now buy my entire stock of them and all the supply for the next two years please.
(Figuring out the compose combination for an em dash took longer than I’d like to admit lol)


Can’t speak for Git, but caching responses is a common enough problem that it’s built into the standard HTTP headers.
As for building a cache, there are a few things you’d need to consider, eviction chief among them.
You seem locked into using Git, and if that’s the case, eviction is still a concern: Git repos can grow unbounded in size, and Git doesn’t give you many options for deciding which entries to keep.
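To make the eviction concern concrete, here’s a minimal sketch (not tied to Git or any particular backend) of a response cache that bounds both entry age and total size. The `TTLCache` name and parameters are made up for illustration:

```python
import time
from collections import OrderedDict

class TTLCache:
    """Tiny response cache: entries expire after ttl seconds,
    and the least recently used entry is evicted once max_size is hit."""

    def __init__(self, max_size=128, ttl=300.0):
        self.max_size = max_size
        self.ttl = ttl
        self._store = OrderedDict()  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # expired: drop it
            return None
        self._store.move_to_end(key)  # mark as recently used
        return value

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        elif len(self._store) >= self.max_size:
            self._store.popitem(last=False)  # evict least recently used
        self._store[key] = (time.monotonic() + self.ttl, value)
```

Git gives you neither of those knobs out of the box, which is the point: you’d be reimplementing this on top of it.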


Pre-LLM translation services also generally used AI, just via more traditional machine learning. The only difference is introducing a locally run LLM.
If it runs locally and is openly available, it doesn’t make much difference to me whether it’s a traditional model or an LLM.


Surely the same applies to active military, right? Because that seems like an equally good idea here. Which is to say this is idiotic af.


Do we know it doesn’t?


Terminator might be a little more popular.
It seems the only way to win is not to play.


The article doesn’t say they’re considering everything OTC, and the interview shows they explicitly are not.
That being said, it would be very entertaining to see opioids, amphetamines, etc. sold OTC.


For what it’s worth, open source dev can also work. If you can commit some time to a project you care deeply about and make regular contributions, that’s another form of experience, and I see no reason you couldn’t add that as a line to your resume alongside any other work experience.


This has always been an issue. From my experience, the best way to get in was through internships, co-ops, and other kinds of programs. Those tend to have lower requirements and count as experience.
Of course, today, things are a lot different. It’s a lot more competitive, and people don’t care anymore about actual software dev skills, just who can churn out SLOC the fastest.
Then when they need senior devs again, they can make a good offer.


If the version ranges your dependencies use for the vulnerable packages also cover the fixed versions, then just updating your Cargo.lock should pull in the fixes. You can do this with cargo update.
If the ranges don’t cover the fixes, you have a couple of options: wait for (or contribute) upstream releases that widen the ranges, or override the dependency yourself with a [patch] section in Cargo.toml.
If you choose to patch the dependency, the version of the patched package still needs to be compatible with what your dependencies are requesting. If foo v2.1.1 depends on bar = "3", then it can’t use a patched bar v4.1.2 for example, but can use bar v3.3.4. You may need to backport a fix to an earlier version of a package in some cases. You can do that locally and use a path specifier in your patch for that.
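For reference, a [patch] override in Cargo.toml might look like this, using the hypothetical foo/bar packages from above (the git URL is a placeholder):

```toml
[dependencies]
foo = "2.1"

# Override every use of bar in the dependency graph.
# The replacement must still satisfy what dependents request
# (foo wants bar = "3" here, so it must be a 3.x version).
[patch.crates-io]
bar = { git = "https://github.com/example/bar", tag = "v3.3.4" }

# Or, for a locally backported fix:
# bar = { path = "../bar-backport" }
```

Note that [patch] applies to the whole dependency graph, not just your direct dependencies, which is what makes it useful for transitive vulnerabilities.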
In most cases, the vulnerability probably won’t affect you. You should check to make sure though on a case-by-case basis.


*Several states
Washington, for example, has a similar bill proposed.


To put some perspective into what our code looks like, there are very few tests (which may or may not pass), no formatter or linter for most of the code, no pipelines to block PRs, no gates whatsoever on PRs, and the code is somewhat typed sometimes (the Python, anyway). Our infrastructure was created ad hoc, it’s not reproducible, there’s only one environment shared between dev and prod, etc.
I’ve been in multiple meetings with coworkers and my manager about how embarrassing it is that this is what we’re shipping. For context, I haven’t been on this project very long, but multiple projects we’re working on are like this.
Two years ago, this would have been unacceptable. Our team has worked on and shipped products used by millions of people. Today the management is just chasing the hype, and we can barely get one customer to stay with us.
The issue lies with the priorities from the top down. They want new stuff. They don’t care if it works, how maintainable it is, or even what the cost is. All they care about is “AI this” and “look at our velocity” and so on. Nobody cares if they’re shipping something that works, or even shipping the right thing.


Colleagues, and the issue is top-down. I’ve raised it as an issue already. My manager can’t do anything about it.
To add to the other comment, in general, you do as much as possible when resolving a card or effect.
The only exception is an effect that targets (specifically, one using the word “target”). If there are no valid targets when it would go on the stack, it can’t be cast, triggered, or activated at all; if every target has become invalid by the time it resolves, you do nothing and simply remove it from the stack.
So to answer your question: you reveal cards until you reveal a Saga, and since there are none left, that means you reveal your entire library, then shuffle those cards back into it.


Because if I spent my whole day reviewing AI-generated PRs and walking through the codebase with them only for the next PR to be AI-generated unreviewed shit again, I’d never get my job done.
I’d love to help people learn, but nobody will use anything they learn because they’re just going to ask an LLM to do their task for them anyway.
This is a people problem, and primarily at a high level. The incentive is to churn out slop rather than do things right, so that’s what people do.


The meta was always to skip 6 because of this. Sadly, you’re usually forced into it if the opponent hits any kind of card draw or ramp at all.


This is what happens to us. People put out a high volume of AI-generated PRs, nobody has time to review them, and the code becomes an amalgamation of mixed paradigms, dependency spaghetti, and partially tested (and horribly tested) code.
Also, the people putting out the AI-generated PRs are the same people rubber stamping the other PRs, which means PRs merge quickly, but nobody actually does a review.
The code is a mess.


Dolly’s name is already on a lot of stuff in Tennessee. Heck, Dollywood is named after her.
I don’t think there’s much objection to slapping her name on things, either, which is rare.