Well, that's to be expected - the implementation of map expects a function that takes ownership of its inputs, so you get a type mismatch.
If you really want to golf things, you can tack your own map_ref (and friends) onto the Iterator trait. It's not very useful - the output can't reference the input - but it's possible!
I imagine you could extend this to a combinator that returns a tuple of (Input, map_ref'd output) to get around that limitation, although I can't think of any cases where that would actually be useful.
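For the curious, here's a minimal sketch of what such a map_ref extension could look like. To be clear, `MapRef`, `IteratorExt`, and `map_ref` are names I made up; none of this exists in std.

```rust
// Hypothetical sketch: a map adapter whose closure borrows each item
// instead of taking ownership of it.
struct MapRef<I, F> {
    iter: I,
    f: F,
}

impl<I, F, B> Iterator for MapRef<I, F>
where
    I: Iterator,
    F: FnMut(&I::Item) -> B,
{
    type Item = B;

    fn next(&mut self) -> Option<B> {
        // `item` is dropped at the end of this closure, so the output
        // can't borrow from the input -- hence the limitation above.
        self.iter.next().map(|item| (self.f)(&item))
    }
}

// Tack the combinator onto every Iterator via an extension trait.
trait IteratorExt: Iterator + Sized {
    fn map_ref<B, F>(self, f: F) -> MapRef<Self, F>
    where
        F: FnMut(&Self::Item) -> B,
    {
        MapRef { iter: self, f }
    }
}

impl<I: Iterator> IteratorExt for I {}

fn main() {
    let lens: Vec<usize> = vec!["a".to_string(), "bc".to_string()]
        .into_iter()
        .map_ref(|s| s.len())
        .collect();
    assert_eq!(lens, vec![1, 2]);
}
```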
It wouldn't be as relevant, since passing a function or method instead of a closure is much easier in Rust - you can just name it, while Ruby requires you to use the method method.
So instead of .map(|res| res.unwrap()) you can do .map(Result::unwrap) and it'll Just Work™.
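A tiny self-contained illustration (the parse/unwrap chain is just an example I picked):

```rust
fn main() {
    // Both functions are passed by name -- no closure wrapper needed.
    let nums: Vec<i32> = ["1", "2", "3"]
        .into_iter()
        .map(str::parse::<i32>)
        .map(Result::unwrap)
        .collect();
    assert_eq!(nums, vec![1, 2, 3]);
}
```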
You forget that many people live in areas where passenger rail infrastructure is not economically (or practically) viable. I, for one, pity the grain truck that has to drive over an unpaved road.
I don't know about dangerous, but case-insensitive Unicode comparison is annoying, expensive, and probably full of footguns compared to a simple byte-for-byte equality check.
Obviously, it can be done, but I guess Linux devs don't consider it worthwhile.
(And yes, all modern filesystems support Unicode. Linux stores filenames as arbitrary bytes, Apple's HFS+ uses... some special normalization bullshit, and Windows uses UTF-16.)
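For a concrete taste of the footguns (Rust here, but the point is language-agnostic): the German ß upper-cases to the two-character SS, so case-insensitivity can't be done with a simple per-character fold, and the mapping isn't even reversible.

```rust
fn main() {
    // Byte-for-byte: trivial and cheap, which is what Linux
    // filesystems effectively do.
    assert_ne!("straße".as_bytes(), "STRASSE".as_bytes());

    // Unicode-aware: "ß" uppercases to "SS", changing the
    // string's length...
    assert_eq!("straße".to_uppercase(), "STRASSE");

    // ...and the round trip doesn't come back: lowercasing
    // yields "strasse", not "straße".
    assert_eq!("STRASSE".to_lowercase(), "strasse");
    assert_ne!("STRASSE".to_lowercase(), "straße");
}
```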
Unfortunately, it's not that simple. The Remote* extensions rely on the (proprietary) VSCode server, and nobody has managed to hack it to work with e.g. Codium.
Mostly just Visual Studio Code, alongside the usual constellation of Git + assorted language toolchains.
It's plug and play at every level - no need to waste hours fucking around with an Emacs or (Neo)Vim configuration just to get a decent development environment set up.
(And yes, I would use Codium, but the remote containers extension is simply too good.)
Most of them, yes. The reddest stars (like Proxima Centauri) are too cool and dim to be visible to the naked eye, but if you go somewhere with no light pollution and let your eyes adjust you should be able to perceive some differences between stars.
After all, the discipline has always been about more than just learning the ropes of Python and C++. Identifying patterns and piecing them together is its essence.
Ironic, considering LLMs can't fucking do that. All they do is hallucinate the statistically likely answer to your prompt, with some noise thrown in. That works... okay at small scales (but even then, I've seen it produce some hideously unsound C functions) and completely falls apart once you increase the scope.
Short of true AGI, automatically generating huge chunks of your code will never end well. (See this video for a non-AI example. I give it two years tops before we see it happen with GPT.)
Also... not hating on English majors, but the author has no idea what they're talking about and is just regurgitating AI boosterism claims.
... Eh, no. I've seen GPT generate some incredibly unsound C despite being given half a page of text on the problem.