Something not mentioned yet: Forgejo, the software running Codeberg, has a smaller feature set and narrower scope than GitLab ("GitLab is the most comprehensive AI-powered DevSecOps Platform" from their website).
So the server and bandwidth will be the cheapest tier possible, and the app will be developed by the lowest bidder.
But billed at the most expensive tier's rate and charged at the highest bidder's price, with the work outsourced to the lowest bidder, of course!
Years ago I wanted to learn how OpenBSD worked. Some people said to me “ah you want to get into programming at OS level? I was a bit disappointed with Go. But don’t learn C, learn Rust; Rust is the future there”. So as a total novice I looked at all 3 on the page. My impressions were: Go looks easy, C looks a bit harder, Rust looks… way too advanced for a beginner like me.
Later when I heard of Zig I started reading and it looked a bit more like what I expected a “future C” to look like.
I wish I had more time and skills to do work in C, Rust and Zig. I’m a Go programmer by trade.
This made me realise that the article is not about the quote or any sociology; it's about politics and John Howard. I dislike articles like this, just as I dislike the ones about Elon Musk: political nonsense to get people riled up.
Zig is what I thought Rust would be like when I first heard of Rust. I'd love to try Zig for some hobby things but can't get it running on OpenBSD (yet!).
I am consistently surprised by what companies are willing to pay to not worry about capacity. Incredible DataDog bills, for example, because they didn’t want to think about how many application metrics to store; “just keep it all!”. And boy do they happily pay for it!
Ok I’m starting to understand where you’re coming from now! It sounds like the leaders are happy for humans to do the work of increasing capacity on-demand rather than tackle the engineering challenges in handling workload spikes. The priority is to appease customers who are from well-known, “impressive”, well-paying (maybe not?) companies. Does that sound sorta right?
Inform and throttle. Think about how your own computer works. If storage reaches its max capacity, you get a signal back saying “filesystem full” (or whatever), not “internal storage error”.
If the CPU gets busy, it doesn't crash; things slow down, and work gets queued and prioritised (among many other complicated mechanisms I'm not across!).
You could borrow those ideas, come up with a way to implement the behaviour in your systems, then present them to whoever could allocate the time & money.
Another approach is to try to get a small, resource-constrained version of the system running and hammer it by loading heaps of data like those customers do. How does it behave? What are the fatal errors, and what can we deal with later?
Agreed. I didn't know about these features - I've never written any Perl before - and I do find them kinda interesting and cool. But not really surprising.
A less clickbaity title might be "Exploring Raku's built-in shortcuts for CLIs" or something. Still 6 words. And I still would have clicked and enjoyed the article! Really appreciated its positive tone and clear examples!
Sorry my comment was really snarky - I apologise. Long day! I'll do better in the future :)
There has been criticism of this listicle format. Critics claim listicles are clickbait, mechanically recycled information and ideas, and that they exist to drive ad impressions rather than to entertain or inform the reader.
The original article on the original site feels a bit like that: loads of ads, with just one link to the actual NixOS website, mid-sentence, towards the bottom of the article (which most readers never reach).
I think I'm missing something. Don't the police or whoever check the license number, name etc. against a central record? Is this just about the convenience of not carrying around a plastic card? I feel like there's more to it but I don't know what.
Devil’s advocate: what about the posts and comments I’ve made via Lemmy? They could be presented as files (like email). I could read, write and remove them. I could edit my comments with Microsoft Word or ed. I could run some machine learning processing on all my comments in a Docker container using just a bind mount like you mentioned. I could back them up to Backblaze B2 or a USB drive with the same tools.
But I can’t. They’re in a PostgreSQL database (which I can’t query), accessible only via an HTTP API. I’ve actually written a Lemmy API client, then used that to make a read-only file system interface to Lemmy (https://pkg.go.dev/olowe.co/lemmy). Using that file system I've written an app to access Lemmy from a weird text editing environment I use (developed at least 30 years before Lemmy was even written!): https://lemmy.sdf.org/post/1035382
They even have a term for this — local-first software — and point to apps like Obsidian as proof that it can work.
This touches on something that I've been struggling to put into words. I feel like some of the ideas that led to the separation of files and applications to manipulate them have been forgotten.
There's also a common misunderstanding that files only exist in blocks on physical devices.
But files are more of an interface to data than an actual "thing".
I want to present my files - wherever they may be - to all sorts of different applications which let me interact with them in different ways.
Only some self-hosted software grants us this portability.
McCodeBurger