
  • A lot of simple parts build up in predictable ways to accomplish big things. The complexity is spread out and minimized.

    This has always felt untrue to me. The command line has always been simple parts. However, we can hardly argue that this applies to all Unix-like systems: the monolithic Linux kernel, Kerberos, httpd, Samba, the X windowing system, heck, even OpenSSL. There are many examples of tooling built on top of Unix systems that doesn't follow that philosophy.

    The traditional Unix way of doing things is definitely very outdated though.

    Depends on what you mean. "Everything is a file"? Sure, that metaphor can be put to rest. "Low coupling, high cohesion"? That's even more valid now for cloud architectures. You cannot scale a monolith efficiently these days.

    In the end, Kubernetes is trying to impose a semi-distributed model of computation on a very NOT distributed operating system to the detriment of system complexity, maintainability, and security.

    Kubernetes is more complex than a single Unix system. It is less complex than manually configuring multiple systems to get the same benefits Kubernetes provides in terms of automatic reconciliation, failure recovery, and declarative configuration. That is because those three are first-class citizens in Kubernetes, whereas they're afterthoughts in traditional systems. This also makes Kubernetes much more maintainable and secure. Every workload is containerized, and every workload has predeclared conditions under which it should run. If it drifts out of those parameters, Kubernetes automatically corrects the drift (reconciliation) and/or blocks the undesirable behaviour (security). And Kubernetes keeps an audit trail of its actions, something that, again, is an optional feature in Unix land.

    If you work with the Kubernetes model then you spend 10% more time setting things up and 90% less time maintaining things.
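    To make "predeclared conditions" concrete, here is a minimal sketch of a Deployment manifest (names and image are hypothetical). The replica count, health probe, and memory limit are all declared up front, and the control loop reconciles any drift from them:

```yaml
# Hypothetical example: desired state is declared, not scripted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical name
spec:
  replicas: 3                  # desired state; crashed pods get replaced
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: app
          image: example/app:1.0   # hypothetical image
          resources:
            limits:
              memory: 256Mi        # drifting past this gets corrected
          livenessProbe:           # a predeclared "should run" condition
            httpGet:
              path: /healthz
              port: 8080
```

    Nothing here says *how* to restart a failed pod or rebalance replicas; that's the reconciliation machinery's job.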

    9P is much simpler and more elegant than HTTP

    It also has negligible adoption compared to HTTP. And unless it provides an order-of-magnitude advantage over HTTP, developers are unlikely to adopt it. Consider git vs. Mercurial. Is the latter better than git? Almost certainly. Is it 10x better? No, and that's why it struggles to gain traction against git.

    A filesystem does not exclusively mean an on-disk representation of a tree of files with a single physical point of origin. A filesystem can be just as “highly available” and distributed as any other way of representing resources of a system if not more so because of its abstractness.

    Even an online filesystem does not guarantee high availability. If I want highly available data I still need to have replication, leader election, load balancing, failure detection, traffic routing, and geographic distribution. You don't do those in the filesystem layer, you do them in the application layer.

    Also, you’re “disappointed” in me? Lmao

    Nice ad hominem. I guess it's rules for thee, but not for me.

    And how do you manage containers? With bespoke tools and infrastructure removed from the file abstraction. Which is another way Kubernetes is removed from the Unix way of doing things. Unless I’m mistaken, it’s been a long time since I touched Kubernetes.

    So what's the problem? Didn't you just say that the Unix way of doing things is outdated? Let the CSI plugin handle the filesystem side of things, and let Kubernetes focus on workload scheduling and reconciliation.

    It’s not a preconception. They engaged with your way of doing things and didn’t like it.

    Dismissal based on flawed anecdote is preconception.

    By what standard? The standard of you and your employer? In general, you seem to be under the impression that the conventional hegemonic corporate “cloud” way of doing things is the only correct way and that everyone else is unskilled and not flexible.

    No. I'm not married to the "cloud" way of doing things. But if someone comes to me and says "Hey boblin, we want to implement something on system foo, can you help us?" and I am not used to doing things the foo way, I will say "I'm not familiar with it, but let's talk about your requirements and why you chose foo" instead of "foo is for bureaucrats, I don't want to use it". I'd rather hire an open-minded junior than a gray-bearded Unix wizard who dismisses anything unfamiliar. And I will also be the first person to reject use cases for Kubernetes when they do not make sense.

    just that you should be more open-minded and not judge everyone else seeking a different path to the conventional model of cloud/distributed computing as naive, unskilled people making “bad-faith arguments”.

    There are scenarios where cloud compute just does not make sense, like HPC. If the author had led with something like that, then they would have made a better argument. But instead they went for

    cloud-native tooling feels like it’s meant for bureaucrats in well-paid jobs,

    ,

    In the 90s my school taught us files and folders when we were 8 years old

    , and

    When you finally specify all those flags, neatly namespaced with . to make it feel all so very organised, you feel like you’ve achieved something. Sunk-cost fallacy kicks in: look at all those flags that I’ve tuned just so - it must be robust and performant!

    It's hard to not take that as bad faith.

  • I probably did go a bit ad hominem in my last paragraph. By the time I was done with the article I was very frustrated by what seemed to be some very bad faith arguments (straw man, false dilemma) that were presented.

  • This vmalert tool is just an interface to another, even more complicated piece of software.

    Not really just an interface. It is a pluggable service that connects to one or more TSDBs, performs periodic queries, and notifies another service when certain thresholds are exceeded. So with all those configuration options, why is the standalone binary expected to have defaults that may sound sane on one system but insane on a different one? If the author wants out-of-the-box configuration they could have used the Helm chart or the operator, and that would be taken care of. But they seem to be deathly allergic to YAML, so I guess that won't happen.
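    For illustration, a minimal invocation looks something like this (flag names as I recall them from the VictoriaMetrics docs; verify against `vmalert -help`). None of these three can have a sane default, because each one names a resource on *your* network:

```shell
# Sketch: the three things vmalert cannot guess on its own.
vmalert \
  -rule=/etc/vmalert/alerts.yml \
  -datasource.url=http://victoria:8428 \
  -notifier.url=http://alertmanager:9093
```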

    Since when do Unix tools output 3,000 word long usage info? Even GNU tools don't even come close...

    You just said that this software was much more complex than Unix tools. Also if only there were alternate documentation formats....

    HTTP and REST are very strange ways to accomplish IPC or networked communication on Unix when someone would normally accomplish the same thing with signals, POSIX IPC, a simpler protocol over TCP with BSD sockets, or any other thing already in the base system.

    Until you need authentication, out of the box libraries, observability instrumentation, interoperability... which can be done much more easily with a mature communication protocol like HTTP. And for those chasing the bleeding edge there's gRPC.
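    A quick stdlib-only sketch of why that maturity matters: with HTTP, authentication is just a standard header that every client, proxy, and observability tool already understands, and status codes carry the semantics. The token value below is made up; with a raw TCP protocol you would be designing and parsing all of this yourself.

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.client import HTTPConnection

TOKEN = "secret-token"  # hypothetical shared secret

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Auth is a standard header; 401 is a standard answer.
        if self.headers.get("Authorization") != f"Bearer {TOKEN}":
            self.send_response(401)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep the example quiet

# Port 0 lets the OS pick a free port for the demo server.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/", headers={"Authorization": f"Bearer {TOKEN}"})
resp = conn.getresponse()
print(resp.status)  # 200
server.shutdown()
```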

    I would hope the filesystems you use are "high availability" lol

    They're not, and I'm disappointed that you think they are. Any individual filesystem is a single point of failure. High availability lets me take down an entire system with zero service disruption because there's redundancy, load balancing, disaster recovery...

    the humble file metaphor can still represent these concepts

    They can, and they still do... Inside the container.

    It's not a lack of skill as your comment implies but rather a rejection of this way of doing things.

    Which I understand, I honestly do. I rejected containers for a (relatively) long time myself, and the argument that the author is making echoes what I would have said about containers. Which is why I believe myself to be justified in making the argument that I did, because rejecting a way of doing things based on preconception is a lack of flexibility, and in cloud ecosystems that translates to a lack of skill.

  • I am someone with Kubernetes in my job title. If you as a developer are expected to know about Kubernetes beyond containerizing your application, then your company has set itself up for failure. As you aptly said, Kubernetes is an ecosystem, and the dev portion is a small niche of that.

  • You can’t run vmalert without flags

    Running grep without parameters is also pretty fucking useless.

    500 words in to the over 3,000 word dump, I gave up.

    Claims to have a Unix background, doesn't RTFM.

    Nobody really uses Kubernetes for day-to-day work, and it shows. Where UNIX concepts like files and pipes exist from OS internals up to interaction by actual people, cloud-native tooling feels like it’s meant for bureaucrats in well-paid jobs.

    Translation: Author does not understand APIs.

    Want an asynchronous, hierarchical, recursive, key-value database? With metadata like modified times and access control built-in? Sounds pretty fancy! Files and directories.

    Ok. Now give me high availability, atomic writes to sets of keys, caching, access control...
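    To pin down the "atomic writes to sets of keys" gap: POSIX gives you an atomic write for *one* file via rename, but nothing built-in makes a write across *several* files atomic. A minimal sketch (file names are arbitrary):

```python
import os
import tempfile

def atomic_write(path, data):
    # Write to a temp file in the same directory, then rename it over
    # the target; os.replace() is atomic for a single path on POSIX.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
    try:
        os.write(fd, data)
        os.fsync(fd)
    finally:
        os.close(fd)
    os.replace(tmp, path)

d = tempfile.mkdtemp()
atomic_write(os.path.join(d, "key1"), b"value1")
atomic_write(os.path.join(d, "key2"), b"value2")
# Each call is atomic on its own, but a crash between the two calls
# leaves the "set of keys" half-updated: cross-file atomicity has to
# come from the application layer, not the filesystem.
with open(os.path.join(d, "key1"), "rb") as f:
    print(f.read())  # b'value1'
```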

    I’m ashamed enough that I can’t really apply to these jobs

    This reads as "I applied to the jobs and got rejected. There's nothing wrong with me, so the jobs must be broken".

  • I remember using QEMM for the first time and finally being able to load games and applications that would otherwise not work.

    I remember having to fiddle with IRQ settings to get sound working.

    I remember the C64 emulator and finally being able to play Ultima 4 without having to constantly switch disks.

    I remember the experimental OS and hardware explosions: QNX (still alive as an automotive OS), BeOS, MenuetOS, Transmeta Crusoe.

    The Voodoo graphics cards!