50 years after Basic, most users still can't or won't program anything
FizzyOrange @programming.dev · Posts 1 · Comments 473 · Joined 2 yr. ago
Nonsense. There are way more programmers now than there were in the Windows 3.1/9x era, when you couldn't avoid files and folders. OK, more people are exposed to computers in general, but still... anyone who has the interest to learn isn't going to be stopped by not knowing what files and folders are.
It's like saying people don't become car mechanics because you don't have to hand crank your engine any more.
I'm not sure I agree. I think most people can understand recipes or instruction lists and totally could program, if they wanted to and had to. They just don't want to and usually don't have to. They find it boring and tedious, and it's also increasingly inaccessible (JavaScript tooling being the classic example).
But I think mainly people just don't find it interesting. To understand this, think about law. You absolutely have the intellect to be a lawyer (you clever clogs), so why aren't you? For me, it's mind-numbingly boring. If I was really into law and enjoyed decoding its unnecessarily obtuse language then I totally would be a lawyer. But I don't, so I'm not.
Bash is widely used in production environments for scripting all over enterprises.
But it shouldn't be.
The people you work with just don’t have much experience at lots of shops I would think.
More likely they do have experience of it and have learnt that it's a bad idea.
You’ve essentially dissed people who use it for CI/CD and suggested that their pipeline is not robust because of their choice of using Bash at all.
Yes, because that is precisely the case. It's not a personal attack, it's just a fact that Bash is not robust.
You're trying to argue that your cardboard bridge is perfectly robust and then getting offended that I don't think you should let people drive over it.
About shared libraries: many popular languages, Python being a pretty good example, do rely on these to get performance that would be really hard to get from their own interpreters/compilers, or because re-implementing the functionality in the language would be pretty pointless given the existence of a shared library that is much better scrutinized, audited, and battle-tested. libcrypto is one example. Pandas depends on NumPy, which depends on, I believe, libblas and liblapack, both written in C, and I think one if not both of these offer a CLI to get answers as well. libssh is depended upon by many programming languages with an ssh library (though there are also people who choose to implement their own libssh in their language of choice). Any vulnerabilities found in these shared libraries would affect all libraries that depend on them, regardless of the programming language you use.
You mean "third party libraries" not "shared libraries". But anyway, so what? I don't see what that has to do with this conversation. Do your Bash scripts not use third party code? You can't do a lot with pure Bash.
If your temporary small script morphs into a monster and you’re still using bash, bash isn’t at fault. You and your team are.
Well that's why I don't use Bash. I'm not blaming it for existing, I'm just saying it's shit so I don't use it.
You could use Deno, but then my point stands. You have to write a function to handle the case where an env var isn’t provided, that’s boilerplate.
Handling errors correctly is slightly more code ("boilerplate") than letting everything break when something unexpected happens. I hope you aren't trying to use that as a reason not to handle errors properly. In any case the extra boilerplate is... Deno.env.get("FOO"). Wow.
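For the record, a minimal sketch of what that "boilerplate" amounts to once the missing-variable case is actually handled (FOO is just a placeholder name; running it needs --allow-env):

```typescript
// Read a required environment variable and fail loudly if it's missing.
// FOO is a placeholder name for illustration.
const foo = Deno.env.get("FOO");
if (foo === undefined) {
  console.error("FOO is not set");
  Deno.exit(1);
}
console.log(`FOO = ${foo}`);
```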
What’s the syntax for mkdir? What’s it for mkdir -p? What about other options?
await Deno.mkdir("foo"); await Deno.mkdir("foo", { recursive: true });
What's the syntax for a dictionary in Bash? What about a list of lists of strings?
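For contrast, both of those are one-liners in TypeScript; a quick sketch with invented names:

```typescript
// A dictionary (string keys to numbers) and a list of lists of strings.
const ports: Record<string, number> = { http: 80, https: 443 };
const groups: string[][] = [["alice", "bob"], ["carol"]];

console.log(ports["https"]); // 443
console.log(groups[0][1]);   // "bob"
```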
It means that all commands that return a non-zero exit code will fail the script. The problem is that exit codes are a bit overloaded, and sometimes non-zero values don't indicate failure; they indicate some kind of status. For example git diff --exit-code or grep.

I think I was actually thinking of pipefail though. If you don't set it then errors in pipelines are ignored, which is obviously bad. If you do set it then you can't safely use grep in pipelines, because a non-match fails the whole pipeline.
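To make the "overloaded exit codes" point concrete, here's a rough Deno sketch (pattern and filename invented, run with --allow-run) that inspects grep's exit code explicitly instead of treating every non-zero value as failure the way set -e does:

```typescript
// grep exits 0 on a match, 1 on no match, and 2 on a real error,
// so "non-zero" alone doesn't mean the command failed.
const { code, stdout } = await new Deno.Command("grep", {
  args: ["TODO", "notes.txt"], // hypothetical pattern and file
  stdout: "piped",
  stderr: "piped",
}).output();

if (code === 0) {
  console.log(new TextDecoder().decode(stdout));
} else if (code === 1) {
  console.log("no matches"); // a status, not an error
} else {
  throw new Error(`grep failed with exit code ${code}`);
}
```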
Yeah Verilog. That's literally the language people use to design chips and FPGA bitstreams.
Someone has already done it: https://github.com/Redcrafter/verilog2factorio
And I certainly am not proposing that we can abandon robustness.
If you're proposing Bash, then yes you are.
You’ll probably hate this, but you can use set -u to catch unassigned variables.
I actually didn't know that, thanks for the hint! I am forced to use Bash occasionally due to misguided coworkers so this will help at least.
But you can’t eliminate their dependence on shared libraries that many commands also use, and that’s what my point was about.
Not sure what you mean here?
Just want to copy some files around and maybe send it to an internal chat for regular reporting? I don’t see why not.
Well if it's just for a temporary hack and it doesn't matter if it breaks then it's probably fine. Not really what is implied by "production" though.
Also even in that situation I wouldn't use it for two reasons:
- "Temporary small script" tends to smoothly morph into "10k line monstrosity that the entire system depends on" with no chance for rewrites. It's best to start in a language that can cope with it.
- It isn't really any nicer to use Bash over something like Deno. Like... I don't know why you ever would, given the choice (see the sketch below). When you take bug fixing into account, Bash is going to be slower and more painful.
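To show what "given the choice" looks like, here is a rough Deno sketch of the copy-some-files case; the paths and filenames are invented, and you'd run it with --allow-read and --allow-write:

```typescript
// Copy a daily report into a shared directory. Any failure throws,
// rather than being silently ignored like an unchecked `cp` in a script.
await Deno.mkdir("/srv/reports", { recursive: true }); // hypothetical destination
await Deno.copyFile("daily.csv", "/srv/reports/daily.csv");
console.log("report copied");
```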
I'm afraid your colleagues are completely right and you are wrong, but it sounds like you genuinely are curious so I'll try to answer.
I think the fundamental thing you're forgetting is robustness. Yes Bash is convenient for making something that works once, in the same way that duct tape is convenient for fixes that work for a bit. But for production use you want something reliable and robust that is going to work all the time.
I suspect you just haven't used Bash enough to hit some of the many many footguns. Or maybe when you did hit them you thought "oops I made a mistake", rather than "this is dumb; I wouldn't have had this issue in a proper programming language".
The main footguns are:
- Quoting. Trust me, you've got this wrong even with shellcheck. I have too. That's not a criticism; it's basically impossible to get quoting completely right in any vaguely complex Bash script.
- Error handling. Sure you can set -e, but then that breaks pipelines and conditionals, and you end up with really monstrous pipelines full of pipefail noise. It's also extremely easy to forget set -e.
- General robustness. Bash silently does the wrong thing a lot.
instead of an import sys; sys.argv[1] in Python, you just do $1

No. If it's missing, $1 will silently become an empty string, whereas sys.argv[1] will throw an error. Much more robust.
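A minimal Deno sketch of the same check made explicit (the argument name is invented):

```typescript
// Deno.args is the list of command-line arguments (no script name at index 0).
// A missing argument is undefined, so the script can fail loudly instead of
// silently carrying on with an empty string the way Bash's $1 would.
const target = Deno.args[0];
if (target === undefined) {
  console.error("usage: script.ts <target>");
  Deno.exit(1);
}
console.log(`target = ${target}`);
```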
Sure, there can be security vulnerability concerns, but you’d still have to deal with the same problems with your Pythons your Rubies etc.
Absolutely not. Python is strongly typed, and even statically typed if you want. Light years ahead of Bash's mess. Quoting is pretty easy to get right in Python.
I actually started keeping a list of bugs at work that were caused directly by people using Bash. I'll dig it out tomorrow and give you some real world examples.
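On the quoting point, the reason it largely stops being a problem outside the shell is that values stay as single strings in a list rather than being re-split and re-interpreted. A tiny TypeScript sketch (filenames invented):

```typescript
// A value with spaces, globs, or quotes is just a string; nothing
// word-splits or re-interprets it behind your back.
const files = ["My Report (final) v2.txt", "*.log", 'He said "hi"'];
for (const f of files) {
  console.log(`[${f}]`); // each prints as exactly one value
}
```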
Yeah I think you've made it worse than Rust in both cases. They clearly shouldn't be strings. And the second option is just unnecessarily confusing.
What is a "traditional programming language"? I don't think the popularity of Rust has anything whatsoever to do with AI.
Permanently Deleted
You could do some of it in Python, but some stuff needs low level access to registers, e.g. trap handlers and context switching.
Should you do that? Absolutely fucking not. It would be hilariously slow and inefficient. Hundreds of times, maybe thousands of times slower than C/C++ kernels.
They definitely didn't. Yes, it is technically better to use their own new fancy system, but they're a business. Backwards compatibility is a killer feature, even if you don't want it from a technical point of view.
I guarantee they looked at the numbers, interviewed users and asked them why they weren't using Deno, and the number one reason would have been "we'd love to but we need to be able to use the X node package".
They probably have to improve Node compatibility, but the Node API surface is actually not that big. They'll get there.
Yes, and then pass the context from the call sites of that function, and all the way up to main(). Oh look, you've refactored the entire app.

That's the best case, too. You'd better hope your program isn't actually a shared library running in a SystemVerilog simulator, with state instantiated from separate modules via DPI, or whatever.
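A trivial TypeScript sketch of that cascade, with all names invented: giving the innermost function an explicit context parameter forces every caller above it to change as well.

```typescript
// Hypothetical example: `log` now takes a context object instead of a global.
interface Ctx {
  verbose: boolean;
}

function log(ctx: Ctx, msg: string) {
  if (ctx.verbose) console.log(msg);
}

function step(ctx: Ctx) { // this signature changes...
  log(ctx, "running step");
}

function run(ctx: Ctx) { // ...and so does this one...
  step(ctx);
}

function main() { // ...all the way up to main().
  run({ verbose: true });
}

main();
```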
30 years my ass
lol when you have 30 years experience you will have actually tried to do this a few times and realised it isn't usually as trivial as you hope it would be.
ChatGPT or Claude. Just be aware that sometimes they'll get things convincingly wrong. If you're new to programming and asking simple questions then it should be relatively uncommon though.
I take time to explain why they aren’t duplicates of similar questions.
Ha yes I've found that if you explain why your question isn't a duplicate of another question and link to it, people are more likely to report it as a duplicate of that question. So stupid.
it’s an easy refactor to make it not global
I have enough experience to know that making global state non-global is usually anything but easy.
I don't need to Google anything. I have 30 years experience writing C & C++.
This is not about storage durations
Yes it is.
https://en.cppreference.com/w/c/language/storage_duration
it’s local to a function
Only the visibility is local. The data is still global state. You can call that function from anywhere and it will use the same state. That's what global state means.
https://softwareengineering.stackexchange.com/a/314983
Some of the biggest issues with global state are that it makes testing difficult and it makes concurrent code more error-prone. Both of those are still true for locally scoped static variables.
that static variable is local to that function
Yes I know how static storage durations work. It's still global state, which is a code smell. Actually I'd go as far as to say global state is just bad practice, not just a smell. Occasionally it's the only option, and it's definitely the lazy option which I won't claim to never take!
// a method with a state, horrid in some contexts, great in others
Definitely another code smell!
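The exchange above is about C's static locals, but the same shape is easy to sketch in TypeScript: a "method with state" whose variable is only visible inside the function, yet is still one shared piece of global state.

```typescript
// The counter lives in a closure: invisible outside nextId(), but every
// caller in the program shares it, which is exactly what makes tests and
// concurrent use awkward.
const nextId = (() => {
  let counter = 0;
  return (): number => ++counter;
})();

console.log(nextId()); // 1
console.log(nextId()); // 2 — a different call site still sees the shared state
```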
nibbles.bas, a classic. I wonder if you can play it online.