It's certainly an argument I've heard a lot when talking about inconsistencies in the Bible. Usually they're blamed on translation, missing context, or exaggerated retellings. It was written by many different people who weren't necessarily talking to each other, after all. I have a hard time taking any of it seriously.
I'm going to file this under the same category of philosophy as "what if we're living in a simulation?" and "parallel universe" theories. As far as I'm aware, we have no evidence that there's even such a thing as a false vacuum, so this is all just speculation based on some theories.
Personally I just have an old micro USB cable that I cut the end off of and soldered solid-core wires to. Just plug the USB-A end into a battery bank and the wires into the breadboard rails and you've got a stable 5V supply.
I rarely needed 3.3V on a breadboard, but when I did I usually had a 5V to 3.3V voltage translator already on the board which was enough to get by.
Any sort of op-amp circuit would easily make use of a 15V input, or better yet, use the full 20V with a 10V reference to get +/-10V voltage rails for an amplifier circuit.
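To spell out the arithmetic on that last point (assuming the 10V reference is used as a virtual ground for the op-amp):

    +rail: 20V - 10V = +10V relative to the reference
    -rail:  0V - 10V = -10V relative to the reference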
These don't seem to be particularly new panels. $600 and only 97% of the sRGB color space (= ~78% DCI-P3); meanwhile a similarly priced LG "QNED" can do 90-95% of DCI-P3. I'm not sure you can even call those TVs HDR if they're only 8-bit color. None of these models can even remotely compare to a brand new OLED TV.
I prefer tutorials that don't open with someone's life story. The intro contains so little information for the number of words it uses. It reminds me of looking up a recipe and having to scroll past an essay on the history of someone's grandmother. I like documentation to be as dense as possible, and ideally formatted in a logical way so it's easy to skim. Big paragraphs of English do not achieve this.
I got the same sort of impression in the "Write for beginners" section. The "good" example is like 3x as long but contains less actual information. The reader is already looking up a tutorial; you don't need to sell them on what they're about to do with marketing speak. I've really come to value conciseness in recent years.
As long as the labels don't end up on absolutely everything like in California.
It makes sense on things you actually consume, but a lot of other tech products and tools have the California warnings too, and they've become meaningless to me.
I have no way of knowing if just holding a thing increases my risk of cancer or if it's only an issue if I were to lick a surface or consume something inside. I mean, aluminum apparently causes cancer?!? What can I even do with that information?
Edit: I read the wrong list; aluminum is fine, but other metals like lead and nickel are bad. The problem is the labels don't tell you what the danger is. Does the product have a literal lead weight inside that you'll never touch? Or is the outside coated in one of the other 600 cancer-causing chemicals? (https://oehha.ca.gov/media/downloads/proposition-65//p65chemicalslist.pdf)
Crazy that wood dust is on there. That explains why basically all IKEA furniture "may cause cancer".
Also, a key part of how GPT-based LLMs work today is that they get the entire context window as their input all at once, whereas a human has to listen/read a word at a time and remember the start of the conversation on their own.
I have a theory that this is one of the reasons LLMs don't understand the progression of time.
The context window is a fixed size. If the conversation gets too long, the oldest messages get pushed out and the AI won't remember anything from the start of the conversation (rough sketch below).
It's more like having a notepad in front of a human: the AI can reference it, but not learn from it.
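Something like this is what I mean by the start getting pushed out (totally made-up names, and real chat frontends are fancier about it, but the idea is the same):

    # Rough sketch: keep only the most recent messages that fit in the window.
    # Anything older never reaches the model at all.
    def build_prompt(messages, max_tokens, count_tokens):
        kept, total = [], 0
        for msg in reversed(messages):      # walk newest -> oldest
            cost = count_tokens(msg)
            if total + cost > max_tokens:
                break                       # the start of the chat falls off here
            kept.append(msg)
            total += cost
        return list(reversed(kept))         # back to chronological order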
the kind of stuff that people with no coding experience make
The first complete program I ever wrote was in Basic. It took an input number and rounded it to the 10s or 100s place. I had learned just enough to get it running: it used strings and a bunch of if statements, so it didn't work for numbers longer than 3 digits. I didn't learn about modulo operations until later (quick sketch below).
In all honesty, I'm still pretty proud of it; I was in 4th or 5th grade after all. I've now been programming for 20+ years.
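For the curious, the modulo version that eluded me back then is only a few lines (sketched in Python rather than the original Basic, and round_to is just a name I made up):

    def round_to(n, place):
        remainder = n % place              # e.g. 1234 % 10 == 4
        if remainder >= place // 2:
            return n - remainder + place   # round up
        return n - remainder               # round down

    print(round_to(1234, 10))    # 1230
    print(round_to(1567, 100))   # 1600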
I think part of the problem is that LLMs stop learning at the end of the training phase, while a human never stops taking in new information.
Part of why I think AGI is so far away is that running the training in real time, like a human does, would take more compute than currently exists. They should be focusing on doing more with less compute to find new, more efficient algorithms and architectures, not throwing more and more GPUs at the problem. Right now 10x the GPUs gets you maybe 5-10% better accuracy on whatever benchmarks, which is not a sustainable direction.
2 of those games are from 2022 and 2023