According to this study, the eye can see a difference at up to 500 fps. While this is a specific scenario, it’s a scenario that could plausibly happen in a video game, so I guess it means we can go to around 500 Hz monitors before it becomes too much or unnecessary.
You can also define a vector by the equivalent “sides of the right triangle” - in 2D, the x and y coordinates. In computer science, vectors are n-tuples, so they represent a math/physics vector, but in n dimensions.
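A minimal sketch of that correspondence (the names here are made up for illustration; a “vector” is just a fixed-size tuple of components, and the length falls out of the right-triangle picture):

```c
#include <math.h>
#include <stdio.h>

/* A 2D vector: the "sides of the right triangle" are the components. */
struct Vec2 { double x, y; };

/* Magnitude from the components, i.e. the hypotenuse. */
double vec2_length(struct Vec2 v) {
    return sqrt(v.x * v.x + v.y * v.y);
}

/* The same idea generalizes to n dimensions: an n-tuple of components. */
double vecn_length(const double *v, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += v[i] * v[i];
    return sqrt(sum);
}

int main(void) {
    struct Vec2 v = {3.0, 4.0};
    double w[3] = {1.0, 2.0, 2.0};
    printf("|v| = %f\n", vec2_length(v));    /* 5.000000 */
    printf("|w| = %f\n", vecn_length(w, 3)); /* 3.000000 */
    return 0;
}
```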
There are such cases, yes. The vast majority of fat people I know personally, however, are fat because they sit at an office job all day while having KFC for lunch every day. No matter how you spin it, this is extremely unhealthy.
Which is exactly what I said, that it’s fast enough for most use cases.
In theory though, you will “gain performance” by rewriting it (well) in C for literally anything. Even if it’s disk I/O bound, the actual time spent in your code will be lower, while the time spent in kernel mode will be just as long.
For example, say you are running a server which reads files and returns data based on those files. The act of reading the file won’t be much faster, but if written in C, your parsers and the actual logic for what to do with the file will be - see the sketch below.
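Roughly what I mean (a hypothetical handler; the function names and the trivial line-counting “parser” are made up for illustration):

```c
#include <stdio.h>
#include <stdlib.h>

/* Reading the file is dominated by kernel/disk time: rewriting
 * this part in C barely changes anything. */
static char *read_whole_file(const char *path, long *out_len) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);
    char *buf = malloc(len + 1);
    if (buf && fread(buf, 1, len, f) != (size_t)len) {
        free(buf);
        buf = NULL;
    }
    if (buf) buf[len] = '\0';
    fclose(f);
    *out_len = len;
    return buf;
}

/* The "actual logic" - parsing, here just counting lines - runs
 * entirely in user mode, and this is the part C speeds up. */
static long count_lines(const char *buf, long len) {
    long lines = 0;
    for (long i = 0; i < len; i++)
        if (buf[i] == '\n') lines++;
    return lines;
}

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    long len = 0;
    char *buf = read_whole_file(argv[1], &len);
    if (!buf) return 1;
    printf("%s: %ld lines\n", argv[1], count_lines(buf, len));
    free(buf);
    return 0;
}
```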
But it’s as you said: this tiny performance gain isn’t worth the development/resource cost most of the time.
How are they ignorant? It’s a known fact that Java is slow, or at least slower than some other languages. Sure, it’s still fast enough for 95% of use cases, but most code will run faster if written in, say, C. It will have 10x the amount of code and twice as many bugs, though.
That’s a crazy take though.
Everyone knows that what you’re most familiar with is way more intuitive than something you’ve never touched in your life.
There are more households that drive cars than ride bikes - is a car therefore a more intuitive mode of transport than a bike?
How intuitive something is only affects the initial experience. This is why learning to drive a car usually takes about a year in most countries - it’s not very intuitive. If you already know how to drive a car, however, you can learn to drive a bus much faster - the bus is now intuitive because it’s similar to something you already know.
So of course whichever DE replicates Windows the best is going to be the most intuitive. Doesn’t mean that it’s better once you’ve gotten used to it though.
The amount of force needed to deflect a large object is much smaller than to stop it. In fact, if done over a large enough distance, a tiny amount of force is sufficient.
Need an example? Imagine your big brother is skating down a slope. Could you block him head-on? Probably not. But what if your sister, who was skating next to him, were to slightly steer him out of the way so that he doesn’t hit you?
As an alternative, you can also slow him down over a long distance - that takes the same total impulse, but as a smaller force applied for longer.
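To put rough numbers on it, here’s a minimal impulse-momentum sketch (the mass m, speed v, deflection angle θ, and stopping time T are assumed quantities, not from the example above):

```latex
% Impulse needed to stop a mass m moving at speed v:
J_{\mathrm{stop}} = \Delta p = m v

% Impulse needed to deflect it by a small angle \theta
% (speed unchanged, only the direction of momentum rotates):
J_{\mathrm{deflect}} = 2 m v \sin(\theta/2) \approx m v \theta \ll m v
\quad (\theta \ll 1)

% Slowing him to a stop over a time T instead of instantly:
\bar{F} = \frac{m v}{T}
\quad \text{(same total impulse } m v \text{, smaller force for longer)}
```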
Not anymore though. 10 years ago, sure, but now you’re forced to either bundle it with phone and cable for a reasonable price (for the internet, but you’re still buying 2 other things you might not need) or buy the minimum of 60 Mbps at a premium. And this is in a town of 500 people half an hour away from the nearest city. 15 years ago there was straight up no internet there.
According to the GNOME website (https://wiki.gnome.org/Initiatives/Wayland/NVIDIA), GNOME still uses a software cursor. Of all the Wayland projects, only KDE has a hardware cursor with NVIDIA, while AMD has it everywhere.
I have one tab per email account, a few for GitHub issues I’m waiting to see fixed, and one with some random search I just use as a reminder. None of them have been closed in months. I literally have a script to open them on my second monitor every time I boot my PC.