Obviously, "knowing which cloud services to enable" is a lesser skill than knowing how those services work. That is not a parallel or equal skill in any way.
But do you assume people are just going drrrrr brain off when they don't learn that one skillset you are accustomed to spotting?
However, you are now arguing a different point than the one I took from your original post. Maybe my fault in interpretation ofc, but the main difference (in my view) is:
You say "incompetent" and "less skilled" as general statements on senior engineers. Those statements are false.
You also say "missing the skills you are looking for" which is obviously true.
And the implication that before cloud, people developed the specific skills you need more naturally - because they had to. This makes sense and I believe it.
That is technically correct in a way, but I'll argue it's very wrong in a meaningful way.
Cloud services are meant to let you focus less on the plumbing, so naturally many skills in that will not be developed, and skills adjacent to it will be less developed.
Buttttt you must assume effort remains constant!
So you get to focus more on other things now. E.g. functional programming, product thinking, rapid prototyping, API stuff, breadth of languages, etc. I bet the seniors you find missing X and Y have bigger Zs, and also some Qs that you may not be used to considering, or may not have the experience to spot and evaluate.
Someone made a modified version of Quake back in the day, that rendered to stereoscopic 3D in a white noise pattern.
It was such a mindfuck to play!
You get 3D depth but no colors or shades or contrast. It's just shapes moving. So doors that were flush with the wall were impossible to see, but enemies in dark rooms were fully visible because there is no light or dark.
I like to imagine I got to experience what a bat sees with echolocation.
Yeah, funding is kinda not. I assumed the question was ignoring that, but I may have been mistaken.
Tsetlin machines are the ones I found most interesting. Strict yes/no logic stuff in the actual decision model, while the deeper complexity is in the training.
One interesting research field is "discrete AI" (it probably goes by a few other names too), which roughly means "based on integers instead of floating-point numbers". It has a few more implications about the models being more mathematically clean, but that's a long paragraph if I get into it.
The expectation is AI that is not based on absurd computing resources and black boxes, but that gets the same benefits from low-power, low-cost hardware, with outputs that can more realistically be queried to explain why the output became what it was.
E.g. if AI is used to make decisions on when to feed fish, and it feeds slightly too much, you'd want to be able to ask "why" and get a useful answer instead of today's "yeah idunno magic computer said so i guess training data lol"
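To make the "strict yes/no logic in the decision model" part concrete, here's a minimal sketch of Tsetlin-machine-style inference. The clauses are hand-picked for illustration (in a real Tsetlin machine they'd be the output of the training process, which is where the complexity lives); at decision time it's all booleans and integer votes, no floats anywhere:

```python
# Minimal sketch of Tsetlin-machine-style inference. Clauses are
# conjunctions of literals over a boolean input vector, written as
# (index, negated) pairs. Positive clauses vote +1 when they fire,
# negative clauses vote -1; the sign of the vote sum is the decision.
# NOTE: these clauses are hand-picked for illustration, not trained.

def clause_fires(clause, x):
    # A clause fires only if every one of its literals is satisfied.
    return all((not x[i]) if negated else x[i] for i, negated in clause)

def classify(pos_clauses, neg_clauses, x):
    votes = sum(clause_fires(c, x) for c in pos_clauses) \
          - sum(clause_fires(c, x) for c in neg_clauses)
    return 1 if votes >= 0 else 0

# Toy example: clauses that together decide XOR of x[0] and x[1].
pos = [[(0, False), (1, True)],   # x0 AND NOT x1
       [(0, True),  (1, False)]]  # NOT x0 AND x1
neg = [[(0, False), (1, False)],  # x0 AND x1
       [(0, True),  (1, True)]]   # NOT x0 AND NOT x1

print(classify(pos, neg, [True, False]))  # -> 1
print(classify(pos, neg, [True, True]))   # -> 0
```

The "why" question from the fish-feeding example maps onto this nicely: the answer is just the list of clauses that fired, each of which is a readable AND of input conditions.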
It used to describe the coastline full of seaside trading towns before someone got the idea to make it a country.
The literalness also shows up in all the names for places in the country. They are 90% old spellings of "The place where people live", "the field for cows to feed on", "the settlement at the north of the fjord", "upper farm", "valley settlement", and like 1837 places called "a place you can live".
Sometimes, the feeling of "how big" a thing is, is extremely attractive. It is easy to fall into a trap of enjoying the bigness so much that it makes absurdly unfounded but attractively dramatic outcomes feel more "correct" than boring but realistic ones.
I don't think this really makes sense for MS as a cost-saving measure. It is a signal meant to sell Copilot and other snake oil to other companies hoping to cut costs.
Brock Samson / Buzz Lightyear / Lemony Snicket = Patrick Warburton