They make much more practical chest straps for things like GoPros - you can sort of forget it's there and just let it capture everything. I did that while white-water rafting and it worked very well.
Faraday bags work, period, if they are made and used properly. If there's no RF getting in or out, there's no GPS and no checking in with the towers. Inertial navigation doesn't work worth a damn after a little while, and it won't work at all when powered down. Obviously, black tape goes over the camera lenses unless you're ready to share what they're seeing, and the microphones can listen very well too - it is a phone, after all - so bear that in mind.
If your "blue leaders" are any good, they enforce psych profiles on police recruitment that ensure you don't end up with a force full of racist, authoritarian, law-into-their-own-hands radicals - at least for the forces they control; the feds may be a different story.
When I shut the lid on my work computer I assumed it was "off" or at least inactive. My home network showed me it was continuing to "check in" throughout the night.
Phone menu trees have their place; they can improve customer service if they are implemented well - meaning sparingly, just where they work well.
Same for AI. A simple "Would you like to try our AI common-answers service while you wait for your customer service rep to become available? You won't lose your place in line." can dramatically improve efficiency and effectiveness.
Of course, there's no substitute for having people who actually respond. I'm dealing with a business right now that seems to check their e-mails and answer their phones about once per month - that's approaching criminal negligence, or at least grounds for a CC charge-back.
Hallucinations and the cost of running the models.
So, inaccurate information in books is nothing new. Agreed that the rate of hallucinations needs to decline, a lot, but there has always been a need for a veracity filter - just because it comes from "a book" or "the TV" has never been an indication of absolute truth, even though many people stop there and assume it is. In other words: blind trust is not a new problem.
The cost of running the models is an interesting one - how does it compare with publishing on paper, shipping globally, and storing in environmentally controlled libraries that require individuals to physically travel to and from them to access the information? What's the price of the resulting increased ignorance of the general population due to the high cost of information access?
What good is a bunch of knowledge stuck behind a search engine when people don't know how to access it, or access it efficiently?
Granted, search engines already take us 95% (IMO) of the way from paper libraries to what AI is almost succeeding in being today, but ease of access to information has tremendous value - and developing ways to easily access the information available on the internet is a very valuable endeavor.
Personally, I feel more emphasis should be put on establishing the veracity of the information before we go making all the garbage easier to find.
I also worry that "easy access" to automated interpretation services is going to lead to a bunch of information encoded in languages that most people don't know, because they're dependent on machines to do the translation for them. As an example: a shiny new computer language comes out, but the software developer is too lazy to learn it, so the developer uses AI to write code in the new language instead...
I'm not trained or paid to reason, I am trained and paid to follow established corporate procedures. On rare occasions my input is sought to improve those procedures, but the vast majority of my time is spent executing tasks governed by a body of (not quite complete, sometimes conflicting) procedural instructions.
If AI can execute those procedures as well as, or better than, human employees, I doubt employers will care if it is reasoning or not.
I think as we approach the uncanny valley of machine intelligence, it's no longer a cute cartoon but a menacing, creepy, not-quite imitation of ourselves.
My impression of LLM training and deployment is that it's actually massively parallel in nature - it could be implemented one instruction at a time, but in practice it isn't.
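A toy sketch of that point (my own illustration, not anything from an actual LLM stack): the workhorse operation in inference is a matrix-vector product, which is embarrassingly parallel across rows, yet the identical result can be computed one multiply-add at a time.

```python
# Toy 3x2 weight matrix and input vector (made-up numbers for illustration).
W = [[1, 2], [3, 4], [5, 6]]
x = [10, 100]

# Serial version: one multiply-add at a time.
serial = []
for row in W:
    acc = 0
    for w, xi in zip(row, x):
        acc += w * xi   # each row's sum is independent of every other row,
    serial.append(acc)  # which is exactly why hardware runs them in parallel

print(serial)  # [210, 430, 650]
```

Same arithmetic, same answer; the parallelism is a property of the data dependencies, not of the result.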
It's not just the memorization of patterns that matters, it's the recall of appropriate patterns on demand. Call it what you will, even if AI is just a better librarian for search work, that's value - that's the new Google.
Not my hypothesis. And it is just bullshit, but if you pay attention, they have made similar runs at taxing and controlling the internet periodically since the 1990s.
Well, that's a big component: how efficient / environmentally destructive is the mining?
Also, as electricity consumption in areas like China, India, Africa increases, they're going to start needing big multiples of the amount of copper used in the US/Europe/ANZ to-date.
Truth is, there's precious little I need from Amazon, let alone need in a hurry. It mostly comes down to wants, supplies for projects which are themselves entirely optional, etc. As such, I will let a "want" item sit in my cart for weeks until it is joined by enough other want items to hit the $25 or $35 or whatever threshold they have set to waive the arbitrary $6.99 small-order shipping fee.
I don't like to think about how much I have spent there, and elsewhere online, for things I don't really need. I do like to take arbitrary months off from buying anything optional, kind of like intermittent fasting - gives me time to finish out things I have started, clean up stuff I have abandoned, do things that don't require "stuff."
If I am typical, the world could boycott over 80% of its amazon.com purchases without even bothering to get the stuff from elsewhere, and 80% of the remaining 20% could be sourced elsewhere, perhaps at 10-20% higher cost, perhaps not even that.
And get a burner.