I personally lean towards "we've done all this work, and it's incredibly scary that modern observations tell us the work we've put in is actually wrong and we have to create brand-new formulas again."
General Electric was there too
The answer seems to be "it depends" and "if you have the right equipment and know-how"
Harm Reduction Rule
What's this mean? I'm OOTL.
Is it possible to get the joke at runtime using the spectre exploit?
The 'document' part also seems to be insanely hit-or-miss from my amateur experience. Self-documenting design/code is... well, not. Auto-generated documentation is also usually just as bad IMO. Producing good documentation really is a skill in and of itself.
Also, small personal opinion: if your abstraction layers or algorithms are based on a technical concept, you should probably attribute that concept and provide links to further research, to eliminate future ambiguity or in case your reader lacks that background. Future you will probably thank you, and anyone like me who immediately gets lost in jargon soup will also be thankful.
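To make concrete the kind of attribution I mean, here's a minimal sketch (the function, algorithm choice, and link are my own illustrative example, not from any particular codebase):

```python
import random


def reservoir_sample(stream, k):
    """Select k items uniformly at random from a stream of unknown length.

    Implements Algorithm R (reservoir sampling). If that name means
    nothing to you, see the background here:
    https://en.wikipedia.org/wiki/Reservoir_sampling
    """
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Replace an existing item with probability k / (i + 1).
            j = random.randrange(i + 1)
            if j < k:
                reservoir[j] = item
    return reservoir
```

One line naming the concept plus a link costs almost nothing, and spares the next reader from reverse-engineering why the replacement probability works.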
I think a good solution would be to just have that script autogenerated by the flatpak, honestly.
Well, I come to the comments in good faith to hear perspectives and try and learn something. I had no aim of turning it into a debate.
It is frustrating to see what looks like it could be an eye-opening dialogue never materialize because neither party actually brings anything but insults to the table. You might as well not even comment at that point IMO.
This is what I was looking for! Thank you. You can get more mileage out of this work by linking this comment in the future.
Currently on break, I've decided to just form the comment I've been formulating into a blog post which I'll link when I've finished. I appreciate the patience.
You're correct: so far I've mainly laid out personal theory and anecdote. I will hunt down some sources and would appreciate it if you do the same. You've now had four comments to do so and still have abstained.
I am gonna have to come back to this, since I have work coming up. I will edit this when I can and will DM you when I've come back to it.
Very fast response time, cool.
The only source you've provided so far is a hand-wave at a collection of governments - and multiple companies that have been explicitly known for being terrible to the worker - and an argument of theory from Marx in 1872. You've made a claim that central planning achieved these things; please actually cite the sources you've read.
Meanwhile, not sure what obfuscation has to do with central planning.
When you have people who specialize in politics working in an office, dictating what industrial workers do on the ground, the game of telephone often leads the higher-ups to make decisions that are actively counter-productive to progress and efficiency. As a factory worker whose safety standards and work procedures are dictated by people who don't even step onto the floor, this is a constant issue.
As an alternative, I think a centralized group helping formulate a general goal with success criteria, then leaving the rest of the planning to the actual workforces, is better for the worker and can actually end up more efficient in the right conditions.
EDIT: Modified the first part to make it clear that I mean Walmart and Amazon have been notoriously bad to their workforces, and that I'm not commenting on the countries.
Your comment does not promote actual discussion and I'd like you to do better, please.
Your comment is only refuting an argument and then supporting that refutation with an ad hominem attack, rather than actually providing a supporting argument.
I as a layman would also actually like to know why you believe that the critique of centralization is 'not based in reality'.
My prior understanding is that any time you obfuscate the management of a project from the workers of the project - regardless of the method of obfuscation (layers of management, distance, language barriers, subterfuge) or the project type - you inevitably end up with out-of-touch individuals directing.
Please tell me why this is an out of touch understanding of why centralization is an issue.
There is often a very limited market for underperforming hardware, which is how RISC-V chips will be starting out. There is a large amount of accumulated knowledge about, and workflow to accommodate, already established ISAs.
Because most companies are publicly traded, taking risks is much less common, since a drop in profits could see a massive portion of the company's funds get pulled, or, more likely, the CEO getting yanked by the board. So they play it safe and choose already established architectures.
I never claimed that the current software didn’t use machine learning
This is not AI.
That was your flat statement, and your only argument was that it was done before AI was used in it. That's a poor argument. That's like arguing that self-driving isn't AI because remote-control car piloting existed.
Automated image manipulation vs having 100s of hours in Photoshop. That's AI vs what came before. Inputting a source file and getting a manipulated file after some amount of time, vs hours of meticulous work trying to get minor details right.
If we want to compare old-school manipulation vs AI manipulation, then yes, fakes now are on par with the insane skill of some image-doctoring artists - you're just looking for different things - but at an exponentially lower cost than hiring a professional. Compare AI to itself, though? It's night and day. Early AI manipulation was atrocious, and modern AI manipulation is only going to get better. That is all due to breakthroughs in AI. Imagine what the hell will happen when Sora becomes usable by anyone.
Machine learning has taken an originally hard thing to do and made it cheap and easy. Now any schmuck can pump out doctored footage in an afternoon. That's why the AI porn is big: you can pay dirt cheap, give the model photos of any random woman, and it'll make porn of her. That fact has turned it into a much more viable business model than before, one that's currently creating massive amounts of non-consensual porn fakes - exponentially more than before.
You are pulling a no true Scotsman fallacy here. AI has always been a somewhat vague term, and it's explicitly a buzzword in today's systems.
This AI front has also been taking its current form for more than a decade, but it wasn't a public topic until now, because it was terrible up until now.
The relevant thing is that AI is automating a normally human-centric practice via extensive training on a data model. All the systems I've mentioned utilize that machine learning practice at some point in their process.
The statement about the deepfakes is just patently incorrect on your part. It is a trained model which takes an input and produces a manipulated output based on its training. That's enough to meet the criteria. Before, it was fairly difficult and almost immediately identifiable as AI-manipulated. It's now popular because it's gotten good enough not to be immediately noticeable, can be done fairly easily, and is at the point where it can be mostly automated.
If we're talking only about LLMs, then probably the biggest issues caused are threats to support line jobs, the enshittification of said help lines, blatant misinformation spread via those chat bots, and a variety of niche problems.
If we're spreading out to mean AI more generally, we could talk about how facial recognition has now gotten good enough that it's being used to identify and catalogue pretty much anyone who passes an FR-equipped security system. Israel has actually been picking civilian targets via AI. We could also talk about "self-driving" cars and the completely avoidable deaths they've caused. We could talk about how most convolutional-network AIs that identify graphic imagery and other horrific visuals use massive sweatshops to sort said graphic images for pennies. We could also talk about how mimicry AI has now been used both to create endless revenge porn of unwilling victims and to fake the voices of others to try to scam people or discourage them from voting. There's plenty of damage AI as a whole has done, even if LLMs have done the least of all of them.
Before Helldivers? Lethal company.
There's literally a section titled 'why use UTC - not TAI?'.
There's definitely self-selection happening. A paranoid individual is more likely to feel the need to buy a gun. A person who wants control over others is more likely to feel that same need. A person with malicious or suicidal intent is more likely to feel that same need.
Meanwhile, it's entirely a coin-toss whether a sane, responsible individual actually feels like they can or should own a firearm. I think as we get into worse civil unrest, we will inevitably see more individuals feel that they have no choice but to arm themselves, but for the time being it's going to be the less savory folks rushing to buy.