BIDEN DROPS OUT OF THE PRESIDENTIAL RACE
ClamDrinker @ ClamDrinker @lemmy.world Posts 0Comments 258Joined 2 yr. ago
And even with that base set, even if a computer could theoretically try all trillion possibilities quickly, it'll make a ton of noise, get throttled, and likely lock the account out long before it has a chance to try even the tiniest fraction of them.
One small correction - this just isn't how the vast majority of password cracking happens. You'll most likely get throttled before you try 5 passwords and banned before you get to try 50. And it's extremely traceable what you're trying to do. Most cracking happens after a data breach, where the cracker has unrestricted local access to (hopefully) hashed and salted passwords.
People just often re-use their password, or forget to change it after a breach. That's where these leaked passwords get their value, if you can crack the hashes. So really, online throttling is a non-factor. But the rest stands.
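To illustrate the offline scenario, here's a minimal sketch of a dictionary attack against a breached salted hash. Everything here (the password, salt handling, and wordlist) is made up for illustration; real sites should store passwords with a slow KDF like bcrypt or Argon2, not bare SHA-256.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes) -> str:
    # Salted SHA-256, as a (weak) site might store it.
    return hashlib.sha256(salt + password.encode()).hexdigest()

# Hypothetical breach record: the attacker obtained both hash and salt.
salt = os.urandom(16)
leaked_hash = hash_password("hunter2", salt)

# Offline dictionary attack: no server, no throttling, no lockouts --
# just hashing candidates locally until one matches.
wordlist = ["password", "123456", "letmein", "hunter2"]
cracked = next((w for w in wordlist if hash_password(w, salt) == leaked_hash), None)
print(cracked)  # -> hunter2
```

This is why re-used passwords are so valuable: once one site leaks, the cracked password unlocks every other account it was re-used on, with no rate limit in sight.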
While this comic is good for people who do the former or have very short passwords, it often distracts from the fact that humans simply shouldn't try to remember more than one really good password (for a password manager), and should apply proper supplementary techniques like 2FA. One fully random password of sufficient length will do better than both of these, and it's not even close. It will take a week or so of typing it to properly memorize, but once you do, everything beyond that can be fully random too, and will be remembered by the password manager.
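As a rough sketch of why one fully random master password is so strong (the length and character set here are illustrative assumptions, not a recommendation from the comic):

```python
import math
import secrets
import string

# One fully random master password from the ~94 printable ASCII characters.
alphabet = string.ascii_letters + string.digits + string.punctuation
length = 16
master = "".join(secrets.choice(alphabet) for _ in range(length))

# Entropy in bits: length * log2(alphabet size).
entropy_bits = length * math.log2(len(alphabet))
print(f"{entropy_bits:.0f} bits")  # ~105 bits -- far beyond feasible brute force
```

At ~105 bits, even an offline cracker doing a trillion guesses per second would need on the order of a billion years on average, which is the whole point of delegating everything else to a manager.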
Permanently Deleted
Depends on what kind of AI enhancement. If it's just more things nobody needs that solve no problem, it's a no-brainer to skip. But take computer graphics: DLSS is a feature people do appreciate, because it makes sense to apply AI there. Who doesn't want faster and perhaps better graphics by using AI rather than brute forcing it? It also saves on electricity costs.
But that isn't the kind of thing most people on a survey would even think of, since the benefit is readily apparent and doesn't even need to be explicitly sold as "AI". They're most likely thinking of the kind of products where the manufacturer put an "AI powered" sticker on them because their stakeholders told them it would increase sales, or because it allowed them to overstate the value of a product.
Of course people are going to reject white collar scams if they think that's what "AI enhanced" means. If legitimate use cases with clear advantages are produced, they will speak for themselves, and I don't think people would be opposed. But obviously, there are a lot more companies that want to ride the AI wave than there are legitimate use cases, so there will be quite some snake oil being sold.
What are you talking about? The open source community has trained these kinds of models. They're out there.
Yes, and it's really getting to the point where it undoubtedly impacts his ability to perform without an aide basically at his side at all times. Imagine him telling another world leader something insane and the translator having to jump in to correct him. Or worse: how can you be sure something he did say was actually something he meant to say, and not just another mistake? That kind of communication failure is very dangerous at the highest level.
I thought you were trolling, like in a "you have to double check if I'm bullshitting you now" kind of way. But it's actually true... and unlike his Putin remark he didn't seem to even notice he said it.
There is no such sexual material in the book. An innocent teenage girl asked a raunchy question to a friend she had a crush on, because that's the kind of behavior teens display while they grow up and develop themselves. And she got shut down. Nothing graphic was ever shown. The book shows only what was written by that same normal girl, disconnected and hidden from the world as her family hid from murderous, tyrannical Nazis. Raw and unfiltered thoughts and feelings of a normal developing teen, as the girl wrote them for herself, not for us. As the Anne Frank Foundation said in the video: "A book written by a 12 year old can be read by 12 year olds."
On the off chance you aren't completely trolling: Anne Frank was a girl going through puberty. She had a crush on her friend and, like any normal young person, had to deal with scary, unknown, but very normal human feelings and desires of intimacy and love. It's her own fucking diary; she didn't censor herself for prudes in 2024. She had war and death hanging over her head at any moment.
And if you actually go look at the book, there is nothing graphic about it. To these prudes, having these normal feelings and describing them in a diary is what counts as graphic. Here's a Dutch talk show host absolutely clowning on these people just by showing the passage the controversy is actually about (with English subtitles).
You can certainly try to use the power as much as possible, or sell the energy to a country with a deficit. But the problem is that you would still need to invest a lot of money to make sure the grid can handle the excess if you build renewables to cover 100% of grid demand, now and in the future. Centralized fuel sources require much less grid change because power flows from one place and spreads from there, so infrastructure only needs to be improved close to the source. Renewables, as decentralized power sources, require the grid to be strengthened anywhere they are placed, and often that is not practical, both in financial cost and in the engineers it takes to actually do it.
Would it be preferable? Yes. Would it happen before we already need to be fully carbon neutral? Often not.
I'd refer you to my other post about the situation in my country. We have a small warehouse, a few football fields in size, which stores the most radioactive of the unusable nuclear fuel, and it still has more than enough space for centuries. The rest of the fuel is simply re-used until it's effectively regular waste. Building two new nuclear reactors here also takes only about 10 years, not 20.
Rather continue with wind and solar and then batteries for the money.
All of these things should happen regardless of nuclear progress. And they do happen. But again, building renewables isn't just about the price.
Some personal thoughts: My own country (The Netherlands), despite a very vocal anti-nuclear movement in the 20th century, has now completely flipped, to where the only parties not in favor of nuclear are the Greens, who at times cite the fear as a reason not to do it. As someone who treats climate change as truly existential for our country, which lies below projected sea levels, that makes them look unreasonable, like they're not taking the issue seriously. We have limited land too, and a housing crisis on top of it. So land usage is a big pain point for renewables, and even if the land is unused, it is often so close to civilization that it affects people's feelings about their surroundings, which might cause renewables not to make it as far as they could unrestricted. A nuclear reactor takes up a fraction of the space and can be relatively hidden from people.
All the other parties that heavily lean into combating climate change at least acknowledge nuclear as an option that should be (and is being) explored. And even the more climate-skeptical parties see nuclear as something they could stand behind. Having broad support for certain actions is also important to actually getting things done. Our two new nuclear power plants are expected to be running by 2035. That's only ten years from now, ahead of our climate goal to be net-zero in 2040.
People are kind of missing the point of the meme. The point is that nuclear is down there along with renewables in safety and efficiency. It's lacking the egregious cover-up in the original meme, even if it has legitimate concerns now. And due to society's ever increasing demand for electricity, we will heavily benefit from having a more scalable solution that doesn't require covering, and potentially disrupting, massive amounts of land before its operations can be scaled up to meet extraordinary demand. Wind turbines and solar panels don't stop producing when we can't use their electricity either, so if we build too many of them we risk creating complications outside of peak hours. Many electrical networks aren't built to handle those loads. A nuclear reactor can be scaled down to use less fuel and put less strain on the electrical network when it's not needed.
It should also be said that money can't always be spent equally everywhere. And depending on the labor required, there is also a limit to how manageable infrastructure is when it scales. The people that maintain and build solar panels, hydro, wind turbines, and nuclear, are not the same people. And if we acknowledge that climate change is an existential crisis, we must put our eggs in every basket we can, to diversify the energy transition. All four of the safest and most efficient solutions we have should be tapped into. But nuclear is often skipped because of outdated conceptions and fear. It does cost a lot and takes a while to build, but it fits certain shapes in the puzzle that none of the others do as well as it does.
"You know you don't need to bring a dead horse every time you want catering right, Jim?"
deleted by creator
If you're here because of the AI headline, this is important to read.
We’re looking at how we can use local, on-device AI models -- i.e., more private -- to enhance your browsing experience further. One feature we’re starting with next quarter is AI-generated alt-text for images inserted into PDFs, which makes it more accessible to visually impaired users and people with learning disabilities.
They are implementing AI how it should be done. Don't let all the shitty companies blind you to the fact that what we call AI has positive sides.
Yes, it would be much better at mitigating it and would beat all humans at truth accuracy in general. And truths which can be easily individually proven, and/or remain unchanged forever, can basically be right 100% of the time. But not all truths are that straightforward.
What I mentioned can't really be unlinked from the issue, if you want to solve it completely. Have you ever found out later on that something you told someone else as fact turned out not to be so? Essentially, you 'hallucinated' a truth that never existed, but you were just that confident it was correct to share and spread it. It's how we get myths, popular belief, and folklore.
For those other truths, we simply take the truth to be that which has reached a likelihood we consider certain. But the ideas and concepts in our minds constantly float around on that scale. And since we cannot really avoid talking to other people (or intelligent agents) to ascertain certain truths, misinterpretations and lies can sneak in and cause us to treat as truth that which is not. Avoiding that would mean having to be pretty much everywhere at once, to personally interpret the information straight from the source. But then things like how fast it can process all of that come into play. Without making guesses about what's going to happen, you basically can't function in reality.
Yes, a theoretical future AI that would be able to self-correct would eventually become more powerful than humans, especially if you could give it ways to run orders of magnitude more self-correcting mechanisms at the same time. But it would still be making ever so small assumptions wherever there is a gap in the information it has.
It could be humble enough to admit it doesn't know, but it can still be mistaken and think it has the right answer when it doesn't. It would feel nigh omniscient, but it would never truly be.
A roundtrip around the globe on glass fibre takes hundreds of milliseconds, so even if it has the truth on some matter, there's no guarantee that didn't change in the milliseconds it needed to become aware the truth had changed. True omniscience simply cannot exist, since information (and in turn the truth encoded by that information) propagates no faster than the speed of light.
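For a rough sense of scale, here's a back-of-the-envelope check on that latency figure (the constants are standard physics values, not from the comment; real routes add routing and switching delays on top):

```python
# Light in glass fibre travels at roughly two thirds of c (refractive index ~1.5).
C_VACUUM_KM_S = 299_792                      # speed of light in vacuum, km/s
FIBRE_SPEED_KM_S = C_VACUUM_KM_S * 2 / 3     # approximate speed in fibre
EARTH_CIRCUMFERENCE_KM = 40_075              # equatorial circumference

loop_ms = EARTH_CIRCUMFERENCE_KM / FIBRE_SPEED_KM_S * 1000
print(f"{loop_ms:.0f} ms")  # ~200 ms for one loop around the equator, hops excluded
```

So even in the physically ideal case, a signal circling the globe eats a fifth of a second, which is exactly the information-propagation floor being described.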
A big mistake you are making here is stating that it must be fed information that it knows to be true; this is not inherently true. You can train a model on all of the wrong things to do; as long as it has the capability to understand this, it shouldn't be a problem.
A dataset that encodes all wrong things would be infinite in size and constantly changing. It can theoretically exist, but realistically it never will. And if it is incomplete, the model has to make assumptions at some point based on the incomplete data it has, which opens it up to being wrong, which we would call a hallucination.
I'm not sure where you think I'm giving it too much credit, because as far as I can tell we already totally agree lol. You're right, methods exist to diminish the effect of hallucinations. That's what the scientific method is. Current AI has no physical body and can't run experiments to verify objective reality. It can't fact-check itself other than by being told by the humans training it what is correct (and humans are fallible), and even then, if it has gaps in what it knows, it will fill them with something probable - which is likely going to be bullshit.
My point was only that to truly fix it would basically require creating an omniscient being, which cannot exist in our physical world. It will always have to make some assumptions - just like we do.
Hallucinations in AI are fairly well understood, as far as I'm aware - explained at a high level on the Wikipedia page for them. And I'm honestly not making any objective assessment of the technology itself. I'm making a deduction based on the laws of nature and biological facts about real-life neural networks. (I do say AI is driven by the data it's given, but that's something even a layman might know.)
How to mitigate hallucinations is definitely something the experts are actively discussing, with limited success so far (and I certainly don't have an answer there either), but a true fix should be impossible.
I can't exactly say why I'm passionate about it. In part I want people to be informed about what AI is and is not, because knowledge about the technology allows us to make more informed decision about the place AI takes in our society. But I'm also passionate about human psychology and creativity, and what we can learn about ourselves from the quirks we see in these technologies.
I'm not an expert in AI, I will admit. But I'm not a layman either. We're all anonymous on here anyways. Why not leave a comment explaining what you disagree with?
It will never be solved. Even the greatest hypothetical superintelligence is limited by what it can observe and process. Omniscience doesn't exist in the physical world. Humans hallucinate too - all the time. It's just that our approximations are usually correct, so we don't call them hallucinations anymore. But realistically, the signals coming from our feet take longer to process than those from our eyes, so our brain has to predict information to create a coherent experience. It's also why we don't notice our blinks, or why we don't see the blind spot in our eyes.
AI, being a more primitive version of our brains, will hallucinate far more, especially because it cannot verify anything in the real world and is limited by the data it has been given, which it has to treat as ultimate truth. The mistake was trying to turn AI into a source of truth.
Hallucinations shouldn't be treated like a bug. They are a feature - just not one the big tech companies wanted.
When humans hallucinate on purpose (and not due to illness), we get imagination and dreams; fuel for fiction, but not for reality.
If you imagine that, you must also imagine Biden announcing this on X. And now stop imagining because that's actually what happened.