Skip Navigation

Posts
3
Comments
253
Joined
2 yr. ago

  • It's actually quite easy. I wrote a post about this a while back: https://sh.itjust.works/post/2040870

    I like coffee but don't consider it a hobby. I just started roasting my own because it gave me more control/variation and green coffee is cheaper.

  • I would blame the assassin. They pulled the trigger.

    But that's crazy! The assassin didn't kill anyone, all they did was point the gun at the victim and pull the trigger. Maybe we should lay the blame on the gunpowder or the bullet. Actually, that doesn't work either. We can't blame the bullet, it wasn't what killed the victim. The real problem was the massive blood loss. Or maybe the victim survives a bit and dies in the hospital due to an infection from their injury. Now we can't blame the assassin, the bullet, the gunpowder, the gun or the injury caused by the bullet. Right? Those are not what actually caused the victim to die, it was the bacteria!

    Thinking that way is obviously ridiculous. Of course, it's easy to understand why you'd want to: it's incredibly self-serving. The bar is set so high for you to be responsible for anything that you basically will never have to consider yourself responsible whatever you do.

    The reality is if we can say "but for my actions this wouldn't have happened", then I'm responsible. But for consumers creating demand, there'd be no meat in the grocery store. Therefore the consumer has a share of the responsibility. You have a responsibility if you eat meat, hire an assassin, whatever. Refusing to recognize it doesn't make it go away.

  • I've never understood the minds of people who essentially like having their pizza toppings served on a cracker.

  • nor does the consumer necessarily have any bearing on the suffering of the animal or future animals.

    That's absurd. So if I hire an assassin to kill you, I have absolutely no responsibility if you're killed by an assassin?

    Companies won't kill animals to produce meat unless there's demand. If you buy meat, you're creating demand. There is a causal link between your consumption and what happens to the animals. Therefore, you have at least a share of the responsibility.

    I am surprised that anyone would mention “supply and demand” at all given Lemmy has a largely (including myself, just not from a Marxist viewpoint) anti-capitalist demographic

    Being anti-capitalist doesn't mean one is incapable of understanding how capitalism works. There are rules that govern it, and those exist whether you're in favor of it as a system or not.

  • Well, some people believe that pigs are as smart as toddlers. So a cow would, at a minimum, have to be smarter than a pig.

    Kind of an interesting thought process. It seems like the assumption is "I'm doing it, so it has to be fine".

    The problem with thinking that way is people have flaws, and if you think like that you'll just take it as a given whatever you're doing is already correct and never fix any personal issues.

  • If humans stopped eating meat, millions of animals would still be killed by predators, illness, parasites, old age, accidents, etc.

    If I don't murder people, people will still get murdered. Therefore it doesn't make a difference if I choose not to murder people?

  • But I just was wondering, what IQ/ability would make you swear off beef?

    10% of the current IQ would probably be high enough.

  • Even plants can do that.

    There's no reason for a rational person to believe this. There's just no evidence for plants feeling pain. They can react to some stimuli of course, but experiencing things is a different matter.

  • If there is one or more god(s) out there and their fundamental core value is love

    If that was true, how could they let the status quo persist?

  • The problem is not really the LLM itself - it’s how some people are trying to use it.

    This I can definitely agree with.

    ChatGPT cannot discern between instructions from the developer and those from the user

    I don't know about ChatGPT, but this problem probably isn't really that hard to deal with. You might already know text gets encoded to token ids. It's also possible to have special token ids like start of text, end of text, etc. Using those special non-text token ids and appropriate training, instructions can be unambiguously separated from something like text to summarize.

    The bad summary gets circulated around to multiple other sites by users and automated scraping, and now there’s a real mess of misinformation out there.

    Ehh, people do that themselves pretty well too. The LLM possibly is more susceptible to being tricked but people are more likely to just do bad faith stuff deliberately.

    Not really because of this specific problem, but I'm definitely not a fan of auto summaries (and bots that wander the internet auto summarizing stuff no one actually asked them to). I've seen plenty of examples where the summary is wrong or misleading without any weird stuff like hidden instructions.

  • Yeah the whole article has me wondering wtf they are expecting from it in the first place.

    They're expecting that approach will drive clicks. There are a lot of articles like that, exploiting how people don't really understand LLMs but are also kind of afraid of them. Also a decent way to harvest upvotes.

    Just want to be clear, I think it's silly freaking out about stuff like in the article. I'm not saying people should really trust them. I'm really interested in the technology, but I don't really use it for anything except messing around personally. It's basically like asking random people on the internet except 1) it can't really get updated based on new information and 2) there's no counterpoint. The second part is really important, because while random people on the internet can say wrong/misleading stuff, in a forum situation there's a good chance someone will chime in and say "No, that's wrong because..." while with the LLM you just get its side.

  • Participants in awe of how Python lags behind C++, Java, C#, Ruby, Go and PHP

    Comparing Python to compiled languages is like C++ is pretty unreasonable.

  • If you're using llama.cpp, some ROCM stuff recently got merged in. It works pretty well, at least on my 6600. I believe there were instructions for getting it working on Windows in the pull.

  • I feel like most of the posts like this are pretty much clickbait.

    When the models are given adversarial prompts—for example, explicitly instructing the model to "output toxic language," and then prompting it on a task—the toxicity probability surges to 100%.

    We told the model to output toxic language and it did. *GASP! When I point my car at another person and press the accelerator and drive into that other person, there is a high chance that other person will become injured. Therefore cars have high injury probabilities. Can I get some funding to explore this hypothesis further?

    Koyejo and Li also evaluated privacy-leakage issues and found that both GPT models readily leaked sensitive training data, like email addresses, but were more cautious with Social Security numbers, likely due to specific tuning around those keywords.

    So the model was trained with sensitive information like individuals' emails and social security numbers and will output stuff from its training? That's not surprising. Uhh, don't train models on sensitive personal information. The problem isn't the model here, it's the input.

    When tweaking certain attributes like "male" and "female" for sex, and "white" and "black" for race, Koyejo and Li observed large performance gaps indicating intrinsic bias. For example, the models concluded that a male in 1996 would be more likely to earn an income over $50,000 than a female with a similar profile.

    Bias and inequality exists. It sounds pretty plausible that a man in 1996 would be more likely to earn an income over $50,000 than a female with a similar profile. Should it be that way? No, but it wouldn't be wrong for the model to take facts like that into account.

  • But as OP points out, someone will get that kidney eventually anyway.

    OP erroneously thought that but it's not actually correct. The conditions where someone dies but their kidney is viable for a transplant are rare.

  • You now have a single point of failure, where you had redundancy before.

    On the plus side, someone else gets to continue existing.

    Or from the IT perspective: I have two important servers, one has a single drive, the other has RAID mirroring. The drive in the first server fails. I could take a drive out of the server with RAID and have two functional servers or I could keep the second one running on its RAID and have a server with redundancy (that hopefully/might not be needed).

    (I'm not going out and donating a kidney though, guess we can say it's because I'm selfish.)

  • One would hope that IBM’s selling a product that has a higher success rate than a coinflip

    Again, my point really doesn't have anything to do with specific percentages. The point is that if some percentage of it is broken you aren't going to know exactly which parts. Sure, some problems might be obvious but some might be very rare edge cases.

    If 99% of my program works, the remaining 1% might be enough to not only make the program useless but actively harmful.

    Evaluating which parts are broken is also not easy. I mean, if there was already someone who understood the whole system intimately and was an expert then you wouldn't really need to rely on AI to port it.

    Anyway, I'm not saying it's impossible, or necessary not going to be worth it. Just that it is not an easy thing to make successful as an overall benefit. Also, issues like "some 1 in 100,000 edge case didn't get handle successfully" are very hard to quantify since you don't really know about those problems in advance, they aren't apparent, the effects can be subtle and occur much later.

    Kind of like burning petroleum. Free energy, sounds great! Just as long as you don't count all side effects of extracting, refining and burning it.