
Posts 2 · Comments 229 · Joined 2 yr. ago

  • Maybe more apt for me would be, “We don’t need to teach math, because we have calculators.” Like…yeah, maybe a lot of people won’t need the vast amount of domain knowledge that exists in programming, but all this stuff originates from human knowledge. If it breaks, what do you do then?

    I think someone else in the thread said good programming is about the architecture (maintainable, scalable, robust, secure). Many LLMs are legit black boxes, and it takes humans to understand what’s coming out, why it came out, and whether it’s valid.

    Even if we have a fancy calculator doing things, there still need to be people who can do the math and check it. I’ve worked more with analytics than LLMs, and more times than I can count, the data was bad. You have to validate before anything else; otherwise, garbage in, garbage out.

    It sounds like a poignant quote, but it also feels superficial. Like, something a smart person would say to a crowd to make them go, “Ahh!” but that doesn’t hold water for long.

  • I generally agree. It’ll be interesting to see what happens with models, the datasets behind them (particularly the copyright claims), and more localized AI models. There have been tasks where AI greatly helped and sped me up, particularly quick Python scripts to solve a rote problem, along with early, rough documentation.

    However, using this output as justification to shed head count is questionable to me because of the broader business impacts (succession planning, tribal knowledge, human discussion around creative efforts).

    If someone is laying people off specifically to fill the gap with AI, they are missing the forest for the trees. Morale affects whether people want to work somewhere, and I’ve been fortunate enough to enjoy the company of 95% of the people I’ve worked alongside. If our company shed major head count in favor of AI, I would probably have one foot in and one foot out.

  • This has been my general worry: the tech is not good enough, but it looks convincing to people with no time. People don’t understand you need at least an expert to process the output, and likely a pretty smart person for the inputs. It’s “trust but verify”, like working with a really smart parrot.

  • Yeah, this phrase makes way more sense within the context of a game or game theory. For me, it goes back to fighting games or sports. People play to win in those settings. The rules are heavily defined, and the players must abide. These other examples are people misusing the phrase.

  • Let’s say “low performing” means you scored 20% or lower on the test. We’d write that as “25% scored 20% or lower.” But you could set the cutoff for “low” wherever you want.

    It’s not “the bottom 25% were in the bottom 25%.” It’s “25% met the criteria for low.” Those are different things, as the sketch below shows.
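
    A minimal Python sketch (hypothetical scores; 20% is an arbitrary cutoff) of why the two readings give different numbers:

        # Hypothetical test scores (percent correct).
        scores = [5, 12, 18, 20, 35, 50, 62, 71, 85, 93]

        # Reading 1: the share of people who met the criteria for "low".
        low_cutoff = 20
        share_low = sum(s <= low_cutoff for s in scores) / len(scores)
        print(f"{share_low:.0%} scored {low_cutoff}% or lower")  # 40% here

        # Reading 2: the bottom 25% of people. This is 25% by definition,
        # no matter what anyone actually scored.
        bottom_quartile = sorted(scores)[: len(scores) // 4]
        print("Bottom quartile scores:", bottom_quartile)  # [5, 12]

    The first number can land anywhere from 0% to 100% depending on where you put the cutoff; the second is pinned at 25% by definition.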

    …unless this is a /s that I’m too tired or socially inept to process. I’m trying to be helpful.

  • There was a similar study reported the other day about using fMRI imaging and AI to recreate the “thought content” of someone’s brain. It required training the AI on that specific person’s brain, along with other training. It does seem these techniques can work with certain specially trained models, but yeah, it doesn’t seem like hooking someone’s brain up to this would create a movie of their mind or something.

    I think the more dangerous part is that this is step 0, when this tech would have seemed impossible 10 years ago. Very strange times.

  • It’ll sound cheesy, but “Don’t Go Hollow” is that phrase for me.

    In 2019, I was hospitalized for suicidal ideation. While in-patient, we didn’t get much to express ourselves with. Every meal, we ate with plastic utensils and foam plates and cups, for safety. I would carve that phrase into the cups, along with a bonfire.

    “Don’t Go Hollow” goes back to Dark Souls. It’s a phrase that means something in the game world, but it’s also metaphorical. What’s an avatar without the player? It’s like a body without spirit. You’re not progressing in the game because you checked out. If you want to keep going, you need to be present, to keep trying.

    Other ones that come to mind are “This is a moment. It will pass,” which I said in the showers that scared the fuck out of me, and “Fall down 7 times, get up 8.” “Let it rip,” from The Bear, is another one I like.

  • Meta ethics focuses on the underlying framework behind morality. Whenever you’re asking, “But why is it moral?”, that’s meta ethics.

    Meta ethics splits between cognitivism (moral statements can be true or false) and non-cognitivism (moral statements are not true or false). One popular cognitivist branch is natural moral realism, the idea that there are objective moral facts. One popular non-cognitivist branch is emotivism, the idea that moral statements are all complicated “yays” or “yucks” and express emotions rather than true/false statements.

    Cognitivism also has anti-realism, which holds that there are moral facts, but their truth or falsity is relative to each person or group. My issue is that you lose the ability to call out certain behavior as wrong: slavery is wrong; not respecting others is wrong. If you want to believe all morality systems are valid, meaning your morality is no better than some radical thought group’s, then go ahead. On an emotional level, a speciesism level, a rights level, a deontological level, a utilitarian level, and many more, slavery is wrong. Again, some nut job doesn’t invalidate all other thought. That’s my take.

  • Half of the comments in here are a bunch of equivocations on the words.

    “Objective” morality would mean there are good things to do, and bad things to do. What people actually do in some hypothetical or real society is different and wouldn’t undermine the objective status of morality.

    Listen to this example:

    • Todd wants to go to the bank before it closes.
    • Todd is not at the bank.
    • Todd should travel to the bank before it closes.

    This is a functional should statement. Maybe Todd does go, or maybe he doesn’t. But if he wants to fulfill his desire, he should travel to the bank. The point is that should statements, often used in morality, can inform us even on less controversial topics.

    Here’s another take: why should we be rational? We could base our epistemology on breeding, money, or other random ends. If you think I should be rational, you’re leveraging morality to do that.

    Most people believe in objective morality, whether they understand it that way or not. Humans have disagreed over many subjects throughout history. Disagreement alone doesn’t undermine objectivity. It’s objectively true that the Earth revolves around the sun. Some nut case with a geocentric mindset isn’t going to convince me otherwise. You can argue it’s objective because we can test it, but how do I test my epistemology?

    This is just a philosophy 101 run-around. I’m a moral pluralist who believes in using many moral theories to help understand the moral landscape. If you were to study the human body, you’d use biology, physics, chemistry, and so on. When looking at a moral problem, I examine it through the main moral theories and look for consensus around a moral stance.

    I’m not interested in debating, but there are so many posts making basic mistakes about morality. My undergraduate degree was in ethics, and I’ve published on meta ethics. We ain’t solving this in a Lemmy thread, but there’s a lot of literature to read for those interested.