
Posts: 4 · Comments: 338 · Joined: 2 yr. ago

  • Y'know what? I'm gonna be even more of a furry now, just to spite you.

  • "The issues raised have been subject to rigorous engineering examination under [Federal Aviation Administration] oversight," the company said.

    You mean the guy you handed an FAA sash to and told "it would be an awful shame if this didn't get signed off on, we'd have to make some pretty severe job cuts, wink wink?"

  • After reading this article that got posted on Lemmy a few days ago, I honestly think we're approaching the soft cap for how good LLMs can get. Improving on the current state of the art would require feeding them more data, but that's not really feasible. We've already scraped pretty much the entire internet to get to where we are now, and it's nigh-impossible to manually curate a higher-quality dataset because of the sheer scale of the task involved.

    We also can't ask AI to curate its own dataset, because that runs into model collapse issues. Even if we don't explicitly have AI curate its own dataset, the same problem is highly likely to crop up in the near future thanks to the tide of AI-generated spam. I have a feeling that companies like Reddit signing licensing deals with AI companies are going to find that the buyers mostly want data from 2022 and earlier, much like manufacturers hunting for low-background steel to build particle detectors.

    We also can't just throw more processing power at it, because current LLMs are already nearly cost-prohibitive in terms of processing power per query (it's just being masked by VC money subsidizing the cost). Even if cost weren't an issue, we're also starting to run into hard physical limits, like waste heat, on how much faster we can run current technology.

    So we already have a pretty good idea what the answer to "how good AI will get" is, and it's "not very." At best, it'll get a little more efficient with AI-specific chips, and some specially-trained models may provide some decent results. But as it stands, pretty much any organization that tries to use AI in any public-facing role (including merely using AI to write code that is exposed to the public) is just asking for bad publicity when the AI inevitably makes a glaringly obvious error. It's marginally better than the old memes about "I trained an AI on X episodes of this show and asked it to make a script," but not by much.

    As it stands, I only see two outcomes: 1) OpenAI manages to come up with a breakthrough--something game-changing, like a technique that drastically increases the efficiency of current models so they can be run cheaply, or something entirely new that could feasibly be called AGI; or 2) the AI companies hit a brick wall, the flow of VC money gradually slows down, and the companies are forced to raise prices and cut costs, resulting in a product that's even worse-performing and more expensive than what we have today. In the second case, the AI bubble will likely pop, and most people will abandon AI in general--the only ones still using it at scale will be people pushing disinfo (either in politics or in Google rankings), along with the odd person playing with image generation.

    In the meantime, I'm worried for the people working for idiot CEOs who buy into the hype, but most of all I'm worried for artists doing professional graphic design or video production--they're going to have their lunch eaten by Stable Diffusion and Midjourney taking all the bread-and-butter logo design jobs that many artists rely on for their living. But hey, they can always do furry porn instead, I've heard that pays well~

  • Well, I've tried using it for the following:

    • Asking questions and looking up information in my job's internal knowledgebase, using an LLM designed and trained specifically on our public and internal knowledgebase. It repeatedly gave me confidently incorrect answers and linked nonexistent articles.
    • Deducing a bit of Morse code that had no spaces in it, which made the word ambiguous. I figured it could iterate through the possible decodings easily enough, saving me the time of doing it myself (the kind of brute-force enumeration sketched below). I gave up in frustration after it repeatedly gave answers that were wrong from the very first letter.
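
    For context, here's a minimal sketch in Python of the kind of brute-force enumeration I had in mind: try every way of splitting the spaceless dot/dash string into valid letters and list each candidate reading. The table, the decodings helper, and the example input are hypothetical illustrations, not the actual puzzle I was working on.

    ```python
    # A minimal brute-force decoder for Morse code with no letter spacing.
    # Hypothetical sketch: the helper name and example input are illustrative.

    MORSE = {
        'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
        'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
        'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
        'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
        'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
        'Z': '--..',
    }

    def decodings(signal, prefix=""):
        """Yield every letter sequence whose concatenated Morse equals `signal`."""
        if not signal:
            yield prefix
            return
        for letter, code in MORSE.items():
            if signal.startswith(code):
                # Consume this letter's code and keep splitting the remainder.
                yield from decodings(signal[len(code):], prefix + letter)

    if __name__ == "__main__":
        # "....." is ambiguous: EEEEE, EH, HE, IS, SI, SEE, and so on.
        for word in decodings("....."):
            print(word)
    ```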

    If I ever get serious about looking for a new job, I'll probably try and have it type up the first draft of a cover letter for me. With my luck, it'll probably claim I was a combat veteran or some shit even though I'm a fat 40-something who's never even talked with a recruitment officer in their life.

    Oh, funny story--some of my coworkers got the brilliant idea to use the company LLM to write responses to users for them. Needless to say, the users were NOT pleased to get messages signed "Company ChatGPT LLM." Management immediately put their foot down, declared it a fireable offense, and made it clear that every request sent to our chatbot is tracked.

  • The flipside is that individual landlords aren't necessarily any better than larger corporate landlords--for every individual landlord who rents out their Nan's home at cost and keeps rent below inflation, there's probably at least one who jacks rent up year over year, drags their feet on maintenance, and tries to screw you out of your deposit when you move out. (The ones who do this tend to leverage their income into more property and turn into slumlords, though, so the rule of thumb of 'don't make it your only job' still largely applies.)

    The real core of the issue is that we haven't built any new public housing in well over two decades, and the market has decided that the only new housing we should build is million-dollar McMansions squeezed into lots that would previously have held a much smaller house with a decent yard.

    What should be done is a massive investment in public housing at all levels of government to fill the unmet demand for low-cost housing, but we've been so collectively conditioned by four decades of Reagan-era "Government is not the solution, it is the problem" neoliberal thinking that the odds of this ever happening are roughly on par with McConnell agreeing to expand the Supreme Court and eliminate the electoral college.

  • I think that's just called informally splitting a mortgage, homie

  • We're making our last payment on our EV this month, and a few weeks ago I brought up the idea of maybe trading it in for a newer EV, since our current one was starting to show signs of possible battery degradation and it's a Leaf that's stuck with CHAdeMO charging instead of CCS/NACS charging. My husband asked me what car we'd consider replacing it with, and the instant I floated maybe looking at a used Tesla, my husband barked back "Absolutely NOT!" And the thing was, I couldn't find myself disagreeing, either.

    I know that my husband and I are far from the only ones who think the same way.

  • Yeah, happy to help. Sealioning really fucking sucks, because the only ways to counter it are:

    • Insult the troll until they go away
    • Refuse to play their game and give short, pithy responses without doing any research (or not linking the research you did)
    • Ignore the troll entirely
    • Copy your response and paste it whenever you see the troll asking the same question (which someone is doing in this very thread)
    • Create and maintain a collection of ready-to-go arguments with citations that you can copy/paste at the drop of a hat, which is a fair bit of work in and of itself

    In case it's not obvious, most of the counters to sealioning look almost exactly like trolling themselves, and at first glance it's almost impossible to tell a sealion apart from someone looking for a legitimate discussion--short of keeping track of individual usernames and watching them across multiple threads, the only way to know for sure that someone is a sealion is for at least one person to feed the troll at least one good response. That's what makes sealioning such an insidious technique: fighting a sealion almost always lowers the quality of the discussion itself, handing the sealion another kind of victory.

  • It's a specific form of trolling/bad-faith argument based on this comic. The idea behind sealioning is that you feign politeness and badger someone with seemingly simple questions (that in reality take a sizable amount of time to answer) to goad them into debating you. This can take the form of asking someone to elaborate on a point or to provide citations to support a claim. If the victim takes the bait and responds legitimately, the troll ignores most of the message and claims any citations are invalid for some reason (biased source, misrepresenting what the article says, or just ignoring that it exists entirely). The troll then cherry-picks a few statements and asks more questions about those, continuing the cycle. If the victim refers to previous posts, the troll pretends they either didn't happen or didn't actually answer the question (they did). If the victim refers to previously linked articles, the troll dismisses them and insists the victim provide "better" articles (which the troll will also dismiss out of hand). If the victim ever tells the troll to fuck off, the troll claims the moral high ground and says they just "want a civil discussion" and "reasoned debate" on the topic.

    The goal is something like a reverse Gish gallop. Where a Gish gallop aims to overwhelm the victim with more arguments than they can address quickly, in the hope that they won't take the time to respond and will walk away (letting you claim victory), sealioning aims to trick the victim into spending hours writing messages that you can respond to in under a minute with a few simple questions, creating a kind of denial-of-service attack.

  • Compared to how much effort it takes to learn how to draw yourself? The effort is trivial. It's like entering a Toyota Camry in a marathon and then bragging about how well you did and how hard it was to drive the course.

  • People dismiss AI art because they (correctly) see that it requires zero skill to make compared to actual art, and it has all the novelty of a block of Velveeta.

    If AI is no more a tool than Photoshop, go and make something in GIMP, or Photoshop, or any of the dozens of drawing/art programs, from scratch. I'll wait.

  • LMFAO "uhm ackshually guys AI art takes skill just like human art"

    yeah bud, spending 30 minutes typing sentences into the artist crushing machine is grueling work

  • I haven't accidentally deleted a bunch of data yet (which, considering 99% of my interaction with Linux is when I'm SSH'd into a user's server, I am very paranoid about not doing), but I have run fsck on a volume without mounting the read/write flashcache with dirty blocks on it first.

    Oops.

  • He says himself that he was there to protect businesses, but he had no relation to the business beyond that of a standard employee, and his help was never requested--he didn't know the owners, his family didn't own the business, and he wasn't even a frequent customer IIRC.

    The most charitable interpretation is that an untrained, underage civilian took a semiautomatic rifle across state lines, to a protest happening in a town he didn't live in, to guard a business that he had no special relation to, and that never asked for his help.

    The more probable interpretation, given posts on his social media before the shooting (which weren't allowed to be shown in court), is that he wanted to play action hero and shoot some scumbags, and he got exactly what he hoped for.

    EDIT: Apparently he worked at the business he was guarding, but the point still stands--he never got permission to defend the business, nor was it ever offered.

  • And look at the ttrpg.network community for a counterexample: they still have a pinned post on the dndmemes subreddit advertising Lemmy, and ttrpgmemes gets like 0.1% of the traffic dndmemes does. And this is after a months-long rebellion complete with allowing NSFW and restricting submissions to a single user account, both things that would normally kill a subreddit dead.

  • At this rate we're going to start getting memes about Lemmy reading comprehension lmao

  • Lmao, from an NPR article on the same topic:

    They filed an affidavit from an insurance broker saying it is "not possible" to find a bond that big. The broker was an expert witness for Trump during the trial.

    The trial judge already noted in his decision that this broker was a "close personal friend" of Trump's and had a financial interest in the outcome. A decision could come from the appeals court later this week.

    I'm sure the judge will give the broker's opinion all the deference it's due. /s