Posts 171 · Comments 1,328 · Joined 12 mo. ago

    1. Test the cable first if you have a spare.
    2. Test the AC adapter if you have a spare.
    3. If neither of those fixes it, inspect the charging port with a flashlight.
    4. If the port looks dirty, try cleaning it out with a toothpick (if you have a dedicated plastic tool for mobile repair, use that). Even if it doesn't look dirty, clean it anyway: what often happens is that lint from your pocket gets in there and compacts over time as the charger presses it in.
    5. If this doesn't work and you have a good, locally owned mobile repair shop nearby, they might look at the port for free just to see if there's anything you missed.

    Only after all of this would I start to strongly consider the phone itself as the culprit.

  • don't say "very accurate"; say "exact"

    The first line of this infographic is already deeply misleading. It's the equivalent of:

    don't say "very good"; say "perfect"

    It's overly superlative compared to what it's trying to replace. "Exact" is inherently "very accurate", but "very accurate" is not inherently "exact".

  • Dude, I'm sorry, I just don't know how else to tell you "you don't know what you're talking about". I'd refer you to Chapter 20 of Goodfellow et al.'s 2016 book on Deep Learning, but 1) it tragically came out a year before transformer models, and 2) most of it will go over your head without a foundation from many previous chapters. What you're describing – generative AI training on generative AI ad infinitum – is a death spiral. Literally the entire premise of adversarial training of generative AI is that for the classifier to get better, you need to keep funneling in real material alongside the fake material.

    You keep anthropomorphizing with "AI can already understand X", but that betrays a fundamental misunderstanding of what a deep learning model is: it doesn't "understand" shit about fuck; it's an unfathomably complex nonlinear algebraic function that transforms inputs to outputs. To summarize in a word why you're so wrong: overfitting. This is one of the first things you'll learn about in an ML class, and it's what happens when you let a model train on the same data over and over again forever. It's especially bad for a classifier to be overfitted when it's pitted against a generator, because a sufficiently complex generator will learn how to outsmart the overfitted classifier, settling into a cozy little local minimum that in reality works like dogshit but fools the classifier, which is its only job.
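
    If you want a concrete toy picture of overfitting (my own illustrative example, not anything from that book): give a model enough capacity to memorize its training points and it nails them perfectly while being garbage everywhere else.

```python
# Toy overfitting demo (illustrative sketch only): a degree-9 polynomial
# has enough capacity to memorize 10 noisy training points near-exactly,
# so training error is ~0 while error on held-out points blows up.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 10)

coeffs = np.polyfit(x_train, y_train, deg=9)  # 10 coefficients, 10 points

x_test = np.linspace(0.02, 0.98, 50)          # points between the training data
y_test = np.sin(2 * np.pi * x_test)

train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
print(f"train MSE: {train_mse:.2e}, test MSE: {test_mse:.2e}")
# Typically prints a near-zero train MSE and a test MSE orders of magnitude
# larger: the model memorized the data instead of learning the function.
```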

    You really, really, really just fundamentally do not understand how a machine learning model works, and that's okay – it's a complex tool being presented to people who have no business knowing what a Hessian matrix or a DCT is – but please understand when you're talking about it that these are extremely advanced and complex statistical models that work on mathematics, not vibes.

  • Your analogy simply does not hold here. If you're having an AI train itself to play chess, then you have adversarial reinforcement learning. The AI plays itself (or another model), and reward metrics tell it how well it's doing. Chess has the following:

    1. A very limited set of clearly defined, rigid rules.
    2. One single end objective: put the other king in checkmate before yours is or, if you can't, go for a draw.
    3. Reasonable metrics for how you're doing and an ability to reasonably predict how you'll be doing later.

    Here's where generative AI is different: when you're doing adversarial training with a generative deep learning model, you want one model to be a generator and the other to be a classifier. The classifier is given some amount of human-made material and some amount of generator-made material and tries to distinguish them. The classifier's goal is to be correct, and the generator's goal is for the classifier to pick completely randomly (i.e. it just picks on a coin flip). As you train, you gradually get both to be very, very good at their jobs. But you have to have human-made material to train the classifier, and if the classifier doesn't improve, then the generator never does either.
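
    In case it helps to see the shape of what I'm describing, here's a bare-bones toy sketch of that adversarial loop (my own illustration, with made-up toy data standing in for the human-made material):

```python
# Minimal GAN-style training loop (toy sketch). The key line is the
# classifier update: it sees REAL, human-made samples every step.
# Remove those and the whole scheme collapses.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))  # generator
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # classifier
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for human-made data; this is the part you can never
    # replace with the generator's own output.
    return torch.randn(n, 2) * 0.5 + 2.0

for step in range(1000):
    # Train the classifier on a mix of real and generated samples.
    real, fake = real_batch(), G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to make the classifier call its output "real",
    # i.e. to push the classifier toward a coin flip.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```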

    Imagine teaching a 2nd grader the difference between a horse and a zebra, having never shown them either before, by holding up pictures and asking whether each one contains a horse or a zebra. Except the entire time, you just keep holding up pictures of zebras and expecting the child to learn what a horse looks like. That's what you're describing for the classifier.

  • The galaxy-brained group known as "Z12 + 1". "What if we did modular arithmetic but one-indexed."

    Edit: Actually, wait, it's worse: zero-indexing but we represent the zero element in Zn as 'n'. Kill it with fire.
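
    In code, the cursed mapping looks like this (my own toy illustration of the clock convention being described):

```python
# Clock arithmetic as "Z12, but the residue 0 is displayed as 12".
def clock_hour(h: int) -> int:
    return ((h - 1) % 12) + 1  # 0 -> 12, 12 -> 12, 13 -> 1

print([clock_hour(h) for h in range(14)])
# [12, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 1]
```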

  • You finally got here. This is the ninth "Your Sanctuary" location. But it's mine now. Take it from me, if you dare...

    Notepad and WFE get thrown off Hell in a Cell onto an announcer's table by Kate and Dolphin, respectively, but to say they "don't work" is intellectually lazy and dishonest.

    Who are you trying to convince right now? Linux and macOS users are probably never going back to Windows if they can help it, and Windows users will correctly say "but it's right there; I'm using it right now".

    • I've shown you how The Guardian has quoted a statement from the police about the Tesla incendiary to the exact same effect. So "it sure does feel like a pattern" sure feels a lot like bullshit you made up with no evidence.
    • After an FBI statement called it an "intentional act of terrorism", the Guardian article now references this three separate times (I think this was changed like a few hours after you wrote your comment).
    • You're making up a ridiculous strawman about colloquial versus technical terminology, where in reality domestic terrorism's legal definition is how it's used colloquially. You did read what I linked, right? Four hours after the bombing, where was the evidence the police were supposed to present showing it was terrorism in the colloquial sense? That it happened at a fertility clinic? Did you play Ace Attorney and think "Now that's how we should do detective work"?
    • "be[ing] very carefully precise with language" is 1) exactly what the police should be doing and consequently 2) exactly what any reputable newspaper should be reporting in the immediate aftermath absent additional sources, and 3) not even what was happening here; if you think not throwing around "terrorism" in the immediate aftermath of a bombing where the perpetrator is dead is "very carefully precise", then I hope high school essays and forum posts are the extent of your writing. If you want sensationalist bullshit, don't rag on good outlets; go to Newsweek and consume your slop.
    • Not at all what sealioning is.

    I don't know what you want except to make yourself look like a jackass who can't learn from their mistake when gracefully given the opportunity.

  • Didn't this just happen less than four hours ago? And ostensibly the perpetrator is dead? The police aren't lawyers and have more leeway with what they accuse people of (let alone a dead(?) person), but domestic terrorism has a specific criminal definition. In four hours, the police have responded, gotten people to safety, made sure the attacker was dead(?) and there were no others, and started to investigate the scene. And you surmise that during that investigation, they've so far found compelling evidence this person whose corpse(?) may not even be identified yet was motivated by one of the intentions in Criterion B?

    Also, who's "they" who very actively came out with terrorism first? Trump and Musk? Because literally of course the fascists did. I'd like to see what the police said in the first few hours of those attacks. Moreover, why resort to whataboutism, pointing at alleged bad police behavior elsewhere to argue the police should behave badly here too?


    Edit: here's how The Guardian covered a story about an incendiary device at a Tesla dealership two months ago. Notice how it's fascist Trump mouthpiece Pam Bondi talking about "terrorism" so immediately, while the police statement mentions nothing of the sort.

    “On Monday, March 24, 2025, at approximately 8.04am, Austin police department (APD) officers responded to a found/abandoned hazardous call at the Tesla dealership located at 12845 N US 183 Hwy SVRD NB,” Austin police department said in a statement shared with CBS Austin.

    “When officers arrived on scene, they located suspicious devices and called the APD bomb squad to investigate. The devices, which were determined to be incendiary, were taken into police custody without incident. This is an open and ongoing investigation, and there is no further information available for release at this time.”

  • > They're happy to call it an intentional act of violence, so they've ruled out a lot of the explanations for an exploding car.

    That's Criterion A and the first part of Criterion B* of domestic terrorism. There are three criteria, and the second part of Criterion B is the hardest.

    > The bar for "terrorism" is pretty low - they charged an Atlanta student with it for tossing bottles of water and dry ice out his window.

    The bar for terrorism is as defined in what I just linked, and specifically Criterion B is where most of the uncertainty would lie.

    > Regardless, it's definitely a journalistic choice whether to quote the police lieutenant's very careful, and possibly technical statement, or to quote the business owner (Musk) or US President speculating.

    The Guardian is a UK-based center-left newspaper with a generally good track record of journalistic integrity. Yes, quoting the police lieutenant is a choice here, because it's the correct one. They currently have the most information about the situation. This isn't rhetorical, I genuinely don't understand: do you want them quoting Trump's unhinged rant about this bombing that I don't think he's even put out yet?

    > And maybe it just turns out that it's carefully ethical journalists reporting on potential right-wing violence, and usually unethical hacks reporting on possible attacks on the corporatocracy, but it sure does feel like a pattern.

    Dude, it's The Guardian. Here's how they recently covered Tesla dealerships if you care to explain how it's biased compared to this story.


    By "first part of", I mean the phrase "appears to be intended". What it appears to be intended to do is the hard part.

  • Something tells me you never visited the article itself and only read the first four paragraphs OP posted on Lemmy. If you had, you would've seen this:

    “Everything is in question, whether this is an act of terrorism,” Palm Springs police lieutenant William Hutchinson told the Desert Sun newspaper.

    They're not suspiciously avoiding anything; they may literally not know yet, and immediately jumping definitively to terrorism while they work out what happened is irresponsible, because "terrorism" isn't just an epithet: it's a real, actual, specific crime.

  • It's an easy mistake to make. For future reference, Wikiquote – a sister project of Wikipedia, like Wiktionary and Wikimedia Commons – is very often a good benchmark for whether a famous person actually said a quote.

    • For famous quotes they did say, the quote is usually listed with a citation to exactly where it came from.
    • For famous quotes they didn't say, the "Misattributed" section often has the quote with a cited explanation of where it actually comes from.
    • For famous quotes they might have said (or probably didn't), the "Disputed" section shows where it was first attributed to them, though of course it can't provide a source where they themselves say it.

    It doesn't have every quote, but for very famous people, it filters out a lot of false positives. Since it gives you a citation, often you can leave a URL to the original source alongside your quote for further context and just so people who'd otherwise call BS have the source. And it sets a good example for others to cite their sources.
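
    If you'd rather check programmatically: Wikiquote runs on MediaWiki, so its standard search API works too. A rough convenience sketch (checking the page by hand as described above is still what gets you the citations):

```python
# Search English Wikiquote via the standard MediaWiki API.
# Rough sketch; inspect the matching pages by hand for the citation.
import requests

def search_wikiquote(text: str, limit: int = 5) -> list[str]:
    r = requests.get(
        "https://en.wikiquote.org/w/api.php",
        params={
            "action": "query",
            "list": "search",
            "srsearch": text,
            "srlimit": limit,
            "format": "json",
        },
        timeout=10,
    )
    r.raise_for_status()
    return [hit["title"] for hit in r.json()["query"]["search"]]

print(search_wikiquote('"be the change you wish to see"'))
```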

  • WHERMST

  • I refuse to believe this isn't an AT-ST.

  • Uhh... yeah, goddamn. The Daily Beast citing the Daily Mail as their source is really something. Not only do we not use them as a source on Wikipedia, and not only was this the first source ever to be deprecated there in this way because of how egregious they are, but we don't even allow their online historical archives because they've been caught faking those too.

    The Daily Mail isn't a rag; it's sewage. It single-handedly motivated the idea that there are sources bad enough that Wikipedia just prohibits their usage everywhere (except in rare cases in an about-self fashion, but I don't know if editors would even trust that anymore). The Daily Beast isn't the pinnacle of credible journalism, but it isn't abysmal either.


    Edit: sorry, here's a source instead of just "my source is that I made it the fuck up."

  • This is entirely correct, and it's deeply troubling seeing the general public use LLMs for confirmation bias because they don't understand anything about them. It's not "accidentally confessing" like the other reply to your comment is suggesting. An LLM is just designed to process language, and by nature of the fact it's trained on the largest datasets in history, practically there's no way to know where this individual output came from if you can't directly verify it yourself.

    Information you prompt it with is tokenized, run through a transformer model whose hundreds of billions or even trillions of parameters were adjusted according to god only knows how many petabytes of text data (weighted and sanitized however the trainers decided), and then detokenized and printed to the screen. There's no "thinking" involved here, but if we anthropomorphize it like that, then there could be any number of things: it "thinks" that's what you want to hear; it "thinks" that based on the mountains of text data it's been trained on calling Musk racist, etc. You're talking to a faceless amalgam unslakably feeding on unfathomable quantities of information with minimal scrutiny and literally no possible way to enforce quality beyond bare-bones manual constraints.
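
    That whole pipeline fits in a few lines if you want to see it laid bare (a toy sketch using GPT-2 via the Hugging Face transformers library; the library and model are my choices for illustration, not anything specific to the chatbot being discussed):

```python
# The tokenize -> transform -> detokenize pipeline described above,
# shown with a small model. No step here consults reality.
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The sky is", return_tensors="pt").input_ids  # text -> tokens
out = model.generate(ids, max_new_tokens=20)            # tokens -> tokens
print(tok.decode(out[0], skip_special_tokens=True))     # tokens -> text
```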

    There are ways to exploit LLMs to reveal sensitive information, yes, but you then have to confirm that information is true, because you've just sent data into a black box and gotten something out. You can get a GPT to solve a sudoku puzzle, but you can't parade that around before you've checked that the solution is actually correct. You cannot ever, under literally any circumstance, trust anything a generative AI creates for factual accuracy; at best, you can use it as a shortcut to an answer which you can then attempt to verify.

  • The fact this comment is rated so low means one of three things:

    • A shocking number of /c/news readers have a subscription to The Wall Street Journal
    • A shocking number of /c/news readers interact in the comments without reading the article first
    • A shocking number of /c/news readers already know you can use archive.today to bypass paywalls (based tbh; see the snippet below)
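
    For anyone in the second bucket: assuming archive.today still honors its /newest/ URL convention (which is how I've seen the trick done), the bypass is one string concatenation away.

```python
# Convenience helper for the archive.today trick (assumes the site's
# /newest/<url> path, which redirects to the most recent snapshot).
def archive_url(article_url: str) -> str:
    return "https://archive.today/newest/" + article_url

print(archive_url("https://www.wsj.com/some-article"))  # hypothetical URL
```
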
  • Lemmy Shitpost @lemmy.world: Powerpuff Girls fans in the modern day reassessing how the show portrays non-heteronormativity
  • Lemmy Shitpost @lemmy.world: People in 2015 realizing we didn't have hoverboards yet
  • Lemmy Shitpost @lemmy.world: When you wake up on Monday and realize you didn't do your homework over the weekend
  • Lemmy Shitpost @lemmy.world: A student of Greek philosopher Aristotle learns ethics (300s BCE)
  • Lemmy Shitpost @lemmy.world: Viola Cadaverini from Ace Attorney when asked about anything
  • Lemmy Shitpost @lemmy.world: The rest of the world looking at the state of US democracy
  • Lemmy Shitpost @lemmy.world: When people ask who my favorite bariatric surgeon is
  • Videos @lemmy.world: The Making of LEGO Island: A Documentary
  • Lemmy Shitpost @lemmy.world: Me after I accomplish literally anything in Hollow Knight
  • Lemmy Shitpost @lemmy.world: 90% of porn videos from the 2000s
  • Lemmy Shitpost @lemmy.world: When someone asks about my BCS fanfic where Jimmy becomes a partner at the law firm
  • Lemmy Shitpost @lemmy.world: The Nanny S1E1 (1993)
  • Lemmy Shitpost @lemmy.world: When I emerge from the basement for the first time in 10 months
  • Lemmy Shitpost @lemmy.world: The Teletubbies when commanded to return to the earth by the omnipresent voice trumpets
  • Lemmy Shitpost @lemmy.world: State Farm dropping California homeowners from their insurance (2024)
  • Lemmy Shitpost @lemmy.world: OnlyFans whales 3 hours after buying their favorite e-thot a $200 blender
  • Lemmy Shitpost @lemmy.world: Kids trying to keep a straight face when they lie about doing their chores
  • Lemmy Shitpost @lemmy.world: Me in an alternate reality where people didn't vote to skip the sex scene asking how they enjoyed the accompanying shitposts
  • Lemmy Shitpost @lemmy.world: When I try to BS my way through an essay on the Italian Renaissance
  • Lemmy Shitpost @lemmy.world: Me inviting people who keep up with this series to vote in a poll