More often than not, people who are passionate about something, such as Linux, take personal offense when someone says something incorrect or offensive about said thing. Oh, and "blud" is just slang for calling someone a poser.
Tatsuro Yamashita was pretty impressive for several reasons: great singer/songwriter (he has some really solid range) and producer, S tier in singing English phonetically, and he's good in Japanese, too.
To be fair, the comments and posts you leave are technically being collected for display across the lemmyverse. In that sense, there's never going to be a zero data collection Lemmy client. Still, Liftoff currently has my vote. A decent little FOSS fork of Lemmur, I believe.
Unfortunately, there's still that one guy in the comments trying to say that hypothetical, largely unproven solutions are better for baseload than something that's worked for decades.
We do something similar over at !mavica@normalcity.life, but with photos. Of course, we're using old floppy disk cameras, so the compression, aberration, and CCD weirdness is indeed authentic.
I think it's a very specific case that needs to be taken in a very narrow context; it's essentially an innocent mistake that needs to be recognized as such. The moment you step outside of that, I see no reasonable arguments for decriminalizing anything.
I forgot: are Lemmy's active and hot sorts chronological? They're pretty decent, but I do find stale content gets stuck in one sort but not the other.
Yeah, that's fair. The early versions of GPT-3 kinda sucked compared to what we have now. For example, it basically couldn't rhyme. RLHF or some of the more recent advances seemed to turbocharge that aspect of LLMs.
I don't really think it's something people should do, but I can honestly see it happening to ordinary people if they aren't thinking about what they're doing.
Picking and choosing isn't the game I want to play; I'm just highlighting that there are circumstances that can result in actually innocent people doing things without thinking. Pornographic content of any kind (drawings or otherwise) that depicts underage people in any context is something I think should be illegal and avoided at all costs, but my point is just that there are edge cases in everything.
I mean, perhaps in the most general sense that is technically true. For example, there have been cases stemming from parents taking pictures of their kids in the bathtub, even if the charges were eventually dropped. If that particular court case had gone differently, it might've set a very destructive precedent that served only to rip apart families.
Still, 99% of the cases that produce this material do so in an exploitative and abusive context; I'm definitely not arguing with that. No idea what Aaron was talking about in that particular link, but this is the one counterexample I can think of that's valid, assuming it had gone a different direction in court.
You're absolutely right: there's what's called an alignment problem between what the human thinks looks superficially like a quality answer and what would actually be a quality answer.
You're correct in that it will always be somewhat of an arms race to detect generated content, as lossy compression and metadata scrubbing can do a lot to make an image unrecognizable to detectors. A few people are trying to create some sort of integrity check for media files, but it would create more privacy issues than it would solve.
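To make the compression point concrete, here's a minimal sketch (file names are placeholders, assuming Node) of why a naive byte-level integrity check doesn't survive a re-encode:

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Hash the raw bytes of a file. Re-encoding a JPEG, even at a quality
// setting where no human could see a difference, changes the bytes and
// therefore the digest, so a simple hash-based integrity check breaks
// the moment an image passes through any lossy pipeline.
function fileDigest(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

console.log(fileDigest("original.jpg")); // placeholder file names
console.log(fileDigest("reencoded.jpg")); // different digest, same-looking image
```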
We've had LLMs for quite some time now. I think the most notable release in recent history, aside from ChatGPT, was GPT-2 in 2019, as it introduced a lot of people to the concept. It was one of the first language models that was truly "large," although they've gotten much bigger since the release of GPT-3 in 2020. RLHF and the focus on fine-tuning for chat and instructability weren't really a thing until the past year.
Retraining image models on generated imagery does seem to cause problems, but I've noticed fewer issues when people have trained FOSS LLMs on text from OpenAI. In fact, it seems to be a relatively popular way to build training or fine-tuning datasets. Perhaps training a model from scratch could present issues, but generally speaking, training a new model on generated text seems to be less of a problem.
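For what it's worth, the collection step is usually about as simple as this sketch (untested; the model name, prompt, and file name are all placeholders, using OpenAI's Node SDK):

```typescript
import OpenAI from "openai";
import { appendFileSync } from "node:fs";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// Ask the stronger model for a completion and append the (prompt, answer)
// pair to a JSONL file, the usual shape for a fine-tuning dataset.
async function collect(prompt: string): Promise<void> {
  const response = await client.chat.completions.create({
    model: "gpt-4", // placeholder; whatever model you're distilling from
    messages: [{ role: "user", content: prompt }],
  });
  const answer = response.choices[0].message.content ?? "";
  appendFileSync(
    "distilled.jsonl",
    JSON.stringify({ prompt, completion: answer }) + "\n",
  );
}

collect("Explain RLHF in one short paragraph.").catch(console.error);
```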
Critical reading and thinking have always been a requirement, as you say, but they're certainly needed for interpreting the output of LLMs in a factual context. I don't really see LLMs themselves outperforming humans on reasoning at this stage, but the text they generate will certainly make those human traits more of a necessity.
Most of the text models released by OpenAI are so-called "Generative Pretrained Transformer" models, with the keyword being "transformer." Transformers are a separate model architecture from GANs, but are certainly similar in more than a few ways.
Unless I'm mistaken, aren't GANs mostly old news? Most of the current SOTA image generation models and LLMs are either diffusion-based, transformers, or both. GANs could generate some pretty darn impressive images even a few years ago, but they proved hard to steer and were often trained to generate a single kind of image.
+1 for the importance of community engagement. This is how you build a community anywhere, especially on Lemmy. And when you build an awesome community, everyone's feeds fill with good content, which helps keep people interested in the Lemmy federation.
I was incorrect: the first part of my answer was my initial guess, in which I assumed a boolean was returned, but that's not explicitly the case. For the second part, I checked and found exactly what you were saying.
You could use strict equality operators in a conditional to verify types before the main condition, or use TypeScript if that's your thing. Types are great and important for a lot of scenarios (I've used them in both Java and Python), but I rarely run into issues with the script-level stuff I write in JavaScript.
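Something like this is what I mean (made-up function and values, just a sketch):

```typescript
// Guard with typeof and strict equality (===) before the main condition,
// so coercion surprises like "1" == 1 never enter the picture.
function describe(value: unknown): string {
  if (typeof value === "number" && value === 1) {
    return "the number one";
  }
  if (typeof value === "string" && value !== "") {
    return `a non-empty string: ${value}`;
  }
  return "something else";
}

console.log(describe(1));   // "the number one"
console.log(describe("1")); // "a non-empty string: 1" (no coercion to 1)
```

TypeScript also narrows `value` inside each `typeof` guard, which is most of what I'd use it for here.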
The little I've seen of Joe seems like this:
Some rich guy you've never heard of: "So, umm, yeah, I've been trying this new form of yoga."
Joe: *hits blunt and drinks something harmful* "Oh yeah?"
Guy 1: *burp* "Yeah, and it's really opened my eyes and shit, y'know?"
Joe: "Oh really?"
(This but for who knows how long).