Posts: 0 · Comments: 2,392 · Joined: 2 yr. ago

  • I actually have a friend who's involved in a situation like this right now. He got laid off from his old job a few months back, and while he was job hunting he started working on a project with a couple of other friends that could be worth a fair bit of money. He's had job offers since then, so he got a lawyer to write up a description of the project that could be inserted into those "I'm keeping the rights to this stuff" contract sections.

    It's a bit different for him, though, because it's stuff he's actively working on right now. It sounds like your case might be simpler: if it's stuff you haven't done yet and don't plan to work on while employed by this current employer, I suspect you won't need to worry about it. Though of course, IANAL.

  • Indeed, and many of the more advanced AI systems currently out there already use LLMs as just one component. Retrieval-augmented generation, for example, adds a separate "memory" that gets searched, with relevant bits inserted into the LLM's context when it's answering questions. LLMs have also been trained to call external APIs for the things they're bad at, like math. The LLM is typically still the central "core" of the system, though; the other stuff is routine computer activity that we've had a handle on for decades.

    IMO it still boils down to a continuum. If there's an AI system that's got an LLM in it but also a Wolfram Alpha API and a websearch API and other such "helpers", then that system should be considered as a whole when asking how "intelligent" it is.
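    The "search a memory, splice the hits into the context" loop described above can be sketched in a few lines. This is a toy illustration only: the naive word-overlap retrieval and all names are mine, not from any real library, and the actual LLM call is left out.

    ```python
    # Minimal retrieval-augmented generation (RAG) sketch: rank documents
    # in a toy "memory" against the user's query, then splice the best
    # matches into the prompt that would be sent to an LLM.

    MEMORY = [
        "The Eiffel Tower is in Paris and was completed in 1889.",
        "LLMs often call external tools for arithmetic.",
        "Paris is the capital of France.",
    ]

    def retrieve(query, memory, k=2):
        """Rank documents by naive word overlap with the query (a stand-in
        for real embedding-based search)."""
        q_words = set(query.lower().split())
        scored = sorted(
            memory,
            key=lambda doc: len(q_words & set(doc.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def build_prompt(query, memory):
        """Insert retrieved snippets into the context ahead of the question."""
        context = "\n".join(retrieve(query, memory))
        return f"Context:\n{context}\n\nQuestion: {query}"

    print(build_prompt("Where is the Eiffel Tower?", MEMORY))
    ```

    A real system would replace the word-overlap scoring with vector-embedding search and send the assembled prompt to the LLM, but the shape of the pipeline is the same.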

  • No, that's not being fair at all. The amendment in full reads:

    A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.

    A full half of that single sentence is talking about "a well regulated militia" being the justification for allowing people to keep arms. There have been decades of flim-flammery ignoring that completely and trying to imply that the intent was to say "Militias are good for national security given how we just went through a rebellion that depended on them. Oh, and on a completely unrelated note, everyone should be allowed to carry portable machine guns and concealed hand-cannons the likes of which were never even imagined in our time."

    This is a nutty interpretation.

  • I live in a Canadian city, and I recall some years back there was an incident where some guy from Texas got in trouble for carrying a handgun while visiting. He raised a huge fuss on social media and went back to the US as soon as he was able, ranting about how he couldn't feel safe in Canada because they wouldn't let him have the ability to shoot anyone who might attack him while he was there. I wish I could find one of the news articles; there was a lot of head-shaking amusement from the locals at the time.

    Really goes to show how diametrically different people can be sometimes.

  • Call it whatever makes you feel happy; it's allowing me to accomplish things much more quickly and easily than I could without it.

  • There was an interesting paper published just recently titled Generative Models: What do they know? Do they know things? Let's find out! (a lot of fun names and titles in the AI field these days :) ) that does a lot of work actually analyzing what an AI image generator "knows" about what it's depicting. These models seem to have an awareness of three-dimensional space, of light and shadow and reflectivity, lots of things you wouldn't necessarily expect from something trained only on 2-D images tagged with a few short descriptive sentences. This article from a few months ago also delved into this; it showed that when you ask a generative AI to create a picture of a physical object, the first thing it does is come up with the three-dimensional shape of the scene before figuring out what the scene looks like. Quite interesting stuff.

  • And even if local small-scale models turn out to be optimal, that wouldn't stop big business from using them. I'm not sure what "it" is being referred to with "I hope it collapses."

  • Conversely, there are way too many people who think that humans are magic and that it's impossible for AI to ever do [insert whatever is currently being debated here].

    I've long believed that there's a smooth spectrum between not-intelligent and human-intelligent. It's not a binary yes/no sort of thing. There are basic inert rocks at one end and humans at the other, and everything else is scattered at various points in between. So I think it's fine to discuss where exactly on that scale LLMs fall, and to accept the possibility that they're moving in our direction.

  • I actually think public perception is not going to be that big a deal one way or the other. A lot of decisions about AI applications will be made by businessmen in boardrooms, and people will be presented with the results without necessarily even knowing that it's AI.

  • Those recent failures only come across as cracks for people who see AI as magic in the first place. What they're really cracks in is people's misperceptions about what AI can do.

    Recent AI advances are still amazing and world-changing. People have been spoiled by science fiction, though, and are disappointed that it's not the person-in-a-robot-body kind of AI that they imagined they were being promised. Turns out we don't need to jump straight to that level to still get dramatic changes to society and the economy out of it.

    I get strong "everything is amazing and nobody is happy" vibes from this sort of thing.

  • And some of those hosts can decide to serve up their content to AI trainers. Some of those hosts can be run by AI trainers, specifically to gather data for training. If one was to try to prevent that then one would be attacking the open nature of the fediverse.

    There have been many people raging about their content being used to train AIs without permission or compensation. I'm speaking to those people, not the "fediverse collectively". As you suggest, the fediverse can't say anything collectively.

  • It's the "make some people non-white" kludge that's the specific problem being discussed here.

    The training data skewing white is a different problem, but IMO not as big of one. The solution is simple, as I've discovered over many months of using local image generators. Let the user specify what exactly they want.

  • It's not the training data that's the problem here.

  • The term "AI" has a much broader meaning and use than the sci-fi "thinking machine" that people are interpreting it as. The term has been in use by scientists for many decades already, and these generative image programs and LLMs definitely fit within it.

    You are likely thinking of AGI, or artificial general intelligence. We don't have those yet, but these things aren't intended to be AGI so that's to be expected.

  • Many of those embryos are old enough that they should be going to school. They're truant.

  • I didn't say it'd work great. I'm talking about what's legally possible to do.

    The US federal government is in many ways prevented from doing the right things by the details of its constitution. Even when the Supreme Court is genuinely following it, there's a bunch of stuff in there that lets individual states do crazy stupid things that the federal government can't really stop. So even given the powers that OP has given me in this scenario, there are some big limits to what can be done. If he were to give me the ability to amend the constitution or control the state governments, I'd be able to do a lot more.

  • So contact them about that, then. Sue them if you're sufficiently offended. This doesn't change anything: if they were GDPR-compliant before, they're still GDPR-compliant; if they weren't GDPR-compliant, then they still aren't. My point is that this AI training stuff has nothing to do with that.

  • And even if it were Google, these companies aren't magic. Once there's a proof of concept out there showing that something like this can be done, other companies will dump resources into catching up with it. Cue the famous "we have no moat" memo.

  • Gun control wouldn't be my top priority in that case, but when I got around to it I'd put a ton of restrictions on interstate commerce related to guns and remove laws that may be preventing states from passing their own regulations. I'd use my mind control to force the Supreme Court to interpret "well-regulated militia" in a sane way, so those states would then be able to put the brakes on if they want.

    I don't think there's a lot that the American federal government can do to directly ban most kinds of firearms, based on how their constitution is set up, but stopping the large scale flow of guns (and ammo) into states that don't want them should go a long way to curbing the problem for them.