
Posts 3 · Comments 253 · Joined 2 yr. ago

  • Like, those cells will require the same nutrients and same growing conditions, and they naturally 3D print themselves into their own shape.

    They'll also naturally use the nutrients and energy to 3D print stuff that's not useful to humans, like leaves, roots, flowers, etc. Basically this is how vat-grown vegetables, meat, etc., can potentially be more efficient than the typical approach.

  • Easily hour+ long headache on your first time.

    Whenever I read this kind of thing (and people seem to say it pretty often), it seems really weird to me. Same goes for complaining about distro installers. An hour of possible headache/irritation and then you use the machine for years. Obviously it would be better if stuff was easy, but an hour just seems insignificant in the scheme of things. I really just don't understand seeing it as an actual roadblock.

    (Of course, there are other situations where it could matter like if you had to install/maintain 20 machines, but that's not what we're talking about here.)

  • One thing is that the pace is very, very consistent. Real humans don't usually maintain that level of consistency: they'll speed up, slow down, some words come out fast, some come out slow, etc.

  • Maybe I misunderstood you, but my point was that if it interpreted the language preferences I set in the normal config as "knowing" the languages I added and didn't offer translations, that wouldn't necessarily be what I want.

  • The languages I might want to see aren't necessarily the ones I know. People who are learning languages might set that (I did for the language I'm learning, anyway).

  • I'm sure there's a way to disable it, even if you have to go into about:config.

  • Definitely very interesting, but figuring out what layers to skip is a relatively difficult problem.

    I really wish they'd shown an example of the optimal layers to skip for the 13B model. Like the paper notes, using the wrong combination of skipped layers can be worse overall. So it's not just about how many layers you skip, but which ones as well.

    It would also be interesting to see if there are any common patterns in which layers are most skippable. It would probably be model-architecture specific, but it would be pretty useful if you could calculate the optimal skip pattern for, say, a 3B model and then translate that to a 30B with good/reasonable results.
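
    To make the "which layers, not just how many" point concrete, here's a toy sketch of what skipping looks like at inference time. The layer class, dimensions, and skip indices are all made up for illustration; it's not the paper's method, just the general idea that different choices of skipped indices change the output by different amounts.

    ```python
    import numpy as np

    class DummyLayer:
        """Stand-in for one transformer block: a residual random projection."""
        def __init__(self, dim, rng):
            self.w = rng.standard_normal((dim, dim)) * 0.02

        def __call__(self, x):
            return x + x @ self.w  # residual connection, like a real block

    def forward(layers, x, skip=()):
        """Run the stack, skipping the layer indices listed in `skip`."""
        for i, layer in enumerate(layers):
            if i in skip:
                continue  # skipped block: the residual stream just passes through
            x = layer(x)
        return x

    rng = np.random.default_rng(0)
    layers = [DummyLayer(16, rng) for _ in range(8)]
    x = rng.standard_normal((1, 16))

    full = forward(layers, x)
    for skip in ({3, 5}, {0, 1}):  # same number of skips, different choice
        pruned = forward(layers, x, skip=skip)
        print(skip, float(np.abs(full - pruned).mean()))  # drift from the full model
    ```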

  • The timing and similarity highly suggests this is a problem with how almost all software has implemented the webp standard in its image processing software.

    Did you read the article or the post? The point was that both places where the vulnerability was found probably used libwebp. So it's not that there's something inherently vulnerable in handling webp, just that they both used the same library, which had a vulnerability. (Presumably the article was a little vague about the Apple side because the source wasn't open/available.)

    given that the programs processing images often have escalated privileges.

    What? That sounds like a really strange thing to say. I guess one could argue it's technically true because browsers can be considered "a program that processes images" and a browser component can end up in stuff with escalated privileges. That's kind of a special case though and in general there's no reason for the vast majority of programs that process images to have special privileges.

  • So I have never once ever considered anything produced by a LLM as true or false, because it cannot possibly do that.

    You're looking at this in an overly literal way. It's kind of like if you said:

    Actually, your program cannot possibly have a "bug". Programs are digital information, so it's ridiculous to suggest that an insect could be inside! That's clearly impossible.

    "Bug", "hallucination", "lying", etc are just convenient ways to refer to things. You don't have to interpret them as the literal meaning of the word. It also doesn't require anything as sophisticated as a LLM for something like a program to "lie". Just for example, I could write a program that logs some status information. It could log that everything is fine and then immediately crash: clearly everything isn't actually fine. I might say something about the program being "lying", but this is just a way to refer to the way that what it's reporting doesn't correspond with what is factually true.

    People talk so often about how they “hallucinate”, or that they are “inaccurate”, but I think those discussions are totally irrelevant in the long term.

    It's actually extremely relevant in terms of putting LLMs to practical use, something people are already doing. Even when talking about plain old text completion for something like a phone keyboard, it's obviously relevant if the completions it suggests are accurate.

    So text prediction is saying when A, high probability that then B.

    This is effectively the same as "knowing" A implies B. If you get down to it, human brains don't really "know" anything either. It's just a bunch of neurons connected up, maybe reaching a potential and firing, maybe not, etc.
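
    As a toy sketch of that "when A, high probability that then B" idea (the corpus is obviously made up, and real LLMs don't use raw bigram counts), next-token prediction is just conditional probability estimated from data:

    ```python
    from collections import Counter, defaultdict

    corpus = "the sky is blue . the sky is clear . the grass is green .".split()

    # count which token tends to follow which
    following = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        following[a][b] += 1

    def p_next(a, b):
        """Estimated probability that token b follows token a."""
        total = sum(following[a].values())
        return following[a][b] / total if total else 0.0

    print(p_next("sky", "is"))   # 1.0  -> "sky" strongly "implies" "is" in this data
    print(p_next("is", "blue"))  # ~0.33 -> weaker association
    ```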

    (I wouldn't claim to be an expert on this subject but I am reasonably well informed. I've written my own implementation of LLM inference and contributed to other AI-related projects as well, you can verify that with the GitHub link in my profile.)

  • "This time you're going to love Cortana. For reals!"

  • People that love to read only the title. What could be better than a bunch of titles in a row?

  • As a general statement: No, I am not.

    You didn't qualify what you said originally. It either has the capability or it doesn't: you said it didn't, but it actually does.

    You’re making an over specific scenario to make it true.

    Not really. It isn't that far-fetched that a company would see an artist they'd like to use but not want to pay that artist's fees, so they train an AI on the artist's portfolio and churn out very similar artwork. Training it on one or two images is obviously contrived, but a situation like what I just mentioned is very plausible.

    This entire counter argument is nothing more than being pedantic.

    So this isn't true. What you said isn't accurate with the literal interpretation and it doesn't work with the more general interpretation either. The person higher in the thread called it stealing: in that case it wasn't, but AI models do have the capability to do what most people would probably call "stealing" or infringing on the artist's rights. I think recognizing that distinction is important.

    Furthermore, if I’m making such specific instructions to the AI, then I am the one who’s replicating the art.

    Yes, that's kind of the point. A lot of people (me included) would be comfortable calling doing that sort of thing stealing or plagiarism. That's why the company in OP took pains to say they weren't doing that.

  • It’s a briefcase full of cash.

    I'm pretty sure you could just say "It's tax free" or even double the amount to $2 million and it wouldn't really change which people would do it and which wouldn't.

    I'd do it, as long as I was really convinced that the only danger was mental, not physical.

  • You probably ate or drank other stuff that had water in it. The other person didn't mean "water" specifically, just some means of hydration.

  • I just want fucking humans paid for their work

    That's a problem whether or not we're talking about AI.

    why do you tech nerds have to innovate new ways to lick the boots of capital every few years?

    That's really not how it works. "Tech nerds" aren't licking the boots of capitalists, capitalists just try to exploit any tech for maximum advantage. What are the tech nerds supposed to do, just stop all scientific and technological progress?

    why AI should own all of our work, for free, rights be damned,

    AI doesn't "own your work" any more than a human artist who learned from it does. You don't like the end result, but you also don't seem to know how to come up with a coherent argument against the process of getting there. Like I mentioned, there are better arguments against it than "it's stealing", "it's violating our rights" because those have some serious issues.

  • Artists who look at art are processing it in a relatable, human way.

    Yeah, sure. But there's nothing that says "it's not stealing if you do it in a relatable, human way". Stealing doesn't have anything to do with that.

    knowing that work is copyrighted and not available for someone else’s commercial project to develop an AI.

    And it is available for someone else's commercial project to develop a human artist? Basically, the "an AI" part is still irrelevant. If the works are out there where it's possible to view them, then it's possible for both humans and AIs to acquire them and use them for training. I don't think "theft" is a good argument against it.

    But there are probably others. I can think of a few.

  • You can’t tell it to find art and plug it in.

    Kind of. The AI doesn't go out and find/do anything, people include images in its training data though. So it's the human that's finding the art and plugging it in — most likely through automated processes that just scrape massive amounts of images and add them to the corpus used for training.
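
    The "plugging it in" step is usually just a person-written script along these lines (the file name and folder here are placeholders, not any real pipeline): it fetches whatever URLs a human pointed it at and drops them into a folder that later becomes training data.

    ```python
    import urllib.request
    from pathlib import Path

    out_dir = Path("training_corpus")
    out_dir.mkdir(exist_ok=True)

    with open("image_urls.txt") as f:        # list assembled/scraped by a person
        urls = [line.strip() for line in f if line.strip()]

    for i, url in enumerate(urls):
        try:
            urllib.request.urlretrieve(url, out_dir / f"{i:06d}.jpg")
        except OSError as err:
            print(f"skipped {url}: {err}")   # the model itself never "finds" anything
    ```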

    It doesn’t have the capability to store or copy existing artworks. It only contains the matrix of vectors which contain concepts.

    Sorry, this is wrong. You definitely can train an AI to produce works that are very nearly a direct copy. How "original" the works the AI creates are is going to depend on the size of the corpus it was trained on. If you train the AI on (or put a lot of training weight on) just a couple of works from one specific artist or something like that, it's going to output stuff that's very similar. If you train the AI on 1,000,000 images from all different artists, the output isn't really going to resemble any specific artist's style or work.

    That's why the company emphasized they weren't training the AI to replicate a specific artist's (or design company, etc) works.

  • They deliberately do that in some public toilets to discourage people from hooking up in there.

  • Doubled down on the “yea were not gonna credit artist’s our AI stole from”. What a supreme douche

    I don't think it's as simple as all that. Artists look at other artists' work when they're learning, for ideas, for methods of doing stuff, etc. Good artists probably have looked at a ton of other artwork, they don't just form their skills in a vacuum. Do they need to credit all the artists they "stole from"?

    In the article, the company made a point about not using AI models specifically trained on a smaller set of works (or some artist's individual works). It would be a lot easier to argue that doing something like that is stealing: but the same would be true if a human artist carefully studied another person's work and tried to emulate their style/ideas. I think there's a difference between that and "learning" (or training) from a large body of work without emulating any specific artist, company, individual works, etc.

    Obviously it's something that needs to be handled fairly carefully, but that can be true with human artists too.