Posts: 1 · Comments: 90 · Joined: 2 yr. ago

  • You should probably hook up with the SillyTavern crowd. It's a frontend for chatting with LLMs that will do what you want. Its main purpose is chat role-play. You can assign a persona to the LLM, and ST will handle the prompting to make it work. It also handles jailbreaks if you want to use one of the big commercial models (no idea how well that works). You can also connect to other services that run open models, including aihorde.

    https://github.com/SillyTavern/SillyTavern

    https://www.reddit.com/r/SillyTavernAI/


    If you want to host your own model you can find more help here:

    https://www.reddit.com/r/LocalLLaMA/

    !localllama@sh.itjust.works

  • A solution would be to save the chat log as a text file. An LLM might be able to turn it into FAQ format with little oversight. Of course, someone would still have to volunteer the work.

    Obviously, Discord doesn't want that sort of thing since it lessens their hold on a community and the people in it. They could decide to cause trouble.

  • deleted

  • Can I ask why this is important to you? Did you donate and don't like how your money is used?

    ETA: I asked because I wondered if it has to do with AI tech specifically, as many here obviously believe. OP kindly answered my question in DMs. They obviously don't wish the details to be public, but I believe I can say that the answer was very reasonable and not connected to AI tech. (There's nothing in the answer that is private or couldn't be made public, but it's up to them.)

  • It's noteworthy that the patent term is 20 years to this day. Patent law has survived with its core fairly intact, the main change being that you can no longer get a patent for bringing an invention into the country. Today that is called piracy (poor China).

    I believe that is because patents simply have to work for the whole country in encouraging progress. If cultural production is stifled, well... Who cares? The elites in the copyright industry benefit, and they have an outsize influence on public discourse.

  • This touches several difficult topics.

    > I think my disagreement with you about AI copyright infringement is that you think that AI can create new things whereas I don't think that.

    I don't think that matters to copyright law, as it exists.

    Copyright law is all about substantial similarity in copyrightable elements. All portraits are similar by virtue of being portraits, but that similarity is not protected; one cannot copyright genres and the like. Conversely, a translation of a text has superficially no similarity with the original, yet it has to be authorized.

    What you are saying would mean that similarity is no longer a requirement for infringement. That's a big change. It is copyright, after all.

    > Furthermore it really wouldn't take a huge change to copyright law, just clear differences between the rules that apply to sentient vs non-sentient sources.

    Non-sentient sources are not new. Take cameras, for example. Cameras have been improved over time so that less skill is necessary to operate one. It's no longer necessary to focus manually, set the exposure time, develop the film, and so on. This also means that photos today have less human creative input. In current smartphone cameras, neural AIs make many decisions and also "photoshop" the result.

    It doesn't really make sense to me to treat modern cameras differently from old ones. Or: someone poses and renders a figure in Blender. What difference does it make if they use an old-fashioned physically based renderer or a genAI?


    Nevertheless, the question of whether AIs can create something new can be answered. The formal definition of "information" is that it is a reduction in uncertainty. For example, take the sequence of letters: "creativit_". You probably have a very clear idea what the last, missing letter is, so learning that it is "y" doesn't give you much information.

    But take the sequence: "juubfpvoi_". The missing letter could be any lower-case letter. You may not feel very informed when you learn that it is "f", but it does represent a much bigger reduction in uncertainty.
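    The "reduction in uncertainty" idea can be made concrete with Shannon surprisal, -log2 p. A minimal sketch; the probability for the "creativit_" case is an illustrative assumption, not measured from any corpus:

```python
import math

def surprisal_bits(p: float) -> float:
    """Information gained, in bits, from observing an outcome of probability p."""
    return -math.log2(p)

# After "creativit_", the next letter is almost certainly "y";
# assume p = 0.99 for that context (an illustrative number).
print(f"{surprisal_bits(0.99):.3f}")    # 0.014 bits: almost no new information

# After "juubfpvoi_", any of the 26 letters is equally likely: p = 1/26.
print(f"{surprisal_bits(1 / 26):.3f}")  # 4.700 bits: a much larger reduction in uncertainty
```

    The predictable letter carries almost no information; the unpredictable one carries over three hundred times more.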

    When we write texts, we use the same old words in the dictionary; just a few tens of thousands at most. We string them together with the same old rules of grammar to tell the same old things. The sky is blue; things fall down, not up; people love and hate; and in the end the good guys win. You can probably think of exceptions to all of these. They are exceptions. We create small variations on the same old themes. We rehash.

    If a story does not cater to expectations, then it's not believable. People should behave as we know people to behave. The laws of nature should be consistent and familiar. Most of all: The conventions of the genre should be followed. As a human, you are supposed to lift ideas from previous works. New ideas may be appreciated, but are not required.

    The second string was, in fact, created by a machine; not an AI, but an RNG. Even with many gigabytes of output, it should be impossible to find any biases or patterns that allow one to guess at the next letter. I didn't make one up myself because humans are not very random, even when we try. And when we write, we do our best to reduce our randomness even further: we try not to invent new spellings, i.e., make spelling errors.
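    A string like that one can be produced in a couple of lines; this sketch just draws uniformly from the lowercase alphabet (the output is not meant to reproduce the exact string above):

```python
import random
import string

rng = random.Random()  # pseudo-random number generator; pass a seed for reproducibility
gibberish = "".join(rng.choice(string.ascii_lowercase) for _ in range(10))
print(gibberish)  # e.g. "juubfpvoif"; every 10-letter string is equally likely (26**10 possibilities)
```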

    AIs receive input from a pRNG, which means that they create new things. What they are supposed to do is strip away all that novel information and create something largely predictable. They often fail and, say, create images of humans with an innovative number of fingers. LLMs make continuity errors, or outright start to spout gibberish. The problem is that AIs create too many new things, not that they create none.
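    That last point can be illustrated with a toy next-letter sampler: the pRNG injects fresh randomness on every draw, and it is the model's probabilities that suppress most of it. The numbers below are made up for illustration, not taken from any real model:

```python
import random

# Toy next-letter model: after "creativit", the model is nearly certain of "y".
# These probabilities are illustrative assumptions.
probs = {"y": 0.97, "e": 0.02, "i": 0.01}

rng = random.Random(0)  # the pRNG that feeds the sampler
letters, weights = zip(*probs.items())
samples = [rng.choices(letters, weights=weights)[0] for _ in range(1000)]

# The pRNG supplies genuine novelty on every draw; the model's probabilities
# suppress most of it, so the predictable completion dominates.
print(samples.count("y"))  # roughly 970 out of 1000
```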

  • Well, that is a philosophical or religious argument. It's somewhat reminiscent of the claim that evolution can't add information. That can't be the basis for law.

    In any case, it doesn't matter to copyright law as it exists that you see it that way. The AI is the equivalent of the book on how to write bestsellers in my earlier reply. People extract information from copyrighted works to create new works, without needing permission. A closer example is programmers, who look into copyrighted references while they create.

  • I didn't downvote you. (Just gave you an upvote, though.) You're reasonable and polite, so a downvote would be very inappropriate. Sorry for that.

    Music is having ongoing problems with copyright litigation, like Ed Sheeran most recently. From what I have read, it's blamed on juries without the necessary musical background. As far as I know, higher courts usually strike down these cases, as with Sheeran. Hip hop was neutered, in a blow to (African-)American culture. While it was obviously wrong not to find fair use in that case, samples are copies.

    It's not so bad outside of music. You can write books on "how to write a bestseller", or "how to draw comics" without needing permission. Of course, you would study many novels and images to get material. The purpose of books is that we learn from them. That we go on to use this to make our own thing is intended (in the US).

    What you're proposing there would be a great change to copyright law and probably disastrous. Even if one could limit the immediate effect to new technologies, it would severely limit authors in adopting these technologies.

  • I understand. The idea would be to hold AI makers liable for contributory infringement, reminiscent of the Betamax case.

    I don't think that would work in court. The argument is much weaker here than in the Betamax case, and even then it didn't convince. But yes, it's prudent to get the explicit permission, just in case of a case.

  • That shouldn't be an issue. If you look at an unauthorized image copy, you're not usually on the hook (unless you are intentionally pirating). It's unlikely that they needed to get explicit "consent" (ie license the images) in the first place.

  • The models are deliberately engineered to create "good" images, just as cameras get autofocus, anti-shake, and so on. There are many tools that will auto-prettify people, not so many for the reverse.

    There are enough imperfect images around for the model to know what that looks like.

  • Asklemmy @lemmy.ml

    Critics of capitalism, what concrete economic policies do you support?