Posts 15 · Comments 825 · Joined 2 yr. ago

  • He's not trying to get copyright for something he generated; he's trying to have the court award copyright to his AI system, "DABUS", but copyright is for humans. Humans using generative AI are eligible for copyright under the latest guidance from the United States Copyright Office.

  • We're saying the same thing here; it's just that your characterization of gen AI as a "tech-enabled copying device" isn't accurate. You should read this; it breaks down how all of this works.

  • The fair use doctrine allows you to do just that. The alternative would be someone being able to publish a book and then shut everyone else out of publishing, discussing, or building on its ideas without paying them a kickback.

  • The funny part is that most of the headlines want you to believe that using things without permission is somehow against copyright, when in reality fair use is part of copyright law and the reason our discourse isn't wholly controlled by mega-corporations and the rich. It's sad watching people desperately try to become the kind of system they're against.

  • You're moving the goalposts. Your original reply made no mention of co-authorship by a human; it was just one sweeping statement.

    AI art is not protected by copyright, yes. That isn’t a “should” but rather how it actually works in nearly all countries but a few, certainly including the US.

  • But they do, explicitly:

    Many popular AI platforms offer tools that encourage users to select, edit, and adapt AI-generated content in an iterative fashion. Midjourney, for instance, offers what it calls “Vary Region and Remix Prompting,” which allow users to select and regenerate regions of an image with a modified prompt. In the “Getting Started” section of its website, Midjourney provides the following images to demonstrate how these tools work.136

    Unlike prompts alone, these tools can enable the user to control the selection and placement of individual creative elements. Whether such modifications rise to the minimum standard of originality required under Feist will depend on a case-by-case determination.138 In those cases where they do, the output should be copyrightable. Similarly, the inclusion of elements of AI-generated content in a larger human-authored work does not affect the copyrightability of the larger human-authored work as a whole.139 For example, a film that includes AI-generated special effects or background artwork is copyrightable, even if the AI effects and artwork separately are not.

  • This seems like a good place for discussion, so if you'll humor me, I'd like to explain some of the things you might find in a prompt, and maybe some things you weren't aware you could do. Web services restrict a lot of this to keep users from generating things outside their terms of use, but with open-source tools you can get a lot more involved.

    Take a look at these generation parameters: sarasf, 1girl, solo, robe, long sleeves, white footwear, smile, wide sleeves, closed mouth, blush, looking at viewer, sitting, tree stump, forest, tree, sky, traditional media, 1990s \(style\), <lora:sarasf_V2-10:0.7>

    Negative prompt: (worst quality, low quality:1.4), FastNegativeV2

    Steps: 21, VAE: kl-f8-anime2.ckpt, Size: 512x768, Seed: 2303584416, Model: Based64mix-V3-Pruned, Version: v1.6.0, Sampler: DPM++ 2M Karras, VAE hash: df3c506e51, CFG scale: 6, Clip skip: 2, Model hash: 98a1428d4c, Hires steps: 16, "sarasf_V2-10: 1ca692d73fb1", Hires upscale: 2, Hires upscaler: 4x_foolhardy_Remacri, "FastNegativeV2: a7465e7cc2a2",

    ADetailer model: face_yolov8n.pt, ADetailer version: 23.11.1, Denoising strength: 0.38, ADetailer mask blur: 4, ADetailer model 2nd: Eyes.pt, ADetailer confidence: 0.3, ADetailer dilate erode: 4, ADetailer mask blur 2nd: 4, ADetailer confidence 2nd: 0.3, ADetailer inpaint padding: 32, ADetailer dilate erode 2nd: 4, ADetailer denoising strength: 0.42, ADetailer inpaint only masked: True, ADetailer inpaint padding 2nd: 32, ADetailer denoising strength 2nd: 0.43, ADetailer inpaint only masked 2nd: True

    To break down a bit of what's going on here: sarasf is the activation token for the character LoRA in this image, and <lora:sarasf_V2-10:0.7> loads that LoRA, a character LoRA for Sarah from Shining Force II. LoRA are like supplementary models you use on top of a base model to capture a style or concept, like a patch. Some LoRA don't have activation tokens, and some that do can be used without the token to get different results.
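
    If you've never touched the open-source tooling, here's a minimal sketch of what loading a base checkpoint plus a LoRA looks like with the diffusers library. The file names are placeholders for whatever checkpoint and LoRA files you actually have, and I'm assuming a reasonably recent diffusers install:

    ```python
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a single-file base checkpoint, then patch it with the character LoRA.
    pipe = StableDiffusionPipeline.from_single_file(
        "Based64mix-V3-Pruned.safetensors", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("sarasf_V2-10.safetensors")

    # The activation token goes into the prompt like any other tag.
    image = pipe("sarasf, 1girl, solo, forest, smile", num_inference_steps=21).images[0]
    image.save("sarah.png")
    ```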

    The 0.7 in <lora:sarasf_V2-10:0.7> refers to the strength at which the weights from the LoRA are applied to the output; lowering the number makes the concept show up more weakly. You can blend styles and concepts this way, stacking multiple LoRA on the base model at the same time at different strengths. You can even take a monochrome LoRA and push its weight negative to get some crazy colors.
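
    Continuing the sketch above, diffusers expresses the same idea with named adapters. This needs a fairly recent version with the PEFT backend, and the second LoRA file is made up purely to show the mixing:

    ```python
    # Register each LoRA as a named adapter so their strengths can be set independently.
    pipe.load_lora_weights("sarasf_V2-10.safetensors", adapter_name="sarasf")
    pipe.load_lora_weights("retro_anime_style.safetensors", adapter_name="retro")  # hypothetical style LoRA

    # These numbers play the same role as the 0.7 in <lora:sarasf_V2-10:0.7>.
    pipe.set_adapters(["sarasf", "retro"], adapter_weights=[0.7, 0.5])

    image = pipe("sarasf, 1girl, solo, forest, 1990s anime style").images[0]
    ```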

    The Negative Prompt is where you include things you don't want in your image. (worst quality, low quality:1.4) here has its attention set to 1.4; attention is sort of like weight, but for tokens. LoRA bring their own weights to add onto the model, whereas attention on tokens works entirely within the weights the model already has. In this negative prompt, FastNegativeV2 is an embedding known as a Textual Inversion. It's sort of like a crystallized collection of tokens that tells the model something precise you want without you having to enter the tokens yourself or fiddle with the attention manually. Embeddings you put in the negative prompt are known as negative embeddings.
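
    In diffusers that looks roughly like the following. The (tag:1.4) attention syntax is a web-UI convention that plain diffusers doesn't parse (the separate compel library handles weighting), so this sketch just lists the tags, and the embedding file name is whatever your copy of the Textual Inversion is called:

    ```python
    # Load the negative embedding and bind it to a trigger token,
    # then use that token in the negative prompt like any other tag.
    pipe.load_textual_inversion("FastNegativeV2.pt", token="FastNegativeV2")

    image = pipe(
        prompt="sarasf, 1girl, solo, robe, forest, smile",
        negative_prompt="worst quality, low quality, FastNegativeV2",
    ).images[0]
    ```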

    In the next part, Steps is how many steps you want the model to take to solve the starting noise into an image; more steps take longer. VAE is the name of the Variational Autoencoder used in this generation. The VAE is the piece that decodes the model's latent output into the final pixels, and a mismatch of VAE and model can yield blurry, desaturated images, so some models opt to have their VAE baked in. Size is the dimensions in pixels the image will be generated at. Seed is the numeric representation of the starting noise for the image; you need it to be able to reproduce a specific image.
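
    Those settings map onto pipeline arguments pretty directly. A sketch, assuming your diffusers version supports loading a single-file VAE:

    ```python
    import torch
    from diffusers import AutoencoderKL

    # Swap in an external VAE (a mismatched VAE is a common cause of washed-out images).
    pipe.vae = AutoencoderKL.from_single_file(
        "kl-f8-anime2.ckpt", torch_dtype=torch.float16
    ).to("cuda")

    # Fixing the seed fixes the starting noise, which makes the result reproducible.
    generator = torch.Generator(device="cuda").manual_seed(2303584416)

    image = pipe(
        prompt="sarasf, 1girl, solo, forest",
        num_inference_steps=21,   # Steps
        width=512, height=768,    # Size
        generator=generator,      # Seed
    ).images[0]
    ```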

    Model is the name of the model used, and Sampler is the name of the algorithm that solves the noise into an image. There are a number of different samplers, also known as schedulers, each with their own trade-offs in speed, quality, and memory usage. CFG scale is basically how closely you want the model to follow your prompt; some models can't handle high CFG values and flip out, giving over-exposed or nonsense output. Hires steps is the number of steps to take on the second, upscaling pass, which is how you get higher-resolution images without visual artifacts. Hires upscaler is the name of the model used during that upscaling step, and again there are a ton of those with their own trade-offs and use cases.
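
    The sampler and CFG scale are also just arguments, and the hires pass can be approximated with an img2img pass at the larger size. A rough sketch (with a plain resize standing in for a real upscaler model like Remacri):

    ```python
    from diffusers import DPMSolverMultistepScheduler, StableDiffusionImg2ImgPipeline

    # "DPM++ 2M Karras" corresponds roughly to this scheduler configuration.
    pipe.scheduler = DPMSolverMultistepScheduler.from_config(
        pipe.scheduler.config, use_karras_sigmas=True
    )

    base = pipe(
        prompt="sarasf, 1girl, solo, forest",
        guidance_scale=6.0,        # CFG scale
        num_inference_steps=21,
    ).images[0]

    # Stand-in for the hires fix: upscale, then lightly re-denoise at the new size.
    img2img = StableDiffusionImg2ImgPipeline(**pipe.components)
    hires = img2img(
        prompt="sarasf, 1girl, solo, forest",
        image=base.resize((1024, 1536)),
        strength=0.4,              # denoising strength for the second pass
        num_inference_steps=16,    # Hires steps
        guidance_scale=6.0,
    ).images[0]
    ```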

    After that are the parameters for ADetailer, an extension that does a post-processing pass to fix things like broken anatomy, faces, and hands. We'll just leave it at that, because I don't feel like explaining all the different settings found there.

    I could continue if you want to hear more.

  • I remember you. It was my thread you commented in. You're mad because a moderator removed your comment after you showed up and started antagonizing no one in particular?

    I'd like to ask you a question. How much experience do you have with any Stable Diffusion tools?