Who's the A-Team?
Itrytoblenderrender@lemmy.world · 19 posts · 177 comments · joined 2 yr. ago
After his last movie, Thor needs his call-center night job to pay for all the goat food.
Made with ComfyUI. The workflow is embedded in the picture.
- Positive Prompt: chris hemsworth dressed as thor sitting on a office chair in a call center cubicle, wearing a call center headset, looking unhappy and depressed
- Negative Prompt: Anime
- Checkpoint (SDXL): fenrisxl_801Photorealistic
- Refiner: None
- Sampler: dpmpp_3m_sde
- Scheduler: Karras
- CFG: 7.0
- Clip skip: 2
Thank you !
I was not aware of this LoRA. I'll check it out.
That's nice! Which model gives you clear, readable text like this?
I would buy this one.
I've tried adding LoRAs to the workflow, plus a FaceDetailer to strengthen the LoRA effect. The results are quite interesting (the low quality comes from the WebM format):
The workflow is embedded in this picture (the image is from before the FaceDetailer step).
It's all a bit trial and error right now. This animation took about 20 minutes on my machine. I would love to do more tests with different models, embeddings, or even LoRAs, but unfortunately my time for this is somewhat limited.
I love doing these contests to test new things out :-)
Visions for the future: if you could get stable output for the background and the actors (maybe with LoRAs?), you could "play out" your own scenes and transform them via Stable Diffusion into something great. Think of epic fight scenes, or even short animation films.
This whole Stable Diffusion thing is extremely interesting and, in my opinion, a game changer like the introduction of the mobile phone.
Sorry for the low quality; I've found no other way to upload animations directly to Lemmy yet.
Edit: here is a link to a sharing portal with a higher-resolution GIF:
The ComfyUI workflow is also embedded in this picture; you will have to install several custom extensions to make it work:
- Generator: ComfyUI
- Positive Prompt: large format photo of (Masterpiece, best quality:1.2) A dancing ninja girl (wearing a wooden demon mask on the head:1.1) intricate ornamental carvings, medieval background, full body, cinematic lighting, Kodak Funsaver, Kodak Vision3, 50mm
- Negative Prompt: bad anatomy, bad proportions, blurry, cloned face, deformed, disfigured, duplicate, extra arms, extra fingers, extra limbs, extra legs, fused fingers, gross proportions, long neck, malformed limbs, missing arms, missing legs, mutated hands, mutation, mutilated, morbid, out of frame, poorly drawn hands, poorly drawn face, too many fingers, ugly, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, out of frame, ugly, extra limbs, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck
- Negative Embedding: FastNegativeV2
- Sampler: Euler
- Scheduler: normal
- CFG: 6
- Model: lyriel_v14
This is quite an interesting workflow, as you can generate relatively long animations.
You take the motions of your character from a video. For this one I googled "dancing girl" and took one of the first results I found:
You can pull single images from the video into ComfyUI. For this one I skipped the first 500 frames and took 150 frames to generate the animation. The single images are scaled to a resolution of 512x512. This gives me an initial set of pictures to work with:
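As a rough sketch of this step (the function name and the nearest-neighbour scaling are my own stand-ins, not the actual ComfyUI nodes):

```python
import numpy as np

def pick_and_scale(frames, skip=500, count=150, size=512):
    """Skip the first 500 frames of the video, keep the next 150,
    and nearest-neighbour scale each one to 512x512."""
    out = []
    for frame in frames[skip:skip + count]:
        h, w = frame.shape[:2]
        ys = np.arange(size) * h // size  # source row for each output row
        xs = np.arange(size) * w // size  # source column for each output column
        out.append(frame[ys][:, xs])
    return out
```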
Via the OpenPose preprocessor you can get the poses for every single image:
This can be fed to the OpenPose ControlNet to get the correct pose for every frame of the animation. Now we have the following problem: we are all set with the poses, but we also need a set of latent images to go through the KSampler. The solution is to generate a single 512x512 latent image and blend it with every VAE-encoded picture of the video, giving an "empty" latent image for every frame:
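The blending step reduces to something like this (a simplified stand-in for the ComfyUI latent-blend node; the `strength` weighting is my assumption, not a value from the workflow):

```python
import numpy as np

def blend_latents(base, frame_latents, strength=0.3):
    """Mix one shared base latent with each VAE-encoded video frame,
    yielding one 'empty' starting latent per frame for the KSampler."""
    return [(1.0 - strength) * base + strength * f for f in frame_latents]
```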
We get a nice set of empty latents for the sampler:
Then we let the KSampler, together with the AnimateDiff nodes and the ControlNet, do its magic, and we get a set of images for our animation. (The number of possible images seems to be limited by your system memory. I had no problem with 100, 150, 200, or 250 images and have not tested higher numbers yet; I could not load the full video.)
The last step is to put everything together with the Video Combine node. You can set the frame rate here; 30 FPS seems to produce acceptable results:
Yeah, it's broken. I struggled a bit to upload a GIF here, so I put the MP4 on a video-sharing platform and made a new link.
I had to scale the GIF down with an online tool, which put its logo in the bottom-right corner.
It's OK if the use of tools outside of a classic AI generator violates the rules of the contest and disqualifies me. I still had much fun trying it out!
The idea was to "fly" from outer space deeper and deeper into an ecumenopolis (had to google that one) until you "arrive" at the final destination deep inside it: a grass field.
- AI Tool: ComfyUI
- Model: dreamshaper_5BakedVAE
- Sampler: euler
- Scheduler: normal
- CFG: 7.0
- Samples: 40
- Positive Prompt: It's complicated, see below
- Negative Prompts: bad anatomy, bad proportions, blurry, cloned face, deformed, frame, border, black rectangle, disfigured, duplicate, extra arms, extra fingers, extra limbs, extra legs, fused fingers, gross proportions, long neck, malformed limbs, missing arms, missing legs, mutated hands, mutation, mutilated, morbid, out of frame, poorly drawn hands, poorly drawn face, too many fingers, ugly, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, out of frame, ugly, extra limbs, bad anatomy, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, mutated hands, fused fingers, too many fingers, long neck
- Negative Embeddings: FastNegativeV2, BAdDream
The workflow is also embedded in this image:
The workflow uses custom nodes which have to be installed for it to work.
This is quite an interesting workflow:
- Base Image
I've created a base image with the following prompt:
documentary photography photo of closeup of a single grass blade, futuristic, grenn , ecological, hippie city background, natural lighting, from below/low angle, ARRI ALEXA 65, Kodak Vision3 IMAX
- Send the created image back into the workflow
Via the Image Sender and Image Receiver nodes of the ComfyUI custom module "Impact Pack", you can send the generated image back into the workflow. With these nodes you can build a generation loop, which makes it possible to generate something like an animation through creative use of inpainting.
- Scale the generated image down and pad it for outpainting
The previously generated image is scaled down to 50%. Then we pad the image to get it back to its original size for the outpainting. Combined with an outpainting ControlNet, you get a new image that "zooms out" a bit:
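A minimal sketch of this zoom-out step (a 2x downscale by slicing plus centred padding; the edge-fill is my stand-in for whatever the actual pad node writes into the border):

```python
import numpy as np

def pad_for_outpaint(img):
    """Halve the image, then pad it back to the original size, centred.
    The padded border is the area the outpainting ControlNet fills in."""
    small = img[::2, ::2]                      # 50% nearest-neighbour downscale
    top = (img.shape[0] - small.shape[0]) // 2
    left = (img.shape[1] - small.shape[1]) // 2
    return np.pad(
        small,
        ((top, img.shape[0] - small.shape[0] - top),
         (left, img.shape[1] - small.shape[1] - left)),
        mode="edge",
    )
```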
- Now the fun starts
ComfyUI has an optional setting that puts image generation into an endless loop: it generates image after image until you say stop. As we have built a feedback loop into our workflow, we get the nice zoom effect for every following image.
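The feedback loop itself boils down to something like this (with `generate` and `zoom_out` as hypothetical stand-ins for the sampler and the pad/outpaint step):

```python
def generation_loop(seed_image, steps, generate, zoom_out):
    """Each iteration zooms the previous result out and regenerates it;
    collected in order, the results form the frames of the zoom animation."""
    frames = [seed_image]
    for _ in range(steps):
        frames.append(generate(zoom_out(frames[-1])))
    return frames
```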
And here is the point from above with the prompt "It's complicated":
You can now modify the prompt while the generation process is running and alter it slowly. If you are a fast typist you can do this while the process runs, or you can pause, modify the prompt, and start the process again.
My goal was to have a "zoom in from space" into the ecumenopolis.
As the process zooms out from the first image, you have to think in "reverse" and modify the prompt gradually during generation so that you go from the detail view of the grass to the outside view of the ecumenopolis from space.
Surprisingly you only need a few iterations to get a nice effect:
- Make a Video
The last step is to generate the animation with Zoom Video Composer. This generates an .mp4 out of your single images. You also have many parameters to play with to get different effects.
I love these challenges to try new workflows and tricks in comfy. The opportunity to partake in these contests is rewarding enough. Looking forward to the next contest!
Old man with, compared to you, only 394 titles in the Steam library.
I had a similar experience.
My solution was to categorize my Steam library with custom categories.
The most important category is "Trash", to remove the clutter: "dead" games like Artifact and junk from Steam sale bundles.
Now that my library is sorted, when I want to play, for example, a soulslike, I just look into the "soulslike" category and choose from the games I've sorted into it.
I wish this were also possible for streaming platforms, as their standard categories are usually redundant, designed to give you the feeling that their library is bigger than it actually is.
Not as a contender for the contest, only as an example: the same workflow with contrasting prompts. The core image is the regular unsettling prompt from above; the remaining tiles use a happy prompt contrasting the original. (Workflow is embedded in the picture.)
And here is the inversion (happy / unsettling):
Edit:
This is too much fun, so I'll keep the image spamming going in this post:
Lemmy seems to scale the image down. The original resolution is 3072x2048.
The ComfyUI workflow should be embedded in the picture. For this one I wanted to try the outpainting process in ComfyUI.
I used ChatGPT to generate the prompt:
"Please give me a description of a surreal, unsettling cartoon-style comic page"
The Prompt:
Prepare to venture into the unsettling depths of the unknown with "Eerie Enigma," a surreal cartoon-style comic page that will leave you with a sense of profound unease and curiosity. This disconcerting page draws you into a world where reality unravels, and every panel is a portal to a bizarre and unsettling dimension.
The page is dominated by a nightmarish dreamscape, where the sky is a sickly shade of green and jagged, twisted trees reach out like skeletal fingers. Moonlight casts long, distorted shadows that seem to have a mind of their own, morphing into eerie shapes that crawl across the ground.
Characters are equally unsettling: a faceless, cloaked figure with hollow eyes that follow your every move, a humanoid creature made of fragmented, disjointed limbs that seem to float in mid-air, and a spectral, translucent child who whispers cryptic messages that send shivers down your spine.
The narrative unfolds in a non-linear, disorienting fashion, with fragmented, surreal sequences that defy conventional storytelling. In one panel, a clock melts into a puddle, its hands spinning wildly as it oozes into a black void. In another, a door opens to reveal an infinite corridor of twisted, impossible architecture.
The color palette is dominated by sickly greens, blood-red accents, and deep, inky blacks, creating an oppressive atmosphere that seems to close in on you. Unsettling sounds—whispers, distant screams, and eerie laughter—seem to emanate from the very page itself.
"Eerie Enigma" is a disconcerting journey that challenges your perception of reality and invites you to confront the uncanny. It is a nightmarish, surreal experience that lingers long after you've turned the page, leaving you with a profound sense of unease and a burning desire to unravel the enigma that it presents.
I used ComfyUI as the generator.
Stable Diffusion XL is better at processing a lengthy prompt, so I used an XL model.
- Model: fenrisxl_801Photorealistic
- Positive Prompt: See above
- Negative Prompt: signature, Text, credits
- Negative Embedding: unestheticXLv31
- SDXL Refiner: None
- CFG: 7
- Sampler (first KSampler): dpmpp_3m_sde
- Sampler (all following KSamplers): ddim
- Scheduler: Karras
- Steps: 40
Also: I've integrated FreeU into the process for allegedly better images, and a latent garbage collector for better performance.
If you pull the workflow into your ComfyUI instance, you will have to install some custom nodes.
The Process:
In the first step, I generate a 1024x1024 image with the above parameters:
In the next and following steps, I enlarge the canvas and do an inpainting with the same parameters in the enlarged area:
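This step can be sketched as a one-sided canvas extension plus a mask marking the new area (the names and the zero-fill are my assumptions; the real workflow does this with pad/mask nodes):

```python
import numpy as np

def extend_canvas(img, extra=256):
    """Enlarge the canvas to the right; the returned mask marks the
    empty strip that the next inpainting pass will fill."""
    h, w = img.shape[:2]
    canvas = np.zeros((h, w + extra), dtype=img.dtype)
    canvas[:, :w] = img
    mask = np.zeros(canvas.shape, dtype=bool)
    mask[:, w:] = True
    return canvas, mask
```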
I do this for several iterations until I get a nice larger picture. For this image I did not take the last iteration but one from earlier in the process, because in later steps the picture got a bit wild:
I think outpainting is a nice tool you can use in the comfy workflow.
Theoretically, you can generate extremely large images with outpainting.
You could also use a completely new or modified prompt for the outpainting, which should give you some interesting results.
Thank you very much! Looking forward to the next challenge!
The wide variety of nodes can be a bit overwhelming, but in general it is a logical process with many good YouTube tutorials. I had a little experience with the node concept from Blender, but I think even without that it is easy to use. The bigger problem will be your disk space when you suddenly have 50 different models on your hard drive ;-)
If Mr. T had unlimited prep time