I mainly use a site called
pornpen.ai (pornpen.art currently, since something messed up with their domain registration) and it creates NSFW images. They have a GIF feature, but it's very lacking. I do pay for that site, but not much. I also pay for Pixverse, at least for now. You run out of credits before the next billing, that's for sure, because who wouldn't want to keep trying until things look good! But NSFW is off limits. I can, though, create some nice content with outfits, things like that. And I've been able to get some bouncing boobs (covered up) with great success.
@_2old_ has been giving me some pointers on using Stable Diffusion to create images. I've basically just started and I'm still getting used to it, but I bet before long I won't even have to pay for the pornpen site, since I'll be able to create any type of image I can imagine without filters. I'm also starting to look at Image2Video using ComfyUI, but I'm finding my gear is too weak for it. I just upgraded to 32GB of RAM, but the used GPU I got from eBay is an older one (I'm not familiar with GPU stats): it's a GTX 1060 with only 3GB of VRAM, which can't do much for Image2Video, it turns out. What I was able to produce was like 3 seconds long and nowhere near what can be done on Pixverse, obviously. So, so as not to get totally discouraged, I'm turning back to creating images. My hope was that any images Pixverse wouldn't take, I could turn into video myself. Maybe later in life I'll be able to.
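For anyone curious why 3GB of VRAM hits a wall so fast with Image2Video, here's a rough back-of-envelope sketch in Python (stdlib only - the resolution, fps, and precision are just assumed example numbers, and this only counts the raw frame tensor, not the model weights or activations, which dominate in practice):

```python
# Rough back-of-envelope: memory for just the raw frame tensor an
# image-to-video model holds at once (assuming fp16, i.e. 2 bytes
# per element). Real VRAM use is far higher once model weights and
# intermediate activations are added on top.

def frame_tensor_mb(seconds: float, fps: int, height: int, width: int,
                    channels: int = 3, bytes_per_elem: int = 2) -> float:
    """Size in MiB of a (frames, channels, height, width) tensor."""
    frames = int(seconds * fps)
    return frames * channels * height * width * bytes_per_elem / 2**20

# Example: a 3-second clip at 24 fps, 512x512 RGB, half precision
print(round(frame_tensor_mb(3, 24, 512, 512), 1))  # → 108.0
```

So even the bare pixels of a short 512x512 clip eat on the order of 100 MiB, and the weights of current video models alone typically run to several gigabytes in half precision - which is why a 3GB card has essentially no headroom for this.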
The site animegenius is absolutely fantastic at creating both NSFW and "real girl" AI images. Thing is ...
The only thing you can do with a real-life image is turn it into an anime image (and an anime image into a real-life one). There's an "animate" button at the end of the process - to transform the generated image into a gif - but there's no way to turn the base image itself into a gif. It has to be put through the AI process first.
I remember when the site first came out, or around that time, they added a feature where you could put -any- image into it - and it would create a 3D "rotating" effect around the image. Not fully 3-D or anything like that, but it would add just enough depth in every direction to make the image pop out from its surroundings, and make it look like you were actually "there." It would just slowly rotate, and the person in the center of the image would jiggle a bit, and ... that was it. It was like the first baby step toward AI video - absolutely tiny, but still great.
They then just ... removed it. Like, from one day to the next, it was gone. They still had links to it on the site - still referencing it - but the entire framework for generating that rotating gif just ... evaporated into thin air. And it's been gone ever since.
At that point I figured, alright, I see what's going on here. People started getting a little too cute with the nudify images they were creating of people - and the site most likely faced some blowback, even if not legal, and they just figured ... fuck it. Better to play it safe and yank that feature out. Shame too, because it was absolutely fantastic.
---
Your woman-bouncing-around mp4 was very nice. I've gotten close to that (I actually got Pixverse to nudify the image, even though it somehow sensed it wasn't supposed to - but then it added a bunch of extra unnatural splotches).
These are all just sparks, though. We need to get closer to actually starting a fire with these things. Which brings me to the next post - but I'm still going to be addressing what you said in it, so I'll be responding to both of you essentially at the same time.
What do you think about this one from civitai that I came across:
civitai.com
They often have that "cross" symbol on the videos, not sure if it comes from the AI tool.
You see - now -THIS- is what I'm talking about. THIS IS THE FIRE.
Problem is - the person that created this did so by essentially assembling an entire program: inserting specific data nodes in specific spots, which basically required full knowledge of a programming language AND of how to put the whole thing together. Like, I'm not a complete n00ber when it comes to programming - I wrote code back in the day (not good code, but, you know, I did it) - so it's not like I'm looking at something I don't grasp the concept of. I get it. I fully get it.
Thing is - I don't have either the time or the resources to learn the entire ins and outs of how to train a LORA on my own. Even more (getting back to Cashhern2 here) - I don't have the 256 GB of RAM needed across all my different computer components to even BARELY run the process. I'm really more interested in the artistic side of it. Like, I've taught myself a great deal (using a series of programs) to get some really phenomenal AI images. Even better? They don't look like AI! I've found a lot of ways to decrease the glossiness and rubbery-ness of the skin. My skills are still beginner-level - and I've still got a long way I can go - but increasingly I can create imagery that's just like, "YES!"
And that's with 40 to 60 hours of work on a single image.
Thing is - I don't have the thousand hours needed to fully get into the production side of it for videos - and even if I did - I don't have the $10,000 (if this is exaggerated, forgive me) available for the parts.
But ... Hunyuan and ComfyUI seem to be getting daringly close. It seems they've found shortcuts around the tremendous amounts of processing power being spent on things that aren't necessary: having a LORA with a stable, straightforward framework through which variation can occur, without the entire reality of the image having to be reconceived every second. It sounds like it's being approached the way it should be - by creating a stable background layer and then imposing a foreground layer (the LORA) on top of it. That allows both layers the flexibility to change somewhat, without wasting resources transforming the blue window into a pond and then into a blue bird.
I dunno - I'm still out of my depth. But it sounds like everyone's been working vigorously on it. And with Tencent being the one funding Hunyuan to begin with - they might just have the willingness and capacity to fund the servers and bandwidth needed to have 20 million people (minimum) all creating videos at the same time. Especially if they've reduced the waste in the workflow within the AI system itself.
Here's to hoping - if anyone hears of any updates to video generation - please post it in here. I - like pretty much all of us here - am going to be there on hour one, never mind day one.
Thank you to Cashhern and Qubit for the replies. Much appreciated.
And for _2old_ showing us his inner freak. lol.
