> been using SDXL 0.9 for the past week and I'm really impressed. I just created an SDXL Cain LoRA from RoboCop 2: 60 images @ 1024x1024 took 6-7 hours on a 4070 Ti. Gonna post results later, so here's my Peach SDXL LoRA for now. ![]()

I see that Princess Peach/Resident Evil crossover project is going well.
> I've been playing with midjourney for a few months and have been thinking of trying out stable diffusion. Do you still need to train it, or can you just download and prompt whatever?

You can download it and mess about with it. I'm no expert, but I think you only need to train if you want to add in your own stuff. I was trying to make a LoRA, but it was too confusing for me and I couldn't get it to work. I did some textual inversion embeddings, which I had to train.
> I've been playing with midjourney for a few months and have been thinking of trying out stable diffusion. Do you still need to train it, or can you just download and prompt whatever?

You don't have to train it; there is a plethora of checkpoints and other stuff you can download from here: https://civitai.com/
> Adding your stuff is half the fun. Don't give up; there are plenty of tutorials on YouTube for creating LoRAs using Kohya. I believe you need a minimum of 12GB for creating SDXL LoRAs, but I could be wrong. I no longer use my 4070 Ti to train with; maybe in the winter, but not in this humidity. I just rent a 4090 instead, which is a little more involved, but once you get it down it's rather easy.

I did install Kohya, or at least I think that's what it was, lol, but when it came to tagging my photos it kept getting an error, and I couldn't figure out what was causing it or find any solutions, so I gave up. I might give it another shot; maybe I installed it wrong, but I was following a guide step by step.
I rent it out from runpod.io. I basically deploy an instance with the Kohya template, which pre-installs Kohya, then connect to a Jupyter notebook, pip install gdown, and then !gdown https://drive.google.com/uc?id=????????????????? to pull the SDXL safetensor I had uploaded to my Google Drive. I put that safetensor into the model folder, the images with their txt captions into the image/100_urlora folder, and create a log folder. From there I open a terminal, cd into kohya_ss, type source venv/bin/activate and then bash gui.sh --share, click the link it shows, and use Kohya as usual. Or, if you don't really care about making your own stuff, there are tons of things on civitai as Tumle suggested.
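For anyone copying that workflow, the folder layout is the part that usually trips people up. Here's a minimal sketch, assuming the common kohya_ss convention of a "repeats_trigger" image folder; the trigger name "mylora" is a placeholder, not something from this post:

```python
# Sketch: create a Kohya-style training layout (model/, img/<repeats>_<trigger>/, log/).
# Folder names follow the common kohya_ss convention; "mylora" is a placeholder.
from pathlib import Path

def make_kohya_layout(root: str, trigger: str = "mylora", repeats: int = 100) -> dict:
    """Create the three folders Kohya expects and return their paths."""
    base = Path(root)
    layout = {
        "model": base / "model",                       # base SDXL .safetensors goes here
        "img": base / "img" / f"{repeats}_{trigger}",  # images plus their .txt captions
        "log": base / "log",                           # training logs land here
    }
    for path in layout.values():
        path.mkdir(parents=True, exist_ok=True)
    return layout
```

Point kohya_ss at those three folders in the GUI and the rest of the steps above are just the venv activate and gui.sh launch.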
I don't share my LoRAs because studios are actively pursuing people who upload them; I believe Square Enix was one of them. It's completely out of control now, though. I like to make my own stuff; niche stuff is still very rare.
> I did install Kohya, or at least I think that's what it was, lol, but when it came to tagging my photos it kept getting an error, and I couldn't figure out what was causing it or find any solutions, so I gave up. I might give it another shot; maybe I installed it wrong, but I was following a guide step by step.

What was the error, do you remember? I use BLIP captioning; it will download data the first time you use it and then generate txt files. Once I have the txt files I go modify them; there are scripts to help with this.
> What was the error, do you remember? I use BLIP captioning; it will download data the first time you use it and then generate txt files. Once I have the txt files I go modify them; there are scripts to help with this.

I can't remember. Something about CUDA, so I thought it was a memory issue. I'll try it again tomorrow.
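The caption-fixup scripts mentioned above are usually just a few lines. A minimal sketch, assuming BLIP has already written one .txt caption per image and you want a trigger word in front of every caption ("urlora" here is a hypothetical trigger, echoing the folder name earlier in the thread):

```python
# Sketch: prepend a trigger word to every BLIP-generated .txt caption in a folder.
# "urlora" is a placeholder trigger word, not anything official.
from pathlib import Path

def prepend_trigger(caption_dir: str, trigger: str) -> int:
    """Add 'trigger, ' to the front of each caption that doesn't have it yet."""
    changed = 0
    for txt in sorted(Path(caption_dir).glob("*.txt")):
        caption = txt.read_text(encoding="utf-8").strip()
        if not caption.startswith(trigger):
            txt.write_text(f"{trigger}, {caption}", encoding="utf-8")
            changed += 1
    return changed
```

Run it once over the image folder before training; running it again is a no-op because of the startswith check.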
> What is a LoRA?

I'm sure there is a technical answer to this, and anyone can correct me if I'm wrong, but it's basically a file you create with whatever you are trying to train. It can be an object, a person, or even a style, and it lets you mix that with checkpoints. For example, if you want the style of the movie Coraline, which is my next project, you take a lot of pictures of the style you want, probably more images than you would for a person, and then create a LoRA with it. Then you can create whatever you want using that style LoRA. It's a little more involved than that, but that's the TL;DR, I guess.
> What is a LoRA?

Say you wanted to make silly images of yourself: you'd take a bunch of photos of yourself and train them into a LoRA, so instead of getting a boring generic image you could make the image look like yourself. Or say you had a favourite movie or TV show with a distinct art style: you could train a LoRA so that the output was in the same style as that show. I downloaded an Arcane (the League of Legends show) LoRA that made everything look like it was from that show. There are a bunch of Ghibli LoRAs too. It could also be that you want your backgrounds to be a certain scene.
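In more concrete terms, a LoRA file is just a set of small low-rank matrices that nudge a checkpoint's existing weights. A rough numpy sketch of the usual formulation (the shapes and the alpha/rank scaling are the standard LoRA convention, not anything specific to this thread):

```python
# Sketch: a LoRA update is W' = W + (alpha / r) * B @ A, where A and B
# are tiny compared to W. Merging it into a checkpoint is just this addition.
import numpy as np

def apply_lora(W: np.ndarray, A: np.ndarray, B: np.ndarray, alpha: float = 8.0) -> np.ndarray:
    """Merge a rank-r LoRA update (B @ A) into weight matrix W."""
    r = A.shape[0]  # LoRA rank, much smaller than W's dimensions
    return W + (alpha / r) * (B @ A)

# A 16x16 layer patched with a rank-4 update: the LoRA only stores the small
# A and B matrices, not a whole new copy of W.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16))
A = rng.standard_normal((4, 16))   # down-projection
B = rng.standard_normal((16, 4))   # up-projection
W_patched = apply_lora(W, A, B)
```

That's why a LoRA file is so small compared to a checkpoint, and why it has to be loaded alongside a base model rather than used on its own.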
Tried Kling AI by creating a video from one Stable Diffusion image. Pretty nice results for using one sentence.
My brother's into building and painting models.
Eg:
I ran these through Kling for a laugh:
> Why does this have to use weird shit like Python and web browser GUIs? Why can't someone just package it as a regular Windows exe with a basic interface?

Someone just did. Although, ironically, you'd need an AMD GPU (I think).
> Someone just did. Although, ironically, you'd need an AMD GPU (I think).

A CUDA card is not needed. Just grab this:

![]()

My friend told me that the censorship can be defeated by editing the models after the program starts.
Gotta try that. ![]()
> A CUDA card is not needed. Just grab this: […]

Brother, I needed this! I've got an AMD GPU and getting AI set up on it isn't easy.
> A CUDA card is not needed. Just grab this: […]

I wish something like this existed for Apple silicon, M1 chips… I want that.
> I wish something like this existed for Apple silicon, M1 chips… I want that.

Not a Mac user, and I don't know how user-friendly you want it, but I've heard good things about this app:
> Not a Mac user, and I don't know how user-friendly you want it, but I've heard good things about this app:

Thanks, yes, I have Diffusion Bee. But it's mostly used for text-to-image; T2V or I2V is harder to do on a Mac.
![]()
DiffusionBee - Stable Diffusion App for AI Art
DiffusionBee is the easiest way to generate AI art on your computer with Stable Diffusion. Completely free of charge. (diffusionbee.com)
Installation guide
![]()
GitHub - divamgupta/diffusionbee-stable-diffusion-ui: Diffusion Bee is the easiest way to run Stable Diffusion locally on your M1 Mac. Comes with a one-click installer. No dependencies or technical knowledge needed. (github.com)
> Thanks, yes, I have Diffusion Bee. But it's mostly used for text-to-image; T2V or I2V is harder to do on a Mac.

Ah OK, I understand now.