
Get Stable Diffusion running locally on your PC (RTX card needed): no restrictions and no censorship.

KyoZz

Tag, you're it.
I think this is the latest

I may be stupid, but I have no idea how to use this website and download the version you linked.
Looks like GitHub but worse :(

In fact, after checking the Readme, even just running the thing seems a bit tedious.

 

Wildebeest

Member
Is there any way to get the latest version of Stable Diffusion?
Stable Diffusion v1 still seems more popular than v2, especially since there are more models based on 1.x that give better results in specific styles from simpler, natural prompts. Stable Diffusion 2 is generally considered more arcane to write good prompts for.
 

BadBurger

Many “Whelps”! Handle It!
I'll download this once I get bored to make hardcore gangster art of Mister Rogers and Anderson Cooper or some shit.
 

Haemi

Member
I may be stupid, but I have no idea how to use this website and download the version you linked.
Looks like GitHub but worse :(

In fact, after checking the Readme, even just running the thing seems a bit tedious.
1. Install Automatic1111 (the stable-diffusion-webui).
2. Download "v2-1_768-ema-pruned.ckpt" or "v2-1_768-ema-pruned.safetensors" from Hugging Face and put it in your Automatic1111 installation directory under "\models\Stable-diffusion" (there is also a scripted way to do this, see the sketch after this list).
3. Run "webui-user.bat" and open "localhost:7860" in your browser.
4. Under "Stable Diffusion checkpoint", choose the "v2-1_768-ema-pruned" file you downloaded earlier.
5. Have fun!
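If you'd rather script step 2 than click around Hugging Face, here is a rough sketch. It assumes you have Python with the huggingface_hub package installed, and the webui folder below is just an example path, adjust it to your own install:

# Sketch: fetch the SD 2.1 checkpoint straight into the webui model folder.
# The local_dir path is an assumption; point it at your own install.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="stabilityai/stable-diffusion-2-1",
    filename="v2-1_768-ema-pruned.safetensors",
    local_dir=r"C:\stable-diffusion-webui\models\Stable-diffusion",
)

After that, webui-user.bat should list the file under "Stable Diffusion checkpoint" on the next start.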

On sites like Civitai you can download other models based on Stable Diffusion, as well as "mods" you can add on top of them, called "LoRAs", "hypernetworks" and "textual inversions", to add a specific look or type of content you want.
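If you ever script generations with the diffusers library instead of the web UI, the first two of those map onto calls roughly like this. Just a sketch: the .safetensors and .pt file names are placeholders, and hypernetworks aren't covered:

# Sketch: loading a downloaded LoRA and a textual inversion with diffusers.
# "my_style_lora.safetensors" and "my_embedding.pt" are placeholder file names.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("my_style_lora.safetensors")                     # adds a learned style/subject
pipe.load_textual_inversion("my_embedding.pt", token="<my_embedding>")  # adds a new prompt token

image = pipe("portrait photo, <my_embedding>, bokeh", num_inference_steps=30).images[0]
image.save("out.png")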

Last image I made:
[image attached]
 

Tumle

Member
1. Install Automatic1111 (the stable-diffusion-webui).
2. Download "v2-1_768-ema-pruned.ckpt" or "v2-1_768-ema-pruned.safetensors" from Hugging Face and put it in your Automatic1111 installation directory under "\models\Stable-diffusion".
3. Run "webui-user.bat" and open "localhost:7860" in your browser.
4. Under "Stable Diffusion checkpoint", choose the "v2-1_768-ema-pruned" file you downloaded earlier.
5. Have fun!

On sites like Civitai you can download other models based on Stable Diffusion, as well as "mods" you can add on top of them, called "LoRAs", "hypernetworks" and "textual inversions", to add a specific look or type of content you want.

Last image I made:
[image attached]
I'd actually just use the checkpoints from Civitai instead of trying to go through Hugging Face; they are also more stylised for anything you want to make :)
And even if you don't want to make NSFW stuff, use a checkpoint that has some of those NSFW checkpoints mixed into it; they are better at anatomy 😊
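For the web UI you just drop the Civitai .safetensors into \models\Stable-diffusion like in the steps above; if you script with a recent diffusers instead, a single downloaded checkpoint file can be loaded roughly like this (a sketch, with a made-up file name):

# Sketch: loading a single Civitai-style checkpoint file with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "some_civitai_model.safetensors",   # placeholder for whatever you downloaded
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("moody portrait, rim light, bokeh").images[0]
image.save("civitai_test.png")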
 

Haint

Member
Why does this have to use weird shit like Python and web browser GUIs? Why can't someone just package it as a regular Windows exe with a basic interface? Is there some technical reason behind it? Why is there no compatibility layer, like Wine or Proton, that can translate it into a Windows app?
 

Tumle

Member
Why does this have to use weird shit like Python and web browser GUIs? Why can't someone just package it as a regular Windows exe with a basic interface? Is there some technical reason behind it? Why is there no compatibility layer, like Wine or Proton, that can translate it into a Windows app?
I think it's because the AI libraries it uses are written in Python. Not sure if they could be ported to a Windows-native language; you could maybe build a front end without taking too many system resources away, but you would still need to install Python and the AI libraries it needs to get it to work. That could perhaps be bundled into an installer, though.
I think the Stable Diffusion app I originally posted works like that, but I haven't used it for a long time, so I'm not sure how up to date it is 😊
 

Sakura

Member
Why does this have to use weird shit like Python and web browser GUIs? Why can't someone just package it as a regular Windows exe with a basic interface? Is there some technical reason behind it? Why is there no compatibility layer, like Wine or Proton, that can translate it into a Windows app?
Python is the main language for AI stuff as far as I am aware, and the web UI side is done with gradio because it is just really easy to set up.
Building a single EXE to do all this would take a lot of extra work for something that is primarily being done by hobbyists. Things are also constantly changing, so the current format suits that better.
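To give an idea of why gradio keeps getting picked: a toy text-to-image UI is only a few lines. A sketch only; generate() here is a stand-in that returns a blank image rather than calling a real model:

# Sketch: the kind of minimal browser UI gradio makes easy.
import gradio as gr
from PIL import Image

def generate(prompt: str) -> Image.Image:
    # Stand-in: a real app would run a Stable Diffusion pipeline here.
    return Image.new("RGB", (512, 512), "black")

# launch() serves on localhost:7860 by default, the same port the webui uses.
gr.Interface(fn=generate, inputs="text", outputs="image").launch()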
 

Salz01

Member
I'm using InvokeAI, but I haven't for the life of me figured out how to get more samplers like DPM++ 2M Karras.
Also, some of the prompts on Civitai trigger NSFW content automatically. It's like some keywords trigger that shit right away. Drives me nuts.
 

Tumle

Member
I'm using InvokeAI, but I haven't for the life of me figured out how to get more samplers like DPM++ 2M Karras.
Also, some of the prompts on Civitai trigger NSFW content automatically. It's like some keywords trigger that shit right away. Drives me nuts.
Try using a negative prompt like "nude" 50 times :p
 

Haemi

Member
You can pretty much define the lens, focal length and aperture used and describe what kind of bokeh you want. Describing lens reflections like anamorphic lens flares also helps, as does a background out of focus with lots of light sources.
Thanks. Having the right model/checkpoint was also quite important. I had only one that made realistic images with this type of dark background and bokeh.
 

01011001

Banned
It's so fun to have the Bing sidebar right there with full access to DALL-E. If you're bored you can just flip it open and enter a quick prompt :D

[images attached]
 

Haemi

Member
I’m starting to think people should include the prompts when they post pics here… wink wink….

Checkpoint: Colorful
Sampling method: DPM++ 2M Karras
Sampling steps: 50
CFG: 15
Restore faces: checked

Positive prompts:

8k,hyperrealistic,photo-realistic,masterpiece,(portrait),(colorful),(model pose),
(neon),(bokeh:1.2),(black background),(rim light:1.2),(black silk dress:1.2),gorgeous,(smile:0.5),
30 years old woman,(josie loren,kate beckinsale:0.3),(dark hair),(asymmetric sidecut),(curly hair:0.1),(skin pores:1.2)

Negative prompts:

3d,cgi,artwork,overexposed,underexposed,desaturated,low contrast,blurry,mutilated,ugly,disfigured,naked,(hands:1.6),signature, wrong eyes

Clothing variations:
For center image: Replace "(black silk dress)" with "(black lingerie),(black leather jacket)"
For center right image: Replace "(black silk dress)" with "(black shirt)"

Face variations:
Replace the celebrity names and weight. For these images I used, for example: Jodie Comer, Jia Lissa, Natalie Portman, Stana Katic.

Play with the age, and try different words before "woman", like "european", "punk", etc.

Edit: Had to reconstruct it from memory. Fixed some things.
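For anyone who wants to reproduce roughly these settings outside the web UI, here is a diffusers sketch. Assumptions: a generic SD 1.5 base stands in for the "Colorful" checkpoint, the (word:1.2) attention weights and "Restore faces" are Automatic1111 features that aren't reproduced here, and the prompt is trimmed a little:

# Sketch: roughly the settings above, translated to a diffusers script.
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16   # stand-in for "Colorful"
).to("cuda")

# "DPM++ 2M Karras" corresponds to the multistep DPM-Solver with Karras sigmas.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    prompt=("8k, hyperrealistic, photo-realistic, masterpiece, portrait, colorful, "
            "neon, bokeh, black background, rim light, black silk dress, gorgeous, "
            "smile, 30 years old woman, dark hair, asymmetric sidecut, skin pores"),
    negative_prompt=("3d, cgi, artwork, overexposed, underexposed, desaturated, "
                     "low contrast, blurry, mutilated, ugly, disfigured, naked, hands"),
    num_inference_steps=50,   # Sampling steps
    guidance_scale=15,        # CFG
).images[0]
image.save("neon_portrait.png")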
 

Salz01

Member
Checkpoint: Colorful
Sampling method: DPM++ 2M Karras
Sampling steps: 50
CFG: 15
Restore faces: checked

Positive prompts:

8k,hyper realistic,photo realistic,masterpiece,(portrait),(colorful),
(neon bokeh:1.2),(black background),(rim light:1.2),(night),(black silk dress),(smile:0.5),gorgeous,
30 years old woman,(josie loren,kate beckinsale:0.3),(black hair),(sidecut),(curly hair:0.1),(skin pores:1.2)

Negative prompts:

3d,cgi,artwork,overexposed,underexposed,desaturated,low contrast,blurry,mutilated,ugly,disfigured,naked,(hands:1.6),signature, wrong eyes

Clothing variations:
For center image: Replace "(black silk dress)" with "(black lingerie),(black leather jacket)"
For center right image: Replace "(black silk dress)" with "(black shirt)"

Face variations:
Replace the celebrity names and weight. For these images I used, for example: Jodie Comer, Jia Lissa, Natalie Portman, Stana Katic.

Play with the age, and try different words before "woman", like "european", "punk", etc.
Nice! Thank you. Will try this later tonight.
 

jason10mm

Gold Member
8k,hyperrealistic,photo-realistic,masterpiece,(portrait),(colorful),
(neon),(bokeh:1.2),(black background),(rim light:1.2),(night),(black silk dress:1.2),gorgeous,(smile:0.5),
30 years old woman,(josie loren,kate beckinsale:0.3),(black hair),(sidecut),(curly hair:0.1),(skin pores:1.2)



Face variations:
Replace the celebrity names and weight. For these images I used, for example: Jodie Comer, Jia Lissa, Natalie Portman, Stana Katic.
Are these prompts recoverable from the image (i.e. embedded in it somehow)? I wonder if folks could sue for using their likeness as a base if so. I doubt it can be traced back from just the image, and even so, artists have probably been using photos without crediting the model for decades. Still, it would be an interesting way to get paid if you could get "credited" for an AI's work.
 

Haemi

Member
Nice! Thank you. Will try this later tonight.

Were you successful?

Are these prompts recoverable from the image (i.e. embedded in it somehow)? I wonder if folks could sue for using their likeness as a base if so. I doubt it can be traced back from just the image, and even so, artists have probably been using photos without crediting the model for decades. Still, it would be an interesting way to get paid if you could get "credited" for an AI's work.

No, they are not. And they would be easy to remove even if they were saved in the image.
But even if I didn't use the names in the prompts, would it make a difference? The model is trained on thousands of images of different people and celebrities, which it uses to create faces. Theoretically you would have to pay every one of them, plus all the artists who shot the photos and drew the artworks the model was trained on.
We will see what the courts decide on this topic in the future.
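Worth a footnote: depending on the front end and its settings, generation parameters can end up in the PNG's text metadata, so it's worth checking a file before sharing it. A quick way to look, with Pillow (a sketch; the file name is a placeholder):

# Sketch: print any text metadata (e.g. prompts) embedded in a PNG.
from PIL import Image

img = Image.open("some_generated_image.png")   # placeholder path
for key, value in img.info.items():            # PNG text chunks end up in .info
    if isinstance(value, str):
        print(f"{key}: {value[:200]}")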
 

Haemi

Member
I'd try using a couple of them together, to hold it all together... if that makes sense 😋
Is it even possible to use more than one? I can only load one image for the poses at a time in automatic1111.

Made all this in Midjourney with an account. Yeah, the censorship sucks, but for now it can't be beat. Hopefully one day soon home AI will reach this level of quality with this much ease.
I hope so too. Midjourney is far better than SD right now.
 

jason10mm

Gold Member
Damn, that AI stuff is good. I can easily see this becoming a workable tool that takes my stick-figure-level drawings and translates them into high-res, fully detailed images. If it can remember a consistent model for each character and allow for consistent backgrounds, it would eliminate a lot of comic book art requirements. If it can learn to imitate SPECIFIC art styles (which I think it can already do), it can easily bolster an artist's own work or produce cheap clones of someone else's.

This stuff is gonna totally be used for pron...no question.

I wonder if there are ways to embed small visual "tics" in your work (like a little pattern that is hard for humans to see but that an AI might unknowingly adopt) which would show up in an AI fed your stuff, letting you issue a cease and desist or at least demand a license fee. Just like writers slipping in specific unique phrases that an AI might imitate, making it obvious that the AI is "stealing" your stuff. Anti-AI measures, if you will.
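The "hard for humans to see" part already exists as invisible watermarking; whether a trained model would ever carry such a mark through into its outputs is a much bigger question. A sketch with the invisible-watermark (imwatermark) package, the same tool Stable Diffusion's reference scripts use to tag generated images (the file names here are placeholders):

# Sketch: embed and read back an invisible watermark in an image
# using the invisible-watermark (imwatermark) package and OpenCV.
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

bgr = cv2.imread("my_artwork.png")                 # placeholder input image

encoder = WatermarkEncoder()
encoder.set_watermark("bytes", b"mymark")          # 6-byte payload
marked = encoder.encode(bgr, "dwtDct")             # frequency-domain, invisible
cv2.imwrite("my_artwork_marked.png", marked)

decoder = WatermarkDecoder("bytes", 48)            # 48 bits = 6 bytes
print(decoder.decode(cv2.imread("my_artwork_marked.png"), "dwtDct"))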
 

Tumle

Member
Is it even possible to use more than one? I can only load one image for the poses at a time in automatic1111.


I hope so too. Midjourney is far better than SD right now.
Yeah, under Settings, in the ControlNet section, you can choose how many ControlNet units you want to be able to use at one time 😊
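The same idea exists outside the web UI, too: diffusers accepts a list of ControlNets with a matching list of conditioning images. A sketch; the pose/depth image files are placeholders:

# Sketch: stacking two ControlNets (pose + depth) in diffusers.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "two friends standing in a neon-lit alley",
    image=[Image.open("pose.png"), Image.open("depth.png")],   # placeholder control images
    controlnet_conditioning_scale=[1.0, 0.6],                  # per-net strength
).images[0]
image.save("multi_controlnet.png")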
 

Tumle

Member
You should try the new extension called roop; it can do a deepfake from just one source image and blend it into your generated image 😊

Here is my first try, making an image of the guys I play FPS shooters with 😋
[image attached]

Haven't done anything about the mistakes in the picture, like the mutilated hands and stuff 😊
 

Ironbunny

Member
You should try the new extension called roop; it can do a deepfake from just one source image and blend it into your generated image 😊

Here is my first try, making an image of the guys I play FPS shooters with 😋

Haven't done anything about the mistakes in the picture, like the mutilated hands and stuff 😊

Nice. Is it basically face swapping, or can you do a broader swap of whatever you want to change in the image?
 

Tumle

Member
Nice. Is it basically face swapping, or can you do a broader swap of whatever you want to change in the image?
Yeah, no, it's only face swapping for now 😊
But it's really good at it, angling the face in almost any direction from one source picture. Then again, if you want to do stylised pictures it can be a bit of a hassle, as it's only really good for realistic images.
But it's definitely quicker than training your own LoRA model of a person, and it's more consistent in quality than a LoRA model 😊
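Under the hood roop leans on the insightface models, and the core swap (outside the web UI) looks roughly like this. A sketch only; it assumes insightface and onnxruntime are installed, that you have a local copy of inswapper_128.onnx, and the image file names are placeholders:

# Sketch: one-image face swap with insightface, roughly what roop does internally.
import cv2
import insightface
from insightface.app import FaceAnalysis

analyzer = FaceAnalysis(name="buffalo_l")          # face detector/recognizer bundle
analyzer.prepare(ctx_id=0, det_size=(640, 640))
swapper = insightface.model_zoo.get_model("inswapper_128.onnx")  # assumes a local copy

source = cv2.imread("friend_photo.png")            # placeholder: the face to copy
target = cv2.imread("generated_image.png")         # placeholder: the SD output

source_face = analyzer.get(source)[0]
result = target.copy()
for face in analyzer.get(target):                  # swap every detected face
    result = swapper.get(result, face, source_face, paste_back=True)
cv2.imwrite("swapped.png", result)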
 

Ironbunny

Member
Yeah, no, it's only face swapping for now 😊
But it's really good at it, angling the face in almost any direction from one source picture. Then again, if you want to do stylised pictures it can be a bit of a hassle, as it's only really good for realistic images.
But it's definitely quicker than training your own LoRA model of a person, and it's more consistent in quality than a LoRA model 😊

Yeah, LoRAs really take some time to do, but I have found they give really great results. My dog died about a year ago and I only had a few good-quality photos of him. I decided to try SD by mixing the bad photos with the good ones, and the results have been really great. It turned out so well that it even got the dandruff right. :)

[images attached]
 