
# stable-diffusion-3-medium-GGUF

## Original Model

stabilityai/stable-diffusion-3-medium

## Run with sd-api-server

See the sd-api-server repository for instructions on serving this model.

## Quantized GGUF Models

| Name | Quant method | Bits | Size |
| ---- | ------------ | ---- | ---- |
| sd3-medium-Q4_0.gguf | Q4_0 | 4 | 4.55 GB |
| sd3-medium-Q4_1.gguf | Q4_1 | 4 | 5.04 GB |
| sd3-medium-Q5_0.gguf | Q5_0 | 5 | 5.53 GB |
| sd3-medium-Q5_1.gguf | Q5_1 | 5 | 6.03 GB |
| sd3-medium-Q8_0.gguf | Q8_0 | 8 | 8.45 GB |
| sd3-medium-f16.gguf | f16 | 16 | 15.8 GB |
| sd3-medium-f32.gguf | f32 | 32 | 31.5 GB |

Quantized with stable-diffusion.cpp master-697d000.
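As a rough sanity check on the table above, file size scales with the number of parameters times the effective bits per weight. The sketch below is an estimate only: the 7.88B parameter count comes from this repository's metadata, and the effective bits-per-weight figures for each quant type are approximations (block quants such as Q4_0 store per-block scale factors, so their effective bpw exceeds the nominal bit width).

```python
# Rough GGUF file-size estimate: params * effective bits-per-weight / 8.
PARAMS = 7.88e9  # parameter count reported for this repository

# Approximate effective bits per weight per quant type (assumption;
# block quants carry extra per-block scale data beyond the nominal bits).
BPW = {
    "Q4_0": 4.5,
    "Q4_1": 5.0,
    "Q5_0": 5.5,
    "Q5_1": 6.0,
    "Q8_0": 8.5,
    "f16": 16.0,
    "f32": 32.0,
}

def est_size_gb(quant: str, params: float = PARAMS) -> float:
    """Estimated file size in decimal GB (1 GB = 1e9 bytes)."""
    return params * BPW[quant] / 8 / 1e9

for quant in BPW:
    print(f"{quant}: ~{est_size_gb(quant):.2f} GB")
```

For the unquantized formats the estimate matches the table closely (f16 comes out at about 15.76 GB vs. the listed 15.8 GB); for the block quants it lands slightly under the listed sizes because the exact per-block overhead varies by type.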


