These are really cute, like the type of thing I would hang in my living room!
me too! if i had a living room
Is it possible to generate something of the appropriate size for a profile banner?
Of course, you can upscale smaller images using the Ultimate SD Upscale script. What size would you need for a profile banner though? Higher resolutions need more VRAM if the initial upscale preprocessor is used (it's optional, but it gives better quality).
Oh I didn’t necessarily mean high res, just the right dimensions. I’m not actually sure what that would be. I guess there would be some experimentation involved…
SD 1.5 is natively trained on 512x512, but you can do 512x768 (2:3) and 768x512 (3:2). I’ve tried 768x448 (12:7) before and it was fine. It’s recommended to keep one side at 512 pixels and the aspect ratio no more extreme than 2:1, or it’ll produce some weird artifacts. For the best results, make sure both dimensions are cleanly divisible by 64 (no remainder).
As for SDXL, it’s trained at a higher base resolution and fine-tuned on a more diverse set of resolutions, so it has some recommended resolutions of its own. Just make sure the total pixel count is as close to 1,048,576 pixels (1024x1024) as possible without exceeding it.
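If it helps, here’s a rough Python sketch of that arithmetic: pick a target aspect ratio, snap everything to multiples of 64, keep one side at 512 for SD 1.5, and stay at or under the 1,048,576-pixel budget for SDXL. The ratios in the example are just placeholders, not recommendations, and you’d still want to eyeball the output.

```python
# Rough sketch of picking generation dimensions for a target aspect ratio,
# following the rules above: multiples of 64, SD 1.5 keeps one side at 512
# (and shouldn't go past ~2:1), SDXL stays at or under 1024x1024 total pixels.

def snap64(x: float) -> int:
    """Round down to the nearest multiple of 64 (at least 64)."""
    return max(64, int(x) // 64 * 64)

def sd15_dims(aspect: float) -> tuple[int, int]:
    """One side fixed at 512, the other snapped to a multiple of 64."""
    if aspect >= 1.0:  # wider than tall
        return snap64(512 * aspect), 512
    return 512, snap64(512 / aspect)

def sdxl_dims(aspect: float, budget: int = 1024 * 1024) -> tuple[int, int]:
    """Largest width x height that fits the pixel budget, both multiples of 64."""
    height = snap64((budget / aspect) ** 0.5)
    width = snap64(budget / height)
    return width, height

if __name__ == "__main__":
    # Example ratios only -- square, 3:2, and 16:9.
    for ratio in (1.0, 3 / 2, 16 / 9):
        print(ratio, sd15_dims(ratio), sdxl_dims(ratio))
```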
You can always crop generated images down to the dimensions you want. Or use some inpainting (outpainting) tricks if you want to extend an image in one direction or the other.
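For the cropping route, something like this Pillow snippet would do a center crop. The file name and the 1500x500 target are just placeholders for whatever your banner actually needs, and it assumes the generated (or upscaled) image is already at least that big.

```python
# Center-crop a generated image to hypothetical banner dimensions with Pillow.
from PIL import Image

img = Image.open("generated.png")
target_w, target_h = 1500, 500  # placeholder banner size

# Offsets that center the crop box inside the source image.
left = (img.width - target_w) // 2
top = (img.height - target_h) // 2
banner = img.crop((left, top, left + target_w, top + target_h))
banner.save("banner.png")
```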
Generating at dimensions very different from what the model was trained on can sometimes give you weird results though, like duplicated copies of whatever your prompt specifies.