• SquishyPillow@burggit.moeOP · 1 year ago

          Of course, you can upscale smaller images using the Ultimate SD Upscale script. What size would you need for a profile banner though? Higher resolutions need more RAM if the initial upscale preprocessor is used (optional, but it gives better quality.)

          • Mousepad@burggit.moe · 1 year ago

            Oh I didn’t necessarily mean high res, just the right dimensions. I’m not actually sure what that would be. I guess there would be some experimentation involved…

            • SmolSlime@burggit.moeM · 1 year ago

              SD 1.5 is natively trained on 512x512, but you can do 512x768 (2:3) and 768x512 (3:2). I’ve tried 768x448 (12:7) before and it was fine. It’s recommended to keep one side at 512 pixels and the aspect ratio under 2:1, though, or it’ll produce some weird artifacts. For the best results, make sure both dimensions are cleanly divisible by 64 (no remainder).
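              The rules above (multiples of 64, one side at 512, ratio under 2:1) can be sketched as a small helper. This is just my illustration, not anything from the thread; the function name and rounding approach are my own.

```python
# Sketch: snap a requested size to SD 1.5-friendly dimensions, assuming the
# guidelines above: each side a multiple of 64, aspect ratio kept at or
# below 2:1. (Hypothetical helper, not part of any SD tool.)

def snap_sd15(width: int, height: int) -> tuple[int, int]:
    """Round each side to the nearest multiple of 64, clamping the ratio to 2:1."""
    def snap(x: int) -> int:
        return max(64, round(x / 64) * 64)

    w, h = snap(width), snap(height)
    # Clamp the long side so the aspect ratio stays at or below 2:1.
    if w / h > 2:
        w = snap(h * 2)
    elif h / w > 2:
        h = snap(w * 2)
    return w, h

print(snap_sd15(500, 770))  # -> (512, 768)
```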

              As for SDXL, it’s trained on a bigger base resolution and finetuned on a more diverse set of resolutions, so there are some recommended resolutions for it. Just make sure the total pixel count is as close to 1,048,576 pixels (1024x1024) as possible without exceeding it.
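              To make the pixel-budget rule concrete, here is a brute-force sketch that, for a target aspect ratio, finds the 64-multiple resolution closest to the 1024x1024 budget without exceeding it. The search and tolerance are my own assumptions, not an official SDXL bucket list.

```python
# Sketch: pick a 64-multiple resolution whose pixel count is as close to
# 1024x1024 (1,048,576 pixels) as possible without exceeding it, for a
# given aspect ratio. (Illustrative only; not an official SDXL algorithm.)

BUDGET = 1024 * 1024  # 1,048,576 pixels

def best_sdxl_size(aspect: float) -> tuple[int, int]:
    best, best_pixels = (1024, 1024), 0
    for w in range(64, 2049, 64):
        for h in range(64, 2049, 64):
            pixels = w * h
            # Keep the ratio near the target and the pixel count under budget.
            if pixels <= BUDGET and abs(w / h - aspect) < 0.01 and pixels > best_pixels:
                best, best_pixels = (w, h), pixels
    return best

print(best_sdxl_size(1.0))  # -> (1024, 1024)
print(best_sdxl_size(1.5))  # -> (1152, 768)
```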

            • SquishyPillow@burggit.moeOP · 1 year ago

              You can always crop generated images to get the dimensions you want, or use inpainting tricks if you want to extend an image sideways in one direction or the other.
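              The cropping idea amounts to simple box arithmetic. A minimal sketch (my own helper, no image library involved) that computes a centered crop box you could pass to any image editor or library:

```python
# Sketch: compute a centered crop box that trims a generated image down to
# target dimensions. Box math only; plug the result into your image tool of
# choice. (Hypothetical helper, not from the thread.)

def center_crop_box(src_w: int, src_h: int, dst_w: int, dst_h: int) -> tuple[int, int, int, int]:
    """Return (left, top, right, bottom) for a centered dst_w x dst_h crop."""
    left = (src_w - dst_w) // 2
    top = (src_h - dst_h) // 2
    return (left, top, left + dst_w, top + dst_h)

# e.g. crop a 512x768 generation down to a 512x170 banner strip:
print(center_crop_box(512, 768, 512, 170))  # -> (0, 299, 512, 469)
```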

            • SquishyPillow@burggit.moeOP · 1 year ago

              Setting the dimensions during initial generation to something other than those the model was trained on can sometimes give you weird results, though, like duplicates of what your prompt specifies.