
    Generating art with text-to-image models such as DALL-E has become popular, but one persistent challenge is maintaining a consistent visual style when generating a series of images from related prompts.

    To address this, Google Research has developed StyleAligned, a method for achieving consistent style across images using a pre-trained diffusion model without the need for fine-tuning.

    StyleAligned works through a shared attention mechanism: during the diffusion process, the image being generated attends to a user-provided reference image, which encourages style information to carry over from the reference into every generated image.
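
    The core idea can be illustrated with a minimal sketch of shared attention. This is not the paper's implementation; the function name and the simple single-head, unbatched setup are assumptions for illustration. The target image's queries attend over the concatenation of the reference image's keys/values and its own, so reference features can directly influence the generated features:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shared_attention(q_tgt, k_tgt, v_tgt, k_ref, v_ref):
    """Single-head attention where the target also attends to a reference.

    q_tgt: (n_tgt, d) queries from the image being generated.
    k_tgt, v_tgt: (n_tgt, d) keys/values from the image being generated.
    k_ref, v_ref: (n_ref, d) keys/values from the reference image.
    """
    # Concatenate reference and target keys/values so each target
    # token can draw on features from both images.
    k = np.concatenate([k_ref, k_tgt], axis=0)
    v = np.concatenate([v_ref, v_tgt], axis=0)
    scores = q_tgt @ k.T / np.sqrt(q_tgt.shape[-1])
    return softmax(scores) @ v
```

    In a real diffusion model this substitution would happen inside the self-attention layers of the denoising network at each diffusion step, which is what lets the method work without any fine-tuning of the pre-trained weights.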

    The researchers demonstrate the efficacy of the method across a range of artistic styles and text prompts, showing that StyleAligned can produce a series of images that maintain a consistent visual style without the need for fine-tuning or manual intervention.

    StyleAligned image generation can also be used in combination with other methods.