• Published on

    Ideogram has launched Ideogram 2.0, now freely available on ideogram.ai and their new iOS app, with premium features accessible via subscription plans.

    The beta Ideogram API is also released for developers. Ideogram 2.0, trained from scratch, excels in generating realistic images, graphic design, typography, and more, outperforming other text-to-image models in image-text alignment, subjective preference, and text rendering accuracy.

    The launch includes the Ideogram iOS app, Ideogram Search, and the Ideogram API.

    Users can choose from styles like Realistic, Design, 3D, and Anime, and control colour palettes.

  • Published on

    Stability AI has announced the release of Stable Diffusion 3 Medium, a new version of its popular AI model for generating images. SD3 Medium is a 2-billion-parameter model.

    This release includes significant performance enhancements achieved through collaborations with NVIDIA and AMD. NVIDIA’s TensorRT optimisation for RTX GPUs boosts performance by 50%, while AMD has optimised inference for various devices.

    Stability AI has introduced new licensing options, including the Creator License for commercial use and an Enterprise License for large-scale commercial applications.

    The model is available for download or via API.

  • Published on

    Ideogram has released Ideogram 1.0, its most advanced text-to-image model to date, with a feature called Magic Prompt that helps create detailed prompts for artistic images.

    The company believes generative media models will transform the creative economy, and it has raised $80m in Series A financing led by Andreessen Horowitz to accelerate its own growth in this field.

    Ideogram 1.0 offers state-of-the-art photorealism and prompt adherence, along with reliable text rendering for creating personalised messages and designs.

  • Published on

    The AI art generator Playground has released an upgraded version, Playground v2, which the developers say outperforms competitors in quality and creativity.

    Users can visit the website and compare Playground v2's output with that of other platforms, such as Stable Diffusion XL, across a series of test prompts.

    The company has also released the model's pre-trained weights, so that those with fewer computational resources can build on the work.

    Additionally, the company has devised a new benchmark for AI image generation, MJHQ-30K, which uses FID (Fréchet Inception Distance) scores to assess output quality.
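    FID compares the statistics of generated and reference images in a feature space, conventionally Inception-v3 activations, by fitting a Gaussian to each feature set and measuring the Fréchet distance between the two Gaussians. A minimal sketch of the underlying formula, computed here on small synthetic feature arrays rather than real Inception features:

    ```python
    import numpy as np
    from scipy import linalg

    def frechet_distance(feats_a, feats_b):
        """Fréchet distance between Gaussians fitted to two feature sets.

        In a real FID computation, feats_a / feats_b would be Inception-v3
        activations for reference vs. generated images; this only
        illustrates the formula itself.
        """
        mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
        cov_a = np.cov(feats_a, rowvar=False)
        cov_b = np.cov(feats_b, rowvar=False)
        # Matrix square root of the covariance product.
        covmean = linalg.sqrtm(cov_a @ cov_b)
        if np.iscomplexobj(covmean):
            covmean = covmean.real  # discard tiny numerical imaginary parts
        diff = mu_a - mu_b
        return diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean)

    rng = np.random.default_rng(0)
    same = rng.normal(size=(500, 8))
    shifted = rng.normal(loc=1.0, size=(500, 8))
    print(round(frechet_distance(same, same), 4))   # identical sets → 0.0
    print(frechet_distance(same, shifted) > 1.0)    # shifted set → True
    ```

    Lower FID means the generated distribution is statistically closer to the reference set; a model's score depends heavily on which reference set is used, which is why Playground publishing MJHQ-30K as a fixed benchmark matters.
    
    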

    The release is part of Playground’s mission to make AI more accessible, and the company has said it would like to hear from people who use the new tools.

  • Published on

    Generating art with text-to-image models such as DALL-E has become popular, but one challenge is keeping a consistent style when generating a series of images.

    To address this, Google Research has developed StyleAligned, a method for achieving consistent style across images using a pre-trained diffusion model without the need for fine-tuning.

    StyleAligned operates by encouraging information retention and style consistency through a shared attention mechanism, in which an image being generated attends to a user-provided reference image during the diffusion process.

    The researchers demonstrate the method across a range of artistic styles and text prompts, showing that StyleAligned can produce a series of images that maintain a consistent visual style without manual intervention.

    Style-aligned image generation can also be used in combination with other methods.
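    The shared-attention idea can be illustrated with plain attention arithmetic: the target image's queries attend over keys and values concatenated from both the target and the reference, so reference features are mixed into the target's attention output at every diffusion step. A toy numpy sketch (not Google's implementation; shapes and names are illustrative):

    ```python
    import numpy as np

    def softmax(x, axis=-1):
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def shared_attention(q_tgt, k_tgt, v_tgt, k_ref, v_ref):
        """Target queries attend over target AND reference keys/values.

        q_tgt, k_tgt, v_tgt: (n_tgt, d) features of the image being generated.
        k_ref, v_ref:        (n_ref, d) features of the reference image.
        Concatenating the reference into the key/value set is the core of
        the shared-attention trick: style information from the reference
        leaks into the target through the attention weights.
        """
        k = np.concatenate([k_tgt, k_ref], axis=0)
        v = np.concatenate([v_tgt, v_ref], axis=0)
        d = q_tgt.shape[-1]
        attn = softmax(q_tgt @ k.T / np.sqrt(d))  # (n_tgt, n_tgt + n_ref)
        return attn @ v                           # (n_tgt, d)

    rng = np.random.default_rng(1)
    q = rng.normal(size=(4, 16))
    k_t, v_t = rng.normal(size=(4, 16)), rng.normal(size=(4, 16))
    k_r, v_r = rng.normal(size=(6, 16)), rng.normal(size=(6, 16))
    out = shared_attention(q, k_t, v_t, k_r, v_r)
    print(out.shape)  # (4, 16): same shape as ordinary self-attention output
    ```

    Because only the attention inputs change, the mechanism slots into a pre-trained diffusion model without retraining, which is why no fine-tuning is required.
    
    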

  • Published on

    Stability AI has launched SDXL Turbo, a text-to-image model that uses a method called Adversarial Diffusion Distillation (ADD) to create images in one step, generating near-real-time results.

    In tests, SDXL Turbo outperformed other state-of-the-art models in terms of both image quality and the number of steps required to generate an image, beating a 50-step model with just four steps.

    It also offers faster generation times; specifically, it can generate a 512×512 image in 207ms.

    However, the company has made it clear that SDXL Turbo is not yet available for commercial use.

    The release of SDXL Turbo marks an important step forward for text-to-image generation models, promising both speed and high fidelity.