Character animation, the task of creating the illusion of movement in otherwise static images, is an important and challenging problem in computer graphics.
In a paper released on arXiv, a group of researchers from the Alibaba Group detail a new method for training a character animation model built on a diffusion model, a type of generative neural network. Their method, which they call Animate Anyone, preserves the intricate appearance details of a reference image, using spatial attention to merge detail features from that image into the generated frames.
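The detail-merging step described above can be illustrated with a toy sketch. This is not the authors' implementation: it is a minimal NumPy version of the general idea, assuming that queries come from the frame being generated while keys and values are the frame features concatenated with the reference-image features along the spatial axis, so every output location can borrow appearance detail from the reference. All names and shapes here are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention_merge(frame_feat, ref_feat):
    """Toy sketch of spatial attention over reference features.

    frame_feat, ref_feat: (hw, c) feature maps flattened over space.
    Queries come from the frame; keys/values are the frame and
    reference features concatenated spatially, so each location can
    attend to appearance detail in the reference image.
    """
    q = frame_feat                                          # (hw, c)
    kv = np.concatenate([frame_feat, ref_feat], axis=0)     # (2*hw, c)
    scores = q @ kv.T / np.sqrt(q.shape[-1])                # (hw, 2*hw)
    weights = softmax(scores, axis=-1)
    return weights @ kv                                     # (hw, c)

hw, c = 16, 8
rng = np.random.default_rng(0)
out = spatial_attention_merge(rng.normal(size=(hw, c)),
                              rng.normal(size=(hw, c)))
print(out.shape)  # → (16, 8)
```

In the real model this happens inside the network's attention layers at every resolution, with learned query/key/value projections rather than raw features; the sketch only shows why concatenating along the spatial axis lets generated pixels attend to the reference image.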
It also uses pose guidance to direct the character's movements and temporal modelling to ensure smooth transitions between frames in the resulting animation.
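The temporal modelling mentioned above is commonly realized as attention across the frame axis, so each spatial location is smoothed against its counterparts in neighbouring frames. The sketch below is an illustrative NumPy version of that general pattern under assumed shapes, not the paper's implementation.

```python
import numpy as np

def temporal_attention(frames_feat):
    """Toy sketch of temporal attention for frame-to-frame smoothness.

    frames_feat: (t, hw, c) — t frames of flattened feature maps.
    Each spatial location attends across time, mixing information
    between frames so transitions stay consistent.
    """
    t, hw, c = frames_feat.shape
    x = frames_feat.transpose(1, 0, 2)               # (hw, t, c)
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(c)   # (hw, t, t)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = e / e.sum(axis=-1, keepdims=True)            # softmax over time
    out = w @ x                                      # (hw, t, c)
    return out.transpose(1, 0, 2)                    # (t, hw, c)

t, hw, c = 4, 16, 8
rng = np.random.default_rng(1)
smoothed = temporal_attention(rng.normal(size=(t, hw, c)))
print(smoothed.shape)  # → (4, 16, 8)
```

Because each output frame is a convex combination of all frames at that location, abrupt per-frame differences are averaged out, which is the intuition behind using temporal layers for smooth animation.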
In testing, the researchers used datasets of fashion photos and human dance videos to demonstrate that their method outperformed existing approaches and could generate realistic animations from images of a wide range of characters.