Wan Animate

Unified character animation and replacement. Provide one character image and a reference video. Wan Animate reproduces expressions, body motion, and scene lighting to create coherent character videos or to replace a person in the scene.

What Wan Animate does

A single design supports two tasks: animate a character from a performance, or replace the person in a video with your character while keeping motion and lighting consistent with the scene.

Character animation

Provide a character image and a reference video. The system transfers the performer's facial expressions and full-body motion to the character while preserving the performance's timing and structure.

Character replacement

Place the animated character into the original video. Lighting and color tone are matched so the character fits the scene.

Consistent appearance

The character’s features remain stable across frames while expressions and motion vary with the performance.

Motion replication

Spatially aligned skeleton signals guide body motion, keeping timing stable from frame to frame.
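To make "spatially aligned" concrete, the toy sketch below rasterizes 2D keypoints into a pose map that shares the video frame's pixel grid. The real conditioning in Wan Animate is learned; the joint layout and drawing scheme here are purely illustrative.

```python
import numpy as np

# Toy sketch: rasterize 2D keypoints into a per-frame pose map that is
# spatially aligned with the video frame. Wan Animate's actual skeleton
# conditioning is learned; this only illustrates the alignment idea.

def rasterize_skeleton(keypoints, bones, height, width):
    """keypoints: (J, 2) array of (x, y) pixel coords; bones: joint index pairs."""
    canvas = np.zeros((height, width), dtype=np.float32)
    for a, b in bones:
        # Draw each bone as interpolated points between its two joints.
        for t in np.linspace(0.0, 1.0, num=64):
            x, y = (1 - t) * keypoints[a] + t * keypoints[b]
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < width and 0 <= yi < height:
                canvas[yi, xi] = 1.0
    return canvas

# Example: a three-joint "arm" drawn onto a 64x64 conditioning map.
pts = np.array([[10.0, 10.0], [30.0, 32.0], [50.0, 20.0]])
pose_map = rasterize_skeleton(pts, bones=[(0, 1), (1, 2)], height=64, width=64)
print(pose_map.sum())  # number of lit pixels along the two bones
```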

Facial reenactment

Implicit facial features extracted from the face in the reference video drive expression reenactment with fine detail.
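As a rough illustration of an implicit facial feature, the sketch below crops the face region from each reference frame and projects it to a compact per-frame code. The crop box, encoder, and feature dimension are all stand-ins for the model's learned components, not the real pipeline.

```python
import numpy as np

# Hypothetical sketch of a facial driving signal: crop the face from each
# reference-video frame and encode it into a compact implicit vector.
# A fixed random projection stands in for the learned encoder.

def crop_face(frame, box):
    """frame: (H, W, 3) uint8; box: (x0, y0, x1, y1) from any face detector."""
    x0, y0, x1, y1 = box
    return frame[y0:y1, x0:x1]

def encode_face(face_crop, dim=128):
    """Stand-in 'implicit feature': flatten, project with a fixed random matrix."""
    rng = np.random.default_rng(0)   # fixed seed so the projection is stable
    flat = face_crop.astype(np.float32).reshape(-1) / 255.0
    proj = rng.standard_normal((dim, flat.size)).astype(np.float32)
    return proj @ flat               # (dim,) per-frame expression code

frames = [np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8) for _ in range(4)]
codes = [encode_face(crop_face(f, (64, 64, 192, 192))) for f in frames]
print(len(codes), codes[0].shape)    # one 128-d code per reference frame
```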

Relighting module

An auxiliary relighting adapter applies the scene's lighting and color tone without letting the character's identity drift.
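The adapter itself is a learned module, but classic mean/std color transfer (in the spirit of Reinhard et al.) gives an intuition for what "matching color tone" means: shift the character's per-channel statistics toward the scene's. The sketch below is that toy baseline, not the actual adapter.

```python
import numpy as np

# Intuition only: per-channel mean/std transfer moves the character's color
# statistics toward the scene's. The real relighting adapter is learned.

def match_color_tone(character, scene):
    """Both inputs: float32 RGB arrays in [0, 1]. Returns a tone-matched copy."""
    out = character.copy()
    for c in range(3):
        src_mu, src_sigma = character[..., c].mean(), character[..., c].std() + 1e-6
        ref_mu, ref_sigma = scene[..., c].mean(), scene[..., c].std() + 1e-6
        out[..., c] = (character[..., c] - src_mu) * (ref_sigma / src_sigma) + ref_mu
    return np.clip(out, 0.0, 1.0)

char = np.random.rand(128, 128, 3).astype(np.float32) * 0.5            # dim character
scene = np.random.rand(128, 128, 3).astype(np.float32) * 0.3 + 0.6     # bright scene
print(match_color_tone(char, scene).mean(axis=(0, 1)))  # near the scene's channel means
```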

How it works

  1. Prepare inputs: one clear character image and one reference video showing the target performance. Front-facing or three-quarter views work best.
  2. Choose a task: animate the character or replace the person in the video. For replacement, mark the region to render if needed.
  3. The system extracts body skeleton signals from the reference video and maps them to the character. A face module reads expression features from the face in the reference frames.
  4. During replacement, the relighting adapter matches scene lighting and color tone while preserving the character’s look.
  5. Export the result as a video. Review motion, expression, and lighting. If needed, adjust inputs and run again (see the pipeline sketch after this list).
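The five steps map onto a small driver like the sketch below. Every function name is hypothetical and the stubs stand in for the model; only the control flow mirrors the steps above.

```python
from dataclasses import dataclass

# Hypothetical driver mirroring the five steps; none of these names are the
# real Wan Animate API. Trivial stubs stand in for the actual components.

@dataclass
class Inputs:
    image: str
    video: str

def prepare_inputs(image_path, video_path):          # step 1
    return Inputs(image=image_path, video=video_path)

def extract_skeleton(video):                         # step 3: body motion signal
    return f"skeleton({video})"

def extract_face_features(video):                    # step 3: expression signal
    return f"faces({video})"

def generate(image, skeleton, faces, task, mask):    # core generation (stub)
    return [f"frame[{task}]({image},{skeleton},{faces},mask={mask})"]

def apply_relighting(frames, video):                 # step 4: replacement only
    return [f + "+relit" for f in frames]

def run_job(image_path, video_path, task="animate", mask=None):
    inputs = prepare_inputs(image_path, video_path)
    assert task in ("animate", "replace")            # step 2: choose a task
    skeleton = extract_skeleton(inputs.video)
    faces = extract_face_features(inputs.video)
    frames = generate(inputs.image, skeleton, faces, task, mask)
    if task == "replace":
        frames = apply_relighting(frames, inputs.video)
    return frames                                    # step 5: export / review

print(run_job("character.png", "reference.mp4", task="replace", mask="person"))
```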

A unified input design separates the reference conditions from the regions to be rendered, so one model serves both tasks.

Demo

Interact with the hosted demo to see character animation and replacement in action.

Install Wan Animate locally

Follow a step-by-step guide to set up the model, prepare inputs, and run animation or replacement jobs on your machine.

The guide covers prerequisites, environment setup, downloading weights, and running the first example.

What you need

  • A recent GPU with enough memory for inference
  • A Python environment with the required packages
  • One character image and one reference video (a preflight sketch follows this list)
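A short preflight script can verify these requirements before a run. The 24 GB VRAM threshold below is an assumption for illustration, not an official figure; adjust it to the guide's actual numbers for your model variant.

```python
import os

# Preflight sketch: confirm a CUDA GPU, its memory, and the two input files.
# The min_vram_gb default is an assumption, not a documented requirement.

def preflight(character_image, reference_video, min_vram_gb=24):
    try:
        import torch
    except ImportError:
        return "PyTorch is not installed"
    if not torch.cuda.is_available():
        return "No CUDA GPU detected"
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if vram_gb < min_vram_gb:
        return f"GPU has {vram_gb:.0f} GB VRAM; {min_vram_gb} GB assumed minimum"
    for path in (character_image, reference_video):
        if not os.path.isfile(path):
            return f"Missing input file: {path}"
    return "OK"

print(preflight("character.png", "reference.mp4"))
```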

Where Wan Animate helps

Entertainment

Produce character clips for short videos or prototypes. Keep expression, motion, and lighting consistent across shots.

Previsualization

Test staging and timing by animating stand-in characters from performance recordings.

Education

Demonstrate motion mapping, face reenactment, and relighting concepts in a single project.

Research

Study controllability and identity preservation with a unified interface for two related tasks.

Advertising

Create character-driven variations while matching scene tone. Keep brand characters consistent across edits.

Content tools

Build simple pipelines around a single image and a reference video to generate drafts quickly.

Technical overview

Unified input design

The input format separates reference conditions from the target regions to be rendered. This single design covers both animation and replacement without extra tooling.
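Conceptually, the two tasks share one conditioning structure and differ only in the mask of regions to render: the full frame for animation, the person region for replacement. The sketch below illustrates that idea with hypothetical names, not the model's actual tensor layout.

```python
import numpy as np

# Conceptual sketch of the unified input: the same structure serves both
# tasks, differing only in which pixels the model is asked to render.

def build_condition(character_image, frame_shape, task, person_mask=None):
    h, w = frame_shape
    if task == "animate":
        render_mask = np.ones((h, w), dtype=np.float32)   # synthesize everything
    else:  # "replace": keep the scene, render only the person region
        render_mask = person_mask.astype(np.float32)
    return {"reference": character_image, "render_mask": render_mask}

ref = np.zeros((256, 256, 3), dtype=np.float32)   # placeholder character image
person = np.zeros((256, 256), dtype=bool)
person[64:224, 96:160] = True                     # rough person region

anim = build_condition(ref, (256, 256), "animate")
repl = build_condition(ref, (256, 256), "replace", person_mask=person)
print(anim["render_mask"].mean(), repl["render_mask"].mean())  # 1.0 vs region fraction
```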

Motion and face signals

Body motion is guided by spatially aligned skeleton signals. Facial reenactment uses implicit features taken from the face frames of the reference video, while identity cues from the character image keep the appearance stable as expressions change.

Relighting adapter

For replacement, an auxiliary adapter applies the scene’s lighting and color tone. Appearance stays consistent while the character fits the scene.

Quality and control

Outputs are controllable through skeleton strength, crop choices, and source image selection. Small changes in inputs help refine timing and expression.
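Those knobs could be gathered into a single configuration object, as in the hypothetical sketch below; the field names are illustrative, so consult the release documentation for the real parameters.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical config gathering the control knobs named above; every field
# name is illustrative, not part of the real Wan Animate interface.

@dataclass
class AnimateConfig:
    character_image: str                       # source image selection
    reference_video: str
    task: str = "animate"                      # or "replace"
    skeleton_strength: float = 1.0             # how strongly pose guidance is applied
    crop: Optional[Tuple[int, int, int, int]] = None  # (x0, y0, x1, y1) on the reference
    seed: int = 0                              # rerun with small changes to refine timing

cfg = AnimateConfig("character.png", "reference.mp4", skeleton_strength=0.8)
print(cfg)
```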

FAQs

What inputs are required?

At minimum, one high-quality character image and one reference video. A clear face and a stable pose in the image help the system reproduce expressions.

Can it replace a person in my video?

Yes. Choose the replacement task and provide the same inputs. The relighting adapter matches scene lighting and tone while keeping the character’s look.

How do I get stable motion?

Use a steady reference with clear body movement. Skeleton guidance maintains timing, and trimming shaky segments can improve results.
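Trimming can be done without re-encoding using ffmpeg, assumed here to be installed and on PATH; the small wrapper below cuts a time window out of the reference video.

```python
import subprocess

# Cut a shaky lead-in from the reference video with ffmpeg (assumed on PATH).
# "-c copy" avoids re-encoding; cuts then land on keyframes, which is fine
# for removing a rough opening or closing segment.

def trim(src, dst, start="00:00:02", end="00:00:12"):
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-ss", start, "-to", end, "-c", "copy", dst],
        check=True,
    )

trim("reference.mp4", "reference_trimmed.mp4")
```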

Does it keep identity?

Yes. The model reads identity cues from the character image and maintains them across frames while expressions and motion follow the reference.