Install Wan Animate

Set up the environment, download model weights, and run your first example.

Requirements

  • CUDA-compatible GPU with enough memory for the model variant you plan to run
  • Python 3.10 or later
  • ffmpeg installed and available in PATH
  • Disk space for model weights and outputs

Environment setup

  1. Create and activate a virtual environment.
  2. Install dependencies as listed in the project’s requirements file.
  3. Verify that your GPU is visible to the framework (a quick check is sketched below).
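
The following is a minimal sketch of that verification, assuming the dependencies include PyTorch (the usual framework for pipelines like this); adjust it if your install uses a different framework. It also re-checks the Python and ffmpeg requirements listed above.

    import shutil
    import sys

    import torch

    # Prerequisites from the requirements list: Python 3.10+ and ffmpeg on PATH.
    assert sys.version_info >= (3, 10), "Python 3.10 or later is required"
    assert shutil.which("ffmpeg") is not None, "ffmpeg was not found on PATH"

    # Step 3: the GPU must be visible to the framework.
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))
        free, total = torch.cuda.mem_get_info()
        print(f"Free VRAM: {free / 1e9:.1f} GB of {total / 1e9:.1f} GB")
    else:
        print("No CUDA device visible; inference will not run on the GPU.")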

Download model weights

Place weights in a dedicated directory. Configure an environment variable that points to this directory so the runner can find them.
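
For example, assuming the weights live under ~/models/wan-animate and the runner reads a variable named WAN_ANIMATE_CKPT_DIR (a hypothetical name for illustration; use whatever name the project documents):

    import os
    from pathlib import Path

    # Hypothetical variable name; substitute the one the runner actually expects.
    ckpt_dir = Path.home() / "models" / "wan-animate"
    ckpt_dir.mkdir(parents=True, exist_ok=True)
    os.environ["WAN_ANIMATE_CKPT_DIR"] = str(ckpt_dir)

    print("Weights directory:", os.environ["WAN_ANIMATE_CKPT_DIR"])

Exporting the same variable from your shell profile keeps it available across sessions instead of setting it per process.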

Prepare inputs

  • Character image: front-facing or three-quarter, clear face, stable pose.
  • Reference video: steady motion, good lighting, sufficient duration for the target action (a quick pre-flight check follows this list).
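
A small pre-flight check for both inputs, assuming ffprobe (bundled with ffmpeg) is on PATH; the file names and minimum duration below are placeholders:

    import shutil
    import subprocess
    from pathlib import Path

    character_image = Path("character.png")   # placeholder paths
    reference_video = Path("reference.mp4")
    min_duration_s = 3.0                      # placeholder minimum length

    assert character_image.is_file(), f"missing {character_image}"
    assert reference_video.is_file(), f"missing {reference_video}"
    assert shutil.which("ffprobe"), "ffprobe (part of ffmpeg) not found on PATH"

    # Read the video duration with ffprobe to confirm it covers the target action.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-show_entries", "format=duration",
         "-of", "default=noprint_wrappers=1:nokey=1", str(reference_video)],
        capture_output=True, text=True, check=True,
    )
    duration = float(out.stdout.strip())
    print(f"Reference video is {duration:.1f}s long")
    assert duration >= min_duration_s, "reference clip is too short for the target action"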

Run your first job

  1. Select the task: animation or replacement.
  2. Set input paths for the character image and reference video.
  3. Choose output folder and quality settings.
  4. Start inference and monitor progress (a launch sketch follows this list).
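
The sketch below shows what launching a job can look like. The script name and every flag are assumptions made for illustration, not the project's confirmed CLI, so substitute the entry point and options your checkout actually provides.

    import subprocess

    # All names below are placeholders; check the project's own usage docs.
    cmd = [
        "python", "generate.py",          # assumed entry point
        "--task", "animation",            # or "replacement"
        "--image", "character.png",       # character image path
        "--video", "reference.mp4",       # reference video path
        "--output-dir", "outputs/first_run",
    ]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)       # streams the runner's progress to the terminal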

Quality tips

  • Use stable, well-lit reference videos to improve motion clarity.
  • Crop the character image to keep the face and torso centered.
  • Trim shaky segments of the reference to reduce jitter (see the ffmpeg example below).
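
For the trimming tip, one option is to cut a steady segment out of the reference with ffmpeg before running inference; the start and end times here are placeholders:

    import subprocess

    # Keep only the steady segment from 2.0s to 9.5s (placeholder times);
    # re-encoding rather than stream-copying keeps the cut frame-accurate.
    subprocess.run(
        ["ffmpeg", "-y", "-i", "reference.mp4",
         "-ss", "2.0", "-to", "9.5",
         "-c:v", "libx264", "-crf", "18", "-an",
         "trimmed_reference.mp4"],
        check=True,
    )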

Troubleshooting

  • If motion is unstable, try a shorter reference clip with clearer poses.
  • If identity drifts, use a sharper source image and reduce heavy compression.
  • If lighting looks off in replacement, check that the scene has consistent tone across frames.