Install Wan Animate
Set up the environment, download model weights, and run your first example.
Requirements
- CUDA-compatible GPU with sufficient memory for inference
- Python 3.10 or later
- ffmpeg installed and available in PATH
- Free disk space for model weights and outputs (a quick preflight check is sketched after this list)
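A minimal preflight sketch in Python, assuming only the standard library; the 50 GB free-space threshold is an illustrative placeholder, not an official figure from the project.

```python
import shutil
import sys

# Check the Python version (3.10 or later).
assert sys.version_info >= (3, 10), f"Python 3.10+ required, found {sys.version}"

# Check that ffmpeg is resolvable on PATH.
assert shutil.which("ffmpeg") is not None, "ffmpeg not found on PATH"

# Report free disk space where weights and outputs will live.
# The 50 GB threshold below is a placeholder, not an official requirement.
free_gb = shutil.disk_usage(".").free / 1e9
print(f"Free disk space: {free_gb:.1f} GB")
if free_gb < 50:
    print("Warning: you may not have enough room for model weights and outputs.")
```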
Environment setup
- Create and activate a virtual environment.
- Install the dependencies listed in the project’s requirements file.
- Verify that your GPU is visible to the framework (see the snippet after this list).
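A quick visibility check, assuming the project runs on PyTorch (this guide does not state the framework explicitly):

```python
import torch

# Confirm that at least one CUDA device is visible to PyTorch.
if not torch.cuda.is_available():
    raise SystemExit("No CUDA device visible; check your driver and CUDA install.")

device = torch.cuda.current_device()
props = torch.cuda.get_device_properties(device)
print(f"GPU: {props.name}, {props.total_memory / 1e9:.1f} GB VRAM")
```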
Download model weights
Place the weights in a dedicated directory and set an environment variable pointing at it so the runner can find them; a download sketch follows.
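A download sketch using `huggingface_hub`; the repository id, the local path, and the `WAN_ANIMATE_CKPT_DIR` variable name are assumptions here, so substitute whatever the project’s README specifies.

```python
import os
from huggingface_hub import snapshot_download

# Assumed repo id and directory layout; check the project's README for the
# official source of the Wan Animate weights before downloading.
ckpt_dir = os.path.expanduser("~/models/wan-animate")
snapshot_download(repo_id="Wan-AI/Wan2.2-Animate-14B", local_dir=ckpt_dir)

# Hypothetical variable name: point it at the weights directory so the runner
# can locate them. Setting os.environ only affects this process; add the
# variable to your shell profile for persistence.
os.environ["WAN_ANIMATE_CKPT_DIR"] = ckpt_dir
print(f"Weights stored in {ckpt_dir}")
```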
Prepare inputs
- Character image: front-facing or three-quarter, clear face, stable pose.
- Reference video: steady motion, good lighting, long enough to cover the full target action (a quick validation sketch follows this list).
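A light input-validation sketch using Pillow and OpenCV; the file names are placeholders, and no minimum resolution or duration is implied beyond what the project documents.

```python
import cv2
from PIL import Image

def check_inputs(image_path: str, video_path: str) -> None:
    # Character image: confirm it loads and report its resolution.
    img = Image.open(image_path)
    print(f"Character image: {img.width}x{img.height}")

    # Reference video: report duration and fps so you can confirm it
    # actually covers the target action.
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = cap.get(cv2.CAP_PROP_FRAME_COUNT)
    cap.release()
    duration = frames / fps if fps else 0.0
    print(f"Reference video: {duration:.1f}s at {fps:.1f} fps")

check_inputs("character.png", "reference.mp4")
```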
Run your first job
- Select the task: animation or replacement.
- Set input paths for the character image and reference video.
- Choose output folder and quality settings.
- Start inference and monitor progress (a launch sketch follows this list).
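A launch sketch via `subprocess`; the script name `generate.py` and every flag below are placeholders for whatever interface the project actually ships, so consult its README for the real command line.

```python
import subprocess

# Placeholder command: the script name and flags are illustrative only;
# replace them with the project's documented CLI.
cmd = [
    "python", "generate.py",
    "--task", "animation",              # or "replacement"
    "--character_image", "character.png",
    "--reference_video", "reference.mp4",
    "--output_dir", "outputs/first_run",
]
# Stream progress to the console and fail loudly on a non-zero exit code.
subprocess.run(cmd, check=True)
```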
Quality tips
- Use stable, well-lit reference videos to improve motion clarity.
- Crop the character image to keep the face and torso centered.
- Trim shaky segments out of the reference to reduce jitter (see the trimming sketch after this list).
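One way to trim a shaky lead-in from the reference clip, calling ffmpeg from Python; the 2.5 s start offset and file names are example values.

```python
import subprocess

# Drop the first 2.5 seconds (example value) and re-mux without re-encoding.
# Stream copy is fast and lossless, but cuts land on the nearest keyframe.
subprocess.run(
    [
        "ffmpeg", "-y",
        "-ss", "2.5",              # start offset to skip the shaky segment
        "-i", "reference.mp4",
        "-c", "copy",
        "reference_trimmed.mp4",
    ],
    check=True,
)
```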
Troubleshooting
- If motion is unstable, try a shorter reference clip with clearer poses.
- If identity drifts, use a sharper source image and reduce heavy compression.
- If lighting looks off in replacement mode, check that the scene keeps a consistent tone across frames (a quick luminance check follows).
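A rough tone-consistency check: sample mean luminance per frame with OpenCV and flag large swings. The sampling step and the variance threshold are arbitrary starting points, not values from the project.

```python
import cv2
import numpy as np

def luminance_profile(video_path: str, step: int = 10) -> np.ndarray:
    """Mean grayscale intensity of every `step`-th frame."""
    cap = cv2.VideoCapture(video_path)
    means, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            means.append(gray.mean())
        idx += 1
    cap.release()
    return np.array(means)

profile = luminance_profile("reference.mp4")
# A large spread suggests inconsistent lighting; the threshold is arbitrary.
if profile.size and profile.std() > 20:
    print(f"Luminance varies a lot (std={profile.std():.1f}); consider a steadier clip.")
```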