CTRL-Adapter: An Efficient and Versatile Framework
for Adapting Diverse Controls to Any Diffusion Model

Han Lin*, Jaemin Cho*, Abhay Zala, Mohit Bansal
University of North Carolina, Chapel Hill
*: equal contribution

Abstract

ControlNets are widely used for adding spatial control to image generation with different conditions, such as depth maps, canny edges, and human poses. However, there are several challenges when leveraging pretrained image ControlNets for controlled video generation. First, a pretrained ControlNet cannot be directly plugged into a new backbone model due to the mismatch of feature spaces, and the cost of training ControlNets for new backbones is a large burden for many users. Second, ControlNet features computed for different frames might not effectively handle the temporal consistency of objects.

To address these challenges, we introduce Ctrl-Adapter, an efficient and versatile framework that adds diverse controls to any image/video diffusion model by adapting pretrained ControlNets (and improving temporal alignment for videos). Ctrl-Adapter provides strong and diverse capabilities, including image control, video control, video control with sparse frames, multi-condition control, compatibility with different backbone models, adaptation to unseen control conditions, and video editing. In the Ctrl-Adapter framework, we train adapter layers that fuse pretrained ControlNet features into different image/video diffusion models, while keeping the parameters of the ControlNets and the diffusion models frozen. Ctrl-Adapter consists of temporal as well as spatial modules, so it can effectively handle the temporal consistency of videos. Additionally, for robust adaptation to different backbone models and sparse controls, we propose latent skipping and inverse timestep sampling. Moreover, Ctrl-Adapter enables control from multiple conditions by simply taking the (weighted) average of ControlNet outputs.

From our experiments with diverse image and video diffusion backbones (SDXL, Hotshot-XL, I2VGen-XL, and SVD), Ctrl-Adapter matches ControlNet on the COCO dataset for image control, and even outperforms all baselines for video control (achieving state-of-the-art accuracy on the DAVIS 2017 dataset) with significantly lower computational costs (Ctrl-Adapter outperforms baselines in less than 10 GPU hours). Lastly, we provide comprehensive ablations of our design choices and qualitative examples.

Method

Ctrl-Adapter Main Method Figure

Efficient Adaptation of Pretrained ControlNets. As shown in the left figure, we train an adapter module (colored orange) to map the middle/output blocks of a pretrained ControlNet (colored blue) to the corresponding middle/output blocks of the target video diffusion model (colored green). We keep all parameters in both the ControlNet and the target video diffusion model frozen. Therefore, training a Ctrl-Adapter can be significantly more efficient than training a new video ControlNet.
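To make this setup concrete, here is a minimal PyTorch sketch of the training loop. The blocks and adapters below (TinyBlock, 1x1-conv adapters, the toy loss) are hypothetical stand-ins for illustration, not the official implementation:

```python
import torch
import torch.nn as nn

class TinyBlock(nn.Module):
    """Hypothetical stand-in for one ControlNet / diffusion UNet block."""
    def __init__(self, dim):
        super().__init__()
        self.conv = nn.Conv2d(dim, dim, 3, padding=1)

    def forward(self, x):
        return self.conv(x)

controlnet_blocks = nn.ModuleList([TinyBlock(64) for _ in range(4)])  # pretrained, frozen
unet_blocks = nn.ModuleList([TinyBlock(64) for _ in range(4)])        # pretrained, frozen
adapters = nn.ModuleList([nn.Conv2d(64, 64, 1) for _ in range(4)])    # the only trainable part

# Freeze the pretrained ControlNet and the target diffusion model.
for module in (controlnet_blocks, unet_blocks):
    for p in module.parameters():
        p.requires_grad_(False)

optimizer = torch.optim.AdamW(adapters.parameters(), lr=1e-4)

latent = torch.randn(2, 64, 32, 32)   # noisy latent (toy shape)
control = torch.randn(2, 64, 32, 32)  # encoded control condition (toy shape)

h, c = latent, control
for unet_blk, ctrl_blk, adapter in zip(unet_blocks, controlnet_blocks, adapters):
    c = ctrl_blk(c)
    h = unet_blk(h) + adapter(c)  # add adapted ControlNet features to the UNet path

loss = h.pow(2).mean()  # placeholder for the actual diffusion training loss
loss.backward()
optimizer.step()
```

Because gradients flow only through the small adapter modules, training requires far less compute and memory than training a full video ControlNet from scratch.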

Ctrl-Adapter architecture. As shown in the right figure, each block of Ctrl-Adapter consists of four modules: spatial convolution, temporal convolution, spatial attention, and temporal attention. The temporal convolution and attention modules effectively fuse the ControlNet features for better temporal consistency. When adapting to image diffusion models, Ctrl-Adapter blocks consist only of spatial convolution/attention modules (without the temporal convolution/attention modules).
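Below is a rough PyTorch sketch of one such block. The module order follows the description above, but the shapes, residual connections, and hyperparameters are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class CtrlAdapterBlock(nn.Module):
    """Illustrative sketch: spatial conv -> temporal conv ->
    spatial attention -> temporal attention, each with a residual add."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.spatial_conv = nn.Conv2d(dim, dim, 3, padding=1)
        self.temporal_conv = nn.Conv1d(dim, dim, 3, padding=1)
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):  # x: (batch, frames, channels, height, width)
        b, f, c, h, w = x.shape
        # Spatial convolution over each frame independently.
        x = self.spatial_conv(x.reshape(b * f, c, h, w)).reshape(b, f, c, h, w)
        # Temporal convolution along the frame axis at each spatial location.
        t = x.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, f)
        t = self.temporal_conv(t).reshape(b, h, w, c, f).permute(0, 4, 3, 1, 2)
        x = x + t
        # Spatial self-attention within each frame.
        s = x.reshape(b * f, c, h * w).transpose(1, 2)  # (B*F, HW, C)
        s, _ = self.spatial_attn(s, s, s)
        x = x + s.transpose(1, 2).reshape(b, f, c, h, w)
        # Temporal self-attention across frames at each spatial location.
        t = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, f, c)  # (B*HW, F, C)
        t, _ = self.temporal_attn(t, t, t)
        x = x + t.reshape(b, h, w, f, c).permute(0, 3, 4, 1, 2)
        return x

block = CtrlAdapterBlock(dim=64)
out = block(torch.randn(1, 8, 64, 16, 16))  # 8 frames of 64-channel 16x16 features
```

For image backbones, the temporal convolution/attention branches would simply be dropped, leaving a purely spatial block.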

Generated Examples

Video Generation with Condition Control



"A fish swimming"
+
Control Generated Video
"A man riding a motorcycle jumps off a mountain"
+
Control Generated Video
"Close-up of a majestic white dragon
with pearlescent, silver-edged scales,
icy blue eyes, elegantivory horns, and
misty breath. Focus on detailed facial
features and textured scales, set
against a softly blurred background"
+
Control Generated Video





"A car flies over a hill"
+
Control Generated Video
"A white and orange tabby cat
is seen happily darting through a
dense garden, as if chasing something.
Its eyes are wide and happy
as it jogs forward, scanning the
branches, flowers, and leaves as it
walks. The path is narrow as
it makes its way between all
the plants."
+
Control Generated Video








"A bird flying over a forest."
+
Control Generated Video
"A miniature Christmas village with
snow-covered houses, glowing
windows, decorated trees, festive
snowmen, and tiny figurines in a
quaint, holiday-themed diorama
evoking a cozy, celebratory winter atmosphere"
+
Control Generated Video



"A woman wearing blue jeans and a
white t-shirt taking a pleasant stroll in
Mumbai India during a beautiful sunset"
+
Control Generated Video


Video Generation with Multiple Control Conditions


"A small child and an adult standing
in shallow ocean waters along the beach"
+
Controls Generated Video





"A man dancing"
+
Controls Generated Video




"A woman wearing purple overalls and cowboy boots
taking a pleasant stroll in Johannesburg South Africa
during a beautiful sunset"
+
Controls Generated Video




"A skateboarder mid-trick, airborne above
a bench, wears a casual outfit and a beanie,
displaying focus and athletic skill"
+
Controls Generated Video


Video Editing via Combining Image and Video Ctrl-Adapters


Each example follows a three-stage pipeline: (1) control condition extraction from a source video, (2) first-frame generation with SDXL + Ctrl-Adapter, and (3) video generation with I2VGen-XL + Ctrl-Adapter. Example prompts:

- "A camel with rainbow fur walking."
- "A zebra-striped camel walking."
- "A camel walking, ink sketch style."
- "A camel walking, Van Gogh style."

Video Generation with Sparse Frames as Control Condition


"Fly through tour of a museum with many paintings
and sculptures and beautiful works of art in all styles"
+
Sparse Inputs
(Condition is given for 4 out of 16 frames)
Generated Video
 ... 


"Reflections in the window of a train traveling through the Tokyo suburbs."
+
Sparse Inputs
(Condition is given for 4 out of 16 frames)
Generated Video
 ... 


Zero-Shot Generalization on Unseen Conditions


"An old man wearing purple overalls and
cowboy boots taking a pleasant stroll in
Mumbai India during a beautiful sunset"
+
Condition Controls Generated Video

Training: Depth Map

Inference: Normal Map





"An extreme close-up of an gray-haired man with a beard in his 60s, he is deep in thought
pondering the history of the universe. He sits at a cafe in Paris, his eyes focus on people offscreen. As they walk, he sits mostly motionless, he is dressed in a wool coat suit coat.
With a button-down shirt, he wears a brown beret and glasses."
+
Condition Controls Generated Video

Training: Depth Map

Inference: Line art





"This close-up shot of a chameleon showcases its striking color changing capabilities.
The background is blurred, drawing attention to the animal's striking appearance.
The chameleon's vibrant colors and unique texture are the focus of this shot."
+
Condition Controls Generated Video

Training: Depth Map

Inference: Softedge



Image Generation with Condition Control


Each example pairs a text prompt with a control image and the generated image:

- "Cute fluffy corgi dog in the city in anime style"
- "happy Hulk standing in a beautiful field of flowers, colorful flowers everywhere, perfect lighting, leica summicron 35mm f2.0, Kodak Portra 400, film grain"
- "Astronaut walking on water"
- "a cute mouse pilot wearing aviator goggles, unreal engine render, 8k"
- "Cute lady frog in dress and crown dressed in gown in cinematic environment"
- "A cute sheep with rainbow fur, photo"
- "Cute and super adorable mouse in black and red chef coat and chef hat, holding a steaming entree"
- "a cute, happy hedgehog taking a bite from a piece of watermelon, eyes closed, cute ink sketch style illustration"

Comparison to other methods

Teaser Figure

Overview of the capabilities supported by recent methods for controlled image/video generation. Ctrl-Adapter supports diverse capabilities including image control, video control, video control with sparse frames, multi-condition control, compatibility with different backbone models, adaptation to unseen control conditions, and video editing, while previous methods support only some of them.



Skipping the latent from ControlNet inputs: robust adaptation to different noise scales & sparse frame conditions.

Latent Skipping Figure

Although the original ControlNets take the latent as part of their inputs, we find that skipping the latent from ControlNet inputs is effective for Ctrl-Adapter when (1) adapting to backbone diffusion models with different noise scales and (2) generating videos with sparse frame conditions (i.e., conditions are provided for only a subset of video frames).
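As a toy illustration, the sketch below shows the idea on a hypothetical ControlNet input stem (ToyControlNetStem is our own stand-in, not the actual architecture). With skip_latent=True, the ControlNet features depend only on the control condition, so they are unaffected by the backbone's noise scale and remain meaningful for frames whose latents carry no control signal:

```python
import torch
import torch.nn as nn

class ToyControlNetStem(nn.Module):
    """Hypothetical input stem illustrating latent skipping."""
    def __init__(self, dim=64):
        super().__init__()
        self.cond_encoder = nn.Conv2d(3, dim, 3, padding=1)  # encodes the control image
        self.latent_proj = nn.Conv2d(4, dim, 3, padding=1)   # projects the noisy latent

    def forward(self, latent, condition, skip_latent: bool):
        h = self.cond_encoder(condition)
        if not skip_latent:
            # Original ControlNet behavior: the noisy latent is part of the input.
            h = h + self.latent_proj(latent)
        # With skip_latent=True, the features are a function of the condition only.
        return h

stem = ToyControlNetStem()
latent = torch.randn(1, 4, 32, 32)     # noisy latent (toy shape)
condition = torch.randn(1, 3, 32, 32)  # control image (toy shape)
feat = stem(latent, condition, skip_latent=True)
```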



Video Generation from Multiple Conditions

Figure - Video Generation from Multiple Conditions

For more effective spatial control beyond a single condition, Ctrl-Adapter can easily combine the control features of multiple ControlNets. We first average the ControlNet output features using learnable scalar weights in each Ctrl-Adapter block, then feed the fused features into a single, shared Ctrl-Adapter. This weighted-average approach can be understood as a lightweight mixture-of-experts (MoE).
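A minimal sketch of this fusion step is given below. The softmax normalization of the learnable scalar weights is our assumption (to keep the average a convex combination), not a detail stated above:

```python
import torch
import torch.nn as nn

class MultiConditionFusion(nn.Module):
    """Weighted average of ControlNet features from several conditions,
    with one learnable scalar weight per condition for this adapter block."""
    def __init__(self, num_conditions):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_conditions))

    def forward(self, features):  # features: (num_conditions, B, C, H, W)
        w = torch.softmax(self.logits, dim=0)  # normalize weights to sum to 1
        return torch.einsum("n,nbchw->bchw", w, features)

fusion = MultiConditionFusion(num_conditions=3)
# e.g., features from depth, canny-edge, and pose ControlNets (toy shapes)
feats = torch.stack([torch.randn(1, 64, 16, 16) for _ in range(3)])
fused = fusion(feats)  # fed as input to the shared Ctrl-Adapter block
```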



Evaluation on Video Control and Image Control with Single Condition

Tables - Video/Image Generation from Single Condition

Left: evaluation of video control with a single condition on the DAVIS 2017 dataset. Right: evaluation of image control with a single condition on the COCO dataset. Ctrl-Adapter matches the performance of a pretrained image ControlNet and outperforms previous methods in controllable video generation (achieving state-of-the-art performance on DAVIS 2017) with significantly lower computational costs, surpassing the baselines in less than 10 GPU hours.

Evaluation on Video Control with Multiple Conditions

Table - Video Generation from Multiple Conditions

More conditions improve spatial control in video generation. The proposed learnable linear weights outperform the equal-weights approach as the number of conditions increases. Control sources are abbreviated as D (depth map), C (canny edge), N (surface normal), S (softedge), Seg (semantic segmentation map), L (line art), and P (human pose).



Training Efficiency of Ctrl-Adapter

Training Efficiency Figures (left: video, right: image)


Training speed of Ctrl-Adapter for video (left) and image (right) control with a depth map condition. Training GPU hours are measured on A100 80GB GPUs. For both video and image control, Ctrl-Adapter outperforms strong baselines in less than 10 GPU hours.

Ablation Study - Skipping Latents from ControlNet Inputs

Latent Skipping Ablation Figure


We find that skipping the latents from ControlNet inputs helps Ctrl-Adapter with (1) adaptation to backbone models with different noise scales and (2) video control with sparse frame conditions.

Limitations

Our framework is primarily intended for research purposes and should therefore be used with caution in real-world applications.
Note that the performance, quality, and visual artifacts of Ctrl-Adapter largely depend on the capabilities (e.g., motion styles and video length) of the underlying open-source image/video diffusion backbones.

BibTeX

@article{Lin2024CtrlAdapter,
	author  = {Han Lin and Jaemin Cho and Abhay Zala and Mohit Bansal},
	title   = {Ctrl-Adapter: An Efficient and Versatile Framework for Adapting Diverse Controls to Any Diffusion Model},
	journal = {arXiv preprint arXiv:2404.09967},
	year    = {2024},
}