InstantDrag for Fast and Light Drag Editing (arXiv 2024)
Drag-based image editing has recently gained popularity for its interactivity and precision.
However, despite the ability of text-to-image models to generate samples within a second, drag editing still lags behind due to the challenge of accurately reflecting user interaction while maintaining image content.
Some existing approaches rely on computationally intensive per-image optimization or intricate guidance-based methods, requiring additional inputs such as masks for movable regions and text prompts, thereby compromising the interactivity of the editing process.
We introduce InstantDrag, an optimization-free pipeline that enhances interactivity and speed, requiring only an image and a drag instruction as input.
InstantDrag consists of two carefully designed networks: a drag-conditioned optical flow generator (FlowGen) and an optical flow-conditioned diffusion model (FlowDiffusion). By decomposing the task into motion generation and motion-conditioned image generation, InstantDrag learns motion dynamics for drag-based image editing from real-world video datasets.
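For concreteness, below is a minimal PyTorch sketch of this two-stage decomposition. The class definitions, layer choices, channel counts, and the rasterized drag encoding are hypothetical illustrations of the interface, not the authors' released architecture.

```python
# Sketch of the two-stage pipeline: (1) motion generation from a drag
# instruction, (2) motion-conditioned image generation. All names and
# shapes here are assumed for illustration only.
import torch
import torch.nn as nn


class FlowGen(nn.Module):
    """Hypothetical drag-conditioned optical flow generator: maps an image
    plus a sparse drag instruction (rasterized to a 2-channel map) to a
    dense (dx, dy) flow field."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 2, channels, 3, padding=1),  # image + drag map
            nn.ReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),      # dense flow output
        )

    def forward(self, image: torch.Tensor, drag: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([image, drag], dim=1))


class FlowDiffusion(nn.Module):
    """Hypothetical flow-conditioned denoiser: predicts the denoised output
    for a noisy input, conditioned on the source image and the dense flow."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 3 + 2, channels, 3, padding=1),  # noisy + image + flow
            nn.ReLU(),
            nn.Conv2d(channels, 3, 3, padding=1),
        )

    def forward(self, noisy, image, flow):
        return self.net(torch.cat([noisy, image, flow], dim=1))


# One feed-forward editing pass (a single denoising step shown for brevity):
image = torch.randn(1, 3, 64, 64)    # source image
drag = torch.zeros(1, 2, 64, 64)     # sparse drag vectors rasterized to a map
flow = FlowGen()(image, drag)                       # stage 1: motion generation
noisy = torch.randn_like(image)
edited = FlowDiffusion()(noisy, image, flow)        # stage 2: conditioned generation
```

Because both stages are single feed-forward passes rather than per-image optimization loops, this decomposition is what allows the method to avoid test-time optimization and auxiliary inputs such as masks or text prompts.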
We demonstrate InstantDrag's capability to perform fast, photo-realistic edits without masks or text prompts through experiments on facial video datasets and general scenes.
These results highlight the efficiency of our approach in handling drag-based image editing, making it a promising solution for interactive, real-time applications.
Project page: https://joonghyuk.com/instantdrag-web/