Article reprinted from: Machine Heart
It's getting easier and easier to get a good-looking photo.
Taking photos is a must when traveling on holiday, yet most photos taken at scenic spots are more or less disappointing: something important is missing from the frame, or something unwanted is in the background.
Image source: Generated by Unbounded AI
Obtaining a "perfect" image has long been a goal of computer vision researchers. Recently, researchers from Google Research and Cornell University collaborated to propose RealFill, a generative model for "authentic image completion".
The advantage of RealFill is that it can be personalized with just a few reference images of a scene; these references do not need to be aligned with the target image and can differ greatly in viewpoint, lighting conditions, camera aperture, or image style. Once personalized, RealFill completes the target image with visually compelling content that stays faithful to the original scene.
Paper link: https://arxiv.org/abs/2309.16668
Project page: https://realfill.github.io/
Inpainting and outpainting models can generate high-quality, plausible content for the unknown regions of an image, but the content they generate is necessarily inauthentic because these models lack any knowledge of the real scene. RealFill, in contrast, generates the content that "should" appear there, making the completion more realistic.
In the paper, the authors define a new image completion problem: "Authentic Image Completion". Unlike traditional generative inpainting (where the content of the missing region may be inconsistent with the original scene), the goal of authentic image completion is to keep the completed content as faithful to the original scene as possible, filling the target image with what "should have been there" rather than what "could plausibly be there".
The authors state that RealFill is the first method to extend the expressive power of generative image inpainting models by adding more conditions to the process: namely, reference images.
On a new image completion benchmark covering a range of diverse and challenging scenes, RealFill significantly outperforms existing methods.
Method
The goal of RealFill is to fill in the missing parts of a given target image using a small number of reference images while staying as faithful to the original scene as possible. Specifically, the model is given up to five reference images and a target image that captures roughly the same scene (but possibly with a different layout or appearance).
For a given scene, the researchers first create a personalized generative model by fine-tuning a pre-trained inpainting diffusion model on the reference and target images. This fine-tuning is designed so that the adapted model not only retains a good image prior but also learns the scene's content, lighting, and style from the input images. The missing regions of the target image are then filled in with this fine-tuned model through a standard diffusion sampling process.
It is worth noting that, for practical value, the method targets the more challenging, unconstrained setting in which the target and reference images may differ greatly in viewpoint, environmental conditions, camera aperture, and image style, and may even contain moving objects.
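To make this two-stage recipe concrete, here is a minimal sketch built on the Hugging Face diffusers library, using Stable Diffusion Inpainting as the pre-trained base model. The model ID, the fixed prompt, the random-masking strategy, the step count, the learning rate, and the file names are all illustrative assumptions rather than the authors' exact configuration.

```python
# Hedged sketch of the two stages described above: (1) fine-tune a pre-trained
# inpainting diffusion model on the reference + target images of one scene,
# (2) fill the target image's missing region with standard diffusion sampling.
# All hyperparameters, file names, and the prompt are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.transforms as T
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline, DDPMScheduler

device = "cuda"
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float32
).to(device)
vae, unet = pipe.vae, pipe.unet
tokenizer, text_encoder = pipe.tokenizer, pipe.text_encoder
noise_scheduler = DDPMScheduler.from_config(pipe.scheduler.config)

# Up to five reference images plus the target image of the same scene
# (hypothetical file names), resized to 512x512 and scaled to [-1, 1].
to_tensor = T.Compose([T.Resize((512, 512)), T.ToTensor(), T.Normalize([0.5], [0.5])])
scene_paths = ["ref1.jpg", "ref2.jpg", "ref3.jpg", "target.jpg"]
scene_images = [to_tensor(Image.open(p).convert("RGB")).unsqueeze(0) for p in scene_paths]

prompt_ids = tokenizer(
    "a photo of the scene", padding="max_length",
    max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt",
).input_ids.to(device)

# Stage 1: personalize the model so it learns this scene's content, lighting, and style.
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)
for step in range(2000):
    image = scene_images[step % len(scene_images)].to(device)
    # Hide random patches so the model learns to reconstruct scene content.
    mask = (torch.rand(1, 1, 8, 8, device=device) > 0.75).float()
    mask = F.interpolate(mask, size=(512, 512), mode="nearest")

    with torch.no_grad():
        latents = vae.encode(image).latent_dist.sample() * vae.config.scaling_factor
        masked_latents = vae.encode(image * (1 - mask)).latent_dist.sample() * vae.config.scaling_factor
        text_emb = text_encoder(prompt_ids)[0]
    latent_mask = F.interpolate(mask, size=(64, 64), mode="nearest")

    noise = torch.randn_like(latents)
    t = torch.randint(0, noise_scheduler.config.num_train_timesteps, (1,), device=device)
    noisy_latents = noise_scheduler.add_noise(latents, noise, t)

    # The SD inpainting UNet takes [noisy latents | mask | masked-image latents] (9 channels).
    model_input = torch.cat([noisy_latents, latent_mask, masked_latents], dim=1)
    noise_pred = unet(model_input, t, encoder_hidden_states=text_emb).sample

    loss = F.mse_loss(noise_pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Stage 2: complete the target image with the personalized model.
target = Image.open("target.jpg").convert("RGB").resize((512, 512))
target_mask = Image.open("target_mask.png").convert("L").resize((512, 512))  # white = unknown
result = pipe(prompt="a photo of the scene", image=target, mask_image=target_mask).images[0]
result.save("completed_target.png")
```

In practice, a parameter-efficient adapter such as LoRA would keep the memory footprint of this fine-tuning step manageable; that choice is an implementation detail not spelled out in this article.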
Experimental Results
Given a reference image on the left, RealFill can uncrop or inpaint a target image on the right, producing a result that is not only visually appealing but also consistent with the reference image, even when the reference and target images have large differences in viewpoint, aperture, lighting, image style, and object motion.
Output of the RealFill model. Given the reference images on the left, RealFill expands the corresponding target image on the right. The area inside the white box is provided to the network as known pixels, while the area outside the white box is generated. The results show that RealFill produces high-quality images faithful to the references even when the reference and target images differ greatly in viewpoint, aperture, lighting, image style, and object motion. Source: Paper
Controlled Experiments
The researchers compared RealFill against other baseline methods and found that it produces higher-quality results, with better scene fidelity and greater consistency with the reference images.
Paint-by-Example cannot achieve high scene fidelity because it relies on CLIP embeddings, which can only capture high-level semantic information.
Although Stable Diffusion Inpainting can produce seemingly reasonable results, the limited expressive power of a text prompt means the generated content ends up inconsistent with the reference images.
Comparison of RealFill with two other baseline methods. The areas covered by a transparent white mask are the unmodified parts of the target image. Source: realfill.github.io
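For context, this prompt-only baseline can be reproduced in a few lines. In the hedged sketch below (model ID, prompt, and file names are illustrative assumptions), the only scene-specific signal the model receives is a text description, which is why its completions drift from the reference images.

```python
# Prompt-only Stable Diffusion Inpainting baseline: a text description is the sole
# scene conditioning, so details visible only in the reference images cannot be recovered.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

target = Image.open("target.jpg").convert("RGB").resize((512, 512))
mask = Image.open("target_mask.png").convert("L").resize((512, 512))  # white = region to fill

result = pipe(
    prompt="a stone temple on a hillside at sunset",  # best-effort textual description only
    image=target,
    mask_image=mask,
).images[0]
result.save("sd_inpainting_baseline.png")
```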
Limitations
The researchers also discussed some potential issues and limitations of RealFill, including its processing speed, its ability to handle large viewpoint changes, and its ability to handle cases that are challenging for the base model. Specifically:
RealFill requires a gradient-based fine-tuning process on the input images, which makes it relatively slow.
When the viewpoint change between the reference and target images is very large, RealFill often fails to recover the 3D scene, especially when there is only one reference image.
Since RealFill mainly relies on the image prior inherited from the pre-trained base model, it cannot handle cases that are challenging for the base model itself; for example, Stable Diffusion struggles to render text.
Finally, the authors expressed their gratitude to their collaborators:
We would like to thank Rundi Wu, Qianqian Wang, Viraj Shah, Ethan Weber, Zhengqi Li, Kyle Genova, Boyang Deng, Maya Goldenberg, Noah Snavely, Ben Poole, Ben Mildenhall, Alex Rav-Acha, Pratul Srinivasan, Dor Verbin, and Jon Barron for valuable discussions and feedback, and Zeya Peng, Rundi Wu, and Shan Nan for their contributions to the evaluation datasets. We especially thank Jason Baldridge, Kihyuk Sohn, Kathy Meier-Hellstern, and Nicole Brichtova for their feedback and support on the project.