image matting
DRIP: Unleashing Diffusion Priors for Joint Foreground and Alpha Prediction in Image Matting
Recovering the foreground color and opacity (alpha matte) from a single image, i.e., image matting, is a challenging and ill-posed problem in which data priors play a critical role in achieving precise results. Traditional methods generally predict the alpha matte and then extract the foreground through post-processing, often failing to produce high-fidelity foreground color. This failure stems from the difficulty of learning robust color predictions from limited matting datasets. To address this, we explore the potential of leveraging the vision priors embedded in pre-trained latent diffusion models (LDMs) for estimating foreground RGBA values in challenging scenarios and for rare objects. We introduce Drip, a novel approach for image matting that harnesses the rich prior knowledge of LDMs.
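The joint foreground/alpha prediction described above is governed by the standard compositing equation I = αF + (1 − α)B, which is what makes matting ill-posed (many F, B, α triples explain the same image). A minimal NumPy sketch of the forward equation, with illustrative names not taken from the paper:

```python
import numpy as np

def composite(fg, alpha, bg):
    """Standard matting equation: I = alpha * F + (1 - alpha) * B.

    fg, bg: float arrays of shape (H, W, 3) in [0, 1]
    alpha:  float array of shape (H, W) in [0, 1]
    """
    a = alpha[..., None]          # broadcast alpha over the RGB channels
    return a * fg + (1.0 - a) * bg

# Toy example: a half-opaque white foreground over a black background.
fg = np.ones((2, 2, 3))           # pure white foreground
bg = np.zeros((2, 2, 3))          # pure black background
alpha = np.full((2, 2), 0.5)
img = composite(fg, alpha, bg)    # every pixel becomes mid-gray (0.5)
```

With 7 unknowns (F, B, α) per pixel but only 3 observations (I), priors such as those Drip borrows from diffusion models are what disambiguate the solution.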
Long-Range Feature Propagating for Natural Image Matting
Qinglin Liu, Haozhe Xie, Shengping Zhang, Bineng Zhong, Rongrong Ji
Natural image matting estimates the alpha values of the unknown regions in a trimap. Recent deep learning-based methods propagate alpha values from the known regions to the unknown regions according to the similarity between them. However, we find that more than 50% of the pixels in the unknown regions cannot be correlated with pixels in the known regions, owing to the small effective receptive fields of common convolutional neural networks; this leads to inaccurate estimation when pixels in the unknown regions cannot be inferred from pixels within the receptive fields alone. To solve this problem, we propose the Long-Range Feature Propagating Network (LFPNet), which learns long-range context features outside the receptive fields for alpha matte estimation. Specifically, we first design a propagating module that extracts context features from the downsampled image. Then, we present Center-Surround Pyramid Pooling (CSPP), which explicitly propagates context features from the surrounding context image patch to the inner center image patch. Finally, we use a matting module that takes the image, the trimap, and the context features to estimate the alpha matte. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art methods on the AlphaMatting and Adobe Image Matting datasets.
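The core CSPP idea, pooling features of a large surrounding patch at several scales and attaching them to the inner center patch, can be sketched as follows. This is an illustrative NumPy sketch of the concept, not the authors' implementation; the function name, scales, and tensor layout are assumptions:

```python
import numpy as np

def center_surround_pyramid_pool(context_feat, center_size, scales=(1, 2, 4)):
    """Illustrative sketch of the CSPP concept: average-pool the surrounding
    context features at multiple scales and concatenate them to the features
    of the inner center crop, giving the center long-range context.

    context_feat: (H, W, C) features of the large context patch
    center_size:  side length of the inner center crop
    Assumes each scale divides both the context and center sizes evenly.
    """
    H, W, C = context_feat.shape
    top = (H - center_size) // 2
    left = (W - center_size) // 2
    center = context_feat[top:top + center_size, left:left + center_size]

    pooled = [center]
    for s in scales:
        # Average-pool the whole context patch down to an (s, s) grid of cells...
        cells = context_feat.reshape(s, H // s, s, W // s, C).mean(axis=(1, 3))
        # ...then upsample (nearest-neighbor repeat) back to the center size.
        rep = center_size // s
        up = np.repeat(np.repeat(cells, rep, axis=0), rep, axis=1)
        pooled.append(up)
    return np.concatenate(pooled, axis=-1)

feat = np.random.rand(32, 32, 8)     # hypothetical context-patch features
out = center_surround_pyramid_pool(feat, center_size=16)
# out carries the center's 8 channels plus 8 per pooling scale: 32 total
```

Each pooled level summarizes progressively coarser context, so even pixels whose receptive field misses all known regions still see a global summary.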
Adobe's DL-Based 'HDMatt' Handles Image Details Thinner Than Hair
Image matting plays a key role in image and video editing and composition. Although existing deep learning approaches can produce acceptable matting results, their performance suffers in real-world applications, where input images are mostly high resolution. To address this, a group of researchers from UIUC, Adobe Research, and the University of Oregon has proposed HDMatt, the first deep learning-based image matting approach designed for high-resolution inputs. Deep learning approaches generally take an entire input image and an associated trimap and infer the alpha matte using convolutional neural networks. Such methods, however, may fail on high-resolution inputs of 5000×5000 pixels or more due to hardware limitations. The researchers designed HDMatt to crop an input image and trimap into patches and then estimate the alpha values of each patch.
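The crop-and-stitch strategy described above can be sketched with overlapping tiles whose predictions are averaged back together. This is a simplified illustration of generic patch-wise inference, not Adobe's HDMatt code (which additionally models cross-patch context); the function names and tile sizes are assumptions:

```python
import numpy as np

def infer_by_patches(image, trimap, predict_alpha, patch=512, overlap=64):
    """Patch-wise matting inference: crop the image and trimap into
    overlapping tiles, run a matting model on each tile, and blend
    the per-tile alpha predictions by averaging the overlaps.

    predict_alpha: callable (img_patch, trimap_patch) -> alpha_patch
    """
    H, W = trimap.shape
    alpha = np.zeros((H, W), dtype=np.float64)
    weight = np.zeros((H, W), dtype=np.float64)
    step = patch - overlap
    for y in range(0, H, step):
        for x in range(0, W, step):
            y1, x1 = min(y + patch, H), min(x + patch, W)
            a = predict_alpha(image[y:y1, x:x1], trimap[y:y1, x:x1])
            alpha[y:y1, x:x1] += a
            weight[y:y1, x:x1] += 1.0
    return alpha / weight          # average the overlapping predictions

# Usage with a dummy "model" that just echoes the trimap as the alpha:
img = np.zeros((1000, 1200, 3))
tri = np.full((1000, 1200), 0.5)
out = infer_by_patches(img, tri, lambda ip, tp: tp)
```

Tiling keeps peak memory bounded by the patch size rather than the full image, which is what makes 5000×5000-pixel inputs tractable on a single GPU.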