Bokeh effect


Any-to-Bokeh: Arbitrary-Subject Video Refocusing with Video Diffusion Model

Yang, Yang, Zheng, Siming, Yang, Qirui, Chen, Jinwei, Wu, Boxi, He, Xiaofei, Cai, Deng, Li, Bo, Jiang, Peng-Tao

arXiv.org Artificial Intelligence

Diffusion models have recently emerged as powerful tools for camera simulation, enabling both geometric transformations and realistic optical effects. Among these, image-based bokeh rendering has shown promising results, but diffusion for video bokeh remains unexplored. Existing image-based methods are plagued by temporal flickering and inconsistent blur transitions, while current video editing methods lack explicit control over the focus plane and bokeh intensity. These issues limit their applicability to controllable video bokeh. In this work, we propose a one-step diffusion framework for temporally coherent, depth-aware video bokeh rendering. The framework employs a multi-plane image (MPI) representation adapted to the focal plane to condition the video diffusion model, thereby enabling it to exploit strong 3D priors from pretrained backbones. To further enhance temporal stability, depth robustness, and detail preservation, we introduce a progressive training strategy. Experiments on synthetic and real-world benchmarks demonstrate superior temporal coherence, spatial accuracy, and controllability, outperforming prior baselines. This work represents the first dedicated diffusion framework for video bokeh generation, establishing a new baseline for temporally coherent and controllable depth-of-field effects.
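Outside any diffusion model, the core of a focal-plane-adapted MPI can be illustrated with a classical multi-plane refocusing pass: slice a frame into depth planes, blur each plane in proportion to its distance from the chosen focal plane, and composite with soft normalization. The sketch below is a minimal NumPy illustration of that idea; the plane count, box-blur model, and normalization are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

def box_blur(img, radius):
    """Naive separable box blur with wrap-around borders (radius 0 is a no-op)."""
    out = img.astype(np.float64)
    for axis in (0, 1):
        out = np.mean([np.roll(out, s, axis=axis) for s in range(-radius, radius + 1)], axis=0)
    return out

def mpi_refocus(frame, depth, focus_depth, n_planes=4, max_radius=6):
    """Split the frame into depth planes, blur each plane in proportion to
    its distance from the focal plane, and composite with normalization."""
    edges = np.linspace(depth.min(), depth.max(), n_planes + 1)
    plane_idx = np.clip(np.digitize(depth, edges[1:-1]), 0, n_planes - 1)
    span = depth.max() - depth.min() + 1e-8
    out = np.zeros(frame.shape, dtype=np.float64)
    weight = np.zeros(depth.shape, dtype=np.float64)
    for i in range(n_planes):
        mask = (plane_idx == i).astype(np.float64)
        mid = 0.5 * (edges[i] + edges[i + 1])
        radius = int(round(max_radius * abs(mid - focus_depth) / span))
        # Blur color and alpha with the same kernel so edges stay soft.
        out += box_blur(frame * mask[..., None], radius)
        weight += box_blur(mask, radius)
    return out / np.maximum(weight, 1e-8)[..., None]
```

Applied per frame this has exactly the temporal-flickering problem the abstract describes (each frame is refocused independently), which is what motivates conditioning a video diffusion model on the MPI instead.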


GBSD: Generative Bokeh with Stage Diffusion

Deng, Jieren, Zhou, Xin, Tian, Hao, Pan, Zhihong, Aguiar, Derek

arXiv.org Artificial Intelligence

The bokeh effect is an artistic technique that blurs out-of-focus areas in a photograph and has gained interest due to recent developments in text-to-image synthesis and the ubiquity of smartphone cameras and photo-sharing apps. Prior work on rendering bokeh effects has focused on post hoc image manipulation to produce similar blurring effects in existing photographs using classical computer graphics or neural rendering techniques, but either suffers from depth-discontinuity artifacts or is restricted to reproducing bokeh effects present in the training data. More recent diffusion-based models can synthesize images with an artistic style, but either require the generation of high-dimensional masks, expensive fine-tuning, or affect global image characteristics. In this paper, we present GBSD, the first generative text-to-image model that synthesizes photorealistic images with a bokeh style. Motivated by how image synthesis occurs progressively in diffusion models, our approach combines latent diffusion models with a 2-stage conditioning algorithm to render bokeh effects on semantically defined objects. Since we can focus the effect on objects, this semantic bokeh effect is more versatile than classical rendering techniques. We evaluate GBSD both quantitatively and qualitatively and demonstrate its ability to be applied in both text-to-image and image-to-image settings.
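The stage-wise conditioning idea exploits the fact that diffusion models form global layout in early denoising steps and fine detail in late ones. A toy sampler below illustrates switching the conditioning signal partway through the schedule; the step count, switch point, and `denoise_step` callback are illustrative stand-ins, not GBSD's actual algorithm.

```python
def two_stage_sample(denoise_step, x, scene_cond, bokeh_cond, steps=50, switch=0.6):
    """Toy stage-wise sampler: condition the early (layout-forming) steps on
    the scene prompt and the late (detail-forming) steps on the bokeh-style
    prompt. `denoise_step(x, t, cond)` is a stand-in for one denoising step."""
    for t in range(steps):
        cond = scene_cond if t < switch * steps else bokeh_cond
        x = denoise_step(x, t, cond)
    return x
```

Because the bokeh condition only arrives after the layout has stabilized, the blur style can be steered without regenerating or masking the scene content.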


BokehOrNot: Transforming Bokeh Effect with Image Transformer and Lens Metadata Embedding

Yang, Zhihao, Lian, Wenyi, Lai, Siyuan

arXiv.org Artificial Intelligence

The bokeh effect is an optical phenomenon that offers a pleasant visual experience, typically produced by high-end cameras with wide-aperture lenses. The bokeh effect transformation task aims to render the effect produced by one lens-and-aperture combination from an image captured with another. Current models are limited in their ability to render a specific set of bokeh effects, primarily transformations from sharp to blurred. In this paper, we propose a novel universal method for embedding lens metadata into the model and introduce a loss calculated using alpha masks from the newly released Bokeh Effect Transformation Dataset (BETD) [3]. Based on these techniques, we propose the BokehOrNot model, which can produce both blur-to-sharp and sharp-to-blur bokeh effects with various combinations of lenses and aperture sizes. Our proposed model outperforms current leading bokeh rendering and image restoration models and renders visually natural bokeh effects. Our code is available at: https://github.com/indicator0/bokehornot.
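Embedding lens metadata amounts to turning the (source, target) lens and aperture description into a conditioning vector the network can consume. The sketch below shows one plausible encoding; the lens identifiers, the f/22 normalization, and the fixed random projection (standing in for a learned linear layer) are all assumptions for illustration, not BokehOrNot's embedding.

```python
import numpy as np

# Hypothetical lens identifiers; BETD's actual metadata fields may differ.
LENSES = ["lens_A_50mm", "lens_B_50mm"]

def embed_lens_metadata(src_lens, src_f, tgt_lens, tgt_f, dim=16, seed=0):
    """Toy lens-metadata embedding: one-hot lens ids plus f-numbers
    normalized by f/22, concatenated for source and target, then projected
    to `dim` dimensions by a fixed random matrix standing in for a learned
    linear layer."""
    def encode(lens, f):
        onehot = np.zeros(len(LENSES))
        onehot[LENSES.index(lens)] = 1.0
        return np.concatenate([onehot, [f / 22.0]])
    raw = np.concatenate([encode(src_lens, src_f), encode(tgt_lens, tgt_f)])
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((dim, raw.size)) / np.sqrt(raw.size)
    return W @ raw
```

Encoding both the source and target combinations is what lets a single model cover sharp-to-blur and blur-to-sharp directions: swapping the two halves of the vector reverses the transformation being requested.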


Depth-aware Blending of Smoothed Images for Bokeh Effect Generation

Dutta, Saikat

arXiv.org Artificial Intelligence

The bokeh effect is used in photography to capture images in which closer objects look sharp and everything else stays out of focus. Bokeh photos are generally captured with single-lens reflex cameras using a shallow depth of field. Most modern smartphones can take bokeh images by leveraging dual rear cameras or good autofocus hardware. However, for smartphones with a single rear camera and without good autofocus hardware, we have to rely on software to generate bokeh images. This kind of system is also useful for adding a bokeh effect to already-captured images. In this paper, an end-to-end deep learning framework is proposed to generate a high-quality bokeh effect from images. The original image and differently smoothed versions of it are blended to generate the bokeh effect with the help of a monocular depth estimation network. The proposed approach is compared against a saliency-detection-based baseline and a number of approaches proposed in the AIM 2019 Challenge on Bokeh Effect Synthesis. Extensive experiments are presented to analyze the different parts of the proposed algorithm. The network is lightweight and can process an HD image in 0.03 seconds. This approach ranked second in the AIM 2019 Bokeh Effect Challenge (Perceptual Track).
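The blending scheme described above can be sketched directly: precompute a stack of progressively smoothed copies of the image, then interpolate between adjacent smoothing levels per pixel according to how far each pixel's depth is from the focal plane. The blur radii, the linear interpolation between levels, and the box blur with wrap-around borders are illustrative choices, not the paper's trained pipeline.

```python
import numpy as np

def box_blur(img, radius):
    """Naive separable box blur with wrap-around borders (radius 0 is a no-op)."""
    out = img.astype(np.float64)
    for axis in (0, 1):
        out = np.mean([np.roll(out, s, axis=axis) for s in range(-radius, radius + 1)], axis=0)
    return out

def depth_aware_blend(img, depth, focus_depth, radii=(0, 2, 4, 8)):
    """Blend a stack of progressively smoothed copies of `img`, choosing
    per-pixel weights from defocus = |depth - focus_depth|."""
    stack = np.stack([box_blur(img, r) for r in radii])      # (K, H, W, 3)
    defocus = np.abs(depth - focus_depth)
    level = defocus / (defocus.max() + 1e-8) * (len(radii) - 1)
    lo = np.floor(level).astype(int)                         # lower smoothing level
    hi = np.minimum(lo + 1, len(radii) - 1)                  # upper smoothing level
    t = (level - lo)[..., None]                              # interpolation weight
    rows = np.arange(img.shape[0])[:, None]
    cols = np.arange(img.shape[1])[None, :]
    return (1 - t) * stack[lo, rows, cols] + t * stack[hi, rows, cols]
```

In the paper the per-pixel weights come from a learned network fed by monocular depth estimates rather than from this fixed formula, but the stack-and-blend structure is the same, which is what keeps the method lightweight enough for HD images.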


Pixel 2 XL review: A.I. magic on a 6-inch display

PCWorld

You'll want the Google Pixel 2 XL if you're looking for the purest, most elegant Android experience possible in a 6-inch phone. You'll want the Pixel 2 XL if you're looking for a stunning display with an 18:9 aspect ratio, amazing portrait photography, and a ton of surprise-and-delight features made possible by Google Lens and the rest of Google's A.I. tool chest. When the Pixel 2 XL was announced on Oct. 4, Google reminded us that its machine learning engine is watching our every move to improve its A.I. algorithms. So, yes, the Pixel 2 XL's ever-Googley magic tricks may keep robophobes up at night. And you'll rightfully want one if you're due for a phone upgrade. But if you already own the original Pixel, your decision is more difficult. The Pixel 2 XL kicks ass, but much of what makes it special--stock Android, the Google Photos experience, Google Assistant in the home button, and Google Lens--is available in the first-generation Pixel phones, too. To that extent, the Pixel 2 XL (and the smaller Pixel 2, which I'll review soon) are victims of Google's success at creating a cloud-first, machine-learning platform that spans #MadeByGoogle devices. The Pixel 2 XL feels great in the hand. Before we drill down into features, let's get straight to Pixel 2 XL specs.