Image Processing

Sony's new Bravia XR TVs are all about 'cognitive intelligence'


Image processing has always been at the heart of Sony's TV designs. Sure, its premium Bravia TVs have typically featured the latest and greatest display hardware around, but it's the company's devotion to image quality that has set it apart from competitors. This year, Sony is doubling down on that reputation with the Cognitive Processor XR, a new image processor that will focus on bringing "cognitive intelligence" to its upcoming Bravia XR LED and OLED TVs. I know, that sounds like a marketing buzzword, but it describes a new approach to image processing for Sony. Its previous chips used artificial intelligence to optimize individual elements of the picture, things like brightness, contrast and color.

Can the Government Regulate Deepfakes? WSJD - Technology

Last month, the British television network Channel 4 broadcast an "alternative Christmas address" by Queen Elizabeth II, in which the 94-year-old monarch was shown cracking jokes and performing a dance popular on TikTok. Of course, it wasn't real: The video was produced as a warning about deepfakes--apparently real images or videos that show people doing or saying things they never did or said. If an image of a person can be found, new technologies using artificial intelligence and machine learning now make it possible to show that person doing almost anything at all. The dangers of the technology are clear: A high-school teacher could be shown in a compromising situation with a student; a neighbor could be depicted as a terrorist. Can deepfakes, as such, be prohibited under American law?

Hot papers on arXiv from the past month – December 2020


Here are the most tweeted papers that were uploaded onto arXiv during December 2020. Results are powered by Arxiv Sanity Preserver. Abstract: Self-attention networks have revolutionized natural language processing and are making impressive strides in image analysis tasks such as image classification and object detection. Inspired by this success, we investigate the application of self-attention networks to 3D point cloud processing. We design self-attention layers for point clouds and use these to construct self-attention networks for tasks such as semantic scene segmentation, object part segmentation, and object classification.
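
The paper designs a specialized vector self-attention layer for point clouds; as a rough illustration of the underlying mechanism, here is plain scaled dot-product self-attention applied to a set of per-point features in NumPy (a generic sketch, not the paper's exact layer, and the random projection matrices are placeholders):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feats, wq, wk, wv):
    """Scaled dot-product self-attention over a set of point features.

    feats: (N, d) per-point features; wq, wk, wv: (d, d) projections.
    Every point attends to every other point, so the output feature of
    each point aggregates context from the whole cloud.
    """
    q, k, v = feats @ wq, feats @ wk, feats @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (N, N) attention weights
    return attn @ v                                 # (N, d) refined features

rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 16))                      # 8 points, 16-dim features
w = [rng.normal(size=(16, 16)) * 0.1 for _ in range(3)]
out = self_attention(pts, *w)
```

The paper's layer additionally conditions attention on relative point positions, which this sketch omits.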

Worldsheet: Wrapping the World in a 3D Sheet for View Synthesis from a Single Image Machine Learning

We present Worldsheet, a method for novel view synthesis using just a single RGB image as input. This is a challenging problem as it requires an understanding of the 3D geometry of the scene as well as texture mapping to generate both visible and occluded regions from new view-points. Our main insight is that simply shrink-wrapping a planar mesh sheet onto the input image, consistent with the learned intermediate depth, captures underlying geometry sufficient to generate photorealistic unseen views with arbitrarily large view-point changes. To operationalize this, we propose a novel differentiable texture sampler that allows our wrapped mesh sheet to be textured, which is then transformed into a target image via differentiable rendering. Our approach is category-agnostic, end-to-end trainable without using any 3D supervision, and requires a single image at test time. Worldsheet consistently outperforms prior state-of-the-art methods on single-image view synthesis across several datasets. Furthermore, this simple idea captures novel views surprisingly well on a wide range of high-resolution in-the-wild images, converting them into a navigable 3D pop-up. Video results and code at
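
A differentiable texture sampler of the kind the abstract describes is typically built on bilinear interpolation, which is smooth in the sampling coordinates and so admits gradients. A minimal NumPy sketch of sampling a single-channel texture at continuous coordinates (an illustration of the general technique, not the paper's implementation):

```python
import numpy as np

def bilinear_sample(image, x, y):
    """Bilinearly sample a (H, W) image at continuous coordinates (x, y).

    The output is a smooth function of x and y, which is what makes this
    sampler differentiable and usable inside a rendering loss.
    """
    h, w = image.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0
    top = (1 - dx) * image[y0, x0] + dx * image[y0, x1]
    bot = (1 - dx) * image[y1, x0] + dx * image[y1, x1]
    return (1 - dy) * top + dy * bot

img = np.arange(16.0).reshape(4, 4)
val = bilinear_sample(img, 1.5, 2.0)  # halfway between img[2, 1] and img[2, 2]
```

In a real pipeline this runs batched over a full sampling grid (e.g. `torch.nn.functional.grid_sample`), with gradients flowing back to the mesh coordinates.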

Unsupervised Image Segmentation using Mutual Mean-Teaching Artificial Intelligence

Unsupervised image segmentation aims at assigning pixels with similar features to the same cluster without annotation, which is an important task in computer vision. Due to the lack of prior knowledge, most existing models usually need to be trained several times to obtain suitable results. To address this problem, we propose an unsupervised image segmentation model based on the Mutual Mean-Teaching (MMT) framework to produce more stable results. In addition, since the labels of pixels from the two models are not matched, a label alignment algorithm based on the Hungarian algorithm is proposed to match the cluster labels. Experimental results demonstrate that the proposed model is able to segment various types of images and achieves better performance than existing methods.
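
The label-alignment step boils down to finding the permutation of one model's cluster ids that best agrees with the other model's. A toy NumPy sketch, brute-forcing the permutation for small label counts (the paper uses the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`, which scales to large label sets):

```python
import numpy as np
from itertools import permutations

def align_labels(labels_a, labels_b, k):
    """Remap model B's cluster ids to best match model A's.

    Builds a confusion matrix between the two labelings, then picks the
    permutation of B's ids that maximizes agreement. Brute force is fine
    for small k; the Hungarian algorithm does this in polynomial time.
    """
    confusion = np.zeros((k, k), dtype=int)  # confusion[i, j]: A said i, B said j
    for i, j in zip(labels_a, labels_b):
        confusion[i, j] += 1
    best = max(permutations(range(k)),
               key=lambda p: sum(confusion[p[j], j] for j in range(k)))
    mapping = {j: best[j] for j in range(k)}  # B's label j -> A's label
    return np.array([mapping[b] for b in labels_b])

a = np.array([0, 0, 1, 1, 2, 2])
b = np.array([2, 2, 0, 0, 1, 1])  # same clustering, permuted ids
aligned = align_labels(a, b, 3)
```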

Multi-modal AsynDGAN: Learn From Distributed Medical Image Data without Sharing Private Information Artificial Intelligence

As deep learning technologies advance, more and more data is needed to build general and robust models for various tasks. In the medical domain, however, large-scale, multi-party data training and analyses are infeasible due to privacy and data-security concerns. In this paper, we propose an extendable and elastic learning framework to preserve privacy and security while enabling collaborative learning with efficient communication. The proposed framework is named distributed Asynchronized Discriminator Generative Adversarial Networks (AsynDGAN), and consists of a centralized generator and multiple distributed discriminators. The advantages of our proposed framework are five-fold: 1) the central generator can learn the real data distribution from multiple datasets implicitly without sharing the image data; 2) the framework is applicable to single-modality or multi-modality data; 3) the learned generator can be used to synthesize samples for downstream learning tasks, achieving performance close to that of training on actual samples collected from multiple data centers; 4) the synthetic samples can also be used to augment data or complete missing modalities for a single data center; 5) the learning process is more efficient and requires lower bandwidth than other distributed deep learning methods.

Semantic Image Segmentation with DeepLabv3-pytorch


We will be using opencv to interface with a webcam and read input frames, and we'll use matplotlib's pyplot module to render the processed video feed as output. If you have multiple webcams, you can create multiple capture objects by passing the appropriate index; most laptops nowadays have one built-in camera, which is typically found at index 0. Note that opencv reads images in BGR format, but for rendering we need to show them in RGB format; so we've written a tiny function that captures a frame in real time and converts it from BGR to RGB. With this, we're set with the input preprocessing steps. Let's look at how we'll set the stage for output now.
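
A minimal sketch of the capture-and-convert step described above (the camera index 0 and single-frame display are illustrative choices, not the post's full pipeline):

```python
import numpy as np

def bgr_to_rgb(frame):
    # OpenCV returns frames with channels ordered B, G, R; reversing the
    # last axis yields the R, G, B order matplotlib expects (equivalent
    # to cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).
    return frame[..., ::-1]

def show_one_frame():
    """Grab a single frame from the default webcam and display it in RGB."""
    import cv2
    import matplotlib.pyplot as plt

    cap = cv2.VideoCapture(0)  # index 0: the default built-in camera
    ok, frame = cap.read()
    if ok:
        plt.imshow(bgr_to_rgb(frame))
        plt.show()
    cap.release()
```

In the live loop, `bgr_to_rgb` runs on every captured frame before it is handed to pyplot for rendering.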

Top 20 Image Datasets for Machine Learning and Computer Vision


Computer vision enables computers to understand the content of images and videos. The goal in computer vision is to automate tasks that the human visual system can do. Computer vision tasks include image acquisition, image processing, and image analysis. The image data can come in different forms, such as video sequences, views from multiple cameras at different angles, or multi-dimensional data from a medical scanner. Labelme: A large dataset created by the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) containing 187,240 images, 62,197 annotated images, and 658,992 labeled objects.

Pyramid-Focus-Augmentation: Medical Image Segmentation with Step-Wise Focus Artificial Intelligence

Segmentation of findings in the gastrointestinal tract is a challenging but important task, and a key building block for reliable automatic decision-support systems. In this work, we present our solution for the Medico 2020 task, which focused on the problem of colon polyp segmentation. We present our simple but efficient idea of using an augmentation method that applies grids in a pyramid-like manner (large to small) for segmentation. Our results show that the proposed methods work as intended and can achieve results comparable to competing methods.
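
The summary above does not spell out the augmentation's exact form, but "grids in a pyramid-like manner (large to small)" suggests grid masks at progressively finer cell sizes. A purely illustrative NumPy sketch of generating such a coarse-to-fine stack of grid masks (the cell sizes and checkerboard pattern are assumptions, not the paper's specification):

```python
import numpy as np

def grid_masks(size, cell_sizes):
    """Checkerboard-style binary grid masks at decreasing cell sizes.

    size: side length of the square mask; cell_sizes: list of cell widths,
    ordered large to small to mimic a pyramid of grids.
    """
    masks = []
    for cell in cell_sizes:
        ys, xs = np.indices((size, size))
        # alternate 0/1 blocks of side `cell`, like a checkerboard
        masks.append(((ys // cell + xs // cell) % 2).astype(np.uint8))
    return masks

pyramid = grid_masks(64, [32, 16, 8])  # coarse -> fine
```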

HR-Depth: High Resolution Self-Supervised Monocular Depth Estimation Artificial Intelligence

Self-supervised learning shows great potential in monocular depth estimation, using image sequences as the only source of supervision. Although people try to use high-resolution images for depth estimation, the accuracy of prediction has not been significantly improved. In this work, we find the core reason comes from the inaccurate depth estimation in large gradient regions, making the bilinear interpolation error gradually disappear as the resolution increases. To obtain more accurate depth estimation in large gradient regions, it is necessary to obtain high-resolution features with spatial and semantic information. Therefore, we present an improved DepthNet, HR-Depth, with two effective strategies: (1) re-design the skip-connection in DepthNet to get better high-resolution features and (2) propose a feature fusion Squeeze-and-Excitation (fSE) module to fuse features more efficiently. Using ResNet-18 as the encoder, HR-Depth surpasses all previous state-of-the-art (SoTA) methods with the least parameters at both high and low resolution. Moreover, previous state-of-the-art methods are based on fairly complex and deep networks with a mass of parameters, which limits their real applications. Thus we also construct a lightweight network which uses MobileNetV3 as the encoder. Experiments show that the lightweight network can perform on par with many large models like Monodepth2 at high resolution with only 20% of the parameters. All codes and models will be available at
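
The abstract's fSE module builds on the standard Squeeze-and-Excitation recalibration: globally pool each channel, pass the descriptor through a small bottleneck, and rescale the features with the resulting per-channel gates. A minimal NumPy sketch of that standard SE block (the paper's feature-fusion variant adds details not shown here, and the random weights are placeholders):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feats, w1, w2):
    """Standard Squeeze-and-Excitation over a (C, H, W) feature map.

    Squeeze: global average pool to one descriptor value per channel.
    Excite: FC -> ReLU -> FC -> sigmoid produces per-channel gates in
    (0, 1) that rescale the original features channel-wise.
    """
    s = feats.mean(axis=(1, 2))                  # (C,) channel descriptor
    gates = sigmoid(np.maximum(s @ w1, 0) @ w2)  # (C,) channel gates
    return feats * gates[:, None, None]          # recalibrated features

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4, 4))      # 8 channels, 4x4 spatial
w1 = rng.normal(size=(8, 2)) * 0.5  # bottleneck: 8 -> 2
w2 = rng.normal(size=(2, 8)) * 0.5  # expand back: 2 -> 8
y = squeeze_excite(x, w1, w2)
```

Because the gates lie in (0, 1), the block can only attenuate channels, never amplify them, which is what makes it a cheap recalibration step rather than a full convolution.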