Image Processing


Nixon's grim moon-disaster speech is now a warning about the deepfake future

ZDNet

The entertainment industry has yet to regulate the use of deepfakes and voice cloning. On September 29, the Emmy for interactive documentary went to 'In Event of Moon Disaster', a film that uses artificial intelligence (AI) to create a fake video featuring former US President Richard Nixon. The film shows him delivering a speech that was prepared in case the Apollo 11 mission failed, leaving astronauts Neil Armstrong and Buzz Aldrin to die on the moon. The multimedia project was created by the Massachusetts Institute of Technology's Center for Advanced Virtuality, with help from a Ukrainian voice-cloning startup, Respeecher, which recreated Nixon's voice. The increasing scale of AI is raising the stakes for major ethical questions.


Transformers in computer vision: ViT architectures, tips, tricks and improvements

#artificialintelligence

Finally, DeiT relies on regularization techniques such as stochastic depth. Ultimately, strong augmentation and regularization limit ViT's tendency to overfit in small-data regimes. [Figure: overall architecture of the proposed Pyramid Vision Transformer (PVT).]
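Since the excerpt leans on stochastic depth, here is a minimal sketch of that regularizer, assuming PyTorch; the class name DropPath and the drop probability are illustrative, not DeiT's exact implementation:

```python
import torch
import torch.nn as nn

class DropPath(nn.Module):
    """Stochastic depth: randomly skip a residual branch during training."""

    def __init__(self, drop_prob: float = 0.1):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.drop_prob == 0.0:
            return x
        keep_prob = 1.0 - self.drop_prob
        # One Bernoulli draw per sample, broadcast over the remaining dims.
        shape = (x.shape[0],) + (1,) * (x.dim() - 1)
        mask = (torch.rand(shape, device=x.device) < keep_prob).to(x.dtype)
        # Rescale so the expected activation matches inference behaviour.
        return x * mask / keep_prob

# Typical use inside a transformer block's residual connection (hypothetical):
# x = x + drop_path(attn(norm(x)))
```

During training, each sample's residual branch is dropped with probability drop_prob and surviving activations are rescaled, so inference needs no change.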


Arteris IP FlexNoC Interconnect Licensed by Eyenix for AI-Enabled Imaging/Digital Camera SoC

#artificialintelligence

The NoC interconnect IP will serve as the dataflow backbone for image signal processors that deliver enhanced sensitivity and high-resolution HD imaging at low current and low power in a single-chip solution for the security/surveillance market. Eyenix's imaging solution is a step-function advance over its previous product, replacing a third-party artificial intelligence (AI) function with a superior super-resolution imaging capability developed in-house. It comes in a tightly integrated system design that includes image stabilization for mobile use and image dewarping for wide-angle camera correction. The first application is surveillance cameras. Eyenix chose Arteris IP on-chip interconnect technology for its proprietary image processing chip because it lets Eyenix design and integrate a complete imaging solution without depending on external IP blocks for the AI function.


7 Ways Image Recognition Can Help Impaired Vision! Here's How

#artificialintelligence

In this visual age, images and videos are becoming ever more prevalent in daily life. In its early days, social media was predominantly text-based, but technology has now started to adapt to the needs of people with impaired vision too, making social media easier to navigate and more enjoyable for visually impaired users. Let's look at one such technology, image recognition, which has made life easier for people with impaired vision. Here are seven ways it can help.


Introduction to Face Detection using OpenCV

#artificialintelligence

OpenCV is a cross-platform library with which we can develop real-time computer vision applications. It mainly focuses on image processing, video capture, and analysis, including features like face detection and object detection. Here we are focused on face detection (frontal face). Before that, you may have some confusion about the difference between computer vision and image processing, so let me explain: the input and output of image processing are both images, whereas computer vision is the construction of explicit, meaningful descriptions of physical objects from their images.
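As a concrete illustration of the frontal-face detection the excerpt describes, here is a minimal sketch using OpenCV's bundled Haar cascade; the input and output filenames are placeholders:

```python
import cv2

# Load OpenCV's bundled Haar cascade for frontal faces.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# Read an image (path is illustrative) and convert to grayscale,
# since Haar cascades operate on single-channel images.
img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect frontal faces; scaleFactor and minNeighbors are typical starting values.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a rectangle around each detected face.
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", img)
```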


Meet Transformer in Transformer: A Visual Transformer That Captures Structural Information From Images

#artificialintelligence

A new paper from Huawei, ISCAS and UCAS researchers proposes a novel Transformer-iN-Transformer (TNT) network architecture that outperforms conventional vision transformers at preserving and modelling local information for visual recognition. Transformer architectures were introduced in 2017, and their computational efficiency and scalability quickly made them the de facto standard for natural language processing (NLP) tasks. Recently, transformers have also begun to show their potential in computer vision (CV) tasks such as image recognition, object detection, and image processing. Most of today's visual transformers view an input image as a sequence of image patches while ignoring the intrinsic structural information among the patches, a deficiency that negatively impacts their overall visual recognition ability. While convolutional neural networks (CNNs) remain dominant in CV, transformer-based models have achieved promising performance on visual tasks without an image-specific inductive bias.
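A rough sketch of the TNT idea, assuming PyTorch: an inner transformer attends over pixel-level sub-patch embeddings, and its output is folded back into the patch embeddings seen by the outer transformer. All dimensions and the fusion projection are illustrative, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class TNTBlock(nn.Module):
    """Minimal Transformer-iN-Transformer block: an inner transformer models
    pixel-level (sub-patch) relations; an outer transformer models
    patch-level relations."""

    def __init__(self, patch_dim=384, pixel_dim=24, n_pixels=16, heads=6):
        super().__init__()
        self.inner = nn.TransformerEncoderLayer(pixel_dim, nhead=4, batch_first=True)
        self.outer = nn.TransformerEncoderLayer(patch_dim, nhead=heads, batch_first=True)
        # Project flattened pixel embeddings into the patch embedding, so the
        # outer transformer sees the intra-patch structure.
        self.proj = nn.Linear(n_pixels * pixel_dim, patch_dim)

    def forward(self, patch_tokens, pixel_tokens):
        # pixel_tokens: (batch * n_patches, n_pixels, pixel_dim)
        # patch_tokens: (batch, n_patches, patch_dim)
        pixel_tokens = self.inner(pixel_tokens)
        b, n, _ = patch_tokens.shape
        # Fuse inner (pixel-level) information into the outer (patch) tokens.
        patch_tokens = patch_tokens + self.proj(pixel_tokens.reshape(b, n, -1))
        patch_tokens = self.outer(patch_tokens)
        return patch_tokens, pixel_tokens

# Usage with illustrative shapes (196 patches of 16 sub-patch tokens each):
block = TNTBlock()
patch_tokens = torch.randn(2, 196, 384)
pixel_tokens = torch.randn(2 * 196, 16, 24)
patch_tokens, pixel_tokens = block(patch_tokens, pixel_tokens)
```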


The Hidden Second Face of Deepfakes

#artificialintelligence

A lot of the time when you read about deepfakes (more formally known as synthetic media), the coverage explores only one of their two faces: the negative side. Here, however, I want to explore some of the positive things deepfakes can be used for, so you get the full scope of their capabilities. What are deepfakes? Glad you asked. The simple answer: artificial intelligence-generated media that seamlessly stitches anyone in the world into a video or photo they were never actually in. The more technical answer: deepfakes are made with a GAN (generative adversarial network), a type of deep learning model. A GAN uses two neural networks that rival each other to generate a synthetic version of data that can pass for real data: one network, called the generator, generates new data instances, and the other, called the discriminator, evaluates them for authenticity. The generator produces synthetic media and passes it to the discriminator, whose purpose is to judge whether the media is fake or real; the two are trained together until the system reaches acceptable accuracy (the discriminator is fooled about 50% of the time). Now that we have a better understanding of how deepfakes are created, we can begin to explore their positive uses.
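To make the generator/discriminator dynamic concrete, here is a minimal adversarial training loop on toy data, assuming PyTorch; real deepfake pipelines use far larger image and audio models, but the training pattern is the same:

```python
import torch
import torch.nn as nn

# Toy generator and discriminator for 1-D "data"; shapes are illustrative.
latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)      # stand-in for real samples
    fake = G(torch.randn(32, latent_dim)) # generator's synthetic samples

    # Discriminator: label real data 1, generated data 0.
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: try to make the discriminator output 1 on fakes.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

At equilibrium the discriminator's output on fakes approaches 0.5, which is the "fooled 50% of the time" criterion mentioned above.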


Hidden Picasso nude revealed and brought to life with artificial intelligence

#artificialintelligence

A nude portrait of a crouching woman, hidden beneath the surface of a Pablo Picasso painting, has been revealed using artificial intelligence, advanced imaging technology and 3D printing. Dubbed "The Lonesome Crouching Nude," the recreation is the work of Oxia Palus, a company that uses technology to resurrect lost art, according to a statement sent to CNN on Monday. Picasso painted over the figure when making "The Blind Man's Meal" in 1903. The nude had been partially revealed by a superimposed X-ray fluorescence (XRF) image, but Oxia Palus has now "brought the hidden work back to life," according to the statement. To do so, the company used XRF imaging and image processing to reveal the outline of the hidden painting, then trained an artificial intelligence to add brushstrokes to the portrait in the style of Picasso.


The Age of AI: Changing Human Relationships with Knowledge

#artificialintelligence

An AI learned to win chess by making moves human grandmasters had never conceived. Another AI discovered a new antibiotic by analyzing molecular properties human scientists did not understand. Now, AI-powered jets are defeating experienced human pilots in simulated dogfights. AI is coming online in searching, streaming, medicine, education, and many other fields and, in so doing, transforming how humans are experiencing reality. Artificial Intelligence enhances the speed, precision, and effectiveness of human efforts.


Artificial Intelligence provides sharper images of lunar craters that contain water ice

#artificialintelligence

The moon's polar regions are home to craters and other depressions that never receive sunlight. Today, a group of researchers led by the Max Planck Institute for Solar System Research (MPS) in Germany present the highest-resolution images to date covering 17 such craters. Craters of this type could contain frozen water, making them attractive targets for future lunar missions, and the researchers focused further on relatively small and accessible craters surrounded by gentle slopes. In fact, three of the craters have turned out to lie within the just-announced mission area of NASA's Volatiles Investigating Polar Exploration Rover (VIPER), which is scheduled to touch down on the moon in 2023. Imaging the interior of permanently shadowed craters is difficult, and efforts so far have relied on long exposure times, resulting in smearing and lower resolution. By taking advantage of sunlight reflected from nearby hills and a novel image processing method, the researchers have now produced images at 1–2 meters per pixel, at or very close to the cameras' best capability.