Image Processing


Is AI the future of art?

#artificialintelligence

To many, they are art's next big thing--digital images of jellyfish pulsing and blurring in a dark pink sea, or dozens of butterflies fusing together into a single organism. The Argentine artist Sofia Crespo, who created the works with the help of artificial intelligence, is part of the "generative art" movement, in which humans create rules for computers, which then use algorithms to generate new forms, ideas and patterns. The field has begun to attract huge interest among art collectors--and even bigger price tags at auction. US artist and programmer Robbie Barrat--a prodigy still only 22 years old--sold a work called "Nude Portrait#7Frame#64" at Sotheby's in March for £630,000 ($821,000). That came almost four years after French collective Obvious sold a work at Christie's titled "Edmond de Belamy"--largely based on Barrat's code--for $432,500.


Face Mask Recognition: Deep Learning based Desktop App

#artificialintelligence

I will start the course by installing Python and the necessary Python libraries for developing the end-to-end project. Then I will cover one of the prerequisites of the course: image processing techniques in OpenCV and the mathematical concepts behind images. We will also perform the necessary image analysis and preprocessing steps for the images. Then we will build a mini project on face detection using OpenCV and deep neural networks, as sketched below. With these image basics in place, we will start phase 1 of our project: face identity recognition.
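As a rough illustration of the face-detection step mentioned above, the sketch below uses OpenCV's DNN module with the widely distributed ResNet-10 SSD face detector. The model file names, the example image path, and the confidence threshold are assumptions for illustration; they are not specified in the course description.

```python
import cv2

# Assumed file names for OpenCV's ResNet-10 SSD face detector;
# the course does not specify which model files are used.
PROTOTXT = "deploy.prototxt"
WEIGHTS = "res10_300x300_ssd_iter_140000.caffemodel"

net = cv2.dnn.readNetFromCaffe(PROTOTXT, WEIGHTS)

def detect_faces(image, conf_threshold=0.5):
    """Return bounding boxes (x1, y1, x2, y2) for faces above the threshold."""
    h, w = image.shape[:2]
    # Resize to 300x300 and subtract the mean values the model was trained with.
    blob = cv2.dnn.blobFromImage(image, 1.0, (300, 300), (104.0, 177.0, 123.0))
    net.setInput(blob)
    detections = net.forward()  # shape: (1, 1, N, 7)
    boxes = []
    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > conf_threshold:
            x1, y1, x2, y2 = (detections[0, 0, i, 3:7] * [w, h, w, h]).astype(int)
            boxes.append((x1, y1, x2, y2))
    return boxes

# "example.jpg" is a placeholder path for any test photograph.
image = cv2.imread("example.jpg")
for (x1, y1, x2, y2) in detect_faces(image):
    cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", image)
```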


Top 9 Use Cases of Artificial Intelligence for Industries In 2022

#artificialintelligence

The impact of Artificial Intelligence is becoming the dominant focal point of nearly every new development. The technology is changing practically all industries, including banking and finance, medical services, automotive, telecom, manufacturing, security and military, media, and education. AI-based solutions are a natural choice for companies that want to stay ahead. The sub-fields of Artificial Intelligence, such as machine learning, natural language processing, data analytics, and image processing, are also delivering productive use cases across different sectors. In addition, AI solutions are filling business gaps by offering end-to-end digitization processes.


Artificial Intelligence Enhances Potential of Intravascular OCT

#artificialintelligence

Artificial intelligence's (AI) applicability in cardiac imaging is rapidly growing and was a major topic of discussion at this year's EuroPCR 2022 meeting. Many session speakers discussed how they are using AI tools in their day-to-day practice and in their research to improve decision-making and patient/research outcomes. It's no secret, however, that AI tools are only as good as the data sets and the thousands of expert opinions used to power them. Implementing AI applications in our day-to-day practice, from an operations standpoint, could mean adjusting clinician workflows and setting aside time to set up and train on the new systems. And from an efficacy standpoint, it leaves clinicians wary of result accuracy, especially if they are unsure how good the data used to power the technology really is.


DALL-E Mini Is the Internet's Favorite AI Meme Machine

WIRED

On June 6, Hugging Face, a company that hosts open source artificial intelligence projects, saw traffic to an AI image-generation tool called DALL-E Mini skyrocket. The outwardly simple app, which generates nine images in response to any typed text prompt, was launched nearly a year ago by an independent developer. But after some recent improvements and a few viral tweets, its ability to crudely sketch all manner of surreal, hilarious, and even nightmarish visions suddenly became meme magic. As more people created and shared DALL-E Mini images on Twitter and Reddit, and more new users arrived, Hugging Face saw its servers overwhelmed with traffic. "Our engineers didn't sleep for the first night," says Clément Delangue, CEO of Hugging Face, on a video call from his home in Miami.


Prism AI Software Accelerates Thermal Camera Integration for ADAS & Autonomous Vehicles

#artificialintelligence

Teledyne FLIR has announced the release of Prism AI, a software framework that provides classification, object detection, and object tracking, enabling perception engineers to quickly start integrating thermal cameras for Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicle (AV) systems. Built for automotive perception system developers, Prism includes features such as visible-and-thermal image fusion and advanced thermal image processing capabilities that provide superior pedestrian and animal detection in challenging lighting conditions, especially at night. "The Prism AI software model has performed successfully in third-party, NCAP Automatic Emergency Braking (AEB) tests and will now help perception engineers create more effective systems," said Michael Walters, vice president product management, Teledyne FLIR. "Combining the Prism AI development tools, plugins, and dataset development offers integrators a route to quickly test and decrease development cost for thermal-enabled ADAS or AV that will help save lives." Developers can use Prism AI as the primary perception software or as reference software during in-house development.
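Prism AI's fusion algorithms are proprietary and not described in the announcement, but the toy sketch below illustrates the basic idea of visible-thermal image fusion using a simple weighted blend in OpenCV. The file names, the assumption that the two frames are already registered, the colormap, and the blend weights are all illustrative choices, not Teledyne FLIR's method.

```python
import cv2

# Illustrative inputs: a visible-light frame and a thermal frame assumed to be
# already aligned (registered) to the same viewpoint.
visible = cv2.imread("visible_frame.jpg")                        # 3-channel BGR
thermal = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)  # single-channel

# Match sizes and give the thermal image a color representation for blending.
thermal = cv2.resize(thermal, (visible.shape[1], visible.shape[0]))
thermal_color = cv2.applyColorMap(thermal, cv2.COLORMAP_JET)

# Naive fusion: a fixed-weight blend. Production perception stacks use far more
# sophisticated, often learned, fusion before running detection on the result.
fused = cv2.addWeighted(visible, 0.6, thermal_color, 0.4, 0.0)
cv2.imwrite("fused_frame.jpg", fused)
```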


Artificial intelligence

#artificialintelligence

Deep learning[133] uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits or letters or faces.[134] Deep learning has drastically improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, image classification[135] and others. Deep learning often uses convolutional neural networks for many or all of its layers.
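As a minimal sketch of the layered feature extraction described above, the PyTorch model below stacks two convolutional layers before a classifier: the first layer tends to respond to low-level patterns such as edges, while the deeper layer combines them into higher-level patterns. The layer sizes, the 28x28 input, and the 10-class output are arbitrary choices for illustration, not drawn from the article.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy convolutional network: lower layers extract low-level features,
    deeper layers combine them, and a final linear layer maps the resulting
    features to class scores (e.g., digits)."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),   # low-level features (edges)
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level features
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# A batch of four 28x28 grayscale images produces four vectors of class scores.
scores = SmallCNN()(torch.randn(4, 1, 28, 28))
print(scores.shape)  # torch.Size([4, 10])
```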


Cancer Prevention through early detecTion (CaPTion) Workshop @ MICCAI 2022

#artificialintelligence

Prof. Kristy K. Brock is currently a Professor with tenure in the Department of Imaging Physics at the University of Texas MD Anderson Cancer Center, where she is the Director for the Image-Guided Cancer Therapy Research Program. Her research has focused on image guided cancer therapy, where she has developed a biomechanical model-based deformable image registration algorithm to integrate imaging into treatment planning, delivery, and response assessment, as well as to understand and validate imaging signals through correlative pathology. Her algorithm was licensed and incorporated into a commercial treatment planning system. She is board certified by the American Board of Radiology in Therapeutic Medical Physics and holds a joint appointment with the Department of Radiation Physics at MD Anderson. Dr. Brock has published over 150 papers in peer-reviewed journals, is the editor of the book 'Image Processing in Radiation Therapy', and has been the PI/co-PI on over 25 peer-reviewed, industry, and institutional grants.


A new generative adversarial network for medical images super resolution - Scientific Reports

#artificialintelligence

For medical image analysis, there is always an immense need for rich details in an image. Typically, the diagnosis will be served best if the fine details in the image are retained and the image is available in high resolution. In medical imaging, acquiring high-resolution images is challenging and costly as it requires sophisticated and expensive instruments and trained human resources, and often causes operation delays. Deep learning based super resolution techniques can help us to extract rich details from a low-resolution image acquired using the existing devices. In this paper, we propose a new Generative Adversarial Network (GAN) based architecture for medical images, which maps low-resolution medical images to high-resolution images. The proposed architecture is divided into three steps. In the first step, we use a multi-path architecture to extract shallow features at multiple scales instead of a single scale. In the second step, we use a ResNet34 architecture to extract deep features and upscale the feature map by a factor of two. In the third step, we extract features of the upscaled version of the image using a residual connection-based mini-CNN and again upscale the feature map by a factor of two. The progressive upscaling overcomes the limitation of previous methods in generating true colors. Finally, we use a reconstruction convolutional layer to map the upscaled features back to a high-resolution image. Our addition of an extra loss term helps in overcoming large errors, thus generating more realistic and smooth images. We evaluate the proposed architecture on four different medical image modalities: (1) the DRIVE and STARE datasets of retinal fundoscopy images, (2) the BraTS dataset of brain MRI, (3) the ISIC skin cancer dataset of dermoscopy images, and (4) the CAMUS dataset of cardiac ultrasound images. The proposed architecture achieves superior accuracy compared to other state-of-the-art super-resolution architectures.
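The paper's exact network is not reproduced here, but the PyTorch sketch below illustrates the general pattern the abstract describes: multi-path shallow feature extraction, two progressive x2 upscaling stages, and a reconstruction convolution. All layer sizes, the PixelShuffle upsampling choice, and the omission of the ResNet34 backbone, the residual mini-CNN, the discriminator, and the extra loss term are simplifying assumptions.

```python
import torch
import torch.nn as nn

class ProgressiveSRGenerator(nn.Module):
    """Simplified 4x super-resolution generator: multi-scale shallow features,
    then two x2 upscaling stages (PixelShuffle), then a reconstruction conv.
    This is an illustrative stand-in, not the authors' published network."""

    def __init__(self, channels: int = 3, width: int = 64):
        super().__init__()
        # Multi-path shallow feature extraction with different kernel sizes.
        self.path3 = nn.Conv2d(channels, width // 2, kernel_size=3, padding=1)
        self.path5 = nn.Conv2d(channels, width // 2, kernel_size=5, padding=2)
        # Two progressive x2 upscaling stages.
        self.up1 = nn.Sequential(
            nn.Conv2d(width, width * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),
            nn.PReLU(),
        )
        self.up2 = nn.Sequential(
            nn.Conv2d(width, width * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),
            nn.PReLU(),
        )
        # Map the upscaled feature map back to an image.
        self.reconstruct = nn.Conv2d(width, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shallow = torch.cat([self.path3(x), self.path5(x)], dim=1)
        return self.reconstruct(self.up2(self.up1(shallow)))

# A 64x64 low-resolution image is mapped to a 256x256 output (4x upscaling).
hr = ProgressiveSRGenerator()(torch.randn(1, 3, 64, 64))
print(hr.shape)  # torch.Size([1, 3, 256, 256])
```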


La veille de la cybersécurité

#artificialintelligence

Though marketers are still in the early stages of experimenting with deepfakes and deepfake technology, these videos convey a more immersive marketing experience through storytelling. Deepfake technology is built on "deep learning," a type of machine learning that allows computers to learn tasks from data without being explicitly programmed. Deepfake technology also involves computer vision, which allows computers to recognize objects in images. For example, computer vision uses deep learning algorithms to identify objects in photos or videos.
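To make the computer-vision point concrete, the sketch below uses a pretrained torchvision classifier to label the main object in a photo. The choice of ResNet-18 and the example file name are assumptions for illustration only; the article does not name any specific model.

```python
import torch
from torchvision import models
from torchvision.models import ResNet18_Weights
from PIL import Image

# Load a ResNet-18 pretrained on ImageNet along with its preprocessing pipeline.
weights = ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()
preprocess = weights.transforms()

# "photo.jpg" is a placeholder path for any image containing an everyday object.
image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probabilities = model(batch).softmax(dim=1)[0]

# Report the most likely ImageNet category and its probability.
top_prob, top_idx = probabilities.max(dim=0)
print(f"{weights.meta['categories'][top_idx.item()]}: {top_prob.item():.2%}")
```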