Sensing and Signal Processing


X-ray Image Classification and Model Evaluation

#artificialintelligence

Kaggle hosts a wonderful chest X-ray image dataset containing pneumonia and normal cases. There are significant visual differences between a normal X-ray and an affected one. Machine learning can play a pivotal role in identifying the disease, significantly cutting diagnosis time and reducing human effort. I was motivated by the work done here on the cats-and-dogs dataset and reused its code block for the dataset pipeline. First we need to import the necessary packages.
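
As a minimal sketch of that dataset pipeline (the article does not name a framework; the cats-and-dogs pipeline it reuses is commonly written in TensorFlow/Keras, and the folder layout and image size below are assumptions based on the Kaggle dataset's standard structure):

import tensorflow as tf

IMG_SIZE = (180, 180)
BATCH = 32

# Assumed layout: chest_xray/train/{NORMAL,PNEUMONIA}/*.jpeg and the same under chest_xray/test
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=IMG_SIZE, batch_size=BATCH)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/test", image_size=IMG_SIZE, batch_size=BATCH)

# Cache and prefetch so training is not starved while images are decoded on the CPU
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(AUTOTUNE)
test_ds = test_ds.prefetch(AUTOTUNE)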


Implementing Real-time Object Detection System using PyTorch and OpenCV

#artificialintelligence

Self-driving cars may still have difficulty telling a human from a garbage can, but that does not take anything away from the amazing progress state-of-the-art object detection models have made in the last decade. Combined with the image-processing abilities of libraries like OpenCV, it is much easier today to build a real-time object detection prototype in hours. In this guide, I will show you how to develop the sub-systems that go into a simple object detection application and how to put them all together. Some of you might be wondering why I am using Python; isn't it too slow for a real-time application? You are right, to some extent, but the most compute-heavy operations, such as prediction and image processing, are performed by PyTorch and OpenCV, both of which use C/C++ behind the scenes, so for our use case it won't make much difference whether we drive them from C or Python.
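
As a rough sketch of how those pieces fit together (the article does not specify a model or camera, so a pretrained torchvision Faster R-CNN and the default webcam are assumed here), the capture-predict-draw loop looks roughly like this:

import cv2
import torch
import torchvision

# Pretrained detector; on newer torchvision use weights="DEFAULT" instead of pretrained=True
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV gives BGR uint8 HxWxC; the model expects RGB float CHW in [0, 1]
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        pred = model([tensor])[0]
    for box, score in zip(pred["boxes"], pred["scores"]):
        if score < 0.6:  # confidence threshold, tune for your use case
            continue
        x1, y1, x2, y2 = box.int().tolist()
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()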


Architectures for Medical Image Segmentation [Part 2: Attention UNet]

#artificialintelligence

I started writing about network architectures useful for medical image segmentation. In the first article, I covered the basic UNet and 3D UNet; you can find that here. In this article, I'm going to go over Attention UNet. Fully convolutional neural networks (FCNNs) like UNet outperform traditional approaches in medical image analysis.
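
The piece that distinguishes Attention UNet from the plain UNet is the attention gate on each skip connection: the decoder's gating signal decides which encoder features are worth passing through. A minimal PyTorch sketch of such a gate (channel sizes are placeholders, and the gating signal is assumed to already match the skip connection's spatial size):

import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Attention gate in the style of Attention UNet: the decoder's gating
    signal g weights the encoder skip connection x before concatenation."""
    def __init__(self, f_g, f_x, f_int):
        super().__init__()
        self.w_g = nn.Sequential(nn.Conv2d(f_g, f_int, 1), nn.BatchNorm2d(f_int))
        self.w_x = nn.Sequential(nn.Conv2d(f_x, f_int, 1), nn.BatchNorm2d(f_int))
        self.psi = nn.Sequential(nn.Conv2d(f_int, 1, 1), nn.BatchNorm2d(1), nn.Sigmoid())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, g, x):
        # g: gating signal from the decoder, x: skip connection from the encoder
        a = self.relu(self.w_g(g) + self.w_x(x))
        alpha = self.psi(a)   # attention coefficients in [0, 1]
        return x * alpha      # irrelevant encoder features are suppressed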


AI system-on-chip runs on solar power

#artificialintelligence

AI is used in an array of useful applications, such as predicting a machine's lifetime through its vibrations, monitoring the cardiac activity of patients and incorporating facial recognition capabilities into video surveillance systems. The downside is that AI-based technology generally requires a lot of power and, in most cases, must be permanently connected to the cloud, raising issues related to data protection, IT security and energy use. CSEM engineers may have found a way to get around those issues, thanks to a new system-on-chip they have developed. It runs on a tiny battery or a small solar cell and executes AI operations at the edge--i.e., locally on the chip rather than in the cloud. What's more, their system is fully modular and can be tailored to any application where real-time signal and image processing is required, especially when sensitive data are involved.


Facebook can now detect deepfakes, 'the most dangerous crime of the future', and the AI used to make them

The Independent - Tech

Facebook has developed a model to tell when a video is a deepfake – and can even tell which algorithm was used to create it. The term "deepfake" refers to a video where artificial intelligence and deep learning – an algorithmic learning method used to train computers – have been used to make a person appear to say something they have not. Notable examples of deepfakes include a manipulated video of Richard Nixon's Apollo 11 presidential address and of Barack Obama insulting Donald Trump – and although they are relatively benign now, experts suggest they could be the most dangerous crime of the future. Detecting a deepfake relies on telling whether an image is real or not, but the information available to researchers can be limited, relying on potential input-output pairs or on hardware information that might not be available in the real world. Facebook's new process relies on detecting the unique patterns left behind by the artificially intelligent model that generated a deepfake.


Facebook Researchers Say They Can Detect Deepfakes And Where They Came From

NPR Technology

This image, made from a fake video featuring former President Barack Obama, shows elements of the facial mapping used in deepfakes, which let anyone make videos of real people appearing to say things they've never said. Facebook researchers say they've developed artificial intelligence that can identify so-called "deepfakes" and track their origin by using reverse engineering. Deepfakes are altered photos, videos, and still images that use artificial intelligence to appear like the real thing. They've become increasingly realistic in recent years, making it harder to distinguish the real from the fake with just the naked eye.


Deepfakes in 2021 -- How Worried Should We Be?

#artificialintelligence

Before I go any further, it's probably worth establishing what a deepfake is and isn't: a technique by which a digital image or video can be superimposed onto another while maintaining the appearance of an unedited image or video. The term is often misinterpreted, and that's potentially a result of definitions like this. Manipulating images and video in this way is certainly not new. Visual effects artists working on Hollywood films back in the '90s would probably describe parts of their job as something very similar.


Artificial Neural Patches

#artificialintelligence

This article describes what neural patches and patch systems are, their advantage over traditional neural network design, and why we're looking for people to train interesting artificial neural patches for image classification. It goes over the steps to train such patches using a simple Windows tool, how to test them in the wild on mobile devices (iOS and Android), and how to submit them for publication review. In 2006, researchers used fMRI (functional magnetic resonance imaging) and electrical recordings of individual nerve cells to find regions of the inferior temporal lobe that become active when macaque monkeys observe another monkey's face. They found that some nerve regions are triggered only when a face is identified, and those trigger other regions that show sensitivity only to specific orientations of the face or to specific feature exaggerations. Regions of a neural network that are conditionally activated in the presence of certain coarse features, and then extract finer features, are referred to as Neural Patches.
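
The article doesn't include an implementation, but purely as a hypothetical illustration of that idea of conditional activation, one can imagine a module where a cheap coarse detector gates a finer feature extractor (all names and sizes below are invented for the sketch, not the article's tool):

import torch
import torch.nn as nn

class NeuralPatch(nn.Module):
    """Hypothetical illustration only: a cheap coarse detector decides whether
    a finer feature extractor contributes to the output at all."""
    def __init__(self, in_ch, fine_ch):
        super().__init__()
        # Coarse detector: one score per image for "is the triggering feature present?"
        self.coarse = nn.Sequential(
            nn.Conv2d(in_ch, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 1), nn.Sigmoid())
        # Finer extractor, only meaningful when the coarse gate opens
        self.fine = nn.Sequential(
            nn.Conv2d(in_ch, fine_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(fine_ch, fine_ch, 3, padding=1), nn.ReLU())

    def forward(self, x):
        gate = self.coarse(x)                 # shape (N, 1), near 0 or 1
        fine = self.fine(x)                   # finer features
        return fine * gate.view(-1, 1, 1, 1)  # suppressed when the gate is closed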


This AI Prevents Bad Hair Days

#artificialintelligence

I explain Artificial Intelligence terms and news to non-experts. Could this be the technological innovation that hairstylists have been dying for? I'm sure a majority of us have had a bad haircut or two. But hopefully, with this AI, you'll never have to guess what a new haircut will look like ever again. This AI can transfer a new hairstyle and/or color onto a portrait so you can see how it would look before committing to the change.

