Nvidia unveiled a new federated learning edge computing reference application for radiology to help hospitals crunch medical data for better disease detection while protecting patient privacy. Called Clara Federated Learning, the system relies on Nvidia EGX, a computing platform announced earlier in 2019. It uses the Jetson Nano, a low-wattage computer that can provide up to half a trillion operations per second for tasks like image recognition. EGX enables low-latency artificial intelligence at the edge to act on data, in this case images from MRIs, CT scans and more. Nvidia announced Clara Federated Learning on Sunday at the Radiological Society of North America conference in Chicago.
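The privacy-preserving idea behind federated learning can be sketched in a few lines: each hospital trains on its own data locally, and only model weights, never patient images, are sent to a central server for averaging. The sketch below is a hypothetical toy example of federated averaging (FedAvg) on a linear model, not Nvidia's actual Clara API:

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1):
    # One gradient-descent step on a linear model, run at each hospital;
    # the raw data never leaves the site -- only updated weights do.
    preds = data @ weights
    grad = data.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_average(weight_list):
    # The central server aggregates by simple averaging (FedAvg).
    return np.mean(weight_list, axis=0)

# Toy setup: three "hospitals" holding private samples of the same task.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
features = [rng.normal(size=(50, 2)) for _ in range(3)]
sites = [(X, X @ true_w + rng.normal(scale=0.01, size=50)) for X in features]

global_w = np.zeros(2)
for _ in range(200):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = federated_average(updates)

print(global_w)  # converges close to the true weights [1, -2]
```

The shared model ends up close to what centralized training would find, even though no site ever revealed its data.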
The impact of AI on human lives is felt most strongly in the healthcare industry. AI-powered computer vision technology can help bring affordable healthcare to millions of people. Computer vision is already used to sort and find images on blogs and retail websites, and it also has applications in medicine: medical diagnosis depends on images such as CAT scans, MRIs, X-rays, sonograms, and others.
If you've ever wanted to see Einstein play charades, Rodin's "The Thinker" wink at you, or an ancient Chinese Emperor cast in a Chaplin movie -- then the AI-powered video transformation tech you're looking for is "face reenactment," which can digitally deliver all such fantastic scenarios. Unlike face swapping, which transfers a face from one source to another, face reenactment captures the movements of a driver face and expresses them through the identity of a target face. Starting with a dynamic driver face, researchers can manipulate any target face -- from today's celebrities to historical figures, of any age, ethnicity or gender -- to perform any humanly possible face-based task. Previous approaches to synthesizing a reenacted face used generative adversarial networks (GANs), which have demonstrated tremendous ability in a wide range of image generation tasks. GAN-based models, however, require at least a few minutes of training data for each target.
MIT researchers have devised a method that accelerates the process for creating and customizing templates used in medical-image analysis, to guide disease diagnosis. One use of medical image analysis is to crunch datasets of patients' medical images and capture structural relationships that may indicate the progression of diseases. In many cases, analysis requires use of a common image template, called an "atlas," that's an average representation of a given patient population. Atlases serve as a reference for comparison, for example to identify clinically significant changes in brain structures over time. Building a template is a time-consuming, laborious process, often taking days or weeks to generate, especially when using 3D brain scans.
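At its simplest, an atlas is a voxel-wise average of spatially aligned scans; the expensive part the MIT work accelerates is the registration needed before averaging. A minimal sketch of the averaging step, assuming a hypothetical array of already-registered volumes:

```python
import numpy as np

# Hypothetical stand-in for a set of already-registered 3D brain scans:
# 5 subjects, each a 4x4x4 intensity volume.
rng = np.random.default_rng(42)
scans = rng.random((5, 4, 4, 4))

# The atlas is the voxel-wise mean across the population -- the "average
# representation" the article describes. In practice, building one also
# requires iteratively registering every scan to the evolving template,
# which is what takes days or weeks for full-resolution 3D scans.
atlas = scans.mean(axis=0)

# Each subject can then be compared against the shared reference, e.g.
# via a voxel-wise deviation map to flag structural changes.
deviation = scans[0] - atlas
print(atlas.shape, deviation.shape)
```

Real pipelines alternate this averaging with nonlinear registration of each scan to the current template until the atlas stabilizes.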
Placentas can provide critical information about the health of the mother and baby, but only 20 percent of placentas are assessed by pathology exams after delivery in the U.S. The cost, time and expertise required to analyze them are prohibitive. Now, a team of researchers has developed a novel solution that could produce accurate, automated and near-immediate placental diagnostic reports through computerized photographic image analysis. Their research could allow all placentas to be examined, reduce the number of normal placentas sent for full pathological examination and create a less resource-intensive path to analysis for research--all of which may benefit health outcomes for mothers and babies. "The placenta drives everything to do with the pregnancy for the mom and baby, but we're missing placental data on 95 percent of births globally," said Alison Gernand, assistant professor of nutritional sciences in Penn State's College of Health and Human Development. "Creating a more efficient process that requires fewer resources will allow us to gather more comprehensive data to examine how placentas are linked to maternal and fetal health outcomes, and it will help us to examine placentas without special equipment and in minutes rather than days."
Last time, we explained how Time Distributed layers work in Keras and introduced the use of transfer learning with that kind of neural network. Now we can build a small network to apply what we learned. If you didn't read our last article, you may do it now: Detecting an "action" in videos can be very interesting -- let's imagine the possibilities: detecting someone in danger, a characteristic movement in space to detect asteroids, meteorological events from satellite images, sign language, and many more. Detecting an action is possible by analyzing a series of images (called "frames") taken over time. The Time Distributed layer provided by Keras makes this easy to try.
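As a refresher, here is a minimal sketch of `TimeDistributed` wrapping a small CNN so the same feature extractor runs on every frame of a clip, feeding a recurrent layer for action classification. The clip shape and the four action classes are illustrative:

```python
import numpy as np
from tensorflow.keras import layers, models

FRAMES, H, W, C = 8, 32, 32, 3  # illustrative clip shape

model = models.Sequential([
    layers.Input(shape=(FRAMES, H, W, C)),
    # TimeDistributed applies the same Conv2D to each of the 8 frames,
    # sharing weights across time.
    layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu")),
    layers.TimeDistributed(layers.GlobalAveragePooling2D()),
    # The per-frame feature vectors form a sequence an RNN can read.
    layers.GRU(32),
    layers.Dense(4, activation="softmax"),  # e.g. 4 action classes
])

clip = np.zeros((1, FRAMES, H, W, C), dtype="float32")
probs = model.predict(clip, verbose=0)
print(probs.shape)  # one probability vector over the 4 classes
```

A pretrained backbone (the transfer learning we discussed) would simply replace the `Conv2D` block inside the `TimeDistributed` wrapper.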
The artificial intelligence of things (AIoT) will span automotive electronics, Industry 4.0, and smart home applications, along with new IoT edge devices and human-machine interface devices, all of which will require new functionality in terms of size, power consumption, and performance. New microcontrollers are being developed to meet these demands with higher performance and lower power consumption, and with new RAM options that improve on the existing SDRAM and pSRAM. HyperRAM supports the HyperBus interface, and Winbond Electronics offers 32, 64 and 128Mbit devices. Hans Liao, technology manager of DRAMs at Winbond, explained that the computing power, data processing and image display functions of traditional MCUs are limited, and that new IoT devices often have a touch panel as the image control interface, or require stronger edge computing for image processing and speech recognition, demanding higher-performance, lower-power microcontrollers. Winbond's 64Mbit HyperRAM consumes 90 microwatts at 1.8V, about half that of a DRAM of the same capacity, the company claims.
The Hanguang 800 is being implemented across many application scenarios within Aliyun, ranging from video classification to smart city applications. For example, the company's popular Pailitao platform applies visual image search to e-commerce, allowing customers to search for items by taking a photo of the query object. Using AI-based image recognition and indexing powered by the new Hanguang 800, Aliyun can increase image processing efficiency by 12 times compared to GPUs. With regard to smart city tech, Aliyun says it previously used 40 traditional GPUs to process videos of central Hangzhou with a latency of 300ms. Now the task requires only four Hanguang 800 chips, with a lower latency of 150ms.
Microsoft's Azure cloud platform customers now have access to microchips made by Graphcore. Graphcore, a British startup founded in 2016, has put the spotlight (and several million dollars in investment capital) on its newest baby: Intelligence Processing Unit (IPU) microchips, designed specifically to work with AI. Unlike most AI-compatible chips, which were designed with specific applications in mind, Graphcore's IPUs were built to support the calculations behind facial recognition, speech recognition, language parsing, vehicle automation, and machine learning. Benchmarks posted by Microsoft show the IPU matching or exceeding the performance of the current top AI chips, and Graphcore's hardware is reported to perform image-processing tasks several times faster than its competitors'. Graphcore's IPU also carries enough on-chip memory to eliminate having to move data on and off the chip for processing.
The AMRC is helping lead a revolution in the UK. Inside its glass-walled, state-of-the-art Factory 2050 facility in Sheffield, the centre develops digitally driven solutions that employ AI, the Internet of Things (IoT), robotics and other emerging technologies, all with the aim of solving real-world manufacturing problems. Once considered futuristic, these solutions are ready for full-scale deployment today, helping UK manufacturers increase their performance while fueling the Fourth Industrial Revolution. "The whole ethos behind the AMRC is to maintain UK competitiveness in global manufacturing," explains Tom Hodgson, Theme Lead, Inspection and AI, AMRC. "We take ideas that come out of the universities, where they've been developed to a prototype level. Then, with our partner companies, we conduct research projects to transition those technologies into production environments."