The Food and Drug Administration (FDA) is announcing the following public workshop entitled "Evolving Role of Artificial Intelligence in Radiological Imaging." The intent of this public workshop is to discuss emerging applications of Artificial Intelligence (AI) in radiological imaging, including AI devices intended to automate the diagnostic radiology workflow as well as guided image acquisition devices. The purpose of the workshop is to work with interested stakeholders to identify the benefits and risks associated with the use of AI in radiological imaging. We also plan to discuss best practices for the validation of AI-automated radiological imaging software and image acquisition devices. Validation of device performance with respect to the intended use is critical for assessing safety and effectiveness.
Artificial Intelligence, Machine Learning, and high-velocity analytics workloads are going mainstream. Enterprises of all types and sizes want to seize the opportunity their data presents. As these workloads move from development to production, organizations face a significant challenge in the supporting storage architecture. At the heart of the problem is the file system the organization will use to store the information: it needs to be fast, scalable, durable, and cloud-ready.
We have a threefold approach. First, we treat AI as a technological choice to be weighed against more traditional heuristic approaches, assessing it undogmatically and with clear eyes, especially when the goal is to improve the performance of certain existing technologies. Typical areas where we are carrying out this work include sound and image processing, video compression, computer vision, and cognitive state prediction by measuring physiological signals. Second is the use of AI as an approach for developing a solution to a problem for which modeling is complex. This approach is closely tied to the use of data, whether personal or corporate, with a major focus on health care.
The ability to automatically estimate the quality and coverage of the samples produced by a generative model is a vital requirement for driving algorithm research. We present an evaluation metric that can separately and reliably measure both of these aspects in image generation tasks by forming explicit, non-parametric representations of the manifolds of real and generated data. We demonstrate the effectiveness of our metric in StyleGAN and BigGAN by providing several illustrative examples where existing metrics yield uninformative or contradictory results. Furthermore, we analyze multiple design variants of StyleGAN to better understand the relationships between the model architecture, training methods, and the properties of the resulting sample distribution. In the process, we identify new variants that improve the state-of-the-art.
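The manifold construction described above can be sketched with a simple k-nearest-neighbor rule: each feature vector in a set gets a hypersphere whose radius is the distance to its k-th nearest neighbor within that set, precision is the fraction of generated samples that land inside some real-sample hypersphere, and recall is the symmetric quantity with the roles swapped. The NumPy sketch below is an illustrative simplification, not the authors' implementation; the choice of feature space, k, and Euclidean distance are assumptions.

```python
import numpy as np

def knn_radii(feats, k=3):
    """Radius of each point's k-NN hypersphere within its own set."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    # column 0 of the sorted distances is the self-distance (zero),
    # so index k picks the k-th nearest true neighbor
    return np.sort(d, axis=1)[:, k]

def on_manifold(query, ref, ref_radii):
    """True where a query point falls inside at least one reference hypersphere."""
    d = np.linalg.norm(query[:, None, :] - ref[None, :, :], axis=-1)
    return np.any(d <= ref_radii[None, :], axis=1)

def precision_recall(real, fake, k=3):
    precision = on_manifold(fake, real, knn_radii(real, k)).mean()  # fakes on real manifold
    recall = on_manifold(real, fake, knn_radii(fake, k)).mean()     # reals on fake manifold
    return float(precision), float(recall)
```

On two samples drawn from the same distribution, both numbers approach one; on a generated set far from the real data, both collapse toward zero, which is what lets the metric separate fidelity from coverage.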
The radiological sciences have advanced in a revolutionary manner over the last ten years, especially in medical imaging and computerized medical image processing. These techniques aid in understanding a disease as well as in initiating and evaluating ongoing treatment. Beyond individual care, datasets of these images are used in further analysis of such diseases occurring around the world as a whole. Heather Landi, a senior editor at Fierce Healthcare, writes in an article that IBM researchers estimate that medical images, as the largest and fastest-growing data source in the healthcare industry, account for at least 90 percent of all medical data. We can use a computer to process and manipulate multidimensional digital images of physiological structures in order to visualize hidden characteristic diagnostic features that are very difficult or perhaps impossible to see using planar imaging methods.
Kornia is a differentiable computer vision library for PyTorch. It consists of a set of routines and differentiable modules to solve generic computer vision problems. At its core, the package uses PyTorch as its main backend both for efficiency and to take advantage of reverse-mode auto-differentiation to define and compute the gradients of complex functions. Inspired by OpenCV, the library is composed of a set of packages containing operators that can be inserted within neural networks to train models to perform image transformations, epipolar geometry, depth estimation, and low-level image processing such as filtering and edge detection that operate directly on tensors. Run our Jupyter notebook examples to learn to use the library.
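To give a concrete flavor of the kind of low-level operator such a library exposes, here is a plain-NumPy Sobel edge detector operating directly on a 2D array. This is a hedged, non-differentiable stand-in for illustration only, not Kornia's actual PyTorch implementation (which works on batched tensors and supports backpropagation).

```python
import numpy as np

def sobel_magnitude(img):
    """Sobel gradient magnitude of a 2D grayscale image of shape (H, W)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal-gradient kernel
    ky = kx.T                                   # vertical-gradient kernel
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")        # replicate borders
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    return np.hypot(gx, gy)
```

A vertical step edge in the input produces a strong response along the edge and zero response in flat regions; in Kornia the analogous operator would additionally propagate gradients to upstream network layers.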
We address the problem of adaptive sensor control in dynamic resource-constrained sensor networks. We focus on a meteorological sensing network comprising radars that can perform sector scanning rather than always scanning 360 degrees. We compare three sector scanning strategies. The sit-and-spin strategy always scans 360 degrees. The limited lookahead strategy additionally uses the expected environmental state K decision epochs in the future, as predicted from Kalman filters, in its decision-making.
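The Kalman-filter prediction used by the limited lookahead strategy can be sketched as follows: filter the measurement history to estimate the current environmental state, then iterate the state-transition model K epochs forward with no further updates. The constant-velocity state model, scalar observation, and noise parameters below are illustrative assumptions, not the paper's actual meteorological model.

```python
import numpy as np

def kalman_predict_ahead(z_history, K, q=1e-3, r=0.1):
    """Filter a scalar measurement sequence, then predict K epochs ahead."""
    F = np.array([[1.0, 1.0],
                  [0.0, 1.0]])        # constant-velocity transition (unit time step)
    H = np.array([[1.0, 0.0]])        # we observe position only
    Q = q * np.eye(2)                 # process noise covariance
    R = np.array([[r]])               # measurement noise covariance
    x = np.array([[z_history[0]], [0.0]])
    P = np.eye(2)
    for z in z_history[1:]:
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step
        y = np.array([[z]]) - H @ x           # innovation
        S = H @ P @ H.T + R
        Kg = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + Kg @ y
        P = (np.eye(2) - Kg @ H) @ P
    # open-loop prediction K decision epochs into the future
    for _ in range(K):
        x = F @ x
    return float(x[0, 0])
```

Fed a linearly growing signal, the filter learns the trend and its K-step-ahead extrapolation lands near the true future value, which is the quantity the lookahead strategy would feed into its scan decisions.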
We present a new analysis for the combination of binary classifiers. We propose a theoretical framework based on the Neyman-Pearson lemma to analyze combinations of classifiers. In particular, we give a method for finding the optimal decision rule for a combination of classifiers and prove that it has the optimal ROC curve. We also show how our method generalizes and improves on previous work on combining classifiers and generating ROC curves. Papers published at the Neural Information Processing Systems Conference.
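The Neyman-Pearson lemma says the optimal decision rule thresholds the likelihood ratio of the observed classifier outputs. Under an assumed toy model, not the paper's general framework, where each classifier's score is Gaussian given the class and the scores are conditionally independent, the combined log-likelihood ratio is monotone in a precision-weighted sum of the scores, and the combined rule's ROC curve dominates each individual classifier's:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank statistic (no ties assumed)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(1)
n = 4000
labels = rng.integers(0, 2, n)
# two conditionally independent classifier scores with different noise levels
s1 = labels + rng.normal(0.0, 1.0, n)
s2 = labels + rng.normal(0.0, 1.5, n)
# for Gaussian class-conditional scores, the combined log-likelihood ratio
# is monotone in a 1/sigma^2-weighted sum of the individual scores
combined = s1 / 1.0**2 + s2 / 1.5**2
```

Computing the three AUCs on this synthetic data shows the likelihood-ratio combination beating both inputs; the Gaussian form and the independence assumption are what make the optimal rule reduce to a simple weighted sum here.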
We introduce PiCoDes: a very compact image descriptor which nevertheless allows high performance on object category recognition. In particular, we address novel-category recognition: the task of defining indexing structures and image representations which enable a large collection of images to be searched for an object category that was not known when the index was built. Instead, the training images defining the category are supplied at query time. We explicitly learn descriptors of a given length (from as small as 16 bytes per image) which have good object-recognition performance. In contrast to previous work in the domain of object recognition, we do not choose an arbitrary intermediate representation, but explicitly learn short codes.
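A rough flavor of such compact binary descriptors can be given with random-projection hashing: take the signs of random linear projections and pack them into bits, so that Hamming distance approximates angular similarity. Note the contrast with the abstract above: PiCoDes learns its projections discriminatively for object recognition, whereas this sketch uses random ones purely for illustration; 128 bits matches the 16-bytes-per-image code length mentioned.

```python
import numpy as np

def make_hasher(dim, n_bits=128, seed=0):
    """Return an encoder mapping a dim-vector to an n_bits-bit packed code."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_bits, dim))          # random projection directions
    def encode(x):
        bits = (W @ x > 0).astype(np.uint8)     # one sign bit per projection
        return np.packbits(bits)                # 128 bits -> 16 bytes
    return encode

def hamming(a, b):
    """Hamming distance between two packed codes."""
    return int(np.unpackbits(a ^ b).sum())
```

Nearby vectors flip few sign bits and so land at small Hamming distance, which is what makes such codes usable as an indexing structure for large image collections.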
Learning from multi-view data is important in many applications, such as image classification and annotation. In this paper, we present a large-margin learning framework to discover a predictive latent subspace representation shared by multiple views. Our approach is based on an undirected latent space Markov network that fulfills a weak conditional independence assumption that multi-view observations and response variables are independent given a set of latent variables. We provide efficient inference and parameter estimation methods for the latent subspace model. Finally, we demonstrate the advantages of large-margin learning on real video and web image data for discovering predictive latent representations and improving the performance on image classification, annotation and retrieval.