News concerning Artificial Intelligence (AI) abounds again. The progress of deep learning techniques is quite remarkable, with demonstrations such as self-driving cars, Watson on Jeopardy!, and victories over human Go players. This rate of progress has led some notable scientists and business people to warn about the potential dangers of AI as it approaches a human level. Exascale computers are being considered that would approach what many believe is this level. However, many questions remain unanswered about how the human brain works, and specifically about the hard problem of consciousness with its integrated subjective experiences.
This week Stanford was the center of attention in the artificial intelligence community after it announced that it had trained a deep learning model that diagnoses skin cancer as accurately as a dermatologist. The algorithm can apparently identify a cancerous mole from nothing more than a picture, meaning it could be put into the hands of anyone with a simple smartphone -- otherwise known as a pocket supercomputer. Deep learning is revolutionizing the way innovators apply AI and data science to solve real-world problems. Image classification, facial recognition, computational linguistics, translation, augmented reality, self-driving cars -- all of these fields have made huge leaps in the last several years as computer scientists apply the rapidly developing machine learning models that empower them. With all the excitement around these developments, one starts to wonder: what does a future with advanced AI look like?
Who would have thought in the 1950s that AI and deep learning would make self-driving cars, and seemingly impossible missions such as a mission to Mars, almost achievable? Not only are these innovations becoming possible, but predictions about the future are getting quite interesting as well. While most people foresee AI's future mainly in the software sector, I believe the most influential application of AI-based nanochips will be in the medical diagnostics industry. These chips could be implanted in the human brain, much as a woman today can have a birth-control rod implanted in her arm instead of taking pills. This nanobiochip (NBC) would be biocompatible and programmable.
Wong, Josiah (University of Central Florida) | Hastings, Lauren (University of Central Florida) | Negy, Kevin (University of Central Florida) | Gonzalez, Avelino J. (University of Central Florida) | Ontañón, Santiago (Drexel University) | Lee, Yi-Ching (George Mason University)
Detection of abnormal behavior is the catalyst for many applications that seek to react to deviations from behavioral expectations. However, this is often difficult when direct communication with the performer is impractical. Therefore, we propose to create models of normal human performance and then compare a human's actual behavior to the performance of those models. Any detected deviations can then be used to determine what condition(s) could be influencing the deviant behavior. We build the models of human behavior through machine learning from observation; more specifically, we employ the Genetic Context Learning algorithm to create models of the normal car-driving behaviors of different humans with and without ADHD (Attention Deficit Hyperactivity Disorder). We use a car simulator for our studies to eliminate risk to our test subjects and to other drivers. Our results show that different driving situations have varying utility in abnormal behavior detection. Learning from observation was successful in building models to be applied to abnormal behavior detection.
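The core comparison described above can be sketched in a few lines. This is an illustrative toy, not the paper's Genetic Context Learning implementation: the "model of normal behavior" is reduced to an expected (steering, speed) pair for one driving situation, and the deviation threshold is a hypothetical value chosen for the example.

```python
# Sketch: flag abnormal behavior by comparing a learned model's expected
# actions to a driver's observed actions in the same situation.

def deviation(expected, observed):
    """Mean absolute difference between expected and observed action vectors."""
    return sum(abs(e - o) for e, o in zip(expected, observed)) / len(expected)

def is_abnormal(expected, observed, threshold=0.2):
    """Flag behavior whose deviation from the normal model exceeds a threshold."""
    return deviation(expected, observed) > threshold

# Expected (steering, speed) from a model of normal driving on a straight road,
# versus two observed traces.
normal_model = (0.0, 0.6)     # no steering, moderate speed
typical_trace = (0.05, 0.55)  # small deviation -> normal
erratic_trace = (0.4, 0.9)    # large deviation -> abnormal

print(is_abnormal(normal_model, typical_trace))  # False
print(is_abnormal(normal_model, erratic_trace))  # True
```

In the actual study the learned models predict actions over whole driving scenarios, and the pattern of deviations (not a single threshold) is what distinguishes conditions such as ADHD.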
Algorithms that fuse multiple input sources benefit from both complementary and shared information. Shared information may provide robustness to faulty or noisy inputs, which is indispensable for safety-critical applications such as self-driving cars. We investigate learning fusion algorithms that are robust against noise added to a single source. We first demonstrate that robustness against single-source noise is not guaranteed in a linear fusion model. Motivated by this observation, we propose two approaches to increase robustness: a carefully designed loss with corresponding training algorithms for deep fusion models, and a simple convolutional fusion layer that has a structural advantage in dealing with noise. Experimental results show that both the training algorithms and our fusion layer make a deep fusion-based 3D object detector robust against noise applied to a single source while preserving the original performance on clean data.
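The observation that a linear fusion model is not robust to single-source noise can be illustrated with a minimal sketch. This is not the paper's model; it is a hypothetical scalar example showing that a weighted sum passes corruption from one source straight through, scaled by that source's weight.

```python
# Sketch: noise on one input of a linear fusion corrupts the fused output
# in proportion to that input's weight.

def linear_fusion(x1, x2, w1=0.5, w2=0.5):
    """Weighted sum of two scalar source readings."""
    return w1 * x1 + w2 * x2

clean = linear_fusion(1.0, 1.0)        # both sources agree
noisy = linear_fusion(1.0 + 2.0, 1.0)  # noise of magnitude 2.0 on source 1

error = noisy - clean
print(error)  # 1.0 == w1 * noise: the corruption is not suppressed
```

Unless the weight on the noisy source is zero, no choice of fixed weights removes the noise while keeping that source's clean contribution, which motivates the learned losses and structured fusion layer proposed in the abstract.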