Why the AI revolution now? Because of 6 key factors.

#artificialintelligence

About: Data-Driven Science (DDS) provides training for people building a career in Artificial Intelligence (AI). In recent years, AI has taken off and become a topic that frequently makes the news. But why is that? AI research began in the mid-twentieth century, when mathematician Alan Turing asked "Can Machines Think?" in a famous 1950 paper. However, it was not until the 21st century that Artificial Intelligence shaped real-world applications that impact billions of people and most industries across the globe.


Seeing Through Walls

Communications of the ACM

Machine vision coupled with artificial intelligence (AI) has made great strides toward letting computers understand images. Thanks to deep learning, which processes information in a way analogous to the human brain, machine vision is doing everything from keeping self-driving cars on the right track to improving cancer diagnosis by examining biopsy slides or x-ray images. Now some researchers are going beyond what the human eye or a camera lens can see, using machine learning to watch what people are doing on the other side of a wall. The technique relies on low-power radio frequency (RF) signals, which reflect off living tissue and metal but pass easily through wooden or plaster interior walls. AI can decipher those signals, not only to detect the presence of people, but also to see how they are moving, and even to predict the activity they are engaged in, from talking on a phone to brushing their teeth.


Hitting the Books: Do we really want our robots to have consciousness?

Engadget

From Star Trek's Data and 2001's HAL to Columbus Day's Skippy the Magnificent, pop culture is chock full of fully conscious AIs who, in many cases, are more human than the humans they serve alongside. But is all that self-actualization really necessary for these synthetic life forms to carry out their essential duties? In his new book, How to Grow a Robot: Developing Human-Friendly, Social AI, author Mark H. Lee examines the social shortcomings of today's AI and delves into the promises and potential pitfalls surrounding deep learning techniques, currently believed to be our most effective tool for building robots capable of doing more than a handful of specialized tasks. In the excerpt below, Lee argues that the robots of tomorrow don't necessarily need -- nor should they particularly seek out -- the feelings and experiences that make up the human condition. Although I argue for self-awareness, I do not believe that we need to worry about consciousness.


Edge AI Is The Future, Intel And Udacity Are Teaming Up To Train Developers

#artificialintelligence

On April 16, 2020, Intel and Udacity jointly announced their new Intel Edge AI for IoT Developers Nanodegree program to train the developer community in deep learning and computer vision. If you are wondering where AI is headed, now you know: it is headed to the edge. Edge computing is the concept of storing and processing data directly at the location where it is needed. The global edge computing market is forecast to reach 1.12 trillion dollars by 2023. Intel and Udacity aim to train 1 million developers.


Top 50 AI Articles, Papers & Videos from Q1 2020

#artificialintelligence

We compiled a list here. See our list of webinars and virtual summits for more online content.


Battling a killer bug with deep tech

#artificialintelligence

That said, technologies such as big data, cloud computing, supercomputers, artificial intelligence (AI), robotics, 3D printing, thermal imaging and 5G are being used to effectively complement the traditional methods of increased hygiene, self- and forced quarantines, and enforced global travel bans. With traditional measures in place, police officers in China, for instance, now wear AI-powered helmets that automatically record the temperatures of pedestrians. The high-tech headgear has an infrared camera and sounds an alarm if anyone within a 16ft radius has a fever. Equipped with facial-recognition technology, it can also display a pedestrian's personal information, such as their name, on a virtual screen. Officials at railway stations, airports and other public areas in India, too, are using smart thermal scanners to record temperatures from a distance, helping to identify potential coronavirus carriers.


A Survey on Edge Intelligence

arXiv.org Artificial Intelligence

Edge intelligence refers to a set of connected systems and devices that collect, cache, process, and analyse data close to where it is captured, using artificial intelligence. The aim of edge intelligence is to improve the quality and speed of data processing and to protect the privacy and security of the data. Although the field emerged only recently, around 2011, it has shown explosive growth over the past five years. In this paper, we present a thorough and comprehensive survey of the literature on edge intelligence. We first identify four fundamental components of edge intelligence, namely edge caching, edge training, edge inference, and edge offloading, based on theoretical and practical results from proposed and deployed systems. We then systematically classify the state of the art, examining research results and observations for each of the four components, and present a taxonomy covering practical problems, adopted techniques, and application goals. For each category, we elaborate on, compare, and analyse the literature in terms of adopted techniques, objectives, performance, advantages, and drawbacks. This survey provides a comprehensive introduction to edge intelligence and its application areas. In addition, we summarise the development of this emerging research field and the current state of the art, and discuss important open issues and possible theoretical and technical solutions.
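One of the four components named above, edge offloading, amounts to deciding whether a device should run a computation locally or ship it to a more powerful node. A minimal sketch of such a decision rule, with purely illustrative names and numbers not taken from the survey, might look like this:

```python
# Illustrative sketch of an edge offloading decision (hypothetical
# function and parameter names; the latency figures are made up).

def should_offload(local_latency_ms, payload_kb, bandwidth_kbps, cloud_latency_ms):
    """Offload inference only when the estimated transfer time plus
    remote compute time beats on-device compute time."""
    transfer_ms = payload_kb / bandwidth_kbps * 1000.0
    return transfer_ms + cloud_latency_ms < local_latency_ms

# A constrained device with a fast link benefits from offloading:
print(should_offload(local_latency_ms=200, payload_kb=50,
                     bandwidth_kbps=5000, cloud_latency_ms=40))   # True

# A capable device behind a slow link keeps the work local:
print(should_offload(local_latency_ms=30, payload_kb=500,
                     bandwidth_kbps=1000, cloud_latency_ms=40))   # False
```

Real offloading policies in the surveyed literature also weigh energy budgets, privacy constraints, and fluctuating network conditions, but the same cost-comparison structure underlies most of them.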


Noah Schwartz, Co-Founder & CEO of Quorum – Interview Series

#artificialintelligence

Noah is an AI systems architect. Prior to founding Quorum, Noah spent 12 years in academic research, first at the University of Southern California and most recently at Northwestern as the Assistant Chair of Neurobiology. His work focused on information processing in the brain, and he has translated his research into products in augmented reality, brain-computer interfaces, computer vision, and embedded robotics control systems. Your interest in AI and robotics started when you were a little boy. How were you first introduced to these technologies?


Julia Language in Machine Learning: Algorithms, Applications, and Open Issues

arXiv.org Machine Learning

Machine learning is driving development across many fields in science and engineering. A simple and efficient programming language could accelerate applications of machine learning in these fields. Currently, the programming languages most commonly used to develop machine learning algorithms include Python, MATLAB, and C/C++. However, none of these languages balances efficiency and simplicity well. The Julia language is a fast, easy-to-use, open-source programming language, originally designed for high-performance computing, that balances efficiency and simplicity well. This paper summarizes related research work and developments in the application of the Julia language to machine learning. It first surveys the popular machine learning algorithms developed in the Julia language. Then, it investigates applications of machine learning algorithms implemented in Julia. Finally, it discusses the open issues and potential future directions that arise in the use of the Julia language in machine learning.