If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
These past few months have not been kind to any of us. The ripples caused by the COVID-19 crisis are felt far and wide, and the world's economies have taken a staggering blow. As with most things in life, however, this crisis has also brought some interesting side effects: reimagining business for the digital age is now the number-one priority for many of today's top executives. We offer practical advice and examples of how to do it right.
A University of Maryland expert in machine learning is being funded by the National Institute of Standards and Technology (NIST) to develop metrics that will bridge the knowledge gap between empirical and certifiable defenses against adversarial attacks. Soheil Feizi, assistant professor of computer science with an appointment in the University of Maryland Institute for Advanced Computer Studies (UMIACS), is principal investigator of the $387K two-year project. An adversarial attack involves making small, carefully crafted changes to a machine learning system's input data in order to confuse the algorithm, resulting in flawed outputs. Some of these changes are so small they can fly under the radar undetected, posing a serious security risk for AI systems that are increasingly being applied in industrial settings, medicine, information analysis and more. Both empirical and certifiable defenses have recently gained attention in the machine learning community for showing success against adversarial attacks, says Feizi.
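To make the idea concrete, here is a minimal sketch of one classic attack of this kind, the Fast Gradient Sign Method, applied to a toy logistic-regression "model". The weights and input below are invented for illustration and are not related to Feizi's project; the point is only to show how a small, gradient-directed change to the input shifts the model's output.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    The gradient of the cross-entropy loss with respect to the input x
    is (p - y) * w, so the attack nudges every feature by eps in the
    sign of that gradient, i.e. the direction that increases the loss.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical fixed model and input, chosen purely for illustration.
w = np.array([1.5, -2.0, 0.7])
b = 0.1
x = np.array([0.4, -0.3, 0.2])   # clean input, true label 1
y = 1.0

p_clean = sigmoid(np.dot(w, x) + b)
x_adv = fgsm_perturb(x, w, b, y, eps=0.25)
p_adv = sigmoid(np.dot(w, x_adv) + b)
print(f"confidence on clean input: {p_clean:.3f}, after attack: {p_adv:.3f}")
```

Even with a per-feature change of only 0.25, the model's confidence in the correct label drops noticeably; with a larger budget the prediction flips outright. Empirical defenses harden models against such perturbations experimentally, while certifiable defenses prove that no perturbation within a given budget can change the output.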
Current research and applications in the field of artificial intelligence (AI) face several key challenges. One is the a priori estimation of the dataset size required to achieve a desired test accuracy. For example, how many handwritten digits does a machine have to learn before it can classify a new one with a success rate of 99%? Similarly, how many specific types of circumstances does an autonomous vehicle have to learn before its reactions will not lead to an accident? Fast online decision making of this kind is representative of many aspects of human activity, robotic control and network optimization.
Modern machine learning research has demonstrated remarkable achievements. Today, we can train machines to detect objects in images, extract meaning from text, stop spam emails, drive cars, discover new drug candidates, and beat top players in Chess, Go, and countless other games. Many of these advancements are powered by deep learning, in particular deep neural networks. Yet the theory behind deep neural networks remains poorly understood. Sure, we understand the math of what individual neurons are doing, but we lack a mathematical theory of the emergent behavior of entire networks.
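That local math really is simple. A single artificial neuron computes a weighted sum of its inputs, adds a bias, and applies a nonlinearity, as in this sketch (weights and inputs are arbitrary illustrative values):

```python
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    passed through a nonlinearity (here ReLU, max(0, z))."""
    z = float(np.dot(w, x) + b)
    return max(0.0, z)

x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.8, 0.2, -0.5])   # learned weights (illustrative)
b = 0.1                          # bias

# 0.5*0.8 + (-1.0)*0.2 + 2.0*(-0.5) + 0.1 = -0.7, clipped to 0 by ReLU
print(neuron(x, w, b))  # 0.0
```

The hard part is not this unit but what millions of such units do collectively after training; that emergent behavior is where the theory is still missing.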
The field of physics could provide a solution to some of the key challenges encountered in the artificial intelligence field, according to new research from Bar-Ilan University. Among those challenges are estimating the necessary dataset size, determining how many circumstances a system must learn in advance, and making fast, on-the-spot decisions. Tackling these challenges may be possible through the use of a central concept in physics known as power-law scaling.

As described in an article published last Thursday in the academic journal Scientific Reports, power-law scaling arises in a number of different phenomena, from the timing and magnitude of earthquakes to stock market fluctuations to the frequency of word use in linguistics. It is this concept, originally devised to describe how magnets form as bulk iron cools, that could see application in the AI field, especially in deep learning.

"Test errors with online learning, where each example is trained only once, are in close agreement with state-of-the-art algorithms consisting of a very large number of epochs, where each example is trained many times. This result has an important implication on rapid decision making such as robotic control," the study's lead author, Prof. Ido Kanter of Bar-Ilan's Department of Physics and Gonda (Goldshmied) Multidisciplinary Brain Research Center, said in a statement. "The power-law scaling, governing different dynamical rules and network architectures, enables the classification and hierarchy creation among the different examined classification or decision problems."

"One of the important ingredients of the advanced deep learning algorithm is the recent new bridge between experimental neuroscience and advanced artificial intelligence learning algorithms," said co-author and PhD student Shira Sardi.
Learning-based methodologies increasingly find applications in safety-critical domains like autonomous driving and medical robotics. Due to the rare nature of dangerous events, real-world testing is prohibitively expensive and unscalable. In this work, we employ a probabilistic approach to safety evaluation in simulation, where we are concerned with computing the probability of dangerous events. We develop a novel rare-event simulation method that combines exploration, exploitation, and optimization techniques to find failure modes and estimate their rate of occurrence. We provide rigorous guarantees for the performance of our method in terms of both statistical and computational efficiency. Finally, we demonstrate the efficacy of our approach on a variety of scenarios, illustrating its usefulness as a tool for rapid sensitivity analysis and model comparison that are essential to developing and testing safety-critical autonomous systems.
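The core difficulty the paragraph describes, estimating the probability of an event too rare for naive simulation, can be illustrated with the textbook technique of importance sampling. This is not the paper's algorithm, only the basic idea it builds on: sample from a distribution shifted toward the failure region, then reweight by the likelihood ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "failure" event: a standard-normal safety metric exceeding 4.0.
# Its true probability is about 3.17e-5, so naive Monte Carlo with
# 100,000 samples typically sees only a handful of failures, or none.
threshold, N = 4.0, 100_000

# Naive Monte Carlo estimate.
x = rng.standard_normal(N)
p_naive = np.mean(x > threshold)

# Importance sampling: draw from N(threshold, 1), centered on the
# failure boundary, and reweight each sample by the likelihood ratio
# phi(y) / phi(y - threshold) = exp(-threshold*y + threshold**2 / 2).
y = rng.standard_normal(N) + threshold
weights = np.exp(-threshold * y + 0.5 * threshold**2)
p_is = np.mean((y > threshold) * weights)

print(f"naive Monte Carlo:   {p_naive:.2e}")
print(f"importance sampling: {p_is:.2e}   (true value ~ 3.17e-05)")
```

With the same sample budget, the importance-sampling estimate lands close to the true probability while the naive estimate is dominated by noise; methods like the one described above layer exploration and optimization on top of this reweighting idea to find and quantify failure modes in far higher-dimensional simulators.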
Enterprises now acknowledge the value of having a highly capable workforce. The ever-changing business landscape demands that workers continually upskill and reskill, driving greater investment in employee training. Most organizations are now reinforcing their human resources and training and development departments to help them address the need. Large US companies spent an average of $17.7 million on such efforts in 2019. Managing employee training, however, has its own set of challenges.
Brainome, a new player in the machine learning space, is today launching Daimensions, a product which the company says helps customers take a "measure before build" approach to machine learning (ML) model development. The product, aimed at data scientists, helps optimize training data analysis and data volume management, and their downstream effects on training time, model size and performance. ZDNet spoke with Brainome's co-founders, Bertrand Irissou (CEO) and Gerald Friedland (CTO). The two provided a careful, thoughtful and thorough explanation of how the company's approach to ML differs from others. Brainome's take on ML is that much of the common model experimentation process can be optimized. Trial-and-error can be largely avoided by specifying the model's qualities and then building it, rather than following the standard experimentation approach of building several candidate models, then seeing which performs best.
While the headline may seem intriguing, it is certainly interesting to consider how things would function if Robotic Process Automation (RPA) played a pivotal role in hyperautomation in the post-COVID era. Hyperautomation refers to the use of a combination of technologies to automate, simplify, discover, design, measure, and manage workflows and processes across the enterprise. Although times have been quite difficult in the current scenario, it has been noticed that leaders who were slow to adopt automation technologies -- such as Robotic Process Automation (RPA), Artificial Intelligence (AI), and Machine Learning (ML) -- have started to leverage them as a way of cutting costs during economic turmoil, providing faster customer service, and revamping their distributed work operations. Under all such circumstances, robotics is playing a key role, paving the way for a brighter world in the days ahead. Hospitals, for example, are rapidly deploying new technologies to better support their staff, and have been facilitating many automation-related changes.
Organizations with an interdisciplinary team have a "far higher ratio of success" when deploying AI projects, said Arun Chandrasekaran, distinguished VP analyst at Gartner, speaking at a Gartner IT Symposium/Xpo Americas session last week. Interdisciplinary teams that blend roles across business and data science have a higher ratio of success with AI projects, as well as a faster time to production. This trend "clearly tells us that AI needs to be a team sport," said Chandrasekaran. "However, in reality what we see in most organizations is data scientists wearing too many hats, because there's a dearth of skills across other areas," he said.