If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
For human travelers, the iconic moment of space exploration occurred a half-century ago, when Neil Armstrong planted the first human boot-print on the moon. But if you don't mind using robots as our stand-ins, the greatest era is unfolding right now on Mars, where NASA's Curiosity rover is rolling across the rusty, dusty surface and leaving behind tread marks that spell out the letters "J-P-L" in Morse code. JPL stands for the Jet Propulsion Laboratory, the NASA center that designed and built Curiosity along with three earlier Mars rovers. Collectively, these machines have racked up 46.4 miles of travel, tremendously expanded our understanding of the Martian environment, and energized the search for life in the universe. Everywhere the rovers have gone, they have discovered unexpected complexity.
On a chilly evening last fall, I stared into nothingness out of the floor-to-ceiling windows in my office on the outskirts of Harvard's campus. As a purplish-red sun set, I sat brooding over my dataset on rat brains. I thought of the cold windowless rooms in downtown Boston, home to Harvard's high-performance computing center, where computer servers were holding on to a precious 48 terabytes of my data. I had recorded the 13 trillion numbers in this dataset as part of my Ph.D. experiments, asking how the visual parts of the rat brain respond to movement. Printed on paper, the dataset would fill 116 billion pages, double-spaced. When I recently finished writing the story of my data, the magnum opus fit on fewer than two dozen printed pages. Performing the experiments turned out to be the easy part. I had spent the last year agonizing over the data, observing and asking questions. The answers left out large chunks that did not pertain to the questions, as a map leaves out irrelevant details of a territory.
Fuzzing, or fuzz testing, is the process of finding security vulnerabilities in input-parsing code by repeatedly testing the parser with modified, or fuzzed, inputs. Since the early 2000s, fuzzing has become a mainstream practice in assessing software security. Thousands of security vulnerabilities have been found by fuzzing all kinds of software applications for processing documents, images, sounds, videos, network packets, and Web pages, among others. These applications must deal with untrusted inputs encoded in complex data formats. For example, the Microsoft Windows operating system supports over 360 file formats and includes millions of lines of code just to handle all of them.
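As a toy illustration of the core loop (not any particular fuzzer's implementation), the sketch below mutates a valid seed input by flipping random bytes and feeds each variant to a deliberately fragile parser, collecting the inputs that trigger unexpected failures. The `parse_header` format and both function names are invented for this example.

```python
import random

def parse_header(data: bytes) -> int:
    """Toy parser: expects magic b'IMG', one length byte, then a payload."""
    if data[:3] != b"IMG":
        raise ValueError("bad magic")          # clean rejection of malformed input
    length = data[3]
    if len(data) - 4 < length:
        raise IndexError("truncated payload")  # the kind of bug fuzzing surfaces
    return length

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Flip one to three random bytes of a valid seed input."""
    out = bytearray(seed)
    for _ in range(rng.randint(1, 3)):
        out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

def fuzz(seed: bytes, trials: int = 1000) -> list:
    rng = random.Random(0)  # fixed seed keeps runs repeatable
    crashes = []
    for _ in range(trials):
        fuzzed = mutate(seed, rng)
        try:
            parse_header(fuzzed)
        except ValueError:
            pass                     # handled gracefully: not a bug
        except IndexError:
            crashes.append(fuzzed)   # unexpected failure worth reporting
    return crashes

crashing_inputs = fuzz(b"IMG\x04ABCD")
```

Real fuzzers such as those used in industry add coverage feedback, smarter mutation strategies, and crash deduplication on top of this basic mutate-and-run loop.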
Cloud platforms, such as Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform, are tremendously complex. Their main resource management systems include virtual machine (VM) and container (hereafter we refer to VMs and containers simply as "containers") scheduling, server and container health monitoring and repairs, power and energy management, and other management functions. Cloud platforms are also extremely expensive to build and operate, so providers have a strong incentive to optimize their use. A nascent approach is to leverage machine learning (ML) in the platforms' resource management using supervised learning techniques, such as gradient-boosted trees and neural networks, or reinforcement learning. We also discuss why ML is often preferable to traditional non-ML techniques.
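To make the supervised-learning angle concrete, here is a heavily simplified sketch: a single decision stump (the weak learner inside the gradient-boosted trees mentioned above) trained on made-up VM records to predict whether a new VM will be long-lived, a signal a scheduler could use when packing servers. The features, labels, and data are invented for illustration and bear no relation to any real platform's telemetry.

```python
# Hypothetical training data: (cpu_cores, memory_gb) -> ran longer than a day?
history = [
    ((2, 4), False), ((2, 8), False), ((4, 16), True),
    ((8, 32), True), ((2, 4), False), ((8, 64), True),
]

def train_stump(data):
    """Exhaustively pick the (feature, threshold) split with the fewest errors."""
    best = None
    for f in range(2):                              # try each feature
        for thresh in sorted({x[f] for x, _ in data}):  # try each observed value
            errors = sum((x[f] >= thresh) != y for x, y in data)
            if best is None or errors < best[0]:
                best = (errors, f, thresh)
    _, f, thresh = best
    return lambda x: x[f] >= thresh

predict_long_lived = train_stump(history)
```

A production system would boost hundreds of such trees over far richer features; the point of the sketch is only that lifetime prediction reduces to ordinary supervised learning over historical traces.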
The Lone Star State may become a little lonelier -- at least when it comes to big-rig trucking. Waymo, the self-driving vehicle division of Google parent Alphabet, is about to start mapping in Texas and New Mexico as a prelude to testing its self-driving big-rig trucks. The mapping minivans, to be followed by the large trucks, will run primarily along Interstates 10, 20 and 45 and through metropolitan areas like El Paso, Dallas and Houston, the company said. Waymo previously mapped and tested its big rigs in Arizona, California and Georgia. The latest move will add to that footprint as the company moves toward its vision of big rigs rolling down interstates with no one at the wheel, their sensors and computers making them safer than with a human in control.
The current wave of emerging digital technologies offers great opportunities to transform pharma operating models and improve the declining ROI on R&D productivity. Harnessing the power of digital technologies – such as robotic process automation, artificial intelligence, machine learning and organ-on-a-chip – can transform how clinical trials are conceived, designed and conducted. For instance, they can be used to automate processes, make efficient use of Big Data and support early decision-making with predictive analytics. This digital transformation is well underway and is likely to accelerate. Therefore, harnessing these technologies will require a deep understanding of how they work, the role they will play in advancing clinical development and the limitations they present.
ICHEC, the national high-performance computing authority of Ireland, recently participated in the xView2 disaster recovery challenge run by the US Defense Innovation Unit and other Humanitarian Assistance and Disaster Recovery (HADR) organisations. Models developed during the challenge, including those developed at ICHEC, are currently being tested by agencies responding to the ongoing bushfires in Australia. The xView2 challenge is based on using high-resolution overhead imagery to see the details of specific damage conditions in a disaster area. The challenge involved building AI models to locate buildings and classify the severity of damage to them using pairs of pre- and post-disaster satellite images. Models like these allow those responding to disasters to rapidly assess the damage left in their wake, enabling more effective response efforts and potentially saving lives.
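The challenge entries are deep neural networks, but the underlying signal is change between the two images. As a toy illustration of that pre/post comparison, the sketch below differences two tiny made-up grayscale "images" pixel by pixel and buckets the fraction of strongly changed pixels into coarse labels loosely echoing the challenge's damage categories. The function, thresholds, and data are invented for this example.

```python
def damage_grade(pre, post, threshold=50):
    """Map the fraction of strongly changed pixels to a coarse damage label."""
    pixels = [(a, b) for row_pre, row_post in zip(pre, post)
              for a, b in zip(row_pre, row_post)]
    changed = sum(abs(a - b) > threshold for a, b in pixels) / len(pixels)
    if changed < 0.1:
        return "no-damage"
    if changed < 0.5:
        return "minor-damage"
    return "destroyed"

pre_img  = [[200, 200], [200, 200]]   # intact rooftop: uniformly bright pixels
post_img = [[200, 30], [40, 20]]      # mostly dark rubble after the disaster
```

Here `damage_grade(pre_img, post_img)` flags the building as destroyed, while comparing an image with itself yields no damage. Real models must additionally handle lighting changes, seasonal variation, and off-nadir viewing angles, which is why learned features replace raw pixel differences in practice.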
It is difficult to open an insurance industry newsletter these days without seeing some reference to machine learning or its cousin, artificial intelligence, and how they will revolutionize the industry. Yet according to Willis Towers Watson's recently released 2019/2020 P&C Insurance Advanced Analytics Survey results, fewer companies have adopted machine learning and artificial intelligence than had planned to do so just two years ago (see the accompanying graphic). In the context of insurance, we're not talking about self-driving cars (though these may have important implications for insurance) or chess-playing computers. We're talking about predicting the outcome of comparatively simple future events: who will buy what product, which clients are more likely to have what kind of claim, and which claims will become complex according to some definition. The better insurers can estimate the outcomes of these future events, the better they can plan for them and achieve more positive results.
In this blog post we propose a taxonomy of six levels of AutoML, similar to the taxonomy used for self-driving cars. Machine Learning (ML) is currently one of the hottest and most hyped-up areas of science and technology. In terms of both theoretical discoveries and practical applications, ML seems to be going from success to success, with no slowing down in sight. It has become the dominant, and in some cases exclusive, approach to Artificial Intelligence (AI), which in turn has the promise to radically alter most aspects of our everyday lives. The connection between ML and AI is so strong that the two are used interchangeably, and have in many applications become synonymous. Another concept that is closely linked with ML is automation. Even though ML is frequently used for other purposes (predictive modeling being the best known), it's really the prospect of automating many operations and processes, which are now done manually, that best captures the excitement about ML and its core value proposition. This raises the following question: how far can we go in automating ML itself?