

Artificial Intelligence in Health Care – What You Need to Know

#artificialintelligence

Much has been said in recent years about the potential of artificial intelligence (AI) to improve clinical and consumer decision making, resulting in better health outcomes. But deploying AI technology in health care is not without its own challenges, risks, and potential liabilities. Artificial intelligence is generally understood to mean a bundle of technologies in which high-speed machines perform tasks that would normally depend on human intelligence, working and reacting like humans. A related concept is augmented intelligence, where technology is designed to work with human intelligence and enhance it rather than replace it. Machine learning, a subset of AI, is the computerized practice of using statistical algorithms to analyze data, learn from it, and then make a determination or prediction about an assigned task.
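To make that "analyze data, learn from it, predict" definition concrete, here is a minimal sketch using scikit-learn's bundled breast-cancer dataset; the dataset and model choice are illustrative assumptions, not anything described in the article, and real clinical systems involve far more than this.

```python
# Minimal sketch of "analyze data, learn from it, predict" with scikit-learn.
# Dataset and model are illustrative only; real clinical AI involves
# validation, regulation, and bias auditing far beyond this.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)            # analyze data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=5000)              # learn from it
model.fit(X_train, y_train)

preds = model.predict(X_test)                          # make a prediction
print(f"held-out accuracy: {accuracy_score(y_test, preds):.3f}")
```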


Cities and Counties Turn to Machine Learning to Bolster Cybersecurity

#artificialintelligence

In late 2017, a government employee in Livingston County, Mich., plugged a personal laptop into the workplace server -- inadvertently exposing the network to malware. "We had 9,000 attacks within a few minutes from this computer," says Rich Malewicz, CIO and security officer for Livingston County. The county detected the attack and stopped it quickly using a program called Darktrace, which uses artificial intelligence (AI) and machine learning to provide real-time alerts about abnormal activity on the network. "No device on the network detected [the attack] except for Darktrace," he says. More local and state governments are eyeing AI and machine learning as tools to help combat cyberattacks, in part because hackers themselves have adopted the technology.
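Darktrace's actual models are proprietary, so the following is only a rough, hypothetical illustration of the general idea the article describes: learn what "normal" network activity looks like, then flag deviations. The features and numbers are invented for the sketch.

```python
# Hypothetical sketch of ML-based network anomaly detection: fit a model on
# "normal" traffic features, then score new events. Feature choices and data
# are fabricated for illustration; this is not how Darktrace works internally.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Fabricated baseline: [connections/minute, bytes out (KB), distinct ports]
normal_traffic = rng.normal(loc=[20, 150, 3], scale=[5, 40, 1], size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A burst resembling the laptop incident: huge connection rate, many ports.
new_events = np.array([[22, 160, 3],        # looks normal
                       [900, 5000, 120]])   # looks like an attack
print(detector.predict(new_events))          # 1 = normal, -1 = anomaly
```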


How to Prepare for the Malicious Use of AI - Future of Life Institute

#artificialintelligence

How can we forecast, prevent, and (when necessary) mitigate the harmful effects of malicious uses of AI? This is the question posed by a 100-page report released last week, written by 26 authors from 14 institutions. The report, which is the result of a two-day workshop in Oxford, UK followed by months of research, provides a sweeping landscape of the security implications of artificial intelligence. The authors, who include representatives from the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, and the Center for a New American Security, argue that AI is not only changing the nature and scope of existing threats, but also expanding the range of threats we will face. They are excited about many beneficial applications of AI, including the ways in which it will assist defensive capabilities.


One problem to explain why AI works – Towards Data Science

#artificialintelligence

Ask your resident experts, "Why does AI work?" Readily, they'll explain how it works, methods emptying into a mesmerizing jargonfall of gradient descent. But why will an expensive and inscrutable machine create the knowledge I need to solve my problem? A glossary of technical terms, an architectural drawing, or a binder full of credentials will do little to insulate you from the fallout if you can't stand up and explain why. The purpose of AI is to create machines that create good knowledge. Just as a theory of flight is essential to the success of flying machines, a theory of knowledge is essential to AI. And a theoretical basis for understanding AI has greater reach and explanatory power than the applied or technical discussions that dominate this subject. As we'll discover, there's a deep problem at the center of the AI landscape. Two opposing perspectives on the problem give a simple yet far-reaching account of why AI works, the magnitude of the achievement, and where it might be headed. Many overlook the question because it seems obvious how knowledge is created: we learn from observation. This is called inductive reasoning, or induction for short.


How Changing Technology Impacts IT (And How to Keep Up)

#artificialintelligence

Missing out on technology innovation could cripple any business or leave it at the mercy of rivals. This makes keeping up with the latest IT trends a vital business effort. Most small and medium-sized businesses focus firmly on their day-to-day operations. With only a small amount of time to devote to the technology that helps keep the lights on, the blur of IT and technology innovation can pass them by. However, there are ways that any business can apply some focus when it comes to identifying and making use of the latest technology trends to benefit the company.


MIT's self-driving car can navigate unmapped country roads

Engadget

There's a good reason why companies often test self-driving cars in big cities: they'd be lost most anywhere else. They typically need well-labeled 3D maps to identify curbs, lanes and signs, which isn't much use on a backwoods road where those features might not even exist. MIT CSAIL may have a solution, though. Its researchers (with some help from Toyota) have developed a new framework, MapLite, that can find its way without any 3D maps. The system gets a basic sense of the vehicle's location using GPS, and uses that for both the final destination and a "local" objective within view of the car.
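MIT's MapLite code is not reproduced here, so the following is only a toy sketch of the navigation idea the excerpt describes: use the GPS fix and final destination to pick a "local" goal within the car's sensor range, leaving the detailed path to onboard perception. All names and numbers are hypothetical, not MIT's implementation.

```python
# Toy sketch (not MIT's MapLite): pick a local goal point toward the GPS
# destination, clipped to what the car's sensors can currently "see".
import math

SENSOR_RANGE_M = 40.0  # hypothetical sensing horizon

def local_goal(position, destination, sensor_range=SENSOR_RANGE_M):
    """Return a waypoint at most `sensor_range` metres away, aimed at the
    final GPS destination. A real system would then refine this point with
    lidar/camera road-edge detection."""
    dx = destination[0] - position[0]
    dy = destination[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist <= sensor_range:
        return destination               # destination is already in view
    scale = sensor_range / dist
    return (position[0] + dx * scale, position[1] + dy * scale)

print(local_goal(position=(0.0, 0.0), destination=(300.0, 400.0)))
# -> (24.0, 32.0): a point 40 m ahead, in the direction of the destination
```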


Five ways AI is disrupting financial services

#artificialintelligence

Whether you realise it or not, artificial intelligence (AI) is taking over the world. No need to brace for a Hollywood-scripted battle like in movies such as "I, Robot" and "The Terminator". The reality is that most AI applications do not have a physical form, but rather "live" in lines of code. The term "AI" includes all technology used to mimic human intelligence, typically falling into one of three subcategories: machine learning, natural language processing and cognitive computing. Currently, there are more than 2,000 AI start-ups in 70 countries that have raised more than $27 billion, according to Venture Scanner, a tech-centric analytics firm.


This Website Uses AI to Enhance Low-Res Photos, CSI-Style

#artificialintelligence

Let's Enhance is a new free website that uses neural networks to upscale your photos in a way Photoshop can't. It magically boosts and enhances your photo resolution like something straight out of CSI. The service is designed to be minimalist and extremely easy to use. The homepage invites you to drag and drop a photo into the center (once you do, you'll be asked to create a free account). Once it receives your photo, the neural network goes to work, upscaling your photo by 4x, removing JPEG artifacts, and "hallucinating" missing details and textures into your upscaled photo to make it look natural. You'll need to wait a couple of minutes for the work to be done, but it's worth the wait -- the results we've seen are impressive.
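Let's Enhance has not disclosed its model, but 4x learned upscaling is commonly built around a convolutional network ending in a sub-pixel (pixel-shuffle) layer. The PyTorch sketch below only shows the shape of such a model on random data; it is untrained and purely illustrative, not the service's architecture.

```python
# Untrained sketch of a 4x super-resolution network using sub-pixel
# convolution (PixelShuffle). Architecture is illustrative, not the one
# used by Let's Enhance.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3 * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a 4x larger image
        )

    def forward(self, x):
        return self.body(x)

low_res = torch.rand(1, 3, 64, 64)         # stand-in for a low-res photo
high_res = TinyUpscaler()(low_res)
print(high_res.shape)                       # torch.Size([1, 3, 256, 256])
```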


Machine Learning Helps Humans Perform Text Analysis

#artificialintelligence

To augment that approach, we've found that we can use machine learning to improve the semantic data models as the data set evolves. Our specific use case is text data in millions of documents. We've found that machine learning facilitates the storage and exploration of data that would otherwise be too vast to support valuable insights. Machine learning (ML) allows a model to improve over time given new training data, without requiring more human effort. For example, a common text-classification benchmark task is to train a model on messages from multiple discussion-board threads and then later use it to predict the topic of discussion (space, computers, religion, etc.).
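The benchmark described above sounds like the classic 20 Newsgroups task (an assumption; the article does not name the dataset). A minimal scikit-learn version of "train on discussion-board messages, then predict the topic" might look like this:

```python
# Minimal sketch of the text-classification benchmark described above,
# assuming a 20 Newsgroups-style dataset (downloaded by scikit-learn).
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

categories = ["sci.space", "comp.graphics", "soc.religion.christian"]
train = fetch_20newsgroups(subset="train", categories=categories)
test = fetch_20newsgroups(subset="test", categories=categories)

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train.data, train.target)          # learn from labeled messages

print("accuracy:", model.score(test.data, test.target))
doc = "The shuttle launch was delayed by weather at the Cape."
print("predicted topic:", train.target_names[model.predict([doc])[0]])
```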


Neural networks as Interacting Particle Systems: Asymptotic convexity of the Loss Landscape and Universal Scaling of the Approximation Error

arXiv.org Machine Learning

Neural networks, a central tool in machine learning, have demonstrated remarkable, high-fidelity performance on image recognition and classification tasks. These successes evince an ability to accurately represent high-dimensional functions, potentially of great use in computational and applied mathematics. That said, there are few rigorous results about the representation error and trainability of neural networks, or about how these scale with network size. Here we characterize both the error and the scaling by reinterpreting the standard optimization algorithm used in machine learning applications, stochastic gradient descent, as the evolution of a particle system with interactions governed by a potential related to the objective or "loss" function used to train the network. We show that, when the number $n$ of parameters is large, the empirical distribution of the particles descends on a convex landscape towards a minimizer at a rate independent of $n$. We establish a Law of Large Numbers and a Central Limit Theorem for the empirical distribution, which together show that the approximation error of the network universally scales as $o(n^{-1})$. Remarkably, these properties do not depend on the dimensionality of the domain of the function that we seek to represent. Our analysis also quantifies the scale and nature of the noise introduced by stochastic gradient descent and provides guidelines for the step size and batch size to use when training a neural network. We illustrate our findings on examples in which we train neural networks to learn the energy function of the continuous 3-spin model on the sphere. The approximation error scales as our analysis predicts in dimensions as high as $d=25$.
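As a very rough illustration of the setting in this abstract (not a reproduction of the paper's 3-spin experiments), the sketch below trains single-hidden-layer networks of increasing width with plain SGD on a toy one-dimensional target and records the held-out error, which is the quantity whose scaling with network size the paper analyzes. Target function, widths, and hyperparameters are all assumptions made for the sketch.

```python
# Toy illustration of the abstract's setting: single-hidden-layer networks of
# width n trained with SGD, tracking approximation error as n grows.
# A sketch on a 1-D toy target, not the paper's 3-spin experiments.
import torch
import torch.nn as nn

torch.manual_seed(0)

def target(x):
    return torch.sin(3 * x)              # toy target function

x_train = torch.rand(2048, 1) * 2 - 1
x_test = torch.rand(512, 1) * 2 - 1
y_train, y_test = target(x_train), target(x_test)

for n in (8, 32, 128, 512):              # hidden-layer widths
    net = nn.Sequential(nn.Linear(1, n), nn.Tanh(), nn.Linear(n, 1))
    opt = torch.optim.SGD(net.parameters(), lr=0.05)
    for step in range(3000):              # plain SGD on mini-batches
        idx = torch.randint(0, len(x_train), (64,))
        loss = nn.functional.mse_loss(net(x_train[idx]), y_train[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        err = nn.functional.mse_loss(net(x_test), y_test).item()
    print(f"width n={n:4d}  test MSE={err:.5f}")
```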