If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
In a recent blog post, Google announced the beta of Cloud AI Platform Pipelines, which provides users with a way to deploy robust, repeatable machine learning pipelines along with monitoring, auditing, version tracking, and reproducibility. With Cloud AI Pipelines, Google can help organizations adopt the practice of Machine Learning Operations, also known as MLOps – a term for applying DevOps practices to help users automate, manage, and audit ML workflows. Typically, these practices involve data preparation and analysis, training, evaluation, deployment, and more. When you're just prototyping a machine learning (ML) model in a notebook, it can seem fairly straightforward. But when you need to start paying attention to the other pieces required to make an ML workflow sustainable and scalable, things become more complex.
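To make the idea concrete, here is a minimal, hypothetical sketch (plain Python, not Google's Cloud AI Platform Pipelines API) of how the stages listed above — data preparation, training, evaluation — can be chained into one repeatable workflow with a simple audit log; the stand-in "model" and metric are trivial placeholders:

```python
def prepare_data(raw):
    # Data preparation: drop invalid records.
    return [x for x in raw if x is not None]

def train(data):
    # Stand-in "training": the mean serves as a trivial model.
    return sum(data) / len(data)

def evaluate(model, data):
    # Stand-in evaluation metric: mean absolute deviation from the model.
    return sum(abs(x - model) for x in data) / len(data)

def run_pipeline(raw, log):
    # Each step appends to a log, giving the audit trail and
    # reproducibility that MLOps practices call for.
    data = prepare_data(raw)
    log.append(("prepare", len(data)))
    model = train(data)
    log.append(("train", model))
    score = evaluate(model, data)
    log.append(("evaluate", score))
    return model, score

log = []
model, score = run_pipeline([1.0, None, 2.0, 3.0], log)
print(model)  # → 2.0
```

In a real MLOps setup each stage would be a versioned, containerized step with tracked inputs and outputs, but the shape of the workflow is the same.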
"At Cisco, we strive to use the latest engagement technology to transform our customers' experience," says Colin Choo, Cisco IT Contact Center Solutions Consultant and Strategy Lead. "For our contact centers in 2020, this effort will mean a big focus on data analytics, artificial intelligence, and machine learning (AI/ML) for our Virtual Demand Center. We will also take the first steps to migrate our own Cisco Contact Center platform deployment to the cloud." A key part of this companywide initiative to improve CX is bringing all customer data together into a single "collection" of connected sources of truth, including the data that exists in our contact centers. "With information accessible in one place, we will be better able to serve customers with greater accuracy, efficiency, and relevance," says Mary Mazon, Cisco IT manager.
I chose artificial intelligence as my next topic because it is one of the best-known technologies, and the one people most often picture when they talk about the future. But the right question to ask is: What is artificial intelligence? Artificial intelligence is not something that just happened in 2015 and 2016. It has been around for a hundred years as an idea, but as a science, we started seeing developments from the 1950s. So this is quite an old topic already, but because of the kinds of technology we have access to today -- specifically, processing performance and storage -- we're starting to see significant leaps in AI development. When I started the course entitled "Foundations of the Fourth Industrial Revolution (Industry 4.0)," I got deeper into the topic of artificial intelligence. One of the differences between the third industrial revolution -- defined by the microchip and digitization -- and the fourth is the scope and velocity of breakthroughs in medicine and biology, as well as the widespread use of artificial intelligence across our society. Thus, AI is not only a product of Industry 4.0 but also part of the impetus behind why the fourth industrial revolution is happening now and will continue. I think there are two ways to understand AI: the first is to try to give a quick definition of what it is; the second is to also think about what it is not.
Researchers at Google have open-sourced a new framework that can scale up artificial intelligence model training across thousands of machines. It's a promising development because it should enable AI algorithm training to be performed at millions of frames per second while reducing the costs of doing so by as much as 80%, Google noted in a research paper. That kind of reduction could help to level the playing field a bit for startups that previously haven't been able to compete with major players such as Google in AI. Indeed, the cost of training sophisticated machine learning models in the cloud is surprisingly high. One recent report by Synced found that the University of Washington racked up $25,000 in costs to train its Grover model, which is used to detect and generate fake news.
Brock Ferguson is a practice-over-theory kind of guy. The Chicago-based data-science and machine-learning consultancy he co-founded in 2016, Strong Analytics, puts a major focus on productionizing AI models rather than just building out proofs of concept. "We want to minimize that gap between research in the lab and deploying to production," he said. "We think about that a lot." That means thinking a lot about cost -- something that's never far from the minds of machine-learning practitioners and consultants, but which came to the forefront again thanks to a much-circulated recent Andreessen Horowitz review that emphasized the high and ongoing computing costs of building and deploying artificial intelligence models.
This repository contains an implementation of a distributed reinforcement learning agent where both training and inference are performed on the learner. Any reinforcement learning environment using the gym API can be used. For a detailed description of the architecture, please read our paper, and please cite the paper if you use the code from this repository in your work. There are a few steps you need to take before playing with SEED.
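To illustrate what "using the gym API" means, here is a self-contained sketch (not code from this repository) of a toy environment exposing the standard gym-style `reset()`/`step()` interface, driven by a trivial fixed-action loop in place of a learned policy; the environment and its dynamics are invented for illustration:

```python
class CoinFlipEnv:
    """Toy environment following the gym-style API:
    reset() -> observation; step(action) -> (obs, reward, done, info)."""

    def __init__(self, max_steps=10):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        # Start a new episode and return the initial observation.
        self.t = 0
        return 0

    def step(self, action):
        # Reward 1.0 for action 1, else 0.0; episode ends after max_steps.
        self.t += 1
        reward = 1.0 if action == 1 else 0.0
        done = self.t >= self.max_steps
        return 0, reward, done, {}

# Standard interaction loop: any agent that speaks this protocol
# (reset, then repeated step calls until done) can be plugged in.
env = CoinFlipEnv()
obs = env.reset()
total = 0.0
done = False
while not done:
    action = 1  # placeholder for the agent's policy
    obs, reward, done, info = env.step(action)
    total += reward
print(total)  # → 10.0
```

Any environment implementing this same protocol — Atari, continuous control, or a custom simulator — can be substituted without changing the agent-side loop.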
Data-driven experiences are rich, immersive and immediate. Think pizza delivery by drone, video cameras that can record traffic accidents at an intersection, freight trucks that can identify a potential system failure. These kinds of fast-acting activities need lots of data -- quickly. So they can't tolerate the latency of data traveling to and from the cloud. That to-and-fro takes too long; instead, many of these data-intensive processes must remain localized, processed at the edge, on or near the hardware device itself.
In a blog post and accompanying paper, researchers at Google detail an AI system -- MetNet -- that can predict precipitation up to eight hours into the future. They say that it outperforms the current state-of-the-art physics model in use by the U.S. National Oceanic and Atmospheric Administration (NOAA) and that it makes a prediction over the entire U.S. in seconds as opposed to an hour. It builds on previous work from Google, which entailed an AI system that ingested satellite images to produce forecasts with a roughly one-kilometer resolution and a latency of only 5-10 minutes. And while it's early days, it could pave the way for a forecasting tool that helps businesses, residents, and local governments better prepare for inclement weather. MetNet takes a data-driven, physics-free approach to weather modeling, meaning it learns to approximate atmospheric physics from examples rather than by incorporating prior knowledge.
In the world we live in, there are many ways to access a car, from buying one outright to personal car leasing, renting and a variety of other options. Of course, while the ways to get hold of a car are almost endless, the range of options available to you is just as wide, if not wider. And, while you may have your eye on a specific car, have you ever stopped to think about the type of car you'll be driving, or not driving, in the very near future? With the world advancing at an ever-increasing rate, and with Mercedes-Benz currently a leader in AI adoption, many smart companies are moving to seize opportunities in the automotive industry, such as McKinsey Global, who are applying robotics and AI technology, such as machine learning, to the development of vehicles. To give you a better idea of what could be on the horizon, we've outlined some of the future AI that could change our cars forever.