If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Uber is one of those organizations that rely heavily on data. Each day, millions of trips take place in 700 cities across the world, generating information on traffic, preferred routes, estimated times of arrival/delivery, drop-off locations, and more that enables Uber to deliver a smooth riding experience to its customers. With access to the rich dataset coming from its cabs, drivers, and users, Uber has been investing in machine learning and artificial intelligence to enhance its business. Uber AI Labs consists of ML researchers and practitioners who translate state-of-the-art machine learning techniques and advances into benefits for Uber's core business. From computer vision to conversational AI to sensing and perception, Uber has successfully infused ML and AI into its ride-sharing platform.
You may have experimented with blockchain projects to drive transparency in your supply chain or efficiencies in cross-border payments. You may have live AI-driven applications in customer service or backend automation. You may even be considering migrating your core infrastructure into the cloud to increase storage efficiency and speed. But have you considered how these technologies should be integrated with each other to drive your core strategy now, rather than only at the periphery or on a ten-year horizon? The driver for innovation projects in incumbent organisations is largely to break down silos.
Models are trained and initially evaluated against historical data. This means that users can know that a model would have worked well in the past. But once you deploy the model and use it to make predictions on new data, it's often hard to ensure that it's still working correctly. Models can degrade over time because the world is always changing. Moreover, there can be breakages or bugs in a production model's data sources or data pipelines.
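One common safeguard against such silent degradation is to compare the distribution of live input data against a baseline captured at training time. The sketch below illustrates the idea with a simple z-style score on feature means; the feature names and threshold are illustrative assumptions, not part of any particular production system, and real deployments typically use richer statistics (e.g. population stability index or KS tests).

```python
import statistics

def drift_score(baseline, live):
    """Absolute difference of means, scaled by the baseline's standard deviation."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline)
    live_mean = statistics.mean(live)
    return abs(live_mean - base_mean) / base_std if base_std else float("inf")

def check_features(baseline_data, live_data, threshold=3.0):
    """Return the names of features whose live distribution drifted past threshold."""
    return [name for name in baseline_data
            if drift_score(baseline_data[name], live_data[name]) > threshold]

# Hypothetical example: a snapshot taken at training time vs. a recent
# production window. "trip_minutes" has shifted; "fare_usd" has not.
baseline = {"trip_minutes": [10, 12, 11, 13, 9, 12, 11, 10],
            "fare_usd": [8.0, 9.5, 8.5, 9.0, 8.2, 9.1, 8.7, 8.9]}
live = {"trip_minutes": [30, 32, 29, 31, 33, 30, 28, 31],
        "fare_usd": [8.1, 9.4, 8.6, 9.2, 8.3, 9.0, 8.8, 8.5]}

print(check_features(baseline, live))  # → ['trip_minutes']
```

A check like this catches both gradual drift in the world and abrupt breakages upstream (a pipeline that suddenly emits zeros shifts a feature's distribution just as visibly as real-world change does).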
Companies of all sizes are not satisfied with their machine learning process and various challenges to widespread adoption remain. SEATTLE, Oct. 16, 2018 (GLOBE NEWSWIRE) -- Algorithmia announces the results of a survey on enterprise machine learning. The comprehensive survey, titled "State of Enterprise Machine Learning," is a first for Algorithmia and was designed to explore the ways in which companies of all sizes are utilizing machine learning. The survey was completed by over 500 data science and machine learning professionals, the majority of whom were based in North America. A report detailing the survey's findings can be found here.
Uber Engineering formally introduced its internal Machine Learning as a Service platform, Michelangelo, in a company blog post Tuesday. Uber began building the AI platform with a combination of open-source and in-house components in 2015 and now deploys it across company services such as UberEATS. Michelangelo covers the end-to-end ML workflow and allows Uber teams to manage data; train, evaluate, and deploy models; and make and monitor predictions. It also serves deep learning, time series forecasting, and other machine learning models, and the company is focusing on improving developer productivity on the platform. Uber is not the only large company creating in-house machine learning platforms tailored to its needs.
Specifically, there were no systems in place to build reliable, uniform, and reproducible pipelines for creating and managing training and prediction data at scale. Prior to Michelangelo, it was not possible to train models larger than what would fit on data scientists' desktop machines, and there was neither a standard place to store the results of training experiments nor an easy way to compare one experiment to another. Most importantly, there was no established path to deploying a model into production; in most cases, the relevant engineering team had to create a custom serving container specific to the project at hand. At the same time, we were starting to see signs of many of the ML anti-patterns documented by Sculley et al.