If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Michelangelo can deploy multiple models in the same serving container, which allows for safe transitions from old to new model versions and side-by-side A/B testing of models. The original incarnation of Michelangelo did not support deep learning's need to train on GPUs, but the team has since addressed that omission. The current platform uses Spark's ML pipeline serialization, with an additional interface for online serving that adds a single-example (online) scoring method lightweight enough to meet tight SLAs, for instance in fraud detection and prevention. It does so by bypassing the overhead of Spark SQL's Catalyst optimizer. Notably, both Google and Uber built in-house protocol buffer parsers and representations for serving, avoiding bottlenecks present in the default implementation. Airbnb established its own ML infrastructure team in 2016/2017 for similar reasons: first, it had only a few models in production, but building each model could take up to three months; second, there was no consistency among models; and third, there were large differences between online and offline predictions.
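To make the serving pattern concrete, here is a minimal sketch of a container that hosts several versions of a model side by side and exposes a single-example scoring call. This is not Michelangelo's actual API; all class and method names (`ServingContainer`, `deploy`, `promote`, `score`) are hypothetical, and the "models" are stand-in scoring functions rather than Spark pipelines.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional

@dataclass
class ServingContainer:
    # model name -> version -> single-example scoring function
    models: Dict[str, Dict[str, Callable[[dict], float]]] = field(default_factory=dict)
    # model name -> version currently receiving live traffic
    live: Dict[str, str] = field(default_factory=dict)

    def deploy(self, name: str, version: str, score_fn: Callable[[dict], float]) -> None:
        # A new version sits alongside the old one in the same container.
        self.models.setdefault(name, {})[version] = score_fn
        self.live.setdefault(name, version)  # first deploy becomes live

    def promote(self, name: str, version: str) -> None:
        # Flip live traffic to the new version without redeploying.
        self.live[name] = version

    def score(self, name: str, example: dict, version: Optional[str] = None) -> float:
        # Online (single-example) path: a direct function call with no
        # batch-framework overhead, which keeps latency low for tight SLAs.
        v = version or self.live[name]
        return self.models[name][v](example)

# Usage: serve v1, shadow-test v2 on the same request, then promote v2.
container = ServingContainer()
container.deploy("fraud", "v1", lambda x: 0.1 * x["amount"])
container.deploy("fraud", "v2", lambda x: 0.1 * x["amount"] + 0.5)

request = {"amount": 10.0}
print(container.score("fraud", request))        # live = v1 -> 1.0
print(container.score("fraud", request, "v2"))  # shadow-score v2 -> 1.5
container.promote("fraud", "v2")
print(container.score("fraud", request))        # live = v2 -> 1.5
```

Because both versions live in one container and share one scoring interface, comparing old and new predictions on identical requests (A/B testing, shadow scoring) requires no extra infrastructure.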
An ML platform offers advanced functionality essential for building ML solutions (primarily predictive and prescriptive models) and supports incorporating those solutions into business processes, surrounding infrastructure, products, and applications. It supports data scientists of varying skill levels (and other stakeholders, e.g., ML engineers, data analysts, and business analysts/experts) in multiple tasks across the data and analytics pipeline.
A Dutch artist is using modern technology to create realistic photo-style portraits of famous figures only depicted in paint and sculpture. Bas Uterwijk, from Amsterdam, explained that he wanted to see if he could create realistic digital renderings of key faces in history, including Vincent Van Gogh and Napoleon. He also turned his talents to statues like Michelangelo's David and the Statue of Liberty. Bas uses Artbreeder, a 'deep-learning' software tool which can create life-like images from scratch or based on a composite of different portraits.
Despite the hype surrounding machine learning and artificial intelligence (AI), most efforts in the enterprise remain in a pilot stage. Part of the reason for this phenomenon is the natural experimentation associated with machine learning projects, but a significant component is the lack of maturity of machine learning architectures. This problem is particularly visible in enterprise environments, in which the new application lifecycle management practices of modern machine learning solutions conflict with corporate practices and regulatory requirements. What are the key architecture building blocks that organizations should put in place when adopting machine learning solutions? The answer is not trivial, but recently we have seen efforts from research labs and AI data science teams that are starting to lay down the path of what can become reference architectures for large-scale machine learning solutions.
Uber's services require real-world coordination between a wide range of customers, including driver-partners, riders, restaurants, and eaters. Accurately forecasting things like rider demand and ETAs enables this coordination, which makes our services work as seamlessly as possible. In an effort to constantly optimize our operations, serve our customers, and train our systems to perform better and better, we leverage machine learning (ML). In addition, we make many of our ML tools open source, sharing them with the community to advance the state of the art. In this spirit, members of our Seattle Engineering team shared their work at an April 2019 meetup on ML and AI at Uber.
Uber is one of those organizations that rely heavily on data. Each day, millions of trips take place in 700 cities across the world, generating information on traffic, preferred routes, estimated times of arrival/delivery, drop-off locations, and more that enables Uber to deliver a smooth riding experience to its customers. With access to the rich dataset coming from the cabs, drivers, and users, Uber has been investing in machine learning and artificial intelligence to enhance its business. Uber AI Labs consists of ML researchers and practitioners who translate state-of-the-art machine learning techniques and advancements into benefits for Uber's core business. From computer vision to conversational AI to sensing and perception, Uber has successfully infused ML and AI into its ride-sharing platform.
You may have experimented with blockchain projects to drive transparency in your supply chain or efficiencies in cross-border payments. You may have live AI-driven applications in customer services or backend automation. You may even be considering migrating your core infrastructure into the cloud to increase storage efficiencies and speed. But have you considered how these technologies should be integrated with each other to drive your core strategy now, and not just the periphery or on a ten-year horizon? The driver for innovation projects in incumbent organisations is largely to break down silos.
Models are trained and initially evaluated against historical data. This means that users can know that a model would have worked well in the past. But once you deploy the model and use it to make predictions on new data, it's often hard to ensure that it's still working correctly. Models can degrade over time because the world is always changing. Moreover, there can be breakages or bugs in a production model's data sources or data pipelines.
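One common way to catch the degradation described above is to compare the distribution of live predictions against the distribution seen at training time. The sketch below (an illustration, not something the article specifies) uses the Population Stability Index (PSI), a standard drift metric: near 0 means the distributions match, while values above roughly 0.25 are conventionally treated as significant shift.

```python
import math
from typing import List

def psi(baseline: List[float], live: List[float], bins: int = 10) -> float:
    """Population Stability Index between two samples of model scores."""
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0  # avoid zero-width bins

    def fractions(xs: List[float]) -> List[float]:
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)  # clamp the max value
            counts[i] += 1
        # small epsilon so empty bins don't blow up the log below
        return [(c + 1e-6) / (len(xs) + bins * 1e-6) for c in counts]

    b, l = fractions(baseline), fractions(live)
    return sum((li - bi) * math.log(li / bi) for bi, li in zip(b, l))

# Identical distributions score ~0; a shifted distribution scores high.
base = [i / 100 for i in range(100)]       # training-time predictions
same = list(base)                          # live predictions, unchanged world
shifted = [x + 0.5 for x in base]          # live predictions after drift
print(round(psi(base, same), 4))           # 0.0
print(psi(base, shifted) > 0.25)           # True -> alert, consider retraining
```

In practice such a check would run on a schedule against logged production predictions, alerting when the index crosses a threshold; a similar comparison on input features helps localize breakages in data sources or pipelines.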