The first few chapters cover the theory and practice of both DevOps and MLOps. One of the items covered is how to set up continuous integration and continuous delivery. Another critical topic is Kaizen, i.e., the idea of continuous improvement in everything. There are three chapters on cloud computing that cover AWS, Azure, and GCP. Alfredo, a developer advocate for Microsoft, is an ideal source of knowledge for MLOps on the Azure platform. Likewise, Noah has spent years getting students trained on cloud computing and working with the education arms of Google, AWS, and Azure.
Gartner predicts that by the end of 2024, 75% of enterprises will shift from piloting to operationalizing artificial intelligence (AI), and the vast majority of workloads will end up in the cloud in the long run. For some enterprises that plan to migrate to the cloud, the complexity, magnitude, and length of migrations may be daunting. The speed of different teams, and their appetites for new tooling, can vary dramatically: an enterprise's data science team may be eager to adopt the latest cloud technology, while the application development team is focused on running its web applications on premises. Even with a multi-year cloud migration plan, some product releases must be built on the cloud in order to meet the enterprise's business outcomes.
I have been at Lucidworks for about eight years. My background is in software engineering with an emphasis on analytics and distributed computing. I was brought into Lucidworks by the board to lead the company through a transition from an open source support and services company to one delivering proprietary search and AI solutions. We have a team of around 250 employees distributed across the country and the world. Back in 2005, pre-Lucidworks, I was part of the founding team at Splunk.
We at AWS continue to be impressed by the passion AWS enthusiasts have for knowledge sharing and supporting peer-to-peer learning in tech communities. A select few of the most influential and active community leaders in the world, who truly go above and beyond to create content and help others build better and faster on AWS, are recognized as AWS Heroes. Data Hero Anahit Pogosova is a Lead Cloud Software Engineer at Solita. She has been architecting and building software solutions with various customers for over a decade. Anahit started out working with monolithic on-prem software but has since moved all the way to the cloud, nowadays focusing mostly on AWS Data and Serverless services.
I made my first trip to China in late 2008, where I was able to speak to developers and entrepreneurs and get a sense of the then-nascent market for cloud computing. With over 900 million Internet users as of 2020 (according to a recent report from the China Internet Network Information Center), China now has the largest Internet user base in the world. The AWS China (Beijing) Region launched as a limited preview in 2013 and reached general availability in 2016; a year later, the AWS China (Ningxia) Region launched.
AWS IoT Greengrass is software that extends cloud capabilities to local devices. It enables devices to collect and analyze data closer to the source of information, react autonomously to local events, and communicate securely with each other on local networks. Local devices can also communicate securely with AWS IoT Core and export IoT data to the AWS Cloud. AWS IoT Greengrass developers can use AWS Lambda functions and prebuilt connectors to create serverless applications that are deployed to devices for local execution. AWS IoT Greengrass makes it possible for customers to build IoT devices and application logic; specifically, it provides cloud-based management of application logic that runs on devices. Locally deployed Lambda functions and connectors are triggered by local events, messages from the cloud, or other sources. In AWS IoT Greengrass, devices securely communicate on a local network and exchange messages with each other without having to connect to the cloud. AWS IoT Greengrass provides a local pub/sub message manager that can intelligently buffer messages if connectivity is lost, so that inbound and outbound messages to the cloud are preserved. AWS IoT Greengrass provides:
- Secure connectivity on the local network. Device security credentials function in a group until they are revoked, even if connectivity to the cloud is disrupted, so that the devices can continue to communicate securely with each other locally.
- MQTT messaging over the local network between devices, connectors, and Lambda functions using managed subscriptions.
- MQTT messaging between AWS IoT and devices, connectors, and Lambda functions using managed subscriptions.
- Shadows that can be configured to sync with the AWS Cloud.
- Automatic IP address detection that enables devices to discover the Greengrass core device.
- Central deployment of new or updated group configuration.
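The message-buffering behavior described above can be illustrated with a short sketch. This is a conceptual model only, not the Greengrass SDK or its actual implementation: the class and method names are hypothetical, and real Greengrass cores handle MQTT, persistence, and security far more thoroughly.

```python
import queue


class LocalPubSub:
    """Conceptual sketch of a local pub/sub manager that delivers
    messages to local subscribers immediately and buffers cloud-bound
    messages while connectivity is down (illustrative only; not the
    actual AWS IoT Greengrass implementation)."""

    def __init__(self):
        self.subscribers = {}               # topic -> list of callbacks
        self.cloud_buffer = queue.Queue()   # holds messages while offline
        self.cloud_connected = False

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Local subscribers always receive the message, even offline.
        for cb in self.subscribers.get(topic, []):
            cb(message)
        # Cloud-bound traffic is buffered until connectivity returns.
        if self.cloud_connected:
            self._send_to_cloud(topic, message)
        else:
            self.cloud_buffer.put((topic, message))

    def on_reconnect(self):
        # Flush buffered messages once the connection is restored.
        self.cloud_connected = True
        while not self.cloud_buffer.empty():
            self._send_to_cloud(*self.cloud_buffer.get())

    def _send_to_cloud(self, topic, message):
        print(f"-> cloud: {topic}: {message}")
```

The key design point mirrored here is that local delivery never depends on the cloud connection: devices keep exchanging messages locally, and only the cloud-bound queue waits for connectivity.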
What was the last thing you heard about Amazon (AMZN)? Was it the FAA's approval of Amazon's delivery drones? Most of this news about Amazon's store is just noise that distracts investors from Amazon's real force. As I'll show, Amazon is running an "operating system" that powers some of today's most important technologies, such as virtual reality, machine learning, and even quantum computing. Behind the scenes, it is utilized by over a million companies, including tech giants Apple (AAPL), Netflix (NFLX), and Facebook (FB).
Organizations are modernizing their applications by adopting containers and microservices-based architectures. Many customers deploy high-performance workloads on containers to power microservices architectures, and require low-latency, high-throughput shared storage from these containers. Because containers are transient in nature, these long-running applications require data to be kept in durable storage. Amazon FSx for Lustre (FSx for Lustre) provides the world's most popular high-performance file system, now fully managed and integrated with Amazon S3. It offers a POSIX-compliant, fast parallel file system that enables peak performance and highly durable storage for your Kubernetes workloads.
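One common way to wire FSx for Lustre into Kubernetes is through the open-source aws-fsx-csi-driver, which provisions file systems dynamically from a StorageClass. The manifest below is a hedged sketch along those lines: the subnet ID, security group ID, and claim name are placeholders, and the exact parameter set should be checked against the driver's documentation.

```yaml
# Hypothetical example: dynamic provisioning of an FSx for Lustre
# volume via the aws-fsx-csi-driver. Subnet and security group IDs
# below are placeholders for your own VPC resources.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fsx-sc
provisioner: fsx.csi.aws.com
parameters:
  subnetId: subnet-0123456789abcdef0
  securityGroupIds: sg-0123456789abcdef0
  deploymentType: SCRATCH_2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fsx-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: fsx-sc
  resources:
    requests:
      storage: 1200Gi   # FSx for Lustre capacity scales in large increments
```

Pods then mount the claim like any other PersistentVolumeClaim, giving multiple containers shared access to the same parallel file system.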
AWS is excited to introduce Amazon SageMaker Operators for Kubernetes, a new capability that makes it easier for developers and data scientists using Kubernetes to train, tune, and deploy machine learning (ML) models in Amazon SageMaker. Customers can install these Amazon SageMaker Operators on their Kubernetes cluster to create Amazon SageMaker jobs natively using the Kubernetes API and command-line Kubernetes tools such as 'kubectl'. Many AWS customers use Kubernetes, an open-source general-purpose container orchestration system, to deploy and manage containerized applications, often via a managed service such as Amazon Elastic Kubernetes Service (EKS). This enables data scientists and developers, for example, to set up repeatable ML pipelines and maintain greater control over their training and inference workloads. However, to support ML workloads these customers still need to write custom code to optimize the underlying ML infrastructure, ensure high availability and reliability, provide data science productivity tools, and comply with appropriate security and regulatory requirements.
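With the operators installed, a SageMaker training job becomes an ordinary Kubernetes custom resource that you can apply and inspect with 'kubectl'. The manifest below is an illustrative sketch: the image URI, role ARN, and bucket names are placeholders, and the field names are shown as they mirror the SageMaker CreateTrainingJob API — consult the operator's documentation for the exact schema.

```yaml
# Hypothetical TrainingJob custom resource; account ID, role ARN,
# image, and S3 paths are placeholders.
apiVersion: sagemaker.aws.amazon.com/v1
kind: TrainingJob
metadata:
  name: xgboost-mnist
spec:
  trainingJobName: xgboost-mnist
  region: us-east-1
  roleArn: arn:aws:iam::123456789012:role/sagemaker-execution-role
  algorithmSpecification:
    trainingImage: 123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest
    trainingInputMode: File
  outputDataConfig:
    s3OutputPath: s3://my-bucket/models
  resourceConfig:
    instanceType: ml.m5.xlarge
    instanceCount: 1
    volumeSizeInGB: 5
  stoppingCondition:
    maxRuntimeInSeconds: 3600
  inputDataConfig:
    - channelName: train
      dataSource:
        s3DataSource:
          s3DataType: S3Prefix
          s3Uri: s3://my-bucket/data/train
          s3DataDistributionType: FullyReplicated
```

Applying the manifest with 'kubectl apply' submits the job to SageMaker, and 'kubectl get trainingjob' reflects its status back into the cluster, so ML jobs fit the same workflow as any other Kubernetes resource.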
Noah Gift is a lecturer and consultant at the UC Davis Graduate School of Management in the MSBA program. Professionally, Noah has approximately 20 years' experience programming in Python and is a member of the Python Software Foundation. He has worked for a variety of companies in roles ranging from CTO and general manager to consulting CTO and cloud architect. Currently, he consults with startups and other companies on machine learning and cloud architecture, and does CTO-level consulting via Noah Gift Consulting. He has published close to 100 technical publications, including two books, on subjects ranging from cloud machine learning to DevOps.