
The Long Tail of the AWS Outage

WIRED

Experts say outages like the one Amazon experienced this week are almost inevitable given the complexity and scale of cloud technology--but the duration serves as a warning. A sprawling Amazon Web Services cloud outage that began early Monday morning illustrated the fragile interdependencies of the internet, as major communication, financial, health care, education, and government platforms around the world suffered disruptions. As the day wore on, AWS diagnosed and began working to correct the issue, which stemmed from the company's critical US-EAST-1 region in northern Virginia. But the cascade of impacts took time to fully resolve. Researchers reflecting on the incident particularly highlighted the length of the outage, which began around 3 am ET on Monday, October 20.


Serverless ICYMI Q1 2023

#artificialintelligence

Welcome to the 21st edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all the most recent product launches, feature enhancements, blog posts, webinars, live streams, and other interesting things that you might have missed! In case you missed our last ICYMI, check out what happened last quarter here. Artificial intelligence (AI) technologies, ChatGPT, and DALL-E are creating significant interest in the industry at the moment. Find out how to integrate serverless services with ChatGPT and DALL-E to generate unique bedtime stories for children.


Service in review: Sagemaker Modeling Pipelines - DEV Community

#artificialintelligence

Welcome back to my blog, where I share insights and tips on machine learning workflows using SageMaker Pipelines. If you're new here, I recommend checking out my first post to learn more about this fully managed AWS machine learning service. In my second post, I discussed how parameterization can help you customize the workflow and make it more flexible and efficient. After using SageMaker Pipelines extensively in real-life projects, I've gained a comprehensive understanding of the service. In this post, I'll summarize the key benefits of using SageMaker Pipelines and the limitations you should consider before adopting it. The service is integrated directly with SageMaker, so you don't have to wire up other AWS services yourself.


Real-time Analytics News for Week Ending March 11 - RTInsights

#artificialintelligence

In this week's real-time analytics news: Several companies announced generative AI offerings or enhancements to their product lines. Keeping pace with news and developments in the real-time analytics market can be a daunting task. We want to help by providing a summary of some of the important real-time analytics news items our staff came across this week. Salesforce launched Einstein GPT, a generative AI CRM technology, which delivers AI-created content across every sales, service, marketing, commerce, and IT interaction. Einstein GPT will infuse Salesforce's proprietary AI models with generative AI technology from an ecosystem of partners and real-time data from the Salesforce Data Cloud.


What is Amazon Security Lake? Written by ChatGPT

#artificialintelligence

As ChatGPT continues to make headlines in the news and tech blogs everywhere, I wanted to try it out for myself to see how I could use it in my own life. What better way than to have it write a new blog post summarizing an AWS service announced at re:Invent? I first went to ChatGPT with a simple question asking it to summarize the new Amazon Security Lake service. The result was a well-written blog post summarizing all of its capabilities. Amazon Security Lake is a new service from Amazon Web Services (AWS) that provides a central repository for storing, analyzing, and managing security data at scale.


AWS Certified Machine Learning Specialty -- Resources and Experience

#artificialintelligence

This article covers my experience getting certified with AWS Certified Machine Learning -- Specialty, along with the resources and cheat sheets that helped me understand the concepts! In the preparation phase, I came across many excellent articles, blogs, and experience posts alongside the courses, which immensely helped me grasp the breadth and depth of the AWS ML world. I want to share my experience and the resources I found along the way, which boosted my confidence to take the certification! Let's fill in some color. What exactly are we covering in this article?


Demystifying machine learning at the edge through real use cases

#artificialintelligence

Edge is a term that refers to a location, far from the cloud or a big data center, where you have a computer device (edge device) capable of running (edge) applications. Edge computing is the act of running workloads on these edge devices. Machine learning at the edge (ML@Edge) is a concept that brings the capability of running ML models locally to edge devices. These ML models can then be invoked by the edge application. ML@Edge is important for many scenarios where raw data is collected from sources far from the cloud. Although ML@Edge can address many use cases, there are complex architectural challenges that need to be solved in order to have a secure, robust, and reliable design.
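The core idea described above, that the model artifact lives on the edge device and the edge application invokes it locally with no cloud round-trip, can be sketched in a few lines of Python. This is a purely illustrative toy (the TinyModel class, artifact file name, weights, and anomaly threshold are all invented for this sketch, not part of any AWS SDK):

```python
import json
import os
import tempfile

class TinyModel:
    """Illustrative stand-in for a compiled ML model deployed to an edge device."""
    def __init__(self, weights):
        self.weights = weights

    def predict(self, features):
        # Weighted-sum score, thresholded into a label
        score = sum(w * x for w, x in zip(self.weights, features))
        return "anomaly" if score > 1.0 else "normal"

# Package the model as a local artifact -- what a deployment pipeline
# would ship to the device ahead of time.
artifact = os.path.join(tempfile.gettempdir(), "edge_model.json")
with open(artifact, "w") as f:
    json.dump({"weights": [0.8, 0.5]}, f)

# The edge application loads the artifact and invokes the model locally;
# the raw sensor reading never has to leave the device.
with open(artifact) as f:
    model = TinyModel(json.load(f)["weights"])

reading = [1.2, 0.9]           # e.g. sensor data collected at the edge
print(model.predict(reading))  # 0.8*1.2 + 0.5*0.9 = 1.41 > 1.0 -> "anomaly"
```

In a real ML@Edge design the artifact would be a trained model (for example, one compiled for the target hardware), but the shape is the same: ship the model to the device, then serve predictions locally.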


Advice & Tips for Passing AWS Machine Learning Specialty

#artificialintelligence

I read a bunch of other articles on the subject, all of which helped and supported me on the path to passing the exam. After finishing, I wanted to contribute to that pool of knowledge, so, in true data science fashion, I thought I'd go ahead and open source my experience and preparatory materials. Everyone approaches these exams differently, so before I get into the meat of the article, it may help to detail my background and where I started studying for the exam. At the time I began studying, I had been practicing data science for approximately 7 years. I held a master's in applied statistics and minored in math during my undergraduate years.


5 AWS Services That Implement AIOps Effectively

#artificialintelligence

The rise of AI has influenced almost every domain, including DevOps and SysOps. When AI is infused into tools used for systems management, they become more efficient and intelligent. Like other machine learning-based systems, AIOps relies on massive amounts of data. The metrics, logs, and events captured from tens of thousands of machines help data scientists and ML engineers derive interesting insights through correlation. AWS is equipped with everything it takes to design an effective AIOps strategy across its infrastructure, operations, and management services.


Build GAN with PyTorch and Amazon SageMaker

#artificialintelligence

A GAN (generative adversarial network) is a generative ML model widely used in advertising, games, entertainment, media, pharmaceuticals, and other industries. You can use it to create fictional characters and scenes, simulate facial aging, change image styles, produce synthetic data such as chemical formulas, and more. For example, the following images show the effect of picture-to-picture conversion. The following images show the effect of synthesizing scenery based on semantic layout. This post walks you through building your first GAN model using Amazon SageMaker. It's a journey into GANs from the perspective of practical engineering experience, as well as an opening into the new AI/ML domain of generative models.
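To make the adversarial setup concrete before diving into the SageMaker walkthrough, here is a minimal self-contained PyTorch sketch of a single GAN training step. The network sizes, random "real" data, and hyperparameters are hypothetical placeholders for illustration, not the article's actual model:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, data_dim = 16, 64  # hypothetical sizes for illustration

# Generator: maps random noise to a fake sample
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                  nn.Linear(32, data_dim), nn.Tanh())
# Discriminator: scores a sample as real (1) or fake (0)
D = nn.Sequential(nn.Linear(data_dim, 32), nn.LeakyReLU(0.2),
                  nn.Linear(32, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(8, data_dim)   # stand-in for a minibatch of real data
z = torch.randn(8, latent_dim)
fake = G(z)

# Discriminator step: push D(real) toward 1 and D(fake) toward 0
opt_d.zero_grad()
d_loss = (loss_fn(D(real), torch.ones(8, 1))
          + loss_fn(D(fake.detach()), torch.zeros(8, 1)))
d_loss.backward()
opt_d.step()

# Generator step: update G so that D scores its fakes as real
opt_g.zero_grad()
g_loss = loss_fn(D(fake), torch.ones(8, 1))
g_loss.backward()
opt_g.step()

print(fake.shape)  # torch.Size([8, 64])
```

In practice you would loop these two steps over real minibatches; on SageMaker the same training script runs inside a managed training job, with data and model artifacts in S3.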