
Production environment


Finding AI's low-hanging fruit

#artificialintelligence

Delivering AI solutions from the test bed to production environments will probably be the key focus for the enterprise over the next year or longer. But despite the pressure to keep up with the competition, organizations should be cautious not to push AI too far too fast. Doing so often leads to two key problems. First, it pushes inadequate solutions into environments that quickly overwhelm them, leading to failure, disillusionment, and mistrust among the user base that ultimately inhibit adoption. The AI industry is not helping matters with its steady stream of promises that its solutions offer complete digital autonomy and transformative experiences.


How AI is making real contributions (right now) to business models

#artificialintelligence

Stories of how AI will benefit the enterprise are a dime a dozen these days. Applications in sales, marketing, payroll, and a host of other areas are legion. But so far there has been precious little talk about how, exactly, organizations are faring with their AI projects. Are they really delivering on these promises, and are there concrete examples of AI at work that can be emulated elsewhere?


Ploomber vs Kubeflow: Making MLOps Easier - DataScienceCentral.com

#artificialintelligence

In this short article, I'll try to capture the main differences between the MLOps tools Ploomber and Kubeflow. We'll cover some background on what Ploomber and Kubeflow pipelines are, and why we need such tools to make our lives easier. We'll see the differences in three main areas. Let's start with a short explanation of what a common data/ML workflow looks like, why we even need orchestration, and how it will help you do your job better and faster. Usually, when an organization has data and is looking to produce insights or predictions from it (to drive a business outcome), it brings in data scientists or machine learning engineers (MLEs) to explore the data, prepare it for consumption, and generate a model. These three assignments can then be unified into a data pipeline of correlated tasks: getting the data, cleaning it, and training a model.
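
As a rough illustration of that workflow, here is a minimal sketch of the three tasks in plain Python (the file path and label column are hypothetical, and this is neither Ploomber's nor Kubeflow's API; an orchestrator's job is to wire such tasks together, cache their products, and re-run only what changed):

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def get_data(path):
        # Task 1: acquire raw data from a trusted source (here, a CSV file).
        return pd.read_csv(path)

    def clean(df):
        # Task 2: prepare the data for consumption, e.g. drop incomplete rows.
        return df.dropna()

    def train(df, target="label"):
        # Task 3: fit a model on the cleaned features.
        X, y = df.drop(columns=[target]), df[target]
        return LogisticRegression(max_iter=1000).fit(X, y)

    # Running the tasks in dependency order; an orchestrator automates this.
    model = train(clean(get_data("data/raw.csv")))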


The Pitfalls of Using AI as the Input of Another AI

#artificialintelligence

In the previous article, I briefly mentioned how using AI sequentially is a nightmare. Whenever one AI's output is used as input for another, the individual errors of each model quickly add up to unacceptable levels or, simply put, catastrophic failure. Moreover, as you add more nodes to the chain, the problem gets exponentially worse. In this article, I expand on the matter, explaining the intuition for why sequential models fail and how we can remedy some of these issues. The following discussion is of paramount interest to anyone developing complex AI pipelines, such as using object detection to find objects of interest and then applying some other model to those objects.
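
To see how quickly those errors compound, suppose each stage of a pipeline succeeds independently with probability p; the whole chain succeeds only when every stage does. A back-of-the-envelope check (the 95% per-stage accuracy is illustrative):

    # If each of n chained models succeeds independently with probability p,
    # the end-to-end success probability is p ** n, which decays exponentially in n.
    p = 0.95  # illustrative per-stage accuracy
    for n in range(1, 6):
        print(f"{n} stage(s): end-to-end accuracy = {p ** n:.3f}")
    # 1 stage:  0.950
    # 5 stages: 0.774, i.e. a 5% per-stage error grows into a ~23% pipeline error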


5 Things to Consider When Operationalizing Your Machine Learning

#artificialintelligence

Operationalizing machine learning models requires a different process than creating those models. To make this transition successfully, you need to consider five critical areas. When machine learning teams start out, most of their work is done in a laboratory mode: they work through the process in a manual yet scientific manner, iteratively developing valuable machine learning models by forming a hypothesis, testing the model to confirm that hypothesis, and adjusting to improve model behavior.
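
That laboratory loop can be sketched in a few lines: propose a hypothesis, test it with cross-validation, and keep only the adjustments that improve behavior. The dataset and hyperparameter grid below are stand-ins:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)  # stand-in for real project data

    best_depth, best_score = None, 0.0
    for depth in (2, 4, 8, None):                # hypothesis: tree depth matters
        model = RandomForestClassifier(max_depth=depth, random_state=0)
        score = cross_val_score(model, X, y, cv=5).mean()  # test the hypothesis
        if score > best_score:                   # adjust: keep what improves behavior
            best_depth, best_score = depth, score
    print(best_depth, round(best_score, 3))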


Machine learning: The AIOps system Azure uses to make the cloud reliable

#artificialintelligence

Cloud services change all the time, whether it's adding new features or fixing bugs and security vulnerabilities; that's one of their big advantages over on-prem software. But every change is also an opportunity to introduce the bugs and regressions that are the main causes of reliability issues and cloud downtime. To avoid such issues, Azure uses a safe deployment process that rolls out updates in phases, running them on progressively larger rings of infrastructure and using continuous, AI-powered monitoring to detect any issues that were missed during development and testing. When Microsoft launched its Chaos Studio service for testing how workloads cope with unexpected faults last year, Azure CTO Mark Russinovich explained the safe deployment process: "We go through a canary cluster as part of our safe deployment, which is an internal Azure region where we've got synthetic tests and we've got internal workloads that actually test services before they go out. This is the first production environment that the code for a new service update reaches, so we want to make sure that we can validate it and get a good sense of its quality before we move it out and actually have it touch customers."
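
The ring idea itself is simple to sketch: promote a build outward only while monitoring stays healthy. The ring names, simulated health check, and function names below are hypothetical, not Azure's implementation:

    import random

    # Each ring is a progressively larger slice of infrastructure; the canary
    # ring sees a build first, broader rings only if earlier rings stay healthy.
    RINGS = ["canary", "pilot", "region-1", "region-2", "worldwide"]

    def deploy(build, ring):
        print(f"deploying {build} to {ring}")

    def healthy(ring):
        # Stand-in for continuous, AI-powered monitoring of errors and latency.
        return random.random() > 0.1  # simulate a 10% chance of a regression

    def roll_out(build):
        for ring in RINGS:
            deploy(build, ring)
            if not healthy(ring):
                print(f"regression detected in {ring}; halting rollout")
                return False  # stop before the issue touches more customers
        return True

    roll_out("service-update-42")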


Reducing cloud waste by optimizing Kubernetes with machine learning

ZDNet

Applications are proliferating, cloud complexity is exploding, and Kubernetes is prevailing as the foundation for application deployment in the cloud. That sounds like an optimization task ripe for machine learning, and StormForge is acting on that.
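
One flavor of that optimization is tuning container resource requests from observed usage. The toy sketch below shows the basic idea of recommending a request high enough to avoid throttling but low enough to avoid waste (the percentile and headroom values are illustrative, and this is not StormForge's algorithm):

    import numpy as np

    def recommend_request(usage_samples, percentile=95, headroom=1.2):
        # Take a high percentile of historical usage and add headroom, so pods
        # are neither starved under load nor wastefully over-provisioned.
        return float(np.percentile(usage_samples, percentile)) * headroom

    # e.g., a day of per-minute CPU usage (in cores) for one workload
    cpu_usage = np.random.default_rng(0).gamma(shape=2.0, scale=0.1, size=1440)
    print(f"suggested CPU request: {recommend_request(cpu_usage):.2f} cores")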


Getting AI from the lab to production

#artificialintelligence

The enterprise is eager to push AI out of the lab and into production environments, where it will hopefully usher in a new era of productivity and profitability. But this is not as easy as it seems, because it turns out that AI tends to behave very differently in the test bed than it does in the real world. Getting over this hump between the lab and actual applications is quickly emerging as the next major objective in the race to deploy AI. Since intelligent technology requires a steady flow of reliable data to function properly, a controlled environment is not necessarily the proving ground for AI that it is for traditional software.
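
One concrete reason models behave differently outside the lab is that production data drifts away from the data they were built on. A simple guard is to compare feature distributions between training and production, for example with a two-sample Kolmogorov-Smirnov test (the data and significance threshold below are illustrative):

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)
    train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # lab data
    prod_feature = rng.normal(loc=0.3, scale=1.1, size=5000)   # shifted production data

    stat, p_value = ks_2samp(train_feature, prod_feature)
    if p_value < 0.01:  # illustrative threshold
        print(f"drift detected (KS statistic = {stat:.3f}); consider retraining")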


5 Data Science Trends in the Next 5 Years

#artificialintelligence

This field is large enough that it is impossible to cover deeply everything that could happen in it over the coming five years. Important trends that I foresee but won't cover here are specific applications of data science in unique domains, the integration of low-code/no-code tools into the tech stack, and other narrowly focused insights. This will be a look at the general, broad themes of change I see arriving, and staying, over the next half-decade. It isn't an exhaustive list, but it does cover a lot of the issues faced in practice today. The title of data scientist has been a big issue for many in the industry, mainly because of the ambiguity around what the role entails and what the company needs. Although I believe job descriptions have largely become clearer and more concise, the job profiles are only starting to become normalized.


Machine Learning Model Development and Model Operations: Principles and Practices - KDnuggets

#artificialintelligence

The use of Machine Learning (ML) has increased substantially in enterprise data analytics scenarios as a way to extract valuable insights from business data. Hence, it is very important to have an ecosystem for building, testing, deploying, and maintaining enterprise-grade machine learning models in production environments. ML model development involves acquiring data from multiple trusted sources, processing the data into a form suitable for building the model, choosing an algorithm, building the model, computing performance metrics, and selecting the best-performing model. Model maintenance plays a critical role once the model is deployed into production: it means keeping the model up to date and relevant as the source data changes, since there is a risk of the model becoming outdated over time.
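
The build-evaluate-select step described above can be sketched in a few lines: train several candidate models, compute a performance metric for each, and keep the best one (the dataset, candidates, and metric below are illustrative choices):

    from sklearn.datasets import load_wine
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    X, y = load_wine(return_X_y=True)  # stand-in for processed business data

    candidates = {
        "logistic_regression": LogisticRegression(max_iter=5000),
        "gradient_boosting": GradientBoostingClassifier(random_state=0),
    }

    # Compute a performance metric for each candidate and choose the best model.
    scores = {name: cross_val_score(model, X, y, cv=5).mean()
              for name, model in candidates.items()}
    best = max(scores, key=scores.get)
    print(f"best model: {best} (mean CV accuracy {scores[best]:.3f})")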