Council Post: How Limitless Observability Can Help Enable AISecOps-Driven Transformation

#artificialintelligence

Bernd Greifeneder is the CTO and founder of Dynatrace, a software intelligence company that helps to simplify enterprise cloud complexity. Continuous digital transformation now defines modern, competitive organizations. Yet, the infrastructure that supports this transformation--powering everything from mobile banking to personalized, omnichannel retail experiences and "smart" healthcare--is built on complex multicloud architectures. The scale and complexity of these data and application environments are increasing relentlessly, and companies already use five different cloud service platforms on average, according to research conducted by Coleman Parkes and commissioned by Dynatrace. This complexity has outgrown humans' ability to manage it.


Software testing trends: From AI to DevTestOps, what's hot and why

#artificialintelligence

The software development space is extremely volatile and is constantly evolving. In software testing, what works for an organization in the present may not be as effective a few months down the line. As the workloads become more distributed and decentralized, it is harder to test them and ensure quality. Today, organizations require quality at speed. The time it takes for products to reach the market is getting shorter, and testing can sometimes seem more like a hindrance than a necessity.


To Overcome DevOps Problems, More AI Skills Are Needed - AI Magazine

#artificialintelligence

Artificial intelligence can strengthen intelligence within companies, and IT shops are no exception. For example, AIOps (artificial intelligence for IT operations) applies AI and machine learning to data from IT processes, sifting through noise to detect, highlight and prevent problems. AI and machine learning are also finding a place in another emerging area of IT: helping DevOps teams ensure the viability and quality of software that moves at ever-increasing speed through the pipeline and out to users. As a recent survey by GitHub indicates, development and operations teams are turning to AI in large numbers to streamline code flow in the software review and testing phases. The survey also reveals that 37% of teams are using AI/ML in software testing (up from 25% previously), and another 20% plan to adopt it this year.


Council Post: How To Leverage AI/ML For Predictive Incident Management

#artificialintelligence

Digital transformation has driven the adoption of new-age technologies that operate with minimal human intervention. And while they may heighten productivity and drive growth, any failure can pose a significant challenge for IT and DevOps teams to resolve. An incident or service disruption is an IT manager's worst nightmare. Very often, factors such as cybersecurity breaches, human error, and the accelerated pace of innovation place significant pressure on enterprises' IT infrastructure, leading to system failures and outages that hit the bottom line. According to the ITIC 2021 Hourly Cost of Downtime Survey of 1,200 global organizations, 44% of participants said that an hour of downtime costs anywhere from $1 million to over $5 million.
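The core of predictive incident management is detecting anomalous telemetry before it cascades into an outage. As a minimal sketch of the idea (a toy z-score detector, not any vendor's actual model; the metric names and thresholds are illustrative assumptions), a trailing-window statistical check can flag a latency spike that a real AIOps tool would surface for triage:

```python
from statistics import mean, stdev

def flag_anomalies(latencies_ms, window=5, threshold=3.0):
    """Flag points whose z-score against a trailing window exceeds threshold.

    A toy stand-in for the ML models AIOps tools use to surface
    incidents early; real systems use far richer features and models.
    """
    flags = []
    for i, value in enumerate(latencies_ms):
        history = latencies_ms[max(0, i - window):i]
        if len(history) < 2:
            flags.append(False)  # not enough history to judge
            continue
        mu, sigma = mean(history), stdev(history)
        flags.append(sigma > 0 and abs(value - mu) / sigma > threshold)
    return flags

# Hypothetical service latencies (ms); the 450 ms spike is the incident.
metrics = [100, 102, 99, 101, 100, 98, 450, 101]
print(flag_anomalies(metrics))  # only the 450 ms spike is flagged
```

In practice the value of the "predictive" part comes from acting on such flags automatically, e.g. paging on-call staff or rolling back a deployment before users notice.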


NVIDIA Raises the Standard of Low Code DevOps with the NVIDIA AI Enterprise 2.1

#artificialintelligence

NVIDIA AI Enterprise 2.1 is now generally available to all enterprise users. Today, NVIDIA announced the most advanced version of its AI-powered data and analytics software for the enterprise. The new AI suite enables users to fully optimize their IT and low-code DevOps processes in highly scalable AI-based environments, spanning bare-metal, virtual, container, and cloud deployments. NVIDIA AI Enterprise 2.1 is part of NVIDIA's AI Enterprise suite.


10 ways AI and ML are accelerating DevOps

#artificialintelligence

Software development teams are adopting AI and ML models in their apps and platforms to reduce DevOps lag, and AI-driven DevOps looks set to become the norm. Software tool vendors are accelerating the integration of AI and machine learning models into their products while seeking ways to reduce delays for DevOps teams. Because humans are not suited to handling the enormous volumes of data and computation involved in daily operations, AI is set to become the essential tool for computing and analysis, revolutionizing how teams create, distribute, deploy, and manage applications. But first, let's understand how AI and DevOps are related before exploring how AI and ML will impact DevOps.


Adopting MLSecOps for secure machine learning at scale

#artificialintelligence

Given the complexity, sensitivity and scale of the typical enterprise's software stack, security has naturally always been a central concern for most IT teams. But in addition to the well-known security challenges faced by DevOps teams, organizations also need to consider a new source of security challenges: machine learning (ML). ML adoption is skyrocketing in every sector, with McKinsey finding that by the end of last year, 56% of businesses had adopted ML in at least one business function. However, in the race to adoption, many are encountering the distinct security challenges that come with ML, along with challenges in deploying and leveraging ML responsibly.


Iterative launches machine learning management tool

#artificialintelligence

Iterative, the MLOps company dedicated to streamlining the workflow of data scientists and machine learning (ML) engineers, has launched machine learning engineering management (MLEM), an open-source model deployment and registry tool that uses an organisation's existing Git infrastructure and workflows. According to the company, MLEM is designed to bridge the gap between ML engineers and DevOps teams. DevOps teams can understand the underlying frameworks and libraries a model uses and automate deployment into a one-step process for production services and apps, Iterative states. IDC AI/ML Lifecycle Management Software research director Sriram Subramanian says, "Iterative enables customers to treat AI models as just another type of software artifact. The ability to build ML model registries using Git infrastructure and DevOps principles allows models to get into production faster."
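The idea of treating a model as "just another software artifact" boils down to storing the model binary alongside plain-text metadata as ordinary files that Git can version. The sketch below is not MLEM's actual API; it is a minimal illustration of that underlying pattern, with hypothetical names (save_model, load_model, the "model-registry" directory):

```python
import json
import pickle
from pathlib import Path

def save_model(model, name, registry_dir="model-registry", meta=None):
    """Write the model binary plus a plain-text metadata sidecar.

    Both files are ordinary artifacts on disk, so they can be committed
    and versioned with the rest of the repository in Git.
    """
    root = Path(registry_dir)
    root.mkdir(parents=True, exist_ok=True)
    (root / f"{name}.pkl").write_bytes(pickle.dumps(model))
    (root / f"{name}.json").write_text(json.dumps(meta or {}, indent=2))

def load_model(name, registry_dir="model-registry"):
    """Load the model and its metadata back from the registry directory."""
    root = Path(registry_dir)
    model = pickle.loads((root / f"{name}.pkl").read_bytes())
    meta = json.loads((root / f"{name}.json").read_text())
    return model, meta

# Hypothetical usage: any picklable object stands in for a trained model.
save_model({"weights": [0.1, 0.9]}, "churn-v1", meta={"stage": "staging"})
model, meta = load_model("churn-v1")
```

Because the metadata sidecar is human-readable JSON, a DevOps engineer can see what a model needs without loading it, which is the gap-bridging benefit Iterative describes.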


Iterative launches MLEM, an open-source tool to simplify ML model deployment – TechCrunch

#artificialintelligence

MLOps platform Iterative, which announced a $20 million Series A round almost exactly a year ago, today launched MLEM, an open-source git-based machine learning model management and deployment tool. The idea here, the company says, is to bridge the gap between ML engineers and DevOps teams by using the git-based approach that developers are already familiar with. Using MLEM, developers can store and track their ML models throughout their lifecycle. As such, it complements Iterative's open-source GTO artifact registry and DVC, the company's version control system for data and models. "Having a machine learning model registry is becoming an essential part of the machine learning technology stack. Current SaaS solutions can lead to a divergence in the lifecycle of ML models and software applications," said Dmitry Petrov, co-founder and CEO of Iterative.


Operationalizing Machine Learning from PoC to Production - KDnuggets

#artificialintelligence

Many companies use machine learning to create a differentiator and grow their business. However, making machine learning work is not easy, as it requires a balance between research and engineering. One can come up with a good, innovative solution based on current research, yet it might never go live due to engineering inefficiencies, cost and complexity. Most companies haven't seen much ROI from machine learning, since the benefit is realized only when models are in production. Let's dive into the challenges and best practices that can make machine learning work.