Model life cycle


Efficiency is Not Enough: A Critical Perspective of Environmentally Sustainable AI

Wright, Dustin, Igel, Christian, Samuel, Gabrielle, Selvan, Raghavendra

arXiv.org Machine Learning

Artificial Intelligence (AI) is currently spearheaded by machine learning (ML) methods such as deep learning (DL), which have accelerated progress on many tasks thought to be out of reach of AI. These ML methods can often be compute hungry, energy intensive, and result in significant carbon emissions, a known driver of anthropogenic climate change. Additionally, the platforms on which ML systems run are associated with environmental impacts including and beyond carbon emissions. The solution lionized by both industry and the ML community to improve the environmental sustainability of ML is to increase the efficiency with which ML systems operate, in terms of both compute and energy consumption. In this perspective, we argue that efficiency alone is not enough to make ML as a technology environmentally sustainable. We do so by presenting three high-level discrepancies in how efficiency affects the environmental sustainability of ML once the many variables with which it interacts are taken into account. In doing so, we demonstrate comprehensively, at multiple levels of granularity, both technical and non-technical reasons why efficiency is not enough to fully remedy the environmental impacts of ML. Based on this, we present and argue for systems thinking as a viable path towards holistically improving the environmental sustainability of ML.
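The core argument above, that efficiency gains can be offset by growth in overall usage (a rebound effect), can be illustrated with a back-of-the-envelope sketch. All numbers below are hypothetical and chosen for illustration only; they are not taken from the paper:

```python
# Illustrative rebound-effect arithmetic (all figures hypothetical,
# not from the paper): halving per-inference energy does not halve
# total consumption if demand grows faster than efficiency improves.

def total_energy_kwh(energy_per_inference_kwh, inferences):
    """Total energy is per-unit energy times usage volume."""
    return energy_per_inference_kwh * inferences

# Baseline: 0.002 kWh per inference, 1M inferences.
before = total_energy_kwh(0.002, 1_000_000)
# After a 2x efficiency gain, but with 3x the usage.
after = total_energy_kwh(0.001, 3_000_000)

print(before, after, after > before)
```

Here a 2x efficiency improvement is outpaced by a 3x increase in usage, so total consumption still rises, which is the kind of interaction the authors argue efficiency alone cannot address.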


Unlocking the Value of AI in Business Applications with ModelOps

#artificialintelligence

AI is fast becoming critical to business and IT applications and operations. Organizations have been investing in artificial intelligence capabilities for years to stay competitive, hiring the best data science teams and investing more and more in artificial intelligence and machine learning systems. However, implementing AI/ML models is not easy, and the risk of failure is just around the corner. A solid methodology is needed to reduce this risk and enable companies to succeed. AI executives have been working to get more models into business for years now.


Council Post: Achieving Next-Level Value From AI By Focusing On The Operational Side Of Machine Learning

#artificialintelligence

Manasi Vartak is founder and CEO of Verta, a Palo Alto-based provider of solutions for Operational AI and ML Model Management. Technology research firm Gartner, Inc. has estimated that 85% of artificial intelligence (AI) and machine learning (ML) projects fail to produce a return for the business. The reasons often cited for the high failure rate include poor scope definition, bad training data, organizational inertia, lack of process change, mission creep and insufficient experimentation. To this list, I would add another reason I have seen cause many organizations to struggle to achieve value from their AI projects. Companies often have invested heavily in building data science teams to create innovative ML models.


Council Post: Top Six Trends (And Recommendations) For AI And ML In 2023

#artificialintelligence

Manasi Vartak is founder and CEO of Verta, a Palo Alto-based provider of solutions for Operational AI and ML Model Management. AI continues to transform our world as companies look to win over consumers with intelligent experiences delivered in real time on smartphones, smart TVs, smart cars--smart everything. But along with new opportunities, organizations are also finding new challenges as they seek to cross the AI chasm. Here are the top six AI/ML trends that I'll be tracking in the year ahead, along with recommendations for how enterprises can stay ahead of each trend. A recent study by our company's research group, Verta Insights, found that more than two-thirds of ML practitioners expect real-time use cases to increase significantly over the next three years.



Unlocking the Value of AI in Business Applications with ModelOps

#artificialintelligence

AI executives have been working to get more models in business for years now. The first hurdle was getting data scientists hired and tools for rapid model creation. That problem has been solved. The next hurdle is getting those models into production in a timely, compliant manner. Companies have a backlog of models that are sitting idle and degrading -- contributing no value/revenue to the business.


Don't Let Tooling and Management Approaches Stifle Your AI Innovation

#artificialintelligence

It is no coincidence that companies are investing in AI at unprecedented levels at a time when they are under tremendous pressure to innovate. The artificial intelligence models developed by data scientists give enterprises new insights, enable new and more efficient ways of working, and help identify opportunities to reduce costs and introduce profitable new products and services. The possibilities for AI use grow almost daily, so it's important not to limit innovation. Unfortunately, many organizations do just that by tethering themselves to proprietary tools and solutions. This can handcuff data scientists and IT as new innovations become available, and result in higher costs than an open environment that supports best-of-breed AI model development and management.



Council Post: Are Your Model Governance Practices 'AI Ready'?

#artificialintelligence

For some industries, the use of AI and machine learning models is novel, but several industries--consumer finance and insurance in particular--have been building, using and governing models for decades. These industries have well-developed governance practices built largely around algorithmic, rule-based and other model technologies and regulations that predate AI models. Many of the enterprises I talk to are revisiting their model operationalization and governance processes and strengthening them with new capabilities to accommodate the increased use of AI/ML technologies. You can't govern what you can't see, so every model risk management (MRM) program must start with a centralized model inventory that includes all the metadata associated with every model throughout its life cycle, from development to deployment, modification and retirement. This model metadata, which documents the model's complete history and lineage, captures a broad range of elements: the specific software and libraries used in development, the data used to train the model, the people involved in the model's development and maintenance and what they created or changed, the model's intended business use and KPIs, and an explanation of the key influencing factors behind the model's decision-making.
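The inventory-first principle described above can be sketched as a minimal metadata record. This is an illustrative sketch only; the class, field names, and example values are assumptions made for exposition, not any vendor's actual governance schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a model-inventory entry; all names and
# fields are illustrative, not a real MRM product's schema.
@dataclass
class ModelRecord:
    name: str
    version: str
    stage: str                      # e.g. "development", "deployed", "retired"
    training_data: str = ""         # pointer to the dataset used for training
    libraries: list = field(default_factory=list)
    owners: list = field(default_factory=list)
    intended_use: str = ""
    history: list = field(default_factory=list)  # lineage: (from, to, actor)

    def transition(self, new_stage, actor):
        """Record a life-cycle change so the model's lineage stays auditable."""
        self.history.append((self.stage, new_stage, actor))
        self.stage = new_stage

# Example: a hypothetical model moving from development into production.
record = ModelRecord("credit-risk", "1.3.0", "development",
                     training_data="loans-2022-snapshot",
                     owners=["data-science"])
record.transition("deployed", actor="mlops-team")
print(record.stage, record.history)
```

Keeping the transition history on the record itself is one simple way to satisfy the "you can't govern what you can't see" requirement: every stage change leaves an auditable trace.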


The Tech Behind Uber's Bet On Self-Driving Cars

#artificialintelligence

For the first time, ride-hailing company Uber has opened up about what is going on under the hood of ATG's machine learning infrastructure and version control platform for autonomous driving vehicles. ATG is the Advanced Technologies Group, which researches self-driving vehicles and deploys machine learning models into the cars. The self-driving division at Uber has more than 450 employees who have been working on autonomous vehicle technology for several years now. Recently, the self-driving team at Uber developed a set of tools and microservices, known as VerCD, to support the ML workflow. The team also discussed their self-driving vehicle components, which use machine learning models, as well as the machine learning model life cycle.
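The article does not detail VerCD's internals, but the general idea of versioned dependency tracking for ML workflows can be sketched generically. Everything below is an illustrative assumption, not VerCD's actual design: deriving a reproducible build identifier from pinned code, data, and configuration versions means any change to an input yields a new, traceable build.

```python
# Generic sketch of versioned ML-workflow tracking (illustrative only;
# this is NOT VerCD's actual design, which the article does not describe).
import hashlib
import json

def build_id(code_version, dataset_version, config):
    """Derive a deterministic build identifier from pinned inputs."""
    payload = json.dumps(
        {"code": code_version, "data": dataset_version, "config": config},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()[:12]

# Same pinned inputs always yield the same id; any change yields a new one.
a = build_id("abc123", "lidar-v7", {"lr": 0.001})
b = build_id("abc123", "lidar-v7", {"lr": 0.001})
c = build_id("abc123", "lidar-v8", {"lr": 0.001})
print(a == b, a == c)  # True False
```

Tying model builds to content-addressed identifiers like this is a common way to make every trained artifact reproducible and auditable across its life cycle.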