Collaborating Authors

modzy


The Perks and Obstacles of AI Adoption in Insurance

#artificialintelligence

Imagine that you are a leader at an insurance company. You know that artificial intelligence (AI) will give you a competitive edge, and you have decided to invest. You hire two brilliant data scientists, Juana and Yash. Juana develops an AI solution that scans digitized customer files, mines them for relevant information, and calculates accurate payouts. You project savings of over $1 million over the next two years, along with a 30% increase in staff productivity.


AI in Production: the Final Frontier

#artificialintelligence

Production is often viewed as the final frontier in the machine learning process. By now, your data scientists have trained a model on your data, the machine learning and software engineers have incorporated that model into an application, the DevOps team has configured the automation that containerizes the application for use by the rest of the organization, and the IT department has set up infrastructure to host your model's application. At this point, most program managers flip the proverbial switch, allow users to rely on the solution, and move on to the next thing. That, however, is the wrong thing to do. This post in the ModelOps blog series covers the model production step in the ModelOps pipeline, or AI in production, and the active management required to successfully field a machine learning model.
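The "active management" the post calls for often starts with something as simple as input monitoring on the fielded model. A minimal, purely illustrative sketch (the class, threshold, and toy model below are hypothetical, not Modzy's actual API):

```python
class MonitoredModel:
    """Wraps a deployed model callable and flags drifted inputs (toy sketch)."""

    def __init__(self, model, baseline_mean, baseline_stdev, z_threshold=3.0):
        self.model = model                  # any callable: features -> prediction
        self.baseline_mean = baseline_mean  # stats of a feature seen in training
        self.baseline_stdev = baseline_stdev
        self.z_threshold = z_threshold
        self.drift_alerts = []              # inputs that looked out-of-distribution

    def predict(self, features):
        # Simple drift check: z-score of the first feature vs. training baseline.
        z = abs(features[0] - self.baseline_mean) / self.baseline_stdev
        if z > self.z_threshold:
            self.drift_alerts.append(features)
        return self.model(features)

# Usage: a toy "model" plus one in-distribution and one drifted input.
model = MonitoredModel(lambda f: sum(f) > 1.0, baseline_mean=0.0, baseline_stdev=1.0)
model.predict([0.5, 0.2])    # z = 0.5, no alert
model.predict([10.0, 0.2])   # z = 10 > 3, logged as drift
```

In a real ModelOps pipeline this role is played by dedicated monitoring services, but the principle is the same: the model keeps serving while out-of-distribution traffic is surfaced for review.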


Council Post: ModelOps To The Rescue: What Your AI Has Been Missing

#artificialintelligence

ModelOps has swooped in to make artificial intelligence (AI) accessible to anyone, anywhere. Faced with staggering stats like "On average, organizations take nine months to develop AI initiatives from prototype to production" and "as of 2018 only 47% of all AI investments make it out of the lab," data scientists and developers needed an easier way to take models from their favorite machine learning (ML) workbenches to running at scale in production. If you're not working in data science or the AI industry, ModelOps is probably a new term. It's a relatively new technical field, so you're likely to find differing opinions, which can create confusion. Therefore, it's worth clarifying what is meant by ModelOps.


Lead with DevSecOps to lower risk and raise value

#artificialintelligence

Developing and deploying AI-powered systems and applications is a complex business, especially in our extended remote reality. You're likely facing an uphill climb and, let's face it, huge risks. Clearing the obstacles, lowering the risks, and raising the value you deliver hinges on one essential element: implementing DevSecOps to protect your process and your assets. We're operating in a different world now, where unity among development (Dev), security (Sec), and operations (Ops) has never been more essential. Compounded by the pressure to rapidly adapt office infrastructure to remote work during the COVID-19 pandemic, the DevSecOps market is projected to grow 32% to 34% by mid-decade.[i]


DarwinAI Now a Model Partner at Modzy

#artificialintelligence

Modzy, the leading enterprise AI platform, announced that DarwinAI is now a model partner in the Modzy AI Model Marketplace. DarwinAI is expected to deploy numerous models using its GenSynth platform, including COVID-Net, an open source deep neural network for detecting COVID-19 infections from chest X-rays. DarwinAI has been featured in the MIT Technology Review, AI in Healthcare, and VentureBeat for its innovations in combating COVID-19 using AI. "We're really excited for this partnership with DarwinAI," said Norm Litterini, Head of Partnerships at Modzy. "The quality of their work is reflected in their solid reputation and industry acknowledgment. DarwinAI's models will enable customers to quickly operationalize AI into strategic initiatives while building out our marketplace offering, particularly for biomedical applications, where there is a critical need."


Calling all Citizen Data Scientists

#artificialintelligence

At some point in our careers, we are asked to "stretch" beyond our core strengths--and then quickly scour all available resources to get up to speed. With more organizations leveraging AI for better, data-driven decision-making, these "stretch" opportunities are a daily reality for many. A study published by Element AI in 2018 estimated that only 22,000 researchers in the world are able to pursue serious machine learning research.[1] As talent gaps compete with rising demands, we've seen the emergence of the "citizen data scientist." A citizen data scientist is usually someone with strong technical acumen who, while not a data scientist by training, relies more and more on self-service analytics and AI tools--the superpowers we need to thrive today.


ModelOps Webinar Presented by Modzy with Mike Gualtieri

#artificialintelligence

AI is all the buzz today, yet many find the final step – operationalizing AI at scale – too big a leap. So, what's the secret to successfully getting AI out of the lab and into production? To deploy, monitor, and govern machine learning and other analytical models, you need the confidence and control of a rapid, repeatable process. Model operations (ModelOps) is the answer you need now to make AI business value happen.


Hacking AI: Exposing Vulnerabilities in Machine Learning

#artificialintelligence

An NLP bot gives an erroneous summary of an intercepted wire. Scenarios like this show how AI systems can be hacked, an area of increased focus for government and industry leaders alike. As AI technology matures, it's being adopted widely, which is great. That is what is supposed to happen, after all. However, greater reliance on automated decision-making in the real world brings a greater threat that bad actors will employ techniques like adversarial machine learning and data poisoning to hack our AI systems.
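To make "adversarial machine learning" concrete, here is a toy sketch of the fast gradient sign method (FGSM) against a hand-built logistic-regression classifier. The weights, input, and the deliberately large epsilon are all made up for illustration; real attacks target far larger models with much smaller perturbations:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic-regression "model" with fixed weights (hypothetical values).
w = np.array([2.0, -3.0, 1.0])

def predict(x):
    return int(sigmoid(w @ x) >= 0.5)

def fgsm(x, y_true, eps):
    """Fast Gradient Sign Method: nudge the input in the direction that
    increases the loss for the true label, bounded by eps per feature."""
    p = sigmoid(w @ x)
    grad_x = (p - y_true) * w        # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)

x = np.array([1.0, -1.0, 1.0])       # the model confidently predicts class 1
x_adv = fgsm(x, y_true=1, eps=1.5)   # signed perturbation flips the prediction
```

Data poisoning, by contrast, corrupts the training set rather than the deployed input, but both exploit the same fragility: small, targeted changes can move a model across its decision boundary.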


It's Official – Our DNN Models are Now Commodity Software

#artificialintelligence

Summary: Booz Allen just launched a one-stop shop for all manner of pretested DNN models. This makes buying one just like picking accounting, CRM, or HRIS software. Equally important, it's a genius example of platform strategy to lock in customers and lock out competitors. The common vision of developing and deploying a deep learning model involves half a dozen (at least) data scientists and engineers slogging away for perhaps three to six months before having an MVP to first test in production. Now you can go down to the software store, grab a COTS (commercial off the shelf) DNN for any image or text problem you may have, add a little transfer learning, and slam, bang, thank you ma'am, you're in production.
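The "add a little transfer learning" step can be sketched in a few lines: freeze the pretrained feature extractor and fit only a small new head on your own data. Everything below (the random stand-in for "pretrained" weights, the toy labels, the learning rate) is a hypothetical NumPy-only illustration, not any vendor's actual workflow:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained, frozen feature extractor; in real transfer
# learning these weights would come from the off-the-shelf DNN.
W_frozen = rng.normal(size=(8, 4))

def extract_features(X):
    return np.maximum(X @ W_frozen.T, 0.0)   # frozen linear layer + ReLU

# Toy dataset for the new task.
X = rng.normal(size=(50, 4))
y = (X[:, 0] > 0).astype(float)

feats = extract_features(X)

# Train only the new linear "head" via logistic-regression gradient descent;
# the extractor's weights are never touched.
head_w = np.zeros(8)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(feats @ head_w)))
    head_w -= 0.1 * feats.T @ (p - y) / len(y)

# Mean logistic loss after training; with zero weights it starts at ln(2).
logits = feats @ head_w
loss_after = np.mean(np.log1p(np.exp(-(2 * y - 1) * logits)))
```

Because only the small head is trained, the data, compute, and time required shrink dramatically compared with training the whole network, which is exactly the economics behind selling pretested DNN models as commodity software.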