Attacking the AI Trust Gap: 'FICO-like' Risk Scoring for Machine Learning Models
Implementing machine learning is a minefield and a slog. Even after IT managers stand up the accelerated computing infrastructure AI requires, after data scientists and business managers agree on the analytics projects the organization needs, and after the data science team selects algorithms, builds models, prepares data, runs prototypes, and puts everything into operation, there is still a real possibility that business unit managers will reject ML recommendations, whether for fear of bias in the model or simply because they don't understand how the system arrives at its decisions. This is the AI Trust Gap, and it's a particularly difficult hurdle for companies without FAANG-class compute and data science resources.

We've written about recent attempts to close the trust gap, including management strategy recommendations ("How to Overcome the AI Trust Gap: A Strategy for Business Leaders") and a product launch last month by IBM ("Explaining AI Decisions to Your Customers: IBM Toolkit for Algorithm Accountability"). Now CognitiveScale has added Certifai to its Cortex line of enterprise AI software. According to the company, Certifai generates a "FICO-like" composite risk score based on the "AI Trust Index" that CognitiveScale developed with AI Global.
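CognitiveScale has not published the formula behind its composite score, but the general idea of a "FICO-like" index, rolling several per-dimension risk scores into a single number on a fixed scale, can be sketched as follows. The dimension names, weights, and score range here are purely illustrative assumptions for explanation, not Certifai's actual methodology.

```python
# Illustrative sketch only: a weighted composite "trust index" that rolls
# per-dimension scores (each in 0..1, higher = more trustworthy) into one
# FICO-like number. The dimensions and weights below are hypothetical
# assumptions, not CognitiveScale's actual method.

SCALE_MIN, SCALE_MAX = 300, 850  # FICO scores run 300-850

# Hypothetical trust dimensions with illustrative weights (sum to 1.0).
WEIGHTS = {
    "fairness": 0.30,
    "explainability": 0.25,
    "robustness": 0.25,
    "data_quality": 0.20,
}

def composite_trust_score(dimension_scores: dict) -> int:
    """Map weighted 0..1 dimension scores onto the 300..850 scale."""
    for name, score in dimension_scores.items():
        if not 0.0 <= score <= 1.0:
            raise ValueError(f"{name} score must be in [0, 1], got {score}")
    weighted = sum(WEIGHTS[name] * dimension_scores[name] for name in WEIGHTS)
    return round(SCALE_MIN + weighted * (SCALE_MAX - SCALE_MIN))

scores = {"fairness": 0.9, "explainability": 0.7,
          "robustness": 0.8, "data_quality": 0.6}
print(composite_trust_score(scores))  # weighted sum 0.765 -> 721
```

A single number like this is easier for a business unit manager to compare against a risk threshold than four separate metrics, which is presumably the appeal of the FICO analogy.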
Oct-3-2019, 16:37:58 GMT