The Problem With Biased AIs (and How To Make AI Better)
AI has the potential to deliver enormous business value for organizations, and its adoption has been sped up by the data-related challenges of the pandemic. Forrester estimates that almost 100% of organizations will be using AI by 2025, and the artificial intelligence software market will reach $37 billion by the same year. But there is growing concern around AI bias -- situations where AI makes decisions that are systematically unfair to particular groups of people. Researchers have found that AI bias has the potential to cause real harm. I recently had the chance to speak with Ted Kwartler, VP of Trusted AI at DataRobot, to get his thoughts on how AI bias occurs and what companies can do to make sure their models are fair.
DataRobot exec talks 'humble' AI, regulation
Organizations of all sizes have accelerated the rate at which they employ AI models to advance digital business transformation initiatives. But in the absence of any clear-cut regulations, many of these organizations don't know with any certainty whether those AI models will one day run afoul of new AI regulations. Ted Kwartler, vice president of Trusted AI at DataRobot, talked with VentureBeat about why it's critical for AI models to make predictions "humbly" to make sure they don't drift or, one day, potentially run afoul of government regulations. This interview has been edited for brevity and clarity. VentureBeat: Why do we need AI to be humble?
Can We Trust AI? When AI Asks For Human Help (Part One)
Making AI more 'humble' could not only improve AI decision making, but could also inspire more trust in the technology as a whole, opening the door for more useful and mission-critical applications in the future. AI is notoriously difficult to explain, and some deep learning algorithms are too complex for even their creators to understand their reasoning. This makes it hard to trust what AI is doing, and even harder to find mistakes before it's too late. Having an algorithm stop partway through its reasoning to check with a human-in-the-loop could inspire more trust in AI, and open the door for the technology to be used in more sensitive and mission-critical applications. Injecting some 'humility' into AI in this way could not only make AI more trustworthy and change how companies think about it, but could also help demystify AI and reveal it as the logical and reliable technology that it is.
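The deferral mechanism described above can be sketched in a few lines: when the model's confidence in its top prediction falls below a threshold, it stops and escalates to a human reviewer instead of acting. This is a minimal illustration under assumed names; the `predict_proba` stub and the threshold value are hypothetical and do not reflect DataRobot's actual API.

```python
def predict_proba(features):
    # Stand-in for a real model; returns class probabilities.
    score = min(max(sum(features) / len(features), 0.0), 1.0)
    return {"approve": score, "deny": 1.0 - score}

def humble_predict(features, threshold=0.8):
    """Return an automated decision only when the model is confident."""
    probs = predict_proba(features)
    label = max(probs, key=probs.get)
    if probs[label] < threshold:
        # Confidence too low: stop and defer to a human-in-the-loop.
        return {"decision": "needs_human_review", "probs": probs}
    return {"decision": label, "probs": probs}

print(humble_predict([0.9, 0.95, 0.92]))  # confident -> automated decision
print(humble_predict([0.5, 0.6, 0.55]))   # uncertain -> human review
```

The design choice is that uncertainty is surfaced rather than hidden: low-confidence cases become explicit review items, which is the "stop partway and check with a human" behavior the excerpt describes.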
The Maturation of Data Science
Data science used to be somewhat of a mystery, more of a dark art than a repeatable, scientific process. Companies essentially entrusted powerful priests called data scientists to build magical algorithms that used data to make predictions, usually to boost profits or improve customer happiness. But in recent years, the field has matured to a remarkable degree, and that is enabling progress on multiple fronts, from ModelOps and reproducibility to ethics and accountability. About five years ago, the worldwide scientific community was suffering a "reproducibility crisis" that impacted a wide range of scientific endeavors, including so-called hard sciences like physics and chemistry. One of the hallmarks of the scientific method is that experiments must be reproducible and give the same results, but that lofty goal too often was not met.
Why you need to pay more attention to combatting AI bias
As artificial intelligence (AI) continues its march into enterprises, many IT pros are beginning to express concern about potential AI bias in the systems they use. A new report from DataRobot finds that 42% of AI professionals in the US and UK are "very" to "extremely" concerned about AI bias. The survey, conducted last June among more than 350 US- and UK-based CIOs, CTOs, VPs, and IT managers involved in AI and machine learning (ML) purchasing decisions, also found that "compromised brand reputation" and "loss of customer trust" are the most concerning repercussions of AI bias. This prompted 93% of respondents to say they plan to invest more in AI bias prevention initiatives in the next 12 months. Although many organizations see AI as a game changer, many are still using untrustworthy AI systems, said Ted Kwartler, vice president of trusted AI at DataRobot.