If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
The 1990s might have a lot to teach us about how we should tackle harm from artificial intelligence in the 2020s. Back then, some companies found they could actually make themselves safer by incentivizing the work of independent "white hat" security researchers who would hunt for issues and disclose them in a process that looked a lot like hacking with guardrails. That's how the practice of bug bounties became a cornerstone of cybersecurity today. In a research paper unveiled Thursday, researchers Josh Kenway, Camille François, Sasha Costanza-Chock, Inioluwa Deborah Raji and Joy Buolamwini argue that companies should once again invite their most ardent critics in -- this time, by putting bounties on harms that might originate in their artificial intelligence systems. François, a Fulbright scholar who has advised the French CTO and who played a key role in the U.S. Senate's probe of Russia's attempts to influence the 2016 election, published the report through the Algorithmic Justice League, which was founded in 2016 and "combines art and research to illuminate the social implications and harms of artificial intelligence."
Tech companies in the U.S. and the U.K. haven't done enough to prevent bias in artificial intelligence algorithms, according to a new survey from DataRobot. These same organizations are already feeling the impact of the problem in the form of lost customers and lost revenue. DataRobot surveyed more than 350 U.S.- and U.K.-based technology leaders to understand how organizations are identifying and mitigating instances of AI bias. Survey respondents included CIOs, IT directors, IT managers, data scientists and development leads who use or plan to use AI. The research was conducted in collaboration with the World Economic Forum and global academic leaders.
Companies working with AI fear losing money or staff over AI bias, but there's additional risk in being outpaced by competition if projects fail due to AI bias. To jump ahead of algorithmic bias, over half of companies with mature AI implementations check the fairness, bias and ethics of their AI platforms, according to the O'Reilly 2021 AI Adoption in the Enterprise report. One approach yielding results for organizations is the development of in-house centers of excellence, said Marshall Choy, SVP, product at SambaNova. These institutions can address the technical aspects of AI as well as "the business and organizational implications of governance, dealing with topics like bias and ethics of AI." Despite ethical challenges, AI remains a top enterprise technology priority.
AI bias is already harming businesses, and there's significant appetite for more regulation to help counter the problem. The findings come from the State of AI Bias report by DataRobot, produced in collaboration with the World Economic Forum and global academic leaders. The report drew on responses from over 350 organisations across industries. "DataRobot's research shows what many in the artificial intelligence field have long known to be true: the line of what is and is not ethical when it comes to AI solutions has been too blurry for too long. The CIOs, IT directors and managers, data scientists, and development leads polled in this research clearly understand and appreciate the gravity and impact at play when it comes to AI and ethics."
As organizations roll out machine learning and AI models into production, they're increasingly cognizant of the presence of bias in their systems. Not only does this bias potentially lead to poorer decisions on the part of the AI systems, but it can put the organizations running them in legal jeopardy. However, getting on top of this problem is turning out to be tougher than expected for many organizations. For example, in a report issued last year, Harvard University and Accenture demonstrated how algorithmic bias can creep into the hiring processes of human resources departments. In their 2021 joint report "Hidden Workers: Untapped Talent," the two organizations show how the combination of outdated job descriptions and automated hiring systems that lean heavily on algorithmic processes for posting ads for open jobs and evaluating résumés can keep otherwise qualified individuals from landing jobs.
Bias in AI systems can result in significant losses to companies, according to a new survey by an enterprise AI company. More than one in three companies (36 percent) revealed they had suffered losses due to AI bias in one or several algorithms, noted the DataRobot survey of over 350 U.S. and U.K. technologists, including CIOs, IT directors, IT managers, data scientists and development leads who use or plan to use AI. Of the companies damaged by AI bias, more than half lost revenue (62 percent) or customers (61 percent), while nearly half lost employees (43 percent) and over a third incurred legal fees from litigation (35 percent), according to the research, which was conducted in collaboration with the World Economic Forum and global academic leaders.
The pace of technological change increased in 2021, and if history is any guide, will continue to accelerate in 2022. At the leading edge of high tech are data science and artificial intelligence, two disciplines that promise to keep the pace of change at a high level. Interest in AI, machine learning, and data science is extremely high, if the number of predictions on these topics is any indication. We start this batch of predictions with DataKitchen CEO Chris Bergh, who notes that the global AI market is projected to grow at a compound annual growth rate (CAGR) of 33% through 2027. But that significant growth comes with a hidden risk: reputational harm due to bias and a lack of accountability in AI processes.
Developers and data scientists are human, of course, but the systems they create are not -- they are merely code-based reflections of the human reasoning that goes into them. Getting artificial intelligence systems to deliver unbiased results and ensure smart business decisions requires a holistic approach that involves most of the enterprise. IT staff and data scientists cannot -- and should not -- be expected to be solo acts when it comes to AI. There is a growing push to expand AI beyond the confines of systems development and into the business suite. For example, at a recent panel at AI Summit, panelists agreed that business leaders and managers need to not only question the quality of decisions delivered through AI, but also get more actively involved in their formulation.
Entrepreneurs and experts at the front lines of the AI revolution recognize that there are issues, such as a company's culture or a lack of trust from its customer base, which cannot be solved by technology alone. These issues are shaped by the principles that govern the everyday inner and outer workings of a company. AI is a powerful mechanism for amplifying human knowledge, skills, and efficiency. But can AI proponents employ AI to fix a moribund or toxic corporate culture? One of the challenges is that artificial intelligence models are trained on historical data, which makes them prone to biases that become ingrained in the algorithm during the learning phase.
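The point about historical-data bias can be made concrete with a simple fairness check. The sketch below (Python, with invented example data) computes the demographic parity difference: the gap in positive-outcome rates between two groups of model decisions. A model trained on skewed historical hiring records will tend to reproduce that skew, and this metric is one common way to surface it.

```python
# Minimal sketch with hypothetical data: measure how unevenly a model's
# positive decisions fall across two groups.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 decisions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in selection rates between two groups of decisions."""
    return abs(selection_rate(group_a) - selection_rate(group_b))

# Hypothetical model decisions (1 = advance candidate, 0 = reject),
# split by a protected attribute recorded in the historical data.
decisions_group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6 of 8 selected (0.75)
decisions_group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 2 of 8 selected (0.25)

gap = demographic_parity_difference(decisions_group_a, decisions_group_b)
print(f"Demographic parity difference: {gap:.2f}")  # prints 0.50
```

A gap of zero means both groups are selected at the same rate; the closer to one, the more skewed the decisions. Production fairness tooling (for example, the open-source Fairlearn library) offers this and related metrics, but the arithmetic underneath is no more than the comparison shown here.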
With reliance on AI-based decisioning and operations growing by the day, it's important to take a step back and ask whether everything that can be done to assure fairness and mitigate bias is being done. There needs to be greater awareness and training behind AI deployments. That's the word from John Boezeman, chief technology officer at Acoustic, who shared his insights on the urgency of getting AI right. Q: How far along are corporate efforts to achieve fairness and eliminate bias in AI results? Boezeman: Trying to determine bias or skew in AI is a very difficult problem and requires a lot of extra care, services, and financial investment to not only detect, but also fix and compensate for, those issues.