AI could well have been named Time magazine's Person of the Year for 2020, given the intense media attention, in-depth scientific scrutiny, and heated policy and regulatory debates that swirled around the great opportunities and enormous risks it poses. However, in 2021 and beyond, we should not stop talking about AI. The goal of this whitepaper is to contribute towards an inclusive development of AI and help restore and strengthen trust between policymakers and the public. This calls for a greater effort to understand AI's effects more clearly and develop explainable and accountable algorithms. Furthermore, there is a need for strong evaluation frameworks that can assess not only the performance but also the socio-economic impact of AI.
In April, the European Commission released a wide-ranging proposed regulation to govern the design, development, and deployment of A.I. systems. The regulation stipulates that "high-risk A.I. systems" (such as facial recognition and algorithms that determine eligibility for public benefits) should be designed to allow for oversight by humans who will be tasked with preventing or minimizing risks. Often expressed as the "human-in-the-loop" solution, this approach of human oversight over A.I. is rapidly becoming a staple in A.I. policy proposals globally. And although placing humans back in the "loop" of A.I. seems reassuring, this approach is instead "loopy" in a different sense: It rests on circular logic that offers false comfort and distracts from inherently harmful uses of automated systems. After all, A.I. is celebrated precisely for its superior accuracy, efficiency, and objectivity in comparison to humans.
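To make the "human-in-the-loop" arrangement concrete, here is a minimal sketch of the oversight gate such proposals envision: automated output is acted on only when the system is confident and the application is not high-risk; everything else is deferred to a human reviewer. The function name, the confidence threshold, and the routing rule are all illustrative assumptions, not part of any regulation.

```python
# Illustrative "human-in-the-loop" routing gate. Assumes a model that
# returns a label plus a confidence score; names and thresholds are
# hypothetical, chosen only to show the oversight pattern.

def route_decision(label, confidence, high_risk, threshold=0.9):
    """Return 'auto' to act on the model's output, or 'human_review'
    to defer the case to a human overseer."""
    # High-risk applications (e.g. benefits eligibility) always get a
    # human reviewer under this pattern, regardless of confidence.
    if high_risk:
        return "human_review"
    # Otherwise defer only when the model is unsure of its own output.
    return "auto" if confidence >= threshold else "human_review"

print(route_decision("approve", 0.97, high_risk=False))  # auto
print(route_decision("deny", 0.97, high_risk=True))      # human_review
```

The circularity the article points to is visible even in this sketch: the gate assumes the human reviewer can reliably catch the errors of a system deployed because it outperforms humans.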
Bias and ethics in artificial intelligence have captured the attention of the public and some organizations following several high-profile examples of bias at work. For instance, research has demonstrated bias against darker-skinned and female individuals in face recognition technology, and a secret AI recruiting tool at Amazon showed bias against women, among many other examples. But when it comes to looking inside our own houses -- or businesses -- we may not be very far along in prioritizing AI ethics or taking measures to mitigate bias in algorithms. According to a new report from FICO, a global analytics software firm, 65% of C-level analytics and data executives surveyed said that their company cannot explain how specific AI model decisions or predictions are made, and 73% have struggled to get broader executive support for prioritizing AI ethics and responsible AI practices. Only 20% actively monitor their models in production for fairness and ethics.
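For readers wondering what "monitoring models in production for fairness" can look like in practice, one common starting point is tracking a simple group-level metric such as the demographic parity gap: the difference in positive-outcome rates between two groups. The sketch below is a hedged illustration; the group data and the 0.1 alert threshold are invented for the example, and real monitoring would use additional metrics.

```python
# Minimal sketch of one production fairness check: the demographic
# parity gap, i.e. the absolute difference in positive-prediction
# rates between two groups. Data and threshold are illustrative.

def positive_rate(outcomes):
    """Fraction of predictions that are positive (1) for a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(outcomes_a, outcomes_b):
    """Absolute difference in positive-prediction rates."""
    return abs(positive_rate(outcomes_a) - positive_rate(outcomes_b))

# Hypothetical batches of binary model decisions, one per group.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # positive rate 5/8 = 0.625
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # positive rate 2/8 = 0.25

gap = demographic_parity_gap(group_a, group_b)
print(f"parity gap: {gap:.3f}")
if gap > 0.1:  # illustrative alert threshold
    print("fairness alert: investigate this model")
```

A gap well above the chosen threshold, as here, would trigger investigation rather than prove bias on its own; the point is that the check is cheap enough that the 20% figure above is striking.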
There is a popular media narrative that AI will replace teachers. If anything, AI will be a new tool in teachers' toolkits. Teachers spend a good portion of their time dealing with administrative burdens. AI will not replace them but free up their time to focus on what they do best – helping students grow to comprehend the world. ARTiBA explores the impact of Artificial Intelligence in Education, its future, and the ongoing AI and ML innovations in the sector.
Artificial intelligence is also advancing in IT security. In a survey of 300 managers, 96 percent said their companies are preparing for AI-supported IT attacks, in part by turning to "defensive AI" for help. The survey was carried out with the assistance of the AI cybersecurity provider Darktrace. A survey of around 200 IT managers at medium-sized companies painted a more nuanced picture.
Every industry is being transformed by Artificial Intelligence owing to its sophisticated capabilities and thorough data analysis. AI may help organizations in a variety of ways. Because AI is such a broad technology, its commercial benefits are far-reaching. AI is capable of driving business process automation as well as surfacing findings from data analysis. Many global corporations are leveraging AI to improve employee and customer engagement. This article will discuss how businesses will use AI in the coming year.
Organizations of all sizes have accelerated the rate at which they employ AI models to advance digital business transformation initiatives. But in the absence of any clear-cut regulations, many of these organizations don't know with any certainty whether those AI models will one day run afoul of new AI regulations. Ted Kwartler, vice president of Trusted AI at DataRobot, talked with VentureBeat about why it's critical for AI models to make predictions "humbly" to make sure they don't drift or, one day, potentially run afoul of government regulations. This interview has been edited for brevity and clarity. VentureBeat: Why do we need AI to be humble?
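One way to read "predicting humbly," as discussed in the interview, is a model that abstains rather than asserts when its confidence is low or when an input has drifted outside the data it was trained on. The sketch below is an assumption-laden illustration of that idea, not DataRobot's implementation; the function, thresholds, and feature ranges are all hypothetical.

```python
# Hypothetical sketch of "humble" prediction: abstain (flag for review)
# when top-class confidence is low or the input falls outside the
# feature ranges seen in training (a crude drift check). All names,
# thresholds, and ranges are illustrative assumptions.

def humble_predict(probs, features, train_ranges, min_conf=0.8):
    """probs: label -> probability; train_ranges: feature -> (lo, hi).
    Returns ('predict'|'abstain', label, confidence)."""
    label, conf = max(probs.items(), key=lambda kv: kv[1])
    # Flag inputs that drift outside the training-time feature ranges.
    out_of_range = any(
        not (lo <= features[name] <= hi)
        for name, (lo, hi) in train_ranges.items()
    )
    if conf < min_conf or out_of_range:
        return ("abstain", label, conf)  # defer instead of asserting
    return ("predict", label, conf)

ranges = {"income": (20_000, 200_000)}
print(humble_predict({"approve": 0.95, "deny": 0.05},
                     {"income": 50_000}, ranges))   # confident, in range
print(humble_predict({"approve": 0.95, "deny": 0.05},
                     {"income": 500_000}, ranges))  # drifted -> abstain
```

Abstentions like these give an organization an audit trail of where a model declined to decide, which is exactly the kind of evidence a future regulator might ask for.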
Timnit Gebru never thought a scientific paper would cause her so much trouble. In 2020, as the co-lead of Google's ethical AI team, Gebru had reached out to Emily Bender, a linguistics professor at the University of Washington, and asked to collaborate on research about the troubling direction of artificial intelligence. Gebru wanted to identify the risks posed by large language models, one of the most stunning recent breakthroughs in AI research. The models are algorithms trained on staggering amounts of text. Under the right conditions, they can compose what look like convincing passages of prose.
Of all the concerns surrounding artificial intelligence these days -- and no, I don't mean evil robot overlords, but more mundane things like job replacement and security -- perhaps none is more overlooked than cost. This is understandable, considering AI has the potential to lower the cost of doing business in so many ways. But AI is not only expensive to acquire and deploy; it also requires a substantial amount of compute power, storage, and energy to produce worthwhile returns. Back in 2019, AI pioneer Elliot Turner estimated that training the XLNet natural language system could cost upwards of $245,000 – roughly 512 TPUs running at full capacity for 60 straight hours. And there is no guarantee it will produce usable results.
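The quoted figures can be sanity-checked with simple back-of-envelope arithmetic: 512 TPUs for 60 hours is a fixed number of device-hours, and dividing the dollar estimate by it yields an implied hourly rate. The calculation below only unpacks the numbers already given in the text; the implied rate is a derived illustration, not a verified 2019 cloud price.

```python
# Back-of-envelope check of the quoted training-cost estimate:
# 512 TPUs running for 60 straight hours, at a total of ~$245,000.

tpus = 512
hours = 60
tpu_hours = tpus * hours            # 512 * 60 = 30,720 device-hours

estimate = 245_000                  # dollars, per the 2019 estimate
implied_rate = estimate / tpu_hours # implied $/TPU-hour

print(f"{tpu_hours} TPU-hours, implied rate ${implied_rate:.2f}/TPU-hour")
```

The implied rate of roughly eight dollars per TPU-hour shows why the bill scales so quickly: doubling either the device count or the training time doubles the cost, with no guarantee of usable results.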
At a 2020 meeting of the World Economic Forum in Davos, Salesforce founder Marc Benioff declared that "capitalism as we have known it is dead." In its place now is stakeholder capitalism, a form of capitalism that has been spearheaded by Klaus Schwab, founder of the World Economic Forum, over the past 50 years. As Benioff put it, stakeholder capitalism is "a more fair, a more just, a more equitable, a more sustainable way of doing business that values all stakeholders, as well as all shareholders." Unlike shareholder capitalism, which is measured primarily by the monetary profit generated for a business's shareholders alone, stakeholder capitalism requires that business activity benefit all stakeholders associated with the business. These stakeholders can include the shareholders, the employees, the customers, the local community, the environment, and others.