On Monday, April 8, 2019, the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) released ethics guidelines intended to establish best practices for creating "trustworthy AI." Many argue that trust in AI systems is one of the main hurdles the technology must overcome before more widespread implementation. A Forbes survey found that nearly 42% of respondents "could not cite a single example of AI that they trust"; in another survey, when respondents were asked what emotion best described their feeling toward AI, "interested" was the most common response (45%), but it was closely followed by "concerned" (40.5%), "skeptical" (40.1%), "unsure" (39.1%), and "suspicious" (29.8%). The Commission's guidelines offer businesses a new roadmap for aligning their AI systems with these principles. While the guidelines are not policy, it is easy to imagine that they will serve as the building blocks for such regulations.
As artificial intelligence is being used to solve problems in healthcare, agriculture, weather prediction, and more, scientists and engineers are investigating how AI could be used to fight climate change. AI algorithms could indeed be used to build better climate models and determine more efficient methods of reducing CO2 emissions, but AI itself often requires substantial computing power and therefore consumes a lot of energy. Is it possible to reduce the amount of energy consumed by AI and improve its effectiveness in fighting climate change? Virginia Dignum, a professor of ethical artificial intelligence at Umeå University in Sweden, was recently interviewed by Horizon Magazine. Dignum explained that AI can have a large environmental footprint that often goes unexamined.
Artificial intelligence (AI) is already reconfiguring the world in conspicuous ways. Data drives our global digital ecosystem, and AI technologies reveal patterns in data. Smartphones, smart homes, and smart cities influence how we live and interact, and AI systems are increasingly involved in recruitment decisions, medical diagnoses, and judicial verdicts. Whether this scenario is utopian or dystopian depends on your perspective; the potential risks of AI have been enumerated repeatedly.
As AI-enhanced solutions penetrate further into our daily lives, we are confronted with the limitations of both computers and humans, and with the coordination between them. The promise of AI runs far ahead of its current capabilities, yet the challenges are ever more urgent to address. We are at the relative beginning of an era in which machines learn to use data about us and the world around us to do things more efficiently, more effectively, and more precisely. At a time when the rate of technological change exceeds our ability to imagine the future, the key thing to learn about AI is how it intersects with everything else. The future of artificial intelligence is more human.
In response to the serious threat that AI-enabled bots and deepfakes pose for election integrity, the California government has pushed forward progressive pieces of legislation that have influenced federal and international efforts. Passed in 2018, the "Bots Disclosure Act" makes it unlawful to use a bot to influence a commercial transaction or a vote in an election without disclosure in California. This includes bots deployed by companies in other states and countries, which requires those companies to either develop bespoke standards for Californian residents or harmonize their strategies across jurisdictions to maintain efficiency. At the federal level, the "Bots Disclosure and Accountability Act" includes many of the same strategies proposed in California. The California "Anti-Deepfakes Bill" seeks to mitigate the spread and impact of malicious political deepfakes before an election and the federal "Deepfakes Accountability Act" seeks to do the same.
AI is poised to benefit a multitude of industries in a variety of ways. What does artificial intelligence look like in the near term? How is it impacting industries, and what should companies know about AI to remain competitive over the next few years? What are the early adopters of AI doing right now? Early adopters span industries from automotive to marketing.
By 2030, the total gross domestic product of the world will be 14% higher because of one thing: greater use of artificial intelligence, or AI. That's the conclusion of PwC, a professional services firm based in London. If such forecasts are right, these sophisticated computer programs will be doing tasks such as driving vehicles, planning and waging wars, and advising humans on how to manage both their health and their wealth. One observer writing in the Journal of the American Medical Association has declared that the "hype and fear" surrounding AI "may be greater than that which accompanied the discovery of the structure of DNA or the whole genome." Yet despite the possibility of colossal impacts from AI, the U.S. government has done little to study its ethical implications.
The Trump administration, as part of its strategy on artificial intelligence, has spent considerable time identifying jobs that may become obsolete with the rise of automation. As part of that effort, agencies have also tried to predict what new career paths automation might create in the years ahead. But now some officials say fears over automation-related job losses might have gone too far. Federal Chief Information Officer Suzette Kent, who has overseen some of the administration's reskilling pilots, such as the Federal Cyber Reskilling Academy, said some of these anxieties about automation aren't new. "This is not a story that we haven't heard before in our nation: Something comes along that radically changes the way that we work, the way that we live, and creating fear about that is not the best path forward," Kent said during a panel hosted by the Bipartisan Policy Center on Wednesday.