One day, robots will take over and it's going to be "bad" to "very bad". According to a survey conducted by Oxford University's Center for the Governance of AI, many Americans fear a future in which AI becomes too intelligent. When asked what kind of impact high-level machine intelligence would have on humanity, 34 percent of respondents thought it would be negative, with 12 percent choosing the option "very bad, possibly human extinction". Only 27 percent of respondents believed in a positive outcome, 21 percent thought AI wouldn't change the future much, and 18 percent said they didn't know what impact AI would have. When asked to consider a negative future outcome of AI technology, Americans ranked the AI apocalypse as more catastrophic than a possible failure to address climate change, even though respondents said it was less likely to happen.
In the era of AI superpowers, Finland is no match for the US and China. So the Nordic country is taking a different tack. It has embarked on an ambitious challenge to teach the basics of AI to 1% of its population, or 55,000 people. Once it reaches that goal, it plans to go further, increasing the share of the population with AI know-how. The scheme is all part of a greater effort to establish Finland as a leader in applying and using the technology.
MEPs in the European Parliament's Committee on Industry, Research and Energy backed plans on Monday evening (14 January) for a comprehensive policy framework on artificial intelligence (AI) and robotics, weeks after ethical concerns in the field were highlighted in an EU report. Parliament's report, though not legally binding, gives a clear signal that MEPs will seek to pressure the Commission to draw up an industrial policy for artificial intelligence and robotics. "This is a key area and I am pleased that we have been able to make some strong suggestions on AI," British Conservative MEP Ashley Fox said on Tuesday evening. "The technology is not confined to the boundaries of the single market and it is imperative that the EU work at the international level to agree on standards." MEPs noted the future potential for AI and robotics to transform a number of sectors ranging from health and energy to manufacturing and transport, and also urged member states to develop new training programmes that cultivate skills in areas likely to be affected by future autonomous technologies.
As artificial intelligence becomes integrated into more technology in our homes and workplaces, concerns about its ethical implications are growing. It seems like every day there are new tools designed to help workers use their time more efficiently with machine learning. If these trends continue, what will the workplaces of the future look like? What if it goes wrong? Keiichi Matsuda's new short film Merger taps into that uncertainty by imagining a workplace where humans have been proven to be less capable than AI.
Retailers are expected to spend $7.3 billion annually on AI by 2022, according to a Capgemini Research Institute report. This investment is largely motivated by companies' interest in improving customer experience across all engagement points, including marketing, buying, and after-sales service. Eugenio Cassiano, chief innovation officer for the SAP Customer Experience organization, talked about three ways AI can deliver great customer experiences for retailers and other types of organizations. According to Cassiano, conversational AI is moving into the mainstream.
Members of Congress, the U.S. military, and prominent technologists have raised the alarm that the U.S. is at risk of losing an Artificial Intelligence (AI) arms race. China already has leveraged strategic investment and planning, access to massive data, and suspect business practices to surpass the U.S. in some aspects of AI implementation. There are worries that this competition could extend to the military sphere with serious consequences for U.S. national security. During the prior Cold War arms race era, U.S. policymakers and the military expressed consternation about a so-called "missile gap" with the USSR that potentially gave the Soviets military superiority. Echoes of gap anxiety continue today.
A so-called 'DeepFake' video of a Trump speech was broadcast on a Fox-owned Seattle TV station, showing a very present AI threat. The station, Q13, aired a doctored Trump speech in which he somehow appeared even more orange and pulled amusing faces. Following the broadcast, a Q13 employee was sacked. It's unclear whether the worker created the clip or merely allowed it to air. The video may be the first DeepFake to be televised, but it won't be the last.
Two weeks ago, MIT's David Autor gave the prestigious Richard T. Ely lecture at the annual meeting of American economists in Atlanta. Introduced by the former chair of the Federal Reserve Ben Bernanke as a "first-class thinker" who was doing "path-breaking" work on the central economic issues of automation, globalization, and inequality, Autor strolled up to the microphone with a big smile.
One of the most frustrating things about AI systems is their inability to understand context. For example, if a system is trained to identify dogs, it will be completely oblivious to a family playing Frisbee with its beloved pet. This flaw can get extremely frustrating when we're trying to converse with a system that takes each statement as a separate query and ignores everything that came before.
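The usual remedy for this statelessness is to carry a running dialogue history forward into each turn. The following is a minimal, purely illustrative sketch (the function and class names are hypothetical, and the "coreference resolution" is deliberately naive) contrasting a stateless handler, which fails on a bare "it", with a stateful one that resolves it against context from the previous turn:

```python
# Hypothetical toy example: stateless vs. stateful dialogue handling.
# Not a real NLP system; it only illustrates why context matters.

def stateless_reply(utterance: str) -> str:
    # Each call sees only the current utterance, so a bare "it"
    # has nothing to refer back to.
    if "it" in utterance.split():
        return "Sorry, what does 'it' refer to?"
    return f"Noted: {utterance}"

class StatefulChat:
    def __init__(self):
        self.history = []  # context accumulated across turns

    def reply(self, utterance: str) -> str:
        words = utterance.split()
        # Naive resolution: substitute "it" with the most recently
        # remembered word from an earlier turn.
        if "it" in words and self.history:
            utterance = utterance.replace("it", self.history[-1])
        self.history.append(words[-1])  # crude memory: keep the last word
        return f"Noted: {utterance}"
```

A stateless call like `stateless_reply("it loves Frisbee")` can only ask for clarification, whereas `StatefulChat` given "I adopted a dog" followed by "it loves Frisbee" replies "Noted: dog loves Frisbee". Real systems do this with far richer representations of history, but the design point is the same: context must be stored and consulted, not discarded after every query.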