According to scientists and legal experts responding to the bank's warning this November, there is now an urgent need for the development of intelligent algorithms to be put on the political agenda. Top of the agenda, as far as Lightfoot is concerned, is the economic impact if AI eliminates large numbers of jobs and incomes: how will people make a living, and what will they do? Professor Toby Walsh, an expert in AI at Australia's University of New South Wales and a prominent campaigner against the use of AI in military weapons, says this concern is justified and needs to be urgently considered. However, Professor Walsh and fellow AI expert Murray Shanahan, Professor of Cognitive Robotics at London's Imperial College, were wary of calls for regulation of the sector, which they said would inhibit research. According to Professor Walsh, scientists working in AI have already started to exercise a degree of self-control over the exploitation of their discoveries; what needs to be focussed on now are the ramifications of the technology.
According to newly uncovered documents filed to the state of California in September 2015, Anthony Levandowski serves as the CEO and president of religious organisation Way of the Future. The documents, discovered by Wired's Backchannel, detail that Way of the Future's mission is "to develop and promote the realisation of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society". The emergence of the documents demonstrates how the rapid advancement of AI and bioengineering is forcing discussions about how humans and robots will coexist on earth. According to Wired, many people in Silicon Valley believe in "the Singularity" – a time in the future when computers will surpass human levels of intelligence, which would likely trigger a major shift in power. Many in the design industry have already expressed concerns about the way humans and robots will live together.
When it comes to artificial intelligence, a lot of attention has been focused on issues of privacy and economics – what happens if AI makes human workers obsolete. Now, a new report from the non-profit Environmental Law Institute highlights the potential environmental impacts of AI-driven technologies, from autonomous cars to smart thermostats. Lead author Dave Rejeski says that whether those impacts are positive or negative will depend on how the technology is built and used, and that the time to start thinking about that is now.

The PyeongChang Olympics are likely to be remembered for the joint Korean team, wind delays, and robots. South Korea is taking advantage of the international spotlight to show off its leadership in robotics, with eleven different types of robots – eighty-five in all – in action at the Olympics.
Like other shrewd businesspeople, cyber-criminals work with many of the same financial models of reducing risk and exposure while maximizing profitability as the organizations they seek to exploit. Attack techniques are evaluated not only in terms of their effectiveness, but also in terms of the overhead required to develop, modify, and implement them. To better defend themselves, organizations are adopting AI and machine learning to automate tedious and time-consuming activities that normally require a high degree of human supervision and intervention, to increase visibility and response times, and to accelerate critical security functions such as threat detection and response. As these newer defensive strategies are implemented, they disrupt the basic economic and ROI models of cyber-criminals.

Four Emerging Security Threats

The evolution of zero-days: The rapidly increasing variety and number of vulnerabilities and exploits is likely to be augmented by the ability to quickly produce zero-day exploits and provide them as a service.
We've reached a tipping point: it is now high time that we start the conversation about regulating face recognition artificial intelligence (AI). In a previous post, I explored some ideas about how we might regulate AI. The most compelling argument against AI regulation has been that it isn't clear precisely what needs to be regulated. In recent days, however, it has come to my attention that a specific kind of AI algorithm needs serious consideration for regulation: Stanford researchers have trained a deep learning system to recognize a person's sexual orientation.