Artificial intelligence is here, it's just the beginning, and it's time to start thinking about how to regulate it. Those were the takeaways from the Technology Alliance's AI Policy Matters Summit, a Seattle event that convened experts and government officials for a conversation about artificial intelligence. Many of those experts agreed that the government should start establishing guardrails to defend against malicious or negligent uses of artificial intelligence. But determining what shape those regulations should take is no easy feat. "It's not even clear what the difference is between AI and software," said Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, on stage at the event.
Artificial Intelligence (AI) is already pervasive in many applications, and deep learning has been growing rapidly since 2012. That is the perspective Nigel Toon, CEO of AI chip specialist Graphcore, shared at TechCrunch Disrupt Berlin on Wednesday. For Toon, AI enables many different kinds of innovation.
Artificial intelligence appears to be "widening inequality," and its deployment should be subject to tough regulations and limits, especially for sensitive technologies such as facial recognition, a research report said Thursday. The AI Now Institute, a New York University center studying the social implications of artificial intelligence, said that as these technologies become widely deployed, the negative impacts are starting to emerge. The 93-page report examined concerns being raised "from AI-enabled management of workers, to algorithmic determinations of benefits and social services, to surveillance and tracking of immigrants and underrepresented communities," the researchers wrote. "What becomes clear is that across diverse domains and contexts, AI is widening inequality, placing information and control in the hands of those who already have power and further disempowering those who don't."
Machine learning is everywhere, but is it actual intelligence? A computer scientist wrestles with the ethical questions raised by the rise of AI. Published by Farrar, Straus and Giroux, October 15, 2019. The familiar idea is that unchecked robots will rise up and kill us all. But such martial forebodings overlook a perhaps more threatening model: Aladdin.
As technology continues to evolve with the development of artificial intelligence, machine learning, chatbots, and robotics, it has also created ripple effects across different sectors of society, such as healthcare and medicine. It has brought changes for both patients and health workers, who are the beneficiaries of this development. The digital age has transformed the way we understand the healthcare industry and has brought unprecedented changes for the betterment of health services. Find out more areas where these changes have occurred. One of the most critical applications of artificial intelligence in healthcare is in the field of disease diagnosis and detection.
Thousands of scientists have signed a pledge not to have any role in building AIs that have the ability to kill without human oversight. When many think of AI, they give at least some passing thought to the rogue AIs seen in sci-fi movies, such as the infamous Skynet in Terminator. In an ideal world, AI would never be used in any military capacity. However, it will almost certainly be developed one way or another because of the advantage it would provide over an adversary without similar capabilities. Russian President Vladimir Putin, when asked his thoughts on AI, recently said: "Whoever becomes the leader in this sphere will become the ruler of the world."
An expert on artificial intelligence has called for all algorithms that make life-changing decisions – in areas from job applications to immigration into the UK – to be halted immediately. Prof Noel Sharkey, who is also a leading figure in a global campaign against "killer robots", said algorithms were so "infected with biases" that their decision-making processes could not be fair or trusted. A moratorium must be imposed on all "life-changing decision-making algorithms" in Britain, he said. Sharkey has suggested testing AI decision-making machines in the same way as new pharmaceutical drugs are rigorously checked before they are allowed on to the market. In an interview with the Guardian, the Sheffield University robotics/AI pioneer said he was deeply concerned over a series of examples of machine-learning systems being loaded with bias.
Zillow, an online marketplace that facilitates the buying, selling, renting, financing, and remodeling of homes, employs a range of AI technologies for tasks such as estimating home prices. But the output of AI systems like these can be opaque, creating a "black box" problem where practitioners and customers can't audit the systems properly. Without transparency, serious problems like algorithmic bias can persist undetected, and trust in the models becomes impossible. For obvious ethical reasons, this is why explainable AI (XAI) is so crucial to the creation and deployment of AI systems, but pragmatically, it's also key to the success of AI-powered products and services from companies like Zillow. David Fagnan, director of applied science on the Zillow Offers team, discussed with VentureBeat how and why XAI is indispensable for the company.
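To make the contrast with a "black box" concrete, here is a minimal illustrative sketch of what an explainable (additive) price prediction looks like. The feature names, weights, and base price below are invented for illustration and have nothing to do with Zillow's actual models; the point is only that an additive model's prediction can be decomposed feature by feature, which is exactly what an opaque model does not allow.

```python
# Hypothetical additive home-price model: every weight and feature
# name here is an invented example, not a real system's parameters.
WEIGHTS = {"sqft": 150.0, "bedrooms": 10_000.0, "age_years": -500.0}
BASE_PRICE = 50_000.0

def predict(features):
    """Predict a price as the base plus per-feature contributions."""
    return BASE_PRICE + sum(WEIGHTS[k] * v for k, v in features.items())

def explain(features):
    """Return each feature's additive contribution to the prediction.

    Because the model is linear, the contributions sum exactly to
    prediction - BASE_PRICE, so every dollar of the estimate can be
    attributed to a specific input -- the auditability a black-box
    model lacks.
    """
    return {k: WEIGHTS[k] * v for k, v in features.items()}

home = {"sqft": 2000, "bedrooms": 3, "age_years": 20}
price = predict(home)
contributions = explain(home)

# The explanation fully accounts for the prediction.
assert abs(price - (BASE_PRICE + sum(contributions.values()))) < 1e-9
```

Real XAI work attaches similar per-feature attributions to far more complex models (for example via post-hoc attribution methods), but the auditing goal is the same: a reviewer can check whether any single input is driving the output in a biased or implausible way.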