
Collaborating Authors: Avin


Community of ethical hackers needed to prevent AI's looming 'crisis of trust'

#artificialintelligence

The Artificial Intelligence industry should create a global community of hackers and "threat modellers" dedicated to stress-testing the harm potential of new AI products, in order to earn the trust of governments and the public before it's too late. This is one of the recommendations made by an international team of risk and machine-learning experts, led by researchers at the University of Cambridge's Centre for the Study of Existential Risk (CSER), who have authored a new "call to action" published in the journal Science. They say that companies building intelligent technologies should harness techniques such as "red team" hacking, audit trails and "bias bounties" – paying out rewards for revealing ethical flaws – to prove their integrity before releasing AI for use on the wider public. Otherwise, the industry faces a "crisis of trust" in the systems that increasingly underpin our society, as public concern continues to mount over everything from driverless cars and autonomous drones to secret social media algorithms that spread misinformation and provoke political turmoil. The novelty and "black box" nature of AI systems, and the ferocious competition in the race to market, have hindered the development and adoption of auditing and third-party analysis, according to lead author Dr Shahar Avin of CSER.


cONnected with Brisk Synergies presented by AVIN

#artificialintelligence

Raed Kadri, Director, Automotive Technology and Mobility Innovation at OCE, sits down with Charles Chung, CEO, Brisk Synergies. Brisk Synergies specializes in automated road safety analysis, using computer vision and AI to help predict future collision rates and understand traffic flow on the roads. Learn more at: https://brisksynergies.com/ AVIN's (Autonomous Vehicle Innovation Network) cONnected with series explores Ontario's connected and autonomous vehicle innovation ecosystem, showcasing technology, research and entrepreneurship across the province.


This cyberwar just got real (DW, 24.05.2018)

#artificialintelligence

Cyberwar may not feel like "real" war -- the kind we've known and loathed for eons, and the very same we perversely reenact in video games. But some military and legal experts say cyberwar is as real as it gets. David Petraeus, a retired US General and (some say disgraced) former intelligence chief, says the internet has created an entirely distinct domain of warfare, one which he calls "netwar." And that's the kind being waged by terrorists. Then there's another kind, and technically any hacker with enough computer skills can do it -- whatever the motivation.


Artificial Intelligence Poses Big Threat to Society, Warn Leading Scientists

#artificialintelligence

Artificial Intelligence is on the cusp of transforming our world in ways many of us can barely imagine. While there's much excitement about emerging technologies, a new report by 26 of the world's leading AI researchers warns of the potential dangers that could emerge over the coming decade, as AI systems begin to surpass levels of human performance. Automated hacking is identified as one of the most imminent applications of AI, especially so-called "phishing" attacks. "That part used to take a lot of human effort – you had to study your target, make a profile of them, craft a particular message – that's known as phishing. We are now getting to the point where we can train computers to do the same thing. So you can model someone's topics of interest or preferences, their writing style, the writing style of a close friend, and have a machine automatically create a message that looks a lot like something they would click on," says report co-author Shahar Avin of the Centre for the Study of Existential Risk at Britain's University of Cambridge.


Why Artificial Intelligence Researchers Should Be More Paranoid

#artificialintelligence

Life has gotten more convenient since 2012, when breakthroughs in machine learning triggered the ongoing frenzy of investment in artificial intelligence. Speech recognition works most of the time, for example, and you can unlock the new iPhone with your face. People with the skills to build such systems have reaped great benefits -- they've become the most prized of tech workers. But a new report on the downsides of progress in AI warns they need to pay more attention to the heavy moral burdens created by their work. It calls for urgent and active discussion of how AI technology could be misused.


This 'Civilization 5' Mod May Help Humans Avoid Rogue AI

#artificialintelligence

Cambridge University's Centre for the Study of Existential Risk (CSER) recently released a new Civilization 5 mod all about reducing the threat superintelligent AI could pose to mankind, The Verge reports. As The Verge points out, CSER was founded in 2012 to explore "various global catastrophes capable of collapsing civilization or wiping out humanity altogether." These dangers are referred to as "existential threats." One of these threats, CSER thinks, is a superintelligent artificial intelligence system smarter than humans that ultimately decides humans aren't necessary to its needs. "We had the idea in the center that we wanted to do outreach for the idea of superintelligence – to get people with the right skillset interested, grow the field of people who care about AI safety, and test our own ideas," CSER postdoc researcher Shahar Avin told the outlet.