"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030.
My name is Kris Coratti. Thank you for joining us on this very rainy morning. I'm glad you all made it out. We are going to have a fascinating series of discussions this morning on artificial intelligence. This is the latest in our ongoing event series that we call "Transformers." Our speakers this morning are going to explore the regulatory questions around this technology. They're going to look at how AI is reshaping the way we live and work. And they're going to discuss how to make sure this technology is used responsibly in the future. Before we begin, I just want to quickly thank our presenting sponsor for this event, Software.org. And so now I'd like to go ahead and welcome to the stage The Washington Post's Tony Romm and Senators Maria Cantwell and Todd Young. And for those who don't know, Senator Cantwell is a Democrat from Washington State. Both are members of the Senate Commerce Committee, which touches on artificial intelligence and many of the tech issues that we'll talk about today.
Why are so many AI researchers so worried about lethal autonomous weapons? What makes autonomous weapons so much worse than any other weapons we have today? And why is it so hard for countries to come to a consensus about autonomous weapons? Not surprisingly, the short answer is: it's complicated. In this month's podcast, Ariel spoke with experts from a variety of perspectives on the current status of LAWS, where we are headed, and the feasibility of banning these weapons. Guests include ex-Pentagon advisor Paul Scharre (3:40), artificial intelligence professor Toby Walsh (40:51), Article 36 founder Richard Moyes (53:30), Campaign to Stop Killer Robots founders Mary Wareham and Bonnie Docherty (1:03:38), and ethicist and co-founder of the International Committee for Robot Arms Control, Peter Asaro (1:32:39). You can listen to the podcast above, and read the full transcript below. You can check out previous podcasts on SoundCloud, iTunes, GooglePlay, and Stitcher. If you work with ...
Stanford professor Fei-Fei Li is a pioneer in artificial intelligence. Her research helped lead to breakthroughs like allowing computers to recognize images. Now, AI has spread to every economic sector. This episode, hear Fei-Fei's thoughts on how humans can play a compassionate role in shaping AI's future. Plus, Caroline Fairchild brings reporting on some surprising jobs in this emerging industry. JESSI HEMPEL: From the editorial team at LinkedIn, I'm Jessi Hempel, and this is Hello Monday, a show where I investigate the changing nature of work, and how that work is changing us. Last year, I got to test-drive a self-driving car, which of course means I got to sit behind the wheel and not drive. In this one test, a human-size dummy walked out onto the track, imitating a jaywalking pedestrian. SELF-DRIVING CAR TAPE: So here it comes... so we pass this trigger... do we see him? The car saw the pedestrian and slowed down to let him pass. This is just one of the many, many things that have become possible now that computers can recognize images. That's why this week, I wanted to talk to Fei-Fei Li.