In June, a crisis erupted in the artificial intelligence world. Conversation on Twitter exploded after a new tool for creating realistic, high-resolution images of people from pixelated photos revealed its racial bias, turning a pixelated yet recognizable photo of former President Barack Obama into a high-resolution photo of a white man. Researchers soon posted images showing other famous people of color, including Black, Asian, and Indian figures, likewise being turned white. Two well-known AI corporate researchers -- Facebook's chief AI scientist, Yann LeCun, and Google's co-lead of AI ethics, Timnit Gebru -- expressed strongly divergent views about how to interpret the tool's error. A heated, multiday online debate ensued, dividing the field into two distinct camps: some argued that the bias shown in the results came from bad (that is, incomplete) data being fed into the algorithm, while others argued that it came from bad (that is, short-sighted) decisions about the algorithm itself, including what data to consider.
This book discusses the necessity, and indeed urgency, of regulating the algorithms on which new technologies rely -- technologies that have the potential to reshape human societies. From commerce and farming to medical care and education, it is difficult to find any aspect of our lives that will not be affected by these emerging technologies. At the same time, artificial intelligence, deep learning, machine learning, cognitive computing, blockchain, virtual reality, and augmented reality belong to the fields most likely to affect law and, in particular, administrative law. The book examines universally applicable patterns in administrative decisions and judicial rulings. First, similarities and divergences in behavior among the different cases are identified by analyzing parameters ranging from geographical location and administrative decisions to judicial reasoning and legal basis. As it turns out, in several of the cases presented, sources of general law, such as competition or labor law, are invoked as a legal basis, owing to the lack of specialized legislation. The book also investigates the role and significance of national and supranational regulatory bodies for advanced algorithms and considers ENISA, an EU agency that focuses on network and information security, as an interesting candidate for a European regulator of advanced algorithms. Lastly, it discusses the involvement of representative institutions in algorithmic regulation.
Artificial intelligence is considered a key technology with a huge impact on our society. Besides its many positive effects, there are also negative effects or threats. Some of these threats to society are well known, e.g., autonomous weapons or killer robots. But there are also threats that are ignored. These unknown-knowns, or blind spots, affect privacy and facilitate manipulation and mistaken identities: we can no longer trust data, audio, video, or identities. Democracies are able to cope with known threats, the known-knowns. Transforming unknown-knowns into known-knowns is therefore one important cornerstone of resilient societies. An AI-resilient society is able to absorb threats caused by new AI technologies such as generative adversarial networks; resilience can be seen as a positive adaptation to these threats. We propose three strategies for achieving this adaptation: awareness, agreements, and red flags. This article accompanies the TEDx talk "Why we urgently need an AI-resilient society", see https://youtu.be/f6c2ngp7rqY.
"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030."
Decades of research in artificial intelligence (AI) have produced formidable technologies that are providing immense benefit to industry, government, and society. AI systems can now translate across multiple languages, identify objects in images and video, streamline manufacturing processes, and control cars. The deployment of AI systems has not only created a trillion-dollar industry that is projected to quadruple in three years, but has also exposed the need to make AI systems fair, explainable, trustworthy, and secure. Future AI systems will rightfully be expected to reason effectively about the world in which they (and people) operate, handling complex tasks and responsibilities effectively and ethically, engaging in meaningful communication, and improving their awareness through experience. Achieving the full potential of AI technologies poses research challenges that require a radical transformation of the AI research enterprise, facilitated by significant and sustained investment. These are the major recommendations of a recent community effort coordinated by the Computing Community Consortium and the Association for the Advancement of Artificial Intelligence to formulate a Roadmap for AI research and development over the next two decades.
A few months ago I made the trek to the sylvan campus of the IBM research labs in Yorktown Heights, New York, to catch an early glimpse of the fast-arriving, long-overdue future of artificial intelligence. This was the home of Watson, the electronic genius that conquered Jeopardy! in 2011. The original Watson is still here--it's about the size of a bedroom, with 10 upright, refrigerator-shaped machines forming the four walls. The tiny interior cavity gives technicians access to the jumble of wires and cables on the machines' backs. It is surprisingly warm inside, as if the cluster were alive. Today's Watson is very different. It no longer exists solely within a wall of cabinets but is spread across a cloud of open-standard servers that run several hundred "instances" of the AI at once. Like all things cloudy, Watson is served to simultaneous customers anywhere in the world, who can access it using their phones, their desktops, or their own data servers.
Ask a layman about artificial intelligence and they might point to sci-fi villains such as HAL from 2001: A Space Odyssey or the Terminator. But the co-founders of the AI Now Institute, Meredith Whittaker and Kate Crawford, want to change the conversation. Instead of talking about far-flung super-intelligent AI, they argued on the latest episode of Recode Decode, we should be talking about the ways AI is affecting people right now, in everything from education to policing to hiring. Rather than killer robots, you should be concerned about what happens to your résumé when it hits a program like the one Amazon tried to build. "They took two years to design, essentially, an AI automatic résumé scanner," Crawford said. "And they found that it was so biased against any female applicant that if you even had the word 'woman' on your résumé it went to the bottom of the pile." That's a classic example of what Crawford calls "dirty data." Even though people think of algorithms as being ...
BootstrapLabs is a venture capital firm based in Silicon Valley and focused on applied artificial intelligence. Key sectors of interest include the Internet of Things, FinTech, the future of work, logistics/transportation, eHealth, security, and others. Its community of founders, intrapreneurs, AI/ML experts, executives, professors, researchers, and investors focused on innovation, technology, and entrepreneurship numbers some 30K people, with over 200K online followers, and the firm sees website traffic and deal-flow referrals coming from over 60 countries. BootstrapLabs brought together over 1,000 attendees during 2016; the community is a key pillar of its success, and the firm organizes many exclusive private and public AI-centric events each year. Its Applied AI Digest covered stories throughout 2016 such as "Google's DeepMind Beats a Top Player at the Game of Go," "Zuck to Create AI-Powered Jarvis," "IBM Watson Head on the Future of AI," "Artificial Intelligence Deals on the Rise," "Could AI Solve the World's Biggest Problems?," "Harvard Is Building an AI Engine as Fast as the Brain," and "Is Big Data Still a Thing?"
If you are a data scientist, a software developer, or a researcher in the social and human sciences with an interest in digital humanities, then you're no stranger to the ongoing discussion of how algorithms embed bias and discrimination, and to the call for critical and ethical engagement. This list is by no means exhaustive, and as more awareness is raised, more pieces, articles, and journal papers are being written daily. I plan to update these lists regularly. Also, if you think there is relevant material that I have not included, please leave it as a comment and I will add it. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O'Neil.