The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


Another reminder that bias, testing, diversity is needed in machine learning: Twitter's image-crop AI may favor white men, women's chests

#artificialintelligence

Concerns about bias or unfair results in AI systems have come to the fore in recent years as the technology has infiltrated hiring, insurance, law enforcement, advertising, and other aspects of society. Prejudiced code may be a source of indignation on social media, but it also affects people's access to opportunities and resources in the real world, and it is something that needs to be dealt with at a national and international level. A variety of factors go into making insufficiently neutral systems, such as unrepresentative training data, lack of testing on diverse subjects at scale, lack of diversity among research teams, and so on. But among those who developed Twitter's cropping algorithm, several expressed frustration about the assumptions being made about their work. Ferenc Huszár, a former Twitter employee, a co-author of Twitter's image-cropping research, and now a senior lecturer in machine learning at the University of Cambridge, acknowledged there is reason to look into the results people have been reporting, though he cautioned against jumping to conclusions about negligence or lack of oversight. Some of the outrage was based on a small number of reported failure cases. While these failures look very bad, there is work to be done to determine the degree to which they are associated with race or gender.
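Huszár's point is that individual failure screenshots cannot establish a systematic skew; that takes aggregate testing over many paired images. The sketch below illustrates the idea only. Twitter's actual saliency model is not reproduced here, so `pick_subject` is a hypothetical stand-in (a coin flip simulating an unbiased cropper), and `crop_bias_test` simply measures how far the selection rate drifts from an even split.

```python
import random
from collections import Counter

# Hypothetical stand-in for a saliency-based cropper: given a two-subject
# image, return which subject ("A" or "B") the crop centers on. The real
# model is not available here, so a seeded coin flip simulates an
# unbiased cropper purely for illustration.
def pick_subject(rng: random.Random) -> str:
    return rng.choice(["A", "B"])

def crop_bias_test(n_trials: int, seed: int = 0) -> float:
    """Return the fraction of trials in which subject A was chosen.

    For an unbiased cropper this should sit near 0.5. A large,
    consistent deviation across many demographic pairings, rather than
    a handful of viral examples, is what would warrant investigation.
    """
    rng = random.Random(seed)
    counts = Counter(pick_subject(rng) for _ in range(n_trials))
    return counts["A"] / n_trials

share = crop_bias_test(10_000)
print(f"subject-A share: {share:.3f}")  # near 0.5 for this unbiased stand-in
```

With 10,000 trials, random noise in the selection rate is on the order of half a percentage point, so even a modest real skew would stand out; a dozen hand-picked screenshots cannot support the same inference.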


Zoom faces lawsuit over Facebook data controversy

The Independent - Tech

Video conference app Zoom illegally shared personal data with Facebook, even if users did not have a Facebook account, a lawsuit claims. The app has experienced a surge in popularity as millions of people around the world are forced to work from home as part of coronavirus containment measures. The lawsuit, which was filed in a California federal court on Monday, states that the company failed to inform users that their data was being sent to Facebook "and possibly other third parties". It states: "Had Zoom informed its users that it would use inadequate security measures and permit unauthorised third-party tracking of their personal information, users... would not have been willing to use the Zoom App." The allegations come amid a flurry of questions surrounding Zoom's privacy policies, with the Electronic Frontier Foundation recently warning that the app allows administrators to track the activities of attendees.


The 2018 Survey: AI and the Future of Humans

#artificialintelligence

"Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties. Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today? Please explain why you chose the answer you did and sketch out a vision of how the human-machine/AI collaboration will function in 2030."


Artificial stupidity: 'Move slow and fix things' could be the mantra AI needs

#artificialintelligence

"Let's not use society as a test-bed for technologies that we're not sure yet how they're going to change society," warned Carly Kind, director at the Ada Lovelace Institute, an artificial intelligence (AI) research body based in the U.K. "Let's try to think through some of these issues -- move slower and fix things, rather than move fast and break things." Kind was speaking as part of a recent panel discussion at Digital Frontrunners, a conference in Copenhagen that focused on the impact of AI and other next-gen technologies on society. The "move fast and break things" ethos embodied by Facebook's rise to internet dominance is one that has been borrowed by many a Silicon Valley startup: develop and swiftly ship an MVP (minimum viable product), iterate, learn from mistakes, and repeat. These principles are relatively harmless when it comes to developing a photo-sharing app, social network, or mobile messaging service, but in the 15 years since Facebook came to the fore, the technology industry has evolved into a very different beast. Large-scale data breaches are a near-daily occurrence, data-harvesting on an industrial level is threatening democracies, and artificial intelligence (AI) is now permeating just about every facet of society -- often to humans' chagrin.


AI Tech North: Time for discussion!

#artificialintelligence

I recently had the pleasure of attending the first AI Tech event to take place in the North of England, something that set a powerful and thought-provoking precedent for the future. Held as part of the Leeds Digital Festival, AI Tech North was sold out and featured a good mixture of students, programmers, and small businesses coming together to hear some of the leading experts in their field share their wealth of knowledge. Anthony Cohn, a Professor in Automated Reasoning at the University of Leeds, opened the event with a lively introduction that gave all in attendance a solid overview of the components of AI, breaking things down into five major categories: perception, language/speech recognition, planning, reasoning (inferring new facts from a basis of existing facts and common sense), and learning. He stressed that "intelligence can be manifested in different ways" and that "the most successful parts of AI is where there has been little human interaction e.g. The point he made that stuck with me the most, however, was that the biggest threat to AI and its progress for the foreseeable future is that "the public overestimates the capabilities of AI", something that clearly shows the need for better awareness of exactly what AI is, away from the constraints of science fiction and fantasy.