Algorithms of war: The military plan for artificial intelligence

#artificialintelligence

At the outbreak of World War I, the French army was mobilised in the fashion of Napoleonic times. On horseback and equipped with swords, the cuirassiers wore bright tricolour uniforms topped with feathers--the same get-up as when they swept through Europe a hundred years earlier. The war they rode into, however, was nothing Napoleon would have recognised: vast fields filled with trenches, barbed wire, poison gas and machine gun fire plunged the ill-equipped soldiers into a violent hellscape of industrial-scale slaughter. Only three decades after the war's first bayonet charges across no man's land, the US was able to incinerate an entire city with a single nuclear bomb. And since the destruction of Hiroshima and Nagasaki in 1945, our rulers' methods of war have been made only deadlier and more "efficient".


Artificial Intelligence: The Terminator of Truth

#artificialintelligence

Science fiction movies like "Blade Runner" and "The Terminator" have defined the perception of artificial intelligence within popular culture. For most people, the term AI conjures up images of a dystopian future dominated by humanoid robots that have taken over the world. This common conception leads people to dismiss the technology as impossible, or at least far off in the future. Few realize that we are already entering a world dominated by AI, and it's nothing like "The Terminator." The actual risks posed by artificial intelligence have nothing to do with killer robots; they lie in the machine-learning algorithms that recommend content on the internet.


5 real AI threats that make The Terminator look like Kindergarten Cop

#artificialintelligence

Every time an AI article finds its way to social media, there are hundreds of people invoking the terrifying specter of "SKYNET." SKYNET is the fictional artificial general intelligence responsible for creating the killer robots of the Terminator film franchise. It was a scary vision of AI's future until deep learning came along and big tech decided to take off its metaphorical belt and really give us something to cry about. At least the people fighting the robots in the Terminator films get to face a villain they can see and shoot at. That makes it difficult to explain why, based on what's happening now, the real future might be even scarier than the one in those killer robot movies.


Ethical Artificial Intelligence is Focus of New Robotics Program - UT News

#artificialintelligence

Ethics will be at the forefront of robotics education thanks to a new University of Texas at Austin program that will train tomorrow's technologists to understand the positive -- and potentially negative -- implications of their creations. Today, much robotic technology is developed without considering its potentially harmful effects on society, including how these technologies can infringe on privacy or further economic inequity. The new UT Austin program will fill an important educational gap by prioritizing these issues in its curriculum. "In the next 10 years, we are going to live more closely alongside robots, and we want to be sure that those robots are fair, inclusive and free from bias," said Junfeng Jiao, associate professor in the School of Architecture and the program lead. "And because the robots we create are reflections of ourselves, it is imperative that technologists receive an excellent ethics education. We want our students to work directly with companies to create practices and technologies that are equitable and fair."


Survey XII: What Is the Future of Ethical AI Design? – Imagining the Internet

#artificialintelligence

Results released June 16, 2021 – Pew Research Center and Elon University's Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question. The Question – Regarding the application of AI Ethics by 2030: In recent years, there have been scores of convenings and even more papers generated proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public ...


A brief history of AI: how to prevent another winter (a critical review)

arXiv.org Artificial Intelligence

The field of artificial intelligence (AI), regarded as one of the most enigmatic areas of science, has witnessed exponential growth in the past decade, including a remarkably wide array of applications that have already impacted our everyday lives. Advances in computing power and the design of sophisticated AI algorithms have enabled computers to outperform humans in a variety of tasks, especially in the areas of computer vision and speech recognition. Yet AI's path has never been smooth: the field has essentially fallen apart twice in its lifetime (the 'winters' of AI), both times after periods of popular success (the 'summers' of AI). We provide a brief rundown of AI's evolution over the course of decades, highlighting its crucial moments and major turning points from inception to the present. In doing so, we attempt to learn from the past, anticipate the future, and discuss what steps may be taken to prevent another 'winter'.


War Mongering for Artificial Intelligence

#artificialintelligence

The ghost of Edward Teller must have been doing the rounds among members of the National Security Commission on Artificial Intelligence. The father of the hydrogen bomb was never much bothered by the ethical niggles that came with inventing murderous technology. It was not, for instance, "the scientist's job to determine whether a hydrogen bomb should be constructed, whether it should be used, or how it should be used." Responsibility, however exercised, rested with the American people and their elected officials. The application of AI in military systems has plagued the ethicist but excited certain leaders and inventors.


Why is Elon Musk, poster child of AI doomsday, creating AI-powered robots?

#artificialintelligence

After repeatedly warning about the dangers of artificial intelligence, and sparring with fellow tech billionaires on the issue, Elon Musk wants to create AI-powered humanoid robots. Speaking at his electric vehicle company Tesla's first AI Day event in California, Musk gave a preview of the Tesla Bot – a general purpose, bipedal, non-automotive robot. According to Musk, building a humanoid robot is the next logical step for Tesla because it has already become "the world's biggest robotics company." "Our cars are semi-sentient robots on wheels," he said. "With the full self-driving computer, the inference engine on the car, which we'll keep evolving obviously… neural nets, recognizing the world, understanding how to navigate through the world… it kinda makes sense to put that into a humanoid form."


The future of work in health and human services

#artificialintelligence

Health and human services (HHS) agencies often struggle to serve some of society's most needy populations. At many HHS agencies today, tight budgets limit the size of the workforce, even as the volume of caseloads continues to grow. That imbalance makes it hard to provide efficient and effective solutions to address the critical needs of individuals and families, and can leave employees feeling stressed and overworked. Those same employees may also see few opportunities for career development or advancement. High rates of turnover can put a steady stream of inexperienced staff into critical jobs with little training to prepare them.


On the Opportunities and Risks of Foundation Models

arXiv.org Artificial Intelligence

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character. This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations). Though foundation models are based on standard deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization. Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream. Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties. To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.
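The adaptation the report describes is, in practice, often a light fine-tune of a pretrained checkpoint on a small downstream dataset. Below is a minimal sketch of that pattern in Python, assuming PyTorch and the Hugging Face transformers library; the model name, toy data, and hyperparameters are illustrative choices, not anything specified in the report.

# Sketch: adapting a foundation model (here BERT) to a downstream
# classification task by fine-tuning. Assumes: pip install torch transformers
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the pretrained foundation model; the classification head on top
# is newly initialized for this task.
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy downstream data (illustrative only): binary sentiment labels.
texts = ["a thoughtful, rewarding film", "a dull and lifeless mess"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for _ in range(3):  # a few gradient steps stand in for a real training loop
    outputs = model(**batch, labels=labels)
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Every model adapted from the same checkpoint inherits its defects --
# the "homogenization" risk the report highlights.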