
Marco Rubio calls for improved 'threat assessment process' to stop school shooters before they act

FOX News

Sen. Marco Rubio, R-Fla., joined the "Brian Kilmeade Show" to discuss his effort to prevent school shootings nationwide and the latest on Putin's war on Ukraine. Lawmakers on Capitol Hill are debating classroom security measures to ensure student safety across the country, one week after the Uvalde elementary school massacre left more than 20 people dead, including 19 children. Rubio outlined his approach to preventing potential school shooters from committing mass atrocities, stressing the importance of flagging concerning behavior beforehand. "It's so important that all this information be fed into a threat assessment process," Rubio told host Brian Kilmeade. "That has to be applied, obviously, at the local level, and that involves multiple people feeding into the threat assessment, because a bunch of people are going to see those threats."


Ethics And Conversational Assistants

#artificialintelligence

Because language itself is the medium of exchange, it is unrealistic to rule out every form of anthropomorphism when addressing a conversational assistant. Designers must therefore limit these shortcomings by implementing design rules that reduce the risks of deception and dependency and build confidence in these systems.


Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has become a part of everyday conversation and our lives. It is considered the new electricity that is revolutionizing the world. AI attracts heavy investment from both industry and academia. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the industry has seen many ups and downs due to over-expectations and the disappointments that have followed. The purpose of this book is to give a realistic picture of AI, its history, its potential, and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common AI representations, methods, and machine learning approaches are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what to do in the future. In the appendix, we look at the development of AI education, especially from the perspective of course contents at our own university.


Ethical and social risks of harm from Language Models

arXiv.org Artificial Intelligence

This paper aims to help structure the risk landscape associated with large-scale Language Models (LMs). In order to foster advances in responsible innovation, an in-depth understanding of the potential risks posed by these models is needed. A wide range of established and anticipated risks are analysed in detail, drawing on multidisciplinary expertise and literature from computer science, linguistics, and the social sciences. We outline six specific risk areas: I. Discrimination, Exclusion and Toxicity, II. Information Hazards, III. Misinformation Harms, IV. Malicious Uses, V. Human-Computer Interaction Harms, VI. Automation, Access, and Environmental Harms. The first area concerns the perpetuation of stereotypes, unfair discrimination, exclusionary norms, toxic language, and lower performance by social group for LMs. The second focuses on risks from private data leaks or LMs correctly inferring sensitive information. The third addresses risks arising from poor, false, or misleading information, including in sensitive domains, and knock-on risks such as the erosion of trust in shared information. The fourth considers risks from actors who try to use LMs to cause harm. The fifth focuses on risks specific to LMs used to underpin conversational agents that interact with human users, including unsafe use, manipulation, or deception. The sixth discusses the risk of environmental harm, job automation, and other challenges that may have a disparate effect on different social groups or communities. In total, we review 21 risks in depth. We discuss the points of origin of different risks and point to potential mitigation approaches. Lastly, we discuss organisational responsibilities in implementing mitigations, and the role of collaboration and participation. We highlight directions for further research, particularly on expanding the toolkit for assessing and evaluating the outlined risks in LMs.


Trustworthy AI: From Principles to Practices

arXiv.org Artificial Intelligence

Fast-developing artificial intelligence (AI) technology has enabled various applied systems deployed in the real world, impacting people's everyday lives. However, many current AI systems have been found vulnerable to imperceptible attacks, biased against underrepresented groups, lacking in user privacy protection, and so on, which not only degrades the user experience but also erodes society's trust in all AI systems. In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems. We first introduce a theoretical framework covering important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, alignment with human values, and accountability. We then survey leading industry approaches in these areas. To unify the current fragmented approaches to trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems, ranging from data acquisition to model development, to system development and deployment, and finally to continuous monitoring and governance. In this framework, we offer concrete action items for practitioners and societal stakeholders (e.g., researchers and regulators) to improve AI trustworthiness. Finally, we identify key opportunities and challenges in the future development of trustworthy AI systems, including the need for a paradigm shift toward comprehensively trustworthy AI systems.


Is AI sexist and racist?

#artificialintelligence

We all use facial recognition to unlock our phones. And we all view online content automatically suggested to us. But some of us have rather more success with artificial intelligence (AI) than others. A study of face recognition AIs found that systems from leading companies IBM, Microsoft and Amazon misclassified the faces of Oprah Winfrey, Michelle Obama and Serena Williams, while having no trouble at all with the faces of white men. Even digital assistants such as Cortana and Google Assistant use female voices by default, perhaps unconsciously reinforcing the stereotype of female subservience in the minds of millions of users.


The AI Index 2021 Annual Report

arXiv.org Artificial Intelligence

Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.


Google antitrust: Just how much do you actually use it? Way more than you think

USATODAY - Tech Top Stories

Google's influence in our lives is overwhelming, which is perhaps one of the reasons the Department of Justice and several state attorneys general banded together to file an antitrust lawsuit against the company. But just how wide is Google's reach? We decided to take a look, and the results may surprise you. Start with the fact that Google ads are all over the Internet, and despite the initial stated goal of "organizing the world's information," the Alphabet unit is designed to have more ads appear, to keep earnings up. In its most recent earnings report, Alphabet reported $38.30 billion in revenue for Google.


We need a full investigation into Siri's secret surveillance campaign (Ted Greenberg)

The Guardian

No one wants their most private activities secretly monitored. That's why wiretapping is strictly regulated in the US and most of the world. Federal law makes it a crime for the government to surveil communications without a court-ordered warrant. This is not the issue here. Nor is this a case involving one-party consent.


Adriana Cohen: Congress should break up Big Tech -- companies are far too powerful

FOX News

Monopolistic companies such as Facebook, Twitter, Google, Apple and others have become far too powerful. They thwart competition and abuse their power, whether by failing to protect users' privacy and data or by controlling one of our most basic freedoms -- free speech -- in hopes of influencing, if not swaying, elections. These actions warrant congressional intervention, especially given Silicon Valley's well-known political bias against conservatives -- including the president of the United States of America. President Trump's tweets are routinely "fact-checked" and censored, for example, while his political opponents' are not. This rigged system has far-reaching consequences that, among other things, shape public opinion and culture and taint America's standing in the world, while diminishing our collective rights.