If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate 3 commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%).
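The core of the audit methodology described above is disaggregation: rather than reporting one aggregate accuracy, the classifier's error rate is computed separately for each gender and skin-type subgroup. A minimal sketch of that computation follows; the record format and the sample numbers are illustrative, not the paper's actual data.

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute per-subgroup error rates for a gender classifier.

    Each record is a dict with 'gender', 'skin_type' (e.g. Fitzpatrick
    I-VI binned into 'lighter'/'darker'), and 'correct' (whether the
    classifier's prediction matched the label).
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        key = (r["gender"], r["skin_type"])
        totals[key] += 1
        if not r["correct"]:
            errors[key] += 1
    return {key: errors[key] / totals[key] for key in totals}

# Toy illustration with invented outcomes:
sample = (
    [{"gender": "female", "skin_type": "darker", "correct": c}
     for c in [False, False, True]]
    + [{"gender": "male", "skin_type": "lighter", "correct": c}
       for c in [True, True, True]]
)
rates = subgroup_error_rates(sample)
# rates[("female", "darker")] is 2/3; rates[("male", "lighter")] is 0.0
```

Comparing the resulting per-subgroup rates is what surfaces the disparity the abstract reports: an aggregate accuracy figure would average it away.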
Artificial intelligence makes our lives easier and more comfortable. But it, or rather certain uses of it, poses a threat to our rights and freedoms. Back in the 1970s, when the word "Internet" had not yet been invented, radical philosopher Herbert Marcuse predicted the emergence of new technologies that could change the world. On the one hand, they would open new prospects for freedom; on the other, they would create new forms of exclusion and give governments and corporations new mechanisms of control over people. Today his prophecy seems to have come true.
Human trafficking is a crime that takes place largely in the shadows. Victims, who are mostly women and children, often lack legal documentation in the country where they are forced to work or perform sex acts, and many fear reprisals if they go to authorities. Perpetrators, for obvious reasons, take great pains to conceal their behavior by laundering money and keeping their operations quiet. And others who engage in trafficking-related criminal activity -- such as individuals looking to connect with trafficked sex workers -- also have powerful incentives to hide their participation. Recently, law enforcement agencies and organizations that help victims of human trafficking have begun using artificial intelligence tools to overcome this lack of visibility.
Who should be on the ethics board of a tech company that's in the business of artificial intelligence (A.I.)? Given the attention to the devastating failure of Google's proposed Advanced Technology External Advisory Council (ATEAC) earlier this year, which was announced and then canceled within a week, it's crucial to get to the bottom of this question. Google, for one, admitted it's "going back to the drawing board." Tech companies are realizing that artificial intelligence changes power dynamics and as providers of A.I. and machine learning systems, they should proactively consider the ethical impacts of their inventions. That's why they're publishing vision documents like "Principles for A.I." when they haven't done anything comparable for previous technologies.
Technological advancement, including artificial intelligence (AI), has sparked debate among people and governments in developed countries where democratic systems shape the operation of institutional systems. Specifically, such systems have been driven by established universal values such as respect for human rights, property and privacy rights, and democracy, including freedom of expression and political participation. In these systems, the advancement of technology has been deployed to enhance the efficiency of governments in providing public services while undergoing public scrutiny and institutional oversight. For example, many cities in developed democratic countries have banned the use of facial recognition technology as an instrument of security efforts. However, this may not be the case in developing countries in general, and particularly in those undergoing long economic transitions without political liberalization, such as Vietnam and China.
We already knew an artificial intelligence could reflect the racial bias of its creator. But San Francisco thinks the tech could potentially do the opposite as well, by identifying and counteracting racial prejudice -- and it plans to put the theory to the test in a way that could change the legal system forever. On Wednesday, San Francisco District Attorney George Gascon announced that city prosecutors will begin using an AI-powered "bias-mitigation tool" created by Stanford University researchers on July 1. The tool strips from police reports any details that might indicate a person's race; this could include their last name, eye color, hair color, or location. It also removes any information that might identify the law enforcement involved in the case, such as their badge number, a DA spokesperson told The Verge.
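The redaction step described above can be sketched as a simple field filter over a structured report record. The field names and the redaction token below are hypothetical illustrations of the idea, not the Stanford tool's actual design.

```python
# Hypothetical field lists based on the article's description: details
# that might indicate a person's race, plus officer identifiers.
RACE_INDICATING_FIELDS = {"last_name", "eye_color", "hair_color", "location"}
OFFICER_FIELDS = {"badge_number", "officer_name"}

def redact_report(report: dict) -> dict:
    """Return a copy of a report record with sensitive fields masked."""
    sensitive = RACE_INDICATING_FIELDS | OFFICER_FIELDS
    return {
        field: "[REDACTED]" if field in sensitive else value
        for field, value in report.items()
    }

report = {"last_name": "Doe", "badge_number": "1234", "charge": "theft"}
print(redact_report(report))
# {'last_name': '[REDACTED]', 'badge_number': '[REDACTED]', 'charge': 'theft'}
```

In practice a tool like this would also have to handle free-text narrative fields, where identifying details are far harder to detect than in structured columns.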
As millions of security cameras become equipped with "video analytics" and other AI-infused technologies that allow computers not only to record but to "understand" the objects they're capturing, they could be used for both security and marketing purposes, the American Civil Liberties Union (ACLU) warned in a recent report, "The Dawn of Robot Surveillance." As the technology becomes more advanced, camera use is shifting from simply capturing and storing video "just in case" to actively evaluating it with real-time analytics for surveillance. While cameras remain mostly under decentralized ownership and control, the ACLU cautioned policymakers to be proactive and create rules to regulate the potential negative impact this could have. The report also listed specific features that could allow for intrusive surveillance and recommendations to curtail potential abuse. The organization warned legislators to be wary of technologies such as human action recognition, anomaly detection, contextual understanding, emotion recognition, wide-area surveillance, and video search and summarization, among other changes in camera technology.
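To give a concrete sense of one item on the report's list, "anomaly detection" in video analytics means automatically flagging frames or intervals that deviate from a learned notion of normal activity. The sketch below is a deliberately simplified stand-in: a z-score rule over per-frame motion scores (e.g. summed pixel differences between consecutive frames); real systems use far more sophisticated models.

```python
import statistics

def flag_anomalies(motion_scores, z_threshold=2.0):
    """Return indices of frames whose motion score deviates strongly
    from the mean, a toy illustration of video anomaly detection."""
    mean = statistics.fmean(motion_scores)
    stdev = statistics.pstdev(motion_scores)
    if stdev == 0:
        return []
    return [i for i, s in enumerate(motion_scores)
            if abs(s - mean) / stdev > z_threshold]

# Nine quiet frames and one burst of motion (invented values):
scores = [1.0, 1.1, 0.9, 1.0, 9.0, 1.0, 1.1, 0.95, 1.05, 1.0]
print(flag_anomalies(scores))  # [4]
```

Even this toy example shows why the ACLU's concern is structural: once video is scored continuously, the camera is no longer a passive archive but an active evaluator of behavior.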
More and more organizations are beginning to use or expand their use of artificial intelligence (AI) tools and services in the workplace. Despite AI's proven potential for enhancing efficiency and decision-making, it has raised a host of issues in the workplace which, in turn, have prompted an array of federal and state regulatory efforts that are likely to increase in the near future. Artificial intelligence, defined very simply, involves machines performing tasks in a way that is intelligent. The AI field involves a number of subfields or forms of AI that solve complex problems associated with human intelligence--for example, machine learning (computers using data to make predictions), natural-language processing (computers processing and understanding a natural human language like English), and computer vision or image recognition (computers processing, identifying, and categorizing images based on their content). One area where AI is becoming increasingly prevalent is in talent acquisition and recruiting.
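The parenthetical definition of machine learning above, computers using data to make predictions, can be illustrated with a minimal sketch: fit a line to past observations, then use the fitted parameters to predict an unseen value. The scenario and numbers are invented for illustration.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b: 'learning' a and b from data."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Past data: years of experience -> some numeric outcome (invented).
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]
a, b = fit_line(xs, ys)
print(a * 6 + b)  # prediction for x = 6: 12.0
```

Real workplace AI tools replace this one-variable line with models over many features, which is precisely why the regulatory concerns in this passage arise: the learned patterns can encode whatever is in the historical data, including bias.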
A new investigative report by Reuters, published Wednesday, revealed that the Massachusetts Institute of Technology and at least one other U.S. university have research partnerships with a Chinese firm that has ties to the expansive security system created in China's Xinjiang region. Beijing has been leading an intense campaign against the minority Uighurs in the autonomous region, with the United Nations estimating that up to one million people are currently being held in detention facilities. Reporters uncovered two documents revealing that iFlytek, an artificial intelligence company, was the sole supplier of 25 "voiceprint" collection systems for police in Kashgar, a major city in Xinjiang, during 2016. Maya Wang, a senior researcher at Human Rights Watch, said she heard of people in Xinjiang last May being asked to have their voices recorded using the software, but iFlytek declined to comment on whether that was its technology. A May 2017 blog post also revealed that another iFlytek subsidiary signed a "strategic cooperation framework agreement" with Xinjiang's prison administration bureau.
The great power nations that master the use of artificial intelligence are likely to gain tremendous military and economic benefits from the technology. The United States benefited greatly from a relatively fast adoption of the internet, and many of its most powerful companies today are the global giants of the internet age. Many now assume that American leadership in AI will follow just as naturally; I believe these to be fatal assumptions. The decade ahead will make it clear that the United States must, as it has in the past, earn its prosperity and its technological leadership – something that many Americans now take completely for granted. This will involve a focus on the competitiveness of the US economy – and a willingness to continually earn its place in the international order.