Human trafficking is a crime that takes place largely in the shadows. Victims, who are mostly women and children, often lack legal documentation in the country where they are forced to work or perform sex acts, and many fear reprisals if they go to authorities. Perpetrators, for obvious reasons, take great pains to conceal their behavior by laundering money and keeping their operations quiet. And others who engage in trafficking-related criminal activity -- such as individuals looking to connect with trafficked sex workers -- also have powerful incentives to hide their participation. Recently, law enforcement agencies and organizations that help victims of human trafficking have begun using artificial intelligence tools to overcome this lack of visibility.
Who should be on the ethics board of a tech company that's in the business of artificial intelligence (A.I.)? Given the attention to the devastating failure of Google's proposed Advanced Technology External Advisory Council (ATEAC) earlier this year, which was announced and then canceled within a week, it's crucial to get to the bottom of this question. Google, for one, admitted it's "going back to the drawing board." Tech companies are realizing that artificial intelligence changes power dynamics and, as providers of A.I. and machine learning systems, they should proactively consider the ethical impacts of their inventions. That's why they're publishing vision documents like "Principles for A.I." when they haven't done anything comparable for previous technologies.
Technological advancement, including artificial intelligence (AI), has sparked debate among people and governments in developed countries where democratic systems shape how institutions operate. Such systems are grounded in established universal values: respect for human rights, property and privacy rights, and democratic principles, including freedom of expression and political participation. In these systems, advances in technology have been deployed to make governments more efficient at providing public services while remaining subject to public scrutiny and institutional oversight. For example, many cities in developed democratic countries ban the use of facial recognition technology as an instrument of security efforts. However, this may not be the case in developing countries in general, and particularly in those undergoing long economic transitions without political liberalization, such as Vietnam and China.
We already knew an artificial intelligence could reflect the racial bias of its creator. But San Francisco thinks the tech could potentially do the opposite as well, by identifying and counteracting racial prejudice -- and it plans to put the theory to the test in a way that could change the legal system forever. On Wednesday, San Francisco District Attorney George Gascon announced that city prosecutors will begin using an AI-powered "bias-mitigation tool" created by Stanford University researchers on July 1. The tool redacts details from police reports that could indicate a suspect's race; this could include their last name, eye color, hair color, or location. It also removes any information that might identify the law enforcement officers involved in the case, such as their badge numbers, a DA spokesperson told The Verge.
As millions of security cameras become equipped with "video analytics" and other AI-infused technologies that allow computers not only to record but to "understand" the objects they're capturing, they could be used for both security and marketing purposes, the American Civil Liberties Union (ACLU) warned in a recent report, "The Dawn of Robot Surveillance." As these technologies become more advanced, camera use is shifting from simply capturing and storing video "just in case" to actively evaluating it with real-time analytics for surveillance. While camera ownership remains mostly decentralized, the ACLU cautioned policymakers to be proactive and create rules to regulate the potential negative impact. The report also listed specific features that could enable intrusive surveillance, along with recommendations to curtail potential abuse. The organization warned legislators to be wary of technologies such as human action recognition, anomaly detection, contextual understanding, emotion recognition, wide-area surveillance, and video search and summarization, among other changes in camera technology.
More and more organizations are beginning to use or expand their use of artificial intelligence (AI) tools and services in the workplace. Despite AI's proven potential for enhancing efficiency and decision-making, it has raised a host of issues in the workplace which, in turn, have prompted an array of federal and state regulatory efforts that are likely to increase in the near future. Artificial intelligence, defined very simply, involves machines performing tasks in a way that is intelligent. The AI field involves a number of subfields or forms of AI that solve complex problems associated with human intelligence--for example, machine learning (computers using data to make predictions), natural-language processing (computers processing and understanding a natural human language like English), and computer vision or image recognition (computers processing, identifying, and categorizing images based on their content). One area where AI is becoming increasingly prevalent is in talent acquisition and recruiting.
A new investigative report published Wednesday by Reuters revealed that the Massachusetts Institute of Technology and at least one other U.S. university have research partnerships with a Chinese firm that has ties to the expansive security system built in China's Xinjiang region. Beijing has been leading an intense campaign against the minority Uighurs in the autonomous region, with the United Nations estimating that up to one million people are currently being held in detention facilities. Reporters uncovered two documents revealing that iFlytek, an artificial intelligence company, was the sole supplier of 25 "voiceprint" collection systems for police in Kashgar, a major city in Xinjiang, during 2016. Maya Wang, a senior researcher at Human Rights Watch, said that last May she heard of people in Xinjiang being asked to have their voices recorded using such software, though iFlytek declined to comment on whether the technology was its own. A May 2017 blog post also revealed that another iFlytek subsidiary signed a "strategic cooperation framework agreement" with Xinjiang's prison administration bureau.
The great power nations that master the use of artificial intelligence are likely to gain tremendous military and economic benefits from the technology. The United States benefitted greatly from a relatively fast adoption of the internet, and many of its most powerful companies today are the global giants of the internet age. Many now assume that American leadership in A.I. will follow just as naturally; I believe these to be fatal assumptions. The decade ahead will make it clear that the United States must, as it has in the past, earn its prosperity and its technological leadership – something that many Americans now take completely for granted. This will involve a focus on the competitiveness of the US economy – and a willingness to continually earn its place in the international order.
We propose a nonparametric test of independence, termed OPT-HSIC, between a covariate and a right-censored lifetime. Because censoring makes standard permutation-based testing approaches difficult to apply, we use optimal transport to transform the censored dataset into an uncensored one while preserving the relevant dependencies. We then apply a permutation test to the transformed dataset, using a kernel-based dependence measure (HSIC) as the test statistic. We prove that the test has the correct type I error when censoring is independent of the covariate. Experiments indicate that OPT-HSIC has power against a much wider class of alternatives than Cox proportional hazards regression, and that it retains correct type I error control even in the challenging cases where censoring depends strongly on the covariate.
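The permutation step of such a procedure can be sketched as follows. This is a minimal illustration of a kernel (HSIC) permutation test on uncensored data only; it assumes the optimal-transport transformation of the censored sample has already been applied, and the RBF kernel, its bandwidth, and the permutation count are illustrative choices, not the authors' settings.

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    """Gram matrix of an RBF kernel for a 1-D sample."""
    d = x[:, None] - x[None, :]
    return np.exp(-d**2 / (2 * sigma**2))

def hsic(K, L):
    """Biased HSIC estimate: trace(K H L H) / n^2, with centering matrix H."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n**2

def hsic_permutation_test(x, y, n_perm=500, seed=0):
    """Permutation p-value for independence of x and y using HSIC.

    Permuting y breaks any dependence while preserving both marginals,
    so the permuted statistics approximate the null distribution.
    """
    rng = np.random.default_rng(seed)
    K, L = rbf_gram(x), rbf_gram(y)
    stat = hsic(K, L)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(y))
        # Permute rows and columns of L consistently.
        exceed += hsic(K, L[np.ix_(idx, idx)]) >= stat
    return (1 + exceed) / (1 + n_perm)
```

In use, a strongly dependent pair (e.g. `y = x + noise`) yields a small p-value, while an independent pair yields a p-value roughly uniform on (0, 1]; the `(1 + exceed) / (1 + n_perm)` form keeps the p-value valid at finite permutation counts.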
As followers of Christ, we are called to engage the world around us with the unchanging gospel message of hope and reconciliation. Tools like technology are able to aid us in this pursuit. We know they can also be designed and used in ways that dishonor God and devalue our fellow image-bearers. Evangelical Christians hold fast to the inerrant and infallible Word of God, which states that every human being is made in God's image and thus has infinite value and worth in the eyes of their Creator. This message dictates how we view God, ourselves, and the tools that God has given us the ability to create.