Police and border guards must combat racial profiling and ensure that their use of "big data" collected via artificial intelligence does not reinforce biases against minorities, United Nations experts said on Thursday. Companies that sell algorithmic profiling systems, which are often used to screen job applicants, to public entities and private businesses must be regulated to prevent misuse of personal data that perpetuates prejudices, they said. "It's a rapidly developing technological means used by law enforcement to determine, using big data, who is likely to do what. And that's the danger of it," Verene Shepherd, a member of the UN Committee on the Elimination of Racial Discrimination, told Reuters. "We've heard about companies using these algorithmic methods to discriminate on the basis of skin colour," she added, speaking from Jamaica.
In August 2016, a Bloomberg report revealed a secret aerial surveillance program in Baltimore led by the city's police department. Over eight months, planes equipped with cameras collected over 300 hours of footage, used by the police to investigate alleged crimes. Hardly anyone outside police department leadership and the vendor, Persistent Surveillance Systems, knew. Baltimore's police commissioner at the time, Kevin Davis, defended both the planes and the secrecy. The city's murder rate was spiking, the stretched police department was responding to thousands of calls per day, and footage from the planes was helping police find suspects.
Governments need an abrupt change of direction to avoid "stumbling zombielike into a digital welfare dystopia," Philip G. Alston, a human rights expert reporting on poverty, told the United Nations General Assembly last year, in a report calling for the regulation of digital technologies, including artificial intelligence, to ensure compliance with human rights. The private companies that play an increasingly dominant role in social welfare delivery, he noted, "operate in a virtually human-rights-free zone." Last month, the U.N. expert monitoring contemporary forms of racism flagged concerns that "governments and nonstate actors are developing and deploying emerging digital technologies in ways that are uniquely experimental, dangerous, and discriminatory in the border and immigration enforcement context." The European Border and Coast Guard Agency, also called Frontex, has tested unpiloted military-grade drones in the Mediterranean and Aegean for the surveillance and interdiction of vessels of migrants and refugees trying to reach Europe, the expert, E. Tendayi Achiume, reported. The U.N. antiracism panel, which is charged with monitoring and holding states to account for their compliance with the international convention on eliminating racial discrimination, said states must legislate measures combating racial bias and create independent mechanisms for handling complaints.
A new technical paper has been released demonstrating how businesses can identify whether their artificial intelligence (AI) technology is biased. It also offers recommendations for those making AI systems to ensure they are fair, accurate, and compliant with human rights. The paper, Addressing the problem of algorithmic bias, was developed by the Australian Human Rights Commission, together with the Gradient Institute, Consumer Policy Research Centre, Choice, and the Commonwealth Scientific and Industrial Research Organisation's (CSIRO) Data61. Human Rights Commissioner Edward Santow, in his foreword, described algorithmic bias as a "kind of error associated with the use of AI in decision making, and often results in unfairness". He continued, saying that when this occurs it can result in harm, and therefore human rights should be considered when AI systems are being developed and used to make important decisions.
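The excerpt does not describe the paper's actual methodology, but a minimal sketch of one common way to check a decision system for bias, comparing selection rates across groups (the basis of the widely cited "four-fifths" rule of thumb), might look like the following. The data and function names here are illustrative, not drawn from the paper:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the fraction of applicants selected, per group.

    decisions: list of (group, selected) pairs, where `selected` is a bool.
    """
    totals, picks = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        picks[group] += int(selected)
    return {g: picks[g] / totals[g] for g in totals}

def disparate_impact_ratios(decisions, reference_group):
    """Ratio of each group's selection rate to the reference group's.

    A ratio well below 1.0 for some group (commonly, below 0.8) is a
    signal that the system's outcomes warrant closer scrutiny.
    """
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical hiring decisions: group A selected 8/10, group B 4/10.
decisions = ([("A", True)] * 8 + [("A", False)] * 2 +
             [("B", True)] * 4 + [("B", False)] * 6)
ratios = disparate_impact_ratios(decisions, reference_group="A")
# ratios["B"] == 0.5, well below the 0.8 threshold
```

A check like this flags disparate outcomes but does not by itself establish their cause; it is one starting point for the kind of auditing the paper recommends.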
On a summer night in Dallas in 2016, a bomb-handling robot made technological history. Police officers had attached roughly a pound of C-4 explosive to it, steered the device up to a wall near an active shooter and detonated the charge. In the explosion, the assailant, Micah Xavier Johnson, became the first person in the United States to be killed by a police robot. Afterward, then-Dallas Police Chief David Brown called the decision sound. Before the robot attacked, Mr. Johnson had shot five officers dead, wounded nine others and hit two civilians, and negotiations had stalled.
Differential privacy is a data anonymization technique that's used by major technology companies such as Apple and Google. The goal of differential privacy is simple: allow data analysts to build accurate models without sacrificing the privacy of the individual data points. But what does "sacrificing the privacy of the data points" mean? Well, let's think about an example. Suppose I have a dataset that contains information (age, gender, treatment, marriage status, other medical conditions, etc.) about every person who was treated for breast cancer at Hospital X.
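To make the hospital example concrete, here is a minimal sketch of the Laplace mechanism, the classic construction for answering a counting query with differential privacy. The patient records and the epsilon value are made up for illustration:

```python
import math
import random

def laplace_noise(scale):
    # Sample from a Laplace(0, scale) distribution via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon):
    """Differentially private count of records matching `predicate`.

    A counting query has sensitivity 1: adding or removing any one
    person's record changes the true count by at most 1. Adding Laplace
    noise with scale 1/epsilon therefore gives epsilon-differential
    privacy, so the released number reveals almost nothing about whether
    any individual patient is in the dataset.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical stand-in for the Hospital X dataset.
patients = [{"age": a, "treatment": "chemo"} for a in (34, 41, 55, 62, 29)]
noisy = private_count(patients, lambda r: r["age"] > 40, epsilon=1.0)
```

The analyst still gets a usefully accurate count (the noise has mean zero), while each individual patient gains a formal, quantifiable privacy guarantee; smaller epsilon means more noise and stronger privacy.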
New York City is trying to rein in the algorithms used to screen job applicants. It's one of the first cities in the U.S. to try to regulate what is an increasingly common, and opaque, hiring practice. The city council is considering a bill that would require potential employers to notify job candidates about the use of these tools, referred to as "automated decision systems." Companies would also have to complete an annual audit to make sure the technology doesn't result in bias. The move comes as the use of artificial intelligence in hiring skyrockets, increasingly replacing human screeners.
Artificial intelligence (AI) is steadily becoming a familiar tool for many Australians. We have come to know it through our pocket voice assistants, like Siri and Alexa, and as the brains behind Google's predictive searches. Australian businesses, particularly in the mining sector, view it as a means to gain a competitive advantage, and we have even seen its potential to fight COVID-19. As AI begins to permeate every aspect of our lives, the Australian government has recognised the economic and social opportunities it affords us in its newly proposed AI Action Plan. The discussion paper, released on 29 October 2020, is the latest in a suite of Australian initiatives targeting AI regulation and development, following on from the AI Ethics Framework.