How a Catholic Group Doxed Gay Priests
This week, WIRED debuted its joint investigation with Lighthouse Reports into the questions of bias and equity inherent in governments' use of algorithms to oversee financial assistance programs and identify alleged welfare fraud. The investigation included an unprecedented look inside the system used by the city of Rotterdam, in the Netherlands, and the data used to train its algorithm. We looked closely at how the algorithm's flawed conclusions and wrongful accusations have affected people's lives in Rotterdam. And we examined the global role of the private fraud-detection industry in these systems, as well as urgent concerns about the pervasive surveillance now built into Denmark's national welfare scheme. The United States FBI admitted for the first time this week that it has purchased location data about people in the US from private data brokers rather than obtaining a warrant for the information.
- Europe > Netherlands > South Holland > Rotterdam (0.47)
- Europe > Denmark (0.26)
- North America > United States > Colorado (0.06)
- Information Technology (0.72)
- Law Enforcement & Public Safety > Fraud (0.57)
- Government > Regional Government > North America Government > United States Government (0.54)
- Government > Military > Air Force (0.54)
A Privacy Hero's Final Wish: An Institute to Redirect AI's Future - Digital Wisdom
Yesterday, hundreds in Eckersley's community of friends and colleagues packed the pews for an unusual sort of memorial service at the church-like sanctuary of the Internet Archive in San Francisco--a symposium with a series of talks devoted not just to remembrances of Eckersley as a person but a tour of his life's work. Facing a shrine to Eckersley at the back of the hall filled with his writings, his beloved road bike, and some samples of his Victorian goth punk wardrobe, Turan, Gallagher, and 10 other speakers gave presentations about Eckersley's long list of contributions: his years pushing Silicon Valley toward better privacy-preserving technologies, his co-founding of a groundbreaking project to encrypt the entire web, and his late-life pivot to improving the safety and ethics of AI. The event also served as a kind of soft launch for AOI, the organization that will now carry on Eckersley's work after his death. Eckersley envisioned the institute as an incubator and applied laboratory that would work with major AI labs to take on the problem he had come to believe was, perhaps, even more important than the privacy and cybersecurity work to which he'd devoted decades of his career: redirecting the future of artificial intelligence away from the forces causing suffering in the world, toward what he described as "human flourishing." "We need to make AI not just who we are, but what we aspire to be," Turan said in his speech at the memorial event, after playing a recording of the phone call in which Eckersley had recruited him.
- Information Technology (0.57)
- Government (0.37)
Uncertain AI as More Ethical AI? CS Professor Carla Gomes responds
In her article in MIT Technology Review--"Giving Algorithms a Sense of Uncertainty Could Make Them More Ethical" (January 18, 2019)--Karen Hao reached out to Cornell CS Professor Carla Gomes to ask whether Peter Eckersley (and the Partnership on AI) is onto something with his approach: considering partial orders of solutions with respect to multiple, often conflicting, objectives, and possibly introducing uncertainty into AI systems, especially those that make decisions involving moral dilemmas. Eckersley says: "We as humans want multiple incompatible things. There are many high-stakes situations where it's actually inappropriate--perhaps dangerous--to program in a single objective function that tries to describe your ethics." Supportively, Gomes remarks: "The overall problem is very complex. It will take a body of research to address all issues, but Peter's approach is making an important step in the right direction."
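The core idea here--a partial order over solutions rather than a single objective function--can be illustrated with a minimal Pareto-dominance sketch. This is not Eckersley's actual code; the `safety` and `fairness` objectives and all names below are hypothetical stand-ins for whatever conflicting criteria a real system would weigh. The point is that some pairs of outcomes are simply incomparable, so the system keeps a set of maximal candidates instead of forcing one winner.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Outcome:
    # Hypothetical objectives; higher is better for each.
    safety: float
    fairness: float

def dominates(a: Outcome, b: Outcome) -> bool:
    """a dominates b if a is at least as good on every objective
    and strictly better on at least one."""
    at_least_as_good = a.safety >= b.safety and a.fairness >= b.fairness
    strictly_better = a.safety > b.safety or a.fairness > b.fairness
    return at_least_as_good and strictly_better

def pareto_front(outcomes: list[Outcome]) -> list[Outcome]:
    """Return the outcomes dominated by no other outcome: the maximal
    elements of the partial order. If two outcomes trade off against
    each other, both survive -- the incomparability is preserved
    rather than collapsed into a single 'optimal' answer."""
    return [o for o in outcomes
            if not any(dominates(p, o) for p in outcomes if p is not o)]
```

Under this partial order, an outcome strong on safety and another strong on fairness both remain on the front, while anything worse on every count is discarded; a single objective function would have been forced to rank the first two.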
Giving algorithms a sense of uncertainty could make them more ethical
Algorithms are increasingly being used to make ethical decisions. Perhaps the best example of this is a high-tech take on the ethical dilemma known as the trolley problem: if a self-driving car cannot stop itself from killing one of two pedestrians, how should the car's control software choose who lives and who dies? In reality, this conundrum isn't a very realistic depiction of how self-driving cars behave. But many other systems that are already here or not far off will have to make all sorts of real ethical trade-offs. Assessment tools currently used in the criminal justice system must weigh risks to society against harms to individual defendants; autonomous weapons will need to weigh the lives of soldiers against those of civilians. The problem is, algorithms were never designed to handle such tough choices.
- Transportation > Passenger (0.57)
- Transportation > Ground > Road (0.57)
- Information Technology > Robotics & Automation (0.57)
- Law > Criminal Law (0.56)
Google Backtracks, Says Its AI Will Not Be Used for Weapons or Surveillance
Google is committing to not using artificial intelligence for weapons or surveillance after employees protested the company's involvement in Project Maven, a Pentagon pilot program that uses artificial intelligence to analyze drone footage. However, Google says it will continue to work with the United States military on cybersecurity, search and rescue, and other non-offensive projects. Google CEO Sundar Pichai announced the change in a set of AI principles released today. The principles are intended to govern Google's use of artificial intelligence and are a response to employee pressure on the company to create guidelines for its use of AI. Employees at the company have spent months protesting Google's involvement in Project Maven, sending a letter to Pichai demanding that Google terminate its contract with the Department of Defense.
- Information Technology (1.00)
- Government > Military (1.00)
- Government > Regional Government > North America Government > United States Government (0.70)
AI's Malicious Potential Front and Center in New Report
As beneficial as artificial intelligence can be, it has its dark side, too. That dark side is the focus of a 100-page report a group of technology, academic and public interest organizations jointly released Tuesday. AI will be used by threat actors to expand the scale and efficiency of their attacks, the report predicts. They will employ it to compromise physical systems such as drones and driverless cars, and to broaden their privacy invasion and social manipulation capabilities. Novel attacks that take advantage of an improved capacity to analyze human behaviors, moods and beliefs on the basis of available data are to be expected, according to the researchers.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
AI is developing faster than experts imagined. Do we need a speed limit?
Artificial intelligence is rapidly developing and is already starting to change the world, at a pace that worries some experts. Huge personalities in the tech industry often lament the dangers of unfettered development of AI systems: people like Elon Musk and Stephen Hawking, who warn of a future where AI reigns supreme. Whether or not their fears are justified, there is certainly value in keeping tabs on the progress of AI innovation. AI is getting good, and in many cases far better than experts imagined. AlphaGo, Google's game-playing AI, has been beating the world's best players for a while now, something that wasn't thought to be possible for at least a decade.
- Information Technology (0.57)
- Transportation (0.42)
- Leisure & Entertainment (0.37)
Do We Need a Speedometer for Artificial Intelligence?
Microsoft said last week that it had achieved a new record for the accuracy of software that transcribes speech. Its system missed just one in 20 words on a standard collection of phone call recordings--matching humans given the same challenge. The result is the latest in a string of recent findings that some view as proof that advances in artificial intelligence are accelerating, threatening to upend the economy. Some software has proved itself better than people at recognizing objects such as cars or cats in images, and Google's AlphaGo software has overpowered multiple Go champions--a feat that until recently was considered a decade or more away. Companies are eager to build on this progress; mentions of AI on corporate earnings calls have grown more or less exponentially.
- North America > United States > California (0.05)
- Asia > China (0.05)
- Information Technology (1.00)
- Leisure & Entertainment > Games > Go (0.55)