Ex-Googler Meredith Whittaker on Political Power in Tech, the Flaws of 'The Social Dilemma,' and…

#artificialintelligence

OneZero is partnering with the Big Technology Podcast from Alex Kantrowitz to bring readers exclusive access to interview transcripts with notable figures in and around the tech industry. This week, Kantrowitz sits down with Meredith Whittaker, an A.I. researcher who helped lead Google's employee walkout in 2018. This interview, which took place at World Summit A.I., has been edited for length and clarity. To subscribe to the podcast and hear the interview for yourself, check it out on Apple Podcasts, Spotify, and Overcast. When I interviewed Tristan Harris about The Social Dilemma earlier this month, my mentions filled with people saying, "You should speak to the people who were critical of the social web long before the film." One name stood out: Meredith Whittaker. An A.I. researcher and former Big Tech employee, Whittaker helped lead Google's walkout in 2018 amid a season of activism inside the company. On this edition of the Big Technology Podcast, we spoke not only about her views on the film but also about the future of workplace activism inside tech companies, at a moment when some are questioning whether it belongs there at all. Alex Kantrowitz: It seems like your perspective on The Social Dilemma is a little bit different from Tristan's.


Does Palantir See Too Much?

#artificialintelligence

On a bright Tuesday afternoon in Paris last fall, Alex Karp was doing tai chi in the Luxembourg Gardens. He wore blue Nike sweatpants, a blue polo shirt, orange socks, charcoal-gray sneakers and white-framed sunglasses with red accents that inevitably drew attention to his most distinctive feature, a tangle of salt-and-pepper hair rising skyward from his head. Under a canopy of chestnut trees, Karp executed a series of elegant tai chi and qigong moves, shifting the pebbles and dirt gently under his feet as he twisted and turned. A group of teenagers watched in amusement.

After 10 minutes or so, Karp walked to a nearby bench, where one of his bodyguards had placed a cooler and what looked like an instrument case. The cooler held several bottles of the nonalcoholic German beer that Karp drinks (he would crack one open on the way out of the park). The case contained a wooden sword, which he needed for the next part of his routine. "I brought a real sword the last time I was here, but the police stopped me," he said matter-of-factly as he began slashing the air with the sword. Those gendarmes evidently didn't know that Karp, far from being a public menace, was the chief executive of an American company whose software has been deployed on behalf of public safety in France.

The company, Palantir Technologies, is named after the seeing stones in J.R.R. Tolkien's "The Lord of the Rings." Its two primary software programs, Gotham and Foundry, gather and process vast quantities of data in order to identify connections, patterns and trends that might elude human analysts. The stated goal of all this "data integration" is to help organizations make better decisions, and many of Palantir's customers consider its technology to be transformative. Karp claims a loftier ambition, however. "We built our company to support the West," he says. To that end, Palantir says it does not do business in countries that it considers adversarial to the U.S. and its allies, namely China and Russia. In the company's early days, Palantir employees, invoking Tolkien, described their mission as "saving the shire."

The brainchild of Karp's friend and law-school classmate Peter Thiel, Palantir was founded in 2003. It was seeded in part by In-Q-Tel, the C.I.A.'s venture-capital arm, and the C.I.A. remains a client. Palantir's technology is rumored to have been used to track down Osama bin Laden -- a claim that has never been verified but one that has conferred an enduring mystique on the company. These days, Palantir is used for counterterrorism by a number of Western governments.


Ethical Machine Learning in Health Care

arXiv.org Artificial Intelligence

The use of machine learning (ML) in health care raises numerous ethical concerns, especially as models can amplify existing health inequities. Here, we outline ethical considerations for equitable ML in the advancement of health care. Specifically, we frame ethics of ML in health care through the lens of social justice. We describe ongoing efforts and outline challenges in a proposed pipeline of ethical ML in health, ranging from problem selection to post-deployment considerations. We close by summarizing recommendations to address these challenges.


Weakly Supervised Learning of Nuanced Frames for Analyzing Polarization in News Media

arXiv.org Artificial Intelligence

In this paper we suggest a minimally supervised approach for identifying nuanced frames in news coverage of politically divisive topics. We propose breaking the broad policy frames suggested by Boydstun et al. (2014) into fine-grained subframes that can better capture differences in political ideology. We evaluate the suggested subframes and their embeddings, learned using minimal supervision, over three topics: immigration, gun control, and abortion. We demonstrate the ability of the subframes to capture ideological differences and to analyze political discourse in news media.
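
This excerpt doesn't spell out the learning procedure, but a common minimally supervised recipe conveys the flavor: represent each subframe by the average embedding of a handful of seed words, then score sentences against those subframe vectors by cosine similarity. The Python sketch below assumes a generic pretrained word-embedding file loaded with gensim; the subframe names, seed words, and the "embeddings.bin" path are illustrative stand-ins, not details from the paper.

import numpy as np
from gensim.models import KeyedVectors

# Hypothetical seed lexicons for two immigration subframes
# (illustrative, not taken from the paper).
SUBFRAME_SEEDS = {
    "economic_threat": ["jobs", "wages", "welfare", "taxpayer"],
    "humanitarian": ["asylum", "refugee", "family", "children"],
}

def embed_tokens(tokens, wv):
    """Average the embeddings of in-vocabulary tokens; None if none found."""
    vecs = [wv[t] for t in tokens if t in wv]
    return np.mean(vecs, axis=0) if vecs else None

def score_sentence(tokens, subframe_vecs, wv):
    """Cosine similarity between a sentence and each subframe centroid."""
    s = embed_tokens(tokens, wv)
    if s is None:
        return {}
    s = s / np.linalg.norm(s)
    return {name: float(s @ v) for name, v in subframe_vecs.items()}

# "embeddings.bin" is a placeholder for any word2vec-format model.
wv = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)
subframe_vecs = {}
for name, seeds in SUBFRAME_SEEDS.items():
    v = embed_tokens(seeds, wv)
    subframe_vecs[name] = v / np.linalg.norm(v)

print(score_sentence("migrants take jobs from local workers".split(),
                     subframe_vecs, wv))

Seed averaging like this is only a starting point; per the abstract, the authors learn the subframe embeddings with minimal supervision rather than fixing them from seed words.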


ICE just signed a contract with facial recognition company Clearview AI

#artificialintelligence

Immigration and Customs Enforcement (ICE) signed a contract with facial recognition company Clearview AI this week for "mission support," government contracting records show (as first spotted by the tech accountability nonprofit Tech Inquiry). The purchase order for $224,000 describes "clearview licenses" and lists "ICE mission support dallas" as the contracting office. ICE is known to use facial recognition technology; last month, The Washington Post reported that the agency, along with the FBI, had accessed state drivers' license databases -- a veritable facial recognition gold mine, as the Post termed it -- without the knowledge or consent of drivers. The agency has been criticized for its practices at the US southern border, which have included separating immigrant children from their families and detaining refugees indefinitely. "Clearview AI's agreement is with Homeland Security Investigations (HSI), which uses our technology for their Child Exploitation Unit and ongoing criminal investigations," Clearview AI CEO Hoan Ton-That said in an emailed statement to The Verge.


Controversial facial recognition tech firm Clearview AI inks deal with ICE

ZDNet

The US Department of Homeland Security (DHS) has signed a contract with Clearview AI to give Immigration and Customs Enforcement (ICE) access to the controversial facial recognition firm's technology. Tech Inquiry, a non-profit technology watchdog and rights outfit, spotted documents revealing the deal last week. The $224,000 purchase order, signed on August 12, 2020, is for "Clearview licenses" relating to "information technology components," but no further information has been made public. The contract will last until September 4, 2021. Tech Inquiry has submitted a Freedom of Information Act (FOIA) request for the contracts and communication between Clearview AI and ICE relating to the award.


Clearview AI wins an ICE contract as it prepares to defend itself in court

Engadget

Immigration and Customs Enforcement (ICE) this week signed a deal with Clearview AI to license the facial recognition company's technology. According to a federal purchase order unearthed by the nonprofit Tech Inquiry (via The Verge), an ICE mission support office in Dallas is paying $224,000 for "Clearview licenses." Engadget has contacted Clearview and ICE for details on the scope of this agreement, as well as what ICE plans to do with those licenses. ICE and Clearview signed the deal just as the company prepares to defend itself in court. Lawsuits filed in a number of states accuse Clearview of violating privacy and safety laws. Clearview's technology can identify a person by matching their photo against billions of images the company has scraped from social media and other internet services.
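
Clearview's actual pipeline is proprietary, but the matching described here follows, in general terms, the standard face-identification recipe: convert every face into a fixed-length embedding vector, then find the stored vector nearest to the query. A minimal Python sketch of that general technique follows, using the open-source face_recognition library; the image file names are placeholders, and nothing here reflects Clearview's own code.

import numpy as np
import face_recognition

# Build a toy "gallery" of known face embeddings (128-dimensional vectors).
gallery_paths = ["person_a.jpg", "person_b.jpg"]  # placeholder image files
gallery = []
for path in gallery_paths:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # keep the first detected face in each image
        gallery.append((path, encodings[0]))

# Embed the query photo and find its nearest neighbor in the gallery.
query_image = face_recognition.load_image_file("query.jpg")
query = face_recognition.face_encodings(query_image)[0]
distances = face_recognition.face_distance([enc for _, enc in gallery], query)
best = int(np.argmin(distances))
print(f"Closest match: {gallery[best][0]} (distance {distances[best]:.3f})")

# At the scale of billions of images, this brute-force scan would be replaced
# by an approximate nearest-neighbor index (e.g. FAISS) over the embeddings.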


Tech-enabled 'terror capitalism' is spreading worldwide. The surveillance regimes must be stopped

The Guardian

When Gulzira Aeulkhan finally fled China for Kazakhstan early last year, she still suffered debilitating headaches and nausea. She didn't know if this was a result of the guards at an internment camp hitting her in the head with an electric baton for spending more than two minutes on the toilet, or from the enforced starvation diet. Maybe it was simply the horror she had witnessed – the sounds of women screaming when they were beaten, their silence when they returned to the cell. Like an estimated 1.5 million other Turkic Muslims, Gulzira had been interned in a "re-education camp" in north-west China. After discovering that she had watched a Turkish TV show in which some of the actors wore hijabs, Chinese police had accused her of "extremism" and said she was "infected by the virus" of Islamism.


Federal agencies are worried face masks may be used to evade facial recognition technology

Daily Mail - Science & tech

The US Department of Homeland Security is reportedly worried that face coverings will stymie the police's use of facial recognition technology. According to a report from The Intercept, a bulletin drafted by the DHS discusses the effects of widespread use of face coverings in correspondence with other federal agencies, including Immigration and Customs Enforcement (ICE). 'The potential impacts that widespread use of protective masks could have on security operations that incorporate face recognition systems -- such as video cameras, image processing hardware and software, and image recognition algorithms -- to monitor public spaces during the ongoing Covid-19 public health emergency and in the months after the pandemic subsides,' reads the bulletin, according to The Intercept. The bulletin, which was obtained via a trove of police documents leaked in the 'BlueLeaks' hack on law enforcement agencies, mentions that masks could be used by extremists to avoid facial recognition technology but says there is no evidence that any such group is currently doing so. '[There is] no specific information that violent extremists or other criminals in the United States are using protective face coverings to conduct attacks,' reads the document.


Clearview AI app was reportedly used for fun by company's investors, friends

#artificialintelligence

The use of facial recognition technology has been spreading rapidly. Before Clearview AI became the target of public scrutiny earlier this year, the facial recognition app was used freely by the company's investors, clients and friends, according to a report Thursday from The New York Times. The app was reportedly demonstrated at parties and business gatherings, and even used on dates. Clearview identifies people by comparing photos to a database of images scraped from social media and other sites. It came under fire after a New York Times investigation in January.