What could be one of the most consequential First Amendment cases of the digital age is pending before a court in Illinois and will likely be argued before the end of the year. The case concerns Clearview AI, the technology company that surreptitiously scraped 3 billion images from the internet to feed a facial recognition app it sold to law enforcement agencies. Now confronting multiple lawsuits based on an Illinois privacy law, the company has retained Floyd Abrams, the prominent First Amendment litigator, to argue that its business activities are constitutionally protected. Landing Abrams was a coup for Clearview, but whether anyone else should be celebrating is less clear. A First Amendment that shielded Clearview and other technology companies from reasonable privacy regulation would be bad for privacy, obviously, but it would be bad for free speech, too.
Travelers who wander the banana pancake trail through Southeast Asia will all get roughly the same experience. They'll eat crummy food on one of fifty boats floating around Ha Long Bay, then head up to the highlands of Sa Pa for a faux cultural experience with hill tribes that grow dreadful cannabis. After that, it's on to Laos to float the river in Vang Vieng while smashed on opium tea. Eventually, you'll see someone wearing a t-shirt with the classic slogan: "same same, but different." The phrase originates with Southeast Asian vendors, who often respond to queries about the authenticity of the fake goods they're selling with "same same, but different." It's a phrase that aptly describes how the technology world loves to spin things as fresh and new when they've hardly changed at all.
Residents of Portland, Maine, can now officially sue the bastards. In a robust show of doubling down on privacy protections, voters in the Maine city passed a measure Tuesday replacing and strengthening an existing ban on city officials' use of facial recognition technology. While city employees were already prohibited from using the controversial tech, the new ban also gives residents the right to sue the city for violations and specifies the monetary fines the city would have to pay out. Oh yeah, and for some icing on the cake: Under the new law, city officials who violate the ban can be fired. What's more, if a person discovers that "any person or entity acting on behalf of the City of Portland, including any officer, employee, agent, contractor, subcontractor, or vendor" used facial recognition on them, that person is entitled to $100 per violation or $1,000, whichever is greater.
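As a back-of-the-envelope sketch, the damages floor described above works like a simple max rule. The function name and the reading of "per violation" as multiplicative are illustrative assumptions, not language from the ordinance itself:

```python
def portland_frt_damages(num_violations: int) -> int:
    """Sketch of the statutory damages described in the article:
    the greater of $100 per violation or a flat $1,000.
    (Interpretation assumed for illustration; consult the ordinance text.)
    """
    if num_violations < 1:
        raise ValueError("at least one violation is required to recover")
    return max(100 * num_violations, 1000)

# A single violation still recovers the $1,000 floor;
# the per-violation amount only dominates past ten violations.
print(portland_frt_damages(1))   # 1000
print(portland_frt_damages(15))  # 1500
```

The point of the floor is that even a one-off violation is worth litigating, which is what gives the private right of action its teeth.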
This anti-detection starter pack came recommended for those looking to shield themselves from government surveillance while protesting in support of Black Lives Matter. In the future, the Federal Aviation Administration might be a resource added to the list. The range of surveillance tools used during protests is wide. It's unlikely that your Twitter account was hacked, as Donald Trump's was thought to be last month, to determine your location while protesting. But it may have been analyzed with a social media scanning tool.
Artificial intelligence (AI) applications have attracted considerable ethical attention, for good reason. Although AI models might advance human welfare in unprecedented ways, progress will not occur without substantial risks. This article considers three such risks: system malfunctions, privacy protections, and consent to data repurposing. To meet these challenges, traditional risk managers will likely need to collaborate intensively with computer scientists, bioinformaticists, information technologists, and data privacy and security experts. The article will also speculate on the degree to which these AI risks might be embraced or dismissed by risk management.
On a bright Tuesday afternoon in Paris last fall, Alex Karp was doing tai chi in the Luxembourg Gardens. He wore blue Nike sweatpants, a blue polo shirt, orange socks, charcoal-gray sneakers and white-framed sunglasses with red accents that inevitably drew attention to his most distinctive feature, a tangle of salt-and-pepper hair rising skyward from his head. Under a canopy of chestnut trees, Karp executed a series of elegant tai chi and qigong moves, shifting the pebbles and dirt gently under his feet as he twisted and turned. A group of teenagers watched in amusement. After 10 minutes or so, Karp walked to a nearby bench, where one of his bodyguards had placed a cooler and what looked like an instrument case. The cooler held several bottles of the nonalcoholic German beer that Karp drinks (he would crack one open on the way out of the park). The case contained a wooden sword, which he needed for the next part of his routine. "I brought a real sword the last time I was here, but the police stopped me," he said matter-of-factly as he began slashing the air with the sword. Those gendarmes evidently didn't know that Karp, far from being a public menace, was the chief executive of an American company whose software has been deployed on behalf of public safety in France. The company, Palantir Technologies, is named after the seeing stones in J.R.R. Tolkien's "The Lord of the Rings." Its two primary software programs, Gotham and Foundry, gather and process vast quantities of data in order to identify connections, patterns and trends that might elude human analysts. The stated goal of all this "data integration" is to help organizations make better decisions, and many of Palantir's customers consider its technology to be transformative. Karp claims a loftier ambition, however. "We built our company to support the West," he says. To that end, Palantir says it does not do business in countries that it considers adversarial to the U.S.
and its allies, namely China and Russia. In the company's early days, Palantir employees, invoking Tolkien, described their mission as "saving the shire." The brainchild of Karp's friend and law-school classmate Peter Thiel, Palantir was founded in 2003. It was seeded in part by In-Q-Tel, the C.I.A.'s venture-capital arm, and the C.I.A. remains a client. Palantir's technology is rumored to have been used to track down Osama bin Laden, a claim that has never been verified but one that has conferred an enduring mystique on the company. These days, Palantir is used for counterterrorism by a number of Western governments.
In June 2020, when the U.S. Department of Justice (DoJ) issued updated guidance on how to evaluate corporate compliance programs, it came with a clear mandate to companies: Compliance programs must use robust technology and data analytics to assess their own actions and those of any third parties they do business with, from the point of engagement onward. At the very least, companies are expected to be able to explain the rationale for using third parties, whether they have relationships with foreign officials, and any potential risks to their reputation. This is a compliance game-changer. Historically, organizations could argue that they simply did not have the information available to identify potential compliance dissonance across their networks: the "needle in a haystack" defense. Organizations are now expected to show that they are leveraging data and applying modern analytics to draw insights and navigate the risks across their entire business network.
Transaction data is like a friendship tie: both parties must respect the relationship, and if one party exploits it, the relationship sours. As data becomes increasingly valuable, firms must take care not to exploit their users, or they will sour those ties. Ethical uses of data cover a spectrum: at one end, using patient data in healthcare to cure patients is little cause for concern. At the other end, selling data to third parties who exploit users is serious cause for concern. Between these two extremes lies a vast gray area where firms need better ways to frame data risks and rewards in order to make better legal and ethical choices.
Back in 2008, New York Times best-selling author and Boing Boing alum Cory Doctorow introduced Marcus "w1n5t0n" Yallow to the world in the original Little Brother (which you can still read for free right here). The story follows the talented teenage computer prodigy's exploits after he and his friends find themselves caught in the aftermath of a terrorist bombing of the Bay Bridge. They must outwit and out-hack the DHS, which has turned San Francisco into a police state. Its sequel, Homeland, catches up with Yallow a few years down the line as he faces an impossible choice between behaving as the heroic hacker his friends see him as and toeing the company line. The third installment, Attack Surface, is a standalone story set in the Little Brother universe. It follows Yallow's archrival, Masha Maximow, an equally talented hacker who finds herself working as a counterterrorism expert for a multinational security firm. By day, she enables tin-pot dictators around the world to repress and surveil their citizens.