"I have nothing to hide" was once the standard response to surveillance programs utilizing cameras, border checks, and casual questioning by law enforcement. Privacy used to be considered a concept generally respected in many countries with a few changes to rules and regulations here and there often made only in the name of the common good. Things have changed, and not for the better. China's Great Firewall, the UK's Snooper's Charter, the US' mass surveillance and bulk data collection -- compliments of the National Security Agency (NSA) and Edward Snowden's whistleblowing -- Russia's insidious election meddling, and countless censorship and communication blackout schemes across the Middle East are all contributing to a global surveillance state in which privacy is a luxury of the few and not a right of the many. As surveillance becomes a common factor of our daily lives, privacy is in danger of no longer being considered an intrinsic right. Everything from our web browsing to mobile devices and the Internet of Things (IoT) products installed in our homes have the potential to erode our privacy and personal security, and you cannot depend on vendors or ever-changing surveillance rules to keep them intact. Having "nothing to hide" doesn't cut it anymore. We must all do whatever we can to safeguard our personal privacy. Taking the steps outlined below can not only give you some sanctuary from spreading surveillance tactics but also help keep you safe from cyberattackers, scam artists, and a new, emerging issue: misinformation. Data is a vague concept and can encompass such a wide range of information that it is worth briefly breaking down different collections before examining how each area is relevant to your privacy and security. A roundup of the best software and apps for Windows and Mac computers, as well as iOS and Android devices, to keep yourself safe from malware and viruses. 
Personally identifiable information, known as PII, can include your name, physical home address, email address, telephone numbers, date of birth, marital status, Social Security numbers (US)/National Insurance numbers (UK), and other information relating to your medical status, family members, employment, and education. All this data, whether lost in different data breaches or stolen piecemeal through phishing campaigns, can provide attackers with enough information to conduct identity theft, take out loans in your name, and potentially compromise online accounts that rely on security questions being answered correctly. In the wrong hands, this information can also prove to be a gold mine for advertisers lacking a moral backbone.
Last week, the United States Senate played host to a number of social media company VPs during hearings on the potential dangers presented by algorithmic bias and amplification. While that meeting almost immediately broke down into a partisan circus of grandstanding grievance airing, Democratic senators did manage to focus a bit on how these recommendation algorithms might contribute to the spread of online misinformation and extremist ideologies. The issues and pitfalls presented by social algorithms are well-known and have been well-documented. So, really, what are we going to do about it? "So I think in order to answer that question, there's something critical that needs to happen: we need more independent researchers being able to analyze platforms and their behavior," Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab at UC Berkeley, told Engadget. Social media companies "know that they need to be more transparent in what's happening on their platforms, but I'm of the firm belief that, in order for that transparency to be genuine, there needs to be collaboration between the platforms and independent peer reviewed, empirical research."
Nearly two thousand government bodies, including police departments and public schools, have been using Clearview AI without oversight. BuzzFeed News reports that employees from 1,803 public bodies used the controversial facial-recognition platform without authorization from their superiors. Reporters contacted a number of agency heads, many of whom said they were unaware their employees were accessing the system. A database of searches, outlining which agencies were able to access the platform and how many queries each made, was leaked to BuzzFeed News by an anonymous source. It has published a version of the database online, enabling you to examine how many times each department has used the tool.
This week, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey went back to Congress, the first hearing with Big Tech executives since the January 6 insurrection led by white supremacists that directly threatened the lives of lawmakers. The main topic of discussion was the role social media plays in the spread of extremism and disinformation. The end of liability protections granted by Section 230 of the Communications Decency Act (CDA), disinformation, and how tech can harm the mental health of children were discussed, but artificial intelligence took center stage. The word "algorithm" alone was used more than 50 times. Whereas previous hearings involved more exploratory questions and took on a feeling of Geek Squad tech repair meets policy, in this hearing lawmakers asked questions based on evidence and seemed to treat tech CEOs like hostile witnesses.
When a secretive start-up scraped the internet to build a facial-recognition tool, it tested a legal and ethical limit -- and blew the future of privacy in America wide open. In May 2019, an agent at the Department of Homeland Security received a trove of unsettling images. Found by Yahoo in a Syrian user's account, the photos seemed to document the sexual abuse of a young girl. One showed a man with his head reclined on a pillow, gazing directly at the camera. The man appeared to be white, with brown hair and a goatee, but it was hard to really make him out; the photo was grainy, the angle a bit oblique. The agent sent the man's face to child-crime investigators around the country in the hope that someone might recognize him. When an investigator in New York saw the request, she ran the face through an unusual new facial-recognition app she had just started using, called Clearview AI. The team behind it had scraped the public web -- social media, employment sites, YouTube, Venmo -- to create a database with three billion images of people, along with links to the webpages from which the photos had come. This dwarfed the databases of other such products for law enforcement, which drew only on official photography like mug shots, driver's licenses and passport pictures; with Clearview, it was effortless to go from a face to a Facebook account. The app turned up an odd hit: an Instagram photo of a heavily muscled Asian man and a female fitness model, posing on a red carpet at a bodybuilding expo in Las Vegas. The suspect was neither Asian nor a woman. But upon closer inspection, you could see a white man in the background, at the edge of the photo's frame, standing behind the counter of a booth for a workout-supplements company. On Instagram, his face would appear about half as big as your fingernail. The federal agent was astounded. 
The agent contacted the supplements company and obtained the booth worker's name: Andres Rafael Viola, who turned out to be an Argentine citizen living in Las Vegas.
Daniel Zhang, Saurabh Mishra, Erik Brynjolfsson, John Etchemendy, Deep Ganguli, Barbara Grosz, Terah Lyons, James Manyika, Juan Carlos Niebles, Michael Sellitto, Yoav Shoham, Jack Clark, and Raymond Perrault
Welcome to the fourth edition of the AI Index Report. This year we significantly expanded the amount of data available in the report, worked with a broader set of external organizations to calibrate our data, and deepened our connections with the Stanford Institute for Human-Centered Artificial Intelligence (HAI). The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence. Its mission is to provide unbiased, rigorously vetted, and globally sourced data for policymakers, researchers, executives, journalists, and the general public to develop intuitions about the complex field of AI. The report aims to be the most credible and authoritative source for data and insights about AI in the world.
"I don't use Facebook anymore," she said. I was leading a usability session for the design of a new mobile app when she stunned me with that statement. It was a few years back, when I was a design research lead at IDEO and we were working on a service design project for a telecommunications company. The design concept we were showing her had a simultaneously innocuous and yet ubiquitous feature -- the ability to log in using Facebook. But the young woman, older than 20, less than 40, balked at that feature and went on to tell me why she didn't trust the social network any more. This session was, of course, in the aftermath of the 2016 Presidential election. An election in which a man who many regarded as a television spectacle at best and grandiose charlatan at worst had just been elected to our highest office. Though now in 2020, our democracy remains intact.
This article is part of the Free Speech Project, a collaboration between Future Tense and the Tech, Law, & Security Program at American University Washington College of Law that examines the ways technology is influencing how we think about speech. Last summer's anti–police brutality protests represented the largest mass demonstration effort in American history. Since then, law enforcement departments nationwide have faced intense scrutiny for how they policed these historic protests. The repeated, egregious instances of violence against journalists and protesters are well documented and have driven widespread calls for systematic reform. These calls have focused in part on surveillance, after the police used sophisticated social media data monitoring, commandeered non-city camera networks, and tried other intrusive methods to identify suspects.
Selfie-snapping Capitol rioters left investigators a treasure trove of evidence -- at least 140,000 pictures and videos taken during the deadly Jan. 6 siege, according to federal prosecutors. The mass of digital evidence from media reports, live-streams and social media posts has been crucial to the FBI, which by Friday had identified more than 275 suspects, with close to 100 charged, officials said. Investigators have been working with social media and phone companies to help ID suspects -- as well as using advanced facial recognition technology, according to Bloomberg News.
The dating app Bumble has disabled its politics filter after it was supposedly used to reveal the identities of Capitol rioters, Mashable has reported. Bumble support posted on Twitter that it had "temporarily removed our politics filter to prevent misuse," adding that the app "prohibits any content that promotes terrorism or racial hatred." In another tweet, Bumble promised that the filter will "be reinstated in the future." It also stated that it has removed users confirmed as participants in the US Capitol attack.