It's hard to focus on the nitty-gritty of tech policy when the world is on fire. Trump may be gone from office and Twitter, but take, for example, his fight against Big Tech in the name of "anti-conservative bias" (no, it doesn't exist), which resulted in an assault on Section 230. Experts say the true aim of those efforts was to undermine content moderation and normalize the white supremacist attitudes that helped put people like Trump in power. Unfortunately, those efforts will have life for years to come as a form of "zombie Trumpism," as Berin Szoka, a senior fellow at the technology policy organization TechFreedom, put it.
There are many amazing practical applications of AI and machine learning here now and on the horizon. But that doesn't mean every use is good or applied with good intent. The fastest-growing type of financial crime in the United States is synthetic identity fraud, in which a fraudster combines real and fake information to create an entirely new identity. This steady uptick in synthetic identity fraud is likely driven by multiple factors, such as data breaches, dark web data access, and the competitive lending landscape. Experian's recent Future of Fraud Forecast predicts that these fraudsters will start to use fake faces for biometric verification, the first of five new threats it details for 2021.
Last month, the British television network Channel 4 broadcast an "alternative Christmas address" by Queen Elizabeth II, in which the 94-year-old monarch was shown cracking jokes and performing a dance popular on TikTok. Of course, it wasn't real: The video was produced as a warning about deepfakes, seemingly real images or videos that show people doing or saying things they never did or said. If an image of a person can be found, new technologies using artificial intelligence and machine learning now make it possible to show that person doing almost anything at all. The dangers of the technology are clear: A high-school teacher could be shown in a compromising situation with a student, or a neighbor could be depicted as a terrorist. Can deepfakes, as such, be prohibited under American law?
Deepfake technology (DT) has reached a new level of sophistication. Cybercriminals can now manipulate sounds, images, and videos to defraud and misinform individuals and businesses, a growing threat to international institutions and individuals that needs to be addressed. This paper provides an overview of deepfakes, their benefits to society, and how DT works. It highlights the threats that deepfakes present to businesses, politics, and judicial systems worldwide. Additionally, the paper explores potential solutions to deepfakes and concludes with directions for future research.
How can we assess the value of data objectively, systematically, and quantitatively? Pricing data, or information goods in general, has been studied and practiced across dispersed disciplines, such as economics, marketing, electronic commerce, data management, data mining, and machine learning. In this article, we present a unified, interdisciplinary, and comprehensive overview of this important direction. We examine various motivations behind data pricing, understand the economics of data pricing, and review the development and evolution of pricing models according to a series of fundamental principles. We discuss both digital products and data products. We also consider a series of challenges and directions for future work.
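One fundamental principle studied in this literature is fairness-based valuation via the Shapley value, which prices each data source by its average marginal contribution to a utility function over all orderings of sources. A minimal sketch, using a hypothetical toy utility (the coalition values below are invented for illustration and are not from the article):

```python
from itertools import permutations
import math

def shapley_values(players, utility):
    """Price each player by its average marginal contribution
    to the utility, taken over all orderings of the players."""
    values = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = []
        for p in order:
            before = utility(frozenset(coalition))
            coalition.append(p)
            values[p] += utility(frozenset(coalition)) - before
    n_orders = math.factorial(len(players))
    return {p: v / n_orders for p, v in values.items()}

# Hypothetical utilities: sellers A and B hold largely redundant
# records, while C holds unique ones, so C should be priced higher.
u = {frozenset(): 0, frozenset("A"): 4, frozenset("B"): 4,
     frozenset("C"): 6, frozenset("AB"): 5, frozenset("AC"): 10,
     frozenset("BC"): 10, frozenset("ABC"): 11}

prices = shapley_values("ABC", lambda s: u[s])
print(prices)  # A and B split the redundant value; C earns more
```

By construction the prices sum to the value of the full coalition (here 11), one of the "efficiency" axioms that makes Shapley pricing attractive in data marketplaces; the redundancy between A and B is automatically reflected in their lower prices.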
What could be one of the most consequential First Amendment cases of the digital age is pending before a court in Illinois and will likely be argued before the end of the year. The case concerns Clearview AI, the technology company that surreptitiously scraped 3 billion images from the internet to feed a facial recognition app it sold to law enforcement agencies. Now confronting multiple lawsuits based on an Illinois privacy law, the company has retained Floyd Abrams, the prominent First Amendment litigator, to argue that its business activities are constitutionally protected. Landing Abrams was a coup for Clearview, but whether anyone else should be celebrating is less clear. A First Amendment that shielded Clearview and other technology companies from reasonable privacy regulation would be bad for privacy, obviously, but it would be bad for free speech, too.
Local differential privacy has become the gold standard of the privacy literature for gathering or releasing sensitive individual data points in a privacy-preserving manner. However, locally differentially private data can distort the probability density of the underlying data because of the additive noise used to ensure privacy. In fact, the density of privacy-preserving data (no matter how many samples we gather) is always flatter than the density function of the original data points, due to convolution with the privacy-preserving noise density function. The effect is especially pronounced when using slow-decaying privacy-preserving noises, such as Laplace noise. This can result in under- or over-estimation of the heavy hitters. This is an important challenge facing social scientists due to the use of differential privacy in the 2020 Census in the United States. In this paper, we develop density estimation methods using smoothing kernels. We use the framework of deconvoluting kernel density estimators to remove the effect of privacy-preserving noise. This approach also allows us to adapt results from nonparametric regression with errors-in-variables to develop regression models based on locally differentially private data. We demonstrate the performance of the developed methods on financial and demographic datasets.
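The flattening effect the abstract describes can be seen directly: perturbing each point with independent Laplace noise convolves the empirical density with the noise density, which inflates the variance by 2(Δ/ε)² and spreads probability mass away from the peaks. A minimal sketch of the local Laplace mechanism with illustrative parameters (ε = 1, sensitivity 1, standard normal data — not the paper's datasets):

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize(x, epsilon, sensitivity=1.0):
    """Local DP via the Laplace mechanism: each point is
    perturbed independently, before any aggregation."""
    scale = sensitivity / epsilon  # Laplace scale b = sensitivity / epsilon
    return x + rng.laplace(0.0, scale, size=x.shape)

data = rng.normal(0.0, 1.0, size=50_000)     # original samples
private = randomize(data, epsilon=1.0)       # privatized samples

# Convolution with Laplace(b) noise adds 2*b^2 = 2.0 to the variance,
# so the privatized density is strictly flatter than the original.
print(np.var(data), np.var(private))
```

Deconvoluting kernel density estimators aim to invert exactly this convolution, recovering an estimate of the original (peaked) density from the flattened privatized samples.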
Travelers who wander the banana pancake trail through Southeast Asia will all get roughly the same experience. They'll eat crummy food on one of fifty boats floating around Halong Bay, then head up to the highlands of Sapa for a faux cultural experience with hill tribes that grow dreadful cannabis. After that, it's on to Laos to float the river in Vang Vieng while smashed on opium tea. Eventually, you'll see someone wearing a T-shirt with the classic slogan: "same same, but different." The phrase originates with Southeast Asian vendors, who often respond to queries about the authenticity of the fake goods they're selling with "same same, but different." It aptly describes how the technology world loves to spin things as fresh and new when they've hardly changed at all.
Residents of Portland, Maine, can now officially sue the bastards. In a robust show of doubling down on privacy protections, voters in the Maine city passed a measure Tuesday replacing and strengthening an existing ban on city officials' use of facial recognition technology. While city employees were already prohibited from using the controversial tech, the new ban also gives residents the right to sue the city for violations and specifies monetary fines the city would have to pay out. Oh yeah, and for some icing on the cake: Under the new law, city officials who violate the ban can be fired. What's more, if a person discovers that "any person or entity acting on behalf of the City of Portland, including any officer, employee, agent, contractor, subcontractor, or vendor" used facial recognition on them, that person is entitled to no less than $100 per violation or $1,000, whichever is greater.
This anti-detection starter pack came recommended for those looking to shield themselves from government surveillance while protesting in support of Black Lives Matter. In the future, the Federal Aviation Administration might be a resource added to the list. The surveillance tools used during protests run the gamut. It's unlikely that your Twitter account was hacked, as Donald Trump's was thought to be last month, to determine your location while protesting. But it may have been analyzed with a social media scanning tool.