Podcast: How democracies can reclaim digital power

MIT Technology Review

Technology companies provide much of the critical infrastructure of the modern state and develop products that affect fundamental rights. Search and social media companies, for example, have set de facto norms on privacy, while facial recognition and predictive policing software used by law enforcement agencies can contain racial bias. In this episode of Deep Tech, Marietje Schaake argues that national regulators aren't doing enough to enforce democratic values in technology, and it will take an international effort to fight back. Schaake--a Dutch politician who used to be a member of the European parliament and is now international policy director at Stanford University's Cyber Policy Center--joins our editor-in-chief, Gideon Lichfield, to discuss how decisions made in the interests of business are dictating the lives of billions of people. Also this week, we get the latest on the hunt to locate an air leak aboard the International Space Station--which has grown larger in recent weeks. Elsewhere in space, new findings suggest there is even more liquid water on Mars than we thought. It's located in deep underground lakes and there's a chance it could be home to Martian life. Space reporter Neel Patel explains how we might find out. Back on Earth, the US election is heating up. Data reporter Tate Ryan-Mosley breaks down how technologies like microtargeting and data analytics have improved since 2016. Check out more episodes of Deep Tech here. Gideon Lichfield: There's a situation playing out onboard the International Space Station that sounds like something out of Star Trek… But there is an air leak in the space station.


The Kim Jong-un and Putin deepfakes TV networks refused to air

Mashable

The internet is full of misinformation. In an election year, that often means bad actors are trying to sway the vote one way or another. The president of the United States often spreads misinformation himself. Since the 2016 election, deepfake technology has often been considered one of the biggest threats in the online information wars. The ability to use machine learning to seamlessly swap one person's face onto another's in order to trick people is certainly concerning.


The technology that powers the 2020 campaigns, explained

MIT Technology Review

Campaigns and elections have always been about data--underneath the empathetic promises to fix your problems and fight for your family, it's a business of metrics. If a campaign is lucky, it will find its way through a wilderness of polling, voter attributes, demographics, turnout, impressions, gerrymandering, and ad buys to connect with voters in a way that moves or even inspires them. Obama, MAGA, AOC--all have had some of that special sauce. Still, campaigns that collect and use the numbers best win. That's been true for some time, of course.


Future Tense Newsletter: Make the Future Great Again

Slate

Politics are in the air, like that ominous reddish glow suffocating much of the West in recent weeks on account of all those tragic wildfires. This coming week we get our first presidential debate. A chance for Donald Trump and Joe Biden to shake hands and have a respectful, reasoned exchange of views on the future of the unfairly maligned Section 230 of the Communications Decency Act; the need to reform the Stored Communications Act; the wisdom of replicating Europe's General Data Protection Regulation; the merits of taking antitrust action against Google for its manipulation of search results or against Amazon for its treatment of third-party sellers on its platform. Maybe we will even see the candidates reflect humbly on humanity's place in the universe, in light of the breaking news from Venus. The debate will probably be all tense, no future--maybe not as heated as a debate between 2016 Lindsey Graham and 2020 Lindsey Graham, but close.


Preserving Integrity in Online Social Networks

arXiv.org Artificial Intelligence

Online social networks provide a platform for sharing information and free expression. However, these networks are also used for malicious purposes, such as distributing misinformation and hate speech, selling illegal drugs, and coordinating sex trafficking or child exploitation. This paper surveys the state of the art in keeping online platforms and their users safe from such harm, also known as the problem of preserving integrity. This survey comes from the perspective of having to combat a broad spectrum of integrity violations at Facebook. We highlight the techniques that have proven useful in practice and that deserve additional attention from the academic community. Instead of discussing the many individual violation types, we identify key aspects of the social-media ecosystem, each of which is common to a wide variety of violation types. Furthermore, each of these components represents an area for research and development, and the innovations that are found can be applied widely.


Deepfake Fiascos Of 2020 That Made Headlines

#artificialintelligence

Deepfakes are indeed scary and have managed to strike a nerve with many, especially those victimised by this sophisticated technology. Not only have they become a worldwide concern because of their influence on election campaigns, they have also made people anxious about the criminal activity associated with them. Easily accessible deepfake-making tools and advances in GANs have made it relatively easy for malicious actors to create these eerie, unreal AI-generated videos and images. This improvement and accessibility have in turn increased the number of deepfake incidents in recent times. Some of them are so convincing that they are hard to distinguish from genuine footage. One of the stranger applications reported this year used artificial intelligence to manipulate audio rather than video--a less common usage, termed an audio deepfake scam.
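
To make the GAN mechanism mentioned above concrete, here is a minimal, illustrative sketch in PyTorch (the framework choice is an assumption; none of the articles name one). A generator learns to mimic a toy 1-D "real" distribution while a discriminator learns to tell real samples from generated ones. Real deepfake systems apply the same adversarial loop to face images and video at vastly larger scale; this is only a sketch of the training pattern, not any production pipeline.

# Minimal GAN sketch (illustrative only): the generator learns to mimic a
# 1-D Gaussian standing in for "genuine media"; the discriminator learns to
# separate real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # emits a fake "sample"
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(),
    nn.Linear(32, 1),                      # real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # "Real" data: samples from N(3, 0.5).
    return torch.randn(n, 1) * 0.5 + 3.0

for step in range(2000):
    # Train discriminator: real -> 1, fake -> 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(real.size(0), 1)) + \
             loss_fn(discriminator(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train generator: try to fool the discriminator (fake -> 1).
    fake = generator(torch.randn(64, latent_dim))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("generated mean:", generator(torch.randn(1000, latent_dim)).mean().item())

The key point the sketch illustrates is the alternating updates: the discriminator is trained to separate real from fake, then the generator is trained against it, which is what lets generated output drift ever closer to the real distribution.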


'Video Authenticator' is Microsoft's answer to Deepfake detection

#artificialintelligence

Deepfakes are a class of synthetic media generated by AI and represent another dark side of technology. This form of artificial intelligence stole the headlines last year when a LinkedIn profile under the name Katie Jones appeared on the platform and started connecting with the who's who of the political elite in Washington, DC. It was alarming how deep learning created a lifelike image of a person and then penetrated social media, spreading misinformation. With the U.S. presidential election looming, lawmakers in the country are worried about how deepfakes can greatly jeopardize the transparency of the democratic process. Many of the leading tech companies have been asked for help and are working on developing tools that can detect such synthetic media. Global software giant Microsoft has now released two new tools that can spot whether a piece of media has been artificially manipulated.
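
The articles do not describe how Microsoft's tools work internally, so the following is emphatically not their implementation. It is only a generic sketch of the common frame-scoring approach to deepfake detection, under the assumption that a classifier assigns each video frame a manipulation probability and the per-frame scores are aggregated into a clip-level confidence; the FrameScorer model, its layer sizes, and the random stand-in clip are all hypothetical.

# Illustrative sketch only -- NOT Microsoft's Video Authenticator.
# A small CNN scores each frame with a probability of being manipulated,
# and the per-frame scores are averaged into a clip-level confidence.
import torch
import torch.nn as nn

class FrameScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)    # logit: higher = more likely fake

    def forward(self, frames):          # frames: (N, 3, H, W)
        x = self.features(frames).flatten(1)
        return self.head(x).squeeze(1)

model = FrameScorer()

# Stand-in clip: 8 random RGB frames at 224x224. A real pipeline would
# decode frames from video and crop detected faces before scoring.
clip = torch.rand(8, 3, 224, 224)

with torch.no_grad():
    frame_scores = torch.sigmoid(model(clip))   # per-frame fake probability
    clip_score = frame_scores.mean()            # simple aggregation

print("per-frame scores:", frame_scores.tolist())
print("clip-level manipulation confidence:", clip_score.item())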


Microsoft's New Deepfake Detector Puts Reality to the Test

#artificialintelligence

The upcoming US presidential election seems set to be something of a mess--to put it lightly. Covid-19 will likely deter millions from voting in person, and mail-in voting isn't shaping up to be much more promising. This all comes at a time when political tensions are running higher than they have in decades, issues that shouldn't be political (like mask-wearing) have become highly politicized, and Americans are dramatically divided along party lines. So the last thing we need right now is yet another wrench in the spokes of democracy, in the form of disinformation; we all saw how that played out in 2016, and it wasn't pretty. For the record, disinformation purposely misleads people, while misinformation is simply inaccurate, but without malicious intent.


Disinformation Will Come for Animal Crossing

Slate

On Tuesday, the Joe Biden–Kamala Harris campaign debuted a new series of lawn signs, variously emblazoned with the official Biden-Harris logo, the word JOE with a rainbow E at the end, and three pixelated sets of aviator sunglasses, Biden's signature prop. The release of lawn signs for a presidential campaign would not ordinarily be notable--aside from, at least, the usually undesirable pixelation of the images--but the location of these signs is unique: They are in a video game. These Biden-Harris signs are digital assets for players to proudly display inside the online multiplayer video game Animal Crossing: New Horizons. This is not the first time Animal Crossing has crossed paths with American politics: In May, Rep. Alexandria Ocasio-Cortez visited the Animal Crossing islands of some of her supporters. But the Biden-Harris sign rollout raises important questions around how companies that create and operate online multiplayer games will wrestle with the abuse of these digital social spaces for less wholesome political ends.

