Cambridge Analytica


7 Ways Data and AI Can Be Used to Trick and Deceive the Public

#artificialintelligence

Deepfake and other altered videos are becoming so sophisticated that they are increasingly hard to spot. Many believe AI deepfake tools, which allow people to superimpose the face of a politician or actor onto a video and convincingly replicate their voice, could be a real threat to democracy. In May 2019, Donald Trump posted a video that had gone viral of Nancy Pelosi appearing to drunkenly slur her way through a speech. The video was quickly debunked -- someone had altered the original footage to slow down Pelosi's speech while raising the pitch so it sounded like natural slow speech. The video was viewed millions of times and, notably, Trump didn't remove it from his social media after it was debunked.
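The manipulation described above is simple to reproduce in principle. The sketch below is purely illustrative and is not the tooling used on the Pelosi clip: it uses librosa to slow a speech track to roughly 75% speed while preserving its pitch, which approximates "slowed but natural-sounding" speech; the filename and stretch rate are assumptions.

```python
# Illustrative sketch only -- not the actual tool used on the Pelosi clip.
# A phase-vocoder time stretch slows the speech while preserving pitch,
# so the result sounds slower but still "natural" rather than deepened.
import librosa
import soundfile as sf

# Load the original speech track at its native sample rate.
y, sr = librosa.load("speech.wav", sr=None)

# rate < 1 slows the audio down; pitch is preserved by the phase vocoder.
slowed = librosa.effects.time_stretch(y, rate=0.75)

# Save the manipulated track.
sf.write("speech_slowed.wav", slowed, sr)
```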


Facebook rolled out a chatbot to advise employees on how to answer questions about its controversies

Daily Mail - Science & tech

Consultancy firm Cambridge Analytica had offices in London, New York and Washington, as well as in Brazil and Malaysia. The company boasted it could 'find your voters and move them to action' through data-driven campaigns and a team that included data scientists and behavioural psychologists. In 2013, Cambridge professor Aleksandr Kogan used his app, This Is Your Digital Life, to ask 270,000 Facebook users questions about their personalities. By answering them, the users granted Kogan access not only to their own profiles but also to those of their friends. He subsequently sold that information to Cambridge Analytica for $51 million.


Facebook built a facial recognition app that could 'identify any member of the social network'

Daily Mail - Science & tech

Facebook is under fire for privacy concerns once again, after the social media giant revealed it tested a facial recognition app on its employees. Using real-time facial recognition, the firm was able to identify a person by pointing a smartphone camera at them. The app has reportedly been discontinued, but the technology was capable of bringing up the Facebook profile of anyone who had enabled facial recognition on their account. Facebook confirmed that it developed the app, but denied it was capable of identifying any member of its social network and pulling up their profile.


Artificial Inhumanity - WebSystemer.no

#artificialintelligence

A few months ago, Fr Philip Larrey published his book "Artificial Humanity". In this article, we will explain what would happen if we had an inhumane AI. First of all, what does inhumane mean? When we say Artificial Inhumanity, we are primarily referring to an AI that is not concerned with humans: it exhibits no human feeling, and to it humans are just animate objects roaming the world. Even though AI was initially conceived to serve humans, we cannot exclude the possibility of eventually having an AI that ultimately serves only its own interests. If that happens, then we are definitely in big trouble. The question of whether machines can think is about as relevant as the question of whether submarines can swim. Following the same line of thought, if machines exhibit humanity, does that mean that they are human?


Duke researchers use machine learning to defend personal information

#artificialintelligence

Two Duke researchers have found a way to confuse machine learning systems, potentially revealing a new way to protect online privacy. Neil Gong, assistant professor of electrical and computer engineering, and Jinyuan Jia, a Ph.D. candidate in electrical and computer engineering, have demonstrated the potential for so-called "adversarial examples" -- deliberately altered data -- to confuse machine learning systems. This research could be used to fool attackers who use such systems to analyze user data. "We found that, since attackers are using machine learning to perform automated large-scale inference attacks, and the machine learning is vulnerable to those adversarial examples, we can leverage those adversarial examples to protect our privacy," Gong said. Machine learning systems are tools for statistical analysis.
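The idea can be illustrated with a toy example. The sketch below is not the Duke researchers' actual method; it is a generic FGSM-style perturbation against a made-up attribute-inference classifier, showing how a small, bounded change to a user's public data can push an attacker's model away from its original guess. The model, features, and perturbation budget are all illustrative assumptions.

```python
# Hypothetical sketch (not Gong and Jia's actual code): a generic
# FGSM-style adversarial perturbation against a toy attribute-inference
# classifier. Model, features, and epsilon are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "inference attack" model: guesses a private attribute (e.g. a
# binary demographic label) from 20 public behavioural features.
attack_model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))

public_features = torch.rand(1, 20)                      # one user's public data
original_guess = attack_model(public_features).argmax(dim=1)

# Take one gradient step that increases the attacker's loss on its own
# guess, then keep the perturbation within a small budget epsilon.
epsilon = 0.05
features = public_features.clone().requires_grad_(True)
loss = nn.functional.cross_entropy(attack_model(features), original_guess)
loss.backward()
perturbed = (features + epsilon * features.grad.sign()).detach().clamp(0, 1)

print("attacker's guess on original data: ", original_guess.item())
print("attacker's guess on perturbed data:", attack_model(perturbed).argmax(dim=1).item())
```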


Canopy provides a blueprint for privacy-focused content recommendations

#artificialintelligence

With the advent of cloud computing, e-commerce, and social media, it's difficult to keep tabs on who has access to our data, and harder still to know how much care they're taking with it -- barely a day goes by without some form of data breach, lapse, or privacy scandal coming to the fore. But "data misuse" covers a broad gamut of scenarios that reach beyond poor security hygiene. Online tracking and profiling are rife -- it turns out there is a heap of money to be made from knowing where you are, what you do, and what you like. It all comes down to personalization: selling things, be it products, playlists, or a political ideology, based on who you are. The Facebook and Cambridge Analytica scandal, which highlighted how social networks armed with vast banks of personal data could be leveraged to profile voters and micro-target them with personalized political ads, was something of a watershed moment in elevating the issues of data privacy and abuse into the public consciousness.


Is Data the New Oil? - techsocialnetwork

#artificialintelligence

Data is everywhere and it's growing constantly. It's used in a multitude of ways, and most of the time we don't even realize it. It is no longer being deleted but stored for future reference and analysis, so it can start to predict you! I don't want to scare you but rather inform you, so that you are aware when you give your data away, whether it be on social media or while shopping.


The interface between psychometrics and data science

#artificialintelligence

Who are the brightest 10% of people at your company? Which of your current staff are best equipped with both good judgement and good leadership? One of my diligent HR officers wants to move into IT; is she a good fit for that role? Which of my current financial officers are at the greatest risk of being dishonest? I want to help staff become more assertive, but what percentage of staff would benefit most from such a course?


The Great Hack: the film that goes behind the scenes of the Facebook data scandal

#artificialintelligence

Cambridge Analytica may have become the byword for a scandal, but it's not entirely clear that anyone knows exactly what that scandal is. It's more like toxic word association: "Facebook", "data", "harvested", "weaponised", "Trump" and, in this country, most controversially, "Brexit". It was a media firestorm that's yet to be extinguished, a year on from whistleblower Christopher Wylie's revelations in the Observer and the New York Times about how the company acquired the personal data of tens of millions of Facebook users in order to target them in political campaigns. This week sees the release of The Great Hack, a Netflix documentary that is the first feature-length attempt to gather all the strands of the affair into some sort of narrative – though it is one contested even by those appearing in the film. "This is not about one company," Julian Wheatland, the ex-chief operating officer of Cambridge Analytica, claims at one point. "This technology is going on unabated and will continue to go on unabated. […] There was always going to be a Cambridge Analytica. It just sucks to me that it's Cambridge Analytica."


Facebook data-sharing deals with major tech companies under investigation in criminal inquiry

The Independent - Tech

Federal prosecutors are conducting a criminal investigation into data deals Facebook struck with some of the world's largest technology companies, intensifying scrutiny of the social media giant's business practices as it seeks to rebound from a year of scandal and setbacks. A grand jury in New York has subpoenaed records from at least two prominent makers of smartphones and other devices, according to two people who were familiar with the requests and who insisted on anonymity to discuss confidential legal matters. Both companies had entered into partnerships with Facebook, gaining broad access to the personal information of hundreds of millions of its users.