Security & Privacy

Academia's Facial Recognition Datasets Illustrate The Globalization Of Today's Data


This week's furor over FaceApp has largely centered on concerns that its Russian developers might be compelled to share the app's data with the Russian government, much as the Snowden disclosures illustrated the myriad ways in which American companies were compelled to disclose their private user data to the US government. Yet this reflects a mistaken understanding of how the modern data trade actually works: American universities and companies routinely make their data available to companies all across the world, including in Russia and China. In today's globalized world, data is just as globalized, with national borders no longer restricting the flow of our personal information - a trend made worse by the data-hungry world of deep learning. Data brokers have long bought and sold our personal data in a shadowy world of international trade involving our most intimate and private information. The digital era has upended this explicit trade with an interlocking world of passive exchange via analytics services.

Facial Recognition: When Convenience and Privacy Collide


The use of facial recognition in the United States public sector has received a great deal of press lately, and most of it isn't positive. There's a lot of concern over how state and federal government agencies are using this technology and how the resulting biometric data will be used. Many fear that the use of this technology will lead to a Big Brother state. Unfortunately, these concerns are not without merit. We're already seeing damaging results where this technology is prevalent in countries like China, Singapore, and even the United Kingdom, where London authorities recently fined a man for disorderly conduct after he covered his face to avoid street surveillance.

Artificial Intelligence & Cybersecurity: Attacking & Defending


Cybersecurity suffers from a skills shortage in the market. As a result, the opportunities for artificial intelligence (AI) automation are vast. In many cases, AI is used to enhance and improve certain defensive aspects of cybersecurity; prime examples are combating spam and detecting malware. From the attacker's point of view, there are equally many incentives to use AI when trying to penetrate vulnerable systems.
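The spam-filtering use case mentioned above is classically handled with a naive Bayes text classifier. The following is a minimal stdlib-only sketch with invented toy training data (none of it comes from the article), using Laplace smoothing so unseen words don't zero out a score:

```python
import math
from collections import Counter

# Hypothetical toy corpus - real filters train on millions of messages
spam = ["win free money now", "free prize claim now", "cheap meds free offer"]
ham = ["meeting at noon tomorrow", "project status update", "lunch with the team"]

def word_counts(docs):
    counts = Counter()
    for d in docs:
        counts.update(d.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_score(msg, counts, prior):
    total = sum(counts.values())
    score = math.log(prior)
    for w in msg.split():
        # Laplace (add-one) smoothing for words unseen in this class
        score += math.log((counts[w] + 1) / (total + len(vocab)))
    return score

def classify(msg):
    prior = len(spam) / (len(spam) + len(ham))
    p_spam = log_score(msg, spam_counts, prior)
    p_ham = log_score(msg, ham_counts, 1 - prior)
    return "spam" if p_spam > p_ham else "ham"
```

Working in log space avoids floating-point underflow when multiplying many small per-word probabilities.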

FaceApp denies storing users' photographs without permission

The Guardian

The developer of a popular app that transforms users' faces to predict how they will look when older has insisted it is not accessing users' photographs without permission. FaceApp, launched by a Russian developer in 2017, uses artificial intelligence to let people see how they would look with a different hair colour, eye colour or gender. The app has topped download charts again this week after users homed in on its ageing filter, which has since been used by dozens of celebrities and prominent figures to picture how they will supposedly look in several decades' time. This surge of interest has in turn raised concerns that FaceApp is systematically harvesting users' images. People who upload their image to the app transfer the picture to a server controlled by the developer, with the photograph processing done remotely rather than on their phone.

Paul Friedman on LinkedIn: "Excellent Gartner insights on all things privacy. Our partner Verint has #AI powered tools to ensure private Omni-Channel conversations stay secure. Mayday Communications Inc promotes Verint's complete portfolio of #security solutions. #datacompliance #cybersecurity #gartner #verint"


In this newsletter featuring Gartner's report, "Predicts 2019: The Ambiguous Future of Privacy," we dig into steps you can take now to prepare your business for the rising tide of #privacy #regulations.

Artificial intelligence in cyber security: The savior or enemy of your business? - Hashed Out by The SSL Store


Artificial intelligence is both a blessing and a curse to businesses, customers, and cybercriminals alike. AI technology powers speech recognition (think Siri), Google's search engine, and Facebook's facial recognition software. Some credit card companies are now using AI to help financial institutions prevent billions of dollars in fraud annually. So is artificial intelligence an advantage or a threat to your company's digital security? On one hand, artificial intelligence in cyber security is beneficial because it improves how security experts analyze, study, and understand cybercrime.

Organisations turn to AI in race against cyber attackers


Companies and public sector organisations say they have no choice but to automate their cyber defences as hacking becomes increasingly sophisticated. Security professionals can no longer keep pace with the volume and sophistication of attacks on computer systems. In a study of 850 security professionals across 10 countries, more than half said their organisations are overwhelmed with data, so they are turning to machine-learning technologies that can identify cyber attacks by analysing huge quantities of network data and that have the potential to block attacks automatically. By 2020, two out of three companies plan to deploy cyber security defences incorporating machine learning and other forms of artificial intelligence (AI), according to the Capgemini study, Reinventing cyber security with artificial intelligence.
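One simple form of the network-data analysis the study describes is baseline anomaly detection: learn a host's normal traffic profile, then flag large deviations. This is a statistical stand-in for the machine-learning systems mentioned above, with entirely invented sample numbers:

```python
import statistics

# Hypothetical baseline: requests per minute observed from one host
# during a period of known-normal operation
baseline = [52, 48, 50, 47, 53, 49, 51, 50, 46, 54]
mu = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

def is_anomalous(rate, threshold=3.0):
    # Flag traffic whose rate deviates more than `threshold`
    # standard deviations from the learned baseline
    return abs(rate - mu) / sigma > threshold
```

Production systems replace the single metric and z-score with many features and learned models, but the principle - model normal, alert on deviation - is the same.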

How AI and Machine Learning Can Help With Governmental Cybersecurity Strategies


Cybersecurity is an ever-present threat to any country's national security. There are always hackers who want to use technology for malicious purposes, not to mention the long list of adversaries a country can accumulate over the years. What is at stake is the sensitive data of millions of citizens, companies, company boards, senior officials and members of government, along with state information and more. Unfortunately, not all governments take this peril as seriously as they should, and in most countries the efforts to create cyber-defence strategies lack budget, personnel and real field knowledge. Given this absence of real policies, artificial intelligence may well serve as a good starting point from which to build the walls that keep out possible threats.