Well File: Security & Privacy


AI can now convincingly mimic cybersecurity and medical experts

#artificialintelligence

If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation – flagged and unflagged – has been aimed at the general public. Imagine the possibility of misinformation – information that is false or misleading – in scientific and technical fields like cybersecurity, public safety and medicine. There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community.


TikTok Has Started Collecting Your 'Faceprints' and 'Voiceprints.' Here's What It Could Do With Them

TIME - Tech

Recently, TikTok made a change to its U.S. privacy policy, allowing the company to "automatically" collect new types of biometric data, including what it describes as "faceprints" and "voiceprints." TikTok's unclear intent, the permanence of the biometric data and potential future uses for it have caused concern among experts who say users' security and privacy could be at risk. On June 2, TikTok updated the "Information we collect automatically" portion of its privacy policy to include a new section called "Image and Audio Information," giving itself permission to gather certain physical and behavioral characteristics from its users' content. The increasingly popular video sharing app may now collect biometric information such as "faceprints and voiceprints," but the update doesn't define these terms or what the company plans to do with the data. "Generally speaking, these policy changes are very concerning," Douglas Cuthbertson, a partner in Lieff Cabraser's Privacy & Cybersecurity practice group, tells TIME.


How deep learning can deliver improved cybersecurity [Q&A]

#artificialintelligence

Traditional cybersecurity isn't necessarily bad at detecting attacks; the trouble is that it often does so only after they have occurred. A better approach is to spot potential attacks and block them before they can do any damage. One possible way of doing this is via 'deep learning', allowing technology to identify the difference between good and bad. We spoke with Brooks Wallace, cybersecurity sales leader at Deep Instinct, to find out more about this approach. BW: If you look at cybersecurity, there's always been this holy grail of prevention.
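The excerpt stops at the high-level idea: train a deep model on known-good and known-bad samples so that new items can be scored, and blocked, before they do damage. As a rough, generic illustration only (not Deep Instinct's product or pipeline), the sketch below trains a small neural network on synthetic placeholder feature vectors and then thresholds the predicted probability of maliciousness for a new sample; a real system would extract features from files, processes, or network events.

```python
# Minimal sketch of pre-execution "good vs. bad" classification.
# Features and labels here are synthetic placeholders, not real telemetry.
import torch
import torch.nn as nn

torch.manual_seed(0)
features = torch.randn(1000, 32)                          # stand-in for extracted file features
labels = (features[:, 0] + features[:, 1] > 0).float()    # stand-in for benign(0)/malicious(1)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(50):
    opt.zero_grad()
    loss = loss_fn(model(features).squeeze(1), labels)
    loss.backward()
    opt.step()

# At "pre-execution" time, score a new sample and block it if the predicted
# probability of being malicious crosses a chosen threshold.
new_sample = torch.randn(1, 32)
p_malicious = torch.sigmoid(model(new_sample)).item()
print("block" if p_malicious > 0.5 else "allow", round(p_malicious, 3))
```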


The EU's Artificial Intelligence Act: A Pragmatic Approach - Techonomy

#artificialintelligence

The European Union has introduced a proposal to regulate the development of AI, with the goal of protecting the rights and well-being of its citizens. The Artificial Intelligence Act (AIA) is designed to address certain potentially risky, high-stakes use cases of AI, including biometric surveillance, bank lending, test scoring, criminal justice, and behavior manipulation techniques, among others. The goal of the AIA is to regulate the development of these applications of AI in a way that will foster increased trust in its adoption. Like the EU's General Data Protection Regulation (GDPR), the AIA will apply to anyone selling or providing relevant services to EU citizens. GDPR spearheaded data privacy regulations across the United States and around the world.


CTAB-GAN: Effective Table Data Synthesizing

#artificialintelligence

We devise a novel conditional tabular data synthesizer, CTAB-GAN, that addresses the limitations of the prior state of the art: (i) encoding mixed data types of continuous and categorical variables, (ii) efficient modeling of long-tailed continuous variables, and (iii) increased robustness to imbalanced categorical variables along with skewed continuous variables. Furthermore, two key features of CTAB-GAN are the introduction of a classification loss in the conditional GAN and a novel encoding for the conditional vector that efficiently encodes mixed variables and helps to deal with highly skewed distributions of continuous variables.
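The abstract names two mechanisms: an auxiliary classification loss added to the conditional GAN objective, and a conditional vector that encodes mixed (continuous plus categorical) columns. The sketch below is a simplified, generic illustration of those two ideas, not the CTAB-GAN reference implementation; the layer sizes, the toy table, and the way the conditional vector and target column are built are assumptions made for brevity.

```python
# Simplified illustration (not the CTAB-GAN code) of a conditional tabular GAN
# whose generator is trained with an extra classification loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

NOISE_DIM, COND_DIM, CONT_DIM, N_CLASSES = 64, 4, 3, 2
ROW_DIM = CONT_DIM + N_CLASSES        # continuous columns + one-hot target column
BATCH = 256

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(NOISE_DIM + COND_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, ROW_DIM))
    def forward(self, z, cond):
        out = self.net(torch.cat([z, cond], dim=1))
        cont = out[:, :CONT_DIM]                                  # continuous columns
        cat = F.gumbel_softmax(out[:, CONT_DIM:], hard=True)      # one-hot target column
        return torch.cat([cont, cat], dim=1)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(ROW_DIM + COND_DIM, 128), nn.ReLU(),
                                 nn.Linear(128, 1))               # real/fake logit
    def forward(self, row, cond):
        return self.net(torch.cat([row, cond], dim=1))

class Classifier(nn.Module):
    """Auxiliary classifier: predicts the target column from the other columns."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(CONT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_CLASSES))
    def forward(self, cont):
        return self.net(cont)

G, D, C = Generator(), Discriminator(), Classifier()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_c = torch.optim.Adam(C.parameters(), lr=2e-4)

# Toy "real" table: 3 continuous features plus a binary target column, and a
# one-hot conditional vector standing in for a sampled categorical mode.
real_cont = torch.randn(BATCH, CONT_DIM)
real_target = F.one_hot((real_cont[:, 0] > 0).long(), N_CLASSES).float()
real_rows = torch.cat([real_cont, real_target], dim=1)
cond = F.one_hot(torch.randint(0, COND_DIM, (BATCH,)), COND_DIM).float()

for step in range(200):
    # Classifier update on real rows: learn to predict the target column.
    c_loss = F.cross_entropy(C(real_cont), real_target.argmax(dim=1))
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()

    # Discriminator update: real vs. generated rows, both conditioned on cond.
    fake_rows = G(torch.randn(BATCH, NOISE_DIM), cond).detach()
    d_loss = (F.binary_cross_entropy_with_logits(D(real_rows, cond), torch.ones(BATCH, 1))
              + F.binary_cross_entropy_with_logits(D(fake_rows, cond), torch.zeros(BATCH, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: adversarial loss plus classification loss, which pushes
    # generated continuous columns to stay semantically consistent with the
    # generated target column (the classifier should recover that target).
    fake = G(torch.randn(BATCH, NOISE_DIM), cond)
    adv_loss = F.binary_cross_entropy_with_logits(D(fake, cond), torch.ones(BATCH, 1))
    cls_loss = F.cross_entropy(C(fake[:, :CONT_DIM]), fake[:, CONT_DIM:].argmax(dim=1))
    g_loss = adv_loss + cls_loss
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```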


AI-based loan apps are booming in India, but some borrowers miss out

#artificialintelligence

But even he has been surprised by the sheer volume of complaints against digital lenders in recent years. While most of the grievances are about unauthorised lending platforms misusing borrowers' data or harassing them for missed payments, others relate to high interest rates or loan requests that were rejected without explanation, Shah said. "These are not like traditional banks, where you can talk to the manager or file a complaint with the head office. There is no transparency, and no one to ask for remedy," said Shah, founder of JivanamAsteya. "It is hurting young people starting off in their lives -- a loan being rejected can result in a low credit score, which will adversely affect bigger financial events later on," he told the Thomson Reuters Foundation.


Fed up with Big Tech? Find out how to get your privacy back, explore alternatives to Google

USATODAY - Tech Top Stories

Years ago, we searched the web, bought new gadgets, and typed in our email addresses without much thought. As far as accounts went, "Hey if it's free, sign me up," we thought. Fast forward to now, and you can't go online or turn on the news without hearing about the control Big Tech has on our lives – and the growing resentment around it. Probably due to government initiatives, tech companies are making changes to address these concerns. You can now password protect the page that reveals all your Google searches and other activity.


Dark Reading

#artificialintelligence

The Internet has enhanced communications, increased commerce, and brought people together socially. Unfortunately, it has also enabled malicious activity with data breaches, ransomware, destroyed systems, and the Dark Web. Cyberattacks have become so common that only the large ones make the news now. The United States is arguably the most "wired" country in the world, with everything from cars to refrigerators to security cameras connected online, which also makes it the most vulnerable. Because the open Internet is driven by cost and speed and not by security, continual cyberattacks have pushed us into a new kind of Cold War -- with artificial intelligence (AI) serving as the basis of this arms race. From Moonlight Maze in the late 1990s to the recent SolarWinds attack, we have seen malware and ransomware planted in our infrastructure and systems.


Will Potter on Artificial Intelligence Business Directory

#artificialintelligence

Synthetic identity fraud is the fastest-growing financial crime in the U.S. payments system. FiVerity develops and markets AI and machine learning solutions that detect sophisticated forms of cyber fraud, delivering actionable, proactive threat intelligence. The company's solutions meet the unique requirements of financial institutions with consumer offerings, including banks, credit unions, credit card providers, and online lenders. SynthID Detect identifies fraud and cyber threats at your financial institution in real time.