Collaborating Authors


Clearview AI Raises Disquiet at Privacy Regulators

WSJD - Technology

The data protection authority in Hamburg, Germany, for instance, last week issued a preliminary order saying New York-based Clearview must delete biometric data related to Matthias Marx, a 32-year-old doctoral student. The regulator ordered the company to delete biometric hashes, or bits of code, used to identify photos of Mr. Marx's face, and gave it till Feb. 12 to comply. Not all photos, however, are considered sensitive biometric data under the European Union's 2018 General Data Protection Regulation. The action in Germany is only one of many investigations, lawsuits and regulatory reprimands that Clearview is facing in jurisdictions around the world. On Wednesday, Canadian privacy authorities called the company's practices a form of "mass identification and surveillance" that violated the country's privacy laws.

Central position in biometrics and privacy debate "an honor," Clearview AI CEO says


In an hour-long video interview with This Week in Startups, Clearview AI CEO Hoan Ton-That discusses the misconceptions around biometric facial recognition, arguing his technology is "a tool to help get a lead" and is only "used when there is probable cause for a crime." At the time of the interview in May, the New York startup, then in the middle of an international scandal, had signed more than 2,400 service contracts with law enforcement agencies to deploy its facial recognition software. Interviewer Jason Calacanis based his questions on a New York Times investigation into the company's strategy of scraping images from social networks. Despite the controversy, Ton-That claims "it's an honor to be at the center of the debate now and talk about privacy," and calls the paper's reporting "extremely fair." The interview was recorded in May but posted this week, as protests against police violence gripped America and changed the context of discussions about law enforcement tools.

MALOnt: An Ontology for Malware Threat Intelligence Artificial Intelligence

Malware threat intelligence uncovers deep information about malware, threat actors and their tactics, Indicators of Compromise (IoCs), and vulnerabilities across platforms, drawn from scattered threat sources. This collective information can guide decision making in cyber defense applications used by security operations centers (SOCs). In this paper, we introduce an open-source malware ontology, MALOnt, that allows structured extraction of information and knowledge graph generation, especially for threat intelligence. The knowledge graph that uses MALOnt is instantiated from a corpus comprising hundreds of annotated malware threat reports. The knowledge graph enables the analysis, detection, classification, and attribution of cyber threats caused by malware. We also demonstrate the annotation process using MALOnt on exemplar threat intelligence reports. A work in progress, this research is part of a larger effort toward the auto-generation of knowledge graphs (KGs) for gathering malware threat intelligence from heterogeneous online sources.
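The extraction the abstract describes can be pictured as populating a graph of (subject, predicate, object) triples from annotated report text. The following is a minimal sketch of that idea; the class and relation names (`Malware`, `ThreatActor`, `isA`, `hasIoC`, `operates`) and the example facts are illustrative placeholders, not the actual MALOnt vocabulary or corpus.

```python
# Minimal sketch of a knowledge graph instantiated from facts that annotating
# a malware threat report might yield. Names are hypothetical, not MALOnt's.
from collections import defaultdict


class KnowledgeGraph:
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subject, predicate, obj):
        """Assert one (subject, predicate, object) fact."""
        self.triples.add((subject, predicate, obj))
        self.by_subject[subject].add((predicate, obj))

    def query(self, subject):
        """Return all (predicate, object) pairs asserted about a subject."""
        return sorted(self.by_subject[subject])


kg = KnowledgeGraph()
kg.add("Emotet", "isA", "Malware")
kg.add("Emotet", "hasIoC", "hash:5d41402abc4b2a76b9719d911017c592")
kg.add("Emotet", "targetsPlatform", "Windows")
kg.add("TA542", "isA", "ThreatActor")
kg.add("TA542", "operates", "Emotet")

print(kg.query("Emotet"))
```

Once such triples are accumulated across many reports, queries over shared subjects and objects are what enable the attribution and classification tasks the paper mentions.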

CES gadget show: Surveillance is in -- and in a big way

The Japan Times

NEW YORK – From the face scanner that will check in some attendees to the cameras-everywhere array of digital products, the CES gadget show is all-in on surveillance technology -- whether it calls it that or not. Nestled in the "smart home" and "smart city" showrooms at the sprawling Las Vegas consumer tech conference are devices that see, hear and track the people they encounter. Some of them also analyze their looks and behavior. The technology on display includes eyelid-tracking car dashboard cameras to prevent distracted driving and "rapid DNA" kits for identifying a person from a cheek swab sample. All these talking speakers, doorbell cameras and fitness trackers come with the promise of making life easier or more fun, but they're also potentially powerful spying tools.

Businesses Use AI to Thwart Hackers


But rather than impeding the pace of innovation, these concerns are prompting many corporate security chiefs to accelerate the development of advanced capabilities, in a bid to turn the tables on attackers by better detecting the misuse of data and keeping it safe. "Artificial intelligence is a backbone of security initiatives," Camille François, chief innovation officer of social-media analytics firm Graphika Inc., said Tuesday at the WSJ Pro Cybersecurity Executive Forum in New York. Among other applications, AI is being used in cyberattack modeling, where smart tools identify security vulnerabilities in simulated breaches or hacks, Ms. François said. The strategy is to approach your own systems like a hacker, she added, allowing AI-powered apps to find areas that need stronger security features.

Securing New Ground Considers Impact Of Technologies And A Holistic Approach


Securing New Ground, the security industry's annual executive conference this week in New York, offered food for thought about current and future trends in the security marketplace. Highlights from SNG 2019 included keynote remarks from security leaders at SAP, Johnson Controls and the Consumer Technology Association, discussions on how CSOs mitigate security risks, topic-focused thought leadership roundtables and a lively networking reception. Top trends observed at the event include cybersecurity, data privacy, facial recognition and artificial intelligence. A "View from the Top" session covered the need for companies to consider responsible use and ethics around technology; responsibility should extend throughout the organization. A panel of security leaders emphasized the need to understand the diversity of risks that end users face.

IBM brings artificial intelligence to the heart of cybersecurity strategies


IBM has launched IBM Security Connect, a new platform designed to bring vendors, developers, AI, and data together to improve cyber incident response capabilities. On Monday, the New York-based technology company unveiled the open platform, which IBM says "is the first security cloud platform built on open technologies, with AI at its core, to analyze federated security data across previously unconnected tools and environments." An analysis conducted by IBM suggests that cybersecurity teams in the enterprise use, on average, over 80 cybersecurity solutions provided by roughly 40 vendors. This is a potential recipe for chaos and may reduce the overall effectiveness of security and defense. IBM Security Connect makes use of both cloud technology and AI.

Artificial Intelligence: A Cybersecurity Tool for Good, and Sometimes Bad


Artificial intelligence is the new golden ring for cybersecurity developers, thanks to its potential not just to automate functions at scale but also to make contextual decisions based on what it learns over time. This can have big implications for security personnel--all too often, companies simply don't have the resources to search through the haystack of anomalies for the proverbial malicious needle. For instance, if a worker normally based in New York suddenly one morning logs in from Pittsburgh, that's an anomaly -- and the AI can tell that's an anomaly because it has learned to expect that user to be logging in from New York. Similarly, if a log-in in Pittsburgh is followed within a few minutes by another log-in from the same user in, say, California, that's likely a malicious red flag. So, at its simplest level, AI and "machine learning" are oriented around understanding behavioral norms.
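The Pittsburgh-then-California scenario above is often called an "impossible travel" check, and its core logic fits in a few lines: flag a pair of log-ins when the implied travel speed between them exceeds what any flight could manage. The coordinates and the 900 km/h speed threshold below are assumptions for this sketch, not a production rule set.

```python
# Illustrative "impossible travel" check: two log-ins by the same user are
# flagged when covering the distance between them in the elapsed time would
# require an implausible speed. Threshold and coordinates are assumptions.
from math import asin, cos, radians, sin, sqrt


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))


def impossible_travel(loc1, loc2, minutes_apart, max_speed_kmh=900):
    """Flag the log-in pair if it implies travel faster than max_speed_kmh."""
    distance = haversine_km(*loc1, *loc2)
    if minutes_apart == 0:
        return distance > 0  # same instant, different places
    return distance / (minutes_apart / 60) > max_speed_kmh


NEW_YORK = (40.71, -74.01)
PITTSBURGH = (40.44, -80.00)
SAN_FRANCISCO = (37.77, -122.42)

# Pittsburgh log-in followed five minutes later by one from California:
print(impossible_travel(PITTSBURGH, SAN_FRANCISCO, 5))   # True: flagged
# New York to Pittsburgh over six hours is physically plausible:
print(impossible_travel(NEW_YORK, PITTSBURGH, 360))      # False
```

A learned model would replace the fixed threshold with per-user behavioral baselines, but the distance-over-time reasoning is the same.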

Are you average? If not, algorithms might 'screw' you


Are you average in every way, or do you sometimes stand out from the crowd? Your answer might have big implications for how you're treated by the algorithms that governments and corporations are deploying to make important decisions affecting your life. "What algorithms?" you might ask. The ones that decide whether you get hired or fired, whether you're targeted for debt recovery and what news you see, for starters. Automated decisions made using statistical processes "will screw [some] people by default, because that's how statistics works," said Dr Julia Powles, an Australian lawyer currently based at New York University's Information Law Institute.

There's nothing fake about cybersecurity potential of artificial intelligence


The first word in AI may stand for "artificial," but the belief in its potential for cybersecurity in government and business circles is very real. In fact, industry sources say efforts to make use of artificial intelligence could drive a more flexible approach to cyber-related regulation, particularly in the finance sector. Former White House cybersecurity coordinator Rob Joyce, now back at the National Security Agency, called AI a "key element" in cybersecurity strategy in a recent speech. "The point about AI being a key element of the future, I think there is so much that AI can do to clean out anomalies, to move the speed of cyber, in setting up those defenses," Joyce said. Makers of financial-technology products are looking into and promoting the possibilities, a topic discussed extensively at the recent Securities Industry and Financial Markets Association "FinTech" conference in New York City.