It's possible that someone may be watching your screen by listening to it. A recent study from cybersecurity researchers at the universities of Michigan, Pennsylvania and Tel Aviv found that LCD screens "leak" faint acoustic signals that can be processed with machine learning to give a hacker insight into what's shown on screen. "Displays are built to show visuals, not emit sound," says Roei Schuster, a PhD candidate at Tel Aviv University and a co-author of the study with Daniel Genkin, Eran Tromer and Mihir Pattani. Yet the team's study shows that's not entirely true. The researchers were able to collect the noise through a built-in or nearby microphone, or even remotely over a voice call such as Google Hangouts.
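The attack pipeline the researchers describe can be sketched at a very high level: record audio near the screen, turn it into a frequency-domain feature vector, and let a classifier match it against known screen contents. The following is a minimal illustrative sketch, not the study's actual method; the synthetic traces, sample rate, and nearest-centroid classifier are all assumptions standing in for real recordings and a trained model.

```python
# Hypothetical sketch of an acoustic side-channel classifier.
# Real attacks use genuine microphone recordings and a trained ML model;
# here we synthesize traces whose dominant tone depends on the "content".
import numpy as np

RATE = 44100  # assumed sample rate (samples per second)

def synth_trace(tone_hz, n=4096, noise=0.3, seed=0):
    """Stand-in for a microphone recording captured near the screen."""
    rng = np.random.default_rng(seed)
    t = np.arange(n) / RATE
    return np.sin(2 * np.pi * tone_hz * t) + noise * rng.standard_normal(n)

def spectrum(trace):
    """Magnitude spectrum: the feature vector a classifier would see."""
    return np.abs(np.fft.rfft(trace * np.hanning(len(trace))))

def nearest_centroid(spec, centroids):
    """Toy classifier: pick the content whose average spectrum is closest."""
    return min(centroids, key=lambda k: np.linalg.norm(spec - centroids[k]))

# "Training": one reference spectrum per hypothetical screen content.
centroids = {
    "webpage_a": spectrum(synth_trace(5000, seed=1)),
    "webpage_b": spectrum(synth_trace(9000, seed=2)),
}

# Classify a fresh, noisy trace of one of the known contents.
unknown = spectrum(synth_trace(9000, seed=7))
print(nearest_centroid(unknown, centroids))  # → webpage_b
```

The point of the sketch is only that frequency-domain features separate cleanly even under noise, which is what makes this kind of leakage machine-learnable.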
Malware is growing more sophisticated as adversaries weaponize cloud services and evade detection with encryption, which they use to conceal command-and-control activity. To reduce adversaries' time to operate, security professionals say they will increasingly adopt, and spend more on, tools that use AI and machine learning, according to the Cisco 2018 Annual Cybersecurity Report (ACR), the company's 11th.
As businesses struggle to combat increasingly sophisticated cybersecurity attacks, made worse both by the vanishing IT perimeters of today's mobile and IoT era and by an acute shortage of skilled security professionals, IT security teams need a new approach and powerful new tools to protect data and other high-value assets. Increasingly, they are looking to artificial intelligence (AI) as a key weapon to win the battle against stealthy threats inside their IT infrastructures, according to a new global research study conducted by the Ponemon Institute on behalf of Aruba, a Hewlett Packard Enterprise company. The Ponemon Institute study, entitled "Closing the IT Security Gap with Automation & AI in the Era of IoT," surveyed 4,000 security and IT professionals across the Americas, Europe and Asia to understand what makes security deficiencies so hard to fix, and what types of technologies and processes are needed to stay a step ahead of bad actors in the new threat landscape. The research revealed that in the quest to protect data and other high-value assets, security systems incorporating machine learning and other AI-based technologies are essential for detecting and stopping attacks that target users and IoT devices.
Relational data representations have become an increasingly important topic due to the recent proliferation of network datasets (e.g., social, biological, information networks) and a corresponding increase in the application of statistical relational learning (SRL) algorithms to these domains. In this article, we examine a range of representation issues for graph-based relational data. Since the choice of relational data representation for the nodes, links, and features can dramatically affect the capabilities of SRL algorithms, we survey approaches and opportunities for relational representation transformation designed to improve the performance of these algorithms. This leads us to introduce an intuitive taxonomy for data representation transformations in relational domains that incorporates link transformation and node transformation as symmetric representation tasks. In particular, the transformation tasks for both nodes and links include (i) predicting their existence, (ii) predicting their label or type, (iii) estimating their weight or importance, and (iv) systematically constructing their relevant features. We motivate our taxonomy through detailed examples and use it to survey and compare competing approaches for each of these tasks. We also discuss general conditions for transforming links, nodes, and features. Finally, we highlight challenges that remain to be addressed.
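One of the taxonomy's link-transformation tasks, predicting link existence, can be illustrated with a classic baseline: score each unlinked node pair by its number of common neighbors. This is a minimal sketch on a hypothetical toy graph, not an implementation from the article; the graph, node names, and heuristic are assumptions chosen for illustration.

```python
# Illustrative sketch: link-existence prediction via a common-neighbor
# heuristic on a small, hypothetical undirected graph.
from itertools import combinations

# Adjacency sets (kept symmetric: u in graph[v] iff v in graph[u]).
graph = {
    "a": {"b", "c", "d", "e"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
    "e": {"a"},
}

def common_neighbor_score(g, u, v):
    """More shared neighbors -> higher chance the link (u, v) exists."""
    return len(g[u] & g[v])

# Rank all node pairs that are not yet linked.
candidates = [
    (u, v) for u, v in combinations(sorted(graph), 2) if v not in graph[u]
]
ranked = sorted(candidates, key=lambda p: -common_neighbor_score(graph, *p))
print(ranked[0])  # → ('b', 'd'): both share neighbors a and c
```

The other three tasks in the taxonomy (label prediction, weight estimation, and feature construction) follow the same pattern: compute a transformation over the existing nodes and links, then feed the result to the downstream SRL algorithm.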