Security & Privacy


Fake fingerprints can imitate real ones in biometric systems – research

The Guardian

Researchers have used a neural network to generate artificial fingerprints that work as a "master key" for biometric identification systems, proving that fake fingerprints can fool such systems. According to a paper presented at a security conference in Los Angeles, the artificially generated fingerprints, dubbed "DeepMasterPrints" by the researchers from New York University, were able to imitate more than one in five fingerprints in a biometric system that should have an error rate of only one in a thousand. DeepMasterPrints work by exploiting two properties of fingerprint-based authentication systems. The first is that, for ergonomic reasons, most fingerprint readers do not read the entire finger at once, instead imaging whichever part of the finger touches the scanner. Crucially, such systems do not blend the partial images in order to compare the full finger against a full record; they simply compare the partial scan against the stored partial records.
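
A rough way to see why partial matching inflates the attack's success rate: if each comparison has a false-match probability p, and the system accepts a probe that matches any of n partial templates, a single adversarial print's odds of slipping through grow quickly with n. A minimal sketch, using invented figures rather than the paper's actual parameters:

    # Illustrative only: why matching against many partial templates
    # inflates an attacker's odds. The numbers below are assumptions,
    # not values reported in the DeepMasterPrints paper.

    def attack_success_probability(p_false_match: float, n_partials: int) -> float:
        """Chance that one adversarial print matches at least one of
        n independently compared partial templates."""
        return 1.0 - (1.0 - p_false_match) ** n_partials

    # A sensor rated at a 0.1% false-match rate per comparison...
    p = 0.001

    # ...but a device may enroll several partial views per finger, and an
    # optimized "master print" can push its effective per-comparison match
    # rate well above the nominal rating (5% is an assumed figure here).
    for effective_p, n in [(p, 1), (p, 12), (0.05, 12)]:
        print(f"p={effective_p}, n={n}: "
              f"{attack_success_probability(effective_p, n):.1%}")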


Mozilla's gift guide ranks gadgets by how secure they are

Engadget

You can always expect a raft of gift and shopping guides to pop up in the weeks, even months, leading up to Black Friday and the Christmas season. Mozilla has released its own take, but instead of a list of products to buy, the organization has compiled the most popular gadget gifts and identified which of them are secure and trustworthy. The guide, called Privacy Not Included, tells you whether a particular device can spy on you using its camera, microphone, or location services. It also includes information about each device's security features, and those that meet Mozilla's minimum standards are recognized with a badge on their page. Mozilla awarded the badge to 33 of the 70 products reviewed, including the Nintendo Switch, Google Home, Amazon Echo speakers, the Apple TV and iPad, the Sony PS4, and the Microsoft Xbox One.


4 best practices to combat new IoT security threats at the firmware level

#artificialintelligence

Telepresence robots enable physicians to administer care to patients in remote and rural areas, extending the reach of healthcare to those who might otherwise go without it. The use of telepresence in healthcare isn't new; it has been in practice for more than ten years and is an accepted part of medical practice in many care networks. What has changed is the emergence of a new set of security vulnerabilities that target telepresence robots at the firmware level, where standard IT security practices often don't extend. "Robotic telepresence is a next-generation technology that allows a person in one location to replicate himself in another," wrote Dan Regalado, security researcher at IoT security provider Zingbox, in a 2018 research report. "The remote person can see you, hear you, interact with you, and move all around your location. But what if the person behind the robot is not who you think he is? What if the robot gets compromised, and now the attacker is watching you and your surroundings?"
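
The article doesn't enumerate its four practices in this excerpt, but a common firmware-level defense is refusing any update whose vendor signature doesn't check out. A loose sketch of that idea, not Zingbox's recommendation, with key distribution and secure storage left out of scope:

    # Hedged sketch: verify an Ed25519 vendor signature over a firmware
    # image before flashing. Uses the third-party "cryptography" package.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
    from cryptography.exceptions import InvalidSignature

    def verify_firmware(image: bytes, signature: bytes, vendor_pubkey: bytes) -> bool:
        """Return True only if the image was signed by the vendor's key."""
        try:
            Ed25519PublicKey.from_public_bytes(vendor_pubkey).verify(signature, image)
            return True
        except InvalidSignature:
            return False

    # A device would flash the image only after verify_firmware(...) returns True.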


State of AI in the Enterprise, 2nd Edition

#artificialintelligence

For the second straight year, Deloitte surveyed US executives knowledgeable about cognitive technologies and artificial intelligence, representing companies that are testing and implementing them today. We found that these early adopters remain bullish on cognitive technologies' value. As in last year's survey, the level of support for AI is truly extraordinary. These findings illustrate that cognitive technologies hold enticing promise, some of which is being fulfilled today. However, AI technologies may deliver their best returns when companies balance excitement over their potential with the ability to execute. A year later, the thrill isn't gone. In Deloitte's 2017 cognitive survey, we were struck by early adopters' enthusiasm for cognitive technologies. That excitement owed much to the returns they said those technologies were generating: 83 percent stated they were seeing either "moderate" or "substantial" benefits. Respondents also said they expected cognitive technologies to change both their companies and their industries rapidly. In 2018, respondents remain enthusiastic about the value cognitive technologies bring. Their companies are investing in foundational cognitive capabilities and using them with more skill: 37 percent of respondents say their companies have invested US$5 million or more in cognitive technologies. Companies also have more ways to acquire cognitive capabilities, and they are taking advantage of them.


Why organizations need to know full benefits of artificial intelligence

#artificialintelligence

Artificial intelligence is dominating both headlines and the agendas of business leaders. Our 2018 Views from the C-Suite survey of global executives finds widespread agreement that digitization and new technologies such as AI present tremendous opportunities. Fully 71 percent of executives expect AI to have "transformative effects for economic growth and competitiveness" over the next 12 months. However, executives may need to temper their expectations about AI's short-term implications. Much of their enthusiasm is justified.


Alibaba's Tmall and Ford have a vehicle 'vending machine'

ZDNet

E-commerce giant Alibaba wants to make purchasing a new vehicle as easy as buying a can of Coke, launching an "auto vending machine" to target the largest new-car market in the world. The Super Test-Drive Center in Guangzhou was launched earlier this year by Alibaba's Tmall and the Ford Motor Company with the goal of "dramatically" improving the car-shopping experience for Chinese consumers. Discussing the initiative with ZDNet at Alibaba's 11.11 Global Shopping Festival in Shanghai on Sunday, company representatives said the initiative isn't limited to Ford vehicles, and that the likes of Volvo and BMW will soon be on board. To test-drive a car, Alibaba app users need a score above 700 on Alibaba's credit-scoring system, Zhima Credit, and must be accredited Alibaba Super Members. Customers browse and select the models they want to test-drive via the app catalogue, and after their eligibility is confirmed, each customer is required to take a photo using the app for biometric authentication.
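
Pieced together from the article, the eligibility gate looks roughly like this. Every name below is a hypothetical stand-in; none of it corresponds to a real Alibaba API:

    # Hypothetical sketch of the test-drive eligibility flow described above.
    from dataclasses import dataclass

    @dataclass
    class Applicant:
        zhima_credit: int        # Zhima Credit score
        is_super_member: bool    # accredited Alibaba Super Member
        selfie_matches_id: bool  # result of the in-app biometric photo check

    def eligible_for_test_drive(a: Applicant) -> bool:
        # The article describes three gates: a credit score over 700,
        # Super Member status, and a biometric photo check.
        return a.zhima_credit > 700 and a.is_super_member and a.selfie_matches_id

    print(eligible_for_test_drive(Applicant(720, True, True)))   # True
    print(eligible_for_test_drive(Applicant(650, True, True)))   # False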


Researchers demo how machine learning can be used to track Gh0st RAT variants

SC Media

Trend Micro researchers are proposing machine learning as a new way to combat threat actors who use polymorphism, encryption, obfuscation, and other tactics to disguise their attacks. The researchers tested the theory by clustering network flows from Gh0st RAT variants in an effort to better spot network anomalies and intrusions, and found that multiple versions of Gh0st RAT were clustered together due to the similarities in their payloads, according to a Nov. 13 blog post. While monitoring the malware, the researchers also saw how machine learning's ability to cluster data could be used to detect future Gh0st RAT variants, provide insights into different network patterns from malicious traffic, and even reveal similar characteristics between different malware families within the same classification. "Using machine learning for analysis vastly improves the speed at which data is organized and conclusions are obtained," the researchers said in their research paper, which further details their methods. "In addition, the results show how machine learning can be used to efficiently identify a widely used vulnerability as it is spreading, or to recognize a certain vulnerability used in a novel way as part of another malware campaign."
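
The post doesn't publish the exact pipeline, but the core idea of clustering network flows can be sketched with off-the-shelf tools. Here, DBSCAN over simple per-flow features; the feature choice, toy data, and parameters are all assumptions, not Trend Micro's method:

    # Sketch: clustering network flows so similar payload behavior groups together.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import DBSCAN

    # Toy per-flow feature vectors: [bytes sent, bytes received,
    # packet count, mean inter-packet gap in ms].
    flows = np.array([
        [1200,    800,  14,  35.0],   # variant A beacon
        [1180,    790,  13,  36.5],   # variant A beacon, slight drift
        [1210,    810,  15,  34.2],
        [9800, 120000, 240,   2.1],   # bulk exfiltration
        [9600, 118000, 235,   2.3],
        [ 150,     90,   3, 900.0],   # unrelated chatter (outlier)
    ])

    labels = DBSCAN(eps=0.9, min_samples=2).fit_predict(
        StandardScaler().fit_transform(flows))
    print(labels)  # flows with similar behavior land in the same cluster;
                   # the lone outlier is labeled -1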


New cybersecurity threats posed by artificial intelligence

Packt Hub

In 2017, the cybersecurity firm Darktrace reported a novel attack that used machine learning to observe and learn normal user behavior patterns inside a network. The malicious software then mimicked normal behavior, blending into the background and becoming difficult for security tools to spot. Many organizations are exploring the use of AI and machine learning to secure their systems against malware and cyber attacks. However, given their capacity for self-learning, these AI systems have now reached a level where they can be trained to become a threat to systems, that is, to go on the offensive. This brings us to a point where we should be aware of the different threats AI poses to cybersecurity and how to guard against them.
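
To make the Darktrace example concrete: a defender that flags traffic outside a learned "normal" band will miss malware that samples its own activity from that same band. A toy illustration using a z-score detector over request rates, purely for intuition:

    # Toy illustration of why behavior-mimicking malware is hard to flag.
    # A z-score detector learns the normal request rate; the "malware"
    # deliberately draws its activity from the same distribution.
    import random

    random.seed(7)
    normal_rates = [random.gauss(mu=100, sigma=10) for _ in range(500)]
    mean = sum(normal_rates) / len(normal_rates)
    std = (sum((x - mean) ** 2 for x in normal_rates) / len(normal_rates)) ** 0.5

    def is_anomalous(rate: float, threshold: float = 3.0) -> bool:
        return abs(rate - mean) / std > threshold

    naive_malware = 450.0                      # a blunt flood: easily flagged
    mimicking_malware = random.gauss(100, 10)  # learned to look like the baseline

    print(is_anomalous(naive_malware))      # True
    print(is_anomalous(mimicking_malware))  # almost always False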


Transforming IoT Architecture With AI

#artificialintelligence

Many observers argue that the Internet of Things (IoT) and artificial intelligence (AI) will transform business and society more deeply than the digital and industrial revolutions combined, and we are now starting to see how that shift will take shape. One critical factor is where the intelligence resides and how that choice will shape IoT architecture. Although many organizations believe AI's rightful place is in the cloud, since that is where they are moving their computing power and data, practical IoT requires interoperable connections between the sensors at the periphery, the gateway, and the cloud, with data flowing in both directions. This creates a tension: the majority of the machine learning and AI applications poised to change industries and revolutionize our world call for real-time responsiveness, which round trips to the cloud cannot always deliver.
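
One way to picture the trade-off: route inference to the edge when the response deadline is tight, and fall back to the cloud for heavier, latency-tolerant work. A hedged sketch with invented latency figures:

    # Illustrative edge-versus-cloud routing for inference requests.
    # Latency figures are invented for the sake of the example.
    EDGE_LATENCY_MS = 15     # small model running on a gateway
    CLOUD_LATENCY_MS = 250   # bigger model, but a network round trip away

    def route_inference(deadline_ms: float) -> str:
        """Pick where to run a model given the caller's response deadline."""
        if deadline_ms < CLOUD_LATENCY_MS:
            return "edge"    # real-time control loops can't wait for the cloud
        return "cloud"       # batch analytics can tolerate the round trip

    print(route_inference(50))    # "edge": e.g., an industrial safety stop
    print(route_inference(5000))  # "cloud": e.g., daily trend analysis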


EU's Right to Explanation: A Harmful Restriction on Artificial Intelligence

#artificialintelligence

Last September, a U.K. House of Commons committee concluded that it is too soon to regulate artificial intelligence (AI). Its recommendation comes too late: The EU General Data Protection Regulation (GDPR), which comes into force next year, includes a right to obtain an explanation of decisions made by algorithms and a right to opt-out of some algorithmic decisions altogether. These regulations do little to help consumers, but they will slow down the development and use of AI in Europe by holding developers to a standard that is often unnecessary and infeasible. Although the GDPR is designed to address the risk of companies making unfair decisions about individuals using algorithms, its rules will provide little benefit because other laws already protect their interests in this regard. For example, when it comes to a decision to fire a worker, laws already exist to require an explanation, even if AI is not used.