information security

A Whole New Way to Hack Self-Driving Cars


Car and Driver reports that security researchers at the University of Washington confused autonomous cars into misidentifying road signs, and they did it with simple stickers they made with a home computer. The researchers put stickers on road signs and managed to convince the cars' image-detection algorithms that they were seeing, say, a speed limit sign instead of a stop sign.
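The general idea behind such attacks can be sketched in a few lines. The toy below is not the researchers' actual technique; it is a hypothetical linear "sign classifier" with made-up weights, showing how a small, targeted shift to the input features (analogous to a sticker) can flip the model's decision. The perturbation step here is an exaggerated, FGSM-style nudge against the sign of each weight.

```python
# Toy illustration of an adversarial perturbation against a linear
# classifier. Weights, features, and epsilon are all hypothetical.

def classify(weights, features):
    """Return 'stop' if the weighted score is positive, else 'speed_limit'."""
    score = sum(w * x for w, x in zip(weights, features))
    return "stop" if score > 0 else "speed_limit"

def perturb(weights, features, epsilon):
    """Shift each feature against the sign of its weight (a crude,
    FGSM-style step) to push the score toward the other class."""
    return [x - epsilon * (1 if w > 0 else -1)
            for w, x in zip(weights, features)]

weights = [0.9, -0.4, 0.6]        # made-up model
clean = [0.5, 0.2, 0.3]           # clean input: classified 'stop'
adversarial = perturb(weights, clean, epsilon=0.5)

print(classify(weights, clean))        # 'stop'
print(classify(weights, adversarial))  # 'speed_limit'
```

Real attacks on vision models work in a far higher-dimensional pixel space and must survive printing, lighting, and viewing angle, but the core mechanism — moving the input a short distance in a direction the model is sensitive to — is the same.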

Roomba maker may share maps of users' homes with Google, Amazon or Apple

The Guardian

The maker of the Roomba robotic vacuum, iRobot, has found itself embroiled in a privacy row after its chief executive suggested it may begin selling floor plans of customers' homes, derived from the movement data of their autonomous servants. Such maps are not necessarily personal data as protected under data protection law, and the company's terms of service appear to give it the right to sell them already. When signing up for the company's Home app, which connects to its smart robots, customers have to agree to a privacy policy stating that iRobot can share personal information with subsidiaries, third-party vendors, and the government, as well as in connection with "any company transaction" such as a merger or external investment.

Artificial intelligence is aiding the fight against cybercrime


Information security professionals have battled for years to gain better insight into threat behaviour and to use the most up-to-date technology to protect against attacks. Graduating from traditional rule-based systems, experts have employed machine-learning techniques, drawing on data insight to identify patterns and apply machine-readable context to events. The automation wave is the progression of this technology into intelligent software that can both identify and remediate incidents, leaving security professionals to tackle more complex and relevant issues. A hybrid approach to security operations that combines automation and humans, or supervised machine learning, is not only critical to alleviating the current skills shortage in the information security industry; it also delivers significantly better results than either a human or a machine working alone.
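The hybrid workflow described above can be sketched as a simple triage pipeline: a rule layer auto-closes known-benign events, a model score drives automatic remediation of high-confidence detections, and everything in between lands in an analyst's queue. The rules, the stand-in `score` function, and the thresholds below are all illustrative, not any vendor's actual design.

```python
# Hedged sketch of hybrid (rules + ML + human) alert triage.
# TRUSTED_IPS and the ml_score values are hypothetical.

TRUSTED_IPS = {"10.0.0.5"}

def score(event):
    # Stand-in for a trained model's probability that the event is malicious.
    return event.get("ml_score", 0.0)

def triage(event, auto_threshold=0.9):
    if event.get("source_ip") in TRUSTED_IPS:   # rule-based layer
        return "auto-close"
    if score(event) >= auto_threshold:          # automated remediation
        return "auto-remediate"
    return "human-review"                       # analyst queue

events = [
    {"source_ip": "10.0.0.5",     "ml_score": 0.20},
    {"source_ip": "203.0.113.9",  "ml_score": 0.97},
    {"source_ip": "198.51.100.7", "ml_score": 0.60},
]
print([triage(e) for e in events])
# ['auto-close', 'auto-remediate', 'human-review']
```

The point of the design is the middle bucket: the machine handles the unambiguous cases at both ends, and scarce human attention is spent only where the model is uncertain.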

AI or not, machine learning in cybersecurity advances


As companies promote AI and advanced machine learning in cybersecurity, CISOs need to ask some tough questions to get past the hype: Are these technologies bolted on to get investments as well as customers, or are they core to an innovative security platform that solves a business problem (too many alerts to efficiently monitor)? Is the company's expertise in machine learning and AI or information security? Advances in machine learning and security can help in areas such as antimalware, dynamic risk analysis and anomaly detection, found Robert Lemos, who reports on machine learning in cybersecurity in this month's cover story. The technology is really good at "crunching through data," Joseph Blankenship, senior analyst for security and risk at Forrester Research, tells Lemos.

Royal Free breached UK data law in 1.6m patient deal with Google's DeepMind

The Guardian

London's Royal Free hospital failed to comply with the Data Protection Act when it handed over personal data of 1.6 million patients to DeepMind, a Google subsidiary, according to the Information Commissioner's Office. The ICO ruled that testing the app with real patient data went beyond Royal Free's authority, particularly given how broad the scope of the data transfer was. The ruling does not directly criticise DeepMind, a London-based AI company purchased by Google in 2013, since the ICO views the Royal Free as the "data controller" responsible for upholding the Data Protection Act throughout the Streams partnership, with DeepMind acting as a data processor on behalf of the trust. Streams has since been rolled out to other British hospitals, and DeepMind has also branched out into other clinical trials, including a project aimed at using machine-learning techniques to improve diagnosis of diabetic retinopathy, and another aimed at using similar techniques to better prepare radiotherapists for treating head and neck cancers.

Tieto and Espoo test artificial intelligence to boost value-based health and social care


Tieto is developing an artificial intelligence platform to provide exponential value for its customers in both the public and private sectors. Through Tieto's artificial intelligence engine, it is possible to develop more personalized citizen services within the social and healthcare area in Espoo, helping the city to provide more individualized services and thereby prevent problems such as social exclusion more cost-effectively. All data processing includes strict information security measures, and personal data, such as names, identity numbers and addresses, are concealed already during data collection.

Artificial Intelligence in Healthcare: Major Opportunities and Challenges


When a patient enters a physician's office, the patient must inform the staff about their medical history and simple data points, such as smoking status and age. Federal mandates have pushed the hospital industry to adopt electronic health record (EHR) systems. With so many patients storing their personal information in electronic systems, data security is of utmost importance. The biggest challenges are user training and physician burnout: training users on new technology is an expensive process.

The 7 hottest jobs in IT


Aymen Sayed, chief product officer for CA Technologies, points out that while AR and VR tech made a splash with a range of consumer products shown at this year's CES, more promising opportunities will occur this year in the enterprise for simulation and training, which should mean more roles for AR and VR developers -- both in development and security. In fact, Gartner predicts that by 2020, augmented reality, virtual reality, and mixed reality immersive solutions will be part of 20 percent of enterprises' digital transformation strategies. Alana Hall, corporate recruiter at Conga, says a number of cloud-related roles are the toughest to fill this year, including "cloud architects and developers, cloud infrastructure devops roles, hybrid cloud architects and developers."

Why Big Data, Machine Learning Are Critical to Security


Big data and machine learning will play increasingly critical roles in improving information security, predicts Will Cappelli, a vice president of research at Gartner. "In terms of market size, Gartner estimates that in 2016 the world spent approximately $800 million on the application of big data and machine learning technologies to security use cases," he says in an interview with Information Security Media Group. A typical use case would be to deploy a big data log management platform and then deploy some kind of machine learning capability on top of that platform to enable the automated discovery of hidden patterns in this data that indicate, for example, unauthorized access, he says. Cappelli is a Gartner Research vice president in the enterprise management area, focusing on the application of big data and machine learning technologies to IT operations as well as application performance monitoring.
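The use case Cappelli describes — a log management platform with a machine-learning layer on top that surfaces hidden patterns such as unauthorized access — can be illustrated with a minimal statistical sketch. The counts and threshold below are made up, and real deployments use far richer features and models than a z-score, but the shape of the pipeline is the same: aggregate events from logs, then flag the statistical outliers.

```python
# Minimal sketch of anomaly detection over log-derived counts.
# The failed-login numbers and the threshold are hypothetical.
from statistics import mean, stdev

def flag_anomalies(counts, threshold):
    """Return the entries whose z-score exceeds the threshold."""
    mu, sigma = mean(counts.values()), stdev(counts.values())
    return {user: n for user, n in counts.items()
            if sigma > 0 and abs(n - mu) / sigma > threshold}

# Failed-login counts per account over one hour (made-up data).
failed_logins = {"alice": 2, "bob": 1, "carol": 3,
                 "svc-backup": 240, "dave": 2}

print(flag_anomalies(failed_logins, threshold=1.5))
# {'svc-backup': 240}
```

Here the compromised-looking service account stands far enough from the baseline that it is the only entry flagged; the value of the machine-learning layer is doing this kind of discovery automatically, across many signals at once, instead of relying on an analyst to write a rule for each pattern in advance.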

Allo, privacy, are you there? Google keeps your messages forever


Google's Allo offers users a messaging app with Google Assistant built in, providing automatically generated responses called Smart Replies and other computer-generated suggestions for your everyday life. Instead of keeping your messages on company servers for a short period of time, the company will keep them indefinitely, or at least until you manually delete them. The change sets Allo apart from messaging apps that build privacy in by default, encrypting users' messages end to end rather than leaving it up to the user to make sure messages don't hang around on company servers.