Even worse, how often do we hear about major data breaches caused by unprotected or misconfigured servers? In just one example from last week, records on 57 million American residents were exposed and surfaced by a search engine that scans for connected devices and open servers. The engine found at least three IP addresses hosting identical database clusters misconfigured for public access. The 73GB database held records on almost 57 million US citizens, including first and last name, employer, job title, email, address, state, ZIP code, phone number, and IP address. A second index in the same database held over 25 million business records, with details on companies including employee counts, revenue figures, and carrier routes.
A month ago a group convened in the University Club dining room at Arizona State University to discuss the future of national security research. There were retired Army and Marine generals, agents from the CIA and a bevy of scientists. Two trendlines popped out over the peppered bacon and frittatas: Nation states are vying for technological dominance, and the Holy Grail in that sphere is the successful pairing of humans and artificial intelligence. Creating machines that think and act like us is as much grounded in the humanities as it is in engineering. Talk to engineers about the problem, and they'll discuss things far outside the usual lanes of engineering, things like the nature of self, perception and free will.
I've always been a loner, avoiding crowds as much as possible, but last Friday I found myself in the company of 500 million people. The breach of the personal accounts of Marriott and Starwood customers forced us to join the 34% of U.S. consumers who experienced a compromise of their personal information over the last year. Viewed another way, there were 2,216 data breaches and more than 53,000 cybersecurity incidents reported in 65 countries in the 12 months ending in March 2018. How many data breaches will we see in 2019, and how big will they be? No one has a crystal ball that accurate, and it's difficult to make predictions, especially about the future. Still, I made a brilliant, contrarian, and very accurate prediction last year, stating unequivocally that "there will be more spectacular data breaches" in 2018. Just like last year, this year's 60 predictions reveal the state of mind of key participants in the cybersecurity industry (on the defense team, of course) and cover all that's hot today. Topics include the use and misuse of data; artificial intelligence (AI) and machine learning as a double-edged sword helping both attackers and defenders; whether we are going to finally "get over privacy" or see our data finally treated as a private and protected asset; how the cloud changes everything and how connected and moving devices add numerous security risks; the emerging global cyber war conducted by terrorists, criminals, and countries; and the changing skills and landscape of cybersecurity.
You don't usually find companies asking for regulation of the technology they're developing, but Microsoft is doing just that. The company wants Congress to write laws for its facial recognition technology in 2019. Microsoft is positioning itself as an outspoken elder statesman while still trying to beat its competitors. For Silicon Valley, this has been a troubled year: Facebook just had what might be the biggest wipeout in stock market history.
Symantec is rolling out a new product that it says will help enterprises protect public infrastructure from cyberattacks and attack-induced blackouts. The Industrial Control System Protection (ICSP) Neural is a device that scans USB devices for malware, blocking attacks on IoT and operational technology environments. Today's security threats have expanded in scope and seriousness: there can now be millions -- or even billions -- of dollars at risk when information security isn't handled properly. The cybersecurity firm says the ICSP station functions as a neural network, using artificial intelligence to detect USB-borne malware and sanitize infected devices.
Someone may be watching your screen--by listening to it. A recent study from cybersecurity researchers at the universities of Michigan, Pennsylvania and Tel Aviv found that LCD screens "leak" acoustic frequencies that can be processed with artificial intelligence to give a hacker insight into what's on the screen. "Displays are built to show visuals, not emit sound," says Roei Schuster, a PhD candidate at Tel Aviv University and a co-author of the study with doctoral candidates Daniel Genkin, Eran Tromer and Mihir Pattani. Yet the team's study shows that isn't quite true. The researchers were able to collect the noise through a built-in or nearby microphone, or even remotely over Google Hangouts, for example.
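The paper's actual pipeline trains a neural network on spectrograms of the recorded emissions; the sketch below only illustrates the first step of any such attack -- pulling a faint, content-dependent tone out of a noisy microphone recording with an FFT. The 21 kHz "leak" frequency and all signal parameters here are made up for the demo, not taken from the study.

```python
import numpy as np

def dominant_frequency(signal: np.ndarray, sample_rate: int) -> float:
    """Return the strongest frequency component (in Hz) of a real-valued signal."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

# Synthesize a 1-second "recording": a faint near-ultrasonic tone buried in noise.
# (Hypothetical 21 kHz value; real emissions depend on the display's circuitry.)
rate = 48_000
t = np.arange(rate) / rate
leak = 0.01 * np.sin(2 * np.pi * 21_000 * t)                 # the leaked tone
noise = 0.005 * np.random.default_rng(0).standard_normal(rate)  # microphone noise

print(dominant_frequency(leak + noise, rate))  # -> 21000.0
```

Even though the tone is far too quiet to hear, it stands out clearly in the frequency domain -- which is why commodity microphones, including one on the far end of a video call, are enough to capture the signal.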
The subject of outer space has been making headlines of late thanks to the proposal by US President Donald Trump for a United States Space Force as a new branch of the US military. If the Space Force does materialise, it will become the sixth armed forces branch in the US, joining the Navy, Army, Marine Corps, Air Force and Coast Guard. It also underscores the burgeoning importance of space in Earthly affairs. The Space Force will focus on national security, preserving the satellites and vehicles dedicated to international communications and observation. Talk of the Space Force has been exciting news both to fans of space-based science fiction such as Star Wars and to military buffs, who imagine high-tech weaponry and elite soldiers doing battle in the wide and wondrous expanse of outer space.
Artificial intelligence (AI) is at the frontier of a new techno-tsunami that is transforming the way we live and work. "Historically, an AV researcher might see 10,000 viruses in a career. Today there are over 700,000 per day," said Ryan Permeh, chief scientist at Cylance. Could AI be the answer to the big data problem, bridging the widening workforce gap in the cybersecurity industry? Intelligent machines now have the power to make observations, understand requests, reason, draw data correlations, and derive conclusions.
"Machine learning is eating the world," writes Clarence Chio in Machine Learning and Security, co-authored with David Freeman, who heads a team of ML engineers charged with detecting and preventing fraud and abuse across LinkedIn. Chio goes on to write that, in fact, "cybersecurity is also eating the world," and he has a point. The UK's National Cyber Security Centre (NCSC) claimed, on its second anniversary in October 2018, that it had stopped more than 10 attacks per week, primarily from hostile nation states. Perhaps unsurprisingly, the rise in threats has led to a boom in sales of cybersecurity software: it will be a $248bn (£194bn) industry by 2023, according to Markets and Markets research. Within this, the future for machine learning is bright.
In the past two years, we've learned that machine learning algorithms can manipulate public opinion, cause fatal car crashes, create fake porn, and manifest extremely sexist and racist behavior. And now, the cybersecurity threats of deep learning and neural networks are emerging. We're just beginning to catch glimpses of a future in which cybercriminals trick neural networks into making fatal mistakes and use deep learning to hide their malware and find their targets among millions of users. Part of the challenge of securing artificial intelligence applications lies in the fact that it's hard to explain how they work; even the people who create them are often hard-pressed to make sense of their inner workings. But unless we prepare ourselves for what is to come, we'll learn to appreciate and react to these threats the hard way.
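To make "tricking a neural network" concrete, the sketch below applies the core idea of the fast gradient sign method (FGSM) -- nudging an input a small step in the gradient-sign direction that most hurts the model -- to a toy linear classifier. The weights and the benign/malicious labels are invented for illustration; real attacks do the same thing to deep networks with millions of parameters, using perturbations too small for a human to notice.

```python
import numpy as np

# Toy linear classifier: score = w.x + b; positive score -> "benign".
# (Hypothetical weights; stand-ins for a trained model's parameters.)
w = np.array([1.0, -2.0])
b = 0.5

def predict(x: np.ndarray) -> str:
    return "benign" if w @ x + b > 0 else "malicious"

def fgsm_perturb(x: np.ndarray, eps: float) -> np.ndarray:
    """Move x by eps in the direction that most decreases the score.
    For a linear model, the gradient of the score w.r.t. x is just w,
    so the FGSM step is -eps * sign(w)."""
    return x - eps * np.sign(w)

x = np.array([1.0, 0.2])           # score = 1.0 - 0.4 + 0.5 = 1.1 -> "benign"
x_adv = fgsm_perturb(x, eps=0.6)   # [0.4, 0.8]: score = 0.4 - 1.6 + 0.5 = -0.7
print(predict(x), predict(x_adv))  # -> benign malicious
```

The perturbation is bounded (each coordinate moves by at most eps), yet it flips the model's decision -- the same mechanism that lets attackers craft inputs a classifier confidently misreads.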