
An innovation war: Cybersecurity vs. cybercrime

#artificialintelligence

Cybercrime tools that incorporate AI are outstripping their cybersecurity counterparts: malware today can pinpoint its targets among millions, generate convincing spam, and infect computer networks without being detected. All this raises a tough question: can cybersecurity innovation keep pace with cybercrime? It can, if companies apply the same originality and invention that sustain the war, turning not just to technology but also to communication with government agencies and to new ways of thinking about cyber defense.



Cybercrime is Affecting How We Should Manage Projects

#artificialintelligence

If you aren't handling significantly sensitive data at the moment, I recommend growing your own security talent from within, drawing on the skilled people you already have who know your business processes and client needs. The end result can be a two- or three-member internal cybersecurity team or department. Whatever you do, complete inaction isn't the answer. While you cannot know what cybersecurity threats lie ahead, you can and should be proactive. Are you currently taking specific measures to prevent data breaches on the projects you manage and with the customer and internal data you handle?



MIT releases artificial intelligence system to prevent cybercrime

#artificialintelligence

The team from the university's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the machine-learning startup PatternEx developed a new platform that can identify cyberattacks 85% of the time and reduce the number of false positives by a factor of five. AI2 combs through data and flags suspicious activity using unsupervised machine learning. From there, human reviewers check for signs of a security breach, a combination that can predict attacks with precision and eliminates the need to chase bogus intelligence leads. AI2 uses three machine-learning algorithms to detect suspicious events, but like other AI systems it needs human feedback to verify its findings, so it is constantly improved through what the team calls a 'continuous active learning system'. For computer science professor Nitesh Chawla of the University of Notre Dame, the research is a potential 'line of defense' against fraud, account takeover, service abuse, and other attacks faced by consumer-facing systems today.
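The loop described above, where an unsupervised scorer surfaces the most anomalous events and a human analyst labels them, can be sketched in a few lines. This is a hypothetical illustration, not the actual AI2 code: the z-score anomaly metric, the `top_k` budget, the `threshold` cutoff, and the `analyst` callback are all stand-in assumptions for whatever detectors and review workflow the real system uses.

```python
# Hypothetical sketch of a continuous active-learning round (NOT the AI2
# implementation): an unsupervised scorer ranks events by how anomalous
# they are, and the top-ranked ones are sent to a human analyst to label.
import statistics

def anomaly_scores(events):
    """Score each event by its distance from the mean, in standard deviations."""
    mean = statistics.mean(events)
    stdev = statistics.stdev(events) or 1.0  # guard against zero spread
    return [abs(e - mean) / stdev for e in events]

def active_learning_round(events, analyst, top_k=3, threshold=2.0):
    """One round: show the analyst up to top_k outliers above the threshold,
    and return their labels keyed by event index."""
    scores = anomaly_scores(events)
    ranked = sorted(range(len(events)), key=lambda i: scores[i], reverse=True)
    labels = {}
    for i in ranked[:top_k]:
        if scores[i] >= threshold:
            labels[i] = analyst(events[i])  # True = analyst confirms an attack
    return labels

# Usage: a stand-in "analyst" that confirms any event value above 100.
events = [10, 12, 11, 9, 250, 13, 10, 11, 180, 12]
labels = active_learning_round(events, analyst=lambda e: e > 100)
```

In a real deployment the analyst's labels would then retrain the supervised detectors, which is what makes the loop "continuous": each round of human feedback sharpens the next round's ranking.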