What artificial intelligence will look like in 2030

#artificialintelligence

Over the next 15 years, AI technologies will continue to make inroads into nearly every area of our lives, from education to entertainment, health care to security. "Now is the time to consider the design, ethical, and policy challenges that AI technologies raise," said Barbara Grosz. The report investigates eight areas of human activity in which AI technologies are already affecting urban life and will be even more pervasive by 2030: transportation; home/service robots; health care; education; entertainment; low-resource communities; public safety and security; and employment and the workplace. Some of the biggest challenges over the next 15 years will be building safe and reliable hardware for autonomous cars and health care robots; gaining public trust in AI systems, especially in low-resource communities; and overcoming fears that the technology will marginalize humans in the workplace.


Exploiting machine learning in cybersecurity

#artificialintelligence

MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has led one of the most notable efforts in this regard with AI2, an adaptive cybersecurity platform that uses machine learning, refined by feedback from expert analysts, to improve over time. The system uses near-real-time analytics to identify known security threats, stored-data analytics to compare samples against historical data, and big-data analytics to spot evolving threats in anonymized datasets gathered from a vast number of clients. IBM, for its part, is combining this kind of capability with the data already gathered by its threat intelligence platform, X-Force Exchange: the company wants to address the industry's talent shortage by raising Watson to the level of an expert assistant and helping reduce the rate of false positives. Such techniques give a cybersecurity firm the ability to monitor billions of results daily, alert on the publication of potentially brand-damaging information, and proactively detect and prevent attacks and data loss before they happen.
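The analyst-in-the-loop cycle AI2 is built around can be sketched in a few lines. Everything below is an illustrative assumption, not the actual AI2 implementation: a weighted-sum scorer stands in for the unsupervised detector, and a simple weight update stands in for the supervised retraining step driven by analyst labels.

```python
# Minimal sketch of an AI2-style analyst-in-the-loop detection cycle.
# Feature names, weights, and the learning rate are illustrative assumptions.

def score(event, weights):
    """Score an event by a weighted sum of its features
    (a stand-in for the unsupervised outlier detector)."""
    return sum(weights.get(f, 0.0) * v for f, v in event.items())

def top_k(events, weights, k):
    """Surface the k highest-scoring events for analyst review."""
    return sorted(events, key=lambda e: score(e, weights), reverse=True)[:k]

def update(weights, event, is_attack, lr=0.5):
    """Nudge feature weights toward the analyst's label (the supervised
    feedback step): up for confirmed attacks, down for false positives."""
    sign = 1.0 if is_attack else -1.0
    for f, v in event.items():
        weights[f] = weights.get(f, 0.0) + sign * lr * v
    return weights

# One iteration of the loop: flag, label, retrain.
events = [
    {"failed_logins": 1.0},
    {"failed_logins": 9.0, "new_ip": 1.0},   # suspicious activity
    {"bytes_out": 2.0},
]
weights = {"failed_logins": 1.0, "new_ip": 1.0, "bytes_out": 1.0}
flagged = top_k(events, weights, k=1)                   # model flags events
weights = update(weights, flagged[0], is_attack=True)   # analyst confirms
```

Each pass through the loop makes the model more sensitive to the feature patterns analysts confirm as attacks, which is the sense in which the platform "adapts and improves over time."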


Machine learning could be the key to cybersecurity, says Cylance CEO

#artificialintelligence

Machine learning and AI could be the key to protecting enterprise IT from advancing cybersecurity threats, Cylance CEO Stuart McClure said on Tuesday. McClure's company, which bills itself as "advanced threat protection for the endpoint," uses machine learning to analyze massive amounts of data across an organization and classify that data automatically. Because it offers breach protection, Cylance is often confused with legacy anti-virus software, McClure said. The US Office of Personnel Management (OPM) eventually brought Cylance in to help during the early days of what would later be determined to be a massive breach.
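The core idea of classifying data automatically from learned examples, rather than from hand-written signatures, can be shown with a toy model. The features and the nearest-centroid classifier below are illustrative assumptions, not Cylance's actual approach, whose production models are vastly larger.

```python
# Toy sketch: classify artifacts as benign or malicious from numeric
# features using a nearest-centroid model learned from labeled samples.
# Features and model are illustrative assumptions only.

def centroid(samples):
    """Mean feature vector of a list of equal-length samples."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def classify(x, centroids):
    """Assign x the label of its nearest centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical features, e.g. (byte entropy, imported-API count), scaled 0..1.
benign    = [(0.2, 0.1), (0.3, 0.2)]
malicious = [(0.9, 0.8), (0.8, 0.9)]
centroids = {"benign": centroid(benign), "malicious": centroid(malicious)}

print(classify((0.85, 0.7), centroids))   # -> malicious
```

The point of the contrast with legacy anti-virus is that nothing here matches a known signature: a previously unseen sample is judged purely by where its features fall relative to what the model has learned.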


Business Cases Drive Enhancements to Video Analytics

#artificialintelligence

The video analytics industry is typically split into two distinct camps: (1) systems built around user-specified rules or models and (2) autonomous systems built around machine learning. Supervised learning systems require heavy training and feedback to achieve the desired output, whereas unsupervised learning systems train themselves from the input data and require minimal human input. The video analytics solutions on the market a decade ago seem rudimentary compared with today's offerings, partly because the technology has caught up with its early promises and partly because the industry has reset expectations after an initial splash in which analytics were hyped as a panacea and the future of security. However, some of the more extreme claims, such as the technology's ability to replace trained human operators, eliminate the need for well-designed camera placement, completely eliminate false positives, and determine a person's intent ahead of an action, have proven to be more hype than reality for many end users.
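The supervised/unsupervised split can be made concrete on a toy signal, such as per-frame motion scores from a camera feed. The two functions below are illustrative assumptions, not any vendor's algorithm: one learns an alert threshold from analyst-labeled frames, the other derives a threshold from the data alone with no labels.

```python
# Toy contrast between the two camps on per-frame motion scores.
# Thresholding schemes and constants are illustrative assumptions.

def supervised_threshold(scores, labels):
    """Learn an alert threshold from labeled frames: midway between
    the mean normal score and the mean event score."""
    normal = [s for s, l in zip(scores, labels) if l == 0]
    event  = [s for s, l in zip(scores, labels) if l == 1]
    return (sum(normal) / len(normal) + sum(event) / len(event)) / 2

def unsupervised_threshold(scores, k=1.5):
    """Derive a threshold from the data alone (mean + k standard
    deviations), with no human labels required."""
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return mean + k * var ** 0.5

scores = [1.0, 1.2, 0.9, 1.1, 6.0]   # last frame: unusual motion
labels = [0, 0, 0, 0, 1]             # analyst ground truth per frame

t_sup = supervised_threshold(scores, labels)   # needs the labels
t_uns = unsupervised_threshold(scores)         # needs only the data
# Both approaches end up flagging the anomalous final frame.
```

The trade-off the paragraph describes falls out directly: the supervised threshold is only as good as the training and feedback it receives, while the unsupervised one needs no labels but is at the mercy of how representative the input data is.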