Researchers at the University of Notre Dame are using artificial intelligence to develop an early warning system that will identify manipulated images, deepfake videos and disinformation online. The project is an effort to combat the rise of coordinated social media campaigns to incite violence, sow discord and threaten the integrity of democratic elections. The scalable, automated system uses content-based image retrieval and applies computer vision-based techniques to root out political memes from multiple social networks. "Memes are easy to create and even easier to share," said Tim Weninger, associate professor in the Department of Computer Science and Engineering at Notre Dame. "When it comes to political memes, these can be used to help get out the vote, but they can also be used to spread inaccurate information and cause harm."
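The article does not describe Notre Dame's retrieval pipeline, but content-based image retrieval for near-duplicate memes is often illustrated with perceptual hashing: reduce each image to a short binary fingerprint and compare fingerprints by Hamming distance, so that lightly edited reposts land close to the original. The sketch below is a minimal, hypothetical average-hash in NumPy, with synthetic arrays standing in for real images; it is an illustration of the general technique, not the researchers' system.

```python
import numpy as np

def average_hash(img: np.ndarray, hash_size: int = 8) -> int:
    """Downsample a grayscale image to hash_size x hash_size block means,
    then threshold each block at the overall mean -> 64-bit fingerprint."""
    h, w = img.shape
    h2, w2 = h - h % hash_size, w - w % hash_size   # crop to a multiple of the grid
    small = (img[:h2, :w2]
             .reshape(hash_size, h2 // hash_size, hash_size, w2 // hash_size)
             .mean(axis=(1, 3)))                     # mean of each block
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

rng = np.random.default_rng(1)
meme = rng.random((64, 64))                                        # stand-in "image"
reposted = np.clip(meme + rng.normal(0, 0.02, meme.shape), 0, 1)   # lightly altered copy
unrelated = rng.random((64, 64))                                   # different image

d_same = hamming(average_hash(meme), average_hash(reposted))   # typically small
d_diff = hamming(average_hash(meme), average_hash(unrelated))  # typically near 32 of 64
```

Because the hash depends only on coarse block averages, small recompressions or overlaid noise flip few bits, while unrelated images disagree on roughly half of them; retrieval then amounts to a nearest-neighbor search over fingerprints.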
Video analytics represents something of a holy grail to those in the security industry. Computers have long been able to scan text and even audio for keywords or phrases, but analyzing video -- especially in real time -- is considerably more challenging. In recent years, however, major improvements to artificial intelligence (AI), machine learning and deep learning capabilities have given rise to impressive new tools capable of analyzing video with minimal input from security personnel. As companies look to invest in these new technologies, it's important to establish a baseline understanding of what terms like artificial intelligence, machine learning and deep learning actually mean -- and what these technologies are capable of. Education will be increasingly critical as we move away from relying on human-based security and lean more on technology to identify and alert us to anomalous or troubling behavior.
Learning to detect content-independent transformations from data is one of the central problems in biological and artificial intelligence. An example of such a problem is the unsupervised learning of a visual motion detector from pairs of consecutive video frames. Rao and Ruderman formulated this problem in terms of learning infinitesimal transformation operators (Lie group generators) by minimizing image reconstruction error. Unfortunately, it is difficult to map their model onto a biologically plausible neural network (NN) with local learning rules. Here we propose a biologically plausible model of motion detection.
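The reconstruction-error formulation can be sketched numerically: learn a matrix G such that applying (I + G) to one frame reconstructs the next, by gradient descent on the squared reconstruction error. The toy example below is a minimal sketch of that idea, not the paper's network (and not biologically plausible); it recovers a one-sample circular shift on synthetic 1-D "frames".

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20        # length of each 1-D "frame"
lr = 0.05     # learning rate
steps = 2000

def make_pair():
    """A random frame and its transformed successor (one-sample circular shift)."""
    x1 = rng.standard_normal(n)
    x2 = np.roll(x1, 1)
    return x1, x2

# Learn G so that (I + G) x1 ~ x2, by SGD on 0.5 * ||(I + G) x1 - x2||^2.
I = np.eye(n)
G = np.zeros((n, n))
for _ in range(steps):
    x1, x2 = make_pair()
    err = (I + G) @ x1 - x2
    G -= lr * np.outer(err, x1)   # exact gradient of the squared error w.r.t. G

# After training, (I + G) approximates the shift operator, so the
# reconstruction residual on a fresh pair shrinks toward zero.
x1, x2 = make_pair()
residual = float(np.abs((I + G) @ x1 - x2).max())
```

Here the transformation is a finite one-sample shift, so (I + G) converges to the corresponding permutation matrix; Rao and Ruderman's infinitesimal operators arise in the limit of small transformations, where G plays the role of a Lie group generator.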
The concept of "smart cities" is no longer confined to the realm of futuristic science fiction -- smart cities are quickly becoming part of our everyday reality. Technologies like self-driving buses that communicate with traffic lights and AI-monitored CCTV cameras are being implemented in cities from Singapore to Las Vegas, and the technology behind these smart-city initiatives promises innovative solutions for both municipalities and their citizens, offering safer and more efficient living for an ever-growing population. The smart-city promise is often delivered without the fine print, though: namely, that a single attack waged against just one component of a connected infrastructure could disable an entire smart city in a matter of minutes. The attack could come from a single line of code. This looming threat is turning the promise of revolutionized living standards into a potential menace to public safety.
Suppose you would like to know mortality rates for women during childbirth, by country, around the world. One option is the WomanStats Project, the website of an academic research effort investigating the links between the security and activities of nation-states and the security of the women who live in them. The project, founded in 2001, meets a need by patching together data from around the world. Many countries are indifferent to collecting statistics about women's lives. But even where countries try harder to gather data, there are clear challenges to arriving at useful numbers -- whether the issue is women's physical security, property rights, or government participation, among many others.
IRONSCALES, the pioneer of self-learning email security, today announced that it has won Cyber Defense Magazine's Infosec Award in the category of Most Innovative Artificial Intelligence and Machine Learning application. In addition, IRONSCALES also revealed today that it has won two 'Gold' awards from the Info Security Products Guide Global Excellence Awards in the categories of Artificial Intelligence in Security and Incident Analysis & Response. These awards continue momentum from 2019, in which IRONSCALES won a total of six awards, including the distinction as the Best Anti-Phishing Security Solution and Innovation in Email Security. "IRONSCALES' philosophy has always been that in order to make a dent in what has become the global phishing epidemic, real-time human intelligence combined with technology that leverages artificial intelligence and machine learning is required to protect against the rapid scale of new phishing attacks," said Eyal Benishti, IRONSCALES founder and CEO. "Our team has worked tirelessly to build an email security platform that is both seamless to use and incredibly powerful and effective. I thank the judges for recognizing our intuition and technological achievements, our thousands of customers for believing in our product and, of course, our dedicated team for pushing the limits to build the anti-phishing solution of tomorrow, today."
Clearview AI, the controversial facial recognition company, is reportedly developing surveillance cameras and augmented reality glasses despite mounting public scrutiny over the company's ethics. According to documents obtained by BuzzFeed News, Clearview AI is exploring the possibility of making surveillance cameras that use computer vision software to identify subjects by cross-referencing a database. Its database of photos has been the subject of controversy after the company was found to be scraping pictures from Facebook and Instagram without people's consent. Those pictures were used to train its facial recognition algorithm. The company has also partnered with at least 600 law enforcement agencies across the US.
There is a lot of excitement around AI for SecOps. From a market perspective, AI in cybersecurity is projected to grow at a CAGR of 23.3% between 2019 and 2026 to exceed $38B. On the physical security front, the AI-powered video analytics market, driven primarily by security and safety applications, is projected to grow at a CAGR of 22.3% between 2018 and 2025 to reach $4.5B. From a value perspective, securityintelligence.com has an insightful article titled "Artificial Intelligence (AI) and Security: A Match Made in the SOC," where it says: "In summary, when security analysts partner with artificial intelligence, the benefits include streamlined threat detection, investigation and response processes, increased productivity, and improved job satisfaction -- analysts spend more time doing what they enjoy most and the cost of security breaches decreases." It is also well known that the talent war in security operations is real.
Fintech, or financial technology, is the industry that delivers traditional financial services through technology. The industry has existed for a long time but has grown significantly in the past few years. New cryptocurrencies and payment solutions are surfacing, and the industry is projected to reach $309.98 billion, growing at an annual rate of 24.8%, through 2022. All of this growth is driven by the value the industry delivers to consumers around the world. Because fintech competes with a decades-old industry, retaining loyal customers is especially important.