Probabilistic approaches to computer vision typically assume a centralized setting, with the algorithm granted access to all observed data points. However, many problems in wide-area surveillance can benefit from distributed modeling, because of either physical or computational constraints. Most distributed models to date use algebraic approaches (such as distributed SVD) and as a result cannot deal explicitly with missing data. In this work we present an approach to estimation and learning of generative probabilistic models in a distributed context where certain sensor data can be missing. In particular, we show how traditional centralized models, such as probabilistic PCA (PPCA) and missing-data PPCA, can be learned when the data is distributed across a network of sensors.
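The centralized PPCA model that the distributed variant generalizes has a well-known closed-form maximum-likelihood fit (Tipping and Bishop). As a point of reference, a minimal numpy sketch of that centralized solution follows; the variable names and the synthetic data are mine, not the paper's:

```python
import numpy as np

def ppca_ml(X, q):
    """Closed-form maximum-likelihood PPCA fit (centralized, fully observed)."""
    N, d = X.shape
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False, bias=True)       # sample covariance
    evals, evecs = np.linalg.eigh(S)                  # ascending eigenvalues
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    sigma2 = evals[q:].mean()                         # avg. discarded variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, mu, sigma2

# Synthetic check: 2 latent factors observed in 5 dimensions with small noise
rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 2))                     # latent factors
W_true = rng.standard_normal((2, 5))
X = Z @ W_true + 0.1 * rng.standard_normal((500, 5))
W, mu, sigma2 = ppca_ml(X, q=2)
print(W.shape)                                        # (5, 2)
```

The distributed and missing-data settings in the paper replace this one-shot eigendecomposition with iterative (EM-style) updates that can be computed across the sensor network.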
Every day, organisations face risks to their security and business continuity. These may include industrial espionage, cyber attacks, protests, union strikes, terrorism, epidemics and natural disasters; the list is endless. Keeping track of all the events that could potentially hamper business operations and the security of employees and assets is not an easy task, and it all starts with collecting the right information via "threat intelligence". Threat intelligence is the process of collecting and elaborating information on existing or emerging risks or hazards to people, assets or operations, with the purpose of informing decision makers and, whenever possible, preventing or mitigating operational or strategic threats if and when they occur. Since most of the information is collected via openly available media and social media (so-called Open Source Intelligence, or OSINT), practitioners and threat intelligence solution providers have been looking to Artificial Intelligence (AI) to find more relevant information faster.
IBM Cloud Identity now features AI-based adaptive access capabilities that help continually assess employee or consumer user risk levels when accessing applications and services. The solution escalates suspicious user interactions for further authentication, while those identified as lower risk are "fast-tracked" so they can access the applications and services they need. With data breaches on the rise, traditional means of securing access, like passwords, are often not enough to prevent unauthorized access. The rise of credential-stuffing attacks, in which a malicious actor obtains a list of credentials and tests them at various other sites using a bot, demonstrates that many password combinations have been leaked. Considering the number of programs and passwords that employees manage between their professional and personal lives, it is increasingly important that new security measures do not hinder the user experience.
Artificial intelligence and automation adoption rates are rising, and investment plans are high on enterprise radars. AI is in pilots or use at 41% of companies, with another 42% actively researching it, according to the 2019 IDG Digital Business Study. Cybersecurity has emerged as an ideal use case for these technologies. Digital business has opened up a host of new risks and vulnerabilities that, combined with a security skills gap, are weighing down security teams. As a result, more organizations are looking at AI and machine learning as a way to relieve some of the burden on security teams by sifting through high volumes of security data and automating routine tasks.
AI and privacy needn't be mutually exclusive. After a decade in the labs, homomorphic encryption (HE) is emerging as a top way to help protect data privacy in machine learning (ML) and cloud computing. It's a timely breakthrough: data used in ML is doubling yearly. At the same time, concern about the related data privacy and security issues is growing among industry professionals and the public. "It doesn't have to be a zero-sum game," says Casimir Wierzynski, senior director, office of the CTO, AI Products Group at Intel.
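The defining property of HE is that arithmetic on ciphertexts corresponds to arithmetic on the underlying plaintexts, so a cloud service can compute on data it cannot read. As an illustration (not Intel's tooling), here is a toy sketch of the additively homomorphic Paillier scheme, using deliberately tiny demo primes; a real deployment would use ~2048-bit moduli and a vetted library:

```python
import math, random

def keygen(p=1000003, q=1000033):
    # Demo-sized primes for illustration only; NOT cryptographically secure.
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                 # valid because we pick g = n + 1
    return (n,), (n, lam, mu)            # (public key), (private key)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

pub, priv = keygen()
a, b = encrypt(pub, 20), encrypt(pub, 22)
total = (a * b) % (pub[0] ** 2)          # multiplying ciphertexts adds plaintexts
print(decrypt(priv, total))              # → 42
```

Fully homomorphic schemes extend this idea to both addition and multiplication, which is what makes encrypted ML inference possible, at a substantial performance cost.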
Live facial recognition cameras will be deployed across London, with the city's Metropolitan Police announcing today that the technology has moved past the trial stage and is ready to be permanently integrated into everyday policing. The cameras will be placed in locations popular with shoppers and tourists, like Stratford's Westfield shopping center and the West End, reports BBC News. Each camera will scan for faces contained in "bespoke" watch lists, which the Met says will predominantly contain individuals "wanted for serious and violent offences." When the camera flags an individual, police officers will approach and ask them to verify their identity. If they're on the watch list, they'll be arrested.
Video surveillance systems are evolving and are using artificial intelligence (AI) to inspect and analyse video footage, interpret patterns and flag unusual activity. Lenovo DCG and Pivot3 provide state-of-the-art upgraded infrastructure solutions that aim to enhance the current technology required to support these systems, rather than entrusting the preservation of crucial data to outdated NVR technology. Commenting on the partnership, Dr. Chris Cooper, General Manager for Lenovo DCG, Middle East, Turkey and Africa, said, "We are delighted to showcase our partnership with Pivot3 at one of the world's leading technology trade shows. The Middle East is exhibiting tremendous growth in terms of adopting smart solutions. The UAE in particular is investing heavily in implementing the latest innovations in their technological infrastructure; therefore, we see great potential from our partnership with Pivot3 as we work together to satisfy the appetite for next generation computing products and services."
Smart utility metering for power, gas and water, and video surveillance will remain by far the largest smart city segment, representing 87 per cent of the total number of smart city connections by 2026. This is according to new analysis by ABI Research. While metering is mainly focused on usage monitoring, savings and efficient operation of utility networks, video surveillance is no longer just about security and crime detection and prevention, ABI Research's Smart Cities market data report finds. Video surveillance is increasingly enabling new applications like urban tolling and the monitoring of low-emission zones to reduce air pollution, mainly in Europe. These systems use licence plate recognition to identify older vehicles banned from entering the zone.
Learning to detect content-independent transformations from data is one of the central problems in biological and artificial intelligence. An example of such a problem is the unsupervised learning of a visual motion detector from pairs of consecutive video frames. Rao and Ruderman formulated this problem in terms of learning infinitesimal transformation operators (Lie group generators) via minimizing image reconstruction error. Unfortunately, it is difficult to map their model onto a biologically plausible neural network (NN) with local learning rules. Here we propose a biologically plausible model of motion detection.
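The Rao and Ruderman formulation models a small transformation between frames as x' ≈ (I + A)x and fits the generator A by minimizing reconstruction error. A toy numpy sketch of that idea (noiseless data, and a closed-form least-squares solve in place of their original gradient scheme; all names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 8, 500

# Ground-truth infinitesimal operator: a small antisymmetric generator
M = rng.standard_normal((d, d))
A_true = (M - M.T) / 10

X = rng.standard_normal((d, N))      # "frame t" image patches (as columns)
Y = (np.eye(d) + A_true) @ X         # "frame t+1": slightly transformed

# Objective: min_A sum ||y - (I + A) x||^2. With noiseless data this
# reduces to ordinary least squares on the frame differences:
A_hat = (Y - X) @ X.T @ np.linalg.inv(X @ X.T)
print(np.allclose(A_hat, A_true))    # → True: the generator is recovered
```

The paper's contribution is showing how an equivalent objective can be optimized by a neural network with local, biologically plausible learning rules rather than a global matrix solve like this one.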
Hi! Just sharing my recent project, clever-camera, a simple IP camera monitoring web service that uses a MobileNet classifier to filter camera events based on the predicted labels, with the possibility to search through the history of events or send email notifications when camera movement is detected. In practice, CC uses MobileNetV3 to classify the content of selected ROIs, so one can filter events using the predicted labels. The application was written to run in real time (~1 FPS) on a Raspberry Pi 4 (in my case, I had low power consumption requirements), but it can also be run on a standard desktop/laptop with Ubuntu. You just need access to some IP camera to run monitoring. The whole application is just a few files and it's completely written in Python (thanks to the remigui library), so it should be relatively easy for anyone to modify the code.
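The label-filtering idea is easy to sketch: run the classifier on each event's ROI and keep only events whose predicted labels intersect a watch set. A hypothetical sketch (the function names and the stand-in classifier are mine; the real project uses MobileNetV3):

```python
def filter_events(events, classify, labels_of_interest):
    """Keep events whose ROI classification matches a watched label."""
    kept = []
    for event in events:
        predicted = set(classify(event["roi"]))   # classifier labels per ROI
        if predicted & labels_of_interest:
            kept.append({**event, "labels": sorted(predicted)})
    return kept

# Stand-in classifier for the demo (MobileNetV3 in the real project)
fake_classify = lambda roi: ["person"] if roi == "roi-front-door" else ["tree"]

events = [{"roi": "roi-front-door", "ts": 1}, {"roi": "roi-garden", "ts": 2}]
print(filter_events(events, fake_classify, {"person", "car"}))
# keeps only the front-door event, tagged with its predicted labels
```

Swapping `fake_classify` for a real model call is the only change needed to turn this into the project's actual filtering step.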