Development of Secure Embedded Systems Coursera

@machinelearnbot

Three people died after the crash landing of an Asiana Airlines aircraft from Seoul, Korea, at San Francisco International Airport (SFO) on July 6, 2013. The U.S. National Transportation Safety Board (NTSB) determined that the crash was most probably caused by the flight crew's actions and inactions. Three teenage girls lost their lives: two in the airplane, and a third was accidentally run over by a fire truck. Human factors are a frequent cause of accidents. The NTSB and others report that more than 50 percent of plane crashes are caused by pilot error (for road accidents the figure is closer to 90 percent).


Application of neural networks using machine learning for live video recognition - Evenbytes

#artificialintelligence

In the analysis phase we carry out two tasks. The first is a real-time analysis that applies the models to obtain the data shown on the web (without storing them): it identifies whether or not there is a parked aircraft and, separately, the airline, the aircraft model, and whether vehicles for fuel loading and baggage handling are present. We run these analyses through two different recognition systems: on the one hand, machine learning models programmed and trained by us (our own models and scripts), and in parallel, Cloud AutoML (a suite of products that makes it easy to create customized models). With this double processing we increase the reliability of the results by comparison, and we improve the training of our own models and scripts. In addition to the real-time processing, we store all collected images for a complete overnight reprocessing, the second task. This is useful because, depending on the project these technologies are applied to, it will not always be necessary to obtain data in real time; in that case, the batch-processed data are stored in BigQuery for later analysis to derive different indicators.
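
A minimal sketch of this double-processing idea, assuming hypothetical stand-ins for the two recognition paths (the `run_own_model` and `run_automl_model` helpers and the `ApronStatus` fields are illustrative, not Evenbytes' actual code): a frame is accepted only when both paths agree, and disagreements are set aside as candidates for retraining.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ApronStatus:
    """Result of analyzing one camera frame of an airport stand (hypothetical schema)."""
    aircraft_parked: bool
    airline: Optional[str] = None
    aircraft_model: Optional[str] = None
    fuel_truck_present: bool = False
    baggage_vehicles_present: bool = False

def run_own_model(frame: bytes) -> ApronStatus:
    """Stand-in for the in-house models and scripts."""
    return ApronStatus(aircraft_parked=True, airline="PlaceholderAir",
                       aircraft_model="A320", fuel_truck_present=True)

def run_automl_model(frame: bytes) -> ApronStatus:
    """Stand-in for a Cloud AutoML prediction request."""
    return ApronStatus(aircraft_parked=True, airline="PlaceholderAir",
                       aircraft_model="A320", fuel_truck_present=True)

def analyze_frame(frame: bytes) -> Optional[ApronStatus]:
    """Run both recognition paths; publish only results the paths agree on."""
    own = run_own_model(frame)
    automl = run_automl_model(frame)
    if own == automl:
        return own
    return None  # disagreement: hold back for review and retraining
```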


Avaya Conversational Intelligence: A Real-Time System for Spoken Language Understanding in Human-Human Call Center Conversations

arXiv.org Machine Learning

Avaya Conversational Intelligence (ACI) is an end-to-end, cloud-based solution for real-time Spoken Language Understanding for call centers. It combines large-vocabulary, real-time speech recognition, transcript refinement, and entity and intent recognition in order to convert live audio into a rich, actionable stream of structured events. These events can be further leveraged with a business rules engine, thus serving as a foundation for real-time supervision and assistance applications. After ingestion, calls are enriched with unsupervised keyword extraction, abstractive summarization, and business-defined attributes, enabling offline use cases such as business intelligence, topic mining, full-text search, quality assurance, and agent training. ACI comes with a pretrained, configurable library of hundreds of intents and a robust intent training environment that allows for efficient, cost-effective creation and customization of customer-specific intents.
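
A minimal sketch of the business-rules idea, not Avaya's API: structured intent events flow through (condition, action) pairs, and matching rules trigger real-time supervision actions. All names here (`IntentEvent`, `alert_supervisor`, the `cancel_service` label) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class IntentEvent:
    """One structured event emitted by the understanding pipeline."""
    call_id: str
    intent: str          # e.g. "cancel_service" (hypothetical label)
    confidence: float    # score in [0, 1]

Predicate = Callable[[IntentEvent], bool]
Action = Callable[[IntentEvent], None]

def alert_supervisor(event: IntentEvent) -> None:
    print(f"[supervise] call {event.call_id}: {event.intent} "
          f"(confidence {event.confidence:.2f})")

# Business rules: (condition, action) pairs evaluated for every event.
RULES: List[Tuple[Predicate, Action]] = [
    (lambda e: e.intent == "cancel_service" and e.confidence >= 0.8,
     alert_supervisor),
]

def handle_event(event: IntentEvent) -> None:
    for condition, action in RULES:
        if condition(event):
            action(event)

# Usage: feed events as they arrive from the live transcript stream.
handle_event(IntentEvent(call_id="c-42", intent="cancel_service", confidence=0.93))
```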


Aegis AI Software Detects Gun Threats And Provides Real-Time Alerts

#artificialintelligence

During the Parkland, Florida, school shooting in 2018, the shooter was caught on a security camera pulling his rifle out of a duffel bag in a stairwell 15 seconds before discharging the first round. However, the school resource officer didn't enter the building because he wasn't sure what was happening, and the Coral Springs Police Department had no idea what the shooter even looked like until 7 minutes and 30 seconds after the first round was fired. If the video system had included technology to recognize the gun threat in real time, alerts could have been sent to the security team, an announcement could have been made immediately for all students and faculty in Building 12 to barricade their doors, and law enforcement could have responded much faster with a real-time feed of timely, accurate information. Aegis AI offers such a technology, which the company says enables existing security cameras to automatically recognize gun threats and notify security in real time.
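
A rough sketch of the kind of real-time alerting loop such a system implies; the detector, threshold, and notification hook below are all hypothetical placeholders, not Aegis AI's product.

```python
import time
from typing import Iterable

ALERT_THRESHOLD = 0.9  # hypothetical score cutoff for notifying security

def gun_threat_score(frame) -> float:
    """Stand-in for a trained firearm detector; returns a score in [0, 1]."""
    return 0.0  # placeholder

def notify_security(elapsed_s: float, score: float) -> None:
    print(f"ALERT at {elapsed_s:.1f}s: possible firearm (score {score:.2f})")

def monitor(frames: Iterable) -> None:
    """Score each incoming camera frame; alert the moment the cutoff is crossed."""
    start = time.monotonic()
    for frame in frames:
        score = gun_threat_score(frame)
        if score >= ALERT_THRESHOLD:
            notify_security(time.monotonic() - start, score)
```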


This hand-tracking algorithm could lead to sign language recognition – TechCrunch

#artificialintelligence

Millions of people communicate using sign language, but so far projects to capture its complex gestures and translate them into speech have had limited success. A new advance in real-time hand tracking from Google's AI labs, however, could be the breakthrough some have been waiting for. The new technique uses a few clever shortcuts and, of course, the increasing general efficiency of machine learning systems to produce, in real time, a highly accurate map of the hand and all its fingers, using nothing but a smartphone and its camera. "Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands," write Google researchers Valentin Bazarevsky and Fan Zhang in a blog post. "Robust real-time hand perception is a decidedly challenging computer vision task, as hands often occlude themselves or each other (e.g. finger/palm occlusions and hand shakes) and lack high contrast patterns."
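
Google released this pipeline in its open-source MediaPipe framework, so a short sketch with MediaPipe's legacy `solutions` Python API gives a feel for the output: up to two hands per frame, each as 21 normalized landmarks. A webcam stands in for the phone camera here.

```python
import cv2
import mediapipe as mp

# Track up to two hands; each detected hand yields 21 (x, y, z) landmarks.
hands = mp.solutions.hands.Hands(max_num_hands=2,
                                 min_detection_confidence=0.5,
                                 min_tracking_confidence=0.5)

cap = cv2.VideoCapture(0)  # any webcam stands in for the phone camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    for hand in results.multi_hand_landmarks or []:
        tip = hand.landmark[8]  # index fingertip, normalized coordinates
        print(f"index tip at ({tip.x:.2f}, {tip.y:.2f})")
cap.release()
hands.close()
```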