I recently started a new newsletter focused on AI education. TheSequence is a no-BS (meaning no hype, no news, etc.) AI-focused newsletter that takes five minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Safety is one of the emerging concerns in deep learning systems: in this context, safety means building agents that respect the safety dynamics of a given environment.
The construction industry is one of the biggest in the USA, but it is also one of the deadliest. Construction has been, and continues to be, a dangerous occupation, resulting in many accidents, injuries, and fatalities, so safe practices are crucial for the industry. Developing methodologies for applying machine learning to construction safety, and building the accompanying software, would fill a large need. Analyzing construction equipment activities at a proper level of detail can improve several aspects of construction engineering and management, such as productivity assessment, safety management, idle-time reduction, and emission monitoring and control.
While much work in data science to date has focused on algorithmic scale and sophistication, safety -- that is, safeguards against harm -- is a domain no less worth pursuing. This is particularly true in applications like self-driving vehicles, where a machine learning system's poor judgement might contribute to an accident. That's why firms like Intel's Mobileye and Nvidia have proposed frameworks to guarantee safe and logical decision-making, and it's why OpenAI -- the San Francisco-based research firm cofounded by CTO Greg Brockman, chief scientist Ilya Sutskever, and others -- today released Safety Gym. OpenAI describes it as a suite of tools for developing AI that respects safety constraints while training, and for comparing the "safety" of algorithms and the extent to which those algorithms avoid mistakes while learning. Safety Gym is designed for reinforcement learning agents, or AI that's progressively spurred toward goals via rewards (or punishments).
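To make the reinforcement-learning framing concrete, here is a minimal sketch of the idea behind constrained RL as Safety Gym frames it: alongside the usual reward, the environment emits a separate "cost" signal whenever a safety constraint is violated, and the agent's performance is judged on both. The `ToyCorridorEnv` below is a hypothetical toy environment invented for illustration; it only mirrors the Gym-style `step()` interface (observation, reward, done, info) with `info["cost"]`, and is not the actual Safety Gym API.

```python
class ToyCorridorEnv:
    """Hypothetical toy environment: the agent walks right toward a goal at
    position 10; positions 4-5 are a hazard zone that incurs safety cost
    (but no reward penalty) when entered."""

    def __init__(self):
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: +1 (forward) or -1 (back)
        self.pos += action
        reward = 1.0 if self.pos == 10 else 0.0
        cost = 1.0 if self.pos in (4, 5) else 0.0  # constraint-violation signal
        done = self.pos == 10
        return self.pos, reward, done, {"cost": cost}


def run_episode(env, policy, max_steps=50):
    """Roll out a policy, tracking both return and cumulative constraint
    cost -- the quantity constrained-RL algorithms keep below a budget."""
    obs = env.reset()
    total_reward, total_cost = 0.0, 0.0
    for _ in range(max_steps):
        obs, r, done, info = env.step(policy(obs))
        total_reward += r
        total_cost += info["cost"]
        if done:
            break
    return total_reward, total_cost


# A naive always-forward policy reaches the goal but walks through the hazard,
# so it accrues cost: reward 1.0, cumulative cost 2.0.
reward, cost = run_episode(ToyCorridorEnv(), lambda obs: 1)
```

The design point is the separation of signals: a plain RL agent maximizing `reward` alone would happily incur the cost, while a constrained agent must trade task progress against the cost budget.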
Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. The consequences of such errors can be disastrous and even potentially fatal, as shown by the recent Tesla Autopilot crash. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties, such as robustness against adversarial perturbations within a certain L-norm distance of a given image. An effective safety analysis system for a neural network must be able to either ensure that a safety property is satisfied by the network or find a counterexample, i.e., an input for which the network will violate the property.
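One simple technique in this family is interval bound propagation: push the entire L-infinity ball of perturbed inputs through the network using interval arithmetic, and if the resulting output bounds already satisfy the property, the network is provably safe on that ball. The sketch below applies this to a tiny two-layer ReLU network with made-up weights (all values here are hypothetical, chosen only to illustrate the method; real analyzers use tighter relaxations and handle full-size networks).

```python
import numpy as np

# Hypothetical tiny 2-layer ReLU network, for illustration only.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -0.5])
W2 = np.array([[1.0, -1.0]])  # single output: positive means "safe class"
b2 = np.array([0.0])


def interval_bounds(x, eps):
    """Propagate the L-infinity ball [x-eps, x+eps] through the network
    with interval arithmetic, returning sound output lower/upper bounds."""
    lo, hi = x - eps, x + eps
    # Linear layer: positive weights propagate lo->lo, negative weights hi->lo.
    Wp, Wn = np.maximum(W1, 0), np.minimum(W1, 0)
    lo1 = Wp @ lo + Wn @ hi + b1
    hi1 = Wp @ hi + Wn @ lo + b1
    # ReLU is monotone, so it maps interval bounds directly.
    lo1, hi1 = np.maximum(lo1, 0), np.maximum(hi1, 0)
    Wp2, Wn2 = np.maximum(W2, 0), np.minimum(W2, 0)
    lo2 = Wp2 @ lo1 + Wn2 @ hi1 + b2
    hi2 = Wp2 @ hi1 + Wn2 @ lo1 + b2
    return lo2, hi2


def verify_positive(x, eps):
    """Safety property: the output stays positive for *every* perturbation
    within eps (L-infinity) of x. True means proven; False means the bounds
    are inconclusive (a real tool would then search for a counterexample)."""
    lo, _ = interval_bounds(np.asarray(x, dtype=float), eps)
    return bool(lo[0] > 0)
```

For this toy network, `verify_positive([1.0, 0.0], 0.1)` succeeds, while widening the ball to `eps=1.0` makes the bounds cross zero and verification fails, which is exactly the dichotomy described above: either prove the property or fall back to counterexample search.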
E-scooter companies have faced plenty of criticism for allegedly doing too little to foster safety (not to mention basic respect for the law) among riders, and Lime appears to be tackling this issue head-on. It's launching a $3 million "Respect the Ride" campaign to both promote safety and educate customers. The initiative will venture beyond existing efforts, such as safer scooters and a safety ambassador program, to include "multi-channel" ads asking riders to wear helmets, park properly and honor local laws. There's a new Head of Trust and Safety to manage the company's strategy, and there will be a summit to discuss safety and policies with key partners and governments. Lime is also relying on another, simpler tactic to promote safety: it's offering freebies.