Last week, at its AI Summit, Unity, a platform that aims to enable users to produce real-time 3D content, announced the launch of two new products designed to simplify training complex systems with AI: Unity Simulation Pro, a headless multi-GPU distributed rendering solution, and Unity SystemGraph, a node-based editor extension. The two products are designed to make it easier for engineers to test and analyze the capabilities of AI systems virtually -- that is, without access to physical hardware. For instance, Unity Simulation Pro uses distributed rendering to let developers model and test systems faster than real time, so they can iterate and test at a much higher rate. "Unity Simulation Pro is purpose built for building cutting-edge simulation applications. With this product we are enabling a future where we'll see more developers create and evolve autonomous systems, across different industries, at a quicker, safer, and more cost-effective rate," said Danny Lange, senior vice president of artificial intelligence at Unity.
Autonomous cameras allow live events, such as lectures and sports matches, to be broadcast to larger audiences. In this work, we review autonomous camera systems developed over the past twenty years. Quite often, these systems were demonstrated on scripted stage productions (typically cooking shows), lectures, or team sports. We organize the discussion in terms of three core tasks: (1) planning where the cameras should look, (2) controlling the cameras as they transition from one parameter configuration to another, and (3) selecting which camera to put "on-air" in multi-camera systems. We conclude by discussing a trend towards more data-driven approaches fueled by continuous improvements in underlying sensing and signal processing technology.
You can use algorithms and apps to systematically analyze, design, and visualize the behavior of complex systems in the time and frequency domains. Automatically tune compensator parameters using interactive techniques such as Bode loop shaping and the root locus method. You can tune gain-scheduled controllers and specify multiple tuning objectives, such as reference tracking, disturbance rejection, and stability margins. Code generation and requirements traceability help you validate your system and certify compliance.
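The compensator-tuning workflow above can be sketched in code. The following is a minimal, hand-rolled example (not the tool's own API): it assumes a hypothetical second-order plant P(s) = 1/((s+1)(s+2)) and hand-picked PI gains, forms the unity-feedback closed loop with polynomial arithmetic, and checks the reference-tracking objective via the step response.

```python
import numpy as np
from scipy import signal

# Hypothetical plant: P(s) = 1 / ((s + 1)(s + 2))
plant_num = [1.0]
plant_den = np.polymul([1.0, 1.0], [1.0, 2.0])  # s^2 + 3s + 2

# PI compensator C(s) = Kp + Ki/s = (Kp*s + Ki)/s; gains chosen by hand,
# standing in for what interactive loop shaping would produce
Kp, Ki = 4.0, 4.0
comp_num = [Kp, Ki]
comp_den = [1.0, 0.0]

# Open loop L(s) = C(s) * P(s)
L_num = np.polymul(comp_num, plant_num)
L_den = np.polymul(comp_den, plant_den)

# Unity-feedback closed loop T(s) = L / (1 + L)
T_num = L_num
T_den = np.polyadd(L_den, L_num)

# Time-domain check of the reference-tracking objective
T = signal.TransferFunction(T_num, T_den)
t, y = signal.step(T, N=500)
print(f"steady-state output ~ {y[-1]:.3f}")
```

Because the compensator includes integral action, the closed-loop step response settles at 1, i.e. zero steady-state error to a step reference; a frequency-domain check of the stability-margin objective could be added the same way with `signal.bode`.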
Automated control systems were one of the most disruptive applications of industrial technology in the 20th century. The ability to control workflows and processes based on specific inputs and outputs streamlined even the most complex manufacturing processes. These systems, however, need specific parameters and, in some cases, require extensive human oversight and planning to ensure optimal execution. Innovations in AI training methodologies are pushing past these limitations to produce the next wave of disruption to industrial technology: autonomous systems. Autonomous machines do more than address the limitations of automated systems, however.