If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Flexible plant operation is highly desirable in today's power generation industry. Plant owners want increased ramp rates and the ability to operate at lower loads so their plants will remain "in the money" longer in today's competitive power markets. This goal, while laudable, remains elusive. The ADEX self-tuning artificial intelligence (AI) system allows plants to continuously optimize performance at any operating point rather than being constrained to the static "design point" common in gas- and coal-fired plants. Better yet, no changes to the plant distributed control system (DCS) are required.
"Now!" I heard my 6-year-old son yell as I arrived back home. It isn't uncommon to hear him yell. But it is uncommon to hear him yell directions at his older brother. I caught a whiff of toasted mini pizzas as I headed downstairs in search of my family and found an all-out gaming party. My three children, ages 10, 8, and 6 years old, were sprawled across the furniture and floor with my husband. Snacks and drinks lay scattered around as they played the 2006 Ghostbusters video game for Xbox. "Henry is using the Xbox Adaptive Controller," my husband shouted over the game's music.
Creating robots that can perform acrobatic movements such as flips or spinning jumps is highly challenging. Typically, these robots require sophisticated hardware designs, motion planners, and control algorithms. Researchers at the Massachusetts Institute of Technology (MIT) and the University of Massachusetts Amherst recently designed a new humanoid robot supported by an actuator-aware kino-dynamic motion planner and a landing controller. This design, presented in a paper pre-published on arXiv, could allow the humanoid robot to perform backflips and other acrobatic movements. "In this work, we tried to come up with a realistic control algorithm to make a real humanoid robot perform acrobatic behavior such as back/front/side-flip, spinning jump, and jump over an obstacle," Donghyun Kim, one of the researchers who developed the robot's software and controller, told TechXplore.
Today's video games may boast photorealistic graphics, surround sound, and worldwide multiplayer support, but many players still long for the days when games were simple. You know, when a game didn't require more than a joystick and a button or two? Perhaps it's no surprise, then, that many are buying arcade cabinets for the home, including replicas of classic coin-operated ("coin-op") games and pinball machines. "Simple games that are 'quick to learn but difficult to master' have a special addictive quality that we tried for when designing them with our limited graphic palette," recalls Nolan Bushnell, who founded Atari and created Pong in the '70s, and shortly thereafter founded Chuck E. Cheese (smartly, as a distribution channel for Atari games). "Often games are for turning off your mind and entering kind of a Zen state."
If you are among the lucky video game fans to snag a PlayStation 5, you can soon add controllers in different colors to your collection. Sony announced it will launch versions of its DualSense controller for the video game console in "cosmic red" and "midnight black." The new controllers will be available at participating retailers next month, said Sony, which provided no specific launch dates but noted availability windows would vary by location. PlayStation's website lists the black controller for $69.99 (the same price as the stock white model), and the red controller for $74.99. The DualSense controller has received rave reviews since launching with the console last November.
We develop a formal framework for automatic reasoning about the obligations of autonomous cyber-physical systems, including their social and ethical obligations. Obligations, permissions and prohibitions are distinct from a system's mission, and are a necessary part of specifying advanced, adaptive AI-equipped systems. They need a dedicated deontic logic of obligations to formalize them. Most existing deontic logics lack corresponding algorithms and system models that permit automatic verification. We demonstrate how a particular deontic logic, Dominance Act Utilitarianism (DAU), is a suitable starting point for formalizing the obligations of autonomous systems like self-driving cars. We demonstrate its usefulness by formalizing a subset of Responsibility-Sensitive Safety (RSS) in DAU; RSS is an industrial proposal for how self-driving cars should and should not behave in traffic. We show that certain logical consequences of RSS are undesirable, indicating a need to further refine the proposal. We also demonstrate how obligations can change over time, which is necessary for long-term autonomy. We then demonstrate a model-checking algorithm for DAU formulas on weighted transition systems, and illustrate it by model-checking obligations of a self-driving car controller from the literature.
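The dominance idea at the heart of DAU can be illustrated with a toy sketch: over a small set of acts, each with the utilities of its reachable outcomes in a weighted transition system, an act is (roughly) obligatory when it utility-dominates every alternative. This is a drastic simplification for illustration only; the function names, the toy model, and the specific dominance criterion below are assumptions, not the paper's algorithm.

```python
# Toy sketch of utility-dominance checking, loosely inspired by
# Dominance Act Utilitarianism (DAU). Not the paper's model checker:
# the acts, utilities, and dominance criterion here are illustrative.

def dominates(utils_a, utils_b):
    """Act A dominates act B if every outcome of A is at least as
    good as every outcome of B, and some comparison is strict."""
    at_least = all(ua >= ub for ua in utils_a for ub in utils_b)
    strictly = any(ua > ub for ua in utils_a for ub in utils_b)
    return at_least and strictly

def obligatory(acts):
    """In this toy reading, an act is obligatory if it dominates
    every alternative act available in the current state."""
    return {a for a in acts
            if all(dominates(acts[a], acts[b]) for b in acts if b != a)}

# Hypothetical self-driving example: utilities of reachable outcomes.
acts = {
    "brake":    [8, 9],   # safe outcomes
    "swerve":   [2, 7],   # risky maneuver
    "continue": [1, 3],   # collision risk
}
print(obligatory(acts))  # prints {'brake'}
```

A real DAU model checker would evaluate such conditions over all states of a weighted transition system rather than a single hand-written table.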
It is 2021, and I'm not playing on an Xbox, PlayStation, or Nintendo Switch. This isn't an old Atari 2600 that had been collecting dust in a closet, or an emulator I found online. It's a brand-new home video game console: the Atari VCS. After spending some time with the Atari VCS, it's easy to get caught up in the nostalgia of popping in my "Asteroids" or "Missile Command" cartridges. However, the VCS delivers plenty of modern touches, such as wireless, rechargeable controllers and Wi-Fi support for downloadable games.
Intelligent Transportation Systems are increasingly leveraging greater sensory coverage and computing power to deliver data-intensive solutions that outperform traditional systems. Within Traffic Signal Control (TSC), this has enabled the emergence of Machine Learning (ML) based systems. Among these, Reinforcement Learning (RL) approaches have performed particularly well. Given the lack of industry standards in ML for TSC, the literature exploring RL often lacks comparisons against commercially available systems and straightforward formulations of how the agents operate. Here we attempt to bridge that gap. We propose three different architectures for TSC RL agents, compare them against the commercial systems currently in use (MOVA, SurTrac, and cyclic controllers), and provide pseudo-code for the agents. The agents use variations of Deep Q-Learning and Actor-Critic, with states and rewards based on queue lengths. Their performance is compared across different map scenarios with variable demand, assessed in terms of global delay and average queue length. We find that the RL-based systems consistently achieve significantly lower delays than the existing commercial systems.
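The queue-length state/reward formulation can be sketched with tabular Q-learning on a toy two-approach intersection. The paper's agents use Deep Q-Learning and Actor-Critic; this stripped-down version, with a hypothetical one-step arrival/discharge simulator, only illustrates how queue lengths drive the state and the (negative queue) reward.

```python
import random

# Toy sketch of an RL traffic-signal agent: tabular Q-learning with
# queue lengths as state and negative total queue as reward. The
# simulator below (2-vehicle discharge, Bernoulli arrivals) is an
# assumption for illustration, not a calibrated traffic model.

random.seed(0)

def step(queues, phase):
    """One simulation step: the approach given green discharges up
    to 2 vehicles; each approach receives a random arrival."""
    queues = list(queues)
    queues[phase] = max(0, queues[phase] - 2)
    for i in range(len(queues)):
        queues[i] += random.random() < 0.4   # bool adds as 0/1
    return tuple(queues)

Q = {}                       # (state, action) -> value
alpha, gamma, eps = 0.1, 0.9, 0.1
actions = (0, 1)             # which approach gets green
state = (0, 0)
for _ in range(5000):
    if random.random() < eps:                     # epsilon-greedy
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: Q.get((state, x), 0.0))
    nxt = step(state, a)
    reward = -sum(nxt)                            # penalise queues
    best = max(Q.get((nxt, x), 0.0) for x in actions)
    Q[(state, a)] = Q.get((state, a), 0.0) + alpha * (
        reward + gamma * best - Q.get((state, a), 0.0))
    state = nxt
```

With rewards that are never positive, all learned Q-values stay non-positive; the greedy policy tends toward serving the longer queue.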
Left ventricular assist devices (LVADs) are mechanical pumps that can support heart failure (HF) patients as a bridge to transplant or as destination therapy. To automatically adjust the LVAD speed, a physiological control system must be designed to respond to variations in patient hemodynamics across a variety of clinical scenarios. Such control systems require pressure feedback signals from the cardiovascular system, but no suitable long-term implantable pressure sensors are available. In this study, a novel real-time deep convolutional neural network (CNN) for estimating preload from the LVAD flow signal was proposed. A new sensorless adaptive physiological control system for an LVAD pump was developed using full-form dynamic linearization model-free adaptive control (FFDL-MFAC) and the proposed preload estimator to keep the patient's condition within safe physiological ranges. The CNN model for preload estimation was trained and evaluated through 10-fold cross-validation on 100 different patient conditions, and the proposed sensorless control system was assessed on a new testing set of 30 different patient conditions across six patient scenarios. The proposed preload estimator was highly accurate, with a correlation coefficient of 0.97, a root mean squared error of 0.84 mmHg, a reproducibility coefficient of 1.56 mmHg, a coefficient of variation of 14.44%, and a bias of 0.29 mmHg on the testing dataset. The results also indicate that the proposed sensorless physiological controller performs similarly to a preload-based physiological control system that uses measured preload to prevent ventricular suction and pulmonary congestion. This study shows that LVADs can respond appropriately to changing patient states and physiological demands without additional pressure or flow measurements.
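The structure of such a sensorless loop can be sketched in two pieces: an estimator that maps a flow waveform to a preload value, and a model-free adaptive law that nudges pump speed toward a preload setpoint. Both functions below are illustrative stand-ins (the study uses a deep CNN estimator and the full FFDL-MFAC scheme); the affine estimator, the sensitivity value phi, and all numeric constants are assumptions.

```python
# Minimal sketch of a sensorless LVAD control loop in the spirit of
# the system described above. Stand-ins only: the real estimator is a
# deep CNN and the real controller is FFDL-MFAC.

def estimate_preload(flow_window):
    """Stand-in for the CNN: a hypothetical affine map of mean flow
    (L/min) to an estimated preload (mmHg)."""
    mean_flow = sum(flow_window) / len(flow_window)
    return 2.0 + 1.5 * mean_flow

def mfac_step(speed, phi, error, rho=1.0, lam=0.001):
    """Simplified MFAC-style increment: move pump speed along the
    estimated input-output sensitivity phi to reduce the error."""
    return speed + rho * phi * error / (lam + phi * phi)

setpoint = 10.0                 # target preload, mmHg (assumed)
phi = -0.01                     # mmHg per rpm: raising speed unloads
                                # the ventricle, lowering preload
speed = 3000.0                  # current pump speed, rpm
flow = [4.0, 5.0, 6.0, 5.0]     # toy flow samples, L/min

preload = estimate_preload(flow)          # 9.5 mmHg for this window
speed = mfac_step(speed, phi, setpoint - preload)
```

Because the sensitivity of preload to speed is negative, a preload below its setpoint correctly drives the controller to reduce pump speed.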
Balancing and push recovery are essential capabilities for humanoid robots solving complex locomotion tasks. In this context, classical control systems tend to be based on simplified physical models and hard-coded strategies. Although successful in specific scenarios, this approach requires demanding parameter tuning and switching logic among specifically designed controllers to handle more general perturbations. We apply model-free Deep Reinforcement Learning to train a general and robust humanoid push-recovery policy in a simulation environment. Our method targets high-dimensional whole-body humanoid control and is validated on the iCub humanoid. Reward components incorporating expert knowledge of humanoid control enable the same policy to quickly learn several robust behaviors spanning the entire body. We validate our method with extensive quantitative analyses in simulation, including out-of-sample tasks that demonstrate policy robustness and generalization, both key requirements for real-world robot deployment.
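The idea of expert-knowledge reward components can be sketched as a weighted sum of shaped terms. The term names, kernel widths, and weights below are assumptions for illustration, not the paper's actual reward function.

```python
import math

# Illustrative sketch of a shaped push-recovery reward: a weighted
# sum of expert-knowledge terms (posture, CoM height, effort,
# contact). All names, targets, and weights are assumed, not taken
# from the paper.

def rbf(value, target, width):
    """Radial-basis kernel: 1.0 on target, decaying with error."""
    return math.exp(-((value - target) / width) ** 2)

def push_recovery_reward(com_height, torso_pitch, torque_sq_sum,
                         feet_in_contact):
    terms = {
        "upright": (0.4, rbf(torso_pitch, 0.0, 0.3)),      # stay vertical
        "height":  (0.3, rbf(com_height, 0.5, 0.1)),       # keep CoM up
        "effort":  (0.2, math.exp(-1e-4 * torque_sq_sum)), # low torque
        "contact": (0.1, 1.0 if feet_in_contact else 0.0), # keep footing
    }
    return sum(w * v for w, v in terms.values())

# A nominal standing pose scores the maximum reward of 1.0.
r = push_recovery_reward(com_height=0.5, torso_pitch=0.0,
                         torque_sq_sum=0.0, feet_in_contact=True)
```

Bounded, smoothly decaying terms like these give the policy a dense learning signal even far from the nominal pose, which is one common motivation for this style of shaping.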