If you are looking for an answer to the question "What is artificial intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
What are the principles for classifying artificial intelligence systems (AIS)? The classification scheme is built on the classification grounds that are key from the standpoint of standardization. Each of the grounds under consideration is represented by several top-level classes; in most cases, more detailed class hierarchies or classification principles can be found by reference to the relevant standards or documents. The basic classes of AIS follow these principles: 1) by the classes and categories of managed objects; 2) by the technologies for building, acquiring, and using knowledge; 3) by the functions the AIS performs in the control loop; 4) by the methods and technologies used in the AIS; 5) by the methods and means by which the AIS interacts with other systems and a human operator.
Reinforcement learning (RL) has shown promise in creating complex logic in controlled settings. But what are the prospects for using RL in a more complicated context, such as telecom networks? Let's learn the basics first. What is reinforcement learning, and how does it work? Machine learning comprises three broad methodologies: supervised learning, unsupervised learning, and reinforcement learning (RL).
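To make the third methodology concrete, here is a minimal sketch of tabular Q-learning on a toy corridor environment. The environment, rewards, and hyperparameters are illustrative only, not drawn from any telecom use case:

```python
import random

random.seed(0)

N_STATES = 5                  # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]            # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: learned estimate of return for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: deterministic move, reward only at the goal state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Update toward reward plus discounted best future value (Bellman backup).
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent is never told the rules of the corridor; it discovers the "move right" policy purely from the reward signal, which is the essence of RL.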
The Ericsson Intelligent RAN Automation portfolio, shown in Figure 1, delivers end-to-end network automation. It includes centralized and distributed SON solutions along with new capabilities that support the transformation to a more open, AI/ML-enabled environment, empowering innovation, supporting a wide range of use cases, shortening time to market, and adapting to both existing and future networks. The objective of RAN automation is to boost RAN performance and operational efficiency by replacing the manual work of developing, installing, deploying, managing, optimizing, and retiring RAN functions with automated processes. AI's role is to unlock more advanced network automation: making RAN network functions more autonomous and replacing manual processes with intelligent tools that augment humans. It also makes both AI/ML-powered RAN network functions and tools more robust for deployment in different environments. Ericsson's AI and automation foundations give service providers the platforms, and the evolved life-cycle management of RAN software and services, to evolve networks efficiently and meet ever-changing demands.
The Open Dynamic Robot Initiative Group is a collaboration between five robotics-oriented research groups, based in three countries, with the aim of building an open-source robotics platform based on the torque-control method. Leveraging 3D printing, a few custom PCBs, and off-the-shelf parts, it has a low barrier to entry and a much lower cost compared to similar robots. The eagle-eyed will note that this is only a development platform, and all of the higher-level control is off-machine, hosted by a separate PC. What's interesting here is just how low-level the robot actually is. The motion hardware is purely a few BLDC motors driven by field-oriented control (FOC) driver units, a wireless controller, and some batteries.
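At the heart of those FOC driver units are the Clarke and Park transforms, which map the three measured phase currents into a rotor-aligned d/q frame where torque can be regulated with simple PI loops. A minimal sketch of the two transforms (amplitude-invariant scaling; the example currents are illustrative, not from this robot's firmware):

```python
import math

def clarke(ia, ib, ic):
    """Clarke transform: three phase currents -> two-axis stationary frame."""
    alpha = ia
    beta = (ia + 2.0 * ib) / math.sqrt(3.0)
    return alpha, beta

def park(alpha, beta, theta):
    """Park transform: stationary frame -> rotor-aligned d/q frame."""
    d = alpha * math.cos(theta) + beta * math.sin(theta)
    q = -alpha * math.sin(theta) + beta * math.cos(theta)
    return d, q

# Balanced three-phase currents aligned with rotor angle theta: after the two
# transforms, d carries the flux-aligned component and q the torque-producing
# component that the FOC current loop actually regulates.
theta = 0.7
ia = math.cos(theta)
ib = math.cos(theta - 2.0 * math.pi / 3.0)
ic = math.cos(theta + 2.0 * math.pi / 3.0)
d, q = park(*clarke(ia, ib, ic), theta)
print(round(d, 3), round(q, 3))
```

For this perfectly aligned input the d/q values are constant (d = 1, q = 0) even though the phase currents are sinusoidal, which is exactly why the transforms make torque control tractable.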
Autonomous vehicle technology is almost ready for widespread deployment, but people aren't ready for autonomous technology. They don't yet trust the technology to make decisions fully on its own, and that distrust keeps driver-assisted vehicles from becoming truly autonomous vehicles. We accept a certain level of failure in technology like our laptops, smartphones, and Wi-Fi because those limitations are merely inconveniences we can live with. Building a vehicle requires safety, security, and automotive-quality considerations; when our lives depend on a technology's performance, we have to hold it to a higher standard.
A data-driven computational heuristic is proposed to control MIMO systems without prior knowledge of their dynamics; it is illustrated on a two-input, two-output balance system. The heuristic integrates a self-adjusting nonlinear threshold-accepting algorithm with a neural network to trade off the desired transient and steady-state characteristics of the system while optimizing a dynamic cost function. It decides the control gains of multiple interacting PID control loops, and the neural network is trained by optimizing a weighted derivative-like objective cost function. The performance of the developed mechanism is compared with that of a controller employing a combined PID-Riccati approach. A salient feature of the proposed control scheme is that it requires no prior knowledge of the system dynamics; it does, however, depend on a known region of stability for the control gains, which serves as the search space for the optimization algorithm. The control mechanism is validated using different optimization criteria that address different design requirements.
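To illustrate the threshold-accepting idea in isolation (without the paper's neural network, MIMO plant, or exact cost function, all of which are simplified away here), the sketch below searches PID gains for a toy first-order single-loop plant. The plant model, gain bounds, and shrink schedule are assumptions for illustration:

```python
import random

random.seed(1)

def simulate(kp, ki, kd, setpoint=1.0, dt=0.05, steps=200):
    """Integral-of-absolute-error cost of a PID loop on a toy plant dy/dt = -y + u."""
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += (-y + u) * dt            # first-order plant dynamics (Euler step)
        cost += abs(err) * dt         # IAE criterion
    return cost

# Threshold accepting: like simulated annealing, but a candidate is accepted
# whenever it is no worse than the current cost plus a shrinking threshold.
cur_gains = [1.0, 0.1, 0.0]           # initial (Kp, Ki, Kd) in a known stable region
cur = simulate(*cur_gains)
best_gains, best = cur_gains[:], cur
threshold = 0.5
for _ in range(500):
    cand = [max(0.0, g + random.uniform(-0.2, 0.2)) for g in cur_gains]
    c = simulate(*cand)
    if c - cur < threshold:           # accept mildly worse moves early on
        cur_gains, cur = cand, c
        if c < best:
            best_gains, best = cand[:], c
    threshold *= 0.99                 # self-adjusting: tighten acceptance over time
print(best_gains, best)
```

Restricting the random perturbations to a known stable gain region plays the same role as the stability region the paper requires: candidates that destabilize the loop produce enormous costs and are simply never accepted once the threshold tightens.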
Entitled "Trusted Sensor Integration", the Phase I STTR focuses on building both analytical/statistical and machine-learning-based models of the static and dynamic behavior of individual sensors and systems. The proposed solution derives a fingerprint from the imperfections a sensor exhibits when translating a physical input into a numeric output. It stimulates a simple cyber-physical system, e.g. an engine, measures the sensor output, and feeds both the stimulation signals and the sensor outputs into an RNN; realistic training therefore depends on realistic stimulation. The project stems from the original solicitation titled "Cyber Resilience of Condition Based Monitoring Capabilities". Carried out in collaboration between ObjectSecurity LLC and subcontractor Mississippi State University, it aims for a successful technology development and transition resulting in a secure CBM sensor node that can minimize human intervention, reduce the number of machinery overhauls, shorten time spent in depot for repairs, and optimize maintenance logistics by at least 50%.
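The analytical/statistical half of the idea can be sketched very simply: stimulate the system with a known signal, then fit each sensor's gain, offset, and noise floor, which together act as a crude fingerprint. This stand-in is hypothetical (the sensor model and parameters are invented for illustration) and omits the project's RNN stage entirely:

```python
import math
import random

random.seed(42)

def sensor_reading(x, gain, bias, noise_sigma):
    """Toy sensor model: each physical unit has a slightly different gain,
    offset, and noise floor -- the imperfections that form its fingerprint."""
    return gain * x + bias + random.gauss(0.0, noise_sigma)

def fingerprint(stimulus, readings):
    """Least-squares fit of gain/offset plus the residual noise spread."""
    n = len(stimulus)
    mx = sum(stimulus) / n
    my = sum(readings) / n
    sxx = sum((x - mx) ** 2 for x in stimulus)
    sxy = sum((x - mx) * (y - my) for x, y in zip(stimulus, readings))
    gain = sxy / sxx
    bias = my - gain * mx
    resid = [y - (gain * x + bias) for x, y in zip(stimulus, readings)]
    sigma = math.sqrt(sum(r * r for r in resid) / n)
    return gain, bias, sigma

# Stimulate with a known sweep and fingerprint two hypothetical sensor units.
stim = [0.01 * k for k in range(1000)]
unit_a = [sensor_reading(x, 1.02, 0.05, 0.01) for x in stim]
unit_b = [sensor_reading(x, 0.97, -0.03, 0.03) for x in stim]
fa, fb = fingerprint(stim, unit_a), fingerprint(stim, unit_b)
print(fa, fb)
```

Even these three statistics separate the two units cleanly; the same principle underlies feeding stimulus/response pairs into an RNN, which can additionally capture dynamic (time-dependent) imperfections.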
In this blog, I'll discuss how I worked collaboratively with various domain experts, using reinforcement learning to develop innovative solutions in rocket engine development. In doing so, I'll demonstrate the application of ML techniques to the manufacturing industry and the role of the machine learning product manager. Machine learning (ML) has had an incredible impact across industries, with applications ranging from personalized TV recommendations to dynamic pricing in your rideshare app. Because ML is such a core component of success in the tech industry, advances in ML research and applications are developing at an astonishing rate. For industries outside of tech, ML can be utilized to personalize a user's experience, automate laborious tasks, and optimize subjective decision making.
As part of Expedia Group's partnership with AWS, we recently took an amazing opportunity to host a DeepRacer competition in our Brisbane office. DeepRacer is designed to introduce people of all backgrounds to machine learning. The goal of the competition is to engineer a control loop for an autonomous toy racing car that enables the car to complete a full circuit of a physical race track in the shortest amount of time. This control loop is constructed using a machine learning technique called reinforcement learning, which encourages an autonomous machine to perform certain actions by rewarding the behavior you want.
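In DeepRacer, that reward signal is supplied by a competitor-written Python function the simulator calls at every step. A minimal example in the DeepRacer reward-function style (the parameter names follow AWS's documented input convention; the banding thresholds are our own illustrative choices, not a tuned competition entry):

```python
def reward_function(params):
    """Example DeepRacer-style reward: prefer staying near the track center."""
    if not params["all_wheels_on_track"]:
        return 1e-3                       # near-zero reward for leaving the track

    # Reward bands that shrink as the car drifts from the center line.
    half_width = params["track_width"] / 2.0
    distance = params["distance_from_center"]
    if distance <= 0.1 * half_width:
        return 1.0
    if distance <= 0.5 * half_width:
        return 0.5
    return 0.1

# The simulator evaluates this every step; actions that lead to higher
# returns are reinforced during training.
print(reward_function({"all_wheels_on_track": True,
                       "track_width": 0.8,
                       "distance_from_center": 0.02}))
```

Shaping this function well is most of the engineering: reward only lap completion and learning is too slow, reward center-line hugging too strongly and the car never learns to cut corners.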
We tune one of the most common heating, ventilation, and air conditioning (HVAC) control loops, namely the temperature control of a room. For economic and environmental reasons, it is of prime importance to optimize the performance of this system. Buildings account for 20 to 40% of a country's energy consumption, and almost 50% of that comes from HVAC systems. Scenario projections predict a 30% decrease in heating consumption by 2050 due to efficiency increases. Advanced control techniques can improve performance; however, proportional-integral-derivative (PID) control is typically used due to its simplicity and overall performance. We use Safe Contextual Bayesian Optimization to optimize the PID parameters without human intervention. We reduce costs by 32% compared to the current PID controller setting while assuring the safety and comfort of people in the room. The results of this work have an immediate impact on the room control loop's performance and its related commissioning costs. Furthermore, this successful attempt paves the way for further use at different levels of HVAC systems, with promising energy, operational, and commissioning cost savings, and it is a practical demonstration of the positive effects that artificial intelligence can have on environmental sustainability.
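To see what is actually being tuned, here is a minimal sketch of such a loop: a discrete PID controller driving a first-order thermal model of a room toward a setpoint. This is not the paper's system; the time constant, heater gain, power limit, and both gain settings are assumptions for illustration. The gains Kp, Ki, Kd are exactly the knobs a method like Safe Contextual Bayesian Optimization would search over:

```python
def run_room(kp, ki, kd, setpoint=21.0, outside=5.0, dt=60.0, steps=240):
    """Comfort cost (integral of absolute error, K*s) of a PID heater loop
    on a first-order thermal room model over a 4-hour horizon."""
    temp, integ, prev_err, iae = 15.0, 0.0, setpoint - 15.0, 0.0
    tau, heater_gain = 3600.0, 5e-5   # leak time constant [s], K/s per W (assumed)
    for _ in range(steps):
        err = setpoint - temp
        integ += err * dt
        deriv = (err - prev_err) / dt
        # Heater power from the PID law, clamped to the actuator's 0-2000 W range.
        power = max(0.0, min(2000.0, kp * err + ki * integ + kd * deriv))
        prev_err = err
        # Room dynamics: heat leaks toward the outside, the heater pushes back.
        temp += ((outside - temp) / tau + heater_gain * power) * dt
        iae += abs(err) * dt
    return iae

cost_tuned = run_room(kp=100.0, ki=0.02, kd=0.0)
cost_detuned = run_room(kp=5.0, ki=0.0, kd=0.0)
print(cost_tuned, cost_detuned)
```

The detuned gains leave the room cold for the whole horizon while the tuned ones track the setpoint, which is the gap an optimizer exploits; the "safe" part of the paper's method additionally constrains the search so no candidate gain setting ever violates comfort limits on the real building.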