If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
It is an embarrassing problem we have all had to deal with. A run for the bus or a hot meeting room can leave you trying to check your armpit without anyone noticing. Luckily, AI is here to help. UK chip-maker Arm, better known for developing the hardware that powers most smartphones, is working on a new generation of smart chips that embed artificial intelligence inside devices. One of these chips is being taught to smell.
To try to get a glimpse of the everyday devices we could be using a decade from now, there are worse places to look than inside the Future Interfaces Group (FIG) lab at Carnegie Mellon University. During a recent visit to Pittsburgh by Engadget, PhD student Gierad Laput put on a smartwatch and touched a MacBook Pro, then an electric drill, then a door knob. The moment his skin pressed against each, the name of the object popped up on an adjacent computer screen. Each item had emitted a unique electromagnetic signal which flowed through Laput's body, to be picked up by the sensor on his watch. The software essentially knew what Laput was doing in dumb meatspace, without a pricey sensor needing to be embedded (and its batteries recharged) on every object he made contact with.
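The recognition step described above can be sketched as a nearest-neighbour match on frequency-domain features: each object's electromagnetic emission becomes a coarse spectral fingerprint, and an unknown trace is labelled with its closest enrolled match. Everything here is illustrative rather than FIG's actual pipeline; the `em_signature` helper, the 8-bin spectrum, and the `EMObjectClassifier` class are hypothetical stand-ins for the lab's signal processing.

```python
import numpy as np

def em_signature(samples, n_bins=8):
    """Reduce a raw sensor trace to a coarse magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    # Average the spectrum into a fixed number of bins so traces
    # of the same length compare cleanly.
    bins = np.array_split(spectrum, n_bins)
    return np.array([b.mean() for b in bins])

class EMObjectClassifier:
    """Nearest-neighbour matcher over labelled EM signatures."""
    def __init__(self):
        self.labels, self.signatures = [], []

    def enroll(self, label, samples):
        self.labels.append(label)
        self.signatures.append(em_signature(samples))

    def identify(self, samples):
        query = em_signature(samples)
        dists = [np.linalg.norm(query - s) for s in self.signatures]
        return self.labels[int(np.argmin(dists))]
```

In practice a system like this would use a trained classifier over richer features, but the shape of the problem — distinctive emissions, one wearable sensor, no per-object hardware — is the same.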
Artificial intelligence efforts are on the rise in seemingly every industry. In financial services, AI is being entrusted with increasingly critical responsibilities, making data-driven decisions not only on what and when to invest but also how much. The marketing industry is making targeted communications more relevant than ever, with tools and automation that rely increasingly on AI. And the healthcare industry is developing AI tools that could ultimately save lives by speeding up detection and reducing human error. The insurance industry, too, will be impacted by advancements in AI.
This article was originally published on TechRepublic.
Aerial imagery: Photos taken from the air, often with UAVs in smart farming; used to help farmers determine the condition of a field.
Agriculture 4.0: The integrated internal and external networking of farming operations as a result of the emergence of smart technology in agriculture.
Agro-chemicals: Chemicals used in agriculture, including fertilizers, herbicides, and pesticides.
Artificial intelligence (AI) – the science of making computers mimic humans using logic, decision trees, deep learning, and machine learning – is fast approaching the market opportunity around preventive and predictive maintenance. According to a recent GlobalData survey, the top two business challenges in Australia are improving operational efficiency and reducing costs. Many businesses, from manufacturers and producers of natural resources through to the agriculture and health sectors, need the ongoing reliability of machines and their constituent parts to keep the lights on. Unplanned outages, for example, can cost an oil and gas company on average $50 million annually. In the case of a wind farm, a single part failure means an entire turbine must come down, with a technical crew and crane on site at a cost of $100,000 or more each time.
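As a concrete illustration of where AI-flavoured predictive maintenance starts, here is a minimal anomaly detector that flags sensor readings deviating sharply from their recent history. The function name, window size, and threshold are assumptions made for this sketch, not any vendor's product:

```python
import numpy as np

def flag_anomalies(readings, window=20, threshold=4.0):
    """Flag indices where a reading deviates more than `threshold`
    standard deviations from the trailing-window mean."""
    readings = np.asarray(readings, dtype=float)
    flags = []
    for i in range(window, len(readings)):
        hist = readings[i - window:i]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            flags.append(i)
    return flags
```

Real deployments replace the rolling z-score with learned models over vibration, temperature, and acoustic data, but the loop is the same: stream in readings, flag outliers, and schedule maintenance before the $100,000 failure.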
When U.N. member states unanimously adopted the 2030 Agenda in 2015, the narrative around global development embraced a new paradigm of sustainability and inclusion--of planetary stewardship alongside economic progress, and inclusive distribution of income. This comprehensive agenda--merging social, economic and environmental dimensions of sustainability--is not supported by current modes of data collection and data analysis, so the report of the High-Level Panel on the post-2015 development agenda called for a "data revolution" to empower people through access to information.1 Today, a central development problem is that high-quality, timely, accessible data are absent in most poor countries, where development needs are greatest. In a world of unequal distributions of income and wealth across space, age and class, gender and ethnic pay gaps, and environmental risks, data that provide only national averages conceal more than they reveal. This paper argues that spatial disaggregation and timeliness could permit a process of evidence-based policy making that monitors outcomes and adjusts actions in a feedback loop that can accelerate development through learning. Big data and artificial intelligence are key elements in such a process. Emerging technologies could lead to the next quantum leap in (i) how data is collected; (ii) how data is analyzed; and (iii) how analysis is used for policymaking and the achievement of better results. Big data platforms expand the toolkit for acquiring real-time information at a granular level, while machine learning permits pattern recognition across multiple layers of input. Together, these advances could make data more accessible, scalable, and finely tuned. In turn, the availability of real-time information can shorten the feedback loop between results monitoring, learning, and policy formulation or investment, accelerating the speed and scale at which development actors can implement change.
Just like any lock can be picked, any biometric scanner can be fooled. Researchers have shown for years that the popular fingerprint sensors used to guard smartphones can be tricked sometimes, using a lifted print or a person's digitized fingerprint data. But new findings from computer scientists at New York University's Tandon School of Engineering could raise the stakes significantly. The group has developed machine learning methods for generating fake fingerprints--called DeepMasterPrints--that not only dupe smartphone sensors, but can successfully masquerade as prints from numerous different people. Think of it as a skeleton key for fingerprint-protected devices.
In contrast to the intense study of deep reinforcement learning (RL) in games and simulations, applying deep RL to real-world robots remains challenging, especially in high-risk scenarios. Though there has been some progress in RL-based control of real robots [2, 3, 4, 5], most of this previous work does not specifically address safety concerns in the RL training process. For the majority of high-risk real-world scenarios, deep RL still suffers from bottlenecks in both cost and safety. Collisions, for example, are extremely dangerous for UAVs, yet RL training can require thousands of them. Other works contribute to building simulation environments and bridging the gap between reality and simulation [4, 5]. However, building such simulation environments is arduous, and the gap can never be fully closed. To address the safety issue in real-world RL training, we present the Intervention Aided Reinforcement Learning (IARL) framework. Intervention is commonly used in real-world automatic control systems to ensure safety. It is also regarded as an important evaluation criterion for autonomous navigation systems, e.g. the disengagement ratio in autonomous driving.
Filtering is the general name for inferring the states of a dynamical system from observations. The most common filtering approach is Gaussian Filtering (GF), in which the distribution of the inferred states is a Gaussian whose mean is an affine function of the observations. This model carries two restrictions: Gaussianity and affinity. We propose a model that relaxes both assumptions, based on recent advances in implicit generative models. Empirical results show that the proposed method gives a significant advantage over GF and over nonlinear methods based on fixed nonlinear kernels.
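The GF restriction is easiest to see in the scalar Kalman filter, the canonical Gaussian filter: the posterior mean below is explicitly affine in the observation `z`. This is a minimal sketch; the dynamics, gain symbols, and noise levels are illustrative defaults, not anything from the paper:

```python
def kalman_step(mu, var, z, a=1.0, q=0.01, h=1.0, r=0.1):
    """One predict/update cycle of a 1-D Kalman filter.
    Note the update: mu_new is an affine function of z, which is
    exactly the GF restriction described above."""
    # Predict: propagate the Gaussian through the linear dynamics.
    mu_pred = a * mu
    var_pred = a * var * a + q
    # Update: the Kalman gain blends prediction and observation.
    k = var_pred * h / (h * var_pred * h + r)
    mu_new = mu_pred + k * (z - h * mu_pred)
    var_new = (1 - k * h) * var_pred
    return mu_new, var_new
```

When the true dynamics or observation model are nonlinear, or the noise non-Gaussian, this affine update is the wrong shape — which is the gap the implicit-generative-model approach aims to close.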
Abstract--One less-addressed issue in deep reinforcement learning is its limited ability to generalize to new states and new targets. For complex tasks, an agent must both produce the correct strategy and evaluate all possible actions in the current state. Fortunately, deep reinforcement learning has enabled enormous progress on both subproblems: producing the correct strategy, and evaluating actions given the state. In this paper we present an approach called orthogonal policy gradient descent (OPGD) that lets an agent learn the policy gradient from the current state and the action set, yielding a policy network with generalization capability. We prove that the global optimization objective of the proposed method can reach its maximum value, and we apply the framework to autonomous driving.
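The abstract does not spell out the OPGD update itself, so as a baseline illustration here is vanilla policy gradient (REINFORCE) on a two-armed bandit; the learning rate, baseline tracking, and reward values are assumptions of this sketch, not the paper's method:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def reinforce_bandit(rewards=(0.2, 0.8), steps=2000, lr=0.1, seed=0):
    """REINFORCE on a 2-armed bandit: nudge each action's logit by
    (reward - baseline) * grad log pi(action | logits)."""
    rng = np.random.default_rng(seed)
    logits = np.zeros(len(rewards))
    baseline = 0.0
    for _ in range(steps):
        probs = softmax(logits)
        a = rng.choice(len(rewards), p=probs)
        r = rewards[a] + 0.1 * rng.standard_normal()  # noisy reward
        baseline += 0.01 * (r - baseline)             # running baseline
        grad = -probs
        grad[a] += 1.0                                # d log pi(a) / d logits
        logits += lr * (r - baseline) * grad
    return softmax(logits)
```

A method like OPGD layers additional structure on top of this basic gradient estimator; the sketch only shows the common core, i.e. scoring actions in the current state and shifting probability mass toward the better ones.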