What Did You Think Would Happen? Explaining Agent Behaviour through Intended Outcomes

Neural Information Processing Systems

We present a novel form of explanation for Reinforcement Learning, based on the notion of intended outcome. These explanations describe the outcome an agent is trying to achieve by its actions. We provide a simple proof that general methods for post-hoc explanations of this nature are impossible in traditional reinforcement learning; rather, the information needed for the explanations must be collected in conjunction with training the agent. We derive approaches designed to extract local explanations based on intention for several variants of Q-function approximation and prove consistency between the explanations and the Q-values learned. We demonstrate our method on multiple reinforcement learning problems, and provide code to help researchers introspect their RL environments and algorithms.
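The abstract does not spell out the construction, but the core idea (gathering explanation data during training rather than post hoc) can be sketched in a tabular setting: learn an expected-outcome map alongside the Q-table, using the same temporal-difference structure. The environment size, learning rates, and function names below are illustrative assumptions of ours, not the paper's implementation.

```python
import numpy as np

# Minimal sketch (not the paper's exact method): alongside each Q-value we
# maintain a discounted expected-visitation vector over states, updated with
# the same Bellman-style rule as Q. After training, belief[s, a] describes
# which future states the agent "intends" to reach when taking a in s.

n_states, n_actions = 16, 4
gamma, alpha = 0.95, 0.1

Q = np.zeros((n_states, n_actions))
belief = np.zeros((n_states, n_actions, n_states))  # expected discounted visits

def update(s, a, r, s_next, done):
    """One transition: update Q and the outcome map with the same TD targets."""
    a_next = Q[s_next].argmax()
    q_target = r + (0.0 if done else gamma * Q[s_next, a_next])
    Q[s, a] += alpha * (q_target - Q[s, a])

    # Same temporal-difference structure, but over state-visitation vectors.
    e = np.zeros(n_states)
    e[s_next] = 1.0
    b_target = e + (0.0 if done else gamma * belief[s_next, a_next])
    belief[s, a] += alpha * (b_target - belief[s, a])

def explain(s, a, k=3):
    """Return the k future states this action is most directed toward."""
    return np.argsort(belief[s, a])[::-1][:k]
```

Because the outcome map is trained with the same targets as Q, the two stay consistent in this sketch: for state-based rewards, Q(s, a) is approximately the dot product of belief[s, a] with the reward vector, mirroring the kind of consistency property the abstract mentions.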


Training Scale-Invariant Neural Networks on the Sphere Can Happen in Three Regimes

Neural Information Processing Systems

A fundamental property of deep learning normalization techniques, such as batch normalization, is that they make the pre-normalization parameters scale-invariant. The intrinsic domain of such parameters is the unit sphere, so their gradient optimization dynamics can be represented as spherical optimization with a varying effective learning rate (ELR), which has been studied previously. However, the varying ELR may obscure certain characteristics of the intrinsic loss landscape structure. In this work, we investigate the properties of training scale-invariant neural networks directly on the sphere using a fixed ELR. We discover three regimes of such training, depending on the ELR value: convergence, chaotic equilibrium, and divergence. We study these regimes in detail, both through a theoretical examination of a toy example and through a thorough empirical analysis of real scale-invariant deep learning models. Each regime has unique features and reflects specific properties of the intrinsic loss landscape, some of which have strong parallels with previous research on training both regular and scale-invariant neural networks. Finally, we demonstrate how the discovered regimes are reflected in conventional training of normalized networks and how they can be leveraged to achieve better optima.
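A toy experiment makes the setup concrete. The sketch below optimizes a scale-invariant loss directly on the unit sphere with a fixed ELR; the quadratic loss, dimension, and ELR values are hypothetical choices of ours, not the paper's experimental setup, and the regime boundaries depend on the landscape.

```python
import numpy as np

# Toy sketch of fixed-ELR training on the unit sphere (illustrative only).
# A scale-invariant loss is optimized by taking a tangential gradient step
# and projecting back onto the sphere after each update.

rng = np.random.default_rng(0)
d = 50
A = rng.standard_normal((d, d))
A = A.T @ A  # positive semi-definite quadratic form

def loss(w):
    # Scale-invariant by construction: depends only on the direction of w.
    u = w / np.linalg.norm(w)
    return u @ A @ u

def grad(w):
    u = w / np.linalg.norm(w)
    g = 2 * A @ u
    return g - (g @ u) * u  # tangential component: the gradient on the sphere

def train(elr, steps=2000):
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for _ in range(steps):
        w = w - elr * grad(w)
        w /= np.linalg.norm(w)  # stay on the unit sphere (fixed ELR)
    return loss(w)

# Sweeping the ELR exposes qualitatively different behaviour: small values
# settle toward a minimum, intermediate ones hover in a noisy equilibrium,
# and large ones bounce chaotically far from any minimum.
for elr in (1e-3, 1e-1, 10.0):
    print(elr, train(elr))
```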


What Happens When Artificial Intelligence Has Read Everything?

#artificialintelligence

What happens when artificial intelligence has read everything? Article: "What Happens When AI Has Read Everything?" – The Atlantic. Artificial intelligence has improved dramatically by scanning more and more information online. So, what happens when it has read everything and runs out of material to train on? Read the article above to learn more!


This is how AI bias really happens – and why it's so hard to fix – MIT Technology Review

#artificialintelligence

The first thing computer scientists do when they create a deep-learning model is decide what they actually want it to achieve. A credit card company, for example, might want to predict a customer's creditworthiness, but "creditworthiness" is a rather nebulous concept. In order to translate it into something that can be computed, the company must decide whether it wants to, say, maximize its profit margins or maximize the number of loans that get repaid. It could then define creditworthiness within the context of that goal. The problem is that "those decisions are made for various business reasons other than fairness or discrimination," explains Solon Barocas, an assistant professor at Cornell University who specializes in fairness in machine learning. If the algorithm discovered that giving out subprime loans was an effective way to maximize profit, it would end up engaging in predatory behavior even if that wasn't the company's intention.
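A toy calculation makes the point concrete. In the sketch below, all probabilities, interest rates, and thresholds are invented for illustration; it shows how the same repayment scores yield different approval sets depending on which definition of creditworthiness gets encoded as the objective.

```python
import numpy as np

# Hypothetical toy model: the same applicants, scored with the same
# repayment probabilities, get different decisions depending on whether the
# lender's target is "maximize expected profit" or "maximize repayment".
# All figures here are invented for illustration.

rng = np.random.default_rng(1)
p_repay = rng.uniform(0.3, 0.99, size=1000)  # model's repayment estimates

interest = 0.30          # assumed return on a repaid high-interest loan
loss_given_default = 1.0 # assumed loss when the loan is not repaid

# Objective 1: approve whenever expected profit is positive.
expected_profit = p_repay * interest - (1 - p_repay) * loss_given_default
approve_profit = expected_profit > 0   # approves down to ~77% repayment odds

# Objective 2: approve only likely repayers.
approve_repay = p_repay > 0.9

print("profit-driven approvals:   ", approve_profit.sum())
print("repayment-driven approvals:", approve_repay.sum())
# The profit objective can rationally extend riskier, high-interest credit;
# the fairness question enters through which definition is optimized.
```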


This is how AI bias really happens – and why it's so hard to fix

#artificialintelligence

Over the past few months, we've documented how the vast majority of AI's applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. We've also covered how these technologies affect people's lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system. But it's not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place. We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process.


Unpredictions – what won't happen with artificial intelligence (Includes interview and first-hand account)

#artificialintelligence

Artificial intelligence and machine learning are two of the key tools for the digital transformation of many businesses. From Amazon Alexa to autonomous vehicles, artificial intelligence is progressing at a very fast rate. However, there remain many technological limitations on what machine intelligence can deliver in the short term. The company Conversica is a leader in conversational artificial intelligence for business, and Conversica Chief Scientist Dr. Sid J. Reddy has shared with Digital Journal readers four things that are unlikely to happen with artificial intelligence during 2018. Dr. Reddy refers to these as "unpredictions," turning on its head the common analyst practice of making predictions.


Clever Modular Robots Turn Legs Into Arms on Demand

IEEE Spectrum Robotics Channel

Robots that can be physically reconfigured to do lots of different things are, in theory, a great way to maximize versatility while saving time and effort. Okay, yeah, that may not sound super exciting, but it means you can teach a dodecapod robot to transition into a septapod robot that can carry stuff with two arms while using a third to point a camera. Programmed in advance, that is, which is fine, except that as robots get more modular and easier to physically reconfigure, it becomes more and more useful to have a generalized system that can dynamically generate gaits (and transitions between gaits) on the fly no matter what the leg configuration of your robot happens to be. The researchers are planning on extending their method to include dynamic gaits, which means things like (we hope) running and jumping, and they're also going to generalize to other morphologies like bipeds and tripeds.
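As a rough illustration of what "dynamically generating a gait for any leg configuration" can mean, here is a generic central-pattern-style sketch of ours, not the researchers' actual method: the gait is parameterized by per-leg phase offsets that are simply regenerated whenever the morphology changes.

```python
# Illustrative sketch: assign each leg an evenly staggered phase offset, so
# the same code drives a dodecapod, a septapod, or a tripod without
# reprogramming. The duty factor is the fraction of the cycle in stance.

def gait_phases(n_legs: int, duty: float = 0.75):
    """Evenly stagger leg phases around one gait cycle."""
    return [i / n_legs for i in range(n_legs)], duty

def leg_state(t: float, phase: float, duty: float, period: float = 1.0):
    """Return ('stance', progress) or ('swing', progress) for one leg at time t."""
    local = ((t / period) + phase) % 1.0
    if local < duty:
        return "stance", local / duty
    return "swing", (local - duty) / (1.0 - duty)

# Reconfigure from 12 legs to 7 just by regenerating the offsets:
for n in (12, 7):
    phases, duty = gait_phases(n)
    states = [leg_state(0.25, p, duty)[0] for p in phases]
    print(n, "legs at t=0.25:", states)
```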


To Control AI, We Need to Understand More About Humans

#artificialintelligence

And among the things we urgently need to learn more about is not just how artificial intelligence works, but how humans work. Humans are also the only species to have developed "group normativity" – an elaborate system of rules and norms that designate what is collectively acceptable and not acceptable for other people to do, kept in check by group efforts to punish those who break the rules. But our complex normative social orders are less about ethical choices than they are about the coordination of billions of people making millions of choices on a daily basis about how to behave. Are we prepared for AIs that start building their own normative systems--their own rules about what is acceptable and unacceptable for a machine to do--in order to coordinate their own interactions?


How AI is Transforming the Contact Center

#artificialintelligence

Today's contact center agents must be able to communicate with customers not only on the phone but via social media, instant messaging, video conferencing and web chat. How can humans do it all? That's why many companies are implementing bots powered by artificial intelligence to work in their contact centers and communicate with their customers. Gartner predicts that, by 2020, 85% of all customer interactions will no longer be managed by humans. Facebook, Apple, Microsoft and Google are all building virtual assistants and chatbots that can respond to voice queries and engage in a fairly natural dialog with users.