Guide to Encoding Categorical Values in Python

#artificialintelligence

For this article, I was able to find a good dataset at the UCI Machine Learning Repository. This particular Automobile Data Set includes a good mix of categorical values as well as continuous values and serves as a useful example that is relatively easy to understand. Since domain understanding is an important aspect when deciding how to encode various categorical values, this data set makes a good case study. Before we get started encoding the various values, we need to import the data and do some minor cleanup. Since this article will only focus on encoding the categorical variables, we are going to include only the object columns in our dataframe.
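
A minimal sketch of that import-and-cleanup step, assuming the UCI imports-85.data file, which ships without a header row and uses "?" for missing values; the column names below follow the UCI attribute documentation and are supplied here for illustration:

```python
import pandas as pd

# The UCI Automobile Data Set has no header row and marks missing values with "?"
headers = ["symboling", "normalized_losses", "make", "fuel_type", "aspiration",
           "num_doors", "body_style", "drive_wheels", "engine_location",
           "wheel_base", "length", "width", "height", "curb_weight",
           "engine_type", "num_cylinders", "engine_size", "fuel_system",
           "bore", "stroke", "compression_ratio", "horsepower", "peak_rpm",
           "city_mpg", "highway_mpg", "price"]
df = pd.read_csv("imports-85.data", header=None, names=headers, na_values="?")

# Keep only the categorical (object) columns for the encoding examples
obj_df = df.select_dtypes(include=["object"]).copy()
print(obj_df.head())
```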


VALUE-Dx course, Madrid, 2 to 3 April 2019

#artificialintelligence

The first VALUE-Dx Course followed the VALUE-Dx Kick-Off meeting in Madrid, from 2 to 3 April 2019. Approximately 70 participants attended the course over a day and a half, during which 16 experts from the VALUE-Dx consortium presented in 11 sessions. The VALUE-Dx consortium comprises experts from varying disciplines and sectors, with extensive knowledge of the methods, approaches, and insights of their respective fields. Because VALUE-Dx depends on close collaboration to accomplish its objectives collectively, the course offered the opportunity to bring these experts together to share knowledge and cover the key disciplines involved in the project. The presentations served as introductory sessions to the various disciplines represented in the VALUE-Dx consortium.


Why We Must Embed Humanitarian Values In Artificial Intelligence Before It's Too Late

#artificialintelligence

There are two ways to look at the future: one sees us as prisoners of a technological revolution we can no longer control, with climate change running unchecked; the other sees us as architects of a better future, imprinting humanitarian values into technology before it becomes cleverer than we are.


Natural Value Approximators: Learning when to Trust Past Estimates

Neural Information Processing Systems

Neural networks have a smooth initial inductive bias, such that small changes in input do not lead to large changes in output. However, in reinforcement learning domains with sparse rewards, value functions have non-smooth structure with a characteristic asymmetric discontinuity whenever rewards arrive. We propose a mechanism that learns an interpolation between a direct value estimate and a projected value estimate computed from the encountered reward and the previous estimate. This reduces the need to learn about discontinuities, and thus improves the value function approximation. Furthermore, as the interpolation is learned and state-dependent, our method can deal with heterogeneous observability. We demonstrate that this one change leads to significant improvements on multiple Atari games when applied to the state-of-the-art A3C algorithm.
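
A rough illustration of that interpolation, with names that are assumptions for exposition rather than the authors' code: the projected estimate inverts the one-step Bellman relation V(s_{t-1}) ≈ r_{t-1} + γV(s_t), and a learned, state-dependent gate mixes it with the network's direct output.

```python
# Hypothetical sketch of a natural value estimate, not the authors' implementation.
# v_direct: the network's direct value output for the current state
# v_prev:   the combined estimate from the previous step
# r_prev:   reward received on the previous transition
# beta:     learned, state-dependent gate in [0, 1]
# gamma:    discount factor
def natural_value(v_direct, v_prev, r_prev, beta, gamma=0.99):
    # Projected estimate: invert V(s_{t-1}) ~= r_{t-1} + gamma * V(s_t)
    v_projected = (v_prev - r_prev) / gamma
    # Learned interpolation between the projected and direct estimates
    return beta * v_projected + (1.0 - beta) * v_direct
```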


Value Pursuit Iteration

Neural Information Processing Systems

Value Pursuit Iteration (VPI) is an approximate value iteration algorithm that finds a close-to-optimal policy for reinforcement learning and planning problems with large state spaces. VPI has two main features: First, it is a nonparametric algorithm that finds a good sparse approximation of the optimal value function given a dictionary of features. The algorithm is almost insensitive to the number of irrelevant features. Second, after each iteration of VPI, the algorithm adds a set of functions based on the currently learned value function to the dictionary. This increases the representation power of the dictionary in a way that is directly relevant to the goal of having a good approximation of the optimal value function.
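
A rough sketch of one VPI-style loop under simplified assumptions: sparse fitting is done here with scikit-learn's OrthogonalMatchingPursuit, and evaluate_dictionary, bellman_backup, and add_atoms are hypothetical callbacks standing in for the paper's dictionary evaluation, Bellman operator, and dictionary-enrichment steps; this is not the paper's exact algorithm.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# evaluate_dictionary(states) -> feature matrix Phi (n_states x n_atoms)
# bellman_backup(values, states) -> one-step Bellman targets at those states
# add_atoms(values) -> enriches the dictionary with functions of the estimate
def value_pursuit_iteration(states, evaluate_dictionary, bellman_backup,
                            add_atoms, n_iterations=10, sparsity=20):
    values = np.zeros(len(states))                 # current value estimates
    for _ in range(n_iterations):
        targets = bellman_backup(values, states)   # approximate T*V at samples
        phi = evaluate_dictionary(states)          # features for all atoms
        # Sparse fit of the backup targets over the dictionary; selecting
        # few atoms keeps the fit nearly insensitive to irrelevant features
        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity)
        omp.fit(phi, targets)
        values = omp.predict(phi)
        # Grow the dictionary using the currently learned value function
        add_atoms(values)
    return values
```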