
Collaborating Authors

 igloo


Southeast Asia insurtech Igloo increases its Series B to $46M • TechCrunch

#artificialintelligence

Igloo, a Singapore-based insurtech focused on underserved communities in Southeast Asia, announced it has raised a $27 million Series B extension, bringing the round's total to $46 million. The first tranche of $19 million, announced in March, was led by Cathay Innovation, with participation from ACA and returning investor OpenSpace. The newest tranche was led by the InsuResilience Investment Fund II, which was launched by the German development bank KfW on behalf of the German Federal Ministry for Economic Cooperation and is managed by impact investor BlueOrchard. Other investors included Women's World Banking Asset Management (WAM), FinnFund, La Maison and returning investor Cathay Innovation. Igloo develops its insurance products and then partners with insurers who underwrite its policies.


A Look at IGLOO: Slicing the Features Space to Represent Sequences – IAM Network

#artificialintelligence

Sequences are central to deep learning. Whether in natural language processing (NLP) or with biological data (e.g. RNA sequences), neural networks try to find a representation for sequences of tokens, either to classify them or to generate new ones following a given logic. There are generally two approaches to this task: the first is Recurrent Neural Networks (RNNs) and their variants (GRU and LSTM); the second is Transformers. The first method processes elements of the sequence recursively, while the second relies on self-attention between elements of the sequence. Each approach has had great success, but neither is particularly suited to long sequences. Experiments show that LSTMs have a difficult time dealing with sequences longer than 5,000 steps, while Transformers struggle because of their large memory requirements.
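The memory claim above is easy to make concrete with a back-of-envelope calculation (the function names below are ours, not from the article): self-attention materialises a score for every pair of positions, so its memory grows quadratically with sequence length, while an RNN only ever holds a fixed-size state.

```python
def attention_score_memory(seq_len: int, bytes_per_float: int = 4) -> int:
    """Bytes for one full self-attention score matrix.

    Self-attention compares every token with every other token,
    so the score matrix is seq_len x seq_len: quadratic growth.
    """
    return seq_len * seq_len * bytes_per_float

def rnn_state_memory(hidden_size: int, bytes_per_float: int = 4) -> int:
    """Bytes an RNN needs for its recurrent state at any one step.

    The state is a fixed-size vector, independent of sequence length,
    but it must be updated once per step, so gradients have to flow
    through thousands of steps on long sequences.
    """
    return hidden_size * bytes_per_float

# At 5,000 steps (where the article says LSTMs start to struggle),
# a single attention score matrix already needs ~100 MB per head:
print(attention_score_memory(5_000) / 1e6, "MB")  # 100.0 MB
print(rnn_state_memory(512), "bytes")             # 2048 bytes
```

This is why longer sequences force a trade-off: the RNN's constant memory comes at the cost of strictly sequential computation, while attention's parallelism comes at quadratic memory cost.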


Learn to Interpret Atari Agents

Yang, Zhao, Bai, Song, Zhang, Li, Torr, Philip H. S.

arXiv.org Machine Learning

Deep Reinforcement Learning (DeepRL) agents surpass human-level performance in a multitude of tasks. However, the direct mapping from states to actions makes it hard to interpret the rationale behind the decision making of agents. In contrast to previous a-posteriori methods of visualizing DeepRL policies, we propose an end-to-end trainable framework based on Rainbow, a representative Deep Q-Network (DQN) agent. Our method automatically learns important regions in the input domain, which enables characterizations of the decision making and interpretations of non-intuitive behaviors. Hence we name it Region Sensitive Rainbow (RS-Rainbow). RS-Rainbow utilizes a simple yet effective mechanism to incorporate visualization ability into the learning model, not only improving model interpretability but also leading to improved performance. Extensive experiments on the challenging Atari 2600 platform demonstrate the superiority of RS-Rainbow. In particular, our agent achieves state of the art at just 25% of the training frames. Demonstrations and code are available at https://github.com/yz93/Learn-to-
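The general idea of a learned region-scoring mechanism can be sketched as follows. This is a minimal NumPy illustration of soft spatial weighting over CNN features, not the paper's exact RS-Rainbow architecture; the function names and the linear scoring head are our assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def region_sensitive_pool(features, score_w):
    """Weight CNN feature-map positions by a learned saliency score.

    features: (H*W, C) flattened spatial feature map from the backbone.
    score_w:  (C,) parameters of a linear scoring head (a stand-in for
              the learned region-scoring module described in the paper).
    Returns the pooled feature vector and the per-position weights;
    the weights can be rendered over the input frame as a saliency map,
    which is how such a mechanism yields visual interpretations.
    """
    scores = features @ score_w      # one importance score per position
    weights = softmax(scores)        # normalised region importance
    pooled = weights @ features      # (C,) importance-weighted features
    return pooled, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(49, 64))   # e.g. a 7x7x64 conv output, flattened
w = rng.normal(size=64)
pooled, weights = region_sensitive_pool(feats, w)
print(pooled.shape)                 # (64,)
```

Because the weighting is differentiable, the scoring head can be trained end-to-end with the Q-learning loss, which matches the abstract's claim that interpretability comes for free during training rather than from a post-hoc analysis.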


IGLOO: Slicing the Features Space to Represent Long Sequences

Sourkov, Vsevolod

arXiv.org Machine Learning

We introduce a new neural network architecture, IGLOO, which aims to provide a representation for long sequences where RNNs fail to converge. The structure uses the relationships between random patches sliced out of the feature space of a one-dimensional CNN backbone to find a representation. This paper explains the implementation of the method, provides results on benchmarks commonly used for RNNs, and compares IGLOO to other recently published architectures. We find that IGLOO can deal with sequences of up to 25,000 time steps. It is also effective for shorter sequences, and achieves the highest score in the literature on the permuted MNIST task. Benchmarks also show that IGLOO can run at the speed of the cuDNN-optimised GRU or LSTM without being tied to any specific hardware.
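The core mechanism described above can be sketched in a few lines. This is a simplified NumPy illustration under our own assumptions (random weights stand in for learned parameters, and the reduction is a plain weighted sum), not the reference implementation:

```python
import numpy as np

def igloo_representation(feature_map, n_patches=100, patch_size=4, seed=0):
    """Sketch of the IGLOO idea: represent a long sequence through
    random patches sliced out of a CNN feature space.

    feature_map: (T, C) output of a 1D-CNN backbone over the sequence.
    Each patch gathers patch_size positions sampled from anywhere in
    the sequence, so distant time steps are related directly rather
    than through recurrence -- which is why sequence length does not
    hurt convergence the way it does for an RNN. Each patch is
    multiplied elementwise by (here random, in practice learned)
    weights and reduced to a scalar; the n_patches outputs together
    form the sequence representation.
    """
    T, C = feature_map.shape
    rng = np.random.default_rng(seed)
    # Random slice indices: (n_patches, patch_size) positions in [0, T)
    idx = rng.integers(0, T, size=(n_patches, patch_size))
    patches = feature_map[idx]                       # (K, p, C)
    W = rng.normal(size=(n_patches, patch_size, C))  # learned in practice
    return (patches * W).sum(axis=(1, 2))            # (K,) representation

feats = np.random.default_rng(1).normal(size=(25_000, 32))
rep = igloo_representation(feats)
print(rep.shape)  # (100,)
```

Note that the cost of this gather-and-reduce is independent of how far apart the sampled positions are, which is consistent with the abstract's claim of handling sequences of up to 25,000 steps.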