An Online Sequence-to-Sequence Model Using Partial Conditioning

Neural Information Processing Systems

Sequence-to-sequence models have achieved impressive results on various tasks. However, they are unsuitable for tasks that require incremental predictions as more data arrives, or for tasks with long input and output sequences, because they generate an output sequence conditioned on the entire input sequence. In this paper, we present a Neural Transducer that can make incremental predictions as more input arrives, without redoing the entire computation. Unlike sequence-to-sequence models, the Neural Transducer computes the next-step distribution conditioned on the partially observed input sequence and the partially generated output sequence.
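
To make the partial-conditioning idea concrete, here is a rough PyTorch sketch (not the paper's exact architecture): the input is consumed in fixed-size blocks, and after each block the decoder may emit output tokens conditioned only on the input observed so far and the outputs already produced. The class name, layer sizes, and the end-of-block symbol `eob_id` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BlockwiseTransducer(nn.Module):
    """Sketch of block-wise partial conditioning (illustrative, not the paper's model)."""

    def __init__(self, in_dim, vocab_size, hidden=128, eob_id=0):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRUCell(2 * hidden, hidden)
        self.out = nn.Linear(hidden, vocab_size)
        self.eob_id = eob_id  # "end-of-block": decoder waits for more input

    @torch.no_grad()
    def stream(self, blocks, max_out_per_block=5):
        """blocks: iterable of (1, block_len, in_dim) tensors arriving online."""
        enc_h = None
        dec_h = torch.zeros(1, self.decoder.hidden_size)
        prev_tok = torch.tensor([self.eob_id])
        outputs = []
        for block in blocks:
            # Encode only the newly arrived block; enc_h carries earlier context,
            # so no computation over previous blocks is redone.
            enc_out, enc_h = self.encoder(block, enc_h)
            ctx = enc_out[:, -1]  # summary of the input observed so far
            for _ in range(max_out_per_block):
                dec_in = torch.cat([self.embed(prev_tok), ctx], dim=-1)
                dec_h = self.decoder(dec_in, dec_h)
                tok = self.out(dec_h).argmax(dim=-1)
                if tok.item() == self.eob_id:  # choose to wait for more input
                    break
                outputs.append(tok.item())
                prev_tok = tok
        return outputs
```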


Nested sequences of hippocampal assemblies during behavior support subsequent sleep replay

Science

Consolidation of spatial and episodic memories is thought to rely on replay of neuronal activity sequences during sleep. However, the network dynamics underlying the initial storage of memories during wakefulness have never been tested. Although slow, behavioral time scale sequences have been claimed to sustain sequential memory formation, fast ("theta") time scale sequences, nested within slow sequences, could be instrumental. We found that in rats traveling passively on a model train, place cells formed behavioral time scale sequences but theta sequences were degraded, resulting in impaired subsequent sleep replay. In contrast, when the rats actively ran on a treadmill while being transported on the train, place cells generated clear theta sequences and accurate trajectory replay during sleep.


Assembly Sequence Planning

AI Magazine

The sequence of mating operations that can be carried out to assemble a group of parts is constrained by the geometric and mechanical properties of the parts, their assembled configuration, and the stability of the resulting subassemblies. An approach to representing and reasoning about these sequences is described here; it leads to several alternative explicit and implicit plan representations. The Pleiades system will provide an interactive software environment for designers to evaluate alternative systems and product designs through their impact on the feasibility and complexity of the resulting assembly sequences.
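
As a toy illustration of how such constraints restrict the space of sequences (this is not the Pleiades system; the parts and precedence relations below are made up), the Python sketch enumerates assembly orders that satisfy simple "must already be assembled" constraints:

```python
from itertools import permutations

parts = ["base", "shaft", "gear", "cover"]

# must_precede[p] = parts that must already be in place before p can be mated
# (a stand-in for geometric/mechanical feasibility checks).
must_precede = {
    "base": set(),
    "shaft": {"base"},
    "gear": {"shaft"},
    "cover": {"base", "gear"},
}

def feasible(sequence):
    assembled = set()
    for part in sequence:
        if not must_precede[part] <= assembled:
            return False
        assembled.add(part)
    return True

valid_sequences = [seq for seq in permutations(parts) if feasible(seq)]
for seq in valid_sequences:
    print(" -> ".join(seq))
```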


Are Char-RNNs Generative or Discriminative Models? • /r/MachineLearning

@machinelearnbot

I was reading over Blocks' sequence generators, which seem to use RNNs with attention mechanisms to generate sequences. I'm not completely sure (I couldn't find any examples of them being used), but they seem to be designed for training in a way where they generate sequences and then compute the loss on the generated sequence, rather than just predicting the next character like a Char-RNN. Char-RNNs seem to be trained in a discriminative fashion, but they can be used to sample the next character in a sequence and then feed in a new string with the predicted/sampled character appended. This is more of a general discussion than a single question: is there a fundamental difference between learning a probability distribution and sampling from it (like a Char-RNN does), or is a Char-RNN also somehow implicitly learning to become a generative model like an RBM?
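
One way to make the distinction concrete: the short PyTorch sketch below (sizes and the random stand-in data are assumptions) trains with the usual next-character cross-entropy and then samples autoregressively. Each training step is a per-position classifier over the next character, but chaining those conditionals at sampling time defines a distribution over whole strings, which is why a char-RNN is usually described as a generative model of text.

```python
import torch
import torch.nn as nn

VOCAB = 256  # e.g. byte-level characters (assumed)

class CharRNN(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, VOCAB)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = CharRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training step: predict character t+1 from the prefix (teacher forcing).
text = torch.randint(0, VOCAB, (1, 65))  # stand-in for real characters
inputs, targets = text[:, :-1], text[:, 1:]
logits, _ = model(inputs)
loss = loss_fn(logits.reshape(-1, VOCAB), targets.reshape(-1))
opt.zero_grad()
loss.backward()
opt.step()

# Sampling: apply the same conditional p(next char | prefix) repeatedly,
# feeding each sampled character back in as the next input.
tok, h, generated = text[:, :1], None, []
for _ in range(20):
    logits, h = model(tok, h)
    probs = torch.softmax(logits[:, -1], dim=-1)
    tok = torch.multinomial(probs, 1)
    generated.append(tok.item())
```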


Keras Variable-Length Sequence-to-Sequence Learning with TimeDistributed Embeddings • /r/MachineLearning

@machinelearnbot

I'm trying to do some sequence-to-sequence learning with Keras, but I'm having trouble figuring out how to encode variable-length sequences. Essentially, I'm feeding my network a "story" (a series of sentences), and I'm trying to learn key words at each level (each sentence). However, this doesn't work, as there isn't a way to do TimeDistributed embeddings, nor do I know how to get a variable-length input encoded. If anyone knows a way to do this with Keras, your input would be very much appreciated.
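
One common workaround, sketched below with tf.keras, is to pad both levels to fixed sizes, build an inner sentence encoder, and wrap that whole encoder in TimeDistributed so it runs over each sentence of the story. The padding lengths, vocabulary size, and layer widths (MAX_SENTS, MAX_WORDS, VOCAB, EMB) are assumptions for illustration, and masking of padded words is omitted for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_SENTS, MAX_WORDS = 10, 30  # assumed padding lengths
VOCAB, EMB = 5000, 64          # assumed vocabulary / embedding sizes

# Inner encoder: turns one padded sentence of word ids into a fixed vector.
word_ids = layers.Input(shape=(MAX_WORDS,), dtype="int32")
emb = layers.Embedding(VOCAB, EMB)(word_ids)
sent_vec = layers.LSTM(128)(emb)
sentence_encoder = Model(word_ids, sent_vec)

# Outer model: TimeDistributed applies the sentence encoder to every sentence,
# giving one vector per sentence; a second LSTM then models the story level.
story = layers.Input(shape=(MAX_SENTS, MAX_WORDS), dtype="int32")
sent_seq = layers.TimeDistributed(sentence_encoder)(story)
h = layers.LSTM(128, return_sequences=True)(sent_seq)

# One key-word prediction per sentence (sparse labels over the vocabulary).
keywords = layers.TimeDistributed(layers.Dense(VOCAB, activation="softmax"))(h)

model = Model(story, keywords)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```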