Understanding vectors in Word2Vector model • /r/MachineLearning

@machinelearnbot

In general, the goal of the loss function is to maximise the dot product between the input vector and the output vector while minimising the dot product between the input vector and other randomly sampled vectors. This makes the vectors of the input word and the output (context) word more similar. With CBOW, the idea is much the same but with a different formulation.
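As a concrete sketch of the objective described above (NumPy, with hypothetical variable names; a real word2vec implementation also handles negative sampling and SGD updates over two separate embedding tables):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_loss(v_in, v_out, v_negs):
    """Negative-sampling loss for one (input word, context word) pair.

    v_in   : vector of the input (centre) word, shape (d,)
    v_out  : vector of the observed context word, shape (d,)
    v_negs : vectors of k randomly sampled words, shape (k, d)
    """
    # Reward a large dot product with the true context word...
    pos = np.log(sigmoid(np.dot(v_in, v_out)))
    # ...and a small dot product with the random "negative" words.
    neg = np.sum(np.log(sigmoid(-(v_negs @ v_in))))
    return -(pos + neg)  # minimising this maximises both terms

# Example with random 50-dimensional vectors and 5 negative samples.
rng = np.random.default_rng(0)
print(sgns_loss(rng.normal(size=50), rng.normal(size=50),
                rng.normal(size=(5, 50))))
```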


Giveaway: Win an Anki Vector

PCWorld

Anki's Vector is an adorable companion that's packed with tech and personality. By simply speaking to Vector you can play games, have it take your picture, or ask it the weather via Alexa integration. As mentioned in our review, Vector is basically the modern version of the Tamagotchi virtual pet. He'll make you laugh, demand your attention, and just make you feel less lonely when you're at home by yourself. But if you found the $250 price tag hard to swallow, have no fear--we secured a Vector from Anki, and it could be yours!


Fall in love with Vector, the robot by Anki

#artificialintelligence

Based in California, the company Anki offers us a new robot model: Vector. It looks a bit like its predecessor, which was designed to teach kids to code! Vector takes on several roles, such as answering questions, giving you the weather, and taking pictures, all with animated expressions that make the robot endearing. Finally, I realise that in the long term Vector could easily replace smart speakers, minus the music.


BERT for Question Answering

#artificialintelligence

A few months back, I wrote a Medium article on BERT that covered its functionality, use cases, and implementation through Transformers. In this article, we will look at how we can use BERT to answer questions about a given context using the Transformers library from Hugging Face. Suppose the question asked is: Who wrote the fictionalized "Chopin"? You are given the context: Possibly the first venture into fictional treatments of Chopin's life was a fanciful operatic version of some of its events. Chopin was written by Giacomo Orefice and produced in Milan in 1901. All the music is derived from that of Chopin.
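As a sketch of what this looks like with Hugging Face's `pipeline` API (the SQuAD-fine-tuned checkpoint below is an assumed choice for illustration, not necessarily the model the article uses):

```python
from transformers import pipeline

# Load a question-answering pipeline backed by a BERT-style model
# fine-tuned on SQuAD (checkpoint name is an assumption, not the
# article's specific model).
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

context = (
    "Possibly the first venture into fictional treatments of Chopin's "
    "life was a fanciful operatic version of some of its events. Chopin "
    "was written by Giacomo Orefice and produced in Milan in 1901. All "
    "the music is derived from that of Chopin."
)

result = qa(question='Who wrote the fictionalized "Chopin"?',
            context=context)
print(result["answer"])  # expected: "Giacomo Orefice"
```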


Context Vectors are Reflections of Word Vectors in Half the Dimensions

Journal of Artificial Intelligence Research

This paper takes a step towards theoretical analysis of the relationship between word embeddings and context embeddings in models such as word2vec. We start from basic probabilistic assumptions on the nature of word vectors, context vectors, and text generation. These assumptions are supported either empirically or theoretically by the existing literature. Next, we show that under these assumptions the widely used word-word PMI matrix is approximately a random symmetric Gaussian ensemble. This, in turn, implies that context vectors are reflections of word vectors in approximately half the dimensions.
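To spell out the final step, here is a hedged reconstruction of the argument from the abstract alone (the notation $W$, $C$, $U$, $\Lambda$ is mine; the paper's formal derivation may differ):

```latex
% If word vectors (rows of $W$) and context vectors (rows of $C$)
% factorize a symmetric PMI matrix, one natural factorization uses
% the eigendecomposition:
\[
\mathrm{PMI} \approx W C^{\top}, \quad \mathrm{PMI} = \mathrm{PMI}^{\top}
  = U \Lambda U^{\top}, \quad
W = U\,|\Lambda|^{1/2}, \quad
C = U\,\operatorname{sign}(\Lambda)\,|\Lambda|^{1/2},
\]
% so each context vector is a signed reflection of its word vector:
\[
c_w = D\, v_w, \qquad
D = \operatorname{diag}\bigl(\operatorname{sign}\lambda_1,
      \dots, \operatorname{sign}\lambda_d\bigr).
\]
% For a random symmetric Gaussian ensemble the eigenvalues follow the
% semicircle law, which is symmetric about zero, so roughly half the
% entries of $D$ are $-1$: a reflection in about half the dimensions.
```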