Language Learning


Machines Are Developing Language Skills Inside Virtual Worlds

MIT Technology Review

Both the DeepMind and CMU approaches use deep reinforcement learning, popularized by DeepMind's Atari-playing AI. A neural network is fed raw pixel data from a virtual environment and uses rewards, like points in a computer game, to learn by trial and error (see "10 Breakthrough Technologies 2017: Reinforcement Learning"). By running through millions of training scenarios at accelerated speeds, both AI programs learned to associate words with particular objects and characteristics, which let them follow the commands. The millions of training runs required mean Domingos is not convinced pure deep reinforcement learning will ever crack the real world.
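The excerpt describes the reward-driven trial-and-error loop only in words. As a rough illustration, the sketch below runs that loop in miniature with tabular Q-learning on a toy five-cell corridor; the published systems use deep networks over raw pixels, so the grid, reward, and hyperparameters here are illustrative stand-ins, not the DeepMind or CMU setup:

    import numpy as np

    # Toy stand-in for the trial-and-error loop described above: a 1-D
    # corridor of 5 cells with a reward at the right end.
    n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.9, 0.1
    rng = np.random.default_rng(0)

    for episode in range(500):
        state = 0
        while state != n_states - 1:
            # epsilon-greedy: mostly exploit, occasionally explore
            if rng.random() < epsilon:
                action = int(rng.integers(n_actions))
            else:
                action = int(np.argmax(Q[state]))
            next_state = max(0, state - 1) if action == 0 else state + 1
            reward = 1.0 if next_state == n_states - 1 else 0.0
            # Q-learning update: nudge the estimate toward
            # reward + discounted best future value
            Q[state, action] += alpha * (
                reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

    print(np.argmax(Q[:-1], axis=1))  # -> [1 1 1 1]: move right everywhere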


IoT - A Support Vector Machine Implementation for Sign Language Recognition on Intel Edison.

#artificialintelligence

This project explores the development of a sign-language-to-speech translation glove by implementing a Support Vector Machine (SVM) on the Intel Edison to recognize the various letters signed by sign language users. Support Vector Machines (SVMs) are supervised machine learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a non-probabilistic binary linear classifier. It is important that users set their own preferred sampling interval, since a user just beginning to learn sign language will sign at a slower rate than a sign language expert.
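As a concrete illustration of the classifier described above, here is a minimal scikit-learn sketch. The glove's actual features and letter set are not given in the excerpt, so the five "flex-sensor" columns and the two letters below are hypothetical:

    import numpy as np
    from sklearn import svm

    # Hypothetical training set: each row is one glove reading (five
    # flex-sensor values), each label a signed letter.
    rng = np.random.default_rng(0)
    X_a = rng.normal(loc=0.2, scale=0.05, size=(50, 5))  # readings for 'A'
    X_b = rng.normal(loc=0.8, scale=0.05, size=(50, 5))  # readings for 'B'
    X = np.vstack([X_a, X_b])
    y = ["A"] * 50 + ["B"] * 50

    clf = svm.SVC(kernel="linear")  # non-probabilistic binary linear classifier
    clf.fit(X, y)
    print(clf.predict([[0.21, 0.18, 0.25, 0.19, 0.22]]))  # -> ['A']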


Recognising sign language signs from glove sensor data

#artificialintelligence

I saved the data using numpy's np.save() function. Each row has length 2992, which corresponds to the length of the longest recording (136) multiplied by the number of variables (22). I used numpy's linear interpolator to fill in the gaps: the code generates a scaffold of NaN values, randomly chooses a list of indices from the scaffold corresponding to the length of the sign being interpolated, fills the real values into the scaffold at those random indices, and then interpolates the missing values. The pipeline specifies three steps: scaling using StandardScaler(), principal component analysis using the PCA() function, and finally a linear support vector classifier (LinearSVC()). Once the PCA algorithm has reduced the dimensionality of the data to just those components that capture the most variance, the data is used to train the linear support vector classification (SVC) model.
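Putting the described steps together, here is a minimal end-to-end sketch reconstructed from the excerpt alone. The stretch_recording helper, the synthetic recordings, and the choice of 10 principal components are assumptions for illustration, not the author's actual code:

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    MAX_LEN, N_VARS = 136, 22  # longest recording x sensor channels = 2992

    def stretch_recording(recording, max_len=MAX_LEN):
        # Stretch a (frames, 22) recording to (max_len, 22) as the post
        # describes: scatter the real frames across a NaN scaffold at
        # random sorted indices, then interpolate each channel's gaps.
        scaffold = np.full((max_len, recording.shape[1]), np.nan)
        idx = np.sort(rng.choice(max_len, size=recording.shape[0],
                                 replace=False))
        scaffold[idx] = recording
        xs = np.arange(max_len)
        for col in range(scaffold.shape[1]):
            known = ~np.isnan(scaffold[:, col])
            scaffold[:, col] = np.interp(xs, xs[known], scaffold[known, col])
        return scaffold.ravel()  # flatten to one row of length 2992

    # Hypothetical data: 40 variable-length recordings of two signs.
    X = np.array([
        stretch_recording(rng.normal(size=(int(rng.integers(60, 137)), N_VARS)))
        for _ in range(40)])
    y = np.array([0, 1] * 20)
    np.save("signs.npy", X)  # persist the fixed-length rows, as in the post

    pipeline = Pipeline([
        ("scale", StandardScaler()),    # zero mean, unit variance per feature
        ("pca", PCA(n_components=10)),  # keep highest-variance components
        ("svc", LinearSVC()),           # linear support vector classifier
    ])
    pipeline.fit(X, y)
    print(pipeline.score(X, y))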