#000 Linear Algebra for Machine Learning Master Data Science 14.03.2020

#artificialintelligence

Highlights: Linear algebra is a branch of mathematics concerned with linear equations, linear functions, and their representations through matrices and vector spaces. Basically, it is the science of numbers that powers diverse Data Science algorithms and applications. To fully comprehend machine learning, a grasp of linear algebra fundamentals is an essential prerequisite. In this post, you will discover what exactly linear algebra is and what its main applications in machine learning are. Linear algebra is a foundation of machine learning; before you start to study machine learning, you need a solid understanding of this field. If you are a fan and practitioner of machine learning, this post will help you see where linear algebra is applied, and you can benefit from these insights. In machine learning, data is most often represented as vectors, matrices, or tensors.
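The vector/matrix/tensor distinction above can be made concrete with a minimal sketch (assuming NumPy; the sample values are illustrative, not from the article):

```python
import numpy as np

# A single sample's features as a vector (1-D array)
x = np.array([1.0, 2.0, 3.0])

# A dataset of several samples as a matrix (rows = samples, columns = features)
X = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# A small batch of 4x4 grayscale images as a 3-D tensor (batch, height, width)
T = np.zeros((2, 4, 4))

print(x.ndim, X.ndim, T.ndim)  # 1 2 3
```

The rule of thumb: a vector is a 1-D array, a matrix is 2-D, and "tensor" covers arrays of any higher dimension.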


Microelectronic Implementations of Connectionist Neural Networks

Neural Information Processing Systems

Three chip designs are described: a hybrid digital/analog programmable connection matrix, an analog connection matrix with adjustable connection strengths, and a digital pipelined best-match chip. The common feature of the designs is the distribution of arithmetic processing power amongst the data storage to minimize data movement.


It is clear that conventional computers lag far behind organic computers when it comes to dealing with very large data rates in problems such as computer vision and speech recognition.


transformers.html

#artificialintelligence

Finally the discomfort of not knowing what makes them tick grew too great for me. Transformers were introduced in this 2017 paper as a tool for sequence transduction--converting one sequence of symbols to another. The most popular example of this is translation, as in English to German. They have also been modified to perform sequence completion--given a starting prompt, carry on in the same vein and style. They have quickly become an indispensable tool for research and product development in natural language processing.

Before we start, just a heads-up. We're going to be talking a lot about matrix multiplications and touching on backpropagation (the algorithm for training the model), but you don't need to know any of it beforehand. We'll add the concepts we need one at a time, with explanation. This isn't a short journey, but I hope you'll be glad you came.

In the beginning were the words. Our first step is to convert all the words to numbers so we can do math on them. Imagine that our goal is to create a computer that responds to our voice commands. It's our job to build the transformer that converts (or transduces) a sequence of sounds to a sequence of words.

We start by choosing our vocabulary, the collection of symbols that we are going to be working with in each sequence. In our case, there will be two different sets of symbols: one for the input sequence to represent vocal sounds and one for the output sequence to represent words. For now, let's assume we're working with English. There are tens of thousands of words in the English language, and perhaps another few thousand to cover computer-specific terminology. That would give us a vocabulary size that is the better part of a hundred thousand.

One way to convert words to numbers is to start counting at one and assign each word its own number. Then a sequence of words can be represented as a list of numbers. For example, consider a tiny language with a vocabulary size of three: files, find, and my.
Each word could be swapped out for a number, perhaps files 1, find 2, and my 3. Then the sentence "Find my files", consisting of the word sequence [ find, my, files ], could be represented instead as the sequence of numbers [2, 3, 1]. This is a perfectly valid way to convert symbols to numbers, but it turns out that there's another format that's even easier for computers to work with: one-hot encoding. In one-hot encoding a symbol is represented by an array of mostly zeros, the same length as the vocabulary, with only a single element having a value of one. Another way to think about one-hot encoding is that each word still gets assigned its own number, but now that number is an index to an array. Here is our example above, in one-hot notation.
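The one-hot scheme described above can be sketched in a few lines of plain Python. Note one small assumption: the code uses zero-based indices (files 0, find 1, my 2) as is conventional for arrays, whereas the prose counts from one.

```python
# Toy vocabulary from the text: files, find, my
vocab = ["files", "find", "my"]
word_to_index = {word: i for i, word in enumerate(vocab)}

def one_hot(word):
    """Return an array of zeros, the length of the vocabulary,
    with a single 1 at the word's index."""
    vec = [0] * len(vocab)
    vec[word_to_index[word]] = 1
    return vec

# "Find my files" as a sequence of one-hot vectors
sentence = ["find", "my", "files"]
encoded = [one_hot(w) for w in sentence]
print(encoded)  # [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
```

Each vector picks out exactly one row of an identity matrix, which is what makes one-hot vectors so convenient for the matrix multiplications discussed later.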


Frequency Based Index Estimating the Subclusters' Connection Strength

arXiv.org Machine Learning

In this paper, a frequency coefficient based on the Sen-Shorrocks-Thon (SST) poverty index notion is proposed. The clustering SST index can be used as a method for determining the connection between similar neighboring sub-clusters. Consequently, these connections can reveal the existence of natural homogeneous clusters. By estimating the connection strength, we can also verify the estimated number of natural clusters, which is a necessary assumption for efficient market segmentation, campaign management, and financial decisions. The index can be used as a complementary tool for U-matrix visualization. The index is tested on an artificial dataset with known parameters and compared with results obtained by the Unified-distance matrix method.