This article follows my previous one on Bayesian probability & probabilistic programming, which I published a few months ago on LinkedIn. For the purposes of this article, I am going to assume that most readers have some idea of what a neural network (or artificial neural network) is. A neural network is a non-linear function approximator. We can think of it as a parameterized function, where the parameters are the network's weights and biases. Our data (inputs) pass through these parameters and then through some kind of non-linearity, such as a sigmoid function, which maps the result to a value between 0 and 1 and helps us make predictions or estimations. These non-linear functions can be composed together, which is what gives us deep learning: a neural network with multiple layers of these function compositions.
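The idea above can be sketched in a few lines: each layer is a parameterized function (an affine map followed by a sigmoid), and "deep" just means composing several of them. This is a minimal illustration, not a trained model; the weights here are random placeholders.

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the interval (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def layer(x, W, b):
    # One parameterized non-linear function: affine map, then sigmoid
    return sigmoid(W @ x + b)

# Two composed layers form a tiny "deep" network: f2(f1(x))
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)  # layer 1 parameters
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)  # layer 2 parameters

x = np.array([0.5, -1.2])            # an input with two features
y = layer(layer(x, W1, b1), W2, b2)  # composition of the two layers
print(y)  # a single value in (0, 1), usable as a probability
```

Training would then adjust the weights and biases so that these outputs match the targets in the data; the composition itself is all a forward pass is.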
Financial markets were among the earliest adopters of machine learning (ML): people have been using ML to spot patterns in the markets since the 1980s. Yet even though ML had notable successes in predicting market outcomes in the past, the recent advances in deep learning haven't helped financial market prediction much. While deep learning and other ML techniques have finally made it possible for Alexa, Google Assistant, and Google Photos to work, there hasn't been comparable progress when it comes to stock markets.
Recurrent neural networks (RNNs) have shown promising results in audio and speech-processing applications. The increasing popularity of Internet of Things (IoT) devices makes a strong case for implementing RNN-based inferences for applications such as acoustics-based authentication and voice commands for smart homes. However, the feasibility and performance of these inferences on resource-constrained devices remain largely unexplored. The authors compare traditional machine-learning models with deep-learning RNN models for an end-to-end authentication system based on breathing acoustics.
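To make "RNN-based inference" concrete, here is a minimal forward pass of a plain (Elman-style) recurrent cell over a sequence of acoustic feature frames. This is an illustrative sketch only, not the authors' model; the dimensions (13 features per frame, 8 hidden units, 20 frames) are hypothetical.

```python
import numpy as np

def rnn_step(h, x, Wxh, Whh, bh):
    # One recurrent step: new hidden state from the current input
    # frame and the previous hidden state
    return np.tanh(Wxh @ x + Whh @ h + bh)

rng = np.random.default_rng(1)
Wxh = rng.normal(scale=0.1, size=(8, 13))  # input-to-hidden weights
Whh = rng.normal(scale=0.1, size=(8, 8))   # hidden-to-hidden weights
bh = np.zeros(8)

# A made-up 20-frame clip of acoustic features (e.g. from breathing sounds)
frames = rng.normal(size=(20, 13))

h = np.zeros(8)          # initial hidden state
for x in frames:
    h = rnn_step(h, x, Wxh, Whh, bh)
# h now summarizes the whole sequence; an authentication system would
# feed it to a classifier head to accept or reject the user
print(h.shape)
```

The cost that matters on resource-constrained IoT devices is visible here: one matrix-vector product per frame per layer, which is what the feasibility comparison with traditional models is really about.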
Anyone concerned about computers taking over should look away now, because they are a step closer to sounding just like humans. Researchers at Google's DeepMind unit in the UK have been working on making computer-generated speech sound as "natural" as human speech. The technology, called WaveNet, focuses on speech synthesis, or text-to-speech, and was found to sound more natural than any of Google's existing text-to-speech products. However, this was only achieved after the WaveNet artificial neural network was trained to produce English and Chinese speech, which required copious amounts of computing power, so the technology probably won't be hitting the mainstream any time soon. WaveNet is built on a convolutional neural network, a deep-learning architecture that is trained on data and then makes inferences about new data; here it is used not just to analyze audio but to generate it.
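The convolutions WaveNet uses are causal and dilated: each output sample depends only on the current and past samples, spaced increasingly far apart, so the network can model long audio contexts cheaply. The toy function below is a simplified sketch of that one building block (a single filter, no gating or stacking), not WaveNet itself.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    # Causal: output at time t sees only samples at t and earlier.
    # Dilated: the taps are spaced `dilation` steps apart.
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad: no future leakage
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = np.arange(8, dtype=float)
y = causal_dilated_conv(x, np.array([1.0, 1.0]), dilation=2)
# With this filter, y[t] = x[t] + x[t-2] (zeros before the start)
print(y)  # → [ 0.  1.  2.  4.  6.  8. 10. 12.]
```

Stacking such layers with doubling dilations (1, 2, 4, 8, ...) lets the receptive field grow exponentially with depth, which is why generating raw audio sample-by-sample is feasible at all, even if it remains computationally expensive.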