Autoencoder - Wikipedia

#artificialintelligence

An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner.[1] The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal "noise". Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input, hence its name. Several variants of the basic model exist, with the aim of forcing the learned representations of the input to assume useful properties.[2] Examples are the regularized autoencoders (sparse, denoising and contractive autoencoders), proven effective in learning representations for subsequent classification tasks,[3] and variational autoencoders, with their recent applications as generative models.[4] Autoencoders are used effectively to solve many applied problems, from face recognition[5] to acquiring the semantic meaning of words.[6][7]
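To make the encode/decode structure concrete, here is a minimal sketch of an autoencoder in Keras; the 784-dimensional input, layer sizes, and random stand-in data are illustrative assumptions, not part of the article.

```python
# A minimal autoencoder: compress to a small code, then reconstruct.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784  # e.g. flattened 28x28 images (illustrative assumption)
code_dim = 32    # size of the learned encoding (illustrative assumption)

inputs = keras.Input(shape=(input_dim,))
# Encoder: the "reduction side" that produces the low-dimensional code.
code = layers.Dense(code_dim, activation="relu")(inputs)
# Decoder: the "reconstructing side" that maps the code back to the input space.
outputs = layers.Dense(input_dim, activation="sigmoid")(code)

autoencoder = keras.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Unsupervised training: the input doubles as the target, so the network
# must squeeze the data through the code and reconstruct it.
x = np.random.rand(1000, input_dim).astype("float32")  # stand-in data
autoencoder.fit(x, x, epochs=5, batch_size=64, verbose=0)
```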


Generalizing Randomized Smoothing for Pointwise-Certified Defenses to Data Poisoning Attacks

#artificialintelligence

We propose a method for making black-box functions provably robust to input manipulations. By training an ensemble of classifiers on randomly flipped training labels, we can use results from randomized smoothing to certify our classifier against label-flipping attacks: the larger the ensemble's vote margin, the larger the certified radius of robustness. Using other types of noise allows for certifying robustness to other data poisoning attacks. Adversarial examples (targeted, human-imperceptible modifications to a test input that cause a deep network to fail catastrophically) have taken the machine learning community by storm, with a large body of literature dedicated to understanding and preventing this phenomenon (see these surveys). Understanding why deep networks consistently make these mistakes and how to fix them is one way researchers hope to make progress towards more robust artificial intelligence.
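As a rough illustration of the label-flipping case, the sketch below trains an ensemble on independently flipped labels and reports the smoothed prediction with its vote margin. This is my reading of the procedure, not the authors' code; `train_classifier` is a hypothetical helper that fits any black-box binary classifier.

```python
# Schematic sketch of smoothing against label flips; train_classifier is a
# hypothetical helper that fits a black-box binary classifier on (X, y).
import numpy as np

def smooth_predict(X_train, y_train, x_test, train_classifier,
                   flip_prob=0.2, n_models=100, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    votes = np.zeros(2, dtype=int)  # assumes binary labels 0/1
    for _ in range(n_models):
        # Independently flip each training label with probability flip_prob.
        flips = rng.random(len(y_train)) < flip_prob
        y_noisy = np.where(flips, 1 - y_train, y_train)
        model = train_classifier(X_train, y_noisy)
        votes[int(model.predict(x_test[None, :])[0])] += 1
    pred = int(votes.argmax())
    # The vote margin is what the randomized-smoothing results convert into
    # a certified radius: a larger gap between the top and runner-up counts
    # means more label flips are needed to change the smoothed output.
    margin = (votes[pred] - votes[1 - pred]) / n_models
    return pred, margin
```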


Implementing Variational Autoencoders in Keras: Beyond the Quickstart

#artificialintelligence

Keras is a very well-designed library that clearly abides by its guiding principles of modularity and extensibility, enabling us to easily assemble powerful, complex models from primitive building blocks. This has been demonstrated in numerous blog posts and tutorials, in particular the excellent tutorial on Building Autoencoders in Keras. As the name suggests, that tutorial provides examples of how to implement various kinds of autoencoders in Keras, including the variational autoencoder (VAE) [1].

[Figure: visualization of the 2D manifold of MNIST digits (left) and the representation of digits in latent space, colored according to their digit labels (right).]

Like all autoencoders, the variational autoencoder is primarily used for unsupervised learning of hidden representations.
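Before diving in, a compact sketch of the VAE's moving parts may help: the reparameterization trick and the two-term loss (reconstruction plus KL divergence). This is an illustrative reimplementation under assumed layer sizes and a 784-dimensional input, not the tutorial's exact code.

```python
# Minimal VAE sketch in Keras (illustrative assumptions, not the tutorial's code).
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class VAE(keras.Model):
    def __init__(self, original_dim=784, latent_dim=2, hidden=256):
        super().__init__()
        self.original_dim = original_dim
        self.hidden_enc = layers.Dense(hidden, activation="relu")
        self.z_mean = layers.Dense(latent_dim)     # mean of q(z|x)
        self.z_log_var = layers.Dense(latent_dim)  # log-variance of q(z|x)
        self.hidden_dec = layers.Dense(hidden, activation="relu")
        self.out = layers.Dense(original_dim, activation="sigmoid")

    def call(self, x):
        h = self.hidden_enc(x)
        mean, log_var = self.z_mean(h), self.z_log_var(h)
        # Reparameterization trick: z = mean + sigma * eps keeps the
        # sampling step differentiable with respect to the encoder.
        eps = tf.random.normal(tf.shape(mean))
        z = mean + tf.exp(0.5 * log_var) * eps
        recon = self.out(self.hidden_dec(z))
        # Loss = reconstruction term + KL divergence to the N(0, I) prior.
        recon_loss = self.original_dim * tf.reduce_mean(
            keras.losses.binary_crossentropy(x, recon))
        kl = -0.5 * tf.reduce_mean(tf.reduce_sum(
            1 + log_var - tf.square(mean) - tf.exp(log_var), axis=-1))
        self.add_loss(recon_loss + kl)
        return recon

vae = VAE()
vae.compile(optimizer="adam")  # loss is attached inside call() via add_loss
# vae.fit(x_train, epochs=10, batch_size=128)  # x_train: rows scaled to [0, 1]
```

Because the full loss is attached via `add_loss`, no target is passed to `fit`, which reflects the unsupervised nature of the model.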


Invertibility in Reinsch form Derivation (Smoothing Splines)

#artificialintelligence

I was hoping to find a more intuitive proof, but the matrix can be shown to be invertible by directly showing that its determinant is nonzero. First note that the knots are located at the $K$ unique values of $\mathbf{x}$. Let $\boldsymbol{\xi}$ represent these knots, in ascending order. Also, $d_j(\xi_i) = 0$ if $i \le j$, and so $N_{i,j} = 0$ if $i < j - 1$, so $\mathbf{N}$ is "almost lower triangular". In fact, it's a lower triangular matrix with a column of 1's appended to the front.
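For concreteness, here is the zero pattern spelled out, assuming the usual natural cubic spline basis from Hastie, Tibshirani and Friedman (the exact indexing convention is my assumption):

$$N_1(x) = 1, \qquad N_2(x) = x, \qquad N_{k+2}(x) = d_k(x) - d_{K-1}(x),$$

where

$$d_k(x) = \frac{(x - \xi_k)_+^3 - (x - \xi_K)_+^3}{\xi_K - \xi_k}.$$

Since $(\xi_i - \xi_j)_+^3 = 0$ whenever $\xi_i \le \xi_j$, evaluating at the knots gives $d_j(\xi_i) = 0$ for $i \le j$, and hence $N_{i,j+2} = N_{j+2}(\xi_i) = 0$ for $i \le j$; rewriting the column index, $N_{i,j} = 0$ exactly when $i < j - 1$.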


Calculate the decision boundary for Quadratic Discriminant Analysis (QDA)

#artificialintelligence

I am trying to find a solution to the decision boundary in QDA. The question was already asked and answered for LDA, and the solution provided by amoeba to compute this using the "standard Gaussian way" worked well. However, I am applying the same technique to a two-class, two-feature QDA and am having trouble. Would someone be able to check my work and let me know if this approach is correct? Next I am trying to solve for the value of y (i.e., feature 2) given some input value of x (feature 1).
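For reference, here is the algebra this approach requires, using the standard QDA discriminant functions; the notation is mine, so treat it as a sketch of the setup rather than a verified answer. Setting $\delta_1(\mathbf{x}) = \delta_2(\mathbf{x})$ with

$$\delta_k(\mathbf{x}) = -\tfrac{1}{2}\log|\boldsymbol{\Sigma}_k| - \tfrac{1}{2}(\mathbf{x} - \boldsymbol{\mu}_k)^T \boldsymbol{\Sigma}_k^{-1} (\mathbf{x} - \boldsymbol{\mu}_k) + \log \pi_k$$

gives the boundary as the conic

$$\mathbf{x}^T \mathbf{A} \mathbf{x} + \mathbf{b}^T \mathbf{x} + c = 0, \qquad \mathbf{A} = \tfrac{1}{2}\left(\boldsymbol{\Sigma}_2^{-1} - \boldsymbol{\Sigma}_1^{-1}\right), \qquad \mathbf{b} = \boldsymbol{\Sigma}_1^{-1}\boldsymbol{\mu}_1 - \boldsymbol{\Sigma}_2^{-1}\boldsymbol{\mu}_2,$$

$$c = \tfrac{1}{2}\log\frac{|\boldsymbol{\Sigma}_2|}{|\boldsymbol{\Sigma}_1|} - \tfrac{1}{2}\left(\boldsymbol{\mu}_1^T \boldsymbol{\Sigma}_1^{-1} \boldsymbol{\mu}_1 - \boldsymbol{\mu}_2^T \boldsymbol{\Sigma}_2^{-1} \boldsymbol{\mu}_2\right) + \log\frac{\pi_1}{\pi_2}.$$

Writing $\mathbf{x} = (x, y)^T$ and collecting powers of $y$ turns this into a scalar quadratic

$$\alpha y^2 + \beta(x)\, y + \gamma(x) = 0, \qquad \alpha = A_{22}, \quad \beta(x) = (A_{12} + A_{21})\,x + b_2, \quad \gamma(x) = A_{11}x^2 + b_1 x + c,$$

so for each input value of $x$ the boundary values of $y$ follow from the quadratic formula, $y = \frac{-\beta \pm \sqrt{\beta^2 - 4\alpha\gamma}}{2\alpha}$, with two, one, or no real roots since a QDA boundary is a conic section.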