normal distribution


How to sample from multidimensional distributions using Gibbs sampling?

@machinelearnbot

We will show how to perform multivariate random sampling using one of the Markov Chain Monte Carlo (MCMC) algorithms, the Gibbs sampler. If the sampler is constructed correctly, we should expect the drawn random variables to converge to the target distribution. Let's move on to using the Gibbs sampler to estimate the density parameters. To start, let's generate a random sample from a mixture of normals with the chosen parameters.
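As a minimal sketch of the general method (not the article's exact mixture-of-normals setup), here is a Gibbs sampler for a standard bivariate normal with correlation rho; the values of rho, the number of iterations, and the starting point are assumptions made for the demo.

    # Gibbs sampling for a bivariate normal: alternate draws from the two full conditionals.
    import numpy as np

    def gibbs_bivariate_normal(rho=0.8, n_iter=10_000, seed=0):
        rng = np.random.default_rng(seed)
        samples = np.empty((n_iter, 2))
        x, y = 0.0, 0.0                      # arbitrary starting point
        cond_sd = np.sqrt(1.0 - rho ** 2)    # sd of each conditional
        for i in range(n_iter):
            x = rng.normal(rho * y, cond_sd)   # x | y ~ N(rho*y, 1 - rho^2)
            y = rng.normal(rho * x, cond_sd)   # y | x ~ N(rho*x, 1 - rho^2)
            samples[i] = (x, y)
        return samples

    draws = gibbs_bivariate_normal()
    print(np.corrcoef(draws[2000:].T))       # after burn-in, correlation is close to rho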


Outlier Detection with Parametric and Non-Parametric methods

@machinelearnbot

Additionally, you could do a univariate analysis by studying a single variable at a time, or a multivariate analysis where you study more than one variable at the same time to identify outliers. The x-axis in the above plot represents the Revenues and the y-axis the probability density of the observed Revenue values. The density curve for the actual data is shaded in pink, the normal distribution in green, and the log-normal distribution in blue. The probability density for the actual distribution is estimated from the observed data, whereas for both the normal and log-normal distributions it is computed from the observed mean and standard deviation of the Revenues.
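A sketch of the kind of density comparison described above: the empirical density of a revenue-like variable against normal and log-normal curves fitted from the observed mean and standard deviation. The revenue data here is simulated; in the article it comes from the actual data set.

    import numpy as np
    from scipy import stats
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(42)
    revenue = rng.lognormal(mean=10, sigma=0.6, size=1_000)        # stand-in data

    x = np.linspace(revenue.min(), revenue.max(), 500)
    kde = stats.gaussian_kde(revenue)                               # empirical density

    mu, sd = revenue.mean(), revenue.std()                          # normal fit
    log_mu, log_sd = np.log(revenue).mean(), np.log(revenue).std()  # log-normal fit

    plt.fill_between(x, kde(x), color="pink", alpha=0.5, label="actual (KDE)")
    plt.plot(x, stats.norm.pdf(x, mu, sd), color="green", label="normal fit")
    plt.plot(x, stats.lognorm.pdf(x, s=log_sd, scale=np.exp(log_mu)),
             color="blue", label="log-normal fit")
    plt.xlabel("Revenue")
    plt.ylabel("Probability density")
    plt.legend()
    plt.show()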


What causes predictive models to fail - and how to fix it?

@machinelearnbot

Over-fitting: if you perform a regression with 200 predictors (with strong cross-correlations among predictors), use meta regression coefficients: that is, use coefficients of the form f[Corr(Var, Response), a, b, c] where a, b, c are three meta-parameters (e.g. If your training set has 400,000 observations distributed across 50 clients, and your test data set (used for cross-validation) has 200,000 observations but only 3 clients or 5 days worth of historical data, then your cross-validation methodology is very flawed.
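The article does not prescribe a specific remedy for the flawed cross-validation scenario, but one common fix is to split folds by client so that every fold holds out entire clients rather than a handful of rows from 3 of them. The column layout and model choice below are assumptions for the demo.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(0)
    n_obs, n_clients = 5_000, 50
    X = rng.normal(size=(n_obs, 20))                     # stand-in predictors
    y = X @ rng.normal(size=20) + rng.normal(size=n_obs)
    clients = rng.integers(0, n_clients, size=n_obs)     # client id per observation

    cv = GroupKFold(n_splits=5)                          # folds never split a client
    scores = cross_val_score(Ridge(alpha=1.0), X, y, groups=clients, cv=cv)
    print(scores.mean())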


The t-distribution: a key statistical concept discovered by a beer brewery

@machinelearnbot

Thanks to the Central Limit Theorem, the Gaussian distribution is present in many real world phenomena. The mean µ controls the expected value (where most of the values concentrate) of a normally distributed random variable. This way we may say that X is a mixture of two continuous probability distributions: Normal and Gamma. Gaussian distributions and Student's t-distributions are some of the most important continuous probability distributions in statistics and machine learning.
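A small sketch of the Normal and Gamma mixture mentioned above: drawing a precision from a Gamma distribution and then a Normal deviate with that precision yields a Student's t variable. The degrees of freedom and sample size are assumptions for the demo.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    nu, n = 5, 100_000                                      # degrees of freedom, draws

    tau = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)     # precision ~ Gamma(nu/2, rate nu/2)
    x = rng.normal(loc=0.0, scale=1.0 / np.sqrt(tau))       # X | tau ~ N(0, 1/tau)

    # The mixture should agree closely with a direct Student's t distribution.
    print(stats.kstest(x, "t", args=(nu,)))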


Building Convolutional Neural Networks with Tensorflow

@machinelearnbot

If you want to start building Neural Networks immediately, or you are already familiar with TensorFlow, you can go ahead and skip to section 2. The most basic units within TensorFlow are Constants, Variables and Placeholders. Besides tf.zeros() and tf.ones(), which create a Tensor initialized to zeros or ones, there is also the tf.random_normal() function, which creates a Tensor filled with values picked randomly from a normal distribution (the default distribution has a mean of 0.0 and a stddev of 1.0). There is also the tf.truncated_normal() function, which creates a Tensor with values randomly picked from a normal distribution, where two times the standard deviation forms the lower and upper limit.
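A minimal sketch of these building blocks, assuming the TensorFlow 1.x API the article uses (these ops were later moved in TensorFlow 2); the shapes and values are arbitrary choices for the demo.

    import tensorflow as tf

    c = tf.constant([1.0, 2.0, 3.0])                       # fixed value
    w = tf.Variable(tf.random_normal([3, 2],               # N(0, 1) by default
                                     mean=0.0, stddev=1.0))
    b = tf.Variable(tf.truncated_normal([2]))              # draws beyond 2*stddev are re-drawn
    x = tf.placeholder(tf.float32, shape=[None, 3])        # fed at run time

    y = tf.matmul(x, w) + b

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))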


The Fundamental Statistics Theorem Revisited

@machinelearnbot

It turned out that putting more weight on close neighbors, and increasingly lower weight on far away neighbors (with weights slowly decaying to zero based on the distance to the neighbor in question), was the solution to the problem. For those interested in the theory, the fact that cases 1, 2 and 3 yield convergence to the Gaussian distribution is a consequence of the Central Limit Theorem under the Liapounov condition. More specifically, and because the samples produced here come from uniformly bounded distributions (we use a random number generator to simulate uniform deviates), all that is needed for convergence to the Gaussian distribution is that the sum of the squares of the weights, and thus Stdev(S), tends to infinity as n tends to infinity. More generally, we can work with more complex auto-regressive processes with a covariance matrix as general as possible, then compute S as a weighted sum of the X(k)'s, find a relationship between the weights and the covariance matrix, and eventually identify conditions on the covariance matrix that guarantee convergence to the Gaussian distribution.
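A simulation sketch of the setup described above: S is a weighted sum of uniform deviates with weights that decay slowly with the index k. With weights 1/sqrt(k), the sum of squared weights diverges, so the standardized sum should look Gaussian. The choice of weights, n and the number of simulations are assumptions for the demo.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n, n_sims = 10_000, 2_000
    k = np.arange(1, n + 1)
    w = 1.0 / np.sqrt(k)                            # slowly decaying weights

    u = rng.uniform(-0.5, 0.5, size=(n_sims, n))    # centered uniform deviates
    s = u @ w                                       # one weighted sum per simulation
    s_std = (s - s.mean()) / s.std()

    print(stats.kstest(s_std, "norm"))              # large p-value suggests Gaussian behavior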


Feature Engineering: Data scientist's Secret Sauce !

@machinelearnbot

With the help of an effective feature engineering process, we intend to come up with an effective representation of the data. Entropy (the higher the entropy, the more information the data contains), variance (the higher the variance, the more information), projection for better separation (the projection onto the basis with the highest variance holds more information), feature-to-class association, and so on, all describe the information in the data. However, sometimes we may find that the features are not following a normal distribution but a log normal distribution instead. One of the common things to do in this situation is to take the log of the feature values (that exhibit a log normal distribution) so that they exhibit a normal distribution. If the algorithm being used makes the implicit/explicit assumption of the features being normally distributed, then such a transformation of a log-normally distributed feature to a normally distributed feature can help improve the performance of that algorithm.
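A small sketch of the log transform described above: a feature that looks log-normal becomes approximately normal after taking np.log. The feature here is simulated; np.log1p is a common alternative when the feature can be zero.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    feature = rng.lognormal(mean=2.0, sigma=0.8, size=5_000)   # skewed, log-normal-like feature

    log_feature = np.log(feature)                              # transformed feature

    # Skewness drops from strongly positive to roughly zero after the transform.
    print(stats.skew(feature), stats.skew(log_feature))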


Removing Outliers Using Standard Deviation in Python

@machinelearnbot

Such values follow a normal distribution. According to the Wikipedia article on the normal distribution, about 68% of values drawn from a normal distribution are within one standard deviation σ of the mean; about 95% of the values lie within two standard deviations; and about 99.7% are within three standard deviations. As you can see, we removed the outlier values, and if we plot this dataset, our plot will look much better. But in our case, the outliers were clearly due to errors in the data, and the data followed a normal distribution, so using the standard deviation made sense.
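A minimal sketch of the outlier-removal rule discussed above: keep only the values within k standard deviations of the mean. The cutoff k = 2 and the simulated data are assumptions; the article's exact cutoff may differ.

    import numpy as np

    rng = np.random.default_rng(5)
    data = np.concatenate([rng.normal(50, 5, size=1_000),   # normally distributed bulk
                           [120.0, -30.0, 200.0]])          # obvious errors

    k = 2
    mean, sd = data.mean(), data.std()
    filtered = data[np.abs(data - mean) <= k * sd]          # drop values beyond k standard deviations

    print(len(data), "->", len(filtered))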


40 Interview Questions asked at Startups in Machine Learning / Data Science

@machinelearnbot

The data set has missing values which spread along 1 standard deviation from the median. Since the data is normally distributed, the median equals the mean and about 68% of the values lie within one standard deviation of it; therefore, about 32% of the data would remain unaffected by missing values. In an imbalanced data set, accuracy should not be used as a measure of performance, because 96% (as given) might only reflect predicting the majority class correctly, while our class of interest is the minority class (4%), the people who actually got diagnosed with cancer. Hence, in order to evaluate model performance, we should use Sensitivity (True Positive Rate), Specificity (True Negative Rate), and the F measure to determine the class-wise performance of the classifier.
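A sketch of the class-wise metrics mentioned above, computed from a confusion matrix on an imbalanced problem. The labels and predictions here are made up purely to show the formulas.

    import numpy as np
    from sklearn.metrics import confusion_matrix, f1_score

    y_true = np.array([0] * 96 + [1] * 4)                   # 4% positive (minority) class
    y_pred = np.array([0] * 95 + [1] + [0] * 2 + [1] * 2)   # an imperfect classifier

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate

    print(sensitivity, specificity, f1_score(y_true, y_pred))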