delta


The airports of the future are here

Mashable

One reason airports tend to look and function remarkably alike is that they're designed to accommodate air travel infrastructure--security, passenger ticketing, baggage, ground transport--with the primary concerns being safety and minimal overhead for their tenant airlines. "It's like having a Super Bowl worth of people every single day." At Changi, concession revenues rose 5 percent last year to a record S$2.16 billion ($1.6 billion), while the world's busiest airport, Atlanta's Hartsfield-Jackson International, topped $1 billion in concession sales in 2016, also a record.


The Fundamental Statistics Theorem Revisited

@machinelearnbot

It turned out that putting more weight on close neighbors, and increasingly lower weight on far away neighbors (with weights slowly decaying to zero based on the distance to the neighbor in question) was the solution to the problem. For those interested in the theory, the fact that cases 1, 2 and 3 yield convergence to the Gaussian distribution is a consequence of the Central Limit Theorem under the Liapounov condition. More specifically, and because the samples produced here come from uniformly bounded distributions (we use a random number generator to simulate uniform deviates), all that is needed for convergence to the Gaussian distribution is that the sum of the squares of the weights -- and thus Stdev(S) as n tends to infinity -- must be infinite. More generally, we can work with more complex auto-regressive processes with a covariance matrix as general as possible, then compute S as a weighted sum of the X(k)'s, and find a relationship between the weights and the covariance matrix, to eventually identify conditions on the covariance matrix that guarantee convergence to the Gaussian distribution.
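As a quick illustration of that claim (a minimal simulation sketch, not code from the article; the function name and parameters are my own), one can draw S as a weighted sum of uniform deviates with weights decaying like k^(-1/2) and compare the standardized result against standard normal quantiles:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def standardized_weighted_sum(n_terms=1000, n_samples=10000, decay=0.5):
    """Simulate S = sum over k of a(k) * X(k), where the X(k) are uniform
    deviates on [-0.5, 0.5] and the weights a(k) = k**(-decay) slowly decay."""
    k = np.arange(1, n_terms + 1)
    a = k ** (-decay)
    X = rng.uniform(-0.5, 0.5, size=(n_samples, n_terms))
    S = X @ a                              # one weighted sum S per sample
    return (S - S.mean()) / S.std()        # standardize before comparing to N(0,1)

Z = standardized_weighted_sum()
for q in (0.05, 0.25, 0.5, 0.75, 0.95):
    # Empirical quantiles of Z should sit close to the standard normal ones.
    print(q, round(float(np.quantile(Z, q)), 3), round(float(norm.ppf(q)), 3))
```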


The Death of the Statistical Tests of Hypotheses

@machinelearnbot

It is part of a data science framework (see section 2 in this article), in which many statistical procedures have been revisited to make them simple, scalable, accurate enough without aiming for perfection but instead for speed, and usable by engineers, machine learning practitioners, computer scientists, software engineers, AI and IoT experts, big data practitioners, business analysts, lawyers, doctors, journalists, in some cases even by the layman, and even by machines and APIs (as in machine-to-machine communications). Over the years, I have designed a new, unified statistical framework for big data, data science, machine learning, and related disciplines. I have also written quite a bit on time series (detection of accidental high correlations in big data, change point detection, multiple periodicities), correlation and causation, clustering for big data, random numbers, simulation, ridge regression (approximate solutions), and synthetic metrics (new variances, a bumpiness coefficient, a robust correlation metric, and a robust R-squared insensitive to outliers). Vincent also manages his own self-funded research lab, focusing on simplifying, unifying, modernizing, automating, scaling, and dramatically optimizing statistical techniques.


What's wrong with this pic?

FOX News

The Atlanta-based airline has recently teamed up with Tinder to transform the exterior of a Brooklyn building into a "dating wall" covered in worldly murals depicting nine different Delta destinations. According to a press release, the idea is for Brooklynites to snap photos near the murals, upload them to their dating profiles, and trick unsuspecting Tinder dates into thinking they're more well-traveled than they actually are. "So this summer, Delta and Tinder are offering New York singles an opportunity to snap profile pictures that will make you look like a jet-setter via a series of painted walls on display on Wythe Avenue in Williamsburg, Brooklyn." The airline has also placed another large mural -- the second in its Painted Wall Series -- a few blocks away at the site of Brooklyn's weekly Smorgasburg food festival.


Tinder and Delta want to help you pretend to be a world traveler on your dating profile

Mashable

If you are too busy or strapped for cash to actually go abroad, but you still want to look like a seasoned traveler, Delta and Tinder have a fix for you. Apparently if your profile photo makes you look like a world traveler, you're more likely to get that coveted swipe right on Tinder. There is now a wall featuring nine different scenes of popular destinations around the world, conveniently located in Williamsburg, Brooklyn. "So happy we finally got to go to London - what a quick trip thanks to @delta's #deltadatingwall - just an hour overseas (over the Hudson sea that is) #nyc #brooklyn #williamsburg #tinder #london" This wall is cute purely as art, but it's pretty clear if you look at the photos for more than two seconds that the image behind them is painted on brick.


Color quantization using k-means

#artificialintelligence

The most common techniques reduce the problem of color quantization to a clustering problem over points, where each point represents the color of a pixel. The following figure visually explains the whole process for the RGB space, where the final palette is composed of three colors (the Lab space case is completely analogous). Here are the output images generated by randomly selecting the colors in the RGB color space. Here instead are the output images generated by randomly selecting the colors in the Lab color space.
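For readers who want to try this, here is a minimal sketch of the clustering step in RGB space using scikit-learn's KMeans (the file name, palette size, and default k-means++ initialization are my own choices; the article also experiments with randomly selected initial colors and with the Lab space):

```python
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def quantize(path, n_colors=3):
    """Reduce an image to n_colors by clustering its pixels in RGB space."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64) / 255.0
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3)                    # one 3-D point per pixel

    km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(pixels)
    palette = km.cluster_centers_                  # the final palette (cluster centers)
    labels = km.labels_                            # nearest palette color for each pixel

    quantized = palette[labels].reshape(h, w, 3)   # repaint the image with palette colors
    return Image.fromarray((quantized * 255).astype(np.uint8))

# quantize("photo.jpg", n_colors=3).save("photo_quantized.png")  # "photo.jpg" is a placeholder
```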


How and Why: Decorrelate Time Series

@machinelearnbot

When dealing with time series, the first step consists in isolating trends and periodicities. A deeper investigation consists in isolating the auto-correlations to see whether the remaining values, once decorrelated, behave like white noise, or not. In the particular case of true random walks (see Figure 1), auto-correlations are extremely high, while auto-correlations measured on the differences are very close to zero. The resulting time series is a random walk (with no trend and no periodicity) with a lag-1 auto-correlation of 0.99 when measured on the first 100 observations.
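The experiment is easy to reproduce; the following sketch (my own, not the article's code) simulates a random walk and compares the lag-1 auto-correlation of the series with that of its differences (exact values depend on the simulated path):

```python
import numpy as np

rng = np.random.default_rng(0)

def lag1_autocorr(x):
    """Correlation between x(t) and x(t-1)."""
    return np.corrcoef(x[1:], x[:-1])[0, 1]

# Random walk: X(t) = X(t-1) + white noise, with no trend and no periodicity.
noise = rng.normal(size=100)
X = np.cumsum(noise)

print(lag1_autocorr(X))            # very high, close to 1 on the first 100 observations
print(lag1_autocorr(np.diff(X)))   # close to 0: the differences behave like white noise
```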


How and Why: Decorrelate Time Series

@machinelearnbot

When dealing with time series, the first step consists in isolating trends and periodicities. A deeper investigation consists in isolating the auto-correlations to see whether the remaining values, once decorrelated, behave like white noise, or not. Chances are that the auto-correlations in the time series of differences X(t) - X(t-1) are much smaller (in absolute value) than the auto-correlations in the original time series X(t). Let X = X(t), X(t-1), ... be the original time series, Y = X(t-1), X(t-2), ... be the lag-1 time series, and Z = X(t-2), X(t-3), ... be the lag-2 time series.
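A small sketch (my own notation, mirroring the definitions above) shows how to build the lag-1 and lag-2 series and check the auto-correlations before and after differencing:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=1000))       # a random walk, as in the example above

# Aligned views of the same series: X = X(t), Y = X(t-1), Z = X(t-2).
X, Y, Z = x[2:], x[1:-1], x[:-2]

print(np.corrcoef(X, Y)[0, 1])             # lag-1 auto-correlation of the original series
print(np.corrcoef(X, Z)[0, 1])             # lag-2 auto-correlation

d = X - Y                                  # the differenced series X(t) - X(t-1)
print(np.corrcoef(d[1:], d[:-1])[0, 1])    # lag-1 auto-correlation of the differences, near 0
```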


The Fundamental Statistics Theorem Revisited

@machinelearnbot

It turned out that putting more weight on close neighbors, and increasingly lower weight on far away neighbors (with weights slowly decaying to zero based on the distance to the neighbor in question) was the solution to the problem. Case 1: a(k) = 1, corresponding to the classic version of the Central Limit Theorem, with guaranteed convergence to the Gaussian distribution. Case 2: a(k) = 1 / log(2k), still with guaranteed convergence to the Gaussian distribution. Case 3: a(k) = k^{-1/2}, the last exponent (-1/2) that still provides guaranteed convergence to the Gaussian distribution, according to the Central Limit Theorem with the Liapounov condition (more on this below).
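A quick numerical check of the sum-of-squared-weights condition quoted earlier (a sketch under my own choice of cut-offs, not the article's code): for cases 1, 2 and 3 the partial sums of a(k)^2 keep growing with n, while a slightly steeper decay such as a(k) = k^(-0.6) makes them converge, so the condition fails:

```python
import numpy as np

def sum_of_squared_weights(a, n):
    """Partial sum of a(k)^2 for k = 1..n."""
    k = np.arange(1, n + 1)
    return float(np.sum(a(k) ** 2))

cases = {
    "case 1: a(k) = 1":           lambda k: np.ones_like(k, dtype=float),
    "case 2: a(k) = 1/log(2k)":   lambda k: 1.0 / np.log(2.0 * k),
    "case 3: a(k) = k^(-1/2)":    lambda k: k ** -0.5,
    "steeper: a(k) = k^(-0.6)":   lambda k: k ** -0.6,   # sum of squares converges
}

for name, a in cases.items():
    # Divergence shows up as the partial sum still growing when n increases.
    print(name, round(sum_of_squared_weights(a, 10**5), 1),
                round(sum_of_squared_weights(a, 10**6), 1))
```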