Google's Clips camera is an AI mini camera

Daily Mail

Alphabet Inc's Google is betting the combination of cameras and artificial intelligence proves irresistible with the Tuesday launch of Google Clips, a pocket-sized digital camera that decides on its own whether an image is interesting enough to shoot. The $249 device, which is designed to clip onto furniture or other fixed objects, automatically captures subjects that wander into its viewfinder. But unlike some trail or security cameras that are triggered by motion or programmed on timers, Clips is more discerning. Google has trained its electronic brain to recognize smiles, human faces, dogs, cats and rapid sequences of movement.

[D] List of Neural Network Attacks • r/MachineLearning


I am working on an automated testing platform for neural networks, and I would like to know what kinds of attacks you would like to see automated.
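
For what it's worth, the canonical example of the kind of attack such a platform might automate is the fast gradient sign method (FGSM) from Goodfellow et al. A minimal sketch, assuming a PyTorch classifier `model` and inputs scaled to [0, 1] (both placeholders, not anything from the post):

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """One-step FGSM: nudge each input pixel by epsilon in the direction
    that most increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach().clamp(0.0, 1.0)
```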

[D] Is there any app that uses AI to increase resolution? • r/MachineLearning


After 3 hours of Googling, I have to ask you guys. I'm looking for an app or command-line tool that can increase image resolution using AI. Something like Let's Enhance, but free. I know about alexjc's neural-enhance, but my PC can't run Docker, and without Docker the installation is very complex. I also don't have an Nvidia graphics card that supports CUDA.
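
One CPU-only, Docker-free option (a suggestion under the poster's stated constraints, not a tested recommendation) is OpenCV's dnn_superres module. It needs the opencv-contrib-python package and a pretrained weights file such as EDSR_x4.pb, downloaded separately from the OpenCV contrib repository:

```python
import cv2  # requires opencv-contrib-python

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")   # pretrained weights, downloaded separately
sr.setModel("edsr", 4)       # algorithm name and upscale factor
img = cv2.imread("input.png")
cv2.imwrite("output_4x.png", sr.upsample(img))  # runs on the CPU by default
```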

[N] The most versatile labeling tool for machine learning • r/MachineLearning


We recently started the open beta for Labelbox. You simply connect your data, choose or customize an open-source labeling interface, invite team members, and start labeling. Our labeling interfaces are open source, meaning you can customize them to work with any kind of data, such as images, videos, point clouds, medical DICOM files, and more (as long as your data can be loaded in the browser). We'd love to hear your feedback and ideas for improving this further.

[D] Multivariate seq2seq model • r/MachineLearning


I am working on a problem and think that a sequence-to-sequence LSTM model would be a good approach. However, I am dealing with a multivariate input sequence, and every seq2seq example I have found is for machine translation and uses a one-dimensional input sequence. Any examples or ideas on how to implement this would be greatly appreciated.
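
A minimal sketch of one way to do this in Keras: the encoder-decoder wiring is the same as in the translation examples, and the only real change is that the Input shape carries several features per timestep instead of one. All layer sizes and feature counts here are illustrative, not from the post:

```python
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

n_features_in, n_features_out, latent_dim = 8, 3, 64  # illustrative sizes

# Encoder: reads the multivariate input sequence and keeps only its final state.
encoder_inputs = Input(shape=(None, n_features_in))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: initialized with the encoder state, emits the target sequence.
decoder_inputs = Input(shape=(None, n_features_out))
decoder_seq = LSTM(latent_dim, return_sequences=True)(
    decoder_inputs, initial_state=[state_h, state_c])
outputs = Dense(n_features_out)(decoder_seq)  # linear head for real-valued targets

model = Model([encoder_inputs, decoder_inputs], outputs)
model.compile(optimizer="adam", loss="mse")
```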

[D] Undergraduate Machine Learning internships? • r/MachineLearning


If you want a research position of any sort, you typically need to demonstrate the capacity to perform original research. ML competitions (Kaggle in particular) are not good indicators of research skill, and it sounds like you've mostly just applied existing techniques. Both of these would probably set you up well for applied positions or data science (assuming you can hack it [rimshot] in a coding interview), but not for research. When the labor pool is flush with MS and PhD students looking for research positions, most places will pick them over an undergrad unless you stand out, so you've got to compete on the same terms.

[D] Machine Learning - WAYR (What Are You Reading) - Week 40 • r/MachineLearning


This is a place to share machine learning research papers, journals, and articles that you're reading this week. If it relates to what you're researching, by all means elaborate and give us your insight; otherwise, it could just be an interesting paper you've read.

[D] Unsupervised-as-supervised learning • r/MachineLearning


I'm including noise-contrastive estimation and GANs, but I'm worried I won't have enough to write (I need about 3,000 words). I've gone through most of the citations for these papers, so I'm thinking of including GAN variants (like f-GAN, WGAN, etc.) to fill out any additional space.
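
For reference, a toy sketch of the unsupervised-as-supervised trick underlying NCE: train a classifier to separate real samples from samples of a known noise distribution, then read the density ratio off the classifier's odds. The distributions and features here are illustrative:

```python
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=(1000, 1))   # samples from unknown p_data
noise = rng.normal(loc=0.0, scale=3.0, size=(1000, 1))  # samples from known p_noise

X = np.vstack([data, noise])
y = np.concatenate([np.ones(1000), np.zeros(1000)])     # 1 = real, 0 = noise

# Quadratic features, so the linear logit can represent the true Gaussian log-ratio.
Phi = np.hstack([X, X**2])
clf = LogisticRegression(C=1e6).fit(Phi, y)

# With equal class sizes, p(y=1|x) / p(y=0|x) estimates p_data(x) / p_noise(x),
# so p_data(x) is approximately p_noise(x) * odds(x).
xs = np.linspace(-2.0, 6.0, 5).reshape(-1, 1)
probs = clf.predict_proba(np.hstack([xs, xs**2]))[:, 1]
p_data_est = norm.pdf(xs.ravel(), loc=0.0, scale=3.0) * probs / (1.0 - probs)
print(np.c_[xs.ravel(), p_data_est, norm.pdf(xs.ravel(), loc=2.0, scale=1.0)])
```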

[D] Learning to forget. Optimizing a confusion loss to remove bias. • r/MachineLearning


A few months ago I stumbled onto an interesting idea while listening to the TWiML & AI podcast. It described a process by which one could introduce confusion into a network (starting at any arbitrary hidden layer) so that it couldn't learn from select biases in the training data. For example, if you were training an image classification network and you wanted to prevent the network from learning anything about race, you could use this technique to do so. The problem is that I can't for the life of me remember what this technique is called, or in what episode of the podcast it was discussed.
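
In case it helps jog memories, the description matches gradient-reversal-style adversarial training (a guess, not a confirmed identification of the podcast's method). A rough PyTorch sketch with illustrative layer sizes:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) gradients on the way back."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

features = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # shared encoder
task_head = nn.Linear(64, 10)   # main classification task
bias_head = nn.Linear(64, 2)    # tries to predict the protected attribute

def combined_loss(x, y_task, y_protected, lam=1.0):
    h = features(x)
    task_loss = nn.functional.cross_entropy(task_head(h), y_task)
    # The bias head trains normally, but the reversed gradient pushes the shared
    # encoder toward features the bias head cannot classify: the "confusion".
    adv_logits = bias_head(GradReverse.apply(h, lam))
    return task_loss + nn.functional.cross_entropy(adv_logits, y_protected)
```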

Adam - momentum and y (a.k.a. cost) terms • r/MachineLearning


It was the Newton-Raphson method for finding roots of an equation, x_{n+1} = x_n - y / (dy/dx). I thought this method mostly applies to minimization in machine learning, since the cost is always defined as a positive real-valued function. To relate this update to the title: multiplying the numerator and denominator by dy/dx, the update portion becomes g(x, y) = (y * dy/dx) / (dy/dx)^2, with y -> 0 at the root. It is quite similar to Adam, since there is a squared-gradient term in the denominator and a gradient term in the numerator. With the equation I have mentioned, the hypothesis is that this decay is, in a sense, estimating the cost term itself. Please let me know what you think about this hypothesis and what its implications are.
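
A quick numerical check of the analogy on a toy cost y(x) = (x - 3)^2, where y -> 0 at the optimum (the epsilon and Adam hyperparameters here are illustrative):

```python
import numpy as np

def cost(x): return (x - 3.0) ** 2       # y -> 0 at the minimum x = 3
def grad(x): return 2.0 * (x - 3.0)

# Root-finding update rewritten with the squared gradient in the denominator:
# delta = y / y' = (y * y') / (y')^2
x = 0.0
for _ in range(20):
    y, g = cost(x), grad(x)
    x -= (y * g) / (g ** 2 + 1e-8)       # epsilon guards the division
print("Newton-Raphson-style:", x)

# Adam for comparison: gradient term in the numerator, root of the
# squared-gradient average in the denominator.
x, m, v = 0.0, 0.0, 0.0
beta1, beta2, lr, eps = 0.9, 0.999, 0.1, 1e-8
for t in range(1, 201):
    g = grad(x)
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    x -= lr * (m / (1 - beta1 ** t)) / (np.sqrt(v / (1 - beta2 ** t)) + eps)
print("Adam:", x)
```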