Machine Learning


Amazon votes to keep selling its facial recognition software despite privacy concerns

Daily Mail - Science & tech

Amazon will continue to sell its controversial facial recognition software to law enforcement and other entities after its shareholders shot down proposals to rein the technology in. The vote effectively kills two initiatives brought before Amazon's board. One proposal would have required board approval to sell the software to governments, with approval granted only if the client meets certain civil liberties standards. The other called for a study of the technology's implications for rights and privacy. The exact breakdown of the vote is unclear; according to an Amazon representative, it will only be made available via SEC filings later this week.


Facial recognition tech prevents crime, police tell UK privacy case

The Guardian

Facial recognition cameras prevent crime, protect the public and do not breach the privacy of innocent people whose images are captured, a police force has argued. Ed Bridges, an office worker from Cardiff, claims South Wales police violated his privacy and data protection rights by using facial recognition technology on him. But Jeremy Johnson QC compared automated facial recognition (AFR) to the use of DNA to solve crimes and said it would have had little impact on Bridges. Johnson, representing the police, said: "AFR is a further technology that potentially has great utility for the prevention of crime, the apprehension of offenders and the protection of the public." The technology maps faces in a crowd and then compares them with a watch list of images, which can include suspects, missing people and persons of interest to the police.


A Plethora of Original, Not Well-Known Statistical Tests

#artificialintelligence

Many of the following statistical tests are rarely discussed in textbooks or in college classes, much less in data camps. Yet they help answer a lot of different and interesting questions. I used most of them without even computing the underlying distribution under the null hypothesis, instead using simulations to check whether my assumptions were plausible. In short, my approach to statistical testing is model-free and data-driven. Some of the tests are easy to implement even in Excel.
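As an illustration of that simulation-based approach, here is a minimal sketch of a permutation test for a difference in means, in which the null distribution is never derived analytically; the data arrays and number of resamples are illustrative assumptions, not taken from the article.

```python
import numpy as np

def permutation_test(x, y, n_resamples=10_000, seed=0):
    """Simulation-based test: how often does shuffling group labels produce
    a difference in means at least as large as the observed one?"""
    rng = np.random.default_rng(seed)
    observed = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)  # randomly reassign observations to the two groups
        diff = abs(pooled[:len(x)].mean() - pooled[len(x):].mean())
        count += diff >= observed
    return count / n_resamples  # empirical p-value

# Hypothetical example data
x = np.array([5.1, 4.8, 6.2, 5.5, 5.9])
y = np.array([4.2, 4.9, 4.4, 5.0, 4.1])
print(permutation_test(x, y))
```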


5 Reasons Why Python Is The Dominant Language For Machine Learning – Frank's World of Data Science & AI

#artificialintelligence

Python has conquered the machine learning and AI world. Here's an interesting article from Analytics India Magazine about why Python is on top. According to the Stack Overflow Survey 2018, Python is the most wanted language for the second year in a row, meaning it is the language that developers who do not yet use it most often say they want to learn. It is also claimed to be the fastest-growing major programming language. Developers and pioneers around the globe are adopting the language for machine learning projects.


An AI Pioneer Explains the Evolution of Neural Networks

#artificialintelligence

Geoffrey Hinton is one of the creators of Deep Learning, a 2019 winner of the Turing Award, and an engineering fellow at Google. Last week, at the company's I/O developer conference, we discussed his early fascination with the brain, and the possibility that computers could be modeled after its neural structure--an idea long dismissed by other scholars as foolhardy. We also discussed consciousness, his future plans, and whether computers should be taught to dream. The conversation has been lightly edited for length and clarity.

Nicholas Thompson: Let's start when you write some of your early, very influential papers. Everybody says, "This is a smart idea, but we're not actually going to be able to design computers this way." Explain why you persisted and why you were so confident that you had found something important.

Geoffrey Hinton: It seemed to me there's no other way the brain could work. It has to work by learning the strength of connections. And if you want to make a device do something intelligent, you've got two options: You can program it, or it can learn. And people certainly weren't programmed, so we had to learn. This had to be the right way to go.

NT: Explain what neural networks are.

GH: You have relatively simple processing elements that are very loosely models of neurons. They have connections coming in, each connection has a weight on it, and that weight can be changed through learning. And what a neuron does is take the activities on the connections times the weights, adds them all up, and then decides whether to send an output.
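Hinton's description of a neuron -- multiply incoming activities by their weights, sum them, then decide whether to send an output -- can be sketched in a few lines; the NumPy toy below is illustrative only and not code from the interview.

```python
import numpy as np

def neuron(activities, weights, threshold=0.0):
    """Weighted sum of incoming activities, then a decision whether to 'fire'."""
    total = np.dot(activities, weights)       # activities times weights, summed
    return 1.0 if total > threshold else 0.0  # simple threshold decision

# Hypothetical incoming activities and learned connection weights
print(neuron(np.array([0.5, 0.2, 0.9]), np.array([0.4, -0.3, 0.8])))
```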


The basics of Deep Neural Networks

#artificialintelligence

With the rise of libraries such as TensorFlow 2.0, PyTorch and fastai, implementing deep learning has become accessible to many more people, but it still helps to understand the fundamentals behind deep neural networks. Hopefully this article will be of help to people on the path of learning about deep neural networks. Back when I first learnt about neural nets and implemented my first one, they were always represented as individual artificial neurons: essentially nodes with individually weighted inputs, a summed output and an activation function. When I returned to learning about deep neural networks, it was not obvious how this picture equates to matrix multiplication. Linked to this is why Graphics Processing Units (GPUs) and their spin-offs have helped advance deep learning results so much.
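To make the neuron-to-matrix-multiplication link concrete, here is a minimal sketch (not from the article) in which an entire layer of neurons is computed as one matrix product; the layer sizes, batch size and ReLU activation are illustrative assumptions.

```python
import numpy as np

# Hypothetical layer: 4 inputs feeding 3 neurons.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))   # a batch of 8 input vectors
W = rng.normal(size=(4, 3))   # each column holds one neuron's input weights
b = np.zeros(3)               # one bias per neuron

# Every neuron's weighted sum, for every example, in a single matrix product.
z = x @ W + b
a = np.maximum(z, 0.0)        # ReLU activation applied element-wise
print(a.shape)                # (8, 3): batch size x number of neurons
```

GPUs are built to run exactly this kind of dense matrix arithmetic in parallel, which is why framing layers as matrix multiplications maps so well onto them.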


Predicting failures of Molteno and Baerveldt glaucoma drainage devices using machine learning models

#artificialintelligence

The purpose of this retrospective study is to measure machine learning models' ability to predict glaucoma drainage device failure based on demographic information and preoperative measurements. The medical records of sixty-two patients were used. Potential predictors included the patient's race, age, sex, preoperative intraocular pressure, preoperative visual acuity, number of intraocular pressure-lowering medications, and number and type of previous ophthalmic surgeries. Failure was defined as final intraocular pressure greater than 18 mm Hg, reduction in intraocular pressure less than 20% from baseline, or need for reoperation unrelated to normal implant maintenance. Five classifiers were compared: logistic regression, artificial neural network, random forest, decision tree, and support vector machine.
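The study's exact features, preprocessing and evaluation protocol aren't given here, but a comparison of those five classifier families might be set up along these lines with scikit-learn; the feature matrix X and failure labels y below are placeholders, not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: 62 patients, a handful of demographic/preoperative features,
# and a binary label indicating device failure.
rng = np.random.default_rng(0)
X = rng.normal(size=(62, 7))
y = rng.integers(0, 2, size=62)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network": MLPClassifier(max_iter=2000),
    "random forest": RandomForestClassifier(),
    "decision tree": DecisionTreeClassifier(),
    "SVM": SVC(),
}

for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    scores = cross_val_score(pipe, X, y, cv=5)  # cross-validation suits a small dataset
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```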


#016 CNN Network in Network - 1x1 Convolutions

#artificialintelligence

More generally, if we have not just one filter but multiple filters, then it is as if we have not just one unit but multiple units taking as input all the numbers in one slice, and building them up into an output of size \(6\times 6\times \text{number of filters}\). One way to think about the \(1\times 1\) convolution is that it is basically like a fully connected neural network applied to each of the \(36\) different positions. This fully connected network has a \(32\)-dimensional input, and the number of outputs equals the number of \(1\times 1\) filters applied. Doing this at each of the \(36\) positions, we end up with an output of size \(6\times 6\times \text{number of filters}\). This can carry out a pretty non-trivial computation on our input volume.
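A minimal NumPy sketch of that view: a \(1\times 1\) convolution over a \(6\times 6\times 32\) volume is the same as applying one small fully connected layer at each of the 36 spatial positions. The filter count and random values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_filters = 16
volume = rng.normal(size=(6, 6, 32))        # input volume: 6 x 6 x 32
filters = rng.normal(size=(32, n_filters))  # each 1x1 filter is a 32-dimensional vector

# 1x1 convolution: at every one of the 36 positions, take the 32 channel values
# and feed them through the same fully connected layer (one matrix multiply).
out = volume.reshape(-1, 32) @ filters                 # shape (36, n_filters)
out = np.maximum(out, 0.0).reshape(6, 6, n_filters)    # ReLU, back to 6 x 6 x n_filters
print(out.shape)                                       # (6, 6, 16)
```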


Paul Pepper: Scott Christianson, Artificial Intelligence Specialist, "Face Recognition"

#artificialintelligence

University of Missouri assistant professor SCOTT CHRISTIANSON puts an app designed to assist those with visual impairments to the test using yours truly, our floor director and some wrinkled-up dollar bills. Self-driving cars are becoming a reality, and while they may sound like a cool idea, PROF. SCOTT CHRISTIANSON points out a not-so-obvious moral dilemma when it comes to programming machines designed to make decisions a human normally would, saying "hopefully the car will be able to avoid the accident, but there may be situations where it may not be able to, so how do we want those cars programmed?" Never mind tomorrow, machine-learning artificial intelligence is happening now! University of Missouri professor SCOTT CHRISTIANSON tells us just how much it's "creeping into our lives."


Machine Learning in Python NumPy: Neural Network in 9 Steps

#artificialintelligence

Although there are many clean datasets available online, we will generate our own for simplicity -- for inputs a and b, we have outputs a b, a-b, and a-b. Our dataset is split into training (70%) and testing (30%) sets. Only the training set is used for tuning the neural network; the testing set is used solely for performance evaluation once training is complete. Data in the training set are standardized so that each standardized feature has zero mean and unit variance.
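A rough sketch of that preprocessing (not the article's actual code; the placeholder inputs and 70/30 ratio simply follow the description above), with standardization statistics computed on the training portion only:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, scale=2.0, size=(1000, 2))   # placeholder inputs a, b

# 70/30 split into training and testing sets.
idx = rng.permutation(len(X))
n_train = int(0.7 * len(X))
X_train, X_test = X[idx[:n_train]], X[idx[n_train:]]

# Standardize using training-set statistics only, so the test set stays unseen.
mean, std = X_train.mean(axis=0), X_train.std(axis=0)
X_train = (X_train - mean) / std
X_test = (X_test - mean) / std
print(X_train.mean(axis=0).round(3), X_train.std(axis=0).round(3))  # roughly 0 and 1
```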