Machines are eating humans' jobs. And it's not just about jobs that are repetitive and low-skill. In recent times, automation, robotics, algorithms, and artificial intelligence (AI) have shown they can do work equal to, or sometimes even better than, that of humans who are dermatologists, insurance claims adjusters, lawyers, seismic testers in oil fields, sports journalists and financial reporters, crew members on guided-missile destroyers, hiring managers, psychological testers, retail salespeople, and border patrol agents. Moreover, there is growing anxiety that technology developments on the near horizon will crush the jobs of the millions who drive cars and trucks, analyze medical tests and data, perform middle management chores, dispense medicine, trade stocks and evaluate markets, fight on battlefields, perform government functions, and even replace those who program software – that is, the creators of algorithms. People will create the jobs of the future, not simply train for them, ...
The 2012 publication Race against the Machine makes the case that the digitalization of work activities is proceeding so rapidly as to cause dislocations in the job market beyond anything previously experienced [1]. Unlike past mechanization/automation, which affected lower-skill blue-collar and white-collar work, today's information technology affects workers high in the education and skill distribution. Machines can substitute for brains as well as brawn. By one estimate, about 47% of total US employment is at risk of computerization [2]. If you doubt that a robot or some other machine equipped with digital intelligence connected to the internet could outdo you or me at our work in the foreseeable future, consider news reports about an IBM program that "creates" new food dishes (chefs beware), the battle between anesthesiologists and computer programs/robots that do their job far more cheaply, and the coming version of Watson ("twice as powerful as the original") based on computers connected over the internet via IBM's Cloud [3].
WWTS (What Would Turing Say?) Turing's Imitation Game was a brilliant proposal, and Turing was heavily influenced by the World War II-era parlor "game" on which it was modeled. If Turing were alive today, what sort of test might he propose? In the game as originally proposed, if a machine could fool interrogators as often as a typical man, then one would have to conclude that the machine, as programmed, was as intelligent as a person (well, as intelligent as men). As Judy Genova (1994) puts it, Turing's originally proposed game involves not a question of species, but one of gender. The current version, in which the interrogator is told he or she must distinguish a person from a machine, is (1) much more difficult for a program to pass, and (2) almost all of its added difficulties are largely irrelevant to intelligence! And it is possible to muddy the waters even more: some programs appear to do well at the test through various tricks, such as having the interviewee program claim to be a 13-year-old Ukrainian who doesn't speak English well (University of Reading 2014), so that all of its wrong or bizarre responses are excused on cultural, age, or language grounds.
Zipfian Academy has graduated more than 50 alumni, placing graduates into data science roles at Facebook, Twitter, Airbnb, Tesla, Uber, Square, Coursera, and many more Silicon Valley companies. Participants in our program come from backgrounds in engineering, data analysis, statistics, and occasionally professional poker. Here, we share an interview with Alex Mentch, a graduate from our Winter 2014 Cohort. Alex hails originally from Idaho, and studied electrical engineering at Washington University in St. Louis. Looking for a career transition into data science, Alex attended our Winter 2014 cohort where he built a search engine for state legislation.
The crucial importance of metrics in machine learning algorithms has led to an increasing interest in optimizing distance and similarity functions, an area of research known as metric learning. When data consist of feature vectors, a large body of work has focused on learning a Mahalanobis distance. Less work has been devoted to metric learning from structured objects (such as strings or trees), most of it focusing on optimizing a notion of edit distance. We identify two important limitations of current metric learning approaches. First, they make it possible to improve the performance of local algorithms such as k-nearest neighbors, but metric learning for global algorithms (such as linear classifiers) has not been studied so far. Second, the question of the generalization ability of metric learning methods has been largely ignored. In this thesis, we propose theoretical and algorithmic contributions that address these limitations. Our first contribution is the derivation of a new kernel function built from learned edit probabilities. Our second contribution is a novel framework for learning string and tree edit similarities inspired by the recent theory of (ε,γ,τ)-good similarity functions. Using uniform stability arguments, we establish theoretical guarantees for the learned similarity that give a bound on the generalization error of a linear classifier built from that similarity. In our third contribution, we extend these ideas to metric learning from feature vectors by proposing a bilinear similarity learning method that efficiently optimizes the (ε,γ,τ)-goodness. Generalization guarantees are derived for our approach, highlighting that our method minimizes a tighter bound on the generalization error of the classifier. Our last contribution is a framework for establishing generalization bounds for a large class of existing metric learning algorithms based on a notion of algorithmic robustness.
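To make the Mahalanobis setting concrete, the following is a minimal sketch of how a learned Mahalanobis distance reweights features. The matrix M below is a hypothetical stand-in for the output of an actual metric learning algorithm (e.g., one trained from pairwise constraints), not the thesis's method:

```python
import numpy as np

def mahalanobis(x, y, M):
    """Mahalanobis-style distance d_M(x, y) = sqrt((x - y)^T M (x - y)).

    M must be positive semi-definite for d_M to be a valid (pseudo-)metric;
    M = I recovers the plain Euclidean distance.
    """
    d = x - y
    return float(np.sqrt(d @ M @ d))

# Toy data: only the first feature is informative; the second is noise.
X = np.array([[0.0, 5.0],
              [0.1, -3.0],
              [2.0, 4.0],
              [2.1, -6.0]])

I = np.eye(2)                 # Euclidean baseline
M = np.diag([1.0, 0.01])      # hypothetical "learned" M down-weighting the noisy feature

# Points 0 and 1 agree on the informative feature. Under Euclidean
# distance they look far apart (noise dominates); under the learned
# metric they become close, which helps neighbors-based classifiers.
print(mahalanobis(X[0], X[1], I))  # large: dominated by feature 2
print(mahalanobis(X[0], X[1], M))  # small: noise suppressed
```

This is exactly the sense in which metric learning improves local algorithms such as k-nearest neighbors: the learned M reshapes the neighborhood structure before any classification happens.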
Madani, Omid (SRI International) | Bui, Hung (SRI International) | Yeh, Eric (SRI International)
We investigate prediction of users' desktop activities in the Unix domain. The learning techniques we explore do not require explicit user teaching. We show that simple, efficient many-class learning can perform well for action prediction, significantly improving over previously published results and baselines. This finding is promising for various human-computer interaction scenarios where a rich set of potentially predictive features is available, where there can be many different actions to predict, and where there can be considerable nonstationarity.
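As a rough illustration of the action-prediction setting (not the paper's actual learners, which use much richer feature sets), a bare-bones many-class predictor can be built from bigram counts over the command stream, backing off to the globally most frequent command for unseen contexts:

```python
from collections import Counter, defaultdict

class BigramActionPredictor:
    """Minimal sketch: predict the next shell command from the previous one
    using bigram counts. Purely illustrative of many-class action prediction.
    """

    def __init__(self):
        self.counts = defaultdict(Counter)  # prev command -> next-command counts
        self.overall = Counter()            # marginal counts over next commands

    def update(self, prev_cmd, next_cmd):
        self.counts[prev_cmd][next_cmd] += 1
        self.overall[next_cmd] += 1

    def predict(self, prev_cmd):
        # Back off to the overall most frequent command when the context is unseen.
        ctx = self.counts.get(prev_cmd)
        src = ctx if ctx else self.overall
        return src.most_common(1)[0][0] if src else None

# Toy command stream: 'cd' is usually followed by 'ls'.
history = ["cd", "ls", "vim", "cd", "ls", "make", "cd", "ls"]
model = BigramActionPredictor()
for prev, nxt in zip(history, history[1:]):
    model.update(prev, nxt)

print(model.predict("cd"))  # -> 'ls'
```

Because counts are updated incrementally, a predictor of this shape can also track nonstationary behavior if old counts are decayed, which is one reason simple online learners are attractive in this setting.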