Reward Function Engineering is the job of assigning rewards to various actions, given the predictions made by the AI. Sometimes Reward Function Engineering involves programming the rewards in advance of the predictions so that actions can be automated. As machines get better at prediction, the distinct value of Reward Function Engineering will increase, because the application of human judgment becomes central. So, although it is too early to speculate on the overall impact on jobs, there is little doubt that we will soon witness a great flourishing of demand for human judgment in the form of Reward Function Engineering.
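To make the idea concrete, here is a minimal sketch (all names and payoff numbers are invented for illustration): the human judgment lives in the reward table programmed in advance, and the automated policy simply picks the action with the highest expected reward under the AI's prediction.

```python
# Hypothetical example: a maintenance bot deciding whether to inspect a part.
# The payoff numbers encode a human judgment call -- a wasted inspection is
# cheap, a missed failure is very costly.

def reward(action: str, outcome: str) -> float:
    """Reward programmed in advance for each (action, outcome) pair."""
    table = {
        ("inspect", "faulty"): +10.0,   # caught a real problem
        ("inspect", "healthy"): -1.0,   # wasted an inspection
        ("skip", "faulty"): -100.0,     # missed a failure: very costly
        ("skip", "healthy"): 0.0,       # correctly left alone
    }
    return table[(action, outcome)]

def best_action(p_faulty: float) -> str:
    """Automated choice: maximize expected reward given the AI's prediction."""
    def expected(action):
        return (p_faulty * reward(action, "faulty")
                + (1 - p_faulty) * reward(action, "healthy"))
    return max(["inspect", "skip"], key=expected)
```

With these numbers, even a modest predicted fault probability triggers an inspection (`best_action(0.5)` returns `"inspect"`), while a near-zero probability does not (`best_action(0.001)` returns `"skip"`); changing the table, not the model, changes the behavior.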
To create his algorithm, Janosov first built a network of the realm's social system based on how often characters interacted with each other. This data illustrated which characters had the strongest ties to others, as shown in the realm's social network below. Of the 94 characters he considered, 61 had already died, providing a wealth of data about the social position and level of importance of the deceased. He then fed this data into a machine-learning algorithm; the resulting model accurately predicted the fates of about 75 percent of the 94 characters.
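The feature-building step might look something like this miniature sketch (characters and interaction counts invented for illustration, not Janosov's actual data): interaction counts define a weighted network, and simple social-position features such as weighted degree become inputs to the classifier.

```python
from collections import defaultdict

# (character_a, character_b, number_of_interactions) -- invented numbers
interactions = [
    ("Arya", "Sansa", 12), ("Arya", "Hound", 20),
    ("Ned", "Robert", 15), ("Ned", "Arya", 8),
    ("Robert", "Cersei", 10),
]

# Weighted degree: total interactions a character takes part in,
# a crude proxy for social position in the network.
degree = defaultdict(int)
for a, b, weight in interactions:
    degree[a] += weight
    degree[b] += weight

# In the full pipeline, features like this (plus richer centrality
# measures) for the 61 already-dead characters would supply labeled
# training data, and the fitted model would predict the fates of the rest.
```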
Julian and I independently wrote summaries of our solution to the 2017 Data Science Bowl. A tricky detail I found while working through the LUNA competition is that different CT machines produce scans with different sampling rates in the third dimension. Here's an example of a malignant nodule (highlighted in blue): Anyway, the LUNA16 dataset had one very crucial piece of information: the locations in the LUNA CT scans of 1,200 nodules. It's the reason I am able to build models on only 1,200 samples (nodules) and have them work very well (normal computer vision datasets have 10,000 - 10,000,000 images).
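A standard way to deal with those varying sampling rates (a sketch of the usual preprocessing idea, not our actual competition code) is to resample every scan to isotropic voxel spacing before training, so that one voxel covers the same physical distance in every axis on every machine:

```python
import numpy as np
from scipy import ndimage

def resample_to_isotropic(scan: np.ndarray, spacing_mm, target_mm=1.0):
    """Resample a 3-D CT volume so each voxel is target_mm on every axis.

    scan       -- array of shape (z, y, x)
    spacing_mm -- original (z, y, x) voxel size in millimetres
    """
    zoom_factors = [s / target_mm for s in spacing_mm]
    return ndimage.zoom(scan, zoom_factors, order=1)  # linear interpolation

# Tiny demo: a scanner with thick 2.0 mm slices but 1.0 mm in-plane pixels.
demo = np.zeros((4, 8, 8), dtype=np.float32)
iso = resample_to_isotropic(demo, spacing_mm=(2.0, 1.0, 1.0))
# iso now has shape (8, 8, 8): the thick slices were interpolated up
# so the z-axis matches the in-plane resolution.
```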
The key development that most would credit with overcoming this common challenge is neural network technology, which has allowed us to train models that help machines make complex decisions at a higher, more abstract level, similar to the way the brain functions. Although Moore's law appears to be slowing down, I would predict that as more focus is put toward developing hardware specialized for machine learning, and if research into more efficient training algorithms continues on its current trend, the cost to perform machine learning tasks in 2027 should be roughly 0.1% of what it is today. With greater availability of computational resources and data, I believe the trend will move from deep architectures with a single snapshot input and single classification output to much more complex deep recurrent networks that take in multiple streams of varying input and offer a multitude of varying output types. We are already great at learning continuous customer profiles over time, auto-adapting models, and providing reasons along with scores.
How they got there: Whereas Frey and Osborne analyzed entire occupations for the risk of automation, researchers at PwC took into account that some tasks within jobs can be automated while others can't, and that different jobs within occupations involve different combinations of tasks. For every 15 jobs the researchers predicted would be lost, they assumed that one job would be created "in software, engineering, design, maintenance, support, training." Forrester estimated that automating technologies would create about a 10% increase in employment which, when combined with the 17% decline in employment, would yield a net loss of 7% of jobs. Our productivity estimates assume that people displaced by automation will find other employment.
The core business of Amazon, Google, and to a certain extent Microsoft required them to build large-scale data centre operations. As mentioned, AI models require a data set to train on, and then operational data from which to generate intelligence (for example, predictions or diagnoses). More intelligence will be available even faster, enabling humans to make better decisions, particularly in a world of unstable markets where creativity, empathy and other emotions are strong influencing factors. It is the advent of cheap compute and data engineering in the cloud that makes AI possible now.
As IIA's chief analytics officer Bill Franks has pointed out, you can have perfectly functional AI (i.e., it makes good predictions) that you can't explain, but that may not be acceptable: "If you only care about predicting who will get a disease, or which image is a cat, or who will respond to a coupon, then the opacity of AI is irrelevant."
Many ML algorithms are black boxes -- you input a lot of data and get back a model that works in mysterious ways, which makes the results difficult or impossible to explain. This is not a problem you can set aside until after you've built the model(s) -- it is important to think in advance about which data and model components the user may want to see, and how to present results in a way that builds user trust. Again, the choice of user experience depends heavily on the subject matter, product and user needs -- there's no "one-size-fits-all". Explainability is an evolving area of ML research, with researchers actively looking for ways to make models less of a black box.
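One concrete technique from that research is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops, which tells the user what the black box is actually relying on. A minimal NumPy sketch (the "black box" here is a stand-in function; any fitted model's predict method would work the same way):

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean accuracy drop when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(predict(X) == y)
    drops = []
    for j in range(X.shape[1]):
        losses = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's signal, keep the rest
            losses.append(baseline - np.mean(predict(Xp) == y))
        drops.append(float(np.mean(losses)))
    return drops

# Toy "black box": the label depends only on feature 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
black_box = lambda X: (X[:, 0] > 0).astype(int)

drops = permutation_importance(black_box, X, y)
# Feature 0 shows a large accuracy drop; features 1 and 2 show none,
# revealing that the model ignores them -- exactly the kind of insight
# that helps build user trust.
```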
They contacted 1,634 researchers who published papers at the 2015 NIPS and ICML conferences--the two leading machine learning conferences--and asked them to complete a survey on the topic; 352 researchers responded. And when the question was worded slightly differently, to gauge when all human labor would be automated rather than just when it could be, the aggregate forecast was a 50 percent chance 122 years from now and a 10 percent chance within 20 years. When asked how likely it was that AI would perform vastly better than humans in all tasks two years after machines overtook human capabilities, the median probability was just 10 percent. Unsurprisingly, the vast majority of respondents thought machines outperforming humans would have a positive impact on humanity.
Knowing the historical statistical performance, the manager may choose to replace the current pitcher with someone who has better historical performance against the batter. A study recently published in the journal Nature documents how Stanford researchers developed a machine learning algorithm to detect potential cases of skin cancer. Put simply, good predictive analytics require a large sample of data to work well. Analyzing historical data to predict future outcomes only works if the past is truly representative of the future.
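The pitcher-replacement decision reduces to a toy calculation (all names and numbers below are invented): compare each available pitcher's historical batting average allowed against this batter and prefer the best matchup.

```python
# Invented matchup records: (hits allowed, at-bats) versus this batter.
matchup_history = {
    "current_pitcher": (9, 20),   # .450 against -- struggling
    "reliever_a":      (3, 18),   # .167 against
    "reliever_b":      (5, 15),   # .333 against
}

def avg_against(record):
    hits, at_bats = record
    return hits / at_bats

# Prefer the pitcher who has allowed the lowest average to this batter.
best = min(matchup_history, key=lambda p: avg_against(matchup_history[p]))
# Caveat from the text: 15-20 at-bats is a tiny sample, so these averages
# are noisy, and the past matchups must be representative of the future
# for the decision to be sound.
```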