Telstra has used open source machine learning technology to answer the age-old question that plagues every marketer: how effective is my ad spend? The telco wields one of the biggest marketing budgets in Australia, but that doesn't stop Telstra from wanting to track the performance of every dollar spent. The company previously faced a six-month lag to get visibility into the effectiveness of its marketing spend; that is now down to five weeks using new marketing mix modelling developed in partnership with Accenture, Deakin University and Servian. The telco previously used a traditional econometric model to assess the performance of its marketing spend, pulling together 800 variables – which took two-and-a-half months to assemble – and then modelling this using regression techniques. "Six months after the marketing period had ended I could tell the CMO [chief marketing officer] and the marketers how effective their marketing was... six months ago," Telstra's director of research, insights & analytics Liz Moore told the recent Big Data & Analytics Innovation Summit in Sydney.
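The regression approach described above can be sketched in a few lines: weekly sales are modelled as a linear function of spend per channel, and ordinary least squares recovers each channel's incremental effect. This is a toy illustration of the technique only; the channel names, coefficients, and data below are invented, not Telstra's 800-variable model.

```python
import numpy as np

# Toy marketing-mix regression: sales = base + sum(coef_i * spend_i).
# Data is synthetic and noiseless for clarity, so the fit recovers the
# "true" coefficients exactly.
rng = np.random.default_rng(0)
n_weeks = 52
spend = rng.uniform(10, 100, size=(n_weeks, 2))   # columns: TV, digital ($k)
true_coef = np.array([0.5, 0.2])                  # incremental sales per $k
base_sales = 30.0
sales = base_sales + spend @ true_coef

# Fit by ordinary least squares: add an intercept column and solve.
X = np.column_stack([np.ones(n_weeks), spend])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(coef)  # ≈ [30.0, 0.5, 0.2] — intercept, TV effect, digital effect
```

A production marketing-mix model would add noise, adstock/carryover terms, and many more variables, but the estimation step is the same idea.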
A 2015 Journal of Experimental Psychology study, involving 166 subjects, found that when people's phones beep or buzz while they're in the middle of a challenging task, their focus wavers, and their work gets sloppier, whether they check the phone or not. In an April article in the Journal of the Association for Consumer Research, Dr. Ward and his colleagues wrote that the "integration of smartphones into daily life" appears to cause a "brain drain" that can diminish such vital mental skills as "learning, logical reasoning, abstract thought, problem solving, and creativity." In a similar but smaller 2014 study (involving 47 subjects) in the journal Social Psychology, psychologists at the University of Southern Maine found that people who had their phones in view, albeit turned off, during two demanding tests of attention and cognition made significantly more errors than did a control group whose phones remained out of sight. In another study, published in Applied Cognitive Psychology in April, researchers examined how smartphones affected learning in a lecture class with 160 students at the University of Arkansas at Monticello.
XGBoost was engineered to exploit every bit of available memory and hardware for tree-boosting algorithms. Its implementation offers several advanced features for model tuning, computing environments, and algorithm enhancement. It can perform the three main forms of gradient boosting (standard gradient boosting (GB), stochastic GB, and regularized GB), and it is robust enough to support fine-tuning and the addition of regularization parameters. Specifically, XGBoost implements the decision-tree boosting algorithm with an additional custom regularization term in the objective function.
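The effect of that regularization term shows up directly in XGBoost's per-leaf math. A minimal sketch of the documented second-order formulas (squared-error case, where each example's hessian is 1): the optimal leaf weight is w* = -G / (H + λ), and split gain subtracts a complexity cost γ per leaf. The helper names below are ours, not XGBoost API calls.

```python
# Sketch of the regularized objective XGBoost optimizes per tree leaf.
# For squared-error loss each example i contributes gradient g_i and
# hessian h_i = 1; the optimal leaf weight is w* = -G / (H + lam),
# where G and H are sums over the leaf and lam is the L2 penalty.
def leaf_weight(grads, hessians, lam):
    G, H = sum(grads), sum(hessians)
    return -G / (H + lam)

def split_gain(gl, hl, gr, hr, lam, gamma):
    # Gain of a candidate split = score(left) + score(right) - score(parent),
    # minus the complexity cost gamma for adding a leaf.
    def score(g, h):
        return g * g / (h + lam)
    return 0.5 * (score(gl, hl) + score(gr, hr) - score(gl + gr, hl + hr)) - gamma

# Larger lam shrinks leaf weights toward zero -- the "regularized GB" form:
print(leaf_weight([-1.0, -1.0, -1.0], [1.0, 1.0, 1.0], lam=0.0))  # 1.0
print(leaf_weight([-1.0, -1.0, -1.0], [1.0, 1.0, 1.0], lam=1.0))  # 0.75
```

Stochastic GB corresponds to computing these sums over a random subsample of rows (XGBoost's `subsample` parameter) rather than the full data.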
In this post, we'll explain how we used the automated machine learning function from H2O to develop a predictive model that is in the same ballpark as commercial products in terms of ML accuracy. We'll also explain how we applied the new lime package, which enables breakdown of complex, black-box machine learning models into variable importance plots. Machine Learning with h2o.automl() from the h2o package: This function takes automated machine learning to the next level by testing a number of advanced algorithms such as random forests, ensemble methods, and deep learning along with more traditional algorithms such as logistic regression. Feature (Variable) Importance with the lime package: The problem with advanced machine learning algorithms such as deep learning is that it is near impossible to understand the algorithm because of its complexity. LIME identified Over Time, Job Role, and Training Time as features that are relevant to the model's predictions.
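The automated-ML idea — try several algorithm families on the same data and rank them on a common metric — can be illustrated without a running H2O cluster. The sketch below is a minimal scikit-learn stand-in, not h2o.automl() itself (which also covers deep learning, grid search, and stacked ensembles); the candidate list and dataset are ours.

```python
# Minimal stand-in for automated ML: fit several algorithm families on the
# same data and rank them by cross-validated AUC, like an H2O leaderboard.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
leaderboard = sorted(
    ((name, cross_val_score(m, X, y, cv=5, scoring="roc_auc").mean())
     for name, m in candidates.items()),
    key=lambda t: t[1], reverse=True,
)
for name, auc in leaderboard:
    print(f"{name}: {auc:.3f}")
```

h2o.automl() automates the same loop at scale and returns the best model from the leaderboard for prediction.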
The Consortium for Advancing Adult Learning & Development (CAALD), a group of learning authorities whose members include researchers, corporate and nonprofit leaders, and McKinsey experts, recently met in Boston for the second year in a row to assess the state of the workplace and explore potential solutions. Bob Kegan, William and Miriam Meehan Research Professor in Adult Learning and Professional Development, Harvard Graduate School of Education: The number of employees who are operating in more nonstandard, complex jobs is going to increase, while less complex work is going to be increasingly automated. Bob Kegan: Work will increasingly be about adaptive challenges, the ones that artificial intelligence and robots will be less good at meeting. Tamara Ganc, chief learning officer, Vanguard Group: With our workforce now more dispersed, we're leveraging technology so people don't need to be physically together to still connect live.
By convention, the rare class is usually the positive class, so this means the True Positive (TP) rate is 0.78 and the False Negative rate (1 – True Positive rate) is 0.22. The Non-Large-Loss recognition rate is 0.79, so the True Negative rate is 0.79 and the False Positive (FP) rate is 0.21. They don't report a False Positive rate (or a True Negative rate, from which we could have calculated it). This result means that, using their neural network, they must process 28 uninteresting Non-Large-Loss customers (false alarms) for each Large-Loss customer they catch.
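That false-alarm ratio depends not only on the TP and FP rates but on how rare Large-Loss customers are. Given a recall (TPR), a false positive rate (FPR), and a positive-class prevalence p, the number of false alarms per caught positive is FPR·(1−p) / (TPR·p). The prevalence values below are our assumptions to show the arithmetic, not figures from the paper; a prevalence just under 1% reproduces roughly the 28:1 ratio quoted.

```python
# False alarms processed per true positive caught, given the classifier's
# rates and the (assumed) prevalence of the rare positive class.
def false_alarms_per_hit(tpr, fpr, prevalence):
    return (fpr * (1 - prevalence)) / (tpr * prevalence)

tpr, fpr = 0.78, 0.21
print(round(false_alarms_per_hit(tpr, fpr, 0.01), 1))    # 26.7 at 1% prevalence
print(round(false_alarms_per_hit(tpr, fpr, 0.0095), 1))  # ≈ 28 at ~0.95%
```

The takeaway is that with rare classes, even a modest FP rate translates into many false alarms per genuine hit.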
Then we'll use the new lime package that enables breakdown of complex, black-box machine learning models into variable importance plots. We can see a common theme with Case 3 and Case 7: Training Time, Job Role, and Over Time are among the top factors influencing attrition.
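LIME's core idea can be sketched in a few lines: perturb the instance being explained, query the black-box model on the perturbations, weight each sample by its proximity to the instance, and fit a weighted linear surrogate whose coefficients serve as local importances. This is a toy illustration of the technique, not the lime package's API; the `black_box` function and all numbers are invented.

```python
import numpy as np

# Toy LIME-style local surrogate: explain one prediction of a black-box
# model with a locally weighted linear fit around the instance.
def black_box(X):
    # Stand-in for an opaque model: nonlinear in feature 0, linear in feature 1.
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

def local_importances(x0, n_samples=2000, scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    eps = rng.normal(0.0, scale, size=(n_samples, x0.size))
    y = black_box(x0 + eps)
    w = np.exp(-np.sum(eps ** 2, axis=1) / (2 * scale ** 2))  # proximity kernel
    # Weighted least squares: scale rows by sqrt(w), include an intercept.
    A = np.column_stack([np.ones(n_samples), eps]) * np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A, y * np.sqrt(w), rcond=None)
    return coef[1:]  # drop the intercept; the rest are local importances

imp = local_importances(np.array([2.0, 1.0]))
print(imp)  # ≈ [4.0, 3.0]: the local gradient of the black box at (2, 1)
```

The real lime package adds interpretable feature binning, feature selection, and plotting on top of this same perturb-weight-fit loop.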
The ability to go beyond existing data and capture implied knowledge significantly improves the performance of machine learning systems. The model is then improved through an iterative learning cycle between data scientists and domain experts: the domain experts educate the data scientists on how different variables affect the outcome, the data scientists adapt the rules to incorporate the new information, and the results are provided back to the domain experts for review. Domain experts or qualified employees can then improve the intelligent system without the help of data scientists, by identifying variables that affect the outcome but that the AI system was not considering, and creating rules around those new inputs.
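The loop described above can be sketched as a scored model whose output expert-authored rules may adjust, so new domain knowledge can be added without retraining. All names, scores, and the example rule below are invented for illustration.

```python
# Illustrative rules-plus-model hybrid: experts add (predicate, adjustment)
# pairs that modify the model's score for records the model mishandles.
def model_score(record):
    # Stand-in for a trained model's churn-risk score on a customer record.
    return 0.3 if record.get("tenure_years", 0) > 2 else 0.7

expert_rules = []  # each rule: (predicate, adjustment)

def add_rule(predicate, adjustment):
    expert_rules.append((predicate, adjustment))

def predict(record):
    score = model_score(record)
    for predicate, adjustment in expert_rules:
        if predicate(record):
            score = adjustment(score)
    return min(max(score, 0.0), 1.0)  # clamp to a valid probability-like range

# A domain expert notices the model ignores a recent price change:
add_rule(lambda r: r.get("hit_by_price_rise", False), lambda s: s + 0.2)
print(predict({"tenure_years": 5, "hit_by_price_rise": True}))  # 0.5
```

In a later iteration, the data scientists would fold the variables behind the most-used rules back into the trained model itself.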
Neural networks are powerful learning models, especially deep networks applied to visual and speech recognition problems. Despite many efforts (e.g., a researcher created the popular Deep Visualization Toolbox) to capture step by step how a neural network gets trained, what we can see inside these layers is still very intricate. For a deep acoustic model used by Android voice search, a Google research team showed that nearly all of the improvement gained by training an ensemble of deep neural nets can be distilled into a single neural net of the same size, which is much easier to deploy. In an experiment to answer how pre-training works, the influence of pre-training was shown empirically in terms of model capacity, number of training examples, and architecture depth.
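The distillation result mentioned above rests on a simple mechanism: the ensemble's averaged logits are softened with a temperature parameter, and the single student net is trained to match these soft targets instead of hard labels. The sketch below shows only the soft-target computation; the logit values are invented for the example.

```python
import math

# Temperature-scaled softmax, the core of knowledge distillation: higher
# temperature flattens the distribution, exposing the ensemble's "dark
# knowledge" about relative similarities between classes.
def softmax(logits, temperature=1.0):
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

ensemble_logits = [4.0, 1.0, 0.2]  # e.g. averaged over ensemble members
hard = softmax(ensemble_logits, temperature=1.0)
soft = softmax(ensemble_logits, temperature=4.0)
print([round(p, 3) for p in hard])
print([round(p, 3) for p in soft])  # higher temperature -> flatter targets
```

During distillation the student minimizes cross-entropy against these soft targets (scaled by T²), usually mixed with a small cross-entropy term on the true labels.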
Shalina Chatlani, writing for Education Dive, explains, "The education technology market is growing rapidly and expected to hit $252 billion globally by 2020, according to the 2017 Kahoot! report." The good news is that it is going after the most intractable problems we have all faced in the education system: college application processes, continuing education, peer-to-peer study guides, and yes, standardized test preparation. "Most independent schools require standardized test scores from either the ISEE or the SSAT as part of the application." But improved test prep technology isn't just about getting kids to score better on standardized tests.