AI is the new electricity, says Coursera's Andrew Ng

#artificialintelligence

No discussion in information technology today is complete without reference to artificial intelligence, or AI for short. Needless to say, experts in AI are in great demand. Among them, Andrew Ng is often referred to as a go-to guru on AI. He is the co-founder of Coursera, which offers online courses, an adjunct professor at Stanford University, and formerly the head of Baidu's AI Group and of Google Brain. He calls AI the new electricity. In response to an email from Business Today, he explains why and shares his thoughts on what companies need to do.


What can AI do for the future of e-learning?

#artificialintelligence

The training and development of your workforce is vital to achieving success in digital transformation. And today, more and more businesses are leveraging e-learning to educate their employees.



10 use cases - Artificial Intelligence and Machine Learning in Education #AI

#artificialintelligence

A robot has been teaching graduate students for five months, and none of them realized it. Responding to questions over email and on forums, Jill had a casual, colloquial tone and was able to offer nuanced, accurate responses within minutes.


Google wants to teach more people AI and machine learning with a free online course

#artificialintelligence

Machine learning and AI are some of the biggest topics in the tech world right now, and Google is looking to make those fields more accessible to more people with its new Learn with Google AI website.


The HR Technology Market: Trends and Disruptions for 2018

#artificialintelligence

Digital disruption: a new industrial revolution, but people matter (HR Tech 2017 Keynote, Copyright 2017 Deloitte Development LLC). Robots can cost as little as $25,000, and 250,000 were purchased globally in 2016 (Source: "Robots: The new low-cost worker", Dhara Ranasinghe, CNBC, April 10, 2015). The future of work is here now: robotics, AI, and sensors. The quantified self is arriving now, with $1.8 billion in venture capital invested in wearables since 2016 (Source: CB Insights). The "average" US worker now spends 25% of their day reading or answering emails, yet fewer than 16% of companies have a program to "simplify work" or help employees deal with stress. The average mobile phone user checks their device 150 times a day. The average US worker works 47 hours per week; 49% work 50 hours or more, and 20% work 60 hours per week. 40% of the US population believes it is impossible to succeed at work and have a balanced family life.


AI education opens up as Imperial College London launches MOOCs

#artificialintelligence

A leading centre for AI education will open up to the world, as Imperial launches its first Massive Open Online Courses with Coursera.


The Many Faces of Exponential Weights in Online Learning

arXiv.org Machine Learning

A standard introduction to online learning might place Online Gradient Descent at its center and then proceed to develop generalizations and extensions like Online Mirror Descent and second-order methods. Here we explore the alternative approach of putting exponential weights (EW) first. We show that many standard methods and their regret bounds then follow as a special case by plugging in suitable surrogate losses and playing the EW posterior mean. For instance, we easily recover Online Gradient Descent by using EW with a Gaussian prior on linearized losses, and, more generally, all instances of Online Mirror Descent based on regular Bregman divergences also correspond to EW with a prior that depends on the mirror map. Furthermore, appropriate quadratic surrogate losses naturally give rise to Online Gradient Descent for strongly convex losses and to Online Newton Step. We further interpret several recent adaptive methods (iProd, Squint, and a variation of Coin Betting for experts) as a series of closely related reductions to exp-concave surrogate losses that are then handled by Exponential Weights. Finally, a benefit of our EW interpretation is that it opens up the possibility of sampling from the EW posterior distribution instead of playing the mean. As already observed by Bubeck and Eldan, this recovers the best-known rate in Online Bandit Linear Optimization.
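To make the exponential-weights idea concrete, here is a minimal sketch of EW over a finite set of experts (the Hedge setting), playing the posterior mean as the abstract describes. This is a toy illustration under the standard assumption of bounded losses; the function name and setup are illustrative, not code from the paper.

```python
import numpy as np

def hedge(losses, eta):
    """Exponential Weights (Hedge) over K experts.

    losses: (T, K) array of per-round expert losses in [0, 1].
    eta: learning rate.
    Returns the algorithm's total expected loss and the best expert's total loss.
    """
    T, K = losses.shape
    cum = np.zeros(K)              # cumulative loss of each expert
    alg_loss = 0.0
    for t in range(T):
        w = np.exp(-eta * cum)     # unnormalized EW posterior
        p = w / w.sum()
        alg_loss += p @ losses[t]  # play the posterior mean over experts
        cum += losses[t]           # experts' losses are revealed
    return alg_loss, cum.min()
```

With losses in [0, 1], this strategy guarantees regret at most log(K)/eta + eta*T/8 against the best expert; plugging in surrogate losses and different priors, as the abstract notes, recovers methods like Online Gradient Descent as special cases.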


Artificial Intelligence Website Creation 2018 (No Coding)

#artificialintelligence

This game-changing course will cover artificial intelligence tools for website design, chatbot design, and analytics, helping you create a website in minutes.


Black-Box Reductions for Parameter-free Online Learning in Banach Spaces

arXiv.org Machine Learning

We introduce several new black-box reductions that significantly improve the design of adaptive and parameter-free online learning algorithms by simplifying analysis, improving regret guarantees, and sometimes even improving runtime. We reduce parameter-free online learning to online exp-concave optimization, we reduce optimization in a Banach space to one-dimensional optimization, and we reduce optimization over a constrained domain to unconstrained optimization. All of our reductions run as fast as online gradient descent. We use our new techniques to improve upon the previously best regret bounds for parameter-free learning, and do so for arbitrary norms.
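For reference, the runtime baseline the abstract compares against, projected Online Gradient Descent, can be sketched in a few lines. This is a generic textbook version over the Euclidean ball, not the paper's reductions; the function name and toy setup are illustrative assumptions.

```python
import numpy as np

def ogd(loss_grads, eta, dim, radius=1.0):
    """Projected Online Gradient Descent over the Euclidean ball of given radius.

    loss_grads: sequence of callables g_t(x) -> gradient at the current iterate.
    Returns the list of iterates played.
    """
    x = np.zeros(dim)
    iterates = []
    for g in loss_grads:
        iterates.append(x.copy())   # play x_t, then observe the gradient
        x = x - eta * g(x)          # gradient step
        norm = np.linalg.norm(x)
        if norm > radius:           # project back onto the constrained domain
            x = x * (radius / norm)
    return iterates
```

Each round costs O(dim) time; the paper's claim is that its parameter-free and unconstrained reductions match this per-round cost while improving the regret guarantees.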