Machine learning is a set of algorithms that parse data, learn from that data, and then apply what they've learned to make informed decisions. You might be asking yourself: how can a program or algorithm make decisions and learn from data? Doesn't every program need to be programmed? Not if the program is trained to learn from and adapt to data. In machine learning, the algorithm is not explicitly programmed; instead, the model is "trained" on historical and present data in order to make future decisions and predictions. The more data available for training, the more accurate the predictions tend to be.
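To make the "trained, not explicitly programmed" idea concrete, here is a minimal sketch: a least-squares line fit, where the prediction rule is learned from historical (x, y) pairs rather than hand-coded. The hours/scores data below is made up purely for illustration.

```python
# Minimal sketch of "training": learn a rule from historical data,
# then apply it to new inputs. Fits y = slope*x + intercept by
# ordinary least squares. (Toy data below is hypothetical.)

def train(xs, ys):
    """Learn slope and intercept from historical (x, y) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, x):
    """Apply the learned rule to a new, unseen input."""
    slope, intercept = model
    return slope * x + intercept

# "Historical" data: hours studied vs. exam score (made up).
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 66, 70, 79]

model = train(hours, scores)
print(round(predict(model, 6)))  # prints 85
```

Nothing in `predict` was written by hand for this dataset; the numbers it uses come entirely from `train`, which is the whole point: feed it different historical data and you get a different model, with no code changes.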
Good reinforcement learning and other 'reasoning' benchmarks to measure progress: some set of increasingly harder tasks that can measurably show the different strengths of various models. My view is that it wasn't just the data, but everything around ImageNet that really pushed the field forward: the yearly competition, the talks and progress graphs, the anticipation and excitement to see how far the teams had pushed the limit this time. Reinforcement learning still needs its 'ImageNet moment', ideally some annual competition that can gain traction over time and get the big teams to invest resources in pushing the limits. The field lends itself well to simply adding more complex tasks as the models get stronger and stronger. I'm merely answering this question as 'what would I, as an outsider, like to see', so feel free to disregard it, but I think there is something in human nature about competition that drives progress.
I am currently taking a course in the verification of cyber-physical systems. When I say that, think formal and probabilistic verification of state machines for safety. It's a graduate course, and the professor wants us all to do a large project; anything that somewhat relates to the course material is fair game. I've been thinking about combining it with machine learning.