Many algorithms for data analysis exist, especially for classification problems. To solve a data analysis problem, a proper algorithm must be chosen, and its hyperparameters must be selected as well. In this paper, we present a new method for the simultaneous selection of an algorithm and its hyperparameters. To do so, we reduce this problem to the multi-armed bandit problem: we treat each algorithm as an arm, and a fixed-time hyperparameter search for that algorithm as a play of the corresponding arm. We also suggest a problem-specific reward function. We performed experiments on 10 real datasets and compared the suggested method with the existing one implemented in Auto-WEKA. The results show that our method is significantly better in most cases and never worse than Auto-WEKA.
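The paper's own reward function and bandit policy are not reproduced here, but the framing can be sketched with a standard UCB1 bandit: each arm is a learning algorithm, and one play stands in for a fixed-time hyperparameter search, simulated below as a noisy accuracy draw around a hypothetical per-algorithm mean.

```python
import math
import random

# Minimal sketch of the bandit framing, assuming UCB1 as the policy.
# The algorithm names and accuracy means are hypothetical stand-ins.
random.seed(0)
means = {"svm": 0.60, "random_forest": 0.85, "knn": 0.50}
arms = list(means)
plays = [0] * len(arms)
rewards = [0.0] * len(arms)

def ucb1_select(t):
    """Pick the arm with the highest UCB1 score (play each arm once first)."""
    for i, n in enumerate(plays):
        if n == 0:
            return i
    return max(range(len(arms)),
               key=lambda i: rewards[i] / plays[i]
               + math.sqrt(2 * math.log(t) / plays[i]))

for t in range(1, 1001):
    i = ucb1_select(t)
    # One "play": run a hyperparameter search for a fixed time slice and
    # report the best validation accuracy found (simulated here as noise
    # around the arm's mean, clipped to [0, 1]).
    reward = min(1.0, max(0.0, random.gauss(means[arms[i]], 0.05)))
    plays[i] += 1
    rewards[i] += reward

best = arms[max(range(len(arms)), key=lambda i: plays[i])]
print(best, plays)  # plays concentrate on the strongest algorithm
```

In this sketch the exploration bonus shrinks for well-sampled arms, so the search budget gradually shifts toward the algorithm whose hyperparameter searches keep yielding the best accuracy.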
Google DeepMind's artificial intelligence (AI) technology can play soccer with a simulated ant, and the technology may eventually be applied to real products. DeepMind's AI has already proved formidable: earlier this year, its AlphaGo system was applauded worldwide for defeating Lee Sedol, one of the strongest human Go players. Lee Sedol has won 18 world titles, but he lost 4 to 1 against the Google AI. The company says the match was watched by about 200 million people.
We made a tool that you can use. Reinforcement learning is much discussed these days, thanks to successes like AlphaGo. Wouldn't it be great if reinforcement learning algorithms could easily be used to solve all reinforcement learning problems? But there is a well-known obstacle: it is very easy to construct natural RL problems on which all standard RL algorithms (epsilon-greedy Q-learning, SARSA, etc.) fail catastrophically. That is a serious limitation, one that both inspires research and that, I suspect, many people need to learn the hard way.
Google's DeepMind has conquered some big artificial intelligence challenges in its day, such as defeating Go's world champion and navigating mazes through virtual sight. However, one of its accomplishments is decidedly unusual: it learned how to play soccer (aka football) with a digital ant. It looks cute, but it's really a profound test of DeepMind's asynchronous, reinforcement-based learning process. The AI has to not only learn how to move the ant without any prior understanding of its mechanics, but also kick the ball into a goal. Imagine if you had to learn how to run while playing your first-ever match -- that's how complex this is.
New machine learning approach could give a big boost to the efficiency of optical networks

February 25, 2019, Optical Society of America

New work leveraging machine learning could increase the efficiency of optical telecommunications networks. As our world becomes increasingly interconnected, fiber optic cables offer the ability to transmit more data over longer distances compared to traditional copper wires. Optical Transport Networks (OTNs) have emerged as a solution for packaging data in fiber optic cables, and improvements stand to make them more cost-effective. A group of researchers from Universitat Politècnica de Catalunya in Barcelona and the telecom company Huawei have retooled an artificial intelligence technique used for chess and self-driving cars to make OTNs run more efficiently. They will present their research at the upcoming Optical Fiber Conference and Exposition, to be held 3-7 March in San Diego, California, USA.