Hi, I am a noob in reinforcement learning, but I want to try and dabble in it. As I understand it, RL experiments can take a long time to converge compared to regular deep learning methods, so I am looking for ways to be more efficient when working with these models on AWS. My current deep learning workflow is to open a notebook on the server, run the model, tune hyperparameters, run the model again, and so on. So my question is: how do you set up many experiments to run in parallel on AWS?
Recurrent neural networks (RNNs) have been widely used for processing sequential data. However, RNNs are notoriously difficult to train due to the well-known vanishing and exploding gradient problems, and they struggle to learn long-term patterns. Long short-term memory (LSTM) and gated recurrent unit (GRU) networks were developed to address these problems, but their use of the hyperbolic tangent and sigmoid activation functions results in gradient decay over layers. Consequently, constructing an efficiently trainable deep network is challenging. In addition, all the neurons in an RNN layer are entangled together, and their behaviour is hard to interpret.
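The gradient-decay point above can be illustrated numerically (this is a minimal sketch, not from the abstract): because tanh'(x) = 1 - tanh(x)^2 is at most 1 and well below 1 away from zero, the chain rule multiplies a sub-unit factor per layer, shrinking the backpropagated gradient:

```python
# Minimal illustration of gradient decay through stacked tanh layers:
# each layer multiplies the gradient by tanh'(x) = 1 - tanh(x)^2 <= 1.
import math

x = 1.5       # arbitrary pre-activation value
grad = 1.0    # gradient arriving from the top layer
for layer in range(10):
    y = math.tanh(x)
    grad *= 1.0 - y * y  # derivative of tanh at this layer
    x = y                # activation feeds the next layer
print(grad)  # far smaller than the initial 1.0 after 10 layers
```

Weight matrices (ignored here for simplicity) can amplify or further shrink these factors, which is exactly the exploding/vanishing behaviour the abstract refers to.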
But on a serious note, this is both well-timed and invaluable. While many people who invested around November (hopefully) learned their lesson about doing their own research into the same sources that your ML system analyzes (white paper, team members' LinkedIn pages, GitHub repos, the website itself... etc.), this is undoubtedly impressive work and should help in analyzing the never-ending onslaught of offerings going forward.
NEW YORK, March 12, 2018 (GLOBE NEWSWIRE) -- The Global Artificial Intelligence in Agriculture (AIA) Market is expected to grow at a significant CAGR of 24.3% during the forecast period. The factors driving the growth of the global AIA market are the rising adoption of information management systems (IMS), automated irrigation, increasing crop productivity through deep learning techniques, and the growing global population. Furthermore, the growing trend of precision farming and the increasing adoption of smart sensors are also fueling demand in the global AIA market. AIA is also expected to help offset the scarcity of physical labor by replacing human workers. However, the high cost of collecting agricultural land data is a major restraint on the growth of the AIA market.
It all depends on your skills as a developer. If you know how to work with many threads on many cores, I'd go for a cheap Xeon/AMD server with 2 or 4 CPU sockets to get up to 64 cores, at least 1 GB of RAM per core, a bootable PCI-Express RAM disk with automatic backup (SSDs are ridiculous and overrated: they burn out so easily that they're not worth the risk for long-term storage), and a fast HDD (10k RPM minimum) as storage. For the GPU, honestly, unless you plan on working with CUDA/OpenCL, anything is fine because you'd rarely compute on it. But if you will develop GPU-"powered" neural networks and wattage isn't a concern for you, then given proper thermal dissipation, there are many AMD cards that pack a punch for little money in both single and double precision. But if you don't know how to take advantage of multithreading and frameworks are what you have in mind, then whatever you buy, as long as it is fast, "it's gonna be fine".
"It is possible for ASICs over time to be successful in the deep-learning world," Mosesmann said. "However, we are of the opinion that at this stage in a multidecade product cycle it is just too early to'fix' the hardware, given that there is a plethora of deep-learning frameworks (Tensorflow, Caffee, MXNet, …