With the release of the Titan V, we have now entered deep learning hardware limbo. It is unclear whether NVIDIA will be able to keep its spot as the main deep learning hardware vendor in 2018, as both AMD and Intel Nervana will have a shot at overtaking it. So for consumers, I cannot recommend buying any hardware right now. The most prudent choice is to wait until the hardware limbo passes. This might take as little as 3 months or as long as 9 months. So why did we enter deep learning hardware limbo just now?
People are the heart and mind of your business. Data is the lifeblood that feeds everything you do. For your business to operate at peak performance and deliver the results you seek, people, processes and data must be healthy individually, as well as work in harmony. Technology has always been important to bringing people, processes and data together; however, technology's importance is evolving. As it does, the relationships among people, processes and technology are also changing. People are the source of the ideas and the engine of critical thinking that enables you to turn customer needs and market forces into competitive (and profitable) opportunities for your business.
Google Executive Chairman Eric Schmidt has suggested machine learning will be the one commonality among every big startup over the next five years. The inference here is that machine learning, or AI, will be as revolutionary as the Internet, the mobile phone, the personal computer; heck, I'll say it, as game-changing as sliced bread. AI is responsible for many simple experiences we already take for granted: the Netflix "recommended for you" suggestions and the Facebook feeds that happen to show travel deals for places we've been searching.
At the start of last month I sat down to benchmark the new generation of accelerator hardware intended to speed up machine learning inferencing on the edge. So I'd have a rough yardstick for comparison, I also ran the same benchmarks on the Raspberry Pi. Afterwards a lot of people complained that I should have been using TensorFlow Lite on the Raspberry Pi rather than full-blown TensorFlow. They were right: it ran a lot faster. Then, with the release of the AI2GO framework from Xnor.ai, which uses next-generation binary weight models, I looked at the inferencing speeds of this next generation of models in comparison to 'traditional' TensorFlow.
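The benchmarking approach above boils down to timing repeated single-image inferences and averaging. As a minimal sketch of that methodology (not the author's actual harness), the timing loop below discards a few warm-up runs, since the first inferences are typically slower while the runtime initialises, then reports mean latency. The `fake_inference` stand-in is hypothetical; in practice you would call something like a TensorFlow Lite `Interpreter.invoke()` there.

```python
import time

def benchmark(run_inference, warmup=5, runs=100):
    """Time a single-inference callable and return mean latency in ms.

    A few warm-up calls are made first and excluded from the timing,
    because initial runs are slower while caches and graphs warm up.
    """
    for _ in range(warmup):
        run_inference()
    start = time.perf_counter()
    for _ in range(runs):
        run_inference()
    elapsed = time.perf_counter() - start
    return (elapsed / runs) * 1000.0  # mean milliseconds per inference

# Hypothetical stand-in for a real model call, e.g. invoking a
# TensorFlow Lite interpreter on a Raspberry Pi:
def fake_inference():
    sum(i * i for i in range(1000))

mean_ms = benchmark(fake_inference, warmup=2, runs=20)
print(f"mean inference time: {mean_ms:.3f} ms")
```

The same harness can wrap full TensorFlow, TensorFlow Lite, or an AI2GO binary-weight model, which is what makes side-by-side comparisons like the one above meaningful.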