Sustainable Deep Learning Architectures require Manageability


This is a very important consideration that is often overlooked in the field of Artificial Intelligence (AI). I suspect there are very few academic researchers who understand this aspect. The work performed in academia is distinctly different from the work required to make a product that is sustainable and economically viable. It is the difference between code written to demonstrate a new discovery and code written to support the operations of a company. The former tends to be exploratory and throwaway, while the latter tends to be exploitative and requires sustainability.

Biologically Inspired Software Architecture for Deep Learning


In the Google paper ("Hidden Technical Debt in Machine Learning Systems"), the authors enumerate many risk factors, design patterns, and anti-patterns that need to be taken into consideration in an architecture. These include risk factors such as boundary erosion, entanglement, hidden feedback loops, undeclared consumers, data dependencies, and changes in the external world. Deep Learning systems (and this applies equally to Machine Learning) differ from traditional software in that their behavior is created from training data rather than written as explicit code. A recent paper from the folks at Berkeley explores the requirements for building these new kinds of systems (see "Real-Time Machine Learning: The Missing Pieces").
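One of the risk factors above, the hidden feedback loop, can be made concrete with a small sketch. The toy "model" and data-collection step below are hypothetical, not from either paper: a model is repeatedly retrained on observations that its own predictions helped produce, so the data it sees silently drifts toward its past behavior.

```python
import random

random.seed(0)

def train(data):
    """Toy 'training': the learned score is just the mean observed label."""
    return sum(data) / len(data)

def predict(model):
    """Toy 'prediction': threshold the learned score."""
    return 1 if model >= 0.5 else 0

# The underlying world starts out balanced between labels 0 and 1.
observations = [random.randint(0, 1) for _ in range(100)]
model = train(observations)

for step in range(5):
    shown = predict(model)
    # Hidden feedback: users mostly interact with whatever the model
    # surfaced, so the freshly logged data over-represents the model's
    # own prediction (80 echoed labels vs. 20 organic ones).
    new_batch = [shown] * 80 + [random.randint(0, 1) for _ in range(20)]
    observations = observations[-50:] + new_batch
    model = train(observations)  # retrained on self-influenced data

print(round(model, 2))  # drifts well away from the true rate of 0.5
```

No single component is wrong here, which is exactly why the loop is hard to spot: the coupling lives in the data pipeline, not in any one piece of code.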