In my opinion, RBMs have one of the simplest architectures of all neural networks. As can be seen in Fig. 1, there is no output layer. But as we will see later, an output layer isn't needed, since predictions are made differently than in regular feedforward neural networks. Energy is a term one might not associate with deep learning at first.
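To make the energy idea concrete, here is a minimal sketch of the standard RBM energy function for binary visible and hidden units, E(v, h) = -aᵀv - bᵀh - vᵀWh. The layer sizes, variable names, and random initialization below are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Illustrative sizes (assumptions for this sketch)
n_visible, n_hidden = 6, 3
rng = np.random.default_rng(0)

v = rng.integers(0, 2, size=n_visible).astype(float)  # binary visible units
h = rng.integers(0, 2, size=n_hidden).astype(float)   # binary hidden units
a = np.zeros(n_visible)                               # visible biases
b = np.zeros(n_hidden)                                # hidden biases
W = 0.1 * rng.standard_normal((n_visible, n_hidden))  # visible-hidden weights

def energy(v, h, a, b, W):
    """Energy of a joint (visible, hidden) configuration:
    E(v, h) = -a.v - b.h - v.W.h"""
    return -a @ v - b @ h - v @ W @ h

print(energy(v, h, a, b, W))
```

Lower energy corresponds to a higher probability of the configuration under the model, which is why training an RBM amounts to shaping this energy landscape.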
Jan-12-2019, 05:46:26 GMT