

Neural networks are famously difficult to interpret. It's hard to know what they are actually learning when we train them. Let's take a closer look and see whether we can build a good picture of what's going on inside.

Just like every other supervised machine learning model, neural networks learn relationships between input variables and output variables. In fact, we can even see how they relate to the most iconic model of all, linear regression. Linear regression assumes a straight-line relationship between an input variable x and an output variable y: x is multiplied by a constant m, which also happens to be the slope of the line, and that product is added to another constant, b, which happens to be where the line crosses the y axis.

We can represent this in a picture. Our input value x is multiplied by m. Our constant b is multiplied by one. Then the two products are added together to get y.
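The picture above can be sketched directly in code: a tiny "network" with one weighted input and one bias input whose value is always one. The slope and intercept values below are illustrative, not taken from the text.

```python
def linear_model(x, m=2.0, b=1.0):
    # The input x is weighted by the slope m; the constant input 1
    # is weighted by the intercept b; the two products are summed.
    # m=2.0 and b=1.0 are example values for illustration only.
    return m * x + b * 1

print(linear_model(3.0))  # 2*3 + 1*1 = 7.0
```

Written this way, the line y = mx + b is just a weighted sum of two inputs, which is exactly the computation a single neuron performs before any activation function is applied.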