Model Parameters and Hyperparameters in Machine Learning -- What is the difference?
For example, suppose you want to build a simple linear regression model from a training data set with m features. If the model uses the gradient descent algorithm to minimize the objective function and thereby determine the weights w_0, w_1, w_2, …, w_m, then we might use an optimizer such as GradientDescent(eta, n_iter). Here eta (the learning rate) and n_iter (the number of iterations) are hyperparameters that have to be tuned in order to obtain the best values for the model parameters w_0, w_1, w_2, …, w_m. For a worked example, see: Machine Learning: Python Linear Regression Estimator Using Gradient Descent. There, n_iter is the number of iterations, eta0 is the learning rate, and random_state is the seed of the pseudo-random number generator used when shuffling the data.
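To make the distinction concrete, here is a minimal sketch (not the code from the linked article) of a linear regression estimator fit by batch gradient descent. The class name GradientDescentLinearRegression and its internals are illustrative assumptions; the point is that eta and n_iter are hyperparameters you choose up front, while the weights in w_ are the model parameters learned from the data:

```python
import numpy as np

class GradientDescentLinearRegression:
    """Linear regression fit by batch gradient descent (illustrative sketch).

    eta and n_iter are hyperparameters set by the user; the weights
    w_0, w_1, ..., w_m (stored in self.w_) are the model parameters
    learned from the training data.
    """

    def __init__(self, eta=0.01, n_iter=1000):
        self.eta = eta        # hyperparameter: learning rate
        self.n_iter = n_iter  # hyperparameter: number of iterations

    def fit(self, X, y):
        # self.w_[0] is the bias w_0; self.w_[1:] holds w_1, ..., w_m
        self.w_ = np.zeros(1 + X.shape[1])
        for _ in range(self.n_iter):
            errors = y - self.predict(X)
            # gradient descent step on the sum-of-squared-errors objective:
            # the model parameters are updated, the hyperparameters are not
            self.w_[0] += self.eta * errors.sum()
            self.w_[1:] += self.eta * X.T.dot(errors)
        return self

    def predict(self, X):
        return self.w_[0] + X.dot(self.w_[1:])

# Fit y = 2x + 1; the learned parameters should approach w_0 = 1, w_1 = 2
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])
model = GradientDescentLinearRegression(eta=0.01, n_iter=1000).fit(X, y)
```

Note the division of labor: changing eta or n_iter changes how the search for the weights proceeds (and whether it converges), but the weights themselves are always read off from the data by the fitting procedure.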
Nov-8-2019, 20:53:03 GMT