Once you have logged a number of models, you have more dimensions to examine than can be viewed at once. One powerful visualization tool we've discovered is the parallel coordinates chart: each line is an individual experiment, and each column is an input hyperparameter or an output metric. I've highlighted the top-accuracy runs, and the chart shows quite clearly that, across the experiments I've selected, high accuracy comes from low dropout values. Aggregate metrics are useful, but it is essential to look at specific examples.
If you are training models in an automated environment where it's inconvenient to run shell commands, such as Google's CloudML, see the documentation on Running in Automated Environments. Sign up for a free account from your shell or on our sign-up page, then add a few lines to your script to log hyperparameters and metrics. Weights and Biases is framework agnostic, but if you are using a common ML framework, you may find the framework-specific examples even easier for getting started. We've built framework-specific hooks to simplify the integration for Keras, TensorFlow, PyTorch, and Fast.ai.
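The "few lines" mentioned above can be sketched as follows. The project name, hyperparameters, and metric curve are illustrative placeholders, not values prescribed by the text:

```python
# Minimal sketch of instrumenting a training script with W&B.
# Hyperparameters passed to `config` appear as columns in the W&B UI;
# values passed to `wandb.log` appear as charts.

def toy_metrics(epoch, base_accuracy=0.8):
    """Stand-in for real evaluation: returns the metrics to log."""
    return {"epoch": epoch, "accuracy": base_accuracy + 0.05 * epoch}

if __name__ == "__main__":
    import wandb  # pip install wandb

    run = wandb.init(
        project="my-project",            # hypothetical project name
        config={"learning_rate": 0.01,   # hyperparameters become
                "dropout": 0.2},         # searchable run metadata
    )
    for epoch in range(5):
        wandb.log(toy_metrics(epoch))    # one chart point per call
    run.finish()
```

In a real script, `toy_metrics` would be replaced by your actual training and evaluation loop; everything else stays the same.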
In this tutorial, we'll go through the neural style transfer algorithm by Gatys et al., implement it, and track it using the W&B library. Let's assume that we're building a style transfer app for production. We'll need to compare the results generated by changing various parameters. This requires subjective comparison, because we cannot use an accuracy metric: no style transfer result is more "accurate" than another. So we'll choose the parameters according to our preference, and this requires side-by-side comparison, which can easily be done using the wandb library.
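A side-by-side comparison like the one described can be sketched as below: one W&B run per parameter setting, with the stylized output logged as an image. The project name, parameter grid, and placeholder image are assumptions for illustration; a real app would log the actual outputs of the style transfer algorithm:

```python
# Sketch: log one W&B run per parameter setting so the stylized outputs
# can be viewed side by side in the UI. Names and values are illustrative.

def comparison_grid(style_weights, content_weight=1.0):
    """Configs for the runs we want to compare (placeholder grid)."""
    return [{"style_weight": w, "content_weight": content_weight}
            for w in style_weights]

if __name__ == "__main__":
    import numpy as np
    import wandb

    for cfg in comparison_grid([1e4, 1e5, 1e6]):
        run = wandb.init(project="style-transfer", config=cfg)
        # Placeholder for the output of the style transfer algorithm.
        stylized = np.random.rand(128, 128, 3)
        wandb.log({"stylized": wandb.Image(stylized)})
        run.finish()
```

Because each image is tied to its run's config, the W&B UI can group the results by `style_weight`, making the subjective comparison straightforward.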
Gradient Dissent by Weights and Biases. We started Weights and Biases to build tools for Machine Learning practitioners because we care a lot about the impact that Machine Learning can have in the world, and we love working in the trenches with the people building these models. One of the most fun things about building these tools has been the conversations with ML practitioners and learning about the interesting things they're working on. This process has been so much fun that we wanted to open it up to the world in the form of our new podcast. We hope you have as much fun listening to it as we had making it. Today our guest is Nicolas Koumchatzky.
Machine Learning is incredibly exciting, and it's not just science fiction. This 5-class series is a practical, hands-on dive into machine learning, after which you will be ready to deliver immediate value to any organization. You will learn by doing: creating, optimizing, and debugging your own models. Your time is precious, so our classes are fast-paced and cover as much material as possible.