neuralmagic/sparseml
Sparsifying involves removing redundant information from neural networks using algorithms such as pruning and quantization, among others. Unfortunately, many practitioners have not realized these benefits because of the complicated process and the number of hyperparameters involved. To simplify the process, Neural Magic's ML team created recipes that encode the hyperparameters and instructions needed to create highly accurate pruned and pruned-quantized YOLOv3 models. These recipes allow anyone to plug in their own data and leverage SparseML's recipe-driven approach on top of Ultralytics' robust training pipelines. The examples in this tutorial are all performed on the VOC dataset.
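To make the pruning idea behind these recipes concrete, here is a minimal NumPy sketch of unstructured magnitude pruning, the core operation a pruning recipe schedules over training. This is an illustration of the concept only, not SparseML's API; the function name and the 80% sparsity target are assumptions for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))          # stand-in for one layer's weight matrix
pruned = magnitude_prune(w, 0.80)
print(f"sparsity: {np.mean(pruned == 0):.2f}")
```

In a real recipe this threshold is recomputed on a schedule (e.g., gradually ramping sparsity across epochs) so the network can recover accuracy between pruning steps, rather than being applied once as above.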
tfruns: Tools for TensorFlow Training Runs
Our example training script (mnist_mlp.R) trains a Keras model to recognize MNIST digits. To train a model with tfruns, use the training_run() function in place of the source() function to execute your R script. The metrics and output of each run are automatically captured in a run directory that is unique to each run you initiate. You can call the latest_run() function to view the results of the last run, including the path to the run directory that stores all of the run's output. The run directory used in the example above is "runs/2017-10-02T14-23-38Z".
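The workflow above can be sketched in a few lines of R. This assumes the tfruns package is installed and that mnist_mlp.R sits in the working directory, as in the original example:

```r
library(tfruns)

# Execute the training script; metrics and output are captured
# automatically in a unique run directory (e.g. runs/2017-10-02T14-23-38Z)
training_run("mnist_mlp.R")

# View the results of the most recent run, including the
# path to its run directory
latest_run()
```

Because training_run() is a drop-in replacement for source(), the training script itself needs no changes to participate in run tracking.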