neuralmagic/sparseml

#artificialintelligence 

Sparsification removes redundant information from neural networks using algorithms such as pruning and quantization, among others. Unfortunately, many practitioners have not realized its benefits because of the complicated process and the number of hyperparameters involved. To simplify this, Neural Magic's ML team created recipes that encode the hyperparameters and instructions needed to produce highly accurate pruned and pruned-quantized YOLOv3 models. These recipes let anyone plug in their own data and leverage SparseML's recipe-driven approach on top of Ultralytics' robust training pipelines. All examples in this tutorial are run on the VOC dataset.
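To make the recipe idea concrete, here is a minimal sketch of what a SparseML-style pruning recipe can look like. The modifier name and hyperparameter fields follow SparseML's YAML recipe format, but the specific values (epochs, sparsity targets) are illustrative assumptions, not the tuned values from Neural Magic's YOLOv3 recipes:

```python
# A SparseML recipe is a YAML file listing "modifiers" with their
# hyperparameters; the training loop reads it to schedule pruning
# (and/or quantization) automatically. Values below are illustrative.
recipe = """
modifiers:
  - !GMPruningModifier        # gradual magnitude pruning
    start_epoch: 0.0
    end_epoch: 40.0
    init_sparsity: 0.05       # start nearly dense
    final_sparsity: 0.85      # end at 85% sparsity
    update_frequency: 1.0     # re-prune once per epoch
    params: __ALL_PRUNABLE__  # apply to every prunable layer
"""

# Save the recipe so a recipe-driven training run can pick it up.
with open("recipe.yaml", "w") as f:
    f.write(recipe)
```

With SparseML installed, such a recipe is typically applied to an existing PyTorch training loop via `ScheduledModifierManager.from_yaml("recipe.yaml")` followed by `manager.modify(model, optimizer, steps_per_epoch)`; exact details may vary between SparseML versions, so consult the repository's documentation.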
