NVIDIA Crushes Latest Artificial Intelligence Benchmarking Tests
In its third round of submissions, MLCommons released results for MLPerf Inference v1.0. MLPerf is a suite of standard AI inference benchmarks built around seven applications, spanning workloads such as computer vision, medical imaging, recommender systems, speech recognition, and natural language processing. For each application and system form factor, MLPerf measures how fast a trained neural network can process input data, allowing unbiased comparison between systems.
Apr-23-2021, 07:45:37 GMT
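The core of an inference benchmark of this kind is timing per-sample predictions and reporting throughput and tail latency. The following minimal Python sketch illustrates the idea with a stand-in `predict` function (a hypothetical placeholder; real MLPerf submissions use the official LoadGen harness and actual trained models):

```python
import time
import statistics

def predict(sample):
    # Stand-in for a trained model's inference call (hypothetical;
    # an actual run would invoke a real neural network here).
    return sum(sample) / len(sample)

# Synthetic input batch: 1000 samples of 64 features each.
samples = [[float(i % 7)] * 64 for i in range(1000)]

latencies = []
for sample in samples:
    start = time.perf_counter()
    predict(sample)
    latencies.append(time.perf_counter() - start)

throughput = len(samples) / sum(latencies)          # samples per second
p90 = statistics.quantiles(latencies, n=10)[-1]     # 90th-percentile latency
print(f"throughput: {throughput:.0f} samples/s, p90 latency: {p90 * 1e6:.1f} us")
```

MLPerf's actual scenarios (e.g. single-stream, server, offline) vary how queries are issued, but all reduce to measuring latency and throughput in this way.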