
How to Implement YoloV3 in Tensorflow 2.0

#artificialintelligence

This repo provides a clean implementation of YoloV3 in TensorFlow 2.0 using all the best practices. I have created a complete tutorial on how to train from scratch using the VOC2012 dataset. For customized training, you need to generate tfrecord files following the TensorFlow Object Detection API. For example, you can use Microsoft VoTT to generate such a dataset. You can also use this script to create the Pascal VOC dataset.
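The tfrecord files mentioned above can also be written directly with TensorFlow's own API. Below is a minimal sketch of serializing one annotated image, with feature keys in the style of the TF Object Detection API; the dummy JPEG bytes, the 416x416 size, and the "person" label are illustrative placeholders, not taken from the repo.

```python
# Sketch: serialize one annotated image into a tfrecord file.
# Feature keys follow the TF Object Detection API naming convention;
# the image bytes and label below are hypothetical placeholders.
import tensorflow as tf

def make_example(encoded_jpeg, width, height, xmins, ymins, xmaxs, ymaxs, labels):
    feature = {
        "image/encoded": tf.train.Feature(bytes_list=tf.train.BytesList(value=[encoded_jpeg])),
        "image/width": tf.train.Feature(int64_list=tf.train.Int64List(value=[width])),
        "image/height": tf.train.Feature(int64_list=tf.train.Int64List(value=[height])),
        # Box coordinates are normalized to [0, 1], one entry per object.
        "image/object/bbox/xmin": tf.train.Feature(float_list=tf.train.FloatList(value=xmins)),
        "image/object/bbox/ymin": tf.train.Feature(float_list=tf.train.FloatList(value=ymins)),
        "image/object/bbox/xmax": tf.train.Feature(float_list=tf.train.FloatList(value=xmaxs)),
        "image/object/bbox/ymax": tf.train.Feature(float_list=tf.train.FloatList(value=ymaxs)),
        "image/object/class/text": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[l.encode() for l in labels])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# One fake image with a single "person" box.
example = make_example(b"\xff\xd8fake-jpeg-bytes", 416, 416,
                       [0.1], [0.2], [0.5], [0.6], ["person"])
with tf.io.TFRecordWriter("sample.tfrecord") as writer:
    writer.write(example.SerializeToString())
```

A real exporter would loop over images and annotations (e.g. the VOC XML files) instead of hard-coding one record.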


Improve the Performance Easily in TensorFlow Using Graph Mode

#artificialintelligence

Originally published on Towards AI, the world's leading AI and technology news and media company. TensorFlow originally allowed you to code only in Graph Mode, but since the ability to code in Eager Mode was introduced, most notebooks produced are in Eager Mode.
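As a rough illustration of the article's point, the same computation can run eagerly or be traced into a graph with tf.function; the function body and tensor shapes below are arbitrary examples, not taken from the article.

```python
# Sketch: the same computation in eager mode and in graph mode.
# Function name and shapes are illustrative, not from the article.
import tensorflow as tf

def dense_step(x, w, b):
    # A toy dense layer followed by ReLU.
    return tf.nn.relu(tf.matmul(x, w) + b)

# Wrapping the function with tf.function traces it into a graph on the
# first call; later calls reuse the optimized graph instead of stepping
# through Python op by op.
dense_step_graph = tf.function(dense_step)

x = tf.random.normal([8, 4])
w = tf.random.normal([4, 2])
b = tf.zeros([2])

eager_out = dense_step(x, w, b)        # eager: executes immediately
graph_out = dense_step_graph(x, w, b)  # graph: traced, then executed
```

Both calls compute the same values; graph mode mainly pays off when the function is called many times, e.g. inside a training loop.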


Benchmarking the Linear Algebra Awareness of TensorFlow and PyTorch

Sankaran, Aravind, Alashti, Navid Akbari, Psarras, Christos, Bientinesi, Paolo

arXiv.org Artificial Intelligence

Linear algebra operations, which are ubiquitous in machine learning, form major performance bottlenecks. The High-Performance Computing community invests significant effort in the development of architecture-specific optimized kernels, such as those provided by the BLAS and LAPACK libraries, to speed up linear algebra operations. However, end users are progressively less likely to go through the error prone and time-consuming process of directly using said kernels; instead, frameworks such as TensorFlow (TF) and PyTorch (PyT), which facilitate the development of machine learning applications, are becoming more and more popular. Although such frameworks link to BLAS and LAPACK, it is not clear whether or not they make use of linear algebra knowledge to speed up computations. For this reason, in this paper we develop benchmarks to investigate the linear algebra optimization capabilities of TF and PyT. Our analyses reveal that a number of linear algebra optimizations are still missing; for instance, reducing the number of scalar operations by applying the distributive law, and automatically identifying the optimal parenthesization of a matrix chain. In this work, we focus on linear algebra computations in TF and PyT; we both expose opportunities for performance enhancement to the benefit of the developers of the frameworks and provide end users with guidelines on how to achieve performance gains.
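The matrix-chain observation can be illustrated with a small NumPy experiment (NumPy stands in here for either framework, since both evaluate expressions in source order): computing A @ B @ v left to right materializes an n x n intermediate at O(n^3) cost, while A @ (B @ v) only ever forms vectors at O(n^2). The sizes below are arbitrary.

```python
# Sketch of the matrix-chain parenthesization issue discussed in the paper.
# Sizes are arbitrary; NumPy is used as a stand-in for TF/PyTorch.
import time
import numpy as np

n = 1000
A = np.random.rand(n, n)
B = np.random.rand(n, n)
v = np.random.rand(n, 1)

t0 = time.perf_counter()
left = (A @ B) @ v        # forms an n x n intermediate: O(n^3)
t_left = time.perf_counter() - t0

t0 = time.perf_counter()
right = A @ (B @ v)       # only forms n x 1 vectors: O(n^2)
t_right = time.perf_counter() - t0

print(f"(A @ B) @ v: {t_left:.4f}s   A @ (B @ v): {t_right:.4f}s")
```

Both orderings give the same result up to floating-point error, which is why a linear-algebra-aware framework could safely pick the cheaper one automatically.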


TensorFlow Sad Story

#artificialintelligence

I have been using PyTorch for several years now and have always enjoyed it. It is clear, intuitive, flexible, and fast. Then I was presented with an opportunity to do my new computer vision projects in TensorFlow. This is where this story begins. TensorFlow is a well-established, widely used framework. It couldn't be that bad, I told myself.


The TensorFlow Keras Summary Capture Layer

#artificialintelligence

We also integrated support for estimator hooks, such as tf.estimator.SummarySaverHook and other custom hooks, to cover our wide variety of use cases.