tensorflow2


Reinforcement Learning (Part 1): Deep Q-Learning using TensorFlow 2

#artificialintelligence

In this tutorial, we will discuss what Q-learning is and how to implement it using TensorFlow 2. Q-learning is one of the most popular reinforcement learning (RL) algorithms: an off-policy method that finds the best action for a given state. It is considered off-policy because it does not depend on the current policy; it can learn from actions taken outside that policy, such as random exploratory actions, so no explicit policy is needed during learning. Here Q stands for quality: how useful a given action is in the current state for obtaining future reward.
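The update rule behind Q-learning can be sketched with a tiny tabular example (the environment, states, and hyperparameters below are illustrative; the article itself builds the deep, TensorFlow 2 version, where a network replaces the table):

```python
import random

# Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
def q_update(q_table, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    best_next = max(q_table[next_state])       # max_a' Q(s', a')
    td_target = reward + gamma * best_next     # bootstrapped target
    q_table[state][action] += alpha * (td_target - q_table[state][action])

def epsilon_greedy(q_table, state, epsilon=0.1):
    # Off-policy behaviour: explore randomly with probability epsilon,
    # otherwise take the action with the highest Q-value.
    if random.random() < epsilon:
        return random.randrange(len(q_table[state]))
    return max(range(len(q_table[state])), key=lambda a: q_table[state][a])

# A toy problem: two states, two actions, all Q-values start at zero.
q = [[0.0, 0.0], [0.0, 0.0]]
q_update(q, state=0, action=1, reward=1.0, next_state=1)
print(q[0][1])  # 0.1 (alpha * reward, since next-state values are still zero)
```

Because the target uses the greedy `max` over next-state actions regardless of how the current action was chosen, the learned values do not depend on the behaviour policy, which is exactly what makes Q-learning off-policy.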


Policy Gradient (REINFORCE) using TensorFlow 2

#artificialintelligence

In this article, we will discuss what policy gradients are and how to implement them using TensorFlow 2. There are three main points to the policy gradient algorithm, and by following these three principles we can implement it in TensorFlow. We divide our source code into two parts. The policy network takes the current state as input and outputs probabilities for all actions.
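The core REINFORCE idea can be sketched with a linear softmax policy in NumPy (a simplified stand-in for the article's TensorFlow 2 network; the `theta`, feature, and learning-rate choices here are illustrative):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def grad_log_softmax(logits, action):
    # Gradient of log pi(action) w.r.t. the logits: one_hot(action) - pi
    p = softmax(logits)
    one_hot = np.zeros_like(p)
    one_hot[action] = 1.0
    return one_hot - p

def reinforce_step(theta, state_features, action, ret, lr=0.01):
    # REINFORCE update: theta += lr * G * grad log pi(a|s)
    # theta has shape (n_actions, n_features); logits are one per action.
    logits = theta @ state_features
    grad = np.outer(grad_log_softmax(logits, action), state_features)
    return theta + lr * ret * grad
```

After one step with a positive return, the probability of the taken action increases for that state, which is the behaviour the gradient-ascent update is designed to produce.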


lyhue1991/eat_tensorflow2_in_30_days

#artificialintelligence

For engineers, priority goes to TensorFlow 2. For students and researchers, the first choice should be PyTorch. The best way is to master both of them if you have sufficient time. Keras, a high-level API for deep learning frameworks, saw its final multi-backend release with version 2.3.0; development now continues as tf.keras within TensorFlow.


TensorFlow 1.x vs 2.x – summary of changes

#artificialintelligence

In 2019, Google announced TensorFlow 2.0, a major leap from the existing TensorFlow 1.x. Ease of use: many old libraries (for example, tf.contrib) were removed, and some were consolidated. In TensorFlow 1.x, a model could be built using contrib, layers, Keras, or estimators; so many options for the same task confused new users. TensorFlow 2.0 promotes Keras for model experimentation and Estimators for scaled serving, and both APIs are very convenient to use. In TensorFlow 1.x, writing code was divided into two parts: building the computational graph and later creating a session to execute it. In 2.0, eager execution is the default, so you can see the result of your code directly without creating a session.
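The graph-and-session versus eager contrast can be sketched as follows (an illustrative comparison only, not runnable as one script, since the two halves target different TensorFlow versions):

```
# TensorFlow 1.x: first build a computational graph, then execute it in a session
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
c = a + b                                  # no computation happens yet
with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 2.0, b: 3.0}))  # 5.0

# TensorFlow 2.x: eager execution by default, results are immediate
c = tf.constant(2.0) + tf.constant(3.0)    # computed right away
print(c.numpy())                           # 5.0
```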


How To Build A BERT Classifier Model With TensorFlow 2.0

#artificialintelligence

BERT is one of the most popular algorithms in NLP, known for producing state-of-the-art results on a variety of language modeling tasks. Built on top of transformers and sequence-to-sequence (seq2seq) models, the Bidirectional Encoder Representations from Transformers model has outperformed many of its predecessors. The state-of-the-art results it produces on a variety of language tasks are enough to show that it is indeed a big deal. These results come from its underlying architecture, which uses breakthrough techniques such as seq2seq models and transformers. A seq2seq model is a network that converts a given sequence of words into a different sequence and is capable of relating the words that seem more important.
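The mechanism transformers use to relate the important words in a sequence can be sketched as single-head scaled dot-product attention (a simplified illustration in NumPy; the shapes and names are ours, not BERT's actual multi-head implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Scores measure how strongly each query position attends to each key position
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the key axis turns scores into attention weights summing to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # The output mixes the value vectors according to those weights
    return weights @ V, weights

# Three token positions with a toy model dimension of 4
x = np.random.rand(3, 4)
out, w = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
```

Each row of `w` shows how much one token position draws on every other position, which is the sense in which the model "relates the words that seem more important".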

