JIT native code generation for TensorFlow computation graphs using Python and LLVM
One of the most amazing components of the TensorFlow architecture is the computation graph, which can be serialized using Protocol Buffers. This computation graph follows a well-defined format (described by the proto files in the TensorFlow repository) and represents the computation you specify, whether it is a Deep Learning model like a CNN, a simple Logistic Regression, or any other computation you want.

Consider a very simple computation graph. First, we define a placeholder that will hold the input tensor, and then we specify the computation that should happen on it. Note that we give names to two important nodes of this graph: one is called "input" (the aforementioned placeholder) and the other is called "output", which will hold the result of the final computation.
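As a sketch of what such a graph looks like, the snippet below builds a graph with a placeholder named "input" and a final node named "output", then serializes it to a Protocol Buffers GraphDef. The shape and the squaring operation are assumptions for illustration; the graph-building API shown is the TensorFlow 1.x style (accessed through `tf.compat.v1` on TensorFlow 2.x):

```python
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Placeholder named "input" that will hold the input tensor
    # (the dtype and shape here are assumptions for illustration).
    inp = tf.compat.v1.placeholder(tf.float32, shape=[1, 4], name="input")
    # The final node is named "output"; squaring the input is just
    # an example computation, not the only possibility.
    out = tf.pow(inp, 2, name="output")

# Serialize the graph to its Protocol Buffers representation.
graph_def = graph.as_graph_def()
print([node.name for node in graph_def.node])
```

The serialized `graph_def` is exactly the kind of well-defined, language-neutral description that a JIT compiler can walk and translate into native code.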
Aug-24-2016, 01:40:45 GMT