Reverse-mode Automatic Differentiation
TensorFlow uses a computational graph because it computes gradients with a reverse-mode automatic differentiation algorithm.
The article “Introduction TensorFlow - Optimization of Objective Functions” (Wieschollek, 2017) explains that the gradients returned by tf.gradients are “neither computed by finite differences, nor in a symbolic fashion.”
Instead, TensorFlow uses “reverse-mode auto-differentiation”, or RevAD.
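As a concrete illustration, here is a minimal sketch of asking TensorFlow for a gradient. It assumes TensorFlow 1.x-style graph execution (reached through tf.compat.v1 on TensorFlow 2.x); the placeholder, the toy function and the variable names are illustrative, not taken from the articles above.

```python
import tensorflow as tf

# Minimal sketch, assuming TensorFlow 1.x-style graph mode
# (tf.compat.v1 when running on TensorFlow 2.x).
tf.compat.v1.disable_eager_execution()

x = tf.compat.v1.placeholder(tf.float32, shape=())
y = x * x + 3.0 * x                      # y = x^2 + 3x

# tf.gradients adds new nodes to the graph that compute dy/dx by
# traversing the existing graph in reverse (reverse-mode AD),
# not by finite differences and not by manipulating a symbolic expression.
grad = tf.compat.v1.gradients(y, [x])

with tf.compat.v1.Session() as sess:
    print(sess.run(grad, feed_dict={x: 2.0}))   # [7.0], since dy/dx = 2x + 3
```

On TensorFlow 2.x in eager mode, the same reverse-mode machinery is exposed through tf.GradientTape instead.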
The article “Reverse-mode automatic differentiation: a tutorial” (Rufflewind, 2016) explains RevAD in more detail. I’d say that RevAD uses symbolic differentiation, although there is a computational element in how the noisy input data is handled. The article notes that there are several techniques for implementing RevAD, but focuses on a computational graph implementation. That implementation closely mirrors the computational graph structure used by TensorFlow.

Further information is available in the paper “Automatic differentiation of algorithms” (Bartholomew-Biggs, 2000).
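To make the computational graph implementation concrete, below is a minimal sketch of reverse-mode AD over an explicit “tape” of graph nodes, in the spirit of Rufflewind’s tutorial. The Node class, the tape, and the add/mul helpers are illustrative names of my own, not TensorFlow internals.

```python
# Minimal sketch of reverse-mode AD on a computational graph.
# The graph is recorded on a "tape" in construction order, which is
# also a valid topological order for the backward pass.

tape = []   # records every node in the order it is created

class Node:
    def __init__(self, value, parents=()):
        self.value = value        # forward-pass value
        self.parents = parents    # tuples of (parent_node, local_gradient)
        self.grad = 0.0           # adjoint, filled in during the backward pass
        tape.append(self)

def add(a, b):
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def backward(output):
    """Walk the tape in reverse, applying the chain rule at each node."""
    output.grad = 1.0
    for node in reversed(tape):
        for parent, local_grad in node.parents:
            # accumulate d(output)/d(parent) += d(output)/d(node) * d(node)/d(parent)
            parent.grad += node.grad * local_grad

# f(x, y) = x * y + x  =>  df/dx = y + 1,  df/dy = x
x, y = Node(2.0), Node(3.0)
f = add(mul(x, y), x)
backward(f)
print(x.grad, y.grad)   # 4.0 2.0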