.. _libdoc_gradient:

===========================================
:mod:`gradient` -- Symbolic Differentiation
===========================================

.. module:: gradient
   :platform: Unix, Windows
   :synopsis: low-level automatic differentiation
.. moduleauthor:: LISA

.. testsetup:: *

   from theano.gradient import *

Symbolic gradients are usually computed with :func:`gradient.grad`, which
offers a convenient syntax for the common case of computing the gradient of a
scalar cost with respect to some expressions.  The :func:`grad_sources_inputs`
function does the underlying work and is more flexible, but it is also more
awkward to use when :func:`gradient.grad` can do the job.
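
For example, a minimal sketch of the common case (the cost and input below are
illustrative, not part of this module's API):

.. code-block:: python

    import theano
    import theano.tensor as T

    # A scalar cost and the input we differentiate with respect to.
    x = T.dvector('x')
    cost = T.sum(x ** 2)

    # Symbolic gradient of the cost with respect to x; here it is 2 * x.
    g = theano.grad(cost, x)

    f = theano.function([x], g)
    print(f([1.0, 2.0, 3.0]))   # prints [ 2.  4.  6.]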


Gradient related functions
==========================

.. automodule:: theano.gradient
    :members:

.. _R_op_list:


List of Implemented R Ops
=========================


See the :ref:`gradient tutorial <tutcomputinggrads>` for documentation of the R op;
a short usage sketch follows the lists below.

List of ops that support the R op:
 * with tests (most are in ``tensor/tests/test_rop.py``)
    * SpecifyShape
    * MaxAndArgmax
    * Subtensor
    * IncSubtensor (including ``set_subtensor``)
    * Alloc
    * Dot
    * Elemwise
    * Sum
    * Softmax
    * Shape
    * Join
    * Rebroadcast
    * Reshape
    * Flatten
    * DimShuffle
    * Scan (tested in ``scan_module/tests/test_scan.test_rop``)

 * without tests
    * Split
    * ARange
    * ScalarFromTensor
    * AdvancedSubtensor1
    * AdvancedIncSubtensor1
    * AdvancedIncSubtensor

Partial list of ops without support for the R op:

 * All sparse ops
 * All linear algebra ops
 * PermuteRowElements
 * Tile
 * AdvancedSubtensor
 * TensorDot
 * Outer
 * Prod
 * MulwithoutZeros
 * ProdWithoutZeros
 * CAReduce (for max, ...; done for the MaxAndArgmax op)
 * MaxAndArgmax (only for matrices on axis 0 or 1)
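
As a minimal sketch, here is the R op applied to one of the supported ops above
(``Dot``) via :func:`theano.tensor.Rop`; the variables and values are
illustrative only:

.. code-block:: python

    import numpy
    import theano
    import theano.tensor as T

    W = T.dmatrix('W')
    x = T.dvector('x')
    v = T.dvector('v')          # direction for the Jacobian-times-vector product

    y = T.dot(W, x)             # Dot supports the R op
    Jv = T.Rop(y, x, v)         # Jacobian of y w.r.t. x, right-multiplied by v

    f = theano.function([W, x, v], Jv)
    print(f(numpy.eye(3), [1.0, 2.0, 3.0], [0.0, 1.0, 0.0]))   # prints [ 0.  1.  0.]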