.. _libdoc_config:

=======================================
:mod:`config` -- Theano Configuration
=======================================

.. module:: config
   :platform: Unix, Windows
   :synopsis: Library configuration attributes.
.. moduleauthor:: LISA

Guide
=====

The config module contains many ``attributes`` that modify Theano's behavior.
Many of these attributes are consulted during the import of the ``theano``
module and many are assumed to be read-only.

*As a rule, the attributes in this module should not be modified by user code.*

Theano's code comes with default values for these attributes, but you can
override them from your .theanorc file, and override those values in turn
with the :envvar:`THEANO_FLAGS` environment variable.

The order of precedence is:

1. an assignment to an attribute of ``theano.config``
2. an assignment in :envvar:`THEANO_FLAGS`
3. an assignment in the .theanorc file (or the file indicated in :envvar:`THEANORC`)

You can print out the current/effective configuration at any time by printing
``theano.config``. For example, to see a list of all active configuration
variables, type this from the command line:

.. code-block:: bash

    python -c 'import theano; print(theano.config)' | less

Environment Variables
=====================

.. envvar:: THEANO_FLAGS

    This is a list of comma-delimited key=value pairs that control Theano's
    behavior.

    For example, in bash, you can override your :envvar:`THEANORC` defaults
    for ``<myscript>.py`` by typing this:

    .. code-block:: bash

        THEANO_FLAGS='floatX=float32,device=cuda0,lib.cnmem=1' python <myscript>.py

    If a value is defined several times in ``THEANO_FLAGS``, the right-most
    definition is used. So, for instance, if
    ``THEANO_FLAGS='device=cpu,device=cuda0'``, then cuda0 will be used.

.. envvar:: THEANORC

    The location[s] of the .theanorc file[s] in ConfigParser format.
    It defaults to ``$HOME/.theanorc``. On Windows, it defaults to
    ``$HOME/.theanorc:$HOME/.theanorc.txt`` to make Windows users' lives
    easier.

    Here is the .theanorc equivalent to the THEANO_FLAGS in the example above:

    .. code-block:: cfg

        [global]
        floatX = float32
        device = cuda0

        [lib]
        cnmem = 1

    Configuration attributes that are available directly in ``config``
    (e.g. ``config.device``, ``config.mode``) should be defined in the
    ``[global]`` section.
    Attributes from a subsection of ``config`` (e.g. ``config.lib.cnmem``,
    ``config.dnn.conv.algo_fwd``) should be defined in their corresponding
    section (e.g. ``[lib]``, ``[dnn.conv]``).

    Multiple configuration files can be specified by separating them with ':'
    characters (as in $PATH). Multiple configuration files will be merged,
    with later (right-most) files taking priority over earlier files in the
    case that multiple files specify values for a common configuration option.
    For example, to override system-wide settings with personal ones, set
    ``THEANORC=/etc/theanorc:~/.theanorc``. To load configuration files in the
    current working directory, append ``.theanorc`` to the list of
    configuration files, e.g. ``THEANORC=~/.theanorc:.theanorc``.

Config Attributes
=====================

The list below describes some of the more common and important flags that you
might want to use. For the complete list (including documentation), import
theano and print the config variable, as in:

.. code-block:: bash

    python -c 'import theano; print(theano.config)' | less
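From Python, individual attributes can also be read directly off
``theano.config``. Below is a minimal sketch; the values shown in the comments
are only examples, since what you actually get depends on your .theanorc and
:envvar:`THEANO_FLAGS`:

.. code-block:: python

    # Inspect a few of the attributes documented below.  The printed values
    # depend on your own configuration.
    import theano

    print(theano.config.device)   # e.g. 'cpu' or 'cuda0'
    print(theano.config.floatX)   # e.g. 'float64' or 'float32'
    print(theano.config.mode)     # e.g. 'Mode'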
.. attribute:: device

    String value: either ``'cpu'``, ``'cuda'``, ``'cuda0'``, ``'cuda1'``,
    ``'opencl0:0'``, ``'opencl0:1'``, ``'gpu'``, ``'gpu0'`` ...

    Default device for computations. If ``'cuda*'``, change the default to try
    to move computation to the GPU using the CUDA libraries. If ``'opencl*'``,
    the OpenCL libraries will be used. To let the driver select the device,
    use ``'cuda'`` or ``'opencl'``. If ``'gpu*'``, the old GPU backend will be
    used, although users are encouraged to migrate to the new GpuArray
    backend. If we are not able to use the GPU, we either fall back on the CPU
    or raise an error, depending on the :attr:`force_device` flag.

    This flag's value cannot be modified during the program execution.

    Do not use upper case letters; use only lower case, even if NVIDIA writes
    its device names with capital letters.

.. attribute:: force_device

    Bool value: either ``True`` or ``False``

    Default: ``False``

    If ``True`` and ``device=gpu*``, we raise an error if we cannot use the
    specified :attr:`device`. If ``True`` and ``device=cpu``, we disable the
    GPU. If ``False`` and ``device=gpu*``, and if the specified device cannot
    be used, we warn and fall back to the CPU.

    This is useful to run Theano's tests on a computer with a GPU, but without
    running the GPU tests.

    This flag's value cannot be modified during the program execution.

.. attribute:: init_gpu_device

    String value: either ``''``, ``'cuda'``, ``'cuda0'``, ``'cuda1'``,
    ``'opencl0:0'``, ``'opencl0:1'``, ``'gpu'``, ``'gpu0'`` ...

    Initialize the gpu device to use.
    When its value is ``'cuda*'``, ``'opencl*'`` or ``'gpu*'``, the theano
    flag :attr:`device` must be ``'cpu'``.
    Unlike :attr:`device`, setting this flag to a specific GPU will not try to
    use this device by default; in particular it will **not** move
    computations, nor shared variables, to the specified GPU.

    This flag is useful to run GPU-specific tests on a particular GPU, instead
    of using the default one.

    This flag's value cannot be modified during the program execution.

.. attribute:: config.pycuda.init

    Bool value: either ``True`` or ``False``

    Default: ``False``

    If ``True``, always initialize PyCUDA when Theano wants to initialize the
    GPU. With PyCUDA version 2011.2.2 or earlier, PyCUDA must initialize the
    GPU before Theano does. Setting this flag to ``True`` ensures that, but
    always imports PyCUDA. It can be done manually by importing
    ``theano.misc.pycuda_init`` before Theano initializes the GPU device.
    Newer versions of PyCUDA (currently only in the trunk) do not have this
    restriction.

.. attribute:: print_active_device

    Bool value: either ``True`` or ``False``

    Default: ``True``

    Print the active device when the GPU device is initialized.

.. attribute:: enable_initial_driver_test

    Bool value: either ``True`` or ``False``

    Default: ``True``

    Tests the NVIDIA driver when a GPU device is initialized.

.. attribute:: floatX

    String value: ``'float64'``, ``'float32'``, or ``'float16'`` (with limited
    support)

    Default: ``'float64'``

    This sets the default dtype returned by ``tensor.matrix()``,
    ``tensor.vector()``, and similar functions. It also sets the default
    Theano bit width for arguments passed as Python floating-point numbers.

.. attribute:: warn_float64

    String value: either ``'ignore'``, ``'warn'``, ``'raise'``, or ``'pdb'``

    Default: ``'ignore'``

    What should be done when a TensorVariable with dtype float64 is created?
    This is useful to help find unwanted upcasts to float64 in user code.
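For instance, the sketch below shows :attr:`floatX` determining the dtype of
freshly created tensor variables (assuming ``floatX=float32`` was set, e.g.
through :envvar:`THEANO_FLAGS`, before Theano was imported):

.. code-block:: python

    # Minimal sketch of the effect of floatX on new tensor variables.
    import theano
    import theano.tensor as T

    x = T.matrix('x')
    v = T.vector('v')
    print(theano.config.floatX)   # 'float32' under the assumption above
    print(x.dtype, v.dtype)       # both follow theano.config.floatX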
.. attribute:: allow_gc

    Bool value: either ``True`` or ``False``

    Default: ``True``

    This sets the default for the use of the Theano garbage collector for
    intermediate results. To use less memory, Theano frees the intermediate
    results as soon as they are no longer needed. Disabling Theano garbage
    collection allows Theano to reuse buffers for intermediate results between
    function calls. This speeds up Theano by no longer spending time
    reallocating space. It gives a significant speed up on functions with many
    ops that are fast to execute, but it increases Theano's memory usage.

.. attribute:: config.scan.allow_output_prealloc

    Bool value, either ``True`` or ``False``

    Default: ``True``

    This enables (or disables) an optimization in Scan in which it tries to
    pre-allocate memory for its outputs. Enabling the optimization can give a
    significant speed up with Scan at the cost of slightly increased memory
    usage.

.. attribute:: config.scan.allow_gc

    Bool value, either ``True`` or ``False``

    Default: ``False``

    Allow/disallow gc inside of Scan.

    If :attr:`config.allow_gc` is ``True``, but :attr:`config.scan.allow_gc`
    is ``False``, then the inner graph of Scan is garbage-collected only after
    all iterations. This is the default.

.. attribute:: config.scan.debug

    Bool value, either ``True`` or ``False``

    Default: ``False``

    If ``True``, we will print extra scan debug information.

.. attribute:: openmp

    Bool value: either ``True`` or ``False``

    Default: ``False``

    Enable or disable parallel computation on the CPU with OpenMP. It is the
    default value used when creating an Op that supports it. It is best to
    define it in .theanorc or in the environment variable THEANO_FLAGS.

.. attribute:: openmp_elemwise_minsize

    Positive int value, default: 200000.

    This specifies the minimum vector size for which elemwise ops use OpenMP,
    if OpenMP is enabled.

.. attribute:: cast_policy

    String value: either ``'numpy+floatX'`` or ``'custom'``

    Default: ``'custom'``

    This specifies how data types are implicitly figured out in Theano, e.g.
    for constants or in the results of arithmetic operations. The 'custom'
    value corresponds to a set of custom rules originally used in Theano
    (which can be partially customized, see e.g. the in-code help of
    ``tensor.NumpyAutocaster``), and will be deprecated in the future. The
    'numpy+floatX' setting attempts to mimic the numpy casting rules, although
    it prefers to use float32 numbers instead of float64 when
    ``config.floatX`` is set to 'float32' and the user uses data that is not
    explicitly typed as float64 (e.g. regular Python floats). Note that
    'numpy+floatX' is not currently behaving exactly as planned (it is a
    work-in-progress), and thus you should consider it experimental. At the
    moment it behaves differently from numpy in the following situations:

    * Depending on the value of :attr:`config.int_division`, the resulting
      type of a division of integer types with the ``/`` operator may not
      match that of numpy.
    * On mixed scalar / array operations, numpy tries to prevent the scalar
      from upcasting the array's type unless it is of a fundamentally
      different type. Theano does not attempt to do the same at this point, so
      you should be careful that scalars may upcast arrays when they would not
      when using numpy. This behavior should change in the near future.

.. attribute:: int_division

    String value: either ``'int'``, ``'floatX'``, or ``'raise'``

    Default: ``'int'``

    Specifies what to do when one tries to compute ``x / y``, where both ``x``
    and ``y`` are of integer types (possibly unsigned). 'int' means an integer
    is returned (as in Python 2.X), but this behavior is deprecated. 'floatX'
    returns a number of the type given by ``config.floatX``.
    'raise' is the safest choice (and will become the default in a future
    release of Theano); it raises an error when one tries to do such an
    operation, enforcing the use of the integer division operator (``//``).
    If a float result is intended, either cast one of the arguments to a
    float, or use ``x.__truediv__(y)``.

.. attribute:: mode

    String value: ``'Mode'``, ``'DebugMode'``, ``'FAST_RUN'``,
    ``'FAST_COMPILE'``

    Default: ``'Mode'``

    This sets the default compilation mode for theano functions. By default
    the mode Mode is equivalent to FAST_RUN. See the config attributes
    :attr:`linker` and :attr:`optimizer`.

.. attribute:: profile

    Bool value: either ``True`` or ``False``

    Default: ``False``

    Do the vm/cvm linkers profile the execution time of Theano functions?

    See :ref:`tut_profiling` for examples.

.. attribute:: profile_memory

    Bool value: either ``True`` or ``False``

    Default: ``False``

    Do the vm/cvm linkers profile the memory usage of Theano functions?
    It only works when profile=True.

.. attribute:: profile_optimizer

    Bool value: either ``True`` or ``False``

    Default: ``False``

    Do the vm/cvm linkers profile the optimization phase when compiling a
    Theano function? It only works when profile=True.

.. attribute:: config.profiling.n_apply

    Positive int value, default: 20.

    The number of Apply nodes to print in the profiler output.

.. attribute:: config.profiling.n_ops

    Positive int value, default: 20.

    The number of Ops to print in the profiler output.

.. attribute:: config.profiling.min_memory_size

    Positive int value, default: 1024.

    For the memory profile, do not print Apply nodes if the size of their
    outputs (in bytes) is lower than this.

.. attribute:: config.profiling.min_peak_memory

    Bool value: either ``True`` or ``False``

    Default: ``False``

    Does the memory profile print the min peak memory usage?
    It only works when profile=True, profile_memory=True.

.. attribute:: config.profiling.destination

    String value: ``'stderr'``, ``'stdout'``, or a name of a file to be
    created

    Default: ``'stderr'``

    Name of the destination file for the profiling output. The profiling
    output can be directed to stderr (default), to stdout, or to an arbitrary
    file.

.. attribute:: config.profiling.debugprint

    Bool value: either ``True`` or ``False``

    Default: ``False``

    Do a debugprint of the profiled functions.

.. attribute:: config.profiling.ignore_first_call

    Bool value: either ``True`` or ``False``

    Default: ``False``

    Do we ignore the first call to a Theano function while profiling?

.. attribute:: config.lib.amdlibm

    Bool value: either ``True`` or ``False``

    Default: ``False``

    This makes the compilation use the amdlibm library, which is faster than
    the standard libm.

.. attribute:: config.gpuarray.preallocate

    Float value

    Default: 0 (Preallocation of size 0, only cache the allocation)

    Controls the preallocation of memory with the gpuarray backend.

    The value represents the start size (either in MB or as a fraction of
    total GPU memory) of the memory pool. If more memory is needed, Theano
    will try to obtain more, but this can cause memory fragmentation.

    A negative value will completely disable the allocation cache. This can
    have a severe impact on performance and so should not be done outside of
    debugging.

    * < 0: disabled
    * 0 <= N <= 1: use this fraction of the total GPU memory (clipped to .95
      for driver memory).
    * > 1: use this number in megabytes (MB) of memory.

    .. note::

        This value allocates GPU memory ONLY when using the GpuArray backend
        (:ref:`gpuarray`). For the old backend, please see
        ``config.lib.cnmem``.
    .. note::

        This could cause memory fragmentation. So if you have a memory error
        while using the cache, try to allocate more memory at the start or
        disable it. If you try this, report your result on `theano-dev`_.

    .. note::

        The clipping at 95% can be bypassed by specifying the exact number of
        megabytes. If more than 95% is needed, Theano will automatically try
        to get more memory. But this can cause fragmentation, see the note
        above.

.. attribute:: config.lib.cnmem

    .. note::

        This value allocates GPU memory ONLY when using the old backend
        (:ref:`cuda`) and has no effect when the GPU backend is the GpuArray
        backend (:ref:`gpuarray`). For the new backend, please see
        ``config.gpuarray.preallocate``.

    Float value: >= 0

    Default: 0

    Controls the use of CNMeM (a faster CUDA memory allocator). Applies to the
    old GPU backend (:ref:`cuda`) up to Theano release 0.8. The CNMeM library
    is included in Theano and does not need to be separately installed.

    The value represents the start size (either in MB or as a fraction of
    total GPU memory) of the memory pool. If more memory is needed, Theano
    will try to obtain more, but this can cause memory fragmentation.

    * 0: not enabled.
    * 0 < N <= 1: use this fraction of the total GPU memory (clipped to .95
      for driver memory).
    * > 1: use this number in megabytes (MB) of memory.

    .. note::

        This could cause memory fragmentation. So if you have a memory error
        while using CNMeM, try to allocate more memory at the start or disable
        it. If you try this, report your result on `theano-dev`_.

    .. note::

        The clipping at 95% can be bypassed by specifying the exact number of
        megabytes. If more than 95% is needed, Theano will automatically try
        to get more memory. But this can cause fragmentation, see the note
        above.

.. attribute:: config.gpuarray.sched

    String value: ``'default'``, ``'multi'``, ``'single'``

    Default: ``'default'``

    Controls the stream mode of contexts. This is the sched parameter passed
    to pygpu for context creation. With CUDA, using "multi" means using the
    parameter cudaDeviceScheduleYield. This is useful to lower the CPU
    overhead when waiting for the GPU. One user found that it sped up their
    other processes that were doing data augmentation.

.. attribute:: config.gpuarray.single_stream

    Boolean value

    Default: ``True``

    Controls the stream mode of contexts.

    If your computations are mostly lots of small elements, using
    single-stream will avoid the synchronization overhead and usually be
    faster. For larger elements it does not make a difference yet. In the
    future, when true multi-stream is enabled in libgpuarray, this may change.
    If you want to make sure to have optimal performance, check both options.

.. attribute:: linker

    String value: ``'c|py'``, ``'py'``, ``'c'``, ``'c|py_nogc'``

    Default: ``'c|py'``

    When the mode is Mode, it sets the default linker used. See
    :ref:`using_modes` for a comparison of the different linkers.

.. attribute:: optimizer

    String value: ``'fast_run'``, ``'merge'``, ``'fast_compile'``, ``'None'``

    Default: ``'fast_run'``

    When the mode is Mode, it sets the default optimizer used.

.. attribute:: on_opt_error

    String value: ``'warn'``, ``'raise'``, ``'pdb'`` or ``'ignore'``

    Default: ``'warn'``

    When a crash occurs while trying to apply some optimization, either warn
    the user and skip this optimization ('warn'), raise the exception
    ('raise'), fall into the pdb debugger ('pdb') or ignore it ('ignore').
    We suggest never using 'ignore' except in tests. If you encounter a
    warning, report it on `theano-dev`_.

.. _theano-dev: http://groups.google.com/group/theano-dev
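The :attr:`mode`, :attr:`linker` and :attr:`optimizer` flags set global
defaults, but a mode can also be chosen per function. A minimal sketch,
assuming the standard ``theano.compile.Mode`` interface:

.. code-block:: python

    # Override the default linker/optimizer for a single function instead of
    # changing the global flags.
    import theano
    import theano.tensor as T

    x = T.scalar('x')
    y = x ** 2

    mode = theano.compile.Mode(linker='py', optimizer='fast_compile')
    f = theano.function([x], y, mode=mode)
    print(f(3.0))  # 9.0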
.. attribute:: assert_no_cpu_op

    String value: ``'ignore'`` or ``'warn'`` or ``'raise'`` or ``'pdb'``

    Default: ``'ignore'``

    If there is a CPU Op in the computational graph, then, depending on this
    flag's value, Theano either ignores it, raises a warning, raises an
    exception, or stops the compilation with pdb.

.. attribute:: on_shape_error

    String value: ``'warn'`` or ``'raise'``

    Default: ``'warn'``

    When an exception is raised while inferring the shape of some apply node,
    either warn the user and use a default value ('warn'), or raise the
    exception ('raise').

.. attribute:: config.warn.ignore_bug_before

    String value: ``'None'``, ``'all'``, ``'0.3'``, ``'0.4'``, ``'0.4.1'``,
    ``'0.5'``, ``'0.6'``, ``'0.7'``, ``'0.8'``, ``'0.8.1'``, ``'0.8.2'``,
    ``'0.9'``

    Default: ``'0.7'``

    When we fix a Theano bug that generated bad results under some
    circumstances, we also make Theano raise a warning when it encounters the
    same circumstances again. This helps to detect if said bug had affected
    your past experiments, as you only need to run your experiment again with
    the new version, and you do not have to understand the Theano internals
    that triggered the bug. A better way to detect this will be implemented;
    see the related ticket.

    This flag allows new users not to get warnings about old bugs that were
    fixed before their first checkout of Theano. You can set its value to the
    first version of Theano that you used (probably 0.3 or higher).

    ``'None'`` means that all warnings will be displayed. ``'all'`` means all
    warnings will be ignored.

    It is recommended that you put a version, so that you will see future
    warnings. It is also recommended you put this into your .theanorc, so this
    setting will always be used.

    This flag's value cannot be modified during the program execution.

.. attribute:: base_compiledir

    Default: On Windows: $LOCALAPPDATA\\Theano if $LOCALAPPDATA is defined,
    otherwise and on other systems: ~/.theano.

    This directory stores the platform-dependent compilation directories.

    This flag's value cannot be modified during the program execution.

.. attribute:: compiledir_format

    Default: ``"compiledir_%(platform)s-%(processor)s-%(python_version)s-%(python_bitwidth)s"``

    This is a Python format string that specifies the subdirectory of
    ``config.base_compiledir`` in which to store platform-dependent compiled
    modules. To see a list of all available substitution keys, run
    ``python -c "import theano; print(theano.config)"``, and look for
    compiledir_format.

    This flag's value cannot be modified during the program execution.

.. attribute:: compiledir

    Default: ``config.base_compiledir``/``config.compiledir_format``

    This directory stores dynamically-compiled modules for a particular
    platform.

    This flag's value cannot be modified during the program execution.

.. attribute:: config.blas.ldflags

    Default: ``'-lblas'``

    Link arguments to link against a (Fortran) level-3 blas implementation.
    The default will test if ``'-lblas'`` works. If not, we will disable our
    C code for BLAS.

.. attribute:: config.experimental.local_alloc_elemwise_assert

    Bool value: either ``True`` or ``False``

    Default: ``True``

    When the local_alloc_optimization is applied, add an assert to highlight
    shape errors. Without such asserts this optimization could hide errors in
    the user code. We add the assert only if we cannot infer that the shapes
    are equivalent. As such, this optimization does not always introduce an
    assert in the graph. Removing the assert could speed up execution.
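The effective cache location is the combination of :attr:`base_compiledir`
and :attr:`compiledir_format` described above, and it can be checked at
runtime. A minimal sketch:

.. code-block:: python

    # Print where Theano caches its dynamically compiled C modules.
    import theano

    print(theano.config.base_compiledir)
    print(theano.config.compiledir)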
.. attribute:: config.cuda.root

    Default: $CUDA_ROOT or, failing that, ``"/usr/local/cuda"``

    A directory with bin/, lib/, include/ folders containing cuda utilities.

.. attribute:: config.cuda.enabled

    Bool value: either ``True`` or ``False``

    Default: ``True``

    If set to ``False``, C code in the old backend is not compiled.

.. attribute:: config.dnn.enabled

    String value: ``'auto'``, ``'True'``, ``'False'``

    Default: ``'auto'``

    If ``'auto'``, automatically detect and use cuDNN if it is available.
    If cuDNN is unavailable, raise no error.

    If ``'True'``, require the use of cuDNN. If cuDNN is unavailable, raise an
    error.

    If ``'False'``, do not use cuDNN or check if it is available.

.. attribute:: config.conv.assert_shape

    If ``True``, AbstractConv* ops will verify that user-provided shapes match
    the runtime shapes (debugging option, may slow down compilation).

.. attribute:: config.dnn.conv.workmem

    Deprecated, use :attr:`config.dnn.conv.algo_fwd`.

.. attribute:: config.dnn.conv.workmem_bwd

    Deprecated, use :attr:`config.dnn.conv.algo_bwd_filter` and
    :attr:`config.dnn.conv.algo_bwd_data` instead.

.. attribute:: config.dnn.conv.algo_fwd

    String value: ``'small'``, ``'none'``, ``'large'``, ``'fft'``,
    ``'fft_tiling'``, ``'winograd'``, ``'guess_once'``,
    ``'guess_on_shape_change'``, ``'time_once'``, ``'time_on_shape_change'``.

    Default: ``'small'``

    3d convolution only supports ``'none'``, ``'winograd'``, ``'guess_once'``,
    ``'guess_on_shape_change'``, ``'time_once'``, ``'time_on_shape_change'``.

.. attribute:: config.dnn.conv.algo_bwd

    Deprecated, use :attr:`config.dnn.conv.algo_bwd_filter` and
    :attr:`config.dnn.conv.algo_bwd_data` instead.

.. attribute:: config.dnn.conv.algo_bwd_filter

    String value: ``'none'``, ``'deterministic'``, ``'fft'``, ``'small'``,
    ``'guess_once'``, ``'guess_on_shape_change'``, ``'time_once'``,
    ``'time_on_shape_change'``.

    Default: ``'none'``

    3d convolution only supports ``'none'``, ``'guess_once'``,
    ``'guess_on_shape_change'``, ``'time_once'``, ``'time_on_shape_change'``.

.. attribute:: config.dnn.conv.algo_bwd_data

    String value: ``'none'``, ``'deterministic'``, ``'fft'``,
    ``'fft_tiling'``, ``'winograd'``, ``'guess_once'``,
    ``'guess_on_shape_change'``, ``'time_once'``, ``'time_on_shape_change'``.

    Default: ``'none'``

    3d convolution only supports ``'none'``, ``'winograd'``, ``'guess_once'``,
    ``'guess_on_shape_change'``, ``'time_once'``, ``'time_on_shape_change'``.

.. attribute:: config.gcc.cxxflags

    Default: ``""``

    Extra parameters to pass to gcc when compiling. Extra include paths,
    library paths, configuration options, etc.

.. attribute:: cxx

    Default: Full path to g++ if g++ is present. Empty string otherwise.

    Indicates which C++ compiler to use. If empty, no C++ code is compiled.
    Theano automatically detects whether g++ is present and disables C++
    compilation when it is not. On darwin systems (Mac OS X), it looks for
    clang++ first and uses it if available.

    We print a warning if we detect that no compiler is present. It is
    recommended to run with C++ compilation, as Theano will be much slower
    otherwise.

    This can be any compiler binary (full path or not), but things may break
    if the interface is not g++-compatible to some degree.

.. attribute:: config.nvcc.fastmath

    Bool value, default: ``False``

    If ``True``, this will enable fastmath (|use_fast_math|_) mode for
    compiled cuda code, which makes div and sqrt faster at the cost of
    precision. This also disables support for denormal numbers. It can produce
    NaNs, so if you get NaNs while using this flag, try disabling it.

.. |use_fast_math| replace:: ``--use-fast-math``
.. _use_fast_math: http://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/#options-for-steering-cuda-compilation
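Flags such as :attr:`config.dnn.enabled` or :attr:`config.gcc.cxxflags` are
consulted when ``theano`` is imported, so one way to set them from a Python
script is through :envvar:`THEANO_FLAGS` before the import. A sketch (note
that this overwrites any ``THEANO_FLAGS`` already set in the shell, and the
flag values below are only examples):

.. code-block:: python

    # Set compiler/cuDNN related flags before Theano reads them at import time.
    import os
    os.environ['THEANO_FLAGS'] = 'dnn.enabled=auto,gcc.cxxflags=-O3'

    import theano  # the flags above are picked up here
    print(theano.config.dnn.enabled)
    print(theano.config.gcc.cxxflags)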
.. attribute:: config.optimizer_excluding

    Default: ``""``

    A list of optimizer tags that we do not want included in the default Mode.
    If multiple tags, separate them by ':'. Ex: to remove the elemwise inplace
    optimizer (slow for big graphs), use the flags:
    ``optimizer_excluding:inplace_opt``, where inplace_opt is the name of that
    optimization.

    This flag's value cannot be modified during the program execution.

.. attribute:: optimizer_including

    Default: ``""``

    A list of optimizer tags that we want included in the default Mode.
    If multiple tags, separate them by ':'.

    This flag's value cannot be modified during the program execution.

.. attribute:: optimizer_requiring

    Default: ``""``

    A list of optimizer tags that we require for the optimizer in the default
    Mode. If multiple tags, separate them by ':'.

    This flag's value cannot be modified during the program execution.

.. attribute:: optimizer_verbose

    Bool value: either ``True`` or ``False``

    Default: ``False``

    When ``True``, we print to stdout the optimizations applied.

.. attribute:: nocleanup

    Bool value: either ``True`` or ``False``

    Default: ``False``

    If ``False``, source code files are removed when they are not needed
    anymore. This means files whose compilation failed are deleted. Set to
    ``True`` to keep those files in order to debug compilation errors.

.. attribute:: compile

    This section contains attributes which influence the compilation of C code
    for Ops. Due to historical reasons many attributes outside of this section
    also have an influence over compilation, most notably 'cxx'. This is not
    expected to change any time soon.

.. attribute:: config.compile.timeout

    Positive int value, default: :attr:`compile.wait` * 24

    Time to wait before an unrefreshed lock is broken and stolen. This is in
    place to avoid manual cleanup of locks in case a process crashed and left
    a lock in place.

    The refresh time is automatically set to half the timeout value.

.. attribute:: config.compile.wait

    Positive int value, default: 5

    Time to wait between attempts at grabbing the lock if the first attempt is
    not successful. The actual time will be between :attr:`compile.wait` and
    :attr:`compile.wait` * 2 to avoid a crowding effect on the lock.

.. attribute:: DebugMode

    This section contains various attributes configuring the behaviour of the
    mode :class:`~debugmode.DebugMode`. See that section directly for the
    documentation of more configuration options.

.. attribute:: config.DebugMode.check_preallocated_output

    Default: ``''``

    A list of kinds of preallocated memory to use as output buffers for each
    Op's computations, separated by ``:``. Implemented modes are:

    * ``"initial"``: initial storage present in storage map (for instance, it
      can happen in the inner function of Scan),
    * ``"previous"``: reuse previously-returned memory,
    * ``"c_contiguous"``: newly-allocated C-contiguous memory,
    * ``"f_contiguous"``: newly-allocated Fortran-contiguous memory,
    * ``"strided"``: non-contiguous memory with various stride patterns,
    * ``"wrong_size"``: memory with bigger or smaller dimensions,
    * ``"ALL"``: placeholder for all of the above.

    In order not to test with preallocated memory, use an empty string,
    ``""``.
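DebugMode itself can be requested per function, while the
``config.DebugMode.*`` attributes in this section tune its checks. A minimal
sketch, assuming the default DebugMode settings:

.. code-block:: python

    # Compile and run a tiny function under DebugMode; the DebugMode.* flags
    # control how strict its checks are.
    import theano
    import theano.tensor as T

    x = T.vector('x')
    f = theano.function([x], x * 2, mode='DebugMode')
    print(f([1.0, 2.0]))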
.. attribute:: config.DebugMode.check_preallocated_output_ndim

    Positive int value, default: 4.

    When testing with "strided" preallocated output memory, test all
    combinations of strides over that number of (inner-most) dimensions. You
    may want to reduce that number to reduce memory or time usage, but it is
    advised to keep a minimum of 2.

.. attribute:: config.DebugMode.warn_input_not_reused

    Bool value, default: ``True``

    Generate a warning when the destroy_map or view_map says that an Op works
    inplace, but the Op did not reuse the input for its output.

.. attribute:: config.NanGuardMode.nan_is_error

    Bool value, default: ``True``

    Controls whether NanGuardMode generates an error when it sees a nan.

.. attribute:: config.NanGuardMode.inf_is_error

    Bool value, default: ``True``

    Controls whether NanGuardMode generates an error when it sees an inf.

.. attribute:: config.NanGuardMode.big_is_error

    Bool value, default: ``True``

    Controls whether NanGuardMode generates an error when it sees a big value
    (>1e10).

.. attribute:: numpy

    This section contains different attributes for configuring NumPy's
    behaviour, as described by ``numpy.seterr``.

.. attribute:: config.numpy.seterr_all

    String value: ``'ignore'``, ``'warn'``, ``'raise'``, ``'call'``,
    ``'print'``, ``'log'``, ``'None'``

    Default: ``'ignore'``

    Set the default behaviour described by ``numpy.seterr``.

    ``'None'`` means that numpy's default behaviour will not be changed
    (unless one of the other ``config.numpy.seterr_*`` overrides it), but this
    behaviour can change between numpy releases.

    This flag sets the default behaviour for all kinds of floating-point
    errors, and it can be overridden for specific errors by setting one (or
    more) of the flags below.

    This flag's value cannot be modified during the program execution.

.. attribute:: config.numpy.seterr_divide

    String value: ``'None'``, ``'ignore'``, ``'warn'``, ``'raise'``,
    ``'call'``, ``'print'``, ``'log'``

    Default: ``'None'``

    Sets numpy's behavior for division by zero. ``'None'`` means using the
    default, defined by :attr:`config.numpy.seterr_all`.

    This flag's value cannot be modified during the program execution.

.. attribute:: config.numpy.seterr_over

    String value: ``'None'``, ``'ignore'``, ``'warn'``, ``'raise'``,
    ``'call'``, ``'print'``, ``'log'``

    Default: ``'None'``

    Sets numpy's behavior for floating-point overflow. ``'None'`` means using
    the default, defined by :attr:`config.numpy.seterr_all`.

    This flag's value cannot be modified during the program execution.

.. attribute:: config.numpy.seterr_under

    String value: ``'None'``, ``'ignore'``, ``'warn'``, ``'raise'``,
    ``'call'``, ``'print'``, ``'log'``

    Default: ``'None'``

    Sets numpy's behavior for floating-point underflow. ``'None'`` means using
    the default, defined by :attr:`config.numpy.seterr_all`.

    This flag's value cannot be modified during the program execution.

.. attribute:: config.numpy.seterr_invalid

    String value: ``'None'``, ``'ignore'``, ``'warn'``, ``'raise'``,
    ``'call'``, ``'print'``, ``'log'``

    Default: ``'None'``

    Sets numpy's behavior for invalid floating-point operations. ``'None'``
    means using the default, defined by :attr:`config.numpy.seterr_all`.

    This flag's value cannot be modified during the program execution.

.. attribute:: compute_test_value

    String value: ``'off'``, ``'ignore'``, ``'warn'``, ``'raise'``.

    Default: ``'off'``

    Setting this attribute to something other than ``'off'`` activates a
    debugging mechanism, where Theano executes the graph on-the-fly, as it is
    being built. This allows the user to spot errors early on (such as
    dimension mis-matches), **before** optimizations are applied.

    Theano will execute the graph using the Constants and/or shared variables
    provided by the user.
    Purely symbolic variables (e.g. ``x = T.dmatrix()``) can be augmented with
    test values by writing to their ``'tag.test_value'`` attribute (e.g.
    ``x.tag.test_value = numpy.random.rand(5, 4)``).

    When not ``'off'``, the value of this option dictates what happens when an
    Op's inputs do not provide appropriate test values:

    - ``'ignore'`` will silently skip the debug mechanism for this Op
    - ``'warn'`` will raise a UserWarning and skip the debug mechanism for
      this Op
    - ``'raise'`` will raise an Exception

.. attribute:: compute_test_value_opt

    As ``compute_test_value``, but it is the value used during the Theano
    optimization phase. Theano users do not need to use this. It is meant to
    help debug shape errors in Theano optimizations.

.. attribute:: print_test_value

    Bool value, default: ``False``

    If ``True``, Theano will override the ``__str__`` method of its variables
    to also print the tag.test_value when it is available.

.. attribute:: reoptimize_unpickled_function

    Bool value, default: False (changed in master after Theano 0.7 release)

    Theano users can use the standard python pickle tools to save a compiled
    theano function. When pickling, both the graph before and the graph after
    the optimization are saved, including shared variables. When set to True,
    the graph is reoptimized when being unpickled. Otherwise, the graph
    optimization is skipped and the previously optimized graph is used
    directly.

.. attribute:: exception_verbosity

    String value: ``'low'``, ``'high'``.

    Default: ``'low'``

    If ``'low'``, the text of exceptions will generally refer to apply nodes
    with short names such as ``'Elemwise{add_no_inplace}'``. If ``'high'``,
    some exceptions will also refer to apply nodes with long descriptions
    like:

    ::

        A. Elemwise{add_no_inplace}
        B. log_likelihood_v_given_h
        C. log_likelihood_h

.. attribute:: config.cmodule.warn_no_version

    Bool value, default: ``False``

    If True, will print a warning when compiling one or more Ops with C code
    that cannot be cached because there is no ``c_code_cache_version()``
    function associated with at least one of those Ops.

.. attribute:: config.cmodule.remove_gxx_opt

    Bool value, default: ``False``

    If True, will remove the ``-O*`` parameter passed to g++. This is useful
    to debug modules compiled by Theano in gdb. The parameter ``-g`` is passed
    by default to g++.

.. attribute:: config.cmodule.compilation_warning

    Bool value, default: ``False``

    If True, will print compilation warnings.

.. attribute:: config.cmodule.preload_cache

    Bool value, default: ``False``

    If set to True, will preload the C module cache at import time.

.. attribute:: config.cmodule.age_thresh_use

    Int value, default: ``60 * 60 * 24 * 24``  # 24 days

    In seconds. The time after which a compiled C module will not be reused by
    Theano. Those C modules are automatically deleted 7 days after that time.

.. attribute:: config.traceback.limit

    Int value, default: 8

    The number of user stack levels to keep for variables.

.. attribute:: config.traceback.compile_limit

    Int value, default: 0

    The number of user stack levels to keep for variables during Theano
    compilation. If higher than 0, Theano's internal stack traces will also be
    kept.
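Putting the test-value mechanism described under :attr:`compute_test_value`
to work looks roughly like the sketch below (assuming the flag is switched on
at runtime, which this particular flag allows):

.. code-block:: python

    # Sketch of compute_test_value: shape errors surface while building the
    # graph rather than at function-call time.
    import numpy
    import theano
    import theano.tensor as T

    theano.config.compute_test_value = 'warn'

    x = T.dmatrix('x')
    x.tag.test_value = numpy.random.rand(5, 4)

    y = T.dmatrix('y')
    y.tag.test_value = numpy.random.rand(4, 3)

    z = T.dot(x, y)                  # test values propagate through the graph
    print(z.tag.test_value.shape)    # (5, 3)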