optimization

  • full name: tenpy.tools.optimization

  • parent module: tenpy.tools

  • type: module

Classes

Inheritance diagram of tenpy.tools.optimization

OptimizationFlag(value[, names, module, ...])

Options for the global 'optimization level' used for dynamical optimizations.

temporary_level(temporary_level)

Context manager to temporarily set the optimization level to a different value.

Functions

get_level()

Return the global optimization level.

optimize([level_compare])

Called by algorithms to check whether they should (try to) do some optimizations.

set_level([level])

Set the global optimization level.

to_OptimizationFlag(level)

Convert strings and ints to a valid OptimizationFlag.

use_cython([func, replacement, check_doc])

Decorator to replace a function with a Cython-equivalent from _npc_helper.pyx.

Module description

Optimization options for this library.

Let me start with a quote from “Michael Jackson” (a programmer, not the musician):

First rule of optimization: "Don't do it."
Second rule of optimization (for experts only): "Don't do it yet."
Third rule of optimization: "Profile before optimizing."

Luckily, following the third optimization rule, namely profiling code, is fairly simple in python; see the Python documentation of the cProfile module. If you have a python script running your code, you can simply call it with python -m cProfile -s tottime your_script.py. Alternatively, save the profiling statistics with python -m cProfile -o profile_data.stat your_script.py and inspect them with these few lines of python code:

import pstats
p = pstats.Stats("profile_data.stat")
p.sort_stats('cumtime')  # sort by 'cumtime' column
p.print_stats(30)   # prints first 30 entries
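
If you only want to profile a part of your script, you can also use cProfile directly from within python. A minimal sketch (main() below is just a placeholder for the code you actually want to profile):

import cProfile
import pstats

with cProfile.Profile() as profiler:  # context-manager interface of python >= 3.8
    main()  # placeholder: call the code to be profiled
stats = pstats.Stats(profiler)
stats.sort_stats('tottime')
stats.print_stats(30)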

That being said, I actually did profile and optimize (parts of) the library, and there are a few knobs you can turn to get the most out of it, explained in the following.

  1. Simply install the ‘bottleneck’ python package, which optimizes some slow parts of numpy, most notably ‘NaN’ checking (see the first sketch following this list).

  2. Figure out which numpy/scipy/python you are using. As explained in Installation instructions, we recommend using the Python distribution provided by Intel or Anaconda. They ship numpy and scipy versions which use Intel’s MKL library, such that e.g. np.tensordot is parallelized over multiple cores. (A quick way to check which BLAS your numpy uses is sketched below, after this list.)

  3. In case you haven’t done so yet: some parts of the library are written in both python and Cython with the same interface, so you can simply compile the Cython code, as explained in Installation instructions. Everything then works the same way from a user perspective, while internally the faster, pre-compiled Cython code from tenpy/linalg/_npc_helper.pyx is used. This should also be a safe thing to do. The replacement of the optimized functions is done by the decorator use_cython().

    Note

    By default, the compilation will link against the BLAS functions provided by scipy.linalg.cython_blas. Whether they use MKL depends on the scipy version you installed. However, you can explicitly link against a given MKL by providing the path during compilation, as explained in Compile linking against MKL.

  4. One of the great things about python is its dynamical nature: anything can be done at runtime. In that spirit, this module allows you to set a global “optimization level” which can be changed dynamically (i.e., during runtime) with set_level(). The library will then try some extra optimizations, most notably skipping sanity checks of arguments. The possible choices for this global level are given by the OptimizationFlag. The default initial value for the global optimization level can be adjusted with the environment variable TENPY_OPTIMIZE. (A usage sketch of these functions follows this list.)

    Warning

    When this optimization is enabled, we skip (some) sanity checks. Thus, errors will not be detected as easily, and debugging is much harder! We recommend using this kind of optimization only for code which you have successfully run before with (very) similar parameters! Enable this optimization only during the parts of the code where it is really necessary; the context manager temporary_level can help with that. Check whether it actually helps: if it doesn’t, keep the optimization disabled! Some parts of the library already do that as well (e.g. DMRG after the first sweep).

  5. You might want to try some different compile-time options for the Cython code, set in the setup.py in the top directory of the repository. Since the setup.py reads out the TENPY_OPTIMIZE environment variable, you can simply use an export TENPY_OPTIMIZE=3 (in your bash/terminal) right before compilation. An export TENPY_OPTIMIZE=0 activates profiling hooks instead.

    Warning

    This increases the probability of getting segmentation faults and might not help that much anyway; in the crucial parts of the Cython code, these optimizations are already applied. We do not recommend using this!
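
Regarding point 1: bottleneck provides drop-in replacements for a handful of numpy routines. As a quick check that it is installed and does what is promised, you can compare its NaN check against the plain numpy way (a minimal sketch, assuming bottleneck is installed):

import numpy as np
import bottleneck as bn

a = np.random.random((1000, 1000))
a[0, 0] = np.nan
print(np.isnan(np.sum(a)))  # plain numpy way to detect a NaN
print(bn.anynan(a))         # bottleneck's optimized NaN check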
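
Regarding point 2: numpy itself can tell you which BLAS/LAPACK libraries it was compiled against; if MKL shows up in the output, functions like np.tensordot can use multiple cores.

import numpy as np

np.show_config()  # look for 'mkl' in the listed library names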
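
Regarding point 4: a minimal usage sketch of the functions listed at the top of this page. (The member name OptimizationFlag.skip_arg_checks is used as an example here; see the OptimizationFlag documentation for the full list of levels.)

from tenpy.tools.optimization import (OptimizationFlag, get_level, set_level,
                                      optimize, temporary_level)

set_level(1)        # accepts an OptimizationFlag, an int, or a string
print(get_level())  # the current global optimization level

# inside library code, checks look roughly like this:
if not optimize(OptimizationFlag.skip_arg_checks):
    pass  # perform the (possibly expensive) sanity checks of the arguments

# enable aggressive optimization only for a well-tested, critical section:
with temporary_level(OptimizationFlag.skip_arg_checks):
    pass  # run the performance-critical part here
# the previous level is restored when the context exits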

tenpy.tools.optimization.bottleneck = None

The bottleneck module, if it could be imported; None otherwise.

tenpy.tools.optimization.have_cython_functions = False

Bool indicating whether the import of the compiled Cython module tenpy/linalg/_npc_helper.pyx succeeded.

The value is set in the first call of use_cython().

tenpy.tools.optimization.compiled_with_MKL = False

Bool indicating whether the Cython module was compiled with HAVE_MKL.

The value is set in the first call of use_cython().
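
To check from a script whether the compiled Cython code (and MKL) is actually in use, you can inspect these flags after importing tenpy (a minimal sketch; importing tenpy triggers the use_cython() calls which set them):

import tenpy
from tenpy.tools import optimization

print(optimization.have_cython_functions)  # True if _npc_helper was imported successfully
print(optimization.compiled_with_MKL)      # True if it was compiled linking against MKL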