full name: tenpy.tools.optimization
Options for the global ‘optimization level’ used for dynamical optimizations.

temporary_level: Context manager to temporarily set the optimization level to a different value.
get_level: Return the global optimization level.
optimize: Called by algorithms to check whether it should (try to) do some optimizations.
set_level: Set the global optimization level.
to_OptimizationFlag: Convert strings and int to a valid OptimizationFlag.
use_cython: Decorator to replace a function with a Cython-equivalent from _npc_helper.pyx.
OptimizationFlag: Optimization options for this library.
Let me start with a quote by “Michael Jackson” (a programmer, not the musician):
First rule of optimization: "Don't do it."
Second rule of optimization (for experts only): "Don't do it yet."
Third rule of optimization: "Profile before optimizing."
Luckily, following the third optimization rule, namely profiling code, is
fairly simple in python, see the documentation.
If you have a python script running your code, you can simply call it with

    python -m cProfile -s tottime your_script.py

Alternatively, save the profiling statistics with

    python -m cProfile -o profile_data.stat your_script.py

and run these few lines of python code:

    import pstats
    p = pstats.Stats("profile_data.stat")
    p.sort_stats('cumtime')  # sort by the 'cumtime' column
    p.print_stats(30)        # print the first 30 entries
That being said, I actually did profile and optimize (parts of) the library; and there are a few knobs you can turn to get the most out of it, explained in the following.
Simply install the ‘bottleneck’ python package, which optimizes some slow parts of numpy, most notably ‘NaN’ checking.
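To illustrate, here is a minimal sketch of a fast NaN check that uses bottleneck's anynan when the package is installed and falls back to plain numpy otherwise (the fallback definition is my own, not part of either library):

```python
import numpy as np

try:
    from bottleneck import anynan  # optimized NaN check from bottleneck
except ImportError:
    def anynan(a):
        # plain-numpy fallback; noticeably slower on large arrays
        return bool(np.isnan(a).any())

a = np.ones((100, 100))
print(anynan(a))   # False: no NaN yet
a[3, 7] = np.nan
print(anynan(a))   # True: one entry is NaN
```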
Figure out which numpy/scipy/python you are using. As explained in the Installation instructions, we recommend using the Python distribution provided by Intel or Anaconda. These ship with numpy and scipy compiled against Intel's MKL library, such that e.g.
np.tensordot is parallelized to use multiple cores.
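As a quick sanity check of the contraction itself (whether it actually runs multi-threaded depends on the BLAS/MKL your numpy links against), a small np.tensordot example:

```python
import numpy as np

A = np.arange(6.).reshape(2, 3)
B = np.arange(12.).reshape(3, 4)
# contract the last axis of A with the first axis of B (a plain matrix product)
C = np.tensordot(A, B, axes=[[1], [0]])
print(C.shape)  # (2, 4)
```

For these axes, the result equals the ordinary matrix product `A @ B`.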
In case you didn't do that yet: some parts of the library are written in both python and Cython with the same interface, so you can simply compile the Cython code, as explained in the Installation instructions. Everything then works the same way from a user perspective, while internally the faster, pre-compiled cython code from
tenpy/linalg/_npc_helper.pyx is used. This should also be a safe thing to do. The replacement of the optimized functions is done by the use_cython decorator.
One of the great things about python is its dynamical nature: anything can be done at runtime. In that spirit, this module allows setting a global “optimization level” which can be changed dynamically (i.e., during runtime) with
set_level(). The library will then try some extra optimizations, most notably skipping sanity checks of arguments. The possible choices for this global level are given by the
OptimizationFlag. The default initial value for the global optimization level can be adjusted by the environment variable TENPY_OPTIMIZE.
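The mechanics behind such a global level can be sketched in a few lines. This is a simplified stand-alone model: the flag names and values below are illustrative and do not reproduce TeNPy's actual OptimizationFlag:

```python
import os
from enum import IntEnum

class OptLevel(IntEnum):
    # illustrative levels, modeled on the idea of an optimization-flag enum
    none = 0
    default = 1
    skip_arg_checks = 2

# default initial value read from an environment variable, here TENPY_OPTIMIZE
_level = OptLevel(int(os.environ.get("TENPY_OPTIMIZE", "1")))

def set_level(new_level):
    """Set the global optimization level."""
    global _level
    _level = OptLevel(new_level)

def get_level():
    """Return the global optimization level."""
    return _level

def optimize(flag=OptLevel.default):
    """Called by algorithms to check whether checks may be skipped at this level."""
    return _level >= flag

set_level(2)
print(optimize(OptLevel.skip_arg_checks))  # True: sanity checks may be skipped
set_level(0)
print(optimize(OptLevel.skip_arg_checks))  # False: all checks are performed
```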
When this optimization is enabled, we skip (some) sanity checks. Thus, errors will not be detected as easily, and debugging is much harder! We recommend using this kind of optimization only for code which you have successfully run before with (very) similar parameters! Enable this optimization only during the parts of the code where it is really necessary. The context manager
temporary_level can help with that. Check whether it actually helps: if it doesn't, keep the optimization disabled! Some parts of the library already do that as well (e.g. DMRG after the first sweep).
You might want to try some different compile time options for the cython code, set in the setup.py in the top directory of the repository. Since the setup.py reads out the TENPY_OPTIMIZE environment variable, you can simply use an

    export TENPY_OPTIMIZE=3

(in your bash/terminal) right before compilation. An

    export TENPY_OPTIMIZE=0

activates profiling hooks instead.
This increases the probability of getting segmentation faults and might not help that much anyway; in the crucial parts of the cython code, these optimizations are already applied. We do not recommend using this!