README¶
TeNPy (short for ‘Tensor Network Python’) is a Python library for the simulation of strongly correlated quantum systems with tensor networks.
The philosophy of this library is to strike a new balance: good readability and usability for newcomers, and at the same time powerful algorithms and fast development of new algorithms for experts. For good readability, we include extensive documentation next to the code, both in Python docstrings and separately as user guides, as well as simple example codes and even toy codes, which just demonstrate various algorithms (like TEBD and DMRG) in roughly 100 lines per file.
How do I get set up?¶
If you have the conda package manager, you can install the latest released version of TeNPy with:
conda install --channel=conda-forge physics-tenpy
Further details and alternative methods can be found in the file doc/INSTALL.rst. The latest version of the source code can be obtained from https://github.com/tenpy/tenpy.
How to read the documentation¶
The documentation is available online at https://tenpy.readthedocs.io/. The documentation is roughly split in two parts: on one hand the full “reference” containing the documentation of all functions, classes, methods, etc., and on the other hand the “user guide” containing some introductions with additional explanations and examples.
The documentation is based on Python’s docstrings, and some additional *.rst
files located in the folder doc/ of the repository.
All documentation is formatted as reStructuredText,
which means it is quite readable as plain source text, but can also be converted to other formats.
If you like it simple, you can just use the interactive Python help(), a Python IDE of your choice, jupyter notebooks, or just read the source.
Moreover, the documentation gets converted into HTML using Sphinx, and is made available online at https://tenpy.readthedocs.io/.
The big advantages of the (online) HTML documentation are a lot of cross-links between different functions, and even a search function.
If you prefer yet another format, you can try to build the documentation yourself, as described in doc/contr/build_doc.rst.
Help - I looked at the documentation, but I don’t understand how …?¶
We have set up a community forum at https://tenpy.johannes-hauschild.de/, where you can post questions and hopefully find answers. Once you have gained some experience with TeNPy, you might also be able to contribute to the community and answer some questions yourself ;-) We also use this forum for official announcements, for example when we release a new version.
I found a bug¶
You might want to check the GitHub issues to see whether someone else has already reported the same problem. To report a new bug, just open a new issue on GitHub. If you already know how to fix it, you can just create a pull request :) If you are not sure whether your problem is a bug or a feature, you can also ask for help in the TeNPy forum.
Citing TeNPy¶
When you use TeNPy for a work published in an academic journal, you can cite this paper to acknowledge the work put into the development of TeNPy.
(The license of TeNPy does not require you to, however.)
For example, you could add the sentence "Calculations were performed using the TeNPy Library (version X.X.X)\cite{tenpy}."
in the acknowledgements or in the main text.
The corresponding BibTeX entry would be the following (the \url{...} requires \usepackage{hyperref} in the LaTeX preamble):
@Article{tenpy,
title={{Efficient numerical simulations with Tensor Networks: Tensor Network Python (TeNPy)}},
author={Johannes Hauschild and Frank Pollmann},
journal={SciPost Phys. Lect. Notes},
pages={5},
year={2018},
publisher={SciPost},
doi={10.21468/SciPostPhysLectNotes.5},
url={https://scipost.org/10.21468/SciPostPhysLectNotes.5},
archiveprefix={arXiv},
eprint={1805.00055},
note={Code available from \url{https://github.com/tenpy/tenpy}},
}
To keep us motivated, you can also add your work to the list of papers using TeNPy.
Acknowledgment¶
This work was funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05-CH11231 through the Scientific Discovery through Advanced Computing (SciDAC) program (KC23DAC Topological and Correlated Matter via Tensor Networks and Quantum Monte Carlo).
License¶
The code is licensed under GPL v3.0, as given in the file LICENSE of the repository
and in the online documentation at https://tenpy.readthedocs.io/en/latest/install/license.html.
Installation instructions¶
With the [conda] package manager you can install TeNPy with:
conda install --channel=conda-forge physics-tenpy
More details and tricks can be found in Installation with conda from conda-forge.
If you don’t have conda but have [pip], you can install with:
pip install physics-tenpy
More details for this method can be found in Installation from PyPi with pip.
We also have a bunch of optional Extra requirements, which you don’t have to install to use TeNPy, but you might want to.
The method with the minimal requirements is to just download the source and adjust the PYTHONPATH, as described in Installation from source. This is also the recommended way if you plan to modify parts of the source.
Installation with conda from conda-forge¶
We provide a package for the [conda] package manager in the conda-forge channel, so you can install TeNPy as:
conda install --channel=conda-forge physics-tenpy
Following the recommendation of conda-forge, you can also make conda-forge the default channel as follows:
conda config --add channels conda-forge
conda config --set channel_priority strict
If you have done this, you don’t need to specify the --channel=conda-forge explicitly.
Note
The numpy package provided by the conda-forge channel uses OpenBLAS by default on Linux. As outlined in the conda-forge docs, you can switch to MKL using:
conda install "libblas=*=*mkl"
Warning
If you use the conda-forge channel and don’t pin BLAS to the MKL version as outlined in the note above,
but nevertheless have mkl-devel installed during compilation of TeNPy, this can have crazy effects on the number
of threads used: numpy will call OpenBLAS and open up $OMP_NUM_THREADS - 1 new threads,
while MKL called from tenpy will open another $MKL_NUM_THREADS - 1 threads,
making it very hard to control the number of threads used!
Moreover, it is actually recommended to create a separate environment. The following creates a conda environment named tenpy with the TeNPy package (called physics-tenpy) installed:
conda create --name tenpy --channel=conda-forge physics-tenpy
In that case, you need to activate the environment each time you want to use the package with:
conda activate tenpy
The big advantage of this approach is that it allows multiple versions of software to be installed in parallel, e.g., if one of your projects requires python>=3.8 and another one requires an old library which doesn’t support that. Further info can be found in the conda documentation.
Installation from PyPi with pip¶
Preparation: install requirements¶
If you have the [conda] package manager from anaconda, you can just download the
environment.yml file from the repository (it uses the conda-forge channel;
use environment_other.yml for all other channels)
and create a new environment (called tenpy, if you don’t specify another name)
for TeNPy with all the required packages:
conda env create -f environment.yml
conda activate tenpy
Further information on conda environments can be found in the conda documentation. Note that installing conda also installs a version of [pip].
Alternatively, if you only have [pip] (and not [conda]), install the required packages with the following command (after downloading the requirements.txt file from the repository):
pip install -r requirements.txt
Note
Make sure that the pip you call corresponds to the python version you want to use.
(One way to ensure this is to use python -m pip instead of a simple pip.)
Also, you might need to use the argument --user to install the packages to your home directory
if you don’t have sudo rights.
(Using --user with conda’s pip is discouraged, though.)
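To check which interpreter you are actually running (and hence which packages python -m pip will manage), you can inspect it from within Python itself; this is a minimal stdlib-only sketch:

```python
import sys

# Absolute path of the currently running Python interpreter;
# `python -m pip` installs into exactly this interpreter's environment.
print(sys.executable)

# Version of that interpreter, e.g. sys.version_info(major=3, minor=9, ...)
print(sys.version_info)
```

If sys.executable points into your conda environment (or virtualenv), python -m pip will install packages there rather than into the system Python.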
Warning
It might just be a temporary problem, but I found that the pip version of numpy is incompatible with the python distribution of anaconda. If you have installed the intelpython or anaconda distribution, use the conda package manager instead of pip for updating the packages whenever possible!
Installing the latest stable TeNPy package¶
Now we are ready to install TeNPy. It should be as easy as (note the different package name - ‘tenpy’ was taken!)
pip install physics-tenpy
Note
If the installation fails, don’t give up yet. In the minimal version, tenpy requires only pure Python with somewhat up-to-date NumPy and SciPy. See Installation from source.
Installation of the latest version from Github¶
To get the latest development version from the github main branch, you can use:
pip install git+https://github.com/tenpy/tenpy.git
This should already have the latest features described in [latest]. Disclaimer: this might sometimes be broken, although we do our best to keep it stable as well.
Installation from the downloaded source folder¶
Finally, if you downloaded the source and want to modify parts of it,
you can also install TeNPy in development mode with --editable:
cd $HOME/tenpy # after downloading the source, go to the repository
pip install --editable .
Uninstalling a pip-installed version¶
In all of the above cases, you can uninstall tenpy with:
pip uninstall physics-tenpy
Updating to a new version¶
Before you update, take a look at the Release Notes, which lists the changes, fixes, and new stuff. Most importantly, it has a section on backwards incompatible changes (i.e., changes which may break your existing code) along with information how to fix it. Of course, we try to avoid introducing such incompatible changes, but sometimes, there’s no way around them. If you skip some intermediate version(s) for the update, read also the release notes of those!
How to update depends a little bit on the way you installed TeNPy.
Of course, you always have the option to just remove the TeNPy files
(possibly with a pip uninstall physics-tenpy or conda uninstall physics-tenpy),
and to start over with downloading and installing the newest version.
When installed with conda¶
When you installed TeNPy with [conda], you just need to activate the corresponding environment
(e.g. conda activate tenpy) and do a:
conda update physics-tenpy
When installed with pip¶
When you installed TeNPy with [pip], you just need to do a:
pip install --upgrade physics-tenpy
When installed from source¶
If you used git clone ... to download the repository, you can update to the newest version using [git].
First, briefly check with git status that you didn’t change anything you need to keep.
Then, do a git pull to download (and possibly merge) the newest commits from the repository.
Note
If some Cython file (ending in .pyx) got renamed/removed (e.g., when updating from v0.3.0 to v0.4.0),
you first need to remove the corresponding binary files.
You can do so with the command bash ./cleanup.sh.
Furthermore, whenever one of the Cython files (ending in .pyx) changed, you need to re-compile it.
To do that, simply call the command bash ./compile.sh again.
If you are unsure whether a Cython file changed, compiling again doesn’t hurt.
To summarize, you need to execute the following bash commands in the repository:
# 0) make a backup of the whole folder
git status # check the output whether you modified some files
git pull
bash ./cleanup.sh # (confirm with 'y')
bash ./compile.sh
Installation from source¶
Minimal Requirements¶
At a minimum, this code requires only pure Python>=3.6 and somewhat recent versions of NumPy and SciPy.
Getting the source¶
The following instructions are for (some kind of) Linux, and tested on Ubuntu. However, the code itself should work on other operating systems as well (in particular MacOS and Windows).
The official repository is at https://github.com/tenpy/tenpy.git. To get the latest version of the code, you can clone it with [git] using the following commands:
git clone https://github.com/tenpy/tenpy.git $HOME/TeNPy
cd $HOME/TeNPy
Note
Adjust $HOME/TeNPy to wherever you want to save the library.
Optionally, if you don’t want to contribute, you can check out the latest stable release:
git tag # this prints the available version tags
git checkout v0.3.0 # or whatever is the latest stable version
Note
In case you don’t have [git] installed, you can download the repository as a ZIP archive. You can find it under releases, or the latest development version.
Minimal installation: Including tenpy into PYTHONPATH¶
The python source is in the directory tenpy/ of the repository. The parent folder of tenpy/ (i.e., the repository root) should be listed in (one of the folders of) the environment variable PYTHONPATH. On Linux, you can simply do this with the following line in the terminal:
export PYTHONPATH=$HOME/TeNPy
(If this variable already contains paths, separate them with a colon :.)
However, if you enter this in the terminal, it will only last for that terminal session.
To make it permanent, you can add the above line to the file $HOME/.bashrc.
You might need to restart the terminal session or re-login to force a reload of the ~/.bashrc.
Whenever the path is set, you should be able to use the library from within python:
>>> import tenpy
/home/johannes/postdoc/2021-01-TenPy-with-MKL/TeNPy/tenpy/tools/optimization.py:308: UserWarning: Couldn't load compiled cython code. Code will run a bit slower.
warnings.warn("Couldn't load compiled cython code. Code will run a bit slower.")
>>> tenpy.show_config()
tenpy 0.7.2.dev130+76c5b7f (not compiled),
git revision 76c5b7fe46df3e2241d85c47cbced3400caad05a using
python 3.9.1 | packaged by conda-forge | (default, Jan 10 2021, 02:55:42)
[GCC 9.3.0]
numpy 1.19.5, scipy 1.6.0
tenpy.show_config() prints the current version of the used TeNPy library as well as the versions of the used python, numpy and scipy libraries,
which might be different on your computer. It is a good idea to save this data (given as a string in tenpy.version.version_summary)
along with your data, to allow reproducing your results exactly.
If you got a similar output as above: congratulations! You can now run the codes :)
Compilation of np_conserved¶
At the heart of the TeNPy library is the module tenpy.linalg.np_conserved, which provides an Array class to exploit the
conservation of abelian charges. The data model of python is not ideal for the required book-keeping, thus
we have also implemented the np_conserved module in Cython.
This allows compiling (and thereby optimizing) the corresponding python module, speeding up the execution of the
code. While this might give a significant speed-up for code with small matrix dimensions, don’t expect the same speed-up in
cases where most of the CPU time is already spent in matrix multiplications (i.e. if the bond dimension of your MPS is huge).
To compile the code, you first need to install Cython:
conda install cython # when using anaconda, or
pip install --upgrade Cython # when using pip
Moreover, you need a C++ compiler.
For example, on Ubuntu you can install one with sudo apt-get install build-essential,
or on Windows you can download MS Visual Studio 2015.
If you use anaconda, you can also use conda install -c conda-forge cxx-compiler.
After that, go to the root directory of TeNPy ($HOME/TeNPy) and simply run
bash ./compile.sh
Note
There is no need to compile if you installed TeNPy directly with conda or pip. (You can verify this with tenpy.show_config() as illustrated below.)
Note that it is not required to separately download (and install) Intel MKL: the compilation just obtains the includes from numpy. In other words, if your current numpy version uses MKL (as the one provided by anaconda), the compiled TeNPy code will also use it.
After a successful compilation, the warning that TeNPy was not compiled should go away:
>>> import tenpy
>>> tenpy.show_config()
tenpy 0.7.2.dev130+76c5b7f (compiled without HAVE_MKL),
git revision 76c5b7fe46df3e2241d85c47cbced3400caad05a using
python 3.9.1 | packaged by conda-forge | (default, Jan 10 2021, 02:55:42)
[GCC 9.3.0]
numpy 1.19.5, scipy 1.6.0
Note
For further optimization options, e.g. how to link against MKL, look at Extra requirements and tenpy.tools.optimization.
Quick-setup of a development environment with conda¶
You can use the following bash commands to set up a new conda environment called tenpy_dev (call it whatever you want!) and install TeNPy there in a way which allows editing TeNPy’s python code while still having it available everywhere in the conda environment:
git clone https://github.com/tenpy/tenpy TeNPy
cd TeNPy
conda env create -f environment.yml -n tenpy_dev
conda activate tenpy_dev
pip install -e .
Extra requirements¶
We have some extra requirements that you don’t need to install to use TeNPy, but that you might find useful to work with. TeNPy does not import the following libraries (at least not globally), but some functions might expect arguments behaving like objects from these libraries.
Note
If you created a [conda] environment with conda env create -f environment.yml,
all the extra requirements below should already be installed :)
(However, a pip install -r requirements.txt does not install all of them.)
Matplotlib¶
The first extra requirement is the [matplotlib] plotting library.
Some functions expect a matplotlib.axes.Axes instance as an argument to plot some data for visualization.
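That pattern looks like the following sketch; plot_data here is a hypothetical stand-in for such a function, not part of TeNPy's API:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, e.g. for use on clusters
import matplotlib.pyplot as plt


def plot_data(ax):
    """Hypothetical example of a function that expects a matplotlib Axes."""
    ax.plot([0, 1, 2], [0.0, 0.5, 0.25], marker="o")
    ax.set_xlabel("site")
    ax.set_ylabel("value")


# The caller creates the figure and hands the Axes to the plotting function.
fig, ax = plt.subplots()
plot_data(ax)
fig.savefig("plot.png")
```

Passing the Axes in (rather than letting each function create its own figure) lets you combine several such plots into one figure with plt.subplots.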
Intel’s Math Kernel Library (MKL)¶
If you want to run larger simulations, we recommend the use of Intel’s MKL. It ships with a LAPACK library and uses optimizations for Intel CPUs. Moreover, it parallelizes the LAPACK/BLAS routines, which makes execution much faster. As of now, the library itself supports no other way of parallelization.
If you don’t have a python version built against MKL,
we recommend using [conda] or directly intelpython.
Conda has the advantage that it allows using different environments for different projects.
Both are available for Linux, Mac and Windows; note that you don’t even need administrator rights to install them on Linux.
Simply follow the (straightforward) instructions of the web page for the installation.
After a successful installation, if you run python interactively, the first output line should
state the python version and contain Anaconda or Intel Corporation, respectively.
If you have a working conda package manager, you can install numpy built against MKL with:
conda install mkl mkl-devel numpy scipy
The mkl-devel package is required for linking against MKL, i.e. for compiling the Cython code.
As outlined in /doc/install/conda, on Linux/Mac you also need to pin BLAS to use MKL with the following line, if you use the conda-forge channel:
conda install "libblas=*=*mkl"
Note
MKL uses different threads to parallelize various BLAS and LAPACK routines.
If you run the code on a cluster, make sure that you specify the number of used cores/threads correctly.
By default, MKL uses all the available CPUs, which might be in stark contrast to what you requested from the
cluster. The easiest way to set the number of threads is the environment variable MKL_NUM_THREADS (or OMP_NUM_THREADS).
For a dynamic change of the used threads, you might want to look at tenpy.tools.process.
Compile linking against MKL¶
When you compile the Cython files of TeNPy, you have the option to explicitly link against MKL, such
that e.g. tenpy.linalg.np_conserved.tensordot() is guaranteed to call the corresponding dgemm or zgemm
function of the BLAS from MKL.
To link against MKL, the MKL library including the headers must be available during the compilation of TeNPy’s Cython
files. If you have the MKL library installed, you can export the environment variable MKLROOT to point to the
root folder.
Alternatively, TeNPy will recognise if you are in an active conda environment and have the mkl and mkl-devel conda
packages installed during compilation. In this case, it will link against the MKL provided as conda package.
tenpy.show_config() indicates whether you linked successfully against MKL:
>>> import tenpy
>>> tenpy.show_config()
tenpy 0.7.2.dev130+76c5b7f (compiled with HAVE_MKL),
git revision 76c5b7fe46df3e2241d85c47cbced3400caad05a using
python 3.9.1 | packaged by conda-forge | (default, Jan 10 2021, 02:55:42)
[GCC 9.3.0]
numpy 1.19.5, scipy 1.6.0
HDF5 file format support¶
We support exporting data to files in the [HDF5] format through the python interface of the h5py package (https://docs.h5py.org/en/stable/); see Saving to disk: input/output for more information. However, this requires the installation of the HDF5 library and h5py.
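As an illustration of the underlying h5py interface (independent of TeNPy's own saving functions, which are documented in the user guide; the file and dataset names here are made up), a minimal sketch:

```python
import h5py
import numpy as np

# Write a small dataset to an HDF5 file, with groups acting like folders.
with h5py.File("data.h5", "w") as f:
    f["measurements/energy"] = np.array([-1.0, -1.5, -1.7])
    f.attrs["description"] = "example data"

# Read it back; the [:] slice loads the dataset into a numpy array.
with h5py.File("data.h5", "r") as f:
    energies = f["measurements/energy"][:]
```

A major advantage over pickle is that such files are portable between languages and can be inspected with standard tools like h5ls.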
YAML parameter files¶
The tenpy.tools.params.Config class supports reading and writing YAML files, which requires the package pyyaml (pip install pyyaml).
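A minimal sketch of parsing such a YAML parameter file with pyyaml; the parameter names here are made up for illustration and are not TeNPy options:

```python
import yaml

# Example parameter-file content, as one might pass to a simulation script.
text = """
model_params:
  L: 16
  J: 1.0
algorithm: dmrg
"""

# safe_load parses the YAML into plain Python dicts, lists, and scalars.
params = yaml.safe_load(text)
print(params["model_params"]["L"])  # → 16
```

In practice, the same call works on a file object, e.g. yaml.safe_load(open("params.yml")).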
Tests¶
To run the tests, you need to install pytest, which you can do for example with pip install pytest.
For information how to run the tests, see Checking the installation.
Checking the installation¶
The first check whether tenpy is installed successfully is to try to import it from within python:
>>> import tenpy
Note
If this raises a warning Couldn't load compiled cython code. Code will run a bit slower., something went wrong with
the compilation of the Cython parts (or you didn’t compile at all).
While the code might run slower, the results should still be the same.
The function tenpy.show_config() prints information about the used versions of tenpy, numpy and
scipy, as well as whether the Cython parts were compiled and could be imported.
As a further check of the installation you can try to run (one of) the python files in the examples/ subfolder; hopefully all of them should run without error.
You can also run the automated testsuite with pytest to make sure everything works fine.
If you have pytest installed, you can go to the tests folder of the repository and run the tests with:
cd tests
pytest
In case of errors or failures it gives a detailed traceback and possibly some output of the test. At least the stable releases should run these tests without any failures.
If you can run the examples but not the tests, check whether pytest actually uses the correct python version.
The test suite is also run automatically by GitHub Actions and with Travis-CI; the results can be inspected online.
TeNPy developer team¶
The following people are part of the TeNPy developer team and contributed a significant amount.
The full list of contributors can be obtained from the git repository with git shortlog -sn.
Johannes Hauschild tenpy@johannes-hauschild.de
Frank Pollmann
Michael P. Zaletel
Maximilian Schulz
Leon Schoonderwoerd
Kévin Hémery
Samuel Scalet
Markus Drescher
Wilhelm Kadow
Gunnar Moeller
Jakob Unfried
Further, the code is based on an earlier version of the library, mainly developed by
Frank Pollmann, Michael P. Zaletel and Roger S. K. Mong.
License¶
The source code documented here is published under a GPL v3 license, which we include below.
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
Release Notes¶
The project adheres to semantic versioning.
All notable changes to the project should be documented in the changelog. The most important things should be summarized in the release notes.
The changes in [latest] are implemented in the latest development version on github, but not yet released.
Changes compared to previous TeNPy highlights the most important differences from the previously developed (closed-source) TeNPy version.
[v0.8.3] - 2021-03-29¶
This is another very minor fix-up of /doc/changelog/v0.8.0. Like v0.8.1, it just tries to fix the tests to make building the conda packages work again.
[v0.8.1] - 2021-03-23¶
This is only a very minor fix-up of /doc/changelog/v0.8.0. Apart from adding tenpy.networks.mps.MPS.term_list_correlation_function_right(), it only contains minor fixes:
- the Simulation class did not work under Windows; some tests were failing.
- dtype issues for tenpy.models.model.CouplingModel.add_coupling()
[v0.8.0] - 2021-03-19¶
Release Notes¶
First of all: the Cython parts can now link directly against MKL and have been optimized for the case of small blocks inside charge-conserved tensors.
During compilation, TeNPy now checks whether MKL is available, and then directly links against it.
This changed the dependencies: in particular, if you created a conda environment for TeNPy, it is highly recommended to start off with a new one based on the environment.yml file.
If you want to continue using the existing conda environment, you need to conda install mkl-devel before compilation. Additionally, when you use the conda-forge channel of conda, you should pin blas to use MKL with conda install libblas=*=*mkl.
Another great reason to update are the new simulation classes and the console script tenpy-run, which allow running and even resuming a simulation after it aborted! See /intro/simulation for details.
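The checkpoint-and-resume idea behind such simulation classes can be illustrated with a plain-Python sketch. This is not TeNPy's actual Simulation implementation; the file name, state dictionary, and the stand-in update step are all hypothetical:

```python
import os
import pickle

CHECKPOINT = "checkpoint.pkl"  # hypothetical checkpoint file name

def run(n_steps):
    """Run (or resume) a toy 'simulation' for n_steps, checkpointing each step."""
    # Resume from the last checkpoint if a previous run aborted.
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT, "rb") as f:
            state = pickle.load(f)
    else:
        state = {"step": 0, "energy": 0.0}
    while state["step"] < n_steps:
        state["energy"] -= 0.1  # stand-in for one DMRG/TEBD update
        state["step"] += 1
        with open(CHECKPOINT, "wb") as f:
            pickle.dump(state, f)  # save progress after every step
    return state
```

Calling run() again after an abort picks up from the last saved step instead of starting over, which is the behavior the simulation classes and tenpy-run provide in a much more complete form.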
Further, there is a big change in verbosity: we switched to using Python's default logging mechanism.
This implies that by default you no longer get any output besides error messages and warnings, at least not in pre-simulation setups.
See Logging and terminal output on how to get the output back, and what to change in your code.
Finally, note that the default (stable) git branch was renamed from master to main.
Changelog¶
Backwards incompatible changes¶
- Drop official support for Python 3.5.
- tenpy.linalg.np_conserved.from_ndarray(): raise a ValueError instead of just a warning in case of wrong non-zero blocks. This behaviour can be switched back with the new argument raise_wrong_sector.
- Argument v0 of tenpy.networks.mps.TransferMatrix.eigenvectors() is renamed to v0_npc; v0 now serves for a non-np_conserved guess.
- Default parameters for lattice initialization in the following classes changed. In particular, the bc_MPS parameter now defaults to 'finite'.
- Renamed tenpy.algorithms.tebd.Engine to tenpy.algorithms.tebd.TEBDEngine and tenpy.algorithms.tdvp.Engine to tenpy.algorithms.tdvp.TDVPEngine to have unique algorithm class names.
- When running, no longer print stuff by default. Instead, we use Python's logging mechanism. To enable printing again, you need to configure the logging to print at "INFO" level (which is the default when running from the command line).
- As part of this big change in the way verbosity is handled, there were many minor changes:
  - rename Config.print_if_verbose to log()
  - deprecate the verbose class argument of the Config
  - deprecate the verbose class attribute of all classes (if they had it)
  - change argument names of params()
Added¶
- Simulation class Simulation and subclasses as a new extra layer for handling the general setup.
- Command line script tenpy-run and run_simulation() for setting up a simulation.
- tenpy.linalg.sparse.FlatLinearOperator.eigenvectors() and eigenvectors() to unify code from tenpy.networks.mps.TransferMatrix.eigenvectors() and tenpy.linalg.lanczos.lanczos_arpack().
- tenpy.tools.fit.entropy_profile_from_CFT() and tenpy.tools.fit.central_charge_from_S_profile()
- tenpy.networks.site.Site.multiply_operators() as a variant of multiply_op_names() accepting both strings and npc arrays.
- tenpy.tools.events.EventHandler to simplify call-backs, e.g., for measurement codes during an algorithm.
- tenpy.tools.misc.find_subclass() to recursively find subclasses of a given base class by name. This function is now used, e.g., to find lattice classes given the name, hence supporting user-defined lattices defined outside of TeNPy.
- tenpy.tools.misc.get_recursive() and set_recursive() for nested data structures, e.g., parameters.
- tenpy.tools.misc.flatten() to turn a nested data structure into a flat one.
- tenpy.networks.mps.InitialStateBuilder to simplify building various initial states.
- Common base class tenpy.algorithms.Algorithm for all algorithms.
- Common base class tenpy.algorithms.TimeEvolutionAlgorithm for time evolution algorithms.
- tenpy.models.lattice.Lattice.Lu as a class attribute.
- tenpy.models.lattice.Lattice.find_coupling_pairs() to automatically find coupling pairs of 'nearest_neighbors' etc.
- tenpy.models.lattice.HelicalLattice allowing a much smaller MPS unit cell by shifting the boundary conditions around the cylinder.
- tenpy.networks.purification_mps.PurificationMPS.from_infiniteT_canonical() for a canonical ensemble.
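The idea behind get_recursive() and flatten() can be sketched in a few lines of plain Python. This is a simplified re-implementation for illustration only; TeNPy's actual versions support more options, and the example keys are made up:

```python
def get_recursive(data, key, separator="."):
    """Look up an entry in nested dicts from a dotted key,
    e.g. 'algorithm_params.trunc_params.chi_max'."""
    for part in key.split(separator):
        data = data[part]
    return data

def flatten(data, separator="."):
    """Turn nested dicts into a single flat dict with dotted keys."""
    flat = {}
    for key, value in data.items():
        if isinstance(value, dict):
            # Recurse into sub-dicts and prefix their keys.
            for subkey, subvalue in flatten(value, separator).items():
                flat[key + separator + subkey] = subvalue
        else:
            flat[key] = value
    return flat
```

Such helpers are convenient for the nested parameter dictionaries used throughout the simulation setup, e.g. reading a single deeply nested option by its dotted path.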
Changed¶
- For finite DMRG, DMRGEngine.N_sweeps_check now defaults to 1 instead of 10 (which is still the default for infinite MPS).
- Merge tenpy.linalg.sparse.FlatLinearOperator.npc_to_flat_all_sectors() into npc_to_flat(), and merge tenpy.linalg.sparse.FlatLinearOperator.flat_to_npc_all_sectors() into flat_to_npc().
- Change the chinfo.names of the specific Site classes to be more consistent and clear.
- Add the more powerful tenpy.networks.site.set_common_charges() to replace tenpy.networks.site.multi_sites_combine_charges().
- Allow swap_op='autoInv' for tenpy.networks.mps.MPS.swap_sites() and explain the idea of the swap_op.
- tenpy.models.model.CouplingMPOModel.init_lattice() now respects the new class attributes default_lattice and force_default_lattice.
- Support an additional priority argument for get_order_grouped(), issue #122.
- Warn if one of the add_* methods of the CouplingMPOModel gets called after initialization.
Fixed¶
- Sign error for the couplings of the tenpy.models.toric_code.ToricCode.
- The form of the eigenvectors returned by tenpy.networks.mps.TransferMatrix.eigenvectors() depended on the charge_sector given in the initialization; we try to avoid this now (if possible).
- The charge conserved by SpinHalfFermionSite(cons_Sz='parity') was weird.
- Allow passing npc Arrays as arguments to expectation_value_multi_sites() and other correlation functions (issue #116).
- tenpy.tools.hdf5_io did not work with h5py version >= 3.0 due to a change in string encoding (issue #117).
- The overall phase of the W returned by compute_K() was undefined.
- tenpy.networks.mpo.MPO.expectation_value() didn't work with max_range=0.
- The default trunc_par for tenpy.networks.mps.MPS.swap_sites(), permute_sites() and compute_K() was leading to too small chi for initial MPS with small chi.
- Issue #120: Lattice with different sites in the unit cell.
- Index offset in tenpy.networks.mps.MPS.expectation_value_term() for the sites to be used.
- Issue #121: tenpy.networks.mps.MPS.correlation_length() worked with charge_sector=0, but included an additional divergent value with charge_sector=[0].
- Some MPS methods (correlation function, expectation value, …) raised an error for negative site indices even for infinite MPS.
- Warn if we add terms to a CouplingMPOModel after initialization.
[0.7.2] - 2020-10-09¶
Release Notes¶
We’ve added a list of all papers using (and citing) TeNPy, see Papers using TeNPy. Feel free to include your own works!
And a slight simplification, which might affect your code: using the MultiCouplingModel is no longer necessary; just use the tenpy.models.model.CouplingModel directly.
Changelog¶
Backwards incompatible changes¶
- Deprecated the tenpy.models.model.MultiCouplingModel. The functionality is fully merged into the CouplingModel; there is no need to subclass the MultiCouplingModel anymore.
- The Kagome lattice did not include all next_next_nearest_neighbors. (It had only the ones across the hexagon, missing those making up a bow-tie.)
- Combined the arguments onsite_terms and coupling_terms of tenpy.networks.mpo.MPOGraph.from_terms() into a single argument terms.
Added¶
- Allow including jupyter notebooks in the documentation; collect example notebooks in [TeNPyNotebooks].
- term_correlation_function_right() and term_correlation_function_left() for correlation functions with more than one operator on each end.
- tenpy.networks.terms.ExponentiallyDecayingTerms for constructing MPOs with exponential decay, and tenpy.models.model.CouplingModel.add_exponentially_decaying_coupling() for using it. This closes issue #78.
Fixed¶
- The IrregularLattice used the 'default' order of the regular lattice instead of whatever the order of the regular lattice was set to.
- charge_variance() did not work for more than 1 charge.
[0.7.0] - 2020-09-04¶
Release Notes¶
The big new features are the implementation of the W_I and W_II methods for approximating the exponential of an MPO by an MPO, and MPS compression / MPO application to an MPS, allowing time evolution with ExpMPOEvolution().
Changelog¶
Backwards incompatible changes¶
- Removed the argument leg0 from build_MPO.
- Removed the argument leg0 from from_grids; instead, optionally give all legs as an argument.
- Moved/renamed the module tenpy.algorithms.mps_sweeps to tenpy.algorithms.mps_common. The old mps_sweeps still exists for compatibility, but raises a warning upon import.
- Moved/renamed the module tenpy.algorithms.purification_tebd to tenpy.algorithms.purification (for the PurificationTEBD and PurificationTEBD2) and tenpy.algorithms.disentangler (for the disentanglers).
Added¶
- VariationalCompression and VariationalApplyMPO for variational compression.
- PurificationApplyMPO and PurificationTwoSiteU for variational compression with purifications.
- Argument insert_all_id for tenpy.networks.mpo.MPOGraph.from_terms() and from_term_list().
- Implemented the IrregularLattice.
- Extended user guide on lattices, Details on the lattice geometry.
- Function to approximate a decaying function by a sum of exponentials.
- spatial_inversion() to perform an explicit spatial inversion of the MPS.
Changed¶
- By default, for a usual MPO define IdL and IdR on all bonds. This can generate "dead ends" in the MPO graph of finite systems, but it is useful for make_WI/make_WII for MPO exponentiation.
- tenpy.models.lattice.Lattice.plot_basis() now allows shading the unit cell and shifting the origin of the plotted basis.
- Don't use bc_shift in tenpy.models.lattice.Lattice.plot_couplings() any more - it led to confusing figures. Instead, the new keyword wrap=True allows to directly connect all sites. This avoids confusion in combination with plot_bc_identified().
- Error handling of non-zero qtotal for the TransferMatrix.
Fixed¶
- Removed double counting of the chemical-potential terms in the BosonicHaldaneModel and FermionicHaldaneModel.
- Wrong results of tenpy.networks.mps.MPS.get_total_charge() with only_physical_legs=True.
- tenpy.models.lattice.Lattice.plot_bc_identified() had a sign error for the bc_shift.
- calc_H_MPO_from_bond() didn't work for charges with blocks > 1.
- TEBD: keep the qtotal of the B tensors constant.
- The order model parameter was read out but not used in tenpy.models.model.CouplingMPOModel.init_lattice() for 1D lattices.
[0.6.1] - 2020-05-18¶
Release Notes¶
This is only a follow-up release to [0.6.0] - 2020-05-16. It fixes a small bug in examples/c_tebd.py and some round-off problems in the tests.
It is now possible to install TeNPy with the conda package manager:
conda install --channel=conda-forge physics-tenpy
[0.6.0] - 2020-05-16¶
Release Notes¶
This release contains a major update of the documentation, which is now hosted by “Read the Docs” at https://tenpy.readthedocs.io/. Update your bookmark :-)
Apart from that, this release introduces a format how to save and load data (in particular TeNPy classes) to HDF5 files.
See Saving to disk: input/output for more details.
To use that feature, you need to install the h5py package (and therefore some version of the HDF5 library).
This is easy with anaconda, conda install h5py
, but might be cumbersome on your local computing cluster.
(However, many university computing clusters have some version of HDF5 installed already. Check with your local sysadmin.)
Moreover, we changed how we read out parameter dictionaries: instead of the get_parameter() function, we now have a Config class which behaves like a dictionary, so you can simply use options.get(key, default) for model parameters - just as you would for a python dictionary.
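Since a Config behaves like a dictionary, the migration is mechanical. A minimal sketch, with a plain dict standing in for a Config instance and made-up parameter names:

```python
# A plain dict standing in for a tenpy Config instance (illustration only).
model_params = {'L': 16, 'bc_MPS': 'finite'}

# Old style (deprecated):
#     get_parameter(model_params, 'J', 1.0, 'ModelName')
# New style - ordinary dict access with a default:
J = model_params.get('J', 1.0)   # key absent  -> default 1.0
L = model_params.get('L', 10)    # key present -> stored value 16
```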
Changelog¶
Backwards incompatible changes¶
- Created a class Config to replace Python-native parameter dictionaries and add some useful functionality. Old code using tenpy.tools.params.get_parameter() and tenpy.tools.params.unused_parameters() still works as before, but raises a warning and should be replaced. For example, if you defined your own models, you should replace calls get_parameter(model_params, "key", "default_value", "ModelName") with model_params.get("key", "default_value"), the latter syntax being what you would use for a normal python dictionary as well.
- Renamed the following class parameter dictionaries to simply options for more consistency. Old code using the class attributes should still work (since we provide property aliases), but raises warnings. Note that this also affects derived classes (for example the TwoSiteDMRGEngine):
  - tenpy.algorithms.dmrg.DMRGEngine.DMRG_params (was already renamed to engine_params in version 0.5.0)
  - tenpy.algorithms.mps_common.Sweep.engine_params
  - tenpy.algorithms.tebd.Engine.TEBD_params
  - tenpy.algorithms.tdvp.Engine.TDVP_params
  - tenpy.linalg.lanczos.Lanczos
- Changed the arguments of tenpy.models.model.MultiCouplingModel(): we replaced the three arguments u0, op0 and other_ops=[(u1, op1, dx1), (op2, u2, dx2), ...] by a single, equivalent argument ops, which should now read ops=[(op0, dx0, u0), (op1, dx1, u1), (op2, dx2, u2), ...], where dx0 = [0]*lat.dim. Note the changed order inside the tuple! Old code (which specifies opstr and category as keyword argument, if at all) still works as before, but raises a warning and should be replaced. Since tenpy.lattice.Lattice.possible_multi_couplings() used similar arguments, they were changed as well.
- Don't save H_MPO_graph as a model attribute anymore - this also wasn't documented.
- Renamed the truncation parameter symmetry_tol to degeneracy_tol and made the criterion more reasonable by not checking \(log(S_i/S_j) < log(symmetry_tol)\), but simply \(log(S_i/S_j) < degeneracy_tol\). The latter makes more sense, as it is equivalent to \((S_i - S_j)/S_j < exp(degeneracy_tol) - 1 = degeneracy_tol + \mathcal{O}(degeneracy_tol^2)\).
- Deprecated tenpy.networks.mps.MPS.increase_L() in favor of the newly added tenpy.networks.mps.MPS.enlarge_mps_unit_cell() (taking factor instead of new_L=factor*L as argument).
- tenpy.networks.mps.MPS.correlation_function() now auto-determines whether a Jordan-Wigner string is necessary. If any of the given operators is directly an npc Array, it will now raise an error; set autoJW=False in that case.
- Instead of "monkey-patching" matvec of the tenpy.algorithms.mps_common.EffectiveH for the case that ortho_to_envs is not empty, we defined a proper class NpcLinearOperatorWrapper, which serves as base class for OrthogonalNpcLinearOperator. The argument ortho_to_envs has been removed from EffectiveH.
- Switched the order of the sites in the unit cell for the DualSquare, and redefined what the "default" order means. This is a huge optimization of DMRG, reducing the necessary MPS bond dimension for the ground state to the optimal \(2^{L-1}\) on each bond.
- Deprecated the Lanczos function/class argument orthogonal_to in LanczosGroundState. Instead, one can use the OrthogonalNpcLinearOperator.
- Deprecation warning for changing the default argument of shift_ket for non-zero shift_bra of the TransferMatrix.
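The equivalence claimed for the new degeneracy_tol criterion above can be checked numerically; this is a standalone sketch with made-up singular values, not TeNPy code:

```python
import math

degeneracy_tol = 1e-2
S_j = 0.5
S_i = S_j * math.exp(0.5 * degeneracy_tol)  # S_i slightly larger than S_j

# New criterion: log(S_i/S_j) < degeneracy_tol ...
lhs = math.log(S_i / S_j)
assert lhs < degeneracy_tol

# ... is equivalent to (S_i - S_j)/S_j < exp(degeneracy_tol) - 1 ...
rel_diff = (S_i - S_j) / S_j
bound = math.exp(degeneracy_tol) - 1.0
assert rel_diff < bound

# ... and for small tolerances, exp(tol) - 1 = tol + O(tol^2).
assert abs(bound - degeneracy_tol) < degeneracy_tol**2
```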
Added¶
- tenpy.networks.mpo.MPO.variance() to calculate the variance of an MPO against a finite MPS.
- Classmethod tenpy.networks.mps.MPS.from_lat_product_state() to initialize an MPS from a product state given in lattice coordinates (independent of the order of the lattice).
- Argument plus_hc for tenpy.models.model.CouplingModel.add_onsite(), tenpy.models.model.CouplingModel.add_coupling(), and tenpy.models.model.MultiCouplingModel.add_multi_coupling() to simplify adding the hermitian conjugate terms.
- Parameter explicit_plus_hc for MPOModel, CouplingModel and MPO, to reduce the MPO bond dimension by not storing Hermitian conjugate terms, but computing them at runtime.
- tenpy.models.model.CouplingModel.add_local_term() for adding a single term to the lattice, still handling Jordan-Wigner strings etc.
- tenpy.networks.site.Site.get_hc_opname() and hc_ops to allow getting the hermitian conjugate operator (name) of the onsite operators.
- tenpy.tools.hdf5_io with convenience functions for input and output with pickle, as well as an implementation allowing to save and load objects to HDF5 files in the format specified in Saving to disk: input/output.
- Human-readable boundary_conditions property in the Lattice.
- save_hdf5 and load_hdf5 methods to support saving/loading to HDF5 for the following classes (and their subclasses): ChargeInfo, LegCharge, LegPipe, Array, MPS, MPO, Lattice.
- tenpy.networks.mps.MPSEnvironment.get_initialization_data() for a convenient way of saving the necessary parts of the environment after a DMRG run.
- Method enlarge_mps_unit_cell for the following classes: MPS, MPO, Lattice, Model, MPOModel, NearestNeighborModel.
- tenpy.tools.misc.to_iterable_of_len() for convenient handling of arguments.
- tenpy.models.lattice.Lattice.mps2lat_values_masked() as a generalization of tenpy.models.lattice.Lattice.mps2lat_values().
- tenpy.linalg.sparse.OrthogonalNpcLinearOperator to orthogonalize against vectors.
- tenpy.linalg.sparse.ShiftNpcLinearOperator to add a constant.
- tenpy.linalg.sparse.SumNpcLinearOperator, which serves e.g. to add the h.c. during the matvec (in combination with the new tenpy.linalg.sparse.NpcLinearOperator.adjoint()).
- tenpy.algorithms.mps_common.make_eff_H() to simplify implementations of prepare_update().
- Attribute options for the Model.
Changed¶
- DEFAULT DMRG parameter 'diag_method' from 'lanczos' to 'default', which is the same for large bond dimensions, but performs a full exact diagonalization if the effective Hamiltonian has small dimensions. The threshold introduced is the new DMRG parameter 'max_N_for_ED'.
- DEFAULT parameter charge_sector=None instead of charge_sector=0 in tenpy.networks.mps.MPS.overlap() to look for eigenvalues of the transfer matrix in all charge sectors, and not assume that it's the 0 sector.
- Derive the following classes (and their subclasses) from the new Hdf5Exportable to support saving to HDF5: Site, Terms, OnsiteTerms, CouplingTerms, Model, i.e., all model classes.
- Instead of just defining to_matrix and adjoint for EffectiveH, define the interface directly for NpcLinearOperator.
- Try to keep the charge block structure as far as possible for add_charge() and drop_charge().
Fixed¶
- Adjust the default DMRG parameter min_sweeps if chi_list is set.
- Avoid some unnecessary transpositions in MPO environments for MPS sweeps (e.g. in DMRG).
- sort(bunch=True) could return an un-bunched Array, but still set the bunched flag.
- LegPipe did not initialize self.bunched correctly.
- issue #98: Error when calling psi.canonical_form() directly after disabling the DMRG mixer.
- svd() with full_matrices=True gave wrong charges.
- tenpy.linalg.np_conserved.Array.add_charge() and tenpy.linalg.np_conserved.Array.drop_charge() did not copy over labels.
- Wrong pairs for the fifth_nearest_neighbors of the Honeycomb.
- Continue in tenpy.algorithms.dmrg.full_diag_effH() with a warning instead of raising an error if the effective Hamiltonian is zero.
- correlation_length(): the check for the hermitian flag might have raised an error with new numpy warnings.
- correlation_function() did not respect the argument str_on_first=False.
- tenpy.networks.mps.MPS.get_op() behaved unexpectedly for infinite bc with incommensurate self.L and len(op_list).
- tenpy.networks.mps.MPS.permute_sites() modified the given perm.
- issue #105: Unintended side effects when using lanczos_params.verbose in combination with orthogonal_to.
- issue #108: tenpy.linalg.sparse.FlatLinearOperator._matvec() changes self._charge_sector.
[0.5.0] - 2019-12-18¶
Backwards incompatible changes¶
- Major rewriting of the DMRG engines, see issue #39 and issue #85 for details. The EngineCombine and EngineFracture have been combined into a single TwoSiteDMRGEngine. The run function works as before. In case you have directly used the EngineCombine or EngineFracture, you should update your code to use the TwoSiteDMRGEngine instead.
- Moved the init_LP and init_RP methods from MPS into MPSEnvironment and MPOEnvironment.
Changed¶
- Addition/subtraction of Array: check whether both arrays have the same labels in different order, and in that case raise a warning that we will transpose in the future.
- Made tenpy.linalg.np_conserved.Array.get_block() public (previously tenpy.linalg.np_conserved.Array._get_block).
- groundstate() now returns a tuple (E0, psi0) instead of just psi0. Moreover, the argument charge_sector was added.
- Simplification in the Lattice: instead of having separate arguments/attributes/functions for 'nearest_neighbors', 'next_nearest_neighbors', 'next_next_nearest_neighbors' and possibly (Honeycomb) even 'fourth_nearest_neighbors', 'fifth_nearest_neighbors', collect them in a dictionary called pairs. Old call structures are still allowed, but deprecated.
- issue #94: Array addition and inner() should reflect the order of the labels, if they coincide. Will change the default behaviour in the future, raising a FutureWarning for now.
- Default parameter for DMRG params: increased precision by setting P_tol_min down to the maximum of 1.e-30, lanczos_params['svd_min']**2 * P_tol_to_trunc, and lanczos_params['trunc_cut']**2 * P_tol_to_trunc by default.
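The P_tol_min default described in the last item is just the maximum of three quantities; a sketch with made-up parameter values (not TeNPy's defaults):

```python
# Hypothetical Lanczos/truncation parameters, for illustration only.
svd_min = 1e-10        # stands in for lanczos_params['svd_min']
trunc_cut = 1e-8       # stands in for lanczos_params['trunc_cut']
P_tol_to_trunc = 0.05

P_tol_min = max(1.e-30,
                svd_min**2 * P_tol_to_trunc,
                trunc_cut**2 * P_tol_to_trunc)
# Here the trunc_cut term dominates: P_tol_min is about 5e-18.
```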
Added¶
- tenpy.algorithms.mps_common with the Sweep class and EffectiveH to be a OneSiteH or TwoSiteH.
- Single-site DMRG with the SingleSiteDMRG.
- Example function in examples/c_tebd.py showing how to run TEBD with a model originally having next-nearest neighbors.
- increase_L() to allow increasing the unit cell of an MPS.
- Additional option order='folded' for the Chain.
- tenpy.algorithms.exact_diag.ExactDiag.from_H_mpo() wrapper as a replacement for tenpy.networks.mpo.MPO.get_full_hamiltonian() and tenpy.networks.mpo.MPO.get_grouped_mpo(). The latter are now deprecated.
- Argument max_size to limit the matrix dimension in ExactDiag.
- tenpy.linalg.sparse.FlatLinearOperator.from_guess_with_pipe() to allow quickly converting matvec functions acting on multi-dimensional arrays to a FlatLinearOperator by combining the legs into a LegPipe.
- tenpy.tools.math.speigsh() as a hermitian variant of speigs().
- Allow the arguments 'LA', 'SA' in argsort().
- tenpy.linalg.lanczos.lanczos_arpack() as a possible replacement for the self-implemented lanczos function.
- tenpy.algorithms.dmrg.full_diag_effH() as another replacement for lanczos().
- The new DMRG parameter 'diag_method' allows selecting a method for the diagonalization of the effective Hamiltonian. See tenpy.algorithms.dmrg.DMRGEngine.diag() for details.
- dtype attribute in EffectiveH.
- tenpy.linalg.charges.LegCharge.get_qindex_of_charges() to allow selecting a block of an Array from the charges.
- tenpy.algorithms.mps_common.EffectiveH.to_matrix to allow contracting an EffectiveH to a matrix, as well as the metadata tenpy.linalg.sparse.NpcLinearOperator.acts_on and tenpy.algorithms.mps_common.EffectiveH.N.
- Argument only_physical_legs in tenpy.networks.mps.MPS.get_total_charge().
Fixed¶
- MPO expectation_value() did not work for finite systems.
- Calling compute_K() repeatedly with default parameters but on states with different chi would use the chi of the very first call for the truncation parameters.
- Allow MPSEnvironment and MPOEnvironment to have MPS/MPO of different length.
- group_sites() didn't work correctly in some situations.
- matvec_to_array() returned the transpose of A.
- tenpy.networks.mps.MPS.from_full() messed up the form of the first array.
- issue #95: blowup of errors in DMRG with update_env > 0. This turned out to be a problem in the precision of the truncation error: TruncationError.eps was set to 0 if it would be smaller than machine precision. To fix it, from_S() was added.
[0.4.1] - 2019-08-14¶
Backwards incompatible changes¶
- Switched the sign of the BoseHubbardModel and FermiHubbardModel such that hopping and chemical potential have negative prefactors. Of course, the same adjustment happens in the BoseHubbardChain and FermiHubbardChain.
- Moved BoseHubbardModel and BoseHubbardChain as well as FermiHubbardModel and FermiHubbardChain into the new module tenpy.models.hubbard.
- Changed the arguments of coupling_term_handle_JW() and multi_coupling_term_handle_JW() to use strength and sites instead of op_needs_JW.
- Only accept valid identifiers as operator names in add_op().
Changed¶
- grid_concat() allows for None entries (representing zero blocks).
- from_full() allows for 'segment' boundary conditions.
- apply_local_op() allows for n-site operators.
Added¶
- Nearest-neighbor interaction in the BoseHubbardModel.
- multiply_op_names() to replace ' '.join(op_names) and allow explicit compression/multiplication.
- order_combine_term() to group operators together.
- dagger() of MPOs (and, to implement that, also flip_charges_qconj()).
- has_label() to check if a label exists.
- Addition of MPOs.
- 3 additional examples for Chern insulators in examples/chern_insulators/.
- from_MPOModel() for initializing nearest-neighbor models after grouping sites.
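For context on the multiply_op_names() entry above: products of onsite operators used to be named by joining the operator names with spaces, which is the idiom the new helper replaces (plain-string illustration only, not a TeNPy call):

```python
op_names = ['JW', 'Cd']          # operators applied on the same site
combined = ' '.join(op_names)    # the old idiom for a product-operator name
print(combined)  # JW Cd
```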
Fixed¶
- issue #36: long-range couplings could give an IndexError.
- issue #42: Onsite terms in FermiHubbardModel were wrong for lattices with a non-trivial unit cell.
- Missing factor 0.5 in GUE().
- Allow TermList to have terms with multiple operators acting on the same site.
- Allow MPS indices outside the unit cell in mps2lat_idx() and lat2mps_idx().
- expectation_value() did not work for n-site operators.
[0.4.0] - 2019-04-28¶
Backwards incompatible changes¶
- The argument order of tenpy.models.lattice.Lattice could be a tuple (priority, snake_winding) before. This is no longer valid and needs to be replaced by ("standard", snake_winding, priority).
- Moved the boundary conditions bc_coupling from the tenpy.models.model.CouplingModel into the tenpy.models.lattice.Lattice (as bc). Using the parameter bc_coupling will raise a FutureWarning; one should set the boundary conditions directly in the lattice.
- Added parameter permute (True by default) in tenpy.networks.mps.MPS.from_product_state() and tenpy.networks.mps.MPS.from_Bflat(). The resulting state will therefore be independent of the "conserve" parameter of the Sites - unlike before, where the meaning of the p_state argument might have changed.
- Generalized and renamed tenpy.networks.site.DoubleSite to tenpy.networks.site.GroupedSite, to allow for an arbitrary number of sites to be grouped. Arguments site0, site1, label0, label1 of the __init__ can be replaced with [site0, site1], [label0, label1], and op0, op1 of the kronecker_product with [op0, op1]; this recovers the functionality of the DoubleSite.
- Restructured the call structure of the Mixer in DMRG, allowing implementations of other mixers. To enable the mixer, set the DMRG parameter "mixer" to True or 'DensityMatrixMixer' instead of just 'Mixer'.
- The interaction parameter in the tenpy.models.bose_hubbard_chain.BoseHubbardModel (and tenpy.models.bose_hubbard_chain.BoseHubbardChain) did not correspond to \(U/2 N (N-1)\) as claimed in the Hamiltonian, but to \(U N^2\). The correcting factor 1/2 and the change in the chemical potential have been fixed.
- Major restructuring of tenpy.linalg.np_conserved and tenpy.linalg.charges. This should not break backwards compatibility, but if you compiled the cython files, you need to remove the old binaries in the source directory. Using bash cleanup.sh might be helpful to do that, but it also removes other files within the repository, so be careful and make a backup beforehand to be on the safe side. Afterwards, recompile with bash compile.sh.
- Changed the structure of tenpy.models.model.CouplingModel.onsite_terms and tenpy.models.model.CouplingModel.coupling_terms: each of them is now a dictionary with category strings as keys and the newly introduced tenpy.networks.terms.OnsiteTerms and tenpy.networks.terms.CouplingTerms as values.
- tenpy.models.model.CouplingModel.calc_H_onsite() is deprecated in favor of new methods.
- Argument raise_op2_left of tenpy.models.model.CouplingModel.add_coupling() is deprecated.
Added¶
- tenpy.networks.mps.MPS.expectation_value_term(), tenpy.networks.mps.MPS.expectation_value_terms_sum() and tenpy.networks.mps.MPS.expectation_value_multi_sites() for expectation values of terms.
- tenpy.networks.mpo.MPO.expectation_value() for an MPO.
- tenpy.linalg.np_conserved.Array.extend() and tenpy.linalg.charges.LegCharge.extend(), allowing to extend an Array with zeros.
- DMRG parameter 'orthogonal_to' allows calculating excited states for finite systems.
- Possibility to change the number of charges after creating LegCharges/Arrays.
- More general way to specify the order of sites in a tenpy.models.lattice.Lattice.
- New tenpy.models.lattice.Triangular, tenpy.models.lattice.Honeycomb and tenpy.models.lattice.Kagome lattices.
- A way to specify nearest-neighbor couplings in a Lattice, along with methods to count the number of nearest neighbors for sites in the bulk, and a way to plot them (plot_coupling() and friends).
- tenpy.networks.mpo.MPO.from_grids() to generate an MPO from a grid.
- tenpy.models.model.MultiCouplingModel for couplings involving more than 2 sites.
- request #8: Allow shift in boundary conditions of the CouplingModel.
- Allow using state labels in tenpy.networks.mps.MPS.from_product_state().
- tenpy.models.model.CouplingMPOModel structuring the default initialization of most models.
- Allow forcing periodic boundary conditions for finite MPS in the CouplingMPOModel. This is not recommended, though.
- tenpy.models.model.NearestNeighborModel.calc_H_MPO_from_bond() and tenpy.models.model.MPOModel.calc_H_bond_from_MPO() for conversion of H_bond into H_MPO and vice versa.
- tenpy.algorithms.tebd.RandomUnitaryEvolution for random unitary circuits.
- Allow documentation links to github issues, arXiv, papers by doi and the forum with e.g. :issue:`5`, :arxiv:`1805.00055`, :doi:`10.21468/SciPostPhysLectNotes.5`, :forum:`3`.
- tenpy.models.model.CouplingModel.coupling_strength_add_ext_flux() for adding hoppings with external flux.
- tenpy.models.model.CouplingModel.plot_coupling_terms() to visualize the added coupling terms.
- tenpy.networks.terms.OnsiteTerms, tenpy.networks.terms.CouplingTerms and tenpy.networks.terms.MultiCouplingTerms containing the terms for the CouplingModel and MultiCouplingModel. This allowed adding the category argument to add_onsite, add_coupling and add_multi_coupling.
- tenpy.networks.terms.TermList as another (more human-readable) representation of terms, with conversion from and to the other *Term classes.
- tenpy.networks.mps.MPS.init_LP() and tenpy.networks.mps.MPS.init_RP() to initialize the left and right parts of an Environment.
- tenpy.networks.mpo.MPOGraph.from_terms() and tenpy.networks.mpo.MPOGraph.from_term_list().
- Argument charge_sector in tenpy.networks.mps.MPS.correlation_length().
Changed¶
- Moved toycodes from the folder examples/ to a new folder toycodes/ to separate them clearly.
- Major remodelling of the internals of tenpy.linalg.np_conserved and tenpy.linalg.charges.
- Restructured lanczos into a class; added time evolution calculating exp(A*dt)|psi0>.
- Warning for poorly conditioned Lanczos; to overcome this, enable the new parameter reortho.
- Restructured tenpy.algorithms.dmrg: run() is now just a wrapper around the new run(); run(psi, model, pars) is roughly equivalent to eng = EngineCombine(psi, model, pars); eng.run().
- Added init_env() and reset_stats() to allow a simple restart of DMRG with slightly different parameters, e.g. for tuning Hamiltonian parameters.
- Call canonical_form() for infinite systems if the final state is not in canonical form.
- Changed default values for some parameters:
  - set trunc_params['chi_max'] = 100. Not setting a chi_max at all will lead to memory problems. Disable DMRG_params['chi_list'] = None by default to avoid conflicting settings.
  - reduce to mixer_params['amplitude'] = 1.e-5. A too strong mixer screws DMRG up pretty badly.
  - increase Lanczos_params['N_cache'] = N_max (i.e., keep all states).
  - set DMRG_params['P_tol_to_trunc'] = 0.05 and provide reasonable …_min and …_max values.
  - increased (default) DMRG accuracy by setting DMRG_params['max_E_err'] = 1.e-8 and DMRG_params['max_S_err'] = 1.e-5.
  - don't check the (absolute) energy for convergence in Lanczos.
  - set DMRG_params['norm_tol'] = 1.e-5 to check whether the final state is in canonical form.
- Verbosity of get_parameter() reduced: print parameters only for verbosity >= 1 and default values only for verbosity >= 2.
- Don't print the energy during real-time TEBD evolution - it's preserved up to truncation errors.
- Renamed the SquareLattice class to tenpy.models.lattice.Square for better consistency.
- Auto-determine whether Jordan-Wigner strings are necessary in add_coupling().
- The way the labels of npc Arrays are stored internally changed to a simple list with None entries. There is a deprecated property setter yielding a dictionary with the labels.
- Renamed the first_LP and last_RP arguments of MPSEnvironment and MPOEnvironment to init_LP and init_RP.
- Testing: instead of the (outdated) nose, we now use pytest <https://pytest.org> for testing.
Fixed¶
- issue #22: Serious bug in tenpy.linalg.np_conserved.inner(): if do_conj=True is used with non-zero qtotal, it returned 0. instead of non-zero values.
- Avoid error in tenpy.networks.mps.MPS.apply_local_op().
- Don't carry around total charge when using DMRG with a mixer.
- Corrected couplings of the FermionicHubbardChain.
- issue #2: memory leak in cython parts when using intelpython/anaconda.
- issue #4: incompatible data types.
- issue #6: the CouplingModel generated wrong couplings in some cases.
- issue #19: Convergence of energy was slow for infinite systems with N_sweeps_check=1.
- More reasonable traceback in case of wrong labels.
- Wrong dtype of npc.Array when adding/subtracting/… arrays of different data types.
- Could get wrong H_bond for completely decoupled chains.
- SVD could return outer indices with different axes.
- tenpy.networks.mps.MPS.overlap() now works for MPS with different total charge (e.g. after psi.apply_local_op(i, 'Sp')).
- Skip existing graph edges in MPOGraph.add() when building up terms without the strength part.
[0.3.0] - 2018-02-19¶
This is the first version published on github.
Added¶
Cython modules for np_conserved and charges, which can optionally be compiled for speed-ups
tools.optimization for dynamical optimization
Various models.
More predefined lattice sites.
Example toy-codes.
Network contractor for general networks
Changed¶
Switch to python3
Removed¶
Python 2 support.
[0.2.0] - 2017-02-24¶
Compatible with python2 and python3 (using the 2to3 tool).
Development version.
Includes TEBD and DMRG.
Changes compared to previous TeNPy¶
This library is based on a previous (closed source) version developed mainly by Frank Pollmann, Michael P. Zaletel and Roger S. K. Mong. While almost all files are completely rewritten and not backwards compatible, the overall structure is similar. In the following, we list only the most important changes.
Global Changes¶
- Syntax style based on PEP8. Use $> yapf -r -i ./ to ensure consistent formatting over the whole project. The special comments # yapf: disable and # yapf: enable can be used for manual formatting of some regions in code.
- Following PEP8, we distinguish between 'private' functions, indicated by names starting with an underscore and to be used only within the library, and the public API. The public API should be backwards-compatible between releases, while private functions might change at any time.
- All modules are in the folder tenpy to avoid name conflicts with other libraries.
- Within the library, relative imports are used, e.g., from ..tools.math import (toiterable, tonparray). Exception: the files in tests/ and examples/ run as __main__ and can't use relative imports.
- Files outside of the library (and in tests/, examples/) should use absolute imports, e.g. import tenpy.algorithms.tebd.
- Renamed tenpy/mps/ to tenpy/networks, since it contains various tensor networks.
- Added Site describing the local physical sites by providing the physical LegCharge and onsite operators.
np_conserved¶
pure python, no need to compile!
- In the module tenpy.linalg instead of algorithms/linalg.
- Moved the functionality for charges to charges.
- Introduced the classes ChargeInfo (basically the old q_number and mod_q) and LegCharge (the old qind, qconj).
- Introduced the class LegPipe to replace the old leg_pipe. It is derived from LegCharge and used as a leg in the array class. Thus, any inherited array (after tensordot etc.) still has all the necessary information to split the legs. (The legs are shared between different arrays, so they are saved only once in memory.)
- Enhanced indexing of the array class to support slices and 1D index arrays along certain axes.
- More functions, e.g. grid_outer().
TEBD¶
Introduced TruncationError for easy handling of total truncation error.
some truncation parameters are renamed and may have a different meaning, e.g. svd_max -> svd_min has no ‘log’ in the definition.
DMRG¶
separate Lanczos module in tenpy/linalg/. Strangely, the old version orthogonalized against the complex conjugates of orthogonal_to (contrary to its doc string!) (and thus calculated 'theta_o' as a bra, not a ket).
cleaned up, provide prototypes for DMRG engine and mixer.
Tools¶
- Added tenpy.tools.misc, which contains 'random stuff' from the old tools.math like to_iterable and to_array (renamed to follow PEP8, documented).
- Moved the fitting tools to tenpy.tools.fit.
- Enhanced tenpy.tools.string.vert_join() for nice formatting.
- Moved (parts of) the old cluster/omp.py to tenpy.tools.process.
- Added tenpy.tools.params for a simplified handling of parameters/arguments for models and/or algorithms. Similar to the old models.model.set_var, but used also for algorithms. Also, it may modify the given dictionary.
Introductions¶
The following documents are meant as introductions to various topics relevant to TeNPy.
If you are new to TeNPy, read the Overview.
Overview¶
Repository¶
The root directory of the git repository contains the following folders:
- tenpy
The actual source code of the library. Every subfolder contains an __init__.py file with a summary of what the modules in it are good for. (This file is also necessary to mark the folder as part of the Python package. Consequently, other subfolders of the git repo should not include an __init__.py file.)
- toycodes
Simple toy codes completely independent of the remaining library (i.e., of the code in tenpy/). These codes should be quite readable and intend to give a flavor of how (some of) the algorithms work.
- examples
Some example files demonstrating the usage and interface of the library.
- doc
A folder containing the documentation: the user guide is contained in the *.rst files. The online documentation is autogenerated from these files and the docstrings of the library. This folder contains a makefile for building the documentation; run make help for the different options. The necessary files for the reference in doc/reference can be auto-generated/updated with make src2html.
- tests
Contains files with test routines, to be used with pytest. If you are set up correctly and have pytest installed, you can run the test suite with pytest from within the tests/ folder.
- build
This folder is not distributed with the code, but is generated by setup.py (or compile.sh, respectively). It contains compiled versions of the Cython files, and can be ignored (and even removed without losing functionality).
Code structure: getting started¶
There are several layers of abstraction in TeNPy. While there is a certain hierarchy of how the concepts build on each other, the user can decide to utilize only some of them. Maximal flexibility is provided by an object-oriented style based on classes, which can be inherited and adjusted to individual demands.
The following figure gives an overview of the most important modules, classes and functions in TeNPy.
Gray backgrounds indicate (sub)modules, yellow backgrounds indicate classes.
Red arrows indicate inheritance relations, dashed black arrows indicate a direct use.
(The individual models might be derived from the NearestNeighborModel
depending on the geometry of the lattice.)
There is a clear hierarchy from high-level algorithms in the tenpy.algorithms
module down to basic
operations from linear algebra in the tenpy.linalg
module.

Most basic level: linear algebra¶
Note
See Charge conservation with np_conserved for more information on defining charges for arrays.
The most basic layer is the linalg
module, which provides basic features of linear algebra.
In particular, the np_conserved
submodule implements an Array
class which is used to represent
the tensors. The basic interface of np_conserved
is very similar to that of the NumPy and SciPy libraries.
However, the Array
class implements abelian charge conservation.
If no charges are to be used, one can use ‘trivial’ arrays, as shown in the following example code.
"""Basic use of the `Array` class with trivial arrays."""
# Copyright 2019-2021 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
M = npc.Array.from_ndarray_trivial([[0., 1.], [1., 0.]])
v = npc.Array.from_ndarray_trivial([2., 4. + 1.j])
v[0] = 3. # set individual entries like in numpy
print("|v> =", v.to_ndarray())
# |v> = [ 3.+0.j 4.+1.j]
M_v = npc.tensordot(M, v, axes=[1, 0])
print("M|v> =", M_v.to_ndarray())
# M|v> = [ 4.+1.j 3.+0.j]
print("<v|M|v> =", npc.inner(v.conj(), M_v, axes='range'))
# <v|M|v> = (24+0j)
The number and types of symmetries are specified in a ChargeInfo
class.
An Array
instance represents a tensor satisfying a charge rule specifying which blocks of it are nonzero.
Internally, it stores only the non-zero blocks of the tensor, along with one LegCharge
instance for each
leg, which contains the charges and the sign qconj of that leg.
We can combine multiple legs into a single larger LegPipe
,
which is derived from the LegCharge
and stores all the information necessary to later split the pipe.
The following code explicitly defines the spin-1/2 \(S^+, S^-, S^z\) operators and uses them to generate and diagonalize the two-site Hamiltonian \(H = \vec{S} \cdot \vec{S}\). It prints the charge values (by default sorted ascending) and the eigenvalues of H.
"""Explicit definition of charges and spin-1/2 operators."""
# Copyright 2019-2021 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
# consider spin-1/2 with Sz-conservation
chinfo = npc.ChargeInfo([1]) # just a U(1) charge
# charges for up, down state
p_leg = npc.LegCharge.from_qflat(chinfo, [[1], [-1]])
Sz = npc.Array.from_ndarray([[0.5, 0.], [0., -0.5]], [p_leg, p_leg.conj()])
Sp = npc.Array.from_ndarray([[0., 1.], [0., 0.]], [p_leg, p_leg.conj()])
Sm = npc.Array.from_ndarray([[0., 0.], [1., 0.]], [p_leg, p_leg.conj()])
Hxy = 0.5 * (npc.outer(Sp, Sm) + npc.outer(Sm, Sp))
Hz = npc.outer(Sz, Sz)
H = Hxy + Hz
# here, H has 4 legs
H.iset_leg_labels(["s1", "t1", "s2", "t2"])
H = H.combine_legs([["s1", "s2"], ["t1", "t2"]], qconj=[+1, -1])
# here, H has 2 legs
print(H.legs[0].to_qflat().flatten())
# prints [-2 0 0 2]
E, U = npc.eigh(H) # diagonalize blocks individually
print(E)
# [ 0.25 -0.75 0.25 0.25]
Sites for the local Hilbert space and tensor networks¶
The next basic concept is that of a local Hilbert space, which is represented by a Site
in TeNPy.
This class does not only label the local states and define the charges, but also
provides onsite operators. For example, the SpinHalfSite
provides the
\(S^+, S^-, S^z\) operators under the names 'Sp', 'Sm', 'Sz'
, defined as Array
instances similarly as
in the code above.
Since the most common sites like for example the SpinSite
(for general spin S=0.5, 1, 1.5,…), BosonSite
and
FermionSite
are predefined, a user of TeNPy usually does not need to define the local charges and operators explicitly.
The total Hilbert space, i.e., the tensor product of the local Hilbert spaces, is then just given by a
list of Site
instances. If desired, different kinds of Site
can be combined in that list.
This list is then given to classes representing tensor networks like the MPS
and
MPO
.
The tensor network classes also use Array
instances for the tensors of the represented network.
The following example illustrates the initialization of a spin-1/2 site, an MPS
representing the Neel state, and
an MPO
representing the Heisenberg model by explicitly defining the W tensor.
"""Initialization of sites, MPS and MPO."""
# Copyright 2019-2021 TeNPy Developers, GNU GPLv3
from tenpy.networks.site import SpinHalfSite
from tenpy.networks.mps import MPS
from tenpy.networks.mpo import MPO
spin = SpinHalfSite(conserve="Sz")
print(spin.Sz.to_ndarray())
# [[ 0.5 0. ]
# [ 0. -0.5]]
N = 6 # number of sites
sites = [spin] * N # repeat entry of list N times
pstate = ["up", "down"] * (N // 2) # Neel state
psi = MPS.from_product_state(sites, pstate, bc="finite")
print("<Sz> =", psi.expectation_value("Sz"))
# <Sz> = [ 0.5 -0.5  0.5 -0.5  0.5 -0.5]
print("<Sp_i Sm_j> =", psi.correlation_function("Sp", "Sm"), sep="\n")
# <Sp_i Sm_j> =
# [[1. 0. 0. 0. 0. 0.]
# [0. 0. 0. 0. 0. 0.]
# [0. 0. 1. 0. 0. 0.]
# [0. 0. 0. 0. 0. 0.]
# [0. 0. 0. 0. 1. 0.]
# [0. 0. 0. 0. 0. 0.]]
# define an MPO
Id, Sp, Sm, Sz = spin.Id, spin.Sp, spin.Sm, spin.Sz
J, Delta, hz = 1., 1., 0.2
W_bulk = [[Id, Sp, Sm, Sz, -hz * Sz], [None, None, None, None, 0.5 * J * Sm],
[None, None, None, None, 0.5 * J * Sp], [None, None, None, None, J * Delta * Sz],
[None, None, None, None, Id]]
W_first = [W_bulk[0]] # first row
W_last = [[row[-1]] for row in W_bulk] # last column
Ws = [W_first] + [W_bulk] * (N - 2) + [W_last]
H = MPO.from_grids([spin] * N, Ws, bc='finite', IdL=0, IdR=-1)
print("<psi|H|psi> =", H.expectation_value(psi))
# <psi|H|psi> = -1.25
Models¶
Note
See Models for more information on sites and how to define and extend models on your own.
Technically, the explicit definition of an MPO
is already enough to call an algorithm like DMRG in dmrg
.
However, writing down the W tensors is cumbersome, especially for more complicated models.
Hence, TeNPy provides another layer of abstraction for the definition of models, which we discuss next.
Different kinds of algorithms require different representations of the Hamiltonian.
Therefore, the library offers to specify the model abstractly by the individual onsite terms and coupling terms of the Hamiltonian.
The following example illustrates this, again for the Heisenberg model.
"""Definition of a model: the XXZ chain."""
# Copyright 2019-2021 TeNPy Developers, GNU GPLv3
from tenpy.networks.site import SpinSite
from tenpy.models.lattice import Chain
from tenpy.models.model import CouplingModel, NearestNeighborModel, MPOModel
class XXZChain(CouplingModel, NearestNeighborModel, MPOModel):
def __init__(self, L=2, S=0.5, J=1., Delta=1., hz=0.):
spin = SpinSite(S=S, conserve="Sz")
# the lattice defines the geometry
lattice = Chain(L, spin, bc="open", bc_MPS="finite")
CouplingModel.__init__(self, lattice)
# add terms of the Hamiltonian
self.add_coupling(J * 0.5, 0, "Sp", 0, "Sm", 1) # Sp_i Sm_{i+1}
self.add_coupling(J * 0.5, 0, "Sp", 0, "Sm", -1) # Sp_i Sm_{i-1}
self.add_coupling(J * Delta, 0, "Sz", 0, "Sz", 1)
# (for site dependent prefactors, the strength can be an array)
self.add_onsite(-hz, 0, "Sz")
# finish initialization
# generate MPO for DMRG
MPOModel.__init__(self, lattice, self.calc_H_MPO())
# generate H_bond for TEBD
NearestNeighborModel.__init__(self, lattice, self.calc_H_bond())
While this generates the same MPO as in the previous code, this example can easily be adjusted and generalized, for
example to a higher dimensional lattice by just specifying a different lattice.
Internally, the MPO is generated using a finite state machine picture.
This allows not only to translate more complicated Hamiltonians into their corresponding MPOs,
but also to automate the mapping from a higher dimensional lattice to the 1D chain along which the MPS
winds.
Note that this mapping introduces longer-range couplings, so the model can no longer be defined as a NearestNeighborModel suited for TEBD if a lattice other than the Chain is used.
Of course, many commonly studied models are also predefined.
For example, the following code initializes the Heisenberg model on a kagome lattice;
the spin liquid nature of the ground state of this model is highly debated in the current literature.
"""Initialization of the Heisenberg model on a kagome lattice."""
# Copyright 2019-2021 TeNPy Developers, GNU GPLv3
from tenpy.models.spins import SpinModel
model_params = {
"S": 0.5, # Spin 1/2
"lattice": "Kagome",
"bc_MPS": "infinite",
"bc_y": "cylinder",
"Ly": 2, # defines cylinder circumference
"conserve": "Sz", # use Sz conservation
"Jx": 1.,
"Jy": 1.,
"Jz": 1. # Heisenberg coupling
}
model = SpinModel(model_params)
Algorithms¶
Another layer is given by algorithms like DMRG and TEBD.
Using the previous concepts, setting up a simulation running those algorithms is a matter of just a few lines of code.
The following example runs a DMRG simulation, see dmrg
, here for the transverse-field Ising model at the critical point.
"""Call of (finite) DMRG."""
# Copyright 2019-2021 TeNPy Developers, GNU GPLv3
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import dmrg
N = 16 # number of sites
model = TFIChain({"L": N, "J": 1., "g": 1., "bc_MPS": "finite"})
sites = model.lat.mps_sites()
psi = MPS.from_product_state(sites, ['up'] * N, "finite")
dmrg_params = {"trunc_params": {"chi_max": 100, "svd_min": 1.e-10}, "mixer": True}
info = dmrg.run(psi, model, dmrg_params)
print("E =", info['E'])
# E = -20.01638790048513
print("max. bond dimension =", max(psi.chi))
# max. bond dimension = 27
The switch from DMRG to iDMRG in TeNPy is simply accomplished by a change of the parameter
"bc_MPS"
from "finite"
to "infinite"
, both for the model and the state.
The returned E
is then the energy density per site.
Due to translation invariance, one can also evaluate the correlation length, here slightly away from the critical point.
"""Call of infinite DMRG."""
# Copyright 2019-2021 TeNPy Developers, GNU GPLv3
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import dmrg
N = 2 # number of sites in unit cell
model = TFIChain({"L": N, "J": 1., "g": 1.1, "bc_MPS": "infinite"})
sites = model.lat.mps_sites()
psi = MPS.from_product_state(sites, ['up'] * N, "infinite")
dmrg_params = {"trunc_params": {"chi_max": 100, "svd_min": 1.e-10}, "mixer": True}
info = dmrg.run(psi, model, dmrg_params)
print("E =", info['E'])
# E = -1.342864022725017
print("max. bond dimension =", max(psi.chi))
# max. bond dimension = 56
print("corr. length =", psi.correlation_length())
# corr. length = 4.915809146764157
Running time evolution with TEBD requires an additional loop, during which the desired observables have to be measured. The following code shows this directly for the infinite version of TEBD.
"""Call of (infinite) TEBD."""
# Copyright 2019-2021 TeNPy Developers, GNU GPLv3
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import tebd
M = TFIChain({"L": 2, "J": 1., "g": 1.5, "bc_MPS": "infinite"})
psi = MPS.from_product_state(M.lat.mps_sites(), [0] * 2, "infinite")
tebd_params = {
"order": 2,
"delta_tau_list": [0.1, 0.001, 1.e-5],
"max_error_E": 1.e-6,
"trunc_params": {
"chi_max": 30,
"svd_min": 1.e-10
}
}
eng = tebd.TEBDEngine(psi, M, tebd_params)
eng.run_GS() # imaginary time evolution with TEBD
print("E =", sum(psi.expectation_value(M.H_bond)) / psi.L)
print("final bond dimensions: ", psi.chi)
Simulations¶
The top-most layer is given by Simulations. A simulation wraps the whole setup of initializing the model, MPS and algorithm classes, running the algorithm, possibly performing measurements, and finally saving results to disk, if desired. It provides some extra functionality, like the ability to resume an interrupted simulation, e.g. if your job got killed on the cluster due to runtime limits.
Ideally, the Simulation (sub)class represents the whole simulation from start to end, giving reproducible results that depend only on the parameters given to it.
Charge conservation with np_conserved¶
The basic idea is quickly summarized: by inspecting the Hamiltonian, you can identify symmetries, which correspond to conserved quantities, called charges. These charges divide the tensors into different sectors. This can be used to infer, for example, a block-diagonal structure of certain matrices, which in turn speeds up SVD or diagonalization a lot. Even for more general (non-square-matrix) tensors, charge conservation imposes restrictions on which blocks of a tensor can be non-zero. Only those blocks need to be saved, which ultimately (= for large enough arrays) leads to a speedup of many routines, e.g., tensordot.
This introduction covers our implementation of charges; explaining mathematical details of the underlying symmetry is beyond its scope. We refer you to the corresponding chapter in our [TeNPyNotes] for a more general introduction to the idea (also stating the "charge rule" introduced below). [singh2010] explains why it works from a mathematical point of view; [singh2011] focuses on a \(U(1)\) symmetry and might be easier to read.
What you really need to know about np_conserved¶
The good news is: It is not necessary to understand all the details explained in the following sections
if you just want to use TeNPy for “standard” simulations like TEBD and DMRG.
In practice, you will likely not have to define the charges yourself.
For most simulations using TeNPy, the charges are initially defined in the Site
;
and there are many pre-defined sites like the SpinHalfSite
, which you can just use.
The sites in turn are initialized by the Model
class you are using (see also Models).
From there, all the necessary charge information is automatically propagated along with the tensors.
However, you should definitely know a few basic facts about the usage of charge conservation in TeNPy:
- Instead of using numpy arrays, tensors are represented by the Array class. This class is defined in np_conserved (the name standing for "numpy with charge conservation"). Internally, it stores only non-zero blocks of the tensor, which are "compatible" with the charges of the indices. It has to have a well-defined overall charge qtotal. This excludes certain operators (like \(S^x\) for Sz conservation) and MPS which are superpositions of states in different charge sectors.
- There is a class ChargeInfo holding the general information about what kind of charges we have, and a LegCharge for the charge data on a given leg. The leg holds a flag qconj, which is +1 or -1 depending on whether the leg goes into the tensor (representing a vector space) or out of the tensor (representing the corresponding dual vector space).
- Besides the Array class methods, there are a bunch of functions like tensordot(), svd() or eigh() to manipulate tensors. These functions have a very similar call structure as the corresponding numpy functions, but they act on our tensor Array class and preserve the block structure (and exploit it for speed, wherever possible).
- The only allowed "reshaping" operations for those tensors are to combine legs and to split previously combined legs. See the corresponding section below.
- It is convenient to use string labels instead of numbers to refer to the various legs of a tensor. The rules for how these labels change during the various operations are also described in a section below.
Introduction to combine_legs, split_legs and LegPipes¶
Often, it is necessary to "combine" multiple legs into one: for example, to perform an SVD, a tensor needs to be viewed as a matrix.
For a flat array, this can be done with np.reshape
, e.g., if A
has shape (10, 3, 7)
then B = np.reshape(A, (30, 7))
will
result in a (view of the) array with one less dimension, but a “larger” first leg. By default (order='C'
), this
results in
B[i*3 + j , k] == A[i, j, k] for i in range(10) for j in range(3) for k in range(7)
While for a np.array a reshaping (10, 3, 7) -> (2, 21, 5) would also be allowed, it does not make sense physically. The only sensible "reshape" operations on an Array are:
- to combine multiple legs into one leg pipe (LegPipe) with combine_legs(), or
- to split a pipe of previously combined legs with split_legs().
Each leg has a Hilbert space, and a representation of the symmetry on that Hilbert space. Combining legs corresponds to the tensor product operation, and for abelian groups, the corresponding “fusion” of the representation is the simple addition of charge.
Fusion is not a lossless process, so if we ever want to split the combined leg,
we need some additional data to tell us how to reverse the tensor product.
This data is saved in the class LegPipe
, derived from the LegCharge
and used as new leg.
Details of the information contained in a LegPipe are given in the class doc string.
The rough usage idea is as follows:
- You can call combine_legs() without supplying any LegPipes; combine_legs will then make them for you. Nevertheless, if you plan to perform the combination over and over again on sets of legs you know to be identical [with the same charges etc., up to an overall -1 in qconj on all incoming and outgoing legs], you might make a LegPipe anyway to save the overhead of computing it each time.
- Either way, the resulting Array will have a LegPipe as a LegCharge on the combined leg. Thus it, and all tensors inheriting the leg (e.g. the results of svd, tensordot etc.), will have the information how to split the LegPipe back into the original legs.
- Once you have performed the necessary operations, you can call split_legs(). This uses the information saved in the LegPipe to split the legs, recovering the original legs.
For a LegPipe, conj()
changes qconj
for the outgoing pipe and the incoming legs.
If you need a LegPipe with the same incoming qconj
, use outer_conj()
.
Leg labeling¶
It’s convenient to name the legs of a tensor: for instance, we can name legs 0, 1, 2 to be 'a', 'b', 'c'
: \(T_{i_a,i_b,i_c}\).
That way we don’t have to remember the ordering! Under tensordot, we can then call
U = npc.tensordot(S, T, axes = [ [...], ['b'] ] )
without having to remember where exactly 'b'
is.
Obviously U
should then inherit the name of its legs from the uncontracted legs of S and T.
So here is how it works:
- Labels can only be strings. The labels should not include the characters . or ?. Internally, the labels are stored as a dict a.labels = {label: leg_position, ...}. Not all legs need a label.
- To set the labels, call A.set_labels(['a', 'b', None, 'c', ... ]), which will set up the labeling {'a': 0, 'b': 1, 'c': 3, ...}.
- (Where implemented) the specification of axes can use either the labels or the index positions. For instance, the call tensordot(A, B, [['a', 2, 'c'], [...]]) will interpret 'a' and 'c' as labels (calling get_leg_indices() to find their positions using the dict) and 2 as 'the 2nd leg'. That's why we require labels to be strings!
- Labels will be intelligently inherited through the various operations of np_conserved.
Under transpose, labels are permuted.
Under tensordot, labels are inherited from uncontracted legs. If there is a collision, both labels are dropped.
Under combine_legs, labels get concatenated with a '.' delimiter and surrounded by brackets. Example: let a.labels = {'a': 1, 'b': 2, 'c': 3}. Then if b = a.combine_legs([[0, 1], [2]]), it will have b.labels = {'(a.b)': 0, '(c)': 1}. If some sub-leg of a combined leg isn't named, then a '?#' label is inserted (with # the leg index), e.g., 'a.?0.c'.
Under split_legs, the labels are split using the delimiters (and the '?#' are dropped).
Under conj, iconj: take 'a' -> 'a*', 'a*' -> 'a', and '(a.(b*.c))' -> '(a*.(b.c*))'.
Under svd, the outer labels are inherited, and inner labels can be optionally passed.
Under pinv, the labels are transposed.
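The label rules for tensordot and combine_legs can be illustrated with a small sketch; the labels 'a', 'b', 'bc', 'c' below are arbitrary names chosen for this example:

```python
"""Sketch: how leg labels propagate through np_conserved operations."""
import numpy as np
import tenpy.linalg.np_conserved as npc

chinfo = npc.ChargeInfo([1])
leg = npc.LegCharge.from_qflat(chinfo, [[1], [-1]])
S = npc.Array.from_func(np.random.standard_normal, [leg, leg.conj()])
S.iset_leg_labels(['a', 'b'])
T = npc.Array.from_func(np.random.standard_normal, [leg, leg.conj()])
T.iset_leg_labels(['bc', 'c'])
# contract leg 'b' of S with leg 'bc' of T; 'a' and 'c' are inherited
U = npc.tensordot(S, T, axes=[['b'], ['bc']])
assert U.get_leg_labels() == ['a', 'c']
# combining legs concatenates the labels with '.' and adds brackets
V = U.combine_legs([['a', 'c']])
assert V.get_leg_labels() == ['(a.c)']
```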
Indexing of an Array¶
Although it is usually not necessary to access single entries of an Array
, you can of course do that.
In the simplest case, this is something like A[0, 2, 1]
for a rank-3 Array A
.
However, accessing single entries is quite slow and usually not recommended. For small Arrays, it may be convenient to convert them
back to flat numpy arrays with to_ndarray()
.
On top of that very basic indexing, Array supports slicing and some kind of advanced indexing, which is, however, different from that of numpy arrays. Unlike numpy arrays, our Array class does not broadcast existing index arrays - this would be terribly slow. Also, np.newaxis is not supported, since inserting new axes requires additional information for the charges.
Instead, we allow just indexing of the legs independent of each other, of the form A[i0, i1, ...]
.
If all indices i0, i1, ...
are integers, the single corresponding entry (of type dtype) is returned.
However, the individual ‘indices’ i0
for the individual legs can also be one of what is described in the following list.
In that case, a new Array
with less data (specified by the indices) is returned.
The ‘indices’ can be:
- an int: fix the index of that axis, return an array with one less dimension. See also take_slice().
- a slice(None) or ':': keep the complete axis.
- an Ellipsis or '...': shorthand for slice(None) on all unspecified axes, filling the indices up to the full rank.
- a 1D bool ndarray mask: apply a mask to that axis, see iproject().
- a slice(start, stop, step) or start:stop:step: keep only the indices specified by the slice. This is also implemented with iproject.
- a 1D int ndarray mask: keep only the indices specified by the array. This is also implemented with iproject.
For slices and 1D arrays, additional permutations may be performed with the help of permute().
.
If the number of indices is less than rank, the remaining axes remain free, so for a rank 4 Array A
, A[i0, i1] == A[i0, i1, ...] == A[i0, i1, :, :]
.
Note that indexing always copies the data, even if the indices contain just slices, in which case numpy would return a view.
However, assigning with A[:, [3, 5], 3] = B
should work as you would expect.
Warning
Due to numpy’s advanced indexing, for 1D integer arrays a0
and a1
the following holds
A[a0, a1].to_ndarray() == A.to_ndarray()[np.ix_(a0, a1)] != A.to_ndarray()[a0, a1]
For a combination of slices and arrays, things get more complicated with numpy's advanced indexing.
In that case, a simple np.ix_(...)
doesn’t help any more to emulate our version of indexing.
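To make these rules concrete, here is a small sketch using a trivial (charge-free) Array; the shape and values are made up for illustration:

```python
"""Sketch of basic and advanced indexing of an npc Array."""
import numpy as np
import tenpy.linalg.np_conserved as npc

A = npc.Array.from_ndarray_trivial(np.arange(24.).reshape(2, 3, 4))
print(A[1, 2, 3])  # all indices are ints: a single entry
# 23.0
B = A[0, :, ::2]   # int, full slice, strided slice -> rank-2 Array
print(B.to_ndarray().shape)
# (3, 2)
mask = np.array([True, False, True])
C = A[:, mask, :]  # boolean mask along the middle leg
print(C.to_ndarray().shape)
# (2, 2, 4)
```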
Details of the np_conserved implementation¶
Notations¶
Lets fix the notation of certain terms for this introduction and the doc-strings in np_conserved
.
This might be helpful if you know the basics from a different context.
If you’re new to the subject, keep reading even if you don’t understand each detail,
and come back to this section when you encounter the corresponding terms again.
An Array
is a multi-dimensional array representing a tensor with entries \(T_{a_0, a_1, ..., a_{rank-1}}\):
Each leg \(a_i\) corresponds to a vector space of dimension n_i.
An index of a leg is a particular value \(a_i \in \lbrace 0, ... ,n_i-1\rbrace\).
The rank is the number of legs, the shape is \((n_0, ..., n_{rank-1})\).
We restrict ourselves to abelian charges with entries in \(\mathbb{Z}\) or in \(\mathbb{Z}_m\).
The nature of a charge is specified by \(m\); we set \(m=1\) for charges corresponding to \(\mathbb{Z}\).
The number of charges is referred to as qnumber for short, and the collection of \(m\) for each charge is called qmod.
The qnumber, qmod and possibly descriptive names of the charges are saved in an instance of ChargeInfo
.
To each index of each leg, a value of the charge(s) is associated.
A charge block is a contiguous slice corresponding to the same charge(s) of the leg.
A qindex is an index in the list of charge blocks for a certain leg.
A charge sector for given charge(s) is the set of all qindices with that charge(s).
A leg is blocked if all charge sectors map one-to-one to qindices.
Finally, a leg is sorted if the charges are sorted lexicographically.
Note that a sorted leg is always blocked.
We can also speak of the complete array to be blocked by charges or legcharge-sorted, which means that all of its legs are blocked or sorted, respectively.
The charge data for a single leg is collected in the class LegCharge
.
A LegCharge
also has a flag qconj, which tells whether the charges
point inward (+1) or outward (-1). What that means is explained later in Which entries of the npc Array can be non-zero?.
For completeness, let us also summarize the internal structure of an Array
here:
The array saves only non-zero blocks, collected as a list of np.array in self._data
.
The qindices necessary to map these blocks to the original leg indices are collected in self._qdata.
An array is said to be qdata-sorted if its self._qdata
is lexicographically sorted.
More details on this follow later.
However, note that you usually shouldn’t access _qdata and _data directly - this
is only necessary from within tensordot, svd, etc.
Also, an array has a total charge, defining which entries can be non-zero - details in Which entries of the npc Array can be non-zero?.
Finally, a leg pipe (implemented in LegPipe
)
is used to formally combine multiple legs into one leg. Again, more details follow later.
Physical Example¶
For concreteness, you can think of the Hamiltonian \(H = -t \sum_{<i,j>} (c^\dagger_i c_j + H.c.) + U \sum_{<i,j>} n_i n_j\) with \(n_i = c^\dagger_i c_i\). This Hamiltonian has the global \(U(1)\) gauge symmetry \(c_i \rightarrow c_i e^{i\phi}\). The corresponding charge is the total number of particles \(N = \sum_i n_i\). You would then introduce one charge with \(m=1\).
Note that the total charge is a sum of local terms, living on single sites. Thus, you can infer the charge of a single physical site: it’s just the value \(q_i = n_i \in \mathbb{N}\) for each of the states.
Note that you can only assign integer charges. Consider for example the spin 1/2 Heisenberg chain. Here, you can naturally identify the magnetization \(S^z = \sum_i S^z_i\) as the conserved quantity, with values \(S^z_i = \pm \frac{1}{2}\). Obviously, if \(S^z\) is conserved, then so is \(2 S^z\), so you can use the charges \(q_i = 2 S^z_i \in \lbrace-1, +1 \rbrace\) for the down and up states, respectively. Alternatively, you can also use a shift and define \(q_i = S^z_i + \frac{1}{2} \in \lbrace 0, 1 \rbrace\).
As another example, consider BCS like terms \(\sum_k (c^\dagger_k c^\dagger_{-k} + H.c.)\). These terms break the total particle conservation, but they preserve the total parity, i.e., \(N \mod 2\) is conserved. Thus, you would introduce a charge with \(m = 2\) in this case.
In the above examples, we had only a single charge conserved at a time, but you might be lucky and have multiple conserved quantities, e.g. if you have two chains coupled only by interactions. TeNPy is designed to handle the general case of multiple charges. When giving examples, we will restrict to one charge, but everything generalizes to multiple charges.
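In code, each of these choices is just a ChargeInfo; the following sketch mirrors the examples above, with free-form charge names chosen for illustration:

```python
"""Sketch: ChargeInfo for the physical examples discussed above."""
import tenpy.linalg.np_conserved as npc

# particle number conservation: one Z charge (m = 1)
N_charge = npc.ChargeInfo([1], ['N'])
# BCS-like terms: only parity is conserved, a Z_2 charge (m = 2)
parity = npc.ChargeInfo([2], ['parity_N'])
# spin-1/2 chain: use q_i = 2 S^z_i, i.e. integer charges +1, -1 for up, down
two_Sz = npc.ChargeInfo([1], ['2*Sz'])
leg = npc.LegCharge.from_qflat(two_Sz, [[1], [-1]])  # up, down states
# multiple conserved quantities at once are also possible
both = npc.ChargeInfo([1, 2], ['N', 'parity'])
assert both.qnumber == 2
```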
The different formats for LegCharge¶
As mentioned above, we assign charges to each index of each leg of a tensor. This can be done in three formats: qflat, qind and qdict. Let me explain them with examples, for simplicity considering only a single charge (the innermost array has one entry for each charge).
- qflat form: simply a list of charges for each index.
An example:
qflat = [[-2], [-1], [-1], [0], [0], [0], [0], [3], [3]]
This tells you that the leg has size 9, and the charges are [-2], [-1], [-1], ..., [3] for the indices 0, 1, 2, 3, ..., 8. You can identify four charge blocks slice(0, 1), slice(1, 3), slice(3, 7), slice(7, 9) in this example, which have the charges [-2], [-1], [0], [3]. In other words, the indices 1, 2 (which are in slice(1, 3)) have the same charge value [-1]. A qindex would just enumerate these blocks as 0, 1, 2, 3.
- qind form: a 1D array slices and a 2D array charges.
This is a more compact version than the qflat form: the slices give a partition of the indices and the charges give the charge values. The same example as above would simply be:
slices = [0, 1, 3, 7, 9]
charges = [[-2], [-1], [0], [3]]
Note that slices includes 0 as first entry and the number of indices (here 9) as last entry. Thus it has length block_number + 1, where block_number (the attribute block_number) is the number of charge blocks in the leg, i.e., a qindex runs from 0 to block_number - 1. On the other hand, the 2D array charges has shape (block_number, qnumber), where qnumber (the attribute qnumber) is the number of charges. In that way, the qind form maps a qindex, say qi, to the indices slice(slices[qi], slices[qi+1]) and the charge(s) charges[qi].
- qdict form: a dictionary in the opposite direction to qind, mapping charge tuples to slices.
Again for the same example:
{(-2,): slice(0, 1), (-1,): slice(1, 3), (0,) : slice(3, 7), (3,) : slice(7, 9)}
Since the keys of a dictionary are unique, this form is only possible if the leg is completely blocked.
The LegCharge
saves the charge data of a leg internally in qind form,
directly in the attributes slices and charges.
However, it also provides convenient functions for conversion from and to the qflat and qdict forms.
The above example was nice since all charges were sorted and the charge blocks were ‘as large as possible’. This is however not required.
The following example is also a valid qind form:
slices = [0, 1, 3, 5, 7, 9]
charges = [[-2], [-1], [0], [0], [3]]
This leads to the same qflat form as the above example, thus representing the same charges on the leg indices. However, regarding our Arrays, this is quite different, since it divides the leg into 5 (instead of the previous 4) charge blocks. We say the latter example is not bunched, while the former one is bunched.
To make the different notions of sorted and bunched clearer, consider the following (valid) examples:
charges | bunched | sorted | blocked
---|---|---|---
[[-2], [-1], [0], [3]] | yes | yes | yes
[[-2], [-1], [0], [0], [3]] | no | yes | no
[[0], [-2], [3]] | yes | no | yes
[[0], [-2], [0]] | yes | no | no
If a leg is bunched and sorted, it is automatically blocked (but not vice versa). See also below for further comments on that.
Which entries of the npc Array can be non-zero?¶
The reason for the speedup with np_conserved lies in the fact that it saves only the blocks ‘compatible’ with the charges. But how is this ‘compatible’ defined?
Assume you have a tensor, call it \(T\), and the LegCharge for each of its legs, say \(a, b, c, \ldots\). Remember that the LegCharge associates to each index of the leg a charge value (for each of the charges, if qnumber > 1). Let a.to_qflat()[ia] denote the charge(s) of index ia for leg a, and similarly for the other legs.
In addition, the LegCharge has a flag qconj. This flag is only a sign, saved as +1 or -1, specifying whether the charges point 'inward' (+1, default) or 'outward' (-1) of the tensor.
Then, the total charge of an entry T[ia, ib, ic, ...]
of the tensor is defined as:
qtotal[ia, ib, ic, ...] = a.to_qflat()[ia] * a.qconj + b.to_qflat()[ib] * b.qconj + c.to_qflat()[ic] * c.qconj + ... modulo qmod
The rule determining which entries of an Array can be non-zero (i.e., are 'compatible' with the charges) is then very simple:
Rule for non-zero entries
An entry ia, ib, ic, ... of an Array can only be non-zero if qtotal[ia, ib, ic, ...] matches the unique qtotal attribute of the class.

In other words, there is a single total charge, given by the qtotal attribute of an Array. All indices ia, ib, ic, ... for which the above defined qtotal[ia, ib, ic, ...] matches this total charge are said to be compatible with the charges and can be non-zero. All other indices are incompatible with the charges and must be zero.
The case of multiple charges, qnumber > 1, is a straightforward generalization: an entry can only be non-zero if it is compatible with each of the defined charges.
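As a sketch of this rule (plain numpy, not the TeNPy implementation; entry_qtotal is a name we made up), one can compute the total charge of an entry from the per-leg qflat charges and qconj signs and compare it with the array's qtotal:

```python
import numpy as np

def entry_qtotal(index, leg_qflats, leg_qconjs, qmod):
    """Total charge of the entry at `index`: sum of charge * qconj over all legs."""
    total = sum(np.asarray(qflat)[i] * qconj
                for i, qflat, qconj in zip(index, leg_qflats, leg_qconjs))
    return np.mod(total, qmod)  # charges are defined modulo qmod

# a single '2*Sz' charge: physical leg with charges up -> 1, down -> -1
p = [[1], [-1]]
qmod = [100]   # large qmod acts like a U(1) charge (no wrap-around here)

# entries of an onsite operator O[p, p*] (leg p with qconj=+1, p* with qconj=-1)
# that are compatible with qtotal == 0, as for Sz:
compatible = [(i, j) for i in range(2) for j in range(2)
              if np.all(entry_qtotal((i, j), [p, p], [+1, -1], qmod) == 0)]
print(compatible)  # [(0, 0), (1, 1)] -> only diagonal entries can be non-zero
```

For an operator like \(S^+\), the only non-zero entry (up, down) instead carries total charge 2, so such an array needs qtotal = 2.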
The pesky qconj - contraction as an example¶
Why did we introduce the qconj flag? Remember, it's just a sign telling whether the charge points inward or outward. So what's the reasoning?
The short answer is, that LegCharges actually live on bonds (i.e., legs which are to be contracted) rather than individual tensors. Thus, it is convenient to share the LegCharges between different legs and even tensors, and just adjust the sign of the charges with qconj.
As an example, consider the contraction of two tensors, \(C_{ia,ic} = \sum_{ib} A_{ia,ib} B_{ib,ic}\).
For simplicity, say that the total charge of all three tensors is zero.
What are the implications of the above rule for non-zero entries?
Or rather, how can we ensure that C complies with the above rule? An entry C[ia,ic] will only be non-zero if there is an ib such that both A[ia,ib] and B[ib,ic] are non-zero, i.e., both of the following equations are fulfilled:
A.qtotal == A.legs[0].to_qflat()[ia] * A.legs[0].qconj + A.legs[1].to_qflat()[ib] * A.legs[1].qconj modulo qmod
B.qtotal == B.legs[0].to_qflat()[ib] * B.legs[0].qconj + B.legs[1].to_qflat()[ic] * B.legs[1].qconj modulo qmod
(A.legs[0] is the LegCharge saving the charges of the first leg (with index ia) of A.)
For the uncontracted legs, we just keep the charges as they are:
C.legs = [A.legs[0], B.legs[1]]
It is then straightforward to check that the rule is fulfilled for \(C\) if the following condition is met:

A.qtotal + B.qtotal - C.qtotal == A.legs[1].to_qflat()[ib] * A.legs[1].qconj + B.legs[0].to_qflat()[ib] * B.legs[0].qconj modulo qmod

The easiest way to meet this condition is (1) to require that A.legs[1] and B.legs[0] share the same charges to_qflat(), but have opposite qconj, and (2) to define C.qtotal = A.qtotal + B.qtotal.
This justifies the introduction of qconj: when you define the tensors, you have to define the LegCharge for b only once, say for A.legs[1]. For B.legs[0] you simply use A.legs[1].conj(), which creates a copy of the LegCharge with shared slices and charges, but opposite qconj.
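A minimal numpy sketch of this bookkeeping (illustrative only; qtotal_of is a helper we define here, not TeNPy API) verifies that with a shared contracted leg and opposite qconj signs, the total charge is simply additive under contraction:

```python
import numpy as np

def qtotal_of(T, qflats, qconjs):
    """Common total charge of all non-zero entries of a dense tensor (sketch)."""
    totals = set()
    for idx in np.argwhere(T):
        q = sum(np.asarray(qflats[l])[idx[l]] * qconjs[l] for l in range(T.ndim))
        totals.add(tuple(np.atleast_1d(q)))
    assert len(totals) == 1, "entries are not compatible with a single qtotal"
    return np.array(totals.pop())

# all legs carry the same 2*Sz-like charges; the contracted leg b is shared
a = b = c = [[1], [-1]]
A = np.array([[0., 2.], [0., 0.]])   # non-zero only at (ia, ib) = (0, 1)
B = np.array([[0., 0.], [3., 0.]])   # non-zero only at (ib, ic) = (1, 0)

qA = qtotal_of(A, [a, b], [+1, -1])  # A.legs[1] = b.conj() -> qconj = -1
qB = qtotal_of(B, [b, c], [+1, -1])  # B.legs[0] = b with qconj = +1
C = A @ B                            # C_{ia,ic} = sum_ib A_{ia,ib} B_{ib,ic}
qC = qtotal_of(C, [a, c], [+1, -1])  # C keeps the uncontracted legs a and c.conj()
print(qA, qB, qC)                    # qC == qA + qB
```

The contributions of the contracted leg cancel entry by entry, since both tensors see the same charges with opposite qconj.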
As a more impressive example, all 'physical' legs of an MPS can usually share the same LegCharge (up to different qconj, if the local Hilbert space is the same).
This leads to the following convention:
Convention
When an npc algorithm makes tensors which share a bond (either with the input tensors, as for tensordot, or amongst the output tensors, as for SVD), the algorithm is free, but not required, to use the same LegCharge for the tensors sharing the bond, without making a copy. Thus, if you want to modify a LegCharge, you must make a copy first (e.g. by using the methods of LegCharge, which take care of the copying for you).
Assigning charges to non-physical legs¶
From the above physical examples, it should be clear how you assign charges to physical legs. But what about other legs, e.g, the virtual bond of an MPS (or an MPO)?
The charge of these bonds must be derived by using the ‘rule for non-zero entries’, as far as they are not arbitrary. As a concrete example, consider an MPS on just two spin 1/2 sites:
| _____ _____
| x->- | A | ->-y->- | B | ->-z
| ----- -----
| ^ ^
| |p |p
The two legs p are the physical legs and share the same charge, as they both describe the same local Hilbert space. For better distinction, let me label their indices by \(\uparrow=0\) and \(\downarrow=1\). As noted above, we can associate the charges 1 (\(p=\uparrow\)) and -1 (\(p=\downarrow\)), respectively, so we define:
chinfo = npc.ChargeInfo([1], ['2*Sz'])
p = npc.LegCharge.from_qflat(chinfo, [1, -1], qconj=+1)
For the qconj signs, we stick to the convention used in our MPS code and indicated by the arrows in the above 'picture': physical legs are incoming (qconj=+1), and the virtual bonds point from left to right. This is achieved by using [p, x, y.conj()] as legs for A, and [p, y, z.conj()] for B, with the default qconj=+1 for all of p, x, y, z: y.conj() has the same charges as y, but opposite qconj=-1.
The legs x and z of an L=2 MPS are 'dummy' legs with just one index 0. The charge on one of them, as well as the total charge of both A and B, is arbitrary (i.e., a gauge freedom), so we make a simple choice: total charge 0 on both arrays, as well as for \(x=0\): x = npc.LegCharge.from_qflat(chinfo, [0], qconj=+1).
The charges on the bonds y and z then depend on the state the MPS represents. Here, we consider a singlet \(\psi = (|\uparrow \downarrow\rangle - |\downarrow \uparrow\rangle)/\sqrt{2}\) as a simple example. A possible MPS representation is given by:
A[up, :, :]   = [[1/2.**0.5, 0]]        B[up, :, :]   = [[0], [-1]]
A[down, :, :] = [[0, 1/2.**0.5]]        B[down, :, :] = [[1], [0]]
There are two non-zero entries in A, for the indices \((a, x, y) = (\uparrow, 0, 0)\) and \((\downarrow, 0, 1)\).
For \((a, x, y) = (\uparrow, 0, 0)\), we want:
A.qtotal = 0 = p.to_qflat()[up] * p.qconj + x.to_qflat()[0] * x.qconj + y.conj().to_qflat()[0] * y.conj().qconj
= 1 * (+1) + 0 * (+1) + y.conj().to_qflat()[0] * (-1)
This fixes the charge of y=0 to 1. A similar calculation for \((a, x, y) = (\downarrow, 0, 1)\) yields the charge -1 for y=1. We thus have all the charges of the leg y and can define y = npc.LegCharge.from_qflat(chinfo, [1, -1], qconj=+1).
Now take a look at the entries of B. For the non-zero entry \((b, y, z) = (\uparrow, 1, 0)\), we want:
B.qtotal = 0 = p.to_qflat()[up] * p.qconj + y.to_qflat()[1] * y.qconj + z.conj().to_qflat()[0] * z.conj().qconj
= 1 * (+1) + (-1) * (+1) + z.conj().to_qflat()[0] * (-1)
This implies the charge 0 for z=0, thus z = npc.LegCharge.from_qflat(chinfo, [0], qconj=+1).
Finally, note that the rule for \((b, y, z) = (\downarrow, 0, 0)\) is automatically fulfilled!
This is an implication of the fact that the singlet has a well defined value for \(S^z_a + S^z_b\).
For other states without fixed magnetization (e.g., \(|\uparrow \uparrow\rangle + |\downarrow \downarrow\rangle\))
this would not be the case, and we could not use charge conservation.
As an exercise, you can calculate the charge of z in the case that A.qtotal=5, B.qtotal=-1, and charge 2 for x=0. The result is -2.
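The exercise can be checked by solving the same equations step by step. The following plain-Python sketch uses single integers for the charges, since qnumber = 1 here:

```python
# rule for non-zero entries: qtotal = sum over legs of charge * qconj
A_qtotal, B_qtotal = 5, -1
p_up, p_down = 1, -1   # physical '2*Sz' charges
x0 = 2                 # given charge for x = 0

# A has legs [p, x, y.conj()] with qconj = +1, +1, -1:
y0 = p_up + x0 - A_qtotal     # from entry (up, 0, 0):   5 = 1 + 2 - y0
y1 = p_down + x0 - A_qtotal   # from entry (down, 0, 1): 5 = -1 + 2 - y1

# B has legs [p, y, z.conj()] with qconj = +1, +1, -1:
z0 = p_up + y1 - B_qtotal     # from entry (up, 1, 0):  -1 = 1 + y1 - z0
print(y0, y1, z0)             # -2 -4 -2: the charge of z is -2, as claimed

# the remaining non-zero entry (down, 0, 0) of B is consistent automatically:
assert p_down + y0 - z0 == B_qtotal
```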
Note
This section is meant as a pedagogical introduction. In your program, you can use the functions detect_legcharge() (which does exactly what's described above) or detect_qtotal() (if you know all LegCharges, but not qtotal).
Array creation¶
Direct creation¶
Making a new Array requires both the tensor entries (data) and charge data. The default initialization a = Array(...) creates an empty Array, where all entries are zero (equivalent to zeros()). (Non-zero) data can be provided either as a dense np.array to from_ndarray(), or by providing a numpy function such as np.random, np.ones etc. to from_func(). In both cases, the charge data is provided by one ChargeInfo, and a LegCharge instance for each of the legs.
Note
The charge data instances are not copied, in order to allow sharing them between different Arrays. Consequently, you must make copies of the charge data if you manipulate it directly. (However, methods like sort() do that for you.)
Indirect creation by manipulating existing arrays¶
Of course, a new Array can also be created using the charge data from existing Arrays, for example with zeros_like() or by creating a (deep or shallow) copy(). Further, there are many higher-level functions like tensordot() or svd(), which also return new Arrays.
Complete blocking of Charges¶
While the code was designed in such a way that each charge sector has a different charge, the code
should still run correctly if multiple charge sectors (for different qindex) correspond to the same charge.
In this sense Array
can act like a sparse array class to selectively store subblocks.
Algorithms which need a full blocking should state that explicitly in their doc-strings.
(Some functions (like svd and eigh) require complete blocking internally, but if necessary they just work on
a temporary copy returned by as_completely_blocked()
).
If you expect the tensor to be dense subject to charge constraints (as for MPS), it will be most efficient to fully block by charge, so that work is done on large chunks.
However, if you expect the tensor to be sparser than required by charge (as for an MPO), it may be convenient not to completely block, which forces smaller matrices to be stored, and hence many zeroes to be dropped. Nevertheless, the algorithms were not designed with this in mind, so it is not recommended in general. (If you want to use it, run a benchmark to check whether it is really faster!)
If you haven’t created the array yet, you can call sort()
(with bunch=True
)
on each LegCharge
which you want to block.
This sorts by charges and thus induces a permution of the indices, which is also returned as an 1D array perm
.
For consistency, you have to apply this permutation to your flat data as well.
Alternatively, you can simply call sort_legcharge() on an existing Array. It calls sort() internally on the specified legs and performs the necessary permutations directly on (a copy of) self. Still, you should keep in mind that the axes are permuted afterwards.
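The effect of such a sorting permutation on flat data can be sketched with plain numpy, where np.lexsort plays the role of the permutation returned by sort() (illustration only, not TeNPy's implementation):

```python
import numpy as np

qflat = np.array([[1], [-1], [1], [-1]])   # unsorted charges of a leg
data = np.array([10., 20., 30., 40.])      # flat data living on that leg

perm = np.lexsort(qflat.T)   # sort by charges; analogous to the perm from sort()
print(perm.tolist())                  # [1, 3, 0, 2]
print(qflat[perm].ravel().tolist())   # [-1, -1, 1, 1] -> sorted, can now be bunched
print(data[perm].tolist())            # flat data permuted consistently
```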
Internal Storage schema of npc Arrays¶
The actual data of the tensor is stored in _data
. Rather than keeping a single np.array (which would have many zeros in it),
we store only the non-zero sub blocks. So _data
is a python list of np.array’s.
The order in which they are stored in the list is not physically meaningful, and so not guaranteed (more on this later).
So to figure out where the sub block sits in the tensor, we need the _qdata
structure (on top of the LegCharges in legs
).
Consider a rank 3 tensor T, with the first leg like:
legs[0].slices = np.array([0, 1, 4, ...])
legs[0].charges = np.array([[-2], [1], ...])
Each row of charges gives the charges for a charge block of the leg, with the actual indices of the total tensor determined by the slices. The qindex simply enumerates the charge blocks of a leg. Picking a qindex (and thus a charge block) from each leg, we have a subblock of the tensor.
For each (non-zero) subblock of the tensor, we put a (numpy) ndarray entry in the _data
list.
Since each subblock of the tensor is specified by rank qindices,
we put a corresponding entry in _qdata
, which is a 2D array of shape (#stored_blocks, rank)
.
Each row corresponds to a non-zero subblock, and there are rank columns giving the corresponding qindex for each leg.
Example: for a rank 3 tensor we might have:
T._data = [t1, t2, t3, t4, ...]
T._qdata = np.array([[3, 2, 1],
                     [1, 1, 1],
                     [4, 2, 2],
                     [2, 1, 2],
                     ...])
The third subblock is the ndarray t3, with qindices [4, 2, 2] for the three legs.

To find the position of t3 in the actual tensor, you can use get_slice(): T.legs[0].get_slice(4), T.legs[1].get_slice(2), T.legs[2].get_slice(2). The function leg.get_slice(qi) simply returns slice(leg.slices[qi], leg.slices[qi+1]).

To find the charges of t3, we can use get_charge(): T.legs[0].get_charge(4), T.legs[1].get_charge(2), T.legs[2].get_charge(2). The function leg.get_charge(qi) simply returns leg.charges[qi]*leg.qconj.
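In pure Python, the two lookups amount to the following sketch (LegSketch is a minimal stand-in we define for illustration, not the actual LegCharge class):

```python
import numpy as np

class LegSketch:
    """Minimal stand-in for the qind data of a LegCharge (illustration only)."""
    def __init__(self, slices, charges, qconj=1):
        self.slices = np.asarray(slices)
        self.charges = np.asarray(charges)
        self.qconj = qconj

    def get_slice(self, qi):
        # indices of the total tensor covered by charge block `qi`
        return slice(int(self.slices[qi]), int(self.slices[qi + 1]))

    def get_charge(self, qi):
        # physical charge of block `qi`, including the qconj sign
        return self.charges[qi] * self.qconj

leg = LegSketch([0, 1, 4, 9], [[-2], [1], [3]], qconj=-1)
print(leg.get_slice(1))            # slice(1, 4)
print(leg.get_charge(1).tolist())  # [-1] == [1] * qconj
```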
Note
Outside of np_conserved, you should use the API to access the entries. If you really need to iterate over all blocks of an Array T, try for (block, blockslices, charges, qindices) in T: do_something().
The order in which the blocks are stored in _data/_qdata is arbitrary (although of course _data and _qdata must be in correspondence). However, for many purposes it is useful to sort them according to some convention. So we include a flag _qdata_sorted in the array. If sorted (with isort_qdata()), the _qdata example above becomes:
_qdata = np.array([[1, 1, 1],
                   [3, 2, 1],
                   [2, 1, 2],
                   [4, 2, 2],
                   ...])
Note that np.lexsort chooses the right-most column to be the dominant key, a convention we follow throughout.
If _qdata_sorted == True, _qdata and _data are guaranteed to be lexsorted. If _qdata_sorted == False, there is no guarantee. If an algorithm modifies _qdata, it must set _qdata_sorted = False (unless it guarantees that it is still sorted). The routine sort_qdata() brings the data into sorted form.
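The sorting convention can be verified on the example above with plain numpy; np.lexsort applied to the transposed array makes the right-most column the dominant key:

```python
import numpy as np

qdata = np.array([[3, 2, 1],
                  [1, 1, 1],
                  [4, 2, 2],
                  [2, 1, 2]])
data = ['t1', 't2', 't3', 't4']   # stand-ins for the block ndarrays

# np.lexsort of the transposed array: the right-most column is the dominant key
perm = np.lexsort(qdata.T)
qdata_sorted = qdata[perm]
data_sorted = [data[p] for p in perm]
print(qdata_sorted.tolist())   # [[1, 1, 1], [3, 2, 1], [2, 1, 2], [4, 2, 2]]
print(data_sorted)             # ['t2', 't1', 't4', 't3'] -- kept in correspondence
```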
See also¶
- The module tenpy.linalg.np_conserved should contain all the API needed from the point of view of the algorithms. It contains the fundamental Array class and functions for working with Arrays (creating and manipulating).
- The module tenpy.linalg.charges contains implementations for the charge structure, for example the classes ChargeInfo, LegCharge, and LegPipe. As noted above, the 'public' API is imported to (and accessible from) np_conserved.
A full example code for spin-1/2¶
Below follows a full example demonstrating the creation and contraction of Arrays. (It’s the file a_np_conserved.py in the examples folder of the tenpy source.)
"""An example code to demonstrate the usage of :class:`~tenpy.linalg.np_conserved.Array`.
This example includes the following steps:
1) create Arrays for a Neel MPS
2) create an MPO representing the nearest-neighbour AFM Heisenberg Hamiltonian
3) define 'environments' left and right
4) contract MPS and MPO to calculate the energy
5) extract two-site hamiltonian ``H2`` from the MPO
6) calculate ``exp(-1.j*dt*H2)`` by diagonalization of H2
7) apply ``exp(H2)`` to two sites of the MPS and truncate with svd
Note that this example uses only np_conserved, but no other modules.
Compare it to the example `b_mps.py`,
which does the same steps using a few predefined classes like MPS and MPO.
"""
# Copyright 2018-2021 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
import numpy as np
# model parameters
Jxx, Jz = 1., 1.
L = 20
dt = 0.1
cutoff = 1.e-10
print("Jxx={Jxx}, Jz={Jz}, L={L:d}".format(Jxx=Jxx, Jz=Jz, L=L))
print("1) create Arrays for a Neel MPS")
# vL ->--B-->- vR
# |
# ^
# |
# p
# create a ChargeInfo to specify the nature of the charge
chinfo = npc.ChargeInfo([1], ['2*Sz']) # the second argument is just a descriptive name
# create LegCharges on physical leg and even/odd bonds
p_leg = npc.LegCharge.from_qflat(chinfo, [[1], [-1]]) # charges for up, down
v_leg_even = npc.LegCharge.from_qflat(chinfo, [[0]])
v_leg_odd = npc.LegCharge.from_qflat(chinfo, [[1]])
B_even = npc.zeros([v_leg_even, v_leg_odd.conj(), p_leg],
                   labels=['vL', 'vR', 'p'])  # virtual left/right, physical
B_odd = npc.zeros([v_leg_odd, v_leg_even.conj(), p_leg], labels=['vL', 'vR', 'p'])
B_even[0, 0, 0] = 1. # up
B_odd[0, 0, 1] = 1. # down
Bs = [B_even, B_odd] * (L // 2) + [B_even] * (L % 2) # (right-canonical)
Ss = [np.ones(1)] * L # Ss[i] are singular values between Bs[i-1] and Bs[i]
# Side remark:
# An MPS is expected to have non-zero entries everywhere compatible with the charges.
# In general, we recommend to use `sort_legcharge` (or `as_completely_blocked`)
# to ensure complete blocking. (But the code will also work, if you don't do it.)
# The drawback is that this might introduce permutations in the indices of single legs,
# which you have to keep in mind when converting dense numpy arrays to and from npc.Arrays.
print("2) create an MPO representing the AFM Heisenberg Hamiltonian")
# p*
# |
# ^
# |
# wL ->--W-->- wR
# |
# ^
# |
# p
# create physical spin-1/2 operators Sz, S+, S-
Sz = npc.Array.from_ndarray([[0.5, 0.], [0., -0.5]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Sp = npc.Array.from_ndarray([[0., 1.], [0., 0.]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Sm = npc.Array.from_ndarray([[0., 0.], [1., 0.]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Id = npc.eye_like(Sz, labels=Sz.get_leg_labels()) # identity
mpo_leg = npc.LegCharge.from_qflat(chinfo, [[0], [2], [-2], [0], [0]])
W_grid = [[Id, Sp, Sm, Sz, None ],
[None, None, None, None, 0.5 * Jxx * Sm],
[None, None, None, None, 0.5 * Jxx * Sp],
[None, None, None, None, Jz * Sz ],
[None, None, None, None, Id ]] # yapf:disable
W = npc.grid_outer(W_grid, [mpo_leg, mpo_leg.conj()], grid_labels=['wL', 'wR'])
# wL/wR = virtual left/right of the MPO
Ws = [W] * L
print("3) define 'environments' left and right")
# .---->- vR vL ->----.
# | |
# envL->- wR wL ->-envR
# | |
# .---->- vR* vL*->----.
envL = npc.zeros([W.get_leg('wL').conj(), Bs[0].get_leg('vL').conj(), Bs[0].get_leg('vL')],
                 labels=['wR', 'vR', 'vR*'])
envL[0, :, :] = npc.diag(1., envL.legs[1])
envR = npc.zeros([W.get_leg('wR').conj(), Bs[-1].get_leg('vR').conj(), Bs[-1].get_leg('vR')],
                 labels=['wL', 'vL', 'vL*'])
envR[-1, :, :] = npc.diag(1., envR.legs[1])
print("4) contract MPS and MPO to calculate the energy <psi|H|psi>")
contr = envL
for i in range(L):
    # contr labels: wR, vR, vR*
    contr = npc.tensordot(contr, Bs[i], axes=('vR', 'vL'))
    # wR, vR*, vR, p
    contr = npc.tensordot(contr, Ws[i], axes=(['p', 'wR'], ['p*', 'wL']))
    # vR*, vR, wR, p
    contr = npc.tensordot(contr, Bs[i].conj(), axes=(['p', 'vR*'], ['p*', 'vL*']))
    # vR, wR, vR*
    # note that the order of the legs changed, but that's no problem with labels:
    # the arrays are automatically transposed as necessary
E = npc.inner(contr, envR, axes=(['vR', 'wR', 'vR*'], ['vL', 'wL', 'vL*']))
print("E =", E)
print("5) calculate two-site hamiltonian ``H2`` from the MPO")
# label left, right physical legs with p, q
W0 = W.replace_labels(['p', 'p*'], ['p0', 'p0*'])
W1 = W.replace_labels(['p', 'p*'], ['p1', 'p1*'])
H2 = npc.tensordot(W0, W1, axes=('wR', 'wL')).itranspose(['wL', 'wR', 'p0', 'p1', 'p0*', 'p1*'])
H2 = H2[0, -1] # (If H has single-site terms, it's not that simple anymore)
print("H2 labels:", H2.get_leg_labels())
print("6) calculate exp(H2) by diagonalization of H2")
# diagonalization requires to view H2 as a matrix
H2 = H2.combine_legs([('p0', 'p1'), ('p0*', 'p1*')], qconj=[+1, -1])
print("labels after combine_legs:", H2.get_leg_labels())
E2, U2 = npc.eigh(H2)
print("Eigenvalues of H2:", E2)
U_expE2 = U2.scale_axis(np.exp(-1.j * dt * E2), axis=1)  # scale_axis ~= apply a diagonal matrix
exp_H2 = npc.tensordot(U_expE2, U2.conj(), axes=(1, 1))
exp_H2.iset_leg_labels(H2.get_leg_labels())
exp_H2 = exp_H2.split_legs() # by default split all legs which are `LegPipe`
# (this restores the original labels ['p0', 'p1', 'p0*', 'p1*'] of `H2` in `exp_H2`)
print("7) apply exp(H2) to even/odd bonds of the MPS and truncate with svd")
# (this implements one time step of first order TEBD)
for even_odd in [0, 1]:
    for i in range(even_odd, L - 1, 2):
        B_L = Bs[i].scale_axis(Ss[i], 'vL').ireplace_label('p', 'p0')
        B_R = Bs[i + 1].replace_label('p', 'p1')
        theta = npc.tensordot(B_L, B_R, axes=('vR', 'vL'))
        theta = npc.tensordot(exp_H2, theta, axes=(['p0*', 'p1*'], ['p0', 'p1']))
        # view as matrix for SVD
        theta = theta.combine_legs([('vL', 'p0'), ('p1', 'vR')], new_axes=[0, 1], qconj=[+1, -1])
        # now theta has labels '(vL.p0)', '(p1.vR)'
        U, S, V = npc.svd(theta, inner_labels=['vR', 'vL'])
        # truncate
        keep = S > cutoff
        S = S[keep]
        invsq = np.linalg.norm(S)
        Ss[i + 1] = S / invsq
        U = U.iscale_axis(S / invsq, 'vR')
        Bs[i] = U.split_legs('(vL.p0)').iscale_axis(Ss[i]**(-1), 'vL').ireplace_label('p0', 'p')
        Bs[i + 1] = V.split_legs('(p1.vR)').ireplace_label('p1', 'p')
print("finished")
Models¶
What is a model?¶
Abstractly, a model stands for some physical (quantum) system to be described. For tensor network algorithms, the model is usually specified as a Hamiltonian written in terms of second quantization. For example, let us consider a spin-1/2 Heisenberg chain described by the Hamiltonian \(H = J \sum_{i} \left( \tfrac{1}{2} (S^{+}_i S^{-}_{i+1} + S^{-}_i S^{+}_{i+1}) + S^z_i S^z_{i+1} \right) - h \sum_{i} S^z_i\).
Note that a few things are defined more or less implicitly.
The local Hilbert space: it consists of Spin-1/2 degrees of freedom with the usual spin-1/2 operators \(S^x, S^y, S^z\).
The geometric (lattice) structure: above, we spoke of a 1D “chain”.
The boundary conditions: do we have open or periodic boundary conditions? The “chain” suggests open boundaries, which are in most cases preferable for MPS-based methods.
The range of i: How many sites do we consider (for a 2D system: in each direction)?
Obviously, these things need to be specified in TeNPy in one way or another, if we want to define a model.
Ultimately, our goal is to run some algorithm. However, different algorithms require the model and Hamiltonian to be specified in different forms. We have one class for each such required form. For example, dmrg requires an MPOModel, which contains the Hamiltonian written as an MPO.
So a new model class suitable for DMRG should have this general structure:
class MyNewModel(MPOModel):
    def __init__(self, model_params):
        lattice = somehow_generate_lattice(model_params)
        H_MPO = somehow_generate_MPO(lattice, model_params)
        # initialize MPOModel
        MPOModel.__init__(self, lattice, H_MPO)
On the other hand, if we want to evolve a state with tebd
we need a NearestNeighborModel
, in which the Hamiltonian is written in terms of
two-site bond-terms to allow a Suzuki-Trotter decomposition of the time-evolution operator:
class MyNewModel2(NearestNeighborModel):
    """General structure for a model suitable for TEBD."""
    def __init__(self, model_params):
        lattice = somehow_generate_lattice(model_params)
        H_bond = somehow_generate_H_bond(lattice, model_params)
        # initialize NearestNeighborModel
        NearestNeighborModel.__init__(self, lattice, H_bond)
Of course, the difficult part in these examples is to generate the H_MPO
and H_bond
in the required form.
If you want to write it down by hand, you can of course do that.
But it can be quite tedious to write every model multiple times, just because we need different representations of the same Hamiltonian.
Luckily, there is a way out in TeNPy: the CouplingModel. Before we describe this class, let's discuss the background of the Site and Lattice classes.
The Hilbert space¶
The local Hilbert space is represented by a Site
(read its doc-string!).
In particular, the Site contains the local LegCharge, and hence the meaning of each basis state needs to be defined. Besides that, the site contains the local operators, which give the real meaning to the local basis. Having the local operators in the site is very convenient, because it makes them available by name, for example when you want to calculate expectation values.
The most common sites (e.g. for spins, spin-less or spin-full fermions, or bosons) are predefined
in the module tenpy.networks.site
, but if necessary you can easily extend them
by adding further local operators or completely write your own subclasses of Site
.
The full Hilbert space is a tensor product of the local Hilbert space on each site.
Note
The LegCharge of all involved sites need to have a common ChargeInfo in order to allow the contraction of tensors acting on the various sites. This can be ensured with the function set_common_charges(). An example where set_common_charges() is needed would be a coupling of different types of sites, e.g., when a tight-binding chain of fermions is coupled to some local spin degrees of freedom. Another use case of this function would be a model with a \(U(1)\) symmetry involving only half the sites, say \(\sum_{i=0}^{L/2} n_{2i}\).
Note
If you don’t know about the charges and np_conserved yet, but want to get started with models right away,
you can set conserve=None
in the existing sites or use
leg = tenpy.linalg.np_conserved.LegCharge.from_trivial(d)
for an implementation of your custom site,
where d is the dimension of the local Hilbert space.
Alternatively, you can find some introduction to the charges in the Charge conservation with np_conserved.
The geometry : lattice class¶
The geometry is usually given by some kind of lattice structure, i.e., how the sites are arranged, e.g. implicitly with the sum over nearest neighbours \(\sum_{<i, j>}\).
In TeNPy, this is specified by a Lattice class, which contains a unit cell of a few Sites which are shifted periodically by its basis vectors to form a regular lattice. Again, we have pre-defined some basic lattices like a Chain, two chains coupled as a Ladder, or 2D lattices like the Square, Honeycomb, and Kagome lattices; but you are also free to define your own generalizations.
MPS-based algorithms like DMRG always work on purely 1D systems. Even if our model “lives” on a 2D lattice, these algorithms require us to map it onto a 1D chain (probably at the cost of longer-range interactions). This mapping is also done by the lattice, by defining the order (order) of the sites.
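As an illustration of such an ordering (a sketch with plain Python, not TeNPy's actual implementation, which supports several orderings), a row-major 'C-style' order of a small square lattice maps 2D coordinates to 1D MPS positions like this:

```python
# 2 x 3 square lattice with a single-site unit cell, 'C-style' (row-major) order
Lx, Ly = 2, 3

def mps_index(x, y):
    """1D MPS position of lattice site (x, y); y runs fastest."""
    return x * Ly + y

order = [(x, y) for x in range(Lx) for y in range(Ly)]
print([mps_index(x, y) for x, y in order])   # [0, 1, 2, 3, 4, 5]
# neighbors in y stay adjacent on the chain, neighbors in x are Ly sites apart:
print(mps_index(1, 0) - mps_index(0, 0))     # 3 == Ly
```

This makes the origin of the longer-range interactions explicit: a nearest-neighbor coupling in the x-direction becomes a coupling over Ly sites of the chain.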
Note
Further details on the lattice geometry can be found in Details on the lattice geometry.
The CouplingModel: general structure¶
The CouplingModel provides a general, quite abstract way to specify a Hamiltonian of couplings on a given lattice. Once initialized, its methods add_onsite() and add_coupling() allow adding onsite and coupling terms repeated over the different unit cells of the lattice. In that way, it basically allows a straightforward translation of the Hamiltonian given as a math formula \(H = \sum_{i} A_i B_{i+dx} + ...\) with onsite operators A, B, ... into a model class.
The general structure for a new model based on the CouplingModel
is then:
class MyNewModel3(CouplingModel, MPOModel, NearestNeighborModel):
    def __init__(self, ...):
        ...  # follow the basic steps explained below
In the initialization method __init__(self, ...)
of this class you can then follow these basic steps:
0) Read out the parameters.

1) Given the parameters, determine the charges to be conserved. Initialize the LegCharge of the local sites accordingly.

2) Define (additional) local operators needed.

3) Initialize the needed Site.

   Note: Using pre-defined sites like the SpinHalfSite is recommended and can replace steps 1-3.

4) Initialize the lattice (or if you got the lattice as a parameter, set the sites in the unit cell).

5) Initialize the CouplingModel with CouplingModel.__init__(self, lat).

6) Use add_onsite() and add_coupling() to add all terms of the Hamiltonian. Here, the pairs of the lattice can come in handy, for example:

   self.add_onsite(-np.asarray(h), 0, 'Sz')
   for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
       self.add_coupling(0.5 * J, u1, 'Sp', u2, 'Sm', dx, plus_hc=True)
       self.add_coupling(J, u1, 'Sz', u2, 'Sz', dx)

   Note: The method add_coupling() adds the coupling only in one direction, i.e. not switching i and j in a \(\sum_{\langle i, j\rangle}\). If you have terms like \(c^\dagger_i c_j\) or \(S^{+}_i S^{-}_j\) in your Hamiltonian, you need to add them in both directions to get a Hermitian Hamiltonian! The easiest way to do that is to use the plus_hc option of add_onsite() and add_coupling(), as we did for the \(J/2 (S^{+}_i S^{-}_j + h.c.)\) terms of the Heisenberg model above. Alternatively, you can add the hermitian conjugate terms explicitly; see the examples in add_coupling() for more details.

   Note that the strength arguments of these functions can be (numpy) arrays for site-dependent couplings. If you need to add or multiply some parameters of the model for the strength of certain terms, it is recommended to use np.asarray beforehand; in that way, lists will also work fine.

7) Finally, if you derived from the MPOModel, you can call calc_H_MPO() to build the MPO and use it for the initialization as MPOModel.__init__(self, lat, self.calc_H_MPO()).

8) Similarly, if you derived from the NearestNeighborModel, you can call calc_H_bond() to initialize it as NearestNeighborModel.__init__(self, lat, self.calc_H_bond()). Calling self.calc_H_bond() will fail for models which are not nearest-neighbor (with respect to the MPS ordering), so you should only subclass the NearestNeighborModel if the lattice is a simple Chain.
Note
The method add_coupling() works only for terms involving operators on two sites. If you have couplings involving more than two sites, you can use add_multi_coupling() instead. A prototypical example is the exactly solvable ToricCode.
The code of the module tenpy.models.xxz_chain is included below as an illustrative example of how to implement a model. The implementation of the XXZChain directly follows the steps outlined above. The XXZChain2 implements the very same model, but based on the CouplingMPOModel explained in the next section.
"""Prototypical example of a 1D quantum model: the spin-1/2 XXZ chain.
The XXZ chain is contained in the more general :class:`~tenpy.models.spins.SpinChain`; the idea of
this module is more to serve as a pedagogical example for a model.
"""
# Copyright 2018-2021 TeNPy Developers, GNU GPLv3
import numpy as np
from .lattice import Site, Chain
from .model import CouplingModel, NearestNeighborModel, MPOModel, CouplingMPOModel
from ..linalg import np_conserved as npc
from ..tools.params import asConfig
from ..networks.site import SpinHalfSite # if you want to use the predefined site
__all__ = ['XXZChain', 'XXZChain2']
class XXZChain(CouplingModel, NearestNeighborModel, MPOModel):
    r"""Spin-1/2 XXZ chain with Sz conservation.

    The Hamiltonian reads:

    .. math ::
        H = \sum_i \mathtt{Jxx}/2 (S^{+}_i S^{-}_{i+1} + S^{-}_i S^{+}_{i+1})
                 + \mathtt{Jz} S^z_i S^z_{i+1} \\
            - \sum_i \mathtt{hz} S^z_i

    All parameters are collected in a single dictionary `model_params`, which
    is turned into a :class:`~tenpy.tools.params.Config` object.

    Parameters
    ----------
    model_params : :class:`~tenpy.tools.params.Config`
        Parameters for the model. See :cfg:config:`XXZChain` below.

    Options
    -------
    .. cfg:config :: XXZChain
        :include: CouplingMPOModel

        L : int
            Length of the chain.
        Jxx, Jz, hz : float | array
            Couplings as defined for the Hamiltonian above.
        bc_MPS : {'finite' | 'infinite'}
            MPS boundary conditions. Coupling boundary conditions are chosen appropriately.
    """

    def __init__(self, model_params):
        # 0) read out/set default parameters
        model_params = asConfig(model_params, "XXZChain")
        L = model_params.get('L', 2)
        Jxx = model_params.get('Jxx', 1.)
        Jz = model_params.get('Jz', 1.)
        hz = model_params.get('hz', 0.)
        bc_MPS = model_params.get('bc_MPS', 'finite')
        # 1-3):
        USE_PREDEFINED_SITE = False
        if not USE_PREDEFINED_SITE:
            # 1) charges of the physical leg. The only time that we actually define charges!
            leg = npc.LegCharge.from_qflat(npc.ChargeInfo([1], ['2*Sz']), [1, -1])
            # 2) onsite operators
            Sp = [[0., 1.], [0., 0.]]
            Sm = [[0., 0.], [1., 0.]]
            Sz = [[0.5, 0.], [0., -0.5]]
            # (Can't define Sx and Sy as onsite operators: they are incompatible with the Sz charges.)
            # 3) local physical site
            site = Site(leg, ['up', 'down'], Sp=Sp, Sm=Sm, Sz=Sz)
        else:
            # there is a predefined site for spin-1/2 in TeNPy, so we can just use it,
            # replacing steps 1-3)
            site = SpinHalfSite(conserve='Sz')
        # 4) lattice
        bc = 'periodic' if bc_MPS == 'infinite' else 'open'
        lat = Chain(L, site, bc=bc, bc_MPS=bc_MPS)
        # 5) initialize CouplingModel
        CouplingModel.__init__(self, lat)
        # 6) add terms of the Hamiltonian
        # (u is always 0 as we have only one site in the unit cell)
        self.add_onsite(-hz, 0, 'Sz')
        self.add_coupling(Jxx * 0.5, 0, 'Sp', 0, 'Sm', 1, plus_hc=True)
        # `plus_hc=True` adds the h.c. term;
        # see also the examples in tenpy.models.model.CouplingModel.add_coupling
        self.add_coupling(Jz, 0, 'Sz', 0, 'Sz', 1)
        # 7) initialize H_MPO
        MPOModel.__init__(self, lat, self.calc_H_MPO())
        # 8) initialize H_bond (the order of 7/8 doesn't matter)
        NearestNeighborModel.__init__(self, lat, self.calc_H_bond())
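The onsite matrices defined in step 2) can be sanity-checked with plain numpy before building the site. A minimal check (independent of TeNPy) that they obey the spin-1/2 algebra :math:`[S^+, S^-] = 2 S^z` and :math:`(S^z)^2 = 1/4`:

```python
import numpy as np

# the same matrices as in step 2) above
Sp = np.array([[0., 1.], [0., 0.]])    # S^+ raises 'down' -> 'up'
Sm = np.array([[0., 0.], [1., 0.]])    # S^- = (S^+)^T lowers 'up' -> 'down'
Sz = np.array([[0.5, 0.], [0., -0.5]])

# spin-1/2 algebra: [S^+, S^-] = 2 S^z
assert np.allclose(Sp @ Sm - Sm @ Sp, 2 * Sz)
# S^z has eigenvalues +-1/2, hence (S^z)^2 = 1/4 * identity
assert np.allclose(Sz @ Sz, 0.25 * np.eye(2))
```

Such checks are cheap and catch sign or transposition mistakes before they turn into subtle physics bugs.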
class XXZChain2(CouplingMPOModel, NearestNeighborModel):
    """Another implementation of the spin-1/2 XXZ chain with Sz conservation.

    This implementation takes the same parameters as the :class:`XXZChain`, but is implemented
    based on the :class:`~tenpy.models.model.CouplingMPOModel`.

    Parameters
    ----------
    model_params : dict | :class:`~tenpy.tools.params.Config`
        See :cfg:config:`XXZChain`
    """
    default_lattice = "Chain"
    force_default_lattice = True

    def init_sites(self, model_params):
        return SpinHalfSite(conserve='Sz')  # use the predefined site

    def init_terms(self, model_params):
        # read out parameters
        Jxx = model_params.get('Jxx', 1.)
        Jz = model_params.get('Jz', 1.)
        hz = model_params.get('hz', 0.)
        # add terms
        for u in range(len(self.lat.unit_cell)):
            self.add_onsite(-hz, u, 'Sz')
        for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
            self.add_coupling(Jxx * 0.5, u1, 'Sp', u2, 'Sm', dx, plus_hc=True)
            self.add_coupling(Jz, u1, 'Sz', u2, 'Sz', dx)
The easiest way: the CouplingMPOModel¶
Since many of the basic steps above are always the same, we don’t need to repeat them all the time.
So we have yet another class helping to structure the initialization of models: the CouplingMPOModel.
The general structure of this class is as follows:
class CouplingMPOModel(CouplingModel, MPOModel):
    default_lattice = "Chain"

    def __init__(self, model_param):
        # ... follows the basic steps 1-8 using the methods
        lat = self.init_lattice(model_param)  # for step 4
        # ...
        self.init_terms(model_param)  # for step 6
        # ...

    def init_sites(self, model_param):
        # You should overwrite this in most cases to ensure
        # getting the site(s) and charge conservation you want
        site = SpinSite(...)  # or FermionSite, BosonSite, ...
        return site  # (or tuple of sites)

    def init_lattice(self, model_param):
        sites = self.init_sites(model_param)  # for steps 1-3
        # ... then read out the class attribute `default_lattice`,
        # initialize an arbitrary pre-defined lattice
        # using model_params['lattice'],
        # and ensure it's the default lattice if the class attribute
        # `force_default_lattice` is True.

    def init_terms(self, model_param):
        # does nothing by default; you should overwrite this
        pass
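The control flow sketched above is a plain template-method pattern. Stripped of all TeNPy specifics, it can be sketched in a few lines of standalone Python; every name here other than init_sites/init_lattice/init_terms is made up for illustration:

```python
class ToyCouplingMPOModel:
    """Toy skeleton mimicking CouplingMPOModel's __init__ control flow."""
    default_lattice = "Chain"

    def __init__(self, model_params):
        self.sites = self.init_sites(model_params)      # steps 1-3
        self.lattice = self.init_lattice(model_params)  # step 4
        self.terms = []
        self.init_terms(model_params)                   # step 6
        # steps 5, 7, 8 (CouplingModel/MPOModel init) would follow here

    def init_sites(self, model_params):
        raise NotImplementedError("subclasses should define the local sites")

    def init_lattice(self, model_params):
        # read the lattice from the parameters, falling back to the class default
        return model_params.get('lattice', self.default_lattice)

    def init_terms(self, model_params):
        pass  # does nothing by default; subclasses add their couplings


class ToyXXZ(ToyCouplingMPOModel):
    def init_sites(self, model_params):
        return "SpinHalfSite(conserve='Sz')"  # placeholder string, not a real Site

    def init_terms(self, model_params):
        Jz = model_params.get('Jz', 1.)
        self.terms.append(('Sz Sz', Jz))


m = ToyXXZ({'Jz': 0.5})
assert m.lattice == "Chain"
assert m.terms == [('Sz Sz', 0.5)]
```

The point is only the call order: the base `__init__` drives the steps, subclasses fill in the hooks.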
The XXZChain2 included above illustrates how it can be used.
You need to implement steps 1-3) by overwriting the method init_sites().
Step 4) is performed in the method init_lattice(), which can initialize arbitrary 1D or 2D
lattices; by default a simple 1D chain.
If your model only works for specific lattices, you can overwrite this method in your own class.
Step 6) should be done by overwriting the method init_terms().
Steps 5, 7, 8 and calls to the init_… methods for the other steps are done automatically if you just call
CouplingMPOModel.__init__(self, model_param).
The XXZChain and XXZChain2 work only with the Chain as lattice, since they are derived from the NearestNeighborModel.
This allows using them for TEBD in 1D (yeah!), but we can’t get the MPO for DMRG on (for example) a Square
lattice cylinder - although it’s intuitively clear what the Hamiltonian there should be: just put the nearest-neighbor
coupling on each bond of the 2D lattice.
It’s not possible to generalize a NearestNeighborModel to an arbitrary lattice, where the couplings are
no longer nearest neighbors in the MPS sense, but we can go the other way around:
first write the model on an arbitrary 2D lattice and then restrict it to a 1D chain to make it a NearestNeighborModel.
Let me illustrate this with another standard example model: the transverse field Ising model, implemented in the module
tenpy.models.tf_ising included below.
The TFIModel works for arbitrary 1D or 2D lattices.
The TFIChain then takes the exact same model and makes it a NearestNeighborModel,
which only works for the 1D chain.
"""Prototypical example of a quantum model: the transverse field Ising model.
Like the :class:`~tenpy.models.xxz_chain.XXZChain`, the transverse field Ising chain
:class:`TFIChain` is contained in the more general :class:`~tenpy.models.spins.SpinChain`;
the idea is more to serve as a pedagogical example for a 'model'.
We choose the field along z to allow to conserve the parity, if desired.
"""
# Copyright 2018-2021 TeNPy Developers, GNU GPLv3
import numpy as np
from .model import CouplingMPOModel, NearestNeighborModel
from .lattice import Chain
from ..tools.params import asConfig
from ..networks.site import SpinHalfSite
__all__ = ['TFIModel', 'TFIChain']
class TFIModel(CouplingMPOModel):
    r"""Transverse field Ising model on a general lattice.

    The Hamiltonian reads:

    .. math ::
        H = - \sum_{\langle i,j\rangle, i < j} \mathtt{J} \sigma^x_i \sigma^x_{j}
            - \sum_{i} \mathtt{g} \sigma^z_i

    Here, :math:`\langle i,j \rangle, i < j` denotes nearest neighbor pairs, each pair appearing
    exactly once.
    All parameters are collected in a single dictionary `model_params`, which
    is turned into a :class:`~tenpy.tools.params.Config` object.

    Parameters
    ----------
    model_params : :class:`~tenpy.tools.params.Config`
        Parameters for the model. See :cfg:config:`TFIModel` below.

    Options
    -------
    .. cfg:config :: TFIModel
        :include: CouplingMPOModel

        conserve : None | 'parity'
            What should be conserved. See :class:`~tenpy.networks.site.SpinHalfSite`.
        J, g : float | array
            Couplings as defined for the Hamiltonian above.
    """

    def init_sites(self, model_params):
        conserve = model_params.get('conserve', 'parity')
        assert conserve != 'Sz'
        if conserve == 'best':
            conserve = 'parity'
            self.logger.info("%s: set conserve to %s", self.name, conserve)
        site = SpinHalfSite(conserve=conserve)
        return site

    def init_terms(self, model_params):
        J = np.asarray(model_params.get('J', 1.))
        g = np.asarray(model_params.get('g', 1.))
        for u in range(len(self.lat.unit_cell)):
            self.add_onsite(-g, u, 'Sigmaz')
        for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
            self.add_coupling(-J, u1, 'Sigmax', u2, 'Sigmax', dx)
        # done


class TFIChain(TFIModel, NearestNeighborModel):
    """The :class:`TFIModel` on a Chain, suitable for TEBD.

    See the :class:`TFIModel` for the documentation of parameters.
    """
    default_lattice = Chain
    force_default_lattice = True
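For a quick cross-check of the sign convention in the Hamiltonian above, one can diagonalize the two-site TFI Hamiltonian with plain numpy (independent of TeNPy): at g = 0 the ground-state energy is -J, and at J = 0 it is -2g.

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
id2 = np.eye(2)

def tfi_two_site(J, g):
    """H = -J sigma^x_1 sigma^x_2 - g (sigma^z_1 + sigma^z_2) as a 4x4 matrix."""
    return (-J * np.kron(sx, sx)
            - g * (np.kron(sz, id2) + np.kron(id2, sz)))

# at g = 0 the sigma^x sigma^x bond gives ground-state energy -J
assert np.isclose(np.linalg.eigvalsh(tfi_two_site(J=1., g=0.)).min(), -1.)
# at J = 0 the field term gives ground-state energy -2 g
assert np.isclose(np.linalg.eigvalsh(tfi_two_site(J=0., g=2.)).min(), -4.)
```

This kind of exact-diagonalization check on a tiny system is a good habit before running DMRG on the full model.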
Automation of Hermitian conjugation¶
As most physical Hamiltonians are Hermitian, these Hamiltonians are fully determined when only half of the mutually conjugate terms are defined. For example, a simple hopping Hamiltonian
\(H = \sum_{\langle i,j \rangle, i < j} \left( c^{\dagger}_i c_j + \mathrm{h.c.} \right)\)
is fully determined by the term \(c^{\dagger}_i c_j\) if we demand that Hermitian conjugates are included automatically.
In TeNPy, whenever you add a coupling using add_onsite(), add_coupling(), or add_multi_coupling(),
you can use the optional argument plus_hc to automatically create and add the Hermitian conjugate of that coupling term - as shown above.
Additionally, in an MPO, explicitly adding both a non-Hermitian term and its conjugate increases the bond dimension of the MPO, which increases the memory requirements of the MPOEnvironment.
Instead of adding the conjugate terms explicitly, you can set the flag explicit_plus_hc in the CouplingMPOModel parameters, which will ensure two things:
1. The model and the MPO will only store half the terms of each Hermitian conjugate pair added, but the flag explicit_plus_hc indicates that they represent self + h.c.. In the example above, only the term \(c^{\dagger}_i c_j\) would be saved.
2. At runtime during DMRG, the Hermitian conjugate of the (now non-Hermitian) MPO will be computed and applied along with the MPO, so that the effective Hamiltonian is still Hermitian.
Note
The model flag explicit_plus_hc should be used in conjunction with the flag plus_hc in add_coupling() or add_multi_coupling().
If plus_hc is False while explicit_plus_hc is True, the MPO bond dimension will not be reduced, but you will still pay the additional computational cost of computing the Hermitian conjugate at runtime.
Thus, we end up with several use cases, depending on your preferences.
Consider the FermionModel.
If you do not care about the MPO bond dimension and want to add Hermitian conjugate terms manually, you would set model_par['explicit_plus_hc'] = False and write:
self.add_coupling(-J, u1, 'Cd', u2, 'C', dx)
self.add_coupling(np.conj(-J), u2, 'Cd', u1, 'C', -dx)
If you wanted to save the trouble of the extra line of code (but still did not care about MPO bond dimension), you would keep the model_par, but instead write:
self.add_coupling(-J, u1, 'Cd', u2, 'C', dx, plus_hc=True)
Finally, if you wanted a reduction in MPO bond dimension, you would need to set model_par['explicit_plus_hc'] = True, and write:
self.add_coupling(-J, u1, 'Cd', u2, 'C', dx, plus_hc=True)
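What plus_hc guarantees can be illustrated on the single-particle level with plain numpy: adding the conjugate hopping term makes the coefficient matrix Hermitian, whichever of the two conventions above you use. This is a toy demonstration, not TeNPy code:

```python
import numpy as np

L, J = 4, 1.0 - 0.5j   # a complex hopping amplitude on a 4-site chain
H = np.zeros((L, L), dtype=complex)
for i in range(L - 1):
    H[i, i + 1] += -J            # term  -J c^dagger_i c_{i+1}
    H[i + 1, i] += np.conj(-J)   # its h.c., as added by plus_hc=True

# the resulting single-particle Hamiltonian is Hermitian:
assert np.allclose(H, H.conj().T)
```

Omitting the second line inside the loop would leave H non-Hermitian, which is exactly the situation the explicit_plus_hc flag handles at the MPO level.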
Non-uniform terms and couplings¶
The CouplingModel methods add_onsite(), add_coupling(), and add_multi_coupling() add a sum over a “coupling” term shifted by lattice
vectors. However, some models are not that “uniform” over the whole lattice.
First of all, you might have some local term that gets added only at one specific location in the lattice.
You can add such a term for example with add_local_term().
Second, if you have irregular lattices, take a look at the corresponding section in Details on the lattice geometry.
Finally, note that the argument strength for the add_onsite, add_coupling, and add_multi_coupling methods can not only be a numpy scalar, but also a (numpy) array. In general, the sum performed by the methods runs over the given term shifted by lattice vectors as far as possible to still fit the term into the lattice.
For the add_onsite() case this criterion is simple: there is exactly one site in each lattice unit cell with the u specified as a separate argument, so the correct shape for the strength array is simply given by Ls.
For example, if you want the de facto standard model studied for many-body localization, a Heisenberg chain with a random onsite field \(h^z_i \in [-W, W]\) drawn from a uniform distribution,
you can use the SpinChain with the following model parameters:
L = 30  # or whatever you like...
W = 5.  # MBL transition at W_c ~= 3.5 J
model_params = {
    'L': L,
    'Jx': 1., 'Jy': 1., 'Jz': 1.,
    'hz': 2. * W * (np.random.random(L) - 0.5),  # random values in [-W, W], shape (L,)
    'conserve': 'best',
}
M = tenpy.models.spins.SpinChain(model_params)
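The shape rule for add_onsite can be checked directly on the hz array above, using numpy only (no TeNPy needed): one value per lattice site, drawn uniformly from [-W, W].

```python
import numpy as np

L, W = 30, 5.
rng = np.random.default_rng(0)          # seeded for reproducibility
hz = 2. * W * (rng.random(L) - 0.5)     # same construction as above

assert hz.shape == (L,)                 # one onsite strength per site
assert hz.min() >= -W and hz.max() <= W # values stay within [-W, W]
```

If the shape did not match the number of sites, add_onsite would have no unambiguous way to assign a strength to each site.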
For add_coupling() and add_multi_coupling(), things become a little bit more complicated, and the correct shape of the strength array depends not only on the Ls
but also on the boundary conditions of the lattice. Given a term, you can call
coupling_shape() and multi_coupling_shape() to find out the correct shape for strength.
To avoid any ambiguity, the shape of the strength always has to fit, at least after a tiling performed by to_array().
For example, consider the Su-Schrieffer-Heeger model, a spinless FermionChain
with hopping strength alternating between two values, say t1 and t2.
You can generate this model for example like this:
L = 30  # or whatever you like...
t1, t2 = 0.5, 1.5
t_array = np.array([(t1 if i % 2 == 0 else t2) for i in range(L - 1)])
model_params = {
    'L': L,
    't': t_array,
    'V': 0., 'mu': 0.,  # just free fermions, but you can generalize...
    'conserve': 'best',
}
M = tenpy.models.fermions.FermionChain(model_params)
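The alternating t_array can also be fed into a plain single-particle hopping matrix to inspect the free-fermion spectrum directly; a numpy sketch (not using TeNPy):

```python
import numpy as np

L, t1, t2 = 30, 0.5, 1.5
t_array = np.array([(t1 if i % 2 == 0 else t2) for i in range(L - 1)])

# single-particle Hamiltonian of the hopping chain: H[i, i+1] = H[i+1, i] = -t_i
H = np.zeros((L, L))
for i, t in enumerate(t_array):
    H[i, i + 1] = H[i + 1, i] = -t

E = np.linalg.eigvalsh(H)
# the chain is bipartite, so the spectrum is particle-hole symmetric: E -> -E
assert np.allclose(np.sort(E), np.sort(-E))
```

For the many-body model above, the same alternation shows up as the dimerized band structure of the SSH model.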
Some random remarks on models¶
Needless to say, we also have various predefined models under tenpy.models.

- Of course, an MPO is all you need to initialize an MPOModel to be used for DMRG; you don’t have to use the CouplingModel or CouplingMPOModel. For example, exponentially decaying long-range interactions are not supported by the coupling model, but straightforward to include in an MPO, as demonstrated in the example examples/mpo_exponentially_decaying.py.
- If you want to debug or double check that you added the correct terms to a CouplingMPOModel, you can print the terms with print(M.all_coupling_terms().to_TermList()).
- If the model of your interest contains fermions, you should read the guide Fermions and the Jordan-Wigner transformation.
- We suggest writing the model to take a single parameter dictionary for the initialization, as the CouplingMPOModel does. The CouplingMPOModel converts the dictionary to a dict-like Config with some additional features before passing it on to the init_lattice, init_sites, … methods. It is recommended to read out parameters providing default values with model_params.get("key", default_value), see get().
- When you write a model and want to include a test that it can be at least constructed, take a look at tests/test_model.py.
Simulations¶
Simulations provide the highest-level interface in TeNPy. They represent one simulation from start (initializing the various classes from given parameters) to end (saving the results to a file). The idea is that they contain the full package of code that you run by a job on a computing cluster. (You don’t have to stick to that rule, of course.) In fact, any simulation can be run from the command line, given only a parameter file as input, like this:
python -m tenpy -c SimulationClassName parameters.yml
# or alternatively, if tenpy is installed correctly:
tenpy-run -c SimulationClassName parameters.yml
Of course, you should replace SimulationClassName with the class name of the simulation class you want to use, for
example GroundStateSearch or RealTimeEvolution. For more details, see tenpy.run_commandline().
In some cases, this might not be enough, and you want to do some pre- or post-processing, or just do something a little bit differently during the simulation. In that case, you can also define your own simulation class (as a subclass of one of the existing ones).
Details on the lattice geometry¶
The Lattice class defines the geometry of the system.
In its basic form, it represents a unit cell of a few sites repeated in one or multiple directions.
Moreover, it maps this higher-dimensional geometry to a one-dimensional chain for MPS-based algorithms.
Visualization¶
A plot of the lattice can greatly help to understand which sites are connected by what couplings.
The methods plot_* of the Lattice can do a good job for a quick illustration.
Let’s look at the Honeycomb lattice as an example.
import matplotlib.pyplot as plt
from tenpy.models import lattice
plt.figure(figsize=(5, 6))
ax = plt.gca()
lat = lattice.Honeycomb(Lx=4, Ly=4, sites=None, bc='periodic')
lat.plot_coupling(ax)
lat.plot_order(ax, linestyle=':')
lat.plot_sites(ax)
lat.plot_basis(ax, origin=-0.5*(lat.basis[0] + lat.basis[1]))
ax.set_aspect('equal')
ax.set_xlim(-1)
ax.set_ylim(-1)
plt.show()
(Figure: plot of the Honeycomb lattice produced by the code above.)
In this case, the unit cell (shaded green) consists of two sites, which for the purpose of plotting we just set to sites=None;
in general, you should specify instances of Site for that.
The unit cell gets repeated in the directions given by the lattice basis (green arrows at the unit cell boundary).
Hence, we can label each site by a lattice index (x, y, u) in this case, where x in range(Lx), y in range(Ly)
specify the translation of the unit cell,
and u in range(len(unit_cell)), here u in [0, 1], specifies the index within the unit cell.
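The bookkeeping between lattice indices (x, y, u) and a 1D enumeration can be sketched for a simple C-style enumeration. Note this toy mapping is for illustration only; the actual default order of the Honeycomb lattice in TeNPy differs:

```python
Lx, Ly, n_u = 4, 4, 2   # honeycomb: 2 sites per unit cell

def lat2mps(x, y, u):
    """C-style enumeration: slowest index x, fastest index u."""
    return (x * Ly + y) * n_u + u

def mps2lat(i):
    """Inverse map from the 1D index back to the lattice index."""
    xy, u = divmod(i, n_u)
    x, y = divmod(xy, Ly)
    return x, y, u

# the two maps are inverse to each other over the whole lattice
assert all(mps2lat(lat2mps(x, y, u)) == (x, y, u)
           for x in range(Lx) for y in range(Ly) for u in range(n_u))
```

TeNPy's mps2lat_idx()/lat2mps_idx() play exactly this role, but for whatever order the lattice actually uses.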
How an MPS winds through the lattice: the order¶
For MPS-based algorithms, we need to map a 2D lattice like the one above to a 1D chain.
The red, dashed line in the plot indicates how an MPS winds through the 2D
lattice. The MPS index i is a simple enumeration of the sites along this line, shown as numbers next to the sites
in the plot.
The methods mps2lat_idx() and lat2mps_idx() map
indices of the MPS to and from indices of the lattice.
The MPS class itself is (mostly) agnostic of the underlying geometry.
For example, expectation_value() will return a 1D array of the expectation value on each
site, indexed by the MPS index i.
If you have a two-dimensional lattice, you can use mps2lat_values() to map this result to a 2D array indexed by the lattice indices.
A suitable order is critical for the efficiency of MPS-based algorithms. On the one hand, different orderings can lead to different MPO bond dimensions, with direct impact on the complexity scaling. On the other hand, the order influences how much entanglement needs to pass through each bond of the underlying MPS, e.g., in the ground state to be found by DMRG, and therefore influences the required MPS bond dimensions. For the latter reason, the “optimal” ordering cannot be known a priori and might even depend on your coupling parameters (and the phase you are in). In the end, you can just try different orderings and see which one works best.
The simplest way to change the order is to use a non-default value for the initialization parameter order of the
Lattice class. This gets passed on to ordering(),
which you can override in a custom lattice class to define new possible orderings.
Alternatively, you can go the most general way and simply set the attribute order to a 2D numpy array with
lattice indices as rows, in the order you want.
import matplotlib.pyplot as plt
from tenpy.models import lattice

Lx, Ly = 3, 3
fig, axes = plt.subplots(2, 2, figsize=(7, 8))
lat1 = lattice.Honeycomb(Lx, Ly, sites=None, bc='periodic')  # default order
lat2 = lattice.Honeycomb(Lx, Ly, sites=None, bc='periodic',
                         order="Cstyle")  # first method to change order
# alternative: directly set "Cstyle" order
lat3 = lattice.Honeycomb(Lx, Ly, sites=None, bc='periodic')
lat3.order = lat2.ordering("Cstyle")  # now equivalent to lat2
# general: can apply an arbitrary permutation to the order
lat4 = lattice.Honeycomb(Lx, Ly, sites=None, bc='periodic',
                         order="Cstyle")
old_order = lat4.order
permutation = []
for i in range(0, len(old_order), 2):
    permutation.append(i + 1)
    permutation.append(i)
lat4.order = old_order[permutation, :]
for lat, label, ax in zip([lat1, lat2, lat3, lat4],
                          ["order='default'",
                           "order='Cstyle'",
                           "order='Cstyle'",
                           "custom permutation"],
                          axes.flatten()):
    lat.plot_coupling(ax)
    lat.plot_sites(ax)
    lat.plot_order(ax, linestyle=':', linewidth=2.)
    ax.set_aspect('equal')
    ax.set_title('order = ' + repr(label))
plt.show()
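The pairwise swap applied to old_order should still visit every site exactly once; this invariant is easy to assert with numpy. A standalone sketch, using a fake order array instead of a TeNPy lattice:

```python
import numpy as np

# a stand-in for lat.order: 18 sites of a 3x3 honeycomb, rows (x, y, u)
old_order = np.array([(x, y, u) for x in range(3) for y in range(3)
                      for u in range(2)])
permutation = []
for i in range(0, len(old_order), 2):   # swap neighboring pairs, as above
    permutation.append(i + 1)
    permutation.append(i)
new_order = old_order[permutation, :]

# still a valid order: the same set of lattice indices, each appearing exactly once
assert len(new_order) == len(old_order)
assert sorted(map(tuple, new_order)) == sorted(map(tuple, old_order))
```

Any 2D array you assign to lat.order should pass this kind of permutation check, otherwise some sites would be visited twice or not at all.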
(Figure: the four lattice orderings produced by the code above.)
Boundary conditions¶
The Lattice defines the boundary conditions bc in each direction.
It can be one of the usual 'open' or 'periodic' in each direction.
On top of that, there is the bc_MPS boundary condition of the MPS, one of 'finite', 'segment', 'infinite'.
For an 'infinite' MPS, the whole lattice is repeated in the direction of the first basis vector of the lattice.
For bc_MPS='infinite', the first direction should always be 'periodic', but you can also define a lattice with
bc_MPS='finite', bc=['periodic', 'periodic'] for a finite system on the torus.
This is discouraged, though, because the ground state MPS will require the squared bond dimension for the same precision in this
case!
For two- (or higher-) dimensional lattices, e.g. for DMRG on an infinite cylinder,
you can also specify an integer shift instead of just saying 'periodic':
rolling the 2D lattice up into a cylinder, you have a degree of freedom which sites to connect.
This is illustrated in the figure below for a Square lattice with bc=['periodic', shift]
for shift in [-1, 0, 1] (different columns).
In the first row, the orange markers indicate a pair of identified sites (see plot_bc_shift()).
The dashed orange line indicates the direction of the cylinder axis.
The line where the cylinder is “cut open” therefore winds around the cylinder for a non-zero shift.
(A similar thing happens even for shift=0 for more complicated lattices with a non-orthogonal basis.)
In the second row, we directly draw lines between all sites connected by nearest-neighbor couplings, as they appear in the MPO.
(Figure: Square lattice cylinders with bc=['periodic', shift] for shift in [-1, 0, 1].)
Irregular Lattices¶
The IrregularLattice allows you to add sites to or remove sites from an existing regular lattice.
The doc-string of IrregularLattice contains several examples; let us consider another one
here, where we use the IrregularLattice to “fix” the boundary of the Honeycomb lattice:
when we use "open" boundary conditions for a finite system, there are two sites (on the lower left and upper right)
which are not included in any hexagon. The following example shows how to remove them from the system:
import matplotlib.pyplot as plt
from tenpy.models import lattice

Lx, Ly = 3, 3
fig, axes = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(6, 4))
reg_lat = lattice.Honeycomb(Lx=Lx, Ly=Ly, sites=None, bc='open')
irr_lat = lattice.IrregularLattice(reg_lat, remove=[[0, 0, 0], [-1, -1, 1]])
for lat, label, ax in zip([reg_lat, irr_lat],
                          ["regular", "irregular"],
                          axes.flatten()):
    lat.plot_coupling(ax)
    lat.plot_order(ax, linestyle=':')
    lat.plot_sites(ax)
    ax.set_aspect('equal')
    ax.set_title(label)
plt.show()
(Figure: regular vs. irregular Honeycomb lattice produced by the code above.)
Logging and terminal output¶
By default, calling (almost) any function in TeNPy will not print output, apart from error messages, tracebacks, and warnings.
Instead, we use Python’s logging
module to allow fine-grained redirecting of status messages etc.
Of course, when you get an error message, you should be concerned to find out what it is about and how to fix it.
(If you believe it is a bug, report it.)
Warnings can be reported either using warnings.warn(...) or with the logging mechanism logger.warning(...).
The former is used for warnings about things in your setup that you should fix.
The latter gives you notifications about bad things that can happen in calculations, e.g. bad conditioning of a matrix, where there
is not much you can do about it. Those warnings indicate that you should take your results with a grain of salt and carefully double-check them.
Configuring logging¶
If you also want to see status messages (e.g. during a DMRG run whenever a checkpoint is reached), you can
set the logging level to logging.INFO with the following basic setup:
import logging
logging.basicConfig(level=logging.INFO)
We use this snippet in our examples to activate the printing of info messages to the standard output stream.
For really detailed output, you can even set the level to logging.DEBUG.
logging.basicConfig() also takes a filename argument, which allows redirecting the output to a file
instead of stdout. Note that you should call basicConfig only once; subsequent calls have no effect.
More detailed configurations can be made through logging.config.
For example, the following both prints log messages to stdout and saves them to output_filename.log:
import logging.config

conf = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {'custom': {'format': '%(levelname)-8s: %(message)s'}},
    'handlers': {'to_file': {'class': 'logging.FileHandler',
                             'filename': 'output_filename.log',
                             'formatter': 'custom',
                             'level': 'INFO',
                             'mode': 'a'},
                 'to_stdout': {'class': 'logging.StreamHandler',
                               'formatter': 'custom',
                               'level': 'INFO',
                               'stream': 'ext://sys.stdout'}},
    'root': {'handlers': ['to_stdout', 'to_file'], 'level': 'DEBUG'},
}
logging.config.dictConfig(conf)
Note
Whether you use logging.config.fileConfig() or logging.config.dictConfig(),
make sure that you also set disable_existing_loggers=False.
Otherwise, it will not work as expected in the case where you import tenpy before setting up the logging.
To also capture warnings, you might also want to call logging.captureWarnings().
In fact, the above is the default configuration used by tenpy.tools.misc.setup_logging().
If you use a Simulation class, it will automatically call setup_logging() for you,
saving the log to the same filename as the Simulation.output_filename but with a .log ending.
Moreover, you can easily adjust the log levels with simple parameters, for example with the following configuration (using [yaml] notation):
logging_params:
    to_stdout:   # nothing in yaml -> None in python => no logging to stdout
    to_file: INFO
    log_levels:
        tenpy.tools.params: WARNING  # suppress INFO/DEBUG output for any logging of parameters
Of course, you can also explicitly call setup_logging() yourself, if you don’t use the Simulation classes:

tenpy.tools.misc.setup_logging({'to_stdout': None, 'to_file': 'INFO', 'filename': 'my_log.txt',
                                'log_levels': {'tenpy.tools.params': 'WARNING'}})
How to write your own logging (and warning) code¶
Of course, you can still use simple print(...) statements in your code, and they will just appear on your screen.
In fact, this is one of the benefits of logging: you can make sure that you only get the print statements you have put in
yourself, and at the same time redirect the logging messages of tenpy to a file, if you want.
However, these print(...) statements are not redirected to the log files.
Therefore, if you write your own subclasses like models, I would recommend that you also use the loggers instead of
simple print statements.
You can read the official logging tutorial for details,
but it’s actually straightforward, and just requires at most two steps.
If necessary, import the necessary modules and create a logger at the top of your module:
import warnings
import logging

logger = logging.getLogger(__name__)
Note
Most TeNPy classes that you might want to subclass, like models, algorithm engines or simulations, provide a
Logger as self.logger class attribute. In that case you can even skip this step and just use
self.logger instead of logger in the snippets below.

Inside your functions/methods/…, make calls like this:
if is_likely_bad(options['parameter']):
    # this can be fixed by the user!
    warnings.warn("This is a bad parameter, you shouldn't do this!")
if "old_parameter" in options:
    warnings.warn("Use `new_parameter` instead of `old_parameter`", FutureWarning, 2)
logger.info("starting some lengthy calculation")
n_steps = do_calculation()
if something_bad_happened():
    # the user can't do anything about it
    logger.warning("Something bad happened")
logger.info("calculation finished after %d steps", n_steps)
You can use printf-style formatting for the arguments of
logger.debug(...), logger.info(...), logger.warning(...), as illustrated in the last line.
In summary, instead of just print("do X") statements, use self.logger.info("do X") inside TeNPy classes, or just
logger.info("do X") for the module-wide logger, which you can initialize right at the top of your file with the import
statements. If you have non-string arguments, add a format string, e.g. replace print(max(psi.chi)) with
logger.info("%d", max(psi.chi)), or even better, logger.info("max(chi)=%d", max(psi.chi)).
For generic types, use "%s" or "%r", which convert the other arguments to strings with str(...) or repr(...), respectively.
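The lazy printf-style formatting can be verified with a handler writing to an in-memory stream, using only the standard library (the logger name "tenpy_demo" is made up for this sketch):

```python
import io
import logging

stream = io.StringIO()
logger = logging.getLogger("tenpy_demo")          # made-up logger name
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(stream))  # redirect output to the buffer

chi = [10, 20, 40]
logger.info("max(chi)=%d", max(chi))  # formatting only happens if the record is emitted

assert stream.getvalue() == "max(chi)=40\n"
```

Because the format string and arguments are passed separately, the string is never built when the message is filtered out, which keeps disabled debug logging essentially free.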
Parameters and options¶
(We use parameter and option synonymously.)
Standard simulations in TeNPy can be defined by just a set of options collected in a dictionary (possibly containing
other parameter dictionaries).
It can be convenient to represent these options in a [yaml] file, say parameters.yml, which might look like this:
output_filename : params_output.h5
overwrite_output : True
model_class : SpinChain
model_params :
    L : 14
    bc_MPS : finite
initial_state_params:
    method : lat_product_state
    product_state : [[up], [down]]
algorithm_class: TwoSiteDMRG
algorithm_params:
    trunc_params:
        chi_max: 120
        svd_min: 1.e-8
    max_sweeps: 10
    mixer : True
Note that the default values and even the allowed/used option names often depend on other parameters.
For example, the model_class parameter above, given to a Simulation, selects a model class,
and different model classes might have completely different parameters.
This gives you the freedom to define your own parameters when you implement a model, but it also makes it a little bit harder to keep track of allowed values.
In the TeNPy documentation, we use the Options sections of doc-strings to define parameters that are read out.
Each documented parameter is attributed to one set of parameters, called “config”, and managed in a Config class at runtime.
The above example represents the config for a Simulation, with the model_params representing the config given as
options to the model for initialization.
Sometimes, one config also includes the parameters from another one:
for example, the generic parameters for time evolution algorithms, TimeEvolutionAlgorithm, are included
into the TEBDEngine config, similar to the sub-classing used.
During runtime, the Config class logs the first use of any parameter (with DEBUG log-level if
the default is used, and with INFO log-level if it is non-default). Moreover, the default is saved into the parameter
dictionary. Hence, it will contain the full set of all used parameters, default and non-default, at the end of a
simulation, e.g. in the sim_params of the results returned by tenpy.simulations.Simulation.run().
You can find a list of all the different configs in the Config Index, and a list of all parameters in Config-Options Index.
If your configuration contains extra options that TeNPy has not read out by the end of the simulation, it will issue a warning. Getting such a warning is an indicator for a typo in your configuration, or an option being in the wrong config dictionary.
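The bookkeeping described here (recording defaults, tracking use, warning about unused options) is easy to mimic. A minimal, hypothetical sketch of such a Config-like wrapper, not TeNPy's actual implementation:

```python
import warnings

class MiniConfig:
    """Toy stand-in for a Config: record defaults, track which keys were read."""
    def __init__(self, params):
        self.params = dict(params)
        self.used = set()

    def get(self, key, default):
        self.used.add(key)
        self.params.setdefault(key, default)  # save the default back into the dict
        return self.params[key]

    def warn_unused(self):
        for key in set(self.params) - self.used:
            warnings.warn(f"unused option {key!r}")

cfg = MiniConfig({'L': 14, 'tpyo_option': 1})     # note the intentional typo key
assert cfg.get('L', 2) == 14       # non-default value read out
assert cfg.get('Jz', 1.) == 1.     # default used ...
assert cfg.params['Jz'] == 1.      # ... and recorded into the dict
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    cfg.warn_unused()              # 'tpyo_option' was never read -> one warning
assert len(caught) == 1
```

This is exactly why the unused-option warning catches typos: a misspelled key is never requested via get, so it stays in the "unused" set.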
Saving to disk: input/output¶
Using pickle¶
A simple and pythonic way to store data of TeNPy arrays is to use pickle from the Python standard library.
Pickle can store (almost) arbitrary python objects,
and the Array is no exception (and neither are other TeNPy classes).
Say that you have run DMRG to get a ground state psi as an MPS.
With pickle, you can save it to disk as follows:
import pickle

with open('my_psi_file.pkl', 'wb') as f:
    pickle.dump(psi, f)
Here, the with ... : structure ensures that the file gets closed after the pickle dump, and the 'wb' indicates
the file opening mode “write binary”.
Reading the data from disk is just as easy ('rb' for “read binary”):
with open('my_psi_file.pkl', 'rb') as f:
    psi = pickle.load(f)
Note
It is a good (scientific) practice to include meta-data in the file, like the parameters you used to generate that state.
Instead of just the psi, you can simply store a dictionary containing psi and other data, e.g.,
data = {'psi': psi, 'dmrg_params': dmrg_params, 'model_params': model_params}.
This can save you a lot of pain when you come back looking at the files a few months later and have forgotten what you did to generate them!
In some cases, compression can significantly reduce the space needed to save the data.
This can be done, for example, with gzip (also in the Python standard library).
However, be warned that compression might cause longer loading and saving times, i.e. it comes at the penalty of more CPU usage for input/output.
In Python, this requires only small adjustments:
import pickle
import gzip

# to save:
with gzip.open('my_data_file.pkl', 'wb') as f:
    pickle.dump(data, f)

# and to load:
with gzip.open('my_data_file.pkl', 'rb') as f:
    data = pickle.load(f)
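The compression trade-off can be checked end-to-end with an in-memory buffer instead of a file, using only the standard library:

```python
import gzip
import io
import pickle

data = {'psi': [0.0] * 1000, 'model_params': {'L': 30}}  # highly compressible dummy data

buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode='wb') as f:
    pickle.dump(data, f)

raw_size = len(pickle.dumps(data))
assert len(buf.getvalue()) < raw_size      # gzip shrinks the pickle here

buf.seek(0)
with gzip.GzipFile(fileobj=buf, mode='rb') as f:
    assert pickle.load(f) == data          # lossless roundtrip
```

How much you gain depends strongly on the data; dense random tensors compress far worse than the repetitive dummy payload used here.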
Using HDF5 with h5py¶
While pickle is great for simple input/output of python objects, it also has disadvantages. Probably the most
dramatic one is the limited portability: saving data on one PC and loading it on another one might fail!
Even exporting data from Python 2 to load it in Python 3 on the same machine can cause quite some trouble.
Moreover, pickle requires loading the whole file at once, which might be unnecessary if you only need part of the data,
or might even lead to memory problems if you have more data on disk than fits into RAM.
Hence, we support saving to HDF5 files as an alternative. The h5py package provides a dictionary-like interface for the file/group objects with numpy-like data sets, and is quite easy to use. If you don’t know about HDF5, read the quickstart of the h5py documentation (and this guide).
The implementation can be found in the tenpy.tools.hdf5_io module with the
Hdf5Saver and Hdf5Loader classes
and the wrapper functions save_to_hdf5() and load_from_hdf5().
The usage is very similar to pickle:
import h5py
from tenpy.tools import hdf5_io
data = {"psi": psi, # e.g. an MPS
"model": my_model,
"parameters": {"L": 6, "g": 1.3}}
with h5py.File("file.h5", 'w') as f:
hdf5_io.save_to_hdf5(f, data)
# ...
with h5py.File("file.h5", 'r') as f:
data = hdf5_io.load_from_hdf5(f)
# or for partial reading:
pars = hdf5_io.load_from_hdf5(f, "/parameters")
Warning
Like loading a pickle file, loading data from a manipulated HDF5 file with the functions described has the potential to cause arbitrary code execution. Only load data from trusted sources!
Note
The hickle package imitates the pickle functionality while saving the data to HDF5 files. However, since it aims to be close to pickle, it results in a more complicated data structure than we want here.
Note
To use the export/import features to HDF5, you need to install the h5py python package (and hence some version of the HDF5 library).
Data format specification for saving to HDF5¶
This section motivates and defines the format in which we save data of TeNPy-defined classes.
The goal is to have the save_to_hdf5()
function for saving sufficiently simple Python
objects (supported by the format) to disk in an HDF5 file, such that they can be reconstructed with the
load_from_hdf5()
function, as outlined in the example code above.
Guidelines of the format:
0) Store enough data such that load_from_hdf5() can reconstruct a copy of the object (provided that the save did not fail with an error).
1) Objects of a type supported by HDF5 datasets (with the h5py interface) should be directly stored as an h5py Dataset. Such objects are for example numpy arrays (of non-object dtype), scalars and strings.
2) Allow saving (nested) Python lists, tuples and dictionaries with values (and keys) which can be saved.
3) Allow user-defined classes to implement a well-defined interface which allows saving instances of that class, hence extending what data can be saved. An instance of a class supporting the interface gets saved as an HDF5 Group. Class attributes are stored as entries of the group; metadata like the type should be stored in HDF5 attributes, see attributes.
4) Use a simple and intuitive, human-readable structure for the HDF5 paths. For example, saving a simple dictionary {'a': np.arange(10), 'b': 123.45} should result in an HDF5 file with just the two data sets /a and /b.
5) Allow loading only a subset of the data by specifying the path of the HDF5 group to be loaded. For the above example, specifying the path /b should result in loading the float 123.45, not the array.
6) Avoid unnecessary copies if the same Python object is referenced by different names, e.g., for the data {'c': large_obj, 'd': large_obj} with two references to the same large_obj, save it only once and use HDF5 hard links such that /c and /d are the same HDF5 dataset/group. Also avoid the copies during loading, i.e., the loaded dictionary should again have two references to a single object large_obj. This is also necessary to allow saving and loading of objects with cyclic references.
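Guideline 6) can be illustrated with pickle, which preserves shared references in the same way that the HDF5 format described here uses hard links (a tenpy-independent sketch):

```python
import pickle

large_obj = list(range(1000))
data = {'c': large_obj, 'd': large_obj}  # two names, one object

# a save/load round trip must restore the object only once,
# with both dictionary entries referencing the same object
loaded = pickle.loads(pickle.dumps(data))
assert loaded['c'] is loaded['d']
```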
The full format specification is given by what the code in hdf5_io does…
Since this is not trivial to understand, let me summarize it here:
Following 1), simple scalars, strings and numpy arrays are saved as a Dataset. Other objects are saved as an HDF5 Group, with the actual data being saved as group members (as sub-groups and sub-datasets) or as attributes (for metadata or simple data).
The type of the object is stored in the HDF5 attribute 'type', which is one of the global REPR_* variables in tenpy.tools.hdf5_io. The type determines the format for saving/loading of builtin types (list, …).
User-defined classes which should be possible to export/import need to implement the methods save_hdf5 and from_hdf5 as specified in Hdf5Exportable. When saving such a class, the attribute 'type' is automatically set to 'instance', and the class name and module are saved under the attributes 'module' and 'class'. During loading, this information is used to automatically import the module, get the class and call the classmethod from_hdf5 for reconstruction. This can only work if the class definition already exists, i.e., you can only save class instances, not classes themselves.
For most (Python) classes, simply subclassing Hdf5Exportable should work to make the class exportable. The latter saves the contents of __dict__, with the extra attribute 'format' specifying whether the dictionary is “simple” (see below).
The None object is saved as a group with the attribute 'type' being 'None' and no subgroups.
For iterables (list, tuple and set), we simply enumerate the entries and save them as group members under the names '0', '1', '2', …, together with a 'len' attribute.
The format for dictionaries depends on whether all keys are “simple”, which we define as being strings that are valid path names in HDF5, see valid_hdf5_path_component(). Following 4), the keys of a simple dictionary are directly used as names for group members, with the values being whatever object the group member represents.
Partial loading along 5) is possible by directly specifying the subgroup or the path to load_from_hdf5().
Guideline 6) is ensured as much as possible. However, there is a known bug/exception: tuples with cyclic references are not reconstructed correctly; the inner objects will be lists instead of tuples (but with the same object entries).
Finally, we have to mention that many TeNPy classes are Hdf5Exportable
.
In particular, the Array
supports this.
To see what the exact format for those classes is, look at the save_hdf5 and from_hdf5 methods of those classes.
Note
There can be multiple possible output formats for the same object. The dictionary – with the format for simple keys or general keys – is such an example, but user-defined classes can use the same technique in their from_hdf5 method. The user might also explicitly choose a “lossy” output format (e.g. “flat” for np_conserved Arrays and LegCharges).
Tip
The above format specification is quite general and not bound to TeNPy. Feel free to use it in your own projects ;-) To separate the development, versions and issues of the format clearly from TeNPy, we maintain the code for it in a separate git repository, https://github.com/tenpy/hdf5_io
Fermions and the Jordan-Wigner transformation¶
The Jordan-Wigner transformation maps fermionic creation and annihilation operators to (bosonic) spin operators.
Spinless fermions in 1D¶
Let’s start by explicitly writing down the transformation. With the Pauli matrices \(\sigma^{x,y,z}_j\) and \(\sigma^{\pm}_j = (\sigma^x_j \pm \mathrm{i} \sigma^y_j)/2\) on each site, we can map
\(n_j = c^\dagger_j c_j \leftrightarrow \frac{1}{2}(\sigma^z_j + 1)\)
\(c_j \leftrightarrow (-1)^{\sum_{l < j} n_l} \sigma^{-}_j\)
\(c^\dagger_j \leftrightarrow (-1)^{\sum_{l < j} n_l} \sigma^{+}_j\)
The \(n_l\) in the second and third row are defined in terms of Pauli matrices according to the first row. We do not interpret the Pauli matrices as spin-1/2; they have nothing to do with the spin in the spinful case. If you really want to interpret them physically, you might better think of them as hard-core bosons (\(b_j =\sigma^{-}_j, b^\dagger_j=\sigma^{+}_j\)), with a spin of the fermions mapping to a spin of the hard-core bosons.
Note that this transformation maps the fermionic operators \(c_j\) and \(c^\dagger_j\) to global operators; although they carry an index j indicating
a site, they actually act on all sites l <= j
!
Thus, clearly the operators C
and Cd
defined in the FermionSite
do not directly correspond to \(c_j\) and
\(c^\dagger_j\).
The part \((-1)^{\sum_{l < j} n_l}\) is called Jordan-Wigner string and in the FermionSite
is given by the local operator
\(JW := (-1)^{n_l}\) acting on all sites l < j
.
Since this is important, let me stress it again:
Warning
The fermionic operator \(c_j\) (and similar \(c^\dagger_j\)) maps to a global operator consisting of
the Jordan-Wigner string built by the local operator JW
on sites l < j
and the local operator C
(or Cd
, respectively) on site j
.
On the sites itself, the onsite operators C
and Cd
in the FermionSite
fulfill the correct anti-commutation relation, without the need to include JW
strings.
The JW
string is necessary to ensure the anti-commutation for operators acting on different sites.
Written in terms of onsite operators defined in the FermionSite
,
with the i-th entry in the list acting on site i, the relations are thus:
["JW", ..., "JW", "C", "Id", ..., "Id"] # for the annihilation operator
["JW", ..., "JW", "Cd", "Id", ..., "Id"] # for the creation operator
Note that "JW"
squares to the identity, "JW JW" == "Id"
,
which is the reason that the Jordan-Wigner string completely cancels in \(n_j = c^\dagger_j c_j\).
In the above notation, this can be written as:
["JW", ..., "JW", "Cd", "Id", ..., "Id"] * ["JW", ..., "JW", "C", "Id", ..., "Id"]
== ["JW JW", ..., "JW JW", "Cd C", "Id Id", ..., "Id Id"] # by definition of the tensorproduct
== ["Id", ..., "Id", "N", "Id", ..., "Id"] # by definition of the local operators
# ("X Y" stands for the local operators X and Y applied on the same site. We assume that the "Cd" and "C" on the first line act on the same site.)
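These operator identities and the resulting anti-commutation relations are easy to check numerically. The following sketch (independent of TeNPy) builds the 2x2 matrices in the occupation basis \((|0\rangle, |1\rangle)\) and verifies the identities on a two-site chain:

```python
import numpy as np

Id = np.eye(2)
C = np.array([[0., 1.], [0., 0.]])   # annihilation: C|1> = |0>
Cd = C.T                             # creation
JW = np.diag([1., -1.])              # (-1)^n

# onsite identities from the text
assert np.allclose(JW @ JW, Id)        # "JW JW" == "Id"
assert np.allclose(Cd @ JW, Cd)        # "Cd JW" == "Cd"
assert np.allclose(JW @ Cd, -Cd)       # "JW Cd" == -"Cd"

# global operators on a 2-site chain: c_0 = C x Id, c_1 = JW x C
c0 = np.kron(C, Id)
c1 = np.kron(JW, C)
cd0, cd1 = c0.conj().T, c1.conj().T

anti = lambda a, b: a @ b + b @ a
assert np.allclose(anti(c0, c1), 0)           # {c_0, c_1} = 0
assert np.allclose(anti(c0, cd1), 0)          # {c_0, c_1^dagger} = 0
assert np.allclose(anti(c0, cd0), np.eye(4))  # {c_0, c_0^dagger} = 1
```

Without the JW factor in c1, the two operators would commute instead of anti-commute.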
For a pair of operators acting on different sites, JW
strings have to be included for every site between the operators.
For example, taking i < j
,
\(c^\dagger_i c_j \leftrightarrow \sigma_i^{+} (-1)^{\sum_{i \leq l < j} n_l} \sigma_j^{-}\).
More explicitly, for j = i+2
we get:
["JW", ..., "JW", "Cd", "Id", "Id", "Id", ..., "Id"] * ["JW", ..., "JW", "JW", "JW", "C", "Id", ..., "Id"]
== ["JW JW", ..., "JW JW", "Cd JW", "Id JW", "Id C", ..., "Id"]
== ["Id", ..., "Id", "Cd JW", "JW", "C", ..., "Id"]
In other words, the Jordan-Wigner string appears only in the range i <= l < j
, i.e. between the two sites and on the smaller/left one of them.
(You can easily generalize this rule to cases with more than two \(c\) or \(c^\dagger\).)
This last line (as well as the last line of the previous example) can be simplified further by combining the operators on site i, using "Cd JW" == "Cd" and "JW Cd" == - "Cd".
(These identities hold because Cd yields 0 on an occupied site, while the JW yields a minus sign only when it acts after Cd has created the particle.)
This is also the case for j < i
, say j = i-2
:
\(c^\dagger_i c_j \leftrightarrow (-1)^{\sum_{j \leq l < i} n_l} \sigma_i^{+} \sigma_j^{-}\).
As shown in the following, the JW
again appears on the left site,
but this time acting after C
:
["JW", ..., "JW", "JW", "JW", "Cd", "Id", ..., "Id"] * ["JW", ..., "JW", "C", "Id", "Id", "Id", ..., "Id"]
== ["JW JW", ..., "JW JW", "JW C", "JW", "Cd Id", ..., "Id"]
== ["Id", ..., "Id", "JW C", "JW", "Cd", ..., "Id"]
Higher dimensions¶
For an MPO or MPS, you always have to define an ordering of all your sites. This ordering effectively maps the higher-dimensional lattice to a 1D chain, usually at the expense of long-range hopping/interactions. With this mapping, the Jordan-Wigner transformation generalizes to higher dimensions in a straightforward way.
Spinful fermions¶

[Figure: spin-1/2 fermions on a chain drawn as spinless fermions on a two-leg ladder; blue boxes mark the rungs, a solid line marks the ordering and dashed lines the spin-preserving hopping.]
As illustrated in the above picture, you can think of spin-1/2 fermions on a chain as spinless fermions living on a ladder (and analogous mappings for higher dimensional lattices).
Each rung (a blue box in the picture) forms a SpinHalfFermionSite
which is composed of two FermionSite
(the circles in the picture) for spin-up and spin-down.
The mapping of the spin-1/2 fermions onto the ladder induces an ordering of the spins, as the final result must again be a one-dimensional chain, now containing both spin species.
The solid line indicates the convention for the ordering, the dashed lines indicate spin-preserving hopping \(c^\dagger_{s,i} c_{s,i+1} + h.c.\)
and visualize the ladder structure.
More generally, each species of fermions appearing in your model gets a separate label, and its Jordan-Wigner string
includes the signs \((-1)^{n_l}\) of all species of fermions to the ‘left’ of it (in the sense of the ordering indicated by the solid line in the picture).
In the case of spin-1/2 fermions labeled by \(\uparrow\) and \(\downarrow\) on each site, the complete mapping is given by (where j and l are indices of the SpinHalfFermionSite):
\(c_{\uparrow,j} \leftrightarrow (-1)^{\sum_{l < j} (n_{\uparrow,l} + n_{\downarrow,l})} \sigma^{-}_{\uparrow,j}\)
\(c^\dagger_{\uparrow,j} \leftrightarrow (-1)^{\sum_{l < j} (n_{\uparrow,l} + n_{\downarrow,l})} \sigma^{+}_{\uparrow,j}\)
\(c_{\downarrow,j} \leftrightarrow (-1)^{\sum_{l < j} (n_{\uparrow,l} + n_{\downarrow,l})} (-1)^{n_{\uparrow,j}} \sigma^{-}_{\downarrow,j}\)
\(c^\dagger_{\downarrow,j} \leftrightarrow (-1)^{\sum_{l < j} (n_{\uparrow,l} + n_{\downarrow,l})} (-1)^{n_{\uparrow,j}} \sigma^{+}_{\downarrow,j}\)
In each of the above mappings the operators on the right hand sides commute; we can rewrite
\((-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} = \prod_{l < j} (-1)^{n_{\uparrow,l}} (-1)^{n_{\downarrow,l}}\),
which resembles the actual structure in the code more closely.
The parts of the operator acting in the same box of the picture, i.e. which have the same index j or l,
are the ‘onsite’ operators in the SpinHalfFermionSite
:
for example JW
on site j is given by \((-1)^{n_{\uparrow,j}} (-1)^{n_{\downarrow,j}}\),
Cu
is just the \(\sigma^{-}_{\uparrow,j}\), Cdu
is \(\sigma^{+}_{\uparrow,j}\),
Cd
is \((-1)^{n_{\uparrow,j}} \sigma^{-}_{\downarrow,j}\).
and Cdd
is \((-1)^{n_{\uparrow,j}} \sigma^{+}_{\downarrow,j}\).
Note the asymmetry regarding the spin in the definition of the onsite operators:
the spin-down operators include Jordan-Wigner signs for the spin-up fermions on the same site.
This asymmetry stems from the ordering convention introduced by the solid line in the picture, according to which the spin-up site
is “left” of the spin-down site. With the above definition, the operators within the same SpinHalfFermionSite
fulfill the expected commutation relations,
for example "Cu Cdd" == - "Cdd Cu"
, but again the JW
on sites left of the operator pair is crucial to get the correct
commutation relations globally.
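The onsite conventions can again be checked numerically by building the SpinHalfFermionSite operators as Kronecker products of two spinless sites, with spin-up "left" of spin-down (a tenpy-independent sketch following the definitions above):

```python
import numpy as np

c = np.array([[0., 1.], [0., 0.]])   # annihilation on a single FermionSite
cdag = c.T
jw = np.diag([1., -1.])              # (-1)^n on a single FermionSite

# SpinHalfFermionSite = (up site) x (down site), up "left" of down
Cu  = np.kron(c, np.eye(2))          # 'Cu'  = sigma^-_up
Cdu = np.kron(cdag, np.eye(2))       # 'Cdu' = sigma^+_up
Cd  = np.kron(jw, c)                 # 'Cd'  = (-1)^{n_up} sigma^-_down
Cdd = np.kron(jw, cdag)              # 'Cdd' = (-1)^{n_up} sigma^+_down
JW  = np.kron(jw, jw)                # 'JW'  = (-1)^{n_up} (-1)^{n_down}

# different species anti-commute onsite, e.g. "Cu Cdd" == -"Cdd Cu"
assert np.allclose(Cu @ Cdd, -Cdd @ Cu)
# each species fulfills {c, c^dagger} = 1 onsite
assert np.allclose(Cu @ Cdu + Cdu @ Cu, np.eye(4))
assert np.allclose(Cd @ Cdd + Cdd @ Cd, np.eye(4))
assert np.allclose(JW @ JW, np.eye(4))
```

Dropping the \((-1)^{n_\uparrow}\) factor from Cd and Cdd would break the first assertion: the spin species would commute instead of anti-commute.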
Warning
Again, the fermionic operators \(c_{\uparrow,j}, c^\dagger_{\uparrow,j}, c_{\downarrow,j}, c^\dagger_{\downarrow,j}\) correspond to global operators consisting of
the Jordan-Wigner string built by the local operator JW
on sites l < j
and the local operators 'Cu', 'Cdu', 'Cd', 'Cdd'
on site j
.
Written explicitly in terms of onsite operators defined in the SpinHalfFermionSite,
with the j-th entry in the list acting on site j, the relations are:
["JW", ..., "JW", "Cu", "Id", ..., "Id"] # for the annihilation operator spin-up
["JW", ..., "JW", "Cd", "Id", ..., "Id"] # for the annihilation operator spin-down
["JW", ..., "JW", "Cdu", "Id", ..., "Id"] # for the creation operator spin-up
["JW", ..., "JW", "Cdd", "Id", ..., "Id"] # for the creation operator spin-down
As you can see, the asymmetry regarding the spins in the definition of the local onsite operators "Cu", "Cd", "Cdu", "Cdd"
leads to a symmetric definition in the global sense.
If you look at the definitions very closely, you can see that in terms like ["Id", "Cd JW", "JW", "Cd"] the
Jordan-Wigner sign \((-1)^{n_{\uparrow,2}}\) appears twice (namely once in the definition of "Cd" and once in the "JW" on site
2) and could in principle be canceled; however, in favor of a simplified handling in the code, we recommend not to cancel it.
Similarly, within a spinless FermionSite, one can simplify "Cd JW" == "Cd" and "JW C" == "C",
but these relations do not hold in the SpinHalfFermionSite,
and for consistency we recommend explicitly keeping the "JW" operator string even in nearest-neighbor models where it is not strictly necessary.
How to handle Jordan-Wigner strings in practice¶
There are only a few pitfalls where you have to keep the mapping in mind: When building a model, you map the physical fermionic operators to the usual spin/bosonic operators. The algorithms don’t care about the mapping, they just use the given Hamiltonian, be it given as MPO for DMRG or as nearest neighbor couplings for TEBD. Only when you do a measurement (e.g. by calculating an expectation value or a correlation function), you have to reverse this mapping. Be aware that in certain cases, e.g. when calculating the entanglement entropy on a certain bond, you cannot reverse this mapping (in a straightforward way), and thus your results might depend on how you defined the Jordan-Wigner string.
Whatever you do, you should first think about whether (and how much of) the Jordan-Wigner string cancels.
For example for many of the onsite operators (like the particle number operator N
or the spin operators in the SpinHalfFermionSite
)
the Jordan-Wigner string cancels completely and you can just ignore it both in onsite-terms and couplings.
In case of two operators acting on different sites, you typically have a Jordan-Wigner string inbetween (e.g. for the
\(c^\dagger_i c_j\) examples described above and below) or no Jordan-Wigner strings at all (e.g. for density-density
interactions \(n_i n_j\)).
In fact, the case that the Jordan-Wigner string on the left of the first non-trivial operator does not cancel is currently not supported
for models and expectation values, as it usually doesn’t appear in practice.
For terms involving more operators, things tend to get more complicated, e.g. \(c^\dagger_i c^\dagger_j c_k c_l\) with
\(i < j < k < l\) requires a Jordan-Wigner string on sites m with \(i \leq m <j\) or \(k \leq m <l\), but
not for \(j < m < k\).
Note
TeNPy keeps track of which onsite operators need a Jordan-Wigner string in the Site
class,
specifically in need_JW_string
and op_needs_JW()
.
Hence, when you define custom sites or add extra operators to the sites, make sure that
op_needs_JW()
returns the expected results.
When building a model the Jordan-Wigner strings need to be taken into account.
If you just specify the H_MPO or H_bond, it is your responsibility to use the correct mapping.
However, if you use the add_coupling()
method of the
CouplingModel
,
(or the generalization add_multi_coupling()
for more than 2 operators),
TeNPy can use the information from the Site class to automatically add Jordan-Wigner strings as needed.
Indeed, with the default argument op_string=None
, add_coupling will automatically check whether the operators
need Jordan-Wigner strings and correspondingly set op_string='JW', str_on_first=True, if necessary.
For add_multi_coupling, you can't explicitly specify the Jordan-Wigner strings at all; you must use
op_string=None, from which it will automatically determine where Jordan-Wigner strings are needed.
Obviously, you should be careful about the convention which of the operators is applied first (in a physical
sense as an operator acting on a state), as this corresponds to a sign of the prefactor.
Read the doc-strings of add_coupling() and add_multi_coupling() for details.
As a concrete example, let us specify a hopping
\(\sum_{i} (c^\dagger_i c_{i+1} + h.c.) = \sum_{i} (c^\dagger_i c_{i+1} + c^\dagger_{i} c_{i-1})\)
in a 1D chain of FermionSite
with add_coupling()
.
The recommended way is just:
self.add_coupling(strength, 0, 'Cd', 0, 'C', 1, plus_hc=True)
If you want to specify both the Jordan-Wigner string and the h.c.
term explicitly, you can use:
self.add_coupling(strength, 0, 'Cd', 0, 'C', 1, op_string='JW', str_on_first=True)
self.add_coupling(strength, 0, 'Cd', 0, 'C', -1, op_string='JW', str_on_first=True)
Slightly more complicated, to specify the hopping \(\sum_{\langle i, j\rangle, s} (c^\dagger_{s,i} c_{s,j} + h.c.)\) in the Fermi-Hubbard model on a 2D square lattice, we could use:
for (dx, dy) in [(1, 0), (0, 1)]:
self.add_coupling(strength, 0, 'Cdu', 0, 'Cu', (dx, dy), plus_hc=True) # spin up
self.add_coupling(strength, 0, 'Cdd', 0, 'Cd', (dx, dy), plus_hc=True) # spin down
# or without `plus_hc`
for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)]: # include -dx !
self.add_coupling(strength, 0, 'Cdu', 0, 'Cu', (dx, dy)) # spin up
self.add_coupling(strength, 0, 'Cdd', 0, 'Cd', (dx, dy)) # spin down
# or specifying the 'JW' string explicitly
for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
self.add_coupling(strength, 0, 'Cdu', 0, 'Cu', (dx, dy), 'JW', True) # spin up
self.add_coupling(strength, 0, 'Cdd', 0, 'Cd', (dx, dy), 'JW', True) # spin down
The most important functions for doing measurements are probably expectation_value()
and correlation_function()
. Again, if all the Jordan-Wigner strings cancel, you don’t have
to worry about them at all, e.g. for many onsite operators or correlation functions involving only number operators.
If you build multi-site operators to be measured by expectation_value, take care to include the Jordan-Wigner
string correctly.
Some MPS methods like
correlation_function()
,
expectation_value_term()
and
expectation_value_terms_sum()
automatically add Jordan-Wigner strings
(at least with default arguments).
Other more low-level functions like expectation_value_multi_sites()
don’t do it.
Hence, during measurements you should always check whether the function you use needs special treatment of Jordan-Wigner strings.
Protocol for using (i)DMRG¶
While this documentation contains extensive guidance on how to interact with TeNPy, it is often unclear how to approach a physics question using these methods. This page is an attempt to provide such guidance, describing a protocol for going from a model implementation to an answered question.
The basic workflow for an (i)DMRG project is as follows, with individual steps expanded on later where necessary.
Confirm the correctness of the model implementation.
Run some low-effort tests to see whether the question seems answerable.
If the tests are successful, run production-quality simulations. This will be entirely particular to the project you’re working on.
Confirm that your results are converged.
Confirming the model is correct¶
Although TeNPy makes model implementation much easier than constructing the MPO by hand, one should still ensure that the MPO represents the intended model faithfully.
There are several possible ways to do this. Firstly, for sufficiently small system sizes, one can contract the entire MPO into a matrix, and inspect the matrix elements. In TeNPy, this can be done using get_full_hamiltonian()
. These should reproduce the analytical Hamiltonian up to machine precision, or any other necessary cut-off (e.g., long-range interactions may be truncated at some finite distance).
Secondly, if the model basis allows it, one can construct (product state) MPSs for known eigenstates of the model and evaluate whether these reproduce the correct eigenvalues upon contraction with the MPO.
Finally, one can sometimes construct a set of single- or even two-particle MPSs in some basis, and evaluate the MPO on these to obtain a representation of the single- and two-particle Hamiltonian. If the model contains only single- and two-body terms, this latter approach should reproduce all terms in the Hamiltonian.
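As a tenpy-free illustration of such a check, the following sketch builds the full transverse-field Ising Hamiltonian on a small chain by explicit Kronecker products and verifies a known limit (for \(J \to 0\), product states in the \(\sigma^z\) basis are exact eigenstates); the matrix returned by get_full_hamiltonian() should agree with such an explicit construction up to machine precision:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
L, J, g = 3, 1.0, 0.5  # small enough for an exact 2^L x 2^L matrix

def op_at(op, i):
    """Embed a single-site operator at site i of an L-site chain."""
    out = np.eye(1)
    for j in range(L):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

# H = -J sum_i sx_i sx_{i+1} - g sum_i sz_i, built explicitly
H = sum(-J * op_at(sx, i) @ op_at(sx, i + 1) for i in range(L - 1))
H = H + sum(-g * op_at(sz, i) for i in range(L))

# known eigenstate check: for J -> 0 the all-up product state has energy -g*L
H0 = sum(-g * op_at(sz, i) for i in range(L))
up = np.zeros(2 ** L)
up[0] = 1.  # |up up up>
assert np.isclose(up @ H0 @ up, -g * L)
# and any faithful Hamiltonian matrix must be Hermitian
assert np.allclose(H, H.conj().T)
```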
Low-effort tests¶
As not every state can be accurately represented by an MPS, some results are outside the reach of (i)DMRG. To prevent wasting considerable numerical resources on a fruitless project, it is recommended to run some low-effort trials first, and see whether any indication of the desired result can be found. If so, one can then go on to more computationally expensive simulations. If not, one should evaluate:
Whether there is a mistake in the model or simulation set-up,
Whether a slightly more computationally expensive test would potentially yield a result, or
Whether your approach is unfortunately out of reach of (i)DMRG.
To set up low-effort trials, one should limit system size, bond dimension and the range of interactions, as well as (if possible) target a non-critical region of phase space. All these measures reduce the size of the MPS and/or the entanglement entropy it needs to capture, which yields both memory and run-time advantages. Of course, this introduces a trade-off between computational cost and accuracy, which is why one should be careful not to put too much faith into results obtained at this stage.
Detecting convergence issues¶
Ensuring that the results of an (i)DMRG simulation are well-converged and thus reliable is a hugely important part of any (i)DMRG study. Possible indications that there might be a convergence issue include:
The simulation shows a non-monotonic decrease of energy, and/or a non-monotonic increase of entanglement entropy. An increase of energy or decrease of entanglement entropy on subsequent steps within a sweep, or between subsequent sweeps, is particularly suspicious.
The simulation does not halt because it reached a convergence criterion, but because it reached its maximum number of sweeps.
Results vary wildly under small changes of parameters. In particular, if a small change in bond dimension yields a big change in results, one should be suspicious of the data.
Combating convergence issues¶
To combat convergence issues of the (i)DMRG algorithm, several strategies (short of switching to a different method) can be attempted:
Ensure that there are no errors in the model (see above) or the simulation set-up.
Increase the maximum bond dimension.
Ramp up the maximum bond dimension during the simulation, rather than starting at the highest value. I.e., define a schedule wherein the first \(N_{\mathrm{sweeps}}\) sweeps run at some \(\chi_1 < \chi_\mathrm{max}\), the next \(N_{\mathrm{sweeps}}\) at some \(\chi_2\) with \(\chi_1 < \chi_2 < \chi_{\mathrm{max}}\), etc. This can be done through the chi_list option of the DMRGEngine. You should also make sure that the max_hours option is set to sufficiently long runtimes.
Increase the maximum number of sweeps the algorithm is allowed to make, through the max_sweeps option of the DMRGEngine.
Change the Mixer settings to increase or decrease the effects of the mixer.
Change the convergence criteria. This will not overcome convergence issues in itself, but can help fine-tune the (i)DMRG simulation if it takes a long time to converge (relax the convergence constraints), or if the simulation finishes too soon (tighten the constraints). Criteria to consider are max_E_err and max_S_err in the DMRGEngine.
Increase the minimum number of sweeps taken by the algorithm. Again, this will not resolve issues due to bad convergence, but might prevent bad results due to premature convergence. This can be done through the min_sweeps option of the DMRGEngine.
Change the size and shape of the MPS unit cell (where possible), in case an artificially enforced translational invariance prevents the algorithm from finding a true ground state which is incommensurate with this periodicity. For example, a chain system whose true ground state is periodic in three sites will not be accurately represented by a two-site MPS unit cell, as the latter enforces two-site periodicity.
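Several of these strategies translate directly into DMRG options. The following dictionary is purely illustrative; all numbers are placeholders to be tuned to your model and resources:

```python
# hypothetical values, not recommendations -- tune them to your problem
dmrg_params = {
    'chi_list': {0: 100, 10: 300, 20: 1000},  # sweep number -> chi: ramp up the bond dimension
    'max_sweeps': 50,     # give the algorithm room to converge ...
    'min_sweeps': 10,     # ... but forbid premature convergence
    'max_E_err': 1.e-8,   # energy convergence criterion
    'max_S_err': 1.e-5,   # entanglement entropy convergence criterion
    'mixer': True,        # enable the mixer to help escape local minima
    'max_hours': 24.,     # make sure the chi schedule fits into the runtime
}
```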
In some instances, it is essentially unavoidable to encounter convergence issues. In particular, a simulation of a critical state can cause problems with (i)DMRG convergence, as these states violate the area law underlying an accurate MPS approximation. In these cases, one should acknowledge the difficulties imposed by the method and be very careful in interpreting the data.
Examples¶
Toycodes¶
These toycodes are meant to give you a flavor of the different algorithms,
while keeping the codes as readable and simple as possible.
The scripts are included in the [TeNPySource] repository in the folder toycodes/
,
but they are not part of the basic TeNPy library;
the only requirements to run them are Python 3, Numpy, and Scipy.
a_mps.py¶
"""Toy code implementing a matrix product state."""
# Copyright 2018-2021 TeNPy Developers, GNU GPLv3
import numpy as np
from scipy.linalg import svd
# if you get an error message "LinAlgError: SVD did not converge",
# uncomment the following line. (This requires TeNPy to be installed.)
# from tenpy.linalg.svd_robust import svd # (works like scipy.linalg.svd)
class SimpleMPS:
"""Simple class for a matrix product state.
We index sites with `i` from 0 to L-1; bond `i` is left of site `i`.
We *assume* that the state is in right-canonical form.
Parameters
----------
Bs, Ss, bc:
Same as attributes.
Attributes
----------
Bs : list of np.Array[ndim=3]
The 'matrices' in right-canonical form, one for each physical site
(within the unit-cell for an infinite MPS).
Each `B[i]` has legs (virtual left, physical, virtual right), in short ``vL i vR``
Ss : list of np.Array[ndim=1]
The Schmidt values at each of the bonds, ``Ss[i]`` is left of ``Bs[i]``.
bc : 'infinite', 'finite'
Boundary conditions.
L : int
Number of sites (in the unit-cell for an infinite MPS).
nbonds : int
Number of (non-trivial) bonds: L-1 for 'finite' boundary conditions, L for 'infinite'.
"""
def __init__(self, Bs, Ss, bc='finite'):
assert bc in ['finite', 'infinite']
self.Bs = Bs
self.Ss = Ss
self.bc = bc
self.L = len(Bs)
self.nbonds = self.L - 1 if self.bc == 'finite' else self.L
def copy(self):
return SimpleMPS([B.copy() for B in self.Bs], [S.copy() for S in self.Ss], self.bc)
def get_theta1(self, i):
"""Calculate effective single-site wave function on site i in mixed canonical form.
The returned array has legs ``vL, i, vR`` (as one of the Bs).
"""
return np.tensordot(np.diag(self.Ss[i]), self.Bs[i], [1, 0]) # vL [vL'], [vL] i vR
def get_theta2(self, i):
"""Calculate effective two-site wave function on sites i,j=(i+1) in mixed canonical form.
The returned array has legs ``vL, i, j, vR``.
"""
j = (i + 1) % self.L
return np.tensordot(self.get_theta1(i), self.Bs[j], [2, 0]) # vL i [vR], [vL] j vR
def get_chi(self):
"""Return bond dimensions."""
return [self.Bs[i].shape[2] for i in range(self.nbonds)]
def site_expectation_value(self, op):
"""Calculate expectation values of a local operator at each site."""
result = []
for i in range(self.L):
theta = self.get_theta1(i) # vL i vR
op_theta = np.tensordot(op, theta, axes=[1, 1]) # i [i*], vL [i] vR
result.append(np.tensordot(theta.conj(), op_theta, [[0, 1, 2], [1, 0, 2]]))
# [vL*] [i*] [vR*], [i] [vL] [vR]
return np.real_if_close(result)
def bond_expectation_value(self, op):
"""Calculate expectation values of a local operator at each bond."""
result = []
for i in range(self.nbonds):
theta = self.get_theta2(i) # vL i j vR
op_theta = np.tensordot(op[i], theta, axes=[[2, 3], [1, 2]])
# i j [i*] [j*], vL [i] [j] vR
result.append(np.tensordot(theta.conj(), op_theta, [[0, 1, 2, 3], [2, 0, 1, 3]]))
# [vL*] [i*] [j*] [vR*], [i] [j] [vL] [vR]
return np.real_if_close(result)
def entanglement_entropy(self):
"""Return the (von-Neumann) entanglement entropy for a bipartition at any of the bonds."""
bonds = range(1, self.L) if self.bc == 'finite' else range(0, self.L)
result = []
for i in bonds:
S = self.Ss[i].copy()
S[S < 1.e-20] = 0. # 0*log(0) should give 0; avoid warning or NaN.
S2 = S * S
assert abs(np.linalg.norm(S) - 1.) < 1.e-14
result.append(-np.sum(S2 * np.log(S2)))
return np.array(result)
def correlation_length(self):
"""Diagonalize transfer matrix to obtain the correlation length."""
import scipy.sparse.linalg as arp  # `scipy.sparse.linalg.eigen.arpack` was removed in newer scipy
assert self.bc == 'infinite' # works only in the infinite case
B = self.Bs[0] # vL i vR
chi = B.shape[0]
T = np.tensordot(B, np.conj(B), axes=[1, 1]) # vL [i] vR, vL* [i*] vR*
T = np.transpose(T, [0, 2, 1, 3]) # vL vL* vR vR*
for i in range(1, self.L):
B = self.Bs[i]
T = np.tensordot(T, B, axes=[2, 0]) # vL vL* [vR] vR*, [vL] i vR
T = np.tensordot(T, np.conj(B), axes=[[2, 3], [0, 1]])
# vL vL* [vR*] [i] vR, [vL*] [i*] vR*
T = np.reshape(T, (chi**2, chi**2))
# Obtain the 2nd largest eigenvalue
eta = arp.eigs(T, k=2, which='LM', return_eigenvectors=False, ncv=20)
return -self.L / np.log(np.min(np.abs(eta)))
def init_FM_MPS(L, d, bc='finite'):
"""Return a ferromagnetic MPS (= product state with all spins up)"""
B = np.zeros([1, d, 1], dtype=float)
B[0, 0, 0] = 1.
S = np.ones([1], dtype=float)
Bs = [B.copy() for i in range(L)]
Ss = [S.copy() for i in range(L)]
return SimpleMPS(Bs, Ss, bc)
def split_truncate_theta(theta, chi_max, eps):
"""Split and truncate a two-site wave function in mixed canonical form.
Split a two-site wave function as follows::
vL --(theta)-- vR => vL --(A)--diag(S)--(B)-- vR
| | | |
i j i j
Afterwards, truncate in the new leg (labeled ``vC``).
Parameters
----------
theta : np.Array[ndim=4]
Two-site wave function in mixed canonical form, with legs ``vL, i, j, vR``.
chi_max : int
Maximum number of singular values to keep
eps : float
Discard any singular values smaller than that.
Returns
-------
A : np.Array[ndim=3]
Left-canonical matrix on site i, with legs ``vL, i, vC``
S : np.Array[ndim=1]
Singular/Schmidt values.
B : np.Array[ndim=3]
Right-canonical matrix on site j, with legs ``vC, j, vR``
"""
chivL, dL, dR, chivR = theta.shape
theta = np.reshape(theta, [chivL * dL, dR * chivR])
X, Y, Z = svd(theta, full_matrices=False)
# truncate
chivC = min(chi_max, np.sum(Y > eps))
piv = np.argsort(Y)[::-1][:chivC] # keep the largest `chivC` singular values
X, Y, Z = X[:, piv], Y[piv], Z[piv, :]
# renormalize
S = Y / np.linalg.norm(Y) # == Y/sqrt(sum(Y**2))
# split legs of X and Z
A = np.reshape(X, [chivL, dL, chivC])
B = np.reshape(Z, [chivC, dR, chivR])
return A, S, B
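As a quick sanity check of the truncation step above (a numpy-only sketch under the same conventions, not part of the toycode), one can verify on a random two-site wave function that the squared truncation error equals the discarded Schmidt weight:

```python
import numpy as np
from scipy.linalg import svd

rng = np.random.default_rng(0)
chi, d = 4, 2
theta = rng.normal(size=(chi, d, d, chi))
theta /= np.linalg.norm(theta)  # normalize like an MPS wave function

# reshape to a (vL*i, j*vR) matrix and perform the SVD, as in split_truncate_theta
mat = theta.reshape(chi * d, d * chi)
X, Y, Z = svd(mat, full_matrices=False)

chi_max, eps = 6, 1.e-12
chivC = min(chi_max, int(np.sum(Y > eps)))
piv = np.argsort(Y)[::-1][:chivC]  # keep the largest `chivC` singular values
X, Y, Z = X[:, piv], Y[piv], Z[piv, :]

approx = (X * Y) @ Z  # reconstruct the truncated matrix
err = np.linalg.norm(mat - approx)**2  # = sum of the discarded squared singular values
print("kept chi =", chivC, " truncation error =", err)
```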
b_model.py¶
"""Toy code implementing the transverse-field ising model."""
# Copyright 2018-2021 TeNPy Developers, GNU GPLv3
import numpy as np
class TFIModel:
"""Simple class generating the Hamiltonian of the transverse-field Ising model.
The Hamiltonian reads
.. math ::
H = - J \\sum_{i} \\sigma^x_i \\sigma^x_{i+1} - g \\sum_{i} \\sigma^z_i
Parameters
----------
L : int
Number of sites.
J, g : float
Coupling parameters of the above defined Hamiltonian.
bc : 'infinite', 'finite'
Boundary conditions.
Attributes
----------
L : int
Number of sites.
bc : 'infinite', 'finite'
Boundary conditions.
sigmax, sigmay, sigmaz, id :
Local operators, namely the Pauli matrices and identity.
H_bonds : list of np.Array[ndim=4]
The Hamiltonian written in terms of local 2-site operators, ``H = sum_i H_bonds[i]``.
Each ``H_bonds[i]`` has (physical) legs (i out, (i+1) out, i in, (i+1) in),
in short ``i j i* j*``.
H_mpo : list of np.Array[ndim=4]
The Hamiltonian written as an MPO.
Each ``H_mpo[i]`` has legs (virtual left, virtual right, physical out, physical in),
in short ``wL wR i i*``.
"""
def __init__(self, L, J, g, bc='finite'):
assert bc in ['finite', 'infinite']
self.L, self.d, self.bc = L, 2, bc
self.J, self.g = J, g
self.sigmax = np.array([[0., 1.], [1., 0.]])
self.sigmay = np.array([[0., -1j], [1j, 0.]])
self.sigmaz = np.array([[1., 0.], [0., -1.]])
self.id = np.eye(2)
self.init_H_bonds()
self.init_H_mpo()
def init_H_bonds(self):
"""Initialize the `H_bonds` Hamiltonian.
Called by __init__().
"""
sx, sz, id = self.sigmax, self.sigmaz, self.id
d = self.d
nbonds = self.L - 1 if self.bc == 'finite' else self.L
H_list = []
for i in range(nbonds):
gL = gR = 0.5 * self.g
if self.bc == 'finite':
if i == 0:
gL = self.g
if i + 1 == self.L - 1:
gR = self.g
H_bond = -self.J * np.kron(sx, sx) - gL * np.kron(sz, id) - gR * np.kron(id, sz)
# H_bond has legs ``i, j, i*, j*``
H_list.append(np.reshape(H_bond, [d, d, d, d]))
self.H_bonds = H_list
# (note: not required for TEBD)
def init_H_mpo(self):
"""Initialize `H_mpo` Hamiltonian.
Called by __init__().
"""
w_list = []
for i in range(self.L):
w = np.zeros((3, 3, self.d, self.d), dtype=float)
w[0, 0] = w[2, 2] = self.id
w[0, 1] = self.sigmax
w[0, 2] = -self.g * self.sigmaz
w[1, 2] = -self.J * self.sigmax
w_list.append(w)
self.H_mpo = w_list
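The MPO construction above can be cross-checked (a numpy-only sketch, not part of the toycode): contracting the `w` tensors along a short chain, with the boundary indices 0 and 2 that the DMRG environments below also use, reproduces the dense Hamiltonian built from Kronecker products:

```python
import numpy as np

L, J, g = 3, 1., 0.7
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
id2 = np.eye(2)

# MPO tensor w with legs (wL, wR, i, i*), exactly as in init_H_mpo
w = np.zeros((3, 3, 2, 2))
w[0, 0] = w[2, 2] = id2
w[0, 1] = sx
w[0, 2] = -g * sz
w[1, 2] = -J * sx

# contract the MPO along the chain; start with row 0, end with column 2
T = w[0]  # legs: wR i i*
for _ in range(L - 1):
    T = np.einsum('aij,abkl->bikjl', T, w)  # attach one more site
    T = T.reshape(3, T.shape[1] * T.shape[2], T.shape[3] * T.shape[4])
H_mpo_dense = T[2]  # select wR = 2, the "finished" state of the MPO

# dense Hamiltonian built directly with Kronecker products
H = np.zeros((2**L, 2**L))
for i in range(L - 1):
    ops = [id2] * L; ops[i] = sx; ops[i + 1] = sx
    term = ops[0]
    for op in ops[1:]:
        term = np.kron(term, op)
    H -= J * term
for i in range(L):
    ops = [id2] * L; ops[i] = sz
    term = ops[0]
    for op in ops[1:]:
        term = np.kron(term, op)
    H -= g * term

print(np.allclose(H_mpo_dense, H))
```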
c_tebd.py¶
"""Toy code implementing the time evolving block decimation (TEBD)."""
# Copyright 2018-2021 TeNPy Developers, GNU GPLv3
import numpy as np
from scipy.linalg import expm
from a_mps import split_truncate_theta
def calc_U_bonds(H_bonds, dt):
"""Given the H_bonds, calculate ``U_bonds[i] = expm(-dt*H_bonds[i])``.
Each local operator has legs (i out, (i+1) out, i in, (i+1) in), in short ``i j i* j*``.
Note that no imaginary 'i' is included, thus real `dt` means 'imaginary time' evolution!
"""
d = H_bonds[0].shape[0]
U_bonds = []
for H in H_bonds:
H = np.reshape(H, [d * d, d * d])
U = expm(-dt * H)
U_bonds.append(np.reshape(U, [d, d, d, d]))
return U_bonds
def run_TEBD(psi, U_bonds, N_steps, chi_max, eps):
"""Evolve for `N_steps` time steps with TEBD."""
Nbonds = psi.L - 1 if psi.bc == 'finite' else psi.L
assert len(U_bonds) == Nbonds
for n in range(N_steps):
for k in [0, 1]: # even, odd
for i_bond in range(k, Nbonds, 2):
update_bond(psi, i_bond, U_bonds[i_bond], chi_max, eps)
# done
def update_bond(psi, i, U_bond, chi_max, eps):
"""Apply `U_bond` acting on i,j=(i+1) to `psi`."""
j = (i + 1) % psi.L
# construct theta matrix
theta = psi.get_theta2(i) # vL i j vR
# apply U
Utheta = np.tensordot(U_bond, theta, axes=([2, 3], [1, 2])) # i j [i*] [j*], vL [i] [j] vR
Utheta = np.transpose(Utheta, [2, 0, 1, 3]) # vL i j vR
# split and truncate
Ai, Sj, Bj = split_truncate_theta(Utheta, chi_max, eps)
# put back into MPS
Gi = np.tensordot(np.diag(psi.Ss[i]**(-1)), Ai, axes=[1, 0]) # vL [vL*], [vL] i vC
psi.Bs[i] = np.tensordot(Gi, np.diag(Sj), axes=[2, 0]) # vL i [vC], [vC] vC
psi.Ss[j] = Sj # vC
psi.Bs[j] = Bj # vC j vR
def example_TEBD_gs_tf_ising_finite(L, g):
print("finite TEBD, imaginary time evolution, transverse field Ising")
print("L={L:d}, g={g:.2f}".format(L=L, g=g))
import a_mps
import b_model
M = b_model.TFIModel(L=L, J=1., g=g, bc='finite')
psi = a_mps.init_FM_MPS(M.L, M.d, M.bc)
for dt in [0.1, 0.01, 0.001, 1.e-4, 1.e-5]:
U_bonds = calc_U_bonds(M.H_bonds, dt)
run_TEBD(psi, U_bonds, N_steps=500, chi_max=30, eps=1.e-10)
E = np.sum(psi.bond_expectation_value(M.H_bonds))
print("dt = {dt:.5f}: E = {E:.13f}".format(dt=dt, E=E))
print("final bond dimensions: ", psi.get_chi())
mag_x = np.sum(psi.site_expectation_value(M.sigmax))
mag_z = np.sum(psi.site_expectation_value(M.sigmaz))
print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
if L < 20: # compare to exact result
from tfi_exact import finite_gs_energy
E_exact = finite_gs_energy(L, 1., g)
print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E - E_exact) / E_exact))
return E, psi, M
def example_TEBD_gs_tf_ising_infinite(g):
print("infinite TEBD, imaginary time evolution, transverse field Ising")
print("g={g:.2f}".format(g=g))
import a_mps
import b_model
M = b_model.TFIModel(L=2, J=1., g=g, bc='infinite')
psi = a_mps.init_FM_MPS(M.L, M.d, M.bc)
for dt in [0.1, 0.01, 0.001, 1.e-4, 1.e-5]:
U_bonds = calc_U_bonds(M.H_bonds, dt)
run_TEBD(psi, U_bonds, N_steps=500, chi_max=30, eps=1.e-10)
E = np.mean(psi.bond_expectation_value(M.H_bonds))
print("dt = {dt:.5f}: E (per site) = {E:.13f}".format(dt=dt, E=E))
print("final bond dimensions: ", psi.get_chi())
mag_x = np.mean(psi.site_expectation_value(M.sigmax))
mag_z = np.mean(psi.site_expectation_value(M.sigmaz))
print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
print("correlation length:", psi.correlation_length())
# compare to exact result
from tfi_exact import infinite_gs_energy
E_exact = infinite_gs_energy(1., g)
print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E - E_exact) / E_exact))
return E, psi, M
def example_TEBD_tf_ising_lightcone(L, g, tmax, dt):
print("finite TEBD, real time evolution, transverse field Ising")
print("L={L:d}, g={g:.2f}, tmax={tmax:.2f}, dt={dt:.3f}".format(L=L, g=g, tmax=tmax, dt=dt))
# find ground state with TEBD or DMRG
# E, psi, M = example_TEBD_gs_tf_ising_finite(L, g)
from d_dmrg import example_DMRG_tf_ising_finite
E, psi, M = example_DMRG_tf_ising_finite(L, g)
i0 = L // 2
# apply sigmaz on site i0
SzB = np.tensordot(M.sigmaz, psi.Bs[i0], axes=[1, 1]) # i [i*], vL [i] vR
psi.Bs[i0] = np.transpose(SzB, [1, 0, 2]) # vL i vR
U_bonds = calc_U_bonds(M.H_bonds, 1.j * dt) # (imaginary dt -> real-time evolution)
S = [psi.entanglement_entropy()]
Nsteps = int(tmax / dt + 0.5)
for n in range(Nsteps):
if abs((n * dt + 0.1) % 0.2 - 0.1) < 1.e-10:
print("t = {t:.2f}, chi =".format(t=n * dt), psi.get_chi())
run_TEBD(psi, U_bonds, 1, chi_max=50, eps=1.e-10)
S.append(psi.entanglement_entropy())
import matplotlib.pyplot as plt
plt.figure()
plt.imshow(S[::-1],
vmin=0.,
aspect='auto',
interpolation='nearest',
extent=(0, L - 1., -0.5 * dt, (Nsteps + 0.5) * dt))
plt.xlabel('site $i$')
plt.ylabel('time $t/J$')
plt.ylim(0., tmax)
plt.colorbar().set_label('entropy $S$')
filename = 'c_tebd_lightcone_{g:.2f}.pdf'.format(g=g)
plt.savefig(filename)
print("saved " + filename)
if __name__ == "__main__":
example_TEBD_gs_tf_ising_finite(L=10, g=1.)
print("-" * 100)
example_TEBD_gs_tf_ising_infinite(g=1.5)
print("-" * 100)
example_TEBD_tf_ising_lightcone(L=20, g=1.5, tmax=3., dt=0.01)
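Why does imaginary-time TEBD find the ground state? Repeatedly applying `expm(-dt*H)` and renormalizing (as `run_TEBD` does, bond by bond) projects any state with nonzero overlap onto the ground state. A minimal sketch on a single two-site TFI Hamiltonian (assuming J=1, g=1.5; not part of the toycode):

```python
import numpy as np
from scipy.linalg import expm, eigh

# two-site transverse-field Ising Hamiltonian, as in b_model.TFIModel
J, g = 1., 1.5
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
id2 = np.eye(2)
H = -J * np.kron(sx, sx) - g * (np.kron(sz, id2) + np.kron(id2, sz))

U = expm(-0.1 * H)  # one imaginary-time step, dt = 0.1
psi = np.ones(4) / 2.  # initial state with nonzero ground-state overlap
for _ in range(100):
    psi = U @ psi
    psi /= np.linalg.norm(psi)  # renormalize, as TEBD does after truncation

E = psi @ H @ psi  # converges to the exact ground-state energy
E0 = eigh(H, eigvals_only=True)[0]
print(E, E0)
```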
d_dmrg.py¶
"""Toy code implementing the density-matrix renormalization group (DMRG)."""
# Copyright 2018-2021 TeNPy Developers, GNU GPLv3
import numpy as np
from a_mps import split_truncate_theta
import scipy.sparse
import scipy.sparse.linalg as arp  # (the old path scipy.sparse.linalg.eigen.arpack was removed in SciPy 1.8)
class SimpleHeff(scipy.sparse.linalg.LinearOperator):
"""Class for the effective Hamiltonian.
To be diagonalized in `SimpleDMRGEngine.update_bond`. Looks like this::
.--vL* vR*--.
| i* j* |
| | | |
(LP)---(W1)--(W2)----(RP)
| | | |
| i j |
.--vL vR--.
"""
def __init__(self, LP, RP, W1, W2):
self.LP = LP # vL wL* vL*
self.RP = RP # vR* wR* vR
self.W1 = W1 # wL wC i i*
self.W2 = W2 # wC wR j j*
chi1, chi2 = LP.shape[0], RP.shape[2]
d1, d2 = W1.shape[2], W2.shape[2]
self.theta_shape = (chi1, d1, d2, chi2) # vL i j vR
self.shape = (chi1 * d1 * d2 * chi2, chi1 * d1 * d2 * chi2)
self.dtype = W1.dtype
def _matvec(self, theta):
"""Calculate |theta'> = H_eff |theta>.
This function is used by :func:scipy.sparse.linalg.eigen.arpack.eigsh` to diagonalize
the effective Hamiltonian with a Lanczos method, withouth generating the full matrix."""
x = np.reshape(theta, self.theta_shape) # vL i j vR
x = np.tensordot(self.LP, x, axes=(2, 0)) # vL wL* [vL*], [vL] i j vR
x = np.tensordot(x, self.W1, axes=([1, 2], [0, 3])) # vL [wL*] [i] j vR, [wL] wC i [i*]
x = np.tensordot(x, self.W2, axes=([3, 1], [0, 3])) # vL [j] vR [wC] i, [wC] wR j [j*]
x = np.tensordot(x, self.RP, axes=([1, 3], [0, 1])) # vL [vR] i [wR] j, [vR*] [wR*] vR
x = np.reshape(x, self.shape[0])
return x
class SimpleDMRGEngine:
"""DMRG algorithm, implemented as class holding the necessary data.
Parameters
----------
psi, model, chi_max, eps:
See attributes
Attributes
----------
psi : SimpleMPS
The current ground-state (approximation).
model :
The model whose ground state is to be calculated.
chi_max, eps:
Truncation parameters, see :func:`a_mps.split_truncate_theta`.
LPs, RPs : list of np.Array[ndim=3]
Left and right parts ("environments") of the effective Hamiltonian.
``LPs[i]`` is the contraction of all parts left of site `i` in the network ``<psi|H|psi>``,
and similar ``RPs[i]`` for all parts right of site `i`.
Each ``LPs[i]`` has legs ``vL wL* vL*``, each ``RPs[i]`` has legs ``vR* wR* vR``.
"""
def __init__(self, psi, model, chi_max, eps):
assert psi.L == model.L and psi.bc == model.bc # ensure compatibility
self.H_mpo = model.H_mpo
self.psi = psi
self.LPs = [None] * psi.L
self.RPs = [None] * psi.L
self.chi_max = chi_max
self.eps = eps
# initialize left and right environment
D = self.H_mpo[0].shape[0]
chi = psi.Bs[0].shape[0]
LP = np.zeros([chi, D, chi], dtype=float) # vL wL* vL*
RP = np.zeros([chi, D, chi], dtype=float) # vR* wR* vR
LP[:, 0, :] = np.eye(chi)
RP[:, D - 1, :] = np.eye(chi)
self.LPs[0] = LP
self.RPs[-1] = RP
# initialize necessary RPs
for i in range(psi.L - 1, 1, -1):
self.update_RP(i)
def sweep(self):
# sweep from left to right
for i in range(self.psi.nbonds - 1):
self.update_bond(i)
# sweep from right to left
for i in range(self.psi.nbonds - 1, 0, -1):
self.update_bond(i)
def update_bond(self, i):
j = (i + 1) % self.psi.L
# get effective Hamiltonian
Heff = SimpleHeff(self.LPs[i], self.RPs[j], self.H_mpo[i], self.H_mpo[j])
# Diagonalize Heff, find ground state `theta`
theta0 = np.reshape(self.psi.get_theta2(i), [Heff.shape[0]]) # initial guess
e, v = arp.eigsh(Heff, k=1, which='SA', return_eigenvectors=True, v0=theta0)
theta = np.reshape(v[:, 0], Heff.theta_shape)
# split and truncate
Ai, Sj, Bj = split_truncate_theta(theta, self.chi_max, self.eps)
# put back into MPS
Gi = np.tensordot(np.diag(self.psi.Ss[i]**(-1)), Ai, axes=[1, 0]) # vL [vL*], [vL] i vC
self.psi.Bs[i] = np.tensordot(Gi, np.diag(Sj), axes=[2, 0]) # vL i [vC], [vC*] vC
self.psi.Ss[j] = Sj # vC
self.psi.Bs[j] = Bj # vC j vR
self.update_LP(i)
self.update_RP(j)
def update_RP(self, i):
"""Calculate RP right of site `i-1` from RP right of site `i`."""
j = (i - 1) % self.psi.L
RP = self.RPs[i] # vR* wR* vR
B = self.psi.Bs[i] # vL i vR
Bc = B.conj() # vL* i* vR*
W = self.H_mpo[i] # wL wR i i*
RP = np.tensordot(B, RP, axes=[2, 0]) # vL i [vR], [vR*] wR* vR
RP = np.tensordot(RP, W, axes=[[1, 2], [3, 1]]) # vL [i] [wR*] vR, wL [wR] i [i*]
RP = np.tensordot(RP, Bc, axes=[[1, 3], [2, 1]]) # vL [vR] wL [i], vL* [i*] [vR*]
self.RPs[j] = RP # vL wL vL* (== vR* wR* vR on site i-1)
def update_LP(self, i):
"""Calculate LP left of site `i+1` from LP left of site `i`."""
j = (i + 1) % self.psi.L
LP = self.LPs[i] # vL wL vL*
B = self.psi.Bs[i] # vL i vR
G = np.tensordot(np.diag(self.psi.Ss[i]), B, axes=[1, 0]) # vL [vL*], [vL] i vR
A = np.tensordot(G, np.diag(self.psi.Ss[j]**-1), axes=[2, 0]) # vL i [vR], [vR*] vR
Ac = A.conj() # vL* i* vR*
W = self.H_mpo[i] # wL wR i i*
LP = np.tensordot(LP, A, axes=[2, 0]) # vL wL* [vL*], [vL] i vR
LP = np.tensordot(W, LP, axes=[[0, 3], [1, 2]]) # [wL] wR i [i*], vL [wL*] [i] vR
LP = np.tensordot(Ac, LP, axes=[[0, 1], [2, 1]]) # [vL*] [i*] vR*, wR [i] [vL] vR
self.LPs[j] = LP # vR* wR vR (== vL wL* vL* on site i+1)
def example_DMRG_tf_ising_finite(L, g):
print("finite DMRG, transverse field Ising")
print("L={L:d}, g={g:.2f}".format(L=L, g=g))
import a_mps
import b_model
M = b_model.TFIModel(L=L, J=1., g=g, bc='finite')
psi = a_mps.init_FM_MPS(M.L, M.d, M.bc)
eng = SimpleDMRGEngine(psi, M, chi_max=30, eps=1.e-10)
for i in range(10):
eng.sweep()
E = np.sum(psi.bond_expectation_value(M.H_bonds))
print("sweep {i:2d}: E = {E:.13f}".format(i=i + 1, E=E))
print("final bond dimensions: ", psi.get_chi())
mag_x = np.sum(psi.site_expectation_value(M.sigmax))
mag_z = np.sum(psi.site_expectation_value(M.sigmaz))
print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
if L < 20: # compare to exact result
from tfi_exact import finite_gs_energy
E_exact = finite_gs_energy(L, 1., g)
print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E - E_exact) / E_exact))
return E, psi, M
def example_DMRG_tf_ising_infinite(g):
print("infinite DMRG, transverse field Ising")
print("g={g:.2f}".format(g=g))
import a_mps
import b_model
M = b_model.TFIModel(L=2, J=1., g=g, bc='infinite')
psi = a_mps.init_FM_MPS(M.L, M.d, M.bc)
eng = SimpleDMRGEngine(psi, M, chi_max=20, eps=1.e-14)
for i in range(20):
eng.sweep()
E = np.mean(psi.bond_expectation_value(M.H_bonds))
print("sweep {i:2d}: E (per site) = {E:.13f}".format(i=i + 1, E=E))
print("final bond dimensions: ", psi.get_chi())
mag_x = np.mean(psi.site_expectation_value(M.sigmax))
mag_z = np.mean(psi.site_expectation_value(M.sigmaz))
print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
print("correlation length:", psi.correlation_length())
# compare to exact result
from tfi_exact import infinite_gs_energy
E_exact = infinite_gs_energy(1., g)
print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E - E_exact) / E_exact))
return E, psi, M
if __name__ == "__main__":
example_DMRG_tf_ising_finite(L=10, g=1.)
print("-" * 100)
example_DMRG_tf_ising_infinite(g=1.5)
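`SimpleHeff` exploits the fact that `scipy.sparse.linalg.eigsh` only needs a matrix-vector product, never the full matrix. A stripped-down sketch of the same mechanism (the class `MatvecOnly` is hypothetical, for illustration only):

```python
import numpy as np
import scipy.sparse.linalg

class MatvecOnly(scipy.sparse.linalg.LinearOperator):
    """Hermitian operator defined only through its matrix-vector product."""
    def __init__(self, A):
        self.A = A
        super().__init__(A.dtype, A.shape)
    def _matvec(self, x):
        # stands in for the tensordot chain of SimpleHeff._matvec
        return self.A @ x

rng = np.random.default_rng(2)
A = rng.normal(size=(20, 20))
A = 0.5 * (A + A.T)  # make it Hermitian

op = MatvecOnly(A)
v0 = rng.normal(size=20)  # initial Lanczos vector, like theta0 in update_bond
e, v = scipy.sparse.linalg.eigsh(op, k=1, which='SA', v0=v0)
print(e[0], np.linalg.eigvalsh(A)[0])  # Lanczos vs. dense diagonalization
```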
tfi_exact.py¶
"""Provides exact ground state energies for the transverse field ising model for comparison.
The Hamiltonian reads
.. math ::
H = - J \\sum_{i} \\sigma^x_i \\sigma^x_{i+1} - g \\sum_{i} \\sigma^z_i
"""
# Copyright 2019-2021 TeNPy Developers, GNU GPLv3
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as arp  # (the old path scipy.sparse.linalg.eigen.arpack was removed in SciPy 1.8)
import warnings
import scipy.integrate
def finite_gs_energy(L, J, g):
"""For comparison: obtain ground state energy from exact diagonalization.
Exponentially expensive in L, only works for small enough `L` <~ 20.
"""
if L >= 20:
warnings.warn("Large L: Exact diagonalization might take a long time!")
# get single-site operators
sx = sparse.csr_matrix(np.array([[0., 1.], [1., 0.]]))
sz = sparse.csr_matrix(np.array([[1., 0.], [0., -1.]]))
id = sparse.csr_matrix(np.eye(2))
sx_list = [] # sx_list[i] = kron([id, id, ..., id, sx, id, .... id])
sz_list = []
for i_site in range(L):
x_ops = [id] * L
z_ops = [id] * L
x_ops[i_site] = sx
z_ops[i_site] = sz
X = x_ops[0]
Z = z_ops[0]
for j in range(1, L):
X = sparse.kron(X, x_ops[j], 'csr')
Z = sparse.kron(Z, z_ops[j], 'csr')
sx_list.append(X)
sz_list.append(Z)
H_xx = sparse.csr_matrix((2**L, 2**L))
H_z = sparse.csr_matrix((2**L, 2**L))
for i in range(L - 1):
H_xx = H_xx + sx_list[i] * sx_list[(i + 1) % L]
for i in range(L):
H_z = H_z + sz_list[i]
H = -J * H_xx - g * H_z
E, V = arp.eigsh(H, k=1, which='SA', return_eigenvectors=True, ncv=20)
return E[0]
def infinite_gs_energy(J, g):
"""For comparison: Calculate groundstate energy density from analytic formula.
The analytic formula stems from mapping the model to free fermions; see P. Pfeuty, The one-
dimensional Ising model with a transverse field, Annals of Physics 57, p. 79 (1970). Note that
we use Pauli matrices, whereas this reference uses spin-1/2 matrices, and that we replace the
sum_k -> integral dk/2pi to obtain the result in the N -> infinity limit.
"""
def f(k, lambda_):
return np.sqrt(1 + lambda_**2 + 2 * lambda_ * np.cos(k))
E0_exact = -g / (J * 2. * np.pi) * scipy.integrate.quad(f, -np.pi, np.pi, args=(J / g, ))[0]
return E0_exact
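At the critical point J = g, the integral in `infinite_gs_energy` can be done by hand: the dispersion becomes `2|cos(k/2)|`, the integral evaluates to 8, and hence E0 = -4/pi per site. A quick numerical check (not part of the toycode):

```python
import numpy as np
import scipy.integrate

def f(k, lambda_):
    return np.sqrt(1 + lambda_**2 + 2 * lambda_ * np.cos(k))

J = g = 1.  # critical point of the transverse-field Ising chain
E0 = -g / (J * 2. * np.pi) * scipy.integrate.quad(f, -np.pi, np.pi, args=(J / g, ))[0]
# at J = g: f(k, 1) = 2|cos(k/2)|, whose integral over [-pi, pi] is 8,
# so E0 = -8 / (2 pi) = -4 / pi per site
print(E0, -4. / np.pi)
```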
Python scripts¶
These example scripts illustrate the very basic interface for calling TeNPy.
They are included in the [TeNPySource] repository in the folder examples/;
we include them here in the documentation for reference.
You need to install TeNPy to call them (see Installation instructions), but you can copy them anywhere before execution.
(Some scripts include other files from the same folder, though; copy those as well.)
a_np_conserved.py¶
"""An example code to demonstrate the usage of :class:`~tenpy.linalg.np_conserved.Array`.
This example includes the following steps:
1) create Arrays for a Neel MPS
2) create an MPO representing the nearest-neighbour AFM Heisenberg Hamiltonian
3) define 'environments' left and right
4) contract MPS and MPO to calculate the energy
5) extract the two-site Hamiltonian ``H2`` from the MPO
6) calculate ``exp(-1.j*dt*H2)`` by diagonalization of H2
7) apply ``exp(H2)`` to two sites of the MPS and truncate with svd
Note that this example uses only np_conserved, but no other modules.
Compare it to the example `b_mps.py`,
which does the same steps using a few predefined classes like MPS and MPO.
"""
# Copyright 2018-2021 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
import numpy as np
# model parameters
Jxx, Jz = 1., 1.
L = 20
dt = 0.1
cutoff = 1.e-10
print("Jxx={Jxx}, Jz={Jz}, L={L:d}".format(Jxx=Jxx, Jz=Jz, L=L))
print("1) create Arrays for an Neel MPS")
# vL ->--B-->- vR
# |
# ^
# |
# p
# create a ChargeInfo to specify the nature of the charge
chinfo = npc.ChargeInfo([1], ['2*Sz']) # the second argument is just a descriptive name
# create LegCharges on physical leg and even/odd bonds
p_leg = npc.LegCharge.from_qflat(chinfo, [[1], [-1]]) # charges for up, down
v_leg_even = npc.LegCharge.from_qflat(chinfo, [[0]])
v_leg_odd = npc.LegCharge.from_qflat(chinfo, [[1]])
B_even = npc.zeros([v_leg_even, v_leg_odd.conj(), p_leg],
labels=['vL', 'vR', 'p']) # virtual left/right, physical
B_odd = npc.zeros([v_leg_odd, v_leg_even.conj(), p_leg], labels=['vL', 'vR', 'p'])
B_even[0, 0, 0] = 1. # up
B_odd[0, 0, 1] = 1. # down
Bs = [B_even, B_odd] * (L // 2) + [B_even] * (L % 2) # (right-canonical)
Ss = [np.ones(1)] * L # Ss[i] are singular values between Bs[i-1] and Bs[i]
# Side remark:
# An MPS is expected to have non-zero entries everywhere compatible with the charges.
# In general, we recommend using `sort_legcharge` (or `as_completely_blocked`)
# to ensure complete blocking. (But the code will also work if you don't.)
# The drawback is that this might introduce permutations in the indices of single legs,
# which you have to keep in mind when converting dense numpy arrays to and from npc.Arrays.
print("2) create an MPO representing the AFM Heisenberg Hamiltonian")
# p*
# |
# ^
# |
# wL ->--W-->- wR
# |
# ^
# |
# p
# create physical spin-1/2 operators Sz, S+, S-
Sz = npc.Array.from_ndarray([[0.5, 0.], [0., -0.5]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Sp = npc.Array.from_ndarray([[0., 1.], [0., 0.]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Sm = npc.Array.from_ndarray([[0., 0.], [1., 0.]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Id = npc.eye_like(Sz, labels=Sz.get_leg_labels()) # identity
mpo_leg = npc.LegCharge.from_qflat(chinfo, [[0], [2], [-2], [0], [0]])
W_grid = [[Id, Sp, Sm, Sz, None ],
[None, None, None, None, 0.5 * Jxx * Sm],
[None, None, None, None, 0.5 * Jxx * Sp],
[None, None, None, None, Jz * Sz ],
[None, None, None, None, Id ]] # yapf:disable
W = npc.grid_outer(W_grid, [mpo_leg, mpo_leg.conj()], grid_labels=['wL', 'wR'])
# wL/wR = virtual left/right of the MPO
Ws = [W] * L
print("3) define 'environments' left and right")
# .---->- vR vL ->----.
# | |
# envL->- wR wL ->-envR
# | |
# .---->- vR* vL*->----.
envL = npc.zeros([W.get_leg('wL').conj(), Bs[0].get_leg('vL').conj(), Bs[0].get_leg('vL')],
labels=['wR', 'vR', 'vR*'])
envL[0, :, :] = npc.diag(1., envL.legs[1])
envR = npc.zeros([W.get_leg('wR').conj(), Bs[-1].get_leg('vR').conj(), Bs[-1].get_leg('vR')],
labels=['wL', 'vL', 'vL*'])
envR[-1, :, :] = npc.diag(1., envR.legs[1])
print("4) contract MPS and MPO to calculate the energy <psi|H|psi>")
contr = envL
for i in range(L):
# contr labels: wR, vR, vR*
contr = npc.tensordot(contr, Bs[i], axes=('vR', 'vL'))
# wR, vR*, vR, p
contr = npc.tensordot(contr, Ws[i], axes=(['p', 'wR'], ['p*', 'wL']))
# vR*, vR, wR, p
contr = npc.tensordot(contr, Bs[i].conj(), axes=(['p', 'vR*'], ['p*', 'vL*']))
# vR, wR, vR*
# note that the order of the legs changed, but that's no problem with labels:
# the arrays are automatically transposed as necessary
E = npc.inner(contr, envR, axes=(['vR', 'wR', 'vR*'], ['vL', 'wL', 'vL*']))
print("E =", E)
print("5) calculate two-site hamiltonian ``H2`` from the MPO")
# label left, right physical legs with p, q
W0 = W.replace_labels(['p', 'p*'], ['p0', 'p0*'])
W1 = W.replace_labels(['p', 'p*'], ['p1', 'p1*'])
H2 = npc.tensordot(W0, W1, axes=('wR', 'wL')).itranspose(['wL', 'wR', 'p0', 'p1', 'p0*', 'p1*'])
H2 = H2[0, -1] # (If H has single-site terms, it's not that simple anymore)
print("H2 labels:", H2.get_leg_labels())
print("6) calculate exp(H2) by diagonalization of H2")
# diagonalization requires to view H2 as a matrix
H2 = H2.combine_legs([('p0', 'p1'), ('p0*', 'p1*')], qconj=[+1, -1])
print("labels after combine_legs:", H2.get_leg_labels())
E2, U2 = npc.eigh(H2)
print("Eigenvalues of H2:", E2)
U_expE2 = U2.scale_axis(np.exp(-1.j * dt * E2), axis=1) # scale_axis ~= apply a diagonal matrix
exp_H2 = npc.tensordot(U_expE2, U2.conj(), axes=(1, 1))
exp_H2.iset_leg_labels(H2.get_leg_labels())
exp_H2 = exp_H2.split_legs() # by default split all legs which are `LegPipe`
# (this restores the original labels ['p0', 'p1', 'p0*', 'p1*'] of `H2` in `exp_H2`)
print("7) apply exp(H2) to even/odd bonds of the MPS and truncate with svd")
# (this implements one time step of first order TEBD)
for even_odd in [0, 1]:
for i in range(even_odd, L - 1, 2):
B_L = Bs[i].scale_axis(Ss[i], 'vL').ireplace_label('p', 'p0')
B_R = Bs[i + 1].replace_label('p', 'p1')
theta = npc.tensordot(B_L, B_R, axes=('vR', 'vL'))
theta = npc.tensordot(exp_H2, theta, axes=(['p0*', 'p1*'], ['p0', 'p1']))
# view as matrix for SVD
theta = theta.combine_legs([('vL', 'p0'), ('p1', 'vR')], new_axes=[0, 1], qconj=[+1, -1])
# now theta has labels '(vL.p0)', '(p1.vR)'
U, S, V = npc.svd(theta, inner_labels=['vR', 'vL'])
# truncate
keep = S > cutoff
S = S[keep]
norm = np.linalg.norm(S)
Ss[i + 1] = S / norm
U = U.iscale_axis(S / norm, 'vR')
Bs[i] = U.split_legs('(vL.p0)').iscale_axis(Ss[i]**(-1), 'vL').ireplace_label('p0', 'p')
Bs[i + 1] = V.split_legs('(p1.vR)').ireplace_label('p1', 'p')
print("finished")
b_mps.py¶
"""Simplified version of `a_np_conserved.py` making use of other classes (like MPS, MPO).
This example includes the following steps:
1) create Arrays for a Neel MPS
2) create an MPO representing the nearest-neighbour AFM Heisenberg Hamiltonian
3) define 'environments' left and right
4) contract MPS and MPO to calculate the energy
5) extract the two-site Hamiltonian ``H2`` from the MPO
6) calculate ``exp(-1.j*dt*H2)`` by diagonalization of H2
7) apply ``exp(H2)`` to two sites of the MPS and truncate with svd
Note that this example performs the same steps as `a_np_conserved.py`,
but makes use of other predefined classes except npc.
"""
# Copyright 2018-2021 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
import numpy as np
# some more imports
from tenpy.networks.site import SpinHalfSite
from tenpy.models.lattice import Chain
from tenpy.networks.mps import MPS
from tenpy.networks.mpo import MPO, MPOEnvironment
from tenpy.algorithms.truncation import svd_theta
# model parameters
Jxx, Jz = 1., 1.
L = 20
dt = 0.1
cutoff = 1.e-10
print("Jxx={Jxx}, Jz={Jz}, L={L:d}".format(Jxx=Jxx, Jz=Jz, L=L))
print("1) create Arrays for an Neel MPS")
site = SpinHalfSite(conserve='Sz') # predefined charges and Sp,Sm,Sz operators
p_leg = site.leg
chinfo = p_leg.chinfo
# make lattice from unit cell and create product state MPS
lat = Chain(L, site, bc_MPS='finite')
state = ["up", "down"] * (L // 2) + ["up"] * (L % 2) # Neel state
print("state = ", state)
psi = MPS.from_product_state(lat.mps_sites(), state, lat.bc_MPS)
print("2) create an MPO representing the AFM Heisenberg Hamiltonian")
# predefined physical spin-1/2 operators Sz, S+, S-
Sz, Sp, Sm, Id = site.Sz, site.Sp, site.Sm, site.Id
mpo_leg = npc.LegCharge.from_qflat(chinfo, [[0], [2], [-2], [0], [0]])
W_grid = [[Id, Sp, Sm, Sz, None ],
[None, None, None, None, 0.5 * Jxx * Sm],
[None, None, None, None, 0.5 * Jxx * Sp],
[None, None, None, None, Jz * Sz ],
[None, None, None, None, Id ]] # yapf:disable
W = npc.grid_outer(W_grid, [mpo_leg, mpo_leg.conj()], grid_labels=['wL', 'wR'])
# wL/wR = virtual left/right of the MPO
Ws = [W] * L
Ws[0] = W[:1, :]
Ws[-1] = W[:, -1:]
H = MPO(psi.sites, Ws, psi.bc, IdL=0, IdR=-1)
print("3) define 'environments' left and right")
# this is automatically done during initialization of MPOEnvironment
env = MPOEnvironment(psi, H, psi)
envL = env.get_LP(0)
envR = env.get_RP(L - 1)
print("4) contract MPS and MPO to calculate the energy <psi|H|psi>")
E = env.full_contraction(L - 1)
print("E =", E)
print("5) calculate two-site hamiltonian ``H2`` from the MPO")
# label left, right physical legs with p, q
W0 = H.get_W(0).replace_labels(['p', 'p*'], ['p0', 'p0*'])
W1 = H.get_W(1).replace_labels(['p', 'p*'], ['p1', 'p1*'])
H2 = npc.tensordot(W0, W1, axes=('wR', 'wL')).itranspose(['wL', 'wR', 'p0', 'p1', 'p0*', 'p1*'])
H2 = H2[H.IdL[0], H.IdR[2]] # (If H has single-site terms, it's not that simple anymore)
print("H2 labels:", H2.get_leg_labels())
print("6) calculate exp(H2) by diagonalization of H2")
# diagonalization requires to view H2 as a matrix
H2 = H2.combine_legs([('p0', 'p1'), ('p0*', 'p1*')], qconj=[+1, -1])
print("labels after combine_legs:", H2.get_leg_labels())
E2, U2 = npc.eigh(H2)
print("Eigenvalues of H2:", E2)
U_expE2 = U2.scale_axis(np.exp(-1.j * dt * E2), axis=1) # scale_axis ~= apply a diagonal matrix
exp_H2 = npc.tensordot(U_expE2, U2.conj(), axes=(1, 1))
exp_H2.iset_leg_labels(H2.get_leg_labels())
exp_H2 = exp_H2.split_legs() # by default split all legs which are `LegPipe`
# (this restores the original labels ['p0', 'p1', 'p0*', 'p1*'] of `H2` in `exp_H2`)
# alternative way: use :func:`~tenpy.linalg.np_conserved.expm`
exp_H2_alternative = npc.expm(-1.j * dt * H2).split_legs()
assert (npc.norm(exp_H2_alternative - exp_H2) < 1.e-14)
print("7) apply exp(H2) to even/odd bonds of the MPS and truncate with svd")
# (this implements one time step of first order TEBD)
trunc_par = {'svd_min': cutoff, 'trunc_cut': None}
for even_odd in [0, 1]:
for i in range(even_odd, L - 1, 2):
theta = psi.get_theta(i, 2) # handles canonical form (i.e. scaling with 'S')
theta = npc.tensordot(exp_H2, theta, axes=(['p0*', 'p1*'], ['p0', 'p1']))
# view as matrix for SVD
theta = theta.combine_legs([('vL', 'p0'), ('p1', 'vR')], new_axes=[0, 1], qconj=[+1, -1])
# now theta has labels '(vL.p0)', '(p1.vR)'
U, S, V, err, invsq = svd_theta(theta, trunc_par, inner_labels=['vR', 'vL'])
psi.set_SR(i, S)
A_L = U.split_legs('(vL.p0)').ireplace_label('p0', 'p')
B_R = V.split_legs('(p1.vR)').ireplace_label('p1', 'p')
psi.set_B(i, A_L, form='A') # left-canonical form
psi.set_B(i + 1, B_R, form='B') # right-canonical form
print("finished")
c_tebd.py¶
"""Example illustrating the use of TEBD in tenpy.
The example functions in this module do the same as the ones in `toycodes/c_tebd.py`, but make use
of the classes defined in tenpy.
"""
# Copyright 2018-2021 TeNPy Developers, GNU GPLv3
import numpy as np
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import tebd
def example_TEBD_gs_tf_ising_finite(L, g):
print("finite TEBD, imaginary time evolution, transverse field Ising")
print("L={L:d}, g={g:.2f}".format(L=L, g=g))
model_params = dict(L=L, J=1., g=g, bc_MPS='finite', conserve=None)
M = TFIChain(model_params)
product_state = ["up"] * M.lat.N_sites
psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
tebd_params = {
'order': 2,
'delta_tau_list': [0.1, 0.01, 0.001, 1.e-4, 1.e-5],
'N_steps': 10,
'max_error_E': 1.e-6,
'trunc_params': {
'chi_max': 30,
'svd_min': 1.e-10
},
}
eng = tebd.TEBDEngine(psi, M, tebd_params)
eng.run_GS() # the main work...
# expectation values
E = np.sum(M.bond_energies(psi)) # M.bond_energies() works only for a NearestNeighborModel
# alternative: directly measure E2 = np.sum(psi.expectation_value(M.H_bond[1:]))
print("E = {E:.13f}".format(E=E))
print("final bond dimensions: ", psi.chi)
mag_x = np.sum(psi.expectation_value("Sigmax"))
mag_z = np.sum(psi.expectation_value("Sigmaz"))
print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
if L < 20: # compare to exact result
from tfi_exact import finite_gs_energy
E_exact = finite_gs_energy(L, 1., g)
print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E - E_exact) / E_exact))
return E, psi, M
def example_TEBD_gs_tf_ising_infinite(g):
print("infinite TEBD, imaginary time evolution, transverse field Ising")
print("g={g:.2f}".format(g=g))
model_params = dict(L=2, J=1., g=g, bc_MPS='infinite', conserve=None)
M = TFIChain(model_params)
product_state = ["up"] * M.lat.N_sites
psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
tebd_params = {
'order': 2,
'delta_tau_list': [0.1, 0.01, 0.001, 1.e-4, 1.e-5],
'N_steps': 10,
'max_error_E': 1.e-8,
'trunc_params': {
'chi_max': 30,
'svd_min': 1.e-10
},
}
eng = tebd.TEBDEngine(psi, M, tebd_params)
eng.run_GS() # the main work...
E = np.mean(M.bond_energies(psi)) # M.bond_energies() works only for a NearestNeighborModel
# alternative: directly measure E2 = np.mean(psi.expectation_value(M.H_bond))
print("E (per site) = {E:.13f}".format(E=E))
print("final bond dimensions: ", psi.chi)
mag_x = np.mean(psi.expectation_value("Sigmax"))
mag_z = np.mean(psi.expectation_value("Sigmaz"))
print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
print("correlation length:", psi.correlation_length())
# compare to exact result
from tfi_exact import infinite_gs_energy
E_exact = infinite_gs_energy(1., g)
print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E - E_exact) / E_exact))
return E, psi, M
def example_TEBD_tf_ising_lightcone(L, g, tmax, dt):
print("finite TEBD, real time evolution")
print("L={L:d}, g={g:.2f}, tmax={tmax:.2f}, dt={dt:.3f}".format(L=L, g=g, tmax=tmax, dt=dt))
# find ground state with TEBD or DMRG
# E, psi, M = example_TEBD_gs_tf_ising_finite(L, g)
from d_dmrg import example_DMRG_tf_ising_finite
print("(run DMRG to get the groundstate)")
E, psi, M = example_DMRG_tf_ising_finite(L, g)
print("(DMRG finished)")
i0 = L // 2
# apply sigmaz on site i0
psi.apply_local_op(i0, 'Sigmaz', unitary=True)
dt_measure = 0.05
# tebd.TEBDEngine makes 'N_steps' steps of `dt` at once;
# for second order this is more efficient.
tebd_params = {
'order': 2,
'dt': dt,
'N_steps': int(dt_measure / dt + 0.5),
'trunc_params': {
'chi_max': 50,
'svd_min': 1.e-10,
'trunc_cut': None
},
}
eng = tebd.TEBDEngine(psi, M, tebd_params)
S = [psi.entanglement_entropy()]
for n in range(int(tmax / dt_measure + 0.5)):
eng.run()
S.append(psi.entanglement_entropy())
import matplotlib.pyplot as plt
plt.figure()
plt.imshow(S[::-1],
vmin=0.,
aspect='auto',
interpolation='nearest',
extent=(0, L - 1., -0.5 * dt_measure, eng.evolved_time + 0.5 * dt_measure))
plt.xlabel('site $i$')
plt.ylabel('time $t/J$')
plt.ylim(0., tmax)
plt.colorbar().set_label('entropy $S$')
filename = 'c_tebd_lightcone_{g:.2f}.pdf'.format(g=g)
plt.savefig(filename)
print("saved " + filename)
def example_TEBD_gs_tf_ising_next_nearest_neighbor(L, g, Jp):
from tenpy.models.spins_nnn import SpinChainNNN2
from tenpy.models.model import NearestNeighborModel
print("finite TEBD, imaginary time evolution, transverse field Ising next-nearest neighbor")
print("L={L:d}, g={g:.2f}, Jp={Jp:.2f}".format(L=L, g=g, Jp=Jp))
model_params = dict(
L=L,
Jx=1.,
Jy=0.,
Jz=0.,
Jxp=Jp,
Jyp=0.,
Jzp=0.,
hz=g,
bc_MPS='finite',
conserve=None,
)
# we start with the non-grouped sites and build the MPO including the next-nearest-neighbor interactions
M = SpinChainNNN2(model_params)
product_state = ["up"] * M.lat.N_sites
psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
# now we group each two sites ...
psi.group_sites(n=2) # ... in the state
M.group_sites(n=2) # ... and model
# now, M has only 'nearest-neighbor' interactions with respect to the grouped sites
# thus, we can convert the MPO into H_bond terms:
M_nn = NearestNeighborModel.from_MPOModel(M) # hence, we can initialize H_bond from the MPO
# now, we continue to run TEBD as before
tebd_params = {
'order': 2,
'delta_tau_list': [0.1, 0.01, 0.001, 1.e-4, 1.e-5],
'N_steps': 10,
'max_error_E': 1.e-6,
'trunc_params': {
'chi_max': 30,
'svd_min': 1.e-10
},
}
eng = tebd.TEBDEngine(psi, M_nn, tebd_params) # use M_nn and grouped psi
eng.run_GS() # the main work...
# expectation values:
E = np.sum(M_nn.bond_energies(psi)) # bond_energies() works only for a NearestNeighborModel
print("E = {E:.13f}".format(E=E))
print("final bond dimensions: ", psi.chi)
# we can split the sites of the state again for an easier evaluation of expectation values
psi.group_split()
mag_x = 2. * np.sum(psi.expectation_value("Sx")) # factor of 2 for Sx vs Sigmax
mag_z = 2. * np.sum(psi.expectation_value("Sz"))
print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
return E, psi, M
if __name__ == "__main__":
import logging
logging.basicConfig(level=logging.INFO)
example_TEBD_gs_tf_ising_finite(L=10, g=1.)
print("=" * 100, '', '', "=" * 100, sep='\n')
example_TEBD_gs_tf_ising_infinite(g=1.5)
print("=" * 100, '', '', "=" * 100, sep='\n')
example_TEBD_tf_ising_lightcone(L=20, g=1.5, tmax=3., dt=0.01)
print("=" * 100, '', '', "=" * 100, sep='\n')
example_TEBD_gs_tf_ising_next_nearest_neighbor(L=10, g=1.0, Jp=0.1)
d_dmrg.py¶
"""Example illustrating the use of DMRG in tenpy.
The example functions in this class do the same as the ones in `toycodes/d_dmrg.py`,
but make use of the classes defined in tenpy.
"""
# Copyright 2018-2021 TeNPy Developers, GNU GPLv3
import numpy as np
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.models.spins import SpinModel
from tenpy.algorithms import dmrg
def example_DMRG_tf_ising_finite(L, g):
print("finite DMRG, transverse field Ising model")
print("L={L:d}, g={g:.2f}".format(L=L, g=g))
model_params = dict(L=L, J=1., g=g, bc_MPS='finite', conserve=None)
M = TFIChain(model_params)
product_state = ["up"] * M.lat.N_sites
psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
dmrg_params = {
'mixer': None, # setting this to True helps to escape local minima
'max_E_err': 1.e-10,
'trunc_params': {
'chi_max': 30,
'svd_min': 1.e-10
},
'combine': True
}
info = dmrg.run(psi, M, dmrg_params) # the main work...
E = info['E']
print("E = {E:.13f}".format(E=E))
print("final bond dimensions: ", psi.chi)
mag_x = np.sum(psi.expectation_value("Sigmax"))
mag_z = np.sum(psi.expectation_value("Sigmaz"))
print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
if L < 20: # compare to exact result
from tfi_exact import finite_gs_energy
E_exact = finite_gs_energy(L, 1., g)
print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E - E_exact) / E_exact))
return E, psi, M
def example_1site_DMRG_tf_ising_finite(L, g):
print("single-site finite DMRG, transverse field Ising model")
print("L={L:d}, g={g:.2f}".format(L=L, g=g))
model_params = dict(L=L, J=1., g=g, bc_MPS='finite', conserve=None)
M = TFIChain(model_params)
product_state = ["up"] * M.lat.N_sites
psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
dmrg_params = {
'mixer': True, # setting this to True is essential for the 1-site algorithm to work.
'max_E_err': 1.e-10,
'trunc_params': {
'chi_max': 30,
'svd_min': 1.e-10
},
'combine': False,
'active_sites': 1 # specifies single-site
}
info = dmrg.run(psi, M, dmrg_params)
E = info['E']
print("E = {E:.13f}".format(E=E))
print("final bond dimensions: ", psi.chi)
mag_x = np.sum(psi.expectation_value("Sigmax"))
mag_z = np.sum(psi.expectation_value("Sigmaz"))
print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
if L < 20: # compare to exact result
from tfi_exact import finite_gs_energy
E_exact = finite_gs_energy(L, 1., g)
print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E - E_exact) / E_exact))
return E, psi, M
def example_DMRG_tf_ising_infinite(g):
print("infinite DMRG, transverse field Ising model")
print("g={g:.2f}".format(g=g))
model_params = dict(L=2, J=1., g=g, bc_MPS='infinite', conserve=None)
M = TFIChain(model_params)
product_state = ["up"] * M.lat.N_sites
psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
dmrg_params = {
'mixer': True, # setting this to True helps to escape local minima
'trunc_params': {
'chi_max': 30,
'svd_min': 1.e-10
},
'max_E_err': 1.e-10,
}
# Sometimes, we want to call a 'DMRG engine' explicitly
eng = dmrg.TwoSiteDMRGEngine(psi, M, dmrg_params)
E, psi = eng.run() # equivalent to dmrg.run() up to the return parameters.
print("E = {E:.13f}".format(E=E))
print("final bond dimensions: ", psi.chi)
mag_x = np.mean(psi.expectation_value("Sigmax"))
mag_z = np.mean(psi.expectation_value("Sigmaz"))
print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
print("correlation length:", psi.correlation_length())
# compare to exact result
from tfi_exact import infinite_gs_energy
E_exact = infinite_gs_energy(1., g)
print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E - E_exact) / E_exact))
return E, psi, M
def example_1site_DMRG_tf_ising_infinite(g):
print("single-site infinite DMRG, transverse field Ising model")
print("g={g:.2f}".format(g=g))
model_params = dict(L=2, J=1., g=g, bc_MPS='infinite', conserve=None)
M = TFIChain(model_params)
product_state = ["up"] * M.lat.N_sites
psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
dmrg_params = {
'mixer': True, # setting this to True is essential for the 1-site algorithm to work.
'trunc_params': {
'chi_max': 30,
'svd_min': 1.e-10
},
'max_E_err': 1.e-10,
'combine': True
}
eng = dmrg.SingleSiteDMRGEngine(psi, M, dmrg_params)
E, psi = eng.run() # equivalent to dmrg.run() up to the return parameters.
print("E = {E:.13f}".format(E=E))
print("final bond dimensions: ", psi.chi)
mag_x = np.mean(psi.expectation_value("Sigmax"))
mag_z = np.mean(psi.expectation_value("Sigmaz"))
print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
print("correlation length:", psi.correlation_length())
# compare to exact result
from tfi_exact import infinite_gs_energy
E_exact = infinite_gs_energy(1., g)
print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E - E_exact) / E_exact))
return E, psi, M
def example_DMRG_heisenberg_xxz_infinite(Jz, conserve='best'):
print("infinite DMRG, Heisenberg XXZ chain")
print("Jz={Jz:.2f}, conserve={conserve!r}".format(Jz=Jz, conserve=conserve))
model_params = dict(
L=2,
S=0.5, # spin 1/2
Jx=1.,
Jy=1.,
Jz=Jz, # couplings
bc_MPS='infinite',
conserve=conserve)
M = SpinModel(model_params)
product_state = ["up", "down"] # initial Neel state
psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
dmrg_params = {
'mixer': True, # setting this to True helps to escape local minima
'trunc_params': {
'chi_max': 100,
'svd_min': 1.e-10,
},
'max_E_err': 1.e-10,
}
info = dmrg.run(psi, M, dmrg_params)
E = info['E']
print("E = {E:.13f}".format(E=E))
print("final bond dimensions: ", psi.chi)
Sz = psi.expectation_value("Sz") # Sz instead of Sigma z: spin-1/2 operators!
mag_z = np.mean(Sz)
print("<S_z> = [{Sz0:.5f}, {Sz1:.5f}]; mean ={mag_z:.5f}".format(Sz0=Sz[0],
Sz1=Sz[1],
mag_z=mag_z))
# note: it's clear that mean(<Sz>) is 0: the model has Sz conservation!
print("correlation length:", psi.correlation_length())
corrs = psi.correlation_function("Sz", "Sz", sites1=range(10))
print("correlations <Sz_i Sz_j> =")
print(corrs)
return E, psi, M
if __name__ == "__main__":
import logging
logging.basicConfig(level=logging.INFO)
example_DMRG_tf_ising_finite(L=10, g=1.)
print("-" * 100)
example_1site_DMRG_tf_ising_finite(L=10, g=1.)
print("-" * 100)
example_DMRG_tf_ising_infinite(g=1.5)
print("-" * 100)
example_1site_DMRG_tf_ising_infinite(g=1.5)
print("-" * 100)
example_DMRG_heisenberg_xxz_infinite(Jz=1.5)
e_tdvp.py¶
"""Example illustrating the use of TDVP in tenpy.
As of now, we have TDVP only for finite systems. The call structure is quite similar to TEBD.
One difference is that we can run either one-site or two-site TDVP. In the former, the bond
dimension cannot grow; the latter allows the bond dimension to grow and hence requires truncation.
"""
# Copyright 2019-2021 TeNPy Developers, GNU GPLv3
import numpy as np
import tenpy.linalg.np_conserved as npc
import tenpy.models.spins
import tenpy.networks.mps as mps
import tenpy.networks.site as site
from tenpy.algorithms import tdvp
from tenpy.networks.mps import MPS
import copy
def run_out_of_equilibrium():
L = 10
chi = 5
delta_t = 0.1
model_params = {
'L': L,
'S': 0.5,
'conserve': 'Sz',
'Jz': 1.0,
'Jy': 1.0,
'Jx': 1.0,
'hx': 0.0,
'hy': 0.0,
'hz': 0.0,
'muJ': 0.0,
'bc_MPS': 'finite',
}
heisenberg = tenpy.models.spins.SpinChain(model_params)
product_state = ["up"] * (L // 2) + ["down"] * (L - L // 2)
# starting from a domain-wall product state which is not an eigenstate of the Heisenberg model
psi = MPS.from_product_state(heisenberg.lat.mps_sites(),
product_state,
bc=heisenberg.lat.bc_MPS,
form='B')
tdvp_params = {
'start_time': 0,
'dt': delta_t,
'trunc_params': {
'chi_max': chi,
'svd_min': 1.e-10,
'trunc_cut': None
}
}
tdvp_engine = tdvp.TDVPEngine(psi, heisenberg, tdvp_params)
times = []
S_mid = []
for i in range(30):
tdvp_engine.run_two_sites(N_steps=1)
times.append(tdvp_engine.evolved_time)
S_mid.append(psi.entanglement_entropy(bonds=[L // 2])[0])
for i in range(30):
tdvp_engine.run_one_site(N_steps=1)
#psi_2=copy.deepcopy(psi)
#psi_2.canonical_form()
times.append(tdvp_engine.evolved_time)
S_mid.append(psi.entanglement_entropy(bonds=[L // 2])[0])
import matplotlib.pyplot as plt
plt.figure()
plt.plot(times, S_mid)
plt.xlabel('t')
plt.ylabel('S')
plt.axvline(x=3.1, color='red')
plt.text(0.0, 0.0000015, "two-site update")
plt.text(3.1, 0.0000015, "one-site update")
plt.show()
if __name__ == "__main__":
import logging
logging.basicConfig(level=logging.INFO)
run_out_of_equilibrium()
purification.py¶
from tenpy.models.tf_ising import TFIChain
from tenpy.networks.purification_mps import PurificationMPS
from tenpy.algorithms.purification import PurificationTEBD, PurificationApplyMPO
def imag_tebd(L=30, beta_max=3., dt=0.05, order=2, bc="finite"):
model_params = dict(L=L, J=1., g=1.2)
M = TFIChain(model_params)
psi = PurificationMPS.from_infiniteT(M.lat.mps_sites(), bc=bc)
options = {
'trunc_params': {
'chi_max': 100,
'svd_min': 1.e-8
},
'order': order,
'dt': dt,
'N_steps': 1
}
beta = 0.
eng = PurificationTEBD(psi, M, options)
Szs = [psi.expectation_value("Sz")]
betas = [0.]
while beta < beta_max:
beta += 2. * dt # factor of 2: |psi> ~= exp^{- dt H}, but rho = |psi><psi|
betas.append(beta)
eng.run_imaginary(dt) # cool down by dt
Szs.append(psi.expectation_value("Sz")) # and further measurements...
return {'beta': betas, 'Sz': Szs}
def imag_apply_mpo(L=30, beta_max=3., dt=0.05, order=2, bc="finite", approx="II"):
model_params = dict(L=L, J=1., g=1.2)
M = TFIChain(model_params)
psi = PurificationMPS.from_infiniteT(M.lat.mps_sites(), bc=bc)
options = {'trunc_params': {'chi_max': 100, 'svd_min': 1.e-8}}
beta = 0.
if order == 1:
Us = [M.H_MPO.make_U(-dt, approx)]
elif order == 2:
Us = [M.H_MPO.make_U(-d * dt, approx) for d in [0.5 + 0.5j, 0.5 - 0.5j]]
eng = PurificationApplyMPO(psi, Us[0], options)
Szs = [psi.expectation_value("Sz")]
betas = [0.]
while beta < beta_max:
beta += 2. * dt # factor of 2: |psi> ~= exp^{- dt H}, but rho = |psi><psi|
betas.append(beta)
for U in Us:
eng.init_env(U) # reset environment, initialize new copy of psi
eng.run() # apply U to psi
Szs.append(psi.expectation_value("Sz")) # and further measurements...
return {'beta': betas, 'Sz': Szs}
if __name__ == "__main__":
import logging
logging.basicConfig(level=logging.INFO)
data_tebd = imag_tebd()
data_mpo = imag_apply_mpo()
import numpy as np
import matplotlib.pyplot as plt
plt.plot(data_mpo['beta'], np.sum(data_mpo['Sz'], axis=1), label='MPO')
plt.plot(data_tebd['beta'], np.sum(data_tebd['Sz'], axis=1), label='TEBD')
plt.xlabel(r'$\beta$')
plt.ylabel(r'total $S^z$')
plt.show()
tfi_exact.py¶
"""Provides exact ground state energies for the transverse field Ising model for comparison.
The Hamiltonian reads
.. math ::
H = - J \\sum_{i} \\sigma^x_i \\sigma^x_{i+1} - g \\sum_{i} \\sigma^z_i
"""
# Copyright 2019-2021 TeNPy Developers, GNU GPLv3
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as arp # eigsh lives here in current SciPy versions
import warnings
import scipy.integrate
def finite_gs_energy(L, J, g):
"""For comparison: obtain ground state energy from exact diagonalization.
Exponentially expensive in L, only works for small enough `L` <~ 20.
"""
if L >= 20:
warnings.warn("Large L: Exact diagonalization might take a long time!")
# get single-site operators
sx = sparse.csr_matrix(np.array([[0., 1.], [1., 0.]]))
sz = sparse.csr_matrix(np.array([[1., 0.], [0., -1.]]))
id = sparse.csr_matrix(np.eye(2))
sx_list = [] # sx_list[i] = kron([id, id, ..., id, sx, id, .... id])
sz_list = []
for i_site in range(L):
x_ops = [id] * L
z_ops = [id] * L
x_ops[i_site] = sx
z_ops[i_site] = sz
X = x_ops[0]
Z = z_ops[0]
for j in range(1, L):
X = sparse.kron(X, x_ops[j], 'csr')
Z = sparse.kron(Z, z_ops[j], 'csr')
sx_list.append(X)
sz_list.append(Z)
H_xx = sparse.csr_matrix((2**L, 2**L))
H_z = sparse.csr_matrix((2**L, 2**L))
for i in range(L - 1):
H_xx = H_xx + sx_list[i] * sx_list[(i + 1) % L]
for i in range(L):
H_z = H_z + sz_list[i]
H = -J * H_xx - g * H_z
E, V = arp.eigsh(H, k=1, which='SA', return_eigenvectors=True, ncv=20)
return E[0]
def infinite_gs_energy(J, g):
"""For comparison: calculate the ground state energy density from the analytic formula.
The analytic formula stems from mapping the model to free fermions, see P. Pfeuty,
The one-dimensional Ising model with a transverse field, Annals of Physics 57, p. 79 (1970).
Note that we use Pauli matrices, while this reference uses spin-1/2 matrices, and that we
replace the sum_k -> integral dk/2pi to obtain the result in the N -> infinity limit.
"""
def f(k, lambda_):
return np.sqrt(1 + lambda_**2 + 2 * lambda_ * np.cos(k))
E0_exact = -g / (J * 2. * np.pi) * scipy.integrate.quad(f, -np.pi, np.pi, args=(J / g, ))[0]
return E0_exact
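As a sanity check on this formula (not part of the original example): at the critical point \(J = g = 1\) the integrand simplifies to \(f(k, 1) = 2|\cos(k/2)|\), and the integral evaluates in closed form to the well-known value \(E_0 = -4/\pi\) per site. A minimal sketch reproducing this numerically:

```python
import numpy as np
import scipy.integrate

def f(k, lambda_):
    # same dispersion as in infinite_gs_energy above
    return np.sqrt(1 + lambda_**2 + 2 * lambda_ * np.cos(k))

# J = g = 1: lambda_ = J/g = 1 and the prefactor -g/(J * 2 pi) = -1/(2 pi)
E0 = -1. / (2. * np.pi) * scipy.integrate.quad(f, -np.pi, np.pi, args=(1., ))[0]
print(E0)  # close to -4/pi ~ -1.2732
```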
z_exact_diag.py¶
"""A simple example comparing DMRG output with full diagonalization (ED).
Sorry that this is not well documented! ED is meant to be used for debugging only ;)
"""
# Copyright 2018-2021 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
from tenpy.models.xxz_chain import XXZChain
from tenpy.networks.mps import MPS
from tenpy.algorithms.exact_diag import ExactDiag
from tenpy.algorithms import dmrg
def example_exact_diagonalization(L, Jz):
xxz_pars = dict(L=L, Jxx=1., Jz=Jz, hz=0.0, bc_MPS='finite')
M = XXZChain(xxz_pars)
product_state = ["up", "down"] * (xxz_pars['L'] // 2) # this selects a charge sector!
psi_DMRG = MPS.from_product_state(M.lat.mps_sites(), product_state)
charge_sector = psi_DMRG.get_total_charge(True) # ED charge sector should match
ED = ExactDiag(M, charge_sector=charge_sector, max_size=2.e6)
ED.build_full_H_from_mpo()
# ED.build_full_H_from_bonds() # whatever you prefer
print("start diagonalization")
ED.full_diagonalization() # the expensive part for large L
E0_ED, psi_ED = ED.groundstate() # return the ground state
print("psi_ED =", psi_ED)
print("run DMRG")
dmrg.run(psi_DMRG, M, {'verbose': 0}) # modifies psi_DMRG in place!
# first way to compare ED with DMRG: convert MPS to ED vector
psi_DMRG_full = ED.mps_to_full(psi_DMRG)
print("psi_DMRG_full =", psi_DMRG_full)
ov = npc.inner(psi_ED, psi_DMRG_full, axes='range', do_conj=True)
print("<psi_ED|psi_DMRG_full> =", ov)
assert (abs(abs(ov) - 1.) < 1.e-13)
# second way: convert ED vector to MPS
psi_ED_mps = ED.full_to_mps(psi_ED)
ov2 = psi_ED_mps.overlap(psi_DMRG)
print("<psi_ED_mps|psi_DMRG> =", ov2)
assert (abs(abs(ov2) - 1.) < 1.e-13)
assert (abs(ov - ov2) < 1.e-13)
# -> advantage: expectation_value etc. of MPS are available!
print("<Sz> =", psi_ED_mps.expectation_value('Sz'))
if __name__ == "__main__":
example_exact_diagonalization(10, 1.)
Jupyter Notebooks¶
This is a collection of [jupyter] notebooks from the [TeNPyNotebooks] repository. You need to install TeNPy to execute them (see Installation instructions), but you can copy them anywhere before execution. Note that some of them might take a while to run, as they contain more extensive examples.
A first TEBD Example¶
Like examples/c_tebd.py, this notebook shows the basic interface for TEBD. It initializes the transverse field Ising model \(H = J XX + g Z\) at the critical point \(J=g=1\), and an MPS in the all-up state \(|\uparrow\cdots \uparrow\rangle\). It then performs a real-time evolution with TEBD and measures a few observables. This setup corresponds to a global quench from \(g =\infty\) to \(g=1\).
[1]:
import numpy as np
import scipy
import matplotlib.pyplot as plt
import matplotlib
[2]:
import tenpy
import tenpy.linalg.np_conserved as npc
from tenpy.algorithms import tebd
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
[3]:
L = 30
[4]:
model_params = {
'J': 1. , 'g': 1., # critical
'L': L,
'bc_MPS': 'finite',
}
M = TFIChain(model_params)
Reading 'bc_MPS'='finite' for config TFIChain
Reading 'L'=30 for config TFIChain
Reading 'J'=1.0 for config TFIChain
Reading 'g'=1.0 for config TFIChain
[5]:
psi = MPS.from_lat_product_state(M.lat, [['up']])
[6]:
tebd_params = {
'N_steps': 1,
'dt': 0.1,
'order': 4,
'trunc_params': {'chi_max': 100, 'svd_min': 1.e-12}
}
eng = tebd.TEBDEngine(psi, M, tebd_params)
Subconfig 'trunc_params'=Config(<3 options>, 'trunc_params') for config TEBD
[7]:
def measurement(eng, data):
keys = ['t', 'entropy', 'Sx', 'Sz', 'corr_XX', 'trunc_err']
if data is None:
data = dict([(k, []) for k in keys])
data['t'].append(eng.evolved_time)
data['entropy'].append(eng.psi.entanglement_entropy())
data['Sx'].append(eng.psi.expectation_value('Sigmax'))
data['Sz'].append(eng.psi.expectation_value('Sigmaz'))
data['corr_XX'].append(eng.psi.correlation_function('Sigmax', 'Sigmax'))
data['trunc_err'].append(eng.trunc_err.eps)
return data
[8]:
data = measurement(eng, None)
[9]:
while eng.evolved_time < 5.:
eng.run()
measurement(eng, data)
Reading 'dt'=0.1 for config TEBD
Reading 'N_steps'=1 for config TEBD
Reading 'order'=4 for config TEBD
Calculate U for {'order': 4, 'delta_t': 0.1, 'type_evo': 'real', 'E_offset': None, 'tau': 0.1}
--> time=0.100, max_chi=6, Delta_S=5.5243e-02, S=0.0552432967, since last update: 0.5 s
--> time=0.200, max_chi=6, Delta_S=1.0453e-01, S=0.1597705686, since last update: 0.5 s
--> time=0.300, max_chi=8, Delta_S=1.1368e-01, S=0.2734471407, since last update: 0.5 s
--> time=0.400, max_chi=10, Delta_S=1.0226e-01, S=0.3757082127, since last update: 0.5 s
--> time=0.500, max_chi=12, Delta_S=8.2474e-02, S=0.4581821173, since last update: 0.5 s
--> time=0.600, max_chi=12, Delta_S=6.3393e-02, S=0.5215746922, since last update: 0.5 s
--> time=0.700, max_chi=15, Delta_S=5.0837e-02, S=0.5724119006, since last update: 0.5 s
--> time=0.800, max_chi=18, Delta_S=4.7046e-02, S=0.6194580889, since last update: 0.5 s
--> time=0.900, max_chi=20, Delta_S=5.0606e-02, S=0.6700636890, since last update: 0.5 s
--> time=1.000, max_chi=20, Delta_S=5.7462e-02, S=0.7275254979, since last update: 0.5 s
--> time=1.100, max_chi=24, Delta_S=6.3115e-02, S=0.7906402648, since last update: 0.5 s
--> time=1.200, max_chi=26, Delta_S=6.4779e-02, S=0.8554189184, since last update: 0.5 s
--> time=1.300, max_chi=30, Delta_S=6.2269e-02, S=0.9176882429, since last update: 0.5 s
--> time=1.400, max_chi=34, Delta_S=5.7451e-02, S=0.9751394599, since last update: 0.5 s
--> time=1.500, max_chi=40, Delta_S=5.2899e-02, S=1.0280386118, since last update: 0.5 s
--> time=1.600, max_chi=40, Delta_S=5.0543e-02, S=1.0785817862, since last update: 0.5 s
--> time=1.700, max_chi=46, Delta_S=5.0852e-02, S=1.1294340516, since last update: 0.5 s
--> time=1.800, max_chi=52, Delta_S=5.2841e-02, S=1.1822748827, since last update: 0.7 s
--> time=1.900, max_chi=57, Delta_S=5.4804e-02, S=1.2370787276, since last update: 0.9 s
--> time=2.000, max_chi=62, Delta_S=5.5311e-02, S=1.2923895877, since last update: 0.9 s
--> time=2.100, max_chi=71, Delta_S=5.3906e-02, S=1.3462960135, since last update: 1.0 s
--> time=2.200, max_chi=80, Delta_S=5.1209e-02, S=1.3975050668, since last update: 1.2 s
--> time=2.300, max_chi=85, Delta_S=4.8446e-02, S=1.4459514729, since last update: 1.4 s
--> time=2.400, max_chi=95, Delta_S=4.6732e-02, S=1.4926837515, since last update: 1.4 s
--> time=2.500, max_chi=100, Delta_S=4.6486e-02, S=1.5391700161, since last update: 1.8 s
--> time=2.600, max_chi=100, Delta_S=4.7282e-02, S=1.5864521162, since last update: 1.9 s
--> time=2.700, max_chi=100, Delta_S=4.8160e-02, S=1.6346120966, since last update: 1.9 s
--> time=2.800, max_chi=100, Delta_S=4.8202e-02, S=1.6828138289, since last update: 1.6 s
--> time=2.900, max_chi=100, Delta_S=4.7044e-02, S=1.7298576742, since last update: 1.7 s
--> time=3.000, max_chi=100, Delta_S=4.5033e-02, S=1.7748910539, since last update: 1.6 s
--> time=3.100, max_chi=100, Delta_S=4.2960e-02, S=1.8178514255, since last update: 1.5 s
--> time=3.200, max_chi=100, Delta_S=4.1575e-02, S=1.8594265450, since last update: 1.4 s
--> time=3.300, max_chi=100, Delta_S=4.1182e-02, S=1.9006080872, since last update: 1.4 s
--> time=3.400, max_chi=100, Delta_S=4.1503e-02, S=1.9421110677, since last update: 1.4 s
--> time=3.500, max_chi=100, Delta_S=4.1873e-02, S=1.9839839097, since last update: 1.6 s
--> time=3.600, max_chi=100, Delta_S=4.1645e-02, S=2.0256293591, since last update: 1.6 s
--> time=3.700, max_chi=100, Delta_S=4.0570e-02, S=2.0661996178, since last update: 1.8 s
--> time=3.800, max_chi=100, Delta_S=3.8897e-02, S=2.1050963078, since last update: 1.6 s
--> time=3.900, max_chi=100, Delta_S=3.7189e-02, S=2.1422855970, since last update: 2.0 s
--> time=4.000, max_chi=100, Delta_S=3.5994e-02, S=2.1782792652, since last update: 1.6 s
--> time=4.100, max_chi=100, Delta_S=3.5534e-02, S=2.2138131347, since last update: 1.6 s
--> time=4.200, max_chi=100, Delta_S=3.5597e-02, S=2.2494096463, since last update: 1.6 s
--> time=4.300, max_chi=100, Delta_S=3.5673e-02, S=2.2850830917, since last update: 1.5 s
--> time=4.400, max_chi=100, Delta_S=3.5272e-02, S=2.3203546259, since last update: 1.4 s
--> time=4.500, max_chi=100, Delta_S=3.4188e-02, S=2.3545423452, since last update: 1.4 s
--> time=4.600, max_chi=100, Delta_S=3.2596e-02, S=2.3871383468, since last update: 1.4 s
--> time=4.700, max_chi=100, Delta_S=3.0915e-02, S=2.4180531110, since last update: 1.6 s
--> time=4.800, max_chi=100, Delta_S=2.9546e-02, S=2.4475988717, since last update: 1.7 s
--> time=4.900, max_chi=100, Delta_S=2.8630e-02, S=2.4762292780, since last update: 1.6 s
--> time=5.000, max_chi=100, Delta_S=2.7962e-02, S=2.5041913364, since last update: 1.6 s
--> time=5.100, max_chi=100, Delta_S=2.7099e-02, S=2.5312904883, since last update: 1.6 s
[10]:
plt.plot(data['t'], np.array(data['entropy'])[:, L//2])
plt.xlabel('time $t$')
plt.ylabel('entropy $S$')
[10]:
Text(0, 0.5, 'entropy $S$')

The linear growth of \(S\) in time is typical for a global quench and is expected from the quasi-particle picture.
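To put a number on the growth rate, one can fit a straight line to the mid-chain entropy in the linear-growth window. A minimal sketch with synthetic stand-in data (slope 0.5 and offset 0.1 are made up for illustration); with the notebook's data one would fit `data['t']` against the mid-chain column of `data['entropy']` instead:

```python
import numpy as np

# synthetic linear-growth data standing in for the measured mid-chain entropy
times = np.linspace(0., 5., 51)
S_mid = 0.5 * times + 0.1

# fit S(t) ~ v_E * t + const on the linear window t in [1, 4]
mask = (times >= 1.) & (times <= 4.)
v_E, const = np.polyfit(times[mask], S_mid[mask], deg=1)
print("entanglement growth rate v_E =", v_E)
```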
[11]:
plt.plot(data['t'], np.sum(data['Sx'], axis=1), label="X")
plt.plot(data['t'], np.sum(data['Sz'], axis=1), label="Z")
plt.xlabel('time $t$')
plt.ylabel('magnetization')
plt.legend(loc='best')
[11]:
<matplotlib.legend.Legend at 0x7f1018760e80>

That the expectation value of \(X\) stays strictly zero is ensured by charge conservation, because \(X\) changes the parity sector. Nevertheless, the \(XX\) correlation function can be nontrivial:
[12]:
corrs = np.array(data['corr_XX'])
tmax = data['t'][-1]
x = np.arange(L)
cmap = matplotlib.cm.viridis
for i, t in list(enumerate(data['t'])):
if i == 0 or i == len(data['t']) - 1:
label = '{t:.2f}'.format(t=t)
else:
label = None
plt.plot(x, corrs[i, L//2, :], color=cmap(t/tmax), label=label)
plt.xlabel(r'site $j$')
plt.ylabel(r'correlations $\langle X_{{{i0:d}}} X_j\rangle$'.format(i0=L//2))
plt.yscale('log')
plt.ylim(1.e-6, 1.)
plt.legend()
plt.show()

The output of the run shows that the bond dimension grew gradually and only reached the maximum chi around \(t=2.5\). At this point we start to truncate significantly, because we cut off the tail of the singular-value spectrum regardless of its weight. This is clearly visible when we plot the truncation error versus time. Note the log scale, though: if you are fine with an error of, say, one permille for expectation values, you can still go on for a bit longer!
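To make the "go on for a bit longer" statement quantitative, one can accumulate the per-step truncation errors and count how many steps stay below a tolerance. A sketch with made-up per-step errors (with the notebook's data one would use `data['trunc_err']` instead):

```python
import numpy as np

# made-up per-step truncation errors, growing once chi_max is exhausted
eps_per_step = np.array([1e-12, 1e-9, 1e-6, 1e-4, 1e-3])
total_err = np.cumsum(eps_per_step)  # errors accumulate over the evolution
# number of steps for which the accumulated error stays below one permille
n_ok = int(np.searchsorted(total_err, 1e-3))
print(n_ok, "steps within the 1e-3 tolerance")
```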
[13]:
plt.plot(data['t'], data['trunc_err'])
plt.yscale('log')
#plt.ylim(1.e-15, 1.)
plt.xlabel('time $t$')
plt.ylabel('truncation error')
[13]:
Text(0, 0.5, 'truncation error')

[ ]:
A first finite DMRG Example¶
Like examples/d_dmrg.py, this notebook shows the basic interface for DMRG. It initializes the transverse field Ising model \(H = J XX + g Z\) at the critical point \(J=g=1\), and a finite MPS in the all-up state \(|\uparrow\cdots \uparrow\rangle\). It then runs DMRG to find the ground state. Finally, we look at the entanglement entropy profile across the bonds.
[1]:
import numpy as np
import scipy
import matplotlib.pyplot as plt
[2]:
import tenpy
import tenpy.linalg.np_conserved as npc
from tenpy.algorithms import dmrg
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
[3]:
L = 100
[4]:
model_params = {
'J': 1. , 'g': 1., # critical
'L': L,
'bc_MPS': 'finite',
}
M = TFIChain(model_params)
Reading 'bc_MPS'='finite' for config TFIChain
Reading 'L'=100 for config TFIChain
Reading 'J'=1.0 for config TFIChain
Reading 'g'=1.0 for config TFIChain
[5]:
psi = MPS.from_lat_product_state(M.lat, [['up']])
[6]:
dmrg_params = {
'mixer': None, # setting this to True helps to escape local minima
'max_E_err': 1.e-10,
'trunc_params': {
'chi_max': 100,
'svd_min': 1.e-10,
},
'verbose': True,
'combine': True
}
eng = dmrg.TwoSiteDMRGEngine(psi, M, dmrg_params)
E, psi = eng.run() # the main work; modifies psi in place
Reading 'combine'=True for config TwoSiteDMRGEngine
Subconfig 'trunc_params'=Config(<3 options>, 'trunc_params') for config TwoSiteDMRGEngine
Reading 'max_E_err'=1e-10 for config TwoSiteDMRGEngine
Reading 'mixer'=None for config TwoSiteDMRGEngine
================================================================================
sweep 1, age = 100
Energy = -126.9290280127265333, S = 0.3766760098039985, norm_err = 1.2e-01
Current memory usage 97.4 MB, time elapsed: 2.3 s
Delta E = nan, Delta S = nan (per sweep)
max_trunc_err = 0.0000e+00, max_E_trunc = 1.8474e-13
MPS bond dimensions: [2, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 2]
================================================================================
sweep 2, age = 100
Energy = -126.9618018107068309, S = 0.5108558254154341, norm_err = 7.0e-03
Current memory usage 99.5 MB, time elapsed: 5.6 s
Delta E = -3.2774e-02, Delta S = 1.3418e-01 (per sweep)
max_trunc_err = 0.0000e+00, max_E_trunc = 1.9895e-13
MPS bond dimensions: [2, 4, 8, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 8, 4, 2]
================================================================================
sweep 3, age = 100
Energy = -126.9618767383882130, S = 0.5277643082687588, norm_err = 1.1e-05
Current memory usage 107.0 MB, time elapsed: 10.1 s
Delta E = -7.4928e-05, Delta S = 1.6908e-02 (per sweep)
max_trunc_err = 4.7789e-20, max_E_trunc = 2.8422e-13
MPS bond dimensions: [2, 4, 8, 16, 27, 35, 39, 43, 46, 49, 49, 52, 55, 60, 62, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 61, 59, 60, 55, 51, 47, 44, 42, 42, 38, 35, 32, 23, 16, 8, 4, 2]
================================================================================
sweep 4, age = 100
Energy = -126.9618767396799086, S = 0.5277994024302479, norm_err = 5.8e-10
Current memory usage 115.4 MB, time elapsed: 16.9 s
Delta E = -1.2917e-09, Delta S = 3.5094e-05 (per sweep)
max_trunc_err = 1.3388e-18, max_E_trunc = 2.8422e-13
MPS bond dimensions: [2, 4, 8, 16, 22, 30, 34, 36, 41, 45, 47, 48, 52, 54, 57, 61, 61, 63, 64, 66, 67, 70, 71, 72, 74, 76, 76, 77, 79, 79, 80, 82, 83, 83, 83, 84, 84, 84, 84, 86, 86, 89, 88, 89, 89, 88, 88, 89, 88, 88, 88, 87, 86, 87, 87, 87, 87, 86, 86, 85, 85, 84, 84, 84, 83, 83, 83, 83, 81, 80, 79, 77, 77, 77, 75, 73, 71, 70, 67, 66, 64, 62, 61, 61, 57, 54, 52, 48, 47, 45, 41, 36, 34, 30, 22, 16, 8, 4, 2]
================================================================================
sweep 5, age = 100
Energy = -126.9618767396792691, S = 0.5277994025033066, norm_err = 2.1e-13
Current memory usage 115.4 MB, time elapsed: 20.4 s
Delta E = 6.3949e-13, Delta S = 7.3059e-11 (per sweep)
max_trunc_err = 6.5986e-20, max_E_trunc = 3.1264e-13
MPS bond dimensions: [2, 4, 8, 16, 22, 30, 34, 36, 41, 45, 47, 48, 52, 54, 57, 61, 61, 63, 64, 66, 67, 71, 71, 74, 77, 77, 78, 79, 81, 81, 81, 83, 83, 84, 85, 85, 85, 85, 86, 88, 89, 90, 90, 90, 90, 90, 90, 91, 91, 91, 90, 90, 90, 90, 90, 90, 90, 90, 89, 88, 86, 86, 85, 85, 85, 84, 83, 83, 81, 81, 81, 79, 78, 78, 77, 73, 71, 71, 67, 66, 64, 63, 61, 61, 57, 54, 52, 48, 47, 45, 41, 36, 34, 30, 22, 16, 8, 4, 2]
================================================================================
DMRG finished after 5 sweeps.
total size = 100, maximum chi = 91
================================================================================
Expectation Values¶
[7]:
# the ground state energy was directly returned by dmrg.run()
print("ground state energy = ", E)
# there are other ways to extract the energy from psi:
E1 = M.H_MPO.expectation_value(psi) # based on the MPO
E2 = np.sum(M.bond_energies(psi)) # based on bond terms of H, works only for a NearestNeighborModel
assert abs(E-E1) < 1.e-10 and abs(E-E2) < 1.e-10
ground state energy = -126.96187673967927
[8]:
# onsite expectation values
X = psi.expectation_value("Sigmax")
Z = psi.expectation_value("Sigmaz")
x = np.arange(psi.L)
plt.figure()
plt.plot(x, Z, label="Z")
plt.plot(x, X, label="X") # note: it's clear that this is zero due to charge conservation!
plt.xlabel("site")
plt.ylabel("onsite expectation value")
plt.legend()
plt.show()

[9]:
# correlation functions
i0 = psi.L // 4 # for fixed `i`
j = np.arange(i0 + 1, psi.L)
XX = psi.term_correlation_function_right([("Sigmax", 0)], [("Sigmax", 0)], i_L=i0, j_R=j)
XX_disc = XX - X[i0] * X[j]
ZZ = psi.term_correlation_function_right([("Sigmaz", 0)], [("Sigmaz", 0)], i_L=i0, j_R=j)
ZZ_disc = ZZ - Z[i0] * Z[j]
dx = j - i0
plt.figure()
plt.plot(dx, XX_disc, label="X X")
plt.plot(dx, ZZ_disc, label="Z Z")
plt.xlabel(r"distance $|i-j|$")
plt.ylabel(r"disconnected correlations $\langle A_i A_j\rangle - \langle A_i \rangle\langle A_j\rangle$")
plt.legend()
plt.loglog()
plt.show()

We find power-law decaying correlations, as expected for a critical model. For a gapped model, we would expect exponentially decaying correlations.
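To quantify the decay, one can estimate the power-law exponent with a straight-line fit in log-log space. A minimal sketch with numpy, using synthetic power-law data as a stand-in for the `dx` and `XX_disc` arrays computed above:

```python
import numpy as np

# stand-in for the measured disconnected correlations <X_i X_j> - <X_i><X_j>;
# in practice, use the `dx` and `XX_disc` arrays from the cell above
dx = np.arange(1, 50)
XX_disc = 0.8 * dx ** (-0.25)

# fit log C(r) = a*log(r) + b; the slope `a` is the power-law exponent
mask = XX_disc > 0  # only positive values can be log-transformed
a, b = np.polyfit(np.log(dx[mask]), np.log(XX_disc[mask]), 1)
print(f"estimated power-law exponent: {a:.3f}")
```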
We now look at the entanglement entropy. The transverse-field Ising model is critical at \(g=J\). Conformal field theory (Calabrese, Cardy 2004) predicts an entanglement entropy profile of \(S(x) = \frac{c}{6} \log\left(\frac{2L}{\pi}\sin\frac{\pi x}{L}\right) + \mathrm{const}\) for an open chain of length \(L\) with central charge \(c\).
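A minimal numeric sketch of this Calabrese-Cardy profile (the function name and the additive constant here are illustrative, not part of TeNPy):

```python
import numpy as np

def cft_entropy(x, L, c, const):
    """Calabrese-Cardy entropy profile for an open chain (one common convention)."""
    return c / 6.0 * np.log(2.0 * L / np.pi * np.sin(np.pi * x / L)) + const

L = 100                      # chain length
x = np.arange(1, L)          # cut positions (bond between sites x-1 and x)
S = cft_entropy(x, L, c=0.5, const=0.3)
# the profile is symmetric around the chain center and peaks in the middle
print(int(S.argmax()))
```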
[12]:
S = psi.entanglement_entropy()
bonds = np.arange(0.5, psi.L-1)
plt.plot(bonds, S, 'o', label="S")
# perform fit to extract the central charge
central_charge, const, res = tenpy.tools.fit.central_charge_from_S_profile(psi)
fit = tenpy.tools.fit.entropy_profile_from_CFT(bonds + 0.5, psi.L, central_charge, const)
print(f"extracted central charge {central_charge:.5f} with residuum {res:.2e}")
print("(Expect central charge = 0.5 for the transverse field Ising model.)")
plt.plot(bonds, fit, label=f"fit with $c={central_charge:.3f}$")
plt.xlabel("bond")
plt.ylabel("entanglement entropy $S$")
plt.legend()
plt.show()
extracted central charge 0.50788 with residuum 1.95e-09
(Expect central charge = 0.5 for the transverse field Ising model.)

Lattices: visualization and further examples¶
This notebook demonstrates a few ways to visualize lattices and couplings inside the lattice.
[1]:
import tenpy
[2]:
import numpy as np
import matplotlib.pyplot as plt
from tenpy.models import lattice
In the following, we will focus on the Honeycomb lattice as an example with a non-trivial unit cell. If you want to try it out yourself for a different lattice, simply adjust the following alias and re-run all the cells below:
[3]:
MyLattice = lattice.Honeycomb
Lu = MyLattice.Lu # = 2 = the number of sites in the unit cell
fig_args = dict(figsize=(7, 5), dpi=150) # make figures a bit larger
plotting the lattice itself¶
To get started, let’s recall that a lattice consists of a unit cell that is repeated in the directions of the basis. The following plot visualizes the first unit cell and basis and plots the sites of the whole lattice. For the Honeycomb lattice, we have two different sites in the unit cell, which are visualized by different markers.
[4]:
lat = MyLattice(5, 4, sites=None, bc='periodic')
[5]:
plt.figure(**fig_args)
ax = plt.gca()
lat.plot_sites(ax)
ax.set_aspect('equal')
lat.plot_basis(ax, origin=-0.5*(lat.basis[0] + lat.basis[1]))
ax.set_xlim(-1)
ax.set_ylim(-1)
[5]:
(-1, 5.525)

We can also plot the nearest- and next-nearest-neighbor bonds:
[6]:
plt.figure(**fig_args)
ax = plt.gca()
lat.plot_sites(ax)
lat.plot_coupling(ax)
lat.plot_coupling(ax, lat.pairs['next_nearest_neighbors'], linestyle=':', color='r')
ax.set_aspect('equal')

If you have a 1D MPS, its winding through the lattice is defined by the order
(see the user guide section “Details on the lattice geometry”). You can plot it as follows:
[7]:
plt.figure(**fig_args)
ax = plt.gca()
lat.plot_sites(ax)
lat.plot_order(ax)
ax.set_aspect('equal')

Visually verifying pairs of couplings by plotting them¶
In this section, we visually verify that the lattice pairs
like the nearest_neighbors
are indeed what they claim to be. To achieve this, we first collect all the distinct distances of those pairs, then plot circles with these radii and lines connecting the paired points for each distance.
Then, you have to stare at the plots and verify that these couplings include all pairs you want to have.
[8]:
lat = MyLattice(12, 12, sites=None, bc='periodic') # the lattice to plot
lat_pairs = lat.pairs # the coupling pairs to plot
[9]:
# get distances of the couplings
dist_pair = {}
for pair in lat_pairs:
#print(pair)
dist = None
for u1, u2, dx in lat_pairs[pair]:
d = lat.distance(u1, u2, dx)
#print(u1, u2, dx, d)
if dist is None:
dist = d
dist_pair[d] = pair
else:
assert abs(dist-d) < 1.e-14
dists = sorted(dist_pair.keys())
if len(dists) != len(lat_pairs):
raise ValueError("no unique mapping dist -> pair")
[10]:
print("(dist) (pairs)")
for d in dists:
print("{0:.6f} {1}".format(d, dist_pair[d]))
(dist) (pairs)
0.577350 nearest_neighbors
1.000000 next_nearest_neighbors
1.154701 next_next_nearest_neighbors
1.527525 fourth_nearest_neighbors
1.732051 fifth_nearest_neighbors
[11]:
colors = [plt.cm.viridis(r/dists[-1]) for r in dists]
centers = np.array([[3, 3, 0], [8, 3, 1], [3, 8, 2], [8, 8, 3]]) # one center for each site in the unit cell
us = list(range(Lu))
[12]:
fig = plt.figure(**fig_args)
ax = plt.gca()
lat.plot_sites(ax, markersize=1.3)
for u, center in zip(us, centers):
center = lat.position(center)
for r, c in zip(dists, colors):
circ = plt.Circle(center, r, fill=False, color=c)
ax.add_artist(circ)
ax.set_aspect(1.)
t = ax.set_title("distances: " + ' '.join(['{0:.2f}'.format(d) for d in dists]))

[13]:
for dist in dists:
pair_name = dist_pair[dist]
print(dist, pair_name)
pairs = lat.pairs[pair_name]
fig = plt.figure(**fig_args)
ax = plt.gca()
lat.plot_sites(ax, markersize=1.3)
for u, center in zip(us, centers):
center = lat.position(center)
for r, c in zip(dists, colors):
circ = plt.Circle(center, r, fill=False, color=c)
ax.add_artist(circ)
pairs_with_reverse = pairs + [(u2, u1, -np.array(dx)) for u1, u2, dx in pairs]
for u1, u2, dx in pairs_with_reverse:
print(u1, u2, dx, lat.distance(u1, u2, dx))
start = centers[u1]
end = start.copy()
end[-1] = u2
end[:-1] = start[:-1] + dx
x1, y1 = lat.position(start)
x2, y2 = lat.position(end)
ax.arrow(x1, y1, x2-x1, y2 - y1)
for u in us:
x, y = lat.position(centers[u])
number = lat.count_neighbors(u, pair_name)
ax.text(x, y, str(number), color='r')
ax.set_aspect(1.)
ax.set_title(pair_name + ' distance = {0:.3f}'.format(dist))
0.5773502691896258 nearest_neighbors
0 1 [0 0] 0.5773502691896258
1 0 [1 0] 0.5773502691896257
1 0 [0 1] 0.5773502691896258
1 0 [0 0] 0.5773502691896258
0 1 [-1 0] 0.5773502691896257
0 1 [ 0 -1] 0.5773502691896258
0.9999999999999999 next_nearest_neighbors
0 0 [1 0] 0.9999999999999999
0 0 [0 1] 1.0
0 0 [ 1 -1] 0.9999999999999999
1 1 [1 0] 0.9999999999999999
1 1 [0 1] 1.0
1 1 [ 1 -1] 0.9999999999999999
0 0 [-1 0] 0.9999999999999999
0 0 [ 0 -1] 1.0
0 0 [-1 1] 0.9999999999999999
1 1 [-1 0] 0.9999999999999999
1 1 [ 0 -1] 1.0
1 1 [-1 1] 0.9999999999999999
1.1547005383792515 next_next_nearest_neighbors
1 0 [1 1] 1.1547005383792515
0 1 [-1 1] 1.1547005383792515
0 1 [ 1 -1] 1.1547005383792515
0 1 [-1 -1] 1.1547005383792515
1 0 [ 1 -1] 1.1547005383792515
1 0 [-1 1] 1.1547005383792515
1.5275252316519468 fourth_nearest_neighbors
0 1 [0 1] 1.5275252316519468
0 1 [1 0] 1.5275252316519465
0 1 [ 1 -2] 1.5275252316519465
0 1 [ 0 -2] 1.5275252316519468
0 1 [-2 0] 1.5275252316519465
0 1 [-2 1] 1.5275252316519465
1 0 [ 0 -1] 1.5275252316519468
1 0 [-1 0] 1.5275252316519465
1 0 [-1 2] 1.5275252316519465
1 0 [0 2] 1.5275252316519468
1 0 [2 0] 1.5275252316519465
1 0 [ 2 -1] 1.5275252316519465
1.7320508075688772 fifth_nearest_neighbors
0 0 [1 1] 1.7320508075688772
0 0 [ 2 -1] 1.7320508075688772
0 0 [-1 2] 1.7320508075688772
1 1 [1 1] 1.7320508075688772
1 1 [ 2 -1] 1.7320508075688772
1 1 [-1 2] 1.7320508075688772
0 0 [-1 -1] 1.7320508075688772
0 0 [-2 1] 1.7320508075688772
0 0 [ 1 -2] 1.7320508075688772
1 1 [-1 -1] 1.7320508075688772
1 1 [-2 1] 1.7320508075688772
1 1 [ 1 -2] 1.7320508075688772





[ ]:
Toric Code¶
This notebook shows how to implement Kitaev’s Toric Code model on an infinite cylinder geometry. The Hamiltonian of the Toric Code is
\(H_{TC} = - \sum_p B_p - \sum_s A_s,\)
where \(B_p = \prod_{i \in p} \sigma^z_i\) denotes the product of \(\sigma^z\) around an elementary plaquette and \(A_s = \prod_{i \in s} \sigma^x_i\) denotes the product of \(\sigma^x\) on the four links sticking out of a site of the lattice. The \(B_p\) and \(A_s\) operators commute with each other and have eigenvalues \(\pm 1\), hence the ground state will have eigenvalue \(+1\) for each of them.
On an infinite cylinder, the model exhibits a four-fold degeneracy of the ground state, characterized by the different values \(\pm 1\) of the non-contractible Wilson and t’Hooft loops. These loop operators are defined by the products
\(W = \prod_{i \in \ell_W} \sigma^z_i, \qquad H = \prod_{i \in \ell_H} \sigma^x_i\)
around the cylinder.
TeNPy already implements this Hamiltonian in tenpy.models.toric_code.ToricCode
. This notebook demonstrates how to extend the existing model by additional terms in the Hamiltonian and by convenience functions for simplified measurements of the loop operators. Then we run iDMRG to obtain the four different ground states and check that they are indeed orthogonal and degenerate.
[1]:
import numpy as np
import scipy
import matplotlib.pyplot as plt
np.set_printoptions(precision=5, suppress=True, linewidth=120)
[2]:
import tenpy
import tenpy.linalg.np_conserved as npc
from tenpy.algorithms.dmrg import TwoSiteDMRGEngine
from tenpy.networks.mps import MPS
from tenpy.networks.terms import TermList
from tenpy.models.toric_code import ToricCode, DualSquare
from tenpy.models.lattice import Square
Let’s plot the DualSquare lattice first to make the geometry clear. The unit cell consists of two sites, the blue one on the vertical links (\(u=0\)), and the orange one on the horizontal links (\(u=1\)). We also plot lines illustrating where the Wilson and t’Hooft loops go.
[3]:
plt.figure(figsize=(7, 5))
ax = plt.gca()
lat = DualSquare(4, 4, None, bc='periodic')
sq = Square(4, 4, None, bc='periodic')
sq.plot_coupling(ax, linewidth=3.)
ax.plot([2., 2.], [-0.5, 4.3], 'r:', linewidth=5., label="Wilson")
ax.plot([2.5, 2.5], [-0.5, 4.3], 'b:', linewidth=5., label="t'Hooft")
lat.plot_sites(ax)
lat.plot_basis(ax, origin=-0.25*(lat.basis[0] + lat.basis[1]))
ax.set_aspect('equal')
ax.set_xlim(-1, 6)
ax.set_ylim(-1)
ax.legend()
plt.show()

Extending the existing model¶
We will implement additional parameters which optionally add the terms \(-\mathtt{J_{WL}} W - \mathtt{J_{HL}} H\) for the Wilson and t’Hooft loop operators \(W, H\) defined above.
Note that we only want to add a single loop for each of them, not one at each possible starting point. Hence, the correct method is add_local_term
, not add_multi_coupling_term
or add_coupling_term
.
The sign of \(\mathtt{J_{WL}},\mathtt{J_{HL}}\) will allow us to select sectors with \(\langle \psi | W |\psi \rangle = \pm 1\) and \(\langle \psi | H |\psi \rangle = \pm 1\), respectively. Note that so far the constraints commute with the Hamiltonian and with each other. However, this is no longer the case if we add another global field \(H \rightarrow H -\mathtt{h} \sum_i \sigma^z_i\), which makes the Hamiltonian no longer exactly solvable.
[4]:
class ExtendedToricCode(ToricCode):
def init_terms(self, model_params):
ToricCode.init_terms(self, model_params) # add terms of the original ToricCode model
Ly = self.lat.shape[1]
J_WL = model_params.get('J_WL', 0.)
J_HL = model_params.get('J_HL', 0.)
# unit-cell indices:
# u=0: vertical links
# u=1: horizontal links
# Wilson Loop
x, u = 0, 0 # vertical links
self.add_local_term(-J_WL, [('Sigmaz', [x, y, u]) for y in range(Ly)])
# t'Hooft Loop
x, u = 0, 1 # horizontal links
self.add_local_term(-J_HL, [('Sigmax', [x, y, u]) for y in range(Ly)])
h = model_params.get('h', 0.)
for u in range(2):
self.add_onsite(-h, u, 'Sigmaz')
def wilson_loop_y(self, psi):
"""Measure wilson loop around the cylinder."""
Ly = self.lat.shape[1]
x, u = 0, 0 # vertical links
W = TermList.from_lattice_locations(self.lat, [[("Sigmaz",[x, y, u]) for y in range(Ly)]])
return psi.expectation_value_terms_sum(W)[0]
def hooft_loop_y(self, psi):
"""Measure t'Hooft loop around the cylinder."""
Ly = self.lat.shape[1]
x, u = 0, 1 # horizontal links
H = TermList.from_lattice_locations(self.lat, [[("Sigmax",[x, y, u]) for y in range(Ly)]])
return psi.expectation_value_terms_sum(H)[0]
iDMRG¶
We now run the iDMRG algorithm for the four different sectors to obtain the four degenerate ground states of the system.
To reliably get the 4 different ground states, we will add the energetic constraints for the loops. In this case, the original Hamiltonian will completely commute with the loop operators, such that you directly get the correct ground state. However, this is a very peculiar case - in general, e.g. when you add the external field with \(h \neq 0\), this will no longer be the case. We hence demonstrate a more “robust” way of getting the four different ground states (at least if you have an idea which operator distinguishes them): For each sector, we run DMRG twice: first with the energy penalties for the \(W,H\) loops to get an initial \(|\psi\rangle\) in the desired sector. Then we run DMRG again for the clean toric code model, without the \(W,H\) added, to make sure we have a ground state of the clean model.
[5]:
dmrg_params = {
'mixer': True,
'trunc_params': {'chi_max': 64,
'svd_min': 1.e-8},
'max_E_err': 1.e-8,
'max_S_err': 1.e-7,
'N_sweeps_check': 4,
'max_sweeps':24,
'verbose': 0,
}
model_params = {
'Lx': 1, 'Ly': 4, # Ly is set below
'bc_MPS': "infinite",
'conserve': None,
'verbose': 0,
}
def run_DMRG(Ly, J_WL, J_HL, h=0.):
print("="*80)
print(f"Start iDMRG for Ly={Ly:d}, J_WL={J_WL:.2f}, J_HL={J_HL:.2f}, h={h:.2f}")
model_params_clean = model_params.copy()
model_params_clean['Ly'] = Ly
model_params_clean['h'] = h
model_clean = ExtendedToricCode(model_params_clean)
model_params_seed = model_params_clean.copy()
model_params_seed['J_WL'] = J_WL
model_params_seed['J_HL'] = J_HL
model = ExtendedToricCode(model_params_seed)
psi = MPS.from_lat_product_state(model.lat, [[["up"]]])
eng = TwoSiteDMRGEngine(psi, model, dmrg_params)
E0, psi = eng.run()
WL = model.wilson_loop_y(psi)
HL = model.hooft_loop_y(psi)
print(f"after first DMRG run: <psi|W|psi> = {WL: .3f}")
print(f"after first DMRG run: <psi|H|psi> = {HL: .3f}")
print(f"after first DMRG run: E (including W/H loops) = {E0:.10f}")
E0_clean = model_clean.H_MPO.expectation_value(psi)
print(f"after first DMRG run: E (excluding W/H loops) = {E0_clean:.10f}")
# switch to model without Wilson/Hooft loops
eng.init_env(model=model_clean)
E1, psi = eng.run()
WL = model_clean.wilson_loop_y(psi)
HL = model_clean.hooft_loop_y(psi)
print(f"after second DMRG run: <psi|W|psi> = {WL: .3f}")
print(f"after second DMRG run: <psi|H|psi> = {HL: .3f}")
print(f"after second DMRG run: E (excluding W/H loops) = {E1:.10f}")
print("max chi: ", max(psi.chi))
return {'psi': psi,
'model': model_clean,
'E0': E0, 'E0_clean': E0_clean, 'E': E1,
'WL': WL, 'HL': HL}
print("="*80)
[6]:
results_loops = {}
for J_WL in [-5., 5.]:
for J_HL in [-5., 5.]:
results_loops[(J_WL, J_HL)] = run_DMRG(4, J_WL, J_HL)
================================================================================
Start iDMRG for Ly=4, J_WL=-5.00, J_HL=-5.00, h=0.00
after first DMRG run: <psi|W|psi> = -1.000
after first DMRG run: <psi|H|psi> = -1.000
after first DMRG run: E (including W/H loops) = -2.2500000000
after first DMRG run: E (excluding W/H loops) = -1.0000000000
after second DMRG run: <psi|W|psi> = -1.000
after second DMRG run: <psi|H|psi> = -1.000
after second DMRG run: E (excluding W/H loops) = -1.0000000000
max chi: 8
================================================================================
Start iDMRG for Ly=4, J_WL=-5.00, J_HL=5.00, h=0.00
after first DMRG run: <psi|W|psi> = -1.000
after first DMRG run: <psi|H|psi> = 1.000
after first DMRG run: E (including W/H loops) = -2.2500000000
after first DMRG run: E (excluding W/H loops) = -1.0000000000
after second DMRG run: <psi|W|psi> = -1.000
after second DMRG run: <psi|H|psi> = 1.000
after second DMRG run: E (excluding W/H loops) = -1.0000000000
max chi: 8
================================================================================
Start iDMRG for Ly=4, J_WL=5.00, J_HL=-5.00, h=0.00
after first DMRG run: <psi|W|psi> = 1.000
after first DMRG run: <psi|H|psi> = -1.000
after first DMRG run: E (including W/H loops) = -2.2500000000
after first DMRG run: E (excluding W/H loops) = -1.0000000000
after second DMRG run: <psi|W|psi> = 1.000
after second DMRG run: <psi|H|psi> = -1.000
after second DMRG run: E (excluding W/H loops) = -1.0000000000
max chi: 8
================================================================================
Start iDMRG for Ly=4, J_WL=5.00, J_HL=5.00, h=0.00
after first DMRG run: <psi|W|psi> = 1.000
after first DMRG run: <psi|H|psi> = 1.000
after first DMRG run: E (including W/H loops) = -2.2500000000
after first DMRG run: E (excluding W/H loops) = -1.0000000000
after second DMRG run: <psi|W|psi> = 1.000
after second DMRG run: <psi|H|psi> = 1.000
after second DMRG run: E (excluding W/H loops) = -1.0000000000
max chi: 8
As we can see from the output, the idea worked and we indeed get the desired expectation values for the W/H loops. Also, we can directly see that the ground states are degenerate in energy.
To check that we indeed found four orthogonal ground states, let’s calculate the mutual overlaps \(\langle \psi_i | \psi_j \rangle\) for \(i,j \in \{0,1,2,3\}\):
[7]:
psi_list = [res['psi'] for res in results_loops.values()]
overlaps = [[psi_i.overlap(psi_j) for psi_j in psi_list] for psi_i in psi_list]
print("overlaps")
print(np.array(overlaps))
overlaps
[[ 1.+0.j 0.+0.j -0.+0.j 0.-0.j]
[ 0.+0.j 1.+0.j 0.+0.j -0.+0.j]
[-0.+0.j 0.+0.j 1.+0.j 0.+0.j]
[ 0.-0.j -0.+0.j 0.+0.j 1.+0.j]]
The correlation length is almost vanishing, as it should be:
[8]:
print("Correlation lengths: ")
model = results_loops[(+5., +5.)]['model']
print(np.array([psi.correlation_length() / model.lat.N_sites_per_ring for psi in psi_list]))
Correlation lengths:
[0.0294 0.02924 0.02915 0.02876]
Intermezzo: Let’s quickly check for a single case that this still works for non-zero \(h = 0.1\), where the t’Hooft loop no longer commutes with the Hamiltonian:
[9]:
res_h = run_DMRG(4, +5., -5., 0.1)
================================================================================
Start iDMRG for Ly=4, J_WL=5.00, J_HL=-5.00, h=0.10
after first DMRG run: <psi|W|psi> = 1.000
after first DMRG run: <psi|H|psi> = -1.000
after first DMRG run: E (including W/H loops) = -2.2516147149
after first DMRG run: E (excluding W/H loops) = -1.0018727133
after second DMRG run: <psi|W|psi> = 1.000
after second DMRG run: <psi|H|psi> = -0.995
after second DMRG run: E (excluding W/H loops) = -1.0025283892
max chi: 64
The t’Hooft loop is no longer exactly 1 in the ground state, since it no longer commutes with the Hamiltonian, but it is still close to one, so it still distinguishes the degenerate states. As expected, the local perturbation of the field cannot destroy the global, topological nature of the ground state (manifold).
Topological entanglement entropy¶
Back to \(h=0\). For the toric code, the half-cylinder entanglement entropy should be \((L_y-1)\log 2\). Indeed, we get the correct value at bond 0:
[10]:
print("Entanglement entropies / log(2)")
print(np.array([psi.entanglement_entropy()[0]/np.log(2) for psi in psi_list]))
Entanglement entropies / log(2)
[3. 3. 3. 3.]
Recall that bond 0 usually gives a sensible, straight cut of the cylinder for infinite DMRG on a common lattice. This is especially important if you want to extract the topological entanglement entropy \(\gamma\) from a fit \(S(L_y) = \alpha L_y - \gamma\), since corners in the cut can also contribute to \(\gamma\).
Let us demonstrate the extraction of \(\gamma = \log(2)\) for the toric code:
[11]:
model_params['order'] = 'Cstyle' # The effect doesn't appear with the "default" ordering for the toric code.
# This is also a hint that you need bond 0: you want something independent of what order you choose
# inside the MPS unit cell.
results_Ly = {}
for Ly in range(2, 7):
results_Ly[Ly] = run_DMRG(Ly, 5., 5.)
================================================================================
Start iDMRG for Ly=2, J_WL=5.00, J_HL=5.00, h=0.00
after first DMRG run: <psi|W|psi> = 1.000
after first DMRG run: <psi|H|psi> = 1.000
after first DMRG run: E (including W/H loops) = -3.5000000000
after first DMRG run: E (excluding W/H loops) = -1.0000000000
after second DMRG run: <psi|W|psi> = 1.000
after second DMRG run: <psi|H|psi> = 1.000
after second DMRG run: E (excluding W/H loops) = -1.0000000000
max chi: 4
================================================================================
Start iDMRG for Ly=3, J_WL=5.00, J_HL=5.00, h=0.00
after first DMRG run: <psi|W|psi> = 1.000
after first DMRG run: <psi|H|psi> = 1.000
after first DMRG run: E (including W/H loops) = -2.6666666667
after first DMRG run: E (excluding W/H loops) = -1.0000000000
after second DMRG run: <psi|W|psi> = 1.000
after second DMRG run: <psi|H|psi> = 1.000
after second DMRG run: E (excluding W/H loops) = -1.0000000000
max chi: 8
================================================================================
Start iDMRG for Ly=4, J_WL=5.00, J_HL=5.00, h=0.00
after first DMRG run: <psi|W|psi> = 1.000
after first DMRG run: <psi|H|psi> = 1.000
after first DMRG run: E (including W/H loops) = -2.2500000000
after first DMRG run: E (excluding W/H loops) = -1.0000000000
after second DMRG run: <psi|W|psi> = 1.000
after second DMRG run: <psi|H|psi> = 1.000
after second DMRG run: E (excluding W/H loops) = -1.0000000000
max chi: 16
================================================================================
Start iDMRG for Ly=5, J_WL=5.00, J_HL=5.00, h=0.00
after first DMRG run: <psi|W|psi> = 1.000
after first DMRG run: <psi|H|psi> = 1.000
after first DMRG run: E (including W/H loops) = -2.0000000000
after first DMRG run: E (excluding W/H loops) = -1.0000000000
after second DMRG run: <psi|W|psi> = 1.000
after second DMRG run: <psi|H|psi> = 1.000
after second DMRG run: E (excluding W/H loops) = -1.0000000000
max chi: 32
================================================================================
Start iDMRG for Ly=6, J_WL=5.00, J_HL=5.00, h=0.00
after first DMRG run: <psi|W|psi> = 1.000
after first DMRG run: <psi|H|psi> = 1.000
after first DMRG run: E (including W/H loops) = -1.8333333333
after first DMRG run: E (excluding W/H loops) = -1.0000000000
after second DMRG run: <psi|W|psi> = 1.000
after second DMRG run: <psi|H|psi> = 1.000
after second DMRG run: E (excluding W/H loops) = -1.0000000000
max chi: 64
[12]:
Lys = np.array(list(results_Ly.keys()))
S0s = np.array([res['psi'].entanglement_entropy()[0] for res in results_Ly.values()])
S1s = np.array([res['psi'].entanglement_entropy()[2] for res in results_Ly.values()])
plt.figure(figsize=(7, 5))
ax = plt.gca()
ax.plot(Lys, S0s / np.log(2), 'bo', label='data at bond 0')
ax.plot(Lys, S1s / np.log(2), 'rs', label='data at wrong bond')
Lys_all = np.arange(0., max(Lys) + 2.)
ax.plot(Lys_all, Lys_all - 1., '-', label='expected: $L_y - 1$')
ax.axhline(0., linestyle='--', color='k')
ax.set_xlabel('$L_y$')
ax.set_ylabel('$S / log(2)$')
ax.legend()
plt.show()

As expected, we find the scaling \(S(L_y) = \log(2) L_y - \log(2)\) at bond 0, indicating that \(\gamma = \log(2)\). Cutting the MPS at another bond (red points) gives a “weird” cut of the cylinder and does not show the expected scaling.
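The extraction of \(\gamma\) can also be done numerically with a linear fit \(S(L_y) = \alpha L_y - \gamma\). A minimal sketch, using the ideal toric-code values \(S = \log(2)(L_y - 1)\) as stand-in data; in practice you would fit the `Lys` and `S0s` arrays from the cells above:

```python
import numpy as np

Lys = np.arange(2, 7)          # cylinder circumferences
S0s = np.log(2) * (Lys - 1)    # stand-in for the measured entropies at bond 0

# linear fit S = alpha*L_y + intercept; gamma is minus the intercept
alpha, intercept = np.polyfit(Lys, S0s, 1)
gamma = -intercept
print(f"alpha/log(2) = {alpha/np.log(2):.3f}, gamma/log(2) = {gamma/np.log(2):.3f}")
```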
Troubleshooting and FAQ¶
- I updated to a new version and now I get an error/warning.
Take a look at the section “Backwards incompatible changes” in /changelog for the versions since your last update.
- Where did all the output go?
Take a look at Logging and terminal output.
- What are possible parameters for …?
- How can I set the number of threads TeNPy is using?
Most algorithms in TeNPy don’t use any parallelization besides what the underlying BLAS provides, so this depends on how you installed TeNPy, numpy and scipy! For example, an
export OMP_NUM_THREADS=4
should limit it to 4 threads under usual setups, but you might want to export MKL_NUM_THREADS=4
instead, if you are sure that you are using MKL.
- Why is TeNPy not respecting MKL_NUM_THREADS?
It might be that it is not using MKL. On linux, check whether you have installed a pip version of numpy or scipy in $HOME/.local/lib/python3.* Those packages do not use MKL - you would need to install numpy and scipy from conda. If you use the conda-forge channel as recommended in the installation, also make sure that you select the BLAS provided by MKL, see the note in install/conda.rst.
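To check which BLAS your numpy build actually links against, you can inspect its build configuration (look for `mkl`, `openblas`, or `blis` among the listed libraries):

```python
import numpy as np

# prints the libraries and include paths numpy was built against;
# an MKL-backed build shows 'mkl' entries here
np.show_config()
```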
I get an error when …¶
- … I try to measure
Sx_i Sx_j
correlations in a state with Sz conservation. Right now this is not possible. See the basic facts in Charge conservation with np_conserved.
I get a warning about …¶
- … an unused parameter.
Make sure that you don’t have a typo and that it is in the right parameter set! Also check in the logging output whether the parameter was actually used. For further details, see Parameters and options.
Literature and References¶
This is a (by far non-exhaustive) list of some references for the various ideas behind the code. They can be cited like this:
[TeNPyNotes] for TeNPy/software related sources
[white1992] (lowercase first-author + year) for entries from literature.bib.
General reading¶
[schollwoeck2011] is an extensive introduction to MPS, DMRG and TEBD with lots of details on the implementations, and a classic read, although a bit lengthy. Our [TeNPyNotes] are a shorter summary of the important concepts, similar to [orus2014]. [paeckel2019] is a very good, recent review focusing on time evolution with MPS. The lecture notes of [eisert2013] explain the area law as motivation for tensor networks very well. PEPS are reviewed, for example, in [verstraete2008], [eisert2013] and [orus2014]. [stoudenmire2012] reviews the use of DMRG for 2D systems. [cirac2009] discusses the different groups of tensor network states.
Algorithm developments¶
[white1992, white1993] is the invention of DMRG, which started everything. [vidal2004] introduced TEBD. [white2005] and [hubig2015] solved problems for single-site DMRG. [mcculloch2008] was a huge step forward in solving convergence problems for infinite DMRG. [singh2010, singh2011] explain how to incorporate symmetries. [haegeman2011] introduced TDVP, explained more accessibly in [haegeman2016]. [zaletel2015] is another standard method for time evolution with long-range Hamiltonians. [karrasch2013] gives some tricks for finite-temperature simulations (DMRG), extended a bit in [hauschild2018a]. [vidal2007] introduced MERA. The scaling \(S = \frac{c}{6}\log(\chi)\) at a 1D critical point is explained in [pollmann2009].
References¶
- barthel2016
Thomas Barthel. Matrix product purifications for canonical ensembles and quantum number distributions. Physical Review B, 94(11):115157, September 2016. arXiv:1607.01696, doi:10.1103/PhysRevB.94.115157.
- barthel2020
Thomas Barthel and Yikang Zhang. Optimized Lie-Trotter-Suzuki decompositions for two and three non-commuting terms. Annals of Physics, 418:168165, July 2020. arXiv:1901.04974, doi:10.1016/j.aop.2020.168165.
- calabrese2004
Pasquale Calabrese and John Cardy. Entanglement Entropy and Quantum Field Theory. Journal of Statistical Mechanics: Theory and Experiment, 2004(06):P06002, June 2004. arXiv:hep-th/0405152, doi:10.1088/1742-5468/2004/06/P06002.
- cincio2013
Lukasz Cincio and Guifre Vidal. Characterizing topological order by studying the ground states of an infinite cylinder. Physical Review Letters, 110(6):067208, February 2013. arXiv:1208.2623, doi:10.1103/PhysRevLett.110.067208.
- cirac2009
J. I. Cirac and F. Verstraete. Renormalization and tensor product states in spin chains and lattices. Journal of Physics A: Mathematical and Theoretical, 42(50):504004, December 2009. arXiv:0910.1130, doi:10.1088/1751-8113/42/50/504004.
- eisert2013(1,2)
J. Eisert. Entanglement and tensor network states. arXiv:1308.3318 [cond-mat, physics:quant-ph], September 2013. arXiv:1308.3318.
- grushin2015
Adolfo G. Grushin, Johannes Motruk, Michael P. Zaletel, and Frank Pollmann. Characterization and stability of a fermionic \(\nu=1/3\) fractional Chern insulator. Physical Review B, 91(3):035136, January 2015. arXiv:1407.6985, doi:10.1103/PhysRevB.91.035136.
- haegeman2011
Jutho Haegeman, J. Ignacio Cirac, Tobias J. Osborne, Iztok Pizorn, Henri Verschelde, and Frank Verstraete. Time-dependent variational principle for quantum lattices. Physical Review Letters, 107(7):070601, August 2011. arXiv:1103.0936, doi:10.1103/PhysRevLett.107.070601.
- haegeman2016
Jutho Haegeman, Christian Lubich, Ivan Oseledets, Bart Vandereycken, and Frank Verstraete. Unifying time evolution and optimization with matrix product states. Physical Review B, 94(16):165116, October 2016. arXiv:1408.5056, doi:10.1103/PhysRevB.94.165116.
- hauschild2018
Johannes Hauschild, Eyal Leviatan, Jens H. Bardarson, Ehud Altman, Michael P. Zaletel, and Frank Pollmann. Finding purifications with minimal entanglement. Physical Review B, 98(23):235163, December 2018. arXiv:1711.01288, doi:10.1103/PhysRevB.98.235163.
- hauschild2018a(1,2)
Johannes Hauschild and Frank Pollmann. Efficient numerical simulations with Tensor Networks: Tensor Network Python (TeNPy). SciPost Physics Lecture Notes, pages 5, October 2018. arXiv:1805.00055, doi:10.21468/SciPostPhysLectNotes.5.
- hubig2015
Claudius Hubig, Ian P. McCulloch, Ulrich Schollwöck, and F. Alexander Wolf. A Strictly Single-Site DMRG Algorithm with Subspace Expansion. Physical Review B, 91(15):155115, April 2015. arXiv:1501.05504, doi:10.1103/PhysRevB.91.155115.
- karrasch2013
C. Karrasch, J. H. Bardarson, and J. E. Moore. Reducing the numerical effort of finite-temperature density matrix renormalization group transport calculations. New Journal of Physics, 15(8):083031, August 2013. arXiv:1303.3942, doi:10.1088/1367-2630/15/8/083031.
- mcculloch2008
I. P. McCulloch. Infinite size density matrix renormalization group, revisited. arXiv:0804.2509 [cond-mat], April 2008. arXiv:0804.2509.
- murg2010
V. Murg, J. I. Cirac, B. Pirvu, and F. Verstraete. Matrix product operator representations. New Journal of Physics, 12(2):025012, February 2010. arXiv:0804.3976, doi:10.1088/1367-2630/12/2/025012.
- neupert2011
Titus Neupert, Luiz Santos, Claudio Chamon, and Christopher Mudry. Fractional quantum Hall states at zero magnetic field. Physical Review Letters, 106(23):236804, June 2011. arXiv:1012.4723, doi:10.1103/PhysRevLett.106.236804.
- orus2014(1,2)
Roman Orus. A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States. Annals of Physics, 349:117–158, October 2014. arXiv:1306.2164, doi:10.1016/j.aop.2014.06.013.
- paeckel2019
Sebastian Paeckel, Thomas Köhler, Andreas Swoboda, Salvatore R. Manmana, Ulrich Schollwöck, and Claudius Hubig. Time-evolution methods for matrix-product states. Annals of Physics, 411:167998, December 2019. arXiv:1901.05824, doi:10.1016/j.aop.2019.167998.
- pollmann2009
Frank Pollmann, Subroto Mukerjee, Ari Turner, and Joel E. Moore. Theory of finite-entanglement scaling at one-dimensional quantum critical points. Physical Review Letters, 102(25):255701, June 2009. arXiv:0812.2903, doi:10.1103/PhysRevLett.102.255701.
- pollmann2012
Frank Pollmann and Ari M. Turner. Detection of Symmetry Protected Topological Phases in 1D. Physical Review B, 86(12):125441, September 2012. arXiv:1204.0704, doi:10.1103/PhysRevB.86.125441.
- resta1998
Raffaele Resta. Quantum-Mechanical Position Operator in Extended Systems. Physical Review Letters, 80(9):1800–1803, March 1998. doi:10.1103/PhysRevLett.80.1800.
- schollwoeck2011
Ulrich Schollwoeck. The density-matrix renormalization group in the age of matrix product states. Annals of Physics, 326(1):96–192, January 2011. arXiv:1008.3477, doi:10.1016/j.aop.2010.09.012.
- schuch2013
Norbert Schuch. Condensed Matter Applications of Entanglement Theory. arXiv:1306.5551 [cond-mat, physics:quant-ph], June 2013. arXiv:1306.5551.
- shapourian2017
Hassan Shapourian, Ken Shiozaki, and Shinsei Ryu. Many-body topological invariants for fermionic symmetry-protected topological phases. Physical Review Letters, 118(21):216402, May 2017. arXiv:1607.03896, doi:10.1103/PhysRevLett.118.216402.
- singh2010
Sukhwinder Singh, Robert N. C. Pfeifer, and Guifre Vidal. Tensor network decompositions in the presence of a global symmetry. Physical Review A, 82(5):050301, November 2010. arXiv:0907.2994, doi:10.1103/PhysRevA.82.050301.
- singh2011
Sukhwinder Singh, Robert N. C. Pfeifer, and Guifre Vidal. Tensor network states and algorithms in the presence of a global U(1) symmetry. Physical Review B, 83(11):115125, March 2011. arXiv:1008.4774, doi:10.1103/PhysRevB.83.115125.
- stoudenmire2010
E. M. Stoudenmire and Steven R. White. Minimally Entangled Typical Thermal State Algorithms. New Journal of Physics, 12(5):055026, May 2010. arXiv:1002.1305, doi:10.1088/1367-2630/12/5/055026.
- stoudenmire2012
E. M. Stoudenmire and Steven R. White. Studying Two Dimensional Systems With the Density Matrix Renormalization Group. Annual Review of Condensed Matter Physics, 3(1):111–128, March 2012. arXiv:1105.1374, doi:10.1146/annurev-conmatphys-020911-125018.
- suzuki1991
Masuo Suzuki. General theory of fractal path integrals with applications to many-body theories and statistical physics. Journal of Mathematical Physics, 32(2):400–407, February 1991. doi:10.1063/1.529425.
- verstraete2008
F. Verstraete, J. I. Cirac, and V. Murg. Matrix Product States, Projected Entangled Pair States, and variational renormalization group methods for quantum spin systems. Advances in Physics, 57(2):143–224, March 2008. arXiv:0907.2796, doi:10.1080/14789940801912366.
- vidal2004
G. Vidal. Efficient simulation of one-dimensional quantum many-body systems. Physical Review Letters, 93(4):040502, July 2004. arXiv:quant-ph/0310089, doi:10.1103/PhysRevLett.93.040502.
- vidal2007
G. Vidal. Entanglement Renormalization. Physical Review Letters, 99(22):220405, November 2007. doi:10.1103/PhysRevLett.99.220405.
- white1992(1,2)
Steven R. White. Density matrix formulation for quantum renormalization groups. Physical Review Letters, 69(19):2863–2866, November 1992. doi:10.1103/PhysRevLett.69.2863.
- white1993
Steven R. White. Density-matrix algorithms for quantum renormalization groups. Physical Review B, 48(14):10345–10356, October 1993. doi:10.1103/PhysRevB.48.10345.
- white2005
Steven R. White. Density matrix renormalization group algorithms with a single center site. Physical Review B, 72(18):180403, November 2005. arXiv:cond-mat/0508709, doi:10.1103/PhysRevB.72.180403.
- yang2012
Shuo Yang, Zheng-Cheng Gu, Kai Sun, and S. Das Sarma. Topological flat band models with arbitrary Chern numbers. Physical Review B, 86(24):241112, December 2012. arXiv:1205.5792, doi:10.1103/PhysRevB.86.241112.
- zaletel2015
Michael P. Zaletel, Roger S. K. Mong, Christoph Karrasch, Joel E. Moore, and Frank Pollmann. Time-evolving a matrix product state with long-ranged interactions. Physical Review B, 91(16):165112, April 2015. arXiv:1407.1832, doi:10.1103/PhysRevB.91.165112.
Papers using TeNPy¶
This page collects papers using (and citing) the TeNPy library, both as an inspiration what can be done, as well as to keep track of the usage, such that we can see how useful our work is to the community. It keeps us motivated!
To include your own work, you can either fill out this template on github, or you can directly add your citation in this Zotero online library (and notify us about it or just wait).
Entries in the following list are sorted by year-author.
Johannes Hauschild, Eyal Leviatan, Jens H. Bardarson, Ehud Altman, Michael P. Zaletel, and Frank Pollmann. Finding purifications with minimal entanglement. Physical Review B, 98(23):235163, December 2018. arXiv:1711.01288, doi:10.1103/PhysRevB.98.235163.
Johannes Hauschild and Frank Pollmann. Efficient numerical simulations with Tensor Networks: Tensor Network Python (TeNPy). SciPost Physics Lecture Notes, 5, October 2018. arXiv:1805.00055, doi:10.21468/SciPostPhysLectNotes.5.
Maximilian Schulz, Scott Richard Taylor, Christopher Andrew Hooley, and Antonello Scardicchio. Energy transport in a disordered spin chain with broken U(1) symmetry: Diffusion, subdiffusion, and many-body localization. Physical Review B, 98(18):180201, November 2018. arXiv:1805.01036, doi:10.1103/PhysRevB.98.180201.
Yasuhiro Tada. Non-thermodynamic nature of the orbital angular momentum in neutral fermionic superfluids. Physical Review B, 97(21):214523, June 2018. arXiv:1805.11226, doi:10.1103/PhysRevB.97.214523.
Annabelle Bohrdt, Fabian Grusdt, and Michael Knap. Dynamical formation of a magnetic polaron in a two-dimensional quantum antiferromagnet. arXiv:1907.08214 [cond-mat, physics:quant-ph], July 2019. arXiv:1907.08214.
Fabian Grusdt, Annabelle Bohrdt, and Eugene Demler. Microscopic spinon-chargon theory of magnetic polarons in the t-J model. Physical Review B, 99(22):224422, June 2019. arXiv:1901.01113, doi:10.1103/PhysRevB.99.224422.
Johannes Michael Hauschild. Quantum Many-Body Systems Far Out of Equilibrium — Simulations with Tensor Networks. Dissertation, Technische Universität München, München, 2019.
Michael Rader and Andreas M. Läuchli. Floating Phases in One-Dimensional Rydberg Ising Chains. arXiv:1908.02068 [cond-mat, physics:quant-ph], August 2019. arXiv:1908.02068.
Leon Schoonderwoerd, Frank Pollmann, and Gunnar Möller. Interaction-driven plateau transition between integer and fractional Chern Insulators. arXiv:1908.00988 [cond-mat], August 2019. arXiv:1908.00988.
Bartholomew Andrews, Madhav Mohan, and Titus Neupert. Abelian topological order of ν=2/5 and 3/7 fractional quantum Hall states in lattice models. arXiv:2007.08870 [cond-mat], August 2020. arXiv:2007.08870.
Bartholomew Andrews and Alexey Soluyanov. Fractional quantum Hall states for moiré superstructures in the Hofstadter regime. Physical Review B, 101(23):235312, June 2020. arXiv:2004.06602, doi:10.1103/PhysRevB.101.235312.
Mari Carmen Banuls, Michal P. Heller, Karl Jansen, Johannes Knaute, and Viktor Svensson. From spin chains to real-time thermal field theory using tensor networks. Physical Review Research, 2(3):033301, August 2020. arXiv:1912.08836, doi:10.1103/PhysRevResearch.2.033301.
A. Bohrdt, Y. Wang, J. Koepsell, M. Kánasz-Nagy, E. Demler, and F. Grusdt. Dominant fifth-order correlations in doped quantum anti-ferromagnets. arXiv:2007.07249 [cond-mat, physics:quant-ph], July 2020. arXiv:2007.07249.
Umberto Borla, Ruben Verresen, Fabian Grusdt, and Sergej Moroz. Confined phases of one-dimensional spinless fermions coupled to Z_2 gauge theory. Physical Review Letters, 124(12):120503, March 2020. arXiv:1909.07399, doi:10.1103/PhysRevLett.124.120503.
Umberto Borla, Ruben Verresen, Jeet Shah, and Sergej Moroz. Gauging the Kitaev chain. arXiv:2010.00607 [cond-mat, physics:hep-lat, physics:hep-th, physics:quant-ph], October 2020. arXiv:2010.00607.
M. Michael Denner, Mark H. Fischer, and Titus Neupert. Efficient Learning of a One-dimensional Density Functional Theory. Physical Review Research, 2(3):033388, September 2020. arXiv:2005.03014, doi:10.1103/PhysRevResearch.2.033388.
Elmer V. H. Doggen, Igor V. Gornyi, Alexander D. Mirlin, and Dmitry G. Polyakov. Slow many-body delocalization beyond one dimension. arXiv:2002.07635 [cond-mat], July 2020. arXiv:2002.07635.
Xiao-Yu Dong and D. N. Sheng. Spin-1 Kitaev-Heisenberg model on a two-dimensional honeycomb lattice. Physical Review B, 102(12):121102, September 2020. arXiv:1911.12854, doi:10.1103/PhysRevB.102.121102.
Axel Gagge and Jonas Larson. Superradiance, charge density waves and lattice gauge theory in a generalized Rabi-Hubbard chain. arXiv:2006.13588 [cond-mat, physics:quant-ph], June 2020. arXiv:2006.13588.
Zi-Yong Ge, Rui-Zhen Huang, Zi Yang Meng, and Heng Fan. Approximating Lattice Gauge Theories on Superconducting Circuits: Quantum Phase Transition and Quench Dynamics. arXiv:2009.13350 [cond-mat, physics:quant-ph], September 2020. arXiv:2009.13350.
Yixuan Huang, Xiao-Yu Dong, D. N. Sheng, and C. S. Ting. Quantum phase diagram and chiral spin liquid in the extended spin-1/2 honeycomb XY model. arXiv:1912.11156 [cond-mat], August 2020. arXiv:1912.11156.
Yixuan Huang, D. N. Sheng, and C. S. Ting. Coupled dimer and bond-order-wave order in the quarter-filled one-dimensional Kondo lattice model. arXiv:2009.13089 [cond-mat], September 2020. arXiv:2009.13089.
C. Hubig, A. Bohrdt, M. Knap, F. Grusdt, and J. I. Cirac. Evaluation of time-dependent correlators after a local quench in iPEPS: hole motion in the t-J model. SciPost Physics, 8(2):021, February 2020. arXiv:1911.01159, doi:10.21468/SciPostPhys.8.2.021.
Geoffrey Ji, Muqing Xu, Lev Haldar Kendrick, Christie S. Chiu, Justus C. Brüggenjürgen, Daniel Greif, Annabelle Bohrdt, Fabian Grusdt, Eugene Demler, Martin Lebrat, and Markus Greiner. Dynamical interplay between a single hole and a Hubbard antiferromagnet. arXiv:2006.06672 [cond-mat, physics:quant-ph], June 2020. arXiv:2006.06672.
Philipp W. Klein, Adolfo G. Grushin, and Karyn Le Hur. Stochastic Chern number from interactions and light response. arXiv:2002.01742 [cond-mat, physics:hep-th, physics:math-ph, physics:quant-ph], August 2020. arXiv:2002.01742.
Benedikt Kloss and Yevgeny Bar Lev. Spin transport in disordered long-range interacting spin chain. Physical Review B, 102(6):060201, August 2020. arXiv:1911.07857, doi:10.1103/PhysRevB.102.060201.
Joannis Koepsell, Dominik Bourgund, Pimonpan Sompet, Sarah Hirthe, Annabelle Bohrdt, Yao Wang, Fabian Grusdt, Eugene Demler, Guillaume Salomon, Christian Gross, and Immanuel Bloch. Microscopic evolution of doped Mott insulators from polaronic metal to Fermi liquid. arXiv:2009.04440 [cond-mat, physics:quant-ph], September 2020. arXiv:2009.04440.
Korbinian Kottmann, Patrick Huembeli, Maciej Lewenstein, and Antonio Acin. Unsupervised phase discovery with deep anomaly detection. arXiv:2003.09905 [cond-mat, physics:quant-ph], March 2020. arXiv:2003.09905.
Yoshihito Kuno and Yasuhiro Hatsugai. Interaction Induced Topological Charge Pump. arXiv:2007.11215 [cond-mat], July 2020. arXiv:2007.11215.
Dongkeun Lee and Wonmin Son. Bell-type correlation at quantum phase transitions in spin-1 chain. arXiv:2010.09999 [quant-ph], October 2020. arXiv:2010.09999.
Alessio Lerose, Michael Sonner, and Dmitry A. Abanin. Influence matrix approach to many-body Floquet dynamics. arXiv:2009.10105 [cond-mat, physics:quant-ph], September 2020. arXiv:2009.10105.
Iman Mahyaeh, Jurriaan Wouters, and Dirk Schuricht. Phase diagram of the Z_3-Fock parafermion chain with pair hopping. arXiv:2003.07812 [cond-mat], September 2020. arXiv:2003.07812.
Ivan Morera, Grigori E. Astrakharchik, Artur Polls, and Bruno Juliá-Díaz. Universal dimerized quantum droplets in a one-dimensional lattice. arXiv:2007.01786 [cond-mat, physics:quant-ph], July 2020. arXiv:2007.01786.
Ivan Morera, Irénée Frérot, Artur Polls, and Bruno Juliá-Díaz. Entanglement entropy in low-energy field theories at finite chemical potential. Physical Review Research, 2(3):033016, July 2020. arXiv:1907.01204, doi:10.1103/PhysRevResearch.2.033016.
Johannes Motruk and Ilyoun Na. Detecting fractional Chern insulators in optical lattices through quantized displacement. arXiv:2005.09860 [cond-mat, physics:quant-ph], May 2020. arXiv:2005.09860.
Nelson Darkwah Oppong, Giulio Pasqualetti, Oscar Bettermann, Philip Zechmann, Michael Knap, Immanuel Bloch, and Simon Fölling. Probing transport and slow relaxation in the mass-imbalanced Fermi-Hubbard model. arXiv:2011.12411 [cond-mat, physics:quant-ph], November 2020. arXiv:2011.12411.
Thomas Quella. Symmetry protected topological phases beyond groups: The q-deformed bilinear-biquadratic spin chain. arXiv:2011.12679 [cond-mat, physics:math-ph], November 2020. arXiv:2011.12679.
C. Repellin, J. Léonard, and N. Goldman. Hall drift of fractional Chern insulators in few-boson systems. arXiv:2005.09689 [cond-mat, physics:quant-ph], July 2020. arXiv:2005.09689.
Ananda Roy, Johannes Hauschild, and Frank Pollmann. Quantum phases of a one-dimensional Majorana-Bose-Hubbard model. Physical Review B, 101(7):075419, February 2020. arXiv:1911.08120, doi:10.1103/PhysRevB.101.075419.
Ananda Roy, Frank Pollmann, and Hubert Saleur. Entanglement Hamiltonian of the 1+1-dimensional free, compactified boson conformal field theory. Journal of Statistical Mechanics: Theory and Experiment, 2020(8):083104, August 2020. arXiv:2004.14370, doi:10.1088/1742-5468/aba498.
Ananda Roy, Dirk Schuricht, Johannes Hauschild, Frank Pollmann, and Hubert Saleur. The quantum sine-Gordon model with quantum circuits. arXiv:2007.06874 [cond-mat, physics:hep-lat, physics:nlin, physics:quant-ph], July 2020. arXiv:2007.06874.
Tomohiro Soejima, Daniel E. Parker, Nick Bultinck, Johannes Hauschild, and Michael P. Zaletel. Efficient simulation of moire materials using the density matrix renormalization group. arXiv:2009.02354 [cond-mat], September 2020. arXiv:2009.02354.
John Sous, Benedikt Kloss, Dante M. Kennes, David R. Reichman, and Andrew J. Millis. Phonon-induced disorder in dynamics of optically pumped metals from non-linear electron-phonon coupling. arXiv:2009.00619 [cond-mat], September 2020. arXiv:2009.00619.
Aaron Szasz, Johannes Motruk, Michael P. Zaletel, and Joel E. Moore. Chiral spin liquid phase of the triangular lattice Hubbard model: a density matrix renormalization group study. Physical Review X, 10(2):021042, May 2020. arXiv:1808.00463, doi:10.1103/PhysRevX.10.021042.
Yasuhiro Tada. Quantum criticality of magnetic catalysis in two-dimensional correlated Dirac fermions. Physical Review Research, 2(3):033363, September 2020. arXiv:2005.01990, doi:10.1103/PhysRevResearch.2.033363.
Qicheng Tang and W. Zhu. Measurement-induced phase transition: A case study in the non-integrable model by density-matrix renormalization group calculations. Physical Review Research, 2(1):013022, January 2020. arXiv:1908.11253, doi:10.1103/PhysRevResearch.2.013022.
Ruben Verresen, Mikhail D. Lukin, and Ashvin Vishwanath. Prediction of Toric Code Topological Order from Rydberg Blockade. arXiv:2011.12310 [cond-mat, physics:physics, physics:quant-ph], November 2020. arXiv:2011.12310.
L. Villa, J. Despres, S. J. Thomson, and L. Sanchez-Palencia. Local quench spectroscopy of many-body quantum systems. arXiv:2007.08381 [cond-mat], July 2020. arXiv:2007.08381.
Botao Wang, Xiao-Yu Dong, F. Nur Ünal, and André Eckardt. Robust and Ultrafast State Preparation by Ramping Artificial Gauge Potentials. arXiv:2009.00560 [cond-mat, physics:quant-ph], September 2020. arXiv:2009.00560.
Zhaoyou Wang and Emily J. Davis. Calculating Renyi Entropies with Neural Autoregressive Quantum States. arXiv:2003.01358 [cond-mat, physics:quant-ph], March 2020. arXiv:2003.01358.
Jurriaan Wouters, Hosho Katsura, and Dirk Schuricht. Constructive approach for frustration-free models using Witten's conjugation. arXiv:2005.12825 [cond-mat, physics:math-ph], May 2020. arXiv:2005.12825.
Elisabeth Wybo, Michael Knap, and Frank Pollmann. Entanglement dynamics of a many-body localized system coupled to a bath. Physical Review B, 102(6):064304, August 2020. arXiv:2004.13072, doi:10.1103/PhysRevB.102.064304.
Elisabeth Wybo, Frank Pollmann, S. L. Sondhi, and Yizhi You. Visualizing Quasiparticles from Quantum Entanglement for general 1D phases. arXiv:2010.15137 [cond-mat, physics:hep-th, physics:quant-ph], October 2020. arXiv:2010.15137.
Ya-Hui Zhang and Zheng Zhu. Symmetric pseudogap metal in a generalized t-J model. arXiv:2008.11204 [cond-mat], August 2020. arXiv:2008.11204.
Joaquin F. Rodriguez-Nieva, Alexander Schuckert, Dries Sels, Michael Knap, and Eugene Demler. Transverse instability and universal decay of spin spiral order in the Heisenberg model. arXiv:2011.07058 [cond-mat], November 2020. arXiv:2011.07058.
Contributing¶
There are lots of ways you can help, even if you don't want to dig deep into the source code. You are welcome to do any of the following things; all of them are very helpful!
Report bugs and problems, such that they can be fixed.
Implement new models.
Update and extend the documentation.
Provide examples of how to use TeNPy.
Give feedback on how you like TeNPy and what you would like to see improved.
Help fixing bugs.
Help fixing minor issues.
Extend the functionality by implementing new functions, methods, and algorithms.
The code is maintained in a git repository, the official repository is on github. Even if you’re not yet on the developer team, you can still submit pull requests on github. If you’re unsure how or what to do, you can ask for help in the [TeNPyForum]. If you want to become a member of the developer team, just ask ;-)
Thank You!
Coding Guidelines¶
To keep consistency, we ask you to comply with the following guidelines for contributions. However, these are just guidelines - it still helps if you contribute something, even if it doesn't follow these rules ;-)
- Use a code style based on PEP 8. The git repo includes a config file .style.yapf for the python package yapf, a tool to auto-format code, e.g., by the command yapf -i some/file (-i for "in place"). We run yapf on a regular basis on the github main branch. If your branch diverged, it might help to run yapf before merging.
  Note: since no tool is perfect, you can format some regions of code manually and enclose them with the special comments # yapf: disable and # yapf: enable.
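As a sketch of the kind of region you might protect this way (the variable and its values are made up for illustration):

```python
# Hypothetical example: a hand-aligned table that yapf would otherwise
# reflow; the special comments protect exactly this region from
# auto-formatting.

# yapf: disable
COUPLINGS = [
    ('Jx', 1.0),
    ('Jy', 1.0),
    ('Jz', 0.5),
]
# yapf: enable

print(dict(COUPLINGS)['Jz'])  # prints 0.5
```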
- Every function/class/module should be documented by its doc-string, see PEP 257. We auto-format the doc-strings with docformatter on a regular basis.
- Additional documentation for the user guide is in the folder doc/.
- The documentation uses reStructuredText. If you are new to reStructuredText, read this introduction. We use the numpy style for doc-strings (with the napoleon extension to sphinx). You can read about them in these instructions for the doc strings. In addition, you can take a look at the following example file. Helpful hints on top of that:
r"""<- this r makes me a raw string, thus '\' has no special meaning.
Otherwise you would need to escape backslashes, e.g. in math formulas.

You can include cross references to classes, methods, functions, modules like
:class:`~tenpy.linalg.np_conserved.Array`, :meth:`~tenpy.linalg.np_conserved.Array.to_ndarray`,
:func:`tenpy.tools.math.toiterable`, :mod:`tenpy.linalg.np_conserved`.
The ~ in the beginning makes only the last part of the name appear in the generated documentation.
Documents of the userguide can be referenced with :doc:`/intro_npc` even from inside the doc-strings.
You can also cross-link to other documentations, e.g. :class:`numpy.ndarray`, :func:`scipy.linalg.svd` and :mod: will work.

Moreover, you can link to github issues, arXiv papers, dois, and topics in the community forum
with e.g. :issue:`5`, :arxiv:`1805.00055`, :doi:`10.1000/1` and :forum:`3`.
Citations from the literature list can be cited as :cite:`white1992` using the bibtex key.

Write inline formulas as :math:`H |\Psi\rangle = E |\Psi\rangle` or displayed equations as

.. math ::

   e^{i\pi} + 1 = 0

In doc-strings, math can only be used in the Notes section.
To refer to variables within math, use `\mathtt{varname}`.

.. todo ::

   This block can describe things which need to be done and is automatically
   included in a section of :doc:`todo`.
"""
- Use relative imports within TeNPy. Example:

  from ..linalg import np_conserved as npc
- Use the python package pytest for testing. Run it simply with pytest in tests/. You should make sure that all tests run through, before you git push back into the public repo. Long-running tests are marked with the attribute slow; for a quick check you can also run pytest -m "not slow". We have set up github actions to automatically run the tests. Conversely, if you write new functions, please also include suitable tests!
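As a sketch of what such a test could look like (the function name and the toy check here are made up; real tests live in tests/ and are collected automatically by pytest):

```python
import numpy as np

def test_singular_values_normalized():
    # pytest collects functions named test_*; this toy check verifies that
    # the squared singular values of a normalized matrix sum to one, as
    # expected for the Schmidt spectrum of a normalized state.
    rng = np.random.default_rng(0)
    psi = rng.standard_normal((4, 4))
    psi /= np.linalg.norm(psi)
    s = np.linalg.svd(psi, compute_uv=False)
    assert abs(np.sum(s**2) - 1.) < 1.e-12

# A long-running test would additionally be decorated with
# @pytest.mark.slow, so that `pytest -m "not slow"` skips it.

test_singular_values_normalized()  # pytest calls this automatically
```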
- During development, you might introduce # TODO comments. But also try to remove them again later! If you're not 100% sure that you will remove it soon, please add a doc-string with a .. todo :: block, such that we can keep track of it.
- Unfinished functions should raise NotImplementedError().
- Summarize the changes you have made in the Changelog under [latest].
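For instance, an unfinished function could combine both conventions like this (the function itself is hypothetical):

```python
def correlation_length_2d(psi):
    r"""Determine the correlation length of a (hypothetical) 2D state.

    .. todo ::
        Not implemented yet; requires a transfer-matrix eigensolver first.
    """
    # Raising here gives callers a clear error instead of silently wrong
    # results, and the .. todo :: block above is collected by sphinx
    # into the To-Do list of the documentation.
    raise NotImplementedError("correlation_length_2d is still a stub")
```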
- If you want to try out new things in temporary files: any folder named playground is ignored by git.
- If you add a new toycode or example: add a reference to include it in the documentation.
- We've created a sphinx extension for documenting config-option dictionaries. If a class takes a dictionary of options, we usually call it options, convert it to a Config at the very beginning of the __init__ with asConfig(), save it as self.options, and document it in the class doc-string with a .. cfg:config :: directive. The name of the config should usually be the class-name (if that is sufficiently unique), or for algorithms directly the common name of the algorithm, e.g. "DMRG"; use the same name for the documentation of the .. cfg:config :: directive as for the Config class instance. Attributes which are simply read-out options should be documented by just referencing the options with the :cfg:option:`configname.optionname` role.
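The pattern can be sketched as follows. Note that Config and asConfig below are minimal self-contained stand-ins written for this illustration, assuming an interface like tenpy.tools.params in TeNPy 0.8; MyAlgorithm and its option are made up:

```python
class Config(dict):
    """Minimal stand-in for tenpy.tools.params.Config."""
    def __init__(self, options, name):
        super().__init__(options)
        self.name = name

    def get(self, key, default):
        # reading an option records the default if the key was not set
        return self.setdefault(key, default)

def asConfig(options, name):
    """Stand-in for tenpy.tools.params.asConfig: wrap a plain dict."""
    if isinstance(options, Config):
        return options
    return Config(options, name)

class MyAlgorithm:
    """Toy algorithm following the options pattern.

    Options
    -------
    .. cfg:config :: MyAlgorithm

        chi_max : int
            Maximal bond dimension.
    """
    def __init__(self, options):
        self.options = asConfig(options, "MyAlgorithm")
        self.chi_max = self.options.get("chi_max", 100)

alg = MyAlgorithm({"chi_max": 64})
print(alg.chi_max)  # prints 64; omitting the option would fall back to 100
```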
Building the documentation¶
You can use Sphinx to generate the full documentation in various formats (including HTML or PDF) yourself, as described in the following.
First, I will assume that you downloaded the [TeNPySource] repository with:
git clone --recursive https://github.com/tenpy/tenpy
This includes the [TeNPyNotebooks] as a git submodule; you might need to git submodule update if it is out of date.
Building the documentation requires a few more packages (including Sphinx). The recommended way is to create a separate conda environment for it with:
conda env create -f doc/environment.yml # make sure to use the file from the doc/ subfolder!
conda activate tenpydoc
Alternatively, you can use pip and pip install -r doc/requirements.txt, but this will not be able to install all dependencies: some packages like Graphviz are not available from pip alone.
Afterwards, simply go to the folder doc/
and run the following command:
make html
This should generate the HTML documentation in the folder doc/sphinx_build/html. Open this folder (or to be precise: the file index.html in it) in your web browser and enjoy this and other documentation beautifully rendered, with cross links, math formulas and even a search function.
Other output formats are available as other make targets, e.g., make latexpdf.
Note
Building the documentation with sphinx requires loading the TeNPy modules. The conf.py adjusts the python sys.path to include the tenpy/ folder from the root directory of the git repository. It will not use the cython-compiled parts.
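Conceptually, this path adjustment amounts to something like the following sketch (simplified; the actual conf.py may differ in detail):

```python
import os
import sys

# Prepend the repository root so that `import tenpy` picks up the
# pure-Python sources from the git checkout rather than an installed
# (compiled) version of the package.
repo_root = os.path.abspath(os.path.join(os.getcwd(), ".."))
sys.path.insert(0, repo_root)
```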
To-Do list¶
You can check https://github.com/tenpy/tenpy/issues for things to be done.
The following list is auto-generated by sphinx, extracting .. todo ::
blocks from doc-strings of the code.
Todo
TDVP is currently not implemented with the sweep class.
(The original entry is located in tenpy/algorithms/mps_common.py:docstring of tenpy.algorithms.mps_common.Sweep, line 6.)
Todo
- implement or wrap netcon.m, a function to find optimal contraction sequences
- improve helpfulness of Warnings
- _do_trace: trace over all pairs of legs at once. need the corresponding npc function first.
(The original entry is located in tenpy/algorithms/network_contractor.py:docstring of tenpy.algorithms.network_contractor, line 10.)
Todo
This is still a beta version, use with care. The interface might still change.
(The original entry is located in tenpy/algorithms/tdvp.py:docstring of tenpy.algorithms.tdvp, line 12.)
Todo
long-term: Much of the code is similar as in DMRG. To avoid too much duplicated code, we should have a general way to sweep through an MPS and updated one or two sites, used in both cases.
(The original entry is located in tenpy/algorithms/tdvp.py:docstring of tenpy.algorithms.tdvp, line 16.)
Todo
add further terms (e.g. c^dagger c^dagger + h.c.) to the Hamiltonian.
(The original entry is located in tenpy/models/fermions_spinless.py:docstring of tenpy.models.fermions_spinless, line 3.)
Todo
WARNING: These models are still under development and not yet tested for correctness. Use at your own risk! Replicate known results to confirm models work correctly. Long term: implement different lattices. Long term: implement variable hopping strengths Jx, Jy.
(The original entry is located in tenpy/models/hofstadter.py:docstring of tenpy.models.hofstadter, line 3.)
Todo
This is a naive, expensive implementation contracting the full network. Try to follow arXiv:1711.01104 for a better estimate; would that even work in the infinite limit?
(The original entry is located in tenpy/networks/mpo.py:docstring of tenpy.networks.mpo.MPO.variance, line 5.)
Todo
might be useful to add a “cleanup” function which removes operators cancelling each other and/or unused states. Or better use a ‘compress’ of the MPO?
(The original entry is located in tenpy/networks/mpo.py:docstring of tenpy.networks.mpo.MPOGraph, line 17.)
Todo
Make more general: it should be possible to specify states as strings.
(The original entry is located in tenpy/networks/mps.py:docstring of tenpy.networks.mps.build_initial_state, line 14.)
Todo
test, provide more.
(The original entry is located in tenpy/simulations/measurement.py:docstring of tenpy.simulations.measurement, line 7.)
Todo
provide examples, give user guide
(The original entry is located in tenpy/simulations/simulation.py:docstring of tenpy.simulations.simulation, line 8.)
Todo
For memory caching with big MPO environments, we need a Hdf5Cacher clearing the memo’s every now and then (triggered by what?).
(The original entry is located in tenpy/tools/hdf5_io.py:docstring of tenpy.tools.hdf5_io, line 65.)
Tenpy main module¶
full name: tenpy
parent module: tenpy
type: module
Submodules
- tenpy.algorithms: A collection of algorithms such as TEBD and DMRG.
- tenpy.linalg: Linear-algebra tools for tensor networks.
- tenpy.models: Definition of the various models.
- tenpy.networks: Definitions of tensor networks like MPS and MPO.
- tenpy.tools: A collection of tools: mostly short yet quite useful functions.
- tenpy.version: Access to version of this library.
Module description
TeNPy - a Python library for Tensor Network Algorithms
TeNPy is a library for algorithms working with tensor networks, e.g., matrix product states and operators, designed to study the physics of strongly correlated quantum systems. The code is intended to be accessible for newcomers and yet powerful enough for day-to-day research.
- Run the simulation with a simulation class.
- Command line interface.
- Print information about the version of tenpy and used libraries.
tenpy.__version__ = '0.8.3': hard-coded version string
tenpy.__full_version__ = '0.8.3': full version from git description, and numpy/scipy/python versions
Submodules
- tenpy.algorithms: A collection of algorithms such as TEBD and DMRG.
- tenpy.linalg: Linear-algebra tools for tensor networks.
- tenpy.models: Definition of the various models.
- tenpy.networks: Definitions of tensor networks like MPS and MPO.
- tenpy.simulations: Simulation setup.
- tenpy.tools: A collection of tools: mostly short yet quite useful functions.
- tenpy.version: Access to version of this library.