Tensor Network Python (TeNPy)


TeNPy (short for ‘Tensor Network Python’) is a Python library for the simulation of strongly correlated quantum systems with tensor networks.

The philosophy of this library is to strike a balance between readability and usability for newcomers on the one hand, and powerful algorithms and fast development of new algorithms for experts on the other. For good readability, we include extensive documentation next to the code, both in Python docstrings and separately as user guides, as well as simple example codes and even toy codes, which demonstrate various algorithms (like TEBD and DMRG) in ~100 lines per file.
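As a self-contained illustration of the kind of model these algorithms target (not using TeNPy itself, only NumPy), one can exactly diagonalize a small transverse-field Ising chain; TEBD and DMRG become necessary precisely when such exact diagonalization is no longer feasible:

```python
import numpy as np

# Exact diagonalization of the transverse-field Ising chain
#   H = -J sum_i sz_i sz_{i+1} - g sum_i sx_i    (open boundaries)
# This is NOT TeNPy code, just a plain-NumPy illustration of the kind
# of model the TEBD/DMRG toy codes solve for much larger systems.

def tfi_hamiltonian(L, J=1.0, g=1.5):
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])
    id2 = np.eye(2)

    def kron_list(ops):
        # tensor product of single-site operators over the whole chain
        out = np.array([[1.0]])
        for op in ops:
            out = np.kron(out, op)
        return out

    H = np.zeros((2**L, 2**L))
    for i in range(L):                    # transverse field terms
        ops = [id2] * L
        ops[i] = sx
        H -= g * kron_list(ops)
    for i in range(L - 1):                # nearest-neighbor coupling
        ops = [id2] * L
        ops[i] = sz
        ops[i + 1] = sz
        H -= J * kron_list(ops)
    return H

L = 8                                      # 2^8 = 256 states: still easy
H = tfi_hamiltonian(L)
E0 = np.linalg.eigvalsh(H)[0]              # ground-state energy
print(f"ground-state energy for L={L}: {E0:.6f}")
```

The exponential growth of the 2^L-dimensional Hilbert space is what makes this brute-force approach hopeless beyond a few dozen sites, and what tensor-network methods circumvent.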

How do I get set up?

If you have the conda package manager, you can install the latest released version of TeNPy with:

conda install --channel=conda-forge physics-tenpy

Further details and alternative methods can be found in the file doc/INSTALL.rst. The latest version of the source code can be obtained from https://github.com/tenpy/tenpy.

How to read the documentation

The documentation is available online at https://tenpy.readthedocs.io/. The documentation is roughly split in two parts: on one hand the full “reference” containing the documentation of all functions, classes, methods, etc., and on the other hand the “user guide” containing some introductions and additional explanations.

The documentation is based on Python’s docstrings and some additional *.rst files located in the folder doc/ of the repository. All documentation is formatted as reStructuredText, which means it is quite readable as plain source text, but can also be converted to other formats. If you like it simple, you can just use the interactive Python help(), a Python IDE of your choice, Jupyter notebooks, or just read the source. Moreover, the documentation gets converted into HTML using Sphinx and is made available online at https://tenpy.readthedocs.io/. The big advantages of the (online) HTML documentation are the many cross-links between different functions, and even a search function. If you prefer yet another format, you can try to build the documentation yourself, as described in doc/contr/build_doc.rst.

Help - I looked at the documentation, but I don’t understand how …?

We have set up a community forum at https://tenpy.johannes-hauschild.de/, where you can post questions and hopefully find answers. Once you have gained some experience with TeNPy, you might also be able to contribute to the community and answer some questions yourself ;-) We also use this forum for official announcements, for example when we release a new version.

I found a bug

You might want to check the GitHub issues to see whether someone else has already reported the same problem. To report a new bug, just open a new issue on GitHub. If you already know how to fix it, you can just create a pull request :) If you are not sure whether your problem is a bug or a feature, you can also ask for help in the TeNPy forum.

Citing TeNPy

When you use TeNPy for work published in an academic journal, you can cite this paper to acknowledge the effort put into the development of TeNPy. (The license of TeNPy does not force you to, however.) For example, you could add the sentence "Calculations were performed using the TeNPy Library (version X.X.X)\cite{tenpy}." in the acknowledgements or in the main text.

The corresponding BibTeX entry would be the following (the \url{...} requires \usepackage{hyperref} in the LaTeX preamble):

@Article{tenpy,
    title={{Efficient numerical simulations with Tensor Networks: Tensor Network Python (TeNPy)}},
    author={Johannes Hauschild and Frank Pollmann},
    journal={SciPost Phys. Lect. Notes},
    pages={5},
    year={2018},
    publisher={SciPost},
    doi={10.21468/SciPostPhysLectNotes.5},
    url={https://scipost.org/10.21468/SciPostPhysLectNotes.5},
    archiveprefix={arXiv},
    eprint={1805.00055},
    note={Code available from \url{https://github.com/tenpy/tenpy}},
}

Acknowledgment

This work was funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05- CH11231 through the Scientific Discovery through Advanced Computing (SciDAC) program (KC23DAC Topological and Correlated Matter via Tensor Networks and Quantum Monte Carlo).

License

The code is licensed under GPL v3.0, given in the file LICENSE of the repository and readable in the online documentation at https://tenpy.readthedocs.io/en/latest/install/license.html.

Installation instructions

With the [conda] package manager you can install TeNPy with:

conda install --channel=conda-forge physics-tenpy

More details and tricks can be found in Installation with conda from conda-forge.

If you don’t have conda, but you have [pip], you can install it with:

pip install physics-tenpy

More details for this method can be found in Installation from PyPi with pip.

We also have a bunch of optional Extra requirements, which you don’t have to install to use TeNPy, but you might want to.

The method with the minimal requirements is to just download the source and adjust the PYTHONPATH, as described in Installation from source. This is also the recommended way if you plan to modify parts of the source.

Installation with conda from conda-forge

We provide a package for the [conda] package manager in the conda-forge channel, so you can install TeNPy as:

conda install --channel=conda-forge physics-tenpy

Following the recommendation of conda-forge, you can also make conda-forge the default channel as follows:

conda config --add channels conda-forge
conda config --set channel_priority strict

If you have done this, you don’t need to specify --channel=conda-forge explicitly.

Moreover, it is actually recommended to create a separate environment. To create a conda environment with the name tenpy, where the TeNPy package (called physics-tenpy) is installed:

conda create --name tenpy --channel=conda-forge physics-tenpy

In that case, you need to activate the environment each time you want to use the package with:

conda activate tenpy

The big advantage of this approach is that it allows multiple versions of software to be installed in parallel, e.g., if one of your projects requires python>=3.8 and another one requires an old library which doesn’t support that. Further info can be found in the conda documentation.

Installation from PyPi with pip

Preparation: install requirements

If you have the [conda] package manager from anaconda, you can just download the environment.yml file from the repository and create a new environment (called tenpy, if you don’t specify another name) for TeNPy with all the required packages:

conda env create -f environment.yml
conda activate tenpy

Further information on conda environments can be found in the conda documentation. Note that installing conda also installs a version of [pip].

Alternatively, if you only have [pip] (and not [conda]), install the required packages with the following command (after downloading the requirements.txt file from the repository):

pip install -r requirements.txt

Note

Make sure that the pip you call corresponds to the python version you want to use. (One way to ensure this is to use python -m pip instead of a simple pip.) Also, you might need to use the argument --user to install the packages to your home directory, if you don’t have sudo rights. (Using --user with conda’s pip is discouraged, though.)

Warning

It might just be a temporary problem, but we found that the pip version of numpy can be incompatible with the python distribution of anaconda. If you have installed the intelpython or anaconda distribution, use the conda package manager instead of pip for updating the packages whenever possible!

Installing the latest stable TeNPy package

Now we are ready to install TeNPy. It should be as easy as (note the different package name - ‘tenpy’ was taken!)

pip install physics-tenpy

Note

If the installation fails, don’t give up yet. In the minimal version, tenpy requires only pure Python with somewhat up-to-date NumPy and SciPy. See Installation from source.

Installation of the latest version from Github

To get the latest development version from the github master branch, you can use:

pip install git+https://github.com/tenpy/tenpy.git

This should already have the latest features described in [latest]. Disclaimer: this might sometimes be broken, although we do our best to keep it stable as well.

Installation from the downloaded source folder

Finally, if you downloaded the source and want to modify parts of it, you can also install TeNPy in development mode with --editable:

cd $HOME/tenpy # after downloading the source, go to the repository
pip install --editable .

Uninstalling a pip-installed version

In all of the above cases, you can uninstall tenpy with:

pip uninstall physics-tenpy

Updating to a new version

Before you update, take a look at the Release Notes, which list the changes, fixes, and new features. Most importantly, they have a section on backwards-incompatible changes (i.e., changes which may break your existing code) along with information on how to fix them. Of course, we try to avoid introducing such incompatible changes, but sometimes there’s no way around them. If you skip some intermediate version(s) for the update, also read the release notes of those!

How to update depends a little bit on the way you installed TeNPy. Of course, you always have the option to just remove the TeNPy files (possibly with pip uninstall physics-tenpy) and to start over by downloading and installing the newest version.

When installed with pip

When you installed TeNPy with [pip], you just need to do a

pip install --upgrade physics-tenpy

When installed from source

If you used git clone ... to download the repository, you can update to the newest version using [git]. First, briefly check that you didn’t change anything you need to keep with git status. Then, do a git pull to download (and possibly merge) the newest commit from the repository.

Note

If some Cython file (ending in .pyx) got renamed/removed (e.g., when updating from v0.3.0 to v0.4.0), you first need to remove the corresponding binary files. You can do so with the command bash ./cleanup.sh.

Furthermore, whenever one of the Cython files (ending in .pyx) changes, you need to re-compile it. To do that, simply call the command bash ./compile.sh again. If you are unsure whether a Cython file changed, compiling again doesn’t hurt.

To summarize, you need to execute the following bash commands in the repository:

# 0) make a backup of the whole folder
git status   # check the output whether you modified some files
git pull
bash ./cleanup.sh  # (confirm with 'y')
bash ./compile.sh

Installation from source

Minimal Requirements

This code works with a minimal requirement of pure Python>=3.5 and somewhat recent versions of NumPy and SciPy.

Getting the source

The following instructions are for Linux and were tested on Ubuntu. However, the code itself should work on other operating systems as well (in particular macOS and Windows).

The official repository is at https://github.com/tenpy/tenpy.git. To get the latest version of the code, you can clone it with [git] using the following commands:

git clone https://github.com/tenpy/tenpy.git $HOME/TeNPy
cd $HOME/TeNPy

Note

Adjust $HOME/TeNPy to the path wherever you want to save the library.

Optionally, if you don’t want to contribute, you can check out the latest stable release:

git tag   # this prints the available version tags
git checkout v0.3.0  # or whatever is the latest stable version

Note

In case you don’t have [git] installed, you can download the repository as a ZIP archive. You can find it under releases, or the latest development version.

Minimal installation: Including tenpy into PYTHONPATH

The python source is in the directory tenpy/ of the repository. This folder tenpy/ should be placed in (one of the folders of) the environment variable PYTHONPATH. On Linux, you can simply do this with the following line in the terminal:

export PYTHONPATH=$HOME/TeNPy

(If you already have a path in this variable, separate the paths with a colon :.) However, if you enter this in the terminal, it will only last for the terminal session in which you entered it. To make it permanent, you can add the above line to the file $HOME/.bashrc. You might need to restart the terminal session or log in again to force a reload of the ~/.bashrc.
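As an alternative to the environment variable, a script can extend the module search path itself before importing; a minimal sketch, assuming the repository was cloned to $HOME/TeNPy:

```python
import os
import sys

# Equivalent to setting PYTHONPATH, but from within a script:
# put the repository folder at the front of the module search path.
# $HOME/TeNPy is an assumption; use wherever you cloned the repository.
tenpy_dir = os.path.join(os.path.expanduser("~"), "TeNPy")
if tenpy_dir not in sys.path:
    sys.path.insert(0, tenpy_dir)
# afterwards, `import tenpy` finds the package in ~/TeNPy/tenpy/
```

This only affects the running interpreter, so it is mostly useful for quick experiments; for regular use, setting PYTHONPATH as above is cleaner.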

Once the path is set, you should be able to use the library from within python:

>>> import tenpy
/home/username/TeNPy/tenpy/tools/optimization.py:276: UserWarning: Couldn't load compiled cython code. Code will run a bit slower.
  warnings.warn("Couldn't load compiled cython code. Code will run a bit slower.")
>>> tenpy.show_config()
tenpy 0.4.0.dev0+7706003 (not compiled),
git revision 77060034a9fa64d2c7c16b4211e130cf5b6f5272 using
python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0]
numpy 1.16.3, scipy 1.2.1

tenpy.show_config() prints the current version of the used TeNPy library as well as the versions of the used python, numpy and scipy libraries, which might be different on your computer. It is a good idea to save this data (given as a string in tenpy.version.version_summary) along with your data, to allow you to reproduce your results exactly.

If you got a similar output as above: congratulations! You can now run the codes :)

Compilation of np_conserved

At the heart of the TeNPy library is the module tenpy.linalg.np_conserved, which provides an Array class to exploit the conservation of abelian charges. Since the data model of Python is not ideal for the required book-keeping, we have also implemented the np_conserved module in Cython. This makes it possible to compile (and thereby optimize) the corresponding python module, speeding up the execution of the code. While this might give a significant speed-up for code with small matrix dimensions, don’t expect the same speed-up in cases where most of the CPU time is already spent in matrix multiplications (i.e., if the bond dimension of your MPS is huge).

To compile the code, you first need to install Cython:

conda install cython                    # when using anaconda, or
pip install --upgrade Cython            # when using pip

Moreover, you need a C++ compiler. For example, on Ubuntu you can install it with sudo apt-get install build-essential, or on Windows you can download MS Visual Studio 2015. If you use anaconda, you can also use conda install -c conda-forge cxx-compiler.

After that, go to the root directory of TeNPy ($HOME/TeNPy) and simply run

bash ./compile.sh

Note that it is not required to separately download (and install) Intel MKL: the compilation just obtains the includes from numpy. In other words, if your current numpy version uses MKL (like the one provided by anaconda), the compiled TeNPy code will also use it.

After a successful compilation, the warning that TeNPy was not compiled should go away:

>>> import tenpy
>>> tenpy.show_config()
tenpy 0.4.0.dev0+b60bad3 (compiled from git rev. b60bad3243b7e54f549f4f7c1f074dc55bb54ba3),
git revision b60bad3243b7e54f549f4f7c1f074dc55bb54ba3 using
python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0]
numpy 1.16.3, scipy 1.2.1

Note

For further optimization options, look at tenpy.tools.optimization.

Extra requirements

We have some extra requirements that you don’t need to install to use TeNPy, but that you might find useful to work with. TeNPy does not import the following libraries (at least not globally), but some functions might expect arguments behaving like objects from these libraries.

Note

If you created a [conda] environment with conda env create -f environment.yml, all the extra requirements below should already be installed :) (However, a pip install -r requirements.txt does not install them.)

Matplotlib

The first extra requirement is the [matplotlib] plotting library. Some functions expect a matplotlib.axes.Axes instance as an argument in order to plot some data for visualization.

Intel’s Math Kernel Library (MKL)

If you want to run larger simulations, we recommend the use of Intel’s MKL. It ships with a LAPACK library and is optimized for Intel CPUs. Moreover, it parallelizes the LAPACK/BLAS routines, which makes execution much faster. As of now, the library itself supports no other way of parallelization.

If you don’t have a Python version built against MKL, we recommend using [conda] or directly intelpython. Conda has the advantage that it allows you to use different environments for different projects. Both are available for Linux, Mac and Windows; note that you don’t even need administrator rights to install them on Linux. Simply follow the (straightforward) instructions on the web page for the installation. After a successful installation, if you run python interactively, the first output line should state the Python version and contain Anaconda or Intel Corporation, respectively.

If you have a working conda package manager, you can install a numpy build against MKL with:

conda install mkl numpy scipy

Note

MKL uses different threads to parallelize various BLAS and LAPACK routines. If you run the code on a cluster, make sure that you specify the number of used cores/threads correctly. By default, MKL uses all available CPUs, which might be in stark contrast to what you requested from the cluster. The easiest way to set the number of threads is the environment variable MKL_NUM_THREADS (or OMP_NUM_THREADS). For a dynamic change of the used threads, you might want to look at process.
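In a Python batch script, a minimal way to pin the thread count (the value "4" is just an example) looks like this; note that the variables must be set before numpy, and thus MKL, is imported for the first time:

```python
import os

# Thread counts must be set BEFORE numpy (and thus MKL) is imported
# for the first time; changing them afterwards has no effect.
# "4" is an example value: match it to the cores you requested.
os.environ["MKL_NUM_THREADS"] = "4"
os.environ["OMP_NUM_THREADS"] = "4"

import numpy as np  # numpy/MKL now starts with at most 4 threads
```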

HDF5 file format support

We support exporting data to files in the [HDF5] format through the Python interface of the h5py package (https://docs.h5py.org/en/stable/); see Saving to disk: input/output for more information. However, this requires the installation of the HDF5 library and h5py.

YAML parameter files

The tenpy.tools.params.Config class supports reading and writing YAML files, which requires the package pyyaml (pip install pyyaml).
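As a sketch of what such a parameter file might contain and how YAML parsing works (the keys below are purely illustrative, not a fixed TeNPy schema, and pyyaml is used directly here rather than the Config class):

```python
import yaml  # provided by the pyyaml package

# A hypothetical parameter file; the keys are purely illustrative.
text = """
model_params:
    L: 16
    g: 1.5
dmrg_params:
    trunc_params:
        chi_max: 100
"""
params = yaml.safe_load(text)  # nested dicts, like the file structure
print(params["dmrg_params"]["trunc_params"]["chi_max"])  # -> 100
```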

Tests

To run the tests, you need to install pytest, which you can do, for example, with pip install pytest. For information on how to run the tests, see Checking the installation.

Checking the installation

The first check of whether tenpy is installed successfully is to try to import it from within python:

>>> import tenpy

Note

If this raises a warning Couldn't load compiled cython code. Code will run a bit slower., something went wrong with the compilation of the Cython parts (or you didn’t compile at all). While the code might run slower, the results should still be the same.

The function tenpy.show_config() prints information about the used versions of tenpy, numpy and scipy, as well as on whether the Cython parts were compiled and could be imported.

As a further check of the installation, you can try to run (one of) the Python files in the examples/ subfolder; hopefully all of them run without error.

You can also run the automated testsuite with pytest to make sure everything works fine. If you have pytest installed, you can go to the tests folder of the repository, and run the tests with:

cd tests
pytest

In case of errors or failures it gives a detailed traceback and possibly some output of the test. At least the stable releases should run these tests without any failures.

If you can run the examples but not the tests, check whether pytest actually uses the correct python version.
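If in doubt, the following stdlib-only snippet shows which interpreter your python is and which pytest executable is found on the PATH; a mismatch between the two locations is a common cause of such failures:

```python
import shutil
import sys

# `python -m pytest` guarantees that pytest runs under this very
# interpreter; a bare `pytest` found on PATH may belong to a
# different Python installation.
print("this interpreter:", sys.executable)
print("pytest on PATH:  ", shutil.which("pytest"))
```

Running the tests as `python -m pytest` instead of a bare `pytest` sidesteps the issue entirely.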

The test suite is also run automatically with GitHub Actions and Travis-CI; the results can be inspected online.

TeNPy developer team

The following people are part of the TeNPy developer team.
The full list of contributors can be obtained from the git repository with ``git shortlog -sn``.

Johannes Hauschild        tenpy@johannes-hauschild.de
Frank Pollmann
Michael P. Zaletel
Maximilian Schulz
Leon Schoonderwoerd
Kévin Hémery
Samuel Scalet
Markus Drescher
Wilhelm Kadow
Gunnar Moeller
Jakob Unfried
Yu-Chin Tzeng

Further, the code is based on an earlier version of the library, mainly developed by
Frank Pollmann, Michael P. Zaletel and Roger S. K. Mong.

License

The source code documented here is published under a GPL v3 license, which we include below.

                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU General Public License is a free, copyleft license for
software and other kinds of works.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.  We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors.  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights.  Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received.  You must make sure that they, too, receive
or can get the source code.  And you must show them these terms so they
know their rights.

  Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

  For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software.  For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

  Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so.  This is fundamentally incompatible with the aim of
protecting users' freedom to change the software.  The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable.  Therefore, we
have designed this version of the GPL to prohibit the practice for those
products.  If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

  Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary.  To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Use with the GNU Affero General Public License.

  Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work.  The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time.  Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation.  If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.

  If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

  Later license versions may give you additional or different
permissions.  However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

  If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program.  If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

  If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:

    <program>  Copyright (C) <year>  <name of author>
    This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
    This is free software, and you are welcome to redistribute it
    under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License.  Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".

  You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.

  The GNU General Public License does not permit incorporating your program
into proprietary programs.  If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library.  If this is what you want to do, use the GNU Lesser General
Public License instead of this License.  But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.

Release Notes

The project adheres to semantic versioning.

All notable changes to the project should be documented in the changelog. The most important things should be summarized in the release notes.

The changes in [latest] are implemented in the latest development version on GitHub, but are not yet released.

The section “Changes compared to previous TeNPy” highlights the most important differences from the previously developed (closed-source) TeNPy version.

[latest]

Release Notes

TODO: Summarize the most important changes

Changelog
Backwards incompatible changes
  • nothing yet

Added
  • nothing yet

Changed
  • nothing yet

Fixed
  • nothing yet

[0.6.1] - 2020-05-18

Release Notes

This is only a follow-up release to [0.6.0] - 2020-05-16. It fixes a small bug in examples/c_tebd.py and some round-off problems in the tests.

It is now possible to install TeNPy with the conda package manager:

conda install --channel=conda-forge physics-tenpy

[0.6.0] - 2020-05-16

Release Notes

This release contains a major update of the documentation, which is now hosted by “Read the Docs” at https://tenpy.readthedocs.io/. Update your bookmark :-)

Apart from that, this release introduces a format for saving and loading data (in particular TeNPy classes) to HDF5 files. See Saving to disk: input/output for more details. To use that feature, you need to install the h5py package (and therefore some version of the HDF5 library). This is easy with anaconda (conda install h5py), but might be cumbersome on your local computing cluster. (However, many university computing clusters have some version of HDF5 installed already; check with your local sysadmin.)

Moreover, we changed how we read out parameter dictionaries: instead of the get_parameter() function, we now have a Config class which behaves like a dictionary, so you can simply use options.get(key, default) for model parameters, as you would for a Python dictionary.
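
The new access pattern can be sketched with a plain Python dictionary; model_params and the keys below are placeholders, and a real Config instance behaves the same way for this call:

```python
# Sketch of the new dictionary-style parameter access (hypothetical keys);
# a tenpy Config instance behaves like a plain dict for .get().
model_params = {"L": 10, "J": 1.0}

# old (deprecated): get_parameter(model_params, "L", 2, "ModelName")
# new: plain dictionary-style lookup with a default
L = model_params.get("L", 2)      # key present -> returns 10
hz = model_params.get("hz", 0.0)  # key absent  -> returns the default 0.0
```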

Changelog
Backwards incompatible changes
  • Created a class Config to replace Python-native parameter dictionaries and add some useful functionality. Old code using tenpy.tools.params.get_parameter() and tenpy.tools.params.unused_parameters() still works as before, but raises a warning and should be replaced. For example, if you defined your own models, you should replace calls like get_parameter(model_params, "key", "default_value", "ModelName") with model_params.get("key", "default_value"), the latter syntax being what you would use for a normal Python dictionary as well.

  • Renamed the following class parameter dictionaries to simply options for more consistency. Old code using the class attributes should still work (since we provide property aliases), but raises warnings. Note that this also affects derived classes (for example the TwoSiteDMRGEngine).

    • tenpy.algorithms.dmrg.DMRGEngine.DMRG_params (was already renamed to engine_params in version 0.5.0)

    • tenpy.algorithms.mps_common.Sweep.engine_params

    • tenpy.algorithms.tebd.Engine.TEBD_params

    • tenpy.algorithms.tdvp.Engine.TDVP_params

    • tenpy.linalg.lanczos.Lanczos

  • Changed the arguments of tenpy.models.model.MultiCouplingModel(): We replaced the three arguments u0, op0 and other_ops=[(u1, op1, dx1), (u2, op2, dx2), ...] by a single, equivalent argument ops, which should now read ops=[(op0, dx0, u0), (op1, dx1, u1), (op2, dx2, u2), ...], where dx0 = [0]*lat.dim. Note the changed order inside the tuple! Old code (which specifies opstr and category as keyword arguments, if at all) still works as before, but raises a warning and should be replaced. Since tenpy.models.lattice.Lattice.possible_multi_couplings() used similar arguments, it was changed as well.

  • Don’t save H_MPO_graph as a model attribute anymore - this also wasn’t documented.

  • Renamed the truncation parameter symmetry_tol to degeneracy_tol and made the criterion more reasonable by not checking \(\log(S_i/S_j) < \log(\mathtt{symmetry\_tol})\), but simply \(\log(S_i/S_j) < \mathtt{degeneracy\_tol}\). The latter makes more sense, as it is equivalent to \((S_i - S_j)/S_j < \exp(\mathtt{degeneracy\_tol}) - 1 = \mathtt{degeneracy\_tol} + \mathcal{O}(\mathtt{degeneracy\_tol}^2)\).

  • Deprecated tenpy.networks.mps.MPS.increase_L() in favor of the newly added tenpy.networks.mps.MPS.enlarge_mps_unit_cell() (taking factor instead of new_L=factor*L as argument).

  • tenpy.networks.mps.MPS.correlation_function() now auto-determines whether a Jordan-Wigner string is necessary. If any of the given operators is directly an npc Array, it will now raise an error; set autoJW=False in that case.

  • Instead of “monkey-patching” matvec of the tenpy.algorithms.mps_common.EffectiveH for the case that ortho_to_envs is not empty, we defined a proper class NpcLinearOperatorWrapper, which serves as a base class for OrthogonalNpcLinearOperator. The argument ortho_to_envs has been removed from EffectiveH.

  • Switch order of the sites in the unit cell for the DualSquare, and redefine what the "default" order means. This is a huge optimization of DMRG, reducing the necessary MPS bond dimension for the ground state to the optimal \(2^{L-1}\) on each bond.

  • Deprecated the argument orthogonal_to of the Lanczos function/class LanczosGroundState. Instead, one can use the OrthogonalNpcLinearOperator.

  • Deprecation warning for changing the default argument of shift_ket for non-zero shift_bra of the TransferMatrix.
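
The changed tuple order in the MultiCouplingModel ops argument above can be sketched with plain tuples; the operator names, shifts and unit-cell indices below are placeholders, not a real model:

```python
# Shape of the new `ops` argument: one (opname, dx, u) tuple per operator.
lat_dim = 2
dx0 = [0] * lat_dim            # dx0 = [0]*lat.dim for the first operator
ops = [
    ("Sz", dx0, 0),            # (operator name, lattice shift dx, unit-cell index u)
    ("Sz", [1, 0], 0),
]
# The deprecated interface passed u0/op0 separately and ordered the tuples differently.
op0, dx, u = ops[0]
```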

Added
Changed
  • Default DMRG parameter 'diag_method' changed from 'lanczos' to 'default', which is the same for large bond dimensions, but performs a full exact diagonalization if the effective Hamiltonian has small dimensions. The threshold introduced is the new DMRG parameter 'max_N_for_ED'.

  • Default parameter charge_sector=None instead of charge_sector=0 in tenpy.networks.mps.MPS.overlap(), to look for eigenvalues of the transfer matrix in all charge sectors instead of assuming the 0 sector.

  • Derive the following classes (and their subclasses) from the new Hdf5Exportable to support saving to HDF5:

    • Site

    • Terms

    • OnsiteTerms

    • CouplingTerms

    • Model, i.e., all model classes

  • Instead of just defining to_matrix and adjoint for EffectiveH, define the interface directly for NpcLinearOperator.

  • Try to keep the charge block structure as far as possible for add_charge() and drop_charge()

Fixed
  • Adjust the default DMRG parameter min_sweeps if chi_list is set.

  • Avoid some unnecessary transpositions in MPO environments for MPS sweeps (e.g. in DMRG).

  • sort(bunch=True) could return an un-bunched Array, but still set the bunched flag.

  • LegPipe did not initialize self.bunched correctly.

  • issue #98: error when calling psi.canonical_form() directly after disabling the DMRG mixer.

  • svd() with full_matrices=True gave wrong charges.

  • tenpy.linalg.np_conserved.Array.add_charge() and tenpy.linalg.np_conserved.Array.drop_charge() did not copy over labels.

  • Wrong pairs for the fifth_nearest_neighbors of the Honeycomb.

  • Continue in tenpy.algorithms.dmrg.full_diag_effH() with a warning instead of raising an error, if the effective Hamiltonian is zero.

  • correlation_length(): the check for the hermitian flag might have raised an error with new numpy warnings.

  • correlation_function() did not respect argument str_on_first=False.

  • tenpy.networks.mps.MPS.get_op() worked unexpectedly for infinite bc with incommensurate self.L and len(op_list).

  • tenpy.networks.mps.MPS.permute_sites() modified the given perm in place.

  • issue #105: Unintended side-effects using lanczos_params.verbose in combination with orthogonal_to

  • issue #108: tenpy.linalg.sparse.FlatLinearOperator._matvec() changes self._charge_sector

[0.5.0] - 2019-12-18

Backwards incompatible changes
Changed
  • Addition/subtraction of Array: check whether both arrays have the same labels in a different order, and in that case raise a warning that we will transpose in the future.

  • Made tenpy.linalg.np_conserved.Array.get_block() public (previously tenpy.linalg.np_conserved.Array._get_block).

  • groundstate() now returns a tuple (E0, psi0) instead of just psi0. Moreover, the argument charge_sector was added.

  • Simplification in the Lattice: Instead of having separate arguments/attributes/functions for 'nearest_neighbors', 'next_nearest_neighbors', 'next_next_nearest_neighbors' and possibly (Honeycomb) even 'fourth_nearest_neighbors', 'fifth_nearest_neighbors', collect them in a dictionary called pairs. Old call structures are still allowed, but deprecated.

  • issue #94: Array addition and inner() should reflect the order of the labels, if they coincided. Will change the default behaviour in the future, raising FutureWarning for now.

  • Default parameter for DMRG params: increased precision by setting P_tol_min down to the maximum of 1.e-30, lanczos_params['svd_min']**2 * P_tol_to_trunc, lanczos_params['trunc_cut']**2 * P_tol_to_trunc by default.

Added
Fixed
  • MPO expectation_value() did not work for finite systems.

  • Calling compute_K() repeatedly with default parameters but on states with different chi would use the chi of the very first call for the truncation parameters.

  • allow MPSEnvironment and MPOEnvironment to have MPS/MPO with different length

  • group_sites() didn’t work correctly in some situations.

  • matvec_to_array() returned the transposed of A.

  • tenpy.networks.mps.MPS.from_full() messed up the form of the first array.

  • issue #95: blowup of errors in DMRG with update_env > 0. Turns out to be a problem in the precision of the truncation error: TruncationError.eps was set to 0 if it would be smaller than machine precision. To fix it, I added from_S().
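The underlying floating-point issue can be reproduced with plain NumPy (a minimal sketch, not the actual TruncationError code; the variable names are made up for illustration):

```python
import numpy as np

# Singular values on a bond: the kept weight is 1 - 1e-20, the discarded
# weight is 1e-20 -- far below double machine precision relative to 1.
S_kept = np.array([np.sqrt(1. - 1.e-20)])
S_discarded = np.array([1.e-10])

# Computing eps as 1 - sum(S_kept**2) underflows to exactly 0.0 ...
eps_naive = 1. - np.sum(S_kept**2)
# ... while summing the discarded Schmidt values directly keeps the value.
eps_direct = np.sum(S_discarded**2)

print(eps_naive)   # 0.0
print(eps_direct)  # 1e-20
```

This is why constructing the truncation error directly from the discarded singular values (as from_S() does) avoids the spurious zero.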

[0.4.1] - 2019-08-14

Backwards incompatible changes
Changed
Added
Fixed
  • issue #36: long-range couplings could give IndexError.

  • issue #42: Onsite-terms in FermiHubbardModel were wrong for lattices with non-trivial unit cell.

  • Missing a factor 0.5 in GUE().

  • Allow TermList to have terms with multiple operators acting on the same site.

  • Allow MPS indices outside unit cell in mps2lat_idx() and lat2mps_idx().

  • expectation_value() did not work for n-site operators.

[0.4.0] - 2019-04-28

Backwards incompatible changes
  • The argument order of tenpy.models.lattice.Lattice could be a tuple (priority, snake_winding) before. This is no longer valid and needs to be replaced by ("standard", snake_winding, priority).

  • Moved the boundary conditions bc_coupling from the tenpy.models.model.CouplingModel into the tenpy.models.lattice.Lattice (as bc). Using the parameter bc_coupling will raise a FutureWarning, one should set the boundary conditions directly in the lattice.

  • Added parameter permute (True by default) in tenpy.networks.mps.MPS.from_product_state() and tenpy.networks.mps.MPS.from_Bflat(). The resulting state will therefore be independent of the “conserve” parameter of the Sites - unlike before, where the meaning of the p_state argument might have changed.

  • Generalize and rename tenpy.networks.site.DoubleSite to tenpy.networks.site.GroupedSite, to allow for an arbitrary number of sites to be grouped. Arguments site0, site1, label0, label1 of the __init__ can be replaced with [site0, site1], [label0, label1] and op0, op1 of the kronecker_product with [op0, op1]; this will recover the functionality of the DoubleSite.

  • Restructured the call structure of the Mixer in DMRG, allowing implementations of other mixers. To enable the mixer, set the DMRG parameter "mixer" to True or 'DensityMatrixMixer' instead of just 'Mixer'.

  • The interaction parameter in the tenpy.models.bose_hubbard_chain.BoseHubbardModel (and tenpy.models.bose_hubbard_chain.BoseHubbardChain) did not correspond to \(U/2 N (N-1)\) as claimed in the Hamiltonian, but to \(U N^2\). The correcting factor 1/2 and the change in the chemical potential have been fixed.

  • Major restructuring of tenpy.linalg.np_conserved and tenpy.linalg.charges. This should not break backwards compatibility, but if you compiled the cython files, you need to remove the old binaries in the source directory. Using bash cleanup.sh might be helpful to do that, but it also removes other files within the repository, so be careful and make a backup beforehand to be on the safe side. Afterwards, recompile with bash compile.sh.

  • Changed structure of tenpy.models.model.CouplingModel.onsite_terms and tenpy.models.model.CouplingModel.coupling_terms: Each of them is now a dictionary with category strings as keys and the newly introduced tenpy.networks.terms.OnsiteTerms and tenpy.networks.terms.CouplingTerms as values.

  • tenpy.models.model.CouplingModel.calc_H_onsite() is deprecated in favor of new methods.

  • Argument raise_op2_left of tenpy.models.model.CouplingModel.add_coupling() is deprecated.

Added
Changed
  • moved toycodes from the folder examples/ to a new folder toycodes/ to separate them clearly.

  • major remodelling of the internals of tenpy.linalg.np_conserved and tenpy.linalg.charges.
    • Introduced the new module tenpy/linalg/_npc_helper.pyx which contains all the Cython code, and gets imported by np_conserved and charges (if it is compiled).

    • Array now rejects addition/subtraction with other types

    • Array now rejects multiplication/division with non-scalar types

    • By default, make deep copies of npc Arrays.

  • Restructured lanczos into a class, added time evolution calculating exp(A*dt)|psi0>

  • Warning for poorly conditioned Lanczos; to overcome this enable the new parameter reortho.

  • Simplified the call structure of the extend() methods.

  • Restructured tenpy.algorithms.dmrg:

    • dmrg.run() is now just a wrapper around the new engine classes: run(psi, model, pars) is roughly equivalent to eng = EngineCombine(psi, model, pars); eng.run().

    • Added init_env() and reset_stats() to allow a simple restart of DMRG with slightly different parameters, e.g. for tuning Hamiltonian parameters.

    • Call canonical_form() for infinite systems if the final state is not in canonical form.

  • Changed default values for some parameters:

    • set trunc_params['chi_max'] = 100. Not setting a chi_max at all will lead to memory problems. Disable DMRG_params['chi_list'] = None by default to avoid conflicting settings.

    • reduce to mixer_params['amplitude'] = 1.e-5. A too strong mixer spoils DMRG convergence pretty badly.

    • increase Lanczos_params['N_cache'] = N_max (i.e., keep all states)

    • set DMRG_params['P_tol_to_trunc'] = 0.05 and provide reasonable …_min and …_max values.

    • increased (default) DMRG accuracy by setting DMRG_params['max_E_err'] = 1.e-8 and DMRG_params['max_S_err'] = 1.e-5.

    • don’t check the (absolute) energy for convergence in Lanczos.

    • set DMRG_params['norm_tol'] = 1.e-5 to check whether the final state is in canonical form.

  • Verbosity of get_parameter() reduced: print parameters only for verbosity >= 1, and default values only for verbosity >= 2.

  • Don’t print the energy during real-time TEBD evolution - it’s preserved up to truncation errors.

  • Renamed the SquareLattice class to tenpy.models.lattice.Square for better consistency.

  • auto-determine whether Jordan-Wigner strings are necessary in add_coupling().

  • The way the labels of npc Arrays are stored internally changed to a simple list with None entries. There is a deprecated property setter yielding a dictionary with the labels.

  • renamed first_LP and last_RP arguments of MPSEnvironment and MPOEnvironment to init_LP and init_RP.

  • Testing: instead of the (outdated) nose, we now use pytest <https://pytest.org> for testing.
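The Jordan-Wigner strings mentioned above can be illustrated with plain NumPy: fermionic operators on different sites only anticommute if the string operator is included (a self-contained sketch for two spinless-fermion sites; all names are for illustration only):

```python
import numpy as np

c = np.array([[0., 1.], [0., 0.]])  # local annihilation operator in the (|0>, |1>) basis
Z = np.diag([1., -1.])              # Jordan-Wigner string (-1)^n on a single site
I = np.eye(2)

# two-site operators: the string acts on all sites to the left of the operator
c0 = np.kron(c, I)        # c_0: no sites to its left
c1_naive = np.kron(I, c)  # c_1 without the string
c1 = np.kron(Z, c)        # c_1 with the JW string

anticomm = lambda a, b: a @ b + b @ a
print(np.allclose(anticomm(c0, c1), 0.))        # True: correct fermionic statistics
print(np.allclose(anticomm(c0, c1_naive), 0.))  # False: the operators commute instead
```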

Fixed
  • issue #22: Serious bug in tenpy.linalg.np_conserved.inner(): if do_conj=True is used with non-zero qtotal, it returned 0. instead of non-zero values.

  • avoid error in tenpy.networks.mps.MPS.apply_local_op()

  • Don’t carry around total charge when using DMRG with a mixer

  • Corrected couplings of the FermionicHubbardChain

  • issue #2: memory leak in cython parts when using intelpython/anaconda

  • issue #4: incompatible data types.

  • issue #6: the CouplingModel generated wrong Couplings in some cases

  • issue #19: Convergence of energy was slow for infinite systems with N_sweeps_check=1

  • more reasonable traceback in case of wrong labels

  • wrong dtype of npc.Array when adding/subtracting/… arrays of different data types

  • could get wrong H_bond for completely decoupled chains.

  • SVD could return outer indices with different axes

  • tenpy.networks.mps.MPS.overlap() works now for MPS with different total charge (e.g. after psi.apply_local_op(i, 'Sp')).

  • skip existing graph edges in MPOGraph.add() when building up terms without the strength part.

Removed

[0.3.0] - 2018-02-19

This is the first version published on github.

Added
  • Cython modules for np_conserved and charges, which can optionally be compiled for speed-ups

  • tools.optimization for dynamical optimization

  • Various models.

  • More predefined lattice sites.

  • Example toy-codes.

  • Network contractor for general networks

Changed
  • Switch to python3

Removed
  • Python 2 support.

[0.2.0] - 2017-02-24

  • Compatible with python2 and python3 (using the 2to3 tool).

  • Development version.

  • Includes TEBD and DMRG.

Changes compared to previous TeNPy

This library is based on a previous (closed source) version developed mainly by Frank Pollmann, Michael P. Zaletel and Roger S. K. Mong. While almost all files are completely rewritten and not backwards compatible, the overall structure is similar. In the following, we list only the most important changes.

Global Changes
  • syntax style based on PEP8. Use yapf -r -i ./ to ensure consistent formatting over the whole project. The special comments # yapf: disable and # yapf: enable can be used for manual formatting of some regions in code.

  • Following PEP8, we distinguish between ‘private’ functions, indicated by names starting with an underscore and to be used only within the library, and the public API. The public API should be backwards-compatible with different releases, while private functions might change at any time.

  • all modules are in the folder tenpy to avoid name conflicts with other libraries.

  • within the library, relative imports are used, e.g., from ..tools.math import (toiterable, tonparray). Exception: the files in tests/ and examples/ run as __main__ and can’t use relative imports.

    Files outside of the library (and in tests/, examples/) should use absolute imports, e.g. import tenpy.algorithms.tebd

  • renamed tenpy/mps/ to tenpy/networks, since it contains various tensor networks.

  • added Site describing the local physical sites by providing the physical LegCharge and onsite operators.

np_conserved
  • pure python, no need to compile!

  • in module tenpy.linalg instead of algorithms/linalg.

  • moved functionality for charges to charges

  • Introduced the classes ChargeInfo (basically the old q_number, and mod_q) and LegCharge (the old qind, qconj).

  • Introduced the class LegPipe to replace the old leg_pipe. It is derived from LegCharge and used as a leg in the array class. Thus any derived array (e.g. after tensordot) still has all the necessary information to split the legs. (The legs are shared between different arrays, so each is saved only once in memory.)

  • Enhanced indexing of the array class to support slices and 1D index arrays along certain axes

  • more functions, e.g. grid_outer()
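A minimal sketch of the LegPipe mechanics with a trivial (charge-free) array; the labels 'x' and 'y' are arbitrary:

```python
import numpy as np
import tenpy.linalg.np_conserved as npc

a = npc.Array.from_ndarray_trivial(np.arange(6.).reshape(2, 3))
a.iset_leg_labels(['x', 'y'])
b = a.combine_legs([['x', 'y']])  # leg 0 of b is now a LegPipe
c = b.split_legs()                # the pipe remembers how to split again
print(np.allclose(a.to_ndarray(), c.to_ndarray()))  # True
```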

TEBD
  • Introduced TruncationError for easy handling of total truncation error.

  • some truncation parameters are renamed and may have a different meaning, e.g. svd_max -> svd_min has no ‘log’ in the definition.

DMRG
  • separate Lanczos module in tenpy/linalg/. Strangely, the old version orthogonalized against the complex conjugates of orthogonal_to (contrary to its doc string!) (and thus calculated ‘theta_o’ as bra, not ket).

  • cleaned up, provide prototypes for DMRG engine and mixer.

Tools
  • added tenpy.tools.misc, which contains ‘random stuff’ from old tools.math like to_iterable and to_array (renamed to follow PEP8, documented)

  • moved stuff for fitting to tenpy.tools.fit

  • enhanced tenpy.tools.string.vert_join() for nice formatting

  • moved (parts of) old cluster/omp.py to tenpy.tools.process

  • added tenpy.tools.params for a simplified handling of parameters/arguments for models and/or algorithms. Similar to the old models.model.set_var, but also used for algorithms. Also, it may modify the given dictionary.

Introductions

The following documents are meant as introductions to various topics relevant to TeNPy.

If you are new to TeNPy, read the Overview.

Overview

Repository

The root directory of this git repository contains the following folders:

tenpy

The actual source code of the library. Every subfolder contains an __init__.py file with a summary of what the modules in it are good for. (This file is also necessary to mark the folder as part of the python package. Consequently, other subfolders of the git repo should not include an __init__.py file.)

toycodes

Simple toy codes completely independent of the remaining library (i.e., the code in tenpy/). These codes should be quite readable and are intended to give a flavor of how (some of) the algorithms work.

examples

Some example files demonstrating the usage and interface of the library.

doc

A folder containing the documentation: the user guide is contained in the *.rst files. The online documentation is autogenerated from these files and the docstrings of the library. This folder contains a makefile for building the documentation; run make help for the different options. The necessary files for the reference in doc/reference can be auto-generated/updated with make src2html.

tests

Contains files with test routines, to be used with pytest. If you are set up correctly and have pytest installed, you can run the test suite with pytest from within the tests/ folder.

build

This folder is not distributed with the code, but is generated by setup.py (or compile.sh, respectively). It contains compiled versions of the Cython files, and can be ignored (and even removed without losing functionality).

Code structure: getting started

There are several layers of abstraction in TeNPy. While there is a certain hierarchy of how the concepts build up on each other, the user can decide to utilize only some of them. A maximal flexibility is provided by an object oriented style based on classes, which can be inherited and adjusted to individual demands.

The following figure gives an overview of the most important modules, classes and functions in TeNPy. Gray backgrounds indicate (sub)modules, yellow backgrounds indicate classes. Red arrows indicate inheritance relations, dashed black arrows indicate a direct use. (The individual models might be derived from the NearestNeighborModel depending on the geometry of the lattice.) There is a clear hierarchy from high-level algorithms in the tenpy.algorithms module down to basic operations from linear algebra in the tenpy.linalg module.

_images/tenpy_layers.png
Most basic level: linear algebra

Note

See Charge conservation with np_conserved for more information on defining charges for arrays.

The most basic layer is the linalg module, which provides basic features of linear algebra. In particular, the np_conserved submodule implements an Array class which is used to represent the tensors. The basic interface of np_conserved is very similar to that of the NumPy and SciPy libraries. However, the Array class implements abelian charge conservation. If no charges are to be used, one can use ‘trivial’ arrays, as shown in the following example code.

"""Basic use of the `Array` class with trivial arrays."""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3

import tenpy.linalg.np_conserved as npc

M = npc.Array.from_ndarray_trivial([[0., 1.], [1., 0.]])
v = npc.Array.from_ndarray_trivial([2., 4. + 1.j])
v[0] = 3.  # set individual entries like in numpy
print("|v> =", v.to_ndarray())
# |v> = [ 3.+0.j  4.+1.j]

M_v = npc.tensordot(M, v, axes=[1, 0])
print("M|v> =", M_v.to_ndarray())
# M|v> = [ 4.+1.j  3.+0.j]
print("<v|M|v> =", npc.inner(v.conj(), M_v, axes='range'))
# <v|M|v> = (24+0j)

The number and types of symmetries are specified in a ChargeInfo class. An Array instance represents a tensor satisfying a charge rule specifying which blocks of it are nonzero. Internally, it stores only the non-zero blocks of the tensor, along with one LegCharge instance for each leg, which contains the charges and the sign qconj of that leg. We can combine multiple legs into a single larger LegPipe, which is derived from the LegCharge and stores all the information necessary to later split the pipe.

The following code explicitly defines the spin-1/2 \(S^+, S^-, S^z\) operators and uses them to generate and diagonalize the two-site Hamiltonian \(H = \vec{S} \cdot \vec{S}\). It prints the charge values (by default sorted ascending) and the eigenvalues of H.

"""Explicit definition of charges and spin-1/2 operators."""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3

import tenpy.linalg.np_conserved as npc

# consider spin-1/2 with Sz-conservation
chinfo = npc.ChargeInfo([1])  # just a U(1) charge
# charges for up, down state
p_leg = npc.LegCharge.from_qflat(chinfo, [[1], [-1]])
Sz = npc.Array.from_ndarray([[0.5, 0.], [0., -0.5]], [p_leg, p_leg.conj()])
Sp = npc.Array.from_ndarray([[0., 1.], [0., 0.]], [p_leg, p_leg.conj()])
Sm = npc.Array.from_ndarray([[0., 0.], [1., 0.]], [p_leg, p_leg.conj()])

Hxy = 0.5 * (npc.outer(Sp, Sm) + npc.outer(Sm, Sp))
Hz = npc.outer(Sz, Sz)
H = Hxy + Hz
# here, H has 4 legs
H.iset_leg_labels(["s1", "t1", "s2", "t2"])
H = H.combine_legs([["s1", "s2"], ["t1", "t2"]], qconj=[+1, -1])
# here, H has 2 legs
print(H.legs[0].to_qflat().flatten())
# prints [-2  0  0  2]
E, U = npc.eigh(H)  # diagonalize blocks individually
print(E)
# [ 0.25 -0.75  0.25  0.25]
Sites for the local Hilbert space and tensor networks

The next basic concept is that of a local Hilbert space, which is represented by a Site in TeNPy. This class not only labels the local states and defines the charges, but also provides onsite operators. For example, the SpinHalfSite provides the \(S^+, S^-, S^z\) operators under the names 'Sp', 'Sm', 'Sz', defined as Array instances similarly as in the code above. Since the most common sites, for example the SpinSite (for general spin S=0.5, 1, 1.5, …), BosonSite and FermionSite, are predefined, a user of TeNPy usually does not need to define the local charges and operators explicitly. The total Hilbert space, i.e., the tensor product of the local Hilbert spaces, is then just given by a list of Site instances. If desired, different kinds of Site can be combined in that list. This list is then given to classes representing tensor networks like the MPS and MPO. The tensor network classes also use Array instances for the tensors of the represented network.

The following example illustrates the initialization of a spin-1/2 site, an MPS representing the Neel state, and an MPO representing the Heisenberg model by explicitly defining the W tensor.

"""Initialization of sites, MPS and MPO."""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3

from tenpy.networks.site import SpinHalfSite
from tenpy.networks.mps import MPS
from tenpy.networks.mpo import MPO

spin = SpinHalfSite(conserve="Sz")
print(spin.Sz.to_ndarray())
# [[ 0.5  0. ]
#  [ 0.  -0.5]]

N = 6  # number of sites
sites = [spin] * N  # repeat entry of list N times
pstate = ["up", "down"] * (N // 2)  # Neel state
psi = MPS.from_product_state(sites, pstate, bc="finite")
print("<Sz> =", psi.expectation_value("Sz"))
# <Sz> = [ 0.5 -0.5  0.5 -0.5]
print("<Sp_i Sm_j> =", psi.correlation_function("Sp", "Sm"), sep="\n")
# <Sp_i Sm_j> =
# [[1. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 0. 0. 0.]
#  [0. 0. 1. 0. 0. 0.]
#  [0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 0. 1. 0.]
#  [0. 0. 0. 0. 0. 0.]]

# define an MPO
Id, Sp, Sm, Sz = spin.Id, spin.Sp, spin.Sm, spin.Sz
J, Delta, hz = 1., 1., 0.2
W_bulk = [[Id, Sp, Sm, Sz, -hz * Sz], [None, None, None, None, 0.5 * J * Sm],
          [None, None, None, None, 0.5 * J * Sp], [None, None, None, None, J * Delta * Sz],
          [None, None, None, None, Id]]
W_first = [W_bulk[0]]  # first row
W_last = [[row[-1]] for row in W_bulk]  # last column
Ws = [W_first] + [W_bulk] * (N - 2) + [W_last]
H = MPO.from_grids([spin] * N, Ws, bc='finite', IdL=0, IdR=-1)
print("<psi|H|psi> =", H.expectation_value(psi))
# <psi|H|psi> = -1.25
Models

Note

See Models for more information on sites and how to define and extend models on your own.

Technically, the explicit definition of an MPO is already enough to call an algorithm like DMRG in dmrg. However, writing down the W tensors is cumbersome especially for more complicated models. Hence, TeNPy provides another layer of abstraction for the definition of models, which we discuss first. Different kinds of algorithms require different representations of the Hamiltonian. Therefore, the library offers to specify the model abstractly by the individual onsite terms and coupling terms of the Hamiltonian. The following example illustrates this, again for the Heisenberg model.

"""Definition of a model: the XXZ chain."""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3

from tenpy.networks.site import SpinSite
from tenpy.models.lattice import Chain
from tenpy.models.model import CouplingModel, NearestNeighborModel, MPOModel


class XXZChain(CouplingModel, NearestNeighborModel, MPOModel):
    def __init__(self, L=2, S=0.5, J=1., Delta=1., hz=0.):
        spin = SpinSite(S=S, conserve="Sz")
        # the lattice defines the geometry
        lattice = Chain(L, spin, bc="open", bc_MPS="finite")
        CouplingModel.__init__(self, lattice)
        # add terms of the Hamiltonian
        self.add_coupling(J * 0.5, 0, "Sp", 0, "Sm", 1)  # Sp_i Sm_{i+1}
        self.add_coupling(J * 0.5, 0, "Sp", 0, "Sm", -1)  # Sp_i Sm_{i-1}
        self.add_coupling(J * Delta, 0, "Sz", 0, "Sz", 1)
        # (for site dependent prefactors, the strength can be an array)
        self.add_onsite(-hz, 0, "Sz")

        # finish initialization
        # generate MPO for DMRG
        MPOModel.__init__(self, lattice, self.calc_H_MPO())
        # generate H_bond for TEBD
        NearestNeighborModel.__init__(self, lattice, self.calc_H_bond())

While this generates the same MPO as in the previous code, this example can easily be adjusted and generalized, for example to a higher-dimensional lattice by just specifying a different lattice. Internally, the MPO is generated using a finite state machine picture. This makes it possible not only to translate more complicated Hamiltonians into their corresponding MPOs, but also to automate the mapping from a higher-dimensional lattice to the 1D chain along which the MPS winds. Note that this mapping introduces longer-range couplings, so the model can no longer be defined as a NearestNeighborModel suited for TEBD if a lattice other than the Chain is used. Of course, many commonly studied models are also predefined. For example, the following code initializes the Heisenberg model on a kagome lattice; the spin-liquid nature of the ground state of this model is highly debated in the current literature.

"""Initialization of the Heisenberg model on a kagome lattice."""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3

from tenpy.models.spins import SpinModel

model_params = {
    "S": 0.5,  # Spin 1/2
    "lattice": "Kagome",
    "bc_MPS": "infinite",
    "bc_y": "cylinder",
    "Ly": 2,  # defines cylinder circumference
    "conserve": "Sz",  # use Sz conservation
    "Jx": 1.,
    "Jy": 1.,
    "Jz": 1.  # Heisenberg coupling
}
model = SpinModel(model_params)
Algorithms

The highest level in TeNPy is given by algorithms like DMRG and TEBD. Using the previous concepts, setting up a simulation running those algorithms is a matter of just a few lines of code. The following example runs a DMRG simulation (see dmrg), exemplarily for the transverse-field Ising model at the critical point.

"""Call of (finite) DMRG."""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3

from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import dmrg

N = 16  # number of sites
model = TFIChain({"L": N, "J": 1., "g": 1., "bc_MPS": "finite"})
sites = model.lat.mps_sites()
psi = MPS.from_product_state(sites, ['up'] * N, "finite")
dmrg_params = {"trunc_params": {"chi_max": 100, "svd_min": 1.e-10}, "mixer": True}
info = dmrg.run(psi, model, dmrg_params)
print("E =", info['E'])
# E = -20.01638790048513
print("max. bond dimension =", max(psi.chi))
# max. bond dimension = 27

The switch from DMRG to iDMRG in TeNPy is simply accomplished by changing the parameter "bc_MPS" from "finite" to "infinite", both for the model and the state. The returned E is then the energy density per site. Due to the translation invariance, one can also evaluate the correlation length, here slightly away from the critical point.

"""Call of infinite DMRG."""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3

from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import dmrg

N = 2  # number of sites in unit cell
model = TFIChain({"L": N, "J": 1., "g": 1.1, "bc_MPS": "infinite"})
sites = model.lat.mps_sites()
psi = MPS.from_product_state(sites, ['up'] * N, "infinite")
dmrg_params = {"trunc_params": {"chi_max": 100, "svd_min": 1.e-10}, "mixer": True}
info = dmrg.run(psi, model, dmrg_params)
print("E =", info['E'])
# E = -1.342864022725017
print("max. bond dimension =", max(psi.chi))
# max. bond dimension = 56
print("corr. length =", psi.correlation_length())
# corr. length = 4.915809146764157

Running time evolution with TEBD requires an additional loop, during which the desired observables have to be measured. The following code shows this directly for the infinite version of TEBD.

"""Call of (infinite) TEBD."""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3

from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import tebd

M = TFIChain({"L": 2, "J": 1., "g": 1.5, "bc_MPS": "infinite"})
psi = MPS.from_product_state(M.lat.mps_sites(), [0] * 2, "infinite")
tebd_params = {
    "order": 2,
    "delta_tau_list": [0.1, 0.001, 1.e-5],
    "max_error_E": 1.e-6,
    "trunc_params": {
        "chi_max": 30,
        "svd_min": 1.e-10
    }
}
eng = tebd.Engine(psi, M, tebd_params)
eng.run_GS()  # imaginary time evolution with TEBD
print("E =", sum(psi.expectation_value(M.H_bond)) / psi.L)
print("final bond dimensions: ", psi.chi)
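For real time, the loop mentioned above might look as follows. This is a sketch only: it assumes that the engine's run() method evolves by N_steps * dt per call (parameter names dt and N_steps as in this TeNPy version; they may differ in other releases):

```python
"""Sketch: real-time evolution with (infinite) TEBD, measuring <Sz> after each chunk."""
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import tebd

M = TFIChain({"L": 2, "J": 1., "g": 1.5, "bc_MPS": "infinite"})
psi = MPS.from_product_state(M.lat.mps_sites(), [0] * 2, "infinite")
tebd_params = {
    "order": 2,
    "dt": 0.05,
    "N_steps": 5,  # each call to run() evolves by N_steps * dt
    "trunc_params": {"chi_max": 30, "svd_min": 1.e-10}
}
eng = tebd.Engine(psi, M, tebd_params)
Sz_t = []
for n in range(3):  # 3 chunks of real-time evolution
    eng.run()  # evolves psi in place
    Sz_t.append(psi.expectation_value("Sz"))  # measure after each chunk
print("<Sz>(t) =", Sz_t)
```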

Toy Codes

The following “toy codes” are included in the TeNPy repository in the folder toycodes/; we include them here in the documentation for reference. They are meant to give you a flavor of the different algorithms, while keeping the codes as readable and simple as possible. The only requirements to run them are Python 3, NumPy, and SciPy. Simply go to the folder where you downloaded them and execute them with python.

Toycode a_mps.py
"""Toy code implementing a matrix product state."""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import numpy as np
from scipy.linalg import svd
# if you get an error message "LinAlgError: SVD did not converge",
# uncomment the following line. (This requires TeNPy to be installed.)
#  from tenpy.linalg.svd_robust import svd  # (works like scipy.linalg.svd)


class SimpleMPS:
    """Simple class for a matrix product state.

    We index sites with `i` from 0 to L-1; bond `i` is left of site `i`.
    We *assume* that the state is in right-canonical form.

    Parameters
    ----------
    Bs, Ss, bc:
        Same as attributes.

    Attributes
    ----------
    Bs : list of np.Array[ndim=3]
        The 'matrices' in right-canonical form, one for each physical site
        (within the unit-cell for an infinite MPS).
        Each `B[i]` has legs (virtual left, physical, virtual right), in short ``vL i vR``
    Ss : list of np.Array[ndim=1]
        The Schmidt values at each of the bonds, ``Ss[i]`` is left of ``Bs[i]``.
    bc : 'infinite', 'finite'
        Boundary conditions.
    L : int
        Number of sites (in the unit-cell for an infinite MPS).
    nbonds : int
        Number of (non-trivial) bonds: L-1 for 'finite' boundary conditions
    """
    def __init__(self, Bs, Ss, bc='finite'):
        assert bc in ['finite', 'infinite']
        self.Bs = Bs
        self.Ss = Ss
        self.bc = bc
        self.L = len(Bs)
        self.nbonds = self.L - 1 if self.bc == 'finite' else self.L

    def copy(self):
        return SimpleMPS([B.copy() for B in self.Bs], [S.copy() for S in self.Ss], self.bc)

    def get_theta1(self, i):
        """Calculate effective single-site wave function on site i in mixed canonical form.

        The returned array has legs ``vL, i, vR`` (as one of the Bs).
        """
        return np.tensordot(np.diag(self.Ss[i]), self.Bs[i], [1, 0])  # vL [vL'], [vL] i vR

    def get_theta2(self, i):
        """Calculate effective two-site wave function on sites i,j=(i+1) in mixed canonical form.

        The returned array has legs ``vL, i, j, vR``.
        """
        j = (i + 1) % self.L
        return np.tensordot(self.get_theta1(i), self.Bs[j], [2, 0])  # vL i [vR], [vL] j vR

    def get_chi(self):
        """Return bond dimensions."""
        return [self.Bs[i].shape[2] for i in range(self.nbonds)]

    def site_expectation_value(self, op):
        """Calculate expectation values of a local operator at each site."""
        result = []
        for i in range(self.L):
            theta = self.get_theta1(i)  # vL i vR
            op_theta = np.tensordot(op, theta, axes=[1, 1])  # i [i*], vL [i] vR
            result.append(np.tensordot(theta.conj(), op_theta, [[0, 1, 2], [1, 0, 2]]))
            # [vL*] [i*] [vR*], [i] [vL] [vR]
        return np.real_if_close(result)

    def bond_expectation_value(self, op):
        """Calculate expectation values of a local operator at each bond."""
        result = []
        for i in range(self.nbonds):
            theta = self.get_theta2(i)  # vL i j vR
            op_theta = np.tensordot(op[i], theta, axes=[[2, 3], [1, 2]])
            # i j [i*] [j*], vL [i] [j] vR
            result.append(np.tensordot(theta.conj(), op_theta, [[0, 1, 2, 3], [2, 0, 1, 3]]))
            # [vL*] [i*] [j*] [vR*], [i] [j] [vL] [vR]
        return np.real_if_close(result)

    def entanglement_entropy(self):
        """Return the (von-Neumann) entanglement entropy for a bipartition at any of the bonds."""
        bonds = range(1, self.L) if self.bc == 'finite' else range(0, self.L)
        result = []
        for i in bonds:
            S = self.Ss[i].copy()
            S[S < 1.e-20] = 0.  # 0*log(0) should give 0; avoid warning or NaN.
            S2 = S * S
            assert abs(np.linalg.norm(S) - 1.) < 1.e-14
            result.append(-np.sum(S2 * np.log(S2)))
        return np.array(result)

    def correlation_length(self):
        """Diagonalize transfer matrix to obtain the correlation length."""
        import scipy.sparse.linalg as arp
        assert self.bc == 'infinite'  # works only in the infinite case
        B = self.Bs[0]  # vL i vR
        chi = B.shape[0]
        T = np.tensordot(B, np.conj(B), axes=[1, 1])  # vL [i] vR, vL* [i*] vR*
        T = np.transpose(T, [0, 2, 1, 3])  # vL vL* vR vR*
        for i in range(1, self.L):
            B = self.Bs[i]
            T = np.tensordot(T, B, axes=[2, 0])  # vL vL* [vR] vR*, [vL] i vR
            T = np.tensordot(T, np.conj(B), axes=[[2, 3], [0, 1]])
            # vL vL* [vR*] [i] vR, [vL*] [i*] vR*
        T = np.reshape(T, (chi**2, chi**2))
        # Obtain the 2nd largest eigenvalue
        eta = arp.eigs(T, k=2, which='LM', return_eigenvectors=False, ncv=20)
        return -self.L / np.log(np.min(np.abs(eta)))


def init_FM_MPS(L, d, bc='finite'):
    """Return a ferromagnetic MPS (= product state with all spins up)."""
    B = np.zeros([1, d, 1], dtype=float)
    B[0, 0, 0] = 1.
    S = np.ones([1], dtype=float)
    Bs = [B.copy() for i in range(L)]
    Ss = [S.copy() for i in range(L)]
    return SimpleMPS(Bs, Ss, bc)


def split_truncate_theta(theta, chi_max, eps):
    """Split and truncate a two-site wave function in mixed canonical form.

    Split a two-site wave function as follows::
          vL --(theta)-- vR     =>    vL --(A)--diag(S)--(B)-- vR
                |   |                       |             |
                i   j                       i             j

    Afterwards, truncate in the new leg (labeled ``vC``).

    Parameters
    ----------
    theta : np.Array[ndim=4]
        Two-site wave function in mixed canonical form, with legs ``vL, i, j, vR``.
    chi_max : int
        Maximum number of singular values to keep
    eps : float
        Discard any singular values smaller than that.

    Returns
    -------
    A : np.Array[ndim=3]
        Left-canonical matrix on site i, with legs ``vL, i, vC``
    S : np.Array[ndim=1]
        Singular/Schmidt values.
    B : np.Array[ndim=3]
        Right-canonical matrix on site j, with legs ``vC, j, vR``
    """
    chivL, dL, dR, chivR = theta.shape
    theta = np.reshape(theta, [chivL * dL, dR * chivR])
    X, Y, Z = svd(theta, full_matrices=False)
    # truncate
    chivC = min(chi_max, np.sum(Y > eps))
    piv = np.argsort(Y)[::-1][:chivC]  # keep the largest `chivC` singular values
    X, Y, Z = X[:, piv], Y[piv], Z[piv, :]
    # renormalize
    S = Y / np.linalg.norm(Y)  # == Y/sqrt(sum(Y**2))
    # split legs of X and Z
    A = np.reshape(X, [chivL, dL, chivC])
    B = np.reshape(Z, [chivC, dR, chivR])
    return A, S, B
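As a quick sanity check of the split step (a standalone sketch, not part of the toy codes; it redoes the reshape and SVD inline rather than importing `split_truncate_theta`), one can verify that the resulting `A` and `B` tensors are left- and right-canonical:

```python
import numpy as np
from scipy.linalg import svd

rng = np.random.default_rng(0)
chivL, d, chivR = 3, 2, 3
theta = rng.standard_normal((chivL, d, d, chivR))
theta /= np.linalg.norm(theta)  # normalize the two-site wave function

# group legs as (vL i), (j vR), perform the SVD, keep all singular values
X, Y, Z = svd(np.reshape(theta, (chivL * d, d * chivR)), full_matrices=False)
chivC = len(Y)
A = np.reshape(X, (chivL, d, chivC))  # vL i vC
B = np.reshape(Z, (chivC, d, chivR))  # vC j vR

# left-canonical: contracting A* with A over (vL, i) gives the identity on vC
AA = np.tensordot(A.conj(), A, axes=([0, 1], [0, 1]))
# right-canonical: contracting B with B* over (j, vR) gives the identity on vC
BB = np.tensordot(B, B.conj(), axes=([1, 2], [1, 2]))
```

Since `theta` is normalized, the singular values also satisfy `sum(Y**2) == 1`, which is why `split_truncate_theta` only needs to renormalize after discarding small values.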
Toycode b_model.py
"""Toy code implementing the transverse-field ising model."""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import numpy as np


class TFIModel:
    """Simple class generating the Hamiltonian of the transverse-field Ising model.

    The Hamiltonian reads
    .. math ::
        H = - J \\sum_{i} \\sigma^x_i \\sigma^x_{i+1} - g \\sum_{i} \\sigma^z_i

    Parameters
    ----------
    L : int
        Number of sites.
    J, g : float
        Coupling parameters of the above defined Hamiltonian.
    bc : 'infinite', 'finite'
        Boundary conditions.

    Attributes
    ----------
    L : int
        Number of sites.
    bc : 'infinite', 'finite'
        Boundary conditions.
    sigmax, sigmay, sigmaz, id :
        Local operators, namely the Pauli matrices and identity.
    H_bonds : list of np.Array[ndim=4]
        The Hamiltonian written in terms of local 2-site operators, ``H = sum_i H_bonds[i]``.
        Each ``H_bonds[i]`` has (physical) legs (i out, (i+1) out, i in, (i+1) in),
        in short ``i j i* j*``.
    H_mpo : list of np.Array[ndim=4]
        The Hamiltonian written as an MPO.
        Each ``H_mpo[i]`` has legs (virtual left, virtual right, physical out, physical in),
        in short ``wL wR i i*``.
    """
    def __init__(self, L, J, g, bc='finite'):
        assert bc in ['finite', 'infinite']
        self.L, self.d, self.bc = L, 2, bc
        self.J, self.g = J, g
        self.sigmax = np.array([[0., 1.], [1., 0.]])
        self.sigmay = np.array([[0., -1j], [1j, 0.]])
        self.sigmaz = np.array([[1., 0.], [0., -1.]])
        self.id = np.eye(2)
        self.init_H_bonds()
        self.init_H_mpo()

    def init_H_bonds(self):
        """Initialize `H_bonds` hamiltonian.

        Called by __init__().
        """
        sx, sz, id = self.sigmax, self.sigmaz, self.id
        d = self.d
        nbonds = self.L - 1 if self.bc == 'finite' else self.L
        H_list = []
        for i in range(nbonds):
            gL = gR = 0.5 * self.g
            if self.bc == 'finite':
                if i == 0:
                    gL = self.g
                if i + 1 == self.L - 1:
                    gR = self.g
            H_bond = -self.J * np.kron(sx, sx) - gL * np.kron(sz, id) - gR * np.kron(id, sz)
            # H_bond has legs ``i, j, i*, j*``
            H_list.append(np.reshape(H_bond, [d, d, d, d]))
        self.H_bonds = H_list

    # (note: not required for TEBD)
    def init_H_mpo(self):
        """Initialize `H_mpo` Hamiltonian.

        Called by __init__().
        """
        w_list = []
        for i in range(self.L):
            w = np.zeros((3, 3, self.d, self.d), dtype=float)
            w[0, 0] = w[2, 2] = self.id
            w[0, 1] = self.sigmax
            w[0, 2] = -self.g * self.sigmaz
            w[1, 2] = -self.J * self.sigmax
            w_list.append(w)
        self.H_mpo = w_list
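As a consistency check (a standalone sketch with dense numpy arrays, not part of the toy code), one can verify for a finite chain of two sites that contracting the MPO tensors reproduces the Hamiltonian built directly from Pauli matrices; `J` and `g` below are arbitrary example values:

```python
import numpy as np

J, g = 1.0, 0.5  # example couplings
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
id2 = np.eye(2)

# MPO tensor with legs (wL, wR, i, i*), filled exactly as in init_H_mpo
w = np.zeros((3, 3, 2, 2))
w[0, 0] = w[2, 2] = id2
w[0, 1] = sx
w[0, 2] = -g * sz
w[1, 2] = -J * sx

# finite L=2: take the first row of the left tensor and the last column of the
# right tensor, i.e. H = sum_a w[0, a] (x) w[a, 2]
H_from_mpo = sum(np.kron(w[0, a], w[a, 2]) for a in range(3))
H_direct = -J * np.kron(sx, sx) - g * (np.kron(sz, id2) + np.kron(id2, sz))
```

The two constructions agree, which is the defining property of the `w` tensor above.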
Toycode c_tebd.py
"""Toy code implementing the time evolving block decimation (TEBD)."""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import numpy as np
from scipy.linalg import expm
from a_mps import split_truncate_theta


def calc_U_bonds(H_bonds, dt):
    """Given the H_bonds, calculate ``U_bonds[i] = expm(-dt*H_bonds[i])``.

    Each local operator has legs (i out, (i+1) out, i in, (i+1) in), in short ``i j i* j*``.
    Note that no imaginary 'i' is included, thus real `dt` means 'imaginary time' evolution!
    """
    d = H_bonds[0].shape[0]
    U_bonds = []
    for H in H_bonds:
        H = np.reshape(H, [d * d, d * d])
        U = expm(-dt * H)
        U_bonds.append(np.reshape(U, [d, d, d, d]))
    return U_bonds


def run_TEBD(psi, U_bonds, N_steps, chi_max, eps):
    """Evolve for `N_steps` time steps with TEBD."""
    Nbonds = psi.L - 1 if psi.bc == 'finite' else psi.L
    assert len(U_bonds) == Nbonds
    for n in range(N_steps):
        for k in [0, 1]:  # even, odd
            for i_bond in range(k, Nbonds, 2):
                update_bond(psi, i_bond, U_bonds[i_bond], chi_max, eps)
    # done


def update_bond(psi, i, U_bond, chi_max, eps):
    """Apply `U_bond` acting on i,j=(i+1) to `psi`."""
    j = (i + 1) % psi.L
    # construct theta matrix
    theta = psi.get_theta2(i)  # vL i j vR
    # apply U
    Utheta = np.tensordot(U_bond, theta, axes=([2, 3], [1, 2]))  # i j [i*] [j*], vL [i] [j] vR
    Utheta = np.transpose(Utheta, [2, 0, 1, 3])  # vL i j vR
    # split and truncate
    Ai, Sj, Bj = split_truncate_theta(Utheta, chi_max, eps)
    # put back into MPS
    Gi = np.tensordot(np.diag(psi.Ss[i]**(-1)), Ai, axes=[1, 0])  # vL [vL*], [vL] i vC
    psi.Bs[i] = np.tensordot(Gi, np.diag(Sj), axes=[2, 0])  # vL i [vC], [vC] vC
    psi.Ss[j] = Sj  # vC
    psi.Bs[j] = Bj  # vC j vR


def example_TEBD_gs_tf_ising_finite(L, g):
    print("finite TEBD, imaginary time evolution, transverse field Ising")
    print("L={L:d}, g={g:.2f}".format(L=L, g=g))
    import a_mps
    import b_model
    M = b_model.TFIModel(L=L, J=1., g=g, bc='finite')
    psi = a_mps.init_FM_MPS(M.L, M.d, M.bc)
    for dt in [0.1, 0.01, 0.001, 1.e-4, 1.e-5]:
        U_bonds = calc_U_bonds(M.H_bonds, dt)
        run_TEBD(psi, U_bonds, N_steps=500, chi_max=30, eps=1.e-10)
        E = np.sum(psi.bond_expectation_value(M.H_bonds))
        print("dt = {dt:.5f}: E = {E:.13f}".format(dt=dt, E=E))
    print("final bond dimensions: ", psi.get_chi())
    mag_x = np.sum(psi.site_expectation_value(M.sigmax))
    mag_z = np.sum(psi.site_expectation_value(M.sigmaz))
    print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
    print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
    if L < 20:  # compare to exact result
        from tfi_exact import finite_gs_energy
        E_exact = finite_gs_energy(L, 1., g)
        print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
        print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M


def example_TEBD_gs_tf_ising_infinite(g):
    print("infinite TEBD, imaginary time evolution, transverse field Ising")
    print("g={g:.2f}".format(g=g))
    import a_mps
    import b_model
    M = b_model.TFIModel(L=2, J=1., g=g, bc='infinite')
    psi = a_mps.init_FM_MPS(M.L, M.d, M.bc)
    for dt in [0.1, 0.01, 0.001, 1.e-4, 1.e-5]:
        U_bonds = calc_U_bonds(M.H_bonds, dt)
        run_TEBD(psi, U_bonds, N_steps=500, chi_max=30, eps=1.e-10)
        E = np.mean(psi.bond_expectation_value(M.H_bonds))
        print("dt = {dt:.5f}: E (per site) = {E:.13f}".format(dt=dt, E=E))
    print("final bond dimensions: ", psi.get_chi())
    mag_x = np.mean(psi.site_expectation_value(M.sigmax))
    mag_z = np.mean(psi.site_expectation_value(M.sigmaz))
    print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
    print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
    print("correlation length:", psi.correlation_length())
    # compare to exact result
    from tfi_exact import infinite_gs_energy
    E_exact = infinite_gs_energy(1., g)
    print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
    print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M


def example_TEBD_tf_ising_lightcone(L, g, tmax, dt):
    print("finite TEBD, real time evolution, transverse field Ising")
    print("L={L:d}, g={g:.2f}, tmax={tmax:.2f}, dt={dt:.3f}".format(L=L, g=g, tmax=tmax, dt=dt))
    # find ground state with TEBD or DMRG
    #  E, psi, M = example_TEBD_gs_tf_ising_finite(L, g)
    from d_dmrg import example_DMRG_tf_ising_finite
    E, psi, M = example_DMRG_tf_ising_finite(L, g)
    i0 = L // 2
    # apply sigmaz on site i0
    SzB = np.tensordot(M.sigmaz, psi.Bs[i0], axes=[1, 1])  # i [i*], vL [i] vR
    psi.Bs[i0] = np.transpose(SzB, [1, 0, 2])  # vL i vR
    U_bonds = calc_U_bonds(M.H_bonds, 1.j * dt)  # (imaginary dt -> real-time evolution)
    S = [psi.entanglement_entropy()]
    Nsteps = int(tmax / dt + 0.5)
    for n in range(Nsteps):
        if abs((n * dt + 0.1) % 0.2 - 0.1) < 1.e-10:
            print("t = {t:.2f}, chi =".format(t=n * dt), psi.get_chi())
        run_TEBD(psi, U_bonds, 1, chi_max=50, eps=1.e-10)
        S.append(psi.entanglement_entropy())
    import matplotlib.pyplot as plt
    plt.figure()
    plt.imshow(S[::-1],
               vmin=0.,
               aspect='auto',
               interpolation='nearest',
               extent=(0, L - 1., -0.5 * dt, (Nsteps + 0.5) * dt))
    plt.xlabel('site $i$')
    plt.ylabel('time $t/J$')
    plt.ylim(0., tmax)
    plt.colorbar().set_label('entropy $S$')
    filename = 'c_tebd_lightcone_{g:.2f}.pdf'.format(g=g)
    plt.savefig(filename)
    print("saved " + filename)


if __name__ == "__main__":
    example_TEBD_gs_tf_ising_finite(L=10, g=1.)
    print("-" * 100)
    example_TEBD_gs_tf_ising_infinite(g=1.5)
    print("-" * 100)
    example_TEBD_tf_ising_lightcone(L=20, g=1.5, tmax=3., dt=0.01)
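The Trotter splitting into even and odd bonds underlying `run_TEBD` is only approximate; its first-order error per step scales as `dt**2`. A small standalone sketch (generic Hermitian matrices standing in for bond terms, not the toy-code objects) makes this scaling visible:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M + M.T  # Hermitian matrix, standing in for the even-bond terms
M = rng.standard_normal((4, 4))
B = M + M.T  # Hermitian matrix, standing in for the odd-bond terms

errs = []
for dt in [0.02, 0.01]:
    # first-order Trotter step: exp(-dt (A+B)) ~ exp(-dt A) exp(-dt B)
    err = np.linalg.norm(expm(-dt * (A + B)) - expm(-dt * A) @ expm(-dt * B))
    errs.append(err)
ratio = errs[0] / errs[1]  # halving dt should reduce the error roughly 4x
```

This is why the example functions above decrease `dt` step by step: the remaining Trotter error shrinks quadratically with the time step.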
Toycode d_dmrg.py
"""Toy code implementing the density-matrix renormalization group (DMRG)."""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import numpy as np
from a_mps import split_truncate_theta
import scipy.sparse
import scipy.sparse.linalg as arp  # (the old scipy.sparse.linalg.eigen.arpack path was removed)


class SimpleHeff(scipy.sparse.linalg.LinearOperator):
    """Class for the effective Hamiltonian.

    To be diagonalized in `SimpleDMRGEngine.update_bond`. Looks like this::

        .--vL*           vR*--.
        |       i*    j*      |
        |       |     |       |
        (LP)---(W1)--(W2)----(RP)
        |       |     |       |
        |       i     j       |
        .--vL             vR--.
    """
    def __init__(self, LP, RP, W1, W2):
        self.LP = LP  # vL wL* vL*
        self.RP = RP  # vR* wR* vR
        self.W1 = W1  # wL wC i i*
        self.W2 = W2  # wC wR j j*
        chi1, chi2 = LP.shape[0], RP.shape[2]
        d1, d2 = W1.shape[2], W2.shape[2]
        self.theta_shape = (chi1, d1, d2, chi2)  # vL i j vR
        self.shape = (chi1 * d1 * d2 * chi2, chi1 * d1 * d2 * chi2)
        self.dtype = W1.dtype

    def _matvec(self, theta):
        """Calculate |theta'> = H_eff |theta>.

        This function is used by :func:`scipy.sparse.linalg.eigsh` to diagonalize
        the effective Hamiltonian with a Lanczos method, without generating the full matrix."""
        x = np.reshape(theta, self.theta_shape)  # vL i j vR
        x = np.tensordot(self.LP, x, axes=(2, 0))  # vL wL* [vL*], [vL] i j vR
        x = np.tensordot(x, self.W1, axes=([1, 2], [0, 3]))  # vL [wL*] [i] j vR, [wL] wC i [i*]
        x = np.tensordot(x, self.W2, axes=([3, 1], [0, 3]))  # vL [j] vR [wC] i, [wC] wR j [j*]
        x = np.tensordot(x, self.RP, axes=([1, 3], [0, 1]))  # vL [vR] i [wR] j, [vR*] [wR*] vR
        x = np.reshape(x, self.shape[0])
        return x


class SimpleDMRGEngine:
    """DMRG algorithm, implemented as class holding the necessary data.

    Parameters
    ----------
    psi, model, chi_max, eps:
        See attributes

    Attributes
    ----------
    psi : SimpleMPS
        The current ground-state (approximation).
    model :
        The model of which the groundstate is to be calculated.
    chi_max, eps:
        Truncation parameters, see :func:`a_mps.split_truncate_theta`.
    LPs, RPs : list of np.Array[ndim=3]
        Left and right parts ("environments") of the effective Hamiltonian.
        ``LPs[i]`` is the contraction of all parts left of site `i` in the network ``<psi|H|psi>``,
        and similar ``RPs[i]`` for all parts right of site `i`.
        Each ``LPs[i]`` has legs ``vL wL* vL*``, ``RPs[i]`` has legs ``vR* wR* vR``.
    """
    def __init__(self, psi, model, chi_max, eps):
        assert psi.L == model.L and psi.bc == model.bc  # ensure compatibility
        self.H_mpo = model.H_mpo
        self.psi = psi
        self.LPs = [None] * psi.L
        self.RPs = [None] * psi.L
        self.chi_max = chi_max
        self.eps = eps
        # initialize left and right environment
        D = self.H_mpo[0].shape[0]
        chi = psi.Bs[0].shape[0]
        LP = np.zeros([chi, D, chi], dtype=float)  # vL wL* vL*
        RP = np.zeros([chi, D, chi], dtype=float)  # vR* wR* vR
        LP[:, 0, :] = np.eye(chi)
        RP[:, D - 1, :] = np.eye(chi)
        self.LPs[0] = LP
        self.RPs[-1] = RP
        # initialize necessary RPs
        for i in range(psi.L - 1, 1, -1):
            self.update_RP(i)

    def sweep(self):
        # sweep from left to right
        for i in range(self.psi.nbonds - 1):
            self.update_bond(i)
        # sweep from right to left
        for i in range(self.psi.nbonds - 1, 0, -1):
            self.update_bond(i)

    def update_bond(self, i):
        j = (i + 1) % self.psi.L
        # get effective Hamiltonian
        Heff = SimpleHeff(self.LPs[i], self.RPs[j], self.H_mpo[i], self.H_mpo[j])
        # Diagonalize Heff, find ground state `theta`
        theta0 = np.reshape(self.psi.get_theta2(i), [Heff.shape[0]])  # initial guess
        e, v = arp.eigsh(Heff, k=1, which='SA', return_eigenvectors=True, v0=theta0)
        theta = np.reshape(v[:, 0], Heff.theta_shape)
        # split and truncate
        Ai, Sj, Bj = split_truncate_theta(theta, self.chi_max, self.eps)
        # put back into MPS
        Gi = np.tensordot(np.diag(self.psi.Ss[i]**(-1)), Ai, axes=[1, 0])  # vL [vL*], [vL] i vC
        self.psi.Bs[i] = np.tensordot(Gi, np.diag(Sj), axes=[2, 0])  # vL i [vC], [vC*] vC
        self.psi.Ss[j] = Sj  # vC
        self.psi.Bs[j] = Bj  # vC j vR
        self.update_LP(i)
        self.update_RP(j)

    def update_RP(self, i):
        """Calculate RP right of site `i-1` from RP right of site `i`."""
        j = (i - 1) % self.psi.L
        RP = self.RPs[i]  # vR* wR* vR
        B = self.psi.Bs[i]  # vL i vR
        Bc = B.conj()  # vL* i* vR*
        W = self.H_mpo[i]  # wL wR i i*
        RP = np.tensordot(B, RP, axes=[2, 0])  # vL i [vR], [vR*] wR* vR
        RP = np.tensordot(RP, W, axes=[[1, 2], [3, 1]])  # vL [i] [wR*] vR, wL [wR] i [i*]
        RP = np.tensordot(RP, Bc, axes=[[1, 3], [2, 1]])  # vL [vR] wL [i], vL* [i*] [vR*]
        self.RPs[j] = RP  # vL wL vL* (== vR* wR* vR on site i-1)

    def update_LP(self, i):
        """Calculate LP left of site `i+1` from LP left of site `i`."""
        j = (i + 1) % self.psi.L
        LP = self.LPs[i]  # vL wL vL*
        B = self.psi.Bs[i]  # vL i vR
        G = np.tensordot(np.diag(self.psi.Ss[i]), B, axes=[1, 0])  # vL [vL*], [vL] i vR
        A = np.tensordot(G, np.diag(self.psi.Ss[j]**-1), axes=[2, 0])  # vL i [vR], [vR*] vR
        Ac = A.conj()  # vL* i* vR*
        W = self.H_mpo[i]  # wL wR i i*
        LP = np.tensordot(LP, A, axes=[2, 0])  # vL wL* [vL*], [vL] i vR
        LP = np.tensordot(W, LP, axes=[[0, 3], [1, 2]])  # [wL] wR i [i*], vL [wL*] [i] vR
        LP = np.tensordot(Ac, LP, axes=[[0, 1], [2, 1]])  # [vL*] [i*] vR*, wR [i] [vL] vR
        self.LPs[j] = LP  # vR* wR vR (== vL wL* vL* on site i+1)


def example_DMRG_tf_ising_finite(L, g):
    print("finite DMRG, transverse field Ising")
    print("L={L:d}, g={g:.2f}".format(L=L, g=g))
    import a_mps
    import b_model
    M = b_model.TFIModel(L=L, J=1., g=g, bc='finite')
    psi = a_mps.init_FM_MPS(M.L, M.d, M.bc)
    eng = SimpleDMRGEngine(psi, M, chi_max=30, eps=1.e-10)
    for i in range(10):
        eng.sweep()
        E = np.sum(psi.bond_expectation_value(M.H_bonds))
        print("sweep {i:2d}: E = {E:.13f}".format(i=i + 1, E=E))
    print("final bond dimensions: ", psi.get_chi())
    mag_x = np.sum(psi.site_expectation_value(M.sigmax))
    mag_z = np.sum(psi.site_expectation_value(M.sigmaz))
    print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
    print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
    if L < 20:  # compare to exact result
        from tfi_exact import finite_gs_energy
        E_exact = finite_gs_energy(L, 1., g)
        print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
        print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M


def example_DMRG_tf_ising_infinite(g):
    print("infinite DMRG, transverse field Ising")
    print("g={g:.2f}".format(g=g))
    import a_mps
    import b_model
    M = b_model.TFIModel(L=2, J=1., g=g, bc='infinite')
    psi = a_mps.init_FM_MPS(M.L, M.d, M.bc)
    eng = SimpleDMRGEngine(psi, M, chi_max=20, eps=1.e-14)
    for i in range(20):
        eng.sweep()
        E = np.mean(psi.bond_expectation_value(M.H_bonds))
        print("sweep {i:2d}: E (per site) = {E:.13f}".format(i=i + 1, E=E))
    print("final bond dimensions: ", psi.get_chi())
    mag_x = np.mean(psi.site_expectation_value(M.sigmax))
    mag_z = np.mean(psi.site_expectation_value(M.sigmaz))
    print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
    print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
    print("correlation length:", psi.correlation_length())
    # compare to exact result
    from tfi_exact import infinite_gs_energy
    E_exact = infinite_gs_energy(1., g)
    print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
    print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M


if __name__ == "__main__":
    example_DMRG_tf_ising_finite(L=10, g=1.)
    print("-" * 100)
    example_DMRG_tf_ising_infinite(g=1.5)
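The key trick in `SimpleHeff` is to hand scipy only a matrix-vector product instead of the full matrix. A standalone sketch of the same `LinearOperator` + `eigsh` pattern (with a toy diagonal operator, not the actual DMRG environments):

```python
import numpy as np
import scipy.sparse.linalg

# a "Hamiltonian" defined only through its action on a vector,
# analogous to SimpleHeff._matvec
diag = np.arange(1., 65.)  # eigenvalues 1, 2, ..., 64

def matvec(x):
    return diag * x

H_eff = scipy.sparse.linalg.LinearOperator((64, 64), matvec=matvec, dtype=float)
# Lanczos finds the smallest eigenvalue without ever building the 64x64 matrix
E, V = scipy.sparse.linalg.eigsh(H_eff, k=1, which='SA')
```

For DMRG this matters because the dense effective Hamiltonian would have `(chi^2 d^2)^2` entries, while each `_matvec` call costs only a few tensor contractions.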
Toycode tfi_exact.py
"""Provides exact ground state energies for the transverse field ising model for comparison.

The Hamiltonian reads
.. math ::
    H = - J \\sum_{i} \\sigma^x_i \\sigma^x_{i+1} - g \\sum_{i} \\sigma^z_i
"""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3

import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as arp  # (the old scipy.sparse.linalg.eigen.arpack path was removed)
import warnings
import scipy.integrate


def finite_gs_energy(L, J, g):
    """For comparison: obtain ground state energy from exact diagonalization.

    Exponentially expensive in L, only works for small enough `L` <~ 20.
    """
    if L >= 20:
        warnings.warn("Large L: Exact diagonalization might take a long time!")
    # get single-site operators
    sx = sparse.csr_matrix(np.array([[0., 1.], [1., 0.]]))
    sz = sparse.csr_matrix(np.array([[1., 0.], [0., -1.]]))
    id = sparse.csr_matrix(np.eye(2))
    sx_list = []  # sx_list[i] = kron([id, id, ..., id, sx, id, .... id])
    sz_list = []
    for i_site in range(L):
        x_ops = [id] * L
        z_ops = [id] * L
        x_ops[i_site] = sx
        z_ops[i_site] = sz
        X = x_ops[0]
        Z = z_ops[0]
        for j in range(1, L):
            X = sparse.kron(X, x_ops[j], 'csr')
            Z = sparse.kron(Z, z_ops[j], 'csr')
        sx_list.append(X)
        sz_list.append(Z)
    H_xx = sparse.csr_matrix((2**L, 2**L))
    H_z = sparse.csr_matrix((2**L, 2**L))
    for i in range(L - 1):
        H_xx = H_xx + sx_list[i] * sx_list[(i + 1) % L]
    for i in range(L):
        H_z = H_z + sz_list[i]
    H = -J * H_xx - g * H_z
    E, V = arp.eigsh(H, k=1, which='SA', return_eigenvectors=True, ncv=20)
    return E[0]


def infinite_gs_energy(J, g):
    """For comparison: Calculate groundstate energy density from analytic formula.

    The analytic formula stems from mapping the model to free fermions, see P. Pfeuty, The one-
    dimensional Ising model with a transverse field, Annals of Physics 57, p. 79 (1970). Note that
    we use Pauli matrices, whereas this reference uses spin-1/2 matrices, and that we replace the
    ``sum_k -> integral dk/(2 pi)`` to obtain the result in the ``N -> infinity`` limit.
    """
    def f(k, lambda_):
        return np.sqrt(1 + lambda_**2 + 2 * lambda_ * np.cos(k))

    E0_exact = -g / (J * 2. * np.pi) * scipy.integrate.quad(f, -np.pi, np.pi, args=(J / g, ))[0]
    return E0_exact
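At the critical point `J = g` the integral can be done in closed form: there `f(k, 1) = 2 |cos(k/2)|`, which integrates to 8 over `[-pi, pi]`, so `E0 = -4/pi`. A standalone check of the quadrature against this value:

```python
import numpy as np
import scipy.integrate

def f(k, lambda_):
    return np.sqrt(1 + lambda_**2 + 2 * lambda_ * np.cos(k))

J, g = 1., 1.  # critical point of the transverse-field Ising chain
E0 = -g / (J * 2. * np.pi) * scipy.integrate.quad(f, -np.pi, np.pi, args=(J / g, ))[0]
# closed form at criticality: E0 = -4/pi ~ -1.2732
```

This is the reference value the infinite TEBD and DMRG examples converge towards when run at `g = 1`.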

Example codes

The following “examples” are included in the TeNPy repository in the folder examples/; we include them here in the documentation for reference. These examples are meant to give an idea of how to use the library and to demonstrate parts of the interface. It might be helpful to compare some of them (e.g., c_tebd.py and d_dmrg.py) to the Toy Codes. To run these examples, you need to have TeNPy installed, see the Installation instructions. You do not need to save the examples inside the tenpy folder/repository; you can execute them from anywhere (if TeNPy is installed correctly).

Example code a_np_conserved.py
"""An example code to demonstrate the usage of :class:`~tenpy.linalg.np_conserved.Array`.

This example includes the following steps:
1) create Arrays for a Neel MPS
2) create an MPO representing the nearest-neighbour AFM Heisenberg Hamiltonian
3) define 'environments' left and right
4) contract MPS and MPO to calculate the energy
5) extract two-site hamiltonian ``H2`` from the MPO
6) calculate ``exp(-1.j*dt*H2)`` by diagonalization of H2
7) apply ``exp(H2)`` to two sites of the MPS and truncate with svd

Note that this example uses only np_conserved, but no other modules.
Compare it to the example `b_mps.py`,
which does the same steps using a few predefined classes like MPS and MPO.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import tenpy.linalg.np_conserved as npc
import numpy as np

# model parameters
Jxx, Jz = 1., 1.
L = 20
dt = 0.1
cutoff = 1.e-10
print("Jxx={Jxx}, Jz={Jz}, L={L:d}".format(Jxx=Jxx, Jz=Jz, L=L))

print("1) create Arrays for an Neel MPS")

#  vL ->--B-->- vR
#         |
#         ^
#         |
#         p

# create a ChargeInfo to specify the nature of the charge
chinfo = npc.ChargeInfo([1], ['2*Sz'])  # the second argument is just a descriptive name

# create LegCharges on physical leg and even/odd bonds
p_leg = npc.LegCharge.from_qflat(chinfo, [[1], [-1]])  # charges for up, down
v_leg_even = npc.LegCharge.from_qflat(chinfo, [[0]])
v_leg_odd = npc.LegCharge.from_qflat(chinfo, [[1]])

B_even = npc.zeros([v_leg_even, v_leg_odd.conj(), p_leg],
                   labels=['vL', 'vR', 'p'])  # virtual left/right, physical
B_odd = npc.zeros([v_leg_odd, v_leg_even.conj(), p_leg], labels=['vL', 'vR', 'p'])
B_even[0, 0, 0] = 1.  # up
B_odd[0, 0, 1] = 1.  # down

Bs = [B_even, B_odd] * (L // 2) + [B_even] * (L % 2)  # (right-canonical)
Ss = [np.ones(1)] * L  # Ss[i] are singular values between Bs[i-1] and Bs[i]

# Side remark:
# An MPS is expected to have non-zero entries everywhere compatible with the charges.
# In general, we recommend to use `sort_legcharge` (or `as_completely_blocked`)
# to ensure complete blocking. (But the code will also work if you don't do it.)
# The drawback is that this might introduce permutations in the indices of single legs,
# which you have to keep in mind when converting dense numpy arrays to and from npc.Arrays.

print("2) create an MPO representing the AFM Heisenberg Hamiltonian")

#         p*
#         |
#         ^
#         |
#  wL ->--W-->- wR
#         |
#         ^
#         |
#         p

# create physical spin-1/2 operators Sz, S+, S-
Sz = npc.Array.from_ndarray([[0.5, 0.], [0., -0.5]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Sp = npc.Array.from_ndarray([[0., 1.], [0., 0.]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Sm = npc.Array.from_ndarray([[0., 0.], [1., 0.]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Id = npc.eye_like(Sz, labels=Sz.get_leg_labels())  # identity

mpo_leg = npc.LegCharge.from_qflat(chinfo, [[0], [2], [-2], [0], [0]])

W_grid = [[Id,   Sp,   Sm,   Sz,   None          ],
          [None, None, None, None, 0.5 * Jxx * Sm],
          [None, None, None, None, 0.5 * Jxx * Sp],
          [None, None, None, None, Jz * Sz       ],
          [None, None, None, None, Id            ]]  # yapf:disable

W = npc.grid_outer(W_grid, [mpo_leg, mpo_leg.conj()], grid_labels=['wL', 'wR'])
# wL/wR = virtual left/right of the MPO
Ws = [W] * L

print("3) define 'environments' left and right")

#  .---->- vR     vL ->----.
#  |                       |
#  envL->- wR     wL ->-envR
#  |                       |
#  .---->- vR*    vL*->----.

envL = npc.zeros([W.get_leg('wL').conj(), Bs[0].get_leg('vL').conj(), Bs[0].get_leg('vL')],
                 labels=['wR', 'vR', 'vR*'])
envL[0, :, :] = npc.diag(1., envL.legs[1])
envR = npc.zeros([W.get_leg('wR').conj(), Bs[-1].get_leg('vR').conj(), Bs[-1].get_leg('vR')],
                 labels=['wL', 'vL', 'vL*'])
envR[-1, :, :] = npc.diag(1., envR.legs[1])

print("4) contract MPS and MPO to calculate the energy <psi|H|psi>")
contr = envL
for i in range(L):
    # contr labels: wR, vR, vR*
    contr = npc.tensordot(contr, Bs[i], axes=('vR', 'vL'))
    # wR, vR*, vR, p
    contr = npc.tensordot(contr, Ws[i], axes=(['p', 'wR'], ['p*', 'wL']))
    # vR*, vR, wR, p
    contr = npc.tensordot(contr, Bs[i].conj(), axes=(['p', 'vR*'], ['p*', 'vL*']))
    # vR, wR, vR*
    # note that the order of the legs changed, but that's no problem with labels:
    # the arrays are automatically transposed as necessary
E = npc.inner(contr, envR, axes=(['vR', 'wR', 'vR*'], ['vL', 'wL', 'vL*']))
print("E =", E)

print("5) calculate two-site hamiltonian ``H2`` from the MPO")
# label left, right physical legs with p, q
W0 = W.replace_labels(['p', 'p*'], ['p0', 'p0*'])
W1 = W.replace_labels(['p', 'p*'], ['p1', 'p1*'])
H2 = npc.tensordot(W0, W1, axes=('wR', 'wL')).itranspose(['wL', 'wR', 'p0', 'p1', 'p0*', 'p1*'])
H2 = H2[0, -1]  # (If H has single-site terms, it's not that simple anymore)
print("H2 labels:", H2.get_leg_labels())

print("6) calculate exp(H2) by diagonalization of H2")
# diagonalization requires to view H2 as a matrix
H2 = H2.combine_legs([('p0', 'p1'), ('p0*', 'p1*')], qconj=[+1, -1])
print("labels after combine_legs:", H2.get_leg_labels())
E2, U2 = npc.eigh(H2)
print("Eigenvalues of H2:", E2)
U_expE2 = U2.scale_axis(np.exp(-1.j * dt * E2), axis=1)  # scale_axis ~= apply a diagonal matrix
exp_H2 = npc.tensordot(U_expE2, U2.conj(), axes=(1, 1))
exp_H2.iset_leg_labels(H2.get_leg_labels())
exp_H2 = exp_H2.split_legs()  # by default split all legs which are `LegPipe`
# (this restores the original labels ['p0', 'p1', 'p0*', 'p1*'] of `H2` in `exp_H2`)

print("7) apply exp(H2) to even/odd bonds of the MPS and truncate with svd")
# (this implements one time step of first order TEBD)
for even_odd in [0, 1]:
    for i in range(even_odd, L - 1, 2):
        B_L = Bs[i].scale_axis(Ss[i], 'vL').ireplace_label('p', 'p0')
        B_R = Bs[i + 1].replace_label('p', 'p1')
        theta = npc.tensordot(B_L, B_R, axes=('vR', 'vL'))
        theta = npc.tensordot(exp_H2, theta, axes=(['p0*', 'p1*'], ['p0', 'p1']))
        # view as matrix for SVD
        theta = theta.combine_legs([('vL', 'p0'), ('p1', 'vR')], new_axes=[0, 1], qconj=[+1, -1])
        # now theta has labels '(vL.p0)', '(p1.vR)'
        U, S, V = npc.svd(theta, inner_labels=['vR', 'vL'])
        # truncate
        keep = S > cutoff
        S = S[keep]
        invsq = np.linalg.norm(S)
        Ss[i + 1] = S / invsq
        U = U.iscale_axis(S / invsq, 'vR')
        Bs[i] = U.split_legs('(vL.p0)').iscale_axis(Ss[i]**(-1), 'vL').ireplace_label('p0', 'p')
        Bs[i + 1] = V.split_legs('(p1.vR)').ireplace_label('p1', 'p')
print("finished")
Example code b_mps.py
"""Simplified version of `a_np_conserved.py` making use of other classes (like MPS, MPO).

This example includes the following steps:
1) create Arrays for a Neel MPS
2) create an MPO representing the nearest-neighbour AFM Heisenberg Hamiltonian
3) define 'environments' left and right
4) contract MPS and MPO to calculate the energy
5) extract two-site hamiltonian ``H2`` from the MPO
6) calculate ``exp(-1.j*dt*H2)`` by diagonalization of H2
7) apply ``exp(H2)`` to two sites of the MPS and truncate with svd

Note that this example performs the same steps as `a_np_conserved.py`,
but makes use of further predefined classes (like MPS and MPO) on top of npc.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import tenpy.linalg.np_conserved as npc
import numpy as np

# some more imports
from tenpy.networks.site import SpinHalfSite
from tenpy.models.lattice import Chain
from tenpy.networks.mps import MPS
from tenpy.networks.mpo import MPO, MPOEnvironment
from tenpy.algorithms.truncation import svd_theta

# model parameters
Jxx, Jz = 1., 1.
L = 20
dt = 0.1
cutoff = 1.e-10
print("Jxx={Jxx}, Jz={Jz}, L={L:d}".format(Jxx=Jxx, Jz=Jz, L=L))

print("1) create Arrays for an Neel MPS")
site = SpinHalfSite(conserve='Sz')  # predefined charges and Sp,Sm,Sz operators
p_leg = site.leg
chinfo = p_leg.chinfo
# make lattice from unit cell and create product state MPS
lat = Chain(L, site, bc_MPS='finite')
state = ["up", "down"] * (L // 2) + ["up"] * (L % 2)  # Neel state
print("state = ", state)
psi = MPS.from_product_state(lat.mps_sites(), state, lat.bc_MPS)

print("2) create an MPO representing the AFM Heisenberg Hamiltonian")

# predefined physical spin-1/2 operators Sz, S+, S-
Sz, Sp, Sm, Id = site.Sz, site.Sp, site.Sm, site.Id

mpo_leg = npc.LegCharge.from_qflat(chinfo, [[0], [2], [-2], [0], [0]])

W_grid = [[Id,   Sp,   Sm,   Sz,   None          ],
          [None, None, None, None, 0.5 * Jxx * Sm],
          [None, None, None, None, 0.5 * Jxx * Sp],
          [None, None, None, None, Jz * Sz       ],
          [None, None, None, None, Id            ]]  # yapf:disable

W = npc.grid_outer(W_grid, [mpo_leg, mpo_leg.conj()], grid_labels=['wL', 'wR'])
# wL/wR = virtual left/right of the MPO
Ws = [W] * L
Ws[0] = W[:1, :]
Ws[-1] = W[:, -1:]
H = MPO(psi.sites, Ws, psi.bc, IdL=0, IdR=-1)

print("3) define 'environments' left and right")

# this is automatically done during initialization of MPOEnvironment
env = MPOEnvironment(psi, H, psi)
envL = env.get_LP(0)
envR = env.get_RP(L - 1)

print("4) contract MPS and MPO to calculate the energy <psi|H|psi>")

E = env.full_contraction(L - 1)
print("E =", E)

print("5) calculate two-site hamiltonian ``H2`` from the MPO")
# label left, right physical legs with p, q
W0 = H.get_W(0).replace_labels(['p', 'p*'], ['p0', 'p0*'])
W1 = H.get_W(1).replace_labels(['p', 'p*'], ['p1', 'p1*'])
H2 = npc.tensordot(W0, W1, axes=('wR', 'wL')).itranspose(['wL', 'wR', 'p0', 'p1', 'p0*', 'p1*'])
H2 = H2[H.IdL[0], H.IdR[2]]  # (If H has single-site terms, it's not that simple anymore)
print("H2 labels:", H2.get_leg_labels())

print("6) calculate exp(H2) by diagonalization of H2")
# diagonalization requires to view H2 as a matrix
H2 = H2.combine_legs([('p0', 'p1'), ('p0*', 'p1*')], qconj=[+1, -1])
print("labels after combine_legs:", H2.get_leg_labels())
E2, U2 = npc.eigh(H2)
print("Eigenvalues of H2:", E2)
U_expE2 = U2.scale_axis(np.exp(-1.j * dt * E2), axis=1)  # scale_axis ~= apply a diagonal matrix
exp_H2 = npc.tensordot(U_expE2, U2.conj(), axes=(1, 1))
exp_H2.iset_leg_labels(H2.get_leg_labels())
exp_H2 = exp_H2.split_legs()  # by default split all legs which are `LegPipe`
# (this restores the original labels ['p0', 'p1', 'p0*', 'p1*'] of `H2` in `exp_H2`)

# alternative way: use :func:`~tenpy.linalg.np_conserved.expm`
exp_H2_alternative = npc.expm(-1.j * dt * H2).split_legs()
assert (npc.norm(exp_H2_alternative - exp_H2) < 1.e-14)

print("7) apply exp(H2) to even/odd bonds of the MPS and truncate with svd")
# (this implements one time step of first order TEBD)
trunc_par = {'svd_min': cutoff, 'trunc_cut': None, 'verbose': 0}
for even_odd in [0, 1]:
    for i in range(even_odd, L - 1, 2):
        theta = psi.get_theta(i, 2)  # handles canonical form (i.e. scaling with 'S')
        theta = npc.tensordot(exp_H2, theta, axes=(['p0*', 'p1*'], ['p0', 'p1']))
        # view as matrix for SVD
        theta = theta.combine_legs([('vL', 'p0'), ('p1', 'vR')], new_axes=[0, 1], qconj=[+1, -1])
        # now theta has labels '(vL.p0)', '(p1.vR)'
        U, S, V, err, invsq = svd_theta(theta, trunc_par, inner_labels=['vR', 'vL'])
        psi.set_SR(i, S)
        A_L = U.split_legs('(vL.p0)').ireplace_label('p0', 'p')
        B_R = V.split_legs('(p1.vR)').ireplace_label('p1', 'p')
        psi.set_B(i, A_L, form='A')  # left-canonical form
        psi.set_B(i + 1, B_R, form='B')  # right-canonical form
print("finished")
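Step 6 above exponentiates ``H2`` via its eigendecomposition. The underlying identity, exp(-i dt H) = U exp(-i dt E) U^dagger for Hermitian H, can be cross-checked with plain NumPy/SciPy, independent of TeNPy; a minimal sketch, where the random 4x4 Hermitian matrix stands in for a generic two-site Hamiltonian:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
dt = 0.05
# random Hermitian "two-site Hamiltonian" (4x4, like two spin-1/2 sites)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H2 = A + A.conj().T

# exponentiate via eigendecomposition, as in step 6
E2, U2 = np.linalg.eigh(H2)
exp_H2 = (U2 * np.exp(-1j * dt * E2)) @ U2.conj().T  # U diag(e^{-i dt E}) U^dagger

# compare to scipy's matrix exponential
assert np.linalg.norm(exp_H2 - expm(-1j * dt * H2)) < 1e-12
# the result is unitary, so the time evolution preserves the norm
assert np.allclose(exp_H2 @ exp_H2.conj().T, np.eye(4))
```

The same two routes are taken in the TeNPy code above (``npc.eigh`` plus ``scale_axis`` versus ``npc.expm``), which agree up to machine precision.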
Example code c_tebd.py
"""Example illustrating the use of TEBD in tenpy.

The example functions in this module do the same as the ones in `toycodes/c_tebd.py`, but make use
of the classes defined in tenpy.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import numpy as np

from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import tebd


def example_TEBD_gs_tf_ising_finite(L, g, verbose=True):
    print("finite TEBD, imaginary time evolution, transverse field Ising")
    print("L={L:d}, g={g:.2f}".format(L=L, g=g))
    model_params = dict(L=L, J=1., g=g, bc_MPS='finite', conserve=None, verbose=verbose)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    tebd_params = {
        'order': 2,
        'delta_tau_list': [0.1, 0.01, 0.001, 1.e-4, 1.e-5],
        'N_steps': 10,
        'max_error_E': 1.e-6,
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'verbose': verbose,
    }
    eng = tebd.Engine(psi, M, tebd_params)
    eng.run_GS()  # the main work...

    # expectation values
    E = np.sum(M.bond_energies(psi))  # M.bond_energies() works only for a NearestNeighborModel
    # alternative: directly measure E2 = np.sum(psi.expectation_value(M.H_bond[1:]))
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    mag_x = np.sum(psi.expectation_value("Sigmax"))
    mag_z = np.sum(psi.expectation_value("Sigmaz"))
    print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
    print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
    if L < 20:  # compare to exact result
        from tfi_exact import finite_gs_energy
        E_exact = finite_gs_energy(L, 1., g)
        print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
        print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M


def example_TEBD_gs_tf_ising_infinite(g, verbose=True):
    print("infinite TEBD, imaginary time evolution, transverse field Ising")
    print("g={g:.2f}".format(g=g))
    model_params = dict(L=2, J=1., g=g, bc_MPS='infinite', conserve=None, verbose=verbose)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    tebd_params = {
        'order': 2,
        'delta_tau_list': [0.1, 0.01, 0.001, 1.e-4, 1.e-5],
        'N_steps': 10,
        'max_error_E': 1.e-8,
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'verbose': verbose,
    }
    eng = tebd.Engine(psi, M, tebd_params)
    eng.run_GS()  # the main work...
    E = np.mean(M.bond_energies(psi))  # M.bond_energies() works only for a NearestNeighborModel
    # alternative: directly measure E2 = np.mean(psi.expectation_value(M.H_bond))
    print("E (per site) = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    mag_x = np.mean(psi.expectation_value("Sigmax"))
    mag_z = np.mean(psi.expectation_value("Sigmaz"))
    print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
    print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
    print("correlation length:", psi.correlation_length())
    # compare to exact result
    from tfi_exact import infinite_gs_energy
    E_exact = infinite_gs_energy(1., g)
    print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
    print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M


def example_TEBD_tf_ising_lightcone(L, g, tmax, dt, verbose=True):
    print("finite TEBD, real time evolution")
    print("L={L:d}, g={g:.2f}, tmax={tmax:.2f}, dt={dt:.3f}".format(L=L, g=g, tmax=tmax, dt=dt))
    # find ground state with TEBD or DMRG
    #  E, psi, M = example_TEBD_gs_tf_ising_finite(L, g)
    from d_dmrg import example_DMRG_tf_ising_finite
    print("(run DMRG to get the groundstate)")
    E, psi, M = example_DMRG_tf_ising_finite(L, g, verbose=False)
    print("(DMRG finished)")
    i0 = L // 2
    # apply sigmaz on site i0
    psi.apply_local_op(i0, 'Sigmaz', unitary=True)
    dt_measure = 0.05
    # tebd.Engine makes 'N_steps' steps of `dt` at once; for second order this is more efficient.
    tebd_params = {
        'order': 2,
        'dt': dt,
        'N_steps': int(dt_measure / dt + 0.5),
        'trunc_params': {
            'chi_max': 50,
            'svd_min': 1.e-10,
            'trunc_cut': None
        },
        'verbose': verbose,
    }
    eng = tebd.Engine(psi, M, tebd_params)
    S = [psi.entanglement_entropy()]
    for n in range(int(tmax / dt_measure + 0.5)):
        eng.run()
        S.append(psi.entanglement_entropy())
    import matplotlib.pyplot as plt
    plt.figure()
    plt.imshow(S[::-1],
               vmin=0.,
               aspect='auto',
               interpolation='nearest',
               extent=(0, L - 1., -0.5 * dt_measure, eng.evolved_time + 0.5 * dt_measure))
    plt.xlabel('site $i$')
    plt.ylabel('time $t/J$')
    plt.ylim(0., tmax)
    plt.colorbar().set_label('entropy $S$')
    filename = 'c_tebd_lightcone_{g:.2f}.pdf'.format(g=g)
    plt.savefig(filename)
    print("saved " + filename)


def example_TEBD_gs_tf_ising_next_nearest_neighbor(L, g, Jp, verbose=True):
    from tenpy.models.spins_nnn import SpinChainNNN2
    from tenpy.models.model import NearestNeighborModel
    print("finite TEBD, imaginary time evolution, transverse field Ising next-nearest neighbor")
    print("L={L:d}, g={g:.2f}, Jp={Jp:.2f}".format(L=L, g=g, Jp=Jp))
    model_params = dict(L=L,
                        Jx=1.,
                        Jy=0.,
                        Jz=0.,
                        Jxp=Jp,
                        Jyp=0.,
                        Jzp=0.,
                        hz=g,
                        bc_MPS='finite',
                        conserve=None,
                        verbose=verbose)
    # we start with the non-grouped sites, but next-nearest neighbor interactions, building the MPO
    M = SpinChainNNN2(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)

    # now we group each two sites ...
    psi.group_sites(n=2)  # ... in the state
    M.group_sites(n=2)  # ... and model
    # now, M has only 'nearest-neighbor' interactions with respect to the grouped sites
    # thus, we can convert the MPO into H_bond terms:
    M_nn = NearestNeighborModel.from_MPOModel(M)  # hence, we can initialize H_bond from the MPO

    # now, we continue to run TEBD as before
    tebd_params = {
        'order': 2,
        'delta_tau_list': [0.1, 0.01, 0.001, 1.e-4, 1.e-5],
        'N_steps': 10,
        'max_error_E': 1.e-6,
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'verbose': verbose,
    }
    eng = tebd.Engine(psi, M_nn, tebd_params)  # use M_nn and grouped psi
    eng.run_GS()  # the main work...

    # expectation values:
    E = np.sum(M_nn.bond_energies(psi))  # bond_energies() works only for a NearestNeighborModel
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    # we can split the sites of the state again for an easier evaluation of expectation values
    psi.group_split()
    mag_x = 2. * np.sum(psi.expectation_value("Sx"))  # factor of 2 for Sx vs Sigmax
    mag_z = 2. * np.sum(psi.expectation_value("Sz"))
    print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
    print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
    return E, psi, M


if __name__ == "__main__":
    example_TEBD_gs_tf_ising_finite(L=10, g=1.)
    print("-" * 100)
    example_TEBD_gs_tf_ising_infinite(g=1.5)
    print("-" * 100)
    example_TEBD_tf_ising_lightcone(L=20, g=1.5, tmax=3., dt=0.01)
    print("-" * 100)
    example_TEBD_gs_tf_ising_next_nearest_neighbor(L=10, g=1.0, Jp=0.1)
Example code d_dmrg.py
"""Example illustrating the use of DMRG in tenpy.

The example functions in this module do the same as the ones in `toycodes/d_dmrg.py`,
but make use of the classes defined in tenpy.

.. todo ::
    Docstrings?
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import numpy as np

from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.models.spins import SpinModel
from tenpy.algorithms import dmrg


def example_DMRG_tf_ising_finite(L, g, verbose=True):
    print("finite DMRG, transverse field Ising model")
    print("L={L:d}, g={g:.2f}".format(L=L, g=g))
    model_params = dict(L=L, J=1., g=g, bc_MPS='finite', conserve=None, verbose=verbose)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    dmrg_params = {
        'mixer': None,  # setting this to True helps to escape local minima
        'max_E_err': 1.e-10,
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'verbose': verbose,
        'combine': True
    }
    info = dmrg.run(psi, M, dmrg_params)  # the main work...
    E = info['E']
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    mag_x = np.sum(psi.expectation_value("Sigmax"))
    mag_z = np.sum(psi.expectation_value("Sigmaz"))
    print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
    print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
    if L < 20:  # compare to exact result
        from tfi_exact import finite_gs_energy
        E_exact = finite_gs_energy(L, 1., g)
        print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
        print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M


def example_1site_DMRG_tf_ising_finite(L, g, verbose=True):
    print("single-site finite DMRG, transverse field Ising model")
    print("L={L:d}, g={g:.2f}".format(L=L, g=g))
    model_params = dict(L=L, J=1., g=g, bc_MPS='finite', conserve=None, verbose=verbose)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    dmrg_params = {
        'mixer': True,  # setting this to True is essential for the 1-site algorithm to work.
        'max_E_err': 1.e-10,
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'verbose': verbose,
        'combine': False,
        'active_sites': 1  # specifies single-site
    }
    info = dmrg.run(psi, M, dmrg_params)
    E = info['E']
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    mag_x = np.sum(psi.expectation_value("Sigmax"))
    mag_z = np.sum(psi.expectation_value("Sigmaz"))
    print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
    print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
    if L < 20:  # compare to exact result
        from tfi_exact import finite_gs_energy
        E_exact = finite_gs_energy(L, 1., g)
        print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
        print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M


def example_DMRG_tf_ising_infinite(g, verbose=True):
    print("infinite DMRG, transverse field Ising model")
    print("g={g:.2f}".format(g=g))
    model_params = dict(L=2, J=1., g=g, bc_MPS='infinite', conserve=None, verbose=verbose)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    dmrg_params = {
        'mixer': True,  # setting this to True helps to escape local minima
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'max_E_err': 1.e-10,
        'verbose': verbose,
    }
    # Sometimes, we want to call a 'DMRG engine' explicitly
    eng = dmrg.TwoSiteDMRGEngine(psi, M, dmrg_params)
    E, psi = eng.run()  # equivalent to dmrg.run() up to the return parameters.
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    mag_x = np.mean(psi.expectation_value("Sigmax"))
    mag_z = np.mean(psi.expectation_value("Sigmaz"))
    print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
    print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
    print("correlation length:", psi.correlation_length())
    # compare to exact result
    from tfi_exact import infinite_gs_energy
    E_exact = infinite_gs_energy(1., g)
    print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
    print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M


def example_1site_DMRG_tf_ising_infinite(g, verbose=True):
    print("single-site infinite DMRG, transverse field Ising model")
    print("g={g:.2f}".format(g=g))
    model_params = dict(L=2, J=1., g=g, bc_MPS='infinite', conserve=None, verbose=verbose)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    dmrg_params = {
        'mixer': True,  # setting this to True is essential for the 1-site algorithm to work.
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'max_E_err': 1.e-10,
        'verbose': verbose,
        'combine': True
    }
    eng = dmrg.SingleSiteDMRGEngine(psi, M, dmrg_params)
    E, psi = eng.run()  # equivalent to dmrg.run() up to the return parameters.
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    mag_x = np.mean(psi.expectation_value("Sigmax"))
    mag_z = np.mean(psi.expectation_value("Sigmaz"))
    print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
    print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
    print("correlation length:", psi.correlation_length())
    # compare to exact result
    from tfi_exact import infinite_gs_energy
    E_exact = infinite_gs_energy(1., g)
    print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
    print("relative error: ", abs((E - E_exact) / E_exact))


def example_DMRG_heisenberg_xxz_infinite(Jz, conserve='best', verbose=True):
    print("infinite DMRG, Heisenberg XXZ chain")
    print("Jz={Jz:.2f}, conserve={conserve!r}".format(Jz=Jz, conserve=conserve))
    model_params = dict(
        L=2,
        S=0.5,  # spin 1/2
        Jx=1.,
        Jy=1.,
        Jz=Jz,  # couplings
        bc_MPS='infinite',
        conserve=conserve,
        verbose=verbose)
    M = SpinModel(model_params)
    product_state = ["up", "down"]  # initial Neel state
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    dmrg_params = {
        'mixer': True,  # setting this to True helps to escape local minima
        'trunc_params': {
            'chi_max': 100,
            'svd_min': 1.e-10,
        },
        'max_E_err': 1.e-10,
        'verbose': verbose,
    }
    info = dmrg.run(psi, M, dmrg_params)
    E = info['E']
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    Sz = psi.expectation_value("Sz")  # Sz instead of Sigma z: spin-1/2 operators!
    mag_z = np.mean(Sz)
    print("<S_z> = [{Sz0:.5f}, {Sz1:.5f}]; mean ={mag_z:.5f}".format(Sz0=Sz[0],
                                                                     Sz1=Sz[1],
                                                                     mag_z=mag_z))
    # note: it's clear that mean(<Sz>) is 0: the model has Sz conservation!
    print("correlation length:", psi.correlation_length())
    corrs = psi.correlation_function("Sz", "Sz", sites1=range(10))
    print("correlations <Sz_i Sz_j> =")
    print(corrs)
    return E, psi, M


if __name__ == "__main__":
    example_DMRG_tf_ising_finite(L=10, g=1., verbose=True)
    print("-" * 100)
    example_1site_DMRG_tf_ising_finite(L=10, g=1., verbose=True)
    print("-" * 100)
    example_DMRG_tf_ising_infinite(g=1.5, verbose=True)
    print("-" * 100)
    example_1site_DMRG_tf_ising_infinite(g=1.5, verbose=True)
    print("-" * 100)
    example_DMRG_heisenberg_xxz_infinite(Jz=1.5)
Example code e_tdvp.py
"""Example illustrating the use of TDVP in tenpy.

As of now, we have TDVP only for finite systems. The call structure is quite similar to TEBD. A
difference is that we can run one-site TDVP or two-site TDVP. In the former, the bond dimension
cannot grow; the latter allows the bond dimension to grow and hence requires a truncation.
"""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3
import numpy as np
import tenpy.linalg.np_conserved as npc
import tenpy.models.spins
import tenpy.networks.mps as mps
import tenpy.networks.site as site
from tenpy.algorithms import tdvp
from tenpy.networks.mps import MPS
import copy


def run_out_of_equilibrium():
    L = 10
    chi = 5
    delta_t = 0.1
    model_params = {
        'L': L,
        'S': 0.5,
        'conserve': 'Sz',
        'Jz': 1.0,
        'Jy': 1.0,
        'Jx': 1.0,
        'hx': 0.0,
        'hy': 0.0,
        'hz': 0.0,
        'muJ': 0.0,
        'bc_MPS': 'finite',
    }

    heisenberg = tenpy.models.spins.SpinChain(model_params)
    product_state = ["up"] * (L // 2) + ["down"] * (L - L // 2)
    # starting from a domain-wall product state which is not an eigenstate of the Heisenberg model
    psi = MPS.from_product_state(heisenberg.lat.mps_sites(),
                                 product_state,
                                 bc=heisenberg.lat.bc_MPS,
                                 form='B')

    tdvp_params = {
        'start_time': 0,
        'dt': delta_t,
        'trunc_params': {
            'chi_max': chi,
            'svd_min': 1.e-10,
            'trunc_cut': None
        }
    }
    tdvp_engine = tdvp.Engine(psi, heisenberg, tdvp_params)
    times = []
    S_mid = []
    for i in range(30):
        tdvp_engine.run_two_sites(N_steps=1)
        times.append(tdvp_engine.evolved_time)
        S_mid.append(psi.entanglement_entropy(bonds=[L // 2])[0])
    for i in range(30):
        tdvp_engine.run_one_site(N_steps=1)
        #psi_2=copy.deepcopy(psi)
        #psi_2.canonical_form()
        times.append(tdvp_engine.evolved_time)
        S_mid.append(psi.entanglement_entropy(bonds=[L // 2])[0])
    import matplotlib.pyplot as plt
    plt.figure()
    plt.plot(times, S_mid)
    plt.xlabel('t')
    plt.ylabel('S')
    plt.axvline(x=3.1, color='red')
    plt.text(0.0, 0.0000015, "Two sites update")
    plt.text(3.1, 0.0000015, "One site update")
    plt.show()


if __name__ == "__main__":
    run_out_of_equilibrium()
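The quantity ``S_mid`` tracked above is the half-chain entanglement entropy. For a full state vector it can be computed directly from the singular values of the bipartition matrix, S = -sum_a s_a^2 log(s_a^2); a small NumPy sketch independent of TeNPy, checked against a product state (S = 0) and a Bell pair (S = log 2):

```python
import numpy as np

def entanglement_entropy(psi, dim_left):
    """Von Neumann entropy of a bipartition of a normalized state vector."""
    theta = psi.reshape(dim_left, -1)   # view |psi> as a matrix between the two halves
    s = np.linalg.svd(theta, compute_uv=False)
    p = s**2                            # Schmidt weights
    p = p[p > 1e-15]                    # drop numerical zeros before taking the log
    return -np.sum(p * np.log(p))

# product state: no entanglement
up = np.array([1., 0.])
product = np.kron(up, up)
assert abs(entanglement_entropy(product, 2)) < 1e-12

# Bell state (|00> + |11>)/sqrt(2): maximal entanglement of two qubits
bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)
assert abs(entanglement_entropy(bell, 2) - np.log(2)) < 1e-12
```

For an MPS, :meth:`~tenpy.networks.mps.MPS.entanglement_entropy` reads off the same quantity from the Schmidt values stored on each bond, without ever forming the full state vector.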
Example code tfi_exact.py
"""Provides exact ground state energies for the transverse field ising model for comparison.

The Hamiltonian reads
.. math ::
    H = - J \\sum_{i} \\sigma^x_i \\sigma^x_{i+1} - g \\sum_{i} \\sigma^z_i
"""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3

import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as arp  # provides eigsh; the old `eigen.arpack` submodule path is deprecated
import warnings
import scipy.integrate


def finite_gs_energy(L, J, g):
    """For comparison: obtain ground state energy from exact diagonalization.

    Exponentially expensive in L, only works for small enough `L` <~ 20.
    """
    if L >= 20:
        warnings.warn("Large L: Exact diagonalization might take a long time!")
    # get single-site operators
    sx = sparse.csr_matrix(np.array([[0., 1.], [1., 0.]]))
    sz = sparse.csr_matrix(np.array([[1., 0.], [0., -1.]]))
    id = sparse.csr_matrix(np.eye(2))
    sx_list = []  # sx_list[i] = kron([id, id, ..., id, sx, id, .... id])
    sz_list = []
    for i_site in range(L):
        x_ops = [id] * L
        z_ops = [id] * L
        x_ops[i_site] = sx
        z_ops[i_site] = sz
        X = x_ops[0]
        Z = z_ops[0]
        for j in range(1, L):
            X = sparse.kron(X, x_ops[j], 'csr')
            Z = sparse.kron(Z, z_ops[j], 'csr')
        sx_list.append(X)
        sz_list.append(Z)
    H_xx = sparse.csr_matrix((2**L, 2**L))
    H_z = sparse.csr_matrix((2**L, 2**L))
    for i in range(L - 1):
        H_xx = H_xx + sx_list[i] * sx_list[(i + 1) % L]
    for i in range(L):
        H_z = H_z + sz_list[i]
    H = -J * H_xx - g * H_z
    E, V = arp.eigsh(H, k=1, which='SA', return_eigenvectors=True, ncv=20)
    return E[0]


def infinite_gs_energy(J, g):
    """For comparison: Calculate groundstate energy density from analytic formula.

    The analytic formula stems from mapping the model to free fermions, see P. Pfeuty, The one-
    dimensional Ising model with a transverse field, Annals of Physics 57, p. 79 (1970). Note that
    we use Pauli matrices, whereas this reference uses spin-1/2 matrices, and that we replace the
    sum over k by an integral dk/2pi to obtain the result in the N -> infinity limit.
    """
    def f(k, lambda_):
        return np.sqrt(1 + lambda_**2 + 2 * lambda_ * np.cos(k))

    E0_exact = -g / (J * 2. * np.pi) * scipy.integrate.quad(f, -np.pi, np.pi, args=(J / g, ))[0]
    return E0_exact
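As a quick plausibility check (not part of the file above), the finite-size energy per site from exact diagonalization should approach the analytic thermodynamic-limit value; at the critical point J = g = 1 the integral evaluates to -4/pi. A dense NumPy sketch for a small open chain, assuming nothing beyond the definitions above:

```python
import numpy as np
import scipy.integrate

J, g, L = 1., 1., 8
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def op_at(op, i, L):
    """Embed a single-site operator at site i of an L-site chain via Kronecker products."""
    out = np.array([[1.]])
    for j in range(L):
        out = np.kron(out, op if j == i else np.eye(2))
    return out

# H = -J sum_i sx_i sx_{i+1} - g sum_i sz_i  (open boundary conditions)
H = -J * sum(op_at(sx, i, L) @ op_at(sx, i + 1, L) for i in range(L - 1))
H = H - g * sum(op_at(sz, i, L) for i in range(L))
E0 = np.linalg.eigvalsh(H)[0]

# analytic energy density in the thermodynamic limit (Pfeuty 1970)
f = lambda k, lam: np.sqrt(1. + lam**2 + 2. * lam * np.cos(k))
e_inf = -g / (2. * np.pi) * scipy.integrate.quad(f, -np.pi, np.pi, args=(J / g,))[0]

assert abs(e_inf + 4. / np.pi) < 1e-6   # at J = g = 1 the integral gives -4/pi
assert abs(E0 / L - e_inf) < 0.1        # finite-size/boundary corrections are O(1/L)
```

The remaining discrepancy of a few percent at L = 8 is the open-boundary finite-size effect, which shrinks as the chain grows.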
Example code purification.py
Example code z_exact_diag.py
"""A simple example comparing DMRG output with full diagonalization (ED).

Sorry that this is not well documented! ED is meant to be used for debugging only ;)
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import tenpy.linalg.np_conserved as npc
from tenpy.models.xxz_chain import XXZChain
from tenpy.networks.mps import MPS

from tenpy.algorithms.exact_diag import ExactDiag
from tenpy.algorithms import dmrg


def example_exact_diagonalization(L, Jz):
    xxz_pars = dict(L=L, Jxx=1., Jz=Jz, hz=0.0, bc_MPS='finite')
    M = XXZChain(xxz_pars)

    product_state = ["up", "down"] * (xxz_pars['L'] // 2)  # this selects a charge sector!
    psi_DMRG = MPS.from_product_state(M.lat.mps_sites(), product_state)
    charge_sector = psi_DMRG.get_total_charge(True)  # ED charge sector should match

    ED = ExactDiag(M, charge_sector=charge_sector, max_size=2.e6)
    ED.build_full_H_from_mpo()
    # ED.build_full_H_from_bonds()  # whatever you prefer
    print("start diagonalization")
    ED.full_diagonalization()  # the expensive part for large L
    E0_ED, psi_ED = ED.groundstate()  # return the ground state
    print("psi_ED =", psi_ED)

    print("run DMRG")
    dmrg.run(psi_DMRG, M, {'verbose': 0})  # modifies psi_DMRG in place!
    # first way to compare ED with DMRG: convert MPS to ED vector
    psi_DMRG_full = ED.mps_to_full(psi_DMRG)
    print("psi_DMRG_full =", psi_DMRG_full)
    ov = npc.inner(psi_ED, psi_DMRG_full, axes='range', do_conj=True)
    print("<psi_ED|psi_DMRG_full> =", ov)
    assert (abs(abs(ov) - 1.) < 1.e-13)

    # second way: convert ED vector to MPS
    psi_ED_mps = ED.full_to_mps(psi_ED)
    ov2 = psi_ED_mps.overlap(psi_DMRG)
    print("<psi_ED_mps|psi_DMRG> =", ov2)
    assert (abs(abs(ov2) - 1.) < 1.e-13)
    assert (abs(ov - ov2) < 1.e-13)
    # -> advantage: expectation_value etc. of MPS are available!
    print("<Sz> =", psi_ED_mps.expectation_value('Sz'))


if __name__ == "__main__":
    example_exact_diagonalization(10, 1.)
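The comment "this selects a charge sector!" refers to the fact that the total Sz of the initial product state is conserved, so both ED (with `charge_sector`) and DMRG stay in that sector. A small NumPy check, independent of TeNPy, that the Neel product state is an eigenstate of total Sz with eigenvalue 0:

```python
import numpy as np

L = 4
sz = 0.5 * np.array([[1., 0.], [0., -1.]])   # spin-1/2 Sz
up, down = np.array([1., 0.]), np.array([0., 1.])

# build the Neel product state |up, down, up, down> as a full vector
state = np.array([1.])
for i in range(L):
    state = np.kron(state, up if i % 2 == 0 else down)

# total Sz operator as a sum of single-site embeddings
Sz_tot = np.zeros((2**L, 2**L))
for i in range(L):
    term = np.array([[1.]])
    for j in range(L):
        term = np.kron(term, sz if j == i else np.eye(2))
    Sz_tot += term

# the Neel state has total Sz = 0; a conserving algorithm never leaves this sector
assert np.allclose(Sz_tot @ state, 0. * state)
```

Restricting to a single charge sector is exactly what makes `ExactDiag(M, charge_sector=...)` much cheaper than diagonalizing the full Hilbert space.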
Advanced examples

The following “advanced examples” are included in the TeNPy repository in the folder examples/advanced; we include them here in the documentation for reference. These “advanced examples” go beyond the basic usage of the algorithms, but can give you an idea for certain tasks. It’s a somewhat random collection, feel free to suggest further examples.

Advanced example xxz_corr_length.py
"""Calculate the correleation legnth of the transferse field Ising model for various h_z.

This example uses DMRG to find the ground state of the transverse field Ising model when tuning
through the phase transition by changing the field `hz`. It uses
:meth:`~tenpy.networks.mps.MPS.correlation_length` to extract the correlation length of the ground
state, and plots it vs. hz in the end.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import numpy as np

from tenpy.models.spins import SpinChain
from tenpy.networks.mps import MPS
from tenpy.algorithms import dmrg
import matplotlib.pyplot as plt


def run(Jzs):
    L = 2
    model_params = dict(L=L, Jx=1., Jy=1., Jz=Jzs[0], bc_MPS='infinite', conserve='Sz', verbose=0)
    chi = 300
    dmrg_params = {
        'trunc_params': {
            'chi_max': chi,
            'svd_min': 1.e-10,
            'trunc_cut': None
        },
        'update_env': 20,
        'start_env': 20,
        'max_E_err': 0.0001,
        'max_S_err': 0.0001,
        'verbose': 1,
        'mixer': False
    }

    M = SpinChain(model_params)
    psi = MPS.from_product_state(M.lat.mps_sites(), (["up", "down"] * L)[:L], M.lat.bc_MPS)

    engine = dmrg.TwoSiteDMRGEngine(psi, M, dmrg_params)
    np.set_printoptions(linewidth=120)
    corr_length = []
    for Jz in Jzs:
        print("-" * 80)
        print("Jz =", Jz)
        print("-" * 80)
        model_params['Jz'] = Jz
        M = SpinChain(model_params)
        engine.init_env(model=M)  # (re)initialize DMRG environment with new model
        # this uses the result from the previous DMRG as first initial guess
        engine.run()
        # psi is modified by engine.run() and now represents the ground state for the current `Jz`.
        corr_length.append(psi.correlation_length(tol_ev0=1.e-3))
        print("corr. length", corr_length[-1])
        print("<Sz>", psi.expectation_value('Sz'))
        dmrg_params['start_env'] = 0  # (some of) the parameters are read out again
    corr_length = np.array(corr_length)
    results = {
        'model_params': model_params,
        'dmrg_params': dmrg_params,
        'Jzs': Jzs,
        'corr_length': corr_length,
        'eval_transfermatrix': np.exp(-1. / corr_length)
    }
    return results


def plot(results, filename):
    corr_length = results['corr_length']
    Jzs = results['Jzs']
    plt.plot(Jzs, np.exp(-1. / corr_length))
    plt.xlabel(r'$J_z/J_x$')
    plt.ylabel(r'$t = \exp(-\frac{1}{\xi})$')
    plt.savefig(filename)
    print("saved to " + filename)


if __name__ == "__main__":
    filename = 'xxz_corrlength.pkl'
    import pickle
    import os.path
    if not os.path.exists(filename):
        results = run(list(np.arange(4.0, 1.5, -0.25)) + list(np.arange(1.5, 0.8, -0.05)))
        with open(filename, 'wb') as f:
            pickle.dump(results, f)
    else:
        print("just load the data")
        with open(filename, 'rb') as f:
            results = pickle.load(f)
    plot(results, filename[:-4] + '.pdf')
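The plotted quantity t = exp(-1/xi) is the magnitude of the subleading MPS transfer-matrix eigenvalue relative to the leading one. This relation can be sketched directly with NumPy for a random (generic) MPS tensor; note that :meth:`~tenpy.networks.mps.MPS.correlation_length` is more careful, handling charge sectors and degenerate leading eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(42)
chi, d = 8, 2   # bond dimension and physical dimension
A = rng.normal(size=(d, chi, chi)) + 1j * rng.normal(size=(d, chi, chi))

# transfer matrix E = sum_p A^p (x) conj(A^p), acting on the chi*chi bond space
E = sum(np.kron(A[p], A[p].conj()) for p in range(d))
lam = np.sort(np.abs(np.linalg.eigvals(E)))[::-1]

# correlation length from the ratio of the two largest eigenvalue magnitudes:
# connected correlations decay as (lam[1]/lam[0])^r = exp(-r/xi)
xi = -1. / np.log(lam[1] / lam[0])
assert xi > 0.
print("correlation length xi =", xi)
```

For a generic random tensor the leading eigenvalue is non-degenerate, so xi is finite; a diverging xi (eigenvalue ratio approaching 1) signals criticality, which is what the Jz sweep above probes.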
Advanced example central_charge_ising.py
"""Example to extract the central charge from the entranglement scaling.

This example code evaluate the central charge of the transverse field Ising model using IDMRG.
The expected value for the central charge c = 1/2. The code always recycle the environment from
the previous simulation, which can be seen at the "age".
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import numpy as np
import tenpy
import time
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import dmrg


def example_DMRG_tf_ising_infinite_S_xi_scaling(g):
    model_params = dict(L=2, J=1., g=g, bc_MPS='infinite', conserve='best', verbose=0)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    dmrg_params = {
        'start_env': 10,
        'mixer': False,
        #  'mixer_params': {'amplitude': 1.e-3, 'decay': 5., 'disable_after': 50},
        'trunc_params': {
            'chi_max': 5,
            'svd_min': 1.e-10
        },
        'max_E_err': 1.e-9,
        'max_S_err': 1.e-6,
        'update_env': 0,
        'verbose': 0
    }

    chi_list = np.arange(7, 31, 2)
    s_list = []
    xi_list = []
    eng = dmrg.TwoSiteDMRGEngine(psi, M, dmrg_params)

    for chi in chi_list:

        t0 = time.time()
        eng.reset_stats()  # necessary e.g. if you fix the number of sweeps;
        # without this, the simulation stops after the initial number of sweeps!
        eng.trunc_params['chi_max'] = chi
        ##   DMRG Calculation    ##
        print("Start IDMRG CALCULATION")
        eng.run()
        eng.options['mixer'] = None
        psi.canonical_form()

        ##   Calculating bond entropy and correlation length  ##
        s_list.append(np.mean(psi.entanglement_entropy()))
        xi_list.append(psi.correlation_length())

        print(chi,
              time.time() - t0,
              np.mean(psi.expectation_value(M.H_bond)),
              s_list[-1],
              xi_list[-1],
              flush=True)
        tenpy.tools.optimization.optimize(3)  # quite some speedup for small chi

        print("SETTING NEW BOND DIMENSION")

    return s_list, xi_list


def fit_plot_central_charge(s_list, xi_list, filename):
    """Plot routine in order to determine the cental charge."""
    import matplotlib.pyplot as plt
    from scipy.optimize import curve_fit

    def fitFunc(Xi, c, a):
        return (c / 6) * np.log(Xi) + a

    Xi = np.array(xi_list)
    S = np.array(s_list)
    LXi = np.log(Xi)  # Logarithm of the correlation length xi

    fitParams, fitCovariances = curve_fit(fitFunc, Xi, S)

    # Plot fitting parameter and covariances
    print('c =', fitParams[0], 'a =', fitParams[1])
    print('Covariance Matrix', fitCovariances)

    # plot the data as blue circles
    plt.errorbar(LXi,
                 S,
                 fmt='o',
                 c='blue',
                 ms=5.5,
                 markerfacecolor='white',
                 markeredgecolor='blue',
                 markeredgewidth=1.4)
    # plot the fitted line
    plt.plot(LXi,
             fitFunc(Xi, fitParams[0], fitParams[1]),
             linewidth=1.5,
             c='black',
             label='fit c={c:.2f}'.format(c=fitParams[0]))

    plt.xlabel(r'$\log{\,}\xi_{\chi}$', fontsize=16)
    plt.ylabel(r'$S$', fontsize=16)
    plt.legend(loc='lower right', borderaxespad=0., fancybox=True, shadow=True, fontsize=16)
    plt.savefig(filename)


if __name__ == "__main__":
    s_list, xi_list = example_DMRG_tf_ising_infinite_S_xi_scaling(g=1)
    fit_plot_central_charge(s_list, xi_list, "central_charge_ising.pdf")
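The fit S = (c/6) log(xi) + a used in `fit_plot_central_charge` can be sanity-checked on synthetic data with a known central charge; a small sketch with assumed values c = 0.5 and a = 0.3 (the noise level is made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def fitFunc(xi, c, a):
    return (c / 6.) * np.log(xi) + a

# synthetic entropy data following S = (c/6) log(xi) + a with c = 0.5
c_true, a_true = 0.5, 0.3
xi = np.array([5., 10., 20., 40., 80., 160.])
rng = np.random.default_rng(1)
S = fitFunc(xi, c_true, a_true) + rng.normal(scale=1e-3, size=xi.size)

# the model is linear in (c, a), so curve_fit recovers them robustly
(c_fit, a_fit), cov = curve_fit(fitFunc, xi, S)
assert abs(c_fit - c_true) < 0.05
assert abs(a_fit - a_true) < 0.05
```

In the actual example above, each (xi, S) pair comes from one iDMRG run at fixed chi, and the fitted c should approach 1/2 for the critical Ising chain.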
Advanced example mpo_exponential_decay.py
"""Demonstration of the mpo.MPO.from_grids method.

We construct an MPO model for a spin-1/2 Heisenberg chain with an infinite number of
two-site interactions, whose strength decays exponentially with the distance between the sites.

Because of the infinite number of couplings it is not possible to construct the MPO from a
coupling model. However, the tensors that form the MPO have a surprisingly simple
form (see the grid below).

We run the iDMRG algorithm to find the ground state and energy density of the
system in the thermodynamic limit.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import numpy as np
import tenpy.linalg.np_conserved as npc

from tenpy.networks.mpo import MPO
from tenpy.networks.mps import MPS
from tenpy.networks.site import SpinHalfSite
from tenpy.models.model import MPOModel
from tenpy.models.lattice import Chain
from tenpy.algorithms import dmrg
from tenpy.tools.params import asConfig


class ExponentiallyDecayingHeisenberg(MPOModel):
    r"""Spin-1/2 Heisenberg Chain with exponentially decaying interactions.

    The Hamiltonian reads:

    .. math ::

        H = \sum_i \sum_{j>i} \exp(-\frac{|j-i-1|}{\mathtt{xi}}) (
                  \mathtt{Jxx}/2 (S^{+}_i S^{-}_j + S^{-}_i S^{+}_j)
                + \mathtt{Jz} S^z_i S^z_j                        ) \\
            - \sum_i \mathtt{hz} S^z_i

    All parameters are collected in a single dictionary `model_params`.

    Parameters
    ----------
    L : int
        Length of the chain.
    Jxx, Jz, hz, xi: float
        Coupling parameters as defined for the Hamiltonian above.
    bc_MPS : {'finite' | 'infinite'}
        MPS boundary conditions.
    conserve : 'Sz' | 'parity' | None
        What should be conserved. See :class:`~tenpy.networks.site.SpinHalfSite`.
    """
    def __init__(self, model_params):
        # model parameters
        model_params = asConfig(model_params, "ExponentiallyDecayingHeisenberg")
        L = model_params.get('L', 2)
        xi = model_params.get('xi', 0.5)
        Jxx = model_params.get('Jxx', 1.)
        Jz = model_params.get('Jz', 1.5)
        hz = model_params.get('hz', 0.)
        conserve = model_params.get('conserve', 'Sz')
        if xi == 0.:
            g = 0.
        elif xi == np.inf:
            g = 1.
        else:
            g = np.exp(-1 / (xi))

        # Define the sites and the lattice, which in this case is a simple uniform chain
        # of spin 1/2 sites
        site = SpinHalfSite(conserve=conserve)
        lat = Chain(L, site, bc_MPS='infinite', bc='periodic')

        # The operators that appear in the Hamiltonian. Standard spin operators are
        # already defined for the spin 1/2 site, but it is also possible to add new
        # operators using the add_op method
        Sz, Sp, Sm, Id = site.Sz, site.Sp, site.Sm, site.Id

        # yapf:disable
        # The grid (list of lists) that defines the MPO. It is possible to define the
        # operators in the grid in the following ways:
        # 1) NPC arrays, defined above:
        grid = [[Id,   Sp,   Sm,   Sz,   -hz*Sz    ],
                [None, g*Id, None, None, 0.5*Jxx*Sm],
                [None, None, g*Id, None, 0.5*Jxx*Sp],
                [None, None, None, g*Id, Jz*Sz     ],
                [None, None, None, None, Id        ]]
        # 2) In the form [("OpName", strength)], where "OpName" is the name of the
        # operator (e.g. "Sm" for Sm) and "strength" is a number that multiplies it.
        grid = [[[("Id", 1)], [("Sp",1)], [("Sm",1)], [("Sz",1)], [("Sz", -hz)]    ],
                [None       , [("Id",g)], None      , None      , [("Sm", 0.5*Jxx)]],
                [None       , None      , [("Id",g)], None      , [("Sp", 0.5*Jxx)]],
                [None       , None      , None      , [("Id",g)], [("Sz",Jz)]      ],
                [None       , None      , None      , None      , [("Id",1)]       ]]
        # 3) It is also possible to write a single "OpName", equivalent to
        # [("OpName", 1)].
        grid = [["Id"       , "Sp"      , "Sm"      , "Sz"      , [("Sz", -hz)]    ],
                [None       , [("Id",g)], None      , None      , [("Sm", 0.5*Jxx)]],
                [None       , None      , [("Id",g)], None      , [("Sp", 0.5*Jxx)]],
                [None       , None      , None      , [("Id",g)], [("Sz",Jz)]      ],
                [None       , None      , None      , None      , "Id"             ]]
        # yapf:enable
        grids = [grid] * L

        # Generate the MPO from the grid. Note that it is not necessary to specify
        # the physical legs and their charges, since the from_grids method can extract
        # this information from the position of the operators inside the grid.
        H = MPO.from_grids(lat.mps_sites(), grids, bc='infinite', IdL=0, IdR=-1)
        MPOModel.__init__(self, lat, H)


def example_run_dmrg():
    """Use iDMRG to extract information about the ground state of the system."""
    model_params = dict(L=2, Jxx=1, Jz=1.5, xi=0.8, verbose=1)
    model = ExponentiallyDecayingHeisenberg(model_params)
    psi = MPS.from_product_state(model.lat.mps_sites(), ["up", "down"], bc='infinite')
    dmrg_params = {
        'mixer': True,
        'chi_list': {
            0: 100
        },
        'trunc_params': {
            'svd_min': 1.e-10
        },
        'verbose': 1
    }
    results = dmrg.run(psi, model, dmrg_params)
    print("Energy per site: ", results['E'])
    print("<Sz>: ", psi.expectation_value('Sz'))


if __name__ == "__main__":
    example_run_dmrg()

Charge conservation with np_conserved

The basic idea is quickly summarized: By inspecting the Hamiltonian, you can identify symmetries, which correspond to conserved quantities, called charges. These charges divide the tensors into different sectors. This can be used to infer for example a block-diagonal structure of certain matrices, which in turn speeds up SVD or diagonalization a lot. Even for more general (non-square-matrix) tensors, charge conservation imposes restrictions which blocks of a tensor can be non-zero. Only those blocks need to be saved, which ultimately (= for large enough arrays) leads to a speedup of many routines, e.g., tensordot.

This introduction covers our implementation of charges; explaining the mathematical details of the underlying symmetry is beyond its scope. We refer you to the corresponding chapter in our [TeNPyNotes] for a more general introduction of the idea (also stating the “charge rule” introduced below). Ref. [Singh2009] explains why it works from a mathematical point of view; [Singh2010] focuses on a \(U(1)\) symmetry and might be easier to read.

Notations

Let’s fix the notation of certain terms for this introduction and the doc-strings in np_conserved. This might be helpful if you know the basics from a different context. If you’re new to the subject, keep reading even if you don’t understand each detail, and come back to this section when you encounter the corresponding terms again.

An Array is a multi-dimensional array representing a tensor with the entries:

\[T_{a_0, a_1, ... a_{rank-1}} \quad \text{ with } \quad a_i \in \lbrace 0, ..., n_i-1 \rbrace\]

Each leg \(a_i\) corresponds to a vector space of dimension n_i.

An index of a leg is a particular value \(a_i \in \lbrace 0, ... ,n_i-1\rbrace\).

The rank is the number of legs, the shape is \((n_0, ..., n_{rank-1})\).

We restrict ourselves to abelian charges with entries in \(\mathbb{Z}\) or in \(\mathbb{Z}_m\). The nature of a charge is specified by \(m\); we set \(m=1\) for charges corresponding to \(\mathbb{Z}\). The number of charges is referred to as qnumber for short, and the collection of \(m\) for each charge is called qmod. The qnumber, qmod and possibly descriptive names of the charges are saved in an instance of ChargeInfo.

To each index of each leg, a value of the charge(s) is associated. A charge block is a contiguous slice of indices corresponding to the same charge(s) of the leg. A qindex is an index in the list of charge blocks for a certain leg. A charge sector for given charge(s) is the set of all qindices of that charge(s). A leg is blocked if all charge sectors map one-to-one to qindices. Finally, a leg is sorted if the charges are sorted lexicographically. Note that a leg which is both sorted and bunched (defined below) is always blocked. We can also speak of the complete array being blocked by charges or legcharge-sorted, which means that all of its legs are blocked or sorted, respectively. The charge data for a single leg is collected in the class LegCharge. A LegCharge also has a flag qconj, which tells whether the charges point inward (+1) or outward (-1). What that means is explained later in Which entries of the npc Array can be non-zero?.

For completeness, let us also summarize the internal structure of an Array here: the array saves only non-zero blocks, collected as a list of np.array in self._data. The qindices necessary to map these blocks to the original leg indices are collected in self._qdata. An array is said to be qdata-sorted if its self._qdata is lexicographically sorted. More details on this follow later. However, note that you usually shouldn’t access _qdata and _data directly - this is only necessary from within tensordot, svd, etc. Also, an array has a total charge, defining which entries can be non-zero - details in Which entries of the npc Array can be non-zero?.

Finally, a leg pipe (implemented in LegPipe) is used to formally combine multiple legs into one leg. Again, more details follow later.

Physical Example

For concreteness, you can think of the Hamiltonian \(H = \sum_{<i,j>} \left( -t (c^\dagger_i c_j + H.c.) + U n_i n_j \right)\) with \(n_i = c^\dagger_i c_i\). This Hamiltonian has the global \(U(1)\) gauge symmetry \(c_i \rightarrow c_i e^{i\phi}\). The corresponding conserved charge is the total number of particles \(N = \sum_i n_i\). You would then introduce one charge with \(m=1\).

Note that the total charge is a sum of local terms, living on single sites. Thus, you can infer the charge of a single physical site: it’s just the value \(q_i = n_i \in \mathbb{N}\) for each of the states.

Note that you can only assign integer charges. Consider for example the spin 1/2 Heisenberg chain. Here, you can naturally identify the magnetization \(S^z = \sum_i S^z_i\) as the conserved quantity, with values \(S^z_i = \pm \frac{1}{2}\). Obviously, if \(S^z\) is conserved, then so is \(2 S^z\), so you can use the charges \(q_i = 2 S^z_i \in \lbrace-1, +1 \rbrace\) for the down and up states, respectively. Alternatively, you can also use a shift and define \(q_i = S^z_i + \frac{1}{2} \in \lbrace 0, 1 \rbrace\).

As another example, consider BCS-like terms \(\sum_k (c^\dagger_k c^\dagger_{-k} + H.c.)\). These terms break the total particle number conservation, but they preserve the total parity, i.e., \(N \mod 2\) is conserved. Thus, you would introduce a charge with \(m = 2\) in this case.

In the above examples, we had only a single charge conserved at a time, but you might be lucky and have multiple conserved quantities, e.g. if you have two chains coupled only by interactions. TeNPy is designed to handle the general case of multiple charges. When giving examples, we will restrict to one charge, but everything generalizes to multiple charges.

The different formats for LegCharge

As mentioned above, we assign charges to each index of each leg of a tensor. This can be done in three formats: as qflat, as qind, and as qdict. Let me explain them with examples, for simplicity considering only a single charge (the innermost list has one entry for each charge).

qflat form: simply a list of charges for each index.

An example:

qflat = [[-2], [-1], [-1], [0], [0], [0], [0], [3], [3]]

This tells you that the leg has size 9, and that the charges are [-2], [-1], [-1], ..., [3] for the indices 0, 1, 2, 3,..., 8. You can identify four charge blocks slice(0, 1), slice(1, 3), slice(3, 7), slice(7, 9) in this example, which have charges [-2], [-1], [0], [3]. In other words, the indices 1, 2 (which are in slice(1, 3)) have the same charge value [-1]. A qindex would just enumerate these blocks as 0, 1, 2, 3.

qind form: a 1D array slices and a 2D array charges.

This is a more compact version than the qflat form: the slices give a partition of the indices and the charges give the charge values. The same example as above would simply be:

slices = [0, 1, 3, 7, 9]
charges = [[-2], [-1], [0], [3]]

Note that slices includes 0 as the first entry and the number of indices (here 9) as the last entry. Thus, slices has length block_number + 1, where block_number is the number of charge blocks in the leg, i.e., a qindex runs from 0 to block_number - 1. On the other hand, the 2D array charges has shape (block_number, qnumber), where qnumber is the number of charges.

In that way, the qind form maps a qindex, say qi, to the indices slice(slices[qi], slices[qi+1]) and the charge(s) charges[qi].
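The relation between the qflat and qind forms can be sketched in a few lines of plain Python (a pedagogical sketch, not the TeNPy implementation; LegCharge.from_qflat() performs this conversion internally):

```python
def qflat_to_qind(qflat):
    """Find the maximal contiguous charge blocks of a leg given in qflat form."""
    slices = [0]            # a new block starts whenever the charge value changes
    charges = [qflat[0]]
    for i, q in enumerate(qflat[1:], start=1):
        if q != charges[-1]:
            slices.append(i)
            charges.append(q)
    slices.append(len(qflat))   # the last entry is the size of the leg
    return slices, charges

qflat = [[-2], [-1], [-1], [0], [0], [0], [0], [3], [3]]
slices, charges = qflat_to_qind(qflat)
print(slices)   # [0, 1, 3, 7, 9]
print(charges)  # [[-2], [-1], [0], [3]]
```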

qdict form: a dictionary in the other direction than qind, taking charge tuples to slices.

Again for the same example:

{(-2,): slice(0, 1),
 (-1,): slice(1, 3),
 (0,) : slice(3, 7),
 (3,) : slice(7, 9)}

Since the keys of a dictionary are unique, this form is only possible if the leg is completely blocked.

The LegCharge saves the charge data of a leg internally in qind form, directly in the attributes slices and charges. However, it also provides convenient methods for conversion from and to the qflat and qdict forms.
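The conversions themselves are easy to sketch in plain Python (again a pedagogical sketch, not the TeNPy implementation; the actual methods are LegCharge.to_qflat() and LegCharge.to_qdict()):

```python
# the example leg from above, in qind form
slices = [0, 1, 3, 7, 9]
charges = [[-2], [-1], [0], [3]]

# qind -> qflat: repeat the charge of each block for every index in the block
qflat = [charges[qi] for qi in range(len(charges))
         for _ in range(slices[qi + 1] - slices[qi])]

# qind -> qdict: map the charge tuple of each block to its slice
qdict = {tuple(charges[qi]): slice(slices[qi], slices[qi + 1])
         for qi in range(len(charges))}

print(qflat)        # [[-2], [-1], [-1], [0], [0], [0], [0], [3], [3]]
print(qdict[(0,)])  # slice(3, 7)
```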

The above example was nice since all charges were sorted and the charge blocks were ‘as large as possible’. This is however not required.

The following example is also a valid qind form:

slices = [0, 1, 3, 5, 7, 9]
charges = [[-2], [-1], [0], [0], [3]]

This leads to the same qflat form as the above example, thus representing the same charges on the leg indices. However, regarding our Arrays, this is quite different, since it divides the leg into 5 charge blocks (instead of 4 before). We say the latter example is not bunched, while the former one is bunched.

To make the different notions of sorted and bunched clearer, consider the following (valid) examples:

charges                        bunched  sorted  blocked
[[-2], [-1], [0], [1], [3]]    True     True    True
[[-2], [-1], [0], [0], [3]]    False    True    False
[[-2], [0], [-1], [1], [3]]    True     False   True
[[-2], [0], [-1], [0], [3]]    True     False   False

If a leg is bunched and sorted, it is automatically blocked (but not vice versa). See also below for further comments on that.
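These notions can be made precise with a small pure-Python sketch (not the TeNPy API itself; LegCharge provides the analogous methods is_bunched(), is_sorted() and is_blocked()), checked against the rows of the table above:

```python
def is_bunched(charges):
    # no two adjacent blocks carry the same charge
    return all(charges[i] != charges[i + 1] for i in range(len(charges) - 1))

def is_sorted(charges):
    # charges are lexicographically non-decreasing
    return charges == sorted(charges)

def is_blocked(charges):
    # every charge value appears in exactly one block
    return len(set(map(tuple, charges))) == len(charges)

for charges in ([[-2], [-1], [0], [1], [3]],
                [[-2], [-1], [0], [0], [3]],
                [[-2], [0], [-1], [1], [3]],
                [[-2], [0], [-1], [0], [3]]):
    print(is_bunched(charges), is_sorted(charges), is_blocked(charges))
```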

Which entries of the npc Array can be non-zero?

The reason for the speedup with np_conserved lies in the fact that it saves only the blocks ‘compatible’ with the charges. But how is this ‘compatible’ defined?

Assume you have a tensor, call it \(T\), and the LegCharge for all of its legs, say \(a, b, c, ...\).

Remember that the LegCharge associates to each index of the leg a charge value (for each of the charges, if qnumber > 1). Let a.to_qflat()[ia] denote the charge(s) of index ia for leg a, and similarly for the other legs.

In addition, the LegCharge has a flag qconj. This flag qconj is only a sign, saved as +1 or -1, specifying whether the charges point ‘inward’ (+1, default) or ‘outward’ (-1) of the tensor.

Then, the total charge of an entry T[ia, ib, ic, ...] of the tensor is defined as:

qtotal[ia, ib, ic, ...] = a.to_qflat()[ia] * a.qconj + b.to_qflat()[ib] * b.qconj + c.to_qflat()[ic] * c.qconj + ...  modulo qmod

The rule for which entries of an Array can be non-zero (i.e., are ‘compatible’ with the charges) is then very simple:

Rule for non-zero entries

An entry ia, ib, ic, ... of an Array can only be non-zero if qtotal[ia, ib, ic, ...] matches the unique qtotal attribute of the array.

In other words, there is a single total charge, the .qtotal attribute of an Array. All indices ia, ib, ic, ... for which the above-defined qtotal[ia, ib, ic, ...] matches this total charge are said to be compatible with the charges and can be non-zero. All other indices are incompatible with the charges and must be zero.

The case of multiple charges, qnumber > 1, is a straightforward generalization: an entry can only be non-zero if it is compatible with each of the defined charges.
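For a single \(\mathbb{Z}\) charge (where the “modulo qmod” is trivial), the rule can be illustrated with a small pure-numpy sketch; the leg charges, qconj values and qtotal below are made up for illustration:

```python
import numpy as np

# charges of a rank-2 tensor T with legs a and b, in qflat form
qflat_a = np.array([1, -1])   # charges of leg a
qflat_b = np.array([1, -1])   # charges of leg b
qconj_a, qconj_b = +1, -1     # leg a points inward, leg b outward
qtotal = 0                    # total charge of the tensor

def compatible(ia, ib):
    """May T[ia, ib] be non-zero according to the charge rule?"""
    return qflat_a[ia] * qconj_a + qflat_b[ib] * qconj_b == qtotal

print([(ia, ib) for ia in range(2) for ib in range(2) if compatible(ia, ib)])
# -> [(0, 0), (1, 1)]: only the 'diagonal' entries may be non-zero
```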

The pesky qconj - contraction as an example

Why did we introduce the qconj flag? Remember, it’s just a sign telling whether the charge points inward or outward. So what’s the reasoning?

The short answer is, that LegCharges actually live on bonds (i.e., legs which are to be contracted) rather than individual tensors. Thus, it is convenient to share the LegCharges between different legs and even tensors, and just adjust the sign of the charges with qconj.

As an example, consider the contraction of two tensors, \(C_{ia,ic} = \sum_{ib} A_{ia,ib} B_{ib,ic}\). For simplicity, say that the total charge of all three tensors is zero. What are the implications of the above rule for non-zero entries? Or rather, how can we ensure that C complies with the above rule? An entry C[ia,ic] will only be non-zero if there is an ib such that both A[ia,ib] and B[ib,ic] are non-zero, i.e., both of the following equations are fulfilled:

A.qtotal == A.legs[0].to_qflat()[ia] * A.legs[0].qconj + A.legs[1].to_qflat()[ib] * A.legs[1].qconj  modulo qmod
B.qtotal == B.legs[0].to_qflat()[ib] * B.legs[0].qconj + B.legs[1].to_qflat()[ic] * B.legs[1].qconj  modulo qmod

(A.legs[0] is the LegCharge saving the charges of the first leg (with index ia) of A.)

For the uncontracted legs, we just keep the charges as they are:

C.legs = [A.legs[0], B.legs[1]]

It is then straightforward to check that the rule is fulfilled for \(C\), if the following condition is met:

A.qtotal + B.qtotal - C.qtotal == A.legs[1].to_qflat()[ib] * A.legs[1].qconj + B.legs[0].to_qflat()[ib] * B.legs[0].qconj  modulo qmod

The easiest way to meet this condition is (1) to require that A.legs[1] and B.legs[0] share the same charges, but have opposite qconj, and (2) to define C.qtotal = A.qtotal + B.qtotal. This justifies the introduction of qconj: when you define the tensors, you have to define the LegCharge for the bond b only once, say for A.legs[1]. For B.legs[0] you simply use A.legs[1].conj(), which creates a copy of the LegCharge with shared slices and charges, but opposite qconj. As a more impressive example, all ‘physical’ legs of an MPS can usually share the same LegCharge (up to different qconj if the local Hilbert space is the same). This leads to the following convention:

Convention

When an npc algorithm makes tensors which share a bond (either with the input tensors, as for tensordot, or amongst the output tensors, as for SVD), the algorithm is free, but not required, to use the same LegCharge for the tensors sharing the bond, without making a copy. Thus, if you want to modify a LegCharge, you must make a copy first (e.g., by using the methods of LegCharge for what you want to achieve).
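The contraction example of the previous paragraphs can be verified numerically with a pure-numpy sketch (not using np_conserved itself): fill A and B randomly on their charge-compatible entries only, contract, and check that C obeys the rule with C.qtotal = A.qtotal + B.qtotal. The charge values below are made up for illustration; a single \(\mathbb{Z}\) charge is used, so no “modulo qmod” is needed.

```python
import numpy as np

rng = np.random.default_rng(0)
qa = np.array([1, 2])      # charges of leg a (qconj +1 on A)
qb = np.array([1, 2, 1])   # charges of bond b, entering A with qconj -1
qc = np.array([0, 1])      # charges of leg c, entering B with qconj -1
A_qtotal, B_qtotal = 0, 1

# fill A and B randomly, then zero out all charge-incompatible entries
A = rng.normal(size=(2, 3))
A[qa[:, None] * (+1) + qb[None, :] * (-1) != A_qtotal] = 0.
B = rng.normal(size=(3, 2))
# on B, the shared bond has the same charges qb but opposite qconj (+1):
B[qb[:, None] * (+1) + qc[None, :] * (-1) != B_qtotal] = 0.

C = np.tensordot(A, B, axes=[1, 0])
C_qtotal = A_qtotal + B_qtotal
incompatible = qa[:, None] * (+1) + qc[None, :] * (-1) != C_qtotal
assert np.allclose(C[incompatible], 0.)   # rule holds automatically for C
```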

Assigning charges to non-physical legs

From the above physical examples, it should be clear how to assign charges to physical legs. But what about other legs, e.g., the virtual bonds of an MPS (or an MPO)?

The charges of these bonds must be derived using the ‘rule for non-zero entries’, as far as they are not arbitrary. As a concrete example, consider an MPS on just two spin 1/2 sites:

|        _____         _____
|   x->- | A | ->-y->- | B | ->-z
|        -----         -----
|          ^             ^
|          |p            |p

The two legs p are the physical legs and share the same charge, as they both describe the same local Hilbert space. For better distinction, let me label their indices by \(\uparrow=0\) and \(\downarrow=1\). As noted above, we can associate the charges 1 (\(p=\uparrow\)) and -1 (\(p=\downarrow\)), respectively, so we define:

chinfo = npc.ChargeInfo([1], ['2*Sz'])
p  = npc.LegCharge.from_qflat(chinfo, [1, -1], qconj=+1)

For the qconj signs, we stick to the convention used in our MPS code and indicated by the arrows in the above ‘picture’: physical legs are incoming (qconj=+1), and the virtual bonds point from left to right. This is achieved by using [p, x, y.conj()] as legs for A, and [p, y, z.conj()] for B, with the default qconj=+1 for all of p, x, y, z: y.conj() has the same charges as y, but opposite qconj=-1.

The legs x and z of an L=2 MPS are ‘dummy’ legs with just one index 0. The charge on one of them, as well as the total charge of both A and B, is arbitrary (i.e., a gauge freedom), so we make a simple choice: total charge 0 on both arrays, as well as for \(x=0\), x = npc.LegCharge.from_qflat(chinfo, [0], qconj=+1).

The charges on the bonds y and z then depend on the state the MPS represents. Here, we consider a singlet \(\psi = (|\uparrow \downarrow\rangle - |\downarrow \uparrow\rangle)/\sqrt{2}\) as a simple example. A possible MPS representation is given by:

A[up, :, :]   = [[1/2.**0.5, 0]]     B[up, :, :]   = [[0], [-1]]
A[down, :, :] = [[0, 1/2.**0.5]]     B[down, :, :] = [[1], [0]]

There are two non-zero entries in A, for the indices \((p, x, y) = (\uparrow, 0, 0)\) and \((\downarrow, 0, 1)\). For \((p, x, y) = (\uparrow, 0, 0)\), we want:

A.qtotal = 0 = p.to_qflat()[up] * p.qconj + x.to_qflat()[0] * x.qconj + y.conj().to_qflat()[0] * y.conj().qconj
             = 1                * (+1)    + 0               * (+1)    + y.conj().to_qflat()[0] * (-1)

This fixes the charge of y=0 to 1. A similar calculation for \((p, x, y) = (\downarrow, 0, 1)\) yields the charge -1 for y=1. We thus have all the charges of the leg y and can define y = npc.LegCharge.from_qflat(chinfo, [1, -1], qconj=+1).

Now take a look at the entries of B. For the non-zero entry \((p, y, z) = (\uparrow, 1, 0)\), we want:

B.qtotal = 0 = p.to_qflat()[up] * p.qconj + y.to_qflat()[1] * y.qconj + z.conj().to_qflat()[0] * z.conj().qconj
             = 1                * (+1)    + (-1)            * (+1)    + z.conj().to_qflat()[0] * (-1)

This implies the charge 0 for z = 0, thus z = npc.LegCharge.from_qflat(chinfo, [0], qconj=+1). Finally, note that the rule for \((p, y, z) = (\downarrow, 0, 0)\) is automatically fulfilled! This is an implication of the fact that the singlet has a well-defined value for \(S^z_a + S^z_b\). For other states without fixed magnetization (e.g., \(|\uparrow \uparrow\rangle + |\downarrow \downarrow\rangle\)) this would not be the case, and we could not use charge conservation.

As an exercise, you can calculate the charge of z in the case that A.qtotal=5, B.qtotal = -1 and charge 2 for x=0. The result is -2.
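The exercise can be checked by propagating the charge rule through the non-zero entries of A and B, here as a plain-Python calculation (using the qconj conventions fixed above, for a single Z charge):

```python
p = {'up': 1, 'down': -1}          # physical charges (qconj +1)
A_qtotal, B_qtotal, x0 = 5, -1, 2  # the values given in the exercise

# A's non-zero entries (p, x, y) = (up, 0, 0) and (down, 0, 1) fix y:
# A_qtotal = p + x0 - y[...]   (y enters A via y.conj(), i.e. qconj -1)
y0 = p['up'] + x0 - A_qtotal
y1 = p['down'] + x0 - A_qtotal

# B's non-zero entry (p, y, z) = (up, 1, 0) then fixes z:
# B_qtotal = p + y1 - z0       (z enters B via z.conj(), i.e. qconj -1)
z0 = p['up'] + y1 - B_qtotal

print(y0, y1, z0)   # -2 -4 -2
```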

Note

This section is meant to be a pedagogical introduction. In your program, you can use the functions detect_legcharge() (which does exactly what’s described above) or detect_qtotal() (if you know all LegCharges, but not qtotal).

Array creation

Making a new Array requires both the tensor entries (data) and the charge data.

The default initialization a = Array(...) creates an empty Array, where all entries are zero (equivalent to zeros()). (Non-zero) data can be provided either as a dense np.array to from_ndarray(), or by providing a numpy function such as np.random, np.ones etc. to from_func().

In both cases, the charge data is provided by one ChargeInfo, and a LegCharge instance for each of the legs.

Note

The charge data instances are not copied, in order to allow them to be shared between different Arrays. Consequently, you must make copies of the charge data if you manipulate it directly. (However, methods like sort() do that for you.)

Of course, a new Array can also be created using the charge data from existing Arrays, for example with zeros_like() or by creating a (deep or shallow) copy(). Further, the higher-level functions like tensordot() or svd() also return new Arrays.

Complete blocking of Charges

While the code was designed in such a way that each charge sector has a different charge, the code should still run correctly if multiple charge sectors (i.e., different qindices) correspond to the same charge. In this sense, Array can act like a sparse array class, selectively storing subblocks. Algorithms which need a full blocking should state that explicitly in their doc-strings. Some functions (like svd and eigh) require complete blocking internally, but if necessary they just work on a temporary copy returned by as_completely_blocked().

If you expect the tensor to be dense subject to charge constraints (as for MPS), it will be most efficient to fully block by charge, so that work is done on large chunks.

However, if you expect the tensor to be sparser than required by charge (as for an MPO), it may be convenient not to completely block, which forces smaller matrices to be stored, and hence many zeroes to be dropped. Nevertheless, the algorithms were not designed with this in mind, so it is not recommended in general. (If you want to use it, run a benchmark to check whether it is really faster!)

If you haven’t created the array yet, you can call sort() (with bunch=True) on each LegCharge which you want to block. This sorts by charges and thus induces a permutation of the indices, which is also returned as a 1D array perm. For consistency, you have to apply this permutation to your flat data as well.

Alternatively, you can simply call sort_legcharge() on an existing Array. It calls sort() internally on the specified legs and applies the necessary permutations directly to (a copy of) self. Keep in mind, though, that the axes are permuted afterwards.
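What sort() does for a single leg can be sketched with plain numpy: lexsort the charges and return the permutation, which must also be applied to the flat data along that axis. (A pedagogical sketch, not the TeNPy implementation.)

```python
import numpy as np

qflat = np.array([[0], [-2], [1], [-1]])   # unsorted charges of a leg
perm = np.lexsort(qflat.T)                 # right-most column is the dominant key
print(perm)             # [1 3 0 2]
print(qflat[perm].T)    # [[-2 -1  0  1]]

data = np.arange(4) * 10.   # flat data along this leg
data_sorted = data[perm]    # apply the same permutation to the data
```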

Internal Storage schema of npc Arrays

The actual data of the tensor is stored in _data. Rather than keeping a single np.array (which would have many zeros in it), we store only the non-zero sub blocks. So _data is a python list of np.array’s. The order in which they are stored in the list is not physically meaningful, and so not guaranteed (more on this later). So to figure out where the sub block sits in the tensor, we need the _qdata structure (on top of the LegCharges in legs).

Consider a rank 3 tensor T, with the first leg like:

legs[0].slices = np.array([0, 1, 4, ...])
legs[0].charges = np.array([[-2], [1], ...])

Each row of charges gives the charges for a charge block of the leg, with the actual indices of the total tensor determined by the slices. The qindex simply enumerates the charge blocks of a leg. Picking a qindex (and thus a charge block) from each leg, we have a subblock of the tensor.

For each (non-zero) subblock of the tensor, we put a (numpy) ndarray entry in the _data list. Since each subblock of the tensor is specified by rank qindices, we put a corresponding entry in _qdata, which is a 2D array of shape (#stored_blocks, rank). Each row corresponds to a non-zero subblock, and there are rank columns giving the corresponding qindex for each leg.

Example: for a rank 3 tensor we might have:

T._data = [t1, t2, t3, t4, ...]
T._qdata = np.array([[3, 2, 1],
                     [1, 1, 1],
                     [4, 2, 2],
                     [2, 1, 2],
                     ...       ])

The third subblock has an ndarray t3, and qindices [4 2 2] for the three legs.

  • To find the position of t3 in the actual tensor you can use get_slice():

    T.legs[0].get_slice(4), T.legs[1].get_slice(2), T.legs[2].get_slice(2)
    

    The function leg.get_slice(qi) simply returns slice(leg.slices[qi], leg.slices[qi+1])

  • To find the charges of t3, we can use get_charge():

    T.legs[0].get_charge(4), T.legs[1].get_charge(2), T.legs[2].get_charge(2)
    

    The function leg.get_charge(qi) simply returns leg.charges[qi]*leg.qconj.

Note

Outside of np_conserved, you should use the API to access the entries. If you really need to iterate over all blocks of an Array T, try for (block, blockslices, charges, qindices) in T: do_something().

The order in which the blocks are stored in _data/_qdata is arbitrary (although of course _data and _qdata must be in correspondence). However, for many purposes it is useful to sort them according to some convention, so the array includes a flag ._qdata_sorted. If sorted (with isort_qdata()), the _qdata example above becomes

_qdata = np.array([[1, 1, 1],
                   [3, 2, 1],
                   [2, 1, 2],
                   [4, 2, 2],
                   ...       ])

Note that np.lexsort chooses the right-most column to be the dominant key, a convention we follow throughout.

If _qdata_sorted == True, _qdata and _data are guaranteed to be lexsorted. If _qdata_sorted == False, there is no guarantee. If an algorithm modifies _qdata, it must set _qdata_sorted = False (unless it guarantees it is still sorted). The routine sort_qdata() brings the data to sorted form.
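The sorting convention can be illustrated with plain numpy (a sketch of what isort_qdata() does internally, not the TeNPy implementation): np.lexsort applied to the transposed _qdata makes the right-most column the dominant key, and the same permutation must reorder the _data list.

```python
import numpy as np

qdata = np.array([[3, 2, 1],
                  [1, 1, 1],
                  [4, 2, 2],
                  [2, 1, 2]])
data = ['t1', 't2', 't3', 't4']       # stand-ins for the block ndarrays

perm = np.lexsort(qdata.T)            # right-most column dominates
qdata_sorted = qdata[perm]
data_sorted = [data[p] for p in perm]  # keep _data in correspondence

print(qdata_sorted)   # rows: [1 1 1], [3 2 1], [2 1 2], [4 2 2]
print(data_sorted)    # ['t2', 't1', 't4', 't3']
```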

Indexing of an Array

Although it is usually not necessary to access single entries of an Array, you can of course do that. In the simplest case, this is something like A[0, 2, 1] for a rank-3 Array A. However, accessing single entries is quite slow and usually not recommended. For small Arrays, it may be convenient to convert them back to flat numpy arrays with to_ndarray().

On top of that very basic indexing, Array supports slicing and some kind of advanced indexing, which is however different from the one of numpy arrays (described here). Unlike numpy arrays, our Array class does not broadcast existing index arrays – this would be terribly slow. Also, np.newaxis is not supported, since inserting new axes requires additional information for the charges.

Instead, we allow just indexing of the legs independent of each other, of the form A[i0, i1, ...]. If all indices i0, i1, ... are integers, the single corresponding entry (of type dtype) is returned.

However, the individual ‘indices’ i0 for the individual legs can also be one of what is described in the following list. In that case, a new Array with less data (specified by the indices) is returned.

The ‘indices’ can be:

  • an int: fix the index of that axis, return array with one less dimension. See also take_slice().

  • a slice(None) or :: keep the complete axis

  • an Ellipsis or ...: shorthand for slice(None) for all missing axes, fixing the total number of indices

  • a 1D bool ndarray mask: apply a mask to that axis, see iproject().

  • a slice(start, stop, step) or start:stop:step: keep only the indices specified by the slice. This is also implemented with iproject.

  • a 1D int ndarray mask: keep only the indices specified by the array. This is also implemented with iproject.

For slices and 1D arrays, additional permutations may be performed with the help of permute().

If the number of indices is less than rank, the remaining axes remain free, so for a rank 4 Array A, A[i0, i1] == A[i0, i1, ...] == A[i0, i1, :, :].

Note that indexing always copies the data – even if the indices contain just slices, in which case numpy would return a view. However, assigning with A[:, [3, 5], 3] = B should work as you would expect.

Warning

Due to numpy’s advanced indexing, for 1D integer arrays a0 and a1 the following holds

A[a0, a1].to_ndarray() == A.to_ndarray()[np.ix_(a0, a1)] != A.to_ndarray()[a0, a1]

For a combination of slices and arrays, things get more complicated with numpy’s advanced indexing. In that case, a simple np.ix_(...) doesn’t help any more to emulate our version of indexing.
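The warning is easy to verify with a plain numpy array:

```python
import numpy as np

A = np.arange(12).reshape(3, 4)
a0, a1 = np.array([0, 2]), np.array([1, 3])

# leg-independent indexing (as npc does it) corresponds to np.ix_:
print(A[np.ix_(a0, a1)])   # 2x2 block: rows {0, 2} x columns {1, 3}
# numpy's paired advanced indexing gives something different:
print(A[a0, a1])           # 1D: just the two entries A[0, 1], A[2, 3]
```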

Introduction to combine_legs, split_legs and LegPipes

Often, it is necessary to “combine” multiple legs into one: for example, to perform an SVD, a tensor needs to be viewed as a matrix. For a flat array, this can be done with np.reshape, e.g., if A has shape (10, 3, 7) then B = np.reshape(A, (30, 7)) will result in a (view of the) array with one less dimension, but a “larger” first leg. By default (order='C'), this results in

B[i*3 + j , k] == A[i, j, k] for i in range(10) for j in range(3) for k in range(7)

While for a np.array a reshaping (10, 3, 7) -> (2, 21, 5) would also be allowed, it does not make sense physically. The only sensible “reshape” operations on an Array are

  1. to combine multiple legs into one leg pipe (LegPipe) with combine_legs(), or

  2. to split a pipe of previously combined legs with split_legs().

Each leg has a Hilbert space, and a representation of the symmetry on that Hilbert space. Combining legs corresponds to the tensor product operation, and for abelian groups, the corresponding “fusion” of the representation is the simple addition of charge.

Fusion is not a lossless process, so if we ever want to split the combined leg, we need some additional data to tell us how to reverse the tensor product. This data is saved in the class LegPipe, derived from the LegCharge and used as new leg. Details of the information contained in a LegPipe are given in the class doc string.

The rough usage idea is as follows:

  1. You can call combine_legs() without supplying any LegPipes, combine_legs will then make them for you.

    Nevertheless, if you plan to perform the combination over and over again on sets of legs you know to be identical [with same charges etc, up to an overall -1 in qconj on all incoming and outgoing Legs] you might make a LegPipe anyway to save on the overhead of computing it each time.

  2. Either way, the resulting Array will have a LegPipe as the LegCharge of the combined leg. Thus it – and all tensors inheriting the leg (e.g. the results of svd, tensordot, etc.) – will carry the information needed to split the LegPipe back into the original legs.

  3. Once you have performed the necessary operations, you can call split_legs(). This uses the information saved in the LegPipe to split the combined leg, recovering the original legs.

For a LegPipe, conj() changes qconj for the outgoing pipe and the incoming legs. If you need a LegPipe with the same incoming qconj, use outer_conj().
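
Assuming TeNPy is installed, the round trip described above can be sketched as follows, reusing the spin-1/2 charge setup that also appears in the full example code further below:

```python
import numpy as np
import tenpy.linalg.np_conserved as npc

chinfo = npc.ChargeInfo([1], ['2*Sz'])
p = npc.LegCharge.from_qflat(chinfo, [[1], [-1]])
Sz = npc.Array.from_ndarray([[0.5, 0.], [0., -0.5]], [p, p.conj()], labels=['p', 'p*'])
m = Sz.combine_legs(['p', 'p*'])  # rank-1 Array; its single leg is a LegPipe
back = m.split_legs()             # the LegPipe knows how to recover both legs
assert np.allclose(back.to_ndarray(), Sz.to_ndarray())
```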

Leg labeling

It’s convenient to name the legs of a tensor: for instance, we can name legs 0, 1, 2 to be 'a', 'b', 'c': \(T_{i_a,i_b,i_c}\). That way we don’t have to remember the ordering! Under tensordot, we can then call

U = npc.tensordot(S, T, axes = [ [...],  ['b'] ] )

without having to remember where exactly 'b' is. Obviously U should then inherit the name of its legs from the uncontracted legs of S and T. So here is how it works:

  • Labels can only be strings. The labels should not include the characters . or ?. Internally, the labels are stored as dict a.labels = {label: leg_position, ...}. Not all legs need a label.

  • To set the labels, call

    A.set_labels(['a', 'b', None, 'c', ... ])
    

    which will set up the labeling {'a': 0, 'b': 1, 'c': 3 ...}.

  • (Where implemented) the specification of axes can use either the labels or the index positions. For instance, the call tensordot(A, B, [['a', 2, 'c'], [...]]) will interpret 'a' and 'c' as labels (calling get_leg_indices() to find their positions using the dict) and 2 as ‘the 2nd leg’. That’s why we require labels to be strings!

  • Labels will be intelligently inherited through the various operations of np_conserved.
    • Under transpose, labels are permuted.

    • Under tensordot, labels are inherited from uncontracted legs. If there is a collision, both labels are dropped.

    • Under combine_legs, labels get concatenated with a . delimiter and surrounded by brackets. Example: let a.labels = {'a': 0, 'b': 1, 'c': 2}. Then if b = a.combine_legs([[0, 1], [2]]), it will have b.labels = {'(a.b)': 0, '(c)': 1}. If some sub-leg of a combined leg isn’t named, then a '?#' label is inserted (with # the leg index), e.g., 'a.?0.c'.

    • Under split_legs, the labels are split using the delimiters (and the '?#' are dropped).

    • Under conj, iconj: take 'a' -> 'a*', 'a*' -> 'a', and '(a.(b*.c))' -> '(a*.(b.c*))'

    • Under svd, the outer labels are inherited, and inner labels can be optionally passed.

    • Under pinv, the labels are transposed.
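
The conj rule for labels can be sketched as a small standalone helper (a hypothetical illustration, not TeNPy's actual implementation), toggling a trailing '*' on every leg name inside a possibly nested combined label:

```python
def conj_label(label):
    """Toggle a trailing '*' on every leg name inside a (possibly nested) label."""
    out, name = [], []
    for ch in label + '\0':  # '\0' is a sentinel to flush the last name
        if ch in '(.)\0':
            if name:
                s = ''.join(name)
                out.append(s[:-1] if s.endswith('*') else s + '*')
                name = []
            if ch != '\0':
                out.append(ch)
        else:
            name.append(ch)
    return ''.join(out)

# examples:
# conj_label('a')          -> 'a*'
# conj_label('(a.(b*.c))') -> '(a*.(b.c*))'
```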

See also
  • The module tenpy.linalg.np_conserved should contain all the API needed from the point of view of the algorithms. It contains the fundamental Array class and functions for creating and manipulating such arrays.

  • The module tenpy.linalg.charges contains implementations for the charge structure, for example the classes ChargeInfo, LegCharge, and LegPipe. As noted above, the ‘public’ API is imported to (and accessible from) np_conserved.

A full example code for spin-1/2

Below follows a full example demonstrating the creation and contraction of Arrays. (It’s the file a_np_conserved.py in the examples folder of the tenpy source.)

"""An example code to demonstrate the usage of :class:`~tenpy.linalg.np_conserved.Array`.

This example includes the following steps:
1) create Arrays for a Neel MPS
2) create an MPO representing the nearest-neighbour AFM Heisenberg Hamiltonian
3) define 'environments' left and right
4) contract MPS and MPO to calculate the energy
5) extract two-site hamiltonian ``H2`` from the MPO
6) calculate ``exp(-1.j*dt*H2)`` by diagonalization of H2
7) apply ``exp(H2)`` to two sites of the MPS and truncate with svd

Note that this example uses only np_conserved, but no other modules.
Compare it to the example `b_mps.py`,
which does the same steps using a few predefined classes like MPS and MPO.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import tenpy.linalg.np_conserved as npc
import numpy as np

# model parameters
Jxx, Jz = 1., 1.
L = 20
dt = 0.1
cutoff = 1.e-10
print("Jxx={Jxx}, Jz={Jz}, L={L:d}".format(Jxx=Jxx, Jz=Jz, L=L))

print("1) create Arrays for a Neel MPS")

#  vL ->--B-->- vR
#         |
#         ^
#         |
#         p

# create a ChargeInfo to specify the nature of the charge
chinfo = npc.ChargeInfo([1], ['2*Sz'])  # the second argument is just a descriptive name

# create LegCharges on physical leg and even/odd bonds
p_leg = npc.LegCharge.from_qflat(chinfo, [[1], [-1]])  # charges for up, down
v_leg_even = npc.LegCharge.from_qflat(chinfo, [[0]])
v_leg_odd = npc.LegCharge.from_qflat(chinfo, [[1]])

B_even = npc.zeros([v_leg_even, v_leg_odd.conj(), p_leg],
                   labels=['vL', 'vR', 'p'])  # virtual left/right, physical
B_odd = npc.zeros([v_leg_odd, v_leg_even.conj(), p_leg], labels=['vL', 'vR', 'p'])
B_even[0, 0, 0] = 1.  # up
B_odd[0, 0, 1] = 1.  # down

Bs = [B_even, B_odd] * (L // 2) + [B_even] * (L % 2)  # (right-canonical)
Ss = [np.ones(1)] * L  # Ss[i] are singular values between Bs[i-1] and Bs[i]

# Side remark:
# An MPS is expected to have non-zero entries everywhere compatible with the charges.
# In general, we recommend using `sort_legcharge` (or `as_completely_blocked`)
# to ensure complete blocking. (But the code will also work, if you don't do it.)
# The drawback is that this might introduce permutations in the indices of single legs,
# which you have to keep in mind when converting dense numpy arrays to and from npc.Arrays.

print("2) create an MPO representing the AFM Heisenberg Hamiltonian")

#         p*
#         |
#         ^
#         |
#  wL ->--W-->- wR
#         |
#         ^
#         |
#         p

# create physical spin-1/2 operators Sz, S+, S-
Sz = npc.Array.from_ndarray([[0.5, 0.], [0., -0.5]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Sp = npc.Array.from_ndarray([[0., 1.], [0., 0.]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Sm = npc.Array.from_ndarray([[0., 0.], [1., 0.]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Id = npc.eye_like(Sz, labels=Sz.get_leg_labels())  # identity

mpo_leg = npc.LegCharge.from_qflat(chinfo, [[0], [2], [-2], [0], [0]])

W_grid = [[Id,   Sp,   Sm,   Sz,   None          ],
          [None, None, None, None, 0.5 * Jxx * Sm],
          [None, None, None, None, 0.5 * Jxx * Sp],
          [None, None, None, None, Jz * Sz       ],
          [None, None, None, None, Id            ]]  # yapf:disable

W = npc.grid_outer(W_grid, [mpo_leg, mpo_leg.conj()], grid_labels=['wL', 'wR'])
# wL/wR = virtual left/right of the MPO
Ws = [W] * L

print("3) define 'environments' left and right")

#  .---->- vR     vL ->----.
#  |                       |
#  envL->- wR     wL ->-envR
#  |                       |
#  .---->- vR*    vL*->----.

envL = npc.zeros([W.get_leg('wL').conj(), Bs[0].get_leg('vL').conj(), Bs[0].get_leg('vL')],
                 labels=['wR', 'vR', 'vR*'])
envL[0, :, :] = npc.diag(1., envL.legs[1])
envR = npc.zeros([W.get_leg('wR').conj(), Bs[-1].get_leg('vR').conj(), Bs[-1].get_leg('vR')],
                 labels=['wL', 'vL', 'vL*'])
envR[-1, :, :] = npc.diag(1., envR.legs[1])

print("4) contract MPS and MPO to calculate the energy <psi|H|psi>")
contr = envL
for i in range(L):
    # contr labels: wR, vR, vR*
    contr = npc.tensordot(contr, Bs[i], axes=('vR', 'vL'))
    # wR, vR*, vR, p
    contr = npc.tensordot(contr, Ws[i], axes=(['p', 'wR'], ['p*', 'wL']))
    # vR*, vR, wR, p
    contr = npc.tensordot(contr, Bs[i].conj(), axes=(['p', 'vR*'], ['p*', 'vL*']))
    # vR, wR, vR*
    # note that the order of the legs changed, but that's no problem with labels:
    # the arrays are automatically transposed as necessary
E = npc.inner(contr, envR, axes=(['vR', 'wR', 'vR*'], ['vL', 'wL', 'vL*']))
print("E =", E)

print("5) calculate two-site hamiltonian ``H2`` from the MPO")
# label left, right physical legs with p, q
W0 = W.replace_labels(['p', 'p*'], ['p0', 'p0*'])
W1 = W.replace_labels(['p', 'p*'], ['p1', 'p1*'])
H2 = npc.tensordot(W0, W1, axes=('wR', 'wL')).itranspose(['wL', 'wR', 'p0', 'p1', 'p0*', 'p1*'])
H2 = H2[0, -1]  # (If H has single-site terms, it's not that simple anymore)
print("H2 labels:", H2.get_leg_labels())

print("6) calculate exp(H2) by diagonalization of H2")
# diagonalization requires to view H2 as a matrix
H2 = H2.combine_legs([('p0', 'p1'), ('p0*', 'p1*')], qconj=[+1, -1])
print("labels after combine_legs:", H2.get_leg_labels())
E2, U2 = npc.eigh(H2)
print("Eigenvalues of H2:", E2)
U_expE2 = U2.scale_axis(np.exp(-1.j * dt * E2), axis=1)  # scale_axis ~= apply a diagonal matrix
exp_H2 = npc.tensordot(U_expE2, U2.conj(), axes=(1, 1))
exp_H2.iset_leg_labels(H2.get_leg_labels())
exp_H2 = exp_H2.split_legs()  # by default split all legs which are `LegPipe`
# (this restores the original labels ['p0', 'p1', 'p0*', 'p1*'] of `H2` in `exp_H2`)

print("7) apply exp(H2) to even/odd bonds of the MPS and truncate with svd")
# (this implements one time step of first order TEBD)
for even_odd in [0, 1]:
    for i in range(even_odd, L - 1, 2):
        B_L = Bs[i].scale_axis(Ss[i], 'vL').ireplace_label('p', 'p0')
        B_R = Bs[i + 1].replace_label('p', 'p1')
        theta = npc.tensordot(B_L, B_R, axes=('vR', 'vL'))
        theta = npc.tensordot(exp_H2, theta, axes=(['p0*', 'p1*'], ['p0', 'p1']))
        # view as matrix for SVD
        theta = theta.combine_legs([('vL', 'p0'), ('p1', 'vR')], new_axes=[0, 1], qconj=[+1, -1])
        # now theta has labels '(vL.p0)', '(p1.vR)'
        U, S, V = npc.svd(theta, inner_labels=['vR', 'vL'])
        # truncate
        keep = S > cutoff
        S = S[keep]
        norm = np.linalg.norm(S)
        Ss[i + 1] = S / norm
        U = U.iscale_axis(S / norm, 'vR')
        Bs[i] = U.split_legs('(vL.p0)').iscale_axis(Ss[i]**(-1), 'vL').ireplace_label('p0', 'p')
        Bs[i + 1] = V.split_legs('(p1.vR)').ireplace_label('p1', 'p')
print("finished")

Models

What is a model?

Abstractly, a model stands for some physical (quantum) system to be described. For tensor networks algorithms, the model is usually specified as a Hamiltonian written in terms of second quantization. For example, let us consider a spin-1/2 Heisenberg model described by the Hamiltonian

\[H = J \sum_i S^x_i S^x_{i+1} + S^y_i S^y_{i+1} + S^z_i S^z_{i+1}\]

Note that a few things are defined more or less implicitly.

  • The local Hilbert space: it consists of Spin-1/2 degrees of freedom with the usual spin-1/2 operators \(S^x, S^y, S^z\).

  • The geometric (lattice) structure: above, we spoke of a 1D “chain”.

  • The boundary conditions: do we have open or periodic boundary conditions? The “chain” suggests open boundaries, which are in most cases preferable for MPS-based methods.

  • The range of i: How many sites do we consider (for a 2D system: in each direction)?

Obviously, these things need to be specified in TeNPy in one way or another, if we want to define a model.

Ultimately, our goal is to run some algorithm. However, different algorithms require the model and Hamiltonian to be specified in different forms. We have one class for each such required form. For example, dmrg requires an MPOModel, which contains the Hamiltonian written as an MPO. So a new model class suitable for DMRG should have this general structure:

class MyNewModel(MPOModel):
    def __init__(self, model_params):
        lattice = somehow_generate_lattice(model_params)
        H_MPO = somehow_generate_MPO(lattice, model_params)
        # initialize MPOModel
        MPOModel.__init__(self, lattice, H_MPO)

On the other hand, if we want to evolve a state with tebd we need a NearestNeighborModel, in which the Hamiltonian is written in terms of two-site bond-terms to allow a Suzuki-Trotter decomposition of the time-evolution operator:

class MyNewModel2(NearestNeighborModel):
    """General structure for a model suitable for TEBD."""
    def __init__(self, model_params):
        lattice = somehow_generate_lattice(model_params)
        H_bond = somehow_generate_H_bond(lattice, model_params)
        # initialize MPOModel
        NearestNeighborModel.__init__(self, lattice, H_bond)

Of course, the difficult part in these examples is to generate the H_MPO and H_bond in the required form. If you want to write them down by hand, you can of course do that. But it can be quite tedious to write every model multiple times, just because we need different representations of the same Hamiltonian. Luckily, there is a way out in TeNPy: the CouplingModel. Before we describe this class, let’s discuss the background of the Site and Lattice classes.

The Hilbert space

The local Hilbert space is represented by a Site (read its doc-string!). In particular, the Site contains the local LegCharge and hence the meaning of each basis state needs to be defined. Beside that, the site contains the local operators - those give the real meaning to the local basis. Having the local operators in the site is very convenient, because it makes them available by name for example when you want to calculate expectation values. The most common sites (e.g. for spins, spin-less or spin-full fermions, or bosons) are predefined in the module tenpy.networks.site, but if necessary you can easily extend them by adding further local operators or completely write your own subclasses of Site.

The full Hilbert space is a tensor product of the local Hilbert space on each site.

Note

The LegCharges of all involved sites need to have a common ChargeInfo in order to allow the contraction of tensors acting on the various sites. This can be ensured with the function multi_sites_combine_charges().

An example where multi_sites_combine_charges() is needed would be a coupling of different types of sites, e.g., when a tight-binding chain of fermions is coupled to some local spin degrees of freedom. Another use case of this function would be a model with a \(U(1)\) symmetry involving only half the sites, say \(\sum_{i=0}^{L/2} n_{2i}\).

Note

If you don’t know about the charges and np_conserved yet, but want to get started with models right away, you can set conserve=None in the existing sites, or use leg = tenpy.linalg.np_conserved.LegCharge.from_trivial(d) in the implementation of your custom site, where d is the dimension of the local Hilbert space. Alternatively, you can find an introduction to the charges in Charge conservation with np_conserved.

The geometry: the lattice class

The geometry is usually given by some kind of lattice structure describing how the sites are arranged, e.g. implicitly with the sum over nearest neighbours \(\sum_{<i, j>}\). In TeNPy, this is specified by a Lattice class, which contains a unit cell of a few Sites which are shifted periodically by its basis vectors to form a regular lattice. Again, we have pre-defined some basic lattices like a Chain, two chains coupled as a Ladder, or 2D lattices like the Square, Honeycomb and Kagome lattices; but you are also free to define your own generalizations.

MPS-based algorithms like DMRG always work on purely 1D systems. Even if our model “lives” on a 2D lattice, these algorithms require mapping it onto a 1D chain (probably at the cost of longer-range interactions). This mapping is also done by the lattice, by defining the order (order) of the sites.
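
As a toy sketch (not TeNPy's actual ordering machinery): assume a row-major ordering of a square lattice with illustrative sizes Lx, Ly. Couplings along one direction stay nearest-neighbor on the chain, while couplings along the other become longer-ranged:

```python
# hypothetical sketch: map a 2D square lattice to a 1D MPS ordering
Lx, Ly = 4, 3
order = [(x, y) for x in range(Lx) for y in range(Ly)]   # row-major ordering
mps_index = {site: i for i, site in enumerate(order)}
# neighbors along y are adjacent on the chain ...
assert mps_index[(0, 1)] - mps_index[(0, 0)] == 1
# ... but neighbors along x end up Ly sites apart:
assert mps_index[(1, 0)] - mps_index[(0, 0)] == Ly
```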

Note

Further details on the lattice geometry can be found in Details on the lattice geometry.

The CouplingModel: general structure

The CouplingModel provides a general, quite abstract way to specify a Hamiltonian of two-site couplings on a given lattice. Once initialized, its methods add_onsite() and add_coupling() allow adding onsite and coupling terms repeated over the different unit cells of the lattice. In that way, it basically allows a straightforward translation of the Hamiltonian given as a math formula \(H = \sum_{i} A_i B_{i+dx} + ...\) with onsite operators A, B,… into a model class.

The general structure for a new model based on the CouplingModel is then:

class MyNewModel3(CouplingModel,MPOModel,NearestNeighborModel):
    def __init__(self, ...):
        ...  # follow the basic steps explained below

In the initialization method __init__(self, ...) of this class you can then follow these basic steps:

  1. Read out the parameters.

  2. Given the parameters, determine the charges to be conserved. Initialize the LegCharge of the local sites accordingly.

  3. Define (additional) local operators needed.

  4. Initialize the needed Site.

    Note

    Using pre-defined sites like the SpinHalfSite is recommended and can replace steps 1-3.

  5. Initialize the lattice (or if you got the lattice as a parameter, set the sites in the unit cell).

  6. Initialize the CouplingModel with CouplingModel.__init__(self, lat).

  7. Use add_onsite() and add_coupling() to add all terms of the Hamiltonian. Here, the pairs of the lattice can come in handy, for example:

    self.add_onsite(-np.asarray(h), 0, 'Sz')
    for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
        self.add_coupling(0.5*J, u1, 'Sp', u2, 'Sm', dx, plus_hc=True)
        self.add_coupling(    J, u1, 'Sz', u2, 'Sz', dx)
    

    Note

    The method add_coupling() adds the coupling only in one direction, i.e. not switching i and j in a \(\sum_{\langle i, j\rangle}\). If you have terms like \(c^\dagger_i c_j\) or \(S^{+}_i S^{-}_j\) in your Hamiltonian, you need to add it in both directions to get a Hermitian Hamiltonian! The easiest way to do that is to use the plus_hc option of add_onsite() and add_coupling(), as we did for the \(J/2 (S^{+}_i S^{-}_j + h.c.)\) terms of the Heisenberg model above. Alternatively, you can add the hermitian conjugate terms explicitly, see the examples in add_coupling() for more details.

    Note that the strength arguments of these functions can be (numpy) arrays for site-dependent couplings. If you need to add or multiply some parameters of the model for the strength of certain terms, it is recommended to use np.asarray beforehand – in that way, lists will also work fine.

  8. Finally, if you derived from the MPOModel, you can call calc_H_MPO() to build the MPO and use it for the initialization as MPOModel.__init__(self, lat, self.calc_H_MPO()).

  9. Similarly, if you derived from the NearestNeighborModel, you can call calc_H_bond() to initialize it as NearestNeighborModel.__init__(self, lat, self.calc_H_bond()). Calling self.calc_H_bond() will fail for models which are not nearest-neighbor (with respect to the MPS ordering), so you should only subclass the NearestNeighborModel if the lattice is a simple Chain.

The CouplingModel works for Hamiltonians which are a sum of terms involving at most two sites. The generalization MultiCouplingModel can be used for Hamiltonians with coupling terms acting on more than two sites at once. Follow the exact same steps in the initialization, and just use add_multi_coupling() instead of or in addition to add_coupling(). A prototypical example is the exactly solvable ToricCode.

The code of the module tenpy.models.xxz_chain is included below as an illustrative example of how to implement a model. The implementation of the XXZChain directly follows the steps outlined above. The XXZChain2 implements the very same model, but based on the CouplingMPOModel explained in the next section.

"""Prototypical example of a 1D quantum model: the spin-1/2 XXZ chain.

The XXZ chain is contained in the more general :class:`~tenpy.models.spins.SpinChain`; the idea of
this module is more to serve as a pedagogical example for a model.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import numpy as np

from .lattice import Site, Chain
from .model import CouplingModel, NearestNeighborModel, MPOModel, CouplingMPOModel
from ..linalg import np_conserved as npc
from ..tools.params import asConfig
from ..networks.site import SpinHalfSite  # if you want to use the predefined site

__all__ = ['XXZChain', 'XXZChain2']


class XXZChain(CouplingModel, NearestNeighborModel, MPOModel):
    r"""Spin-1/2 XXZ chain with Sz conservation.

    The Hamiltonian reads:

    .. math ::
        H = \sum_i \mathtt{Jxx}/2 (S^{+}_i S^{-}_{i+1} + S^{-}_i S^{+}_{i+1})
                 + \mathtt{Jz} S^z_i S^z_{i+1} \\
            - \sum_i \mathtt{hz} S^z_i

    All parameters are collected in a single dictionary `model_params`, which
    is turned into a :class:`~tenpy.tools.params.Config` object.

    Parameters
    ----------
    model_params : :class:`~tenpy.tools.params.Config`
        Parameters for the model. See :cfg:config:`XXZChain` below.

    Options
    -------
    .. cfg:config :: XXZChain
        :include: CouplingMPOModel

        L : int
            Length of the chain.
        Jxx, Jz, hz : float | array
            Coupling as defined for the Hamiltonian above.
        bc_MPS : {'finite' | 'infinite'}
            MPS boundary conditions. Coupling boundary conditions are chosen appropriately.

    """
    def __init__(self, model_params):
        # 0) read out/set default parameters
        model_params = asConfig(model_params, "XXZChain")
        L = model_params.get('L', 2)
        Jxx = model_params.get('Jxx', 1.)
        Jz = model_params.get('Jz', 1.)
        hz = model_params.get('hz', 0.)
        bc_MPS = model_params.get('bc_MPS', 'finite')
        # 1-3):
        USE_PREDEFINED_SITE = False
        if not USE_PREDEFINED_SITE:
            # 1) charges of the physical leg. The only time that we actually define charges!
            leg = npc.LegCharge.from_qflat(npc.ChargeInfo([1], ['2*Sz']), [1, -1])
            # 2) onsite operators
            Sp = [[0., 1.], [0., 0.]]
            Sm = [[0., 0.], [1., 0.]]
            Sz = [[0.5, 0.], [0., -0.5]]
            # (Can't define Sx and Sy as onsite operators: they are incompatible with Sz charges.)
            # 3) local physical site
            site = Site(leg, ['up', 'down'], Sp=Sp, Sm=Sm, Sz=Sz)
        else:
            # there is a site for spin-1/2 defined in TeNPy, so we can just use it
            # replacing steps 1-3)
            site = SpinHalfSite(conserve='Sz')
        # 4) lattice
        bc = 'periodic' if bc_MPS == 'infinite' else 'open'
        lat = Chain(L, site, bc=bc, bc_MPS=bc_MPS)
        # 5) initialize CouplingModel
        CouplingModel.__init__(self, lat)
        # 6) add terms of the Hamiltonian
        # (u is always 0 as we have only one site in the unit cell)
        self.add_onsite(-hz, 0, 'Sz')
        self.add_coupling(Jxx * 0.5, 0, 'Sp', 0, 'Sm', 1, plus_hc=True)
        # instead of plus_hc=True, we could explicitly add the h.c. term:
        # self.add_coupling(np.conj(Jxx * 0.5), 0, 'Sm', 0, 'Sp', -1)
        self.add_coupling(Jz, 0, 'Sz', 0, 'Sz', 1)
        # 7) initialize H_MPO
        MPOModel.__init__(self, lat, self.calc_H_MPO())
        # 8) initialize H_bond (the order of 7/8 doesn't matter)
        NearestNeighborModel.__init__(self, lat, self.calc_H_bond())


class XXZChain2(CouplingMPOModel, NearestNeighborModel):
    """Another implementation of the Spin-1/2 XXZ chain with Sz conservation.

    This implementation takes the same parameters as the :class:`XXZChain`, but is implemented
    based on the :class:`~tenpy.models.model.CouplingMPOModel`.

    Parameters
    ----------
    model_params : dict | :class:`~tenpy.tools.params.Config`
        See :cfg:config:`XXZChain`
    """
    def __init__(self, model_params):
        model_params = asConfig(model_params, "XXZChain2")
        model_params.setdefault('lattice', "Chain")
        CouplingMPOModel.__init__(self, model_params)

    def init_sites(self, model_params):
        return SpinHalfSite(conserve='Sz')  # use predefined Site

    def init_terms(self, model_params):
        # read out parameters
        Jxx = model_params.get('Jxx', 1.)
        Jz = model_params.get('Jz', 1.)
        hz = model_params.get('hz', 0.)
        # add terms
        for u in range(len(self.lat.unit_cell)):
            self.add_onsite(-hz, u, 'Sz')
        for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
            self.add_coupling(Jxx * 0.5, u1, 'Sp', u2, 'Sm', dx, plus_hc=True)
            self.add_coupling(Jz, u1, 'Sz', u2, 'Sz', dx)

The easy way: the CouplingMPOModel

Since many of the basic steps above are always the same, we don’t need to repeat them all the time. So we have yet another class helping to structure the initialization of models: the CouplingMPOModel. The general structure of this class is like this:

class CouplingMPOModel(CouplingModel,MPOModel):
    def __init__(self, model_param):
        # ... follows the basic steps 1-8 using the methods
        lat = self.init_lattice(self, model_param)  # for step 4
        # ...
        self.init_terms(self, model_param) # for step 6
        # ...

    def init_sites(self, model_param):
        # You should overwrite this

    def init_lattice(self, model_param):
        sites = self.init_sites(self, model_param) # for steps 1-3
        # initialize an arbitrary pre-defined lattice
        # using model_params['lattice']

    def init_terms(self, model_param):
        # does nothing.
        # You should overwrite this

The XXZChain2 included above illustrates how it can be used. You need to implement steps 1-3) by overwriting the method init_sites(). Step 4) is performed in the method init_lattice(), which initializes arbitrary 1D or 2D lattices; by default, a simple 1D chain. If your model only works for specific lattices, you can overwrite this method in your own class. Step 6) should be done by overwriting the method init_terms(). Steps 5, 7, 8 and the calls to the init_… methods for the other steps are done automatically if you just call CouplingMPOModel.__init__(self, model_param).

The XXZChain and XXZChain2 work only with the Chain as lattice, since they are derived from the NearestNeighborModel. This allows using them for TEBD in 1D (yeah!), but we can’t get the MPO for DMRG on, e.g., a Square lattice cylinder – although it’s intuitively clear what the Hamiltonian there should be: just put the nearest-neighbor coupling on each bond of the 2D lattice.

It’s not possible to generalize a NearestNeighborModel to an arbitrary lattice, where it’s no longer nearest neighbors in the MPS sense, but we can go the other way around: first write the model on an arbitrary 2D lattice and then restrict it to a 1D chain to make it a NearestNeighborModel.

Let me illustrate this with another standard example model: the transverse field Ising model, implemented in the module tenpy.models.tf_ising included below. The TFIModel works for arbitrary 1D or 2D lattices. The TFIChain then takes the exact same model and additionally makes it a NearestNeighborModel, which only works for the 1D chain.

"""Prototypical example of a quantum model: the transverse field Ising model.

Like the :class:`~tenpy.models.xxz_chain.XXZChain`, the transverse field ising chain
:class:`TFIChain` is contained in the more general :class:`~tenpy.models.spins.SpinChain`;
the idea is more to serve as a pedagogical example for a 'model'.

We choose the field along z to allow to conserve the parity, if desired.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3

import numpy as np

from .model import CouplingMPOModel, NearestNeighborModel
from ..tools.params import asConfig
from ..networks.site import SpinHalfSite

__all__ = ['TFIModel', 'TFIChain']


class TFIModel(CouplingMPOModel):
    r"""Transverse field Ising model on a general lattice.

    The Hamiltonian reads:

    .. math ::
        H = - \sum_{\langle i,j\rangle, i < j} \mathtt{J} \sigma^x_i \sigma^x_{j}
            - \sum_{i} \mathtt{g} \sigma^z_i

    Here, :math:`\langle i,j \rangle, i< j` denotes nearest neighbor pairs, each pair appearing
    exactly once.
    All parameters are collected in a single dictionary `model_params`, which
    is turned into a :class:`~tenpy.tools.params.Config` object.

    Parameters
    ----------
    model_params : :class:`~tenpy.tools.params.Config`
        Parameters for the model. See :cfg:config:`TFIModel` below.

    Options
    -------
    .. cfg:config :: TFIModel
        :include: CouplingMPOModel

        conserve : None | 'parity'
            What should be conserved. See :class:`~tenpy.networks.Site.SpinHalfSite`.
        J, g : float | array
            Coupling as defined for the Hamiltonian above.

    """
    def init_sites(self, model_params):
        conserve = model_params.get('conserve', 'parity')
        assert conserve != 'Sz'
        if conserve == 'best':
            conserve = 'parity'
            if self.verbose >= 1.:
                print(self.name + ": set conserve to", conserve)
        site = SpinHalfSite(conserve=conserve)
        return site

    def init_terms(self, model_params):
        J = np.asarray(model_params.get('J', 1.))
        g = np.asarray(model_params.get('g', 1.))
        for u in range(len(self.lat.unit_cell)):
            self.add_onsite(-g, u, 'Sigmaz')
        for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
            self.add_coupling(-J, u1, 'Sigmax', u2, 'Sigmax', dx)
        # done


class TFIChain(TFIModel, NearestNeighborModel):
    """The :class:`TFIModel` on a Chain, suitable for TEBD.

    See the :class:`TFIModel` for the documentation of parameters.
    """
    def __init__(self, model_params):
        model_params = asConfig(model_params, self.__class__.__name__)
        model_params.setdefault('lattice', "Chain")
        CouplingMPOModel.__init__(self, model_params)

Automation of Hermitian conjugation

As most physical Hamiltonians are Hermitian, such a Hamiltonian is fully determined when only half of the mutually conjugate terms are defined. For example, a simple Hamiltonian:

\[H = \sum_{\langle i,j\rangle, i<j} - \mathtt{J} (c^{\dagger}_i c_j + c^{\dagger}_j c_i)\]

is fully determined by the term \(c^{\dagger}_i c_j\) if we demand that Hermitian conjugates are included automatically. In TeNPy, whenever you add a coupling using add_onsite(), add_coupling(), or add_multi_coupling(), you can use the optional argument plus_hc to automatically create and add the Hermitian conjugate of that coupling term - as shown above.
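
As a single-particle (matrix) analogue of this - not the MPO machinery itself - one can check directly that adding the conjugate hopping terms restores Hermiticity:

```python
import numpy as np

# single-particle hopping matrix for the -J * c^dagger_i c_{i+1} terms on a 4-site chain
L, J = 4, 1.0
H_half = np.zeros((L, L), dtype=complex)
for i in range(L - 1):
    H_half[i, i + 1] = -J        # only "half" the terms: c^dagger_i c_{i+1}
# "plus_hc": adding the conjugate terms c^dagger_{i+1} c_i makes it Hermitian
H = H_half + H_half.conj().T
assert np.allclose(H, H.conj().T)
```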

Additionally, in an MPO, explicitly adding both a non-Hermitian term and its conjugate increases the bond dimension of the MPO, which increases the memory requirements of the MPOEnvironment. Instead of adding the conjugate terms explicitly, you can set the flag explicit_plus_hc in the CouplingMPOModel parameters, which will ensure two things:

  1. The model and the MPO will only store half the terms of each Hermitian conjugate pair added, but the flag explicit_plus_hc indicates that they represent self + h.c.. In the example above, only the term \(c^{\dagger}_i c_j\) would be saved.

  2. At runtime during DMRG, the Hermitian conjugate of the (now non-Hermitian) MPO will be computed and applied along with the MPO, so that the effective Hamiltonian is still Hermitian.

Note

The model flag explicit_plus_hc should be used in conjunction with the flag plus_hc in add_coupling() or add_multi_coupling(). If plus_hc is False while explicit_plus_hc is True the MPO bond dimension will not be reduced, but you will still pay the additional computational cost of computing the Hermitian conjugate at runtime.

Thus, we end up with several use cases, depending on your preferences. Consider the FermionModel. If you do not care about the MPO bond dimension, and want to add Hermitian conjugate terms manually, you would set model_par['explicit_plus_hc'] = False and write:

self.add_coupling(-J, u1, 'Cd', u2, 'C', dx)
self.add_coupling(np.conj(-J), u2, 'Cd', u1, 'C', -dx)

If you wanted to save the trouble of the extra line of code (but still did not care about the MPO bond dimension), you would keep that model_par setting, but instead write:

self.add_coupling(-J, u1, 'Cd', u2, 'C', dx, plus_hc=True)

Finally, if you wanted a reduction in MPO bond dimension, you would need to set model_par['explicit_plus_hc'] = True, and write:

self.add_coupling(-J, u1, 'Cd', u2, 'C', dx, plus_hc=True)
Some random remarks on models
  • Needless to say, we also provide various predefined models under tenpy.models.

  • Of course, an MPO is all you need to initialize an MPOModel to be used for DMRG; you don’t have to use the CouplingModel or CouplingMPOModel. For example, exponentially decaying long-range interactions are not supported by the coupling model, but straightforward to include in an MPO, as demonstrated in the example examples/mpo_exponentially_decaying.py.

  • If the model of your interest contains fermions, you should read Fermions and the Jordan-Wigner transformation.

  • We suggest writing the model to take a single parameter dictionary for the initialization, as the CouplingMPOModel does. The CouplingMPOModel converts the dictionary to a dict-like Config with some additional features before passing it on to the init_lattice, init_site, … methods. It is recommended to read out parameters while providing default values with model_params.get("key", default_value), see get().

  • When you write a model and want to include a test that it can be at least constructed, take a look at tests/test_model.py.
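The recommended parameter pattern can be sketched with a plain dictionary standing in for the Config class (the parameter names J and bc below are purely illustrative):

```python
# minimal sketch: the model takes one parameter dictionary and reads out
# each value with a default, mimicking model_params.get("key", default)
def read_params(model_params):
    J = model_params.get("J", 1.0)        # coupling strength, default 1.0
    bc = model_params.get("bc", "open")   # boundary condition, default "open"
    return J, bc

assert read_params({}) == (1.0, "open")          # defaults fill the gaps
assert read_params({"J": 0.5}) == (0.5, "open")  # given values take precedence
```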

Details on the lattice geometry

The Lattice class defines the geometry of the system. In the basic form, it represents a unit cell of a few sites repeated in one or multiple directions. Moreover, it maps this higher-dimensional geometry to a one-dimensional chain for MPS-based algorithms.

Visualization

A plot of the lattice can greatly help to understand which sites are connected by what couplings. The methods plot_* of the Lattice can do a good job for a quick illustration. Let’s look at the Honeycomb lattice as an example.

import matplotlib.pyplot as plt
from tenpy.models import lattice

plt.figure(figsize=(5, 6))
ax = plt.gca()
lat = lattice.Honeycomb(Lx=4, Ly=4, sites=None, bc='periodic')
lat.plot_coupling(ax)
lat.plot_order(ax, linestyle=':')
lat.plot_sites(ax)
lat.plot_basis(ax, origin=-0.5*(lat.basis[0] + lat.basis[1]))
ax.set_aspect('equal')
ax.set_xlim(-1)
ax.set_ylim(-1)
plt.show()

(Source code, png, hires.png, pdf)

_images/lattices-1.png

In this case, the unit cell (shaded green) consists of two sites, which for the purpose of plotting we just set to sites=None; in general you should specify instances of Site for that. The unit cell gets repeated in the directions given by the lattice basis (green arrows at the unit cell boundary). Hence, we can label each site by a lattice index (x, y, u) in this case, where x in range(Lx), y in range(Ly) specify the translation of the unit cell and u in range(len(unit_cell)), here u in [0, 1], specifies the index within the unit cell.

How an MPS winds through the lattice: the order

For MPS-based algorithms, we need to map a 2D lattice like the one above to a 1D chain. The red, dashed line in the plot indicates how an MPS winds through the 2D lattice. The MPS index i is a simple enumeration of the sites along this line, shown as numbers next to the sites in the plot. The methods mps2lat_idx() and lat2mps_idx() map indices of the MPS to and from indices of the lattice.
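For the simple C-style ordering, these index maps are easy to spell out by hand (a pure-Python sketch of the idea, not the tenpy implementation; note that the actual default Honeycomb order differs):

```python
Lx, Ly, n_u = 4, 4, 2   # honeycomb: unit cell of two sites

# C-style order: enumerate lattice indices (x, y, u); the MPS index i is
# just the position along this enumeration
order = [(x, y, u) for x in range(Lx) for y in range(Ly) for u in range(n_u)]
lat2mps = {lat_idx: i for i, lat_idx in enumerate(order)}   # like lat2mps_idx()
mps2lat = dict(enumerate(order))                            # like mps2lat_idx()

assert mps2lat[0] == (0, 0, 0)
assert mps2lat[1] == (0, 0, 1)
assert all(lat2mps[mps2lat[i]] == i for i in range(Lx * Ly * n_u))  # round trip
```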

The MPS class itself is (mostly) agnostic of the underlying geometry. For example, expectation_value() will return a 1D array of the expectation value on each site, indexed by the MPS index i. If you have a two-dimensional lattice, you can use mps2lat_values() to map this result to a 2D array indexed by the lattice indices.

A suitable order is critical for the efficiency of MPS-based algorithms. On one hand, different orderings can lead to different MPO bond dimensions, with direct impact on the complexity scaling. On the other hand, the order influences how much entanglement needs to go through each bond of the underlying MPS, e.g., in the ground state to be found by DMRG, and therefore influences the required MPS bond dimensions. For the latter reason, the “optimal” ordering cannot be known a priori and might even depend on your coupling parameters (and the phase you are in). In the end, you can just try different orderings and see which one works best.

The simplest way to change the order is to use a non-default value for the initialization parameter order of the Lattice class. This gets passed on to ordering(), which you can override in a custom lattice class to define new possible orderings. Alternatively, you can go the most general way and simply set the attribute order to be a 2D numpy array with lattice indices as rows, in the order you want.

import matplotlib.pyplot as plt
from tenpy.models import lattice

Lx, Ly = 3, 3
fig, axes = plt.subplots(2, 2, figsize=(7, 8))

lat1 = lattice.Honeycomb(Lx, Ly, sites=None, bc='periodic')  # default order
lat2 = lattice.Honeycomb(Lx, Ly, sites=None, bc='periodic',
                        order="Cstyle")  # first method to change order
# alternative: directly set "Cstyle" order
lat3 = lattice.Honeycomb(Lx, Ly, sites=None, bc='periodic')
lat3.order = lat2.ordering("Cstyle")  # now equivalent to lat2

# general: can apply arbitrary permutation to the order
lat4 = lattice.Honeycomb(Lx, Ly, sites=None, bc='periodic',
                        order="Cstyle")
old_order = lat4.order
permutation = []
for i in range(0, len(old_order), 2):
    permutation.append(i+1)
    permutation.append(i)
lat4.order = old_order[permutation, :]

for lat, label, ax in zip([lat1, lat2, lat3, lat4],
                          ["order='default'",
                           "order='Cstyle'",
                           "order='Cstyle'",
                           "custom permutation"],
                          axes.flatten()):
    lat.plot_coupling(ax)
    lat.plot_sites(ax)
    lat.plot_order(ax, linestyle=':', linewidth=2.)
    ax.set_aspect('equal')
    ax.set_title('order = ' + repr(label))

plt.show()

(Source code, png, hires.png, pdf)

_images/lattices-2.png
Boundary conditions

The Lattice defines the boundary conditions bc in each direction. It can be one of the usual 'open' or 'periodic' in each direction.

On top of that, there is the bc_MPS boundary condition of the MPS, one of 'finite', 'segment', 'infinite'. For an 'infinite' MPS, the whole lattice is repeated in the direction of the first basis vector of the lattice. For bc_MPS='infinite', the first direction should always be 'periodic', but you can also define a lattice with bc_MPS='finite', bc=['periodic', 'periodic'] for a finite system on the torus. This is discouraged, though, because the ground state MPS will require the squared bond dimension for the same precision in this case!

For two (or higher) dimensional lattices, e.g for DMRG on an infinite cylinder, you can also specify an integer shift instead of just saying 'periodic': Rolling the 2D lattice up into a cylinder, you have a degree of freedom which sites to connect. This is illustrated in the figure below for a Square lattice with bc=['periodic', shift] for shift in [-1, 0, 1] (different columns). In the first row, the orange markers indicate a pair of identified sites (see plot_bc_shift()). The dashed orange line indicates the direction of the cylinder axis. The line where the cylinder is “cut open” therefore winds around the cylinder for a non-zero shift. (A similar thing happens even for shift=0 for more complicated lattices with non-orthogonal basis.) In the second row, we directly draw lines between all sites connected by nearest-neighbor couplings, as they appear in the MPO.
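A small sketch may help to see what the shift does (plain Python, assuming a square lattice with sites labeled (x, y); sites shifted outside the x range are simply dropped here for brevity):

```python
Lx, Ly = 4, 3

def neighbor_above(x, y, shift):
    """Vertical nearest neighbor of (x, y) on a cylinder rolled up in y.

    Illustrative sketch: crossing the periodic boundary identifies
    (x, Ly - 1) with (x + shift, 0), which is exactly the degree of
    freedom described in the text.
    """
    if y + 1 < Ly:
        return (x, y + 1)        # bulk bond, no wrapping
    return (x + shift, 0)        # boundary bond picks up the shift

assert neighbor_above(1, 0, shift=0) == (1, 1)   # bulk
assert neighbor_above(1, 2, shift=0) == (1, 0)   # straight identification
assert neighbor_above(1, 2, shift=1) == (2, 0)   # shifted identification
```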

(Source code, png, hires.png, pdf)

_images/lattices-3.png
Irregular Lattices

The IrregularLattice allows adding sites to or removing sites from an existing regular lattice. The doc-string of IrregularLattice contains several examples; let us consider another one here, where we use the IrregularLattice to “fix” the boundary of the Honeycomb lattice: when we use "open" boundary conditions for a finite system, there are two sites (on the lower left and upper right) which are not part of any hexagon. The following example shows how to remove them from the system:

import matplotlib.pyplot as plt
from tenpy.models import lattice

Lx, Ly = 3, 3
fig, axes = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(6, 4))

reg_lat = lattice.Honeycomb(Lx=Lx, Ly=Ly, sites=None, bc='open')
irr_lat = lattice.IrregularLattice(reg_lat, remove=[[0, 0, 0], [-1, -1, 1]])
for lat, label, ax in zip([reg_lat, irr_lat],
                          ["regular", "irregular"],
                          axes.flatten()):
    lat.plot_coupling(ax)
    lat.plot_order(ax, linestyle=':')
    lat.plot_sites(ax)
    ax.set_aspect('equal')
    ax.set_title(label)

plt.show()

(Source code, png, hires.png, pdf)

_images/lattices-4.png

Fermions and the Jordan-Wigner transformation

The Jordan-Wigner transformation maps fermionic creation and annihilation operators to (bosonic) spin operators.

Spinless fermions in 1D

Let’s start by explicitly writing down the transformation. With the Pauli matrices \(\sigma^{x,y,z}_j\) and \(\sigma^{\pm}_j = (\sigma^x_j \pm \mathrm{i} \sigma^y_j)/2\) on each site, we can map

\[\begin{split}n_j &\leftrightarrow (\sigma^{z}_j + 1)/2 \\ c_j &\leftrightarrow (-1)^{\sum_{l < j} n_l} \sigma^{-}_j \\ c_j^\dagger &\leftrightarrow (-1)^{\sum_{l < j} n_l} \sigma^{+}_j\end{split}\]

The \(n_l\) in the second and third row are defined in terms of Pauli matrices according to the first row. We do not interpret the Pauli matrices as spin-1/2; they have nothing to do with the spin in the spin-full case. If you really want to interpret them physically, you might better think of them as hard-core bosons (\(b_j =\sigma^{-}_j, b^\dagger_j=\sigma^{+}_j\)), with a spin of the fermions mapping to a spin of the hard-core bosons.

Note that this transformation maps the fermionic operators \(c_j\) and \(c^\dagger_j\) to global operators; although they carry an index j indicating a site, they actually act on all sites l <= j! Thus, clearly the operators C and Cd defined in the FermionSite do not directly correspond to \(c_j\) and \(c^\dagger_j\). The part \((-1)^{\sum_{l < j} n_l}\) is called the Jordan-Wigner string; in the FermionSite it is given by the local operator \(JW := (-1)^{n_l}\) acting on all sites l < j. Since this is important, let me stress it again:

Warning

The fermionic operator \(c_j\) (and similar \(c^\dagger_j\)) maps to a global operator consisting of the Jordan-Wigner string built by the local operator JW on sites l < j and the local operator C (or Cd, respectively) on site j.

On a single site, the onsite operators C and Cd in the FermionSite fulfill the correct anti-commutation relation without the need to include JW strings. The JW string is necessary to ensure the anti-commutation for operators acting on different sites.

Written in terms of onsite operators defined in the FermionSite, with the i-th entry in the list acting on site i, the relations are thus:

["JW", ..., "JW", "C",  "Id", ..., "Id"]   # for the annihilation operator
["JW", ..., "JW", "Cd", "Id", ..., "Id"]   # for the creation operator

Note that "JW" squares to the identity, "JW JW" == "Id", which is the reason that the Jordan-Wigner string completely cancels in \(n_j = c^\dagger_j c_j\). In the above notation, this can be written as:

["JW", ..., "JW", "Cd",  "Id", ..., "Id"] * ["JW", ..., "JW", "C", "Id", ..., "Id"]
== ["JW JW", ..., "JW JW", "Cd C",  "Id Id", ..., "Id Id"]      # by definition of the tensorproduct
== ["Id",    ..., "Id",    "N",     "Id",    ..., "Id"]         # by definition of the local operators
# ("X Y" stands for the local operators X and Y applied on the same site. We assume that the "Cd" and "C" on the first line act on the same site.)
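These identities are easy to verify with explicit 2x2 matrices (a numpy sketch in the occupation basis (|0>, |1>), using the same operator names as above):

```python
import numpy as np

C = np.array([[0., 1.], [0., 0.]])   # annihilation on a single site
Cd = C.T.copy()                      # creation
N = Cd @ C                           # number operator
Id = np.eye(2)
JW = Id - 2. * N                     # (-1)^n = diag(1, -1)

assert np.allclose(JW @ JW, Id)      # "JW JW" == "Id"
assert np.allclose(Cd @ C, N)        # "Cd C" == "N"

# on two sites, the JW string makes the global operators anticommute:
c1 = np.kron(C, Id)                  # ["C", "Id"]
c2 = np.kron(JW, C)                  # ["JW", "C"]
assert np.allclose(c1 @ c2 + c2 @ c1, 0.)             # {c_1, c_2} = 0
assert np.allclose(c1 @ c2.T + c2.T @ c1, 0.)         # {c_1, c_2^dagger} = 0
assert np.allclose(c1 @ c1.T + c1.T @ c1, np.eye(4))  # {c_1, c_1^dagger} = 1
```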

For a pair of operators acting on different sites, JW strings have to be included for every site between the operators. For example, taking i < j, \(c^\dagger_i c_j \leftrightarrow \sigma_i^{+} (-1)^{\sum_{i <=l < j} n_l} \sigma_j^{-}\). More explicitly, for j = i+2 we get:

["JW", ..., "JW", "Cd", "Id", "Id", "Id", ..., "Id"] * ["JW", ..., "JW", "JW", "JW", "C", "Id", ..., "Id"]
== ["JW JW", ..., "JW JW", "Cd JW",  "Id JW", "Id C", ..., "Id"]
== ["Id",    ..., "Id",    "Cd JW",  "JW",    "C",    ..., "Id"]

In other words, the Jordan-Wigner string appears only in the range i <= l < j, i.e. between the two sites and on the smaller/left one of them. (You can easily generalize this rule to cases with more than two \(c\) or \(c^\dagger\).)

This last line (as well as the last line of the previous example) can be rewritten by changing the order of the operators Cd JW to "JW Cd" == - "Cd". (This is valid because either site i is occupied, yielding a minus sign from the JW, or it is empty, yielding a 0 from the Cd.)
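This sign identity (and its counterparts) can be checked directly with the explicit 2x2 matrices; here "X Y" again means the matrix product of the onsite operators (numpy sketch):

```python
import numpy as np

C = np.array([[0., 1.], [0., 0.]])   # annihilation
Cd = C.T.copy()                      # creation
JW = np.diag([1., -1.])              # (-1)^n

assert np.allclose(JW @ Cd, -Cd)     # "JW Cd" == - "Cd"
assert np.allclose(Cd @ JW, Cd)      # "Cd JW" == "Cd"
assert np.allclose(JW @ C, C)        # "JW C" == "C"
assert np.allclose(C @ JW, -C)       # "C JW" == - "C"
```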

This is also the case for j < i, say j = i-2: \(c^\dagger_i c_j \leftrightarrow (-1)^{\sum_{j <=l < i} n_l} \sigma_i^{+} \sigma_j^{-}\). As shown in the following, the JW again appears on the left site, but this time acting after C:

["JW", ..., "JW", "JW", "JW", "Cd", "Id", ..., "Id"] * ["JW", ..., "JW", "C", "Id", "Id", "Id", ..., "Id"]
== ["JW JW", ..., "JW JW", "JW C",  "JW", "Cd Id", ..., "Id"]
== ["Id",    ..., "Id",    "JW C",  "JW", "Cd",    ..., "Id"]
Higher dimensions

For an MPO or MPS, you always have to define an ordering of all your sites. This ordering effectively maps the higher-dimensional lattice to a 1D chain, usually at the expense of long-range hopping/interactions. With this mapping, the Jordan-Wigner transformation generalizes to higher dimensions in a straightforward way.

Spinful fermions
_images/JordanWignerSpinHalf.png

As illustrated in the above picture, you can think of spin-1/2 fermions on a chain as spinless fermions living on a ladder (and analogous mappings for higher dimensional lattices). Each rung (a blue box in the picture) forms a SpinHalfFermionSite which is composed of two FermionSite (the circles in the picture) for spin-up and spin-down. The mapping of the spin-1/2 fermions onto the ladder induces an ordering of the spins, as the final result must again be a one-dimensional chain, now containing both spin species. The solid line indicates the convention for the ordering, the dashed lines indicate spin-preserving hopping \(c^\dagger_{s,i} c_{s,i+1} + h.c.\) and visualize the ladder structure. More generally, each species of fermions appearing in your model gets a separate label, and its Jordan-Wigner string includes the signs \((-1)^{n_l}\) of all species of fermions to the ‘left’ of it (in the sense of the ordering indicated by the solid line in the picture).

In the case of spin-1/2 fermions labeled by \(\uparrow\) and \(\downarrow\) on each site, the complete mapping is given by (where j and l are indices of the FermionSite):

\[\begin{split}n_{\uparrow,j} &\leftrightarrow (\sigma^{z}_{\uparrow,j} + 1)/2 \\ n_{\downarrow,j} &\leftrightarrow (\sigma^{z}_{\downarrow,j} + 1)/2 \\ c_{\uparrow,j} &\leftrightarrow (-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} \sigma^{-}_{\uparrow,j} \\ c^\dagger_{\uparrow,j} &\leftrightarrow (-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} \sigma^{+}_{\uparrow,j} \\ c_{\downarrow,j} &\leftrightarrow (-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} (-1)^{n_{\uparrow,j}} \sigma^{-}_{\downarrow,j} \\ c^\dagger_{\downarrow,j} &\leftrightarrow (-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} (-1)^{n_{\uparrow,j}} \sigma^{+}_{\downarrow,j} \\\end{split}\]

In each of the above mappings the operators on the right hand sides commute; we can rewrite \((-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} = \prod_{l < j} (-1)^{n_{\uparrow,l}} (-1)^{n_{\downarrow,l}}\), which resembles the actual structure in the code more closely. The parts of the operator acting in the same box of the picture, i.e. which have the same index j or l, are the ‘onsite’ operators in the SpinHalfFermionSite: for example, JW on site j is given by \((-1)^{n_{\uparrow,j}} (-1)^{n_{\downarrow,j}}\), Cu is just \(\sigma^{-}_{\uparrow,j}\), Cdu is \(\sigma^{+}_{\uparrow,j}\), Cd is \((-1)^{n_{\uparrow,j}} \sigma^{-}_{\downarrow,j}\), and Cdd is \((-1)^{n_{\uparrow,j}} \sigma^{+}_{\downarrow,j}\). Note the asymmetry regarding the spin in the definition of the onsite operators: the spin-down operators include Jordan-Wigner signs for the spin-up fermions on the same site. This asymmetry stems from the ordering convention introduced by the solid line in the picture, according to which the spin-up site is “left” of the spin-down site. With the above definition, the operators within the same SpinHalfFermionSite fulfill the expected commutation relations, for example "Cu Cdd" == - "Cdd Cu", but again the JW on sites left of the operator pair is crucial to get the correct commutation relations globally.
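These onsite operators of the composite site can be written down as Kronecker products (a numpy sketch with the spin-up factor first, matching the ordering convention of the picture; the underscore in Cd_ only avoids a name clash with the spinless creation operator):

```python
import numpy as np

sm = np.array([[0., 1.], [0., 0.]])  # sigma^-  (annihilation)
sp = sm.T.copy()                     # sigma^+  (creation)
Id = np.eye(2)
JW1 = np.diag([1., -1.])             # (-1)^n on one FermionSite

# onsite operators of the SpinHalfFermionSite; the spin-down operators
# carry the Jordan-Wigner sign of the spin-up fermion on the same site
Cu  = np.kron(sm, Id)                # sigma^-_up
Cdu = np.kron(sp, Id)                # sigma^+_up
Cd_ = np.kron(JW1, sm)               # (-1)^{n_up} sigma^-_down, tenpy name "Cd"
Cdd = np.kron(JW1, sp)               # (-1)^{n_up} sigma^+_down
JW  = np.kron(JW1, JW1)              # (-1)^{n_up + n_down}

# the onsite JW signs yield the correct anticommutation within one site:
assert np.allclose(Cu @ Cdd, -(Cdd @ Cu))             # "Cu Cdd" == - "Cdd Cu"
assert np.allclose(Cu @ Cdu + Cdu @ Cu, np.eye(4))    # {c_up, c_up^dagger} = 1
assert np.allclose(Cd_ @ Cdd + Cdd @ Cd_, np.eye(4))  # {c_dn, c_dn^dagger} = 1
```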

Warning

Again, the fermionic operators \(c_{\uparrow,j}, c^\dagger_{\uparrow,j}, c_{\downarrow,j}, c^\dagger_{\downarrow,j}\) correspond to global operators consisting of the Jordan-Wigner string built by the local operator JW on sites l < j and the local operators 'Cu', 'Cdu', 'Cd', 'Cdd' on site j.

Written explicitly in terms of onsite operators defined in the SpinHalfFermionSite, with the j-th entry in the list acting on site j, the relations are:

["JW", ..., "JW", "Cu",  "Id", ..., "Id"]    # for the annihilation operator spin-up
["JW", ..., "JW", "Cd",  "Id", ..., "Id"]    # for the annihilation operator spin-down
["JW", ..., "JW", "Cdu",  "Id", ..., "Id"]   # for the creation operator spin-up
["JW", ..., "JW", "Cdd",  "Id", ..., "Id"]   # for the creation operator spin-down

As you can see, the asymmetry regarding the spins in the definition of the local onsite operators "Cu", "Cd", "Cdu", "Cdd" leads to a symmetric definition in the global sense. If you look at the definitions very closely, you can see that in terms like ["Id", "Cd JW", "JW", "Cd"] the Jordan-Wigner sign \((-1)^{n_{\uparrow,2}}\) appears twice (namely once in the definition of "Cd" and once in the "JW" on site 2) and could in principle be canceled; however, in favor of a simplified handling in the code, we do not recommend canceling it. Similarly, within a spinless FermionSite, one can simplify "Cd JW" == "Cd" and "JW C" == "C", but these relations do not hold in the SpinHalfFermionSite, and for consistency we recommend explicitly keeping the "JW" operator string even in nearest-neighbor models where it is not strictly necessary.

How to handle Jordan-Wigner strings in practice

There are only a few pitfalls where you have to keep the mapping in mind: When building a model, you map the physical fermionic operators to the usual spin/bosonic operators. The algorithms don’t care about the mapping, they just use the given Hamiltonian, be it given as MPO for DMRG or as nearest neighbor couplings for TEBD. Only when you do a measurement (e.g. by calculating an expectation value or a correlation function), you have to reverse this mapping. Be aware that in certain cases, e.g. when calculating the entanglement entropy on a certain bond, you cannot reverse this mapping (in a straightforward way), and thus your results might depend on how you defined the Jordan-Wigner string.

Whatever you do, you should first think about whether (and how much of) the Jordan-Wigner string cancels. For example, for many of the onsite operators (like the particle number operator N or the spin operators in the SpinHalfFermionSite) the Jordan-Wigner string cancels completely, and you can just ignore it both in onsite terms and couplings. In case of two operators acting on different sites, you typically have a Jordan-Wigner string in between (e.g. for the \(c^\dagger_i c_j\) examples described above and below) or no Jordan-Wigner strings at all (e.g. for density-density interactions \(n_i n_j\)). In fact, the case that the Jordan-Wigner string on the left of the first non-trivial operator does not cancel is currently not supported for models and expectation values, as it usually doesn’t appear in practice. For terms involving more operators, things tend to get more complicated, e.g. \(c^\dagger_i c^\dagger_j c_k c_l\) with \(i < j < k < l\) requires a Jordan-Wigner string on sites m with \(i \leq m <j\) or \(k \leq m <l\), but not for \(j < m < k\).
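The rule for where the Jordan-Wigner string survives can be condensed into a tiny helper (an illustrative sketch, not a tenpy function): site m carries a "JW" exactly if an odd number of fermionic operators sits strictly to its right.

```python
def jw_sites(positions):
    """Sites m carrying a Jordan-Wigner sign for fermionic operators at
    the given positions (illustration only): "JW" appears on site m exactly
    if an odd number of operators sits strictly to the right of m."""
    return {m for m in range(max(positions) + 1)
            if sum(p > m for p in positions) % 2 == 1}

# c^dagger_i c^dagger_j c_k c_l with (i, j, k, l) = (0, 2, 4, 6):
# JW on i <= m < j and on k <= m < l, but not in between
assert jw_sites([0, 2, 4, 6]) == {0, 1, 4, 5}
# the two-operator case c^dagger_i c_j with i = 1, j = 3: JW on i <= m < j
assert jw_sites([1, 3]) == {1, 2}
```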

Note

TeNPy keeps track of which onsite operators need a Jordan-Wigner string in the Site class, specifically in need_JW_string and op_needs_JW(). Hence, when you define custom sites or add extra operators to the sites, make sure that op_needs_JW() returns the expected results.

When building a model, the Jordan-Wigner strings need to be taken into account. If you just specify the H_MPO or H_bond, it is your responsibility to use the correct mapping. However, if you use the add_coupling() method of the CouplingModel (or the generalization add_multi_coupling() for more than 2 operators), TeNPy can use the information from the Site class to automatically add Jordan-Wigner strings as needed. Indeed, with the default argument op_string=None, add_coupling will automatically check whether the operators need Jordan-Wigner strings and correspondingly set op_string='JW', str_on_first=True, if necessary. For add_multi_coupling, you cannot even explicitly specify the correct Jordan-Wigner strings; you must use op_string=None, from which it will automatically determine where Jordan-Wigner strings are needed.

Obviously, you should be careful about the convention which of the operators is applied first (in a physical sense, as an operator acting on a state), as this corresponds to a sign of the prefactor. Read the doc-strings of add_coupling() and add_multi_coupling() for details.

As a concrete example, let us specify a hopping \(\sum_{i} (c^\dagger_i c_{i+1} + h.c.) = \sum_{i} (c^\dagger_i c_{i+1} + c^\dagger_{i} c_{i-1})\) in a 1D chain of FermionSite with add_coupling(). The recommended way is just:

add_coupling(strength, 0, 'Cd', 0, 'C', 1, plus_hc=True)

If you want to specify both the Jordan-Wigner string and the h.c. term explicitly, you can use:

add_coupling(strength, 0, 'Cd', 0, 'C', 1, op_string='JW', str_on_first=True)
add_coupling(strength, 0, 'Cd', 0, 'C', -1, op_string='JW', str_on_first=True)

Slightly more complicated, to specify the hopping \(\sum_{\langle i, j\rangle, s} (c^\dagger_{s,i} c_{s,j} + h.c.)\) in the Fermi-Hubbard model on a 2D square lattice, we could use:

for (dx, dy) in [(1, 0), (0, 1)]:
    add_coupling(strength, 0, 'Cdu', 0, 'Cu', (dx, dy), plus_hc=True)  # spin up
    add_coupling(strength, 0, 'Cdd', 0, 'Cd', (dx, dy), plus_hc=True)  # spin down

# or without `plus_hc`
for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)]:  # include -dx !
    add_coupling(strength, 0, 'Cdu', 0, 'Cu', (dx, dy))  # spin up
    add_coupling(strength, 0, 'Cdd', 0, 'Cd', (dx, dy))  # spin down

# or specifying the 'JW' string explicitly
for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
    add_coupling(strength, 0, 'Cdu', 0, 'Cu', (dx, dy), 'JW', True)  # spin up
    add_coupling(strength, 0, 'Cdd', 0, 'Cd', (dx, dy), 'JW', True)  # spin down

The most important functions for doing measurements are probably expectation_value() and correlation_function(). Again, if all the Jordan-Wigner strings cancel, you don’t have to worry about them at all, e.g. for many onsite operators or correlation functions involving only number operators. If you build multi-site operators to be measured by expectation_value, take care to include the Jordan-Wigner string correctly.

Some MPS methods like correlation_function(), expectation_value_term() and expectation_value_terms_sum() automatically add Jordan-Wigner strings (at least with default arguments). Other, more low-level functions like expectation_value_multi_sites() don’t. Hence, you should always check during measurements whether the function used needs special treatment for Jordan-Wigner strings.

Saving to disk: input/output

Using pickle

A simple and pythonic way to store data of TeNPy arrays is to use pickle from the Python standard library. Pickle can store (almost) arbitrary python objects, and the Array is no exception (and neither are other TeNPy classes).

Say that you have run DMRG to get a ground state psi as an MPS. With pickle, you can save it to disk as follows:

import pickle
with open('my_psi_file.pkl', 'wb') as f:
    pickle.dump(psi, f)

Here, the with ... : structure ensures that the file gets closed after the pickle dump, and the 'wb' indicates the file opening mode “write binary”. Reading the data from disk is as easy as ('rb' for reading binary):

with open('my_psi_file.pkl', 'rb') as f:
    psi = pickle.load(f)

Note

It is a good (scientific) practice to include meta-data in the file, like the parameters you used to generate that state. Instead of just the psi, you can simply store a dictionary containing psi and other data, e.g., data = {'psi': psi, 'dmrg_params': dmrg_params, 'model_params': model_params}. This can save you a lot of pain when you come back looking at the files a few months later and have forgotten what you did to generate them!
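A minimal sketch of this pattern (plain standard-library code; psi is replaced by a stand-in object and the file goes to a temporary directory):

```python
import os
import pickle
import tempfile

# bundle the state with the parameters that generated it
data = {"psi": [1.0, 2.0, 3.0],              # stand-in for the actual MPS
        "dmrg_params": {"chi_max": 100},
        "model_params": {"L": 6, "g": 1.3}}

path = os.path.join(tempfile.mkdtemp(), "my_data_file.pkl")
with open(path, 'wb') as f:
    pickle.dump(data, f)
with open(path, 'rb') as f:
    loaded = pickle.load(f)

assert loaded == data                         # full round trip
assert loaded["model_params"]["g"] == 1.3     # the meta-data travels along
```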

In some cases, compression can significantly reduce the space needed to save the data. This can for example be done with gzip (also in the Python standard library). However, be warned that it might cause longer loading and saving times, i.e. it comes at the penalty of more CPU usage for the input/output. In Python, this requires only small adjustments:

import pickle
import gzip

# to save:
with gzip.open('my_data_file.pkl', 'wb') as f:
    pickle.dump(data, f)
# and to load:
with gzip.open('my_data_file.pkl', 'rb') as f:
    data = pickle.load(f)
Using HDF5 with h5py

While pickle is great for simple input/output of python objects, it also has disadvantages. Probably the most dramatic one is the limited portability: saving data on one PC and loading it on another one might fail! Even exporting data from Python 2 to load it in Python 3 on the same machine can cause quite some trouble. Moreover, pickle requires loading the whole file at once, which might be unnecessary if you only need part of the data, or might even lead to memory problems if you have more data on disk than fits into RAM.

Hence, we support saving to HDF5 files as an alternative. The h5py package provides a dictionary-like interface for the file/group objects with numpy-like data sets, and is quite easy to use. If you don’t know about HDF5, read the quickstart of the h5py documentation (and this guide).

The implementation can be found in the tenpy.tools.hdf5_io module with the Hdf5Saver and Hdf5Loader classes and the wrapper functions save_to_hdf5(), load_from_hdf5().

The usage is very similar to pickle:

import h5py
from tenpy.tools import hdf5_io

data = {"psi": psi,  # e.g. an MPS
        "model": my_model,
        "parameters": {"L": 6, "g": 1.3}}

with h5py.File("file.h5", 'w') as f:
    hdf5_io.save_to_hdf5(f, data)
# ...
with h5py.File("file.h5", 'r') as f:
    data = hdf5_io.load_from_hdf5(f)
    # or for partial reading:
    pars = hdf5_io.load_from_hdf5(f, "/parameters")

Note

The hickle package imitates the pickle functionality while saving the data to HDF5 files. However, since it aims to be close to pickle, it results in a more complicated data structure than we want here.

Note

To use the export/import features to HDF5, you need to install the h5py python package (and hence some version of the HDF5 library).

Data format specification for saving to HDF5

This section motivates and defines the format in which we save data of TeNPy-defined classes. The goal is to have the save_to_hdf5() function for saving sufficiently simple python objects (supported by the format) to disk in an HDF5 file, such that they can be reconstructed with the load_from_hdf5() function, as outlined in the example code above.

Guidelines of the format:

  1. Store enough data such that load_from_hdf5() can reconstruct a copy of the object (provided that the save did not fail with an error).

  2. Objects of a type supported by the HDF5 datasets (with the h5py interface) should be directly stored as h5py Dataset. Such objects are for example numpy arrays (of non-object dtype), scalars and strings.

  3. Allow saving (nested) python lists, tuples and dictionaries with values (and keys) which can be saved.

  4. Allow user-defined classes to implement a well-defined interface that allows saving instances of that class, hence extending what data can be saved. An instance of a class supporting the interface gets saved as an HDF5 Group. The attributes of the instance are stored as entries of the group; metadata like the type should be stored in HDF5 attributes, see attributes.

  5. Simple and intuitive, human-readable structure for the HDF5 paths. For example, saving a simple dictionary {'a': np.arange(10), 'b': 123.45} should result in an HDF5 file with just the two data sets /a and /b.

  6. Allow loading only a subset of the data by specifying the path of the HDF5 group to be loaded. For the above example, specifying the path /b should result in loading the float 123.45, not the array.

  7. Avoid unnecessary copies if the same python object is referenced by different names, e.g., for the data {'c': large_obj, 'd': large_obj} with two references to the same large_obj, save it only once and use HDF5 hard-links such that /c and /d are the same HDF5 dataset/group. Also avoid the copies during loading, i.e., the loaded dictionary should again have two references to a single object large_obj. This is also necessary to allow saving and loading of objects with cyclic references.

  8. Loading a dataset should be (fairly) secure and not execute arbitrary python code (even if the dataset was manipulated), as is the case for pickle.

    Disclaimer: I’m not a security expert, so I can’t guarantee that… Also, loading an HDF5 file can import other python modules, so importing a manipulated file is not secure if you downloaded a malicious python file as well.
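Guidelines 5) and 7) can be illustrated with a few lines of plain Python (a conceptual sketch of the intended layout, not the actual hdf5_io implementation):

```python
def flatten(obj, path="/"):
    """Map a nested dict to HDF5-like paths (guideline 5; sketch only)."""
    if isinstance(obj, dict):
        out = {}
        for key, val in obj.items():
            out.update(flatten(val, path + key + "/"))
        return out
    return {path.rstrip("/"): obj}

data = {"a": list(range(10)), "b": 123.45}
flat = flatten(data)
assert set(flat) == {"/a", "/b"}     # two entries with human-readable paths
assert flat["/b"] == 123.45          # guideline 6: pick out a single path

# guideline 7: the same object referenced twice should be stored only once
large_obj = list(range(1000))
d = {"c": large_obj, "d": large_obj}
assert d["c"] is d["d"]              # same identity -> save once, hard-link
```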

The full format specification is given by what the code in hdf5_io does… Since this is not trivial to understand, let me summarize it here:

  • Following 2), simple scalars, strings and numpy arrays are saved as Dataset. Other objects are saved as an HDF5 Group, with the actual data being saved as group members (as sub-groups and sub-datasets) or as attributes (for metadata or simple data).

  • The type of the object is stored in the HDF5 attribute 'type', which is one of the global REPR_* variables in tenpy.tools.hdf5_io. The type determines the format for saving/loading of builtin types (list, …).

  • User-defined classes which should be possible to export/import need to implement the methods save_hdf5 and from_hdf5 as specified in Hdf5Exportable. When saving such a class, the attribute 'type' is automatically set to 'instance', and the class name and module are saved under the attributes 'module' and 'class'. During loading, this information is used to automatically import the module, get the class and call the classmethod from_hdf5 for reconstruction. This can only work if the class definition already exists, i.e., you can only save class instances, not classes themselves.

  • For most (python) classes, simply subclassing Hdf5Exportable should work to make the class exportable. The latter saves the contents of __dict__, with the extra attribute 'format' specifying whether the dictionary is “simple” (see below).

  • The None object is saved as a group with the attribute 'type' being 'None' and no subgroups.

  • For iterables (list, tuple and set), we simply enumerate the entries and save them as group members under the names '0', '1', '2', ..., with an additional 'len' attribute.

  • The format for dictionaries depends on whether all keys are “simple”, which we define as being strings which are valid path names in HDF5, see valid_hdf5_path_component(). Following 5), the keys of a simple dictionary are directly used as names for group members, with the values being whatever object the group member represents.

  • Partial loading along 6) is possible by directly specifying the subgroup or the path to load_from_hdf5().

  • Guideline 7) is ensured as much as possible. However, there is a bug/exception: tuples with cyclic references are not re-constructed correctly; the inner objects will be lists instead of tuples (but with the same object entries).

Finally, we have to mention that many TeNPy classes are Hdf5Exportable. In particular, the Array supports this. To see what the exact format for those classes is, look at the save_hdf5 and from_hdf5 methods of those classes.

Note

There can be multiple possible output formats for the same object. The dictionary – with the format for simple keys or general keys – is such an example, but user-defined classes can use the same technique in their from_hdf5 method. The user might also explicitly choose a “lossy” output format (e.g. “flat” for np_conserved Arrays and LegCharges).

Tip

The above format specification is quite general and not bound to TeNPy. Feel free to use it in your own projects ;-) To separate the development, versions and issues of the format clearly from TeNPy, we maintain the code for it in a separate git repository: https://github.com/tenpy/hdf5_io

Literature

This is a (by far non-exhaustive) list of some references for the various ideas behind the code. They can be cited from the python doc-strings using the format [Author####]_. Within each category, we sort the references by year and author.

General reading

[Schollwoeck2011] is an extensive introduction to MPS, DMRG and TEBD with lots of details on the implementations, and a classic read, although a bit lengthy. Our [TeNPyNotes] are a shorter summary of the important concepts, similar to [Orus2014]. [Hubig2019] is a very good, recent review focusing on time evolution with MPS. The lecture notes of [Eisert2013] explain the area law as motivation for tensor networks very well. PEPS are for example reviewed in [Verstraete2009], [Eisert2013] and [Orus2014]. [Stoudenmire2011] reviews the use of DMRG for 2D systems. [Cirac2009] discusses the different groups of tensor network states.

Cirac2009

“Renormalization and tensor product states in spin chains and lattices” J. I. Cirac and F. Verstraete, Journal of Physics A: Mathematical and Theoretical, 42, 50 (2009) arXiv:0910.1130 doi:10.1088/1751-8113/42/50/504004

Verstraete2009

“Matrix Product States, Projected Entangled Pair States, and variational renormalization group methods for quantum spin systems” F. Verstraete and V. Murg and J.I. Cirac, Advances in Physics 57(2), 143-224 (2008) arXiv:0907.2796 doi:10.1080/14789940801912366

Schollwoeck2011

“The density-matrix renormalization group in the age of matrix product states” U. Schollwoeck, Annals of Physics 326, 96 (2011), arXiv:1008.3477 doi:10.1016/j.aop.2010.09.012

Stoudenmire2011

“Studying Two Dimensional Systems With the Density Matrix Renormalization Group” E.M. Stoudenmire, Steven R. White, Ann. Rev. of Cond. Mat. Physics, 3: 111-128 (2012), arXiv:1105.1374 doi:10.1146/annurev-conmatphys-020911-125018

Eisert2013

“Entanglement and tensor network states” J. Eisert, Modeling and Simulation 3, 520 (2013) arXiv:1308.3318

Orus2014

“A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States” R. Orus, Annals of Physics 349, 117-158 (2014) arXiv:1306.2164 doi:10.1016/j.aop.2014.06.013

Hubig2019

“Time-evolution methods for matrix-product states” S. Paeckel, T. Köhler, A. Swoboda, S. R. Manmana, U. Schollwöck, C. Hubig, arXiv:1901.05824

Algorithm developments

[White1992] is the invention of DMRG, which started everything. [Vidal2004] introduced TEBD. [White2005] and [Hubig2015] solved problems for single-site DMRG. [McCulloch2008] was a huge step forward to solve convergence problems for infinite DMRG. [Singh2009], [Singh2010] explain how to incorporate symmetries. [Haegeman2011] introduced TDVP, explained more accessibly in [Haegeman2016]. [Zaletel2015] is another standard method for time-evolution with long-range Hamiltonians. [Karrasch2013] gives some tricks to do finite-temperature simulations (DMRG), which are extended a bit in [Hauschild2018]. [Vidal2007] introduced MERA.

White1992

“Density matrix formulation for quantum renormalization groups” S. White, Phys. Rev. Lett. 69, 2863 (1992) doi:10.1103/PhysRevLett.69.2863, S. White, Phys. Rev. B 48, 10345 (1993) doi:10.1103/PhysRevB.48.10345

Vidal2004

“Efficient Simulation of One-Dimensional Quantum Many-Body Systems” G. Vidal, Phys. Rev. Lett. 93, 040502 (2004), arXiv:quant-ph/0310089 doi:10.1103/PhysRevLett.93.040502

White2005

“Density matrix renormalization group algorithms with a single center site” S. White, Phys. Rev. B 72, 180403(R) (2005), arXiv:cond-mat/0508709 doi:10.1103/PhysRevB.72.180403

Vidal2007

“Entanglement Renormalization” G. Vidal, Phys. Rev. Lett. 99, 220405 (2007), arXiv:cond-mat/0512165, doi:10.1103/PhysRevLett.99.220405

McCulloch2008

“Infinite size density matrix renormalization group, revisited” I. P. McCulloch, arXiv:0804.2509

Singh2009

“Tensor network decompositions in the presence of a global symmetry” S. Singh, R. Pfeifer, G. Vidal, Phys. Rev. A 82, 050301(R) (2010), arXiv:0907.2994 doi:10.1103/PhysRevA.82.050301

Stoudenmire2010

“Minimally Entangled Typical Thermal State Algorithms” E.M. Stoudenmire, Steven R. White, New J. Phys. 12, 055026 (2010), arXiv:1002.1305 doi:10.1088/1367-2630/12/5/055026

Singh2010

“Tensor network states and algorithms in the presence of a global U(1) symmetry” S. Singh, R. Pfeifer, G. Vidal, Phys. Rev. B 83, 115125 (2011), arXiv:1008.4774 doi:10.1103/PhysRevB.83.115125

Haegeman2011

“Time-Dependent Variational Principle for Quantum Lattices” J. Haegeman, J. I. Cirac, T. J. Osborne, I. Pizorn, H. Verschelde, F. Verstraete, Phys. Rev. Lett. 107, 070601 (2011), arXiv:1103.0936 doi:10.1103/PhysRevLett.107.070601

Karrasch2013

“Reducing the numerical effort of finite-temperature density matrix renormalization group calculations” C. Karrasch, J. H. Bardarson, J. E. Moore, New J. Phys. 15, 083031 (2013), arXiv:1303.3942 doi:10.1088/1367-2630/15/8/083031

Zaletel2015

“Time-evolving a matrix product state with long-ranged interactions” M. P. Zaletel, R. S. K. Mong, C. Karrasch, J. E. Moore, F. Pollmann, Phys. Rev. B 91, 165112 (2015), arXiv:1407.1832 doi:10.1103/PhysRevB.91.165112

Hubig2015

“Strictly single-site DMRG algorithm with subspace expansion” C. Hubig, I. P. McCulloch, U. Schollwoeck, F. A. Wolf, Phys. Rev. B 91, 155115 (2015), arXiv:1501.05504 doi:10.1103/PhysRevB.91.155115

Haegeman2016

“Unifying time evolution and optimization with matrix product states” J. Haegeman, C. Lubich, I. Oseledets, B. Vandereycken, F. Verstraete, Phys. Rev. B 94, 165116 (2016), arXiv:1408.5056 doi:10.1103/PhysRevB.94.165116

Hauschild2018

“Finding purifications with minimal entanglement” J. Hauschild, E. Leviatan, J. H. Bardarson, E. Altman, M. P. Zaletel, F. Pollmann, Phys. Rev. B 98, 235163 (2018), arXiv:1711.01288 doi:10.1103/PhysRevB.98.235163

Contributing

There are lots of ways you can help, even if you don’t want to dig deep into the source code. You are welcome to do any of the following things, all of them are very helpful!

  • Report bugs and problems, such that they can be fixed.

  • Implement new models.

  • Update and extend the documentation.

  • Give feedback on how you like TeNPy and what you would like to see improved.

  • Help fixing bugs.

  • Help fixing minor issues.

  • Extend the functionality by implementing new functions, methods, and algorithms.

The code is maintained in a git repository; the official repository is on GitHub. Even if you’re not yet on the developer team, you can still submit pull requests on GitHub. If you’re unsure how or what to do, you can ask for help in the [TeNPyForum]. If you want to become a member of the developer team, just ask ;-)

Thank You!

Coding Guidelines

To keep consistency, we ask you to comply with the following guidelines for contributions. However, these are just guidelines - it still helps if you contribute something, even if it doesn’t follow these rules ;-)

  • Use a code style based on PEP 8. The git repo includes a config file .style.yapf for the python package yapf. yapf is a tool to auto-format code, e.g., by the command yapf -i some/file (-i for “in place”). We run yapf on a regular basis on the github master branch. If your branch diverged, it might help to run yapf before merging.

    Note

    Since no tool is perfect, you can format some regions of code manually and enclose them with the special comments # yapf: disable and # yapf: enable.

  • Every function/class/module should be documented by its doc-string, see PEP 257. We auto-format the doc-strings with docformatter on a regular basis.

    Additional documentation for the user guide is in the folder doc/.

    The documentation uses reStructuredText. If you are new to reStructuredText, read this introduction. We use the numpy style for doc-strings (with the napoleon extension to sphinx). You can read about them in these instructions for the doc strings. In addition, you can take a look at the following example file. Helpful hints on top of that:

    r"""<- this r makes me a raw string, thus '\' has no special meaning.
    Otherwise you would need to escape backslashes, e.g. in math formulas.
    
    You can include cross references to classes, methods, functions, modules like
    :class:`~tenpy.linalg.np_conserved.Array`, :meth:`~tenpy.linalg.np_conserved.Array.to_ndarray`,
    :func:`tenpy.tools.math.toiterable`, :mod:`tenpy.linalg.np_conserved`.
    The ~ in the beginning makes only the last part of the name appear in the generated documentation.
    Documents of the userguide can be referenced with :doc:`/intro_npc` even from inside the doc-strings.
    You can also cross-link to other documentations, e.g. :class:`numpy.ndarray`, :func:`scipy.linalg.svd` and :mod:`numpy` will work.
    
    Moreover, you can link to github issues, arXiv papers, dois, and topics in the community forum with
    e.g. :issue:`5`, :arxiv:`1805.00055`, :doi:`10.1000/1` and :forum:`3`.
    
    
    Write inline formulas as :math:`H |\Psi\rangle = E |\Psi\rangle` or displayed equations as
    .. math ::
    
       e^{i\pi} + 1 = 0
    
    In doc-strings, math can only be used in the Notes section.
    To refer to variables within math, use `\mathtt{varname}`.
    
    .. todo ::
    
       This block can describe things which need to be done and is automatically included in a section of :doc:`todo`.
    """
    
  • Use relative imports within TeNPy. Example:

    from ..linalg import np_conserved as npc
    
  • Use the python package pytest for testing. Run it simply with pytest in tests/. You should make sure that all tests run through, before you git push back into the public repo. Long-running tests are marked with the attribute slow; for a quick check you can also run pytest -m "not slow".

    We have set up github actions to automatically run the tests.

  • Conversely, if you write new functions, please also include suitable tests!

  • During development, you might introduce # TODO comments. But also try to remove them again later! If you’re not 100% sure that you will remove it soon, please add a doc-string with a .. todo :: block, such that we can keep track of it.

    Unfinished functions should raise NotImplementedError().

  • Summarize the changes you have made in the Changelog under [latest].

  • If you want to try out new things in temporary files: any folder named playground is ignored by git.

  • If you add a new toycode or example: add a reference to include it in the documentation.

  • We’ve created a sphinx extension for documenting config-option dictionaries. If a class takes a dictionary of options, we usually call it options, convert it to a Config at the very beginning of the __init__ with asConfig(), save it as self.options, and document it in the class doc-string with a .. cfg:config :: directive. The name of the config should usually be the class name (if that is sufficiently unique), or for algorithms directly the common name of the algorithm, e.g. “DMRG”; use the same name for the documentation of the .. cfg:config :: directive as for the Config class instance. Attributes which simply read out options should be documented by just referencing the options with the :cfg:option:`configname.optionname` role.
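As a concrete illustration of the testing guideline above, a new test file in tests/ is just a plain pytest module. The function kron_dim and its tests below are made up for illustration; only the slow marker follows the convention described above:

```python
import pytest

def kron_dim(d1, d2):
    """Hypothetical function under test: dimension of a Kronecker product space."""
    return d1 * d2

def test_kron_dim():
    # quick check, runs in the default `pytest` invocation
    assert kron_dim(2, 3) == 6

@pytest.mark.slow
def test_kron_dim_large():
    # long-running check, skipped when running `pytest -m "not slow"`
    assert kron_dim(2**10, 2**10) == 2**20
```

Running pytest in tests/ collects both tests automatically; pytest -m "not slow" deselects the marked one.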

Building the documentation

You can use Sphinx to generate the full documentation in various formats (including HTML or PDF) yourself, as described in the following. First, install the extra requirements, i.e., Sphinx, with:

pip install -r doc/requirements.txt

Note

Plotting the inheritance graphs also requires Graphviz. If you use conda, you can install it with conda install graphviz.

Afterwards, simply go to the folder doc/ and run the following command:

make html

This should generate the HTML documentation in the folder doc/sphinx_build/html. Open this folder (or, to be precise, the file index.html in it) in your web browser and enjoy this and other documentation beautifully rendered, with cross-links, math formulas and even a search function. Other output formats are available as other make targets, e.g., make latexpdf.

Note

Building the documentation with sphinx requires loading the modules. The conf.py adjusts the python path to include tenpy from the root directory of the repository.

To-Do list

You can check https://github.com/tenpy/tenpy/issues for things to be done.

The following list is auto-generated by sphinx, extracting .. todo :: blocks from doc-strings of the code.

Todo

Write UserGuide!!!

(The original entry is located in tenpy/algorithms/dmrg.py, docstring of tenpy.algorithms.dmrg, line 30.)

Todo

Rebuild TDVP engine as subclasses of sweep.

(The original entry is located in tenpy/algorithms/mps_common.py, docstring of tenpy.algorithms.mps_common, line 21.)

Todo

  • implement or wrap netcon.m, a function to find optimal contraction sequences

    (arXiv:1304.6112)

  • improve helpfulness of Warnings

  • _do_trace: trace over all pairs of legs at once. need the corresponding npc function first.

(The original entry is located in tenpy/algorithms/network_contractor.py, docstring of tenpy.algorithms.network_contractor, line 10.)

Todo

This is still a beta version, use with care. The interface might still change.

(The original entry is located in tenpy/algorithms/tdvp.py, docstring of tenpy.algorithms.tdvp, line 12.)

Todo

long-term: Much of the code is similar to that in DMRG. To avoid too much duplicated code, we should have a general way to sweep through an MPS and update one or two sites, used in both cases.

(The original entry is located in tenpy/algorithms/tdvp.py, docstring of tenpy.algorithms.tdvp, line 16.)

Todo

add further terms (e.g. c^dagger c^dagger + h.c.) to the Hamiltonian.

(The original entry is located in tenpy/models/fermions_spinless.py, docstring of tenpy.models.fermions_spinless, line 3.)

Todo

WARNING: These models are still under development and not yet tested for correctness. Use at your own risk! Replicate known results to confirm models work correctly. Long term: implement different lattices. Long term: implement variable hopping strengths Jx, Jy.

(The original entry is located in tenpy/models/hofstadter.py, docstring of tenpy.models.hofstadter, line 3.)

Todo

make sure this function is used for expectation values…

(The original entry is located in tenpy/models/lattice.py, docstring of tenpy.models.lattice.Honeycomb.mps2lat_values, line 53.)

Todo

make sure this function is used for expectation values…

(The original entry is located in tenpy/models/lattice.py, docstring of tenpy.models.lattice.IrregularLattice.mps2lat_values, line 53.)

Todo

make sure this function is used for expectation values…

(The original entry is located in tenpy/models/lattice.py, docstring of tenpy.models.lattice.Kagome.mps2lat_values, line 53.)

Todo

make sure this function is used for expectation values…

(The original entry is located in tenpy/models/lattice.py, docstring of tenpy.models.lattice.Ladder.mps2lat_values, line 53.)

Todo

make sure this function is used for expectation values…

(The original entry is located in tenpy/models/lattice.py, docstring of tenpy.models.lattice.Lattice.mps2lat_values, line 53.)

Todo

make sure this function is used for expectation values…

(The original entry is located in tenpy/models/lattice.py, docstring of tenpy.models.lattice.TrivialLattice.mps2lat_values, line 53.)

Todo

implement MPO for time evolution…

(The original entry is located in tenpy/models/model.py, docstring of tenpy.models.model.MPOModel, line 7.)

Todo

make sure this function is used for expectation values…

(The original entry is located in tenpy/models/toric_code.py, docstring of tenpy.models.toric_code.DualSquare.mps2lat_values, line 53.)

Todo

This is a naive, expensive implementation contracting the full network. Try to follow arXiv:1711.01104 for a better estimate; would that even work in the infinite limit?

(The original entry is located in tenpy/networks/mpo.py, docstring of tenpy.networks.mpo.MPO.variance, line 5.)

Todo

might be useful to add a “cleanup” function which removes operators cancelling each other and/or unused states. Or better use a ‘compress’ of the MPO?

(The original entry is located in tenpy/networks/mpo.py, docstring of tenpy.networks.mpo.MPOGraph, line 17.)

Todo

Make more general: it should be possible to specify states as strings.

(The original entry is located in tenpy/networks/mps.py, docstring of tenpy.networks.mps.build_initial_state, line 14.)

Todo

One can also look at the canonical ensembles by defining the conserved quantities differently, see Barthel (2016), arXiv:1607.01696 for details. Idea: usual charges on p, trivial charges on q; fix total charge to desired value. I think it should suffice to implement another from_infiniteT.

(The original entry is located in tenpy/networks/purification_mps.py, docstring of tenpy.networks.purification_mps, line 106.)

Todo

Check if Jordan-Wigner strings for 4x4 operators are correct.

(The original entry is located in tenpy/networks/site.py, docstring of tenpy.networks.site.SpinHalfFermionSite, line 61.)

Todo

For memory caching with big MPO environments, we need a Hdf5Cacher clearing the memo’s every now and then (triggered by what?).

(The original entry is located in tenpy/tools/hdf5_io.py, docstring of tenpy.tools.hdf5_io, line 60.)

Tenpy main module

  • full name: tenpy

  • parent module: tenpy

  • type: module

Submodules

algorithms

A collection of algorithms such as TEBD and DMRG.

linalg

Linear-algebra tools for tensor networks.

models

Definition of the various models.

networks

Definitions of tensor networks like MPS and MPO.

tools

A collection of tools: mostly short yet quite useful functions.

version

Access to version of this library.

Module description

TeNPy - a Python library for Tensor Network Algorithms

TeNPy is a library for algorithms working with tensor networks, e.g., matrix product states and matrix product operators, designed to study the physics of strongly correlated quantum systems. The code is intended to be accessible for newcomers and yet powerful enough for day-to-day research.

tenpy.__version__ = '0.7.1'

hard-coded version string

tenpy.__full_version__ = '0.7.1'

full version from git description, and numpy/scipy/python versions

tenpy.show_config()[source]

Print information about the version of tenpy and used libraries.

The information printed is tenpy.version.version_summary.
