Tensor Network Python (TeNPy)¶
TeNPy (short for ‘Tensor Network Python’) is a Python library for the simulation of strongly correlated quantum systems with tensor networks.
The philosophy of this library is to strike a balance between good readability and usability for newcomers, while at the same time providing powerful algorithms and fast development of new algorithms for experts. For good readability, we include extensive documentation next to the code, both in Python doc strings and separately as user guides, as well as simple example codes and even toy codes, which just demonstrate various algorithms (like TEBD and DMRG) in ~100 lines per file.
How do I get set up?¶
If you have the conda package manager, you can install the latest released version of TeNPy with:
conda install --channel=conda-forge physics-tenpy
Further details and alternative methods can be found in the file doc/INSTALL.rst. The latest version of the source code can be obtained from https://github.com/tenpy/tenpy.
How to read the documentation¶
The documentation is available online at https://tenpy.readthedocs.io/. The documentation is roughly split in two parts: on one hand the full “reference” containing the documentation of all functions, classes, methods, etc., and on the other hand the “user guide” containing some introductions and additional explanations.
The documentation is based on Python’s docstrings, and some additional *.rst
files located in the folder doc/ of the repository.
All documentation is formatted as reStructuredText,
which means it is quite readable as plain text, but can also be converted to other formats.
If you like it simple, you can just use the interactive python help(), a Python IDE of your choice or jupyter notebooks, or just read the source.
Moreover, the documentation gets converted into HTML using Sphinx, and is made available online at https://tenpy.readthedocs.io/.
The big advantages of the (online) HTML documentation are a lot of crosslinks between different functions, and even a search function.
If you prefer yet another format, you can try to build the documentation yourself, as described in doc/contr/build_doc.rst.
Help! I looked at the documentation, but I don’t understand how …?¶
We have set up a community forum at https://tenpy.johanneshauschild.de/, where you can post questions and hopefully find answers. Once you have gained some experience with TeNPy, you might also be able to contribute to the community and answer some questions yourself ;) We also use this forum for official announcements, for example when we release a new version.
I found a bug¶
You might want to check the GitHub issues to see if someone else has already reported the same problem. To report a new bug, just open a new issue on GitHub. If you already know how to fix it, you can just create a pull request :) If you are not sure whether your problem is a bug or a feature, you can also ask for help in the TeNPy forum.
Citing TeNPy¶
When you use TeNPy for a work published in an academic journal, you can cite this paper to acknowledge the work put into the development of TeNPy.
(The license of TeNPy does not force you, however.)
For example, you could add the sentence "Calculations were performed using the TeNPy Library (version X.X.X)\cite{tenpy}."
in the acknowledgements or in the main text.
The corresponding BibTeX entry would be the following (the \url{...} requires \usepackage{hyperref} in the LaTeX preamble):
@Article{tenpy,
title={{Efficient numerical simulations with Tensor Networks: Tensor Network Python (TeNPy)}},
author={Johannes Hauschild and Frank Pollmann},
journal={SciPost Phys. Lect. Notes},
pages={5},
year={2018},
publisher={SciPost},
doi={10.21468/SciPostPhysLectNotes.5},
url={https://scipost.org/10.21468/SciPostPhysLectNotes.5},
archiveprefix={arXiv},
eprint={1805.00055},
note={Code available from \url{https://github.com/tenpy/tenpy}},
}
To keep us motivated, you can also include your work into the list of papers using TeNPy.
Acknowledgment¶
This work was funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05CH11231 through the Scientific Discovery through Advanced Computing (SciDAC) program (KC23DAC Topological and Correlated Matter via Tensor Networks and Quantum Monte Carlo).
License¶
The code is licensed under GPL v3.0, given in the file LICENSE of the repository and in the online documentation at https://tenpy.readthedocs.io/en/latest/install/license.html.
Installation instructions¶
With the [conda] package manager you can install TeNPy with:
conda install --channel=conda-forge physics-tenpy
More details and tricks can be found in Installation with conda from conda-forge.
If you don’t have conda, but you have [pip], you can install it with:
pip install physics-tenpy
More details for this method can be found in Installation from PyPI with pip.
We also have a bunch of optional Extra requirements, which you don’t have to install to use TeNPy, but you might want to.
The method with the minimal requirements is to just download the source and adjust the PYTHONPATH, as described in Installation from source. This is also the recommended way if you plan to modify parts of the source.
Installation with conda from conda-forge¶
We provide a package for the [conda] package manager in the conda-forge channel, so you can install TeNPy with:
conda install --channel=conda-forge physics-tenpy
Following the recommendation of conda-forge, you can also make conda-forge the default channel as follows:
conda config --add channels conda-forge
conda config --set channel_priority strict
If you have done this, you don’t need to specify the --channel=conda-forge explicitly.
Moreover, it is actually recommended to create a separate environment. To create a conda environment with the name tenpy, where the TeNPy package (called physics-tenpy) is installed:
conda create --name tenpy --channel=conda-forge physics-tenpy
In that case, you need to activate the environment each time you want to use the package with:
conda activate tenpy
The big advantage of this approach is that it allows multiple versions of software to be installed in parallel, e.g., if one of your projects requires python>=3.8 and another one requires an old library which doesn’t support that. Further info can be found in the conda documentation.
Installation from PyPI with pip¶
Preparation: install requirements¶
If you have the [conda] package manager from anaconda, you can just download the environment.yml file from the repository and create a new environment (called tenpy, if you don’t specify another name) for TeNPy with all the required packages:
conda env create -f environment.yml
conda activate tenpy
Further information on conda environments can be found in the conda documentation. Note that installing conda also installs a version of [pip].
Alternatively, if you only have [pip] (and not [conda]), install the required packages with the following command (after downloading the requirements.txt file from the repository):
pip install -r requirements.txt
Note
Make sure that the pip you call corresponds to the python version you want to use. (One way to ensure this is to use python -m pip instead of a simple pip.) Also, you might need to use the argument --user to install the packages to your home directory, if you don’t have sudo rights. (Using --user with conda’s pip is discouraged, though.)
Warning
It might just be a temporary problem, but we found that the pip version of numpy can be incompatible with the python distribution of anaconda. If you have installed the Intel Python or Anaconda distribution, use the conda package manager instead of pip for updating the packages whenever possible!
Installing the latest stable TeNPy package¶
Now we are ready to install TeNPy. It should be as easy as the following command (note the different package name: ‘tenpy’ was already taken!):
pip install physics-tenpy
Note
If the installation fails, don’t give up yet. In the minimal version, tenpy requires only pure Python with somewhat up-to-date NumPy and SciPy. See Installation from source.
Installation of the latest version from Github¶
To get the latest development version from the github master branch, you can use:
pip install git+https://github.com/tenpy/tenpy.git
This should already have the latest features described in /changelog/latest. Disclaimer: this might sometimes be broken, although we do our best to keep it stable as well.
Installation from the downloaded source folder¶
Finally, if you downloaded the source and want to modify parts of it, you can install TeNPy in development mode with --editable:
cd $HOME/tenpy # after downloading the source, go to the repository
pip install --editable .
Uninstalling a pip-installed version¶
In all of the above cases, you can uninstall tenpy with:
pip uninstall physics-tenpy
Updating to a new version¶
Before you update, take a look at the Release Notes, which list the changes, fixes, and new features. Most importantly, they have a section on backwards-incompatible changes (i.e., changes which may break your existing code) along with information on how to fix it. Of course, we try to avoid introducing such incompatible changes, but sometimes, there’s no way around them. If you skip some intermediate version(s) for the update, read also the release notes of those!
How to update depends a little bit on the way you installed TeNPy. Of course, you always have the option to just remove the TeNPy files (possibly with a pip uninstall physics-tenpy or conda uninstall physics-tenpy), and to start over with downloading and installing the newest version.
When installed with conda¶
When you installed TeNPy with [conda], you just need to activate the corresponding environment (e.g. conda activate tenpy) and do a:
conda update physics-tenpy
When installed with pip¶
When you installed TeNPy with [pip], you just need to do a:
pip install --upgrade physics-tenpy
When installed from source¶
If you used git clone ... to download the repository, you can update to the newest version using [git]. First, briefly check with git status that you didn’t change anything you need to keep. Then, do a git pull to download (and possibly merge) the newest commits from the repository.
Note
If some Cython file (ending in .pyx) got renamed/removed (e.g., when updating from v0.3.0 to v0.4.0), you first need to remove the corresponding binary files. You can do so with the command bash ./cleanup.sh. Furthermore, whenever one of the Cython files (ending in .pyx) changed, you need to recompile it. To do that, simply call the command bash ./compile.sh again. If you are unsure whether a Cython file changed, compiling again doesn’t hurt.
To summarize, you need to execute the following bash commands in the repository:
# 0) make a backup of the whole folder
git status # check the output whether you modified some files
git pull
bash ./cleanup.sh # (confirm with 'y')
bash ./compile.sh
Installation from source¶
Minimal Requirements¶
This code works with a minimal requirement of pure Python>=3.5 and somewhat recent versions of NumPy and SciPy.
Getting the source¶
The following instructions are for (some kind of) Linux, and tested on Ubuntu. However, the code itself should work on other operating systems as well (in particular MacOS and Windows).
The official repository is at https://github.com/tenpy/tenpy.git. To get the latest version of the code, you can clone it with [git] using the following commands:
git clone https://github.com/tenpy/tenpy.git $HOME/TeNPy
cd $HOME/TeNPy
Note
Adjust $HOME/TeNPy to the path wherever you want to save the library.
Optionally, if you don’t want to contribute, you can check out the latest stable release:
git tag # this prints the available version tags
git checkout v0.3.0 # or whatever is the latest stable version
Note
In case you don’t have [git] installed, you can download the repository as a ZIP archive. You can find it under releases, or the latest development version.
Minimal installation: Including tenpy into PYTHONPATH¶
The python source is in the directory tenpy/ of the repository. This folder tenpy/ should be placed in (one of the folders of) the environment variable PYTHONPATH. On Linux, you can simply do this with the following line in the terminal:
export PYTHONPATH=$HOME/TeNPy
(If you already have a path in this variable, separate the paths with a colon.)
However, if you enter this in the terminal, it will only be temporary for the terminal session where you entered it. To make it permanent, you can add the above line to the file $HOME/.bashrc. You might need to restart the terminal session or re-login to force a reload of the ~/.bashrc.
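As an alternative to setting the environment variable, a script can also extend the search path at runtime before importing tenpy. This is only a sketch; the clone location below is a placeholder for wherever you saved the repository:

```python
import sys

# Hypothetical clone location; adjust to wherever you downloaded TeNPy.
TENPY_DIR = "/home/username/TeNPy"

# Prepend, so this copy takes precedence over any installed version.
if TENPY_DIR not in sys.path:
    sys.path.insert(0, TENPY_DIR)
```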
Whenever the path is set, you should be able to use the library from within python:
>>> import tenpy
/home/username/TeNPy/tenpy/tools/optimization.py:276: UserWarning: Couldn't load compiled cython code. Code will run a bit slower.
warnings.warn("Couldn't load compiled cython code. Code will run a bit slower.")
>>> tenpy.show_config()
tenpy 0.4.0.dev0+7706003 (not compiled),
git revision 77060034a9fa64d2c7c16b4211e130cf5b6f5272 using
python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0]
numpy 1.16.3, scipy 1.2.1
tenpy.show_config() prints the current version of the used TeNPy library as well as the versions of the used python, numpy and scipy libraries, which might be different on your computer. It is a good idea to save this data (given as a string in tenpy.version.version_summary) along with your data, to allow reproducing your results exactly.
If you got a similar output as above: congratulations! You can now run the codes :)
Compilation of np_conserved¶
At the heart of the TeNPy library is the module tenpy.linalg.np_conserved, which provides an Array class to exploit the conservation of abelian charges. The data model of python is not ideal for the required bookkeeping, thus we have implemented the same np_conserved module in Cython. This allows compiling (and thereby optimizing) the corresponding python module, speeding up the execution of the code. While this might give a significant speedup for code with small matrix dimensions, don’t expect the same speedup in cases where most of the CPU time is already spent in matrix multiplications (i.e. if the bond dimension of your MPS is huge).
To compile the code, you first need to install Cython:
conda install cython # when using anaconda, or
pip install --upgrade Cython # when using pip
Moreover, you need a C++ compiler. For example, on Ubuntu you can install it with sudo apt-get install build-essential, or on Windows you can download MS Visual Studio 2015. If you use anaconda, you can also use conda install -c conda-forge cxx-compiler.
After that, go to the root directory of TeNPy ($HOME/TeNPy) and simply run:
bash ./compile.sh
Note
There is no need to compile if you installed TeNPy directly with conda or pip. (You can verify this with tenpy.show_config() as illustrated below.)
Note that it is not required to separately download (and install) Intel MKL: the compilation just obtains the includes from numpy. In other words, if your current numpy version uses MKL (as the one provided by anaconda), the compiled TeNPy code will also use it.
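To see which BLAS/LAPACK backend your numpy build links against (and hence what the compiled TeNPy code will pick up), you can ask numpy directly:

```python
import numpy as np

# Prints the BLAS/LAPACK libraries numpy was built against; look for
# "mkl" in the output if you expect an MKL-based build (e.g. from anaconda).
np.show_config()
```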
After a successful compilation, the warning that TeNPy was not compiled should go away:
>>> import tenpy
>>> tenpy.show_config()
tenpy 0.4.0.dev0+b60bad3 (compiled from git rev. b60bad3243b7e54f549f4f7c1f074dc55bb54ba3),
git revision b60bad3243b7e54f549f4f7c1f074dc55bb54ba3 using
python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0]
numpy 1.16.3, scipy 1.2.1
Note
For further optimization options, look at tenpy.tools.optimization.
Extra requirements¶
We have some extra requirements that you don’t need to install to use TeNPy, but that you might find useful to work with. TeNPy does not import the following libraries (at least not globally), but some functions might expect arguments behaving like objects from these libraries.
Note
If you created a [conda] environment with conda env create -f environment.yml, all the extra requirements below should already be installed :)
(However, a pip install -r requirements.txt does not install them.)
Matplotlib¶
The first extra requirement is the [matplotlib] plotting library.
Some functions expect a matplotlib.axes.Axes instance as an argument to plot some data for visualization.
Intel’s Math Kernel Library (MKL)¶
If you want to run larger simulations, we recommend the use of Intel’s MKL. It ships with a LAPACK library and is optimized for Intel CPUs. Moreover, it parallelizes the LAPACK/BLAS routines, which makes execution much faster. As of now, TeNPy itself supports no other way of parallelization.
If you don’t have a python version which is built against MKL, we recommend using [conda] or directly intelpython. Conda has the advantage that it allows using different environments for different projects. Both are available for Linux, Mac and Windows; note that you don’t even need administrator rights to install them on linux. Simply follow the (straightforward) instructions of the web page for the installation. After a successful installation, if you run python interactively, the first output line should state the python version and contain Anaconda or Intel Corporation, respectively.
If you have a working conda package manager, you can install the numpy built against MKL with:
conda install mkl numpy scipy
Note
MKL uses different threads to parallelize various BLAS and LAPACK routines. If you run the code on a cluster, make sure that you specify the number of used cores/threads correctly. By default, MKL uses all the available CPUs, which might be in stark contrast to what you requested from the cluster. The easiest way to set the number of used threads is via the environment variable MKL_NUM_THREADS (or OMP_NUM_THREADS). For a dynamic change of the used threads, you might want to look at tenpy.tools.process.
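For example, a cluster job script could pin the thread count before starting the simulation; the value 4 and the script name below are placeholders:

```shell
# Limit MKL and OpenMP to the number of cores allocated by the
# cluster (example value: 4), instead of letting MKL grab all CPUs.
export MKL_NUM_THREADS=4
export OMP_NUM_THREADS=4
# python my_simulation.py   # then start your (hypothetical) simulation script
```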
HDF5 file format support¶
We support exporting data to files in the [HDF5] format through the python interface of the h5py package (https://docs.h5py.org/en/stable/), see Saving to disk: input/output for more information. However, that requires the installation of the HDF5 library and h5py.
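As a minimal sketch with plain h5py (not TeNPy’s own I/O helpers; the file and dataset names are chosen for illustration), saving a numpy array and reading it back looks like:

```python
import numpy as np
import h5py  # requires the h5py package and the HDF5 library

data = np.arange(4.0)

# Write the array to an HDF5 file ...
with h5py.File("example.h5", "w") as f:
    f.create_dataset("measurements", data=data)

# ... and read it back.
with h5py.File("example.h5", "r") as f:
    loaded = f["measurements"][:]

assert np.allclose(loaded, data)
```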
YAML parameter files¶
The tenpy.tools.params.Config class supports reading and writing YAML files, which requires the package pyyaml (pip install pyyaml).
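For illustration, reading such a parameter file with pyyaml could look as follows; the parameter names here are made up for this example:

```python
import yaml  # from the pyyaml package

# A made-up parameter file; actual simulations would define their own keys.
params_text = """
model_params:
  L: 32
  J: 1.0
algorithm_params:
  chi_max: 100
"""

params = yaml.safe_load(params_text)
print(params["algorithm_params"]["chi_max"])  # prints 100
```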
Tests¶
To run the tests, you need to install pytest, which you can for example do with pip install pytest.
For information on how to run the tests, see Checking the installation.
Checking the installation¶
The first check of whether tenpy is installed successfully is to try to import it from within python:
>>> import tenpy
Note
If this raises a warning Couldn't load compiled cython code. Code will run a bit slower., something went wrong with the compilation of the Cython parts (or you didn’t compile at all). While the code might run slower, the results should still be the same.
The function tenpy.show_config() prints information about the used versions of tenpy, numpy and scipy, as well as whether the Cython parts were compiled and could be imported.
As a further check of the installation you can try to run (one of) the python files in the examples/ subfolder; hopefully all of them should run without error.
You can also run the automated testsuite with pytest to make sure everything works fine.
If you have pytest installed, you can go to the tests folder of the repository and run the tests with:
cd tests
pytest
In case of errors or failures it gives a detailed traceback and possibly some output of the test. At least the stable releases should run these tests without any failures.
If you can run the examples but not the tests, check whether pytest actually uses the correct python version.
The test suite is also run automatically by GitHub Actions and with Travis CI; results can be inspected here.
TeNPy developer team¶
The following people are part of the TeNPy developer team.
The full list of contributors can be obtained from the git repository with ``git shortlog -sn``.
Johannes Hauschild tenpy@johanneshauschild.de
Frank Pollmann
Michael P. Zaletel
Maximilian Schulz
Leon Schoonderwoerd
Kévin Hémery
Samuel Scalet
Markus Drescher
Wilhelm Kadow
Gunnar Moeller
Jakob Unfried
Yu-Chin Tzeng
Further, the code is based on an earlier version of the library, mainly developed by
Frank Pollmann, Michael P. Zaletel and Roger S. K. Mong.
License¶
The source code documented here is published under a GPL v3 license, which we include below.
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program nonfree.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
nonpermissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
Release Notes¶
The project adheres to semantic versioning.
All notable changes to the project should be documented in the changelog. The most important things should be summarized in the release notes.
The changes in /changelog/latest are implemented in the latest development version on GitHub, but not yet released.
Changes compared to previous TeNPy highlights the most important changes compared to the other, previously developed (closed source) TeNPy version.
[0.7.2] - 2020-10-09¶
Release Notes¶
We’ve added a list of all papers using (and citing) TeNPy, see Papers using TeNPy. Feel free to include your own works!
Also, a slight simplification which might affect your code:
using the MultiCouplingModel is no longer necessary; just use the tenpy.models.model.CouplingModel directly.
Changelog¶
Backwards incompatible changes¶
- Deprecated the tenpy.models.model.MultiCouplingModel. The functionality is fully merged into the CouplingModel; there is no need to subclass the MultiCouplingModel anymore.
- The Kagome lattice did not include all next_next_nearest_neighbors. (It had only the ones across the hexagon, missing those making up a bowtie.)
- Combined the arguments onsite_terms and coupling_terms of tenpy.networks.mpo.MPOGraph.from_terms() into a single argument terms.
Added¶
- Allow including Jupyter notebooks in the documentation; collect example notebooks in [TeNPyNotebooks].
- term_correlation_function_right() and term_correlation_function_left() for correlation functions with more than one operator on each end.
- tenpy.networks.terms.ExponentiallyDecayingTerms for constructing MPOs with exponential decay, and tenpy.models.model.CouplingModel.add_exponentially_decaying_coupling() for using it. This closes issue #78.
Fixed¶
- The IrregularLattice used the 'default' order of the regular lattice instead of whatever the order of the regular lattice was.
- charge_variance() did not work for more than 1 charge.
[0.7.0] - 2020-09-04¶
Release Notes¶
The big new feature is the implementation of the W_I and W_II methods for approximating the exponential of an MPO with an MPO, and of MPS compression / MPO application to an MPS, allowing time evolution with ExpMPOEvolution().
Changelog¶
Backwards incompatible changes¶
- Removed the argument leg0 from build_MPO.
- Removed the argument leg0 from from_grids; instead, optionally give all legs as an argument.
- Moved/renamed the module tenpy.algorithms.mps_sweeps to tenpy.algorithms.mps_common. The old mps_sweeps still exists for compatibility, but raises a warning upon import.
- Moved/renamed the module tenpy.algorithms.purification_tebd to tenpy.algorithms.purification (for the PurificationTEBD and PurificationTEBD2) and tenpy.algorithms.disentangler (for the disentanglers).
Added¶
- VariationalCompression and VariationalApplyMPO for variational compression.
- PurificationApplyMPO and PurificationTwoSiteU for variational compression with purifications.
- Argument insert_all_id for tenpy.networks.mpo.MPOGraph.from_terms() and from_term_list().
- Implemented the IrregularLattice.
- Extended user guide on lattices, Details on the lattice geometry.
- Function to approximate a decaying function by a sum of exponentials.
- spatial_inversion() to perform an explicit spatial inversion of the MPS.
Changed¶
- By default, define IdL and IdR on all bonds for a usual MPO. This can generate "dead ends" in the MPO graph of finite systems, but it is useful for make_WI/make_WII for MPO exponentiation.
- tenpy.models.lattice.Lattice.plot_basis() now allows shading the unit cell and shifting the origin of the plotted basis.
- Don't use bc_shift in tenpy.models.lattice.Lattice.plot_couplings() anymore; it led to confusing figures. Instead, the new keyword wrap=True allows directly connecting all sites. This is done to avoid confusion in combination with plot_bc_identified().
- Error handling of nonzero qtotal for TransferMatrix.
Fixed¶
- Removed double counting of chemical potential terms in the BosonicHaldaneModel and FermionicHaldaneModel.
- Wrong results of tenpy.networks.mps.MPS.get_total_charge() with only_physical_legs=True.
- tenpy.models.lattice.Lattice.plot_bc_identified() had a sign error for the bc_shift.
- calc_H_MPO_from_bond() didn't work for charges with blocks > 1.
- TEBD: keep qtotal of the B tensors constant.
- The order model parameter was read out but not used in tenpy.models.model.CouplingMPOModel.init_lattice() for 1D lattices.
[0.6.1] - 2020-05-18¶
Release Notes¶
This is only a follow-up release to [0.6.0] - 2020-05-16. It fixes a small bug in examples/c_tebd.py and some round-off problems in the tests.
It is now possible to install TeNPy with the conda package manager:
conda install --channel=conda-forge physics-tenpy
[0.6.0] - 2020-05-16¶
Release Notes¶
This release contains a major update of the documentation, which is now hosted by “Read the Docs” at https://tenpy.readthedocs.io/. Update your bookmark :)
Apart from that, this release introduces a format how to save and load data (in particular TeNPy classes) to HDF5 files.
See Saving to disk: input/output for more details.
To use that feature, you need to install the h5py package (and therefore some version of the HDF5 library).
This is easy with anaconda (conda install h5py), but might be cumbersome on your local computing cluster.
(However, many university computing clusters have some version of HDF5 installed already. Check with your local sysadmin.)
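Whatever the backend, the intended workflow is a plain save/load round-trip. Since h5py may not be installed on every machine, here is a minimal sketch of that round-trip using Python's built-in pickle instead (tenpy.tools.hdf5_io also ships pickle convenience functions); the file name and the data layout below are made up for illustration:

```python
import os
import pickle
import tempfile

# Hypothetical simulation results: nested Python containers, as you
# might collect them after a DMRG run.
data = {
    "model_params": {"L": 10, "J": 1.0},
    "energies": [-4.2, -4.25, -4.258],
}

# Save to disk ...
path = os.path.join(tempfile.mkdtemp(), "results.pkl")
with open(path, "wb") as f:
    pickle.dump(data, f)

# ... and load it back; the round-trip preserves the nested structure.
with open(path, "rb") as f:
    loaded = pickle.load(f)

assert loaded == data
print(loaded["energies"][-1])  # -4.258
```

The HDF5 route follows the same shape, with the advantage that the stored objects remain readable from other languages and partially loadable.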
Moreover, we changed how we read out parameter dictionaries: instead of the get_parameter() function, we now have a Config class which behaves like a dictionary, so you can simply use options.get(key, default) for model parameters, as you would for a plain Python dictionary.
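The dictionary-like reading style can be illustrated with a minimal, hypothetical stand-in for the Config class (the real tenpy.tools.params.Config does more, e.g. warning about unused parameters); the point is that parameter access now looks like ordinary dict access:

```python
# Simplified, illustrative stand-in for TeNPy's Config class: behaves
# like a dictionary, but records which keys were actually read, so
# unused parameters can be reported later.
class SimpleConfig:
    def __init__(self, params, name="Config"):
        self.params = dict(params)
        self.name = name
        self.unused = set(self.params)  # keys not read yet

    def get(self, key, default):
        """Read a parameter with a fallback default -- the same call
        style as a plain Python dict."""
        self.unused.discard(key)
        return self.params.get(key, default)


model_params = SimpleConfig({"L": 10, "J": 1.0}, name="MyModel")
L = model_params.get("L", 2)     # given explicitly -> 10
g = model_params.get("g", 0.5)   # not given -> falls back to 0.5
print(L, g, model_params.unused)  # "J" was never read
```

This is why the migration is mostly mechanical: get_parameter(model_params, "key", default, "Name") becomes model_params.get("key", default).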
Changelog¶
Backwards incompatible changes¶
- Created a class Config to replace Python-native parameter dictionaries and add some useful functionality. Old code using tenpy.tools.params.get_parameter() and tenpy.tools.params.unused_parameters() still works as before, but raises a warning, and should be replaced. For example, if you defined your own models, you should replace calls get_parameter(model_params, "key", "default_value", "ModelName") with model_params.get("key", "default_value"), the latter syntax being what you would use for a normal Python dictionary as well.
- Renamed the following class parameter dictionaries to simply options for more consistency. Old code using the class attributes should still work (since we provide property aliases), but raises warnings. Note that this also affects derived classes (for example the TwoSiteDMRGEngine):
  - tenpy.algorithms.dmrg.DMRGEngine.DMRG_params (was already renamed to engine_params in version 0.5.0)
  - tenpy.algorithms.mps_common.Sweep.engine_params
  - tenpy.algorithms.tebd.Engine.TEBD_params
  - tenpy.algorithms.tdvp.Engine.TDVP_params
  - tenpy.linalg.lanczos.Lanczos
- Changed the arguments of tenpy.models.model.MultiCouplingModel(): we replaced the three arguments u0, op0 and other_ops=[(u1, op1, dx1), (op2, u2, dx2), ...] by a single, equivalent argument ops, which should now read ops=[(op0, dx0, u0), (op1, dx1, u1), (op2, dx2, u2), ...], where dx0 = [0]*lat.dim. Note the changed order inside the tuple! Old code (which specifies opstr and category as keyword arguments, if at all) still works as before, but raises a warning, and should be replaced. Since tenpy.lattice.Lattice.possible_multi_couplings() used similar arguments, it was changed as well.
- Don't save H_MPO_graph as a model attribute anymore; this also wasn't documented.
- Renamed the truncation parameter symmetry_tol to degeneracy_tol and made the criterion more reasonable by not checking \(\log(S_i/S_j) < \log(\mathtt{symmetry\_tol})\), but simply \(\log(S_i/S_j) < \mathtt{degeneracy\_tol}\). The latter makes more sense, as it is equivalent to \((S_i - S_j)/S_j < \exp(\mathtt{degeneracy\_tol}) - 1 = \mathtt{degeneracy\_tol} + \mathcal{O}(\mathtt{degeneracy\_tol}^2)\).
- Deprecated tenpy.networks.mps.MPS.increase_L() in favor of the newly added tenpy.networks.mps.MPS.enlarge_mps_unit_cell() (taking factor instead of new_L=factor*L as argument).
- tenpy.networks.mps.MPS.correlation_function() now auto-determines whether a Jordan-Wigner string is necessary. If any of the given operators is directly an npc Array, it will now raise an error; set autoJW=False in that case.
- Instead of "monkey-patching" matvec of the tenpy.algorithms.mps_common.EffectiveH for the case that ortho_to_envs is not empty, we defined a proper class NpcLinearOperatorWrapper, which serves as a base class for OrthogonalNpcLinearOperator. The argument ortho_to_envs has been removed from EffectiveH.
- Switched the order of the sites in the unit cell for the DualSquare, and redefined what the "default" order means. This is a huge optimization of DMRG, reducing the necessary MPS bond dimension for the ground state to the optimal \(2^{L-1}\) on each bond.
- Deprecated the Lanczos function/class argument orthogonal_to in LanczosGroundState. Instead, one can use the OrthogonalNpcLinearOperator.
- Deprecation warning for changing the default argument of shift_ket for nonzero shift_bra of the TransferMatrix.
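The renamed degeneracy_tol criterion above is easy to check numerically; a short sketch in plain Python, where the helper name nearly_degenerate is made up for illustration (only the inequality itself comes from the changelog entry):

```python
import math

def nearly_degenerate(S_i, S_j, degeneracy_tol=1e-2):
    """Check the degeneracy criterion log(S_i/S_j) < degeneracy_tol
    for two singular values S_i >= S_j > 0 (illustrative helper)."""
    return math.log(S_i / S_j) < degeneracy_tol

# Two singular values differing by 0.5% count as degenerate at tol=1e-2 ...
assert nearly_degenerate(1.005e-3, 1.0e-3, degeneracy_tol=1e-2)
# ... while a factor of 2 does not (log(2) ~ 0.69 >> 1e-2).
assert not nearly_degenerate(2.0e-3, 1.0e-3, degeneracy_tol=1e-2)

# For small tol, the criterion matches a relative-difference check,
# since exp(tol) - 1 ~= tol:
S_i, S_j, tol = 1.005e-3, 1.0e-3, 1e-2
assert ((S_i - S_j) / S_j < math.exp(tol) - 1) == nearly_degenerate(S_i, S_j, tol)
```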
Added¶
tenpy.networks.mpo.MPO.variance() to calculate the variance of an MPO against a finite MPS.
Classmethod tenpy.networks.MPS.from_lat_product_state() to initialize an MPS from a product state given in lattice coordinates (independent of the order of the lattice).
Argument plus_hc for tenpy.models.model.CouplingModel.add_onsite(), tenpy.models.model.CouplingModel.add_coupling(), and tenpy.models.model.MultiCouplingModel.add_multi_coupling() to simplify adding the hermitian conjugate terms.
Parameter explicit_plus_hc for MPOModel, CouplingModel and MPO, to reduce the MPO bond dimension by not storing Hermitian conjugate terms, but computing them at runtime.
tenpy.models.model.CouplingModel.add_local_term() for adding a single term to the lattice, still handling Jordan-Wigner strings etc.
tenpy.networks.site.Site.get_hc_opname() and hc_ops to allow getting the hermitian conjugate operator (name) of the onsite operators.
tenpy.tools.hdf5_io with convenience functions for input and output with pickle, as well as an implementation allowing to save and load objects to HDF5 files in the format specified in Saving to disk: input/output.
Human-readable boundary_conditions property in Lattice.
save_hdf5 and load_hdf5 methods to support saving/loading to HDF5 for the following classes (and their subclasses): ChargeInfo, LegCharge, LegPipe, Array, MPS, MPO, Lattice.
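The pickle convenience functions amount to thin wrappers around the standard library; a minimal sketch with hypothetical helper names (TeNPy's actual interface lives in tenpy.tools.hdf5_io):

```python
import pickle

def save_pickle(data, filename):
    """Write `data` to `filename` with pickle (hypothetical helper name)."""
    with open(filename, "wb") as f:
        pickle.dump(data, f)

def load_pickle(filename):
    """Read back an object written by `save_pickle`."""
    with open(filename, "rb") as f:
        return pickle.load(f)

save_pickle({"chi_max": 100, "svd_min": 1.e-10}, "dmrg_params.pkl")
print(load_pickle("dmrg_params.pkl"))
# {'chi_max': 100, 'svd_min': 1e-10}
```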
tenpy.networks.mps.MPSEnvironment.get_initialization_data() for a convenient way of saving the necessary parts of the environment after a DMRG run.
Method enlarge_mps_unit_cell for the following classes: MPS, MPO, Lattice, Model, MPOModel, NearestNeighborModel.
tenpy.tools.misc.to_iterable_of_len() for convenience of handling arguments.
tenpy.models.lattice.Lattice.mps2lat_values_masked() as generalization of tenpy.models.lattice.Lattice.mps2lat_values().
tenpy.linalg.sparse.OrthogonalNpcLinearOperator to orthogonalize against vectors.
tenpy.linalg.sparse.ShiftNpcLinearOperator to add a constant.
tenpy.linalg.sparse.SumNpcLinearOperator, which serves e.g. to add the h.c. during the matvec (in combination with the new tenpy.linalg.sparse.NpcLinearOperator.adjoint()).
tenpy.algorithms.mps_common.make_eff_H() to simplify implementations of prepare_update().
Attribute options for the Model.
Changed¶
DEFAULT DMRG parameter 'diag_method' changed from 'lanczos' to 'default', which is the same for large bond dimensions, but performs a full exact diagonalization if the effective Hamiltonian has small dimensions. The threshold introduced is the new DMRG parameter 'max_N_for_ED'.
DEFAULT parameter charge_sector=None instead of charge_sector=0 in tenpy.networks.mps.MPS.overlap() to look for eigenvalues of the transfer matrix in all charge sectors, and not assume that it's the 0 sector.
Derive the following classes (and their subclasses) from the new Hdf5Exportable to support saving to HDF5: Site, Terms, OnsiteTerms, CouplingTerms, Model, i.e., all model classes.
Instead of just defining to_matrix and adjoint for EffectiveH, define the interface directly for NpcLinearOperator.
Try to keep the charge block structure as far as possible in add_charge() and drop_charge().
Fixed¶
Adjust the default DMRG parameter min_sweeps if chi_list is set.
Avoid some unnecessary transpositions in MPO environments for MPS sweeps (e.g. in DMRG).
sort(bunch=True) could return an unbunched Array, but still set the bunched flag.
LegPipe did not initialize self.bunched correctly.
issue #98: Error when calling psi.canonical_form() directly after disabling the DMRG mixer.
svd() with full_matrices=True gave wrong charges.
tenpy.linalg.np_conserved.Array.add_charge() and tenpy.linalg.np_conserved.Array.drop_charge() did not copy over labels.
Wrong pairs for the fifth_nearest_neighbors of the Honeycomb.
Continue in tenpy.algorithms.dmrg.full_diag_effH() with a warning instead of raising an Error, if the effective Hamiltonian is zero.
correlation_length(): the check for the hermitian flag might have raised an Error with new numpy warnings.
correlation_function() did not respect the argument str_on_first=False.
tenpy.networks.mps.MPS.get_op() worked unexpectedly for infinite bc with incommensurate self.L and len(op_list).
tenpy.networks.mps.MPS.permute_sites() did modify the given perm.
issue #105: Unintended side effects of using lanczos_params.verbose in combination with orthogonal_to.
issue #108: tenpy.linalg.sparse.FlatLinearOperator._matvec() changes self._charge_sector.
[0.5.0] - 2019-12-18¶
Backwards incompatible changes¶
Major rewriting of the DMRG Engines, see issue #39 and issue #85 for details. The EngineCombine and EngineFracture have been combined into a single TwoSiteDMRGEngine. The run function works as before. In case you have directly used the EngineCombine or EngineFracture, you should update your code and use the TwoSiteDMRGEngine instead.
Moved the init_LP and init_RP methods from MPS into MPSEnvironment and MPOEnvironment.
Changed¶
Addition/subtraction of Array: check whether both arrays have the same labels in a different order, and in that case raise a warning that we will transpose in the future.
Made tenpy.linalg.np_conserved.Array.get_block() public (previously tenpy.linalg.np_conserved.Array._get_block).
groundstate() now returns a tuple (E0, psi0) instead of just psi0. Moreover, the argument charge_sector was added.
Simplification in the Lattice: instead of having separate arguments/attributes/functions for 'nearest_neighbors', 'next_nearest_neighbors', 'next_next_nearest_neighbors' and possibly (Honeycomb) even 'fourth_nearest_neighbors', 'fifth_nearest_neighbors', collect them in a dictionary called pairs. Old call structures are still allowed, but deprecated.
issue #94: Array addition and inner() should reflect the order of the labels, if they coincided. Will change the default behaviour in the future, raising a FutureWarning for now.
Default parameters for DMRG: increased precision by setting P_tol_min down to the maximum of 1.e-30, lanczos_params['svd_min']**2 * P_tol_to_trunc, lanczos_params['trunc_cut']**2 * P_tol_to_trunc by default.
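As a plain-Python sketch of this rule (the helper name and the handling of unset parameters are illustrative assumptions; P_tol_to_trunc defaults to 0.05):

```python
def default_P_tol_min(svd_min, trunc_cut, P_tol_to_trunc=0.05):
    """Illustrative sketch of the new P_tol_min default (not TeNPy code)."""
    candidates = [1.e-30]  # absolute floor
    if svd_min is not None:
        candidates.append(svd_min**2 * P_tol_to_trunc)
    if trunc_cut is not None:
        candidates.append(trunc_cut**2 * P_tol_to_trunc)
    return max(candidates)

# e.g. svd_min = 1.e-10 dominates the floor of 1.e-30:
print(default_P_tol_min(svd_min=1.e-10, trunc_cut=None))
```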
Added¶
tenpy.algorithms.mps_common with the Sweep class and EffectiveH to be a OneSiteH or TwoSiteH.
Single-site DMRG with the SingleSiteDMRGEngine.
Example function in examples/c_tebd.py showing how to run TEBD with a model originally having next-nearest neighbors.
increase_L() to allow increasing the unit cell of an MPS.
Additional option order='folded' for the Chain.
tenpy.algorithms.exact_diag.ExactDiag.from_H_mpo() wrapper as replacement for tenpy.networks.mpo.MPO.get_full_hamiltonian() and tenpy.networks.mpo.MPO.get_grouped_mpo(). The latter are now deprecated.
Argument max_size to limit the matrix dimension in ExactDiag.
tenpy.linalg.sparse.FlatLinearOperator.from_guess_with_pipe() to allow quickly converting matvec functions acting on multi-dimensional arrays to a FlatLinearOperator by combining the legs into a LegPipe.
tenpy.tools.math.speigsh() as hermitian variant of speigs().
Allow the arguments 'LA', 'SA' in argsort().
tenpy.linalg.lanczos.lanczos_arpack() as possible replacement of the self-implemented lanczos function.
tenpy.algorithms.dmrg.full_diag_effH() as another replacement of lanczos().
The new DMRG parameter 'diag_method' allows to select a method for the diagonalization of the effective Hamiltonian. See tenpy.algorithms.dmrg.DMRGEngine.diag() for details.
dtype attribute in EffectiveH.
tenpy.linalg.charges.LegCharge.get_qindex_of_charges() to allow selecting a block of an Array from the charges.
tenpy.algorithms.mps_common.EffectiveH.to_matrix to allow contracting an EffectiveH to a matrix, as well as metadata tenpy.linalg.sparse.NpcLinearOperator.acts_on and tenpy.algorithms.mps_common.EffectiveH.N.
Argument only_physical_legs in tenpy.networks.mps.MPS.get_total_charge().
Fixed¶
MPO expectation_value() did not work for finite systems.
Calling compute_K() repeatedly with default parameters, but on states with different chi, would use the chi of the very first call for the truncation parameters.
Allow MPSEnvironment and MPOEnvironment to have MPS/MPO of different length.
group_sites() didn't work correctly in some situations.
matvec_to_array() returned the transpose of A.
tenpy.networks.mps.MPS.from_full() messed up the form of the first array.
issue #95: blowup of errors in DMRG with update_env > 0. Turns out to be a problem in the precision of the truncation error: TruncationError.eps was set to 0 if it would be smaller than machine precision. To fix it, I added from_S().
[0.4.1] - 2019-08-14¶
Backwards incompatible changes¶
Switched the sign of the BoseHubbardModel and FermiHubbardModel such that hopping and chemical potential have negative prefactors. Of course, the same adjustment happens in the BoseHubbardChain and FermiHubbardChain.
Moved BoseHubbardModel and BoseHubbardChain as well as FermiHubbardModel and FermiHubbardChain into the new module tenpy.models.hubbard.
Changed the arguments of coupling_term_handle_JW() and multi_coupling_term_handle_JW() to use strength and sites instead of op_needs_JW.
Only accept valid identifiers as operator names in add_op().
Changed¶
grid_concat() allows for None entries (representing zero blocks).
from_full() allows for 'segment' boundary conditions.
apply_local_op() allows for n-site operators.
Added¶
Nearest-neighbor interaction in the BoseHubbardModel.
multiply_op_names() to replace ' '.join(op_names) and allow explicit compression/multiplication.
order_combine_term() to group operators together.
dagger() of MPOs (and, to implement that, also flip_charges_qconj()).
has_label() to check if a label exists.
Addition of MPOs.
3 additional examples for Chern insulators in examples/chern_insulators/.
from_MPOModel() for initializing nearest-neighbor models after grouping sites.
Fixed¶
issue #36: long-range couplings could give an IndexError.
issue #42: Onsite terms in the FermiHubbardModel were wrong for lattices with a non-trivial unit cell.
Missing factor 0.5 in GUE().
Allow TermList to have terms with multiple operators acting on the same site.
Allow MPS indices outside the unit cell in mps2lat_idx() and lat2mps_idx().
expectation_value() did not work for n-site operators.
[0.4.0] - 2019-04-28¶
Backwards incompatible changes¶
The argument order of tenpy.models.lattice.Lattice could be a tuple (priority, snake_winding) before. This is no longer valid and needs to be replaced by ("standard", snake_winding, priority).
Moved the boundary conditions bc_coupling from the tenpy.models.model.CouplingModel into the tenpy.models.lattice.Lattice (as bc). Using the parameter bc_coupling will raise a FutureWarning; one should set the boundary conditions directly in the lattice.
Added parameter permute (True by default) in tenpy.networks.mps.MPS.from_product_state() and tenpy.networks.mps.MPS.from_Bflat(). The resulting state will therefore be independent of the "conserve" parameter of the Sites, unlike before, where the meaning of the p_state argument might have changed.
Generalized and renamed tenpy.networks.site.DoubleSite to tenpy.networks.site.GroupedSite, to allow for an arbitrary number of sites to be grouped. The arguments site0, site1, label0, label1 of the __init__ can be replaced with [site0, site1], [label0, label1], and op0, op1 of the kronecker_product with [op0, op1]; this recovers the functionality of the DoubleSite.
Restructured the call structure of the Mixer in DMRG, allowing the implementation of other mixers. To enable the mixer, set the DMRG parameter "mixer" to True or 'DensityMatrixMixer' instead of just 'Mixer'.
The interaction parameter in the tenpy.models.bose_hubbard_chain.BoseHubbardModel (and tenpy.models.bose_hubbard_chain.BoseHubbardChain) did not correspond to \(U/2 N (N-1)\) as claimed in the Hamiltonian, but to \(U N^2\). The correcting factor 1/2 and the change in the chemical potential have been fixed.
Major restructuring of tenpy.linalg.np_conserved and tenpy.linalg.charges. This should not break backwards compatibility, but if you compiled the cython files, you need to remove the old binaries in the source directory. Using bash cleanup.sh might be helpful to do that, but it also removes other files within the repository, so be careful and make a backup beforehand to be on the safe side. Afterwards, recompile with bash compile.sh.
Changed the structure of tenpy.models.model.CouplingModel.onsite_terms and tenpy.models.model.CouplingModel.coupling_terms: each of them is now a dictionary with category strings as keys and the newly introduced tenpy.networks.terms.OnsiteTerms and tenpy.networks.terms.CouplingTerms as values.
tenpy.models.model.CouplingModel.calc_H_onsite() is deprecated in favor of new methods.
The argument raise_op2_left of tenpy.models.model.CouplingModel.add_coupling() is deprecated.
Added¶
tenpy.networks.mps.MPS.expectation_value_term(), tenpy.networks.mps.MPS.expectation_value_terms_sum() and tenpy.networks.mps.MPS.expectation_value_multi_sites() for expectation values of terms.
tenpy.networks.mpo.MPO.expectation_value() for an MPO.
tenpy.linalg.np_conserved.Array.extend() and tenpy.linalg.charges.LegCharge.extend(), allowing to extend an Array with zeros.
DMRG parameter 'orthogonal_to' allows to calculate excited states for finite systems.
Possibility to change the number of charges after creating LegCharges/Arrays.
More general way to specify the order of sites in a tenpy.models.lattice.Lattice.
New tenpy.models.lattice.Triangular, tenpy.models.lattice.Honeycomb and tenpy.models.lattice.Kagome lattices.
A way to specify nearest-neighbor couplings in a Lattice, along with methods to count the number of nearest neighbors for sites in the bulk, and a way to plot them (plot_coupling() and friends).
tenpy.networks.mpo.MPO.from_grids() to generate an MPO from a grid.
tenpy.models.model.MultiCouplingModel for couplings involving more than 2 sites.
request #8: Allow a shift in the boundary conditions of the CouplingModel.
Allow to use state labels in tenpy.networks.mps.MPS.from_product_state().
tenpy.models.model.CouplingMPOModel structuring the default initialization of most models.
Allow to force periodic boundary conditions for finite MPS in the CouplingMPOModel. This is not recommended, though.
tenpy.models.model.NearestNeighborModel.calc_H_MPO_from_bond() and tenpy.models.model.MPOModel.calc_H_bond_from_MPO() for conversion of H_bond into H_MPO and vice versa.
tenpy.algorithms.tebd.RandomUnitaryEvolution for random unitary circuits.
Allow documentation links to github issues, arXiv, papers by doi, and the forum with e.g. :issue:`5`, :arxiv:`1805.00055`, :doi:`10.21468/SciPostPhysLectNotes.5`, :forum:`3`.
tenpy.models.model.CouplingModel.coupling_strength_add_ext_flux() for adding hoppings with external flux.
tenpy.models.model.CouplingModel.plot_coupling_terms() to visualize the added coupling terms.
tenpy.networks.terms.OnsiteTerms, tenpy.networks.terms.CouplingTerms, tenpy.networks.terms.MultiCouplingTerms containing the terms for the CouplingModel and MultiCouplingModel. This allowed to add the category argument to add_onsite, add_coupling and add_multi_coupling.
tenpy.networks.terms.TermList as another (more human-readable) representation of terms, with conversion from and to the other *Term classes.
tenpy.networks.mps.MPS.init_LP() and tenpy.networks.mps.MPS.init_RP() to initialize the left and right parts of an Environment.
tenpy.networks.mpo.MPOGraph.from_terms() and tenpy.networks.mpo.MPOGraph.from_term_list().
Argument charge_sector in tenpy.networks.mps.MPS.correlation_length().
Changed¶
Moved toy codes from the folder examples/ to a new folder toycodes/ to separate them clearly.
Major remodelling of the internals of tenpy.linalg.np_conserved and tenpy.linalg.charges.
Restructured lanczos into a class; added time evolution calculating exp(A*dt)|psi0>.
Warning for poorly conditioned Lanczos; to overcome this, enable the new parameter reortho.
Restructured tenpy.algorithms.dmrg: run() is now just a wrapper around the new run(); run(psi, model, pars) is roughly equivalent to eng = EngineCombine(psi, model, pars); eng.run().
Added init_env() and reset_stats() to allow a simple restart of DMRG with slightly different parameters, e.g. for tuning Hamiltonian parameters.
Call canonical_form() for infinite systems if the final state is not in canonical form.
Changed default values for some parameters:
set trunc_params['chi_max'] = 100. Not setting a chi_max at all will lead to memory problems. Disable DMRG_params['chi_list'] = None by default to avoid conflicting settings.
reduce to mixer_params['amplitude'] = 1.e-5. A too strong mixer screws DMRG up pretty badly.
increase Lanczos_params['N_cache'] = N_max (i.e., keep all states).
set DMRG_params['P_tol_to_trunc'] = 0.05 and provide reasonable …_min and …_max values.
increased (default) DMRG accuracy by setting DMRG_params['max_E_err'] = 1.e-8 and DMRG_params['max_S_err'] = 1.e-5.
don't check the (absolute) energy for convergence in Lanczos.
set DMRG_params['norm_tol'] = 1.e-5 to check whether the final state is in canonical form.
Verbosity of get_parameter() reduced: print parameters only for verbosity >= 1, and default values only for verbosity >= 2.
Don't print the energy during real-time TEBD evolution; it's preserved up to truncation errors.
Renamed the SquareLattice class to tenpy.models.lattice.Square for better consistency.
Auto-determine whether Jordan-Wigner strings are necessary in add_coupling().
The way the labels of npc Arrays are stored internally changed to a simple list with None entries. There is a deprecated property setter yielding a dictionary with the labels.
Renamed the first_LP and last_RP arguments of MPSEnvironment and MPOEnvironment to init_LP and init_RP.
Testing: instead of the (outdated) nose, we now use pytest <https://pytest.org> for testing.
Fixed¶
issue #22: Serious bug in tenpy.linalg.np_conserved.inner(): if do_conj=True is used with nonzero qtotal, it returned 0. instead of nonzero values.
Avoid error in tenpy.networks.mps.MPS.apply_local_op().
Don't carry around total charge when using DMRG with a mixer.
Corrected couplings of the FermionicHubbardChain.
issue #2: memory leak in cython parts when using intelpython/anaconda.
issue #4: incompatible data types.
issue #6: the CouplingModel generated wrong couplings in some cases.
issue #19: Convergence of energy was slow for infinite systems with N_sweeps_check=1.
More reasonable traceback in case of wrong labels.
Wrong dtype of npc.Array when adding/subtracting/… arrays of different data types.
Could get wrong H_bond for completely decoupled chains.
SVD could return outer indices with different axes.
tenpy.networks.mps.MPS.overlap() works now for MPS with different total charge (e.g. after psi.apply_local_op(i, 'Sp')).
Skip existing graph edges in MPOGraph.add() when building up terms without the strength part.
[0.3.0] - 2018-02-19¶
This is the first version published on github.
Added¶
Cython modules for np_conserved and charges, which can optionally be compiled for speedups
tools.optimization for dynamical optimization
Various models.
More predefined lattice sites.
Example toycodes.
Network contractor for general networks
Changed¶
Switch to python3
Removed¶
Python 2 support.
[0.2.0] - 2017-02-24¶
Compatible with python2 and python3 (using the 2to3 tool).
Development version.
Includes TEBD and DMRG.
Changes compared to previous TeNPy¶
This library is based on a previous (closed source) version developed mainly by Frank Pollmann, Michael P. Zaletel and Roger S. K. Mong. While almost all files are completely rewritten and not backwards compatible, the overall structure is similar. In the following, we list only the most important changes.
Global Changes¶
Syntax style based on PEP8. Use yapf -r -i ./ to ensure consistent formatting over the whole project. The special comments # yapf: disable and # yapf: enable can be used for manual formatting of some regions in the code.
Following PEP8, we distinguish between 'private' functions, indicated by names starting with an underscore and to be used only within the library, and the public API. The public API should be backwards-compatible between different releases, while private functions might change at any time.
All modules are in the folder tenpy to avoid name conflicts with other libraries.
Within the library, relative imports are used, e.g., from ..tools.math import (toiterable, tonparray). Exception: the files in tests/ and examples/ run as __main__ and can't use relative imports.
Files outside of the library (and in tests/, examples/) should use absolute imports, e.g. import tenpy.algorithms.tebd.
Renamed tenpy/mps/ to tenpy/networks/, since it contains various tensor networks.
Added Site, describing the local physical sites by providing the physical LegCharge and onsite operators.
np_conserved¶
Pure python, no need to compile!
In module tenpy.linalg instead of algorithms/linalg.
Moved functionality for charges to charges.
Introduced the classes ChargeInfo (basically the old q_number and mod_q) and LegCharge (the old qind, qconj).
Introduced the class LegPipe to replace the old leg_pipe. It is derived from LegCharge and used as a leg in the array class. Thus, any inherited array (after tensordot etc.) still has all the necessary information to split the legs. (The legs are shared between different arrays, so they are saved only once in memory.)
Enhanced indexing of the array class to support slices and 1D index arrays along certain axes.
More functions, e.g. grid_outer().
TEBD¶
Introduced TruncationError for easy handling of total truncation error.
Some truncation parameters are renamed and may have a different meaning, e.g. svd_max -> svd_min has no 'log' in the definition.
DMRG¶
Separate Lanczos module in tenpy/linalg/. Strangely, the old version orthogonalized against the complex conjugates of orthogonal_to (contrary to its doc string!), and thus calculated 'theta_o' as bra, not ket.
cleaned up, provide prototypes for DMRG engine and mixer.
Tools¶
Added tenpy.tools.misc, which contains 'random stuff' from the old tools.math like to_iterable and to_array (renamed to follow PEP8, documented).
Moved stuff for fitting to tenpy.tools.fit.
Enhanced tenpy.tools.string.vert_join() for nice formatting.
Moved (parts of) the old cluster/omp.py to tenpy.tools.process.
Added tenpy.tools.params for simplified handling of parameters/arguments for models and/or algorithms. Similar to the old models.model.set_var, but used also for algorithms. Also, it may modify the given dictionary.
Introductions¶
The following documents are meant as introductions to various topics relevant to TeNPy.
If you are new to TeNPy, read the Overview.
Overview¶
Repository¶
The root directory of the git repository contains the following folders:
tenpy
The actual source code of the library. Every subfolder contains an __init__.py file with a summary of what the modules in it are good for. (This file is also necessary to mark the folder as part of the python package. Consequently, other subfolders of the git repo should not include an __init__.py file.)
toycodes
Simple toy codes completely independent of the remaining library (i.e., of the codes in tenpy/). These codes should be quite readable and intend to give a flavor of how (some of) the algorithms work.
examples
Some example files demonstrating the usage and interface of the library.
doc
A folder containing the documentation: the user guide is contained in the *.rst files. The online documentation is auto-generated from these files and the docstrings of the library. This folder contains a makefile for building the documentation; run make help for the different options. The necessary files for the reference in doc/reference can be auto-generated/updated with make src2html.
tests
Contains files with test routines, to be used with pytest. If you are set up correctly and have pytest installed, you can run the test suite with pytest from within the tests/ folder.
build
This folder is not distributed with the code, but is generated by setup.py (or compile.sh, respectively). It contains compiled versions of the Cython files, and can be ignored (and even removed without losing functionality).
Code structure: getting started¶
There are several layers of abstraction in TeNPy. While there is a certain hierarchy of how the concepts build up on each other, the user can decide to utilize only some of them. A maximal flexibility is provided by an object oriented style based on classes, which can be inherited and adjusted to individual demands.
The following figure gives an overview of the most important modules, classes and functions in TeNPy.
Gray backgrounds indicate (sub)modules, yellow backgrounds indicate classes.
Red arrows indicate inheritance relations, dashed black arrows indicate a direct use.
(The individual models might be derived from the NearestNeighborModel
depending on the geometry of the lattice.)
There is a clear hierarchy from high-level algorithms in the tenpy.algorithms module down to basic operations from linear algebra in the tenpy.linalg module.
Most basic level: linear algebra¶
Note
See Charge conservation with np_conserved for more information on defining charges for arrays.
The most basic layer is given by the linalg module, which provides basic features of linear algebra.
In particular, the np_conserved
submodule implements an Array
class which is used to represent
the tensors. The basic interface of np_conserved
is very similar to that of the NumPy and SciPy libraries.
However, the Array
class implements abelian charge conservation.
If no charges are to be used, one can use ‘trivial’ arrays, as shown in the following example code.
"""Basic use of the `Array` class with trivial arrays."""
# Copyright 20192020 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
M = npc.Array.from_ndarray_trivial([[0., 1.], [1., 0.]])
v = npc.Array.from_ndarray_trivial([2., 4. + 1.j])
v[0] = 3. # set indiviual entries like in numpy
print("v> =", v.to_ndarray())
# v> = [ 3.+0.j 4.+1.j]
M_v = npc.tensordot(M, v, axes=[1, 0])
print("Mv> =", M_v.to_ndarray())
# Mv> = [ 4.+1.j 3.+0.j]
print("<vMv> =", npc.inner(v.conj(), M_v, axes='range'))
# <vMv> = (24+0j)
The number and types of symmetries are specified in a ChargeInfo
class.
An Array
instance represents a tensor satisfying a charge rule specifying which blocks of it are nonzero.
Internally, it stores only the nonzero blocks of the tensor, along with one LegCharge
instance for each
leg, which contains the charges and sign qconj for each leg.
We can combine multiple legs into a single larger LegPipe
,
which is derived from the LegCharge
and stores all the information necessary to later split the pipe.
The following code explicitly defines the spin-1/2 \(S^+, S^-, S^z\) operators and uses them to generate and diagonalize the two-site Hamiltonian \(H = \vec{S} \cdot \vec{S}\). It prints the charge values (by default sorted ascending) and the eigenvalues of H.
"""Explicit definition of charges and spin1/2 operators."""
# Copyright 20192020 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
# consider spin1/2 with Szconservation
chinfo = npc.ChargeInfo([1]) # just a U(1) charge
# charges for up, down state
p_leg = npc.LegCharge.from_qflat(chinfo, [[1], [1]])
Sz = npc.Array.from_ndarray([[0.5, 0.], [0., 0.5]], [p_leg, p_leg.conj()])
Sp = npc.Array.from_ndarray([[0., 1.], [0., 0.]], [p_leg, p_leg.conj()])
Sm = npc.Array.from_ndarray([[0., 0.], [1., 0.]], [p_leg, p_leg.conj()])
Hxy = 0.5 * (npc.outer(Sp, Sm) + npc.outer(Sm, Sp))
Hz = npc.outer(Sz, Sz)
H = Hxy + Hz
# here, H has 4 legs
H.iset_leg_labels(["s1", "t1", "s2", "t2"])
H = H.combine_legs([["s1", "s2"], ["t1", "t2"]], qconj=[+1, 1])
# here, H has 2 legs
print(H.legs[0].to_qflat().flatten())
# prints [2 0 0 2]
E, U = npc.eigh(H) # diagonalize blocks individually
print(E)
# [ 0.25 0.75 0.25 0.25]
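The printed spectrum can be cross-checked without charge conservation: the same two-site Hamiltonian as a dense NumPy matrix has one singlet eigenvalue -0.75 and three triplet eigenvalues 0.25.

```python
import numpy as np

# dense spin-1/2 operators, matching the npc Arrays above
Sz = np.diag([0.5, -0.5])
Sp = np.array([[0., 1.], [0., 0.]])
Sm = Sp.T

# H = 0.5 * (Sp x Sm + Sm x Sp) + Sz x Sz on two sites
H2 = 0.5 * (np.kron(Sp, Sm) + np.kron(Sm, Sp)) + np.kron(Sz, Sz)
E = np.linalg.eigvalsh(H2)  # eigenvalues, sorted ascending
print(np.round(E, 10))
# [-0.75  0.25  0.25  0.25]
```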
Sites for the local Hilbert space and tensor networks¶
The next basic concept is that of a local Hilbert space, which is represented by a Site
in TeNPy.
This class not only labels the local states and defines the charges, but also provides onsite operators. For example, the SpinHalfSite provides the \(S^+, S^-, S^z\) operators under the names 'Sp', 'Sm', 'Sz', defined as Array instances similarly as in the code above.
Since the most common sites like for example the SpinSite
(for general spin S=0.5, 1, 1.5,…), BosonSite
and
FermionSite
are predefined, a user of TeNPy usually does not need to define the local charges and operators explicitly.
The total Hilbert space, i.e., the tensor product of the local Hilbert spaces, is then just given by a
list of Site
instances. If desired, different kinds of Site
can be combined in that list.
This list is then given to classes representing tensor networks like the MPS
and
MPO
.
The tensor network classes also use Array
instances for the tensors of the represented network.
The following example illustrates the initialization of a spin-1/2 site, an MPS representing the Neel state, and an MPO representing the Heisenberg model by explicitly defining the W tensor.
"""Initialization of sites, MPS and MPO."""
# Copyright 20192020 TeNPy Developers, GNU GPLv3
from tenpy.networks.site import SpinHalfSite
from tenpy.networks.mps import MPS
from tenpy.networks.mpo import MPO
spin = SpinHalfSite(conserve="Sz")
print(spin.Sz.to_ndarray())
# [[ 0.5 0. ]
# [ 0. 0.5]]
N = 6 # number of sites
sites = [spin] * N # repeat entry of list N times
pstate = ["up", "down"] * (N // 2) # Neel state
psi = MPS.from_product_state(sites, pstate, bc="finite")
print("<Sz> =", psi.expectation_value("Sz"))
# <Sz> = [ 0.5 0.5 0.5 0.5]
print("<Sp_i Sm_j> =", psi.correlation_function("Sp", "Sm"), sep="\n")
# <Sp_i Sm_j> =
# [[1. 0. 0. 0. 0. 0.]
# [0. 0. 0. 0. 0. 0.]
# [0. 0. 1. 0. 0. 0.]
# [0. 0. 0. 0. 0. 0.]
# [0. 0. 0. 0. 1. 0.]
# [0. 0. 0. 0. 0. 0.]]
# define an MPO
Id, Sp, Sm, Sz = spin.Id, spin.Sp, spin.Sm, spin.Sz
J, Delta, hz = 1., 1., 0.2
W_bulk = [[Id, Sp, Sm, Sz, hz * Sz], [None, None, None, None, 0.5 * J * Sm],
[None, None, None, None, 0.5 * J * Sp], [None, None, None, None, J * Delta * Sz],
[None, None, None, None, Id]]
W_first = [W_bulk[0]] # first row
W_last = [[row[1]] for row in W_bulk] # last column
Ws = [W_first] + [W_bulk] * (N  2) + [W_last]
H = MPO.from_grids([spin] * N, Ws, bc='finite', IdL=0, IdR=1)
print("<psiHpsi> =", H.expectation_value(psi))
# <psiHpsi> = 1.25
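As a cross-check with plain NumPy (a sketch assuming the grid layout above, with None entries treated as zero blocks and a -hz on-site field): contracting the MPO grid into a dense Hamiltonian reproduces <psi|H|psi> = -1.25 for the Neel state, coming from the five Sz_i Sz_{i+1} bonds contributing (0.5)(-0.5) each.

```python
import numpy as np

Id = np.eye(2)
Sz = np.diag([0.5, -0.5])
Sp = np.array([[0., 1.], [0., 0.]])
Sm = Sp.T
J, Delta, hz = 1., 1., 0.2
grid = [[Id, Sp, Sm, Sz, -hz * Sz],
        [None, None, None, None, 0.5 * J * Sm],
        [None, None, None, None, 0.5 * J * Sp],
        [None, None, None, None, J * Delta * Sz],
        [None, None, None, None, Id]]
Z = np.zeros((2, 2))
W = np.array([[Z if op is None else op for op in row] for row in grid])
# W[a, b] is the 2x2 operator between left bond index a and right bond index b

N = 6
R = W[0]                   # left boundary picks the first row
for _ in range(N - 2):     # contract the bulk tensors one site at a time
    d = R.shape[1]
    R = np.einsum('aIJ,abij->bIiJj', R, W).reshape(5, 2 * d, 2 * d)
H = np.einsum('aIJ,aij->IiJj', R, W[:, -1]).reshape(2**N, 2**N)

up, down = np.array([1., 0.]), np.array([0., 1.])
psi = np.array([1.])
for i in range(N):         # Neel product state |up down up down ...>
    psi = np.kron(psi, up if i % 2 == 0 else down)
print(round(psi @ H @ psi, 10))
# -1.25
```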
Models¶
Note
See Models for more information on sites and how to define and extend models on your own.
Technically, the explicit definition of an MPO is already enough to call an algorithm like DMRG in dmrg.
However, writing down the W tensors is cumbersome especially for more complicated models.
Hence, TeNPy provides another layer of abstraction for the definition of models, which we discuss first.
Different kinds of algorithms require different representations of the Hamiltonian.
Therefore, the library offers to specify the model abstractly by the individual onsite terms and coupling terms of the Hamiltonian.
The following example illustrates this, again for the Heisenberg model.
"""Definition of a model: the XXZ chain."""
# Copyright 20192020 TeNPy Developers, GNU GPLv3
from tenpy.networks.site import SpinSite
from tenpy.models.lattice import Chain
from tenpy.models.model import CouplingModel, NearestNeighborModel, MPOModel
class XXZChain(CouplingModel, NearestNeighborModel, MPOModel):
def __init__(self, L=2, S=0.5, J=1., Delta=1., hz=0.):
spin = SpinSite(S=S, conserve="Sz")
# the lattice defines the geometry
lattice = Chain(L, spin, bc="open", bc_MPS="finite")
CouplingModel.__init__(self, lattice)
# add terms of the Hamiltonian
self.add_coupling(J * 0.5, 0, "Sp", 0, "Sm", 1) # Sp_i Sm_{i+1}
self.add_coupling(J * 0.5, 0, "Sp", 0, "Sm", 1) # Sp_i Sm_{i1}
self.add_coupling(J * Delta, 0, "Sz", 0, "Sz", 1)
# (for site dependent prefactors, the strength can be an array)
self.add_onsite(hz, 0, "Sz")
# finish initialization
# generate MPO for DMRG
MPOModel.__init__(self, lat, self.calc_H_MPO())
# generate H_bond for TEBD
NearestNeighborModel.__init__(self, lat, self.calc_H_bond())
While this generates the same MPO as in the previous code, this example can easily be adjusted and generalized, for
example to a higher dimensional lattice by just specifying a different lattice.
Internally, the MPO is generated using a finite state machine picture.
This allows not only to translate more complicated Hamiltonians into their corresponding MPOs,
but also to automate the mapping from a higher dimensional lattice to the 1D chain along which the MPS
winds.
Note that this mapping introduces longer-range couplings, so the model can no longer be defined to be a
NearestNeighborModel
suited for TEBD if a lattice other than the Chain
is to be used.
Of course, many commonly studied models are also predefined.
For example, the following code initializes the Heisenberg model on a kagome lattice;
the spin liquid nature of the ground state of this model is highly debated in the current literature.
"""Initialization of the Heisenberg model on a kagome lattice."""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3
from tenpy.models.spins import SpinModel
model_params = {
"S": 0.5, # Spin 1/2
"lattice": "Kagome",
"bc_MPS": "infinite",
"bc_y": "cylinder",
"Ly": 2, # defines cylinder circumference
"conserve": "Sz", # use Sz conservation
"Jx": 1.,
"Jy": 1.,
"Jz": 1. # Heisenberg coupling
}
model = SpinModel(model_params)
Algorithms¶
The highest level in TeNPy is given by algorithms like DMRG and TEBD.
Using the previous concepts, setting up a simulation running those algorithms is a matter of just a few lines of code.
The following example runs a DMRG simulation, see dmrg,
exemplified here by the transverse field Ising model at the critical point.
"""Call of (finite) DMRG."""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import dmrg
N = 16 # number of sites
model = TFIChain({"L": N, "J": 1., "g": 1., "bc_MPS": "finite"})
sites = model.lat.mps_sites()
psi = MPS.from_product_state(sites, ['up'] * N, "finite")
dmrg_params = {"trunc_params": {"chi_max": 100, "svd_min": 1.e10}, "mixer": True}
info = dmrg.run(psi, model, dmrg_params)
print("E =", info['E'])
# E = 20.01638790048513
print("max. bond dimension =", max(psi.chi))
# max. bond dimension = 27
The switch from DMRG to iDMRG in TeNPy is simply accomplished by a change of the parameter
"bc_MPS"
from "finite"
to "infinite"
, both for the model and the state.
The returned E
is then the energy density per site.
Due to the translation invariance, one can also evaluate the correlation length, here slightly away from the critical point.
"""Call of infinite DMRG."""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import dmrg
N = 2 # number of sites in unit cell
model = TFIChain({"L": N, "J": 1., "g": 1.1, "bc_MPS": "infinite"})
sites = model.lat.mps_sites()
psi = MPS.from_product_state(sites, ['up'] * N, "infinite")
dmrg_params = {"trunc_params": {"chi_max": 100, "svd_min": 1.e10}, "mixer": True}
info = dmrg.run(psi, model, dmrg_params)
print("E =", info['E'])
# E = 1.342864022725017
print("max. bond dimension =", max(psi.chi))
# max. bond dimension = 56
print("corr. length =", psi.correlation_length())
# corr. length = 4.915809146764157
Running time evolution with TEBD requires an additional loop, during which the desired observables have to be measured. The following code shows this directly for the infinite version of TEBD.
"""Call of (infinite) TEBD."""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import tebd
M = TFIChain({"L": 2, "J": 1., "g": 1.5, "bc_MPS": "infinite"})
psi = MPS.from_product_state(M.lat.mps_sites(), [0] * 2, "infinite")
tebd_params = {
"order": 2,
"delta_tau_list": [0.1, 0.001, 1.e5],
"max_error_E": 1.e6,
"trunc_params": {
"chi_max": 30,
"svd_min": 1.e10
}
}
eng = tebd.Engine(psi, M, tebd_params)
eng.run_GS() # imaginary time evolution with TEBD
print("E =", sum(psi.expectation_value(M.H_bond)) / psi.L)
print("final bond dimensions: ", psi.chi)
Charge conservation with np_conserved¶
The basic idea is quickly summarized: By inspecting the Hamiltonian, you can identify symmetries, which correspond to conserved quantities, called charges. These charges divide the tensors into different sectors. This can be used to infer, for example, a block-diagonal structure of certain matrices, which in turn speeds up SVD or diagonalization a lot. Even for more general (non-square-matrix) tensors, charge conservation imposes restrictions on which blocks of a tensor can be nonzero. Only those blocks need to be saved, which ultimately (= for large enough arrays) leads to a speedup of many routines, e.g., tensordot.
This introduction covers our implementation of charges; explaining mathematical details of the underlying symmetry is beyond its scope. We refer you to the corresponding chapter in our [TeNPyNotes] for a more general introduction of the idea (also stating the "charge rule" introduced below). [singh2010] explains why it works from a mathematical point of view; [singh2011] focuses on a \(U(1)\) symmetry and might be easier to read.
Notations¶
Let's fix the notation of certain terms for this introduction and the docstrings in np_conserved.
This might be helpful if you know the basics from a different context.
If you’re new to the subject, keep reading even if you don’t understand each detail,
and come back to this section when you encounter the corresponding terms again.
An Array
is a multi-dimensional array representing a tensor with entries \(T_{a_0, a_1, ..., a_{rank-1}}\):
Each leg \(a_i\) corresponds to a vector space of dimension n_i.
An index of a leg is a particular value \(a_i \in \lbrace 0, ..., n_i-1\rbrace\).
The rank is the number of legs, the shape is \((n_0, ..., n_{rank-1})\).
We restrict ourselves to abelian charges with entries in \(\mathbb{Z}\) or in \(\mathbb{Z}_m\).
The nature of a charge is specified by \(m\); we set \(m=1\) for charges corresponding to \(\mathbb{Z}\).
The number of charges is referred to as qnumber as a shorthand, and the collection of \(m\) for each charge is called qmod.
The qnumber, qmod and possibly descriptive names of the charges are saved in an instance of ChargeInfo
.
To each index of each leg, a value of the charge(s) is associated.
A charge block is a contiguous slice corresponding to the same charge(s) of the leg.
A qindex is an index in the list of charge blocks for a certain leg.
A charge sector for given charge(s) is the set of all qindices of that charge(s).
A leg is blocked if all charge sectors map one-to-one to qindices.
Finally, a leg is sorted, if the charges are sorted lexicographically.
Note that a sorted leg is always blocked.
We can also speak of the complete array being blocked by charges or leg-charge-sorted, which means that all of its legs are blocked or sorted, respectively.
The charge data for a single leg is collected in the class LegCharge
.
A LegCharge
also has a flag qconj, which tells whether the charges
point inward (+1) or outward (-1). What that means is explained later in Which entries of the npc Array can be nonzero?.
For completeness, let us also summarize the internal structure of an Array
here:
The array saves only nonzero blocks, collected as a list of np.array in self._data
.
The qindices necessary to map these blocks to the original leg indices are collected in self._qdata.
An array is said to be qdata-sorted if its self._qdata
is lexicographically sorted.
More details on this follow later.
However, note that you usually shouldn't access _qdata and _data directly - this
is only necessary from within tensordot, svd, etc.
Also, an array has a total charge, defining which entries can be nonzero - details in Which entries of the npc Array can be nonzero?.
Finally, a leg pipe (implemented in LegPipe
)
is used to formally combine multiple legs into one leg. Again, more details follow later.
Physical Example¶
For concreteness, you can think of the Hamiltonian \(H = -t \sum_{<i,j>} (c^\dagger_i c_j + H.c.) + U \sum_{<i,j>} n_i n_j\) with \(n_i = c^\dagger_i c_i\). This Hamiltonian has the global \(U(1)\) gauge symmetry \(c_i \rightarrow c_i e^{i\phi}\). The corresponding charge is the total number of particles \(N = \sum_i n_i\). You would then introduce one charge with \(m=1\).
Note that the total charge is a sum of local terms, living on single sites. Thus, you can infer the charge of a single physical site: it’s just the value \(q_i = n_i \in \mathbb{N}\) for each of the states.
Note that you can only assign integer charges. Consider for example the spin 1/2 Heisenberg chain. Here, you can naturally identify the magnetization \(S^z = \sum_i S^z_i\) as the conserved quantity, with values \(S^z_i = \pm \frac{1}{2}\). Obviously, if \(S^z\) is conserved, then so is \(2 S^z\), so you can use the charges \(q_i = 2 S^z_i \in \lbrace -1, +1 \rbrace\) for the down and up states, respectively. Alternatively, you can also use a shift and define \(q_i = S^z_i + \frac{1}{2} \in \lbrace 0, 1 \rbrace\).
As another example, consider BCS-like terms \(\sum_k (c^\dagger_k c^\dagger_{-k} + H.c.)\). These terms break the total particle conservation, but they preserve the total parity, i.e., \(N \% 2\) is conserved. Thus, you would introduce a charge with \(m = 2\) in this case.
In the above examples, we had only a single charge conserved at a time, but you might be lucky and have multiple conserved quantities, e.g. if you have two chains coupled only by interactions. TeNPy is designed to handle the general case of multiple charges. When giving examples, we will restrict to one charge, but everything generalizes to multiple charges.
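Charge arithmetic in the multiple-charge case can be sketched in plain Python. This is an illustrative sketch only (not the TeNPy API): each index carries a tuple of charges, and addition acts componentwise, taking each entry modulo its qmod.

```python
# Illustrative sketch: two conserved quantities at once, e.g. the particle
# number of chain 1 (a Z charge, m=1) and the parity of chain 2 (Z_2, m=2).
qmod = [1, 2]

def add_charges(q1, q2, qmod):
    """Combine two charge tuples entry-wise, modulo qmod (m == 1 means no modulo)."""
    return [(a + b) % m if m > 1 else a + b for a, b, m in zip(q1, q2, qmod)]

# adding one particle to chain 1 while a pairing term acts on chain 2:
print(add_charges([3, 1], [1, 1], qmod))  # [4, 0]
```

Each charge is handled independently; this is what "everything generalizes to multiple charges" amounts to in practice.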
The different formats for LegCharge¶
As mentioned above, we assign charges to each index of each leg of a tensor. This can be done in three formats: qflat, as qind and as qdict. Let me explain them with examples, for simplicity considering only a single charge (the innermost array has one entry for each charge).
- qflat form: simply a list of charges for each index.
An example:
qflat = [[-2], [-1], [-1], [0], [0], [0], [0], [3], [3]]
This tells you that the leg has size 9; the charges are
[-2], [-1], [-1], ..., [3]
for the indices 0, 1, 2, 3, ..., 8.
You can identify four charge blocks slice(0, 1), slice(1, 3), slice(3, 7), slice(7, 9)
in this example, which have the charges [-2], [-1], [0], [3].
In other words, the indices 1, 2
(which are in slice(1, 3))
have the same charge value [-1].
A qindex would just enumerate these blocks as 0, 1, 2, 3.
- qind form: a 1D array slices and a 2D array charges.
This is a more compact version than the qflat form: the slices give a partition of the indices and the charges give the charge values. The same example as above would simply be:
slices = [0, 1, 3, 7, 9]
charges = [[-2], [-1], [0], [3]]
Note that slices includes 0 as the first entry and the number of indices (here 9) as the last entry.
Thus it has len block_number + 1, where block_number (given by block_number)
is the number of charge blocks in the leg, i.e. a qindex runs from 0 to block_number - 1.
On the other hand, the 2D array charges has shape (block_number, qnumber),
where qnumber is the number of charges (given by qnumber).
In that way, the qind form maps a qindex, say qi,
to the indices slice(slices[qi], slices[qi+1])
and the charge(s) charges[qi].
- qdict form: a dictionary in the other direction than qind, taking charge tuples to slices.
Again for the same example:
{(-2,): slice(0, 1), (-1,): slice(1, 3), (0,): slice(3, 7), (3,): slice(7, 9)}
Since the keys of a dictionary are unique, this form is only possible if the leg is completely blocked.
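The three formats can be sketched in plain Python. This is an illustrative sketch of the conversions (not the TeNPy API, which provides them as LegCharge methods), using the example leg from above:

```python
# the qflat form of the example leg: one charge per index
qflat = [[-2], [-1], [-1], [0], [0], [0], [0], [3], [3]]

# qflat -> qind: start a new block wherever the charge changes
slices, charges = [0], []
for i, q in enumerate(qflat):
    if not charges or q != charges[-1]:
        if charges:
            slices.append(i)
        charges.append(q)
slices.append(len(qflat))
print(slices)   # [0, 1, 3, 7, 9]
print(charges)  # [[-2], [-1], [0], [3]]

# qind -> qdict: only possible if the leg is blocked (unique charge values)
qdict = {tuple(c): slice(slices[qi], slices[qi + 1])
         for qi, c in enumerate(charges)}
print(qdict[(-1,)])  # slice(1, 3, None)

# qind -> qflat: expand each block back to one charge per index
qflat2 = [c for qi, c in enumerate(charges)
          for _ in range(slices[qi + 1] - slices[qi])]
assert qflat2 == qflat
```

The round trip qflat -> qind -> qflat is lossless; qind -> qdict additionally requires uniqueness of the charge values, as stated above.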
The LegCharge
saves the charge data of a leg internally in qind form,
directly in the attribute slices and charges.
However, it also provides convenient functions for conversion from and to the qflat and qdict forms.
The above example was nice since all charges were sorted and the charge blocks were ‘as large as possible’. This is however not required.
The following example is also a valid qind form:
slices = [0, 1, 3, 5, 7, 9]
charges = [[-2], [-1], [0], [0], [3]]
This leads to the same qflat form as the above example, thus representing the same charges on the leg indices. However, regarding our Arrays, this is quite different, since it divides the leg into 5 (instead of the previous 4) charge blocks. We say the latter example is not bunched, while the former one is bunched.
To make the different notions of sorted and bunched clearer, consider the following (valid) examples:
[table lost in conversion: example charges with columns "bunched", "sorted", "blocked"]
If a leg is bunched and sorted, it is automatically blocked (but not vice versa). See also below for further comments on that.
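The three notions can be made concrete with small predicate functions. This is a plain-Python sketch (not the TeNPy API), operating on the per-block charges of a leg:

```python
def is_bunched(charges):
    # no two adjacent blocks carry the same charge
    return all(c1 != c2 for c1, c2 in zip(charges, charges[1:]))

def is_sorted(charges):
    # blocks appear in lexicographic order of their charges
    return all(c1 <= c2 for c1, c2 in zip(charges, charges[1:]))

def is_blocked(charges):
    # every charge value labels at most one block
    return len(set(map(tuple, charges))) == len(charges)

bunched_and_sorted = [[-2], [-1], [0], [3]]
sorted_not_bunched = [[-2], [-1], [0], [0], [3]]
assert is_bunched(bunched_and_sorted) and is_sorted(bunched_and_sorted)
assert is_blocked(bunched_and_sorted)      # bunched + sorted => blocked
assert not is_bunched(sorted_not_bunched)
assert not is_blocked(sorted_not_bunched)  # two blocks share the charge [0]
```

The two example legs are exactly the bunched and non-bunched qind forms discussed above.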
Which entries of the npc Array can be nonzero?¶
The reason for the speedup with np_conserved lies in the fact that it saves only the blocks ‘compatible’ with the charges. But how is this ‘compatible’ defined?
Assume you have a tensor, call it \(T\), and the LegCharge
for all of its legs, say \(a, b, c, ...\).
Remember that the LegCharge associates to each index of the leg a charge value (for each of the charges, if qnumber > 1).
Let a.to_qflat()[ia]
denote the charge(s) of index ia
for leg a
, and similar for other legs.
In addition, the LegCharge has a flag qconj
. This flag qconj is only a sign,
saved as +1 or -1, specifying whether the charges point 'inward' (+1, default) or 'outward' (-1) of the tensor.
Then, the total charge of an entry T[ia, ib, ic, ...]
of the tensor is defined as:
qtotal[ia, ib, ic, ...] = a.to_qflat()[ia] * a.qconj + b.to_qflat()[ib] * b.qconj + c.to_qflat()[ic] * c.qconj + ... modulo qmod
The rule which entries of an Array
can be nonzero
(i.e., are 'compatible' with the charges) is then very simple:
Rule for nonzero entries
An entry ia, ib, ic, ...
of an Array
can only be nonzero,
if qtotal[ia, ib, ic, ...]
matches the unique qtotal
attribute of the class.
In other words, there is a single total charge .qtotal
attribute of an Array.
All indices ia, ib, ic, ...
for which the above defined qtotal[ia, ib, ic, ...]
matches this total charge,
are said to be compatible with the charges and can be nonzero.
All other indices are incompatible with the charges and must be zero.
The case of multiple charges, qnumber > 1, is a straightforward generalization: an entry can only be nonzero if it is compatible with each of the defined charges.
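The rule can be stated as a few lines of plain Python. This is an illustrative sketch (not TeNPy code): the total charge of an entry is the qconj-weighted sum of the leg charges, taken modulo qmod.

```python
def entry_qtotal(legs, indices, qmod):
    """Charge of one entry. legs: list of (qflat, qconj) pairs; one index per leg."""
    qnumber = len(qmod)
    total = [0] * qnumber
    for (qflat, qconj), i in zip(legs, indices):
        for k in range(qnumber):
            total[k] += qflat[i][k] * qconj
    # take each charge modulo its qmod (m == 1 encodes a Z charge: no modulo)
    return [q % m if m > 1 else q for q, m in zip(total, qmod)]

# a rank-2 example with one Z charge (think: particle number):
leg_a = ([[0], [1], [2]], +1)   # incoming leg
leg_b = ([[0], [1], [2]], -1)   # outgoing leg, opposite qconj
qmod = [1]

# entry (1, 1): 1 * (+1) + 1 * (-1) == 0 -> compatible with qtotal == [0]
print(entry_qtotal([leg_a, leg_b], (1, 1), qmod))  # [0]
# entry (2, 0): 2 * (+1) + 0 * (-1) == 2 -> must vanish if qtotal == [0]
print(entry_qtotal([leg_a, leg_b], (2, 0), qmod))  # [2]
```

For a matrix with qtotal == [0], this reproduces the familiar block-diagonal structure: only entries whose row and column charges agree survive.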
The pesky qconj  contraction as an example¶
Why did we introduce the qconj
flag? Remember, it's just a sign telling whether the charge points inward or outward.
So what's the reasoning?
The short answer is, that LegCharges actually live on bonds (i.e., legs which are to be contracted) rather than individual tensors. Thus, it is convenient to share the LegCharges between different legs and even tensors, and just adjust the sign of the charges with qconj.
As an example, consider the contraction of two tensors, \(C_{ia,ic} = \sum_{ib} A_{ia,ib} B_{ib,ic}\).
For simplicity, say that the total charge of all three tensors is zero.
What are the implications of the above rule for nonzero entries?
Or rather, how can we ensure that C
complies with the above rule?
An entry C[ia,ic]
will only be nonzero,
if there is an ib
such that both A[ia,ib]
and B[ib,ic]
are nonzero, i.e., both of the following equations are
fulfilled:
A.qtotal == A.legs[0].to_qflat()[ia] * A.legs[0].qconj + A.legs[1].to_qflat()[ib] * A.legs[1].qconj modulo qmod
B.qtotal == B.legs[0].to_qflat()[ib] * B.legs[0].qconj + B.legs[1].to_qflat()[ic] * B.legs[1].qconj modulo qmod
(A.legs[0]
is the LegCharge
saving the charges of the first leg (with index ia
) of A.)
For the uncontracted legs, we just keep the charges as they are:
C.legs = [A.legs[0], B.legs[1]]
It is then straightforward to check that the rule is fulfilled for \(C\), if the following condition is met:
A.qtotal + B.qtotal - C.qtotal == A.legs[1].to_qflat()[ib] * A.legs[1].qconj + B.legs[0].to_qflat()[ib] * B.legs[0].qconj modulo qmod
The easiest way to meet this condition is (1) to require that A.legs[1]
and B.legs[0]
share the same charges b.to_qflat()
, but have
opposite qconj, and (2) to define C.qtotal = A.qtotal + B.qtotal.
This justifies the introduction of qconj:
when you define the tensors, you have to define the LegCharge
for the bond b only once, say for A.legs[1].
For B.legs[0]
you simply use A.legs[1].conj()
which creates a copy of the LegCharge with shared slices and charges, but opposite qconj.
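The cancellation behind this construction is plain arithmetic and can be checked directly. A minimal sketch (not TeNPy code): for every bond index ib, the contributions of the shared charges on the two sides of the bond cancel because the qconj signs are opposite, so summing the two defining equations leaves only the uncontracted legs plus A.qtotal + B.qtotal, which is exactly the rule for C with C.qtotal = A.qtotal + B.qtotal.

```python
# shared charges of the contracted bond b (one Z charge for simplicity)
b_charges = [[-1], [0], [2]]

for ib in range(len(b_charges)):
    contribution_A = b_charges[ib][0] * (-1)   # A's side of the bond: qconj = -1
    contribution_B = b_charges[ib][0] * (+1)   # B's side of the bond: qconj = +1
    # the bond drops out of the summed charge-rule equations for every ib:
    assert contribution_A + contribution_B == 0
```

This is why the same LegCharge can simply be shared (via conj()) instead of being redefined for each tensor.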
As a more impressive example, all ‘physical’ legs of an MPS can usually share the same
LegCharge
(up to different qconj
if the local Hilbert space is the same).
This leads to the following convention:
Convention
When an npc algorithm makes tensors which share a bond (either with the input tensors, as for tensordot, or amongst the output tensors, as for SVD),
the algorithm is free, but not required, to use the same LegCharge
for the tensors sharing the bond, without making a copy.
Thus, if you want to modify a LegCharge, you must make a copy first (e.g. by using methods of LegCharge for what you want to achieve).
Assigning charges to nonphysical legs¶
From the above physical examples, it should be clear how you assign charges to physical legs. But what about other legs, e.g., the virtual bond of an MPS (or an MPO)?
The charge of these bonds must be derived by using the ‘rule for nonzero entries’, as far as they are not arbitrary. As a concrete example, consider an MPS on just two spin 1/2 sites:
      _____         _____
 x->-|  A  |->-y->-|  B  |->-z
     |_____|       |_____|
        ^             ^
        |             |
        p             p
The two legs p
are the physical legs and share the same charge, as they both describe the same local Hilbert space.
For better distinction, let me label their indices by \(\uparrow=0\) and \(\downarrow=1\).
As noted above, we can associate the charges +1 (\(p=\uparrow\)) and -1 (\(p=\downarrow\)), respectively, so we define:
chinfo = npc.ChargeInfo([1], ['2*Sz'])
p = npc.LegCharge.from_qflat(chinfo, [1, -1], qconj=+1)
For the qconj
signs, we stick to the convention used in our MPS code and indicated by the
arrows in the above 'picture': physical legs are incoming (qconj=+1
), and the virtual bonds point from left to right.
This is achieved by using [p, x, y.conj()]
as legs for A
, and [p, y, z.conj()]
for B
, with the
default qconj=+1
for all of p, x, y, z
: y.conj()
has the same charges as y
, but opposite qconj=-1.
The legs x
and z
of an L=2
MPS are 'dummy' legs with just one index 0.
The charge on one of them, as well as the total charge of both A
and B
is arbitrary (i.e., a gauge freedom),
so we make a simple choice: total charge 0 on both arrays, as well as for \(x=0\),
x = npc.LegCharge.from_qflat(chinfo, [0], qconj=+1)
.
The charges on the bonds y and z then depend on the state the MPS represents. Here, we consider a singlet \(|\psi\rangle = (|\uparrow \downarrow\rangle - |\downarrow \uparrow\rangle)/\sqrt{2}\) as a simple example. A possible MPS representation is given by:
A[up, :, :] = [[1/2.**0.5, 0]]     B[up, :, :] = [[0], [1]]
A[down, :, :] = [[0, -1/2.**0.5]]  B[down, :, :] = [[1], [0]]
There are two nonzero entries in A
, for the indices \((a, x, y) = (\uparrow, 0, 0)\) and \((\downarrow, 0, 1)\).
For \((a, x, y) = (\uparrow, 0, 0)\), we want:
A.qtotal = 0 = p.to_qflat()[up] * p.qconj + x.to_qflat()[0] * x.qconj + y.conj().to_qflat()[0] * y.conj().qconj
= 1 * (+1) + 0 * (+1) + y.conj().to_qflat()[0] * (-1)
This fixes the charge of y=0
to 1.
A similar calculation for \((a, x, y) = (\downarrow, 0, 1)\) yields the charge -1
for y=1.
We have thus all the charges of the leg y
and can define y = npc.LegCharge.from_qflat(chinfo, [1, -1], qconj=+1)
.
Now take a look at the entries of B
.
For the nonzero entry \((b, y, z) = (\uparrow, 1, 0)\), we want:
B.qtotal = 0 = p.to_qflat()[up] * p.qconj + y.to_qflat()[1] * y.qconj + z.conj().to_qflat()[0] * z.conj().qconj
= 1 * (+1) + (-1) * (+1) + z.conj().to_qflat()[0] * (-1)
This implies the charge 0 for z = 0, thus z = npc.LegCharge.from_qflat(chinfo, [0], qconj=+1)
.
Finally, note that the rule for \((b, y, z) = (\downarrow, 0, 0)\) is automatically fulfilled!
This is an implication of the fact that the singlet has a well defined value for \(S^z_a + S^z_b\).
For other states without fixed magnetization (e.g., \(|\uparrow \uparrow\rangle + |\downarrow \downarrow\rangle\))
this would not be the case, and we could not use charge conservation.
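The charge assignments derived above can be verified by brute force with plain numpy. This is an illustrative check (not TeNPy code): loop over all entries of the dense tensors and confirm that every nonzero entry satisfies the rule for nonzero entries with qtotal = 0.

```python
import numpy as np

p_charge = [1, -1]   # physical leg charges ('2*Sz'): +1 for up, -1 for down
x_charge = [0]       # left dummy leg
y_charge = [1, -1]   # bond charges derived in the text
z_charge = [0]       # right dummy leg

A = np.zeros((2, 1, 2))   # legs [p, x, y.conj()]
B = np.zeros((2, 2, 1))   # legs [p, y, z.conj()]
A[0, 0, 0] = 2.**-0.5     # up
A[1, 0, 1] = -2.**-0.5    # down
B[0, 1, 0] = 1.           # up
B[1, 0, 0] = 1.           # down

# rule: the qconj-weighted charges of every nonzero entry must sum to qtotal == 0
for (a, x, y), val in np.ndenumerate(A):
    if val != 0.:
        assert p_charge[a] + x_charge[x] - y_charge[y] == 0  # y.conj(): qconj = -1
for (b, y, z), val in np.ndenumerate(B):
    if val != 0.:
        assert p_charge[b] + y_charge[y] - z_charge[z] == 0  # z.conj(): qconj = -1
```

Contracting the bond y indeed reproduces the singlet amplitudes, and all four nonzero entries pass the check.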
As an exercise, you can calculate the charge of z in the case that A.qtotal=5,
B.qtotal = 1,
and charge 2
for x=0.
The result is 2.
Note
This section is meant to be a pedagogical introduction. In your program, you can use the functions
detect_legcharge()
(which does exactly what's described above) or
detect_qtotal()
(if you know all LegCharges, but not qtotal).
Array creation¶
Making a new Array
requires both the tensor entries (data) and charge data.
The default initialization a = Array(...)
creates an empty Array, where all entries are zero
(equivalent to zeros()
).
(Nonzero) data can be provided either as a dense np.array to from_ndarray()
,
or by providing a numpy function such as np.random, np.ones etc. to from_func()
.
In both cases, the charge data is provided by one ChargeInfo
,
and a LegCharge
instance for each of the legs.
Note
The charge data instances are not copied, in order to allow it to be shared between different Arrays.
Consequently, you must make copies of the charge data, if you manipulate it directly.
(However, methods like sort()
do that for you.)
Of course, a new Array
can also be created using the charge data from existing Arrays,
for example with zeros_like()
for examples with zeros_like()
or creating a (deep or shallow) copy()
.
Further, there are the higher level functions like tensordot()
or svd()
,
which also return new Arrays.
Complete blocking of Charges¶
While the code was designed in such a way that each charge sector has a different charge, the code
should still run correctly if multiple charge sectors (for different qindex) correspond to the same charge.
In this sense Array
can act like a sparse array class to selectively store subblocks.
Algorithms which need a full blocking should state that explicitly in their docstrings.
(Some functions (like svd and eigh) require complete blocking internally, but if necessary they just work on
a temporary copy returned by as_completely_blocked()
).
If you expect the tensor to be dense subject to charge constraints (as for MPS), it will be most efficient to fully block by charge, so that work is done on large chunks.
However, if you expect the tensor to be sparser than required by charge (as for an MPO), it may be convenient not to completely block, which forces smaller matrices to be stored, and hence many zeroes to be dropped. Nevertheless, the algorithms were not designed with this in mind, so it is not recommended in general. (If you want to use it, run a benchmark to check whether it is really faster!)
If you haven’t created the array yet, you can call sort()
(with bunch=True
)
on each LegCharge
which you want to block.
This sorts by charges and thus induces a permutation of the indices, which is also returned as a 1D array perm.
For consistency, you have to apply this permutation to your flat data as well.
Alternatively, you can simply call sort_legcharge()
on an existing Array
.
It calls sort()
internally on the specified legs and performs the necessary
permutations directly to (a copy of) self. Yet, you should keep in mind that the axes are permuted afterwards.
Internal Storage schema of npc Arrays¶
The actual data of the tensor is stored in _data
. Rather than keeping a single np.array (which would have many zeros in it),
we store only the nonzero sub blocks. So _data
is a python list of np.array’s.
The order in which they are stored in the list is not physically meaningful, and so not guaranteed (more on this later).
So to figure out where the sub block sits in the tensor, we need the _qdata
structure (on top of the LegCharges in legs
).
Consider a rank 3 tensor T
, with the first leg like:
legs[0].slices = np.array([0, 1, 4, ...])
legs[0].charges = np.array([[2], [1], ...])
Each row of charges gives the charges for a charge block of the leg, with the actual indices of the total tensor determined by the slices. The qindex simply enumerates the charge blocks of a leg. Picking a qindex (and thus a charge block) from each leg, we have a subblock of the tensor.
For each (nonzero) subblock of the tensor, we put a (numpy) ndarray entry in the _data
list.
Since each subblock of the tensor is specified by rank qindices,
we put a corresponding entry in _qdata
, which is a 2D array of shape (#stored_blocks, rank)
.
Each row corresponds to a nonzero subblock, and there are rank columns giving the corresponding qindex for each leg.
Example: for a rank 3 tensor we might have:
T._data = [t1, t2, t3, t4, ...]
T._qdata = np.array([[3, 2, 1],
[1, 1, 1],
[4, 2, 2],
[2, 1, 2],
... ])
The third subblock has an ndarray t3
, and qindices [4 2 2]
for the three legs.
To find the position of t3
in the actual tensor, you can use get_slice():
T.legs[0].get_slice(4), T.legs[1].get_slice(2), T.legs[2].get_slice(2)
The function leg.get_slice(qi)
simply returns slice(leg.slices[qi], leg.slices[qi+1]).
To find the charges of t3, we can use get_charge():
T.legs[0].get_charge(4), T.legs[1].get_charge(2), T.legs[2].get_charge(2)
The function leg.get_charge(qi)
simply returns leg.charges[qi]*leg.qconj.
Note
Outside of np_conserved, you should use the API to access the entries.
If you really need to iterate over all blocks of an Array T
, try for (block, blockslices, charges, qindices) in T: do_something()
.
The order in which the blocks are stored in _data
/_qdata
is arbitrary (although of course _data
and _qdata
must be in correspondence).
However, for many purposes it is useful to sort them according to some convention. So we include a flag ._qdata_sorted
in the array.
So, if sorted (with isort_qdata()
), the _qdata
example above goes to
_qdata = np.array([[1, 1, 1],
[3, 2, 1],
[2, 1, 2],
[4, 2, 2],
... ])
Note that np.lexsort chooses the rightmost column to be the dominant key, a convention we follow throughout.
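This convention can be checked directly with numpy: np.lexsort takes its keys as rows with the last key dominant, so passing the transposed _qdata sorts the rows with the rightmost column as the dominant key.

```python
import numpy as np

_qdata = np.array([[3, 2, 1],
                   [1, 1, 1],
                   [4, 2, 2],
                   [2, 1, 2]])
perm = np.lexsort(_qdata.T)   # rows as keys, last row (= rightmost column) dominant
sorted_qdata = _qdata[perm]
print(sorted_qdata)
# [[1 1 1]
#  [3 2 1]
#  [2 1 2]
#  [4 2 2]]
```

This reproduces the sorted _qdata shown above.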
If _qdata_sorted == True
, _qdata
and _data
are guaranteed to be lexsorted. If _qdata_sorted == False
, there is no guarantee.
If an algorithm modifies _qdata
, it must set _qdata_sorted = False
(unless it guarantees it is still sorted).
The routine sort_qdata()
brings the data to sorted form.
Indexing of an Array¶
Although it is usually not necessary to access single entries of an Array
, you can of course do that.
In the simplest case, this is something like A[0, 2, 1]
for a rank3 Array A
.
However, accessing single entries is quite slow and usually not recommended. For small Arrays, it may be convenient to convert them
back to flat numpy arrays with to_ndarray()
.
On top of that very basic indexing, Array supports slicing and some kind of advanced indexing, which is however different from the one of numpy arrays (described here). Unlike numpy arrays, our Array class does not broadcast existing index arrays – this would be terribly slow. Also, np.newaxis is not supported, since inserting new axes requires additional information for the charges.
Instead, we allow just indexing of the legs independent of each other, of the form A[i0, i1, ...]
.
If all indices i0, i1, ...
are integers, the single corresponding entry (of type dtype) is returned.
However, the individual ‘indices’ i0
for the individual legs can also be one of what is described in the following list.
In that case, a new Array
with less data (specified by the indices) is returned.
The ‘indices’ can be:
- an int: fix the index of that axis, return an array with one less dimension. See also take_slice().
- a slice(None) or : : keep the complete axis.
- an Ellipsis or ... : shorthand for slice(None) for missing axes to fix the len.
- a 1D bool ndarray mask: apply a mask to that axis, see iproject().
- a slice(start, stop, step) or start:stop:step : keep only the indices specified by the slice. This is also implemented with iproject.
- a 1D int ndarray mask: keep only the indices specified by the array. This is also implemented with iproject.
For slices and 1D arrays, additional permutations may be performed with the help of permute().
If the number of indices is less than rank, the remaining axes remain free, so for a rank 4 Array A
, A[i0, i1] == A[i0, i1, ...] == A[i0, i1, :, :]
.
Note that indexing always copies the data – even if the indices contain just slices, in which case numpy would return a view.
However, assigning with A[:, [3, 5], 3] = B
should work as you would expect.
Warning
Due to numpy’s advanced indexing, for 1D integer arrays a0
and a1
the following holds
A[a0, a1].to_ndarray() == A.to_ndarray()[np.ix_(a0, a1)] != A.to_ndarray()[a0, a1]
For a combination of slices and arrays, things get more complicated with numpy's advanced indexing.
In that case, a simple np.ix_(...)
doesn't help any more to emulate our version of indexing.
Introduction to combine_legs, split_legs and LegPipes¶
Often, it is necessary to "combine" multiple legs into one: for example to perform an SVD, a tensor needs to be viewed as a matrix.
For a flat array, this can be done with np.reshape
, e.g., if A
has shape (10, 3, 7)
then B = np.reshape(A, (30, 7))
will
result in a (view of the) array with one less dimension, but a “larger” first leg. By default (order='C'
), this
results in
B[i*3 + j , k] == A[i, j, k] for i in range(10) for j in range(3) for k in range(7)
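The quoted index identity for a C-order reshape is easy to verify with plain numpy:

```python
import numpy as np

A = np.arange(10 * 3 * 7).reshape(10, 3, 7)
B = np.reshape(A, (30, 7))   # order='C' is the default
# the first two legs are combined as i*3 + j, the last leg is untouched
for i in range(10):
    for j in range(3):
        for k in range(7):
            assert B[i * 3 + j, k] == A[i, j, k]
```

combine_legs() realizes the same index arithmetic, but additionally keeps track of the charge data of the combined leg.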
While for a np.array also a reshaping (10, 3, 7) -> (2, 21, 5)
would be allowed, it does not make sense
physically. The only sensible "reshape" operations on an Array
are:
- to combine multiple legs into one leg pipe (LegPipe) with combine_legs(), or
- to split a pipe of previously combined legs with split_legs().
Each leg has a Hilbert space, and a representation of the symmetry on that Hilbert space. Combining legs corresponds to the tensor product operation, and for abelian groups, the corresponding “fusion” of the representation is the simple addition of charge.
Fusion is not a lossless process, so if we ever want to split the combined leg,
we need some additional data to tell us how to reverse the tensor product.
This data is saved in the class LegPipe
, derived from the LegCharge
and used as new leg.
Details of the information contained in a LegPipe are given in the class doc string.
The rough usage idea is as follows:
- You can call combine_legs() without supplying any LegPipes; combine_legs will then make them for you. Nevertheless, if you plan to perform the combination over and over again on sets of legs you know to be identical [with same charges etc., up to an overall -1 in qconj on all incoming and outgoing legs], you might make a LegPipe anyway to save on the overhead of computing it each time.
- Either way, the resulting Array will have a LegPipe as a LegCharge on the combined leg. Thus, it (and all tensors inheriting the leg, e.g. the results of svd, tensordot etc.) will have the information how to split the LegPipe back to the original legs.
- Once you have performed the necessary operations, you can call split_legs(). This uses the information saved in the LegPipe to split the legs, recovering the original legs.
For a LegPipe, conj()
changes qconj
for the outgoing pipe and the incoming legs.
If you need a LegPipe with the same incoming qconj
, use outer_conj()
.
Leg labeling¶
It’s convenient to name the legs of a tensor: for instance, we can name legs 0, 1, 2 to be 'a', 'b', 'c'
: \(T_{i_a,i_b,i_c}\).
That way we don’t have to remember the ordering! Under tensordot, we can then call
U = npc.tensordot(S, T, axes = [ [...], ['b'] ] )
without having to remember where exactly 'b'
is.
Obviously U
should then inherit the name of its legs from the uncontracted legs of S and T.
So here is how it works:
- Labels can only be strings. The labels should not include the characters . or ? . Internally, the labels are stored as a dict a.labels = {label: leg_position, ...}. Not all legs need a label.
- To set the labels, call A.set_labels(['a', 'b', None, 'c', ...]), which will set up the labeling {'a': 0, 'b': 1, 'c': 3, ...}.
- (Where implemented) the specification of axes can use either the labels or the index positions. For instance, the call tensordot(A, B, [['a', 2, 'c'], [...]]) will interpret 'a' and 'c' as labels (calling get_leg_indices() to find their positions using the dict) and 2 as 'the 2nd leg'. That's why we require labels to be strings!
Labels will be intelligently inherited through the various operations of np_conserved.
Under transpose, labels are permuted.
Under tensordot, labels are inherited from the uncontracted legs. If there is a collision, both labels are dropped.
Under combine_legs, labels get concatenated with a . delimiter and surrounded by brackets. Example: let a.labels = {'a': 0, 'b': 1, 'c': 2}. Then if b = a.combine_legs([[0, 1], [2]]), it will have b.labels = {'(a.b)': 0, '(c)': 1}. If some subleg of a combined leg isn’t named, a '?#' label is inserted (with # the leg index), e.g., '(a.?0.c)'.
Under split_legs, the labels are split using the delimiters (and the '?#' labels are dropped).
Under conj, iconj: 'a' -> 'a*', 'a*' -> 'a', and '(a.(b*.c))' -> '(a*.(b.c*))'.
Under svd, the outer labels are inherited, and inner labels can optionally be passed.
Under pinv, the labels are transposed.
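The concatenation and splitting rules above are easy to mimic in a few lines of plain Python. This sketch is not the npc implementation, and for simplicity the '?#' placeholder here uses the position inside the combined leg rather than the original leg index:

```python
def combine_labels(labels):
    """Concatenate sub-leg labels with '.' and surround with brackets."""
    parts = [lab if lab is not None else '?%d' % i for i, lab in enumerate(labels)]
    return '(' + '.'.join(parts) + ')'

def split_label(label):
    """Split a combined label at the top-level '.' delimiters."""
    assert label.startswith('(') and label.endswith(')')
    parts, depth, cur = [], 0, ''
    for ch in label[1:-1]:
        if ch == '.' and depth == 0:
            parts.append(cur)
            cur = ''
            continue
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
        cur += ch
    parts.append(cur)
    return parts

print(combine_labels(['a', None, 'c']))  # -> '(a.?1.c)'
print(split_label('(a.?1.c)'))           # -> ['a', '?1', 'c']
print(split_label('(a.(b.c))'))          # -> ['a', '(b.c)'] (nested pipes stay intact)
```

The nesting-aware split is what makes recursively combined legs like '(a.(b*.c))' work.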
See also¶
The module tenpy.linalg.np_conserved should contain all the API needed from the point of view of the algorithms. It contains the fundamental Array class and functions for working with them (creating and manipulating).
The module tenpy.linalg.charges contains implementations for the charge structure, for example the classes ChargeInfo, LegCharge, and LegPipe. As noted above, the ‘public’ API is imported to (and accessible from) np_conserved.
A full example code for spin-1/2¶
Below follows a full example demonstrating the creation and contraction of Arrays. (It’s the file a_np_conserved.py in the examples folder of the tenpy source.)
"""An example code to demonstrate the usage of :class:`~tenpy.linalg.np_conserved.Array`.
This example includes the following steps:
1) create Arrays for a Neel MPS
2) create an MPO representing the nearest-neighbour AFM Heisenberg Hamiltonian
3) define 'environments' left and right
4) contract MPS and MPO to calculate the energy
5) extract two-site Hamiltonian ``H2`` from the MPO
6) calculate ``exp(-1.j*dt*H2)`` by diagonalization of H2
7) apply ``exp(-1.j*dt*H2)`` to two sites of the MPS and truncate with svd
Note that this example uses only np_conserved, but no other modules.
Compare it to the example `b_mps.py`,
which does the same steps using a few predefined classes like MPS and MPO.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
import numpy as np
# model parameters
Jxx, Jz = 1., 1.
L = 20
dt = 0.1
cutoff = 1.e-10
print("Jxx={Jxx}, Jz={Jz}, L={L:d}".format(Jxx=Jxx, Jz=Jz, L=L))
print("1) create Arrays for an Neel MPS")
#  vL ->- B ->- vR
#         |
#         ^
#         |
#         p
# create a ChargeInfo to specify the nature of the charge
chinfo = npc.ChargeInfo([1], ['2*Sz']) # the second argument is just a descriptive name
# create LegCharges on physical leg and even/odd bonds
p_leg = npc.LegCharge.from_qflat(chinfo, [[1], [-1]])  # charges for up, down
v_leg_even = npc.LegCharge.from_qflat(chinfo, [[0]])
v_leg_odd = npc.LegCharge.from_qflat(chinfo, [[1]])
B_even = npc.zeros([v_leg_even, v_leg_odd.conj(), p_leg],
                   labels=['vL', 'vR', 'p'])  # virtual left/right, physical
B_odd = npc.zeros([v_leg_odd, v_leg_even.conj(), p_leg], labels=['vL', 'vR', 'p'])
B_even[0, 0, 0] = 1. # up
B_odd[0, 0, 1] = 1. # down
Bs = [B_even, B_odd] * (L // 2) + [B_even] * (L % 2)  # (right-canonical)
Ss = [np.ones(1)] * L  # Ss[i] are singular values between Bs[i-1] and Bs[i]
# Side remark:
# An MPS is expected to have nonzero entries everywhere compatible with the charges.
# In general, we recommend to use `sort_legcharge` (or `as_completely_blocked`)
# to ensure complete blocking. (But the code will also work, if you don't do it.)
# The drawback is that this might introduce permutations in the indices of single legs,
# which you have to keep in mind when converting dense numpy arrays to and from npc.Arrays.
print("2) create an MPO representing the AFM Heisenberg Hamiltonian")
#        p*
#        |
#        ^
#        |
# wL ->- W ->- wR
#        |
#        ^
#        |
#        p
# create physical spin-1/2 operators Sz, S+, S-
Sz = npc.Array.from_ndarray([[0.5, 0.], [0., -0.5]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Sp = npc.Array.from_ndarray([[0., 1.], [0., 0.]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Sm = npc.Array.from_ndarray([[0., 0.], [1., 0.]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Id = npc.eye_like(Sz, labels=Sz.get_leg_labels()) # identity
mpo_leg = npc.LegCharge.from_qflat(chinfo, [[0], [2], [-2], [0], [0]])
W_grid = [[Id,   Sp,   Sm,   Sz,   None          ],
          [None, None, None, None, 0.5 * Jxx * Sm],
          [None, None, None, None, 0.5 * Jxx * Sp],
          [None, None, None, None, Jz * Sz       ],
          [None, None, None, None, Id            ]]  # yapf:disable
W = npc.grid_outer(W_grid, [mpo_leg, mpo_leg.conj()], grid_labels=['wL', 'wR'])
# wL/wR = virtual left/right of the MPO
Ws = [W] * L
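As a plain-numpy consistency check (not part of the original example file): summing the Kronecker products of the first row of W_grid against its last column reproduces exactly the two-site coupling terms of the Heisenberg Hamiltonian encoded by the MPO, with the None entries acting as zeros:

```python
import numpy as np

Jxx, Jz = 1., 1.
Sz = np.array([[0.5, 0.], [0., -0.5]])
Sp = np.array([[0., 1.], [0., 0.]])
Sm = np.array([[0., 0.], [1., 0.]])
Id, O = np.eye(2), np.zeros((2, 2))

first_row = [Id, Sp, Sm, Sz, O]                              # W[0, :], None -> 0
last_col = [O, 0.5 * Jxx * Sm, 0.5 * Jxx * Sp, Jz * Sz, Id]  # W[:, -1], None -> 0
# two-site part of H: sum_k W[0, k] (x) W[k, -1]
H2 = sum(np.kron(A, B) for A, B in zip(first_row, last_col))
H2_expected = 0.5 * Jxx * (np.kron(Sp, Sm) + np.kron(Sm, Sp)) + Jz * np.kron(Sz, Sz)
assert np.allclose(H2, H2_expected)
```

A single-site term would sit in the corner entry W[0, -1], which is None here; that is why extracting the two-site Hamiltonian from the MPO is only this simple for purely two-site couplings.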
print("3) define 'environments' left and right")
#  .---->- vR     vL ->----.
#  |                       |
# envL ->- wR     wL ->- envR
#  |                       |
#  .---->- vR*    vL* ->---.
envL = npc.zeros([W.get_leg('wL').conj(), Bs[0].get_leg('vL').conj(), Bs[0].get_leg('vL')],
                 labels=['wR', 'vR', 'vR*'])
envL[0, :, :] = npc.diag(1., envL.legs[1])
envR = npc.zeros([W.get_leg('wR').conj(), Bs[-1].get_leg('vR').conj(), Bs[-1].get_leg('vR')],
                 labels=['wL', 'vL', 'vL*'])
envR[-1, :, :] = npc.diag(1., envR.legs[1])
print("4) contract MPS and MPO to calculate the energy <psi|H|psi>")
contr = envL
for i in range(L):
    # contr labels: wR, vR, vR*
    contr = npc.tensordot(contr, Bs[i], axes=('vR', 'vL'))
    # wR, vR*, vR, p
    contr = npc.tensordot(contr, Ws[i], axes=(['p', 'wR'], ['p*', 'wL']))
    # vR*, vR, wR, p
    contr = npc.tensordot(contr, Bs[i].conj(), axes=(['p', 'vR*'], ['p*', 'vL*']))
    # vR, wR, vR*
# note that the order of the legs changed, but that's no problem with labels:
# the arrays are automatically transposed as necessary
E = npc.inner(contr, envR, axes=(['vR', 'wR', 'vR*'], ['vL', 'wL', 'vL*']))
print("E =", E)
print("5) calculate two-site Hamiltonian ``H2`` from the MPO")
# label left, right physical legs with p, q
W0 = W.replace_labels(['p', 'p*'], ['p0', 'p0*'])
W1 = W.replace_labels(['p', 'p*'], ['p1', 'p1*'])
H2 = npc.tensordot(W0, W1, axes=('wR', 'wL')).itranspose(['wL', 'wR', 'p0', 'p1', 'p0*', 'p1*'])
H2 = H2[0, -1]  # (If H has single-site terms, it's not that simple anymore)
print("H2 labels:", H2.get_leg_labels())
print("6) calculate exp(H2) by diagonalization of H2")
# diagonalization requires to view H2 as a matrix
H2 = H2.combine_legs([('p0', 'p1'), ('p0*', 'p1*')], qconj=[+1, -1])
print("labels after combine_legs:", H2.get_leg_labels())
E2, U2 = npc.eigh(H2)
print("Eigenvalues of H2:", E2)
U_expE2 = U2.scale_axis(np.exp(-1.j * dt * E2), axis=1)  # scale_axis ~= apply a diagonal matrix
exp_H2 = npc.tensordot(U_expE2, U2.conj(), axes=(1, 1))
exp_H2.iset_leg_labels(H2.get_leg_labels())
exp_H2 = exp_H2.split_legs() # by default split all legs which are `LegPipe`
# (this restores the original labels ['p0', 'p1', 'p0*', 'p1*'] of `H2` in `exp_H2`)
print("7) apply exp(H2) to even/odd bonds of the MPS and truncate with svd")
# (this implements one time step of first order TEBD)
for even_odd in [0, 1]:
    for i in range(even_odd, L - 1, 2):
        B_L = Bs[i].scale_axis(Ss[i], 'vL').ireplace_label('p', 'p0')
        B_R = Bs[i + 1].replace_label('p', 'p1')
        theta = npc.tensordot(B_L, B_R, axes=('vR', 'vL'))
        theta = npc.tensordot(exp_H2, theta, axes=(['p0*', 'p1*'], ['p0', 'p1']))
        # view as matrix for SVD
        theta = theta.combine_legs([('vL', 'p0'), ('p1', 'vR')], new_axes=[0, 1], qconj=[+1, -1])
        # now theta has labels '(vL.p0)', '(p1.vR)'
        U, S, V = npc.svd(theta, inner_labels=['vR', 'vL'])
        # truncate
        keep = S > cutoff
        S = S[keep]
        invsq = np.linalg.norm(S)
        Ss[i + 1] = S / invsq
        U = U.iscale_axis(S / invsq, 'vR')
        Bs[i] = U.split_legs('(vL.p0)').iscale_axis(Ss[i]**(-1), 'vL').ireplace_label('p0', 'p')
        Bs[i + 1] = V.split_legs('(p1.vR)').ireplace_label('p1', 'p')
print("finished")
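The truncation inside step 7 can be isolated in plain numpy: discard singular values below the cutoff and renormalize the rest. This is a sketch of the logic only, without the charge bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(0)
cutoff = 1.e-10
# build a matrix with prescribed singular values, two of them below the cutoff
s_true = np.array([0.9, 0.4, 1.e-12, 0.])
Q1, _ = np.linalg.qr(rng.normal(size=(4, 4)))
Q2, _ = np.linalg.qr(rng.normal(size=(4, 4)))
theta = (Q1 * s_true) @ Q2.T      # Q1 @ diag(s_true) @ Q2.T

U, S, Vd = np.linalg.svd(theta, full_matrices=False)
keep = S > cutoff                 # discard tiny singular values ...
U, S, Vd = U[:, keep], S[keep], Vd[keep, :]
S /= np.linalg.norm(S)            # ... and renormalize the rest
assert len(S) == 2                # only the two large values survive
assert np.isclose(np.sum(S**2), 1.)
```

In the npc version, U, S, V additionally carry the combined-leg LegPipes, so split_legs() afterwards restores the physical and virtual legs.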
Models¶
What is a model?¶
Abstractly, a model stands for some physical (quantum) system to be described. For tensor network algorithms, the model is usually specified as a Hamiltonian written in terms of second quantization. For example, let us consider a spin-1/2 Heisenberg model on a chain, described by the Hamiltonian
\(H = J \sum_{i} \vec{S}_i \cdot \vec{S}_{i+1} - h \sum_{i} S^z_i\)
Note that a few things are defined more or less implicitly.
The local Hilbert space: it consists of spin-1/2 degrees of freedom with the usual spin-1/2 operators \(S^x, S^y, S^z\).
The geometric (lattice) structure: above, we spoke of a 1D “chain”.
The boundary conditions: do we have open or periodic boundary conditions? The “chain” suggests open boundaries, which are in most cases preferable for MPS-based methods.
The range of i: How many sites do we consider (for a 2D system: in each direction)?
Obviously, these things need to be specified in TeNPy in one way or another, if we want to define a model.
Ultimately, our goal is to run some algorithm. However, different algorithms require the model and Hamiltonian to be specified in different forms.
We have one class for each such required form.
For example, dmrg requires an MPOModel, which contains the Hamiltonian written as an MPO.
So a new model class suitable for DMRG should have this general structure:
class MyNewModel(MPOModel):
    def __init__(self, model_params):
        lattice = somehow_generate_lattice(model_params)
        H_MPO = somehow_generate_MPO(lattice, model_params)
        # initialize MPOModel
        MPOModel.__init__(self, lattice, H_MPO)
On the other hand, if we want to evolve a state with tebd, we need a NearestNeighborModel, in which the Hamiltonian is written in terms of two-site bond terms to allow a Suzuki-Trotter decomposition of the time-evolution operator:
class MyNewModel2(NearestNeighborModel):
    """General structure for a model suitable for TEBD."""
    def __init__(self, model_params):
        lattice = somehow_generate_lattice(model_params)
        H_bond = somehow_generate_H_bond(lattice, model_params)
        # initialize NearestNeighborModel
        NearestNeighborModel.__init__(self, lattice, H_bond)
Of course, the difficult part in these examples is to generate the H_MPO
and H_bond
in the required form.
If you want to write it down by hand, you can of course do that.
But it can be quite tedious to write every model multiple times, just because we need different representations of the same Hamiltonian.
Luckily, there is a way out in TeNPy: the CouplingModel. Before we describe this class, let’s discuss the background of the Site and the Lattice classes.
The Hilbert space¶
The local Hilbert space is represented by a Site
(read its docstring!).
In particular, the Site contains the local LegCharge
and hence the meaning of each
basis state needs to be defined.
Besides that, the site contains the local operators, which give the real meaning to the local basis.
Having the local operators in the site is very convenient, because it makes them available by name for example when you want to calculate expectation values.
The most common sites (e.g. for spins, spinless or spinful fermions, or bosons) are predefined
in the module tenpy.networks.site
, but if necessary you can easily extend them
by adding further local operators or completely write your own subclasses of Site
.
The full Hilbert space is a tensor product of the local Hilbert space on each site.
Note
The LegCharge
of all involved sites need to have a common
ChargeInfo
in order to allow the contraction of tensors acting on the various sites.
This can be ensured with the function multi_sites_combine_charges().
An example where multi_sites_combine_charges() is needed would be a coupling of different types of sites, e.g., when a tight-binding chain of fermions is coupled to some local spin degrees of freedom.
Another use case of this function would be a model with a \(U(1)\) symmetry involving only half the sites, say \(\sum_{i=0}^{L/2} n_{2i}\).
Note
If you don’t know about the charges and np_conserved yet, but want to get started with models right away,
you can set conserve=None
in the existing sites or use
leg = tenpy.linalg.np_conserved.LegCharge.from_trivial(d)
for an implementation of your custom site,
where d is the dimension of the local Hilbert space.
Alternatively, you can find some introduction to the charges in the Charge conservation with np_conserved.
The geometry: lattice class¶
The geometry is usually given by some kind of lattice structure specifying how the sites are arranged, e.g. implicitly with the sum over nearest neighbours \(\sum_{<i, j>}\).
In TeNPy, this is specified by a Lattice class, which contains a unit cell of a few Sites which are shifted periodically by its basis vectors to form a regular lattice.
Again, we have predefined some basic lattices like a Chain
,
two chains coupled as a Ladder
or 2D lattices like the
Square
, Honeycomb
and
Kagome
lattices; but you are also free to define your own generalizations.
MPS-based algorithms like DMRG always work on purely 1D systems. Even if our model “lives” on a 2D lattice, these algorithms require mapping it onto a 1D chain (probably at the cost of longer-range interactions). This mapping is also done by the lattice, by defining the order (order) of the sites.
Note
Further details on the lattice geometry can be found in Details on the lattice geometry.
The CouplingModel: general structure¶
The CouplingModel provides a general, quite abstract way to specify a Hamiltonian of couplings on a given lattice.
Once initialized, its methods add_onsite() and add_coupling() allow adding onsite and coupling terms repeated over the different unit cells of the lattice.
In that way, it basically allows a straightforward translation of the Hamiltonian given as a math formula \(H = \sum_{i} A_i B_{i+dx} + ...\) with onsite operators A, B, ... into a model class.
The general structure for a new model based on the CouplingModel
is then:
class MyNewModel3(CouplingModel, MPOModel, NearestNeighborModel):
    def __init__(self, ...):
        ...  # follow the basic steps explained below
In the initialization method __init__(self, ...)
of this class you can then follow these basic steps:
0) Read out the parameters.
1) Given the parameters, determine the charges to be conserved. Initialize the LegCharge of the local sites accordingly.
2) Define (additional) local operators needed.
3) Initialize the needed Site.

Note
Using predefined sites like the SpinHalfSite is recommended and can replace steps 1-3.

4) Initialize the lattice (or, if you got the lattice as a parameter, set the sites in the unit cell).
5) Initialize the CouplingModel with CouplingModel.__init__(self, lat).
6) Use add_onsite() and add_coupling() to add all terms of the Hamiltonian. Here, the pairs of the lattice can come in handy, for example:

self.add_onsite(-np.asarray(h), 0, 'Sz')
for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
    self.add_coupling(0.5 * J, u1, 'Sp', u2, 'Sm', dx, plus_hc=True)
    self.add_coupling(J, u1, 'Sz', u2, 'Sz', dx)

Note
The method add_coupling() adds the coupling only in one direction, i.e. not switching i and j in a \(\sum_{\langle i, j\rangle}\). If you have terms like \(c^\dagger_i c_j\) or \(S^{+}_i S^{-}_j\) in your Hamiltonian, you need to add them in both directions to get a Hermitian Hamiltonian! The easiest way to do that is to use the plus_hc option of add_onsite() and add_coupling(), as we did for the \(J/2 (S^{+}_i S^{-}_j + h.c.)\) terms of the Heisenberg model above. Alternatively, you can add the hermitian conjugate terms explicitly; see the examples in add_coupling() for more details.

Note that the strength arguments of these functions can be (numpy) arrays for site-dependent couplings. If you need to add or multiply some parameters of the model for the strength of certain terms, it is recommended to use np.asarray beforehand; that way, lists will also work fine.

7) Finally, if you derived from the MPOModel, you can call calc_H_MPO() to build the MPO and use it for the initialization as MPOModel.__init__(self, lat, self.calc_H_MPO()).
8) Similarly, if you derived from the NearestNeighborModel, you can call calc_H_bond() to initialize it as NearestNeighborModel.__init__(self, lat, self.calc_H_bond()). Calling self.calc_H_bond() will fail for models which are not nearest-neighbor (with respect to the MPS ordering), so you should only subclass the NearestNeighborModel if the lattice is a simple Chain.
Note
The method add_coupling() works only for terms involving operators on two sites. If you have couplings involving more than two sites, you can use add_multi_coupling() instead.
A prototypical example is the exactly solvable ToricCode.
The code of the module tenpy.models.xxz_chain is included below as an illustrative example of how to implement a model. The implementation of the XXZChain directly follows the steps outlined above.
The XXZChain2 implements the very same model, but is based on the CouplingMPOModel explained in the next section.
"""Prototypical example of a 1D quantum model: the spin-1/2 XXZ chain.
The XXZ chain is contained in the more general :class:`~tenpy.models.spins.SpinChain`; the idea of
this module is more to serve as a pedagogical example for a model.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3
import numpy as np
from .lattice import Site, Chain
from .model import CouplingModel, NearestNeighborModel, MPOModel, CouplingMPOModel
from ..linalg import np_conserved as npc
from ..tools.params import asConfig
from ..networks.site import SpinHalfSite # if you want to use the predefined site
__all__ = ['XXZChain', 'XXZChain2']
class XXZChain(CouplingModel, NearestNeighborModel, MPOModel):
    r"""Spin-1/2 XXZ chain with Sz conservation.

    The Hamiltonian reads:

    .. math ::
        H = \sum_i \mathtt{Jxx}/2 (S^{+}_i S^{-}_{i+1} + S^{-}_i S^{+}_{i+1})
                 + \mathtt{Jz} S^z_i S^z_{i+1} \\
            - \sum_i \mathtt{hz} S^z_i

    All parameters are collected in a single dictionary `model_params`, which
    is turned into a :class:`~tenpy.tools.params.Config` object.

    Parameters
    ----------
    model_params : :class:`~tenpy.tools.params.Config`
        Parameters for the model. See :cfg:config:`XXZChain` below.

    Options
    -------
    .. cfg:config :: XXZChain
        :include: CouplingMPOModel

        L : int
            Length of the chain.
        Jxx, Jz, hz : float | array
            Coupling as defined for the Hamiltonian above.
        bc_MPS : {'finite' | 'infinite'}
            MPS boundary conditions. Coupling boundary conditions are chosen appropriately.
    """
    def __init__(self, model_params):
        # 0) read out/set default parameters
        model_params = asConfig(model_params, "XXZChain")
        L = model_params.get('L', 2)
        Jxx = model_params.get('Jxx', 1.)
        Jz = model_params.get('Jz', 1.)
        hz = model_params.get('hz', 0.)
        bc_MPS = model_params.get('bc_MPS', 'finite')
        # 1-3):
        USE_PREDEFINED_SITE = False
        if not USE_PREDEFINED_SITE:
            # 1) charges of the physical leg. The only time that we actually define charges!
            leg = npc.LegCharge.from_qflat(npc.ChargeInfo([1], ['2*Sz']), [1, -1])
            # 2) onsite operators
            Sp = [[0., 1.], [0., 0.]]
            Sm = [[0., 0.], [1., 0.]]
            Sz = [[0.5, 0.], [0., -0.5]]
            # (Can't define Sx and Sy as onsite operators: they are incompatible with Sz charges.)
            # 3) local physical site
            site = Site(leg, ['up', 'down'], Sp=Sp, Sm=Sm, Sz=Sz)
        else:
            # there is a site for spin-1/2 defined in TeNPy, so we can just use it,
            # replacing steps 1-3)
            site = SpinHalfSite(conserve='Sz')
        # 4) lattice
        bc = 'periodic' if bc_MPS == 'infinite' else 'open'
        lat = Chain(L, site, bc=bc, bc_MPS=bc_MPS)
        # 5) initialize CouplingModel
        CouplingModel.__init__(self, lat)
        # 6) add terms of the Hamiltonian
        # (u is always 0 as we have only one site in the unit cell)
        self.add_onsite(-hz, 0, 'Sz')
        self.add_coupling(Jxx * 0.5, 0, 'Sp', 0, 'Sm', 1, plus_hc=True)
        # (instead of plus_hc=True, we could explicitly add the hermitian conjugate term)
        self.add_coupling(Jz, 0, 'Sz', 0, 'Sz', 1)
        # 7) initialize H_MPO
        MPOModel.__init__(self, lat, self.calc_H_MPO())
        # 8) initialize H_bond (the order of 7/8 doesn't matter)
        NearestNeighborModel.__init__(self, lat, self.calc_H_bond())
class XXZChain2(CouplingMPOModel, NearestNeighborModel):
    """Another implementation of the spin-1/2 XXZ chain with Sz conservation.

    This implementation takes the same parameters as the :class:`XXZChain`, but is implemented
    based on the :class:`~tenpy.models.model.CouplingMPOModel`.

    Parameters
    ----------
    model_params : dict | :class:`~tenpy.tools.params.Config`
        See :cfg:config:`XXZChain`
    """
    def __init__(self, model_params):
        model_params = asConfig(model_params, "XXZChain2")
        model_params.setdefault('lattice', "Chain")
        CouplingMPOModel.__init__(self, model_params)

    def init_sites(self, model_params):
        return SpinHalfSite(conserve='Sz')  # use predefined Site

    def init_terms(self, model_params):
        # read out parameters
        Jxx = model_params.get('Jxx', 1.)
        Jz = model_params.get('Jz', 1.)
        hz = model_params.get('hz', 0.)
        # add terms
        for u in range(len(self.lat.unit_cell)):
            self.add_onsite(-hz, u, 'Sz')
        for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
            self.add_coupling(Jxx * 0.5, u1, 'Sp', u2, 'Sm', dx, plus_hc=True)
            self.add_coupling(Jz, u1, 'Sz', u2, 'Sz', dx)
The easiest way: the CouplingMPOModel¶
Since many of the basic steps above are always the same, we don’t need to repeat them all the time.
So we have yet another class helping to structure the initialization of models: the CouplingMPOModel
.
The general structure of this class is like this:
class CouplingMPOModel(CouplingModel, MPOModel):
    def __init__(self, model_param):
        # ... follows the basic steps 1-8 using the methods
        lat = self.init_lattice(model_param)  # for step 4
        # ...
        self.init_terms(model_param)  # for step 6
        # ...

    def init_sites(self, model_param):
        ...  # you should overwrite this

    def init_lattice(self, model_param):
        sites = self.init_sites(model_param)  # for steps 1-3
        # initialize an arbitrary predefined lattice
        # using model_params['lattice']

    def init_terms(self, model_param):
        pass  # does nothing; you should overwrite this
The XXZChain2 included above illustrates how it can be used.
You need to implement steps 1-3) by overwriting the method init_sites().
Step 4) is performed in the method init_lattice(), which initializes arbitrary 1D or 2D lattices; by default a simple 1D chain.
If your model only works for specific lattices, you can overwrite this method in your own class.
Step 6) should be done by overwriting the method init_terms().
Steps 5, 7, 8 and calls to the init_... methods for the other steps are done automatically if you just call CouplingMPOModel.__init__(self, model_param).
The XXZChain and XXZChain2 work only with the Chain as lattice, since they are derived from the NearestNeighborModel.
This allows using them for TEBD in 1D (yeah!), but we can't get the MPO for DMRG on (for example) a Square lattice cylinder, although it's intuitively clear what the Hamiltonian there should be: just put the nearest-neighbor coupling on each bond of the 2D lattice.
It's not possible to generalize a NearestNeighborModel to an arbitrary lattice, where it's no longer nearest neighbors in the MPS sense, but we can go the other way around: first write the model on an arbitrary 2D lattice and then restrict it to a 1D chain to make it a NearestNeighborModel.
Let me illustrate this with another standard example model: the transverse field Ising model, implemented in the module tenpy.models.tf_ising included below.
The TFIModel works for arbitrary 1D or 2D lattices.
The TFIChain then takes the exact same model and makes it a NearestNeighborModel, which only works for the 1D chain.
"""Prototypical example of a quantum model: the transverse field Ising model.
Like the :class:`~tenpy.models.xxz_chain.XXZChain`, the transverse field ising chain
:class:`TFIChain` is contained in the more general :class:`~tenpy.models.spins.SpinChain`;
the idea is more to serve as a pedagogical example for a 'model'.
We choose the field along z to allow to conserve the parity, if desired.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3
import numpy as np
from .model import CouplingMPOModel, NearestNeighborModel
from ..tools.params import asConfig
from ..networks.site import SpinHalfSite
__all__ = ['TFIModel', 'TFIChain']
class TFIModel(CouplingMPOModel):
    r"""Transverse field Ising model on a general lattice.

    The Hamiltonian reads:

    .. math ::
        H = - \sum_{\langle i,j\rangle, i < j} \mathtt{J} \sigma^x_i \sigma^x_{j}
            - \sum_{i} \mathtt{g} \sigma^z_i

    Here, :math:`\langle i,j \rangle, i < j` denotes nearest neighbor pairs, each pair appearing
    exactly once.
    All parameters are collected in a single dictionary `model_params`, which
    is turned into a :class:`~tenpy.tools.params.Config` object.

    Parameters
    ----------
    model_params : :class:`~tenpy.tools.params.Config`
        Parameters for the model. See :cfg:config:`TFIModel` below.

    Options
    -------
    .. cfg:config :: TFIModel
        :include: CouplingMPOModel

        conserve : None | 'parity'
            What should be conserved. See :class:`~tenpy.networks.Site.SpinHalfSite`.
        J, g : float | array
            Coupling as defined for the Hamiltonian above.
    """
    def init_sites(self, model_params):
        conserve = model_params.get('conserve', 'parity')
        assert conserve != 'Sz'
        if conserve == 'best':
            conserve = 'parity'
            if self.verbose >= 1.:
                print(self.name + ": set conserve to", conserve)
        site = SpinHalfSite(conserve=conserve)
        return site

    def init_terms(self, model_params):
        J = np.asarray(model_params.get('J', 1.))
        g = np.asarray(model_params.get('g', 1.))
        for u in range(len(self.lat.unit_cell)):
            self.add_onsite(-g, u, 'Sigmaz')
        for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
            self.add_coupling(-J, u1, 'Sigmax', u2, 'Sigmax', dx)
        # done


class TFIChain(TFIModel, NearestNeighborModel):
    """The :class:`TFIModel` on a Chain, suitable for TEBD.

    See the :class:`TFIModel` for the documentation of parameters.
    """
    def __init__(self, model_params):
        model_params = asConfig(model_params, self.__class__.__name__)
        model_params.setdefault('lattice', "Chain")
        CouplingMPOModel.__init__(self, model_params)
Automation of Hermitian conjugation¶
As most physical Hamiltonians are Hermitian, these Hamiltonians are fully determined when only half of the mutually conjugate terms is defined. For example, a simple hopping Hamiltonian
\(H = \sum_{\langle i, j\rangle} (J c^{\dagger}_i c_j + J^* c^{\dagger}_j c_i)\)
is fully determined by the term \(c^{\dagger}_i c_j\) if we demand that Hermitian conjugates are included automatically.
In TeNPy, whenever you add a coupling using add_onsite()
,
add_coupling()
, or add_multi_coupling()
,
you can use the optional argument plus_hc to automatically create and add the Hermitian conjugate of that coupling term, as shown above.
Additionally, in an MPO, explicitly adding both a nonHermitian term and its conjugate increases the bond dimension of the MPO, which increases the memory requirements of the MPOEnvironment
.
Instead of adding the conjugate terms explicitly, you can set a flag explicit_plus_hc in the CouplingMPOModel parameters, which will ensure two things:
The model and the MPO will only store half the terms of each Hermitian conjugate pair added, but the flag explicit_plus_hc indicates that they represent self + h.c.. In the example above, only the term \(c^{\dagger}_i c_j\) would be saved.
At runtime during DMRG, the Hermitian conjugate of the (now nonHermitian) MPO will be computed and applied along with the MPO, so that the effective Hamiltonian is still Hermitian.
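The bookkeeping can be illustrated in plain numpy for a single-particle hopping matrix (an analogy only, not the MPO machinery): store just the \(c^{\dagger}_i c_{i+1}\) half of each conjugate pair, and apply "+ h.c." at runtime:

```python
import numpy as np

L, t = 4, 1.0 + 0.5j  # complex hopping amplitude t (hypothetical values)
# store only the c^dagger_i c_{i+1} half of each conjugate pair
H_half = np.diag(np.full(L - 1, t), k=1)
# "explicit_plus_hc": the conjugate half is applied at runtime instead of being stored
H_full = H_half + H_half.conj().T
assert np.allclose(H_full, H_full.conj().T)  # the effective Hamiltonian is Hermitian
```

The stored object (H_half) is smaller and non-Hermitian, which mirrors the smaller, non-Hermitian MPO; Hermiticity is restored only when the conjugate is added during the computation.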
Note
The model flag explicit_plus_hc should be used in conjunction with the flag plus_hc in add_coupling()
or add_multi_coupling()
.
If plus_hc is False while explicit_plus_hc is True, the MPO bond dimension will not be reduced, but you will still pay the additional computational cost of computing the Hermitian conjugate at runtime.
Thus, we end up with several use cases, depending on your preferences.
Consider the FermionModel
.
If you do not care about the MPO bond dimension, and want to add Hermitian conjugate terms manually, you would set model_par['explicit_plus_hc'] = False and write:
self.add_coupling(J, u1, 'Cd', u2, 'C', dx)
self.add_coupling(np.conj(J), u2, 'Cd', u1, 'C', dx)
If you wanted to save the trouble of the extra line of code (but still did not care about MPO bond dimension), you would keep the model_par, but instead write:
self.add_coupling(J, u1, 'Cd', u2, 'C', dx, plus_hc=True)
Finally, if you wanted a reduction in MPO bond dimension, you would need to set model_par['explicit_plus_hc'] = True, and write:
self.add_coupling(J, u1, 'Cd', u2, 'C', dx, plus_hc=True)
Some random remarks on models¶
Needless to say, we also have various predefined models under tenpy.models.
Of course, an MPO is all you need to initialize an MPOModel to be used for DMRG; you don't have to use the CouplingModel or CouplingMPOModel. For example, exponentially decaying long-range interactions are not supported by the coupling model, but straightforward to include in an MPO, as demonstrated in the example examples/mpo_exponentially_decaying.py.
If the model of your interest contains fermions, you should read the Fermions and the Jordan-Wigner transformation.
We suggest writing the model to take a single parameter dictionary for the initialization, as the CouplingMPOModel does. The CouplingMPOModel converts the dictionary to a dict-like Config with some additional features before passing it on to the init_lattice, init_site, ... methods. It is recommended to read out parameters providing default values with model_params.get("key", default_value), see get().
When you write a model and want to include a test that it can at least be constructed, take a look at tests/test_model.py.
Details on the lattice geometry¶
The Lattice
class defines the geometry of the system.
In the basic form, it represents a unit cell of a few sites repeated in one or multiple directions.
Moreover, it maps this higherdimensional geometry to a onedimensional chain for MPSbased algorithms.
Visualization¶
A plot of the lattice can greatly help to understand which sites are connected by what couplings.
The methods plot_*
of the Lattice
can do a good job for a quick illustration.
Let’s look at the Honeycomb lattice as an example.
import matplotlib.pyplot as plt
from tenpy.models import lattice
plt.figure(figsize=(5, 6))
ax = plt.gca()
lat = lattice.Honeycomb(Lx=4, Ly=4, sites=None, bc='periodic')
lat.plot_coupling(ax)
lat.plot_order(ax, linestyle=':')
lat.plot_sites(ax)
lat.plot_basis(ax, origin=0.5*(lat.basis[0] + lat.basis[1]))
ax.set_aspect('equal')
ax.set_xlim(-1)
ax.set_ylim(-1)
plt.show()
In this case, the unit cell (shaded green) consists of two sites, which for the purpose of plotting we just set to sites=None
;
in general you should specify instances of Site
for that.
The unit cell gets repeated in the directions given by the lattice basis (green arrows at the unit cell boundary).
Hence, we can label each site by a lattice index (x, y, u)
in this case, where x in range(Lx), y in range(Ly)
specify the translation of the unit cell
and u in range(len(unit_cell))
, here u in [0, 1]
, specifies the index within the unit cell.
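For the "Cstyle" ordering discussed below, the mapping between a lattice index (x, y, u) and the MPS index is simple enumeration with u the fastest-running index. A sketch (the default Honeycomb ordering in TeNPy may differ from this):

```python
Lx, Ly, n_u = 4, 4, 2  # unit cell of 2 sites, as for the Honeycomb lattice above

def lat2mps(x, y, u):
    """MPS index for 'Cstyle' ordering: u fastest, then y, then x."""
    return (x * Ly + y) * n_u + u

def mps2lat(i):
    """Inverse mapping: MPS index -> lattice index (x, y, u)."""
    x, rest = divmod(i, Ly * n_u)
    y, u = divmod(rest, n_u)
    return (x, y, u)

assert lat2mps(0, 0, 1) == 1
assert mps2lat(lat2mps(2, 3, 1)) == (2, 3, 1)  # round trip
```

The real mps2lat_idx() / lat2mps_idx() methods perform this lookup via the lattice's order array, so they work for arbitrary orderings, not just this formula.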
How an MPS winds through the lattice: the order¶
For MPSbased algorithms, we need to map a 2D lattice like the one above to a 1D chain.
The red, dashed line in the plot indicates how an MPS winds through the 2D
lattice. The MPS index i is a simple enumeration of the sites along this line, shown as numbers next to the sites
in the plot.
The methods mps2lat_idx()
and lat2mps_idx()
map
indices of the MPS to and from indices of the lattice.
The MPS
class itself is (mostly) agnostic of the underlying geometry.
For example, expectation_value()
will return a 1D array of the expectation value on each
site indexed by the MPS index i.
If you have a two-dimensional lattice, you can use mps2lat_values() to map this result to a 2D array indexed by the lattice indices.
A suitable order is critical for the efficiency of MPS-based algorithms. On the one hand, different orderings can lead to different MPO bond dimensions, with direct impact on the complexity scaling. On the other hand, it influences how much entanglement needs to pass through each bond of the underlying MPS, e.g., for the ground state to be found in DMRG, and therefore influences the required MPS bond dimensions. For the latter reason, the "optimal" ordering cannot be known a priori and might even depend on your coupling parameters (and the phase you are in). In the end, you can just try different orderings and see which one works best.
The simplest way to change the order is to use a non-default value for the initialization parameter order of the Lattice class. This gets passed on to ordering(), which you can override in a custom lattice class to define new possible orderings.
Alternatively, you can go the most general way and simply set the attribute order to be a 2D numpy array with lattice indices as rows, in the order you want.
import matplotlib.pyplot as plt
from tenpy.models import lattice

Lx, Ly = 3, 3
fig, axes = plt.subplots(2, 2, figsize=(7, 8))

lat1 = lattice.Honeycomb(Lx, Ly, sites=None, bc='periodic')  # default order
lat2 = lattice.Honeycomb(Lx, Ly, sites=None, bc='periodic',
                         order="Cstyle")  # first method to change order
# alternative: directly set "Cstyle" order
lat3 = lattice.Honeycomb(Lx, Ly, sites=None, bc='periodic')
lat3.order = lat2.ordering("Cstyle")  # now equivalent to lat2
# general: can apply an arbitrary permutation to the order
lat4 = lattice.Honeycomb(Lx, Ly, sites=None, bc='periodic',
                         order="Cstyle")
old_order = lat4.order
permutation = []
for i in range(0, len(old_order), 2):
    # swap each pair of consecutive sites
    permutation.append(i + 1)
    permutation.append(i)
lat4.order = old_order[permutation, :]

for lat, label, ax in zip([lat1, lat2, lat3, lat4],
                          ["order='default'",
                           "order='Cstyle'",
                           "order='Cstyle'",
                           "custom permutation"],
                          axes.flatten()):
    lat.plot_coupling(ax)
    lat.plot_sites(ax)
    lat.plot_order(ax, linestyle=':', linewidth=2.)
    ax.set_aspect('equal')
    ax.set_title('order = ' + repr(label))
plt.show()
Boundary conditions¶
The Lattice
defines the boundary conditions bc in each direction.
It can be one of the usual 'open'
or 'periodic'
in each direction.
On top of that, there is the bc_MPS boundary condition of the MPS, one of 'finite', 'segment', 'infinite'
.
For an 'infinite'
MPS, the whole lattice is repeated in the direction of the first basis vector of the lattice.
For bc_MPS='infinite',
the first direction should always be 'periodic',
but you can also define a lattice with
bc_MPS='finite', bc=['periodic', 'periodic']
for a finite system on the torus.
This is discouraged, though, because the ground state MPS will require the squared bond dimension for the same precision in this
case!
For two- (or higher-) dimensional lattices, e.g. for DMRG on an infinite cylinder,
you can also specify an integer shift instead of just saying 'periodic':
Rolling the 2D lattice up into a cylinder, you have a degree of freedom regarding which sites to connect.
This is illustrated in the figure below for a Square
lattice with bc=['periodic', shift]
for shift in [-1, 0, 1]
(different columns).
In the first row, the orange markers indicate a pair of identified sites (see plot_bc_shift()
).
The dashed orange line indicates the direction of the cylinder axis.
The line where the cylinder is “cut open” therefore winds around the cylinder for a nonzero shift.
(A similar thing happens even for shift=0 for more complicated lattices with a non-orthogonal basis.)
In the second row, we directly draw lines between all sites connected by nearestneighbor couplings, as they appear in the MPO.
Irregular Lattices¶
The IrregularLattice
allows one to add sites to or remove sites from an existing regular lattice.
The docstring of IrregularLattice
contains several examples; let us consider another one
here, where we use the IrregularLattice to “fix” the boundary of the Honeycomb lattice:
when we use "open"
boundary conditions for a finite system, there are two sites (on the lower left and upper right)
which are not part of any hexagon. The following example shows how to remove them from the system:
import matplotlib.pyplot as plt
from tenpy.models import lattice

Lx, Ly = 3, 3
fig, axes = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(6, 4))
reg_lat = lattice.Honeycomb(Lx=Lx, Ly=Ly, sites=None, bc='open')
irr_lat = lattice.IrregularLattice(reg_lat, remove=[[0, 0, 0], [-1, -1, 1]])
for lat, label, ax in zip([reg_lat, irr_lat],
                          ["regular", "irregular"],
                          axes.flatten()):
    lat.plot_coupling(ax)
    lat.plot_order(ax, linestyle=':')
    lat.plot_sites(ax)
    ax.set_aspect('equal')
    ax.set_title(label)
plt.show()
Fermions and the Jordan-Wigner transformation¶
The Jordan-Wigner transformation maps fermionic creation and annihilation operators to (bosonic) spin operators.
Spinless fermions in 1D¶
Let’s start by explicitly writing down the transformation. With the Pauli matrices \(\sigma^{x,y,z}_j\) and \(\sigma^{\pm}_j = (\sigma^x_j \pm \mathrm{i} \sigma^y_j)/2\) on each site, we can map

\(n_j \equiv c^\dagger_j c_j = \sigma^{+}_j \sigma^{-}_j = (\sigma^z_j + 1)/2\)

\(c_j = (-1)^{\sum_{l < j} n_l} \sigma^{-}_j\)

\(c^\dagger_j = (-1)^{\sum_{l < j} n_l} \sigma^{+}_j\)

The \(n_l\) in the second and third row are defined in terms of Pauli matrices according to the first row. We do not interpret the Pauli matrices as a spin-1/2; they have nothing to do with the spin in the spinful case. If you really want to interpret them physically, you might better think of them as hard-core bosons (\(b_j = \sigma^{-}_j, b^\dagger_j = \sigma^{+}_j\)), with a spin of the fermions mapping to a spin of the hard-core bosons.
Note that this transformation maps the fermionic operators \(c_j\) and \(c^\dagger_j\) to global operators; although they carry an index j indicating
a site, they actually act on all sites l <= j!
Thus, clearly the operators C
and Cd
defined in the FermionSite
do not directly correspond to \(c_j\) and
\(c^\dagger_j\).
The part \((-1)^{\sum_{l < j} n_l}\) is called the Jordan-Wigner string; in the FermionSite
it is given by the local operator
\(JW := (-1)^{n_l}\) acting on all sites l < j.
Since this is important, let me stress it again:
Warning
The fermionic operator \(c_j\) (and similarly \(c^\dagger_j\)) maps to a global operator consisting of
the Jordan-Wigner string built by the local operator JW
on sites l < j
and the local operator C
(or Cd
, respectively) on site j
.
On the sites itself, the onsite operators C
and Cd
in the FermionSite
fulfill the correct anticommutation relation, without the need to include JW
strings.
The JW
string is necessary to ensure the anticommutation for operators acting on different sites.
Written in terms of onsite operators defined in the FermionSite
,
with the i-th entry in the list acting on site i, the relations are thus:
["JW", ..., "JW", "C", "Id", ..., "Id"] # for the annihilation operator
["JW", ..., "JW", "Cd", "Id", ..., "Id"] # for the creation operator
Note that "JW"
squares to the identity, "JW JW" == "Id",
which is the reason that the Jordan-Wigner string completely cancels in \(n_j = c^\dagger_j c_j\).
In the above notation, this can be written as:
["JW", ..., "JW", "Cd", "Id", ..., "Id"] * ["JW", ..., "JW", "C", "Id", ..., "Id"]
== ["JW JW", ..., "JW JW", "Cd C", "Id Id", ..., "Id Id"] # by definition of the tensorproduct
== ["Id", ..., "Id", "N", "Id", ..., "Id"] # by definition of the local operators
# ("X Y" stands for the local operators X and Y applied on the same site. We assume that the "Cd" and "C" on the first line act on the same site.)
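This cancellation can be verified directly with explicit 2x2 matrices in the occupation basis \(\{|0\rangle, |1\rangle\}\); the following is a standalone numpy sketch, not TeNPy code:

```python
import numpy as np

# local operators in the occupation basis (|empty>, |occupied>)
Id = np.eye(2)
C = np.array([[0., 1.], [0., 0.]])   # annihilation operator
Cd = C.T                             # creation operator
N = Cd @ C                           # number operator, diag(0, 1)
JW = np.diag([1., -1.])              # Jordan-Wigner sign (-1)^n

assert np.allclose(JW @ JW, Id)      # "JW JW" == "Id"
assert np.allclose(Cd @ C, N)        # strings cancel in n_j = c^dagger_j c_j
```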
For a pair of operators acting on different sites, JW
strings have to be included for every site between the operators.
For example, taking i < j,
\(c^\dagger_i c_j \leftrightarrow \sigma_i^{+} (-1)^{\sum_{i \leq l < j} n_l} \sigma_j^{-}\).
More explicitly, for j = i+2
we get:
["JW", ..., "JW", "Cd", "Id", "Id", "Id", ..., "Id"] * ["JW", ..., "JW", "JW", "JW", "C", "Id", ..., "Id"]
== ["JW JW", ..., "JW JW", "Cd JW", "Id JW", "Id C", ..., "Id"]
== ["Id", ..., "Id", "Cd JW", "JW", "C", ..., "Id"]
In other words, the Jordan-Wigner string appears only in the range i <= l < j,
i.e. between the two sites and on the smaller/left one of them.
(You can easily generalize this rule to cases with more than two \(c\) or \(c^\dagger\).)
This last line (as well as the last line of the previous example) can be rewritten by simplifying the operator product
"Cd JW" == "Cd"
(equivalently, "JW Cd" == - "Cd").
(This is valid because either site i
is occupied, in which case the Cd
yields 0 anyway, or it is empty, in which case the JW
acting before the Cd contributes no sign.)
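These single-site identities can likewise be checked with explicit matrices. In this standalone numpy sketch, "X Y" denotes the matrix product X @ Y, so Y acts first on the state:

```python
import numpy as np

C = np.array([[0., 1.], [0., 0.]])   # annihilation operator
Cd = C.T                             # creation operator
JW = np.diag([1., -1.])              # (-1)^n

assert np.allclose(Cd @ JW, Cd)      # "Cd JW" == "Cd"
assert np.allclose(JW @ Cd, -Cd)     # "JW Cd" == - "Cd"
assert np.allclose(JW @ C, C)        # "JW C" == "C"
assert np.allclose(C @ JW, -C)       # "C JW" == - "C"
```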
This is also the case for j < i,
say j = i-2:
\(c^\dagger_i c_j \leftrightarrow (-1)^{\sum_{j \leq l < i} n_l} \sigma_i^{+} \sigma_j^{-}\).
As shown in the following, the JW
again appears on the left site,
but this time acting after C
:
["JW", ..., "JW", "JW", "JW", "Cd", "Id", ..., "Id"] * ["JW", ..., "JW", "C", "Id", "Id", "Id", ..., "Id"]
== ["JW JW", ..., "JW JW", "JW C", "JW", "Cd Id", ..., "Id"]
== ["Id", ..., "Id", "JW C", "JW", "Cd", ..., "Id"]
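To see that the JW string indeed restores the anticommutation relations between different sites, one can build the global operators on a two-site chain with numpy. Again a standalone sketch, not TeNPy code, with site 0 as the left tensor factor:

```python
import numpy as np

Id = np.eye(2)
C = np.array([[0., 1.], [0., 0.]])
Cd = C.T
JW = np.diag([1., -1.])

# global operators on a 2-site chain, site 0 = left tensor factor:
c0 = np.kron(C, Id)     # ["C", "Id"]
c1 = np.kron(JW, C)     # ["JW", "C"]  <- JW string on the site to the left
cd0 = np.kron(Cd, Id)
cd1 = np.kron(JW, Cd)

def acomm(A, B):
    """Anticommutator {A, B} = A B + B A."""
    return A @ B + B @ A

assert np.allclose(acomm(c0, c1), 0)           # {c_0, c_1} = 0
assert np.allclose(acomm(c0, cd1), 0)          # {c_0, c^dagger_1} = 0
assert np.allclose(acomm(c0, cd0), np.eye(4))  # {c_0, c^dagger_0} = 1
```

Omitting the JW in c1 (i.e. using `np.kron(Id, C)`) would make the operators on different sites commute instead of anticommute.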
Higher dimensions¶
For an MPO or MPS, you always have to define an ordering of all your sites. This ordering effectively maps the higher-dimensional lattice to a 1D chain, usually at the expense of long-range hopping/interactions. With this mapping, the Jordan-Wigner transformation generalizes to higher dimensions in a straightforward way.
Spinful fermions¶
As illustrated in the above picture, you can think of spin-1/2 fermions on a chain as spinless fermions living on a ladder (with analogous mappings for higher-dimensional lattices).
Each rung (a blue box in the picture) forms a SpinHalfFermionSite,
which is composed of two FermionSite
(the circles in the picture) for spin-up and spin-down.
The mapping of the spin-1/2 fermions onto the ladder induces an ordering of the spins, as the final result must again be a one-dimensional chain, now containing both spin species.
The solid line indicates the convention for the ordering, the dashed lines indicate spin-preserving hopping \(c^\dagger_{s,i} c_{s,i+1} + h.c.\)
and visualize the ladder structure.
More generally, each species of fermions appearing in your model gets a separate label, and its Jordan-Wigner string
includes the signs \((-1)^{n_l}\) of all species of fermions to the ‘left’ of it (in the sense of the ordering indicated by the solid line in the picture).
In the case of spin-1/2 fermions labeled by \(\uparrow\) and \(\downarrow\) on each site, the complete mapping is given by (where j and l are indices of the SpinHalfFermionSite):

\(c_{\uparrow,j} = (-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} \sigma^{-}_{\uparrow,j}\)

\(c^\dagger_{\uparrow,j} = (-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} \sigma^{+}_{\uparrow,j}\)

\(c_{\downarrow,j} = (-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} (-1)^{n_{\uparrow,j}} \sigma^{-}_{\downarrow,j}\)

\(c^\dagger_{\downarrow,j} = (-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} (-1)^{n_{\uparrow,j}} \sigma^{+}_{\downarrow,j}\)
In each of the above mappings the sign operators on the right hand side commute; we can rewrite
\((-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} = \prod_{l < j} (-1)^{n_{\uparrow,l}} (-1)^{n_{\downarrow,l}}\),
which resembles the actual structure in the code more closely.
The parts of the operator acting in the same box of the picture, i.e. which have the same index j or l,
are the ‘onsite’ operators in the SpinHalfFermionSite:
for example JW
on site j is given by \((-1)^{n_{\uparrow,j}} (-1)^{n_{\downarrow,j}}\),
Cu
is just \(\sigma^{-}_{\uparrow,j}\), Cdu
is \(\sigma^{+}_{\uparrow,j}\),
Cd
is \((-1)^{n_{\uparrow,j}} \sigma^{-}_{\downarrow,j}\),
and Cdd
is \((-1)^{n_{\uparrow,j}} \sigma^{+}_{\downarrow,j}\).
Note the asymmetry regarding the spin in the definition of the onsite operators:
the spin-down operators include Jordan-Wigner signs for the spin-up fermions on the same site.
This asymmetry stems from the ordering convention introduced by the solid line in the picture, according to which the spin-up site
is “left” of the spin-down site. With the above definition, the operators within the same SpinHalfFermionSite
fulfill the expected anticommutation relations,
for example "Cu Cdd" == - "Cdd Cu",
but again the JW
on sites left of the operator pair is crucial to get the correct
anticommutation relations globally.
Warning
Again, the fermionic operators \(c_{\uparrow,j}, c^\dagger_{\uparrow,j}, c_{\downarrow,j}, c^\dagger_{\downarrow,j}\) correspond to global operators consisting of
the Jordan-Wigner string built by the local operator JW
on sites l < j
and the local operators 'Cu', 'Cdu', 'Cd', 'Cdd'
on site j
.
Written explicitly in terms of onsite operators defined in the SpinHalfFermionSite,
with the j-th entry in the list acting on site j, the relations are:
["JW", ..., "JW", "Cu", "Id", ..., "Id"] # for the annihilation operator spinup
["JW", ..., "JW", "Cd", "Id", ..., "Id"] # for the annihilation operator spindown
["JW", ..., "JW", "Cdu", "Id", ..., "Id"] # for the creation operator spinup
["JW", ..., "JW", "Cdd", "Id", ..., "Id"] # for the creation operator spindown
As you can see, the asymmetry regarding the spins in the definition of the local onsite operators "Cu", "Cd", "Cdu", "Cdd"
leads to a symmetric definition in the global sense.
If you look at the definitions very closely, you can see that in terms like ["Id", "Cd JW", "JW", "Cd"]
the
Jordan-Wigner sign \((-1)^{n_{\uparrow,2}}\) appears twice (namely once in the definition of "Cd"
and once in the "JW"
on site
2) and could in principle be canceled; however, in favor of a simplified handling in the code, we do not recommend canceling it.
Similarly, within a spinless FermionSite,
one can simplify "Cd JW" == "Cd"
and "JW C" == "C",
but these relations do not hold in the SpinHalfFermionSite,
and for consistency we recommend keeping the "JW"
operator string explicitly even in nearest-neighbor models where it is not strictly necessary.
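The onsite operators of the SpinHalfFermionSite and the relation "Cu Cdd" == - "Cdd Cu" can be checked by explicitly tensoring two spinless sites (spin-up as the left factor, following the ordering convention above). This is a standalone numpy sketch, not TeNPy code:

```python
import numpy as np

Id = np.eye(2)
C_ = np.array([[0., 1.], [0., 0.]])  # spinless annihilation
Cd_ = C_.T                           # spinless creation
JW_ = np.diag([1., -1.])             # spinless (-1)^n

# onsite operators of the combined site (up = left factor, down = right):
Cu = np.kron(C_, Id)       # sigma^-_up
Cdu = np.kron(Cd_, Id)     # sigma^+_up
Cd_op = np.kron(JW_, C_)   # (-1)^{n_up} sigma^-_down  ("Cd" in TeNPy)
Cdd = np.kron(JW_, Cd_)    # (-1)^{n_up} sigma^+_down
JW = np.kron(JW_, JW_)     # (-1)^{n_up + n_down}

assert np.allclose(Cu @ Cdd, -Cdd @ Cu)        # "Cu Cdd" == - "Cdd Cu"
assert np.allclose(JW @ JW, np.eye(4))         # JW squares to the identity
assert not np.allclose(Cd_op @ JW, Cd_op)      # "Cd JW" != "Cd" here!
```

The last assertion illustrates why the spinless simplification "Cd JW" == "Cd" must not be used for the spinful site.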
How to handle JordanWigner strings in practice¶
There are only a few pitfalls where you have to keep the mapping in mind: When building a model, you map the physical fermionic operators to the usual spin/bosonic operators. The algorithms don’t care about the mapping; they just use the given Hamiltonian, be it given as an MPO for DMRG or as nearest-neighbor couplings for TEBD. Only when you do a measurement (e.g. by calculating an expectation value or a correlation function) do you have to reverse this mapping. Be aware that in certain cases, e.g. when calculating the entanglement entropy on a certain bond, you cannot reverse this mapping (in a straightforward way), and thus your results might depend on how you defined the Jordan-Wigner string.
Whatever you do, you should first think about whether (and how much of) the Jordan-Wigner string cancels.
For example for many of the onsite operators (like the particle number operator N
or the spin operators in the SpinHalfFermionSite
)
the Jordan-Wigner string cancels completely and you can just ignore it both in onsite terms and couplings.
In case of two operators acting on different sites, you typically have a Jordan-Wigner string in between (e.g. for the
\(c^\dagger_i c_j\) examples described above and below) or no Jordan-Wigner strings at all (e.g. for density-density
interactions \(n_i n_j\)).
In fact, the case that the Jordan-Wigner string to the left of the first non-trivial operator does not cancel is currently not supported
for models and expectation values, as it usually doesn’t appear in practice.
For terms involving more operators, things tend to get more complicated, e.g. \(c^\dagger_i c^\dagger_j c_k c_l\) with
\(i < j < k < l\) requires a Jordan-Wigner string on sites m with \(i \leq m < j\) or \(k \leq m < l\), but
not for \(j < m < k\).
Note
TeNPy keeps track of which onsite operators need a Jordan-Wigner string in the Site
class,
specifically in need_JW_string
and op_needs_JW()
.
Hence, when you define custom sites or add extra operators to the sites, make sure that
op_needs_JW()
returns the expected results.
When building a model, the Jordan-Wigner strings need to be taken into account.
If you just specify the H_MPO or H_bond, it is your responsibility to use the correct mapping.
However, if you use the add_coupling()
method of the
CouplingModel
,
(or the generalization add_multi_coupling()
for more than 2 operators),
TeNPy can use the information from the Site class to automatically add Jordan-Wigner strings as needed.
Indeed, with the default argument op_string=None,
add_coupling will automatically check whether the operators
need Jordan-Wigner strings and correspondingly set op_string='JW', str_on_first=True,
if necessary.
For add_multi_coupling, you can’t even explicitly specify the Jordan-Wigner strings; you must use
op_string=None,
from which it will automatically determine where Jordan-Wigner strings are needed.
Obviously, you should be careful about the convention which of the operators is applied first (in a physical
sense as an operator acting on a state), as this corresponds to a sign of the prefactor.
Read the docstrings of add_coupling()
and add_multi_coupling()
for details.
As a concrete example, let us specify a hopping
\(\sum_{i} (c^\dagger_i c_{i+1} + h.c.) = \sum_{i} (c^\dagger_i c_{i+1} + c^\dagger_{i} c_{i-1})\)
in a 1D chain of FermionSite
with add_coupling()
.
The recommended way is just:
self.add_coupling(strength, 0, 'Cd', 0, 'C', 1, plus_hc=True)
If you want to specify both the Jordan-Wigner string and the h.c.
term explicitly, you can use:
self.add_coupling(strength, 0, 'Cd', 0, 'C', 1, op_string='JW', str_on_first=True)
self.add_coupling(np.conj(strength), 0, 'Cd', 0, 'C', -1, op_string='JW', str_on_first=True)  # h.c.
Slightly more complicated, to specify the hopping \(\sum_{\langle i, j\rangle, s} (c^\dagger_{s,i} c_{s,j} + h.c.)\) in the FermiHubbard model on a 2D square lattice, we could use:
for (dx, dy) in [(1, 0), (0, 1)]:
    self.add_coupling(strength, 0, 'Cdu', 0, 'Cu', (dx, dy), plus_hc=True)  # spin up
    self.add_coupling(strength, 0, 'Cdd', 0, 'Cd', (dx, dy), plus_hc=True)  # spin down
# or without `plus_hc`
for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)]:  # include -dx !
    self.add_coupling(strength, 0, 'Cdu', 0, 'Cu', (dx, dy))  # spin up
    self.add_coupling(strength, 0, 'Cdd', 0, 'Cd', (dx, dy))  # spin down
# or specifying the 'JW' string explicitly
for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
    self.add_coupling(strength, 0, 'Cdu', 0, 'Cu', (dx, dy), 'JW', True)  # spin up
    self.add_coupling(strength, 0, 'Cdd', 0, 'Cd', (dx, dy), 'JW', True)  # spin down
The most important functions for doing measurements are probably expectation_value()
and correlation_function().
Again, if all the Jordan-Wigner strings cancel, you don’t have
to worry about them at all, e.g. for many onsite operators or correlation functions involving only number operators.
If you build multi-site operators to be measured by expectation_value, take care to include the Jordan-Wigner
string correctly.
Some MPS methods like
correlation_function()
,
expectation_value_term()
and
expectation_value_terms_sum()
automatically add Jordan-Wigner strings
(at least with default arguments).
Other, more low-level functions like expectation_value_multi_sites()
don’t.
Hence, during measurements you should always check whether the function used needs special treatment of Jordan-Wigner strings.
Saving to disk: input/output¶
Using pickle¶
A simple and pythonic way to store data of TeNPy arrays is to use pickle
from the Python standard library.
Pickle allows you to store (almost) arbitrary Python objects,
and the Array
is no exception (and neither are other TeNPy classes).
Say that you have run DMRG to get a ground state psi as an MPS
.
With pickle, you can save it to disk as follows:
import pickle

with open('my_psi_file.pkl', 'wb') as f:
    pickle.dump(psi, f)
Here, the with ... :
structure ensures that the file gets closed after the pickle dump, and the 'wb'
indicates
the file opening mode “write binary”.
Reading the data from disk is as easy as ('rb'
for reading binary):
with open('my_psi_file.pkl', 'rb') as f:
    psi = pickle.load(f)
Note
It is a good (scientific) practice to include metadata to the file, like the parameters you used to generate that state.
Instead of just the psi, you can simply store a dictionary containing psi and other data, e.g.,
data = {'psi': psi, 'dmrg_params': dmrg_params, 'model_params': model_params}
.
This can save you a lot of pain when you come back to the files a few months later and have forgotten what you did to generate them!
In some cases, compression can significantly reduce the space needed to save the data.
This can for example be done with gzip
(as well in the Python standard library).
However, be warned that it might cause longer loading and saving times, i.e. it comes at the penalty of more CPU usage for the input/output.
In Python, this requires only small adjustments:
import pickle
import gzip

# to save:
with gzip.open('my_data_file.pkl', 'wb') as f:
    pickle.dump(data, f)

# and to load:
with gzip.open('my_data_file.pkl', 'rb') as f:
    data = pickle.load(f)
Using HDF5 with h5py¶
While pickle
is great for simple input/output of Python objects, it also has disadvantages. Probably the most
dramatic one is the limited portability: saving data on one PC and loading it on another one might fail!
Even exporting data from Python 2 to load it in Python 3 on the same machine can cause quite some trouble.
Moreover, pickle requires loading the whole file at once, which might be unnecessary if you only need part of the data,
or might even lead to memory problems if you have more data on disk than fits into RAM.
Hence, we support saving to HDF5 files as an alternative. The h5py package provides a dictionary-like interface for the file/group objects with numpy-like data sets, and is quite easy to use. If you don’t know about HDF5, read the quickstart of the h5py documentation (and this guide).
The implementation can be found in the tenpy.tools.hdf5_io
module with the
Hdf5Saver
and Hdf5Loader
classes
and the wrapper functions save_to_hdf5()
, load_from_hdf5()
.
The usage is very similar to pickle:
import h5py
from tenpy.tools import hdf5_io

data = {"psi": psi,  # e.g. an MPS
        "model": my_model,
        "parameters": {"L": 6, "g": 1.3}}

with h5py.File("file.h5", 'w') as f:
    hdf5_io.save_to_hdf5(f, data)
# ...
with h5py.File("file.h5", 'r') as f:
    data = hdf5_io.load_from_hdf5(f)
    # or for partial reading:
    pars = hdf5_io.load_from_hdf5(f, "/parameters")
Note
The hickle package imitates the pickle functionality while saving the data to HDF5 files. However, since it aims to be close to pickle, it results in a more complicated data structure than we want here.
Note
To use the export/import features to HDF5, you need to install the h5py python package (and hence some version of the HDF5 library).
Data format specification for saving to HDF5¶
This section motivates and defines the format how we save data of TeNPydefined classes.
The goal is to have the save_to_hdf5()
function for saving sufficiently simple Python
objects (supported by the format) to disk in an HDF5 file, such that they can be reconstructed with the
load_from_hdf5()
function, as outlined in the example code above.
Guidelines of the format:
0) Store enough data such that load_from_hdf5() can reconstruct a copy of the object (provided that the save did not fail with an error).
1) Objects of a type supported by HDF5 datasets (with the h5py interface) should be directly stored as an h5py Dataset. Such objects are for example numpy arrays (of non-object dtype), scalars and strings.
2) Allow saving (nested) Python lists, tuples and dictionaries with values (and keys) which can be saved.
3) Allow user-defined classes to implement a well-defined interface which allows saving instances of that class, hence extending what data can be saved. An instance of a class supporting the interface gets saved as an HDF5 Group. Class attributes are stored as entries of the group; metadata like the type should be stored in HDF5 attributes, see attributes.
4) Simple and intuitive, human-readable structure for the HDF5 paths. For example, saving a simple dictionary {'a': np.arange(10), 'b': 123.45} should result in an HDF5 file with just the two data sets /a and /b.
5) Allow loading only a subset of the data by specifying the path of the HDF5 group to be loaded. For the above example, specifying the path /b should result in loading the float 123.45, not the array.
6) Avoid unnecessary copies if the same Python object is referenced by different names, e.g., for the data {'c': large_obj, 'd': large_obj} with two references to the same large_obj, save it only once and use HDF5 hard links such that /c and /d are the same HDF5 dataset/group. Also avoid the copies during loading, i.e., the loaded dictionary should again have two references to a single object large_obj. This is also necessary to allow saving and loading of objects with cyclic references.
7) Loading a dataset should be (fairly) secure and not execute arbitrary Python code (even if the dataset was manipulated), as is the case for pickle. (Disclaimer: I’m not a security expert, so I can’t guarantee that… Also, loading an HDF5 file can import other Python modules, so importing a manipulated file is not secure if you downloaded a malicious Python file as well.)
The full format specification is given by what the code in hdf5_io
does…
Since this is not trivial to understand, let me summarize it here:
Following 1), simple scalars, strings and numpy arrays are saved as a Dataset. Other objects are saved as an HDF5 Group, with the actual data being saved as group members (as subgroups and sub-datasets) or as attributes (for metadata or simple data).
The type of the object is stored in the HDF5 attribute 'type', which is one of the global REPR_* variables in tenpy.tools.hdf5_io. The type determines the format for saving/loading of built-in types (list, …).
User-defined classes which should be possible to export/import need to implement the methods save_hdf5 and from_hdf5 as specified in Hdf5Exportable. When saving such a class, the attribute 'type' is automatically set to 'instance', and the class name and module are saved under the attributes 'module' and 'class'. During loading, this information is used to automatically import the module, get the class and call the classmethod from_hdf5 for reconstruction. This can only work if the class definition already exists, i.e., you can only save class instances, not classes themselves.
For most (Python) classes, simply subclassing Hdf5Exportable should work to make the class exportable. The latter saves the contents of __dict__, with the extra attribute 'format' specifying whether the dictionary is “simple” (see below).
The None object is saved as a group with the attribute 'type' being 'None' and no subgroups.
For iterables (list, tuple and set), we simply enumerate the entries and save them as group members under the names '0', '1', '2', …, with a maximum 'len' attribute.
The format for dictionaries depends on whether all keys are “simple”, which we define as being strings which are valid path names in HDF5, see valid_hdf5_path_component(). Following 4), the keys of a simple dictionary are directly used as names for group members, with the values being whatever object the group member represents.
Partial loading along 5) is possible by directly specifying the subgroup or the path to load_from_hdf5().
Guideline 6) is ensured as much as possible. However, there is a bug/exception: tuples with cyclic references are not reconstructed correctly; the inner objects will be lists instead of tuples (but with the same object entries).
Finally, we have to mention that many TeNPy classes are Hdf5Exportable
.
In particular, the Array
supports this.
To see what the exact format for those classes is, look at the save_hdf5 and from_hdf5 methods of those classes.
Note
There can be multiple possible output formats for the same object. The dictionary (with the format for simple keys or general keys) is such an example, but user-defined classes can use the same technique in their from_hdf5 method. The user might also explicitly choose a “lossy” output format (e.g. “flat” for np_conserved Arrays and LegCharges).
Tip
The above format specification is quite general and not bound to TeNPy. Feel free to use it in your own projects ;) To separate the development, versions and issues of the format clearly from TeNPy, we maintain the code for it in a separate git repository, https://github.com/tenpy/hdf5_io
Protocol for using (i)DMRG¶
While this documentation contains extensive guidance on how to interact with TeNPy, it is often unclear how to approach a physics question using these methods. This page is an attempt to provide such guidance, describing a protocol for going from a model implementation to an answered question.
The basic workflow for an (i)DMRG project is as follows, with individual steps expanded on later where necessary.
Confirm the correctness of the model implementation.
Run some low-effort tests to see whether the question seems answerable.
If the tests are successful, run productionquality simulations. This will be entirely particular to the project you’re working on.
Confirm that your results are converged.
Confirming the model is correct¶
Although TeNPy makes model implementation much easier than constructing the MPO by hand, one should still ensure that the MPO represents the intended model faithfully.
There are several possible ways to do this. Firstly, for sufficiently small system sizes, one can contract the entire MPO into a matrix, and inspect the matrix elements. In TeNPy, this can be done using get_full_hamiltonian()
. These should reproduce the analytical Hamiltonian up to machine precision, or any other necessary cutoff (e.g., longrange interactions may be truncated at some finite distance).
Secondly, if the model basis allows it, one can construct (product state) MPSs for known eigenstates of the model and evaluate whether these reproduce the correct eigenvalues upon contraction with the MPO.
Finally, one can sometimes construct a basis of single- or even two-particle MPSs, and evaluate the MPO on this basis to get a representation of the single- and two-particle Hamiltonian. If the model contains only single- and two-body terms, this latter approach should reproduce all terms in the Hamiltonian.
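As an illustration of the first check, here is a standalone numpy sketch (not using TeNPy's get_full_hamiltonian()) that builds the dense many-body Hamiltonian of a free-fermion hopping chain via the Jordan-Wigner matrices and verifies its ground-state energy against the filled Fermi sea of the single-particle spectrum; the helper names (embed, c) are made up for this example:

```python
import numpy as np
from functools import reduce

L, t = 4, 1.0
Id = np.eye(2)
C = np.array([[0., 1.], [0., 0.]])   # local annihilation operator
JW = np.diag([1., -1.])              # Jordan-Wigner sign (-1)^n

def embed(ops):
    """Tensor a list of L local 2x2 operators into one 2^L x 2^L matrix."""
    return reduce(np.kron, ops)

def c(i):
    """Global annihilation operator: JW string on sites l < i, C on site i."""
    return embed([JW] * i + [C] + [Id] * (L - 1 - i))

# many-body H = -t sum_i (c^dagger_i c_{i+1} + h.c.), open boundaries
H = sum(-t * (c(i).T @ c(i + 1) + c(i + 1).T @ c(i)) for i in range(L - 1))

# benchmark: for free fermions, the ground-state energy is the sum of all
# negative eigenvalues of the single-particle hopping matrix (filled Fermi sea)
h1 = -t * (np.eye(L, k=1) + np.eye(L, k=-1))
eps = np.linalg.eigvalsh(h1)
E0 = np.linalg.eigvalsh(H)[0]
assert np.isclose(E0, eps[eps < 0].sum())
```

The same idea applies to the dense matrix obtained from a model's MPO: its spectrum (or selected matrix elements) should agree with such an independent benchmark up to machine precision.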
Low-effort tests¶
As not every state can be accurately represented by an MPS, some results are outside the reach of (i)DMRG. To prevent wasting considerable numerical resources on a fruitless project, it is recommended to run some low-effort trials first, and see whether any indication of the desired result can be found. If so, one can then move on to more computationally expensive simulations. If not, one should evaluate:
Whether there is a mistake in the model or simulation setup,
Whether a slightly more computationally expensive test would potentially yield a result, or
Whether your approach is unfortunately out of reach of (i)DMRG.
To set up low-effort trials, one should limit the system size, bond dimension and range of interactions, as well as (if possible) target a non-critical region of phase space. All these measures reduce the size of, and/or the entanglement entropy to be captured by, the MPS, which yields both memory and run time advantages. Of course, this introduces a trade-off between computational cost and accuracy, which is why one should be careful not to put too much faith in results obtained at this stage.
Detecting convergence issues¶
Ensuring that the results of an (i)DMRG simulation are wellconverged and thus reliable is a hugely important part of any (i)DMRG study. Possible indications that there might be a convergence issue include:
The simulation shows a non-monotonic decrease of energy, and/or a non-monotonic increase of entanglement entropy. An increase of energy or decrease of entanglement entropy on subsequent steps within a sweep, or between subsequent sweeps, is particularly suspicious.
The simulation does not halt because it reached a convergence criterion, but because it reached its maximum number of sweeps.
Results vary wildly under small changes of parameters. In particular, if a small change in bond dimension yields a big change in results, one should be suspicious of the data.
Combating convergence issues¶
To combat convergence issues of the (i)DMRG algorithm, several strategies (short of switching to a different method) can be attempted:
- Ensure that there are no errors in the model (see above) or the simulation setup.
- Increase the maximum bond dimension.
- Ramp up the maximum bond dimension during the simulation, rather than starting at the highest value. I.e., define a schedule wherein the first \(N_{\mathrm{sweeps}}\) sweeps run at some \(\chi_1 < \chi_\mathrm{max}\), the next \(N_{\mathrm{sweeps}}\) at \(\chi_1 < \chi_2 < \chi_{\mathrm{max}}\), etc. This can be done through the ``chi_list`` option of the ``DMRGEngine``. You should also make sure that the ``max_hours`` option is set to a sufficiently long runtime.
- Increase the maximum number of sweeps the algorithm is allowed to make, through the ``max_sweeps`` option of the ``DMRGEngine``.
- Change the ``Mixer`` settings to increase or decrease the effect of the mixer.
- Change the convergence criteria. This will not overcome convergence issues by itself, but can help fine-tune the (i)DMRG simulation if it takes a long time to converge (relax the convergence constraints), or if the simulation finishes too soon (tighten the constraints). Criteria to consider are ``max_E_err`` and ``max_S_err`` in ``DMRGEngine``.
- Increase the minimum number of sweeps taken by the algorithm, through the ``min_sweeps`` option of the ``DMRGEngine``. Again, this will not resolve issues due to bad convergence, but might prevent bad results due to premature convergence.
- Change the size and shape of the MPS unit cell (where possible), in case an artificially enforced translational invariance prevents the algorithm from finding a true ground state which is incommensurate with this periodicity. For example, a chain whose true ground state is periodic over three sites will not be accurately represented by a two-site MPS unit cell, as the latter enforces two-site periodicity.
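The options above are typically collected in a single parameter dictionary passed to the engine. A minimal sketch (the exact nesting and names of DMRG options can differ between TeNPy versions; treat the keys below as illustrative, following the options just discussed):

```python
# Illustrative DMRG parameter dictionary combining the strategies above.
dmrg_params = {
    'mixer': True,                           # enable the mixer to help escape local minima
    'chi_list': {0: 32, 10: 128, 20: 512},   # ramp up chi: sweeps 0-9 at 32, 10-19 at 128, ...
    'max_E_err': 1.e-10,                     # energy convergence criterion
    'max_S_err': 1.e-6,                      # entanglement-entropy convergence criterion
    'min_sweeps': 10,                        # guard against premature convergence
    'max_sweeps': 100,                       # hard limit on the number of sweeps
    'max_hours': 24,                         # wall-clock limit for the ramp-up schedule
}
```

If the run still terminates by hitting ``max_sweeps``, treat the result as unconverged rather than loosening the criteria.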
In some instances, convergence issues are essentially unavoidable. In particular, simulating a critical state can cause problems for (i)DMRG convergence, as such states violate the area law underlying an accurate MPS approximation. In these cases, one should acknowledge the limitations of the method and be very careful when interpreting the data.
Examples¶
Toycodes¶
These toy codes are meant to give you a flavor of the different algorithms,
while keeping the code as readable and simple as possible.
The scripts are included in the [TeNPySource] repository in the folder ``toycodes/``,
but are not part of the basic TeNPy library;
the only requirements to run them are Python 3, NumPy, and SciPy.
a_mps.py¶
"""Toy code implementing a matrix product state."""
# Copyright 20182020 TeNPy Developers, GNU GPLv3
import numpy as np
from scipy.linalg import svd
# if you get an error message "LinAlgError: SVD did not converge",
# uncomment the following line. (This requires TeNPy to be installed.)
# from tenpy.linalg.svd_robust import svd # (works like scipy.linalg.svd)
class SimpleMPS:
"""Simple class for a matrix product state.
We index sites with `i` from 0 to L1; bond `i` is left of site `i`.
We *assume* that the state is in rightcanonical form.
Parameters

Bs, Ss, bc:
Same as attributes.
Attributes

Bs : list of np.Array[ndim=3]
The 'matrices' in rightcanonical form, one for each physical site
(within the unitcell for an infinite MPS).
Each `B[i]` has legs (virtual left, physical, virtual right), in short ``vL i vR``
Ss : list of np.Array[ndim=1]
The Schmidt values at each of the bonds, ``Ss[i]`` is left of ``Bs[i]``.
bc : 'infinite', 'finite'
Boundary conditions.
L : int
Number of sites (in the unitcell for an infinite MPS).
nbonds : int
Number of (nontrivial) bonds: L1 for 'finite' boundary conditions
"""
def __init__(self, Bs, Ss, bc='finite'):
assert bc in ['finite', 'infinite']
self.Bs = Bs
self.Ss = Ss
self.bc = bc
self.L = len(Bs)
self.nbonds = self.L  1 if self.bc == 'finite' else self.L
def copy(self):
return SimpleMPS([B.copy() for B in self.Bs], [S.copy() for S in self.Ss], self.bc)
def get_theta1(self, i):
"""Calculate effective singlesite wave function on sites i in mixed canonical form.
The returned array has legs ``vL, i, vR`` (as one of the Bs).
"""
return np.tensordot(np.diag(self.Ss[i]), self.Bs[i], [1, 0]) # vL [vL'], [vL] i vR
def get_theta2(self, i):
"""Calculate effective twosite wave function on sites i,j=(i+1) in mixed canonical form.
The returned array has legs ``vL, i, j, vR``.
"""
j = (i + 1) % self.L
return np.tensordot(self.get_theta1(i), self.Bs[j], [2, 0]) # vL i [vR], [vL] j vR
def get_chi(self):
"""Return bond dimensions."""
return [self.Bs[i].shape[2] for i in range(self.nbonds)]
def site_expectation_value(self, op):
"""Calculate expectation values of a local operator at each site."""
result = []
for i in range(self.L):
theta = self.get_theta1(i) # vL i vR
op_theta = np.tensordot(op, theta, axes=[1, 1]) # i [i*], vL [i] vR
result.append(np.tensordot(theta.conj(), op_theta, [[0, 1, 2], [1, 0, 2]]))
# [vL*] [i*] [vR*], [i] [vL] [vR]
return np.real_if_close(result)
def bond_expectation_value(self, op):
"""Calculate expectation values of a local operator at each bond."""
result = []
for i in range(self.nbonds):
theta = self.get_theta2(i) # vL i j vR
op_theta = np.tensordot(op[i], theta, axes=[[2, 3], [1, 2]])
# i j [i*] [j*], vL [i] [j] vR
result.append(np.tensordot(theta.conj(), op_theta, [[0, 1, 2, 3], [2, 0, 1, 3]]))
# [vL*] [i*] [j*] [vR*], [i] [j] [vL] [vR]
return np.real_if_close(result)
def entanglement_entropy(self):
"""Return the (vonNeumann) entanglement entropy for a bipartition at any of the bonds."""
bonds = range(1, self.L) if self.bc == 'finite' else range(0, self.L)
result = []
for i in bonds:
S = self.Ss[i].copy()
S[S < 1.e20] = 0. # 0*log(0) should give 0; avoid warning or NaN.
S2 = S * S
assert abs(np.linalg.norm(S)  1.) < 1.e14
result.append(np.sum(S2 * np.log(S2)))
return np.array(result)
def correlation_length(self):
"""Diagonalize transfer matrix to obtain the correlation length."""
import scipy.sparse.linalg.eigen.arpack as arp
assert self.bc == 'infinite' # works only in the infinite case
B = self.Bs[0] # vL i vR
chi = B.shape[0]
T = np.tensordot(B, np.conj(B), axes=[1, 1]) # vL [i] vR, vL* [i*] vR*
T = np.transpose(T, [0, 2, 1, 3]) # vL vL* vR vR*
for i in range(1, self.L):
B = self.Bs[i]
T = np.tensordot(T, B, axes=[2, 0]) # vL vL* [vR] vR*, [vL] i vR
T = np.tensordot(T, np.conj(B), axes=[[2, 3], [0, 1]])
# vL vL* [vR*] [i] vR, [vL*] [i*] vR*
T = np.reshape(T, (chi**2, chi**2))
# Obtain the 2nd largest eigenvalue
eta = arp.eigs(T, k=2, which='LM', return_eigenvectors=False, ncv=20)
return self.L / np.log(np.min(np.abs(eta)))
def init_FM_MPS(L, d, bc='finite'):
"""Return a ferromagnetic MPS (= product state with all spins up)"""
B = np.zeros([1, d, 1], np.float)
B[0, 0, 0] = 1.
S = np.ones([1], np.float)
Bs = [B.copy() for i in range(L)]
Ss = [S.copy() for i in range(L)]
return SimpleMPS(Bs, Ss, bc)
def split_truncate_theta(theta, chi_max, eps):
"""Split and truncate a twosite wave function in mixed canonical form.
Split a twosite wave function as follows::
vL (theta) vR => vL (A)diag(S)(B) vR
   
i j i j
Afterwards, truncate in the new leg (labeled ``vC``).
Parameters

theta : np.Array[ndim=4]
Twosite wave function in mixed canonical form, with legs ``vL, i, j, vR``.
chi_max : int
Maximum number of singular values to keep
eps : float
Discard any singular values smaller than that.
Returns

A : np.Array[ndim=3]
Leftcanonical matrix on site i, with legs ``vL, i, vC``
S : np.Array[ndim=1]
Singular/Schmidt values.
B : np.Array[ndim=3]
Rightcanonical matrix on site j, with legs ``vC, j, vR``
"""
chivL, dL, dR, chivR = theta.shape
theta = np.reshape(theta, [chivL * dL, dR * chivR])
X, Y, Z = svd(theta, full_matrices=False)
# truncate
chivC = min(chi_max, np.sum(Y > eps))
piv = np.argsort(Y)[::1][:chivC] # keep the largest `chivC` singular values
X, Y, Z = X[:, piv], Y[piv], Z[piv, :]
# renormalize
S = Y / np.linalg.norm(Y) # == Y/sqrt(sum(Y**2))
# split legs of X and Z
A = np.reshape(X, [chivL, dL, chivC])
B = np.reshape(Z, [chivC, dR, chivR])
return A, S, B
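The core of ``split_truncate_theta`` is a standard SVD splitting: the left factor ends up left-canonical and the right factor right-canonical. A self-contained sanity check of that property, using only NumPy/SciPy and independent of the file above:

```python
import numpy as np
from scipy.linalg import svd

# Random normalized two-site wave function with legs (vL, i, j, vR).
rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 2, 2, 3))
theta /= np.linalg.norm(theta)

# Group legs into a matrix (vL.i) x (j.vR) and split it via SVD, as in the toy code.
chivL, dL, dR, chivR = theta.shape
mat = np.reshape(theta, [chivL * dL, dR * chivR])
X, Y, Z = svd(mat, full_matrices=False)
A = np.reshape(X, [chivL, dL, -1])   # legs vL i vC
B = np.reshape(Z, [-1, dR, chivR])   # legs vC j vR

# Left-canonical: contracting A* with A over (vL, i) gives the identity on vC.
AA = np.tensordot(A.conj(), A, axes=([0, 1], [0, 1]))
# Right-canonical: contracting B with B* over (j, vR) gives the identity on vC.
BB = np.tensordot(B, B.conj(), axes=([1, 2], [1, 2]))
print(np.allclose(AA, np.eye(AA.shape[0])), np.allclose(BB, np.eye(BB.shape[0])))  # -> True True
```

Truncation then simply keeps the largest singular values in ``Y`` and the corresponding slices of ``A`` and ``B``; both canonical properties survive the truncation.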
b_model.py¶
"""Toy code implementing the transversefield ising model."""
# Copyright 20182020 TeNPy Developers, GNU GPLv3
import numpy as np
class TFIModel:
"""Simple class generating the Hamiltonian of the transversefield Ising model.
The Hamiltonian reads
.. math ::
H =  J \\sum_{i} \\sigma^x_i \\sigma^x_{i+1}  g \\sum_{i} \\sigma^z_i
Parameters

L : int
Number of sites.
J, g : float
Coupling parameters of the above defined Hamiltonian.
bc : 'infinite', 'finite'
Boundary conditions.
Attributes

L : int
Number of sites.
bc : 'infinite', 'finite'
Boundary conditions.
sigmax, sigmay, sigmaz, id :
Local operators, namely the Pauli matrices and identity.
H_bonds : list of np.Array[ndim=4]
The Hamiltonian written in terms of local 2site operators, ``H = sum_i H_bonds[i]``.
Each ``H_bonds[i]`` has (physical) legs (i out, (i+1) out, i in, (i+1) in),
in short ``i j i* j*``.
H_mpo : lit of np.Array[ndim=4]
The Hamiltonian written as an MPO.
Each ``H_mpo[i]`` has legs (virutal left, virtual right, physical out, physical in),
in short ``wL wR i i*``.
"""
def __init__(self, L, J, g, bc='finite'):
assert bc in ['finite', 'infinite']
self.L, self.d, self.bc = L, 2, bc
self.J, self.g = J, g
self.sigmax = np.array([[0., 1.], [1., 0.]])
self.sigmay = np.array([[0., 1j], [1j, 0.]])
self.sigmaz = np.array([[1., 0.], [0., 1.]])
self.id = np.eye(2)
self.init_H_bonds()
self.init_H_mpo()
def init_H_bonds(self):
"""Initialize `H_bonds` hamiltonian.
Called by __init__().
"""
sx, sz, id = self.sigmax, self.sigmaz, self.id
d = self.d
nbonds = self.L  1 if self.bc == 'finite' else self.L
H_list = []
for i in range(nbonds):
gL = gR = 0.5 * self.g
if self.bc == 'finite':
if i == 0:
gL = self.g
if i + 1 == self.L  1:
gR = self.g
H_bond = self.J * np.kron(sx, sx)  gL * np.kron(sz, id)  gR * np.kron(id, sz)
# H_bond has legs ``i, j, i*, j*``
H_list.append(np.reshape(H_bond, [d, d, d, d]))
self.H_bonds = H_list
# (note: not required for TEBD)
def init_H_mpo(self):
"""Initialize `H_mpo` Hamiltonian.
Called by __init__().
"""
w_list = []
for i in range(self.L):
w = np.zeros((3, 3, self.d, self.d), dtype=np.float)
w[0, 0] = w[2, 2] = self.id
w[0, 1] = self.sigmax
w[0, 2] = self.g * self.sigmaz
w[1, 2] = self.J * self.sigmax
w_list.append(w)
self.H_mpo = w_list
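To see that the MPO tensor ``w`` above indeed encodes the Hamiltonian, one can contract the boundary row/column for a two-site chain and compare with the Hamiltonian built directly from Kronecker products. A self-contained sketch using plain NumPy, mirroring the ``w`` defined in ``init_H_mpo``:

```python
import numpy as np

J, g = 1., 1.5
sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])
id2 = np.eye(2)

# The 3x3 MPO grid of the transverse-field Ising model, as in init_H_mpo above.
w = np.zeros((3, 3, 2, 2))
w[0, 0] = w[2, 2] = id2
w[0, 1] = sx
w[0, 2] = -g * sz
w[1, 2] = -J * sx

# For L=2, select row 0 on the left boundary and column 2 on the right boundary,
# and contract the virtual bond: H = sum_a w[0, a] (x) w[a, 2].
H_mpo = sum(np.kron(w[0, a], w[a, 2]) for a in range(3))
H_direct = -J * np.kron(sx, sx) - g * (np.kron(sz, id2) + np.kron(id2, sz))
print(np.allclose(H_mpo, H_direct))  # -> True
```

The three terms of the sum are exactly ``id (x) (-g sz)``, ``sx (x) (-J sx)``, and ``(-g sz) (x) id``, reproducing the two-site Hamiltonian.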
c_tebd.py¶
"""Toy code implementing the time evolving block decimation (TEBD)."""
# Copyright 20182020 TeNPy Developers, GNU GPLv3
import numpy as np
from scipy.linalg import expm
from a_mps import split_truncate_theta
def calc_U_bonds(H_bonds, dt):
"""Given the H_bonds, calculate ``U_bonds[i] = expm(dt*H_bonds[i])``.
Each local operator has legs (i out, (i+1) out, i in, (i+1) in), in short ``i j i* j*``.
Note that no imaginary 'i' is included, thus real `dt` means 'imaginary time' evolution!
"""
d = H_bonds[0].shape[0]
U_bonds = []
for H in H_bonds:
H = np.reshape(H, [d * d, d * d])
U = expm(dt * H)
U_bonds.append(np.reshape(U, [d, d, d, d]))
return U_bonds
def run_TEBD(psi, U_bonds, N_steps, chi_max, eps):
"""Evolve for `N_steps` time steps with TEBD."""
Nbonds = psi.L  1 if psi.bc == 'finite' else psi.L
assert len(U_bonds) == Nbonds
for n in range(N_steps):
for k in [0, 1]: # even, odd
for i_bond in range(k, Nbonds, 2):
update_bond(psi, i_bond, U_bonds[i_bond], chi_max, eps)
# done
def update_bond(psi, i, U_bond, chi_max, eps):
"""Apply `U_bond` acting on i,j=(i+1) to `psi`."""
j = (i + 1) % psi.L
# construct theta matrix
theta = psi.get_theta2(i) # vL i j vR
# apply U
Utheta = np.tensordot(U_bond, theta, axes=([2, 3], [1, 2])) # i j [i*] [j*], vL [i] [j] vR
Utheta = np.transpose(Utheta, [2, 0, 1, 3]) # vL i j vR
# split and truncate
Ai, Sj, Bj = split_truncate_theta(Utheta, chi_max, eps)
# put back into MPS
Gi = np.tensordot(np.diag(psi.Ss[i]**(1)), Ai, axes=[1, 0]) # vL [vL*], [vL] i vC
psi.Bs[i] = np.tensordot(Gi, np.diag(Sj), axes=[2, 0]) # vL i [vC], [vC] vC
psi.Ss[j] = Sj # vC
psi.Bs[j] = Bj # vC j vR
def example_TEBD_gs_tf_ising_finite(L, g):
print("finite TEBD, imaginary time evolution, transverse field Ising")
print("L={L:d}, g={g:.2f}".format(L=L, g=g))
import a_mps
import b_model
M = b_model.TFIModel(L=L, J=1., g=g, bc='finite')
psi = a_mps.init_FM_MPS(M.L, M.d, M.bc)
for dt in [0.1, 0.01, 0.001, 1.e4, 1.e5]:
U_bonds = calc_U_bonds(M.H_bonds, dt)
run_TEBD(psi, U_bonds, N_steps=500, chi_max=30, eps=1.e10)
E = np.sum(psi.bond_expectation_value(M.H_bonds))
print("dt = {dt:.5f}: E = {E:.13f}".format(dt=dt, E=E))
print("final bond dimensions: ", psi.get_chi())
mag_x = np.sum(psi.site_expectation_value(M.sigmax))
mag_z = np.sum(psi.site_expectation_value(M.sigmaz))
print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
if L < 20: # compare to exact result
from tfi_exact import finite_gs_energy
E_exact = finite_gs_energy(L, 1., g)
print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E  E_exact) / E_exact))
return E, psi, M
def example_TEBD_gs_tf_ising_infinite(g):
print("infinite TEBD, imaginary time evolution, transverse field Ising")
print("g={g:.2f}".format(g=g))
import a_mps
import b_model
M = b_model.TFIModel(L=2, J=1., g=g, bc='infinite')
psi = a_mps.init_FM_MPS(M.L, M.d, M.bc)
for dt in [0.1, 0.01, 0.001, 1.e4, 1.e5]:
U_bonds = calc_U_bonds(M.H_bonds, dt)
run_TEBD(psi, U_bonds, N_steps=500, chi_max=30, eps=1.e10)
E = np.mean(psi.bond_expectation_value(M.H_bonds))
print("dt = {dt:.5f}: E (per site) = {E:.13f}".format(dt=dt, E=E))
print("final bond dimensions: ", psi.get_chi())
mag_x = np.mean(psi.site_expectation_value(M.sigmax))
mag_z = np.mean(psi.site_expectation_value(M.sigmaz))
print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
print("correlation length:", psi.correlation_length())
# compare to exact result
from tfi_exact import infinite_gs_energy
E_exact = infinite_gs_energy(1., g)
print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E  E_exact) / E_exact))
return E, psi, M
def example_TEBD_tf_ising_lightcone(L, g, tmax, dt):
print("finite TEBD, real time evolution, transverse field Ising")
print("L={L:d}, g={g:.2f}, tmax={tmax:.2f}, dt={dt:.3f}".format(L=L, g=g, tmax=tmax, dt=dt))
# find ground state with TEBD or DMRG
# E, psi, M = example_TEBD_gs_tf_ising_finite(L, g)
from d_dmrg import example_DMRG_tf_ising_finite
E, psi, M = example_DMRG_tf_ising_finite(L, g)
i0 = L // 2
# apply sigmaz on site i0
SzB = np.tensordot(M.sigmaz, psi.Bs[i0], axes=[1, 1]) # i [i*], vL [i] vR
psi.Bs[i0] = np.transpose(SzB, [1, 0, 2]) # vL i vR
U_bonds = calc_U_bonds(M.H_bonds, 1.j * dt) # (imaginary dt > realtime evolution)
S = [psi.entanglement_entropy()]
Nsteps = int(tmax / dt + 0.5)
for n in range(Nsteps):
if abs((n * dt + 0.1) % 0.2  0.1) < 1.e10:
print("t = {t:.2f}, chi =".format(t=n * dt), psi.get_chi())
run_TEBD(psi, U_bonds, 1, chi_max=50, eps=1.e10)
S.append(psi.entanglement_entropy())
import matplotlib.pyplot as plt
plt.figure()
plt.imshow(S[::1],
vmin=0.,
aspect='auto',
interpolation='nearest',
extent=(0, L  1., 0.5 * dt, (Nsteps + 0.5) * dt))
plt.xlabel('site $i$')
plt.ylabel('time $t/J$')
plt.ylim(0., tmax)
plt.colorbar().set_label('entropy $S$')
filename = 'c_tebd_lightcone_{g:.2f}.pdf'.format(g=g)
plt.savefig(filename)
print("saved " + filename)
if __name__ == "__main__":
example_TEBD_gs_tf_ising_finite(L=10, g=1.)
print("" * 100)
example_TEBD_gs_tf_ising_infinite(g=1.5)
print("" * 100)
example_TEBD_tf_ising_lightcone(L=20, g=1.5, tmax=3., dt=0.01)
d_dmrg.py¶
"""Toy code implementing the densitymatrix renormalization group (DMRG)."""
# Copyright 20182020 TeNPy Developers, GNU GPLv3
import numpy as np
from a_mps import split_truncate_theta
import scipy.sparse
import scipy.sparse.linalg.eigen.arpack as arp
class SimpleHeff(scipy.sparse.linalg.LinearOperator):
"""Class for the effective Hamiltonian.
To be diagonalized in `SimpleDMRGEnginge.update_bond`. Looks like this::
.vL* vR*.
 i* j* 
   
(LP)(W1)(W2)(RP)
   
 i j 
.vL vR.
"""
def __init__(self, LP, RP, W1, W2):
self.LP = LP # vL wL* vL*
self.RP = RP # vR* wR* vR
self.W1 = W1 # wL wC i i*
self.W2 = W2 # wC wR j j*
chi1, chi2 = LP.shape[0], RP.shape[2]
d1, d2 = W1.shape[2], W2.shape[2]
self.theta_shape = (chi1, d1, d2, chi2) # vL i j vR
self.shape = (chi1 * d1 * d2 * chi2, chi1 * d1 * d2 * chi2)
self.dtype = W1.dtype
def _matvec(self, theta):
"""Calculate theta'> = H_eff theta>.
This function is used by :func:scipy.sparse.linalg.eigen.arpack.eigsh` to diagonalize
the effective Hamiltonian with a Lanczos method, withouth generating the full matrix."""
x = np.reshape(theta, self.theta_shape) # vL i j vR
x = np.tensordot(self.LP, x, axes=(2, 0)) # vL wL* [vL*], [vL] i j vR
x = np.tensordot(x, self.W1, axes=([1, 2], [0, 3])) # vL [wL*] [i] j vR, [wL] wC i [i*]
x = np.tensordot(x, self.W2, axes=([3, 1], [0, 3])) # vL [j] vR [wC] i, [wC] wR j [j*]
x = np.tensordot(x, self.RP, axes=([1, 3], [0, 1])) # vL [vR] i [wR] j, [vR*] [wR*] vR
x = np.reshape(x, self.shape[0])
return x
class SimpleDMRGEngine:
"""DMRG algorithm, implemented as class holding the necessary data.
Parameters

psi, model, chi_max, eps:
See attributes
Attributes

psi : SimpleMPS
The current groundstate (approximation).
model :
The model of which the groundstate is to be calculated.
chi_max, eps:
Truncation parameters, see :func:`a_mps.split_truncate_theta`.
LPs, RPs : list of np.Array[ndim=3]
Left and right parts ("environments") of the effective Hamiltonian.
``LPs[i]`` is the contraction of all parts left of site `i` in the network ``<psiHpsi>``,
and similar ``RPs[i]`` for all parts right of site `i`.
Each ``LPs[i]`` has legs ``vL wL* vL*``, ``RPS[i]`` has legs ``vR* wR* vR``
"""
def __init__(self, psi, model, chi_max, eps):
assert psi.L == model.L and psi.bc == model.bc # ensure compatibility
self.H_mpo = model.H_mpo
self.psi = psi
self.LPs = [None] * psi.L
self.RPs = [None] * psi.L
self.chi_max = chi_max
self.eps = eps
# initialize left and right environment
D = self.H_mpo[0].shape[0]
chi = psi.Bs[0].shape[0]
LP = np.zeros([chi, D, chi], dtype=np.float) # vL wL* vL*
RP = np.zeros([chi, D, chi], dtype=np.float) # vR* wR* vR
LP[:, 0, :] = np.eye(chi)
RP[:, D  1, :] = np.eye(chi)
self.LPs[0] = LP
self.RPs[1] = RP
# initialize necessary RPs
for i in range(psi.L  1, 1, 1):
self.update_RP(i)
def sweep(self):
# sweep from left to right
for i in range(self.psi.nbonds  1):
self.update_bond(i)
# sweep from right to left
for i in range(self.psi.nbonds  1, 0, 1):
self.update_bond(i)
def update_bond(self, i):
j = (i + 1) % self.psi.L
# get effective Hamiltonian
Heff = SimpleHeff(self.LPs[i], self.RPs[j], self.H_mpo[i], self.H_mpo[j])
# Diagonalize Heff, find ground state `theta`
theta0 = np.reshape(self.psi.get_theta2(i), [Heff.shape[0]]) # initial guess
e, v = arp.eigsh(Heff, k=1, which='SA', return_eigenvectors=True, v0=theta0)
theta = np.reshape(v[:, 0], Heff.theta_shape)
# split and truncate
Ai, Sj, Bj = split_truncate_theta(theta, self.chi_max, self.eps)
# put back into MPS
Gi = np.tensordot(np.diag(self.psi.Ss[i]**(1)), Ai, axes=[1, 0]) # vL [vL*], [vL] i vC
self.psi.Bs[i] = np.tensordot(Gi, np.diag(Sj), axes=[2, 0]) # vL i [vC], [vC*] vC
self.psi.Ss[j] = Sj # vC
self.psi.Bs[j] = Bj # vC j vR
self.update_LP(i)
self.update_RP(j)
def update_RP(self, i):
"""Calculate RP right of site `i1` from RP right of site `i`."""
j = (i  1) % self.psi.L
RP = self.RPs[i] # vR* wR* vR
B = self.psi.Bs[i] # vL i vR
Bc = B.conj() # vL* i* vR*
W = self.H_mpo[i] # wL wR i i*
RP = np.tensordot(B, RP, axes=[2, 0]) # vL i [vR], [vR*] wR* vR
RP = np.tensordot(RP, W, axes=[[1, 2], [3, 1]]) # vL [i] [wR*] vR, wL [wR] i [i*]
RP = np.tensordot(RP, Bc, axes=[[1, 3], [2, 1]]) # vL [vR] wL [i], vL* [i*] [vR*]
self.RPs[j] = RP # vL wL vL* (== vR* wR* vR on site i1)
def update_LP(self, i):
"""Calculate LP left of site `i+1` from LP left of site `i`."""
j = (i + 1) % self.psi.L
LP = self.LPs[i] # vL wL vL*
B = self.psi.Bs[i] # vL i vR
G = np.tensordot(np.diag(self.psi.Ss[i]), B, axes=[1, 0]) # vL [vL*], [vL] i vR
A = np.tensordot(G, np.diag(self.psi.Ss[j]**1), axes=[2, 0]) # vL i [vR], [vR*] vR
Ac = A.conj() # vL* i* vR*
W = self.H_mpo[i] # wL wR i i*
LP = np.tensordot(LP, A, axes=[2, 0]) # vL wL* [vL*], [vL] i vR
LP = np.tensordot(W, LP, axes=[[0, 3], [1, 2]]) # [wL] wR i [i*], vL [wL*] [i] vR
LP = np.tensordot(Ac, LP, axes=[[0, 1], [2, 1]]) # [vL*] [i*] vR*, wR [i] [vL] vR
self.LPs[j] = LP # vR* wR vR (== vL wL* vL* on site i+1)
def example_DMRG_tf_ising_finite(L, g):
print("finite DMRG, transverse field Ising")
print("L={L:d}, g={g:.2f}".format(L=L, g=g))
import a_mps
import b_model
M = b_model.TFIModel(L=L, J=1., g=g, bc='finite')
psi = a_mps.init_FM_MPS(M.L, M.d, M.bc)
eng = SimpleDMRGEngine(psi, M, chi_max=30, eps=1.e10)
for i in range(10):
eng.sweep()
E = np.sum(psi.bond_expectation_value(M.H_bonds))
print("sweep {i:2d}: E = {E:.13f}".format(i=i + 1, E=E))
print("final bond dimensions: ", psi.get_chi())
mag_x = np.sum(psi.site_expectation_value(M.sigmax))
mag_z = np.sum(psi.site_expectation_value(M.sigmaz))
print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
if L < 20: # compare to exact result
from tfi_exact import finite_gs_energy
E_exact = finite_gs_energy(L, 1., g)
print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E  E_exact) / E_exact))
return E, psi, M
def example_DMRG_tf_ising_infinite(g):
print("infinite DMRG, transverse field Ising")
print("g={g:.2f}".format(g=g))
import a_mps
import b_model
M = b_model.TFIModel(L=2, J=1., g=g, bc='infinite')
psi = a_mps.init_FM_MPS(M.L, M.d, M.bc)
eng = SimpleDMRGEngine(psi, M, chi_max=20, eps=1.e14)
for i in range(20):
eng.sweep()
E = np.mean(psi.bond_expectation_value(M.H_bonds))
print("sweep {i:2d}: E (per site) = {E:.13f}".format(i=i + 1, E=E))
print("final bond dimensions: ", psi.get_chi())
mag_x = np.mean(psi.site_expectation_value(M.sigmax))
mag_z = np.mean(psi.site_expectation_value(M.sigmaz))
print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
print("correlation length:", psi.correlation_length())
# compare to exact result
from tfi_exact import infinite_gs_energy
E_exact = infinite_gs_energy(1., g)
print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
print("relative error: ", abs((E  E_exact) / E_exact))
return E, psi, M
if __name__ == "__main__":
example_DMRG_tf_ising_finite(L=10, g=1.)
print("" * 100)
example_DMRG_tf_ising_infinite(g=1.5)
tfi_exact.py¶
"""Provides exact ground state energies for the transverse field ising model for comparison.
The Hamiltonian reads
.. math ::
H =  J \\sum_{i} \\sigma^x_i \\sigma^x_{i+1}  g \\sum_{i} \\sigma^z_i
"""
# Copyright 20192020 TeNPy Developers, GNU GPLv3
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg.eigen.arpack as arp
import warnings
import scipy.integrate
def finite_gs_energy(L, J, g):
"""For comparison: obtain ground state energy from exact diagonalization.
Exponentially expensive in L, only works for small enough `L` <~ 20.
"""
if L >= 20:
warnings.warn("Large L: Exact diagonalization might take a long time!")
# get single site operaors
sx = sparse.csr_matrix(np.array([[0., 1.], [1., 0.]]))
sz = sparse.csr_matrix(np.array([[1., 0.], [0., 1.]]))
id = sparse.csr_matrix(np.eye(2))
sx_list = [] # sx_list[i] = kron([id, id, ..., id, sx, id, .... id])
sz_list = []
for i_site in range(L):
x_ops = [id] * L
z_ops = [id] * L
x_ops[i_site] = sx
z_ops[i_site] = sz
X = x_ops[0]
Z = z_ops[0]
for j in range(1, L):
X = sparse.kron(X, x_ops[j], 'csr')
Z = sparse.kron(Z, z_ops[j], 'csr')
sx_list.append(X)
sz_list.append(Z)
H_xx = sparse.csr_matrix((2**L, 2**L))
H_z = sparse.csr_matrix((2**L, 2**L))
for i in range(L  1):
H_xx = H_xx + sx_list[i] * sx_list[(i + 1) % L]
for i in range(L):
H_z = H_z + sz_list[i]
H = J * H_xx  g * H_z
E, V = arp.eigsh(H, k=1, which='SA', return_eigenvectors=True, ncv=20)
return E[0]
def infinite_gs_energy(J, g):
"""For comparison: Calculate groundstate energy density from analytic formula.
The analytic formula stems from mapping the model to free fermions, see P. Pfeuty, The one
dimensional Ising model with a transverse field, Annals of Physics 57, p. 79 (1970). Note that
we use Pauli matrices compared this reference using spin1/2 matrices and replace the sum_k >
integral dk/2pi to obtain the result in the N > infinity limit.
"""
def f(k, lambda_):
return np.sqrt(1 + lambda_**2 + 2 * lambda_ * np.cos(k))
E0_exact = g / (J * 2. * np.pi) * scipy.integrate.quad(f, np.pi, np.pi, args=(J / g, ))[0]
return E0_exact
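At the critical point \(J = g\) the integrand simplifies to \(f(k) = 2|\cos(k/2)|\), so the integral can be done in closed form and the energy density becomes \(E_0 = -4/\pi \approx -1.2732\) (in the Pauli-matrix convention used here). A quick self-contained numerical check of the formula above:

```python
import numpy as np
import scipy.integrate

def f(k, lambda_):
    # dispersion factor of the free-fermion solution, as in infinite_gs_energy
    return np.sqrt(1 + lambda_**2 + 2 * lambda_ * np.cos(k))

J = g = 1.
E0 = -g / (J * 2. * np.pi) * scipy.integrate.quad(f, -np.pi, np.pi, args=(J / g, ))[0]
print(E0)  # close to -4/pi ~ -1.2732395
```

This gives a convenient reference value when benchmarking the (hardest) critical case \(g = 1\) of the DMRG and TEBD examples.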
Basic scripts¶
These example scripts illustrate the very basic interface for calling TeNPy.
They are included in the [TeNPySource] repository in the folder ``examples/``;
we include them here in the documentation for reference.
You need to install TeNPy to run them (see Installation instructions), but you can copy them anywhere before execution.
(Some scripts include other files from the same folder, though; copy those as well.)
a_np_conserved.py¶
"""An example code to demonstrate the usage of :class:`~tenpy.linalg.np_conserved.Array`.
This example includes the following steps:
1) create Arrays for an Neel MPS
2) create an MPO representing the nearestneighbour AFM Heisenberg Hamiltonian
3) define 'environments' left and right
4) contract MPS and MPO to calculate the energy
5) extract twosite hamiltonian ``H2`` from the MPO
6) calculate ``exp(1.j*dt*H2)`` by diagonalization of H2
7) apply ``exp(H2)`` to two sites of the MPS and truncate with svd
Note that this example uses only np_conserved, but no other modules.
Compare it to the example `b_mps.py`,
which does the same steps using a few predefined classes like MPS and MPO.
"""
# Copyright 20182020 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
import numpy as np
# model parameters
Jxx, Jz = 1., 1.
L = 20
dt = 0.1
cutoff = 1.e10
print("Jxx={Jxx}, Jz={Jz}, L={L:d}".format(Jxx=Jxx, Jz=Jz, L=L))
print("1) create Arrays for an Neel MPS")
# vL >B> vR
# 
# ^
# 
# p
# create a ChargeInfo to specify the nature of the charge
chinfo = npc.ChargeInfo([1], ['2*Sz']) # the second argument is just a descriptive name
# create LegCharges on physical leg and even/odd bonds
p_leg = npc.LegCharge.from_qflat(chinfo, [[1], [1]]) # charges for up, down
v_leg_even = npc.LegCharge.from_qflat(chinfo, [[0]])
v_leg_odd = npc.LegCharge.from_qflat(chinfo, [[1]])
B_even = npc.zeros([v_leg_even, v_leg_odd.conj(), p_leg],
labels=['vL', 'vR', 'p']) # virtual left/right, physical
B_odd = npc.zeros([v_leg_odd, v_leg_even.conj(), p_leg], labels=['vL', 'vR', 'p'])
B_even[0, 0, 0] = 1. # up
B_odd[0, 0, 1] = 1. # down
Bs = [B_even, B_odd] * (L // 2) + [B_even] * (L % 2) # (rightcanonical)
Ss = [np.ones(1)] * L # Ss[i] are singular values between Bs[i1] and Bs[i]
# Side remark:
# An MPS is expected to have nonzero entries everywhere compatible with the charges.
# In general, we recommend to use `sort_legcharge` (or `as_completely_blocked`)
# to ensure complete blocking. (But the code will also work, if you don't do it.)
# The drawback is that this might introduce permutations in the indices of single legs,
# which you have to keep in mind when converting dense numpy arrays to and from npc.Arrays.
print("2) create an MPO representing the AFM Heisenberg Hamiltonian")
# p*
# 
# ^
# 
# wL >W> wR
# 
# ^
# 
# p
# create physical spin1/2 operators Sz, S+, S
Sz = npc.Array.from_ndarray([[0.5, 0.], [0., 0.5]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Sp = npc.Array.from_ndarray([[0., 1.], [0., 0.]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Sm = npc.Array.from_ndarray([[0., 0.], [1., 0.]], [p_leg, p_leg.conj()], labels=['p', 'p*'])
Id = npc.eye_like(Sz, labels=Sz.get_leg_labels()) # identity
mpo_leg = npc.LegCharge.from_qflat(chinfo, [[0], [2], [2], [0], [0]])
W_grid = [[Id, Sp, Sm, Sz, None ],
[None, None, None, None, 0.5 * Jxx * Sm],
[None, None, None, None, 0.5 * Jxx * Sp],
[None, None, None, None, Jz * Sz ],
[None, None, None, None, Id ]] # yapf:disable
W = npc.grid_outer(W_grid, [mpo_leg, mpo_leg.conj()], grid_labels=['wL', 'wR'])
# wL/wR = virtual left/right of the MPO
Ws = [W] * L
print("3) define 'environments' left and right")
# .> vR vL >.
#  
# envL> wR wL >envR
#  
# .> vR* vL*>.
envL = npc.zeros([W.get_leg('wL').conj(), Bs[0].get_leg('vL').conj(), Bs[0].get_leg('vL')],
labels=['wR', 'vR', 'vR*'])
envL[0, :, :] = npc.diag(1., envL.legs[1])
envR = npc.zeros([W.get_leg('wR').conj(), Bs[1].get_leg('vR').conj(), Bs[1].get_leg('vR')],
labels=['wL', 'vL', 'vL*'])
envR[1, :, :] = npc.diag(1., envR.legs[1])
print("4) contract MPS and MPO to calculate the energy <psiHpsi>")
contr = envL
for i in range(L):
# contr labels: wR, vR, vR*
contr = npc.tensordot(contr, Bs[i], axes=('vR', 'vL'))
# wR, vR*, vR, p
contr = npc.tensordot(contr, Ws[i], axes=(['p', 'wR'], ['p*', 'wL']))
# vR*, vR, wR, p
contr = npc.tensordot(contr, Bs[i].conj(), axes=(['p', 'vR*'], ['p*', 'vL*']))
# vR, wR, vR*
# note that the order of the legs changed, but that's no problem with labels:
# the arrays are automatically transposed as necessary
E = npc.inner(contr, envR, axes=(['vR', 'wR', 'vR*'], ['vL', 'wL', 'vL*']))
print("E =", E)
print("5) calculate twosite hamiltonian ``H2`` from the MPO")
# label left, right physical legs with p, q
W0 = W.replace_labels(['p', 'p*'], ['p0', 'p0*'])
W1 = W.replace_labels(['p', 'p*'], ['p1', 'p1*'])
H2 = npc.tensordot(W0, W1, axes=('wR', 'wL')).itranspose(['wL', 'wR', 'p0', 'p1', 'p0*', 'p1*'])
H2 = H2[0, 1] # (If H has singlesite terms, it's not that simple anymore)
print("H2 labels:", H2.get_leg_labels())
print("6) calculate exp(H2) by diagonalization of H2")
# diagonalization requires to view H2 as a matrix
H2 = H2.combine_legs([('p0', 'p1'), ('p0*', 'p1*')], qconj=[+1, 1])
print("labels after combine_legs:", H2.get_leg_labels())
E2, U2 = npc.eigh(H2)
print("Eigenvalues of H2:", E2)
U_expE2 = U2.scale_axis(np.exp(1.j * dt * E2), axis=1) # scale_axis ~= apply an diagonal matrix
exp_H2 = npc.tensordot(U_expE2, U2.conj(), axes=(1, 1))
exp_H2.iset_leg_labels(H2.get_leg_labels())
exp_H2 = exp_H2.split_legs() # by default split all legs which are `LegPipe`
# (this restores the originial labels ['p0', 'p1', 'p0*', 'p1*'] of `H2` in `exp_H2`)
print("7) apply exp(H2) to even/odd bonds of the MPS and truncate with svd")
# (this implements one time step of first order TEBD)
for even_odd in [0, 1]:
for i in range(even_odd, L  1, 2):
B_L = Bs[i].scale_axis(Ss[i], 'vL').ireplace_label('p', 'p0')
B_R = Bs[i + 1].replace_label('p', 'p1')
theta = npc.tensordot(B_L, B_R, axes=('vR', 'vL'))
theta = npc.tensordot(exp_H2, theta, axes=(['p0*', 'p1*'], ['p0', 'p1']))
# view as matrix for SVD
theta = theta.combine_legs([('vL', 'p0'), ('p1', 'vR')], new_axes=[0, 1], qconj=[+1, 1])
# now theta has labels '(vL.p0)', '(p1.vR)'
U, S, V = npc.svd(theta, inner_labels=['vR', 'vL'])
# truncate
keep = S > cutoff
S = S[keep]
invsq = np.linalg.norm(S)
Ss[i + 1] = S / invsq
U = U.iscale_axis(S / invsq, 'vR')
Bs[i] = U.split_legs('(vL.p0)').iscale_axis(Ss[i]**(1), 'vL').ireplace_label('p0', 'p')
Bs[i + 1] = V.split_legs('(p1.vR)').ireplace_label('p1', 'p')
print("finished")
b_mps.py¶
"""Simplified version of `a_np_conserved.py` making use of other classes (like MPS, MPO).
This example includes the following steps:
1) create Arrays for an Neel MPS
2) create an MPO representing the nearestneighbour AFM Heisenberg Hamiltonian
3) define 'environments' left and right
4) contract MPS and MPO to calculate the energy
5) extract twosite hamiltonian ``H2`` from the MPO
6) calculate ``exp(1.j*dt*H2)`` by diagonalization of H2
7) apply ``exp(H2)`` to two sites of the MPS and truncate with svd
Note that this example performs the same steps as `a_np_conserved.py`,
but makes use of other predefined classes except npc.
"""
# Copyright 20182020 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
import numpy as np
# some more imports
from tenpy.networks.site import SpinHalfSite
from tenpy.models.lattice import Chain
from tenpy.networks.mps import MPS
from tenpy.networks.mpo import MPO, MPOEnvironment
from tenpy.algorithms.truncation import svd_theta
# model parameters
Jxx, Jz = 1., 1.
L = 20
dt = 0.1
cutoff = 1.e10
print("Jxx={Jxx}, Jz={Jz}, L={L:d}".format(Jxx=Jxx, Jz=Jz, L=L))
print("1) create Arrays for an Neel MPS")
site = SpinHalfSite(conserve='Sz') # predefined charges and Sp,Sm,Sz operators
p_leg = site.leg
chinfo = p_leg.chinfo
# make lattice from unit cell and create product state MPS
lat = Chain(L, site, bc_MPS='finite')
state = ["up", "down"] * (L // 2) + ["up"] * (L % 2) # Neel state
print("state = ", state)
psi = MPS.from_product_state(lat.mps_sites(), state, lat.bc_MPS)
print("2) create an MPO representing the AFM Heisenberg Hamiltonian")
# predefined physical spin1/2 operators Sz, S+, S
Sz, Sp, Sm, Id = site.Sz, site.Sp, site.Sm, site.Id
mpo_leg = npc.LegCharge.from_qflat(chinfo, [[0], [2], [2], [0], [0]])
W_grid = [[Id, Sp, Sm, Sz, None ],
[None, None, None, None, 0.5 * Jxx * Sm],
[None, None, None, None, 0.5 * Jxx * Sp],
[None, None, None, None, Jz * Sz ],
[None, None, None, None, Id ]] # yapf:disable
W = npc.grid_outer(W_grid, [mpo_leg, mpo_leg.conj()], grid_labels=['wL', 'wR'])
# wL/wR = virtual left/right of the MPO
Ws = [W] * L
Ws[0] = W[:1, :]
Ws[-1] = W[:, -1:]
H = MPO(psi.sites, Ws, psi.bc, IdL=0, IdR=-1)
print("3) define 'environments' left and right")
# this is automatically done during initialization of MPOEnvironment
env = MPOEnvironment(psi, H, psi)
envL = env.get_LP(0)
envR = env.get_RP(L - 1)
print("4) contract MPS and MPO to calculate the energy <psi|H|psi>")
E = env.full_contraction(L - 1)
print("E =", E)
print("5) calculate twosite hamiltonian ``H2`` from the MPO")
# label left, right physical legs with p, q
W0 = H.get_W(0).replace_labels(['p', 'p*'], ['p0', 'p0*'])
W1 = H.get_W(1).replace_labels(['p', 'p*'], ['p1', 'p1*'])
H2 = npc.tensordot(W0, W1, axes=('wR', 'wL')).itranspose(['wL', 'wR', 'p0', 'p1', 'p0*', 'p1*'])
H2 = H2[H.IdL[0], H.IdR[2]]  # (If H has single-site terms, it's not that simple anymore)
print("H2 labels:", H2.get_leg_labels())
print("6) calculate exp(H2) by diagonalization of H2")
# diagonalization requires to view H2 as a matrix
H2 = H2.combine_legs([('p0', 'p1'), ('p0*', 'p1*')], qconj=[+1, -1])
print("labels after combine_legs:", H2.get_leg_labels())
E2, U2 = npc.eigh(H2)
print("Eigenvalues of H2:", E2)
U_expE2 = U2.scale_axis(np.exp(-1.j * dt * E2), axis=1)  # scale_axis ~= apply a diagonal matrix
exp_H2 = npc.tensordot(U_expE2, U2.conj(), axes=(1, 1))
exp_H2.iset_leg_labels(H2.get_leg_labels())
exp_H2 = exp_H2.split_legs()  # by default split all legs which are `LegPipe`
# (this restores the original labels ['p0', 'p1', 'p0*', 'p1*'] of `H2` in `exp_H2`)
# alternative way: use :func:`~tenpy.linalg.np_conserved.expm`
exp_H2_alternative = npc.expm(-1.j * dt * H2).split_legs()
assert npc.norm(exp_H2_alternative - exp_H2) < 1.e-14
print("7) apply exp(H2) to even/odd bonds of the MPS and truncate with svd")
# (this implements one time step of first order TEBD)
trunc_par = {'svd_min': cutoff, 'trunc_cut': None, 'verbose': 0}
for even_odd in [0, 1]:
    for i in range(even_odd, L - 1, 2):
        theta = psi.get_theta(i, 2)  # handles canonical form (i.e. scaling with 'S')
        theta = npc.tensordot(exp_H2, theta, axes=(['p0*', 'p1*'], ['p0', 'p1']))
        # view as matrix for SVD
        theta = theta.combine_legs([('vL', 'p0'), ('p1', 'vR')], new_axes=[0, 1], qconj=[+1, -1])
        # now theta has labels '(vL.p0)', '(p1.vR)'
        U, S, V, err, invsq = svd_theta(theta, trunc_par, inner_labels=['vR', 'vL'])
        psi.set_SR(i, S)
        A_L = U.split_legs('(vL.p0)').ireplace_label('p0', 'p')
        B_R = V.split_legs('(p1.vR)').ireplace_label('p1', 'p')
        psi.set_B(i, A_L, form='A')  # left-canonical form
        psi.set_B(i + 1, B_R, form='B')  # right-canonical form
print("finished")
c_tebd.py¶
"""Example illustrating the use of TEBD in tenpy.
The example functions in this class do the same as the ones in `toycodes/c_tebd.py`, but make use
of the classes defined in tenpy.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3
import numpy as np
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import tebd
def example_TEBD_gs_tf_ising_finite(L, g, verbose=True):
    print("finite TEBD, imaginary time evolution, transverse field Ising")
    print("L={L:d}, g={g:.2f}".format(L=L, g=g))
    model_params = dict(L=L, J=1., g=g, bc_MPS='finite', conserve=None, verbose=verbose)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    tebd_params = {
        'order': 2,
        'delta_tau_list': [0.1, 0.01, 0.001, 1.e-4, 1.e-5],
        'N_steps': 10,
        'max_error_E': 1.e-6,
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'verbose': verbose,
    }
    eng = tebd.Engine(psi, M, tebd_params)
    eng.run_GS()  # the main work...
    # expectation values
    E = np.sum(M.bond_energies(psi))  # M.bond_energies() works only for a NearestNeighborModel
    # alternative: directly measure E2 = np.sum(psi.expectation_value(M.H_bond[1:]))
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    mag_x = np.sum(psi.expectation_value("Sigmax"))
    mag_z = np.sum(psi.expectation_value("Sigmaz"))
    print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
    print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
    if L < 20:  # compare to exact result
        from tfi_exact import finite_gs_energy
        E_exact = finite_gs_energy(L, 1., g)
        print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
        print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M
def example_TEBD_gs_tf_ising_infinite(g, verbose=True):
    print("infinite TEBD, imaginary time evolution, transverse field Ising")
    print("g={g:.2f}".format(g=g))
    model_params = dict(L=2, J=1., g=g, bc_MPS='infinite', conserve=None, verbose=verbose)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    tebd_params = {
        'order': 2,
        'delta_tau_list': [0.1, 0.01, 0.001, 1.e-4, 1.e-5],
        'N_steps': 10,
        'max_error_E': 1.e-8,
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'verbose': verbose,
    }
    eng = tebd.Engine(psi, M, tebd_params)
    eng.run_GS()  # the main work...
    E = np.mean(M.bond_energies(psi))  # M.bond_energies() works only for a NearestNeighborModel
    # alternative: directly measure E2 = np.mean(psi.expectation_value(M.H_bond))
    print("E (per site) = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    mag_x = np.mean(psi.expectation_value("Sigmax"))
    mag_z = np.mean(psi.expectation_value("Sigmaz"))
    print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
    print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
    print("correlation length:", psi.correlation_length())
    # compare to exact result
    from tfi_exact import infinite_gs_energy
    E_exact = infinite_gs_energy(1., g)
    print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
    print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M
def example_TEBD_tf_ising_lightcone(L, g, tmax, dt, verbose=True):
    print("finite TEBD, real time evolution")
    print("L={L:d}, g={g:.2f}, tmax={tmax:.2f}, dt={dt:.3f}".format(L=L, g=g, tmax=tmax, dt=dt))
    # find ground state with TEBD or DMRG
    # E, psi, M = example_TEBD_gs_tf_ising_finite(L, g)
    from d_dmrg import example_DMRG_tf_ising_finite
    print("(run DMRG to get the groundstate)")
    E, psi, M = example_DMRG_tf_ising_finite(L, g, verbose=False)
    print("(DMRG finished)")
    i0 = L // 2
    # apply sigmaz on site i0
    psi.apply_local_op(i0, 'Sigmaz', unitary=True)
    dt_measure = 0.05
    # tebd.Engine makes 'N_steps' steps of `dt` at once; for second order this is more efficient.
    tebd_params = {
        'order': 2,
        'dt': dt,
        'N_steps': int(dt_measure / dt + 0.5),
        'trunc_params': {
            'chi_max': 50,
            'svd_min': 1.e-10,
            'trunc_cut': None
        },
        'verbose': verbose,
    }
    eng = tebd.Engine(psi, M, tebd_params)
    S = [psi.entanglement_entropy()]
    for n in range(int(tmax / dt_measure + 0.5)):
        eng.run()
        S.append(psi.entanglement_entropy())
    import matplotlib.pyplot as plt
    plt.figure()
    plt.imshow(S[::-1],
               vmin=0.,
               aspect='auto',
               interpolation='nearest',
               extent=(0, L - 1., -0.5 * dt_measure, eng.evolved_time + 0.5 * dt_measure))
    plt.xlabel('site $i$')
    plt.ylabel('time $t/J$')
    plt.ylim(0., tmax)
    plt.colorbar().set_label('entropy $S$')
    filename = 'c_tebd_lightcone_{g:.2f}.pdf'.format(g=g)
    plt.savefig(filename)
    print("saved " + filename)
def example_TEBD_gs_tf_ising_next_nearest_neighbor(L, g, Jp, verbose=True):
    from tenpy.models.spins_nnn import SpinChainNNN2
    from tenpy.models.model import NearestNeighborModel
    print("finite TEBD, imaginary time evolution, transverse field Ising next-nearest neighbor")
    print("L={L:d}, g={g:.2f}, Jp={Jp:.2f}".format(L=L, g=g, Jp=Jp))
    model_params = dict(L=L,
                        Jx=1.,
                        Jy=0.,
                        Jz=0.,
                        Jxp=Jp,
                        Jyp=0.,
                        Jzp=0.,
                        hz=g,
                        bc_MPS='finite',
                        conserve=None,
                        verbose=verbose)
    # we start with the non-grouped sites, but next-nearest neighbor interactions, building the MPO
    M = SpinChainNNN2(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    # now we group each two sites ...
    psi.group_sites(n=2)  # ... in the state
    M.group_sites(n=2)  # ... and model
    # now, M has only 'nearest-neighbor' interactions with respect to the grouped sites
    # thus, we can convert the MPO into H_bond terms:
    M_nn = NearestNeighborModel.from_MPOModel(M)  # hence, we can initialize H_bond from the MPO
    # now, we continue to run TEBD as before
    tebd_params = {
        'order': 2,
        'delta_tau_list': [0.1, 0.01, 0.001, 1.e-4, 1.e-5],
        'N_steps': 10,
        'max_error_E': 1.e-6,
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'verbose': verbose,
    }
    eng = tebd.Engine(psi, M_nn, tebd_params)  # use M_nn and grouped psi
    eng.run_GS()  # the main work...
    # expectation values:
    E = np.sum(M_nn.bond_energies(psi))  # bond_energies() works only for a NearestNeighborModel
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    # we can split the sites of the state again for an easier evaluation of expectation values
    psi.group_split()
    mag_x = 2. * np.sum(psi.expectation_value("Sx"))  # factor of 2 for Sx vs Sigmax
    mag_z = 2. * np.sum(psi.expectation_value("Sz"))
    print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
    print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
    return E, psi, M
if __name__ == "__main__":
example_TEBD_gs_tf_ising_finite(L=10, g=1.)
print("" * 100)
example_TEBD_gs_tf_ising_infinite(g=1.5)
print("" * 100)
example_TEBD_tf_ising_lightcone(L=20, g=1.5, tmax=3., dt=0.01)
print("" * 100)
example_TEBD_gs_tf_ising_next_nearest_neighbor(L=10, g=1.0, Jp=0.1)
d_dmrg.py¶
"""Example illustrating the use of DMRG in tenpy.
The example functions in this class do the same as the ones in `toycodes/d_dmrg.py`,
but make use of the classes defined in tenpy.
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3
import numpy as np
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.models.spins import SpinModel
from tenpy.algorithms import dmrg
def example_DMRG_tf_ising_finite(L, g, verbose=True):
    print("finite DMRG, transverse field Ising model")
    print("L={L:d}, g={g:.2f}".format(L=L, g=g))
    model_params = dict(L=L, J=1., g=g, bc_MPS='finite', conserve=None, verbose=verbose)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    dmrg_params = {
        'mixer': None,  # setting this to True helps to escape local minima
        'max_E_err': 1.e-10,
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'verbose': verbose,
        'combine': True
    }
    info = dmrg.run(psi, M, dmrg_params)  # the main work...
    E = info['E']
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    mag_x = np.sum(psi.expectation_value("Sigmax"))
    mag_z = np.sum(psi.expectation_value("Sigmaz"))
    print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
    print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
    if L < 20:  # compare to exact result
        from tfi_exact import finite_gs_energy
        E_exact = finite_gs_energy(L, 1., g)
        print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
        print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M
def example_1site_DMRG_tf_ising_finite(L, g, verbose=True):
    print("single-site finite DMRG, transverse field Ising model")
    print("L={L:d}, g={g:.2f}".format(L=L, g=g))
    model_params = dict(L=L, J=1., g=g, bc_MPS='finite', conserve=None, verbose=verbose)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    dmrg_params = {
        'mixer': True,  # setting this to True is essential for the 1-site algorithm to work.
        'max_E_err': 1.e-10,
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'verbose': verbose,
        'combine': False,
        'active_sites': 1  # specifies single-site
    }
    info = dmrg.run(psi, M, dmrg_params)
    E = info['E']
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    mag_x = np.sum(psi.expectation_value("Sigmax"))
    mag_z = np.sum(psi.expectation_value("Sigmaz"))
    print("magnetization in X = {mag_x:.5f}".format(mag_x=mag_x))
    print("magnetization in Z = {mag_z:.5f}".format(mag_z=mag_z))
    if L < 20:  # compare to exact result
        from tfi_exact import finite_gs_energy
        E_exact = finite_gs_energy(L, 1., g)
        print("Exact diagonalization: E = {E:.13f}".format(E=E_exact))
        print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M
def example_DMRG_tf_ising_infinite(g, verbose=True):
    print("infinite DMRG, transverse field Ising model")
    print("g={g:.2f}".format(g=g))
    model_params = dict(L=2, J=1., g=g, bc_MPS='infinite', conserve=None, verbose=verbose)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    dmrg_params = {
        'mixer': True,  # setting this to True helps to escape local minima
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'max_E_err': 1.e-10,
        'verbose': verbose,
    }
    # Sometimes, we want to call a 'DMRG engine' explicitly
    eng = dmrg.TwoSiteDMRGEngine(psi, M, dmrg_params)
    E, psi = eng.run()  # equivalent to dmrg.run() up to the return parameters.
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    mag_x = np.mean(psi.expectation_value("Sigmax"))
    mag_z = np.mean(psi.expectation_value("Sigmaz"))
    print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
    print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
    print("correlation length:", psi.correlation_length())
    # compare to exact result
    from tfi_exact import infinite_gs_energy
    E_exact = infinite_gs_energy(1., g)
    print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
    print("relative error: ", abs((E - E_exact) / E_exact))
    return E, psi, M
def example_1site_DMRG_tf_ising_infinite(g, verbose=True):
    print("single-site infinite DMRG, transverse field Ising model")
    print("g={g:.2f}".format(g=g))
    model_params = dict(L=2, J=1., g=g, bc_MPS='infinite', conserve=None, verbose=verbose)
    M = TFIChain(model_params)
    product_state = ["up"] * M.lat.N_sites
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    dmrg_params = {
        'mixer': True,  # setting this to True is essential for the 1-site algorithm to work.
        'trunc_params': {
            'chi_max': 30,
            'svd_min': 1.e-10
        },
        'max_E_err': 1.e-10,
        'verbose': verbose,
        'combine': True
    }
    eng = dmrg.SingleSiteDMRGEngine(psi, M, dmrg_params)
    E, psi = eng.run()  # equivalent to dmrg.run() up to the return parameters.
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    mag_x = np.mean(psi.expectation_value("Sigmax"))
    mag_z = np.mean(psi.expectation_value("Sigmaz"))
    print("<sigma_x> = {mag_x:.5f}".format(mag_x=mag_x))
    print("<sigma_z> = {mag_z:.5f}".format(mag_z=mag_z))
    print("correlation length:", psi.correlation_length())
    # compare to exact result
    from tfi_exact import infinite_gs_energy
    E_exact = infinite_gs_energy(1., g)
    print("Analytic result: E (per site) = {E:.13f}".format(E=E_exact))
    print("relative error: ", abs((E - E_exact) / E_exact))
def example_DMRG_heisenberg_xxz_infinite(Jz, conserve='best', verbose=True):
    print("infinite DMRG, Heisenberg XXZ chain")
    print("Jz={Jz:.2f}, conserve={conserve!r}".format(Jz=Jz, conserve=conserve))
    model_params = dict(
        L=2,
        S=0.5,  # spin 1/2
        Jx=1.,
        Jy=1.,
        Jz=Jz,  # couplings
        bc_MPS='infinite',
        conserve=conserve,
        verbose=verbose)
    M = SpinModel(model_params)
    product_state = ["up", "down"]  # initial Neel state
    psi = MPS.from_product_state(M.lat.mps_sites(), product_state, bc=M.lat.bc_MPS)
    dmrg_params = {
        'mixer': True,  # setting this to True helps to escape local minima
        'trunc_params': {
            'chi_max': 100,
            'svd_min': 1.e-10,
        },
        'max_E_err': 1.e-10,
        'verbose': verbose,
    }
    info = dmrg.run(psi, M, dmrg_params)
    E = info['E']
    print("E = {E:.13f}".format(E=E))
    print("final bond dimensions: ", psi.chi)
    Sz = psi.expectation_value("Sz")  # Sz instead of Sigmaz: spin-1/2 operators!
    mag_z = np.mean(Sz)
    print("<S_z> = [{Sz0:.5f}, {Sz1:.5f}]; mean = {mag_z:.5f}".format(Sz0=Sz[0],
                                                                      Sz1=Sz[1],
                                                                      mag_z=mag_z))
    # note: it's clear that mean(<Sz>) is 0: the model has Sz conservation!
    print("correlation length:", psi.correlation_length())
    corrs = psi.correlation_function("Sz", "Sz", sites1=range(10))
    print("correlations <Sz_i Sz_j> =")
    print(corrs)
    return E, psi, M
if __name__ == "__main__":
example_DMRG_tf_ising_finite(L=10, g=1., verbose=True)
print("" * 100)
example_1site_DMRG_tf_ising_finite(L=10, g=1., verbose=True)
print("" * 100)
example_DMRG_tf_ising_infinite(g=1.5, verbose=True)
print("" * 100)
example_1site_DMRG_tf_ising_infinite(g=1.5, verbose=True)
print("" * 100)
example_DMRG_heisenberg_xxz_infinite(Jz=1.5)
e_tdvp.py¶
"""Example illustrating the use of TDVP in tenpy.
As of now, we have TDVP only for finite systems. The call structure is quite similar to TEBD. A
difference is that we can run one-site TDVP or two-site TDVP. In the former, the bond dimension
cannot grow; the latter allows the bond dimension to grow and hence requires a truncation.
"""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3
import numpy as np
import tenpy.linalg.np_conserved as npc
import tenpy.models.spins
import tenpy.networks.mps as mps
import tenpy.networks.site as site
from tenpy.algorithms import tdvp
from tenpy.networks.mps import MPS
import copy
def run_out_of_equilibrium():
    L = 10
    chi = 5
    delta_t = 0.1
    model_params = {
        'L': L,
        'S': 0.5,
        'conserve': 'Sz',
        'Jz': 1.0,
        'Jy': 1.0,
        'Jx': 1.0,
        'hx': 0.0,
        'hy': 0.0,
        'hz': 0.0,
        'muJ': 0.0,
        'bc_MPS': 'finite',
    }
    heisenberg = tenpy.models.spins.SpinChain(model_params)
    product_state = ["up"] * (L // 2) + ["down"] * (L - L // 2)
    # starting from a domain-wall product state which is not an eigenstate of the Heisenberg model
    psi = MPS.from_product_state(heisenberg.lat.mps_sites(),
                                 product_state,
                                 bc=heisenberg.lat.bc_MPS,
                                 form='B')
    tdvp_params = {
        'start_time': 0,
        'dt': delta_t,
        'trunc_params': {
            'chi_max': chi,
            'svd_min': 1.e-10,
            'trunc_cut': None
        }
    }
    tdvp_engine = tdvp.Engine(psi, heisenberg, tdvp_params)
    times = []
    S_mid = []
    for i in range(30):
        tdvp_engine.run_two_sites(N_steps=1)
        times.append(tdvp_engine.evolved_time)
        S_mid.append(psi.entanglement_entropy(bonds=[L // 2])[0])
    for i in range(30):
        tdvp_engine.run_one_site(N_steps=1)
        # psi_2 = copy.deepcopy(psi)
        # psi_2.canonical_form()
        times.append(tdvp_engine.evolved_time)
        S_mid.append(psi.entanglement_entropy(bonds=[L // 2])[0])
    import matplotlib.pyplot as plt
    plt.figure()
    plt.plot(times, S_mid)
    plt.xlabel('t')
    plt.ylabel('S')
    plt.axvline(x=3.1, color='red')
    plt.text(0.0, 0.0000015, "Two sites update")
    plt.text(3.1, 0.0000015, "One site update")
    plt.show()
if __name__ == "__main__":
run_out_of_equilibrium()
purification.py¶
from tenpy.models.tf_ising import TFIChain
from tenpy.networks.purification_mps import PurificationMPS
from tenpy.algorithms.purification import PurificationTEBD, PurificationApplyMPO
def imag_tebd(L=30, beta_max=3., dt=0.05, order=2, bc="finite"):
    model_params = dict(L=L, J=1., g=1.2)
    M = TFIChain(model_params)
    psi = PurificationMPS.from_infiniteT(M.lat.mps_sites(), bc=bc)
    options = {
        'trunc_params': {
            'chi_max': 100,
            'svd_min': 1.e-8
        },
        'order': order,
        'dt': dt,
        'N_steps': 1
    }
    beta = 0.
    eng = PurificationTEBD(psi, M, options)
    Szs = [psi.expectation_value("Sz")]
    betas = [0.]
    while beta < beta_max:
        beta += 2. * dt  # factor of 2: |psi> ~= exp^{-dt H}, but rho = |psi><psi|
        betas.append(beta)
        print("beta = {0:.2f}".format(beta))
        eng.run_imaginary(dt)  # cool down by dt
        Szs.append(psi.expectation_value("Sz"))  # and further measurements...
    return {'beta': betas, 'Sz': Szs}
def imag_apply_mpo(L=30, beta_max=3., dt=0.05, order=2, bc="finite", approx="II"):
    model_params = dict(L=L, J=1., g=1.2)
    M = TFIChain(model_params)
    psi = PurificationMPS.from_infiniteT(M.lat.mps_sites(), bc=bc)
    options = {'trunc_params': {'chi_max': 100, 'svd_min': 1.e-8}}
    beta = 0.
    if order == 1:
        Us = [M.H_MPO.make_U(-dt, approx)]
    elif order == 2:
        Us = [M.H_MPO.make_U(-d * dt, approx) for d in [0.5 + 0.5j, 0.5 - 0.5j]]
    eng = PurificationApplyMPO(psi, Us[0], options)
    Szs = [psi.expectation_value("Sz")]
    betas = [0.]
    while beta < beta_max:
        beta += 2. * dt  # factor of 2: |psi> ~= exp^{-dt H}, but rho = |psi><psi|
        betas.append(beta)
        print("beta = {0:.2f}".format(beta))
        for U in Us:
            eng.init_env(U)  # reset environment, initialize new copy of psi
            eng.run()  # apply U to psi
        Szs.append(psi.expectation_value("Sz"))  # and further measurements...
    return {'beta': betas, 'Sz': Szs}
if __name__ == "__main__":
data_tebd = imag_tebd()
data_mpo = imag_apply_mpo()
from matplotlib.pyplot import plt
plt.plot(data_mpo['beta'], np.sum(data_mpo['Sz'], axis=1), label='MPO')
plt.plot(data_tebd['beta'], np.sum(data_tebd['Sz'], axis=1), label='TEBD')
plt.xlabel(r'$\beta$')
plt.ylabel(r'total $S^z$')
plt.show()
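Both functions above increase `beta` by `2 * dt` per imaginary-time step. The reason, as the in-code comments note, is that the physical density matrix is quadratic in the purified state:

```latex
|\psi(\beta)\rangle \propto e^{-\beta H / 2}\, |\psi(0)\rangle ,
\qquad
\rho(\beta) \propto \operatorname{Tr}_{\text{anc}}
    \big[\, |\psi(\beta)\rangle\langle\psi(\beta)| \,\big] \propto e^{-\beta H} ,
```

so applying \(e^{-\mathrm{d}t\, H}\) once to \(|\psi\rangle\) cools \(\rho\) by \(\mathrm{d}\beta = 2\,\mathrm{d}t\).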
tfi_exact.py¶
"""Provides exact ground state energies for the transverse field ising model for comparison.
The Hamiltonian reads
.. math ::
H = - J \\sum_{i} \\sigma^x_i \\sigma^x_{i+1} - g \\sum_{i} \\sigma^z_i
"""
# Copyright 2019-2020 TeNPy Developers, GNU GPLv3
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg.eigen.arpack as arp
import warnings
import scipy.integrate
def finite_gs_energy(L, J, g):
    """For comparison: obtain ground state energy from exact diagonalization.

    Exponentially expensive in L, only works for small enough `L` <~ 20.
    """
    if L >= 20:
        warnings.warn("Large L: Exact diagonalization might take a long time!")
    # get single-site operators
    sx = sparse.csr_matrix(np.array([[0., 1.], [1., 0.]]))
    sz = sparse.csr_matrix(np.array([[1., 0.], [0., -1.]]))
    id = sparse.csr_matrix(np.eye(2))
    sx_list = []  # sx_list[i] = kron([id, id, ..., id, sx, id, .... id])
    sz_list = []
    for i_site in range(L):
        x_ops = [id] * L
        z_ops = [id] * L
        x_ops[i_site] = sx
        z_ops[i_site] = sz
        X = x_ops[0]
        Z = z_ops[0]
        for j in range(1, L):
            X = sparse.kron(X, x_ops[j], 'csr')
            Z = sparse.kron(Z, z_ops[j], 'csr')
        sx_list.append(X)
        sz_list.append(Z)
    H_xx = sparse.csr_matrix((2**L, 2**L))
    H_z = sparse.csr_matrix((2**L, 2**L))
    for i in range(L - 1):
        H_xx = H_xx + sx_list[i] * sx_list[(i + 1) % L]
    for i in range(L):
        H_z = H_z + sz_list[i]
    H = -J * H_xx - g * H_z
    E, V = arp.eigsh(H, k=1, which='SA', return_eigenvectors=True, ncv=20)
    return E[0]
def infinite_gs_energy(J, g):
    """For comparison: calculate the ground state energy density from the analytic formula.

    The analytic formula stems from mapping the model to free fermions, see P. Pfeuty,
    The one-dimensional Ising model with a transverse field, Annals of Physics 57, p. 79 (1970).
    Note that we use Pauli matrices where this reference uses spin-1/2 matrices, and
    replace the sum_k -> integral dk/2pi to obtain the result in the N -> infinity limit.
    """
    def f(k, lambda_):
        return np.sqrt(1 + lambda_**2 + 2 * lambda_ * np.cos(k))

    E0_exact = -g / (J * 2. * np.pi) * scipy.integrate.quad(f, -np.pi, np.pi, args=(J / g, ))[0]
    return E0_exact
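In formulas, `infinite_gs_energy` evaluates (in the Pauli-matrix convention of the Hamiltonian above):

```latex
e_0 = -\frac{g}{2\pi J} \int_{-\pi}^{\pi} \mathrm{d}k \,
      \sqrt{1 + \lambda^2 + 2\lambda \cos k} ,
\qquad \lambda = \frac{J}{g} .
```

At the critical point \(J = g\), the integral can be done in closed form and gives \(e_0 = -4/\pi \approx -1.2732\).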
z_exact_diag.py¶
"""A simple example comparing DMRG output with full diagonalization (ED).
Sorry that this is not well documented! ED is meant to be used for debugging only ;)
"""
# Copyright 2018-2020 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
from tenpy.models.xxz_chain import XXZChain
from tenpy.networks.mps import MPS
from tenpy.algorithms.exact_diag import ExactDiag
from tenpy.algorithms import dmrg
def example_exact_diagonalization(L, Jz):
    xxz_pars = dict(L=L, Jxx=1., Jz=Jz, hz=0.0, bc_MPS='finite')
    M = XXZChain(xxz_pars)
    product_state = ["up", "down"] * (xxz_pars['L'] // 2)  # this selects a charge sector!
    psi_DMRG = MPS.from_product_state(M.lat.mps_sites(), product_state)
    charge_sector = psi_DMRG.get_total_charge(True)  # ED charge sector should match
    ED = ExactDiag(M, charge_sector=charge_sector, max_size=2.e6)
    ED.build_full_H_from_mpo()
    # ED.build_full_H_from_bonds()  # whatever you prefer
    print("start diagonalization")
    ED.full_diagonalization()  # the expensive part for large L
    E0_ED, psi_ED = ED.groundstate()  # return the ground state
    print("psi_ED =", psi_ED)
    print("run DMRG")
    dmrg.run(psi_DMRG, M, {'verbose': 0})  # modifies psi_DMRG in place!
    # first way to compare ED with DMRG: convert MPS to ED vector
    psi_DMRG_full = ED.mps_to_full(psi_DMRG)
    print("psi_DMRG_full =", psi_DMRG_full)
    ov = npc.inner(psi_ED, psi_DMRG_full, axes='range', do_conj=True)
    print("<psi_ED|psi_DMRG_full> =", ov)
    assert abs(abs(ov) - 1.) < 1.e-13
    # second way: convert ED vector to MPS
    psi_ED_mps = ED.full_to_mps(psi_ED)
    ov2 = psi_ED_mps.overlap(psi_DMRG)
    print("<psi_ED_mps|psi_DMRG> =", ov2)
    assert abs(abs(ov2) - 1.) < 1.e-13
    assert abs(ov - ov2) < 1.e-13
    # --> advantage: expectation_value etc. of MPS are available!
    print("<Sz> =", psi_ED_mps.expectation_value('Sz'))
if __name__ == "__main__":
example_exact_diagonalization(10, 1.)
Notebooks¶
This is a collection of [jupyter] notebooks from the [TeNPyNotebooks] repository. You need to install TeNPy to execute them (see Installation instructions), but you can copy them anywhere before execution. Note that some of them might take a while to run, as they contain more extensive examples.
A first TEBD Example¶
Like examples/c_tebd.py, this notebook shows the basic interface for TEBD. It initializes the transverse field Ising model \(H = -J XX - g Z\) at the critical point \(J=g=1\), and an MPS in the all-up state \(|\uparrow\cdots \uparrow\rangle\). It then performs a real-time evolution with TEBD and measures a few observables. This setup corresponds to a global quench from \(g = \infty\) to \(g=1\).
[1]:
import numpy as np
import scipy
import matplotlib.pyplot as plt
import matplotlib
[2]:
import tenpy
import tenpy.linalg.np_conserved as npc
from tenpy.algorithms import tebd
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
[3]:
L = 30
[4]:
model_params = {
    'J': 1., 'g': 1.,  # critical
    'L': L,
    'bc_MPS': 'finite',
}
M = TFIChain(model_params)
Reading 'bc_MPS'='finite' for config TFIChain
Reading 'L'=30 for config TFIChain
Reading 'J'=1.0 for config TFIChain
Reading 'g'=1.0 for config TFIChain
[5]:
psi = MPS.from_lat_product_state(M.lat, [['up']])
[6]:
tebd_params = {
    'N_steps': 1,
    'dt': 0.1,
    'order': 4,
    'trunc_params': {'chi_max': 100, 'svd_min': 1.e-12}
}
eng = tebd.Engine(psi, M, tebd_params)
Subconfig 'trunc_params'=Config(<3 options>, 'trunc_params') for config TEBD
[7]:
def measurement(eng, data):
    keys = ['t', 'entropy', 'Sx', 'Sz', 'corr_XX', 'corr_ZZ', 'trunc_err']
    if data is None:
        data = dict([(k, []) for k in keys])
    data['t'].append(eng.evolved_time)
    data['entropy'].append(eng.psi.entanglement_entropy())
    data['Sx'].append(eng.psi.expectation_value('Sigmax'))
    data['Sz'].append(eng.psi.expectation_value('Sigmaz'))
    data['corr_XX'].append(eng.psi.correlation_function('Sigmax', 'Sigmax'))
    data['trunc_err'].append(eng.trunc_err.eps)
    return data
[8]:
data = measurement(eng, None)
[9]:
while eng.evolved_time < 5.:
    eng.run()
    measurement(eng, data)
Reading 'dt'=0.1 for config TEBD
Reading 'N_steps'=1 for config TEBD
Reading 'order'=4 for config TEBD
Calculate U for {'order': 4, 'delta_t': 0.1, 'type_evo': 'real', 'E_offset': None, 'tau': 0.1}
--> time=0.100, max_chi=6, Delta_S=5.5243e-02, S=0.0552432967, since last update: 0.5 s
--> time=0.200, max_chi=6, Delta_S=1.0453e-01, S=0.1597705686, since last update: 0.4 s
--> time=0.300, max_chi=8, Delta_S=1.1368e-01, S=0.2734471407, since last update: 0.5 s
--> time=0.400, max_chi=10, Delta_S=1.0226e-01, S=0.3757082127, since last update: 0.4 s
--> time=0.500, max_chi=12, Delta_S=8.2474e-02, S=0.4581821173, since last update: 0.4 s
--> time=0.600, max_chi=12, Delta_S=6.3393e-02, S=0.5215746922, since last update: 0.5 s
--> time=0.700, max_chi=15, Delta_S=5.0837e-02, S=0.5724119006, since last update: 0.5 s
--> time=0.800, max_chi=18, Delta_S=4.7046e-02, S=0.6194580889, since last update: 0.5 s
--> time=0.900, max_chi=20, Delta_S=5.0606e-02, S=0.6700636890, since last update: 0.5 s
--> time=1.000, max_chi=20, Delta_S=5.7462e-02, S=0.7275254979, since last update: 0.5 s
--> time=1.100, max_chi=24, Delta_S=6.3115e-02, S=0.7906402648, since last update: 0.5 s
--> time=1.200, max_chi=26, Delta_S=6.4779e-02, S=0.8554189184, since last update: 0.5 s
--> time=1.300, max_chi=30, Delta_S=6.2269e-02, S=0.9176882429, since last update: 0.5 s
--> time=1.400, max_chi=34, Delta_S=5.7451e-02, S=0.9751394599, since last update: 0.5 s
--> time=1.500, max_chi=40, Delta_S=5.2899e-02, S=1.0280386118, since last update: 0.5 s
--> time=1.600, max_chi=40, Delta_S=5.0543e-02, S=1.0785817862, since last update: 0.5 s
--> time=1.700, max_chi=46, Delta_S=5.0852e-02, S=1.1294340516, since last update: 0.5 s
--> time=1.800, max_chi=52, Delta_S=5.2841e-02, S=1.1822748827, since last update: 0.7 s
--> time=1.900, max_chi=57, Delta_S=5.4804e-02, S=1.2370787276, since last update: 0.8 s
--> time=2.000, max_chi=62, Delta_S=5.5311e-02, S=1.2923895877, since last update: 0.9 s
--> time=2.100, max_chi=71, Delta_S=5.3906e-02, S=1.3462960135, since last update: 0.9 s
--> time=2.200, max_chi=80, Delta_S=5.1209e-02, S=1.3975050668, since last update: 1.2 s
--> time=2.300, max_chi=85, Delta_S=4.8446e-02, S=1.4459514729, since last update: 1.3 s
--> time=2.400, max_chi=95, Delta_S=4.6732e-02, S=1.4926837515, since last update: 1.4 s
--> time=2.500, max_chi=100, Delta_S=4.6486e-02, S=1.5391700161, since last update: 1.7 s
--> time=2.600, max_chi=100, Delta_S=4.7282e-02, S=1.5864521162, since last update: 1.6 s
--> time=2.700, max_chi=100, Delta_S=4.8160e-02, S=1.6346120966, since last update: 1.4 s
--> time=2.800, max_chi=100, Delta_S=4.8202e-02, S=1.6828138289, since last update: 1.5 s
--> time=2.900, max_chi=100, Delta_S=4.7044e-02, S=1.7298576742, since last update: 1.5 s
--> time=3.000, max_chi=100, Delta_S=4.5033e-02, S=1.7748910539, since last update: 1.8 s
--> time=3.100, max_chi=100, Delta_S=4.2960e-02, S=1.8178514255, since last update: 1.8 s
--> time=3.200, max_chi=100, Delta_S=4.1575e-02, S=1.8594265450, since last update: 1.9 s
--> time=3.300, max_chi=100, Delta_S=4.1182e-02, S=1.9006080872, since last update: 1.8 s
--> time=3.400, max_chi=100, Delta_S=4.1503e-02, S=1.9421110677, since last update: 1.5 s
--> time=3.500, max_chi=100, Delta_S=4.1873e-02, S=1.9839839097, since last update: 1.7 s
--> time=3.600, max_chi=100, Delta_S=4.1645e-02, S=2.0256293591, since last update: 1.4 s
--> time=3.700, max_chi=100, Delta_S=4.0570e-02, S=2.0661996178, since last update: 1.6 s
--> time=3.800, max_chi=100, Delta_S=3.8897e-02, S=2.1050963078, since last update: 1.6 s
--> time=3.900, max_chi=100, Delta_S=3.7189e-02, S=2.1422855970, since last update: 1.6 s
--> time=4.000, max_chi=100, Delta_S=3.5994e-02, S=2.1782792652, since last update: 1.6 s
--> time=4.100, max_chi=100, Delta_S=3.5534e-02, S=2.2138131347, since last update: 1.8 s
--> time=4.200, max_chi=100, Delta_S=3.5597e-02, S=2.2494096463, since last update: 1.7 s
--> time=4.300, max_chi=100, Delta_S=3.5673e-02, S=2.2850830917, since last update: 1.7 s
--> time=4.400, max_chi=100, Delta_S=3.5272e-02, S=2.3203546259, since last update: 1.8 s
--> time=4.500, max_chi=100, Delta_S=3.4188e-02, S=2.3545423452, since last update: 1.6 s
--> time=4.600, max_chi=100, Delta_S=3.2596e-02, S=2.3871383468, since last update: 1.6 s
--> time=4.700, max_chi=100, Delta_S=3.0915e-02, S=2.4180531110, since last update: 1.6 s
--> time=4.800, max_chi=100, Delta_S=2.9546e-02, S=2.4475988717, since last update: 1.5 s
--> time=4.900, max_chi=100, Delta_S=2.8630e-02, S=2.4762292780, since last update: 1.4 s
--> time=5.000, max_chi=100, Delta_S=2.7962e-02, S=2.5041913364, since last update: 1.6 s
--> time=5.100, max_chi=100, Delta_S=2.7099e-02, S=2.5312904883, since last update: 1.8 s
[10]:
plt.plot(data['t'], np.array(data['entropy'])[:, L//2])
plt.xlabel('time $t$')
plt.ylabel('entropy $S$')
[10]:
Text(0, 0.5, 'entropy $S$')
The linear growth of \(S\) in time is typical for a global quench and is to be expected from the quasiparticle picture.
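As a rough consistency check (not part of the original notebook), one can fit a line to a few of the \(S\) values reported in the TEBD log above, between \(t=1.5\) and \(t=3.0\):

```python
import numpy as np

# entropy values copied from the TEBD log above, between t=1.5 and t=3.0
t = np.array([1.5, 2.0, 2.5, 3.0])
S = np.array([1.0280386118, 1.2923895877, 1.5391700161, 1.7748910539])
slope, offset = np.polyfit(t, S, 1)  # least-squares line fit
print("dS/dt ~ {0:.2f}".format(slope))  # -> dS/dt ~ 0.50
```

The nearly constant slope is the quantitative signature of the linear entanglement growth.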
[11]:
plt.plot(data['t'], np.sum(data['Sx'], axis=1), label="X")
plt.plot(data['t'], np.sum(data['Sz'], axis=1), label="Z")
plt.xlabel('time $t$')
plt.ylabel('magnetization')
plt.legend(loc='best')
[11]:
<matplotlib.legend.Legend at 0x7f0350435eb0>
The strict conservation of \(\langle X \rangle\) being zero is ensured by charge conservation, because X changes the parity sector. Nevertheless, the XX correlation function can be nontrivial:
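The parity argument can be illustrated with plain numpy (independent of TeNPy): a single X anticommutes with the global parity operator, so its expectation value vanishes in any parity eigenstate, while XX commutes with parity and can be nonzero.

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)
P = np.kron(Z, np.kron(Z, Z))        # global parity of 3 spins
X1 = np.kron(I2, np.kron(X, I2))     # X acting on the middle site
assert np.allclose(P @ X1, -(X1 @ P))  # X_1 anticommutes with P
psi = np.zeros(8)
psi[0] = 1.0                         # |up, up, up>: parity eigenstate with P = +1
print(np.vdot(psi, X1 @ psi))        # -> 0.0
```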
[12]:
corrs = np.array(data['corr_XX'])
tmax = data['t'][-1]
x = np.arange(L)
cmap = matplotlib.cm.viridis
for i, t in list(enumerate(data['t'])):
    if i == 0 or i == len(data['t']) - 1:
        label = '{t:.2f}'.format(t=t)
    else:
        label = None
    plt.plot(x, corrs[i, L//2, :], color=cmap(t/tmax), label=label)
plt.