Tensor Network Python (TeNPy)¶
TeNPy (short for ‘Tensor Network Python’) is a Python library for the simulation of strongly correlated quantum systems with tensor networks.
The philosophy of this library is to strike a balance between good readability and usability for newcomers on the one hand, and powerful algorithms and fast development of new algorithms for experts on the other. For good readability, we include extensive documentation next to the code, both in Python docstrings and separately as user guides, as well as simple example codes and even toy codes, which just demonstrate various algorithms (like TEBD and DMRG) in ~100 lines per file.
How do I get set up?¶
Follow the instructions in the file doc/INSTALL.rst, online at https://tenpy.github.io/INSTALL.html.
The latest version of the source code can be obtained from https://github.com/tenpy/tenpy.
How to read the documentation¶
The documentation is available online at https://tenpy.github.io. It is roughly split into two parts: on the one hand the full “reference” containing the documentation of all functions, classes, methods, etc., and on the other hand the “user guide” containing some introductions and additional explanations.
The documentation is based on Python’s docstrings and some additional *.rst files located in the folder doc/ of the repository.
All documentation is formatted as reStructuredText, which means it is quite readable as plain text, but can also be converted to other formats.
If you like it simple, you can just use the interactive python help(), a Python IDE of your choice or jupyter notebooks, or just read the source.
Moreover, the documentation is converted nightly into HTML using Sphinx and made available online at https://tenpy.github.io/. The big advantages of the (online) HTML documentation are the many cross-links between different functions, and even a search function.
If you prefer yet another format, you can try to build the documentation yourself, as described in doc/contributing.rst, online at https://tenpy.github.io/contributing.html.
Help - I looked at the documentation, but I don’t understand how …?¶
We have set up a community forum at https://tenpy.johannes-hauschild.de/, where you can post questions and hopefully find answers. Once you have gained some experience with TeNPy, you might also be able to contribute to the community and answer some questions yourself ;-) We also use this forum for official announcements, for example when we release a new version.
Citing TeNPy¶
When you use TeNPy for work published in an academic journal, you can cite this paper to acknowledge the work put into the development of TeNPy.
(The license of TeNPy does not require you to, however.)
For example, you could add the sentence "Calculations were performed using the TeNPy Library (version X.X.X)\cite{tenpy}."
in the acknowledgements or in the main text.
The corresponding BibTeX entry would be the following (the \url{...} requires \usepackage{hyperref} in the LaTeX preamble):
@Article{tenpy,
title={{Efficient numerical simulations with Tensor Networks: Tensor Network Python (TeNPy)}},
author={Johannes Hauschild and Frank Pollmann},
journal={SciPost Phys. Lect. Notes},
pages={5},
year={2018},
publisher={SciPost},
doi={10.21468/SciPostPhysLectNotes.5},
url={https://scipost.org/10.21468/SciPostPhysLectNotes.5},
archiveprefix={arXiv},
eprint={1805.00055},
note={Code available from \url{https://github.com/tenpy/tenpy}},
}
I found a bug¶
You might want to check the GitHub issues to see whether someone else has already reported the same problem. To report a new bug, just open a new issue on GitHub. If you already know how to fix it, you can just create a pull request :) If you are not sure whether your problem is a bug or a feature, you can also ask for help in the TeNPy forum.
License¶
The code is licensed under GPL-v3.0, given in the file LICENSE of the repository and readable online at https://tenpy.github.io/license.html.
Contents¶
User Guide¶
First a short warning: the term ‘user guide’ might be a bit misleading: this part of the documentation simply covers everything except what is documented directly in the source; the latter can be found in the TeNPy Reference.
The first step to use tenpy is to download and install it; simply follow the Installation instructions.
After that, take a look at the Overview to get started.
Content¶
Installation instructions¶
Installation from packages¶
If you have the conda package manager from anaconda, you can simply download the environment.yml file and create a new environment for tenpy with all the required packages:
conda env create -f environment.yml
conda activate tenpy
This will also install pip. Alternatively, if you only have pip, install the required packages with:
pip install -r requirements.txt
Note
Make sure that the pip you call corresponds to the python version you want to use (e.g., by using python -m pip instead of a simple pip). Also, you might need to use the argument --user to install the packages to your home directory, if you don’t have sudo rights.
Warning
It might just be a temporary problem, but I found that the pip version of numpy is incompatible with the python distribution of anaconda. If you have installed the intelpython or anaconda distribution, use the conda package manager instead of pip for updating the packages whenever possible!
After that, you can install the latest *stable* TeNPy package (without downloading the source) from PyPi with:
pip install physics-tenpy # note the different package name - 'tenpy' was taken!
Note
When the installation fails, don’t give up yet. In the minimal version, tenpy requires only pure Python with somewhat up-to-date NumPy and SciPy. See the section Installation from source below.
To get the latest development version from the github master branch, you can use:
pip install git+https://github.com/tenpy/tenpy.git
Finally, if you downloaded the source and want to modify parts of it, you should install tenpy in development mode with the -e flag:
cd $HOME/TeNPy # after downloading the source
pip install --editable .
In all cases, you can uninstall tenpy with:
pip uninstall physics-tenpy # note the longer name!
Updating to a new version¶
Before you update, take a look at the CHANGELOG, which lists the changes, fixes, and new features. Most importantly, it has a section on backwards incompatible changes (i.e., changes which may break your existing code) along with information on how to fix them. Of course, we try to avoid introducing such incompatible changes, but sometimes there’s no way around them.
How to update depends a little bit on the way you installed TeNPy. Of course, you always have the option to just remove the tenpy files and download the newest version, following the instructions above.
Alternatively, if you used git clone ... to download the repository, you can update to the newest version using Git. First, briefly check with git status that you didn’t change anything you need to keep. Then, do a git pull to download (and possibly merge) the newest commits from the repository.
Note
If some Cython file (ending in .pyx) got renamed or removed (e.g., when updating from v0.3.0 to v0.4.0), you first need to remove the corresponding binary files. You can do so with the command bash ./cleanup.sh. Furthermore, whenever one of the Cython files (ending in .pyx) changed, you need to re-compile it. To do that, simply call the command bash ./compile.sh again. If you are unsure whether a Cython file changed, compiling again doesn’t hurt.
To summarize, you need to execute the following bash commands in the repository:
# 0) make a backup of the whole folder
git status # check the output whether you modified some files
git pull
bash ./cleanup.sh # (confirm with 'y')
bash ./compile.sh
Installation from source¶
This code works with a minimal requirement of pure Python>=3.5 and somewhat recent versions of NumPy and SciPy.
The following instructions are for (some kind of) Linux, and tested on Ubuntu. However, the code itself should work on other operating systems as well (in particular MacOS and Windows).
The official repository is at https://github.com/tenpy/tenpy.git. To get the latest version of the code, you can clone it with Git using the following commands:
git clone https://github.com/tenpy/tenpy.git $HOME/TeNPy
cd $HOME/TeNPy
Adjust $HOME/TeNPy to the path wherever you want to save the library.
Optionally, if you don’t want to contribute, you can check out the latest stable release:
git tag # this prints the available version tags
git checkout v0.3.0 # or whatever is the latest stable version
Note
In case you don’t have Git, you can download the repository as a ZIP archive. You can find it under releases, or the latest development version.
The python source is in the directory tenpy/ of the repository. This folder tenpy/ should be placed in (one of the folders of) the environment variable PYTHONPATH. On Linux, you can simply do this with the following line in the terminal:
export PYTHONPATH=$HOME/TeNPy
(If you already have a path in this variable, separate the paths with a colon :.) However, if you enter this in the terminal, it only lasts for the terminal session in which you entered it. To make it permanent, you can add the above line to the file $HOME/.bashrc. You might need to restart the terminal session or log in again to force a reload of the ~/.bashrc.
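For example, assuming you cloned to $HOME/TeNPy and use bash, a one-liner makes the setting permanent:

```shell
# Append the export line to ~/.bashrc so that new terminal sessions pick it up.
# Adjust the path if you saved the library somewhere else.
echo 'export PYTHONPATH=$HOME/TeNPy' >> ~/.bashrc
```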
Whenever the path is set, you should be able to use the library from within python:
>>> import tenpy
/home/username/TeNPy/tenpy/tools/optimization.py:276: UserWarning: Couldn't load compiled cython code. Code will run a bit slower.
warnings.warn("Couldn't load compiled cython code. Code will run a bit slower.")
>>> tenpy.show_config()
tenpy 0.4.0.dev0+7706003 (not compiled),
git revision 77060034a9fa64d2c7c16b4211e130cf5b6f5272 using
python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0]
numpy 1.16.3, scipy 1.2.1
tenpy.show_config() prints the version of the used TeNPy library as well as the versions of the used python, numpy and scipy libraries, which might be different on your computer. It is a good idea to save this data (given as a string in tenpy.version.version_summary) along with your simulation data, to allow reproducing your results exactly.
If you got a similar output as above: congratulations! You can now run the codes :)
If you want to run larger simulations, we recommend the use of Intel’s MKL. It ships with a LAPACK library and uses optimizations for Intel CPUs. Moreover, it parallelizes the LAPACK/BLAS routines, which makes execution much faster. As of now, the library itself supports no other way of parallelization.
If you don’t have a python version that is built against MKL, we recommend using the anaconda distribution, which ships with Intel MKL, or directly intelpython. Conda has the advantage that it allows using different environments for different projects. Both are available for Linux, Mac and Windows; note that you don’t even need administrator rights to install them on Linux. Simply follow the (straightforward) instructions on the web page for the installation. After a successful installation, if you run python interactively, the first output line should state the python version and contain Anaconda or Intel Corporation, respectively.
If you have a working conda package manager, you can install a numpy built against MKL with:
conda install mkl numpy scipy
If you prefer using a separate conda environment, you can also use the following code to install all the recommended packages:
conda env create -f environment.yml
conda activate tenpy
Note
MKL uses different threads to parallelize various BLAS and LAPACK routines. If you run the code on a cluster, make sure that you specify the number of used cores/threads correctly. By default, MKL uses all available CPUs, which might be in stark contrast to what you requested from the cluster. The easiest way to set the number of threads is the environment variable MKL_NUM_THREADS (or OMP_NUM_THREADS). For a dynamic change of the used threads, you might want to look at process.
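For example, in a cluster job script you could set the thread count before starting python; the value 4 below is just an illustration, use the number of cores you actually requested:

```shell
# Pin MKL (and other OpenMP-parallelized code) to 4 threads for this job.
export MKL_NUM_THREADS=4
export OMP_NUM_THREADS=4
```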
Some code uses matplotlib for plotting, e.g., to visualize a lattice. However, having matplotlib installed is not necessary for running any of the algorithms: tenpy does not import matplotlib by default. Further optional requirements are listed in the requirements*.txt files in the source repository.
At the heart of the TeNPy library is the module tenpy.linalg.np_conserved, which provides an Array class to exploit the conservation of abelian charges. The data model of python is not ideal for the required book-keeping, thus we have implemented the same np_conserved module in Cython. This allows compiling (and thereby optimizing) the corresponding python module, speeding up the execution of the code. While this might give a significant speed-up for code with small matrix dimensions, don’t expect the same speed-up in cases where most of the CPU time is already spent in matrix multiplications (i.e., if the bond dimension of your MPS is huge).
To compile the code, you first need to install Cython:
conda install cython # when using anaconda, or
pip install --upgrade Cython # when using pip
Moreover, you need a C++ compiler. For example, on Ubuntu you can install it with sudo apt-get install build-essential, or on Windows you can download MS Visual Studio 2015. If you use anaconda, you can also use conda install -c conda-forge cxx-compiler.
After that, go to the root directory of TeNPy ($HOME/TeNPy) and simply run
bash ./compile.sh
Note that it is not required to separately download (and install) Intel MKL: the compilation just obtains the includes from numpy. In other words, if your current numpy version uses MKL (as the one provided by anaconda), the compiled TeNPy code will also use it.
After a successful compilation, the warning that TeNPy was not compiled should go away:
>>> import tenpy
>>> tenpy.show_config()
tenpy 0.4.0.dev0+b60bad3 (compiled from git rev. b60bad3243b7e54f549f4f7c1f074dc55bb54ba3),
git revision b60bad3243b7e54f549f4f7c1f074dc55bb54ba3 using
python 3.7.3 (default, Mar 27 2019, 22:11:17)
[GCC 7.3.0]
numpy 1.16.3, scipy 1.2.1
Note
For further optimization options, look at tenpy.tools.optimization
.
As a first check of the installation you can try to run (one of) the python files in the examples/ subfolder; hopefully all of them should run without error.
You can also run the automated test suite with pytest (pip install pytest) to make sure everything works fine:
cd $HOME/TeNPy/tests
pytest
This should run some tests. In case of errors or failures it gives a detailed traceback and possibly some output of the test. At least the stable releases should run these tests without any failures.
If you can run the examples but not the tests, check whether pytest actually uses the correct python version.
The test suite is also run automatically with travis-ci; the results can be inspected online.
Overview¶
Repository¶
The root directory of this git repository contains the following folders:
- tenpy
  The actual source code of the library. Every subfolder contains an __init__.py file with a summary of what the modules in it are good for. (This file is also necessary to mark the folder as part of the python package. Consequently, other subfolders of the git repo should not include an __init__.py file.)
- toycodes
  Simple toy codes completely independent of the remaining library (i.e., of the codes in tenpy/). These codes should be quite readable and intend to give a flavor of how (some of) the algorithms work.
- examples
  Some example files demonstrating the usage and interface of the library.
- doc
  A folder containing the documentation: the user guide is contained in the *.rst files. The online documentation is autogenerated from these files and the docstrings of the library. This folder contains a makefile for building the documentation; run make help for the different options. The necessary files for the reference in doc/reference can be auto-generated/updated with make src2html.
- tests
  Contains files with test routines, to be used with pytest. If you are set up correctly and have pytest installed, you can run the test suite with pytest from within the tests/ folder.
- build
  This folder is not distributed with the code, but is generated by setup.py (or compile.sh, respectively). It contains compiled versions of the Cython files, and can be ignored (and even removed without losing functionality).
Code structure: getting started¶
There are several layers of abstraction in TeNPy. While there is a certain hierarchy of how the concepts build on each other, the user can decide to utilize only some of them. Maximal flexibility is provided by an object-oriented style based on classes, which can be inherited and adjusted to individual demands.
The following figure gives an overview of the most important modules, classes and functions in TeNPy.
Gray backgrounds indicate (sub)modules, yellow backgrounds indicate classes.
Red arrows indicate inheritance relations, dashed black arrows indicate a direct use.
(The individual models might be derived from the NearestNeighborModel depending on the geometry of the lattice.) There is a clear hierarchy from high-level algorithms in the tenpy.algorithms module down to basic operations from linear algebra in the tenpy.linalg module.

Note
See Introduction to np_conserved for more information on defining charges for arrays.
The most basic layer is the linalg module, which provides basic features of linear algebra. In particular, the np_conserved submodule implements an Array class which is used to represent the tensors. The basic interface of np_conserved is very similar to that of the NumPy and SciPy libraries. However, the Array class implements abelian charge conservation. If no charges are to be used, one can use ‘trivial’ arrays, as shown in the following example code.
"""Basic use of the `Array` class with trivial arrays."""
# Copyright 2019 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
M = npc.Array.from_ndarray_trivial([[0., 1.], [1., 0.]])
v = npc.Array.from_ndarray_trivial([2., 4. + 1.j])
v[0] = 3.  # set individual entries like in numpy
print("|v> =", v.to_ndarray())
# |v> = [ 3.+0.j 4.+1.j]
M_v = npc.tensordot(M, v, axes=[1, 0])
print("M|v> =", M_v.to_ndarray())
# M|v> = [ 4.+1.j 3.+0.j]
print("<v|M|v> =", npc.inner(v.conj(), M_v, axes='range'))
# <v|M|v> = (24+0j)
The number and types of symmetries are specified in a ChargeInfo class. An Array instance represents a tensor satisfying a charge rule specifying which blocks of it are nonzero. Internally, it stores only the non-zero blocks of the tensor, along with one LegCharge instance for each leg, which contains the charges and a sign qconj of that leg. We can combine multiple legs into a single larger LegPipe, which is derived from the LegCharge and stores all the information necessary to later split the pipe.
The following code explicitly defines the spin-1/2 \(S^+, S^-, S^z\) operators and uses them to generate and diagonalize the two-site Hamiltonian \(H = \vec{S} \cdot \vec{S}\). It prints the charge values (by default sorted ascending) and the eigenvalues of H.
"""Explicit definition of charges and spin-1/2 operators."""
# Copyright 2019 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
# consider spin-1/2 with Sz-conservation
chinfo = npc.ChargeInfo([1]) # just a U(1) charge
# charges for up, down state
p_leg = npc.LegCharge.from_qflat(chinfo, [[1], [-1]])
Sz = npc.Array.from_ndarray([[0.5, 0.], [0., -0.5]], [p_leg, p_leg.conj()])
Sp = npc.Array.from_ndarray([[0., 1.], [0., 0.]], [p_leg, p_leg.conj()])
Sm = npc.Array.from_ndarray([[0., 0.], [1., 0.]], [p_leg, p_leg.conj()])
Hxy = 0.5 * (npc.outer(Sp, Sm) + npc.outer(Sm, Sp))
Hz = npc.outer(Sz, Sz)
H = Hxy + Hz
# here, H has 4 legs
H.iset_leg_labels(["s1", "t1", "s2", "t2"])
H = H.combine_legs([["s1", "s2"], ["t1", "t2"]], qconj=[+1, -1])
# here, H has 2 legs
print(H.legs[0].to_qflat().flatten())
# prints [-2 0 0 2]
E, U = npc.eigh(H) # diagonalize blocks individually
print(E)
# [ 0.25 -0.75 0.25 0.25]
The next basic concept is that of a local Hilbert space, which is represented by a Site in TeNPy. This class not only labels the local states and defines the charges, but also provides onsite operators. For example, the SpinHalfSite provides the \(S^+, S^-, S^z\) operators under the names 'Sp', 'Sm', 'Sz', defined as Array instances similarly as in the code above.
Since the most common sites, for example the SpinSite (for general spin S=0.5, 1, 1.5, …), BosonSite and FermionSite, are predefined, a user of TeNPy usually does not need to define the local charges and operators explicitly. The total Hilbert space, i.e., the tensor product of the local Hilbert spaces, is then just given by a list of Site instances. If desired, different kinds of Site can be combined in that list. This list is then given to classes representing tensor networks like the MPS and MPO. The tensor network classes also use Array instances for the tensors of the represented network.
The following example illustrates the initialization of a spin-1/2 site, an MPS representing the Neel state, and an MPO representing the Heisenberg model by explicitly defining the W tensor.
"""Initialization of sites, MPS and MPO."""
# Copyright 2019 TeNPy Developers, GNU GPLv3
from tenpy.networks.site import SpinHalfSite
from tenpy.networks.mps import MPS
from tenpy.networks.mpo import MPO
spin = SpinHalfSite(conserve="Sz")
print(spin.Sz.to_ndarray())
# [[ 0.5 0. ]
# [ 0. -0.5]]
N = 6 # number of sites
sites = [spin] * N # repeat entry of list N times
pstate = ["up", "down"] * (N // 2) # Neel state
psi = MPS.from_product_state(sites, pstate, bc="finite")
print("<Sz> =", psi.expectation_value("Sz"))
# <Sz> = [ 0.5 -0.5 0.5 -0.5 0.5 -0.5]
print("<Sp_i Sm_j> =", psi.correlation_function("Sp", "Sm"), sep="\n")
# <Sp_i Sm_j> =
# [[1. 0. 0. 0. 0. 0.]
# [0. 0. 0. 0. 0. 0.]
# [0. 0. 1. 0. 0. 0.]
# [0. 0. 0. 0. 0. 0.]
# [0. 0. 0. 0. 1. 0.]
# [0. 0. 0. 0. 0. 0.]]
# define an MPO
Id, Sp, Sm, Sz = spin.Id, spin.Sp, spin.Sm, spin.Sz
J, Delta, hz = 1., 1., 0.2
W_bulk = [[Id, Sp, Sm, Sz, -hz * Sz], [None, None, None, None, 0.5 * J * Sm],
[None, None, None, None, 0.5 * J * Sp], [None, None, None, None, J * Delta * Sz],
[None, None, None, None, Id]]
W_first = [W_bulk[0]] # first row
W_last = [[row[-1]] for row in W_bulk] # last column
Ws = [W_first] + [W_bulk] * (N - 2) + [W_last]
H = MPO.from_grids([spin] * N, Ws, bc='finite', IdL=0, IdR=-1)
print("<psi|H|psi> =", H.expectation_value(psi))
# <psi|H|psi> = -1.25
Note
See Introduction to models for more information on sites and how to define and extend models on your own.
Technically, the explicit definition of an MPO is already enough to call an algorithm like DMRG in dmrg. However, writing down the W tensors is cumbersome, especially for more complicated models. Hence, TeNPy provides another layer of abstraction for the definition of models, which we discuss in the following.
Different kinds of algorithms require different representations of the Hamiltonian.
Therefore, the library offers to specify the model abstractly by the individual onsite terms and coupling terms of the Hamiltonian.
The following example illustrates this, again for the Heisenberg model.
"""Definition of a model: the XXZ chain."""
# Copyright 2019 TeNPy Developers, GNU GPLv3
from tenpy.networks.site import SpinSite
from tenpy.models.lattice import Chain
from tenpy.models.model import CouplingModel, NearestNeighborModel, MPOModel
class XXZChain(CouplingModel, NearestNeighborModel, MPOModel):
    def __init__(self, L=2, S=0.5, J=1., Delta=1., hz=0.):
        spin = SpinSite(S=S, conserve="Sz")
        # the lattice defines the geometry
        lattice = Chain(L, spin, bc="open", bc_MPS="finite")
        CouplingModel.__init__(self, lattice)
        # add terms of the Hamiltonian
        self.add_coupling(J * 0.5, 0, "Sp", 0, "Sm", 1)  # Sp_i Sm_{i+1}
        self.add_coupling(J * 0.5, 0, "Sp", 0, "Sm", -1)  # Sp_i Sm_{i-1}
        self.add_coupling(J * Delta, 0, "Sz", 0, "Sz", 1)
        # (for site-dependent prefactors, the strength can be an array)
        self.add_onsite(-hz, 0, "Sz")
        # finish initialization:
        # generate MPO for DMRG
        MPOModel.__init__(self, lattice, self.calc_H_MPO())
        # generate H_bond for TEBD
        NearestNeighborModel.__init__(self, lattice, self.calc_H_bond())
While this generates the same MPO as in the previous code, this example can easily be adjusted and generalized, for example to a higher-dimensional lattice by just specifying a different lattice. Internally, the MPO is generated using a finite state machine picture. This allows not only translating more complicated Hamiltonians into their corresponding MPOs, but also automating the mapping from a higher-dimensional lattice to the 1D chain along which the MPS winds. Note that this mapping introduces longer-range couplings, so the model can no longer be defined to be a NearestNeighborModel suited for TEBD if a lattice other than the Chain is to be used.
Of course, many commonly studied models are also predefined.
For example, the following code initializes the Heisenberg model on a kagome lattice;
the spin liquid nature of the ground state of this model is highly debated in the current literature.
"""Initialization of the Heisenberg model on a kagome lattice."""
# Copyright 2019 TeNPy Developers, GNU GPLv3
from tenpy.models.spins import SpinModel
model_params = {
"S": 0.5, # Spin 1/2
"lattice": "Kagome",
"bc_MPS": "infinite",
"bc_y": "cylinder",
"Ly": 2, # defines cylinder circumference
"conserve": "Sz", # use Sz conservation
"Jx": 1.,
"Jy": 1.,
"Jz": 1. # Heisenberg coupling
}
model = SpinModel(model_params)
The highest level in TeNPy is given by algorithms like DMRG and TEBD.
Using the previous concepts, setting up a simulation running those algorithms is a matter of just a few lines of code.
The following example runs a DMRG simulation, see dmrg
, exemplary for the transverse field Ising model at the critical point.
"""Call of (finite) DMRG."""
# Copyright 2019 TeNPy Developers, GNU GPLv3
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import dmrg
N = 16 # number of sites
model = TFIChain({"L": N, "J": 1., "g": 1., "bc_MPS": "finite"})
sites = model.lat.mps_sites()
psi = MPS.from_product_state(sites, ['up'] * N, "finite")
dmrg_params = {"trunc_params": {"chi_max": 100, "svd_min": 1.e-10}, "mixer": True}
info = dmrg.run(psi, model, dmrg_params)
print("E =", info['E'])
# E = -20.01638790048513
print("max. bond dimension =", max(psi.chi))
# max. bond dimension = 27
The switch from DMRG to iDMRG in TeNPy is simply accomplished by a change of the parameter "bc_MPS" from "finite" to "infinite", both for the model and the state. The returned E is then the energy density per site.
Due to the translation invariance, one can also evaluate the correlation length, here slightly away from the critical point.
"""Call of infinite DMRG."""
# Copyright 2019 TeNPy Developers, GNU GPLv3
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import dmrg
N = 2 # number of sites in unit cell
model = TFIChain({"L": N, "J": 1., "g": 1.1, "bc_MPS": "infinite"})
sites = model.lat.mps_sites()
psi = MPS.from_product_state(sites, ['up'] * N, "infinite")
dmrg_params = {"trunc_params": {"chi_max": 100, "svd_min": 1.e-10}, "mixer": True}
info = dmrg.run(psi, model, dmrg_params)
print("E =", info['E'])
# E = -1.342864022725017
print("max. bond dimension =", max(psi.chi))
# max. bond dimension = 56
print("corr. length =", psi.correlation_length())
# corr. length = 4.915809146764157
Running time evolution with TEBD requires an additional loop, during which the desired observables have to be measured. The following code shows this directly for the infinite version of TEBD.
"""Call of (infinite) TEBD."""
# Copyright 2019 TeNPy Developers, GNU GPLv3
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import tebd
M = TFIChain({"L": 2, "J": 1., "g": 1.5, "bc_MPS": "infinite"})
psi = MPS.from_product_state(M.lat.mps_sites(), [0] * 2, "infinite")
tebd_params = {
"order": 2,
"delta_tau_list": [0.1, 0.001, 1.e-5],
"max_error_E": 1.e-6,
"trunc_params": {
"chi_max": 30,
"svd_min": 1.e-10
}
}
eng = tebd.Engine(psi, M, tebd_params)
eng.run_GS() # imaginary time evolution with TEBD
print("E =", sum(psi.expectation_value(M.H_bond)) / psi.L)
print("final bond dimensions: ", psi.chi)
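The example above uses imaginary time evolution to find the ground state; for real-time evolution, the measurement loop mentioned above might look like the following sketch (the parameters dt and N_steps and the attribute evolved_time follow the TeNPy 0.4 interface and may differ in other versions):

```python
"""Sketch of a real-time TEBD loop with measurements (assumes v0.4 interface)."""
from tenpy.networks.mps import MPS
from tenpy.models.tf_ising import TFIChain
from tenpy.algorithms import tebd

M = TFIChain({"L": 2, "J": 1., "g": 1.5, "bc_MPS": "infinite"})
psi = MPS.from_product_state(M.lat.mps_sites(), [0] * 2, "infinite")
tebd_params = {"order": 2, "dt": 0.05, "N_steps": 10,
               "trunc_params": {"chi_max": 30, "svd_min": 1.e-10}}
eng = tebd.Engine(psi, M, tebd_params)
times, Sz_vs_t = [], []
for i in range(20):
    eng.run()  # evolve in real time by N_steps * dt
    times.append(eng.evolved_time)  # measure after each chunk of steps
    Sz_vs_t.append(psi.expectation_value("Sz"))
```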
Literature¶
This is a (by far non-exhaustive) list of some references for the various ideas behind the code, sorted by year and author.
They can be cited from the python doc-strings using the format [Author####]_.
General reading¶
- Schollwoeck2011
“The density-matrix renormalization group in the age of matrix product states” U. Schollwoeck, Annals of Physics 326, 96 (2011), arXiv:1008.3477 doi:10.1016/j.aop.2010.09.012
Extensive review, classic introduction.
Further reviews are:
- Verstraete2009
“Matrix Product States, Projected Entangled Pair States, and variational renormalization group methods for quantum spin systems” F. Verstraete and V. Murg and J.I. Cirac, Advances in Physics 57 2, 143-224 (2009) arXiv:0907.2796 doi:10.1080/14789940801912366
- Cirac2009
“Renormalization and tensor product states in spin chains and lattices” J. I. Cirac and F. Verstraete, Journal of Physics A: Mathematical and Theoretical, 42, 50 (2009) arXiv:0910.1130 doi:10.1088/1751-8113/42/50/504004
- CincioVidal2013
“Characterizing Topological Order by Studying the Ground States on an Infinite Cylinder” L. Cincio, G. Vidal, Phys. Rev. Lett. 110, 067208 (2013), arXiv:1208.2623 doi:10.1103/PhysRevLett.110.067208
- Eisert2013
“Entanglement and tensor network states” J. Eisert, Modeling and Simulation 3, 520 (2013) arXiv:1308.3318
- Orus2014
“A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States” R. Orus, Annals of Physics 349, 117-158 (2014) arXiv:1306.2164 doi:10.1016/j.aop.2014.06.013
- Hubig2019
“Time-evolution methods for matrix-product states” S. Paeckel, T. Köhler, A. Swoboda, S. R. Manmana, U. Schollwöck, C. Hubig, arXiv:1901.05824
Algorithm developments¶
- White1992
“Density matrix formulation for quantum renormalization groups” S. White, Phys. Rev. Lett. 69, 2863 (1992) doi:10.1103/PhysRevLett.69.2863, S. White, Phys. Rev. B 48, 10345 (1993) doi:10.1103/PhysRevB.48.10345
- Vidal2004
“Efficient Simulation of One-Dimensional Quantum Many-Body Systems” G. Vidal, Phys. Rev. Lett. 93, 040502 (2004), arXiv:quant-ph/0310089 doi:10.1103/PhysRevLett.93.040502
- Schollwoeck2005
“The density-matrix renormalization group” U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005), arXiv:cond-mat/0409292 doi:10.1103/RevModPhys.77.259
- White2005
“Density matrix renormalization group algorithms with a single center site” S. White, Phys. Rev. B 72, 180403(R) (2005), arXiv:cond-mat/0508709 doi:10.1103/PhysRevB.72.180403
- McCulloch2008
“Infinite size density matrix renormalization group, revisited” I. P. McCulloch, arXiv:0804.2509
- Singh2009
“Tensor network decompositions in the presence of a global symmetry” S. Singh, R. Pfeifer, G. Vidal, Phys. Rev. A 82, 050301(R), arXiv:0907.2994 doi:10.1103/PhysRevA.82.050301
- Singh2010
“Tensor network states and algorithms in the presence of a global U(1) symmetry” S. Singh, R. Pfeifer, G. Vidal, Phys. Rev. B 83, 115125, arXiv:1008.4774 doi:10.1103/PhysRevB.83.115125
- Haegeman2011
“Time-Dependent Variational Principle for Quantum Lattices” J. Haegeman, J. I. Cirac, T. J. Osborne, I. Pizorn, H. Verschelde, F. Verstraete, Phys. Rev. Lett. 107, 070601 (2011), arXiv:1103.0936 doi:10.1103/PhysRevLett.107.070601
- Karrasch2013
“Reducing the numerical effort of finite-temperature density matrix renormalization group calculations” C. Karrasch, J. H. Bardarson, J. E. Moore, New J. Phys. 15, 083031 (2013), arXiv:1303.3942 doi:10.1088/1367-2630/15/8/083031
- Hubig2015
“Strictly single-site DMRG algorithm with subspace expansion” C. Hubig, I. P. McCulloch, U. Schollwoeck, F. A. Wolf, Phys. Rev. B 91, 155115 (2015), arXiv:1501.05504 doi:10.1103/PhysRevB.91.155115
- Haegeman2016
“Unifying time evolution and optimization with matrix product states” J. Haegeman, C. Lubich, I. Oseledets, B. Vandereycken, F. Verstraete, Phys. Rev. B 94, 165116 (2016), arXiv:1408.5056 doi:10.1103/PhysRevB.94.165116
- Hauschild2018
“Finding purifications with minimal entanglement” J. Hauschild, E. Leviatan, J. H. Bardarson, E. Altman, M. P. Zaletel, F. Pollmann, Phys. Rev. B 98, 235163 (2018), arXiv:1711.01288 doi:10.1103/PhysRevB.98.235163
Two-dimensional systems¶
- Stoudenmire2011
“Studying Two Dimensional Systems With the Density Matrix Renormalization Group” E.M. Stoudenmire, Steven R. White, Ann. Rev. of Cond. Mat. Physics, 3: 111-128 (2012), arXiv:1105.1374 doi:10.1146/annurev-conmatphys-020911-125018
- Neupert2011
“Fractional quantum Hall states at zero magnetic field” Titus Neupert, Luiz Santos, Claudio Chamon, and Christopher Mudry, Phys. Rev. Lett. 106, 236804 (2011), arXiv:1012.4723 doi:10.1103/PhysRevLett.106.236804
- Yang2012
“Topological flat band models with arbitrary Chern numbers” Shuo Yang, Zheng-Cheng Gu, Kai Sun, and S. Das Sarma, Phys. Rev. B 86, 241112(R) (2012), arXiv:1205.5792, doi:10.1103/PhysRevB.86.241112
- Grushin2015
“Characterization and stability of a fermionic ν=1/3 fractional Chern insulator” Adolfo G. Grushin, Johannes Motruk, Michael P. Zaletel, and Frank Pollmann, Phys. Rev. B 91, 035136 (2015), arXiv:1407.6985 doi:10.1103/PhysRevB.91.035136
Introduction to np_conserved¶
The basic idea is quickly summarized: By inspecting the Hamiltonian, you can identify symmetries, which correspond to conserved quantities, called charges. These charges divide the tensors into different sectors. This can be used to infer, for example, a block-diagonal structure of certain matrices, which in turn speeds up SVD or diagonalization a lot. Even for more general (non-square-matrix) tensors, charge conservation restricts which blocks of a tensor can be non-zero. Only those blocks need to be saved, and operations like tensordot can be sped up.
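The speedup from the block-diagonal structure can be illustrated with plain numpy (this is only a toy sketch, not the TeNPy API): the SVD of a block-diagonal matrix is obtained from the SVDs of the individual blocks, which is much cheaper than decomposing the full matrix.

```python
import numpy as np

# Toy illustration: a matrix that is block-diagonal due to charge
# conservation can be decomposed block by block.
rng = np.random.default_rng(0)
b1 = rng.normal(size=(2, 2))   # block of a hypothetical charge sector q=0
b2 = rng.normal(size=(3, 3))   # block of a hypothetical charge sector q=1
M = np.zeros((5, 5))
M[:2, :2] = b1
M[2:, 2:] = b2

# SVD of the full matrix vs. SVDs of the individual blocks:
s_full = np.sort(np.linalg.svd(M, compute_uv=False))
s_blocks = np.sort(np.concatenate([np.linalg.svd(b, compute_uv=False)
                                   for b in (b1, b2)]))
assert np.allclose(s_full, s_blocks)  # same singular values, much less work
```

The cost of a dense SVD scales roughly with the cube of the matrix dimension, so decomposing many small blocks instead of one large matrix is the source of the speedup.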
This introduction covers our implementation of charges; explaining mathematical details of the underlying symmetry is beyond its scope. We refer you to Ref. [Singh2009] for the general idea, which is more nicely explained for the example of a \(U(1)\) symmetry in [Singh2010].
Notations¶
Let's fix the notation used for this introduction and in the doc-strings of np_conserved.
An Array is a multi-dimensional array representing a tensor with entries \(T_{a_0, a_1, \ldots, a_{rank-1}}\).
Each leg \(a_i\) corresponds to a vector space of dimension n_i.
An index of a leg is a particular value \(a_i \in \lbrace 0, ... ,n_i-1\rbrace\).
The rank is the number of legs, the shape is \((n_0, ..., n_{rank-1})\).
We restrict ourselves to abelian charges with entries in \(\mathbb{Z}\) or in \(\mathbb{Z}_m\).
The nature of a charge is specified by \(m\); we set \(m=1\) for charges corresponding to \(\mathbb{Z}\).
The number of charges is referred to as qnumber as a short hand, and the collection of \(m\) for each charge is called qmod.
The qnumber, qmod and possibly descriptive names of the charges are saved in an instance of ChargeInfo.
To each index of each leg, a value of the charge(s) is associated.
A charge block is a contiguous slice corresponding to the same charge(s) of the leg.
A qindex is an index in the list of charge blocks for a certain leg.
A charge sector for given charge(s) is the set of all qindices with that charge(s).
A leg is blocked if all charge sectors map one-to-one to qindices.
Finally, a leg is sorted if the charges are sorted lexicographically.
Note that a leg which is both sorted and bunched (i.e., no two adjacent blocks carry the same charges, see below) is always blocked.
We can also speak of the complete array being blocked by charges or legcharge-sorted, which means that all of its legs are blocked or sorted, respectively.
The charge data for a single leg is collected in the class LegCharge.
A LegCharge also has a flag qconj, which tells whether the charges point inward (+1) or outward (-1). What that means is explained later in Which entries of the npc Array can be non-zero?.
For completeness, let us also summarize the internal structure of an Array here:
The array saves only non-zero blocks, collected as a list of np.array in self._data.
The qindices necessary to map these blocks to the original leg indices are collected in self._qdata.
An array is said to be qdata-sorted if its self._qdata is lexicographically sorted.
More details on this follow later.
However, note that you usually shouldn’t access _qdata and _data directly - this
is only necessary from within tensordot, svd, etc.
Also, an array has a total charge, defining which entries can be non-zero - details in Which entries of the npc Array can be non-zero?.
Finally, a leg pipe (implemented in LegPipe) is used to formally combine multiple legs into one leg. Again, more details follow later.
Physical Example¶
For concreteness, you can think of the Hamiltonian \(H = \sum_{\langle i,j\rangle} \left( -t (c^\dagger_i c_j + H.c.) + U n_i n_j \right)\) with \(n_i = c^\dagger_i c_i\). This Hamiltonian has the global \(U(1)\) gauge symmetry \(c_i \rightarrow c_i e^{i\phi}\). The corresponding charge is the total number of particles \(N = \sum_i n_i\). You would then introduce one charge with \(m=1\).
Note that the total charge is a sum of local terms, living on single sites. Thus, you can infer the charge of a single physical site: it’s just the value \(q_i = n_i \in \mathbb{N}\) for each of the states.
Note that you can only assign integer charges. Consider for example the spin 1/2 Heisenberg chain. Here, you can naturally identify the magnetization \(S^z = \sum_i S^z_i\) as the conserved quantity, with values \(S^z_i = \pm \frac{1}{2}\). Obviously, if \(S^z\) is conserved, then so is \(2 S^z\), so you can use the charges \(q_i = 2 S^z_i \in \lbrace-1, +1 \rbrace\) for the down and up states, respectively. Alternatively, you can also use a shift and define \(q_i = S^z_i + \frac{1}{2} \in \lbrace 0, 1 \rbrace\).
As another example, consider BCS-like terms \(\sum_k (c^\dagger_k c^\dagger_{-k} + H.c.)\). These terms break the conservation of the total particle number, but they preserve the total parity, i.e., \(N \mod 2\) is conserved. Thus, you would introduce a charge with \(m = 2\) in this case.
In the above examples, we had only a single charge conserved at a time, but you might be lucky and have multiple conserved quantities, e.g. if you have two chains coupled only by interactions. TeNPy is designed to handle the general case of multiple charges. When giving examples, we will restrict to one charge, but everything generalizes to multiple charges.
The different formats for LegCharge¶
As mentioned above, we assign charges to each index of each leg of a tensor. This can be done in three formats: qflat, qind and qdict. Let me explain them with examples, for simplicity considering only a single charge (the innermost array has one entry for each charge).
- qflat form: simply a list of charges for each index.
An example:
qflat = [[-2], [-1], [-1], [0], [0], [0], [0], [3], [3]]
This tells you that the leg has size 9, and the charges for the indices 0, 1, 2, ..., 8 are [-2], [-1], [-1], ..., [3]. You can identify four charge blocks slice(0, 1), slice(1, 3), slice(3, 7), slice(7, 9) in this example, which have the charges [-2], [-1], [0], [3]. In other words, the indices 1, 2 (which are in slice(1, 3)) have the same charge value [-1]. A qindex would just enumerate these blocks as 0, 1, 2, 3.
- qind form: a 1D array slices and a 2D array charges.
This is a more compact version of the qflat form: the slices give a partition of the indices and the charges give the charge values. The same example as above would simply be:
slices = [0, 1, 3, 7, 9]
charges = [[-2], [-1], [0], [3]]
Note that slices includes 0 as the first entry and the number of indices (here 9) as the last entry. Thus it has length block_number + 1, where block_number is the number of charge blocks in the leg, i.e., a qindex runs from 0 to block_number - 1. On the other hand, the 2D array charges has shape (block_number, qnumber), where qnumber is the number of charges.
In that way, the qind form maps a qindex, say qi, to the indices slice(slices[qi], slices[qi+1]) and the charge(s) charges[qi].
- qdict form: a dictionary in the other direction than qind, mapping charge tuples to slices.
Again for the same example:
{(-2,): slice(0, 1), (-1,): slice(1, 3), (0,): slice(3, 7), (3,): slice(7, 9)}
Since the keys of a dictionary are unique, this form is only possible if the leg is completely blocked.
The LegCharge saves the charge data of a leg internally in qind form, directly in the attributes slices and charges.
However, it also provides convenient functions for conversion from and to the qflat and qdict forms.
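A conversion between these forms can be sketched in a few lines of plain Python (this is an illustration of the bookkeeping, not the actual TeNPy conversion functions); grouping equal consecutive charges of a qflat list yields the (maximally bunched) qind form:

```python
# Minimal sketch: convert the qflat form into the qind form (slices + charges)
# by grouping equal consecutive charges into blocks.
def qflat_to_qind(qflat):
    """Group equal consecutive charges of `qflat` into charge blocks."""
    slices = [0]
    charges = []
    for i, q in enumerate(qflat):
        if not charges or q != charges[-1]:
            if charges:              # close the previous block
                slices.append(i)
            charges.append(q)
    slices.append(len(qflat))        # the last entry is the leg size
    return slices, charges

qflat = [[-2], [-1], [-1], [0], [0], [0], [0], [3], [3]]
slices, charges = qflat_to_qind(qflat)
assert slices == [0, 1, 3, 7, 9]
assert charges == [[-2], [-1], [0], [3]]
```

Note that this sketch always produces the bunched form; a valid qind form may also split a run of equal charges into several adjacent blocks, as discussed below.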
The above example was nice since all charges were sorted and the charge blocks were ‘as large as possible’. This is, however, not required.
The following example is also a valid qind form:
slices = [0, 1, 3, 5, 7, 9]
charges = [[-2], [-1], [0], [0], [3]]
This leads to the same qflat form as the above example, thus representing the same charges on the leg indices. However, regarding our Arrays, this is quite different, since it divides the leg into 5 (instead of previously 4) charge blocks. We say the latter example is not bunched, while the former one is bunched.
To make the different notions of sorted and bunched clearer, consider the following (valid) examples:
charges | bunched | sorted | blocked
---|---|---|---
[[0], [1], [2]] | yes | yes | yes
[[0], [1], [1]] | no | yes | no
[[1], [0], [2]] | yes | no | yes
[[1], [0], [1]] | yes | no | no
If a leg is bunched and sorted, it is automatically blocked (but not vice versa). See also below for further comments on that.
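The three properties can be phrased as small predicates on the per-block charges. The following is a plain-Python sketch following the definitions above (not the TeNPy implementation, which provides corresponding methods on LegCharge):

```python
# Sketch of the three properties for a list of per-block charges.
def is_bunched(charges):
    # no two adjacent blocks carry the same charge(s)
    return all(charges[i] != charges[i + 1] for i in range(len(charges) - 1))

def is_sorted(charges):
    # the charges of the blocks are lexicographically sorted
    return charges == sorted(charges)

def is_blocked(charges):
    # every charge value appears in exactly one block
    return len(set(map(tuple, charges))) == len(charges)

assert not is_bunched([[0], [1], [1]])   # adjacent blocks with equal charge
assert is_sorted([[0], [1], [1]])
assert not is_blocked([[0], [1], [1]])   # charge 1 appears in two blocks
# bunched and sorted together imply blocked:
assert is_blocked([[-2], [-1], [0], [3]])
```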
Which entries of the npc Array can be non-zero?¶
The reason for the speedup with np_conserved lies in the fact that it saves only the blocks ‘compatible’ with the charges. But how is this ‘compatible’ defined?
Assume you have a tensor, call it \(T\), and the LegCharge for all of its legs, say \(a, b, c, ...\).
Remember that the LegCharge associates to each index of the leg a charge value (for each of the charges, if qnumber > 1).
Let a.to_qflat()[ia] denote the charge(s) of index ia for leg a, and similarly for the other legs.
In addition, the LegCharge has a flag qconj. This flag is only a sign, saved as +1 or -1, specifying whether the charges point ‘inward’ (+1, default) or ‘outward’ (-1) of the tensor.
Then, the total charge of an entry T[ia, ib, ic, ...] of the tensor is defined as:
qtotal[ia, ib, ic, ...] = a.to_qflat()[ia] * a.qconj + b.to_qflat()[ib] * b.qconj + c.to_qflat()[ic] * c.qconj + ... modulo qmod
The rule which entries of an Array can be non-zero (i.e., are ‘compatible’ with the charges) is then very simple:
Rule for non-zero entries
An entry ia, ib, ic, ... of an Array can only be non-zero if qtotal[ia, ib, ic, ...] matches the unique qtotal attribute of the class.
In other words, there is a single total charge stored in the .qtotal attribute of an Array.
All indices ia, ib, ic, ... for which the above defined qtotal[ia, ib, ic, ...] matches this total charge are said to be compatible with the charges and can be non-zero.
All other indices are incompatible with the charges and must be zero.
The case of multiple charges, qnumber > 1, is a straightforward generalization: an entry can only be non-zero if it is compatible with each of the defined charges.
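The rule can be spelled out in a few lines of plain Python (a sketch only; for brevity, a single charge is represented by a plain int per index rather than a length-1 list):

```python
# Sketch of the rule for non-zero entries: the total charge of an entry is
# the qconj-weighted sum of the leg charges (modulo qmod, if m > 1).
def entry_qtotal(legs_qflat, qconjs, indices, qmod=None):
    q = sum(qflat[i] * qconj
            for qflat, qconj, i in zip(legs_qflat, qconjs, indices))
    if qmod is not None and qmod > 1:
        q %= qmod
    return q

# Two legs with charges [1, -1]; leg a points inward (+1), leg b outward (-1).
a_qflat, b_qflat = [1, -1], [1, -1]
qconjs = (+1, -1)
# For a tensor with qtotal == 0, the entry (0, 0) is compatible ...
assert entry_qtotal((a_qflat, b_qflat), qconjs, (0, 0)) == 0
# ... while the entry (0, 1) has total charge 2 and must be zero.
assert entry_qtotal((a_qflat, b_qflat), qconjs, (0, 1)) == 2
```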
The pesky qconj - contraction as an example¶
Why did we introduce the qconj flag? Remember, it’s just a sign telling whether the charge points inward or outward.
So what’s the reasoning?
The short answer is that LegCharges actually live on bonds (i.e., legs which are to be contracted) rather than on individual tensors. Thus, it is convenient to share the LegCharges between different legs and even tensors, and just adjust the sign of the charges with qconj.
As an example, consider the contraction of two tensors, \(C_{ia,ic} = \sum_{ib} A_{ia,ib} B_{ib,ic}\).
For simplicity, say that the total charge of all three tensors is zero.
What are the implications of the above rule for non-zero entries?
Or rather, how can we ensure that C complies with the above rule?
An entry C[ia, ic] will only be non-zero if there is an ib such that both A[ia, ib] and B[ib, ic] are non-zero, i.e., both of the following equations are fulfilled:
A.qtotal == A.legs[0].to_qflat()[ia] * A.legs[0].qconj + A.legs[1].to_qflat()[ib] * A.legs[1].qconj modulo qmod
B.qtotal == B.legs[0].to_qflat()[ib] * B.legs[0].qconj + B.legs[1].to_qflat()[ic] * B.legs[1].qconj modulo qmod
(A.legs[0] is the LegCharge saving the charges of the first leg (with index ia) of A.)
For the uncontracted legs, we just keep the charges as they are:
C.legs = [A.legs[0], B.legs[1]]
It is then straightforward to check that the rule is fulfilled for \(C\) if the following condition is met:
A.qtotal + B.qtotal - C.qtotal == A.legs[1].to_qflat()[ib] * A.legs[1].qconj + B.legs[0].to_qflat()[ib] * B.legs[0].qconj modulo qmod
The easiest way to meet this condition is (1) to require that A.legs[1] and B.legs[0] share the same charges to_qflat(), but have opposite qconj, and (2) to define C.qtotal = A.qtotal + B.qtotal.
This justifies the introduction of qconj: when you define the tensors, you have to define the LegCharge for the contracted leg b only once, say as A.legs[1].
For B.legs[0], you simply use A.legs[1].conj(), which creates a copy of the LegCharge with shared slices and charges, but opposite qconj.
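This argument can be checked numerically with plain numpy (again only a sketch, not the TeNPy API): if A and B have non-zero entries only where allowed by the charges, and the contracted leg shares its charges with opposite qconj, then C = A @ B automatically satisfies the rule with C.qtotal = A.qtotal + B.qtotal (= 0 here).

```python
import numpy as np

# Hypothetical single charges on the legs (plain ints for brevity):
qa = np.array([0, 0, 1, 2])      # leg a of A, qconj = +1
qb = np.array([0, 1, 2])         # contracted leg b, shared with opposite qconj
qc = np.array([1, 1, 2, 0])      # leg c of B, qconj = -1 (outgoing)

rng = np.random.default_rng(1)
# Zero out all charge-incompatible entries (qtotal == 0 for A and B):
A = rng.normal(size=(4, 3)) * (qa[:, None] == qb[None, :])
B = rng.normal(size=(3, 4)) * (qb[:, None] == qc[None, :])
C = A @ B

# Every non-zero entry of C is charge-compatible: qa[ia] == qc[ic].
ia, ic = np.nonzero(C)
assert np.all(qa[ia] == qc[ic])
```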
As a more impressive example, all ‘physical’ legs of an MPS can usually share the same
LegCharge
(up to different qconj
if the local Hilbert space is the same).
This leads to the following convention:
Convention
When an npc algorithm makes tensors which share a bond (either with the input tensors, as for tensordot, or amongst the output tensors, as for SVD), the algorithm is free, but not required, to use the same LegCharge for the tensors sharing the bond, without making a copy.
Thus, if you want to modify a LegCharge, you must make a copy first (e.g., by using the methods of LegCharge for what you want to achieve).
Assigning charges to non-physical legs¶
From the above physical examples, it should be clear how you assign charges to physical legs. But what about other legs, e.g., the virtual bonds of an MPS (or an MPO)?
The charges of these bonds must be derived by using the ‘rule for non-zero entries’, as far as they are not arbitrary. As a concrete example, consider an MPS on just two spin-1/2 sites:
| _____ _____
| x->- | A | ->-y->- | B | ->-z
| ----- -----
| ^ ^
| |p |p
The two legs p are the physical legs and share the same charge, as they both describe the same local Hilbert space.
For better distinction, let me label their indices by \(\uparrow=0\) and \(\downarrow=1\).
As noted above, we can associate the charges 1 (\(p=\uparrow\)) and -1 (\(p=\downarrow\)), respectively, so we define:
chinfo = npc.ChargeInfo([1], ['2*Sz'])
p = npc.LegCharge.from_qflat(chinfo, [1, -1], qconj=+1)
For the qconj signs, we stick to the convention used in our MPS code and indicated by the arrows in the above ‘picture’: physical legs are incoming (qconj=+1), and the virtual bonds point from left to right.
This is achieved by using [p, x, y.conj()] as legs for A, and [p, y, z.conj()] for B, with the default qconj=+1 for all of p, x, y, z: y.conj() has the same charges as y, but opposite qconj=-1.
The legs x and z of an L=2 MPS are ‘dummy’ legs with just one index 0.
The charge on one of them, as well as the total charge of both A and B, is arbitrary (i.e., a gauge freedom), so we make a simple choice: total charge 0 on both arrays, as well as for \(x=0\): x = npc.LegCharge.from_qflat(chinfo, [0], qconj=+1).
The charges on the bonds y and z then depend on the state the MPS represents. Here, we consider a singlet \(\psi = (|\uparrow \downarrow\rangle - |\downarrow \uparrow\rangle)/\sqrt{2}\) as a simple example. A possible MPS representation is given by:
A[up, :, :] = [[1/2.**0.5, 0]] B[up, :, :] = [[0], [-1]]
A[down, :, :] = [[0, 1/2.**0.5]] B[down, :, :] = [[1], [0]]
There are two non-zero entries in A
, for the indices \((a, x, y) = (\uparrow, 0, 0)\) and \((\downarrow, 0, 1)\).
For \((a, x, y) = (\uparrow, 0, 0)\), we want:
A.qtotal = 0 = p.to_qflat()[up] * p.qconj + x.to_qflat()[0] * x.qconj + y.conj().to_qflat()[0] * y.conj().qconj
= 1 * (+1) + 0 * (+1) + y.conj().to_qflat()[0] * (-1)
This fixes the charge of y=0 to 1.
A similar calculation for \((a, x, y) = (\downarrow, 0, 1)\) yields the charge -1 for y=1.
We thus have all the charges of the leg y and can define y = npc.LegCharge.from_qflat(chinfo, [1, -1], qconj=+1).
Now take a look at the entries of B.
For the non-zero entry \((b, y, z) = (\uparrow, 1, 0)\), we want:
B.qtotal = 0 = p.to_qflat()[up] * p.qconj + y.to_qflat()[1] * y.qconj + z.conj().to_qflat()[0] * z.conj().qconj
= 1 * (+1) + (-1) * (+1) + z.conj().to_qflat()[0] * (-1)
This implies the charge 0 for z=0, thus z = npc.LegCharge.from_qflat(chinfo, [0], qconj=+1).
Finally, note that the rule for \((b, y, z) = (\downarrow, 0, 0)\) is automatically fulfilled!
This is an implication of the fact that the singlet has a well defined value for \(S^z_a + S^z_b\).
For other states without fixed magnetization (e.g., \(|\uparrow \uparrow\rangle + |\downarrow \downarrow\rangle\))
this would not be the case, and we could not use charge conservation.
As an exercise, you can calculate the charge of z in the case that A.qtotal = 5, B.qtotal = -1, and charge 2 for x=0. The result is -2.
Note
This section is meant to be a pedagogical introduction. In your program, you can use the functions detect_legcharge() (which does exactly what’s described above) or detect_qtotal() (if you know all LegCharges, but not qtotal).
Array creation¶
Making a new Array requires both the tensor entries (data) and the charge data.
The default initialization a = Array(...) creates an empty Array, where all entries are zero (equivalent to zeros()).
(Non-zero) data can be provided either as a dense np.array to from_ndarray(), or by providing a numpy function such as np.random, np.ones etc. to from_func().
In both cases, the charge data is provided by one ChargeInfo and a LegCharge instance for each of the legs.
Note
The charge data instances are not copied, in order to allow them to be shared between different Arrays.
Consequently, you must make copies of the charge data if you manipulate it directly.
(However, methods like sort() do that for you.)
Of course, a new Array can also be created using the charge data from existing Arrays, for example with zeros_like() or by creating a (deep or shallow) copy().
Further, new Arrays are returned by the higher-level functions of np_conserved like tensordot() or svd().
Complete blocking of Charges¶
While the code was designed in such a way that each charge block corresponds to a different charge sector, it should still run correctly if multiple charge blocks (with different qindex) correspond to the same charge.
In this sense, Array can act like a sparse array class which selectively stores subblocks.
Algorithms which need a full blocking should state that explicitly in their doc-strings.
(Some functions, like svd and eigh, require complete blocking internally, but if necessary they just work on a temporary copy returned by as_completely_blocked().)
If you expect the tensor to be dense subject to charge constraints (as for MPS), it will be most efficient to fully block by charge, so that work is done on large chunks.
However, if you expect the tensor to be sparser than required by charge (as for an MPO), it may be convenient not to completely block, which forces smaller matrices to be stored, and hence many zeroes to be dropped. Nevertheless, the algorithms were not designed with this in mind, so it is not recommended in general. (If you want to use it, run a benchmark to check whether it is really faster!)
If you haven’t created the array yet, you can call sort() (with bunch=True) on each LegCharge which you want to block.
This sorts by charges and thus induces a permutation of the indices, which is also returned as a 1D array perm.
For consistency, you have to apply this permutation to your flat data as well.
Alternatively, you can simply call sort_legcharge() on an existing Array.
It calls sort() internally on the specified legs and performs the necessary permutations directly on (a copy of) self. Yet you should keep in mind that the axes are permuted afterwards.
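What such a sort does to a leg can be sketched with plain numpy (only an illustration of the bookkeeping, not the TeNPy sort() method): sorting the per-index charges yields a permutation perm which must also be applied to any flat data carried on that leg.

```python
import numpy as np

# Hypothetical charges per leg index (qflat form, one charge):
qflat = np.array([[0], [-1], [2], [-1], [0]])
perm = np.lexsort(qflat.T)           # right-most column is the dominant key
sorted_qflat = qflat[perm]
assert (np.diff(sorted_qflat[:, 0]) >= 0).all()   # charges are now sorted

# Flat data living on that leg must be permuted consistently:
data = np.array([10., 11., 12., 13., 14.])
data_sorted = data[perm]
```

After sorting, equal charges are adjacent, so the leg can subsequently be bunched into maximal charge blocks.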
Internal Storage schema of npc Arrays¶
The actual data of the tensor is stored in _data
. Rather than keeping a single np.array (which would have many zeros in it),
we store only the non-zero sub blocks. So _data
is a python list of np.array’s.
The order in which they are stored in the list is not physically meaningful, and so not guaranteed (more on this later).
So to figure out where the sub block sits in the tensor, we need the _qdata
structure (on top of the LegCharges in legs
).
Consider a rank 3 tensor T
, with the first leg like:
legs[0].slices = np.array([0, 1, 4, ...])
legs[0].charges = np.array([[-2], [1], ...])
Each row of charges gives the charges for a charge block of the leg, with the actual indices of the total tensor determined by the slices. The qindex simply enumerates the charge blocks of a leg. Picking a qindex (and thus a charge block) from each leg, we have a subblock of the tensor.
For each (non-zero) subblock of the tensor, we put a (numpy) ndarray entry in the _data
list.
Since each subblock of the tensor is specified by rank qindices,
we put a corresponding entry in _qdata
, which is a 2D array of shape (#stored_blocks, rank)
.
Each row corresponds to a non-zero subblock, and there are rank columns giving the corresponding qindex for each leg.
Example: for a rank 3 tensor we might have:
T._data = [t1, t2, t3, t4, ...]
T._qdata = np.array([[3, 2, 1],
[1, 1, 1],
[4, 2, 2],
[2, 1, 2],
... ])
The third subblock has an ndarray t3 and qindices [4 2 2] for the three legs.
To find the position of t3 in the actual tensor, you can use get_slice():
T.legs[0].get_slice(4), T.legs[1].get_slice(2), T.legs[2].get_slice(2)
The function leg.get_slice(qi) simply returns slice(leg.slices[qi], leg.slices[qi+1]).
To find the charges of t3, we can use get_charge():
T.legs[0].get_charge(4), T.legs[1].get_charge(2), T.legs[2].get_charge(2)
The function leg.get_charge(qi) simply returns leg.charges[qi] * leg.qconj.
Note
Outside of np_conserved, you should use the API to access the entries.
If you really need to iterate over all blocks of an Array T
, try for (block, blockslices, charges, qindices) in T: do_something()
.
The order in which the blocks are stored in _data/_qdata is arbitrary (although of course _data and _qdata must be in correspondence).
However, for many purposes it is useful to sort them according to some convention. Hence, the array has a flag ._qdata_sorted.
If sorted (with isort_qdata()), the _qdata example above goes to
_qdata = np.array([[1, 1, 1],
[3, 2, 1],
[2, 1, 2],
[4, 2, 2],
... ])
Note that np.lexsort chooses the right-most column to be the dominant key, a convention we follow throughout.
If _qdata_sorted == True, _qdata and _data are guaranteed to be lexsorted. If _qdata_sorted == False, there is no guarantee.
If an algorithm modifies _qdata, it must set _qdata_sorted = False (unless it guarantees that it is still sorted).
The routine sort_qdata() brings the data into sorted form.
Indexing of an Array¶
Although it is usually not necessary to access single entries of an Array, you can of course do that.
In the simplest case, this is something like A[0, 2, 1] for a rank-3 Array A.
However, accessing single entries is quite slow and usually not recommended. For small Arrays, it may be more convenient to convert them back to flat numpy arrays with to_ndarray().
On top of that very basic indexing, Array supports slicing and some kind of advanced indexing, which is however different from that of numpy arrays (described here). Unlike numpy arrays, our Array class does not broadcast existing index arrays – this would be terribly slow. Also, np.newaxis is not supported, since inserting new axes requires additional information for the charges.
Instead, we allow only indexing of the legs independent of each other, of the form A[i0, i1, ...].
If all indices i0, i1, ... are integers, the single corresponding entry (of type dtype) is returned.
However, the individual ‘indices’ i0 for the individual legs can also be one of the entries in the following list.
In that case, a new Array with less data (specified by the indices) is returned.
The ‘indices’ can be:
- an int: fix the index of that axis, return an array with one less dimension. See also take_slice().
- a slice(None) or : : keep the complete axis.
- an Ellipsis or ... : shorthand for slice(None) for the missing axes, to fix the length.
- a 1D bool ndarray mask: apply a mask to that axis, see iproject().
- a slice(start, stop, step) or start:stop:step : keep only the indices specified by the slice. This is also implemented with iproject.
- a 1D int ndarray mask: keep only the indices specified by the array. This is also implemented with iproject.
For slices and 1D arrays, additional permutations may be performed with the help of permute().
If the number of indices is less than rank, the remaining axes remain free, so for a rank-4 Array A, A[i0, i1] == A[i0, i1, ...] == A[i0, i1, :, :].
Note that indexing always copies the data – even if the indices contain just slices, in which case numpy would return a view.
However, assigning with A[:, [3, 5], 3] = B should work as you would expect.
Warning
Due to numpy’s advanced indexing, for 1D integer arrays a0
and a1
the following holds
A[a0, a1].to_ndarray() == A.to_ndarray()[np.ix_(a0, a1)] != A.to_ndarray()[a0, a1]
For a combination of slices and arrays, things get more complicated with numpy’s advanced indexing.
In that case, a simple np.ix_(...) doesn’t help any more to emulate our version of indexing.
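The difference stated in the warning can be seen directly with plain numpy: indexing with two integer arrays picks entry pairs, while np.ix_ takes the outer product of the index sets (which is what the npc indexing emulates).

```python
import numpy as np

A = np.arange(16).reshape(4, 4)
a0, a1 = np.array([0, 2]), np.array([1, 3])

outer = A[np.ix_(a0, a1)]   # 2x2 sub-matrix: rows {0, 2} x cols {1, 3}
pairs = A[a0, a1]           # 1D: just the entries (0, 1) and (2, 3)

assert outer.tolist() == [[1, 3], [9, 11]]
assert pairs.tolist() == [1, 11]
```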
Introduction to combine_legs, split_legs and LegPipes¶
Often it is necessary to “combine” multiple legs into one: for example to perform an SVD, a tensor needs to be viewed as a matrix.
For a flat array, this can be done with np.reshape, e.g., if A has shape (10, 3, 7), then B = np.reshape(A, (30, 7)) will result in a (view of the) array with one less dimension, but a “larger” first leg. By default (order='C'), this results in
B[i*3 + j, k] == A[i, j, k] for i in range(10) for j in range(3) for k in range(7)
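This index relation is easily verified with plain numpy:

```python
import numpy as np

# Check the C-order reshape identity B[i*3 + j, k] == A[i, j, k].
A = np.arange(10 * 3 * 7).reshape(10, 3, 7)
B = np.reshape(A, (30, 7))
assert all(B[i * 3 + j, k] == A[i, j, k]
           for i in range(10) for j in range(3) for k in range(7))
```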
While for a np.ndarray also a reshaping (10, 3, 7) -> (2, 21, 5) would be allowed, it does not make sense physically. The only sensible “reshape” operations on an Array are
- to combine multiple legs into one leg pipe (LegPipe) with combine_legs(), or
- to split a pipe of previously combined legs with split_legs().
Each leg has a Hilbert space, and a representation of the symmetry on that Hilbert space. Combining legs corresponds to the tensor product operation, and for abelian groups, the corresponding “fusion” of the representation is the simple addition of charge.
Fusion is not a lossless process, so if we ever want to split the combined leg,
we need some additional data to tell us how to reverse the tensor product.
This data is saved in the class LegPipe
, derived from the LegCharge
and used as new leg.
Details of the information contained in a LegPipe are given in the class doc string.
The rough usage idea is as follows:
- You can call combine_legs() without supplying any LegPipes; combine_legs will then make them for you.
Nevertheless, if you plan to perform the combination over and over again on sets of legs you know to be identical [with the same charges etc., up to an overall -1 in qconj on all incoming and outgoing legs], you might make a LegPipe anyway to save the overhead of computing it each time.
- In any case, the resulting Array will have a LegPipe as LegCharge on the combined leg. Thus it – and all tensors inheriting the leg (e.g., the results of svd, tensordot etc.) – will have the information how to split the LegPipe back into the original legs.
- Once you have performed the necessary operations, you can call split_legs(). This uses the information saved in the LegPipe to split the legs, recovering the original legs.
For a LegPipe, conj() changes qconj for the outgoing pipe and the incoming legs.
If you need a LegPipe with the same incoming qconj, use outer_conj().
Leg labeling¶
It’s convenient to name the legs of a tensor: for instance, we can name the legs 0, 1, 2 to be 'a', 'b', 'c': \(T_{i_a,i_b,i_c}\).
That way we don’t have to remember the ordering! Under tensordot, we can then call
U = npc.tensordot(S, T, axes=[ [...], ['b'] ])
without having to remember where exactly 'b' is.
Obviously, U should then inherit the labels of the uncontracted legs of S and T.
So here is how it works:
So here is how it works:
- Labels can only be strings. The labels should not include the characters . or ?. Internally, the labels are stored as a dict a.labels = {label: leg_position, ...}. Not all legs need a label.
- To set the labels, call A.set_labels(['a', 'b', None, 'c', ...]), which will set up the labeling {'a': 0, 'b': 1, 'c': 3, ...}.
- (Where implemented) the specification of axes can use either the labels or the index positions. For instance, the call tensordot(A, B, [['a', 2, 'c'], [...]]) will interpret 'a' and 'c' as labels (calling get_leg_indices() to find their positions using the dict) and 2 as ‘the 2nd leg’. That’s why we require labels to be strings!
- Labels will be intelligently inherited through the various operations of np_conserved.
Under transpose, labels are permuted.
Under tensordot, labels are inherited from the uncontracted legs. If there is a collision, both labels are dropped.
Under combine_legs, labels get concatenated with a . delimiter and surrounded by brackets. Example: let a.labels = {'a': 0, 'b': 1, 'c': 2}. Then if b = a.combine_legs([[0, 1], [2]]), it will have b.labels = {'(a.b)': 0, '(c)': 1}. If some sub-leg of a combined leg isn’t named, then a '?#' label is inserted (with # the leg index), e.g., '(a.?0.c)'.
Under split_legs, the labels are split using the delimiters (and the '?#' labels are dropped).
Under conj, iconj: take 'a' -> 'a*', 'a*' -> 'a', and '(a.(b*.c))' -> '(a*.(b.c*))'.
Under svd, the outer labels are inherited, and inner labels can be optionally passed.
Under pinv, the labels are transposed.
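The combine_legs label rule can be sketched in a few lines of plain Python (for the sketch, the labels are kept as a position-to-label dict, the inverse of the internal storage; this is an illustration, not the TeNPy implementation):

```python
# Sketch of the combine_legs label rule: concatenate with '.', surround with
# brackets, and fill missing labels with '?<leg index>'.
def combine_labels(pos_to_label, legs):
    parts = [pos_to_label.get(i, '?%d' % i) for i in legs]
    return '(' + '.'.join(parts) + ')'

labels = {0: 'a', 2: 'c'}                         # leg 1 is unnamed
assert combine_labels(labels, [0, 1, 2]) == '(a.?1.c)'
assert combine_labels({0: 'a', 1: 'b'}, [0, 1]) == '(a.b)'
```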
See also¶
- The module tenpy.linalg.np_conserved should contain all the API needed from the point of view of the algorithms. It contains the fundamental Array class and functions for working with Arrays (creating and manipulating them).
- The module tenpy.linalg.charges contains the implementation of the charge structure, for example the classes ChargeInfo, LegCharge, and LegPipe. As noted above, the ‘public’ API is imported to (and accessible from) np_conserved.
A full example code for spin-1/2¶
Below follows a full example demonstrating the creation and contraction of Arrays. (It’s the file a_np_conserved.py in the examples folder of the tenpy source.)
"""An example code to demonstrate the usage of :class:`~tenpy.linalg.np_conserved.Array`.
This example includes the following steps:
1) create Arrays for a Neel MPS
2) create an MPO representing the nearest-neighbour AFM Heisenberg Hamiltonian
3) define 'environments' left and right
4) contract MPS and MPO to calculate the energy
5) extract two-site hamiltonian ``H2`` from the MPO
6) calculate ``exp(-1.j*dt*H2)`` by diagonalization of H2
7) apply ``exp(H2)`` to two sites of the MPS and truncate with svd
Note that this example uses only np_conserved, but no other modules.
Compare it to the example `b_mps.py`,
which does the same steps using a few predefined classes like MPS and MPO.
"""
# Copyright 2018-2019 TeNPy Developers, GNU GPLv3
import tenpy.linalg.np_conserved as npc
import numpy as np
# model parameters
Jxx, Jz = 1., 1.
L = 20
dt = 0.1
cutoff = 1.e-10
print("Jxx={Jxx}, Jz={Jz}, L={L:d}".format(Jxx=Jxx, Jz=Jz, L=L))
print("1) create Arrays for a Neel MPS")
# vL ->--B-->- vR
# |
# ^
# |
# p
# create a ChargeInfo to specify the nature of the charge
chinfo = npc.ChargeInfo([1], ['2*Sz']) # the second argument is just a descriptive name
# create LegCharges on physical leg and even/odd bonds
p_leg = npc.LegCharge.from_qflat(chinfo, [[1], [-1]]) # charges for up, down
v_leg_even = npc.LegCharge.from_qflat(chinfo, [[0]])
v_leg_odd = npc.LegCharge.from_qflat(chinfo, [[1]])
B_even = npc.zeros([v_leg_even, v_leg_odd.conj(), p_leg])
B_odd = npc.zeros([v_leg_odd, v_leg_even.conj(), p_leg])
B_even[0, 0, 0] = 1. # up
B_odd[0, 0, 1] = 1. # down
for B in [B_even, B_odd]:
    B.iset_leg_labels(['vL', 'vR', 'p'])  # virtual left/right, physical
Bs = [B_even, B_odd] * (L // 2) + [B_even] * (L % 2) # (right-canonical)
Ss = [np.ones(1)] * L # Ss[i] are singular values between Bs[i-1] and Bs[i]
# Side remark:
# An MPS is expected to have non-zero entries everywhere compatible with the charges.
# In general, we recommend to use `sort_legcharge` (or `as_completely_blocked`)
# to ensure complete blocking. (But the code will also work, if you don't do it.)
# The drawback is that this might introduce permutations in the indices of single legs,
# which you have to keep in mind when converting dense numpy arrays to and from npc.Arrays.
print("2) create an MPO representing the AFM Heisenberg Hamiltonian")
# p*
# |
# ^
# |
# wL ->--W-->- wR
# |
# ^
# |
# p
# create physical spin-1/2 operators Sz, S+, S-
Sz = npc.Array.from_ndarray([[0.5, 0.], [0., -0.5]], [p_leg, p_leg.conj()])
Sp = npc.Array.from_ndarray([[0., 1.], [0., 0.]], [p_leg, p_leg.conj()])
Sm = npc.Array.from_ndarray([[0., 0.], [1., 0.]], [p_leg, p_leg.conj()])
Id = npc.eye_like(Sz) # identity
for op in [Sz, Sp, Sm, Id]:
    op.iset_leg_labels(['p', 'p*'])  # physical in, physical out
mpo_leg = npc.LegCharge.from_qflat(chinfo, [[0], [2], [-2], [0], [0]])
W_grid = [[Id, Sp, Sm, Sz, None ],
[None, None, None, None, 0.5 * Jxx * Sm],
[None, None, None, None, 0.5 * Jxx * Sp],
[None, None, None, None, Jz * Sz ],
[None, None, None, None, Id ]] # yapf:disable
W = npc.grid_outer(W_grid, [mpo_leg, mpo_leg.conj()])
W.iset_leg_labels(['wL', 'wR', 'p', 'p*']) # wL/wR = virtual left/right of the MPO
Ws = [W] * L
print("3) define 'environments' left and right")
# .---->- vR vL ->----.
# | |
# envL->- wR wL ->-envR
# | |
# .---->- vR* vL*->----.
envL = npc.zeros([W.get_leg('wL').conj(), Bs[0].get_leg('vL').conj(), Bs[0].get_leg('vL')])
envL.iset_leg_labels(['wR', 'vR', 'vR*'])
envL[0, :, :] = npc.diag(1., envL.legs[1])
envR = npc.zeros([W.get_leg('wR').conj(), Bs[-1].get_leg('vR').conj(), Bs[-1].get_leg('vR')])
envR.iset_leg_labels(['wL', 'vL', 'vL*'])
envR[-1, :, :] = npc.diag(1., envR.legs[1])
print("4) contract MPS and MPO to calculate the energy <psi|H|psi>")
contr = envL
for i in range(L):
    # contr labels: wR, vR, vR*
    contr = npc.tensordot(contr, Bs[i], axes=('vR', 'vL'))
    # wR, vR*, vR, p
    contr = npc.tensordot(contr, Ws[i], axes=(['p', 'wR'], ['p*', 'wL']))
    # vR*, vR, wR, p
    contr = npc.tensordot(contr, Bs[i].conj(), axes=(['p', 'vR*'], ['p*', 'vL*']))
    # vR, wR, vR*
    # note that the order of the legs changed, but that's no problem with labels:
    # the arrays are automatically transposed as necessary
E = npc.inner(contr, envR, axes=(['vR', 'wR', 'vR*'], ['vL', 'wL', 'vL*']))
print("E =", E)
print("5) calculate two-site hamiltonian ``H2`` from the MPO")
# label left, right physical legs with p, q
W0 = W.replace_labels(['p', 'p*'], ['p0', 'p0*'])
W1 = W.replace_labels(['p', 'p*'], ['p1', 'p1*'])
H2 = npc.tensordot(W0, W1, axes=('wR', 'wL')).itranspose(['wL', 'wR', 'p0', 'p1', 'p0*', 'p1*'])
H2 = H2[0, -1] # (If H has single-site terms, it's not that simple anymore)
print("H2 labels:", H2.get_leg_labels())
print("6) calculate exp(H2) by diagonalization of H2")
# diagonalization requires to view H2 as a matrix
H2 = H2.combine_legs([('p0', 'p1'), ('p0*', 'p1*')], qconj=[+1, -1])
print("labels after combine_legs:", H2.get_leg_labels())
E2, U2 = npc.eigh(H2)
print("Eigenvalues of H2:", E2)
U_expE2 = U2.scale_axis(np.exp(-1.j * dt * E2), axis=1) # scale_axis ~= apply an diagonal matrix
exp_H2 = npc.tensordot(U_expE2, U2.conj(), axes=(1, 1))
exp_H2.iset_leg_labels(H2.get_leg_labels())
exp_H2 = exp_H2.split_legs() # by default split all legs which are `LegPipe`
# (this restores the originial labels ['p0', 'p1', 'p0*', 'p1*'] of `H2` in `exp_H2`)
print("7) apply exp(H2) to even/odd bonds of the MPS and truncate with svd")
# (this implements one time step of first order TEBD)
for even_odd in [0, 1]:
    for i in range(even_odd, L - 1, 2):
        B_L = Bs[i].scale_axis(Ss[i], 'vL').ireplace_label('p', 'p0')
        B_R = Bs[i + 1].replace_label('p', 'p1')
        theta = npc.tensordot(B_L, B_R, axes=('vR', 'vL'))
        theta = npc.tensordot(exp_H2, theta, axes=(['p0*', 'p1*'], ['p0', 'p1']))
        # view as matrix for SVD
        theta = theta.combine_legs([('vL', 'p0'), ('p1', 'vR')], new_axes=[0, 1], qconj=[+1, -1])
        # now theta has labels '(vL.p0)', '(p1.vR)'
        U, S, V = npc.svd(theta, inner_labels=['vR', 'vL'])
        # truncate
        keep = S > cutoff
        S = S[keep]
        invsq = np.linalg.norm(S)
        Ss[i + 1] = S / invsq
        U = U.iscale_axis(S / invsq, 'vR')
        Bs[i] = U.split_legs('(vL.p0)').iscale_axis(Ss[i]**(-1), 'vL').ireplace_label('p0', 'p')
        Bs[i + 1] = V.split_legs('(p1.vR)').ireplace_label('p1', 'p')
print("finished")
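The two numerical building blocks of steps 6) and 7), exponentiation by diagonalization and SVD truncation, can be checked independently with plain numpy (a sketch not using np_conserved, with a small hermitian matrix standing in for H2):

```python
import numpy as np

# step 6 analogue: exp(-1j*dt*H) for a hermitian H via eigendecomposition
H = np.array([[0., 1.], [1., 0.]])  # hermitian, eigenvalues +-1
dt = 0.1
E, U = np.linalg.eigh(H)
exp_H = (U * np.exp(-1.j * dt * E)) @ U.conj().T  # U diag(e^{-i dt E}) U^dagger
# for this particular H, exp(-1j*dt*H) = cos(dt)*Id - 1j*sin(dt)*H
expected = np.cos(dt) * np.eye(2) - 1.j * np.sin(dt) * H
assert np.allclose(exp_H, expected)

# step 7 analogue: SVD truncation; the discarded singular values bound the error
A = np.random.rand(8, 8)
U2, S, Vd = np.linalg.svd(A, full_matrices=False)
keep = 4
A_trunc = (U2[:, :keep] * S[:keep]) @ Vd[:keep, :]
err = np.linalg.norm(A - A_trunc)
# Frobenius-norm error equals the norm of the discarded singular values
assert np.isclose(err, np.linalg.norm(S[keep:]))
```

The same structure appears in the example above: `scale_axis` plays the role of multiplying by the diagonal matrix of `exp(-1j*dt*E)`, and the `cutoff` on `S` controls the truncation error.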
Introduction to models¶
What is a model?¶
Abstractly, a model stands for some physical (quantum) system to be described. For tensor network algorithms, the model is usually specified as a Hamiltonian written in terms of second quantization. For example, let us consider a spin-1/2 Heisenberg model on a chain, described by the Hamiltonian \(H = J \sum_{i} \vec{S}_i \cdot \vec{S}_{i+1}\).
Note that a few things are defined more or less implicitly.
The local Hilbert space: it consists of Spin-1/2 degrees of freedom with the usual spin-1/2 operators \(S^x, S^y, S^z\).
The geometric (lattice) structure: above, we spoke of a 1D “chain”.
The boundary conditions: do we have open or periodic boundary conditions? The “chain” suggests open boundaries, which are in most cases preferable for MPS-based methods.
The range of i: How many sites do we consider (for a 2D system: in each direction)?
Obviously, these things need to be specified in TeNPy in one way or another, if we want to define a model.
Ultimately, our goal is to run some algorithm. Each algorithm requires the model and Hamiltonian to be specified in a particular form.
We have one class for each such required form.
For example dmrg
requires an MPOModel
,
which contains the Hamiltonian written as an MPO
.
On the other hand, if we want to evolve a state with tebd
we need a NearestNeighborModel
, in which the Hamiltonian is written in terms of
two-site bond-terms to allow a Suzuki-Trotter decomposition of the time-evolution operator.
Implementing your own model ultimately means to get an instance of MPOModel
or NearestNeighborModel
.
The predefined classes in the other modules under models
are subclasses of at least one of those;
you will see examples further below.
The Hilbert space¶
The local Hilbert space is represented by a Site
(read its doc-string!).
In particular, the Site contains the local LegCharge
and hence the meaning of each
basis state needs to be defined.
Besides that, the site contains the local operators - those give the real meaning to the local basis.
Having the local operators in the site is very convenient, because it makes them available by name for example when you want to calculate expectation values.
The most common sites (e.g. for spins, spin-less or spin-full fermions, or bosons) are predefined
in the module tenpy.networks.site
, but if necessary you can easily extend them
by adding further local operators or completely write your own subclasses of Site
.
The full Hilbert space is a tensor product of the local Hilbert space on each site.
Note
The LegCharge
of all involved sites need to have a common
ChargeInfo
in order to allow the contraction of tensors acting on the various sites.
This can be ensured with the function multi_sites_combine_charges()
.
An example where multi_sites_combine_charges()
is needed would be a coupling of different
types of sites, e.g., when a tight binding chain of fermions is coupled to some local spin degrees of freedom.
Another use case of this function would be a model with a \(U(1)\) symmetry involving only half the sites, say \(\sum_{i=0}^{L/2} n_{2i}\).
Note
If you don’t know about the charges and np_conserved yet, but want to get started with models right away,
you can set conserve=None
in the existing sites or use
leg = tenpy.linalg.np_conserved.LegCharge.from_trivial(d)
for an implementation of your custom site,
where d is the dimension of the local Hilbert space.
Alternatively, you can find some introduction to the charges in the Introduction to np_conserved.
The geometry: lattices¶
The geometry is usually given by some kind of lattice structure describing how the sites are arranged,
e.g. implicitly with the sum over nearest neighbours \(\sum_{<i, j>}\).
In TeNPy, this is specified by a Lattice
class, which contains a unit cell of
a few Site
which are shifted periodically by its basis vectors to form a regular lattice.
Again, we have pre-defined some basic lattices like a Chain
,
two chains coupled as a Ladder
or 2D lattices like the
Square
, Honeycomb
and
Kagome
lattices; but you are also free to define your own generalizations.
(More details on that can be found in the doc-string of Lattice
, read it!)
Visualization of the lattice can help a lot to understand which sites are connected by what couplings.
The methods plot_...
of the Lattice
can do a good job for a quick illustration.
We include a small image in the documentation of each of the lattices.
For example, the following small script can generate the image of the Kagome lattice shown below:
import matplotlib.pyplot as plt
from tenpy.models.lattice import Kagome
ax = plt.gca()
lat = Kagome(4, 4, None, bc='periodic')
lat.plot_coupling(ax, lat.nearest_neighbors, linewidth=3.)
lat.plot_order(ax=ax, linestyle=':')
lat.plot_sites()
lat.plot_basis(ax, color='g', linewidth=2.)
ax.set_aspect('equal')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
[figure: Kagome lattice with nearest-neighbor couplings, MPS ordering, and basis vectors]
The lattice also contains the boundary conditions bc in each direction. It can be one of the usual 'open'
or
'periodic'
in each direction. Instead of just saying “periodic”, you can also specify a shift (except in the
first direction). This is easiest to understand in its standard use case: DMRG on an infinite cylinder.
Going around the cylinder, you have a degree of freedom which sites to connect.
The orange markers in the following figures illustrate sites identified for a Square lattice with bc=['periodic', shift]
(see plot_bc_shift()
):
[figure: sites identified on a Square lattice cylinder for different values of shift]
Note that the “cylinder” axis (and the direction for \(k_x\)) is perpendicular to the orange line connecting these sites. The line where the cylinder is “cut open” therefore winds around the cylinder for a non-zero shift (or for more complicated lattices without a perpendicular basis).
MPS-based algorithms like DMRG always work on purely 1D systems. Even if our model “lives” on a 2D lattice,
these algorithms require mapping it onto a 1D chain (probably at the cost of longer-range interactions).
This mapping is also done by the lattice, as it defines an order (order
) of the sites.
The methods mps2lat_idx()
and lat2mps_idx()
map
indices of the MPS to and from indices of the lattice. If you obtained an array with expectation values for a given MPS,
you can use mps2lat_values()
to map it to lattice indices, thereby reverting the ordering.
Performing this mapping of the Hamiltonian from a 2D lattice to a 1D chain by hand can be a tedious process. Therefore, we have automated this mapping in TeNPy, as explained in the next section. (Nevertheless, it's a good exercise you should do at least once in your life to understand how it works!)
Note
A suitable order is critical for the efficiency of MPS-based algorithms. On the one hand, different orderings can lead to different MPO bond dimensions, with direct impact on the complexity scaling. On the other hand, the order influences how much entanglement needs to go through each bond of the underlying MPS, e.g., in the ground state to be found by DMRG, and therefore influences the required MPS bond dimensions. For the latter reason, the “optimal” ordering cannot be known a priori and might even depend on your coupling parameters (and the phase you are in). In the end, you can just try different orderings and see which one works best.
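To make the idea of an ordering concrete, here is a small stand-alone sketch (plain Python, not using TeNPy's Lattice class; the function names are ours, not TeNPy's) of two common ways to enumerate the sites of an Lx x Ly square lattice as a 1D chain:

```python
def default_order(Lx, Ly):
    """Row-major ('default'-style) ordering: sweep each row left to right."""
    return [(x, y) for x in range(Lx) for y in range(Ly)]

def snake_order(Lx, Ly):
    """'Snake'-style ordering: alternate the sweep direction in every other row."""
    order = []
    for x in range(Lx):
        ys = range(Ly) if x % 2 == 0 else reversed(range(Ly))
        order.extend((x, y) for y in ys)
    return order

# map lattice coordinates <-> MPS index, analogous to lat2mps_idx()/mps2lat_idx()
order = snake_order(3, 3)
lat2mps = {coord: i for i, coord in enumerate(order)}
```

Different orderings change which lattice neighbors end up far apart along the chain, i.e., which couplings become "long-range" from the MPS point of view.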
Implementing your own model¶
When you want to simulate a model not provided in models
, you need to implement your own model class,
lets call it MyNewModel
.
The idea is that you define a new subclass of one or multiple of the model base classes.
For example, when you plan to do DMRG, you have to provide an MPO in a MPOModel
,
so your model class should look like this:
class MyNewModel(MPOModel):
    """General structure for a model suitable for DMRG.

    Here is a good place to document the represented Hamiltonian and parameters.
    In the models of TeNPy, we usually take a single dictionary `model_params`
    containing all parameters, and read values out with ``tenpy.tools.params.get_parameter(...)``.
    The model needs to provide default values if a parameter was not specified.
    """
    def __init__(self, model_params):
        # some code here to read out model parameters and generate H_MPO
        lattice = somehow_generate_lattice(model_params)
        H_MPO = somehow_generate_MPO(lattice, model_params)
        # initialize MPOModel
        MPOModel.__init__(self, lattice, H_MPO)
TEBD requires another representation of H in terms of bond terms H_bond given to a
NearestNeighborModel
, so in this case it would look so like this instead:
class MyNewModel2(NearestNeighborModel):
    """General structure for a model suitable for TEBD."""
    def __init__(self, model_params):
        # some code here to read out model parameters and generate H_bond
        lattice = somehow_generate_lattice(model_params)
        H_bond = somehow_generate_H_bond(lattice, model_params)
        # initialize NearestNeighborModel
        NearestNeighborModel.__init__(self, lattice, H_bond)
Of course, the difficult part in these examples is to generate the H_MPO
and H_bond
.
Moreover, it’s quite annoying to write every model multiple times,
just because we need different representations of the same Hamiltonian.
Luckily, there is a way out in TeNPy: the CouplingModel!
The easy way to new models: the (Multi)CouplingModel¶
The CouplingModel
provides a general, quite abstract way to specify a Hamiltonian
of two-site couplings on a given lattice.
Once initialized, its methods add_onsite()
and
add_coupling()
allow to add onsite and coupling terms repeated over the different
unit cells of the lattice.
In that way, it basically allows a straightforward translation of a Hamiltonian given as a math formula
\(H = \sum_{i} A_i B_{i+dx} + ...\) with onsite operators A, B, ... into a model class.
The general structure for a new model based on the CouplingModel
is then:
class MyNewModel3(CouplingModel, MPOModel, NearestNeighborModel):
    def __init__(self, ...):
        ...  # follow the basic steps explained below
In the initialization method __init__(self, ...)
of this class you can then follow these basic steps:
0. Read out the parameters.
1. Given the parameters, determine the charges to be conserved. Initialize the LegCharge of the local sites accordingly.
2. Define (additional) local operators needed.
3. Initialize the needed Site.
Note
Using pre-defined sites like the SpinHalfSite is recommended and can replace steps 1-3.
4. Initialize the lattice (or, if you got the lattice as a parameter, set the sites in the unit cell).
5. Initialize the CouplingModel with CouplingModel.__init__(self, lat).
6. Use add_onsite() and add_coupling() to add all terms of the Hamiltonian. Here, the nearest_neighbors of the lattice (and its friends for next nearest neighbors) can come in handy, for example:
self.add_onsite(-np.asarray(h), 0, 'Sz')
for u1, u2, dx in self.lat.nearest_neighbors:
    self.add_coupling(J, u1, 'Sz', u2, 'Sz', dx)
Note
The method add_coupling() adds the coupling only in one direction, i.e. it does not switch i and j in a \(\sum_{\langle i, j\rangle}\). If you have terms like \(c^\dagger_i c_j\) in your Hamiltonian, you need to add them in both directions to get a hermitian Hamiltonian! Simply add another line with conjugated operators, switched (u1, u2), and negative dx, for example when using the SpinHalfFermionSite:
self.add_coupling(t, u1, 'Cdu', u2, 'Cd', dx)
self.add_coupling(np.conj(t), u2, 'Cdd', u1, 'Cu', -dx)  # h.c.
# ('Cdd' is h.c. of 'Cd', and 'Cu' is h.c. of 'Cdu'!)
See also the other examples in add_coupling().
Note that the strength arguments of these functions can be (numpy) arrays for site-dependent couplings. If you need to add or multiply some parameters of the model for the strength of certain terms, it is recommended to use np.asarray beforehand - in that way, lists will also work fine.
7. Finally, if you derived from the MPOModel, you can call calc_H_MPO() to build the MPO and use it for the initialization as MPOModel.__init__(self, lat, self.calc_H_MPO()).
8. Similarly, if you derived from the NearestNeighborModel, you can call calc_H_bond() to initialize it as NearestNeighborModel.__init__(self, lat, self.calc_H_bond()). Calling self.calc_H_bond() will fail for models which are not nearest-neighbor (with respect to the MPS ordering), so you should only subclass the NearestNeighborModel if the lattice is a simple Chain.
The CouplingModel
works for Hamiltonians which are a sum of terms involving at most two sites.
The generalization MultiCouplingModel
can be used for Hamiltonians with
coupling terms acting on more than 2 sites at once. Follow the exact same steps in the initialization, and just use the
add_multi_coupling()
instead or in addition to the
add_coupling()
.
A prototypical example is the exactly solvable ToricCode
.
The code of the module tenpy.models.xxz_chain
is included below as an illustrative example of how to implement a
model. The implementation of the XXZChain
directly follows the steps
outlined above.
The XXZChain2
implements the very same model, but based on the
CouplingMPOModel
explained in the next section.
"""Prototypical example of a 1D quantum model: the spin-1/2 XXZ chain.
The XXZ chain is contained in the more general :class:`~tenpy.models.spins.SpinChain`; the idea of
this module is more to serve as a pedagogical example for a model.
"""
# Copyright 2018-2019 TeNPy Developers, GNU GPLv3
import numpy as np
from .lattice import Site, Chain
from .model import CouplingModel, NearestNeighborModel, MPOModel, CouplingMPOModel
from ..linalg import np_conserved as npc
from ..tools.params import get_parameter, unused_parameters
from ..networks.site import SpinHalfSite # if you want to use the predefined site
__all__ = ['XXZChain', 'XXZChain2']
class XXZChain(CouplingModel, NearestNeighborModel, MPOModel):
    r"""Spin-1/2 XXZ chain with Sz conservation.

    The Hamiltonian reads:

    .. math ::
        H = \sum_i \mathtt{Jxx}/2 (S^{+}_i S^{-}_{i+1} + S^{-}_i S^{+}_{i+1})
                 + \mathtt{Jz} S^z_i S^z_{i+1} \\
            - \sum_i \mathtt{hz} S^z_i

    All parameters are collected in a single dictionary `model_params` and read out with
    :func:`~tenpy.tools.params.get_parameter`.

    Parameters
    ----------
    L : int
        Length of the chain.
    Jxx, Jz, hz : float | array
        Couplings as defined for the Hamiltonian above.
    bc_MPS : {'finite' | 'infinite'}
        MPS boundary conditions. Coupling boundary conditions are chosen appropriately.
    """
    def __init__(self, model_params):
        # 0) read out/set default parameters
        name = "XXZChain"
        L = get_parameter(model_params, 'L', 2, name)
        Jxx = get_parameter(model_params, 'Jxx', 1., name, asarray=True)
        Jz = get_parameter(model_params, 'Jz', 1., name, True)
        hz = get_parameter(model_params, 'hz', 0., name, True)
        bc_MPS = get_parameter(model_params, 'bc_MPS', 'finite', name)
        unused_parameters(model_params, name)  # checks for mistyped parameters
        # 1-3):
        USE_PREDEFINED_SITE = False
        if not USE_PREDEFINED_SITE:
            # 1) charges of the physical leg. The only time that we actually define charges!
            leg = npc.LegCharge.from_qflat(npc.ChargeInfo([1], ['2*Sz']), [1, -1])
            # 2) onsite operators
            Sp = [[0., 1.], [0., 0.]]
            Sm = [[0., 0.], [1., 0.]]
            Sz = [[0.5, 0.], [0., -0.5]]
            # (Can't define Sx and Sy as onsite operators: they are incompatible with Sz charges.)
            # 3) local physical site
            site = Site(leg, ['up', 'down'], Sp=Sp, Sm=Sm, Sz=Sz)
        else:
            # there is a site for spin-1/2 defined in TeNPy, so we can just use it,
            # replacing steps 1-3)
            site = SpinHalfSite(conserve='Sz')
        # 4) lattice
        bc = 'periodic' if bc_MPS == 'infinite' else 'open'
        lat = Chain(L, site, bc=bc, bc_MPS=bc_MPS)
        # 5) initialize CouplingModel
        CouplingModel.__init__(self, lat)
        # 6) add terms of the Hamiltonian
        # (u is always 0 as we have only one site in the unit cell)
        self.add_onsite(-hz, 0, 'Sz')
        self.add_coupling(Jxx * 0.5, 0, 'Sp', 0, 'Sm', 1)
        self.add_coupling(np.conj(Jxx * 0.5), 0, 'Sp', 0, 'Sm', -1)  # h.c.
        self.add_coupling(Jz, 0, 'Sz', 0, 'Sz', 1)
        # 7) initialize H_MPO
        MPOModel.__init__(self, lat, self.calc_H_MPO())
        # 8) initialize H_bond (the order of 7/8 doesn't matter)
        NearestNeighborModel.__init__(self, lat, self.calc_H_bond())
class XXZChain2(CouplingMPOModel, NearestNeighborModel):
    """Another implementation of the Spin-1/2 XXZ chain with Sz conservation.

    This implementation takes the same parameters as the :class:`XXZChain`, but is implemented
    based on the :class:`~tenpy.models.model.CouplingMPOModel`.
    """
    def __init__(self, model_params):
        model_params.setdefault('lattice', "Chain")
        CouplingMPOModel.__init__(self, model_params)

    def init_sites(self, model_params):
        return SpinHalfSite(conserve='Sz')  # use predefined Site

    def init_terms(self, model_params):
        # read out parameters
        Jxx = get_parameter(model_params, 'Jxx', 1., self.name, True)
        Jz = get_parameter(model_params, 'Jz', 1., self.name, True)
        hz = get_parameter(model_params, 'hz', 0., self.name, True)
        # add terms
        for u in range(len(self.lat.unit_cell)):
            self.add_onsite(-hz, u, 'Sz')
        for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
            self.add_coupling(Jxx * 0.5, u1, 'Sp', u2, 'Sm', dx)
            self.add_coupling(np.conj(Jxx * 0.5), u2, 'Sp', u1, 'Sm', -dx)  # h.c.
            self.add_coupling(Jz, u1, 'Sz', u2, 'Sz', dx)
The easy easy way: the CouplingMPOModel¶
Since many of the basic steps above are always the same, we don’t need to repeat them all the time.
So we have yet another class helping to structure the initialization of models: the CouplingMPOModel
.
The general structure of the class is like this:
class CouplingMPOModel(CouplingModel, MPOModel):
    def __init__(self, model_param):
        # ... follow the basic steps 1-8 using the methods
        lat = self.init_lattice(model_param)  # for step 4
        # ...
        self.init_terms(model_param)  # for step 6
        # ...

    def init_sites(self, model_param):
        ...  # You should overwrite this

    def init_lattice(self, model_param):
        sites = self.init_sites(model_param)  # for steps 1-3
        # initialize an arbitrary pre-defined lattice
        # using model_param['lattice']

    def init_terms(self, model_param):
        # does nothing.
        ...  # You should overwrite this
The XXZChain2
included above illustrates how it can be used.
You need to implement steps 1-3) by overwriting the method init_sites().
Step 4) is performed in the method init_lattice()
, which initializes arbitrary 1D or 2D
lattices; by default a simple 1D chain.
If your model only works for specific lattices, you can overwrite this method in your own class.
Step 6) should be done by overwriting the method init_terms()
.
Steps 5,7,8 and calls to the init_… methods for the other steps are done automatically if you just call the
CouplingMPOModel.__init__(self, model_param)
.
The XXZChain
and XXZChain2
work only with the
Chain
as lattice, since they are derived from the NearestNeighborModel
.
This allows using them for TEBD in 1D (yeah!), but we can't get the MPO for DMRG on, e.g., a Square
lattice cylinder - although it's intuitively clear what the Hamiltonian there should be: just put the nearest-neighbor
coupling on each bond of the 2D lattice.
It's not possible to generalize a NearestNeighborModel
to an arbitrary lattice, where it's
no longer nearest-neighbor in the MPS sense, but we can go the other way around:
first write the model on an arbitrary 2D lattice and then restrict it to a 1D chain to make it a NearestNeighborModel
.
Let me illustrate this with another standard example model: the transverse field Ising model, implemented in the module
tenpy.models.tf_ising
included below.
The TFIModel
works for arbitrary 1D or 2D lattices.
The TFIChain
then takes the exact same model and makes it a NearestNeighborModel
,
which only works for the 1D chain.
"""Prototypical example of a quantum model: the transverse field Ising model.
Like the :class:`~tenpy.models.xxz_chain.XXZChain`, the transverse field Ising chain
:class:`TFIChain` is contained in the more general :class:`~tenpy.models.spins.SpinChain`;
the idea is more to serve as a pedagogical example for a 'model'.
We choose the field along z to allow to conserve the parity, if desired.
"""
# Copyright 2018-2019 TeNPy Developers, GNU GPLv3
from .model import CouplingMPOModel, NearestNeighborModel
from ..tools.params import get_parameter
from ..networks.site import SpinHalfSite
__all__ = ['TFIModel', 'TFIChain']
class TFIModel(CouplingMPOModel):
    r"""Transverse field Ising model on a general lattice.

    The Hamiltonian reads:

    .. math ::
        H = - \sum_{\langle i,j\rangle, i < j} \mathtt{J} \sigma^x_i \sigma^x_{j}
            - \sum_{i} \mathtt{g} \sigma^z_i

    Here, :math:`\langle i,j \rangle, i < j` denotes nearest neighbor pairs, each pair appearing
    exactly once.
    All parameters are collected in a single dictionary `model_params` and read out with
    :func:`~tenpy.tools.params.get_parameter`.

    Parameters
    ----------
    conserve : None | 'parity'
        What should be conserved. See :class:`~tenpy.networks.site.SpinHalfSite`.
    J, g : float | array
        Couplings as defined for the Hamiltonian above.
    lattice : str | :class:`~tenpy.models.lattice.Lattice`
        Instance of a lattice class for the underlying geometry.
        Alternatively a string being the name of one of the Lattices defined in
        :mod:`~tenpy.models.lattice`, e.g. ``"Chain", "Square", "HoneyComb", ...``.
    bc_MPS : {'finite' | 'infinite'}
        MPS boundary conditions along the x-direction.
        For 'infinite' boundary conditions, repeat the unit cell in x-direction.
        Coupling boundary conditions in x-direction are chosen accordingly.
        Only used if `lattice` is a string.
    order : string
        Ordering of the sites in the MPS, e.g. 'default', 'snake';
        see :meth:`~tenpy.models.lattice.Lattice.ordering`.
        Only used if `lattice` is a string.
    L : int
        Length of the lattice.
        Only used if `lattice` is the name of a 1D Lattice.
    Lx, Ly : int
        Length of the lattice in x- and y-direction.
        Only used if `lattice` is the name of a 2D Lattice.
    bc_y : 'ladder' | 'cylinder'
        Boundary conditions in y-direction.
        Only used if `lattice` is the name of a 2D Lattice.
    """
    def __init__(self, model_params):
        CouplingMPOModel.__init__(self, model_params)

    def init_sites(self, model_params):
        conserve = get_parameter(model_params, 'conserve', 'parity', self.name)
        assert conserve != 'Sz'
        if conserve == 'best':
            conserve = 'parity'
            if self.verbose >= 1.:
                print(self.name + ": set conserve to", conserve)
        site = SpinHalfSite(conserve=conserve)
        return site

    def init_terms(self, model_params):
        J = get_parameter(model_params, 'J', 1., self.name, True)
        g = get_parameter(model_params, 'g', 1., self.name, True)
        for u in range(len(self.lat.unit_cell)):
            self.add_onsite(-g, u, 'Sigmaz')
        for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
            self.add_coupling(-J, u1, 'Sigmax', u2, 'Sigmax', dx)
        # done
class TFIChain(TFIModel, NearestNeighborModel):
    """The :class:`TFIModel` on a Chain, suitable for TEBD.

    See the :class:`TFIModel` for the documentation of parameters.
    """
    def __init__(self, model_params):
        model_params.setdefault('lattice', "Chain")
        CouplingMPOModel.__init__(self, model_params)
Some final remarks¶
Needless to say, we also have various predefined models under tenpy.models
.
Of course, an MPO is all you need to initialize an MPOModel
to be used for DMRG; you don't have to use the CouplingModel
or CouplingMPOModel
. For example, exponentially decaying long-range interactions are not supported by the coupling model, but straightforward to include in an MPO, as demonstrated in the example examples/mpo_exponentially_decaying.py
.
If the model of your interest contains fermions, you should read the introduction to Fermions and the Jordan-Wigner transformation.
We suggest writing the model to take a single parameter dictionary for the initialization, which is to be read out inside the class with get_parameter()
. Read the doc-string of this function for more details on why this is a good idea. CouplingMPOModel.__init__(...)
calls unused_parameters()
, helping to avoid typos in the specified parameters.
When you write a model and want to include a test that it can at least be constructed, take a look at tests/test_model.py
.
Fermions and the Jordan-Wigner transformation¶
The Jordan-Wigner transformation maps fermionic creation and annihilation operators to (bosonic) spin operators.
Spinless fermions in 1D¶
Let’s start by explicitly writing down the transformation. With the Pauli matrices \(\sigma^{x,y,z}_j\) and \(\sigma^{\pm}_j = (\sigma^x_j \pm \mathrm{i} \sigma^y_j)/2\) on each site, we can map
\(n_j \leftrightarrow \frac{1}{2}(\sigma^z_j + 1)\),
\(c_j \leftrightarrow (-1)^{\sum_{l < j} n_l} \sigma^{-}_j\),
\(c^\dagger_j \leftrightarrow (-1)^{\sum_{l < j} n_l} \sigma^{+}_j\).
The \(n_l\) in the second and third row are defined in terms of Pauli matrices according to the first row. We do not interpret the Pauli matrices as spin-1/2; they have nothing to do with the spin in the spin-full case. If you really want to interpret them physically, you might better think of them as hard-core bosons (\(b_j = \sigma^{-}_j, b^\dagger_j = \sigma^{+}_j\)), with a spin of the fermions mapping to a spin of the hard-core bosons.
Note that this transformation maps the fermionic operators \(c_j\) and \(c^\dagger_j\) to global operators; although they carry an index j indicating
a site, they actually act on all sites l <= j
!
Thus, clearly the operators C
and Cd
defined in the FermionSite
do not directly correspond to \(c_j\) and
\(c^\dagger_j\).
The part \((-1)^{\sum_{l < j} n_l}\) is called the Jordan-Wigner string and in the FermionSite
is given by the local operator
\(JW := (-1)^{n_l}\) acting on all sites l < j
.
Since this is important, let me stress it again:
Warning
The fermionic operator \(c_j\) (and similar \(c^\dagger_j\)) maps to a global operator consisting of
the Jordan-Wigner string built by the local operator JW
on sites l < j
and the local operator C
(or Cd
, respectively) on site j
.
On a single site, the onsite operators C
and Cd
in the FermionSite
fulfill the correct anti-commutation relation, without the need to include JW
strings.
The JW
string is necessary to ensure the anti-commutation for operators acting on different sites.
Written in terms of onsite operators defined in the FermionSite
,
with the i-th entry in the list acting on site i, the relations are thus:
["JW", ..., "JW", "C", "Id", ..., "Id"] # for the annihilation operator
["JW", ..., "JW", "Cd", "Id", ..., "Id"] # for the creation operator
Note that "JW"
squares to the identity, "JW JW" == "Id"
,
which is the reason that the Jordan-Wigner string completely cancels in \(n_j = c^\dagger_j c_j\).
In the above notation, this can be written as:
["JW", ..., "JW", "Cd", "Id", ..., "Id"] * ["JW", ..., "JW", "C", "Id", ..., "Id"]
== ["JW JW", ..., "JW JW", "Cd C", "Id Id", ..., "Id Id"] # by definition of the tensorproduct
== ["Id", ..., "Id", "N", "Id", ..., "Id"] # by definition of the local operators
# ("X Y" stands for the local operators X and Y applied on the same site. We assume that the "Cd" and "C" on the first line act on the same site.)
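This cancellation is easy to verify numerically. A minimal numpy check of the local operators, assuming the basis order (|empty>, |occupied>) as a convention for illustration, might look like:

```python
import numpy as np

# local operators in the basis (|empty>, |occupied>); convention assumed for illustration
C = np.array([[0., 1.], [0., 0.]])  # annihilation
Cd = C.T                            # creation
N = Cd @ C                          # number operator
JW = np.diag([1., -1.])             # (-1)^n

assert np.allclose(JW @ JW, np.eye(2))          # "JW JW" == "Id"
assert np.allclose(N, np.diag([0., 1.]))        # "Cd C" == "N"
```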
For a pair of operators acting on different sites, JW
strings have to be included for every site between the operators.
For example, taking i < j
,
\(c^\dagger_i c_j \leftrightarrow \sigma_i^{+} (-1)^{\sum_{i \leq l < j} n_l} \sigma_j^{-}\).
More explicitly, for j = i+2
we get:
["JW", ..., "JW", "Cd", "Id", "Id", "Id", ..., "Id"] * ["JW", ..., "JW", "JW", "JW", "C", "Id", ..., "Id"]
== ["JW JW", ..., "JW JW", "Cd JW", "Id JW", "Id C", ..., "Id"]
== ["Id", ..., "Id", "Cd JW", "JW", "C", ..., "Id"]
In other words, the Jordan-Wigner string appears only in the range i <= l < j
, i.e. between the two sites and on the smaller/left one of them.
(You can easily generalize this rule to cases with more than two \(c\) or \(c^\dagger\).)
This last line (as well as the last line of the previous example) can be rewritten by changing the order of the operators "Cd JW" to "JW Cd" == -"Cd". (This is valid because either site i is occupied, yielding a minus sign from the JW, or it is empty, yielding a 0 from the Cd.)
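The sign from this reordering can be verified directly on the local 2x2 matrices (a sketch assuming the standard representations C = |0><1|, Cd = |1><0|, JW = diag(1, -1); "X Y" corresponds to the matrix product X @ Y):

```python
import numpy as np

C = np.array([[0., 1.], [0., 0.]])  # annihilation
Cd = C.T                            # creation
JW = np.diag([1., -1.])             # (-1)^n

# "Cd JW" == "Cd" and "JW C" == "C": the JW sign acts trivially
# next to the operator in this order ...
assert np.allclose(Cd @ JW, Cd)
assert np.allclose(JW @ C, C)
# ... while the opposite order picks up a minus sign:
assert np.allclose(JW @ Cd, -Cd)
assert np.allclose(C @ JW, -C)
```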
This is also the case for j < i
, say j = i-2
:
\(c^\dagger_i c_j \leftrightarrow (-1)^{\sum_{j <=l < i} n_l} \sigma_i^{+} \sigma_j^{-}\).
As shown in the following, the JW
again appears on the left site,
but this time acting after C
:
["JW", ..., "JW", "JW", "JW", "Cd", "Id", ..., "Id"] * ["JW", ..., "JW", "C", "Id", "Id", "Id", ..., "Id"]
== ["JW JW", ..., "JW JW", "JW C", "JW", "Cd Id", ..., "Id"]
== ["Id", ..., "Id", "JW C", "JW", "Cd", ..., "Id"]
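As a consistency check, one can assemble the full operators on a small chain with plain numpy and verify the canonical anti-commutation relations (a sketch with assumed 2x2 matrices; the global \(c_j\) is the tensor product of JW on all sites l < j, C on site j, and Id on the rest):

```python
import numpy as np

Id = np.eye(2)
C = np.array([[0., 1.], [0., 0.]])
Cd = C.T
JW = np.diag([1., -1.])

def kron_list(ops):
    """Tensor product of local operators, first entry acting on site 0."""
    res = np.eye(1)
    for op in ops:
        res = np.kron(res, op)
    return res

def c_global(j, L):
    """Global annihilation operator: JW string on sites l < j, C on site j."""
    return kron_list([JW] * j + [C] + [Id] * (L - j - 1))

L = 3
anti = lambda a, b: a @ b + b @ a
for i in range(L):
    for j in range(L):
        ci, cj = c_global(i, L), c_global(j, L)
        # {c_i, c_j} = 0 and {c_i, c^dagger_j} = delta_ij
        assert np.allclose(anti(ci, cj), 0.)
        expected = np.eye(2**L) if i == j else np.zeros((2**L, 2**L))
        assert np.allclose(anti(ci, cj.T), expected)
```

Without the JW strings (i.e. using plain Id instead of JW), the cross-site anti-commutators would not vanish.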
Higher dimensions¶
For an MPO or MPS, you always have to define an ordering of all your sites. This ordering effectively maps the higher-dimensional lattice to a 1D chain, usually at the expense of long-range hopping/interactions. With this mapping, the Jordan-Wigner transformation generalizes to higher dimensions in a straightforward way.
Spinful fermions¶

As illustrated in the above picture, you can think of spin-1/2 fermions on a chain as spinless fermions living on a ladder (and analogous mappings for higher dimensional lattices).
Each rung (a blue box in the picture) forms a SpinHalfFermionSite
which is composed of two FermionSite
(the circles in the picture) for spin-up and spin-down.
The mapping of the spin-1/2 fermions onto the ladder induces an ordering of the spins, as the final result must again be a one-dimensional chain, now containing both spin species.
The solid line indicates the convention for the ordering, the dashed lines indicate spin-preserving hopping \(c^\dagger_{s,i} c_{s,i+1} + h.c.\)
and visualize the ladder structure.
More generally, each species of fermions appearing in your model gets a separate label, and its Jordan-Wigner string
includes the signs \((-1)^{n_l}\) of all species of fermions to the ‘left’ of it (in the sense of the ordering indicated by the solid line in the picture).
In the case of spin-1/2 fermions labeled by \(\uparrow\) and \(\downarrow\) on each site, the complete mapping is given by (where j and l are indices of the FermionSite):
\(c_{\uparrow,j} \leftrightarrow (-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} \sigma^{-}_{\uparrow,j}\)
\(c^\dagger_{\uparrow,j} \leftrightarrow (-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} \sigma^{+}_{\uparrow,j}\)
\(c_{\downarrow,j} \leftrightarrow (-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} (-1)^{n_{\uparrow,j}} \sigma^{-}_{\downarrow,j}\)
\(c^\dagger_{\downarrow,j} \leftrightarrow (-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} (-1)^{n_{\uparrow,j}} \sigma^{+}_{\downarrow,j}\)
In each of the above mappings the operators on the right hand sides commute; we can rewrite
\((-1)^{\sum_{l < j} n_{\uparrow,l} + n_{\downarrow,l}} = \prod_{l < j} (-1)^{n_{\uparrow,l}} (-1)^{n_{\downarrow,l}}\),
which resembles the actual structure in the code more closely.
The parts of the operator acting in the same box of the picture, i.e. which have the same index j or l,
are the ‘onsite’ operators in the SpinHalfFermionSite
:
for example, JW on site j is given by \((-1)^{n_{\uparrow,j}} (-1)^{n_{\downarrow,j}}\),
Cu is just \(\sigma^{-}_{\uparrow,j}\), Cdu is \(\sigma^{+}_{\uparrow,j}\),
Cd is \((-1)^{n_{\uparrow,j}} \sigma^{-}_{\downarrow,j}\),
and Cdd is \((-1)^{n_{\uparrow,j}} \sigma^{+}_{\downarrow,j}\).
Note the asymmetry regarding the spin in the definition of the onsite operators:
the spin-down operators include Jordan-Wigner signs for the spin-up fermions on the same site.
This asymmetry stems from the ordering convention introduced by the solid line in the picture, according to which the spin-up site
is “left” of the spin-down site. With the above definition, the operators within the same SpinHalfFermionSite
fulfill the expected commutation relations,
for example "Cu Cdd" == - "Cdd Cu"
, but again the JW
on sites left of the operator pair is crucial to get the correct
commutation relations globally.
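These onsite definitions can be spelled out as 4x4 matrices (a numpy sketch, assuming the ladder picture above with the spin-up FermionSite as the left tensor factor; `Cd_` stands for the TeNPy operator name "Cd", to avoid a clash with the spinless creation operator):

```python
import numpy as np

# spinless 2x2 building blocks in the occupation basis {|0>, |1>}
I2 = np.eye(2)
c = np.array([[0., 1.], [0., 0.]])
cd = c.T
jw = np.diag([1., -1.])

# onsite operators of the SpinHalfFermionSite on (up) x (down):
Cu = np.kron(c, I2)      # sigma^-_up
Cdu = np.kron(cd, I2)    # sigma^+_up
Cd_ = np.kron(jw, c)     # (-1)^{n_up} sigma^-_down  ("Cd" in TeNPy)
Cdd = np.kron(jw, cd)    # (-1)^{n_up} sigma^+_down  ("Cdd" in TeNPy)
JW = np.kron(jw, jw)     # (-1)^{n_up + n_down}

# thanks to the built-in (-1)^{n_up} sign, the onsite operators anti-commute:
assert np.allclose(Cu @ Cdd, -Cdd @ Cu)
assert np.allclose(Cu @ Cdu + Cdu @ Cu, np.eye(4))
assert np.allclose(Cd_ @ Cdd + Cdd @ Cd_, np.eye(4))
```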
Warning
Again, the fermionic operators \(c_{\uparrow,j}, c^\dagger_{\uparrow,j}, c_{\downarrow,j}, c^\dagger_{\downarrow,j}\) correspond to global operators consisting of the Jordan-Wigner string built from the local operator JW on sites l < j and the local operators 'Cu', 'Cdu', 'Cd', 'Cdd' on site j.
Written explicitly in terms of onsite operators defined in the SpinHalfFermionSite, with the j-th entry in the list acting on site j, the relations are:
["JW", ..., "JW", "Cu", "Id", ..., "Id"] # for the annihilation operator spin-up
["JW", ..., "JW", "Cd", "Id", ..., "Id"] # for the annihilation operator spin-down
["JW", ..., "JW", "Cdu", "Id", ..., "Id"] # for the creation operator spin-up
["JW", ..., "JW", "Cdd", "Id", ..., "Id"] # for the creation operator spin-down
As you can see, the asymmetry regarding the spins in the definition of the local onsite operators "Cu", "Cd", "Cdu", "Cdd" leads to a symmetric definition in the global sense.
If you look at the definitions very closely, you can see that in terms like ["Id", "Cd JW", "JW", "Cd"] the Jordan-Wigner sign \((-1)^{n_{\uparrow,2}}\) appears twice (namely once in the definition of "Cd" and once in the "JW" on site 2) and could in principle be canceled. However, in favor of a simplified handling in the code, we recommend not to cancel it.
Similarly, within a spinless FermionSite, one can simplify "Cd JW" == "Cd" and "JW C" == "C", but these relations do not hold in the SpinHalfFermionSite, and for consistency we recommend explicitly keeping the "JW" operator string even in nearest-neighbor models where it is not strictly necessary.
How to handle Jordan-Wigner strings in practice¶
There are only a few pitfalls where you have to keep the mapping in mind: When building a model, you map the physical fermionic operators to the usual spin/bosonic operators. The algorithms don’t care about the mapping, they just use the given Hamiltonian, be it given as MPO for DMRG or as nearest neighbor couplings for TEBD. Only when you do a measurement (e.g. by calculating an expectation value or a correlation function), you have to reverse this mapping. Be aware that in certain cases, e.g. when calculating the entanglement entropy on a certain bond, you cannot reverse this mapping (in a straightforward way), and thus your results might depend on how you defined the Jordan-Wigner string.
Whatever you do, you should first think about whether (and how much of) the Jordan-Wigner string cancels.
For example for many of the onsite operators (like the particle number operator N
or the spin operators in the SpinHalfFermionSite
)
the Jordan-Wigner string cancels completely and you can just ignore it both in onsite-terms and couplings.
To check whether the Jordan-Wigner string cancels for a given operator,
take a look at need_JW_string
and op_needs_JW()
.
In case of operators acting on different sites, you typically have a Jordan-Wigner string in between (e.g. for the \(c^\dagger_i c_j\) examples described above and below) or no Jordan-Wigner strings at all (e.g. for density-density interactions \(n_i n_j\)).
In fact, the case that the Jordan Wigner string on the left of the first non-trivial operator does not cancel is currently not supported
for models and expectation values, as it usually doesn’t appear in practice.
When building a model with the CouplingModel
,
onsite terms for which the Jordan-Wigner string cancels can be added directly.
Care has to be taken when adding couplings with add_coupling()
.
When you need a Jordan-Wigner string in between the operators, set the optional arguments op_string='JW', str_on_first=True. Then the function automatically takes care of the Jordan-Wigner string in the correct way, adding it to the left operator. With the default arguments, it is checked automatically whether the given operators need Jordan-Wigner strings.
Obviously, you should be careful about the convention which of the two coupling terms is applied first (in a physical sense, as an operator acting on a state), as this corresponds to a sign. We follow the convention that the operator given as argument op2 is applied first, independent of whether it ends up left or right in the MPS ordering sense.
As a concrete example, let us specify a hopping
\(\sum_{\langle i, j\rangle} (c^\dagger_i c_j + h.c.) = \sum_{\langle i, j\rangle} (c^\dagger_i c_j + c^\dagger_j c_i)\)
in a 1D chain of FermionSite
with add_coupling()
:
add_coupling(strength, 0, 'Cd', 0, 'C', 1, 'JW', True)
add_coupling(strength, 0, 'Cd', 0, 'C', -1, 'JW', True)
# (without the last 2 arguments, add_coupling checks for necessary JW strings automatically)
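To illustrate that this mapping really reproduces free-fermion physics, here is a plain-numpy sketch (deliberately not using TeNPy, so the operator lists above are spelled out by hand) that assembles the hopping Hamiltonian from the Jordan-Wigner operator lists and compares its ground-state energy to the exact single-particle result:

```python
import numpy as np

# assumed single-site operators in the FermionSite basis {|0>, |1>}
Id = np.eye(2)
C = np.array([[0., 1.], [0., 0.]])  # annihilation
Cd = C.T                            # creation
JW = np.diag([1., -1.])             # (-1)^n

def kron_list(ops):
    """Tensor product of local operators, site 0 leftmost."""
    res = np.eye(1)
    for op in ops:
        res = np.kron(res, op)
    return res

L = 4
H = np.zeros((2**L, 2**L))
for i in range(L - 1):
    # nearest-neighbor term c^dagger_i c_{i+1}: "Cd JW" on site i, "C" on site i+1
    ops = [Id] * L
    ops[i], ops[i + 1] = Cd @ JW, C
    term = kron_list(ops)
    H += term + term.T  # add the hermitian conjugate c^dagger_{i+1} c_i

# the many-body ground-state energy must equal the sum of the negative
# eigenvalues of the single-particle hopping matrix T
T = np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)
sp = np.linalg.eigvalsh(T)
E0 = np.linalg.eigvalsh(H)[0]
print(E0, sp[sp < 0].sum())  # both equal -sqrt(5) ~ -2.236
```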
Slightly more complicated, to specify the hopping \(\sum_{\langle i, j\rangle, s} (c^\dagger_{s,i} c_{s,j} + h.c.)\) in the Fermi-Hubbard model on a 2D square lattice, we would need more terms:
for (dx, dy) in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
    add_coupling(strength, 0, 'Cdu', 0, 'Cu', (dx, dy), 'JW', True)
    add_coupling(strength, 0, 'Cdd', 0, 'Cd', (dx, dy), 'JW', True)
If you want to build a model directly as an MPO or with nearest-neighbor bonds only, you have to care about how to handle the Jordan-Wigner string correctly.
The most important functions for doing measurements are probably expectation_value()
and correlation_function()
. Again, if all the Jordan-Wigner strings cancel, you don’t have
to worry about them at all, e.g. for many onsite operators or correlation functions involving only number operators.
If you measure operators involving multiple sites with expectation_value, take care to include the Jordan-Wigner
string correctly while building these operators.
The correlation_function()
supports a Jordan-Wigner string in between the two operators to
be measured. As for add_coupling()
, you should set the optional arguments op_string='JW', str_on_first=True
in that case.
Functions like expectation_value_term()
also take care of the Jordan-Wigner string (if specified in the documentation).
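As an illustration of such a measurement done by hand (again plain numpy, with the same assumed matrix conventions as before), the correlation \(\langle c^\dagger_0 c_1 \rangle\) in a simple two-site superposition state:

```python
import numpy as np

# assumed 2x2 matrix conventions for the FermionSite operators
I2 = np.eye(2)
C = np.array([[0., 1.], [0., 0.]])
Cd = C.T
JW = np.diag([1., -1.])

# global operators on 2 sites, with explicit JW strings (site 0 leftmost)
c0, cd0 = np.kron(C, I2), np.kron(Cd, I2)
c1, cd1 = np.kron(JW, C), np.kron(JW, Cd)

# |psi> = (|10> + |01>)/sqrt(2): one particle delocalized over both sites
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])
psi = (np.kron(ket1, ket0) + np.kron(ket0, ket1)) / np.sqrt(2)

corr = psi @ cd0 @ c1 @ psi  # <psi| c^dagger_0 c_1 |psi>
print(corr)  # 0.5
```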
Contributing¶
The code is maintained in a git repository, the official repository is on github. You’re welcome to contribute and submit pull requests on github. If you’re unsure how or what to do, you can ask for help in the community forum. If you want to become a member of the developer team, just ask ;-)
To keep consistency, we ask you to comply with the following guidelines for contributions:
Use a code style based on PEP 8. The git repo includes a config file .style.yapf for the python package yapf. yapf is a tool to auto-format code, e.g., via the command yapf -i some/file (-i for "in place"). We run yapf on a regular basis on the github master branch. If your branch diverged, it might help to run yapf before merging.
Note
Since no tool is perfect, you can format some regions of code manually and enclose them with the special comments # yapf: disable and # yapf: enable.
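For instance (a minimal, hypothetical snippet), a hand-aligned literal can be protected from reformatting like this:

```python
# yapf: disable
identity_3x3 = [
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
]
# yapf: enable
```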
Every function/class/module should be documented by its doc-string (c.f. PEP 257); additional documentation is in doc/. The documentation uses reStructuredText. If you're new to reStructuredText, read this introduction. We use the numpydoc extension to sphinx, so please read and follow these instructions for the doc strings. In addition, you can take a look at the following example file. Helpful hints on top of that:

r"""<- this r makes me a raw string, thus '\' has no special meaning.
Otherwise you would need to escape backslashes, e.g. in math formulas.

You can include cross references to classes, methods, functions, modules like
:class:`~tenpy.linalg.np_conserved.Array`, :meth:`~tenpy.linalg.np_conserved.Array.to_ndarray`,
:func:`tenpy.tools.math.toiterable`, :mod:`tenpy.linalg.np_conserved`.
The ~ in the beginning makes only the last part of the name appear in the generated documentation.
Documents of the userguide can be referenced with :doc:`/intro_npc` even from inside the doc-strings.
You can also cross-link to other documentations, e.g. :class:`numpy.ndarray` and
:func:`scipy.linalg.svd` will work, and so does :mod:.
Moreover, you can link to github issues, arXiv papers, dois, and topics in the community forum
with e.g. :issue:`5`, :arxiv:`1805.00055`, :doi:`10.1000/1` and :forum:`3`.

Write inline formulas as :math:`H |\Psi\rangle = E |\Psi\rangle` or displayed equations as

.. math ::

   e^{i\pi} + 1 = 0

In doc-strings, math can only be used in the Notes section.
To refer to variables within math, use `\mathtt{varname}`.

.. todo ::

   This block can describe things which need to be done and is
   automatically included in a section of :doc:`todo`.
"""
Use relative imports within TeNPy. Example:
from ..linalg import np_conserved as npc
Use the python package pytest for testing. Run it simply with pytest in tests/. You should make sure that all tests run through before you git push back into the public repo. Long-running tests are marked with the attribute slow; for a quick check you can also run pytest -m "not slow". Conversely, if you write new functions, please also include suitable tests!
During development, you might introduce # TODO comments. But also try to remove them again later! If you're not 100% sure that you will remove it soon, please add a doc-string with a .. todo :: block, such that we can keep track of it as explained in the previous point. Unfinished functions should raise NotImplementedError().
If you want to try out new things in temporary files: any folder named playground is ignored by git.
Thank You for helping with the development!
Building the documentation¶
You can use Sphinx to generate the full documentation in various formats (including HTML or PDF) yourself, as described in the following. First, install Sphinx and the extension numpydoc with:
pip install --upgrade sphinx numpydoc
Afterwards, simply go to the folder doc/ and run the following command:
make html
This should generate the html documentation in the folder doc/sphinx_build/html.
Open this folder (or to be precise: the file index.html in it) in your web browser
and enjoy this and other documentation beautifully rendered, with cross links, math formulas
and even a search function.
Other output formats are available as other make targets, e.g., make latexpdf
.
Note
Building the documentation with sphinx requires loading the modules.
Thus make sure that the folder tenpy is included in your $PYTHONPATH, as described in doc/INSTALL.rst.
To-Do list¶
Primary goals for the coming release¶
finish documentation and tests on existing stuff
Concrete things to be fixed in different files¶
The MPO class has no function for expectation value with MPS
Since we switched to python 3 completely, there’s no need to subclass ‘object’ anymore.
npc.Array: comparison with ==, pickle, hashable?
MPS class: group_sites, split_sites, pad
MPS class: probability_per_charge, charge_variance
MPS class: string correlation function
To be done at some point for the next releases¶
remove this file: use GitHub issues instead
overview and usage introduction to the overall library
trace: allow multiple axes to be traced over; optimize
Summary of defined classes/functions at the beginning of a module in the reference
Inconsistency: NearestNeighborModel.H_bond with bc_MPS='infinite' has bonds [(L, 0), (0, 1), ...], but expectation_value() takes two-site operators on bonds [(0, 1), (1, 2), ..., (L, 0)].
Wish-list¶
logging mechanism?
Johannes Motruk: extend simulation class: save standard variables like entropy, energy, etc?
Ruben: extend MPS TransferMatrix class
Jakob: function for Arrays: Perform trace over multiple pairs of legs at once. Tracing one after the other calculates unnecessary "off-diagonal" elements.
Auto-generated To-Do list¶
The following list is auto-generated by sphinx, extracting .. todo ::
blocks from doc-strings of the code.
Todo
Write UserGuide!!!
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/algorithms/dmrg.py:docstring of tenpy.algorithms.dmrg, line 30.)
Todo
Rebuild TDVP engine as subclasses of sweep. Do testing.
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/algorithms/mps_sweeps.py:docstring of tenpy.algorithms.mps_sweeps, line 18.)
Todo
- implement or wrap netcon.m, a function to find optimal contraction sequences
improve helpfulness of Warnings
_do_trace: trace over all pairs of legs at once. need the corresponding npc function first.
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/algorithms/network_contractor.py:docstring of tenpy.algorithms.network_contractor, line 8.)
Todo
This is still a beta version, use with care. The interface might still change.
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/algorithms/tdvp.py:docstring of tenpy.algorithms.tdvp, line 12.)
Todo
long-term: Much of the code is similar as in DMRG. To avoid too much duplicated code, we should have a general way to sweep through an MPS and updated one or two sites, used in both cases.
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/algorithms/tdvp.py:docstring of tenpy.algorithms.tdvp, line 16.)
Todo
-add further terms (e.g. c^dagger c^dagger + h.c.) to the Hamiltonian.
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/models/fermions_spinless.py:docstring of tenpy.models.fermions_spinless, line 3.)
Todo
WARNING: These models are still under development and not yet tested for correctness. Use at your own risk! Replicate known results to confirm models work correctly. Long term: implement different lattices. Long term: implement variable hopping strengths Jx, Jy.
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/models/hofstadter.py:docstring of tenpy.models.hofstadter, line 3.)
Todo
make sure this function is used for expectation values…
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/models/lattice.py:docstring of tenpy.models.lattice.Honeycomb.mps2lat_values, line 69.)
Todo
this doesn’t fully work yet…
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/models/lattice.py:docstring of tenpy.models.lattice.IrregularLattice, line 4.)
Todo
make sure this function is used for expectation values…
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/models/lattice.py:docstring of tenpy.models.lattice.IrregularLattice.mps2lat_values, line 69.)
Todo
make sure this function is used for expectation values…
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/models/lattice.py:docstring of tenpy.models.lattice.Kagome.mps2lat_values, line 69.)
Todo
make sure this function is used for expectation values…
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/models/lattice.py:docstring of tenpy.models.lattice.Ladder.mps2lat_values, line 69.)
Todo
make sure this function is used for expectation values…
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/models/lattice.py:docstring of tenpy.models.lattice.Lattice.mps2lat_values, line 69.)
Todo
make sure this function is used for expectation values…
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/models/lattice.py:docstring of tenpy.models.lattice.TrivialLattice.mps2lat_values, line 69.)
Todo
implement MPO for time evolution…
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/models/model.py:docstring of tenpy.models.model.MPOModel, line 8.)
Todo
make sure this function is used for expectation values…
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/models/toric_code.py:docstring of tenpy.models.toric_code.DualSquare.mps2lat_values, line 69.)
Todo
might be useful to add a “cleanup” function which removes operators cancelling each other and/or unused states. Or better use a ‘compress’ of the MPO?
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/networks/mpo.py:docstring of tenpy.networks.mpo.MPOGraph, line 18.)
Todo
Make more general: it should be possible to specify states as strings.
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/networks/mps.py:docstring of tenpy.networks.mps.build_initial_state, line 14.)
Todo
One can also look at the canonical ensembles by defining the conserved quantities differently, see Barthel (2016), arXiv:1607.01696 for details. Idea: usual charges on p, trivial charges on q; fix total charge to desired value. I think it should suffice to implement another from_infiniteT.
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/networks/purification_mps.py:docstring of tenpy.networks.purification_mps, line 104.)
Todo
Check if Jordan-Wigner strings for 4x4 operators are correct.
(The original entry is located in /home/docs/checkouts/readthedocs.org/user_builds/tenpy/checkouts/v0.5.0/tenpy/networks/site.py:docstring of tenpy.networks.site.SpinHalfFermionSite, line 62.)
CHANGELOG¶
All notable changes to the project will be documented in this file. The project adheres to semantic versioning.
[0.5.0] - 2019-12-18¶
Major rewriting of the DMRG Engines, see issue #39 and issue #85 for details. The EngineCombine and EngineFracture have been combined into a single TwoSiteDMRGEngine. The run function works as before. In case you have directly used the EngineCombine or EngineFracture, you should update your code and use the TwoSiteDMRGEngine instead.
Moved the init_LP and init_RP methods from MPS into MPSEnvironment and MPOEnvironment.
Addition/subtraction of Array: check whether both arrays have the same labels in different order, and in that case raise a warning that we will transpose in the future.
Made tenpy.linalg.np_conserved.Array.get_block() public (previously tenpy.linalg.np_conserved.Array._get_block).
groundstate() now returns a tuple (E0, psi0) instead of just psi0. Moreover, the argument charge_sector was added.
Simplification in the Lattice: Instead of having separate arguments/attributes/functions for 'nearest_neighbors', 'next_nearest_neighbors', 'next_next_nearest_neighbors' and possibly (Honeycomb) even 'fourth_nearest_neighbors', 'fifth_nearest_neighbors', collect them in a dictionary called pairs. Old call structures are still allowed, but deprecated.
issue #94: Array addition and inner() should reflect the order of the labels, if they coincided. Will change the default behaviour in the future, raising FutureWarning for now.
Default parameter for DMRG params: increased precision by setting P_tol_min down to the maximum of 1.e-30, lanczos_params['svd_min']**2 * P_tol_to_trunc, lanczos_params['trunc_cut']**2 * P_tol_to_trunc by default.
tenpy.algorithms.mps_sweeps with the Sweep class and EffectiveH to be a OneSiteH or TwoSiteH.
Single-Site DMRG with the SingleSiteDMRG.
Example function in examples/c_tebd.py how to run TEBD with a model originally having next-nearest neighbors.
increase_L() to allow increasing the unit cell of an MPS.
Additional option order='folded' for the Chain.
tenpy.algorithms.exact_diag.ExactDiag.from_H_mpo() wrapper as replacement for tenpy.networks.mpo.MPO.get_full_hamiltonian() and tenpy.networks.mpo.MPO.get_grouped_mpo(). The latter are now deprecated.
Argument max_size to limit the matrix dimension in ExactDiag.
tenpy.linalg.sparse.FlatLinearOperator.from_guess_with_pipe() to allow quickly converting matvec functions acting on multi-dimensional arrays to a FlatLinearOperator by combining the legs into a LegPipe.
tenpy.tools.math.speigsh() for the hermitian variant of speigs().
Allow for arguments 'LA', 'SA' in argsort().
tenpy.linalg.lanczos.lanczos_arpack() as a possible replacement of the self-implemented lanczos function.
tenpy.algorithms.dmrg.full_diag_effH() as another replacement of lanczos().
The new DMRG parameter 'diag_method' allows to select a method for the diagonalization of the effective Hamiltonian. See tenpy.algorithms.dmrg.DMRGEngine.diag() for details.
dtype attribute in EffectiveH.
tenpy.linalg.charges.LegCharge.get_qindex_of_charges() to allow selecting a block of an Array from the charges.
tenpy.algorithms.mps_sweeps.EffectiveH.to_matrix to allow contracting an EffectiveH to a matrix, as well as metadata tenpy.linalg.sparse.NpcLinearOperator.acts_on and tenpy.algorithms.mps_sweeps.EffectiveH.N.
Argument only_physical_legs in tenpy.networks.mps.MPS.get_total_charge().
MPO expectation_value() did not work for finite systems.
Calling compute_K() repeatedly with default parameters but on states with different chi would use the chi of the very first call for the truncation parameters.
Allow MPSEnvironment and MPOEnvironment to have MPS/MPO with different length.
group_sites() didn't work correctly in some situations.
matvec_to_array() returned the transpose of A.
tenpy.networks.mps.MPS.from_full() messed up the form of the first array.
issue #95: blowup of errors in DMRG with update_env > 0. Turns out to be a problem in the precision of the truncation error: TruncationError.eps was set to 0 if it would be smaller than machine precision. To fix it, I added from_S().
[0.4.1] - 2019-08-14¶
Switch the sign of the BoseHubbardModel and FermiHubbardModel to hopping and chemical potential having negative prefactors. Of course, the same adjustment happens in the BoseHubbardChain and FermiHubbardChain.
Moved BoseHubbardModel and BoseHubbardChain as well as FermiHubbardModel and FermiHubbardChain into the new module tenpy.models.hubbard.
Change arguments of coupling_term_handle_JW() and multi_coupling_term_handle_JW() to use strength and sites instead of op_needs_JW.
Only accept valid identifiers as operator names in add_op().
grid_concat() allows for None entries (representing zero blocks).
from_full() allows for 'segment' boundary conditions.
apply_local_op() allows for n-site operators.
Nearest-neighbor interaction in the BoseHubbardModel.
multiply_op_names() to replace ' '.join(op_names) and allow explicit compression/multiplication.
order_combine_term() to group operators together.
dagger() of MPOs (and to implement that, also flip_charges_qconj()).
has_label() to check if a label exists.
Addition of MPOs.
3 additional examples for chern insulators in examples/chern_insulators/.
from_MPOModel() for initializing nearest-neighbor models after grouping sites.
issue #36: long-range couplings could give IndexError.
issue #42: Onsite-terms in FermiHubbardModel were wrong for lattices with a non-trivial unit cell.
Missing a factor 0.5 in GUE().
Allow TermList to have terms with multiple operators acting on the same site.
Allow MPS indices outside the unit cell in mps2lat_idx() and lat2mps_idx().
expectation_value() did not work for n-site operators.
[0.4.0] - 2019-04-28¶
The argument order of tenpy.models.lattice.Lattice could be a tuple (priority, snake_winding) before. This is no longer valid and needs to be replaced by ("standard", snake_winding, priority).
Moved the boundary conditions bc_coupling from the tenpy.models.model.CouplingModel into the tenpy.models.lattice.Lattice (as bc). Using the parameter bc_coupling will raise a FutureWarning; one should set the boundary conditions directly in the lattice.
Added parameter permute (True by default) in tenpy.networks.mps.MPS.from_product_state() and tenpy.networks.mps.MPS.from_Bflat(). The resulting state will therefore be independent of the "conserve" parameter of the Sites - unlike before, where the meaning of the p_state argument might have changed.
Generalize and rename tenpy.networks.site.DoubleSite to tenpy.networks.site.GroupedSite, to allow for an arbitrary number of sites to be grouped. Arguments site0, site1, label0, label1 of the __init__ can be replaced with [site0, site1], [label0, label1], and op0, op1 of the kronecker_product with [op0, op1]; this will recover the functionality of the DoubleSite.
Restructured the call structure of the Mixer in DMRG, allowing an implementation of other mixers. To enable the mixer, set the DMRG parameter "mixer" to True or 'DensityMatrixMixer' instead of just 'Mixer'.
The interaction parameter in the tenpy.models.bose_hubbbard_chain.BoseHubbardModel (and tenpy.models.bose_hubbbard_chain.BoseHubbardChain) did not correspond to \(U/2 N (N-1)\) as claimed in the Hamiltonian, but to \(U N^2\). The correcting factor 1/2 and the change in the chemical potential have been fixed.
Major restructuring of tenpy.linalg.np_conserved and tenpy.linalg.charges. This should not break backwards-compatibility, but if you compiled the cython files, you need to remove the old binaries in the source directory. Using bash cleanup.sh might be helpful to do that, but it also removes other files within the repository, so be careful and make a backup beforehand to be on the safe side. Afterwards, recompile with bash compile.sh.
Changed structure of tenpy.models.model.CouplingModel.onsite_terms and tenpy.models.model.CouplingModel.coupling_terms: Each of them is now a dictionary with category strings as keys and the newly introduced tenpy.networks.terms.OnsiteTerms and tenpy.networks.terms.CouplingTerms as values.
tenpy.models.model.CouplingModel.calc_H_onsite() is deprecated in favor of new methods.
Argument raise_op2_left of tenpy.models.model.CouplingModel.add_coupling() is deprecated.
tenpy.networks.mps.MPS.expectation_value_term()
,tenpy.networks.mps.MPS.expectation_value_terms_sum()
andtenpy.networks.mps.MPS.expectation_value_multi_sites()
for expectation values of terms.tenpy.networks.mpo.MPO.expectation_value()
for an MPO.tenpy.linalg.np_conserved.Array.extend()
andtenpy.linalg.charges.LegCharge.extend()
, allowing to extend an Array with zeros.DMRG parameter
'orthogonal_to'
allows to calculate excited states for finite systems.
- Possibility to change the number of charges after creating LegCharges/Arrays.
- More general way to specify the order of sites in a tenpy.models.lattice.Lattice.
- New tenpy.models.lattice.Triangular, tenpy.models.lattice.Honeycomb, and tenpy.models.lattice.Kagome lattices.
- A way to specify nearest-neighbor couplings in a Lattice, along with methods to count the number of nearest neighbors for sites in the bulk, and a way to plot them (plot_coupling() and friends).
- tenpy.networks.mpo.MPO.from_grids() to generate an MPO from a grid.
- tenpy.models.model.MultiCouplingModel for couplings involving more than 2 sites.
- Request #8: allow a shift in the boundary conditions of the CouplingModel.
- Allow the use of state labels in tenpy.networks.mps.MPS.from_product_state().
- tenpy.models.model.CouplingMPOModel structuring the default initialization of most models.
- Allow forcing periodic boundary conditions for finite MPS in the CouplingMPOModel. This is not recommended, though.
- tenpy.models.model.NearestNeighborModel.calc_H_MPO_from_bond() and tenpy.models.model.MPOModel.calc_H_bond_from_MPO() for the conversion of H_bond into H_MPO and vice versa.
- tenpy.algorithms.tebd.RandomUnitaryEvolution for random unitary circuits.
- Allow documentation links to GitHub issues, arXiv, papers by DOI, and the forum with e.g. :issue:`5`, :arxiv:`1805.00055`, :doi:`10.21468/SciPostPhysLectNotes.5`, :forum:`3`.
- tenpy.models.model.CouplingModel.coupling_strength_add_ext_flux() for adding hoppings with external flux.
- tenpy.models.model.CouplingModel.plot_coupling_terms() to visualize the added coupling terms.
- tenpy.networks.terms.OnsiteTerms, tenpy.networks.terms.CouplingTerms, and tenpy.networks.terms.MultiCouplingTerm containing the terms for the CouplingModel and MultiCouplingModel. This allowed adding the category argument to add_onsite, add_coupling, and add_multi_coupling.
- tenpy.networks.terms.TermList as another (more human-readable) representation of terms, with conversion from and to the other *Term classes.
- tenpy.networks.mps.MPS.init_LP() and tenpy.networks.mps.MPS.init_RP() to initialize the left and right parts of an environment.
- tenpy.networks.mpo.MPOGraph.from_terms() and tenpy.networks.mpo.MPOGraph.from_term_list().
- Argument charge_sector in tenpy.networks.mps.MPS.correlation_length().
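For illustration, the state labels accepted by MPS.from_product_state() let one write labels like 'up' and 'down' instead of raw basis indices. A minimal sketch of the label-to-index resolution behind such an interface (illustrative only, not the actual TeNPy implementation):

```python
# Illustrative sketch of resolving state labels to local basis indices,
# as allowed by MPS.from_product_state(); this is NOT the actual TeNPy
# implementation, just the idea behind it.
def resolve_product_state(state_labels, p_state):
    """Map per-site labels (or plain indices) to local basis indices."""
    lookup = {label: i for i, label in enumerate(state_labels)}
    return [lookup[s] if isinstance(s, str) else s for s in p_state]

# spin-1/2 chain of 4 sites, specified by labels and one plain index
indices = resolve_product_state(['up', 'down'], ['up', 'down', 'up', 0])
# indices == [0, 1, 0, 0]
```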
- Moved the toy codes from the folder examples/ to a new folder toycodes/ to separate them clearly.
- Major remodelling of the internals of tenpy.linalg.np_conserved and tenpy.linalg.charges.
- Restructured Lanczos into a class; added time evolution calculating exp(A*dt)|psi0>.
- Warning for poorly conditioned Lanczos; to overcome this, enable the new parameter reortho.
- Restructured tenpy.algorithms.dmrg: run() is now just a wrapper; run(psi, model, pars) is roughly equivalent to eng = EngineCombine(psi, model, pars); eng.run().
- Added init_env() and reset_stats() to allow a simple restart of DMRG with slightly different parameters, e.g. for tuning Hamiltonian parameters.
- Call canonical_form() for infinite systems if the final state is not in canonical form.
- Changed default values for some parameters:
  - Set trunc_params['chi_max'] = 100. Not setting a chi_max at all will lead to memory problems.
  - Disable DMRG_params['chi_list'] = None by default to avoid conflicting settings.
  - Reduce to mixer_params['amplitude'] = 1.e-5. A too strong mixer screws DMRG up pretty badly.
  - Increase Lanczos_params['N_cache'] = N_max (i.e., keep all states).
  - Set DMRG_params['P_tol_to_trunc'] = 0.05 and provide reasonable …_min and …_max values.
  - Increased the (default) DMRG accuracy by setting DMRG_params['max_E_err'] = 1.e-8 and DMRG_params['max_S_err'] = 1.e-5.
  - Don't check the (absolute) energy for convergence in Lanczos.
  - Set DMRG_params['norm_tol'] = 1.e-5 to check whether the final state is in canonical form.
- Reduced the verbosity of get_parameter(): print parameters only for verbosity >= 1, and default values only for verbosity >= 2.
- Don't print the energy during real-time TEBD evolution - it's preserved up to truncation errors.
- Renamed the SquareLattice class to tenpy.models.lattice.Square for better consistency.
- Auto-determine whether Jordan-Wigner strings are necessary in add_coupling().
- The way the labels of npc Arrays are stored internally changed to a simple list with None entries. There is a deprecated property setter yielding a dictionary with the labels.
- Renamed the first_LP and last_RP arguments of MPSEnvironment and MPOEnvironment to init_LP and init_RP.
- Testing: instead of the (outdated) nose, we now use pytest <https://pytest.org> for testing.
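The changed DMRG-related defaults listed above can be summarized in a single options dict (a summary sketch only, with keys as named in this release; not a complete set of DMRG options):

```python
# Summary of the changed default values listed above (a sketch; keys as
# named in this release, not a complete set of DMRG options).
dmrg_defaults = {
    'trunc_params': {'chi_max': 100},      # avoid unbounded bond dimension
    'chi_list': None,                      # disabled to avoid conflicting settings
    'mixer_params': {'amplitude': 1.e-5},  # weaker mixer
    'P_tol_to_trunc': 0.05,
    'max_E_err': 1.e-8,                    # energy convergence criterion
    'max_S_err': 1.e-5,                    # entropy convergence criterion
    'norm_tol': 1.e-5,                     # check final canonical form
}
```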
- Issue #22: serious bug in tenpy.linalg.np_conserved.inner(): if do_conj=True is used with non-zero qtotal, it returned 0. instead of non-zero values.
- Avoid an error in tenpy.networks.mps.MPS.apply_local_op().
- Don't carry around the total charge when using DMRG with a mixer.
- Corrected the couplings of the FermionicHubbardChain.
- Issue #2: memory leak in the Cython parts when using intelpython/anaconda.
- Issue #4: incompatible data types.
- Issue #6: the CouplingModel generated wrong couplings in some cases.
- Issue #19: convergence of the energy was slow for infinite systems with N_sweeps_check=1.
- More reasonable traceback in case of wrong labels.
- Wrong dtype of npc.Array when adding/subtracting/… arrays of different data types.
- Could get a wrong H_bond for completely decoupled chains.
- SVD could return outer indices with different axes.
- tenpy.networks.mps.MPS.overlap() now works for MPS with different total charge (e.g. after psi.apply_local_op(i, 'Sp')).
- Skip existing graph edges in MPOGraph.add() when building up terms without the strength part.
[0.3.0] - 2018-02-19¶
This is the first version published on GitHub.
- Cython modules for np_conserved and charges, which can optionally be compiled for speed-ups.
- tools.optimization for dynamical optimization.
- Various models.
- More predefined lattice sites.
- Example toy codes.
- Network contractor for general networks.
- Switched to Python 3, dropping Python 2 support.
[0.2.0] - 2017-02-24¶
- Compatible with Python 2 and Python 3 (using the 2to3 tool).
- Development version.
- Includes TEBD and DMRG.
Changes compared to previous TeNPy¶
This library is based on a previous (closed-source) version developed mainly by Frank Pollmann, Michael P. Zaletel, and Roger S. K. Mong. While almost all files are completely rewritten and not backwards compatible, the overall structure is similar. In the following, we list only the most important changes.
- Syntax style based on PEP 8. Use yapf -r -i ./ to ensure consistent formatting over the whole project. The special comments # yapf: disable and # yapf: enable can be used for manual formatting of some regions of code.
- Following PEP 8, we distinguish between 'private' functions, indicated by names starting with an underscore and to be used only within the library, and the public API. The public API should be backwards compatible between releases, while private functions might change at any time.
- All modules are in the folder tenpy to avoid name conflicts with other libraries.
- Within the library, relative imports are used, e.g. from ..tools.math import (toiterable, tonparray). Exception: the files in tests/ and examples/ run as __main__ and can't use relative imports.
- Files outside of the library (and in tests/, examples/) should use absolute imports, e.g. import tenpy.algorithms.tebd.
- Renamed tenpy/mps/ to tenpy/networks, since it contains various tensor networks.
- Added Site, describing the local physical sites by providing the physical LegCharge and onsite operators.
- Pure Python, no need to compile!
- In the module tenpy.linalg instead of algorithms/linalg.
- Moved the functionality for charges to charges.
- Introduced the classes ChargeInfo (basically the old q_number and mod_q) and LegCharge (the old qind, qconj).
- Introduced the class LegPipe to replace the old leg_pipe. It is derived from LegCharge and used as a leg in the array class. Thus any resulting array (after tensordot etc.) still has all the necessary information to split the legs. (The legs are shared between different arrays, so they are saved only once in memory.)
- Enhanced the indexing of the array class to support slices and 1D index arrays along certain axes.
- More functions, e.g. grid_outer().
- Introduced TruncationError for easy handling of the total truncation error.
- Some truncation parameters are renamed and may have a different meaning, e.g. svd_max -> svd_min has no 'log' in the definition.
- Separate Lanczos module in tenpy/linalg/. Strangely, the old version orthogonalized against the complex conjugates of orthogonal_to (contrary to its doc string!), and thus calculated 'theta_o' as bra, not ket.
- Cleaned up; provided prototypes for the DMRG engine and mixer.
- Added tenpy.tools.misc, which contains 'random stuff' from the old tools.math like to_iterable and to_array (renamed to follow PEP 8, documented).
- Moved the fitting functionality to tenpy.tools.fit.
- Enhanced tenpy.tools.string.vert_join() for nice formatting.
- Moved (parts of) the old cluster/omp.py to tenpy.tools.process.
- Added tenpy.tools.params for simplified handling of parameters/arguments for models and/or algorithms. It is similar to the old models.model.set_var, but is used for algorithms as well. Note that it may modify the given dictionary.
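As a rough illustration of this parameter-handling pattern (a simplified sketch only; the actual tenpy.tools.params API may differ):

```python
# Simplified sketch of the parameter-handling idea: read a value with a
# default, remember that the key was used, and optionally print it.
# This mirrors the pattern only; it is NOT the actual TeNPy function.
def get_parameter(params, key, default, descr, verbose=0):
    used = params.setdefault('_used_keys', set())  # modifies the given dict
    value = params.get(key, default)
    used.add(key)
    if verbose >= 1:
        print("parameter %r=%r for %s" % (key, value, descr))
    return value

opts = {'chi_max': 200}
chi = get_parameter(opts, 'chi_max', 100, 'DMRG')  # explicit value: 200
dt = get_parameter(opts, 'dt', 0.1, 'TEBD')        # falls back to default
```

Tracking used keys in the dictionary itself is what allows warning about unused (e.g. misspelled) parameters at the end of a run.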
TeNPy developer team¶
The following people are part of the TeNPy developer team.
The full list of contributors can be obtained from the git repository with ``git shortlog -sn``.
Johannes Hauschild tenpy@johannes-hauschild.de
Frank Pollmann
Michael P. Zaletel
Maximilian Schulz
Leon Schoonderwoerd
Kévin Hémery
Gunnar Moeller
Jakob Unfried
Yu-Chin Tzeng
Further, the code is based on an earlier version of the library, mainly developed by
Frank Pollmann, Michael P. Zaletel and Roger S. K. Mong.
License¶
The source code documented here is published under a GPL v3 license, which we include below.
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<https://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<https://www.gnu.org/licenses/why-not-lgpl.html>.
TeNPy Reference¶
TeNPy - a Python library for Tensor Network Algorithms
TeNPy is a library for algorithms working with tensor networks, e.g., matrix product states and matrix product operators, designed to study the physics of strongly correlated quantum systems. The code is intended to be accessible for newcomers and yet powerful enough for day-to-day research.
Submodules
tenpy.algorithms — A collection of algorithms such as TEBD and DMRG.
tenpy.linalg — Linear-algebra tools for tensor networks.
tenpy.models — Definition of the various models.
tenpy.networks — Definitions of tensor networks like MPS and MPO.
tenpy.tools — A collection of tools: mostly short yet quite useful functions.
tenpy.version — Access to the version of this library.
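To give a flavor of the objects handled by the networks submodule, here is a minimal, self-contained NumPy sketch (not part of TeNPy, in the spirit of its toy codes): it builds a random matrix product state as a chain of three-leg tensors, contracts it into the full wavefunction (feasible only for tiny systems), and evaluates a single-site Pauli-z expectation value.

```python
import numpy as np

# Toy MPS: L sites, bond dimension chi, physical dimension d (spin-1/2).
L, chi, d = 6, 4, 2
rng = np.random.default_rng(0)

# Random MPS tensors B[i] of shape (chi_left, d, chi_right);
# the boundary bonds are trivial (dimension 1).
Bs = []
for i in range(L):
    chiL = 1 if i == 0 else chi
    chiR = 1 if i == L - 1 else chi
    Bs.append(rng.standard_normal((chiL, d, chiR)))

# Contract all shared bonds to obtain the full wavefunction.
# This costs O(d**L) memory -- real MPS algorithms never do this;
# it is only to make the toy example verifiable.
psi = Bs[0]
for B in Bs[1:]:
    psi = np.tensordot(psi, B, axes=(-1, 0))  # contract right bond with left bond
psi = psi.reshape(d**L)  # trivial boundary bonds drop out
psi /= np.linalg.norm(psi)  # normalize the state

# Expectation value of sigma_z on site 0:
# <psi| sigma_z(0) |psi> = sum_{a,b,j} conj(psi[a,j]) sz[a,b] psi[b,j]
sz = np.diag([1.0, -1.0])
psi_mat = psi.reshape(d, -1)  # split off the first physical index
exp_sz = np.einsum('aj,ab,bj->', psi_mat.conj(), sz, psi_mat)
print(exp_sz)
```

A genuine TeNPy calculation would instead keep the state in MPS form throughout, using classes like `MPS` from the networks submodule together with a model and an algorithm such as DMRG.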