Algorithm

Inheritance Diagram

Inheritance diagram of tenpy.algorithms.algorithm.Algorithm

Methods

Algorithm.__init__(psi, model, options, *[, ...])

Algorithm.estimate_RAM([mem_saving_factor])

Gives an approximate prediction for the required memory usage.

Algorithm.get_resume_data([...])

Return necessary data to resume a run() interrupted at a checkpoint.

Algorithm.resume_run()

Resume a run that was interrupted.

Algorithm.run()

Actually run the algorithm.

Algorithm.switch_engine(other_engine, *[, ...])

Initialize algorithm from another algorithm instance of a different class.

Class Attributes and Properties

Algorithm.verbose

class tenpy.algorithms.algorithm.Algorithm(psi, model, options, *, resume_data=None, cache=None)[source]

Bases: object

Base class and common interface for a tensor-network based algorithm in TeNPy.

Parameters:
  • psi – Tensor network to be updated by the algorithm.

  • model (Model | None) – Model with the representation of the Hamiltonian suitable for the algorithm. None for algorithms which don’t require a model.

  • options (dict-like) – Optional parameters for the algorithm. In the online documentation, you can find the correct set of options in the Config Index.

  • resume_data (None | dict) – Can only be passed as keyword argument. By default (None) ignored. If a dict, it should contain the data returned by get_resume_data() when intending to continue/resume an interrupted run. If it contains psi, this takes precedence over the argument psi.

  • cache (None | DictCache) – The cache to be used to reduce memory usage. None defaults to a new, trivial DictCache which keeps everything in RAM.
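The precedence of resume_data over the psi argument can be sketched in pure Python. This is a minimal mock of the documented constructor contract, not TeNPy’s actual implementation:

```python
# Minimal sketch of the constructor contract described above:
# if `resume_data` contains "psi", it takes precedence over the
# `psi` argument. (Illustrative mock, not TeNPy's source code.)
class MockAlgorithm:
    def __init__(self, psi, model, options, *, resume_data=None, cache=None):
        if resume_data is not None and "psi" in resume_data:
            psi = resume_data["psi"]  # resume_data wins over the argument
        self.psi = psi
        self.model = model
        self.options = dict(options)
        self.resume_data = resume_data if resume_data is not None else {}
        self.cache = cache if cache is not None else {}  # trivial in-RAM "cache"

eng = MockAlgorithm("fresh_psi", None, {}, resume_data={"psi": "saved_psi"})
print(eng.psi)  # saved_psi
```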

Options

config Algorithm
option summary

trunc_params

Truncation parameters as described in the truncation config.

option trunc_params: dict

Truncation parameters as described in the truncation config.

psi

Tensor network to be updated by the algorithm.

model

Model with the representation of the Hamiltonian suitable for the algorithm.

Type:

Model

options

Optional parameters.

Type:

Config

checkpoint

An event that the algorithm emits at regular intervals when it is in a “well defined” step, where an intermediate status report, measurements and/or interrupting and saving to disk for later resume make sense.

Type:

EventHandler
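The checkpoint attribute follows a connect/emit event pattern; listeners registered on it are called whenever the algorithm reaches a well-defined step. A minimal sketch of that pattern (TeNPy’s actual EventHandler has a richer interface; this mock only illustrates the idea):

```python
# Minimal connect/emit event sketch (illustrative, not TeNPy's EventHandler).
class Event:
    def __init__(self):
        self._listeners = []

    def connect(self, fn):
        self._listeners.append(fn)

    def emit(self, *args, **kwargs):
        # call every registered listener, e.g. to save to disk or measure
        return [fn(*args, **kwargs) for fn in self._listeners]

checkpoint = Event()
seen = []
checkpoint.connect(lambda step: seen.append(step))
for step in range(3):
    checkpoint.emit(step)  # the algorithm emits at regular intervals
print(seen)  # [0, 1, 2]
```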

cache

The cache to be used.

Type:

DictCache or subclass

resume_data

Data given as parameter resume_data and/or to be returned by get_resume_data().

Type:

dict

_resume_psi

Possibly a copy of psi to be used for get_resume_data().

classmethod switch_engine(other_engine, *, options=None, **kwargs)[source]

Initialize algorithm from another algorithm instance of a different class.

You can initialize one engine from another, not too different subclass. Internally, this function calls get_resume_data() to extract data from the other_engine and then initializes the new class with it.

Note that in most cases the data is transferred without making copies, even the options! Thus, calling run() on one of the two algorithm instances will also modify the state, environment, etc. of the other. We therefore recommend making the switch as engine = OtherSubClass.switch_engine(engine), directly replacing the reference.

Parameters:
  • cls (class) – Subclass of Algorithm to be initialized.

  • other_engine (Algorithm) – The engine from which data should be transferred. An instance of another, not too different Algorithm subclass; e.g., you can switch from the TwoSiteDMRGEngine to the OneSiteDMRGEngine.

  • options (None | dict-like) – If not None, these options are used for the new initialization. If None, take the options from the other_engine.

  • **kwargs – Further keyword arguments for class initialization. If not defined, resume_data is collected with get_resume_data().
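The mechanics can be sketched as follows: pull resume_data out of the old engine and hand it to the new class. This is a hypothetical pure-Python mock of the documented behavior, not TeNPy’s implementation, and Engine/OneSiteEngine are illustrative names:

```python
# Hypothetical sketch of switch_engine: extract resume_data from the
# other engine and re-initialize the new class with it (no copies made).
class Engine:
    def __init__(self, psi, model, options, *, resume_data=None):
        if resume_data is not None:
            psi = resume_data.get("psi", psi)
        self.psi = psi
        self.model = model
        self.options = options  # shared, not copied

    def get_resume_data(self):
        return {"psi": self.psi}

    @classmethod
    def switch_engine(cls, other_engine, *, options=None, **kwargs):
        if options is None:
            options = other_engine.options  # take options from the old engine
        kwargs.setdefault("resume_data", other_engine.get_resume_data())
        return cls(other_engine.psi, other_engine.model, options, **kwargs)

class OneSiteEngine(Engine):
    pass

engine = Engine("psi", "model", {"trunc_params": {}})
engine = OneSiteEngine.switch_engine(engine)  # replace the reference, as recommended
```

Because the options dict is handed over without a copy, both engines would see any later modification of it, mirroring the caveat above.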

run()[source]

Actually run the algorithm.

Needs to be implemented in subclasses.

resume_run()[source]

Resume a run that was interrupted.

In case we saved an intermediate result at a checkpoint, this function allows resuming the run() of the algorithm (after re-initialization with the resume_data). Since most algorithms just have a while loop with break conditions, the default behavior implemented here is to just call run().
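That default delegation, together with run() being abstract in the base class, can be sketched in a few lines (illustrative mock, not TeNPy’s source):

```python
# Sketch of the default behavior: run() must be implemented by subclasses,
# and resume_run() simply calls run() again.
class Base:
    def run(self):
        raise NotImplementedError("run() must be implemented in subclasses")

    def resume_run(self):
        # most algorithms loop with break conditions,
        # so resuming is just running again
        return self.run()

class MyAlgorithm(Base):
    def run(self):
        return "finished"

print(MyAlgorithm().resume_run())  # finished
```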

get_resume_data(sequential_simulations=False)[source]

Return necessary data to resume a run() interrupted at a checkpoint.

At a checkpoint, you can save psi, model and options along with the data returned by this function. When the simulation aborts, you can resume it using this saved data with:

eng = AlgorithmClass(psi, model, options, resume_data=resume_data)
eng.resume_run()

An algorithm which doesn’t support this should override resume_run() to raise an error.

Parameters:

sequential_simulations (bool) – If True, return only the data for re-initializing a sequential simulation run, where we “adiabatically” follow the evolution of a ground state (for variational algorithms), or do a series of quenches (for time evolution algorithms); see run_seq_simulations().

Returns:

resume_data – Dictionary with necessary data (apart from copies of psi, model, options) that allows continuing the simulation from where we are now. It might contain an explicit copy of psi.

Return type:

dict

estimate_RAM(mem_saving_factor=None)[source]

Gives an approximate prediction for the required memory usage.

This calculation is based on the requested bond dimension, the local Hilbert space dimension, the number of sites, and the boundary conditions.

Parameters:

mem_saving_factor (float) – The fraction of RAM still required after accounting for conservation laws. By default (None), it is extracted from the model automatically; since the factor depends on the model parameters, this is only possible in a few cases, and in most cases it has to be estimated. If you have a better estimate, pass the value directly. One way to obtain it is to build the initial state psi (usually by performing DMRG) and then call print(psi.get_B(0).sparse_stats()). The first line of the output reports the fraction of nonzero entries, for example, 6 of 16 entries (=0.375) nonzero. This fraction corresponds to the mem_saving_factor; in our example, it is 0.375.

Returns:

usage – Required RAM in MB.

Return type:

float
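As a back-of-envelope illustration of the quantities listed above (bond dimension, local Hilbert space dimension, number of sites): an MPS stores roughly L tensors of shape (chi, d, chi). This is NOT TeNPy’s exact formula, and estimate_mps_ram_mb is a hypothetical helper name, but it shows how mem_saving_factor enters the estimate:

```python
# Rough, assumed MPS memory estimate (not TeNPy's actual calculation):
# L tensors with ~chi**2 * d complex entries each, scaled by the
# fraction of entries that are actually nonzero (mem_saving_factor).
def estimate_mps_ram_mb(L, chi, d, mem_saving_factor=1.0, bytes_per_entry=16):
    entries = L * chi**2 * d  # dense tensor entries (complex128: 16 bytes each)
    return entries * bytes_per_entry * mem_saving_factor / 1e6  # in MB

print(estimate_mps_ram_mb(L=100, chi=1000, d=2, mem_saving_factor=0.375))
# -> 1200.0 (MB)
```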

See also

tenpy.simulations.simulation.estimate_simulation_RAM

Global function calling this method.