TimeEvolutionAlgorithm

Inheritance Diagram

Inheritance diagram of tenpy.algorithms.algorithm.TimeEvolutionAlgorithm

Methods

TimeEvolutionAlgorithm.__init__(psi, model, ...)

TimeEvolutionAlgorithm.estimate_RAM([...])

Gives an approximate prediction for the required memory usage.

TimeEvolutionAlgorithm.evolve(N_steps, dt)

Evolve by N_steps*dt.

TimeEvolutionAlgorithm.evolve_step(dt)

TimeEvolutionAlgorithm.get_resume_data([...])

Return necessary data to resume a run() interrupted at a checkpoint.

TimeEvolutionAlgorithm.prepare_evolve(dt)

Prepare an evolution step.

TimeEvolutionAlgorithm.resume_run()

Resume a run that was interrupted.

TimeEvolutionAlgorithm.run()

Perform a (real-)time evolution of psi by N_steps * dt.

TimeEvolutionAlgorithm.run_evolution(N_steps, dt)

Perform a (real-)time evolution of psi by N_steps * dt.

TimeEvolutionAlgorithm.switch_engine(...[, ...])

Initialize algorithm from another algorithm instance of a different class.

Class Attributes and Properties

TimeEvolutionAlgorithm.time_dependent_H

whether the algorithm supports time-dependent H

class tenpy.algorithms.algorithm.TimeEvolutionAlgorithm(psi, model, options, **kwargs)[source]

Bases: Algorithm

Common interface for (real) time evolution algorithms.

Parameters are the same as for Algorithm.

Options

config TimeEvolutionAlgorithm
option summary

dt

Minimal time step by which to evolve.

max_N_sites_per_ring (from Algorithm) in Algorithm

Threshold for raising errors on too many sites per ring. Default ``18``. [...]

max_trunc_err in TimeDependentHAlgorithm.evolve

Threshold for raising errors on too large truncation errors. Default ``0.01 [...]

N_steps

Number of time steps `dt` to evolve by in :meth:`run`. [...]

preserve_norm

Whether the state will be normalized to its initial norm after each time st [...]

start_time

Initial value for :attr:`evolved_time`.

start_trunc_err

Initial truncation error for :attr:`trunc_err`.

trunc_params (from Algorithm) in Algorithm

Truncation parameters as described in :cfg:config:`truncation`.

option start_time: float

Initial value for evolved_time.

option dt: float

Minimal time step by which to evolve.

option N_steps: int

Number of time steps dt to evolve by in run(). Adjusting dt and N_steps at the same time allows one to keep the total evolution time per run() call fixed. Moreover, Trotter decompositions of order > 1, for example, are slightly more efficient if more than one step is performed at once.
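
As a small, hedged illustration of this point, both of the following option choices evolve by the same total time per call to run(), the second just with a finer time step:

opts_coarse = {'dt': 0.1,  'N_steps': 5}    # run() evolves by N_steps * dt = 0.5
opts_fine   = {'dt': 0.05, 'N_steps': 10}   # also evolves by 0.5, with smaller Trotter steps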

option preserve_norm: bool

Whether the state will be normalized to its initial norm after each time step. By default, this is False for real time evolution and True for imaginary time evolution.

option start_trunc_err: TruncationError

Initial truncation error for trunc_err.
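
For orientation, a minimal, hedged sketch of how such an options dictionary is typically passed to a concrete subclass; TimeEvolutionAlgorithm itself is only the common interface, and the choice of TEBDEngine, the transverse-field Ising chain, and the product initial state are assumptions of this example:

from tenpy.models.tf_ising import TFIChain
from tenpy.networks.mps import MPS
from tenpy.algorithms.tebd import TEBDEngine   # one concrete TimeEvolutionAlgorithm subclass

L = 16
model = TFIChain({'L': L, 'J': 1., 'g': 1.5, 'bc_MPS': 'finite'})
psi = MPS.from_product_state(model.lat.mps_sites(), ['up'] * L, bc='finite')

options = {
    'start_time': 0.,                                     # initial value of evolved_time
    'dt': 0.05,                                           # minimal time step
    'N_steps': 10,                                        # run() evolves by N_steps * dt = 0.5
    'trunc_params': {'chi_max': 100, 'svd_min': 1.e-10},  # see the truncation config
}
eng = TEBDEngine(psi, model, options)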

evolved_time

Indicating how long psi has been evolved, psi = exp(-i * evolved_time * H) psi(t=0). Note that the real part of evolved_time increases for a real-time evolution, while its imaginary part decreases for an imaginary time evolution.

Type:

float | complex
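
Purely as an illustration of this sign convention (scalar toy numbers, not TeNPy API): a real evolved_time gives a unitary phase, while an evolved_time with negative imaginary part reproduces exp(-tau * H):

import cmath

E = 1.0                                        # stand-in for an eigenvalue of H
t = 0.5                                        # real time: evolved_time = t
tau = 0.5                                      # imaginary time: evolved_time = -1j * tau

print(abs(cmath.exp(-1j * t * E)))             # 1.0 -> norm preserved
print(abs(cmath.exp(-1j * (-1j * tau) * E)))   # exp(-tau * E) ~ 0.61 -> norm decays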

trunc_err

Upper bound for the accumulated error of the represented state, which is introduced due to the truncation during the sequence of update steps.

Type:

TruncationError

time_dependent_H = False

whether the algorithm supports time-dependent H

get_resume_data(sequential_simulations=False)[source]

Return necessary data to resume a run() interrupted at a checkpoint.

At a checkpoint, you can save psi, model and options along with the data returned by this function. When the simulation aborts, you can resume it using this saved data with:

eng = AlgorithmClass(psi, model, options, resume_data=resume_data)
eng.resume_run()

An algorithm which doesn’t support this should override resume_run to raise an Error.

Parameters:

sequential_simulations (bool) – If True, return only the data for re-initializing a sequential simulation run, where we “adiabatically” follow the evolution of a ground state (for variational algorithms), or do series of quenches (for time evolution algorithms); see run_seq_simulations().

Returns:

resume_data – Dictionary with the necessary data (apart from copies of psi, model, options) to continue the algorithm run from where we are now. It might contain an explicit copy of psi.

Return type:

dict
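
As a hedged sketch of the checkpointing described above (continuing the engine sketch from the options section), one could store the data with tenpy.tools.hdf5_io; the file name and the surrounding run logic are placeholders:

from tenpy.tools import hdf5_io   # requires h5py

checkpoint = {
    'psi': eng.psi,
    'model': eng.model,
    'options': eng.options,
    'resume_data': eng.get_resume_data(),
}
hdf5_io.save(checkpoint, 'checkpoint.h5')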

run()[source]

Perform a (real-)time evolution of psi by N_steps * dt.

You probably want to call this in a loop along with measurements. The recommended way to do this is via the RealTimeEvolution simulation class.
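
A hedged sketch of such a loop, continuing the engine sketch above; the observable, the stopping time, and a real (non-complex) dt are assumptions of this example:

t_max = 5.0
data = []
while eng.evolved_time < t_max:      # real-time evolution: evolved_time stays a float
    eng.run()                        # evolves psi by N_steps * dt
    data.append((eng.evolved_time, eng.psi.expectation_value('Sz')))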

run_evolution(N_steps, dt)[source]

Perform a (real-)time evolution of psi by N_steps * dt.

This is the inner part of run() without the logging. For parameters see TimeEvolutionAlgorithm.

prepare_evolve(dt)[source]

Prepare an evolution step.

This method is used to prepare repeated calls of evolve() given the model. For example, it may generate approximations of U=exp(-i H dt). To avoid overhead, it may cache the result depending on parameters/options; but it should always regenerate it if force_prepare_evolve is set.

Parameters:

dt (float) – The time step to be used.

evolve(N_steps, dt)[source]

Evolve by N_steps*dt.

Subclasses may override this with a more efficient way of performing N_steps update steps.

Parameters:
  • N_steps (int) – The number of time steps by dt to take at once.

  • dt (float) – Small time step. Might be ignored if already used in prepare_evolve().

Options

config TimeEvolutionAlgorithm

option max_trunc_err: float

Threshold for raising errors on too large truncation errors. Default 0.01. See consistency_check(). When the total accumulated truncation error (its eps) exceeds this value, an error is raised. This can be downgraded to a warning by setting the option to None.

Returns:

trunc_err – Sum of truncation errors introduced during evolution.

Return type:

TruncationError
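
To make the division of labor between prepare_evolve() and evolve() concrete, here is a hedged skeleton of a hypothetical subclass; _build_U and _apply_U are placeholder helpers, not part of TeNPy:

from tenpy.algorithms.algorithm import TimeEvolutionAlgorithm

class MyEvolution(TimeEvolutionAlgorithm):
    """Hypothetical subclass sketching the prepare_evolve()/evolve() split."""

    def prepare_evolve(self, dt):
        # cache an approximation of U = exp(-i H dt), regenerating it when forced
        if getattr(self, '_U_dt', None) != dt or self.force_prepare_evolve:
            self._U = self._build_U(dt)            # placeholder helper
            self._U_dt = dt
            self.force_prepare_evolve = False

    def evolve(self, N_steps, dt):
        trunc_err = self.trunc_err.__class__()     # a fresh, zero TruncationError
        for _ in range(N_steps):
            trunc_err = trunc_err + self._apply_U(self._U)   # placeholder: apply U once, truncate psi
        self.evolved_time = self.evolved_time + N_steps * dt
        return trunc_err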

estimate_RAM(mem_saving_factor=None)[source]

Gives an approximate prediction for the required memory usage.

This calculation is based on the requested bond dimension, the local Hilbert space dimension, the number of sites, and the boundary conditions.

Parameters:

mem_saving_factor (float) – Represents the amount of RAM saved due to conservation laws. By default, it is None and extracted from the model automatically. However, this is only possible in a few cases, since the factor depends on the model parameters; in most cases it needs to be estimated. If one has a better estimate, one can pass the value directly. The value can be extracted by building the initial state psi (usually by performing DMRG) and then calling print(psi.get_B(0).sparse_stats()). TeNPy will print the fraction of nonzero entries in the first line, for example, 6 of 16 entries (=0.375) nonzero. This fraction corresponds to the mem_saving_factor; in this example, it is 0.375.

Returns:

usage – Required RAM in MB.

Return type:

float

See also

tenpy.simulations.simulation.estimate_simulation_RAM

The global function calling this.
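
A hedged usage sketch following the recipe in the parameter description above, continuing the engine sketch from the options section; the fraction 0.375 is just the example value quoted there:

print(psi.get_B(0).sparse_stats())    # first line reports e.g. "6 of 16 entries (=0.375) nonzero"
usage_MB = eng.estimate_RAM(mem_saving_factor=0.375)
print(f"estimated RAM: {usage_MB:.1f} MB")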

resume_run()[source]

Resume a run that was interrupted.

In case we saved an intermediate result at a checkpoint, this function allows resuming the run() of the algorithm (after re-initialization with the resume_data). Since most algorithms just have a while loop with break conditions, the default behavior implemented here is to simply call run().
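
Continuing the hedged checkpoint sketch from get_resume_data() above; again, the file name and the concrete engine class are placeholders:

from tenpy.tools import hdf5_io
from tenpy.algorithms.tebd import TEBDEngine   # whichever AlgorithmClass was used originally

checkpoint = hdf5_io.load('checkpoint.h5')
eng = TEBDEngine(checkpoint['psi'], checkpoint['model'], checkpoint['options'],
                 resume_data=checkpoint['resume_data'])
eng.resume_run()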

classmethod switch_engine(other_engine, *, options=None, **kwargs)[source]

Initialize algorithm from another algorithm instance of a different class.

You can initialize one engine from another engine of a different, but not too different, subclass. Internally, this function calls get_resume_data() to extract the data from the other_engine and then initializes the new class.

Note that in most cases the data is transferred without making copies, even the options! Thus, when you call run() on one of the two algorithm instances, it will modify the state, environment, etc. of the other. We recommend making the switch as engine = OtherSubClass.switch_engine(engine), directly replacing the reference.

Parameters:
  • cls (class) – Subclass of Algorithm to be initialized.

  • other_engine (Algorithm) – The engine from which data should be transferred, an instance of a different, but not too different, Algorithm subclass; e.g., you can switch from the TwoSiteDMRGEngine to the OneSiteDMRGEngine.

  • options (None | dict-like) – If not None, these options are used for the new initialization. If None, take the options from the other_engine.

  • **kwargs – Further keyword arguments for class initialization. If not given explicitly, resume_data is collected with get_resume_data().
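
A hedged sketch of the recommended pattern, using the DMRG engine pair mentioned above; psi, model and dmrg_options are placeholders for whatever the run uses:

from tenpy.algorithms.dmrg import TwoSiteDMRGEngine, OneSiteDMRGEngine

eng = TwoSiteDMRGEngine(psi, model, dmrg_options)
eng.run()
eng = OneSiteDMRGEngine.switch_engine(eng)   # directly replace the reference
eng.run()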