ExpMPOEvolution
full name: tenpy.algorithms.mpo_evolution.ExpMPOEvolution
parent module: tenpy.algorithms.mpo_evolution
type: class
Inheritance Diagram
Methods
calc_U(dt, order=2, approximation='II'): Calculate self._U_MPO.
estimate_RAM(mem_saving_factor=None): Gives an approximate prediction for the required memory usage.
evolve(N_steps, dt): Evolve by N_steps*dt.
get_resume_data(sequential_simulations=False): Return necessary data to resume a run() interrupted at a checkpoint.
prepare_evolve(dt): Prepare an evolution step.
resume_run(): Resume a run that was interrupted.
run(): Perform a (real-)time evolution of psi by N_steps * dt.
run_evolution(N_steps, dt): Perform a (real-)time evolution of psi by N_steps * dt.
switch_engine(other_engine, *, options=None, **kwargs): Initialize algorithm from another algorithm instance of a different class.
Class Attributes and Properties
time_dependent_H: whether the algorithm supports time-dependent H
- class tenpy.algorithms.mpo_evolution.ExpMPOEvolution(psi, model, options, **kwargs)[source]
Bases:
TimeEvolutionAlgorithm
Time evolution of an MPS using the W_I or W_II approximation for exp(H dt).
[zaletel2015] described a method to obtain MPO approximations \(W_I\) and \(W_{II}\) for the exponential U = exp(i H dt) of an MPO H, implemented in make_U_I() and make_U_II(). This class uses it for real-time evolution.
Parameters are the same as for Algorithm.
Options
- config ExpMPOEvolution
option summary
approximation in ExpMPOEvolution
Specifies which approximation is applied. The default 'II' is more precise. [...]
chi_list
By default (``None``) this feature is disabled. [...]
chi_list_reactivates_mixer (from Sweep) in IterativeSweeps.sweep
If True, the mixer is reset/reactivated each time the bond dimension growth [...]
combine
Whether to combine legs into pipes. This combines the virtual and [...]
compression_method (from ApplyMPO) in MPO.apply
Mandatory. [...]
dt (from TimeEvolutionAlgorithm) in TimeEvolutionAlgorithm
Minimal time step by which to evolve.
lanczos_params (from Sweep) in Sweep
Lanczos parameters as described in :cfg:config:`KrylovBased`.
m_temp (from ZipUpApplyMPO) in MPO.apply_zipup
bond dimension will be truncated to `m_temp * chi_max`
max_dt in ExpMPOEvolution
Threshold for raising errors on too large time steps. Default ``1.0``. [...]
max_hours (from IterativeSweeps) in DMRGEngine.stopping_criterion
If the DMRG took longer (measured in wall-clock time), [...]
max_N_sites_per_ring (from Algorithm) in Algorithm
Threshold for raising errors on too many sites per ring. Default ``18``. [...]
max_sweeps (from IterativeSweeps) in DMRGEngine.stopping_criterion
Maximum number of sweeps to perform.
max_trunc_err (from TimeEvolutionAlgorithm) in TimeDependentHAlgorithm.evolve
Threshold for raising errors on too large truncation errors. Default ``0.01 [...]
min_sweeps (from IterativeSweeps) in DMRGEngine.stopping_criterion
Minimum number of sweeps to perform.
mixer
Specifies which :class:`Mixer` to use, if any. [...]
mixer_params (from Sweep) in DMRGEngine.mixer_activate
Mixer parameters as described in :cfg:config:`Mixer`.
N_steps (from TimeEvolutionAlgorithm) in TimeEvolutionAlgorithm
Number of time steps `dt` to evolve by in :meth:`run`. [...]
order in ExpMPOEvolution
Order of the algorithm. The total error up to time `t` scales as ``O(t*dt^o [...]
preserve_norm (from TimeEvolutionAlgorithm) in TimeEvolutionAlgorithm
Whether the state will be normalized to its initial norm after each time st [...]
start_env
Number of sweeps to be performed without optimization to update the environment.
start_env_sites (from VariationalCompression) in VariationalCompression
Number of sites to contract for the initial LP/RP environment in case of in [...]
start_time (from TimeEvolutionAlgorithm) in TimeEvolutionAlgorithm
Initial value for :attr:`evolved_time`.
start_trunc_err (from TimeEvolutionAlgorithm) in TimeEvolutionAlgorithm
Initial truncation error for :attr:`trunc_err`.
tol_theta_diff (from VariationalCompression) in VariationalCompression
Stop after less than `max_sweeps` sweeps if the 1-site wave function change [...]
trunc_params (from ApplyMPO) in MPO.apply
Truncation parameters as described in :cfg:config:`truncation`.
trunc_weight (from ZipUpApplyMPO) in MPO.apply_zipup
reduces cut for Schmidt values to `trunc_weight * svd_min`
- option approximation: 'I' | 'II'
Specifies which approximation is applied. The default 'II' is more precise. See [zaletel2015] and make_U() for more details.
- option order: int
Order of the algorithm. The total error up to time t scales as O(t*dt^order). Implemented are order = 1 and order = 2.
- option max_dt: float | None
Threshold for raising errors on too large time steps. Default 1.0. See consistency_check(). The trotterization in the time evolution operator assumes that the time step is small. We raise an error if it is not. Can be downgraded to a warning by setting this option to None.
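As a minimal usage sketch (the model, initial state, and parameter values below are illustrative and not part of this reference; the option names follow the summary above)::

    from tenpy.networks.mps import MPS
    from tenpy.models.spins import SpinChain
    from tenpy.algorithms.mpo_evolution import ExpMPOEvolution

    # hypothetical setup: real-time evolution of a Neel product state
    model = SpinChain({'L': 10, 'Jz': 1., 'bc_MPS': 'finite'})
    psi = MPS.from_product_state(model.lat.mps_sites(), ['up', 'down'] * 5, bc='finite')
    options = {
        'dt': 0.05,                   # time step
        'N_steps': 10,                # steps per call to run()
        'order': 2,                   # second-order scheme with complex sub-steps
        'approximation': 'II',        # use the W_II MPO approximation
        'compression_method': 'SVD',  # mandatory: how to compress psi after applying U
        'trunc_params': {'chi_max': 100, 'svd_min': 1.e-10},
    }
    eng = ExpMPOEvolution(psi, model, options)
    eng.run()                         # evolves psi in place by N_steps * dt
    print(eng.evolved_time, eng.trunc_err.eps)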
- _U
Exponentiated H_MPO.
- Type: list of MPO
- _U_param
A dictionary containing the information of the latest created _U. We won’t recalculate _U if those parameters didn’t change.
- Type: dict
- prepare_evolve(dt)[source]
Prepare an evolution step.
This method is used to prepare repeated calls of evolve() given the model. For example, it may generate approximations of U = exp(-i H dt). To avoid overhead, it may cache the result depending on parameters/options; but it should always regenerate it if force_prepare_evolve is set.
- Parameters:
dt (float) – The time step to be used.
- calc_U(dt, order=2, approximation='II')[source]
Calculate self._U_MPO.
This function calculates the approximation U ~= exp(-i dt_ H) with dt_ = dt for order=1, or dt_ = (1 - 1j)/2 dt and dt_ = (1 + 1j)/2 dt for order=2.
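For order=2, the two complex sub-steps recombine to the full step exactly, since both exponents involve the same H and sum to \(-i H\, dt\):

\[
e^{-i H\, dt} = e^{-i H \frac{1+i}{2} dt}\; e^{-i H \frac{1-i}{2} dt} .
\]

The benefit lies in the W_I/W_II approximants of the two factors: because \(\big(\tfrac{1+i}{2}\big)^2 + \big(\tfrac{1-i}{2}\big)^2 = 0\), their leading error contributions cancel, consistent with the total error O(t*dt^2) quoted for order=2 above. This is a sketch of the argument from [zaletel2015], not a description of the implementation details.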
- estimate_RAM(mem_saving_factor=None)[source]
Gives an approximate prediction for the required memory usage.
This calculation is based on the requested bond dimension, the local Hilbert space dimension, the number of sites, and the boundary conditions.
- Parameters:
mem_saving_factor (float) – Represents the amount of RAM saved due to conservation laws. By default, it is None and is extracted from the model automatically. However, this is only possible in a few cases; in most cases it needs to be estimated, since it depends on the model parameters. If one has a better estimate, one can pass the value directly. This value can be extracted by building the initial state psi (usually by performing DMRG) and then calling print(psi.get_B(0).sparse_stats()). TeNPy will automatically print the fraction of nonzero entries in the first line, for example, 6 of 16 entries (=0.375) nonzero. This fraction corresponds to the mem_saving_factor; in our example, it is 0.375.
- Returns:
usage – Required RAM in MB.
- Return type:
See also
tenpy.simulations.simulation.estimate_simulation_RAM
global function calling this.
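As a concrete illustration of the procedure described above (a sketch; psi is assumed to be an MPS that already has the relevant block structure, e.g. after a DMRG run, and eng is this engine)::

    print(psi.get_B(0).sparse_stats())                   # first line reads e.g. "6 of 16 entries (=0.375) nonzero"
    usage = eng.estimate_RAM(mem_saving_factor=0.375)    # pass the fraction read off above
    print(usage, 'MB')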
- evolve(N_steps, dt)[source]
Evolve by N_steps*dt.
Subclasses may override this with a more efficient way of doing N_steps update steps.
- Parameters:
Options
- config TimeEvolutionAlgorithm
option summary
dt in TimeEvolutionAlgorithm
Minimal time step by which to evolve.
max_N_sites_per_ring (from Algorithm) in Algorithm
Threshold for raising errors on too many sites per ring. Default ``18``. [...]
max_trunc_err in TimeDependentHAlgorithm.evolve
Threshold for raising errors on too large truncation errors. Default ``0.01 [...]
N_steps in TimeEvolutionAlgorithm
Number of time steps `dt` to evolve by in :meth:`run`. [...]
preserve_norm in TimeEvolutionAlgorithm
Whether the state will be normalized to its initial norm after each time st [...]
start_time in TimeEvolutionAlgorithm
Initial value for :attr:`evolved_time`.
start_trunc_err in TimeEvolutionAlgorithm
Initial truncation error for :attr:`trunc_err`.
trunc_params (from Algorithm) in Algorithm
Truncation parameters as described in :cfg:config:`truncation`.
- option max_trunc_err: float
Threshold for raising errors on too large truncation errors. Default 0.01. See consistency_check(). When the total accumulated truncation error (its eps) exceeds this value, we raise. Can be downgraded to a warning by setting this option to None.
- Returns:
trunc_err – Sum of truncation errors introduced during evolution.
- Return type:
TruncationError
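Normally run() drives prepare_evolve() and evolve() for you; as a low-level sketch (assuming eng was set up as in the usage example further above), a single manual step could look like::

    dt, N_steps = 0.05, 1
    eng.prepare_evolve(dt)                 # build the MPO approximation(s) of U for this dt
    trunc_err = eng.evolve(N_steps, dt)    # apply them; returns a TruncationError
    print(trunc_err.eps)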
- get_resume_data(sequential_simulations=False)[source]
Return necessary data to resume a run() interrupted at a checkpoint.
At a checkpoint, you can save psi, model and options along with the data returned by this function. When the simulation aborts, you can resume it using this saved data with:
eng = AlgorithmClass(psi, model, options, resume_data=resume_data)
eng.resume_run()
An algorithm which doesn't support this should override resume_run to raise an Error.
- Parameters:
sequential_simulations (bool) – If True, return only the data for re-initializing a sequential simulation run, where we “adiabatically” follow the evolution of a ground state (for variational algorithms), or do series of quenches (for time evolution algorithms); see run_seq_simulations().
- Returns:
resume_data – Dictionary with necessary data (apart from copies of psi, model, options) that allows continuing the algorithm run from where we are now. It might contain an explicit copy of psi.
- Return type: dict
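A sketch of this checkpoint pattern with TeNPy's hdf5_io helpers (the file name is arbitrary; any serialization of the saved objects works the same way)::

    from tenpy.tools import hdf5_io
    from tenpy.algorithms.mpo_evolution import ExpMPOEvolution

    # at a checkpoint: save psi, model, options and the resume data
    data = {'psi': eng.psi, 'model': eng.model, 'options': eng.options,
            'resume_data': eng.get_resume_data()}
    hdf5_io.save(data, 'checkpoint.h5')

    # later, possibly in a fresh python session:
    data = hdf5_io.load('checkpoint.h5')
    eng = ExpMPOEvolution(data['psi'], data['model'], data['options'],
                          resume_data=data['resume_data'])
    eng.resume_run()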
- resume_run()[source]
Resume a run that was interrupted.
In case we saved an intermediate result at a checkpoint, this function allows resuming the run() of the algorithm (after re-initialization with the resume_data). Since most algorithms just have a while loop with break conditions, the default behavior implemented here is to just call run().
- run()[source]
Perform a (real-)time evolution of psi by N_steps * dt.
You probably want to call this in a loop along with measurements. The recommended way to do this is via the RealTimeEvolution.
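For instance, a plain measurement loop could look like the following sketch (the observable and loop count are arbitrary; the RealTimeEvolution simulation mentioned above wraps this kind of loop together with proper measurement handling)::

    times, Sz = [], []
    for _ in range(20):                         # total evolution time: 20 * N_steps * dt
        eng.run()                               # evolve psi in place by N_steps * dt
        times.append(eng.evolved_time)
        Sz.append(psi.expectation_value('Sz'))  # assumes spin-1/2 sites with an 'Sz' operator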
- run_evolution(N_steps, dt)[source]
Perform a (real-)time evolution of psi by N_steps * dt.
This is the inner part of run() without the logging. For parameters see TimeEvolutionAlgorithm.
- classmethod switch_engine(other_engine, *, options=None, **kwargs)[source]
Initialize algorithm from another algorithm instance of a different class.
You can initialize one engine from another of a not too different subclass. Internally, this function calls get_resume_data() to extract data from the other_engine and then initializes the new class.
Note that it transfers the data without making copies in most cases; even the options! Thus, when you call run() on one of the two algorithm instances, it will modify the state, environment, etc. in the other. We recommend making the switch as
engine = OtherSubClass.switch_engine(engine)
directly replacing the reference.
- Parameters:
cls (class) – Subclass of Algorithm to be initialized.
other_engine (Algorithm) – The engine from which data should be transferred. Another, but not too different, algorithm subclass; e.g. you can switch from the TwoSiteDMRGEngine to the OneSiteDMRGEngine.
options (None | dict-like) – If not None, these options are used for the new initialization. If None, take the options from the other_engine.
**kwargs – Further keyword arguments for class initialization. If not defined, resume_data is collected with get_resume_data().
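For this class, a sketch could read (assuming eng is an existing, not too different engine for the same psi and model, e.g. a tenpy.algorithms.tebd.TEBDEngine)::

    from tenpy.algorithms.mpo_evolution import ExpMPOEvolution

    eng = ExpMPOEvolution.switch_engine(eng)   # directly replace the reference, as recommended above
    eng.run()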
- time_dependent_H = False
whether the algorithm supports time-dependent H