TimeDependentHAlgorithm¶
full name: tenpy.algorithms.algorithm.TimeDependentHAlgorithm
parent module: tenpy.algorithms.algorithm
type: class
Inheritance Diagram

Methods
- evolve(N_steps, dt): Evolve by N_steps * dt.
- get_resume_data([sequential_simulations]): Return necessary data to resume a run() interrupted at a checkpoint.
- prepare_evolve(dt): Prepare an evolution step.
- reinit_model(): Re-initialize a new model at the current evolved_time.
- resume_run(): Resume a run that was interrupted.
- run(): Perform a (real-)time evolution of psi by N_steps * dt.
- run_evolution(N_steps, dt): Run the time evolution for N_steps * dt.
- switch_engine(other_engine, *[, options]): Initialize algorithm from another algorithm instance of a different class.

Class Attributes and Properties
- time_dependent_H: whether the algorithm supports time-dependent H
- class tenpy.algorithms.algorithm.TimeDependentHAlgorithm(psi, model, options, **kwargs)[source]¶
Bases: TimeEvolutionAlgorithm
Time evolution under a time dependent Hamiltonian.
TimeEvolutionAlgorithm subclasses approximate the evolution by many small time steps of dt. If we have a time-dependent Hamiltonian \(H(t)\), we can, to first order in dt, approximate the evolution by just updating \(H(t)\) after each time step, keeping it constant during the update step, i.e., we approximate as follows:
\[U(t_0, t) = \mathcal{T}\exp\left(-i \int_{t_0}^{t} \mathrm{d}s\, H(s)\right) \approx \prod_{i=0}^{N-1} \exp\left(-i \Delta t\, H(t_0 + i \Delta t)\right) \quad \textrm{where } \Delta t = (t - t_0) / N\]
Note
Even if the algorithm's approximation of \(\exp(-i \Delta t\, H(t_0 + i \Delta t))\) might be precise to higher order in dt, the approximation of the time dependence (and hence the overall scaling of the error with dt) is only correct to first order! Yet, if the time dependence of H is weak, it might still be better to use order > 1.
Todo
This is still under development and lacks rigorous tests.
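As a rough standalone illustration of this first-order scheme (a NumPy/SciPy sketch, not TeNPy code; the toy 2x2 Hamiltonian H(t) = X + t*Z is made up for the example):

    import numpy as np
    from scipy.linalg import expm

    # toy time-dependent Hamiltonian H(t) = sigma_x + t * sigma_z (illustration only)
    X = np.array([[0., 1.], [1., 0.]])
    Z = np.array([[1., 0.], [0., -1.]])
    H = lambda t: X + t * Z

    t0, t, N = 0.0, 1.0, 100
    dt = (t - t0) / N

    # first-order scheme: keep H constant during each small step of length dt
    U = np.eye(2, dtype=complex)
    for i in range(N):
        U = expm(-1j * dt * H(t0 + i * dt)) @ U   # later times act from the left (time ordering)

    print(U)   # approximates the time-ordered exponential U(t0, t); error scales as O(dt)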
- time_dependent_H = True¶
whether the algorithm supports time-dependent H
- evolve(N_steps, dt)[source]¶
Evolve by N_steps*dt.
Subclasses may override this with a more efficient way of performing N_steps update steps.
- Parameters
N_steps (int) – The number of time steps of dt to perform at once.
dt (float) – Small time step.
- Returns
trunc_err – Sum of truncation errors introduced during evolution.
- Return type
TruncationError
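For instance, a driver loop might accumulate the returned truncation errors as in the following sketch (hedged; eng is assumed to be an already-initialized instance of a concrete subclass, and the step sizes are arbitrary):

    # accumulate the truncation error over repeated evolve() calls
    total_err = None
    for _ in range(10):
        err = eng.evolve(N_steps=5, dt=0.05)   # eng: concrete TimeDependentHAlgorithm subclass instance
        total_err = err if total_err is None else total_err + err
    print("accumulated truncation error:", total_err)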
- get_resume_data(sequential_simulations=False)[source]¶
Return necessary data to resume a run() interrupted at a checkpoint.
At a checkpoint, you can save psi, model and options along with the data returned by this function. When the simulation aborts, you can resume it using this saved data with:

    eng = AlgorithmClass(psi, model, options, resume_data=resume_data)
    eng.resume_run()
An algorithm which doesn’t support this should override resume_run to raise an Error.
- Parameters
sequential_simulations (bool) – If True, return only the data for re-initializing a sequential simulation run, where we “adiabatically” follow the evolution of a ground state (for variational algorithms), or do a series of quenches (for time evolution algorithms); see run_seq_simulations().
- Returns
resume_data – Dictionary with necessary data (apart from copies of psi, model, options) that allows one to continue the simulation from where we are now. It might contain an explicit copy of psi.
- Return type
dict
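For example, a checkpoint-and-resume cycle might look like the following sketch (hedged; AlgorithmClass stands for the concrete subclass in use, eng is the running engine instance, and tenpy.tools.hdf5_io requires h5py):

    from tenpy.tools import hdf5_io

    # at a checkpoint: collect everything needed to restart later
    data = {'psi': eng.psi, 'model': eng.model, 'options': eng.options,
            'resume_data': eng.get_resume_data()}
    hdf5_io.save(data, 'checkpoint.h5')

    # later, in a fresh process:
    data = hdf5_io.load('checkpoint.h5')
    eng = AlgorithmClass(data['psi'], data['model'], data['options'],
                         resume_data=data['resume_data'])
    eng.resume_run()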
- prepare_evolve(dt)[source]¶
Prepare an evolution step.
This method is used to prepare repeated calls of evolve() given the model. For example, it may generate approximations of U = exp(-i H dt). To avoid overhead, it may cache the result depending on parameters/options; but it should always regenerate it if force_prepare_evolve is set.
- Parameters
dt (float) – The time step to be used.
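A minimal sketch of how a subclass might honor this caching contract (hypothetical class and helper names, not the actual implementation of any TeNPy engine):

    from tenpy.algorithms.algorithm import TimeDependentHAlgorithm

    class MyEngine(TimeDependentHAlgorithm):   # hypothetical subclass for illustration
        def prepare_evolve(self, dt):
            # rebuild U = exp(-i H dt) only when forced or when dt changed
            if self.force_prepare_evolve or getattr(self, '_U_dt', None) != dt:
                self._U = self._calc_U(self.model, dt)   # _calc_U: hypothetical helper
                self._U_dt = dt
                self.force_prepare_evolve = False

        def _calc_U(self, model, dt):
            raise NotImplementedError("placeholder: a real engine builds its evolution operator here")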
- resume_run()[source]¶
Resume a run that was interrupted.
In case we saved an intermediate result at a checkpoint, this function allows resuming the run() of the algorithm (after re-initialization with the resume_data). Since most algorithms just have a while loop with break conditions, the default behaviour implemented here is to just call run().
- run()[source]¶
Perform a (real-)time evolution of psi by N_steps * dt.
You probably want to call this in a loop along with measurements. The recommended way to do this is via the RealTimeEvolution simulation class.
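If you drive the loop by hand instead, it might look like the following sketch (hedged; eng is assumed to be an already-initialized engine for a spin chain, and 'Sz' is just an example on-site operator):

    t_max = 5.0
    times, Sz = [], []
    while eng.evolved_time < t_max:              # evolved_time is tracked by the engine
        eng.run()                                # evolve psi by another N_steps * dt
        times.append(eng.evolved_time)
        Sz.append(eng.psi.expectation_value('Sz'))   # example measurement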
- run_evolution(N_steps, dt)[source]¶
Run the time evolution for N_steps * dt.
Updates the model after each time step dt to account for changing H(t). For parameters see TimeEvolutionAlgorithm.
- classmethod switch_engine(other_engine, *, options=None, **kwargs)[source]¶
Initialize algorithm from another algorithm instance of a different class.
You can initialize one engine from another engine of a different (but not too different) subclass. Internally, this function calls get_resume_data() to extract the data from the other_engine and then initializes the new class.
Note that it transfers the data without making copies in most cases; even the options! Thus, when you call run() on one of the two algorithm instances, it will modify the state, environment, etc. in the other. We recommend making the switch as

    engine = OtherSubClass.switch_engine(engine)

directly replacing the reference.
- Parameters
cls (class) – Subclass of Algorithm to be initialized.
other_engine (Algorithm) – The engine from which data should be transferred. Another, but not too different, algorithm subclass; e.g. you can switch from the TwoSiteDMRGEngine to the OneSiteDMRGEngine.
options (None | dict-like) – If not None, these options are used for the new initialization. If None, take the options from the other_engine.
**kwargs – Further keyword arguments for class initialization. If not defined, resume_data is collected with get_resume_data().