IterativeSweeps
full name: tenpy.algorithms.mps_common.IterativeSweeps
parent module:
tenpy.algorithms.mps_common
type: class
Inheritance Diagram
Methods
- environment_sweeps: Perform N_sweeps sweeps without optimization to update the environment.
- estimate_RAM: Gives an approximate prediction for the required memory usage.
- free_no_longer_needed_envs: Remove no longer needed environments after an update.
- get_resume_data: Return necessary data to resume a run() interrupted at a checkpoint.
- get_sweep_schedule: Define the schedule of the sweep.
- init_env: (Re-)initialize the environment.
- is_converged: Determines if the algorithm is converged.
- make_eff_H: Create new instance of self.EffectiveH at self.i0 and set it to self.eff_H.
- mixer_activate: Set self.mixer to the class specified by options['mixer'].
- mixer_cleanup: Cleanup the effects of a mixer.
- mixer_deactivate: Deactivate the mixer.
- post_run_cleanup: Perform any final steps or clean up after the main loop has terminated.
- post_update_local: Algorithm-specific actions to be taken after local update.
- pre_run_initialize: Perform preparations before run_iteration() is iterated.
- prepare_update_local: Prepare self for calling update_local().
- reset_stats: Reset the statistics.
- resume_run: Resume a run that was interrupted.
- run: Actually run the algorithm.
- run_iteration: Perform a single iteration.
- status_update: Emits a status message to the logging system after an iteration.
- stopping_criterion: Determines if the main loop should be terminated.
- sweep: One 'sweep' of a sweeper algorithm.
- switch_engine: Initialize algorithm from another algorithm instance of a different class.
- update_env: Update the left and right environments after an update of the state.
- update_local: Perform algorithm-specific local update.
Class Attributes and Properties
- n_optimize: The number of sites to be optimized at once.
- class tenpy.algorithms.mps_common.IterativeSweeps(psi, model, options, *, orthogonal_to=None, **kwargs)[source]
Bases: Sweep
Prototype class for algorithms that iterate the same sweep until convergence.
Examples of such algorithms are DMRG and variational compression. This is a base class, implementing run() in terms of other methods. Subclasses should implement run_iteration() and is_converged(). It might be useful to overwrite pre_run_initialize(), status_update(), stopping_criterion() or post_run_cleanup().
Options
- config IterativeSweeps
option summary
chi_list (from Sweep) -- By default (``None``) this feature is disabled. [...]
chi_list_reactivates_mixer (from Sweep) in IterativeSweeps.sweep -- If True, the mixer is reset/reactivated each time the bond dimension growth [...]
combine (from Sweep) -- Whether to combine legs into pipes. This combines the virtual and [...]
lanczos_params (from Sweep) in Sweep -- Lanczos parameters as described in :cfg:config:`KrylovBased`.
max_hours in DMRGEngine.stopping_criterion -- If the DMRG took longer (measured in wall-clock time), [...]
max_N_sites_per_ring (from Algorithm) in Algorithm -- Threshold for raising errors on too many sites per ring. Default ``18``. [...]
max_sweeps in DMRGEngine.stopping_criterion -- Maximum number of sweeps to perform.
max_trunc_err -- Threshold for raising errors on too large truncation errors. Default ``0.00 [...]
min_sweeps in DMRGEngine.stopping_criterion -- Minimum number of sweeps to perform.
mixer (from Sweep) -- Specifies which :class:`Mixer` to use, if any. [...]
mixer_params (from Sweep) in DMRGEngine.mixer_activate -- Mixer parameters as described in :cfg:config:`Mixer`.
start_env (from Sweep) -- Number of sweeps to be performed without optimization to update the environment.
trunc_params (from Algorithm) in Algorithm -- Truncation parameters as described in :cfg:config:`truncation`.
- option max_trunc_err: float
Threshold for raising errors on too large truncation errors. Default 0.0001. See consistency_check(). If any truncation error eps on the final sweep exceeds this value, we raise. Can be downgraded to a warning by setting this option to None.
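The contract described above (run() implemented in terms of the other hook methods) can be sketched in plain Python. This is an illustrative skeleton with a toy subclass, not the actual TeNPy implementation; everything except the documented hook names is made up for the example:

```python
import time

class IterativeLoopSketch:
    """Illustrative skeleton: run() expressed through the documented hooks.

    Mirrors the documented contract (pre_run_initialize -> loop of
    stopping_criterion / run_iteration / status_update -> post_run_cleanup),
    but is NOT the actual TeNPy source.
    """

    def run(self):
        # result returned by pre_run_initialize covers immediate convergence
        result = self.pre_run_initialize()
        while True:
            iteration_start_time = time.time()
            if self.stopping_criterion(iteration_start_time):
                break
            result = self.run_iteration()
            self.status_update(iteration_start_time)
        self.post_run_cleanup()
        return result

    # hooks to be provided by subclasses
    def pre_run_initialize(self):
        raise NotImplementedError

    def run_iteration(self):
        raise NotImplementedError

    def is_converged(self):
        raise NotImplementedError

    def stopping_criterion(self, iteration_start_time):
        raise NotImplementedError

    def status_update(self, iteration_start_time):
        pass

    def post_run_cleanup(self):
        pass


class ToyHalver(IterativeLoopSketch):
    """Toy 'algorithm': halve x each iteration until it drops below tol."""

    def __init__(self, x0, tol=1e-3, max_iter=100):
        self.x, self.tol, self.max_iter, self.n_iter = x0, tol, max_iter, 0

    def pre_run_initialize(self):
        return self.x

    def run_iteration(self):
        self.x /= 2.0
        self.n_iter += 1
        return self.x

    def is_converged(self):
        return abs(self.x) < self.tol

    def stopping_criterion(self, iteration_start_time):
        return self.is_converged() or self.n_iter >= self.max_iter
```

Calling ToyHalver(1.0).run() iterates until convergence and returns the last run_iteration result, just as the documented contract prescribes for a real subclass.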
- pre_run_initialize()[source]
Perform preparations before run_iteration() is iterated.
- Returns:
The object to be returned by run() in case of immediate convergence, i.e. if no iterations are performed.
- Return type:
result
- run_iteration()[source]
Perform a single iteration.
- Returns:
The object to be returned by run() if the main loop terminates after this iteration.
- Return type:
result
- status_update(iteration_start_time: float)[source]
Emits a status message to the logging system after an iteration.
- Parameters:
iteration_start_time (float) – The time.time() at the start of the last iteration.
- stopping_criterion(iteration_start_time: float) → bool[source]
Determines if the main loop should be terminated.
- Parameters:
iteration_start_time (float) – The time.time() at the start of the last iteration.
Options
- option IterativeSweeps.min_sweeps: int
Minimum number of sweeps to perform.
- option IterativeSweeps.max_sweeps: int
Maximum number of sweeps to perform.
- option IterativeSweeps.max_hours: float
If the DMRG took longer (measured in wall-clock time), 'shelve' the simulation, i.e. stop and return with the flag shelve=True.
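A hedged sketch of how these three options could combine with is_converged() in a stopping criterion. Illustrative only: the real method also handles shelving and reads its values from the options config, and the function name and signature here are hypothetical:

```python
import time

def stopping_criterion_sketch(sweeps, is_converged, start_time,
                              min_sweeps=1, max_sweeps=1000, max_hours=24.0):
    """Sketch of the documented interaction of min_sweeps, max_sweeps,
    max_hours and is_converged(). Hypothetical helper, not TeNPy code."""
    hours = (time.time() - start_time) / 3600.0
    if hours > max_hours:      # wall-clock limit exceeded: stop ('shelve')
        return True
    if sweeps < min_sweeps:    # always perform at least min_sweeps
        return False
    if sweeps >= max_sweeps:   # hard cap on the number of sweeps
        return True
    return is_converged        # otherwise, stop once converged
```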
- is_converged() → bool[source]
Determines if the algorithm is converged.
Does not cover any other reasons to abort, such as reaching a time limit. Such checks are covered by stopping_criterion().
- environment_sweeps(N_sweeps)[source]
Perform N_sweeps sweeps without optimization to update the environment.
- Parameters:
N_sweeps (int) – Number of sweeps to run without optimization.
- estimate_RAM(mem_saving_factor=None)[source]
Gives an approximate prediction for the required memory usage.
This calculation is based on the requested bond dimension, the local Hilbert space dimension, the number of sites, and the boundary conditions.
- Parameters:
mem_saving_factor (float) – Represents the amount of RAM saved due to conservation laws. By default, it is None and is extracted from the model automatically. However, this is only possible in a few cases; in most cases it needs to be estimated, since it depends on the model parameters. If one has a better estimate, one can pass the value directly. This value can be extracted by building the initial state psi (usually by performing DMRG) and then calling print(psi.get_B(0).sparse_stats()). TeNPy will automatically print the fraction of nonzero entries in the first line, for example, 6 of 16 entries (=0.375) nonzero. This fraction corresponds to the mem_saving_factor; in our example, it is 0.375.
- Returns:
usage – Required RAM in MB.
- Return type:
See also
tenpy.simulations.simulation.estimate_simulation_RAM
global function calling this.
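To illustrate the arithmetic: the fraction reported by sparse_stats() is simply the number of nonzero entries divided by the total number of entries. The helper below is hypothetical, shown only to make that explicit:

```python
def mem_saving_factor_from_stats(nonzero, total):
    """Hypothetical helper: turn the entry counts reported by
    psi.get_B(0).sparse_stats() (e.g. "6 of 16 entries (=0.375) nonzero")
    into the mem_saving_factor expected by estimate_RAM."""
    return nonzero / total

# the example from the docstring: 6 of 16 entries nonzero -> 0.375
factor = mem_saving_factor_from_stats(6, 16)
```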
- free_no_longer_needed_envs()[source]
Remove no longer needed environments after an update.
This allows minimizing the number of environments to be kept. For large MPO bond dimensions, these environments are by far the biggest part in memory, so this is a valuable optimization to reduce memory requirements.
- get_resume_data(sequential_simulations=False)[source]
Return necessary data to resume a run() interrupted at a checkpoint.
At a checkpoint, you can save psi, model and options along with the data returned by this function. When the simulation aborts, you can resume it using this saved data with:
eng = AlgorithmClass(psi, model, options, resume_data=resume_data)
eng.resume_run()
An algorithm which doesn't support this should override resume_run to raise an error.
- Parameters:
sequential_simulations (bool) – If True, return only the data for re-initializing a sequential simulation run, where we "adiabatically" follow the evolution of a ground state (for variational algorithms), or do a series of quenches (for time evolution algorithms); see run_seq_simulations().
- Returns:
resume_data – Dictionary with necessary data (apart from copies of psi, model, options) that allows continuing the algorithm run from where we are now. It might contain an explicit copy of psi.
- Return type:
- get_sweep_schedule()[source]
Define the schedule of the sweep.
One 'sweep' is a full sequence from the leftmost site to the right and back.
- Returns:
schedule – Schedule for the sweep. Each entry is (i0, move_right, (update_LP, update_RP)), where i0 is the leftmost of the self.EffectiveH.length sites to be updated in update_local(), move_right indicates whether the next i0 in the schedule is right (True), left (False) or equal (None) of the current one, and update_LP, update_RP indicate whether it is necessary to update the LP and RP of the environments.
- Return type:
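As an illustration of this return format, a plausible schedule for a finite two-site sweep could be generated as below. This is a sketch only; the exact move_right and update_LP/update_RP flags TeNPy emits, especially at the boundaries, may differ:

```python
def two_site_schedule_sketch(L):
    """Illustrative schedule for a finite two-site sweep over L sites.

    Produces (i0, move_right, (update_LP, update_RP)) tuples: sweep right
    from i0=0 to i0=L-2, then back left to i0=0. Hypothetical helper,
    not TeNPy code.
    """
    schedule = []
    # rightward half-sweep: left environments get updated as we move right
    for i0 in range(L - 1):
        move_right = i0 < L - 2  # the last entry turns around, so False
        schedule.append((i0, move_right, (True, False)))
    # leftward half-sweep: right environments get updated as we move left
    for i0 in range(L - 3, -1, -1):
        schedule.append((i0, False, (False, True)))
    return schedule
```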
- init_env(model=None, resume_data=None, orthogonal_to=None)[source]
(Re-)initialize the environment.
This function is useful to (re-)start a Sweep with a slightly different model or different (engine) parameters. Note that we assume that we still have the same psi. Calls reset_stats().
- Parameters:
model (MPOModel) – The model representing the Hamiltonian for which we want to find the ground state. If None, keep the model used before.
resume_data (None | dict) – Given when resuming a simulation, as returned by get_resume_data(). Can contain another dict under the key init_env_data; the contents of init_env_data get passed as keyword arguments to the environment initialization.
orthogonal_to (None | list of MPS | list of dict) – List of other matrix product states to orthogonalize against. Instead of just the state, you can specify a dict with the state as ket and further keyword arguments for initializing the MPSEnvironment; the psi to be optimized is used as bra. Works only for finite or segment MPS; for infinite MPS it must be None. This can be used to find (a few) excited states as follows. First, run DMRG to find the ground state, and then run DMRG again while orthogonalizing against the ground state, which yields the first excited state (in the same symmetry sector), and so on. Note that resume_data['orthogonal_to'] takes precedence over the argument.
Options
- option Sweep.start_env: int
Number of sweeps to be performed without optimization to update the environment.
- Raises:
ValueError – If the engine is re-initialized with a new model whose legs are incompatible with those of the old model.
- mixer_activate()[source]
Set self.mixer to the class specified by options['mixer'].
- option Sweep.mixer: str | class | bool | None
Specifies which Mixer to use, if any. A string stands for one of the mixers defined in this module. A class is assumed to have the same interface as Mixer and is used to instantiate the mixer. None uses no mixer. True uses the mixer specified by the DefaultMixer class attribute. The default depends on the subclass of Sweep.
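The resolution rules for options['mixer'] listed above can be sketched as follows; the helper name and its arguments are hypothetical, not part of TeNPy:

```python
def resolve_mixer_sketch(mixer_option, default_mixer, known_mixers):
    """Sketch of the documented resolution rules for options['mixer'].

    Hypothetical helper, not TeNPy code: None -> no mixer, True -> the
    DefaultMixer class attribute, a string -> a mixer class looked up by
    name, anything else -> assumed to be a Mixer-like class, used as-is.
    """
    if mixer_option is None:
        return None
    if mixer_option is True:
        return default_mixer
    if isinstance(mixer_option, str):
        return known_mixers[mixer_option]
    return mixer_option
```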
- mixer_cleanup()[source]
Cleanup the effects of a mixer.
A sweep() with an enabled Mixer leaves the MPS psi with 2D arrays in S. This method recovers the original form by performing SVDs of the S and updating the MPS tensors accordingly.
- mixer_deactivate()[source]
Deactivate the mixer.
Set self.mixer = None and revert any other effects of mixer_activate().
- property n_optimize
The number of sites to be optimized at once.
Indirectly set by the class attribute EffectiveH and its length. For example, TwoSiteDMRGEngine uses the TwoSiteH and hence has n_optimize=2, while the SingleSiteDMRGEngine has n_optimize=1.
- post_update_local(err, **update_data)[source]
Algorithm-specific actions to be taken after local update.
An example would be to collect statistics.
- prepare_update_local()[source]
Prepare self for calling update_local().
- Returns:
theta – Current best guess for the ground state, which is to be optimized. Labels are 'vL', 'p0', 'p1', 'vR', or combined versions of it (if self.combine). For single-site DMRG, the 'p1' label is missing.
- Return type:
- reset_stats(resume_data=None)[source]
Reset the statistics. Useful if you want to start a new Sweep run.
This method is expected to be overwritten by subclasses, which should then define self.update_stats and self.sweep_stats dicts consistent with the statistics generated by the algorithm particular to that subclass.
- Parameters:
resume_data (dict) – Given when resuming a simulation, as returned by get_resume_data(). Here, we read out the sweeps.
Options
- option Sweep.chi_list: None | dict(int -> int)
By default (None) this feature is disabled. A dict allows gradually increasing the chi_max. An entry at_sweep: chi states that starting from sweep at_sweep, the value chi is to be used for trunc_params['chi_max']. For example chi_list={0: 50, 20: 100} uses chi_max=50 for the first 20 sweeps and chi_max=100 afterwards. A value of None is initialized to the current value of trunc_params['chi_max'] at algorithm initialization.
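The chi_list semantics just described amount to a simple lookup: use the chi whose starting sweep is the largest one not exceeding the current sweep. The helper below is hypothetical, shown only to pin down the rule:

```python
def chi_for_sweep_sketch(chi_list, sweep):
    """Sketch of the documented chi_list semantics. Hypothetical helper,
    not TeNPy code: returns the chi of the largest at_sweep key that is
    <= the current sweep, or None before the first entry applies."""
    applicable = [at_sweep for at_sweep in chi_list if at_sweep <= sweep]
    return chi_list[max(applicable)] if applicable else None
```

With the example from the docs, chi_list={0: 50, 20: 100} yields chi_max=50 for sweeps 0 through 19 and chi_max=100 from sweep 20 on.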
- resume_run()[source]
Resume a run that was interrupted.
In case we saved an intermediate result at a checkpoint, this function allows resuming the run() of the algorithm (after re-initialization with the resume_data). Since most algorithms just have a while loop with break conditions, the default behavior implemented here is to just call run().
- sweep(optimize=True)[source]
One ‘sweep’ of a sweeper algorithm.
Iterate over the bond which is optimized, to the right and then back to the left to the starting point.
- Parameters:
optimize (bool, optional) – Whether we actually optimize the state, e.g. to find the ground state of the effective Hamiltonian in case of a DMRG. (If False, just update the environments).
Options
- option Sweep.chi_list_reactivates_mixer: bool
If True, the mixer is reset/reactivated each time the bond dimension grows due to Sweep.chi_list.
- Returns:
max_trunc_err – Maximal truncation error introduced.
- Return type:
- classmethod switch_engine(other_engine, *, options=None, **kwargs)[source]
Initialize algorithm from another algorithm instance of a different class.
You can initialize one engine from another, not-too-different subclass. Internally, this function calls get_resume_data() to extract data from the other_engine and then initializes the new class.
Note that it transfers the data without making copies in most cases, even the options! Thus, when you call run() on one of the two algorithm instances, it will modify the state, environment, etc. in the other. We recommend making the switch as engine = OtherSubClass.switch_engine(engine), directly replacing the reference.
- Parameters:
cls (class) – Subclass of Algorithm to be initialized.
other_engine (Algorithm) – The engine from which data should be transferred. Another, but not too different, algorithm subclass; e.g. you can switch from the TwoSiteDMRGEngine to the OneSiteDMRGEngine.
options (None | dict-like) – If not None, these options are used for the new initialization. If None, take the options from the other_engine.
**kwargs – Further keyword arguments for class initialization. If not defined, resume_data is collected with get_resume_data().
- update_env(**update_data)[source]
Update the left and right environments after an update of the state.
- Parameters:
**update_data – Whatever is returned by update_local().
- update_local(theta, **kwargs)[source]
Perform algorithm-specific local update.
For two-site algorithms with n_optimize = 2, this always optimizes the sites i0 and i0 + 1. For single-site algorithms, the effective H only acts on site i0, but afterwards it also updates the bond to the right if move_right is True, or the bond to the left if move_right is False. Since the SVD for truncation gives tensors to be multiplied into the tensors on both sides of the bond, tensors of two sites are updated even for single-site algorithms: when right-moving, site i0 + 1 is also updated; site i0 - 1 when left-moving.
- Parameters:
theta (Array) – Local single- or two-site wave function, as returned by prepare_update_local().
- Returns:
update_data – Data to be processed by update_env() and post_update_local(), e.g. containing the truncation error as err. If combine is set, it should also contain the U and VH from the SVD.
- Return type:
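The site-update rules described for update_local can be summarized in a small hypothetical helper (not TeNPy code):

```python
def updated_sites_sketch(i0, move_right, n_optimize):
    """Which sites receive new tensors in a local update, per the
    documented behavior. Hypothetical helper, not TeNPy code."""
    if n_optimize == 2:
        # two-site update: both optimized sites i0 and i0 + 1
        return [i0, i0 + 1]
    # single-site update: the SVD of the adjacent bond also updates the
    # neighbor on that side (i0 + 1 when right-moving, i0 - 1 when left-moving)
    return [i0, i0 + 1] if move_right else [i0 - 1, i0]
```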