QRBasedVariationalApplyMPO

Inheritance Diagram

Inheritance diagram of tenpy.algorithms.mps_common.QRBasedVariationalApplyMPO

Methods

QRBasedVariationalApplyMPO.__init__(psi, ...)

QRBasedVariationalApplyMPO.environment_sweeps(...)

Perform N_sweeps sweeps without optimization to update the environment.

QRBasedVariationalApplyMPO.estimate_RAM([...])

Gives an approximate prediction for the required memory usage.

QRBasedVariationalApplyMPO.free_no_longer_needed_envs()

Remove no longer needed environments after an update.

QRBasedVariationalApplyMPO.get_resume_data([...])

Return necessary data to resume a run() interrupted at a checkpoint.

QRBasedVariationalApplyMPO.get_sweep_schedule()

Define the schedule of the sweep.

QRBasedVariationalApplyMPO.init_env(U_MPO[, ...])

Initialize the environment.

QRBasedVariationalApplyMPO.is_converged()

Determines if the algorithm is converged.

QRBasedVariationalApplyMPO.make_eff_H()

Create new instance of self.EffectiveH at self.i0 and set it to self.eff_H.

QRBasedVariationalApplyMPO.mixer_activate()

Set self.mixer to the class specified by options['mixer'].

QRBasedVariationalApplyMPO.mixer_cleanup()

Cleanup the effects of a mixer.

QRBasedVariationalApplyMPO.mixer_deactivate()

Deactivate the mixer.

QRBasedVariationalApplyMPO.post_run_cleanup()

Perform any final steps or clean up after the main loop has terminated.

QRBasedVariationalApplyMPO.post_update_local(...)

Algorithm-specific actions to be taken after local update.

QRBasedVariationalApplyMPO.pre_run_initialize()

Perform preparations before run_iteration() is iterated.

QRBasedVariationalApplyMPO.prepare_update_local()

Prepare self for calling update_local().

QRBasedVariationalApplyMPO.reset_stats([...])

Reset the statistics.

QRBasedVariationalApplyMPO.resume_run()

Resume a run that was interrupted.

QRBasedVariationalApplyMPO.run()

Run the compression.

QRBasedVariationalApplyMPO.run_iteration()

Perform a single iteration.

QRBasedVariationalApplyMPO.status_update(...)

Emits a status message to the logging system after an iteration.

QRBasedVariationalApplyMPO.stopping_criterion(...)

Determines if the main loop should be terminated.

QRBasedVariationalApplyMPO.sweep([optimize])

One 'sweep' of a sweeper algorithm.

QRBasedVariationalApplyMPO.switch_engine(...)

Initialize algorithm from another algorithm instance of a different class.

QRBasedVariationalApplyMPO.update_env(...)

Update the left and right environments after an update of the state.

QRBasedVariationalApplyMPO.update_local(_[, ...])

Perform local update.

QRBasedVariationalApplyMPO.update_new_psi(theta)

Given a new two-site wave function theta, split it and save it in psi.

Class Attributes and Properties

QRBasedVariationalApplyMPO.DefaultMixer

QRBasedVariationalApplyMPO.S_inv_cutoff

QRBasedVariationalApplyMPO.n_optimize

The number of sites to be optimized at once.

QRBasedVariationalApplyMPO.use_mixer_by_default

class tenpy.algorithms.mps_common.QRBasedVariationalApplyMPO(psi, U_MPO, options, **kwargs)[source]

Bases: VariationalApplyMPO

Variational MPO application, using QR-based decompositions instead of SVD.

The QR-based decomposition, introduced in arXiv:2212.09782, is used for TEBD as implemented in QRBasedTEBDEngine. This engine is a version of VariationalApplyMPO that uses the same QR-based decomposition instead of an SVD in the truncation step after the variational update.
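A minimal usage sketch (assuming an existing MPS psi and an MPO U_MPO to be applied, e.g. a time-evolution MPO; the option values are illustrative, not recommendations):

from tenpy.algorithms.mps_common import QRBasedVariationalApplyMPO

options = {'trunc_params': {'chi_max': 100, 'svd_min': 1.e-10},
           'max_sweeps': 5}
eng = QRBasedVariationalApplyMPO(psi, U_MPO, options)
trunc_err = eng.run()   # compresses psi in place, returns a TruncationError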

Options

config QRBasedVariationalApplyMPO
option summary

cbe_expand

Expansion rate. The QR-based decomposition is carried out at an expanded bo [...]

cbe_expand_0

Expansion rate at low ``chi``. [...]

cbe_min_block_increase

Minimum bond dimension increase for each block. Default is `1`.

chi_list (from Sweep) in IterativeSweeps.reset_stats

By default (``None``) this feature is disabled. [...]

chi_list_reactivates_mixer (from Sweep) in IterativeSweeps.sweep

If True, the mixer is reset/reactivated each time the bond dimension growth [...]

combine (from Sweep) in Sweep

Whether to combine legs into pipes. This combines the virtual and [...]

compute_err

Whether the truncation error should be computed exactly. [...]

lanczos_params (from Sweep) in Sweep

Lanczos parameters as described in :cfg:config:`KrylovBased`.

max_hours (from IterativeSweeps) in DMRGEngine.stopping_criterion

If the DMRG took longer (measured in wall-clock time), [...]

max_N_sites_per_ring (from Algorithm) in Algorithm

Threshold for raising errors on too many sites per ring. Default ``18``. [...]

max_sweeps (from IterativeSweeps) in DMRGEngine.stopping_criterion

Maximum number of sweeps to perform.

max_trunc_err (from IterativeSweeps) in IterativeSweeps

Threshold for raising errors on too large truncation errors. Default ``0.00 [...]

min_sweeps (from IterativeSweeps) in DMRGEngine.stopping_criterion

Minimum number of sweeps to perform.

mixer (from Sweep) in DMRGEngine.mixer_activate

Specifies which :class:`Mixer` to use, if any. [...]

mixer_params (from Sweep) in DMRGEngine.mixer_activate

Mixer parameters as described in :cfg:config:`Mixer`.

start_env (from Sweep) in DMRGEngine.init_env

Number of sweeps to be performed without optimization to update the environment.

start_env_sites (from VariationalCompression) in VariationalCompression

Number of sites to contract for the initial LP/RP environment in case of in [...]

tol_theta_diff (from VariationalCompression) in VariationalCompression

Stop after less than `max_sweeps` sweeps if the 1-site wave function change [...]

trunc_params (from VariationalCompression) in VariationalCompression

Truncation parameters as described in :cfg:config:`truncation`.

use_eig_based_svd

Whether the SVD of the bond matrix :math:`\Xi` should be carried out numeri [...]

option cbe_expand: float

Expansion rate. The QR-based decomposition is carried out at an expanded bond dimension eta = (1 + cbe_expand) * chi, where chi is the bond dimension before the time step. Default is 0.1.

option cbe_expand_0: float

Expansion rate at low chi. If given, the expansion rate decreases linearly from cbe_expand_0 at chi == 1 to cbe_expand at chi == trunc_params['chi_max'], then remains constant. If not given, the expansion rate is cbe_expand at all chi.
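For illustration, the resulting expansion rate could be computed as in the following sketch (plain Python, not a TeNPy function; chi_max stands for trunc_params['chi_max']):

def expansion_rate(chi, chi_max, cbe_expand=0.1, cbe_expand_0=None):
    # linear decrease from cbe_expand_0 at chi == 1 to cbe_expand at chi == chi_max
    if cbe_expand_0 is None or chi >= chi_max:
        return cbe_expand
    t = (chi - 1) / (chi_max - 1)
    return cbe_expand_0 + t * (cbe_expand - cbe_expand_0)

# expanded bond dimension used for the QR-based decomposition
chi = 32
eta = int((1 + expansion_rate(chi, chi_max=128, cbe_expand_0=0.5)) * chi)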

option cbe_min_block_increase: int

Minimum bond dimension increase for each block. Default is 1.

option use_eig_based_svd: bool

Whether the SVD of the bond matrix Ξ should be carried out numerically via the eigensystem. This is faster on GPUs, but less accurate. It makes no sense to do this on CPU. It is currently not supported for update_imag. Default is False.

option compute_err: bool

Whether the truncation error should be computed exactly. Compared to SVD-based TEBD, computing the truncation error is significantly more expensive. If True (default), the full error is computed. Otherwise, the truncation error is set to NaN.
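Collected in one place, the QR-specific options of this engine might be set as follows (continuing the options dict from the sketch above; all values are illustrative and only deviations from the defaults need to be given):

options.update({
    'cbe_expand': 0.1,            # expand the bond dimension by 10% before truncating back
    'cbe_expand_0': 0.5,          # optional: larger expansion at small chi
    'cbe_min_block_increase': 1,  # expand each charge block by at least 1
    'use_eig_based_svd': False,   # eig-based SVD of the bond matrix; only useful on GPU
    'compute_err': True,          # compute the truncation error exactly (more expensive)
})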

EffectiveH[source]

alias of TwoSiteH

environment_sweeps(N_sweeps)[source]

Perform N_sweeps sweeps without optimization to update the environment.

Parameters:

N_sweeps (int) – Number of sweeps to run without optimization

estimate_RAM(mem_saving_factor=None)[source]

Gives an approximate prediction for the required memory usage.

This calculation is based on the requested bond dimension, the local Hilbert space dimension, the number of sites, and the boundary conditions.

Parameters:

mem_saving_factor (float) – Represents the amount of RAM saved due to conservation laws. By default (None), it is extracted from the model automatically. However, this is only possible in a few cases and it has to be estimated in most cases, since it depends on the model parameters. If you have a better estimate, you can pass the value directly. The value can be extracted by building the initial state psi (usually by performing DMRG) and then calling print(psi.get_B(0).sparse_stats()). TeNPy will print the fraction of nonzero entries in the first line, for example, 6 of 16 entries (=0.375) nonzero. This fraction corresponds to the mem_saving_factor; in this example, it is 0.375 (see the sketch after this entry).

Returns:

usage – Required RAM in MB.

Return type:

float

See also

tenpy.simulations.simulation.estimate_simulation_RAM

global function calling this.
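A usage sketch (continuing the example above; the printed fraction of nonzero entries is the mem_saving_factor one would read off by hand):

print(psi.get_B(0).sparse_stats())            # e.g. "6 of 16 entries (=0.375) nonzero"
ram_mb = eng.estimate_RAM(mem_saving_factor=0.375)
print(f"estimated RAM usage: {ram_mb:.1f} MB")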

free_no_longer_needed_envs()[source]

Remove no longer needed environments after an update.

This minimizes the number of environments that need to be kept. For large MPO bond dimensions, these environments are by far the largest part of the memory usage, so this is a valuable optimization to reduce memory requirements.

get_resume_data(sequential_simulations=False)[source]

Return necessary data to resume a run() interrupted at a checkpoint.

At a checkpoint, you can save psi, model and options along with the data returned by this function. When the simulation aborts, you can resume it using this saved data with:

eng = AlgorithmClass(psi, model, options, resume_data=resume_data)
eng.resume_run()

An algorithm which doesn't support this should override resume_run() to raise an error.

Parameters:

sequential_simulations (bool) – If True, return only the data for re-initializing a sequential simulation run, where we “adiabatically” follow the evolution of a ground state (for variational algorithms), or do series of quenches (for time evolution algorithms); see run_seq_simulations().

Returns:

resume_data – Dictionary with the necessary data (apart from copies of psi, model, options) to continue the algorithm run from where we are now. It might contain an explicit copy of psi.

Return type:

dict
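One possible checkpointing pattern, here sketched with tenpy.tools.hdf5_io (file name and dictionary layout are purely illustrative):

from tenpy.tools import hdf5_io

# at a checkpoint:
hdf5_io.save({'psi': eng.psi,
              'resume_data': eng.get_resume_data(),
              'options': options},
             'checkpoint.h5')

# later, to resume the interrupted run (U_MPO plays the role of the model for this engine):
data = hdf5_io.load('checkpoint.h5')
eng = QRBasedVariationalApplyMPO(data['psi'], U_MPO, data['options'],
                                 resume_data=data['resume_data'])
eng.resume_run()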

get_sweep_schedule()[source]

Define the schedule of the sweep.

Compared to the parent class' get_sweep_schedule(), we add one extra update at the end with i0=0 (which is the same as the first update of the sweep). This ensures proper convergence after each sweep, even if it implies that site 0 is updated twice per sweep.

init_env(U_MPO, resume_data=None, orthogonal_to=None)[source]

Initialize the environment.

Parameters:
  • U_MPO (MPO) – The MPO to be applied to the state.

  • resume_data (dict) – May contain init_env_data.

  • orthogonal_to – Ignored.

is_converged()[source]

Determines if the algorithm is converged.

Does not cover any other reasons to abort, such as reaching a time limit. Such checks are covered by stopping_criterion().

make_eff_H()[source]

Create new instance of self.EffectiveH at self.i0 and set it to self.eff_H.

mixer_activate()[source]

Set self.mixer to the class specified by options[‘mixer’].

option Sweep.mixer: str | class | bool | None

Specifies which Mixer to use, if any. A string stands for one of the mixers defined in this module. A class is assumed to have the same interface as Mixer and is used to instantiate the mixer. None uses no mixer. True uses the mixer specified by the DefaultMixer class attribute. The default depends on the subclass of Sweep.

option Sweep.mixer_params: dict

Mixer parameters as described in Mixer.

See also

mixer_deactivate
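For example, a mixer could be requested via the options like this (parameter names follow the Mixer config; values are illustrative):

options['mixer'] = True                     # use the DefaultMixer of this engine
options['mixer_params'] = {'amplitude': 1.e-5, 'decay': 2., 'disable_after': 15}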

mixer_cleanup()[source]

Cleanup the effects of a mixer.

A sweep() with an enabled Mixer leaves the MPS psi with 2D arrays in S. This method recovers the original form by performing SVDs of these S matrices and updating the MPS tensors accordingly.

mixer_deactivate()[source]

Deactivate the mixer.

Set self.mixer=None and revert any other effects of mixer_activate().

property n_optimize

The number of sites to be optimized at once.

Indirectly set by the class attribute EffectiveH and its length. For example, the TwoSiteDMRGEngine uses the TwoSiteH and hence has n_optimize=2, while the SingleSiteDMRGEngine has n_optimize=1.

post_run_cleanup()[source]

Perform any final steps or clean up after the main loop has terminated.

post_update_local(err, **update_data)[source]

Algorithm-specific actions to be taken after local update.

An example would be to collect statistics.

pre_run_initialize()[source]

Perform preparations before run_iteration() is iterated.

Returns:

The object to be returned by run() in case of immediate convergence, i.e. if no iterations are performed.

Return type:

result

prepare_update_local()[source]

Prepare self for calling update_local().

Returns:

theta – Current best guess for the ground state, which is to be optimized. Labels are 'vL', 'p0', 'p1', 'vR', or combined versions of it (if self.combine). For single-site DMRG, the 'p1' label is missing.

Return type:

Array

reset_stats(resume_data=None)[source]

Reset the statistics. Useful if you want to start a new Sweep run.

This method is expected to be overridden by subclasses, which should then define self.update_stats and self.sweep_stats dicts consistent with the statistics generated by that particular algorithm.

Parameters:

resume_data (dict) – Given when resuming a simulation, as returned by get_resume_data(). Here, we read out the sweeps.

Options

option Sweep.chi_list: None | dict(int -> int)

By default (None) this feature is disabled. A dict allows to gradually increase the chi_max. An entry at_sweep: chi states that starting from sweep at_sweep, the value chi is to be used for trunc_params['chi_max']. For example chi_list={0: 50, 20: 100} uses chi_max=50 for the first 20 sweeps and chi_max=100 afterwards. A value of None is initialized to the current value of trunc_params['chi_max'] at algorithm initialization.

resume_run()[source]

Resume a run that was interrupted.

In case we saved an intermediate result at a checkpoint, this function allows resuming the run() of the algorithm (after re-initialization with the resume_data). Since most algorithms just have a while loop with break conditions, the default behavior implemented here is to just call run().

run()[source]

Run the compression.

The state psi is compressed in place.

Warning

Call this function directly after initializing the class, without modifying psi in between. A copy of psi is made during init_env()!

Returns:

max_trunc_err – The maximal truncation error of a two-site wave function.

Return type:

TruncationError

run_iteration()[source]

Perform a single iteration.

Returns:

The object to be returned by run() if the main loop terminates after this iteration

Return type:

result

status_update(iteration_start_time: float)[source]

Emits a status message to the logging system after an iteration.

Parameters:

iteration_start_time (float) – The time.time() at the start of the last iteration

stopping_criterion(iteration_start_time: float) → bool[source]

Determines if the main loop should be terminated.

Parameters:

iteration_start_time (float) – The time.time() at the start of the last iteration

Options

option IterativeSweeps.min_sweeps: int

Minimum number of sweeps to perform.

option IterativeSweeps.max_sweeps: int

Maximum number of sweeps to perform.

option IterativeSweeps.max_hours: float

If the DMRG took longer (measured in wall-clock time), ‘shelve’ the simulation, i.e. stop and return with the flag shelve=True.

Returns:

should_break – If True, the main loop in run() is broken.

Return type:

bool

sweep(optimize=True)[source]

One ‘sweep’ of a sweeper algorithm.

Iterate over the bond which is optimized, to the right and then back to the left to the starting point.

Parameters:

optimize (bool, optional) – Whether we actually optimize the state, e.g. to find the ground state of the effective Hamiltonian in case of a DMRG. (If False, just update the environments).

Options

option Sweep.chi_list_reactivates_mixer: bool

If True, the mixer is reset/reactivated each time the bond dimension grows due to Sweep.chi_list.

Returns:

max_trunc_err – Maximal truncation error introduced.

Return type:

float

classmethod switch_engine(other_engine, *, options=None, **kwargs)[source]

Initialize algorithm from another algorithm instance of a different class.

You can initialize one engine from another engine of a different, but not too different, subclass. Internally, this function calls get_resume_data() to extract data from the other_engine and then initializes the new class.

Note that it transfers the data without making copies in most cases; even the options! Thus, when you call run() on one of the two algorithm instances, it will modify the state, environment, etc. in the other. We recommend making the switch as engine = OtherSubClass.switch_engine(engine), directly replacing the reference; see the sketch after the parameter list below.

Parameters:
  • cls (class) – Subclass of Algorithm to be initialized.

  • other_engine (Algorithm) – The engine from which data should be transferred. Another, but not too different, algorithm subclass; e.g., you can switch from the TwoSiteDMRGEngine to the SingleSiteDMRGEngine.

  • options (None | dict-like) – If not None, these options are used for the new initialization. If None, take the options from the other_engine.

  • **kwargs – Further keyword arguments for class initialization. If not defined, resume_data is collected with get_resume_data().
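For instance, one could hand over an existing VariationalApplyMPO instance to this QR-based engine (a sketch; whether such a switch is sensible depends on the use case):

# `engine` is an existing VariationalApplyMPO instance
engine = QRBasedVariationalApplyMPO.switch_engine(engine)   # directly replace the reference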

update_env(**update_data)[source]

Update the left and right environments after an update of the state.

Parameters:

**update_data – Whatever is returned by update_local().

update_local(_, optimize=True)[source]

Perform local update.

This simply contracts the environments and theta from the ket to get an updated theta for the bra self.psi (to be changed in place).

update_new_psi(theta: Array)[source]

Given a new two-site wave function theta, split it and save it in psi.