VariationalCompression

Inheritance Diagram

Inheritance diagram of tenpy.algorithms.mps_common.VariationalCompression

Methods

VariationalCompression.__init__(psi, options)

VariationalCompression.environment_sweeps(...)

Perform N_sweeps sweeps without optimization to update the environment.

VariationalCompression.estimate_RAM([...])

Gives an approximate prediction for the required memory usage.

VariationalCompression.free_no_longer_needed_envs()

Remove no longer needed environments after an update.

VariationalCompression.get_resume_data([...])

Return necessary data to resume a run() interrupted at a checkpoint.

VariationalCompression.get_sweep_schedule()

Define the schedule of the sweep.

VariationalCompression.init_env([model, ...])

Initialize the environment.

VariationalCompression.is_converged()

Determines if the algorithm is converged.

VariationalCompression.make_eff_H()

Create new instance of self.EffectiveH at self.i0 and set it to self.eff_H.

VariationalCompression.mixer_activate()

Set self.mixer to the class specified by options['mixer'].

VariationalCompression.mixer_cleanup()

Cleanup the effects of a mixer.

VariationalCompression.mixer_deactivate()

Deactivate the mixer.

VariationalCompression.post_run_cleanup()

Perform any final steps or clean up after the main loop has terminated.

VariationalCompression.post_update_local(...)

Algorithm-specific actions to be taken after local update.

VariationalCompression.pre_run_initialize()

Perform preparations before run_iteration() is iterated.

VariationalCompression.prepare_update_local()

Prepare self for calling update_local().

VariationalCompression.reset_stats([resume_data])

Reset the statistics.

VariationalCompression.resume_run()

Resume a run that was interrupted.

VariationalCompression.run()

Run the compression.

VariationalCompression.run_iteration()

Perform a single iteration.

VariationalCompression.status_update(...)

Emits a status message to the logging system after an iteration.

VariationalCompression.stopping_criterion(...)

Determines if the main loop should be terminated.

VariationalCompression.sweep([optimize])

One 'sweep' of a sweeper algorithm.

VariationalCompression.switch_engine(...[, ...])

Initialize algorithm from another algorithm instance of a different class.

VariationalCompression.update_env(**update_data)

Update the left and right environments after an update of the state.

VariationalCompression.update_local(_[, ...])

Perform local update.

VariationalCompression.update_new_psi(theta)

Given a new two-site wave function theta, split it and save it in psi.

Class Attributes and Properties

VariationalCompression.DefaultMixer

VariationalCompression.S_inv_cutoff

VariationalCompression.n_optimize

The number of sites to be optimized at once.

VariationalCompression.use_mixer_by_default

class tenpy.algorithms.mps_common.VariationalCompression(psi, options, resume_data=None)[source]

Bases: IterativeSweeps

Variational compression of an MPS (in place).

To compress an MPS psi, use VariationalCompression(psi, options).run().

The algorithm is the same as described in VariationalApplyMPO, except that there is no MPO in the networks; one can think of the MPO as being trivial.
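A minimal usage sketch (the option values chi_max=100, svd_min=1.e-10 and max_sweeps=5 are illustrative choices, not defaults; psi is assumed to be an existing MPS):

from tenpy.algorithms.mps_common import VariationalCompression

options = {
    'trunc_params': {'chi_max': 100, 'svd_min': 1.e-10},  # truncation used during the sweeps
    'max_sweeps': 5,                                       # perform at most 5 sweeps
}
eng = VariationalCompression(psi, options)
max_trunc_err = eng.run()  # compresses psi in place and returns a TruncationError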

Parameters:
  • psi (MPS) – The state to be compressed.

  • options (dict) – See VariationalCompression.

  • resume_data (None | dict) – By default (None) ignored. If a dict, it should contain the data returned by get_resume_data() when intending to continue/resume an interrupted run, in particular ‘init_env_data’.

Options

config VariationalCompression
option summary

chi_list (from Sweep) in IterativeSweeps.reset_stats

By default (``None``) this feature is disabled. [...]

chi_list_reactivates_mixer (from Sweep) in IterativeSweeps.sweep

If True, the mixer is reset/reactivated each time the bond dimension grows [...]

combine (from Sweep) in Sweep

Whether to combine legs into pipes. This combines the virtual and [...]

lanczos_params (from Sweep) in Sweep

Lanczos parameters as described in :cfg:config:`KrylovBased`.

max_hours (from IterativeSweeps) in DMRGEngine.stopping_criterion

If the DMRG took longer (measured in wall-clock time), [...]

max_N_sites_per_ring (from Algorithm) in Algorithm

Threshold for raising errors on too many sites per ring. Default ``18``. [...]

max_sweeps (from IterativeSweeps) in DMRGEngine.stopping_criterion

Maximum number of sweeps to perform.

max_trunc_err (from IterativeSweeps) in IterativeSweeps

Threshold for raising errors on too large truncation errors. Default ``0.00 [...]

min_sweeps (from IterativeSweeps) in DMRGEngine.stopping_criterion

Minimum number of sweeps to perform.

mixer (from Sweep) in DMRGEngine.mixer_activate

Specifies which :class:`Mixer` to use, if any. [...]

mixer_params (from Sweep) in DMRGEngine.mixer_activate

Mixer parameters as described in :cfg:config:`Mixer`.

start_env (from Sweep) in DMRGEngine.init_env

Number of sweeps to be performed without optimization to update the environment.

start_env_sites

Number of sites to contract for the initial LP/RP environment in case of in [...]

tol_theta_diff

Stop after less than `max_sweeps` sweeps if the 1-site wave function change [...]

trunc_params

Truncation parameters as described in :cfg:config:`truncation`.

option trunc_params: dict

Truncation parameters as described in truncation.

option tol_theta_diff: float | None

Stop after fewer than max_sweeps sweeps if the 1-site wave function changed by less than this value, i.e. if 1. - |<theta_old|theta_new>| < tol_theta_diff, where theta_old/new are the two-site wave functions during the sweep to the left. None disables this convergence check, always performing max_sweeps sweeps.

option start_env_sites: int

Number of sites to contract for the initial LP/RP environment in case of infinite MPS.

renormalize

Used to keep track of renormalization in the last sweep for psi.norm.

Type:

list

EffectiveH[source]

alias of DummyTwoSiteH

pre_run_initialize()[source]

Perform preparations before run_iteration() is iterated.

Returns:

The object to be returned by run() in case of immediate convergence, i.e. if no iterations are performed.

Return type:

result

run_iteration()[source]

Perform a single iteration.

Returns:

The object to be returned by run() if the main loop terminates after this iteration

Return type:

result

is_converged()[source]

Determines if the algorithm is converged.

Does not cover any other reasons to abort, such as reaching a time limit. Such checks are covered by stopping_criterion().

post_run_cleanup()[source]

Perform any final steps or clean up after the main loop has terminated.

run()[source]

Run the compression.

The state psi is compressed in place.

Warning

Call this function directly after initializing the class, without modifying psi in between. A copy of psi is made during init_env()!

Returns:

max_trunc_err – The maximal truncation error of a two-site wave function.

Return type:

TruncationError
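The returned TruncationError can be inspected afterwards, e.g. via its eps attribute (a minimal sketch, continuing the usage example above):

trunc_err = eng.run()
print("discarded weight eps =", trunc_err.eps)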

init_env(model=None, resume_data=None, orthogonal_to=None)[source]

Initialize the environment.

Parameters:
  • model – Ignored, only there for compatibility with the Sweep class.

  • orthogonal_to – Ignored, only there for compatibility with the Sweep class.

  • resume_data (dict) – May contain init_env_data.

get_sweep_schedule()[source]

Define the schedule of the sweep.

Compared to the plain Sweep.get_sweep_schedule(), we add one extra update at the end with i0=0 (which is the same as the first update of the sweep). This is done to ensure proper convergence after each sweep, even if that implies that site 0 is then updated twice per sweep.

update_local(_, optimize=True)[source]

Perform local update.

This simply contracts the environments and theta from the ket to get an updated theta for the bra self.psi (to be changed in place).

update_new_psi(theta)[source]

Given a new two-site wave function theta, split it and save it in psi.

environment_sweeps(N_sweeps)[source]

Perform N_sweeps sweeps without optimization to update the environment.

Parameters:

N_sweeps (int) – Number of sweeps to run without optimization

estimate_RAM(mem_saving_factor=None)[source]

Gives an approximate prediction for the required memory usage.

This calculation is based on the requested bond dimension, the local Hilbert space dimension, the number of sites, and the boundary conditions.

Parameters:

mem_saving_factor (float) – Represents the amount of RAM saved due to conservation laws. By default (None), it is extracted from the model automatically. However, this is only possible in a few cases, and in most cases it needs to be estimated, since it depends on the model parameters. If you have a better estimate, you can pass the value directly. This value can be extracted by building the initial state psi (usually by performing DMRG) and then calling print(psi.get_B(0).sparse_stats()). TeNPy will print the fraction of nonzero entries in the first line, for example, 6 of 16 entries (=0.375) nonzero. This fraction corresponds to the mem_saving_factor; in our example, it is 0.375.

Returns:

usage – Required RAM in MB.

Return type:

float

See also

tenpy.simulations.simulation.estimate_simulation_RAM

global function calling this.
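For illustration, a sketch of a call with a manually estimated mem_saving_factor; the value 0.375 is the example fraction quoted above, and how accurate the resulting estimate is depends on the state and options at hand:

eng = VariationalCompression(psi, options)
usage_mb = eng.estimate_RAM(mem_saving_factor=0.375)  # estimated RAM in MB
print(f"estimated RAM usage: {usage_mb:.1f} MB")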

free_no_longer_needed_envs()[source]

Remove no longer needed environments after an update.

This allows minimizing the number of environments that need to be kept. For large MPO bond dimensions, these environments are by far the biggest part in memory, so this is a valuable optimization to reduce memory requirements.

get_resume_data(sequential_simulations=False)[source]

Return necessary data to resume a run() interrupted at a checkpoint.

At a checkpoint, you can save psi, model and options along with the data returned by this function. When the simulation aborts, you can resume it using this saved data with:

eng = AlgorithmClass(psi, model, options, resume_data=resume_data)
eng.resume_run()

An algorithm which doesn't support this should override resume_run() to raise an error.

Parameters:

sequential_simulations (bool) – If True, return only the data for re-initializing a sequential simulation run, where we “adiabatically” follow the evolution of a ground state (for variational algorithms), or do series of quenches (for time evolution algorithms); see run_seq_simulations().

Returns:

resume_data – Dictionary with the necessary data (apart from copies of psi, model, options) that allows continuing the algorithm run from where we are now. It might contain an explicit copy of psi.

Return type:

dict
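A minimal checkpointing sketch under these assumptions, using this class's __init__ signature (which takes no model); hdf5_io is used only for illustration and the filename 'checkpoint.h5' is arbitrary:

from tenpy.tools import hdf5_io

# at a checkpoint: store what is needed to resume later
data = {'psi': eng.psi, 'options': eng.options, 'resume_data': eng.get_resume_data()}
hdf5_io.save(data, 'checkpoint.h5')

# later: re-initialize the engine and resume the interrupted run
data = hdf5_io.load('checkpoint.h5')
eng = VariationalCompression(data['psi'], data['options'], resume_data=data['resume_data'])
eng.resume_run()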

make_eff_H()[source]

Create new instance of self.EffectiveH at self.i0 and set it to self.eff_H.

mixer_activate()[source]

Set self.mixer to the class specified by options['mixer'].

option Sweep.mixer: str | class | bool | None

Specifies which Mixer to use, if any. A string stands for one of the mixers defined in this module. A class is assumed to have the same interface as Mixer and is used to instantiate the mixer. None uses no mixer. True uses the mixer specified by the DefaultMixer class attribute. The default depends on the subclass of Sweep.

option Sweep.mixer_params: dict

Mixer parameters as described in Mixer.

See also

mixer_deactivate

mixer_cleanup()[source]

Cleanup the effects of a mixer.

A sweep() with an enabled Mixer leaves the MPS psi with 2D arrays in S. This method recovers the original form by performing SVDs of these S arrays and updating the MPS tensors accordingly.

mixer_deactivate()[source]

Deactivate the mixer.

Set self.mixer=None and revert any other effects of mixer_activate().

property n_optimize

The number of sites to be optimized at once.

Indirectly set by the class attribute EffectiveH and its length. For example, TwoSiteDMRGEngine uses the TwoSiteH and hence has n_optimize=2, while the SingleSiteDMRGEngine has n_optimize=1.

post_update_local(err, **update_data)[source]

Algorithm-specific actions to be taken after local update.

An example would be to collect statistics.

prepare_update_local()[source]

Prepare self for calling update_local().

Returns:

theta – Current best guess for the ground state, which is to be optimized. Labels are 'vL', 'p0', 'p1', 'vR', or combined versions of it (if self.combine). For single-site DMRG, the 'p1' label is missing.

Return type:

Array

reset_stats(resume_data=None)[source]

Reset the statistics. Useful if you want to start a new Sweep run.

This method is expected to be overwritten by subclasses, which should then define self.update_stats and self.sweep_stats dicts consistent with the statistics generated by the algorithm particular to that subclass.

Parameters:

resume_data (dict) – Given when resuming a simulation, as returned by get_resume_data(). Here, we read out the sweeps.

Options

option Sweep.chi_list: None | dict(int -> int)

By default (None) this feature is disabled. A dict allows gradually increasing chi_max. An entry at_sweep: chi states that starting from sweep at_sweep, the value chi is to be used for trunc_params['chi_max']. For example, chi_list={0: 50, 20: 100} uses chi_max=50 for the first 20 sweeps and chi_max=100 afterwards. A value of None is initialized to the current value of trunc_params['chi_max'] at algorithm initialization.

resume_run()[source]

Resume a run that was interrupted.

In case we saved an intermediate result at a checkpoint, this function allows resuming the run() of the algorithm (after re-initialization with the resume_data). Since most algorithms just have a while loop with break conditions, the default behavior implemented here is to just call run().

status_update(iteration_start_time: float)[source]

Emits a status message to the logging system after an iteration.

Parameters:

iteration_start_time (float) – The time.time() at the start of the last iteration

stopping_criterion(iteration_start_time: float) → bool[source]

Determines if the main loop should be terminated.

Parameters:

iteration_start_time (float) – The time.time() at the start of the last iteration

Options

option IterativeSweeps.min_sweeps: int

Minimum number of sweeps to perform.

option IterativeSweeps.max_sweeps: int

Maximum number of sweeps to perform.

option IterativeSweeps.max_hours: float

If the DMRG took longer (measured in wall-clock time), ‘shelve’ the simulation, i.e. stop and return with the flag shelve=True.

Returns:

should_break – If True, the main loop in run() is broken.

Return type:

bool

sweep(optimize=True)[source]

One ‘sweep’ of a sweeper algorithm.

Iterate over the bond which is optimized, to the right and then back to the left to the starting point.

Parameters:

optimize (bool, optional) – Whether we actually optimize the state, e.g. to find the ground state of the effective Hamiltonian in case of a DMRG. (If False, just update the environments).

Options

option Sweep.chi_list_reactivates_mixer: bool

If True, the mixer is reset/reactivated each time the bond dimension grows due to Sweep.chi_list.

Returns:

max_trunc_err – Maximal truncation error introduced.

Return type:

float

classmethod switch_engine(other_engine, *, options=None, **kwargs)[source]

Initialize algorithm from another algorithm instance of a different class.

You can initialize one engine from another instance of a not-too-different subclass. Internally, this function calls get_resume_data() to extract data from the other_engine and then initializes the new class.

Note that it transfers the data without making copies in most cases; even the options! Thus, when you call run() on one of the two algorithm instances, it will modify the state, environment, etc. in the other. We recommend making the switch as engine = OtherSubClass.switch_engine(engine), directly replacing the reference.
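A sketch of that pattern, assuming an existing DMRG engine instance and the single-site engine class from tenpy.algorithms.dmrg:

from tenpy.algorithms.dmrg import SingleSiteDMRGEngine

# switch from a two-site to the single-site engine,
# directly replacing the reference so only one instance is used afterwards
engine = SingleSiteDMRGEngine.switch_engine(engine)
engine.run()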

Parameters:
  • cls (class) – Subclass of Algorithm to be initialized.

  • other_engine (Algorithm) – The engine from which data should be transferred. Another, but not too different algorithm subclass; e.g. you can switch from the TwoSiteDMRGEngine to the OneSiteDMRGEngine.

  • options (None | dict-like) – If not None, these options are used for the new initialization. If None, take the options from the other_engine.

  • **kwargs – Further keyword arguments for class initialization. If not defined, resume_data is collected with get_resume_data().

update_env(**update_data)[source]

Update the left and right environments after an update of the state.

Parameters:

**update_data – Whatever is returned by update_local().