EngineCombine

Inheritance Diagram

Inheritance diagram of tenpy.algorithms.dmrg.EngineCombine

Methods

EngineCombine.__init__(psi, model, DMRG_params)

EngineCombine.diag(theta_guess)

Diagonalize the effective Hamiltonian represented by self.

EngineCombine.environment_sweeps(N_sweeps)

Perform N_sweeps sweeps without optimization to update the environment.

EngineCombine.estimate_RAM([mem_saving_factor])

Gives an approximate prediction for the required memory usage.

EngineCombine.free_no_longer_needed_envs()

Remove no longer needed environments after an update.

EngineCombine.get_resume_data([...])

Return necessary data to resume a run() interrupted at a checkpoint.

EngineCombine.get_sweep_schedule()

Define the schedule of the sweep.

EngineCombine.init_env([model, resume_data, ...])

(Re-)initialize the environment.

EngineCombine.is_converged()

Determines if the algorithm is converged.

EngineCombine.make_eff_H()

Create new instance of self.EffectiveH at self.i0 and set it to self.eff_H.

EngineCombine.mixed_svd(theta)

Get (truncated) B from the new theta (as returned by diag).

EngineCombine.mixer_activate()

Set self.mixer to the class specified by options['mixer'].

EngineCombine.mixer_cleanup()

Cleanup the effects of a mixer.

EngineCombine.mixer_deactivate()

Deactivate the mixer.

EngineCombine.plot_sweep_stats([axes, ...])

Plot sweep_stats to display the convergence with the sweeps.

EngineCombine.plot_update_stats(axes[, ...])

Plot update_stats to display the convergence during the sweeps.

EngineCombine.post_run_cleanup()

Perform any final steps or clean up after the main loop has terminated.

EngineCombine.post_update_local(E0, age, N, ...)

Perform post-update actions.

EngineCombine.pre_run_initialize()

Perform preparations before run_iteration() is iterated.

EngineCombine.prepare_svd(theta)

Transform theta into matrix for svd.

EngineCombine.prepare_update_local()

Prepare self for calling update_local().

EngineCombine.reset_stats([resume_data])

Reset the statistics, useful if you want to start a new sweep run.

EngineCombine.resume_run()

Resume a run that was interrupted.

EngineCombine.run()

Run the DMRG simulation to find the ground state.

EngineCombine.run_iteration()

Perform a single iteration.

EngineCombine.set_B(U, S, VH)

Update the MPS with the U, S, VH returned by self.mixed_svd.

EngineCombine.status_update(iteration_start_time)

Emits a status message to the logging system after an iteration.

EngineCombine.stopping_criterion(...)

Determines if the main loop should be terminated.

EngineCombine.sweep([optimize, meas_E_trunc])

One 'sweep' of the algorithm.

EngineCombine.switch_engine(other_engine, *)

Initialize algorithm from another algorithm instance of a different class.

EngineCombine.update_env(**update_data)

Update the left and right environments after an update of the state.

EngineCombine.update_local(theta[, optimize])

Perform site-update on the site i0.

Class Attributes and Properties

EngineCombine.DMRG_params

EngineCombine.S_inv_cutoff

EngineCombine.engine_params

EngineCombine.n_optimize

The number of sites to be optimized at once.

EngineCombine.use_mixer_by_default

EngineCombine.verbose

class tenpy.algorithms.dmrg.EngineCombine(psi, model, DMRG_params)[source]

Bases: TwoSiteDMRGEngine

Engine which combines legs into pipes as far as possible.

This engine combines the virtual and physical leg for the left site and right site into pipes. This reduces the overhead of calculating charge combinations in the contractions, but one matvec() is formally more expensive, \(O(2 d^3 \chi^3 D)\).

Deprecated since version 0.5.0: Directly use the TwoSiteDMRGEngine with the DMRG parameter combine=True.
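Concretely, the deprecation note amounts to the following sketch (parameter values are placeholders; psi and model are assumed to be constructed already):

```python
# DMRG options replacing EngineCombine: combine=True makes
# TwoSiteDMRGEngine combine legs into pipes, as this class did.
dmrg_params = {
    'combine': True,
    'trunc_params': {'chi_max': 100, 'svd_min': 1e-10},  # placeholder values
}
# Hypothetical usage, assuming psi and model exist:
# from tenpy.algorithms.dmrg import TwoSiteDMRGEngine
# eng = TwoSiteDMRGEngine(psi, model, dmrg_params)
# E, psi = eng.run()
```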

DefaultMixer[source]

alias of DensityMatrixMixer

EffectiveH[source]

alias of TwoSiteH

diag(theta_guess)[source]

Diagonalize the effective Hamiltonian represented by self.

option DMRGEngine.max_N_for_ED: int

Maximum matrix dimension of the effective Hamiltonian up to which the 'default' diag_method uses ED instead of Lanczos.

option DMRGEngine.diag_method: str

One of the following strings:

‘default’

Same as 'lanczos' for large bond dimensions, but if the total dimension of the effective Hamiltonian does not exceed the DMRG parameter 'max_N_for_ED' it uses 'ED_block'.

‘lanczos’

lanczos() Default, the Lanczos implementation in TeNPy.

‘arpack’

lanczos_arpack() Based on scipy.sparse.linalg.eigsh(). Slower than 'lanczos', since it needs to convert the npc arrays to numpy arrays during each matvec, and possibly does many more iterations.

‘ED_block’

full_diag_effH() Contract the effective Hamiltonian to a (large!) matrix and diagonalize the block in the charge sector of the initial state. Preserves the charge sector of the explicitly conserved charges. However, if you don’t preserve a charge explicitly, it can break it. For example if you use a SpinChain({'conserve': 'parity'}), it could change the total “Sz”, but not the parity of ‘Sz’.

‘ED_all’

full_diag_effH() Contract the effective Hamiltonian to a (large!) matrix and diagonalize it completely. Allows to change the charge sector even for explicitly conserved charges. For example if you use a SpinChain({'conserve': 'Sz'}), it can change the total “Sz”.

Parameters:

theta_guess (Array) – Initial guess for the ground state of the effective Hamiltonian.

Returns:

  • E0 (float) – Energy of the found ground state.

  • theta (Array) – Ground state of the effective Hamiltonian.

  • N (int) – Number of Lanczos iterations used. -1 if unknown.

  • ov_change (float) – Change in the wave function 1. - abs(<theta_guess|theta_diag>)
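As a sketch, the diagonalization method is selected through the DMRG options (the values below are illustrative, not defaults):

```python
dmrg_params = {
    'diag_method': 'default',  # 'ED_block' below max_N_for_ED, else 'lanczos'
    'max_N_for_ED': 400,       # illustrative crossover dimension
    'lanczos_params': {'N_max': 20, 'E_tol': 1e-12},  # forwarded to lanczos()
}
```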

environment_sweeps(N_sweeps)[source]

Perform N_sweeps sweeps without optimization to update the environment.

Parameters:

N_sweeps (int) – Number of sweeps to run without optimization

estimate_RAM(mem_saving_factor=None)[source]

Gives an approximate prediction for the required memory usage.

This calculation is based on the requested bond dimension, the local Hilbert space dimension, the number of sites, and the boundary conditions.

Parameters:

mem_saving_factor (float) – Accounts for the RAM saved due to conservation laws. By default (None), it is extracted from the model automatically; however, this is only possible in a few cases, and in most cases it needs to be estimated, since it depends on the model parameters. If you have a better estimate, you can pass the value directly. The value can be extracted by building the initial state psi (usually by performing DMRG) and then calling print(psi.get_B(0).sparse_stats()). TeNPy prints the fraction of nonzero entries in the first line, for example, 6 of 16 entries (=0.375) nonzero. This fraction corresponds to the mem_saving_factor; in this example, it is 0.375.

Returns:

usage – Required RAM in MB.

Return type:

float

See also

tenpy.simulations.simulation.estimate_simulation_RAM

global function calling this.
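The scaling behind such an estimate can be illustrated with a toy calculation (this is not TeNPy's actual formula; the function name and the bytes-per-entry assumption are ours):

```python
def rough_mps_ram_mb(L, d, chi, bytes_per_entry=16, mem_saving_factor=1.0):
    """Crude MPS memory estimate: L tensors of shape (chi, d, chi),
    complex128 entries (16 bytes) by default, scaled by the fraction
    of nonzero entries that conservation laws leave over."""
    entries = L * d * chi ** 2
    return entries * bytes_per_entry * mem_saving_factor / 1024 ** 2

# e.g. 100 sites, local dimension 2, chi = 1000, 37.5% nonzero entries
estimate = rough_mps_ram_mb(100, 2, 1000, mem_saving_factor=0.375)
```

With mem_saving_factor=0.375 as in the sparse_stats() example above, the estimate is simply 37.5% of the dense value.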

free_no_longer_needed_envs()[source]

Remove no longer needed environments after an update.

This makes it possible to minimize the number of environments to be kept. For large MPO bond dimensions, these environments are by far the biggest part in memory, so this is a valuable optimization to reduce memory requirements.

get_resume_data(sequential_simulations=False)[source]

Return necessary data to resume a run() interrupted at a checkpoint.

At a checkpoint, you can save psi, model and options along with the data returned by this function. When the simulation aborts, you can resume it using this saved data with:

eng = AlgorithmClass(psi, model, options, resume_data=resume_data)
eng.resume_run()

An algorithm which doesn’t support this should override resume_run to raise an Error.

Parameters:

sequential_simulations (bool) – If True, return only the data for re-initializing a sequential simulation run, where we “adiabatically” follow the evolution of a ground state (for variational algorithms), or do series of quenches (for time evolution algorithms); see run_seq_simulations().

Returns:

resume_data – Dictionary with the necessary data (apart from copies of psi, model, options) to continue the simulation from where we are now. It might contain an explicit copy of psi.

Return type:

dict
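The checkpoint/resume cycle described above can be sketched as follows (make_checkpoint is our hypothetical helper, not part of TeNPy; psi and model are assumed to be saved alongside):

```python
def make_checkpoint(eng):
    """Collect what run() needs to be resumed later (hypothetical helper)."""
    return {
        'options': dict(eng.options),
        'resume_data': eng.get_resume_data(),  # may contain a copy of psi
    }

# Later, after reloading `chk` together with psi and model:
# eng = AlgorithmClass(psi, model, chk['options'],
#                      resume_data=chk['resume_data'])
# eng.resume_run()
```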

get_sweep_schedule()[source]

Define the schedule of the sweep.

One ‘sweep’ is a full sequence from the leftmost site to the right and back.

Returns:

schedule – Schedule for the sweep. Each entry is (i0, move_right, (update_LP, update_RP)), where i0 is the leftmost of the self.EffectiveH.length sites to be updated in update_local(), move_right indicates whether the next i0 in the schedule is right (True), left (False) or equal (None) of the current one, and update_LP, update_RP indicate whether it is necessary to update the LP and RP of the environments.

Return type:

iterable of (int, bool, (bool, bool))
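To illustrate the entry format, here is a schedule written by hand for a small finite chain with two-site updates (not generated by TeNPy; the update flags are illustrative):

```python
# Each entry: (i0, move_right, (update_LP, update_RP))
schedule = [
    (0, True, (True, False)),    # sweep right, building up left environments
    (1, True, (True, False)),
    (2, False, (False, True)),   # turn around, now updating right environments
    (1, False, (False, True)),
    (0, False, (False, True)),
]
for i0, move_right, (update_LP, update_RP) in schedule:
    pass  # update_local() is called for the sites starting at i0
```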

init_env(model=None, resume_data=None, orthogonal_to=None)[source]

(Re-)initialize the environment.

This function is useful to (re-)start a Sweep with a slightly different model or different (engine) parameters. Note that we assume that we still have the same psi. Calls reset_stats().

Parameters:
  • model (MPOModel) – The model representing the Hamiltonian for which we want to find the ground state. If None, keep the model used before.

  • resume_data (None | dict) – Given when resuming a simulation, as returned by get_resume_data(). Can contain another dict under the key init_env_data; the contents of init_env_data get passed as keyword arguments to the environment initialization.

  • orthogonal_to (None | list of MPS | list of dict) – List of other matrix product states to orthogonalize against. Instead of just the state, you can specify a dict with the state as ket and further keyword arguments for initializing the MPSEnvironment; the psi to be optimized is used as bra. Works only for finite or segment MPS; for infinite MPS it must be None. This can be used to find (a few) excited states as follows. First, run DMRG to find the ground state, and then run DMRG again while orthogonalizing against the ground state, which yields the first excited state (in the same symmetry sector), and so on. Note that resume_data['orthogonal_to'] takes precedence over the argument.

Options

Deprecated since version 0.6.0: Options LP, LP_age, RP and RP_age are now collected in a dictionary init_env_data with different keys init_LP, init_RP, age_LP, age_RP

Deprecated since version 0.8.0: Instead of passing the init_env_data as a option, it should be passed as dict entry of resume_data.

option Sweep.init_env_data: dict

Dictionary as returned by self.env.get_initialization_data() from get_initialization_data(). Deprecated, use the resume_data function/class argument instead.

option Sweep.orthogonal_to: list of MPS

Deprecated in favor of the orthogonal_to function argument (forwarded from the class argument) with the same effect.

option Sweep.start_env: int

Number of sweeps to be performed without optimization to update the environment.

Raises:

ValueError – If the engine is re-initialized with a new model whose legs are incompatible with those of the old model.
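The excited-states recipe above looks roughly like this (a sketch; psi0 here is only a placeholder object standing in for a converged ground-state MPS, and the commented engine call is hypothetical):

```python
psi0 = object()  # placeholder: a converged ground-state MPS in practice

# Plain form: orthogonalize the next run against the ground state.
orthogonal_to = [psi0]

# Dict form: the state as 'ket', plus further MPSEnvironment keyword arguments.
orthogonal_to = [{'ket': psi0}]

# Hypothetical second run (finite or segment MPS only):
# eng = TwoSiteDMRGEngine(psi1_guess, model, dmrg_params,
#                         orthogonal_to=orthogonal_to)
# E1, psi1 = eng.run()  # first excited state in the same symmetry sector
```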

is_converged()[source]

Determines if the algorithm is converged.

Does not cover any other reasons to abort, such as reaching a time limit. Such checks are covered by stopping_criterion().

make_eff_H()[source]

Create new instance of self.EffectiveH at self.i0 and set it to self.eff_H.

mixed_svd(theta)[source]

Get (truncated) B from the new theta (as returned by diag).

The goal is to split theta and truncate it:

|   -- theta --   ==>    -- U -- S --  VH -
|      |   |                |          |

Without a mixer, this is done by a simple svd and truncation of Schmidt values.

With a mixer, the state is perturbed before the SVD. The details of the perturbation are defined by the Mixer class.

Note that the returned S is a general (not diagonal) matrix, with labels 'vL', 'vR'.

Parameters:

theta (Array) – The optimized wave function, prepared for svd.

Returns:

  • U (Array) – Left-canonical part of theta. Labels '(vL.p)', 'vR'.

  • S (1D ndarray | 2D Array) – Without mixer just the singular values of the array; with mixer it might be a general matrix with labels 'vL', 'vR'; see comment above.

  • VH (Array) – Right-canonical part of theta. Labels 'vL', '(p.vR)'.

  • err (TruncationError) – The truncation error introduced.

  • S_approx (ndarray) – Just the S if a 1D ndarray, or an approximation of the correct S (which was used for truncation) in case S is 2D Array.

mixer_activate()[source]

Set self.mixer to the class specified by options['mixer'].

option Sweep.mixer: str | class | bool | None

Specifies which Mixer to use, if any. A string stands for one of the mixers defined in this module. A class is assumed to have the same interface as Mixer and is used to instantiate the mixer. None uses no mixer. True uses the mixer specified by the DefaultMixer class attribute. The default depends on the subclass of Sweep.

option Sweep.mixer_params: dict

Mixer parameters as described in Mixer.

See also

mixer_deactivate
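A sketch of enabling the mixer through the options (the mixer_params values are illustrative, not recommended defaults):

```python
dmrg_params = {
    'mixer': True,  # use the DefaultMixer of the engine subclass
    'mixer_params': {
        'amplitude': 1e-5,    # initial perturbation strength
        'decay': 1.2,         # reduce the amplitude each sweep
        'disable_after': 15,  # switch the mixer off after this many sweeps
    },
}
```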

mixer_cleanup()[source]

Cleanup the effects of a mixer.

A sweep() with an enabled Mixer leaves the MPS psi with 2D arrays in S. To recover the original form, this function simply performs one sweep with disabled mixer.

mixer_deactivate()[source]

Deactivate the mixer.

Set self.mixer=None and revert any other effects of mixer_activate().

property n_optimize

The number of sites to be optimized at once.

Indirectly set by the class attribute EffectiveH and its length. For example, TwoSiteDMRGEngine uses the TwoSiteH and hence has n_optimize=2, while the SingleSiteDMRGEngine has n_optimize=1.

plot_sweep_stats(axes=None, xaxis='time', yaxis='E', y_exact=None, **kwargs)[source]

Plot sweep_stats to display the convergence with the sweeps.

Parameters:
  • axes (matplotlib.axes.Axes) – The axes to plot into. Defaults to matplotlib.pyplot.gca().

  • xaxis (key of sweep_stats) – Key of sweep_stats to be used for the x-axis of the plots.

  • yaxis (key of sweep_stats) – Key of sweep_stats to be used for the y-axis of the plots.

  • y_exact (float) – Exact value for the quantity on the y-axis for comparison. If given, plot abs((y-y_exact)/y_exact) on a log-scale yaxis.

  • **kwargs – Further keyword arguments given to axes.plot(...).

plot_update_stats(axes, xaxis='time', yaxis='E', y_exact=None, **kwargs)[source]

Plot update_stats to display the convergence during the sweeps.

Parameters:
  • axes (matplotlib.axes.Axes) – The axes to plot into. Defaults to matplotlib.pyplot.gca()

  • xaxis ('N_updates' | 'sweep' | keys of update_stats) – Key of update_stats to be used for the x-axis of the plots. 'N_updates' is just enumerating the number of bond updates, and 'sweep' corresponds to the sweep number (including environment sweeps).

  • yaxis ('E' | keys of update_stats) – Key of update_stats to be used for the y-axis of the plots. For ‘E’, use the energy (per site for infinite systems).

  • y_exact (float) – Exact value for the quantity on the y-axis for comparison. If given, plot abs((y-y_exact)/y_exact) on a log-scale yaxis.

  • **kwargs – Further keyword arguments given to axes.plot(...).

post_run_cleanup()[source]

Perform any final steps or clean up after the main loop has terminated.

post_update_local(E0, age, N, ov_change, err, **update_data)[source]

Perform post-update actions.

Compute truncation energy and collect statistics.

Parameters:

**update_data (dict) – What was returned by update_local().

pre_run_initialize()[source]

Perform preparations before run_iteration() is iterated.

Returns:

The object to be returned by run() in case of immediate convergence, i.e. if no iterations are performed.

Return type:

result

prepare_svd(theta)[source]

Transform theta into matrix for svd.

prepare_update_local()[source]

Prepare self for calling update_local().

Returns:

theta – Current best guess for the ground state, which is to be optimized. Labels are 'vL', 'p0', 'p1', 'vR', or combined versions of it (if self.combine). For single-site DMRG, the 'p1' label is missing.

Return type:

Array

reset_stats(resume_data=None)[source]

Reset the statistics, useful if you want to start a new sweep run.

option DMRGEngine.chi_list: dict | None

A dictionary to gradually increase the chi_max parameter of trunc_params. The key defines starting from which sweep chi_max is set to the value, e.g. {0: 50, 20: 100} uses chi_max=50 for the first 20 sweeps and chi_max=100 afterwards. Overwrites trunc_params['chi_max']. By default (None), this feature is disabled.

option DMRGEngine.sweep_0: int

The number of sweeps already performed. (Useful for re-start).
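For example, the gradual bond-dimension ramp-up described for chi_list reads:

```python
dmrg_params = {
    # chi_max=50 for sweeps 0-19, chi_max=100 from sweep 20 onwards
    'chi_list': {0: 50, 20: 100},
    'trunc_params': {'svd_min': 1e-10},  # placeholder truncation settings
}
```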

resume_run()[source]

Resume a run that was interrupted.

In case we saved an intermediate result at a checkpoint, this function allows resuming the run() of the algorithm (after re-initialization with the resume_data). Since most algorithms just have a while loop with break conditions, the default behavior implemented here is to just call run().

run()[source]

Run the DMRG simulation to find the ground state.

Returns:

  • E (float) – The energy of the resulting ground state MPS.

  • psi (MPS) – The MPS representing the ground state after the simulation, i.e. just a reference to psi.

Options

option DMRGEngine.diag_method: str

Method to be used for diagonalization, default 'default'. For possible arguments see DMRGEngine.diag().

option DMRGEngine.E_tol_to_trunc: float

It’s reasonable to choose the Lanczos convergence criteria 'E_tol' not many magnitudes lower than the current truncation error. Therefore, if E_tol_to_trunc is not None, we update E_tol of lanczos_params to max_E_trunc*E_tol_to_trunc, restricted to the interval [E_tol_min, E_tol_max], where max_E_trunc is the maximal energy difference due to truncation right after each Lanczos optimization during the sweeps.

option DMRGEngine.E_tol_max: float

See E_tol_to_trunc

option DMRGEngine.E_tol_min: float

See E_tol_to_trunc

option DMRGEngine.max_E_err: float

Convergence if the change of the energy in each step satisfies |Delta E / max(E, 1)| < max_E_err. Note that this might be satisfied even if Delta E > 0, i.e., if the energy increases (due to truncation).

option DMRGEngine.max_hours: float

If the DMRG took longer (measured in wall-clock time), ‘shelve’ the simulation, i.e. stop and return with the flag shelve=True.

option DMRGEngine.max_S_err: float

Convergence if the relative change of the entropy in each step satisfies |Delta S|/S < max_S_err

option DMRGEngine.max_sweeps: int

Maximum number of sweeps to be performed.

option DMRGEngine.min_sweeps: int

Minimum number of sweeps to be performed. Defaults to 1.5*N_sweeps_check.

option DMRGEngine.N_sweeps_check: int

Number of sweeps to perform between checking convergence criteria and giving a status update.

option DMRGEngine.norm_tol: float

After the DMRG run, update the environment with at most norm_tol_iter sweeps until np.linalg.norm(psi.norm_err()) < norm_tol.

option DMRGEngine.norm_tol_iter: float

Perform at most norm_tol_iter * update_env sweeps to converge the norm error below norm_tol.

option DMRGEngine.norm_tol_final: float

After performing norm_tol_iter * update_env sweeps, if np.linalg.norm(psi.norm_err()) < norm_tol_final, call canonical_form() to canonicalize instead. This tolerance should be stricter than norm_tol to ensure canonical form even if DMRG cannot fully converge.

option DMRGEngine.P_tol_to_trunc: float

It’s reasonable to choose the Lanczos convergence criteria 'P_tol' not many magnitudes lower than the current truncation error. Therefore, if P_tol_to_trunc is not None, we update P_tol of lanczos_params to max_trunc_err*P_tol_to_trunc, restricted to the interval [P_tol_min, P_tol_max], where max_trunc_err is the maximal truncation error (discarded weight of the Schmidt values) due to truncation right after each Lanczos optimization during the sweeps.

option DMRGEngine.P_tol_max: float

See P_tol_to_trunc

option DMRGEngine.P_tol_min: float

See P_tol_to_trunc

option DMRGEngine.update_env: int

Number of sweeps without bond optimization to update the environment for infinite boundary conditions, performed every N_sweeps_check sweeps.
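Putting several of the convergence options above together (a sketch; the values are placeholders to adapt):

```python
dmrg_params = {
    'max_E_err': 1e-8,     # relative energy change per convergence check
    'max_S_err': 1e-5,     # relative entropy change per convergence check
    'N_sweeps_check': 10,  # check convergence every 10 sweeps
    'max_sweeps': 100,     # hard upper limit on the number of sweeps
    'max_hours': 24,       # shelve the run after one day of wall-clock time
    'norm_tol': 1e-5,      # post-run environment convergence target
}
```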

run_iteration()[source]

Perform a single iteration.

Returns:

The object to be returned by run() if the main loop terminates after this iteration.

Return type:

result

set_B(U, S, VH)[source]

Update the MPS with the U, S, VH returned by self.mixed_svd.

Parameters:
  • U (Array) – Left-canonical part of theta, as returned by the SVD.

  • VH (Array) – Right-canonical part of theta, as returned by the SVD.

  • S (1D array | 2D Array) – The middle part returned by the SVD, theta = U S VH. Without a mixer just the singular values, with enabled mixer a 2D array.

status_update(iteration_start_time: float)[source]

Emits a status message to the logging system after an iteration.

Parameters:

iteration_start_time (float) – The time.time() at the start of the last iteration

stopping_criterion(iteration_start_time: float) bool[source]

Determines if the main loop should be terminated.

Parameters:

iteration_start_time (float) – The time.time() at the start of the last iteration

Returns:

should_break – If True, the main loop in run() is broken.

Return type:

bool

sweep(optimize=True, meas_E_trunc=False)[source]

One ‘sweep’ of the algorithm.

Thin wrapper around tenpy.algorithms.mps_common.Sweep.sweep() with one additional parameter meas_E_trunc specifying whether to measure truncation energies.

classmethod switch_engine(other_engine, *, options=None, **kwargs)[source]

Initialize algorithm from another algorithm instance of a different class.

You can initialize one engine from another, not too different subclasses. Internally, this function calls get_resume_data() to extract data from the other_engine and then initializes the new class.

Note that it transfers the data without making copies in most cases; even the options! Thus, when you call run() on one of the two algorithm instances, it will modify the state, environment, etc. in the other. We recommend making the switch as engine = OtherSubClass.switch_engine(engine), directly replacing the reference.

Parameters:
  • cls (class) – Subclass of Algorithm to be initialized.

  • other_engine (Algorithm) – The engine from which data should be transferred. Another, not too different algorithm subclass; e.g. you can switch from the TwoSiteDMRGEngine to the SingleSiteDMRGEngine.

  • options (None | dict-like) – If not None, these options are used for the new initialization. If None, take the options from the other_engine.

  • **kwargs – Further keyword arguments for class initialization. If not defined, resume_data is collected with get_resume_data().

update_env(**update_data)[source]

Update the left and right environments after an update of the state.

Parameters:

**update_data – Whatever is returned by update_local().

update_local(theta, optimize=True)[source]

Perform site-update on the site i0.

Parameters:
  • theta (Array) – Initial guess for the ground state of the effective Hamiltonian.

  • optimize (bool) – Whether we actually optimize to find the ground state of the effective Hamiltonian. (If False, just update the environments).

Returns:

update_data – Data computed during the local update, as described in the following:

  • E0 (float) – Total energy, obtained before truncation (if optimize=True), or after truncation (if optimize=False), but never None.

  • N (int) – Dimension of the Krylov space used for optimization in the Lanczos algorithm. 0 if optimize=False.

  • age (int) – Current size of the DMRG simulation: the number of physical sites involved in the contraction.

  • U, VH (Array) – U and VH returned by mixed_svd().

  • ov_change (float) – Change in the wave function 1. - abs(<theta_guess|theta>) induced by diag(), not including the truncation!

Return type:

dict