VariationalCompression¶
full name: tenpy.algorithms.mps_common.VariationalCompression
parent module:
tenpy.algorithms.mps_common
type: class
Inheritance Diagram
Methods

- environment_sweeps: Perform N_sweeps sweeps without optimization to update the environment.
- free_no_longer_needed_envs: Remove no longer needed environments after an update.
- get_resume_data: Return necessary data to resume a run() interrupted at a checkpoint.
- get_sweep_schedule: Define the schedule of the sweep.
- init_env: Initialize the environment.
- make_eff_H: Create new instance of self.EffectiveH at self.i0 and set it to self.eff_H.
- post_update_local: Algorithm-specific actions to be taken after local update.
- prepare_update: Prepare self for calling update_local().
- reset_stats: Reset the statistics.
- resume_run: Resume a run that was interrupted.
- run: Run the compression.
- sweep: One 'sweep' of a sweeper algorithm.
- update_env: Update the left and right environments after an update of the state.
- update_local: Perform local update.
- update_new_psi: Given a new two-site wave function theta, split it and save it in psi.
Class Attributes and Properties

- n_optimize: The number of sites to be optimized at once.
- class tenpy.algorithms.mps_common.VariationalCompression(psi, options, resume_data=None)[source]¶
Bases: tenpy.algorithms.mps_common.Sweep
Variational compression of an MPS (in place).
To compress an MPS psi, use
VariationalCompression(psi, options).run()
The algorithm is the same as described in VariationalApplyMPO, except that we don't have an MPO in the networks: one can think of the MPO as being trivial.
- Parameters
psi (MPS) – The state to be compressed.
options (dict) – See VariationalCompression.
resume_data (None | dict) – By default (None) ignored. If a dict, it should contain the data returned by get_resume_data() when intending to continue/resume an interrupted run, in particular 'init_env_data'.
Options
- config VariationalCompression¶
option summary:
chi_list (from Sweep): By default (``None``) this feature is disabled. [...]
combine (from Sweep): Whether to combine legs into pipes. This combines the virtual and [...]
init_env_data (from Sweep) in DMRGEngine.init_env: Dictionary as returned by ``self.env.get_initialization_data()`` from [...]
lanczos_params (from Sweep) in Sweep: Lanczos parameters as described in :cfg:config:`Lanczos`.
N_sweeps: Number of sweeps to perform.
orthogonal_to (from Sweep) in DMRGEngine.init_env: Deprecated in favor of the `orthogonal_to` function argument (forwarded fro [...]
Number of sweeps to be performed without optimization to update the environment.
start_env_sites: Number of sites to contract for the initial LP/RP environment in case of inf [...]
trunc_params: Truncation parameters as described in :cfg:config:`truncation`.
- option trunc_params: dict¶
Truncation parameters as described in
truncation
.
- option N_sweeps: int¶
Number of sweeps to perform.
- option start_env_sites: int¶
Number of sites to contract for the initial LP/RP environment in case of infinite MPS.
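The trunc_params control a truncated SVD on each bond. As a rough, self-contained sketch of how a chi_max cutoff and the reported truncation error interact (the helper name and the numbers below are illustrative, not part of TeNPy's API):

```python
import numpy as np

def truncate_singular_values(S, chi_max=None, svd_min=None):
    """Keep at most chi_max singular values, dropping those below svd_min.

    Returns the kept values and the discarded weight
    eps = sum_{discarded} S_i^2 / sum_i S_i^2 (the truncation error).
    """
    S = np.sort(np.asarray(S))[::-1]          # descending order
    keep = np.ones(len(S), dtype=bool)
    if chi_max is not None:
        keep[chi_max:] = False                # hard cap on the bond dimension
    if svd_min is not None:
        keep &= S > svd_min                   # drop negligible singular values
    eps = np.sum(S[~keep] ** 2) / np.sum(S ** 2)
    return S[keep], eps

S = np.array([0.8, 0.5, 0.3, 0.1, 0.01])
kept, eps = truncate_singular_values(S, chi_max=3)
# the three largest values survive; eps is the discarded relative weight
```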
- run()[source]¶
Run the compression.
The state psi is compressed in place.
Warning
Call this function directly after initializing the class, without modifying psi in between. A copy of psi is made during init_env()!
- Returns
max_trunc_err – The maximal truncation error of a two-site wave function.
- Return type
- update_local(_, optimize=True)[source]¶
Perform local update.
This simply contracts the environments and theta from the ket to get an updated theta for the bra self.psi (to be changed in place).
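In tensor-network terms, this update is a contraction of the left environment, the ket's two-site theta, and the right environment. A minimal numpy sketch of that contraction, with made-up shapes and without the gauge bookkeeping of the real implementation:

```python
import numpy as np

# Illustrative shapes; the "MPO" is trivial for pure compression, so the
# environments carry only two virtual legs each.
chi_bra, chi_ket, d = 3, 4, 2
LP = np.random.rand(chi_bra, chi_ket)            # left environment (bra leg, ket leg)
RP = np.random.rand(chi_ket, chi_bra)            # right environment (ket leg, bra leg)
theta = np.random.rand(chi_ket, d, d, chi_ket)   # two-site wave function of the ket

# Contract LP -- theta -- RP to obtain the updated theta for the bra.
theta_new = np.einsum('ab,bijc,cd->aijd', LP, theta, RP)
```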
- update_new_psi(theta)[source]¶
Given a new two-site wave function theta, split it and save it in psi.
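Splitting a two-site theta back into site tensors is typically done with a truncated SVD. A hedged numpy sketch of that step (shapes and chi_max are illustrative, not TeNPy's internals):

```python
import numpy as np

chi, d, chi_max = 3, 2, 4
theta = np.random.rand(chi, d, d, chi)        # legs vL, p0, p1, vR

# Group (vL, p0) and (p1, vR) into matrix legs and take a truncated SVD.
mat = theta.reshape(chi * d, d * chi)
U, S, Vh = np.linalg.svd(mat, full_matrices=False)
U, S, Vh = U[:, :chi_max], S[:chi_max], Vh[:chi_max, :]
S = S / np.linalg.norm(S)                     # renormalize after truncation

A = U.reshape(chi, d, chi_max)                # left-canonical site tensor
B = (np.diag(S) @ Vh).reshape(chi_max, d, chi)  # remaining right part
```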
- environment_sweeps(N_sweeps)[source]¶
Perform N_sweeps sweeps without optimization to update the environment.
- Parameters
N_sweeps (int) – Number of sweeps to run without optimization
- free_no_longer_needed_envs()[source]¶
Remove no longer needed environments after an update.
This allows minimizing the number of environments that need to be kept. For large MPO bond dimensions, these environments are by far the biggest part in memory, so this is a valuable optimization to reduce memory requirements.
- get_resume_data(sequential_simulations=False)[source]¶
Return necessary data to resume a run() interrupted at a checkpoint.
At a checkpoint, you can save psi, model and options along with the data returned by this function. When the simulation aborts, you can resume it using this saved data with:
eng = AlgorithmClass(psi, model, options, resume_data=resume_data)
eng.resume_run()
An algorithm which doesn't support this should override resume_run to raise an error.
- Parameters
sequential_simulations (bool) – If True, return only the data for re-initializing a sequential simulation run, where we "adiabatically" follow the evolution of a ground state (for variational algorithms), or do a series of quenches (for time evolution algorithms); see run_seq_simulations().
- Returns
resume_data – Dictionary with necessary data (apart from copies of psi, model, options) that allows continuing the simulation from where we are now. It might contain an explicit copy of psi.
- Return type
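The checkpoint/resume pattern described above can be mimicked with a toy engine; the class and attribute names below are invented for illustration and are not TeNPy's:

```python
class DummyEngine:
    """Toy sweeping engine illustrating the get_resume_data / resume_run pattern."""

    def __init__(self, options, resume_data=None):
        self.options = options
        # pick up the sweep counter from a previous, interrupted run
        self.sweeps = 0 if resume_data is None else resume_data['sweeps']

    def run(self):
        while self.sweeps < self.options['N_sweeps']:
            self.sweeps += 1  # ... one sweep of actual work would go here ...
        return self.sweeps

    def resume_run(self):
        # default behaviour: just call run() again; the while loop continues
        return self.run()

    def get_resume_data(self):
        return {'sweeps': self.sweeps}

eng = DummyEngine({'N_sweeps': 5})
eng.sweeps = 2                       # pretend we were interrupted mid-run
saved = eng.get_resume_data()        # save this alongside psi, model, options
eng2 = DummyEngine({'N_sweeps': 5}, resume_data=saved)
final = eng2.resume_run()            # finishes the remaining sweeps
```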
- get_sweep_schedule()[source]¶
Define the schedule of the sweep.
One ‘sweep’ is a full sequence from the leftmost site to the right and back.
- Returns
schedule – Schedule for the sweep. Each entry is (i0, move_right, (update_LP, update_RP)), where i0 is the leftmost of the self.EffectiveH.length sites to be updated in update_local(), move_right indicates whether the next i0 in the schedule is to the right (True) of the current one, and update_LP, update_RP indicate whether it is necessary to update the LP and RP of the environments.
- Return type
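As an illustration only (not TeNPy's actual code), such a schedule for a two-site update on a finite chain can be built like this:

```python
def sweep_schedule(L, n=2):
    """Illustrative n-site sweep schedule for a finite chain of L sites:
    i0 runs from the left end to the right and back.  Each entry is
    (i0, move_right, (update_LP, update_RP))."""
    # moving right, only the left environment needs updating, and vice versa
    right = [(i0, True, (True, False)) for i0 in range(L - n)]
    left = [(i0, False, (False, True)) for i0 in range(L - n, 0, -1)]
    return right + left

sched = sweep_schedule(4)
# for L=4, n=2 the leftmost site i0 visits 0, 1, 2, then back to 1
```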
- property n_optimize¶
The number of sites to be optimized at once.
Indirectly set by the class attribute EffectiveH and its length. For example, TwoSiteDMRGEngine uses the TwoSiteH and hence has n_optimize=2, while the SingleSiteDMRGEngine has n_optimize=1.
- post_update_local(err, **update_data)[source]¶
Algorithm-specific actions to be taken after local update.
An example would be to collect statistics.
- prepare_update()[source]¶
Prepare self for calling update_local().
- Returns
theta – Current best guess for the ground state, which is to be optimized. Labels are 'vL', 'p0', 'p1', 'vR', or combined versions of it (if self.combine). For single-site DMRG, the 'p1' label is missing.
- Return type
- reset_stats(resume_data=None)[source]¶
Reset the statistics. Useful if you want to start a new Sweep run.
This method is expected to be overwritten by subclasses, which should then define self.update_stats and self.sweep_stats dicts consistent with the statistics generated by the particular algorithm of that subclass.
- Parameters
resume_data (dict) – Given when resuming a simulation, as returned by
get_resume_data()
. Here, we read out the sweeps.
Options
- option Sweep.chi_list: None | dict(int -> int)¶
By default (None) this feature is disabled. A dict allows gradually increasing chi_max: an entry at_sweep: chi states that starting from sweep at_sweep, the value chi is to be used for trunc_params['chi_max']. For example, chi_list={0: 50, 20: 100} uses chi_max=50 for the first 20 sweeps and chi_max=100 afterwards.
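The lookup implied by chi_list ("the entry with the largest key not exceeding the current sweep applies") can be sketched as a small helper; the function name is made up for illustration:

```python
def chi_for_sweep(sweep, chi_list):
    """Look up chi_max for a given sweep number from a chi_list-style dict:
    the entry with the largest key <= sweep applies."""
    applicable = [at for at in chi_list if at <= sweep]
    return chi_list[max(applicable)]

chi_list = {0: 50, 20: 100}
# sweeps 0..19 use chi_max=50, sweep 20 onwards uses chi_max=100
```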
- resume_run()[source]¶
Resume a run that was interrupted.
In case we saved an intermediate result at a checkpoint, this function allows resuming the run() of the algorithm (after re-initialization with the resume_data). Since most algorithms just have a while loop with break conditions, the default behaviour implemented here is to just call run().
- sweep(optimize=True)[source]¶
One ‘sweep’ of a sweeper algorithm.
Iterate over the bonds to be optimized, to the right and then back to the left to the starting point. If optimize=False, don't actually diagonalize the effective Hamiltonian, but only update the environment.
- update_env(**update_data)[source]¶
Update the left and right environments after an update of the state.
- Parameters
**update_data – Whatever is returned by update_local().