PurificationApplyMPO
full name: tenpy.algorithms.purification.PurificationApplyMPO
parent module: tenpy.algorithms.purification
type: class
Methods

| environment_sweeps | Perform N_sweeps sweeps without optimization to update the environment. |
| estimate_RAM | Gives an approximate prediction for the required memory usage. |
| free_no_longer_needed_envs | Remove no longer needed environments after an update. |
| get_resume_data | Return necessary data to resume a run() interrupted at a checkpoint. |
| get_sweep_schedule | Define the schedule of the sweep. |
| init_env | Initialize the environment. |
| is_converged | Determines if the algorithm is converged. |
| make_eff_H | Create new instance of self.EffectiveH at self.i0 and set it to self.eff_H. |
| mixer_activate | Set self.mixer to the class specified by options['mixer']. |
| mixer_cleanup | Cleanup the effects of a mixer. |
| mixer_deactivate | Deactivate the mixer. |
| post_run | Perform any final steps or clean up after the main loop has terminated. |
| post_update_local | Algorithm-specific actions to be taken after local update. |
| pre_run_initialize | Perform preparations before run_iteration() is iterated. |
| prepare_update_local | Prepare self for calling update_local(). |
| reset_stats | Reset the statistics. |
| resume_run | Resume a run that was interrupted. |
| run | Run the compression. |
| run_iteration | Perform a single iteration. |
| status_update | Emits a status message to the logging system after an iteration. |
| stopping_criterion | Determines if the main loop should be terminated. |
| sweep | One 'sweep' of a sweeper algorithm. |
| switch_engine | Initialize algorithm from another algorithm instance of a different class. |
| update_env | Update the left and right environments after an update of the state. |
| update_local | Perform local update. |
| update_new_psi | Given a new two-site wave function theta, split it and save it in psi. |
Class Attributes and Properties

| EffectiveH | alias of PurificationTwoSiteU |
| n_optimize | The number of sites to be optimized at once. |
- class tenpy.algorithms.purification.PurificationApplyMPO(psi, U_MPO, options, **kwargs)[source]
Bases: VariationalApplyMPO
Variant of VariationalApplyMPO suitable for purification.
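For orientation, a minimal usage sketch (assuming an existing purification MPS psi and an MPO U_MPO to apply; the option values are illustrative placeholders):

    from tenpy.algorithms.purification import PurificationApplyMPO

    options = {'trunc_params': {'chi_max': 100, 'svd_min': 1.e-10}}
    eng = PurificationApplyMPO(psi, U_MPO, options)  # psi will be compressed in place
    max_trunc_err = eng.run()  # maximal TruncationError of the compression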
- EffectiveH[source]
alias of PurificationTwoSiteU
- update_local(_, optimize=True)[source]
Perform local update.
This simply contracts the environments and theta from the ket to get an updated theta for the bra self.psi (to be changed in place).
- update_new_psi(theta)[source]
Given a new two-site wave function theta, split it and save it in psi.
- environment_sweeps(N_sweeps)[source]
Perform N_sweeps sweeps without optimization to update the environment.
- Parameters:
N_sweeps (int) – Number of sweeps to run without optimization.
- estimate_RAM(mem_saving_factor=None)[source]
Gives an approximate prediction for the required memory usage.
This calculation is based on the requested bond dimension, the local Hilbert space dimension, the number of sites, and the boundary conditions.
- Parameters:
mem_saving_factor (float) – Represents the amount of RAM saved due to conservation laws. By default, it is None and is extracted from the model automatically. However, this is only possible in a few cases, and it needs to be estimated in most cases, since it depends on the model parameters. If one has a better estimate, one can pass the value directly. This value can be extracted by building the initial state psi (usually by performing DMRG) and then calling print(psi.get_B(0).sparse_stats()). TeNPy will automatically print the fraction of nonzero entries in the first line, for example, 6 of 16 entries (=0.375) nonzero. This fraction corresponds to the mem_saving_factor; in our example, it is 0.375.
- Returns:
usage – Required RAM in MB.
- Return type:
float
See also
tenpy.simulations.simulation.estimate_simulation_RAM – the global function calling this.
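As a sketch, the mem_saving_factor could be read off from an existing state and passed in explicitly (eng denotes an initialized engine as above; 0.375 is just the number from the example in the docstring):

    print(psi.get_B(0).sparse_stats())   # first line, e.g.: "6 of 16 entries (=0.375) nonzero"
    usage = eng.estimate_RAM(mem_saving_factor=0.375)
    print(usage, 'MB')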
- free_no_longer_needed_envs()[source]
Remove no longer needed environments after an update.
This allows minimizing the number of environments to be kept. For large MPO bond dimensions, these environments are by far the biggest part in memory, so this is a valuable optimization to reduce memory requirements.
- get_resume_data(sequential_simulations=False)[source]
Return necessary data to resume a run() interrupted at a checkpoint.
At a checkpoint, you can save psi, model and options along with the data returned by this function. When the simulation aborts, you can resume it using this saved data with:

    eng = AlgorithmClass(psi, model, options, resume_data=resume_data)
    eng.resume_run()

An algorithm which doesn't support this should override resume_run() to raise an error.
- Parameters:
sequential_simulations (bool) – If True, return only the data for re-initializing a sequential simulation run, where we "adiabatically" follow the evolution of a ground state (for variational algorithms), or do a series of quenches (for time evolution algorithms); see run_seq_simulations().
- Returns:
resume_data – Dictionary with necessary data (apart from copies of psi, model, options) that allows continuing the algorithm run from where we are now. It might contain an explicit copy of psi.
- Return type:
dict
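As an illustration of this save/resume cycle with tenpy.tools.hdf5_io (the file name and the surrounding workflow are assumptions, not part of this class; requires h5py):

    from tenpy.tools import hdf5_io

    # at a checkpoint during the run:
    hdf5_io.save({'psi': eng.psi, 'options': options,
                  'resume_data': eng.get_resume_data()}, 'checkpoint.h5')

    # after an abort, re-initialize (U_MPO assumed still available) and resume:
    data = hdf5_io.load('checkpoint.h5')
    eng = PurificationApplyMPO(data['psi'], U_MPO, data['options'],
                               resume_data=data['resume_data'])
    eng.resume_run()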
- get_sweep_schedule()[source]
Define the schedule of the sweep.
Compared to the parent method get_sweep_schedule(), we add one extra update at the end with i0=0 (which is the same as the first update of the sweep). This is done to ensure proper convergence after each sweep, even if that means site 0 is updated twice per sweep.
- is_converged()[source]
Determines if the algorithm is converged.
Does not cover any other reasons to abort, such as reaching a time limit. Such checks are covered by stopping_criterion().
- mixer_activate()[source]
Set self.mixer to the class specified by options['mixer'].
- option Sweep.mixer: str | class | bool | None
Specifies which Mixer to use, if any. A string stands for one of the mixers defined in this module. A class is assumed to have the same interface as Mixer and is used to instantiate the mixer. None uses no mixer. True uses the mixer specified by the DefaultMixer class attribute. The default depends on the subclass of Sweep.
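For illustration, the mixer could be selected through the options dict (a sketch; which concrete mixers are available depends on the Sweep subclass):

    options['mixer'] = True   # use the DefaultMixer of the subclass
    # options['mixer'] = None  # or: disable mixing entirely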
- mixer_cleanup()[source]
Cleanup the effects of a mixer.
A sweep() with an enabled Mixer leaves the MPS psi with 2D arrays in S. This method recovers the original form by performing SVDs of the S and updating the MPS tensors accordingly.
- mixer_deactivate()[source]
Deactivate the mixer.
Set self.mixer = None and revert any other effects of mixer_activate().
- property n_optimize
The number of sites to be optimized at once.
Indirectly set by the class attribute EffectiveH and its length. For example, the TwoSiteDMRGEngine uses the TwoSiteH and hence has n_optimize=2, while the SingleSiteDMRGEngine has n_optimize=1.
- post_update_local(err, **update_data)[source]
Algorithm-specific actions to be taken after local update.
An example would be to collect statistics.
- pre_run_initialize()[source]
Perform preparations before run_iteration() is iterated.
- Returns:
The object to be returned by run() in case of immediate convergence, i.e. if no iterations are performed.
- Return type:
result
- prepare_update_local()[source]
Prepare self for calling update_local().
- Returns:
theta – Current best guess for the ground state, which is to be optimized. Labels are 'vL', 'p0', 'p1', 'vR', or combined versions of it (if self.combine). For single-site DMRG, the 'p1' label is missing.
- Return type:
Array
- reset_stats(resume_data=None)[source]
Reset the statistics. Useful if you want to start a new Sweep run.
This method is expected to be overwritten by subclasses, which should define the self.update_stats and self.sweep_stats dicts consistent with the statistics generated by the particular algorithm.
- Parameters:
resume_data (dict) – Given when resuming a simulation, as returned by get_resume_data(). Here, we read out the sweeps.
Options
- option Sweep.chi_list: None | dict(int -> int)
By default (None) this feature is disabled. A dict allows gradually increasing chi_max. An entry at_sweep: chi states that starting from sweep at_sweep, the value chi is to be used for trunc_params['chi_max']. For example, chi_list={0: 50, 20: 100} uses chi_max=50 for the first 20 sweeps and chi_max=100 afterwards. A value of None is initialized to the current value of trunc_params['chi_max'] at algorithm initialization.
- resume_run()[source]
Resume a run that was interrupted.
In case we saved an intermediate result at a checkpoint, this function allows resuming the run() of the algorithm (after re-initialization with the resume_data). Since most algorithms just have a while loop with break conditions, the default behavior implemented here is to just call run().
- run()[source]
Run the compression.
The state psi is compressed in place.
Warning
Call this function directly after initializing the class, without modifying psi in between. A copy of psi is made during init_env()!
- Returns:
max_trunc_err – The maximal truncation error of a two-site wave function.
- Return type:
TruncationError
- run_iteration()[source]
Perform a single iteration.
- Returns:
The object to be returned by run() if the main loop terminates after this iteration.
- Return type:
result
- status_update(iteration_start_time: float)[source]
Emits a status message to the logging system after an iteration.
- Parameters:
iteration_start_time (float) – The time.time() at the start of the last iteration.
- stopping_criterion(iteration_start_time: float) -> bool[source]
Determines if the main loop should be terminated.
- Parameters:
iteration_start_time (float) – The time.time() at the start of the last iteration.
Options
- option IterativeSweeps.min_sweeps: int
Minimum number of sweeps to perform.
- option IterativeSweeps.max_sweeps: int
Maximum number of sweeps to perform.
- option IterativeSweeps.max_hours: float
If the DMRG took longer (measured in wall-clock time), 'shelve' the simulation, i.e. stop and return with the flag shelve=True.
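These stopping options could be set together in the engine options, for instance (a sketch with placeholder values):

    options.update({
        'min_sweeps': 1,     # always perform at least one sweep
        'max_sweeps': 10,    # stop iterating after 10 sweeps at the latest
        'max_hours': 24.0,   # shelve if the run exceeds one day of wall-clock time
    })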
- sweep(optimize=True)[source]
One ‘sweep’ of a sweeper algorithm.
Iterate over the bond which is optimized, to the right and then back to the left to the starting point.
- Parameters:
optimize (bool, optional) – Whether we actually optimize the state, e.g. to find the ground state of the effective Hamiltonian in case of a DMRG. (If False, just update the environments).
Options
- option Sweep.chi_list_reactivates_mixer: bool
If True, the mixer is reset/reactivated each time the bond dimension grows due to Sweep.chi_list.
- Returns:
max_trunc_err – Maximal truncation error introduced.
- Return type:
float
- classmethod switch_engine(other_engine, *, options=None, **kwargs)[source]
Initialize algorithm from another algorithm instance of a different class.
You can initialize one engine from another of a not too different subclass. Internally, this function calls get_resume_data() to extract the data from the other_engine and then initializes the new class.
Note that it transfers the data without making copies in most cases, even the options! Thus, when you call run() on one of the two algorithm instances, it will modify the state, environment, etc. in the other. We recommend making the switch as engine = OtherSubClass.switch_engine(engine), directly replacing the reference.
- Parameters:
cls (class) – Subclass of Algorithm to be initialized.
other_engine (Algorithm) – The engine from which data should be transferred. Another, but not too different, algorithm subclass; e.g. you can switch from the TwoSiteDMRGEngine to the SingleSiteDMRGEngine.
options (None | dict-like) – If not None, these options are used for the new initialization. If None, take the options from the other_engine.
**kwargs – Further keyword arguments for class initialization. If not defined, resume_data is collected with get_resume_data().
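A sketch of such a switch, replacing the reference directly as recommended (using the DMRG engines named above; eng is assumed to be an existing TwoSiteDMRGEngine):

    from tenpy.algorithms.dmrg import SingleSiteDMRGEngine

    # transfers psi, environments and options without copies:
    eng = SingleSiteDMRGEngine.switch_engine(eng)
    eng.run()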
- update_env(**update_data)[source]
Update the left and right environments after an update of the state.
- Parameters:
**update_data – Whatever is returned by update_local().