Abstract Classes

This section describes the abstract classes of the GlobalBioIm library. They define the general properties shared by all derived classes.

Map

class Abstract.Map

Bases: matlab.mixin.Copyable

Abstract class for Maps, which map elements from \(\mathrm{X}\) to \(\mathrm{Y}\), $$ \mathrm{H}: \mathrm{X}\rightarrow \mathrm{Y},$$ where \(\mathrm{X}\) and \(\mathrm{Y}\) are either \(\mathbb{R}^N\) or \(\mathbb{C}^N\).

Parameters:
  • name – name of the Map \(\mathrm{H}\)

  • sizein – dimension of the left hand side vector space \(\mathrm{X}\)

  • sizeout – dimension of the right hand side vector space \(\mathrm{Y}\)

  • norm – norm of the operator \(\|\mathrm{H}\|\) (if known, otherwise -1)

  • isInvertible – true if the method applyInverse_() is implemented

  • isDifferentiable – true if the method applyJacobianT_() is implemented

  • memoizeOpts – structure of booleans (one field per method, see details below).

  • doPrecomputation – boolean; set to true to allow precomputations that save time (generally at the cost of more memory).

Note on the memoize option: this option stores the result of a method so that an identical subsequent call returns the cached result instead of recomputing it. Example: memoizeOpts.apply=true will store the result of H*x.
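A minimal usage sketch of this option (SomeMap is an illustrative placeholder for any concrete Map subclass, and x a numeric array of size sizein):

H = SomeMap();               % hypothetical concrete Map
H.memoizeOpts.apply = true;  % cache the result of apply()
y1 = H*x;                    % first call: the result is computed and stored
y2 = H*x;                    % identical call: the stored result is returned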

Method Summary
apply(this, x)

Computes \(\mathrm{y}=\mathrm{H}(\mathrm{x})\) for the given \(\mathrm{x} \in \mathrm{X}\).

Calls the method apply_()

applyJacobianT(this, y, v)

Computes \(\mathrm{x}=[\mathrm{J}_{\mathrm{H}}(\mathrm{v})]^{\star}\mathrm{y}\) where

  • \([\mathrm{J}_{\mathrm{H}}(\mathrm{v})]\) is the Jacobian matrix of the Map \(\mathrm{H}\) computed at \(\mathrm{v} \in \mathrm{X} \)

  • \(\mathrm{y} \in \mathrm{Y} \)

Calls the method applyJacobianT_()

applyInverse(this, y)

Computes \(\mathrm{x} = \mathrm{H}^{-1} \mathrm{y}\) for the given \(\mathrm{y} \in \mathrm{Y}\). (if applicable)

Calls the method applyInverse_()

makeComposition(this, G)

Composes the Map \(\mathrm{H}\) with the given Map \(\mathrm{G}\). Returns a new Map \(\mathrm{M=HG}\)

Calls the method makeComposition_()

plus(this, G)

Overload operator (+) for Map objects $$ \mathrm{M}(\mathrm{x}) := \mathrm{H}(\mathrm{x}) + \mathrm{G}(\mathrm{x})$$

Calls the method plus_()

minus(this, G)

Overload operator (-) for Map objects $$ \mathrm{M}(\mathrm{x}) := \mathrm{H}(\mathrm{x}) - \mathrm{G}(\mathrm{x})$$

Calls the method minus_()

mpower(this, p)

Returns a new Map which is the p-th power \(\mathrm{H}^{p}\) of the current \(\mathrm{H}\).

Calls the method mpower_()

mtimes(this, G)

Overload operator (*) for Map objects $$ \mathrm{M}(\mathrm{x}) := \mathrm{H}(\mathrm{G}(\mathrm{x}))$$

  • If \(\mathrm{G}\) is numeric of size sizein, then apply() is called

  • If \(\mathrm{G}\) is a Map, then a MapComposition is instantiated (see the sketch below)
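As an illustration of this operator algebra, a sketch assuming H and G are two compatible concrete Map objects and x a numeric array of size sizein:

M1 = H + G;   % MapSummation:   M1(x) = H(x) + G(x)
M2 = H * G;   % MapComposition: M2(x) = H(G(x))
y  = H * x;   % numeric argument: equivalent to H.apply(x)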

times(this, G)

Returns a new Map which is the element-wise multiplication of the current \(\mathrm{H}\) with \(\mathrm{G}\) $$ \mathrm{M}(\mathrm{x}) := \mathrm{H}(\mathrm{x}) \times \mathrm{G}(\mathrm{x})$$

Calls the method times_()

apply_(this, x)

Not implemented in this Abstract class

applyJacobianT_(this, y, v)

Not implemented in this Abstract class

applyInverse_(this, y)

Not implemented in this Abstract class

plus_(this, G)

Constructs a MapSummation object to sum the current Map \(\mathrm{H}\) with the given \(\mathrm{G}\).

minus_(this, G)

Constructs a MapSummation object to subtract the given \(\mathrm{G}\) from the current Map \(\mathrm{H}\).

mpower_(this, p)

When \(p=-1\), constructs a MapInversion object which is the inverse Map of \(\mathrm{H}\). When \(p\neq-1\), this method is not implemented in this Abstract class

times_(this, G)

Constructs a MapMultiplication object to element-wise multiply the current Map \(\mathrm{H}\) with the given \(\mathrm{G}\).

makeComposition_(this, G)

Constructs a MapComposition object to compose the current Map \(\mathrm{H}\) with the given \(\mathrm{G}\).

copyElement()

Performs a deep copy of \(\mathrm{H}\)

Called by the function copy()
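For example, assuming H is a concrete Map object:

H2 = copy(H);   % deep copy: an independent object with the same state as H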

LinOp

class Abstract.LinOp

Bases: Abstract.Map

Abstract class for linear operators $$ \mathrm{H}: \mathrm{X}\rightarrow \mathrm{Y},$$ where \(\mathrm{X}\) and \(\mathrm{Y}\) are either \(\mathbb{R}^N\) or \(\mathbb{C}^N\).

All attributes of the parent class Map are inherited.

See also Map

Method Summary
applyAdjoint(this, y)

Computes \(\mathrm{x=H}^*\mathrm{y}\) for \(\mathrm{y} \in \mathrm{Y}\)

Calls the method applyAdjoint_()

applyHtH(this, x)

Computes \(\mathrm{y=H}^*\mathrm{Hx}\) for \(\mathrm{x} \in \mathrm{X}\)

Calls the method applyHtH_()

applyHHt(this, y)

Computes \(\mathrm{y=HH}^*\mathrm{y}\) for \(\mathrm{y} \in \mathrm{Y}\)

Calls the method applyHHt_()

transpose(this)

Returns a new LinOp which is the Adjoint \(\mathrm{H}^{\star}\) of the current \(\mathrm{H}\).

ctranspose(this)

Does the same as transpose()
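As an illustration, assuming H is a concrete LinOp:

Hadj = H';       % adjoint H^* (ctranspose)
HtH  = H' * H;   % composition corresponding to H^* H (a LinOpComposition)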

applyAdjointInverse(this, x)

Computes \(\mathrm{y} = \mathrm{H}^{-\star} \mathrm{x}\) for the given \(\mathrm{x} \in \mathrm{X}\). (if applicable)

Calls the method applyAdjointInverse_()

makeHtH(this)

Composes the adjoint \(\mathrm{H}^{\star}\) with \(\mathrm{H}\). Returns a new map \(\mathrm{M=H}^{\star} \mathrm{H}\)

Calls the method makeHtH_()

makeHHt(this)

Composes the Map \(\mathrm{H}\) with its adjoint \(\mathrm{H}^{\star}\). Returns a new map \(\mathrm{M=H}\mathrm{H}^{\star}\)

Calls the method makeHHt_()

applyAdjoint_(~, ~)

Not implemented in this Abstract class

applyHtH_(this, x)

There is a default implementation in the abstract class LinOp that successively calls the apply() and applyAdjoint() methods. It can be reimplemented in derived classes whenever a faster computation is available.
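A sketch of this default pattern (not the library's exact code):

function y = applyHtH_(this, x)
    % default: y = H^*(H x), obtained by chaining apply() and applyAdjoint()
    y = this.applyAdjoint(this.apply(x));
end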

applyHHt_(this, y)

There is a default implementation in the abstract class LinOp that successively calls the applyAdjoint() and apply() methods. It can be reimplemented in derived classes whenever a faster computation is available.

applyAdjointInverse_(~, ~)

Not implemented in this Abstract class

plus_(this, G)

If \(\mathrm{G}\) is a LinOp, constructs a LinOpSummation object to sum the current LinOp \(\mathrm{H}\) with the given \(\mathrm{G}\). Otherwise the summation will be a MapSummation.

makeAdjoint_(this)

Constructs a LinOpAdjoint from the current LinOp \(\mathrm{H}\)

makeHtH_(this)

Constructs a LinOpComposition corresponding to \(\mathrm{H}^{\star}\mathrm{H}\)

makeHHt_(this)

Constructs a LinOpComposition corresponding to \(\mathrm{H}\mathrm{H}^{\star}\)

makeInversion_(this)

Constructs a LinOpInversion corresponding to \(\mathrm{H}^{-1}\)

makeComposition_(this, G)

Reimplemented from parent class Map. If \(\mathrm{G}\) is a LinOp, constructs a LinOpComposition object to compose the current LinOp (this) with the given LinOp \(\mathrm{G}\). Otherwise the composition will be a MapComposition.

applyJacobianT_(this, y, ~)

Uses the method applyAdjoint() (hence does not need to be reimplemented in derived classes)

Cost

class Abstract.Cost

Bases: Abstract.Map

Abstract class for Cost functions $$ C : \mathrm{X} \longrightarrow \mathbb{R}$$ with the following special structure $$ C(\mathrm{x}) := F( \mathrm{x} , \mathrm{y} ), $$ where \(F\) is a function taking two variables, both in \(\mathrm{X}\).

Parameters:
  • y – data vector (default 0)

  • name – name of the cost function

  • lip – Lipschitz constant of the gradient (when applicable and known, otherwise -1)

  • isConvex – true if the cost is convex

  • isSeparable – true if the cost is separable (in the canonical basis of \(\mathbb{R}^N\))

All attributes of the parent class Map are inherited; moreover, norm is fixed to -1 and sizeout is fixed to 1 for all Cost objects (a Cost maps to \(\mathbb{R}\))

See also Map, LinOp.

Method Summary
applyGrad(this, x)

Computes the gradient of the cost function at \(\mathrm{x} \in \mathrm{X}\) (when applicable) $$ \mathrm{g} = \nabla C(\mathrm{x}) $$

Calls the method applyGrad_()

applyProx(this, z, alpha)

Computes the proximity operator of the cost at \(\mathrm{z} \in \mathrm{X} \) (when applicable) $$ \mathrm{prox}_{\alpha C}(\mathrm{z}) = \mathrm{arg} \, \mathrm{min}_{\mathrm{u} \in \mathrm{X}} \; \frac{1}{2\alpha} \| \mathrm{u} - \mathrm{z} \|_2^2 + C(\mathrm{u}). $$

Calls the method applyProx_()
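For instance, for the least-squares cost \(C(\mathrm{u}) = \frac{1}{2}\|\mathrm{u}-\mathrm{y}\|_2^2\), the minimization above is quadratic and admits the closed form $$ \mathrm{prox}_{\alpha C}(\mathrm{z}) = \frac{\mathrm{z} + \alpha \mathrm{y}}{1+\alpha}. $$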

applyProxFench(this, z, alpha)

Computes the proximity operator of the Fenchel Transform \(C^*\) at \(\mathrm{z} \in \mathrm{Y} \) (when applicable)

Calls the method applyProxFench_()

applyGrad_(this, x)

Not implemented in this Abstract class

applyProx_(this, z, alpha)

By default, if the cost \(C\) isConvex, computes the proximity operator of \(C\) at \(\mathrm{z} \in \mathrm{X} \) through Moreau's identity, which relies on the applyProxFench() method $$\mathrm{prox}_{\alpha C}(\mathrm{z}) = \mathrm{z} - \alpha \,\mathrm{prox}_{\frac{1}{\alpha}C^*}\left(\frac{\mathrm{z}}{\alpha}\right).$$

applyProxFench_(this, z, alpha)

By default, if the cost \(C\) isConvex, computes the proximity operator of the Fenchel transform \(C^*\) at \(\mathrm{z} \in \mathrm{Y} \) through Moreau's identity, which relies on the applyProx() method $$\mathrm{prox}_{\alpha C^*}(\mathrm{z}) = \mathrm{z} - \alpha \,\mathrm{prox}_{\frac{1}{\alpha}C}\left(\frac{\mathrm{z}}{\alpha}\right).$$
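A sketch of this default, following the identity above (not the library's exact code):

function x = applyProxFench_(this, z, alpha)
    % Moreau's identity: prox_{alpha C^*}(z) = z - alpha * prox_{C/alpha}(z/alpha)
    x = z - alpha * this.applyProx(z/alpha, 1/alpha);
end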

plus_(this, G)

If \(\mathrm{G}\) is a Cost, constructs a CostSummation object to sum the current Cost \(C\) with the given \(G\). Otherwise the summation will be a MapSummation.

minus_(this, G)

If \(\mathrm{G}\) is a Cost, constructs a CostSummation object to subtract the given \(G\) from the current Cost \(C\). Otherwise the result will be a MapSummation.

makeComposition_(this, G)

Reimplemented from parent class Map. Constructs a CostComposition object to compose the current Cost (this) with the given Map \(\mathrm{G}\).

applyJacobianT_(this, y, v)

Uses the method applyGrad() (hence does not need to be reimplemented in derived classes)

set_y(this, y)

Set the attribute \(\mathrm{y}\)

  • has to be conformable with the sizein of the cost

  • can be a scalar.

Opti

class Abstract.Opti

Bases: matlab.mixin.SetGet

Abstract class for optimization algorithms to minimize Cost objects

Parameters:
  • name – name of the algorithm

  • cost – minimized Cost

  • maxiter – maximal number of iterations (default 50)

  • verbose – boolean (default true) to activate verbose mode

  • OutOp – OutputOpti object

  • ItUpOut – number of iterations between two calls to the update method of the OutputOpti object OutOp (default 0)

  • CvOp – TestCvg object

  • time – execution time of the algorithm

  • niter – iteration counter

  • xopt – optimization variable

See also OutputOpti, Cost

Method Summary
run(this, x0)

Run the algorithm.

Parameters:

x0 – initial point in \(\mathrm{X}\); if no argument is given, the algorithm restarts from the current value xopt.

Note: this method does not return anything; the result is stored in the public attribute xopt.
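A minimal usage sketch (SomeOpti is an illustrative placeholder for a concrete algorithm of the library, C the Cost to minimize, and x0 an initial guess):

opti = SomeOpti(C);   % hypothetical concrete Opti subclass
opti.maxiter = 200;   % maximal number of iterations
opti.ItUpOut = 10;    % call the update method of OutOp every 10 iterations
opti.run(x0);         % run the algorithm; the result is stored in opti.xopt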

initialize(this, x0)

Implements initialization of the algorithm

Parameters:

x0 – initial point

doIteration(this)

Implements one iteration of the algorithm; a sketch of a possible overload is given after the flag list below.

Returns:

flag with values

  • OPTI_NEXT_IT (= 0) to go to the next iteration

  • OPTI_REDO_IT (= 1) to redo the iteration

  • OPTI_STOP (= 2) to stop algorithm
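A sketch of how a derived class might overload this method for a plain gradient step, assuming the OPTI_* flags are accessible as class constants; the step size gam is an illustrative attribute, not part of the abstract class:

function flag = doIteration(this)
    % one gradient step on the minimized Cost
    this.xopt = this.xopt - this.gam * this.cost.applyGrad(this.xopt);
    flag = this.OPTI_NEXT_IT;   % proceed to the next iteration
end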

updateParams(this)

Updates the parameters of the algorithm at each iteration (default: no update). This method can be overloaded to make some parameters vary across iterations (e.g., descent step, Lagrangian parameters).

starting_verb(this)

Generic method to display a starting message in verbose mode.

ending_verb(this)

Generic method to display an ending message in verbose mode.

OperationsOnMaps

The following classes implement basic operations between Maps (LinOp and Cost). They are not abstract, but they generally do not need to be instantiated directly; they are mainly used inside the methods of the abstract classes Map, LinOp and Cost for the operator algebra machinery.

MapComposition

class Abstract.OperationsOnMaps.MapComposition

Bases: Abstract.Map

MapComposition : Composition of Maps $$ \mathrm{H}(\mathrm{x}) = \mathrm{H}_1 \left( \mathrm{H}_2(\mathrm{x}) \right) $$

Parameters:
  • H1 – left hand side Map

  • H2 – right hand side Map

Example H=MapComposition(H1,H2)

See also Map

Method Summary
apply_(this, x)

Reimplemented from Map

applyJacobianT_(this, y, v)

Reimplemented from Map

applyInverse_(this, y)

Reimplemented from Map

makeComposition_(this, G)

Reimplemented from Map

MapInversion

class Abstract.OperationsOnMaps.MapInversion

Bases: Abstract.Map

MapInversion : Builds the inverse Map

Parameters:

M – Map object

Example Minv=MapInversion(M)

See also Map

Method Summary
apply_(this, x)

Reimplemented from Map

applyInverse_(this, x)

Reimplemented from Map

mpower_(this, p)

Reimplemented from Map

makeComposition_(this, G)

Reimplemented from parent class Map.

MapSummation

class Abstract.OperationsOnMaps.MapSummation

Bases: Abstract.Map

MapSummation: Sum of Maps $$ \mathrm{H}(\mathrm{x}) = \sum_i \alpha_i \mathrm{H}_i(\mathrm{x}) $$

Parameters:
  • Maps – cell of Map

  • alpha – array of coefficients

Example H=MapSummation(Maps,alpha)

See also Map, LinOpSummation

Method Summary
apply_(this, x)

Reimplemented from Map

applyJacobianT_(this, y, v)

Reimplemented from Map

makeComposition_(this, G)

Reimplemented from Map

MapMultiplication

class Abstract.OperationsOnMaps.MapMultiplication

Bases: Abstract.Map

MapMultiplication: Multiplication of Maps $$ \mathrm{H}(\mathrm{x}) = \mathrm{H}_1(\mathrm{x}) \times \mathrm{H}_2(\mathrm{x}) $$

Parameters:
  • Map1 – Map object

  • Map2 – Map object

Example H=MapMultiplication(Map1,Map2)

See also Map

Method Summary
apply_(this, x)

Reimplemented from Map

applyJacobianT_(this, y, v)

Reimplemented from Map

makeComposition_(this, G)

Reimplemented from Map

LinOpAdjoint

class Abstract.OperationsOnMaps.LinOpAdjoint

Bases: LinOp

LinOpAdjoint: Builds the adjoint LinOp

Parameters:

TLinOp – LinOp object

Example Tadj=LinOpAdjoint(TLinOp)

See also Map, LinOp

Method Summary
apply_(this, x)

Reimplemented from LinOp

applyAdjoint_(this, x)

Reimplemented from LinOp

applyHtH_(this, x)

Reimplemented from LinOp

applyHHt_(this, x)

Reimplemented from LinOp

applyInverse_(this, x)

Reimplemented from LinOp

applyAdjointInverse_(this, x)

Reimplemented from LinOp

makeAdjoint_(this)

Reimplemented from parent class LinOp.

makeHHt_(this)

Reimplemented from parent class LinOp.

makeHtH_(this)

Reimplemented from parent class LinOp.

LinOpComposition

class Abstract.OperationsOnMaps.LinOpComposition

Bases: Abstract.OperationsOnMaps.MapComposition, LinOp

LinOpComposition : Composition of LinOps $$ \mathrm{H}(\mathrm{x}) = \mathrm{H}_1 \mathrm{H}_2\mathrm{x} $$

Parameters:
  • H1 – left hand side LinOp (or a scalar)

  • H2 – right hand side LinOp

Example H=LinOpComposition(H1,H2)

See also Map, LinOp, MapComposition

Method Summary
apply_(this, x)

Reimplemented from LinOp

applyAdjoint_(this, x)

Reimplemented from LinOp

applyHtH_(this, x)

Reimplemented from LinOp

applyHHt_(this, x)

Reimplemented from LinOp

applyAdjointInverse_(this, x)

Reimplemented from LinOp

makeAdjoint_(this)

Reimplemented from parent class LinOp.

makeHtH_(this)

Reimplemented from LinOp

makeHHt_(this)

Reimplemented from LinOp

makeComposition_(this, G)

Reimplemented from MapComposition

LinOpInversion

class Abstract.OperationsOnMaps.LinOpInversion

Bases: Abstract.OperationsOnMaps.MapInversion, LinOp

LinOpInversion : Builds the inverse LinOp

Parameters:

M – LinOp object

Example Minv=LinOpInversion(M)

See also Map, LinOp, MapInversion

Method Summary
applyAdjoint_(this, x)

Reimplemented from LinOp

applyAdjointInverse_(this, x)

Reimplemented from LinOp

mpower_(this, p)

Reimplemented from MapInversion

makeComposition_(this, G)

Reimplemented from MapInversion

LinOpSummation

class Abstract.OperationsOnMaps.LinOpSummation

Bases: Abstract.OperationsOnMaps.MapSummation, LinOp

LinOpSummation: Sum of linear operators $$ \mathrm{H}(\mathrm{x}) = \sum_i \alpha_i \mathrm{H}_i(\mathrm{x}) $$

Parameters:
  • LinOps – cell of LinOp

  • alpha – array of coefficients

Example L=LinOpSummation(LinOps,alpha)

See also Map, LinOp, MapSummation

Method Summary
applyAdjoint_(this, y)

Reimplemented from LinOp

makeAdjoint_(this)

Reimplemented from LinOp

plus_(this, G)

Reimplemented from LinOp

CostComposition

class Abstract.OperationsOnMaps.CostComposition

Bases: Abstract.OperationsOnMaps.MapComposition, Cost

CostComposition: Compose a Cost with a Map $$ C(\mathrm{x}) := F( \mathrm{Hx}) $$ where \(F\) is a Cost and \(\mathrm{H}\) a Map

Parameters:
  • H1 – a Cost object (the outer \(F\))

  • H2 – a Map object (the inner \(\mathrm{H}\))

Example C = CostComposition(H1,H2)

See also Map, MapComposition, Cost

Method Summary
applyGrad_(this, x)

Reimplemented from Cost

applyProx_(this, z, alpha)

Reimplemented from Cost when this.H2 is a LinOp and \(\mathrm{H} \mathrm{H}^{\star}\) is a LinOpScaledIdentity.

makeComposition_(this, G)

Reimplemented from Cost

set_y(this, y)

Set the attribute \(\mathrm{y}\)

  • has to be conformable with the sizeout of the Map \(\mathrm{H}\),

  • can be anything if \(\mathrm{H}\) is not yet set (empty),

  • can be a scalar.

CostMultiplication

class Abstract.OperationsOnMaps.CostMultiplication

Bases: Cost

CostMultiplication: Multiplication of Costs $$C(\mathrm{x}) = C_1(\mathrm{x}) \times C_2(\mathrm{x}) $$

Parameters:
  • C1 – a Cost object or a scalar

  • C2 – a Cost object

Example F = CostMultiplication(Cost1,Cost2)

See also Map, Cost

Method Summary
apply_(this, x)

Reimplemented from Cost

applyGrad_(this, x)

Reimplemented from Cost

applyProx_(this, x, alpha)

Reimplemented from Cost

makeComposition_(this, G)

Reimplemented from Cost

CostSummation

class Abstract.OperationsOnMaps.CostSummation

Bases: Abstract.OperationsOnMaps.MapSummation, Cost

CostSummation: Sum of Costs $$C(\mathrm{x}) = \sum_i \alpha_i C_i(\mathrm{x}) $$

Parameters:
  • costs – cell of Cost

  • alpha – array of coefficients

Example F = CostSummation(ACost,alpha)

See also Map, Cost, MapSummation

Method Summary
makePartialSummation(this, Lsub)

Instantiates a CostPartialSummation object.

Parameters:

Lsub – number of Costs used for the computation

applyGrad_(this, x)

Reimplemented from Cost

applyProx_(this, z, alpha)

Reimplemented from Cost for the case of the sum of a CostRectangle \(i_C\) and a separable Cost \(f\) [1] $$ \mathrm{prox}_{\alpha(i_C +f)}(\mathrm{z}) = \mathrm{prox}_{i_C} \circ \mathrm{prox}_{\alpha f}(\mathrm{z}) $$

Reference

[1] “A Douglas–Rachford splitting approach to nonsmooth convex variational signal recovery”, P. L. Combettes and J.-C. Pesquet, IEEE Journal of Selected Topics in Signal Processing, 1(4), 564-574, 2007.

CostPartialSummation

class Abstract.OperationsOnMaps.CostPartialSummation

Bases: Abstract.OperationsOnMaps.CostSummation

CostPartialSummation: Sum of Costs for which apply, applyGrad, … are computed from a subset of the Costs $$C(\mathrm{x}) = \sum_i \alpha_i C_i(\mathrm{x}) $$

Parameters:
  • costs – cell of Cost

  • alpha – array of coefficients

  • Lsub – number of Costs used for the computation

  • partialGrad – parameter for subset selection (0: no partial gradient; 1: stochastic gradient descent; 2: equally spaced indices)

Example F = CostPartialSummation(ACost,alpha,Lsub)

See also Map, Cost, MapSummation

Method Summary
setLsub(this, Lsub)

Set Lsub parameter

apply_(this, x)

Reimplemented from Cost

applyGrad_(this, x)

Reimplemented from Cost