Cost Functions (Cost)


This section describes the cost function classes, all of which derive from the abstract class Cost.

CostConst

class Cost.CostConst(sizein, const)

Bases: Cost

CostConst: Constant cost, regardless of the input. It can be used to define a null cost in particular situations.

All attributes of parent class Cost are inherited.

Parameters
  • sizein – size of the input vector

  • const – the output constant value (default 0)


Example C=CostConst(sz, const)


See also Map, Cost, LinOp

apply_(this, ~)

Reimplemented from parent class Cost.

applyGrad_(this, ~)

Reimplemented from parent class Cost.

applyProx_(~, x, ~)

Reimplemented from parent class Cost.
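
The behavior of these methods can be checked with a small script; a minimal sketch, assuming the toolbox is on the MATLAB path (sizes and values are illustrative):

    % Sketch: a constant cost ignores its input
    sz = [16 16];
    C  = CostConst(sz, 3);       % returns 3 whatever the input
    x  = rand(sz);
    v  = C*x;                    % v == 3
    g  = C.applyGrad(x);         % zero gradient
    p  = C.applyProx(x, 0.5);    % prox of a constant cost: p == x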

CostGoodRoughness

class Cost.CostGoodRoughness(G, bet)

Bases: Cost

CostGoodRoughness: Good's roughness penalization (see [1]) $$C(\mathrm{x}) := \sum_k \frac{\vert (\nabla \mathrm{x})_{.,k} \vert^2}{\sqrt{\vert \mathrm{x}_k \vert^2 + \beta}} $$ with \( \vert (\nabla \mathrm{x})_{.,k} \vert^2 \) the squared gradient magnitude at pixel k.

Parameters
  • G – LinOpGrad object

  • bet – smoothing parameter (default 1e-1)

All attributes of parent class Cost are inherited.

Example GR=CostGoodRoughness(G)

Reference [1] Verveer, P. J., Gemkow, M. J., & Jovin, T. M. (1999). A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy. Journal of Microscopy, 193(1), 50-61.

See also Map, Cost, LinOpGrad

apply_(this, x)

Reimplemented from parent class Cost.

applyGrad_(this, x)

Reimplemented from parent class Cost.
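
A minimal construction sketch, assuming a LinOpGrad built on the same image size (sizes are illustrative):

    % Sketch: Good's roughness regularizer on a 2D image
    sz = [64 64];
    G  = LinOpGrad(sz);          % finite-difference gradient operator
    GR = CostGoodRoughness(G);   % default smoothing parameter bet = 1e-1
    x  = rand(sz);
    v  = GR*x;                   % scalar cost value
    g  = GR.applyGrad(x);        % gradient, same size as x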

CostHyperBolic

class Cost.CostHyperBolic(sz, epsilon, index, y, wght)

Bases: Cost

CostHyperBolic: Weighted Hyperbolic cost function $$C(\mathrm{x}) := \sum_{k=1}^K w_k \sqrt{\sum_{l=1}^L (\mathrm{x}-y)_{k,l}^2 + \varepsilon_k^2} - \sum_{k=1}^K\varepsilon_k $$

Parameters
  • index – dimensions along which the l2-norm will be applied (inner sum over l)

  • epsilon – \(\in \mathbb{R}^K_+\) smoothing parameter (default \(10^{-3}\))

  • W – weighting LinOp object or scalar (default 1)

All attributes of parent class Cost are inherited.

Example C=CostHyperBolic(sz,epsilon,index,y,W)

See also Map, Cost, LinOp

apply_(this, x)

Reimplemented from parent class Cost.

applyGrad_(this, x)

Reimplemented from parent class Cost.
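
A usage sketch, leaving the trailing arguments y and wght at their defaults (sizes are illustrative):

    % Sketch: smoothed l21-type penalty on a 2D gradient-like field
    sz = [32 32 2];                    % inner l2-norm taken along dim 3
    C  = CostHyperBolic(sz, 1e-3, 3);
    x  = rand(sz);
    v  = C*x;                          % scalar cost value
    g  = C.applyGrad(x);               % defined everywhere (smooth cost)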

CostKullLeib

class Cost.CostKullLeib(sz, y, bet)

Bases: Cost

CostKullLeib: Kullback-Leibler divergence $$ C(\mathrm{x}) :=\sum_n D_{KL}(\mathrm{x}_n)$$ where $$ D_{KL}(\mathrm{z}_n) := \left\lbrace \begin{array}{ll} \mathrm{z}_n - \mathrm{y}_n \log(\mathrm{z}_n + \beta) & \text{ if } \mathrm{z}_n + \beta >0 \newline + \infty & \text{otherwise}. \end{array} \right.$$

All attributes of parent class Cost are inherited.

Parameters

bet – smoothing parameter \(\beta\) (default 0)

Example C=CostKullLeib(sz,y,bet)

See also Map, Cost, LinOp

apply_(this, x)

Reimplemented from parent class Cost.

applyGrad_(this, x)

Reimplemented from parent class Cost.

applyProx_(this, x, alpha)

Reimplemented from parent class Cost.
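
A usage sketch for Poisson-type data (the data below are illustrative):

    % Sketch: Kullback-Leibler data fidelity
    sz = [32 32];
    y  = round(10*rand(sz));         % nonnegative counts
    C  = CostKullLeib(sz, y, 1e-6);  % small bet keeps the log well-defined
    x  = ones(sz);
    v  = C*x;                        % scalar cost value
    p  = C.applyProx(x, 0.1);        % closed-form prox (see applyProx_ above)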

CostLinear

class Cost.CostLinear(sz, y)

Bases: Cost

CostLinear: Linear cost function $$C(\mathrm{x}) := \mathrm{x}^T\mathrm{y}$$

All attributes of parent class Cost are inherited.

Parameters

y – data vector (default 0)

Example C=CostLinear(sz,y)

See also Map, Cost

apply_(this, x)

Reimplemented from parent class Cost. $$C(\mathrm{x}) := \mathrm{x}^T\mathrm{y}$$

applyGrad_(this, x)

Reimplemented from parent class Cost. Constant gradient: $$ \nabla C(\mathrm{x}) = \mathrm{y}$$

applyProx_(this, x, alpha)

Reimplemented from parent class Cost. $$ \mathrm{prox}_{\alpha C}(\mathrm{x}) =\mathrm{x} - \alpha \mathrm{y} $$
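
A quick numerical check of the prox formula above (sizes are illustrative):

    % Sketch: verifying prox_{alpha*C}(x) = x - alpha*y
    sz  = [8 8];
    y   = rand(sz);
    C   = CostLinear(sz, y);
    x   = rand(sz);
    p   = C.applyProx(x, 0.2);
    err = norm(p(:) - (x(:) - 0.2*y(:)));   % expected to be ~0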

CostL1

class Cost.CostL1(sz, y, nonneg)

Bases: Cost

CostL1: L1 norm cost function $$C(x) := \|\mathrm{x} - \mathrm{y}\|_1 $$

If nonneg is set to true, a positivity constraint on x is added: $$C(x) := \|\mathrm{x} - \mathrm{y}\|_1 + i(\mathrm{x} - \mathrm{y}) $$ where \(i(\cdot)\) is the indicator function of \(\mathbb{R}^{+}\).

All attributes of parent class Cost are inherited.

Example C=CostL1(sz,y,nonneg)

See also Map, Cost, LinOp

applyProx_(this, x, alpha)

Reimplemented from parent class Cost. $$ \mathrm{prox}_{\alpha C}(\mathrm{x}) = \mathrm{sign}(\mathrm{x-y}) \max(\vert \mathrm{x-y} \vert - \alpha,0) + \mathrm{y} $$

apply_(this, x)

Reimplemented from parent class Cost.

applyGrad_(this, x)

Reimplemented from parent class Cost. Returns a subgradient of the L1 cost.
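
The prox given above is plain soft-thresholding around y; a quick check with y = 0 and the remaining arguments at their defaults (sizes are illustrative):

    % Sketch: L1 prox as soft-thresholding
    sz   = [8 8];
    C    = CostL1(sz);                       % y defaults to 0
    x    = randn(sz);
    p    = C.applyProx(x, 0.5);
    pRef = sign(x).*max(abs(x) - 0.5, 0);    % formula above with y = 0
    err  = norm(p(:) - pRef(:));             % expected to be ~0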

CostL2

class Cost.CostL2(sz, y, wght)

Bases: Cost

CostL2: Weighted L2 norm cost function $$C(\mathrm{x}) := \frac12\|\mathrm{x} - \mathrm{y}\|^2_W = \frac12 (\mathrm{x} - \mathrm{y})^T W (\mathrm{x} - \mathrm{y}) $$

All attributes of parent class Cost are inherited.

Parameters
  • y – data vector (default 0)

  • W – weighting LinOp object or scalar (default 1)

Example C=CostL2(sz,y,W)

See also Map, Cost, LinOp

apply_(this, x)

Reimplemented from parent class Cost.

applyGrad_(this, x)

Reimplemented from parent class Cost. $$ \nabla C(\mathrm{x}) = \mathrm{W (x - y)} $$ It is L-Lipschitz continuous with \( L \leq \|\mathrm{W}\|\).

applyProx_(this, x, alpha)

Reimplemented from parent class Cost. $$ \mathrm{prox}_{\alpha C}(\mathrm{x}) = \frac{\mathrm{x} + \alpha \mathrm{Wy}}{1 + \alpha \mathrm{W}} $$ where the division is component-wise.

makeComposition_(this, G)

Reimplemented from parent class Cost. Instantiates a CostL2Composition.
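
With the default scalar weight W = 1, the gradient and prox above reduce to simple expressions; a sketch (sizes are illustrative):

    % Sketch: L2 fidelity with default weight W = 1
    sz = [16 16];
    y  = rand(sz);
    C  = CostL2(sz, y);
    x  = rand(sz);
    g  = C.applyGrad(x);      % equals x - y when W = 1
    p  = C.applyProx(x, 2);   % equals (x + 2*y)/3 when W = 1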

CostL2Composition

class Cost.CostL2Composition(H1, H2)

Bases: Abstract.OperationsOnMaps.CostComposition

CostL2Composition: Composition of a CostL2 with a Map $$C(\mathrm{x}) := \frac12\|\mathrm{Hx} - \mathrm{y}\|^2_W = \frac12 (\mathrm{Hx} - \mathrm{y})^T W (\mathrm{Hx} - \mathrm{y}) $$

Parameters
  • H1 – CostL2 object

  • H2 – Map object

All attributes of parent classes CostL2 and CostComposition are inherited.

Example C=CostL2Composition(H1,H2)

See also Map, Cost, CostL2, CostComposition, LinOp

apply_(this, x)

Reimplemented from parent class CostComposition. $$ C(\mathrm{x}) = \frac12\|\mathrm{Hx} - \mathrm{y}\|^2_W $$

If doPrecomputation is true, \(\mathrm{W}\) is a scaled identity, and \(\mathrm{H}\) is a LinOp, then \(\mathrm{H^{\star} Wy} \) and \(\| \mathrm{y} \|^2_{\mathrm{W}}\) are precomputed and \(C(\mathrm{x}) \) is evaluated using the applyHtH() method, i.e. $$ C(\mathrm{x}) = \frac12 \langle \mathrm{W H^{\star}Hx,x} \rangle - \langle \mathrm{x}, \mathrm{H^{\star} Wy} \rangle + \frac12\| \mathrm{y} \|^2_{\mathrm{W}}$$

applyGrad_(this, x)

Reimplemented from parent class CostComposition. $$ \nabla C(\mathrm{x}) = \mathrm{J_{H}^* W (Hx - y)} $$

If doPrecomputation is true, \(\mathrm{W}\) is a scaled identity, and \(\mathrm{H}\) is a LinOp, then \(\mathrm{H^{\star} Wy} \) is precomputed and the gradient is evaluated using the applyHtH() method, i.e. $$ \nabla C(\mathrm{x}) = \mathrm{W H^{\star}Hx} - \mathrm{H^{\star} Wy} $$

applyProx_(this, x, alpha)

Reimplemented from parent class CostComposition.

Implemented
  • if the operator \(\alpha\mathrm{H^{\star}WH + I} \) is invertible: $$ \mathrm{prox}_{\alpha C}(\mathrm{x}) = (\alpha\mathrm{H^{\star}WH + I} )^{-1} (\alpha \mathrm{H^{\star}Wy +x})$$

  • if \(\alpha\mathrm{I + HH}^{\star}\) is invertible. In this case, the prox is implemented using the Woodbury formula [2].

  • if \(\mathrm{H}\) is a LinOpComposition composing a LinOpDownsample with a LinOpConv. The implementation follows [1,2].

  • if \(\mathrm{H}\) is a LinOpComposition composing a LinOpSum with a LinOpConv. The implementation follows [2].

Note If doPrecomputation is true, then \(\mathrm{H^{\star}Wy}\) is stored.

References

[1] Zhao Ningning et al. “Fast Single Image Super-Resolution Using a New Analytical Solution for l2-l2 Problems”. IEEE Transactions on Image Processing, 25(8), 3683-3697 (2016).

[2] Emmanuel Soubies and Michael Unser. “Computational Super-Sectioning for Single-Slice Structured-Illumination Microscopy” (2018)

makeComposition_(this, G)

Reimplemented from Cost. Instantiates a new CostL2Composition with the updated composed Map.
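
Such a composition is typically obtained by multiplying a CostL2 by a Map (see makeComposition_ of CostL2 above); a sketch using a gradient operator as the forward Map (this particular choice is illustrative):

    % Sketch: least squares composed with a linear operator
    sz = [32 32];
    G  = LinOpGrad(sz);
    y  = zeros(G.sizeout);            % data living in the range of G
    F  = CostL2(G.sizeout, y) * G;    % instantiates a CostL2Composition
    x  = rand(sz);
    v  = F*x;                         % 0.5*||G*x - y||^2
    g  = F.applyGrad(x);              % G'*(G*x - y)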

CostMixNorm21

class Cost.CostMixNorm21(sz, index, y)

Bases: Cost

CostMixNorm21: Mixed norm 2-1 cost function $$C(\mathrm{x}) := \sum_{k=1}^K \sqrt{\sum_{l=1}^L (\mathrm{x}-y)_{k,l}^2}= \sum_{k=1}^K \Vert (\mathrm{x-y})_{k\cdot} \Vert_2$$

Parameters

index – dimensions along which the l2-norm will be applied (inner sum over l)

All attributes of parent class Cost are inherited.

Example C=CostMixNorm21(sz,index,y)

See also Map, Cost, LinOp

apply_(this, x)

Reimplemented from parent class Cost.

applyProx_(this, x, alpha)

Reimplemented from parent class Cost. $$ \mathrm{prox}_{\alpha C}(\mathrm{x}_{k\cdot} ) = \left\lbrace \begin{array}{ll} \left( \mathrm{x}_{k\cdot} - y_{k\cdot} \right) \left(1-\frac{\alpha}{\Vert(\mathrm{x}-y)_{k\cdot}\Vert_2} \right) + y_{k\cdot} & \; \mathrm{if } \; \Vert (\mathrm{x-y})_{k\cdot}\Vert_2 > \alpha, \newline 0 & \; \mathrm{otherwise}, \end{array}\right. \; \forall \, k $$ where the division is component-wise.

makeComposition_(this, G)

Reimplemented from parent class Cost. Instantiates a CostTV when composed with a LinOpGrad.
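
As sketched below, composing this norm with a gradient operator yields an isotropic total-variation cost (sizes are illustrative):

    % Sketch: isotropic TV as CostMixNorm21 composed with LinOpGrad
    sz = [32 32];
    G  = LinOpGrad(sz);                 % output size [32 32 2]
    MN = CostMixNorm21(G.sizeout, 3);   % inner l2-norm along dimension 3
    TV = MN * G;                        % see also CostTV below
    x  = rand(sz);
    v  = TV*x;                          % scalar TV value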

CostMixNorm21NonNeg

class Cost.CostMixNorm21NonNeg(sz, index, y)

Bases: Cost

CostMixNorm21NonNeg: Mixed norm 2-1 cost function with a non-negativity constraint $$C(\mathrm{x}) := \left\lbrace \begin{array}{ll} \sum_{k=1}^K \sqrt{\sum_{l=1}^L (\mathrm{x}-y)_{k,l}^2} & \text{ if } \mathrm{x-y} \geq 0 \newline + \infty & \text{ otherwise.} \end{array} \right. $$

Parameters

index – dimensions along which the l2-norm will be applied (inner sum over l)

All attributes of parent class Cost are inherited.

Example C=CostMixNorm21NonNeg(sz,index,y)

See also Map, Cost, LinOp

apply_(this, x)

Reimplemented from parent class Cost.

applyProx_(this, x, alpha)

Reimplemented from parent class Cost. $$ \mathrm{prox}_{\alpha C}(\mathrm{x}_{k\cdot} ) = \left\lbrace \begin{array}{ll} \max(\mathrm{x}_{k\cdot} - y_{k\cdot} ,0) \left(1-\frac{\alpha}{\Vert(\mathrm{x}-y)_{k\cdot}\Vert_2} \right) + y_{k\cdot} & \; \mathrm{if } \; \Vert (\mathrm{x-y})_{k\cdot}\Vert_2 > \alpha, \newline 0 & \; \mathrm{otherwise}, \end{array}\right. \; \forall \, k $$ where the division is component-wise.

CostMixNormSchatt1

class Cost.CostMixNormSchatt1(sz, p, y)

Bases: Cost

CostMixNormSchatt1: Mixed Schatten-l1 norm [1] $$C(\mathrm{x}) := \sum_n \| \mathrm{x}_{n\cdot} \|_{Sp}, $$ for \(p \in [1,+\infty]\). Here, \(\|\cdot\|_{Sp}\) denotes the p-order Schatten norm defined by $$ \|\mathrm{X}\|_{Sp} = \left[\sum_k (\sigma_k(\mathrm{X}))^p\right]^{1/p},$$ where \(\sigma_k(\mathrm{X})\) is the k-th singular value of \(\mathrm{X}\). In other words, it is the lp-norm of the singular values of \(\mathrm{X}\).

Parameters

p – order of the Schatten norm (default 1)

All attributes of parent class Cost are inherited.

Note The current implementation works for sizes (sz) having one of the two following forms:

  • (NxMx3) such that the Sp norm will be applied on each symmetric 2x2 matrix $$ \begin{bmatrix} \mathrm{x}_{n m 1} & \mathrm{x}_{n m 2} \newline \mathrm{x}_{n m 2} & \mathrm{x}_{n m 3} \end{bmatrix}$$ and then the \(\ell_1\) norm on the two other dimensions.

  • (NxMxKx6) such that the Sp norm will be applied on each symmetric 3x3 matrix $$ \begin{bmatrix} \mathrm{x}_{n m k 1} & \mathrm{x}_{n m k 2} & \mathrm{x}_{n m k 3} \newline \mathrm{x}_{n m k 2} & \mathrm{x}_{n m k 4} & \mathrm{x}_{n m k 5} \newline \mathrm{x}_{n m k 3} & \mathrm{x}_{n m k 5} & \mathrm{x}_{n m k 6} \end{bmatrix}$$ and then the \(\ell_1\) norm on the three other dimensions.

References [1] Lefkimmiatis, S., Ward, J. P., & Unser, M. (2013). Hessian Schatten-norm regularization for linear inverse problems. IEEE Transactions on Image Processing, 22(5), 1873-1888.

Example C=CostMixNormSchatt1(sz,p,y)

See also Map, Cost, LinOp

apply_(this, x)

Reimplemented from parent class Cost.

applyProx_(this, x, alpha)

Reimplemented from parent class Cost.
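
A sketch for the (NxMx3) layout described in the Note above (sizes are illustrative):

    % Sketch: Schatten-l1 norm over a field of symmetric 2x2 matrices
    sz = [32 32 3];                  % (N x M x 3) layout, see Note above
    C  = CostMixNormSchatt1(sz, 1);  % p = 1: nuclear norm per pixel
    x  = rand(sz);
    v  = C*x;                        % scalar cost value
    p  = C.applyProx(x, 0.1);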

CostRobustPenalization

class Cost.CostRobustPenalization(M, y, options)

Bases: Cost

CostRobustPenalization: Robust penalization cost function $$C(\mathrm{x}) := \sum_n \rho\left(\frac{(\mathrm{Mx}-\mathrm{y})_n}{s_n}\right),$$ where \(\rho\) is the selected objective function (options.method) and \(s\) scales the residuals (options.flag_s).

All attributes of parent class Cost are inherited.

Parameters
  • M – Map to apply to the input (default the identity)

  • y – data vector (default 0)

  • options – structure containing the different options of the robust penalization

  • options.method – objective function used to compute the cost: ‘Andrews’, ‘Beaton-Tukey’, ‘Cauchy’ (default), ‘Fair’, ‘Huber’, ‘Logistic’, ‘Talwar-Hinich’, ‘Welsch-Dennis’

  • options.mu – parameter used to tune the method. Defaults: ‘Andrews’ -> 1.339, ‘Beaton-Tukey’ -> 4.685, ‘Cauchy’ -> 2.385, ‘Fair’ -> 1.400, ‘Huber’ -> 1.345, ‘Logistic’ -> 1.205, ‘Talwar-Hinich’ -> 2.795, ‘Welsch-Dennis’ -> 2.985

  • options.flag_s – method used to scale the residuals before applying the objective function (default ‘none’): ‘none’ -> no scaling; ‘MAD’ -> median absolute deviation; ‘STD’ -> standard deviation; numeric -> fixed scaling (can be a matrix to apply a different scaling to each variable)

  • options.noise_model – noise model used to scale the residuals according to the input variable x: ‘Poisson’ -> scaling_factor = sqrt(x + var_0); ‘none’ -> no noise-model scaling

  • options.var_0 – value of the variance of the data at null flux (default 0)

  • options.eta – ratio to scale the model in the corresponding unit for the Poisson noise

  • options.flag_memoizeRes – whether to memoize the computation of the residuals (default true)

Example C=CostRobustPenalization(M, y)

Example C=CostRobustPenalization(M, y, options)

Example C=CostRobustPenalization(M, y, [])

See also Map, Cost

apply_(this, x)

Reimplemented from parent class Cost.

applyGrad_(this, x)

Reimplemented from parent class Cost.
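
A configuration sketch; the option values are illustrative and LinOpIdentity stands in for any forward Map:

    % Sketch: robust fit with a Cauchy objective and MAD residual scaling
    sz = [64 1];
    y  = randn(sz);  y(1:5) = 50;              % a few gross outliers
    options = struct('method', 'Cauchy', ...   % default objective function
                     'flag_s', 'MAD');         % scale residuals by their MAD
    C  = CostRobustPenalization(LinOpIdentity(sz), y, options);
    x  = zeros(sz);
    v  = C*x;
    g  = C.applyGrad(x);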

CostTV

class Cost.CostTV(varargin)

Bases: Abstract.OperationsOnMaps.CostComposition

CostTV: Total variation cost function, the composition of a CostMixNorm21 \(\mathrm{H_1}\) with a LinOpGrad \(\mathrm{H_2}\) $$C(\mathrm{x}) := \sum_{k=1}^K \sqrt{\sum_{l=1}^L (\mathrm{H_2 x}-y)_{k,l}^2}= \sum_{k=1}^K \Vert (\mathrm{H_2 x-y})_{k\cdot} \Vert_2$$

All attributes of parent class CostComposition are inherited.

Example C=CostTV(sz)

Example C=CostTV(H1,H2); with H1=CostMixNorm21(sz,index,y); and H2=LinOpGrad(sz);

See also Map, Cost, CostL2, CostComposition, LinOp

setProxAlgo(this, bounds, maxiter, xtol, Outop)

Set the parameters of OptiFGP used to compute the proximity operator.

applyProx_(this, x, alpha)

Reimplemented from parent class CostComposition. Computed using the iterative algorithm OptiFGP.
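
A denoising-style sketch; the bounds and iteration count passed to setProxAlgo are illustrative:

    % Sketch: TV prox evaluated iteratively with OptiFGP
    sz = [64 64];
    C  = CostTV(sz);               % CostMixNorm21 composed with LinOpGrad
    C.setProxAlgo([0 inf], 50);    % nonnegativity bounds, 50 inner iterations
    x  = rand(sz);
    p  = C.applyProx(x, 0.1);      % TV-regularized (denoised) version of x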

IndicatorFunctions

CostIndicator

class Cost.IndicatorFunctions.CostIndicator(sz, y)

Bases: Cost

CostIndicator: Indicator cost function $$ C(\mathrm{x}) = \left\lbrace \begin{array}{ll} 0 & \text{if } \mathrm{x - y} \in \mathrm{C}, \newline + \infty & \text{otherwise,} \end{array} \right. $$ where \(\mathrm{C} \subset \mathrm{X} \) is a constraint set.

All attributes of parent class Cost are inherited.

Note CostIndicator is a generic class for all indicator cost functions.

See also Map, Cost

CostRectangle

class Cost.IndicatorFunctions.CostRectangle(sz, xmin, xmax, y)

Bases: Cost.IndicatorFunctions.CostIndicator

CostRectangle: Rectangle indicator function $$ C(\mathrm{x}) = \left\lbrace \begin{array}{ll} 0 & \text{if } \mathrm{real(xmin)} \leq \mathrm{real(x-y)} \leq \mathrm{real(xmax)} \text{ and } \mathrm{imag(xmin)} \leq \mathrm{imag(x-y)} \leq \mathrm{imag(xmax)}, \newline + \infty & \text{otherwise.} \end{array} \right. $$

Parameters
  • xmin – minimum value (default -inf + 0i)

  • xmax – maximum value (default +inf + 0i)

All attributes of parent class CostIndicator are inherited.

Example C=CostRectangle(sz,xmin,xmax,y)

See also Map, Cost, CostIndicator

apply_(this, x)

Reimplemented from parent class Cost.

applyProx_(this, x, ~)

Reimplemented from parent class Cost.
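
For real data, the prox reduces to componentwise clipping into the box; a quick check (bounds are illustrative):

    % Sketch: prox of a box indicator clips into [-1, 1]
    sz  = [8 8];
    C   = CostRectangle(sz, -1, 1);
    x   = 3*randn(sz);
    p   = C.applyProx(x, 1);                   % alpha has no effect for indicators
    err = norm(p(:) - min(max(x(:),-1),1));    % expected to be ~0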

CostReals

class Cost.IndicatorFunctions.CostReals(sz, xmin, xmax, y)

Bases: Cost.IndicatorFunctions.CostRectangle

CostReals: Reals indicator function $$ C(\mathrm{x}) = \left\lbrace \begin{array}{ll} 0 & \text{if } \mathrm{xmin} \leq \mathrm{x-y} \leq \mathrm{xmax}, \newline + \infty & \text{otherwise.} \end{array} \right. $$

Parameters
  • xmin – minimum value (default -inf)

  • xmax – maximum value (default +inf)

All attributes of parent class CostRectangle are inherited.

Example C=CostReals(sz,xmin,xmax,y)

See also Map, Cost, CostIndicator, CostRectangle

CostNonNeg

class Cost.IndicatorFunctions.CostNonNeg(sz, y)

Bases: Cost.IndicatorFunctions.CostReals

CostNonNeg: Non-negativity indicator $$ C(\mathrm{x}) = \left\lbrace \begin{array}{ll} 0 & \text{if } \mathrm{x-y} \geq 0, \newline + \infty & \text{otherwise.} \end{array} \right. $$

All attributes of parent class CostRectangle are inherited.

Example C=CostNonNeg(sz,y)

See also Map, Cost, CostIndicator, CostRectangle, CostReals
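
Its prox is simply the projection onto the nonnegative orthant; a quick check (sizes are illustrative):

    % Sketch: prox of the nonnegativity indicator is max(x, 0)
    sz  = [8 8];
    C   = CostNonNeg(sz);              % y defaults to 0
    x   = randn(sz);
    p   = C.applyProx(x, 1);           % alpha is irrelevant for an indicator
    err = norm(p(:) - max(x(:), 0));   % expected to be ~0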

CostComplexRing

class Cost.IndicatorFunctions.CostComplexRing(sz, inner, outer, y)

Bases: Cost.IndicatorFunctions.CostIndicator

CostComplexRing: Indicator function of a complex ring $$ C(\mathrm{x}) = \left\lbrace \begin{array}{ll} 0 & \text{if } \mathrm{inner} \leq \vert \mathrm{x-y}\vert \leq \mathrm{outer}, \newline + \infty & \text{otherwise.} \end{array} \right. $$

Parameters
  • inner – inner radius of the ring (default 0)

  • outer – outer radius of the ring (default 1)

All attributes of parent class CostIndicator are inherited.

Example C=CostComplexRing(sz,inner,outer,y)

See also Map, Cost, CostIndicator

applyProx_(this, x, ~)

Reimplemented from parent class Cost.

apply_(this, x)

Reimplemented from parent class Cost.

CostComplexCircle

class Cost.IndicatorFunctions.CostComplexCircle(sz, radius, y)

Bases: Cost.IndicatorFunctions.CostComplexRing

CostComplexCircle: Indicator function of the complex circle of radius \(c\) $$ C(\mathrm{x}) = \left\lbrace \begin{array}{ll} 0 & \text{if } \vert \mathrm{x-y}\vert = c, \newline + \infty & \text{otherwise.} \end{array} \right. $$

All attributes of parent class CostComplexRing are inherited.

Parameters

radius – radius of the circle (default 1)

Example C=CostComplexCircle(sz,radius,y)

See also Map, Cost, CostIndicator, CostComplexRing

CostComplexDisk

class Cost.IndicatorFunctions.CostComplexDisk(sz, radius, y)

Bases: Cost.IndicatorFunctions.CostComplexRing

CostComplexDisk: Indicator function of the complex disk of radius \(c\) $$ C(\mathrm{x}) = \left\lbrace \begin{array}{ll} 0 & \text{if } \vert \mathrm{x-y}\vert \leq c, \newline + \infty & \text{otherwise.} \end{array} \right. $$

All attributes of parent class CostComplexRing are inherited.

Parameters

radius – radius of the disk (default 1)

Example C=CostComplexDisk(sz,radius,y)

See also Map, Cost, CostIndicator, CostComplexRing
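
Its prox projects each entry onto the disk by rescaling entries whose modulus exceeds the radius; a sketch (radius is illustrative):

    % Sketch: projection of complex values onto the unit disk
    sz  = [8 8];
    C   = CostComplexDisk(sz, 1);       % radius c = 1
    x   = 2*(randn(sz) + 1i*randn(sz));
    p   = C.applyProx(x, 1);            % entries with |x| > 1 land on the circle
    chk = max(abs(p(:)));               % expected to be <= 1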