mrpro.operators.functionals.L2NormSquared
- class mrpro.operators.functionals.L2NormSquared[source]
Bases: ElementaryProximableFunctional
Functional class for the squared L2 Norm.
This implements the functional given by \(f: C^N \rightarrow [0, \infty), x \rightarrow \| W (x-b)\|_2^2\), where \(W\) is either a scalar or tensor that corresponds to a (block-) diagonal operator that is applied to the input. This is, for example, useful for non-Cartesian MRI reconstruction when using a density-compensation function for k-space pre-conditioning, for masking of image data, or for spatially varying regularization weights.
In most cases, consider setting divide_by_n to True to be independent of input size. Alternatively, the functional mrpro.operators.functionals.MSE can be used. The norm is computed along the dimensions given at initialization; all other dimensions are considered batch dimensions.
- __init__(target: Tensor | None | complex = None, weight: Tensor | complex = 1.0, dim: int | Sequence[int] | None = None, divide_by_n: bool = False, keepdim: bool = False) None[source]
Initialize a Functional.
We assume that functionals are given in the form \(f(x) = \phi ( \mathrm{weight} ( x - \mathrm{target}))\) for some functional \(\phi\).
- Parameters:
  - target (Tensor | None | complex, default: None) – target element, often a data tensor (see above)
  - weight (Tensor | complex, default: 1.0) – weight parameter (see above)
  - dim (int | Sequence[int] | None, default: None) – dimension(s) over which the functional is reduced. All other dimensions of weight (x - target) will be treated as batch dimensions.
  - divide_by_n (bool, default: False) – if True, the result is scaled by the number of elements of the dimensions indexed by dim in the tensor weight (x - target), i.e. the functional is calculated as the mean instead of the sum.
  - keepdim (bool, default: False) – if True, the dimension(s) of the input indexed by dim are kept and collapsed to singletons, else they are removed from the result.
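Example (a minimal usage sketch; the import path follows the heading above, and the tensor shapes and density-compensation weight are purely illustrative):

import torch
from mrpro.operators.functionals import L2NormSquared

# complex data with a (coil, k1, k0)-like layout; shapes are illustrative
data = torch.randn(4, 8, 8, dtype=torch.complex64)
dcf = torch.rand(1, 8, 8)  # e.g. a density-compensation weight, broadcast over the coil dimension

# f(x) = || dcf * (x - data) ||_2^2, reduced over the last two dimensions and
# divided by the number of reduced elements (mean instead of sum)
functional = L2NormSquared(target=data, weight=dcf, dim=(-2, -1), divide_by_n=True)

x = torch.randn_like(data)
(value,) = functional(x)  # functionals return a tuple containing a single tensor
print(value.shape)        # torch.Size([4]) -- the coil dimension is treated as a batch dimension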
- __call__(x: Tensor) tuple[Tensor][source]
Compute the squared L2 norm of the input tensor.
Calculates \(\| W (x - b) \|_2^2\), where \(W\) is weight and \(b\) is target. The squared norm is computed along the dimensions specified by dim. If divide_by_n is True, the result is averaged over these dimensions; otherwise, it is summed.
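Example (a short sketch of the reduction behaviour of dim, divide_by_n and keepdim; the values are only compared up to the scaling factor, so the default target and weight do not matter):

import torch
from mrpro.operators.functionals import L2NormSquared

x = torch.randn(2, 3, 5)

(summed,) = L2NormSquared(dim=(-2, -1))(x)                  # sum over the 3*5 reduced elements
(mean,) = L2NormSquared(dim=(-2, -1), divide_by_n=True)(x)  # average over the 3*5 reduced elements
(kept,) = L2NormSquared(dim=(-2, -1), keepdim=True)(x)      # reduced dimensions kept as singletons

print(summed.shape, mean.shape, kept.shape)  # torch.Size([2]) torch.Size([2]) torch.Size([2, 1, 1])
assert torch.allclose(summed, 15 * mean)     # divide_by_n rescales by the number of reduced elements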
- forward(x: Tensor) tuple[Tensor][source]
Apply forward of L2NormSquared.
Note
Prefer calling the instance of L2NormSquared as operator(x) over directly calling this method. See this PyTorch discussion.
- prox(x: Tensor, sigma: Tensor | float = 1.0) tuple[Tensor][source]
Proximal Mapping of the squared L2 Norm.
Apply the proximal mapping of the squared L2 norm.
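Example (a usage sketch; the assertion only checks the defining property of a proximal map under the usual convention \(\mathrm{prox}_{\sigma f}(x) = \mathrm{argmin}_u \, \sigma f(u) + \tfrac{1}{2}\|u - x\|_2^2\), namely that the prox point does not have a larger objective value than \(x\) itself):

import torch
from mrpro.operators.functionals import L2NormSquared

target = torch.randn(3, 4)
functional = L2NormSquared(target=target, weight=2.0, dim=-1)

x = torch.randn(3, 4)
sigma = 0.5
(p,) = functional.prox(x, sigma=sigma)

def objective(u):
    (f,) = functional(u)
    return sigma * f + 0.5 * ((u - x) ** 2).sum(dim=-1)

assert torch.all(objective(p) <= objective(x) + 1e-6)  # the prox minimizes the objective per batch element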
- prox_convex_conj(x: Tensor, sigma: Tensor | float = 1.0) tuple[Tensor][source]
Convex conjugate of squared L2 Norm.
Apply the proximal mapping of the convex conjugate of the squared L2 norm.
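Example (a sanity-check sketch based on the Moreau decomposition \(x = \mathrm{prox}_{\sigma f^*}(x) + \sigma \, \mathrm{prox}_{\sigma^{-1} f}(x/\sigma)\); this assumes prox and prox_convex_conj follow the usual conventions):

import torch
from mrpro.operators.functionals import L2NormSquared

functional = L2NormSquared(target=torch.randn(3, 4), dim=-1)
x = torch.randn(3, 4)
sigma = 0.7

(p_conj,) = functional.prox_convex_conj(x, sigma=sigma)
(p,) = functional.prox(x / sigma, sigma=1 / sigma)
assert torch.allclose(p_conj, x - sigma * p, atol=1e-5)  # Moreau decomposition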
- __add__(other: Operator[Unpack[Tin], Tout]) Operator[Unpack[Tin], Tout][source]
- __add__(other: Tensor | complex) Operator[Unpack[Tin], tuple[Unpack[Tin]]]
Operator addition.
Returns
lambda x: self(x) + other(x) if other is an operator, lambda x: self(x) + other*x if other is a tensor
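Example (a sketch of adding two functionals; both summands are taken from this module):

import torch
from mrpro.operators.functionals import L2NormSquared

f1 = L2NormSquared(target=torch.zeros(4), dim=-1)
f2 = L2NormSquared(target=torch.ones(4), dim=-1)

combined = f1 + f2  # lambda x: f1(x) + f2(x)
x = torch.randn(4)
(value,) = combined(x)
assert torch.allclose(value, f1(x)[0] + f2(x)[0])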
- __matmul__(other: Operator[Unpack[Tin2], tuple[Unpack[Tin]]] | Operator[Unpack[Tin2], tuple[Tensor, ...]]) Operator[Unpack[Tin2], Tout][source]
Operator composition.
Returns
lambda x: self(other(x))
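Example (a data-consistency sketch using operator composition; mrpro.operators.FastFourierOp and its constructor arguments are assumed here purely for illustration, any Operator returning a tuple of tensors would work):

import torch
from mrpro.operators import FastFourierOp  # assumed example of a linear operator
from mrpro.operators.functionals import L2NormSquared

kspace = torch.randn(4, 8, 8, dtype=torch.complex64)  # measured data, shapes illustrative
fourier_op = FastFourierOp(dim=(-2, -1))

# data-consistency term f(x) = || F(x) - kspace ||_2^2
data_consistency = L2NormSquared(target=kspace, dim=(-3, -2, -1)) @ fourier_op

image = torch.zeros(4, 8, 8, dtype=torch.complex64, requires_grad=True)
(loss,) = data_consistency(image)
loss.backward()  # the composed operator can be used directly as a PyTorch loss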
- __mul__(other: Tensor | complex) Operator[Unpack[Tin], Tout][source]
Operator multiplication with tensor.
Returns
lambda x: self(x*other)
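Example (a sketch showing that multiplication acts on the input of the functional):

import torch
from mrpro.operators.functionals import L2NormSquared

functional = L2NormSquared(dim=-1)
x = torch.randn(5)

scaled_input = functional * 2.0  # lambda x: functional(x * 2.0)
assert torch.allclose(scaled_input(x)[0], functional(2.0 * x)[0])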
- __or__(other: ProximableFunctional) ProximableFunctionalSeparableSum[Tensor, Tensor][source]
Create a ProximableFunctionalSeparableSum object from two proximable functionals.
- Parameters:
  - other (ProximableFunctional) – second functional to be summed
- Returns:
  ProximableFunctionalSeparableSum object
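Example (a sketch of a separable sum; mrpro.operators.functionals.L1Norm and the call convention of the separable sum, one tensor per functional, are assumptions here):

import torch
from mrpro.operators.functionals import L1Norm, L2NormSquared  # L1Norm assumed to be available

f = L2NormSquared(dim=-1)
g = L1Norm(dim=-1)

separable_sum = f | g  # acts on a pair of tensors: (x1, x2) -> f(x1) + g(x2)
x1, x2 = torch.randn(5), torch.randn(5)
(value,) = separable_sum(x1, x2)
assert torch.allclose(value, f(x1)[0] + g(x2)[0])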
- __radd__(other: Tensor | complex) Operator[Unpack[Tin], tuple[Unpack[Tin]]][source]
Operator right addition.
Returns
lambda x: other*x + self(x)
- __rmul__(scalar: Tensor | complex) ProximableFunctional[source]
Multiply the functional by a scalar.
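Example (a sketch of scaling by a regularization weight; the scaled object remains a ProximableFunctional and thus keeps a proximal mapping):

import torch
from mrpro.operators.functionals import L2NormSquared

functional = L2NormSquared(dim=-1)
regularizer = 0.1 * functional  # the functional x -> 0.1 * ||x||_2^2

x = torch.randn(5)
(value,) = regularizer(x)
assert torch.allclose(value, 0.1 * functional(x)[0])

(p,) = regularizer.prox(x, sigma=1.0)  # still proximable, e.g. for use in proximal algorithms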