mrpro.operators.PatchOp

class mrpro.operators.PatchOp[source]

Bases: LinearOperator

Extract N-dimensional patches using a sliding window view.

The adjoint assembles patches back into an image.

__init__(dim: Sequence[int] | int, patch_size: Sequence[int] | int, stride: Sequence[int] | int | None = None, dilation: Sequence[int] | int = 1, domain_size: int | Sequence[int] | None = None) None[source]

Initialize the PatchOp.

Parameters:
  • dim (Sequence[int] | int) – Dimension(s) to extract patches from.

  • patch_size (Sequence[int] | int) – Size of patches (window_shape).

  • stride (Sequence[int] | int | None, default: None) – Stride between patches. Set to patch_size if None.

  • dilation (Sequence[int] | int, default: 1) – Dilation factor of the patches.

  • domain_size (int | Sequence[int] | None, default: None) – Size of the domain in the dimensions dim. If None, it is inferred from the input tensor on the first call. This is only used in the adjoint method.
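
Example (a minimal construction sketch; the tensor shape and parameter values below are illustrative assumptions, not taken from the mrpro documentation):

  import torch
  from mrpro.operators import PatchOp

  # Extract 4x4 patches from the last two dimensions of a (2, 16, 16) tensor,
  # moving the window by 2 samples along each patched axis (overlapping patches).
  patch_op = PatchOp(
      dim=(-2, -1),
      patch_size=(4, 4),
      stride=(2, 2),
      dilation=1,
      domain_size=(16, 16),  # lets the adjoint be used before any forward call
  )
  x = torch.arange(2 * 16 * 16, dtype=torch.float32).reshape(2, 16, 16)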

property H: LinearOperator[source]

Adjoint operator.

Obtains the adjoint of an instance of this operator as an AdjointLinearOperator, which itself is a LinearOperator that can be applied to tensors.

Note: linear_operator.H.H == linear_operator
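
Example (a hedged sketch, continuing the construction example in __init__; applying H is equivalent to calling adjoint directly):

  (patches,) = patch_op(x)
  (image_a,) = patch_op.H(patches)        # apply the adjoint via the H property
  (image_b,) = patch_op.adjoint(patches)  # equivalent direct call
  # patch_op.H.H is again the original (forward) operator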

property gram: LinearOperator[source]

Gram operator.

For a LinearOperator \(A\), the self-adjoint Gram operator is defined as \(A^H A\).

Note

This is the inherited default implementation.
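
Example (a hedged sketch, continuing the example in __init__; with the default implementation, the Gram operator applies the forward operator followed by its adjoint):

  (y,) = patch_op.gram(x)  # equivalent to applying patch_op and then patch_op.H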

__call__(x: Tensor) tuple[Tensor][source]

Extract N-dimensional patches from an input tensor using a sliding window.

Parameters:

x (Tensor) – Input tensor from which to extract patches.

Returns:

A tensor containing the extracted patches. The first dimension represents the number of patches, followed by the original tensor dimensions (excluding those used for patching), and then the patch dimensions themselves. Shape: (n_patches, ... , patch_size_dim1, patch_size_dim2, ...).
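
Example (a hedged shape sketch, continuing the example in __init__; the patch count assumes patch size 4 and stride 2 along both patched axes of length 16):

  (patches,) = patch_op(x)
  # x has shape (2, 16, 16); with patch_size 4 and stride 2 there are
  # (16 - 4) // 2 + 1 = 7 window positions per patched axis, i.e. 49 patches.
  # Expected shape per the description above: (49, 2, 4, 4).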

adjoint(patches: Tensor) tuple[Tensor][source]

Assemble patches back into an image (adjoint operation).

This method reconstructs an image by summing the provided patches at their respective locations, effectively reversing the patch extraction process. Overlapping areas are summed.

Parameters:

patches (Tensor) – Tensor of patches to be assembled. Expected shape is (n_patches, ..., patch_size_dim1, patch_size_dim2, ...).

Returns:

The assembled image. Its shape will match the original image from which patches would have been extracted, with patch dimensions replaced by the original domain sizes along those dimensions.
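
Example (a hedged sketch, continuing the extraction example above; because overlapping patches are summed, the result is the original image weighted by how often each sample was covered, not the image itself):

  (assembled,) = patch_op.adjoint(patches)
  # assembled has the same shape as x, i.e. (2, 16, 16); each entry equals the
  # corresponding entry of x multiplied by the number of patches covering it.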

forward(x: Tensor) tuple[Tensor][source]

Apply forward of PatchOp.

Note

Prefer calling the instance of the PatchOp operator as operator(x) over directly calling this method. See this PyTorch discussion.

operator_norm(initial_value: Tensor, dim: Sequence[int] | None, max_iterations: int = 20, relative_tolerance: float = 1e-4, absolute_tolerance: float = 1e-5, callback: Callable[[Tensor], None] | None = None) Tensor[source]

Power iteration for computing the operator norm of the operator.

Parameters:
  • initial_value (Tensor) – initial value to start the iteration; must be an element of the domain. If the initial value contains a zero vector for one of the considered problems, the function throws a ValueError.

  • dim (Sequence[int] | None) –

    The dimensions of the tensors on which the operator operates. The choice of dim determines how the operator norm is interpreted. For example, for a matrix-vector multiplication with a batched matrix tensor of shape (batch1, batch2, row, column) and a batched input tensor of shape (batch1, batch2, row):

    • If dim=None, the operator is considered as a block diagonal matrix with batch1*batch2 blocks and the result is a tensor containing a single norm value (shape (1, 1, 1)).

    • If dim=(-1,), batch1*batch2 matrices are considered, and for each a separate operator norm is computed.

    • If dim=(-2,-1), batch1 matrices with batch2 blocks are considered, and for each matrix a separate operator norm is computed.

    Thus, the choice of dim implicitly determines the domain of the operator.

  • max_iterations (int, default: 20) – maximum number of iterations

  • relative_tolerance (float, default: 1e-4) – relative tolerance for the change of the operator norm at each iteration; if set to zero, the maximal number of iterations is the only stopping criterion used to stop the power iteration.

  • absolute_tolerance (float, default: 1e-5) – absolute tolerance for the change of the operator-norm at each iteration; if set to zero, the maximal number of iterations is the only stopping criterion used to stop the power iteration.

  • callback (Callable[[Tensor], None] | None, default: None) – user-provided function to be called at each iteration

Returns:

An estimation of the operator norm. Shape corresponds to the shape of the input tensor initial_value with the dimensions specified in dim reduced to a single value. The pointwise multiplication of initial_value with the result of the operator norm will always be well-defined.
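
Example (a hedged usage sketch, continuing the example in __init__; the initial value is a random element of the domain, and dim=None treats the whole tensor as a single problem):

  initial_value = torch.randn(2, 16, 16)
  norm = patch_op.operator_norm(initial_value, dim=None, max_iterations=30)
  # For dim=None, all dimensions are reduced to size 1, i.e. norm has shape (1, 1, 1).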

__add__(other: LinearOperator | Tensor | complex) LinearOperator[source]
__add__(other: Operator[Tensor, tuple[Tensor]]) Operator[Tensor, tuple[Tensor]]

Operator addition.

Returns lambda x: self(x) + other(x) if other is an operator, and lambda x: self(x) + other*x if other is a tensor or scalar.
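
Example (a hedged sketch, continuing the example in __init__; both summands act on the same input):

  combined = patch_op + patch_op  # a LinearOperator with (A + A)(x) == A(x) + A(x)
  (y,) = combined(x)              # elementwise equal to 2 * patch_op(x)[0]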

__and__(other: LinearOperator) LinearOperatorMatrix[source]

Vertical stacking of two LinearOperators.

A&B is a LinearOperatorMatrix with two rows, with (A&B)(x) == (A(x), B(x)). See mrpro.operators.LinearOperatorMatrix for more information.
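
Example (a hedged sketch, continuing the example in __init__; the stacked operator applies each row to the same input):

  stacked = patch_op & patch_op  # LinearOperatorMatrix with two rows
  outputs = stacked(x)           # tuple with one patch tensor per row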

__matmul__(other: LinearOperator) LinearOperator[source]
__matmul__(other: Operator[Unpack[Tin2], tuple[Tensor]] | Operator[Unpack[Tin2], tuple[Tensor, ...]]) Operator[Unpack[Tin2], tuple[Tensor]]

Operator composition.

Returns lambda x: self(other(x))
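
Example (a hedged sketch, continuing the example in __init__; the right-hand operator is applied first):

  composed = patch_op.H @ patch_op  # extract patches, then reassemble them
  (y,) = composed(x)                # same composition as the Gram operator above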

__mul__(other: Tensor | complex) LinearOperator[source]

Operator elementwise left multiplication with tensor/scalar.

Returns lambda x: self(x*other)
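
Example (a hedged sketch, continuing the example in __init__; because PatchOp is linear, both variants below produce the same values):

  scale_input = patch_op * 2.0   # lambda x: patch_op(2.0 * x)
  scale_output = 2.0 * patch_op  # lambda x: 2.0 * patch_op(x), see __rmul__ below
  (a,) = scale_input(x)
  (b,) = scale_output(x)         # a and b are elementwise equal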

__or__(other: LinearOperator) LinearOperatorMatrix[source]

Horizontal stacking of two LinearOperators.

A|B is a LinearOperatorMatrix with two columns, with (A|B)(x1,x2) == A(x1) + B(x2). See mrpro.operators.LinearOperatorMatrix for more information.
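
Example (a hedged sketch, continuing the example in __init__; the row operator takes one input per column and sums the individual results):

  row = patch_op | patch_op  # LinearOperatorMatrix with one row and two columns
  (y,) = row(x, x)           # y == patch_op(x)[0] + patch_op(x)[0]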

__radd__(other: Tensor | complex) LinearOperator[source]

Operator addition.

Returns lambda x: self(x) + other*x

__rmul__(other: Tensor | complex) LinearOperator[source]

Operator elementwise right multiplication with tensor/scalar.

Returns lambda x: other*self(x)