mrpro.data.CsmData

class mrpro.data.CsmData[source]

Bases: QData

Coil sensitivity map class.

__init__(data: Tensor, header: QHeader) None

Create a CsmData object from a data tensor and an MRpro header.

Parameters:
  • data (Tensor) – Quantitative image data tensor with dimensions (*other, coils, z, y, x)

  • header (QHeader) – MRpro header containing the required metadata for the QHeader

classmethod from_idata_inati(idata: IData, smoothing_width: int | SpatialDimension[int] = 5, chunk_size_otherdim: int | None = None, downsampled_size: int | SpatialDimension[int] | None = None) Self[source]

Create a CSM object from image data using the Inati method.

See also inati.

Parameters:
  • idata (IData) – IData object containing the images for each coil element.

  • smoothing_width (Union[int, SpatialDimension[int]], default: 5) – Size of the smoothing kernel.

  • chunk_size_otherdim (int | None, default: None) – Number of elements of the other dimensions to process at once. If None, all elements are processed at once.

  • downsampled_size (Union[int, SpatialDimension[int], None], default: None) – The image data is downsampled to this size before the CSM is calculated, to speed up the calculation and reduce memory requirements. The final CSM is upsampled to the original size. If None, no downsampling is performed.

Returns:

Coil sensitivity maps
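The chunk_size_otherdim parameter batches the work over the flattened other dimensions. A minimal sketch of that batching logic (other_dim_chunks is a hypothetical helper for illustration, not part of mrpro):

```python
def other_dim_chunks(n_other, chunk_size=None):
    """Yield index ranges over the flattened `other` dimensions.

    chunk_size=None mimics the default: all elements are processed at once.
    Hypothetical illustration of the `chunk_size_otherdim` semantics.
    """
    if chunk_size is None:
        yield range(n_other)
        return
    for start in range(0, n_other, chunk_size):
        yield range(start, min(start + chunk_size, n_other))
```

For example, 10 other-dimension elements with chunk_size 4 are processed as three batches of sizes 4, 4, and 2.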

classmethod from_idata_walsh(idata: IData, smoothing_width: int | SpatialDimension[int] = 5, chunk_size_otherdim: int | None = None, downsampled_size: int | SpatialDimension[int] | None = None) Self[source]

Create a CSM object from image data using the Walsh method.

See also walsh.

Parameters:
  • idata (IData) – IData object containing the images for each coil element.

  • smoothing_width (Union[int, SpatialDimension[int]], default: 5) – Width of the smoothing filter.

  • chunk_size_otherdim (int | None, default: None) – Number of elements of the other dimensions to process at once. If None, all elements are processed at once.

  • downsampled_size (Union[int, SpatialDimension[int], None], default: None) – The image data is downsampled to this size before the CSM is calculated, to speed up the calculation and reduce memory requirements. The final CSM is upsampled to the original size. If None, no downsampling is performed.

Returns:

Coil sensitivity maps
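The idea behind the Walsh method is to estimate, per voxel, the dominant eigenvector of the coil covariance accumulated over a smoothing window; that eigenvector points in the direction of the coil sensitivities. A toy single-voxel sketch of this idea in pure Python (illustration only, not mrpro's implementation):

```python
def dominant_coil_vector(coil_samples, iters=50):
    """Toy Walsh-style estimate for one voxel.

    coil_samples: per-coil lists of complex samples from the smoothing window.
    Builds the sample covariance R[i][j] = sum_k s_i[k] * conj(s_j[k]) and
    returns its dominant eigenvector (the sensitivity direction, up to a
    global phase) via power iteration. Not mrpro's implementation.
    """
    n = len(coil_samples)
    r = [[sum(a * b.conjugate() for a, b in zip(coil_samples[i], coil_samples[j]))
          for j in range(n)] for i in range(n)]
    v = [1.0 + 0j] * n
    for _ in range(iters):
        w = [sum(r[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(abs(x) ** 2 for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

If the window samples are a common signal scaled by per-coil sensitivities, the returned vector is proportional to those sensitivities.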

classmethod from_kdata_inati(kdata: KData, noise: KNoise | None = None, smoothing_width: int | SpatialDimension[int] = 5, chunk_size_otherdim: int | None = None, downsampled_size: int | SpatialDimension[int] | None = None) Self[source]

Create a CSM object from k-space data using the Inati method.

See also inati.

Parameters:
  • kdata (KData) – k-space data

  • noise (KNoise | None, default: None) – Noise measurement for prewhitening.

  • smoothing_width (Union[int, SpatialDimension[int]], default: 5) – Width of the smoothing filter.

  • chunk_size_otherdim (int | None, default: None) – Number of elements of the other dimensions to process at once. If None, all elements are processed at once.

  • downsampled_size (Union[int, SpatialDimension[int], None], default: None) – The image data reconstructed from the k-space data is downsampled to this size before the CSM is calculated, to speed up the calculation and reduce memory requirements. The final CSM is upsampled to the original size. If None, no downsampling is performed.

Returns:

Coil sensitivity maps

classmethod from_kdata_walsh(kdata: KData, noise: KNoise | None = None, smoothing_width: int | SpatialDimension[int] = 5, chunk_size_otherdim: int | None = None, downsampled_size: int | SpatialDimension[int] | None = None) Self[source]

Create a CSM object from k-space data using the Walsh method.

See also walsh.

Parameters:
  • kdata (KData) – k-space data

  • noise (KNoise | None, default: None) – Noise measurement for prewhitening.

  • smoothing_width (Union[int, SpatialDimension[int]], default: 5) – Width of the smoothing filter.

  • chunk_size_otherdim (int | None, default: None) – Number of elements of the other dimensions to process at once. If None, all elements are processed at once.

  • downsampled_size (Union[int, SpatialDimension[int], None], default: None) – The image data reconstructed from the k-space data is downsampled to this size before the CSM is calculated, to speed up the calculation and reduce memory requirements. The final CSM is upsampled to the original size. If None, no downsampling is performed.

Returns:

Coil sensitivity maps
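Prewhitening with a noise measurement can be pictured as follows: estimate the noise covariance Ψ across coils, factor it as Ψ = L Lᴴ, and apply L⁻¹ to the coil data so the noise becomes uncorrelated with unit variance. A small self-contained sketch of that idea (the helper names are hypothetical; this is an illustration, not mrpro's implementation):

```python
def cholesky(psi):
    """Lower-triangular L with L @ conj(L).T == psi, for a small Hermitian
    positive-definite matrix given as nested lists of complex numbers."""
    n = len(psi)
    low = [[0j] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(low[i][k] * low[j][k].conjugate() for k in range(j))
            if i == j:
                low[i][j] = (psi[i][i] - s).real ** 0.5 + 0j
            else:
                low[i][j] = (psi[i][j] - s) / low[j][j]
    return low

def prewhiten(low, coil_data):
    """Apply L^{-1} to a coil-data vector by forward substitution."""
    n = len(low)
    y = [0j] * n
    for i in range(n):
        y[i] = (coil_data[i] - sum(low[i][k] * y[k] for k in range(i))) / low[i][i]
    return y
```

After this transform, the noise covariance of the whitened data is the identity, so all coils contribute on an equal footing to the CSM estimate.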

classmethod from_single_dicom(filename: str | Path) Self[source]

Read a single DICOM file and return a CsmData object.

Parameters:

filename (str | Path) – Path to the DICOM file.

data: torch.Tensor

Tensor containing quantitative image data with dimensions (*other, coils, z, y, x).

header: QHeader

Header describing quantitative data.

property device: device | None[source]

Return the device of the tensors.

Looks at each field of the dataclass that implements a device attribute, such as torch.Tensor fields or Dataclass instances. If the devices of the fields differ, an InconsistentDeviceError is raised; otherwise the device is returned. If no field implements a device attribute, None is returned.

Raises:

InconsistentDeviceError – If the devices of different fields differ.

Returns:

The device of the fields or None if no field implements a device attribute.

property is_cpu: bool[source]

Return True if all tensors are on the CPU.

Checks the device of all tensor attributes of the dataclass (recursively, if an attribute is itself a Dataclass).

Returns False if not all tensors are on the CPU or the devices are inconsistent; returns True if the dataclass has no tensor attributes.

property is_cuda: bool[source]

Return True if all tensors are on a single CUDA device.

Checks the device of all tensor attributes of the dataclass (recursively, if an attribute is itself a Dataclass).

Returns False if the tensors are not all on the same CUDA device or the devices are inconsistent; returns True if the dataclass has no tensor attributes.

property ndim: int[source]

Return the number of dimensions of the dataclass.

This is the number of dimensions of the broadcasted shape of all fields.

property shape: Size[source]

Return the broadcasted shape of all tensor/data fields.

Each field of this dataclass is broadcastable to this shape.

Returns:

The broadcasted shape of all fields.

Raises:

InconsistentShapeError – If the shapes cannot be broadcasted.
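The broadcasted shape follows the usual right-aligned broadcasting rules. A minimal sketch of the check (broadcast_shape is a hypothetical helper; mrpro raises InconsistentShapeError where this sketch raises ValueError):

```python
def broadcast_shape(*shapes):
    """Right-aligned broadcasting of shapes, as used for the dataclass shape.

    Each axis must agree across shapes or be 1 (or absent); otherwise the
    shapes cannot be broadcast. Illustration only, not mrpro code.
    """
    ndim = max((len(s) for s in shapes), default=0)
    result = []
    for axis in range(ndim):
        sizes = {s[len(s) - ndim + axis] for s in shapes if len(s) - ndim + axis >= 0}
        sizes.discard(1)
        if len(sizes) > 1:
            raise ValueError(f"shapes cannot be broadcast at axis {axis}: {sizes}")
        result.append(sizes.pop() if sizes else 1)
    return tuple(result)
```

For example, fields with shapes (3, 1, 5) and (4, 5) broadcast to (3, 4, 5), while (2,) and (3,) do not broadcast.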

as_operator() SensitivityOp[source]

Create SensitivityOp using a copy of the CSMs.

apply(function: Callable[[Any], Any] | None = None, *, recurse: bool = True) Self[source]

Apply a function to all children. Returns a new object.

Parameters:
  • function (Callable[[Any], Any] | None, default: None) – The function to apply to all fields. None is interpreted as a no-op.

  • recurse (bool, default: True) – If True, the function will be applied to all children that are Dataclass instances.

apply_(function: Callable[[Any], Any] | None = None, *, memo: dict[int, Any] | None = None, recurse: bool = True) Self[source]

Apply a function to all children in-place.

Parameters:
  • function (Callable[[Any], Any] | None, default: None) – The function to apply to all fields. None is interpreted as a no-op.

  • memo (dict[int, Any] | None, default: None) – A dictionary to keep track of objects that the function has already been applied to, to avoid multiple applications. This is useful if the object has a circular reference.

  • recurse (bool, default: True) – If True, the function will be applied to all children that are Dataclass instances.

clone() Self[source]

Return a deep copy of the object.

concatenate(*others: Self, dim: int) Self[source]

Concatenate other instances to the current instance.

Only tensor-like fields will be concatenated along the specified dimension. List fields will be concatenated as lists. Other fields will be ignored.

Parameters:
  • others (Self) – Other instances to concatenate.

  • dim (int) – The dimension to concatenate along.

Returns:

The concatenated dataclass.

cpu(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Put in CPU memory.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

cuda(device: device | str | int | None = None, *, non_blocking: bool = False, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Put object in CUDA memory.

Parameters:
  • device (device | str | int | None, default: None) – The destination GPU device. Defaults to the current CUDA device.

  • non_blocking (bool, default: False) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.

  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

double(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Convert all float tensors to double precision.

Converts float tensors to float64 and complex tensors to complex128.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

half(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Convert all float tensors to half precision.

Converts float tensors to float16 and complex tensors to complex32.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

items() Iterator[tuple[str, Any]][source]

Get an iterator over names and values of fields.

rearrange(pattern: str, **axes_lengths: int) Self[source]

Rearrange the data according to the specified pattern.

Similar to einops.rearrange, allowing flexible rearrangement of data dimensions.

Examples

>>> # Split the phase encode lines into 8 cardiac phases
>>> data.rearrange('batch coils k2 (phase k1) k0 -> batch phase coils k2 k1 k0', phase=8)
>>> # Split the k-space samples into 64 k1 and 64 k2 lines
>>> data.rearrange('... 1 1 (k2 k1 k0) -> ... k2 k1 k0', k2=64, k1=64, k0=128)

Parameters:
  • pattern (str) – String describing the rearrangement pattern. See einops.rearrange and the examples above for more details.

  • **axes_lengths (dict) – Optional dictionary mapping axis names to their lengths. Used when pattern contains unknown dimensions.

Returns:

The rearranged data with the same type as the input.

single(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]

Convert all float tensors to single precision.

Converts float tensors to float32 and complex tensors to complex64.

Parameters:
  • memory_format (memory_format, default: torch.preserve_format) – The desired memory format of returned tensor.

  • copy (bool, default: False) – If True, the returned tensor will always be a copy, even if the input was already on the correct device. This will also create new tensors for views.

split(dim: int, size: int = 1, overlap: int = 0, dilation: int = 1) tuple[Self, ...][source]

Split the dataclass along a dimension.

Parameters:
  • dim (int) – dimension to split along.

  • size (int, default: 1) – size of the splits.

  • overlap (int, default: 0) – overlap between splits. The stride will be size - overlap. Negative overlap will leave spaces between splits.

  • dilation (int, default: 1) – dilation of elements in each split.

Examples

If the dimension has 6 elements:

  • split with size 2, overlap 0, dilation 1 -> elements (0,1), (2,3), and (4,5)

  • split with size 2, overlap 1, dilation 1 -> elements (0,1), (1,2), (2,3), (3,4), and (4,5)

  • split with size 2, overlap 0, dilation 2 -> elements (0,2), and (3,5)

  • split with size 2, overlap -1, dilation 1 -> elements (0,1), and (3,4)

Returns:

A tuple of the splits.
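The start/stride arithmetic behind these examples can be sketched as follows. The stride is inferred from the examples as the extent of one dilated window minus the overlap (split_indices is a hypothetical helper, not mrpro code):

```python
def split_indices(n, size=1, overlap=0, dilation=1):
    """Index tuples for splitting a dimension of length n.

    span = (size - 1) * dilation + 1 is the extent of one dilated window;
    consecutive windows start stride = span - overlap apart, and windows
    extending past the end are dropped. Inferred from the documented
    examples; illustration only.
    """
    span = (size - 1) * dilation + 1
    stride = span - overlap
    splits = []
    start = 0
    while start + span <= n:
        splits.append(tuple(start + i * dilation for i in range(size)))
        start += stride
    return splits
```

Running this for a dimension of 6 elements reproduces the four example cases above.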

to(device: str | device | int | None = None, dtype: dtype | None = None, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self[source]
to(dtype: dtype, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self
to(tensor: Tensor, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self

Perform dtype and/or device conversion of data.

A torch.dtype and torch.device are inferred from the arguments. Please have a look at the documentation of torch.Tensor.to for more details.

A new instance of the dataclass will be returned.

The conversion will be applied to all Tensor or Module fields of the dataclass, and to all fields that are Dataclass instances.

The dtype kind, i.e. float or complex, will always be preserved, but the precision of floating-point dtypes might be changed.

Examples

If called with dtype=torch.float32 OR dtype=torch.complex64:

  • A complex128 tensor will be converted to complex64

  • A float64 tensor will be converted to float32

  • A bool tensor will remain bool

  • An int64 tensor will remain int64

If other conversions are desired, please use the to method of the fields directly.

If the copy argument is set to True, a deep copy will be returned even if no conversion is necessary. If two fields were views of the same data before, in the result they will be independent copies if copy is set to True or a conversion is necessary. If set to False, some tensors might be shared between the original and the new object.
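The kind-preservation rule above can be sketched as a small lookup (resulting_dtype is a hypothetical helper operating on dtype names as strings, not on torch dtypes):

```python
def resulting_dtype(field_dtype, requested):
    """Which dtype a field ends up with after `to(dtype=requested)`.

    Only floating-point precision follows the request; the kind
    (float / complex / int / bool) of the field is preserved.
    Sketch of the documented rule, not mrpro code.
    """
    floating = {"float16": 16, "float32": 32, "float64": 64,
                "complex32": 16, "complex64": 32, "complex128": 64}
    if field_dtype not in floating:       # bool, int64, ... stay unchanged
        return field_dtype
    bits = floating[requested]            # per-component precision requested
    if field_dtype.startswith("complex"):
        return f"complex{2 * bits}"
    return f"float{bits}"
```

This reproduces the example above: requesting float32 (or complex64) turns complex128 into complex64 and float64 into float32, while bool and int64 fields are untouched.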

__eq__(other: object) bool[source]

Check deep equality of two dataclasses.

Tests equality up to broadcasting.

__new__(**kwargs)