mrpro.data.AcqIdx
- class mrpro.data.AcqIdx[source]
Bases: Dataclass
Acquisition index for each readout.
- __init__(k1: Tensor = _int_factory(), k2: Tensor = _int_factory(), average: Tensor = _int_factory(), slice: Tensor = _int_factory(), contrast: Tensor = _int_factory(), phase: Tensor = _int_factory(), repetition: Tensor = _int_factory(), set: Tensor = _int_factory(), segment: Tensor = _int_factory(), user0: Tensor = _int_factory(), user1: Tensor = _int_factory(), user2: Tensor = _int_factory(), user3: Tensor = _int_factory(), user4: Tensor = _int_factory(), user5: Tensor = _int_factory(), user6: Tensor = _int_factory(), user7: Tensor = _int_factory()) None
- property device: device | None[source]
Return the device of the tensors.
Looks at each field of the dataclass that implements a device attribute, such as torch.Tensor or Dataclass instances. If the devices of the fields differ, an InconsistentDeviceError is raised; otherwise the device is returned. If no field implements a device attribute, None is returned.
- Raises:
InconsistentDeviceError – If the devices of different fields differ.
- Returns:
The device of the fields, or None if no field implements a device attribute.
- property is_cpu: bool[source]
Return True if all tensors are on the CPU.
Checks all tensor attributes of the dataclass for their device (recursively if an attribute is a Dataclass). Returns False if not all tensors are on the CPU or if the device is inconsistent; returns True if the dataclass has no tensors as attributes.
- property is_cuda: bool[source]
Return True if all tensors are on a single CUDA device.
Checks all tensor attributes of the dataclass for their device (recursively if an attribute is a Dataclass). Returns False if the tensors are not all on the same CUDA device or if the device is inconsistent; returns True if the dataclass has no tensors as attributes.
- property ndim: int[source]
Return the number of dimensions of the dataclass.
This is the number of dimensions of the broadcasted shape of all fields.
- property shape: Size[source]
Return the broadcasted shape of all tensor/data fields.
Each field of this dataclass is broadcastable to this shape.
- Returns:
The broadcasted shape of all fields.
- Raises:
InconsistentShapeError – If the shapes cannot be broadcasted.
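The broadcasting rule is torch's standard one; as a sketch of how the common shape of two index fields would be computed:

```python
import torch

# Two index fields with different but broadcast-compatible shapes:
k1 = torch.zeros(4, 1, dtype=torch.int64)         # e.g. 4 phase-encode steps
slice_idx = torch.zeros(1, 3, dtype=torch.int64)  # e.g. 3 slices

# The `shape` property reports the common broadcasted shape of all fields.
common = torch.broadcast_shapes(k1.shape, slice_idx.shape)
print(common)  # torch.Size([4, 3])
```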
- apply(function: Callable[[Any], Any] | None = None, *, recurse: bool = True) Self[source]
Apply a function to all children. Returns a new object.
- apply_(function: Callable[[Any], Any] | None = None, *, memo: dict[int, Any] | None = None, recurse: bool = True) Self[source]
Apply a function to all children in-place.
- Parameters:
function (Callable[[Any], Any] | None, default: None) – The function to apply to all fields. None is interpreted as a no-op.
memo (dict[int, Any] | None, default: None) – A dictionary keeping track of objects the function has already been applied to, to avoid multiple applications. This is useful if the object has a circular reference.
recurse (bool, default: True) – If True, the function will be applied to all children that are Dataclass instances.
- concatenate(*others: Self, dim: int) Self[source]
Concatenate other instances to the current instance.
Only tensor-like fields will be concatenated in the specified dimension. List fields will be concatenated as a list. Other fields will be ignored.
- Parameters:
others (Self) – Other instances to concatenate.
dim (int) – The dimension to concatenate along.
- Returns:
The concatenated dataclass.
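As a sketch of the per-field behaviour described above, tensor fields are joined along `dim` much like torch.cat (illustrative only, not mrpro's implementation):

```python
import torch

# Two hypothetical k1 index fields of two instances being concatenated
# along the first dimension.
k1_a = torch.zeros(1, 4, dtype=torch.int64)
k1_b = torch.ones(1, 4, dtype=torch.int64)
k1_cat = torch.cat([k1_a, k1_b], dim=0)
print(k1_cat.shape)  # torch.Size([2, 4])
```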
- cpu(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]
Put in CPU memory.
- Parameters:
memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensors.
copy (bool, default: False) – If True, the returned tensors will always be copies, even if the input was already on the correct device. This will also create new tensors for views.
- cuda(device: device | str | int | None = None, *, non_blocking: bool = False, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]
Put object in CUDA memory.
- Parameters:
device (device | str | int | None, default: None) – The destination GPU device. Defaults to the current CUDA device.
non_blocking (bool, default: False) – If True and the source is in pinned memory, the copy will be asynchronous with respect to the host. Otherwise, the argument has no effect.
memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensors.
copy (bool, default: False) – If True, the returned tensors will always be copies, even if the input was already on the correct device. This will also create new tensors for views.
- detach() Self[source]
Detach the data from the autograd graph.
- Returns:
A new dataclass with the data detached from the autograd graph. The data is shared between the original and the new object. Use detach().clone() to create an independent copy.
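The sharing behaviour mirrors torch.Tensor.detach; for a single tensor the difference between detaching and detaching-plus-cloning can be illustrated as:

```python
import torch

x = torch.ones(3, requires_grad=True)

# detach() shares the underlying storage with the original tensor.
y = x.detach()
print(y.requires_grad)               # False
print(y.data_ptr() == x.data_ptr())  # True, memory is shared

# detach().clone() produces an independent copy.
z = x.detach().clone()
print(z.data_ptr() == x.data_ptr())  # False, independent storage
```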
- double(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]
Convert all float tensors to double precision.
Converts float to float64 and complex to complex128.
- Parameters:
memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensors.
copy (bool, default: False) – If True, the returned tensors will always be copies, even if the input was already on the correct device. This will also create new tensors for views.
- half(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]
Convert all float tensors to half precision.
Converts float to float16 and complex to complex32.
- Parameters:
memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensors.
copy (bool, default: False) – If True, the returned tensors will always be copies, even if the input was already on the correct device. This will also create new tensors for views.
- rearrange(pattern: str, **axes_lengths: int) Self[source]
Rearrange the data according to the specified pattern.
Similar to einops.rearrange, allowing flexible rearrangement of data dimensions.
Examples
>>> # Split the phase encode lines into 8 cardiac phases
>>> data.rearrange('batch coils k2 (phase k1) k0 -> batch phase coils k2 k1 k0', phase=8)
>>> # Split the k-space samples into 64 k1 and 64 k2 lines
>>> data.rearrange('... 1 1 (k2 k1 k0) -> ... k2 k1 k0', k2=64, k1=64, k0=128)
- Parameters:
pattern (str) – String describing the rearrangement pattern. See einops.rearrange and the examples above for more details.
**axes_lengths (dict) – Optional mapping of axis names to their lengths. Used when the pattern contains unknown dimensions.
- Returns:
The rearranged data with the same type as the input.
- single(*, memory_format: memory_format = torch.preserve_format, copy: bool = False) Self[source]
Convert all float tensors to single precision.
Converts float to float32 and complex to complex64.
- Parameters:
memory_format (memory_format, default: torch.preserve_format) – The desired memory format of the returned tensors.
copy (bool, default: False) – If True, the returned tensors will always be copies, even if the input was already on the correct device. This will also create new tensors for views.
- split(dim: int, size: int = 1, overlap: int = 0, dilation: int = 1) tuple[Self, ...][source]
Split the dataclass along a dimension.
- Parameters:
dim (int) – The dimension to split along.
size (int, default: 1) – The number of elements in each split.
overlap (int, default: 0) – The number of elements shared between consecutive splits; negative values leave a gap.
dilation (int, default: 1) – The spacing between the elements within each split.
Examples
If the dimension has 6 elements:
split with size 2, overlap 0, dilation 1 -> elements (0,1), (2,3), and (4,5)
split with size 2, overlap 1, dilation 1 -> elements (0,1), (1,2), (2,3), (3,4), and (4,5)
split with size 2, overlap 0, dilation 2 -> elements (0,2), and (3,5)
split with size 2, overlap -1, dilation 1 -> elements (0,1), and (3,4)
- Returns:
A tuple of the splits.
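The windowing in the examples above can be reproduced with a small pure-Python sketch (the helper and its formula are an illustrative reconstruction, not part of mrpro):

```python
def split_starts(n: int, size: int = 1, overlap: int = 0, dilation: int = 1):
    """Reconstruct the element indices of each split along a dimension of n elements."""
    span = (size - 1) * dilation + 1   # extent of one window along the dimension
    step = span - overlap              # distance between consecutive window starts
    starts = range(0, n - span + 1, step)
    return [tuple(s + i * dilation for i in range(size)) for s in starts]

print(split_starts(6, size=2))              # [(0, 1), (2, 3), (4, 5)]
print(split_starts(6, size=2, overlap=1))   # [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
print(split_starts(6, size=2, dilation=2))  # [(0, 2), (3, 5)]
print(split_starts(6, size=2, overlap=-1))  # [(0, 1), (3, 4)]
```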
- stack(*others: Self) Self[source]
Stack other instances along a new first dimension.
- Parameters:
others (Self) – Other instances to stack.
- to(device: str | device | int | None = None, dtype: dtype | None = None, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self[source]
- to(dtype: dtype, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self
- to(tensor: Tensor, non_blocking: bool = False, *, copy: bool = False, memory_format: memory_format | None = None) Self
Perform dtype and/or device conversion of data.
A torch.dtype and torch.device are inferred from the arguments args and kwargs. Please have a look at the documentation of torch.Tensor.to for more details.
A new instance of the dataclass will be returned.
The conversion will be applied to all Tensor or Module fields of the dataclass, and to all fields that implement Dataclass.
The dtype kind, i.e. float or complex, will always be preserved, but the precision of floating point dtypes might be changed.
Examples
If called with dtype=torch.float32 or dtype=torch.complex64:
- A complex128 tensor will be converted to complex64
- A float64 tensor will be converted to float32
- A bool tensor will remain bool
- An int64 tensor will remain int64
If other conversions are desired, please use the to method of the fields directly.
If the copy argument is set to True, a deep copy will be returned even if no conversion is necessary. If two fields are views of the same data before, in the result they will be independent copies if copy is set to True or a conversion is necessary. If set to False, some tensors might be shared between the original and the new object.
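The kind-preserving conversion rule above can be sketched for a single tensor as follows (an illustration of the rule for the single-precision case, not mrpro's implementation):

```python
import torch

def convert_preserving_kind(tensor: torch.Tensor) -> torch.Tensor:
    """Convert to single precision while leaving bool/integer tensors unchanged (sketch)."""
    if tensor.is_floating_point():
        return tensor.to(torch.float32)   # float64 -> float32
    if tensor.is_complex():
        return tensor.to(torch.complex64)  # complex128 -> complex64
    return tensor                          # bool and int64 tensors are left as-is

print(convert_preserving_kind(torch.zeros(2, dtype=torch.float64)).dtype)  # torch.float32
print(convert_preserving_kind(torch.zeros(2, dtype=torch.int64)).dtype)    # torch.int64
```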
- __eq__(other: object) bool[source]
Check deep equality of two dataclasses.
Tests equality up to broadcasting.
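"Equality up to broadcasting" means fields are compared after being broadcast to a common shape; with plain tensors this can be illustrated as:

```python
import torch

# A size-1 field and an explicit size-3 field carrying the same value:
a = torch.tensor([5])        # shape (1,)
b = torch.tensor([5, 5, 5])  # shape (3,)

# After broadcasting to a common shape, the two fields compare equal.
a_b, b_b = torch.broadcast_tensors(a, b)
print(torch.equal(a_b, b_b))  # True
```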
- __new__(**kwargs)