GeoDataset#

class GeoDataset[source]#

Bases: AlbumentationsDataset

Dataset that reads directly from a Scene (i.e. a raster source and a label source).

__init__(scene: Scene, out_size: Optional[Union[int, tuple[int, int]]] = None, within_aoi: bool = True, transform: albumentations.core.transforms_interface.BasicTransform | None = None, transform_type: rastervision.pytorch_learner.dataset.transform.TransformType | None = None, normalize: bool = True, to_pytorch: bool = True, return_window: bool = False)[source]#

Constructor.

Parameters:
  • scene (Scene) – A Scene instance.

  • out_size (Optional[Union[int, tuple[int, int]]]) – Resize chips to this size before returning. Defaults to None.

  • within_aoi (bool) – If True and the scene has an AOI, only sample windows that lie fully within the AOI. If False, windows that only partially intersect the AOI are also allowed. Defaults to True.

  • transform (A.BasicTransform | None) – Albumentations transform to apply to the windows. Defaults to None. Albumentations transforms expect images of type uint8 (and, in some cases, other data types); the requirements for each transform are listed at https://albumentations.ai/docs/api_reference/augmentations/transforms/. If the imagery's data type does not match a transform's requirements, set a RasterTransformer that converts to uint8, such as MinMaxTransformer or StatsTransformer, on the RasterSource.

  • transform_type (rastervision.pytorch_learner.dataset.transform.TransformType | None) – Type of transform. Defaults to None.

  • normalize (bool) – If True, the sampled chips are normalized to [0, 1] based on their data type. Defaults to True.

  • to_pytorch (bool) – If True, the sampled chips and labels are converted to PyTorch tensors. Defaults to True.

  • return_window (bool) – Defaults to False.
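
A minimal construction sketch (illustrative only, not part of the reference): it assumes a Scene named scene has already been built elsewhere with a uint8-producing RasterSource, that GeoDataset is importable from rastervision.pytorch_learner.dataset, and that the out_size and transform values are arbitrary choices.

    import albumentations as A
    from rastervision.pytorch_learner.dataset import GeoDataset  # assumed import path

    # `scene` is assumed to be an existing rastervision.core.data.Scene whose
    # RasterSource yields uint8 imagery (see the note on `transform` above).
    ds = GeoDataset(
        scene,
        out_size=(256, 256),           # resize chips to 256x256 before returning
        within_aoi=True,               # only sample windows fully inside the AOI
        transform=A.HorizontalFlip(),  # Albumentations transform; expects uint8
        normalize=True,                # scale chips to [0, 1] based on dtype
        to_pytorch=True,               # return torch tensors rather than arrays
    )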

Methods

__init__(scene[, out_size, within_aoi, ...])

Constructor.

append_resize_transform(transform, out_size)

Get transform to use for resizing windows to out_size.

from_uris(*args, **kwargs)

__add__(other: Dataset[T_co]) ConcatDataset[T_co]#
Parameters:

other (Dataset[T_co]) –

Return type:

ConcatDataset[T_co]
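
As with PyTorch's Dataset interface, the + operator concatenates two datasets into a ConcatDataset. A short sketch, assuming train_ds and extra_ds are GeoDataset instances built as in the constructor example above:

    from torch.utils.data import ConcatDataset

    combined = train_ds + extra_ds  # same as ConcatDataset([train_ds, extra_ds])
    assert isinstance(combined, ConcatDataset)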

__getitem__(key) tuple[torch.Tensor, torch.Tensor]#
Return type:

tuple[torch.Tensor, torch.Tensor]
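
Indexing returns an (image, label) pair, both torch tensors when to_pytorch=True. A sketch, assuming ds is the dataset from the constructor example and that integer keys are valid for it (key semantics may differ between GeoDataset subclasses):

    x, y = ds[0]             # chip and its label for the first window
    print(x.shape, x.dtype)  # with normalize=True, a float tensor scaled to [0, 1]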

__init__(scene: Scene, out_size: Optional[Union[int, tuple[int, int]]] = None, within_aoi: bool = True, transform: albumentations.core.transforms_interface.BasicTransform | None = None, transform_type: rastervision.pytorch_learner.dataset.transform.TransformType | None = None, normalize: bool = True, to_pytorch: bool = True, return_window: bool = False)[source]#

Constructor.

Parameters:
  • scene (Scene) – A Scene instance.

  • out_size (Optional[Union[int, tuple[int, int]]]) – Resize chips to this size before returning. Defaults to None.

  • within_aoi (bool) – If True and the scene has an AOI, only sample windows that lie fully within the AOI. If False, windows that only partially intersect the AOI are also allowed. Defaults to True.

  • transform (A.BasicTransform | None) – Albumentations transform to apply to the windows. Defaults to None. Albumentations transforms expect images of type uint8 (and, in some cases, other data types); the requirements for each transform are listed at https://albumentations.ai/docs/api_reference/augmentations/transforms/. If the imagery's data type does not match a transform's requirements, set a RasterTransformer that converts to uint8, such as MinMaxTransformer or StatsTransformer, on the RasterSource.

  • transform_type (rastervision.pytorch_learner.dataset.transform.TransformType | None) – Type of transform. Defaults to None.

  • normalize (bool) – If True, the sampled chips are normalized to [0, 1] based on their data type. Defaults to True.

  • to_pytorch (bool) – If True, the sampled chips and labels are converted to PyTorch tensors. Defaults to True.

  • return_window (bool) – Defaults to False.

append_resize_transform(transform: albumentations.core.transforms_interface.BasicTransform | None, out_size: tuple[int, int]) albumentations.augmentations.geometric.resize.Resize | albumentations.core.composition.Compose[source]#

Get transform to use for resizing windows to out_size.

Parameters:
  • transform (albumentations.core.transforms_interface.BasicTransform | None) –

  • out_size (tuple[int, int]) –

Return type:

albumentations.augmentations.geometric.resize.Resize | albumentations.core.composition.Compose
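
The signature and return type suggest the behavior: with transform=None a bare A.Resize is returned; otherwise the resize is appended to the given transform, yielding an A.Compose. A hedged sketch of an equivalent helper in plain Albumentations (an approximation, not the method's actual implementation):

    import albumentations as A

    def append_resize(transform, out_size):
        # Approximate stand-in for GeoDataset.append_resize_transform:
        # append an A.Resize so that chips come out at out_size.
        h, w = out_size
        resize = A.Resize(h, w)
        if transform is None:
            return resize
        return A.Compose([transform, resize])

    # e.g. append_resize(A.HorizontalFlip(), (256, 256)) returns an A.Compose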

classmethod from_uris(*args, **kwargs) Self[source]#
Return type:

Self