GeoDataset

- class GeoDataset
Bases: AlbumentationsDataset
Dataset that reads directly from a Scene (i.e. a raster source and a label source).
- __init__(scene: Scene, out_size: Optional[Union[int, tuple[int, int]]] = None, within_aoi: bool = True, transform: albumentations.core.transforms_interface.BasicTransform | None = None, transform_type: rastervision.pytorch_learner.dataset.transform.TransformType | None = None, normalize: bool = True, to_pytorch: bool = True, return_window: bool = False)
Constructor.
- Parameters:
scene (Scene) – A Scene instance.
out_size (Optional[Union[int, tuple[int, int]]]) – Resize chips to this size before returning.
within_aoi (bool) – If True and if the scene has an AOI, only sample windows that lie fully within the AOI. If False, windows only partially intersecting the AOI will also be allowed. Defaults to True.
transform (A.BasicTransform | None) – Albumentations transform to apply to the windows. Defaults to None. Every Albumentations transform accepts uint8 images, and some also accept other data types; the data type requirements are listed at https://albumentations.ai/docs/api_reference/augmentations/transforms/. If the imagery's data type does not match a transform's requirements, set a RasterTransformer that converts to uint8, such as MinMaxTransformer or StatsTransformer, on the RasterSource (see the sketch after this parameter list).
transform_type (rastervision.pytorch_learner.dataset.transform.TransformType | None) – Type of transform. Defaults to None.
normalize (bool) – If True, the sampled chips are normalized to [0, 1] based on their data type. Defaults to True.
to_pytorch (bool) – If True, the sampled chips and labels are converted to PyTorch tensors. Defaults to True.
return_window (bool) – Defaults to False.
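A minimal sketch of putting these pieces together: a Scene built from a raster source (with a MinMaxTransformer so the imagery is uint8 for Albumentations) and a label source, wrapped in a concrete GeoDataset subclass. The subclass, URIs, and window parameters used here (SemanticSegmentationSlidingWindowGeoDataset, image.tif, labels.tif, size, stride) are illustrative assumptions, not prescribed by this page:

    # Illustrative sketch; URIs and the chosen subclass are assumptions.
    import albumentations as A
    from rastervision.core.data import (
        ClassConfig, MinMaxTransformer, RasterioSource, Scene,
        SemanticSegmentationLabelSource)
    from rastervision.pytorch_learner import (
        SemanticSegmentationSlidingWindowGeoDataset)

    class_config = ClassConfig(names=['background', 'building'])

    # MinMaxTransformer converts the imagery to uint8, which Albumentations expects.
    raster_source = RasterioSource(
        'image.tif', raster_transformers=[MinMaxTransformer()])
    label_source = SemanticSegmentationLabelSource(
        RasterioSource('labels.tif'), class_config=class_config)
    scene = Scene(
        id='example_scene', raster_source=raster_source, label_source=label_source)

    ds = SemanticSegmentationSlidingWindowGeoDataset(
        scene,
        size=256,                           # window size read from the scene
        stride=256,                         # step between consecutive windows
        out_size=256,                       # resize chips to this size before returning
        transform=A.HorizontalFlip(p=0.5))  # Albumentations transform applied to each chip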
Methods

__init__(scene[, out_size, within_aoi, ...]) – Constructor.
append_resize_transform(transform, out_size) – Get transform to use for resizing windows to out_size.
from_uris(*args, **kwargs)

- __add__(other: Dataset[T_co]) → ConcatDataset[T_co]
- Parameters:
other (Dataset[T_co]) –
- Return type:
ConcatDataset[T_co]
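Since a GeoDataset is a PyTorch Dataset, two datasets can be combined with +. A small sketch, reusing the hypothetical ds from above plus a second dataset ds_other built the same way:

    # '+' yields a torch ConcatDataset that draws chips from both datasets.
    combined = ds + ds_other
    assert len(combined) == len(ds) + len(ds_other)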
- __getitem__(key) → tuple[torch.Tensor, torch.Tensor]
- Return type:
tuple[torch.Tensor, torch.Tensor]
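A usage sketch of indexing, again using the hypothetical ds from above with the defaults normalize=True and to_pytorch=True:

    x, y = ds[0]             # chip and label as PyTorch tensors
    print(x.shape, x.dtype)  # e.g. torch.Size([3, 256, 256]) torch.float32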