SlidingWindowGeoDataset#
- class SlidingWindowGeoDataset[source]#
Bases:
GeoDataset
Read the scene left-to-right, top-to-bottom, using a sliding window.
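The left-to-right, top-to-bottom traversal can be sketched with a small self-contained helper. This is an illustrative sketch only, not the library's implementation: the function name, the `(ymin, xmin, ymax, xmax)` window tuples, and the padding computation are assumptions based on the documented behavior (when `padding` is `None` it is chosen so the windows cover the whole extent).

```python
def sliding_windows(extent_h, extent_w, size, stride, padding=None):
    """Enumerate (ymin, xmin, ymax, xmax) windows left-to-right, top-to-bottom.

    Sketch of the documented windowing scheme. If padding is None, it is
    computed so that the windows cover the entire extent, mirroring the
    documented default.
    """
    def pad_needed(extent, size, stride):
        # padding required so the last window still covers the extent edge
        if extent <= size:
            return size - extent
        n = -(-(extent - size) // stride)  # ceil((extent - size) / stride)
        return n * stride + size - extent

    if padding is None:
        pad_h = pad_needed(extent_h, size, stride)
        pad_w = pad_needed(extent_w, size, stride)
    else:
        pad_h = pad_w = padding

    windows = []
    for ymin in range(0, extent_h + pad_h - size + 1, stride):
        for xmin in range(0, extent_w + pad_w - size + 1, stride):
            windows.append((ymin, xmin, ymin + size, xmin + size))
    return windows
```

For a 4x4 extent with `size=2, stride=2`, no padding is needed and four non-overlapping windows tile the scene; for a 5x5 extent, one pixel of padding is added so the final row and column of windows still fit.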
- __init__(scene: Scene, size: Union[int, tuple[int, int]], stride: Union[int, tuple[int, int]], out_size: Optional[Union[int, tuple[int, int]]] = None, padding: Optional[Union[int, tuple[int, int]]] = None, pad_direction: Literal['both', 'start', 'end'] = 'end', within_aoi: bool = True, transform: albumentations.core.transforms_interface.BasicTransform | None = None, transform_type: rastervision.pytorch_learner.dataset.transform.TransformType | None = None, normalize: bool = True, to_pytorch: bool = True, return_window: bool = False)[source]#
Constructor.
- Parameters:
scene (Scene) – A Scene object.
size (Union[int, tuple[int, int]]) – Size of the windows.
stride (Union[int, tuple[int, int]]) – Step size between windows.
out_size (Optional[Union[int, tuple[int, int]]]) – Resize chips to this size before returning. Defaults to None.
padding (Optional[Union[int, tuple[int, int]]]) – How many pixels the windows are allowed to overflow the sides of the raster source. If None, will be automatically calculated such that the windows cover the entire extent. Defaults to None.
pad_direction (Literal['both', 'start', 'end']) – If 'end', only pad ymax and xmax (bottom and right). If 'start', only pad ymin and xmin (top and left). If 'both', pad all sides. Has no effect if padding is zero. Defaults to 'end'.
within_aoi (bool) – If True and the scene has an AOI, only sample windows that lie fully within the AOI. If False, windows only partially intersecting the AOI will also be allowed. Defaults to True.
transform (albumentations.core.transforms_interface.BasicTransform | None) – Albumentations transform to apply to the windows. Defaults to None. Each transform in Albumentations takes images of type uint8, and sometimes other data types. The data type requirements can be seen at https://albumentations.ai/docs/api_reference/augmentations/transforms/. If there is a mismatch between the data type of the imagery and the transform requirements, a RasterTransformer that converts to uint8, such as MinMaxTransformer or StatsTransformer, should be set on the RasterSource.
transform_type (rastervision.pytorch_learner.dataset.transform.TransformType | None) – Type of transform. Defaults to None.
normalize (bool) – If True, the sampled chips are normalized to [0, 1] based on their data type. Defaults to True.
to_pytorch (bool) – If True, the sampled chips and labels are converted to PyTorch tensors. Defaults to True.
return_window (bool) – If True, __getitem__ also returns the window coordinates used to generate the image. Defaults to False.
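The effect of `pad_direction` on a single dimension can be illustrated with a short helper. This is a sketch of the documented semantics, not library code; the function name and the convention that `'both'` applies the full padding amount to each side are assumptions.

```python
def apply_padding(extent, padding, pad_direction='end'):
    """Return the (start, stop) coordinate range of one padded dimension.

    The unpadded dimension spans [0, extent). 'end' pads only the high
    side (ymax/xmax), 'start' only the low side (ymin/xmin), and 'both'
    pads both sides. Sketch of the documented semantics only; whether
    'both' applies the full amount per side is an assumption here.
    """
    if pad_direction == 'end':
        return 0, extent + padding
    if pad_direction == 'start':
        return -padding, extent
    if pad_direction == 'both':
        return -padding, extent + padding
    raise ValueError(f'invalid pad_direction: {pad_direction!r}')
```

With `padding=0` all three modes yield the same `(0, extent)` range, matching the note that `pad_direction` has no effect when padding is zero.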
Methods
__init__(scene, size, stride[, out_size, ...]) – Constructor.
append_resize_transform(transform, out_size) – Get transform to use for resizing windows to out_size.
from_uris(*args, **kwargs) – Pre-compute windows.
- __add__(other: Dataset[T_co]) → ConcatDataset[T_co]#
- Parameters:
other (Dataset[T_co]) –
- Return type:
ConcatDataset[T_co]
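`__add__` returns a PyTorch `ConcatDataset`, which chains two datasets end-to-end so indices run through the first dataset and then continue into the second. A minimal pure-Python sketch of that behavior (the class name is illustrative; this is not torch's implementation):

```python
class MiniConcatDataset:
    """Sketch of torch.utils.data.ConcatDataset semantics: indices run
    through the first dataset, then continue into the next."""

    def __init__(self, datasets):
        self.datasets = list(datasets)

    def __len__(self):
        return sum(len(d) for d in self.datasets)

    def __getitem__(self, idx):
        for d in self.datasets:
            if idx < len(d):
                return d[idx]
            idx -= len(d)  # fall through to the next dataset
        raise IndexError('index out of range')


# usage: concatenating two list-backed "datasets"
combined = MiniConcatDataset([[1, 2], [3, 4, 5]])
```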
- __init__(scene: Scene, size: Union[int, tuple[int, int]], stride: Union[int, tuple[int, int]], out_size: Optional[Union[int, tuple[int, int]]] = None, padding: Optional[Union[int, tuple[int, int]]] = None, pad_direction: Literal['both', 'start', 'end'] = 'end', within_aoi: bool = True, transform: albumentations.core.transforms_interface.BasicTransform | None = None, transform_type: rastervision.pytorch_learner.dataset.transform.TransformType | None = None, normalize: bool = True, to_pytorch: bool = True, return_window: bool = False)[source]#
Constructor.
- Parameters:
scene (Scene) – A Scene object.
size (Union[int, tuple[int, int]]) – Size of the windows.
stride (Union[int, tuple[int, int]]) – Step size between windows.
out_size (Optional[Union[int, tuple[int, int]]]) – Resize chips to this size before returning. Defaults to None.
padding (Optional[Union[int, tuple[int, int]]]) – How many pixels the windows are allowed to overflow the sides of the raster source. If None, will be automatically calculated such that the windows cover the entire extent. Defaults to None.
pad_direction (Literal['both', 'start', 'end']) – If 'end', only pad ymax and xmax (bottom and right). If 'start', only pad ymin and xmin (top and left). If 'both', pad all sides. Has no effect if padding is zero. Defaults to 'end'.
within_aoi (bool) – If True and the scene has an AOI, only sample windows that lie fully within the AOI. If False, windows only partially intersecting the AOI will also be allowed. Defaults to True.
transform (albumentations.core.transforms_interface.BasicTransform | None) – Albumentations transform to apply to the windows. Defaults to None. Each transform in Albumentations takes images of type uint8, and sometimes other data types. The data type requirements can be seen at https://albumentations.ai/docs/api_reference/augmentations/transforms/. If there is a mismatch between the data type of the imagery and the transform requirements, a RasterTransformer that converts to uint8, such as MinMaxTransformer or StatsTransformer, should be set on the RasterSource.
transform_type (rastervision.pytorch_learner.dataset.transform.TransformType | None) – Type of transform. Defaults to None.
normalize (bool) – If True, the sampled chips are normalized to [0, 1] based on their data type. Defaults to True.
to_pytorch (bool) – If True, the sampled chips and labels are converted to PyTorch tensors. Defaults to True.
return_window (bool) – If True, __getitem__ also returns the window coordinates used to generate the image. Defaults to False.
- append_resize_transform(transform: albumentations.core.transforms_interface.BasicTransform | None, out_size: tuple[int, int]) → albumentations.augmentations.geometric.resize.Resize | albumentations.core.composition.Compose#
Get transform to use for resizing windows to out_size.
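The behavior suggested by this signature can be sketched as: if no transform is given, return a bare resize; otherwise compose the given transform with a trailing resize. The sketch below uses stand-in stub classes rather than albumentations (`Resize` and `Compose` here are hypothetical stand-ins, and the composition order is an assumption):

```python
class Resize:
    """Stand-in for albumentations' Resize transform."""

    def __init__(self, height, width):
        self.height, self.width = height, width


class Compose:
    """Stand-in for albumentations' Compose container."""

    def __init__(self, transforms):
        self.transforms = transforms


def append_resize_transform(transform, out_size):
    """Return a transform that resizes windows to out_size (height, width).

    Sketch of the documented behavior: append a resize step after any
    existing transform. Not the library implementation.
    """
    resize = Resize(*out_size)
    if transform is None:
        return resize
    return Compose([transform, resize])
```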
- classmethod from_uris(*args, **kwargs) → Self#
- Return type:
Self