Configuration API Reference

This page documents the API used to configure the various components of Raster Vision pipelines. It is the lower-level companion to the discussion of Pipelines and Commands.
rastervision.pipeline

class rastervision.pipeline.pipeline_config.PipelineConfig
Base class for configuring Pipelines. This should be subclassed to configure new Pipelines.
rastervision.core

StatsAnalyzerConfig

ClassConfig

class rastervision.core.data.ClassConfig
Configures the class names that are being predicted.

Attributes:
- colors (Optional[List[Union[List, str]]]): Colors used to visualize classes. Can be color strings accepted by matplotlib or RGB tuples. If None, a random color will be auto-generated for each class. Defaults to None.
- null_class (Optional[str]): Optional name of the class in names to use as the null class. This is used in semantic segmentation to represent the label for imagery pixels that are NODATA or that are missing a label. If None, and this Config is part of a SemanticSegmentationConfig, a null class will be added automatically. Defaults to None.
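For reference, a minimal sketch of constructing a ClassConfig. The names field is implied by the description above but not listed there; the class names and colors are illustrative:

    from rastervision.core.data import ClassConfig

    class_config = ClassConfig(
        names=['building', 'background'],
        colors=['red', 'black'],
        # Reuse an existing class as the null class for semantic segmentation.
        null_class='background')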
DatasetConfig

class rastervision.core.data.DatasetConfig
Config for a Dataset comprising the scenes for train, valid, and test splits.

Attributes:
- class_config (ClassConfig)
- train_scenes (List[SceneConfig])
- validation_scenes (List[SceneConfig])
- test_scenes (List[SceneConfig]): Defaults to [].
- img_channels (Optional[PositiveInt]): The number of channels of the images. Defaults to None.
SceneConfig

class rastervision.core.data.SceneConfig
Config for a Scene which comprises the raster data and labels for an AOI.

Attributes:
- raster_source (RasterSourceConfig)
- label_source (LabelSourceConfig)
- label_store (Optional[LabelStoreConfig]): Defaults to None.
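A minimal sketch tying these together for a semantic segmentation dataset. The URIs are placeholders, and the SceneConfig id field is assumed (it is not shown in the attribute list above):

    from rastervision.core.data import (ClassConfig, DatasetConfig, SceneConfig,
                                        RasterioSourceConfig,
                                        SemanticSegmentationLabelSourceConfig)

    class_config = ClassConfig(names=['building', 'background'])

    def make_scene(scene_id: str, image_uri: str, label_uri: str) -> SceneConfig:
        # Labels are read from a raster whose pixel values are class ids.
        return SceneConfig(
            id=scene_id,
            raster_source=RasterioSourceConfig(uris=[image_uri]),
            label_source=SemanticSegmentationLabelSourceConfig(
                raster_source=RasterioSourceConfig(uris=[label_uri])))

    dataset = DatasetConfig(
        class_config=class_config,
        train_scenes=[make_scene('train-1', 'train.tif', 'train-labels.tif')],
        validation_scenes=[make_scene('val-1', 'val.tif', 'val-labels.tif')])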
ChipClassificationLabelSourceConfig

class rastervision.core.data.label_source.ChipClassificationLabelSourceConfig
Config for a source of labels for chip classification. This can be provided explicitly as a grid of cells, or a grid of cells can be inferred from arbitrary polygons.

Attributes:
- vector_source (VectorSourceConfig)
- ioa_thresh (Optional[float]): Minimum IOA of a polygon and cell for that polygon to be a candidate for setting the class_id. Defaults to None.
- use_intersection_over_cell (bool): If True, then use the area of the cell as the denominator in the IOA. Otherwise, use the area of the polygon. Defaults to False.
- pick_min_class_id (bool): If True, the class_id for a cell is the minimum class_id of the boxes in that cell. Otherwise, pick the class_id of the box covering the greatest area. Defaults to False.
- background_class_id (Optional[int]): If not None, class_id to use as the background class, i.e. the one that is used when a window contains no boxes. If not set, empty windows have None set as their class_id, which is considered a null value. Defaults to None.
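A hedged sketch of inferring cells from building polygons. The GeoJSONVectorSourceConfig uri field and the infer_cells flag are assumed to exist in your version (they are not part of the attribute lists on this page); the thresholds and path are illustrative:

    from rastervision.core.data import (ChipClassificationLabelSourceConfig,
                                        GeoJSONVectorSourceConfig)

    label_source = ChipClassificationLabelSourceConfig(
        vector_source=GeoJSONVectorSourceConfig(
            uri='s3://bucket/labels/buildings.geojson', default_class_id=0),
        ioa_thresh=0.5,
        use_intersection_over_cell=False,
        pick_min_class_id=True,
        background_class_id=1,
        infer_cells=True)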
SemanticSegmentationLabelSourceConfig

class rastervision.core.data.label_source.SemanticSegmentationLabelSourceConfig
Config for a read-only label source for semantic segmentation.

Attributes:
- raster_source (Union[RasterSourceConfig, RasterizedSourceConfig]): The labels in the form of rasters.
- rgb_class_config (Optional[ClassConfig]): If set, will infer the class_ids for the labels using the colors field. This assumes the labels are stored as RGB rasters. Defaults to None.
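A minimal sketch for labels stored as an RGB raster whose colors match the ClassConfig colors (the URI and colors are illustrative):

    from rastervision.core.data import (ClassConfig, RasterioSourceConfig,
                                        SemanticSegmentationLabelSourceConfig)

    class_config = ClassConfig(
        names=['car', 'building', 'background'],
        colors=['#ffff00', '#0000ff', '#000000'])

    label_source = SemanticSegmentationLabelSourceConfig(
        rgb_class_config=class_config,
        raster_source=RasterioSourceConfig(uris=['scene-1-label-rgb.tif']))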
ObjectDetectionLabelSourceConfig

ChipClassificationGeoJSONStoreConfig

class rastervision.core.data.label_store.ChipClassificationGeoJSONStoreConfig
Config for storage for chip classification predictions.
PolygonVectorOutputConfig

BuildingVectorOutputConfig

class rastervision.core.data.label_store.BuildingVectorOutputConfig
Config for vectorized semantic segmentation predictions. Intended to break up clusters of buildings.

Attributes:
- uri (Optional[str]): URI of vector output. If None, and this Config is part of a SceneConfig and RVPipeline, this field will be auto-generated. Defaults to None.
- denoise (int): Radius of the structural element used to remove high-frequency signals from the image. Defaults to 0.
- min_aspect_ratio (float): Ratio between length and height (or height and length) of anything that can be considered to be a cluster of buildings. The goal is to distinguish between rows of buildings and (say) a single building. Defaults to 1.618.
- min_area (float): Minimum area of anything that can be considered to be a cluster of buildings. The goal is to distinguish between buildings and artifacts. Defaults to 0.0.
- element_width_factor (float): Width of the structural element used to break building clusters, as a fraction of the width of the cluster. Defaults to 0.5.
SemanticSegmentationLabelStoreConfig

class rastervision.core.data.label_store.SemanticSegmentationLabelStoreConfig
Config for storage for semantic segmentation predictions. Stores the class raster as a GeoTIFF, and can optionally vectorize predictions and store them in GeoJSON files.

Attributes:
- uri (Optional[str]): URI of file with predictions. If None, and this Config is part of a SceneConfig inside an RVPipelineConfig, this field will be auto-generated. Defaults to None.
- vector_output (List[VectorOutputConfig]): Defaults to [].
- rgb (bool): If True, save prediction class_ids in RGB format using the colors in class_config. Defaults to False.
- smooth_output (bool): If True, expects labels to be continuous values representing class scores and stores both scores and discrete labels. Defaults to False.
- smooth_as_uint8 (bool): If True, stores smooth scores as uint8, resulting in loss of precision but reduced file size. Only used if smooth_output=True. Defaults to False.
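A hedged sketch that writes an RGB class raster plus vectorized building predictions. The class_id field on the vector output configs is assumed (it belongs to the base VectorOutputConfig, which is not expanded above):

    from rastervision.core.data import (SemanticSegmentationLabelStoreConfig,
                                        PolygonVectorOutputConfig,
                                        BuildingVectorOutputConfig)

    label_store = SemanticSegmentationLabelStoreConfig(
        rgb=True,
        vector_output=[
            PolygonVectorOutputConfig(class_id=0),
            BuildingVectorOutputConfig(class_id=1, min_aspect_ratio=1.618)])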
ObjectDetectionGeoJSONStoreConfig

class rastervision.core.data.label_store.ObjectDetectionGeoJSONStoreConfig
Config for storage for object detection predictions.
RasterioSourceConfig

class rastervision.core.data.raster_source.RasterioSourceConfig

Attributes:
- channel_order (Optional[List[int]]): The sequence of channel indices to use when reading imagery. Defaults to None.
- transformers (List[RasterTransformerConfig]): Defaults to [].
- extent_crop (Optional[CropOffsets]): Relative offsets (skip_top, skip_left, skip_bottom, skip_right) for cropping the extent of the raster source. Useful for splitting a scene into different dataset splits. E.g. if you want to use the top 80% of the image for training and the bottom 20% for validation, you can pass extent_crop=CropOffsets(skip_bottom=0.20) to the raster source in the training scene and extent_crop=CropOffsets(skip_top=0.80) to the raster source in the validation scene. Defaults to None, i.e. no cropping.
- uris (List[str]): List of image URIs that comprise the imagery for a scene. The format of each file can be any that can be read by Rasterio/GDAL. If more than one URI is provided, a VRT will be created to mosaic together the individual images.
- allow_streaming (bool): Allow streaming of assets rather than always downloading. Defaults to False.
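A sketch that mosaics two images, reads only the first three bands, and reserves the bottom 20% of the extent for validation. The import location of CropOffsets is assumed, as is the re-export of StatsTransformerConfig from rastervision.core.data; URIs are placeholders:

    from rastervision.core.data import RasterioSourceConfig, StatsTransformerConfig
    from rastervision.core.data.raster_source import CropOffsets  # path assumed

    train_raster_source = RasterioSourceConfig(
        uris=['s3://bucket/imagery/a.tif', 's3://bucket/imagery/b.tif'],
        channel_order=[0, 1, 2],
        transformers=[StatsTransformerConfig()],
        extent_crop=CropOffsets(skip_bottom=0.2))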
RasterizerConfig

RasterizedSourceConfig

MultiRasterSourceConfig

class rastervision.core.data.raster_source.MultiRasterSourceConfig

Attributes:
- channel_order (Optional[List[int]]): The sequence of channel indices to use when reading imagery. Defaults to None.
- transformers (List[RasterTransformerConfig]): Defaults to [].
- extent_crop (Optional[CropOffsets]): Relative offsets (skip_top, skip_left, skip_bottom, skip_right) for cropping the extent of the raster source. Useful for splitting a scene into different dataset splits, as described under RasterioSourceConfig above. Defaults to None, i.e. no cropping.
- raster_sources (Sequence[SubRasterSourceConfig]): List of SubRasterSourceConfigs to combine.
- force_same_dtype (bool): Force all subchips to be of the same dtype as the first subchip. Defaults to False.
- crs_source (ConstrainedIntValue): Use the crs_transformer of the raster source at this index. Defaults to 0.
StatsTransformerConfig

CastTransformerConfig

NanTransformerConfig

ReclassTransformer

class rastervision.core.data.raster_transformer.ReclassTransformer(mapping: Dict[int, int])
Reclassifies a label raster.
VectorSourceConfig

class rastervision.core.data.vector_source.VectorSourceConfig

Attributes:
- default_class_id (Optional[int]): The default class_id to use if the class cannot be inferred using other mechanisms. If a feature has an inferred class_id of None, then it will be deleted.
- class_id_to_filter (Optional[Dict]): Map from class_id to a JSON filter used to infer missing class_ids. Each key should be a class_id, and its value should be a boolean expression which is run against the property field of each feature. This allows matching different features to different class_ids based on their properties. The expression schema is the one described at https://docs.mapbox.com/mapbox-gl-js/style-spec/other/#other-filter. Defaults to None.
- line_bufs: Map from class_id to the number of pixels to buffer LineStrings by. This is useful, for example, for buffering lines representing roads so that their width roughly matches the width of roads in the imagery. If None, a default buffer value of 1 is used. If the buffer value for a class_id is None, then no buffering will be performed and the LineString or Point won't get converted to a Polygon. Not converting to a Polygon is incompatible with the currently available LabelSources, but may be useful in the future. Defaults to None.
GeoJSONVectorSourceConfig

class rastervision.core.data.vector_source.GeoJSONVectorSourceConfig

Attributes:
- default_class_id, class_id_to_filter, line_bufs: Same as for VectorSourceConfig above.
- point_bufs: Same as line_bufs, but used for buffering Points into Polygons. Defaults to None.
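A hedged sketch that rasterizes road centerlines into a label raster, buffering the road class by 15 pixels. The uri field on GeoJSONVectorSourceConfig is assumed (it is not shown in the attribute list above), and the class ids and path are illustrative:

    from rastervision.core.data import (GeoJSONVectorSourceConfig,
                                        RasterizedSourceConfig, RasterizerConfig,
                                        SemanticSegmentationLabelSourceConfig)

    vector_source = GeoJSONVectorSourceConfig(
        uri='s3://bucket/labels/roads.geojson',
        default_class_id=0,
        line_bufs={0: 15})

    label_source = SemanticSegmentationLabelSourceConfig(
        raster_source=RasterizedSourceConfig(
            vector_source=vector_source,
            rasterizer_config=RasterizerConfig(background_class_id=1)))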
ChipClassificationEvaluatorConfig

SemanticSegmentationEvaluatorConfig

ObjectDetectionEvaluatorConfig

ChipClassificationConfig

class rastervision.core.rv_pipeline.ChipClassificationConfig

Attributes:
- rv_config (Optional[dict]): Used to store a serialized RVConfig so the pipeline can run in a remote environment with the local RVConfig. This should not be set explicitly by users; it is only used by the runner when running a remote pipeline. Defaults to None.
- plugin_versions: Used to store a mapping of plugin module paths to the latest version number. This should not be set explicitly by users; it is set automatically when serializing and saving the config to disk. Defaults to None.
- dataset (DatasetConfig): Dataset containing train, validation, and optional test scenes.
- backend (BackendConfig): Backend to use for interfacing with the ML library.
- evaluators (List[EvaluatorConfig]): Evaluators to run during the eval command. If the list is empty, the default evaluator is added. Defaults to [].
- analyzers (List[AnalyzerConfig]): Analyzers to run during the analyze command. A StatsAnalyzer will be added automatically if any scenes have a RasterTransformer. Defaults to [].
- chip_nodata_threshold (ConstrainedFloatValue): Discard chips where the proportion of NODATA values is greater than or equal to this value. Might result in false positives if there are many legitimate black pixels in the chip. Use with caution. Defaults to 1.
- analyze_uri (Optional[str]): URI for the output of analyze. If None, will be auto-generated. Defaults to None.
- chip_uri (Optional[str]): URI for the output of chip. If None, will be auto-generated. Defaults to None.
- train_uri (Optional[str]): URI for the output of train. If None, will be auto-generated. Defaults to None.
- predict_uri (Optional[str]): URI for the output of predict. If None, will be auto-generated. Defaults to None.
- eval_uri (Optional[str]): URI for the output of eval. If None, will be auto-generated. Defaults to None.
SemanticSegmentationWindowMethod

SemanticSegmentationChipOptions

class rastervision.core.rv_pipeline.SemanticSegmentationChipOptions
Chipping options for semantic segmentation.

Attributes:
- window_method (enum): Window method to use for chipping. Defaults to SemanticSegmentationWindowMethod.sliding.
- target_class_ids (Optional[List[int]]): List of class ids considered as targets (i.e. those to prioritize when creating chips), which is only used in conjunction with the target_count_threshold and negative_survival_prob options. Applies to the random_sample window method. Defaults to None.
- negative_survival_prob: Probability that a negative chip (one that does not meet target_count_threshold for the target classes) will be kept. Applies to the random_sample window method. Defaults to 1.0.
- chips_per_scene (int): Number of chips to generate per scene. Applies to the random_sample window method. Defaults to 1000.
- target_count_threshold (int): Minimum number of pixels covering target_classes that a chip must have. Applies to the random_sample window method. Defaults to 1000.
SemanticSegmentationConfig

class rastervision.core.rv_pipeline.SemanticSegmentationConfig

Attributes:
- rv_config, plugin_versions, dataset, backend, evaluators, analyzers, chip_nodata_threshold, analyze_uri, chip_uri, train_uri, predict_uri, eval_uri: Same as for ChipClassificationConfig above.
- bundle_uri (Optional[str]): URI for the output of bundle. If None, will be auto-generated. Defaults to None.
- source_bundle_uri (Optional[str]): If provided, the model will be loaded from this bundle for the train stage. Useful for fine-tuning. Defaults to None.
- chip_options (SemanticSegmentationChipOptions): Defaults to SemanticSegmentationChipOptions(window_method=SemanticSegmentationWindowMethod.sliding, target_class_ids=None, negative_survival_prob=1.0, chips_per_scene=1000, target_count_threshold=1000, stride=None).
- predict_options (SemanticSegmentationPredictOptions): Defaults to SemanticSegmentationPredictOptions(stride=None).
- channel_display_groups: Groups of image channels to display together as a subplot when plotting the data and predictions. Can be a list or tuple of groups (e.g. [(0, 1, 2), (3,)]) or a dict containing title-to-group mappings (e.g. {"RGB": [0, 1, 2], "IR": [3]}), where each group is a list or tuple of channel indices and title is a string that will be used as the title of the subplot for that group. Defaults to None.
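Putting the pieces together, a hedged sketch of a complete semantic segmentation pipeline config in the style of the Raster Vision examples. The SceneConfig id and the root_uri, train_chip_sz, and predict_chip_sz fields are assumed to be inherited from the base configs (they are not listed on this page); all URIs and sizes are placeholders:

    from rastervision.core.rv_pipeline import (
        SemanticSegmentationConfig, SemanticSegmentationChipOptions,
        SemanticSegmentationWindowMethod)
    from rastervision.core.data import (
        ClassConfig, DatasetConfig, SceneConfig, RasterioSourceConfig,
        SemanticSegmentationLabelSourceConfig)
    from rastervision.pytorch_backend import PyTorchSemanticSegmentationConfig
    from rastervision.pytorch_learner import (
        Backbone, SolverConfig, SemanticSegmentationModelConfig,
        SemanticSegmentationImageDataConfig)


    def get_config(runner) -> SemanticSegmentationConfig:
        class_config = ClassConfig(names=['building', 'background'])
        scene = SceneConfig(
            id='scene-1',  # id field assumed
            raster_source=RasterioSourceConfig(uris=['image.tif']),
            label_source=SemanticSegmentationLabelSourceConfig(
                raster_source=RasterioSourceConfig(uris=['labels.tif'])))
        dataset = DatasetConfig(
            class_config=class_config,
            train_scenes=[scene],
            validation_scenes=[scene])
        backend = PyTorchSemanticSegmentationConfig(
            data=SemanticSegmentationImageDataConfig(img_sz=256),
            model=SemanticSegmentationModelConfig(backbone=Backbone.resnet50),
            solver=SolverConfig(lr=1e-4, num_epochs=10, batch_sz=8))
        return SemanticSegmentationConfig(
            root_uri='/tmp/rv-output',  # root_uri assumed from PipelineConfig
            dataset=dataset,
            backend=backend,
            train_chip_sz=256,          # chip sizes assumed from the base pipeline config
            predict_chip_sz=256,
            chip_options=SemanticSegmentationChipOptions(
                window_method=SemanticSegmentationWindowMethod.sliding))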
ObjectDetectionWindowMethod

ObjectDetectionChipOptions

class rastervision.core.rv_pipeline.ObjectDetectionChipOptions

Attributes:
- neg_ratio (float): The ratio of negative chips (those containing no bounding boxes) to positive chips. This can be useful if the statistics of the background are different in positive chips. For example, in car detection, the positive chips will always contain roads, but no examples of rooftops, since cars tend not to be near rooftops. Defaults to 1.0.
- ioa_thresh (float): When a box is partially outside of a training chip, it is not clear if (a clipped version of) the box should be included in the chip. If the IOA (intersection over area) of the box with the chip is greater than ioa_thresh, it is included in the chip. Defaults to 0.8.
- window_method (enum): Defaults to ObjectDetectionWindowMethod.chip.
ObjectDetectionPredictOptions

class rastervision.core.rv_pipeline.ObjectDetectionPredictOptions

Attributes:
- merge_thresh (float): If predicted boxes have an IOA (intersection over area) greater than merge_thresh, then they are merged into a single box during postprocessing. This is needed since the sliding window approach results in some false duplicates. Defaults to 0.5.
ObjectDetectionConfig

class rastervision.core.rv_pipeline.ObjectDetectionConfig

Attributes:
- rv_config, plugin_versions, dataset, backend, evaluators, analyzers, chip_nodata_threshold, analyze_uri, chip_uri, train_uri, predict_uri, eval_uri: Same as for ChipClassificationConfig above.
- bundle_uri (Optional[str]): URI for the output of bundle. If None, will be auto-generated. Defaults to None.
- source_bundle_uri (Optional[str]): If provided, the model will be loaded from this bundle for the train stage. Useful for fine-tuning. Defaults to None.
- chip_options (ObjectDetectionChipOptions): Defaults to ObjectDetectionChipOptions(neg_ratio=1.0, ioa_thresh=0.8, window_method=ObjectDetectionWindowMethod.chip, label_buffer=None).
- predict_options (ObjectDetectionPredictOptions): Defaults to ObjectDetectionPredictOptions(merge_thresh=0.5, score_thresh=0.5).
rastervision.pytorch_backend

PyTorchChipClassificationConfig

PyTorchSemanticSegmentationConfig
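These backend configs wire together the data, model, and solver configs from rastervision.pytorch_learner (documented below). A hedged sketch, assuming the backend accepts data, model, and solver fields:

    from rastervision.pytorch_backend import PyTorchChipClassificationConfig
    from rastervision.pytorch_learner import (Backbone, SolverConfig,
                                              ClassificationModelConfig,
                                              ClassificationImageDataConfig)

    backend = PyTorchChipClassificationConfig(
        data=ClassificationImageDataConfig(img_sz=200),
        model=ClassificationModelConfig(backbone=Backbone.resnet50),
        solver=SolverConfig(lr=1e-4, num_epochs=20, batch_sz=32, one_cycle=True))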
rastervision.pytorch_learner

Backbone

class rastervision.pytorch_learner.Backbone
An enumeration of the torchvision.models backbones. Each member's value is the corresponding model name as a string. Members: alexnet, densenet121, densenet161, densenet169, densenet201, googlenet, inception_v3, mnasnet0_5, mnasnet0_75, mnasnet1_0, mnasnet1_3, mobilenet_v2, resnet18, resnet34, resnet50, resnet101, resnet152, resnext50_32x4d, resnext101_32x8d, shufflenet_v2_x0_5, shufflenet_v2_x1_0, shufflenet_v2_x1_5, shufflenet_v2_x2_0, squeezenet1_0, squeezenet1_1, vgg11, vgg11_bn, vgg13, vgg13_bn, vgg16, vgg16_bn, vgg19, vgg19_bn, wide_resnet50_2, wide_resnet101_2. The class also provides an int_to_str helper (Backbone.int_to_str).
SolverConfig

class rastervision.pytorch_learner.SolverConfig
Config related to the solver (optimizer).

Attributes:
- lr (PositiveFloat): Learning rate. Defaults to 0.0001.
- num_epochs (PositiveInt): Number of epochs (i.e. sweeps through the whole training set). Defaults to 10.
- test_num_epochs (PositiveInt): Number of epochs to use in test mode. Defaults to 2.
- test_batch_sz (PositiveInt): Batch size to use in test mode. Defaults to 4.
- overfit_num_steps (PositiveInt): Number of optimizer steps to use in overfit mode. Defaults to 1.
- sync_interval (PositiveInt): The interval in epochs for each sync to the cloud. Defaults to 1.
- batch_sz (PositiveInt): Batch size. Defaults to 32.
- one_cycle (bool): If True, use a triangular LR scheduler with a single cycle across all epochs, with the start and end LR being lr/10 and the peak being lr. Defaults to True.
- multi_stage (List): List of epoch indices at which to divide the LR by 10. Defaults to [].
- class_loss_weights: Class weights for weighted loss. Defaults to None.
- ignore_last_class (Union[bool, Literal['force']]): Whether to ignore the last class during training. Defaults to False.
- external_loss_def (Optional[ExternalModuleConfig]): If specified, the loss will be built from the definition from this external source, using Torch Hub. Defaults to None.
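A sketch of a typical solver with class weights and a multi-stage LR schedule (the values are illustrative):

    from rastervision.pytorch_learner import SolverConfig

    solver = SolverConfig(
        lr=1e-4,
        num_epochs=30,
        batch_sz=16,
        one_cycle=False,
        multi_stage=[15, 25],           # divide the LR by 10 at epochs 15 and 25
        class_loss_weights=[1.0, 4.0])  # up-weight the rarer class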
ExternalModuleConfig

class rastervision.pytorch_learner.ExternalModuleConfig
Config describing an object to be loaded via Torch Hub.

Attributes:
- uri (Optional[ConstrainedStrValue]): Local URI of a zip file, local URI of a directory, or remote URI of a zip file. Defaults to None.
- github_repo (Optional[ConstrainedStrValue]): <repo-owner>/<repo-name>[:tag]. Defaults to None.
- name (Optional[ConstrainedStrValue]): Name of the folder in which to extract/copy the definition files. Defaults to None.
- entrypoint (ConstrainedStrValue): Name of a callable present in hubconf.py. See the docs for torch.hub for details.
- entrypoint_kwargs (dict): Keyword args to pass to the entrypoint. Must be serializable. Defaults to {}.
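A hedged sketch of loading a custom loss function via Torch Hub and attaching it to a solver. The repository, entrypoint name, and kwargs below are hypothetical placeholders, not a real published module:

    from rastervision.pytorch_learner import ExternalModuleConfig, SolverConfig

    solver = SolverConfig(
        lr=1e-4,
        num_epochs=10,
        batch_sz=8,
        external_loss_def=ExternalModuleConfig(
            github_repo='some-user/some-loss-repo:0.1',  # hypothetical repo
            name='custom_loss',
            entrypoint='build_loss',                     # callable in hubconf.py
            entrypoint_kwargs={'gamma': 2.0}))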
DataConfig

class rastervision.pytorch_learner.DataConfig
Config related to the dataset for training and testing.

Attributes:
- class_colors (Union[List[str], List[List], NoneType]): Colors used to display classes. Can be color 3-tuples in list form. Defaults to None.
- img_sz (PositiveInt): Length of a side of each image in pixels. This is the size to transform it to during training, not the size in the raw dataset. Defaults to 256.
- train_sz (Optional[int]): If set, the number of training images to use. If fewer images exist, then an exception will be raised. Defaults to None.
- augmentors (List[str]): Names of albumentations augmentors to use for training batches. Choices include: ['Blur', 'RandomRotate90', 'HorizontalFlip', 'VerticalFlip', 'GaussianBlur', 'GaussNoise', 'RGBShift', 'ToGray']. Alternatively, a custom transform can be provided via the aug_transform option. Defaults to ['RandomRotate90', 'HorizontalFlip', 'VerticalFlip'].
- base_transform (Optional[dict]): An Albumentations transform serialized as a dict that will be applied to all datasets: training, validation, and test. This transformation is in addition to the resizing due to img_sz. This is useful for, for example, applying the same normalization to all datasets. Defaults to None.
- aug_transform (Optional[dict]): An Albumentations transform serialized as a dict that will be applied as data augmentation to the training dataset. This transform is applied before base_transform. If provided, the augmentors option is ignored. Defaults to None.
- plot_options (Optional[PlotOptions]): Options to control plotting. Defaults to a PlotOptions whose transform is a serialized MinMaxNormalize Albumentations transform (see PlotOptions below).
ImageDataConfig

class rastervision.pytorch_learner.ImageDataConfig
Config related to the dataset for training and testing.

Attributes:
- class_colors, img_sz, train_sz, augmentors, base_transform, aug_transform, plot_options: Same as for DataConfig above.
- preview_batch_limit (Optional[int]): Optional limit on the number of items in the preview plots produced during training. Defaults to None.
- uri: URI of the dataset. This can be a zip file, a list of zip files, or a directory which contains a set of zip files. Defaults to None.
- group_uris: This can be set instead of uri in order to specify groups of chips. Each element in the list is expected to be an object of the same form accepted by the uri field. The purpose of separating chips into groups is to be able to use the group_train_sz field. Defaults to None.
- group_train_sz: If group_uris is set, this can be used to specify the number of chips to use per group. Only applies to training chips. This can either be a single value that will be used for all groups or a list of values (one for each group). Defaults to None.
- group_train_sz_rel (Union[ConstrainedFloatValue, List[ConstrainedFloatValue], NoneType]): Relative version of group_train_sz. Must be a float in [0, 1]. If group_uris is set, this can be used to specify the proportion of the total chips in each group to use per group. Only applies to training chips. This can either be a single value that will be used for all groups or a list of values (one for each group). Defaults to None.
GeoDataConfig

class rastervision.pytorch_learner.GeoDataConfig

Attributes:
- class_colors, img_sz, train_sz, augmentors, base_transform, aug_transform, plot_options: Same as for DataConfig above.
- preview_batch_limit (Optional[int]): Optional limit on the number of items in the preview plots produced during training. Defaults to None.
- scene_dataset (DatasetConfig)
GeoDataWindowConfig

class rastervision.pytorch_learner.GeoDataWindowConfig

Attributes:
- method (enum): Defaults to GeoDataWindowMethod.sliding.
- size (Union[PositiveInt, Tuple[PositiveInt, PositiveInt]]): If method = sliding, this is the size of the sliding window. If method = random, this is the size that all the windows are resized to before they are returned.
- stride (Union[PositiveInt, Tuple[PositiveInt, PositiveInt], NoneType]): Stride of the sliding window. Only used if method = sliding. Defaults to None.
- padding (Union[ConstrainedIntValue, Tuple[ConstrainedIntValue, ConstrainedIntValue], NoneType]): How many pixels the windows are allowed to overflow the edges of the raster source. Defaults to None.
- size_lims (Optional[Tuple[PositiveInt, PositiveInt]]): [min, max) interval from which window sizes will be uniformly randomly sampled. The upper limit is exclusive. To fix the size to a constant value, use size_lims = (sz, sz + 1). Only used if method = random. Must specify either size_lims or h_lims and w_lims, but not both. Defaults to None.
- h_lims (Optional[Tuple[PositiveInt, PositiveInt]]): [min, max] interval from which window heights will be uniformly randomly sampled. Only used if method = random. Defaults to None.
- w_lims (Optional[Tuple[PositiveInt, PositiveInt]]): [min, max] interval from which window widths will be uniformly randomly sampled. Only used if method = random. Defaults to None.
- max_windows (ConstrainedIntValue): Max allowed reads from a GeoDataset. Only used if method = random. Defaults to 10000.
- max_sample_attempts (PositiveInt): Max attempts when trying to find a window within the AOI of a scene. Only used if method = random and the scene has aoi_polygons specified. Defaults to 100.
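A sketch of the two window methods (sizes and strides are illustrative). These configs are typically attached to a GeoDataConfig; the exact field used to pass them (e.g. window_opts) is assumed and may differ by version:

    from rastervision.pytorch_learner import GeoDataWindowConfig, GeoDataWindowMethod

    # Sliding windows, e.g. for validation-style chipping.
    sliding_opts = GeoDataWindowConfig(
        method=GeoDataWindowMethod.sliding, size=256, stride=256)

    # Randomly sampled windows for training, with the size fixed at 256.
    random_opts = GeoDataWindowConfig(
        method=GeoDataWindowMethod.random,
        size=256,
        size_lims=(256, 257),
        max_windows=500)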
PlotOptions

class rastervision.pytorch_learner.PlotOptions
Config related to plotting.

Attributes:
- transform (Optional[dict]): An Albumentations transform serialized as a dict that will be applied to each image before it is plotted. Mainly useful for undoing any data transformation that you do not want included in the plot, such as normalization. The default value will shift and scale the image so the values range from 0.0 to 1.0, which is the expected range for the plotting function. This default is useful for cases where the values after normalization are close to zero, which makes the plot difficult to see. Defaults to a serialized MinMaxNormalize transform ({'__class_fullname__': 'rastervision.pytorch_learner.utils.utils.MinMaxNormalize', 'always_apply': False, 'p': 1.0, 'min_val': 0.0, 'max_val': 1.0, 'dtype': 5}).
ModelConfig

class rastervision.pytorch_learner.ModelConfig
Config related to models.

Attributes:
- backbone (enum): The torchvision.models backbone to use. Defaults to Backbone.resnet18.
- pretrained (bool): If True, use ImageNet weights. If False, use random initialization. Defaults to True.
- init_weights (Optional[str]): URI of PyTorch model weights used to initialize the model. If set, this supersedes the pretrained option. Defaults to None.
- external_def (Optional[ExternalModuleConfig]): If specified, the model will be built from the definition from this external source, using Torch Hub. Defaults to None.
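A minimal sketch (the weights URI in the comment is a placeholder). In practice you will usually instantiate one of the task-specific subclasses below, which share these fields:

    from rastervision.pytorch_learner import Backbone, ModelConfig

    model = ModelConfig(
        backbone=Backbone.resnet34,
        pretrained=True,
        init_weights=None)  # e.g. 's3://bucket/weights/model.pth' to override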
ClassificationDataFormat

ClassificationImageDataConfig

class rastervision.pytorch_learner.ClassificationImageDataConfig

Attributes:
- class_colors, img_sz, train_sz, augmentors, base_transform, aug_transform, plot_options, preview_batch_limit, uri, group_uris, group_train_sz, group_train_sz_rel: Same as for ImageDataConfig above.
- data_format (enum): Defaults to ClassificationDataFormat.image_folder.
ClassificationGeoDataConfig

class rastervision.pytorch_learner.ClassificationGeoDataConfig

Attributes:
- class_colors, img_sz, train_sz, augmentors, base_transform, aug_transform, plot_options, preview_batch_limit, scene_dataset: Same as for GeoDataConfig above.
ClassificationModelConfig

class rastervision.pytorch_learner.ClassificationModelConfig

Attributes:
- backbone, pretrained, init_weights, external_def: Same as for ModelConfig above (backbone defaults to Backbone.resnet18).
ClassificationLearnerConfig

class rastervision.pytorch_learner.ClassificationLearnerConfig

Attributes:
- model (ClassificationModelConfig)
- solver (SolverConfig)
- data (Union[ClassificationImageDataConfig, ClassificationGeoDataConfig])
- predict_mode (bool): If True, skips training, loads the model, and does the final eval. Defaults to False.
- test_mode (bool): If True, uses test_num_epochs, test_batch_sz, truncated datasets with only a single batch, an image_sz that is cut in half, and num_workers = 0. This is useful for testing that code runs correctly on CPU without multithreading before running the full job on GPU. Defaults to False.
- overfit_mode (bool): If True, uses half the image size, and instead of doing epoch-based training, optimizes the model using a single batch repeatedly for overfit_num_steps number of steps. Defaults to False.
- eval_train (bool): If True, runs the final evaluation on the training set (in addition to the test set). Useful for debugging. Defaults to False.
- save_model_bundle (bool): If True, saves a model bundle at the end of training, which is a zip file containing the model and this LearnerConfig, and which can be used to make predictions on new images at a later time. Defaults to True.
SemanticSegmentationDataFormat

SemanticSegmentationDataConfig

class rastervision.pytorch_learner.SemanticSegmentationDataConfig

Attributes:
- img_channels (PositiveInt): The number of channels of the training images. Defaults to 3.
- channel_display_groups: Groups of image channels to display together as a subplot when plotting the data and predictions. Can be a list or tuple of groups (e.g. [(0, 1, 2), (3,)]) or a dict containing title-to-group mappings (e.g. {"RGB": [0, 1, 2], "IR": [3]}), where each group is a list or tuple of channel indices and title is a string that will be used as the title of the subplot for that group. Defaults to None.
SemanticSegmentationImageDataConfig

class rastervision.pytorch_learner.SemanticSegmentationImageDataConfig

Attributes:
- class_colors, img_sz, train_sz, augmentors, base_transform, aug_transform, plot_options, preview_batch_limit, uri, group_uris, group_train_sz, group_train_sz_rel: Same as for ImageDataConfig above.
- data_format (enum): Defaults to SemanticSegmentationDataFormat.default.
- img_channels, channel_display_groups: Same as for SemanticSegmentationDataConfig above.
SemanticSegmentationGeoDataConfig

class rastervision.pytorch_learner.SemanticSegmentationGeoDataConfig

Attributes:
- class_colors, img_sz, train_sz, augmentors, base_transform, aug_transform, plot_options, preview_batch_limit, scene_dataset: Same as for GeoDataConfig above.
- img_channels, channel_display_groups: Same as for SemanticSegmentationDataConfig above.
SemanticSegmentationModelConfig

class rastervision.pytorch_learner.SemanticSegmentationModelConfig

Attributes:
- backbone (enum): The torchvision.models backbone to use. At the moment only resnet50 or resnet101 will work. Defaults to Backbone.resnet50.
- pretrained (bool): If True, use ImageNet weights. If False, use random initialization. Defaults to True.
- init_weights (Optional[str]): URI of PyTorch model weights used to initialize the model. If set, this supersedes the pretrained option. Defaults to None.
- external_def (Optional[ExternalModuleConfig]): If specified, the model will be built from the definition from this external source, using Torch Hub. Defaults to None.
SemanticSegmentationLearnerConfig

class rastervision.pytorch_learner.SemanticSegmentationLearnerConfig

Attributes:
- model (SemanticSegmentationModelConfig)
- solver (SolverConfig)
- data (Union[SemanticSegmentationImageDataConfig, SemanticSegmentationGeoDataConfig])
- predict_mode, test_mode, overfit_mode, eval_train, save_model_bundle: Same as for ClassificationLearnerConfig above.
ObjectDetectionDataFormat

ObjectDetectionDataConfig

class rastervision.pytorch_learner.ObjectDetectionDataConfig
ObjectDetectionImageDataConfig

class rastervision.pytorch_learner.ObjectDetectionImageDataConfig

Attributes:
- class_colors, img_sz, train_sz, augmentors, base_transform, aug_transform, plot_options, preview_batch_limit, uri, group_uris, group_train_sz, group_train_sz_rel: Same as for ImageDataConfig above.
- data_format (enum): Defaults to ObjectDetectionDataFormat.coco.
ObjectDetectionGeoDataConfig

class rastervision.pytorch_learner.ObjectDetectionGeoDataConfig

Attributes:
- class_colors, img_sz, train_sz, augmentors, base_transform, aug_transform, plot_options, preview_batch_limit, scene_dataset: Same as for GeoDataConfig above.
ObjectDetectionGeoDataWindowConfig

class rastervision.pytorch_learner.ObjectDetectionGeoDataWindowConfig

Attributes:
- method, size, stride, padding, size_lims, h_lims, w_lims, max_windows, max_sample_attempts: Same as for GeoDataWindowConfig above.
- ioa_thresh: When a box is partially outside of a training chip, it is not clear if (a clipped version of) the box should be included in the chip. If the IOA (intersection over area) of the box with the chip is greater than ioa_thresh, it is included in the chip. Defaults to 0.8.
- clip (bool): Clip bounding boxes to window limits when retrieving labels for a window. Defaults to False.
- neg_ratio (float): The ratio of negative chips (those containing no bounding boxes) to positive chips. This can be useful if the statistics of the background are different in positive chips. For example, in car detection, the positive chips will always contain roads, but no examples of rooftops, since cars tend not to be near rooftops. Defaults to 1.0.
ObjectDetectionModelConfig

class rastervision.pytorch_learner.ObjectDetectionModelConfig

Attributes:
- backbone (enum): The torchvision.models backbone to use, which must be in the resnet* family. Defaults to Backbone.resnet50.
- pretrained (bool): If True, use ImageNet weights. If False, use random initialization. Defaults to True.
- init_weights (Optional[str]): URI of PyTorch model weights used to initialize the model. If set, this supersedes the pretrained option. Defaults to None.
- external_def (Optional[ExternalModuleConfig]): If specified, the model will be built from the definition from this external source, using Torch Hub. Defaults to None.
ObjectDetectionLearnerConfig

class rastervision.pytorch_learner.ObjectDetectionLearnerConfig

Attributes:
- model (ObjectDetectionModelConfig)
- solver (SolverConfig)
- data (Union[ObjectDetectionImageDataConfig, ObjectDetectionGeoDataConfig])
- predict_mode, test_mode, overfit_mode, eval_train, save_model_bundle: Same as for ClassificationLearnerConfig above.