SemanticSegmentationGeoDataConfig#

Note

All Configs are derived from rastervision.pipeline.config.Config, which itself is a pydantic Model.

pydantic model SemanticSegmentationGeoDataConfig[source]#

Configure semantic segmentation GeoDatasets.

See rastervision.pytorch_learner.dataset.semantic_segmentation_dataset.
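A minimal construction sketch is shown below. The URIs, scene IDs, and class names are hypothetical placeholders, and the import paths follow the Raster Vision 0.30-style package layout, which may differ in other versions. The top-level class_config is omitted here since, judging by the get_class_config_from_dataset_if_needed validator, it can be derived from scene_dataset.

from rastervision.core.data import (
    ClassConfig, DatasetConfig, SceneConfig, RasterioSourceConfig,
    SemanticSegmentationLabelSourceConfig)
from rastervision.core.rv_pipeline import WindowSamplingConfig
from rastervision.pytorch_learner import SemanticSegmentationGeoDataConfig

class_config = ClassConfig(names=['background', 'building'])
scene = SceneConfig(
    id='scene_1',
    raster_source=RasterioSourceConfig(uris=['s3://bucket/image.tif']),
    label_source=SemanticSegmentationLabelSourceConfig(
        raster_source=RasterioSourceConfig(uris=['s3://bucket/labels.tif'])))
data_cfg = SemanticSegmentationGeoDataConfig(
    scene_dataset=DatasetConfig(
        class_config=class_config,
        train_scenes=[scene],
        validation_scenes=[scene]),
    # sliding 256x256 windows with no overlap
    sampling=WindowSamplingConfig(size=256, stride=256),
    img_sz=256,
    num_workers=4)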

JSON schema:
{
   "title": "SemanticSegmentationGeoDataConfig",
   "description": "Configure semantic segmentation :class:`GeoDatasets <.GeoDataset>`.\n\nSee\n:mod:`rastervision.pytorch_learner.dataset.semantic_segmentation_dataset`.",
   "type": "object",
   "properties": {
      "class_config": {
         "anyOf": [
            {
               "$ref": "#/$defs/ClassConfig"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "Class config."
      },
      "img_channels": {
         "anyOf": [
            {
               "exclusiveMinimum": 0,
               "type": "integer"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The number of channels of the training images.",
         "title": "Img Channels"
      },
      "img_sz": {
         "default": 256,
         "description": "Length of a side of each image in pixels. This is the size to transform it to during training, not the size in the raw dataset.",
         "exclusiveMinimum": 0,
         "title": "Img Sz",
         "type": "integer"
      },
      "train_sz": {
         "anyOf": [
            {
               "type": "integer"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "If set, the number of training images to use. If fewer images exist, then an exception will be raised.",
         "title": "Train Sz"
      },
      "train_sz_rel": {
         "anyOf": [
            {
               "type": "number"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "If set, the proportion of training images to use.",
         "title": "Train Sz Rel"
      },
      "num_workers": {
         "default": 4,
         "description": "Number of workers to use when DataLoader makes batches.",
         "title": "Num Workers",
         "type": "integer"
      },
      "augmentors": {
         "default": [
            "RandomRotate90",
            "HorizontalFlip",
            "VerticalFlip"
         ],
         "description": "Names of albumentations augmentors to use for training batches. Choices include: ['Blur', 'RandomRotate90', 'HorizontalFlip', 'VerticalFlip', 'GaussianBlur', 'GaussNoise', 'RGBShift', 'ToGray']. Alternatively, a custom transform can be provided via the aug_transform option.",
         "items": {
            "type": "string"
         },
         "title": "Augmentors",
         "type": "array"
      },
      "base_transform": {
         "anyOf": [
            {
               "type": "object"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "An Albumentations transform serialized as a dict that will be applied to all datasets: training, validation, and test. This transformation is in addition to the resizing due to img_sz. This is useful for, for example, applying the same normalization to all datasets.",
         "title": "Base Transform"
      },
      "aug_transform": {
         "anyOf": [
            {
               "type": "object"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "An Albumentations transform serialized as a dict that will be applied as data augmentation to the training dataset. This transform is applied before base_transform. If provided, the augmentors option is ignored.",
         "title": "Aug Transform"
      },
      "plot_options": {
         "anyOf": [
            {
               "$ref": "#/$defs/PlotOptions"
            },
            {
               "type": "null"
            }
         ],
         "default": {
            "transform": {
               "__version__": "1.4.14",
               "transform": {
                  "__class_fullname__": "rastervision.pytorch_learner.utils.utils.MinMaxNormalize",
                  "dtype": 5,
                  "max_val": 1.0,
                  "min_val": 0.0,
                  "p": 1.0
               }
            },
            "channel_display_groups": null,
            "type_hint": "plot_options"
         },
         "description": "Options to control plotting."
      },
      "preview_batch_limit": {
         "anyOf": [
            {
               "type": "integer"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "Optional limit on the number of items in the preview plots produced during training.",
         "title": "Preview Batch Limit"
      },
      "type_hint": {
         "const": "semantic_segmentation_geo_data",
         "default": "semantic_segmentation_geo_data",
         "enum": [
            "semantic_segmentation_geo_data"
         ],
         "title": "Type Hint",
         "type": "string"
      },
      "scene_dataset": {
         "anyOf": [
            {
               "$ref": "#/$defs/DatasetConfig"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": ""
      },
      "sampling": {
         "anyOf": [
            {
               "$ref": "#/$defs/WindowSamplingConfig"
            },
            {
               "additionalProperties": {
                  "$ref": "#/$defs/WindowSamplingConfig"
               },
               "type": "object"
            }
         ],
         "default": {},
         "description": "Window sampling config.",
         "title": "Sampling"
      }
   },
   "$defs": {
      "ClassConfig": {
         "additionalProperties": false,
         "description": "Configure class information for a machine learning task.",
         "properties": {
            "names": {
               "description": "Names of classes. The i-th class in this list will have class ID = i.",
               "items": {
                  "type": "string"
               },
               "title": "Names",
               "type": "array"
            },
            "colors": {
               "anyOf": [
                  {
                     "items": {
                        "anyOf": [
                           {
                              "type": "string"
                           },
                           {
                              "items": {},
                              "type": "array"
                           }
                        ]
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Colors used to visualize classes. Can be color strings accepted by matplotlib or RGB tuples. If None, a random color will be auto-generated for each class.",
               "title": "Colors"
            },
            "null_class": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Optional name of class in `names` to use as the null class. This is used in semantic segmentation to represent the label for imagery pixels that are NODATA or that are missing a label. If None and the class names include \"null\", it will automatically be used as the null class. If None, and this Config is part of a SemanticSegmentationConfig, a null class will be added automatically.",
               "title": "Null Class"
            },
            "type_hint": {
               "const": "class_config",
               "default": "class_config",
               "enum": [
                  "class_config"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "required": [
            "names"
         ],
         "title": "ClassConfig",
         "type": "object"
      },
      "DatasetConfig": {
         "additionalProperties": false,
         "description": "Configure train, validation, and test splits for a dataset.",
         "properties": {
            "class_config": {
               "$ref": "#/$defs/ClassConfig"
            },
            "train_scenes": {
               "items": {
                  "$ref": "#/$defs/SceneConfig"
               },
               "title": "Train Scenes",
               "type": "array"
            },
            "validation_scenes": {
               "items": {
                  "$ref": "#/$defs/SceneConfig"
               },
               "title": "Validation Scenes",
               "type": "array"
            },
            "test_scenes": {
               "default": [],
               "items": {
                  "$ref": "#/$defs/SceneConfig"
               },
               "title": "Test Scenes",
               "type": "array"
            },
            "scene_groups": {
               "additionalProperties": {
                  "items": {
                     "type": "string"
                  },
                  "type": "array",
                  "uniqueItems": true
               },
               "default": {},
               "description": "Groupings of scenes. Should be a dict of the form: {<group-name>: set(scene_id_1, scene_id_2, ...)}. Three groups are added by default: \"train_scenes\", \"validation_scenes\", and \"test_scenes\"",
               "title": "Scene Groups",
               "type": "object"
            },
            "type_hint": {
               "const": "dataset",
               "default": "dataset",
               "enum": [
                  "dataset"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "required": [
            "class_config",
            "train_scenes",
            "validation_scenes"
         ],
         "title": "DatasetConfig",
         "type": "object"
      },
      "LabelSourceConfig": {
         "additionalProperties": false,
         "description": "Configure a :class:`.LabelSource`.",
         "properties": {
            "type_hint": {
               "const": "label_source",
               "default": "label_source",
               "enum": [
                  "label_source"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "title": "LabelSourceConfig",
         "type": "object"
      },
      "LabelStoreConfig": {
         "additionalProperties": false,
         "description": "Configure a :class:`.LabelStore`.",
         "properties": {
            "type_hint": {
               "const": "label_store",
               "default": "label_store",
               "enum": [
                  "label_store"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "title": "LabelStoreConfig",
         "type": "object"
      },
      "PlotOptions": {
         "additionalProperties": false,
         "description": "Config related to plotting.",
         "properties": {
            "transform": {
               "anyOf": [
                  {
                     "type": "object"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": {
                  "__version__": "1.4.14",
                  "transform": {
                     "__class_fullname__": "rastervision.pytorch_learner.utils.utils.MinMaxNormalize",
                     "dtype": 5,
                     "max_val": 1.0,
                     "min_val": 0.0,
                     "p": 1.0
                  }
               },
               "description": "An Albumentations transform serialized as a dict that will be applied to each image before it is plotted. Mainly useful for undoing any data transformation that you do not want included in the plot, such as normalization. The default value will shift and scale the image so the values range from 0.0 to 1.0 which is the expected range for the plotting function. This default is useful for cases where the values after normalization are close to zero which makes the plot difficult to see.",
               "title": "Transform"
            },
            "channel_display_groups": {
               "anyOf": [
                  {
                     "additionalProperties": {
                        "items": {
                           "minimum": 0,
                           "type": "integer"
                        },
                        "type": "array"
                     },
                     "type": "object"
                  },
                  {
                     "items": {
                        "items": {
                           "minimum": 0,
                           "type": "integer"
                        },
                        "type": "array"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Groups of image channels to display together as a subplot when plotting the data and predictions. Can be a list or tuple of groups (e.g. [(0, 1, 2), (3,)]) or a dict containing title-to-group mappings (e.g. {\"RGB\": [0, 1, 2], \"IR\": [3]}), where each group is a list or tuple of channel indices and title is a string that will be used as the title of the subplot for that group.",
               "title": "Channel Display Groups"
            },
            "type_hint": {
               "const": "plot_options",
               "default": "plot_options",
               "enum": [
                  "plot_options"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "title": "PlotOptions",
         "type": "object"
      },
      "RasterSourceConfig": {
         "additionalProperties": false,
         "description": "Configure a :class:`.RasterSource`.",
         "properties": {
            "channel_order": {
               "anyOf": [
                  {
                     "items": {
                        "type": "integer"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The sequence of channel indices to use when reading imagery.",
               "title": "Channel Order"
            },
            "transformers": {
               "default": [],
               "items": {
                  "$ref": "#/$defs/RasterTransformerConfig"
               },
               "title": "Transformers",
               "type": "array"
            },
            "bbox": {
               "anyOf": [
                  {
                     "maxItems": 4,
                     "minItems": 4,
                     "prefixItems": [
                        {
                           "type": "integer"
                        },
                        {
                           "type": "integer"
                        },
                        {
                           "type": "integer"
                        },
                        {
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "User-specified bbox in pixel coords in the form (ymin, xmin, ymax, xmax). Useful for cropping the raster source so that only part of the raster is read from.",
               "title": "Bbox"
            },
            "type_hint": {
               "const": "raster_source",
               "default": "raster_source",
               "enum": [
                  "raster_source"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "title": "RasterSourceConfig",
         "type": "object"
      },
      "RasterTransformerConfig": {
         "additionalProperties": false,
         "description": "Configure a :class:`.RasterTransformer`.",
         "properties": {
            "type_hint": {
               "const": "raster_transformer",
               "default": "raster_transformer",
               "enum": [
                  "raster_transformer"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "title": "RasterTransformerConfig",
         "type": "object"
      },
      "SceneConfig": {
         "additionalProperties": false,
         "description": "Configure a :class:`.Scene` comprising raster data & labels for an AOI.\n    ",
         "properties": {
            "id": {
               "title": "Id",
               "type": "string"
            },
            "raster_source": {
               "$ref": "#/$defs/RasterSourceConfig"
            },
            "label_source": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/LabelSourceConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null
            },
            "label_store": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/LabelStoreConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null
            },
            "aoi_uris": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "List of URIs of GeoJSON files that define the AOIs for the scene. Each polygon defines an AOI which is a piece of the scene that is assumed to be fully labeled and usable for training or validation. The AOIs are assumed to be in EPSG:4326 coordinates.",
               "title": "Aoi Uris"
            },
            "type_hint": {
               "const": "scene",
               "default": "scene",
               "enum": [
                  "scene"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "required": [
            "id",
            "raster_source"
         ],
         "title": "SceneConfig",
         "type": "object"
      },
      "WindowSamplingConfig": {
         "additionalProperties": false,
         "description": "Configure the sampling of chip windows.",
         "properties": {
            "method": {
               "allOf": [
                  {
                     "$ref": "#/$defs/WindowSamplingMethod"
                  }
               ],
               "default": "sliding",
               "description": ""
            },
            "size": {
               "anyOf": [
                  {
                     "exclusiveMinimum": 0,
                     "type": "integer"
                  },
                  {
                     "maxItems": 2,
                     "minItems": 2,
                     "prefixItems": [
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        },
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  }
               ],
               "description": "If method = sliding, this is the size of sliding window. If method = random, this is the size that all the windows are resized to before they are returned. If method = random and neither size_lims nor h_lims and w_lims have been specified, then size_lims is set to (size, size + 1).",
               "title": "Size"
            },
            "stride": {
               "anyOf": [
                  {
                     "exclusiveMinimum": 0,
                     "type": "integer"
                  },
                  {
                     "maxItems": 2,
                     "minItems": 2,
                     "prefixItems": [
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        },
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Stride of sliding window. Only used if method = sliding.",
               "title": "Stride"
            },
            "padding": {
               "anyOf": [
                  {
                     "minimum": 0,
                     "type": "integer"
                  },
                  {
                     "maxItems": 2,
                     "minItems": 2,
                     "prefixItems": [
                        {
                           "minimum": 0,
                           "type": "integer"
                        },
                        {
                           "minimum": 0,
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "How many pixels are windows allowed to overflow the edges of the raster source.",
               "title": "Padding"
            },
            "pad_direction": {
               "default": "end",
               "description": "If \"end\", only pad ymax and xmax (bottom and right). If \"start\", only pad ymin and xmin (top and left). If \"both\", pad all sides. Has no effect if padding is zero. Defaults to \"end\".",
               "enum": [
                  "both",
                  "start",
                  "end"
               ],
               "title": "Pad Direction",
               "type": "string"
            },
            "size_lims": {
               "anyOf": [
                  {
                     "maxItems": 2,
                     "minItems": 2,
                     "prefixItems": [
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        },
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "[min, max) interval from which window sizes will be uniformly randomly sampled. The upper limit is exclusive. To fix the size to a constant value, use size_lims = (sz, sz + 1). Only used if method = random. Specify either size_lims or h_lims and w_lims, but not both. If neither size_lims nor h_lims and w_lims have been specified, then this will be set to (size, size + 1).",
               "title": "Size Lims"
            },
            "h_lims": {
               "anyOf": [
                  {
                     "maxItems": 2,
                     "minItems": 2,
                     "prefixItems": [
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        },
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "[min, max] interval from which window heights will be uniformly randomly sampled. Only used if method = random.",
               "title": "H Lims"
            },
            "w_lims": {
               "anyOf": [
                  {
                     "maxItems": 2,
                     "minItems": 2,
                     "prefixItems": [
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        },
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "[min, max] interval from which window widths will be uniformly randomly sampled. Only used if method = random.",
               "title": "W Lims"
            },
            "max_windows": {
               "default": 10000,
               "description": "Max number of windows to sample. Only used if method = random.",
               "minimum": 0,
               "title": "Max Windows",
               "type": "integer"
            },
            "max_sample_attempts": {
               "default": 100,
               "description": "Max attempts when trying to find a window within the AOI of a scene. Only used if method = random and the scene has aoi_polygons specified.",
               "exclusiveMinimum": 0,
               "title": "Max Sample Attempts",
               "type": "integer"
            },
            "efficient_aoi_sampling": {
               "default": true,
               "description": "If the scene has AOIs, sampling windows at random anywhere in the extent and then checking if they fall within any of the AOIs can be very inefficient. This flag enables the use of an alternate algorithm that only samples window locations inside the AOIs. Only used if method = random and the scene has aoi_polygons specified. Defaults to True",
               "title": "Efficient Aoi Sampling",
               "type": "boolean"
            },
            "within_aoi": {
               "default": true,
               "description": "If True and if the scene has an AOI, only sample windows that lie fully within the AOI. If False, windows only partially intersecting the AOI will also be allowed.",
               "title": "Within Aoi",
               "type": "boolean"
            },
            "type_hint": {
               "const": "window_sampling",
               "default": "window_sampling",
               "enum": [
                  "window_sampling"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "required": [
            "size"
         ],
         "title": "WindowSamplingConfig",
         "type": "object"
      },
      "WindowSamplingMethod": {
         "description": "Enum for window sampling methods.\n\nAttributes:\n    sliding: Sliding windows.\n    random: Randomly sampled windows.",
         "enum": [
            "sliding",
            "random"
         ],
         "title": "WindowSamplingMethod",
         "type": "string"
      }
   },
   "additionalProperties": false
}

Config:
  • extra: str = forbid

  • validate_assignment: bool = True

field aug_transform: dict | None = None#

An Albumentations transform serialized as a dict that will be applied as data augmentation to the training dataset. This transform is applied before base_transform. If provided, the augmentors option is ignored.

Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling
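For illustration, an Albumentations pipeline can be serialized into the expected dict form with A.to_dict. This is a sketch; the particular augmentations chosen here are arbitrary.

import albumentations as A

aug_tf = A.Compose([
    A.RandomRotate90(),
    A.HorizontalFlip(p=0.5),
    A.GaussNoise(p=0.3),
])
# Serialized dict suitable for the aug_transform field; when set,
# the augmentors field is ignored.
aug_transform_dict = A.to_dict(aug_tf)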
field augmentors: list[str] = ['RandomRotate90', 'HorizontalFlip', 'VerticalFlip']#

Names of albumentations augmentors to use for training batches. Choices include: ['Blur', 'RandomRotate90', 'HorizontalFlip', 'VerticalFlip', 'GaussianBlur', 'GaussNoise', 'RGBShift', 'ToGray']. Alternatively, a custom transform can be provided via the aug_transform option.

Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_augmentors

  • validate_plot_options

  • validate_sampling

field base_transform: dict | None = None#

An Albumentations transform serialized as a dict that will be applied to all datasets: training, validation, and test. This transformation is in addition to the resizing due to img_sz. This is useful for, for example, applying the same normalization to all datasets.

Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling
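As a sketch, a normalization that should apply to the training, validation, and test splits alike can be serialized the same way. The mean/std values here are the common ImageNet statistics, used purely as an example.

import albumentations as A

base_tf = A.Normalize(
    mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225))
base_transform_dict = A.to_dict(base_tf)  # value for base_transform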
field class_config: ClassConfig | None = None#

Class config.

Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling

field img_channels: PosInt | None = None#

The number of channels of the training images.

Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling

field img_sz: PosInt = 256#

Length of a side of each image in pixels. This is the size to transform it to during training, not the size in the raw dataset.

Constraints:
  • gt = 0

Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling

field num_workers: int = 4#

Number of workers to use when DataLoader makes batches.

Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling

field plot_options: PlotOptions | None = PlotOptions(transform={'__version__': '1.4.14', 'transform': {'__class_fullname__': 'rastervision.pytorch_learner.utils.utils.MinMaxNormalize', 'p': 1.0, 'min_val': 0.0, 'max_val': 1.0, 'dtype': 5}}, channel_display_groups=None)#

Options to control plotting.

Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling
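For example, a 4-band image could be displayed as an RGB subplot plus an IR subplot. This is a sketch; the channel indices are hypothetical.

from rastervision.pytorch_learner import PlotOptions

plot_opts = PlotOptions(
    channel_display_groups={'RGB': [0, 1, 2], 'IR': [3]})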

field preview_batch_limit: int | None = None#

Optional limit on the number of items in the preview plots produced during training.

Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling

field sampling: WindowSamplingConfig | dict[str, WindowSamplingConfig] = {}#

Window sampling config.

Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling
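Two sampling sketches, one sliding and one random, based on the field descriptions in the schema above. The import path assumes the 0.30-style layout and may vary across versions.

from rastervision.core.rv_pipeline import (
    WindowSamplingConfig, WindowSamplingMethod)

# Sliding 256x256 windows with 50% overlap between neighbors.
sliding = WindowSamplingConfig(size=256, stride=128)

# Randomly placed windows with side lengths drawn uniformly from
# [200, 300); each sampled window is resized to 256x256.
random_sampling = WindowSamplingConfig(
    method=WindowSamplingMethod.random,
    size=256,
    size_lims=(200, 300),
    max_windows=2000)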

field scene_dataset: DatasetConfig | None = None#
Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling

field train_sz: int | None = None#

If set, the number of training images to use. If fewer images exist, then an exception will be raised.

Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling

field train_sz_rel: float | None = None#

If set, the proportion of training images to use.

Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling

field type_hint: Literal['semantic_segmentation_geo_data'] = 'semantic_segmentation_geo_data'#
Validated by:
  • get_class_config_from_dataset_if_needed

  • validate_plot_options

  • validate_sampling

build(tmp_dir: str | None = None, for_chipping: bool = False) → tuple[torch.utils.data.dataset.Dataset, torch.utils.data.dataset.Dataset, torch.utils.data.dataset.Dataset]#

Build an instance of the corresponding type of object using this config.

For example, BackendConfig will build a Backend object. The arguments to this method will vary depending on the type of Config.

Parameters:
  • tmp_dir (str | None) –

  • for_chipping (bool) –

Return type:

tuple[torch.utils.data.dataset.Dataset, torch.utils.data.dataset.Dataset, torch.utils.data.dataset.Dataset]
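Continuing the construction sketch from the top of the page (the tmp_dir path is a placeholder):

train_ds, valid_ds, test_ds = data_cfg.build(tmp_dir='/tmp/rv')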

build_dataset(split: Literal['train', 'valid', 'test'], tmp_dir: str | None = None) → Dataset#

Build and return dataset for a single split.

Parameters:
  • split (Literal['train', 'valid', 'test']) –

  • tmp_dir (str | None) –

Return type:

Dataset
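For a single split, again using the data_cfg sketch from above:

train_ds = data_cfg.build_dataset('train')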

build_scenes(scene_configs: Iterable[SceneConfig], tmp_dir: str | None = None) → list[rastervision.core.data.scene.Scene]#

Build training, validation, and test scenes.

Parameters:
  • scene_configs (Iterable[SceneConfig]) –

  • tmp_dir (str | None) –

Return type:

list[rastervision.core.data.scene.Scene]

classmethod deserialize(inp: str | dict | Config) → Self#

Deserialize Config from a JSON file or dict, upgrading if possible.

If inp is already a Config, it is returned as is.

Parameters:

inp (str | dict | Config) – a URI to a JSON file or a dict.

Return type:

Self

classmethod from_dict(cfg_dict: dict) → Self#

Deserialize Config from a dict.

Parameters:

cfg_dict (dict) – Dict to deserialize.

Return type:

Self

classmethod from_file(uri: str) → Self#

Deserialize Config from a JSON file, upgrading if possible.

Parameters:

uri (str) – URI to load from.

Return type:

Self

get_bbox_params() → albumentations.core.bbox_utils.BboxParams | None#

Returns BboxParams used by albumentations for data augmentation.

Return type:

albumentations.core.bbox_utils.BboxParams | None

validator get_class_config_from_dataset_if_needed  »  all fields#
Return type:

Self

get_custom_albumentations_transforms() → list[dict]#

Returns all custom transforms found in this config.

This should return all serialized albumentations transforms with a 'lambda_transforms_path' field contained in this config or in any of its members, no matter how deeply nested.

The purpose is to make it easier to adjust their paths all at once while saving to or loading from a bundle.

Return type:

list[dict]

get_data_transforms() → tuple[albumentations.core.transforms_interface.BasicTransform, albumentations.core.transforms_interface.BasicTransform]#

Get albumentations transform objects for data augmentation.

Returns a 2-tuple of a “base” transform and an augmentation transform. The base transform comprises a resize transform based on img_sz followed by the transform specified in base_transform. The augmentation transform comprises the base transform followed by either the transform in aug_transform (if specified) or the transforms in the augmentors field.

The augmentation transform is intended to be used for training data, and the base transform for all other data where data augmentation is not desirable, such as validation or prediction.

Returns:

base transform and augmentation transform.

Return type:

tuple[albumentations.core.transforms_interface.BasicTransform, albumentations.core.transforms_interface.BasicTransform]
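Using the data_cfg sketch from above:

base_tf, aug_tf = data_cfg.get_data_transforms()
# aug_tf is meant for the training split; base_tf for validation
# and prediction, where augmentation is not desirable.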

random_subset_dataset(ds: Dataset, size: int | None = None, fraction: Optional[float] = None) → Subset#
Parameters:
  • ds (Dataset) –

  • size (int | None) –

  • fraction (float | None) –

Return type:

Subset

recursive_validate_config()#

Recursively validate hierarchies of Configs.

This uses reflection to call validate_config on a hierarchy of Configs using a depth-first pre-order traversal.

revalidate()#

Re-validate an instantiated Config.

Runs all Pydantic validators plus self.validate_config().

scene_to_dataset(scene: Scene, transform: albumentations.core.transforms_interface.BasicTransform | None = None, for_chipping: bool = False) → Dataset[source]#

Make a dataset from a single scene.

Parameters:
  • scene (Scene) –

  • transform (albumentations.core.transforms_interface.BasicTransform | None) –

  • for_chipping (bool) –

Return type:

Dataset

to_file(uri: str, with_rv_metadata: bool = True) → None#

Save a Config to a JSON file, optionally with RV metadata.

Parameters:
  • uri (str) – URI to save to.

  • with_rv_metadata (bool) – If True, inject Raster Vision metadata such as plugin_versions, so that the config can be upgraded when loaded.

Return type:

None
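A round-trip sketch using the data_cfg from above (the file path is a placeholder):

data_cfg.to_file('/tmp/data_cfg.json')
loaded = SemanticSegmentationGeoDataConfig.from_file('/tmp/data_cfg.json')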

update(*args, **kwargs)[source]#

Update any fields before validation.

Subclasses should override this to provide complex default behavior, for example, setting default values as a function of the values of other fields. The arguments to this method will vary depending on the type of Config.

validator validate_augmentors  »  augmentors#
Parameters:

v (list[str]) –

Return type:

list[str]

validate_config()#

Validate fields that should be checked after update is called.

This is to complement the builtin validation that Pydantic performs at the time of object construction.

validate_list(field: str, valid_options: list[str])#

Validate a list field.

Parameters:
  • field (str) – name of field to validate

  • valid_options (list[str]) – values that field is allowed to take

Raises:

ConfigError – if field is invalid

validator validate_plot_options  »  all fields#
Return type:

Self

validator validate_sampling  »  all fields#
Return type:

Self

property class_colors#
property class_names#
property num_classes#