ObjectDetectionConfig

Note

All Configs are derived from rastervision.pipeline.config.Config, which itself is a pydantic Model.

pydantic model ObjectDetectionConfig

Configure an ObjectDetection pipeline.
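The chip and predict stages both read imagery in fixed-size windows; the `ObjectDetectionWindowSamplingConfig` described below controls how those windows are sampled. As a rough illustration of the `method = "sliding"` case (a sketch, not Raster Vision's actual implementation; the stride-defaults-to-size behavior and end-only padding are assumptions here):

```python
def sliding_windows(h, w, size, stride=None, padding=0):
    """Yield (ymin, xmin, ymax, xmax) windows over an h x w raster.

    Illustrative sketch of the "sliding" sampling method: windows may
    overflow the bottom/right edges by up to ``padding`` pixels
    (pad_direction = "end"). ``stride`` is assumed to default to the
    window size here.
    """
    stride = stride or size
    windows = []
    for ymin in range(0, h + padding - size + 1, stride):
        for xmin in range(0, w + padding - size + 1, stride):
            windows.append((ymin, xmin, ymin + size, xmin + size))
    return windows

# A 600 x 600 raster chipped with 300-pixel windows and no overlap:
print(len(sliding_windows(600, 600, size=300)))  # 4
```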

JSON schema:
{
   "title": "ObjectDetectionConfig",
   "description": "Configure an :class:`.ObjectDetection` pipeline.",
   "type": "object",
   "properties": {
      "root_uri": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "The root URI for output generated by the pipeline",
         "title": "Root Uri"
      },
      "rv_config": {
         "anyOf": [
            {
               "type": "object"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "Used to store serialized RVConfig so pipeline can run in remote environment with the local RVConfig. This should not be set explicitly by users -- it is only used by the runner when running a remote pipeline.",
         "title": "Rv Config"
      },
      "plugin_versions": {
         "anyOf": [
            {
               "additionalProperties": {
                  "type": "integer"
               },
               "type": "object"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "Used to store a mapping of plugin module paths to the latest version number. This should not be set explicitly by users -- it is set automatically when serializing and saving the config to disk.",
         "title": "Plugin Versions"
      },
      "type_hint": {
         "const": "object_detection",
         "default": "object_detection",
         "enum": [
            "object_detection"
         ],
         "title": "Type Hint",
         "type": "string"
      },
      "dataset": {
         "allOf": [
            {
               "$ref": "#/$defs/DatasetConfig"
            }
         ],
         "description": "Dataset containing train, validation, and optional test scenes."
      },
      "backend": {
         "allOf": [
            {
               "$ref": "#/$defs/BackendConfig"
            }
         ],
         "description": "Backend to use for interfacing with ML library."
      },
      "evaluators": {
         "default": [],
         "description": "Evaluators to run during analyzer command. If list is empty the default evaluator is added.",
         "items": {
            "$ref": "#/$defs/EvaluatorConfig"
         },
         "title": "Evaluators",
         "type": "array"
      },
      "analyzers": {
         "default": [],
         "description": "Analyzers to run during analyzer command. A StatsAnalyzer will be added automatically if any scenes have a RasterTransformer.",
         "items": {
            "$ref": "#/$defs/AnalyzerConfig"
         },
         "title": "Analyzers",
         "type": "array"
      },
      "analyze_uri": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "URI for output of analyze. If None, will be auto-generated.",
         "title": "Analyze Uri"
      },
      "chip_uri": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "URI for output of chip. If None, will be auto-generated.",
         "title": "Chip Uri"
      },
      "train_uri": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "URI for output of train. If None, will be auto-generated.",
         "title": "Train Uri"
      },
      "predict_uri": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "URI for output of predict. If None, will be auto-generated.",
         "title": "Predict Uri"
      },
      "eval_uri": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "URI for output of eval. If None, will be auto-generated.",
         "title": "Eval Uri"
      },
      "bundle_uri": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "URI for output of bundle. If None, will be auto-generated.",
         "title": "Bundle Uri"
      },
      "source_bundle_uri": {
         "anyOf": [
            {
               "type": "string"
            },
            {
               "type": "null"
            }
         ],
         "default": null,
         "description": "If provided, the model will be loaded from this bundle for the train stage. Useful for fine-tuning.",
         "title": "Source Bundle Uri"
      },
      "chip_options": {
         "anyOf": [
            {
               "$ref": "#/$defs/ObjectDetectionChipOptions"
            },
            {
               "type": "null"
            }
         ],
         "default": null
      },
      "predict_options": {
         "anyOf": [
            {
               "$ref": "#/$defs/ObjectDetectionPredictOptions"
            },
            {
               "type": "null"
            }
         ],
         "default": null
      }
   },
   "$defs": {
      "AnalyzerConfig": {
         "additionalProperties": false,
         "description": "Configure an :class:`.Analyzer`.",
         "properties": {
            "type_hint": {
               "const": "analyzer",
               "default": "analyzer",
               "enum": [
                  "analyzer"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "title": "AnalyzerConfig",
         "type": "object"
      },
      "BackendConfig": {
         "additionalProperties": false,
         "description": "Configure a :class:`.Backend`.",
         "properties": {
            "type_hint": {
               "const": "backend",
               "default": "backend",
               "enum": [
                  "backend"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "title": "BackendConfig",
         "type": "object"
      },
      "ClassConfig": {
         "additionalProperties": false,
         "description": "Configure class information for a machine learning task.",
         "properties": {
            "names": {
               "description": "Names of classes. The i-th class in this list will have class ID = i.",
               "items": {
                  "type": "string"
               },
               "title": "Names",
               "type": "array"
            },
            "colors": {
               "anyOf": [
                  {
                     "items": {
                        "anyOf": [
                           {
                              "type": "string"
                           },
                           {
                              "items": {},
                              "type": "array"
                           }
                        ]
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Colors used to visualize classes. Can be color strings accepted by matplotlib or RGB tuples. If None, a random color will be auto-generated for each class.",
               "title": "Colors"
            },
            "null_class": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Optional name of class in `names` to use as the null class. This is used in semantic segmentation to represent the label for imagery pixels that are NODATA or that are missing a label. If None and the class names include \"null\", it will automatically be used as the null class. If None, and this Config is part of a SemanticSegmentationConfig, a null class will be added automatically.",
               "title": "Null Class"
            },
            "type_hint": {
               "const": "class_config",
               "default": "class_config",
               "enum": [
                  "class_config"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "required": [
            "names"
         ],
         "title": "ClassConfig",
         "type": "object"
      },
      "DatasetConfig": {
         "additionalProperties": false,
         "description": "Configure train, validation, and test splits for a dataset.",
         "properties": {
            "class_config": {
               "$ref": "#/$defs/ClassConfig"
            },
            "train_scenes": {
               "items": {
                  "$ref": "#/$defs/SceneConfig"
               },
               "title": "Train Scenes",
               "type": "array"
            },
            "validation_scenes": {
               "items": {
                  "$ref": "#/$defs/SceneConfig"
               },
               "title": "Validation Scenes",
               "type": "array"
            },
            "test_scenes": {
               "default": [],
               "items": {
                  "$ref": "#/$defs/SceneConfig"
               },
               "title": "Test Scenes",
               "type": "array"
            },
            "scene_groups": {
               "additionalProperties": {
                  "items": {
                     "type": "string"
                  },
                  "type": "array",
                  "uniqueItems": true
               },
               "default": {},
               "description": "Groupings of scenes. Should be a dict of the form: {<group-name>: set(scene_id_1, scene_id_2, ...)}. Three groups are added by default: \"train_scenes\", \"validation_scenes\", and \"test_scenes\"",
               "title": "Scene Groups",
               "type": "object"
            },
            "type_hint": {
               "const": "dataset",
               "default": "dataset",
               "enum": [
                  "dataset"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "required": [
            "class_config",
            "train_scenes",
            "validation_scenes"
         ],
         "title": "DatasetConfig",
         "type": "object"
      },
      "EvaluatorConfig": {
         "additionalProperties": false,
         "description": "Configure an :class:`.Evaluator`.",
         "properties": {
            "output_uri": {
               "anyOf": [
                  {
                     "type": "string"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "URI of directory where evaluator output will be saved. Evaluations for each scene-group will be save in a JSON file at <output_uri>/<scene-group-name>/eval.json. If None, and this Config is part of an RVPipeline, this field will be auto-generated.",
               "title": "Output Uri"
            },
            "type_hint": {
               "const": "evaluator",
               "default": "evaluator",
               "enum": [
                  "evaluator"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "title": "EvaluatorConfig",
         "type": "object"
      },
      "LabelSourceConfig": {
         "additionalProperties": false,
         "description": "Configure a :class:`.LabelSource`.",
         "properties": {
            "type_hint": {
               "const": "label_source",
               "default": "label_source",
               "enum": [
                  "label_source"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "title": "LabelSourceConfig",
         "type": "object"
      },
      "LabelStoreConfig": {
         "additionalProperties": false,
         "description": "Configure a :class:`.LabelStore`.",
         "properties": {
            "type_hint": {
               "const": "label_store",
               "default": "label_store",
               "enum": [
                  "label_store"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "title": "LabelStoreConfig",
         "type": "object"
      },
      "ObjectDetectionChipOptions": {
         "additionalProperties": false,
         "properties": {
            "sampling": {
               "allOf": [
                  {
                     "$ref": "#/$defs/ObjectDetectionWindowSamplingConfig"
                  }
               ],
               "description": "Window sampling config."
            },
            "nodata_threshold": {
               "default": 1.0,
               "description": "Discard chips where the proportion of NODATA values is greater than or equal to this value. Might result in false positives if there are many legitimate black pixels in the chip. Use with caution. If 1.0, only chips that are fully NODATA will be discarded. Defaults to 1.0.",
               "maximum": 1.0,
               "minimum": 0.0,
               "title": "Nodata Threshold",
               "type": "number"
            },
            "type_hint": {
               "const": "object_detection_chip_options",
               "default": "object_detection_chip_options",
               "enum": [
                  "object_detection_chip_options"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "required": [
            "sampling"
         ],
         "title": "ObjectDetectionChipOptions",
         "type": "object"
      },
      "ObjectDetectionPredictOptions": {
         "additionalProperties": false,
         "properties": {
            "chip_sz": {
               "default": 300,
               "description": "Size of predictions chips in pixels.",
               "title": "Chip Sz",
               "type": "integer"
            },
            "stride": {
               "anyOf": [
                  {
                     "type": "integer"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Stride of the sliding window for generating chips. Defaults to half of ``chip_sz``.",
               "title": "Stride"
            },
            "batch_sz": {
               "default": 8,
               "description": "Batch size to use during prediction.",
               "title": "Batch Sz",
               "type": "integer"
            },
            "type_hint": {
               "const": "object_detection_predict_options",
               "default": "object_detection_predict_options",
               "enum": [
                  "object_detection_predict_options"
               ],
               "title": "Type Hint",
               "type": "string"
            },
            "merge_thresh": {
               "default": 0.5,
               "description": "If predicted boxes have an IOA (intersection over area) greater than merge_thresh, then they are merged into a single box during postprocessing. This is needed since the sliding window approach results in some false duplicates.",
               "title": "Merge Thresh",
               "type": "number"
            },
            "score_thresh": {
               "default": 0.5,
               "description": "Predicted boxes are only output if their score is above score_thresh.",
               "title": "Score Thresh",
               "type": "number"
            }
         },
         "title": "ObjectDetectionPredictOptions",
         "type": "object"
      },
      "ObjectDetectionWindowSamplingConfig": {
         "additionalProperties": false,
         "properties": {
            "method": {
               "allOf": [
                  {
                     "$ref": "#/$defs/WindowSamplingMethod"
                  }
               ],
               "default": "sliding",
               "description": ""
            },
            "size": {
               "anyOf": [
                  {
                     "exclusiveMinimum": 0,
                     "type": "integer"
                  },
                  {
                     "maxItems": 2,
                     "minItems": 2,
                     "prefixItems": [
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        },
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  }
               ],
               "description": "If method = sliding, this is the size of sliding window. If method = random, this is the size that all the windows are resized to before they are returned. If method = random and neither size_lims nor h_lims and w_lims have been specified, then size_lims is set to (size, size + 1).",
               "title": "Size"
            },
            "stride": {
               "anyOf": [
                  {
                     "exclusiveMinimum": 0,
                     "type": "integer"
                  },
                  {
                     "maxItems": 2,
                     "minItems": 2,
                     "prefixItems": [
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        },
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "Stride of sliding window. Only used if method = sliding.",
               "title": "Stride"
            },
            "padding": {
               "anyOf": [
                  {
                     "minimum": 0,
                     "type": "integer"
                  },
                  {
                     "maxItems": 2,
                     "minItems": 2,
                     "prefixItems": [
                        {
                           "minimum": 0,
                           "type": "integer"
                        },
                        {
                           "minimum": 0,
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "How many pixels are windows allowed to overflow the edges of the raster source.",
               "title": "Padding"
            },
            "pad_direction": {
               "default": "end",
               "description": "If \"end\", only pad ymax and xmax (bottom and right). If \"start\", only pad ymin and xmin (top and left). If \"both\", pad all sides. Has no effect if padding is zero. Defaults to \"end\".",
               "enum": [
                  "both",
                  "start",
                  "end"
               ],
               "title": "Pad Direction",
               "type": "string"
            },
            "size_lims": {
               "anyOf": [
                  {
                     "maxItems": 2,
                     "minItems": 2,
                     "prefixItems": [
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        },
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "[min, max) interval from which window sizes will be uniformly randomly sampled. The upper limit is exclusive. To fix the size to a constant value, use size_lims = (sz, sz + 1). Only used if method = random. Specify either size_lims or h_lims and w_lims, but not both. If neither size_lims nor h_lims and w_lims have been specified, then this will be set to (size, size + 1).",
               "title": "Size Lims"
            },
            "h_lims": {
               "anyOf": [
                  {
                     "maxItems": 2,
                     "minItems": 2,
                     "prefixItems": [
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        },
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "[min, max] interval from which window heights will be uniformly randomly sampled. Only used if method = random.",
               "title": "H Lims"
            },
            "w_lims": {
               "anyOf": [
                  {
                     "maxItems": 2,
                     "minItems": 2,
                     "prefixItems": [
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        },
                        {
                           "exclusiveMinimum": 0,
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "[min, max] interval from which window widths will be uniformly randomly sampled. Only used if method = random.",
               "title": "W Lims"
            },
            "max_windows": {
               "default": 10000,
               "description": "Max number of windows to sample. Only used if method = random.",
               "minimum": 0,
               "title": "Max Windows",
               "type": "integer"
            },
            "max_sample_attempts": {
               "default": 100,
               "description": "Max attempts when trying to find a window within the AOI of a scene. Only used if method = random and the scene has aoi_polygons specified.",
               "exclusiveMinimum": 0,
               "title": "Max Sample Attempts",
               "type": "integer"
            },
            "efficient_aoi_sampling": {
               "default": true,
               "description": "If the scene has AOIs, sampling windows at random anywhere in the extent and then checking if they fall within any of the AOIs can be very inefficient. This flag enables the use of an alternate algorithm that only samples window locations inside the AOIs. Only used if method = random and the scene has aoi_polygons specified. Defaults to True",
               "title": "Efficient Aoi Sampling",
               "type": "boolean"
            },
            "within_aoi": {
               "default": true,
               "description": "If True and if the scene has an AOI, only sample windows that lie fully within the AOI. If False, windows only partially intersecting the AOI will also be allowed.",
               "title": "Within Aoi",
               "type": "boolean"
            },
            "type_hint": {
               "const": "object_detection_window_sampling",
               "default": "object_detection_window_sampling",
               "enum": [
                  "object_detection_window_sampling"
               ],
               "title": "Type Hint",
               "type": "string"
            },
            "ioa_thresh": {
               "default": 0.8,
               "description": "When a box is partially outside of a training chip, it is not clear if (a clipped version) of the box should be included in the chip. If the IOA (intersection over area) of the box with the chip is greater than ioa_thresh, it is included in the chip. Defaults to 0.8.",
               "title": "Ioa Thresh",
               "type": "number"
            },
            "clip": {
               "default": false,
               "description": "Clip bounding boxes to window limits when retrieving labels for a window.",
               "title": "Clip",
               "type": "boolean"
            },
            "neg_ratio": {
               "anyOf": [
                  {
                     "type": "number"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The ratio of negative chips (those containing no bounding boxes) to positive chips. This can be useful if the statistics of the background is different in positive chips. For example, in car detection, the positive chips will always contain roads, but no examples of rooftops since cars tend to not be near rooftops. Defaults to None.",
               "title": "Neg Ratio"
            },
            "neg_ioa_thresh": {
               "default": 0.2,
               "description": "A window will be considered negative if its max IoA with any bounding box is less than this threshold. Defaults to 0.2.",
               "title": "Neg Ioa Thresh",
               "type": "number"
            }
         },
         "required": [
            "size"
         ],
         "title": "ObjectDetectionWindowSamplingConfig",
         "type": "object"
      },
      "RasterSourceConfig": {
         "additionalProperties": false,
         "description": "Configure a :class:`.RasterSource`.",
         "properties": {
            "channel_order": {
               "anyOf": [
                  {
                     "items": {
                        "type": "integer"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "The sequence of channel indices to use when reading imagery.",
               "title": "Channel Order"
            },
            "transformers": {
               "default": [],
               "items": {
                  "$ref": "#/$defs/RasterTransformerConfig"
               },
               "title": "Transformers",
               "type": "array"
            },
            "bbox": {
               "anyOf": [
                  {
                     "maxItems": 4,
                     "minItems": 4,
                     "prefixItems": [
                        {
                           "type": "integer"
                        },
                        {
                           "type": "integer"
                        },
                        {
                           "type": "integer"
                        },
                        {
                           "type": "integer"
                        }
                     ],
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "User-specified bbox in pixel coords in the form (ymin, xmin, ymax, xmax). Useful for cropping the raster source so that only part of the raster is read from.",
               "title": "Bbox"
            },
            "type_hint": {
               "const": "raster_source",
               "default": "raster_source",
               "enum": [
                  "raster_source"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "title": "RasterSourceConfig",
         "type": "object"
      },
      "RasterTransformerConfig": {
         "additionalProperties": false,
         "description": "Configure a :class:`.RasterTransformer`.",
         "properties": {
            "type_hint": {
               "const": "raster_transformer",
               "default": "raster_transformer",
               "enum": [
                  "raster_transformer"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "title": "RasterTransformerConfig",
         "type": "object"
      },
      "SceneConfig": {
         "additionalProperties": false,
         "description": "Configure a :class:`.Scene` comprising raster data & labels for an AOI.\n    ",
         "properties": {
            "id": {
               "title": "Id",
               "type": "string"
            },
            "raster_source": {
               "$ref": "#/$defs/RasterSourceConfig"
            },
            "label_source": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/LabelSourceConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null
            },
            "label_store": {
               "anyOf": [
                  {
                     "$ref": "#/$defs/LabelStoreConfig"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null
            },
            "aoi_uris": {
               "anyOf": [
                  {
                     "items": {
                        "type": "string"
                     },
                     "type": "array"
                  },
                  {
                     "type": "null"
                  }
               ],
               "default": null,
               "description": "List of URIs of GeoJSON files that define the AOIs for the scene. Each polygon defines an AOI which is a piece of the scene that is assumed to be fully labeled and usable for training or validation. The AOIs are assumed to be in EPSG:4326 coordinates.",
               "title": "Aoi Uris"
            },
            "type_hint": {
               "const": "scene",
               "default": "scene",
               "enum": [
                  "scene"
               ],
               "title": "Type Hint",
               "type": "string"
            }
         },
         "required": [
            "id",
            "raster_source"
         ],
         "title": "SceneConfig",
         "type": "object"
      },
      "WindowSamplingMethod": {
         "description": "Enum for window sampling methods.\n\nAttributes:\n    sliding: Sliding windows.\n    random: Randomly sampled windows.",
         "enum": [
            "sliding",
            "random"
         ],
         "title": "WindowSamplingMethod",
         "type": "string"
      }
   },
   "additionalProperties": false,
   "required": [
      "dataset",
      "backend"
   ]
}

Config:
  • extra: str = forbid

  • validate_assignment: bool = True

Fields:
field analyze_uri: str | None = None#

URI for output of analyze. If None, will be auto-generated.

field analyzers: list[AnalyzerConfig] = []#

Analyzers to run during the analyze command. A StatsAnalyzer will be added automatically if any scenes have a RasterTransformer.

field backend: BackendConfig [Required]#

Backend to use for interfacing with ML library.

field bundle_uri: str | None = None#

URI for output of bundle. If None, will be auto-generated.

field chip_options: ObjectDetectionChipOptions | None = None#
field chip_uri: str | None = None#

URI for output of chip. If None, will be auto-generated.

field dataset: DatasetConfig [Required]#

Dataset containing train, validation, and optional test scenes.

field eval_uri: str | None = None#

URI for output of eval. If None, will be auto-generated.

field evaluators: list[EvaluatorConfig] = []#

Evaluators to run during the eval command. If the list is empty, the default evaluator is added.

field plugin_versions: dict[str, int] | None = None#

Used to store a mapping of plugin module paths to the latest version number. This should not be set explicitly by users – it is set automatically when serializing and saving the config to disk.

field predict_options: ObjectDetectionPredictOptions | None = None#
field predict_uri: str | None = None#

URI for output of predict. If None, will be auto-generated.

field root_uri: str | None = None#

The root URI for output generated by the pipeline

field rv_config: dict | None = None#

Used to store serialized RVConfig so pipeline can run in remote environment with the local RVConfig. This should not be set explicitly by users – it is only used by the runner when running a remote pipeline.

field source_bundle_uri: str | None = None#

If provided, the model will be loaded from this bundle for the train stage. Useful for fine-tuning.

field train_uri: str | None = None#

URI for output of train. If None, will be auto-generated.

field type_hint: Literal['object_detection'] = 'object_detection'#
build(tmp_dir)[source]#

Return a pipeline based on this configuration.

Subclasses should override this to return an instance of the corresponding subclass of Pipeline.

Parameters:

tmp_dir – root of any temporary directory to pass to the pipeline.
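The build/override pattern described above can be sketched as follows. This is a hypothetical minimal illustration, not the Raster Vision implementation; the Pipeline, ObjectDetection, and PipelineConfig classes here are bare stand-ins for the real ones.

```python
# Hypothetical stand-ins for the Raster Vision classes, to show the pattern
# of build() returning the Pipeline subclass that matches the config.
class Pipeline:
    def __init__(self, config: 'PipelineConfig', tmp_dir: str):
        self.config = config
        self.tmp_dir = tmp_dir

class ObjectDetection(Pipeline):
    """Stand-in for the ObjectDetection pipeline."""

class PipelineConfig:
    def build(self, tmp_dir: str) -> Pipeline:
        return Pipeline(self, tmp_dir)

class ObjectDetectionConfig(PipelineConfig):
    def build(self, tmp_dir: str) -> Pipeline:
        # The override returns the corresponding Pipeline subclass.
        return ObjectDetection(self, tmp_dir)
```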

classmethod deserialize(inp: str | dict | Config) Self#

Deserialize Config from a JSON file or dict, upgrading if possible.

If inp is already a Config, it is returned as is.

Parameters:

inp (str | dict | Config) – a URI to a JSON file, a dict, or a Config.

Return type:

Self

classmethod from_dict(cfg_dict: dict) Self#

Deserialize Config from a dict.

Parameters:

cfg_dict (dict) – Dict to deserialize.

Return type:

Self

classmethod from_file(uri: str) Self#

Deserialize Config from a JSON file, upgrading if possible.

Parameters:

uri (str) – URI to load from.

Return type:

Self

get_config_uri() str#

Get URI of serialized version of this PipelineConfig.

Return type:

str

get_default_evaluator()[source]#

Returns a default EvaluatorConfig to use if one isn’t set.

get_default_label_store(scene)[source]#

Returns a default LabelStoreConfig to fill in any missing ones.

get_model_bundle_uri()#
recursive_validate_config()#

Recursively validate hierarchies of Configs.

This uses reflection to call validate_config on a hierarchy of Configs using a depth-first pre-order traversal.
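The depth-first pre-order traversal can be sketched like this. The classes below are hypothetical stand-ins (not Raster Vision's), with a shared `order` list added purely to make the visit order observable; the real method inspects pydantic fields rather than `vars()`.

```python
class Config:
    """Hypothetical minimal Config; `order` records the validation order."""
    order = []

    def validate_config(self):
        Config.order.append(type(self).__name__)

def recursive_validate_config(cfg: Config) -> None:
    # Pre-order: validate the parent before descending into children.
    cfg.validate_config()
    for value in vars(cfg).values():
        children = value if isinstance(value, list) else [value]
        for child in children:
            if isinstance(child, Config):
                recursive_validate_config(child)

class SceneConfig(Config):
    pass

class DatasetConfig(Config):
    def __init__(self):
        self.train_scenes = [SceneConfig(), SceneConfig()]

class PipelineConfig(Config):
    def __init__(self):
        self.dataset = DatasetConfig()
```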

revalidate()#

Re-validate an instantiated Config.

Runs all Pydantic validators plus self.validate_config().

to_file(uri: str, with_rv_metadata: bool = True) None#

Save a Config to a JSON file, optionally with RV metadata.

Parameters:
  • uri (str) – URI to save to.

  • with_rv_metadata (bool) – If True, inject Raster Vision metadata such as plugin_versions, so that the config can be upgraded when loaded.

Return type:

None

update()#

Update any fields before validation.

Subclasses should override this to provide complex default behavior, for example, setting default values as a function of the values of other fields. The arguments to this method will vary depending on the type of Config.
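The kind of complex default behavior described above — setting a field as a function of other fields — can be sketched as follows. The class and its fields are hypothetical stand-ins, but they mirror how stage URIs such as `train_uri` are documented to be auto-generated when left as None.

```python
from os.path import join

class PipelineConfigSketch:
    """Hypothetical illustration of update(): derive unset stage URIs
    from root_uri before validation runs."""
    def __init__(self, root_uri: str, train_uri=None, predict_uri=None):
        self.root_uri = root_uri
        self.train_uri = train_uri
        self.predict_uri = predict_uri

    def update(self):
        # Auto-generate per-stage URIs as a function of root_uri,
        # leaving explicitly set values untouched.
        if self.train_uri is None:
            self.train_uri = join(self.root_uri, 'train')
        if self.predict_uri is None:
            self.predict_uri = join(self.root_uri, 'predict')
```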

validate_config()#

Validate fields that should be checked after update is called.

This is to complement the builtin validation that Pydantic performs at the time of object construction.

validate_list(field: str, valid_options: list[str])#

Validate a list field.

Parameters:
  • field (str) – name of field to validate

  • valid_options (list[str]) – values that field is allowed to take

Raises:

ConfigError – if field is invalid
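The validation described above can be sketched as a standalone function. The ConfigError class and the attribute lookup are stand-ins assumed for illustration; the real method checks a field on the Config instance itself.

```python
class ConfigError(ValueError):
    """Stand-in for rastervision's ConfigError."""

def validate_list(cfg: object, field: str, valid_options: list) -> None:
    # Raise if any element of the list field is outside valid_options.
    for val in getattr(cfg, field, []):
        if val not in valid_options:
            raise ConfigError(f'{field}: {val!r} is not one of {valid_options}')
```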