Usage Overview#

You can use Raster Vision in two different ways, depending on your needs and level of experience:

  • As a library of utilities for handling geospatial data and training deep learning models, which you can incorporate into your own code.

  • As a low-code framework, the Raster Vision Pipeline, which handles the entire training workflow internally and only requires you to configure a few parameters.

As a library#

This allows you to pick and choose which parts of Raster Vision to use. For example, to apply your existing PyTorch training code to geospatial imagery, simply create a GeoDataset like so:

from rastervision.pytorch_learner import SemanticSegmentationSlidingWindowGeoDataset

# class_config lists the classes present in the labels; image_uri and
# label_uri point to your imagery and label rasters (local paths or
# remote URIs, e.g. s3://).
ds = SemanticSegmentationSlidingWindowGeoDataset.from_uris(
    class_config=class_config,
    image_uri=image_uri,
    label_raster_uri=label_uri,
    # read 200x200 chips in a sliding window with stride 100
    size=200,
    stride=100)
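
The resulting object is a regular PyTorch Dataset, so it plugs directly into standard data-loading code. A minimal sketch, assuming the dataset yields the usual (image, label) pairs and using an illustrative batch size:

from torch.utils.data import DataLoader

dl = DataLoader(ds, batch_size=8)
for x, y in dl:
    ...  # x: batch of image chips, y: corresponding label masks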

As a framework#

This allows you to configure an entire pipeline in a single file, like so:

from os.path import join

from rastervision.core.rv_pipeline import (
    SemanticSegmentationConfig, SemanticSegmentationPredictOptions)
from rastervision.core.data import ClassConfig, DatasetConfig
from rastervision.pytorch_backend import PyTorchSemanticSegmentationConfig
from rastervision.pytorch_learner import (
    Backbone, SolverConfig, SemanticSegmentationGeoDataConfig,
    SemanticSegmentationModelConfig, WindowSamplingConfig,
    WindowSamplingMethod)


def get_config(runner) -> SemanticSegmentationConfig:
    output_root_uri = '/opt/data/output/tiny_spacenet'
    class_config = ClassConfig(
        names=['building', 'background'], colors=['red', 'black'])

    base_uri = ('https://s3.amazonaws.com/azavea-research-public-data/'
                'raster-vision/examples/spacenet')
    train_image_uri = join(base_uri, 'RGB-PanSharpen_AOI_2_Vegas_img205.tif')
    train_label_uri = join(base_uri, 'buildings_AOI_2_Vegas_img205.geojson')
    val_image_uri = join(base_uri, 'RGB-PanSharpen_AOI_2_Vegas_img25.tif')
    val_label_uri = join(base_uri, 'buildings_AOI_2_Vegas_img25.geojson')

    # make_scene() is a user-defined helper (see the full tiny_spacenet
    # example) that builds a SceneConfig from an image URI and a label URI.
    train_scene = make_scene('scene_205', train_image_uri, train_label_uri,
                             class_config)
    val_scene = make_scene('scene_25', val_image_uri, val_label_uri,
                           class_config)
    scene_dataset = DatasetConfig(
        class_config=class_config,
        train_scenes=[train_scene],
        validation_scenes=[val_scene])

    chip_sz = 300

    # Use the PyTorch backend for the SemanticSegmentation pipeline.
    backend = PyTorchSemanticSegmentationConfig(
        data=SemanticSegmentationGeoDataConfig(
            scene_dataset=scene_dataset,
            sampling=WindowSamplingConfig(
                # randomly sample training chips from scene
                method=WindowSamplingMethod.random,
                # ... of size chip_sz x chip_sz
                size=chip_sz,
                # ... and at most 10 chips per scene
                max_windows=10)),
        model=SemanticSegmentationModelConfig(backbone=Backbone.resnet50),
        solver=SolverConfig(lr=1e-4, num_epochs=1, batch_sz=2))

    return SemanticSegmentationConfig(
        root_uri=output_root_uri,
        dataset=scene_dataset,
        backend=backend,
        predict_options=SemanticSegmentationPredictOptions(chip_sz=chip_sz))
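
Note that get_config() only builds a declarative configuration; the runner you invoke from the command line is what actually executes the pipeline's stages (training, prediction, evaluation, and so on).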

And then run it from the command line:

> rastervision run local <path/to/file.py>
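
Running the pipeline writes its outputs under root_uri, including a model bundle containing the trained model. The bundle can then be used to predict on new imagery with the predict command; a hedged sketch (the exact arguments may vary by version, see rastervision predict --help):

> rastervision predict <path/to/model-bundle.zip> <path/to/image.tif> <path/to/output>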

Read more about it here: The Raster Vision Pipeline.