CHANGELOG

Raster Vision 0.10

Raster Vision 0.10.0

Notes on switching to PyTorch-based backends

The current backends based on TensorFlow have several problems:

  • They depend on third-party libraries (DeepLab, TF Object Detection API) that are complex, not well suited to being used as dependencies within a larger project, and each written in a different style. This makes the code for the backends very different from one another and unnecessarily complex, which increases the maintenance burden, makes the backends difficult to customize, and makes it harder to implement a consistent set of functionality across them.

  • TensorFlow, in the maintainer’s opinion, is more difficult to write and debug than PyTorch (although this is starting to improve).

  • The third-party libraries assume that training images are stored as PNG or JPG files. This limits our ability to handle more than three bands and more than 8 bits per channel. We have recently completed some research on how to train models on more than three bands, and we plan on adding this functionality to Raster Vision.

Therefore, we are in the process of sunsetting the TensorFlow backends (which will probably be removed) and have implemented replacement PyTorch-based backends. The main things to be aware of when upgrading to this version of Raster Vision are as follows:

  • Instead of CPU and GPU Docker images (both based on TensorFlow), there are now tf-cpu, tf-gpu, and pytorch (which works on both CPU and GPU) images. Running ./docker/build --tf or ./docker/build --pytorch will build only the TF or PyTorch images, respectively.

  • Using the TF backends requires running inside the TF container, and likewise for PyTorch. There are now --tf-cpu, --tf-gpu, and --pytorch-gpu options for the ./docker/run command. The default is to use the PyTorch image in the standard (CPU) Docker runtime.

  • The raster-vision-aws CloudFormation setup creates Batch resources for TF-CPU, TF-GPU, and PyTorch. It also now uses default AMIs provided by AWS, simplifying the setup process.

  • To easily switch between running TF and PyTorch jobs on Batch, we recommend creating two separate Raster Vision profiles, each containing the Batch resources for one of the two.

  • The way to use the ConfigBuilders for the new backends can be seen in the examples repo and in BackendConfig; a rough sketch follows this list.
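
  For orientation only, here is a minimal sketch of what configuring one of the new backends might look like. The builder key rv.PYTORCH_SEMANTIC_SEGMENTATION and the option names passed to with_train_options are assumptions made for illustration; the examples repo and BackendConfig are the authoritative references.

      import rastervision as rv

      # Minimal sketch (assumed API, not authoritative): configure a PyTorch
      # semantic segmentation backend via its ConfigBuilder. The builder key
      # and the training option names below are assumptions; consult the
      # examples repo and BackendConfig for the exact usage.
      def build_pytorch_backend(task):
          return rv.BackendConfig.builder(rv.PYTORCH_SEMANTIC_SEGMENTATION) \
              .with_task(task) \
              .with_train_options(batch_size=8, num_epochs=10, lr=1e-4) \
              .build()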

Features

  • Add confusion matrix as metric for semantic segmentation #788

  • Add predict_chip_size as option for semantic segmentation #786

  • Handle “ignore” class for semantic segmentation #783

  • Add stochastic gradient descent (“SGD”) as an optimizer option for chip classification #792

  • Add option to determine whether all touched pixels should be rasterized for the rasterized RasterSource #803

  • Script to generate GeoTIFF from ZXY tile server #811

  • Remove QGIS plugin #818

  • Add PyTorch backends and PyTorch Docker image #821 and #823

Bug Fixes

  • Fixed issue with configuration not being able to read lists #784

  • Fixed ConfigBuilders not supporting type annotations in __init__ #800

Raster Vision 0.9

Raster Vision 0.9.0

Features

  • Add requester_pays RV config option #762

  • Unify Docker scripts #743

  • Switch default branch to master #726

  • Merge GeoTiffSource and ImageSource into RasterioSource #723

  • Simplify/clarify/test/validate RasterSource #721

  • Simplify and generalize geom processing #711

  • Predict zero for nodata pixels on semantic segmentation #701

  • Add support for evaluating vector output with AOIs #698

  • Conserve disk space when dealing with raster files #692

  • Optimize StatsAnalyzer #690

  • Include per-scene eval metrics #641

  • Make and save predictions and do eval chip-by-chip #635

  • Decrease semseg memory usage #630

  • Add support for vector tiles in .mbtiles files #601

  • Add support for getting labels from zxy vector tiles #532

  • Remove custom __deepcopy__ implementation from ConfigBuilders. #567

  • Add ability to shift raster images by given numbers of meters. #573

  • Add ability to generate GeoJSON segmentation predictions. #575

  • Add ability to run the DeepLab eval script. #653

  • Submit CPU-only stages to a CPU queue on AWS. #668

  • Parallelize CHIP and PREDICT commands #671

  • Refactor update_for_command to split out the IO reporting into report_io. #671

  • Add multi-GPU support to DeepLab backend #590

  • Handle multiple AOI URIs #617

  • Give train_restart_dir a default value #626

  • Use `make` to manage local execution #664

  • Optimize vector tile processing #676

Bug Fixes

  • Fix DeepLab resume bug: update path in checkpoint file #756

  • Allow spaces in --channel-order argument #731

  • Fix error when using predict packages with AOIs #674

  • Correct checkpoint name #624

  • Allow using default stride for semseg sliding window #745

  • Fix filter_by_aoi for ObjectDetectionLabels #746

  • Load null channel_order correctly #733

  • Handle Rasterio crs that doesn’t contain EPSG #725

  • Fixed issue with saving semseg predictions for non-georeferenced imagery #708

  • Fixed issue with handling width > height in semseg eval #627

  • Fixed issue with experiment configs not setting key names correctly #576

  • Fixed issue with Raster Sources that have channel order #576

Raster Vision 0.8

Raster Vision 0.8.1

Bug Fixes

  • Allow multipolygon for chip classification #523

  • Remove unused args for AWS Batch runner #503

  • Skip over lines when doing chip classification; use background_class_id for scenes with no polygons #507

  • Fix issue where get_matching_s3_keys fails when suffix is None #497