CHANGELOG
Raster Vision 0.12.1
Raster Vision 0.12
This release presents a major refactoring of Raster Vision intended to simplify the codebase, and make it more flexible and customizable.
To learn about how to upgrade existing experiment configurations, perhaps the best approach is to read the source code of the Examples to get a feel for the new syntax. Unfortunately, existing predict packages will not be usable with this release, and upgrading and re-running the experiments will be necessary. For more advanced users who have written plugins or custom commands, the internals have changed substantially, and we recommend reading Architecture and Customization.
Since the changes in this release are sweeping, it is difficult to enumerate a list of all changes and associated PRs. Therefore, this change log describes the changes at a high level, along with some justifications and pointers to further documentation.
We are still using a modular, programmatic approach to configuration, but have switched to using a Config base class which uses the Pydantic library. This allows us to define configuration schemas in a declarative fashion, and let the underlying library handle serialization, deserialization, and validation. In addition, this has allowed us to DRY up the configuration code, eliminate the use of Protobufs, and represent configuration from plugins in the same fashion as built-in functionality. To see the difference, compare the configuration code for ChipClassificationLabelSource in 0.11 (label_source.proto and chip_classification_label_source_config.py) and in 0.12 (chip_classification_label_source_config.py).
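To illustrate the declarative style, here is a minimal sketch of a Pydantic-based config class. This is not the actual Raster Vision schema; the field names here are hypothetical and shown only to demonstrate how Pydantic handles defaults and validation.

```python
# Hypothetical sketch of a Pydantic-style config class; the field names
# are illustrative, not the real Raster Vision schema.
from pydantic import BaseModel, ValidationError

class ChipClassificationLabelSourceConfig(BaseModel):
    uri: str                  # path to the labels (e.g. a GeoJSON file)
    ioa_thresh: float = 0.5   # example threshold field with a default
    infer_cells: bool = False

cfg = ChipClassificationLabelSourceConfig(uri='s3://bucket/labels.json')
print(cfg.ioa_thresh)  # defaults are filled in automatically: 0.5

# Validation comes for free from the library.
try:
    ChipClassificationLabelSourceConfig(uri='x', ioa_thresh='not a float')
except ValidationError:
    print('invalid config rejected')
```

Serialization and deserialization are likewise handled by the library, which is what allows plugin configs to be treated the same way as built-in ones.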
Raster Vision includes functionality for running computational pipelines in local and remote environments, but previously, this functionality was tightly coupled with the “domain logic” of machine learning on geospatial data in the Experiment
abstraction. This made it more difficult to add and modify commands, as well as use this functionality in other projects. In this release, we factored out the experiment running code into a separate rastervision.pipeline package, which can be used for defining, configuring, customizing, and running arbitrary computational pipelines.
The rest of Raster Vision is now written as a set of optional plugins that have Pipelines
which implement the “domain logic” of machine learning on geospatial data. Implementing everything as optional (pip
installable) plugins makes it easier to install subsets of Raster Vision functionality, eliminates separate code paths for built-in and plugin functionality, and provides (de facto) examples of how to write plugins. See Codebase Overview for more details.
The 0.10 release added PyTorch backends for chip classification, semantic segmentation, and object detection. In this release, we abstracted out the common code for training models into a flexible Learner base class with subclasses for each of the computer vision tasks. This code is in the rastervision.pytorch_learner plugin, and is used by the Backends in rastervision.pytorch_backend. By decoupling Backends and Learners, it is now easier to write arbitrary Pipelines and new Backends that reuse the core model training code, which can be customized by overriding methods such as build_model. See Customizing Raster Vision.
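The override pattern can be pictured with a minimal plain-Python sketch. The class and method names below (other than build_model) are illustrative; the real Learner is PyTorch-based and has many more hooks.

```python
# Hypothetical sketch of the Learner pattern: the base class owns the
# generic training loop; subclasses override build_model() to supply
# a task-specific model. Names are illustrative, not the real API.
class Learner:
    def __init__(self, cfg):
        self.cfg = cfg
        self.model = self.build_model()

    def build_model(self):
        raise NotImplementedError

    def train(self):
        # generic loop: iterate over epochs, log metrics, checkpoint, ...
        return f'trained {self.model} for {self.cfg["epochs"]} epochs'

class ClassificationLearner(Learner):
    def build_model(self):
        # a Backend or user subclass swaps in whatever model it needs
        return f'resnet-{self.cfg["depth"]} classifier'

learner = ClassificationLearner({'epochs': 3, 'depth': 18})
print(learner.train())  # -> trained resnet-18 classifier for 3 epochs
```

Because the training loop lives in the base class, a new Backend only has to supply the pieces that differ for its task.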
The Tensorflow backends and associated Docker images have been removed. It is too difficult to maintain backends for multiple deep learning frameworks, and PyTorch has worked well for us. Of course, it’s still possible to write Backend
plugins using any framework.
For simplicity, we moved the contents of the raster-vision-examples and raster-vision-aws repos into the main repo. See Examples and Setup AWS Batch using CloudFormation.
To help people bootstrap new projects using RV, we added Bootstrap new projects with a template.
All the PyTorch backends now offer data augmentation using albumentations.
We removed the ability to automatically skip running commands that already have output, “tree workflows”, and “default providers”. We also unified the Experiment, Command, and Task classes into a single Pipeline class which is subclassed for different computer vision (or other) tasks. These features and concepts had little utility in our experience, and presented stumbling blocks to outside contributors and plugin writers.
Although it’s still possible to add new VectorSources and other classes for reading data, our philosophy going forward is to prefer writing pre-processing scripts to get data into a format that Raster Vision can already consume. The VectorTileVectorSource was removed since it violates this new philosophy.
We previously attempted to make predictions for semantic segmentation work in a streaming fashion (to avoid running out of RAM), but the implementation was buggy and complex. So we reverted to holding all predictions for a scene in RAM, and now assume that scenes are roughly < 20,000 x 20,000 pixels. This works better from a parallelization standpoint anyway.
We switched to writing chips to disk incrementally during the CHIP command using a SampleWriter class to avoid running out of RAM.
The term “predict package” has been replaced with “model bundle”, since it rolls off the tongue better, and BUNDLE is the name of the command that produces it.
Class ids are now indexed starting at 0 instead of 1, which seems more intuitive. The “null class”, used for marking pixels in semantic segmentation that have not been labeled, used to be 0, and is now equal to len(class_ids).
The aws_batch runner was renamed batch due to a naming conflict, and the names of the configuration variables for Batch changed. See Setting up AWS Batch.
The next big features we plan on developing are:
the ability to read and write data in STAC format using the label extension. This will facilitate integration with other tools such as GroundWork.
the ability to train models on multi-band imagery, rather than having to pick a subset of three bands.
Raster Vision 0.11
Raster Vision 0.10
Notes on switching to PyTorch-based backends
The current backends based on Tensorflow have several problems:
They depend on third party libraries (Deeplab, TF Object Detection API) that are complex, not well suited to being used as dependencies within a larger project, and are each written in a different style. This makes the code for each backend very different from one another, and unnecessarily complex. This increases the maintenance burden, makes it difficult to customize, and makes it more difficult to implement a consistent set of functionality between the backends.
Tensorflow, in the maintainer’s opinion, is more difficult to write and debug than PyTorch (although this is starting to improve).
The third party libraries assume that training images are stored as PNG or JPG files. This limits our ability to handle more than three bands and more than 8 bits per channel. We have recently completed some research on how to train models on > 3 bands, and we plan on adding this functionality to Raster Vision.
Therefore, we are in the process of sunsetting the Tensorflow backends (which will probably be removed) and have implemented replacement PyTorch-based backends. The main things to be aware of in upgrading to this version of Raster Vision are as follows:
Instead of there being CPU and GPU Docker images (based on Tensorflow), there are now tf-cpu, tf-gpu, and pytorch (which works on both CPU and GPU) images. Using ./docker/build --tf or ./docker/build --pytorch will build only the TF or PyTorch images, respectively.
Using the TF backends requires being in the TF container, and similarly for PyTorch. There are now --tf-cpu, --tf-gpu, and --pytorch-gpu options for the ./docker/run command. The default setting is to use the PyTorch image in the standard (CPU) Docker runtime.
The raster-vision-aws CloudFormation setup creates Batch resources for TF-CPU, TF-GPU, and PyTorch. It also now uses default AMIs provided by AWS, simplifying the setup process.
To easily switch between running TF and PyTorch jobs on Batch, we recommend creating two separate Raster Vision profiles with the Batch resources for each of them.
The way to use the ConfigBuilders for the new backends can be seen in the examples repo and the Backend reference.
Features
Add confusion matrix as metric for semantic segmentation #788
Add predict_chip_size as option for semantic segmentation #786
Handle “ignore” class for semantic segmentation #783
Add stochastic gradient descent (“SGD”) as an optimizer option for chip classification #792
Add option to determine if all touched pixels should be rasterized for rasterized RasterSource #803
Script to generate GeoTIFF from ZXY tile server #811
Remove QGIS plugin #818
Add PyTorch backends and add PyTorch Docker image #821 and #823.
Raster Vision 0.9
Features
Add requester_pays RV config option #762
Unify Docker scripts #743
Switch default branch to master #726
Merge GeoTiffSource and ImageSource into RasterioSource #723
Simplify/clarify/test/validate RasterSource #721
Simplify and generalize geom processing #711
Predict zero for nodata pixels on semantic segmentation #701
Add support for evaluating vector output with AOIs #698
Conserve disk space when dealing with raster files #692
Optimize StatsAnalyzer #690
Include per-scene eval metrics #641
Make and save predictions and do eval chip-by-chip #635
Decrease semseg memory usage #630
Add support for vector tiles in .mbtiles files #601
Add support for getting labels from zxy vector tiles #532
Remove custom __deepcopy__ implementation from ConfigBuilders. #567
Add ability to shift raster images by given numbers of meters. #573
Add ability to generate GeoJSON segmentation predictions. #575
Add ability to run the DeepLab eval script. #653
Submit CPU-only stages to a CPU queue on Aws. #668
Parallelize CHIP and PREDICT commands #671
Refactor update_for_command to split out the IO reporting into report_io. #671
Add Multi-GPU Support to DeepLab Backend #590
Handle multiple AOI URIs #617
Give train_restart_dir Default Value #626
Use make to manage local execution #664
Optimize vector tile processing #676
Bug Fixes
Fix Deeplab resume bug: update path in checkpoint file #756
Allow Spaces in --channel-order Argument #731
Fix error when using predict packages with AOIs #674
Correct checkpoint name #624
Allow using default stride for semseg sliding window #745
Fix filter_by_aoi for ObjectDetectionLabels #746
Load null channel_order correctly #733
Handle Rasterio crs that doesn’t contain EPSG #725
Fixed issue with saving semseg predictions for non-georeferenced imagery #708
Fixed issue with handling width > height in semseg eval #627
Fixed issue with experiment configs not setting key names correctly #576
Fixed issue with Raster Sources that have channel order #576