Extracting training data from the ODC

Keywords: data used; sentinel-2 geomedian, data used; MADs, data methods; machine learning

Context

Training data is the most important part of any supervised machine learning workflow. The quality of the training data has a greater impact on the classification than the algorithm used. Large and accurate training datasets are preferable: increasing the training sample size results in increased classification accuracy (Maxwell et al. 2018). A review of training data methods in the context of Earth Observation is available here.

When creating training labels, be sure to capture the spectral variability of each class, and to use imagery from the time period you want to classify (rather than relying on basemap composites). Another common problem with training data is class imbalance. This occurs when one of your classes is relatively rare and therefore comprises a smaller proportion of the training set. When imbalanced data is used, the final classification commonly under-predicts less abundant classes relative to their true proportion.
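
A quick way to check for class imbalance is to inspect the label counts in the reference file before training. A minimal sketch, assuming the file path and "class" column used in the default example later in this notebook:

    import geopandas as gpd

    # Load the reference labels (the path and column name below are the ones
    # used in the default example later in this notebook)
    labels = gpd.read_file('data/crop_training_egypt.geojson')

    # Count geometries per class and express the counts as proportions
    counts = labels['class'].value_counts()
    print(counts)
    print((counts / counts.sum()).round(2))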

There are many platforms for gathering training labels; the best one to use depends on your application. GIS platforms are great for collecting training data as they are highly flexible and mature; Geo-Wiki and Collect Earth Online are two open-source websites that may also be useful depending on the reference data strategy employed. Alternatively, there are many pre-existing training datasets on the web that may be useful, e.g. Radiant Earth manages a growing number of reference datasets for use by anyone.

Description

This notebook will extract training data (feature layers) from the Open Data Cube using the geometries within a GeoJSON file. The default example uses the crop/non-crop labels within the 'data/crop_training_egypt.geojson' file.

To do this, we rely on a custom deafrica-sandbox-notebooks function called collect_training_data, contained within the deafrica_tools.classification module. The principal goal of this notebook is to familiarise users with this function so they can extract the appropriate data for their use-case. The default example also highlights extracting a set of useful feature layers for generating a cropland mask for Egypt.

  1. Preview the polygons in our training data by plotting them on a basemap

  2. Define a feature layer function to pass to collect_training_data

  3. Extract training data from the datacube using collect_training_data

  4. Export the training data to disk for use in subsequent scripts


Getting started

To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.

Load packages

[1]:
%matplotlib inline

# Force GeoPandas to use Shapely instead of PyGEOS
# In a future release, GeoPandas will switch to using Shapely by default.
import os
os.environ['USE_PYGEOS'] = '0'

import datacube
import numpy as np
import xarray as xr
import subprocess as sp
import geopandas as gpd
from odc.io.cgroups import get_cpu_quota
from datacube.utils.geometry import assign_crs

from deafrica_tools.plotting import map_shapefile
from deafrica_tools.bandindices import calculate_indices
from deafrica_tools.classification import collect_training_data

Analysis parameters

  • path: The path to the input vector file from which we will extract training data. A default geojson is provided.

  • field: The name of the column in your vector file's attribute table that contains the class labels. The class labels must be integers.

[2]:
path = 'data/crop_training_egypt.geojson'
field = 'class'

Find the number of CPUs

[3]:
ncpus=round(get_cpu_quota())
print('ncpus = '+str(ncpus))
ncpus = 15
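
Note that outside the Sandbox, get_cpu_quota may return None when no CPU quota has been set for the container, which would cause the call above to fail. A minimal fallback sketch, assuming you want to use all available cores in that case:

    import multiprocessing
    from odc.io.cgroups import get_cpu_quota

    # Fall back to the machine's CPU count if no cgroup CPU quota is set
    quota = get_cpu_quota()
    ncpus = round(quota) if quota is not None else multiprocessing.cpu_count()
    print('ncpus = ' + str(ncpus))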

Preview input data

We can load and preview our input data using geopandas. The file should contain a column with class labels (e.g. "class"). These labels will be used to train our model.

Remember, the class labels must be represented by integers.

[4]:
# Load input data
input_data = gpd.read_file(path)

# Plot first five rows
input_data.head()
[4]:
class geometry
0 0 POLYGON ((26.19189 22.06193, 26.19230 22.06193...
1 0 POLYGON ((32.24947 22.07338, 32.24989 22.07338...
2 0 POLYGON ((32.62301 22.15862, 32.62342 22.15862...
3 0 POLYGON ((28.35345 22.29337, 28.35386 22.29337...
4 0 POLYGON ((27.72311 22.83994, 27.72352 22.83994...
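
If your own vector file stores class labels as strings rather than integers, convert them before extracting training data. A minimal sketch, assuming a hypothetical 'label' column containing the strings 'crop' and 'non-crop':

    # Hypothetical mapping from string labels to integers; 'label' is an
    # assumed column name - replace both with your own values
    label_map = {'non-crop': 0, 'crop': 1}
    input_data['class'] = input_data['label'].map(label_map).astype(int)
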
[5]:
# Plot training data in an interactive map
map_shapefile(input_data, attribute=field)

Extracting training data

The function collect_training_data takes our geojson containing class labels and extracts training data (features) from the datacube over the locations specified by the input geometries. The function will also pre-process our training data by stacking the arrays into a useful format and removing any NaN or inf values.

The below variables can be set within the collect_training_data function:

  • zonal_stats: An optional string giving the names of zonal statistics to calculate across each polygon (if the geometries in the vector file are polygons and not points). Default is None (all pixel values are returned). Supported values are “mean”, “median”, “max”, and “min”.

In addition to the zonal_stats parameter, we need to define a query dictionary for the Open Data Cube, specifying options such as measurements (the bands to load from the satellite), resolution (the cell size), and output_crs (the output projection). This dictionary is passed to collect_training_data via the dc_query parameter, i.e. collect_training_data(dc_query=query, ...). The query dictionary will also be the only argument to the feature layer function, which we define and describe in a moment.

Note: collect_training_data also has a number of additional parameters for handling ODC I/O read failures, where polygons that return an excessive number of null values can be resubmitted to the multiprocessing queue. Check out the docs to learn more.

[6]:
#set up our inputs to collect_training_data
zonal_stats = 'mean'

# Set up the inputs for the ODC query
time = ('2019')
measurements =  ['blue','green','red','nir','swir_1','swir_2','red_edge_1',
                 'red_edge_2', 'red_edge_3', 'BCMAD', 'EMAD', 'SMAD']
resolution = (-20,20)
output_crs='epsg:6933'

Generate a datacube query object from the parameters above:

[7]:
query = {
    'time': time,
    'measurements': measurements,
    'resolution': resolution,
    'output_crs': output_crs
}

Defining feature layers

To create the desired feature layers, we pass instructions to collect_training_data through the feature_func parameter.

  • feature_func: A function for generating feature layers that is applied to the data within the bounds of the input geometry. The feature_func must accept a dc_query dictionary, and return a single xarray.Dataset or xarray.DataArray containing 2D coordinates (i.e x, y - no time dimension). e.g.

    def feature_function(query):
        dc = datacube.Datacube(app='feature_layers')
        ds = dc.load(**query)
        ds = ds.mean('time')
        return ds
    

Below, we will define a more complicated feature layer function than the brief example shown above. We will calculate some band indices on the Sentinel-2 geoMAD and append a slope dataset.

[8]:
from datacube.testutils.io import rio_slurp_xarray

def feature_layers(query):
    #connect to the datacube
    dc = datacube.Datacube(app='feature_layers')

    #load s2 annual geomedian
    ds = dc.load(product='gm_s2_annual',
                 **query)

    #calculate some band indices
    da = calculate_indices(ds,
                           index=['NDVI', 'LAI', 'MNDWI'],
                           drop=False,
                           satellite_mission='s2')

    #add slope dataset
    url_slope = "https://deafrica-input-datasets.s3.af-south-1.amazonaws.com/srtm_dem/srtm_africa_slope.tif"
    slope = rio_slurp_xarray(url_slope, gbox=ds.geobox)
    slope = slope.to_dataset(name='slope')

    #merge results into single dataset
    result = xr.merge([da, slope],compat='override')

    return result.squeeze()
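
Before running collect_training_data over every polygon, it can be useful to sanity-check the feature function on a small area. A minimal sketch, assuming hypothetical longitude/latitude bounds that are not part of the workflow below:

    # Hypothetical small test area (longitude/latitude); these bounds are not
    # part of the main workflow and only serve to check the function runs
    test_query = {**query, 'x': (31.00, 31.05), 'y': (27.00, 27.05)}

    test_ds = feature_layers(test_query)
    print(test_ds)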

Now let’s run the collect_training_data function.

Note: With supervised classification, it's common to have many labelled geometries in the training data. collect_training_data can parallelize across the geometries to speed up the extraction of training data. Setting ncpus>1 will automatically trigger the parallelization. However, it's best to begin with ncpus=1 to assist with debugging before triggering the parallelization. You can also limit the number of polygons to run when checking code; for example, passing in gdf=input_data[0:5] will only run the code over the first 5 polygons.
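
For example, a debugging run over the first five polygons on a single CPU might look like the sketch below (the keyword arguments mirror the full run in the next cell):

    # Debugging sketch: single process, first five polygons only
    debug_cols, debug_data = collect_training_data(
        gdf=input_data[0:5],
        dc_query=query,
        ncpus=1,
        field=field,
        zonal_stats=zonal_stats,
        feature_func=feature_layers
    )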

[9]:
column_names, model_input = collect_training_data(
                                    gdf=input_data,
                                    dc_query=query,
                                    ncpus=ncpus,
                                    field=field,
                                    zonal_stats=zonal_stats,
                                    feature_func=feature_layers
                                    )
Taking zonal statistic: mean
Collecting training data in parallel mode
Percentage of possible fails after run 1 = 0.0 %
Removed 0 rows wth NaNs &/or Infs
Output shape:  (156, 17)

The first object returned by the function is a list (column_names) containing the names of the feature layers we've computed:

[10]:
print(column_names)
['class', 'blue', 'green', 'red', 'nir', 'swir_1', 'swir_2', 'red_edge_1', 'red_edge_2', 'red_edge_3', 'BCMAD', 'EMAD', 'SMAD', 'NDVI', 'LAI', 'MNDWI', 'slope']

The second object returned by the function is a numpy.array (model_input) containing the data extracted from our labelled geometries. The first value in each row is the class integer (in the default example, 1 for "crop" or 0 for "non-crop"), and the remaining values are the feature layers we computed:

[11]:
print(np.array_str(model_input, precision=2, suppress_small=True))
[[   0.   1545.25 2627.25 ...    0.13   -0.4     3.84]
 [   0.   1945.75 3164.75 ...    0.13   -0.38    3.59]
 [   0.   1274.25 2244.   ...    0.1    -0.47    8.8 ]
 ...
 [   1.    619.25  914.75 ...    1.47   -0.36    2.79]
 [   1.    535.25  862.75 ...    1.76   -0.42    0.94]
 [   1.    494.5   791.25 ...    1.52   -0.4     2.94]]
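
A common next step in a model-fitting notebook is to separate this array into labels and features. A minimal sketch, assuming the layout described above with the class label in the first column:

    # First column holds the class label, remaining columns hold the features
    y = model_input[:, 0]
    X = model_input[:, 1:]
    print(X.shape, y.shape)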

Export training data

Once we’ve collected all the training data we require, we can write the data to disk. This will allow us to import the data in the next step(s) of the workflow.

[12]:
#set the name and location of the output file
output_file = "results/test_training_data.txt"
[13]:
#grab all columns
model_col_indices = [column_names.index(var_name) for var_name in column_names]
#Export files to disk
np.savetxt(output_file, model_input[:, model_col_indices], header=" ".join(column_names), fmt="%4f")
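
In the next step(s) of the workflow, the saved file can be read back into memory. A minimal sketch, assuming the whitespace-separated format and header line written above:

    # Reload the array and recover the column names from the header line
    with open("results/test_training_data.txt") as f:
        header = f.readline().lstrip("# ").split()

    reloaded = np.loadtxt("results/test_training_data.txt")
    print(header)
    print(reloaded.shape)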

Additional information

License: The code in this notebook is licensed under the Apache License, Version 2.0. Digital Earth Africa data is licensed under the Creative Commons by Attribution 4.0 license.

Contact: If you need assistance, please post a question on the Open Data Cube Slack channel or on the GIS Stack Exchange using the open-data-cube tag (you can view previously asked questions here). If you would like to report an issue with this notebook, you can file one on Github.

Compatible datacube version:

[14]:
print(datacube.__version__)
1.8.15

Last Tested:

[15]:
from datetime import datetime
datetime.today().strftime('%Y-%m-%d')
[15]:
'2023-08-11'