# Determining seasonal extent of waterbodies with Sentinel-2

Keywords: data used; sentinel-2, water; extent, analysis; time series, band index; MNDWI, visualisation; animation

## Background

The United Nations have prescribed 17 "Sustainable Development Goals" (SDGs). This notebook attempts to monitor SDG Indicator 6.6.1 - change in the extent of water-related ecosystems. Indicator 6.6.1 has 4 sub-indicators:

> i. The spatial extent of water-related ecosystems
> ii. The quantity of water contained within these ecosystems
> iii. The quality of water within these ecosystems
> iv. The health or state of these ecosystems

This notebook primarily focuses on the first sub-indicator - spatial extents.

## Description

The notebook demonstrates how to:

1. Load satellite data over the water body of interest

2. Calculate the water index MNDWI

3. Resample the time-series of MNDWI to seasonal medians

4. Generate an animation of the water extent time-series

5. Calculate and plot a time series of seasonal water extent (in square kilometres)

6. Find the minimum and maximum water extents in the time-series and plot them.

7. Compare two nominated time-periods, and plot where the water-body extent has changed.

## Getting started

To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.

Import Python packages that are used for the analysis.

[1]:

%matplotlib inline

import datacube
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
from IPython.display import Image
from matplotlib.colors import ListedColormap
from matplotlib.patches import Patch

from deafrica_tools.bandindices import calculate_indices
from deafrica_tools.dask import create_local_dask_cluster
from deafrica_tools.datahandling import load_ard
from deafrica_tools.plotting import display_map, xr_animation


### Connect to the datacube

Activate the datacube database, which provides functionality for loading and displaying stored Earth observation data.

[2]:

dc = datacube.Datacube(app='water_extent')


## Set up a Dask cluster

Dask can be used to better manage memory use and conduct the analysis in parallel. For an introduction to using Dask with Digital Earth Africa, see the Dask notebook.

Note: We recommend opening the Dask processing window to view the different computations that are being executed; to do this, see the Dask dashboard in DE Africa section of the Dask notebook.

To activate Dask, set up the local computing cluster using the cell below.

[3]:

create_local_dask_cluster()


### Cluster

• Workers: 1
• Cores: 15
• Memory: 104.37 GB

### Analysis parameters

The following cell sets the parameters, which define the area of interest and the length of time to conduct the analysis over.

The parameters are:

• lat: The central latitude to analyse (e.g. 10.338).

• lon: The central longitude to analyse (e.g. -1.055).

• lat_buffer: The number of degrees to load around the central latitude.

• lon_buffer: The number of degrees to load around the central longitude.

• start_year and end_year: The date range to analyse (e.g. ('2017', '2020')).

If running the notebook for the first time, keep the default settings below. This will demonstrate how the analysis works and provide meaningful results. The example covers part of Lake Sulunga, Tanzania.

[4]:

# Define the area of interest
lat = -5.9460 #-6.0873
lon =  35.5188 #35.1817

lat_buffer = 0.03
lon_buffer = 0.03

# Combine central lat,lon with buffer to get area of interest
lat_range = (lat - lat_buffer, lat + lat_buffer)
lon_range = (lon - lon_buffer, lon + lon_buffer)

# Define the start year and end year
start_year = '2017'
end_year = '2021-05'


## View the area of interest on an interactive map

The next cell will display the selected area on an interactive map. The red border represents the area of interest of the study. Zoom in and out to get a better understanding of the area of interest. Clicking anywhere on the map will reveal the latitude and longitude coordinates of the clicked point.

[5]:

display_map(lon_range, lat_range)


[6]:

# Create a reusable query object
query = {
    'x': lon_range,
    'y': lat_range,
    'resolution': (-20, 20),
    'output_crs': 'EPSG:6933',
    'time': (start_year, end_year),
    'dask_chunks': {'time': 1},
}

# Lazily load Sentinel-2 data, masking poor-quality pixels
ds = load_ard(dc=dc,
              products=['s2_l2a'],
              measurements=['green', 'swir_1'],
              group_by='solar_day',
              mask_filters=[('opening', 3), ('dilation', 2)],
              **query)

print(ds)

Using pixel quality parameters for Sentinel 2
Finding datasets
s2_l2a
Applying morphological filters to pq mask [('opening', 3), ('dilation', 2)]
Returning 297 time steps as a dask array
<xarray.Dataset>
Dimensions:      (time: 297, y: 382, x: 290)
Coordinates:
* time         (time) datetime64[ns] 2017-01-19T07:57:34 ... 2021-05-28T08:...
* y            (y) float64 -7.534e+05 -7.534e+05 ... -7.61e+05 -7.61e+05
* x            (x) float64 3.424e+06 3.424e+06 3.424e+06 ... 3.43e+06 3.43e+06
spatial_ref  int32 6933
Data variables:
green        (time, y, x) float32 dask.array<chunksize=(1, 382, 290), meta=np.ndarray>
swir_1       (time, y, x) float32 dask.array<chunksize=(1, 382, 290), meta=np.ndarray>
Attributes:
crs:           EPSG:6933
grid_mapping:  spatial_ref


## Calculate the MNDWI water index

[7]:

# Calculate the MNDWI water index and add it to the loaded dataset
ds = calculate_indices(ds=ds, index='MNDWI', satellite_mission='s2', drop=True)

Dropping bands ['green', 'swir_1']
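For reference, MNDWI is defined as (Green − SWIR1) / (Green + SWIR1), with values above zero typically indicating water. A minimal NumPy sketch of the same calculation, independent of deafrica_tools (the reflectance values here are purely illustrative):

```python
import numpy as np

def mndwi(green, swir1):
    """Modified Normalised Difference Water Index."""
    green = np.asarray(green, dtype=float)
    swir1 = np.asarray(swir1, dtype=float)
    return (green - swir1) / (green + swir1)

# Water reflects strongly in green and weakly in SWIR,
# so MNDWI is positive over water and negative over land
print(mndwi(0.3, 0.1))  # positive -> water
print(mndwi(0.1, 0.4))  # negative -> land
```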


## Resample time series

Due to many factors (e.g. cloud obscuring the region, missed cloud cover in the fmask layer) the data will be gappy and noisy. Here, we resample the data to ensure we are working with a consistent time series.

To do this, we resample the data to seasonal time steps using medians.

These calculations will take several minutes to complete as we will run .compute(), triggering all the tasks we scheduled above and bringing the arrays into memory.

[8]:

%%time
sample_frequency="QS-DEC"  # quarterly starting in DEC, i.e. seasonal

#resample using medians
print('calculating MNDWI seasonal medians...')
mndwi = ds['MNDWI'].resample(time=sample_frequency).median().compute()

calculating MNDWI seasonal medians...


CPU times: user 2.7 s, sys: 198 ms, total: 2.9 s
Wall time: 19.1 s
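The "QS-DEC" alias bins time steps into quarters starting in December, March, June and September, which matches the standard climatological seasons (DJF, MAM, JJA, SON). A small pandas sketch with synthetic monthly values illustrates the binning:

```python
import pandas as pd

# Six synthetic monthly observations, January to June 2019
s = pd.Series([0, 1, 2, 3, 4, 5],
              index=pd.date_range("2019-01-01", periods=6, freq="MS"))

# Resample to seasonal (quarterly, December-anchored) medians
seasonal = s.resample("QS-DEC").median()
print(seasonal)
# Jan + Feb fall in the DJF season labelled 2018-12-01,
# Mar + Apr + May in MAM (2019-03-01), and Jun in JJA (2019-06-01)
```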


### Facet plot the MNDWI time-steps

[9]:

mndwi.plot(col='time', col_wrap=4, cmap='RdBu', vmax=1, vmin=-1);


## Animating time series

In the next cell, we plot the dataset we loaded above as an animated GIF using the xr_animation function. The file at output_path will be saved in the directory where the notebook is located; change the name to prevent overwriting existing files.

[10]:

out_path = 'water_extent.gif'

xr_animation(ds=mndwi.to_dataset(name='MNDWI'),
             output_path=out_path,
             bands=['MNDWI'],
             show_text='Seasonal MNDWI',
             interval=500,
             width_pixels=300,
             show_colorbar=True,
             imshow_kwargs={'cmap': 'RdBu', 'vmin': -0.5, 'vmax': 0.5},
             colorbar_kwargs={'colors': 'black'})

# Plot animated gif
plt.close()
Image(filename=out_path)

Exporting animation to water_extent.gif

[10]:

<IPython.core.display.Image object>


## Calculate the area per pixel

The number of water pixels can be converted into the area of the waterbody if the area of each pixel is known. Run the following cell to generate the constants needed for this conversion.

[11]:

pixel_length = query["resolution"][1]  # in metres
m_per_km = 1000  # conversion from metres to kilometres
area_per_pixel = pixel_length**2 / m_per_km**2


## Calculating the extent of water

Calculate the area of pixels classified as water (a pixel is classified as water if MNDWI > 0).

[12]:

# Keep only water pixels (MNDWI > 0)
water = mndwi.where(mndwi > 0, np.nan)

# Set all water pixels to 1 so they can be counted
area_ds = water.where(np.isnan(water), 1)

# Sum the water pixels per time step and convert to square kilometres
ds_valid_water_area = area_ds.sum(dim=['x', 'y']) * area_per_pixel
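The same pixel-counting logic can be checked with plain NumPy on a tiny synthetic MNDWI array (values chosen purely for illustration), using the 20 m pixel size from the query above:

```python
import numpy as np

mndwi_toy = np.array([[0.3, -0.2],
                      [0.1, -0.5]])  # 2 of the 4 pixels are water

pixel_length = 20                            # metres, matching the query resolution
area_per_pixel = pixel_length**2 / 1000**2   # km^2 per pixel

water_pixels = (mndwi_toy > 0).sum()
water_area_km2 = water_pixels * area_per_pixel
print(water_area_km2)  # 2 pixels x 0.0004 km^2 each
```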


### Plot seasonal time series from the start year to the end year

[13]:

plt.figure(figsize=(18, 4))
ds_valid_water_area.plot(marker='o', color='#9467bd')
plt.title(f'Observed Seasonal Area of Water from {start_year} to {end_year}')
plt.xlabel('Dates')
plt.ylabel('Waterbody area (km$^2$)')
plt.tight_layout()


## Determine minimum and maximum water extent

The next cell extracts the minimum and maximum water extents from the dataset using the min and max functions, then stores the corresponding dates in an xarray.DataArray.

[14]:

min_water_area_date, max_water_area_date =  min(ds_valid_water_area), max(ds_valid_water_area)
time_xr = xr.DataArray([min_water_area_date.time.values, max_water_area_date.time.values], dims=["time"])

print(time_xr)

<xarray.DataArray (time: 2)>
array(['2019-09-01T00:00:00.000000000', '2021-03-01T00:00:00.000000000'],
dtype='datetime64[ns]')
Dimensions without coordinates: time
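As a cross-check, the same dates can be found with NumPy's argmin/argmax on the computed area values; this sketch uses made-up areas and dates rather than the notebook's results:

```python
import numpy as np

# Hypothetical seasonal water areas (km^2) and their season start dates
areas = np.array([4.2, 1.1, 3.5, 6.8])
dates = np.array(['2019-06-01', '2019-09-01', '2019-12-01', '2021-03-01'],
                 dtype='datetime64[D]')

# Index of the smallest / largest area gives the corresponding date
min_date = dates[np.argmin(areas)]
max_date = dates[np.argmax(areas)]
print(min_date, max_date)
```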


### Plot the dates when the min and max water extent occur

Plot the water-classified pixels for the two dates with the minimum and maximum surface water extents.

[15]:

area_ds.sel(time=time_xr).plot.imshow(col="time", col_wrap=2, figsize=(14, 6));


## Compare two time periods

The following cells compare the extent of water between two nominated periods:

• baseline_time: The baseline period for the analysis

• analysis_time: The period to compare against the baseline

[16]:

baseline_time = '2019-03-01'
analysis_time = '2020-03-01'

baseline_ds = ds_valid_water_area.sel(time=baseline_time, method='nearest')
analysis_ds = ds_valid_water_area.sel(time=analysis_time, method='nearest')


A new DataArray is created to store the dates of the two selected time steps.

[17]:

time_xr = xr.DataArray([baseline_ds.time.values, analysis_ds.time.values], dims=["time"])


## Plotting

Plot water extent of the MNDWI product for the two chosen periods.

[18]:

area_ds.sel(time=time_xr).plot(col="time", col_wrap=2, robust=True, figsize=(10, 5), cmap='viridis', add_colorbar=False);


## Calculating the change for the two nominated periods

The cells below classify each pixel as water gained, water lost, or stable between the two periods.

[19]:

# Extract the two periods (baseline and analysis) from the dataset,
# with water pixels set to 1 and everything else set to 0
ds_selected = area_ds.where(area_ds == 1, 0).sel(time=time_xr)

analyse_total_value = ds_selected[1]
change = analyse_total_value - ds_selected[0]

water_appeared = change.where(change == 1)
permanent_water = change.where((change == 0) & (analyse_total_value == 1))
permanent_land = change.where((change == 0) & (analyse_total_value == 0))
water_disappeared = change.where(change == -1)
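The classification above reduces to simple arithmetic on binary water masks; a NumPy sketch with a toy 2x2 example (1 = water, 0 = land):

```python
import numpy as np

baseline = np.array([[1, 1],
                     [0, 0]])
analysis = np.array([[1, 0],
                     [1, 0]])

# 1 = water appeared, -1 = water disappeared, 0 = no change
change = analysis - baseline

water_appeared    = (change == 1)
water_disappeared = (change == -1)
permanent_water   = (change == 0) & (analysis == 1)
permanent_land    = (change == 0) & (analysis == 0)

print(water_appeared.sum(), water_disappeared.sum(),
      permanent_water.sum(), permanent_land.sum())
```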


The cell below calculates the total area, along with the areas of water gained, water lost, and permanent water.

[20]:

total_area = analyse_total_value.count().values * area_per_pixel
water_appeared_area = water_appeared.count().values * area_per_pixel
permanent_water_area = permanent_water.count().values * area_per_pixel
water_disappeared_area = water_disappeared.count().values * area_per_pixel


## Plotting

The water change classes are plotted to visualise the result.

[21]:

water_appeared_color = "Green"
water_disappeared_color = "Yellow"
stable_color = "Blue"

fig, ax = plt.subplots(1, 1, figsize=(10, 10))

# Plot the analysis-period water mask as the background,
# then overlay each change class in its own colour
ds_selected[1].plot.imshow(cmap="Pastel1", ax=ax)
water_appeared.plot.imshow(cmap=ListedColormap([water_appeared_color]), ax=ax)
water_disappeared.plot.imshow(cmap=ListedColormap([water_disappeared_color]), ax=ax)
permanent_water.plot.imshow(cmap=ListedColormap([stable_color]), ax=ax)

plt.legend(
    [
        Patch(facecolor=stable_color),
        Patch(facecolor=water_disappeared_color),
        Patch(facecolor=water_appeared_color),
    ],
    [
        f"Water to Water: {round(permanent_water_area, 2)} km$^2$",
        f"Water to No Water: {round(water_disappeared_area, 2)} km$^2$",
        f"No Water to Water: {round(water_appeared_area, 2)} km$^2$",
    ],
    loc="lower left",
)

plt.title("Change in water extent: " + baseline_time + " to " + analysis_time);


## Next steps

Return to the "Analysis parameters" section, modify some values (e.g. latitude, longitude, start_year, end_year) and re-run the analysis. You can use the interactive map in the "View the area of interest on an interactive map" section to find new central latitude and longitude values by panning and zooming, then clicking on the area you wish to extract location values for. You can also use Google Maps to search for a location you know, then read off the latitude and longitude values by clicking on the map.

You can also change the periods in the "Compare two time periods" section (e.g. baseline_time, analysis_time) and re-run the analysis.

Contact: If you need assistance, please post a question on the Open Data Cube Slack channel or on the GIS Stack Exchange using the open-data-cube tag. If you would like to report an issue with this notebook, you can file one on GitHub.

Compatible datacube version:

[22]:

print(datacube.__version__)

1.8.6


Last Tested:

[23]:

from datetime import datetime
datetime.today().strftime('%Y-%m-%d')

[23]:

'2022-07-07'