NDVI Image Synthesis with Image-to-Image Translation Networks

Name
Hudson Taylor Lekunze
Abstract
As image-to-image translation continues to achieve credible results across image manipulation tasks (notably image and video generation), controlled image synthesis offers rewarding possibilities for applications that require additional constraints on the training objectives and resulting outputs. Consequently, this thesis explores constrained generative modeling to synthesize Normalized Difference Vegetation Index (NDVI) images, which characterize plant growth and crop health, from radar and Multi-Spectral (MS) satellite data. NDVI is derived from MS imagery, but cloud cover often corrupts this data, leaving further gaps in an already sparse time series limited by the repeat frequency of satellite missions. Significant research already focuses on predicting NDVI as a single-point indicator averaged over agricultural parcels. This thesis instead proposes an image generation approach using Conditional GANs (CGANs) to synthesize images tiled at $512 \times 512$ px and enclosing several agricultural parcels. We select different areas in Estonia for training and validation, and show that a Multi-Temporal Conditional GAN (MTCGAN) can generate perceptually convincing images with a structural similarity of 0.8688 and a per-pixel accuracy of 0.9429.
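For context, the NDVI values that the thesis synthesizes are conventionally computed from the red and near-infrared (NIR) reflectance bands of a multi-spectral sensor as (NIR - Red) / (NIR + Red). The sketch below illustrates this standard formula on a single tile; the band names and random reflectance values are illustrative assumptions, not details taken from the thesis, while the 512 x 512 tile size follows the abstract.

```python
# Minimal sketch of the standard NDVI computation referenced in the abstract.
# Band identities (e.g. Sentinel-2 B8 as NIR, B4 as red) are assumptions.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalized Difference Vegetation Index, values in [-1, 1]."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Example on a hypothetical 512 x 512 multi-spectral tile
rng = np.random.default_rng(0)
tile_nir = rng.uniform(0.0, 1.0, size=(512, 512))  # assumed NIR reflectance band
tile_red = rng.uniform(0.0, 1.0, size=(512, 512))  # assumed red reflectance band
print(ndvi(tile_nir, tile_red).shape)  # (512, 512)
```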
Graduation Thesis language
English
Graduation Thesis type
Master - Computer Science
Supervisor(s)
Marharyta Dekret, Tambet Matiisen
Defence year
2022