CORTEX

Deep Learning simplification for on-board processing

 
Objectives

The main objective of the project is to define a workflow easing the integration and reduction of complex Deep Neural Network (DNN) models on SoC (System-on-Chip) FPGA platforms.

 
Context

On-board image processing (HR, VHR & hyperspectral)

This workflow will be demonstrated on a Deep Learning (DL) image processing pipeline devoted to feature extraction in Earth Observation images on board small satellites.

This pipeline will be as generic as possible, so that the recognition tasks it performs (identification of clouds, floods, planes, ships or more generic objects) can be updated in flight with new models required by new applications or users (for example, replacing ship detection by oil spill or fire detection).

The main challenge of this activity is to define the most suitable combination of methods for DL network simplification (pruning, compression, …) allowing efficient but complex networks to be executed within on-board hardware resources (space-qualified or not, including FPGAs). These methods are generic enough to be applied to networks, or aggregates of networks, with different types of architectures.
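
To give a flavour of what such a simplification step can look like in practice, here is a minimal, purely illustrative PyTorch sketch of magnitude-based weight pruning; the toy model and the 50% sparsity level are placeholders, not the project's actual toolchain or settings.

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    # Hypothetical stand-in for a detection/segmentation backbone.
    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    )

    # Zero out the 50% of weights with the smallest L1 magnitude in every conv layer.
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=0.5)
            prune.remove(module, "weight")  # make the sparsity permanent

    # Count the remaining non-zero parameters.
    nonzero = sum(int((p != 0).sum()) for p in model.parameters())
    total = sum(p.numel() for p in model.parameters())
    print(f"non-zero parameters: {nonzero}/{total}")

In practice, pruning of this kind is typically followed by fine-tuning on the target task to recover the accuracy lost when weights are removed.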

In summary, the main objectives of the project are:
  • Reduce the number of DNN free parameters to fit in existing devices suitable for cubesat platforms, with minimal performance loss and high throughput
  • Propose and benchmark generic methodologies for DNN simplification
Some applications
  • Surveillance: identify ships, planes or vehicles for site monitoring
  • Agriculture: analyse crops and soil evolution
  • Forest fires: identify and help fight forest fires
  • Icebergs: monitor the evolution of glaciers and the movement of drifting icebergs
  • Other: any other application
Acknowledgements
This work is supervised by the EOP Φ-lab and funded by a contract in the framework of the EO SCIENCE FOR SOCIETY PERMANENTLY OPEN CALL FOR PROPOSALS EOEP-5 BLOCK 4 issued by the European Space Agency.

Results


21 Oct 2022

Database updates

In the frame of the CORTEX project we have updated one of our databases and created a new one dealing with PRISMA hyperspectral imagery!

PRISMA-HSI-Forest dataset
This dataset provides more than 1000 labelled hyperspectral images; it is the first database providing as many high-quality HS images. The images are based on PRISMA products, and the ground truth is based on IGN BD Forest V2. Four classes are used: deciduous forest, coniferous forest, mixed forest and non-tree. Details about its content are provided in the technical note attached to the Zenodo record. Considerable efforts were made to ensure the high quality of the dataset, especially regarding the coregistration between the ground truth and the images. To assess this quality, a segmentation network was trained on the dataset, and the good results obtained confirmed the coherence between the final set of images and the ground truth. This database was generated by AGENIUM Space in the framework of the CORTEX project (https://esacortexproject.agenium-space.com/), funded by ESA in the framework of the EO SCIENCE FOR SOCIETY PERMANENTLY OPEN CALL FOR PROPOSALS EOEP-5 BLOCK 4. The Zenodo access link is: https://zenodo.org/record/7230134

Ship-S2-AIS dataset
This database provides an ARD (Analysis Ready Data) classification dataset, automatically labelled using AIS data and manually checked. It is composed of 13k tiles extracted from 29 free Sentinel-2 products: 2k images show ships in Denmark's sovereign waters (one may detect cargo, fishing or container ships), and the others show negatives (sea, coasts, urban areas, …). Technical details about its content are provided in the technical note attached to the database. This database was generated by AGENIUM Space in the framework of the CORTEX project (https://esacortexproject.agenium-space.com/), funded by ESA in the framework of the EO SCIENCE FOR SOCIETY PERMANENTLY OPEN CALL FOR PROPOSALS EOEP-5 BLOCK 4. The Zenodo access link is: https://zenodo.org/record/7229756
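
Purely as a hypothetical illustration of how such an image/mask segmentation pair might be consumed (the file names, directory layout and band arrangement below are assumptions, not the actual Zenodo archive structure), one could inspect the class balance of a tile like this:

    import numpy as np
    import rasterio

    # Hypothetical paths; the real archive layout is described in the Zenodo technical note.
    image_path = "prisma_hsi_forest/images/tile_0001.tif"
    mask_path = "prisma_hsi_forest/masks/tile_0001.tif"

    CLASSES = ["deciduous forest", "coniferous forest", "mixed forest", "non-tree"]

    with rasterio.open(image_path) as src:
        image = src.read()       # (bands, height, width) hyperspectral cube
    with rasterio.open(mask_path) as src:
        mask = src.read(1)       # (height, width) class indices 0..3

    # Per-class pixel frequencies, useful to spot class imbalance before training.
    counts = np.bincount(mask.ravel(), minlength=len(CLASSES))
    for name, count in zip(CLASSES, counts):
        print(f"{name}: {count / mask.size:.1%}")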

21 Oct 2022

Results of the CORTEX extension project

The CORTEX CCN is an extension of the CORTEX project capitalizing on the developments in Artificial Intelligence (AI) performed in the previous phase, during which AGENIUM Space developed a complete pipeline for DNN simplification and AI-based image analysis on board satellites. This extension tackles key operational elements of TinyML. It specifically attempts to: 1) improve semi-automatic sample synthesis with generative AI models, in particular for hyperspectral use cases, 2) incorporate some interpretability in the Deep Learning (DL) models through trustworthy AI, and 3) allow the transfer of DL models between various sensors.

The first objective was addressed by exploring new ways to generate synthetic images with associated labels for hyperspectral use cases. Such a method is interesting when too few annotated data are available to train a deep neural network: a generative model can then be trained, for example, using a set of unlabelled data and a small amount of labelled data. The context of hyperspectral imagery is particularly suited to this problem, since labelled hyperspectral datasets are scarce and generally very small. The first step of the project consisted in defining a hyperspectral dataset with associated ground truth in order to correctly train the models. We constituted a segmentation dataset based on PRISMA products and IGN BD Forest V2 (PRISMA-HSI-Forest dataset, https://zenodo.org/record/7230134). To correctly match the ground truth and the images, significant work was done to improve the geolocation of the PRISMA images: small patches of PRISMA images were coregistered with Sentinel-2 images (which have very accurate geolocation) to obtain a better estimate of the position of each patch. A segmentation model was then trained on the dataset to assess its quality and the feasibility of forest-type segmentation. Good results were obtained using a Unet-EfficientNet segmentation network, showing that the dataset is globally coherent in terms of association between image and ground truth. Finally, an important research effort was devoted to developing a Generative Adversarial Network (GAN) method able to generate synthetic hyperspectral images. Since the final goal was to generate synthetic ground truth masks alongside the images, the SemanticGAN method was selected to address this problem. The study showed good results for the generation of HS images up to a certain number of bands. Regarding the generation of masks, the initial expectation was that it would help stabilize the generation of images, but the experiments showed the contrary; more research will be necessary to obtain pairs of images and masks that could be used to train a DNN. Overall, the increase of the spectral dimension is a key difficulty of the problem.

The second objective was to investigate the possibility of associating a confidence score with the predictions of a Deep Neural Network (DNN) in an Earth Observation (EO) scenario. Most DNNs are designed to predict a class, a segmentation map or detections, regardless of whether the prediction is an interpolation or an extrapolation. A confidence score therefore answers the need for interpretable outputs and could help an AI4EO end user take a decision.

We investigated one method, the ConfidNet approach, on two use cases: one based on S2 tiles containing ships or not (Ship-S2-AIS dataset, https://zenodo.org/record/7229756), and the other on the classification of 10 geophysical phenomena from Sentinel-1 wave mode. The main results obtained in this study are the relevance of the ConfidNet approach in AI4EO scenarios, the possibility of reducing the network with a view to on-board deployment, and first evidence that the ConfidNet approach can learn in a different way from classification networks, with interesting generalization properties. Significant work was also done to produce and publish the ARD database of the first use case, which is a good outcome of this study and a valuable contribution to open science.

Finally, we also addressed the topic of transferring DL models, through the on-boarding of a DL model on the Unibap/ION mission. We sent a forest and cloud segmentation DNN, trained on known S2 images (which were uploaded on board as well), to the ION satellite. This experiment was very conclusive and allowed us to prove that our solution is flight-proven on CPU.

In conclusion, the work performed on the CORTEX project has allowed us to investigate innovative AI methods and, as a consequence, has enlarged our competencies in the domain. This kind of exploratory activity is an essential part of AGENIUM Space's AI solution development.
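
To make the ConfidNet idea more concrete, here is a minimal generic sketch of the approach as described in the literature, not the project's actual implementation; the feature dimension and layer sizes are placeholders. With the classifier frozen, a small auxiliary head is trained to regress the True Class Probability (the softmax score the classifier assigns to the ground-truth class), and its output is then used as the confidence score at inference time.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConfidenceHead(nn.Module):
        """Auxiliary head predicting a confidence score in [0, 1] from classifier features."""
        def __init__(self, feature_dim: int):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(feature_dim, 128), nn.ReLU(),
                nn.Linear(128, 1), nn.Sigmoid(),
            )

        def forward(self, features):
            return self.mlp(features).squeeze(-1)

    def confidnet_loss(logits, features, targets, head):
        """Regress the True Class Probability (TCP) of the frozen classifier."""
        with torch.no_grad():
            probs = F.softmax(logits, dim=1)
            tcp = probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # prob of the true class
        confidence = head(features)
        return F.mse_loss(confidence, tcp)

At inference time the head's output replaces the raw softmax maximum as the confidence attached to each prediction, which is the kind of interpretable score the study evaluated on the ship/no-ship and Sentinel-1 wave-mode use cases.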

14 Dec 2020

Summary of results

The main objective of CORTEX is to define a workflow easing the integration and reduction of complex deep neural network (DNN) models on SoC FPGA devices embarked on space platforms. This workflow has been demonstrated on a deep learning (DL) image processing pipeline devoted to feature extraction and tested on an FPGA representative of the on-board hardware.

In order to define a generic pipeline, three use cases were selected: 1) ship detection in Sentinel-2 images with a DNN trained using transfer learning techniques; 2) oil spill and ocean feature detection in Sentinel-1 images, with two separate models developed to demonstrate the flexibility offered by easy on-board processing updates; 3) Sentinel-1 to Sentinel-2 transformation applied to specific regions (refugee camps), a demonstrative use case for future applications based on generative adversarial networks. The Deep Learning models have been implemented and delivered, together with the generated/modified databases and training software.

The networks trained for use cases 1 and 2 showed very good performance (F1-score over 80%) compared to available public results. The results obtained for use case 3 were encouraging, even though they would need a more extended study. The distillation process proved fairly robust and provided a low performance loss despite a drastic reduction of the number of parameters, by a factor of 52, on use cases 1 and 2. A small performance drop was observed for the oil spill case, but we believe that further parameter exploration should solve it. The technology-push use case 3 proved to be difficult, but we gained insights into the distillation of GAN approaches; in particular, we managed to use a loss that focuses more on structures in the images.

The inference code for the simplified/distilled DNN models was ported to a middle/low-range FPGA representative of the devices used on board small satellites. Finally, quantization showed a minimal F1-score loss of around 1% (1.33% at worst) for use cases 1 and 2 with respect to the distilled models. These results are fairly good and show that the hard part lies in the distillation process, justifying our methodology and approach. Moreover, our student architecture can be ported to the selected hardware, enabling our pipeline to bring deep learning models onto (small) satellites while abiding by COTS hardware constraints.

We continue to work and will keep you informed of our progress... Thank you to ESA, and especially to ESA Φ-lab (https://blogs.esa.int/philab/), for its support and recommendations.
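
As a generic illustration of the distillation step described above (a standard soft-target formulation in the spirit of Hinton et al., not necessarily CORTEX's exact loss; the temperature and weighting values are placeholders), a small student network can be trained to mimic a larger teacher as follows:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, targets,
                          temperature: float = 4.0, alpha: float = 0.5):
        """Combine soft-target matching against the teacher with the usual hard-label loss."""
        # Soften both distributions; the T**2 factor keeps gradient magnitudes comparable.
        soft_loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=1),
            F.softmax(teacher_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2
        hard_loss = F.cross_entropy(student_logits, targets)
        return alpha * soft_loss + (1.0 - alpha) * hard_loss

    # Typical training step: the frozen teacher provides soft targets for the student.
    # teacher_logits = teacher(batch).detach()
    # loss = distillation_loss(student(batch), teacher_logits, labels)

In CORTEX terms, the teacher would be the original complex network and the student the reduced architecture that is ultimately ported to the FPGA.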

Contact
 
  • Contractor: Agenium Space
  • 1 avenue de l'Europe, Bâtiment 1
  • 31400 Toulouse