Project

Project Keyword: Semantic Segmentation

Virtual underwater environments

The recent focus on monitoring underwater energy and information infrastructure in and near Danish waters has intensified the debate on the use of unmanned underwater vehicles (UUVs). While general monitoring can be carried out efficiently from surface vessels, detailed inspection necessarily requires underwater vehicles equipped with optical and acoustic sensors.

Industrially, UUVs have long been used for inspection and maintenance tasks with varying degrees of automation. Common to all UUV automation is the problem of localization below the water surface. Today, acoustic positioning systems (LBL/SBL/USBL), with transponders mounted at the sea surface or on the seabed, are used for triangulation and thereby localization. Such systems introduce a significant time delay, which makes automatic and precise navigation near underwater structures and objects impossible. The solution is also inflexible because it requires transponders fixed at the sea surface or on the seabed. There has therefore been increased focus on localization sensors mounted exclusively on the UUV itself, such as high-frequency short-range sonar and cameras. Sonar is extremely robust in low-visibility environments, while a camera, given good visibility, provides the most information about objects and structures. A combination of the two is an obvious candidate for solving both the navigation problem and automated detection and classification of objects in the surrounding environment.
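As an illustration of the acoustic approach, positioning from range measurements to fixed transponders amounts to trilateration. The sketch below is a minimal, hypothetical example (beacon layout and all names are illustrative, not from any real system): subtracting the first range equation from the others linearizes the problem so it can be solved by least squares.

```python
import numpy as np

def trilaterate(beacons, ranges):
    """Estimate a 3-D position from ranges to known beacons.

    beacons: (n, 3) array of known transponder positions (n >= 4)
    ranges:  (n,) measured distances to those transponders
    """
    b0, r0 = beacons[0], ranges[0]
    # Subtracting the first sphere equation from the rest cancels the
    # quadratic terms, leaving a linear system 2*(b_i - b_0) . x = rhs.
    A = 2.0 * (beacons[1:] - b0)
    rhs = (r0**2 - ranges[1:]**2
           + np.sum(beacons[1:]**2, axis=1) - np.sum(b0**2))
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos

# Illustrative beacon array and a known true position.
beacons = np.array([[0.0, 0.0, 0.0],
                    [100.0, 0.0, 0.0],
                    [0.0, 100.0, 0.0],
                    [0.0, 0.0, 50.0]])
true_pos = np.array([30.0, 40.0, 20.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)
print(trilaterate(beacons, ranges))  # recovers [30, 40, 20]
```

In practice each range is derived from an acoustic travel time, so position updates arrive with the latency of the slowest acoustic path, which is the delay the paragraph above refers to.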

Machine learning methods have long been used for navigation and object detection on aerial drones, but have not yet gained traction for UUVs. The biggest challenge is that machine learning requires a relatively large and diverse dataset to produce reliable results. There are several ways to create such datasets; for aerial drones it has been shown that data augmentation with a mixture of real and virtual photorealistic images provides a good basis. Virtual images have the great advantage of enabling simulation of conditions that are difficult or costly to test in reality. For above-water scenes, several software solutions, including tools from the gaming industry, can create such realistic virtual environments. No equivalent solution exists for underwater environments, where effects such as water turbidity, light attenuation, and the refraction of sunlight at the water surface must be modeled. Although these effects have been studied and existing tools allow them to be included, there is no evidence that the resulting images are realistic.
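To make one of the mentioned effects concrete, wavelength-dependent light attenuation is commonly modeled with the Beer-Lambert law, I(d) = I0 * exp(-c * d), with a separate coefficient per colour channel. The sketch below is a hypothetical illustration only; the coefficients are placeholder values, not measured data, chosen so that red is attenuated fastest, as it is in seawater.

```python
import numpy as np

def attenuate(rgb, distance, coeffs=(0.45, 0.12, 0.08)):
    """Apply per-channel Beer-Lambert attenuation over `distance` metres.

    rgb:    (H, W, 3) float image with values in [0, 1]
    coeffs: illustrative (R, G, B) attenuation coefficients in 1/m
    """
    falloff = np.exp(-np.asarray(coeffs) * distance)
    return rgb * falloff  # broadcasts over the colour channels

# A mid-grey patch viewed through 5 m of water shifts toward blue-green,
# since the red channel decays fastest.
img = np.full((2, 2, 3), 0.5)
print(attenuate(img, 5.0)[0, 0])
```

A renderer for virtual underwater scenes would apply this kind of falloff per pixel using the scene depth, alongside turbidity scattering and surface refraction; whether the combined result looks realistic is exactly what the project sets out to examine.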

In this project we want to investigate the possibility of generating and using virtual underwater environments for data augmentation in the training and validation of navigation, object detection, and classification methods. We limit the study to a single case with a small environment and few objects, so that the approach can be verified or falsified within the project period. The results will also indicate the potential for a larger and more comprehensive follow-up project.

Project start: 01. Jan. 2023
Project end: 31. Dec. 2023
Project participants: Christian May, Jesper Liniger