
Project Keyword: Sonar

Virtual underwater environments

The recent focus on monitoring underwater energy and information infrastructure in and near Danish waters has intensified the debate on the use of unmanned underwater vehicles (UUVs). While general monitoring can advantageously be carried out from surface vessels, detailed inspection necessarily requires underwater vehicles equipped with optical and acoustic sensors.

Industrially, UUVs have long been used for inspection and maintenance tasks with varying degrees of automation. Common to all attempts at automating UUVs is the problem of localization below the water surface. Today, acoustic positioning systems (LBL/SBL/USBL) mounted at the water surface are used for triangulation and thereby localization. Such systems introduce a significant time delay, which makes automatic and precise navigation near underwater structures and objects impossible. The approach is also inflexible, since the sensors must be mounted at the sea surface or on the seabed. There has therefore been increased focus on localization sensors mounted exclusively on the UUV itself, such as high-frequency short-range sonar and cameras. Sonar is extremely robust in environments where visibility is low, while cameras provide the most information about objects and structures when visibility is good. A combined solution is an obvious candidate for solving both the navigation problem and automated detection and classification of objects in the surrounding environment.

Machine learning methods have long been used for navigation and object detection on aerial drones, but have not yet gained traction for UUVs. The biggest challenge is that machine learning requires a relatively large and diverse dataset to ensure reliable results. There are several ways to create such datasets, and for aerial drones it has been shown that data augmentation with a mixture of real and virtual photorealistic images provides a good basis. Virtual images have the great advantage of enabling simulation of conditions that would be difficult or costly to test in reality. For conditions above water, several software solutions exist, including from the gaming industry, that can create such realistic virtual environments. There is no equivalent solution for underwater environments: effects such as water turbidity, light attenuation and refraction of sunlight at the water surface have been studied, and existing tools allow these effects to be included, but there is no evidence that this yields realistic results.
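As a minimal sketch of what rendering such underwater effects might involve, the snippet below applies a simple Beer-Lambert style colour attenuation plus a backscatter veil to an image array. All coefficients are illustrative placeholders, not values from the project; a real simulation would model wavelength-dependent scattering far more carefully.

```python
import numpy as np

def underwater_attenuation(img, path_m, beta=(0.35, 0.07, 0.05), veil=(0.0, 0.25, 0.35)):
    """Toy underwater colour model: direct signal decays per Beer-Lambert,
    and a bluish-green veiling light fills in as transmission drops.

    img    : float RGB image in [0, 1], shape (H, W, 3)
    path_m : optical path length through water, in metres
    beta   : per-channel attenuation coefficients (R, G, B) in 1/m;
             red attenuates fastest in clear water (illustrative values)
    veil   : ambient backscatter colour (illustrative values)
    """
    t = np.exp(-np.asarray(beta) * path_m)          # per-channel transmission
    return img * t + np.asarray(veil) * (1.0 - t)   # attenuated signal + veil
```

Applied to a white image over a 10 m path, the red channel ends up far darker than the blue, reproducing the familiar blue-green cast of underwater photographs.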

In this project we want to investigate the possibility of generating and using virtual underwater environments for data augmentation in the training and validation of navigation, object detection and classification methods. We will limit the study to a single case with a small environment containing few objects, so that we can verify or falsify the working method within the project period. The results will also reveal the potential for applying for a larger and more comprehensive project.

Project start: 01. Jan. 2023
Project end: 31. Dec. 2023
Project participants: Christian May, Jesper Liniger

Virtual acoustic underwater simulations

In continuation of the previous project, “Virtual photorealistic underwater environments for data augmentation in training machine learning methods for classification and navigation with UUVs”, it will be beneficial to include a sonar sensor in the selected UUV scenario and simulate it, since visual data can be limited by blurring at high turbidity (e.g. in port environments), at greater distances to the inspection object, or under poor lighting. The choice of sonar system must take into account the specific needs and conditions of the selected underwater environment. This will allow acoustic data to be collected and merged with the optical data, contributing to a more comprehensive and versatile representation of the underwater environment. From a defense perspective, it is particularly interesting to achieve robust detection of objects over an extended working area, for example in conditions where objects are hidden by marine fouling, lightly buried, or concealed by other masking that acoustic signals can penetrate.

In addition to the previous optical simulations, a sonar simulation model must therefore be developed and used. This requires a thorough understanding of acoustic signal processing and of the unique properties of underwater sound propagation, which is why the intention is to use an existing ultrasound simulator (Field II, developed at DTU) for the simulation itself. This step will drastically improve the possibility of a holistic simulation of the underwater environment in which the UUVs will operate.
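To give a feel for what a sonar simulation must produce, the sketch below generates a toy single-beam echo trace (A-scan): each scatterer returns a delayed pulse weakened by absorption and spherical spreading. This is a deliberately simplified stand-in, not Field II or the project's model; all parameters (pulse frequency, absorption coefficient, sampling rate) are illustrative.

```python
import numpy as np

SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s

def simulate_ascan(ranges_m, amplitudes, fs=100e3, f0=20e3, max_range=50.0, alpha=0.05):
    """Toy A-scan: a sum of delayed, attenuated Gaussian-windowed pulses.

    ranges_m   : scatterer ranges in metres
    amplitudes : scatterer reflectivities (arbitrary units)
    fs, f0     : sampling rate and pulse centre frequency, Hz (illustrative)
    alpha      : absorption coefficient in nepers/m (illustrative)
    """
    n = int(2 * max_range / SOUND_SPEED * fs)       # samples to cover max range
    t = np.arange(n) / fs
    signal = np.zeros(n)
    for r, a in zip(ranges_m, amplitudes):
        tau = 2 * r / SOUND_SPEED                    # two-way travel time
        env = np.exp(-0.5 * ((t - tau) * f0) ** 2)   # Gaussian pulse envelope
        loss = np.exp(-2 * alpha * r) / max(r, 1e-6) ** 2  # absorption + spreading
        signal += a * loss * env * np.cos(2 * np.pi * f0 * (t - tau))
    return t, signal
```

The strongest return in the trace appears at the sample corresponding to the two-way travel time of the nearest scatterer, which is exactly the information a navigation or detection algorithm would extract.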

The inclusion of sonar data provides the opportunity to train more robust and versatile machine learning models. Sonar data can strengthen the models' ability to detect and classify objects, especially, as mentioned, in scenarios where optical data is insufficient or unreliable, such as under high turbidity. Furthermore, integrating the different sensor data types could lead to a multisensor data fusion algorithm that improves the precision and reliability of the trained models.
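One simple form such a fusion algorithm could take is a turbidity-weighted late fusion of per-object detection confidences: trust the camera in clear water and lean on sonar as visibility degrades. The weighting scheme below is a hypothetical design choice for illustration, not the project's method.

```python
def fuse_detections(p_cam, p_sonar, turbidity):
    """Late fusion of camera and sonar detection confidences.

    p_cam, p_sonar : per-object confidences in [0, 1]
    turbidity      : in [0, 1]; down-weights the camera, so at 0 both
                     sensors count equally and at 1 only sonar counts.
    The linear weighting is an illustrative assumption.
    """
    w_cam = 1.0 - turbidity
    w_sonar = 1.0  # sonar assumed robust to turbidity
    return (w_cam * p_cam + w_sonar * p_sonar) / (w_cam + w_sonar)
```

A feature-level (early) fusion inside the network is the common alternative; late fusion like this is easier to reason about and degrades gracefully when one modality drops out.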

Including sonar data will undoubtedly pose technical challenges, such as synchronizing data from different sensors and developing a realistic sonar simulation model. A further challenge will be ensuring that the machine learning algorithms can effectively merge the optical and sonar-based data to produce reliable results.
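The synchronization problem, at its simplest, is timestamp alignment: camera and sonar run at different rates, so each camera frame must be paired with the nearest sonar ping, discarding pairs that are too far apart in time. The sketch below shows this nearest-neighbour matching; the skew tolerance is an illustrative value, and a real system would also have to handle clock offset and drift between the sensors.

```python
import bisect

def align_nearest(cam_ts, sonar_ts, max_skew=0.05):
    """Pair each camera timestamp with the nearest sonar timestamp.

    cam_ts, sonar_ts : sorted timestamp lists in seconds
    max_skew         : drop pairs further apart than this (illustrative)
    Returns a list of (camera_index, sonar_index) pairs.
    """
    pairs = []
    for i, t in enumerate(cam_ts):
        j = bisect.bisect_left(sonar_ts, t)
        # candidate neighbours: the sonar pings just before and after t
        candidates = [k for k in (j - 1, j) if 0 <= k < len(sonar_ts)]
        if not candidates:
            continue
        k = min(candidates, key=lambda k: abs(sonar_ts[k] - t))
        if abs(sonar_ts[k] - t) <= max_skew:
            pairs.append((i, k))
    return pairs
```

Camera frames with no sonar ping within the tolerance are simply dropped from the fused dataset rather than paired with stale data.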

Project start: 01. Jan. 2024
Project end: 31. Dec. 2024
Project participants: Christian May, Jesper Liniger