The ocean covers more than 70% of the Earth's surface, yet humans still know very little about what lies beneath these waters, even though they are believed to hold abundant resources that could be exploited if safe and cost-effective technology existed to do so. Subsea robotics has extended human capabilities considerably in recent decades, thanks to rapid technological development. Subsea robots can perform tasks that are difficult or dangerous beyond natural human capability, such as deepwater seafloor scanning, oil and gas exploration and exploitation, subsea pipeline installation and inspection, and response to catastrophic disasters.
The proposed equipment will provide a solid, professional subsea robotic platform, not only to verify the results obtained so far, but also to inspire new thinking and ideas and to offer relevant industries a lab-scale test prototype.
Mussels and other marine fouling settle on the submerged parts of offshore wind turbines and production platforms.
The fouling increases wave loading and reduces the load-bearing capacity of the structure by 25–65 percent. Today, fouling is removed manually – typically using manually controlled underwater robots – a time-consuming and costly process.
The proposed solution consists of three elements: (1) cleaning rings around the supporting structures that remove fouling as the water moves; (2) a robot that can move along the supporting structures and report the extent of the fouling; (3) a robot that can remove fouling by underwater high-pressure washing. The solution is expected to extend the service life of the structure and to reduce costs by 30–40 percent. In the North Sea alone, the industry currently spends a three-digit million amount annually on removing marine fouling.
The recent focus on monitoring underwater energy and information infrastructure in and near Danish waters has intensified the debate on the use of unmanned underwater vehicles (UUVs). While general monitoring can advantageously be carried out by surface vessels, detailed inspection necessarily requires underwater vehicles with optical and acoustic sensors.
Industrially, UUVs have long been used for inspection and maintenance tasks with varying degrees of automation. Common to all attempts to automate UUVs is the problem of localization below the water surface. Today, acoustic solutions (LBL/SBL/USBL) mounted at the water surface are used for triangulation and thereby localization. Such solutions introduce a significant time delay, which makes automatic and precise navigation near underwater structures and objects impossible. The localization solution is also inflexible, because the sensors must be mounted at the sea surface or on the seabed. There has therefore been increased focus on localization sensors mounted exclusively on the UUV itself, such as high-frequency short-range sonar and camera solutions. Sonar is extremely robust in environments with low visibility, while the camera, in good visibility, provides the most information about objects and structures. A combined solution is an obvious candidate for solving both the navigation problem and automated detection and classification of objects in the surrounding environment.
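To make the latency problem concrete, the sketch below shows a minimal USBL-style position fix: slant range from the acoustic two-way travel time combined with the bearing and elevation measured by the transceiver array. The geometry, the nominal sound speed, and the function names are illustrative assumptions, not a description of any specific commercial system.

```python
import math

SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s

def usbl_fix(two_way_travel_time_s, bearing_rad, elevation_rad):
    """Toy USBL position fix: slant range derived from the acoustic
    two-way travel time, combined with the bearing/elevation angles
    measured by the transceiver array. Returns (x, y, z) in metres
    relative to the transceiver."""
    slant_range = SOUND_SPEED * two_way_travel_time_s / 2.0
    x = slant_range * math.cos(elevation_rad) * math.cos(bearing_rad)
    y = slant_range * math.cos(elevation_rad) * math.sin(bearing_rad)
    z = -slant_range * math.sin(elevation_rad)  # depth-down convention
    return x, y, z

# The acoustic round trip itself is the dominant latency: a UUV 750 m
# from the transceiver waits a full second for each position update,
# before any processing delay is added.
range_m = 750.0
latency_s = 2.0 * range_m / SOUND_SPEED  # 1.0 s of pure travel time
```

At short working distances the travel time shrinks, but multipath and processing delays remain, which is why update rates near structures stay too low for precise closed-loop control.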
Machine learning methods have long been used for navigation and object detection on aerial drones, but have not yet gained traction for UUVs. The biggest challenge is that machine learning requires a relatively large amount of data with great diversity to ensure reliable results. There are several ways to create such datasets, and for aerial drones it has been shown that data augmentation with a mixture of real and virtual photorealistic images provides a good basis. Virtual images have the great advantage of enabling simulation of conditions that would be difficult or costly to test in reality. For above-water conditions, several software solutions exist, including from the gaming industry, that can create such realistic virtual environments. No equivalent solution exists for underwater environments: effects such as water turbidity, light attenuation and the refraction of sunlight at the water surface have been studied, and the tools allow these effects to be included, but there is no evidence that the results are realistic.
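As a minimal sketch of the kind of augmentation meant here, the function below applies a simplified underwater image-formation model to an RGB image: the direct signal is attenuated per colour channel (red is absorbed fastest) and distance-dependent veiling light is added. The attenuation coefficients and backscatter colour are illustrative placeholder values, not measurements from any real water body, and the model ignores blur and forward scattering.

```python
import numpy as np

def underwater_degrade(img, distance_m,
                       beta=(0.40, 0.06, 0.10),
                       backscatter=(0.05, 0.25, 0.30)):
    """Simplified underwater image-formation model for an RGB image
    with values in [0, 1]:

        I_out = I * exp(-beta * d) + B * (1 - exp(-beta * d))

    `beta` is the per-channel attenuation coefficient (1/m, R/G/B) and
    `backscatter` the veiling-light colour; both are assumed values
    chosen only to mimic the greenish-blue cast of turbid water."""
    img = np.asarray(img, dtype=np.float64)
    t = np.exp(-np.asarray(beta) * distance_m)  # per-channel transmission
    b = np.asarray(backscatter)                 # veiling-light colour
    return img * t + b * (1.0 - t)
```

Sweeping `distance_m` and the coefficients over plausible ranges is one cheap way to inject the diversity the training data needs, though, as noted above, whether such synthetic degradation is realistic enough remains exactly the open question.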
In this project we want to investigate the possibility of generating and using virtual underwater environments for data augmentation in connection with training and validating methods for navigation, object detection and classification. We will limit the study to one case with a small environment containing few objects, so that the approach can be verified or falsified within the project period. The results will also reveal the potential for applying for a larger and more comprehensive project.
The project follows the ACOMAR project, in which AAU's main focus is to bring the control and algorithm part of ACOMAR to TRL8. Based on the offshore tests in ACOMAR, it is expected that several algorithms and their implementations will need to be adjusted to reach TRL8, and that further tests of the navigation, control and error-handling algorithms will need to be carried out at local onshore test facilities, with the aim of adapting and maturing the final product.
In conjunction with these tests, the algorithms are expected to be documented for possible transfer. The documentation is intended to promote usability, so that the algorithms can be run by the operators themselves.
In continuation of the previous project “Virtual photorealistic underwater environments for data augmentation in training machine learning methods for classification and navigation with UUVs”, it will be beneficial to include a sonar sensor in the selected UUV scenario and simulate it, since visual data can be limited by blurring at high turbidity, e.g. in harbour environments, at greater distances to the inspection object, or under poor lighting. The choice of sonar system must take into account the specific needs and conditions of the selected underwater environment. This will allow acoustic data to be collected and merged with the optical data, contributing to a more comprehensive and versatile representation of the underwater environment. From a defence perspective, it is particularly interesting to achieve robust detection of objects across an extended working area, for example where objects are hidden by marine fouling, lightly buried, or concealed by other masking that acoustic signals can penetrate.
In addition to the previous optical simulations, a sonar simulation model must therefore be developed and used. This requires a detailed understanding of acoustic signal processing and of the particular properties of underwater sound propagation, which is why an existing ultrasound simulator (Field II, developed at DTU) is intended to be used for the simulation itself. This step will drastically improve the possibility of a holistic simulation of the underwater environment in which the UUVs will operate.
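To illustrate what even the crudest acoustic forward model captures, the sketch below generates a monostatic pulse-echo time series from point scatterers: each scatterer returns a short gated sinusoid delayed by its two-way travel time and scaled by spherical spreading. This is a stand-in toy only; Field II models full transducer apertures, spatial impulse responses and realistic pulse shapes, and all parameter values here are assumptions.

```python
import numpy as np

def pulse_echo(scatterer_ranges_m, scatterer_amplitudes,
               fs=100_000.0, f0=20_000.0, duration_s=0.05,
               sound_speed=1500.0):
    """Toy monostatic pulse-echo model: each point scatterer at range r
    contributes a 1 ms gated sinusoid delayed by 2r/c and scaled by a
    1/r^2 two-way spherical-spreading loss. Returns (time axis,
    received signal)."""
    n = int(fs * duration_s)
    t = np.arange(n) / fs
    rx = np.zeros(n)
    pulse_len = int(fs * 1e-3)  # 1 ms ping
    for r, a in zip(scatterer_ranges_m, scatterer_amplitudes):
        delay = 2.0 * r / sound_speed      # two-way travel time
        i0 = int(delay * fs)               # sample index of echo onset
        if i0 + pulse_len <= n:
            tt = t[:pulse_len]
            rx[i0:i0 + pulse_len] += (a / r**2) * np.sin(2 * np.pi * f0 * tt)
    return t, rx
```

The point of starting from such a skeleton is to define the interface (scatterer map in, time series out) before the Field II machinery is connected behind it.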
The inclusion of sonar data provides the opportunity to train more robust and versatile machine learning models. Sonar data can strengthen the models' ability to detect and classify objects, especially, as mentioned, in scenarios where optical data is insufficient or unreliable, such as under high turbidity. Furthermore, integrating the different sensor data types could result in the development of a multisensor data fusion algorithm, which can improve the precision and reliability of the trained models.
Including sonar data will undoubtedly lead to technical challenges, such as the need to synchronize data from different sensors and the difficulty of developing a realistic sonar simulation model. A further challenge will be ensuring that the machine learning algorithms can effectively merge the optical and sonar-based data to produce reliable results.
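The synchronization problem can be sketched offline as nearest-timestamp matching between camera frames and sonar pings, discarding pairs whose clocks disagree by more than a tolerance. The function name and the skew tolerance are assumptions for illustration; a real system would also need clock-offset estimation between the two sensors.

```python
import bisect

def match_nearest(camera_ts, sonar_ts, max_skew_s=0.05):
    """Pair each camera frame with the sonar ping closest in time,
    dropping pairs whose timestamps differ by more than max_skew_s.
    Both timestamp lists are assumed sorted (seconds on a shared
    clock); returns a list of (camera_index, sonar_index) pairs.
    An offline sketch, not a real-time synchronizer."""
    pairs = []
    for i, tc in enumerate(camera_ts):
        j = bisect.bisect_left(sonar_ts, tc)
        # The nearest ping is either just before or just after tc.
        candidates = [k for k in (j - 1, j) if 0 <= k < len(sonar_ts)]
        if not candidates:
            continue
        best = min(candidates, key=lambda k: abs(sonar_ts[k] - tc))
        if abs(sonar_ts[best] - tc) <= max_skew_s:
            pairs.append((i, best))
    return pairs
```

Frames without a ping inside the tolerance are simply dropped, which is the conservative choice when building a fused training set.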
The purpose of NextGen Robotics is to mobilize companies in the robot and drone ecosystem and thereby ensure the continuation of the business lighthouse effort by contributing to innovation in SMEs within the business lighthouse's three strength areas: large-structure production, advanced drones and autonomous coastal shipping.