This paper presents a new approach to estimating the sea state from short-time sequences of wave-induced ship responses. The method aims at reconstructing the incident wave profiles in the time domain. To identify the phase components of the incident waves, Prolate Spheroidal Wave Functions (PSWF) are employed. The PSWF provide an explicit expression for the phase components of the measured responses and incident waves, so estimates can be obtained efficiently. A method to estimate the relative wave heading angle from the response measurements and pre-computed transfer functions of the responses is also proposed. The approach is tested with numerical simulations and experimental measurements of ship motions (heave, pitch, and roll) together with the vertical bending moment and a local pressure of a post-panamax containership. Validation is made by comparing the reconstructed wave profiles with the incident waves. The accuracy and efficiency of the approach are promising. It is also shown that using responses with more broad-banded frequency characteristics is an effective means to cope with high-frequency noise in the reconstructed waves.
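The heading-estimation idea can be sketched as follows: under linear theory the response spectrum equals the squared transfer-function magnitude times the wave spectrum, so one can pick the heading whose pre-computed transfer functions best reproduce the measurements. This is a minimal illustration only, not the paper's actual algorithm; all names and the least-squares criterion are assumptions.

```python
def estimate_heading(measured, transfer_funcs, wave_spectrum):
    """Select the relative wave heading whose pre-computed transfer
    functions best explain the measured response spectra, using the
    linear relation S_response(w) = |RAO(w)|**2 * S_wave(w).

    measured:       dict response_name -> list of response-spectrum ordinates
    transfer_funcs: dict heading_deg -> dict response_name -> list of |RAO|
    wave_spectrum:  list of assumed incident wave-spectrum ordinates
    """
    def misfit(heading):
        raos = transfer_funcs[heading]
        return sum((m - r * r * s) ** 2
                   for name, spec in measured.items()
                   for m, r, s in zip(spec, raos[name], wave_spectrum))
    # return the heading with the smallest least-squares misfit
    return min(transfer_funcs, key=misfit)
```

In practice several responses (heave, pitch, roll) would be combined, which sharpens the heading estimate because their directional sensitivities differ.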
This paper presents an assessment of three methods for sea state estimation via the wave buoy analogy, in which measured ship responses are processed. All three methods rely exclusively on machine learning but differ in their output: Method 1 provides bulk parameters, Method 2 yields a point wave spectrum and the wave direction, and Method 3 gives the directional wave spectrum in non-parametric form. The assessment uses full-scale data from an in-service container ship on a trans-Atlantic route. Training and testing of the methods are made with data from a wave radar, and all three methods perform well. An uncertainty measure (equivalently, a trust-level indicator) based on the variation between the post-processed outputs of the methods is proposed; it facilitates identifying estimates with small errors without knowing the ground truth.
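The agreement-based trust idea can be sketched in a few lines: when the three methods' post-processed outputs vary little, the estimate is deemed trustworthy. The coefficient-of-variation criterion and the threshold below are illustrative assumptions, not the paper's exact definition.

```python
import statistics

def trust_indicator(estimates, threshold=0.15):
    """Agreement-based trust level for one sea state quantity
    (e.g. significant wave height in metres) estimated by several
    independent methods. Low relative spread -> 'trusted'."""
    mean = statistics.mean(estimates)
    spread = statistics.pstdev(estimates) / mean  # coefficient of variation
    return "trusted" if spread < threshold else "uncertain"
```

The appeal of such an indicator is that it needs no ground-truth wave measurement: disagreement between methods alone flags estimates likely to carry large errors.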
Despite numerous national and international efforts to harmonise data management procedures, the categorisation of space and time within datasets for marine spatial planning (MSP) has not been addressed so far. This paper proposes a conceptual framework to categorise the spatial and temporal dimensions of data used in MSP and introduces a method to jointly manage non-spatial information and spatial data in the same geographic information system (GIS). The presented categorisation provides easy and intuitive classifications for a more detailed and transparent description of spatial and temporal data properties, which can be applied both in attribute tables and in metadata. It allows the differentiation of the vertical and horizontal dimensions, enabling users to focus on operations taking place in specific parts of the marine environment. The categorisation with predefined attribute domains enables automatic space- and time-based analyses. The inclusion of non-spatial data within GIS repositories ensures the availability of all relevant data in one database, minimising the risk of incomplete data. Overall, the framework provides effective steps towards more coherent data management and may subsequently foster better use of information in MSP processes.
The purpose of this paper is to investigate a multiple-ship routing and speed optimization problem under time, cost and environmental objectives. A branch-and-price algorithm and a constraint programming model are developed that consider (a) fuel consumption as a function of payload, (b) fuel price as an explicit input, (c) freight rate as an input, and (d) in-transit cargo inventory costs. The alternative objective functions are minimum total trip duration, minimum total cost and minimum emissions. Computational experience with the algorithm is reported for a variety of scenarios.
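The core speed trade-off behind such models can be shown with a toy single-leg example: fuel cost grows with speed while in-transit inventory cost shrinks with it. This is a deliberately crude sketch, not the paper's branch-and-price algorithm; the cubic fuel law and every coefficient below are illustrative assumptions.

```python
def best_speed(distance_nm, cargo_value, fuel_price_per_t,
               k=0.002, annual_inventory_rate=0.08, speeds=range(10, 25)):
    """Grid search over candidate speeds (knots) for a single leg.

    Fuel burn is approximated as k * v**3 tonnes per hour (the common
    cubic law); in-transit inventory cost accrues hourly on the cargo
    value. All coefficients are illustrative, not calibrated."""
    hourly_rate = annual_inventory_rate / (365 * 24)
    def total_cost(v):
        hours = distance_nm / v
        fuel_cost = k * v ** 3 * hours * fuel_price_per_t
        inventory_cost = cargo_value * hourly_rate * hours
        return fuel_cost + inventory_cost
    return min(speeds, key=total_cost)
```

With expensive fuel the search favours slow steaming; with cheap fuel and valuable cargo, sailing faster pays off because inventory cost dominates.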
This paper presents a detailed BC, NOx and SO2 emission inventory for ships in the Arctic in 2012, based on satellite AIS data, ship engine power functions and technology-stratified emission factors. Emission projections are presented for the years 2020, 2030 and 2050. Furthermore, the BC, SO2 and O3 concentrations and the deposition of BC are calculated for 2012 and for two Arctic shipping scenarios – with or without Arctic diversion routes that may open with reduced polar sea ice extent in the future.
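The bottom-up calculation underlying such inventories can be reduced to one line per sailing leg: power demand inferred from AIS-reported activity, multiplied by operating hours and a pollutant-specific emission factor. The function below is a generic sketch of this principle; parameter names and values are illustrative, not the paper's exact formulation.

```python
def leg_emissions(engine_kw, load_factor, hours, ef_g_per_kwh):
    """Emissions (kg) for one sailing leg: installed engine power scaled
    by the load factor derived from the AIS sailing speed, times the
    leg duration, times a technology-stratified emission factor (g/kWh)."""
    return engine_kw * load_factor * hours * ef_g_per_kwh / 1000.0
```

Summing such legs over all ships, stratified by ship type and engine technology, yields the geospatial inventory; the uncertainty noted later for fishing ships enters through the load factor, whose power–speed relation is least precise for them.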
In 2012, the largest shares of Arctic ship emissions are calculated for fishing ships (45% for BC, 38% for NOx, 23% for SO2), followed by passenger ships (20%, 17%, 25%), tankers (9%, 13%, 15%), general cargo ships (8%, 11%, 12%) and container ships (5%, 7%, 8%). In 2050, without Arctic diversion routes, the total emissions of BC, NOx and SO2 are expected to change by +16%, −32% and −63%, respectively, compared to 2012. The results for fishing ships are the least certain, owing to a less precise engine power–sailing speed relation.
The calculated BC, SO2 and O3 surface concentrations and BC deposition contributions from ships are low as a mean for the whole Arctic in 2012, but locally the additional BC contributions reach up to 20% around Iceland, and high additional contributions (100–300%) are calculated for SO2 in some sea areas. In 2050, the Arctic diversion routes strongly influence the calculated surface concentrations and the deposition of BC in the Arctic. During summertime navigation, the ship contributions become clearly visible for BC (>80%) and SO2 (>1000%) along the Arctic diversion routes, while the additional contributions to O3 (>10%) and BC deposition (>5%) are highest over the ocean east of Greenland and in the High Arctic.
The geospatial, ship-type-specific emission results presented in this paper increase the accuracy of emission inventories for ships in the Arctic. The methodology can be used to estimate shipping emissions in other regions of the world, and hence may serve as an input for other researchers and policy makers working in this field.
To enhance sustainability in maritime shipping, shipping companies invest considerable effort in improving the operational energy efficiency of existing ships. An accurate fuel consumption prediction model is a prerequisite for such operational improvements. Existing grey-box models (GBMs) show significant potential for ship fuel consumption prediction, but they have the limitation of not separating weather directions. To overcome this limitation, we propose a novel genetic algorithm-based GBM (GA-based GBM), in which ship fuel consumption is modelled following the basic principles of ship propulsion and the unknown parameters of the model are estimated with a GA-based procedure. Real operation data from a crude oil tanker over a 7-year sailing period are used to demonstrate the accuracy and reliability of the proposed model. To highlight the contribution of this work, we compare the proposed model against the latest GBM. The results show that the fitting performance of the proposed model is remarkably better, especially for oblique weather directions. The proposed model can serve as a basis for ship energy efficiency management programs that reduce the fuel consumption and greenhouse gas (GHG) emissions of a ship, which is beneficial to achieving the goal of sustainable shipping.
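The GA-based estimation step can be illustrated with a minimal example: evolve candidate parameter vectors of a grey-box fuel model so as to minimize the squared error against observed (speed, fuel) data. This is a toy stand-in for the paper's procedure; the simple power-law model, the selection/mutation scheme and all constants are assumptions.

```python
import random

def fit_fuel_model(data, gens=200, pop_size=30, seed=0):
    """Minimal GA sketch: estimate (a, n) in fuel = a * speed**n from
    observed (speed, fuel) pairs by truncation selection plus Gaussian
    mutation. Illustrative only; real GBMs use richer propulsion physics."""
    rng = random.Random(seed)
    def sse(ind):                                 # sum of squared errors
        a, n = ind
        return sum((f - a * v ** n) ** 2 for v, f in data)
    pop = [(rng.uniform(0, 1), rng.uniform(1, 5)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=sse)
        parents = pop[: pop_size // 2]            # keep the fittest half
        children = [(a + rng.gauss(0, 0.01),      # mutate cloned parents
                     n + rng.gauss(0, 0.05))
                    for a, n in (rng.choice(parents)
                                 for _ in range(pop_size - len(parents)))]
        pop = parents + children
    return min(pop, key=sse)
```

A full GA-based GBM would additionally encode wind speed and relative weather direction in the model so that oblique weather can be fitted separately, which is the limitation the paper targets.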
Hydrogen is regarded as a promising energy carrier that contributes to deep decarbonization, especially for sectors that are hard to electrify directly. A grid-connected wind/hydrogen system is a typical configuration for hydrogen production. For such a system, a critical barrier lies in the poor cost-competitiveness of the produced hydrogen. Researchers have found that flexible operation of a wind/hydrogen system is possible thanks to the excellent dynamic properties of electrolysis. This finding implies that the system owner can strategically participate in day-ahead power markets to reduce the hydrogen production cost. However, uncertainties from imperfect prediction of the fluctuating market price and of the wind power reduce the effectiveness of the offering strategy in the market. In this paper, we propose a decision-making framework based on data-driven robust chance-constrained programming (DRCCP). The framework also includes a multi-layer perceptron neural network (MLPNN) for wind power and spot electricity price prediction. This DRCCP-based decision framework (DDF) is then applied to make the day-ahead decisions for a wind/hydrogen system; it can effectively handle the uncertainties, manage the risks and reduce the operation cost. The results show that, for daily operation over the selected 30 days, the offering strategy based on the framework reduces the overall operation cost by 24.36% compared to the strategy based on imperfect prediction. In addition, we examine the parameter selection of the DRCCP to identify the parameter combination that yields the best optimization performance. The efficacy of the DRCCP method is also highlighted by a comparison with the chance-constrained programming method.
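The flavour of robust day-ahead decision-making can be conveyed with a much simpler scenario-based sketch: for each hour, choose the electrolyzer load level that maximizes the worst-case margin across price scenarios. This is a crude stand-in for the DRCCP formulation, not the paper's method; all names, the discrete load levels and the margin definition are assumptions.

```python
def robust_offer(price_scenarios, h2_value_per_mwh, levels=(0.0, 0.5, 1.0)):
    """Hourly electrolyzer load plan maximizing the worst-case margin
    over electricity price scenarios.

    price_scenarios:  list of per-hour price lists (one list per scenario)
    h2_value_per_mwh: value of the produced hydrogen per MWh of input power
    levels:           admissible load fractions of rated power
    """
    n_hours = len(price_scenarios[0])
    plan = []
    for t in range(n_hours):
        def worst_margin(x):
            # margin if hydrogen value exceeds the purchase price, per scenario
            return min((h2_value_per_mwh - s[t]) * x for s in price_scenarios)
        plan.append(max(levels, key=worst_margin))
    return plan
```

A genuine DRCCP model would instead bound the probability of constraint violation using an ambiguity set built from data, and would couple hours through storage and ramping constraints.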
The horticulture industry in the Nordic countries is highly dependent on greenhouse systems, owing to the limitations of the natural environment and the strict requirements of certain plant types. Commercial growers in these regions face significant challenges in guaranteeing plant quality while minimizing production costs. On the one hand, a greenhouse system must consume a large amount of energy to provide a satisfactory climate for plant growth. On the other hand, rising energy prices in Europe in recent years have increased greenhouse production costs, making energy saving and optimization imperative. It is, however, challenging for growers to handle this dilemma, because greenhouse climate control is a highly dynamic and strongly coupled complex system. Given the nonlinearity and dynamics of the greenhouse climate, existing solutions cannot adequately meet the practical requirements of the horticulture industry.
To tackle these problems, a digital twin of greenhouse climate control (DT-GCC) framework is proposed in this research to optimize the actuator operating schedules, minimizing energy consumption and production costs without compromising production quality. The architecture of the DT-GCC framework and the applied methods are elaborated modularly, covering the understanding of the physical twin of greenhouse climate control (PT-GCC) system, the design of the DT-GCC system, the interconnection of DT-GCC and PT-GCC, and the integration with other digital twins (DTs).
The DT-GCC comprises a virtual greenhouse (VGH) and a multi-objective optimization-based climate control (MOOCC) platform. The VGH is the digital representation of the physical greenhouse, obtained by modelling the factors that can significantly affect the greenhouse climate together with the actuator operating strategies. The MOOCC is responsible for formulating greenhouse climate control as a multi-objective optimization (MOO) problem and for optimizing the operating schedules of the artificial light (light plan) and the heating system (heating plan). Furthermore, a hierarchical structure of the DT-GCC is designed according to the functions and responsibilities of the individual layers, which benefits the practical realization of the DT-GCC with an organized architecture for design and control.
The functionalities of the DT-GCC are implemented in a greenhouse climate control platform named Dynalight, which is combined with a genetic algorithm (GA) framework called Controleum. Dynalight defines a MOO problem that abstracts the greenhouse climate control system with multiple objective functions, and the costs are computed from the modelling results of the VGH. Controleum is responsible for the implementation of the GA to generate a Pareto frontier (PF) and for the final selection of the solution for the light plan and the heating plan.
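The Pareto-frontier step can be illustrated with a minimal non-dominated filter. This is a generic sketch, not Controleum's implementation; the objective tuples are hypothetical (e.g. energy cost and a plant-quality penalty, both to be minimized).

```python
def pareto_front(solutions):
    """Return the non-dominated solutions among tuples of objective
    values, all objectives minimized. A solution is dominated if another
    is at least as good in every objective and strictly better in one."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions)]
```

A GA such as the one in Controleum evolves candidate light and heating plans and repeatedly applies this kind of filter, after which one frontier point is selected as the final schedule.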
Various scenarios and corresponding experiments are designed to evaluate the performance of the DT-GCC from individual perspectives, covering the VGH, the MOOCC, and DT integration. The experiments on the VGH verify the prediction performance of artificial neural network (ANN) methods for indoor temperature, heating consumption, and net photosynthesis (PN). As for the two standalone experiments, the results confirm the DT-GCC's ability to map the growers' decision-making on the light plan and heating plan, and verify the MOOCC's performance in meeting growing requirements while reducing energy consumption and costs. Finally, in the DT-integration experiments with the Digital Twin of Production (DT-PF) and the Digital Twin of Energy System (DT-ES), the DT-GCC completes the corresponding responses to prediction and optimization requests.
In recent years, the development of ground robots with human-like perception capabilities has led to the use of multiple sensors, including cameras, lidars, and radars, along with deep learning techniques for detecting and recognizing objects and estimating distances. This paper proposes a computer-vision-based navigation system that integrates object detection, segmentation, and monocular depth estimation using deep neural networks to identify predefined target objects and navigate towards them with a single monocular camera as the only sensor. Our experiments include several sensitivity analyses to evaluate the impact of monocular cues on distance estimation. We show that this system can provide a ground robot with the perception capabilities needed for autonomous navigation in unknown indoor environments, without prior mapping or external positioning systems. The technique provides an efficient and cost-effective means of navigation, overcoming limitations of other navigation techniques such as GPS-based and SLAM-based navigation.
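The decision layer that turns such perception outputs into motion can be sketched very simply: steer to centre the target's bounding box in the image, stop when the estimated depth is small. This is a toy illustration of the idea, not the paper's controller; all thresholds and names are assumptions.

```python
def steering_command(bbox_center_x, image_width, depth_m,
                     stop_distance=0.5, center_tol=0.1):
    """Map a detected target's bounding-box centre (pixels) and its
    estimated monocular depth (metres) to a discrete navigation command."""
    if depth_m <= stop_distance:
        return "stop"
    # normalized horizontal offset of the target from the image centre
    offset = (bbox_center_x - image_width / 2) / image_width
    if offset < -center_tol:
        return "turn_left"
    if offset > center_tol:
        return "turn_right"
    return "forward"
```

In the full system, the detector supplies the bounding box, the segmentation mask refines which pixels of the box belong to the target, and the depth network supplies the distance fed into this decision.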