Despite numerous national and international efforts to harmonise data management procedures, the categorisation of space and time within datasets used in marine spatial planning (MSP) has not yet been addressed. This paper proposes a conceptual framework for categorising the spatial and temporal dimensions of data used in MSP and introduces a method for jointly managing non-spatial information and spatial data in the same geographic information system (GIS). The presented categorisation provides simple and intuitive classes for a more detailed and transparent description of spatial and temporal data properties, which can be applied both in attribute tables and in metadata. It distinguishes the vertical from the horizontal dimension, enabling users to focus on operations taking place in specific parts of the marine environment. Categorisation with predefined attribute domains enables automatic space- and time-based analyses. The inclusion of non-spatial data within GIS repositories makes all relevant data available in one database, minimising the risk of incomplete data. Overall, the framework provides effective steps towards more coherent data management and may subsequently foster better use of information in MSP processes.
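The idea of predefined attribute domains enabling automatic space- and time-based analyses can be sketched as follows. This is a minimal illustration, not the paper's actual schema: the field names, category values, and example records are all assumptions.

```python
# Minimal sketch: predefined attribute domains for vertical and temporal
# categories, enabling validation and automatic selection in a GIS table.
# All field names and category values below are illustrative assumptions.

VERTICAL_DOMAIN = {"sea_surface", "water_column", "seabed", "sub_seabed"}
TEMPORAL_DOMAIN = {"permanent", "seasonal", "temporary"}

def validate(record):
    """Reject records whose categories fall outside the predefined domains."""
    return (record["vertical_dim"] in VERTICAL_DOMAIN
            and record["temporal_cat"] in TEMPORAL_DOMAIN)

def select(records, vertical_dim, temporal_cat=None):
    """Automatic space/time-based selection using the categorical attributes."""
    return [r for r in records
            if r["vertical_dim"] == vertical_dim
            and (temporal_cat is None or r["temporal_cat"] == temporal_cat)]

records = [
    {"name": "cable corridor", "vertical_dim": "seabed",       "temporal_cat": "permanent"},
    {"name": "fishing ground", "vertical_dim": "water_column", "temporal_cat": "seasonal"},
    {"name": "wind farm",      "vertical_dim": "sea_surface",  "temporal_cat": "permanent"},
]

seabed_uses = select(records, "seabed")  # only the cable corridor matches
```

Because the domains are closed sets, any query over them can be automated and invalid entries are caught at ingestion rather than during analysis.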
To enhance sustainability in maritime shipping, shipping companies invest considerable effort in improving the operational energy efficiency of existing ships. An accurate fuel consumption prediction model is a prerequisite for such operational improvements. Existing grey-box models (GBMs) show significant potential for ship fuel consumption prediction, but are limited in their ability to distinguish weather directions. To overcome this limitation, we propose a novel genetic algorithm-based GBM (GA-based GBM), in which ship fuel consumption is modelled from the basic principles of ship propulsion and the unknown parameters of the model are estimated with a GA-based procedure. Real operation data from a crude oil tanker over a 7-year sailing period are used to demonstrate the accuracy and reliability of the proposed model. To highlight the contribution of this work, we compare the proposed model against the latest GBM. The results show that the fitting performance of the proposed model is remarkably better, especially for oblique weather directions. The proposed model can serve as a basis for ship energy efficiency management programmes that reduce the fuel consumption and greenhouse gas (GHG) emissions of a ship, contributing to the goal of sustainable shipping.
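The core mechanism of a GA-based grey-box model, fitting the unknown parameters of a physics-inspired fuel model to observed data, can be sketched as below. The model form (a power law in speed with a weather correction), the parameter values, and the synthetic data are illustrative assumptions, not the paper's actual formulation.

```python
import random

# Hedged sketch of GA-based parameter estimation for a grey-box fuel model.
# Assumed model form: FOC = a * V**b * (1 + c * w), where V is speed and
# w a weather severity factor; a, b, c are the unknown parameters.

random.seed(0)

def fuel(params, v, w):
    a, b, c = params
    return a * v ** b * (1.0 + c * w)

# Synthetic "observed" voyage data generated from known true parameters.
TRUE = (0.02, 3.0, 0.4)
data = [(v, w, fuel(TRUE, v, w)) for v in (10, 12, 14, 16) for w in (0.0, 0.5, 1.0)]

def fitness(params):
    """Mean squared error between model and observations (lower is better)."""
    return sum((fuel(params, v, w) - f) ** 2 for v, w, f in data) / len(data)

def evolve(pop_size=60, generations=200):
    pop = [(random.uniform(0, 0.1), random.uniform(1, 4), random.uniform(0, 1))
           for _ in range(pop_size)]
    sigma = (0.002, 0.05, 0.02)  # per-parameter mutation scales
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]  # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            # Arithmetic crossover plus Gaussian mutation
            children.append(tuple((x + y) / 2 + random.gauss(0, s)
                                  for x, y, s in zip(p1, p2, sigma)))
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

Because the parents survive each generation, the best candidate is never lost, and the estimated parameters converge towards values that reproduce the observed consumption.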
Purpose: The purpose of this paper is to identify the multiple types of data that can be collected and analyzed by practitioners across the cold chain, the ICT infrastructure required to enable data capture, and how to utilize the data for decision making in cold chain logistics. Design/methodology/approach: A content-analysis-based literature review of 38 selected research articles, published between 2000 and 2016, was used to create an overview of data capture, the technologies used for collecting and sharing data, and the decision making that the data can support, across the cold chain and for different types of perishable food products. Findings: There is a need to understand how continuous monitoring of conditions such as temperature, humidity, and vibration can be translated into real-time assessment of quality and determination of the actual remaining shelf life of products, and how these estimates can support decision making in cold chains. Firms across the cold chain need to adopt technologies suited to their specific contexts to capture data across the cold chain. Analysis of such data over longer periods can also unearth patterns of product deterioration under different transportation conditions, which can lead to redesigning the transportation network to minimize quality loss or to taking precautions against adverse transportation conditions. Research limitations/implications: The findings need to be validated through further empirical research and modeling. There are opportunities to identify all relevant parameters that capture product condition, as well as transaction data, across the cold chain processes for fish, meat and dairy products. Such data can then be used for supply chain (SC) planning and for pricing products in retail stores based on product condition and traceability information.
Addressing some of the above research gaps will call for multi-disciplinary research involving scholars from food science and engineering, information technologies, computer science, and logistics and SC management. Practical implications: The findings of this research can benefit multiple players in the cold chain, such as food processing companies, logistics service providers, ports, wholesalers and retailers, by showing how data can be used effectively for better decision making in the cold chain and where to invest in the specific technologies that suit the purpose. To ensure adoption of data analytics across the cold chain, it is also important to identify the player in the cold chain that will drive and coordinate the effort. Originality/value: This paper is one of the earliest to recognize the need for a comprehensive assessment of the adoption and application of data analytics in cold chain management, and it provides directions for future research.
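One concrete way a temperature time series can be "translated" into a remaining-shelf-life estimate is an Arrhenius-type kinetic model, a standard approach in food science. The sketch below is illustrative only: the activation energy, reference rate, and temperature logs are assumed values, not figures from the reviewed articles.

```python
import math

# Hedged sketch: remaining shelf life from a cold-chain temperature log,
# using an Arrhenius-type spoilage-rate model. All constants are assumptions.

R = 8.314          # gas constant, J/(mol*K)
EA = 80_000.0      # assumed activation energy for spoilage, J/mol
K_REF = 1.0 / 240  # assumed spoilage rate at the reference temperature (per hour)
T_REF = 277.15     # reference temperature, K (4 degC)

def rate(temp_c):
    """Spoilage rate at temp_c, scaled from K_REF by the Arrhenius equation."""
    t = temp_c + 273.15
    return K_REF * math.exp(-EA / R * (1.0 / t - 1.0 / T_REF))

def remaining_shelf_life_h(temps_c, interval_h=1.0):
    """Hours of shelf life left at the reference temperature, given the log."""
    consumed = sum(rate(t) * interval_h for t in temps_c)  # fraction used up
    return max(0.0, (1.0 - consumed) / K_REF)

cold_log = [4.0] * 24                  # one day held at the reference temperature
abuse_log = [4.0] * 12 + [12.0] * 12   # half a day of temperature abuse
```

Held at 4 degC for 24 hours, the product loses exactly 24 hours of its 240-hour shelf life; the abused log loses substantially more, which is the kind of signal that could drive dynamic pricing or re-routing decisions.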
Considering 91 countries with seaports, this study presents an empirical inquiry into the broader economic contribution of seaborne trade from a port infrastructure quality and logistics performance perspective. Investment in improving the quality of port infrastructure, and its contribution to the economy, is often questioned by politicians, investors and the general public. A structural equation model (SEM) is used to provide empirical evidence of the significant economic impacts of port infrastructure quality and logistics performance. Furthermore, a multi-group SEM analysis is performed by dividing countries into developed- and developing-economy groups. The results reveal that it is vital for developing countries to continuously improve the quality of their port infrastructure, as it contributes to better logistics performance, which leads to higher seaborne trade and, in turn, higher economic growth. However, this association weakens as developing countries become richer.
Having a well-designed liner shipping network is paramount to ensure competitive freight rates, adequate capacity on trade lanes, and reasonable transportation times. The most successful algorithms for liner shipping network design use a two-phase approach: they first design the routes of the vessels, and then flow the containers through the network to calculate how many of the customers’ demands can be satisfied and what operational costs are incurred. In this article, we reverse the approach by first flowing the containers through a relaxed network, and then designing routes to match this flow. This gives a better initial solution than starting from scratch, and the relaxed network reflects the physical-internet idea of distributed, multi-segment intermodal transport. Next, the initial solution is improved using a variable neighborhood search method, in which six different operators modify the network. Since each iteration of the local search involves solving a very complex multi-commodity flow problem to route the containers through the network, the flow problem is solved heuristically using a fast Lagrange heuristic. Although the Lagrange heuristic for flowing containers is 2–5% from the optimal solution, the solution quality is sufficiently good to guide the variable neighborhood search in designing the network. Computational results are reported, showing that the developed heuristic is able to find improved solutions for large-scale instances from LINER-LIB, and it is the first heuristic to report results for the biggest WorldLarge instance.
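The variable neighborhood search (VNS) pattern described above, shake in progressively larger neighborhoods, run a local search, and restart from the smallest neighborhood on improvement, can be sketched on a toy problem. The instance below is a small 0/1 knapsack, chosen only to keep the skeleton self-contained; the paper's six operators act on shipping networks, not knapsacks.

```python
import random

# Generic VNS skeleton on a toy 0/1 knapsack instance (illustrative only).

random.seed(1)
VALUES   = [10, 13, 7, 8, 12, 9, 6, 11]
WEIGHTS  = [ 5,  6, 3, 4,  6, 5, 3,  5]
CAPACITY = 20

def value(x):
    w = sum(wi for wi, xi in zip(WEIGHTS, x) if xi)
    return -1 if w > CAPACITY else sum(vi for vi, xi in zip(VALUES, x) if xi)

def shake(x, k):
    """k-th neighborhood: flip k randomly chosen items."""
    y = list(x)
    for i in random.sample(range(len(x)), k):
        y[i] = 1 - y[i]
    return y

def local_search(x):
    """First-improvement single-flip descent."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = list(x); y[i] = 1 - y[i]
            if value(y) > value(x):
                x, improved = y, True
    return x

def vns(k_max=4, iters=200):
    x = local_search([0] * len(VALUES))
    for _ in range(iters):
        k = 1
        while k <= k_max:
            y = local_search(shake(x, k))
            if value(y) > value(x):
                x, k = y, 1   # improvement: restart from the first neighborhood
            else:
                k += 1        # no improvement: try a larger neighborhood
    return x

best = vns()
```

The escalation over neighborhood sizes is what lets VNS escape local optima that a single-operator local search cannot; in the paper's setting, each `value` evaluation would itself be a (Lagrange-heuristic) multi-commodity flow computation.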
In the transportation of goods by large container ships, shipping companies need to minimize the time spent in ports loading and unloading containers. An optimal stowage of containers on board minimizes unnecessary unloading/reloading movements while satisfying many operational constraints. We address the basic container stowage planning problem (CSPP). Different heuristics and formulations have been proposed for the CSPP, but finding an optimal stowage plan remains an open problem even for small instances. We introduce a novel formulation that decomposes the CSPP into two sets of decision variables: the first defines how single container stacks evolve over time, and the second models port-dependent constraints. Its linear relaxation is solved through stabilized column generation with different heuristic and exact pricing algorithms. The resulting lower bound is then used to find an optimal stowage plan by solving a mixed-integer programming model. The proposed solution method outperforms the methods from the literature and can solve to optimality instances with up to 10 ports and 5,000 containers in a few minutes of computing time.
Seaborne trade is the linchpin of almost every international supply chain, and about 90% of non-bulk cargo worldwide is transported by container. In this survey we give an overview of data-driven optimization problems in liner shipping. Research in liner shipping is motivated by the need to handle ever more complex decision problems, based on big data sets and spanning several organizational entities. Moreover, liner shipping optimization problems are pushing the limits of optimization methods, creating a new breeding ground for advanced modelling and solution methods. Starting from liner shipping network design, we consider the problems of container routing and speed optimization. Next, we consider empty container repositioning, stowage planning, and disruption management. In addition, the problem of bunker purchasing is considered in depth. In each section we give a clear problem description, provide an overview of the existing literature, and examine in depth a specific model that is essential for the problem. We conclude the survey with an introduction to the public benchmark instances LINER-LIB. Finally, we discuss future challenges and give directions for further research.
We present a novel solution approach to the container pre-marshalling problem using the A* and IDA* algorithms combined with several novel branching and symmetry-breaking rules that significantly increase the number of pre-marshalling instances that can be solved to optimality. A* and IDA* are graph search algorithms that combine heuristics with a complete graph search to find optimal solutions. The container pre-marshalling problem is a key problem for container terminals seeking to reduce delays of intermodal container transports. Its goal is to find the minimal sequence of container movements that shuffles the containers in a set of stacks such that the resulting stacks are arranged according to the time each container must leave. We evaluate our approach on three well-known datasets of pre-marshalling problem instances, solving over 500 previously unsolved instances to optimality, which is nearly twice as many instances as the current state-of-the-art method solves.
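The structure of an optimal search for pre-marshalling can be sketched with a plain A* over bay states, using the classic mis-overlay count as an admissible lower bound. This is a minimal illustration of the problem and of A*, without the paper's branching and symmetry-breaking rules; stack sizes and the departure-group encoding are assumptions.

```python
import heapq

# Minimal A* for container pre-marshalling. A state is a tuple of stacks,
# each a bottom-to-top tuple of departure groups (smaller leaves earlier).
# A stack is well-arranged when no container sits above one due out earlier.

def misoverlays(stacks):
    """Admissible lower bound: each mis-overlaid container needs >= 1 move."""
    n = 0
    for s in stacks:
        low = float("inf")
        for c in s:           # bottom to top
            if c > low:
                n += 1        # sits above a container that leaves earlier
            else:
                low = c
    return n

def neighbors(stacks, max_height):
    """All states reachable by relocating one top container."""
    for i, src in enumerate(stacks):
        if not src:
            continue
        for j, dst in enumerate(stacks):
            if i != j and len(dst) < max_height:
                new = list(stacks)
                new[i] = src[:-1]
                new[j] = dst + (src[-1],)
                yield tuple(new)

def astar(stacks, max_height=4):
    """Return the minimal number of moves to remove all mis-overlays."""
    start = tuple(tuple(s) for s in stacks)
    open_set = [(misoverlays(start), 0, start)]
    best_g = {start: 0}
    while open_set:
        f, g, state = heapq.heappop(open_set)
        if misoverlays(state) == 0:
            return g          # heuristic is consistent, so first pop is optimal
        if g > best_g.get(state, float("inf")):
            continue          # stale queue entry
        for nxt in neighbors(state, max_height):
            if g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(open_set, (g + 1 + misoverlays(nxt), g + 1, nxt))
    return None
```

For example, the bay `[(1, 3), (2,), ()]` needs one move (relocate container 3 to the empty stack), while `[(1, 2, 3), ()]` needs two. The heuristic changes by at most one per move, so it is consistent, and the first goal popped from the queue is optimal.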
The International Maritime Organization (IMO) Weather Criterion has proven to be the governing stability criterion regarding minimum metacentric height for, e.g., small ferries and large passenger ships. The formulation of the Weather Criterion is based on empirical relations derived many years ago for vessels not necessarily representative of current newbuildings with large superstructures. Thus, it seems reasonable to investigate the possibility of capsizing in beam seas under the joint action of waves and wind using direct time-domain simulations. This has already been done in several studies. Here, it is combined with the first-order reliability method (FORM) to identify critical combined wave and wind scenarios leading to capsize and the corresponding probability of capsizing. The FORM results for a fictitious vessel are compared with Monte Carlo simulations, and good agreement is found at much lower computational effort. Finally, the results for an existing small ferry are discussed in the light of the current Weather Criterion.
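The comparison between FORM and Monte Carlo can be illustrated on a deliberately simple case: a linear limit state in standard normal space, where FORM is exact. The limit-state function, reliability index, and sample size below are assumptions for illustration; the capsize limit state in the paper is, of course, far more complex.

```python
import math
import random

# Toy FORM vs Monte Carlo comparison for a linear limit state
# g(u) = beta - (a1*u1 + a2*u2), with failure when g < 0 and
# u1, u2 independent standard normal variables.

random.seed(42)

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

BETA = 2.0           # reliability index: distance to the failure surface
A1, A2 = 0.6, 0.8    # direction cosines, a1**2 + a2**2 = 1

def g(u1, u2):
    return BETA - (A1 * u1 + A2 * u2)

# FORM: for a linear limit state, the failure probability is exactly Phi(-beta).
p_form = phi(-BETA)

# Crude Monte Carlo: sample standard normals and count failures.
N = 200_000
fails = sum(1 for _ in range(N) if g(random.gauss(0, 1), random.gauss(0, 1)) < 0)
p_mc = fails / N
```

FORM needs only the design point (here known in closed form), whereas Monte Carlo needs hundreds of thousands of limit-state evaluations to pin down a probability of about 2.3%, which is exactly the computational-effort gap the abstract refers to, since each evaluation would be a full time-domain capsize simulation.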
The purpose of this paper is to investigate a multiple-ship routing and speed optimization problem under time, cost and environmental objectives. A branch-and-price algorithm and a constraint programming model are developed that consider (a) fuel consumption as a function of payload, (b) fuel price as an explicit input, (c) freight rate as an input, and (d) in-transit cargo inventory costs. The alternative objective functions are minimum total trip duration, minimum total cost and minimum emissions. Computational experience with the algorithm is reported for a variety of scenarios.
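The economic trade-off at the heart of speed optimization, fuel use rising steeply with speed while in-transit inventory cost rises with trip duration, can be sketched numerically. The cubic fuel law, payload adjustment, and all coefficients below are illustrative assumptions, not the paper's model, and a simple grid search stands in for the exact algorithms.

```python
# Hedged sketch of the speed-optimization trade-off for one leg.
# Assumed: daily fuel use is cubic in speed and scaled by payload;
# inventory cost accrues on cargo value for the duration of the trip.

DIST_NM = 4800.0        # leg distance, nautical miles (assumed)
FUEL_PRICE = 600.0      # USD per tonne (assumed)
CARGO_VALUE = 40e6      # USD of cargo on board (assumed)
DAILY_HOLDING = 0.0005  # in-transit inventory cost rate per day (assumed)

def daily_fuel_t(speed_kn, payload_frac):
    """Tonnes/day: cubic in speed, adjusted for payload (assumed form)."""
    return 0.004 * speed_kn ** 3 * (0.7 + 0.3 * payload_frac)

def trip_cost(speed_kn, payload_frac=1.0):
    days = DIST_NM / (speed_kn * 24.0)
    fuel = daily_fuel_t(speed_kn, payload_frac) * days * FUEL_PRICE
    inventory = CARGO_VALUE * DAILY_HOLDING * days
    return fuel + inventory

# Grid search over feasible speeds: slower saves fuel, faster saves inventory.
speeds = [s / 10.0 for s in range(100, 251)]  # 10.0 .. 25.0 knots
best_speed = min(speeds, key=trip_cost)
```

With these assumed numbers the cost-minimizing speed falls around 16 knots: below it, inventory cost dominates; above it, the cubic fuel term does. Changing the fuel price or cargo value shifts the optimum, which is why the paper treats both as explicit inputs.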