Each year, maritime accidents at sea cause human casualties. Training facilities reduce the risk of human error by allowing maritime teams to train safety procedures in cooperative, full-scale immersive simulators. However, such simulators are expensive, and only a few maritime professionals have access to them. Virtual Reality (VR) can provide a fully immersive digital learning environment at reduced cost, allowing for broader access. However, a key ingredient of what makes full-scale physical simulators effective is that they allow multiple participants to engage in cooperative social interaction, which lets trainees develop the skills and competencies in situational awareness that are essential for safety training. Social interaction requires social fidelity. Moving from physical simulators to digital VR-based simulators thus challenges us as HCI researchers to figure out how to design social fidelity into immersive training simulators. We explore social fidelity theoretically and technically by connecting core conceptual work from CSCW research to design experimentation with social fidelity for maritime safety training. We argue that designing for social fidelity in VR simulators requires designers to contextualize the VR experience in location, artifacts, and actors, structured through dependencies in work, allowing trainees to perform situational awareness, coordination, and communication, all of which are features of social fidelity. Further, we identify the risk of breaking the social fidelity immersion, related to the intent and social state of the participants entering the simulation. Finally, we suggest that future designs for social fidelity should consider not only the trainees but also the social relations created by the instructors' guidance as part of the social fidelity immersion.
Critical maritime infrastructure protection (CMIP) has become a priority in ocean governance, particularly in Europe. Increased geopolitical tensions, regional conflicts, and the Nord Stream pipeline attacks in the Baltic Sea in September 2022 have been the main catalysts for this development. Calls for enhancing CMIP have multiplied, yet what this implies in practice is less clear. It is partially a question of engineering and risk analysis, but it also concerns how the multitude of actors involved can act concertedly. Dialogue, information sharing, and coordination are required, but there is little discussion of which institutional set-ups would lend themselves to the task. In this article, we argue that the maritime counter-piracy operations off Somalia, as well as maritime cybersecurity governance, hold valuable lessons that provide new answers to the institutional question in the CMIP agenda. We start by clarifying what is at stake in the CMIP agenda and why it is a major contemporary governance challenge. We then examine and assess the instruments found in maritime counter-piracy and maritime cybersecurity governance, including why and how they provide effective solutions for enhancing CMIP. Finally, we assess the ongoing institution building for CMIP in Europe. While we focus on the European experience, our discussion of designing institutions carries lessons for CMIP in other regions, too.
As the world collectively looks to technology to salvage what is left of our environment and sustain a habitat that can accommodate our way of life, users are increasingly exposed to technological solutions that are rarely developed with a basis in their practice. This also holds for the maritime sector in Denmark, where technology development is limited to the applicability of technological artifacts, which can reduce the potential efficiency gains that technologies could otherwise introduce. This paper applies qualitative research to show that there is a disconnect between, on the one hand, funders, technology developers, and decision-makers and, on the other hand, technology users and practitioners in the Danish maritime sector. It is argued that if technology is to replace or assist any human practice and help solve, for example, the climate crisis, then knowledge of users' practices must be key to developing technological solutions.
Ship engines operate in a very demanding work environment, where maximum availability is a must. In this project we examine different operational variables of marine engines from large cargo ships, with the aim of detecting and trending damage onset in different engine sub-components. This information can be used by owners to expedite O&M interventions and maximize ship availability.
Purpose
The purpose of this paper is to investigate the impact of cloud computing (CC) on supply chain management (SCM).
Design/methodology/approach
The paper is conceptual and based on a literature review and conceptual analysis.
Findings
Today, digital technology is the primary enabler of supply chain (SC) competitiveness. CC capabilities support competitive SC challenges through structural flexibility and responsiveness. An Internet platform based on CC and a digital ecosystem can serve as “information cross-docking” between SC stakeholders. In this way, the SC model is transformed from a traditional, linear model to a platform model with the simultaneous cooperation of all partners. Platform-based SCs will be a milestone in the evolution of SCM – here conceptualised as Supply Chain 3.0.
Research limitations/implications
Currently, SCs managed holistically in cyberspace are rare in practice, and therefore empirical evidence on how digital technologies impact SC competitiveness is required in future research.
Practical implications
This research generates insights that can help managers understand and develop the next generation of SCM with the use of CC, a modern and commonly available information and communication technology (ICT) tool.
Originality/value
The paper presents a conceptual basis of how CC enables structural flexibility of SCs through easy, real-time resource and capacity reconfiguration. CC not only reduces cost and increases flexibility but also offers an effective solution for disruptive new business models with the potential to revolutionise current SCM thinking.
The widespread use of software-intensive cyber systems in critical infrastructures such as ships (CyberShips) has brought huge benefits, yet it has also opened new avenues for cyber attacks that can potentially disrupt operations. Cyber risk assessment plays a vital role in identifying cyber threats and vulnerabilities that can be exploited to compromise cyber systems. Understanding the nature of cyber threats and their potential risks and impact is essential to improve the security and resilience of cyber systems, and to build systems that are secure by design and better prepared to detect and mitigate cyber attacks. A number of methodologies have been proposed to carry out these analyses. This paper evaluates and compares the application of three risk assessment methodologies, system theoretic process analysis for security (STPA-Sec), STRIDE, and CORAS, for identifying threats and vulnerabilities in a CyberShip system. We selected these three methodologies because they identify threats not only at the component level but also threats or hazards caused by the interaction between components, resulting in distinct sets of threats identified by each methodology and relevant differences between them. Moreover, STPA-Sec, a variant of STPA, is widely used for the safety and security analysis of cyber-physical systems (CPS); CORAS offers a framework for performing cyber risk assessment in a top-down approach that aligns with STPA-Sec; and STRIDE (Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, Elevation of privilege) considers threats at the component level as well as during interactions, similar to STPA-Sec. As a result of this analysis, the paper highlights the pros and cons of these methodologies, illustrates areas of special applicability, and suggests their complementary use: threats identified through STRIDE can be used as input to CORAS and STPA-Sec to make these methods more structured.
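To make the component-versus-interaction distinction concrete, the sketch below enumerates candidate threats using the conventional STRIDE-per-element mapping from threat modeling practice. The element names (autopilot, GPS link, voyage data recorder) are invented for illustration and do not come from the paper's case study.

```python
# Hypothetical STRIDE-per-element threat enumeration for a CyberShip-style
# system. Element names are invented; the category mapping follows common
# threat-modeling convention, not the paper's specific analysis.

STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information disclosure", "D": "Denial of service",
    "E": "Elevation of privilege",
}

# Conventional applicability of STRIDE categories per element type.
APPLICABLE = {
    "process": "STRIDE",      # e.g. autopilot controller (all six apply)
    "data_flow": "TID",       # e.g. messages between components
    "data_store": "TRID",     # e.g. voyage data recorder
    "external_entity": "SR",  # e.g. shore-side operator
}

def enumerate_threats(elements):
    """Yield (element, threat) pairs for every applicable STRIDE category."""
    for name, kind in elements:
        for code in APPLICABLE[kind]:
            yield name, STRIDE[code]

ship = [("autopilot", "process"),
        ("gps->autopilot link", "data_flow"),   # interaction-level threats
        ("voyage data recorder", "data_store")]
threats = list(enumerate_threats(ship))  # 6 + 3 + 4 = 13 candidate threats
```

A list like `threats` would then serve as structured input to a CORAS or STPA-Sec analysis, which is the complementary use the paper suggests.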
How value is created through service has recently undergone massive changes. Centralized service provision with clear distinctions between service offerers and beneficiaries is increasingly being substituted by value creation within decentralized networks of distributed actors integrating digital resources. One of the drivers of this transformation is blockchain technology. Applying the lens of service-dominant logic and discussing examples of blockchain-based decentralized finance, we shed light on how properties of decentralized technology stimulate value creation in service ecosystems. With this conceptual research, we postulate five propositions of decentralized value creation along the axiomatic foundations of the service-dominant logic. We provide initial definitions of decentralized service as well as decentralized service ecosystems. Thereby, we contribute an extension of the service-dominant logic to the context of decentralized ecosystems. To our knowledge, this research is among the first to add to the growing literature on blockchain value creation from a service science perspective.
This article examines the theoretical and practical implications of artificial intelligence (AI) integration in supply chain management (SCM). AI has developed dramatically in recent years, embodied by the newest generation of large language models (LLMs) that exhibit human-like capabilities in various domains. However, SCM as a discipline seems unprepared for this potential revolution, as existing perspectives do not capture the potential for disruption offered by AI tools. Moreover, AI integration in SCM is not only a technical but also a social process, influenced by human sensemaking and interpretation of AI systems. This article offers a novel theoretical lens called the AI Integration (AII) framework, which considers two key dimensions: the level of AI integration across the supply chain and the role of AI in decision-making. It also incorporates human meaning-making as an overlaying factor that shapes AI integration and disruption dynamics. The article demonstrates that different ways of integrating AI will lead to different kinds of disruptions, both in theory and practice. It also discusses the implications of AI integration for SCM theorizing and practice, highlighting the need for cross-disciplinary collaboration and sociotechnical perspectives.
The horticulture industry in the Nordic countries is heavily dependent on greenhouse systems due to the limitations of the natural environment and the strict growing requirements of particular plant types. Commercial growers in these regions face significant challenges in guaranteeing plant quality while minimizing production costs. On the one hand, a greenhouse system must consume a large amount of energy to provide a satisfactory climate for plant growth. On the other hand, rising energy prices in Europe in recent years have increased greenhouse production costs, making energy saving and optimization imperative. It is, however, challenging for growers to handle this dilemma, because greenhouse climate control is a highly dynamic and tightly coupled complex system. An analysis of the non-linear and dynamic features of the greenhouse climate shows that existing solutions cannot adequately meet the practical requirements of the horticulture industry.
To tackle these problems, a digital twin of greenhouse climate control (DT-GCC) framework is proposed in this research to optimize the actuator operation schedules, minimizing energy consumption and production costs without compromising production quality. The architecture of the DT-GCC framework and the applied methods are elaborated modularly, covering the understanding of the physical twin of greenhouse climate control (PT-GCC) system, the design of the DT-GCC system, the interconnection of DT-GCC and PT-GCC, and the integration with other digital twins (DTs).
The DT-GCC comprises a virtual greenhouse (VGH) and a multi-objective optimization-based climate control (MOOCC) platform. The VGH is the digital representation of the physical greenhouse, modelling the factors that can significantly affect the greenhouse climate as well as the actuator operation strategies. The MOOCC is responsible for formulating greenhouse climate control as a multi-objective optimization (MOO) problem and optimizing the operation schedules of the artificial light (light plan) and the heating system (heating plan). Furthermore, a hierarchical structure of the DT-GCC is designed according to the functions and responsibilities of the individual layers, which benefits the practical realization of the DT-GCC with an organized architecture of design and control.
The functionalities of the DT-GCC are implemented in a greenhouse climate control platform named Dynalight, combined with a genetic algorithm (GA) framework called Controleum. Dynalight formulates an MOO problem to abstract the greenhouse climate control system with multiple objective functions, and the costs are computed based on the modelling results from the VGH. Controleum is responsible for running the GA to generate a Pareto frontier (PF) and to select the final solution for the light plan and the heating plan.
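The Pareto frontier that the GA produces can be illustrated with a minimal dominance filter. This is a generic sketch of the underlying idea, not the Controleum implementation; the objective tuples and candidate plans below are invented for illustration.

```python
# Illustrative sketch of the Pareto-dominance filtering behind a GA-generated
# Pareto frontier (PF). Not the Controleum implementation; candidate values
# are made up for illustration.

def dominates(a, b):
    """True if candidate `a` dominates `b` when minimizing all objectives:
    `a` is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates):
    """Keep only the non-dominated candidates. Each candidate is a tuple of
    objective costs, e.g. (energy cost, deviation from growth target)."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o != c)]

# Example: four hypothetical (heating cost, growth-target penalty) trade-offs.
plans = [(10.0, 5.0), (8.0, 6.0), (12.0, 4.0), (11.0, 7.0)]
front = pareto_front(plans)  # (11.0, 7.0) is dominated by (10.0, 5.0)
```

A final light plan and heating plan would then be chosen from `front` according to the growers' preferences, for instance by weighting the two cost terms.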
Various scenarios and corresponding experiments are designed to evaluate the performance of the DT-GCC from individual perspectives, including the VGH, the MOOCC, and DT integration. The experiments on the VGH verify the prediction performance of artificial neural network (ANN) methods on indoor temperature, heating consumption, and net photosynthesis (Pn). As for the two standalone experiments, the results confirm the DT-GCC's ability to map the growers' decision-making on the light plan and the heating plan, and verify the MOOCC's performance in meeting growing requirements while reducing energy consumption and costs. Finally, in the DT-integration experiments with the Digital Twin of Production Flow (DT-PF) and the Digital Twin of Energy System (DT-ES), the DT-GCC completes the corresponding responses to prediction and optimization requests.
Hydrogen is regarded as a promising energy carrier that contributes to deep decarbonization, especially for the sectors that are hard to electrify directly. A grid-connected wind/hydrogen system is a typical configuration for hydrogen production. For such a system, a critical barrier lies in the poor cost-competitiveness of the produced hydrogen. Researchers have found that flexible operation of a wind/hydrogen system is possible thanks to the excellent dynamic properties of electrolysis. This finding implies that the system owner can strategically participate in day-ahead power markets to reduce the hydrogen production cost. However, the uncertainties from imperfect prediction of the fluctuating market price and wind power reduce the effectiveness of the offering strategy in the market. In this paper, we propose a decision-making framework based on data-driven robust chance constrained programming (DRCCP). The framework also includes a multi-layer perceptron neural network (MLPNN) for wind power and spot electricity price prediction. This DRCCP-based decision framework (DDF) is then applied to make the day-ahead decisions for a wind/hydrogen system. It can effectively handle the uncertainties, manage the risks, and reduce the operation cost. The results show that, for the daily operation over the selected 30 days, the offering strategy based on the framework reduces the overall operation cost by 24.36% compared to the strategy based on imperfect prediction. In addition, we elaborate on the parameter selection of the DRCCP to reveal the parameter combination that yields the best optimization performance. The efficacy of the DRCCP method is also highlighted by a comparison with the chance-constrained programming method.
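The core idea of a chance constraint in day-ahead offering can be sketched with a simple sample-based approximation: choose the largest hourly offer that the realized wind power covers with probability at least 1 - epsilon. This is a deliberately simplified stand-in for the paper's DRCCP formulation, and the scenario values are invented for illustration.

```python
# Minimal sample-based chance constraint for a day-ahead energy offer:
# pick the largest offer q such that  P(wind >= q) >= 1 - epsilon,
# estimated from sampled wind-power scenarios. Simplified stand-in for the
# paper's DRCCP model; scenario values (MW) are made up for illustration.

def chance_constrained_offer(wind_scenarios, epsilon=0.1):
    """Return the empirical epsilon-quantile of the wind scenarios: the
    largest offer covered by at least a fraction 1 - epsilon of samples."""
    samples = sorted(wind_scenarios)          # ascending realizations
    shortfalls_allowed = int(epsilon * len(samples))
    # All samples above this index meet or exceed the offer.
    return samples[shortfalls_allowed]

scenarios = [32.0, 41.5, 28.3, 39.9, 45.2, 30.1, 36.7, 44.0, 27.5, 38.8]
offer = chance_constrained_offer(scenarios, epsilon=0.2)
# With epsilon = 0.2 and 10 scenarios, 2 shortfalls are tolerated, so the
# offer equals the third-smallest scenario (30.1 MW).
```

A robust or distributionally robust variant, as in the DRCCP framework, would additionally guard against the sampled scenarios misrepresenting the true price and wind distributions, rather than trusting the empirical quantile directly.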