Building an Effective AI Strategy for Maritime

As the hotly awaited Oppenheimer debuts on cinema screens, Christopher Nolan, the film’s director, has compared the arrival of AI in our society with Oppenheimer’s invention of the atomic bomb. Ironically, actors across Hollywood are staging mass strikes over the arrival of AI, just as, decades earlier, atomic weapons attracted mass protests. AI is controversial. It is misunderstood. It is feared, and yet it also offers huge promise, which is why the Maritime Innovation Hub hosted a special ‘Understanding AI for Maritime’ event on July 7th 2023.

The session, ‘How to Lead Your AI Strategy for Maritime Leaders’, set out to dispel many of the myths surrounding AI and machine learning, enabling decision makers to make informed choices about how to harness its incredible potential and procure the right systems.

What potential do AI and machine learning offer to maritime?


Operational automation through machine learning

Supervised and unsupervised machine learning can significantly enhance operational performance on specific prediction tasks. It is typically used for data-driven problem solving; for instance, data gathered from ships within ports can train a model that derives the optimum speed a vessel should travel to minimise fuel consumption. Predictive maintenance is another key application for machine learning. Reinforcement learning, a subset of machine learning, can be applied to tasks like route planning and autonomous ship berthing.
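The speed-optimisation example above can be sketched in a few lines. This is a minimal illustration with hypothetical figures (the speed/fuel numbers below are invented, not from the event): fit a simple supervised model of fuel burn against speed, then search for the speed that minimises fuel used per nautical mile.

```python
import numpy as np

# Hypothetical port records: speed (knots) vs fuel burn (tonnes/hour).
# Fuel burn grows roughly with the cube of speed, so fit fuel ≈ a·v³ + b.
speeds = np.array([8, 10, 12, 14, 16, 18, 20], dtype=float)
fuel = np.array([1.1, 2.0, 3.5, 5.6, 8.3, 11.9, 16.2])

coeffs = np.polyfit(speeds**3, fuel, 1)  # supervised learning in miniature

def predict(v):
    """Predicted fuel burn (tonnes/hour) at speed v (knots)."""
    return np.polyval(coeffs, np.asarray(v, dtype=float) ** 3)

# Fuel per nautical mile = (tonnes/hour) / (knots); minimise over a speed range.
candidates = np.linspace(8, 20, 121)
per_mile = predict(candidates) / candidates
best_speed = candidates[np.argmin(per_mile)]
print(f"Most fuel-efficient speed: {best_speed:.1f} knots")
```

Because fuel burn rises steeply with speed, the model recommends the slow end of the range, which matches the industry practice of slow steaming; a real system would add constraints such as arrival deadlines.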

Accelerated decarbonisation with market-ready AI applications

Maritime faces a massive decarbonisation challenge, needing to transform how the industry is fuelled within the next 30 to 50 years. AI offers promising and affordable solutions, readily available now, to improve the industry's emissions footprint, as recent case studies show.

  • Blue Visby used simulations to show how emissions across more than 90,000 voyages could be reduced by 15%.
  • Windward software can identify tankers and determine their fuel consumption with 98% accuracy, providing recommendations to fleet management teams on how to improve it.

Key considerations when investigating AI systems for procurement

Data quality - It is helpful to think of modern AI as programming with data. Just as buggy code undermines a software program, poor data undermines a model: the quality of your data directly determines the quality of your outputs. Without well-curated, representative, correct, and relevant data, building effective AI is not possible, so secure good data before embarking on any AI project. The focus should always be on data quality, not least because data is also the source of the bias that can compromise AI systems.
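In practice, "secure good data first" means running basic quality checks before any training begins. The sketch below is illustrative only (the field names and plausibility ranges are invented for this example): it flags records with missing values or physically implausible readings.

```python
# Minimal data-quality gate before training (illustrative checks only).
def quality_report(records, required_fields=("vessel_id", "speed", "fuel")):
    """Return a list of (record index, problem description) pairs."""
    issues = []
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) is None]
        if missing:
            issues.append((i, f"missing {missing}"))
        elif not (0 <= rec["speed"] <= 40):  # knots: implausible values flagged
            issues.append((i, "speed out of plausible range"))
    return issues

sample = [
    {"vessel_id": "A1", "speed": 12.5, "fuel": 3.4},
    {"vessel_id": "A2", "speed": None, "fuel": 2.1},  # missing value
    {"vessel_id": "A3", "speed": 95.0, "fuel": 4.0},  # likely sensor error
]
print(quality_report(sample))
```

Only the first record would pass this gate; the other two would be cleaned or excluded before they can degrade a model.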

Eliminating bias - The issue of bias in AI has been discussed extensively using the example of early facial recognition software, which was trained predominantly on white male faces and neglected proper representation of diverse ethnicities and genders. In reality, any AI-driven system that interacts with people can be assumed to carry inherent historical biases related to ethnicity and gender, whether it is based on images or other forms of data. Caution always needs to be employed when populating these systems, together with an awareness of the likely biases within the data and their potential impact.

Inherent problems with generative AI - Generative AI systems like ChatGPT have been described as ‘broken mirror representations’. These systems can amplify existing bias within datasets because they rely on huge volumes of largely uncurated training data to function. ChatGPT utilises over one terabyte of text data, making it impossible to manually review and exclude problematic content. Consequently, undesirable content can find its way into these systems, diminishing control over output quality. It is crucial to consider these factors when working with generative AI and to validate its outputs. Unlike statistical AI systems, whose performance can be measured against test data, generative AI requires output validation. For instance, if a statistical AI system is 97% accurate, you can account for the 3% error and incorporate error-mitigation strategies into your workflow.
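The error-mitigation idea for statistical systems can be sketched simply. In this hypothetical workflow (the labels, threshold, and triage function are invented for illustration), predictions below a confidence threshold are routed to a human reviewer, so the known error rate is absorbed by the process rather than passed on to users.

```python
# Sketch of an error-mitigation workflow for a statistical AI system:
# low-confidence predictions are escalated for human review.
def triage(predictions, threshold=0.9):
    """predictions: list of (label, confidence) pairs from a hypothetical model."""
    accepted, needs_review = [], []
    for label, confidence in predictions:
        (accepted if confidence >= threshold else needs_review).append(label)
    return accepted, needs_review

model_output = [("tanker", 0.98), ("cargo", 0.65), ("tanker", 0.93), ("tug", 0.71)]
accepted, review = triage(model_output)
print(accepted)  # high-confidence labels used automatically
print(review)    # low-confidence labels escalated to a person
```

No equivalent accuracy figure exists for a generative model, which is why its outputs need direct validation instead of this kind of statistical budgeting.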

No need for bespoke development - Take advantage of the vast array of tools and APIs already available. Many solutions are readily accessible and affordable, so there is no need to build everything from scratch. Adopting ready-made solutions saves time and resources.

Importance of ethics - When procuring AI systems, there are many ethical considerations concerning how the AI was developed. Questions to ask suppliers include: How was the AI built? Where does the training data come from? Have bias considerations been addressed? These questions are vital since much AI development requires substantial human input, potentially involving the use of cheap labour, and understanding the relationship between the training data and the use case is crucial. Linked to development, it is also important to identify safe testing environments for AI implementations; for instance, the Port of Tyne offers a test bed for developers to trial new AI solutions.

Carbon footprint - Another important factor to consider is the carbon footprint. New generative AI systems in particular demand significant computational power and energy. Understanding the energy requirements and assessing compatibility with your organisation's net zero policy is crucial.

AI enhances maritime cybersecurity - Cyber attacks are increasing in both frequency and severity, with the recent attacks on the Port of Lisbon and the Port of Nagoya being good examples. In 2022 the cost of cybercrime reached an estimated $8.6 trillion in damages, and it is projected to grow to $10.5 trillion. This trend is driven by hackers' ability to exfiltrate data, launch ransomware attacks, or engage in activities like business email compromise. Many of the tools available to safeguard maritime environments also utilise AI, and the more data captured and input into a system, the better the results become. Data fuels AI advancements and makes it possible to extract more value from cybersecurity: identifying threats and using behavioural analytics to investigate suspicious activity and prevent major incidents.
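Behavioural analytics, at its simplest, means learning a baseline of normal activity and flagging deviations from it. The toy example below (the login counts and threshold are invented; production tools use far richer models) flags days whose activity sits far outside a user's historical pattern.

```python
import statistics

# Toy behavioural analytic: flag activity counts far from the baseline.
# A z-threshold of 2.5 is used because a single large outlier also
# inflates the standard deviation it is measured against.
def flag_anomalies(daily_logins, z_threshold=2.5):
    """Return indices of days whose login count deviates strongly from the mean."""
    mean = statistics.mean(daily_logins)
    stdev = statistics.stdev(daily_logins)
    return [i for i, count in enumerate(daily_logins)
            if stdev and abs(count - mean) / stdev > z_threshold]

history = [12, 14, 11, 13, 12, 15, 13, 12, 14, 240]  # sudden spike on the last day
print(flag_anomalies(history))
```

Here only the final day is flagged, which is the kind of signal an analyst would investigate as possible account compromise.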

The potential for AI to revolutionise the maritime industry is tremendous, but caution must be exercised to prevent unintended consequences. With the rapid pace of technological advancements, it is wise to embrace AI to address critical challenges related to safety, sustainability and profitability.

As always thank you to our excellent speakers, we very much appreciate you giving up your time to share these insights with the maritime community.

  • Jordan Connolly and Dr Jacek Cała at the National Innovation Centre for Data
  • Nick Chubb from Thetius
  • Owain Brennan from SeerBI
  • Rich Fenton from Arctic Wolf
  • Prof Dr Detlef Nauck from BT Group
  • Ben Keith from Frazer-Nash Consultancy

Hosted by the Port of Tyne’s Innovation Hub, Maritime Innovation Week, maritime’s only festival of technology, sustainability and future skills development, will take place from 7th to 9th November 2023. Reserve your free delegate place today:


Ian Blake, Head of Technology and Innovation at the Port of Tyne