Building a Climate Resilience Demonstrator
How can digital twins help us respond to climate change? We asked Hartree Centre Data Engineering Specialist and Technical Architect on the Climate Resilience Demonstrator project, Tom Collingwood, to tell us more.
With all the recent storms causing damage and flooding across the UK, it couldn’t be a more relevant time to talk about climate change!
To start us off, can you explain to us what climate resilience means?
It's very topical right now with all the storms we've had recently, and it seems to be a developing issue that is worth looking at. Most people will be familiar with the idea of climate change causing all kinds of disruption – from flooding to droughts to storm damage. On the extreme end of the scale, these events can pose a threat to our safety, whether directly or indirectly, by disrupting an essential service or system – for example, the power going out in a hospital, or emergency services losing signal on their way to an accident.
This kind of work is distinct from trying to slow or stop climate change – which is incredibly important too. Instead, we're trying to build understanding of what might happen to our infrastructure if and when more severe climate events do occur, and hence how we might prioritise resilience planning around the assets that are crucial to whole-of-system resilience.
This means there is huge potential in terms of damage prevention, cost savings and service reliability for frontline services like telecoms, energy, water and other utilities – and the benefits cascade down to any industry that relies heavily on those services or would be affected by disruption to them. Which is pretty much all industries!
So that’s where the Climate Resilience Demonstrator (CReDo) comes in?
The Climate Resilience Demonstrator, or CReDo, is a digital twin demonstrator project to improve climate and extreme weather resilience across infrastructure networks – the first of its kind in the UK. We narrowed down our focus to look specifically at the effects of extreme flooding on the comms, power and water networks in a specific area of the UK. We have developed a prototype digital twin that takes in data from climate, water, utilities, telecoms and energy industries and applies flood impact models which predict where flooding will form in the UK, how those floods might affect the equipment they touch and how knock-on impacts spread out to the rest of the networks outside the immediate flood zones.
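To make that concrete, here's a minimal sketch of what a flood impact model can reduce to at its core: a flood scenario in, per-asset failure states out. Everything here – asset names, fields, thresholds – is invented for illustration and is not the actual CReDo data model.

```python
from dataclasses import dataclass

# Hypothetical asset records: all names, fields and thresholds are
# invented for illustration, not taken from the real CReDo data.
@dataclass
class Asset:
    name: str
    network: str         # "power", "water" or "comms"
    elevation_m: float   # height of the critical equipment above ground

def assets_failed_by_flood(assets, flood_depth_m):
    """Return the assets whose critical equipment a flood of the given
    depth would submerge - the direct, first-order failures."""
    return [a for a in assets if flood_depth_m > a.elevation_m]

network = [
    Asset("substation-1", "power", elevation_m=0.5),
    Asset("pumping-station-1", "water", elevation_m=1.2),
    Asset("comms-mast-1", "comms", elevation_m=2.0),
]
print([a.name for a in assets_failed_by_flood(network, flood_depth_m=1.5)])
# -> ['substation-1', 'pumping-station-1']
```

The real models are of course far richer, but the shape is the same: given a flood scenario, work out which equipment it directly takes out, then trace the knock-on effects.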
We wanted to demonstrate how those who own and operate infrastructure can use secure, resilient information sharing across sector boundaries to mitigate the effect of flooding on network performance and service delivery to customers. To that end, we've been developing reliable approaches and frameworks for secure data sharing and information management that can inform this kind of model and be scaled up.
Why is a digital twin useful when tackling the challenge of climate resilience?
Lots of niche areas of utilities and telecoms have specific experts or teams who have been responsible for the same machine or equipment for the last 50 years, and if you lose that person or team, all that operational knowledge goes with them. If you start using digital twins and connected data, you need to get that specialist information out of their heads and turn it into models that can run in AI and machine learning systems, providing 24-hour access to that knowledge.
Many people think of digital twins as operational tools streaming live data from sensors and adjusting ongoing processes accordingly, such as in a manufacturing facility. With climate change, the feedback loop we are looking at might take 100 years to complete, so in this use case our digital twin isn't streaming live data. Instead, it provides resilience planners with predicted outcomes for a given set of inputs, so they can use that information when making decisions about the future networks they're supporting.

We're only scratching the surface of what digital twins can do for climate resilience with this specific use case. Bringing operational sensor data into the mix – from river levels to real-time asset monitoring – would broaden the application out to explore current and potentially upcoming failures via predictive maintenance modelling, or branch out into other climatic effects such as wind and extreme heat to inform how we make the whole network more resilient to a variety of new challenges over the coming decades. You're building the foundations for a digital decision-support assistant – and potentially, in future, a decision-making one – that always gives consistent advice to actively support the experts making vitally important decisions about our country's infrastructure.
“That’s the thing – if you get this kind of work right, basically no one will ever hear about it. Life goes on as normal, the power stays on, the communications don’t go down, the damage is minimal.”
How do you teach a computer to do that?
You have a structured conversation with the experts: you ask them to tell you how things might break – even in strange or temperamental ways you wouldn't expect – and you incorporate all those cases to develop a model that provides more accurate predictions. The more you know, the more data you have to keep running through the system to refine it and make better and better decisions in future.
Can you talk us through an example to illustrate what kind of scenarios you’re modelling?
One of the examples we looked at was a water pumping station. We had to factor in variables like the water depth at which things will break – because a specific depth would submerge the electronics and potentially start a fire – or whether the fuel has been stolen from a backup generator, which would mean everything switches off in an emergency. And imagine cases where there are no sensors detecting whether the fuel is still there.
Our approach means that in the short term we look at the statistical probability and frequency of those factors to make more accurate predictions of when and how failures might occur. In the long term, we've identified what data would be useful so that the right technologies can be put in place – in this case, you'd install fuel level sensors in the tanks.
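As a rough illustration of that short-term statistical approach – all thresholds and probabilities here are made up for the example, not the project's actual figures – a failure model can fall back on the historical frequency of fuel theft whenever no sensor reading is available:

```python
import random

# Hypothetical failure model for the pumping station example above.
# All thresholds and probabilities are invented for illustration.
P_FUEL_STOLEN = 0.05        # estimated from historical theft frequency

def station_fails(flood_depth_m, fuel_present=None):
    """Return True if the station fails in this scenario.
    fuel_present=None means there is no fuel sensor: fall back on
    the statistical frequency of fuel theft instead of real data."""
    if flood_depth_m > 1.2:                  # electronics submerged
        return True
    if fuel_present is None:                 # no sensor installed yet
        fuel_present = random.random() > P_FUEL_STOLEN
    return not fuel_present                  # backup generator can't start

# Monte Carlo estimate of the failure probability for a 1.0 m flood,
# assuming mains power is lost and the backup generator is needed.
runs = 100_000
p_fail = sum(station_fails(1.0) for _ in range(runs)) / runs
print(f"Estimated failure probability: {p_fail:.3f}")  # ~0.050
```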
So the process goes something like this:
- Learn from experts what variables affect potential failures or faults
- Make a plan for which data you need to start collecting
- Create a model that uses that data to make predictions, and provides sensible approximations where the data aren't readily available to the system yet (see the sketch after this list)
- Keep feeding in new data to refine the models over time
- Review the outputs of the models with the experts running the machines/assets, and tweak as necessary to ensure the models give sensible outputs using the current information at hand
- Use the predicted outputs to inform plans to mitigate the failures
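As a toy example of that refinement step – the Beta-prior approach and all the numbers are my own illustration, not the project's method – each new inspection report can nudge an expert's initial guess towards the observed frequency:

```python
# Toy sketch of the "keep feeding in new data" step: refine a
# fuel-theft frequency estimate as inspection reports arrive.
# The Beta prior encodes the expert's initial guess (~1 theft per
# 20 inspections) and keeps the estimate sensible while data are scarce.

def refined_theft_rate(thefts_observed, inspections,
                       prior_thefts=1, prior_inspections=20):
    """Posterior-mean theft rate under a simple Beta prior."""
    return (thefts_observed + prior_thefts) / (inspections + prior_inspections)

print(refined_theft_rate(0, 0))      # 0.05   -> pure expert prior
print(refined_theft_rate(3, 40))     # ~0.067 -> data pulling the estimate up
print(refined_theft_rate(30, 400))   # ~0.074 -> data now dominates
```

The design choice here is simply that an expert's judgement stands in for data until enough real observations accumulate to outweigh it.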
So with the flooding example, you can’t stop the weather but you can predict when it’s likely to happen and put up defences in time to minimise damage or disruption?
Exactly. And the next stage is to look at what knock-on effects happen when a fault or failure occurs – so if it's a power plant that goes down, everything it supplies power to has now lost its primary power supply. What would that mean for vital infrastructure, like healthcare? This was what the short film we funded through the project was exploring – that something like loss of power, even over a short period, can actually be life or death.
The ultimate impact of a single asset going down isn't immediately apparent – we have to cascade those failures across multiple networks throughout the system if we want to understand the real impact. With complex network interdependencies, that's not an easy thing for humans to resolve, whereas the right computational models can do it quickly and repeatably.
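A minimal sketch of that cascading idea, assuming a simple dependency graph – the network, the asset names and the "anything supplied by a failed asset also fails" rule are all simplifications invented for illustration:

```python
from collections import deque

# Hypothetical cross-network dependency map: each asset lists the
# assets it supplies. All names are invented for illustration.
supplies = {
    "power-plant-1":     ["substation-1"],
    "substation-1":      ["pumping-station-1", "comms-mast-1", "hospital-1"],
    "pumping-station-1": ["hospital-1"],   # the hospital also needs water
    "comms-mast-1":      [],
    "hospital-1":        [],
}

def cascade(initial_failures):
    """Propagate failures through the dependency graph: anything
    supplied (directly or indirectly) by a failed asset also fails."""
    failed = set(initial_failures)
    queue = deque(initial_failures)
    while queue:
        for downstream in supplies.get(queue.popleft(), []):
            if downstream not in failed:
                failed.add(downstream)
                queue.append(downstream)
    return failed

# A flood takes out one substation; the cascade reaches the hospital.
print(sorted(cascade({"substation-1"})))
```

Real interdependency models also have to account for backup supplies and partial degradation rather than simple on/off failure – which is exactly where the computational complexity comes from.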
Short film “Tomorrow Today” was produced by the National Digital Twin programme and Climate Resilience Demonstrator to explore the potential impact of digital twins.
What was the Hartree Centre’s role in the project?
The Hartree Centre was brought into the consortium originally to provide leadership of technical delivery, and I was given the role of Technical Architect accordingly. This meant my job was to oversee the successful delivery of a technical plan, so I had to do a bit of planning first and then ensure we could make it happen. We also had several other members of our Data Science and Research Software Engineering teams working on different aspects of data analysis and code optimisation for the project.
I’ve had oversight of what’s being done across the consortium of project partners: STFC’s Hartree Centre and DAFNI, CMCL Innovations, the Joint Centre of Excellence for Environmental Intelligence (JCEEI), the National Digital Twin Hub and the Universities of Edinburgh, Warwick and Newcastle.
On the industry side, Anglian Water, BT and UK Power Networks provided infrastructure data and Mott MacDonald supported us with domain expertise in infrastructure and flood modelling.
That’s lots of pieces to bring together!
Yeah, it’s a massive and quite complicated stakeholder map with a lot of moving pieces! So I’ve spent a lot of the last year joining the dots and doing agile programme planning. We’ve approached it with telecoms, water and utilities providers as the “customers” we had in mind as they’re the ones who would ultimately be able to benefit from the outputs of the project and use them to increase reliability and functionality.
A bunch of very talented people were put in front of me and I had to figure out how we could deliver as much as possible simultaneously and get it all done in time for the close of the project. We set up a secure cluster on DAFNI to put data from the asset owners all in one place, so that the scientists working on the project could access it and connect it up to develop models, without it being shared or accessed by anyone else.
What are the next steps?
The project comes to a close in March 2022, so we’re currently writing up the reports and planning a webinar to present our experiences, talk about technical achievements and lessons we’ve learned along the way so that hopefully others can learn from them too and continue to develop our ideas.
The project partners are going to collate reports and write executive summaries, so we have something to help us engage with business leadership audiences who are less technical but have the decision-making authority to implement these concepts at scale. The technical reports go into more detail so that technical staff can understand what needs to be done.
We’re also going to continue working with the partners on this project to seek funding for the continuation of development, and hopefully further scaling up of this project. Watch this space!
Find out more about the Climate Resilience Demonstrator.
Missed the show-and-tell webinar? Watch it now