Climate Risk

“Downscaling”: Climate Modeling’s Distracting Vaporware

April 28, 2021

Jupiter Intelligence’s recent white paper comparing the merits of different climate model downscaling methods covers a climate science niche that I’m very familiar with: I spent a lot of time in graduate school, and afterward, working with others on state-of-the-art machine learning approaches to climate downscaling. The goal of this research area is to take relatively coarse IPCC Global Climate Model (GCM) outputs and make them far more spatially precise.
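For readers unfamiliar with the mechanics, here is a minimal, purely illustrative sketch in Python of the simplest possible form of downscaling: interpolating a coarse GCM field onto a finer grid. The grid sizes and values are invented for this post, and the key thing to notice is that the extra spatial detail is purely numerical; no new physical information is added, which is the crux of the argument below.

# Toy example: interpolate a coarse "GCM" field onto a 10x finer grid.
# All grids and values are illustrative assumptions, not real model output.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical coarse GCM temperature field on a 1-degree (~100 km) grid.
coarse_lat = np.linspace(30.0, 40.0, 11)
coarse_lon = np.linspace(-90.0, -80.0, 11)
coarse_temp = 15.0 + np.random.default_rng(0).normal(0.0, 1.0, (11, 11))

# Interpolate onto a 0.1-degree grid. This "adds resolution" in a purely
# numerical sense; the physics is still the coarse model's physics.
interp = RegularGridInterpolator((coarse_lat, coarse_lon), coarse_temp)
fine_lat = np.linspace(30.0, 40.0, 101)
fine_lon = np.linspace(-90.0, -80.0, 101)
lat_grid, lon_grid = np.meshgrid(fine_lat, fine_lon, indexing="ij")
points = np.stack([lat_grid.ravel(), lon_grid.ravel()], axis=-1)
fine_temp = interp(points).reshape(101, 101)

Real downscaling methods are far more sophisticated than straight interpolation, but the fundamental question is the same: where does the added fine-scale detail actually come from, and can it be trusted?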

Downscaling is a fruitful academic branch of physical climate science and has helped bolster the climate science community’s understanding of hyperlocal climate processes over time. Deployed carefully, it is useful in practice for resolving interactions between topography, land cover, and the atmosphere, for example.

On the whole, GCM data is crucial for making projections of climate risk, and the white paper covers the topic well from an academic literature synthesis perspective.

But making the downscaling of those GCMs such a central focus of climate risk modeling is a red herring at best, and misleading at worst. Many key local-scale atmospheric processes (e.g., cloud microphysics) are not well understood from a physics standpoint. On top of that, historical weather observations aren’t geospatially precise or temporally complete enough to backtest and validate very high-resolution downscaling models; researchers often have to construct partly synthetic ground truth data just to achieve some limited sense of validation. It’s a bit like a history professor grading her student’s final exam against a textbook with missing pages, where the professor had to make assumptions and guesses about what happened in the gaps. Sure, the student might get it right, but what did she actually learn, and should that judgment be trusted going forward?
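To make the validation gap concrete, here is a toy illustration (all numbers are synthetic, invented for this post). If ground truth is only observed over a small fraction of the domain, any skill score for a high-resolution model is computed either over that sliver or against pseudo-observations used to fill the gaps.

# Toy illustration of validating a fine-scale model against sparse truth.
# Everything here is simulated; in reality we never know "truth" everywhere.
import numpy as np

rng = np.random.default_rng(1)
truth = rng.normal(size=(100, 100))                      # weather that actually happened
downscaled = truth + rng.normal(0.1, 0.5, (100, 100))    # a model's fine-scale guess

# Real observation networks are sparse: suppose only 5% of cells are observed.
observed_mask = rng.random((100, 100)) < 0.05

# RMSE over observed cells only. The other 95% of the domain is untestable
# unless we interpolate or simulate pseudo-observations to stand in for truth.
rmse_observed = np.sqrt(np.mean((downscaled - truth)[observed_mask] ** 2))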

Overall, there’s no consensus in the climate science community (including in method comparison work I was directly a part of) that highly sophisticated downscaling approaches perform much better than really simple ones. That’s why at risQ we use simple, robust downscaling methods where needed, without pretending we can tell you whether it’ll rain at your house instead of the one next door on April 16, 2034.
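For the curious, here is a hedged sketch of one such simple, robust method: the classic “delta change” approach. This is a generic textbook technique, not a description of risQ’s internal pipeline, and every array below is an invented placeholder.

# "Delta change" downscaling sketch: apply the coarse GCM's projected change
# signal to high-resolution historical observations. The local detail comes
# from the observations, not from the GCM.
import numpy as np

def delta_change(obs_hist_fine, gcm_hist_coarse, gcm_future_coarse, regrid):
    # The "delta": how much the GCM says the climate changes, at coarse scale.
    delta_coarse = gcm_future_coarse - gcm_hist_coarse
    # Spread that coarse change signal over the fine grid and add it to the
    # observed fine-scale pattern.
    return obs_hist_fine + regrid(delta_coarse)

# Usage with toy data: a 2x finer grid via crude block upsampling.
obs = np.full((4, 4), 14.0)          # fine-grid observed climatology
gcm_hist = np.full((2, 2), 15.0)     # coarse GCM, historical period
gcm_fut = np.full((2, 2), 17.0)      # coarse GCM, future period
upsample = lambda a: np.kron(a, np.ones((2, 2)))  # stand-in regridder
projected = delta_change(obs, gcm_hist, gcm_fut, upsample)  # every cell = 16.0

The appeal of methods in this family is exactly their transparency: the assumptions are visible, the failure modes are easy to reason about, and they don’t manufacture false precision.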

To be clear, there are many data components to the problem that are incredibly important to have at a high spatial resolution for climate risk modeling — elevation data, economic exposure data, physical asset data, land cover data, municipal boundary data, infrastructure data, flood insurance zone data, and so on. 

But GCM downscaling shouldn’t be a primary focal point in this industry: it can create unnecessary confusion, skepticism, and a false sense of precision about climate model projections, right at a time when it’s so crucial for financial market participants to understand their real value and limitations.
