Worldwide evaluation of mean and extreme runoff from six global-scale hydrological models that account for human impacts

Jamal Zaherpour, Simon N. Gosling, Nick Mount, Hannes Müller Schmied, Ted I.E. Veldkamp, Rutger Dankers, Stephanie Eisner, Dieter Gerten, Lukas Gudmundsson, Ingjerd Haddeland, Naota Hanasaki, Hyungjun Kim, Guoyong Leng, Junguo Liu, Yoshimitsu Masaki, Taikan Oki, Yadu Pokhrel, Yusuke Satoh, Jacob Schewe, Yoshihide Wada

Research output: Contribution to journal › Article › Academic › peer-review

40 Citations (Scopus)

Abstract

Global-scale hydrological models are routinely used to assess water scarcity, flood hazards and droughts worldwide. Recent efforts to incorporate anthropogenic activities in these models have enabled more realistic comparisons with observations. Here we evaluate simulations from an ensemble of six models participating in the second phase of the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP2a). We simulate monthly runoff in 40 catchments, spatially distributed across eight global hydrobelts. The performance of each model and the ensemble mean is examined with respect to their ability to replicate observed mean and extreme runoff under human-influenced conditions. Application of a novel integrated evaluation metric to quantify the models' ability to simulate time series of monthly runoff suggests that the models generally perform better in the wetter equatorial and northern hydrobelts than in the drier southern hydrobelts. When model outputs are temporally aggregated to assess mean annual and extreme runoff, the models perform better. Nevertheless, we find a general tendency in the majority of models towards overestimation of mean annual runoff and of all indicators of upper and lower extreme runoff. The models struggle to capture the timing of the seasonal cycle, particularly in northern hydrobelts, while in southern hydrobelts they struggle to reproduce the magnitude of the seasonal cycle. It is noteworthy that, across all hydrological indicators, the ensemble mean fails to perform better than any individual model - a finding that challenges the commonly held perception that model ensemble estimates deliver superior performance over individual models. The study highlights the need for continued model development and improvement. It also suggests that caution should be taken when summarising the simulations from a model ensemble based upon its mean output.
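The comparison between individual models and their ensemble mean can be illustrated with a minimal sketch. The snippet below is purely hypothetical: it uses synthetic monthly runoff and the standard Nash-Sutcliffe efficiency (NSE), not the paper's novel integrated metric or its actual model outputs, to show how one would score six simulations and their ensemble mean against observations.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, values below 0
    mean the simulation is worse than using the observed mean."""
    sim = np.asarray(sim, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
months = 120  # ten years of monthly runoff (synthetic)
obs = 50 + 30 * np.sin(np.linspace(0, 20 * np.pi, months)) + rng.normal(0, 5, months)

# Six hypothetical "model" simulations: the observed signal with a
# model-specific multiplicative bias and additive noise.
models = {
    f"model_{i}": obs * rng.uniform(0.8, 1.3) + rng.normal(0, 10, months)
    for i in range(6)
}

scores = {name: nse(sim, obs) for name, sim in models.items()}
ensemble_mean = np.mean(list(models.values()), axis=0)
scores["ensemble_mean"] = nse(ensemble_mean, obs)

for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: NSE = {score:.3f}")
```

With real data, ranking the ensemble mean alongside the individual members in this way is one simple check of whether averaging actually improves skill for a given catchment and indicator.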

Original language: English
Article number: 065015
Journal: Environmental Research Letters
Volume: 13
Issue number: 6
DOIs
Publication status: Published - Jun 2018
Externally published: Yes

