This is the official site of the RAdiation transfer Model Intercomparison (RAMI) initiative. RAMI provides a mechanism to benchmark models designed to simulate the transfer of radiation at or near the Earth's terrestrial surface, i.e., in plant canopies and over soil surfaces. As an open-access, ongoing activity, RAMI operates in successive phases, each aiming to re-assess the capability, performance and mutual agreement of the latest generation of radiation transfer (RT) models. This, in turn, leads to model enhancements and further developments that benefit the RT modelling community as a whole. For more information on RAMI, please consult the FAQs and our privacy and data-usage policy via the DISCLAIMER link.
The first phase of RAMI (RAMI-1) was launched in 1999.
Its prime objective was to document the variability that existed between canopy reflectance models run under well-controlled experimental conditions [Pinty et al., 2001, JGR].
The positive response of the various RAMI-1 participants, and the subsequent improvements made to a series of radiative transfer (RT) models, prompted the launch of the second phase of RAMI (RAMI-2) in 2002.
Here, the number of test cases was expanded to focus more closely on the performance of models dealing with structurally complex 3-D plant environments.
The main outcomes of RAMI-2 included (1) an increase in the number of participating models, (2) better agreement between the model simulations for the structurally simple scenes inherited from RAMI-1, and (3) the need to reduce the sometimes substantial differences between some of the 3-D RT models over complex heterogeneous scenes [Pinty et al., 2004, JGR].
The latter issue was noted as one of the challenges that future intercomparison activities would have to face, since the reliable derivation of some sort of "surrogate truth" data set would not be possible in the absence of agreement among these RT models.
This, in turn, implies that, except in some simple special cases, the evaluation of RT model simulations cannot proceed beyond their mutual comparison, given the general lack of absolute reference standards.
During the third phase of RAMI (RAMI-3), held in 2005, which saw a further increase in the number of participants and test cases with respect to RAMI-1 and RAMI-2, the self-consistency (e.g., energy conservation) of RT models, together with their absolute and relative performance, was evaluated in great detail [Widlowski et al., 2007a, JGR].
In fact, it became possible to demonstrate, for the first time, a general convergence of the ensemble of submitted RT simulations (with respect to RAMI-2), and to document agreement to better than 1% between six of the participating 3-D Monte Carlo RT models.
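Self-consistency checks of the energy-conservation kind mentioned above can be illustrated with a minimal sketch: the reflected, transmitted and absorbed fractions reported by an RT model for a canopy over a non-reflecting soil must sum to the incoming flux. The function name and the numerical values below are purely illustrative assumptions, not actual RAMI submissions or the checks used in the published analysis.

```python
def energy_balance_residual(reflected: float, transmitted: float,
                            absorbed: float) -> float:
    """Residual of the shortwave energy balance R + T + A = 1, expressed
    as fractions of the incoming flux, for a canopy over a black
    (non-reflecting) soil.  A well-behaved RT model should return
    a residual close to zero."""
    return abs(reflected + transmitted + absorbed - 1.0)

# Hypothetical flux partitioning from one model run (illustrative numbers)
residual = energy_balance_residual(reflected=0.041,
                                   transmitted=0.352,
                                   absorbed=0.607)
assert residual < 1e-3, "model violates energy conservation"
```

A check of this form only tests internal consistency; it says nothing about whether the simulated fluxes are themselves correct, which is why the mutual comparison of independent models remained necessary.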
Several years of benchmarking efforts were thus needed by the international modeling community to identify a series of “credible” 3-D Monte Carlo models.
The substantial agreement between the RAMI-3 simulations of these models (DART, drat, FLIGHT, Rayspread, raytran and Sprint3) furthermore allowed the derivation of a "surrogate truth" data set that model owners, developers and users can employ to evaluate the performance of a given RT model outside the frame of a given RAMI phase.
To facilitate this undertaking, the RAMI Online Model Checker (ROMC) was developed.
The ROMC is a web-based interface that allows the autonomous evaluation of RT models in quasi-real time.
Usage of the ROMC is simple and distinguishes between two kinds of quality assessment: 1) repeated "debugging" of a model against test cases from previous phases of RAMI, and 2) "validation" of a model under randomly chosen spectral, structural and illumination conditions [Widlowski et al., 2007b, RSE].
Correctly formatted RT model simulations of the selected/assigned test cases are uploaded via the ROMC web interface, which then produces a series of graphs documenting the closeness of the model simulations to the ROMC's "surrogate truth" data set.
All ROMC graphs can be obtained in PostScript format for easy inclusion in presentations and publications.
Using flux simulations from one of the "credible" 3-D Monte Carlo models identified during RAMI-3 as reference, the RAMI4PILPS suite of virtual experiments was designed to verify the accuracy and consistency of the radiative transfer formulations that provide the magnitudes of absorbed, reflected and transmitted shortwave radiative fluxes in a single grid cell of most soil-vegetation-atmosphere transfer (SVAT), numerical weather prediction (NWP) and global circulation models [Widlowski et al., 2011, JGR].
RAMI4PILPS thus evaluated flux models under perfectly controlled experimental conditions, eliminating the uncertainties, arising from incomplete or erroneous knowledge of a canopy's structural, spectral and illumination characteristics, that are typical of model comparisons with in situ observations.
The RAMI4PILPS setup thereby allows one to focus in particular on the numerical accuracy of shortwave radiative transfer formulations, and to pinpoint areas where future model improvements should concentrate.
The RAMI4PILPS results are now available online.
For the fourth phase of RAMI (RAMI-IV), a completely new set of architectural scenarios was provided, conveniently subdivided into "abstract" and "actual" canopies. The latter are based on detailed inventories of existing forest and plantation sites. The former are both 1-D and 3-D canopies with increased levels of spectral/structural complexity compared to the test cases of RAMI-3. Stand-level BRFs and fluxes, as well as LIDAR return signals and the response of in situ measuring devices (TRAC, DHP), had to be simulated. The model simulations pertaining to the "abstract" canopies were analysed in accordance with the ISO-13528 framework to determine the proficiency of models in matching predefined quality criteria [Widlowski et al., 2013, JGR]. A detailed overview of the results for the RAMI-IV abstract canopies is also available online. The model simulations pertaining to the highly complex "actual" canopies are still being analysed.
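The ISO-13528 framework mentioned above rates a participant's result against an assigned reference value via a standardised z-score. The sketch below shows the generic form of that score; the function name, the numerical values and the proficiency standard deviation are illustrative assumptions and do not reproduce the actual metrics or data of the RAMI-IV analysis.

```python
def z_score(model_value: float, assigned_value: float,
            sigma_p: float) -> float:
    """Generic ISO 13528-style proficiency z-score:
        z = (x - X) / sigma_p,
    where x is the participant's result, X the assigned (reference)
    value, and sigma_p the standard deviation chosen for proficiency
    assessment.  Conventionally, |z| <= 2 is 'satisfactory',
    2 < |z| < 3 'questionable', and |z| >= 3 'unsatisfactory'."""
    return (model_value - assigned_value) / sigma_p

# Hypothetical example: a simulated BRF of 0.052 against a reference
# of 0.050, with sigma_p = 0.0015 (all numbers invented)
z = z_score(model_value=0.052, assigned_value=0.050, sigma_p=0.0015)
# abs(z) is about 1.33, i.e. 'satisfactory' under the conventional thresholds
```

In practice the choice of sigma_p (from the predefined quality criteria) is what determines how demanding the proficiency test is.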
Related model intercomparison and quality assurance activities include: