18-Oct-2017
RAMI rationale, organisation and data-usage policy

Intended audience: This page explains the rationale behind the RAMI exercise, its functional organisation and its data-usage policy. It is intended for current and prospective participants in the RAMI benchmarking activity. If you intend to contribute model results to current or future intercomparison exercises, please read on to understand your rights and responsibilities in this venture. Your active participation in the exercise, and in particular your wilful, spontaneous and unsolicited submission of results in a particular RAMI (or RAMI4PILPS) phase, will be deemed to imply your knowledge and acceptance of the terms and conditions described on this web page. Otherwise, you can safely skip this page.

  • RAMI Rationale: Background and rationale for the RAMI initiative.
  • Qualified Participants: Information on the role, rights and responsibilities of qualified RAMI participants.
  • RAMI Coordinators: Information on the role, rights and responsibilities of the RAMI coordination group.
  • RAMI Advisory Body: Information on the role, rights and responsibilities of the RAMI advisory body.
  • Privacy and Data Policy: Information regarding copyrights, confidentiality and ownership of the results.
  • Future of RAMI: Information on the status of the RAMI evaluations and future phases.
  • RAMI4PILPS: Information on the role, rights and responsibilities of qualified RAMI4PILPS participants.



Background and rationale for the RAMI initiative

The interpretation of remote sensing data generated by Earth Observation satellites hinges on the exploitation of radiation transfer models that describe the interaction of solar radiation with the various geophysical media (atmosphere, vegetation, soil, etc.) of interest. It is thus of the utmost importance that these models be reliable and accurate. Since different models have been designed by different authors for different purposes, it is natural to carry out comparisons between these tools. This exercise is of primary interest to the authors because it provides an objective way to evaluate the performance and limits of applicability of these models. The results are also of relevance to the users of these models, in that they can select a particular tool for their specific purpose, according to requirements dictated by the application at hand.

The goal of the experiments described in the following pages is thus to provide a basis for comparing a large variety of computer codes that model the radiation fields typically encountered in remote sensing. The fundamental measurement is a radiance, or equivalently a reflectance, and these models thus simulate the spectral Bidirectional Reflectance Distribution Functions (BRDF) of the observed geophysical media. Of particular interest initially are the radiance distributions of solar radiation reflected from vegetation canopies, as well as the corresponding inverse problem of deriving biogeophysical canopy parameters from remotely sensed radiances.

The approach for such a model intercomparison is to provide benchmark cases and solutions which will be useful in the development and testing of models. The intercomparison exercise can also help identify the regimes of applicability of existing models. The overall benefit to the BRDF and remote sensing communities will be a demonstration of maturity and a better understanding of how these models are to be used in the practical interpretation of remote sensing data.

Scientific communities in several other technical specialty fields have used such model performance intercomparisons in the past, as a yardstick to guide the development and evolution of complex computational tools. Most recently, for example, a web site has been established by the climate modelling community for the Intercomparison of 3D Radiation Codes, whose approach was emulated here. Similar exercises have taken place to compare General Circulation Models, land surface parameterizations, and other simulation tools.

Historically, the remote sensing community had already performed a 'model cook-off' in the mid-1980s, when the first computer models of surface reflectance were tested for their usefulness in the interpretation of remote sensing data (N. Goel and F. Hall, private communication). Internet technologies did not exist in the form we know today, but this 'somewhat pedestrian model intercomparison' was nevertheless very useful for the further evolution of computer models, and it even identified some errors in previously published databases, which could then be corrected. RAMI constitutes a formalization and extension of that effort.

Milestones of the RAMI initiative

  • The first phase of the RAMI initiative took place in 1999, and the results were presented at the IWMMM-2 conference that took place in Ispra, Italy. A subset of these results appeared in the refereed literature as part of the proceedings of that conference.

  • The second phase of the RAMI initiative started in February 2002. Tests run under Phase 1 (Direct mode only) were renewed and new tests added, especially to exercise 3-dimensional RT models. Preliminary results of this exercise were unveiled at the IWMMM-3 conference in Steamboat Springs in June 2002. All simulation results as well as various statistics thereof are available on this website.

  • The third phase of the RAMI initiative was started in March 2005. RAMI-3 included a re-run of all RAMI-2 cases and featured a series of new experiments and measurements designed to complement and/or extend those of previous phases. Particular emphasis was given to within-scene (high spatial resolution) sampling of the radiative environment. Preliminary results were presented in October 2005 at the ISPMSRS in Beijing, China (homogeneous cases only) and in March 2006 at the IWMMM-4 in Sydney, Australia (all results). All simulation results as well as various statistics thereof are available on this website.

  • The RAMI On-line Model Checker (ROMC), a web-based RT model benchmarking facility, was built as a direct consequence of the close agreement between six of the participating 3D Monte Carlo models in RAMI-3. The ROMC allows for an autonomous evaluation of canopy reflectance models by comparing their simulation results over the internet against the values of a surrogate truth dataset.

  • The RAMI4PILPS exercise proposed a suite of experiments to assess the performance of current land surface schemes in climate and weather prediction models. RAMI4PILPS focused on hemispherically integrated shortwave fluxes, i.e., albedo, transmission and absorption in the visible (400 - 700 nm) and near-infrared (700 - 3000 nm) domains. No forcing terms were applied. Both diffuse and direct illumination conditions, as well as homogeneous and heterogeneous canopies, were included. Results were presented in July 2011 at the IUGG general assembly in Melbourne, Australia.

  • The fourth phase of RAMI was launched in February 2009 and offered a completely new set of test cases to its participants. The 'abstract canopy' category featured increased levels of spectral and structural complexity when compared to earlier phases of RAMI. The 'actual canopy' category, on the other hand, was based on laboratory measurements and field inventory data acquired from actual test sites. RAMI-IV also included new measurement types, e.g., lidar, TRAC and DHP, as well as a larger number of spectral bands (for the actual canopy test cases). The results for the abstract canopies were analysed according to ISO 13528 and presented at the 2013 AGU fall meeting in San Francisco, USA. The model simulations for the actual canopy test cases are still being analysed.
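The kind of automated model-to-reference comparison that the ROMC performs can be sketched in a few lines. The metric choice (RMSE and bias) and the sample BRF values below are illustrative assumptions only, not the actual ROMC protocol or its statistics:

```python
import math

def compare_to_reference(model_brf, reference_brf):
    """Compare a model's BRF simulations against a surrogate-truth
    reference dataset, returning RMSE and mean bias (hypothetical
    metric choice for illustration)."""
    if len(model_brf) != len(reference_brf):
        raise ValueError("model and reference must be sampled at the same angles")
    diffs = [m - r for m, r in zip(model_brf, reference_brf)]
    rmse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    bias = sum(diffs) / len(diffs)
    return rmse, bias

# Hypothetical BRF values at a few view zenith angles
model = [0.052, 0.048, 0.045, 0.047]
reference = [0.050, 0.049, 0.044, 0.046]
rmse, bias = compare_to_reference(model, reference)
print(f"RMSE = {rmse:.4f}, bias = {bias:+.4f}")
```

A real benchmarking facility would of course aggregate such metrics over many test cases and measurement types before reporting them.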

This Web site and the proposed intercomparisons reflect a self-organized activity of the BRDF community. The impetus for Phase 1 of RAMI came from the activities linked to the organization of the IWMMM-2 conference of ENAMORS, but participation in its continuation (Phase 2, 3 and beyond) is open to all interested participants, who can contribute freely to and benefit from these exchanges. Please read our privacy and data-usage policy.




The role, rights and responsibilities of the RAMI qualified participants

A scientist - not belonging to the RAMI coordination group - qualifies as a full participant to a particular phase of the RAMI benchmarking exercise if:

1.1.1: he/she is the original developer of the model to be tested,
or
1.1.2: he/she is duly authorized, by the original developer, to conduct the experiments on his/her behalf,
or
1.1.3: he/she is using a publicly available model,
and
1.2: he/she submits the model simulation results within the required constraints (deadline, format, etc.),
and
1.3: the volume and quality of the results submitted justify full participation in the current benchmarking phase.

Each model can participate only once to each exercise of a given phase. In other words, multiple participants cannot simultaneously submit results obtained from the same or similar models. Prospective participants should contact the RAMI coordinators in case of doubt. Participating in one phase of RAMI does not automatically imply or guarantee participation in any subsequent phase. Qualified participants accept full responsibility for the following tasks:

  • running the necessary simulations according to the protocol described
  • certifying that the results submitted were generated by the indicated model and used as prescribed in the RAMI protocol
  • presenting the results according to the recommended formats
  • sending the results to the RAMI coordinators within the scheduled period
  • modifying the code and re-submitting the results if suggested by the RAMI coordinators and deemed appropriate, within the allowed time periods
  • participating in the drafting of the papers, presentations and other official communications that may be generated at the end of each phase of the RAMI exercise

Since the analysis of the simulation results submitted is largely done automatically, files that do not meet the formatting and other constraints set by the RAMI coordinators may be rejected or unused. This may lead to the de facto disqualification of the participant for the particular exercise, or for the entire phase. The RAMI coordinators will ensure that all the necessary information and constraints are publicly available and accessible, and will make best reasonable efforts to provide warnings and error messages to the participants to inform them of possible problems.
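To illustrate why strict formatting matters for an automated analysis pipeline, a checker on the coordinators' side might do something along these lines. The three-column numeric layout and the column names are purely hypothetical; the actual RAMI file formats are specified elsewhere on this site:

```python
def validate_submission_lines(lines):
    """Check that each data line of a (hypothetical) submission file has
    the expected number of whitespace-separated numeric fields.
    Returns a list of human-readable error messages, empty if the
    file passes."""
    errors = []
    for i, line in enumerate(lines, start=1):
        if not line.strip() or line.startswith("#"):
            continue  # skip blank lines and comments
        fields = line.split()
        if len(fields) != 3:
            errors.append(f"line {i}: expected 3 fields, found {len(fields)}")
            continue
        try:
            [float(f) for f in fields]
        except ValueError:
            errors.append(f"line {i}: non-numeric field")
    return errors

# A toy submission with a header comment, one valid row and one bad row
sample = ["# vza  saa  brf", "0.0 0.0 0.0512", "10.0 0.0 bad"]
print(validate_submission_lines(sample))
```

Submissions that fail such checks cannot enter the automatic analysis, which is why non-conforming files may simply be rejected.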




The role, rights and responsibilities of the RAMI coordination group (currently at the JRC)

The RAMI model benchmarking activity is led by a small group of scientists at the Joint Research Centre (JRC) in Ispra (VA), Italy. Information on the rationale and development of the RAMI project can be found in the relevant section. The JRC group will continue to coordinate the RAMI exercise for the foreseeable future, and offers the following services to the community:

  • Initiate new phases of benchmarking and set the corresponding deadlines
  • Design and publish suites of experiments to test the models in progressively more demanding situations
  • Define and publicize the communication protocols and formats
  • Define and publish the statistical criteria used to establish measures of relative model performance
  • Maintain the dedicated RAMI web site
  • Receive and process model simulations from qualified participants
  • Provide feedback to the participants as appropriate
  • Analyse these simulations and coordinate the publication of the results in the refereed literature as well as on the official RAMI web site
  • Report to the RAMI advisory body

The RAMI coordinators may extend deadlines for submitting results or modify other aspects of the exercise if such action is clearly to the advantage of the modelling community and if all qualified participants are given equal opportunity to exploit these changes. In so doing, the coordination group guarantees that RAMI

  • operates on a voluntary basis
  • is open to all qualified participants, without discrimination or prejudice
  • is driven by open, commonly agreed scientific objectives and free from pressures and influences
  • offers unbiased and non-preferential treatment of model results
  • maintains confidentiality in the strict sense defined below



The role, rights and responsibilities of the RAMI Advisory Body

The RAMI Advisory Body (RAB) is composed of prominent members of the radiation transfer modelling community, and long-standing participants of the RAMI initiative. The functions of the RAB include:

  • Setting and maintaining the overall philosophy and work ethics of this sensitive activity
  • Overseeing the work of the RAMI coordinators, supporting and complementing their decisions and actions as required
  • Endorsing the overall strategy of RAMI and in particular the focus and direction of new experiments
  • Hearing arguments and making decisions in difficult or controversial cases

The RAB is the ultimate authority with regard to the conduct of RAMI; it is ultimately responsible for the scientific credibility and integrity of the benchmarking process. Its decisions are final, definitive and irrevocable, and cannot be appealed.




Copyrights, confidentiality and ownership of the results

The authors/developers of the models maintain full rights on their models. In fact, participation in the RAMI activity does not assume, imply or require any transfer of ownership, intellectual property, commercial rights or software codes between the participants and the RAMI coordinators.

Qualified participants do deliver to the RAMI coordinators the results of computations (typically, suitably formatted tables of numbers representing reflectances and other properties of the simulated radiation field), obtained with their own models, for the explicit purpose of comparing them with similarly derived results from other models. These results become the property of the RAMI coordination group, but the latter, in turn, uses these results exclusively for the stated purpose. Submitted results are not returned to their originators, but they can be updated or modified as needed and within the limits set by the coordination group.

Qualified participants are expected to actively participate in the scientific debate surrounding the RAMI exercise and to contribute to the writing and editing of publications and presentations resulting from this activity. They are automatically co-authors of such publications and presentations, unless they explicitly decline this privilege. Such works are typically submitted for publication by professional organizations (e.g., the American Geophysical Union), which normally own the copyright on the publication itself.

The RAMI coordinators, with the assent of and in collaboration with the qualified participants, publicize the results of the benchmarking exercises in scientific publications and presentations, as well as through the RAMI web site. By submitting simulation results to the RAMI coordinators, qualified participants authorize the latter to analyse these results and compare them with similarly derived results from other models; they also accept that the results of this analysis will be published as described earlier. However, individual model results as submitted by qualified participants, and intermediary results obtained during the analysis phase, are not published without the explicit consent of the authors, and are never disclosed to anybody (including other qualified participants) for any reason. Qualified participants may select or approve the publication of their particular model results, but the recommended procedure is for them to organize the delivery of software codes or results themselves.

For practical reasons, the copyrights on the RAMI web site, including the text, figures, tables and other graphical, textual or programming elements, remain with the RAMI coordinators. Within the limits allowed by copyright law, figures, tables, statistics and other materials published on the RAMI web site can be downloaded and used in other works, provided a full and explicit reference to the source materials is duly provided. For refereed publications, the full reference to the published paper is sufficient. For information gathered on the web site, the full HTTP address of the appropriate web page or site is required.

Qualified participants in RAMI agree to publish the results of the benchmarking as a group, and to refrain from initiating, running or publishing these results, or those resulting from similar benchmarking activities on their own or as a subgroup, unless full and explicit consent by the RAB, the RAMI coordinators and all participants is formally obtained.

Neither the RAB nor the RAMI coordinators take any responsibility for the scientific value or suitability for any particular purpose of the model results submitted in this context. They do however assume responsibility for the analyses and make every effort to ensure the appropriateness, accuracy and fairness of these benchmarks, and request advice or support from the qualified participants as and when needed. The qualified participants, jointly with the RAB and coordinators assume the scientific responsibility of the contents of the publications (both printed and electronic) pertaining to RAMI. If errors or inaccuracies are found after publication, every effort will be made to correct them, e.g., on the RAMI web site or as appropriate.




Status of the RAMI evaluations and future phases

By its very nature, RAMI is a dynamic, evolving activity. New experiments and benchmarks are designed and proposed at the start of each new phase, both to exercise the participating models under more stringent conditions and to remain in line with the progressive development, complexity and performance of the models.

By the same token, existing models are continually improved and expanded, obsolete codes are discarded, and new approaches and tools are developed to address past and current limitations or to take advantage of new findings or observational opportunities.

As a result, the benchmarks and comparative evaluations issued by the RAMI process must be considered snapshots of the state of the art at the time of the exercise, not a final, absolute and definitive judgment on the worthiness and performance of any particular model. This also implies that model developers should document the version and release numbers of their codes, that the rating of a particular model can change over time, and that the users of these benchmarks must understand these issues and make sure they associate the results with the proper model version.

Last but not least, if you have any questions or suggestions about these issues, please contact the RAMI coordinators before initiating your participation in a particular phase of the activity.

The role, rights and responsibilities of the RAMI4PILPS qualified participants

A scientist qualifies as a full participant to the RAMI4PILPS benchmarking exercise if:

1.1.1: he/she is the original developer of the model to be tested,
or
1.1.2: he/she is duly authorized, by the original developer, to conduct the experiments on his/her behalf,
or
1.1.3: he/she is using a publicly available model,
or
1.1.4: he/she has obtained the go-ahead from the RAMI4PILPS coordinators,
and
1.2: he/she submits the model simulation results within the required constraints (deadline, format, etc.),
and
1.3: the volume and quality of the results submitted justify full participation.

Each model can participate only once to each exercise of a given phase. In other words, multiple participants cannot simultaneously submit results obtained from the same or very similar models. Prospective participants should contact the RAMI4PILPS coordinators in case of doubt. Qualified participants accept full responsibility for the following tasks:

  • running the necessary simulations according to the protocol described,
  • certifying that the results submitted were generated by the indicated model and used as prescribed in the RAMI4PILPS protocol,
  • presenting the results according to the recommended formats,
  • sending the results to the RAMI4PILPS coordinators within the scheduled period,
  • modifying the code and re-submitting the results if suggested by the RAMI4PILPS coordinators and deemed appropriate, within the allowed time periods,
  • participating in the drafting of the papers, presentations and other official communications that may be generated at the end of the RAMI4PILPS exercise,
  • agreeing to the RAMI4PILPS conditions for copyrights, confidentiality and ownership of the results.

Since the analysis of the simulation results submitted is largely done automatically, files that do not meet the formatting and other constraints set by the RAMI4PILPS coordinators may be rejected or unused. This may lead to the de facto disqualification of the participant for the particular exercise, or for the entire phase. The RAMI4PILPS coordinators will ensure that all the necessary information and constraints are publicly available and accessible, and will make best reasonable efforts to provide warnings and error messages to the participants to inform them of possible problems. If you have any questions or suggestions about these issues, please contact the RAMI4PILPS coordinators before initiating your participation in the RAMI4PILPS activity.
