Use cases for WMS Best Practices Interoperability Experiments

Introduction

This is a collection of use cases offered for the Interoperability Experiments. The plan in winter 2010 is to accumulate use cases, examine them to see what they require, and then decide which ones to use for this year's experiment. At this stage, each use case only needs a story; eventually they will be detailed, first in terms of the difficulties they present and finally in terms of the means available. Somewhere along the way, the Interoperability Experiment will need to pick the use cases it wants to attempt. Selection for the 1st Interoperability Experiment will be made during teleconferences or at the Frascati TC in March 2010.

Note that another set of use cases can be found at the page MetOceanUseCases.

Note: editing rules for the twiki can be found at http://twiki.org/cgi-bin/view/TWiki/TextFormattingRules.

Candidate Use Cases:

These are presented simply in the order in which they were added.

Stephan 1 : Analysis of a weather feature

A forecaster at an NWS and an analyst at ECMWF are observing and analysing a weather feature and trying to predict its future development. They want to compare its development between different model runs and look at ensemble outputs. The aim is to see how easily forecasters can use maps of other NWP forecasts to improve their ability to assess a meteorological situation.

Ideally, the ECMWF and NWS analysts should also be able to easily exchange the maps they are looking at.

What this use case tests:

  • this would test time dimensions
  • security is (hopefully) a limited factor here, since we have "trusted" channels
  • the EGOWS community could pick this up, since the clients are forecaster workstations
  • this use case can later be extended to let forecaster workstations request related data through WCS and exchange meteorological objects (in GML/CSML) between workstations

Challenges for the IE:

  • How to define "different model runs" (see the request sketch after this list)
  • How to generate "ensemble outputs"
  • How to serve the models via WMS
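
As an illustration of the time-dimension and model-run challenges above, the following sketch (Python) builds WMS 1.3.0 GetMap URLs for one layer at several validity times and model runs. The server URL, layer name and the DIM_RUN dimension are hypothetical placeholders, not part of this use case; WMS 1.3.0 itself only standardises TIME and ELEVATION and passes other server-declared sample dimensions with a "DIM_" prefix.

    # Minimal sketch (Python): WMS 1.3.0 GetMap requests for one weather feature
    # at a given validity time and model run.  The endpoint, layer name and the
    # DIM_RUN vendor dimension are hypothetical placeholders.
    from urllib.parse import urlencode

    BASE_URL = "https://nwp.example.org/wms"          # hypothetical WMS endpoint
    LAYER = "ecmwf_mslp"                              # hypothetical layer name

    def getmap_url(time_iso, run_iso, bbox="30,-60,70,10"):
        # Note: in WMS 1.3.0 with CRS=EPSG:4326 the BBOX axis order is lat,lon.
        params = {
            "SERVICE": "WMS",
            "VERSION": "1.3.0",
            "REQUEST": "GetMap",
            "LAYERS": LAYER,
            "STYLES": "",
            "CRS": "EPSG:4326",
            "BBOX": bbox,
            "WIDTH": 800,
            "HEIGHT": 600,
            "FORMAT": "image/png",
            "TRANSPARENT": "TRUE",
            "TIME": time_iso,        # validity time (standard WMS dimension)
            "DIM_RUN": run_iso,      # model run / reference time (vendor dimension)
        }
        return BASE_URL + "?" + urlencode(params)

    # Compare the same validity time across two consecutive runs:
    for run in ("2010-03-01T00:00:00Z", "2010-03-01T12:00:00Z"):
        print(getmap_url("2010-03-02T12:00:00Z", run))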

Stephan 2 : Obtaining forecasts

Several government agencies are observing a disaster zone (such as Haiti). They have to plan relief by air, sea and road. They have maps of roads, storage facilities, hospitals, etc., and would like to overlay these with forecasts (such as average or accumulated rainfall) to make decisions and co-ordinate the relief. The aim is to see how reliably and quickly NWSs can provide forecasts for use in standard OGC clients.

What this use case tests:

  • Only our servers are really tested here
  • security is a big issue
  • meteorology-specific clients would only be considered if very thin; more likely the clients would be standard OGC clients (such as ESRI)
  • this use case can be used to make a business case for using OGC standards

Challenges for the IE:

  • How to define different "forecasts" (see the capabilities-parsing sketch below)
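
One possible way to see what "forecasts" a server actually offers is to read its capabilities document. The sketch below (Python) lists the named layers of a WMS 1.3.0 GetCapabilities response together with their advertised time dimension; the endpoint URL is a hypothetical placeholder, while the element names and namespace follow the published WMS 1.3.0 schema.

    # Minimal sketch (Python): list the layers of a WMS 1.3.0 capabilities
    # document and their TIME dimension.  The endpoint is hypothetical.
    import urllib.request
    import xml.etree.ElementTree as ET

    WMS_NS = "{http://www.opengis.net/wms}"
    CAPS_URL = ("https://nwp.example.org/wms"         # hypothetical endpoint
                "?SERVICE=WMS&VERSION=1.3.0&REQUEST=GetCapabilities")

    with urllib.request.urlopen(CAPS_URL) as resp:
        tree = ET.parse(resp)

    for layer in tree.iter(WMS_NS + "Layer"):
        name = layer.findtext(WMS_NS + "Name")
        if name is None:                   # container layers have no Name
            continue
        time_dim = None
        for dim in layer.findall(WMS_NS + "Dimension"):
            if dim.get("name", "").lower() == "time":
                time_dim = dim.text        # e.g. start/end/period or a list
        print(name, "->", time_dim or "no time dimension")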

Jeff 1 : Building an ANASYG/PRESYG Product.

  • Step: define the level of confidence in the model by comparing the Mean Sea Level Pressure from several models

At 09:00 UTC and 18:00 UTC, Jeff wants to build an ANASYG/PRESYG for a 24-hour forecast. He visualizes the Mean Sea Level Pressure (a field only available at surface level) over the North Atlantic area, but today the French model he uses regularly seems doubtful to him: he identifies some "anomalies" that may be due to an exaggeration by the model in certain meteorological situations. They could also be real anomalies (anomalies are uncommon values that may nevertheless be real, to be distinguished from exaggerations or errors of the models, which are just artifacts of the numerical outputs and not reality).

So he wants to access and visualize the behaviour of all other available models over the same area for the same validity date. The run can be different, but not older than 12 hours.

Note: an expanded version of this with image explanations can be found here. WARNING: the image files are large.

Note: a simplified, reworded take on the use case, with smaller images, can be found here.

What this use case tests:

  • this would test time dimensions
  • the EGOWS community could pick this up, since the clients are forecaster workstations and this is a basic need for all NWSs
  • this use case can later be extended to let forecaster workstations request related data through WCS and exchange meteorological objects (in GML/CSML) between workstations

Challenges for the IE:

  • How to represent Mean Sea Level Pressure (SLD does not do isolines, so would need to be extended, but that work is ongoing)
  • How to browse the time dimensions
  • How to request "Mean Sea Level Pressure" (see the multi-model request sketch below)
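
The sketch below (Python) illustrates the "run not older than 12 hours" rule: for each model it picks the most recent acceptable run and builds a GetMap request for the Mean Sea Level Pressure layer at one common validity date. Server URLs, layer names and the DIM_RUN vendor dimension are hypothetical assumptions, not agreed conventions.

    # Minimal sketch (Python): pick the newest run per model that is not older
    # than 12 hours and request the MSLP layer for one validity date.
    # Endpoints, layer names and DIM_RUN are hypothetical placeholders.
    from datetime import datetime, timedelta
    from urllib.parse import urlencode

    NOW = datetime(2010, 3, 1, 18, 0)               # analysis time (example)
    VALIDITY = "2010-03-02T18:00:00Z"               # common validity date

    MODELS = {                                      # hypothetical servers/layers
        "arpege": ("https://mf.example.org/wms", "arpege_mslp"),
        "ecmwf":  ("https://ec.example.org/wms", "ecmwf_mslp"),
        "gfs":    ("https://us.example.org/wms", "gfs_mslp"),
    }

    def latest_run(available_runs, now, max_age_hours=12):
        """Return the newest run start time not older than max_age_hours."""
        recent = [r for r in available_runs
                  if now - r <= timedelta(hours=max_age_hours)]
        return max(recent) if recent else None

    # In practice these would be parsed from each model's capabilities document.
    runs = [datetime(2010, 3, 1, h) for h in (0, 6, 12)]

    for model, (url, layer) in MODELS.items():
        run = latest_run(runs, NOW)
        if run is None:
            continue
        query = urlencode({
            "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
            "LAYERS": layer, "STYLES": "", "CRS": "EPSG:4326",
            "BBOX": "30,-60,70,10", "WIDTH": 800, "HEIGHT": 600,
            "FORMAT": "image/png", "TRANSPARENT": "TRUE",
            "TIME": VALIDITY,
            "DIM_RUN": run.strftime("%Y-%m-%dT%H:%M:%SZ"),
        })
        print(model, "->", url + "?" + query)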

Cecile 1 : Checking the calibration of the model (i.e. the correct localisation of the significant meteorological patterns).

  • Step 1: check the model calibration at surface level by comparing the pressure available from the observations with the Mean Sea Level Pressure analysis provided by the numerical model (RUN_START_TIME = TIME, FORECAST_OFFSET = 0H)

It is 14 UTC. Cecile visualizes the observations at 12 UTC because she has a run available at 12 UTC (i.e. she visualises observations at the RUN_START_TIME of the model she wants to calibrate) and overlays the Mean Sea Level Pressure from that model. She can also overlay other models available at the same TIME (validity date) whose RUN_START_TIME is not older than 12 hours (same as use case Jeff 1).

  • Step 2: check the model calibration at surface level by comparing the observed precipitation accumulation ("water wave") with the accumulated precipitation available at the first time step of the model

It is 14 UTC. Cecile wants to calibrate a model whose step is 1 hour and whose last available run is at 12 UTC. She visualizes the water wave accumulated over 1 hour at 13 UTC (i.e. she visualises the water wave at TIME = RUN_START_TIME + first FORECAST_OFFSET) and visualizes in another window the total precipitation from the model she wants to calibrate. Remark: the total precipitation field is not available at FORECAST_OFFSET = 0H, so the comparison has to be made at the first FORECAST_OFFSET (here 1H). If the step of the model is greater than 1 hour, the comparison has to be made over the same duration (i.e. if the step is 3 hours, the water wave has to be accumulated over 3 hours).

  • Step 3: check the model calibration at altitude by comparing the water vapour satellite image with the altitude of the 2 PVU surface from the numerical model

Cecile visualizes the water vapour image from Meteosat at 12 UTC. She overlays the geopotential field on the 2.0 level expressed in PVU (Potential Vorticity Units) at 12 UTC. She then checks whether the dark areas of the image match the areas of strong gradient.

  • Step 4: check the model calibration at altitude by comparing the infrared satellite image with the humidity field at 850 hPa from the numerical model

Cecile visualizes the infrared image from Meteosat at 12 UTC. She overlays the humidity field at 850 hPa at 12 UTC. She then checks whether the white areas of the image match the areas where the humidity is over 90% (the thicker isoline of the humidity field).

What this use case tests:

Step 1: same as Jeff 1. Step 2:
  • this would test accumulations over time
  • this use case can later be extended to let forecaster workstations request related data through WCS and exchange meteorological objects (in GML/CSML) between workstations

Challenges for the IE:

Step 1: same as Jeff 1. Step 2:
  • How to define an accumulation duration (see the request sketch after this list)
  • How to combine observations and forecasts
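
There is no agreed way to ask a WMS for an accumulation duration. The sketch below (Python) shows one possible, non-standardised convention: encoding the accumulation window as an ISO 8601 interval in the TIME parameter. The endpoint and layer names are hypothetical, and a real server might instead use a vendor-specific DIM_ parameter.

    # Minimal sketch (Python): one possible (non-standardised) convention for
    # requesting an accumulation: the window is encoded as an ISO 8601 interval
    # in TIME.  Endpoint and layer names are hypothetical placeholders.
    from urllib.parse import urlencode

    BASE_URL = "https://nwp.example.org/wms"        # hypothetical endpoint

    def accumulation_url(layer, start_iso, end_iso):
        params = {
            "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
            "LAYERS": layer, "STYLES": "", "CRS": "EPSG:4326",
            "BBOX": "40,-10,55,10", "WIDTH": 800, "HEIGHT": 600,
            "FORMAT": "image/png", "TRANSPARENT": "TRUE",
            # accumulation window expressed as an ISO 8601 interval
            "TIME": start_iso + "/" + end_iso,
        }
        return BASE_URL + "?" + urlencode(params)

    # 1-hour total precipitation valid at 13 UTC from the 12 UTC run:
    print(accumulation_url("model_total_precipitation",
                           "2010-03-01T12:00:00Z", "2010-03-01T13:00:00Z"))
    # Observed accumulation (water wave) over the same hour for comparison:
    print(accumulation_url("observed_precipitation",
                           "2010-03-01T12:00:00Z", "2010-03-01T13:00:00Z"))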

Bruno 1: Building a forecast for D+2, typically 48-72 H over France.

Bruno wants to build a forecast for D+2, typically 48-72 H over France.

An area of low pressure is moving towards France; the aim is to predict its most probable trajectory. Over a domain like "Europe-Atlantic", he wants to visualize together the trajectories forecast by different models and prediction systems (an example is given in Figure 1). To do so, he chooses to superimpose the deterministic models of interest (arp, ECMWF, UK, GFS, etc.; he can choose from the last 2 or 3 available runs). He isolates the trajectories of the depression of interest. He also overlays the trajectories of the ensemble forecasts (arp, ECMWF and US, etc., the 2 or 3 latest available runs), as sketched below.
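
A purely map-based part of this workflow could look like the sketch below (Python, using the Pillow imaging library): transparent GetMap responses from several servers are fetched for the same bounding box and validity time and composited into one image. The server URLs and trajectory layer names are hypothetical placeholders; the interactive selection and editing of trajectories goes beyond what WMS alone provides.

    # Minimal sketch (Python, Pillow): overlay transparent GetMap responses from
    # several hypothetical WMS servers so the deterministic and ensemble
    # trajectory layers appear on one map.  URLs and layer names are placeholders.
    from io import BytesIO
    from urllib.parse import urlencode
    from urllib.request import urlopen
    from PIL import Image

    COMMON = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "STYLES": "", "CRS": "EPSG:4326", "BBOX": "30,-40,65,20",
        "WIDTH": 1024, "HEIGHT": 768, "FORMAT": "image/png",
        "TRANSPARENT": "TRUE", "TIME": "2010-03-03T12:00:00Z",
    }
    SOURCES = [                                 # hypothetical endpoints/layers
        ("https://mf.example.org/wms", "arpege_low_tracks"),
        ("https://ec.example.org/wms", "ecmwf_eps_low_tracks"),
        ("https://us.example.org/wms", "gfs_low_tracks"),
    ]

    composite = None
    for url, layer in SOURCES:
        query = urlencode(dict(COMMON, LAYERS=layer))
        with urlopen(url + "?" + query) as resp:
            tile = Image.open(BytesIO(resp.read())).convert("RGBA")
        composite = tile if composite is None else Image.alpha_composite(composite, tile)

    composite.save("trajectories_overlay.png")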

He then wants to manipulate these trajectories: for example, exclude the less likely or less realistic ones, and calculate and visualize the average and median trajectories among the selected paths. Finally, he builds the official (most likely) trajectory and makes it available.

This work should take about 1 hour.

What this use case tests:

Challenges for the IE:

Jeff 2 : Problem of temperature in a cold wave situation.

One of our major customers uses our temperature estimates over the next 15 days (or more) for its production planning. In a context of poor predictability, when the ensemble forecast presents high instability, high dispersion or bi-modality (two opposite solutions equally probable), it is necessary to look for all elements that may help us choose a scenario. It is therefore particularly useful in this case to access information from the ensemble forecasts of other countries, in this case the USA (GEFS), Canada (GEM-CMC) and NAEFS, which is a super-set composed of both of them. The mean fields of MSLP, Z500, T500 and T850 allow a comparison of the most likely type of forecast flow, so that an option can be chosen more comfortably or, at worst, an opinion on the poor predictability supported. Fields of T850 anomaly relative to climatology are also very useful for comparing signals, which are mostly qualitative.
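
There is no standard WMS dimension for ensemble members or statistics. Purely as an assumption for discussion, the sketch below (Python) requests an "ensemble mean" T850 layer from three hypothetical servers through a made-up DIM_ENSEMBLE vendor dimension; everything except the standard GetMap parameters is a placeholder.

    # Minimal sketch (Python): requesting the same ensemble-mean field from
    # several centres.  DIM_ENSEMBLE, the endpoints and the layer names are all
    # hypothetical placeholders; no standard convention exists yet.
    from urllib.parse import urlencode

    CENTRES = {                                 # hypothetical ensemble servers
        "gefs":  ("https://us.example.org/wms", "gefs_t850"),
        "gem":   ("https://ca.example.org/wms", "gem_cmc_t850"),
        "naefs": ("https://na.example.org/wms", "naefs_t850"),
    }

    for centre, (url, layer) in CENTRES.items():
        query = urlencode({
            "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
            "LAYERS": layer, "STYLES": "", "CRS": "EPSG:4326",
            "BBOX": "35,-10,55,15", "WIDTH": 800, "HEIGHT": 600,
            "FORMAT": "image/png", "TRANSPARENT": "TRUE",
            "TIME": "2010-03-10T00:00:00Z",
            "DIM_ENSEMBLE": "mean",             # hypothetical statistic value
        })
        print(centre, "->", url + "?" + query)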

What this use case tests:

Challenges for the IE:

Jeff 3 : Problem of uncertainty in short-term prediction (D2, D3).

At these ranges, differences between deterministic models are quite common. Sometimes it is only uncertainty in the spatio-temporal detail; sometimes these differences can affect the type of weather throughout the day. The two reference models, the French ARPEGE and ECMWF, can be contradictory, and it also happens that the ensembles built on these two models have different mean solutions. In this case it is difficult to decide on a scenario. It is therefore useful to have access to the outputs of other deterministic models to establish a more likely scenario. The GFS (USA), GEM (Canada), UK or Japanese models, among others, may be useful. In the case of a more or less significant worsening of the rainfall, the precipitation forecasts of the models will be useful (RR03, RR06, RR24). In case of uncertainty about the wind strength, the gust fields will need to be compared (see the comparison sketch below).
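
A simple way to compare the same precipitation product across models is a side-by-side montage, as in the sketch below (Python, using the Pillow imaging library). The endpoints and the "*_rr24" layer names are hypothetical placeholders.

    # Minimal sketch (Python, Pillow): fetch the 24 h precipitation layer from
    # several deterministic models for the same D+2 validity date and paste the
    # images side by side.  Endpoints and layer names are hypothetical.
    from io import BytesIO
    from urllib.parse import urlencode
    from urllib.request import urlopen
    from PIL import Image

    W, H = 600, 450
    MODELS = [                                  # hypothetical servers/layers
        ("https://mf.example.org/wms", "arpege_rr24"),
        ("https://ec.example.org/wms", "ecmwf_rr24"),
        ("https://us.example.org/wms", "gfs_rr24"),
    ]

    panel = Image.new("RGB", (W * len(MODELS), H), "white")
    for i, (url, layer) in enumerate(MODELS):
        query = urlencode({
            "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
            "LAYERS": layer, "STYLES": "", "CRS": "EPSG:4326",
            "BBOX": "41,-5,51,9", "WIDTH": W, "HEIGHT": H,
            "FORMAT": "image/png", "TIME": "2010-03-04T00:00:00Z",
        })
        with urlopen(url + "?" + query) as resp:
            panel.paste(Image.open(BytesIO(resp.read())).convert("RGB"), (i * W, 0))

    panel.save("rr24_comparison.png")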

What this use case tests:

Challenges for the IE:

Chris 1 : Catalogue Interoperability for International Polar Year

The International Polar Year (IPY) is a truly interdisciplinary scientific data gathering exercise covering biology, meteorology, oceanography, atmospheric chemistry, human sciences, etc., concerning the polar regions of the earth. This data will be made widely accessible through a series of catalogues which can be searched for Data Access and Retrieval.

The catalogues will use various ontologies, and in some cases not have a common ontology or even a controlled vocabulary.

Catalogue interoperability will be based on Open Archives Initiative - Protocol for Metadata Harvesting (OAI-PMH) using NASA's Global Change Master Directory - Directory Interchange Format (GCMD DIF) as an exchange format. However, some centres do use ISO 23950, which is the preferred standard for libraries and the WMO.
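
A minimal OAI-PMH harvest could look like the sketch below (Python): it issues a ListRecords request and prints the record identifiers and datestamps. The repository URL is a placeholder, and the metadataPrefix used for GCMD DIF should be confirmed with a ListMetadataFormats request against the actual catalogue.

    # Minimal sketch (Python): harvest catalogue records over OAI-PMH using the
    # ListRecords verb.  The repository URL is hypothetical, and the "dif"
    # metadataPrefix is an assumption to be checked via ListMetadataFormats.
    import urllib.request
    import xml.etree.ElementTree as ET
    from urllib.parse import urlencode

    OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"
    REPO = "https://ipy-catalogue.example.org/oai"   # hypothetical repository

    query = urlencode({"verb": "ListRecords", "metadataPrefix": "dif"})
    with urllib.request.urlopen(REPO + "?" + query) as resp:
        tree = ET.parse(resp)

    for header in tree.iter(OAI_NS + "header"):
        print(header.findtext(OAI_NS + "identifier"),
              header.findtext(OAI_NS + "datestamp"))

    # A full harvester would also follow the resumptionToken element to page
    # through large result sets.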

This use case, tabulated in the attached Word document, envisages a user at any IPY centre performing a search and receiving information irrespective of the catalogue metadata harvesting protocol and metadata exchange format used by individual catalogues.

  • uc_ipy.doc: Use Case for International Polar Year Catalogue Interoperability

-- ChrisLittle & Øystein Godøy - 22 Feb 2010

What this use case tests:

Challenges for the IE:

Chris 2 : Coordinate Reference Systems And Map Projections

The International Polar Year (IPY) is a truly interdisciplinary scientific data gathering exercise covering biology, meteorology, oceanography, atmospheric chemistry, human sciences, etc., concerning the polar regions of the earth. This data will be made widely accessible through a series of catalogues which can be searched for Data Access and Retrieval.

IPY data mainly describe polar regions and come in a wide variety of map projections, coordinate reference systems, etc. The problem is to understand the identified projection and to translate coordinates and transform map projections in a standardised and safe manner.
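
As a small illustration of the coordinate-translation problem, the sketch below (Python, using the pyproj library) converts a point between geographic WGS 84 (EPSG:4326) and a polar stereographic projection commonly used for Arctic data (EPSG:3413). The choice of target CRS is an assumption for illustration; IPY data sets may declare many other projections.

    # Minimal sketch (Python, pyproj): translate coordinates between WGS 84 and
    # a polar stereographic CRS, e.g. to build the BBOX of a GetMap request
    # against a polar WMS layer.  EPSG:3413 is chosen only as an example.
    from pyproj import Transformer

    # always_xy=True keeps the longitude/latitude (x/y) axis order predictable
    to_polar = Transformer.from_crs("EPSG:4326", "EPSG:3413", always_xy=True)

    lon, lat = -45.0, 75.0                      # a point over Greenland
    x, y = to_polar.transform(lon, lat)
    print(f"EPSG:3413 coordinates: x={x:.0f} m, y={y:.0f} m")

    # Reverse transformation back to geographic coordinates:
    to_geo = Transformer.from_crs("EPSG:3413", "EPSG:4326", always_xy=True)
    print(to_geo.transform(x, y))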

This use case, tabulated in the attached Word document, envisages a user at any IPY centre performing a search, receiving information irrespective of the catalogues or data repositories used, and constructing layers of a map using Web Map Services. The data may be re-projected to ensure harmonised and consistent layers.

-- ChrisLittle & Øystein Godøy - 22 Feb 2010

What this use case tests:

Challenges for the IE:

Attachments:

  • MODWG_WMS-use-case-1_avc001.doc (4 MB, 08 Feb 2010, AdrianCuster) - Adrian's revision of Marie-Francoise's Use Case #1
  • Use_case_1_detailed.doc (12 MB, 04 Feb 2010, MarieFrancoiseVoidrotMartinez)
  • uc_ipy.doc (25 K, 22 Feb 2010, ChrisLittle) - Use Case for International Polar Year Catalogue Interoperability