This document provides guidance and supporting documentation to implement the different steps and processes needed to make Ethiopia’s Anticipatory Action Plan (AAP) operational. It covers the forecast-based AA trigger protocol, how to set up PyCPT, how to generate and evaluate a new forecast configuration, how to implement an operational forecasting system once the configuration is finalized, and how to use the implementation tools to visualize and communicate the forecast.
Trigger rules are more broadly known in AA implementation as the activation protocol. Note that the rules for each season and region follow the same logical principles as discussed with stakeholders; however, each case has its own specific definitions.
The Forecast-Based Anticipatory Action Plan is triggered according to the following rule.
The TWG designates a historical Frequency with which anticipatory action is intended to trigger. For this project, actions associated with a “Moderate” severity drought are meant to trigger in 30% of years, and actions associated with a “Severe” drought are meant to trigger in only 20% of years.
For each year, 1991-present, the IRI forecast database retrieves the forecasted probability of non-exceedance associated with the chosen Frequency. For example, the forecast probability at 20% frequency indicates the likelihood during a given year that cumulative MAM precipitation will be at least as dry as the 20th percentile in the historical record, or roughly a 1/5 year drought.
The IRI database calculates the Threshold for triggering action. This is also based on the Frequency; for example, at the 20% frequency, the threshold is set such that the most probable 1/5th of forecasts in the historical record would have triggered action.
Since the forecast model has different levels of skill at different lead times, the Threshold corresponding to a given frequency of action will not necessarily be the same for each issue month. Consult the Appendix for more detailed documentation of model construction and skill.
This also means that as more years of history are added to the record, the Threshold will change from year to year, since it is based on percentiles of the historical distribution from 1990 to the most recent year.
To account for the fact that consecutive seasons of drought can have compounding effects in arid areas like Somali Region, two adjustments are made to the forecast threshold.
If the previous OND season’s rainfall (as measured by the Standardized Precipitation Index, SPI) was below its long-term average, 3.5 percentage points are subtracted from the forecast threshold.
If the most recent monthly measurement of MODIS NDVI (for example, November for the December issue forecast) is below its long-term average, 1.5 percentage points are subtracted from the forecast threshold.
The forecast probability for a given year is compared against the adjusted Threshold. If it is equal to or higher than the Threshold, anticipatory action is triggered for that month. If it is less than the Threshold, no action is taken that month.
This process is followed for each forecast issue month - December, January and February.
This trigger rule is based on the average forecast over Somali region. However, the tables in this document also present information on the forecast measured at the Woreda level, to assist in the targeting of anticipatory action programs within the region.
NB: For the December issue forecast only, CHIRP data for the period October 1 - November 16 is used to calculate the previous season’s SPI, since the full CHIRPS OND data is not yet available.
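The trigger rule above can be sketched in Python. This is an illustrative sketch using a toy ten-year record of forecast probabilities; the operational thresholds and forecast values come from the IRI forecast database, and only the 3.5 and 1.5 percentage-point adjustments are taken from the text.

```python
def base_threshold(historical_probs, frequency):
    """Set the threshold so that the most probable `frequency` fraction of
    historical forecasts would have triggered (e.g. 0.20 -> top 1/5)."""
    ranked = sorted(historical_probs, reverse=True)
    n_trigger = max(1, round(frequency * len(ranked)))
    return ranked[n_trigger - 1]  # lowest probability still in the top fraction

def adjusted_threshold(threshold, prev_season_spi, ndvi_anomaly):
    """Apply the two compounding-drought adjustments from the text."""
    if prev_season_spi < 0:   # previous season's SPI below its long-term average
        threshold -= 3.5
    if ndvi_anomaly < 0:      # most recent NDVI below its long-term average
        threshold -= 1.5
    return threshold

def triggers(forecast_prob, threshold):
    """Action triggers when the forecast probability meets or exceeds the threshold."""
    return forecast_prob >= threshold

# Toy record of forecast probabilities (%), one per year:
probs = [10, 15, 20, 25, 30, 35, 40, 45, 50, 55]
t = base_threshold(probs, 0.20)             # top 1/5 of 10 years = 2 forecasts -> 50
t_adj = adjusted_threshold(t, -0.8, -0.05)  # 50 - 3.5 - 1.5 = 45.0
print(t, t_adj, triggers(47.0, t_adj))      # 50 45.0 True
```

Note how both adjustments lower the threshold, making the system more likely to trigger after a dry preceding season or poor vegetation conditions.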
The Forecast-Based Anticipatory Action Plan is triggered according to the following rule.
The TWG designates a historical Frequency with which anticipatory action is intended to trigger. For this project, actions associated with a “Moderate” severity drought are meant to trigger in 30% of years, and actions associated with a “Severe” drought are meant to trigger in only 20% of years.
For each year, 1991-present, the IRI forecast database retrieves the forecasted probability of non-exceedance associated with the chosen Frequency. For example, the forecast probability at 20% frequency indicates the likelihood during a given year that cumulative OND precipitation will be at least as dry as the 20th percentile in the historical record, or roughly a 1/5 year drought.
The IRI database calculates the Threshold for triggering action. This is also based on the Frequency; for example, at the 20% frequency, the threshold is set such that the most probable 1/5th of forecasts in the historical record would have triggered action.
Since the forecast model has different levels of skill at different lead times, the Threshold corresponding to a given frequency of action will not necessarily be the same for each issue month. Consult the Appendix for more detailed documentation of model construction and skill.
This also means that as more years of history are added to the record, the Threshold will change from year to year, since it is based on percentiles of the historical distribution from 1990 to the most recent year.
To account for the fact that consecutive seasons of drought can have compounding effects in arid areas like Somali Region, two adjustments are made to the forecast threshold.
If the previous MAM season’s rainfall (as measured by the Standardized Precipitation Index, SPI) was below its long-term average, 3.5 percentage points are subtracted from the forecast threshold.
If the most recent monthly measurement of MODIS NDVI (for example, June for the July issue forecast) is below its long-term average, 1.5 percentage points are subtracted from the forecast threshold.
The forecast probability for a given year is compared against the adjusted Threshold. If it is equal to or higher than the Threshold, anticipatory action is triggered for that month. If it is less than the Threshold, no action is taken that month.
This process is followed for each forecast issue month - July, August and September.
This trigger rule is based on the average forecast over Somali region. However, the tables in this document also present information on the forecast measured at the Woreda level, to assist in the targeting of anticipatory action programs within the region.
The Forecast-Based Anticipatory Action Plan is triggered according to the following rule.
The TWG designates a historical Frequency with which anticipatory action is intended to trigger. For this project, actions associated with a “Moderate” severity drought are meant to trigger in 35% of years, and actions associated with a “Severe” drought are meant to trigger in only 25% of years.
For each year, 1991-present, the IRI map tool retrieves the forecasted probability of non-exceedance associated with the chosen Frequency. For example, the forecast probability at 25% frequency indicates the likelihood during a given year that cumulative MAM precipitation will be at least as dry as the 25th percentile in the historical record, or roughly a 1/4 year drought.
The IRI map tool calculates the Threshold for triggering action. This is also based on the Frequency; for example, at the 25% frequency, the threshold is set such that the most probable 1/4th of forecasts in the historical record would have triggered action.
To account for the fact that consecutive seasons of drought can have compounding effects in arid areas like Oromia Region, an adjustment is made to the forecast threshold.
The forecast probability for a given year is compared against the adjusted Threshold. If it is equal to or higher than the Threshold, anticipatory action is triggered for that month. If it is less than the Threshold, no action is taken that month.
This process is followed for each forecast issue month - December, January and February.
This trigger rule is based on the average forecast over Oromia region. However, the implementation tools and dashboards also present information on the forecast measured at the Woreda level, to assist in the targeting of anticipatory action programs within the region.
NB: For the December issue forecast only, CHIRP data for the period October 1 - November 16 is used to calculate the previous season’s SPI, since the full CHIRPS OND data is not yet available.
The Forecast-Based Anticipatory Action Plan is triggered according to the following rule.
The TWG designates a historical Frequency with which anticipatory action is intended to trigger. For this project, actions associated with a “Moderate” severity drought are meant to trigger in 35% of years, and actions associated with a “Severe” drought are meant to trigger in only 25% of years.
For each year, 1991-present, the IRI map tool retrieves the forecasted probability of non-exceedance associated with the chosen Frequency. For example, the forecast probability at 25% frequency indicates the likelihood during a given year that cumulative OND precipitation will be at least as dry as the 25th percentile in the historical record, or roughly a 1/4 year drought.
The IRI map tool calculates the Threshold for triggering action. This is also based on the Frequency; for example, at the 25% frequency, the threshold is set such that the most probable 1/4th of forecasts in the historical record would have triggered action.
To account for the fact that consecutive seasons of drought can have compounding effects in arid areas like Oromia Region, an adjustment is made to the forecast threshold.
The forecast probability for a given year is compared against the adjusted Threshold. If it is equal to or higher than the Threshold, anticipatory action is triggered for that month. If it is less than the Threshold, no action is taken that month.
This process is followed for each forecast issue month - July, August and September.
This trigger rule is based on the average forecast over Oromia region. However, the implementation tools and dashboards also present information on the forecast measured at the Woreda level, to assist in the targeting of anticipatory action programs within the region.
To minimize the potential for mis-triggering, the AA Technical Working Group has designated certain actions to only trigger if multiple months of issued forecasts have passed the threshold. You can find the cumulative action criteria below, with the current status (according to the region-level average forecast) highlighted.
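A minimal sketch of such a cumulative criterion (the month names, trigger statuses, and the required count of two are hypothetical examples, not the TWG's actual criteria):

```python
def cumulative_criterion_met(triggered_by_month, required=2):
    """True when at least `required` issued forecasts have passed the threshold."""
    return sum(triggered_by_month.values()) >= required

# Illustrative statuses for the three issue months of a season:
status = {"December": True, "January": False, "February": True}
print(cumulative_criterion_met(status))              # True: 2 of 3 months triggered
print(cumulative_criterion_met(status, required=3))  # False
```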
The following steps describe the initial setup of a new forecast configuration using PyCPT.
IRI has set up a page at https://iri-pycpt.github.io/installation/ to guide PyCPT installation. However, in the tabs in the following sections, you can find a guided and illustrated review of the process. Please refer back to the link in case there are updates that this user guide does not reflect.
Presentation link: https://drive.google.com/file/d/18LaE5b598ntPerh4fVK1S0kvbksNQJhC/view
What is PyCPT?
What is in the PyCPT 2 update?
You can review an introduction to Anaconda on this page: https://iri-pycpt.github.io/anaconda/. Please review the tabs in this section for a guided walkthrough of the process.
Python programs are designed to run on any computer, but they need an environment in which to operate.

* conda – a cross-platform package manager.
* Anaconda – a distribution that includes conda and various Python libraries.
The installation can take quite a long time.
At this point, the environment is set up.
Activation:
Follow these steps to generate and evaluate a potential forecast configuration using the Jupyter Notebook interface.
Output directory
Statistical model
Choose between:
Canonical Correlation Analysis (CCA): suitable for making a spatially consistent forecast (but possibly highly smoothed) for an area that is not too large (national scale is ok)
Principal Components Regression (PCR): suitable for making a spatially detailed (but possibly inconsistent) forecast for an area of any size.
Climate model inputs
Predictand data
Defining Dates
‘fdate’ refers to the date the forecasts were made.
‘target’ is the period we intend to forecast.
‘first_year’ and ‘final_year’ define the limits of the hindcast period.
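As an illustrative summary of these settings (the names follow the descriptions above, but the exact parameter syntax may vary between PyCPT versions, so check your notebook):

```python
# Illustrative date settings for a February-issued MAM forecast:
fdate = {"year": 2024, "month": 2}  # date the forecasts were made (issue date)
target = "Mar-May"                  # period we intend to forecast (MAM)
first_year = 1991                   # first year of the hindcast period
final_year = 2023                   # last year of the hindcast period

# The hindcast spans first_year..final_year inclusive:
hindcast_length = final_year - first_year + 1
print(hindcast_length)  # 33
```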
Defining the Area Domains for the Model
It is easier to start with the predictand domain.
For the predictors, choose a domain that balances:

* Local detail
* Large-scale forcing
Setting
Modes
Note: You can leave all remaining settings at their defaults.
Set Data Download Conditions
* Set force_download = False to avoid downloading the data again if you rerun using the same X and Y domains.
* Set force_download = True if any changes require a new data version.
Make sure the data are saved in a subdirectory of case_dir (the one shown in the Output Directory section above)
Check the Area Domains
Once the configuration is set up, run the remaining lines of code to generate the forecast and see its skill and validation metrics (consult the PowerPoint linked above for a detailed description of these metrics).
Once you are happy with the configuration, it is time to set it up for the operational forecast data pipeline.
Once the operational forecast configuration is set up, it is crucial that you deviate from it as little as possible. This is to ensure that the criteria for Anticipatory Action are consistent and in line with the agreed-upon protocol. The only times when you should consider altering the configuration are 1) If the AAP is being officially revised or 2) if there is some major data problem with the forecast for that month (see the “Reality Checks” section for more detail on what to do in that case).
To set up (or update) the forecast configuration, go to the python_maproom_emi code repository and navigate to the pycpt-forecasts subfolder. Here, you will see all of the forecast configurations that have been set up to date. If you wish to create a new configuration, it should go in a new folder with a corresponding version number (if you are updating an existing configuration, simply copy the old version folder and rename the version number).
You will keep the history of your forecast configurations in this git repository. This section outlines a standard repository structure, naming conventions, and management workflow for PyCPT configurations and notebooks. Server-based forecast automation tools assume this structure, so please follow it.
You will use the Git software to manage versions of the forecast configuration code. Download it from https://git-scm.com/downloads.
You will also need to create an account on bitbucket.org in order to access the forecast configuration code repository.
python_maproom_<country>/
│
├── README.md
│
└── pycpt-forecasts/
├── prcp-ndj-v1/
│ ├── README.md
│ ├── config.py
│ ├── pycpt-2024-10.ipynb
│ ├── pycpt-2024-09.ipynb
│ ├── pycpt-2024-08.ipynb
│ └── pycpt-version
│
├── prcp-jjas-v1/
│ ├── README.md
│ ├── config.py
│ ├── pycpt-2024-05.ipynb
│ ├── pycpt-2024-04.ipynb
│ ├── pycpt-2024-03.ipynb
│ └── pycpt-version
Folder Naming Convention
<variable>-<season>-v<version>
Components:

* Variable: prcp (precipitation)
* Season: month initials of the target season, e.g. ndj = November-December-January, mam = March-April-May, son = September-October-November, djf = December-January-February
* Version: sequential number (v1, v2, v3, etc.)

Examples:

* prcp-ndj-v1 = Precipitation forecast for Nov-Dec-Jan, version 1
* prcp-jjas-v2 = Precipitation forecast for Jun-Jul-Aug-Sep, version 2
* temp-djf-v1 = Temperature forecast for Dec-Jan-Feb, version 1
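As a sketch, the naming convention can be checked programmatically. This helper is hypothetical and not part of the repository tooling; it simply enforces the <variable>-<season>-v<version> pattern described above.

```python
import re

# Pattern for <variable>-<season>-v<version>, e.g. prcp-ndj-v1
PATTERN = re.compile(r"^(?P<variable>[a-z]+)-(?P<season>[a-z]+)-v(?P<version>\d+)$")

def parse_folder_name(name):
    """Split a configuration folder name into (variable, season, version)."""
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"{name!r} does not follow <variable>-<season>-v<version>")
    return m.group("variable"), m.group("season"), int(m.group("version"))

print(parse_folder_name("prcp-ndj-v1"))   # ('prcp', 'ndj', 1)
print(parse_folder_name("prcp-jjas-v2"))  # ('prcp', 'jjas', 2)
```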
Forecast Notebook File Format
pycpt-<issueyear>-<issuemonth>.ipynb
where <issueyear> is the four-digit year and <issuemonth> is the zero-padded two-digit month in which the forecast is issued (e.g. pycpt-2024-09.ipynb).
Please add a Jupyter Notebook file for each month that you intend to issue forecasts.
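A small hypothetical helper (not part of the repository tooling) that builds the expected notebook filename for a given issue date:

```python
def notebook_filename(year, month):
    """Build the notebook name in the pycpt-<issueyear>-<issuemonth>.ipynb format."""
    return f"pycpt-{year:04d}-{month:02d}.ipynb"

print(notebook_filename(2024, 9))   # pycpt-2024-09.ipynb
print(notebook_filename(2024, 10))  # pycpt-2024-10.ipynb
```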
PyCPT Version File
Each forecast folder should contain a plain text file named pycpt-version that records the PyCPT version used for that forecast configuration, for example:
2.10.2
This section outlines the process for granting, managing, and revoking access to Bitbucket repositories used for forecast configurations. Proper access control ensures data integrity, version control discipline, and accountability across the forecasting workflow.

Roles and Permissions
Bitbucket supports three main permission levels:
* Admin – Full control over the repository, including granting access to others.
* Write (Developer) – Can modify files by pushing commits. Typical responsibilities: adding new forecasts, updating configurations to reflect operational changes.
* Read (Viewer) – Can view or clone the repository but cannot modify files. Typical responsibilities: reviewing configurations, documentation, or outputs for quality control or training.
Process: Adding a New Member

Go to the Bitbucket repository: https://bitbucket.org/iridl/python_maproom_
Access Management Best Practices
* Review access lists regularly to ensure only active users have permissions.
* Limit Admin roles to two or three responsible individuals.
* Revoke access when staff leave or roles change.
* Enable two-factor authentication (2FA) for all users with Write or Admin roles.
* Keep an internal Access Log noting the user, date, and role assigned or removed.
Troubleshooting Access Issues
* User cannot push changes – Check whether their role is “Write” or “Admin.”
* “Permission denied” errors – Confirm the user’s Bitbucket authentication is configured correctly.
* Invitation not received – Verify the email address or resend the invitation.
* Repository not visible – Ensure user access is correctly assigned.
Removing or Updating Member Access
1. Click Repository settings (gear icon on the left sidebar).
2. Select Repository permissions.
3. To change a user’s role, select a new permission level from the dropdown list.
4. Confirm changes and document them in the Access Log.
The following steps are carried out on the command-line interface; the resulting files can be viewed using your system’s file explorer.
# Clone the repository
git clone https://bitbucket.org/iridl/python_maproom_<country>
# Navigate to repository
cd python_maproom_<country>
# Create pycpt-forecasts directory (if it doesn't exist)
mkdir -p pycpt-forecasts
Creating New Forecast Version
# Create new forecast folder
mkdir pycpt-forecasts/prcp-ndj-v1
Add the respective files as shown in the section above.
Version Control
Once completed, commit your changes to Bitbucket.
# Add new configuration
git add pycpt-forecasts/prcp-ndj-v1/
# Commit with descriptive message
git commit -m "Add precipitation NDJ forecast configuration v1"
# Push to remote repository
git push origin main
Version Management Rules

Critical guidelines:

* Never modify existing configurations once forecasts are published. This repository records the history of how published forecasts were generated.
* When you decide to use a different configuration, make a new directory for the new version rather than overwriting the historical record.
* Document all changes in the README and in commit messages.
Reasons you might decide to create a new version:

* Model or predictand selection changes
* Spatial domain modifications
* Temporal parameter adjustments
* MOS method changes
Producing forecasts and uploading them to the Anticipatory Action tool:
The PyCPT Jupyter Notebook is useful for experimenting with different forecast configurations, but once you have settled on an optimized configuration and need to produce forecasts on a regular basis, the notebook interface is less convenient. For operational monthly forecast generation, we have developed a service that can be run from the EMI server instead. The purpose of the service is to ensure that the monthly forecast generation process is as consistent and straightforward as possible.
To prepare to generate forecasts operationally, first verify the forecast configuration you wish to use. If you are setting up a new configuration for the first time, follow the steps in the “Setting Up a Seasonal Forecast Configuration” section above.
Having prepared a configuration, you can then generate forecasts each month as follows:
If this is the first time the configuration is run, add the additional argument --init to the end of the command; this tells the software to generate all of the hindcast and possible forecast data in addition to the current month.
If this is not the first time the service is run, you do not need to specify --init. By default, the command will generate a forecast using the current month for initialization. If you wish to generate forecasts using a previous month’s initialization, you can specify an optional issue_date argument, e.g. generate-forecasts prcp-ond-v1 2024-07-01.
If any climate model data is missing for the initialization month, PyCPT will replace it with climatology. If the missing climate model data later becomes available for that month, the user can update the forecast by passing the --update argument to generate-forecasts.
Run the command generate-forecasts --help to see a reference for all of these commands and their syntax.
The forecast directory will contain forecast data in the form of NetCDF files, as well as PNG image files that present the forecast visually on a map. You can view the PNG files to review a forecast before pushing it to the Anticipatory Action (AA) tool. This validation process is described in the following section. Make sure you look at the PNG to confirm that no climate model data is missing. If any model data has not been published for the month yet, follow the instructions in step 1 for what to do.
Once you have validated the forecast and you are ready to publish it to the AA tool, run the upload command: upload-forecasts [name of configuration]. The new forecast will be added to the IRI server, and some time later it will appear in the AA tool. (Note that there is a manual review step on IRI’s side required in order to update the server, so the new forecast will not be viewable in the tools immediately.)
Note that once a forecast has been published, it cannot be modified without deleting the file. Be sure to complete the validation process (and make updates if necessary) before running upload-forecasts!
After a forecast is produced following the monthly process described in the previous tab, a Reality Check must be conducted to make sure the forecast is aligned with observed conditions, with other global forecasts, and with the forecasts of previous issue months. Please follow the next four steps for this brief assessment.
The steps below are illustrated with the February issue of the MAM forecast for the Somali region as example, available here. However, the steps apply to any other issue month, season or region.
Starting point: The forecast for MAM season as of February 2024 indicates likely above normal rainfall in the Somali Region of Ethiopia.
Figure 1
Step 1: Internal Check against previous forecasts, if applicable.
When a forecast has preceding forecasts, it is important to review whether there is continuity with the previously issued forecasts for the season. For example, the February issue for the MAM season has two preceding forecasts: December of the previous year and January.
Guiding questions
In the case of the February forecast, we can see that both the December and January forecasts (links above) have similar results with none triggering at 20% or 30% frequency and keeping similar distance from triggering.
Step 2: Check Related Conditions. In other words, check that the observed conditions match the forecast results.
Guiding question: Looking at the conditions of the associated sea surface temperatures (SSTs), are the conditions consistent with the forecast?
As can be seen in Acharya 2021 and Funk 2023 (Figures 2 and 3), the Ethiopian MAM rains have a strong relationship with central Pacific sea surface temperatures (SSTs).
The SST forecast according to the NMME (Figure 4), reflected in other SST forecasts as well, is neutral to above-normal SSTs in the central Pacific. It is therefore within reason to expect average to above-normal rainfall in eastern Ethiopia.
Figure 2. Correlation of rainfall to SST from Acharya 2021. Strong positive correlation for Ethiopian rainfall to central Pacific SSTs.
Figure 3. Composites of SSTs during dry MAMs (A) and wet MAMs (B) in the Horn of Africa. Shows a relationship during dry MAMs to cool SSTs in the central Pacific.
Figure 4. NMME forecast of SST anomalies.
Step 3: ‘Compare with Friends’. In other words, compare the generated forecast against other published forecasts.
Guiding question: Looking at the relevant forecasts, are these other forecasts looking similar?
Looking at forecasts for rainfall for the MAM season, we see other sources of forecasts pointing in the same direction as the AA forecast for February.
NMME
According to the NMME precipitation forecast (available at the CPC website), shown in Figure 5(A), normal rainfall is anticipated for the eastern Ethiopia region, with the probability forecast indicating a moderate to above-normal probability of a rainier-than-average MAM.

Figure 5.a NMME forecast for precipitation for MAM 2024, in anomalies.
ICPAC
The forecast by ICPAC, shown in Figure 6 (for FMA, not an exact overlay), anticipates that above-normal rainfall is likely.

Figure 6. ICPAC’s FMA seasonal forecast for 2024.
C3S
In the C3S forecast from Copernicus (Figure 7), we can also see a moderate to high probability of higher-than-average rainfall.

Figure 7. C3S (Copernicus) forecast for the MAM season.
Three scenarios:
If the three previous steps confirm an aligned forecast result that makes sense, then this forecast upload is valid and can be communicated for next steps in AA implementation.
If the reality check steps point at a forecast result that is not aligned with the different reference sources described above, the user should first check whether any climate model data is missing. If it is, the user should work with colleagues to determine whether the data outage is temporary or permanent. If it is temporary, the user should proceed with generating the forecast; if the data later becomes available, the forecast can be updated using the --update argument to generate-forecasts (see the previous tab for notes on this). If it is permanent, the user should consider creating a new configuration version that omits the missing model or replaces it with another.
There may be some (rare) cases in which no climate model data is missing, but some models demonstrate unusual or counterintuitive behavior. These situations should be handled on a case-by-case basis. In general, users should be cautious about dropping a model due to unusual behavior, as this introduces subjectivity into the forecast generation process. The multi-model ensemble approach is meant to address the idiosyncratic behavior of individual models.
The operational forecast data can be accessed by a variety of applications for policy implementation and communication. Currently, we have two such applications: an AAP Monitoring Dashboard and a Design Maptool. The first is meant to be used for producing official reports on AAP status, while the second is meant to be used for data exploration and visualization.
Both applications are distributed via the Docker platform. Docker provides a consistent and reproducible environment for running report builds across different systems. Essentially, instead of relying on all users to have the same versions, packages, system libraries, and dependencies, Docker bundles all of this together into a single packaged environment. Please install the Docker Desktop software if you wish to run any of the implementation tools.
This page provides an index of all of the implementation tools available to date.
There is also a real-time Seasonal Monitoring system in development (detailed documentation forthcoming as we collect EMI / WFP feedback).
The AAP Monitoring Dashboard is the primary implementation tool for operational forecasting. It is meant to show an up-to-date report of the forecast and climate conditions for each season and region, and present the relevant AAP activities that have triggered (if any).
The Docker dashboard images are called via a script called run_reports_ethiopia.bash. If you do not have a copy of this script, please contact Max Mauerman (max.mauerman@columbia.edu).
Instructions for running the monitoring dashboard:
First, start the Docker Desktop program and make sure it is running on your computer.
To run run_reports_ethiopia.bash, use the terminal to navigate to the directory where the script is located, and then run the following command:
./run_reports_ethiopia.bash -g|--region region -s|--season season [-y|--year year] [-m|--month month] [-r|--reports reports_dir] [-b|--branch branch] [-h|--help]
e.g.
./run_reports_ethiopia.bash --region "south ethiopia" --season OND -m 9
Syntax:
region: Region name (required for Ethiopia)
Valid regions: <list of valid regions, which is only applicable for Ethiopia. This is filled in through build_docker.bash + config files>
season: Season name (required)
Available seasons and their valid months:
<dict of season to its valid months. This is applicable for all countries, and is filled in through config files and build_docker.bash>
year: Year to run reports for (default: current year, valid: 1991-2035)
month: Month to run reports for (default: current month)
reports_dir: Output directory (default: auto-generated)
branch: Docker image tag to use (default: 'latest')
This command builds a Docker Compose file (compose.yaml) that runs the appropriate country-specific dashboard image, passing in the necessary environment variables (such as the date) and mounting the output folder from the local computer into the container (this is where the generated HTML will be saved). Mounting ensures that the output file written inside Docker actually shows up on the local machine.
In short, the script builds a configuration that tells Docker what to run (RUN.sh in the {country}_flexdashboard repository), with what settings, and where to save the results. It then runs the container to generate the report, saving the output HTML locally to the user’s device.
The directory structure after running run_reports_<country>.bash:
├── run_reports
| └── <country>
| ├── <Country>_AA_Reports
| | └── <season the reports were run for>
| | └── <reports html file here>
| ├── run_reports_<country>.bash
| └── run_reports_<country>.ps1
If you wish to build a new Docker image, or see documentation on the overall process, please go to this repo:
EMI Website
OND Monitoring Dashboard: http://www.ethiomet.gov.et:9000/publications/anticipatory-action-decision-support-map-room/
This exploratory data analysis tool presents different anticipatory action-related information, including forecasts, triggers, and vulnerability data for Ethiopia. The control bar allows users to view the forecast for the season, region, year, and issue month of interest, as well as to view the historical performance of that forecast. Using the Design Maptool, users can explore the implications of different potential thresholds and data sources for Anticipatory Action. However, the Design Maptool does not present the official status of the operational AAP; for that, consult the AAP Monitoring Dashboard detailed above.
This section is based on the maptool developed for the OND season in the Somali Region, but the features are the same for all the maptools.
The Maptool is divided into two areas: the map component and the Gantt chart component. The map component includes climate information such as forecast and rainfall rank, as well as data on which drought years were the most severe (“bad years”), which can come from farmers and/or expert review.
The control bar comprises six elements:
Mode - describing the spatial resolution in which the user would like to make the analysis; these range from the national level down to pixel level. This mode will affect the forecast calculation as it will be tailored to the selected spatial resolution.
Issue Month - This function determines the month in which the forecast is issued and acts as a lead time. The user can change the month depending on how early or close to the season they would like to utilize the forecast.
Season - This is the timing for which the forecast is modeled.
Year - the year selection affects only the map, which displays the forecast for the year in question. Note that the table is not affected by the year selection.
Severity - because early actions are designed around forecast outputs and their target frequency of reaching a threshold, labeling a trigger with a severity level lets the user designate low, moderate, and severe scenarios in their planning. The severity setting is used in the Gantt chart section; it essentially labels the type of event the user is planning for (closely linked to the Frequency of Triggered Forecasts).
Frequency of Triggered Forecasts - a slider with which the user sets the percentage frequency at which the forecast triggers action. Once this target value is chosen, all years in which the forecast probability reached the corresponding threshold are highlighted in the table below.
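The relationship between the chosen frequency and the trigger threshold can be sketched as follows. This is a minimal illustration with made-up forecast probabilities; in the actual Maptool these values come from the IRI forecast database for the chosen season, region, and issue month.

```python
import numpy as np

# Hypothetical historical forecast probabilities of non-exceedance,
# one per year. Illustrative only; not real forecast output.
rng = np.random.default_rng(42)
years = np.arange(1991, 2024)
probs = rng.uniform(0.0, 1.0, size=years.size)

frequency = 0.20  # slider value: trigger in ~20% of years (a 1-in-5 event)

# Set the threshold so that the most probable 20% of historical
# forecasts would have triggered action.
threshold = np.quantile(probs, 1.0 - frequency)

# Years that would be highlighted in the table.
triggered_years = years[probs >= threshold]
print(threshold, triggered_years)
```

Moving the slider changes `frequency`, which in turn moves the threshold and the set of highlighted years.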
The map is affected by the Mode and Year settings in the control bar. The user can also move the pin to a place of interest; the forecast displayed in the table then corresponds to the location of the pin, at the chosen mode (spatial resolution).
The table depends on most of the control bar settings. It shows the relevant climate information for each year from 1982 to the present, along with several summary calculations in its top rows:
Worthy-action - drought was forecasted, and a ‘bad year’ occurred (as compared to the “Baseline observations” dataset, described below)
Act-in-vain - drought was forecasted, but a ‘bad year’ did not occur
Fail-to-act - no drought was forecasted, but a ‘bad year’ occurred
Worthy-inaction - no drought was forecasted, and no ‘bad year’ occurred
Rate - overall performance (the number of correct action years divided by the total number of triggered years)
Threshold - This is the threshold (in terms of forecast probability or measured weather conditions) at which action would trigger, based on the chosen frequency and dataset.
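As a concrete illustration of the four categories and the rate, here is a short sketch using set arithmetic on hypothetical trigger and bad-year lists (the data are made up, not actual Maptool output):

```python
# Hypothetical data: years the forecast triggered, and years reported
# as 'bad years'. Illustrative values only.
years = set(range(1991, 2001))
triggered = {1993, 1997, 1999}   # drought forecasted
bad_years = {1993, 1996, 1999}   # 'bad year' occurred

worthy_action   = triggered & bad_years        # forecasted and occurred
act_in_vain     = triggered - bad_years        # forecasted, did not occur
fail_to_act     = bad_years - triggered        # occurred, not forecasted
worthy_inaction = years - triggered - bad_years

# Rate: correct action years over total triggered years, per the
# definition above.
rate = len(worthy_action) / len(triggered)
print(rate)  # 2 worthy actions out of 3 triggers -> 0.666...
```

Together the four categories partition every year in the record, so they can be read directly off the table by comparing the Forecast column against the baseline dataset.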
The columns, on the other hand, present the climate information and the validating ground/vulnerability data:
Year: refers to the forecast year
Forecast: displays all the historical flexible forecasts for the selected issue month and location
Baseline dataset: This is the main dataset on drought impact, against which the forecast performance metrics are calculated. By default, it is a list of reported bad years from last year’s expert workshop. The choice of baseline dataset can be changed (item 5).
Observational datasets: These are other observational datasets that may be useful for evaluating the forecast. By default, this column shows the actual rainfall for that year/season (what the forecast is trying to predict). You can add (item 6) other data such as the El Niño-La Niña state, vegetation, the previous seasons’ rainfall and vegetation, etc.
Baseline data selector: This menu allows you to toggle which data set is being used as the “baseline”, i.e. the benchmark for evaluating forecast performance.
Predictor selector: This menu allows you to select the datasets you would like to consider as potential triggers for action. It includes the forecast by default, but can also include observational data. You can select as many as you’d like.
Set trigger: Press this button to set a trigger based on the chosen dataset and frequency of action. When you press this button, you will be directed to the Activity Planning Tool, where you can define the specific activities associated with this trigger rule.
The Maptool populates its data from the same file server as the Dashboard. To update the forecast data in the Maptool, users should follow the steps outlined in the “Setting Up a Seasonal Forecast Configuration” section (for adding a new forecast) and the “Monthly Forecast Generation and Validation” section (for monthly issues of an existing forecast).
It is also possible to add new sources of observational data to the Maptool for comparison. This process is documented in the maptool code repository.
There is a version of the Maptool that can run on a user’s local machine instead of the IRI server. For details and instructions, please see the “design-dashboard” subfolder of the EMI bitbucket repository:
https://bitbucket.org/iridl/python_maproom_emi/
Note that this version of the maptool is not yet in use for operational forecast monitoring. Before the entire process can be transferred to EMI servers, IRI and EMI need to discuss the details of how to best allocate responsibility for the data and software management currently being done via IRI. This section will be updated as those discussions proceed.