Current Status

Column

Current Status - Map

Column

Current Status – Dashboard

Overview

This page displays the activation status of Anticipatory Action (AA) triggers in Zambia at the national level and for ten key regions (referred to here as districts) during the Nov-Dec-Jan 2024/2025 season, based on forecasts issued in October.

For more details on the AA protocol and its implementation, click on the “AA Protocol” tab at the top of this dashboard.

National-Level Activation

1. Severe Scenario
  • The forecast has not been triggered at the national level.
2. Moderate Scenario
  • The forecast has not been triggered at the national level.
3. Mild Scenario
  • The forecast has not been triggered at the national level.

Regional-Level Activation

1. Severe Scenario:
  • Triggered districts: No Districts

  • Triggers approaching the activation threshold: No Districts

  • Districts that have not been triggered: Central, Copperbelt, Eastern, Luapula, Lusaka, North-Western, Northern, Southern, Western.

2. Moderate Scenario:
  • Triggered districts: No Districts

  • Triggers approaching the activation threshold: No Districts

  • Districts that have not been triggered: Central, Copperbelt, Eastern, Luapula, Lusaka, North-Western, Northern, Southern, Western.

3. Mild Scenario:
  • Triggered districts: No Districts

  • Triggers approaching the activation threshold: No Districts

  • Districts that have not been triggered: Central, Copperbelt, Eastern, Luapula, Lusaka, North-Western, Northern, Southern, Western.

  • Note: A forecast is considered close to being triggered when it is two points or less from the threshold.
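The status rules above (triggered, approaching, not triggered) can be sketched in Python. The function name and the example values are hypothetical; the two-point proximity rule comes from the note above, and the convention that a forecast at or above the threshold triggers is an assumption.

```python
def trigger_status(forecast: float, threshold: float, margin: float = 2.0) -> str:
    """Classify a district's trigger status.

    `forecast` and `threshold` are in the same units (e.g. a forecast
    frequency in percent); `margin` is the proximity rule from the note:
    within two points of the threshold counts as "approaching".
    """
    if forecast >= threshold:
        return "triggered"
    if threshold - forecast <= margin:
        return "approaching"
    return "not triggered"

# Hypothetical values, not actual dashboard data:
print(trigger_status(35.0, 40.0))  # not triggered
print(trigger_status(38.5, 40.0))  # approaching
print(trigger_status(41.0, 40.0))  # triggered
```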

Map Considerations

The Zambia map displays only the results of regional forecasts. For national-level results, refer to the activation section above.

Current Forecast Table

Overview

Latest Monthly Output

The tables in this tab present the thresholds, expressed as frequencies (%), associated with the Severe, Moderate, and Mild severity levels. Information is organized at the national and regional levels; regional tables are grouped by severity level.

National Level

Regional Level

Severe Scenario Table
Moderate Scenario Table
Mild Scenario Table

Historical Metrics Tables

Overview

Metrics Based on Data from Challenging Years

National Level

Placeholder for tables with data

Regional Level

Placeholder for tables with data

Metrics Formula

Where:

  • a = Worthy Action or True Positive: the model correctly predicted a positive instance.
  • b = Act in Vain or False Positive: the model incorrectly predicted a positive instance (it was actually negative).
  • c = Fail to Act or False Negative: the model incorrectly predicted a negative instance (it was actually positive).
  • d = Worthy Inaction or True Negative: the model correctly predicted a negative instance.
Accuracy Rate: the proportion of correct predictions (both positive and negative) made by the model out of all predictions.

This metric gives an overall sense of how often the model is right. However, it can be misleading in cases of class imbalance (e.g., when one class is much more frequent than the other).

  • Range: 0 to 1
  • Reference point: depends on class distribution
  • Interpretation:
    • 0: no correct predictions
    • ≈ 0.5: model performs no better than random guessing
    • > 0.7: generally acceptable performance
    • 1: perfect prediction

Formula:

\[ \begin{align*} Accuracy = \frac{a + d}{a + b + c + d} \end{align*} \]
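The formula translates directly to code. The counts below (a = 7, b = 2, c = 1, d = 10) are hypothetical, not values from the dashboard.

```python
def accuracy(a: int, b: int, c: int, d: int) -> float:
    """Accuracy = (a + d) / (a + b + c + d): share of all predictions
    (positive and negative) that the model got right."""
    return (a + d) / (a + b + c + d)

# Hypothetical contingency counts:
print(accuracy(7, 2, 1, 10))  # 0.85
```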

Hit Rate (HR): measures the proportion of actual positive cases that the model correctly identifies.

It reflects the model’s ability to detect relevant events or instances—especially critical in contexts like medical diagnoses or weather alerts.

  • Range: 0 to 1
  • Ideal value: 1 (perfect detection)
  • Interpretation:
    • 0: no actual positive cases were detected
    • < 0.5: low detection capability
    • ≈ 0.5: moderate detection, room for improvement
    • > 0.5: good detection performance
    • 1: all actual positive cases were correctly identified

Formula:

\[ \begin{align*} HR = \frac{a}{a + c} \end{align*} \]
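A one-line sketch of the formula, with the same hypothetical counts as the accuracy example (a = 7 worthy actions, c = 1 fail to act):

```python
def hit_rate(a: int, c: int) -> float:
    """HR = a / (a + c): share of actual positive cases the model caught."""
    return a / (a + c)

print(hit_rate(7, 1))  # 0.875
```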

False Alarm Ratio (FAR): Proportion of Incorrect Positive Alerts

The False Alarm Ratio measures the proportion of predicted positive cases that were actually false—i.e., how many alerts were raised unnecessarily. A lower FAR indicates that the model’s positive predictions are more trustworthy.

  • Range: 0 to 1
  • Ideal value: 0 (no false alarms)
  • Interpretation:
    • 0: no false alarms—perfect precision
    • < 0.3: excellent—most alerts are justified
    • ≈ 0.5: half of the alerts are incorrect—moderate reliability
    • > 0.5: unreliable—alerts are often wrong
    • 1: all alerts are false—no useful signal

Formula:

\[ \begin{align*} FAR = \frac{b}{a + b} \end{align*} \]
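The same hypothetical counts (a = 7 worthy actions, b = 2 acts in vain) give a FAR of about 0.22, i.e. roughly one alert in five was unnecessary:

```python
def false_alarm_ratio(a: int, b: int) -> float:
    """FAR = b / (a + b): share of positive alerts that were false."""
    return b / (a + b)

print(round(false_alarm_ratio(7, 2), 3))  # 0.222
```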

Bias Score (BS): Forecast Bias Indicator

The Bias Score quantifies a model’s systematic tendency to overpredict or underpredict events compared to actual observations. It doesn’t assess accuracy per se, but rather the balance of predictions.

  • Range: 0 to ∞ (sometimes normalized around 1)
  • Reference point: 1 (perfect balance)
  • Interpretation:
    • < 1: underprediction – the model forecasts fewer events than actually occur
    • = 1: balanced prediction – the number of predicted and observed events match
    • > 1: overprediction – the model forecasts more events than actually occur
    • Example: a bias score of 0.7 suggests the model tends to miss important events, while a score of 1.2 indicates it tends to raise too many alerts

Formula:

\[ \begin{align*} BS = \frac{a + b}{a + c} \end{align*} \]
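With the hypothetical counts used above, the model issued 9 alerts (a + b) against 8 observed events (a + c), a slight overprediction:

```python
def bias_score(a: int, b: int, c: int) -> float:
    """BS = (a + b) / (a + c): predicted events divided by observed events.
    1 means balanced; above 1 means overprediction."""
    return (a + b) / (a + c)

print(bias_score(7, 2, 1))  # 1.125 -> slight overprediction
```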

Hanssen-Kuipers Score (KSS): True Skill Statistic

The Hanssen-Kuipers Score measures a model’s ability to correctly discriminate between events and non-events, accounting for both correct predictions and errors.

  • Range: –1 to 1
  • Interpretation:
    • -1: predictions are completely wrong (inverse of reality)
    • 0: no skill—equivalent to random guessing
    • 0.3 – 0.5: useful model, but with room for improvement
    • 0.6 – 0.8: good discriminatory power
    • 1: perfect discrimination—no misclassification

This score evaluates both sensitivity (true positive rate) and specificity (true negative rate), making it especially robust in situations with imbalanced classes or rare events.

Formula:

\[ \begin{align*} KSS = HR - POFD = \frac{a}{a + c} - \frac{b}{b + d} = \frac{ad - bc}{(a + c)(b + d)} \end{align*} \]

Note: the second term, POFD = b / (b + d), is the probability of false detection (the false alarm rate). It is not the same quantity as the False Alarm Ratio b / (a + b) defined earlier.
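A sketch of the score using the false alarm rate b / (b + d), as in the formula above; the counts are the same hypothetical ones as in the previous examples:

```python
def kss(a: int, b: int, c: int, d: int) -> float:
    """Hanssen-Kuipers / True Skill Statistic:
    hit rate a/(a+c) minus false alarm rate b/(b+d)."""
    return a / (a + c) - b / (b + d)

print(round(kss(7, 2, 1, 10), 3))  # 0.708
```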

Heidke Skill Score (HSS): Forecast Skill Relative to Random Chance

The Heidke Skill Score evaluates the overall accuracy of a prediction system by comparing its performance to that of a random forecast. Unlike simple accuracy, HSS accounts for the correct predictions that could occur purely by chance.

  • Range: –∞ to 1 (typically –1 to 1 in practice)
  • Interpretation:
    • < 0: worse than random guessing
    • 0: no skill—equivalent to chance
    • 0.3 – 0.5: useful model, but with room for improvement
    • 0.6 – 0.8: good predictive skill
    • 1: perfect classification—every instance correctly predicted

This score is especially valuable when dealing with imbalanced datasets, where accuracy alone can be misleading. HSS provides a more equitable assessment by factoring in the expected performance of a random model.

Formula:

\[ \begin{align*} HSS = \frac{2(ad - bc)}{(a + c)(c + d) + (a + b)(b + d)} \end{align*} \]
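The same hypothetical counts yield an HSS of about 0.69, which the scale above would read as good predictive skill:

```python
def hss(a: int, b: int, c: int, d: int) -> float:
    """Heidke Skill Score: accuracy relative to random chance."""
    return 2 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

print(round(hss(7, 2, 1, 10), 3))  # 0.694
```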

Trigger Sensitivity

Column

Threshold Uncertainty Analysis

Column

Description

The Anticipatory Action (AA) protocol works like an on/off switch:

  • If the drought forecast is worse than a certain threshold, we take action.
  • If it’s better than the threshold, we don’t.

But how sure are we about that threshold? Could it change if we had slightly different data from the past?

This section checks how sensitive the threshold is by running a simple experiment: we remove 5 random years from the historical data, recalculate the threshold, and repeat this 100 times. That gives us a sense of how much the threshold might shift due to natural variation in the past records.
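The resampling experiment can be sketched as follows. The 30th-percentile threshold rule, the simple percentile convention, and the synthetic 40-year rainfall record are illustrative assumptions, not the dashboard's actual implementation.

```python
import random

def resample_thresholds(history, n_drop=5, n_rep=100, pct=30, seed=42):
    """Recompute the trigger threshold after dropping random years.

    `history` is a list of seasonal rainfall totals, one per year.
    The threshold is assumed here to be the `pct`-th percentile
    (the driest-30% rule); each repetition drops `n_drop` years
    at random and recomputes it.
    """
    rng = random.Random(seed)
    thresholds = []
    for _ in range(n_rep):
        kept = sorted(rng.sample(history, len(history) - n_drop))
        # simple empirical percentile: value at the pct% rank
        idx = int(round(pct / 100 * (len(kept) - 1)))
        thresholds.append(kept[idx])
    return thresholds

# Synthetic 40-year record (illustrative only):
history = [random.Random(i).gauss(800, 120) for i in range(40)]
spread = resample_thresholds(history)
print(min(spread), max(spread))  # the range the boxplot would summarize
```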

The boxplot you see shows the range of thresholds we got from this process, for each month and severity level:

  • The red point is the trigger threshold currently used.
  • The blue point is this year’s forecast.
  • The dotted line shows what we’d expect just from historical climate (without using a forecast).

How to read this:

  • If the blue point is outside the boxplot, then it’s likely that the situation is clearly above or below the threshold — we can be confident about whether or not to act.
  • If the blue point is inside the boxplot, then the result is more uncertain — and we should interpret the trigger with caution.

AA Protocol

Column

Triggering System and Activation of AA

Operation of the Anticipatory Action Mechanism

The Anticipatory Action mechanism consists of a predictive element and an observational element.

Predictive triggers rely on seasonal rainfall forecasts for July-August-September, developed by IRI. If forecasts indicate a deficit season or the national cumulative precipitation average remains within the driest 30% of past seasons, the threshold is considered reached.
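The driest-30% rule can be expressed as a percentile check. This is a sketch: the empirical-percentile convention and the example totals are assumptions for illustration, not actual Zambia data.

```python
def predictive_trigger(forecast_total: float, past_totals: list) -> bool:
    """True if the forecast seasonal total falls within the driest 30%
    of past seasons (i.e. at or below the 30th percentile)."""
    ranked = sorted(past_totals)
    idx = int(round(0.30 * (len(ranked) - 1)))
    threshold = ranked[idx]
    return forecast_total <= threshold

# Hypothetical seasonal totals (mm):
past = [650, 700, 720, 760, 800, 820, 850, 880, 900, 950]
print(predictive_trigger(690, past))  # True
print(predictive_trigger(830, past))  # False
```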

Evaluation Period

The evaluation period runs from January to August. Each month from January to June, seasonal forecasts are updated, and the trigger status is assessed. No assessment is conducted in July. In August, rainfall data from June and July is used to determine whether the observational trigger has been reached.

Predictive Trigger Activation

Evaluation of the predictive trigger activation (January to June, inclusive)

  • Verification of threshold completion.
  • Update of the Maproom tool.
  • Production of the technical report.
  • [Deadline: 22nd of the month] Communication of trigger activation status (see Section 6).

Figure 1. Timeline of the Triggering System for the NDJ Season in Zambia

Section to be determined—placeholder text follows.

Column

Anticipatory Actions Triggered According to the AA Plan (AAP)

Anticipatory Actions Triggered in Previous Months by Scenario

Please note that the current triggering system tracks national-level results, while regional data is provided for reference only. Below are the months in which each scenario has been triggered so far.

1. Severe Scenario
  • Months that triggered:
2. Moderate Scenario
  • Months that triggered:
3. Mild Scenario
  • Months that triggered: No months have been triggered.

  • Note: To view previous forecasts issued for the Nov-Dec-Jan 2024/2025 season, click on the “Previous Months” tab.

Anticipatory Action Packages

AA Package for Triggering

Section to be determined

Previous Months

Column {data-width=400}

AA Status for Previous Months – Map

The maps in this tab display results from previous months at the regional level.

August Results

September Results

October Results

This forecast has not yet been published or represents the current result.

AA Status for Previous Months – Table

Please note that the tables in this tab display the results of national-level forecasts for each previous month. The maps on the left show the results for each previous month at the regional level.

August Results

September Results

October Results

This forecast has not yet been published or represents the current result.