
Researchers built dozens of COVID-19 forecasting models—Did they actually help?

MedicalXpress Breaking News and Events, Jul 09, 2024

Accurate modelling is crucial during pandemics for several reasons. Political bodies must make policy decisions, which can take weeks to become law and even longer to implement. Similarly, institutions such as hospitals, schools, daycares, and health centres require advance planning for severe surges and for the distribution of critical resources such as staff, beds, ventilators, and oxygen supply.

Accurate forecasting models can aid in making informed decisions regarding necessary precautions for specific locations and times, identifying regions to avoid travelling to, and assessing risks associated with activities like public gatherings.

During the COVID-19 pandemic, dozens of forecasting models were proposed, and they informed policy to varying degrees even though their accuracy over time and across model types remained unclear.

The main questions

Our recent study, published in Frontiers in Public Health, aimed to answer several important questions pertinent to pandemic modelling.

First, can we establish a standardised metric to evaluate pandemic forecasting models? Second, what were the top-performing models during the four COVID-19 waves in the US, and how did they perform on the complete timeline? Third, are there specific categories or types of models that significantly outperform others? Fourth, how do model predictions fare with increased forecast horizons? Finally, how do these models compare against two simple baselines?
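The article does not reproduce the metric itself, but as a minimal sketch, one common way to standardise evaluation is a skill score: each model's error normalised by a baseline's error on the same data. The Python below assumes mean absolute percentage error (MAPE); the function names are illustrative, not the study's actual code.

```python
import numpy as np

def mape(predicted, observed):
    """Mean absolute percentage error, skipping weeks with zero observed cases."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    mask = observed != 0
    return float(np.mean(np.abs(predicted[mask] - observed[mask]) / observed[mask])) * 100

def relative_skill(model_preds, baseline_preds, observed):
    """Error ratio against a baseline: values below 1 mean the model adds value."""
    return mape(model_preds, observed) / mape(baseline_preds, observed)
```

Under a metric of this shape, a model that cannot push the ratio below 1 is doing no better than a naive baseline, which is the comparison the study's headline finding rests on.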

Not fit for policy framing

The main results of the study show that more than two-thirds of models fail to outperform a simple static case baseline, and one-third fail to outperform a simple linear trend forecast.

To analyse models, we first categorised them into epidemiological, machine learning, ensemble, hybrid and other approaches. Next, we compared estimates made by the models to the government-reported case numbers and with each other, as well as against two baselines wherein case counts remain static or follow a simple linear trend.
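A minimal sketch of those two baselines in Python, not the study's exact implementation; the four-week fitting window for the linear trend is an assumption:

```python
import numpy as np

def static_baseline(history, horizon):
    """Naive forecast: case counts stay frozen at the last observed value."""
    return np.full(horizon, history[-1], dtype=float)

def linear_trend_baseline(history, horizon, window=4):
    """Extrapolate a least-squares line fitted to the last `window` observations."""
    y = np.asarray(history[-window:], dtype=float)
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)
    future_x = np.arange(len(y), len(y) + horizon)
    return np.maximum(slope * future_x + intercept, 0.0)  # case counts cannot go negative

# Example with illustrative weekly counts
history = [1200, 1500, 1900, 2400]
print(static_baseline(history, horizon=4))        # [2400. 2400. 2400. 2400.]
print(linear_trend_baseline(history, horizon=4))  # [2750. 3150. 3550. 3950.]
```

Any model worth deploying should beat both; the study found most did not.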

This comparison was conducted wave by wave and over the entire pandemic timeline, revealing that no single modelling approach consistently outperformed the others and that modelling errors increased over time.

What went wrong and how to fix it?

Enhanced data collection is crucial, as modelling accuracy hinges on data availability, particularly during early outbreaks. Currently, models rely on case data from diverse reporting systems that vary by county and suffer from regional and temporal delays. Some counties, for example, may gather data over several days and publish it all at once, giving the impression of a sudden burst of cases. Sparse data can likewise limit modelling accuracy in counties with less robust testing programs.
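As an illustration of the batch-reporting problem, a simple preprocessing step (hypothetical, not from the study) can spread a batched total back over the days it covers, provided the reporting window is known:

```python
import numpy as np

def redistribute_batch(daily_counts, batch_day, window):
    """Replace a batch-reported spike with its average over the days it covers.

    A county that publishes every `window` days shows zeros followed by one
    spike; this spreads that spike evenly across the window.
    """
    counts = np.asarray(daily_counts, dtype=float)
    start = max(0, batch_day - window + 1)
    total = counts[start:batch_day + 1].sum()
    counts[start:batch_day + 1] = total / (batch_day + 1 - start)
    return counts

# A county reporting every 3 days: [0, 0, 90] becomes [30, 30, 30]
print(redistribute_batch([0, 0, 90, 0, 0, 120], batch_day=2, window=3))
# -> [ 30.  30.  30.   0.   0. 120.]
```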

Moreover, collection methods are not uniform across the groups that gather the data, introducing unpredictable errors. Standardising data formats could simplify collection and reduce those errors.
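A hypothetical standardised record might look like the following sketch; the fields are illustrative, not a schema proposed by the study:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CaseReport:
    """One standardised county-level report (illustrative fields only)."""
    county_fips: str        # 5-digit FIPS code, e.g. "06037" for Los Angeles County
    report_date: date       # when the record was published
    collection_start: date  # first day the counts cover
    collection_end: date    # last day the counts cover
    confirmed_cases: int
    tests_performed: int    # lets users gauge the robustness of local testing
```

Making the collection window explicit, rather than leaving it implied by the publication date, would let modellers detect and correct the burst-reporting artefact described above.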

Underlying biases in the data, such as under-reporting, produce predictable errors in model outputs, requiring models to be adjusted to predict future reported counts rather than actual case numbers. For example, the availability of rapid home test kits has led many individuals not to report test results to government databases. Serology data and excess mortality estimates have revealed the extent of such under-reporting.
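A sketch of the kind of adjustment described, assuming a reporting rate estimated from serology or excess mortality; the numbers are purely illustrative:

```python
def expected_reported_cases(predicted_infections, reporting_rate):
    """Scale a model's estimate of true infections down to the cases that
    will actually appear in government databases."""
    return predicted_infections * reporting_rate

# If serology suggests only 1 in 4 infections is reported (illustrative rate):
print(expected_reported_cases(predicted_infections=40_000, reporting_rate=0.25))
# -> 10000.0 reported cases, even though 40,000 infections are expected
```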

Looking ahead

Even though enormous progress has been made, models still need to improve on several fronts by making more realistic assumptions about the factors that drive case numbers: the spread of multiple virus variants, immunity boosted by vaccination programs and the number of doses a patient has received, vaccination rates that vary between counties, and the differing impact of local lockdown mandates.

All these factors affect case numbers, which complicates the forecasting task. Even ensemble models, the study showed, accumulated the errors of their individual component models and thus showed no significant difference in performance.

Forecasting errors in the U.S. CDC database increased with each week beyond the time of prediction; in other words, accuracy declined the further ahead predictions were made. At one week from the time of the forecast, the prediction errors of most models clustered just below 25%, rising to about 50% for four-week forecasts.
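A minimal sketch of how error growth by horizon can be measured from archived forecasts; the record format and numbers are illustrative, not values drawn from the CDC database:

```python
from collections import defaultdict
import numpy as np

def error_by_horizon(records):
    """Median absolute percentage error at each forecast horizon.

    `records` is a list of (horizon_weeks, predicted, observed) tuples,
    e.g. parsed from a forecast archive.
    """
    errors = defaultdict(list)
    for horizon, predicted, observed in records:
        if observed > 0:
            errors[horizon].append(abs(predicted - observed) / observed * 100)
    return {h: float(np.median(v)) for h, v in sorted(errors.items())}

records = [(1, 110, 100), (1, 95, 100), (4, 150, 100), (4, 60, 100)]
print(error_by_horizon(records))  # {1: 7.5, 4: 45.0} -- error grows with horizon
```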

This suggests that current models may not provide sufficient lead time for health entities and governments to implement effective policies.

Accurate predictive modelling remains essential in combating future pandemics. However, the study raises concerns about policy formulated directly from these models: models with high prediction errors might lead to the misallocation of resources such as masks and ventilators, risking unnecessary mortality.

Further, hosting these models on official public platforms of health organisations (including the U.S. CDC) risks giving them an official imprimatur. The study suggests that developing more sophisticated pandemic forecasting models should be a priority.
