Limitations on meteorology and how we can overcome them

Charlotte Mineo, Staff Writer

Fifty years ago, mathematicians predicted that humans could never accurately forecast the weather more than two weeks in advance. This week, scientists published a paper supporting this hypothesis, covered by Science.

In 1969, Edward Lorenz, a meteorologist and mathematician at M.I.T., published a paper predicting that even the best weather forecasting models would never be accurate more than two weeks in advance. Lorenz pioneered the concept popularly known as “the butterfly effect”.

He suggested that small changes in atmospheric conditions, density changes caused by something as seemingly benign as the flap of a butterfly’s wings, can propagate over time into major changes in weather patterns.
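This sensitivity can be illustrated with a toy calculation. The sketch below uses Lorenz’s classic 1963 convection equations (not the weather models discussed in this article) with textbook parameter values; it follows two trajectories whose starting points differ by one part in a billion and shows that they end up far apart.

```python
# A minimal sketch of sensitive dependence on initial conditions, using
# Lorenz's 1963 convection equations. The parameters (sigma, rho, beta)
# and step size are the standard textbook values, not anything from the
# Zhang et al. study.

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system one step with simple Euler integration."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def trajectory(x0, y0, z0, steps):
    """Integrate forward `steps` steps and return the final state."""
    x, y, z = x0, y0, z0
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

# Two starting states that differ by one part in a billion -- a
# numerical stand-in for the flap of a butterfly's wings.
a = trajectory(1.0, 1.0, 1.0, 3000)
b = trajectory(1.0 + 1e-9, 1.0, 1.0, 3000)

# The tiny initial difference grows until the two runs are unrelated.
separation = abs(a[0] - b[0]) + abs(a[1] - b[1]) + abs(a[2] - b[2])
print(f"separation after 3000 steps: {separation:.3f}")
```

The same qualitative behavior, exponential growth of tiny perturbations until forecasts decorrelate, is what limits the weather models described below, though real atmospheric models are vastly more complex than these three equations.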

Paul Voosen writes for Science that since the 1980s, a combination of better models and more accurate data has extended forecasting accuracy by about one day each decade.

If you thought your meteorologist lied to you now, just think about contending with the weather 40 years ago!

Recent research by Fuqing Zhang of Pennsylvania State University in State College has supported Lorenz’s ideas: the team compiled past atmospheric data and ran simulations of two major weather events in Europe and Asia during 2015 and 2016.

The researchers entered this data into the European Centre for Medium-Range Weather Forecasts model and ran the simulation 120 times.

They also drew from the U.S. Global Forecast System, the next version of which is expected to be released in March.

Under both models, once the simulations were extended beyond two weeks, they diverged until they appeared totally unrelated.

Paul Voosen writes that before this study, simulations meant to test Lorenz’s predictions were not precise enough.

The work of Zhang et al. has provided strong evidence that, no matter how advanced our computer models and atmospheric monitoring technologies may be, unpredictability is here to stay.