Just how accurate are oil price predictions?


“Economists don’t forecast because they know, they forecast because they’re asked.”

J.K. Galbraith, economist

This article is based on a chapter from the book, Crude Forecasts: Predictions, Pundits & Profits in the Commodity Casino. Buy it to find out more about how you can make better predictions of future commodity markets and hold others to account.

The Wall Street Journal (WSJ) polls institutions every month on a range of economic variables, including inflation, unemployment and West Texas Intermediate (WTI) crude oil prices. Each month, the survey asks for predictions for the forthcoming June and December. For the sake of consistency, I have reviewed the accuracy of forecasts made both six and twelve months prior to June and December each year. I reviewed surveys from mid-2007 to the end of 2016, a period covering booms and busts, the financial crisis and quantitative easing, the Arab Spring and the shale revolution.

By way of a disclaimer, this is not an exhaustive study. It covers only a ten-year period, and there is no guarantee that forecasters who were correct during this boom-and-bust period will be any more or less successful in future periods. It also says nothing about how well those same institutions did in predicting other commodity prices, such as metals and agricultural products. Finally, it covers only those forecasters that the WSJ surveyed – there may have been others who were more or less accurate in their predictions.

I tried to answer three questions: first, were the forecasts correct? Second, were the predictions valuable? And third, was there a forecaster you could have followed that would have led to a better overall result than taking the consensus? Let’s discuss each of these points in more detail.

First, were the forecasts right? The answer was clearly no. The average consensus forecast (ie, the average of the commodity price predictions) for WTI crude oil was off by 27% when forecasting six months out. Oil price forecasts looking twelve months out were only slightly worse, off by an average of 30%. Another way of looking at it is that in only three of the nineteen periods reviewed was the consensus six-month forecast within 5% of the actual result.
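To make the arithmetic concrete, here is a minimal sketch of the error metric used above: the absolute gap between a forecast and the price that actually materialised, expressed as a percentage of the actual price. The figures in the example are illustrative placeholders, not the WSJ survey data.

```python
# A minimal sketch of the forecast-error calculation. The (forecast, actual)
# pairs below are illustrative placeholders in $/bbl, not the WSJ survey data.

def pct_error(forecast: float, actual: float) -> float:
    """Absolute forecast error as a fraction of the actual price."""
    return abs(forecast - actual) / actual

periods = [(112.0, 41.0), (45.0, 70.0), (99.0, 53.0)]  # one pair per period

errors = [pct_error(f, a) for f, a in periods]
avg_error = sum(errors) / len(errors)
hits = sum(1 for e in errors if e <= 0.05)  # periods called within 5%

print(f"average error: {avg_error:.0%}, within 5%: {hits} of {len(periods)}")
```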

Producers, manufacturers, investors and traders make billions of dollars’ worth of investments based on the outlook for commodity prices. If these forecasts are this far off over a period as short as twelve months, how can they have confidence making decisions over much longer time horizons?

Second, did any of the forecasts spot the major changes in the direction of the oil price over the past decade? In June 2008, WTI crude was trading at approximately $135 per barrel. The consensus prediction for December 2008 was just under $112 per barrel, and $101 per barrel twelve months ahead. The reality was somewhat different. The financial crisis hit, and the oil price was hit with it. WTI crude prices fell to $41 per barrel in late December 2008, only rebounding to $70 per barrel in mid-2009. Almost all forecasters polled in mid-2008 saw prices falling over the next twelve months, but no one saw the scale of the collapse. The closest six-month forecast, although over 50% higher than the outcome, came from Parsec Financial Management!

It was a similar story in trying to call the rebound in prices. Remember that oil and other commodities rebounded in 2009 as quantitative easing helped support prices. Back in December 2008, however, the consensus prediction for June 2009 was for prices to stay low, nudging up only slightly from the levels of the time. This time the consensus was over 30% too low. Only three forecasters called the market within 5%: Societe Generale, Barclays and the Economic and Revenue Forecast Council.

Over the next few years, oil prices traded in a gradually narrowing range between $70 and $110 per barrel. Sure enough, the consensus and individual forecasts, increasingly anchored against recent prices, turned out to be broadly correct – well, at least within a range of 5–15%. Like many forecasters, these economists were driving with their eyes fixed on the rear-view mirror, enabling them to tell us where things were but not where they were going. This bears out the old adage that “it’s difficult to make accurate predictions, especially with regard to the future.” The corollary is also true: predicting the past is a snap.

If we fast forward to June 2014, oil prices were trading at approximately $105 per barrel, having peaked at just over $112 per barrel ten months earlier. The consensus forecast was for oil prices to fall from those levels to below $99 per barrel by December 2014. In reality, the consensus proved too optimistic by 85%. All of the forecasters were over 65% too optimistic, apart from one – Parsec Financial Management had predicted oil prices in the high $60s per barrel for December 2014, only 24% too high.

Does that mean that Parsec Financial Management have superior insight? Well, not quite. A look back through earlier forecasts reveals that they were consistently bearish all the way back to early 2010, calling for oil prices to stay around $50–70 per barrel even as oil prices kept on rising. This is what’s known as the “stopped watch” method of prediction. If you keep on saying something extreme will happen and it eventually does, you are feted as a guru when, in reality, you were simply lucky (eventually) with the timing.

Predictions are most useful when they anticipate change. If you predict that something will stay the same and it doesn’t change, that prediction is unlikely to earn you much money or wow clients with your predictive abilities. However, predicting change can be very profitable for investors, while timing hedging strategies can be a welcome boost to both producers and manufacturers.

Third, was there a forecaster you could have followed that would have led to a better overall result than taking the consensus? Of the 26 institutions that contributed prices for at least 14 of the 19 forecast periods, including the key turning points in the market identified above, three achieved a better result than the consensus when looking six months out. These were: JP Morgan (26% average forecast error, 4 correct calls); Comerica Bank (25%, 2); and The Conference Board (25%, 3). The most accurate institution achieved a two-percentage-point improvement on the consensus, but still had an average six-month forecasting error of well over 20%. The research sample also includes Goldman Sachs, often famed for its supposed commodity prediction ability. How did they do? They were an average of 36% off, with one correct call.
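As a rough illustration of how such a comparison might be run, the sketch below scores each forecaster, and the consensus (the simple average of all forecasts for a period), by average absolute percentage error and number of calls within 5% of the outcome. The institution names and figures are hypothetical, not the survey data.

```python
# Hypothetical ranking of forecasters against the consensus; all names and
# numbers are placeholders, not the WSJ survey data.

from statistics import mean

actuals = {"2008-12": 41.0, "2009-06": 70.0, "2014-12": 53.0}  # $/bbl

forecasts = {
    "Forecaster A": {"2008-12": 112.0, "2009-06": 45.0, "2014-12": 99.0},
    "Forecaster B": {"2008-12": 95.0, "2009-06": 68.0, "2014-12": 66.0},
}

# Consensus = average forecast across institutions for each period.
consensus = {p: mean(f[p] for f in forecasts.values()) for p in actuals}

def score(preds):
    """Return (average absolute % error, number of calls within 5%)."""
    errs = [abs(preds[p] - a) / a for p, a in actuals.items()]
    return mean(errs), sum(e <= 0.05 for e in errs)

for name, preds in {**forecasts, "Consensus": consensus}.items():
    avg, hits = score(preds)
    print(f"{name}: average error {avg:.0%}, correct calls {hits}")
```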

“The lucky idiot”

Nevertheless, the ability to predict over very short periods says little about the quality of a forecast. As with investment success in the stock market, or anywhere else, it’s very difficult to measure success either in retrospect or a priori. If someone you follow correctly predicts the price of oil, it looks good on paper, but what you can’t glean from the magic number is the risk that it wouldn’t happen. To what extent was the forecaster a “lucky idiot”, as the investor Nassim Nicholas Taleb would call them?

That a forecast for future oil prices turns out right doesn’t mean it was bound to happen. Howard Marks, co-founder of Oaktree Capital, uses the example of the weatherman to explain:

He says there’s a 70 percent chance of rain tomorrow. It rains; was he right or wrong? Or it doesn’t rain; was he right or wrong? It’s impossible to assess the accuracy of probability estimates other than 0 and 100 except over a very large number of trials.
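One standard way of scoring probability forecasts over many trials is the Brier score: the mean squared gap between the stated probability and what actually happened. Neither Marks nor the WSJ survey uses it; the sketch below is included only to make the point about calibration concrete.

```python
# Illustrative only: the Brier score against 0/1 outcomes. A single "70% chance
# of rain" call proves little; scored over many days it shows whether the
# forecaster is well calibrated.

def brier_score(probs, outcomes):
    """Mean squared error of probability forecasts against 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

probs = [0.7] * 10                           # "70% chance of rain" every day
rainy_days = [1, 1, 0, 1, 1, 1, 0, 1, 0, 1]  # it rained on 7 of the 10 days

print(f"Brier score: {brier_score(probs, rainy_days):.3f}")  # lower is better
```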

But the climate and the weather are never exactly the same on any given day. A meteorologist can look back at similar patterns and infer what is likely to happen tomorrow. Commodity forecasters can do a similar exercise, but no two market environments are ever alike. The world is always changing.

Risk exists only in the future, and it’s impossible to know for sure what it holds. No ambiguity is evident when we view the past: only the things that happened, happened. That definiteness, however, doesn’t mean the process that creates outcomes is clear-cut and dependable. Many things could have happened, and the fact that only one did happen understates the variability that existed.

Remember, predictions are made with foresight but tested with hindsight. It is easy to look back at the sequence of events that led to a forecast coming true and for the pundit, and anyone who had seen that forecast, to say: “I knew it would, it was obvious it would turn out that way.” Hindsight bias, as it’s known, prevents the forecaster and the consumer of that forecast from examining whether the call was right because the pundit judged the risks correctly, or whether the pundit was just a “lucky idiot”.

The pundit (or lucky idiot) might be famous for making one big call. But is that enough? Given enough events, even a monkey will make the right prediction eventually. Does that mean the monkey is endowed with magical powers of insight about the future? Sadly, no. For purely statistical reasons, outstanding performances tend to be followed by something less impressive. This is because most performances involve some randomness. On any given day, the worst observed outcomes will be incompetents having an unlucky day, and the best observed outcomes will be stars having a lucky day. Observe the same group on another day and, because luck rarely lasts, the former outliers will not be quite as bad, or as good, as they first seemed.

Regression to the mean, as it’s known, probably explains why many winners subsequently disappoint. And the disappointment will be spectacular if some people are taking bigger risks than others. The most impressive performance may combine skill with luck. In a financial market — or a casino — the easiest way to become an outlier is to make a big bet.
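A small simulation makes the skill-plus-luck point concrete: give each forecaster a fixed level of skill and a fresh dose of random luck in every round, and the stars of round one look distinctly less stellar in round two. The set-up below is purely illustrative, with arbitrary assumed parameters.

```python
# A purely illustrative simulation of regression to the mean: observed results
# mix persistent skill with one-off luck of equal magnitude, so round-one stars
# drift back towards the average in round two.

import random

random.seed(1)
n = 1000
skill = [random.gauss(0, 1) for _ in range(n)]

def observed_performance():
    # Observed result = persistent skill + one-off luck.
    return [s + random.gauss(0, 1) for s in skill]

round1 = observed_performance()
round2 = observed_performance()

# Take the top 5% of performers in round one...
stars = sorted(range(n), key=lambda i: round1[i], reverse=True)[: n // 20]

avg1 = sum(round1[i] for i in stars) / len(stars)
avg2 = sum(round2[i] for i in stars) / len(stars)
print(f"round one average of the stars: {avg1:.2f}")
print(f"round two average of the same group: {avg2:.2f}")  # closer to the mean
```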

While randomness can explain much, hubris may also play a role. The economists Ulrike Malmendier of the University of California, Berkeley, and Geoffrey Tate of the University of California, Los Angeles, examined what happened to companies whose chief executives won accolades such as Forbes’s “Best Performing CEO” or BusinessWeek’s “Best Manager”. They picked a statistical control group of near-winners who might have been expected to win an award but did not.

Like the near-winners, the winners ran large and profitable companies. However, those companies run by the winners did far worse in the three years following the award, lagging behind the near-winners by approximately 20%. The prizewinning CEOs, nevertheless, enjoyed millions of dollars more in pay. They were also more likely to write books, accept seats on other corporate boards and improve their golf handicap.