How To Make LLMs Generate Time Series Forecasts Instead Of Texts
Introduction
Since ChatGPT hit the scene, 'Large Language Models (LLMs)' has become a buzzword, with everyone on social media sharing the latest paper on the next big advancement. At first, I was just as excited, but I eventually started to lose interest because many of these so-called breakthroughs felt like incremental improvements; they lacked the 'wow' factor that ChatGPT had. But then I stumbled upon a post from Amazon Science that reignited my interest in LLMs. It described using LLMs not for the usual NLP tasks, but for something entirely different: time series forecasting!
This got me excited: imagine harnessing the power of LLMs, models that have already achieved remarkable feats in Natural Language Processing (NLP), and applying it to time series! Could we finally predict the future with perfect accuracy? Obviously not, but even reducing the uncertainty in our forecasts would be incredibly valuable.
In this blog post, I’ll walk you through how the authors of the Chronos paper repurposed existing LLMs for time series forecasting. And if you’re the hands-on type, you can reproduce all the diagrams and results by following the code in this GitHub repository.
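To give a quick taste of where we are headed before digging into the details, here is a minimal sketch of zero-shot forecasting with a pretrained Chronos pipeline. It assumes the `chronos-forecasting` package (and `torch`) is installed; the checkpoint name `amazon/chronos-t5-small` and the synthetic toy series are just illustrative stand-ins, not the exact setup from the repository linked above.

```python
# Minimal sketch: zero-shot forecasting with a pretrained Chronos model.
# Assumes: pip install chronos-forecasting torch
import numpy as np
import torch
from chronos import ChronosPipeline

# Load a small pretrained Chronos checkpoint (T5 backbone) on CPU.
pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",
    device_map="cpu",
    torch_dtype=torch.float32,
)

# A toy context: a noisy seasonal signal standing in for a real series.
t = np.arange(200)
context = torch.tensor(10 + np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(200))

# Sample probabilistic forecasts for the next 24 steps.
# Output shape: [num_series, num_samples, prediction_length]
forecast = pipeline.predict(context, prediction_length=24)

# Summarize the samples into a median forecast with an 80% interval.
low, median, high = np.quantile(forecast[0].numpy(), [0.1, 0.5, 0.9], axis=0)
print("median:", median[:5])
print("interval width:", (high - low)[:5])
```

The interesting part, and the subject of the rest of this post, is what happens inside that `predict` call: how a model built to generate text ends up generating numbers instead.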