vijaye12 committed on
Commit 97a1625
1 Parent(s): c89fbf3

Update README.md

Files changed (1)
  1. README.md +35 -28
README.md CHANGED
@@ -16,9 +16,12 @@ tags:
 <img src="ttm_image.webp" width="600">
 </p>

- TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Series Forecasting, open-sourced by IBM Research.
 **With less than 1 Million parameters, TTM introduces the notion of the first-ever “tiny” pre-trained models for Time-Series Forecasting.**

 TTM outperforms several popular benchmarks demanding billions of parameters in zero-shot and few-shot forecasting. TTMs are lightweight
 forecasters, pre-trained on publicly available time series data with various augmentations. TTM provides state-of-the-art zero-shot forecasts and can easily be
 fine-tuned for multi-variate forecasts with just 5% of the training data to be competitive. Refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf) for more details.
@@ -36,26 +39,29 @@ fine-tuned for multi-variate forecasts with just 5% of the training data to be c

 - **512-96:** Given the last 512 time-points (i.e. context length), this model can forecast up to next 96 time-points (i.e. forecast length)
 in future. This model is targeted towards a forecasting setting of context length 512 and forecast length 96 and
- recommended for hourly and minutely resolutions (Ex. 10 min, 15 min, 1 hour, etc). (branch name: main) [Benchmarks](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_benchmarking_512_96.ipynb)

 - **1024-96:** Given the last 1024 time-points (i.e. context length), this model can forecast up to next 96 time-points (i.e. forecast length)
 in future. This model is targeted towards a long forecasting setting of context length 1024 and forecast length 96 and
- recommended for hourly and minutely resolutions (Ex. 10 min, 15 min, 1 hour, etc). (branch name: 1024-96-v1) [Benchmarks](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_benchmarking_1024_96.ipynb)

 - **New Releases (trained on larger pretraining datasets, released on October 2024)**:

   - **512-96-r2**: Given the last 512 time-points (i.e. context length), this model can forecast up to next 96 time-points (i.e. forecast length)
   in future. This model is pre-trained with a larger pretraining dataset for improved accuracy. Recommended for hourly and minutely
-   resolutions (Ex. 10 min, 15 min, 1 hour, etc). (branch name: 512-96-r2) [Benchmarks](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/tinytimemixer/ttm_v2_benchmarking_512_96.ipynb)

 ## Model Capabilities with example scripts

 - Getting Started [colab](https://colab.research.google.com/github/IBM/tsfm/blob/main/notebooks/tutorial/ttm_tutorial.ipynb)
 - Zeroshot Multivariate Forecasting [Example](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/ttm_getting_started.ipynb)
 - Finetuned Multivariate Forecasting:
-   - Channel-Independent Finetuning [Example](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/ttm_getting_started.ipynb)
   - Channel-Mix Finetuning [Example](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/tutorial/ttm_channel_mix_finetuning.ipynb)
 - **New Releases (extended features released on October 2024)**
   - Finetuning and Forecasting with Exogenous/Control Variables [Example](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/tutorial/ttm_with_exog_tutorial.ipynb)
@@ -63,7 +69,14 @@ fine-tuned for multi-variate forecasts with just 5% of the training data to be c
   - Rolling Forecasts - Extend forecast lengths beyond 96 via rolling capability [Example](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/ttm_rolling_prediction_getting_started.ipynb)
   - Helper scripts for optimal Learning Rate suggestions for Finetuning [Example](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/tutorial/ttm_with_exog_tutorial.ipynb)

 ## Recommended Use
 1. Users have to externally standard scale their data independently for every channel before feeding it to the model (Refer to [TSP](https://github.com/IBM/tsfm/blob/main/tsfm_public/toolkit/time_series_preprocessor.py), our data processing utility for data scaling.)
@@ -71,14 +84,6 @@ fine-tuned for multi-variate forecasts with just 5% of the training data to be c
 3. Enabling any upsampling or prepending zeros to virtually increase the context length for shorter-length datasets is not recommended and will
 impact the model performance.

- ## Other Benchmark Scripts:
- - TTM (1024-96, released in this model card with 1M parameters) outperforms pre-trained MOIRAI-Small (14M parameters) by 10%, MOIRAI-Base (91M parameters) by 2% and
- MOIRAI-Large (311M parameters) by 3% on zero-shot forecasting (horizon = 96). [[notebook]](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_benchmarking_1024_96.ipynb)
- - TTM quick fine-tuning also outperforms the competitive statistical baselines (Statistical ensemble and S-Naive) in
- M4-hourly dataset which existing pretrained TS models are finding difficult to outperform. [[notebook]](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_m4_hourly.ipynb)
-
-
-

 ## Model Description
 
@@ -99,7 +104,7 @@ getting started [notebook](https://github.com/IBM/tsfm/blob/main/notebooks/hfdem

 ## Model Details

- For more details on TTM architecture and benchmarks, refer to our [paper](https://arxiv.org/pdf/2401.03955v5.pdf).

 TTM-1 currently supports 2 modes:
@@ -114,22 +119,19 @@ The current release supports multivariate forecasting via both channel independe
 Decoder Channel-Mixing can be enabled during fine-tuning for capturing strong channel-correlation patterns across
 time-series variates, a critical capability lacking in existing counterparts.

- In addition, TTM also supports exogenous infusion and categorical data which is not released as part of this version.
- Stay tuned for these extended features.


-
-
 ### Model Sources

- - **Repository:** https://github.com/IBM/tsfm/tree/main/tsfm_public/models/tinytimemixer
- - **Paper:** https://arxiv.org/pdf/2401.03955v5.pdf
- - **Paper (Newer variants, extended benchmarks):** https://arxiv.org/pdf/2401.03955.pdf

- ### External Blogs on TTM
- - https://aihorizonforecast.substack.com/p/tiny-time-mixersttms-powerful-zerofew
- - https://medium.com/@david.proietti_17/predicting-venetian-lagoon-tide-levels-with-multivariate-time-series-modeling-8bafdf229588

 ## Uses

 ```
@@ -184,6 +186,7 @@ The TTM models were trained on a collection of datasets from the Monash Time Ser
 - US Births: https://zenodo.org/records/4656049
 - Wind Farms Production data: https://zenodo.org/records/4654858
 - Wind Power: https://zenodo.org/records/4656032


 ## Citation
@@ -203,14 +206,18 @@ work
 }
 ```

- **APA:**
-
- Ekambaram, V., Jati, A., Dayama, P., Mukherjee, S., Nguyen, N. H., Gifford, W. M., … Kalagnanam, J. (2024). Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series. arXiv [Cs.LG]. Retrieved from http://arxiv.org/abs/2401.03955


 ## Model Card Authors

- Vijay Ekambaram, Arindam Jati, Pankaj Dayama, Nam H. Nguyen, Wesley Gifford and Jayant Kalagnanam


 ## IBM Public Repository Disclosure:
 
 <img src="ttm_image.webp" width="600">
 </p>

+ TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Series Forecasting, open-sourced by IBM Research.
 **With less than 1 Million parameters, TTM introduces the notion of the first-ever “tiny” pre-trained models for Time-Series Forecasting.**

+
+ TTM has been accepted at NeurIPS 2024.
+
 TTM outperforms several popular benchmarks demanding billions of parameters in zero-shot and few-shot forecasting. TTMs are lightweight
 forecasters, pre-trained on publicly available time series data with various augmentations. TTM provides state-of-the-art zero-shot forecasts and can easily be
 fine-tuned for multi-variate forecasts with just 5% of the training data to be competitive. Refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf) for more details.
 

 - **512-96:** Given the last 512 time-points (i.e. context length), this model can forecast up to the next 96 time-points (i.e. forecast length)
 in future. This model is targeted towards a forecasting setting of context length 512 and forecast length 96 and
+ recommended for hourly and minutely resolutions (e.g., 10 min, 15 min, 1 hour). This model corresponds to the TTM-Q variant used in the paper. (branch name: main) [[Benchmark Scripts]](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_benchmarking_512_96.ipynb)

 - **1024-96:** Given the last 1024 time-points (i.e. context length), this model can forecast up to the next 96 time-points (i.e. forecast length)
 in future. This model is targeted towards a long forecasting setting of context length 1024 and forecast length 96 and
+ recommended for hourly and minutely resolutions (e.g., 10 min, 15 min, 1 hour). (branch name: 1024-96-v1) [[Benchmark Scripts]](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_benchmarking_1024_96.ipynb)

 - **New Releases (trained on larger pretraining datasets, released on October 2024)**:

   - **512-96-r2**: Given the last 512 time-points (i.e. context length), this model can forecast up to the next 96 time-points (i.e. forecast length)
   in future. This model is pre-trained with a larger pretraining dataset for improved accuracy. Recommended for hourly and minutely
+   resolutions (e.g., 10 min, 15 min, 1 hour). This model corresponds to the TTM-B variant used in the paper. (branch name: 512-96-r2) [[Benchmark Scripts]](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/tinytimemixer/ttm_v2_benchmarking_512_96.ipynb)

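As a rough illustration of the branch naming above, the sketch below loads one of the variants by passing the branch name as the `revision` argument of `from_pretrained`. This is a minimal, unofficial sketch: it assumes the `tsfm_public` package from the granite-tsfm repository is installed, the repo ID is a placeholder for this model card's Hugging Face ID, and the config field names are assumptions; refer to the getting-started notebook for the exact usage.

```python
# Hedged sketch: select a TTM variant by branch name (Hugging Face revision).
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

MODEL_ID = "ibm/TTM"  # placeholder: replace with this model card's Hugging Face repo ID

# branch "main"       -> 512-96 model   (TTM-Q in the paper)
# branch "1024-96-v1" -> 1024-96 model
# branch "512-96-r2"  -> 512-96-r2 model (TTM-B in the paper)
model = TinyTimeMixerForPrediction.from_pretrained(MODEL_ID, revision="main")

# Config field names assumed to follow the usual HF time-series naming.
print(model.config.context_length, model.config.prediction_length)
```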
 ## Model Capabilities with example scripts
+
+ The model scripts below can be used with any of the TTM models. Please update the HF model URL and branch name in the `from_pretrained` call appropriately to pick the model of your choice (a minimal loading-and-forecasting sketch follows the list below).
+
 - Getting Started [colab](https://colab.research.google.com/github/IBM/tsfm/blob/main/notebooks/tutorial/ttm_tutorial.ipynb)
 - Zeroshot Multivariate Forecasting [Example](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/ttm_getting_started.ipynb)
 - Finetuned Multivariate Forecasting:
+   - Channel-Independent Finetuning [Example](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/ttm_getting_started.ipynb) [Example: M4-Hourly finetuning](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/tinytimemixer/ttm_m4_hourly.ipynb)
   - Channel-Mix Finetuning [Example](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/tutorial/ttm_channel_mix_finetuning.ipynb)
 - **New Releases (extended features released on October 2024)**
   - Finetuning and Forecasting with Exogenous/Control Variables [Example](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/tutorial/ttm_with_exog_tutorial.ipynb)

   - Rolling Forecasts - Extend forecast lengths beyond 96 via rolling capability [Example](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/hfdemo/ttm_rolling_prediction_getting_started.ipynb)
   - Helper scripts for optimal Learning Rate suggestions for Finetuning [Example](https://github.com/ibm-granite/granite-tsfm/blob/ttm_v2_release/notebooks/tutorial/ttm_with_exog_tutorial.ipynb)

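The notebooks above are the reference examples. As a hedged illustration only, the sketch below loads the 512-96 model, produces a zero-shot multivariate forecast from a context window, and extends the horizon beyond 96 steps by rolling the forecast back into the context. The repo ID, the `past_values` argument, and the `prediction_outputs` output field are assumptions based on related Hugging Face time-series model classes; consult the linked rolling-forecast notebook for the supported API.

```python
# Hedged sketch: zero-shot forecasting plus a simple rolling extension beyond 96 steps.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

MODEL_ID = "ibm/TTM"  # placeholder: replace with this model card's Hugging Face repo ID
model = TinyTimeMixerForPrediction.from_pretrained(MODEL_ID, revision="main")
model.eval()

context_length, num_channels = 512, 3
# (batch, time, channels); real data must be standard-scaled per channel beforehand
past_values = torch.randn(1, context_length, num_channels)

with torch.no_grad():
    out = model(past_values=past_values)
forecast = out.prediction_outputs  # assumed output field, shape (1, 96, num_channels)

# Rolling forecast: append the prediction to the context and predict again,
# extending the horizon from 96 to 192 steps.
context = torch.cat([past_values, forecast], dim=1)[:, -context_length:, :]
with torch.no_grad():
    next_forecast = model(past_values=context).prediction_outputs
full_forecast = torch.cat([forecast, next_forecast], dim=1)  # (1, 192, num_channels)
```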
+ ## Benchmarks
+
+ TTM outperforms popular benchmarks such as TimesFM, Moirai, Chronos, Lag-Llama, Moment, GPT4TS, TimeLLM, and LLMTime in zero/few-shot forecasting while significantly reducing computational requirements.
+ Moreover, TTMs are lightweight and can be executed even on CPU-only machines, enhancing usability and fostering wider
+ adoption in resource-constrained environments. For more details, refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf). TTM-Q referred to in the paper maps to the `512-96` model
+ uploaded in the main branch, and TTM-B referred to in the paper maps to the `512-96-r2` model. Please note that the Granite TTM models are pre-trained exclusively on datasets
+ with clear commercial-use licenses that are approved by our legal team. As a result, the pre-training dataset used in this release differs slightly from the one used in the research
+ paper, which may lead to minor variations in model performance as compared to the published results. Please refer to our paper for more details.

 ## Recommended Use
 1. Users have to externally standard-scale their data, independently for every channel, before feeding it to the model (refer to [TSP](https://github.com/IBM/tsfm/blob/main/tsfm_public/toolkit/time_series_preprocessor.py), our data processing utility for data scaling). A minimal per-channel scaling sketch is shown below the list.

 3. Enabling any upsampling or prepending zeros to virtually increase the context length for shorter-length datasets is not recommended and will
 impact the model performance.

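A minimal sketch of point 1 using scikit-learn in place of the TSP utility: fit one standard scaler on the training split (it scales every channel independently), transform the context window before calling the model, and invert the transform on the forecasts. This is illustrative only; the `TimeSeriesPreprocessor` linked above handles the scaling, id columns, and splits for you.

```python
# Hedged sketch of per-channel standard scaling (Recommended Use, point 1).
import numpy as np
from sklearn.preprocessing import StandardScaler

series = np.random.randn(1000, 3)   # (time, channels) raw multivariate series
train = series[:800]                # fit the scaler on the training split only

scaler = StandardScaler()           # scales every column (channel) independently
scaler.fit(train)

context = scaler.transform(series[-512:])   # scaled 512-point context window for TTM
# ... run TTM on `context` to obtain `forecast` with shape (96, 3) ...
# forecast_in_original_units = scaler.inverse_transform(forecast)
```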
 ## Model Description

 ## Model Details

+ For more details on TTM architecture and benchmarks, refer to our [paper](https://arxiv.org/pdf/2401.03955.pdf).

 TTM-1 currently supports 2 modes:

 Decoder Channel-Mixing can be enabled during fine-tuning for capturing strong channel-correlation patterns across
 time-series variates, a critical capability lacking in existing counterparts.

+ In addition, TTM also supports exogenous infusion and categorical data infusion.

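A hedged sketch of switching the decoder to channel-mixing when loading the model for fine-tuning. The `decoder_mode` argument and its `"mix_channel"` value are assumptions based on the channel-mix finetuning tutorial linked above; verify against that notebook before use.

```python
# Hedged sketch: enable decoder channel-mixing for fine-tuning.
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

MODEL_ID = "ibm/TTM"  # placeholder: replace with this model card's Hugging Face repo ID

# `decoder_mode="mix_channel"` is assumed to switch the decoder from
# channel-independent to channel-mixing behavior.
model = TinyTimeMixerForPrediction.from_pretrained(
    MODEL_ID,
    revision="main",
    decoder_mode="mix_channel",
)
# Fine-tune `model` (e.g., with the Hugging Face Trainer) on a small fraction of the target data.
```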
 ### Model Sources

+ - **Repository:** https://github.com/ibm-granite/granite-tsfm/tree/main/tsfm_public/models/tinytimemixer
+ - **Paper:** https://arxiv.org/pdf/2401.03955.pdf

+ ### Blogs and articles on TTM
+ - Refer to our [wiki](https://github.com/ibm-granite/granite-tsfm/wiki)

 ## Uses

 ```
 
 - US Births: https://zenodo.org/records/4656049
 - Wind Farms Production data: https://zenodo.org/records/4654858
 - Wind Power: https://zenodo.org/records/4656032
+ - [to be updated]


 ## Citation
 
 }
 ```

+ **BibTeX:**

+ @inproceedings{ekambaram2024tinytimemixersttms,
+   title={Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series},
+   author={Vijay Ekambaram and Arindam Jati and Pankaj Dayama and Sumanta Mukherjee and Nam H. Nguyen and Wesley M. Gifford and Chandra Reddy and Jayant Kalagnanam},
+   booktitle={Advances in Neural Information Processing Systems (NeurIPS 2024)},
+   year={2024},
+ }

 ## Model Card Authors

+ Vijay Ekambaram, Arindam Jati, Pankaj Dayama, Wesley Gifford and Jayant Kalagnanam


 ## IBM Public Repository Disclosure: