vijaye12 committed on
Commit
0d164a4
1 Parent(s): 1655761

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -23,7 +23,7 @@ can be easily fine-tuned for your target data. Refer to our [paper](https://arxi
 - *UniTime (WWW 24) by 27% in zero-shot forecasting.*
 - Zero-shot results of TTM surpass the *few-shot results of many popular SOTA approaches* including
 PatchTST (ICLR 23), PatchTSMixer (KDD 23), TimesNet (ICLR 23), DLinear (AAAI 23) and FEDFormer (ICML 22).
-- TTM (1024-96, released in this model card with 1M parameters) outperforms pre-trained MOIRAI (Small, 14M parameters) by 10%, MOIRAI (Base, 91M parameters) by 4% and
+- TTM (1024-96, released in this model card with 1M parameters) outperforms pre-trained MOIRAI (Small, 14M parameters) by 10%, MOIRAI (Base, 91M parameters) by 2% and
 MOIRAI (Large, 311M parameters) by 3% on zero-shot forecasting (fl = 96). (TODO: add notebook)
 - TTM quick fine-tuning also outperforms the hard statistical baselines (Statistical ensemble and S-Naive) in
 M4-hourly dataset which existing pretrained TS models are finding hard to outperform. (TODO: add notebook)