---
license: mit
datasets:
- AutonLab/Timeseries-PILE
metrics:
- accuracy
- mse
- mae
- f1
tags:
- time series
- forecasting
- classification
- anomaly detection
- imputation
- transformers
- pretrained models
- foundation models
- time-series
pipeline_tag: time-series-forecasting
---
# MOMENT-Large

MOMENT is a family of foundation models for general-purpose time-series analysis. The models in this family (1) serve as a building block for diverse **time-series analysis tasks** (e.g., forecasting, classification, anomaly detection, and imputation), (2) are effective **out-of-the-box**, i.e., with no (or few) task-specific exemplars (enabling zero-shot forecasting, few-shot classification, etc.), and (3) are **tunable** using in-distribution and task-specific data to improve performance.

For details on MOMENT models, training data, and experimental results, please refer to the paper [MOMENT: A Family of Open Time-series Foundation Models](https://arxiv.org/pdf/2402.03885.pdf).

MOMENT-1 comes in 3 sizes: [Small](https://huggingface.co/AutonLab/MOMENT-1-small), [Base](https://huggingface.co/AutonLab/MOMENT-1-base), and [Large](https://huggingface.co/AutonLab/MOMENT-1-large). 

# Usage

**Recommended Python Version:** Python 3.11 (support for additional versions is expected soon).

You can install the `momentfm` package using pip:
```bash
pip install momentfm
```
Alternatively, to install the latest version directly from the GitHub repository:
```bash
pip install git+https://github.com/moment-timeseries-foundation-model/moment.git
```


To load the pre-trained model for one of the tasks, use one of the following code snippets:

**Forecasting**
```python
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large", 
    model_kwargs={
        'task_name': 'forecasting',
        'forecast_horizon': 96
    },
)
model.init()
```
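
Once initialized, the pipeline can be called on batched time series. The following is a minimal, illustrative sketch rather than part of the official card: the fixed context length of 512, the `x_enc` keyword, and the `output.forecast` attribute reflect our reading of the MOMENT tutorials, so please verify them against the tutorials linked below.
```python
import torch

# MOMENT expects inputs of shape (batch_size, n_channels, 512);
# shorter series should be padded and accompanied by an input mask.
x = torch.randn(16, 1, 512)  # 16 univariate series, 512 time steps each

model.eval()
with torch.no_grad():
    output = model(x_enc=x)  # keyword name assumed from the tutorials

forecast = output.forecast   # expected shape: (16, 1, 96) for forecast_horizon=96
print(forecast.shape)
```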

**Classification**
```python
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large", 
    model_kwargs={
        'task_name': 'classification',
        'n_channels': 1,
        'num_class': 2
    },
)
model.init()
```
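
As with forecasting, the classification head can be probed with random data to confirm shapes. This is a hedged sketch: the `output.logits` attribute and the fixed-length 512-step input window are assumptions based on the MOMENT tutorials, not guarantees of this card.
```python
import torch

# A batch of 8 univariate series (n_channels=1), each 512 time steps long.
x = torch.randn(8, 1, 512)

model.eval()
with torch.no_grad():
    output = model(x_enc=x)

logits = output.logits                  # expected shape: (8, 2) for num_class=2
predictions = logits.argmax(dim=-1)     # predicted class per series
```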

**Anomaly Detection, Imputation, and Pre-training**
```python
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large", 
    model_kwargs={"task_name": "reconstruction"},
)
model.init()
```
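
In reconstruction mode, anomaly scores are typically derived from the reconstruction error, as in the anomaly detection tutorial. The snippet below is a sketch under that assumption; the `output.reconstruction` attribute and the point-wise squared-error score are taken from our reading of the tutorials rather than from this card.
```python
import torch

x = torch.randn(4, 1, 512)  # batch of series to score

model.eval()
with torch.no_grad():
    output = model(x_enc=x)

# Point-wise anomaly score: squared reconstruction error per time step.
anomaly_scores = (x - output.reconstruction) ** 2
```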

**Representation Learning**
```python
from momentfm import MOMENTPipeline

model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large", 
    model_kwargs={'task_name': 'embedding'},
)
```
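
In embedding mode, the pipeline returns a fixed-size representation per series that can be fed to a downstream model (e.g., an SVM, as in the representation learning tutorial). The `output.embeddings` attribute and its (batch, d_model) shape are assumptions based on the tutorials.
```python
import torch

x = torch.randn(32, 1, 512)  # 32 univariate series

model.eval()
with torch.no_grad():
    output = model(x_enc=x)

embeddings = output.embeddings  # expected shape: (32, d_model), e.g. (32, 1024) for MOMENT-Large
```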

### Tutorials
Here is the list of tutorials and reproducible experiments to get started with MOMENT for various tasks:
- [Forecasting](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/forecasting.ipynb)
- [Classification](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/classification.ipynb)
- [Anomaly Detection](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/anomaly_detection.ipynb)
- [Imputation](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/imputation.ipynb)
- [Representation Learning](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/representation_learning.ipynb)
- [Real-world Electrocardiogram (ECG) Case Study](https://github.com/moment-timeseries-foundation-model/moment/blob/main/tutorials/ptbxl_classification.ipynb) -- This tutorial also shows how to fine-tune MOMENT on a real-world ECG classification problem, including training and inference on multiple GPUs and parameter-efficient fine-tuning (PEFT).

## Model Details

### Model Description

- **Developed by:** [Auton Lab](https://autonlab.org/), [Carnegie Mellon University](https://www.cmu.edu/) and [University of Pennsylvania](https://www.upenn.edu/)
- **Model type:** Time-series Foundation Model
- **License:** MIT License

### Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/moment-timeseries-foundation-model/ (Pre-training and research code coming out soon!)
- **Paper:** https://arxiv.org/abs/2402.03885
- **Demo:** https://github.com/moment-timeseries-foundation-model/moment/tree/main/tutorials


## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

We train multiple models over many days, resulting in significant energy usage and a sizeable carbon footprint. However, we hope that releasing our models will make future time-series modeling efforts quicker and more efficient, resulting in lower carbon emissions.

We use the Total Graphics Power (TGP) to calculate the total power consumed for training MOMENT models, although the actual power drawn by the GPU likely varies somewhat with GPU utilization during training. Our calculations do not account for power demands from other components of our compute infrastructure. We use 336.566 kg CO2/MWh as the standard value of CO2 emissions per megawatt-hour of energy consumed in [Pittsburgh](https://emissionsindex.org/).

- **Hardware Type:** NVIDIA RTX A6000 GPU
- **GPU Hours:** 404
- **Compute Region:** Pittsburgh, USA
- **Carbon Emission (tCO2eq):** 

#### Hardware

All models were trained and evaluated on a computing cluster consisting of 128 AMD EPYC 7502 CPUs, 503 GB of RAM, and 8 NVIDIA RTX A6000 GPUs, each with 49 GiB of memory. All MOMENT variants were trained on a single A6000 GPU (without any data or model parallelism).

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
If you use MOMENT, please cite our paper:

```bibtex
@inproceedings{goswami2024moment,
  title={MOMENT: A Family of Open Time-series Foundation Models},
  author={Mononito Goswami and Konrad Szafer and Arjun Choudhry and Yifu Cai and Shuo Li and Artur Dubrawski},
  booktitle={International Conference on Machine Learning},
  year={2024}
}
```

**APA:**

Goswami, M., Szafer, K., Choudhry, A., Cai, Y., Li, S., & Dubrawski, A. (2024). 
MOMENT: A Family of Open Time-series Foundation Models. In International Conference on Machine Learning. PMLR.