---
license: mit
library_name: transformers
pipeline_tag: unconditional-image-generation
datasets:
  - commaai/commavq
---

# commaVQ - GPT2M

A GPT2M model trained on a larger version of the commaVQ dataset.

This model can generate driving video unconditionally, i.e. from an empty prompt.

Below is an example of 5 seconds of imagined video using GPT2M.
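As a rough sketch of how such a model is driven, the snippet below autoregressively samples one frame's worth of VQ tokens with a GPT-2 model through `transformers`. The token layout (8×16 VQ codes per frame, vocabulary of 1024 codes plus a BOS token with id 1024) follows the commaVQ dataset's conventions, and the tiny randomly initialised config is a stand-in so the sketch runs without downloading weights; for real generations one would load this repository's checkpoint with `GPT2LMHeadModel.from_pretrained` instead.

```python
# Minimal sketch: sample one imagined frame of commaVQ tokens with GPT-2.
# Assumptions: 8*16 = 128 VQ codes per frame, vocab 1024, BOS id 1024.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

FRAME_H, FRAME_W = 8, 16                 # token grid per frame (assumed)
BOS_TOKEN = 1024                         # assumed BOS id appended to vocab

# Tiny randomly initialised stand-in model so the sketch is self-contained.
# Real use: model = GPT2LMHeadModel.from_pretrained("commaai/commavq-gpt2m")
config = GPT2Config(
    vocab_size=1025, n_positions=256, n_layer=2, n_head=2, n_embd=64
)
model = GPT2LMHeadModel(config).eval()

bos = torch.tensor([[BOS_TOKEN]])        # empty prompt: just the BOS token
with torch.no_grad():
    out = model.generate(
        bos,
        do_sample=True,                  # stochastic sampling, not greedy
        max_new_tokens=FRAME_H * FRAME_W,
        pad_token_id=BOS_TOKEN,
    )

# Drop the BOS token and reshape the sampled codes into one frame grid.
frame_tokens = out[0, 1:].reshape(FRAME_H, FRAME_W)
print(frame_tokens.shape)
```

The sampled token grid would then be decoded back to an image with the commaVQ VQ decoder; repeating the loop frame by frame yields the imagined video.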