---
title: README
emoji: π
colorFrom: pink
colorTo: indigo
sdk: static
pinned: false
---
This organization is a part of the NeurIPS 2021 demonstration "Training Transformers Together".
In this demo, we've trained a model similar to OpenAI DALL-E: a Transformer "language model" that generates images from text descriptions. Training happened collaboratively: volunteers from all over the Internet contributed to the training using hardware available to them. We used LAION-400M, the world's largest openly available image-text-pair dataset, with 400 million samples. Our model was based on the dalle-pytorch implementation by Phil Wang, with a few tweaks to make it communication-efficient.
See our website for details about how it works.
This organization gathers people participating in the collaborative training and provides links to the related materials:
The materials below were available during the training run itself:
Feel free to reach out to us on Discord if you have any questions.