xinyangz committed on
Commit 971d357
1 Parent(s): ab4c8d9

Add model card v1.

Files changed (1)
  1. README.md +56 -3
README.md CHANGED
# TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-green.svg?style=flat-square)](http://makeapullrequest.com)
[![arXiv](https://img.shields.io/badge/arXiv-2209.07562-b31b1b.svg)](https://arxiv.org/abs/2209.07562)


This repo contains models, code, and pointers to datasets from our paper: [TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations](https://arxiv.org/abs/2209.07562).
[[PDF]](https://arxiv.org/pdf/2209.07562.pdf)
[[HuggingFace Models]](https://huggingface.co/Twitter)

### Overview
TwHIN-BERT is a new multilingual Tweet language model trained on 7 billion Tweets from over 100 distinct languages. TwHIN-BERT differs from prior pre-trained language models in that it is trained not only with text-based self-supervision (e.g., MLM), but also with a social objective based on the rich social engagements within a Twitter Heterogeneous Information Network (TwHIN).

TwHIN-BERT can be used as a drop-in replacement for BERT in a variety of NLP and recommendation tasks. It not only outperforms similar models on semantic understanding tasks (such as text classification), but also on **social recommendation** tasks such as predicting user-to-Tweet engagement.
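
Because TwHIN-BERT exposes the standard BERT interface, fine-tuning it for a downstream task works the same way as for any 🤗 Transformers BERT checkpoint. Below is a minimal sketch of attaching a classification head to TwHIN-BERT-base (released in the next section); the two-class label set and example Tweets are illustrative placeholders, not data or settings from the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical 2-class Tweet classification setup; labels and texts below are
# placeholders for illustration only.
tokenizer = AutoTokenizer.from_pretrained('Twitter/twhin-bert-base')
model = AutoModelForSequenceClassification.from_pretrained(
    'Twitter/twhin-bert-base', num_labels=2
)

# Tokenize a small batch of Tweets and run a forward pass with labels
# to obtain the classification loss used during fine-tuning.
texts = ["I love this!", "This is terrible."]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)
print(outputs.loss, outputs.logits.shape)
```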

## 1. Pretrained Models

We initially release two pretrained TwHIN-BERT models (base and large) that are compatible with the [HuggingFace BERT models](https://github.com/huggingface/transformers).

| Model | Size | Download Link (🤗 HuggingFace) |
| ------------- | ------------- | --------- |
| TwHIN-BERT-base | 280M parameters | [Twitter/TwHIN-BERT-base](https://huggingface.co/Twitter/twhin-bert-base) |
| TwHIN-BERT-large | 550M parameters | [Twitter/TwHIN-BERT-large](https://huggingface.co/Twitter/twhin-bert-large) |


To use these models in 🤗 Transformers:
```python
from transformers import AutoTokenizer, AutoModel

# Load the TwHIN-BERT base tokenizer and encoder from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained('Twitter/twhin-bert-base')
model = AutoModel.from_pretrained('Twitter/twhin-bert-base')

# Encode an example Tweet and run a forward pass to get contextual token embeddings.
inputs = tokenizer("I'm using TwHIN-BERT! #TwHIN-BERT #NLP", return_tensors="pt")
outputs = model(**inputs)
```
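
If you need one fixed-size vector per Tweet (e.g., as a feature for retrieval or engagement prediction), a common recipe is to mean-pool the final hidden states over non-padding tokens. The sketch below shows that recipe; it is an illustrative pooling choice, not necessarily the one used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('Twitter/twhin-bert-base')
model = AutoModel.from_pretrained('Twitter/twhin-bert-base')

tweets = ["I'm using TwHIN-BERT! #TwHIN-BERT #NLP", "Multilingual Tweet representations"]
inputs = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings over non-padding positions to get one vector per Tweet.
mask = inputs["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # (num_tweets, hidden_size)
```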


<!-- ## 2. Set up environment and data
### Environment
TBD


## 3. Fine-tune TwHIN-BERT

TBD -->


## Citation
If you use TwHIN-BERT or our datasets in your work, please cite the following:
```bib
@article{zhang2022twhin,
  title={TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations},
  author={Zhang, Xinyang and Malkov, Yury and Florez, Omar and Park, Serim and McWilliams, Brian and Han, Jiawei and El-Kishky, Ahmed},
  journal={arXiv preprint arXiv:2209.07562},
  year={2022}
}
```