Twitter June 2020 (RoBERTa-base, 99M)

This is a RoBERTa-base model trained on 98.66M tweets posted up to the end of June 2020. More details and performance scores are available in the TimeLMs paper.

Below, we provide some usage examples using the standard Transformers interface. For another interface more suited to comparing predictions and perplexity scores between models trained at different temporal intervals, check the TimeLMs repository.

For other models trained up to different periods, check this table.

Preprocess Text

Replace usernames and links with the placeholders "@user" and "http". If you're interested in retaining verified users, which were also retained during training, you may keep the users listed here.

def preprocess(text):
    preprocessed_text = []
    for t in text.split():
        if len(t) > 1:
            # Replace @mentions (tokens containing more than one '@' are left as-is)
            t = '@user' if t[0] == '@' and t.count('@') == 1 else t
            # Replace links with a generic placeholder
            t = 'http' if t.startswith('http') else t
        preprocessed_text.append(t)
    return ' '.join(preprocessed_text)
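
As a quick sanity check, the helper maps mentions and links to the two placeholders and leaves everything else untouched (the handle and URL below are made up for illustration):

print(preprocess("Can't wait for the game @user123 http://example.com"))
# "Can't wait for the game @user http"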

Example Masked Language Model

from transformers import pipeline, AutoTokenizer

MODEL = "cardiffnlp/twitter-roberta-base-jun2020"
fill_mask = pipeline("fill-mask", model=MODEL, tokenizer=MODEL)
tokenizer = AutoTokenizer.from_pretrained(MODEL)

def pprint(candidates, n):
    # Print the top-n candidates with their scores
    for i in range(n):
        token = tokenizer.decode(candidates[i]['token'])
        score = candidates[i]['score']
        print("%d) %.5f %s" % (i+1, score, token))

texts = [
    "So glad I'm <mask> vaccinated.",
    "I keep forgetting to bring a <mask>.",
    "Looking forward to watching <mask> Game tonight!",
]

for text in texts:
    t = preprocess(text)
    print(f"{'-'*30}\n{t}")
    candidates = fill_mask(t)
    pprint(candidates, 5)

Output:

------------------------------
So glad I'm <mask> vaccinated.
1) 0.52684  not
2) 0.18349  getting
3) 0.07971  fully
4) 0.05598  being
5) 0.02347  self
------------------------------
I keep forgetting to bring a <mask>.
1) 0.13266  mask
2) 0.04859  book
3) 0.04851  laptop
4) 0.03123  pillow
5) 0.02747  blanket
------------------------------
Looking forward to watching <mask> Game tonight!
1) 0.35750  The
2) 0.32703  the
3) 0.13048  End
4) 0.02261  this
5) 0.01066  This
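
By default, the fill-mask pipeline returns its top five candidates, which is why pprint(candidates, 5) above prints everything it receives. To inspect more candidates, you can raise the pipeline's top_k argument, e.g.:

candidates = fill_mask(preprocess(texts[0]), top_k=10)
pprint(candidates, 10)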

Example Tweet Embeddings

from transformers import AutoTokenizer, AutoModel
import numpy as np
from scipy.spatial.distance import cosine
from collections import Counter

def get_embedding(text):  # naive approach for demonstration
    text = preprocess(text)
    encoded_input = tokenizer(text, return_tensors='pt')
    features = model(**encoded_input)
    # Mean-pool the last hidden state over the sequence dimension
    features = features[0].detach().cpu().numpy()
    return np.mean(features[0], axis=0)


MODEL = "cardiffnlp/twitter-roberta-base-jun2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

query = "The book was awesome"
tweets = ["I just ordered fried chicken 🐣",
          "The movie was great",
          "What time is the next game?",
          "Just finished reading 'Embeddings in NLP'"]

sims = Counter()
for tweet in tweets:
    sim = 1 - cosine(get_embedding(query), get_embedding(tweet))
    sims[tweet] = sim

print('Most similar to: ', query)
print(f"{'-'*30}")
for idx, (tweet, sim) in enumerate(sims.most_common()):
    print("%d) %.5f %s" % (idx+1, sim, tweet))

Output:

Most similar to:  The book was awesome
------------------------------
1) 0.99078 The movie was great
2) 0.96610 Just finished reading 'Embeddings in NLP'
3) 0.96095 What time is the next game?
4) 0.95855 I just ordered fried chicken 🐣
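
The get_embedding helper above mean-pools over every position of a single, unpadded sequence. If you embed several tweets in one batch, padding tokens would leak into the mean; the sketch below (an illustrative variant, not part of the original card) masks them out using the attention mask:

import torch

def get_embeddings_batch(texts):  # illustrative batched variant
    texts = [preprocess(t) for t in texts]
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
    with torch.no_grad():
        output = model(**enc)
    # Zero out padded positions before averaging
    mask = enc['attention_mask'].unsqueeze(-1).float()      # (batch, seq, 1)
    summed = (output.last_hidden_state * mask).sum(dim=1)   # (batch, hidden)
    counts = mask.sum(dim=1)                                # (batch, 1)
    return (summed / counts).numpy()

# Example: get_embeddings_batch(tweets) -> array of shape (4, 768)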

Example Feature Extraction

from transformers import AutoTokenizer, AutoModel, TFAutoModel
import numpy as np

MODEL = "cardiffnlp/twitter-roberta-base-jun2020"
tokenizer = AutoTokenizer.from_pretrained(MODEL)

text = "Good night 😊"
text = preprocess(text)

# PyTorch
model = AutoModel.from_pretrained(MODEL)
encoded_input = tokenizer(text, return_tensors='pt')
features = model(**encoded_input)
features = features[0].detach().cpu().numpy() 
features_mean = np.mean(features[0], axis=0) 
#features_max = np.max(features[0], axis=0)

# # TensorFlow
# model = TFAutoModel.from_pretrained(MODEL)
# encoded_input = tokenizer(text, return_tensors='tf')
# features = model(encoded_input)
# features = features[0].numpy()
# features_mean = np.mean(features[0], axis=0) 
# #features_max = np.max(features[0], axis=0)
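
For reference, the extracted features have shape (batch_size, sequence_length, hidden_size), where hidden_size is 768 for RoBERTa-base. A quick check after running the PyTorch block above:

print(features.shape)       # (1, sequence_length, 768)
print(features_mean.shape)  # (768,)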