shahrukhx01 committed
Commit 529bea9 (1 parent: 4e98941)

Create README.md

Files changed (1): README.md (+43)
## Article
[Medium article](https://medium.com/@shahrukhx01/multi-task-learning-with-transformers-part-1-multi-prediction-heads-b7001cf014bf)

## Demo Notebook
[Colab Notebook Multi-task Query classifiers](https://colab.research.google.com/drive/1R7WcLHxDsVvZXPhr5HBgIWa3BlSZKY6p?usp=sharing)

## Clone the model repo
```bash
git clone https://huggingface.co/shahrukhx01/bert-multitask-query-classifiers
```
```python
# Jupyter/IPython magic: change into the cloned repo so multitask_model is importable
%cd bert-multitask-query-classifiers/
```
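The `%cd` above is an IPython/Jupyter magic. If you are running plain Python instead of a notebook, a minimal sketch that makes the repo's `multitask_model` module importable (the `repo_dir` path is an assumption about where you cloned it):

```python
import os
import sys

# Assumed clone location; adjust if you cloned the repo somewhere else.
repo_dir = os.path.abspath("bert-multitask-query-classifiers")

# Put the repo on the import path so `from multitask_model import ...`
# works from an ordinary script.
sys.path.insert(0, repo_dir)
```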
## Load models
```python
from multitask_model import BertForSequenceClassification
from transformers import AutoTokenizer
import torch

# Load the multi-task model, declaring the number of labels for each task head
model = BertForSequenceClassification.from_pretrained(
    "shahrukhx01/bert-multitask-query-classifiers",
    task_labels_map={"quora_keyword_pairs": 2, "spaadia_squad_pairs": 2},
)
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/bert-multitask-query-classifiers")
```
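`task_labels_map` tells the multi-task model which prediction heads to set up and how many labels each one has; here both the `quora_keyword_pairs` and `spaadia_squad_pairs` heads are binary classifiers.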
## Run inference on both tasks
```python
from multitask_model import BertForSequenceClassification
from transformers import AutoTokenizer
import torch

model = BertForSequenceClassification.from_pretrained(
    "shahrukhx01/bert-multitask-query-classifiers",
    task_labels_map={"quora_keyword_pairs": 2, "spaadia_squad_pairs": 2},
)
tokenizer = AutoTokenizer.from_pretrained("shahrukhx01/bert-multitask-query-classifiers")

## Keyword vs Statement/Question Classifier
queries = ["keyword query", "is this a keyword query?"]
task_name = "quora_keyword_pairs"
# Tokenize the queries and route them through the task-specific head
sequence = tokenizer(queries, padding=True, return_tensors="pt")['input_ids']
logits = model(sequence, task_name=task_name)[0]
predictions = torch.argmax(torch.softmax(logits, dim=1).detach().cpu(), dim=1)
for query, prediction in zip(queries, predictions):
    print(f"task: {task_name}, input: {query} \n prediction=> {prediction}")
    print()
```
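The block above only exercises the `quora_keyword_pairs` head. A minimal sketch for the second head, `spaadia_squad_pairs`, follows the same pattern, reusing the `model` and `tokenizer` loaded above (the example queries are illustrative assumptions; the label semantics of this head are not documented in this README):

```python
## Second task head: spaadia_squad_pairs
queries = ["where is the Eiffel Tower located?", "the Eiffel Tower is in Paris"]  # assumed examples
task_name = "spaadia_squad_pairs"
sequence = tokenizer(queries, padding=True, return_tensors="pt")['input_ids']
logits = model(sequence, task_name=task_name)[0]
predictions = torch.argmax(torch.softmax(logits, dim=1).detach().cpu(), dim=1)
for query, prediction in zip(queries, predictions):
    print(f"task: {task_name}, input: {query} \n prediction=> {prediction}")
```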