Create README.md #1
opened by asawczyn

---
annotations_creators:
- expert-generated
language_creators:
- other
language:
- pl
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: 'PolEmo2.0-OUT'
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---

# klej-polemo2-out

## Description

PolEmo2.0 is a dataset of online consumer reviews from four domains: medicine, hotels, products, and university. It is human-annotated at the level of both full reviews and individual sentences. It comprises over 8000 reviews, about 85% of which come from the medicine and hotel domains.

We use the PolEmo2.0 dataset to form two tasks. Both use the same training set, i.e., reviews from the medicine and hotel domains, but are evaluated on different test sets.

**Out-of-Domain** is the second task: the model is tested on out-of-domain reviews, i.e., reviews from the product and university domains. Since the original test sets for those domains are small (50 reviews each), we use the original out-of-domain training set of 900 reviews for testing and create a new split of development and test sets. As a result, the task consists of 1000 reviews, comparable in size to the in-domain test set of 1400 reviews.

## Tasks (input, output, and metrics)

The task is to predict the correct label of the review.

**Input** (*text* column): sentence

**Output** (*target* column): label for sentence sentiment ('zero': neutral, 'minus': negative, 'plus': positive, 'amb': ambiguous)

**Domain**: Online reviews

**Measurements**: Accuracy

**Example**:
*Lekarz zalecił mi kurację alternatywną do dotychczasowej , więc jeszcze nie daję najwyższej oceny ( zobaczymy na ile okaże się skuteczna ) . Do Pana doktora nie mam zastrzeżeń : bardzo profesjonalny i kulturalny . Jedyny minus dotyczy gabinetu , który nie jest nowoczesny , co może zniechęcać pacjentki .* → *__label__meta_amb*
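
The *target* values keep the raw PolEmo labels (e.g. `__label__meta_amb` above and `__label__meta_plus_m` in the loading example below) rather than the short class names used in this card. A minimal sketch for mapping between the two, assuming the raw labels differ from the class names only by the `__label__meta_` prefix and an optional `_m` suffix:

```python
# Hypothetical helper: map a raw PolEmo-style target onto the four class names
# used in this card ('zero', 'minus', 'plus', 'amb') and their sentiment.
SENTIMENT = {"zero": "neutral", "minus": "negative", "plus": "positive", "amb": "ambiguous"}

def simplify_target(raw_label: str) -> str:
    # Strip the '__label__meta_' prefix and an optional trailing '_m'.
    label = raw_label.replace("__label__meta_", "", 1)
    return label[:-2] if label.endswith("_m") else label

print(simplify_target("__label__meta_amb"))                # amb
print(SENTIMENT[simplify_target("__label__meta_plus_m")])  # positive
```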

## Data splits

| Subset     | Cardinality |
|:-----------|------------:|
| train      |        5783 |
| test       |         722 |
| validation |         723 |
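
The split sizes can be verified directly after loading the dataset; a small sketch using the same `datasets` call as in the Loading example below:

```python
from datasets import load_dataset

# Print the number of examples per split to check the cardinalities above.
dataset = load_dataset("allegro/klej-polemo2-out")
for split_name, split in dataset.items():
    print(split_name, split.num_rows)
```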

## Class distribution

| Class | Sentiment | train | validation |  test |
|:------|:----------|------:|-----------:|------:|
| minus | negative  | 0.379 | 0.334      | 0.368 |
| plus  | positive  | 0.271 | 0.332      | 0.302 |
| amb   | ambiguous | 0.182 | 0.332      | 0.328 |
| zero  | neutral   | 0.168 | 0.002      | 0.002 |
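
The proportions above can be recomputed from the raw *target* column; note that the printed keys are the raw `__label__meta_*` labels rather than the shortened class names. A minimal sketch:

```python
from collections import Counter

from datasets import load_dataset

# Recompute per-split class proportions from the raw 'target' column.
dataset = load_dataset("allegro/klej-polemo2-out")
for split_name, split in dataset.items():
    counts = Counter(split["target"])
    total = sum(counts.values())
    print(split_name, {label: round(count / total, 3) for label, count in counts.items()})
```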

## Citation

```
@inproceedings{kocon-etal-2019-multi,
    title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews",
    author = "Koco{\'n}, Jan  and
      Mi{\l}kowski, Piotr  and
      Za{\'s}ko-Zieli{\'n}ska, Monika",
    booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/K19-1092",
    doi = "10.18653/v1/K19-1092",
    pages = "980--991",
    abstract = "In this article we present an extended version of PolEmo {--} a corpus of consumer reviews from 4 domains: medicine, hotels, products and school. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in 2+1 scheme, which gives a total of 197,046 annotations. We obtained a high value of Positive Specific Agreement, which is 0.91 for texts and 0.88 for sentences. PolEmo 2.0 is publicly available under a Creative Commons copyright license. We explored recent deep learning approaches for the recognition of sentiment, such as Bi-directional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT).",
}
```

## License

```
Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
```

## Links

[HuggingFace](https://huggingface.co/datasets/allegro/klej-polemo2-out)

[Source](https://clarin-pl.eu/dspace/handle/11321/710)

[Paper](https://aclanthology.org/K19-1092/)

## Examples

### Loading

```python
from pprint import pprint

from datasets import load_dataset

dataset = load_dataset("allegro/klej-polemo2-out")
pprint(dataset['train'][0])

# {'sentence': 'Super lekarz i człowiek przez duże C . Bardzo duże doświadczenie '
#              'i trafne diagnozy . Wielka cierpliwość do ludzi starszych . Od '
#              'lat opiekuje się moją Mamą staruszką , i twierdzę , że mamy duże '
#              'szczęście , że mamy takiego lekarza . Naprawdę nie wiem cobyśmy '
#              'zrobili , gdyby nie Pan doktor . Dzięki temu , moja mama żyje . '
#              'Każda wizyta u specjalisty jest u niego konsultowana i uważam , '
#              'że jest lepszy od każdego z nich . Mamy do Niego prawie '
#              'nieograniczone zaufanie . Można wiele dobrego o Panu doktorze '
#              'jeszcze napisać . Niestety , ma bardzo dużo pacjentów , jest '
#              'przepracowany ( z tego powodu nawet obawiam się o jego zdrowie ) '
#              'i dostęp do niego jest trudny , ale zawsze możliwy .',
#  'target': '__label__meta_plus_m'}
```
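
For fine-tuning, the string targets can be encoded as integer class ids and the reviews tokenized; a minimal sketch, assuming the `transformers` library is installed and using `allegro/herbert-base-cased` purely as an illustrative checkpoint:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("allegro/klej-polemo2-out")

# Encode string targets as integer class ids (same call as in the evaluation example below).
dataset = dataset.class_encode_column("target")

# 'allegro/herbert-base-cased' is only an example; any tokenizer matching your model will do.
tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-base-cased")

def tokenize(batch):
    # Truncate long reviews to the model's maximum input length.
    return tokenizer(batch["sentence"], truncation=True)

encoded = dataset.map(tokenize, batched=True)
print(encoded["train"].features["target"].names)  # class names behind the integer ids
```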

### Evaluation

```python
import random
from pprint import pprint

from datasets import load_dataset, load_metric

dataset = load_dataset("allegro/klej-polemo2-out")
dataset = dataset.class_encode_column("target")
references = dataset["test"]["target"]

# generate random predictions
predictions = [random.randrange(max(references) + 1) for _ in range(len(references))]

acc = load_metric("accuracy")
f1 = load_metric("f1")

acc_score = acc.compute(predictions=predictions, references=references)
f1_score = f1.compute(predictions=predictions, references=references, average="macro")

pprint(acc_score)
pprint(f1_score)

# {'accuracy': 0.2894736842105263}
# {'f1': 0.2484406098784191}
```
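
Note that `load_metric` is deprecated in recent versions of `datasets`; if it is unavailable in your environment, the standalone `evaluate` package exposes the same metrics. A minimal equivalent sketch, assuming `evaluate` is installed:

```python
import evaluate

# Load the same metrics through the standalone 'evaluate' package.
accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")

predictions = [0, 1, 2, 3]  # toy values; use your model's predicted class ids
references = [0, 1, 1, 3]

print(accuracy.compute(predictions=predictions, references=references))
print(f1.compute(predictions=predictions, references=references, average="macro"))
```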