Commit 8cffeb0 · Parent: 3f574e1

Create README.md (#6)

Co-authored-by: Gabriela Zuñiga <[email protected]>

README_ES.md CHANGED (+216 -41)
@@ -1,51 +1,181 @@
-license: cc-by-
-pretty_name:
(blank lines removed)
-### Esta dividido en:

@@ -54,9 +184,54 @@ Clasificación binaria sobre párrafos relacionados a cambio climatico o sustent

---
license: cc-by-4.0
task_categories:
- text-classification
language:
- es
pretty_name: ClimID
size_categories:
- 1K<n<10K
---

# Dataset Card for BERTIN-ClimID: BERTIN-Base Climate-related text Identification

README Spanish Version: [README_ES](https://huggingface.co/datasets/somosnlp/spa_climate_detection/blob/main/README_ES.md)

The dataset for BERTIN-ClimID was built by merging several open-source sources.

<!--

Corpus name:

There is usually a short name ("pretty name") for URLs, tables, etc., and a longer, more descriptive one. You can use acronyms to create the pretty name.

Language:

The Dataset Card can be in Spanish or English. We recommend English so that the international community can use your dataset. Since we are a Spanish-speaking community and do not want language to be a barrier, the most inclusive option is to write it in one language and translate it (automatically?) into the other. The repo would then contain a README.md (Dataset Card in English) linking to a README_ES.md (Dataset Card in Spanish), or vice versa, README.md and README_EN.md. If you need help with the translation, we can assist.

What to include in this section:

This section works like an abstract. Write a summary of the corpus and the motivation for the project (including the related SDGs). If the project has a logo, include it here.

If you want to include a Spanish version of the Dataset Card, link it here at the top (e.g. "A Spanish version of this Dataset Card can be found under [`README_es.md`](URL)"). Do the same for English.

-->

## Dataset Details

### Dataset Description

<!-- A one-sentence summary of the dataset. -->
- **Curated by:** [Gerardo Huerta](https://huggingface.co/Gerard-1705), [Gabriela Zuñiga](https://huggingface.co/Gabrielaz)
- **Funded by:** SomosNLP, HuggingFace
- **Language(s):** es-ES, es-PE
- **License:** cc-by-nc-sa-4.0

### Dataset Sources

- **Repository:** [somosnlp/spa_climate_detection](https://huggingface.co/datasets/somosnlp/spa_climate_detection) <!-- Link to the `main` of the repo holding the scripts, i.e. either this same dataset repo on HuggingFace or GitHub. -->
- **Paper:** [WIP] <!-- If you plan to submit it to NAACL, put "WIP", "Coming soon!" or similar. If you do not intend to submit it to any conference or write a preprint, remove this. -->
- **Video presentation:** [Proyecto BERTIN-ClimID](https://www.youtube.com/watch?v=sfXLUP9Ei-o) <!-- Link to your presentation video on YouTube (all of them are uploaded here: https://www.youtube.com/playlist?list=PLTA-KAy8nxaASMwEUWkkTfMaDxWBxn-8J) -->

<!-- ### Dataset Versions & Formats [optional] -->

<!-- If you have several versions of your dataset, you can combine them all in the same repo and simply link the corresponding commits here. See the example at https://huggingface.co/bertin-project/bertin-roberta-base-spanish -->

<!-- If there are several formats of the dataset (e.g. unannotated, question/answer, gemma), you can list them here. -->

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use
- News classification: the dataset can be used to train models that classify news headlines related to climate change.
- Paper classification: identifying scientific texts that discuss solutions to and/or effects of climate change; the abstract of each paper can be used for identification.
- Social media post classification: classifying short social media posts as related or unrelated to climate topics.
<!-- This section describes suitable use cases for the dataset. -->
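
To make the intended use concrete, here is a minimal fine-tuning sketch (not the authors' training script). It assumes the `question`/`answer` column names and the `train`/`test` splits described below under Dataset Structure, and uses the BERTIN base checkpoint mentioned in this card only as an example encoder; adjust names and hyperparameters as needed.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("somosnlp/spa_climate_detection")

checkpoint = "bertin-project/bertin-roberta-base-spanish"  # example base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

def tokenize(batch):
    # Most texts are 200-500 words long (see Limitations), so truncate at 512 tokens.
    return tokenizer(batch["question"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)
tokenized = tokenized.rename_column("answer", "labels")  # Trainer expects "labels"

model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="climid-classifier",
                           per_device_train_batch_size=16,
                           num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```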

### Out-of-Scope Use
- Building information repositories on climate issues.
- Serving as a basis for new classification systems for climate solutions, to help disseminate new efforts against climate change across different sectors.
- Creating new datasets that address the topic.
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

## Dataset Structure
- **question:** the text to classify.
- **answer:** binary label; 1 if the text is related to climate change or sustainability, 0 if it is not.
- **domain:** the topic the text relates to. There are three values: "climate_change_reports" (paragraphs about climate change extracted from corporate annual reports), "miscellaneous_press" (paragraphs on various topics extracted from the press) and "climate_change" (paragraphs about climate change with no particular source).
- **country of origin:** where the data comes from geographically. There are three categories: "global", "Spain" and "USA". "Global" covers data taken from repositories that do not indicate a specific origin and may come from any country.
- **language:** geographic variety of Spanish. Two values are used, "es_pe" and "es_esp", because much of the data had to be translated from English to Spanish and the annotations follow the regional variety of the team member who did the translation.
- **register:** functional variety of the language. Three values are used, "cult", "medium" and "colloquial", depending on the origin of the data.
- **task:** the purpose for which the input text is intended.
- **period:** the era of the language used. This dataset uses present-day language.
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
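
As a quick sanity check on these fields, the snippet below loads the dataset and prints the columns and the binary label distribution. It is only a sketch and assumes the column names listed above.

```python
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("somosnlp/spa_climate_detection")
print(dataset)                              # splits, column names and sizes
print(dataset["train"][0])                  # one example: question, answer, domain, ...
print(Counter(dataset["train"]["answer"]))  # distribution of the binary label
```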

<!--

List and explain each column of the corpus. For each column of type "category", indicate the percentage of examples. You can find the proposed corpus structure in [estructura_corpus.md](/plantillas_docs_proyectos/estructura_corpus.md).

Example:

The corpus has a total of X examples and contains the following columns:
- `pregunta`
- `respuesta`
- `idioma` (geographic variety): ISO code of the language. Distribution: 33% `es_AR`, 33% `es_UY`, 33% `es_PY`
- `registro` (functional variety): `coloquial`, `medio` or `culto`. Distribution: 100% `coloquial`.
- `periodo` (historical variety): `actual`, `moderno` (18th-19th c.), `clásico` (16th-17th c.) or `medieval`. Distribution: 100% `actual`.
- `dominio`: domain of the instruction. Distribution: 10% `sociales_historia`, ...
- `tarea`: task of the instruction. Distribution: 100% `resumen`.
- `país_origen`: ISO code of the country of origin of the data. Distribution:
- `país_referencia`: ISO code of the country the question refers to. Distribution: 55% blank, 5% ..., ...

-->

[More Information Needed]

## Dataset Creation

### Curation Rationale
The dataset was created to build a Spanish-language repository of information and resources on topics such as climate change, sustainability, global warming and energy, because we did not find an existing dataset of this kind. Climate change and global warming are major global problems, so it is important to fight them everywhere and in every language, and to make solutions and information accessible to everyone.

<!-- Motivation for the creation of this dataset. -->

### Source Data
We used several data sources to build a varied dataset that works with different types of text: articles, news, social media posts and more. We included:

- Spanish translation of the [ClimateBERT climate_detection](https://huggingface.co/datasets/climatebert/climate_detection) dataset
- News in Spanish on topics not related to climate change: [Spanish news headers](https://www.kaggle.com/datasets/kevinmorgado/spanish-news-classification)
- Translation of opinions related to climate change: [Opinions](https://data.world/crowdflower/sentiment-of-climate-change)
- Translation of news tweets not related to climate change: [Posts](https://www.kaggle.com/datasets/muhammadmemoon/los-angeles-twitter-news-dataset)

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

<!-- Whenever possible, include links to the source data. -->
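
As an illustration of how heterogeneous sources like the ones listed above can be merged into a single binary-labeled corpus with the `datasets` library, here is a sketch. The file names and the `question` column are placeholders, not the actual processing pipeline.

```python
from datasets import load_dataset, concatenate_datasets

# Placeholder CSVs standing in for the translated sources above.
related = load_dataset("csv", data_files="climate_related_es.csv", split="train")
unrelated = load_dataset("csv", data_files="misc_news_es.csv", split="train")

# Keep a common text column and attach the binary label.
related = related.select_columns(["question"]).map(lambda _: {"answer": 1})
unrelated = unrelated.select_columns(["question"]).map(lambda _: {"answer": 0})

combined = concatenate_datasets([related, unrelated]).shuffle(seed=42)
```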

#### Data Collection and Processing

- Spanish translation of the [ClimateBERT climate_detection](https://huggingface.co/datasets/climatebert/climate_detection) dataset.
- News in Spanish on topics not related to climate change: [Spanish news headers](https://www.kaggle.com/datasets/kevinmorgado/spanish-news-classification).
  From this dataset we kept the news column for the topics Macroeconomics, Innovation, Regulations, Alliances and Reputation, all labeled with 0.
  The dataset also contained Sustainability as a topic, but it was removed because only unrelated texts were needed here.
- Translation of opinions related to climate change: [Opinions](https://data.world/crowdflower/sentiment-of-climate-change).
  All opinions in this dataset are related to climate change, so they were labeled with 1. The texts were cleaned by removing hashtags, usernames and emojis so that only the textual content of the tweets is used (see the sketch below).
- Translation of news tweets not related to climate change: [Posts](https://www.kaggle.com/datasets/muhammadmemoon/los-angeles-twitter-news-dataset).
  The news in this dataset is categorized and short (like the opinions), and none of it is related to climate change, so it was labeled with 0. The same cleaning (hashtags, usernames, emojis) was applied. This dataset was chosen to balance the amount of related text and to include short, unrelated texts in training.

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

<!-- Link here the scripts and notebooks used to generate the corpus. -->
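
The exact cleaning rules are not published in this card, but the following sketch shows the kind of normalization described above (removing hashtags, usernames and emojis so only the textual content of each tweet remains); the patterns are assumptions.

```python
import re

def clean_tweet(text: str) -> str:
    text = re.sub(r"@\w+", "", text)                   # usernames
    text = re.sub(r"#\w+", "", text)                   # hashtags
    text = re.sub(r"[^\w\s.,;:¡!¿?()%\-]", "", text)   # emojis and other symbols
    return re.sub(r"\s+", " ", text).strip()           # collapse leftover whitespace

print(clean_tweet("¡Ola de calor récord! 🌡️ #CambioClimatico @noticias_LA"))
# -> "¡Ola de calor récord!"
```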

#### Who are the source data producers?
- ClimateBERT dataset: large listed companies, as described in the original dataset's paper.
- Spanish news: web scraping of bank news sites.
- Opinions on climate change: extracted tweets.
- Opinions not related to climate change: roughly two months of Los Angeles news tweets from Twitter.

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process
All records already carried the corresponding annotation (related or not related to climate change and global warming); we only mapped the text labels to binary values (1/0), as sketched below.
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

<!-- Link here the notebook used to create the Argilla annotation space and the annotation guidelines. -->
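
A minimal sketch of that relabelling step, assuming the source corpora used string labels such as "yes"/"no" (the actual label values may differ):

```python
from datasets import Dataset

BINARY = {"yes": 1, "no": 0}  # assumed original label values

def to_binary(example):
    return {"answer": BINARY[example["label"]]}

# Toy example; the real source datasets were remapped in the same spirit.
ds = Dataset.from_dict({"question": ["texto ..."], "label": ["yes"]})
ds = ds.map(to_binary).remove_columns("label")
```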

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information
In this case an anonymization process was not necessary.
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

<!-- Here you can mention possible biases inherited from the origin of the data and from the people who annotated it, discuss the balance of the represented categories, and the efforts you made to mitigate biases and risks. -->
At this point no specific study of biases and limitations has been carried out, but we make the following notes based on previous experience and model testing:
- The resulting model inherits the biases and limitations of the base model with which it was trained.
- Direct biases, such as the predominance of formal, high-register language in the dataset (most texts come from news and corporate legal documentation), can make it harder to identify texts written in a lower register (e.g. colloquial language). To mitigate this, diverse opinions on climate topics taken from sources such as social networks were included in the dataset, and the labels were additionally rebalanced (see the tables below).
- The dataset carries other limitations, for example models lose performance on short texts because most texts in the dataset are between 200 and 500 words long. Again, we tried to mitigate this by including short texts.

- train:

| Number | Label | % |
|--------|-------|---|
| 1300 | 0 | 45% |

- test:

| Number | Label | % |
|--------|-------|---|
| 480 | 1 | 62% |
| 300 | 0 | 38% |

### Recommendations
Our recommendation is to keep adding Spanish text samples of both long and short length.
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations.

Example:

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. -->

[More Information Needed]

## License
cc-by-nc-sa-4.0, inherited from the data used to build the dataset.
<!-- State the license under which the dataset is released and, if it is not Apache 2.0, explain why a more restrictive license was needed (i.e. inheritance from the data used). -->

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**
```
@misc{BERTIN-ClimID,
  author = {Gerardo Huerta and Gabriela Zuñiga},
  title = {Dataset for BERTIN-ClimID: BERTIN-Base Climate-related text Identification},
  month = {April},
  year = {2024},
  url = {https://huggingface.co/datasets/somosnlp/spa_climate_detection}
}
```

<!-- ## Glossary [optional]
If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

## More Information
This project was developed during the [Hackathon #Somos600M](https://somosnlp.org/hackathon) organized by SomosNLP. We thank all the organizers and sponsors for their support during the event.

**Team:**

- [Gerardo Huerta](https://huggingface.co/Gerard-1705)
- [Gabriela Zuñiga](https://huggingface.co/Gabrielaz)

## Contact