tomaarsen committed c7e32ec
1 Parent(s): 7c0ab2e

Update README.md

Files changed (1):
  1. README.md +100 -0
README.md CHANGED
@@ -1,4 +1,58 @@
  ---
+ language:
+ - en
+ - multilingual
+ - ar
+ - bg
+ - ca
+ - cs
+ - da
+ - de
+ - el
+ - es
+ - et
+ - fa
+ - fi
+ - fr
+ - gl
+ - he
+ - hi
+ - hr
+ - hu
+ - hy
+ - id
+ - it
+ - ja
+ - ka
+ - ko
+ - lt
+ - lv
+ - mk
+ - ms
+ - nl
+ - pl
+ - pt
+ - ro
+ - ru
+ - sk
+ - sl
+ - sq
+ - sr
+ - sv
+ - th
+ - tr
+ - uk
+ - ur
+ - vi
+ - zh
+ size_categories:
+ - 10M<n<100M
+ task_categories:
+ - feature-extraction
+ - sentence-similarity
+ pretty_name: OpenSubtitle
+ tags:
+ - sentence-transformers
  dataset_info:
  - config_name: en-ar
    features:
@@ -690,3 +744,49 @@ configs:
  - split: train
    path: en-zh_cn/train-*
  ---
+
+
+ # Dataset Card for Parallel Sentences - OpenSubtitles
+
+ This dataset contains parallel sentences (i.e. an English sentence paired with the same sentence in another language) for numerous languages. Most of the sentences originate from the [OPUS website](https://opus.nlpl.eu/).
+ In particular, this dataset contains the [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles) dataset.
+
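+ The snippet below is a minimal sketch of how a single language pair can be loaded with the `datasets` library; `"en-ar"` is one of the configurations of this dataset, and the example values in the comment are elided placeholders, not actual rows.
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the English-Arabic subset of this dataset.
+ dataset = load_dataset("sentence-transformers/parallel-sentences-opensubtitles", "en-ar", split="train")
+
+ # Each row pairs an English sentence with the same sentence in the other language.
+ print(dataset[0])
+ # {"english": "...", "non_english": "..."}
+ ```
+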
+ ## Related Datasets
+
+ The following datasets are also a part of the Parallel Sentences collection:
+ * [parallel-sentences-europarl](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-europarl)
+ * [parallel-sentences-global-voices](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-global-voices)
+ * [parallel-sentences-muse](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-muse)
+ * [parallel-sentences-jw300](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-jw300)
+ * [parallel-sentences-news-commentary](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-news-commentary)
+ * [parallel-sentences-opensubtitles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-opensubtitles)
+ * [parallel-sentences-talks](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks)
+ * [parallel-sentences-tatoeba](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-tatoeba)
+ * [parallel-sentences-wikimatrix](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-wikimatrix)
+ * [parallel-sentences-wikititles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-wikititles)
+
+ These datasets can be used to train multilingual sentence embedding models. For more information, see [sbert.net - Multilingual Models](https://www.sbert.net/examples/training/multilingual/README.html).
+
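+ As a rough sketch of that approach (multilingual knowledge distillation: a student model learns to map both an English sentence and its translation onto a teacher model's embedding of the English sentence), one possible setup is shown below. The teacher/student model names, the `"en-ar"` subset choice, and the hyperparameters are only placeholders, not a recommendation from this dataset card.
+
+ ```python
+ from datasets import load_dataset
+ from torch.utils.data import DataLoader
+ from sentence_transformers import SentenceTransformer, InputExample, losses
+
+ # Placeholder model choices: an English teacher and a multilingual student.
+ teacher = SentenceTransformer("paraphrase-distilroberta-base-v2")
+ student = SentenceTransformer("xlm-roberta-base")
+
+ # A small slice of one language pair of this dataset.
+ pairs = load_dataset("sentence-transformers/parallel-sentences-opensubtitles", "en-ar", split="train[:1000]")
+
+ # The teacher embedding of the English sentence is the regression target for the
+ # student embeddings of both the English sentence and its translation.
+ targets = teacher.encode(pairs["english"])
+ train_examples = []
+ for row, target in zip(pairs, targets):
+     train_examples.append(InputExample(texts=[row["english"]], label=target))
+     train_examples.append(InputExample(texts=[row["non_english"]], label=target))
+
+ train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
+ train_loss = losses.MSELoss(model=student)
+ student.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
+ ```
+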
+ ## Dataset Subsets
+
+ ### `all` subset
+
+ * Columns: "english", "non_english"
+ * Column types: `str`, `str`
+ * Examples:
+ ```python
+ {
+     "english": "...",        # an English sentence (values elided here)
+     "non_english": "...",    # the same sentence in one of the other languages
+ }
+ ```
+ * Collection strategy: Combining all other subsets from this dataset.
+ * Deduplicated: No
+
+ ### `en-...` subsets
+
+ * Columns: "english", "non_english"
+ * Column types: `str`, `str`
+ * Examples:
+ ```python
+ {
+     "english": "...",        # an English sentence (values elided here)
+     "non_english": "...",    # the same sentence in the non-English language of the subset
+ }
+ ```
+ * Collection strategy: Processing the raw data from [parallel-sentences](https://huggingface.co/datasets/sentence-transformers/parallel-sentences) and formatting it in Parquet, followed by deduplication (illustrated below).
+ * Deduplicated: Yes
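+
+ The snippet below only illustrates what pair-level deduplication means here, i.e. keeping each (english, non_english) pair at most once; it is not the processing script that was actually used, and the rows are placeholders.
+
+ ```python
+ # Placeholder rows standing in for raw, possibly repeated sentence pairs.
+ rows = [
+     {"english": "...", "non_english": "..."},
+     {"english": "...", "non_english": "..."},  # duplicate of the first pair
+ ]
+
+ seen = set()
+ deduplicated = []
+ for row in rows:
+     key = (row["english"], row["non_english"])
+     if key not in seen:
+         seen.add(key)
+         deduplicated.append(row)
+
+ print(len(deduplicated))  # -> 1
+ ```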