Sources

#3 opened by tomaarsen (HF staff)

Hello!

I couldn't help but notice the interesting datasets on your profile. Is this dataset (and https://huggingface.co/datasets/gowitheflow/unsupervised-multilingual) a combination of various other datasets? And how about https://huggingface.co/datasets/gowitheflow/embedding-datasets? I also see that you've deduplicated my parallel-sentences datasets, nice work! That's quite useful.

I'm also curious if you've trained any embedding models with these datasets.

  • Tom Aarsen

Hi Tom,

@tomaarsen Thanks for noticing these datasets! Long story short, they mostly came from combining collections from Sentence Transformers - thanks for your continual efforts to make it better! I manually selected datasets that are of reasonable size and give good performance for embedding models on their own. The "multilingual" part comes from further concatenating the deduplicated parallel-sentence datasets.
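Roughly, the combining step is just stacking the selected datasets with the `datasets` library; here is a minimal sketch (the repo and config names below are illustrative placeholders rather than the exact sources, and in the real pipeline each parallel subset is deduplicated first, as in the snippet further down):

from datasets import load_dataset, concatenate_datasets

# Placeholder language-pair configs; the parallel-sentence subsets share the
# same schema (e.g. an "english" and a "non_english" column), so they can be
# concatenated directly into one multilingual training set.
configs = ["en-de", "en-fr"]
parts = [
    load_dataset("sentence-transformers/parallel-sentences-talks", cfg, split="train")
    for cfg in configs
]
multilingual = concatenate_datasets(parts)
print(multilingual)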

The unsupervised version is the concatenation of the supervised version and "gowitheflow/wiki-span", which I built with a procedure similar to Contriever's.
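For anyone unfamiliar, Contriever's trick is "independent cropping": two random spans sampled from the same document are treated as a positive pair. A toy sketch of that idea (the span lengths and whitespace tokenization here are simplifications, not the exact settings I used for wiki-span):

import random

def random_span(tokens, min_len=10, max_len=128):
    # Sample a random contiguous span of tokens from a document.
    span_len = random.randint(min(min_len, len(tokens)), min(max_len, len(tokens)))
    start = random.randint(0, len(tokens) - span_len)
    return " ".join(tokens[start:start + span_len])

def make_pair(document_text):
    # Independent cropping: two spans from the same document form a positive pair.
    tokens = document_text.split()
    return {"anchor": random_span(tokens), "positive": random_span(tokens)}

print(make_pair("The quick brown fox jumps over the lazy dog near the river bank on a sunny afternoon."))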

In case you're interested, the datasets deduplicated from yours were built with logic like the following (it might be even more useful to take care of low-resource languages by over-sampling them, which I haven't done yet; see the sketch after the snippet!):

import random
from collections import defaultdict

from datasets import Dataset
from tqdm import tqdm

def deduplicate(dataset, key_column):
    # Group rows by the value of the key column (e.g. the English sentence).
    grouped_data = defaultdict(list)
    for row in tqdm(dataset):
        grouped_data[row[key_column]].append(row)

    # Keep one randomly chosen row per unique key.
    deduplicated_data = []
    for key in grouped_data:
        deduplicated_data.append(random.choice(grouped_data[key]))

    # Rebuild a Dataset with the original column layout.
    return Dataset.from_dict({key: [item[key] for item in deduplicated_data] for key in dataset.column_names})

deduplicated_dataset = deduplicate(original_dataset, 'english')
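And here is a minimal sketch of the over-sampling idea mentioned above (the helper and its arguments are hypothetical): upsample each per-language deduplicated subset, with replacement, to roughly the size of the largest one before concatenating.

import random
from datasets import concatenate_datasets

def oversample(datasets_by_lang, target_size=None):
    # Upsample each per-language dataset (with replacement) so every language
    # contributes roughly the same number of pairs.
    target_size = target_size or max(len(d) for d in datasets_by_lang.values())
    upsampled = []
    for lang, d in datasets_by_lang.items():
        indices = [random.randrange(len(d)) for _ in range(target_size)]
        upsampled.append(d.select(indices))
    return concatenate_datasets(upsampled)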

I haven't trained any language models with them, unfortunately. I created these datasets mostly to explore a fun idea: I am training vision-only embedding models by rendering these datasets to images! The first-phase exploration resulted in Pixel-Linguist-v0. With these new datasets, I am training a class of new vision models at the moment and will share results with you on Slack if interesting results come out! The biggest motivation for deduplicating these parallel datasets was the need for a huge batch size (>10,000) to make the vision models work better in my project.
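In case the rendering part sounds mysterious: conceptually it is just drawing each sentence onto a small fixed-size canvas and feeding the pixels to the vision encoder. A toy sketch with Pillow (the canvas size, font, and font path are assumptions; the actual renderer is more involved):

from PIL import Image, ImageDraw, ImageFont

def render_text(text, width=512, height=16, font_path="DejaVuSans.ttf", font_size=10):
    # Draw the sentence in black on a white grayscale canvas.
    image = Image.new("L", (width, height), color=255)
    draw = ImageDraw.Draw(image)
    font = ImageFont.truetype(font_path, font_size)
    draw.text((2, 2), text, fill=0, font=font)
    return image

render_text("Hello, world!").save("example.png")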

Also, I'll find time to add documentation for these datasets that links back to yours!

Chenghao
