---
dataset_info:
- config_name: year_2015
  features:
  - name: id
    dtype: string
  - name: score
    dtype: int64
  - name: permalink
    dtype: string
  - name: depth
    dtype: string
  - name: link_id
    dtype: string
  - name: parent_id
    dtype: string
  - name: poster
    dtype: string
  - name: content
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: new
    dtype: bool
  - name: updated
    dtype: bool
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 24597775
    num_examples: 82707
  download_size: 14199076
  dataset_size: 24597775
- config_name: year_2016
  features:
  - name: id
    dtype: string
  - name: score
    dtype: int64
  - name: permalink
    dtype: string
  - name: depth
    dtype: string
  - name: link_id
    dtype: string
  - name: parent_id
    dtype: string
  - name: poster
    dtype: string
  - name: content
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: new
    dtype: bool
  - name: updated
    dtype: bool
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 31725124
    num_examples: 115258
  download_size: 18339731
  dataset_size: 31725124
- config_name: year_2017
  features:
  - name: id
    dtype: string
  - name: score
    dtype: int64
  - name: permalink
    dtype: string
  - name: depth
    dtype: string
  - name: link_id
    dtype: string
  - name: parent_id
    dtype: string
  - name: poster
    dtype: string
  - name: content
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: new
    dtype: bool
  - name: updated
    dtype: bool
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 66619085
    num_examples: 231408
  download_size: 35903130
  dataset_size: 66619085
- config_name: year_2018
  features:
  - name: id
    dtype: string
  - name: score
    dtype: int64
  - name: permalink
    dtype: string
  - name: depth
    dtype: string
  - name: link_id
    dtype: string
  - name: parent_id
    dtype: string
  - name: poster
    dtype: string
  - name: content
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: new
    dtype: bool
  - name: updated
    dtype: bool
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 83970726
    num_examples: 264246
  download_size: 41583278
  dataset_size: 83970726
- config_name: year_2019
  features:
  - name: id
    dtype: string
  - name: score
    dtype: int64
  - name: permalink
    dtype: string
  - name: depth
    dtype: string
  - name: link_id
    dtype: string
  - name: parent_id
    dtype: string
  - name: poster
    dtype: string
  - name: content
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: new
    dtype: bool
  - name: updated
    dtype: bool
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 91572130
    num_examples: 293538
  download_size: 45149003
  dataset_size: 91572130
- config_name: year_2020
  features:
  - name: id
    dtype: string
  - name: score
    dtype: int64
  - name: permalink
    dtype: string
  - name: depth
    dtype: string
  - name: link_id
    dtype: string
  - name: parent_id
    dtype: string
  - name: poster
    dtype: string
  - name: content
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: new
    dtype: bool
  - name: updated
    dtype: bool
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 89261893
    num_examples: 277205
  download_size: 44020462
  dataset_size: 89261893
- config_name: year_2021
  features:
  - name: id
    dtype: string
  - name: score
    dtype: int64
  - name: permalink
    dtype: string
  - name: depth
    dtype: string
  - name: link_id
    dtype: string
  - name: parent_id
    dtype: string
  - name: poster
    dtype: string
  - name: content
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: new
    dtype: bool
  - name: updated
    dtype: bool
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 50670926
    num_examples: 161207
  download_size: 25272190
  dataset_size: 50670926
- config_name: year_2022
  features:
  - name: id
    dtype: string
  - name: score
    dtype: int64
  - name: permalink
    dtype: string
  - name: depth
    dtype: string
  - name: link_id
    dtype: string
  - name: parent_id
    dtype: string
  - name: poster
    dtype: string
  - name: content
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: new
    dtype: bool
  - name: updated
    dtype: bool
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 49411900
    num_examples: 157496
  download_size: 24673180
  dataset_size: 49411900
- config_name: year_2023
  features:
  - name: id
    dtype: 'null'
  - name: score
    dtype: int64
  - name: permalink
    dtype: 'null'
  - name: depth
    dtype: 'null'
  - name: link_id
    dtype: 'null'
  - name: parent_id
    dtype: 'null'
  - name: poster
    dtype: 'null'
  - name: content
    dtype: 'null'
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: 'null'
  - name: new
    dtype: bool
  - name: updated
    dtype: bool
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 0
    num_examples: 0
  download_size: 2914
  dataset_size: 0
- config_name: year_2024
  features:
  - name: content
    dtype: string
  - name: poster
    dtype: string
  - name: date_utc
    dtype: timestamp[ns]
  - name: flair
    dtype: string
  - name: ups
    dtype: int64
  - name: score
    dtype: int64
  - name: permalink
    dtype: string
  - name: depth
    dtype: int64
  - name: link_id
    dtype: string
  - name: parent_id
    dtype: string
  - name: id
    dtype: string
  - name: new
    dtype: bool
  - name: updated
    dtype: bool
  splits:
  - name: train
    num_bytes: 383429
    num_examples: 1041
  download_size: 190287
  dataset_size: 383429
configs:
- config_name: year_2015
  data_files:
  - split: train
    path: year_2015/train-*
- config_name: year_2016
  data_files:
  - split: train
    path: year_2016/train-*
- config_name: year_2017
  data_files:
  - split: train
    path: year_2017/train-*
- config_name: year_2018
  data_files:
  - split: train
    path: year_2018/train-*
- config_name: year_2019
  data_files:
  - split: train
    path: year_2019/train-*
- config_name: year_2020
  data_files:
  - split: train
    path: year_2020/train-*
- config_name: year_2021
  data_files:
  - split: train
    path: year_2021/train-*
- config_name: year_2022
  data_files:
  - split: train
    path: year_2022/train-*
- config_name: year_2023
  data_files:
  - split: train
    path: year_2023/train-*
- config_name: year_2024
  data_files:
  - split: train
    path: year_2024/train-*
---

--- Generated Part of README Below ---

## Dataset Overview

The goal is to have an open dataset of [r/uwaterloo](https://www.reddit.com/r/uwaterloo/) submissions. I'm using PRAW and the Reddit API to download the data. A single API call returns at most 1000 items and search functionality is limited, so the collection job runs hourly to pick up new submissions.
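As the configuration block above shows, each year from 2015 to 2024 is published as its own configuration with a single `train` split, so a given year can be loaded directly with the `datasets` library. A minimal sketch follows; the repository id is a placeholder for this dataset's actual id on the Hub.

```python
from datasets import load_dataset

# "<owner>/<this-dataset>" is a placeholder - substitute this repository's actual id.
# Each year is its own configuration (year_2015 ... year_2024) with a single train split.
ds = load_dataset("<owner>/<this-dataset>", "year_2022", split="train")

print(ds)                # columns: id, score, permalink, depth, ..., content, date_utc, flair, ...
print(ds[0]["content"])  # text of the first row
```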
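For illustration, here is a minimal sketch of the kind of hourly collection described above, using PRAW's `subreddit(...).new()` listing. The credentials, the field mapping, and the de-duplication against previously stored ids are assumptions for the example, not the actual code behind the Space.

```python
import praw

# Placeholder credentials for a Reddit "script" app (assumed, not the collector's real setup).
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="uwaterloo-dataset-collector",
)

# A single listing call is capped at roughly 1000 items, which is why the job
# runs hourly and only keeps rows it has not stored before.
seen_ids = set()  # assumed: loaded from the already-published dataset
new_rows = []
for submission in reddit.subreddit("uwaterloo").new(limit=1000):
    if submission.id in seen_ids:
        continue
    new_rows.append({
        "id": submission.id,
        "score": submission.score,
        "permalink": submission.permalink,
        "poster": str(submission.author),
        "content": submission.selftext,
        "date_utc": submission.created_utc,
        "flair": submission.link_flair_text,
    })
```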
## Creation Details

This dataset was created by [alvanlii/dataset-creator-reddit-uwaterloo](https://huggingface.co/spaces/alvanlii/dataset-creator-reddit-uwaterloo).

## Update Frequency

The dataset is updated hourly; the most recent update was `2024-08-29 21:00:00 UTC+0000`, when we added **1041 new rows**.

## Licensing

[Reddit Licensing terms](https://www.redditinc.com/policies/data-api-terms) as accessed on October 25:

[License information]

## Opt-out

To opt out of this dataset, please make a pull request with your justification and add your ids to filter_ids.json (a sketch of how these ids might be applied follows the list):

1. Go to [filter_ids.json](https://huggingface.co/spaces/reddit-tools-HF/dataset-creator-reddit-bestofredditorupdates/blob/main/filter_ids.json)
2. Click Edit
3. Add your ids, 1 per row
4. Comment with your justification
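For context, a hypothetical sketch of how the ids in filter_ids.json could be applied when the dataset is rebuilt; the actual file format and filtering logic in the Space may differ. This assumes the file holds a JSON array of ids and that the repository id placeholder is replaced with this dataset's actual id.

```python
import json

from datasets import load_dataset

# Assumption: filter_ids.json holds a JSON array of opted-out ids.
with open("filter_ids.json") as f:
    opted_out = set(json.load(f))

# "<owner>/<this-dataset>" is a placeholder for this repository's id.
ds = load_dataset("<owner>/<this-dataset>", "year_2022", split="train")

# Drop any row whose id was opted out before republishing.
ds = ds.filter(lambda row: row["id"] not in opted_out)
```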