---
license: cc0-1.0
task_categories:
- sentence-similarity
language:
- en
pretty_name: Movie descriptors for Semantic Search
size_categories:
- 10K<n<100K
tags:
- movies
- embeddings
- semantic search
- films
- hpi
- workshop
---
# Dataset Card

This dataset is a subset of Kaggle's The Movies Dataset, containing only the name, release year, and overview of a selection of movies from the original data.
It is intended as a toy dataset for learning about embeddings in a workshop by the AI Service Center Berlin-Brandenburg at the Hasso Plattner Institute.

A bigger version of this dataset is available [here](https://huggingface.co/datasets/mt0rm0/movie_descriptors).

## Dataset Details

### Dataset Description

The dataset has 28,655 rows and 3 columns:

- 'name': the title of the movie
- 'release_year': the year the movie was released
- 'overview': a brief description of the movie, used for advertisement.

The source dataset was filtered to keep only movies with complete metadata in the required fields, a vote average of at least 6, more than 100 votes, and a revenue over 2 million dollars.
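
For a quick sanity check, the processed file can be loaded directly with pandas. This is a minimal sketch, assuming the <kbd>descriptors_data.parquet</kbd> file produced by the processing code shown further below:

```python
import pandas as pd

# load the processed dataset (file produced by the processing code below)
df = pd.read_parquet("descriptors_data.parquet")

print(df.shape)             # expected: (28655, 3)
print(df.columns.tolist())  # expected: ['name', 'release_year', 'overview']
print(df.head(3))
```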
 
**Curated by:** [Mario Tormo Romero](https://huggingface.co/mt0rm0)
  
**Language(s) (NLP):** English

**License:** cc0-1.0
  
### Dataset Sources
This dataset is a subset of Kaggle's [The Movies Dataset](https://www.kaggle.com/datasets/rounakbanik/the-movies-dataset).
We used only the <kbd>movies_metadata.csv</kbd> file, extracted some features (see Dataset Description), and dropped the rows that were not complete.

The original dataset has a cc0-1.0 license, which we have maintained.

## Uses

This is a toy dataset created for pedagogical purposes. It is used in the **Working with embeddings** workshop created and organized by the [AI Service Center Berlin-Brandenburg](https://hpi.de/kisz/) at the [Hasso Plattner Institute](https://hpi.de/).
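
To illustrate this use, here is a minimal sketch of the kind of exercise such a workshop covers: embedding the movie overviews with the `sentence-transformers` library and running a simple semantic search over them. The model name `all-MiniLM-L6-v2` and the query are illustrative assumptions, not necessarily what the workshop uses.

```python
import pandas as pd
from sentence_transformers import SentenceTransformer, util

# load the dataset and a general-purpose sentence embedding model
df = pd.read_parquet("descriptors_data.parquet")
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model, for illustration

# embed every movie overview once (the slow step)
embeddings = model.encode(df["overview"].tolist(), show_progress_bar=True)

# embed a free-text query and rank all movies by cosine similarity
query = "a heist that goes wrong"
query_embedding = model.encode(query)
scores = util.cos_sim(query_embedding, embeddings)[0]

# show the five best matches
top = scores.argsort(descending=True)[:5]
print(df.iloc[top.tolist()][["name", "release_year"]])
```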

## Dataset Creation

### Curation Rationale

With this dataset we want to provide a fast way of obtaining the data required for our workshops, without having to download huge datasets that contain far more information than we need.

### Source Data

Our source is Kaggle's The Movies Dataset, so the underlying information comes from the MovieLens Dataset. It consists of movies released on or before July 2017.

#### Data Collection and Processing

The data was downloaded from [Kaggle](https://www.kaggle.com/datasets/rounakbanik/the-movies-dataset) as a zip file. The file <kbd>movies_metadata.csv</kbd> was then extracted.
The data was processed with the following code:
```python
import pandas as pd

# load the csv file
df = pd.read_csv("movies_metadata.csv", low_memory=False)

# filter movies according to:
# - vote average of at least 6
# - more than 100 votes
# - revenue over 2M$
df = df.loc[(df.vote_average >= 6) & (df.vote_count > 100) & (df.revenue > 2e6)]

# select the required columns, drop rows with missing values and
# reset the index
df = df.loc[:, ['title', 'release_date', 'overview']]
df = df.dropna(axis=0).reset_index(drop=True)

# make a new column with the release year
df.loc[:, 'release_year'] = pd.to_datetime(df.release_date).dt.year

# rename 'title' to 'name' so the column matches the schema described
# above, and select the columns in the desired order
df = df.rename(columns={'title': 'name'})
df = df.loc[:, ['name', 'release_year', 'overview']]

# save the data to parquet
df.to_parquet('descriptors_data.parquet')
```
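
Storing the result as parquet preserves the column dtypes and keeps the file compact, so the dataset can later be loaded back with a single `pd.read_parquet` call, as in the loading sketch above.
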
#### Who are the source data producers?
The source dataset is an ensemble of data collected by [Rounak Banik](https://www.kaggle.com/rounakbanik) from TMDB and GroupLens.
In particular, the movie metadata was collected from the TMDB Open API; note that the source dataset is not endorsed or certified by TMDb.