|
--- |
|
license: mit |
|
tags: |
|
- baseball |
|
- sports-analytics |
|
pretty_name: Don't scrape statcast data anymore |
|
--- |
|
|
|
# statcast-era-pitches |
|
|
|
This dataset contains every pitch thrown in Major League Baseball from 2015 through the present, and it is updated weekly.
|
|
|
## Why |
|
|
|
This data is available through the pybaseball and baseballr packages, but re-scraping Statcast pitch-level data every time you need to re-run your code is time consuming. This dataset solves that problem by using GitHub Actions to update itself each week throughout the baseball season. Reading the dataset directly from the Hugging Face URL is much faster than re-scraping the data each time.
|
|
|
## Usage
|
|
|
### With statcast_pitches package |
|
|
|
```bash |
|
pip install git+https://github.com/Jensen-holm/statcast-era-pitches.git |
|
``` |
|
|
|
**Example 1 w/ Polars (suggested)**
|
```python |
|
import statcast_pitches |
|
import polars as pl |
|
|
|
# load all pitches from 2015-present |
|
pitches_lf = statcast_pitches.load() |
|
|
|
# filter to get 2024 bat speed data |
|
bat_speed_24_df = (pitches_lf
    .filter(pl.col("game_date").dt.year() == 2024)
    .select("bat_speed", "swing_length")
    .collect())
|
``` |
|
|
|
**Notes** |
|
- Because `statcast_pitches.load()` returns a LazyFrame, the data loads much faster, and operations can be applied to it before 'collecting' it into memory. If it were loaded eagerly as a DataFrame, this code would take roughly 30-60 seconds to run; instead it finishes in about 2-8 seconds (see the timing sketch below).
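
A rough way to see the difference is to time the lazy query against collecting everything eagerly first. This is a minimal sketch using only the `statcast_pitches.load()` API shown above; exact timings depend on your connection and hardware:

```python
import time

import polars as pl
import statcast_pitches

# lazy: the year filter and column selection are pushed down before collection
start = time.perf_counter()
lazy_result = (
    statcast_pitches.load()
    .filter(pl.col("game_date").dt.year() == 2024)
    .select("bat_speed", "swing_length")
    .collect()
)
print(f"lazy query: {time.perf_counter() - start:.1f}s")

# eager: materialize every pitch since 2015 first, then filter in memory
start = time.perf_counter()
eager_result = statcast_pitches.load().collect().filter(
    pl.col("game_date").dt.year() == 2024
)
print(f"collect everything first: {time.perf_counter() - start:.1f}s")
```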
|
|
|
**Example 2 w/ DuckDB**
|
```python |
|
import statcast_pitches |
|
|
|
# get bat tracking data from 2024 |
|
params = ("2024",) |
|
query_2024_bat_speed = """
    SELECT bat_speed, swing_length
    FROM pitches
    WHERE
        YEAR(game_date) = ?
        AND bat_speed IS NOT NULL;
"""
|
|
|
if __name__ == "__main__": |
|
    bat_speed_24_df = statcast_pitches.load(
        query=query_2024_bat_speed,
        params=params,
    ).collect()

    print(bat_speed_24_df.head(3))
|
``` |
|
|
|
output: |
|
| | bat_speed | swing_length | |
|
|-|------------|--------------| |
|
| 0 | 73.61710 | 6.92448 | |
|
| 1 | 58.63812 | 7.56904 | |
|
| 2 | 71.71226 | 6.46088 | |
|
|
|
**Notes**: |
|
- If no query is specified, all data from 2015 to the present is loaded (as a LazyFrame).

- The table in your query MUST be called `pitches`, or the query will fail.

- Since `load()` returns a LazyFrame, note that `pl.LazyFrame.collect()` has to be called before `head()`.

- This approach is slower than the pure Polars one in Example 1, but sometimes using SQL is fun. Another parameterized query is sketched below.
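
The same pattern works for other parameterized queries, as long as the table is referenced as `pitches`. A sketch that counts pitches per pitch type for 2023 (`pitch_type` is a standard Statcast column; swap in whatever fields your analysis needs):

```python
import statcast_pitches

# count pitches by pitch type for a single season
params = ("2023",)
pitch_mix_query = """
    SELECT pitch_type, COUNT(*) AS n_pitches
    FROM pitches
    WHERE YEAR(game_date) = ?
    GROUP BY pitch_type
    ORDER BY n_pitches DESC;
"""

if __name__ == "__main__":
    pitch_mix_23_df = statcast_pitches.load(
        query=pitch_mix_query,
        params=params,
    ).collect()

    print(pitch_mix_23_df.head(5))
```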
|
|
|
### With HuggingFace API |
|
|
|
***Pandas*** |
|
|
|
```python |
|
import pandas as pd |
|
|
|
df = pd.read_parquet("hf://datasets/Jensen-holm/statcast-era-pitches/data/statcast_era_pitches.parquet") |
|
``` |
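
Reading `hf://` paths with pandas requires `fsspec` and `huggingface_hub` to be installed. If you only need a few columns, passing `columns=` keeps the download and memory footprint down; a minimal sketch (column names taken from the examples above):

```python
import pandas as pd

# only pull the columns you actually need instead of the full table
df = pd.read_parquet(
    "hf://datasets/Jensen-holm/statcast-era-pitches/data/statcast_era_pitches.parquet",
    columns=["game_date", "bat_speed", "swing_length"],
)
print(df.head())
```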
|
|
|
***Polars*** |
|
|
|
```python |
|
import polars as pl |
|
|
|
df = pl.read_parquet('hf://datasets/Jensen-holm/statcast-era-pitches/data/statcast_era_pitches.parquet') |
|
``` |
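
Recent Polars versions can also scan the same path lazily with `pl.scan_parquet`, so filters and column selections are pushed down instead of downloading the whole table; a minimal sketch:

```python
import polars as pl

URL = "hf://datasets/Jensen-holm/statcast-era-pitches/data/statcast_era_pitches.parquet"

# lazily scan the remote parquet file and only collect what the query needs
bat_speed_24_df = (
    pl.scan_parquet(URL)
    .filter(pl.col("game_date").dt.year() == 2024)
    .select("bat_speed", "swing_length")
    .collect()
)
```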
|
|
|
***DuckDB***
|
|
|
```sql |
|
SELECT * |
|
FROM 'hf://datasets/Jensen-holm/statcast-era-pitches/data/statcast_era_pitches.parquet'; |
|
``` |
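
The same query can be run from Python with the `duckdb` package; a sketch assuming a DuckDB version whose `httpfs` extension understands `hf://` paths:

```python
import duckdb

# query the remote parquet file in place and pull the result into pandas
bat_speed_24_df = duckdb.sql("""
    SELECT bat_speed, swing_length
    FROM 'hf://datasets/Jensen-holm/statcast-era-pitches/data/statcast_era_pitches.parquet'
    WHERE YEAR(game_date) = 2024
      AND bat_speed IS NOT NULL;
""").df()

print(bat_speed_24_df.head(3))
```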
|
|
|
***HuggingFace Dataset*** |
|
|
|
```python |
|
from datasets import load_dataset |
|
|
|
ds = load_dataset("Jensen-holm/statcast-era-pitches") |
|
``` |
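
`load_dataset` downloads and caches the full table; a sketch of converting it to pandas afterwards, assuming the data lands in the default `"train"` split:

```python
from datasets import load_dataset

ds = load_dataset("Jensen-holm/statcast-era-pitches")

# convert the (assumed) default split to a pandas DataFrame
pitches_df = ds["train"].to_pandas()
print(pitches_df.shape)
```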
|
|
|
***Tidyverse*** |
|
```r |
|
library(arrow)  # read_parquet() comes from the arrow package, not tidyverse itself

library(tidyverse)
|
|
|
statcast_pitches <- read_parquet( |
|
"https://huggingface.co/datasets/Jensen-holm/statcast-era-pitches/resolve/main/data/statcast_era_pitches.parquet" |
|
) |
|
``` |
|
|
|
See the [dataset](https://huggingface.co/datasets/Jensen-holm/statcast-era-pitches) on Hugging Face itself for more details.
|
|
|
## Eager Benchmarking |
|
|
|
![dataset_load_times](dataset_load_times.png) |
|
|
|
| Eager Load Time (s) | API | |
|
|---------------|-----| |
|
| 1421.103 | pybaseball | |
|
| 26.899 | polars | |
|
| 33.093 | pandas | |
|
| 68.692 | duckdb | |
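
These numbers measure eagerly loading the entire table with each API, compared against re-scraping with pybaseball. A minimal sketch of how the Polars row could be reproduced (exact timings depend on hardware and network speed):

```python
import time

import polars as pl

URL = "hf://datasets/Jensen-holm/statcast-era-pitches/data/statcast_era_pitches.parquet"

start = time.perf_counter()
df = pl.read_parquet(URL)  # eager read: downloads and materializes every row
print(f"polars eager load: {time.perf_counter() - start:.3f}s, {df.height:,} rows")
```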
|
|
|
|
|
|
|
|
|
|
|