---
license: cdla-permissive-2.0
task_categories:
  - text-generation
  - text2text-generation
  - text-retrieval
language:
  - en
tags:
  - query-autocomplete
  - amazon
  - large-scale
  - ecommerce
  - search
  - session-based
pretty_name: AmazonQAC
size_categories:
  - 100M<n<1B
configs:
  - config_name: default
    data_files:
      - split: train
        path: train/*.parquet
      - split: test
        path: test/*.parquet
---

# AmazonQAC: A Large-Scale, Naturalistic Query Autocomplete Dataset

- **Train dataset size:** 395 million samples
- **Test dataset size:** 20k samples
- **Source:** Amazon Search logs
- **File format:** Parquet
- **Compression:** Snappy
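Since the splits ship as Parquet, they can be consumed with the Hugging Face `datasets` library. The sketch below is illustrative, not official loading code: it assumes the field names documented later in this card (`prefixes`, `final_search_term`) and uses streaming to avoid downloading all 395M rows up front; `demo()` requires `pip install datasets` and network access.

```python
# Sketch: stream AmazonQAC and expand each train row into
# (prefix, final_search_term) supervision pairs.
# Field names follow this dataset card.
from typing import Dict, Iterable, Iterator, Tuple


def prefix_pairs(rows: Iterable[Dict]) -> Iterator[Tuple[str, str]]:
    """Yield one (prefix, final_search_term) pair per typed prefix."""
    for row in rows:
        for prefix in row["prefixes"]:
            yield prefix, row["final_search_term"]


def demo() -> None:
    # Streaming keeps memory bounded; requires the `datasets` package.
    from datasets import load_dataset
    train = load_dataset("amazon/AmazonQAC", split="train", streaming=True)
    for pair in prefix_pairs(train.take(3)):
        print(pair)
```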

If you use this dataset, please cite our EMNLP 2024 paper:

```bibtex
@inproceedings{everaert-etal-2024-amazonqac,
    title = "{A}mazon{QAC}: A Large-Scale, Naturalistic Query Autocomplete Dataset",
    author = "Everaert, Dante  and
      Patki, Rohit  and
      Zheng, Tianqi  and
      Potts, Christopher",
    editor = "Dernoncourt, Franck  and
      Preo{\c{t}}iuc-Pietro, Daniel  and
      Shimorina, Anastasia",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track",
    month = nov,
    year = "2024",
    address = "Miami, Florida, US",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-industry.78",
    pages = "1046--1055",
    abstract = "Query Autocomplete (QAC) is a critical feature in modern search engines, facilitating user interaction by predicting search queries based on input prefixes. Despite its widespread adoption, the absence of large-scale, realistic datasets has hindered advancements in QAC system development. This paper addresses this gap by introducing AmazonQAC, a new QAC dataset sourced from Amazon Search logs, comprising 395M samples. The dataset includes actual sequences of user-typed prefixes leading to final search terms, as well as session IDs and timestamps that support modeling the context-dependent aspects of QAC. We assess Prefix Trees, semantic retrieval, and Large Language Models (LLMs) with and without finetuning. We find that finetuned LLMs perform best, particularly when incorporating contextual information. However, even our best system achieves only half of what we calculate is theoretically possible on our test data, which implies QAC is a challenging problem that is far from solved with existing systems. This contribution aims to stimulate further research on QAC systems to better serve user needs in diverse environments. We open-source this data on Hugging Face at https://huggingface.co/datasets/amazon/AmazonQAC.",
}
```

## Dataset Summary

AmazonQAC is a large-scale dataset designed for Query Autocomplete (QAC) tasks, sourced from real-world Amazon Search logs. It provides anonymized sequences of user-typed prefixes leading to final search terms, along with rich session metadata such as timestamps and session IDs. This dataset supports research on context-aware query completion by offering realistic, large-scale, and natural user behavior data.

QAC is a widely used feature in search engines, designed to predict users' full search queries as they type. Despite its importance, research progress has been limited by the lack of realistic datasets; AmazonQAC aims to close this gap by providing a comprehensive dataset to spur advancements in QAC systems. AmazonQAC also contains a realistic test set for benchmarking different QAC approaches: each row holds the user's past search terms, the typed prefix, and the final search term, mimicking the inputs available to a real QAC service.
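To make the task concrete, here is a toy popularity-ranked prefix completer in the spirit of the Prefix Tree baselines evaluated in the paper. This is a minimal sketch; the class and method names are illustrative and not part of the dataset or the paper's code.

```python
# Toy popularity-ranked prefix completer (illustrative baseline sketch).
from collections import defaultdict
import heapq


class PrefixCompleter:
    """Maps every prefix of every indexed term to that term; suggestions
    for a prefix are the matching terms with the highest popularity."""

    def __init__(self):
        self._index = defaultdict(dict)  # prefix -> {term: popularity}

    def add(self, term: str, popularity: int) -> None:
        for i in range(1, len(term) + 1):
            self._index[term[:i]][term] = popularity

    def suggest(self, prefix: str, k: int = 10) -> list:
        candidates = self._index.get(prefix, {})
        return heapq.nlargest(k, candidates, key=candidates.get)
```

Training rows supply both ingredients for such a baseline: `final_search_term` populates the index and `popularity` supplies the ranking signal.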

## Key Features

### Train

- **395M samples:** each sample includes the user's final search term and the sequence of prefixes they typed. Collected from US logs between 2023-09-01 and 2023-09-30.
- **Session metadata:** session IDs and timestamps for context-aware modeling.
- **Naturalistic data:** real user interactions are captured, including non-linear typing patterns and partial prefix matches.
- **Popularity information:** the popularity of each search term is included as metadata.

### Test

- **20k samples:** each sample includes a prefix and the user's final search term. Collected from US logs between 2023-10-01 and 2023-10-14 (after the train data time period).
- **Session metadata:** each sample also contains an array of the user's past search terms, for input to context-aware QAC systems.
- **Naturalistic data:** each row is a randomly sampled prefix/search term/context triple from the search logs (with no sequence of past typed prefixes), mimicking the asynchronous nature of a real-world QAC service.

## Dataset Structure

### Train

Each data entry consists of:

- `query_id` (long): a unique identifier for each row/user search.
- `session_id` (string): the user session ID.
- `prefixes` (array&lt;string&gt;): the sequence of prefixes typed by the user, in order.
- `first_prefix_typed_time` (string, `YYYY-MM-DD HH:MM:SS.sss`): the timestamp when the first prefix was typed.
- `final_search_term` (string): the final search term searched for by the user.
- `search_time` (string, `YYYY-MM-DD HH:MM:SS`): the timestamp of the final search.
- `popularity` (long): the number of occurrences of the search term before filtering.

### Test

Each data entry consists of:

- `query_id` (long): a unique identifier for each row/user search.
- `session_id` (string): the user session ID.
- `past_search_terms` (array&lt;array&lt;string&gt;&gt;): the user's past search terms, in order, each paired with its timestamp.
- `prefix` (string): the prefix typed by the user.
- `prefix_typed_time` (string, `YYYY-MM-DD HH:MM:SS.sss`): the timestamp when the prefix was typed.
- `final_search_term` (string): the final search term searched for by the user.
- `search_time` (string, `YYYY-MM-DD HH:MM:SS`): the timestamp of the final search.

## Example

### Train

```json
{
  "query_id": "12",
  "session_id": "354",
  "prefixes": ["s", "si", "sin", "sink", "sink r", "sink ra", "sink rac", "sink rack"],
  "first_prefix_typed_time": "2023-09-04T20:46:14.293Z",
  "final_search_term": "sink rack for bottom of sink",
  "search_time": "2023-09-04T20:46:27",
  "popularity": 125
}
```

### Test

```json
{
  "query_id": "23",
  "session_id": "783",
  "past_search_terms": [["transformers rise of the beast toys", "2023-10-07 13:03:54"], ["ultra magnus", "2023-10-11 11:54:44"]],
  "prefix": "transf",
  "prefix_typed_time": "2023-10-11T16:42:30.256Z",
  "final_search_term": "transformers legacy",
  "search_time": "2023-10-11 16:42:34"
}
```
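Note that two timestamp shapes appear in the examples: ISO 8601 with a trailing `Z` for the typed-time fields, and space-separated `YYYY-MM-DD HH:MM:SS` elsewhere. A small sketch of a parser that accepts both, assuming all times are UTC (the card does not state a timezone for the space-separated form):

```python
# Sketch: normalize both timestamp shapes seen in the examples.
from datetime import datetime, timezone


def parse_ts(ts: str) -> datetime:
    """Parse '2023-10-11T16:42:30.256Z' or '2023-10-11 16:42:34'.

    Assumes UTC for timestamps that carry no timezone marker.
    """
    ts = ts.replace("Z", "+00:00").replace(" ", "T", 1)
    dt = datetime.fromisoformat(ts)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt


# e.g. time between typing the prefix and issuing the search in the
# test example above (~3.7 seconds):
delay = parse_ts("2023-10-11 16:42:34") - parse_ts("2023-10-11T16:42:30.256Z")
```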

## Dataset Statistics

| Statistic | Train Set | Test Set |
|---|---|---|
| Total prefixes | 4.28B | 20K |
| Unique prefixes | 384M | 15.1K |
| Unique search terms | 40M | 16.7K |
| Unique prefix/search term pairs | 1.1B | 19.9K |
| Average prefix length | 9.5 characters | 9.2 characters |
| Average search term length | 20.0 characters | 20.3 characters |
| Searches per session | 7.3 | 10.3 |
| Train/test overlap: unique prefixes | 13.4K | 88% |
| Train/test overlap: unique search terms | 12.3K | 74% |
| Train/test overlap: unique prefix/search term pairs | 11.7K | 59% |

The overlap rows report the count and the percentage of test-set items that also appear in the train set.

## Evaluation Metrics

The dataset is evaluated using the following core metrics:

- **Success@10:** whether the correct final search term appears among the 10 suggestions the QAC system provides.
- **Reciprocal Rank@10:** 1/rank of the correct term among the 10 suggestions if it is present, otherwise 0.

The mean of each metric is computed across the test dataset.
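Both metrics are straightforward to compute per test row; a minimal sketch (function names are illustrative, not from the paper's code):

```python
# Sketch: per-row Success@k and Reciprocal Rank@k for a ranked
# suggestion list; the headline numbers are the means over the test set.
from typing import List


def success_at_k(suggestions: List[str], target: str, k: int = 10) -> float:
    """1.0 if the correct term appears in the top-k suggestions, else 0.0."""
    return 1.0 if target in suggestions[:k] else 0.0


def reciprocal_rank_at_k(suggestions: List[str], target: str, k: int = 10) -> float:
    """1/rank of the correct term within the top-k suggestions, else 0.0."""
    topk = suggestions[:k]
    return 1.0 / (topk.index(target) + 1) if target in topk else 0.0
```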

## Ethical Considerations

All data has been anonymized, and personally identifiable information (PII) has been removed using regex filters and LLM-based filters. The dataset is also restricted to search terms that appeared at least 4 times across 4 different sessions, to help ensure they are not user-specific.

The dataset is derived from U.S. Amazon search logs, so it reflects a specific cultural and linguistic context, which may not generalize to all search environments.