When suggesting task_ids, should I ensure that the task is solvable using the dataset?

#1
by Hobson - opened

In order to add tags for task_ids and task_categories, is it necessary to train a model and confirm that better-than-random accuracy is achievable for the task? Or should I try to find an academic paper that shows the dataset being used for the suggested task_ids/task_categories?

The Enron dataset (dataset id aeslc) is only tagged with:

  • arxiv:1906.03497
  • languages:en
  • pretty_name:AESLC

Using the email subject_line field as a label or target variable, it might be appropriate to add tags for the following task_ids, in suggested order of relevance (see the sketch below the list):

  • 'task_ids:summarization'
  • 'task_ids:summarization-other-conversations-summarization'
  • 'task_ids:other-other-query-based-multi-document-summarization'
  • 'task_ids:summarization-other-aspect-based-summarization'
  • 'task_ids:summarization-other-headline-generation'

The subject might also be used for the task_category 'task_categories:summarization'.
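
As a quick sanity check on that framing, here is a minimal sketch (assuming the Hugging Face `datasets` library) showing how each example pairs an email_body with its subject_line, so the subject line can serve directly as a summarization target:

```python
from datasets import load_dataset

# Load the training split of AESLC (the annotated Enron subject-line corpus).
ds = load_dataset("aeslc", split="train")

# Each example pairs an email body with a human-written subject line;
# the subject line acts as the (extremely short) summarization target.
example = ds[0]
print(example["email_body"])    # source document
print(example["subject_line"])  # target "summary" / headline
```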

E-mail chains might be used for the task_category 'task_categories:dialogue-system'.

The paper referenced in the existing tags suggests a further tag, 'task_ids:email-subject-line-generation', and perhaps 'task_ids:summarization-abstractive' and 'task_ids:summarization-email'.
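
For reference, such tags can also be added programmatically rather than by editing the dataset card by hand. A minimal sketch, assuming a recent `huggingface_hub` release that exposes `metadata_update` (editing the YAML block at the top of the dataset's README.md achieves the same thing):

```python
from huggingface_hub import metadata_update

# Append the proposed tags to the YAML metadata of the aeslc dataset card.
# Requires write access to the repo; the exact tag values would need to
# match the Hub's current task taxonomy.
metadata_update(
    repo_id="aeslc",
    metadata={
        "task_categories": ["summarization"],
        "task_ids": ["email-subject-line-generation"],
    },
    repo_type="dataset",
)
```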

Here's the abstract:

This Email Could Save Your Life: Introducing the Task of Email Subject Line Generation
-- Rui Zhang, Joel Tetreault

Given the overwhelming number of emails, an effective subject line becomes essential to better inform the recipient of the email's content. In this paper, we propose and study the task of email subject line generation: automatically generating an email subject line from the email body. We create the first dataset for this task and find that email subject line generation favors extremely abstractive summaries, which differentiates it from news headline generation or news single-document summarization. We then develop a novel deep learning method and compare it to several baselines as well as recent state-of-the-art text summarization systems. We also investigate the efficacy of several automatic metrics based on correlations with human judgments and propose a new automatic evaluation metric. Our system outperforms competitive baselines given both automatic and human evaluations. To our knowledge, this is the first work to tackle the problem of effective email subject line generation.

Hi! Good idea to add those tags :)

You don't need to train a model or otherwise validate that a dataset can be used for a task - any task that seems reasonable, or is mentioned in a paper about the dataset, is fine.

Thank you! Will do.

Hobson changed discussion status to closed
