
MultiModal-Mind2Web~ (MM-Mind2Web~)

rabbit inc.
[Leaderboard & Blogpost to be released]
Configuration: test split, snapshot with seed 42, 20 distractors

Multimodal-Mind2Web is a dataset proposed by Zheng et al. It is designed for the development and evaluation of generalist web agents and contains action trajectories of humans performing tasks on real websites.

We've simplified the raw dumps from both Multimodal-Mind2Web and Mind2Web into sequences of observation-action pairs, and we've adapted prompting and DOM-encoding techniques from SeeAct. This lets us reformulate action generation, localization / element grounding (terminology used in the large action model, or LAM), and reasoning of action / action grounding (also LAM terminology) as a straightforward text-generation and multiple-choice problem. This simplification makes the dataset viable as a generic evaluation for a vision language model (VLM). The dataset includes prompts (prompt_0, prompt_1) in a chat format, which makes it easier to use a VLM for evaluation and lowers the implementation barrier common in evaluation frameworks for computer-using agents.

We're currently evaluating state-of-the-art models on the dataset and are gradually providing access to a more comprehensive Gym-compatible evaluation environment. This environment will allow both offline and online evaluation of agents, offering structural and fundamental improvements over existing benchmarks like MultiModal-Mind2Web. We will share our findings and release the full leaderboard in a blog post on https://engineering.rabbit.tech/ soon.

Preliminary Evaluation Results

  • Operation token F1 is computed over cl100k_base tokens. We lower-case the text before comparison, regardless of what the VLM outputs; a sketch of the computation follows the table below.
  • Raw VLM outputs are parsed in a fashion similar to SeeAct, which we will explain in more detail in the blog post.
  • For all metrics, higher is better.
| Model                      | Step Success Rate | Task Success Rate | Operation Token F1 | Element Accuracy |
|----------------------------|-------------------|-------------------|--------------------|------------------|
| claude-3-5-sonnet-20240620 | 0.3847            | 0.0352            | 0.8104             | 0.5005           |
| gemini-1.5-flash-001       | 0.3203            | 0.0300            | 0.7764             | 0.3861           |
| claude-3-opus-20240229     | 0.3048            | 0.0141            | 0.8048             | 0.3720           |
| claude-3-sonnet-20240229   | 0.2770            | 0.0282            | 0.7241             | 0.3528           |
| gpt-4o                     | 0.2702            | 0.0211            | 0.6239             | 0.3602           |
| gemini-1.5-pro-001         | 0.2191            | 0.0000            | 0.7151             | 0.3453           |
| claude-3-haiku-20240307    | 0.2068            | 0.0000            | 0.7835             | 0.2577           |
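
As a rough sketch of how such a token-level F1 can be computed, assuming a bag-of-tokens formulation over cl100k_base via the tiktoken library (the exact leaderboard implementation will be described in the blog post):

import tiktoken
from collections import Counter

enc = tiktoken.get_encoding("cl100k_base")

def operation_token_f1(prediction: str, reference: str) -> float:
    # Lower-case both strings before tokenizing, matching the preprocessing above.
    pred = Counter(enc.encode(prediction.lower()))
    ref = Counter(enc.encode(reference.lower()))
    overlap = sum((pred & ref).values())  # multiset intersection of token counts
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)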

Dataset Structure

  • task_id (str): unique id for each task, equivalent to annotation_id in MultiModal-Mind2Web.
  • split (str): dataset split, one of test_website, test_task, or test_domain; equivalent to the split in MultiModal-Mind2Web.
  • step (int64): the zero-based index of this action within the trajectory it was recorded in. Equivalent to target_action_index in MultiModal-Mind2Web.
  • task_description (str): description of the task representing user intent, equivalent to confirmed_task in MultiModal-Mind2Web.
  • prompt_0 (str): chat-formatted prompt used to generate the action description. Contains image input.
  • prompt_1 (str): chat-formatted prompt used to perform action and element grounding, supplied in conjunction with prompt_0 and the output of a previous VLM invocation.
  • raw_html (str): raw html of the page before the action is performed, consistent with the raw Mind2Web dump.
  • cleaned_html (str): sanitized html of the page before the action is performed, similar to cleaned_html in MultiModal-Mind2Web.
  • candidates (sequence[str]): sanitized HTML representations of sampled candidate salient DOM elements in this particular snapshot. One element belongs to pos_candidates and the rest belong to neg_candidates in MultiModal-Mind2Web.
  • target_elements (sequence[str]): sanitized HTML representations of the viable DOM elements on the webpage that the action is performed on. All elements can be found in pos_candidates in MultiModal-Mind2Web.
  • target_op (str): the operation that should be performed; must be one of CLICK, TYPE, or SELECT. Equivalent to operation.op in MultiModal-Mind2Web.
  • target_op_value (str): the argument supplied to the operation that should be performed. May be empty; equivalent to operation.value in MultiModal-Mind2Web.
  • website (str): website name, equivalent to website in MultiModal-Mind2Web.
  • domain (str): website domain, equivalent to domain in MultiModal-Mind2Web.
  • subdomain (str): website subdomain, equivalent to subdomain in MultiModal-Mind2Web.
  • is_valid (bool): whether this row is valid for evaluation. Rows with is_valid = False must be excluded when calculating average step-wise performance or task- and trajectory-level performance. A row is invalid if it has an empty screenshot or no positive element in the sanitized html; a minimal loading-and-filtering sketch follows this list.
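
As a minimal sketch of loading the snapshot and applying the is_valid filter with the datasets library (the repository id and split name below are placeholders, not the actual path of this dataset):

from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub path of this dataset.
ds = load_dataset("rabbit-inc/mm-mind2web-snapshot", split="test")

# Rows with is_valid == False must not count toward any metric.
valid = ds.filter(lambda row: row["is_valid"])

# Restrict to one of the three Mind2Web test splits via the `split` column.
test_domain = valid.filter(lambda row: row["split"] == "test_domain")
print(f"{len(test_domain)} valid test_domain steps out of {len(ds)} rows")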

Improvements from MultiModal-Mind2Web

  1. For all test splits, raw_html is not available in the original Multimodal-Mind2Web dataset uploaded to HuggingFace; the values in that column simply duplicate cleaned_html. We re-associated each action with the raw HTML from the original Mind2Web dump to overcome this.
  2. For all test splits, 11 rows have no screenshot in the original Multimodal-Mind2Web dataset uploaded to HuggingFace. This makes any agent that uses screenshots as part of its action-generation routine fail, affecting both step-level and task-level metrics. We have labeled these rows with is_valid = False to flag them for evaluators while maintaining the completeness of the action trajectories.
  3. For all test splits, 761 rows have no ground-truth element in cleaned_html in the original Multimodal-Mind2Web dataset uploaded to HuggingFace. This makes any agent fail during element grounding, affecting both step-level and task-level metrics. We have labeled these rows with is_valid = False as well, again maintaining the completeness of the action trajectories.
  4. We have also simplified the sanitized representation of DOM elements, for example shortening backend_node_id into bnid and preserving more structure in the candidate tree representation (a toy illustration of the renaming follows this list). We will explain our implementation in more detail in the blog post and provide a detailed example comparing MultiModal-Mind2Web's representation with ours.
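
As a toy illustration of the attribute shortening mentioned in item 4 (a string-level rewrite for exposition only, not our actual sanitization pipeline):

import re

def shorten_ids(html: str) -> str:
    # Rewrite verbose backend_node_id="123" attributes into the compact bnid form.
    return re.sub(r'backend_node_id="(\d+)"', r'bnid=\1', html)

print(shorten_ids('<a backend_node_id="36929">Fox Terrier (Smooth)</a>'))
# <a bnid=36929>Fox Terrier (Smooth)</a>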

Assumptions and Problem Definition

A common subroutine of web agents (MindAct, SeeAct, LAM) is a retriever that identifies salient DOM elements relevant to the action. This localization/element grounding can be reframed as a multiple-choice/re-ranking problem where the VLM must choose an applicable candidate for the action. Since this subroutine is not a universal component of a computer-using agent and is beyond the scope of evaluating a generic VLM's agent-related capabilities, MultiModal-Mind2Web~ assumes the existence of a strong ranker.

Given a distractor parameter k (in this case, 20), we sample k candidates from the negative pool (provided by the heuristic in MultiModal-Mind2Web) and randomly select a ground truth element from the positive pool to construct the scrambled list of candidates available to the VLM. This simulates the existence of a ranker with a nonzero precision at k+1 (P@k+1 > 0). Randomness is controlled through seeding so that the same sets of elements are always selected and appear in the same positions in the scrambled list. All snapshot datasets released by us are seeded with 42.
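
The sampling procedure can be sketched as follows; the per-row derivation of the random state from the global seed is our assumption, but the released snapshots fix the seed to 42:

import random

def sample_candidates(pos_candidates, neg_candidates, k=20, seed=42):
    # A deterministic RNG guarantees the same elements land in the same
    # positions for every consumer of the snapshot.
    rng = random.Random(seed)
    distractors = rng.sample(neg_candidates, k)
    target = rng.choice(pos_candidates)  # rows with no positives are marked is_valid = False
    candidates = distractors + [target]
    rng.shuffle(candidates)  # scramble so the target's position carries no signal
    return candidates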

A snapshot with 10 distractors makes a stronger assumption, namely a more powerful retriever with nonzero P@11, than a snapshot with 30 distractors (P@31 > 0). This treatment keeps MultiModal-Mind2Web~ an accessible, generic benchmark for VLMs that requires no complex, stateful setup. The distractor count also directly affects the context length required of the VLM and the difficulty of the benchmark as a test of in-context learning.

Agent evaluations, whether offline or online, are always dynamic. We have internally built a generic environment that enables candidate sampling as well as simulation of various online environments for evaluating agents. This dataset is taken from a particular episode, hence the name "snapshot".

Usage as a generic VLM eval

MultiModal-Mind2Web~ can be used as a generic VLM eval to assess various aspects of grounded UI understanding and planning, and can be run alongside existing generalized benchmarks like MMMU. Below is an example implementation of a baseline gpt-4o agent that uses the dataset over two rounds of action generation and grounding:

from openai import OpenAI

client = OpenAI()

def deduce_action(prompt_0, prompt_1):
    action_prompt = prompt_0
    grounding_prompt = prompt_1

    # Round 1: generate a natural-language description of the next action
    # from the chat-formatted prompt (which includes the screenshot).
    resp1 = client.chat.completions.create(
        model="gpt-4o",
        messages=action_prompt,
        max_tokens=500,
        temperature=0,
    )
    response = resp1.choices[0].message.content

    # Round 2: append the round-1 output as an assistant turn, then ask the
    # model to ground the action to a candidate element and an operation.
    grounding_prompt = (
        action_prompt
        + [
            {
                "role": "assistant",
                "content": [{"type": "text", "text": f"\n\n{response}"}],
            },
        ]
        + grounding_prompt
    )

    resp2 = client.chat.completions.create(
        model="gpt-4o",
        messages=grounding_prompt,
        max_tokens=500,
        temperature=0,
    )

    final_response = resp2.choices[0].message.content
    return final_response

Here, prompt_0 and prompt_1 correspond to the column values in the files, and final_response can be either parsed directly or evaluated against the target values target_elements, target_op, and target_op_value via a VQA model.
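
For instance, assuming the test_domain subset from the loading sketch above (and that prompt_0 and prompt_1 deserialize to lists of chat messages), a single step can be run end to end:

row = test_domain[0]  # one observation-action pair from the snapshot
final_response = deduce_action(row["prompt_0"], row["prompt_1"])

# Compare the (parsed) response against the ground truth for this step.
print(final_response)
print(row["target_op"], row["target_op_value"], row["target_elements"])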
