arxiv:2402.14083

Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping

Published on Feb 21
· Submitted by akhaliq on Feb 23
#2 Paper of the day

Abstract

While Transformers have enabled tremendous progress in various application settings, such architectures still lag behind traditional symbolic planners for solving complex decision-making tasks. In this work, we demonstrate how to train Transformers to solve complex planning tasks and present Searchformer, a Transformer model that optimally solves previously unseen Sokoban puzzles 93.7% of the time, while using up to 26.8% fewer search steps than standard A* search. Searchformer is an encoder-decoder Transformer model trained to predict the search dynamics of A*. This model is then fine-tuned via expert iterations to perform fewer search steps than A* search while still generating an optimal plan. In our training method, A*'s search dynamics are expressed as a token sequence outlining when task states are added to and removed from the search tree during symbolic planning. In our ablation studies on maze navigation, we find that Searchformer significantly outperforms baselines that predict the optimal plan directly, with a 5-10× smaller model size and a 10× smaller training dataset. We also demonstrate how Searchformer scales to larger and more complex decision-making tasks like Sokoban, with an improved percentage of solved tasks and shortened search dynamics.
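
To make the search-dynamics tokenization concrete, here is a minimal sketch of how an A* run can be serialized into the kind of token sequence the abstract describes. The toy grid world, the "create"/"close" token names, and the cost/heuristic encoding are illustrative assumptions, not the paper's exact vocabulary or implementation.

```python
# Minimal sketch: log A* search dynamics as a token sequence, in the spirit
# of the paper's training data. Token names ("create", "close", "plan") and
# the grid-world encoding are illustrative assumptions.
import heapq

def astar_trace(grid, start, goal):
    """Run A* on a 4-connected grid of 0 (free) / 1 (wall) cells and return
    (optimal_plan, trace). The trace records each time a node is added to
    the frontier ("create") or expanded ("close")."""
    def h(p):  # Manhattan-distance heuristic (consistent for unit-cost grids)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]      # (f, g, node, path)
    trace = [("create", start, 0, h(start))]
    closed = {}                                     # node -> best g at close
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node in closed and closed[node] <= g:    # lazy duplicate removal
            continue
        closed[node] = g
        trace.append(("close", node, g, h(node)))
        if node == goal:
            return path, trace
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < len(grid) and 0 <= ny < len(grid[0])
                    and grid[nx][ny] == 0 and (nx, ny) not in closed):
                child = (nx, ny)
                heapq.heappush(frontier, (g + 1 + h(child), g + 1, child, path + [child]))
                trace.append(("create", child, g + 1, h(child)))
    return None, trace

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
plan, trace = astar_trace(grid, (0, 0), (2, 2))

# Flatten the search dynamics plus the final plan into one token sequence
# for a sequence model, e.g. "create 0 0 c0 h4 close 0 0 c0 h4 ... plan ..."
tokens = [t for op, (x, y), g, hv in trace
          for t in (op, str(x), str(y), f"c{g}", f"h{hv}")]
tokens += ["plan"] + [f"{x} {y}" for x, y in plan]
print(" ".join(tokens))
```

Given such traces, the expert-iteration stage the abstract mentions would sample sequences from the trained model and fine-tune only on those that still reach an optimal plan while using a shorter trace than A*'s.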

Community

wow

It will be interesting to compare computation time with traditional algorithms.

👍
I'd love to see more research like this -- if you can convert your solution into a sequence, then you can use a transformer for it!

Consider it done. I am going to use a conjecture I just solved relating to unifying physics to base 10 using the Fibonacci binomial conjecture. I am going to use structured output, with each neuron a specific pattern.

I created the Relative Field Unified Framework, and I need an endorsement to post my papers. I will update my GitHub datasets here, and we can all chip in in our respective fields...

We can simulate ANY process as long as it can be described using NLP. We will train and inspire TRANSPARENCY.

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face, check out this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Is the code publicly available?

Searchformer: Revolutionizing Planning with Transformers and Search Dynamics Bootstrapping

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2402.14083 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2402.14083 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2402.14083 in a Space README.md to link it from this page.

Collections including this paper 31