Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping
Abstract
While Transformers have enabled tremendous progress in various application settings, such architectures still lag behind traditional symbolic planners for solving complex decision-making tasks. In this work, we demonstrate how to train Transformers to solve complex planning tasks and present Searchformer, a Transformer model that optimally solves previously unseen Sokoban puzzles 93.7% of the time, while using up to 26.8% fewer search steps than standard A* search. Searchformer is an encoder-decoder Transformer model trained to predict the search dynamics of A*. This model is then fine-tuned via expert iterations to perform fewer search steps than A* search while still generating an optimal plan. In our training method, A*'s search dynamics are expressed as a token sequence outlining when task states are added to and removed from the search tree during symbolic planning. In our ablation studies on maze navigation, we find that Searchformer significantly outperforms baselines that predict the optimal plan directly, with a 5-10x smaller model size and a 10x smaller training dataset. We also demonstrate how Searchformer scales to larger and more complex decision-making tasks like Sokoban, with an improved percentage of solved tasks and shorter search dynamics.
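To illustrate the kind of sequence the abstract describes, the sketch below runs A* on a small grid maze and serializes its search dynamics: a "create" token when a state is added to the frontier and a "close" token when it is expanded, followed by the optimal plan. The token vocabulary, grid setup, and the function name `astar_with_trace` are illustrative assumptions, not the authors' released tokenizer or code.

```python
import heapq

def astar_with_trace(start, goal, walls, width, height):
    """Run A* on a grid and record its search dynamics as a token sequence.

    Emits an illustrative "create" token when a state enters the frontier and
    a "close" token when it is expanded, then appends the optimal plan.
    """
    def h(p):  # Manhattan-distance heuristic to the goal
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, None)]  # (f-cost, g-cost, state, parent)
    came_from, g_cost, trace, closed = {}, {start: 0}, [], set()

    while frontier:
        f, g, node, parent = heapq.heappop(frontier)
        if node in closed:
            continue
        closed.add(node)
        came_from[node] = parent
        trace += ["close", f"x{node[0]}", f"y{node[1]}", f"c{g}"]
        if node == goal:
            break
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nxt = (nx, ny)
            if not (0 <= nx < width and 0 <= ny < height) or nxt in walls:
                continue
            ng = g + 1
            if ng < g_cost.get(nxt, float("inf")):
                g_cost[nxt] = ng
                trace += ["create", f"x{nx}", f"y{ny}", f"c{ng}"]
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt, node))

    # Reconstruct the optimal plan and append it after the search trace.
    plan, node = [], goal
    while node is not None:
        plan.append(node)
        node = came_from.get(node)
    plan.reverse()
    trace += ["plan"] + [t for p in plan for t in (f"x{p[0]}", f"y{p[1]}")]
    return trace

# Example: 3x3 maze with one wall. The returned token list is the kind of
# sequence an encoder-decoder Transformer could be trained to predict.
print(astar_with_trace(start=(0, 0), goal=(2, 2), walls={(1, 1)}, width=3, height=3))
```

Training on such traces (rather than only the final plan) is what lets the fine-tuned model later shorten the search dynamics while keeping the plan optimal.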
Community
wow
It will be interesting to compare computation time with traditional algorithms.
I'd love to see more research like this -- if you can convert your solution into a sequence, then you can use a transformer for it!
Consider it done. I am going to use a conjecture I just solved relating to unifying physics to base 10 using the Fibonacci binomial conjecture. I am going to use structured output, with each neuron a specific pattern.
I created the Relative Field Unified Framework, and I need an endorsement to post my papers. I will upload my GitHub datasets here, and we can all chip in in our respective fields...
We can simulate ANY process as long as it can be described using NLP. We will train and inspire TRANSPARENCY.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Towards an Understanding of Stepwise Inference in Transformers: A Synthetic Graph Navigation Model (2024)
- GLoRe: When, Where, and How to Improve LLM Reasoning via Global and Local Refinements (2024)
- Learning Cognitive Maps from Transformer Representations for Efficient Planning in Partially Observed Environments (2024)
- A Mechanistic Analysis of a Transformer Trained on a Symbolic Multi-Step Reasoning Task (2024)
- AutoPRM: Automating Procedural Supervision for Multi-Step Reasoning via Controllable Question Decomposition (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend
Is the code publicly available?
Searchformer: Revolutionizing Planning with Transformers and Search Dynamics Bootstrapping
Links:
- Subscribe: https://www.youtube.com/@Arxflix
- Twitter: https://x.com/arxflix
- LMNT (Partner): https://lmnt.com/