- Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance
  Paper • 2310.10021 • Published • 2
- Unleashing the Power of Pre-trained Language Models for Offline Reinforcement Learning
  Paper • 2310.20587 • Published • 16
- Discovering Adaptable Symbolic Algorithms from Scratch
  Paper • 2307.16890 • Published • 5
- DragAPart: Learning a Part-Level Motion Prior for Articulated Objects
  Paper • 2403.15382 • Published • 9
Collections including paper arxiv:2310.10021

- An Interdisciplinary Comparison of Sequence Modeling Methods for Next-Element Prediction
  Paper • 1811.00062 • Published • 2
- mT5: A massively multilingual pre-trained text-to-text transformer
  Paper • 2010.11934 • Published • 4
- Bootstrap Your Own Skills: Learning to Solve New Tasks with Large Language Model Guidance
  Paper • 2310.10021 • Published • 2
- Gemma: Open Models Based on Gemini Research and Technology
  Paper • 2403.08295 • Published • 47
- Measuring the Effects of Data Parallelism on Neural Network Training
  Paper • 1811.03600 • Published • 2
- Adafactor: Adaptive Learning Rates with Sublinear Memory Cost
  Paper • 1804.04235 • Published • 2
- EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks
  Paper • 1905.11946 • Published • 3
- Yi: Open Foundation Models by 01.AI
  Paper • 2403.04652 • Published • 62