arxiv:2311.11045

Orca 2: Teaching Small Language Models How to Reason

Published on Nov 18, 2023
· Submitted by akhaliq on Nov 21, 2023
#2 Paper of the day

Abstract

Orca 1 learns from rich signals, such as explanation traces, allowing it to outperform conventional instruction-tuned models on benchmarks like BigBench Hard and AGIEval. In Orca 2, we continue exploring how improved training signals can enhance smaller LMs' reasoning abilities. Research on training small LMs has often relied on imitation learning to replicate the output of more capable models. We contend that excessive emphasis on imitation may restrict the potential of smaller models. We seek to teach small LMs to employ different solution strategies for different tasks, potentially different from the one used by the larger model. For example, while larger models might provide a direct answer to a complex task, smaller models may not have the same capacity. In Orca 2, we teach the model various reasoning techniques (step-by-step, recall then generate, recall-reason-generate, direct answer, etc.). More crucially, we aim to help the model learn to determine the most effective solution strategy for each task. We evaluate Orca 2 using a comprehensive set of 15 diverse benchmarks (corresponding to approximately 100 tasks and over 36,000 unique prompts). Orca 2 significantly surpasses models of similar size and attains performance levels similar to or better than those of models 5-10x larger, as assessed on complex tasks that test advanced reasoning abilities in zero-shot settings. We open-source Orca 2 to encourage further research on the development, evaluation, and alignment of smaller LMs.
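
A rough sketch of the core idea in the abstract: each task is paired with one of several reasoning strategies, expressed as a strategy-specific system instruction under which the teacher's response is collected. The strategy names below come from the abstract, but the instruction wording and helper function are illustrative assumptions, not the paper's actual prompts or pipeline.

```python
# Hypothetical sketch of strategy-conditioned training data construction.
# Strategy names follow the abstract; the exact system instructions and the
# selection rule are assumptions for illustration only.

STRATEGY_INSTRUCTIONS = {
    "step_by_step": "Think through the problem step by step before giving the final answer.",
    "recall_then_generate": "First recall the relevant facts, then compose the answer.",
    "recall_reason_generate": "Recall relevant facts, reason over them, then generate the answer.",
    "direct_answer": "Answer directly and concisely without showing intermediate reasoning.",
}

def build_training_example(task_prompt: str, strategy: str, teacher_response: str) -> dict:
    """Pair a task with the system instruction for the chosen strategy and the
    teacher model's response produced under that instruction."""
    return {
        "system": STRATEGY_INSTRUCTIONS[strategy],
        "user": task_prompt,
        "assistant": teacher_response,
    }

# The student model is then fine-tuned on such triples so it learns which style
# of reasoning tends to work for which kind of task.
example = build_training_example(
    task_prompt="If a train travels 60 km in 45 minutes, what is its average speed in km/h?",
    strategy="step_by_step",
    teacher_response="45 minutes is 0.75 hours, so the speed is 60 / 0.75 = 80 km/h.",
)
```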

Community

This method of adding reasoning to LLMs is DEEPLY flawed because it doesn't distinguish between correlation and causation (C&C). To do so, one requires causal DAGs, and I see none of those in this paper. There is a term for when you equate C&C: superstition. This AI will be superstitious. IMHO, superstitions are even more dangerous than hallucinations. Religion is a superstition that has been used as an excuse for war since time immemorial. I expand on this idea in the following essay: https://qbnets.wordpress.com/2023/10/30/yann-lecun-the-godfather-of-superstitious-ai/

This method of adding reasoning to LLMs is DEEPLY flawed because it doesn't...

All research is flawed. Until it isn't. It's like shining a torch into a dark room. If anyone is superstitious then it's you, because you already know what's in the dark room, without looking.


The paper does not answer the question that matters most to me: what are the reasoning strategies and their related system instructions for each sub-task, and how is the strategy selected for each clustered sub-task? Manually, or through prompts leveraging OpenAI?

If they did the main task by hand, then this paper is not insightful at all.

Exactly! Where is the complete list of strategies along with their system instructions? It's really strange that these were left out when they seem to be the cornerstone of the paper!
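
For what it's worth, here is a minimal sketch of the "through some prompts" option discussed above, i.e. asking a stronger model to assign a strategy per task cluster. This is purely an assumption for illustration, not the paper's documented procedure; the model name, prompt wording, and fallback choice are placeholders.

```python
# Hypothetical illustration of picking a reasoning strategy per task cluster by
# prompting a stronger model. NOT taken from the paper.
from openai import OpenAI

STRATEGIES = ["step_by_step", "recall_then_generate", "recall_reason_generate", "direct_answer"]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def pick_strategy(sample_task: str) -> str:
    """Ask a stronger model which strategy a small model should use for tasks like this one."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You assign one reasoning strategy to a task. "
                                          f"Reply with exactly one of: {', '.join(STRATEGIES)}."},
            {"role": "user", "content": sample_task},
        ],
        temperature=0,
    )
    choice = response.choices[0].message.content.strip()
    return choice if choice in STRATEGIES else "step_by_step"  # fall back to a default

# e.g. pick_strategy("Summarize the plot of Hamlet in two sentences.") might return "direct_answer"
```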

Orca 2: Enhancing Small Language Models' Reasoning Skills

Links 🔗:

👉 Subscribe: https://www.youtube.com/@Arxflix
👉 Twitter: https://x.com/arxflix
👉 LMNT (Partner): https://lmnt.com/

By Arxflix


Models citing this paper 23

Datasets citing this paper 3

Spaces citing this paper 75

Collections including this paper 47