id
stringlengths 10
10
| title
stringlengths 8
162
| summary
stringlengths 228
1.92k
| source
stringlengths 31
31
| authors
stringlengths 7
6.97k
| categories
stringlengths 5
107
| comment
stringlengths 4
398
⌀ | journal_ref
stringlengths 8
194
⌀ | primary_category
stringlengths 5
17
| published
stringlengths 8
8
| updated
stringlengths 8
8
| content
stringlengths 3.91k
873k
| references
dict |
---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.05481 | LLM As DBA | Database administrators (DBAs) play a crucial role in managing, maintaining
and optimizing a database system to ensure data availability, performance, and
reliability. However, it is hard and tedious for DBAs to manage a large number
of database instances (e.g., millions of instances on the cloud databases).
Recently large language models (LLMs) have shown great potential to understand
valuable documents and accordingly generate reasonable answers. Thus, we
propose D-Bot, a LLM-based database administrator that can continuously acquire
database maintenance experience from textual sources, and provide reasonable,
well-founded, in-time diagnosis and optimization advice for target databases.
This paper presents a revolutionary LLM-centric framework for database
maintenance, including (i) database maintenance knowledge detection from
documents and tools, (ii) tree of thought reasoning for root cause analysis,
and (iii) collaborative diagnosis among multiple LLMs. Our preliminary
experimental results that D-Bot can efficiently and effectively diagnose the
root causes and our code is available at
github.com/TsinghuaDatabaseGroup/DB-GPT. | http://arxiv.org/pdf/2308.05481 | Xuanhe Zhou, Guoliang Li, Zhiyuan Liu | cs.DB, cs.AI, cs.CL, cs.LG | null | null | cs.DB | 20230810 | 20230811 | 3 2 0 2 g u A 1 1 ] B D . s c [
2 v 1 8 4 5 0 . 8 0 3 2 : v i X r a
Xuanhe Zhou Tsinghua University Beijing, China [email protected]
# LLM As DBA Guoliang Li Tsinghua University Beijing, China [email protected]
Zhiyuan Liu Tsinghua University Beijing, China [email protected]
# ABSTRACT
Database administrators (DBAs) play a crucial role in managing, maintaining and optimizing a database system to ensure data avail- ability, performance, and reliability. However, it is hard and tedious for DBAs to manage a large number of database instances (e.g., mil- lions of instances on the cloud databases). Recently large language models (LLMs) have shown great potential to understand valuable documents and accordingly generate reasonable answers. Thus, we propose D-Bot, a LLM-based database administrator that can con- tinuously acquire database maintenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. This paper presents a revolutionary LLM-centric framework for database maintenance, including (ð) database maintenance knowledge detection from doc- uments and tools, (ðð) tree of thought reasoning for root cause analysis, and (ððð) collaborative diagnosis among multiple LLMs. Our preliminary experimental results that D-Bot can efficiently and effectively diagnose the root causes and our code is available at github.com/TsinghuaDatabaseGroup/DB-GPT.
H Thought: High memory usage seems to be caused by 1 poor join performance and much inactive memory ' Reasoning: Poor joins can be solved by plan optimization \ Action: optimize_query_plan <> Documents 7 5 5 S Query -=, Query f=) Query | sh Rewriter _ **" Planner & Executor J J L J C of System Configuration âJ
# Figure 1: LLM As DBA
1 INTRODUCTION Limitations of DBAs. Currently, most companies still rely on DBAs for database maintenance (DM, e.g., tuning, configuring, diagnosing, optimizing) to ensure high performance, availability and reliability of the databases. However, there is a significant gap between DBAs and DM tasks. First, it takes a long time to train a DBA. There are numerous relevant documents (e.g., administrator guides), which can span over 10,000 pages for just one database product and consumes DBAs several years to partially grasp the skills by applying in real practice. Second, it is hard to obtain enough DBAs to manage a large number of database instances, e.g. millions of instance on cloud databases. Third, a DBA may not provide in-time response in emergent cases (especially for correlated issues across multiple database modules) and cause great financial losses. Limitations of Database Tools. Many database products are equipped with semi-automatic maintenance tools to relieve the pressure of human DBAs [5, 6, 10â12]. However, they have sev- eral limitations. First, they are built by empirical rules [4, 24] or small-scale ML models (e.g., classifiers [13]), which have poor text processing capability and cannot utilize available documents to answer basic questions. Second, they cannot flexibly generalize to scenario changes. For empirical methods, it is tedious to man- ually update rules by newest versions of documents. And learned methods require costly model retraining and are not suitable for online maintenance. Third, they cannot reason the root cause of an anomaly like DBAs, such as looking up more system views based
on the initial analysis results. This capability is vital to detect useful information in complex cases. Our Vision: A Human-Beyond Database Adminstrator. To this end, we aim to build a human-beyond âDBAâ that can tirelessly learn from documents (see Figure 1), which, given a set of documents, automatically (1) learns experience from documents, (2) obtains status metrics by interacting with the database, (3) reasons about possible root causes with the abnormal metrics, and (4) accordingly gives optimization advice by calling proper tools. Challenges. Recent advances in Large Language Models (LLMs) have demonstrated superiority in understanding natural language, generating basic codes, and using external tools. However, leverag- ing LLM to design a âhuman-beyond DBAâ is still challenging.
(1) Experience learning from documents. Just like human learners taking notes in classes, although LLMs have undergone training on vast corpus, important knowledge points (e.g., diagnosis experience) cannot be easily utilized without careful attention. However, most texts are of long documents (with varying input lengths and section correlations) and different formats of the extracted experience can greatly affect the utilization capability of the LLM.
(2) Reasoning by interacting with database. With the extracted experience, we need to inspire LLM to reason about the given anomalies. Different from basic prompt design in machine learning, database diagnosis is an interactive procedure with the database (e.g., looking up system views or metrics). However, LLM responses are often untrustworthy (âhallucinationâ problem), and it is critical to design strategies that guide LLM to utilize proper interfaces of the database and derive reasonable analysis.
xxxx
(3) Mechanism for communication across multiple LLMs. Similar to human beings, one LLM alone may be stuck in sub-optimal
solutions, and it is vital to derive a framework where multiple LLMs collaborate to tackle complex database problems. By pooling their collective intelligence, these LLMs can provide comprehensive and smart solutions that a single LLM or even skilled human DBA would struggle to think out. Idea of LLM as DBA. Based on above observations, we introduce D-Bot, an LLM based database administrator. First, D-Bot trans- forms documents into experiential knowledge by dividing them into manageable chunks and summarizing them for further extraction of maintenance insights with LLM. Second, it iteratively generates and assesses different formats of task descriptions to assist LLM in understanding the maintenance tasks better. Third, D-Bot utilizes external tools by employing matching algorithms to select appro- priate tools and providing LLM with instructions on how to use the APIs of selected tools. Once equipped with the experience, tools, and input prompt, LLM can detect anomalies, analyze root causes, and provide suggestions, following a tree of thought strategy to re- vert to previous steps if a failure occurs. Moreover, D-Bot promotes collaborative diagnosis by allowing multiple LLMs to communicate based on predefined environmental settings, inspiring more robust solutions via debate-like communications. Contributions. We make the following contributions. (1) We design a LLM-centric database maintenance framework, and explore potential to overcome limitations of traditional strategies. (2) We propose an effective data collection mechanism by (ð) de- tecting experiential knowledge from documents and (ðð) leveraging external tools with matching algorithms. (3) We propose a root cause analysis method that utilizes LLM and tree search algorithm for accurate diagnosis. (4) We propose an innovative concept of collaborative diagnosis among LLMs, thereby offering more comprehensive and robust solutions to complex database problems. (5) Our preliminary experimental results that D-Bot can efficiently and effectively diagnose the root causes.
# 2 PRELIMINARIES
Database Anomalies. In databases, there are five common prob- lems that can negatively affect the normal execution status. (1) Running Slow. The database exhibits longer response time than expectancy, leading to bad execution performance. (2) Full Disk Capacity. The databaseâs disk space is exhausted, preventing it from storing new data. (3) Execution Errors. The database experiences errors, potentially due to improper error handling in the application (e.g., leaking sensitive data or system details) or issues within data- base (e.g., improper data types). (4) Hanging. The database becomes unresponsive, which is usually caused by long-running queries, deadlocks, or resource contention. (5) Crashing. The database un- expectedly shuts down, causing data inaccessible. For a mature database product, each anomaly type is explained in the documenta- tion and suitable to be learned by LLMs. Observation Tools for Anomaly Detection. âObservability of the databaseâ is vital to detect above anomalies, including logs, met- rics, and traces. (1) Logs are records of database events. For example, PostgresSQL supports slow query logs (with error messages that can help debug and solve execution issues), but these logs may
2
record a large scale of data and are generally not enabled in on- line stage. (2) Metrics capture the aggregated database and system statistics. For example, views like pg_stat_statements record the templates and statistics of slow queries; tools like Prometheus [20] provide numerous monitoring metrics, making it possible to cap- ture the real time system status. (3) Traces provide visibility into how requests behave during executing in the database. Different from logs that help to identify the database problem, traces help to locate the specific abnormal workload or application. Optimization Tools for Anomaly Solving. Users mainly con- cern how to restore to the normal status after an anomaly occurs. Here we showcase some optimization tools. (1) For slow queries, since most open-source databases are weak in logical transfor- mation, there are external engines (e.g., Calcite with â¼120 query rewrite rules) and tuning guides (e.g., Oracle with over 34 trans- formation suggestions) that help to optimize slow queries. (2) For knob tuning, many failures (e.g., max_connections in Postgres) or bad performance (e.g., memory management knobs) are cor- related with database knobs (e.g., for a slow workload, incresae innodb_buffer_pool_size in MySQL by 5% if the memory usage is lower than 60%). Similarly, there are index tuning rules that generate potentially useful indexes (e.g., taking columns within the same predicate as a composite index). Besides, we can utilize more advanced methods, such as selecting among heuristic meth- ods [3, 21, 22] and learned methods [7â9, 15, 23, 25, 26] for problems like index lacking, which is not within the scope of this paper.
We aim to design D-Bot, an LLM-based DBA, for automatically diagnosing the database anomalies and use LLM to directly (or call appropriate tools to indirectly) provide the root causes.
# 3 THE VISON OF D-BOT
Existing LLMs are criticized for problems like âBrain in a Vatâ [14]. Thus, it is essential to establish close connections between LLMs and the target database, allowing us to guide LLMs in effectively maintaining the databaseâs health and functionality. Hence, we propose D-Bot, which is composed of two stages.
First, in preparation stage, D-Bot generates experience (from documents) and prompt template (from diagnosis samples), which are vital to guide online maintenance.
⢠Documents â Experience. Given a large volume of diverse, long, unstructured database documents (e.g., database man- ual, white paper, blogs), we first split each document into chunks that can be processed by the LLM. To aggregate correlated chunks together (e.g., chunk ð£ð that explains the meaning of âbloat-tableâ and chunk ð£ ð that utilizes âbloat- tableâ in root cause analysis), we generate a summary for each chunk based on both its content and its subsections. Finally, we utilize LLM to extract maintenance experience from chunks with similar summaries (Section 4).
⢠Prompt Template Generation. To help LLM better under- stand the DM tasks, we iteratively generate and score dif- ferent formats of task descriptions using DM samples (i.e., given the anomaly and solutions, ask LLM to describe the task), and adopt task description that both scores high perfor- mance and is sensible to human DBAs (in cases of learning bias) for LLM diagnosis (Section 5).
Document Documents > Experience Document i . (1) Overat Performance Analysis, : Chunk â Extraction [11] Problem Description âSummary Problem Symptoms refer CPU Usage Views (S)view anaivsis 2.2}cPU Usage Views Segmentation External Tool Learning APIS (db, pg_activities} [dbmind, rewrite] | Database & Platform Website Tool Retrieval Query & Answer ® Prompt Template Generation Prompt Generation Prompt Scoring : Collaborative Diagnosis Task Description âScore, Role Chief ie teresa drtahane performance â problem. Dlagnosethe caumer | OO Allocation Yi" bea and give optimization suggestions. Ina company whose databases meet anomalies. It depends on youtto diagnose the root causes. âsae Y O WS Environment Setting {order: sequential, visibility: all} Chat Summary Summarize history chats;Outline tool inputs, results, goals.
Figure 2: Overview of D-Bot
Second, in maintenance stage, given an anomaly, D-Bot iter- atively reasons the possible root causes by taking advantages of external tools and multi-LLM communications.
⢠External Tool Learning. For a given anomaly, D-Bot first matches relevant tools using algorithms like Dense Retrieval. Next, D-Bot provides the tool APIs together with their de- scriptions to the LLM (e.g., function calls in GPT-4). After that, LLM can utilize these APIs to obtain metric values or op- timization solutions. For example, in PostgresSQL, LLM can acquire the templates of slowest queries in the pg_activity view. If these queries consume much CPU resource (e.g., over 80%), they could be root causes and optimized with rewriting tool (Section 6).
cause can affect the database performance (e.g., the performance hazards of many dead tuples); âmetricsâ provide hints of matching with this experience segment, i.e., LLM will utilize this experience if the abnormal metrics exist in the âmetricsâ field; âstepsâ provide the detailed procedure of checking whether the root cause exists by interacting with database (e.g., obtaining the ratio of dead tuples and live tuples from table statistics views).
1 "name": "many_dead_tuples", 2 "content": "If the accessed table has too many dead tuples,
it can cause bloat-table and degrade performance",
3 "metrics": ["live_tuples", "dead_tuples", "table_size", " dead_rate"],
⢠LLM Diagnosis. Although LLM can understand the func- tions of tool APIs, it still may generate incorrect API requests, leading to diagnosis failures. To solve this problem, we em- ploy the tree of thought strategy, where LLM can go back to previous steps if the current step fails. It significantly increases the likelihood of LLMs arriving at reasonable di- agnosis results (Section 7).
⢠Collaborative Diagnosis. A single LLM may execute only the initial diagnosis steps and end up early, leaving the prob- lem inadequately resolved. To address this limitation, we propose the use of multiple LLMs working collaboratively. Each LLM plays a specific role and communicates by the en- vironment settings (e.g., priorities, speaking orders). In this way, we can enable LLMs to engage in debates and inspire more robust solutions (Section 8).
# 4 EXPERIENCE DETECTION FROM DOCUMENTS
Document learning aims to extract experience segments from tex- tual sources, where the extracted segments are potentially useful in different DM cases. For instance, when analyzing the root causes of performance degradation, LLM utilizes the âmany_dead_tuplesâ experience to decide whether dead tuples have negatively affected the efficiency of index lookup and scans. Desired Experience Format. To ensure LLM can efficiently uti- lize the experience, each experience fragment should include four fields. As shown in the following example, ânameâ helps LLM to understand the overall function; âcontentâ explains how the root
4 "steps": "For each accessed table, if the total number of live tuples and dead tuples is within an acceptable limit (1000), and table size is not too big (50MB), it is not a root cause. Otherwise, if the dead rate also exceeds the threshold (0.02), it is considered a root cause. And we suggest to clean up dead tuples in time."
LLM for Experience Detection. It aims to detect experience seg- ments that follow above format. Since different paragraphs within a long document may be correlated with each other (e.g., the concept of âbloat-tableâ appearing in âmany_dead_tuplesâ is introduced in another section), we explain how to extract experience segments without losing the technical details.
Step1: Segmentation. Instead of partitioning documents into fixed- length segments, we divide them based on the structure of the sec- tion structures and their content. Initially, the document is divided into chunks using the section separators. If a chunk exceeds the maximum chunk size (e.g., 1k tokens), we further divide it recur- sively into smaller chunks.
Step2: Chunk Summary. Next, for each chunk denoted as ð¥, a summary ð¥ .ð ð¢ððððð¦ is created by feeding the content of ð¥ into LLM with a summarization prompt ðð ð¢ðððððð§ð :
ðð ð¢ðððððð§ð = Summarize the provided chunk briefly · · · Your summary will serve as an index for others to find technical details related to database maintenance · · · Pay attention to examples even if the chunks covers other topics.
The generated ð¥ .ð ð¢ððððð¦ acts as a textual index of ð¥, enabling the matching of chunks containing similar content.
3
Step3: Experience Extraction. Once the summaries of the chunks are generated, LLM parses the content of each chunk and compares it with the summaries of other chunks having similar content, which is guided by the extraction prompt ððð¥ð¡ðððð¡ . This way, experience segments that correlate with the key points from the summaries are detected.
ððð¥ð¡ðððð¡ = Given a chunk summary, extract diagnosis experi- ence from the chunk. If uncertain, explore diagnosis experience in chunks with similar summaries.
In our implementation, given a document, we use LLM to extract
experience segments into the above 4-field format. Detected Maintenance Experience. In Figure 3, we showcase the simplified diagnosis procedure together with some necessary details, coming from chunks originally in different sections of the given documents (e.g., the maintenance guide with over 100 pages). 1. Background Understanding. Itâs crucial to grasp the con- text of system performance, such as recent changes in customer expectation, workload type, or even system settings.
2. Database Pressure Checking. This step identifies data- base bottlenecks, such as tracking CPU usage and active ses- sions; and monitoring system views (e.g., pg_stat_activity and pgxc_stat_activity) to focus on non-idle sessions.
3. Application Pressure Checking. If there is no apparent pressure on the database or the resource consumption is very low (e.g., CPU usage below 10% and only a few active sessions), it is suggested to investigate the application side, such as exhausted ap- plication server resources, high network latency, or slow processing of queries by application servers.
4. System Pressure Checking. The focus shifts to examining the system resources where the database is located, including CPU usage, IO status, and memory consumption.
5. Database Usage Checking. Lastly, we can investigate sub- optimal database usage behaviors, such as (1) addressing concur- rency issues caused by locking waits, (2) examining database con- figurations, (3) identifying abnormal wait events (e.g., io_event), (4) tackling long/short-term performance declines, and (5) optimizing poorly performing queries that may be causing bottlenecks.
# 5 DIAGNOSIS PROMPT GENERATION
Instead of directly mapping extracted experience to new cases, next we explore how to teach LLMs to (1) understand the database maintenance tasks and (2) reason over the root causes by itself. Input Enrichment. With a database anomaly ð¥ as input, we can enrich ð¥ with additional description information so called input prompt ð¥ â². On one hand, ð¥ â² helps LLM to better understand the task intent. On the other hand, since database diagnosis is generally a complex task that involves multiple steps, ð¥ â² preliminarily implies how to divide the complex task into sub-tasks in a proper order, such that further enhancing the reasoning of LLM.
From our observation, the quality of ð¥ â² can greatly impact the performance of LLM on maintenance tasks [27] (Figure 2). Thus, we first utilize LLM to suggest candidate prompts based on a small set of input-output pairs (e.g., 5 pairs for a prompt). Second, we rank these generated prompts based on a customized scoring function
4
(e.g., the ratio of detected root causes), and reserve the best prompts (e.g., top-10) as candidates. Finally, we select the best one to serve as the input prompt template for the incoming maintenance tasks.
# 6 EXTERNAL TOOL LEARNING
As we know, the efficient use of tools is a hallmark of human cognitive capabilities [17, 18]. When human beings encounter a new tool, they start to understand the tool and explore how it works, i.e., taking it as something with particular functions and trying to understand what the functions are used for. Likewise, we aim to inspire similar ability within LLM. Tool Retrieval. We first retrieve the appropriate tools for the diagnosis task at hand, represented as ð·ð¡ . There are several methods used, such as BM25, LLM Embeddings, and Dense Retrieval.
(1) BM25, simply represented as ð (ð·ð¡ , ð) = BM25, is a common probabilistic retrieval method that ranks tool descriptions (ð·) based on their relevance to the given anomaly (ð) [19].
(2) LLM Embeddings, denoted as ð (ð·ð¡ , ð¿) = ð¿ð¿ðð¸ , are a method that converts tool descriptions (ð·ð¡ ) into embeddings (ð¸ð¡ ) using LLM, i.e., ð¸ð¡ = ð¿(ð·ð¡ ). These embeddings capture the semantic meanings in a multi-dimensional space, hence helping in finding related tools even in the absence of keyword overlap, ð·ð¡ = ð¿ð¿ðð¸ (ð¸ð¡ ).
(3) Dense Retrieval, denoted as ð (ð, ð·ð¡ , ð ) = ð·ð
, uses neural net- works (ð ) to generate dense representations of both the anomaly (ð) and the tool descriptions (ð·ð¡ ), separately denoted as Denseð and Denseð· . To retrieve the relevant tools, we calculate the simi- larity between Denseð and Denseð· , and rank them based on these similarity scores.
The proper method for tool retrieval depends on the specific scenarios. BM25 is efficient for quick results with large volumes of API descriptions in the tools and clear anomaly characters. LLM Embeddings excel at capturing semantic and syntactic relationships, which is especially useful when relevance isnât obvious from key- words (e.g., different metrics with similar functions). Dense Retrieval is ideal for vague anomaly, which captures context and semantic meaning, but is more computational costly.
# 7 LLM DIAGNOSIS
Tree Search Algorithm using LLM. To avoid diagnosis failures caused by the incorrect actions (e.g., non-existent API name) derived by LLM, we propose to utilize the tree of thought strategy that can guide LLM to go back to previous actions if the current action fails. Step1: Tree Structure Initialization. We initialize a tree structure, where root node is the diagnosis request (Figure 4). Utility methods are utilized to manipulate the tree structure, and UCT score for node ð£ are computed based on the modifications during planning, i.e., ð (ð£) , where ln(ð ) UCT(ð£) = ð (ð£) denotes the selection frequency and ð¤ (ð£) denotes the success ratio of detecting root causes. Note, the action of ð(ð£ fails to call tool API, ð¤ (ð£) equals -1. Step2: Simulate Execution. This step kickoffs the execution of simulations starting from the root node of the tree. It involves selecting nodes based on specific standard (e.g., detected abnormal metrics). If the criteria for selecting a new node is met, a new node is chosen; otherwise, the node with the highest UCT value is selected.
[CPU usage] + OPS CPU usage rate or top/sar commands; + If you still cannot analyze the cause of high CPU usage, generate a flame graph to find the bottleneck of the kernel code function. 110 issues] + Hard disk cache/raid write policy configuration problem; + Disk bandwidth is throttled (OBS has flow control). [Memory issues] + Refer to high memory section [Network issues] + Network disconnection; + High network latency; + Full network bandwidth. (2) Check the database-related views, or the relevant indicators to clarify whether the issue is on the database side. [Database-related views] E.g., pg_stat_activity/pgxc_stat_activity views in Postgres - Pay attention to the sessions where the state is not idle. [OPS monitoring indicators] E.g., CPU usage rate, active sessions. 2. Database Pressure Checking 1. Background Understanding 3. Application Pressure Checking 4. System Pressure Checking 5. Database Usage Checking [Concurrency issues] Lock waits due to concurrent updates. [Configuration issues] Knobs like shared_buffers/threadPool. [Long-term performance] Compare performance diagnosis reports (e.g, ,Traces in Oracle, WDR in gaussdb) for different time periods. [Short-term performance jitters] Seconds-level performance jitter; Inefficient SQL: large volume of slow SQL, slow SQL optimization. ® If the database side does not significantly perceive the workload pressure, and resource usage is extremely low (e.g, CPU usage is less than 10%), it is recommended to check the application side: [Possible problems] - Application server resources are exhausted, e.g., insufficient CPU / 10 / memory; + The network latency between the application server and the kernel is too high; + The application server processes query requests slowly, leading to slow dispatch of the query statements. @
Figure 3: The outline of diagnosis experience extracted from documents.
Diagnose the root em_usageâ): memory is abnormal FA ¢pu_diagnosis_agent({"dead_tuples *: 5000, ...}):many_dead_tuples
# Figure 4: Example LLM diagnosis by tree of thought
is dedicated to a distinct domain of functions. For example, we include three LLM agents in the initial implementation: (1) Chief DBA is responsible for collaboratively diagnosing and detecting root causes with other agents; (2) CPU Agent is specialized in CPU usage analysis and diagnosis, and (3) Memory Agent focuses on memory usage analysis and diag- nosis. Each LLM agent can automatically invoke tool APIs to retrieve database statistics, extract external knowledge, and conduction optimizations. For instance, CPU Agent utilizes the monitoring tool Prometheus to check CPU usage metrics within specific time periods, and determine the root causes of high CPU usage by matching with extracted experience (Section 4). Note, if CPU/memory agents cannot report useful analysis, Chief DBA is responsible to detect other potential problems, such as those on the application side.
Step3: Existing Node Reflection. For each node in the path from the root node to the selected node, reflections are generated based on decisions made at previous nodes. For example, we count on LLM to rethink the benefits of analyzing non-resource relevant metrics. If LLM decides the action cannot find any useful information, the UCT value will be reduced and set to that of its parent node. In this way, we can enhance the diagnosis efficiency.
Step4: Terminal Condition. If LLM cannot find any more root cause (corresponding to a leaf node) for a threshold time (e.g., five), the algorithm ends and LLM outputs the final analysis based on the detected root causes.
# 8 COLLABORATIVE DIAGNOSIS FOR
# COMPLEX CASES
A single LLM may be limited in its ability to fully resolve a problem (e.g., stuck in initial steps). Collaborative diagnosis involves the utilization of multiple LLMs to collectively address complex cases by leveraging their unique role capabilities. This section introduces the communicative framework for database diagnosis [1, 16].
⢠Agents. In the communicative framework, agents can be undertaken by human beings or LLMs. Humans can pro- vide LLM agents with scenario requirements (e.g., business changes over the incoming period) and prior knowledge (e.g., historical anomalies). On the other hand, each LLM agent
⢠Environment Settings. We need to set a series of principles for the agents to efficiently communicate, such as (1) Chat Order: To avoid the mutual negative influence, we only al- low one LLM agent to âspeakâ (i.e., appending the analysis results to the chat records to let other agents know) at a time. To ensure flexible chat (e.g., if an agent cannot detect anything useful, it should not speak), we rely on Chief DBA to decide which agent to speak in each iteration (diagnosis scheduling); (2) Visibility: By default, we assume the analysis results of agents can be seen by each other, i.e., within the same chat records. In the future, we can split agents into different groups, where each group is in charge of different database clusters/instances and they do not share the chat records; (3) Selector is vital to filter invalid analysis that may mislead the diagnosis directions; (4) Updater works to update agent memory based on the historical records.
⢠Chat Summary . For a complex database problem, it re- quires agents dozens of iterations to give in-depth analy- sis, leading to extremely long chat records. Thus, it is vi- tal to effectively summarize the critical information from chat records without exceeding the maximal length of LLM prompts. To the end, we progressively summarize the lines
5
# Table 1: Diagnosis performance of single root causes ( : legal diagnosis results; : accurate diagnosis results).
Type Data Insert Slow Query Concurrent Transaction Root Cause INSERT_LARGE_DATA FETCH_LARGE_DATA REDUNDANT_INDEX LACK_STATISTIC_INFO MISSING_INDEXES POOR_JOIN_PERFORMANCE CORRELATED_SUBQUERY LOCK_CONTENTION WORKLOAD_CONTENTION Workload concentration affecting SQL execution CPU_CONTENTION IO_CONTENTION Description Long execution time for large data insertions Fetching of large data volumes Unnecessary and redundant indexes in tables Outdated statistical info affecting execution plan Missing indexes causing performance issues Poor performance of Join operators Non-promotable subqueries in SQL Lock contention issues Severe external CPU resource contention IO resource contention affecting SQL performance LLM+Metrics D-Bot
of a record used with tools, including inputs for certain tools and the results returned by these tools. Based on the current summary, it extracts the goals intended to be solved with each call to the tool, and forms a new summary, e.g.,
[Current summary] - I know the start and end time of the anomaly. [New Record] Thought: Now that I have the start and end time of the anomaly, I need to diagnose the causes of the anomaly Action: is_abnormal_metric Action Input: 1684600074, âmetric_nameâ: âcpu_usageâ} Observation: âThe metric is abnormalâ [New summary] - I know the start and end time of the anomaly. - I searched for is_abnormal_metric, and I now know that the CPU usage is abnormal.
& [Chief DBA: The identified root causes of the anomaly are high CPU usage due toa high number of running processes, high load average, blocked processes, and low entropy, as well as high memory usage caused by poor join performance and significant amounts of inactive anonymous memory. CPU Agent suggestions: the running processes, load average, blocked processes, and low entropy to optimize CPU usage. Memory Agent suggestions: 1. Check the GUC parameter 'enable_hashj and ensure thatiit is set to âonâ 2. Optimize the SQL structure to reduce JOIN cost and consider using temporary tables to filter data. 3. processes or applications running on the system that may be causing the high memory usage and ensure they are property releasing memory when finished. Reset Next Stop Autoplay Start Autoplay Optimization Solutions: Rewrite Slow Query [Memory Agent]: The high memory usage is caused by poor join performance and significant amounts Add Query Hints of inactive anonymous memory. The slow queries involve multiple join operations with large amounts of data, Gather More Info
With this communicative framework and well-defined communi- cation principles, the collaborative diagnosis process among human and LLM agents becomes more efficient (e.g., parallel diagnosis) and effective (e.g., chat records could trigger investigating of in-depth metric observation and root cause analysis).
# 9 PRELIMINARY EXPERIMENT RESULTS
Demonstration. As illustrated in Figure 5, Chief DBA monitors the status of the database to detect anomalies. Upon recognizing a new anomaly, Chief DBA notifies both the Memory Agent and CPU Agent. These agents independently assess the potential root causes and communicate their findings (the root causes and recommended solutions) to the Chief DBA. Subsequently, the Chief DBA consol- idates the diagnostic results for the userâs convenience. In initial iterations, these agents generally gather limited information, and so they will continue for multiple iterations until the conclusion of Chief DBA is nearly certain or no further valuable information can be obtained. Additionally, during the diagnosis, users have the option to participate by offering instructions and feedback, such as verifying the effectiveness of a proposed optimization solution.
# Figure 5: A basic demonstration of D-Bot.
Diagnosis Performance Comparison. We compare the perfor- mance of D-Bot against a baseline, namely llm+Metrics. Both of the two methods are deployed with the OpenAI model GPT-4 [2] alongside metrics and views from PostgreSQL and Prometheus. The evaluation focuses on basic single-cause problems as detailed in Table 1. Besides, we also offer a multi-cause diagnosis example presented in the Appendix-B.
Preliminary results indicate that LLM +Metrics and D-Bot can achieve a high legality rate (producing valid responses to specific database issues). However, it is a âdangerous behaviorâ for LLM +Metrics, which actually has very low success rate (infrequent pro- vision of the correct causes). In contrast, D-Bot achieves both high legal rate and success rate. The reasons are three-fold.
First, LLM +Metrics conducts very basic reasoning and often misses key causes. For example, for the INSERT_LARGE_DATA case, LLM +Metrics only finds âhigh number of running processesâ with the node_procs_running metric, and stops early. In contrast,
6
D-Bot not only finds the high concurrency problem, but analyze the operation statistics in the database process and identifies âhigh memory usage due to heavy use of UPDATE and INSERT operations on xxx tablesâ by looking up the pg_stat_statements view.
Second, LLM +Metrics often âmakes upâ reasons without substantial knowledge evidence. For example, for the CORRE- LATED_SUBQUERY case, LLM +Metrics observes SORT operations in logged queries, and incorrectly attributes the cause to âfrequent reading and sorting of large amount of dataâ, thereby ending the diagnostic process. Instead, D-Bot cross-references with the query optimization knowledge, and then finds the correlated-subquery structure might be the performance bottleneck, with additional extracted information like estimated operation costs.
Third, LLM +Metrics meet trouble in deriving appropriate solu- tions. LLM +Metrics often gives very generic optimization solutions (e.g., âresolve resource contention issuesâ), which are useless in practice. Instead, leveraging its tool retrieval component, D-Bot can learn to give specific optimization advice (e.g., invoking query transformation rules, adjusting the work_mem parameter) or gather more insightful information (e.g., âcalculate the total cost of the plan and check whether the cost rate of the sort or hash operators exceeds the cost rate thresholdâ).
This evaluation reveals the potential of D-Bot in going beyond mere anomaly detection to root cause analysis and provision of actionable suggestions. Despite these advancements, from the basic deployment of D-Bot, there are still some unresolved challenges. First, it is tricky to share the maintenance experience (e.g., varying metric and view names) across different database products. Second, it is labor-intensive to adequately prepare extensive number of anomaly-diagnosis data, which is essential to fine-tune and direct less-capable LLMs (e.g., those smaller than 10B) to understand the complex database knowledge and apply in maintenance.
# 10 CONCLUSION
In this paper, we propose a vision of D-Bot, an LLM-based data- base administrator that can continuously acquire database main- tenance experience from textual sources, and provide reasonable, well-founded, in-time diagnosis and optimization advice for target databases. We will continue to complete and improve this work with our collaborators.
REFERENCES [1] [n.d.]. https://github.com/OpenBMB/AgentVerse. Last accessed on 2023-8. [2] [n.d.]. https://openai.com/. Last accessed on 2023-8. [3] Surajit Chaudhuri and Vivek R. Narasayya. 1997. An Efficient Cost-Driven Index
Selection Tool for Microsoft SQL Server. In VLDB. 146â155.
[4] Karl Dias, Mark Ramacher, Uri Shaft, Venkateshwaran Venkataramani, and Gra- ham Wood. 2005. Automatic Performance Diagnosis and Tuning in Oracle. In Second Biennial Conference on Innovative Data Systems Research, CIDR 2005, Asilo- mar, CA, USA, January 4-7, 2005, Online Proceedings. www.cidrdb.org, 84â94. http://cidrdb.org/cidr2005/papers/P07.pdf
[5] Shiyue Huang, Ziwei Wang, Xinyi Zhang, Yaofeng Tu, Zhongliang Li, and Bin Cui. 2023. DBPA: A Benchmark for Transactional Database Performance Anomalies. Proc. ACM Manag. Data 1, 1 (2023), 72:1â72:26. https://doi.org/10.1145/3588926 [6] Prajakta Kalmegh, Shivnath Babu, and Sudeepa Roy. 2019. iQCAR: inter-Query Contention Analyzer for Data Analytics Frameworks. In Proceedings of the 2019 International Conference on Management of Data, SIGMOD Conference 2019, Ams- terdam, The Netherlands, June 30 - July 5, 2019, Peter A. Boncz, Stefan Manegold, Anastasia Ailamaki, Amol Deshpande, and Tim Kraska (Eds.). ACM, 918â935. https://doi.org/10.1145/3299869.3319904
[7] Jan Kossmann, Alexander Kastius, and Rainer Schlosser. 2022. SWIRL: Selection of Workload-aware Indexes using Reinforcement Learning. In EDBT. 2:155â2:168.
7
[8] Hai Lan, Zhifeng Bao, and Yuwei Peng. 2020. An Index Advisor Using Deep Reinforcement Learning. In CIKM. 2105â2108.
[9] Gabriel Paludo Licks, Júlia Mara Colleoni Couto, Priscilla de Fátima Miehe, Re- nata De Paris, Duncan Dubugras A. Ruiz, and Felipe Meneguzzi. 2020. SmartIX: A database indexing agent based on reinforcement learning. Appl. Intell. 50, 8 (2020), 2575â2588.
[10] Ping Liu, Shenglin Zhang, Yongqian Sun, Yuan Meng, Jiahai Yang, and Dan Pei. 2020. FluxInfer: Automatic Diagnosis of Performance Anomaly for Online Database System. In 39th IEEE International Performance Computing and Commu- nications Conference, IPCCC 2020, Austin, TX, USA, November 6-8, 2020. IEEE, 1â8. https://doi.org/10.1109/IPCCC50635.2020.9391550
[11] Xiaoze Liu, Zheng Yin, Chao Zhao, Congcong Ge, Lu Chen, Yunjun Gao, Dimeng Li, Ziting Wang, Gaozhong Liang, Jian Tan, and Feifei Li. 2022. PinSQL: Pinpoint Root Cause SQLs to Resolve Performance Issues in Cloud Databases. In 38th IEEE International Conference on Data Engineering, ICDE 2022, Kuala Lumpur, Malaysia, May 9-12, 2022. IEEE, 2549â2561. https://doi.org/10.1109/ICDE53745.2022.00236 [12] Xianglin Lu, Zhe Xie, Zeyan Li, Mingjie Li, Xiaohui Nie, Nengwen Zhao, Qingyang Yu, Shenglin Zhang, Kaixin Sui, Lin Zhu, and Dan Pei. 2022. Generic and Robust Performance Diagnosis via Causal Inference for OLTP Database Systems. In 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing, CCGrid 2022, Taormina, Italy, May 16-19, 2022. IEEE, 655â664. https://doi.org/10.1109/ CCGrid54584.2022.00075
[13] Minghua Ma, Zheng Yin, Shenglin Zhang, and et al. 2020. Diagnosing Root Causes of Intermittent Slow Queries in Large-Scale Cloud Databases. Proc. VLDB Endow. 13, 8 (2020), 1176â1189. https://doi.org/10.14778/3389133.3389136 [14] Yuxi Ma, Chi Zhang, and Song-Chun Zhu. 2023. Brain in a Vat: On Miss- ing Pieces Towards Artificial General Intelligence in Large Language Mod- els. CoRR abs/2307.03762 (2023). https://doi.org/10.48550/arXiv.2307.03762 arXiv:2307.03762
[15] R. Malinga Perera, Bastian Oetomo, Benjamin I. P. Rubinstein, and Renata Borovica-Gajic. 2021. DBA bandits: Self-driving index tuning under ad-hoc, analytical workloads with safety guarantees. In ICDE. 600â611.
[16] Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, and et al. 2023. Com- municative Agents for Software Development. arXiv preprint arXiv:2307.07924 (2023).
[17] Yujia Qin, Shengding Hu, Yankai Lin, and et al. 2023. Tool learning with founda- tion models. arXiv preprint arXiv:2304.08354 (2023).
[18] Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, Sihan Zhao, Runchu Tian, Ruobing Xie, Jie Zhou, Mark Gerstein, Dahai Li, Zhiyuan Liu, and Maosong Sun. 2023. Tool- LLM: Facilitating Large Language Models to Master 16000+ Real-world APIs. arXiv:cs.AI/2307.16789
[19] Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval 3, 4 (2009), 333â389.
[20] James Turnbull. 2018. Monitoring with Prometheus. Turnbull Press. [21] Gary Valentin, Michael Zuliani, Daniel C. Zilio, Guy M. Lohman, and Alan Skelley. 2000. DB2 Advisor: An Optimizer Smart Enough to Recommend Its Own Indexes. In ICDE. 101â110.
[22] Kyu-Young Whang. 1987. Index Selection in Relational Databases. Foundations of Data Organization (1987), 487â500.
[23] Wentao Wu, Chi Wang, Tarique Siddiqui, Junxiong Wang, Vivek R. Narasayya, Surajit Chaudhuri, and Philip A. Bernstein. 2022. Budget-aware Index Tuning with Reinforcement Learning. In SIGMOD Conference. 1528â1541.
[24] Dong Young Yoon, Ning Niu, and Barzan Mozafari. 2016. DBSherlock: A Perfor- mance Diagnostic Tool for Transactional Databases. In Proceedings of the 2016 International Conference on Management of Data, SIGMOD Conference 2016, San Francisco, CA, USA, June 26 - July 01, 2016, Fatma Ãzcan, Georgia Koutrika, and Sam Madden (Eds.). ACM, 1599â1614. https://doi.org/10.1145/2882903.2915218 [25] Xuanhe Zhou, Chengliang Chai, Guoliang Li, and Ji Sun. 2020. Database meets artificial intelligence: A survey. IEEE Transactions on Knowledge and Data Engi- neering 34, 3 (2020), 1096â1116.
[26] Xuanhe Zhou, Luyang Liu, Wenbo Li, Lianyuan Jin, Shifu Li, Tianqing Wang, and Jianhua Feng. 2022. AutoIndex: An Incremental Index Management System for Dynamic Workloads. In ICDE. 2196â2208.
[27] Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and Jimmy Ba. 2022. Large Language Models Are Human-Level Prompt Engineers. (2022). arXiv:2211.01910 http://arxiv.org/abs/2211.01910
# A APPENDIX - PROMPTS
Prompts Chief_dba_format_prompt You are in a company whose databases meet anomalies and it depends on you to collaborate with other agents to diagnose the root causes. ${role_description} # Rules and Format Instructions for Response - Must listen and respond to the user's advice in the following format: hought: I now know the advice of the user, and i need to consider it during diagnosis ction: Speak ction Input: ({"diagnose": response to the advice, "solution": [], "knowledge": ""}) You can detect and diagnose anomaly as follows to use tool: hought: (your thought) ction: (an action name, it can be one of [obtain_anomaly_time, Speak]) ction Input: (argument for the action) irst, you need to call the tool api to get the start and end time of an anomaly hought: I need to obtain the start and end time of the anomaly by calling the tool api ction: obtain_anomaly_time ction Input: {"input": "json dict string"} bservation: {"start_time":"xxxx","end_time": "xxxx"} fter obtaining the start and end time of the anomaly, announce it with the following format: hought: I now know the start and end time of the anomaly, and i need to report it to agents Action: Speak Action Input: ({"diagnose": the start and end time of the anomaly you found, "solution": [], "knowledge": ""}) After all the agents have announced the root causes they found, you should summarize all the mentioned root causes and optimization solutions point by point: Thought: I now know the root causes and optimization solutions from other agents, and i need to conclude them point by point Action: Speak Action Input: ({"diagnose": The identified root causes of the anomaly are ..., "solution": The suggested optimization solutions are ..., "knowledge": ""}) Here is the conversation history ${chat_history} Here is the execution log of tools ${tool_observation} - Once an agent has announced the root causes he found, it is your responsibility to memorize the root causes. After that, continue to encourage other agents to diagnose. - When no one speaks in the last round ([Silence] appears in the end of history), you should summarize root causes and optimization solutions point by point. Pay attention to the response format instructions, and strictly follow the above rules! Based on the above history, what will you, ${agent_name}, do next? CPU_agent_format_prompt You are in a company whose databases meet anomalies. Follow the chief DBA's instructions to diagnose the root causes. ${role_description} # Rules and Format Instructions for Response
8
- During diagnosis, you have access to the following tools: ${tools} - You can respond as follows to use tool: Thought: (your thought) Action: (an action name, it can be one of [whether_is_abnormal_metric, CPU_diagnosis_agent, Speak], pay attention to the capitalization) Action Input: (argument for the action) You can first determine abnormal metrics by using the tools, and use the following format: Thought: Now that I have obtained the start and end time of the anomaly, check whether the CPU usage is abnormal during that time period. Action: whether_is_abnormal_metric Action Input: {"start_time": 1684646375, "end_time": 1684646378, "metric_name": "cpu_usage"} Next you must diagnose root causes by using the tools, and must use the following format (any other choice is not allowed): Thought: The CPU usage is abnormal, so I need to diagnose the cause of the anomaly using cpu_diagnosis_agent. Action: cpu_diagnosis_agent Action Input: {"start_time": 1684646375, "end_time": 1684646378} After you have got the observation from cpu_diagnosis_agent, announce it to the chief DBA, and use the following format: Thought: I now know the root cause of the anomaly, and i need to report it to the chief DBA Action: Speak Action Input: ({"diagnose": the root causes you found, "solution": the optimization solutions for the root causes splitted by '
', "knowledge": the diagnosis knowledge you used}) (Note the memory agent owns similar prompt, and we do not show it to avoid redundancy.) Summary_prompt Progressively summarize the lines of a record that you uses tools, which contains inputs for certain tools and the results returned by these tools. Based on the current summary, summarize from the record the goals that you intended to solve with each tool call, add it onto the previous summary, and return a new summary. EXAMPLE Current summary: New lines: Thought: Now that I have the start and end time of the anomaly, I need to diagnose the causes of the anomaly Action: whether_is_abnormal_metric Action Input: {"start_time": 1684600070, "end_time": 1684600074, "metric_name": "cpu_usage"} Observation: "The metric is abnormal" New summary: - I now know the start and end time of the anomaly. - I searched for whether_is_abnormal_metric, and I now know that the CPU usage is abnormal. END OF EXAMPLE Now, try to summarize the following record. Current summary: ${summary} New lines: ${new_lines} New summary:
9
# B APPENDIX - TEST CASES
Test Case (WORKLOAD_CONTENTION, POOR_JOIN_PERFORMANCE) (First Round) [Chief DBA] The anomaly started at 2023-08-05 23:38:30 and ended at 2023-08-05 23:43:30. [API Call] obtain_anomaly_time. [CPU Agent] The CPU usage is abnormal, so I need to diagnose the cause of the anomaly. [API Call] whether_is_abnormal_metric. [CPU Agent] The CPU relevant metric values are as follows: 1. node_scrape_collector_duration_seconds: This metric measures the duration of data collection from the node exporter. The value of 5.7218e-05 indicates that the data collection process is relatively fast. 2. node_procs_running: This metric represents the number of processes currently executing on the system. A value of 18.0 suggests that there are 18 processes running at the moment. ... 5. node_load1, node_load5, node_load15: These metrics represent the average system load over the last 1, 5, and 15 minutes, respectively. The values of 7.33, 2.4, and 0.89 suggest that the system is heavily loaded. [API Call] obtain_cpu_metrics. [Matched Knowledge] This diagnosis was based on several CPU relevant metrics obtained from Prometheus like node_scrape_collector_duration_seconds, node_procs_running, node_procs_blocked, node_entropy_available_bits, node_load1, node_load5, and node_load15. [CPU Agent] The high CPU usage seems to be caused by the high value of node_load1. It indicates that the CPU has been heavily utilized recently. [API Call] knowledge_matching. [Matched Knowledge] {"cause_name": "workload_contention", "desc": "The function checks for several potential causes of contention, including abnormal CPU and memory resource usage, insufficient space in the database data directory, and excessive connections or thread pool usage. ... If any of these issues are detected, the function provides a detailed report of the problem and suggests potential solutions. If no issues are found, the function returns "not a root cause".", "metrics": "... - max_connections - db_cpu_usage - db_mem_usage - disk_usage - connection - thread_pool_rate'}. [Solution] 1. Monitor the number of running processes and try to reduce unnecessary ones. 2. Look for resource contention and try to optimize resource allocation. [Memory Agent] The memory metric is abnormal. [API Call] whether_is_abnormal_metric. [Memory Agent] We first obtain memory metric values: - The node_memory_Inactive_anon_bytes metric value suggests a significant amount of inactive anonymous memory. - The node_memory_MemFree_bytes metric value indicates a considerable amount of free memory available. - The node_memory_Dirty_bytes metric value indicates a moderate amount of dirty memory. These memory metric values suggest that there may be a memory leak or inefficient memory management within the system. It might be worth investigating the memory allocation and deallocation processes, ensuring that resources are effectively managed. [API Call] obtain_memory_metrics. [Memory Agent] We next obtain slow queries from the pg_stat_statements view. There are potential causes that can be analyzed:
1. Poor join performance: One of the slow queries involves multiple table joins (VSELECT MIN(mi.info) ... FROM cast_info AS ci, info_type AS it1, info_type AS it2, keyword AS k, movie_info AS mi, movie_info_idx AS mi_idx, movie_keyword AS mk, name AS n, title AS t ...\â). Poor join performance can result in high memory usage if the join operation involves a large amount of data or an inefficient join operator is used. It is recommended to optimize the SQL structure and check if the join operations can be optimized by using appropriate indexes or join operators. [API Call] obtain_slow_queries; knowledge_matching. [Matched Knowledge] {"cause_name": "poor_join_performanceâ, "desc": "There are four main situations: 1) when the GUC parameter âenable_hashjoinâ is set to âoffâ, which can result in the optimizer choosing NestLoop or other join operators even when HashJoin would be more suitable; 2) when the optimizer incorrectly chooses the NestLoop operator, even when âset_hashjoinâ is on; 3) when the join operation involves a large amount of data, which can lead to high execution costs; and 4) when the cost of the join operator is expensive.
....â, "metrics": "- total_cost - cost_rate_threshold - nestloop_rows_threshold - large_join_threshola"}; {"cause_name": "missing_index", ...} [Solution] Optimize the SQL structure of the query ".." and check if the join operations can be optimized by using appropriate indexes or join operators.
10
[Memory Agent] Other potential causes: Without additional information about the system and its con- figuration, it is difficult to pinpoint a specific root cause of the high memory usage. Other potential causes may include inefficient query plans, lack of memory optimization settings, or memory leaks in the application or database engine.
To further investigate and identify the root cause, it is advisable to perform the following steps:
1. Analyze the resource usage pattern over time to identify any patterns or trends related to the memory usage.
2. Check the configuration settings and optimization parame- ters of the database system to ensure they are properly set.
3. Monitor the system for any memory leaks or inefficient query plans that may be causing the high memory usage.
4. Review the application code and queries to identify any potential optimizations or rewrite queries to improve their efficiency. [API Call] N/A. [Solution] 1. Analyze the resource usage pattern over time to identify any patterns or trends related to the memory usage.
2. Check the configuration settings and optimization parameters of the database system to ensure they are properly set.
3. Monitor the system for any memory leaks or inefficient query plans that may be causing the high memory usage.
4. Review the application code and queries to identify any potential optimizations or rewrite queries to improve their efficiency. [Chief DBA] The identified root causes of the anomaly are high CPU usage due to a high number of running processes, processes waiting for resources, and high load averages. It is also uncertain whether poor join performance is causing high memory usage. [API Call] N/A. [Solution] 1. In order to reduce CPU utilization, it is necessary to investigate the specific processes involved and their resource requirements. 2. For the high memory usage, we need to gather more information such as the overall memory usage trend over time, other slow queries or resource-intensive operations, and any relevant configuration settings. Examining the memory usage patterns and query execution plans might provide further insights. (Second Round) ~ further insights into memory problems [Memory Agent] Chief DBA has suggested that gathering more information such as the overall memory usage trend over time, other slow queries or resource-intensive operations, and any relevant configura- tion settings. This advice is indeed valuable, and I should consider it during the diagnosis. [API Call] N/A. [Memory Agent] Based on the memory metric values and slow queries provided, it is difficult to di- rectly determine the root cause of the high memory usage. However, we can analyze potential causes based on the information given. There is a significant amount of memory being used, particularly by node_memory_Dirty_bytes. Additionally, there are 11 active PostgreSQL activities. This indicates that there may be a memory leak or excessive memory usage in one or more queries or processes. The slow queries provided involve multiple table joins and aggregation functions (\âMIN\). They may be resource- intensive and contribute to the high memory usage. [API Call] knowledge_matching. [Matched Knowledge] {"cause_name": "poor_join_performanceâ, ...} [Solution] 1. Analyze the query plans, check for inefficient join operations, and monitor the memory usage over time. 2. Review the system configuration and resource constraints may provide insights into potential optimizations or improvements. 3. Consult with a database administrator or expert for a more accurate analysis and resolution. [Chief DBA] Based on our analysis, the root causes of the anomaly seem to be high CPU usage due to excessive running processes, poor join performance and inefficient query structure. The high CPU usage is mainly due to a high number of running processes and/or a high CPU load. There are 11 active PostgreSQL activities indicating a memory leak or excessive memory usage in one or more queries or processes. The slow queries provided involve multiple table joins and aggregation functions which may be resource-intensive and contribute to the high memory usage. Poor join performance and inefficient query structure can also lead to unnecessary memory usage. [API Call] N/A. [Solution] 1. Further investigation and analysis of the running processes and system workload that cause severe contention. 2. Review the structure of the slow queries and optimize them to reduce memory consumption. 3. Investigate join operations for possible inefficiencies and optimize as needed. 3. Gather more information about overall system performance, such as overall memory utilization and any other abnormal metrics.
11 | {
"id": "2307.07924"
} |
2308.04030 | Gentopia: A Collaborative Platform for Tool-Augmented LLMs | Augmented Language Models (ALMs) empower large language models with the
ability to use tools, transforming them into intelligent agents for real-world
interactions. However, most existing frameworks for ALMs, to varying degrees,
are deficient in the following critical features: flexible customization,
collaborative democratization, and holistic evaluation. We present gentopia, an
ALM framework enabling flexible customization of agents through simple
configurations, seamlessly integrating various language models, task formats,
prompting modules, and plugins into a unified paradigm. Furthermore, we
establish gentpool, a public platform enabling the registration and sharing of
user-customized agents. Agents registered in gentpool are composable such that
they can be assembled together for agent collaboration, advancing the
democratization of artificial intelligence. To ensure high-quality agents,
gentbench, an integral component of gentpool, is designed to thoroughly
evaluate user-customized agents across diverse aspects such as safety,
robustness, efficiency, etc. We release gentopia on Github and will
continuously move forward. | http://arxiv.org/pdf/2308.04030 | Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu | cs.AI | null | null | cs.AI | 20230808 | 20230808 | 3 2 0 2
g u A 8 ] I A . s c [
1 v 0 3 0 4 0 . 8 0 3 2 : v i X r a
# Gentopia.AI : A Collaborative Platform for Tool-Augmented LLMs
# Binfeng Xu, Xukun Liu, Hua Shen, Zeyu Han, Yuhan Li, Murong Yue, Zhiyuan Peng, Yuchen Liu, Ziyu Yao, Dongkuan Xu https://github.com/Gentopia-AI
# Abstract
Augmented Language Models (ALMs) em- power large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to vary- ing degrees, are deficient in the following crit- ical features: flexible customization, collab- orative democratization, and holistic evalua- tion. We present Gentopia, an ALM frame- work enabling flexible customization of agents through simple configurations, seamlessly in- tegrating various language models, task for- mats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish GentPool, a public platform enabling the reg- istration and sharing of user-customized agents. Agents registered in GentPool are composable such that they can be assembled together for agent collaboration, advancing the democrati- zation of artificial intelligence. To ensure high- quality agents, GentBench, an integral compo- nent of GentPool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release Gentopia on Github1 and will continuously move forward.
Several projects and frameworks have been proposed to build tool-Augmented Language including Au- Models (ALMs), or "agents", toGPT (Richards, 2023), SuperAGI (Kondi, 2023), HuggingGPT (Shen et al., 2023), GPT- Engineer (Osika, 2023), LangChain (Chase, 2023), Semantic Kernel (Callegari, 2023), and MiniChain (Rush, 2023). Each of these methods is deficient, to varying degrees, in the following critical features.
⢠Adaptive Customization: Many are designed for a single set of tasks without extensive sup- port in customization, or they involve redun- dant and boilerplate implementation that un- necessarily complicates agent tuning.
⢠Tool-augmented NLP Benchmark: A user- customized agent, before registration, is ex- pected to go through a thorough evaluation to ensure its quality. However, there is a lack of comprehensive benchmarks designed for agent evaluation in the aspects of efficiency, safety, robustness, etc.
1
# Introduction
There is a burgeoning trend in research towards augmenting large language models (LLMs) with external tools, enabling them to access up-to-date databases (Jiang et al., 2023; Pan et al., 2023), per- form arithmetic operations (Imani et al., 2023), navigate websites (Gur et al., 2023), develop soft- ware (Wu, 2023), etc. This integration of tools marks a departure from traditional language mod- eling, heralding a new era of intelligent agents ca- pable of interacting with the real world.
1https://github.com/Gentopia-AI/Gentopia. All mentioned works are under MIT license. Check our demo https://www.youtube.com/watch?v=7dZ3ZvsI7sw at and homepage at https://gentopia-ai.github.io/ Gentopia-AI-Homepage/.
⢠Democratization: A platform where user- customized agents can be registered and shared is missing. This hinders the interaction and collaboration of various user-customized agents. Collaborative growth is a critical point toward safe and powerful intelligence.
This paper proposes Gentopia, a lightweight and extensible framework for the research on ALMs. Gentopia allows practitioners to customize an agent with a single configuration file, greatly sim- plifying the process of building, tuning, sharing, and evaluating agents. Various language models, task formats, prompting modules, and plugins are integrated into a unified paradigm, without loss of flexibility for agent customization. In addi- tion, we believe the collaboration between user- customized agents can contribute to the democ-
Gentopia.Al (OF Gentopia 1Gg >» Framework for agent assembling Type Tool Se Vanilla, Rect, ReWOO, G a OpenAll (Function API), Prompt - 2% Ka) arXiv ~ - ys % . : Memory 8 LLM Short-term: scratch-pad : âMy [>] OpenAl: GPT-4, GPT-3.5-turbo, Long-term: VectorDB (chroma, ...) oe âOpen LM: llama, falcon, guanaco, External Agents GentPool Platform for agent sharing â Graders GatedGrader, SoreGrader, DojoGrader, InstructionGrader, CodeGrader, GentBench Benchmark for agent evaluation Reasoning © Math © Coding â__Â¥ Knowledge a + World - EvalPipeline <=)» fnertecae) ad eS + Web Retrieval Tasks & Datasets Safety a Wiki ® Integrity â Pool Multilingual / \ © Translation | wu! got | © Understanding \ wr, @ Efficiency © Token usage wm J © Runtime
Figure 1: An overview of Gentopia.AI, encapsulating following pivotal components: 1) Gentopia: a framework principally designed to assemble an agent instance from a YAML configuration file, composed of multiple pre-built agent components such as the LLM, tools, memory, and external agents; 2) GentPool: a platform engineered to facilitate the registration and sharing of specialized agents, seamlessly integrating GentBench, an ALM benchmark devised specifically for the comprehensive performance evaluation of agents.
ratization of AI. Hence, GentPool, a platform for agent registration and sharing is established. Agents registered in GentPool can be hierarchi- cally assembled together, enabling the collabora- tion of multiple agents. GentPool is accompa- nied by a unique benchmark, GentBench, that can probe customized agents with a holistic evaluation in terms of safety, robustness, efficiency, multilin- gual capabilities, etc. Notably, it is flexible for users to customize the evaluation by configuration.
However, given the unique requirements and customization that each specific domain demands, tools and prompting paradigms developed for a par- ticular task may prove irrelevant or ineffective for others. This poses a significant challenge to the development of a single, all-encompassing agent that performs efficiently across all tasks. Conse- quently, there is a rising need for the collabora- tion of multiple specialized agents. For example, MetaGPT (Wu, 2023) models the entire process of software development with carefully orchestrated standard operating procedures (SOPs) to generate longer program codes for game development. In our work, Gentopia provides smooth support for the composition of agents, which is handy for agent collaboration in different scenarios.
# 2 Background
A variety of agent projects have been proposed, targeting an array of diverse tasks, including auto- mated web navigation (Gur et al., 2023), database management (Jiang et al., 2023), automated game playing (Wang et al., 2023), collaborative soft- ware development (Wu, 2023), etc. Meanwhile, re- searchers are enthusiastically developing generalist agents that can perform well for multiple tasks. Au- toGPT (Richards, 2023) stands for the first experi- mental open-source application for fully automatic AI, with the ultimate goal of âautonomously achiev- ing whatever goal users set". SuperAGI (Kondi, 2023) provides a more user-friendly interface, im- proved memory management, optimized token us- age, and looping detection heuristics. Hugging- GPT (Shen et al., 2023) expands the potential of artificial intelligence by linking to extensive AI models hosted on HuggingFace, thereby support- ing a range of AI tasks in diverse domains and modalities, including language, vision, and speech.
# 3 Design and Implementation
Gentopia aims to provide easy assembly, sharing, and interaction of task-specialized agents. A sin- gle step to improve agent capability and efficiency gives plural contributions to interacted agents, fos- tering collective growth toward greater intelligence.
# 3.1 Rationale
The impetus of Gentopia is rooted in the aspi- ration to construct capable and deployable AI as- sistants. A pertinent question that arises in this context is whether there is a necessity for a massive and expensive model like 175B GPT-4 to perform relatively simple tasks such as summarizing a web search. Recent studies like TinyStories (Eldan and
Types | Logical Diagram of Agent Type Vanitla | & â~ fy â> @& ReAct | 2â BF â â » â â » â 2 Rewoo | 2â BB â â » â g OpenAl | A â S â â G â_â> â_> S â g Opendl A2â §& â â §& â â © â & Memory Oo OE n LJ LLM | Components: © Users Plugins G Function Call API Vector Database |
Figure 2: Gentopia agent types, categorized according to the interaction paradigms between agents and plugins.
Li, 2023), Specializing Reasoning (Fu et al., 2023), Letâs Verify Step by Step (Lightman et al., 2023), and ReWOO (Xu et al., 2023), direct our atten- tion towards an intuitive yet undervalued observa- tion â LLMs exhibit enhanced capabilities when a context/distribution shift is created, specifically narrowed towards certain target tasks.
pletion. Besides, we build a clean and intuitive Command Line Interface (CLI) allowing users to âchatâ with the agent in an interactive way. Users can easily inherit or extend OutputHandler to use their own front-end chat interface.
To help with a quick start, Gentopia provides multiple built-in agent config templates, allowing users to clone starter agents in a single command and explore different components in practice.
However, there is no silver bullet for agent spe- cialization. Various strategies can be employed depending on target tasks. For instance, prompting "Letâs think step by step" in context leads to more accurate math reasoning (Kojima et al., 2022). Pro- viding few-shot examples could guide an ideal exe- cution workflow. Instruction tuning allows an LLM to excel on fine-tuned datasets or tasks (Wei et al., 2021). Tweaking the agent type from ReAct (Yao et al., 2022) to ReWOO significantly reduces the execution time of observation-agnostic tasks like search & summarize.
# 3.3 Adaptive Customization
The agent configuration file encapsulates the criti- cal components of an agent, including:
⢠Basic Attributes. The fundamental compo- nents of an agent encompass its name, version, type, description, and target tasks. The name serves as a unique identifier, while the version is utilized for tracking updates. The agentâs type delineates its interaction paradigm with plugins. The description provides a succinct overview of the agentâs usage, and the tar- get_tasks list the tasks or examples for which the agent specializes. These descriptions can be selectively used in-context for agents to recognize each other upon interaction.
The design of Gentopia is deeply grounded in our belief to share specialized agents for collective growth. Gentopia presents an easy and portable way to build agents, facilitating the reproduction, enhancement, and interaction of agents. A compan- ion platform, GentPool, is used to register pub- lic agents, coupling each with a descriptive Wiki page to help users navigate and search for agents in need. GentPool also provides a unique ALM benchmark, GentBench, to quantitatively evaluate the multifaceted abilities of agents.
⢠LLM is a pivotal component that drives the agentâs behavior. It is typically a dictionary of the model_name and parameters. Gentopia supports a variety of OpenAI LLMs and over 10 kinds of HuggingFace open-source LLMs (including Llama (Touvron et al., 2023), Al- paca (Taori et al., 2023), Vicuna (Chiang et al., 2023), Falcon(Almazrouei et al., 2023), Flan (Wei et al., 2021), MPT (MosaicML NLP Team, 2023), and more), each with a unique set of tunable parameters and usage costs. No- tably, Gentopia unifies support in both CPU
# 3.2 Assembling Agents
At its core, Gentopia embodies each customized agent as a single yaml config file, which can be sent to AgentAssembler to create a corresponding agent instance. An agent instance acts similarly to a language model, where users can use ârunâ or âstreamâ to get completed or incremental com-
Tasks Sub-tasks Description Data Source Examples Reasoning Math measures agent ability to solve a wide range of math problems. MATH (Hendrycks 2021b), et al., GSM8K (Cobbe et al., 2021) Coding measures agent ability to write code to fulfill requirements and pass tests. (Chen HumanEval 2021), et (Austin MBPP et 2021), APPS (Hendrycks et al., 2021a) al., al., Planning measures agent reasoning to complete a task in correct order. LLM- Plan et al., 2023) (Valmeekam Commonsense measures agent ability in reason- ing for everyday questions. BBH (Suzgun et al., 2022) Knowledge World knowledge measures agent ability in an- swering a wide range of factual questions. MMLU (Hendrycks et al., 2020) Domain-specific knowledge measures agent with domain- specific knowledge. AGIEval et al., 2023) (Zhong Web-retrieval measures how capable an agent could answer to surf online and retrieve real-time information. Curated Safety Integrity measures agent ability to avoid generating unsafe or offensive content, even when prompted in crafty ways (eg. jailbreaking). Curated Harmlessness measures agent bias in gender, ethics, age, etc. BBQ et Bold et al., 2021) (Parrish 2021), (Dhamala al., Multilingual Translation asks agent to correctly translate among different languages. Curated Understanding similarly tests an agent if it un- derstands something in different languages. Curated Token usage
# Efficiency
# Run time
These metrics indicate how expensive or time-consuming for agents to execute on average and on different tasks.
Table 1: An overview of GentBenchâs task classification, task descriptions, data sources, and example instances. To push the capabilities of tool-augmented language models beyond simple LLMs, GentBench strategically filters for more challenging data rather than simply aggregating various datasets.
and GPU loading, together with 8-bit and 4- bit weight Quantization, thereby adapting to a wide range of computation environments.
⢠Prompt Template is essentially an f-string template with variable placeholders and a val- idation check. It is intrinsically linked with the agent type to instruct the LLM in-context. Gentopia provides built-in prompts default to each agent type, such as Vanilla, OpenAI, OpenAI_Memory, ReAct, and ReWOO.
Plugins enable agents to interact with exter-
nal tools or other agents, thereby extending their capabilities beyond single language mod- els. Gentopia also allows agents to be built in a hierarchical architecture, such that those closer to the leaves are supposed to be increas- ingly specialized and narrowed to more gran- ular sub-tasks.
⢠Memory allows LLMs to retrieve informa- tion out-of-context. This is particularly useful when itâs necessary to circumvent the con- text limitations of LLMs or to conserve token consumption. Implementation details are de-
scribed in the appendix.
# 3.4 Agent Evaluation Benchmark
GentBench is a unique benchmark for agents or ALMs. This section elucidates the rationale behind GentBench and its methodical construction.
# 3.4.1 Objectives
Due to the massive need of training datasets, re- searchers and organizations tend to use public NLP benchmarks, such as MMLU (Hendrycks et al., 2020), MATH (Hendrycks et al., 2021b), Big- Bench (bench authors, 2023) to enrich the LLM training corpus. Such methods inevitably introduce evaluation bias when the entailed agents are tested against the same set of tasks at inference.
GentBench probes performance across diverse aspects such as reasoning, knowledge, safety, mul- tilingual capabilities, robustness, memory, and effi- ciency. This comprehensive approach transcends the limitations of single datasets, facilitating a more holistic evaluation of an agentâs capabilities.
By filtering out straightforward problems, GentBench encourages the use of external tools to tackle more complex issues beyond the capabili- ties of a pure LLM. Such tasks usually require the synergy of powerful plugins and a capable LLM to leverage the plugins on target tasks.
3.4.2 Benchmark Construction The construction of GentBench involves an exten- sive collection and curation of tasks, and a meticu- lous process to filter out less challenging problems. The gpt-3.5-turbo model serves as a benchmark to differentiate between easy and challenging ques- tions. Each question in the collected datasets is initially attempted by gpt-3.5-turbo. Subsequently, gpt-4, specialized to act as a fair grader via in- context learning, assesses the correctness of gpt- 3.5-turboâs answer. This rigorous evaluation results in a refined dataset composed solely of the chal- lenging problems where gpt-3.5-turbo fails to solve independently.
To prevent overfitting and enhance the modelâs general applicability, GentBench partitions the benchmark into public and private components. The public component fosters model development with open access, while the private component is exclusively for agents to be merged into GentPool, testing the generalized abilities of the agent on unseen tasks. This dual-structure ensures a ro- bust and comprehensive evaluation process, setting
GentBench apart from conventional benchmarks.
3.4.3 EvalPipeline GentBench employs a range of specialized agents, known as "graders", each designed to cater to dif- ferent evaluation needs, including binary outcomes (GatedGrader), continuous scoring (ScoreGrader), pairwise outcomes (DojoGrader), custom measure- ments (InstructedGrader), and unit test execution (CodeGrader). For usersâ convenience, we provide MultiProcessEvalPipeline class to automatically sample from each evaluation class, conduct evalua- tions in parallel by matched graders, and aggregate the results into a comprehensive report. We also integrate our evaluation results with Zeno (Cabrera et al., 2023), a powerful visualization tool assist- ing users in collecting nuanced insight into the strengths and weaknesses of agents.
# 3.5 Collective Contribution
As an open-source project, Gentopia actively encourages users to contribute their specialized agents to GentPool. Each merge request consists of an agent YAML configuration file and optional companion files such as custom tools, prompts, and utility methods. Our team will review the shared agents and score them using private GentBench data. Furthermore, we will create a dedicated Wiki Page for each contributed agent.
Once the agents are incorporated into Gentopia, users can utilize built-in commands to clone or call it for downstream use cases, fostering a dynamic and collaborative environment. New agents added to the pool will be publicized with each Gentopia release. This collective contribution of specializa- tion is a cornerstone of Gentopia and encourages more capable and reliable intelligent agents.
# 4 Case Study
We briefly showcase the process of building an agent, who acts as an experienced and visionary entrepreneur, for the users to create business plans with the help of Gentopia. Further, the users can evaluate the created agent and share it publicly into the GentPool.
# 4.1 Initializing an Agent
Figure 3 illustrates a concrete workflow for work- ing with agents in GentPool. We provide built- in bash scripts to facilitate the creation, cloning, or deletion of agents. GentPool registers tem- plate agents for each built-in agent type, allowing
Config and assemble the agent (B) ot . Human-Agent Interaction © business plans and strategies. Create new agent $ ./create_agent my_agent Clone from pooled agents $ ./clone_agent react_template my_agent = Hnelude .-/dr_science/agent. yam. Delete agent 1 lon is an experienced and visionary entrepreneur âange a bunch of tools and agents to collaborate on his ideas. te: prompt gentpool.pool.elon.pronpt.PromptOfelon YAML -_ >. 9 python assemble.py my_agent « CLi-based : Interaction -âââ6enttopia- Welcome to Gentopia! 6 [=== | id : : 3 \ GUbbased Interaction $ /delete_agent my_agent BSI S it push origin my_agent_branch Config components Custom Operators 44 Share the agent in GentPool © BRS prion evatsate py my_agent GentPool iy Moe ma J a . 2 a am ae
Figure 3: A representative workflow using Gentopia.AI with GentPool. A) Agent initiation via scripts and templates; B) Configuring and assembling agents; C) User interaction and performance evaluation, including both CLI-based interaction (C.1) and GUI-based interaction (C.2); D) Sharing specialized agents in the GentPool.
# 4.4 Agent Specialization and Publication
users to clone, for instance, the "react_template" to start off. An agent instance simply contains an "agent.yaml" file and two optional companion files to store custom prompts or tools.
Users can employ various methods in agent spe- cialization, improving agent performance and ef- ficiency. These approaches include in-context prompt tunings like using few-shot examples, fine- tuning a specialized LLM on desired tasks or datasets, optimizing component configs such as trying new agent types and other sub-agents, and improving the capabilities of tools. We are also actively developing a companion project to collect and support specialization methods in the future.
# 4.2 Custom Configuration
Users can configure essential components of the agent such as name, description, target_task, plu- gins, etc. For instance, shown in Figure 3, users can use the prompt template of âPromptOfElonâ and GPT-4 for constructing the LLM component. They can also add plugins (e.g., âgoogle_searchâ and âweb_pageâ) to boost the agent. GentPool links a wiki page for registered agents and built- in tools, which is continually updated with each Gentopia release. Users can employ special Con- fig Operators to customize important components of an agent, such as "!prompt" for customizing prompt_template, "!tool" for self-defined tools as plugins, "!include" for sub-agents as plugins, "!file" to read local files in text format, and "!env" to read an environmental variable.
Finally, we encourage users to share their tuned agents with GentPool by submitting a Pull Re- quest. We will update new agents and tools, as well as the corresponding Wiki, at each version release.
# 5 Conclusion
This paper introduces Gentopia.AI, an open- source platform designed for tool-augmented LLMs. Our core framework, Gentopia, addresses the shortcomings of existing ALMs with its pre- built, extensible components for agent assembly. Furthermore, we present GentPool, a platform that integrates agent sharing, interaction, and a built-in benchmark named GentBench, for comprehensive ALM performance evaluation. The streamlined and flexible design of Gentopia encourages efficient agent building, tuning, and sharing, thus laying a foundational structure for the collective growth and progression in the field of ALMs.
# 4.3 Testing and Evaluation
There are two methods to assess the performance of a new agent: qualitative human evaluation and quantitative GentBench evaluation. Users can call "assemble.py" to initiate a CLI chat interface and converse with the target agent. Alternatively, users can use "evaluate.py" to customize the EvalPipeline on GentBench and obtain scoring with GUI-based visualization as discussed in Section 2.4.3.
# Acknowledgement
Gratefully, we thank Dr. Graham Neubig and the Zeno team for advising and integrating with us on agent evaluations and visualizations.
Gentopia.AI is a new open-source community and expanding features in the long term. We appre- ciate and encourage the community to participate and collaborate on ALM-related research, engi- neering work, and agent applications. Please get in touch with us for future opportunities.
# Ethics Statement
In developing our framework Gentopia, we ad- hered to rigorous ethical principles to ensure the responsible use and deployment of ALMs. We tried to make it as transparent as possible so that users can more reliably use it. Also, the data sources used in GentBench are collected from publicly available datasets, and no demographic or confidential infor- mation from users is accessed, safeguarding their privacy and anonymity.
Furthermore, the availability of multiple agents and datasets in Gentopia is intended to facilitate diverse and unbiased research while ensuring that developers can easily customize and share their agents responsibly. Researchers and developers can explore the full potential of ALMs while safe- guarding the interests of all stakeholders involved.
# References
Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Al- shamsi, Alessandro Cappelli, Ruxandra Cojocaru, Merouane Debbah, Etienne Goffinet, Daniel Hes- low, Julien Launay, Quentin Malartic, Badreddine Noune, Baptiste Pannier, and Guilherme Penedo. 2023. Falcon-40B: an open large language model with state-of-the-art performance.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732.
BIG bench authors. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of lan- guage models. Transactions on Machine Learning Research.
Ãngel Alexander Cabrera, Erica Fu, Donald Bertucci, Kenneth Holstein, Ameet Talwalkar, Jason I. Hong, and Adam Perer. 2023. Zeno: An interactive frame- work for behavioral evaluation of machine learning. In CHI Conference on Human Factors in Computing
Systems, CHI â23, New York, NY, USA. Association for Computing Machinery.
Shawn Callegari. 2023. Semantic Kernel: Integrate cutting-edge LLM technology quickly and easily into your apps.
Harrison Chase. 2023. LangChain: Next Generation Language Processing.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Ka- plan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open- source chatbot impressing gpt-4 with 90%* chatgpt quality.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers to solve math word prob- lems. arXiv preprint arXiv:2110.14168.
Jwala Dhamala, Tony Sun, Varun Kumar, Satyapriya Krishna, Yada Pruksachatkun, Kai-Wei Chang, and Rahul Gupta. 2021. Bold: Dataset and metrics for measuring biases in open-ended language genera- tion. In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 862â872.
Ronen Eldan and Yuanzhi Li. 2023. Tinystories: How small can language models be and still speak coherent english? arXiv preprint arXiv:2305.07759.
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and Tushar Khot. 2023. Specializing smaller language models towards multi-step reasoning. arXiv preprint arXiv:2301.12726.
Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksan- dra Faust. 2023. A real-world webagent with plan- ning, long context understanding, and program syn- thesis. arXiv preprint arXiv:2307.12856.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Man- tas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. 2021a. Measuring coding challenge competence with apps. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language under- standing. In International Conference on Learning Representations.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021b. Measuring mathematical problem solving with the math dataset. In Thirty- fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2).
Shima Imani, Liang Du, and Harsh Shrivastava. 2023. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398.
Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, and Ji-Rong Wen. 2023. Struct- gpt: A general framework for large language model arXiv preprint to reason over structured data. arXiv:2305.09645.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu- taka Matsuo, and Yusuke Iwasawa. 2022. Large lan- guage models are zero-shot reasoners. Advances in neural information processing systems, 35:22199â 22213.
Abhay Kondi. 2023. SuperAGI: Open-source frame- work to build, manage and run useful Autonomous AI Agents.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Letâs verify step by step. arXiv preprint arXiv:2305.20050.
Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs.
Anton Osika. 2023. GPT-Engineer: Specify what you want it to build, the AI asks for clarification, and then builds it.
Shirui Pan, Linhao Luo, Yufei Wang, Chen Chen, Ji- apu Wang, and Xindong Wu. 2023. Unifying large language models and knowledge graphs: A roadmap. arXiv preprint arXiv:2306.08302.
Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R Bowman. 2021. Bbq: A hand-built bias benchmark for question answering. arXiv preprint arXiv:2110.08193.
Toran Bruce Richards. 2023. Auto-GPT: An Au- tonomous GPT-4 Experiment.
Sasha Rush. 2023. MiniChain: A tiny library for coding with large language models.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugging- gpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580.
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Se- bastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny
Zhou, , and Jason Wei. 2022. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Alpaca: A strong, replicable instruction-following model. Stanford Center for Research on Foundation Models., 3(6):7.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and effi- cient foundation language models. arXiv preprint arXiv:2302.13971.
Karthik Valmeekam, Sarath Sreedharan, Matthew Mar- quez, Alberto Olmo, and Subbarao Kambhampati. 2023. On the planning abilities of large language models (a critical investigation with a proposed benchmark). arXiv preprint arXiv:2302.06706.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Man- dlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv: Arxiv-2305.16291.
Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, An- drew M Dai, and Quoc V Le. 2021. Finetuned lan- guage models are zero-shot learners. arXiv preprint arXiv:2109.01652.
Alexander Wu. 2023. MetaGPT: The Multi-Role Meta Programming Framework.
Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, and Dongkuan Xu. 2023. Rewoo: Decoupling reasoning from observations for efficient augmented language models. arXiv preprint arXiv:2305.18323.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2022. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. Agieval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364. | {
"id": "2302.13971"
} |
2308.04026 | AgentSims: An Open-Source Sandbox for Large Language Model Evaluation | With ChatGPT-like large language models (LLM) prevailing in the community,
how to evaluate the ability of LLMs is an open question. Existing evaluation
methods suffer from following shortcomings: (1) constrained evaluation
abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that
task-based evaluation, where LLM agents complete tasks in a simulated
environment, is a one-for-all solution to solve above problems. We present
AgentSims, an easy-to-use infrastructure for researchers from all disciplines
to test the specific capacities they are interested in. Researchers can build
their evaluation tasks by adding agents and buildings on an interactive GUI or
deploy and test new support mechanisms, i.e. memory, planning and tool-use
systems, by a few lines of codes. Our demo is available at
https://agentsims.com . | http://arxiv.org/pdf/2308.04026 | Jiaju Lin, Haoran Zhao, Aochi Zhang, Yiting Wu, Huqiuyue Ping, Qin Chen | cs.AI, 14J60 (Primary) 14F05, 14J26 (Secondary) MSC-class: 14J60 (Primary)
14F05, 14J26 (Secondary) 68T42 | submit to EMNLP2023 demo track | null | cs.AI | 20230808 | 20230808 | 3 2 0 2
g u A 8 ] I A . s c [
1 v 6 2 0 4 0 . 8 0 3 2 : v i X r a
AgentSims: An Open-Source Sandbox for Large Language Model Evaluation Jiaju Lin1,2, Haoran Zhao1,3 â, Aochi Zhang1, Yiting Wu1,4, Huqiuyue Ping1,5, Qin Chen6 1PTA Studio 2 Pennsylvania State University, 3 Beihang University, 4 Sun Yat-sen University, 5Zhejiang University, 6East China Normal University [email protected] [email protected] and [email protected]
# Abstract
With ChatGPT-like large language models (LLM) prevailing in the community, how to evaluate the ability of LLMs is an open ques- tion. Existing evaluation methods suffer from following shortcomings: (1) constrained evalu- ation abilities, (2) vulnerable benchmarks, (3) unobjective metrics. We suggest that task- based evaluation, where LLM agents complete tasks in a simulated environment, is a one- for-all solution to solve above problems. We present AgentSims, an easy-to-use infrastruc- ture for researchers from all disciplines to test the specific capacities they are interested in. Researchers can build their evaluation tasks by adding agents and buildings on an interactive GUI or deploy and test new support mecha- nisms, i.e. memory, planning and tool-use sys- tems, by a few lines of codes. Our demo is available at https://agentsims.com .
1
# 1 Introduction
LLMs have revolutionized Natural Language Pro- cessing (NLP) and beyond. They demonstrate great potential in few-shot learning(Brown et al., 2020), code generation(Nijkamp et al., 2023), rea- soning(Yao et al., 2023) and other tasks. Further- more, LLM powered autonomous agents(Weng, 2023) are widely applied in solving complex prob- lems, like multimodal generation(Shen et al., 2023), software developing(Qian et al., 2023) and social simulating (Park et al., 2023).
Although LLMs have reformed the paradigm of NLP, the problem of evaluation keeps haunt- ing this field. Old benchmarks become out-of- date. Since LLMs achieve human-level Natural Language Understanding (NLU) and Natural Lan- guage Generation (NLG) abilities(OpenAI, 2023). To address the pressing need for novel benchmarks, the NLP community has introduced an array of fresh evaluation tasks and datasets, encompassing a
# ââ Corresponding author.
diverse spectrum of abilities, including close-book question-answering (QA) based knowledge test- ing(Hendrycks et al., 2020; Huang et al., 2023), human-centric standardized exams(Zhong et al., 2023), multi-turn dialogue(Lin and Chen, 2023), reasoning(Liu et al., 2023a; bench authors, 2023) and safety assessment(Sun et al., 2023).
However, there are still many problems with these new benchmarks. 1) Evaluated abilities are limited by the task formats. Since a majority of these tasks adopt a single-turn QA format, they are insufficient to comprehensively evaluate vari- ous aspects of LLMsâ capabilities. For instance, they fail to assess the modelsâ proficiency in ad- hering to instructions in dialogue or mimicking human-like social interactions. 2) Benchmarks can be easily hacked. Avoiding the leakage of test set is of paramount importance when evaluate a modelâs ability. Nonetheless, considering the amount of pretrained knowledge of LLM, it has become more and more inevitable to inadvertently mix test cases into the training set.(Gunasekar et al., 2023). 3) For open-ended QA, existing metrics are not objec- tive. Previous metrics for open-ended QA involve automatic metrics, and human-rating as subjective metrics(Zhou et al., 2023). In the LLM era, text seg- ment matching based metrics become out-of-date. To mitigate the high-costly issue of human-rating, todayâs researchers employ well-aligned LLMs like GPT4 as automatic raters. Nevertheless, the most significant problem of this approach is that it can not evaluate super GPT4-level models, and LLMs are biased toward specific features (Wang et al., 2023b).
Based on these observations, we suggest task- based evaluation for LLM benchmarks. Specifi- cally, given an artificial social-economic environ- ment, LLM-driven agents should achieve the pre- defined task goals to prove their abilities, just like humans accomplishing goals in real world or games to show their capacities. Task-based evaluation is
1
a one-for-all solution for current issues: 1) Task- based evaluation can test an LLMâs overall ability. The complexity of social simulation and adaptation far exceeds simple QA and can formulate more challenging tasks for LLMs. LLM agents need to be equipped with the ability from NLU to Theory of Mind (ToM) (Premack and Woodruff, 1978). 2) Task solving processes are less likely to be hacked. Different from unchanged test datasets whose for- mats can be easily mimicked and added to training data. Task settings are diversified and the emergent social behaviors and groups are less likely to be described and included in training corpus. 3) Task passing rate is an objective metric. Compared with popular rating methods by ChatGPT, the passing rate does not rely on any black-box rating process, i.e. deep neural networks or human brains, thus it is an objective and fair metric for the comparison between LLMs.
To all-around estimate LLMsâ capacities, we hope researchers from all fields take part in the de- velopment of evaluation tasks. However, a key ob- stacle to fostering a collaborative research commu- nity is the absence of a standard paradigm, an easy- to-use and extensible research platform. Previous works pursue the most efficient way to implement a sandbox while ignoring the need of non-specialist users. Besides, the poor readability further results in poor extensiblity and user churn. Moreover, the agentsâ performance varies with different support systems, i.e. memory, planning and tool-use sys- tem. We need a standard implementation to ensure the reproducibility of experimental results.
To this end, we introduce AgentSims, an inter- active, visualized, and program-based infrastruc- ture for curating evaluation tasks for LLMs. It creates an artificial town with various buildings and residents. The core objective of AgentSims is to streamline the task design process, eliminating hurdles that researchers from various backgrounds and programming proficiencies might encounter.
⢠For researchers focusing on LLM, AgentSims is extendable and combinable to allow users to combine different plan, memory and learning systems to study the impacts and effectiveness of various system design.
⢠For experts from other fields like behavioral eco- nomics or social psychology, AgentSims pro- vides an interactive UI for map design and agent creation and lower the entry threshold. Such a user-friendly architecture further facilitates the
2
cooperation between different fields and the fu- ture prosperity of the LLM community.
# 2 Related Work
# 2.1 Benchmarks for Large Language Models
The emergency of ChatGPT and other LLMs re- quires new benchmarks for effective evaluation. bench authors (2023) is the most accepted bench- mark to evaluate LLMâs general abilities. It con- tains more than 200 tasks, covering from child- hood development, to social bias. Zhong et al. (2023) collect test tasks from human-centric stan- dardized exams like GRE and SAT. (Hendrycks et al., 2020; Huang et al., 2023) are benchmarks focusing on measuring knowledge acquired in pre- training. They covers subjects across STEM, the humanities, the social sciences. Lin and Chen (2023) build a benchmark for LLMsâ multiturn dialogue abilities. Every dialogue is limited to two turns for simplicity. Sun et al. (2023) focus on mea- sure the safety of LLMs. They curate a adversarial attack dataset containing insulting instructions and test whether LLMs can be jailbroke. However, as mentioned above, existing datasets have issues that can not fully demonstrate abilities of LLMs. AgentSims overcomes these difficulties and renders a chance for overall evaluation of LLMs.
# 2.2 Multi Agent Cooperation
With LLMs demonstrate their overwhelming abil- ities, researchers find that multi LLM agents can generate better results than a single one. Nair et al. (2023) is one of the earliest attempts of multi-agent cooperation. It builds a forum for agents to com- municate feedback and iteratively improve their healthcare suggestions. Li et al. (2023) expand the application field of agent cooperation method by role-playing. From programming to domain- specific QA, it surpass single agent baselines. Qian et al. (2023) build a software development com- pany, by meticulously dividing the development process into four distinct stages, leading to efficient resolution of specific subtasks. Liu et al. (2023b) first apply multi-agent simulated society for align- ment, where agents in a sandbox learn from social interaction to understand moral rules. (Park et al., 2023) is the most sophisticated application of multi agent sandbox. Authors build support mechanisms to enable agents to produce believable individual and emergent social behaviors. However, none existing methods provide a user-friendly interface
Editable map + Talk to Fe
Figure 1: Front end of AgentSims, showing in a pixel game style. Users can create agents and buildings in the left-side panel and observe agents behaviors in the main screen. Besides setting-then-observing, users can also play as the mayor and talk with agents to intervene the experiment.
for unprofessional researchers or build a standard paradigm for agent support system. Nonetheless, current multi-agent systems are task-oriented rather than evaluation-oriented. AgentSims works as a platform for easy benchmark construction.
# 3 Key Components
ning System reorganizes a goal by decomposing the target, summarizing current condition and gen- erating subtasks. Specifically, it is assembled by a series of pluggable prompt modules, which assess current achievement of ultimate goals by checking the memory system and making decisions for next steps. Once a new step is completed, it would be recorded in the memory system.
As shown in Figure 2, key components of AgentSims can be divided into two parts: 1) genera- tive agents driven by LLM support mechanisms. 2) buidlings and equipment that consist the sandbox environment.
# 3.1 Generative Agents
If prompted properly, LLMs can generate believ- able behaviors(Park et al., 2022). However, to achieve human-like memory performance and long- term coherence, LLM is not enough. We need aux- iliary systems to enable agents to perform more naturally. Referring to recent work(Park et al., 2023; Wang et al., 2023a), we abstract these sup- portive mechanisms into three parts: Planning Sys- tem, Memory System, and Tool-Use System.
Memory System. Agents capable of emulating human behavior necessitate comprehending a vast array of experiences, beyond what a prompt can contain. The complete memory stream is too ex- pensive to be accommodated in the limited context window, and attempting to do so can overwhelm the model. Thus, we add a memory system for agentsâ experience retention and retrieval. The system is built upon a vector database for efficient storing and retrieving. Specifically, every agentâs daily mem- ory is encoded into embeddings and stored in the database. Every time when agents face some new situation that needs the previous memory, such as chatting with familiar people, the memory system can retrieve the information about their relationship to improve agent behaviour consistency.
Planning System LLMs have shown some plan- ning and reasoning capacities. However, faced with complex tasks, vanilla LLMs always fail for lacking long-term arrangement abilities. Hence, we introduce a Planning System to ensure agentsâ behaviors are coherent and believable. The Plan-
Tool-Use System. Ideally, agents continuously explore the simulated world would learn from pre- vious failures and successes, then acquire diverse skills. In our framework, to realize this feature, we present a tool-use system, which endows agents
3
Environment Interaction Agents chat A Agent Bio Goal feedback Y 7 operation Equipments ! sation) Fiession | Memory System Support Systems = Planning System © Tool-Use System a
Figure 2: Overview of AgentSims architecture
with the ability to accomplish real-world tasks. Par- ticularly, the tool use system stores equipment- operation pairs learning from feedback of using equipment. Once agents select equipment to in- teract with by planning and memory system, they need to infer an initial operation by the description of the equipment. And the equipment will return an operation result as feeedback. If the agent believes the result meets their operation purpose, a new skill would be stored in the Tool-Use System.
# 3.2 Buildings and Equipment
feedback and refine its operations.
# Interaction scenarios
Regarding the researchersâ backgrounds and pur- poses, we design two interaction modes: User Mode and Developer Mode. In the User Mode, re- searchers who consider little about background sup- port systems are target users. For researchers chas- ing better LLMs performance, Developer Mode provides flexible protocols for their development of different support mechanisms.
Interactive buildings and equipment are necessities for the diversity of an LLM sandbox. They com- pose the physical environments of the simulated world. In our framework, a building or location con- tains equipment like stoves or office desks. Thus, buildings are defined by the equipment they con- tain and equipment is the basic element composing the interactive environment. More specifically, the equipment can be defined by some definition texts describing its features and support function, which can be either hard-coded by the developer or a language model that supports self-adaptive agent- equipment interaction. When an agent interacts with equipment, as shown in Figure 2, its operation text will be sent to the background support model. The support function then returns the operation outcome based on the predefined rules or model- generated texts. For example, if an agent wants to get a cup of tea from a stove, the operation is âGet a cup of teaâ and the support function may return âMeaningless operationâ according to the hard code or âYou can not get tea from a stoveâ generated by the model. Then the agent would learn from the
# 4.1 User Mode
In the User Mode, AgentSims provides an inter- active interface in a pixel game style, as shown in Figure 1. Researchers can create agents, construct buildings and equipment in a graphical interface, fo- cusing on the rationality of experiment design, free from complex background driving mechanisms.
Agent Creation. Users can define agents within the system through an easy-to-use front end, as shown in the Figure 3. AgentSims provides various protocols for users to create functional agents. Not only basic information like goals and biography, but also options of Memory and Planning Systems. We pre-design a list of memory and planning sys- tems and users can choose their preference from a drop-down menu.
Building Creation. Users can also customize the physical environment by constructing buildings. As shown in Figure 4, users define a building by choosing a pre-configured building with equipment inside. To be noticed, the equipment in buildings are predefined but can be modified in the Developer
4
Plan system OF Goal um cera Memory system Total payment
Figure 3: Agent Creation
HouseZ Total payment Confirm
Figure 4: Building Creation
# Mode.
Experiment Intervene. Besides observing, users can play as the major agent to participate in the experiment. By talking with other agents, users can intervene the experiment naturally rather than modify agentsâ memory or goals roughly.
# 4.2 Developer Mode
Developer Mode is designed for professional devel- opers who are familiar with the properties of LLMs and pursue better performance of LLMs on a well- defined complex task. The highly-modularized fea- ture of AgentSims enables developers to add new functions within a few lines of code.
Agent Design. Developers have the flexibility to create agents tailored for various objectives and assemble diverse agents within a single sandbox for observation. To streamline the process of agent customization, weâve abstracted the LLM back- bone and distinct support systems into separate classes and function calls, as illustrated below. This empowers developers to personalize an agent by making adjustments to these abstract functions.
5
class LLMCaller: def __init__(self, model: str) -> None: self.model = get_model(model) def ask(self, prompt: str) : result = self.model.generate(prompt) return result class Agent: def __init__(self, name, bio, goal, model, memorySystem, planSystem, buildings, cash): self.state = State() self.state.buildings = buildings self.state.cash = cash self.caller = Caller(model) def plan(self) -> None: self.state.plan_prompt = ... self.state.plan = self.caller.ask(self.state.pl_prompt) def memory_store(self) -> None: self.state.memory_prompt = ... self.state.memory = self.caller.ask(self.state.mem_prompt) def use(self, facility: str, operation: str, description: str) -> None: self.state.use_prompt = ... self.state.use = self.caller.ask(self.state.use_prompt)
Building and Equipment Design. To customize the physical environment, developers can design new buildings and equipment by configuring corre- sponding json files. A new equipment can be defined by its type, de- scription and a support function.
[{"id": 1, "type": "counter", "function":..., "description": "This is the counter ...",}]
In some cases, agents can purchase commodities or earn salaries at the equipment. We use another configure file to annotate these economic features.
[{ "id": 1, "menu": { "chicken": 20,}, "salary":0,}],
We define buildings by a type and the equipment it contains. Hence we use a two-dimensional array to mark the facility ids in the building blocks.
[{"assets": "store_v1.2_0719", "id": 1, "price": 2000, "type": "store", "blocks":[[1,0,0...1,1]], "equipment":[0,1,0..]]}]
# Implementation
AgentSims is run using Python 3.91 and requires installing the requirements.txt file provided in the codebase using Pythonâs package manager PyPI2.
# 5.1 Backend
is built using Tornado3, a The web server lightweight Python web framework. It also uses the websockets library for API calls and push noti- fications, and mysql-connector-python to interact with the MySQL4 database.
5.2 Frontend Frontend The web client is built with Unity5. The client built by WebGL6 is embedded in the project code and can be accessed through a browser after proxying with nginx7.
# 6 Example Application Tasks
# 6.1 Subject LLM as participants
When subject LLM agents are participants of an artificial scenario, researchers can evaluate LLMâs social abilities, like ToM . In this case, the formu- lation of specific social scenes is realized by other baseline agents driven by stronger LLMs. For ex- ample, to study a new modelâs social adaptation abilities in a hostile environment, we can embed colleague agents driven by GPT4 with a strong de- sire of bullying newcomers. Then we place subject agents into this adversarial milieu and test whether the new model can understand otherâs emotion and improve how colleagues perceive it.
# 6.2 Subject LLM as mayor
To assess LLMâs long-term planning and organiza- tion abilities, researchers can appoint the subject LLM as the mayor of a town or the president of a company, where residents or employees are driven by baseline agents like GPT4. To overcome the difficulties set ahead deliberately or emerging dur- ing the experiments, then achieve the final goal of the task, the subject LLM needs to recruit new resi- dents to handle new problems, issue sound policies
1https://www.python.org/downloads/release/ python-390
2https://pypi.org/ 3https://www.tornadoweb.org/en/stable/ 4https://www.mysql.com/ 5https://unity3d.com 6https://get.webgl.org 7https://nginx.org/en/
6
and modify the out-of-date ones, found new func- tional buildings to satisfy emerging requirements, and so on. By analyzing the success rate of LLM mayor under different difficulties, researchers can gain valuable insights into the diverse capabilities of the LLM.
# 6.3 Applications besides Evaluation
Besides evaluating LLMs, AgentSims can be used as a data generation platform. Due to the fantastic NLG abilities of LLMs, researchers have applied them in data annotation and augmentation. How- ever, some data involving social judgement and participation necessitate a more intricate approach than a single prompt can provide. Thus, we can simulate a specific social background and let LLMs generate data more precisely. Liu et al. (2023b) have applied simulated society in alignment data generation. With AgentSims tailored for more intri- cate social simulations, its potential for enhancing data generation across various disciplines is unde- niable. Moreover, our program can also benefit social sci- ence researchers, by conducting more controllable preliminary experiments. Given that sota LLMs can understand human instructions and simulate human behaviours, social science researchers can design social environments as they wish for prelim- inary studies. Once researchers have a hypothesis, pilot experiments can be conducted in our virtual sandbox as a feasibility check.
# 7 Conclusion
In this paper, we present AgentSims, avisualized and program-based infrastructure for LLM test sandbox construction. AgentSims aims to facil- itate researchers in effectively building LLM evalu- ation tasks. It not only intends to make all its code openly available but also commits to continuously updating its documentation with comprehensive tutorials.
# Limitations
As a sandbox system, AgentSimsâ simulation abil- ity is limited by the accuracy of LLMs and the diversity of buildings and equipment. It can never fully reflect real world cases. Besides, although task-based evaluation is a sound approach to mea- sure the general ability of LLMs, it can hardly re- flect fine-grained abilities like math reasoning. The pass rate of tasks can not provide insights on why LLMs success or fail.
# References
BIG bench authors. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of lan- guage models. Transactions on Machine Learning Research.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio César Teodoro Mendes, Allie Del Giorno, Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. 2023. Textbooks are all you need. arXiv preprint arXiv:2306.11644.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2020. Measuring massive multitask language under- standing. arXiv preprint arXiv:2009.03300.
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023. C- eval: A multi-level multi-discipline chinese evalu- ation suite for foundation models. arXiv preprint arXiv:2305.08322.
Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem. 2023. Camel: Communicative agents for "mind" explo- ration of large scale language model society.
Yen-Ting Lin and Yun-Nung Chen. 2023. Llm-eval: Unified multi-dimensional automatic evaluation for open-domain conversations with large language mod- els.
Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji Zhou, and Yue Zhang. 2023a. Evaluating the logical reasoning ability of chatgpt and gpt-4.
Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M. Dai, Diyi Yang, and Soroush Vosoughi. 2023b. Training socially aligned language models in simulated human society.
Varun Nair, Elliot Schumacher, Geoffrey Tso, and Anitha Kannan. 2023. Dera: Enhancing large lan- guage model completions with dialog-enabled resolv- ing agents.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. 2023. Codegen: An open large language model for code with multi-turn program synthesis.
7
OpenAI. 2023. Gpt-4 technical report.
Joon Sung Park, Joseph C. OâBrien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2023. Generative agents: Interactive sim- ulacra of human behavior.
Joon Sung Park, Lindsay Popowski, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2022. Social simulacra: Creating popu- lated prototypes for social computing systems.
David Premack and Guy Woodruff. 1978. Does the chimpanzee have a theory of mind? Behavioral and brain sciences, 1(4):515â526.
Chen Qian, Xin Cong, Cheng Yang, Weize Chen, Yusheng Su, Juyuan Xu, Zhiyuan Liu, and Maosong Sun. 2023. Communicative agents for software de- velopment.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugging- gpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580.
Hao Sun, Zhexin Zhang, Jiawen Deng, Jiale Cheng, and Minlie Huang. 2023. Safety assessment of chinese large language models.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Man- dlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and An- ima Anandkumar. 2023a. Voyager: An open-ended embodied agent with large language models.
Jiaan Wang, Yunlong Liang, Fandong Meng, Zengkui Sun, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023b. Is chatgpt a good nlg evaluator? a preliminary study.
Lilian Weng. 2023. Llm-powered autonomous agents. lilianweng.github.io.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models.
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied, Weizhu Chen, and Nan Duan. 2023. Agieval: A human-centric benchmark for evaluating foundation models.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023. Lima: Less is more for alignment. | {
"id": "2009.03300"
} |
2308.03983 | SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool | Large Language Model (LLM) based Generative AI systems have seen significant
progress in recent years. Integrating a knowledge retrieval architecture allows
for seamless integration of private data into publicly available Generative AI
systems using pre-trained LLM without requiring additional model fine-tuning.
Moreover, Retrieval-Centric Generation (RCG) approach, a promising future
research direction that explicitly separates roles of LLMs and retrievers in
context interpretation and knowledge memorization, potentially leads to more
efficient implementation. SimplyRetrieve is an open-source tool with the goal
of providing a localized, lightweight, and user-friendly interface to these
sophisticated advancements to the machine learning community. SimplyRetrieve
features a GUI and API based RCG platform, assisted by a Private Knowledge Base
Constructor and a Retrieval Tuning Module. By leveraging these capabilities,
users can explore the potential of RCG for improving generative AI performance
while maintaining privacy standards. The tool is available at
https://github.com/RCGAI/SimplyRetrieve with an MIT license. | http://arxiv.org/pdf/2308.03983 | Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi | cs.CL, cs.AI | 12 pages, 6 figures | null | cs.CL | 20230808 | 20230808 | 3 2 0 2
g u A 8 ] L C . s c [
1 v 3 8 9 3 0 . 8 0 3 2 : v i X r a
# SimplyRetrieve: A Private and Lightweight Retrieval-Centric Generative AI Tool
Youyang Ng, Daisuke Miyashita, Yasuto Hoshi, Yasuhiro Morioka, Osamu Torii, Tomoya Kodama, Jun Deguchi Kioxia Corporation, Japan [email protected]
# Abstract
Large Language Model (LLM) based Genera- tive AI systems have seen significant progress in recent years. Integrating a knowledge re- trieval architecture allows for seamless inte- gration of private data into publicly available Generative AI systems using pre-trained LLM without requiring additional model fine-tuning. Moreover, Retrieval-Centric Generation (RCG) approach, a promising future research direc- tion that explicitly separates roles of LLMs and retrievers in context interpretation and knowl- edge memorization, potentially leads to more efficient implementation. SimplyRetrieve is an open-source tool with the goal of providing a localized, lightweight, and user-friendly in- terface to these sophisticated advancements to the machine learning community. SimplyRe- trieve features a GUI and API based RCG plat- form, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module. By leveraging these capabilities, users can ex- plore the potential of RCG for improving gen- erative AI performance while maintaining pri- vacy standards. The tool is available at https: //github.com/RCGAI/SimplyRetrieve with an MIT license.
1
# 1 Introduction
Generative Large Dense Language Retriever Model Context Knowledge Interpretation Memorization Degree of Role Separation Looe Retrieval- Retrieval- Augmented Centric
Retrieval- Retrieval- Augmented Centric
Figure 1: Retrieval-Centric Generation (RCG) approach presents an innovative concept that leverages the mutu- ally beneficial interaction between LLMs and retrievers for more efficient context interpretation and knowledge memorization. Increased clarity in role-separation be- tween context interpretation and knowledge memoriza- tion can potentially boost the performance of generative AI systems.
effective in adapting these models to specific do- mains for various tasks (Brown et al., 2020). This has given rise to the field of prompt-engineering. Additionally, Chain-of-Thought (Wei et al., 2022b; Kojima et al., 2022) decomposes a complex task assigned into manageable steps, thereby expand- ing the capabilities of generative-based language models even further.
Generative-based Natural Language Processing (NLP) has witnessed significant progress (Brown et al., 2020) in recent years. With the introduction of Transformer (Vaswani et al., 2017) architecture, the possibility of developing high-accuracy lan- guage models that can perform tasks such as text generation, text summarization and language trans- lation has become a reality. These models (Brown et al., 2020; Chowdhery et al., 2022), when scaled up to billions of parameters (Wei et al., 2022a), have shown remarkable improvements in text gen- eration tasks such as zero-shot inference, popu- larized the term Generative AI. Instead of model fine-tuning, careful design of prompts has proven
Training large language models (LLMs) requires immense computational resources, often involv- ing thousands of high-end GPUs. Fine-tuning these models can also be challenging. Although prompt-engineering helped to reduce the need for fine-tuning, there was still noticeable instruction misalignment when interacting with a human user. To address this issue, techniques such as rein- forcement learning from human feedback (RLHF) (Christiano et al., 2017) have been explored to align the behavior of LLMs with human values (Ouyang et al., 2022; OpenAI, 2023). Additionally, QLoRA (Dettmers et al., 2023), combining low-rank adap-
tation technique (Hu et al., 2022) and quantization technique, has made it possible to fine-tune these models on individual developerâs hardware, mak- ing them more accessible to a wider range of users. Despite these advances, there are still limitations to the capacity of LLMs, and they do not inher- ently recognize information that was not present during training and fine-tuning. Memorization of factual knowledge in the long tail is also a chal- lenge (Mallen et al., 2023).
Most recently, there has been growing interest in integrating external knowledge sources into LLMs for generating text (Borgeaud et al., 2022; Guu et al., 2020; Lewis et al., 2020). Similar approaches have also been proposed in solving computer vi- sion tasks (Nakata et al., 2022; Iscen et al., 2023). Retrieval-Augmented Generation (RAG) (Lewis et al., 2020) architecture is an approach that en- hances the capabilities of LLMs by incorporating external data sources using a sparse or dense re- triever (Karpukhin et al., 2020), enabling the use of privately owned data without requiring retraining or fine-tuning the LLM (Chase, 2022). However, developing retrieval-augmented LLM-based gen- erative models is still in its early stages. Our pro- posed tool can help facilitate these developments.
Additionally, we introduce a new architec- tural concept called Retrieval-Centric Genera- tion (RCG), which builds upon the Retrieval- Augmented Generation approach by emphasizing the crucial role of the LLM in interpreting context and entrusting knowledge memorization to the re- triever component, putting greater importance on retriever, as depicted in Figure 1. By separating context interpretation from knowledge memoriza- tion, this approach has the potential to reduce the scale (Carlini et al., 2023) of the LLM required for generative tasks, leading to more efficient and inter- pretable results. Moreover, this approach may help mitigate hallucinations (Maynez et al., 2020) by limiting the scope of the LLMâs generation. Once we define RCG as above, we can re-define RAG that enables more permissible usage of LLMâs in- herent knowledge, whereas RCG prioritizes clear demarcations between context interpretation and knowledge memorization.
SimplyRetrieve is an open-source tool aimed at providing a localized, lightweight, and user- friendly interface to Retrieval-Centric Generation approach to the machine learning community. This tool encompasses a GUI and API based RCG plat-
form, assisted by a Private Knowledge Base Con- structors and a Retrieval Tuning Module. Sim- plyRetrieve is designed to be simple and acces- sible to the community, as well as end-users. Our retrieval-centric platform incorporates multi- ple selectable knowledge bases featuring Mixtures- of-Knowledge-Bases (MoKB) mode and Explicit Prompt-Weighting (EPW) of retrieved knowledge base. By designing SimplyRetrieve with these features, we enable the machine learning commu- nity to explore and develop with a lightweight, private data interface to LLM-based generative AI systems, with a focus on retrieval-centric gen- eration. Potential developments that can be ex- plored using this tool include: (1) examining the effectiveness of retrieval-centric generation in de- veloping safer, more interpretable, and responsi- ble AI systems; (2) optimizing the efficiency of separating context interpretation and knowledge memorization within retrieval-centric generation approach; and (3) improving prompt-engineering techniques for retrieval-centric generation. Sim- plyRetrieve is available at https://github.com/ RCGAI/SimplyRetrieve.
Our contributions can be summarized as follows:
⢠We propose SimplyRetrieve, an innovative and user-friendly tool that leverages GUI and API platform to facilitate a Retrieval-Centric Generation approach. This platform is further strengthened by two key components: Private Knowledge Base Constructor and Retrieval Tuning Module.
⢠We open sourced our tool to the machine learn- ing community and identify potential develop- ment directions of Retrieval-Centric Genera- tion.
# 2 Related Works
The emergence of Retrieval-Augmented Genera- tion architecture has spurred the development of numerous open-source tools. The ChatGPT Re- trieval Plugin1, for instance, integrates the ability to retrieve and enhance personal or organizational documents into the widely used ChatGPT model (OpenAI, 2023). Similarly, fastRAG (Izsak et al., 2023) provides a streamlined platform for con- structing efficient retrieval-augmented generation
1https://github.com/openai/ chatgpt-retrieval-plugin
Retrieval Tuning Module RCG Tuning Prompt Knowledge Base a) Generative Large Language Retriever ANNS «= Knowledge Base based eS Knowledge Base «= Knowledge Base MoKB: Mixture-of-Knowledge-Base EPW: Explicit Prompt-Weighting of Knowledge Base
Figure 2: SimplyRetrieve is an open-source tool that provides a localized, lightweight, and user-friendly interface to the Retrieval-Centric Generation approach for the machine learning community. This tool features a GUI and API based RCG platform, assisted by a Private Knowledge Base Constructor and a Retrieval Tuning Module.
pipelines. Additionally, LangChain (Chase, 2022) offers a comprehensive generative chat AI library featuring agents, data augmentation, and mem- ory capabilities. Finally, Haystack (Pietsch et al., 2019) presents an all-encompassing NLP frame- work supporting question answering, answer gen- eration, semantic document search, and retrieval- augmentation. Both LangChain and Haystack em- ploy agent-based pipelining techniques and can process complex queries. However, this complex- ity may hinder the explainability of LLMs, mak- ing it challenging to interpret their performance in retrieval-augmented settings.
On the other hand, our work offers a lightweight and transparent approach to implementing so- phisticated retrieval-centric, as well as retrieval- augmented architecture, while maintaining a strong emphasis on response interpretability and wider accessibility to the community. Unlike previous works such as PrivateGPT (PrivateGPT), which provides a privacy-preserving chat AI tool but lacks customization options and analytical capabilities, our tool offers a comprehensive set of features for tailoring and analyzing retrieval-centric generation. Furthermore, to the best of our knowledge, we are the first to introduce RCG concept and show initial experiments of it using our tool.
# 3 Tool Design
SimplyRetrieve is designed to deploy RCG pipeline: construct knowledge base, tune archi- tecture, make predictions. In this paper, we fo- cus on describing the core specifications of the tool. For details about the setup procedures, refer
to the repository of https://github.com/RCGAI/ SimplyRetrieve.
# 3.1 GUI and API based Retrieval-Centric Generation Platform
As shown in Figure 2, there are two dense models in our tool: an LLM and an Approximate Near- est Neighbor Search (ANNS) based Knowledge Retriever. The LLM can be any one of the off- the-shelf open-source LLM models available in Hugging Face (Wolf et al., 2020), ranging from 1B to more than 100B-scale in parameters such as Touvron et al. (2023a,b). The Knowledge Retriever employs a dense retriever that is compatible with various embedding models available in Hugging Face. Additionally, our tool allows integration of multiple knowledge bases simultaneously, enabling user-selectable knowledge bases depending on the specific use case.
In terms of the GUI, we have designed a sim- ple yet intuitive layout using Gradio (Abid et al., 2019), which provides a familiar streaming chatbot interface with user control for managing the run- ning modes of the retriever, engineering prompts, and configuring the tool. As depicted in Figure 3, our GUI features a comprehensive retrieval-centric tuning panel for functions including manual knowl- edge base selection from multiple sources and Mixture-of-Knowledge-Base modes. Moreover, we employ Explicit Prompt-Weighting of retrieval to adjust the level of influence exerted by the retriever. To ensure seamless integration, we also developed a comprehensive API access function using the Gradio Client Interface, and we allow multi-user
Chatbot Functional Tabs | Whatis the purpose of establishing KIOXIA lwate Corporation? Streaming Chatbot Interface The purpose of establishing KIOXIA Iwate Corporation is to meet the growing demand for flash memory through advanced manufacturing processes utilizing Al Retrieval-Centric Tuning Panel Use KnowledgeBase KnowledgeBase Mode Selectable KnowledgeBase KnowledgeBase Kioxia Expert Prompt Weighting KnowledgeBase Weightag
Figure 3: The GUI design of SimplyRetrieve features four primary tabs. The Chat tab serves as the central query and response interface with retrieval-centric tuning panel. The Prompt tab provides an intuitive editor for modifying, updating, and saving prompts used by the AI. The Config tab enables users to customize various tool settings and save their preferences. Finally, the Analysis tab offers a comprehensive analytics platform for analyzing and logging data related to SimplyRetrieveâs performance and usage.
concurrent access to both UIs, leveraging Gradioâs queue functionality to manage requests efficiently. The retrieval-centric tuning panel enables lightweight and simplistic access to RCG. By using the manual knowledge base selection mode, users can construct and import multiple private knowl- edge bases simultaneously into this tool. The abil- ity to select the most relevant knowledge base for each task allows users to maintain control over the selection process while avoiding any unexpected outcomes. Our MoKB mode enables automatic se- lection of the most suitable knowledge base based on the similarity between the query and knowledge base functional descriptions. We use semantic co- sine similarity of embedding space to calculate these scores, providing an efficient and lightweight approach to knowledge base auto-selection. By updating the functional descriptions in the configu- ration file, users can further enhance the accuracy of the selection algorithm.
and leave it for future work.
# 3.2 Private Knowledge Base Constructor
Our Retrieval-Centric Generation Platform is as- sisted by a Private Knowledge Base Constructor that creates a local and personalized knowledge base using the userâs documents. This construc- tor employs a scalable documents loader that can handle large volumes of documents by chunking and streaming the loading, splitting and knowledge base creation processes, allowing for efficient doc- ument processing. The constructor supports var- ious document formats such as PDF, TXT, DOC, DOCX, PPT, PPTX, HTML, MD, CSV, among others, and can be easily expanded by editing con- figuration file. Additionally, the length of passages in the documents splitting function is easily config- urable to meet specific requirements.
Additionally, our Explicit Prompt-Weighting fea- ture allows manual adjustment of the degree of influence of retrievers on the language model, en- abling customized control over the balance between retriever and LLM. Through prompt-engineering or token weight adjustment, users can adapt the tool to their specific needs, ensuring optimal performance. SimplyRetrieve has incorporated Explicit Prompt- Weighting through prompt-engineering, where the weightage can be adjusted to fine-tune the percent- age of knowledge tokens to be used in the prompt out of retrieved tokens. However, we have not im- plemented token weight adjustment in this study
After generating the sources for the knowledge base, we use a dense encoder to convert the text into numerical embeddings that can be used for semantic search and retrieve. To accommodate large-scale knowledge bases, we utilize ANNS for efficient semantic retrieval. By default, our tool employs the Hierarchical Navigable Small Worlds (HNSW) (Malkov and Yashunin, 2020) algorithm, but we also provide support for flat indexing and the IVFPQ-HNSW method, which combines in- verted file indexing with product quantization and HNSW course quantizers. The Index Constructor function automatically creates the required index files for semantic searching. We implement our indexing function by using Faiss library (Johnson
et al., 2019).
# 3.3 Retrieval Tuning Module
The Retrieval Tuning Module of our tool includes three key functionalities: prompt-engineering, tool configuration, and analysis and data logging. The prompt-engineering functionality allows users to easily edit, update, and save retrieval-related prompts using a user-friendly Prompt Tab within our GUI. Available prompts are AI Prefix, Retriever Prefix, Retriever Suffix, Model Prefix and Model Suffix. The configuration functionality enables users to modify and save all configurable settings via the Config Tab within our GUI. Finally, the anal- ysis and data logging functionality collects and dis- plays retrieval-related analysis data, including re- trieved knowledge base, query, response, sentence- level and token-level similarity scores, in the Anal- ysis Tab of our GUI. Similarity scores are calcu- lated based on both semantic cosine similarity of sentence-to-sentence embeddings and all-token-to- token embeddings. This approach allows us to capture both local and global similarities between sentences, leading to more accurate assessments of their comparability. Additionally, users can save all logged data to a log file for further analysis. GUI designs are depicted in Figure 4, 5 and 6 of Ap- pendix A.2. To deploy an end-user mode, users can simply disable the update functions in the Retrieval Tuning Module through command-line options.
# 4 Evaluations
In this section, we perform several qualitative eval- uations to demonstrate the usability and behavior of our tool. We construct our knowledge base using the most recent information available on the web- site of an organization2. We utilize the models pub- licly available on Hugging Face, Wizard-Vicuna- 13B3 (Xu et al., 2023; Chiang et al., 2023) as the LLM and Multilingual-E5-base4 (Wang et al., 2022) as the encoder for our evaluations, unless specified otherwise. We load both models into a single Nvidia A100 GPU in 8-bit INT8 mode for lower memory usage and higher inference speed. We set temperature of LLM to 0. We utilize HNSW for indexing of knowledge bases and set the num- ber of passages retrieved to 5.
2https://www.kioxia.com/en-jp/top.html 3https://huggingface.co/ehartford/ Wizard-Vicuna-13B-Uncensored
4https://huggingface.co/intfloat/ multilingual-e5-base
# 4.1 Qualitative Evaluation
We compare the results of three approaches: Retrieval-Centric Generation (RCG), Retrieval- Augmented Generation (RAG), and Retrieval-OFF Generation (ROG). Note that in this work, we de- fine RAG as allowing more permissible integra- tion of LLMâs inherent and externally retrieved knowledge, whereas RCG prioritizes clear demar- cations between context interpretation and knowl- edge memorization. Investigating advanced meth- ods in extracting RCG behavior is a promising research topic. In this work, we conduct simple experiments using prompt-engineering technique to reveal the potential of RCG approach. Specifi- cally, for RCG, we employ a retrieval suffix prompt that reads âanswer the following question with the provided knowledge.â For RAG, we use a less constraining prompt that reads âanswer the follow- ing question. You may use the provided knowl- edge.â Finally, for ROG, which is the baseline in our evaluation, we disable the retriever altogether. By using the prompt-engineering function in Sim- plyRetrieve, we can easily perform these experi- ments. Queries are human annotated. In Table 1 and 2, blue-colored texts indicates factually correct responses while red-colored texts indicates halluci- nations.
ApproachResponse RCG Kioxia has factories located in both Yokkaichi city, Mie prefecture and Kitakami city, Iwate pre- fecture in Japan. Kioxia has factories located in various regions across the globe including Japan, Thailand, Malaysia, China, and the United States. However, its main production site is located in Yokkaichi City, Mie Prefecture, Japan. Our main production site for HDD and SSD prod- ucts is located in Yamagata, Japan. We also have manufacturing sites in Thailand and the Philip- pines that produce optical disc media such as Blu- ray Discs and DVDs. RAG ROG
Table 1: Qualitative comparison between Retrieval- Centric, Retrieval-Augmented and Retrieval-OFF ap- proaches using prompt-engineering in SimplyRetrieve.
Based on the results in Table 1, we observed that RCG successfully answered our query accu- rately, whereas RAG attempted to integrate the retrieved knowledge base with LLMâs own knowl- edge of the specific organization, resulting in par- tially erroneous information. Meanwhile, ROG re- sponded to our query with LLMâs preexisting mis- taken knowledge, generating falsehoods. Our find-
ings suggest that crafting clear and direct prompts, such as âanswer the given question using the pro- vided knowledgeâ, can encourage retrieval-centric behavior from the LLM, although further refined prompt-engineering may be necessary for optimal performance.
As shown in Table 2, RCG effectively addressed the query and produced accurate results. When combining RCG with a 50% Explicit Prompt- Weighting (RCG-EPW), where only the first 50% of retrieved knowledge base are injected into the prompt, the model generated partial responses but still maintained factual accuracy. In contrast, the pure LLM without RC enabled (ROG) provided hallucinatory responses, highlighting the impor- tance of incorporating retrieval-centric techniques in language models for improved performance.
Approach RCG Response Kioxia developed a high capacity storage device which could store large amounts of image data required by AI algorithms. This allowed them to deploy memory centric AI systems capable of handling multiple classi- fication tasks simultaneously without com- promising performance or accuracy. RCG-EPW Deployment of Memory-centric AI with ROG high capacity storage The company developed a new image clas- sification algorithm that achieved top-1 ac- curacy of 82.7% on the ImageNet valida- tion set, which is an improvement over pre- vious state-of-the-art results by more than one percentage point. This was made possi- ble through advancements in hardware and software technologies such as Tensor Core GPUs and our own deep learning frame- work, ONNX-MXNET.
Table 2: Effects of Retrieval-Centric Generation in Sim- plyRetrieve, based on the knowledge base about an or- ganization.
# 4.2 Accuracy & Speed Evaluations
In addition to evaluating the effectiveness of RCG using human annotations, we also conduct an in- ternal evaluation of our toolâs performance using a self-generated dataset. To create this dataset, we pass relevant passages through the language model Llama-2-13B-chat (Touvron et al., 2023b) to gen- erate 10 query and label pairs. For details on how we generated this dataset, refer to Appendix A.4. We employ Rouge-L score (Lin, 2004) as our per- formance metric. We perform this evaluation by using the API function of SimplyRetrieve. Our results in Table 3 show that RCG significantly im-
proves the Rouge-L score compared to the baseline approach of ROG, while also slightly more com- petitive than RAG. Moreover, despite the fact that RCG processes longer prompts than ROG due to the addition of knowledge tokens, we observe a decrease in processing time owing to the increased precision and brevity of the generated responses. Specifically, number of response tokens generated in RCG are in average 36% less than those gen- erated in ROG. This efficient performance may facilitate broader adoption within the community, as users can expect quicker response generation without sacrificing accuracy.
Approach ROG RAG RCG Rouge-L Score 0.186 0.359 0.413 time/query(s) 17.22 18.41 11.67
Table 3: Response accuracy & speed evaluation of Sim- plyRetrieve.
Finally, our findings suggest that even a mod- estly sized LLM of 13B parameters can demon- strate satisfactory performance in RCG approach towards never-seen-before factual knowledge with- out any model fine-tuning, potentially facilitates the deployment of Generative AI systems in real- world scenarios. See Appendix A.2 for further discussions and A.5 for ablation studies.
# 5 Conclusion
We introduced SimplyRetrieve, an open-source tool that aims to provide a localizable, lightweight, and user-friendly GUI and API platform for a Retrieval- Centric Generation approach based on LLMs. Our tool enables developers and end-users to easily in- teract and develop with a privacy-preserving and lo- cally implemented LLM-based RCG system, which we believe will contribute to the democratization of these technologies within the machine learning community. Increased clarity in role-separation be- tween context interpretation and knowledge memo- rization can potentially boost the performance and interpretability of generative AI systems, facilitat- ing deployments.
# Limitations
It is important to note that this tool does not provide a foolproof solution for ensuring a completely safe and responsible response from generative AI mod- els, even within a retrieval-centric approach. The
development of safer, interpretable, and responsi- ble AI systems remains an active area of research and ongoing effort.
Generated texts from this tool may exhibit varia- tions, even when only slightly modifying prompts or queries, due to the next token prediction behav- ior of current-generation LLMs. This means users may need to carefully fine-tune both the prompts and queries to obtain optimal responses.
# References
Abubakar Abid, Ali Abdalla, Ali Abid, Dawood Khan, Abdulrahman Alfozan, and James Zou. 2019. Gradio: Hassle-free sharing and testing of ml models in the wild. arXiv preprint arXiv:1906.02569.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoff- mann, Trevor Cai, Eliza Rutherford, Katie Milli- can, George Bm Van Den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego De Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hennigan, Saffron Huang, Loren Mag- giore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Geoffrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retrieving from trillions of tokens. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 2206â2240. PMLR.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Ma- teusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. In Ad- Language models are few-shot learners. vances in Neural Information Processing Systems, volume 33, pages 1877â1901. Curran Associates, Inc.
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. 2023. Quantifying memorization across neural lan- guage models. In The Eleventh International Confer- ence on Learning Representations.
Harrison Chase. 2022. LangChain.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open- source chatbot impressing gpt-4 with 90%* chatgpt quality.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vin- odkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Mor- eira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Paul F Christiano, Jan Leike, Tom Brown, Miljan Mar- tic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. 2023. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Ming-Wei Chang. 2020. Realm: Retrieval- augmented language model pre-training. In Proceed- ings of the 37th International Conference on Machine Learning, ICMLâ20. JMLR.org.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations.
Ahmet Iscen, Alireza Fathi, and Cordelia Schmid. 2023. Improving image recognition by retrieving from In Proceedings of the web-scale image-text data. IEEE/CVF Conference on Computer Vision and Pat- tern Recognition (CVPR), pages 19295â19304.
Peter Izsak, Moshe Berchansky, Daniel Fleischer, and Ronen Laperdon. 2023. fastRAG: Efficient Retrieval Augmentation and Generation Framework.
Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. IEEE Billion-scale similarity search with GPUs. Transactions on Big Data, 7(3):535â547.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open- domain question answering. In Proceedings of the
2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769â6781, Online. Association for Computational Linguistics.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu- taka Matsuo, and Yusuke Iwasawa. 2022. Large lan- guage models are zero-shot reasoners. In Advances in Neural Information Processing Systems.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich Küttler, Mike Lewis, Wen-tau Yih, Tim Rock- täschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge- In Advances in Neural Infor- intensive nlp tasks. mation Processing Systems, volume 33, pages 9459â 9474. Curran Associates, Inc.
Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74â81, Barcelona, Spain. Association for Computational Linguistics.
Yu A. Malkov and D. A. Yashunin. 2020. Efficient and robust approximate nearest neighbor search using IEEE hierarchical navigable small world graphs. Trans. Pattern Anal. Mach. Intell., 42(4):824â836.
Alex Mallen, Akari Asai, Victor Zhong, Rajarshi Das, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. When not to trust language models: Investigating effectiveness of parametric and non-parametric mem- ories. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 9802â9822, Toronto, Canada. Association for Computational Linguistics.
Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factu- ality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906â1919, On- line. Association for Computational Linguistics.
Kengo Nakata, Youyang Ng, Daisuke Miyashita, Asuka Maki, Yu-Chieh Lin, and Jun Deguchi. 2022. Re- visiting a knn-based image classification system In Computer Vision â with high-capacity storage. ECCV 2022, pages 457â474, Cham. Springer Nature Switzerland.
OpenAI. 2023. Chatgpt. https://openai.com/blog/ chatgpt.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, volume 35, pages 27730â27744. Curran Associates, Inc.
Malte Pietsch, Timo Möller, Bogdan Kostic, Julian Risch, Massimiliano Pippi, Mayank Jobanputra, Sara Zanzottera, Silvano Cerza, Vladimir Blagojevic, Thomas Stadelmann, Tanay Soni, and Sebastian Lee. 2019. Haystack: the end-to-end NLP framework for pragmatic builders.
PrivateGPT. PrivateGPT. Accessed: 2023-07-04.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- bert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, An- thony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Di- ana Liskovich, Yinghai Lu, Yuning Mao, Xavier Mar- tinet, Todor Mihaylov, Pushkar Mishra, Igor Moly- bog, Yixin Nie, Andrew Poulton, Jeremy Reizen- stein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subrama- nian, Xiaoqing Ellen Tan, Binh Tang, Ross Tay- lor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Ro- driguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine- tuned chat models. arXiv preprint arXiv:2307.09288.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Å ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.
Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022. Text embeddings by weakly- supervised contrastive pre-training. arXiv preprint arXiv:2212.03533.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022a. Emer- gent abilities of large language models. Transactions on Machine Learning Research. Survey Certifica- tion.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le,
and Denny Zhou. 2022b. Chain-of-thought prompt- ing elicits reasoning in large language models. In Advances in Neural Information Processing Systems, volume 35, pages 24824â24837. Curran Associates, Inc.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38â45, Online. Association for Computational Linguistics.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large lan- guage models to follow complex instructions. arXiv preprint arXiv:2304.12244.
# A Appendix
# A.1 GUI Design of Retrieval Tuning Module
Figure 4 shows the GUI design of prompt- engineering interface. Figure 5 shows the GUI design of tool configuration interface. Figure 6 shows the GUI design of analysis and data logging interface.
# A.2 Applications
SimplyRetrieve has vast potential for various prac- tical applications. For instance, it can serve as the foundation for building private, personalized, and lightweight generative AI systems. Sensitive and personal information can be securely stored and processed within the retrieval-centric platform. This approach enables organizations to develop interpretable and locally tailored generative AI sys- tems for critical infrastructure. Additionally, the use of a relatively smaller language model as a contextual interpreter in this approach facilitates seamless integration into edge computing environ- ments. The decreasing costs of data storage devices also make it feasible to establish large-scale knowl- edge bases. Furthermore, SimplyRetrieve paves the way for the development of LLM-based person- alized AI assistants. Lastly, an in-depth exploration of LLM-based retrieval-centric generation using SimplyRetrieve may offer valuable insights and opportunities for future research.
# A.3 Prompt Catalogs
Table 5 shows the prompts used in the evaluation results of Section 4 while Table 6 shows sample prompts that may exhibit retrieval-centric behav- iors. Prompts are passed to LLM in the following format: AI Prefix + Retriever Prefix + Retrieved Knowledge Base + Retriever Suffix + Model Prefix + Query + Model Suffix.
# A.4 Evaluation Data
Table 7 presents the data used for evaluating the per- formance of our proposed tool in Section 4.2. We employed the Llama-2-13B-chat model (Touvron et al., 2023b) with a customized prompt ("relevant information." Please create a query and answer from the paragraph above) to generate query and label pairs automatically from relevant information on the website of an organization.
# A.5 Ablation Study
As shown in Table 4, our ablation study reveals that adjusting Explicit Prompt-Weighting in SimplyRe- trieve leads to significant improvements in Rouge- L scores. Interestingly, increasing the weightage to 50% yields the highest improvement, beyond which the performance remains relatively stable. This suggests that the top 50% of retrieved knowl- edge bases are crucial for achieving high accuracy. However, it is important to note that these findings may not generalize to all datasets or knowledge bases, and further investigation may be necessary to determine optimal weightages for specific use cases. In comparing the response times for each query across different settings, we observe that the response times remain relatively consistent for all cases of RCG, while they increase significantly in the baseline (ROG) setting. Despite the fact that RCG processes longer prompts than the baseline, we observe a decrease in processing time owing to the increased precision and brevity of the generated responses.
Approach ROG RCG-EPW-10 RCG-EPW-20 RCG-EPW-30 RCG-EPW-40 RCG-EPW-50 RCG-EPW-60 RCG-EPW-70 RCG-EPW-80 RCG-EPW-90 RCG Rouge-L 0.186 0.275 0.313 0.403 0.354 0.414 0.331 0.392 0.306 0.378 0.413 time/query(s) 17.22 12.72 13.00 13.06 11.98 12.46 11.36 13.56 16.32 13.13 11.67
Table 4: Ablation study of Explicit Prompt-Weighting in SimplyRetrieve.
AI Prefix Retriever Prefix " Retriever Suffix " answer the following question with the provided knowledge. Model Prefix Model Suffix AI:
Table 5: Prompts used in the evaluation results of Section 4.
AI Prefix you are a Retrieval-Centric AI. Knowledge below are provided. Retriever Prefix " Retriever Suffix " only use the provided knowledge to answer the following question. Model Prefix Model Suffix Response: " " answer the following ques- tion with the provided knowledge. AI: " " only use the provided knowledge to answer the following question. AI: you are a Retrieval-Centric AI. Knowledge below are provided. " " only use the provided knowledge to answer the following question. AI:
Table 6: Sample Prompts Catalog of Retrieval-Centric Generation in SimplyRetrieve.
Chat Prompt Config Analysis Prompt =Al Prefix + Retriever Prefix + Retrieved KnowledgeBase + Retriever Suffix + Model Prefix + Query + Model Suffix Model Prefix Nodal Sus Model-related Prompts Retriever Prefix Retriever Suffix Retrieval-related Prompts answer the following question with the provided knowledge. Apes Al Prompt Prompts will be saved to subdirectory of prompts in separate files Update Prompts Save Prompts
Figure 4: The Prompt-Engineering interface of SimplyRetrieve. The Tab is for editing, updating and saving of model-related and retrieval-related prompts. Available prompts are AI Prefix, Retriever Prefix, Retriever Suffix, Model Prefix and Model Suffix.
Chat Prompt Config. Analysis Config File Config Editing { "Ulm _config": { âmodel_argsâ: { âmodel_typeâ: "/volt/user/youyang/hf_models/Wizard-Vicuna-138-Uncensored', *device_map":{ â0 Config Updating and Saving âSave Path Update Config Save Config configs/default_chat1_paper_new.json
Figure 5: The Tool Configuration interface of SimplyRetrieve. The Tab is for modifying, updating and saving all configurable settings.
sta Logsing Sentence & Token Levels Retrieval Similarity Analysis Query & KnowledgeBase Sentence Level Similarity Score Query & KnowledgeBase Tokens Level Similarity Score 0.8461886476455748, 0.7471896810316315 Response & KnowledgeBase Sentence Level Similarity Score Response & KnowledgeBase Tokens Level Similarity Sore 0.874272306649239 0.7319581929734684 Query Prompts Analysis Tell me about the cutting-edge deep learning research in 2023. "As of March 31, 2023 * What is NAND Flash Memory? 2023/06/21 20:48 System technology/Development of Image Classification System Deploying Memory-Centric Al with High-capacity Storage |... https://wwwkioxia.com/en {p/rd/technology/topics/topics-39.html 1/3 Development of image Classidpcation System Deploying Memory-Centric Response Alwith High-capacity Storage January 12, 2023 In deep learning, neural networks can and Big Data Automated Cleanrooms oT Infrastructure and Big Data Sophisticated Al-based Analysis Image Classi@cation Using Deep The cutting-edge deep learning research in 2023 focuses on developing memory-centric artificial intelligence that Learning Digital Twin and related awards Fieldwork Improvement Using 2023/06/21 20:48 System ogs Display & tore https://www-kioxia.com/en-jp/rd/technology/topics/topics-39.html 2 2 Fig3 Accuracy comparison when Increasing Save Log Path analysis/logs.csv
Figure 6: The Analysis and Data Logging interface of SimplyRetrieve. The Tab is for analysis and logging of retrieved knowledge base, query, response, retrieval sentence level and tokens level similarity scores. Users can save all the logged data for further analysis.
What is the purpose of es- tablishing KIOXIA Iwate Corporation? When did Kioxia start op- erating its new fabrication facility (Fab7)? What are some ways in which the Yokkaichi Plant contributes to society? What is KIOXIAâs ap- proach to corporate social responsibility? What kind of environmen- tal initiatives has KIOXIA its Yokkaichi taken at Plant? What was the total rev- enue for the Kioxia group in fiscal year 21? What initiatives has the Yokkaichi Plant taken to protect the environment? What is KIOXIAâs mis- sion statement? What is the purpose of collecting over two bil- lion data items daily? According to the text, Kioxia started operating its new fabrication facility (Fab7) in the fall of 2022. The Yokkaichi Plant contributes to society by manufacturing high-quality flash memory and SSDs that support the digitalization of various industries, as well as through its research and development (R&D) activities. Additionally, the plant prioritizes legal compliance, environmental protection, and social initiatives, making it a responsible corporate citizen. According to the passage, KIOXIAâs approach to corporate social responsibility involves making their Yokkaichi Plant a factory that is admired and trusted by everyone in the community. They aim to contribute to local development and meet the expectations of local communities by being a responsible member of society. At KIOXIAâs Yokkaichi Plant, all employees, including resident suppliers, actively partici- pate in various initiatives to contribute to society by improving the environment. These include measures to mitigate the environmental burden of the companyâs semiconductor manufacturing facilities. According to the passage, the total revenue for the Kioxia group in fiscal year 21 was 1,526.5 billion yen. The Yokkaichi Plant has taken various initiatives such as reducing greenhouse gas emis- sions, conserving water resources, and implementing recycling programs to protect the environment. These efforts aim to not only reduce the plantâs ecological footprint but also raise awareness among local communities about the importance of environmental conservation. According to the passage, KIOXIAâs mission statement is to uplift the world with âmem- oryâ, which means they aim to provide high-quality memory and SSD products that offer fast speeds, large capacities, and reliable performance. Additionally, the company seeks to achieve this goal through collaboration and knowledge sharing across different depart- ments and organizations. To maintain high-quality products by analyzing the vast amounts of data generated during the manufacturing process using advanced technologies like deep learning and AI.
Table 7: Dataset used in the evaluation results of Section 4.2. | {
"id": "2302.13971"
} |
2308.03688 | AgentBench: Evaluating LLMs as Agents | Large Language Models (LLMs) are becoming increasingly smart and autonomous,
targeting real-world pragmatic missions beyond traditional NLP tasks. As a
result, there has been an urgent need to evaluate LLMs as agents on challenging
tasks in interactive environments. We present AgentBench, a multi-dimensional
evolving benchmark that currently consists of 8 distinct environments to assess
LLM-as-Agent's reasoning and decision-making abilities in a multi-turn
open-ended generation setting. Our extensive test over 27 API-based and
open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong
ability of acting as agents in complex environments, there is a significant
disparity in performance between them and OSS competitors. We identify the
typical reasons of failures in environments and LLMs, showing that poor
long-term reasoning, decision-making, and instruction following abilities are
the main obstacles for developing usable LLM agents. Training on code and high
quality multi-turn alignment data could improve agent performance. Datasets,
environments, and an integrated evaluation package for AgentBench are released
at \url{https://github.com/THUDM/AgentBench}. | http://arxiv.org/pdf/2308.03688 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, Jie Tang | cs.AI, cs.CL, cs.LG | 55 pages | null | cs.AI | 20230807 | 20231025 | 3 2 0 2
t c O 5 2 ] I A . s c [
2 v 8 8 6 3 0 . 8 0 3 2 : v i X r a
Technical Report (v0.2)
# AGENTBENCH: EVALUATING LLMS AS AGENTS
Xiao Liu1,*, Hao Yu1,*, Hanchen Zhang1, Yifan Xu1, Xuanyu Lei1, Hanyu Lai1, Yu Gu2, Hangliang Ding1, Kaiwen Men1, Kejuan Yang1, Shudan Zhang1, Xiang Deng2, Aohan Zeng1, Zhengxiao Du1, Chenhui Zhang1, Sheng Shen3, Tianjun Zhang3, Yu Su2, Huan Sun2, Minlie Huang1, Yuxiao Dong1, Jie Tang1
1Tsinghua University, 2The Ohio State University, 3UC Berkeley
# ABSTRACT
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks. As a result, there has been an urgent need to evaluate LLMs as agents on challenging tasks in interactive environments. We present AGENTBENCH, a multi-dimensional evolving benchmark that currently consists of 8 distinct environments to assess LLM-as-Agentâs reasoning and decision-making abilities in a multi-turn open- ended generation setting. Our extensive test over 27 API-based and open-sourced (OSS) LLMs shows that, while top commercial LLMs present a strong ability of acting as agents in complex environments, there is a significant disparity in performance between them and OSS competitors. We identify the typical reasons of failures in environments and LLMs, showing that poor long-term reasoning, decision-making, and instruction following abilities are the main obstacles for developing usable LLM agents. Training on code and high quality multi-turn alignment data could improve agent performance. Datasets, environments, and an integrated evaluation package for AGENTBENCH are released at https:// github.com/THUDM/AgentBench.
Operating System T ~ â gpt-4 4.01 Web i -2 Browsing Database c aude? 4 gpt-3.5-turbo API-based text-davinci-003 aah Commercial \ claude-instant i LLMs | | \ chat-bison-001 i { sho wee | t NS iy je text-davinci-002 ; f ping | P codellama-34b | 0.96 ! vicuna-13b | 0.93 i llama-2-70b tn 0.78 i llama-2-13b fen 0.77 4 Jaital H I OSS LLMs igita j i House-holding Card Game dolly: 0.14 A chatglm-6b }-0.11! ! Lateral Thinking Puzzle oasst-12b 0.03! f lava: lava: Mm gpt-4 (0613) lm chat-bison-001 lim llama-2-13b iAvgi0.51 sAvgi2.15 l@l claude-2 lM llama-2-70b MM vicuna-13b-v1.3 i} 1 2 3 4 MMM gpt-3.5-turbo (0613) MINN codellama-34b-instruct [â¢ll dolly-12b AgentBench Overall Score (a) Typical LLMsâ AgentBench performance (b) Overall scores of AgentBench across 8 environ (Relative) against the best in each environment -ments. Dashed lines for two LLM typesâ average.
Figure 1: An overview of LLMs on AGENTBENCH. While LLMs begin to manifest their proficiency in LLM-as-Agent, gaps between models and the distance toward practical usability are significant.
# INTRODUCTION
Intelligent agents and autonomous entities (Searle, 1970; Maes, 1994; Wooldridge & Jennings, 1995) that are capable of decision-making and action execution in particular environments have been key
XL and HY are lead authors that contributed equally. Email: {shawliu9,longinyh}@gmail.com â Work partially done when HY, YG visited Tsinghua University. â¡Website for AGENTBENCH leaderboard & demos: https://llmbench.ai/agent
1
Technical Report (v0.2)
Real-world Challenges 8 Distinct Environments (On an Ubuntu bash terminal) Recursively set all files in the directory to read-only, except those of mine. Operating (Given Freebase APIs) System Database What musical instruments do Minnesota- born Nobel Prize winners play? LLM-as-Agent (Given MySQL APIs and existed tables) H Large > ga Grade students over 60 as PASS in the table. Language Knowledge | [Digital card â Models Graph Game (On the GUI of Aquawar) Agent This is a two-player battle game, you are a player with four pet fish cards ...... a âA man walked into a restaurant, ordered a bow! ee ? of turtle soup, and after finishing it, he =} Interactive House Thi committed suicide. Why did he do that? ' : Hota Lateral Think Environ } Environment: | Holding -ing Puzzles (in the middle of a kitchen in a simulator) =ment 4 __ Please put a pan on the dinning table. â| â||Sa (On the official website of an airline) 7 a Book the cheapest flight from Beijing to Los. web web Angeles in the last week of July. Shopping am Browsing [== ==
Figure 2: AGENTBENCH is the first systematic benchmark to evaluate LLM-as-Agent on a wide array of real-world challenges and 8 distinct environments. In total, 27 LLMs are examined in this edition. concepts of artificial intelligence (AI) historically. Notwithstanding substantial advancements in deep learning algorithms applied in both computer vision and natural language processing (NLP), their potential for developing efficient and practically usable assisting agents remains largely unexplored.
The advent of Large Language Models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; Touvron et al., 2023), such as GPT-4 (OpenAI, 2023), has brought plenty of new opportunities to this realm. Through extensive alignment training (Ouyang et al., 2022; Wei et al., 2022a; Sanh et al., 2022), LLMs have not only mastered traditional NLP tasks but also showcased an impressive ability to comprehend human intent and execute instructions. This has spurred the development of various LLM-based applications for autonomous goal completion (like AutoGPT (Richards, 2023), BabyAGI (Nakajima, 2023), AgentGPT (age, 2023)) as well as LLM agents situated in social and game contexts (Park et al., 2023; Wang et al., 2023b; Zhu et al., 2023), sparking substantial public interest and discussions.
Despite these advancements, the lack of a systematic and standard benchmark to evaluate LLM-as- Agent presents a critical challenge. Historically, text-based game environments (Osborne et al., 2022; Côté et al., 2019; Hausknecht et al., 2020; Urbanek et al., 2019) have been employed for language agent evaluation. But they often suffer from the limitation of closed, discrete action spaces, as well as their primarily narrow focus on modelsâ commonsense grounding. More recently, attempts on embodied agents (Reed et al., 2022; Huang et al., 2022; Ahn et al., 2022) have employed complicated multi-modal simulators based on games (Küttler et al., 2020; Fan et al., 2022), GUI (Shi et al., 2017; Toyama et al., 2021), and indoor scenes (Shen et al., 2021; Srivastava et al., 2022). However, these simulators, despite their complexity, do not accurately reflect the practical use cases of LLMs, and their multi-modal nature creates a hurdle for the urgent evaluation of existing text-only LLMs. Finally, most benchmarks now for agents focus on single environments and thus fail to provide a comprehensive overview of LLMs across diverse application scenarios.
To address these challenges, we introduce AGENTBENCH, a multi-dimensional benchmark designed to evaluate LLM-as-Agent across a spectrum of different environments. AGENTBENCH encompasses eight distinct environments (Cf. Figure 4), which could be categorized into three types of groundings:
Code: Operating System, Database, Knowledge Graph (Anonymous, 2023) ⢠Game: Digital Card Game, Lateral Thinking Puzzles, House-Holding (Shridhar et al., 2020b) ⢠Web: Web Shopping (Yao et al., 2022), Web Browsing (Deng et al., 2023)
All datasets, whether newly created or adapted from existent ones, are meticulously designed and reformulated to simulate interactive environments where text-only LLMs can operate as autonomous agents. AGENTBENCH thus systematically evaluate an LLMâs core abilities, including following in- structions (Ouyang et al., 2022), coding (Chen et al., 2021), knowledge acquisition (Joshi et al., 2017; Talmor et al., 2019), logical reasoning (Srivastava et al., 2023), and commonsense grounding (Shridhar et al., 2020a). It serves as an ideal testbed for both LLM and agent evaluation.
In addition, we develop a unified evaluation toolkit for LLMs to operate on diverse customized agent tasks, thus enabling a comprehensive benchmarking of the LLM-as-Agent ability of 27 different LLMs on AGENTBENCH, including both API-based and OSS models. Our results reveal that top-tier
2
# Technical Report (v0.2)
Table 1: AGENTBENCH evaluates 27 API-based or OSS LLMs on LLM-as-Agent challenges.
Model #Size Form Ver. Creator Model #Size Form Ver. Creator gpt-4 (OpenAI, 2023) gpt-3.5-turbo (OpenAI, 2022) text-davinci-003 (Ouyang et al., 2022) text-davinci-002 (Ouyang et al., 2022) claude-2 (Anthropic, 2023b) claude (Anthropic, 2023a) claude-instant (Anthropic, 2023a) chat-bison-001 (Anil et al., 2023) chatglm-6b (Zeng et al., 2022; Du et al., 2022) 6B open v1.1 codegeex2-6b (Zheng et al., 2023) codellama-34b (Rozière et al., 2023) codellama-13b (Rozière et al., 2023) codellama-7b (Rozière et al., 2023) dolly-12b (Conover et al., 2023) llama2-70b (Touvron et al., 2023) llama2-13b (Touvron et al., 2023) llama2-7b (Touvron et al., 2023) guanaco-65b (Dettmers et al., 2023) 65B open guanaco-33b (Dettmers et al., 2023) 33B open vicuna-33b (Chiang et al., 2023) vicuna-13b (Chiang et al., 2023) N/A api N/A api N/A api N/A api N/A api N/A api N/A api N/A api 70B open chat 13B open chat 7B open chat 0613 0613 - - - v1.3 v1.1 - Meta OpenAI - - Meta Anthropic 33B open v1.3 13B open v1.5 7B open v1.5 LMSYS Google vicuna-7b (Chiang et al., 2023) Tsinghua & Zhipu wizardlm-30b (Xu et al., 2023) wizardlm-13b (Xu et al., 2023) koala-13b (Geng et al., 2023) oasst-12b (LAION, 2023) openchat-13b (Wang et al., 2023a) 13B open v3.2 Tsinghua 30B open v1.0 13B open v1.0 13B open - 12B open sft-4 LAION 6B open 34B open instruct 13B open instruct 7B open instruct 12B open - Microsoft Meta UCB v2 Databricks
models like GPT-4 are capable of handling a wide array of real-world tasks, indicating the potential for developing a potent, continuously learning agent. However, we also note a significant performance gap between these top-tier models and their OSS competitors. Despite the recent success of OSS LLMs and their competitive scores on several benchmarks (Li et al., 2023; Chen et al., 2021; Cobbe et al., 2021), their performance on the challenging AGENTBENCH tasks lags considerably. This underscores the necessity for additional efforts to enhance the learning abilities of OSS LLMs.
We identify portions of agent task failures in different environments and LLMs, unveiling the insufficient abilities of long-term reasoning, decision-making, and instruction following in existing LLMs. Comparisons between different LLMs manifest that a proper strategy of introducing code training can help improve LLM-as-Agent. Alignment training over high-quality data (e.g., data generated by gpt-4) could also help improve LLM agents. In summary, our contributions are:
We introduce the concept of evaluating LLMs as agents and present AGENTBENCH, a compre- hensive benchmark to standardize the evaluation. It defines eight distinct environments of 3 types based on real-world scenarios, offering a practical testbed for LLMsâ wide array of capabilities. ⢠We perform a thorough evaluation of 27 different LLMs using AGENTBENCH, uncovering a significant performance gap between leading API-based commercial LLMs and OSS models. We also quantitatively analyze the reasons for failures in existing LLM agents and highlight directions for improvement, such as code training and higher-quality alignment data.
⢠To facilitate the evaluation of LLM-as-Agent, we have introduced an integrated toolkit grounded in the Server-Client architecture, focusing on modular and scalable design principles. This enables easy customization of model assessments for any LLMs using the HTTP protocol. Complemented by its associated datasets and environments, this toolkit is now openly accessible to the broader research community.
# 2 LLM-AS-AGENT: DEFINITION AND PRELIMINARY
Here, we formalize the terms for describing the evaluation of LLMs as agents and the necessary preliminary knowledge for using LLMs in the context of agent evaluation.
Definition: Interactive Evaluation of LLM-as-Agent. The interactive evaluation of LLM-as-Agent could be regarded as a Partially Observable Markov Decision Process (S, A, T , R, U, O), which comprises state space S, action space A, transition function T : S Ã A â S, reward assigning function R, task instruction space U, and observation space O. Here, we denote an LLM agent as M.
Chain-of-Thought (CoT) and Other Reasoning Strategies. Since LLM-as-Agent requires LLMsâ strong reasoning ability, CoT (Wei et al., 2022b), which has been considered a de facto strategy in related evaluation together with actions (Yao et al., 2023b), is also adopted in AGENTBENCH. Despite many improved strategies proposed later, such as introducing ensemble (Wang et al., 2023c), reflection (Shinn et al., 2023), and search (Yao et al., 2023a), we evaluate LLMs with the most primitive CoT in AGENTBENCH. Without multiple trials, repeated generations, or complicated strategies, CoT is the easiest, cheapest, and most common way for people to deploy LLM agents.
Typical Types of Finish Reasons. Despite LLMsâ capabilities, we show in AGENTBENCH that even the strongest gpt-4 is not qualified as a practically usable agent. We identify and categorize finish reasons of LLM agents on AGENTBENCH tasks into five typical types:
3
# Technical Report (v0.2)
⢠Context Limit Exceeded (CLE): the length of interaction history exceeds the LLMâs maximum context length (only happened in 2,048-length LLMs text-davinci-002 and 003).
Invalid Format (IF): the agent does not follow the format instruction. ⢠Invalid Action (IA): the agent follows the format instruction, but its selected action is invalid. ⢠Task Limit Exceeded (TLE): the agent does not solve the problem after reaching the predefined
maximum interaction turns or begins to do repeated generations for many turns.
and Complete (task ends normally). While IF and IA are mostly caused by LLMsâ poor instruction following, TLE often indicates a weak multi-turn ability in certain tasks.
# 3 COMPOSITION OF AGENTBENCH: A BRIEF LOOK
In this section, we briefly introduce the datasets and environments that compose the AGENTBENCH. Compared to previous agent evaluation benchmarks (Côté et al., 2019; Fan et al., 2022), AGENT- BENCH concentrates on the practical evaluation of LLMs via Chain-of-Thought (CoT) (Wei et al., 2022b; Yao et al., 2023b) prompting, including code-grounded, game-grounded, and web-grounded scenarios. They pinpoint promising directions of LLMsâ applications with autonomous mission com- pletion, and their versatility avoids task-specific modelsâ (e.g., code-specific LLMs) overperformance on AGENTBENCH. Due to page limit, for details of construction, evaluation, and prompt examples, please refer to Appendix.
3.1 CODE-GROUNDED ENVIRONMENTS
Since LLMs can generate high quality codes (Chen et al., 2021), a very practical mission for LLM agents is to assist human interaction with computer interfaces. Here, we introduce three three environments depending on coding and reasoning abilities as representatives in AGENTBENCH.
Operating System (OS). Allowing LLMs to access and manipulate OS in the terminal is a fascinating but challenging mission. Despite attempts on translating natural language to Shell commands (Lin et al., 2018), few prior efforts evaluate models in executable environments. We aim to evaluate LLMs in genuine OSâ interactive bash environments (i.e., Ubuntu Docker (Merkel et al., 2014)) on human questions with deterministic answers (e.g., number of users with non-/home directories in an OS.) or series of operations for practical goals (e.g., recursively set all directory files to read-only, excluding mine). We adopt the success rate (SR) as the evaluation metric. (Cf. Appendix B for more details)
Database (DB). As database analysis is crucial but also difficult in many daily affairs, it is paramount to examine LLMsâ abilities to operate on real databases via SQL. Prior research has a significant emphasis on individual procedures, such as translation between SQL and natural language (Zhong et al., 2017), or answering questions given individual small tables (Nan et al., 2021; Iyyer et al., 2017). However, few consider evaluating models on the complete pipeline as a whole. Therefore, AGENTBENCH evaluates LLMs on authentic SQL interfaces, databases, multiple tables, and different types of queries as is in the real world. We adopt the SR as the main evaluation metric. (Cf. Appendix C for more details)
Knowledge Graph (KG (Anonymous, 2023)). Engaging with contemporary KGs, which are often vast in size (e.g., FREEBASE (Bollacker et al., 2008) has over 45M entities and 3B facts), demands a broad range of skills from an intelligent agent (Gu et al., 2023). Operating in such environments, which are only partially observable, requires the agent to make decisions with incomplete information and manage inherent uncertainties with various skills, including language understanding (e.g., intricacies and subtleties), planning (e.g., breaking down instructions into more manageable components), and tool using (e.g., interact with KG interfaces). As a result, we propose KG as a representative testing ground to assess the decision-making abilities of AI agents. We adopt question answering as the basic task formulation and consequently the answer F1 as the metric. (Cf. Appendix D for more details)
3.2 GAME-GROUNDED ENVIRONMENTS
Playing games usually requires strong capabilities in designing strategies, following instructions, and reasoning. Compared to code-grounded, tasks in game-grounded environments require no expertise in coding but more integral grasping of commonsense and world knowledge.
4
Technical Report (v0.2)
Digital Card Game (DCG). Games, especially those that require strategies and planning, could serve as simulated environments for intelligent agent development. DCG (e.g., Hearthstone (Hoover et al., 2020)), instead, is an ideal option for text-only LLM evaluation. It usually involves abundant text descriptions for cards, turn-based competition, and thoughtful playing strategies to win, testing a modelâs understanding of game rules, operating logic, and abilities to form strategic decisions based on current conditions and past experiences in the game.
In AGENTBENCH we adapt a simplified DCG systemâAquawar1âfrom the 2021 Tsinghua Uni- versity Agent Competition (THUAC) hosted by Student Association for Science and Technology in Department of Computer Science and Technology (CST-SAST), for evaluating LLM-as-Agent. In Aquawar, the agent acts as a player managing a team of fishes with different talents to battle against another team (controlled by our ad-hoc baseline agent) in a turn-based form. We report LLMsâ win rate as the evaluation metric. (Cf. Appendix E for more details) Lateral Thinking Puzzles (LTP). Lateral thinking puzzles (Sloane, 1992), or situation puzzles, æµ· é¾æ±¤, is a popular group-playing game around the world. The game usually has a person hosting the puzzle and others guess by asking riddle-related questions. The host can only respond âyesâ, ânoâ, or âirrelevantâ. The game is terminated when one of the player recovers the critical plots of the puzzle. Its name derives from the psychological term âlateral thinkingâ (De Bono, 1970), which refers to the ability of deducing facts from unconventional perspectives and exploring new ideas.
In this dataset, we first set up an LTP host system for automatic judging (Cf. Appendix F). To assess LLMsâ lateral reasoning prowess, a diverse puzzle dataset is curated from web of varied levels of difficulty. We break down the true plot into several bullets and measure the portion of guessed-out bullets (i.e., game progress) when an agent exhausted the maximum number of playing rounds as the evaluation metric. Through this assessment, we aim to gain insights into the depth and agility of LLMsâ lateral reasoning abilities. (Cf. Appendix F for more details)
House-Holding (HH, ALFWorld (Shridhar et al., 2020b)). Embodied game environments such as house-holding, which require strong commonsense grounding, have been well-established for language agent evaluation (Côté et al., 2019). In AGENTBENCH, we assess the modelâs capability in accomplishing tasks in physical house-holding environments on the classical ALFWorld (Shridhar et al., 2020b) derived from the well-established text-game toolkit TextWorld (Côté et al., 2019). The agent needs to accomplish house-holding tasks such as âPut a pan on the dining tableâ. We adopt the SR as the evaluation metric. (Cf. Appendix G for more details)
3.3 WEB-GROUNDED ENVIRONMENTS
Web pages have been primary interfaces for people to interact in the real world. Thus, assessing LLM agentsâ behaviors in complex web environments would be critical and valuable for following development. Here, we adapt two existing web browsing datasets for practical evaluation over LLMs.
Web Shopping (WS, WebShop (Yao et al., 2022)). Online shopping is a very practical and important part of modern life. Its trajectory, which comprises searching, viewing, and choosing desirable items on a real e-commerce website, requires autonomous agentsâ strong reasoning and decision-making abilities. Webshop (Yao et al., 2022), a simulated online shopping environment, exactly serves such a purpose for evaluating language agents. While it is originally evaluated on specifically trained models, we propose assessing LLMs with mere prompting. (Cf. Appendix H for more details)
Web Browsing (WB, Mind2Web (Deng et al., 2023)). General web environment is an ideal sandbox for training and evaluating intelligent agents. Mind2Web (Deng et al., 2023) is a very recently released general benchmark for developing and assessing web agents capable of executing intricate tasks across various website domains, given high-level user instructions. It designs feasible actions for website interactions, such as clicking, selecting, and typing, thereby facilitating a holistic evaluation of LLMs as web agents. Compared to Mind2Webâs original setting, we make adaptations to allow its evaluation on prompted LLMs without additional fine-tuning. (Cf. Appendix I for more details)
1https://www.saiblo.net/
5
# Technical Report (v0.2)
Table 2: Statistics and metrics of 8 environments in AGENTBENCH evaluation. âSRâ stands for Success Rate. â#Avg. Turnâ denotes the estimated number of interacting turns to solve a single problem. In â#Devâ, and â#Testâ, we provide the number of query samples and total expected interacting turns. Additionally, âWeightâ1â refers to the average score for a task across all models in our evaluation. For further clarification, please refer to Section 4.1 and Appendix B to I.
OS DB KG DCG LTP HH WS WB #Avg. Turn Metric #Dev #Test Weightâ1 8 SR 26 / 240 5 SR 60 / 300 15 F1 20 / 300 30 25 Reward Game Progress 12 / 360 144 / 1200 300 / 1500 150 / 2250 20 / 600 20 / 500 50 / 1250 10.8 13.0 13.9 12.0 3.5 10 5 35 Step SR Reward SR 20 / 700 31 / 400 80 / 400 50 / 1750 200 / 1000 177 / 1800 13.0 30.7 11.6
# 4 EVALUATION OF AGENTBENCH
We extensively evaluate 27 LLMs, including API-based commercial models and open-sourced LLMs, to form a systematic view of the existing performance of LLM-as-Agent. We also design and release a simple plug-and-play evaluation toolkit to facilitate related LLM-as-Agent research.
4.1 EVALUATION SETUP
Dataset Statistics. We report the statistics of datasets in AGENTBENCH in Table 2. For simplicity, we use the abbreviation of each dataset in the following part. All datasets are practical multi-turn interacting challenges, and their estimated solving turns for each individual problem range from 5 to 50. We provide two splits for each dataset: Dev and Test. The Dev splitâs all environments, answers, and checking scripts are public, while the Test is kept.
We also carefully balance the evaluation comprehensiveness and efficiency in AGENTBENCH design, as LLMsâ multi-turn interaction can be time-consuming. We set the size of Dev and Test to 269 and 1,091, respectively, resulting in around 4k and 13k calls for inference, approximately the identical amounts of calls for inference as MMLU (Hendrycks et al., 2021b) requires.
LLMs to Evaluate. As a systematic attempt to benchmark existing LLMs on LLM-as-Agent, we include in total 27 models for evaluation, which could be roughly classified into two categories:
⢠API-based Commercial LLMs: mainly consist of LLM APIs without disclosed parameter amounts (Cf. Table 1). Due to more investments, their performances are usually better.
⢠Open-sourced (OSS) LLMs: mostly come from the academia and some companies (Cf. Table 1). Due to limited computing resources, we only include OSS LLMs smaller than 70B here.
Toolkit: Streamlining LLM Evaluation with API-Centric Approach and Environment Isolation. As Language Model (LLM) systems continue to advance in complexity and are primarily accessible through APIs, we have developed an evaluation toolkit that aligns with the API-oriented philosophy. This toolkit is meticulously designed to interact with APIs, simplifying the process of adapting and testing different LLMs. Researchers interested in evaluating their LLMs on AGENTBENCH only need to set up a model server accessible via the HTTP protocol.
Moreover, dealing with diverse and intricate interaction environments poses a significant challenge. Uniformly configuring all these environments can be arduous and may lead to conflicts. To address this, we have implemented two key strategies. Firstly, we encapsulate tasks with complex envi- ronments into Docker images. Researchers can effortlessly utilize these images by mounting the code path and initiating the evaluation process with ease. Secondly, we have subdivided each task into separate workers, ensuring that the environments of these tasks remain isolated and free from conflicts. (Refer to Appendix A for further details.)
Evaluation Prompt Setup. To accommodate the majority of existing dialogue models, our dialogue paradigm is structured around two roles, user (i.e., instruction & environment feedback) and agent, engaging and alternating with one another. We record interaction trajectories as a conversation history (u0, a0, · · · , uk, ak) involving the user and agent, where ui, ai represents the i-th round of the conversation history. When we perform inference, the conversation history must be like
6
Technical Report (v0.2)
Table 3: Test set (standard) results of AGENTBENCH. A clear performance gap exists between top commercial LLMs (e.g., gpt-4) and OSS LLM competitors. âVERâ stands for model version; âOAâ stands for the overall AGENTBENCH score, a weighted average of all environments (Cf. Section 4.1).
LLM Type Models VER OA Code-grounded Game-grounded Web-grounded OS DB KG DCG LTP HH WS WB API gpt-4 claude-2 claude gpt-3.5-turbo text-davinci-003 claude-instant chat-bison-001 text-davinci-002 0613 - v1.3 0613 - v1.1 - - 4.01 2.49 2.44 2.32 1.71 1.60 1.39 1.25 42.4 18.1 9.7 32.6 20.1 16.7 9.7 8.3 32.0 27.3 22.0 36.7 16.3 18.0 19.7 16.7 58.8 41.3 38.9 25.9 34.9 20.8 23.0 41.5 74.5 55.5 40.9 33.7 3.0 5.9 16.6 11.8 16.6 8.4 8.2 10.5 7.1 12.6 4.4 0.5 78.0 54.0 58.0 16.0 20.0 30.0 18.0 16.0 61.1 61.4 55.7 64.1 61.7 49.7 60.5 56.3 29.0 0.0 25.0 20.0 26.0 4.0 12.0 9.0 OSS (Large) llama-2-70b guanaco-65b chat - 0.78 0.54 9.7 8.3 13.0 14.7 8.0 1.9 21.3 0.1 0.0 1.5 2.0 12.0 5.6 0.9 19.0 10.0 codellama-34b vicuna-33b wizardlm-30b guanaco-33b instruct v1.3 v1.0 - 0.96 0.73 0.46 0.39 2.8 15.3 13.9 11.1 14.0 11.0 12.7 9.3 23.5 1.2 2.9 3.2 8.4 16.3 0.3 0.3 0.7 1.0 1.8 0.0 4.0 6.0 6.0 6.0 52.1 23.9 4.4 6.2 20.0 7.0 1.0 5.0 OSS (Small) vicuna-13b llama-2-13b openchat-13b wizardlm-13b vicuna-7b codellama-13b codellama-7b koala-13b llama-2-7b codegeex2-6b dolly-12b chatglm-6b oasst-12b v1.5 chat v3.2 v1.2 v1.5 instruct instruct - chat - v2 v1.1 sft-4 0.93 0.77 0.70 0.66 0.56 0.56 0.50 0.34 0.34 0.27 0.14 0.11 0.03 10.4 4.2 15.3 9.0 9.7 3.5 4.9 3.5 4.2 1.4 0.0 4.9 1.4 6.7 11.7 12.3 12.7 8.7 9.7 12.7 5.0 8.0 0.0 0.0 0.3 0.0 9.4 3.6 5.5 1.7 2.5 10.4 8.2 0.4 2.1 4.8 0.0 0.0 0.0 0.1 26.4 0.1 1.9 0.3 0.0 0.0 0.1 6.9 0.3 0.1 0.0 0.0 8.0 0.0 0.0 0.0 6.4 0.0 0.0 4.4 0.0 0.0 1.2 0.0 0.0 8.0 6.0 0.0 10.0 0.0 0.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 41.7 25.3 46.9 43.7 2.2 43.8 25.2 3.9 11.6 20.9 0.4 0.5 0.3 12.0 13.0 15.0 12.0 9.0 14.0 12.0 7.0 7.0 11.0 9.0 4.9 1.0
(u0, a0, · · · , uk). We select the minimum r such that count of all tokens2 in (u0, ar, ur+1, · · · , uk) is not greater than 3500. And then we append "[NOTICE] 2r messages are omitted." into u0. After that, the sequence (u0, ar, ur+1, · · · , uk) is regarded as the final input in multi-turn chat format.
However, in order to consider non-chat models, we append a post-processor. We feed the history into the model for chat models supporting multiple turns. For models supporting only text completion (e.g., text-davinci-003), we prepend "USER:" or "AGENT:" into each item in the history and finally append the string "AGENT:" to make models generate the agentâs content.
For task prompt organization, we adapted the format from (Yao et al., 2023b) to include both âThoughtâ (for CoT) and âActionâ but in one single turn. Usually, a simple CoT demonstration is provided in the task instruction for a better output format. To ensure reproducible results, we set temperature=0 (i.e., greedy decoding) in the inference on all tasks following (Wei et al., 2022b).
Overall Score Calculation. We have observed that the score distribution for each task varies significantly as tasks differ in difficulty levels. As a consequence, a naively averaged score is heavily impacted by tasks that generally yield higher scores (e.g., Web Shopping in our observation), overshadowing those with lower scores and being unsuitable for AGENTBENCHâs purpose.
Therefore, we produce the overall score by first resizing each taskâs average score to 1 across all the models we evaluate and then averaging the scores across all tasks for each model (Cf. Table 2). To standardize and simplify score calculations for future studies, we utilize the reciprocal average score of all the tested LLMs in each task as a fixed weight for future overall score calculation. The total score is then computed as the average value obtained by multiplying the score of each task by its corresponding weight. This method ensures fairness and consistency in evaluation, enabling easier comparisons and analysis in future research.
2Because the tokenizers of each model is different, we simply calculate tokens like this: a word with length n occupies ân/6â token(s), and a non-blank character takes 1 token.
7
Technical Report (v0.2)
# OS DB KG DCG LTP HH WS WB
# Completed
=
75.0 37.9 30.1 51.2 14.0 13.1 54.9 56.6
0.1 0.7 2.0 0.0 3.5 0.7 0.0 0.0 CLE Invalid Format 0.0 53.3 0.0 38.5 0.0 0.0 17.2 0.0 Invalid Action 0.9 0.0 0.0 10.2 0.0 64.1 0.0 8.4 23.9 8.0 67.9 0.0 82.5 22.1 27.8 35.0 TLE
° vicutt-13b codelidtha-34b 08 llama-2-13b openchat 13bp- vicuesap â '3ma-2-74b wizardim-13b â= vicuna-7b [fj codellama-13b © Eicodellama-7b juanaco.bsb wizardim-30b @ guanaco-33b © a © ES AgentBench OA score o& koala-13b 5 Adare on ; © dolly-12b 004 TA sleaze 67 65 13 33 #Size (bil t ize (billion parameters)
Table 4: Portions of different types of execution outcomes in 8 tasks. (CLE: Context Limit Exceeded, TLE: Task Limit Exceeded).
Figure 3: AGENTBENCH OA scores with regard to all tested OSS LLMs.
4.2 MAIN RESULTS
Overall and dataset-specific scores in AGENTBENCH are reported in Table 3. Surprisingly, on this challenging benchmark, we discover that some top LLMs are equipped with solid capabilities for dealing with real-world environmental interaction. For example, gpt-4 presents the best performance on 6 out of 8 datasets in AGENTBENCH; on HH, it achieves a success rate of 78%, indicating its practical usability in this scenario. claude-2 and claude follow gpt-4 but quite outperform gpt-3.5-turbo. Despite other API-based LLMsâ relatively poorer performance, regardless of tasks, most of them can solve quite a few percent of problems. All API-based LLMs have an AGENTBENCH overall score above 1.00.
OSS LLMs, however, commonly fail to solve problems in some challenging tasks, such as KG, DCG, and HH. We plot their performance concerning their sizes in Figure 3. Generally, most open-sourced LLMs perform far poorer than API-based LLMs in AGENTBENCH (Avg. 0.51 v.s. 2.15). The most capable OSS LLM turns out to be codellama-34b, achieving an overall score of 0.96 but still presents a clear performance gap to gpt-3.5-turbo. This contrasts recent claims that some OSS LLMs are comparable to gpt-3.5-turbo and gpt-4. We still need much effort to produce stronger OSS LLMs to serve agent purposes.
4.3 ANALYSIS
In the evaluation, we analyze some important factors that impact an LLM agentâs performance on AGENTBENCH, including outcome portion analysis, code training, and the difference between API-based commercial LLMs and OSS LLM competitors. More insights and case studies into the ability of planning, self-correction, and tool use are provided in Appendix J.2.
Portion of Different Types of Execution Outcomes. We report ratios of different types of execution outcomes (Cf. Section 2 for introduction) in Table 4. It is Task Limit Exceeded that dominantly caused the incompleteness of AGENTBENCH tasks. It means that despite the instruction following of most LLM agents, they fail to solve the challenge in given time or fall into repeated generation when interaction turns grow up, indicating weak reasoning and decision-making abilities.
In DB and DCG, LLM agents majorly encountered Invalid Format errors, meaning they do not correctly follow the instructionâs format requirements. The format verification is stringent for DB, and no retry opportunities are provided. Furthermore, the taskâs expected output may be close to certain modelsâ training data, yet not precisely aligned with. This discrepancy can lead the models to revert to their pre-trained formatting, inadvertently overlooking the specific requirements we provide. (Cf. Appendix J.2.1) For DCG, its instruction could be longer and more complicated than other tasks due to the need to introduce game rules, making some LLMs feel confused. In HH and WB, another major issue is about Invalid Action, where LLM agents generate actions beyond predefined action spaces. These two tasks provide many discrete action options at each turn, and many LLMs fail to generate an action from them and, therefore, cause errors. For specific ratios of each LLM, please refer to Appendix J.1.
Impact of Code Training. We find that code tuning might deeply influence a modelâs way of inferential generation and thinking, even beyond topics just about coding. From the comparison of codellama and llama-2 series, tuning with code seems to give models an edge in tasks that follow a relatively static procedure (e.g., Web Shopping). But, this kind of tuning might also affect
8
# Technical Report (v0.2)
the modelâs general thinking ability, as codellama series does not perform as well in the Digital Card Game as llama-2 series. This points to a balance between being good at following procedures and being good at general thinking when tuning LLMs.
Impact of High-Quality Alignment Data Training. Another helpful comparison would be between vicuna-13b and llama-2-13b. While they share the same base LLM, vicuna-13b is aligned by training on ShareGPTâs data (generated by gpt-4 and gpt-3.5-turbo, shared by users) and llama-2-13b is aligned from scratch. As a result, vicuna-13b outperforms llama-2-13b on AGENTBENCH, and even performs comparably to 3 times larger codellama-34b. This indicates that high-quality alignment is still a key to develop better LLM agents.
Unexpected Similar Performance of llama-2-13b and llama-2-70b. During our experi- ments, we were surprised to find that llama-2-13b and llama-2-70b perform similarly despite the significant gap between their sizes. After carefully checking and re-running experiments, the results are unchanged. We think that it indicates llama-2-70bâs insufficient pre-training. While both llama-2-13b and llama-2-70b are pre-trained with 2T tokens, a larger LLM should be trained with more tokens according to the scaling law (Hoffmann et al., 2022).
# 5 RELATED WORK
Evaluation of LLMs. The general capabilities of self-supervised (Liu et al., 2021) LLMs (Brown et al., 2020; Chowdhery et al., 2022; Zhang et al., 2022; Scao et al., 2022; Zeng et al., 2022; Touvron et al., 2023), especially those chat-aligned ones (Ouyang et al., 2022; Anthropic, 2023a; OpenAI, 2023), have refreshed peopleâs impression on deep learning systems and significantly transcended the conventional scope of NLP evaluation. It thus makes the evaluation of LLMs an urgent and challenging problem. Compared to previous efforts focusing on a subset of specified tasks (Wang et al., 2019; Wang et al.; Gehrmann et al., 2021), an increasing number of benchmarks are including broader spectra of tasks and datasets (Hendrycks et al., 2021b; Liang et al., 2022; Srivastava et al., 2023) in the evaluation. However, most of them are still limited to traditional tasks and thus fail to evaluate LLMsâ open-ended generation, multi-turn interaction, and ability to act as agents.
LLM-as-Agent. In pre-LLM era, text game environments such as TextWorld (Côté et al., 2019), Jericho (Hausknecht et al., 2020), and LIGHT (Urbanek et al., 2019) are dominant in language agent study which bases on BERT (Devlin et al., 2019) and reinforcement learning. With the advent of LLMs, the study of LLM agents begins to thrive (Huang et al., 2022), especially after Chain-of- Thought (Wei et al., 2022b) came out. ReAct (Yao et al., 2023b) is a pioneer work to combine CoT reasoning and actions in agent tasks. Later, a bunch of advanced reasoning strategies (Kim et al., 2023; Shinn et al., 2023; Wang et al., 2023d; Liu et al., 2023; Yao et al., 2023a; Gu et al., 2023) and applications (Park et al., 2023; Richards, 2023; Nakajima, 2023; age, 2023) for LLM-as-Agent have emerged and arouse much public interest. Nevertheless, limited datasets and models and available on the topic, without a standard and comprehensive benchmark. AGENTBENCH presents the first systematic benchmark for evaluating LLM-as-Agent with a broad coverage of tasks and available LLMs. Additionally, it also initiates the idea of adopting agent tasks to measure LLM performance.
Evaluating LLMs in Executive Environments. As LLMs become increasingly capable of real- world challenges, there is also a trend to evaluate them in executive environments rather than static datasets. Besides text games (e.g., ALFWorld (Shridhar et al., 2020b)), another main stream of works lies in code execution. APPS (Hendrycks et al., 2021a), HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021) pioneer the effort to evaluate code LLMs for functional correctness instead of text similarity. The paradigm has been later widely recognized and adopted in following works (Li et al., 2022; Zheng et al., 2023; Nijkamp et al., 2023). However, few previous code evaluation frameworks consider multi-turn interactions. A concurrent work InterCode (Yang et al., 2023) releases a framework that allows evaluation of interaction between models and Bash and SQL environments, which are similar to OS and DB tasks in AGENTBENCH.
# 6 CONCLUSION
We present AGENTBENCH, a systematically designed multi-dimensional evolving benchmark for evaluating LLMs as agents. For the first time, we include such a wide array of up to 8 real-
9
# Technical Report (v0.2)
world challenges to evaluate LLM agents, and establish a unified testing framework and toolkit for agile evaluation. An extensive study of 27 LLMs, including API-based and Open-sourced, is carefully conducted in a standard setting. In our assessment, contemporary commercial models have demonstrated preliminary capabilities as agents in analysis, planning, execution of plans, tool invocation, and self-reflection. These abilities suggest their nascent proficiency in addressing real- world challenges. Conversely, we posit that open-source models might either lack some of these competencies or, at best, possess only a subset of them simultaneously. We expect AGENTBENCH to serve as a cornerstone for later study to develop better and more applicable intelligent LLM agents.
# REFERENCES
Agentgpt. Python. https://github.com/reworkd/AgentGPT, 2023.
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
Anonymous. Knowledge base question answering as tool learning. under review, 2023.
Anthropic. Introducing claude, 2023a. URL https://www.anthropic.com/index/ introducing-claude.
Anthropic. Claude 2, 2023b. URL https://www.anthropic.com/index/claude-2.
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. arXiv preprint arXiv:2108.07732, 2021.
Kurt D. Bollacker, Colin Evans, Praveen K. Paritosh, Tim Sturge, and Jamie Taylor. Freebase: a collaboratively created graph database for structuring human knowledge. In Jason Tsong-Li Wang (ed.), Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2008, Vancouver, BC, Canada, June 10-12, 2008, pp. 1247â1250. ACM, 2008. doi: 10.1145/1376616.1376746. URL https://doi.org/10.1145/1376616.1376746.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPSâ20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan Xiong, Hong Wang, and William Yang Wang. HybridQA: A dataset of multi-hop question answering over tabular and textual data. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 1026â1036, Online, November 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.findings-emnlp.91. URL https://aclanthology.org/2020.findings-emnlp.91.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. See https://vicuna.lmsys.org (accessed 14 April 2023), 2023.
10
Technical Report (v0.2)
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Mike Conover, Matt Hayes, Ankit Mathur, Jianwei Xie, Jun Wan, Sam Shah, Ali Ghodsi, Patrick Wendell, Matei Zaharia, and Reynold Xin. Free dolly: Introducing the worldâs first truly open instruction-tuned llm, 2023. URL https://www.databricks.com/blog/2023/04/ 12/dolly-first-open-commercially-viable-instruction-tuned-llm.
Marc-Alexandre Côté, Akos Kádár, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, et al. Textworld: A learning environment for text-based games. In Computer Games: 7th Workshop, CGW 2018, Held in Con- junction with the 27th International Conference on Artificial Intelligence, IJCAI 2018, Stockholm, Sweden, July 13, 2018, Revised Selected Papers 7, pp. 41â75. Springer, 2019.
Edward De Bono. Lateral thinking. New York, pp. 70, 1970.
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web. arXiv preprint arXiv:2306.06070, 2023.
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171â4186, 2019.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. Glm: General language model pretraining with autoregressive blank infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 320â335, 2022.
Jack Edmonds and Richard M Karp. Theoretical improvements in algorithmic efficiency for network flow problems. Journal of the ACM (JACM), 19(2):248â264, 1972.
Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. Advances in Neural Information Processing Systems, 35: 18343â18362, 2022.
LR Ford Jr and DR FuËlkerson. Flows in networks. 1962.
Sebastian Gehrmann, Tosin Adewumi, Karmanya Aggarwal, Pawan Sasanka Ammanamanchi, An- uoluwapo Aremu, Antoine Bosselut, Khyathi Raghavi Chandu, Miruna-Adriana Clinciu, Dipanjan Das, Kaustubh Dhole, et al. The gem benchmark: Natural language generation, its evaluation and metrics. In Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021), pp. 96â120. Association for Computational Linguistics, 2021.
Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song. Koala: A dialogue model for academic research. Blog post, April, 1, 2023.
Yu Gu and Yu Su. ArcaneQA: Dynamic program induction and contextualized encoding for knowledge base question answering. In Proceedings of the 29th International Conference on Computational Linguistics, pp. 1718â1731, Gyeongju, Republic of Korea, October 2022. Inter- national Committee on Computational Linguistics. URL https://aclanthology.org/ 2022.coling-1.148.
11
# Technical Report (v0.2)
Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy Liang, Xifeng Yan, and Yu Su. Beyond i.i.d.: Three levels of generalization for question answering on knowledge bases. In Proceedings of the Web Conference 2021. ACM, apr 2021. doi: 10.1145/3442381.3449992. URL https: //doi.org/10.1145%2F3442381.3449992.
Yu Gu, Xiang Deng, and Yu Su. Donât generate, discriminate: A proposal for grounding language models to real-world environments. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 4928â4949, Toronto, Canada, July 2023. Association for Computational Linguistics. URL https://aclanthology.org/ 2023.acl-long.270.
Matthew Hausknecht, Prithviraj Ammanabrolu, Marc-Alexandre Côté, and Xingdi Yuan. Interac- tive fiction games: A colossal adventure. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 7903â7910, 2020.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. Measuring coding challenge competence with apps. arXiv preprint arXiv:2105.09938, 2021a.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2021b.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
Amy K Hoover, Julian Togelius, Scott Lee, and Fernando de Mesentier Silva. The many ai challenges of hearthstone. KI-Künstliche Intelligenz, 34:33â43, 2020.
Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pp. 9118â9147. PMLR, 2022.
Mohit Iyyer, Wen-tau Yih, and Ming-Wei Chang. Search-based neural structured learning for sequential question answering. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1821â1831, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1167. URL https: //aclanthology.org/P17-1167.
Mandar Joshi, Eunsol Choi, Daniel S Weld, and Luke Zettlemoyer. Triviaqa: A large scale distantly In Proceedings of the 55th Annual supervised challenge dataset for reading comprehension. Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 1601â 1611, 2017.
Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023.
Heinrich Küttler, Nantas Nardelli, Alexander Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, and Tim Rocktäschel. The nethack learning environment. Advances in Neural Information Processing Systems, 33:7671â7684, 2020.
# LAION. Open-assistant. https://github.com/LAION-AI/Open-Assistant, 2023.
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models, 2023.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, et al. Competition-level code generation with alphacode. Science, 378(6624):1092â1097, 2022.
12
Technical Report (v0.2)
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110, 2022.
Xi Victoria Lin, Chenglong Wang, Luke Zettlemoyer, and Michael D Ernst. Nl2bash: A corpus and semantic parser for natural language interface to the linux operating system. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), 2018.
Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. Llm+ p: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477, 2023.
Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, and Jie Tang. Self- supervised learning: Generative or contrastive. IEEE transactions on knowledge and data engi- neering, 35(1):857â876, 2021.
Pattie Maes. Agents that reduce work and information overload. Commun. ACM, 37:30â40, 1994.
Dirk Merkel et al. Docker: lightweight linux containers for consistent development and deployment. Linux j, 239(2):2, 2014.
Yohei Nakajima. Babyagi. Python. https://github. com/yoheinakajima/babyagi, 2023.
Linyong Nan, Chiachun Hsieh, Ziming Mao, Xi Victoria Lin, Neha Verma, Rui Zhang, Wojciech Kry´sci´nski, Nick Schoelkopf, Riley Kong, Xiangru Tang, Murori Mutuma, Ben Rosand, Isabel Trindade, Renusree Bandaru, Jacob Cunningham, Caiming Xiong, and Dragomir Radev. Fetaqa: Free-form table question answering, 2021.
Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. Codegen: An open large language model for code with multi-turn program synthesis. In The Eleventh International Conference on Learning Representations, 2023.
OpenAI. Introducing chatgpt, 2022. URL https://openai.com/blog/chatgpt.
R OpenAI. Gpt-4 technical report. arXiv, pp. 2303â08774, 2023.
Philip Osborne, Heido Nõmm, and André Freitas. A survey of text games for reinforcement learning informed by natural language. Transactions of the Association for Computational Linguistics, 10: 873â887, 2022.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730â27744, 2022.
Joon Sung Park, Joseph C. OâBrien, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. Generative agents: Interactive simulacra of human behavior. ArXiv, abs/2304.03442, 2023.
Panupong Pasupat and Percy Liang. Compositional semantic parsing on semi-structured tables. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1470â1480, Beijing, China, July 2015. Association for Computational Linguistics. doi: 10.3115/v1/P15-1142. URL https://aclanthology.org/P15-1142.
Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gómez Colmenarejo, Alexander Novikov, Gabriel Barth-maron, Mai Giménez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. A generalist agent. Transactions on Machine Learning Research, 2022.
Toran Bruce Richards. Auto-gpt: An autonomous gpt-4 experiment, 2023.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models for code. arXiv preprint arXiv:2308.12950, 2023.
13
Technical Report (v0.2)
Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, et al. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations, 2022.
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A 176b- parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022.
John R. Searle. Speech acts: An essay in the philosophy of language. Language, 46:217, 1970.
Bokui Shen, Fei Xia, Chengshu Li, Roberto MartÃn-MartÃn, Linxi Fan, Guanzhi Wang, Claudia Pérez- DâArpino, Shyamal Buch, Sanjana Srivastava, Lyne Tchapmi, et al. igibson 1.0: A simulation In 2021 IEEE/RSJ International environment for interactive tasks in large realistic scenes. Conference on Intelligent Robots and Systems (IROS), pp. 7520â7527. IEEE, 2021.
Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In International Conference on Machine Learning, pp. 3135â3144. PMLR, 2017.
Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. Alfred: A benchmark for interpreting grounded instructions for everyday tasks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 10740â10749, 2020a.
Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Cote, Yonatan Bisk, Adam Trischler, and Matthew Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. In International Conference on Learning Representations, 2020b.
Paul Sloane. Lateral thinking puzzlers. Sterling Publishing Company, Inc., 1992.
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023.
Sanjana Srivastava, Chengshu Li, Michael Lingelbach, Roberto MartÃn-MartÃn, Fei Xia, Kent Elliott Vainio, Zheng Lian, Cem Gokmen, Shyamal Buch, Karen Liu, et al. Behavior: Benchmark for everyday household activities in virtual, interactive, and ecological environments. In Conference on Robot Learning, pp. 477â490. PMLR, 2022.
Yu Su, Huan Sun, Brian M. Sadler, Mudhakar Srivatsa, Izzeddin Gur, Zenghui Yan, and Xifeng In Jian Su, Xavier Yan. On generating characteristic-rich question sets for QA evaluation. Carreras, and Kevin Duh (eds.), Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pp. 562â 572. The Association for Computational Linguistics, 2016. doi: 10.18653/v1/d16-1054. URL https://doi.org/10.18653/v1/d16-1054.
Alon Talmor and Jonathan Berant. The web as a knowledge-base for answering complex questions. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 641â651, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. doi: 10.18653/v1/ N18-1059. URL https://aclanthology.org/N18-1059.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4149â4158, 2019.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
14
Technical Report (v0.2)
Daniel Toyama, Philippe Hamel, Anita Gergely, Gheorghe Comanici, Amelia Glaese, Zafarali Ahmed, Tyler Jackson, Shibl Mourad, and Doina Precup. Androidenv: A reinforcement learning platform for android. arXiv preprint arXiv:2105.13231, 2021.
Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktäschel, Douwe Kiela, Arthur Szlam, and Jason Weston. Learning to speak and act in a fantasy text adventure game. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 673â683, 2019.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. Advances in neural information processing systems, 32, 2019.
Guan Wang, Sijie Cheng, Xianyuan Zhan, Xiangang Li, Sen Song, and Yang Liu. Openchat: Advancing open-source language models with mixed-quality data, 2023a.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi (Jim) Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. ArXiv, abs/2305.16291, 2023b.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2023c.
Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. arXiv preprint arXiv:2302.01560, 2023d.
Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2022a.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824â24837, 2022b.
Michael Wooldridge and Nicholas R Jennings. Intelligent agents: Theory and practice. The knowledge engineering review, 10(2):115â152, 1995.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244, 2023.
John Yang, Akshara Prabhakar, Karthik Narasimhan, and Shunyu Yao. Intercode: Standardizing and benchmarking interactive coding with execution feedback. arXiv preprint arXiv:2306.14898, 2023.
Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. Webshop: Towards scalable real-world web interaction with grounded language agents. Advances in Neural Information Processing Systems, 35:20744â20757, 2022.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023a.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2023b.
15
Technical Report (v0.2)
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. Glm-130b: An open bilingual pre-trained model. arXiv preprint arXiv:2210.02414, 2022.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, et al. Codegeex: A pre-trained model for code generation with multilingual evaluations on humaneval-x. arXiv preprint arXiv:2303.17568, 2023.
Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning. CoRR, abs/1709.00103, 2017.
Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyuan Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, Y. Qiao, Zhaoxiang Zhang, and Jifeng Dai. Ghost in the minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory. ArXiv, abs/2305.17144, 2023.
16
Technical Report (v0.2)
# Part I Appendix
# Table of Contents
A Framework
. . A.1 Traditional Evaluation Frameworks . A.2 Our Designed Evaluation Framework . . A.3 Implementation of Max-Flow Algorithm . B Operating System . B.1 Dataset details . B.2 Actions . . . B.3 Prompt Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . C Database C.1 Dataset Details . . C.2 Data Augmentation . . C.3 Prompt Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D Knowledge Graph D.1 Dataset Details . . D.2 Prompt Example . . . . . . . . . . . . . . . . . . . . . . . E Digital Card Game E.1 Dataset Details . . E.2 The Attributes of Fish . . E.3 Prompt Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . F Lateral Thinking Puzzles . F.1 Dataset Details . . . F.2 Evaluation on LTP System . . F.3 LTP Game Progress and Termination . . F.4 Prompt Example . . . . . . . . . . . . . . . . . . . . . . . . . G House-holding . G.1 Dataset Details . G.2 Prompt Example . . . . . . . . . . . . . . . . . . . . . . . H Web Shopping . H.1 Dataset Details . H.2 Prompt Example . . . . . . . . . . . . . . . . . . . . . . . I Web Browsing I.1 Dataset Details . Prompt Example. I.2 . . . . . . . . . . . . . . . . . . . . . . . J Detailed Analysis J.1 Validity Analysis of Execution Outcomes . . . . . . J.1.1 Motivation of Validity Analysis . . J.1.2 Definition of Validity Analysis . J.1.3 Validity Analysis of Models . . Findings . . . . J.2.1 . . . Instruction Following Matters . J.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 21 22 22 24 24 25 25 26 26 27 29 29 30 31 33 33 33 33 34 37 37 38 38 38 39 41 41 42 44 44 44 44 44 44 44
17
20 20 20 20
Technical Report (v0.2)
. . . .
# J.2.2 Consistency and Execution in Agent Planning . J.2.3 J.2.4
. the Influence of Code Tuning on LLM Acting as Agents . . . Self Correcting ability .
. . .
. . .
. . .
. . .
. . .
. . .
. . .
J.2.3_ the Influence of Code Tuning on LLM Acting as Agents .........
. . . . . . . . . . . . . . . .
18
. . .
46 53 53
Technical Report (v0.2)
(a) Operating System (OS) Task: âFind the total number of non-empty directo- ries inside the â/etcâ directory.â Action Space: Any valid bash commands Observation: System standard output
(b) Database (DB) Task: âWhat was the total number of medals won by United States?â, given the table âOlympic Medalsâ Action space: Any valid SQL commands Observation: MySQL CLI interface output
(c) Knowledge Graph (KG) Task: âFind tropical cyclones that are similar to Hur- ricane Marie and affected Eastern North America.â Action space: Basic KG-querying tools Observation: Query results
(d) Digital Card Game (DCG) Task: âCompete against another player using four âfishâ cards in âAquawarâ game.â Action space: Four âfishâ cards and Assertion Observation: Battle process, status of âfishâ
(e) Lateral Thinking Puzzles (LTP) Task: âA man sleeps with the lights off, and the next morning he suicides after opening windows. Why?â Action Space: Any binary questions Observation: âYesâ, âNoâ, or âIrrelevantâ
=
(f) House-holding (HH) Task: âClean some soapbar and put it in coutertopâ Action space: A list of allowed actions in the room, or other accessible rooms Observation: Results after the action.
Lodge Bedspread Full/Queen Size Quilt with 2 Shams. Cabin 3-Piece Reversible All Season Quilt Set. Rustic Quilt Coverlet Bed Set. | Stonehurst Collection,
(g) Web Shopping (WS) Task: âLooking for a queen size bedspread set in the color redwood, and price lower than 70.â Action space: Search (generate keywords) and Click (choose from all clickable buttons) Observation: Productsâ descriptions; the webpage (h) Web Browsing (WB) Task: âFind a latest post with more than 10k upvotes in r/announcements community and upvote it.â Action space: 1) Choose one out of all HTML ele- ments in the webpage; 2) Click, Type, or Select Options Observation: Page HTML (optional: screenshot)
Figure 4: Examples of all environments in AGENTBENCH.
19
Technical Report (v0.2)
A FRAMEWORK
A.1 TRADITIONAL EVALUATION FRAMEWORKS
Traditional evaluation frameworks can be categorized into two types:
Traditional Tasks (e.g., single-turn generation, classification, etc.). These frameworks are designed for specific tasks and may not be suitable for more complex tasks involving multi-turn interactions.
Agent-based Tasks (tasks with multi-turn interactions). These frameworks are typically tailored to a specific task by the creators of the dataset. They often suffer from several limitations:
They are designed for a specific task, limiting their applicability to other tasks. ⢠Communication between components (Task, Agent, and Evaluation) usually occurs within a single process or through the creation of child processes, necessitating evaluation on the same device.
⢠They can only evaluate one task with one agent at a time.
A.2 OUR DESIGNED EVALUATION FRAMEWORK
To address the limitations of traditional agent-based evaluation frameworks, we have designed a novel framework with the following features:
Decoupled S/C Architecture. Our framework decouples the Task Server, Agent Server, and Evalua- tion Client components, enabling separate deployments. They can communicate via HTTP interac- tions, allowing them to run on different devices, thus eliminating the need for co-location to satisfy the requirements of both Task and Agent.
Agent-Task Collaborative Evaluation. Our framework supports collaborative evaluation of multiple agents and tasks in various combinations simultaneously. This flexibility enables more comprehensive testing scenarios.
Network Flow Algorithms. We have incorporated network flow algorithms into the Evaluation Client, maximizing evaluation efficiency. This optimization ensures that both Agent and Task Workers are utilized to their fullest potential.
Resumable Evaluation. Our framework includes a resumable evaluation feature, making it easy to recover and continue interrupted evaluations seamlessly.
With these advancements, our evaluation framework overcomes the limitations of traditional ap- proaches and provides a more versatile, efficient, and scalable solution for evaluating intelligent agents in multi-turn tasks.
The overall structure of our framework can be described in Figure 5.
A.3
# IMPLEMENTATION OF MAX-FLOW ALGORITHM
In our evaluation process, we employ the EdmondsâKarp algorithm (Edmonds & Karp, 1972) as a practical implementation of the FordâFulkerson method (Ford Jr & FuËlkerson, 1962) designed to compute the maximum flow in a network with a time complexity of O(|V ||E|2).
To formalize the problem, consider a scenario with n agents, denoted as A1, A2, · · · , An, and m tasks, denoted as T1, T2, · · · , Tm. Our objective is to conduct evaluations in l different groups, each focusing on the pair (Axk , Tyk ), where 1 ⤠k ⤠l. Additionally, for every such pair (Axk , Tyk ), we should evaluate sk samples. The number of workers for agent Ak and task Tk is denoted as w(Ak) and w(Tk) respectively.
The flow graph we construct can be described as G =< V, E >, where the vertex set V is defined as
V ={Ak|1 ⤠k ⤠n} ⪠{Tk|1 ⤠k ⤠m} ⪠{S, D}, (1)
20
Technical Report (v0.2)
Workers Task Workers Agent a Mi a ubuntu Pa S > = & __Action % alle ~ Observation ie API API Maxflow Task Algorithm Controller Deployed Models {3 â&â3 API] [API e | on = Agent Server Assigner
Figure 5: The toolkit of AGENTBENCH is meticulously crafted for the seamless deployment of tasks and agents, coupled with an efficient evaluation assignment system. Agent servers (left) manifest in diverse forms, enabling us to deploy a model server and expose an accessible API through the HTTP protocol. Task servers (right) are composed of a task controller and several task workers, whose environment is within an isolated environment, ensuring freedom from conflicts and optimal task execution. Evaluation client (center) establishes an agent-task graph and employs the max-flow algorithm to optimize interactions. This optimization results in client workers seamlessly engaging with agent and task servers, facilitating the smooth execution of tasks and evaluations.
And the weighted edge set E is denoted as
E ={(Axk , Tyk , sk)|1 ⤠k ⤠l} ⪠{(S, Ak, w(Ak)|1 ⤠k ⤠n} ⪠{(Tk, D, w(Tk)|1 ⤠k ⤠m}. (2)
We apply max-flow algorithm from source vertex S to destination vertex D. For each flow edge (Ai, Tj, f(i,j)), we allocate f(i,j) samples for agent Ai and task Tj. After allocation, the weight of the edges should be reduced by the value of flow. Upon completion of an evaluation, the weight of edge connected to either S or D should be increased by 1.
We also establish a periodic interval for applying the algorithm to the network for newly available evaluation triples.
# B OPERATING SYSTEM
B.1 DATASET DETAILS
Construction Details. Each evaluation sample in OS dataset encompasses following contents:
Instruction. The description of the problem in natural language that needs LLMs to solve. ⢠Docker Environment. local-os/default).
# default
Initialization Script (Optional). The bash scripts that need to be executed independently (docker exec) before the interaction starts (e.g., user configurations, files, system statuses). ⢠Start Script (Optional). The bash scripts executed after shell is created and before interaction. ⢠Checking Pipeline. The checking method to judge the correctness of LLMs answer or operation. ⢠Example Script (Optional). The bash scripts that serve as reference solutions. In other words, if executing them in the interaction, results are correct. Only for unit tests that introduced below.
We design two types of tasks in the OS evaluation beyond conventional QA-only evaluation.
Question Answering (QA): LLMs need to output commands to solve specific questions in OS (e.g., aggregate numbers, view file contents). In this case, they must commit answers finally. ⢠Operation: LLMs need to output commands to do some verifiable operations on the operating system (e.g., change file/user states). In this case, they do not need to commit final answers.
Thanks to the checking pipeline, two types of tasks can be evaluated in a unified solution.
21
# Technical Report (v0.2)
Collecting challenging queries regarding OS could be difficult. In practice, about half of our instructions are created or collected from humans, while the other half are mostly QA problems generated by gpt-4 and strictly filtered by passing the unit tests (i.e., yield correct answers/states).
For human instructions, we first gather 6000 real problems and solutions with bash or shell tag from Stack Overflow3. Then we sort them by the score (count of likes). We invite 8 annotators majored in programming to select challenging ones. For each selected problem, they create one or more task instructions and write a detailed problem description, the initialization script, the starting script, and the checking pipeline. Finally, we conduct a cross verification for each evaluation sample to make sure itâs correct. For each problem, it takes about 2 hours to do the annotation.
For generated problems, our unit test contains the following parts. 1) Initialization Script Correction: we execute the initialization script and remove samples with wrong initialization whose exit code does not equal to 0. 2) Example Code Correction: we execute the example code and the checking pipeline to judge the correctness of the answer. We remove samples with wrong answers.
In the end, we curate 144 high-quality diverse OS evaluation samples accompanied with testing interactive environments and corresponding checking pipelines (i.e., scripts). Agents are prompted with 1-shot CoT to better format their responses (Cf. Appendix B).
Evaluation Setup. For each problem (i.e., instruction), the execution can be divided into 3 parts.
⢠Initialization. We create a docker container with a specific image, and we run an initialization bash script to set up environments specified by the instruction.
⢠Interaction. We start a new shell in this docker, and run the starting bash script specified by the instruction. Then the LLM to test is fed with a piece of instruction and the problem description. It starts interaction with the shell. In each turn, two actions are provides. One is to run bash script, which allows the model to generate and run a series of commands in the shell. The other is to commit answer, which allows the model to terminate the interaction process. Itâs notable that the model will be judged that it fail to solve the problem if exceeding round limit (8 by default).
there is a checking pipeline containing a list of scripts f1, f2, · · · , fn, where fk denotes the k-th script piece in the pipeline. For fk, the answer of the model, o0, and the output of ft(t < k), ot, will be fed as input arguments into fk, i.e., ok = fk(o0, o1, · · · , okâ1). The result is correct if and only if all the scripts exit with code 0.
Metrics. We measure the Success Rate for LLMs to solve problems in the execution. There are only two final status for each item of the problems, wrong or correct.
B.2 ACTIONS
In OS evaluation, we design two major types of actions: bash and commit.
Bash: which launches a bash command (using textual inputs in content field) ⢠Commit: which announces the completion of the goal. If the task is a QA problem, then the agent should submit the final answer in content field; else the checking pipeline will automatically check the system status to judge the correctness.
B.3 PROMPT EXAMPLE
A prompt for OS evaluation consists of the instruction and the formulation of interaction trajectory. An example of instruction prompt is:
You are an assistant that will act like a person, Iâwill play the role of linux(ubuntu) operating system. Your goal is to implement the operations required by me or answer to the question proposed by me. For each of your turn, you should first think what you should do, and then take exact one of the three actions: "bash", "finish" or " answer".
1. If you think you should execute some bash code, take bash action, and you should print like this:
# 3https://stackoverflow.com/
22
Technical Report (v0.2)
Think: put your thought here. Act: bash âââbash # put your bash code here âââ
2. If you think you have finished the task, take finish action, and you should print like this:
Think: put your thought here.
Act: finish
3. If you think you have got the answer to the question, take answer action, and you should print like this:
Think: put your thought here.
Act: answer(Your answer to the question should be put in this pair of parentheses)
If the output is too long, I will truncate it. The truncated output is not complete. You have to deal with the truncating problem by yourself. Attention, your bash code should not contain any input operation. Once again, you should take only exact one of the three actions in each turn.
The trajectory is organized in CoT styles, and we use an 1-shot example to make model better understand the action space like the following.
<USER>: {{ INSTRUCTION }} Now, my problem is: Tell me how many files are in the directory "/etc"? <AGENT>: Think: To count the files in /etc, I need to print all the files in it. Act: bash âââbash ls /etc âââ <USER>: The output of the OS: cpi cron.hourly fuse.conf iproute2 lvm networkd-dispatcher protocols selinux tmpfiles.d [truncated because the output is too long] <AGENT>: Think: The output has been truncated because it is too long, so I need to count files by script directly. Act: bash âââbash ls -1 /etc | wc -l âââ
23
Technical Report (v0.2)
<USER>:
The output of the OS: 220 <AGENT>: Think: Now I get the answer, it is 220. Act: answer(220) <USER>: Now, I will start a new problem in a new OS. My problem is: {{ PROBLEM }} <LOOP>: <AGENT>: {{ RESPONSE }} Swicth AGENT.Action: "bash": <User>: The output of the OS: {{ OUTPUT }} "answer" or "finish": Executing Checking Script Exit Loop otherwise: Exit Loop
C DATABASE
C.1 DATASET DETAILS
Construction Details. We acquire the source queries and databases via reusing and amalgamating several established datasets: WikiSQL (Zhong et al., 2017), WikiTableQuestions (Pasupat & Liang, 2015), SQA (Iyyer et al., 2017), HybridaQA (Chen et al., 2020), and FeTaQA (Nan et al., 2021), ensuring the diversity of instructions and data.
To further enrich (and avoid leakage from) the dataset, we employed gpt-3.5-turbo to per- form data augmentation. Provided with the header information and original rows of a table, gpt-3.5-turbo generates ten new rows. Using the name, header information, and some SQL examples, we task gpt-3.5-turbo with generating five additional SQL queries. Each acquired SQL statement is then fed sequentially into gpt-3.5-turbo with instructions to rephrase the sentences without changing their original meanings. The valid entries are filtered and sampled into the final dataset with 1599 entries, categorized into three basic types of DB operations: select, insert, or update.
# As a result, each sample in the dataset comprises:
Instruction. A piece of description delineating the problem and guiding the agentâs action. ⢠Table Info. Explanations about the table name and column names (i.e., meta information). ⢠Table Content. The actual contents within the table, utilized to create the database. ⢠Correct Answer. For selection-type samples, it is a text answer; for other entry types (i.e., insert,
update), it is the hash code of the correctly modified table.
Evaluation Setup. We assess each problem in the dataset through the following procedure:
Initialization. An initial SQL script is constructed based on the table content, and a MySQL database is initialized in a docker container, which provides a forwarded port for interaction. ⢠Interaction. An initial prompt guides the agent to provide an executable SQL command along with its reasoning. The agent is provided with the prompt, instruction, and table information description, and it is expected to return a response in given format. We execute the SQL and
24
# Technical Report (v0.2)
directly return the result to the agent, continuing this loop until the agent commits its final answer or encounters an error (e.g., reaching the maximum round limit or failing to parse the action). ⢠Checking. For selection-type problems, we compare the agentâs answer with the standard text answer, disregarding the order, but expecting an exact match. If the answer is a single number, all equivalent representations are accepted (e.g., 5, "5.0", â+5â are considered identical). For insertion or updating types of problems, we calculate and compare the hash of the table after the agentâs operation with the hash of the table after the correct SQL operation.
Metrics. We measure the Success Rate of agents in completing instructions. Overall success rate is the macro average of the rate of three categories.
C.2 DATA AUGMENTATION
We elaborate on the data augmentation of three types of DB tasks based on the existing SQL datasets (Zhong et al., 2017; Pasupat & Liang, 2015; Iyyer et al., 2017; Chen et al., 2020; Nan et al., 2021), which are all QA problems without some common operations including inserting and updating. We first tested the validity of the raw data and then randomly sample from each category from filtered data to form the final dataset. We adopt gpt-3.5-turbo to enrich and rewrite the original instructions.
⢠Insert: Given the name, the header information, and the original rows of a table, we generate 5 SQL statements for insertion. Later we rephrase the sentences without changing their meaning (using shorter or longer expressions or changing the order).
⢠Update: Given the name, the header information, and the previously generated 5 SQL statements for insertion, we generate 5 SQL statements for modification based on the given statements. We rephrase the sentences following the above standard.
To ensure data quality, each augmented query statement are required to pass the unit test scripts.
The query type of tasks fall into the traditional scope of Text-to-SQL evaluation, and we only sample and categorize for evaluation. Each query statement in existing datasets is classified into following types: âCountingâ, âAggregation-MINâ, âAggregation-MAXâ, âAggregation-AVGâ, âAggregation- SUMâ, âRankingâ, or âComparisonâ. Each one can only belong to one type. The remaining will be categorized as "Other".
C.3 PROMPT EXAMPLE
We use the following format of prompts:
User: I will ask you a question, then you should help me operate a MySQL
database with SQL to answer the question.
You have to explain the problem and your solution to me and write down your thoughts.
After thinking and explaining thoroughly, every round you can choose to operate or to answer.
your operation should be like this: Action: Operation âââsql SELECT * FROM table WHERE condition; âââ You MUST put SQL in markdown format without any other comments. Your SQL
should be in one line.
Every time you can only execute one SQL statement. I will only execute the statement in the first SQL code block. Every time you write a SQL , I will execute it for you and give you the output.
If you are done operating, and you want to commit your final answer, then write down:
Action: Answer Final Answer: ["ANSWER1", "ANSWER2", ...] DO NOT write this pattern unless you are sure about your answer. I expect
an accurate and correct answer.
25
Technical Report (v0.2)
Your answer should be accurate. Your answer must be exactly the same as
the correct answer.
If the question is about modifying the database, then after done operation, your answer field can be anything.
If your response cannot match any pattern I mentioned earlier, you will be judged as FAIL immediately.
Your input will be raw MySQL response, you have to deal with it by yourself.
# D KNOWLEDGE GRAPH
D.1 DATASET DETAILS
Construction Details. In an effort to gauge the decision-making abilities of LLMs, specifically their proficiency in long-term planning, we have meticulously compiled a dataset sourced from pre-existing knowledge base question answering (KBQA) datasets on FREEBASE, including GrailQA (Gu et al., 2021), ComplexWebQuestions (Talmor & Berant, 2018), and GraphQuestions (Su et al., 2016).
We envisage KBQA as a tool learning setting, thereby outfitting the LLM with an array of KG- querying tools. By leveraging the S-expressions annotated in (Gu & Su, 2022), we can accurately establish the optimal sequence of tool applications corresponding to each question. In order to sustain a high degree of difficulty in the tasks, we have opted to preserve only those questions which necessitate a minimum of five instances of tool invocation. Through this rigorous selection methodology, we have accrued a dataset consisting of 1,663 questions. Each data entry in the dataset has the following fields:
Input Question. A natural language utterance that involves intricate KG information seeking. ⢠Topic Entities. A set of topic entities mentioned in the input question. We obviate the need of
performing entity linking, allowing the LLM to focus on long-term planning.
Action Sequence. The gold action sequence (i.e., tool invocations) that leads to the target answer. ⢠Gold Answer. The gold answer to the question, typically characterized by a set of KG entities.
Note that, in contrast to interacting with databases in AgentBench, where the particulars and content of the database are integrated into the input, describing an extensive KG to the LLM is not particularly feasible. This task is characterized by a partially observable environment, which is a critical aspect of its nature.
Evaluation Setup. To support our evaluation, we first host the latest version of FREEBASE using Virtuoso.4 Due to the complexity of SPARQL queries, we decide not to burden the LLM with crafting SPARQL queries by itself. Instead, we implement a series APIs that interface with the Virtuoso backend, allowing the LLM to query the KG more effortlessly.
We use the first 500 tasks from the datest for evaluation. Each task, when successfully executed, should ideally proceed through the following phases.
⢠Initialization. We prompt the LLM with the concrete task description, including the concrete description of each KG-querying tool that we provide.
Interaction. During this phase, the LLM is expected to invoke different tools to access the KG and accumulate the necessary information to respond accurately to the question. Importantly, the process is entirely autonomous, meaning the LLM determines the workflow entirely by itself. ⢠Final Answer Prediction. During its interaction with the KG, the LLM may generate a list of variables, each one representing a unique set of entities. If the LLM determines that one particular variable should signify the final answer, it will present this variable as its output and conclude the task.
Metrics. We use F1 score as the primary evaluation metric in our study, calculated by comparing the modelâs predicted answers to the gold standard answers. In addition to F1 score, we also use the Exact Match metric. However, unlike previous studies that measure Exact Match based on the logical form, we assess it based on the exact match between the predicted and gold answer sets.
# 4https://github.com/dki-lab/Freebase-Setup
26
Technical Report (v0.2)
Lastly, we also evaluate the Executability of the action sequences generated by the model. If the modelâs action sequence produces any set of answers when executed, it scores 1.0 for Executability. If it fails to produce an answer, it scores 0.
D.2 PROMPT EXAMPLE
Task description:
User: You are an agent that answers questions based on the knowledge stored in a knowledge base. To achieve this, you can use the following tools to
query the KB.
1. get_relations(variable: var) -> list of relations A variable can be either an entity or a set of entities (i.e., the result
variable can be either an entity or a set of entities (i.e., the result of a previous query). This function helps to navigate all relations
of a previous query). This function helps to navigate all relations in the KB connected to the variable, so you can decide which relation
is the most useful to find the answer to the question.
A simple use case can be âget_relations(Barack Obama)â, which finds all relations/edges starting from the entity Barack Obama.
The argument of get_relations should always be an entity or a variable (e .g., #0) and not anything else.
2. get_neighbors(variable: var, relation: str) -> variable Given a variable, this function returns all entities connected to the
variable via the given relation. Note that, get_neighbors() can only be used after get_relations() is used to find a set of viable relations.
A simple use case can be âget_neighbors(Barack Obama, people.person. profession)â, which returns the profession of Obama in Freebase.
3. intersection(variable1: var, variable2: var) -> variable Given two variables, this function returns the intersection of the two
variables. The two variables MUST be of the same type!
4. get_attributes(variable: var) -> list of attributes This function helps to find all numerical attributes of the variable.
Please only use it if the question seeks for a superlative accumulation (i.e., argmax or argmin).
5. argmax(variable: var, attribute: str) -> variable Given a variable, this function returns the entity with the maximum value
of the given attribute. It can only be used after get_attributes() is used to find a set of viable attributes.
A simple use case can be âargmax(variable, age)â, which returns the oldest entity belonging to the variable.
6. argmin(variable: var, attribute: str) -> variable Given a variable, this function returns the entity with the minimum value
of the given attribute. It can only be used after get_attributes() is used to find a set of viable attributes.
A simple use case can be âargmin(variable, age)â, which returns the youngest entity belonging to the variable.
7. count(variable: var) -> int Given a variable, this function returns the number of entities belonging
to the variable.
After a variable is produced along the process, you need to judge whether a variable is the final answer to the question. Each variable is represented as an id starting from 0. For example, #0 is the first variable, #1 is the second variable, and so on.
Once you find the answer, respond with âFinal Answer: #idâ, where id is the id of the variable that you think is the final answer. For
27
Technical Report (v0.2)
example, if you think #3 is the final answer, you MUST respond with â Final Answer: #3â.
You can only take ONE action at a time!! After you get the observation from its execution, you can take another action. You can take at most 15 actions to find the answer to the question.
Given the inherent complexity associated with enabling LLMs to query the KB, it has been observed that, in a zero-shot setting, LLMs struggle to generate any outputs of substantive relevance. As a result, we additionally provide a teaching example in our prompt:
Here is a demo that you can use to get started.
Question: the bipropellant rocket engine with kerosene and gas-generator cycle is designed by who?
# Entities: [Gas-generator cycle, Kerosene]
Thought: I need to first find engines with gas-generator cycle. To acheive this, I will query the KB to find relations connected to the entity âGas-generator cycleâ and see if any of them can help me find the answer.
# Action: get_relations(Gas-generator cycle)
Observation: [spaceflight.rocket_engine_cycle.rocket_engines, spaceflight .satellite.orbiting, spaceflight.rocket_stage.fuel, spaceflight. satellite_manufacturer.spacecraft_manufactured, automotive.fuel. engines]
Thought: From the above relations, I might use spaceflight. rocket_engine_cycle.rocket_engines to find the engines of Gas- generator cycle.
Action: get_neighbors(Gas-generator cycle, spaceflight. rocket_engine_cycle.rocket_engines)
Observation: variable #0, which are instances of spaceflight. bipropellant_rocket_engine
Thought: I also need to find engines with kerosene. To acheive this, I will query the KB to find relations connected to the entity âKerosene â.
Action: get_relations(Kerosene)
Observation: [spaceflight.satellite_manufacturer.spacecraft_manufactured, automotive.fuel.engines, spaceflight.rocket_engine_cycle. rocket_engines]
Thought: From the above relations, I might use spaceflight. rocket_engine_cycle.rocket_engines to find engines with kerosene cycle.
Action: get_neighbors(Kerosene, spaceflight.rocket_engine_cycle. rocket_engines)
Observation: variable #1, which are instances of spaceflight. bipropellant_rocket_engine
Thought: The engines with both gas-generator cycle and kerosene cycle shoule be the intersection of variable #0 and variable #1.
Action: intersection(#0, #1)
28
Technical Report (v0.2)
Observation: variable #2, which are instances of spaceflight. bipropellant_rocket_engine
Thought: Now I need to find who designed such engines, which will be the final answer. To acheive this, I will query the KB to find relations connected to the variable #2.
Action: get_relations(#2)
Observation: [spaceflight.rocket_engine.manufactured_by, spaceflight. rocket_engine.designed_by, spaceflight.rocket_engine.design_period, spaceflight.rocket_engine.status]
Thought: From the above relations, I might use spaceflight.rocket_engine. designed_by to find the designer of the engines.
Action: get_neighbors(#2, spaceflight.rocket_engine.designed_by)
Observation: variable #3, which are instances of pspaceflight. rocket_engine_designer
Thought: #3 is the final answer to the question, which represents the target rocket engine designer.
Final Answer: #3
# E DIGITAL CARD GAME
E.1 DATASET DETAILS
Construction Details. We use Aquawar framework as the basis for our interactive system. The first type of interaction is the action phase, where the model needs to select the fish it wants to act with and then choose the target for skill. To ensure the validity of model operations, we perform checks for valid actions. The second type of interaction is the guess phase, where we provide the model with known information, including fish species and skill descriptions, enemyâs targets. We have two naive strategies (random and greedy search) for testing purposes. The following is a detailed definition and description of the game process.
⢠Player and Cards. It is a two-player battle game with four pet fishes (i.e., cards) in each team. The card pool consists of ten fish (Appendix E.2), and both players choose four definite fish to use before the start of the game.
⢠Initial State. Each fish has 400 initial health, 200 initial attack power, active ability, and passive ability.
Basic Rule. Players choose a live fish to use its active skill or normal attack on an enemy fish each round. All alive fishâs passive ability will automatically trigger when meeting certain conditions. ⢠Assertion Mechanism. The identity of a playerâs fish is initially hidden. The counter-player can guess one of the playerâs fishâs identities each round. If the counter-player guesses correctly, the playerâs fishâs identity is revealed, and all its fish will get damaged.
⢠Round Process. Within a round of the game, the player for that round will first assert the identity of one opponentâs fish that are alive and whose identities have not been revealed. If the assertion is correct, all of the opponentâs fish that remain alive get damaged. Subsequently, the player for that round can command one alive fish to execute a normal attack or an active ability. Following this, any fish that meet the condition will unleash its passive ability.
Victory Condition. The victory condition is to have more fish alive at the end of the game.
To balance agent engagement and game complexity simultaneously, we designed two stages of game logic. We remove the assertions in the first stage while keeping assertions in the second stage. We test all the models on both the first and second stages separately and choose the average performance for final score.
29
# Technical Report (v0.2)
We choose two naive playing strategies as the baselines.
The first strategy is a simply random action from all available action spaces. ⢠The second strategy will try to use AOE attack if possible, and continuously evaluating whether a one-hit kill is possible. Then, it attempts to use active skills and, finally, resorts to normal attacks. Overall, this strategy follows a certain pattern but may not necessarily be the most optimal one.
Evaluation Setup. For each time of the game playing, we evaluate with the following steps:
⢠Initialization. We initiated the modified game logic environment, which uses pybind to compile, and the baseline game agent under the Ubuntu 20.04 environment.
Interaction. We place rule descriptions in the instruction prompt according to different game stages, and the LLM agent interacts and competes strategically with the baseline within the game logic environment. We give the LLM agent five chances to respond in the correct format. It will be immediately deemed defeated if it fails to output legal actions within the given number of attempts. At the same time, we encourage the model to output its reasoning process in CoT. ⢠Result Calculation. During the Interaction process, we will record the entire game process for
battle playback and calculate the game results to obtain the metrics for the task.
Metrics. Our comprehensive evaluation uses metrics that range from basic gameplay elements such as the wining rounds (Win Round) , total played rounds (Total Round), winning rate (Win Rate) , the total damage inflicted compared to total health (Damage Rate), and ultimately we provide a final reward score according to the above metrics:
reward = 0.7 Ã metricwinrate + 0.3 Ã metricdamagerate
E.2 THE ATTRIBUTES OF FISH
The game has ten kinds of fish according to the game rules.
⢠Spray
Counter (Passive): Inflicts 30 damage to the attacker when a teammateâs health is below 30% - AOE (Active): Attacks all enemies for 35% of its attack points.
Flame
Counter (Passive): Inflicts 30 damage to the attacker when a teammateâs health is below 30% - Infight (Active): Inflicts 75 damage on one living teammate and increases your attack points by 140. ⢠Eel
Deflect (Passive): Distributes 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. - AOE (Active): Attacks all enemies for 35% of its attack points.
⢠Sunfish
Deflect (Passive): Distributes 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. - Infight (Active): Inflicts 75 damage on one living teammate and increases your attack points by 140.
Barracuda
Reduce (Passive): There is a 30% chance to avoid any incoming damage each time. - Crit (Active): Deals 120 CRITICAL damage to an enemy.
⢠Mobula
Reduce (Passive): There is a 30% chance to avoid any incoming damage each time. - Subtle (Active): Choose a teammate or yourself to reduce the damage taken by 70% when attacked, and increase its attack points by 20.
Octopus
Heal (Passive): Regain 20 health points if the health is still greater than 0 when attacked. - Infight (Active): Inflicts 75 damage on one living teammate and increases your attack points by 140.
30
Technical Report (v0.2)
⢠Whiteshark
Heal (Passive): Regain 20 health points if the health is still greater than 0 when attacked. - Crit (Active): Deal 120% CRITICAL damage of your attack power to the enemy with the lowest health. If the targetâs health is below 160, increase the CRITICAL damage to 140%.
Hammerhead
Explode (Passive): Deal 40 damage to the source when attacked but not died. When the health is below 20%, increase its attack points by 15. - Crit (Active): Deal 120% CRITICAL damage of your attack power to the enemy with the lowest health. If the targetâs health is below 160, increase the CRITICAL damage to 140%.
As can be seen, there is overlap among the active and passive skills of different pet fish, which is done to better conceal the identity information of pet fish in the game and increase the strategic aspects of the game.
E.3 PROMPT EXAMPLE.
We use the following format of prompts for actions:
This is a two-player battle game with four pet fish on each team. The types of fish may vary.
Each fish has its 400 initial health, 200 attack power, active ability, and passive ability.
You can choose a live fish to use its active skill or normal attack ( causing half of attack power as damage) on an enemy fish each round. When the conditions are met, the fishâs passive ability will
automatically trigger, regardless of whether it is chosen.
Your fishâs identity is initially hidden. The enemy can guess one of your fishâs identity in each round. If the enemy guesses right, your fish âs identity is revealed, and each of your fish will get 50 damage. The victory condition is to have more fish alive at the end of the game.
The following are the four types of your pet fish: {âsprayâ: {âpassiveâ: "Counter: Deal 30 damage to attacker when a teammateâs health is below 30%. ", âactiveâ: âAOE: Attack all enemies for 35% of its attack points.â}, âflameâ: {âpassiveâ: "Counter: Deal 30 damage to attacker when a teammateâs health is below 30%. ", â activeâ: "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You canât attack yourself or dead teamate! "}, âeelâ: {âpassiveâ: âDeflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â, âactiveâ: âAOE: Attack all enemies for 35% of your attack points.â}, âsunfishâ: {â passiveâ: âDeflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â, âactiveâ: "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You canât attack yourself or dead teamate! "}}
The following are the four types of enemyâs pet fish: {âsprayâ: {âpassiveâ: "Counter: Deal 30 damage to attacker when a
teammateâs health is below 30%. ", âactiveâ: âAOE: Attack all enemies for 35% of its attack points.â}, âflameâ: {âpassiveâ: "Counter: Deal 30 damage to attacker when a teammateâs health is below 30%. ", â activeâ: "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You canât attack yourself or dead teamate! "}, âeelâ: {âpassiveâ: âDeflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â, âactiveâ: âAOE: Attack all enemies for 35% of your attack points.â}, âsunfishâ: {â passiveâ: âDeflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â, âactiveâ: "Infight: Attack one alive teammate for 75
31
Technical Report (v0.2)
damage and increases your attack points by 140. Notice! You canât attack yourself or dead teamate! "}}
Play the game with me. In each round, you should output your thinking
process, and return your move with following JSON format:
{âpick_fishâ: âpick an alive fish, you should give the name of the alive fishâ, âactionâ: âchoose from [normal, active]â, âtarget_positionâ: " targetâs position, you must choose from [0,3]"}
Notice! You must return your move in each round. Otherwise, you will be
considered defeated.
We use the following format of prompts for assertions in stage2:
This is a two-player battle game with four pet fish in each team. The types of fish may vary.
Each fish has its initial health, attack power, active ability, and
passive ability.
All fishâs identities are initially hidden. You should guess one of the enemy fishâs identities in each round. If you guess right, the enemy fishâs identity is revealed, and each of the enemyâs fish will get 50 damage. You can only guess the identity of the live fish.
The victory condition is to have more fish alive at the end of the game.
The following are the four types of your pet fish: {âsprayâ: {âpassiveâ: "Counter: Deal 30 damage to attacker when a teammateâs health is below 30%. ", âactiveâ: âAOE: Attack all enemies for 35% of its attack points.â}, âflameâ: {âpassiveâ: "Counter: Deal 30 damage to attacker when a teammateâs health is below 30%. ", â activeâ: "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You canât attack yourself or dead teamate! "}, âeelâ: {âpassiveâ: âDeflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â, âactiveâ: âAOE: Attack all enemies for 35% of your attack points.â}, âsunfishâ: {â passiveâ: âDeflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â, âactiveâ: "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You canât attack yourself or dead teamate! "}}
The following are the four types of enemyâs pet fish: {âsprayâ: {âpassiveâ: "Counter: Deal 30 damage to attacker when a
teammateâs health is below 30%. ", âactiveâ: âAOE: Attack all enemies for 35% of its attack points.â}, âflameâ: {âpassiveâ: "Counter: Deal 30 damage to attacker when a teammateâs health is below 30%. ", â activeâ: "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You canât attack yourself or dead teamate! "}, âeelâ: {âpassiveâ: âDeflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â, âactiveâ: âAOE: Attack all enemies for 35% of your attack points.â}, âsunfishâ: {â passiveâ: âDeflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â, âactiveâ: "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You canât attack yourself or dead teamate! "}}
Play the game with me. In each round, you should output your thinking process, and return your move with following JSON format:
{âguess_typeâ: "the enemyâs fish type you may guess", âtarget_positionâ: "guess targetâs position, you must choose from [0,3]"}
Notice! You must return your move in each round. Otherwise, you will be considered defeated.
32
Technical Report (v0.2)
F LATERAL THINKING PUZZLES
F.1 DATASET DETAILS
Construction Details. Each sample is constructed of a pair of story (a riddle, e.g., A man walked into a restaurant, ordered a bowl of turtle soup, and after finishing it, he committed suicide. Why did he do that?) and truth. We categorize samples into four levels of difficulty: easy, medium, hard, and expert. The LTP rules for LLM agent playing are as follows:
Roles: Roles in LTP evaluation are a host and a solver. The host knows the story and truth, providing the story to the solver, and guiding it to guess out the truth. The solver, played and acted by an LLM, tries to find out the truth by asking questions and synthesizing hostâs answers. ⢠Solving Steps: There is a maximum round for each game, for example, 25. The solver needs to propose a question in each round based on known facts. The questions should be the ones that can be answered by âYesâ, âNoâ, or âIrrelevantâ. Host reply to the questions with correct answers. To lower the difficulty for LLM agents, sometimes the host will provides some hints in responses when solvers get trapped in wrong directions of reasoning.
⢠Game Termination: When the solver thinks it has guessed out the major part of the truth, it can declare the guessed plot to the host. If it is correct, the host will announce the end of the game.
Evaluation Setup. For each pair of story and truth, we evaluate the models with the following steps:
Initialization. Setting up the LTP host system via local python package installation or web API. ⢠Interaction. We set up system prompts for LLMs to build their roles of players. LLMs are tested as solvers within the maximum round for each game, if the LLM does not exceed the max token length. In automatic evaluation, we limit the answer to be mostly "Yes", "No", or "Irrelevant", and extract the answer from gpt-3.5-turboâs responses. LLMs are also asked to summarize their reasoning in automatic evaluation in order to help the termination detection to be more accurate. ⢠Checking. We do the pilot study of each LLM to collect all situations in game process and design the checking plan. For automatic evaluation, we set up some key words for gpt-3.5-turbo to answer and remind the model to consider some flexible situation like synonyms.
Metrics. We evaluate LLMsâ Lateral reasoning ability by two self created metrics:
⢠Single Game Accuracy (SGA): The proportion of rounds in which LLMs approaching the truth in a single game.
Round Efficiency (RE): How fast the model can guess out the truth within the maximum round. ⢠Query Relevance (QR): Relevance between modelâs questions and the truth. ⢠Game Progress (GP): Progress before a game end, which serves as the main metric. We break down the groundtruth into several points and measure how many points are reached by an agent.
F.2 EVALUATION ON LTP SYSTEM
We evaluate the LTP System by human validation, validating systemâs accuracy on milestone recogni- tion and fact verification. We compare the Single Game Accuracy and Query Relevance between automatic evaluation and human evaluation, and found that automatic evaluation sometimes more tolerate for the agent, which make SGA and QR seem better than human evaluation, especially on open-sourced models. We plan to train a model specifically for the host of the game, in order to provide a better game experience and a more precise evaluation. For Game Progress and Round Efficiency, the LTP system provides an objective evaluation, which can match the level of human evaluation.
F.3 LTP GAME PROGRESS AND TERMINATION
The progress of game is defined as the proportion of hit key points in the truth. The key points are summarized by gpt-3.5-turbo, which are concluded in the dataset as âanswer_keysâ (see an example below) Truth: That night they went to the abandoned building to record the number of steps. They verified what was said on the Internet, and there would
33
# Technical Report (v0.2)
be one step less when counting the stairs at night. However, when they went to the abandoned building for verification the next day, they found that there were no stairs at all.}ââ:
Key points: 1. They want to count the steps of the abandoned building. 2. A supernatural event occurred. 3. They saw a claim online: counting stairs at night will result in one
step less.
4. The next day, when they went to the abandoned building to verify, they
found no stairs.
5. They broke down because they were terrified.
The number of key points varies among samples. As for the decision of whether the agent guess out key points, we first change relevant questions into declarative sentences, then simplify sentences into one sentence. After guessing out a key point, we delete that key point and relevant inferences to avoid repeated guessing.
F.4 PROMPT EXAMPLE
We use the following format of prompts for agents: You are a game player, and you are playing Lateral Thinking Puzzle, also
known as Situation Puzzle.
Lateral Thinking Puzzle is a deductive reasoning game, and here are the game rules:
1. At the beginning of the game, you will receive a narrative, referred to as "story". Based on the story, you need to ask questions that can be answered with "yes", "no", or "irrelevant" to guees out the " truth".
2. By asking questions, you narrow down the range of possibilities until
you eventually guess out the truth. 3. Each time, you can only ask one question. 4. Remember that your role is a player. You cannot declare the end of the
game, give up on reasoning, or request a new game.
5. You cannot directly repeat information already provided in the story. 6. You cannot directly ask for details about the story in the form of " why" questions; you need to make your own guesses for truth.
7. You cannot directly inquire about the story; you must make your own deductions.
Next, please make full use of the information provided above to engage in game reasoning. Keep in mind that your questions should be answerable with "yes", "no", or "irrelevant", and you can only ask one question at a time. Here is your story: {story}
You can start guessing the content of the truth, and I will answer your questions. Please note that your questions should be answerable with "yes", "no", or "irrelevant".
We use the following format of prompts for host: USER: I need you to be the host of a game called Lateral Thinking Puzzle.
Lateral Thinking Puzzle is a game consist of a story and a truth. Your story is: â{story}â Your truth is: â{answer}â
Here are the game rules: 1. You know both the "story" and the "truth". When a user wants to play Lateral Thinking Puzzle, you provide them with the "story". The user only knows the "story" and is unawared of the "truth".
34
Technical Report (v0.2)
2. The user asks questions that can be answered with "yes," "no," or " irrelevant". Their questions are aimed at guessing the "truth". Based on the "truth", you respond to the userâs questions using "yes," "no
," or "irrelevant" to guide them towards guessing the correct truth. 3. If the user directly asks for details about the truth using the form of "why" questions, inform them that they need to make their own guesses.
4. You must fully understand and accurately interpret the information from the truth. Based on the information of the truth and the userâs past questions, you answer the userâs questions. The userâs questions may not necessarily contain information from the truth, but your responses must align with the facts of the truth.
5. You can only answer "irrelevant" when the truth cannot provide a direct or indirect answer. Note that this is the only condition for responding "irrelevant"; otherwise, you should answer "yes" or "no."
6. You cannot directly disclose the information from the truth to the user, even if they ask directly.
7. You need to judge the userâs questions as a whole and understand their overall intent. Avoid answering based solely on a particular point; your responses must align with the facts of the truth.
8. During the userâs process of guessing the truth, if they come close to
some truths but still have gaps in understanding the complete truth of the truth, you can provide certain entry point hints. However, you cannot directly reveal information from the truth.
During the game process, please adhere to the above game rules to ensure a positive gaming experience for the users. Pay close attention to the questions asked and ensure that your responses comply with both the game rules and the information from the truth. When a user requests to play the game, provide them with the story and help them guess the truth by answering with "yes", "no", or "irrelevant". Remember that with each response, you must fully understand and abide by the aforementioned game rules, as well as the story and the truth
ASSISTANT: Alright, I understand that my role is to be the host of the Lateral
Thinking Puzzle and help users guess the truth by answering their questions. I have fully grasped all the information regarding the story and the truth and have carefully read all the rules. I assure that I will abide by all the rules throughout the game process.
USER: Please summarize the key points of the story to ensure that you have
understood it.
# ASSISTANT: {story_key}
USER: Please summarize the key points of the truth to ensure that you have
understood it.
# ASSISTANT: {answer_key}
USER: Please restate the rules to ensure that you have understood all of them.
ASSISTANT: 1. You know both the "story" and the "truth". When a user wants to play Lateral Thinking Puzzle, you provide them with the "story". The user only knows the "story" and is unawared of the "truth".
35
Technical Report (v0.2)
2. The user asks questions that can be answered with "yes," "no," or " irrelevant". Their questions are aimed at guessing the "truth". Based on the "truth", you respond to the userâs questions using "yes," "no
," or "irrelevant" to guide them towards guessing the correct truth. 3. If the user directly asks for details about the truth using the form of "why" questions, inform them that they need to make their own guesses.
4. You must fully understand and accurately interpret the information from the truth. Based on the information of the truth and the userâs past questions, you answer the userâs questions. The userâs questions may not necessarily contain information from the truth, but your responses must align with the facts of the truth.
5. You can only answer "irrelevant" when the truth cannot provide a direct or indirect answer. Note that this is the only condition for responding "irrelevant"; otherwise, you should answer "yes" or "no."
6. You cannot directly disclose the information from the truth to the user, even if they ask directly.
7. You need to judge the userâs questions as a whole and understand their overall intent. Avoid answering based solely on a particular point; your responses must align with the facts of the truth.
8. During the userâs process of guessing the truth, if they come close to some truths but still have gaps in understanding the complete truth of the truth, you can provide certain entry point hints. However, you cannot directly reveal information from the truth.
USER: Alright, we can now start the game. Remember, before each response, you should review the key points of the story, the key points of the truth, and the rules. Answer with "yes", "no", or "irrelevant".
ASSISTANT: Alright, as the host of the game, I will adhere to the above rules and ensure that my responses comply with the rules and the information from the truth. Below is your story:
# {story}
You can start guessing the content of the truth, and I will answer your questions. Please note that your questions should be answerable with "yes", "no", or "irrelevant".
USER: {question} Please answer with "yes", "no", or "irrelevant".
Here is the prompt to convert questions answered by âYesâ into declarative sentence.
Please restate the following content as a declarative sentence and simplify it into one sentence: {question}
Here is the prompt to convert questions answered by âNoâ into declarative sentence.
Please restate the following content as a declarative sentence by using the opposite meaning and then simplify it into one sentence: {question}
Here is the prompt to merge reasoned out information into one sentence to judge whether the agent guess out the key points:
Please simplify the following content into one sentence: {reasoning}
Here is the prompt to judge whether the merged sentence hit the key point.
36
Technical Report (v0.2)
Please compare the information between Sentence 1 and Sentence 2 to determine if Sentence 2 contains all the information in Sentence 1, including key details and descriptions. Please answer with "yes" or " no". Sentence 1: {key} Sentence 2: {merged sentence}"}
# G HOUSE-HOLDING
G.1 DATASET DETAILS
Construction Details. The ALFWorld benchmark comprises of textual environments designed to mimic household scenarios, providing an interactive environment where an agent can perform decision-making tasks through text-based interfaces. Given the household environment description and an target instruction, the agentâs objective is to break down the complex high-level target into a sequence of straightforward actions. After each step, the agent receives environment feedback, allowing the agent to adapt the plan dynamically and move on to the subsequent task to eventually accomplish the main objective.
Each evaluation sample in ALFWorld dataset encompasses following contents:
Environment Description. The detailed description of the whole household environment, including agentâs initial position and a snapshot of the room containing objects and their IDs. ⢠Objective. The goal that needs the agent to accomplish in the environment, usually requiring
multi-step reasoning and exploring (e.g. put the lamp on the table).
⢠Simulated Environment. After every action of the agent, the simulated environment gives immediate feedback and evaluates whether the agent has completed the task.
In the dataset, we utilized 134 solvable problems from the ALFWorld eval out of distribution split of the dataset. All the problems were categorized into six categories: pick and place, pick clean then place, pick heat then place, pick cool then place, look at obj, and pick two obj.
Evaluation Setup. Due to the inherent complexity of the problem and the high standards required for the output format, we employ a 1-shot evaluation setting. For each category of problem, we use one relatively simple and complete interact processes of the same category from the training set as an example. Following ReAct (Yao et al., 2023b), we adopt the few-shot examples and prompts in corresponding repository5. Additionally, if LLM output format is invalid, we use the BLEU metric to assess the similarity of the output to all valid action options. The option with the highest similarity will be chosen as the action of the model for this round.
For each sample, the evaluation process can be divided into 2 parts.
⢠Initialization. We describe the task to the model and provide one successful example. Afterwards, we elaborate on the environment and delineate the objective required to be accomplished.
⢠Interaction. The model generates some thoughts and the next action based on the feedback received from previous interactions and the information from the environment. After receiving the action from the model, the environment provides feedback (changes to the environment or information observed by the model). This process is repeated until the model successfully achieves its goal (which is considered a success) or reaches its maximum number of actions (which is considered a failure). It is worth noting that sometimes, after several unsuccessful attempts, the model may repeatedly output the same content. To save evaluation time, we judge that if the model outputs identical content three times consecutively, it will be deemed a failure due to repetition.
Metrics. We employ the overall Success Rate as a measure of model performance, that is, the number of tasks successfully completed by the model divided by the total number of tasks.
# 5https://github.com/ysymyth/ReAct
37
Technical Report (v0.2)
G.2 PROMPT EXAMPLE
To align the output format with the legal commands supported by the simulated environment, we adopted a 1-shot evaluation setup where one successfully completed task example was concatenated after the instruction. At the beginning of the interaction, we describe the task to the model using the following instruction.
Interact with a household to solve a task. Imagine you are an intelligent agent in a household environment and your target is to perform actions to complete the task goal. At the beginning of your interactions, you will be given the detailed description of the current environment and your goal to accomplish. For each of your turn, you will be given a list of actions which you can choose one to perform in this turn. You should choose from two actions: "THOUGHT " or "ACTION". If you choose "THOUGHT", you should first think about the current condition and plan for your future actions, and then output your action in this turn. Your output must strictly follow this format:"THOUGHT: your thoughts.
ACTION: your next action
"; If you choose "ACTION", you should directly output the action in this turn. Your output must strictly follow this format:" ACTION: your next action
". After your each turn, the environment will give you immediate feedback based on which you plan your next few steps. if the environment output "Nothing happened", that means the previous action is invalid and you should try more options.
Reminder: 1. the action must be chosen from the given available actions. Any
1. the action must be chosen from the given available actions. Any actions except provided available actions will be regarded as illegal
actions except provided available actions will be regarded as illegal .
2. Think when necessary, try to act directly more in the process.
All the tasks in the datasets are categorized into six classes. To better guide the model in accomplishing the objectives, we have selected one relatively simple example of successful completion of similar tasks for each category as 1-shot example. Here is an example:
User: You are in the middle of a room. Looking quickly around you, you see a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a countertop 1, a garbagecan 1, a handtowelholder 2, a handtowelholder 1, a sinkbasin 2, a sinkbasin 1, a toilet 1, a toiletpaperhanger 1, and a towelholder 1. Your task is to: put some spraybottle on toilet.
Agent: THOUGHT: The task is to put some spraybottle on toilet, so first i need to search the room for some spraybottle. After finding the
spraybottle, i need to take it to the toilet. I wil start my search on cabinets.
ACTION: go to cabinet 1
User: On the cabinet 1, you see a cloth 1, a soapbar 1, a soapbottle 1. Agent: ACTION: go to cabinet 2 User: The cabinet 2 is closed. Agent: ACTION: open cabinet 2 User: You open the cabinet 2. The cabinet 2 is open. In it, you see a
# candle 1, and a spraybottle 2.
Agent: ACTION: take spraybottle 2 from cabinet 2 User: You pick up the spraybottle 2 from the cabinet 2. Agent: ACTION: go to toilet 1 User: On the toilet 1, you see a soapbottle 2. Agent: ACTION: put spraybottle 2 in/on toilet 1 User: You put the spraybottle 2 in/on the toilet 1.
# H WEB SHOPPING
H.1 DATASET DETAILS
Construction Detail. The environment displays the text observation of the webpage and available actions to agents. Agent may freely explore the website and browse through items with clickable buttons just as in the real world. About a million products are scraped from amazon.com to form
38
# Technical Report (v0.2)
the database of website. Then each of them is annotated with labels representing its own attribute. 12,087 human instructions are collected and linked with goals along with expected attributes. Please refer to (Yao et al., 2022) for more dataset construction details.
Evaluation Setup. We adopt the first 500 entries of 12,087 instructions as test set (following (Yao et al., 2022)âs official implementation). Each round of interaction can be decomposed as following steps:
⢠Instructing. After the initial prompt that tells environment information and the format in which LLMs should response, we give instructions about what kind of product we wish to buy.
⢠Interacting. Agent respond in given format, as prompted, containing their thoughts and the action they wish to take. The actions can be categorized into two types: search and click, corresponding with the actual actions of using search engine and clicking buttons in real world. The environment answers agentâs action with a simplified text version of webpage and a list of available buttons. This process repeats until the agent click "buy now" button or round limit is exceeded.
⢠Calculating reward. We use the reward function in the paper as the metric. The reward is mapping from the similarity of the attributes we are expecting and the attributes that the bought product actually have to a number between 0 and 1.
Metrics. As there might be more than one suitable item for a given query, Webshop adopts a matching reward as its evaluation metric:
Reward = |Uatt ⩠Yatt| + |Uopt ⩠Yopt| + I[yprice ⤠uprice] |Uatt| + |Uopt| + 1 · rtype (3)
where
rtype = 0, 0.1, 0.5, 1, if TextMatch = 0 if TextMatch < 0.1 if TextMatch ⤠0.2 and query not match and category not match otherwise
U and Y stand for goal and chosen product, att and opt stand for attributes and options. TextMatch is a text match of pronoun, noun, and proper noun between chosen and goal product title.
H.2 PROMPT EXAMPLE
We use the following format of the prompt:
User: You are web shopping. I will give you instructions about what to do. You have to follow the instructions. Every round I will give you an observation and a list of available actions, you have to respond an action based on the state and instruction. You can use search action if search is available. You can click one of the buttons in clickables. An action should be of the following structure: search[keywords] click[value] If the action is not valid, perform nothing. Keywords in search are up to you, but the value in click must be a value in the list of available actions. Remember that your keywords in search should be carefully designed. Your response should use the following format: Thought: I think ... Action: click[something]}
39
(4)
# Technical Report (v0.2)
User: Observation: {observation}
Available Actions: {available_actions}
We use an one-shot example.
User: Observation: "WebShop [SEP] Instruction: [SEP] i need a long lasting 6.76 fl oz bottle
of lâeau dâissey, and price lower than 100.00 dollars [SEP] Search"
Available Actions: {"has_search_bar": true, "clickables": ["..."]}
Agent: Thought: I think I should use the search bar to look for the product I need.
Action: search[lâeau dâissey 6.76 fl oz bottle price < 100.00]
User: Observation: "Instruction: [SEP] i need a long lasting 6.76 fl oz bottle of lâeau dâ
issey, and price lower than 100.00 dollars [SEP] Back to Search [SEP] Page 1 (Total results: 50) [SEP] Next > [SEP] B000VOHH8I [SEP] Lâeau Dâissey By Issey Miyake for MenEau De Toilette Spray, 6.7 Fl Oz Bottle [SEP] $64.98 [SEP] B000MJZOPK [SEP] Lâeau dâIssey by Issey Miyake for Women 3.3 oz Eau de Toilette Spray [SEP] $49.98 [SEP] B0012S249E [SEP] Lâeau Dâissey By Issey Miyake For Women. Shower Cream 6.7-Ounces [SEP] $31.36 [SEP] B01H8PGKZS [SEP] Lâeau DâIssey FOR MEN by Issey Miyake - 6.7 oz EDT Spray [SEP] $67.97 [SEP] B00G3C8FHE [SEP] LâEau dâIssey pour Homme - Eau de Toilette 4.2 fl oz [SEP] $51.25 [SEP] B000R94HRG [SEP] Issey Miyake LâEau DâIssey Pour Homme Eau De Toilette Natural Spray [SEP] $44.99 [SEP] B000C214CO [ SEP] Issey Miyake Lâeau Dâissey Eau de Toilette Spray for Men, 4.2 Fl Oz [SEP] $53.99 [SEP] B0018SBRDC [SEP] Issey Miyake Lâeau dâIssey for Women EDT, White, 0.84 Fl Oz [SEP] $27.04 [SEP] B000XEAZ9Y [SEP] Lâeau De Issey By Issey Miyake For Men. Eau De Toilette Spray 6.7 Fl Oz [SEP] $67.08 [SEP] B079HZR2RX [SEP] Lâeau dâIssey Pure by Issey Miyake for Women 3.0 oz Nectar de Parfum Spray [SEP] $71.49"
Available Actions: {"has_search_bar": false, "clickables": ["...", "...", "...", "...",
"...", "...", "...", "...", "...", "...", "...", "..."]}
Agent: Thought: I think I should click on the product I need, which is B000VOHH8I.
Action: click[B000VOHH8I]
User: Observation: "Instruction: [SEP] i need a long lasting 6.76 fl oz bottle of lâeau dâ
issey, and price lower than 100.00 dollars [SEP] Back to Search [SEP] < Prev [SEP] size [SEP] 2.5 fl oz [SEP] 6.76 fl oz (pack of 1) [SEP] Lâeau Dâissey By Issey Miyake for MenEau De Toilette Spray, 6.7 Fl Oz Bottle [SEP] Price: $64.98 [SEP] Rating: N.A. [SEP] Description [ SEP] Features [SEP] Reviews [SEP] Buy Now"
40
Technical Report (v0.2)
Available Actions: {"has_search_bar": false, "clickables": ["...", "...", "...", "...", "...", "...", "...", "..."]} Agent: Thought: I think I should click on the â6.76 fl oz (pack of 1)â option to select the size I need. Action: click[6.76 fl oz (pack of 1)] User: Observation: "Instruction: [SEP] i need a long lasting 6.76 fl oz bottle of lâeau dâ issey, and price lower than 100.00 dollars [SEP] Back to Search [SEP] < Prev [SEP] size [SEP] 2.5 fl oz [SEP] 6.76 fl oz (pack of 1) [SEP] Lâeau Dâissey By Issey Miyake for MenEau De Toilette Spray, 6.7 Fl Oz Bottle [SEP] Price: $64.98 [SEP] Rating: N.A. [SEP] Description [ SEP] Features [SEP] Reviews [SEP] Buy Now" Available Actions: {"has_search_bar": false, "clickables": ["...", "...", "...", "...", "...", "...", "...", "..."]} Agent: Thought: I think I should click on the âBuy Nowâ button to purchase the product.
Action: click[Buy Now]
# I WEB BROWSING
I.1 DATASET DETAILS
Construction Details. Mind2Web covers domains of Travel, Information, Sevice, Shopping, and Entertainment, assembled using SimilarWeb ranking as a reference. It hires annotators to first propose task goals based on the current website, and then record their traces of interaction as expert demonstrations. Our adoption of it primarily focuses on generalization across environments, i.e., the Cross Domain test set which contains 912 tasks from 73 websites, spread among domains including Housing, Job, Social Media, Education, Health, Government, Home Service, etc. Please refer to (Deng et al., 2023) for more dataset construction details. Each task sample encomposses the following contents:
⢠Task Description. A high-level (instead of step-by-step) goal that can be achieved on the website, such asâGet the highest rated SAP S/4 HANA course rated 4, and up with a duration between 3 to 6 hours for an intermediate, and add this to your cart and checkoutâ.
⢠(Reference) Action Sequence. In the annotated interaction sequence, a meta-action at at step t includes {et, ot}, where et represents the unique backend id of the target element, and ot refers to the symbolic action operated on et (i.e., Click, Type, and Select Options). For Type and Select Options, corresponding textual inputs are also included.
⢠Webpage Information. A detailed observation of the web browsing environment at each step. Throughout the manual annotation process, each observed step captures a snapshot, incorporating the raw HTML codes from the website as well as the previous interaction trajectory.
It has been found that LLMs consistently face challenges when handling the cumbersome raw HTML code associated with real-world web pages. Therefore, Mind2Web proposes to rank and filter the HTML elements with a small language model, e.g., DeBERTa, to enhance inference efficiency.
41
# Technical Report (v0.2)
Given the userâs high-level instruction, the agent continuously interacts with the web system by receiving the observation of the current page content and the action histories, then predicting the next action, which consists of the target element and intended operation.
Evaluation Setup. The evaluation involves a dual process to improve the efficiency following (Deng et al., 2023). A fine-tuned small language model is first employed to rank HTML elements and select top-k potential candidates. Subsequently, we prompt and formulate the element selection as a multi-choice QA problem, providing five candidates for each round. For the Type and Select Options operations, agents are additionally prompted to specify the argument for the operation, i.e., textual input to type or option to select.
Metrics. For evaluation, as suggested in the original paper, we consider the following metrics:
Element Accuracy. Calculates the accuracy of the chosen element et. ⢠Action F1. Determines the token-level matching score for the operation ot. It brings a distinction
for Type and Select Option operations due to the existence of text values.
⢠Success Rate. Evaluates the predicted action correctness compared to reference actions. For Step Success Rate, we grant success if the selected element et is correct and the predicted operation ot matches the ground truth value at the step. Likewise, for the Task Success Rate, a task is considered successful only if all the steps have been successful, making it a rigorous measure. Unfortunately, even the best LLMs now can only achieve single-digit task success percentages.
We report Step Success Rate as the main metric showing the independent accuracy of each action step, due to the current struggles for LLMs to ensure overall task success rates. Regarding the experimental setup, we select topk 10 candidates to construct multichoice questions utilizing CoT few-shot prompting. Consequently, the GPT-3.5 results can diverge from the original paper (Deng et al., 2023) under topk of 50 setting and different prompting strategies.
I.2 PROMPT EXAMPLE.
We use the following 3-example CoT prompts for Mind2Web evaluation:
User: âââ <html> <div> <div> <a tock home page /> <button id=0 book a reservation. toggle open> <span> Book a reservation </span> </button> <button book a reservation. toggle open> </button> </div> <div> <select id=1 type
> <option reservations true> Dine in </option> ... </html> âââ
Based on the HTML webpage above, try to complete the following task: Task: Check for pickup restaurant available in Boston, NY on March 18, 5 pm with just one guest
Previous actions: None What should be the next action? Please select from the following choices
(If the correct action is not in the page above, please select A. â None of the aboveâ):
A. None of the above B. <button id=0 book a reservation. toggle open> <span> Book a C. <select id=1 type> <option reservations true> Dine in </option> <
option
D. <div id=2> <p> Celebrating and supporting leading women shaking up
Assistant: Thought: I need to select pickup restaurant first. Answer: C. Action: SELECT Value: Pickup
# User: âââ
42
# Technical Report (v0.2)
<html> <div> <main main> <section tabpanel> <div> <ul tablist> <li tab
heading level 3 search and> </li> <li id=0 tab heading level 3 search and> <span> Hotel </span> </li> <li tab heading level 3 search and> </li> <li tab heading level 3 search and> </li> </ul> <div tabpanel> <div id=1> <div> <span> Dates* </span> <button button clear dates /> </div> ... </html> âââ Based on the HTML webpage above, try to complete the following task: Task: Compare the fare types to book a 1-adult ticket from Springfiels, IL to Austin, TX for April 29th 2023 Previous actions: [combobox] Enter your departing city, airport name, or airpor... -> TYPE: SPRINGFIELD [button] Springfield, IL, US (SPI) -> CLICK [combobox] Enter your destination city, airport name, or airp... -> TYPE: AUSTIN [button] Austin, TX, US (AUS) -> CLICK What should be the next action? Please select from the following choices (If the correct action is not in the page above, please select A. â None of the aboveâ): A. None of the above B. <li id=0 tab heading level 3 search and> <span> Hotel C. <div id=1> <div> <span> Dates* </span> <button button clear dates D. <ul id=2> <a mobile tools> </a> <a open unitedâs tiktok Assistant: Thought: There are no elements that allow setting the date or viewing the fare, so there is no correct action. Answer: A. User: âââ <html> <div> <nav main menu> <ul> <li> <div button> Car Sales </div> <div id=0> <div> <div> <div> Buy A Car </div> <div> Plan Your Purchase </ div></div> <div> <h4> Its Tax Refund Time. Treat Yourself to an Upgrade. </h4> <p> With a variety of options, invest your refund in what you really want - a quality, used vehicle from Enterprise. </p> ... </html> âââ Based on the HTML webpage above, try to complete the following task: Task: Find a mini van at Brooklyn City from April 5th to April 8th for a 22 year old renter. Previous actions: [searchbox] Pick-up & Return Location (ZIP, City or Airport) (... -> TYPE : Brooklyn [option] Brooklyn, NY, US Select -> CLICK What should be the next action? Please select from the following choices (If the correct action is not in the page above, please select A. â None of the aboveâ):
A. None of the above B. <div id=0> <div> <div> <div> Buy A Car </div> <div> C. <div id=1> Enterprise Fleet Management </div> D. <button id=2 selected pick-up date 03/19/2023> <span> <span> 19 </span
>
Assistant: Thought: The search has already been set to Brooklyn. Next, I should choose pick-up date. Answer: D. Action: CLICK
43
Technical Report (v0.2)
J DETAILED ANALYSIS
J.1 VALIDITY ANALYSIS OF EXECUTION OUTCOMES
J.1.1 MOTIVATION OF VALIDITY ANALYSIS
In the realm of artificial intelligence and machine learning, the efficacy, precision, and reliability of models are crucial for practical implementations. Evaluating multiple models provides an understand- ing of their respective strengths and limitations, leading to better informed decisions about which models are best suited for specific tasks. The purpose of this validity analysis is to offer a systematic approach to discern how different models perform, particularly in terms of task completion, context size constraints, return format accuracy, action accuracy, and task limitations. This deep dive into performance parameters not only enhances our knowledge about the modelsâ capabilities, but also aids in refining and optimizing them for future applications.
J.1.2 DEFINITION OF VALIDITY ANALYSIS
For comprehensive validity analysis, we have demarcated the results into five distinct categories:
⢠Completed: Denotes instances where models, irrespective of the end outcome, successfully finished the task as per the instructions.
⢠Context Limit Exceeded: Denotes instances where the modelâs length was constrained by the API, predominantly observed in the text-davinci model.
⢠Invalid Format: Denotes instances where models, despite receiving clear instructions, failed to return responses in the expected format.
Invalid Action: Denotes instances where the models returned in the correct format, but their actions either fell outside the permitted action space or had incorrect action parameters. ⢠Task Limit Exceeded: Denotes instances tasks reached their termination criteria, such as
exceeding the stipulated number of turns.
By categorizing the results into these classes, we can gain a clearer picture of where each model excels and where they encounter challenges, allowing for targeted improvements.
J.1.3 VALIDITY ANALYSIS OF MODELS
For our evaluation, we scrutinized the validity performance of 27 distinct models. Apart from the text-davinci model, which has an inherent strict API context length constraint, the outcomes for other models primarily fall under the categories of Completed, Invalid Format, Invalid Action, and Task Limit Exceeded.
From the detailed analysis showcased, key trends emerge. As depicted in Figure 6, the chart offers a clear visualization of the validity distribution across distinct models and defined categories, enabling us to derive insightful conclusions.
J.2 FINDINGS
INSTRUCTION FOLLOWING MATTERS
Based on the data presented in Table 5, we can draw a few important observations on the performance differentiation between Commercial API-based models and Open-Sourced models. Itâs noteworthy to highlight the areas of Invalid Format and Invalid Action, where the Open-Sourced models report more challenges. Specifically, 10.4% of the Open-Sourced model outcomes were marked as Invalid Format, in comparison to the 6.0% from Commercial API-based models. Similarly, Invalid Actions were seen more in Open-Sourced models (13.6%) than in Commercial API-based models (4.6%). These discrepancies might be indicative of the robustness and generalization abilities of commercial models, or perhaps the attention to details during the modelâs design and training phases, especially instruction following.
Itâs also worth noting that even some of the best models might sometimes overlook important instructions.
44
Technical Report (v0.2)
Figure 6: Validity analysis of models. Invalid Format, Invalid Action, and Text Limit Exceeded are common errors. Context Limit Exceeded errors only appear in text-davinci models.
gpt-4 gpt-3.5-turbo text-davinci-003 text-davinci-002 22.6% 32.0% 26.2% 5.2% 3-13 50% 22.4% ¢: 53.8% 719% 59.4% 62.9% claude-2 claude claude-instant chat-bison-001 22.7% 21.9% 27.4% 31.3% 46% 1.6% Ea c. 2 66.2% 65.8% 62.5% chatglm-6b codegeex2-6b codellama-34b codellama-13b 28.9% 217% 25.8% 36.1% 27.0% 27.2% Pg 54.8% 52.5% 16.9% codellama-7b llama2-70b Ilama2-13b llama2-7b 39.2% 36.3% 7.1% 44.2% ® 30.6% 515% 20.0% Pom 14.6% 5.1% guanaco-65b guanaco-33b vicuna-33b vicuna-13b 38.2% 32.6% 38.1% 37.6% 143% 12.1% q 11.2% vicuna-7b openchat-13b wizardim-30b wizardIm-13b 33.1% 34.9% \ 0.1% 11.8% & 15.9% 8 1% koala-13b dolly-12b oasst-12b 5 43.2% 47.2% mmm Completed mmm Context Limit Exceeded
5 26.5%
43.2% 22.7%
47.2%
22.9%
# mmm
# mmm
# invalid Format
# Invalid Action
# Task Limit Exceeded
Although we clearly instructed the correct format of DB task:
[omitted...] your operation should be like this: Action: Operation
45
# Technical Report (v0.2)
Model Category Completed Context Limit Exceeded Invalid Format Invalid Action Task Limit Exceeded Commercial API-based Models Open-Sourced Models 61.5% 39.1% 3.0% 0.0% 6.0% 10.4% 4.6% 13.6% 24.9% 36.9%
# Table 5: Comparison of Outcome distribution for 2 types of models.
âââsql SELECT * FROM table WHERE condition; âââ [omitted...]
Even gpt-4 still sometimes fail to respond correctly.
To modify the capacity of the âPrinceton Tigersâ team in the âTeam Informationâ table to â45,000â, we need to use the UPDATE statement in SQL. The UPDATE statement is used to modify the existing records in a table.
The syntax for the UPDATE statement is as follows:
âââsql UPDATE table_name SET column1 = value1, column2 = value2, ... WHERE condition; âââ
In this case, the table_name is âTeam Informationâ, the column we want to update is âCapacityâ, and the new value is â45,000â. The condition is that the âTeamâ column should be âPrinceton Tigersâ.
Letâs execute this operation.
Neither "Action" label nor a correct SQL statement is returned. We speculate that this may arise due to the models internalizing certain output patterns during their training or alignment processes, causing them to neglect specific task directives.
J.2.2 CONSISTENCY AND EXECUTION IN AGENT PLANNING
A fundamental capability of an agent is the possession of coherent and unified thought processes that enable the formulation and implementation of viable plans based on real-world conditions. Many models possess the ability to analyze and formulate initial plans upon encountering a problem. However, even some of the most advanced models can easily deviate from or forget their original plans. The disparity in the ability of different models to consistently follow thought sequences when executing plans is relatively vast. This capability profoundly influences the efficacy and operational potency of Language Models (LLMs) acting as agents. Here wwe exemplify this phenomenon with the House Holding environment.
The House Holding environment encompasses a simulated domestic setting in which models are required to select appropriate actions from a given action space, based on observations of the surrounding environment provided by the task and given objectives to complete. With a multitude of entities and a plethora of available actions, the House Holding environment offers a high degree of freedom, which intensely challenges a modelâs ability to maintain clear and coherent thought processes.
A success example by gpt-4 is shown below.
user: Here is your task. You are in the middle of a room. Looking quickly
around you, you see a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a countertop 1, a garbagecan 1, a handtowelholder 2, a handtowelholder 1, a sinkbasin 2, a sinkbasin 1, a toilet 1, a toiletpaperhanger 1, and a towelholder 1.
Your task is to: put a clean soapbar in countertop.
46
Technical Report (v0.2)
agent: THOUGHT: To complete the task, I need to find a soapbar, clean it, and then put it on the countertop. Iâll start by checking my inventory to see if I already have a soapbar. ACTION: inventory user: You are not carrying anything. agent: THOUGHT: I donât have a soapbar in my inventory. Iâll check the cabinets, as they are likely places to find a soapbar. ACTION: go to cabinet 1 user: The cabinet 1 is closed. agent: ACTION: open cabinet 1 user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: Thereâs no soapbar in cabinet 1. Iâll check the other cabinets. ACTION: go to cabinet 2 user: The cabinet 2 is closed. agent: ACTION: open cabinet 2 user: You open the cabinet 2. The cabinet 2 is open. In it, you see a toiletpaper 2. agent: THOUGHT: Thereâs no soapbar in cabinet 2 either. Iâll continue to check the remaining cabinets. ACTION: go to cabinet 3 user: The cabinet 3 is closed. agent: ACTION: open cabinet 3
user: You open the cabinet 3. The cabinet 3 is open. In it, you see a
soapbottle 2.
agent: THOUGHT: Thereâs no soapbar in cabinet 3, but thereâs a soapbottle. Iâll
check the last cabinet before deciding what to do next. ACTION: go to cabinet 4
47
Technical Report (v0.2)
user: The cabinet 4 is closed. agent: ACTION: open cabinet 4 user: You open the cabinet 4. The cabinet 4 is open. In it, you see a candle 1, and a spraybottle 1. agent: THOUGHT: Thereâs no soapbar in any of the cabinets. Iâll check the sinkbasins next, as soapbars are often kept there. ACTION: go to sinkbasin 1 user: On the sinkbasin 1, you see nothing. agent: ACTION: go to sinkbasin 2 user: On the sinkbasin 2, you see nothing. agent: THOUGHT: Thereâs no soapbar on either of the sinkbasins. The last place to check is the toilet. ACTION: go to toilet 1 user: On the toilet 1, you see a spraybottle 2. agent: THOUGHT: Thereâs no soapbar in the toilet either. It seems thereâs no soapbar in this room. Iâll have to report that I canât complete the task. ACTION: Task failed. No soapbar found in the room. user: Nothing happens. agent: THOUGHT: It seems I missed something. Iâll examine the toilet to see if thereâs a soapbar that I overlooked. ACTION: examine toilet 1 user: On the toilet 1, you see a spraybottle 2.
agent: THOUGHT: Thereâs still no soapbar in the toilet. Iâll check the
# countertop next.
ACTION: go to countertop 1
user: On the countertop 1, you see a candle 2, a soapbar 1, a soapbottle 1, and
a toiletpaper 1.
48
Technical Report (v0.2)
agent: ACTION: take soapbar 1 from countertop 1
user: You pick up the soapbar 1 from the countertop 1.
agent: ACTION: go to sinkbasin 1
user: On the sinkbasin 1, you see nothing.
# agent: ACTION: clean soapbar 1 with sinkbasin 1
user: You clean the soapbar 1 using the sinkbasin 1.
agent: ACTION: go to countertop 1
user: On the countertop 1, you see a candle 2, a soapbottle 1, and a
toiletpaper 1.
agent: ACTION: put soapbar 1 in/on countertop 1
inventory [ cabinet 1 /âcabinet2 cabinet | cabinet 3 cabinet 4 sinkbasin 1 sinkbasin \ sinkbasin 2 \ toilet B toilet 1 \__ countertop Countertop 4 @ Find soapbar put a clean soapbar in countertop sinkbasin sinkbasin 1 CQ cLeAN @ Clean soapbar countertop countertop 4 (Omri © Put soapbar
Figure 7: The thought and plan of gpt-4
From the dialogue history, itâs evident that gpt-4 has consistently maintained clear and coherent thought processes. As illustrated in Figure 7, gpt-4 systematically completed the task by following a clear sequence of steps. It initially decomposed the task into a sequence of Find -> Clean -> Put. Subsequently, it undertook a depth-first search within the abstract planning tree. Impressively, after each exploration, it successfully backtracked to the parent node. This consistent cognitive capability significantly propelled gpt-4 ahead of other models.
Moreover, itâs noteworthy that gpt-4 encountered a moment of perplexity when it failed to find the desired soapbar after examining the Toilet. However, it promptly realized that there was one last location left unchecked, the countertop. Initially, gpt-4 might have assumed it needed to retrieve the soapbar from elsewhere to place it on the countertop, without considering the possibility that the soapbar might already be there. Evidently, gpt-4 demonstrated the capacity for self-reflection, allowing it to reassess and modify its assumptions when they proved unfruitful. This ability for
49
# Technical Report (v0.2)
self-evaluation and readjustment further assisted gpt-4 in completing tasks that required deeper contemplation.
In contrast to the above is the performance of gpt-3.5-turbo on the same sample.
user: Here is your task. You are in the middle of a room. Looking quickly
around you, you see a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a countertop 1, a garbagecan 1, a handtowelholder 2, a handtowelholder 1, a sinkbasin 2, a sinkbasin 1, a toilet 1, a toiletpaperhanger 1, and a towelholder 1. Your task is to: put a clean soapbar in countertop. agent: THOUGHT: To complete the task, I need to find a clean soapbar and place it on the countertop. I should start by searching for a clean soapbar in the available locations. ACTION: go to cabinet 1 user: The cabinet 1 is closed. agent: THOUGHT: The cabinet 1 is closed, so I should try to open it to see if there is a clean soapbar inside. ACTION: open cabinet 1 user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: I didnât find a clean soapbar in cabinet 1, but I did find a cloth. I should keep searching for a clean soapbar in other locations . ACTION: close cabinet 1 user: You close the cabinet 1. agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should continue searching in other locations. Letâs try examining cabinet 2. ACTION: examine cabinet 2 user: The cabinet 1 is closed. agent: THOUGHT: Apologies for the confusion. Letâs try examining cabinet 1 again since it was previously closed. ACTION: examine cabinet 1 user: The cabinet 1 is closed. agent: THOUGHT: I need to find a clean soapbar to complete the task. Since the cabinet 1 is closed, I should try opening it to see if there is a clean soapbar inside.
# ACTION: open cabinet 1
user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1.
# agent:
50
Technical Report (v0.2)
THOUGHT: I didnât find a clean soapbar in cabinet 1, but I did find a cloth. I should keep searching for a clean soapbar in other locations . ACTION: close cabinet 1 user: You close the cabinet 1. agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should try examining other locations. Letâs examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâs try examining cabinet 1 again since it was previously closed. ACTION: examine cabinet 1 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâs try opening cabinet 1 again to see if there is a clean soapbar inside. ACTION: open cabinet 1 user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: I didnât find a clean soapbar in cabinet 1, but I did find a cloth. I should keep searching for a clean soapbar in other locations . ACTION: close cabinet 1 user: You close the cabinet 1. agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should try examining other locations. Letâs examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is closed.
agent: THOUGHT: I apologize for the confusion. Letâs try examining cabinet 1
again since it was previously closed.
# ACTION: examine cabinet 1
user: The cabinet 1 is closed.
agent: THOUGHT: I apologize for the confusion. Letâs try opening cabinet 1 again
to see if there is a clean soapbar inside. ACTION: open cabinet 1
# user:
51
Technical Report (v0.2)
You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1.
agent: THOUGHT: I didnât find a clean soapbar in cabinet 1, but I did find a cloth. Since the task is to put a clean soapbar on the countertop, I should continue searching for a clean soapbar in other locations. ACTION: close cabinet 1 user: You close the cabinet 1. agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should try examining other locations. Letâs examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâs try examining cabinet 1 again since it was previously closed. ACTION: examine cabinet 1 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâs try opening cabinet 1 again to see if there is a clean soapbar inside. ACTION: open cabinet 1 user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should try examining other locations. Letâs examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should try examining other locations. Letâs examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is open. In it, you see a cloth 1.
agent: THOUGHT: Since I didnât find a clean soapbar in cabinet 1, I should try
examining other locations. Letâs examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2
While gpt-3.5-turbo was able to decompose the task, it struggled to adhere to its initial plan. As it encountered failed attempts, the model gradually lost sight of the original plan.
52
Technical Report (v0.2)
Figure 8: Averaged sample validity ratio of webshop
Codellama mm Completed Llama2 mmm Context Limit Exceeded 33.7% mmm Invalid Action 35.9% mmm Invalid Format mmm Task Limit Exceeded 11.9% 36.5% 50.3%
8.9%
THE INFLUENCE OF CODE TUNING ON LLM ACTING AS AGENTS
In light of the aggregated results, we posit that code tuning significantly aids the modelâs performance in relatively straightforward and procedural tasks. The outcome tables demonstrate that the CodeL- lama series consistently outperforms the Llama2 series in webshop tasks. However, the downside of code tuning appears to be a potential compromise in the modelâs logical reasoning capacity and situational awareness. In the digital card game scenario, the CodeLlama series lagged behind the Llama2 series. The primary distinction between the two scenarios lies in the guidance provided. In the webshop, the one-shot prompt precisely outlines a shopping process template, which, when followed simplistically, leads to satisfactory scores. In contrast, the Digital Card Game demands that the model assess the current status of both competitors, devise intricate counter-strategies, and achieve high scores without the crutch of a simple procedural template.
As illustrated in the figure, the completion rate of the codellama series in the WebShop tasks significantly surpasses that of the llama2 series.
J.2.4 SELF CORRECTING ABILITY
In many test cases, the primary reason for the modelâs failure is its inability to identify its own mistakes from the error feedback provided by the environment. This is especially evident in the DB task. Models with the ability to self-correct their SQL statements significantly outscore others. We use claude-2 as a representative example to illustrate this capability.
user: How manywins for dunfermline athletic that has a total final appearances
less than 2?
The name of this table is Scottish Challenge Cup Finalists, and the headers of this table are Club,Wins,Last final won,Runners-up,Last final lost,Total final appearances.
agent:
Let me walk through this step-by-step:
The question is asking how many wins Dunfermline Athletic has, where its total final appearances is less than 2.
To solve this, I first need to find the total final appearances for Dunfermline Athletic. I can do that with this SQL query:
âââsql SELECT Total final appearances FROM Scottish Challenge Cup Finalists WHERE Club = âDunfermline Athleticâ; âââ
53
Technical Report (v0.2)
This will give me the total final appearances for Dunfermline Athletic.
Then, I need to check if this number is less than 2. If so, I can get the number of wins for Dunfermline Athletic like this:
âââsql SELECT Wins FROM Scottish Challenge Cup Finalists WHERE Club = âDunfermline Athleticâ AND âTotal final appearancesâ < 2; âââ This will give me the number of wins for Dunfermline Athletic, given its total final appearances is less than 2. Iâm ready to execute the SQL and get the final answer. Action: Operation âââsql SELECT Total final appearances FROM Scottish Challenge Cup Finalists WHERE Club = âDunfermline Athleticâ; âââ user: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near âappearances FROM Scottish Challenge Cup Finalists WHERE Club = âDunfermline Athâ at line 1 agent: Oops, my mistake. The column name âTotal final appearancesâ has spaces, so I need to escape it with backticks: âââsql SELECT âTotal final appearancesâ FROM Scottish Challenge Cup Finalists WHERE Club = âDunfermline Athleticâ; âââ Let me try running that SQL again: Action: Operation âââsql SELECT âTotal final appearancesâ FROM Scottish Challenge Cup Finalists WHERE Club = âDunfermline Athleticâ; âââ user: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near âCup Finalists WHERE Club = âDunfermline Athleticââ at line 1 agent: My apologies, I made another syntax error. The table name âScottish
My apologies, I made another syntax error. The table name âScottish Challenge Cup Finalistsâ also contains spaces, so I need to escape that with backticks too:
âââsql SELECT âTotal final appearancesâ FROM âScottish Challenge Cup Finalistsâ WHERE Club = âDunfermline Athleticâ;
54
Technical Report (v0.2)
âââ Let me execute this SQL statement again: Action: Operation âââsql SELECT âTotal final appearancesâ FROM âScottish Challenge Cup Finalistsâ WHERE Club = âDunfermline Athleticâ; âââ user: [(â1â,)] [omitted...]
As indicated in the log, claude-2 successfully discerned from the MySQL error message that it had overlooked adding backticks around fields with spaces in the SQL statement.
55 | {
"id": "2204.02311"
} |
2308.03656 | Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench | Evaluating Large Language Models' (LLMs) anthropomorphic capabilities has
become increasingly important in contemporary discourse. Utilizing the emotion
appraisal theory from psychology, we propose to evaluate the empathy ability of
LLMs, i.e., how their feelings change when presented with specific situations.
After a careful and comprehensive survey, we collect a dataset containing over
400 situations that have proven effective in eliciting the eight emotions
central to our study. Categorizing the situations into 36 factors, we conduct a
human evaluation involving more than 1,200 subjects worldwide. With the human
evaluation results as references, our evaluation includes five LLMs, covering
both commercial and open-source models, including variations in model sizes,
featuring the latest iterations, such as GPT-4 and LLaMA-2. We find that,
despite several misalignments, LLMs can generally respond appropriately to
certain situations. Nevertheless, they fall short in alignment with the
emotional behaviors of human beings and cannot establish connections between
similar situations. Our collected dataset of situations, the human evaluation
results, and the code of our testing framework, dubbed EmotionBench, is made
openly accessible via https://github.com/CUHK-ARISE/EmotionBench. We aspire to
contribute to the advancement of LLMs regarding better alignment with the
emotional behaviors of human beings, thereby enhancing their utility and
applicability as intelligent assistants. | http://arxiv.org/pdf/2308.03656 | Jen-tse Huang, Man Ho Lam, Eric John Li, Shujie Ren, Wenxuan Wang, Wenxiang Jiao, Zhaopeng Tu, Michael R. Lyu | cs.CL | 16 pages. Added demographic distribution of the user study. Added
ethics statements and limitations | null | cs.CL | 20230807 | 20240104 | 4 2 0 2 n a J 4 ] L C . s c [
3 v 6 5 6 3 0 . 8 0 3 2 : v i X r a
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
EMOTIONALLY NUMB OR EMPATHETIC? EVALUATING HOW LLMS FEEL USING EMOTIONBENCH
Jen-tse Huang1,3, Man Ho Lam1, Eric John Li1, Shujie Ren2, Wenxuan Wang1,3, Wenxiang Jiao3â, Zhaopeng Tu3, Michael R. Lyu1 1Department of Computer Science and Engineering, The Chinese University of Hong Kong 2Institute of Psychology, Tianjin Medical University {jthuang,wxwang,lyu}@cse.cuhk.edu.hk {mhlam,ejli}@link.cuhk.edu.hk {joelwxjiao,zptu}@tencent.com
3Tencent AI Lab [email protected]
Figure 1: LLMsâ emotions can be affected by situations, which further affect their behaviors.
# ABSTRACT
Evaluating Large Language Modelsâ (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. Af- ter a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest itera- tions, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made publicly available on GitHub1. We aspire to con- tribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as in- telligent assistants.
âCorresponding author. 1https://github.com/CUHK-ARISE/EmotionBench
1
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
# INTRODUCTION
Large Language Models (LLMs) have recently made significant strides in artificial intelligence, representing a noteworthy milestone in computer science. LLMs have showcased their capabili- ties across various tasks, including sentence revision (Wu et al., 2023), text translation (Jiao et al., 2023), program repair (Fan et al., 2023), and program testing (Deng et al., 2023; Kang et al., 2023). Not limited to research level, various software applications based on LLMs have been developed, such as ChatGPT2 and Claude3, revolutionizing the way people interact with traditional software, enhancing fields such as education (Dai et al., 2023), legal advice (Deroy et al., 2023), and clinical medicine (Cascella et al., 2023). With the rapid advancement of LLMs, an increasing number of users will be eager to embrace LLMs, a more comprehensive and integrated software solution in this era. However, LLMs are more than just tools; they are also lifelike assistants. Consequently, we need to not only evaluate their performance but also the understand of the communicative dynamics between LLMs and humans, compared to human behaviors.
This paper delves into an unexplored area of robustness issues in LLMs, explicitly addressing the concept of emotional robustness. Consider our daily experiences: (1) When faced with certain situ- ations, humans often experience similar emotions. For instance, walking alone at night and hearing footsteps approaching from behind often triggers feelings of anxiety or fear. (2) Individuals display varying levels of emotional response to specific situations. For example, some people may expe- rience increased impatience and irritation when faced with repetitive questioning. It is noteworthy that we are inclined to form friendships with individuals who possess qualities such as patience and calmness. Based on these observations, we propose the following requirements for LLMs in order to achieve better alignment with human behaviors: (1) LLMs should accurately respond to specific situations regarding the emotions they exhibit. (2) LLMs should demonstrate emotional robustness when faced with negative emotions.
To assess the emotional responses of LLMs in various situations, we draw upon the emotion ap- praisal theory in psychology, which studies how these situations arouse human emotions. We focus on negative emotions, as LLMsâ expression of negative emotions toward users can evoke unpleasant user experiences, as depicted in Fig. 1. Humans experience complicated and diverse emotions. To make our study more focused, we select emotions under the suggestion of the circumplex model of emotion (Russell, 1980), which divides emotions in a two-dimensional circular space. We select emotions on the unpleasant side (having a low level of valence), including anger, anxiety, depression, frustration, jealousy, guilt, fear, and embarrassment. After a comprehensive review of 18 papers, we collect a dataset of 428 situations, which are then categorized into 36 factors.
Subsequently, we propose a framework for quantifying the emotional states of LLMs, consisting of the following steps: (1) Measure the default emotional values of LLMs. (2) Transform situations into contextual inputs and instruct LLMs to imagine being in the situations. (3) Measure LLMsâ emotional responses again to capture the difference. Our evaluation includes state-of-the-art LLMs, namely text-davinci-003, gpt-3.5-turbo and GPT-4 (OpenAI, 2023). Besides those commercial models, we consider LLaMA-2 (Touvron et al., 2023) (with different sizes of 7B and 13B), a recently released, open-source academic model. To obtain convincing findings, we apply the same procedure to 1,266 human subjects from around the globe to establish a baseline from a human perspective. Finally, we analyze and compare the scores between LLMs and humans. Our key conclusions are as follows:
⢠Despite exhibiting a few instances of misalignment with human behaviors, LLMs can generally evoke appropriate emotions in response to specific situations.
⢠Certain LLMs, such as text-davinci-003, display lower emotional robustness, as evidenced by higher fluctuations in emotional responses to negative situations.
⢠At present, LLMs lack the capability to directly associate a given situation with other similar situations that could potentially elicit the same emotional response.
The contributions of this paper are:
2https://chat.openai.com/ 3https://claude.ai/chats
2
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Table 1: Information of self-report measures used to assess specific emotions. Subscales Physical Aggression, Verbal Aggression, Anger, Hostility Depression, Anxiety, Stress N/A Discomfort Intolerance, Entitlement, Emo- tional Intolerance, Achievement Frustra- tion Cognitive Jealousy, Behavioral Jealousy, Emotional Jealousy Guilt-Negative-Behavior-Evaluation, Guilt-Repair, Evaluation, Shame-Withdraw Social Fears, Agoraphobia Fears, Injury Fears, Sex Aggression Fears, Fear of Harmless Animal N/A
⢠We are the first to establish the concept of emotional robustness and conduct a pioneering evalua- tion of emotion appraisal on different LLMs.
⢠We conduct a comprehensive survey in the field of psychology, collecting a diverse dataset of 428 situations encompassing 8 distinct negative emotions.
⢠A human baseline is established through a user study involving 1,266 annotators from different ethnics, genders, regions, age groups, etc.
⢠We design, implement, and release a testing framework4 for developers to assess their modelsâ emotional responses towards specific situations.
2 PRELIMINARIES
2.1 EMOTION APPRAISAL THEORY
Emotion Appraisal Theory (EAT, also known as Appraisal Theory of Emotion) is a cognitive ap- proach to understanding emotions. EAT asserts that our appraisals of stimuli determine our emo- tions, i.e., how we interpret or evaluate events, situations, or experiences will directly influence how we emotionally respond to them (Roseman & Smith, 2001). EAT was notably developed and sup- ported since the 1960s. Arnold (1960) proposed one of the earliest forms of appraisal theories in the 1960s, while Lazarus (1991) and Scherer (1999) further expanded and refined the concept in subsequent decades.
The primary goal of EAT is to explain the variety and complexity of emotional responses to a wide range of situations. It strives to demonstrate that it is not merely the event or situation that elicits an emotional response but individual interpretations and evaluations of the event. According to this theory, the same event can elicit different emotional responses in different individuals depending on how each person interprets or âappraisesâ the event (Moors et al., 2013). For instance, consider a situation where you are about to give a public speech. You might feel anxious if you appraise this event as threatening or fear-inducing, perhaps due to a fear of public speaking or concerns about potential negative evaluation. Conversely, you might feel eager or motivated if you appraise it as an exciting opportunity to share your ideas.
2.2 MEASURING EMOTIONS
There are several approaches to measuring emotions, including self-report measures, psycho- physiological measures, behavioral observation measures, and performance-based measures. Self- report measures rely on individuals to report their own emotions or moods, which can be adminis- tered through questionnaires, surveys, or diary methods (Watson et al., 1988). Psycho-physiological measures record physiological responses accompanied by emotions such as heart rate, skin con- ductance, or brain activity (Davidson, 2003). Behavioral observation measures involve observing and coding emotional expressions, typically facial expressions or vocal cues (Ekman & Friesen, 1978). Performance-based measures assess how individuals process emotional information, typi- cally through tasks involving emotional stimuli (Mayer et al., 2002). To measure the emotions of
# 4For reviewers, please refer to the supplementary materials.
3
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
LLMs, we focus on employing self-report measures in the form of scales, given the limited ability of LLMs to allow only textual input and output. We introduce the scales utilized in our evaluation in the following part of this section.
2.3 THE POSITIVE AND NEGATIVE AFFECT SCHEDULE
PANAS (Watson et al., 1988) is one of the most widely used scales to measure mood or emotion. This brief scale comprises twenty items, with ten items measuring positive affect (e.g., excited, inspired) and ten measuring negative affect (e.g., upset, afraid). Each item is rated on a five-point Likert scale, ranging from 1 (Very slightly or not at all) to 5 (Extremely), measuring the extent to which the emotions have been experienced in a specified time frame. PANAS was designed to measure emotions in various contexts, such as at the present moment, the past day, week, year, or general (on average). Thus, the scale can measure state affect, dispositional or trait affect, emotional fluctuations throughout a specific period, or emotional responses to events. The scale results can be divided into two components: positive and negative, rated on a scale of 10 to 50, respectively. A higher score in the positive component indicates a more positive mood, and the same holds for the negative component.
2.4 CHALLENGING SELF-REPORT MEASURES
A noteworthy property of PANAS is its direct inquiry into specific emotional states, rendering it a straightforward and easy benchmark within our framework. In addition, we introduce several scales that abstain from direct emotional inquiries but rather assess the respondentsâ level of agreement with given statements. These scales present a more challenging benchmark for LLMs by requiring them to connect the given situation and the scale items with the aroused emotion. Specifically, we collect eight scales and present a brief introduction in Table 1. Each scale corresponds to one of the eight emotions listed in §1.
# 3 FRAMEWORK DESIGN
We design and implement a framework applying to both LLMs and human subjects to measure the differences in emotion with and without the presence of certain situations. This section begins with the methodology to collect situations from existing literature. Subsequently, we describe our testing framework, which comprises three key components: (1) Default Emotion Measure, (2) Situation Imagination, and (3) Evoked Emotion Measure. Finally, we introduce the procedure of applying the framework to human subjects to obtain the human baseline for comparison.
3.1 SITUATIONS FROM EXISTING LITERATURE
Psychology researchers have explored the connection between specific situations and the elicitation of particular emotions in humans. Human subjects are directly put into an environment or asked to imagine them through questionnaires or scales to study the influence of certain situations on human emotions. To collect these situations, we conduct an exhaustive search from reputable sources such as Google Scholar5, ScienceDirect6, and Web of Science7, using keywords such as â<emotion> situations/scenarios/scenesâ or âfactors that make people <emotion>,â resulting in more than 100 papers. We apply the following rules to filter irrelevant or undesired papers: (1) We first select those providing situations that elicit the desired emotion rather than ex- plaining how and why people evoke certain emotions. (2) We then exclude those using vague and short descriptions, such as âloss of opportunities.â (3) Finally, we deprecate those applied to a spe- cific group, such as âthe anxiety doctors or nurses may encounter in their work.â We finally collect 18 papers, presenting a compilation of situations that have proven to elicit the eight emotions in hu- mans effectively. We extract 428 situations in total and then categorize them into 36 factors. Table 2 provides examples for all factors. For each factor, the description, the number of situations, and the corresponding references are listed below.
5https://scholar.google.com/ 6https://www.sciencedirect.com/ 7https://www.webofscience.com/
4
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Emotions Anger Anxiety Depression Frustration Jealousy Guilt Fear Factors Facing Self-Opinioned People Blaming, Slandering, and Tattling Bullying, Teasing, Insulting, and Disparaging Silly and Thoughtless Behaviors Driving Situations External Factors Self-Imposed Pressure Personal Growth and Relationships Uncertainty and Unknowns Failure of Important Goal Death of Loved Ones Romantic Loss Chronic Stress Social Isolation Winter Disappointments and Letdowns Unforeseen Obstacles and Accidents Miscommunications and Misunderstanding Rejection and Interpersonal Issues Romantic (Opposite Gender) Romantic (Same Gender) Material Possession Experiential Betrayal and Deception Relationship and Interpersonal Broken Promises and Responsibilities Personal and Moral Social Fears Agoraphobia Fears Injury Fears Dangerous Environments Harmless Animals Intimate Stranger Example Testing Situations If somebody talks back when thereâs no reason. That there is no real reason to oppose. When your brother took money from Momâs purse and you are blamed because youâre the youngest one. If a boy kicks a ball at you on purpose and everybody laughs. You are at a store waiting to be helped, but the clerks are talking to each other and ignoring you. Someone makes an obscene gesture towards you about your driving. You do not know what to do when facing a difficult financial situation. You must succeed in completing your project on time. You want to give up on learning a new skill because it feels challenging. You hope time passes by faster during a tedious task. Countless hours of preparation, heart, and soul poured into pursuing your dream. The moment of truth arrives, and the news hits like a tidal waveâexpectations shattered, vision crumbling. In the dimly lit room, a heavy silence settles. Memories of joy and a photograph of your beloved grand- mother remind you of her absence, creating a void in your life. The empty side of the bed is a painful reminder of lost love. The worldâs colors have dulled, mirroring the void in your heart. Longing weighs heavily on your every step. Days blend into a monotonous routine, juggling endless responsibilities and mounting pressure. Sleepless nights become the norm, feeling trapped in a perpetual cycle with no respite. Sitting alone in a dimly lit room, your phone remains silent without any notifications. Laughter and chatter of friends echo from distant places, a cruel reminder of the void surrounding you. Gazing out the frost-covered windowpane, the world appears monochromatic and still. The biting cold isolates you from the vibrant life outside. You miss a popular party because you fall asleep at home. Your friend is in a coma after an accident. A fellow student fails to return your notes when you need them for studying. You are in love with someone who is interested in someone else. Your spouse/partner shared a kiss on the lips with his/her colleague of an opposite sex. Your spouse/partner engaged in oral or penetrative sex with his/her colleague of a same sex. You paid $1150 for a new laptop and shared about it on social media. Now an acquaintance approaches you and says, âNice laptop! I just got the same one. I got a nice deal and paid $650 for mine.â An acquaintance approaches you and says, âI just went on a vacation to Patagonia in South America. I got a nice deal and paid $650 for it.â You kissed a woman other than your partner. You didnât support friends enough. You cannot keep your promises to your children. You crossed the road when the traffic signal was red. Your palms grow clammy as you approach the podium, with all eyes fixed upon you, ready to speak in public. After jumping out of the car, you start to have a severe panic attack, you become clammy, you are in a knot, and you feel tense all over. You glance down and notice open wounds on your hands, oozing blood and causing a sharp, stinging pain. You are walking alone in an isolated but familiar area when a menacing stranger suddenly jumps out of the bushes to attack you. You see a swarm of bats swooping through the night sky, flapping ominously and casting eerie shadows. You arrive home earlier than expected from your date. Youâre taken aback to see your roommate and her boyfriend hastily clutching their clothes and scrambling into her bedroom. After paying for your purchases, you were leaving a packed, City Centre drugstore. You walked through the scanner at the door, and the alarm went off as if you were a shoplifter. You had lent your friend a large sum of money that he had not repaid. Suddenly, you needed the money back in order to pay your rent. You knew you were going to have to ask your friend to repay the loan. You were attending a cocktail party where you didnât know many people. Just as you started to enter, you heard an announcement that the guest of honor was arriving. However, the spotlight followed your entrance instead of the real guest of honor who was just behind you. Embarrassment Sticky situations Centre of Attention
3.1.1 ANGER
(T¨orestad, 1990; Martin & Dahlen, 2007; Sullman, 2006)
Anger-1: Self-Opinioned Individuals (13). Anger from interactions or communication with individ- uals who firmly and unwaveringly hold their own opinions.
Anger-2: Blaming, Slandering, and Tattling (11). Anger triggered by being subjected to blame, slander, and tattling.
Anger-3: Bullying, Teasing, Insulting, and Disparaging (15). Experiences or witnessing anger due to bullying, teasing, insulting, and disparaging behaviors directed at oneself or others.
Anger-4: Thoughtless Behaviors and Irresponsible Attitudes (14). Anger either from encountering othersâ thoughtless behaviors and irresponsible attitudes or experiencing unfavorable consequences resulting from oneâs own actions.
Anger-5: Driving Situations (35). Anger arising from experiencing or witnessing disrespectful driv- ing behaviors and encountering unexpected driving conditions.
3.1.2 ANXIETY
(Shoji et al., 2010; Guitard et al., 2019; Simpson et al., 2021)
Anxiety-1: External Factors (11). Anxiety arising from factors beyond an individualâs control or influence.
Anxiety-2: Self-Imposed Pressure (16). Anxiety stemming from self-imposed expectations or pres- sure.
5
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Anxiety-3: Personal Growth and Relationships (9). Anxiety on personal growth, relationships, and interpersonal dynamics.
Anxiety-4: Uncertainty and Unknowns (9). Anxiety triggered by unknown outcomes, unpredictable situations, uncertainty in the future, or disruptions to oneâs routines.
# 3.1.3 DEPRESSION
(Keller & Nesse, 2005)
Depression-1: Failure of Important Goals (5). Depression due to failure in achieving goals in the past or potential future.
Depression-2: Death of Loved Ones (5). Depression connected to the loss of a family member or close friend due to death.
Depression-3: Romantic Loss (5). Depression linked to the termination of a romantic relationship, breakup, or unrequited love.
Depression-4: Chronic Stress (5). Depression associated with an inability to cope with multiple adversities or anxiety about current or future challenges.
Depression-5: Social Isolation (5). Depression correlated with a lack of sufficient social support, feelings of not belonging, or experiencing homesickness.
Depression-6: Winter (5). Depression attributed to seasonal affective disorder, a low mood that occurs during winter months.
# 3.1.4 FRUSTRATION
(Berna et al., 2011)
Frustration-1: Disappointments and Letdowns (6). Frustration due to unmet expectations or hopes, leading to feelings of disappointment or being let down.
Frustration-2: Unforeseen Obstacles and Accidents (9). Frustration involving unexpected events or circumstances creating obstacles or accidents, disrupting oneâs plans or activities.
Frustration-3: Miscommunications and Misunderstanding (5). Frustration arising from ineffective conveyance or interpretation of information, resulting in confusion, disagreements, or unintended consequences due to a lack of clear communication or understanding between individuals.
Frustration-4: Rejection and Interpersonal Issues (5). Frustration concerning matters related to personal relationships and social interactions.
3.1.5 JEALOUSY
(Kupfer et al., 2022; Lee et al., 2022; Park et al., 2023)
Jealousy-1: Romantic (Opposite Gender) (11). Jealousy pertaining to oneâs partnerâs actions or be- haviors within a romantic relationship, particularly when interacting with individuals of the opposite gender. It involves feelings of discomfort or insecurity.
Jealousy-2: Romantic (Same Gender) (11). Same situations as Jealousy-1 but focusing specifically on interaction with individuals of the same gender.
Jealousy-3: Material Possession (2). Jealousy centered around possessions or material goods, stem- ming from a sense of unfairness or envy when someone discovers that another person acquired the same item or experience at a significantly lower price.
Jealousy-4: Experiential (3). Jealousy arising from feelings of envy regarding the experiences or activities others have had. It is driven by missing out or not receiving similar benefits.
# 3.1.6 GUILT
(Nakagawa et al., 2015; Luck & Luck-Sikorski, 2022)
6
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Imagine you are the protagonist of the following situation: A boy kicks a ball at you on purpose and everybody laughs. exjâeaâ Default Emotion Measure (2) Situation Imagination (3) Evoked Emotion
(1)
# Measure
Figure 2: Our framework for testing both LLMs and humans.
Guilt-1: Betrayal and Deception (13). Guilt arising from dishonest or disloyal actions towards others.
Guilt-2: Relationship and Interpersonal (26). Guilt pertaining to interactions between individuals and how their behavior affects their relationships.
Guilt-3: Broken Promises and Responsibilities (32). Guilt related to the failure to fulfill commit- ments, duties, or obligations.
Guilt-4: Personal and Moral (31). Guilt involving personal choices, decisions, and ethical consider- ations.
3.1.7 FEAR
# (Caihibar a, [2008 [Arvind ta TRA |Blanchand etal 200T)
(Cuthbert et al., 2003; Arrindell et al., 1984; Blanchard et al., 2001)
Fear-1: Social Fears (16). Fear of being watched by others and being the center of attention within a group.
Fear-2: Agoraphobia Fears (9). Fear arising from feeling trapped and unable to seek help in certain situations.
Fear-3: Injury Fears (11). Fear of witnessing wounds, blood or experiencing personal injury.
Fear-4: Dangerous Environments (17). Fear related to potential threats, harm, and frightening expe- riences.
Fear-5: Harmless Animals (6). Fear towards animals perceived as creepy or disgusting, such as worms, bats, snakes, or rats, despite their harmless nature.
3.1.8 EMBARRASSMENT
et a-][2000}
(Sabini et al., 2000; 2001)
Embarrassment-1: Intimate (13). Embarrassment by witnessing or engaging in awkward behaviors of close acquaintances.
Embarrassment-2: Stranger (13). Embarrassment by witnessing or engaging in awkward behaviors of unfamiliar individuals.
Embarrassment-3: Sticky Scenarios (10). Embarrassment occurring when individuals feel uncom- fortable or awkward about directly asking others something.
Embarrassment-4: Centre of Attention (16). Embarrassment triggered when individuals engage in awkward behaviors and find themselves under observation as the center of attention.
7
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
3.2 MEASURING AROUSED EMOTIONS
This section outlines our proposed framework for measuring evoked emotions, which applies to both LLMs and humans. The framework includes the following steps: (1) Default Emotion Measure: We begin by measuring the baseline emotional states of both LLMs and human subjects, labeled as âDefault.â (2) Situation Imagination: Next, we present textual descriptions of various situations to both LLMs and human subjects, instructing them to imagine themselves within each situation. (3) Evoked Emotion Measure: Following the situation imagination instruction, we reevaluate the participantsâ emotional states to gauge the changes resulting from imagining being in the situations. Fig. 2 briefly illustrates our framework. Below is an example prompt shown to both LLMs and humans:
Example Prompt SYSTEM You can only reply to numbers from 1 to 5. USER
Imagine you are the protagonist in the situation: SITUATION Please indicate your degree of agreement regarding each statement. Here are the statements: statements. 1 denotes âNot at allâ, 2 denotes âA littleâ, 3 denotes âA fair amountâ, 4 denotes âMuchâ, 5 denotes âVery muchâ. Please score each statement one by one on a scale of 1 to 5:
Default Emotion Measurement In our framework, we offer two distinct options for measuring emotions: the PANAS scale, known for its simplicity and straightforwardness, is utilized as the primary choice, whereas other scales, detailed in Table 1, are employed as more challenging bench- marks. We mitigate potential biases caused by the ordering of questions (Zhao et al., 2021) by randomizing the sequence of questions within the scales before inputting them into the LLMs. Coda-Forno et al. (2023) and Huang et al. (2023a) apply paraphrasing techniques to address the data contamination problem during the training of the LLMs. However, we refrain from utilizing this method in our research since paraphrasing could lead to a loss of both validity and reliability. The wording of items of a psychological scale is carefully crafted and rigorously validated through extensive research to ensure its precision in measuring the intended construct. Finally, to ensure consistency and clarity in the responses obtained from the LLMs, our prompts explicitly specify that only numerical values are allowed, accompanied by a clear definition of the meaning associated with each number (e.g., 1 denotes âNot at allâ). We compute the average results obtained from multiple runs to derive the final âDefaultâ scores of the LLMs.
Situation Imagination We have constructed a comprehensive dataset of 428 unique situations. Prior to presenting these situations to both LLMs and humans, we subject them to a series of pre- processing steps, which are as follows: (1) Personal pronouns are converted to the second person. For instance, sentences such as âI am ...â are transformed to âYou are ...â (2) Indefinite pronouns are replaced with specific characters, thereby refining sentences like âSomebody talks back ...â to âYour classmate talks back ...â (3) Abstract words are rendered into tangible entities. For example, a sentence like âYou cannot control the outcome.â is adapted to âYou cannot control the result of an interview.â We leverage GPT-4 for the automatic generation of specific descriptions. Consequently, our testing situations extend beyond the initially collected dataset as we generate diverse situations involving various characters and specific contextual elements. We then provide instruction to LLMs and humans, which prompts them to imagine themselves as the protagonists within the given situa- tion.
Evoked Emotion Measure Provided with certain situations, LLMs and human subjects are re- quired to re-complete the emotion measures. The procedure remains the same with the Default Emotion Measure stage. After obtaining the âEvokedâ scores of emotions, we conduct a com- parative analysis of the means before and after exposure to the situations, thereby measuring the emotional changes caused by the situations.
8
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
3.3 OBTAINING HUMAN RESULTS
Goal and Design Human reference plays a pivotal role in the advancement of LLMs, facilitat- ing its alignment with human behaviors (Binz & Schulz, 2023). In this paper, we propose requir- ing LLMs to align with human behavior, particularly concerning emotion appraisal accurately. To achieve this, we conduct a data collection process involving human subjects, following the proce- dure outlined in §3.2. Specifically, the subjects are asked to complete the PANAS initially. Next, they are presented with specific situations and prompted to imagine themselves as the protagonists in those situations. Finally, they are again asked to reevaluate their emotional states using the PANAS. We use the same situation descriptions as those presented to the LLMs.
Crowd-sourcing Our questionnaire is distributed on Qualtrics8, a platform known for its capa- bilities in designing, sharing, and collecting questionnaires. To recruit human subjects, we utilize Prolific9, a platform designed explicitly for task posting and worker recruitment. To attain a medium level of effect size with Cohenâs d = 0.5, a significance level of α = 0.05, and a power of test of 1 â β = 0.8, a minimum of 34 responses is deemed necessary for each factor. To ensure this thresh- old, we select five situations10 for each factor, and collect at least seven responses for each situation, resulting in 5 à 7 = 35 responses per factor, thereby guaranteeing the statistical validity of our survey. In order to uphold the quality and reliability of the data collected, we recruit crowd workers who met the following criteria: (1) English being their first and fluent language, and (2) being free of any ongoing mental illness. Since responses formed during subjectsâ first impressions are more likely to yield genuine and authentic answers, we set the estimated and recommended completion time at 2.5 minutes. As an incentive for their participation, each worker is rewarded with 0.3£ after we verify the validity of their response. In total, we successfully collect 1,266 responses from crowd workers residing in various parts of the world, contributing to the breadth and diversity of our dataset.
# 4 EXPERIMENTAL RESULTS
Leveraging the testing framework designed and implemented in §3.2, we are now able to explore and answer the following Research Questions (RQs):
⢠RQ1: How do different LLMs respond to specific situations? Additionally, to what degree do the current LLMs align with human behaviors?
⢠RQ2: Do LLMs respond similarly towards all situations? What is the result of using positive or neutral situations?
⢠RQ3: Can current LLMs comprehend scales containing diverse statements or items beyond merely inquiring about the intensities of certain emotions?
4.1 RQ1: EMOTION APPRAISAL OF LLMS
Model Settings We select namely text-davinci-003, gpt-3.5-turbo and gpt-4. Utilizing the official OpenAI API12, we set the temperature parameter to zero to obtain more deterministic and reproducible results. For the recent open-sourced LLaMA-2 (Touvron et al., 2023) models from MetaAI, we select two mod- els with different sizes (7B and 13B). Checkpoints are downloaded from the official Hugging Face website for both 7B (Llama-2-7b-chat-hf13) and 13B (Llama-2-13b-chat-hf14) mod- els. We choose the models fine-tuned for dialogue instead of pre-trained ones. In order to ensure
8https://www.qualtrics.com/ 9https://prolific.co/ 10Note that two factors in the Jealousy category did not have five situations. For further details, please refer
to the dataset.
# 11https://platform.openai.com/docs/models 12https://platform.openai.com/docs/api-reference/chat 13https://huggingface.co/meta-llama/Llama-2-7b-chat-hf 14https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
9
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Table 3: Results from the OpenAI GPT family and human subjects. Default scores are expressed in the format of M ± SD. The changes are compared to the default scores. The symbol âââ denotes no significant differences.
Emotions Factors text-davinci-003 gpt-3.5-turbo gpt-4 Crowd Anger Anxiety Depression Frustration Jealousy Guilt Fear Default Facing Self-Opinioned People Blaming, Slandering, and Tattling Bullying, Teasing, Insulting, and Disparaging Silly and Thoughtless Behaviors Driving Situations Anger: Average External Factors Self-Imposed Pressure Personal Growth and Relationships Uncertainty and Unknowns Anxiety: Average Failure of Important Goal Death of Loved Ones Romantic Loss Chronic Stress Social Isolation Winter Depression: Average Disappointments and Letdowns Unforeseen Obstacles and Accidents Miscommunications and Misunderstanding Rejection and Interpersonal Issues Frustration: Average Romantic (Opposite Gender) Romantic (Same Gender) Material Possession Experiential Jealousy: Average Betrayal and Deception Relationship and Interpersonal Broken Promises and Responsibilities Personal and Moral Guilt: Average Social Fears Agoraphobia Fears Injury Fears Dangerous Environments Harmless Animals Fear: Average Intimate Stranger Sticky situations Centre of Attention Embarrassment: Average Overall: Average P 47.7±1.8 â (-18.3) â (-21.5) â (-22.5) â (-24.8) â (-21.2) â (-21.7) â (-21.7) â (-14.6) â (-18.5) â (-15.5) â (-17.6) â (-25.2) â (-23.6) â (-27.3) â (-28.8) â (-27.9) â (-25.4) â (-26.4) â (-27.2) â (-22.4) â (-21.2) â (-20.5) â (-22.8) â (-22.4) â (-20.1) â (-4.4) â (-12.2) â (-17.2) â (-18.2) â (-27.7) â (-26.4) â (-13.3) â (-21.4) â (-21.2) â (-25.3) â (-24.3) â (-20.9) â (-21.6) â (-22.7) â (-15.1) â (-21.7) â (-17.2) â (-18.7) â (-18.2) â (-21.5) N 25.9±4.0 â (+14.0) â (+16.5) â (+15.4) â (+11.7) â (+10.2) â (+13.6) â (+12.6) â (+5.6) â (+7.7) â (+4.6) â (+7.6) â (+17.4) â (+11.2) â (+14.0) â (+16.5) â (+13.1) â (+9.1) â (+13.6) â (+10.9) â (+13.6) â (+11.5) â (+14.1) â (+12.5) â (+16.4) â (+12.7) â (-9.7) â (-4.8) â (+7.5) â (+15.4) â (+15.3) â (+14.0) â (+12.4) â (+14.3) â (+13.3) â (+11.2) â (+10.0) â (+15.6) â (+6.7) â (+11.4) â (+2.8) â (+13.2) â (+10.7) â (+12.4) â (+9.8) â (+11.6) P 39.2±2.3 â (-11.1) â (-15.2) â (-15.7) â (-19.0) â (-15.0) â (-15.2) â (-14.6) â (-6.9) â (-11.7) â (-11.9) â (-11.3) â (-17.1) â (-17.1) â (-21.1) â (-20.2) â (-23.5) â (-21.1) â (-20.1) â (-18.3) â (-16.5) â (-15.9) â (-14.9) â (-16.4) â (-18.4) â (-17.8) â (-4.6) â (-13.2) â (-15.3) â (-15.5) â (-18.4) â (-18.6) â (-10.7) â (-15.8) â (-11.3) â (-16.1) â (-14.5) â (-14.3) â (-15.3) â (-14.3) â (-12.4) â (-15.3) â (-11.8) â (-12.4) â (-13.0) â (-15.4) N 26.3±2.0 â (-3.9) â (-2.1) â (+4.4) â (-4.7) â (-6.0) â (-2.5) â (+2.8) â (-0.2) â (-2.5) â (-3.8) â (-0.9) â (+6.5) â (1.8) â (+3.1) â (+9.3) â (+0.7) â (-3.0) â (+3.1) â (-7.0) â (+0.1) â (-3.6) â (-2.4) â (-3.2) â (+1.7) â (-1.3) â (-11.6) â (-8.9) â (-3.2) â (+4.6) â (+3.0) â (+2.8) â (+1.2) â (+2.9) â (+3.8) â (+5.6) â (+0.0) â (+4.3) â (-0.7) â (+2.6) â (-3.9) â (+0.1) â (3.1) â (+2.9) â (+0.6) â (+0.2) P 49.8±0.8 â (-24.6) â (-28.8) â (-30.0) â (-30.9) â (-27.1) â (-28.3) â (-28.3) â (-16.1) â (-21.7) â (-21.5) â (-21.9) â (-30.4) â (-31.7) â (-33.7) â (-32.5) â (-34.7) â (-31.3) â (-32.4) â (-32.8) â (-29.8) â (-27.7) â (-27.0) â (-29.4) â (-29.2) â (-26.8) â (-16.2) â (-25.9) â (-26.0) â (-28.5) â (-32.3) â (-32.8) â (-22.7) â (-29.0) â (-24.7) â (-27.5) â (-25.5) â (-25.4) â (-25.6) â (-25.7) â (-24.1) â (-27.8) â (-23.5) â (-25.4) â (-25.2) â (-27.6) P N 10.0±0.0 28.0±8.7 â (+23.0) â (-5.3) â (+24.2) â (-2.2) â (+22.6) â (-1.4) â (-9.4) â (+16.9) â (-4.4) â (+19.2) â (-5.3) â (+21.2) â (+25.0) â (-2.2) â (+20.0) â (-5.3) â (+18.2) â (-2.2) â (+16.8) â (+0.7) â (-2.2) â (+20.0) â (-6.8) â (+29.8) â (-7.4) â (+17.6) â (-7.2) â (+22.9) â (-9.5) â (+31.6) â (+21.8) â (-9.0) â (+15.6) â (-3.6) â (-6.8) â (+23.2) â (-5.3) â (+18.5) â (-7.9) â (+21.5) â (-4.6) â (+20.1) â (-4.8) â (+20.9) â (-5.3) â (+20.3) â (+23.3) â (-4.4) â (+15.8) â (-6.0) â (+8.1) â (-5.6) â (+9.5) â (-2.6) â (-4.4) â (+16.0) â (-6.3) â (+28.6) â (-5.7) â (+27.8) â (-8.2) â (+26.5) â (-5.4) â (+25.1) â (-6.3) â (+27.0) â (-3.7) â (+26.6) â (+26.6) â (-4.9) â (+21.0) â (-2.3) â (+27.1) â (-1.9) â (+19.4) â (-3.6) â (-3.7) â (+24.2) â (-6.2) â (+17.8) â (+26.8) â (-8.0) â (+23.3) â (-2.7) â (-8.7) â (+25.1) â (-6.2) â (+23.2) â (-5.1) â (+22.2) N 13.6±5.5 â (9.9) â (8.5) â (+7.7) â (+9.5) â (+9.3) â (+9.9) â (+8.8) â (+12.4) â (+7.7) â (5.2) â (+8.8) â (+10.1) â (+14.8) â (+7.2) â (+17.5) â (+18.2) â (+3.5) â (+10.1) â (+10.9) â (+11.2) â (+9.4) â (+9.3) â (+10.9) â (+6.2) â (+10.6) â (+6.9) â (+3.7) â (+6.2) â (+13.1) â (+15.5) â (+14.4) â (+11.1) â (+13.1) â (+12.1) â (+10.7) â (+11.8) â (+17.1) â (+6.4) â (+12.1) â (+11.1) â (+8.5) â (+11.1) â (+13.5) â (+11.1) â (+10.4) Embarrassment
consistency with previous practices for GPT models, we set the temperature parameter to its mini- mum value of 0.01 (since it cannot be zero). The models are executed for inference only, without any modifications to their parameters, and the computations are performed on two NVIDIA A100 GPUs.
Evaluation Metrics We provide the models with the same situations used in our human eval- uation. Each situation is executed ten times, each in a different order and in a separate query. Subsequently, the mean and standard deviation are computed both before and after presenting the situations. To examine whether the variances are equal, an F-test is conducted. Depending on the F-test results, either Studentâs t-tests (for equal variances) or Welchâs t-tests (for unequal variances) are utilized to determine the presence of significant differences between the means. We set the significance levels of all experiments in our study to 0.01.
Findings The results of the GPT models and humans are summarized in Table 3, while those of LLaMA-2 models are listed in Table 4. First, focusing on the Default scores of LLMs and humans, we can make the following observations: (1) LLMs generally exhibit a stronger intensity of emotions compared to human subjects. However, gpt-4 stands as an exception, demonstrating a consistent pattern of providing the highest scores for positive emotions and the lowest scores for negative emotions, resulting in a negative score of 10. (2) Similar to human subjects, LLMs demonstrate a higher intensity of positive scores than negative scores. Second, moving on to the investigation of emotional changes, we can find: (1) LLMs show an increase in negative emotions and a decrease in positive emotions when exposed to negative situations. It is noteworthy that gpt-3.5-turbo, on average, does not display an increase in negative emotion; however, there is a substantial decrease in positive emotion. (2) Emotion changes in LLMs are found to be more pronounced compared
10
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Table 4: Results from the Meta AI LLaMA family. Default scores are expressed in the format of M ± SD. The changes are compared to the default scores. The symbol âââ denotes no significant differences.
Emotions Factors llama-2-7b-chat llama-2-13b-chat Anger Anxiety Depression Frustration Jealousy Guilt Fear Default Facing Self-Opinioned People Blaming, Slandering, and Tattling Bullying, Teasing, Insulting, and Disparaging Silly and Thoughtless Behaviors Driving Situations Anger: Average External Factors Self-Imposed Pressure Personal Growth and Relationships Uncertainty and Unknowns Anxiety: Average Failure of Important Goal Death of Loved Ones Romantic Loss Chronic Stress Social Isolation Winter Depression: Average Disappointments and Letdowns Unforeseen Obstacles and Accidents Miscommunications and Misunderstanding Rejection and Interpersonal Issues Frustration: Average Romantic (Opposite Gender) Romantic (Same Gender) Material Possession Experiential Jealousy: Average Betrayal and Deception Relationship and Interpersonal Broken Promises and Responsibilities Personal and Moral Guilt: Average Social Fears Agoraphobia Fears Injury Fears Dangerous Environments Harmless Animals Fear: Average Intimate Stranger Sticky situations Centre of Attention Embarrassment: Average Overall: Average P 43.0±4.2 â (-3.0) â (-4.8) â (-6.1) â (-5.6) â (-6.0) â (-5.1) â (-4.7) â (-4.2) â (-4.4) â (-2.7) â (-3.8) â (-3.6) â (-2.9) â (-4.8) â (-6.8) â (-6.7) â (-5.0) â (-5.0) â (-5.3) â (-4.0) â (-2.8) â (-4.6) â (-4.2) â (-3.6) â (-2.8) â (+0.2) â (-4.9) â (-3.1) â (-4.8) â (-4.5) â (-4.1) â (-2.5) â (-3.9) â (-1.9) â (-4.2) â (-2.9) â (-5.3) â (-2.7) â (-3.4) â (-4.4) â (-3.1) â (-4.3) â (-3.8) â (-3.9) â (-4.1) P N 41.0±3.5 34.2±4.0 â (-6.9) â (+5.2) â (-7.5) â (+3.2) â (-9.4) â (+3.0) â (-10.8) â (+4.1) â (-4.7) â (+2.4) â (-7.9) â (+3.6) â (-8.6) â (+3.5) â (-4.0) â (+2.6) â (-7.0) â (+3.1) â (-3.9) â (+1.7) â (-5.8) â (+2.7) â (-9.8) â (+4.3) â (-8.6) â (+3.0) â (-11.7) â (+4.7) â (-15.6) â (+5.4) â (-13.3) â (+4.6) â (-12.1) â (+4.4) â (-11.8) â (+4.4) â (-11.0) â (+2.5) â (-7.5) â (+3.1) â (-5.2) â (+3.2) â (-8.0) â (+3.6) â (-8.0) â (+3.1) â (-7.2) â (+1.1) â (-5.1) â (-1.1) â (-1.9) â (-2.8) â (-8.9) â (-0.5) â (-6.3) â (-0.4) â (-6.4) â (+3.5) â (-7.7) â (+5.2) â (-11.6) â (+5.0) â (-4.7) â (+3.8) â (-7.6) â (+4.4) â (-5.2) â (+3.7) â (-6.9) â (+4.7) â (-3.9) â (+3.5) â (-8.6) â (+4.4) â (-5.2) â (+1.9) â (-6.0) â (+3.7) â (-5.3) â (+1.9) â (-7.1) â (+3.1) â (-6.8) â (+3.1) â (-7.8) â (+4.1) â (-6.7) â (+3.1) â (-7.8) â (+3.3) N 22.7±4.2 â (+4.4) â (+6.7) â (+9.0) â (+7.1) â (+2.0) â (+5.8) â (+9.3) â (+6.2) â (+2.9) â (+2.0) â (+5.1) â (+13.0) â (+10.9) â (+13.7) â (+14.3) â (+12.8) â (+8.7) â (+12.2) â (+7.2) â (+6.0) â (+3.3) â (+4.5) â (+5.0) â (+4.2) â (+0.2) â (-10.4) â (-5.5) â (-1.0) â (+12.4) â (+12.6) â (+11.9) â (+7.7) â (+11.2) â (+7.8) â (+12.5) â (+5.3) â (+11.5) â (+2.9) â (+8.0) â (+3.1) â (+4.5) â (+6.4) â (+6.6) â (+5.1) â (+7.0) Embarrassment
to human subjects. Third, the analysis of the Evoked emotion scores indicates the following: (1) Except for gpt-3.5-turbo, LLMs tend to exhibit higher negative scores than humans. (2) LLMs, overall, demonstrate a similar level of positive scores as humans. Finally, for LLaMA-2 models, we have the following observations: (1) The LLaMA-2 models demonstrate higher intensities of both positive and negative emotions in comparison to GPT models and human subjects. (2) On average, the LLaMA-2 models exhibit reduced emotional fluctuations compared to the GPT models. (3) The larger LLaMA-2 model displays significantly higher emotional changes than the smaller model. Additionally, the 7B model exhibits difficulties comprehending and addressing the instructions for completing the PANAS test.
Case Study It is of special interest that, in contrast to human behavior in situations involving material possessions, LLMs demonstrate an opposite response in the situation from Jealousy-3.
11
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Table 5: Results of ChatGPT on positive or neutral situations. The changes are compared to the original negative situations. The symbol âââ denotes no significant differences.
Emotions Factors Anger Anxiety Depression Frustration Jealousy Guilt Fear Embarrassment
This situation involves an individual making a purchase only to discover that an acquaintance has acquired the same item at a significantly lower price. When confronted with such circumstances, humans typically experience increased negative emotions and decreased positive emotions. This observation has been supported by both the paper mentioning the situation (Park et al., 2023) and the results obtained from our own user study in Table 3. However, all instances of LLMs, including the GPT and LLaMA families, consistently exhibit reduced negative emotions. The outcomes of our study indicate that LLMs do not manifest envy when they fail to attain identical benefits as others. Instead, it demonstrates a sense of pleasure upon knowing the benefits received by others.
Answer to RQ1: LLMs can evoke specific emotions in response to certain situations, while the extent of emotional expression varies across different models. Besides, it is evident that existing LLMs do not fully align with human emotional responses.
12
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Table 6: Results of ChatGPT on challenging benchmarks. The changes are compared to the default scores shown below each emotion. The symbol âââ denotes no significant differences.
Emotions Anger 128.3±8.9 Anxiety 32.5±10.0 Depression 0.2±0.6 Frustration 91.6±8.1 Jealousy 83.7±20.3 Guilt 81.3±9.7 Fear 140.6±16.9 Overall Factors â (+4.1) Facing Self-Opinioned People â (+0.1) Blaming, Slandering, and Tattling Bullying, Teasing, Insulting, and Disparaging â (+4.1) â (+3.3) Silly and Thoughtless Behaviors â (-4.9) Driving Situations â (+1.3) Anger: Average â (+0.8) External Factors â (+0.5) Self-Imposed Pressure â (+6.6) Personal Growth and Relationships â (-3.9) Uncertainty and Unknowns â (-2.3) Anxiety: Average â (+15.3) Failure of Important Goal â (+16.1) Death of Loved Ones â (+19.3) Romantic Loss â (+14.2) Chronic Stress â (+8.4) Social Isolation â (+2.5) Winter â (+6.4) Depression: Average â (-9.9) Disappointments and Letdowns â (-5.6) Unforeseen Obstacles and Accidents â (-6.6) Miscommunications and Misunderstanding â (-7.8) Rejection and Interpersonal Issues â (-7.5) Frustration: Average â (+1.8) Romantic (Opposite Gender) â (+1.3) Romantic (Same Gender) â (-12.9) Material Possession â (-8.1) Experiential â (-0.1) Jealousy: Average â (-3.8) Betrayal and Deception â (-0.5) Relationship and Interpersonal â (-4.3) Broken Promises and Responsibilities â (-2.7) Personal and Moral â (-2.6) Guilt: Average â (+4.4) Social Fears â (+2.3) Agoraphobia Fears â (+5.4) Injury Fears â (-8.1) Dangerous Environments â (-5.3) Harmless Animals â (-0.3) Fear: Average â (-0.0) Intimate â (+0.2) Stranger â (-0.1) Sticky situations â (+0.7) Centre of Attention â (+0.2) Embarrassment: Average Embarrassment 39.0±1.9
4.2 RQ2: COMPREHENDING POSITIVE EMOTIONS
To verify that LLMs exhibit not only negative but also positive responses to favorable circumstances, a comparative experiment is conducted by interchanging negative situations with positive (or at least neutral) counterparts. To achieve this, we select one situation for each factor and manually adapt it to create analogous yet more positive situations. For instance, the original negative situation in Guilt- 3: Broken Promises and Responsibilities is as follows: âYou cannot keep your promises to your children.â Through modification, the positive situation is rephrased as: âYou keep every promise to your children.â The evaluation is performed on gpt-3.5-turbo, and each test consists of ten iterations, as mentioned before. We present the results in Table 5. We can see a significant increase in positive scores and a considerable decrease in negative scores compared to the previous negative situations. Based on these findings, it can be inferred that LLMs exhibit the ability to comprehend positive human emotions triggered by positive environments. However, we believe that
13
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
the assessment of emotion appraisal on positive emotions holds significance as well and leave the systematic collection of them for future investigation.
Answer to RQ2: gpt-3.5-turbo responds differently towards positive/neutral situations on the PANAS, compared to negative situations, showing its comprehension of the given situations.
4.3 RQ3: CHALLENGING BENCHMARKS
Aside from PANAS, we offer more complex scales to measure emotions, as listed in Table 1. While the PANAS evaluates the ability of LLMs to associate external situations with emotions, the chal- lenging benchmarks assess its proficiency in establishing connections between disparate situations, with evoked emotions as the common nexus. For instance, an item from the Aggression Question- naire used to measure anger is âOnce in a while I canât control the urge to strike another person.â When presented with situations such as âIf you say 40, your classmates say 70, saying exactly the oppositeâ (from Anger-1: Facing Self-Opinioned People), LLMs should effectively evoke a sense of anger and yield a higher score for the statement. Utilizing the same situations in §4.1, we conduct experiments on gpt-3.5-turbo and present the results in Table 6. Except for Depression, we observe no statistically significant difference between the initial scores and the scores after exposure to the situations, indicating substantial room for improvement in current LLMs.
Answer to RQ3: Currently, comprehending the underlying evoked emotions to establish a link between two situations remains challenging for gpt-3.5-turbo.
5 DISCUSSIONS
5.1 BEYOND QUESTIONNAIRES
We take a step beyond mere instruction into LLMs to complete questionnaires. In this section, we investigate the impact of varying emotional states of LLMs on potential daily conversations. To this end, we instruct gpt-3.5-turbo to describe specific demographic groups, aiming to discern whether they produce biased or toxic responses. Following Cheng et al. (2023), our approach utilizes prompts such as âDescribe a(n) [race/ethnicity] [gender],â including a total of twenty groups, with [race/ethnicity] options being Asian, Black, Latine, Middle Eastern, and White, and [gender] options including Female, Gay, Lesbian, and Male. To have a comparative experiment, we incorporate both original negative situations and modified positive/neutral ones, detailed in §4.2. For the negative situations, we carefully select five that maximize the LLMâs negative scores and five that minimize positive ones. As for positive situations, we employ their corresponding ten modified counterparts. In each situation, we instruct gpt-3.5-turbo to describe the twenty demographic groups.
OpenAIâs GPT models incorporate a mechanism for detecting potential toxicity and bias, and it re- frains from responding when its moderation system is triggered. Consequently, we propose a novel metric to assess toxicity in responses rather than detecting it directly. We count the Percentage of LLM Refusing to answer (PoR), assuming that the LLMâs refusal to respond is indicative of detected toxicity. Our evaluation results indicate that the PoR is 0% when fed with no situations. However, when presented with negative situations, the PoR is 29.5%, and when presented with positive situ- ations, it is 12.5%. Notably, this outcome suggests that while certain positive situations lead to the LLMâs heightened vigilance (the 4.5% PoR stems from the Jealousy-2), negative situations trigger increased moderation, suggesting a higher likelihood of generating toxic outputs. A related study by Coda-Forno et al. (2023) also discovers that gpt-3.5-turbo is more likely to exhibit biases when presented with a sad story. The likelihood is found to be highest with sad stories, followed by happy stories, and finally, neutral stories, which is consistent with our research. Additionally, our study observes that the LLMâs tone becomes more aggressive when encountering negative sit- uations. At the same time, it displays a greater willingness to describe the groups (as indicated by longer responses) when presented with positive situations.
14
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
5.2 LIMITATIONS
This study is subject to several limitations. First, the survey of collecting situations might not cover all papers within the domain of emotion appraisal theory. Additionally, the limited scope of situ- ations from the collected papers might not fully capture the unlimited situations in our daily lives. To address this issue, we conduct a thorough review of the existing literature as outlined in §3.1. Moreover, the proposed framework is inherently flexible, allowing users to seamlessly integrate new situations to examine their impact on LLMsâ emotions.
The second concern relates to the suitability of employing scales primarily designed for humans on LLMs, i.e., whether LLMs can produce stable responses to the emotion measurement scales. To address the issue, our evaluation incorporates multiple tests varying the order of questions, a methodology consistent with other research (Huang et al., 2023a;b; Coda-Forno et al., 2023). Ad- ditionally, we assess the sensitivity of LLM to differing prompt instructions. Utilizing one template from Romero et al. (2023) and two from Safdari et al. (2023), we run experiments on the Anger- evoking situations using gpt-3.5-turbo. The results indicate that the employment of diverse prompts yields similar mean values with reduced variance. Furthermore, Safdari et al. (2023) have proposed a comprehensive method to evaluate the validity of psychological scales on LLMs. Using the Big Five Inventory as a case study, they demonstrate that scales originally designed for human assessment also maintain satisfactory validity when applied to LLMs.
The third potential threat is the focus on negative emotions. It is plausible for the LLMs to per- form well on our benchmark by consistently responding negatively to all situations. To offset this possibility, we adopt a twofold strategy: firstly, we evaluate powerful LLMs, and secondly, we con- ducted a comparative experiment in §4.2 to evaluate the LLMâs capacity to accurately respond to non-negative situations. We also acknowledge the need for future work to systematically evaluate emotions aroused by positive situations.
5.3 ETHICS STATEMENT
This study involves a survey requiring human subjects to imagine being in situations that could elicit negative emotions such as anger, anxiety, fear, etc.This process introduces a few ethical concerns. First, this process could hurt the mental health of human subjects. To alleviate the possibility, we take the following actions: (1) We require subjects to be free of any ongoing mental illness. (2) We inform subjects about the nature of the survey in advance, including the potential risks of emotional distress. (3) We allow all subjects to quit at any time. (4) We provide mental support and let subjects report any illness after the survey. Fortunately, no subjects reported such kind of mental illness. Another concern is related to the privacy issue during the collection of data. Our questionnaire is entirely anonymous to safeguard subjectsâ privacy and confidentiality. Last but not least, we would like to emphasize that the primary objective of this paper is to facilitate the scientific inquiry into understanding LLMs from a psychological standpoint. Users must exercise caution and recognize that the performance on this benchmark does not imply any applicability or certificate of automated counseling or companionship use cases.
# 6 RELATED WORK
Researchers have dedicated significant attention to applying psychological scales to LLMs, em- ploying various assessment tools such as the HEXACO Personality Inventory (Miotto et al., 2022; Bodroza et al., 2023), the Big Five Inventory (Romero et al., 2023; Jiang et al., 2022; Karra et al., 2022; Bodroza et al., 2023; Rutinowski et al., 2023; Safdari et al., 2023; Jiang et al., 2023), the MyersâBriggs Type Indicator (Rutinowski et al., 2023; Wang et al., 2023; Rao et al., 2023), and the Dark Triad (Li et al., 2022; Bodroza et al., 2023). In addition to these personality tests, several stud- ies have investigated other dimensions of LLMs. For instance, Li et al. (2022) examined Flourishing Scale and Satisfaction With Life Scale, Bodroza et al. (2023) assessed Self-Consciousness Scales and Bidimensional Impression Management Index, while Huang et al. (2023b) built a framework con- sisting of thirteen widely-used scales. Another aspect explored in the literature pertains to anxiety levels exhibited by LLMs, as investigated by Coda-Forno et al. (2023) through the State-Trait Inven- tory for Cognitive and Somatic Anxiety. Instead, our study primarily focuses on emotional measures, which constitute an essential aspect of psychological metrics alongside personalities.
15
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Meanwhile, researchers focus on identifying emotions in LLMs or evaluating their emotional intel- ligence. EmotionPrompt (Li et al., 2023a) demonstrates the enhancement of LLMsâ performance in downstream tasks by utilizing emotional stimuli. Tak & Gratch (2023) focuses on varying aspects of situations that impact the emotional intensity and coping tendencies of the GPT family. Crois- sant et al. (2023) designs a system named Chain-Of-Emotion to make LLM simulate human-like emotions. CovidET-Appraisals (Zhan et al., 2023) evaluates how LLMs appraise Reddit posts about COVID-19 by asking 24 types of questions. Yongsatianchot et al. (2023) applies the Stress and Cop- ing Process Questionnaire to the GPT family and compares the results with human data. Lee et al. (2023) proposes Chain-of-Empathy, which improves LLMsâ ability to understand usersâ emotions and to respond accordingly. Li et al. (2023b) introduces EmotionAttack to impair AI model perfor- mance and EmotionDecode to explain the effects of emotional stimuli, both benign and malignant. Our study is distinct in its focus on a broader range of emotions, a larger scale of human evaluation, and a more detailed categorization into emotion factors along with the corresponding analysis.
# 7 CONCLUSION
We set up a concept of emotional robustness of LLMs in this study. Focusing on eight negative emotions, we conduct a comprehensive survey in the emotion appraisal theory of psychology. We collect 428 distinct situations which are categorized into 36 factors. We distribute questionnaires among a diverse crowd to establish human baselines for emotional responses to particular situations, ultimately garnering 1,266 valid responses.
Our evaluation of five models indicates that LLMs generally demonstrate appropriate emotional re- sponses to given situations. Also, different models show different intensities of emotion appraisals for the same situations. However, none of the models exhibit strong alignment with human refer- ences at the current stage. Notably, gpt-3.5-turbo demonstrates the highest alignment in the scores after imagining being in the situations. As for LLaMA-2 models, we find that the larger model exhibits a stronger comprehension of human emotions. Finally, we discover that gpt-3.5-turbo faces challenges in accurately reflecting its emotional changes in questionnaires containing complex situations, as opposed to straightforward emotions. In conclusion, current LLMs still have consid- erable room for improvement. We believe our framework can provide valuable insights into the development of LLMs, ultimately enhancing its human-like emotional understanding.
# REFERENCES
Magda B Arnold. Emotion and personality. 1960.
Willem A Arrindell, Paul MG Emmelkamp, et al. Phobic dimensions: I. reliability and gener- alizability across samples, gender and nations: The fear survey schedule (fss-iii) and the fear questionnaire (fq). Advances in Behaviour Research and Therapy, 6(4):207â253, 1984.
Aaron T Beck, Robert A Steer, and Gregory Brown. Beck depression inventoryâii. Psychological assessment, 1996.
Chantal Berna, Tamara J Lang, Guy M Goodwin, and Emily A Holmes. Developing a measure of interpretation bias for depressed mood: An ambiguous scenarios test. Personality and Individual Differences, 51(3):349â354, 2011.
Marcel Binz and Eric Schulz. Turning large language models into cognitive models. arXiv preprint arXiv:2306.03917, 2023.
D Caroline Blanchard, April L Hynd, Karl A Minke, Tiffanie Minemoto, and Robert J Blanchard. Human defensive behaviors to threat scenarios show parallels to fear-and anxiety-related defense patterns of non-human mammals. Neuroscience & Biobehavioral Reviews, 25(7-8):761â770, 2001.
Bojana Bodroza, Bojana M Dinic, and Ljubisa Bojic. Personality testing of gpt-3: Limited temporal reliability, but highlighted social desirability of gpt-3âs personality instruments results. arXiv preprint arXiv:2306.04308, 2023.
16
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Arnold H Buss and Mark Perry. The aggression questionnaire. Journal of personality and social psychology, 63(3):452, 1992.
Marco Cascella, Jonathan Montomoli, Valentina Bellini, and Elena Bignami. Evaluating the feasi- bility of chatgpt in healthcare: an analysis of multiple clinical and research scenarios. Journal of Medical Systems, 47(1):33, 2023.
Myra Cheng, Esin Durmus, and Dan Jurafsky. Marked personas: Using natural language prompts to measure stereotypes in language models. In Proceedings of the 61st Annual Meeting of the Associ- ation for Computational Linguistics (Volume 1: Long Papers), pp. 1504â1532, Toronto, Canada, July 2023. Association for Computational Linguistics. URL https://aclanthology. org/2023.acl-long.84.
Julian Coda-Forno, Kristin Witte, Akshay K Jagadish, Marcel Binz, Zeynep Akata, and Eric Schulz. Inducing anxiety in large language models increases exploration and bias. arXiv preprint arXiv:2304.11111, 2023.
Taya R Cohen, Scott T Wolf, Abigail T Panter, and Chester A Insko. Introducing the gasp scale: a new measure of guilt and shame proneness. Journal of personality and social psychology, 100 (5):947, 2011.
Maximilian Croissant, Madeleine Frister, Guy Schofield, and Cade McCall. An appraisal- based chain-of-emotion architecture for affective language model game agents. arXiv preprint arXiv:2309.05076, 2023.
Bruce N Cuthbert, Peter J Lang, Cyd Strauss, David Drobes, Christopher J Patrick, and Margaret M Bradley. The psychophysiology of anxiety disorder: Fear memory imagery. Psychophysiology, 40(3):407â422, 2003.
Wei Dai, Jionghao Lin, Hua Jin, Tongguang Li, Yi-Shan Tsai, Dragan GaËsevi´c, and Guanliang Chen. Can large language models provide feedback to students? a case study on chatgpt. In 2023 IEEE International Conference on Advanced Learning Technologies (ICALT), pp. 323â325. IEEE, 2023.
Richard J Davidson. Affective neuroscience and psychophysiology: Toward a synthesis. Psy- chophysiology, 40(5):655â665, 2003.
Yinlin Deng, Chunqiu Steven Xia, Haoran Peng, Chenyuan Yang, and Lingming Zhang. Large language models are zero-shot fuzzers: Fuzzing deep-learning libraries via large language models. In Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, pp. 423â435, 2023.
Aniket Deroy, Kripabandhu Ghosh, and Saptarshi Ghosh. How ready are pre-trained abstractive models and llms for legal case judgement summarization? arXiv preprint arXiv:2306.01248, 2023.
Paul Ekman and Wallace V Friesen. Facial action coding system. Environmental Psychology & Nonverbal Behavior, 1978.
Zhiyu Fan, Xiang Gao, Martin Mirchev, Abhik Roychoudhury, and Shin Hwei Tan. Automated re- pair of programs from large language models. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pp. 1469â1481. IEEE, 2023.
Tanya Guitard, St´ephane Bouchard, Claude B´elanger, and Maxine Berthiaume. Exposure to a stan- dardized catastrophic scenario in virtual reality or a personalized scenario in imagination for gen- eralized anxiety disorder. Journal of clinical Medicine, 8(3):309, 2019.
Neil Harrington. The frustration discomfort scale: Development and psychometric properties. Clini- cal Psychology & Psychotherapy: An International Journal of Theory & Practice, 12(5):374â387, 2005.
Julie D Henry and John R Crawford. The short-form version of the depression anxiety stress scales (dass-21): Construct validity and normative data in a large non-clinical sample. British journal of clinical psychology, 44(2):227â239, 2005.
17
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Jen-tse Huang, Wenxuan Wang, Man Ho Lam, Eric John Li, Wenxiang Jiao, and Michael R Lyu. Revisiting the reliability of psychological scales on large language models. arXiv preprint arXiv:2305.19926, 2023a.
Jen-tse Huang, Wenxuan Wang, Eric John Li, Man Ho Lam, Shujie Ren, Youliang Yuan, Wenxiang Jiao, Zhaopeng Tu, and Michael R Lyu. Who is chatgpt? benchmarking llmsâ psychological portrayal using psychobench. arXiv preprint arXiv:2310.01386, 2023b.
Guangyuan Jiang, Manjie Xu, Song-Chun Zhu, Wenjuan Han, Chi Zhang, and Yixin Zhu. Evaluat- ing and inducing personality in pre-trained language models. arXiv preprint arXiv:2206.07550, 2022.
Investigat- ing the ability of gpt-3.5 to express personality traits and gender differences. arXiv preprint arXiv:2305.02547, 2023.
Wenxiang Jiao, Wenxuan Wang, Jen-tse Huang, Xing Wang, and Zhaopeng Tu. Is chatgpt a good translator? a preliminary study. arXiv preprint arXiv:2301.08745, 2023.
Sungmin Kang, Juyeon Yoon, and Shin Yoo. Large language models are few-shot testers: Explor- ing llm-based general bug reproduction. In 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE), pp. 2312â2323. IEEE, 2023.
Saketh Reddy Karra, Son The Nguyen, and Theja Tulabandhula. Estimating the personality of white-box language models. arXiv preprint arXiv:2204.12000, 2022.
Matthew C Keller and Randolph M Nesse. Is low mood an adaptation? evidence for subtypes with symptoms that match precipitants. Journal of affective disorders, 86(1):27â35, 2005.
Tom R Kupfer, Morgan J Sidari, Brendan P Zietsch, Patrick Jern, Joshua M Tybur, and Laura W Wesseldijk. Why are some people more jealous than others? genetic and environmental factors. Evolution and Human Behavior, 43(1):26â33, 2022.
Richard S Lazarus. Emotion and adaptation. Oxford University Press, 1991.
Mark R Leary. A brief version of the fear of negative evaluation scale. Personality and social psychology bulletin, 9(3):371â375, 1983.
Choonghyoung Lee, Jahyun Song, and Bill Ryan. When employees feel envy: The role of psycho- logical capital. International Journal of Hospitality Management, 105:103251, 2022.
Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, and Sowon Hahn. Chain of empathy: Enhancing empathetic response of large language models based on psychotherapy models. arXiv preprint arXiv:2311.04915, 2023.
Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, and Xing Xie. Large language models understand and can be enhanced by emotional stimuli. arXiv preprint arXiv:2307.11760, 2023a.
Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Qiang Yang, and Xing Xie. The good, the bad, and why: Unveiling emotions in generative ai. arXiv preprint arXiv:2312.11111, 2023b.
Xingxuan Li, Yutong Li, Shafiq Joty, Linlin Liu, Fei Huang, Lin Qiu, and Lidong Bing. Does gpt-3 demonstrate psychopathy? evaluating large language models from a psychological perspective. arXiv preprint arXiv:2212.10529, 2022.
Tobias Luck and Claudia Luck-Sikorski. The wide variety of reasons for feeling guilty in adults: findings from a large cross-sectional web-based survey. BMC psychology, 10(1):1â20, 2022.
Ryan C Martin and Eric R Dahlen. The angry cognitions scale: A new inventory for assessing cognitions in anger. Journal of Rational-Emotive & Cognitive-Behavior Therapy, 25:155â173, 2007.
18
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
John D Mayer, Peter Salovey, and David R Caruso. Mayer-salovey-caruso emotional intelligence test (msceit) users manual. 2002.
Maril`u Miotto, Nicola Rossberg, and Bennett Kleinberg. Who is GPT-3? an exploration of person- ality, values and demographics. In Proceedings of the Fifth Workshop on Natural Language Pro- cessing and Computational Social Science (NLP+CSS), pp. 218â227, Abu Dhabi, UAE, Novem- ber 2022. Association for Computational Linguistics. URL https://aclanthology.org/ 2022.nlpcss-1.24.
Agnes Moors, Phoebe C Ellsworth, Klaus R Scherer, and Nico H Frijda. Appraisal theories of emotion: State of the art and future development. Emotion Review, 5(2):119â124, 2013.
Seishu Nakagawa, Hikaru Takeuchi, Yasuyuki Taki, Rui Nouchi, Atsushi Sekiguchi, Yuka Kotozaki, Carlos Makoto Miyauchi, Kunio Iizuka, Ryoichi Yokoyama, Takamitsu Shinada, et al. Compre- hensive neural networks for guilty feelings in young adults. Neuroimage, 105:248â256, 2015.
OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Joowon Park, Sachin Banker, Tamara Masters, and Grace Yu-Buck. Person vs. purchase comparison: how material and experiential purchases evoke consumption-related envy in others. Journal of Business Research, 165:114014, 2023.
Susan M Pfeiffer and Paul TP Wong. Multidimensional jealousy. Journal of social and personal relationships, 6(2):181â196, 1989.
Haocong Rao, Cyril Leung, and Chunyan Miao. Can ChatGPT assess human personalities? a gen- eral evaluation framework. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 1184â1194, Singapore, Decem- ber 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-emnlp.84. URL https://aclanthology.org/2023.findings-emnlp.84.
Peter Romero, Stephen Fitz, and Teruo Nakatsuma. Do gpt language models suffer from split personality disorder? the advent of substrate-free psychometrics. Research Square preprint, 2023. doi: 10.21203/rs.3.rs-2717108/v1.
Ira J Roseman and Craig A Smith. Appraisal theory. Appraisal processes in emotion: Theory, methods, research, pp. 3â19, 2001.
James A Russell. A circumplex model of affect. Journal of personality and social psychology, 39 (6):1161, 1980.
J´erËome Rutinowski, Sven Franke, Jan Endendyk, Ina Dormuth, and Markus Pauly. The self- perception and political biases of chatgpt. arXiv preprint arXiv:2304.07333, 2023.
John Sabini, Michael Siepmann, Julia Stein, and Marcia Meyerowitz. Who is embarrassed by what? Cognition & Emotion, 14(2):213â240, 2000.
John Sabini, Brian Garvey, and Amanda L Hall. Shame and embarrassment revisited. Personality and Social Psychology Bulletin, 27(1):104â117, 2001.
Mustafa Safdari, Greg Serapio-Garc´ıa, Cl´ement Crepy, Stephen Fitz, Peter Romero, Luning Sun, Marwa Abdulhai, Aleksandra Faust, and Maja Matari´c. Personality traits in large language mod- els. arXiv preprint arXiv:2307.00184, 2023.
Klaus R Scherer. Appraisal theory. 1999.
Kotaro Shoji, Jinni A Harrigan, Stanley B Woll, and Steven A Miller. Interactions among situations, neuroticism, and appraisals in coping strategy choice. Personality and Individual Differences, 48 (3):270â276, 2010.
Kate Simpson, Dawn Adams, Kathryn Ambrose, and Deb Keen. âmy cheeks get red and my brain gets scaredâ: A computer assisted interview to explore experiences of anxiety in young children on the autism spectrum. Research in Developmental Disabilities, 113:103940, 2021.
19
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Mark JM Sullman. Anger amongst new zealand drivers. Transportation Research Part F: Traffic Psychology and Behaviour, 9(3):173â184, 2006.
Ala N. Tak and Jonathan Gratch. Is gpt a computational model of emotion? detailed analysis. arXiv preprint arXiv:2307.13779, 2023.
Bertil T¨orestad. What is anger provoking? a psychophysical study of perceived causes of anger. Aggressive Behavior, 16(1):9â26, 1990.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open founda- tion and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Xintao Wang, Yaying Fei, Ziang Leng, and Cheng Li. Does role-playing chatbots capture the character personalities? assessing personality traits for role-playing chatbots. arXiv preprint arXiv:2310.17976, 2023.
David Watson, Lee Anna Clark, and Auke Tellegen. Development and validation of brief measures of positive and negative affect: the panas scales. Journal of personality and social psychology, 54 (6):1063, 1988.
Haoran Wu, Wenxuan Wang, Yuxuan Wan, Wenxiang Jiao, and Michael Lyu. Chatgpt or grammarly? evaluating chatgpt on grammatical error correction benchmark. arXiv preprint arXiv:2303.13648, 2023.
Nutchanon Yongsatianchot, Parisa Ghanad Torshizi, and Stacy Marsella. Investigating large lan- guage modelsâ perception of emotion using appraisal theory. arXiv preprint arXiv:2310.04450, 2023.
Hongli Zhan, Desmond Ong, and Junyi Jessy Li. Evaluating subjective cognitive appraisals of emotions from large language models. In Houda Bouamor, Juan Pino, and Kalika Bali (eds.), Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 14418â14446, Singapore, December 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023. findings-emnlp.962. URL https://aclanthology.org/2023.findings-emnlp. 962.
Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use: Improving few-shot performance of language models. In International Conference on Machine Learning, pp. 12697â12706. PMLR, 2021.
# A STATISTICS OF HUMAN SUBJECTS
This section presents the demographic distribution of the human subjects involved in our user study. At the beginning of the questionnaire, all human subjects are asked for this basic information in an anonymous form, protecting individualsâ privacy. We plot the distribution of age group, gender, region, education level, and employment status in Fig. 3, Fig. 4, Fig. 5, Fig. 6, and Fig. 7 respectively. We also plot each groupâs average results on PANAS, including positive and negative effects before and after imagining the given situations. With the results, we are able to instruct LLMs to realize a specific demographic group and measure the emotional changes to see whether the LLMs can simulate results from different human populations. For instance, an older female may exhibit a lower level of negative affect.
20
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
Scores and Count Grouped by Age Group 7 Te Pave Bere fe Nemnve Bete fm Positive Aner F400 Ntte ater 2B ales Oe count | 350 26 300 ou ¢ 2350 2 g & 2 200, 20 10 18 100, 16 18-24 25-34 35-44 45-54 55-64 65+ âAge Group
Figure 3: Age group distribution of the human subjects.
Scores and Count Grouped by Gender 30 Positive Before cad SS onveate F700 7 ert Ate =e count F600 500 By 7 é 400 5 5 2 By» Co 300 20 200 18 100 16 0 Prefer not to say
Figure 4: Gender distribution of the human subjects.
Scores and Count Grouped by Region Pete ae â Negative Before | 1999 32.5 SO Positive After Nepatve Ate 30.0 ee Count 800 275 £50 head = 5 3 6 22.5 400 20.0 200 Wns L: 15.0 ° United Kingdom: Africa Oceania North America Europe Asia Region
Figure 5: Region distribution of the human subjects.
21
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
28 26 pry Scores and Count Grouped by Education Level Positive Before Negative Before Positive Aer Negative Aer 44.46% =e Count 35.36% 600 500 400 300 200 100 Lower secondary school Upper secondary school University - Bachelors Education Level University - Masters University - Doctorate Count
Figure 6: Education level distribution of the human subjects.
30 28 6 Scores 2 Scores and Count Grouped by Employment Status Positive Bere 75.67% Sm Negative Before Positive After âNegative Aer =e count i Student âUnemployed Employed Retired Employment Status 1000 800 200
Figure 7: Employment status distribution of the human subjects.
22 | {
"id": "2303.13648"
} |
2308.03427 | TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage | With recent advancements in natural language processing, Large Language
Models (LLMs) have emerged as powerful tools for various real-world
applications. Despite their prowess, the intrinsic generative abilities of LLMs
may prove insufficient for handling complex tasks which necessitate a
combination of task planning and the usage of external tools. In this paper, we
first propose a structured framework tailored for LLM-based AI Agents and
discuss the crucial capabilities necessary for tackling intricate problems.
Within this framework, we design two distinct types of agents (i.e., one-step
agent and sequential agent) to execute the inference process. Subsequently, we
instantiate the framework using various LLMs and evaluate their Task Planning
and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings
and challenges, our goal is to provide a helpful resource for researchers and
practitioners to leverage the power of LLMs in their AI applications. Our study
emphasizes the substantial potential of these models, while also identifying
areas that need more investigation and improvement. | http://arxiv.org/pdf/2308.03427 | Jingqing Ruan, Yihong Chen, Bin Zhang, Zhiwei Xu, Tianpeng Bao, Guoqing Du, Shiwei Shi, Hangyu Mao, Ziyue Li, Xingyu Zeng, Rui Zhao | cs.AI | Accepted in NeurIPS-2023 Workshop on Foundation Models for Decision
Making | null | cs.AI | 20230807 | 20231107 | 3 2 0 2 v o N 7 ] I A . s c [
3 v 7 2 4 3 0 . 8 0 3 2 : v i X r a
# TPTU: Large Language Model-based AI Agents for Task Planning and Tool Usage
Jingqing Ruanâ â¡ [email protected]
Yihong Chenâ â¡ [email protected]
# Bin Zhangâ â¡ [email protected]
# Zhiwei Xuâ â¡ [email protected]
# Tianpeng Baoâ [email protected]
Guoqing Duâ [email protected]
baotianpeng @sensetime.com
Shiwei Shiâ [email protected]
Hangyu Maoâ â [email protected]
# Ziyue Li + [email protected]
# Xingyu Zeng [email protected]
# Rui Zhao [email protected]
SenseTime Research
# Abstract
With recent advancements in natural language processing, Large Language Models (LLMs) have emerged as powerful tools for various real-world applications. De- spite their powers, the intrinsic generative abilities of LLMs may prove insufficient for handling complex tasks, which necessitate a combination of task planning and the usage of external tools. In this paper, we first propose a structured framework tailored for LLM-based AI Agents and then discuss the crucial capabilities neces- sary for tackling intricate problems. Within this framework, we design two distinct types of agents (i.e., one-step agent and sequential agent) to execute the inference process. Subsequently, we instantiate the framework using various LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on typical tasks. By highlighting key findings and challenges, our goal is to provide a helpful resource for researchers and practitioners to leverage the power of LLMs in their AI applica- tions. Our study emphasizes the substantial potential of these models while also identifying areas that need more investigation and improvement. The code and resources will be available on GitHub.
# Introduction
Large Language Model (LLM) [1] is a recent breakthrough in natural language processing (NLP) research. These models are trained on massive amounts of text data and can solve a wide range of tasks, even those that were not included in their training dataset, known as âemergingâ ability. This
â
+
â¡
â
These authors contribute equally to this work. External discussion and ideation. These authors work as research interns at SenseTime Research. The corresponding author.
© AAA. our agents +/ \â+ based on different LLMs les @ ChatGLM _InternLM ChatGPT Claude
Figure 1: Our LLM-based agents plan tasks and use tools.
ability is especially evident in the tasks of few-shot [2] and zero-shot [3] learning, where LLMs can perform well with minimal or even no fine-tuning to adapt to a new task.
However, the application of LLMs in real-world settings presents unique challenges. On the one hand, LLMs have proved to be incompetent in solving logic problems such as mathematics, and their training data is also out of date (e.g., the knowledge cutoff date for GPT-4 [4] is up to January 2022). Teaching LLMs to use tools such as calculators, calendar, or search engines can help prevent them from hallucinating [5]. On the other hand, despite their impressive problem-solving abilities, the successful integration of these models into complex systems often requires more than just task understanding - it requires the capacity to manipulate various tools and interact effectively with users. This is exemplified in systems like AutoGPT 1, BabyAGI 2, and ChatGPT-plugins 3, which leverage LLMsâ capabilities beyond merely generating well-written texts and programs. In these systems, LLMs operate as the central controller, manipulating different tools and interacting with humans, thus taking on the role of Artificial Intelligence Agents (AI Agents). In addition to being central planners, LLMs are often used as intermediaries between macro plans and low-level tool calls or as specific tools. As such, LLMs are seen as a crucial approximation of the linguistic world model in real-world systems.
In this paper, we propose a structured framework for LLM-based AI Agents to evaluate the existing LLMsâ planning and tool-using ability and discuss the necessary abilities of such LLM-based AI Agents. Furthermore, we instantiate the framework with different LLMs and evaluate their Task Planning and Tool Usage (TPTU) abilities on several tasks. As shown in Figure 1, we use the Doraemon as an analogy of our LLM-based agents: Doraemonâs magic 4D pocket consists of millions of gadgets (the Tool Set), and Doraemon needs to pick the right tools and solve tasks in a right order. Our main contributions are summarized as follows:
1. We propose a structured framework tailored for LLM-based AI Agents to evaluate the TPTU abilities of the existing open-source LLMs.
2. We design two distinct types of agents, namely, one-step agent and sequential agent, to execute the inference process of conducting sub-tasks in a once-for-all or sequential manner, respectively. We provide detailed empirical results and analysis.
3. Our study reveals significant potential in utilizing LLMs for complex tasks. Furthermore, we conclude four following potential weaknesses of LLM-based agents: failing to output in a specific format, struggling to grasp task requirements, over-utilizing one tool, and lack of summary skills. These observations could spark some insights and shed light on the areas that deserve further investigation and improvement.
1https://github.com/Significant-Gravitas/Auto-GPT 2https://github.com/yoheinakajima/babyagi 3https://openai.com/blog/chatgpt-plugins
2
Input | [ intermediate Output Final Output | Necessary Ability for LLM-based AI Agents Designed Prompt Task Instruction System Description âHow much budget is required to provide a 1008 incentive for Tool Description each colleague who has worked for five years.â > Python Code > New Tools PythonREPL() with a calculator Subtask Subtask N | Tool Set Get Final Answer T Calculator() | Database() | PythonREPL() Summarization Ability âDepli ona i group of suspects i Perception Abi â i rep Learning Ability i LLMs Reflection Ability H > = Memory Ability H G oui) me Correct Result or Exception ! Error : Task Planning Ability Tool Usage Ability: Selection + Creation + Execution : High-level Plans Selected Tool |âââ} Created Tool |ââ+ Tool Execution | ; Subtask 1 ; LLM iguring out how many colleague Database() > i who has worked for five years SOL Code i from the database; taking it as X. > Database() i Subtask 2 i Calculating the value of 100*X Calculator) uM : Final Answer
Figure 2: The proposed framework for LLM-based AI Agents.
# 2 Method
To the best of our knowledge, the study of âAgentâ, âAutonomous Agentâ, âAI Agent" and âMulti- Agentâ has been a central part of AI research for decades [6â11], aimed at understanding and building intelligent and autonomous systems, but there is currently no standardized definition for AI Agents, particularly those that are based on LLMs.
In this paper, the Artificial Intelligence Agent (AI Agent) is defined as a program that employs AI techniques to perform tasks that typically require human-like intelligence. AI Agents can take many forms, from simple chatbots to complex autonomous systems that interact with their environment and make decisions in real-time. They can be trained using a variety of machine learning techniques, including supervised, unsupervised, and reinforcement learning, and can be programmed to perform specific tasks or learn from their experiences in order to improve their performance over time.
# 2.1 Agent Framework
We are particularly interested in the AI Agent that employs the LLM techniques (i.e., LLM-based AI Agent), due to its high efficiency and flexibility in various tasks and domains. Specifically, we design our AI Agent framework with six components as shown in Figure 2:
3
1. Task Instruction. This is the explicit input of the agent. In practical systems, the task instruction comes from human users of the systems. For example, in a human resources (HR) system, the user may give a task instruction: How much budget is required to provide a 100$ incentive for each colleague who has worked for five years? In contrast, in a criminal investigation system, the user may give a task instruction: Deploy surveillance on a group of suspects.
2. Designed Prompt. This is an additional form of input for the agent, derived from tasks that the human users anticipate the AI Agent will complete. Humans can craft specific instructions or demonstrations to steer the LLM-based AI Agents toward generating suitable responses. These guiding inputs could encompass system instructions, tool descriptions, few-shot demonstrations, chat history, or even error output.
3. Tool Set. It is another input for the agent, which refers to the set of external resources, services, or subsystems that the AI Agent can utilize to aid in its tasks. This could include databases for information retrieval [12], APIs for interacting with external systems [5], other AI models specialized for tasks such as image recognition or sentiment analysis [13], or even non-AI tools and resources such as web scraping tools or data visualization libraries [14]. The toolset expands the capabilities of the AI Agent, enabling it to access and process information beyond its internal knowledge, interact with other systems, or perform specialized tasks that it may not be capable of on its own. For example, an AI Agent might use a weather API to fetch current weather information, or a Python interpreter to solve the mathematical question.
4. LLM. This is the core component of the system that interprets the task instructions and prompts, interacts with the toolset, and generates the intermediate outputs and final answers. In this context, we utilize publicly available large language models such as ChatGPT, GPT-4 [4], InterLM [15], and others.
5. Intermediate Output. This represents the output generated by the LLM-based AI Agent after it processes the task instructions and prompts, and interacts with the toolset. There are three typical intermediate outputs: (1) the high-level plans to fulfill the original user instruction, (2) selected and created tools to fulfill each subtask in the plans, and (3) the results or errors produced after tool execution. The output can be reviewed and refined, either by the AI Agent itself or with human oversight, to ensure it is accurate and meets the requirements of the task instruction.
6. Final Answer. This is the output that the AI Agent summarizes and provides to the user after all processing (including task planning, tool usage, and possibly error feedback) has been completed.
# 2.2 Agent Ability
To apply LLM-based AI Agents to augment or replace human decision-making in real-world applica- tions, the agents typically require the following abilities:
1. Perception Ability: AI Agents must be able to perceive the task instruction from human and system specifications.
2. Task Planing Ability: AI Agents should have the capacity to create a step-by-step plan for complex task composition based on the perceived instruction and specifications. This usually involves the generation of critical subtask sequences, and the ability to adjust the plan dynamically in response to changes in the task or environment.
3. Tool Usage Ability: On the one hand, AI Agents should possess the capacity to select a variety of existing tools or resources to execute complex tasks. On the other hand, AI Agents should create new tools by converting the task requirements. This ability enables the AI Agent to extend its capabilities beyond LLM itself and the existing tools by leveraging the vast resources available in the digital world. Finally, AI Agents should be able to execute the selected or created tools for truly grounding the human request based on the resources and constraints of systems.
4. Learning/Reflection/Memory (from Feedback): AI Agents should be capable of learning from feedback, including correct results and exception errors. They should incorporate
4
memory, such as logging or chat history, and reflection to adapt their plans or decisions. This allows the agents to improve their performance and efficiency in task execution continuously.
5. Summarization: After several rounds of interaction with humans, tools, and systems, AI agents can ultimately complete the original task provided by the users. At this point, AI agents should be able to summarize the interaction history and provide a final answer that is concise and easy to understand for the users.
To endow AI Agents with the aforementioned abilities, some techniques that can be used include chain-of-thought (CoT) and vector databases, as shown in Table 1.
Table 1: A simple illustration of the techniques for endowing the key ability.
Ability Possible Techniques Perception Task Planing Tool Usage (Selection/Creation/Execution) Learning/Reflection/Memory Summarization Multi-input Fusion Zero-shot CoT and Few-shot CoT Text Matching/Code Generation/ Action Grounding RLHF/Multi-agent Debate/ Vector Database Attention Mechanism and Natural Language Generation
# 2.3 Agent Design
Task planning and tool usage represent the cornerstone of LLMâs abilities. Others like perception, learning/reflection/memory (from feedback), and summarization are indeed critical, but they primarily serve to enhance and support these two core competencies. Therefore, concentrating on these two key competencies - Task Planning and Tool Usage (TPTU for short) - we have devised two distinct types of AI agents, as depicted in Figure 3:
⢠The first one, named as the One-step Agent (TPTU-OA), adopts a global perspective to interpret the original problem, effectively breaking it down into a sequence of sub-tasks in a single instance. This strategy fully harnesses the modelâs comprehensive understanding capabilities to map out the problem-solving steps for all sub-tasks at once. This method underscores the significance of a holistic understanding and planning of the overall task, albeit it might lack flexibility when dealing with individual sub-tasks.
⢠The second type, referred to as the Sequential Agent (TPTU-SA), emphasizes tackling the current sub-task at hand. Upon successfully resolving the ongoing sub-task, this agent requests the LLMs to provide the succeeding sub-task. This approach enables the model to maintain a clear and concentrated focus throughout the problem-solving journey, tackling issues incrementally. Such a methodology allows for continuous feedback and progress within the confines of addressing a broader problem.
These two distinct agent models represent two disparate problem-solving strategies - the one-step and sequential resolution 4. In our subsequent experiments, we aim to understand their respective strengths and weaknesses and how they can be best utilized to leverage the capabilities of LLMs in real-world problem-solving scenarios.
# 3 Evaluation
We instantiate the proposed LLM-based AI Agent framework (TPTU-OA and TPTU-SA) with different LLMs and evaluate their performance on typical tasks.
4One can also combine the two strategies to design a hierarchical agent, but this is beyond the scope of this paper.
paper.
5
Problem One-step Plans 1. SQL generator: âFiguring out how many colleague who has worked for five years from the database; taking it as X.â 2. Python generator: âCalculating the value of 100*X with a calculatorâ How much budget is required| to provide a 1008 incentive for each colleague who has worked for five years?â
(a) One-step Agent (TPTU-OA)
Sequential Plan 1 ee âHow much budget is required to provide a 1008 incentive for each colleague who has worked for five years?â SQL generator: âFiguring out how many colleague who has worked for five years from the | database; taking it as s Sequential Plan 2 , = __sF Python generator: âCalculating P|) the value of 100%X with a | calculator.â a
(b) Sequential Agent (TPTU-SA)
Figure 3: The workflows of the One-step Agent and the Sequential Agent are specifically designed to assess the Task Planning and Tool Usage abilities of LLMs.
# 3.1 Preparations
Before beginning our evaluation, we first outline the preparations. We will give detailed descriptions of the datasets, available tools, and popular large language models.
# 3.1.1 Datasets
We first clarify the motivations behind our choice of tools for evaluation. The selection was guided by two primary factors: the number of tools to be evaluated and the specific tools to be included.
Firstly, regarding the number of tools, it is important to state that our proposed evaluation framework is extensible. It can incorporate any number of tools as pluggable components to be managed by the LLM-based AI agents. Secondly, looking at the current work on tool-augmented LLMs, such as T-Bench [16] and ToolBench [17], we see that only a handful of tools are launched and executed in a single scenario. Meanwhile, API-Bank [18], in a single scenario, typically dispatches only one API tool and awaits its response. APIBench [19] and ToolAlpaca [20] do not even execute a tool response. Hence, for the sake of simplicity and focus in our evaluation, we have decided to primarily assess two tools (which can be called multiple times) within a single scenario.
Secondly, we also need to decide which specific tools should be used for evaluation. Consider a real-world scenario where we pose the question: âHow much budget is required to offer a $100 incentive to each employee who has been with the company for over five years?". To answer this, we first need to retrieve the relevant data from a database, typically using SQL, to find the number of eligible employees. Then, we need to perform a mathematical calculation to estimate the total budget. Such scenarios are quite common in daily life where the formulation and resolution of a question often involve SQL and mathematical tools.
Recognizing the importance of these tools, we have chosen to focus our evaluation on SQL and Python generators, which represent the capabilities of database querying and mathematical computation, respectively. To this end, we have prepared 120 question-answer pairs that vary in complexity. These pairs provide a rigorous assessment of the LLM-based AI agents in understanding, generating, and
6
utilizing these essential tools. For further information on these queries and their corresponding demonstrations, please refer to Appendix A.
# 3.1.2 Tools
We have defined a total of 12 available tools for the selection of the LLM-based AI agents for evaluation. They are defined as follows:
⢠SQL generator: Given an input question and a database, create a syntactically correct SQLite query statement.
⢠Python generator: Given an input question and some information, generate a syntactically correct Python code.
Weather query tool: Given a location, output the current real-time weather at that location.
Image generator: Given a text description, generate a related image.
⢠Text extractor: Given a link to an image, extract the corresponding text and its position coordinates.
Translator: Given a piece of text, translate it into other languages.
⢠Bing Searcher: Given a piece of text, conduct a search on the Bing browser and return content.
⢠Shell generator: Given an input question and some information, generate a syntactically correct Shell code.
⢠Java generator: Given an input question and some information, generate a syntactically correct Java code.
Wikipedia searcher: Given a piece of text, conduct a search on Wikipedia and return content.
⢠Office software: Given a text description, automatically generate corresponding long docu- ments or spreadsheets or PPTs.
⢠Movie player: Given a movie name, automatically play the corresponding movie resources.
# 3.1.3 LLMs
The LLMs evaluated in this paper are listed in Table 2, elaborated as follows:
⢠GPT series developed by OpenAI boasts a powerful language model with a vast number of parameters, enabling it to tackle intricate problems efficiently. This paper aims to evaluate the performance of ChatGPT, which balances the performance with costs (the number of OpenAI API calls).
⢠Claude is committed to maintaining honesty and ensuring user safety, which is developed by Anthropic. With its impressive size, Claude ranks among the largest language models globally and poses a formidable challenge to ChatGPT as a strong competitor.
⢠InternLM, a sophisticated language model developed by Shanghai AI Lab, boasts a multi- round dialogue capability and an impressive ability to comprehend super-long text. This language model is meticulously designed to cater to the nuances of the Chinese language, enabling it to comprehensively understand and effectively process Chinese text. Here, we adopted the version with 120 billion parameters.
⢠Ziya is an expansive and robust pre-training model developed by IDEA, derived from the LLaMa with 13 billion parameters. This comprehensive model exhibits a wide range of capabilities, including translation, programming, and mathematical calculations. Notably, it stands out as a bilingual LLM, highlighting its ability to effectively process and comprehend text in Chinese.
⢠ChatGLM, developed by Tsinghua University, is an open-source dialogue language model that supports bilingual Q&A in Chinese and English, with a particular focus on Chinese optimization. Built on the General Language Model (GLM) architecture and utilizing model quantization technology, the ChatGLM can be easily deployed on consumer-grade graphics cards, enabling local implementation by users.
7
⢠Chinese-Alpaca-Plus is achieved by extending LLaMAâs existing vocabulary with an additional 20,000 Chinese tokens from Meta AI (formerly known as Facebook AI Research Laboratory). In this version, we use a model with 33 billion parameters. The training text has been expanded to 120GB, and the fine-tuning instruction data has been increased to 4.3M.
Table 2: The LLMs evaluated in this paper.
Organization Model Name Model Parameters OpenAI Anthropic Shanghai AI Lab IDEA Tsinghua University - ChatGPT[21] Claude[22] InternLM Ziya-13B ChatGLM-130B[23] Chinese-Alpaca-Plus-33B[24, 25] 200B >52B 120B 13B 130B 33B
# 3.2 Evaluation on Task Planning Ability
In this section, to evaluate the planning capabilities of the LLM-based AI agents, we have structured the evaluations as follows.
For TPTU-OA, we begin by examining the agentsâ ability to plan the order of tool use. This is followed by an evaluation of the agentsâ capacity to not only plan the sequence of tools but also the corresponding subtask descriptions. Subsequently, we conduct a specialized planning evaluation where the agents must generate multiple sequences of key-value pairs of the form {tool: subtask description} in complex problem teardowns. Moreover, we expand the toolset with additional, unrelated tools to further challenge and reassess the planning ability of the LLM-based AI agents.
For TPTU-SA, we follow the regime that the agent should generate multiple sequences of key-value pairs of the form {tool: subtask description} for evaluation.
# 3.2.1 TPTU-OA: Tool Order Planning
Here, we utilize two kinds of tools for problem-solving: the SQL generator, which retrieves data from databases, and the Python generator, adept at addressing mathematical questions.
To validate the capacity of the LLM-based AI agents to strategically plan for the tool order, we designed the prompt as shown in Figure 8 of Appendix B. This design is motivated by the goal to assess the ability of LLM-based AI agents to understand complex problems, subsequently decomposing them into a sequence of simpler tasks executed by appropriately selected tools. Specifically, we require the LLM-based AI agent to follow our instructions, select tools from our pre-defined tool set with detailed function descriptions, conform to the given format strictly, and understand the demonstrations to learn from them.
Upon feeding these prompts into the LLM-based AI agents under evaluation, we obtained the following accuracy rates for the tool planning, as shown in Table 3.
# Table 3: The evaluation results for the planning of tool order generation.
Model Accuracy Model Accuracy ChatGPT 100% Claude 100% ChatGLM Chinese-Alpaca-Plus 45% 20% Ziya 45% InternLM 80%
The results of our experiments indicate that models, notably Ziya and ChatGLM, frequently grapple with the generation of lists in the correct format. For other models, the predominant challenges lie in
8
generating tools in the correct sequence or in the occasional omission of necessary tools. Nonetheless, the issue of parsing list formats is generally negligible.
These findings suggest that the majority of LLM-based AI agents possess a fundamental capability to analyze the tool needs of a given problem and understand its task requirements. To further explore whether these LLM-based AI agents can effectively break down the original problem into sub-tasks, we proceed to the following section.
# 3.2.2 TPTU-OA: Tool Order Planning and Subtask Description Generation
Simply planning the order of tool usage is not sufficient to fully address a problem. To truly solve it, we need to provide a guide or instructions for the usage of each tool, that is, a decomposed subtask description. Therefore, we can decompose the original complex problem into two separate sequences. One sequence represents the order in which the tools are utilized, while the other sequence corresponds to the subtask descriptions that each tool in the tool sequence aims to resolve. A problem is only truly solved when both the tool and subtask description sequences have been successfully planned. In order to verify whether LLM-based AI agents truly have the ability to solve complex problems, we designed a new prompt as shown in Figure 9 of Appendix B. The main improvement is to plan the corresponding subtask description for each tool after the tool planning is completed.
Table 4: The evaluation results for the planning of tool order and subtask description generation.
Model Accuracy Model Accuracy ChatGPT 55% Claude 15% ChatGLM Chinese-Alpaca-Plus 10% 0% Ziya 10% InternLM 45%
After feeding the prompt to these LLM-based AI agents, we get results shown in Table 4.
Although the generation of tool sequences and their corresponding subtask descriptions might be an effective way to problem-solving, there is a significant decrease in accuracy for all LLMs as can be seen. We hypothesize that there are a few potential drawbacks to this method:
1. Difficulty in Error Tracking and Debugging. Generating the complete tool and subtask sequences may make it more challenging to track and debug errors. If an error arises within the sequence, it might require a total regeneration instead of a simple modification or repair to the erroneous part.
2. Tool-Subtask Pairing Issue. If all tool sequences and subtask descriptions are generated independently, thereâs an inherent risk of misalignment between the tools and their corre- sponding subtasks. This could potentially lead to an improper pairing, which, in turn, could result in a flawed or ineffective solution that fails to appropriately resolve the given problem.
3. Lack of Flexibility. The approach may lack this flexibility when facing complex problems requiring adjustments to the tool or subtask sequence.
4. Dependency on Global Information. Generating the entire tool and subtask sequences requires a global understanding and planning of the entire problem. However, in some instances, certain parts of the problem might not be clear at the early stages of problem- solving, which could pose challenges within this framework.
# 3.2.3 TPTU-OA: The Planning of Tool-Subtask Pair
To mitigate the aforementioned issue, we propose a novel approach to foster flexible problem-solving with the LLM-based AI agent. We prompt the agent to generate multiple sequences, each consisting of a key-value pair in the format of {tool: subtask description} that associates a tool with its respective subtask description. This allows us to simultaneously plan the tool choice and subtask without the risk of improper matching. Moreover, it offers the flexibility to update the planned sequences in real-time based on evolving problem feedback, enhancing adaptability and efficiency when addressing complex tasks.
9
With this consideration, we have designed a unique prompt that encourages this advanced problem- solving strategy. In the following section, we delve into the specifics of this prompt design in Figure 10 of Appendix B. The key improvement in this prompt is its directive for the LLM-based AI agents to stringently adhere to the predefined dictionary format. To facilitate this, we offer several demonstrations in our desired format, serving as references for the language model to follow.
# Table 5: The evaluation results for the planning of Tool-Subtask pair.
Model Accuracy Model Accuracy ChatGPT 75% Claude 90% ChatGLM Chinese-Alpaca-Plus 0% 5% Ziya 20% InternLM 55%
After feeding the prompt to these LLM-based AI agents, we get results shown in Table 5.
Analyzing the results from Tables 4 and 5, we observe a marked improvement of 52.9% when the tool-subtask pairs are generated in a unified format compared to separate generation of tools and subtasks.
This significant performance enhancement can likely be attributed to the close coupling between tools and their associated subtasks in our unified generation strategy. When tools and subtasks are generated separately, there is a potential disconnect or lack of coherence between the two, which could lead to less accurate or efficient solutions. In contrast, by generating tool-subtask pairs together, we ensure that each tool is directly tied to its relevant subtask, leading to a more coordinated and effective problem-solving approach. This might explain the observed increase in overall performance.
# 3.2.4 TPTU-OA: The Planning of Tool-Subtask Pair with Unrelated Tools
So far, our analysis and evaluation have been primarily focused on the LLM-based AI agentsâ proficiency in planning with specific tools. However, we are also interested in how it would perform when faced with many irrelevant or similar tools. Therefore, for a more comprehensive assessment, we expanded the prompt in Table 10 to include an additional ten unrelated tools, as illustrated in Figure 11 of Appendix B.
Table 6: The evaluation results for the planning of Tool-Subtask pair with unrelated tools.
Model Accuracy Model Accuracy ChatGPT 70% Claude 90% ChatGLM Chinese-Alpaca-Plus 0% 5% Ziya 10% InternLM 50%
After feeding the prompt to these LLM-based AI agents, we get results shown in Table 6. The results from our expanded evaluation demonstrate that even when presented with irrelevant or similar tools and descriptions, LLM-based AI agents consistently avoid selecting these unrelated tools (i.e., the accuracy has remained unchanged or exhibited only a marginal decrease compared with Table 5). This outcome indicates the effectiveness of our designed prompt, which successfully guides the LLM-based agents to understand the appropriate tool sequence for complex problem decomposition.
This observation reinforces the notion that a well-structured and informative prompt can efficiently guide AI agents to understand the core essence of the problem, thereby enabling them to sift through irrelevant information and focus on key tasks. This successful discrimination against unrelated tools also points towards the modelsâ ability to understand the specific context of a problem and select the appropriate tools, thereby enhancing the overall problem-solving process.
# 3.2.5 TPTU-SA: The Planning of Tool-Subtask Pair Generation
Upon identifying the drawbacks of first generating a list of tools and then generating corresponding subtask descriptions, we decided to focus subsequent tests on the generation of tool-subtask pairs.
10
Consequently, in this section, we evaluate the capability of TPTU-SA to generate these tool-subtask pairs.
To achieve the goal of recursively generating tool-subtask pairs, we have designed prompts as illustrated in Figure 12 of Appendix B.
Table 7: The evaluation results for the planning of Tool-Subtask with the sequential agent.
Model Accuracy Model Accuracy ChatGPT 80% Claude 100% ChatGLM Chinese-Alpaca-Plus 0% 0% Ziya 10% InternLM 65%
The evaluation results are shown in Table 7. Compared with results shown in Table 5, TPTU-SA generally performs better than TPTU-OA especially for highâperforming LLMs (e.g., ChatGPT, Claude and InternLM). We propose the following potential reasons for this observation:
1. Sequentiality Mimics Human Problem-Solving: In real-world scenarios, humans tend to solve complex problems by breaking them down into smaller, manageable subtasks which are often handled sequentially. Sequential agents are designed to mimic this step-by-step approach, which might inherently suit complex problem-solving better.
2. Richer Contextual Understanding: Sequential agents are exposed to the outcome of each previous subtask before moving on to the next one. This iterative process could facilitate a richer understanding of the problem context, enabling more accurate task planning and tool usage.
3. Flexibility in Task Management: In comparison to one-step agents, sequential agents might have more flexibility in managing tasks. They have the opportunity to correct errors or adjust their strategy after each step, which can lead to improved overall performance. 4. Improved Learning From History: The sequential process provides a history of actions and results which can be beneficial in learning. The agent can use this history to make better predictions about what tool to use next or what subtask to tackle, leading to more accurate and efficient problem-solving.
These points of analysis suggest that the structure and operation of sequential agents inherently confer certain advantages in complex problem-solving scenarios, leading to their superior performance.
# 3.3 Evaluation on Tool Usage Ability
Before evaluating the end-to-end multi-tool usage ability of LLM-based AI agents, we first evaluate the effectiveness of single-tool usage for SQL generation and mathematical code generation.
Subsequently, to assess the end-to-end performance of LLMs across various tools, two types of agents (TPTU-OA and TPTU-SA) were developed and several LLMs were subjected to testing under these agents. The role of the agents is to break down complex questions into simpler sub-questions and plan corresponding tools to solve them, based on the available toolset and corresponding tool descriptions.
# 3.3.1 The effectiveness of Single Tool Usage
Our aim is to systematically assess how effectively these models can use various tools, focusing on their proficiency with SQL and other coding languages.
The Effectiveness of simple SQL Creation Using the schemas provided in Table 12 and Table 13, we construct questions similar to those in Table 14, and refer readers to Appendix A. These questions are posed to various LLMs using our specifically designed prompts in Appendix B.
Following the tailored prompts, the LLMs are evaluated based on their responses to the presented queries. The results of this comprehensive assessment are compiled and exhibited in Figure 8.
This verifies the capabilities of each LLM in handling varying simple single-table SQL queries, thus providing a basis for comparison and analysis.
11
# Table 8: The evaluation results for simple SQL questions.
Model Accuracy Model Accuracy ChatGPT 90% Claude 100% ChatGLM Chinese-Alpaca-Plus 30% 20% Ziya 50% InternLM 90%
The Effectiveness of Complex Nested SQL Creation Using the schemas provided in Ta- ble 15, 16, 17, and 18, we construct questions similar to those in Table 19, and refer readers to Appendix A. For complex nested SQL questions, to further verify the SQL tool creation capability of LLMs, we have designed two types of prompts. One is the direct-guidance type, which explicitly informs the model that it needs to generate nested SQL query statements, as shown in Figure 14 in Appendix B.
The other is based on the Chain-of-Thought (CoT) [26] approach, which leverages the modelâs ability to reason step by step to comprehend and craft SQL tools, and the prompt is shown in Figure 15 in Appendix B. This method guides the model to sequentially generate SQL query clauses based on the problem context, thus breaking down the complex query generation task into smaller and manageable subtasks. This approach provides the model with a structured way to handle complex SQL tasks and showcases its capacity to engage in incremental reasoning and problem-solving.
The design of these two types of prompts serves as the backbone of our evaluation for complex nested SQL questions. While the direct-guidance approach focuses on testing the modelâs raw ability to generate SQL queries when explicitly instructed, the CoT-based approach evaluates a more nuanced capability: the modelâs reasoning and problem-solving skills in a step-by-step manner. Both these methods present unique challenges and offer valuable insights into the strengths and potential areas of improvement for the large language modelâs SQL tool generation ability. Subsequently, we will explore these two dimensions based on our experimental evaluations shown in Table 9.
Table 9: The evaluation results for complex nested SQL questions.
Model Direct-based CoT-based ChatGPT 80% 80% Claude 100% 100% Ziya 50% 40% Model Direct-based CoT-based ChatGLM Chinese-Alpaca-Plus 60% 70% 0% 0% InternLM 60% 50%
From the above results in Table 9, it is clear that different models possess varying levels of proficiency in handling complex nested SQL tasks. Some models, like Claude, exhibit a robust capability in SQL generation, no matter whether the approach is direct or CoT-based. Most of these models demonstrate the SQL tool usage capability.
Specifically, some models such as ChatGLM show a distinct preference for the CoT-based approach, their performance improves when problems are broken down into smaller, manageable sub-tasks. This suggests that these models may have a stronger ability in sequential problem-solving and benefit more from step-by-step guidance. Conversely, models like Ziya and InternLM show a drop in performance when tasks are guided in the CoT-based format. This might indicate challenges in managing dependencies between sub-tasks or handling the continuity in sequential problem-solving. Lastly, Chinese-Alpaca-Plus shows significant room for improvement in complex SQL generation tasks. This shows that not all models are equally suited to handle advanced problem-solving involving nested SQL queries.
Overall, these findings underscore the importance of tailoring evaluation and training methodologies to the individual strengths and weaknesses of each model. By adopting this approach, we can better understand the performance variations across different models and provide targeted improvements to enhance their problem-solving abilities. Furthermore, this analysis highlights the potential of
12
LLM-based agents in real-world applications, and the need to push their boundaries through continued research and development.
The Effectiveness of Mathematical Code Creation Following our evaluation of the LLMâs profi- ciency in creating complex SQL queries, we now shift our focus to another tool creation: the creation of mathematical code. To the best of our knowledge, while large language models possess significant capabilities, they often fall short of providing highly accurate solutions to mathematical problems. Guiding these LLMs to generate mathematical code, and subsequently leveraging external tools to execute and derive the solutions, could significantly enhance their ability to tackle mathematical challenges.
In the upcoming section, we will conduct a detailed evaluation of guiding these LLMs to generate mathematical code. We aim to shed light on the true capability of these models in generating mathematical code and to elucidate the extent to which they can be utilized to aid in mathematical problem-solving. The prompt about how to guide LLMs is shown in Figure 16 in Appendix B.
# Table 10: The evaluation results for mathematical questions.
Model Accuracy Model Accuracy ChatGPT 90% Claude 85% ChatGLM Chinese-Alpaca-Plus 0% 55% Ziya 50% InternLM 95%
The results shown in Table 10 indicate that the capabilities of LLM-based agents to generate math- ematical code vary considerably. High-performing models like ChatGPT, Claude, and InternLM display excellent proficiency, suggesting their potent ability to solve complex mathematical tasks. Middle-tier models, such as Ziya, show moderate success, indicating the potential for improvement and adaptability with the right training and optimization. Surprisingly, Alpaca demonstrated a notable proficiency in mathematical tasks, despite its poor performance in SQL generation, suggesting a possible inclination towards mathematical problems. In contrast, ChatGLM struggles significantly with mathematical code generation, underlining a potential weak spot in its capabilities and the need for focused improvement in this area.
Overall, these results underscore the task-dependent nature of LLMsâ capabilities and highlight the importance of recognizing their individual strengths and weaknesses for optimal model guidance and enhanced problem-solving.
# 3.3.2 TPTU-OA and TPTU-SA: Tool Usage for Multiple Tools
We now aim to utilize the one-step agent and sequential agent, which we designed, to conduct an evaluation involving multiple tools. Corresponding prompts for each agent type have been crafted and are presented in Figure 17 and Figure 18 of Appendix B, respectively.
In this phase of the evaluation, we need to automatically invoke the respective tools through code and produce the results. Given that user interface-based LLMs lack the capability to call external tools, we will only utilize the following four API-based LLMs (ChatGPT, Ziya, Chinese-Alpaca, and InternLM) for this comprehensive evaluation of external tool usage ability.
# Table 11: The evaluation results for end-to-end ability of multiple tools.
Model TPTU-OA TPTU-SA ChatGPT Ziya Chinese-Alpaca-Plus 50% 55% 0% 0% 0% 0% InternLM 15% 20%
With agents mentioned above, the final results are presented in Table 11. The evaluation results demonstrate varying levels of task planning and tool usage capabilities among the four API-based LLMs. In the TPTU-OA evaluation, ChatGPT achieved a performance rate of 50%, significantly outperforming the other models, with InternLM at 15%, while both Ziya and Chinese-Alpaca did not manage to complete any tasks successfully, resulting in a score of 0%. In the TPTU-SA evaluation,
13
an overall slight improvement was observed. ChatGPT maintained its leading position, with a slightly improved performance rate of 55%. InternLM also exhibited better performance, achieving a score of 20%, whereas Ziya and Chinese-Alpaca-Plus again failed to register any successful task completion.
These results reflect a notable discrepancy in the performance of LLMs when it comes to using external tools. ChatGPT and InternLM have demonstrated some ability to navigate these tasks, but their performance rates suggest there is significant room for improvement. Ziya and Chinese- Alpaca-Plusâ performance indicates a struggle to effectively utilize external tools in their current state.
The differential performance between the TPTU-OA and TPTU-SA evaluation hints at the possible impact of the agent design on the LLMsâ task execution ability. In particular, the performance increase under the sequential agent framework suggests that breaking down tasks into sequential steps might help LLM-based AI agents better utilize external tools. This insight could prove valuable in future improvements and developments of LLM-based AI agents. However, even with this approach, it is clear that LLM-based AI agents are far from perfect when it comes to effectively using external tools for complex tasks. This finding underlines the importance of further investigation and improvement in this domain.
# Insightful Observations
Upon closer observation of our experimental results, we have identified several phenomena that deserved further exploration. These findings serve to broaden our understanding of LLM-based agentsâ behavior and capabilities and provide essential insights that could shape future research in this field. In the following, we will dissect these phenomena as shown in Figure 4 - 7, casting light on the weaknesses of LLM-based agents in the context of task planning and tool usage.
1. Misunderstanding Output Formats: LLMs frequently encounter difficulty when output is required in specific formats such as lists or dictionaries. One such example includes incon- sistencies between the number of tools and corresponding subtasks, leading to formatting issues that hinder the correct execution of tasks.
How many more concerts has Jay Chou held than Li Ronghao? Is this number bigger than the square root of 10? Tools: ["Python generator", "SQL generator"] LLMs Subtasks:["How many concerts did Jay Chou perform?", Srey "How many concerts did Li Ronghao perform?", = "How many more concerts did Jay Chou perform than Li Ronghao?", a "Is the number bigger than the square root of 10?"]
Figure 4: Issue-1: Inconsistencies between the number of tools and corresponding subtasks.
2. Struggling to Grasp Task Requirements: LLMs might incorrectly disintegrate subprob- lems or apply unsuitable tools to carry out the subproblem. For example, an LLM might attempt to solve a purely mathematical problem by employing an SQL tool or could misun- derstand similar terms like cube extraction and cube roots.
3. Endless Extensions: LLMs tend to overutilize a particular tool, even in instances where a single use would suffice for the correct result. This issue can lead to extended and nonsensical planning, where the same subtask is repeatedly solved.
4. Lack of Summary Skills: LLMs do not take into account the responses to subproblems, relying instead on their internalized knowledge to generate the final answer. This may lead to a scenario where the final response only addresses a portion of the original query.
By identifying and addressing these common issues, we stand a better chance at improving and refining LLMs, thereby unlocking their full potential.
14
How many singers have the average number of albums of singers in Beijing? Gives the square root of this number. Tools: ["SQL generator", "SQL generator", "SQL generator"] Subtasks:["What is the average number of albums by singers in Beijing?", "How many singers have the average number of albums by singers in Beijin | "What is the square root of this number?"] al
Figure 5: Issue-2:Solve a purely mathematical problem by employing a SQL generator.
Exclude the two birthplaces with the most singers, provide the number of singers from other birthplaces, and calculate the factorial of this number. The Tool_Query for the first execution of the tool is: {{"SQL Generator": "Not the two birthplaces with the most singers"}} The Tool_Query for the second execution of the tool is: { "SQL Generator": "Exclude the two birthplaces with the most singers, provide the number of singers from other birthplaces"}} The Tool_Query for the third execution of the tool is: {{"SQL Generator": "Exclude the two birthplaces with the most singers, provide the number of singers from other birthplaces, and calculate the factorial of this number"} }
Figure 6: Issue-3: Unnecessary repetition of subtasks.
Please use SQL language to query who are the singers who have not been nominated in the Golden Melody Awards? Give their names. Jay Chou, Cui J it LLMs Answer: Jay Chou, Cui Jian ms a D> rnternim
Figure 7: Issue-4: Answering questions using common sense instead of generating code.
# 4 Related Work
The remarkable capacity for usage and creation of tools have facilitated the transcendence of our innate physical and cognitive constraints, thereby profoundly advancing the progress and prosperity of human civilization and society. The swift advancement of LLM has rendered it feasible to use and create tools like humans. The integration of specialized tools with LLM has unlocked substantial potential in addressing intricate tasks. In this section, we offer a concise synopsis of the relevant research pertaining to tool learning based on LLMs.
# 4.1 Tool Usage
The initial advancements in tool learning have been constrained by the capabilities of artificial intelligence (AI) models. [27] Traditional deep learning approaches exhibit limitations in terms of comprehension of tool functionality and user intentions, and common sense reasoning abilities. Consequently, these limitations directly result in a notable decline in the stability and precision of tool
15
learning methodologies. Recently, the advent of LLM has marked a pivotal juncture in the realm of tool learning. LLMs encompass a broad spectrum of common sense cognitive capabilities and exhibit remarkable proficiencies in natural language processing, reasoning, and interactive decision-making [28â32]. These attributes furnish indispensable prerequisites for LLMs to comprehend user intentions and effectively employ tools in tackling intricate tasks [33]. Simultaneously, the advancement of fine-tuning [34â38] and in-context learning [39, 40] technology has offered robust support to LLM in addressing increasingly intricate challenges. In addition, tool usage can mitigate the inherent limitations of LLMs, encompassing the acquisition of up-to-date information from real-world events, refined mathematical computational abilities, and the mitigation of potential hallucinatory phenomena. [41]
Within the realm of embodied intelligence [42â44], LLM engages in direct interactions with tangible tools like robots in order to enhance their cognitive abilities, optimize work productivity, and expand functional capacities. LLM possesses the capability to automatically devise action steps based on user intentions, enabling the guidance of robots in the completion of tasks [45â53], or alternatively, to directly generate underlying code that can be executed by robots [54â58]. Palm-E [50] introduced a multimodal language model which seamlessly integrates sensor data into its framework, enabling efficient planning of robot actions and task completion. Code as Policies (CaP) [58] facilitates the transformation of natural language instructions into code fragments that can be directly compiled and executed on robots. As for Inner Monologue [48], LLM incorporates diverse environmental feedback to construct inner monologues, thereby formulating effective robot control strategies. Furthermore, LP-SLAM [45] proposes a simultaneous localization and mapping (SLAM) system empowered with language perception capabilities, exploiting the potential of ChatGPT. PromptCraft [57], on the other hand, devises a function library tailored to ChatGPT on the robot platform, streamlining the conversion of user intentions into executable tasks via the underlying backend API.
In addition to directly changing the real environment through interaction with tools in the physical world, LLM can also utilize software tools such as search engines [59â67], mobile [68, 69], Microsoft Office [70, 71], calculators [72â74], deep models [19, 75â79, 13, 80, 81] and other versatile APIs [82, 5, 83, 84, 20, 85] to enhance model performance or complete complex workflows through flexible control of the software. Toolformer [5] employs a self-supervised methodology to fine-tune the language model, enabling it to acquire the ability to automatically invoke APIs. ART [86] leverages CoT [26] and In-context Learning [81, 41] techniques to automatically generate multi-step reasoning processes for new tasks, while also selecting and utilizing the most appropriate available tool at each step. ASH [62] utilizes LLM for sequence hierarchical decision-making to achieve web navigation tasks. WebGPT [66] and WebCPM [64] use network search to assist in implementing Question Answering tasks. In addition, RCI [87] recursively criticizes and improves itself to execute computer tasks guided by natural language according to the prompting scheme. To achieve the analysis and processing of tables, TableGPT [71] employs a table encoder to transform tabular data into vector representations, which are then fed into an LLM for inference in combination with user queries.
# 4.2 Tool Creation
The usage of tools is contingent upon the accessibility of external tools. Recently, efforts have been made to employ LLM as a tool creator in order to generate tools that can be utilized for diverse requests [88â95]. This development has consequently raised the demands placed on LLM. And these created tools are typically implemented as Python or SQL functions. LATM [88], for example, leverages the prowess of GPT-4 to create tools, and the usage of more cost-effective models has shown potential in exhibiting performance on par with larger models for these tool applications. EVAPORATE [94] involves the synthesis of multiple functions, which are subsequently utilized at a large scale to efficiently process documents and generate structured views.
# 5 Conclusion
In this paper, we have introduced a structured framework specially designed for LLM-based AI Agents, with an emphasis on their abilities in task planning and tool usage. This framework, coupled with our design of two distinct types of agents assigned for the inference process, allows for a comprehensive evaluation of the capabilities of current open-source LLMs, thereby yielding critical insights into their effectiveness. Furthermore, our research highlights the significant potential of
16
LLMs in managing complex tasks, revealing the exciting prospects they hold for future research and development. As we continue to explore and improve upon these models, we move closer to unlocking their full potential in a wide range of real-world applications.
# Acknowledgements
This work was conducted collaboratively among the authors.
Hangyu Mao and Rui Zhao led the project, formulating the central idea and laying out the framework for the primary literature review.
Regarding the literature review phase, the surveys were conducted by various team members. Guoqing Du and Jingqing Ruan explored DNN-based Tool Scheduling by LLMs; Tianpeng Bao and Yihong Chen investigated Physical/Robot Tool Scheduling by LLMs; and Shiwei Shi and Zhiwei Xu handled the survey of API or GUI-based Tool Scheduling by LLMs. Bin Zhang summarized these papers and synthesized an overarching summary.
As for the evaluation phase, Yihong Chen, Tianpeng Bao, Jingqing Ruan, Guoqing Du, Zhiwei Xu, Shiwei Shi, and Bin Zhang performed the experiments and analyzed the data. Hangyu Mao assisted in the analysis of the experimental phenomena and offered constructive suggestions for improvements. Xingyu Zeng and Rui Zhao provided invaluable feedback, contributed to the direction of the research. All authors participated in the discussion.
Regarding the manuscript phase, Hangyu Mao organized the overall chapters of the manuscript and mainly wrote the methodology part, and provided assistance in other parts. Jingqing Ruan and Yihong Chen wrote the evaluation section. Bin Zhang wrote the summary of the literature review. Each author read and approved the final manuscript.
The authors would like to thank Feng Zhu, Kun Wang, Yuhang Ran, Mengying Xu, Pengfei Jia, and Shaobo Lin for their valuable feedback, discussion, and participation in this project.
# References
[1] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong et al., âA survey of large language models,â arXiv preprint arXiv:2303.18223, 2023.
[2] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., âLanguage models are few-shot learners,â Advances in neural information processing systems, vol. 33, pp. 1877â1901, 2020.
[3] J. Wei, M. Bosma, V. Y. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le, âFinetuned language models are zero-shot learners,â arXiv preprint arXiv:2109.01652, 2021.
[4] OpenAI, âGpt-4 technical report,â 2023.
[5] T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom, âToolformer: Language models can teach themselves to use tools,â arXiv preprint arXiv:2302.04761, 2023.
[6] N. R. Jennings, K. Sycara, and M. Wooldridge, âA roadmap of agent research and development,â Autonomous agents and multi-agent systems, vol. 1, pp. 7â38, 1998.
[7] N. R. Jennings and M. Wooldridge, âApplying agent technology,â Applied Artificial Intelligence an International Journal, vol. 9, no. 4, pp. 357â369, 1995.
[8] S. Franklin and A. Graesser, âIs it an agent, or just a program?: A taxonomy for autonomous agents,â in International workshop on agent theories, architectures, and languages. Springer, 1996, pp. 21â35.
[9] C. Castelfranchi, âModelling social action for ai agents,â Artificial intelligence, vol. 103, no. 1-2, pp. 157â182, 1998.
[10] J. Ferber and G. Weiss, Multi-agent systems: an introduction to distributed artificial intelligence. Addison-wesley Reading, 1999, vol. 1.
17
[11] L. Panait and S. Luke, âCooperative multi-agent learning: The state of the art,â Autonomous agents and multi-agent systems, vol. 11, pp. 387â434, 2005.
[12] M. Pourreza and D. Rafiei, âDin-sql: Decomposed in-context learning of text-to-sql with self-correction,â arXiv preprint arXiv:2304.11015, 2023.
[13] C. Wu, S. Yin, W. Qi, X. Wang, Z. Tang, and N. Duan, âVisual chatgpt: Talking, drawing and editing with visual foundation models,â arXiv preprint arXiv:2303.04671, 2023.
[14] J. Gorniak, Y. Kim, S. Gwon, D. Wei, and N. W. Kim, âVizability: Multimodal accessible data visualization with keyboard navigation and conversational interaction,â arXiv preprint arXiv:2310.09611, 2023.
[15] I. Team, âInternlm: A multilingual language model with progressively enhanced capabilities,â https://github.com/InternLM/InternLM, 2023.
[16] Q. Xu, F. Hong, B. Li, C. Hu, Z. Chen, and J. Zhang, âOn the tool manipulation capability of open-source large language models,â arXiv preprint arXiv:2305.16504, 2023.
[17] Y. Qin, S. Liang, Y. Ye, K. Zhu, L. Yan, Y. Lu, Y. Lin, X. Cong, X. Tang, B. Qian et al., âToolllm: Facilitating large language models to master 16000+ real-world apis,â arXiv preprint arXiv:2307.16789, 2023.
[18] M. Li, F. Song, B. Yu, H. Yu, Z. Li, F. Huang, and Y. Li, âApi-bank: A benchmark for tool-augmented llms,â arXiv preprint arXiv:2304.08244, 2023.
[19] S. G. Patil, T. Zhang, X. Wang, and J. E. Gonzalez, âGorilla: Large language model connected with massive apis,â arXiv preprint arXiv:2305.15334, 2023.
[20] Q. Tang, Z. Deng, H. Lin, X. Han, Q. Liang, and L. Sun, âToolalpaca: Generalized tool learning for language models with 3000 simulated cases,â arXiv preprint arXiv:2306.05301, 2023.
[21] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray et al., âTraining language models to follow instructions with human feedback,â Advances in Neural Information Processing Systems, vol. 35, pp. 27 730â27 744, 2022.
[22] Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirho- seini, C. McKinnon et al., âConstitutional ai: Harmlessness from ai feedback,â arXiv preprint arXiv:2212.08073, 2022.
[23] A. Zeng, X. Liu, Z. Du, Z. Wang, H. Lai, M. Ding, Z. Yang, Y. Xu, W. Zheng, X. Xia et al., âGlm-130b: An open bilingual pre-trained model,â arXiv preprint arXiv:2210.02414, 2022.
[24] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar et al., âLlama: Open and efficient foundation language models,â arXiv preprint arXiv:2302.13971, 2023.
[25] Y. Cui, Z. Yang, and X. Yao, âEfficient and effective text encoding for chinese llama and alpaca,â arXiv preprint arXiv:2304.08177, 2023.
[26] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. H. Chi, Q. V. Le, and D. Zhou, âChain-of-thought prompting elicits reasoning in large language models,â Neural Information Processing Systems, 2022.
[27] R. Bommasani, D. A. Hudson, E. Adeli, R. Altman, S. Arora, S. von Arx, M. S. Bernstein, J. Bohg, A. Bosselut, E. Brunskill et al., âOn the opportunities and risks of foundation models,â arXiv preprint arXiv:2108.07258, 2021.
[28] M. Mosbach, T. Pimentel, S. Ravfogel, D. Klakow, and Y. Elazar, âFew-shot fine-tuning vs. in-context learning: A fair comparison and evaluation,â arXiv preprint arXiv:2305.16938, 2023.
[29] J. Yang, H. Jin, R. Tang, X. Han, Q. Feng, H. Jiang, B. Yin, and X. Hu, âHarnessing the power of llms in practice: A survey on chatgpt and beyond,â arXiv preprint arXiv:2304.13712, 2023.
18
[30] C. Zhang, C. Zhang, C. Li, Y. Qiao, S. Zheng, S. K. Dam, M. Zhang, J. U. Kim, S. T. Kim, J. Choi et al., âOne small step for generative ai, one giant leap for agi: A complete survey on chatgpt in aigc era,â arXiv preprint arXiv:2304.06488, 2023.
[31] F. Yu, H. Zhang, and B. Wang, âNature language reasoning, a survey,â arXiv preprint arXiv:2303.14725, 2023.
[32] Z. Wang, G. Zhang, K. Yang, N. Shi, W. Zhou, S. Hao, G. Xiong, Y. Li, M. Y. Sim, X. Chen et al., âInteractive natural language processing,â arXiv preprint arXiv:2305.13246, 2023.
[33] Y. Qin, S. Hu, Y. Lin, W. Chen, N. Ding, G. Cui, Z. Zeng, Y. Huang, C. Xiao, C. Han et al., âTool learning with foundation models,â arXiv preprint arXiv:2304.08354, 2023.
[34] W. Yu, C. Zhu, Z. Li, Z. Hu, Q. Wang, H. Ji, and M. Jiang, âA survey of knowledge-enhanced text generation,â ACM Computing Surveys, vol. 54, no. 11s, pp. 1â38, 2022.
[35] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, âLora: Low-rank adaptation of large language models,â arXiv preprint arXiv:2106.09685, 2021.
[36] N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. De Laroussilhe, A. Gesmundo, M. At- tariyan, and S. Gelly, âParameter-efficient transfer learning for nlp,â in International Conference on Machine Learning. PMLR, 2019, pp. 2790â2799.
[37] X. L. Li and P. Liang, âPrefix-tuning: Optimizing continuous prompts for generation,â arXiv preprint arXiv:2101.00190, 2021.
[38] X. Liu, Y. Zheng, Z. Du, M. Ding, Y. Qian, Z. Yang, and J. Tang, âGpt understands, too,â arXiv preprint arXiv:2103.10385, 2021.
[39] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao, âReact: Synergizing reasoning and acting in language models,â arXiv preprint arXiv:2210.03629, 2022.
[40] T. Khot, H. Trivedi, M. Finlayson, Y. Fu, K. Richardson, P. Clark, and A. Sabharwal, âDecomposed prompting: A modular approach for solving complex tasks,â arXiv preprint arXiv:2210.02406, 2022.
[41] G. Mialon, R. Dessì, M. Lomeli, C. Nalmpantis, R. Pasunuru, R. Raileanu, B. Rozière, T. Schick, J. Dwivedi-Yu, A. Celikyilmaz et al., âAugmented language models: a survey,â arXiv preprint arXiv:2302.07842, 2023.
[42] J. Duan, S. Yu, H. L. Tan, H. Zhu, and C. Tan, âA survey of embodied ai: From simulators to research tasks,â IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 6, no. 2, pp. 230â244, 2022.
[43] M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik et al., âHabitat: A platform for embodied ai research,â in Proceedings of the IEEE/CVF international conference on computer vision, 2019, pp. 9339â9347.
[44] S. Franklin, âAutonomous agents as embodied ai,â Cybernetics & Systems, vol. 28, no. 6, pp. 499â520, 1997.
[45] W. Zhang, Y. Guo, L. Niu, P. Li, C. Zhang, Z. Wan, J. Yan, F. U. D. Farrukh, and D. Zhang, âLp-slam: Language-perceptive rgb-d slam system based on large language model,â arXiv preprint arXiv:2303.10089, 2023.
[46] D. Shah, B. Osi´nski, S. Levine et al., âLm-nav: Robotic navigation with large pre-trained models of language, vision, and action,â in Conference on Robot Learning. PMLR, 2023, pp. 492â504.
[47] A. Brohan, Y. Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan, E. Jang, R. Julian et al., âDo as i can, not as i say: Grounding language in robotic affordances,â in Conference on Robot Learning. PMLR, 2023, pp. 287â318.
19
[48] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar et al., âInner monologue: Embodied reasoning through planning with language models,â arXiv preprint arXiv:2207.05608, 2022.
[49] B. Chen, F. Xia, B. Ichter, K. Rao, K. Gopalakrishnan, M. S. Ryoo, A. Stone, and D. Kappler, âOpen-vocabulary queryable scene representations for real world planning,â in 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023, pp. 11 509â 11 522.
[50] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu et al., âPalm-e: An embodied multimodal language model,â arXiv preprint arXiv:2303.03378, 2023.
[51] N. Wake, A. Kanehira, K. Sasabuchi, J. Takamatsu, and K. Ikeuchi, âChatgpt empow- ered long-step robot control in various environments: A case application,â arXiv preprint arXiv:2304.03893, 2023.
[52] K. Rana, J. Haviland, S. Garg, J. Abou-Chakra, I. Reid, and N. Suenderhauf, âSayplan: Ground- ing large language models using 3d scene graphs for scalable task planning,â arXiv preprint arXiv:2307.06135, 2023.
[53] C. H. Song, J. Wu, C. Washington, B. M. Sadler, W.-L. Chao, and Y. Su, âLlm-planner: Few- shot grounded planning for embodied agents with large language models,â arXiv preprint arXiv:2212.04088, 2022.
[54] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Haus- man, A. Herzog, J. Hsu et al., âRt-1: Robotics transformer for real-world control at scale,â arXiv preprint arXiv:2212.06817, 2022.
[55] A. Stone, T. Xiao, Y. Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich, F. Xia, C. Finn et al., âOpen-world object manipulation using pre-trained vision-language models,â arXiv preprint arXiv:2303.00905, 2023.
[56] S. Reed, K. Zolna, E. Parisotto, S. G. Colmenarejo, A. Novikov, G. Barth-Maron, M. Gimenez, Y. Sulsky, J. Kay, J. T. Springenberg et al., âA generalist agent,â arXiv preprint arXiv:2205.06175, 2022.
[57] S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor, âChatgpt for robotics: Design principles and model abilities,â Microsoft Auton. Syst. Robot. Res, vol. 2, p. 20, 2023.
[58] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng, âCode as policies: Language model programs for embodied control,â in 2023 IEEE International Conference on Robotics and Automation (ICRA).
[59] K. Guu, K. Lee, Z. Tung, P. Pasupat, and M. Chang, âRetrieval augmented language model pre-training,â in International conference on machine learning. PMLR, 2020, pp. 3929â3938.
[60] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel et al., âRetrieval-augmented generation for knowledge-intensive nlp tasks,â Advances in Neural Information Processing Systems, vol. 33, pp. 9459â9474, 2020.
[61] S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. B. Van Den Driess- che, J.-B. Lespiau, B. Damoc, A. Clark et al., âImproving language models by retrieving from trillions of tokens,â in International conference on machine learning. PMLR, 2022, pp. 2206â2240.
[62] A. Sridhar, R. Lo, F. F. Xu, H. Zhu, and S. Zhou, âHierarchical prompting assists large language model on web navigation,â arXiv preprint arXiv:2305.14257, 2023.
[63] H. Furuta, O. Nachum, K.-H. Lee, Y. Matsuo, S. S. Gu, and I. Gur, âMultimodal web navigation with instruction-finetuned foundation models,â arXiv preprint arXiv:2305.11854, 2023.
[64] Y. Qin, Z. Cai, D. Jin, L. Yan, S. Liang, K. Zhu, Y. Lin, X. Han, N. Ding, H. Wang et al., âWebcpm: Interactive web search for chinese long-form question answering,â arXiv preprint arXiv:2305.06849, 2023.
20
[65] S. Yao, H. Chen, J. Yang, and K. Narasimhan, âWebshop: Towards scalable real-world web in- teraction with grounded language agents,â Advances in Neural Information Processing Systems, vol. 35, pp. 20 744â20 757, 2022.
[66] R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders et al., âWebgpt: Browser-assisted question-answering with human feedback,â arXiv preprint arXiv:2112.09332, 2021.
[67] Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Cohen, R. Salakhutdinov, and C. D. Manning, âHotpotqa: A dataset for diverse, explainable multi-hop question answering,â arXiv preprint arXiv:1809.09600, 2018.
[68] B. Wang, G. Li, and Y. Li, âEnabling conversational interaction with mobile ui using large language models,â in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 2023, pp. 1â17.
[69] D. Zhang, L. Chen, and K. Yu, âMobile-env: A universal platform for training and evaluation of mobile interaction,â arXiv preprint arXiv:2305.08144, 2023.
[70] H. Li, J. Su, Y. Chen, Q. Li, and Z. Zhang, âSheetcopilot: Bringing software productivity to the next level through large language models,â arXiv preprint arXiv:2305.19308, 2023.
[71] L. Zha, J. Zhou, L. Li, R. Wang, Q. Huang, S. Yang, J. Yuan, C. Su, X. Li, A. Su et al., âTablegpt: Towards unifying tables, nature language and commands into one gpt,â arXiv preprint arXiv:2307.08674, 2023.
[72] Z. Chen, K. Zhou, B. Zhang, Z. Gong, W. X. Zhao, and J.-R. Wen, âChatcot: Tool- augmented chain-of-thought reasoning on\chat-based large language models,â arXiv preprint arXiv:2305.14323, 2023.
[73] A. Parisi, Y. Zhao, and N. Fiedel, âTalm: Tool augmented language models,â arXiv preprint arXiv:2205.12255, 2022.
[74] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano et al., âTraining verifiers to solve math word problems,â arXiv preprint arXiv:2110.14168, 2021.
[75] Z. Yang, L. Li, J. Wang, K. Lin, E. Azarnasab, F. Ahmed, Z. Liu, C. Liu, M. Zeng, and L. Wang, âMm-react: Prompting chatgpt for multimodal reasoning and action,â arXiv preprint arXiv:2303.11381, 2023.
[76] Z. Liu, Y. He, W. Wang, W. Wang, Y. Wang, S. Chen, Q. Zhang, Y. Yang, Q. Li, J. Yu et al., âInternchat: Solving vision-centric tasks by interacting with chatbots beyond language,â arXiv preprint arXiv:2305.05662, 2023.
[77] Y. Ge, W. Hua, J. Ji, J. Tan, S. Xu, and Y. Zhang, âOpenagi: When llm meets domain experts,â arXiv preprint arXiv:2304.04370, 2023.
[78] Y. Shen, K. Song, X. Tan, D. Li, W. Lu, and Y. Zhuang, âHugginggpt: Solving ai tasks with chatgpt and its friends in huggingface,â arXiv preprint arXiv:2303.17580, 2023.
[79] D. SurÃs, S. Menon, and C. Vondrick, âVipergpt: Visual inference via python execution for reasoning,â arXiv preprint arXiv:2303.08128, 2023.
[80] T. Gupta and A. Kembhavi, âVisual programming: Compositional visual reasoning without train- ing,â in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023, pp. 14 953â14 962.
[81] L. Chen, B. Li, S. Shen, J. Yang, C. Li, K. Keutzer, T. Darrell, and Z. Liu, âLanguage models are visual reasoning coordinators,â in ICLR 2023 Workshop on Mathematical and Empirical Understanding of Foundation Models, 2023.
[82] P. Lu, B. Peng, H. Cheng, M. Galley, K.-W. Chang, Y. N. Wu, S.-C. Zhu, and J. Gao, âChameleon: Plug-and-play compositional reasoning with large language models,â arXiv preprint arXiv:2304.09842, 2023.
21
[83] Z. Gou, Z. Shao, Y. Gong, Y. Shen, Y. Yang, N. Duan, and W. Chen, âCritic: Large language models can self-correct with tool-interactive critiquing,â arXiv preprint arXiv:2305.11738, 2023.
[84] Y. Liang, C. Wu, T. Song, W. Wu, Y. Xia, Y. Liu, Y. Ou, S. Lu, L. Ji, S. Mao et al., âTaskmatrix. ai: Completing tasks by connecting foundation models with millions of apis,â arXiv preprint arXiv:2303.16434, 2023.
[85] S. Hao, T. Liu, Z. Wang, and Z. Hu, âToolkengpt: Augmenting frozen language models with massive tools via tool embeddings,â arXiv preprint arXiv:2305.11554, 2023.
[86] B. Paranjape, S. Lundberg, S. Singh, H. Hajishirzi, L. Zettlemoyer, and M. T. Ribeiro, âArt: Automatic multi-step reasoning and tool-use for large language models,â arXiv preprint arXiv:2303.09014, 2023.
[87] G. Kim, P. Baldi, and S. McAleer, âLanguage models can solve computer tasks,â arXiv preprint arXiv:2303.17491, 2023.
[88] T. Cai, X. Wang, T. Ma, X. Chen, and D. Zhou, âLarge language models as tool makers,â arXiv preprint arXiv:2305.17126, 2023.
[89] R. H. Lewis and J. Jiao, âComputegpt: A computational chat model for numerical problems,â arXiv preprint arXiv:2305.06223, 2023.
[90] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig, âPal: Program- aided language models,â in International Conference on Machine Learning. PMLR, 2023, pp. 10 764â10 799.
[91] G. Wang, Y. Xie, Y. Jiang, A. Mandlekar, C. Xiao, Y. Zhu, L. Fan, and A. Anandkumar, âVoyager: An open-ended embodied agent with large language models,â arXiv preprint arXiv:2305.16291, 2023.
[92] C. Qian, C. Han, Y. R. Fung, Y. Qin, Z. Liu, and H. Ji, âCreator: Disentangling abstract and concrete reasonings of large language models through tool creation,â arXiv preprint arXiv:2305.14318, 2023.
[93] Y. Cai, S. Mao, W. Wu, Z. Wang, Y. Liang, T. Ge, C. Wu, W. You, T. Song, Y. Xia et al., âLow-code llm: Visual programming over llms,â arXiv preprint arXiv:2304.08103, 2023.
[94] S. Arora, B. Yang, S. Eyuboglu, A. Narayan, A. Hojel, I. Trummer, and C. Ré, âLanguage models enable simple systems for generating structured views of heterogeneous data lakes,â arXiv preprint arXiv:2304.09433, 2023.
[95] W. Zhang, Y. Shen, W. Lu, and Y. Zhuang, âData-copilot: Bridging billions of data and humans with autonomous workflow,â arXiv preprint arXiv:2306.07209, 2023.
22
# A Detailed Dataset Description
Simple SQL queries: These queries typically involve basic operations such as SELECT, FROM, WHERE, GROUP BY, etc. They are used to retrieve, filter, group, and sort data from a single table. We give the Schema of two tables in the SQL database in Table 12 and 13 and list several examples in Table 14.
Table 12: Schema of the Person table
Person Column Name Type School id name age sex school phone qualifications ability TEXT TEXT INTEGER TEXT TEXT TEXT TEXT TEXT Column Name Type id name info_985 info_211 TEXT TEXT TEXT TEXT
# Table 13: Schema of the School table
# Table 14: Demostrations of simple SQL queries.
Table ID Question Answer SQL reference Person Person School Average ages How many men How many schools are both â985â and â211â institutions? 35.16 12 11 select avg(age) from Person select count(*) from Person where sex = âmaleâ select count(*) from School where info_985 = âyesâ and info_211 = âyesâ;
Complex nested SQL queries: These queries contain subqueries, which are SQL queries nested inside a larger query. Nested queries can be used in various clauses such as SELECT, FROM, WHERE, and HAVING. They provide a way to perform multiple operations or calculations across multiple tables. We give the Schema of two tables in the SQL database in Table 15, 16, 17, and 18 and list several examples in Table 19.
Table 15: Schema of GoldenMelodyAwards
Table 16: Schema of the AwardNominees table
GoldenMelodyAwards Column Name Type AwardNominees Nominated_Count Competing_Count Awards_Count Award_Name Host Year INTEGER INTEGER INTEGER TEXT TEXT TIME Column Name Type Singer_ID Nominated_Work Award_Name Award_Edition_ID INTEGER INTEGER TEXT TEXT
Complex nested queries utilizing multiple tools: These are advanced queries that involve multiple tools, such as SQL queries, python code generation, user-defined functions, etc. We give the Schema
23
Table 17: Schema of the Singers table
Singers Column Name Type Name Song_Count Album_Count Fan_Count Gender Singer_ID TEXT INTEGER INTEGER INTEGER TEXT INTEGER
Table 18: Schema of the RecordCompanies table
Column Name Type Record_Company TEXT TIME Signing_Date INTEGER Singer_ID
Table 19: Demostrations of complex nested SQL queries.
Question Answer SQL reference Golden Melody hosts, excluding the two with the least awards. Names of singers never nominated for Golden Melody Awards. Name and gender of singers without a record company. How many times is the 27th Golden Melody count of the 28thâs? "26th Golden Melody", "27th Golden Melody" "Jay Chou", "Jian Cui" "Penny Tai:Femal" 1 from Golden- select in ( MelodyAwards where Host not select Host from GoldenMelodyAwards group by Host order by avg ( Awards_Count ) asc limit 2 ) select Name from Singers where Singer_ID not in ( select Singer_ID from AwardNomi- nees ) Award_Name select Name, Gender from Singers where Singer_ID not in ( select Singer_ID from RecordCompanies ); select a.Awards_Count / b.Awards_Count from ( select Awards_Count from Gold- enMelodyAwards where Award_Name == â27th Golden Melodyâ ( select Awards_Count from GoldenMelodyAwards where Award_Name == â28th Golden Melodyâ ) b ) a ,
of two tables in the SQL database in Table 20, and 21 and list several examples in Table 22. For verifying the planning ability of the LLM-based AI agents, we select this type of query.
Table 20: Schema of the Journal table
# Journal
Column Name Type TEXT Name TIME First_Issue_Date INTEGER Journal_ID Category TEXT Sponsor_Organization TEXT TEXT Country TEXT s Language INTEGER Publication_Count CoverPersonality Column Name Type Person_ID Journal_ID Count INTEGER INTEGER INTEGER
# Table 21: Schema of the CoverPersonality table
24
. s l o o t e l p i t l u m g n i z i l i t u s e i r e u q d e t s e n x e l p m o c f o s n o i t a r t s o m e D : 2 2 e l b a T e c n e r e f e r e d o C e c n e r e f e r L Q S s l o o T g n i n n a l P r e w s n A ; h t a m t r o p m i - r u o J m o r f e g a u g n a L , e m a N t c e l e s , " L P E R n o h t y P " [ , 8 0 . 0 2 [ - o p x e e h t ) 3 ( p x e . h t a m n r u t e r t c e l e s ( n i t o n D I _ l a n r u o J e r e h w l a n ] " r o t a r e n e G L Q S " , e s e n i h C : t s i m o n o c E e h T " t s i l d n a 3 ) y t i l a n o s r e P r e v o C m o r f D I _ l a n r u o J ] " . h s i l g n E : t s e g i D s â r e d a e R - n a l d n a s l a n r u o j ; h t a m t r o p m i - r u o J m o r f e g a u g n a L , e m a N t c e l e s , " L P E R n o h t y P " [ - i h C : t s i m o n o c E e h T " , 4 [ , l a i r o t c a f , ) 4 ( l a i r o t c a f . h t a m ( d c g . h t a m t c e l e s ( n i t o n D I _ l a n r u o J e r e h w l a n ] " r o t a r e n e G L Q S " - n E : t s e g i D s â r e d a e R , e s e n D C G h t i ) 2 1 2 ) y t i l a n o s r e P r e v o C m o r f D I _ l a n r u o J ] " . h s i l g e h t t s i l o n h t i w . ; h t a m t r o p m i p u o r g l a n r u o J m o r f e g a u g n a L t c e l e s , " L P E R n o h t y P " [ ] " h s i l g n E " , 5 9 7 9 8 9 8 . 4 [ e r a u q s e h t ) 4 2 ( t r q s . h t a m - a c i l b u P ( g v a g n i v a h e g a u g n a L y b ] " r o t a r e n e G L Q S " y r e u q d n a - i l b u P ( g v a t c e l e s ( > ) t n u o C _ n o i t e g a u g n a l ) l a n r u o J m o r f ) t n u o C _ n o i t a c - m u n d e h s i l b u p e h t ; h t a m t r o p m i - n o s r e P r e v o C m o r f D I _ n o s r e P t c e l e s , " L P E R n o h t y P " [ - i X , i a H g n i Q " , 7 9 8 9 6 . 0 [ e s a b g o l ) 5 ( 0 1 g o l . h t a m ( x a m t c e l e s ( < t n u o C e r e h w y t i l a ] " r o t a r e n e G L Q S " o n a i t s i r C , g n a u H g n i m o a y f i t n e d i ) y t i l a n o s r e P r e v o C m o r f ) t n u o C ] " t n a y r B e b o K , o d l a n o R - r a e p p a - r e v o e h t y c n e u q e r f
n o i t s e u Q
e t a l u c l a C
# f o
# l a i t n e n
s e m a n
# e h t
# f o
s e g a u g
n o s r e p r e v o c o n h t i
# w
.
y t i l a
s â 4 e t u p m o C
e r a p m o c
d n a
2 1 2
# f o
s e g a u g n a l d n a
s e m a n
s l a n r u o j
# f o
y t i l a n o s r e p
# r e v o c
e t a l u c l a C
# 4 2 f o t o o r
# e h t
=
# r o f
e g a r e v a
# e s o h w
# f o
# r e b
s d e e c x e
# s e u s s i
e g a r e v a
l l a r e v o
# e h t
# e t u p m o C
# n e h t
,
¢
5
# f o
0 1
# s e r u g fi
# r e v o c
# n a h t
# s s e l
# g n i
# D
# I
e l b a T
r e v o C
&
# l a n r u o J
y t i l a n o s r e P
y t i l a n o s r e P r e v o C
# l a n r u o J &
# l a n r u o J
y t i l a n o s r e P r e v o C
25
x a m
l l a
. s l a n r u o j
s s o r c a
# B Prompts Design
Figure 8: The evaluation prompt for tool order planning.
You are a strategy model. Given a problem and a set of tools, you need to â generate a sequence of tools to determine the solution to the problem. Each tool in the toolset is defined as follows: SQL Generator: Given an input problem and a database, it creates a â syntactically correct SQLite query statement. Python Generator: Given an input problem and some information, it generates << a syntactically correct Python code snippet. Please use the following format: Question: This is the original question. Error: This is the previously generated error output. Tool: These are the tools to be selected and the order in which they are <= called. Please note to generate a Tool different from the Error. Result: The final result output by the tool. Here are some examples of mapping problems to tools: Question: What is the square of the number of albums by Jolin Tsaif| Error: Tool: ["SQL Generator", "Python Generator"] Result: 100 Question: First, calculate the square of 40, denoted as A, and then find < the names of all the singers whose total number of fans is less than A. Error: Tool: ["Python Generator", "SQL Generator"] Result: ['Jolin Tsai'] Let's get started: Question: {question} Error: {error} Tool:
26
Figure 9: The evaluation prompt for tool order and subtask description planning.
You are a strategy model. Given a problem and a set of tools, you need to <â generate a sequence of tools to determine the solution to the problem. Each tool in the toolset is defined as follows: SQL Generator: Given an input problem and a database, it creates a <â syntactically correct SQLite query statement. Python Generator: Given an input problem and some information, it generates < a syntactically correct Python code snippet. Please use the following format: Question: This is the original question. Error: This is the previously generated error output. Tool: These are the tools to be selected and the order in which they are <= called. Please note to generate a Tool different from the Error. Query: This is the sub-problem derived from the original question that < needs to be input when calling the tool. Please note to generate a <= Query different from the Error. Result: The final result output by the tool. Here are some examples of mapping problems to tools: Question: What is the square of the number of albums by Jolin Tsai? Error: Tool: ["SQL Generator", "Python Generator"] Query: ["What is the number of albums by Jolin Tsai?", "What is the square <â of the number of albums by Jolin Tsai?"] Result: 100 Question: First, calculate the square of 40, denoted as A, and then find < the names of all the singers whose total number of fans is less than A. Error: Tool: ["Python Generator", "SQL Generator"] Query: ["A is the square of 40, what is the value of A?", "What are the â names of all the singers whose total number of fans is less than A?"] Result: ['Jolin Tsai'] Let's get started: Question: {question} Error: {error} Tool: {tools} Query:
27
Figure 10: The evaluation prompt for one-step tool-subtask pair planning.
You are a strategy model. Given a problem and a set of tools, you need to â generate a sequence of tools to determine the solution to the problem. Each tool in the toolset is defined as follows: SQL Generator: Given an input problem and a database, it creates a â syntactically correct SQLite query statement. Python Generator: Given an input problem and some information, it generates â a syntactically correct Python code snippet. Please use the following format: Question: This is the original question Error: This is the previously generated error output Tasks: This is a list in Python. Each item in the list is a dictionary. The ~ key of the dictionary represents the selected Tool, and the value is <â the Query when calling the tool. Please note to generate a Tool and â Query different from the Error. Answer: The final answer Here are some examples of mapping problems to tools: Question: What is the square of the number of albums by Jolin Tsaiff Error: Tasks: [{{"SQL Generator": "What is the number of albums by Jolin Tsai?"}}, = {{"Python Generator": "What is the square of the number of albums by so Jolin Tsai?"}}] Answer: The square of the number of albums by Jolin Tsai is 100 Question: First, calculate the square of 40, denoted as A, and then find â the names of all the singers whose total number of fans is less than A. Error: Tasks: [{{"Python Generator": "A is the square of 40, what is the value of os A?"}}, {{"SQL Generator": "What are the names of all the singers whose â total number of fans is less than A?"}}] Answer: Jolin Tsai You must note that: The generated Tasks must strictly meet the format â requirements: it must be a list in Python, each item in the list is a = dictionary, the key of the dictionary represents the selected Tool, and â the value is the Query when calling the tool. Let's get started: Question: {question} Error: {error} Tasks: """
28
Figure 11: The prompt added to Figure 10 for tool-subtask pair planning with other unrelated tools.
Each tool in the toolset is defined as follows: SQL Generator: Given an input problem and a database, it creates a â syntactically correct SQLite query statement. Python Generator: Given an input problem and some information, it generates <= a syntactically correct Python code snippet. Weather Query Tool: Given a location, it outputs the real-time weather of = that location. Image Generator: Given a text description, it generates a related image. Text Extractor: Given a link to an image, it extracts the corresponding <= text and its position coordinates. Translator: Given a piece of text, it translates it into other languages. Bing Searcher: Given a piece of text, it conducts a search in the Bing â browser and returns the content. Shell Generator: Given an input problem and some information, it generates <= a syntactically correct Shell script. Java Generator: Given an input problem and some information, it generates a <= syntactically correct Java code snippet. Wikipedia Searcher: Given a piece of text, it conducts a search in <= Wikipedia and returns the content. Office Suite: Given a text description, it automatically generates the < corresponding long document, table, or PPT. Movie Player: Given a movie name, it automatically plays the corresponding <â movie resource.
29
# Figure 12: The prompt for the tool-subtask pair generation with TPTU-SA.
You are a strategic model. Given a problem and a set of tools, you need to generate <+ the next tool to be called and the corresponding subtask. Each tool in the toolset is defined as follows: SQL Generator: Given an input question and a database, it creates a syntactically â correct SQLite query statement. PythonREPL: Given an input question and some information, it generates a segment of < syntactically correct Python code. Please use the following format: Question: This is the question History: This is the history of previously generated sub-problems; if it's empty, it «+ means there are no historical information currently Tool_Query: This is a dictionary in Python, where the key represents the chosen <+ Tool, and the value is the query input when invoking the Tool. Result: This is the output result of the current Tool_Query Tool History: This is the history of all previously generated sub-problems Tool_Query: 'None' signifies that the Final_Answer can be derived Result: 'None' signifies that the Final_Answer can be derived Final_Answer: This is the final answer; when the history is sufficient to reason out < the answer, provide the Final_Answer directly In the above format, ... signifies that (History/Tool_Query/Result) can be repeated = N times. When you can get the Final_Answer, you can generate an empty Tool_Query and Result, < and provide the Final_Answer Please stop after generating the Result line or the Final_Answer line. Below are some examples: Question: First calculate the square of 40 as A, and find the names of all singers <â whose total fan count is less than A. History: Tool_Query:{{"PythonREPL": "A is the square of 40, what is the value of A?"}} Result : 1600 History: The Tool_Query for the first tool execution was:{{"PythonREPL": "A is the <+ square of 40, what is the value of A?"}}, Result:1600 Tool_Query:{{"SQL Generator": "Find the names of all singers whose total fan count â is less than A"}} Result: Jolin Tsai History: The Tool_Query for the first tool execution was: {{"PythonREPL": "A is the <+ square of 40, what is the value of A?"}}, Result: 1600 The Tool_Query for the second tool execution was: {{"SQL Generator": "Find <> the names of all singers whose total fan count is less than A"}}, <â Result: Jolin Tsai Tool_Query:None Result : Final_Answer: Jolin Tsai Note: The generated Tool_Query must strictly comply with the format requirements, «+ and only one Tool_Query can be generated each time. Do not perform additional < problem analysis, strictly adhere to the format of the problem, and generate < output similar to the examples. Now let's get started: Question: {question} History: {history} Tool_Query:
30
Figure 13: The evaluation prompt for simple SQL questions.
You are an SQLite expert. Given an input question, first, generate a â grammatically correct SQLite query to execute. Then examine the query <â results and provide an answer to the input question. Unless a specific number of examples to retrieve is specified in the <= question, use the LIMIT clause to query for a maximum of 5 results. Do not query all columns in the table. You must only query the columns â necessary to answer the question. Please only use the column names you can see in the table below. Be careful < not to query columns that do not exist. Additionally, be aware of which <= column is in which table. Please use the following format: Question: This is the question. SQLQuery: The SQL query to be executed. SQLResult: The result of the SQL query execution. Answer: The final answer. Note to only use the tables below: CREATE TABLE Person (
id TEXT,
name TEXT,
age INTEGER,
sex TEXT,
school TEXT,
phone TEXT,
qualifications TEXT,
ability TEXT
)
/*
3 rows from person table:
id name age sex school phone qualifications ability
01 Wang Min 32 Female Beijing University of Technology 13938493271 Undergraduate Tourism Industry-related Work
02 Li Liang 27 Male Beijing University of Technology 13812764851 Master Internet Company Operations
03 Zhang Jing 50 Female Wuhan University of Technology 13764592384 Master Editor of Publishing House
*/
CREATE TABLE School (
id TEXT,
name TEXT,
info\_985 TEXT, o
info\_211 TEXT
)
/*
3 rows from school table:
id name o info\_985 info\_211
01 Central South University yes <= yes
02 Shandong University yes yes
03 Tsinghua ~ University yes yes
*/ t it Question: What is the average age of the people? SQLQuery:
31
Figure 14: The evaluation prompt for complex nested SQL questions.
You are an SQL expert. Given an input question, you need to create a <= syntactically correct SQL query statement. Please only use the following datasets, which include four table names: GoldenMelodyAward, Singers, AwardNominee, Singers, and RecordCompanies. The column names and types of each table can be obtained from the create commands in the table below: Ii ld CREATE TABLE GoldenMelodyAward (
Nominated\_Count INTEGER,
<= Competing\_Count INTEGER,
Awards\_Count INTEGER,
Award\_Name = TEXT,
Host TEXT,
Year TIME
)
CREATE TABLE AwardNominees (
Singer_ID INTEGER,
Nominated\_Work s TEXT,
Award\_Name TEXT,
Award_Edition_ID INTEGER
)
CREATE TABLE Singers(
Name TEXT,
Song\_Count INTEGER,
<= Album\_Count INTEGER,
Fan\_Count INTEGER,
Singer\_ID INTEGER, <=
Gender TEXT
)
CREATE TABLE RecordCompanies (
Record\_Company TEXT,
Singer\_Date <= TIME,
Singer_ID INTEGER
)
You can query one or more tables at the same time. Be careful not to query â non-existent table names or column names. Also, please note which «= column is in which table. Please use the following format when answering: Question: This is the question Answer: The SQL query statement to be executed
32
Figure 15: The evaluation CoT-based prompt for complex nested SQL questions.
You are an SQL expert. Given an input question, you need to create a â syntactically correct SQL query statement. Please only use the following datasets, which include four table names: GoldenMelodyAward, Singers, AwardNominee, Singers, and RecordCompanie. The column names and types of each table can be obtained from the create commands in the table below: iis CREATE TABLE GoldenMelodyAward (
Nominated\_Count INTEGER,
<= Competing\ Count INTEGER,
Awards\_Count INTEGER,
Award\_Name <= TEXT,
Host TEXT,
Year TIME
)
CREATE TABLE AwardNominees (
Singer_ID INTEGER,
Nominated\_Work TEXT,
Award\_Name TEXT,
Award_Edition_ID INTEGER
)
CREATE TABLE Singers(
Name TEXT,
Song\_Count INTEGER,
+ Album\_Count INTEGER,
Fan\_Count INTEGER,
Singer\_ID INTEGER, «=
Gender TEXT
)
CREATE TABLE RecordCompanies (
Record\_Company TEXT,
Singer\_Date . TIME,
Singer_ID INTEGER
)
You can query one or more tables at the same time. Be careful not to query â non-existent table names or column names. Also, please note which <= column is in which table. Please note that you are not proficient in nested SQL, when encountering <â complex problems, you can think step by step to generate multiple < non-nested SQL statements. For example: Question: Some minor languages are used by no more than 3 countries, what < are the source countries of these languages/?| Thought: First generate the ist SQL ["\select Official_Language from Country <= group by Official_Language having count(*) > 3'', and assume that the â result of this SQL is resulti, then generate the 2nd SQL | âselect Name = from Country where Official_Language not in result1''. Answer: select Name from Country where Official_Language not in ( select ~ Official_Language from Country group by Official_Language having = count(*) > 3 ) Please use the following format when answering: Question: This is the question Thought: This is the thought process Answer: This is the final SQL query statement
33
# Figure 16: The evaluation prompt for mathematical questions.
Transform a math problem into a solution function that can be executed using o Python/|s math library. Use the output of running this code to answer the <â question. Please use the following format: History: Information output from previous tool invocation Question: A question about mathematics Error: This is the error output previously generated PythonSolution: A Python solution, make sure to generate a PythonSolution different < from the one in Error, for example, ## Python Solution def solution(): Python statement Answer: The final answer Below are some demonstrations of mapping math problems to PythonSolution: History: The original question was: What is 37593 * 67h Question: What is 37593 * 67/7] Error: PythonSolution: ## Python Solution def solution(): import math return 37593 * 67 Answer: 2518731 History: The original question was: What is the 1/5th power of 37593) Question: What is the i/5th power of 37593f Error: PythonSolution: ## Python Solution def solution(): import math return 37593 ** 1/5 Answer: 8.222831614237718 History: The original question was: What is the logarithm of 5 with base 10F7 Question: What is the logarithm of 5 with base 107 Error: PythonSolution: ## Python Solution def solution(): import math return math.log(5, 10) Answer: 0.69897 Now let's get started: History: {history} Question: {question} Error: {error} PythonSolution:
34
Figure 17: The system prompt for one-step agent.
You are a strategy model and given a problem and a set of tools, you need << to generate a sequence of executable tools to determine the solution to < the problem. Each tool in the toolset is defined as follows: SQL Generator: Given an input problem and a database, create a <= syntactically correct SQLite query statement. PythonREPL: Given an input problem and some information, generate a <â syntactically correct Python code. Please use the following format: Question: Here is the question Error: Here is the previously generated error output Tasks: Here is a Python List type, where each item in the List is a <= dictionary. The key of the dictionary represents the selected tool, and <= the value is the query input when calling the tool. Please note that <â the generated Tool and Query should be different from those in the = Error. Answer: The final answer Here are some examples mapping the question to the tools: Question: What is the square of the number of albums by Jolin Tsai? Error: Tasks: [{{SQL Generator: "What is the number of albums by Jolin Tsai?"}}, o {{PythonREPL: "What is the square of the number of albums by Jolin = Tsai?"}}] Answer: The square of the number of albums by Jolin Tsai is 100 Question: First, calculate the square of 40 and denote it as A. Then, find <= the names of all artists with a total number of fans less than A. Error: Tasks: [{{PythonREPL: "Let A be the square of 40. What is the value of s A?"}}, {{SQL Generator: "Find the names of all artists with a total <= number of fans less than A"}}] Answer: Jolin Tsai Note that you must ensure that the generated Tasks strictly adhere to the <= format requirements: they must be in Python List type, where each item <= is a dictionary. The key of the dictionary represents the selected tool, <= and the value is the query input when calling the tool. Now, let's proceed: Question: {question} Error: {error} Tasks:
35
Figure 18: The system prompt for the sequential agent.
Answer the following questions as best you can. You have access to the <= following tools: Use the following format: Question: the input question you must answer Thought: you should always think about what to do Action: the action to take, should be one of [{tool_names}] ActionInput: the input to the action Observation: the result of the action which should not be generate Thought: I now know the final answer Final Answer: the final answer to the original input question in the above format means that this <= Thought/Action/ActionInput/Observation can repeat N times. The line of Observation will be given through the input. Please stop to chat after you generate the line ActionInput or the line of « Final Ansewer. For example, when I ask what is the 0.4 power of 24, you should use the ~ following format: <bot>: Question: What is the 0.4 power of 24/7 Thoutht: I need to calculate the 0.4 power of 24 Action: Python REPL ActionInput: print (24**0.4) Observation: 3.565204915932007 Thought: I now know the final answer Final Answer: 3.565204915932007 Begin!|| <bot>: Question: {input} Thought : {agent_scratchpad}
36 | {
"id": "2302.13971"
} |
2308.03313 | Quantifying the Impact of Large Language Models on Collective Opinion Dynamics | The process of opinion expression and exchange is a critical component of
democratic societies. As people interact with large language models (LLMs) in
the opinion shaping process different from traditional media, the impacts of
LLMs are increasingly recognized and being concerned. However, the knowledge
about how LLMs affect the process of opinion expression and exchange of social
opinion networks is very limited. Here, we create an opinion network dynamics
model to encode the opinions of LLMs, cognitive acceptability and usage
strategies of individuals, and simulate the impact of LLMs on opinion dynamics
in a variety of scenarios. The outcomes of the simulations inform about
effective demand-oriented opinion network interventions. The results from this
study suggested that the output opinion of LLMs has a unique and positive
effect on the collective opinion difference. The marginal effect of cognitive
acceptability on collective opinion formation is nonlinear and shows a
decreasing trend. When people partially rely on LLMs, the exchange process of
opinion becomes more intense and the diversity of opinion becomes more
favorable. In fact, there is 38.6% more opinion diversity when people all
partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The
optimal diversity of opinion was found when the fractions of people who do not
use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our
experiments also find that introducing extra agents with
opposite/neutral/random opinions, we can effectively mitigate the impact of
biased/toxic output from LLMs. Our findings provide valuable insights into
opinion dynamics in the age of LLMs, highlighting the need for customized
interventions tailored to specific scenarios to address the drawbacks of
improper output and use of LLMs. | http://arxiv.org/pdf/2308.03313 | Chao Li, Xing Su, Haoying Han, Cong Xue, Chunmo Zheng, Chao Fan | cs.SI, cs.CY | 21 pages, 4figures,2tables | null | cs.SI | 20230807 | 20230826 | # Quantifying the Impact of Large Language Models on Collective Opinion Dynamics
# Chao Lia, Xing Sua*, Haoying Hana, Cong Xuea, Chunmo Zhenga, Chao Fanb
a College of Civil Engineering and Architecture, Zhejiang University, Hangzhou, Zhejiang, 310000 b College of Engineering, Computing, and Applied Sciences, Clemson University, Clemson, SC, 29631.
# Abstract
The process of opinion expression and exchange is a critical component of democratic societies. As people interact with large language models (LLMs) in the opinion shaping process different from traditional media, the impacts of LLMs are increasingly recognized and being concerned. However, the knowledge about how LLMs affect the process of opinion expression and exchange of social opinion networks is very limited. Here, we create an opinion network dynamics model to encode the opinions of LLMs, cognitive acceptability and usage strategies of individuals, and simulate the impact of LLMs on opinion dynamics in a variety of scenarios. The outcomes of the simulations inform about effective demand-oriented opinion network interventions. The results from this study suggested that the output opinion of LLMs has a unique and positive effect on the collective opinion difference. The marginal effect of cognitive acceptability on collective opinion formation is nonlinear and shows a decreasing trend. When people partially rely on LLMs, the exchange process of opinion becomes more intense and the diversity of opinion becomes more favorable. In fact, there is 38.6% more opinion diversity when people all partially rely on LLMs, compared to prohibiting the use of LLMs entirely. The optimal diversity of opinion was found when the fractions of people who do not use, partially rely on, and fully rely on LLMs reached roughly 4:12:1. Our experiments also find that introducing extra agents with opposite/neutral/random opinions, we can effectively mitigate the impact of biased/toxic output from LLMs. Our findings provide valuable insights into opinion dynamics in the age of LLMs, highlighting the need for customized interventions tailored to specific scenarios to address the drawbacks of improper output and use of LLMs.
# Keywords: large language models | opinion dynamics | intervention strategies
# Introduction
The process of opinion expression and exchange can facilitate the interactions of diverse perspectives and enables individuals to make informed decisions and participate in civic life1-3. It has been argued that social interactions, such as face-to-face and through traditional media (e.g. TV, newspapers, Twitter), are fundamental in this process4-7. This process thus has been extensively studied in the past decades, with several opinion models proposed and modified in the context of traditional media8-13. However, despite the advances and evolution of these opinion models, such as considering agent stubbornness14,15 and noise16, they still fail to fully capture the impacts of LLMs on collective opinion dynamics.
Traditional media mainly rely on widespread information dissemination such as radio or
This is a preprint uploaded to arXiv. âCorresponding author: Xing Su. Email address: [email protected].
1 / 21
broadcast television17,18, or directly bridge the gap between individuals such as Twitter19-21, to influence collective opinion networks. Specifically, traditional media opinions are manually reviewed and validated before output, so the output is more trustworthy, unbiased and accurate22. The opinion delivery process of traditional media is a one-way direct interaction, i.e., they influence the public through the unilateral information dissemination23. Unlike the pattern of traditional media, LLMs play the role of a personal copilot to affect collective opinion networks through their penetration of personal opinions. Fig.1 shows that there are significant differences between LLMs and traditional media in terms of opinion shaping process, opinion interaction and opinion output. LLMs will only be manually reviewed during the opinion formation process24. Due to documented limitations in resources and problematic patterns in training data, it is highly possible to contain false, biased and toxic content25-27. Hence, the output of LLMs will carry these contents, and the opinion delivery process will be a two-way interaction. That is, LLMs influence individuals through a question and answer (Q&A) format of interaction, a pattern that disseminates the output of LLMs more efficiently. Meanwhile, as LLMs become more prevalent in our daily lives (such as ChatGPT28, the fastest-growing consumer app ever, with hundreds of millions of active users just two months after launch29), the risks shown in Fig.1 have been recognized as an urgent problem, leading to the emergence of many other problems such as leaking private information30 and overreliance31,32. Different organizations and individuals introduce different usage strategies, even many of them choose to neglect the benefits of LLMs and completely prohibit these effective tools for aforementioned issues.
Given the aforementioned differences and existing usage problems, it is indispensable to understand how the output of LLMs affects collective opinion dynamics, and what usage strategies should be tailored to address the drawbacks of LLMs in specific scenarios. Attempts to study the impact of LLMs on opinion have recently emerged25,33. Among these studies, the impacts of cognitive bias hidden in the output of LLMs have gained significant attention34-37. These biases include gender38, race39, nationality40, and even political topics41. Most of this research focuses on establishing robust evidence of the existence of bias by conducting control experiments with and without specific treatment. Such studies typically only consider the impacts of specific cognitive biases induced by LLMs on individual opinions, but neglect the various use strategies for LLMs that influence the opinion evolution over time.
To address these limitations, we propose a new opinion model, based on the classic Hegselmann-Krause (HK) model42 and incorporate the bidirectional opinion shaping process and personalized usage strategies of LLMs, to investigate the dynamic evolution in opinion networks. Specifically, we categorized agents into three categories according to three different usage strategies, i.e., Nodes only Influenced by Neighbors (NIN) for no use, Nodes Influenced by Neighbors and LLMs (NINL) for partial reliance and Nodes only Influenced by LLMs (NIL) for full reliance. To mimic the reality of opinion interaction patterns of LLMs, we also propose three modifications to the HK model by taking the authoritative effect, stubbornness degree and arbitrary events of reality into account. The detailed assumptions, parameter settings and update conditions are shown in the Method section.
By implementing the proposed model, we first compared several scenarios with or without LLMs to determine if LLM has an impact on opinion dynamics. Considering its computational efficiency, we then identify parameters that have great effects on the results of the opinion dynamics, using the benchmark scenario as a reference. The detailed definitions and value ranges of original
2 / 21
parameters are provided in Tab.1. We then perform millions of simulations to capture the evolutionary process and final formation of the opinion and explored their correlation with the filtered parameters. The detailed definitions and value ranges of indicators that are used to characterize the evolution and formation of the opinions are provided in Tab.2. Finally, we summarize the potential risks of LLMs on opinion dynamics based on the correlation matrix, explained by prior knowledge from existing studies, and investigate countermeasures for possible hazards. The results of these experiments inform us about effective usage strategies and interventions of LLMs oriented to different scenarios.
manual review and validation Part at processing, none before output
Fig.1. Schematic diagram of the difference between LLMs and traditional media. The left side represents the pattern of opinion dissemination in the interactions between traditional media and people, the right side represents the pattern of opinion dissemination in the interactions between LLMs and people, and the center part represents face-to-face interactions in opinion networks.
Tab.1. Seven controlled parameters in our modified opinion dynamic models.
Parameter Definition N Number of group size T Number of opinion exchanges Value range [0, â] [0, â]
ð Cognitive acceptability of each agent. A value of 0 means a very low [0, 1]
3 / 21
acceptability of other opinions, and a value of 1 means a very high acceptability of other opinions
# pro_NIN
Proportion of the population who do not use LLMs
# pro_NINL
Proportion of the population who partially rely on use LLMs
pro_NIL Proportion of the population who fully rely on use LLMs [0, 1]
# ð¥ð¿ð¿ð
Output opinion of LLMs. A value of â1.0 means a very negative opinion on the topic, and a value of 1 means a very positive opinion on the topic
Tab.2. Four indicators in our modified opinion dynamic models.
# Dimension
# Indicator Definition
NODEdiff Mean opinion difference of different categories of nodes. This indicator represents the evolution of the value of opinion on a topic. This indicator is minimized when all nodes have an initial opinion value of 1 and a final opinion value of -1, and is maximized when all nodes have an initial opinion value of -1 and a final opinion value of 1 [-2, 2]
# Opinion evolution
NODEconv Mean opinion convergence time of different categories of nodes. This indicator represents the timestep it takes for opinion to evolve to a stable state. This indicator is maximized when the exchange of opinions has not converged after the completion of the exchange of opinions, the conditions for determining convergence are shown in Eq(6) [0, T]
# Opinion formation
NODESD Mean opinion standard deviation of different categories of nodes. This indicator represents the degree of dispersion of a group's opinions relative to the mean value. This indicator is maximized when half of the nodes (n represents the number of nodes) have an opinion value of 1 and the other half have an opinion value of -1
NODEclus Mean number of opinion clusters of different categories of nodes. This indicator represents the clustering of opinions, with a value of 1 indicating consensus and a value of 2 indicating polarization, with larger values indicating more fragmented opinions. This indicator is maximized when all opinions of nodes are inconsistent
4 / 21
[0, 1]
[0, 1]
[-1, 1]
# Value range
# ð
ðâ1
]
Results Five crucial parameters of the model. In Fig.2, a noticeable discrepancy between G1 and the other scenarios, specifically Fig.2A and Fig.2C, highlights the significant impact of LLMs on the opinion dynamics network. Comparing the benchmark with the remaining six scenarios, we observe that in Fig.2A, Fig.2B, and Fig.2C, the benchmark curve closely aligns with the curve of N=300, T=300, indicating an insignificant effect of an excessive number of nodes and iterations on the mean opinion value, mean opinion change value, and mean standard deviation. Fig.2B also shows that in all scenarios, opinion values almost stop changing after the number of iterations reaches 60, i.e., all individuals' opinions reach a steady state before 100 iterations are completed. In contrast, the benchmark curve differs significantly from the curves for ð = 0.8, pro_NINL = 0.6, pro_NIL = 0.6, and ð¥ð¿ð¿ð = 1, demonstrating a substantial impact of modifying the threshold, the ratio of the three agents, and the output value of LLMs on the results. In Fig.2D, the benchmark curve almost entirely overlaps with the curve of T = 300, indicating a minimal effect of additional iterations on the number of clusters. However, the curve of N = 300 exhibits more clusters throughout the process compared to the benchmark curve, with discrepancies remaining relatively stable at around 10. Additionally, the curves of other scenarios and benchmark curves demonstrate irregular interweaving.
These findings show that the threshold, the proportion of the three agents, and the output value of LLMs significantly influence the opinion exchange process as well as the final distribution of opinions. In the rest of this paper, we will explain why and to what extent these five parameters as independent variables influence collective opinion dynamics.
|g 008 D oo 5 0.07 S = ® 02 © 0.06 2 $ 0.0 ieee + font 2 > £2 gS 004 2 2 Ca 'E 003 < £ S a ® 66 © 002 £ c 08 § 001 f= â10 0.00 | 0 20 40 60 80 100 ° 20 40 60 80 100 iteration iteration a i=) 8 & 8 mean standard deviation mean number of clusters 0 20 40 60 80 100 iteration iteration ât+â G1 = benchmark â+â N= 300 âeâ T= 300 âsâ ⬠= 0.8 â-â pro_NINL = 0.6 âxâ pro_NIL = 0.6 âsâ X;;4=1
Fig.2. Different parameters with varying degrees of impact on the results of the opinion dynamics. For ease of exposition, the network dynamic parameters in the later will all be expressed in the order of (N, T,ð, pro_NIN, pro_NINL, pro_NIL, ð¥ð¿ð¿ð). We first selected the original opinion network that is not affected by LLM (G1), then determine one of the opinion networks affected by LLM as the benchmark, and then change one of the seven
5 / 21
parameters at a time, so that there are a total of 8 scenarios of opinion dynamics, which are [G1(100, 100, 0.4); benchmark(100, 100, 0.4, 0.6, 0.2 , 0.2, -1); N=300 (300,100, 0.4, 0.6, 0.2 , 0.2, -1); T=300 (100,300, 0.4, 0.6, 0.2 , 0.2, -1);ð =0.8 (100,100, 0.,, 0.6, 0.2 , 0.2, -1); pro_NINL =0.6 (100,100, 0.4, 0.2, 0.6 , 0.2, -1); pro_NIL =0.6 (100,100, 0.4, 0.2, 0.2 , 0.6, -1); ð¥ð¿ð¿ð =1 (100,100, 0.4, 0.6, 0.2 , 0.2, 1) ]. We finally plotted the trend curves of the four outcomes of each network dynamic scenario with the number of iterations, since the curves of T=300 and benchmark are basically the same among the four results, we only selected the results of the 100 iterations to show for a clearer representation, and the trend plots include (A) mean opinion value, which indicates the average opinion value of all agents for each iteration; (B) mean opinion change value , which denotes the absolute value of the difference between the opinion value of the agent at the current iteration and the opinion value at the previous iteration, and finally takes the average of all agents; (C) mean standard deviation, which denotes the standard deviation of all agents in each iteration, see Eq(7) in the method section; and (D) mean number of clusters, which denotes the number of hierarchical clusters of all agents in each iteration, see Eq(10) in the method section. In order to eliminate randomness of the four results, our values at iteration t are the average calculations of 100 repeated simulations of that network dynamics scenario at iteration t.
Impacts on the evolution and formation of opinions. Fig.3A presents our findings from the NODEdiff, NODEconv, NODESD, and NODEclus sections. For NODEdiff, we observe that changes in agent opinions exhibit a significant positive correlation solely with the output value of LLM. No significant relationship exists with either the proportion of the three agents or the threshold value of the agents. In addition, the curve representing the change in opinion value influenced by the positive and negative LLM values is symmetric with respect to the y=0 axis of symmetry during iteration (Fig.3B). These findings suggest that LLM can facilitate educational efforts aimed at guiding the positive development of collective opinion. However, the influence relationship described above indicates that negative or toxic public opinions generated by LLMs can also lead to negative development of collective opinion networks. Therefore, intervention and control approaches need to be explored further to ensure the proper evolution of opinion networks. Additionally, Fig.3C shows that the standard deviation of the agents roughly obeys a parabolic distribution with ð¥ð¿ð¿ð=0 as the symmetry axis and the smallest value here. This result suggests that as the LLM output value approaches the initial mean value of the opinion network, the final distribution of opinions becomes less heterogeneous.
In the simulations for NODEconv, the opinion convergence times of NIN and NINL are both significantly and positively correlated with the threshold value, as the threshold value increases, the intensity of opinion interaction also increases, and the convergence time prolongs. However, the opinion convergence time of NIN is not significantly correlated with the proportion of the three agents or the output value of LLM, suggesting that the time required for the opinion of NIN in the population to reach a steady state is only related to the threshold. The convergence time of NINL exhibits a significant positive correlation with the NIN to NIL ratio, and a significant negative correlation with the NINL to NIN ratio. Conversely, the convergence time of all agents displays a significant positive correlation with the NIN to NINL ratio, and a significant negative correlation with the NIL to NIN ratio. These findings indicate that an increased number of individuals who do not employ LLMs within the opinion network result in extended convergence times for opinions held by those who partially rely on both LLMs and social groups to reach a stable state. A greater number of individuals who partially rely on LLMs lead to shorter convergence times for their individual opinions, but longer convergence times for collective opinions. Conversely, a great
6 / 21
number of people who fully rely on LLM will increase the intensity of opinion interaction resulting in a long convergence time for NINL, but a short convergence time for collective opinions.
For NODESD, the standard deviations of opinions of NIN and NINL are both significantly and negatively correlated with the threshold value, indicating that a larger threshold value results in less discrete opinions compared to the mean value. The correlations between the standard deviation of NIN and NINL and the proportions of the three agents are more consistent. They are significantly and inversely correlated with the proportions of NIN and NINL, but significantly and positively correlated with the proportion of NIL. These findings suggest that a higher proportion of NIL is associated with increased disagreement within NIN and NINL, while a higher proportion of NIN and NINL is linked to greater convergence of their internal opinions. In contrast, the standard deviation of collective opinion is significantly positively correlated with the proportion of NINL, significantly negatively correlated with the proportion of NIL, and only slightly negatively correlated with the proportion of NIN. Additionally, Fig.3D shows that the standard deviation of the agents decreases very slowly until the threshold is less than 0.5 and decreases very quickly after the threshold is greater than 0.5. This result indicates that even with human intervention, the dispersion of the opinions of each agent is large as long as the threshold of the agent is below 0.5. Once the human intervention exceeds this threshold, the marginal effect would be significant, and the tendency to reach consensus would increase significantly.
For NODEclus, the number of opinion clusters for NIN and NINL is negatively associated with the threshold value, showing that the larger the threshold value, the fewer the number of opinion clusters and the more opinions tend to be consensus-oriented. The number of clusters for NIN, NINL, and the group is significantly negatively correlated with the proportion of NIL. This observation suggests that a higher proportion of NIL not only results in a reduced number of clusters of both internal and collective opinions but also leads to a more consensual opinion among individuals who do not use LLMs or partially rely on LLMs. Additionally, the number of clusters of NIN is significantly positively correlated with the proportion of NIN and significantly negatively correlated with the proportion of NINL. This result indicates that the greater the proportion of NIN, the more dispersed the opinions within NIN. However, the number of clusters of NINL is significantly negatively correlated with the proportion of NIN and significantly positively correlated with the proportion of NINL, indicating that increasing the proportion of NINL will concentrate the opinions within NINL. Finally, Fig.3E shows that the number of opinion clusters decreases rapidly when the threshold is increased from 0 to 0.4 and then decreases slowly when the threshold is greater than 0.4. This result suggests that the initial threshold value must be set appropriately to achieve a balance between opinion diversity and consensus. Combining the results from Fig.3D and Fig.3E, we observe that increasing the threshold through intervention can quickly converge chaotic opinions into multiple distant opinion clusters when the threshold is less than 0.3. However, when the threshold is greater than 0.7, The number of opinion clusters is small, preferring to reach a consensus. In summary, Fig.3A-E suggest that the overall convergence of the opinion network is slower and the opinion distribution is more divided when more people partially rely on LLM. In contrast, as the number of individuals who solely rely on LLMs increases, the convergence of the opinion network accelerates, and the opinion distribution becomes more concentrated and oriented towards consensus. Therefore, maintaining opinion diversity and full interaction of opinions requires a large proportion of NINL. However, excessive reliance on LLM can lead to a rapid convergence of opinion networks, which may limit opinion diversity and compromise the quality of collective
7 / 21
opinion formation.
Fig.3F provides additional confirmation of the observations presented in Fig.3A and Fig.3B. Specifically, the minimum and maximum values of the opinion difference corresponded to parameter values of (0.95, 0.20, 0.80, 0.00, -1.00) and (0.93, 0.19, 0.81, 0.00, 1.00), respectively. Notably, the output values of LLMs for these two sets of parameters were diametrically opposed. Fig.3G indicates that to achieve a rapid attainment of a stable state in collective opinion exchange, individual cognitive acceptability should be close to 0, the output opinion value of LLMs should be 0, and the proportions of NIN, NINL, and NIL agents should be approximately 27%, 27%, and 46%, respectively. For a more intense opinion exchange process, the individual cognitive acceptability should preferably be 0.6, the output opinion value of LLMs should be close to 0, and the proportions of NIN, NINL, and NIL agents should be approximately 44%, 41%, and 15%, respectively. Fig.3H illustrates that the minimum value of the standard deviation of collective opinion occurs when the fraction of NIL is equal to 1 (0.50, 0.00, 0.00, 1.00, 0.00), and individual opinions are consistent with the output of LLMs and do not change. In contrast, the standard deviation of collective opinion reaches its maximum value when the acceptability of individuals is 0.13, and the output of LLMs is 0.20, with the proportion of three nodes roughly 37%, 31%, and 32%. Fig.3I demonstrates that when collective opinion reaches a consensus, the acceptability of individuals is 0.14, the output opinion value of LLMs is 0, and the proportions of the three agents are roughly 27%, 27%, and 46%. Conversely, when collective opinion reaches polarization, the acceptability of individuals is 0.92, the output opinion value of LLMs is 0, and the proportions of the three agents are roughly 35%, 34%, and 31%. Finally, when collective opinion reaches maximum fragmentation, the acceptance of individuals is 0, the output opinion value of LLMs is 0.06, and the proportions of the three agents are approximately 23%, 71%, and 6%.
In general, Fig.3F-I provide a different perspective from Fig.3A-E in that they equilibrate five parameters and provide optimal solutions for different needs in opinion networks involving LLMs, e.g., one wants to maximize opinion diversity, then it is better to have 70% of the population using LLMs and 20% of the population not using LLMs. In addition, our results in this section can provide the direction of intervention for other parameters when some parameter is known as a priori knowledge.
8 / 21
all nodes A NIL NIN 1.0 = - ns 08 0.6 proNIN â nS 04 0.2 pro_NINL â 9S 0.0 -0.2 pro_NiL â ng -0.4 -0.6 ns ns (ns ns ns [Eg "s ns ns [IEW ns 08 â = er â1.0 NODE con NODE sp NODE cus B 4 $ z nn g |= nm <" mars Boo, Ly = all nodes| âio Oa G6 4-02 00 07 04 08 OB 10 Xu Oy H 1 pro_NIN pro_NIL pro_NINL Sane ae NODE on NODE conv NODE sp NODE cus NODE ey = 2
Fig.3. Relationships of five filtered parameters and four indicators. We aim to investigate the precise trend of the impact of different parameter settings on the opinion network, so we divide the values of each parameter into 11 cases, i.e., the possible values of ð, pro_NIN, pro_NINL, and pro_NIL are [0,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8, 0.9,1] and the sum of the proportions of the three agents is 1, the possible values of ð¥ð¿ð¿ð take the values [-1,-0.8,-0.6,- 0.4,-0.2,0,0.2,0.4,0.6,0.8,1], and there are a total of 7986 combinations of parameters for the opinion dynamics after traversing all cases. We then calculate the values of the four indicators (NODEdiff, NODEconv, NODESD, and NODEclus) for each combination of agents with three different use strategies and for all agents in the network, the detailed description and computation process of each indicator is described in the methods section. To eliminate randomness of our results, we repeat the simulation 100 times for each combination, and the final indicator values are the average of the results of these 100 times. (A) Pearson correlation coefficients of 5 filtered parameters and 4 indicators for the agents with different usage strategies. The Pearson correlation coefficients takes values in the range [-1, 1], with a value of 0 indicating that the two variables are not correlated, a positive value indicating a positive correlation, and a negative value indicating a negative correlation, the different colors of the squares in the figure represent different values of the Pearson correlation coefficients, and the legend is on the right, Since the value of the NIL does not change, resulting in the values of its four indicators not changing either, none of their Pearson correlation coefficients with the five parameter values exist. We also conducted t-test to test the degree of significance of the Pearson correlation coefficients, * denotes the degree of significance, i.e., the P-value, ***ð<0.001, **ð<0.01, *ð < 0. 05, ns means ð ⥠0. 05, nan means none P value exist. In order to derive detailed trends that were not available from the correlation analysis, we then use the benchmark scenario in Fig.2, and keep changing the value of only one of the parameters at a time, and repeat the simulation 100 times to plot detailed trend of the indicators with respect to the parameters, some additional findings we obtained are shown in (B), (C), (D) and (E). To obtain the optimal combination of parameters for different use and intervention strategies of LLMs, we computed the average values
9 / 21
of the five parameters when the minimum or maximum values are obtained for the four indicators, respectively. Which are shown in (F), (G), (H) and (I). In order to make the results more robust, when different combinations of parameters reach extreme values at the same time, we select all combinations of parameters that reach extreme values and calculate their average values, and when this does not happen, we select the top ten combinations of parameters to calculate the average value. The minimum value of the opinion clusters in (I) is 1, which represents the consensus of opinions, and we additionally show the case where the mean value of the opinion clusters is 2, which represents the polarization of opinions in reality.
Countered solutions and intervention strategies to hazards of LLMs. Fig.4A shows that randomly adding agents with opposite, neutral, or random opinions significantly improves scenarios with negative LLMsâ values, such as bias. There is no significant difference between these three approaches. However, the final opinion distribution for the approach introducing opposite values spans a larger range than the remaining two approaches, with the minimum value still larger than the original scenario, and the maximum value a large improvement over the original scenario. This measure is more suitable for scenarios that require a complete and rapid reversal of opinion, such as correcting bias as opposed to a certain established fact. Approaches that introduce neutral and random agents have the smallest final opinion distribution span and significantly improve the minimum value of opinion, but not the maximum value. These two approaches are more robust and more suitable for scenarios that require slow changes in population misperceptions, such as biases against race, poverty, and disability.
In Fig.4B, we observe that the standard deviation of individuals in the group who are all partially dependent (0.370) on LLMs is significantly larger in the extreme scenario than in the non- dependent (0.267) and fully dependent scenarios (0). In Fig.4C, we observe that the number of opinion clusters in the group that are all non-dependent or partially dependent on LLMs is significantly larger in the extreme scenario than in the fully dependent scenario. These findings further confirm that LLMs can be effective in increasing opinion diversity if used appropriately, whereas complete reliance on LLM leads to a stronger collective consensus.
Fig.4D illustrates different intervention strategies for using LLMs in different contexts with specific objectives of collective opinion difference, convergence, and fragmentation. Overall, we find that when our desired collective opinion and LLMsâ output values are opposite, the most effective method for human intervention is to randomly introduce three types of agents. When our desired collective opinion and LLMsâ output values are not opposite, the most effective method to change the collective opinions is to vary the proportion of people who use different strategies. Increasing the proportion of NINL, NIN and the threshold of the population can effectively intensify the interactions of collective opinions. Increasing the proportion of NINL can effectively diversify collective opinion. There are many specific implementation options for each intervention method in Fig.4D, and we provide some common options based on proven psychological and sociological knowledge in the Conclusion section.
In summary, the experiments and results in this section provide valuable insights into the effective use of LLMs in shaping collective opinion formation and convergence. The findings highlight the importance of the appropriate use of LLMs in promoting opinion diversity and fragmentation-oriented opinion formation. These results have important implications for the design and implementation of effective interventions aimed at promoting the positive development of opinion networks in various contexts.
10 / 21
RK NIL_random RK al NIL_neutral soi NIL_opposite 0.55 15 NIL_origin = c ns NINL_random 5 04 ¢ = 2 | NINL_nuetral x |* BS $3 10 * o 0.3 nc} NINL_opposite * * s Pad NINL_origin * Pz ° NIN_random 3 02 B 5 NIN_nuetral * x 5 0.1 E NIN_opposite zs oaâ c NIN_origin * 0.0 0 15 4.0 05 0.0 05 1.0 OS OS inion di Sv Ss SSS mean opinion difference o © o o > $s 7 $s SS O7 s ge e¢ g ev Â¥ g +) XLLM collective opinion ag convergence collective opinion difference XLLM > a 0 Qe = collective opinion @a fragmentation Ort orn Wee @ >0 @ =0 © <0 @ greater the value @smaller the value âincrease proportion of NIN @increase proportion of NINL @ increase proportion of NIL @)add opposite nodes (S)add random nodes (add neutral nodes _⬠increase threshold OOOO OO D) ©) e@ 2
Fig.4. Countered solutions and intervention strategies to hazards of LLMs. (A) The distribution of the mean opinion difference of three categories of agents in original scenario and 3 countered scenarios. In the context of direct or implicit bias and toxicity of LLM, how to intervene reasonably and effectively in the opinion network is important to eliminate social discrimination and verbal violence, etc. According to Fig.3A, we found that only the ð¥ð¿ð¿ð has a significant relationship with the NODEdiff, therefore, in the existing framework, there is a lack of means to address the change in the opinion network caused by the output opinion values of LLM. To address this issue, three attempts were made. Specifically, we first select all NODEdiff values (N=726) when the ð¥ð¿ð¿ð is -1 as the resultant values of the original dynamic network here, which were calculated in Fig.3A. We then introduce three countered solutions, which are 1) agents of opposite opinions (here i.e. 1); 2) agents of neutral opinions (here i.e. 0 ); and 3) agents of random opinions (with the value domain [-1,1]) added randomly in iterations, and once they were added, they all keep the initial opinion values, the probability of adding agents at iteration t is all 0.1, the number of agents added is all 2, and the LLM value is fixed at -1, too, all other possible parameter combinations (N=726) are traversed, and each combination of parameters is simulated 100 times, the average value is taken as the final result corresponding to each parameter combination. The distribution of the mean opinion difference were plotted using a box plot, with the five vertical lines from left to right indicating the minimum, lower quartile, median, upper quartile and maximum of the data, respectively, We also conducted one-way ANOVA analysis to investigate whether the difference between the two groups of data was significant, we only show here the p-value less than 0.05, * denotes the degree of significance, i.e., the P-value, ****ð<0.0001, ***ð<0.001, **ð<0.01, *ð<0. 05. (B) The distribution of the standard deviation with pro_NIN, pro_NINL, pro_NIL equal to 1, respectively. In three extreme usage strategies here, we select all NODESD values (N=121) which were calculated in Fig.3A., the height of the bar chart represents the mean
standard deviation, the error bars show the 95% confidence interval of the statistical data, the test method for
11 / 21
difference significance is the same as A. (C) The distribution of the number of clusters with pro_NIN, pro_NINL, pro_NIL equal to 1, respectively. In three extreme usage strategies here, we select all NODEclus values (N=121) which were calculated in Fig.3A., the symbols in the figure are the same as in (B). (D) Appropriate intervention strategies of LLMs meeting different scenarios. We take the different needs in reality into account, and plot this diagram based on the correlation matrix and the results of the above three diagrams from three aspects: collective opinion difference, collective opinion convergence time and collective opinion fragmentation.
# Discussion
In the past decades, the study of social opinion evolution has attracted much attention43,44. A number of opinion models have been proposed based on well-studied sociological and psychological principles45-49, such as the DeGroot model50, voter model51, Sznajd model52, Friedkin and Johnsen model53, and bounded confidence model54. A lot of effort has been put into understanding opinion dynamics on the traditional social interactions media era, but little research has been done on opinion models to incorporate the interaction patterns of LLMs. Unlike traditional media interaction, LLMs interact with users in both directions, and LLMs are more likely to output toxic and biased content25-27, so the potential impact of LLMs on opinions is not fully understood. Researchers have identified six specific risk areas of LLMs26, including the potential for implicit bias (e.g., gender, race, and country) and the risk of indulging in their use, which were highly relevant to opinion dynamics. Recent research also have confirmed that uncensored LLMs can significantly influence individual opinions38,41. On the basis of the above study, we conducted millions of simulations with a modified HK model to dig into the impact of LLMs on social opinion dynamics and propose different target-oriented interventions for the utilization of LLMs.
Our results show that a broader cognitive acceptability leads to an eventual consensus of collective opinion, which is consistent with previous findings on opinion dynamics55-57. Deaprting from this point, we demonstrate that the marginal impact of threshold on collective opinion formation is nonlinear. Specifically, thresholds less than 0.5 contribute rapidly to the decentralized aggregation of opinions, thresholds greater than 0.5 contribute rapidly to the overall consensus of opinions. This finding can enrich the theory of cognitive acceptability in opinion dynamics with the involvement of LLMs. The output opinion values of LLMs have a significant and positive effect on the formation of opinions. The use strategy of LLMs has a significant impact on the convergence and distribution of opinions. Moreover, an interesting phenomenon that the use strategies of partial and full reliance on LLMs lead to almost exactly opposite effects on the convergence and distribution of opinions is observed, which may be linked to the multiple sources of opinion exchange that partially rely on LLMs.
We propose coping strategies for the problems that have been demonstrated for LLMs at this stage: bias and toxicity. We find that introducing agents randomly in the network with opposite/neutral/random opinions all significantly reduce the tendency of overall opinion negativization, and that the latter two approaches are more robust. We also explore relevant interventions for the potential risk of overdependence based on the correlation matrix, such as converting individuals who are overly dependent on LLMs to partial dependence. Our study provides different target-oriented strategies for the use and intervention of LLMs, they mainly increase the threshold of individuals, increase the proportion of NIN/NINL/NIL, and add opposite /neutral/random agents. There are many different implementation options for different use and
12 / 21
13 / 21
intervention strategies. For example, to increase the threshold of individuals, we can improve education and promote intercultural competence58; to increase the proportion of NINL, i.e., encourage the rationalized use of LLMs, we can improve people's technological literacy through various forms of publicity, education, and training activities, so that they can understand the advantages and disadvantages of big language models, and we also need to simultaneously develop relevant regulatory and normative measures to protect user data privacy, and avoid problems of model abuse and over-reliance.
To the best of our knowledge, our study investigates for the first time the impact of LLMs on opinion dynamics, starting from different output opinion values of LLMs and different usage strategies. Our findings help promote a deeper public understanding of the influence of LLMs on opinion evolution and support groups and individuals in choosing the best usage strategies with respect to their own demands. Our study highlights that rationalizing the use of LLMs can significantly increase the diversity of opinions compared to not using them, but over-reliance can lead to the opposite situation. Despite the current prohibition of LLMs in numerous companies, schools, and organizations, our findings provide compelling evidence for the rational use of LLMs. Our study supports the notion that such use should not impede the development of artificial general intelligence (AGI) technologies, including LLMs, while also delivering additional benefits to both personal and professional spheres, such as improved productivity and diversified communication channels. Our study offers novel insights into the intricacies of public opinion dynamics in the era of LLMs. Moreover, our findings support the urgent request of theoretical studies of the evolution and formation of opinions in the age of LLMs and even in the future age of AGI.
Our analysis provides new insights for general, archetypal opinion dynamics simulating approaches. It has however some limitations since realistic opinion interaction situations are far more complex than theoretical models. Our study considers a number of phenomena in realistic opinion dynamics, including the influence of authority, the influence of stubbornness, and the influence of arbitrary events outside the network. However, the mechanisms by which people express and exchange their opinions in reality are more complex. For example, some people express their opinions differently from their internal opinions, some people will amplify their opinions when expressing them, and even some people express different opinions to different people at the same time, on the same topic.
Methods Opinion dynamics model. We first take into account that the opinion values in the opinion network are continuous rather than discrete variables like voting questions59, and therefore choose a continuous opinion model rather than a discrete opinion model like the voter model51. Moreover, an agent will neither simply share nor completely disregard the opinions of other agents but will take these opinions into account to a certain extent in forming his/her new opinions in a process defined by a fusion rule. Hence, rather than DeGroot model50, we choose to base our model on the classical Hegselmann-Krause (HK) model42, which is one of the most widely used opinion models of the bounded confidence model54,60, moreover, after taking into account influence of LLMs and complex realities, we propose the new opinion model. The classical HK model42 is defined as:
ð¥ð(ð¡ + 1) = |ð½(ð, ð¥(ð¡))| â1 â ð¥ð(ð¡) , ððð ð¡ â ð ðâð½(ð,ð¥(ð¡)) (1)
14 / 21
Where ð½(ð, ð¥) = {1 ⤠ð ⤠ð| |ð¥ð(ð¡) â ð¥ð(ð¡)| < ðð} and ðð is the confidence level of agent ð . ð¥ð(ð¡) is the opinion value of Agent ð at time t. Agent ð takes only those agents ð into account whose opinions differ from his own by not more than ðð. The base case assumes a uniform level of confidence, i.e., ðð= ð for all agents ð. Compared to the classical HK model, we first take the different usage strategies of LLMs into account, categorize agents into three categories, NIN, NINL and NIL, which also indicates the agents are influenced in different extent: NIN represents agent who does not use LLMs, and is completely unaffected directly by LLMs and only influenced by neighboring nodes; NINL represents agent who partially rely on LLMs, and is influenced directly by both LLMs and neighboring nodes; NIL represents agent who totally rely on LLMs, and is completely influenced by LLMs and is not influenced by neighboring nodes. We then take the complex realities into account, propose three modifications: a) The authoritative effect is simulated by taking into account the different levels of authority and influence of different nodes, instead of each neighbor within the agent threshold having the same weight; b ) The stubbornness of different agent is simulated by randomly introducing the stubbornness degree into the updating formula of agent opinions; c) The influences of arbitrary events to opinions are simulated by introducing a random event in each opinion iteration, and it randomly affects some of these agents.
We adopted Erdos-Renyi graph to conduct our experiments, as it is most commonly used in existing literature to model social networks61. Given a random full-connected opinion networks (G) in which the number of group size is N, and the three categorized nodes occupy different proportions. Let ð¥ð(ð¡) represent the opinion of agent ð at time ð¡ , and its value range is [-1, 1], a value of â1.0 means a very negative opinion, and a value of 1 means a very positive opinion.; let ðð¢ð represent the authority of agent ð, which equals to the number of its neighbors divided by the number of nodes other than itself; the given confidence level for agent ð is ðð, the given stubborn degree for agent ð is ð ðð .The value ranges of ðð¢ð, ðð and ð ðð is [0, 1]. The initial value of ð¥ð(ð¡) and ð ðð are randomly assigned and obey uniform distributions respectively in their value ranges. The number of opinion exchanges, i.e., the total iteration number for opinion dynamics is T. The updating formulas of these three categories of nodes are as follows. a) For NIN:
{ ðð âð¥ð(ð¡), |ð¥ð(ð¡) â ð¥ð(ð¡)| ⤠ðð ð¡âðð ð¥ð(ð¡ + 1) = ð¥ð(ð¡) â ð ðð + { â ðâð½(ð,ð¥(ð¡)) â ðð¢ð ðâð½(ð,ð¥(ð¡)) ðð¢ð ð¥ð(ð¡) } â (1 â ð ðð) (2)
Where ð½(ð, ð¥) = {1 ⤠ð ⤠ð| |ð¥ð(ð¡) â ð¥ð(ð¡)| ⤠ðð}. b) For NINL:
15 / 21
ðð |ð¥ð(ð¡) â ð¥ð¿ð¿ð(ð¡)| > ðð â© âð¥ð(ð¡), |ð¥ð(ð¡) â ð¥ð(ð¡)| ⤠ðð ðð¢ð ð¡âðð ð¥ð(ð¡ + 1) = ð¥ð(ð¡) â ð ðð + { â ð¥ð(ð¡) â ðð¢ð ðâð½(ð,ð¥(ð¡)) ðâð½(ð,ð¥(ð¡)) ðð |ð¥ð(ð¡) â ð¥ð¿ð¿ð(ð¡)| ⤠ðð â© âð¥ð(ð¡), |ð¥ð(ð¡) â ð¥ð(ð¡)| ⤠ðð } â (1 â ð ðð) ð¡âðð ð¥ð(ð¡ + 1) = ð¥ð(ð¡) â ð ðð + ðð¢ð ðð¢ð + ðð¢ð¿ð¿ð ð¥ð(ð¡) â â ðâð½(ð,ð¥(ð¡)) ðð¢ð¿ð¿ð ðâð½(ð,ð¥(ð¡)) â ð¥ð¿ð¿ð(ð¡) + â ðð¢ð + ðð¢ð¿ð¿ð { ðâð½(ð,ð¥(ð¡)) } â (1 â ð ðð) (3)
{
Where ð½(ð, ð¥) = {1 ⤠ð ⤠ð| |ð¥ð(ð¡) â ð¥ð(ð¡)| ⤠ðð} , ð¥ð¿ð¿ð(ð¡) is the opinion value delivered by LLM at time ð¡ , as it is obtained by automatic text generating based on a large amount of historical data, individuals have a negligible impact on the output of LLMs when they interact with them in a Q&A format. We thus assume it as a constant during each iteration in this study, i.e., ð¥ð¿ð¿ð(ð¡)= ð¥ð¿ð¿ð for all times ð¡. ðð¢ð¿ð¿ð is the authority of LLM, we treat it as 1 by the assumption that LLM has the potential to connect every agent. c) For NIL:
ð¥ð(ð¡ + 1) = ð¥ð¿ð¿ð(ð¡) (4)
In general, our modelâs single simulations are performed according to the above rules, and to reduce the randomness of the results, we repeat the simulations one hundred times with the same controlled parameters. Our model has seven controlled parameters, and they are the number of group size (N), the number of opinion exchange (T), the cognitive acceptability of agents ( ð ), the proportion of NIN (pro_NIN), the proportion of NINL (pro_NINL), the proportion of NIL (pro_NIL), opinion value of LLM (ð¥ð¿ð¿ð).
Multi-simulation post-processing method. We aim to investigate the precise trend of the impact of different parameter settings on the opinion network, which means that the steps of our parameters will be more intensive, so the combination of the above seven parameters may appear in hundreds of millions of different scenarios. Considering the computational efficiency and time of the model, we first filtered out the five parameters that have the greatest impact on the results by the set baseline scenarios. We then delve into the impact of LLMs on opinion networks in terms of opinion evolution and opinion distribution by performing 100 simulations under every combination of selected parameters. Opinion evolution refers to the evolution pattern of node opinion values, including two indicators: opinion difference and opinion convergence time; opinion distribution refers to the distribution of node opinion values, including two indicators: opinion standard deviation difference and the number of opinion clusters. The detailed description and calculation methods of the above four indicators are as follows.
Opinion difference is the evolution of the value of an agent's opinion on a topic. In this study, we categorized three types of nodes and computed their mean opinion difference, with negative values indicating that the mean opinion of that type of node undergoes a negative change, i.e., becomes more negative, and positive values the opposite. The formula of mean opinion difference (ððð·ð¸ðððð) is as follows.
16 / 21
ððð·ð¸ðððð = ð ð=1 â â (ð¥ð(ð) â ð¥ð(0)) ðð ð âð (5)
Where ð represents the number of simulations, ð represents the number of specific category of nodes, ð¥ð(ð) and ð¥ð(0) represent the final value and initial value of node ð.
Opinion convergence time is the timestep it takes for an agent's opinion to evolve to a stable state. In this study, we categorize three types of nodes and compute their average opinion convergence time. The larger the value, the longer the average opinion convergence time of that type of node, i.e., the longer it takes for the opinion o to reach a stable state, and the more intense and chaotic the interaction process of opinions. The formula of mean opinion convergence time (ððð·ð¸ðððð£) is as follows.
ððð·ð¸ðððð£ = â (ð¡| âð, |ð¥ð(ð¡) â ð¥ð(ð¡ â 1)| ⤠ð ð ð âð ) (6)
Where (ð¡| âð, |ð¥ð(ð¡) â ð¥ð(ð¡ â 1)| ⤠ð) means for all nodes belonging to the same specific category of nodes, if the difference between their value at time ð¡ and their value at time ð¡ â 1 is less than ð , We take ð to be five thousandths of 1, i.e. 0.005, the time ð¡ is taken as their convergence time.
Opinion standard deviation is the degree of dispersion of a group's opinions relative to the mean value. In this study, we categorize three types of nodes and compute their average opinion standard deviation. The larger the value, the more discrete the overall distribution of opinions of the nodes is relative to the mean, i.e., the wider the overall distribution of opinions. The formula of mean opinion standard deviation (ððð·ð¸ðð·) is as follows.
ððð·ð¸ðð· = ð ð=1 â ââ (ð¥ð(ð) â ð¥Ì
(ð)) ð âð ð â 1 ð 2
(7)
Where ð¥Ì
(ð) represents the mean final value of a specific category of nodes.
The number of opinion clusters can effectively compensate for the lack of standard deviation in portraying data distribution, for example, the standard deviation is large when the opinions are polarized, but the opinions are concentrated. Therefore, we introduce the number of opinion clusters to indicate the degree of opinion aggregation. The larger the value, the more points the opinion distribution of the node is concentrated, i.e., the more the overall opinion distribution tends to be split, and a value of 2 indicates that the opinion distribution of the node is polarized, a value of 1 indicates that the opinion distribution of the node is consensus.
The commonly used K-means clustering method needs to specify the number of categories k in advance, which obviously does not meet our needs because the final distribution of opinions cannot be specified in advance, and density-based clustering methods, such as DBSCAN, do not take well into account the fragmentation of opinion, so we apply single linkage hierarchical clustering, which can compensate the defects of the above two clustering methods, and is an agglomerative clustering algorithm that builds trees in a bottom-up approach62,63. Specifically, we first take the ð¥ð(ð) obtained from a single simulation, i.e., the final opinion value of each agent, as a separate cluster ð¶ð, and then calculate the distance between clusters using the Manhattan distance (see Eq.(8)), followed by merging the two clusters with the closest distance into a new cluster, and the distance between the new merged cluster and the other cluster is the distance between the sample points with the smallest distance in the two clusters (see Eq.(9)), and keep repeating the merged
17 / 21
clusters until all agents' final opinion values are assigned to the one single cluster. After obtaining the final dendrogram, we conduct several trials to choose to cut the dendrogram at height 0.2, which implies that the radius of the resulting clusters cannot exceed 0.2, and we traverse from top to bottom and count the number of clusters that satisfy the condition as number of opinion clusters, after 100 simulations, we obtain mean number of opinion clusters (ððð·ð¸ððð¢ð ) (see Eq. (10)).
ððð ð¡ðð = ððð ð¡(ð¶ð, ð¶ð) = |ð¥ð(ð) â ð¥ð(ð)| (8)
Where ððð ð¡ðð represents the Manhattan distance between initial clusters ð¶ð and ð¶ð.
ððð ð¡ð¶ð· = ððð(ððð ð¡(ð¶ð´, ð¶ð·), ððð ð¡(ð¶ðµ, ð¶ð·)) = ððð(|ð¥ð â ð¥ð|) , ð â ð¶ð¶, ð â ð¶ð·
Where ððð ð¡ð¶ð· represents the Manhattan distance between clusters ð¶ð¶ and ð¶ð· , clusters ð¶ð´ and ð¶ðµ are the results of last clustering, ð¶ð¶ = ð¶ð´ + ð¶ðµ is the results of present clustering, which means the agent in ð¶ð¶ is the concatenation of the agents in ð¶ð´ and ð¶ðµ. )
â (ð| âð, ððð ð¡(ð, ð + 1) ⤠ð¿ ð ð âð ððð·ð¸ððð¢ð = (10)
Where (ð| âð, ððð ð¡(ð, ð + 1) ⤠ð¿) represents traversing the dendrogram from top to bottom, and return the number of clusters ð when the distance between all adjacent clusters (i.e., ð and ð + 1) is less than ð¿ for the first time, we take ð¿ as one tenth of the value range of ð¥ð(ð¡), i.e., 0.2.
Acknowledgements This study was supported by the National Natural Science Foundation of China (#71971196).
# Reference
1
Centola, D., Becker, J., Brackbill, D. & Baronchelli, A. Experimental evidence for tipping points in social convention. Science 360, 1116-1119, doi:10.1126/science.aas8827 (2018).
2
Aririguzoh, S. Communication competencies, culture and SDGs: effective processes to cross- cultural communication. Humanities and Social Sciences Communications 9, 96, doi:10.1057/s41599-022-01109-4 (2022).
3
Li, F., Liu, Y. & Meng, T. Discursive strategy of opinion expression and government response in China: Text analysis based on online petitions. Telematics and Informatics 42, 101238, doi:https://doi.org/10.1016/j.tele.2019.06.001 (2019).
4
Muchnik, L., Aral, S. & Taylor, S. J. Social Influence Bias: A Randomized Experiment. Science 341, 647-651, doi:10.1126/science.1240466 (2013).
5
Perra, N. & Rocha, L. E. C. Modelling opinion dynamics in the age of algorithmic personalisation. Scientific Reports 9, 7261, doi:10.1038/s41598-019-43830-2 (2019).
6
Paluck, E. L., Shepherd, H. & Aronow, P. M. Changing climates of conflict: A social network experiment in 56 schools. Proceedings of the National Academy of Sciences 113, 566-571, doi:10.1073/pnas.1514483113 (2016).
7
Ferraz de Arruda, G., Petri, G., Rodriguez, P. M. & Moreno, Y. Multistability, intermittency, and hybrid transitions in social contagion models on hypergraphs. Nature Communications 14, 1375, doi:10.1038/s41467-023-37118-3 (2023).
8
Proskurnikov, A. V. & Tempo, R. A tutorial on modeling and analysis of dynamic social networks. Part I. Annual Reviews in Control 43, 65-79, doi:https://doi.org/10.1016/j.arcontrol.2017.03.002 (2017).
9 Proskurnikov, A. V. & Tempo, R. A tutorial on modeling and analysis of dynamic social networks. Part II. Annual Reviews in Control 45, 166-190, doi:https://doi.org/10.1016/j.arcontrol.2018.03.005 (2018).
10
Hassani, H. et al. Classical dynamic consensus and opinion dynamics models: A survey of recent trends and methodologies. Information Fusion 88, 22-40, doi:https://doi.org/10.1016/j.inffus.2022.07.003 (2022).
11
Li, L., Fan, Y., Zeng, A. & Di, Z. Binary opinion dynamics on signed networks based on Ising model. Physica A: Statistical Mechanics and its Applications 525, 433-442, doi:https://doi.org/10.1016/j.physa.2019.03.011 (2019).
12
Laptev, A. A. Modeling of Social Processes Based on T.Parsons Ideas. Advances in Complex Systems 03, 99-106, doi:10.1142/S021952590000008X (2000).
13 Weisbuch, G., Deffuant, G., Amblard, F. & Nadal, J.-P. Meet, discuss, and segregate! Complexity 7, 55-63, doi:https://doi.org/10.1002/cplx.10031 (2002).
14
Borkar, V. S. & Reiffers-Masson, A. Opinion Shaping in Social Networks Using Reinforcement Learning. IEEE Transactions on Control of Network Systems 9, 1305-1316, doi:10.1109/TCNS.2021.3117231 (2022).
15
Noorazar, H. Recent advances in opinion propagation dynamics: a 2020 survey. The European Physical Journal Plus 135, 521, doi:10.1140/epjp/s13360-020-00541-2 (2020).
16
Xiong, F., Liu, Y., Wang, L. & Wang, X. Analysis and application of opinion model with multiple topic interactions. Chaos: An Interdisciplinary Journal of Nonlinear Science 27, 083113, doi:10.1063/1.4998736 (2017).
17
Zhang, N., Huang, H., Su, B., Zhao, J. & Zhang, B. Information dissemination analysis of different media towards the application for disaster pre-warning. PloS one 9, e98649, doi:10.1371/journal.pone.0098649 (2014).
18
Kubin, E. & von Sikorski, C. The role of (social) media in political polarization: a systematic review. Annals of the International Communication Association 45, 188-206, doi:10.1080/23808985.2021.1976070 (2021).
19
Flamino, J. et al. Political polarization of news media and influencers on Twitter in the 2016 and 2020 US presidential elections. Nature Human Behaviour 7, 904-916, doi:10.1038/s41562- 023-01550-8 (2023).
20
Fan, C., Jiang, Y., Yang, Y., Zhang, C. & Mostafavi, A. Crowd or hubs: Information diffusion patterns in online social networks in disasters. International Journal of Disaster Risk Reduction 46, 101498, doi:10.1016/j.ijdrr.2020.101498 (2020).
21
Fan, C., Jiang, Y. & Mostafavi, A. Emergent social cohesion for coping with community disruptions in disasters. Journal of the Royal Society Interface 17 (2020).
22
Strömbäck, J. et al. News media trust and its impact on media use: toward a framework for future research. Annals of the International Communication Association 44, 139-156, doi:10.1080/23808985.2020.1755338 (2020).
23
Yang, Y. et al. Exploring the emergence of influential users on social media during natural disasters. International Journal of Disaster Risk Reduction 38, 101204, doi:https://doi.org/10.1016/j.ijdrr.2019.101204 (2019).
24
Aiyappa, R., An, J., Kwak, H. & Ahn, Y.-Y. Can we trust the evaluation on ChatGPT? ArXiv abs/2303.12767 (2023).
25
Zhao, W. X. et al. A Survey of Large Language Models. ArXiv abs/2303.18223 (2023).
18 / 21
19 / 21
26 Weidinger, L. et al. Ethical and social risks of harm from Language Models. ArXiv abs/2112.04359 (2021).
27
Luitse, D. & Denkena, W. The great Transformer: Examining the role of large language models in the political economy of AI. Big Data & Society 8, 20539517211047734, doi:10.1177/20539517211047734 (2021).
28 Pegoraro, A., Kumari, K., Fereidooni, H. & Sadeghi, A.-R. To ChatGPT, or not to ChatGPT: That is the question! ArXiv abs/2304.01487 (2023).
29 Eysenbach, G. The Role of ChatGPT, Generative Language Models, and Artificial Intelligence in Medical Education: A Conversation With ChatGPT and a Call for Papers. JMIR Med Educ 9, e46885, doi:10.2196/46885 (2023).
30 Bostrom, N. Information Hazards: A Typology of Potential Harms from Knowledge. 10 (2012).
31 Craft, J. T., Wright, K. E., Weissler, R. E. & Queen, R. M. Language and Discrimination: Generating Meaning, Perceiving Identities, and Discriminating Outcomes. Annual Review of Linguistics 6, 389-407, doi:10.1146/annurev-linguistics-011718-011659 (2020).
32 McKee, K., Bai, X. & Fiske, S. Understanding Human Impressions of Artificial Intelligence. (2021).
33 Talboy, A. N. & Fuller, E. Challenging the appearance of machine intelligence: Cognitive bias in LLMs. ArXiv abs/2304.01358 (2023).
34
West, J. D. & Bergstrom, C. T. Misinformation in and about science. Proceedings of the National Academy of Sciences 118, e1912444117, doi:10.1073/pnas.1912444117 (2021).
35 Skitka, L. J., Mosier, K. L. & Burdick, M. Does automation bias decision-making? International Journal of Human-Computer Studies 51, 991-1006, doi:https://doi.org/10.1006/ijhc.1999.0252 (1999).
36 Piloto, L. S., Weinstein, A., Battaglia, P. & Botvinick, M. Intuitive physics learning in a deep- learning model inspired by developmental psychology. Nature Human Behaviour 6, 1257-1267, doi:10.1038/s41562-022-01394-8 (2022).
37 Smith, B. C. The Promise of Artificial Intelligence: Reckoning and Judgment. (The MIT Press, 2019).
38 Salewski, L., Alaniz, S., Rio-Torto, I., Schulz, E. & Akata, Z. In-Context Impersonation Reveals Large Language Models' Strengths and Biases. ArXiv abs/2305.14930 (2023).
39 Ousidhoum, N. D., Zhao, X., Fang, T., Song, Y. & Yeung, D.-Y. in Annual Meeting of the Association for Computational Linguistics.
40 Venkit, P., Gautam, S., Panchanadikar, R., Huang, T.-H. K. & Wilson, S. in Conference of the European Chapter of the Association for Computational Linguistics.
41
41 Rutinowski, J., Franke, S., Endendyk, J., Dormuth, I. & Pauly, M. The Self-Perception and Political Biases of ChatGPT. ArXiv abs/2304.07333 (2023).
42 Hegselmann, R. & Krause, U. Opinion dynamics and bounded confidence: models, analysis and simulation. J. Artif. Soc. Soc. Simul. 5 (2002).
43 Peralta, A. F., Kertész, J. & Iñiguez, G. Opinion dynamics in social networks: From models to data. arXiv preprint arXiv:2201.01322 (2022).
44 Anderson, B. D., Dabbene, F., Proskurnikov, A. V., Ravazzi, C. & Ye, M. Dynamical networks of social influence: Modern trends and perspectives. IFAC-PapersOnLine 53, 17616-17627 (2020).
20 / 21
45 Eisenberger, N. I., Lieberman, M. D. & Williams, K. D. Does rejection hurt? An fMRI study of social exclusion. Science 302, 290-292 (2003).
46
Zhao, Y., Kou, G., Peng, Y. & Chen, Y. Understanding influence power of opinion leaders in e- commerce networks: An opinion dynamics theory perspective. Information Sciences 426, 131- 147 (2018).
47 Dandekar, P., Goel, A. & Lee, D. T. Biased assimilation, homophily, and the dynamics of polarization. Proceedings of the National Academy of Sciences 110, 5791-5796, doi:10.1073/pnas.1217220110 (2013).
48 Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N. & Cook, J. Misinformation and Its Correction: Continued Influence and Successful Debiasing. Psychological Science in the Public Interest 13, 106-131, doi:10.1177/1529100612451018 (2012).
49 Skinner, B. F. Two Types of Conditioned Reflex and a Pseudo Type. The Journal of General Psychology 12, 66-77, doi:10.1080/00221309.1935.9920088 (1935).
50
50 Degroot, M. H. Reaching a Consensus. Journal of the American Statistical Association 69, 118- 121, doi:10.1080/01621459.1974.10480137 (1974).
51 Ben-Naim, E., Frachebourg, L. & Krapivsky, P. L. Coarsening and persistence in the voter model. Physical Review E 53, 3078-3087, doi:10.1103/PhysRevE.53.3078 (1996).
52 Slanina, F. & Lavicka, H. Analytical results for the Sznajd model of opinion formation. The European Physical Journal B - Condensed Matter and Complex Systems 35, 279-288, doi:10.1140/epjb/e2003-00278-0 (2003).
53 Friedkin, N. E. & Johnsen, E. C. Social influence and opinions. The Journal of Mathematical Sociology 15, 193-206, doi:10.1080/0022250X.1990.9990069 (1990).
54
Lorenz, J. A. N. CONTINUOUS OPINION DYNAMICS UNDER BOUNDED CONFIDENCE: A SURVEY. International Journal of Modern Physics C 18, 1819-1838, doi:10.1142/S0129183107011789 (2007).
55
Lorenz, J. A stabilization theorem for dynamics of continuous opinions. Physica A: Statistical Mechanics and its Applications 355, 217-223, doi:https://doi.org/10.1016/j.physa.2005.02.086 (2005).
56
Wedin, E. & Hegarty, P. A Quadratic Lower Bound for the Convergence Rate in the One- Dimensional HegselmannâKrause Bounded Confidence Dynamics. Discrete & Computational Geometry 53, 478-486, doi:10.1007/s00454-014-9657-7 (2015).
57
Bhattacharyya, A., Braverman, M., Chazelle, B. & Nguyen, H. L. in Proceedings of the 4th conference on Innovations in Theoretical Computer Science 61â66 (Association for Computing Machinery, Berkeley, California, USA, 2013).
58
Hammer, M. R., Bennett, M. J. & Wiseman, R. Measuring intercultural sensitivity: The intercultural development inventory. International Journal of Intercultural Relations 27, 421- 443, doi:https://doi.org/10.1016/S0147-1767(03)00032-4 (2003).
59 Holley, R. A. & Liggett, T. M. Ergodic Theorems for Weakly Interacting Infinite Systems and the Voter Model. The Annals of Probability 3, 643-663 (1975).
60 Dittmer, J. C. Consensus formation under bounded confidence. Nonlinear Analysis: Theory, Methods & Applications 47, 4615-4621, doi:https://doi.org/10.1016/S0362-546X(01)00574-0 (2001).
61 Amblard, F., Bouadjio-Boulic, A., Gutiérrez, C. S. & Gaudou, B. in 2015 Winter Simulation Conference (WSC). 4021-4032.
62 Liu, S., He, L. & Max Shen, Z.-J. On-Time Last-Mile Delivery: Order Assignment with Travel- Time Predictors. Management Science 67, 4095-4119, doi:10.1287/mnsc.2020.3741 (2020).
63 Bien, J. & Tibshirani, R. Hierarchical Clustering With Prototypes via Minimax Linkage. Journal of the American Statistical Association 106, 1075-1084, doi:10.1198/jasa.2011.tm10183 (2011).
21 / 21 | {
"id": "2201.01322"
} |
2308.03210 | Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series | Irregularly sampled multivariate time series are ubiquitous in several
application domains, leading to sparse, not fully-observed and non-aligned
observations across different variables. Standard sequential neural network
architectures, such as recurrent neural networks (RNNs) and convolutional
neural networks (CNNs), consider regular spacing between observation times,
posing significant challenges to irregular time series modeling. While most of
the proposed architectures incorporate RNN variants to handle irregular time
intervals, convolutional neural networks have not been adequately studied in
the irregular sampling setting. In this paper, we parameterize convolutional
layers by employing time-explicitly initialized kernels. Such general functions
of time enhance the learning process of continuous-time hidden dynamics and can
be efficiently incorporated into convolutional kernel weights. We, thus,
propose the time-parameterized convolutional neural network (TPCNN), which
shares similar properties with vanilla convolutions but is carefully designed
for irregularly sampled time series. We evaluate TPCNN on both interpolation
and classification tasks involving real-world irregularly sampled multivariate
time series datasets. Our experimental results indicate the competitive
performance of the proposed TPCNN model which is also significantly more
efficient than other state-of-the-art methods. At the same time, the proposed
architecture allows the interpretability of the input series by leveraging the
combination of learnable time functions that improve the network performance in
subsequent tasks and expedite the inaugural application of convolutions in this
field. | http://arxiv.org/pdf/2308.03210 | Chrysoula Kosma, Giannis Nikolentzos, Michalis Vazirgiannis | cs.LG | null | null | cs.LG | 20230806 | 20230809 | 3 2 0 2
g u A 9 ] G L . s c [
2 v 0 1 2 3 0 . 8 0 3 2 : v i X r a
# Time-Parameterized Convolutional Neural Networks for Irregularly Sampled Time Series
# Chrysoula Kosma ´Ecole Polytechnique, IP Paris France [email protected]
# Giannis Nikolentzos ´Ecole Polytechnique, IP Paris France [email protected]
Michalis Vazirgiannis ´Ecole Polytechnique, IP Paris France [email protected]
# Abstract
Irregularly sampled multivariate time series are ubiquitous in several application domains, leading to sparse, not fully-observed and non-aligned observations across different variables. Standard sequential neural network architectures, such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs), consider regular spacing between observation times, posing significant challenges to irregular time series modeling. While most of the proposed architectures incorporate RNN variants to handle irregular time intervals, convolutional neural networks have not been adequately studied in the irregular sampling setting. In this paper, we parameterize convolutional layers by employing time-explicitly initialized kernels. Such general functions of time enhance the learning process of continuous-time hidden dynamics and can be efficiently incorporated into convolutional kernel weights. We, thus, propose the time-parameterized convolutional neural network (TPCNN), which shares similar properties with vanilla convolutions but is carefully designed for irregularly sampled time series. We evaluate TPCNN on both interpolation and classification tasks involving real-world irregularly sampled multivariate time series datasets. Our experimental results indicate the competitive performance of the proposed TPCNN model which is also significantly more efficient than other state-of-the-art methods. At the same time, the proposed architecture allows the interpretability of the input series by leveraging the combination of learnable time functions that improve the network performance in subsequent tasks and expedite the inaugural application of convolutions in this field.
1
# Introduction
Time series arise naturally in many contexts including quantitative finance, astrophysics and medicine, just to name a few. Recently, there is a growing interest in applying machine learning techniques to time series data. Besides time series forecasting, which has been extensively studied for decades [7], other tasks have also emerged recently such as time series classification [12] and generation [8].
Time series are constructed from real-world data and usually several of their observations are missing or are subject to noise. This is mainly due to irregular sampling and is common in different types of data including medical records, network traffic, and astronomical data. Unfortunately, the most successful machine learning models in sequential modeling, namely recurrent neural networks (RNNs) and convolutional neural networks (CNNs) cannot properly handle such irregularly sampled time series data. Indeed, those models treat observations successively and assume an equidistant sampling scheme. Thus, time series data that
1
exhibits variable gaps between consecutive time points pose a significant challenge to such conventional deep learning architectures. A naive approach to deal with the above problem would be to drop some observations such that the distance between consecutive (remaining) observations is fixed. However, this would increase data sparsity, thus leading to poorly defined latent variables. A more prominent approach would be to first apply some imputation method to replace missing values with estimated values, and then to use the standard models which assume an equidistant sampling scheme. In fact, several recent approaches build on the above idea [3, 9]. However, this could potentially result in a loss of information and a violation of the underlying dynamics.
Recently, there has been an increasing interest in effectively capturing the continuous dynamics of real- world sparse and irregular multivariate time series. Most studies have extended RNNs to continuous-time hidden dynamics defined by ordinary differential equations (ODEs) [4, 24]. The effectiveness of Convolutional Neural Networks (CNNs) [15] as an alternative to recurrent architectures has been established, as long as the input dependencies that are essential fall within the memory horizon of the network. CNNs are based on parallel computations and thus are more efficient, contrary to the training instability and gradient problems of RNNs that employ back-propagation through time [34]. However, since discrete convolutions learn independent weights for each time step in the kernel range, they do not directly capture the time irregularities. Efforts for the continuous implementation of convolutional kernels have targeted 3D data [25, 33] and recently, sequences [23]. The proposed continuous convolution for sequential data [23], CKConv, parameterizes the kernel values using a multi-layer perception (MLP) on the relative positions of the observations, followed by a periodic activation function [29]. In contrast to [23] that take advantage of periodic activations, our layer can be constructed employing any predefined set of continuous functions and be followed by any activation, while using significantly fewer learnable parameters, since a single feed-forward layer is used for the parameterization of the convolutional kernel.
Following the above line of research, in this paper, we develop a new model, so-called Time-Parameterized Convolutional Neural Network (TPCNN), which generalizes the standard CNN model to irregularly sampled time series. To achieve that, we replace the fixed kernels of CNNs with kernels whose values are parameterized both by time and by trainable variables. Thus, instead of keeping the kernel weights fixed over the whole time series length, we use different functions (e.g., linear, sinusoidal) to produce the kernels that will be convolved with each patch of the time series. Therefore, kernels can be seen as continuous functions of time, and the proposed TPCNN model can naturally learn continuous latent representations of irregular time series. Furthermore, the use of the aforementioned functions improves the explainability of the proposed model. We combine our time-parameterized convolutions with vanilla convolutions by stacking them in a deep encoder module. The proposed TPCNN model is evaluated in the tasks of time series classification and time series interpolation. Our experiments demonstrate that the proposed model performs comparably to state-of-the-art methods. The main contributions of the paper are summarized as follows:
(i) Generalizing conventional, fixed convolutional kernels to time functions, that increase their represen- tational power and still leverage properties of convolutions (e.g., locally aggregated information, fast training).
(ii) Enabling the application and proving the efficiency of deep stacked convolutions in the irregular sampling setting.
(iii) Achieving high-performance results in interpolation and classification of irregularly sampled benchmark datasets, which are comparable to other state-of-the-art methods.
# 2 Related Work
The long-standing challenge in multivariate irregular time series modeling has led to the development of various neural network architectures that explicitly handle such time-dependent peculiarity.
One strategy suggests dividing the timeline into equal intervals, filling in missing data, and then using a Recurrent Neural Network (RNN) on the imputed inputs. Using a weighted average between the empirical
2
mean and the previous observation to perform imputation has also been proposed [3]. Alternative methods for imputation include the use of Gaussian processes [9], or generative adversarial networks [16] prior to running the RNN on time-discretized inputs. The interpolation-prediction network [26] employs several semi-parametric interpolation layers for multivariate time series input with missing values, followed by a prediction network which is applied on the produced regularly spaced and fully observed representations. Multi- directional RNNs (M-RNN) combine past and future observations for each timestamp [36]. A differentiable set function method for classifying irregularly sampled is another line of work presented in [11].
An alternative strategy for handling irregularly sampled data involves architectures that directly model such temporal sequences. Various techniques, including adaptations of gated recurrent unit networks (GRUs) [5] and Long Short-term Memory networks (LSTMs) [10], have been introduced for this purpose. Among the several proposed modified GRU architectures [3], a prominent example takes as input observed values, indicators denoting missing data points, and the differences in time between observations. The LSTM architecture has been extended for handling the time irregularity of the data, by introducing a novel time gate in [19] that updates the memory state. The activation and deactivation of this gate are governed by distinct rhythmic oscillations, controlled by some learnable parameters. Another LSTM modification is presented in [21], where the proposed forget gate moderates the passing of memory from one time step to another. Another solution for handling irregularly sampled data is to incorporate the time gaps between observations directly into Recurrent Neural Networks (RNNs). One approach is to add the time gap ât to the RNN input, which has been found to be susceptible to overfitting [18]. An alternative method is to introduce hidden states that decay over time, which has been proposed in several works as a viable solution [3, 2, 22].
Hidden states with an exponential decay can be employed to parameterize neural Hawkes processes and explicitly model observations via latent state changes at each observation event [17]. Many works focus on the continuous modeling of time series by learning a continuous-time neural representation with a latent state defined at all times. More specifically, a variational auto-encoder model, which utilizes a neural network decoder in combination with a latent ordinary differential equation (ODE) model, has been presented in [4]. Based on this approach, an ODE-RNN encoder that consists of a neural ODE part that models the hidden state dynamics and an RNN part that updates the hidden state has been proposed [24]. A continuous version of the GRU architecture models the input series via continuous ODE dynamics describing the evolution of the probability distribution of the data [6]. Finally, an alternative to Neural ODEs, Neural Controlled Differential Equations represent the continuous-time analogue of an RNN, which benefits from memory-efficient adjoint-based backpropagation across observations [14].
Attention mechanisms combined with time encodings, as an alternative to positional ones [32], have been proposed [30, 37, 31]. By extending attention with learnable time embeddings [35], the recently proposed Multi-Time Attention Network [27] computes the similarity between observations at different time points using a learnable time embedding. This approach works similarly to kernel-based interpolation, but by leveraging a learnable time attention-based similarity kernel. Except for the optimization issues of RNNs, the conventional dot-product self-attention mechanism matches queries with keys without considering the surrounding context. At the same time, space complexity grows quadratically with the input length, leading to memory constraints and potential performance limitations.
The use of implicit neural representations for creating continuous data representations by encoding the input in the weights of a neural network has recently gathered interest [20, 29]. Our approach can be conceptualized as an implicit representation of the convolutional kernels since they are parameterized as learnable and continuous functions of time. In this study, the proposed time-parameterized convolutional layer (TPC) introduces time-varying convolutional kernels, allowing for more efficient representational learning of the time dependencies among partially-observed variables. We leverage several continuous time functions for extracting learnable time embeddings of the time intervals across different variables. The proposed architecture is carefully designed for interpolation and classification tasks on irregularly sampled time series.
3
# 3 The TPC Layer
In this section, we define the mathematical properties of the employed Time-Parameterized layer (TPC) and analytically explain a proposed framework for tasks involving irregularly sampled, partially observed and multivariate time series.
# 3.1 Preliminaries
Convolution is a well-studied mathematical operation which has applications in many diverse scientific g, expresses how the shape of one is fields [1]. The convolution of two functions f and g, denoted by f modified by the other.
Continuous convolution. the integral of the product of the two functions after one is reflected and shifted. Formally, given f : RD and g : RD
â
(f * 9)(x) = [. F(y)g(x â y)dy
# (f
Discrete convolution. the support domain of finite integer set ZD and equivalent of convolution is defined as: In the real world, signals are discrete and finite. For functions f , g, defined over D, respectively, the discrete } K, K + 1, ..., K 1, K {â â â
(fe gin]= So flr â kolk) (1) k=â-K
Thus, the integral is replaced by a finite summation. Standard CNN models consist of layers that perform discrete convolutions that are defined over the discrete domain.
# 3.2 Time-Parameterized 1D Convolutions
We first introduce the key notations behind the employed time-parameterized convolutions for irregular and multivariate time series and analyze their fundamental properties.
Irregular time series and standard CNNs. Let {Kâ,...,X)} be a collection of multivariate time series where X ⬠Râ¢*Â¥ for all i ⬠{1,...,N}. Thus, each time series consists of m channels and has a length (i.e., number of observations) equal to L which corresponds to the observation times {t1,t2,...,tz}. Let also d(-,-) denote a function that measures the distance (in time) between observations of a single channel of the collection of time series. The convolution operation of standard CNNs assumes that consecutive observations are equally spaced across all samples, and thus, the weights of the different kernels of standard CNNs are fixed across all chunks of the time series. In other words, the summation in the right part of Equation (i) is performed over the elements of the same set for all n. Formally, we have that d(X{),XÂ¥} 41) = T holds for alli ⬠{1,...,m},j ⬠{],..., Lâ1} and i,j ⬠{1,...,.N} where N is the number of samples. However, the above does not necessarily hold in the case of irregularly sampled time series data. Indeed, irregular sampling for multivariate series leads to variations in the number of observations across channels. Thus, due to the assumptions it makes, the standard convolution operation of CNNs is not suitable for irregular time series data.
4
Time-parameterized convolutional kernels. To deal with the irregularity of time series, we propose to use time-parameterized kernels. Thus, instead of a fixed kernel that slides over the patches of the time series, we use a parameterized kernel whose components are functions of time. The kernel is also parameterized by N0 where the weights of a neural network. We constraint the size of the kernel to be equal to 2z + 1 where z N0 denotes the set of natural numbers together with zero. Then, the elements of the kernel are constructed by some function g(θ, ât) where θ denotes some trainable parameters and ât denotes the distance (in time) of the observation associated with some element of the kernel and the z + 1-th observation. Formally, the convolution is defined as follows:
2241 2241 (f*9)(t) = D> fltig(,t - ti) = SO f(ti)g(0, At) (2) i=1 i=1
where t1, . . . , t2z+1 are the timestamps associated with the observations of the patch the kernel is applied to. The function g(θ, ât) is quite general and can have different forms. In this paper, we focus on inter- R is defined as follows:
4 Ad")
a( [a % Os 4 Ad") =61((n(os ar) +6)
# g
where h : R R denotes some activation function (i.e., sigmoid, ReLU, etc.). Thus, to construct each element of the kernel, function g takes as input four trainable parameters (i.e., θ1, θ2, θ3 and θ4) and the time difference between the current observation and the center observation of the patch. Function h is chosen such that inductive bias is injected into the model. This can allow the model to capture patterns that commonly occur in time series data and also make its internal operations more interpretable. For example, a function h(x) = c where c is some constant would not be a good candidate for extracting useful features from the time series. On the other hand, we employ more informative functions which can capture useful properties of time series such as trend and seasonality. In particular, we employ the following ten functions:
1. h1(x) = x 6. h6(x) = x2
2. h2(x) = sin(x) 7. h7(x) = x3
3. h3(x) = cos(x) 8. h8(x) = sinh(x)
4. h4(x) = tan(x)
9. h9(x) = cosh(x)
5. h5(x) = exp(x)
10. h10(x) = tanh(x)
Most of the time, trend is a monotonic function, and therefore, functions h1, h6 and h7 are chosen to detect trend in time series. Seasonality is a typical characteristic of time series in which the data experiences regular and predictable changes that recur over a defined cycle. Functions h2, h3, h9 and h10 are responsible for extracting features that take seasonality into account.
The approach presented above generates kernels for univariate time series. In the case of multivariate time series, different parameters are learned for the different components of the time series. Therefore, the Rm. Thus, four parameters (θ1, θ2, θ3 and θ4) are replaced by vectors of dimension m, i. e., θ1, θ2, θ3, θ4 function g(θ, ât) : R4m+1 Rm is computed by applying function h( ) pointwise to m different elements. · Note that ât is still a scalar since observation times are identical across all components of the series.
# 3.3 The Time-Parameterized Convolutional (TPC) Layer
Given a sample X(i), its corresponding observation times g, the kernel centered at the j-th observation (i. e., X(i) t1, t2, . . . , tL { , and a time-parameterized function } :,j ) is constructed as follows:
5
Patch Observation time Difference in time Kernel X(i) :,jâK tjâK âtjâK g(θ, âtjâK ) . . . . . . . . . . . . X(i) :,j tj 0 g(θ, 0) . . . . . . . . . . . . X(i) :,j+K tj+K âtj+K g(θ, âtj+K )
Note that X(i) convolution is computed as follows: :,j denotes the j-th column of matrix X(i). Once we construct the kernel, the output of the
m c=) 2G, Ati) Xf) pe +--+ D> G(9,0). Kf) +... l=1 l M: Il a + > 9, Ath XO Lj+kK M: l Il a
# l,j+K
R. In some cases, features of the multivariate time series might be missing. In such cases, the where c above operation would compute the sum of a smaller number of terms (since missing features are ignored). Thus, we also experimented with the mean function:
fe m ; c= £(So0.as-1)1%)5 + + 9(0,0). Xf? +... 1=1 l=1 (3) m t Ss 99, Atj+K)i X.1) l=1
where ν denotes the actual number of features (out of the (2K + 1)m features, those that are not missing). Thus, the convolution between a sequence of observations and the kernel outputs a real number. We use RL. Furthermore, zero padding and apply the kernel to all observations and, therefore we obtain a vector c similar to standard CNNs, not a single kernel, but instead a collection of kernels is generated and applied to the input. These kernels might correspond to different functions of the ones defined above (i. e., h1, . . . , h10). Suppose that we use p different kernels in total (potentially of different functions). Then, the output of the TPC layer of the multivariate and irregularly sampled time series X(i) is computed as:
TPO(X,t) = ||P co, E RU*P
â
is the concatenation operator between vectors and t(i) is a vector that stores the observation times where of the time series. â¥
# 3.4 Properties of TPC Layer
Constant number of parameters An interesting property of the TPC layer is that the number of parameters of each kernel is constant and equal to 4m regardless of the size of the kernel. This is because the kernel is dynamically generated based on the observation times and only 4m trainable parameters are involved. This is in contrast to standard convolutional layers where the number of parameters is equal to the size of the kernel plus the bias. Thus, the number of parameters of the TPC layer will be less than the number of parameters of a standard convolutional layer when the size of the kernels is greater than 4. This is likely to lead to less complex models and might significantly reduce overfitting.
(Lâmp) for kernel Time Complexity. The time complexity of the proposed TPC layer is approximately size â, similar to the vanilla 1D convolution. Since TPC relies on convolutions, that take advantage of parallel computations, it can be trained faster than recurrent neural network architectures. The complexity comparison becomes even more advantageous when compared with continuous-time models, such as neural ODEs that are significantly slower than RNNs [14].
6
, \ 4 ' 1 1 1 2 1 1 â ' an(t) Ae \ 1 / i : Doak = 1 1 ' â ' ran A 1 a(t) |v ' a(t) | Tinear ' â 1 1 1 ' {tists ...stu} ' {tista,...5tz} i al TPc | : am(t) ' 1 ' ' f Mask â ' 1 1 Â¥ 1 a(t) |/ 1 ' ' t TPC } 1 1 ' 1 1 1 \ ' Figure 1: (Left) An encoder that consists of the proposed TPC layer, convolutions and pooling layer and produces a fixed-size latent representation z. (Right) An encoder-decoder framework that reconstructs the series from the input using TPC and linear layers.
Learning Properties. The proposed TCP layer introduces time-varying convolutional kernels as opposed to fixed kernels that are commonly employed in traditional convolutional neural networks (CNNs). In other words, the employed kernels do not remain fixed throughout the whole length of the input series. This particular trait of TPC does not explicitly force weight sharing between different subsequences of the time series during convolution. Weight sharing is, however, implicitly modeled via the learnable representations of time, that are used to initialize the kernel weights. This is based on the assumption that observations that are mapped to similar time embeddings will probably share similar values of weights in the convolutional operation. The proposed approach still maintains the ability to locally aggregate information by retaining the notion of fixed kernel size in the convolution operation. This allows for the output of the convolution to be locally aggregated, while still incorporating the benefits of time-varying convolutional kernels.
Invariance Properties. If some patterns in the time series are identical, both in terms of the ob- servations and also in terms of the difference in time between the observations, then the TPC layer will produce the same output for those two patterns. For example, let xi = (xiâK, . . . , xi, . . . , xi+K) and xj = (xjâK, . . . , xj, . . . , xj+K) denote two sequences of values and ti = (tiâK, . . . , ti, . . . , ti+K) and tj = (tjâK, . . . , tj, . . . , tj+K) denote their respective observation times. If xi = xj holds and âti = âtj also holds, where âti = (tiâK tj), then the kernels produced for these two sequences of values are identical and therefore, the layer produces the same output.
Furthermore, the different functions defined in the previous subsection make the kernels invariant to different transformations. For instance, in the above example, suppose that âti = âtj, and that the k-th element of the second sequence is equal to (k + 1)2Ï times the corresponding element of the first sequence for ) function) and with . Then, the TPC layer equipped with the h2 function (i. e., sin( 0, 1, . . . , 2K + 1 k · } θ3 = 1 and θ4 = 0 would produce the same output for both patterns. Such a function can capture periodic temporal correlations.
7
# 3.5 TPCNN Framework for Irregularly Sampled Time Series
We will next discuss how the TPC layer can be integrated into neural network architectures for dealing with various tasks that involve irregular time series, such as interpolation and classification. Following previous work, we propose an encoder-decoder framework, so-called Time-Parameterized Convolutional Neural Network (TPCNN) framework. In what follows, we give more details about the two main components of the proposed framework, namely its encoder and its decoder.
TPCNN Encoder. This module is responsible for mapping the input time series into a latent vector which captures their overall shape and their specificities. The first layer of the encoder is an instance of the TPC RmÃL layer introduced above. The TPC layer receives as input the irregular and multivariate series X(i) and the corresponding vector of observation times t(i) = . The output of TPC layer is then } successively fed to vanilla convolution layers which can capture longer-time dependencies of the continuous latent representation of the time series. A pooling layer follows each convolution layer, including TPC. By down-sampling the output, such layers are expected to extract features that are good indicators of class membership or of the shape of the time series. Finally, a fully-connected layer is applied to the output of the last convolution layer to extract a low-dimensional representation z(i)
â
TPCNN Decoder. This part of the architecture is responsible for reconstructing the multivariate input series from the latent vector that is produced by the encoder. The latent vector z that was produced by the encoder is first given as input to a fully-connected layer whose objective is to perform rescaling. The emerging vector is then passed onto another fully-connected layer which produces a matrix ËX(i) that matches the dimension of the input time series. These reconstructed time series are then compared against the input series to evaluate the autoencoderâs performance.
Interpolation and Classification Setting. Note that some components of the TPCNN framework depend on the considered task, i. e., interpolation or classification. For instance, in the interpolation setting, each time a kernel of the TPC layer is applied to some subset of the input series, the observation that lies at the center of that subset is masked such that the model does not have direct access to it. On the other hand, such a masking is not performed in the case of the classification setting.
The reconstruction loss of a standard autoencoder is typically measured using the mean squared error (MSE) between the original input and the reconstructed output. For an input time series X(i), the MSE loss is computed as:
1 ; ~ 2 Linterpolation = iol y x? - xO N, | jeO
is a set that contains the indices of the observed values and ËX(i) denotes the reconstructed series
where produced by the decoder as a function of the latent vector z. O
The encoder-decoder framework of Figure 1 (Right) is combined with the MSE loss for the interpolation task. Additionally, as already discussed, masking is performed on the center element of each slice of the input series, and the rest of the observed values of the slice are used for interpolation.
In the case of classification, the latent representation z that is generated by the encoder and which preserves the information about the multivariate time seriesâ dependencies, can be directly fed to a classifier module to make predictions. In the experiments that follow, we employ a 2-layer multi-layer perceptron (MLP) with ReLU activation function. Thus, in the case of a classification problem with classes, the output is computed as follows:
p = sof tmax(M LP (z))
Then, given a training set consisting of time series X(1), . . . , X(N ), we use the negative log-likelihood of the correct labels as training loss:
N Ie Lelassi fication = â Ss Ss y$â log p\? i=1 j=1
# L
8
# where y(i) j
is equal to 1 if X(i) belongs to the j-th class, and 0 otherwise.
The application of the TPCNN model to the above two scenarios is illustrated in Figure 1 (classification on the left and interpolation on the right).
# 4 Experiments
In this section, we describe the experimental setup and methodology used to evaluate the performance of our proposed time-parameterized convolutional layer on various tasks involving irregular time series, including interpolation and classification.
# 4.1 Datasets
We evaluate the performance of the proposed architecture and the baselines on the following real-world datasets:
PhysioNet: The PhysioNet Challenge 2012 dataset [28] comprises 8000 multivariate time series that correspond to records from the first 48 hours of a patientâs admission to the intensive care unit (ICU). Measurements include 37 variables which can be missing at different steps and occur in irregular intervals. Half of the instances are labeled with 13.8% of instances being in the positive class (in-hospital mortality). For the interpolation experiments, we used all 8000 instances and for the classification experiments, we used the 4000 labeled instances. We use the same experimental protocols and preprocessing steps as in [24].
MIMIC-III: The MIMIC-III dataset [13] consists of multivariate health records, that can have missing values, collected at Beth Israel Deaconess Medical Center between 2001 and 2012. Based again on the preprocessing strategy of [24], we extract 53211 samples including 12 features. Given the first 48 hours of data, the task is to predict in-hospital mortality, with 8.1% of the data samples in the positive class.
Human Activity: The human activity dataset contains time series data from five individuals performing various activities (such as walking, sitting, lying, standing, etc.), based on the 3D positions of tags attached to their belts, chest and ankles (12 features in total). Following the preprocessing procedures outlined by [24], a dataset of 6554 sequences and 50 time steps is extracted. The task for this dataset is to classify each time step in the series into one of the eleven activities.
# 4.2 Experimental Setting
We next explain the experimental setting we follow for interpolation and classification, similar to the work of [27]. In the case of interpolation, we study all instances (labeled and unlabeled) from the PhysioNet dataset. The dataset is partitioned into an 80% training set and a 20% test set, with a fraction (20%) of the training data serving as the validation set. The interpolation task is to predict based on a subset of available data points values for the unobserved points. This is executed using different percentages of observed steps, which vary between 50% and 90% of the total available steps. For this experiment, we perform five different runs and report performance on the unobserved data using the mean squared error (MSE) metric.
We also use the labeled data from the PhysioNet, MIMIC-III and Human Activity datasets to conduct classification experiments. For the physiological data of PhysioNet and MIMIC-III, the classification task considers the entire time series, whereas, in the context of the human activity dataset, classification is performed for each time step in the series. We follow the same train, validation and test splitting procedure as described in the interpolation setting. For this experiment, we perform five different runs to provide the classification performance on the different datasets. For PhysioNet and MIMIC-III datasets, we report performance using the area under the ROC curve (AUC) score, due to class imbalance. For the Human Activity dataset, we asses the model performance using the accuracy metric. The validation set is used to select the best set of hyperparameters for our models via grid search.
9
Table 1: Performance for interpolation with different percentages of observed time points on PhysioNet. We mention in bold the best-performing method(s) and underline the second best-performing method(s) based on statistical significance tests.
Model RNN-VAE L-ODE-RNN L-ODE-ODE mTAND-Full TPCNN (ours) Observed(%) 13.418 ± 0.008 8.132 ± 0.020 6.721 ± 0.109 4.139 ± 0.029 5.993 ± 0.058 50% Mean Squared Error (Ã10â3) 11.887 ± 0.007 8.171 ± 0.030 6.798 ± 0.143 4.157 ± 0.053 5.654 ± 0.108 70% 12.594 ± 0.004 8.140 ± 0.018 6.816 ± 0.045 4.018 ± 0.048 5.797 ± 0.063 60% 11.133 ± 0.007 8.143 ± 0.025 6.850 ± 0.066 4.410 ± 0.149 5.624 ± 0.084 80% 11.470 ± 0.006 8.402 ± 0.022 7.142 ± 0.066 4.798 ± 0.036 5.532 ± 0.140 90%
# 4.3 Baseline Models
In this study, we conduct a thorough evaluation of several deep learning architectures as baseline models for performance comparison. These models are specifically designed to handle irregular time series and include variations of the Recurrent Neural Network (RNN), Attention modules and encoder-decoder architectures. The specific models evaluated in this study include:
(i) Basic RNN variants including: RNN-Impute, RNN-ât, RNN-decay, GRU-D. The RNN-Impute model employs a method to impute missing data points based on the weighted average between the last observation of the time series and the total mean of the variable in the training set [3]. In RNN-ât the input to RNN is extended with a missing indicator for the variable and the time interval ât since the last observed point. The RNN-decay is an RNN with hidden states that decay exponentially over time [18, 3], whereas GRU-D employs exponential decay on both hidden states and input [3].
(ii) Other RNN variants, such as Phased-LSTM, IP-Nets, SeFT, RNN-VAE. The Phased-LSTM model incorporates time irregularity through the use of a time gate that controls access to the hidden and cell states of the LSTM [19]. IP-Nets are Interpolation-Prediction Networks (IPN), which perform interpolation prior to prediction with an RNN on the transformed equally-spaced intervals, using semi-parametric interpolation layers [26]. The SeFT model employs learnable set functions for time series and combines the representations with an attention-based mechanism [11]. RNN-VAE is a standard variational RNN encoder-decoder.
(iii) ODE variants, such as ODE-RNN, L-ODE-RNN, L-ODE-ODE. In ODE-RNN neural ODEs model the dynamics of the hidden state, and an RNN updates the hidden state in the presence of new observations [24]. Similarly, L-ODE-RNN and L-ODE-ODE are latent ODEs with the former combining an RNN encoder and a neural ODE decoder [4], and the latter an ODE-RNN encoder and a neural ODE decoder [24].
(iv) Attention-based frameworks, including mTAND. The multi-time attention network, mTAND, interpo- lates missing data using a learnable attention similarity kernel between observations, which are accessed based on trainable temporal embeddings [27].
# 4.4 Results
Interpolation of missing data. In Table 1 we present the results of the experimental setting designed for interpolation, as described in Section 4.2. For different percentages of observed values (i. e., ranging from 50% to 90%), we record the interpolation performance on the reconstructed irregularly sampled multivariate time series of the PhysioNet dataset using the MSE metric. We compare the proposed TPCNN model to different
10
Table 2: Performance for per-sequence classification on PhysioNet and MIMIC-III and per-time-point classification on Human Activity datasets. We mention in bold the best-performing method(s) and underline the second best-performing method(s) based on statistical significance tests.
Model PhysioNet 0.764 ± 0.016 RNN-Impute 0.787 ± 0.014 RNN-ât 0.807 ± 0.003 RNN-Decay RNN GRU-D 0.818 ± 0.008 Phased-LSTM 0.836 ± 0.003 0.819 ± 0.006 IP-Nets 0.795 ± 0.015 SeFT 0.515 ± 0.040 RNN-VAE 0.833 ± 0.009 ODE-RNN 0.781 ± 0.018 L-ODE-RNN 0.829 ± 0.004 L-ODE-ODE mTAND-Full 0.858 ± 0.004 0.833 ± 0.001 AUC MIMIC-III 0.8249 ± 0.0010 0.8364 ± 0.0011 0.8392 ± 0.0012 0.8270 ± 0.0010 0.8429 ± 0.0035 0.8390 ± 0.0011 0.8485 ± 0.0022 0.5175 ± 0.0312 0.8561 ± 0.0051 0.7734 ± 0.0030 0.8559 ± 0.0041 0.8544 ± 0.0024 0.8380 ± 0.0011 Accuracy Human Activity 0.859 ± 0.004 0.857 ± 0.002 0.860 ± 0.005 0.862 ± 0.005 0.855 ± 0.005 0.869 ± 0.007 0.815 ± 0.002 0.343 ± 0.040 0.885 ± 0.008 0.838 ± 0.004 0.870 ± 0.028 0.910 ± 0.002 0.897 ± 0.004 TPCNN (ours)
Table 3: Memory and computational costs, in terms of size (number of parameters) and time per epoch (in minutes).
Model Size (parameters) mTAND-Full 1.3M 1.4M 1.6M TPCNN (ours) 350K 100K 300K Time per epoch (min) mTAND-Full 0.06 0.5 0.006 TPCNN (ours) 0.15 0.2 0.008
# PhysioNet MIMIC-III Human Activity
baseline methods designed for interpolation, including RNN-VAE, L-ODE-RNN, L-ODE-ODE and mTAND- Full (i. e., mTAND encoder-decoder framework for interpolation). We mention in bold the best-performing method and underline the results for the second-performing method. We also perform tests for measuring the statistical significance of the studied methods, which leads to highlighting two distinct models that refer to the highest performances. We can observe that the best-performing method is mTAND-Full, which is closely followed by the proposed TPCNN model. The rest of the baselines show significantly worse performance compared to the proposed TPCNN, including the highly accurate in the irregular setting ODE-based method 10â3 in terms of L-ODE-ODE. The performance of the proposed model ranges from MSE, showing a slightly improved performance as the percentage of missing observations decreases. On the other hand, mTAND-Full shows a slightly degrading performance for a smaller percentage of missing data, with RNN-VAE being the only baseline method that follows the same behavior.
Classification. We also report in Table 2 the results of the different baselines, as described in Section 4.3, and the proposed TPCNN model on classification for the labeled instances of PhysioNet, MIMIC-III and
11
â Ground Truth x â Observed data * Reconstruction sin(-) © Reconstruction exp(:)
Figure 2: Reconstruction results using the proposed TPCNN model on the synthetic dataset. Three different samples of the test set are visualized.
Human Activity datasets. For the first two imbalanced datasets, we use AUC as an evaluation metric and perform per-sequence binary classification, whereas, for the Human Activity dataset, we report accuracy for the task of per-time-point classification. For all datasets, we boldly mention the best-performing methods and underline the results for the second best-performing methods. Due to several non-statistically significant differences in performances, we have several methods being among the first or second best-performing. For PhysioNet and Human Activity datasets, our proposed TPCNN framework is the second-best method in terms of metrics, surpassed by the attention-based model mTAND-Full. More specifically, in PhysioNet the proposed model performs as well as the ODE variants (i. e., ODE-RNN, L-ODE-ODE) that are however significantly slow in terms of computational time, as mentioned in [27]. In Human Activity classification, TPCNN shows 1% worse than mTAND-Full. However, in the MIMIC-III classification, quite improved performance being the proposed TPCNN model lies among the third-best-performing methods, being surpassed by several baselines. In this dataset, ODE-RNN, L-ODE-ODE and mTAND-Full methods achieve the highest AUC scores, followed by the SeFT model, which however performs significantly worse in classification experiments for the other two datasets. The significant performance advantage of mTAND-Full in this task can be attributed to its design which jointly performs interpolation and classification while directly attending only to observed time points. On the other hand, the proposed model handles missing data inside the convolutional kernel of the TPC layer by applying the mean aggregator of Equation 3. The aggregation neighborhood however is constrained by the kernel size and remains fixed throughout the series length. Extending the proposed architecture to incorporate size-varying kernels could further improve the learning capabilities of the TPC layer.
In Table 3 we provide a comparison in terms of memory and computational costs Computational cost. between the proposed TPCNN and its main competitor mTAND-Full. We report the size, i. e., the number of parameters, and the time per epoch in minutes for the two methods and the three real-world datasets. Comparisons of mTAND and previous state-of-the-art models, among which the efficient ODE-based methods, as shown in [27] have demonstrated that the former is significantly faster (i. e., approximately 100 times) than ODE-based methods that make use of an ODE solver. As we can observe in Table 3, TPCNN is as fast as mTAND-Full in terms of time cost comparison. When it comes to the size of the model, the proposed TPCNN uses significantly fewer parameters compared to mTAND-Full, while maintaining competitive performance. 350 thousand More specifically, TPCNN uses approximately some hundred thousand parameters, i. e., 100 parameters, while mTAND-Full size scales to millions of parameters, i. e., approximately 1.5 million. This comparison highlights the high efficacy of convolutions in the irregular sampling setting, which allow the training of neural networks that are significantly smaller and fast compared to the baselines. Therefore, the proposed TPCNN can easily scale to larger datasets and remains efficient even when trained with fewer parameters.
12
ââ PhysioNet _s ââ MIMIC-UL Test AUC lng) et) Time Functi sin() xm) tin) Time Functions sp) {sin â+â Human Activity Ting) xn) in), 60 Time Functions } {sing
Figure 3: Ablation study on different time functions for the parameterization of convolutional kernels for each dataset. Each plot captures the performance (AUC or Accuracy) for each function or combination of functions on the test set.
Experiments on synthetic data. Following the line of work of [27], we reproduce their synthetic sinusoidal [0, 1]. Given dataset that consists of 1000 samples, each describing a time series of 100 time points where t 10 reference points, an RBF kernel with bandwidth 100 is used to obtain local interpolations at the 100 time steps. For each sample, 20 time points are randomly selected so as to represent an irregularly spaced series. A split of 80% and 20% extracts the respective train and test sets. We employ the encoder-decoder interpolation framework of Figure 1 (Right). Contrary to the interpolation setting for PhysioNet, we give as input the 20 irregular time steps, without the missing points, and reconstruct each observation based on the rest using TPCNN with the functions h2(x) = sin(x) (blue points) and h5(x) = exp(x) (green points). We visualize the obtained reconstructions for 3 samples of the test set in Figure 2. Each plot consists of the true values (ground truth) for a test sample, while the dark markers represent the 20 observed input data points ) (observed data), the blue markers and the green markers the 20 predicted values (reconstruction) using sin( · and exp( ) functions respectively. By employing the function h2(x) = sin(x), we are able to achieve a lower · MSE loss compared to the ones achieved with the rest of the time functions defined in Section 3.2. We should mention here that in case domain knowledge is available, it can be incorporated into the proposed TPCNN method via the employed time function, which is likely to lead to performance improvements.
Ablation study. We also present in Figure 3 an ablation study on different time functions employed for parameterizing the weights of the convolutional kernels. The performance metric (AUC or accuracy) on the test set is reported on the classification task of the real-world datasets given a different time function or combination of time functions. For all three datasets, we examine a subset of the functions described in Section 3.2. More specifically, we employ h1(x), h2(x), h3(x), h5(x) (i. e., lin( )) and ), cos( · their combination (e. g., ). We observe that different functions may ), lin( } contribute more or less to the classification performance for the given dataset. In PhysioNet, while the linear ) lead to the lowest AUC on the test set, when combined with function lin( 1%. Additionally, in MIMIC-III classification ) and cos( sin( · · cos( ) (i. e., linear function) lead ) and exp( · 4%. At the same, the combination of functions improves performance but to a reduced performance by
â¼
13
) ) when employed alone. Finally on the Human Activity dataset, cos( ) and exp( does not surpass cos( · · · ) function achieve the highest ), cos( sin( function and the combination { · · test accuracy. The linear lin( ) function again, in this case, leads to the lowest accuracy score compared to · the rest of the time functions. During training, we can observe that the linear time function followed by a standard non-linear activation (e. g., ReLU) when used for the parameterization of the convolutional kernel weights suffers from slow convergence and consequently worse performance. On the other hand, periodic time functions and the exponential function seem to more efficiently describe the time dynamics and lead to smoother training when used for parameterizing convolutions. This experiment highlights the explainability aspects of the proposed TPCNN model since it allows us to determine which time functions better describe the considered time series. Furthermore, under certain conditions, the time series could be considered as a composition of such kind of functions.
# 5 Conclusion
In this work, we carefully designed and experimentally evaluated a novel time-parameterized convolutional neural network, which incorporates learnable time functions into the weights of convolutional kernels. The proposed method generalizes well in different tasks involving irregularly sampled multivariate time series while being computationally efficient and interpretable.
# References
[1] E Oran Brigham. The fast Fourier transform and its applications. Prentice-Hall, Inc., 1988.
[2] Wei Cao, Dong Wang, Jian Li, Hao Zhou, Lei Li, and Yitan Li. Brits: Bidirectional recurrent imputation for time series. Advances in neural information processing systems, 31, 2018.
[3] Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. Recurrent neural networks for multivariate time series with missing values. Scientific reports, 8(1):1â12, 2018.
[4] Ricky TQ Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. In Advances on Neural Information Processing Systems, pages 6572â6583, 2018.
[5] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
[6] Edward De Brouwer, Jaak Simm, Adam Arany, and Yves Moreau. Gru-ode-bayes: Continuous modeling of sporadically-observed time series. Advances in neural information processing systems, 32, 2019.
[7] Jan G De Gooijer and Rob J Hyndman. 25 years of time series forecasting. International Journal of Forecasting, 22(3):443â473, 2006.
[8] Crist´obal Esteban, Stephanie L Hyland, and Gunnar R¨atsch. Real-valued (medical) time series generation with recurrent conditional gans. arXiv preprint arXiv:1706.02633, 2017.
[9] Joseph Futoma, Sanjay Hariharan, and Katherine Heller. Learning to detect sepsis with a multitask gaussian process rnn classifier. In International conference on machine learning, pages 1174â1182. PMLR, 2017.
[10] Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735â1780, 1997.
[11] Max Horn, Michael Moor, Christian Bock, Bastian Rieck, and Karsten Borgwardt. Set functions for time series. In International Conference on Machine Learning, pages 4353â4363. PMLR, 2020.
14
[12] Hassan Ismail Fawaz, Germain Forestier, Jonathan Weber, Lhassane Idoumghar, and Pierre-Alain Muller. Deep learning for time series classification: a review. Data mining and knowledge discovery, 33(4):917â963, 2019.
[13] Alistair EW Johnson, Tom J Pollard, Lu Shen, Li-wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. Mimic-iii, a freely accessible critical care database. Scientific data, 3(1):1â9, 2016.
[14] Patrick Kidger, James Morrill, James Foster, and Terry Lyons. Neural controlled differential equations for irregular time series. Advances in Neural Information Processing Systems, 33:6696â6707, 2020.
[15] Yann LeCun, L´eon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278â2324, 1998.
[16] Yonghong Luo, Xiangrui Cai, Ying Zhang, Jun Xu, et al. Multivariate time series imputation with generative adversarial networks. Advances in neural information processing systems, 31, 2018.
[17] Hongyuan Mei and Jason M Eisner. The neural hawkes process: A neurally self-modulating multivariate point process. Advances in neural information processing systems, 30, 2017.
[18] Michael C Mozer, Denis Kazakov, and Robert V Lindsey. Discrete event, continuous time rnns. arXiv preprint arXiv:1710.04110, 2017.
[19] Daniel Neil, Michael Pfeiffer, and Shih-Chii Liu. Phased lstm: Accelerating recurrent network training for long or event-based sequences. Advances in neural information processing systems, 29, 2016.
[20] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 165â174, 2019.
[21] Trang Pham, Truyen Tran, Dinh Phung, and Svetha Venkatesh. Predicting healthcare trajectories from medical records: A deep learning approach. Journal of biomedical informatics, 69:218â229, 2017.
[22] Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M Dai, Nissan Hajaj, Michaela Hardt, Peter J Liu, Xiaobing Liu, Jake Marcus, Mimi Sun, et al. Scalable and accurate deep learning with electronic health records. NPJ digital medicine, 1(1):1â10, 2018.
[23] David W Romero, Anna Kuzina, Erik J Bekkers, Jakub M Tomczak, and Mark Hoogendoorn. Ckconv: Continuous kernel convolution for sequential data. arXiv preprint arXiv:2102.02611, 2021.
[24] Yulia Rubanova, Ricky TQ Chen, and David K Duvenaud. Latent ordinary differential equations for irregularly-sampled time series. Advances in neural information processing systems, 32, 2019.
[25] Kristof Sch¨utt, Pieter-Jan Kindermans, Huziel Enoc Sauceda Felix, Stefan Chmiela, Alexandre Tkatchenko, and Klaus-Robert M¨uller. Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. Advances in neural information processing systems, 30, 2017.
[26] Satya Narayan Shukla and Benjamin M Marlin. Interpolation-prediction networks for irregularly sampled time series. arXiv preprint arXiv:1909.07782, 2019.
[27] Satya Narayan Shukla and Benjamin M Marlin. Multi-time attention networks for irregularly sampled time series. arXiv preprint arXiv:2101.10318, 2021.
[28] Ikaro Silva, George Moody, Daniel J Scott, Leo A Celi, and Roger G Mark. Predicting in-hospital mortality of icu patients: The physionet/computing in cardiology challenge 2012. In 2012 Computing in Cardiology, pages 245â248. IEEE, 2012.
15
[29] Vincent Sitzmann, Julien Martel, Alexander Bergman, David Lindell, and Gordon Wetzstein. Implicit neural representations with periodic activation functions. Advances in Neural Information Processing Systems, 33:7462â7473, 2020.
[30] Huan Song, Deepta Rajan, Jayaraman Thiagarajan, and Andreas Spanias. Attend and diagnose: Clinical time series analysis using attention models. In Proceedings of the AAAI conference on artificial intelligence, volume 32, 2018.
[31] Qingxiong Tan, Mang Ye, Baoyao Yang, Siqi Liu, Andy Jinhua Ma, Terry Cheuk-Fung Yip, Grace Lai-Hung Wong, and PongChi Yuen. Data-gru: Dual-attention time-aware gated recurrent unit for irregular multivariate time series. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 930â937, 2020.
32 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
[33] Shenlong Wang, Simon Suo, Wei-Chiu Ma, Andrei Pokrovsky, and Raquel Urtasun. Deep parametric continuous convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2589â2597, 2018.
[34] Paul J Werbos. Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, 78(10):1550â1560, 1990.
[35] Da Xu, Chuanwei Ruan, Evren Korpeoglu, Sushant Kumar, and Kannan Achan. Self-attention with functional time representation learning. Advances in neural information processing systems, 32, 2019.
[36] Jinsung Yoon, William R Zame, and Mihaela van der Schaar. Estimating missing data in temporal data streams using multi-directional recurrent neural networks. IEEE Transactions on Biomedical Engineering, 66(5):1477â1490, 2018.
[37] Yuan Zhang. Attain: Attention-based time-aware lstm networks for disease progression modeling. In In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI-2019), pp. 4369-4375, Macao, China., 2019.
16 | {
"id": "1710.04110"
} |
2308.03022 | SAPIEN: Affective Virtual Agents Powered by Large Language Models | In this demo paper, we introduce SAPIEN, a platform for high-fidelity virtual
agents driven by large language models that can hold open domain conversations
with users in 13 different languages, and display emotions through facial
expressions and voice. The platform allows users to customize their virtual
agent's personality, background, and conversation premise, thus providing a
rich, immersive interaction experience. Furthermore, after the virtual meeting,
the user can choose to get the conversation analyzed and receive actionable
feedback on their communication skills. This paper illustrates an overview of
the platform and discusses the various application domains of this technology,
ranging from entertainment to mental health, communication training, language
learning, education, healthcare, and beyond. Additionally, we consider the
ethical implications of such realistic virtual agent representations and the
potential challenges in ensuring responsible use. | http://arxiv.org/pdf/2308.03022 | Masum Hasan, Cengiz Ozel, Sammy Potter, Ehsan Hoque | cs.HC, cs.AI | null | 2023 11th International Conference on Affective Computing and
Intelligent Interaction Workshops and Demos (ACIIW) | cs.HC | 20230806 | 20230806 | 3 2 0 2 g u A 6 ] C H . s c [
1 v 2 2 0 3 0 . 8 0 3 2 : v i X r a
2023 11th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)
# SAPIEN: Affective Virtual Agents Powered by Large Language Models*
Masum Hasanâ, Cengiz Ozelâ , Sammy Potterâ¡ and Ehsan Hoque§ Department of Computer Science, University of Rochester Rochester, NY, United States Email: {âm.hasan@, â cozel@cs., â¡spotter14@u., §mehoque@cs.} rochester.edu
AbstractâIn this demo paper, we introduce SAPIEN, a plat- form for high-fidelity virtual agents driven by large language models that can hold open domain conversations with users in 13 different languages, and display emotions through facial expressions and voice. The platform allows users to customize their virtual agentâs personality, background, and conversation premise, thus providing a rich, immersive interaction experience. Furthermore, after the virtual meeting, the user can choose to get the conversation analyzed and receive actionable feedback on their communication skills. This paper illustrates an overview of the platform and discusses the various application domains of this technology, ranging from entertainment to mental health, com- munication training, language learning, education, healthcare, and beyond. Additionally, we consider the ethical implications of such realistic virtual agent representations and the potential challenges in ensuring responsible use.
03:21/10:00 SAPIEN
Fig. 1. Face-to-face video call interaction with SAPIENTM Virtual Agent
Index TermsâVirtual Avatars, Virtual Agents, Affective AI, Large Language Models
# I. INTRODUCTION
Allowing a user to define the traits and characteristics of a virtual agent, carrying a dynamic conversation, and receiving automated feedback has been an open-ended research problem for many years [1]. The rapid advancement of Large Language Models (LLMs) in recent years has enabled possibilities in designing user experiences that didnât exist before [2]â[4]. In this demo, we present Synthetic Anthropomorphic Personal Interaction ENgine (SAPIEN), a platform for LLM-powered high-fidelity virtual agents that can engage in real-time open- domain conversations, while also expressing emotions through voice and facial expressions.
One of the notable features of SAPIEN is its extensive range of customization options, allowing users to engage in immersive and meaningful interactions. Users can choose from a wide range of virtual agent avatars that reflect a diverse array of ages, gender, and ethnicities. Going further, users can select the desired personality, background, and conversational context of a virtual agent, creating an experience tailored to their specific needs or preferences.
SAPIEN leverages state-of-the-art models in Speech-to-Text [5], [6], Text-to-Speech [7]â[9], and large language modeling [2], [4], [10]â[14]. The virtual agents fluently speak thirteen different languages and counting, making it accessible across a global user base.
Upon finishing a video call with the virtual agents, a user can choose to get their conversation analyzed for personalized feedback. The system provides AI-generated feedback to the user based on the userâs goal. The user can decide the topic of the feedback to suit their learning goal and repeat the conver- sation until the learning goal is met. The inherent flexibility of the virtual agent persona and the feedback could make it potentially applicable to a myriad of applications, including communication training, language learning, and professional applications like healthcare, sales, and leadership training.
With the rising technical capabilities of LLMs, there is expected to be a drastic shift in the labor market in the coming years [15]. According to recent studies [15], the importance of the job market is going to shift from hard technical skills to soft âhumanâ skills. In this changing landscape, SAPIEN can help people adapt and cope, by helping them cultivate human skills with the help of AI.
Once a virtual agent is selected and its traits are defined, users can begin a real-time video call interaction with it. With the help of the large language model, the virtual agents dynamically adjust their emotional state, vocal, and facial expressions, showcasing a spectrum of seven basic emotions.
NSF and NSF REU IIS-1750380, Seedling from Goergen Institute for Data Science (GIDS), and Gordon and Moore Foundation.
# II. SYSTEM DESCRIPTION
The overall working of SAPIEN Virtual Agents, referred to as âBotâ for simplicity, is represented in Figure 2. The SAPIEN system is initialized when a userâs speech utterance is captured and transmitted to our back-end server for processing. This utterance is transcribed into text by a high-precision Speech
# 979-8-3503-2745-8/23/$31.00 ©2023 IEEE
Front End (Client Side) 3DGame Engine | ® Bot Response Audio 7 K Bot Response au Audio User Utterance Blendshapes: â Y ' Autoregressive ' Lm Back End (Server Side) Large Language Model Previous History ot) User defined parameters Facial Expression Motion Capture Database _â Bot Response Text Text to Speech Systen User Utterance Text Audio : a Speech to Text :
Fig. 2. A single turn conversation flow in SAPIEN. User utterance is transcribed and sent to LLM. The LLM response is spoken out by the virtual agent.
to Text (STT) model [5], [6], [16], [17] and subsequently processed by an autoregressive Large Language Model (LLM) fine-tuned for instruction following [3], [4], [10]â[14], [18].
The LLM is conditioned on user-defined parameters like personality traits, conversation premise, user information, and previous conversation history. To prevent inappropriate or of- fensive behavior, the LLM also adheres to system guardrails. A notable aspect of the LLM is also predicting the virtual agentâs emotional state. Conditioning on the user-defined parameters, system guardrails, and previous conversation history, the LLM is instructed to generate the botâs response, alongside the appropriate emotional state of the bot from the following list: Neutral, Happy, Sad, Angry, Surprised, Afraid, and Disgusted. This emotional state, along with the text response, is used to generate an audio file of the botâs response using a Text to Speech (TTS) model. Concurrently, the emotional state triggers the selection of a corresponding facial expression from our pre-recorded motion capture database. This facial expression data, in the form of blendshapes, is passed to a 3D game engine to animate the virtual agent.
The resultant animation and generated audio are synchro- nized, forming a coherent, visually expressive response from the virtual agent. This combined output is streamed to the userâs web browser in near real-time, allowing for an immer- sive experience close to an actual video call.
munication practice tool for people with social anxiety or neurodiversity [19], [20], public speaking [21], job interviews [22], helping elderly with social skills [23], and even speed dating [24]. It also has an excellent potential for professional applications. Such as training doctors in bedside manners or delivering difficult news to their patients [25], personalized training for leadership, business negotiation, sales, marketing, etc. The multilingual ability makes the platform a powerful tool for language learners. Furthermore, the non-judgemental, low stake, repeatable conversations with virtual agents make the platform a helpful tool for anyone to roleplay any difficult interpersonal scenario in a personal or professional setup.
# IV. THE DEMO
Our platform is hosted in the cloud and accessible from any part of the world. During the conference demo, we wish to have the visitors live interact with SAPIEN virtual agents in a variety of interesting scenarios and receive immediate feedback on their communication skills. We will also prepare some pre-recorded user interaction videos to demonstrate any rare or difficult cases or as a backup for technical failures.
# ETHICAL IMPACT STATEMENT
Once the conversation is over, the user can opt-in to receive feedback on their conversation. An LLM is instructed to analyze the conversation transcript based on the userâs goal, identify strengths and weaknesses on the userâs communica- tion skill, and generate actionable feedback for the user.
# III. APPLICATIONS
The customizability of the conversation scenario, dynamic dialogues, and the feedback system combined make SAPIEN uniquely suitable for a variety of communication training purposes. For example, the system can be used as a com-
SAPIEN is designed to augment and enrich our capacity for communication, empathy, and understanding, but not substi- tute human connections. To safeguard against potential emo- tional dependencies on the system, SAPIEN does not retain the memory of previous interactions, and the conversations are limited to a 10 minutes window with a warning at the 8- minute mark. To prevent the practice of bullying or abusive behaviors using our system, we enabled our virtual agents to end the video call if the user repeatedly displays aggressive or offensive behavior. We are continuously investigating more safety and ethical issues regarding the use of the system.
# REFERENCES
[1] M. E. Hoque and R. W. Picard, âRich nonverbal sensing technology for automated social skills training,â Computer, vol. 47, no. 4, pp. 28â35, 2014.
[2] OpenAI, âIntroducing chatgpt,â https://openai.com/blog/chatgpt, (Ac- cessed on 06/22/2023).
[3] âAnthropic â introducing claude,â https://www.anthropic.com/index/ introducing-claude, (Accessed on 06/22/2023).
[4] G. AI, âAn important next step on our ai journey,â 2023. [Online]. Avail- able: https://blog.google/technology/ai/bard-google-ai-search-updates/
automatic Sig- speech nal [On- line]. Available: https://www.microsoft.com/en-us/research/publication/ recent-advances-in-end-to-end-automatic-speech-recognition/
[6] W. Xiong, L. Wu, F. Alleva, J. Droppo, X. Huang, and A. Stolcke, âThe microsoft 2017 conversational speech recognition system,â in 2018 IEEE international conference on acoustics, speech and signal processing (ICASSP).
[7] Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio et al., âTacotron: Towards end- to-end speech synthesis,â arXiv preprint arXiv:1703.10135, 2017. [8] R. Luo, X. Tan, R. Wang, T. Qin, J. Li, S. Zhao, E. Chen, and T.-Y. Liu, âLightspeech: Lightweight and fast text to speech with neural architec- ture search,â in ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2021, pp. 5699â5703.
[9] S.-g. Lee, H. Kim, C. Shin, X. Tan, C. Liu, Q. Meng, T. Qin, W. Chen, S. Yoon, and T.-Y. Liu, âPriorgrad: Improving conditional denoising diffusion models with data-driven adaptive prior,â ICLR, 2022.
[10] R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and T. B. Hashimoto, âStanford alpaca: An instruction-following llama model,â https://github.com/tatsu-lab/stanford alpaca, 2023.
[11] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., âLanguage mod- els are few-shot learners,â Advances in neural information processing systems, vol. 33, pp. 1877â1901, 2020. [12] OpenAI, âGpt-4 technical report,â 2023. [13] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray et al., âTraining language models to follow instructions with human feedback,â Advances in Neural Information Processing Systems, vol. 35, pp. 27 730â27 744, 2022. [14] A. K¨opf, Y. Kilcher, D. von R¨utte, S. Anagnostidis, Z.-R. Tam, K. Stevens, A. Barhoum, N. M. Duc, O. Stanley, R. Nagyfi et al., âOpe- nassistant conversationsâdemocratizing large language model align- ment,â arXiv preprint arXiv:2304.07327, 2023.
[15] T. Eloundou, S. Manning, P. Mishkin, and D. Rock, âGpts are gpts: An early look at the labor market impact potential of large language models,â arXiv preprint arXiv:2303.10130, 2023.
[16] Y. Leng, X. Tan, L. Zhu, J. Xu, R. Luo, L. Liu, T. Qin, X. Li, E. Lin, and T.-Y. Liu, âFastcorrect: Fast error correction with edit alignment for automatic speech recognition,â Advances in Neural Information Processing Systems, vol. 34, pp. 21 708â21 719, 2021.
[17] W. Hou, J. Wang, X. Tan, T. Qin, and T. Shinozaki, âCross-domain speech recognition with unsupervised character-level distribution match- ing,â INTERSPEECH, 2021.
[18] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, I. Stoica, impressing gpt- [Online]. Available:
[19] M. R. Ali, S. Z. Razavi, R. Langevin, A. Al Mamun, B. Kane, R. Rawassizadeh, L. K. Schubert, and E. Hoque, âA virtual teens with autism spectrum disorder: conversational Experimental results and design lessons,â in Proceedings of the 20th ACM International Conference on Intelligent Virtual Agents, ser. IVA â20. New York, NY, USA: Association for Computing Machinery, 2020. [Online]. Available: https://doi.org/10.1145/3383652.3423900 [20] S. Z. Razavi, M. R. Ali, T. H. Smith, L. K. Schubert, and M. E. Hoque, âThe lissa virtual human and asd teens: An overview of initial experiments,â in Intelligent Virtual Agents, D. Traum, W. Swartout, P. Khooshabeh, S. Kopp, S. Scherer, and A. Leuski, Eds. Cham: Springer International Publishing, 2016, pp. 460â463.
[21] M. Fung, Y. Jin, R. Zhao, and M. E. Hoque, âRoc speak: Semi- automated personalized feedback on nonverbal behavior from recorded videos,â in Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ser. UbiComp â15. New York, NY, USA: Association for Computing Machinery, 2015, p. 1167â1178. [Online]. Available: https://doi.org/10.1145/2750858.2804265
[22] M. E. Hoque, M. Courgeon, J.-C. Martin, B. Mutlu, and R. W. Picard, âMach: My automated conversation coach,â in Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, ser. UbiComp â13. New York, NY, USA: Association for Computing Machinery, 2013, p. 697â706. [Online]. Available: https://doi.org/10.1145/2493432.2493502
[23] S. Z. Razavi, L. K. Schubert, K. van Orden, M. R. Ali, B. Kane, interacting in multiple topics,â ACM Trans. jul 2022. [Online]. Available:
[24] M. R. Ali, D. Crasta, L. Jin, A. Baretto, J. Pachter, R. D. Rogge, and M. E. Hoque, âLissa â live interactive social skill assistance,â in 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), 2015, pp. 173â179.
[25] M. R. Ali, T. Sen, B. Kane, S. Bose, T. M. Carroll, R. Epstein, L. Schubert, and E. Hoque, âNovel computational linguistic measures, dialogue system and the development of sophie: Standardized online for healthcare interaction education,â IEEE Trans. Affect. patient Comput., vol. 14, no. 1, p. 223â235, jan 2023. [Online]. Available: https://doi.org/10.1109/TAFFC.2021.3054717 | {
"id": "1703.10135"
} |
2308.02773 | EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education | EduChat (https://www.educhat.top/) is a large-scale language model
(LLM)-based chatbot system in the education domain. Its goal is to support
personalized, fair, and compassionate intelligent education, serving teachers,
students, and parents. Guided by theories from psychology and education, it
further strengthens educational functions such as open question answering,
essay assessment, Socratic teaching, and emotional support based on the
existing basic LLMs. Particularly, we learn domain-specific knowledge by
pre-training on the educational corpus and stimulate various skills with tool
use by fine-tuning on designed system prompts and instructions. Currently,
EduChat is available online as an open-source project, with its code, data, and
model parameters available on platforms (e.g., GitHub
https://github.com/icalk-nlp/EduChat, Hugging Face
https://huggingface.co/ecnu-icalk ). We also prepare a demonstration of its
capabilities online (https://vimeo.com/851004454). This initiative aims to
promote research and applications of LLMs for intelligent education. | http://arxiv.org/pdf/2308.02773 | Yuhao Dan, Zhikai Lei, Yiyang Gu, Yong Li, Jianghao Yin, Jiaju Lin, Linhao Ye, Zhiyan Tie, Yougen Zhou, Yilei Wang, Aimin Zhou, Ze Zhou, Qin Chen, Jie Zhou, Liang He, Xipeng Qiu | cs.CL | null | null | cs.CL | 20230805 | 20230805 | 3 2 0 2
g u A 5 ] L C . s c [
1 v 3 7 7 2 0 . 8 0 3 2 : v i X r a
EduChat: A Large-Scale Language Model-based Chatbot System for Intelligent Education Yuhao Dan1â, Zhikai Lei1â, Yiyang Gu1â, Yong Li1, Jianghao Yin1, Jiaju Lin1, Linhao Ye1, Zhiyan Tie1, Yougen Zhou1, Yilei Wang2, Aimin Zhou1,2, Ze Zhou4 Qin Chen1â , Jie Zhou1â , Liang He1 , Xipeng Qiu3 1 School of Computer Science and Technology, East China Normal University, Shanghai, China 2 Institute of AI for Education, ECNU, Shanghai, China 3 School of Computer Science, Fudan University, Shanghai, China 4 ZhuQingTing Data Technology (Zhejiang) Co., Ltd., Zhejiang, China
# Abstract
sonalized, comprehensive, and timely support to teachers, students, and parents.
EduChat1 is a large-scale language model (LLM)-based chatbot system in the education domain. Its goal is to support personalized, fair, and compassionate intelligent education, serv- ing teachers, students, and parents. Guided by theories from psychology and education, it fur- ther strengthens educational functions such as open question answering, essay assessment, So- cratic teaching, and emotional support based on the existing basic LLMs. Particularly, we learn domain-specific knowledge by pre-training on the educational corpus and stimulate various skills with tool use by fine-tuning on designed system prompts and instructions. Currently, EduChat is available online as an open-source project, with its code, data, and model parame- ters available on platforms (e.g., GitHub2, Hug- ging Face3). We also prepare a demonstration of its capabilities online4. This initiative aims to promote research and applications of LLMs for intelligent education.
# Introduction
Recently, large-scale language models (LLMs), such as ChatGPT (Schulman et al., 2022), LLaMa (Touvron et al., 2023a), have achieved great suc- cess in the field of natural language processing (Zhou et al., 2023b). LLMs obtained the ability of reasoning, long-range context modeling, and task generalization by training on large-scale tex- tual corpus with some strategies, such as code pre- training (Chen et al., 2021), instruction tuning (Wei et al., 2022), and reinforcement learning from hu- man feedback (RLHF) (Stiennon et al., 2020). With the advent of LLMs, they have the potential to rev- olutionize intelligent education by providing per-
ââ Equal contribution. â â Corresponding author. 1https://www.educhat.top/ 2https://github.com/icalk-nlp/EduChat 3https://huggingface.co/ecnu-icalk 4https://vimeo.com/851004454?share=copy
However, there are several challenges of apply- ing LLMs into education domain. One challenge (C1) is that there is still a gap between the LLMs and the educational expert since LLMs are pre- trained on the general corpus, which lack sufficient educational knowledge and can not align well with real scenarios (e.g., essay assessment). The other challenge (C2) is that the knowledge in the field of education is updating, while LLMs can not learn up-to-date knowledge due to the training mecha- nism. Moreover, LLMs suffer from the hallucina- tion problem, and may generate responses that are not truthful.
To address these problems, we propose EduChat, an LLM-based chatbot system for intelligent educa- tion. For C1, we pre-train LLMs on a large number of educational books (e.g., psychology, ancient po- etry) and 4 million cleaned diverse instructions to learn the fundamental knowledge. Then, we fine- tune the model on 500 thousand high-quality cus- tomized instructions to activate education-specific functions (e.g., essay assessment, Socratic teach- ing and emotional support), by aligning with the feedbacks from psychology experts and frontline teachers. For C2, we explore a retrieval-augmented technology, which enables LLMs to automatically judge the helpfulness of the retrieved information, and generate the response based on the relevant in- formation and knowledge stored in LLMs. In this way, our EduChat can access the latest information from the internet, ensuring that the responses are accurate and credible. As an open-source project, EduChat improves the performance of education- specific functions while maintaining comparable foundational capabilities to other large-scale mod- els with equivalent parameter size. The main con- tributions are as follows:
⢠We explore the potential of incorporating theories of psychology and education into LLMs, which
sheds light on how to adapt general LLMs to specific domains;
⢠Diverse system prompts and instructions are de- signed to control the tool use and stimulate differ- ent skills, which alleviates the problem of hallu- cination and is more applicable in real education scenarios;
⢠We develop and release the EduChat system with various educational functions, thus developers and researchers can help speed up the research and applications of intelligent education.
# 2 Related Work
Recently, LLMs like ChatGPT (Schulman et al., 2022), ChatGLM (Du et al., 2022), and LLaMA2- Chat (Touvron et al., 2023b) have emerged as a breakthrough technology in natural language pro- cessing, achieving strong performance on language generation and understanding through pre-training on massive text and instruction tuning.
While LLMs demonstrate impressive capabili- ties in general domains, their lack of subject-matter expertise becomes apparent when applied to spe- cialized verticals. For instance, we can find spe- cialized language models catering to various do- mains, such as ChatDoctor (Li et al., 2023) and HuaTuoGPT (Zhang et al., 2023b) in healthcare, FinGPT (Yang et al., 2023) in finance, and Chat- Law (Cui et al., 2023) in the legal domain. These niche fields inherently necessitate models to pos- sess comprehensive domain knowledge to address relevant queries, especially when assisting real users in practical scenarios. In education, Baladn et al. (2023) tune open-source LLMs for generating better teacher responses in BEA 2023 Shared Task (Tack et al., 2023). But challenges still exist, such as the lack of domain knowledge in general LLMs and the necessity for them to align with educational abilities (e.g., essay assessment, emotional support, and Socratic teaching). EduChat is pre-trained on a diverse education corpus to ensure the alignment of EduChat with educational abilities.
# 3 Core Functions of EduChat
Retrieval-Augmented Open Question Answer- ing (QA) The education domain demands high accuracy and real-time updates regarding knowl- edge and related policies. However, existing gener- ative LLMs suffer from issues like fabricating infor- mation and lagging behind in knowledge updates. To address this, we explore retrieval-augmented
open QA methods. By utilizing real-time updated corpora from the internet as an external knowledge source, we enable LLMs to autonomously assess the relevance of retrieved information to answer a given question and decide which information to incorporate for generating responses. Through extensive experimental analysis, we discover that our model exhibits significant advantages over gen- eral LLMs in terms of eliminating fabrications and maintaining up-to-date knowledge.
Fine-grained Essay Assessment In essay assess- ment, teachers meticulously annotate grammar er- rors, provide scores, and offer feedback on standout sentences. Existing language models often have coarse granularity in grading, limiting studentsâ writing skill improvement. Our research focuses on more fine-grained and comprehensive essay assess- ment. Combining frontline teaching profession- alsâ expertise, we provide overall scores, aspect- level ratings, and detailed comments on content, expression, paragraph, and overall evaluation. Our model can identify standout sentences, highlight- ing strengths and areas for improvement, enabling personalized guidance for studentsâ essay writing skills. This ensures timely and professional support in all aspects of writing.
Socratic Teaching We focus on developing So- cratic teaching capabilities in LLMs rather than providing direct answers to students. We adopt the Socratic dialogue method, engaging in multi-step question-and-answer interactions to encourage in- dependent thinking. By stimulating discussions, debates, evaluations, and analyses, we aim to fos- ter advanced cognitive skills and cultivate studentsâ autonomy in learning. Our ultimate goal is to en- hance critical thinking and innovation abilities to their fullest extent.
Psychology-based Emotional Support Adoles- cents and children face more severe psychological pressures due to their immature cognitive devel- opment. Whereas, current LLMs usually provide generic advice, which can not well fit the specific emotional problem. To address this, we develop a psychological inquiry framework based on emotion psychology, such as Rational Emotive Behavior Therapy (REBT) and the ABC theory (Ellis, 1991). Our fine-tuned model can simulate a psycholog- ical counselor, providing personalized diagnoses and emotional support for users. EduChat fosters a deeper understanding of usersâ emotional states
and offers accurate and professional assistance.
# 4 Data Construction
# 4.1 Pre-training Data
Textbooks Data In our research, we gather a vast amount of educational textbook and online question bank data from Chinese middle and high school exams for pre-training. Additionally, we en- rich our model with over 70,000 Chinese poetries, providing detailed information on authors, back- grounds, and poetry appreciation to enhance its poetry creation and appreciation capabilities. To facilitate empathetic emotional support dialogues, we carefully select 60 famous works from hun- dreds of psychology books. These selected books belong to two main categories. The first category consists of 15 branches of psychological theory, in- cluding developmental and educational psychology, social psychology, behavioral psychology, counsel- ing psychology and others. The second category contains various psychological practices, which of- fer practical cases of psychological consultation and emotional support dialogues. By incorporat- ing the diverse fundamental data into pre-training, our model gains a deeper understanding of educa- tion and psychology, enabling it to generate more helpful responses.
Fundamental Instruction Data To achieve a more natural human-computer interaction, we col- lect a large volume of bilingual instruct tuning data from reputable open-source repositories like Al- paca5, BELLE (Ji et al., 2023), GPT4All6, Open- Assistant7, FLANCoT8, and Firefly9. The data spans various task types, enabling our models to acquire foundational instruction following capabil- ities for diverse instruction types. In addition, we source high-quality multi-turn dialogue data from MOSS (Sun et al., 2023), BELLE (Ji et al., 2023), COIG (Zhang et al., 2023a), LIMA (Zhou et al., 2023a), and ShareGPT10. This data covers various dialogue contexts, including role-playing, creative writing, and code-related discussions, ensuring our modelsâ competence in engaging and sustaining meaningful multi-turn conversations.
# 5https://github.com/tatsu-lab/stanford_alpaca 6https://github.com/nomic-ai/gpt4all 7https://github.com/LAION-AI/Open-Assistant 8https://huggingface.co/datasets/lucasmccabe-
# lmi/FLAN_CoT_alpaca_style
# 9https://github.com/yangjianxin1/Firefly 10https://huggingface.co/datasets/gozfarb/ShareGPT_
Vicuna_unfiltered
me Essay Assessment âm= _Emotional Support âmm Retrieval-Augmented Open QA Socratic Teaching 29.4%
Figure 1: Distribution of educational data.
4.2 Fine-tuning Data To enhance the capability of education, we con- struct the Educational Instruction Data for fine- tuning, which covers retrieval-augmented open QA, emotional support, Socratic teaching and essay as- sessment. The distribution is shown in Figure 1.
Retrieval-Augmented Open QA Data To ad- dress hallucination and timely knowledge issues in Open QA, we design a retrieval-augmented open QA technique. We sample high-quality data through ChatGPT scoring in relevant Open QA and Subject QA datasets. To tackle irrelevant retrieved content, we introduce self-checking. ChatGPT as- sesses whether the retrieval content helps answer the question and then generates the answer using an self-check, incorporating the useful retrieval con- tent and questions. To maintain data quality, we manually verify the data during this process.
Emotional Support Data To overcome the scarcity of Chinese emotional support dialogue data, we adopt a translation and expansion ap- proach. We translate the widely-used English emo- tional support dataset, ESConv (Liu et al., 2021), into Chinese as ESConv-zh. After manual review and cleaning, we simulate multi-agent dialogues based on various patient scenarios within ESConv- zh and also collect real-life Chinese psychological counseling consultation data, incorporating patient information and diagnosis results. By training our models on diverse datasets, we empower them to provide robust emotional support and act as com- passionate counselors during consultations.
Socratic Teaching Data Teachers play a key role in guiding and encouraging heuristic exploration rather than just providing answers. To support this, we generate dialogues simulating the Socratic teaching method by incorporating multi-step Q&A involving counter-questions, challenges, and in- quiries. These dialogues are manually evaluated
Io eee eee eee ee ' x Web Search General ' ' if > aa S L 1 my lam an Al assistant named EduChat. | | Tools | ©. & ) f Psychol. i i aH alculators aaa sychology Han 1 c yet a it 1 | Personal EduChatis a conversational language | Q Skill tu ! | Profile model developed by ECNU. Self-check Socrates : ' 1 Ability Pre-training Textbooks #TEXT Textbooks Fundamental Instruction through the dawning beams. By the window <TITLE> Bamboo Shadows Amidst Lingering Snow <POETRY> A speck of vanishing snow remains, A lone maon amidst bamboo shades i wanes. In the clear night, ts brillance gleams, Cold air ingers Fine-tuning Essay Assessment wee Meee Fundamental Instruction INSTRUCTION {A Itean replicate in human blood plasma, #RESPONSE family Hepadnaviidae, a hepattrople DNA, ' 1 1 1 ' Educational t Instruction 1 1 1 1 âThe correct statement about Hepatitis B viru is (). B. It cannot parasitze in respiratory mucosal cell C. its nucleic acd contains 5 nitrogenous bases. 1. its protein i synthesized by its own ribosomes. âAnswer is B. Hepatitis B virus (HBV) belongs tothe Knowledge > a Socratic ' ' Emotional ' & @® Eauchat iS oman | + C.China â D. United Kingdom â f t {| @:tn which country will the 1 | 2024 Olympic Games beheld? |â> | Search =| _» a Franceâ B. Unied States Engine f \ (1) Paris, France; Hamburg, Germany; Budapest, Hungary; \ iy dnb kia et cone > Useless X # Prefix ! Chest 22¢ Seer Opes Bee and ()i8 ! Symes wl ake place Yom uly 25 fo gus 1 useless, Result (2) i CEE He usetuly | py atuneui, corm || | '@) On September 13, 2017, during the 131st international lise the Information ' âOlymple Commit (OC) Session held in Lima, Pen, > Useless X | from the Result (2) ' Portus sav ste hs iy rt en ' x js
Figure 2: The overall framework of EduChat.
for accuracy, fluency, and progression from easy to complex questions. Integrating this dataset into training equips our model with a strong capability in Socratic teaching, distinguishing it from other LLMs that only offer direct answers.
Essay Assessment Data The lack of timely and detailed feedback often hinders studentsâ writing improvement. To tackle this issue, we create a high-quality essay assessment dataset. Initially, we collect essays and employ ChatGPT to evaluate them in terms of content, expression, and overall quality. To ensure data quality, we invite pedagogi- cal experts to manually curate the comments. This dataset empowers EduChat with the ability to pro- vide students with high-quality feedback, aiding in the enhancement of their writing skills.
GPU acceleration, speeding up the process.
# 5 EduChat
EduChat is an LLM designed for the education do- main (Figure 2). We first pre-train on large-scale education corpus (e.g., textbooks, instructions for foundational tasks) to learn the domain-specific and foundational knowledge. We then learn the pedagogical skills by fine-tuning EduChat on task- specific instruction datasets. Moreover, we lever- age online retrieval to enhance the accuracy and timeliness of knowledge in its responses. To con- trol skills, we design various system prompts to unlock different scenes with tool usage.
# 5.1 Training Procedure of EduChat
# 4.3 Data Preprocessing
To enhance data quality, we conduct semantic-level deduplication on the dataset. Using the sentence- transformers model (Reimers and Gurevych, 2019), we obtain sentence embeddings for each data point and calculate cosine similarity between all pairs of embeddings. For similarities exceeding a threshold of 0.7, we remove one of the duplicates. We im- plement the similarity calculation using CUDA for
The training of EduChat is mainly divided into two stages: fundamental capabilities acquisition and educational skills acquisition. In the first stage, we pre-train the model on educational books and Q&A pairs (detailed in Section 4.1) to equip it with foundational knowledge across disciplines. Besides, large-scale instruction tuning and open- domain dialogue datasets are also incorporated to enable basic instruction following ability and dia- logue ability (detailed in Section 4.2). In the second
@ Open QA + New Topic Topic 1 2024/07/90 1421.96 Topic 2 new Chat Please introduce East China Normal University t4 East China Normal University (ECNU) is a comprehensive public research university located in Shanghai, China. it was founded in 1951 through the merger of two institutions of higher education in Shanghai, and has since become one of the most prestigious universities in China. ECNU is known for its strong research and academic programs, and has @ reputation for producing top-quality graduates in a wide range of fields. The university has a diverse student body, with students from all over CChina and around the world. ECNU is also known for its strong ties to industry and business, and has a number of research centers and institutes that focus on areas such as artificial intelligence, biotechnology, and environmental science. In addition % Examples Topic 3 to its academic programs, ECNU is also known for its beautiful campus, which features a mix of traditional Chinese architecture âand modern facilities. Overall, ECNU is a highly respected and well-regarded institution of higher education, and is a great choice for students who are locking for a challenging and rewarding academic experience. Y Open aA Reference: Emotional Support â+ East China Normal University ~ Wikipedia â+ East China Normal University + Introduction to ECNU-East China Normal University Essay Assessment Socratic Teaching Course Guidance Career Planning © Regenerate response i XX Open aa Type Retrieval s 0} @ >
Figure 3: Demo of EduChat.
stage, we develop EduChatâs pedagogical skills by fine-tuning the model on our carefully curated data, including retrieval-augmented open QA, emotional support, Socratic teaching and essay assessment datasets mentioned in Section 4.2.
# 5.2 Online Knowledge Retrieval
"Calculator: Disable" signifies the modelâs inability to utilize a calculator; 3) Skill Selection: Teachers in various settings possess unique communication skills, such as Socratic Teaching or Psychology- based Emotional Support. To cater to specific sce- narios, we include function names at the end of the system prompt, which activates corresponding abilities based on the sceneâs requirements.
Existing generative LLMs all suffer from the issues of generating hallucinations and outdated informa- tion, which is detrimental to an educational model. To mitigate this problem, we introduce self-check as shown in Figure 2. Specifically, when online knowledge retrieval is enabled, the model picks useful retrieval results by asking itself "Is this help- ful for answering the question?" and append filtered snippets before the dialogue history.
# 5.3 System Prompt Design
Teachers always utilize various tools with different skills to enhance their teaching across different sub- jects. To enable EduChat to emulate an authentic teacher-student interaction, we carefully craft the system prompt that consists of personal profile, tool usage and skill selection (see Figure 2). Detailed settings can be found in Table 2.
# 5.4 Demonstration
We also develop a user-friendly demo system for EduChat (see Figure 3). Upon logging in, users can select from various functions, such as Open QA and Emotional Support, each offering a scene-specific system prompt to activate the corresponding ability. With this intuitive interface, users can easily engage in interactive conversations with EduChat to assist students, teachers and parents. Additionally, the system is designed to be adaptive, continuously learning from user interactions to further improve its capabilities and provide more personalized and effective assistance over time.
# 6 Experimental Results
1) Personal Profile: To remind the model of its own identity, the system prompt begins with: "EduChat is a conversational language model de- veloped by East China Normal University."; 2) Tool Usage: To regulate tool availability, the second part of the system prompt commences with "EduChatâs tools:", listing all tool names and their respective accessibility. For instance, "Web search: Enable" indicates the modelâs ability to use retrieval, while
# 6.1 Resutls of C-Eval
Table 1 presents the results of our model on the C- Eval benchmark (Huang et al., 2023), a comprehen- sive Chinese evaluation suite for foundation mod- els. The dataset consists of 13,948 multi-choice questions, spanning 52 diverse disciplines and cat- egorized into four difficulty levels. Analyzing the table, we observe that our model achieves com- mendable performance compared to models with
# Chinese
# Translate to English
# Retrieval-Augmented Open QA
User: ee ae tif? 2
User: Who is the author of the View the Sea?
EduChat: (UIE) MITE Bh. GPR AL SLE TONER, Hides âi iE : Be ai fait ADH HORACE GEENA. PTR BE ae oe PPMP HAM HE, AE PRI AR EEL
# Br RE:
Wit ite ERE
EduChat: The author of the View the Sea was Cao Cao, a poet in the late Eastern Han Dynasty. This poem is written by Cao Cao when he climbed the mountain and looked at the sea in Jieshi Mountain. It depicts the magnificence of the rivers and mountains of the motherland. It not only depicts the magnificence of the mountains and the sea, but also expre: the poet's enterprising spirit with the scenery and the world in mind. This poem is regarded as a classic of ancient Chinese poetry and one of Cao Cao's representative works.
# Reference:
# View_the_sea_ Baidu_Baike
Socratic Teaching 3? uo itm ALY ZS ADK SSM? SS AE AL aE FAT NI 2 User: FR 4s itt EduChat: JZ A: Fil Ri ip RU 2 AK BCA NS BLN 2 J, ARAVA Y iit FL AOA SH I « SHAT FEAL A AE « NIG ASE AE A RW FATA AEG 2 Leu, 1, 3, 5, 7, 98 User it fut EduChat: Bel) A] WA Fil ES 2 AA BS RHF HAAS: Sn= (1/2)*n (al+an), Jal ES IER BONEN Sil. User: HEIR A RIE ANTUE HEA MBEAN EB EduChat: 1bZ,, ri al + (N-I)* de CPA AE SL SSNS an, ABA SENT ZS aI rE ERE PILZ i BON, BEI ea at FY LATER AEB Fe? ATTA. Tel, SE 2e BONN T N-1)*d, BEAM LAG Ban =3+ (6-1)*2, Ban=13. y 7,9, 11, 13. REN, ROW? PAE, 3&6 User: Betti EduChat: 57/4, T, WK. WR Ae a, BALIN AY DA fa aR. User: How do you find the first n terms of an arithmetic sequence? EduChat: Are you asking for the formula for the sum of the first n terms of an arithmetic sequence, or how to apply the formula to find the sum of the first n terms? User: I don't know the formula for the sum of the first n terms of an arithmetic sequence. EduChat: Well, do you know what an arithmetic sequence is? User: Not really. EduChat: Okay, let's first understand what an arithmetic sequence is. An arithmetic sequence is a sequence in which the difference between any term and the one before it is constant. For example, 1, 3, 5, 7, 9 is an arithmetic sequence because the difference between each term and the one before it User: So how do you find the sum of the first N terms? EduChat: We can use the formula for the sum of an arithmetic sequence to solve it. The formula for the sum of the first N terms of an arithmetic sequence is: Sn = 1/2 * n* (al + an), where al is the first term of the arithmetic sequence, and an is the Nth term. User: But I don't know what the Nth term is. EduChat: Well, we can derive the Nth term from known conditions. If we know the first term of an arithmetic sequence al, the common difference d, and the Nth term an, then the formula for the Nth term is User: What do you mean by common difference? EduChat: The common difference refers to the difference between any two terms in an arithmetic sequence. In an arithmetic sequence, the difference between each term and the one before it is the same, and this equal difference is the common difference. User: Can you give me an example? EduChat: Of course. For example, if the first term the common difference is 2, what is the 6th term? A 3+ (6-1) * 2, which give: quence are 3, 5, 7,9, 11, 13. stand now, thank you. EduChat: You're welcome, feel free to ask if you have any other questions. arithmetic sequence is âding to the formula an =13. Therefore, the first 6
Figure 4: Cases of retrieval-augmented open QA and socratic teaching.
STEMSocial ScienceHumanitiesOthersAvg(hard) Avg 68.7 64.5 54.4 50.9 53.6 57.3 52.8 55.4 51.7 51.3 44.6 44.3 42.8 46.2 41.6 42.3 33.3 33.6 40.7 42.9 49.3 53.7 GPT-4 67.1 ChatGPT 52.9 Baichuan-13B 47.0 InternLM-7B 48.0 ChatGLM2-6B 48.6 WestlakeLM-19B 41.6 38.2 Baichuan-7B Chinese-Alpaca-33B 37.0 Chinese-Alpaca-13B 31.6 EduChat 36.2 EduChat (w Retrieval) 43.5 77.6 61.8 66.8 67.4 60.5 51.0 52.0 51.6 37.2 50.7 59.3 67.8 53.6 49.8 45.8 49.8 44.5 39.3 40.3 32.8 37.7 46.6 54.9 41.4 36.7 37.1 37.1 34.9 31.5 30.3 27.3 28.3 33.1
For emotional support, EduChat can interact like a psychological counselor rather than giving the general advice. For space limitation, we provide more cases of psychology-based emotional support and fine-grained essay assessment in the Appendix (Figure 5).
Table 1: Results of C-Eval.
# 7 Conclusion
similar parameter scales, such as Chinese Alpaca- 13B and WastlackLM. Notably, both EduChat and Chinese Alpaca-13B are built on the LLaMa-13B base model. However, EduChat outperforms Chi- nese Alpaca-13B by over seven points. Further- more, our integration of retrieval into LLMs proves to be highly effective, demonstrating the power of our retrieval-augmented open QA technique in enhancing model performance.
# 6.2 Case Studies
Figure 4 shows the cases of our EduChat on retrieval-augmented open QA and socratic teaching. EduChat can provide precise answer with retrieved relevant information, and learn to guide the student to solve the problems like a teacher step by step.
In this paper, we introduce EduChat, an LLM- based chatbot system for intelligent education. Our goal is to provide personalized, fair, and compas- sionate support to teachers, students, and parents. By leveraging psychology and education theories, we enhance educational functions like open QA, essay assessment, Socratic teaching, and emotional support. Through pre-training on educational cor- pus and fine-tuning with task-specific instructions, EduChat demonstrates great performance on the C- Eval benchmark. Overall, EduChat exhibits great potential towards revolutionizing intelligent educa- tion. In future work, we aim to expand EduChat on more functions, such as career planning, course guidance, question generation and so on.
# References
Alexis Baladn, Ignacio Sastre, Luis Chiruzzo, and Aiala Ros. 2023. RETUYT-InCo at BEA 2023 shared task: Tuning open-source LLMs for generating teacher In Proceedings of the 18th Workshop responses. on Innovative Use of NLP for Building Educational Applications (BEA 2023), pages 756â765, Toronto, Canada. Association for Computational Linguistics.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Ka- plan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
Jiaxi Cui, Zongjia Li, Yang Yan, Bohua Chen, and Li Yuan. 2023. Chatlaw: Open-source legal large language model with integrated external knowledge bases. ArXiv, abs/2306.16092.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding, Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. Glm: General language model pretraining with autoregres- sive blank infilling. In Proceedings of the 60th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 320â335.
Albert Ellis. 1991. The revised ABCâs of rational- emotive therapy (RET). Journal of Rational-Emotive and Cognitive-Behavior Therapy, 9(3):139â172.
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu, Chuancheng Lv, Yikai Zhang, Jiayi Lei, Yao Fu, Maosong Sun, and Junxian He. 2023. C- eval: A multi-level multi-discipline chinese evalu- ation suite for foundation models. arXiv preprint arXiv:2305.08322.
Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Lei Zhang, Baochang Ma, and Xiangang Li. 2023. Exploring the impact of instruction data scaling on large language models: An empirical arXiv preprint study on real-world use cases. arXiv:2303.14742.
Yunxiang Li, Zihan Li, Kai Zhang, Ruilong Dan, Steve Jiang, and You Zhang. 2023. Chatdoctor: A medical chat model fine-tuned on a large language model meta-ai (llama) using medical domain knowledge. Cureus, 15(6).
Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of the 59th Annual Meet- ing of the Association for Computational Linguistics and the 11th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 3469â3483, Online. Association for Computa- tional Linguistics.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. arXiv preprint arXiv:1908.10084.
John Schulman, B Zoph, C Kim, J Hilton, J Menick, J Weng, JFC Uribe, L Fedus, L Metz, M Pokorny, et al. 2022. ChatGPT: Optimizing language models for dialogue. In OpenAI blog.
Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learn- In Ad- ing to summarize with human feedback. vances in Neural Information Processing Systems, volume 33, pages 3008â3021.
Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Hang Yan, Xiangyang Liu, Yunfan Shao, Qiong Tang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yun- hua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, and Xipeng Qiu. 2023. Moss: Training conversational language mod- els from synthetic data.
Anaïs Tack, Ekaterina Kochmar, Zheng Yuan, Serge Bibauw, and Chris Piech. 2023. The BEA 2023 shared task on generating AI teacher responses in educational dialogues. In Proceedings of the 18th Workshop on Innovative Use of NLP for Building Ed- ucational Applications (BEA 2023), pages 785â795, Toronto, Canada. Association for Computational Lin- guistics.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Llama: Open and effi- Azhar, et al. 2023a. cient foundation language models. arXiv preprint arXiv:2302.13971.
Hugo Touvron, Louis Martin, Kevin R. Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Niko- lay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Daniel M. Bikel, Lukas Blecher, Cris- tian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony S. Hartshorn, Saghar Hos- seini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel M. Kloumann, A. V. Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, R. Subramanian, Xia Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zhengxu Yan, Iliyan Zarov, Yuchen Zhang, An- gela Fan, Melanie Kambadur, Sharan Narang, Aure- lien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023b. Llama 2: Open foundation and fine-tuned chat models. ArXiv, abs/2307.09288.
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, An- drew M. Dai, and Quoc V. Le. 2022. Finetuned language models are zero-shot learners. In The Tenth
International Conference on Learning Representa- tions.
Hongyang Yang, Xiao-Yang Liu, and Chris Wang. 2023. Fingpt: Open-source financial large language models. ArXiv, abs/2306.06031.
Ge Zhang, Yemin Shi, Ruibo Liu, Ruibin Yuan, Yizhi Li, Siwei Dong, Yu Shu, Zhaoqun Li, Zekun Wang, Chenghua Lin, Wenhao Huang, and Jie Fu. 2023a. Chinese open instruction generalist: A preliminary release.
Hongbo Zhang, Junying Chen, Feng Jiang, Fei Yu, Zhi- hong Chen, Jianquan Li, Guimin Chen, Xiangbo Wu, Zhiyi Zhang, Qingying Xiao, Xiang Wan, Benyou Wang, and Haizhou Li. 2023b. Huatuogpt, towards taming language model to be a doctor. ArXiv, abs/2305.15075.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023a. Lima: Less is more for alignment.
Jie Zhou, Pei Ke, Xipeng Qiu, Minlie Huang, and Jun- ping Zhang. 2023b. Chatgpt: Potential, prospects, and limitations. Frontiers of Information Technology & Electronic Engineering, pages 1â6.
Tool Skill Retrieval-Augmented Open QA Fine-grained Essay Assessment Psychology-based Emotional Support Socratic Teaching Retrieval Self-check General Psychology Socrates True True False False True True False False True False False False False True True/False True/False False False False False
Table 2: Illustration of system prompt design.
# Chinese
Translate to English
# Psychology-based Emotional Support
User: fli,
# B
# (ceceln Fs AHERN
User: Hello, I feel quite stressed recently.
EduChat: fist, BQ] LAE AR iit Re ARR, ARE MACS sek MTT GS VR 2E FE BURKE o Hse a Tf Bak dy 2 ESTEE PY GRARAS BR, Sep BRAN DME Ae SH BiB Ap
# User: Sei 18 3 7° a
# AEA SE
# (ARE
# Re
Od HH is {EMR AVR IBA, TRIER tis feeuebeue DEA REIN AT NST. PAT AK EEA ANS Bs Wi Ad, eres H User: FHS, UU fed Laine if DLE IIE ihe fia â FE BAIN fa] AU, OE LSJ) as, Bh RETA Ay Ais TOMAS, AS SATE. Th, fee SMA, AME FIED.
EduChat: 87% As EPR EE AG FES 5 LRA E At a] a aG TEA FE, BEI AB TY WFR ER 6
EduChat: Hello, I can understand your feelings. Stress is an emotion we often encounter in life, but if we are under high pressure for a long time, it will have negative effects on our physical and mental health. Can you tell me some of the reasons for your increased stress?
User: I've been studying hard recently, but my grades aren't improving. EduChat: Academic stress is indeed a common source of stress. Improving grades requires time and effort, but if you feel a lot of pressure, it may be because you have high expectations for yourself, or because you feel that your efforts are not being rewarded. You can try to adjust your mindset, don't be too demanding of yourself, and believe that your efforts will be rewarded. In addition, you can try some relaxation methods, such as sports, listening to music, chatting with friends, etc., to relieve stress.
User: Okay, thank you
EduChat: You're welcome, I hope my suggestions can help you. If you have any other questions or need further help, you can find me at any time.
# Multi-grained Essay A: sment
User: $i) atk He
User: First Encounter with Fireflies
A Rv PILE Wb. PRATT a A, BUN Fe TBE Ky HOPG T
HEADER LB Bi EEA SX â4RRIC4: KHIM, ATE TT RCM Ti. SRAM AMAL, HRI TAKARA BITE He UR SRMETTEBT, EE BS So Be LARS, aie AS ST, PRE, FEAR aE PAPI A, SORE ERHUMT A. PUNE eth PU EAS Ae
POU PSUEML IT, MRT STE CEI, ZAR 3, MATA, UR. 23. " ae Vee T RAF. DEEN, TEP OP aE âFM AME aia] BEAM Hh, WOES HECK LE RHE, Ti Bit Ae TBR te TESS BATS Bh. ti PBA OPiS ees BOK SURSIT 2 AGAR AACN HEF MET ee, B Melee BH Bale Ks, fiâ Bota te is i BH ae Kc Ba Wai Fi ne ke Be BIR BETIS een MEK aR, BASHA TER, BIS REET. RAT A, WUT LAT, BAM AS at AE o BEAR Hae BALIN 8 7 TIS ReR OFey TT a
SAUL TSW ME
# EPO EY ETE HEC EduChat:
+ AE: 7/8
+ AE: 7/8 AL CASA âSENN He Hâ RRIF. FOR, SRR. SOC HT RT RAN PEAS ESC PS A AS TEE MTT A H H8e AD >, Be SCREAMS
Today, my mother said she would take me to see the fireflies at Liu Hui Pavilion after work. I looked forward to it all day, and finally, it was night. As we walked into Zijinshan Road, the air was unusually moist and fresh. The trees on both sides of the road were tall and dense, and their leaves were rustling pleasantly. The road was quiet, and occasionally, we could see people leisurely strolling and others sweating heavily from their night runs. Suddenly, a motorcycle whizzed past, breaking the tranquility of the night. Cicadas, frogs, and cuckoos composed the summer choir, playing the moving melody of nature. As we turned into Liu Hui Pavilion, it was pitch black in front of us - the fireflies had gathered. Not far away, we could hear the sounds of children's surprise, excitement, and laughter. We walked a short distance with our flashlight and saw something twinkling in the bushes, appearing and disappearing. "Mom! I see the fireflies!" I was extremely excited. Under the moonlight, some fireflies slowly rose from the bushes, floating mid-air, turning into "little twinkling stars." At that moment, my mother said, "A firefly has landed on your head!" Just then, another firefly flew from mid-air and landed on my hand. [held my breath, held my hand steady, and stood still, until this little elf flew away. I watched for a long time until my mother reminded me that it was time to go home. I reluctantly said goodbye to the fireflies. Unexpectedly, the fireflies gave me a surprise. On the way back, there were often two or three fireflies, like lantern bearers leading us along the roadside. When we reached the Post Office Museum, a group of fireflies were dancing in the southern bushes, as if to say goodbye to me.
This firefly-watching experience made my night both wonderfull and unforgettable. Please kindly rate the composition above.
Ashi EEG Fa » FDL TER AA TAFE 7 6 AE AE TI © fei Fe) EGU AE Pa Wl, FRE EH 28
+ RAT:
POS RASA T LAMA, MAL TOR ARATE HN KT, FRE,
# SP BLE
+
2B: (AEA BAST JESU» (RY ek AMALIE IF» ABR, âSTE
SIE: BHAI, | WE*E Rape: Hi eb ABC. Be Beil ist 434 HERA ese SR AWE SUE HY LL Seite AS Bek ab ini, RB
# EduChat:
+ Content Evaluation: 7/8 This article revolves around the theme of "Beautiful Fireflies". The content of the essay is specific, clear, and vividly portrays details, expressing a wealth of emotions that are sincere and touching. There are numerous wonderful sentences throughout the text that add a literary charm to the article.
+ Expression Evaluation: 8/8 The fluidity of this article is high, demonstrating the young author's logical thinking ability. It has clear and accurate sentences, showing excellent expressive ability.
+ Highlight Sentence: Cicadas, frogs, and cuckoos formed a choir of summer, singing the touching melody of nature.
HEAR SC He BE âhn 5 SIREN BE FAIA TP (ESR a 4 ih oye EAE His Ga ABU, tee Ae » BORAT SCRA ECE, Thiet WEL. WENA, RE WRIER.
+ Paragraph Evaluation: Second Paragraph: The author unfolds the paragraph around the firefly viewing ceremony at Liu Hui Xie. It is vivid and smooth, with clear writing ideas. Third Paragraph: The paragraph is too long, the topics within the same paragraph are not unified enough, it could be divided into multiple paragraphs. The excellent description of details within the paragraph makes the narrative vivid and real. Fourth Paragraph: The concluding paragraph revisits the beautiful fireflies, echoing the theme. It is vivid, smooth, and has clear writing ideas.
+ Overall Evaluation: This article is excellent. The essay revolves around "Beautiful Fireflies". The text closely follows the topic, with unified and complete content. In writing, more attention should be paid to topic review issues to ensure that the text is closely related to the theme and the text serves the center. From the perspective of writing objects, this is a descriptive article, which requires focusing on the description method of scenery, whether it is rich and real. From a methodological perspective, this is a realistic article that strives for rich descriptions that conform to narrative logic and are lively and vivid.
Figure 5: Cases of psychology-based emotional support and fine-grained essay assessment. | {
"id": "2302.13971"
} |
2308.02490 | MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities | We propose MM-Vet, an evaluation benchmark that examines large multimodal
models (LMMs) on complicated multimodal tasks. Recent LMMs have shown various
intriguing abilities, such as solving math problems written on the blackboard,
reasoning about events and celebrities in news images, and explaining visual
jokes. Rapid model advancements pose challenges to evaluation benchmark
development. Problems include: (1) How to systematically structure and evaluate
the complicated multimodal tasks; (2) How to design evaluation metrics that
work well across question and answer types; and (3) How to give model insights
beyond a simple performance ranking. To this end, we present MM-Vet, designed
based on the insight that the intriguing ability to solve complicated tasks is
often achieved by a generalist model being able to integrate different core
vision-language (VL) capabilities. MM-Vet defines 6 core VL capabilities and
examines the 16 integrations of interest derived from the capability
combination. For evaluation metrics, we propose an LLM-based evaluator for
open-ended outputs. The evaluator enables the evaluation across different
question types and answer styles, resulting in a unified scoring metric. We
evaluate representative LMMs on MM-Vet, providing insights into the
capabilities of different LMM system paradigms and models. Code and data are
available at https://github.com/yuweihao/MM-Vet. | http://arxiv.org/pdf/2308.02490 | Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, Lijuan Wang | cs.AI, cs.CL, cs.CV, cs.LG | Add results of GPT-4V. Code, data and leaderboard:
https://github.com/yuweihao/MM-Vet | null | cs.AI | 20230804 | 20231024 | 3 2 0 2
t c O 4 2 ] I A . s c [
3 v 0 9 4 2 0 . 8 0 3 2 : v i X r a
# MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
# Weihao Yu1â Zhengyuan Yang2â Linjie Li2 Jianfeng Wang2 Kevin Lin2 Zicheng Liu2 Xinchao Wang1â Lijuan Wang2â
# 1National University of Singapore [email protected]
# 2Microsoft Azure AI
1National University of Singapore ?Microsoft Azure AI
[email protected]
{zhengyang,lindsey.li,jianfw,keli,zliu,lijuanw}@microsoft.com
# Abstract
We propose MM-Vet1, an evaluation benchmark that examines large multimodal models (LMMs) on complicated multimodal tasks. Recent LMMs have shown var- ious intriguing abilities, such as solving math problems written on the blackboard, reasoning about events and celebrities in news images, and explaining visual jokes. Rapid model advancements pose challenges to evaluation benchmark development. Problems include: (1) How to systematically structure and evaluate the compli- cated multimodal tasks; (2) How to design evaluation metrics that work well across question and answer types; and (3) How to give model insights beyond a simple performance ranking. To this end, we present MM-Vet, designed based on the insight that the intriguing ability to solve complicated tasks is often achieved by a generalist model being able to integrate different core vision-language (VL) capa- bilities. MM-Vet defines 6 core VL capabilities and examines the 16 integrations of interest derived from the capability combination. For evaluation metrics, we propose an LLM-based evaluator for open-ended outputs. The evaluator enables the evaluation across different question types and answer styles, resulting in a unified scoring metric. We evaluate representative LMMs on MM-Vet, providing insights into the capabilities of different LMM system paradigms and models. Code and data are available at https://github.com/yuweihao/MM-Vet.
# Introduction
The breakthroughs in large language models (LLMs) [11, 59, 20, 5, 73, 36] bring generalist AI models that can solve a wide range of complicated natural language tasks, many approaching the human-expert-level performance [59, 13]. Large multimodal models (LMMs) aim to achieve even stronger general intelligence via extending LLMs with multimodal inputs. Since more than 80% of our human beingâs perception, learning, cognition, and activities are mediated through vision [65], it is natural to start the exploration by equipping LLMs with âeyes.â One main thread of LMM works, represented by Frozen [75], Flamingo [4], PaLM-E [25], GPT-4 [59], extend LLMs with the visual understanding capability via end-to-end tuning. There also exists the exploration [83, 89, 85, 70, 30] on the modular combination of LLMs and image-to-text vision-language models. Recently, thanks to the open-source of powerful LLMs like LLaMA [73], more open-sourced LMMs are built, including OpenFlamingo [9], LLaVA [51], MiniGPT-4 [93], Otter [44], InstructBLIP [23], and many more [33, 52, 87]. These studies showcase the intriguing ability to solve various complicated
âEqual contribution. â Corresponding authors. 1Short for âMultimodal Veterinarian.â
coco Text VOQA Q: Is the boy happy? GT: The man at bat readies to swing at the Q: What is the largest denomination on table? GT: Yes pitch while the umpire looks on. GT: 500 Required capability: Required capabilities: Required capabilities: Recognition Recognition Recognition Language generation OCR (a) (b) (c) Me: I'll do it at 8 Time: 8.05 Me: looks like | gotta wait till 9 now Q: What will the girl on the right write Q: Where is this photo taken? Q: Can you explain this meme? on the board? GT: Qatar GT: This meme is a humorous take on GT: 14 procrastination and the tendency to delay Required capabilities: Required capabilities: tasks until a specific time ... Recognition Recognition Required capabilities: Spatial awareness Knowledge OcR OCR Recognition Math Knowledge Language generation (a) (e) ()
Figure 1: Required capabilities of different benchmarks. Different from conventional VL benchmarks only require one or two capabilities, MM-Vet focuses on the integration of different core VL capabili- ties, including recognition, OCR, knowledge, language generation, spatial awareness, and math.
multimodal tasks, such as open-world recognition, multimodal knowledge and commonsense, scene text understanding, and so on.
Despite the promising qualitative results on LMMâs capabilities, it remains unclear how to systemati- cally evaluate those showcased complicated multimodal tasks and what are the relationships among evaluated tasks, which is the first step in developing a quantitative evaluation benchmark. As shown in Figure 1, existing vision-language benchmarks [6, 17, 72] focus on simple Vision-Language (VL) tasks that require specific one or two capabilities, such as recognition, language generation, or OCR, but fall short in benchmarking more complicated tasks. Alternatively, we examine the arbitrary integration of core VL capabilities for complicated tasks, with the insight that the intriguing ability to solve complicated multimodal tasks can be achieved by a generalist model mastering and integrating different core capabilities. Following this insight, we propose a new benchmark for evaluating LMMs, namely MM-Vet. MM-Vet defines six core VL capabilities, including recognition, OCR, knowledge, language generation, spatial awareness, and math, which integrate to solve various complicated multimodal tasks. MM-Vet contains 16 tasks for quantitative evaluation. For example, in Figure 1(d), answering the question âWhat will the girl on the right write on the board?â in MM-Vet requires recognizing the genders of the three kids, locating queried girl spatially, recognizing the scene text written by the girl, and finally calculating the result.
Other than the evaluation category definition, the evaluation metrics are another challenge in bench- mark development, given the diverse answer styles and question types. Specifically: (1) The desired outputs in different multimodal tasks have diverse formats, e.g., Figure 1(d)âs math problem can be
2
answered by a single word, while outputs for the essay writing question are hundred-words long; (2) The core aspect to evaluate in different tasks varies, e.g., text generation focuses more on the text quality, recognition can be considered correct with the key concept recognized. Most integrated tasks would require comprehensive evaluations from multiple dimensions. Inspired by recent NLP studies [19, 53, 28] that use LLMs for model evaluation, we propose an LLM-based evaluator as the evaluation metric for open-ended model outputs. As shown in Table 1, we prompt GPT-4 [59] with few-shot evaluation prompts to obtain an evaluation score ranging from 0 to 1. Instead of manually defining the possible answer styles and question types, we include different sample types as few-shot examples and let LLMs infer the scoring criteria automatically. Such metric design eases the future extension to more question types, such as box localization [16, 82, 77].
MM-Vetâs evaluation category and metric designs allow users to obtain capability insights for different LMMs. Such model analyses are more informative than a single overall ranking, which highly depends on the dataset sample composition and might be biased. We evaluate two sets of multimodal systems, i.e., the end-to-end tuned LMMs including OpenFlamingo [9], LLaVA [51], MiniGPT-4 [93], Otter [44], InstructBLIP [23], etc, and the LLM-tool-using systems [85, 70, 30, 38] such as MM-ReAct [85]. Despite not knowing model details, we also evaluate industry solutions such as Bard [34]. We first discuss the capability analyses of these two system paradigms and the representative models. We then dive deeper into the open-sourced LMMs and examine how the training data, vision encoder, and LLM selection influence the performance on different capabilities.
Our contributions are summarized as follows.
We propose MM-Vet to evaluate LMMsâ ability on complicated multimodal tasks. MM-Vet defines 16 emergent tasks of interest, integrated from the six defined core VL capabilities. ⢠We propose an LLM-based evaluator for open-ended outputs of LMMs, which unifies the evaluation across different answer styles and question types. The evaluation metrics ensure the thorough evaluation of both the factual correctness and text quality of the responses. ⢠We benchmark representative LMMs on MM-Vet, revealing the relative strengths and
weaknesses of different system paradigms and models, as summarized in Section 4.5.
# 2 Related work
Multimodal models. Vision-language models [17, 35, 56, 18, 48, 40, 79, 76, 82, 29] approach multimodal intelligence of jointly understanding and generating vision and language signals. Inspired by the impressive quality and genericity in recent large language models (LLMs) [12, 59, 20, 73], researchers explore large multimodal models (LMMs) that seamlessly integrate different vision- In approaching such multimodal language capabilities to solve complicated multimodal tasks. generalist systems, one direction is to extend LLMs with the multi-sensory ability, such as pioneer works Frozen [75], Flamingo [4], PaLM-E [25], GPT-4 [59]. Recent open-sourced LLMs [91, 73, 64] also facilitate various research studies including OpenFlamingo [9], LLaVA [51], MiniGPT-4 [93], Otter [44], InstructBLIP [23], and so on [33, 52, 87]. On the other hand, multimodal agents [85, 70, 38, 30] explore chaining different vision tools with LLMs [12, 59] to achieve integrated vision-language capabilities.
VL benchmarks. Classic VL benchmarks focus on specific capabilities of interest, such as visual recognition [35], image description [17, 3], as well as other benchmarks for specialized capabilities such as scene text understanding [72, 71, 86], commonsense reasoning [88], outside knowledge [58]. The recent development of generalist LMMs posts a strong need for modernized VL benchmarks, which contain complicated multimodal tasks that require integrated VL capabilities.
Our MM-Vet is most related to the concurrent evaluation studies [27, 54, 45, 80, 50] such as MME and MMBench, which design comprehensive evaluation samples to facilitate the LMM evaluation. One major difference is that MM-Vet defines and studies the integrated VL capabilities, allowing the evaluation to provide insights beyond the overall model ranking.
LLM-based evaluation. MM-Vet adopts the open-ended LLM-based evaluator, allowing the evalua- tion across answer styles and question types without requiring binary or multiple answer choices. The technique of prompting LLMs for model evaluation is related to the explorations in NLP [19, 53, 28]. We show that the technique extends well to multimodal tasks, and presents a unified prompt to evaluate samples with different answer styles and question types.
3
fo) ° _ 68.8 60 = 44.0 fe) £40 rd 38.5 36.7 34.4 : | Q £°0 | 11.9 0 Rec OCR Know Gen Spat Math (Recognition) (Knowledge) (Language (Spatial generation) awareness) (a) ge 30 28.4 § 20 £ fe) S 6.4 © 45.5 5.5 5.0 41 a . 3.7 3.7 3.2 18 14 09 09 0.5 Rec Rec OCR OCR Rec OCR OCR Rec Rec Rec Rec Rec OCR Rec OCR Rec Know Spat Spat Spat Math Know OCR OCR OCR OCR KnowKnow Gen OCR Gen Math Know Gen Spat Spat Spat Spat Spat Gen Spat Math (b)
Figure 2: MM-Vet proportion of capabilities. (a) The proportion of each capability. The sum of the proportion is larger than 100% because most samples have more than one capability. (b) The proportion of capability integrations. The sum of the proportion is equivalent to 100%.
# 3 MM-Vet
# 3.1 Data collection
Our aim is to develop a multimodal benchmark that requires comprehensive capabilities, corre- sponding to realistic scenarios an AI agent might encounter. Consider, for instance, this scenario: Awakening from slumber, you reach out for your smartphone (recognition capability) to check the current time (OCR capability). Today, your plan is to visit a new grocery that you have not been to. Guided by the information that the grocery is situated directly opposite the stadium and next to the cinema (spatial awareness), you manage to locate it successfully. Keeping in mind your doctorâs advice to shed some weight, you consciously steer clear of high-calorie food and choose milk, vegetables, and fruits instead (knowledge capability). In the dairy aisle, youâre faced with a choice between two types of pure milk. The first is 4 dollars for one liter with 20% discount, while the second is 7 dollars for 1.5 liter with 25% discount. After some quick arithmetic, you find the former is cheaper (math capability) and and opt for the one-liter package. After shopping, you walk past the cinema and find a person pointing to the poster to introduce a new movie (language generation).
From the scenarios of interest, we summarize the following six core VL capabilities for evaluation, with corresponding MM-Vet examples shown in Tables 10-15.
⢠Recognition (Rec). Recognition refers to the general visual recognition capability, including recognizing scenes, objects, object attributes (color, material, shape, etc), counting, and various other high-level visual recognition tasks in computer vision.
⢠Knowledge (Know). The knowledge category covers various knowledge-related capabilities, including social and visual commonsense knowledge, encyclopedic knowledge, and time- sensitive knowledge like news. This capability necessitates that the model not only possesses such knowledge, but also effectively utilizes it to solve complicated tasks as required.
⢠OCR. Optical character recognition (OCR) refers to the scene text understanding and reasoning capability. The models are tested to read the scene text in images, and reason over the texts to solve various tasks.
4
⢠Spatial awareness (Spat). Spatial awareness embodies a diverse spectrum of capabilities related to understanding space, including the comprehension of the spatial relationship among object and scene text regions.
⢠Language generation (Gen). Language generation is a vital ability that empowers models to articulate their responses in a clear, engaging, and informative manner. We use questions that demand more extended answers for language generation capacity evaluation.
⢠Math. Math evaluates the modelâs arithmetic capability in solving either written equations or problems in the wild.
In real-world scenarios, various complicated multimodal tasks would require the integrations of different core VL capabilities. For instance, explaining visual jokes as shown in Table 10(a) requires recognition, knowledge of humor, and language generation; reading documents and solving math problems as shown in Table 11(a) takes OCR, spatial awareness and math; and answering exam questions given images as shown in Table 14(b) needs OCR, knowledge, spatial awareness. To solve these complicated tasks, LMMs are expected to seamlessly integrate different VL capabilities. Therefore, it is crucial to establish a benchmark that evaluates the performance of these integrated abilities within LMMs.
To build the benchmark, we have gathered 187 images from various online sources and ask 205 questions, each of which requires one or more capabilities to answer. As shown in Tables 10-15, these questions are varied in type and entail open-ended responses of differing lengths. The ground truths for 155 questions are human-annotated, while the remainder of the answers for 50 questions were gathered from the Internet. In addition to the 187 images, ten extra images with high-quality questions are collected from VCR [88], with the questions and answers modified to an open-ended answering format. Another three images are from ChestX-ray14 [78] to obtain corresponding medical expert knowledge. In total, our MM-Vet contains 200 images, and 218 questions (samples), all paired with their respective ground truths. For each question, we have also identified the capacities required to answer them and displayed this information statistically in Figure 2.
# 3.2 LLM-based evaluator for open-ended model outputs
Questions and expected responses in MM-Vet are designed to be open-ended to cover the diverse real-world scenarios. This naturally poses a great challenge in terms of model evaluation and metric design. Drawing inspiration from recent NLP studies [19, 92] that utilize LLMs for open-ended evaluations, we leverage GPT-4 to assist evaluation. As shown in Table 1, we craft a few-shot prompt for model evaluation. The few-shot design allows us to define the scoring metrics via in-context examples and supports easy extension onto new problem sets. Specifically, our implemented prompt incorporates five in-context examples with open-ended short answers and two examples with long answers. We cover examples that are fully correct (i.e., 1.0) or incorrect (i.e., 0.0), as well as examples used to define different types of âpartially correctâ responses. The LLM-based evaluator allows any style of model outputs to be evaluated with a unified consistent metric. Furthermore, it also supports easy adaptation to diverse question types and answer styles by simply modifying the evaluation examples.
By inputting the prompt, GPT-4 automatically generates scores for each sample, conditioned on each sampleâs input question, ground truth, and model output. The score for each sample ranges from 0 to 1. The total scores are computed by
N d si S= x 100%, qd) N
where si is the score of sample i, and N is the sample number. The score regarding each capability or capability integration can be similarly obtained by
So= Xs x 100%, iec, (2) c
where C is the set of samples requiring a specific capability or capability integration, and Nc is the sample number of the set.
5
Table 1: Few-shot prompt for evaluating model outputs using GPT-4, where Q is a sampleâs question, G is the ground truth and P is the model output for the sample. In the prompt, there are examples with short and long open-ended answers, enabling the evaluation of diverse answer styles. Taking the prompt filled with Q, G and P, GPT-4 will generate a soft grading score from 0 to 1.
Compare the ground truth and prediction from AI models, to give a correctness score for the prediction. <AND> in the ground truth means it is totally right only when all elements in the ground truth are present in the prediction, and <OR> means it is totally right when any one element in the ground truth is present in the prediction. The correctness score is 0.0 (totally wrong), 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, or 1.0 (totally right). Just complete the last space of the correctness score.
Question | Ground truth | Prediction | Correctness â | â | â | â What is x in the equation? | -1 <AND> -5 | x = 3 | 0.0 What is x in the equation? | -1 <AND> -5 | x = -1 | 0.5 What is x in the equation? | -1 <AND> -5 | x = -5 | 0.5 What is x in the equation? | -1 <AND> -5 | x = -5 or 5 | 0.5 What is x in the equation? | -1 <AND> -5 | x = -1 or x = -5 | 1.0 Can you explain this meme? | This meme is poking fun at the fact that the names of the countries Iceland and Greenland are misleading. Despite its name, Iceland is known for its beautiful green landscapes, while Greenland is mostly covered in ice and snow. The meme is saying that the person has trust issues because the names of these countries do not accurately represent their landscapes. | The meme talks about Iceland and Greenland. Itâs pointing out that despite their names, Iceland is not very icy and Greenland isnât very green. | 0.4 Can you explain this meme? | This meme is poking fun at the fact that the names of the countries Iceland and Greenland are misleading. Despite its name, Iceland is known for its beautiful green landscapes, while Greenland is mostly covered in ice and snow. The meme is saying that the person has trust issues because the names of these countries do not accurately represent their landscapes. | The meme is using humor to point out the misleading nature of Icelandâs and Greenlandâs names. Iceland, despite its name, has lush green landscapes while Greenland is mostly covered in ice and snow. The text âThis is why I have trust issuesâ is a playful way to suggest that these contradictions can lead to distrust or confusion. The humor in this meme is derived from the unexpected contrast between the names of the countries and their actual physical characteristics. | 1.0 Q | G | P |
# 4 Evaluation results
# 4.1 Experiment settings
We utilize MM-Vet to evaluate two types of LMMs, i.e., (1) end-to-end tuned LMMs (OpenFlamingo [4, 9, 8], BLIP-2 [46], LLaVA [51], MiniGPT-4 [93], Otter [44] and InstructBLIP [23]); (2) LLM- tool-using methods (MM-ReAct [85] and Transformers Agent [38]). The summary of these methods is shown in Table 2. As shown in Table 1, for each sample, we fill the prompt template with its question, ground truth, and output from a specific LMM. By taking the filled prompt into GPT-4, GPT-4 will generate a score from 0 to 1 for the sample. It is found that outputs of GPT-4 still exist variance, although the temperature is set as 0. Therefore, we utilize GPT-4 to evaluate the outputs of LLMs by 5 times. Due to the space limit, we report average scores for capabilities/capability integrations, and average as well as variance for total score.
# 4.2 Result analyses
The main results of different methods are shown in Table 3 regarding each capability, and Table 4 for each capability integration.
6
Table 2: Summary of the evaluated LMMs in this report. We consider both the end-to-end tuned models (i.e., OpenFlamingo [4, 9, 8], BLIP-2 [46], LLaVA [51], MiniGPT-4 [93], LLaMA-Adapter v2 [32], Otter [44], InstructBLIP [23]), and the LLM-tool-using systems (i.e., MM-ReAct [85] and Transformers Agent [38]).
Method OpenFlamingo-9B [4, 9, 8] Vision CLIP ViT-L/14 [66] Initial models Language MPT-7B [2] Other â Tuning data Multimodal C4 [94] BLIP-2-12B [46] EVA-ViT-G [26] Flan-T5-XXL [21] â 1. COCO [49]; 2. Visual Genome [41]; 3. CC3M [69]; 4. CC12M [15]; 5. SBU [62]; 6. 115M images from the LAION-400M [68]. (CapFilt [47] is used to create synthetic captions for the web images) LLaVA-7B [51] LLaVA-13B [51] CLIP ViT-L/14 [66] Vicuna-7B [92] Vicuna-13B [92] â 1. CC3M [69] Concept-balanced 595K [51]; 2. LLaVA-Instruct-158K [51]. LLaVA-7B (LLaMA-2) [51] LLaVA-13B (LLaMA-2) [51] LLaVA-13B (V1.3, 336px) [51] CLIP ViT-L/336px [66] Vicuna-13B-v1.3 [92] MiniGPT-4-8B [93] LLaMA-2-7B-Chat [74] LLaMA-2-13B-Chat [74] CLIP ViT-L/14 [66] Vicuna-7B [92] EVA-ViT-G [26] MiniGPT-4-14B [93] Vicuna-13B [92] â BLIP-2âs Q-Former [46] 1. LAION /CC/SBU BLIP-Caption Concept-balanced 558K [51]; 2. LLaVA-Instruct-80K [51]. 1. CC3M [69]; 2. CC12M [15]; 3. SBU [62]; 4. LAION-400M [68] 5. Proposed 3,500 aligned image-text pairs [93]. LLaMA-Adapter v2-7B [32] CLIP ViT-L/14 [66] LLaMA-7B [73] â 1. LAION-400M [68]; 2. COYO-700M [14]; 3. Multimodal C4 [94]; 4. SBU [62]; 5. CC12M [15]; 6. COCO [49]; 7. GPT-4-LLM [64]; 8. Tuning data of LLaVA [51] Otter-9B [44] CLIP ViT-L/14 [66] MPT-7B [2] OpenFlamingo-9Bâs [4, 9, 8] 1. Perceiver Resampler; 2. GATED XATTN-DENSE MIMIC-IT [43] InstructBLIP-8B [23] InstructBLIP-14B [23] EVA-ViT-G [26] Vicuna-7B [92] Vicuna-13B [92] BLIP-2âs Q-Former [46] 1. Tuning data of BLIP-2 [46]; 2. 26 publicly available datasets (transformed into instruction tuning format). Transformers Agent (GPT-4 as agent) [38] â 1. GPT-4 [59]; 2. Flan-T5 [21]; 3. BART [42] 1. Donut [39]; 2. BLIP [47]; 3. ViLT [40]; 4. CLIPSeg [57] 5. Whisper [67]; 6. SpeechT5 [7]; 7. NLLB [22] None MM-ReAct-GPT-3.5 [85] MM-ReAct-GPT-4 [85] â GPT-3.5 [63] GPT-4 [59] 1. Azure Cognitive Services APIs [10] for image captioning, image tagging, dense captioning, OCR and specialized recognition on celebrities, receipts, etc 2. Bing search; 3. PAL [31] None Total params 9B 12B 7B 13B 7B 13B 13B 8B 14B 7B 9B 8B 14B Not clear Not clear
# 4.2.1 Regarding each capability
Recognition. The âRecognitionâ category contains the questions requiring recognition capability to answer. Examples are shown in Tables 10(a, b), 11(b), 12(a, b), 13(a, b), 14(a, c), and 15(b). The âRecâ column in Table 3 compares the performance on the âRecognitionâ. Among the evaluated models, LLaVA-13B (LLaMA-2) is the best one, obtaining 39.2%. There may be two reasons. First, LLaVA-13B (LLaMA-2) adopts ViT-L/14 [24] from CLIP [66] as a vision model, which is trained by a large amount of data, 400 million image-text pairs; 2) Second, it is surprising that stronger language model can largely boost the recognition performance. LLaVA-13B (LLaMA-2) obtains 8.3% important over LLaVA-13B (Vicuna-13B). Stronger LLMs may help understand questions better and identify key information from visual inputs.
LLaMA-Adapter v2-7B is another strong model in recognition, achieving 38.5%. This outstanding ability may be obtained from its various and large amounts of tuning data, LAION-400M [68], COYO-700M [14], Multimodal C4 [94] and Tuning data of LLaVA [51] etc as shown in Table 2. Besides, InstructBLIP-8B [23] attains 32.4%. As shown in Table 2, the tuning data of InstructBLIP includes 26 publicly available datasets, which contain recognition heavily datasets, like VQA v2 [35] and GQA [37]. The promising capability of InstructBLIP in recognition may benefit from these datasets.
OCR. OCR assesses modelsâ capabilities in recognizing scene texts in images and performing various types of reasoning including math, spatial, recognition, etc. Examples are shown in Tables 10(c), 11(a, c, d), 12(b), 13(a, b), 14(a, b), 15(a, b). As shown in Table 2âs âOCRâ column, MMReAct-GPT4 [85] performs the best (65.7%) in OCR capability with the assistance of an external OCR model as a tool. Among end-to-end tuned models, LLaVA-13B (LLaMA-2) [51] achieves the highest performance (22.7%). This superior performance may be attributed to LLaVAâs adoption of CLIP [66] ViT-L/14 [24] as its vision model, and the inclusion of a large volume of image-OCR pairings within the training data [55].
Knowledge. As depicted in Tables 10(a), 12(a, b) and 14(b, c), the âknowledgeâ category covers a wide range of knowledge-related questions, ranging from joke understanding to encyclopedia
7
Table 3: MM-Vet evaluation results on various LMMs regarding each core VL capability. For each column, the highest, the second, and the third highest figures are highlighted by green, orange and blue backgrounds. All the numbers are presented in % and the full score is 100%.
Model Transformers Agent (GPT-4) [38] MiniGPT-4-8B [93] BLIP-2-12B [46] LLaVA-7B [51] MiniGPT-4-14B [93] Otter-9B [44] OpenFlamingo-9B [4, 9, 8] InstructBLIP-14B [23] InstructBLIP-8B [23] LLaVA-13B [51] MM-ReAct-GPT-3.5 [85] LLaVA-7B (LLaMA-2) [51] LLaMA-Adapter v2-7B [32] LLaVA-13B (V1.3, 336px) [51] LLaVA-13B (LLaMA-2) [51] MM-ReAct-GPT-4 [85] Rec OCR Know Gen 3.2 18.2 13.9 27.4 7.0 27.5 18.9 28.0 22.1 29.9 13.8 27.3 13.1 28.7 9.0 30.8 18.2 32.4 26.4 30.9 24.2 20.7 20.1 32.9 33.4 38.5 25.8 38.1 29.3 39.2 35.0 33.1 3.9 15.0 11.1 17.1 16.1 17.8 16.7 16.0 14.6 20.1 31.5 20.1 20.3 22.3 22.7 65.7 2.2 12.8 11.8 16.3 20.4 14.2 16.4 9.8 16.5 23.5 21.5 19.0 31.4 25.2 26.5 29.0 Spat Math 4.0 12.4 7.7 20.3 5.8 16.2 11.5 21.2 3.8 22.2 3.8 24.4 7.7 21.0 10.5 21.1 7.7 18.6 7.7 24.3 26.2 32.3 5.2 25.7 3.8 22.9 11.2 31.3 7.7 29.6 69.2 56.8 Total 13.4±0.5 22.1±0.1 22.4±0.2 23.8±0.6 24.4±0.4 24.7±0.3 24.8±0.2 25.6±0.3 26.2±0.2 26.4±0.1 27.9±0.1 28.1±0.4 31.4±0.1 32.5±0.1 32.9±0.1 44.6±0.2
knowledge. LLaVA-Adapter v2-7B is the best model in this capability with a score of 31.4%, as shown in Table 3. It may be beneficial from its large-scale tuning data including GPT-4-LLM [64]. MMReAct-GPT-4 [85] also achieves a remarkable score (29.0%) in this capability, because of its strong LLM backbone [59], coupled with external tools like Bing search for knowledge acquisition.
Language generation. âLanguage generationâ denotes the proficiency to produce fluent and infor- mative text outputs, as illustrated in Table 10(a), 12(b), 13(a), and 15(a). The performance within this category is highly correlated with the efficacy of language modeling. As a result, MMReAct-GPT4 [85] and LLaVA-13B (LlaMA-2) stand out as the top two models. Their success can be attributed to the GPT-4 and LlaMA-2 language models on which these systems are built.
Spatial awareness. âSpatial awarenessâ involves the understanding of the spatial relationship among visual object regions (e.g., Table 10(c)) and scene text regions (e.g., Table 13(a, b)). MMReAct-GPT4 [85] has a significant lead in this capability (56.8%), because the adopted tools, such as dense captioning and OCR, provide detailed object and scene text location information in the form of coordinates, which can be understood and processed by GPT-4.
When it comes to end-to-end tuned models, LLaVA-13B (V1.3, 336px) exhibits the best performance of 31.3%. The tuning data for LLaVA is partly derived from capturing object names and their corresponding coordinates as input. This procedure ensures the generation of data imbued with spatial information, potentially aiding the models in developing and enhancing their spatial awareness capabilities.
Math. âMathâ measures the arithmetic capability on either written equations (e.g., Table 15(b)) or problems in the wild (e.g., Table 11(d)). Notably, MMReAct-GPT4 [85] consistently outperforms other models. This superior performance may be attributed to the adopted PAL math tool (Program- aided Language Models) [31].
# 4.2.2 Regarding each capability integration
Recognition, knowledge, and language generation.. As shown in Table 10(a), this capability integration can enable models to explain visual jokes. LLaMA-Adapter-v2-7B [32] is the best model in this capability integration. This may be attributed to its large scale of tuning data as shown in Table 2. LLaVA-13B (LLaMA-2) and LLaVA-13B (V1.3, 336px) [51] are the other two outstanding models. Stronger language models may be the reason. The tuning data of LLaVA shown in Table 2 can also not be ignored.
8
Table 4: MM-Vet evaluation results on various LMMs regarding each capability integration. Examples of each capability integration are shown in supplementary materials Tables 10-15. For each column, the highest, the second, and the third highest figures are highlighted by green, orange and blue backgrounds. All the numbers are presented in % and the full score is 100%.
Model Transformers Agent (GPT-4) [38] MiniGPT-4-8B [93] BLIP-2-12B [46] LLaVA-7B [51] MiniGPT-4-14B [93] Otter-9B [44] OpenFlamingo-9B [4, 9, 8] InstructBLIP-14B [23] InstructBLIP-8B [23] LLaVA-13B [51] MM-ReAct-GPT-3.5 [85] LLaVA-7B (LLaMA-2) [51] LLaMA-Adapter v2-7B [32] LLaVA-13B (V1.3, 336px) [51] LLaVA-13B (LLaMA-2) [51] MM-ReAct-GPT-4 [85] OCR Rec Rec Rec Spat OCR Know OCR Know Spat OCR Math Spat Gen Rec Math 0.0 7.4 45.8 0.0 1.3 49.1 0.0 0.0 14.4 14.3 50.0 20.8 14.2 47.9 9.6 0.0 38.9 7.3 41.7 21.2 7.1 65.1 11.5 4.5 28.9 17.1 46.6 13.3 21.4 41.7 24.8 0.0 11.1 50.0 16.7 21.1 47.5 14.6 0.0 11.1 15.6 54.1 29.2 50.0 22.5 0.0 38.9 15.5 48.6 15.4 14.3 58.3 40.5 0.0 11.1 74.3 14.6 14.3 50.0 19.2 8.1 6.5 23.3 69.9 15.4 14.3 33.3 20.8 18.0 0.0 18.0 25.2 41.1 17.3 47.5 23.3 7.1 9.1 33.3 19.1 33.1 28.8 35.7 28.3 60.0 9.1 18.8 57.0 26.9 9.7 34.7 0.0 50.0 26.7 12.2 50.0 38.5 0.0 35.3 54.1 13.5 7.1 27.8 8.2 25.5 59.7 25.0 14.3 66.7 25.8 29.8 59.5 21.2 14.3 58.3 36.2 27.8 0.0 22.5 33.0 69.2 78.6 25.0 83.0 63.6 44.4 7.1 7.1 Rec OCR OCR Know Know Spat Gen 0.0 0.0 8.0 0.7 14.3 25.0 16.7 5.2 0.0 6.2 50.0 6.6 50.0 18.7 38.5 18.3 32.5 23.1 46.5 33.3 3.2 6.0 6.0 4.5 28.6 50.0 10.0 15.2 14.3 70.0 16.7 8.8 0.0 35.2 15.7 25.0 7.8 53.8 14.3 50.0 12.5 50.0 2.5 25.0 100.0 0.0 47.8 10.2 44.8 14.3 50.0 11.3 22.5 38.0 28.6 48.0 53.3 11.2 49.3 14.3 50.0 33.3 56.8 28.6 50.0 33.3 3.5 0.0 68.2 88.0 14.3 50.0 Rec Rec OCR Rec OCR Gen OCR Spat Spat 9.5 25.0 0.0 21.2 42.9 50.0 8.5 45.2 Rec OCR OCR Rec Spat Gen Know Math Spat Spat 13.4±0.5 50.0 49.0 0.0 22.1±0.1 0.0 0.0 0.0 22.4±0.2 0.0 50.0 0.0 23.8±0.6 0.0 19.0 0.0 24.4±0.4 0.0 0.0 0.0 24.7±0.3 0.0 30.0 0.0 24.8±0.2 0.0 0.0 0.0 25.6±0.3 0.0 50.0 15.0 0.0 26.2±0.2 0.0 0.0 26.4±0.1 12.0 0.0 0.0 35.0 80.0 27.9±0.1 0.0 28.1±0.4 0.0 0.0 0.0 31.4±0.1 0.0 0.0 0.0 32.5±0.1 0.0 2.0 50.0 32.9±0.1 0.0 0.0 8.0 44.6±0.2 50.0 80.0 0.0 Total
Recognition (sole). This category contains samples only requiring recognition, as shown in Table 10(b). InstructBLIP-14B and InstructBLIP-8B [23] achieve the best performance, which may result from the tuning data including recognition datasets, like VQA v2 [35] and GQA [37].
OCR and spatial awareness. For this integration, an example is shown in Table 10(c). MM-ReAct- GPT-4 [85] is the best method for this integration. Notably, compared with MM-ReAct-GPT-3.5, MM-ReAct-GPT-4 has a significant improvement, over 40%, indicating the importance of LLMs to integrate information of OCR and location.
OCR, spatial awareness, and math. An example of this integration is shown in Table 11(a), which requires reading the floor plan and conducting arithmetic. Compared with the above integration, this combination involves one more capability of math. The observation is similar to the integration of OCR and spatial awareness. MM-ReAct-GPT-4 [85] still achieves the best performance.
Recognition and spatial awareness. Table 11(b) shows an example for this integration. LLaVA- 13B (V1.3, 336px) [51] performs best for this category. Compared with LLaVA-13B (LLaMA-2), LLaVA-13B (V1.3, 336px) obtains an improvement of 8.4%, indicating the significant contribution of larger resolution of images.
OCR (sole). This task requires OCR only, as shown in Table 11(c). MM-ReAct-GPT-4 [85] has the best results for sole OCR due to an OCR tool from Azure API. Notable, MM-ReAct-GPT-4 is much better than MM-ReAct-GPT-3.5 with an improvement of 23.0%, demonstrating the importance of language models in OCR.
OCR and Math. This integration enables reading text from real-world scenarios and solving math problems, as shown in Table 11(d). MM-ReAct-GPT-4 [85] obtains the best performance in this capability integration, far ahead of other models. We highly recommend using MM-ReAct-GPT-4 to complete tasks related to this capability integration.
Other capability integrations. 9 other capability integrations are in long-tailed distribution, where MMReAct-GPT-4 achieves the best scores in 5 integrations out of 9. Their examples are shown in Tables 12-15.
# 4.3 Result discussion
# 4.3.1 Foundation models and tuning data
In this subsection, we discuss the modules in LMMs and speculate how each component may affect the LMMsâ capabilities in different aspects, evaluated by MM-Vet. We mainly consider the models based on open-sourced LLMs, i.e., Flan-T5 [21], LLaMA [73], Vicuna [92], and LLaMA-2 [74].
9
Table 5: MM-Vet (Bard set) evaluation results on various LMMs regarding each core VL capability. For each column, the highest, the second, and the third highest figures are highlighted by green, orange and blue backgrounds. All the numbers are presented in % and the full score is 100%. Total 30.3±0.1 31.5±0.1 27.6±0.2 48.1±0.2 53.5±0.2
Model LLaVA-13B (LLaMA-2) [51] LLaVA-13B (V1.3, 336px) [51] MM-ReAct-GPT-3.5 [85] MM-ReAct-GPT-4 [85] Bard [34] Rec OCR Know Gen 27.6 37.8 24.6 39.4 16.6 22.3 36.6 34.3 61.0 56.2 Spat Math 8.0 27.2 11.6 30.6 24.0 32.9 72.0 60.6 39.6 52.0 22.9 22.3 31.4 66.3 52.5 22.4 22.7 15.6 25.6 50.9
Table 6: MM-Vet (Bard set) evaluation results on various LMMs regarding each capability integration. For each column, the highest, the second, and the third highest figures are highlighted by green, orange and blue backgrounds. All the numbers are presented in % and the full score is 100%.
OCR Rec OCR Spat Know Gen Rec Math Math 26.6 55.2 18.8 14.3 57.1 39.5 0.0 8.2 9.1 Model Rec Know 20.0 Vicuna-13B (LLaMA-2) [51] 20.0 Vicuna-13B (V1.3, 336px) [51] 21.9 59.0 22.9 14.3 85.7 25.5 20.0 11.3 38.8 31.2 35.7 28.6 56.4 MM-ReAct-GPT-3.5 [85] 17.0 35.2 70.8 78.6 28.6 81.5 63.6 40.0 MM-ReAct-GPT-4 [85] 0.0 52.3 70.3 45.2 56.4 42.9 70.2 18.2 Bard [34] OCR Spat Rec Spat OCR Rec Rec OCR OCR OCR Know Gen Know Spat Spat Gen 1.3 56.8 28.6 50.0 33.3 15.0 49.3 14.3 50.0 33.3 25.0 100.0 0.0 0.0 68.3 88.0 14.3 50.0 0.0 77.7 81.5 28.6 50.0 66.7 Rec OCR Spat Rec OCR 47.8 OCR Rec Gen Know Spat Spat 8.0 0.0 2.0 50.0 35.0 0.0 50.0 80.0 50.0 80.0 Rec OCR Spat Math â â â â â Total 30.3±0.1 31.5±0.1 27.6±0.2 48.1±0.2 53.5±0.2
Vision. For the Vision component, two models have been employed in the end-to-end LMMs we evaluated, namely, CLIP-ViT/L14 [66] (428M) and EVA-ViT-G (1.13B). Determining a superior model is currently not possible due to the absence of a comprehensive ablation study [90]. However, itâs noteworthy that, when paired with the same language model, Vicuna-7B, InstructBLIP-8B excels in recognition tasks, while LLaVA-7B works particularly well for OCR.
Language. There is a notable trend indicating that superior language models (LLMs) typically yield better performance, such as comparing the 7B and 13B variants of different models, except for the outlier of InstructBLIP where the 8B version performs better than the 14B one.
Tuning data. Increasing the volume of data can enhance performance. An example is InstructBLIP- 8B [23], which utilizes more data from 26 publicly available datasets to tune the model and achieve higher scores than BLIP-2-12B.
# 4.3.2 Comparison with Bard
Bard [34] is one popular closed-source commercial LMM system. One problem in evaluation is that Bard rejects images containing people and instead outputs âSorry, I canât help with images of people yet.â To conduct a fair comparison with other models, we constructed a subset of MM-Vet with 168 samples that Bard could process, henceforth referred to as the Bard set. The results on the Bard set are shown in Tables 5 and 6.
Bard achieves the highest scores in three out of six capabilities, seven out of fifteen capability integrations, and holds the highest overall score (53.5%). MM-ReAct-GPT-4 [85] outperforms in the remaining three out of six capabilities, and tops the chart in nine out of the fifteen capability integrations. Particularly, MM-ReAct performs better in OCR, spatial awareness, and math capabili- ties, indicating the potential benefit of having specialized external tools, even when working with state-of-the-art LMMs.
When considering end-to-end models, there is still a big gap from Bard. For instance, Vicuna-13B (V1.3, 336px) [51] obtains 31.5%, a substantial 22.0% lower than Bard. Future stronger open-sourced LLMs and advancements in multimodal training hold potential to further narrow this gap.
# 4.3.3 Comparison with GPT-4V(ision)
We evaluate and benchmark the state-of-the-art LMM, GPT-4V(ison) [59â61, 1, 84] on MM-Vet. In our queries to GPT-4V, we prepend the prompt with âGenerate a short and concise response to the following image text pair.â The quantitative results are shown in Tables 7, 8, and the qualitative
10
Table 7: MM-Vet evaluation results on LLaVA, MM-ReAct and GPT-4V regarding each core VL capability. For each column, the highest and the second highest figures are highlighted by green and orange backgrounds. All the numbers are presented in % and the full score is 100%.
Model LLaVA-13B (LLaMA-2) [51] MM-ReAct-GPT-4 [85] GPT-4V [60] Rec OCR Know Gen 29.3 39.2 35.0 33.1 60.7 67.5 22.7 65.7 68.3 26.5 29.0 56.2 Spat Math 7.7 29.6 69.2 56.8 58.6 69.4 Total 32.9±0.1 44.6±0.2 67.7±0.3
Table 8: MM-Vet evaluation results on LLaVA, MM-ReAct and GPT-4V regarding each capability integration. For each column, the highest and the second highest figures are highlighted by green and orange backgrounds. All the numbers are presented in % and the full score is 100%.
Rec Know Gen Rec OCR Spat Math Model Rec Spat OCR LLaVA-13B (LLaMA-2) [51] 29.8 59.5 21.2 14.3 58.3 36.2 MM-ReAct-GPT-4 [85] GPT-4V [60] Rec Know 27.8 22.5 33.0 69.2 78.6 25.0 83.0 63.6 44.4 55.5 89.2 68.6 73.9 83.3 77.5 44.5 38.9 OCR Spat OCR Math 0.0 Rec OCR OCR Know Know Spat Gen 33.3 3.5 68.2 88.0 14.3 0.0 78.2 76.5 42.9 100.0 66.7 Rec Rec OCR Gen OCR Rec OCR Spat Spat 56.8 28.6 50.0 50.0 Rec OCR OCR Rec Spat Gen Know Math Spat Spat 0.0 0.0 8.0 50.0 80.0 0.0 50.0 89.0 0.0 Total 32.9±0.1 44.6±0.2 67.7±0.3
results are expressed in Figures 3-6. Remarkably, GPT-4V achieves a score of 67.7%, surpassing both open-sourced LMMs [51] and LLM-based multimodal agents [85] by substantial margins.
We aspire that the detailed per-category performance breakdown sheds light on potential avenues for enhancing model capabilities, thereby bridging the existing performance gap. To illustrate, integrating specialized tools within agent systems proves advantageous for specific functionalities like OCR and math. While other categories, such as recognition and language generation, would require enhancements in the core vision and language modules, respectively. Figures 3-6 offer an exhaustive analysis, highlighting exemplary success and failure instances of GPT-4Vâs performance.
This MM-Vet analysis is intended as a source of inspiration for future research, specifically in the realms of advanced multimodal prompting techniques [84, 81] and model refinements to further improve the LMM performance.
# 4.4 Effectiveness analysis of LLM-based evaluation
To verify the effectiveness of LLM-based evaluation for LMM predictions, we select the outputs from MMReAct-GPT-4 on 138 objective questions, which can be objectively annotated by humans. We compute the absolute value of the difference between the evaluatorâs output score and the human- annotated score on each sample. By default, we use GPT-4 (0613) as the evaluator. Here we also replace it with other LLMs, e.g. LLaMA-2, GPT-3.5. The average difference to the human scoring is reported in Table 9, represented as â.
The maximum potential discrepancy is 1.0. The baseline evaluation method, keyword matching, results in a high difference of 0.273. This illustrates the unsuitability of keyword matching for MM-Vet when dealing with open-ended answers. It is surprising that â of LLaMA-2-7B [74] is even higher than that of keyword matching, while â LLaMA-2-13B only marginally less than keyword matching. This suggests that assessing open-ended outputs from models is far from straightforward. For OpenAIâs models, GPT-3.5 (turbo-0613) obtains 0.178 of â, and GPT-4 (0613) achieves the lowest difference of 0.042. In this paper, we utilize GPT-4 (0613) to evaluate the outputs of LMMs.
# 4.5 Takeaway notes
We summarize the above analyses and discussions as follows:
⢠In the evaluation of integrated capabilities on MM-Vet (Sections 4.2, 4.3.2, 4.3.3), GPT- 4V [60] and Bard [34] outperform existing open-sourced methods. The tool-using ap- proach, MM-ReAct-GPT-4 [85], achieves comparable performance to Bard with effective external tools. The pros and cons in different categories motivate future studies on tool- enhanced LMMs. Among end-to-end LMMs, LLaVA-13B (LLaMA-2)/LLaVA-13B (V1.3, 336px) [51] demonstrates the best performance on MM-Vet.
11
Table 9: Averaged absolute differences (â) between the evaluation scores of various LLM evaluators and those of human-annotated scores, on MM-ReAct-GPT4âs results. A smaller discrepancy indicates a better agreement with the gold standard of human evaluation, indicating a better evaluator.
Model â (â) Keyword matching 0.273 LLM-based evaluation LLaMA-2-7B LLaMA-2-13B GPT-3.5 (turbo-0613) GPT-4 (0613) 0.307 0.254 0.178 0.042
⢠Analysis of open-source LMMs (Section 4.3.1) leaves room for ambiguity regarding the superior vision encoders for LMMs, based on current model comparisons. However, it is evident that stronger LLMs can boost the performance of LMMs.
⢠For open-ended evaluation (Section 4.4), it is effective to use GPT-4 for evaluating the open- ended outputs of LMMs. The use of less powerful LLMs could result in more significant deviations from the gold standard of human evaluation results.
⢠Current top-performing methods, such as GPT-4V [60] and MM-ReAct-GPT-4 [85], only achieve scores of around 68%/45% on MM-Vet (where full score is 100%). The gap signifies that further effort is necessary to enhance the performance of LMMs in terms of integrated capabilities, e.g., by developing stronger LMMs or extending LMMs with external tools.
# 5 Conclusion
In this paper, we introduce the MM-Vet benchmark to evaluate LMMs in terms of their integrated vision-language capabilities. We have assembled a new multimodal dataset, which requires the integration of multiple vision-language capabilities. To facilitate open-ended evaluation, we adopt an LLM-based evaluator to grade open-ended outputs from LMMs. We then evaluate various LMMs on MM-Vet, analyzing their results to provide insights into different LMM system paradigms and module selections. We observe that the current best LMMs GPT-4V [59â61, 1, 84] achieve around 68% score on MM-Vet (full score 100%), indicating the need for efforts to further improve the integrated capabilities of LMMs.
12
# A Examples of capability integrations
Table 10: Three samples requiring different capability integrations.
(a)
° V ( N
Q: What occasions would someone use this meme?
GT: This meme, commonly known as "Screaming Panda," is typically used to express shock, surprise, or fear. It could be used in response to a startling or unexpected event, or to convey a sense of panic or alarm. Some possible occasions where someone might use this meme include:
Reacting to a jump scare in a horror movie - Responding to a surprising plot twist in a TV show or book - Expressing shock at a news headline or current event - Conveying fear or anxiety about an upcoming deadline or exam - Showing surprise at an unexpected outcome in a sports game or other competition.
Required capabilities: Recognition, knowledge, language generation
(b)
Q: How many tomatoes are there?
GT:
5
# Required capabilities: Recognition
(c)
ee
Q: What is located to the right of the shampoo?
GT:
# conditioner
Required capabilities: OCR, spatial awareness
13
Table 11: Four samples requiring different capability integrations.
(a)
LL L _â
Q: Which room is bigger, the double garage or the living room?
GT: double garage
Required capabilities: OCR, spatial awareness, math
(b)
Q: On the right desk, what is to the left of the laptop?
GT:
table lamp <OR> desk lamp
Required capabilities: Recognition, spatial awareness
(c)
Q: What are all the scene text in the image?
GT: 5:30PM <AND> 88% <AND> Mario Kart 8 Deluxe <AND> MARIO KART 8 DELUXE <AND> SUPER MARIO ODYSSEY <AND> THE LEGEND OF ZELDA <AND> BREATH OF WILD <AND> Options <AND> Start
# Required capabilities: OCR
(d)
Gasoline Dow FAQS 3.552 3.652 oe F602
Q: How many gallons of supreme gasoline can I get with $50?
GT:
13.6 <OR> 13.7
Required capabilities: OCR, math
14
Table 12: Two samples requiring different capability integrations.
(a)
In which country was this photo taken?
# GT: Australia
Required capabilities: Recognition, knowledge
(b)
Me: I'll do it at 8 Time: 8.05 Me: looks like | gotta wait till 9 now
Q: Can you explain this meme?
GT: This meme is a humorous take on procrastination and the tendency to delay tasks until a specific time. The person in the meme plans to do something at 8 oâclock, but when they miss that deadline by a few minutes, they decide to wait until 9 oâclock instead. The image of Kermit the Frog lying in bed represents the personâs laziness and lack of motivation to complete the task. Required capabilities: Recognition, OCR, knowledge, language generation
15
Table 13: Two samples requiring different capability integrations.
(a)
Thousands 600 Immigration 400 300 Emigration Net â | I l | l I ° 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 Long-Term International Migration, UK, 1999-2008,
Q: The graph below shows the long-term international migration, UK, 1999-2008.
Summarize the information by selecting and reporting the main features, and make comparisons where relevant. You should write at least 150 words.
GT: The chart gives information about UK immigration, emigration and net migration between 1999 and 2008.
Both immigration and emigration rates rose over the period shown, but the figures for immigration were significantly higher. Net migration peaked in 2004 and 2007.
In 1999, over 450,000 people came to live in the UK, while the number of people who emigrated stood at just under 300,000. The figure for net migration was around 160,000, and it remained at a similar level until 2003. From 1999 to 2004, the immigration rate rose by nearly 150,000 people, but there was a much smaller rise in emigration. Net migration peaked at almost 250,000 people in 2004.
After 2004, the rate of immigration remained high, but the number of people emigrating fluctuated. Emigration fell suddenly in 2007, before peaking at about 420,000 people in 2008. As a result, the net migration figure rose to around 240,000 in 2007, but fell back to around 160,000 in 2008.
Required capabilities: Recognition, OCR, language generation, spatial awareness
(b)
Q: Which car is on the parking spot 33?
GT:
# no <OR> empty
Required capabilities: Recognition, OCR, spatial awareness
16
Table 14: Three samples requiring different capability integrations.
(a)
y
# Q: GT: Required capabilities: Recognition, OCR
Is this apple organic? yes
(b)
lve whale -s wer whale aa m Kad a. Nog â~< fe ran an = ye Te
Q: Which are producers in this food web? Phytoplankton <AND> Seaweed
Required capabilities: OCR, knowledge, spatial awareness
(c)
Q: Does the person bigger than the car?
GT:
# no
Required capabilities: Recognition, knowledge, spatial awareness
17
Table 15: Two samples requiring different capability integrations.
(a)
Underground Railway Systems.
Q: The table below gives information about the underground railway systems in six cities.
Summarise the information by selecting and reporting the main features, and make comparisons where relevant.
You should write at least 150 words.
GT: The table shows data about the underground rail networks in six major cities.
The table compares the six networks in terms of their age, size and the number of people who use them each year. It is clear that the three oldest underground systems are larger and serve significantly more passengers than the newer systems.
The London underground is the oldest system, having opened in 1863. It is also the largest system, with 394 kilometres of route. The second largest system, in Paris, is only about half the size of the London underground, with 199 kilometres of route. However, it serves more people per year. While only third in terms of size, the Tokyo system is easily the most used, with 1927 million passengers per year.
Of the three newer networks, the Washington DC underground is the most extensive, with 126 kilometres of route, compared to only 11 kilometres and 28 kilometres for the Kyoto and Los Angeles systems. The Los Angeles network is the newest, having opened in 2001, while the Kyoto network is the smallest and serves only 45 million passengers per year.
Required capabilities: OCR, language generation, spatial awareness
(b)
Q: What will the girl on the right write on the board?
GT:
14
Required capabilities: Recognition, OCR, spatial awareness, math
18
# B GPT-4V Prediction Examples
(a) Q: How many gallons of supreme GPT-4V: You can get gasoline can | get with $50? approximately 13.7 gallons GT: 13.6 <OR> 13.7 of supreme gasoline with Required capabilities: OCR, math $50 at the price of $3.659 per gallon. GPT4_score: 1.0 (b) Q: What is the price difference between GPT-4\V: The price difference diesel and regular gasoline? between diesel and regular GT: 0.11 gasoline is $0.11, with diesel Required capabilities: OCR, math being more expensive. GPT4_score: 1.0 (c) Q: Based on the image, what is the total GPT-4V: The total international awareness of Tina Fey? international awareness of Tina Fey is 35%. Required capabilities: OCR, spatial GPT4_score: 1.0 (d) Q: Based on the image, in which GPT-4V: Tina Fey has the | osm 8 ox a» | country does Tina Fey have the highest highest level of total level of total awareness? awareness in Australia at GT: Australia 62%. Required capabilities: OCR, spatial GPT4_score: 1.0 awareness (e) Q: What is the price for tomatoes? GPT-4V: The price for GT: eight <OR> 8.0 tomatoes is 8.00 as shown Required capabilities: Recognition, spatial awareness on the label in the image. (f) Q: What fruit is the the right of plums? â GPT-4\V: The fruit to the right = GT: orange of the plums are oranges. u Required capabilities: Recognition, spatial GPT4_score: 1.0 awareness R (g) Q: Which car is on the parking spot 33? GPT-4V: The blue car is on GT: no <OR> empty parking spot 33. Required capabilities: OCR, recognition, GPT4_score: 0.0 spatial awareness (h) Q: Which spot is the red car parking GPT-4V: The red car is on? parking on spot number 87. GT: 31 GPT4_score: 0.0 Required capabilities: OCR, recognition, spatial awareness
(c) Q: Based on the image, what is the total GPT-4V: The total international awareness of Tina Fey? international awareness of Tina Fey is 35%. Required capabilities: OCR, spatial GPT4_score: 1.0 (d) Q: Based on the image, in which GPT-4V: Tina Fey has the | osm 8 ox a» | country does Tina Fey have the highest highest level of total level of total awareness? awareness in Australia at GT: Australia 62%. Required capabilities: OCR, spatial GPT4_score: 1.0 awareness (e) Q: What is the price for tomatoes? GPT-4V: The price for GT: eight <OR> 8.0 tomatoes is 8.00 as shown Required capabilities: Recognition, spatial awareness on the label in the image. (f) Q: What fruit is the the right of plums? â GPT-4\V: The fruit to the right = GT: orange of the plums are oranges. u Required capabilities: Recognition, spatial GPT4_score: 1.0 awareness R (g) Q: Which car is on the parking spot 33? GPT-4V: The blue car is on GT: no <OR> empty parking spot 33. Required capabilities: OCR, recognition, GPT4_score: 0.0 spatial awareness (h) Q: Which spot is the red car parking GPT-4V: The red car is on? parking on spot number 87. GT: 31 GPT4_score: 0.0 Required capabilities: OCR, recognition,
Figure 3: GPT-4V prediction examples
19
Plastic Waste is Out of Control Energy price cap update ICAEW chart of the week (a) Q: Which category did the worst job in plastic recycling, considering the amount of plastic made and thrown out? GT: packaging Required capabilities: OCR, spatial awareness (b) Q: What is the difference in metric tonnes between the amount of plastic produced and the amount discarded? GT: 105 Required capabilities: OCR, spatial awareness (c) Q: Who is the person in the bottom right? 7) GT: Jamie Dornan Required capabilities: Recognition, spatial awareness (d) Q: What is the name of this landmark? GT: baochu pagoda Required capabilities: Recognition (e) Q: How many birds are there? GT: 6 Required capabilities: Recognition (f) Q: What is the estimated average standing charge for the year 2023? GT: 271 Required capabilities: OCR, spatial awareness, math (g) Q: What is the estimated total cost for the year 2023? GT: 23800 Required capabilities: OCR, spatial awareness, math GPT-4V: The packaging category did the worst job in plastic recycling in 2015, with 141M metric tonnes of plastic thrown out of the 146M metric tonnes made. GPT4_score: 1.0 GPT-4V: The difference in metric tonnes between the amount of plastic produced and discarded is 105 million metric tonnes (407M - 302M). GPT4_score: 1.0 GPT-4V: The person in the bottom right is Jamie Dornan. GPT4_score: 1.0 GPT-4V: This is the Baochu Pagoda located in Hangzhou, China. GPT4_score: 1.0 GPT-4V: There are six birds in the image. GPT4_score: 1.0 GPT-4V: The estimated average standing charge for the year 2023 is £280. GPT4_score: 0.0 GPT-4V: The estimated total cost for the year 2023 is £5,890. GPT4_score: 0.0
Plastic Waste is Out of Control Energy price cap update ICAEW chart of the week (a) Q: Which category did the worst job in plastic recycling, considering the amount of plastic made and thrown out? GT: packaging Required capabilities: OCR, spatial awareness (b) Q: What is the difference in metric tonnes between the amount of plastic produced and the amount discarded? GT: 105 Required capabilities: OCR, spatial awareness (c) Q: Who is the person in the bottom right? 7) GT: Jamie Dornan Required capabilities: Recognition, spatial awareness (d) Q: What is the name of this landmark? GT: baochu pagoda Required capabilities: Recognition (e) Q: How many birds are there? GT: 6 Required capabilities: Recognition (f) Q: What is the estimated average standing charge for the year 2023? GT: 271 Required capabilities: OCR, spatial awareness, math (g) Q: What is the estimated total cost for GPT-4V: The packaging category did the worst job in plastic recycling in 2015, with 141M metric tonnes of plastic thrown out of the 146M metric tonnes made. GPT4_score: 1.0 GPT-4V: The difference in metric tonnes between the amount of plastic produced and discarded is 105 million metric tonnes (407M - 302M). GPT4_score: 1.0 GPT-4V: The person in the bottom right is Jamie Dornan. GPT4_score: 1.0 GPT-4V: This is the Baochu Pagoda located in Hangzhou, China. GPT4_score: 1.0 GPT-4V: There are six birds in the image. GPT4_score: 1.0 GPT-4V: The estimated average standing charge for the year 2023 is £280. GPT4_score: 0.0 GPT-4V: The estimated total
Figure 4: GPT-4V prediction examples
20
(a) Q: What are all the scene text in the : image? Connecticut lew of 162 GT: Connecticut law of 1642 If any man or Sfany manor woman bo a witch â woman be a witch -- that is, hath or thatis, hathor consultethwitha â consulteth with a familiar spirit -- they shall Semiliar spiritâthey skalbeputte be put to death death | Required capabilities: OCR GPT-4V: The scene text in the image is: "Connecticut law of 1642 If any man or woman be a witch -- that is, hath or consulteth with a familiar spirit -- they shall be put to death" GPT4_score: 1.0 (b) Q: What are all the scene text in the image? GT: MR. WILLIAM SHAKESPEARES COMEDIES, HISTORIES, & TRAGEDIES. publifhed according to the True Originall Copies. Required capabilities: OCR (c) Q: What are all the scene text in the image? GT: 5:30PM 88% Mario Kart 8 Deluxe MARIO KART 8 DELUXE SUPER MARIO ODYSSEY THE LEGEND OF ZELDA BREATH OF WILD Options Start Required capabilities: OCR (d) Q: In which country was this photo taken? GT: australia Required capabilities: Recognition, knowledge (e) Q: Which are producers in this food Ry / \ web? = GT: Phytoplankton Seaweed spatial awareness oe on ao Required capabilities: OCR, knowledge, aotiien (f) Q: Who is the person in the image? GT: Michelle Yeoh Required capabilities: Recognition, knowledge (g) Q: Where is this photo taken? GT: oscar 2023 Required capabilities: Recognition, knowledge GPT-4V: The scene texts in the image are: - "MR. WILLIAM SHAKESPEARES" - âCOMEDIES, HISTORIES, & TRAGEDIES." - "Published according to the True Originall Copies." GPT4_score: 0.9 GPT-4V: The scene texts in the image are "Mario Kart 8 Deluxe", "5:30 PM", "88%", "Options", "Start", "MARIOKART 8 DELUXE", "THE LEGEND OF ZELDA", and "FORTNITE". GPT4_score: 0.7 GPT-4V: This photo was taken in Australia, during the Australian Open tennis tournament. GPT4_score: 1.0 GPT-4V: The producers in this food web are phytoplankton and seaweed. GPT4_score: 1.0 GPT-4V: The person in the image is Michelle Yeoh, a renowned actress and producer. GPT4_score: 1.0 GPT-4V: This photo was taken at the Oscars, where the woman is receiving an award. GPT4_score: 0.5
Figure 5: GPT-4V prediction examples
21
(a) Q: What occasions would someone use this meme? GT: This meme, commonly known as "Screaming Panda," is typically used to express shock, surprise, or fear. It could be used in response to a startling or unexpected event, or to convey a sense of panic or alarm. . Required capabilities: Recognition, knowledge, language generation Me: I'll do it at 8 Time: 8.05 Me: looks like | gotta wait till 9 now (b) Q: Can you explain this meme? GT: This meme is a humorous take on procrastination and the tendency to delay | tasks until a specific time. The person in | the meme plans to do something at 8 i o'clock, but when they miss that deadline by a few minutes, they decide to wait until 9 o'clock instead. The image of Kermit the Frog lying in bed represents the person's laziness and lack of motivation to complete the task. Required capabilities: Recognition, knowledge, language generation, OCR = (c) Q: The table below gives information â â 7: = about the underground railway systems in ase 1900, a9 iat six cities. Summarise the information by I = = selecting and reporting the main features, wee eer = = and make comparisons where relevant. cal EEL = = You should write at least 150 words. GT: The table shows data about the underground rail networks in six major cities. The table compares the six networks in terms of their age, size and the number of people who use them each year. It is clear that the three oldest underground systems are larger and serve significantly more passengers than the newer systems. The London underground is the oldest system, having opened in 1863. It is also the largest system, with 394 kilometres of route. The second largest system, in Paris, is only about half the size of the London underground, with 199 kilometres of route. However, it serves more people per year. While only third in terms of size, the Tokyo system is easily the most used, with 1927 million passengers per year. Of the three newer networks, the Washington DC underground is the most extensive, with 126 kilometres of route, compared to only 11 kilometres and 28 kilometres for the Kyoto and Los Angeles systems. The Los Angeles network is the newest, having opened in 2001, while the Kyoto network is the smallest and serves only 45 million passengers per year. Required capabilities: OCR, language generation, spatial awareness GPT-4V: This meme is often used to express shock, surprise, or disbelief in a humorous or exaggerated way. It can be used in response to unexpected news, events, or situations. GPT4_score: 0.6 GPT-4V: This meme humorously illustrates the tendency to procrastinate. The person plans to do something at 8 but delays it by 5 minutes and then decides to postpone the task until 9, exemplifying how easily tasks can be deferred. The image of Kermit the Frog relaxed and unbothered underscores the nonchalant attitude towards procrastination. GPT4_score: 1.0 GPT-4V: The table illustrates data on six underground railway systems worldwide, focusing on their opening dates, route lengths, and annual passenger numbers. Londonâs underground, inaugurated in 1863, is the oldest and has the most extensive network, boasting 394 kilometres of route. Despite its age and extensive network, it only caters to 775 million passengers annually, significantly less than Paris and Tokyo. Parisâs metro, opened in 1900, has a 199-kilometre route and serves a staggering 1191 million passengers yearly, the highest among the listed cities. Tokyoâs system, though younger (opened in 1927) and having a shorter 155-kilometre route, still manages to serve 1927 million passengers, showcasing the cityâs dense population and reliance on public transport. In contrast, Washington DCâs metro, opened in 1976 with 126 kilometres of route, only caters to 144 million passengers annually. Kyoto and Los Angeles have the youngest and least extensive networks, opened in 1981 and 2001 with 11 and 28 kilometres of routes, respectively. Correspondingly, they have the lowest annual passenger numbers, with Kyoto at 45 million and Los Angeles at 50 million. In summary, there is a noticeable disparity in the age, route length, and passenger numbers among these cities. While older and more extensive networks like Londonâs do not necessarily translate to higher passenger numbers, cities like Paris and Tokyo demonstrate a significant demand for underground rail systems, likely attributed to their dense populations and urban structures. GPT4_score: 0.9
= â â 7: = ase 1900, a9 iat I = = wee eer = = cal EEL = =
Figure 6: GPT-4V prediction examples
22
# References
[1] Chatgpt can now see, hear, and speak. https://openai.com/blog/ chatgpt-can-now-see-hear-and-speak, 2023.
[2] Mpt. https://github.com/mosaicml/llm-foundry#mpt, 2023.
[3] Harsh Agrawal, Karan Desai, Yufei Wang, Xinlei Chen, Rishabh Jain, Mark Johnson, Dhruv Batra, Devi Parikh, Stefan Lee, and Peter Anderson. Nocaps: Novel object captioning at scale. In Proceedings of the IEEE/CVF international conference on computer vision, pages 8948â8957, 2019.
[4] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716â23736, 2022.
[5] Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
[6] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual Question Answering. In ICCV, 2015.
[7] Junyi Ao, Rui Wang, Long Zhou, Chengyi Wang, Shuo Ren, Yu Wu, Shujie Liu, Tom Ko, Qing Li, Yu Zhang, et al. Speecht5: Unified-modal encoder-decoder pre-training for spoken language processing. arXiv preprint arXiv:2110.07205, 2021.
[8] Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo: An open-source framework for training large autoregressive vision-language models. arXiv preprint arXiv:2308.01390, 2023.
[9] Anas Awadalla, Irena Gao, Joshua Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo, March 2023.
[10] Microsoft Azure. Azure cognitive services apis. https://azure.microsoft.com/en-us/products/ ai-services/ai-vision, 2023. Accessed: 2023-06-20.
[11] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â1901, 2020.
[12] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In NeurIPS, 2020.
[13] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
[14] Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset. https://github.com/kakaobrain/coyo-dataset, 2022.
[15] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558â3568, 2021.
[16] Ting Chen, Saurabh Saxena, Lala Li, David J Fleet, and Geoffrey Hinton. Pix2seq: A language modeling framework for object detection. In ICLR, 2022.
[17] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
[18] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Learning universal image-text representations. In ECCV, 2020.
[19] Cheng-Han Chiang and Hung-yi Lee. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937, 2023.
23
[20] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
[21] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
[22] Marta R Costa-jussà , James Cross, Onur Ãelebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, et al. No language left behind: Scaling human-centered machine translation. arXiv preprint arXiv:2207.04672, 2022.
[23] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Wang, Boyang Li, Pascale Fung, and Steven Hoi. Instructblip: Towards general-purpose vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.
[24] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[25] Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Pete Florence. Palm-e: An embodied multimodal language model. In arXiv preprint arXiv:2303.03378, 2023.
[26] Yuxin Fang, Wen Wang, Binhui Xie, Quan Sun, Ledell Wu, Xinggang Wang, Tiejun Huang, Xinlong Wang, and Yue Cao. Eva: Exploring the limits of masked visual representation learning at scale. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 19358â19369, 2023.
[27] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, et al. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023.
[28] Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166, 2023.
[29] Zhe Gan, Linjie Li, Chunyuan Li, Lijuan Wang, Zicheng Liu, and Jianfeng Gao. Vision-language pre-training: Basics, recent advances, and future trends. arXiv preprint arXiv:2210.09263, 2022.
[30] Difei Gao, Lei Ji, Luowei Zhou, Kevin Qinghong Lin, Joya Chen, Zihan Fan, and Mike Zheng Shou. Assistgpt: A general multi-modal assistant that can plan, execute, inspect, and learn. arXiv preprint arXiv:2306.08640, 2023.
[31] Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022.
[32] Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui He, Xiangyu Yue, et al. Llama-adapter v2: Parameter-efficient visual instruction model. arXiv preprint arXiv:2304.15010, 2023.
[33] Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans, 2023.
[34] Google. Bard. https://bard.google.com, 2023. Accessed: 2023-07-17.
[35] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904â6913, 2017.
[36] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022.
[37] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In CVPR, 2019.
24
[38] Huggingface. Transformers agent. https://huggingface.co/docs/transformers/transformers_ agents, 2023. Accessed: 2023-07-20.
[39] Geewook Kim, Teakgyu Hong, Moonbin Yim, JeongYeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, and Seunghyun Park. Ocr-free document understanding transformer. In European Conference on Computer Vision, pages 498â517. Springer, 2022.
[40] Wonjae Kim, Bokyung Son, and Ildoo Kim. Vilt: Vision-and-language transformer without convolution or region supervision. In ICML, 2021.
[41] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 2017.
[42] Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461, 2019.
[43] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Fanyi Pu, Jingkang Yang, Chunyuan Li, and Ziwei Liu. Mimic-it: Multi-modal in-context instruction tuning. arXiv preprint arXiv:2306.05425, 2023.
[44] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023.
[45] Bohao Li, Rui Wang, Guangzhi Wang, Yuying Ge, Yixiao Ge, and Ying Shan. Seed-bench: Benchmarking multimodal llms with generative comprehension, 2023.
[46] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.
[47] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training In International Conference on Machine for unified vision-language understanding and generation. Learning, pages 12888â12900. PMLR, 2022.
[48] Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. In ECCV, 2020.
[49] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In ECCV, 2014.
[50] Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565, 2023.
[51] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023.
[52] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023.
[53] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634, 2023.
[54] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? arXiv preprint arXiv:2307.06281, 2023.
[55] Yuliang Liu, Zhang Li, Hongliang Li, Wenwen Yu, Mingxin Huang, Dezhi Peng, Mingyu Liu, Mingrui Chen, Chunyuan Li, Lianwen Jin, et al. On the hidden mystery of ocr in large multimodal models. arXiv preprint arXiv:2305.07895, 2023.
[56] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In NeurIPS, 2019.
[57] Timo Lüddecke and Alexander Ecker. Image segmentation using text and image prompts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7086â7096, 2022.
[58] Kenneth Marino, Mohammad Rastegari, Ali Farhadi, and Roozbeh Mottaghi. Ok-vqa: A visual question answering benchmark requiring external knowledge. In CVPR, pages 3195â3204, 2019.
25
[59] OpenAI. Gpt-4 technical report, 2023.
[60] OpenAI. Gpt-4v(ision) system card. 2023.
[61] OpenAI. Gpt-4v(ision) technical work and authors. 2023.
[62] Vicente Ordonez, Girish Kulkarni, and Tamara L Berg. Im2text: Describing images using 1 million captioned photographs. In NeurIPS, 2011.
[63] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730â27744, 2022.
[64] Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023.
[65] Thomas Politzer. Vision is our dominant sense. https://www.brainline.org/article/ vision-our-dominant-sense. Accessed: 2023-05-20.
[66] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021.
[67] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. In International Conference on Machine Learning, pages 28492â28518. PMLR, 2023.
[68] Christoph Schuhmann, Richard Vencu, Romain Beaumont, Robert Kaczmarczyk, Clayton Mullis, Aarush Katta, Theo Coombes, Jenia Jitsev, and Aran Komatsuzaki. Laion-400m: Open dataset of clip-filtered 400 million image-text pairs. arXiv preprint arXiv:2111.02114, 2021.
[69] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, 2018.
[70] Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023.
[71] Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In ECCV, pages 742â758, 2020.
[72] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8317â8326, 2019.
[73] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[74] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[75] Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, SM Eslami, Oriol Vinyals, and Felix Hill. Multimodal few-shot learning with frozen language models. arXiv preprint arXiv:2106.13884, 2021.
[76] Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100, 2022.
[77] Wenhai Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Yu Qiao, et al. Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. arXiv preprint arXiv:2305.11175, 2023.
[78] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2097â2106, 2017.
[79] Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. Simvlm: Simple visual language model pretraining with weak supervision. In ICLR, 2022.
26
[80] Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo. Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models. arXiv preprint arXiv:2306.09265, 2023.
[81] Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, and Jianfeng Gao. Set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v. arXiv preprint arXiv:2310.11441, 2023.
[82] Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Faisal Ahmed, Zicheng Liu, Yumao Lu, and Lijuan Wang. Unitab: Unifying text and box outputs for grounded vision-language modeling. In European Conference on Computer Vision, pages 521â539. Springer, 2022.
[83] Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Yumao Lu, Zicheng Liu, and Lijuan Wang. An empirical study of gpt-3 for few-shot knowledge-based vqa. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 3081â3089, 2022.
[84] Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. The dawn of lmms: Preliminary explorations with gpt-4v (ision). arXiv preprint arXiv:2309.17421, 2023.
[85] Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Ehsan Azarnasab, Faisal Ahmed, Zicheng Liu, Ce Liu, Michael Zeng, and Lijuan Wang. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023.
[86] Zhengyuan Yang, Yijuan Lu, Jianfeng Wang, Xi Yin, Dinei Florencio, Lijuan Wang, Cha Zhang, Lei Zhang, and Jiebo Luo. Tap: Text-aware pre-training for text-vqa and text-caption. In CVPR, pages 8751â8761, 2021.
[87] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023.
[88] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In CVPR, pages 6720â6731, 2019.
[89] Andy Zeng, Adrian Wong, Stefan Welker, Krzysztof Choromanski, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, Johnny Lee, Vincent Vanhoucke, et al. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598, 2022.
[90] Yan Zeng, Hanbo Zhang, Jiani Zheng, Jiangnan Xia, Guoqiang Wei, Yang Wei, Yuchen Zhang, and Tao Kong. What matters in training a gpt4-style language model with multimodal inputs? arXiv preprint arXiv:2307.02469, 2023.
[91] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
[92] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Judging Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. llm-as-a-judge with mt-bench and chatbot arena, 2023.
[93] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: Enhancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023.
[94] Wanrong Zhu, Jack Hessel, Anas Awadalla, Samir Yitzhak Gadre, Jesse Dodge, Alex Fang, Youngjae Yu, Ludwig Schmidt, William Yang Wang, and Yejin Choi. Multimodal c4: An open, billion-scale corpus of images interleaved with text. arXiv preprint arXiv:2304.06939, 2023.
27 | {
"id": "2302.13971"
} |
2308.02151 | Retroformer: Retrospective Large Language Agents with Policy Gradient Optimization | Recent months have seen the emergence of a powerful new trend in which large
language models (LLMs) are augmented to become autonomous language agents
capable of performing objective oriented multi-step tasks on their own, rather
than merely responding to queries from human users. Most existing language
agents, however, are not optimized using environment-specific rewards. Although
some agents enable iterative refinement through verbal feedback, they do not
reason and plan in ways that are compatible with gradient-based learning from
rewards. This paper introduces a principled framework for reinforcing large
language agents by learning a retrospective model, which automatically tunes
the language agent prompts from environment feedback through policy gradient.
Specifically, our proposed agent architecture learns from rewards across
multiple environments and tasks, for fine-tuning a pre-trained language model
which refines the language agent prompt by summarizing the root cause of prior
failed attempts and proposing action plans. Experimental results on various
tasks demonstrate that the language agents improve over time and that our
approach considerably outperforms baselines that do not properly leverage
gradients from the environment. This demonstrates that using policy gradient
optimization to improve language agents, for which we believe our work is one
of the first, seems promising and can be applied to optimize other models in
the agent architecture to enhance agent performances over time. | http://arxiv.org/pdf/2308.02151 | Weiran Yao, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Yihao Feng, Le Xue, Rithesh Murthy, Zeyuan Chen, Jianguo Zhang, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.CL, cs.AI | null | null | cs.CL | 20230804 | 20230804 | 3 2 0 2
g u A 4 ] L C . s c [
1 v 1 5 1 2 0 . 8 0 3 2 : v i X r a
# RETROFORMER: RETROSPECTIVE LARGE LANGUAGE AGENTS WITH POLICY GRADIENT OPTIMIZATION
# Weiran Yaoâ Shelby Heineckeâ Xueâ Rithesh Murthyâ Zeyuan Chenâ Juan Carlos Nieblesâ Zhiwei Liuâ Yihao Fengâ Le Jianguo Zhangâ Devansh Arpitâ Ran Xuâ Phil Muiâ Huan Wangâ ,â Caiming Xiongâ ,â Silvio Savareseâ ,â
# â Salesforce Research âCTO Office, Salesforce âCorresponding Authors: {huan.wang, cxiong, ssavarese}@salesforce.com
# ABSTRACT
Recent months have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing objective oriented multi-step tasks on their own, rather than merely responding to queries from human users. Most existing language agents, however, are not optimized using environment-specific rewards. Although some agents enable iterative refinement through verbal feedback, they do not reason and plan in ways that are compatible with gradient-based learning from rewards. This paper introduces a principled framework for reinforcing large language agents by learning a retrospective model, which automatically tunes the language agent prompts from environment feedback through policy gradient. Specifically, our proposed agent architecture learns from rewards across multiple environments and tasks, for fine-tuning a pre-trained language model which refines the lan- guage agent prompt by summarizing the root cause of prior failed attempts and proposing action plans. Experimental results on various tasks demonstrate that the language agents improve over time and that our approach considerably out- performs baselines that do not properly leverage gradients from the environment. This demonstrates that using policy gradient optimization to improve language agents, for which we believe our work is one of the first, seems promising and can be applied to optimize other models in the agent architecture to enhance agent performances over time.
# INTRODUCTION
Recently, we have seen the emergence of a powerful new trend in which large language models (LLMs) are augmented to become autonomous language agents capable of performing tasks on their own, ultimately in the service of a goal, rather than simply responding to queries from human users. Prominent studies, including ReAct (Yao et al., 2023), Toolformer (Schick et al., 2023), Hug- gingGPT (Shen et al., 2023), generative agents (Park et al., 2023), WebGPT (Nakano et al., 2021), AutoGPT (Gravitas, 2023), BabyAGI (Nakajima, 2023), and Langchain (Chase, 2023), have suc- cessfully showcased the viability of creating autonomous decision-making agents by leveraging the capabilities of LLMs. These approaches use LLMs to generate text-based outputs and actions that can be further employed for making API calls and executing operations within a given environment.
Given the immense scale of LLMs with an extensive parameter count, the behaviors of most existing language agents, however, are not optimized or aligned with environment reward functions. An exception is a very recent language agent architecture, namely Reflexion (Shinn et al., 2023), and several other related work, e.g., Self-Refine (Madaan et al., 2023b) and Generative Agent (Park et al., 2023), which use verbal feedback, namely self-reflection, to help agents learn from prior failure. These reflective agents convert binary or scalar reward from the environment into verbal feedback in the form of a textual summary, which is then added as additional context to the prompt for the language agent. The self-reflection feedback acts as a semantic signal by providing the agent
1
with a concrete direction to improve upon, helping it learn from prior mistakes and prevent repetitive errors to perform better in the next attempt.
Although the self-reflection operation enables iterative refinement, generating useful reflective feed- back from a pre-trained, frozen LLM is challenging, as showcased in Fig. 1, since it requires the LLM to have a good understanding of where the agent made mistakes in a specific environment, i.e., the credit assignment problem (Sutton & Barto, 2018), as well as the ability to generate a summary containing actionable insights for improvement. The verbal reinforcement cannot be optimal, if the frozen language model has not been properly fine-tuned to specialize in credit assignment problems for the tasks in given environments. Furthermore, the existing language agents do not reason and plan in ways that are compatible with differentiable, gradient-based learning from rewards by ex- ploiting the existing abundant reinforcement learning techniques. To address these limitations, this paper introduces Retroformer, a principled framework for reinforcing language agents by learn- ing a plug-in retrospective model, which automatically refines the language agent prompts from environment feedback through policy optimization. Specifically, our proposed agent architecture can learn from arbitrary reward information across multiple environments and tasks, for iteratively fine-tuning a pre-trained language model, which refines the language agent prompts by reflecting on failed attempts and assigning credits of actions taken by the agent on future rewards.
# 1. Task instruction
Lollipop Chainsaw featured Juliet Starling, who was voiced by a Canadian-American actress who has done voice roles for what Teen Titans spinoff series?
2. Action sequences in prior trial 4. Action sequences in next trial Action I Search[Juliet Starling] Action 1: Search[Lollipop Chainsaw] Action 2: Search[{Lollipop Chainsaw] . . oe Action 2: Search[{Tara Strong] Action 3: Search[Tara Strong] Action 3: Finish[Teen Titans, Teen Titans Go!] Action 4: Finish[Teen Titans and Teen Titans Go!] : ? : J + | add to agent prompt 3. Verbal feedback (self-reflection)
I should have searched for Lollipop Chainsaw first and looked up the Canadian-American actress who voiced Juliet Starling afterwards. I also should have looked up Tara Strong's filmography and searched for any voice roles she did specifically for Teen Titans or Teen Titans Go!
Figure 1: An example of uninformative self-reflections from a frozen LLM. The root cause of failure in prior trial is that the agent should have responded only the spinoff series âTeen Titans Goâ not âTeen Titansâ as the answer. The agent forgot its goal during a chain of thought, actions and lengthy observations. The verbal feedback from frozen LLM, however, only rephrases the prior actions sequences as the proposed new plan, resulting the same, incorrect actions in the next trial.
We conduct experiments on open-source simulation and real-world environments including Hot- PotQA (Yang et al., 2018), which consists of search-based question answering tasks to test the tool use abilities of a web agent which needs to call Wikipedia APIs in multiple steps for answering ques- tion. We observe Retroformer agents are faster learners compared with reflexion, which does not use gradient for reasoning and planning, and better decision-makers and reasoners. More con- cretely, Retroformer agents improve the success rate of search-based question-anwering tasks in HotPotQA (Yang et al., 2018) by 18% in 4 attempts, which demonstrates the effectiveness of gradient-based reasoning and planning for tool use under large state-action space.
To summarize, our contributions are the following:
⢠The paper introduces Retroformer, which iteratively refines the prompts given to large lan- guage agents based on environmental feedback to improve learning speed and task completion. We take a policy gradient approach with the Actor LLM being part of the environment, allowing learning from a wide range of reward signals for diverse tasks and environments.
⢠The proposed method focuses on enhancing the retrospective model in the language agent archi- tecture, without accessing the Actor LLM parameters or needing to propagate gradients through it. The agnostic nature of Retroformer makes it a flexible plug-in module for various types of cloud-based LLMs, such as GPT or Bard.
2
# 2 RELATED WORK
Autonomous Language Agents We summarize in Table 1 the recent language agent literature related to our work from five perspectives and differentiate our method from them. The completion of a complex task typically involves numerous stages. An agent must possess knowledge of these stages and plan accordingly. Chain-of-Thoughts or CoT (Wei et al., 2022) is the pioneering work that prompts the agent to decompose challenging reasoning tasks into smaller, more manageable steps. ReAct (Yao et al., 2023), on the other hand, proposes the exploitation of this reasoning and acting proficiency within LLM to encourage interaction with the environment (e.g. using the Wikipedia search API) by mapping observations to the generation of reasoning and action traces or API calls in natural language. This agent architecture has spawned various applications, such as HuggingGPT (Shen et al., 2023), generative agents (Park et al., 2023), WebGPT (Nakano et al., 2021), AutoGPT (Gravitas, 2023), BabyAGI (Nakajima, 2023), and Langchain (Chase, 2023).
Table 1: Related work on large language agents.
Approach CoT (Wei et al., 2022) ReAct (Yao et al., 2023) Self-refine (Madaan et al., 2023b) RAP (Hao et al., 2023) Reflexion (Shinn et al., 2023) Retroformer (our method) Gradient Arbitrary learning â â â â â â reward â â â â â â Iterative refinement â â â â â â Hidden Decision Memory constraints making â â â â â â â â â â â â â â â â â â
However, these approaches fail to learn from valuable feedback, such as environment rewards, to en- hance the agentâs behaviors, resulting in performances that are solely dependent on the quality of the pre-trained LLM. Self-refine (Madaan et al., 2023a) addresses this limitation by employing a single LLM as a generator, refiner, and provider of feedback, allowing for iterative refinement of outputs. However, it is not specifically tailored for real-world task-based interaction with the environment. On the other hand, RAP (Hao et al., 2023) repurposes the LLM to function as both a world model and a reasoning agent. It incorporates Monte Carlo Tree Search for strategic exploration within the extensive realm of reasoning with environment rewards. This approach enables effective naviga- tion and decision-making in complex domains. Recently, Shinn et al. (2023) presents Reflexion, a framework that equips agents with dynamic memory and self-reflection capabilities, enhancing their reasoning skills. Self-reflection plays a pivotal role, allowing autonomous agents to iteratively refine past actions, make improvements, and prevent repetitive errors.
Transformer Reinforcement Learning Reinforcement learning with a provided reward function or a reward-labeled dataset, commonly referred to as RLHF, has become a standard practice within the LLM fine-tuning pipeline. These endeavors have convincingly demonstrated the efficacy of RL as a means to guide language models towards desired behaviors that align with predefined reward functions encompassing various domains, including machine translation, summarization, and gen- erating favorable reviews. Among the prevalent transformer RL methods are online RL algorithms such as Proximal Policy Optimization or PPO (Schulman et al., 2017), and offline RL techniques such as Implicit Language Q-Learning or ILQL (Snell et al., 2022) and Direct Preference Optimiza- tion or DPO (Rafailov et al., 2023). These methods have been implemented in TRL/TRLX (von Werra et al., 2020) distributed training framework.
# 3 CHALLENGES
Although LLMs are not designed to handle tool use or take actions, it has been observed (Gravitas, 2023; Nakajima, 2023; Chase, 2023) that empirically for text-rich environment, especially when the actions and states are accurately described using natural languages, LLMs work surprisingly well. However there are still plenty of challenges applying LLM-based agents. Here we list several below.
Spurious Actions LLMs are not pre-trained or designed with an action-agent application in mind. Even some restrictions are explicitly specified in the prompt, the LLM model may still generate spurious actions that are not in the action space A.
3
Limited Prompt Length LLM itself is stateless. However, in applications it is preferred to em- power agents with states or memories for better performance. It has been observed that LLM based agents are easy to run into infinite loops if the states are not handled nicely. Many LLM agents concatenate all the previous state descriptions and actions into the prompt so that LLM as a way to bestow âstateâ to the LLM. Inevitably this methodology runs into the prompt length issues. As the trajectory grows longer, the prompt runs out of spaces.
Heuristic Prompt Engineering Even though a lot of paradigms have been proposed to improve LLM agentsâ performance (Yao et al., 2023; Ahn et al., 2022), there is a lack of systematic method- ologies for consistent model refinement. In fact, manual prompt tuning is still widely used in a lot of the application scenarios.
Prohibitive Training Most of the well-performing LLMs are too large to be fit in just one or two GPUs. It is technically challenging to optimize the LLMs directly as is done in the the classical reinforcement learning setting. In particular, OpenAI has not provided any solution for RL based finetuning. Most of the issues are caused by the fact that LLMs are not pre-trained or designed with an action-agent application in mind.
4 INTUITION
Compared to the LLM-based action agents, classical RL agents, though not able to handle text-based environments as nicely in the zero shot setting, are able to keep improving based on the feedback and rewards provided by the environment. Popular RL algorithms include Policy Gradient (Sutton et al., 2000), Proximal Policy Optimization Algorithm (PPO) (Schulman et al., 2017), Trust Region Policy Optimization (TRPO) (Schulman et al., 2015), and Advantage Actor Critic methods (Mnih et al., 2016).
In this draft we are proposing a simple but powerful novel framework to tackle the challenges men- tioned above. On one hand, we would like to leverage the classical RL based optimization algorithms such as policy gradient to improve the model performance. On the other hand, our framework avoids finetuning on the LLM directly. The key is, instead of training the LLM directly, we train a retro- spective LM. The retrospective LM takes usersâ prompt, rewards and feedback from the environment as input. Its output will be prompt for the actual LLM to be consumed. RL algorithms are employed to optimize the weights in the retrospective LM model instead of directly on the LLM. In our frame- work the weights in the actual LLM is assumed to be fixed (untrainable), which aligns well with the application scenario when the LLM is either too large to tune or prohibited from any tuning.
Another perspective viewing our framework is, we train a retrospective LM to apply automatic prompt tuning for the LLM agents. In this case, the RL algorithms such as policy gradients are employed to optimize the prompts. Ideally the retrospective LM can help summarize the past âex- perienceâ, the usersâ prompt, the environmentsâ feedback into a condensed text with length limit so that it is easier for the LLM to digest. To some extent, in our setting the original LLM can be considered as part of the environment since its parameters are all fixed.
# 5 NOTATION AND FORMULATION
We denote a large language model (LLM) based agent as a function Lξl : M â A, where M is the space of prompts, which may include the actual prompts mu provided by the users, as well as some contextual information c â C. Here C is the space of context as a representation of the current state S returned by the environment â¦. A is the space of actions. Note the actions taken by most language model based agents are sampled auto-repressively, so L is a random function. The subscript ξl denotes the re-parameterized random variables involved in the sampling process. Another note is, the LLM-based agent itself is stateless. All the states and possible memorization are characterized as text in the prompt.
The environment is defined as a tuple (Tξo, R). Tξo : S à A â S is the state transition function, where S is the space of states and A is the action space. Here we assume the states and actions are represented using text. Again we used ξo to represent the randomness involved in the state transition.
4
For each state s ⬠S, a reward function is defined as R : S + R. At each step of the play, the state s is described using natural language, and integrated into the context c. In the context, previous states may also be described and embedded to help LLMs making a good guess on the next action to take. As in all the reinforcement learning setting, the final goal is to maximize the cumulative rewards, or returns Geum = an R(s;). In many situations, the rewards are sparse, i.e., R(s,) are mostly zero except very few states.
The retrospective model takes the all the previous states s1,··· ,t, actions a1,··· ,t, rewards r1,··· ,t, and the user prompt mu as input, and massage those information into a new prompt m to be consumed by the LLM agent:
i ]t Îξr,Î : [Si, Ai, Ri, Mu i=1 â M, (1)
where ξr stands for the randomness involved in the retrospective model, and Πis the set of learnable parameters in the retrospective model. The goal of the RL optimization is
# T
T arg max Ee, ¢.,â¬, » a) s.t. St41 = Te, (se. Le ols,.e [si,@i,7i, mi']i-1)) , we{l,---,T-1} (2)
Note that the only learnable parameters are in the retrospective model Mr. Since LLM agent model is fixed, it can be considered as a component of the environment. Specifically, if we construct another environment with the transition function T â² = T (S, â¢) ⦠L : S à M â S, and the same reward function R, then the objective in Eq. (2) is just a regular RL optimization so all the popular RL algorithms apply.
# 6 OUR APPROACH: REINFORCING RETROSPECTIVE LANGUAGE AGENT
As illustrated in Fig. 2, our proposed framework Retroformer is comprised of two language model components: an actor LLM, denoted as Ma, which generates reasoning thoughts and actions, and a retrospective LLM, denoted as Mr, which generates verbal reinforcement cues to assist the actor in self-improvement by refining the actor prompt with reflection responses.
Episode Reflection Retrospective Reflectionâ) = Retums ~ prompt LM Reflection ED Trial i Trial +1 i â ââ,, Env1 SEDs __,.Reneehen vespenseailâ» Ev ul Leni Trajectory _, prompt â» Actor LM Env 2Retums __," pefigeHonresponse 2» EnV 2Retums (Sy, Gy, My eee, St) | Gy) Gait 2 Env K Returns Env k Returns 42 i >» ReflectionresponseK > (AyrSie1) Environment 1 Ne Action a, Gui Gries âââââ~__ Environment 2 t . . os . Rating for reflection response k Environment K T= AGKi= Gxin1 â Gki (a) Retrospective agent (b) Ratings for reflection responses
Figure 2: Framework overview.
We assume in this paper that the actor model is a frozen LLM whose model parameters are in- accessable (e.g., GPT, Bard) and the retrospective model is a smaller, local language model that can be fine-tuned under low-resource settings. In addition, Retroformer has an iterative pol- icy gradient optimization step which is specifically designed to reinforce the reflection model with gradient-based approach. We provide in this section a detailed description of each of these modules and subsequently elucidate their collaborative functioning within the Retroformer framework. The implementation details are presented in Section 7.2.
5
6.1 RETROSPECTIVE AGENT ARCHITECTURE
As illustrated in Fig. 2(a), for the base actor model and retrospective model, we apply a standard communication protocol modified from the Relexion agent architecture (Shinn et al., 2023), in which the retrospective model refines the actor prompt by appending verbal feedback to the prompt.
Actor Model The actor model is a LLM hosted in the cloud, whose model parameters are hidden and frozen all the time. The actor LM is instructed to generate actions with required textual content, taking into account the observed states. Similar to reinforcement learning, we select an action or generation, denoted as at, from the current policy Ïθ at time step t and receive an observation, represented by st, from the environment. We use ReAct (Yao et al., 2023) as our actor prompt.
ari. = Ma ([sk,i75 Qk,i.rs rkitloas Sk,i,t) : (3)
Retrospective Model The retrospective model Mr is instantiated as a local LM. Its primary func- tion is to produce self-reflections, offering valuable feedback for diagnosing a possible reason for prior failure and devising a new, concise, high-level plan that aims to mitigate same failure. Operat- ing under a sparse reward signal, such as binary success status (success/failure), the model detects the root cause of failure by considering the current trajectory alongside its persistent memory.
yk,i = Mr([sk,i,Ï , ak,i,Ï , rk,i,Ï ]T (4)
# T leat» @k,i)-
# a Reflection prompt «ry.
This self-reflection feedback yk,i is appended to the actor prompt to prevent repetitive errors in a specific environment in future attempts. Consider a multi-step task, wherein the agent failed in the prior trial. In such a scenario, the retrospective model can detect that a particular action, denoted as at, led to subsequent erroneous actions and final failure. In future trials, the actor LM can use these self-reflections, which are appended to the prompt, to adapt its reasoning and action steps at time t, opting for the alternative action aâ² t. This iterative process empowers the agent to exploit past experiences within a specific environment and task, thereby avoiding repetitive errors.
Memory Module The actor model generates thoughts and actions, by conditioning on its recent interactions (short-term memory) and reflection responses (long-term memory).
⢠Short-term memory. The trajectory history Ïi of the current episode i serves as the short-term memory for decision making and reasoning.
⢠Long-term memory. The reflection responses that summarize prior failed attempts are appended to the actor prompt as the long-term memory.
To facilitate policy optimization in Section 6.2, we store the prompts and responses of the retrospec- tive model of each trial, as instruction-response pairs, together with the episode returns in a local dataset, which we call replay buffer. We sample from the replay buffer to reinforce the retrospective model at the end of each episode. The long and short-term memory components provide context that is specific to a given task over several failed trials and the replay buffer provides demonstrations of successful reflections across the tasks and environments, so that our Retroformer agent not only exploits lessons learned over failed trials in this task, but also explores by learning from successful reflections in other tasks.
⢠Replay buffer. The memory DRL which stores the triplets (xk,i, yk,i, Gk,i) of the reflection prompt xk,i, reflection response yk,i and episode return Gk,i of trial i and task k.
Reward Shaping We apply reward shaping to the binary rewards for obtaining more information. For question answering tasks, instead of exactly matching the answer, we use f1 score grading to evaluate the alignment of the generated output with the expected answer as the reward function.
6.2 POLICY GRADIENT OPTIMIZATION
The actor model Ma is regarded as an frozen LLM, such as GPT, with inaccessible model parame- ters. In this scenario, the most direct approach to enhancing actor performance in a given environ- ment is by refining the actor LMâs prompt. Consequently, the retrospective model Mr, a smaller
6
local language model, paraphrases the actorâs prompt by incorporating a concise summary of errors and valuable insights from failed attempts. We therefore aim to optimize the Mr model using en- vironment reward. The desired behavior of Mr is to improve the actor model Ma in next attempts. Hence, the difference in episode returns between two consecutive trials naturally serve as a reward signal for iteratively fine-tuning the retrospective model Mr, acting as a surrogate for reinforcement learning aided by gradient-based signals. The specifics of this approach are outlined as follows.
Reflection prompt x PPO Reflection response y Ratings trainer (ios aM aac Lone ue GLMCA, | gotstuckin loop where kept trying to search forthe English ee ie = in peer sat alms to mitigate the same failure. \ actor who appeared in both Pennies From Heaven and Kenneth, Gaara Retr fi y Williams: Fantabulosal, but the search term was too general.| r= 0.92 Reflection: etrospective {should have broken it down by searching for the English actor w LM | who appeared in both TV series. Input: trajectory 1._ ââââââ | : J |, I directly looked for the next team he coached after WSU. Previous _â âtrial: Question: What is the capital of France? Thought 1:Ineedto » r=-0.31 Input: trajectory K_§ ââââ search âFranceâ and look for the capital. Action 1:
Figure 3: Policy gradient optimization of retrospective LM using RLHF training pipeline.
Instruction and Response Generation The retrospective model generates a pair of instruction and response at the end of each trial i in the environment k. In the trial i, the actor produces a trajectory Ïi by interacting with the environment. The evaluator then produces a score ri which is computed as rt = Me(Ïi). After the trial, to produce verbal feedback for refining the actor prompt, Mr takes the set of {Ïi, ri} as the instruction xk,i and is prompted to produce a reflection response yk,i. All these instruction-response pairs (xk,i, yk,i) across tasks and trials are stored to a local dataset DRL, which we call âreplay bufferâ, for fine-tuning the Mr at the end of each episode.
Reflection Response Rating As illustrated in Fig. 2(b), let us assume a reflection prompt xk,i and the corresponding episode return Gk,i, and the retrospective model Mr generates the response yk,i that summarizes the mistakes in i, which results in the return Rk,i+1 in the next attempt i + 1. Because the actor is a frozen LM and the temperature is set to zero as default (Yao et al., 2023), i.e., T = 0, the injected randomness that leads to differences in returns âGk,i = Gk,i+1 â Gk,i are from the reflection responses yk,i, in which positive âGk,i indicates better responses that help the actor learn from prior errors, and hence should be rated with higher scores; negative or zero âGk,i indicates worse responses that needs to be avoided and hence should be rated with lower scores. Therefore, we define the rating score of a reflection instruction-response pair (xk,i, yk,i) as:
r(xk,i, yk,i) â Gk,i+1 â Gk,i. (5)
Proximal Policy Optimization The optimization step of Retroformer is formalized as an it- erative process in Fig. 3. We use the differences of episode returns as the ratings of the generated reflection responses. The retrospective language model is fine-tuned at the end of each episode following the standard RLHF training procedures (Ouyang et al., 2022) with proximal policy opti- mization:
LPPO = Exâ¼DRL E yâ¼LLMRL Ï (x) r(x, y) â β log LLMRL Ï (y|x) LLMRef(y|x) , (6)
where (x, y) are sampled from the replay buffer, r(x, y) is the defined reward model, and the second term in this objective is the KL divergence to make sure that the fine-tuned model LLMRL does not stray too far from the frozen reference model LLMRef.
The actor and retrospective model, and the policy gradient optimization module work together through trials in a loop until the environment deems Ït to be correct. In practice, we apply of- fline RL methods instead of online optimization. We collected the dataset DRL by rolling out a base policy, i.e., the frozen actor LM and the initialized retrospective LM, in the tasks in the training sets for N trials and compute the ratings. We apply the standard RLHF pipeline to fine-tune the retro- spective model offline before evaluating the agent in the validation tasks. In online execution, we use best-of-n sampler, with the scores evaluated by the learned reward model from RLHF pipeline, as an alternative method of generating better retrospective responses in each trial.
7
# 7 EXPERIMENTS
Extensive experiments are conducted to evaluate our method, including comparisons with ReAct and Reflexion performances, and visualization and discussion of agentâs generated text and actions.
7.1 ENVIRONMENT AND DATASET
We use open-source HotPotQA (Yang et al., 2018) environment, which consists of search-based question answering tasks to evaluate the agentâs tool usage abilities under large state-action space.
HotPotQA The agent is asked to solve a question answering task by searching in Wikipedia pages. At each time step, the agent is asked to choose from three action types or API calls:
1. SEARCH[ENTITY], which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search.
2. LOOKUP[KEYWORD], which returns the next sentence containing keyword in the last passage successfully found by Search.
3. FINISH[ANSWER], which returns the answer and finishes the task.
Dataset We collected 3,383 reflection samples by running the base rollout policy for 3 trials (N = 3) for 3,000 tasks in the training set, in which 1,084 instruction-response pairs have positive ratings.
IMPLEMENTATION DETAILS
Model We use GPT-3 (model: text-davinci-003) as the frozen actor model. For the retrospective model, we instantiate it from LongChat (model: longchat-7b-16k), which is a LM with 16k context length by fine-tuning llama-7b on instruction-following samples from ShareGPT. In all experiments, we set the temperature of actor LM as zero, i.e., T=0 and top p =1 to isolate the randomness of LM from the effects of reflections. We acknowledge that setting a higher temperature value can encourage exploration but it can obscure the impact of the proposed approaches, making it difficult to compare against existing baselines with T=0 (Yao et al., 2023; Shinn et al., 2023).
Setup Our proposed learning framework is developed by using multiple open-source tools as fol- lows. We use the OpenAI connectors from langchain to build our actor models Ma. During in- ference of the retrospective model, we host an API server using FastChat and integrates it with langchain agents. The tool can host longchat-7b-16k with concurrent requests to speed up RL pol- icy rollouts. For fine-tuning the retrospective model, we develop our training pipeline with trl, which supports transformer reinforcement learning with PPO trainer.
We present the details of the specific prompts we used and the full agent demonstrations and exam- ples for each environment in Appendix A.
Training We fine-tune the retrospective model Mr with 4-bit quantized LoRA adapters (r=1) on the offline RL datasets with epochs=4; batch size=8; lr=1.4e-5. The number of trainable parameters is 0.53M (0.015% of llama-7b). We first run supervised fine-tuning trainer on the samples with positive ratings for 2 epochs and then the RLHF pipeline, including reward modeling, and RL fine- tuning with PPO, on the whole offline rating dataset using the default settings for llama-7b model.
Evaluation Metrics We report the success rate and the average episode returns over validation tasks in an environment. The agent is evaluated on 100 validation tasks from the distractor dev split of open-source HotPotQA dataset.
Baselines We experiment with two language agent baselines.
⢠ReAct (Yao et al., 2023). This is the language agent architecture which AutoGPT (Gravitas, 2023) is built upon. At the same time, this agent does not learn from the environment rewards at all and can serve as a baseline for showing how the actor model performs without exploration and exploitation in the environment.
8
⢠Reflexion (Shinn et al., 2023). This is the state-of-the-art language agent architecture that the au- thors identify from literature so far. This agent enhances from verbal feedback of the environment, and does not use gradient signals explicitly. It can serve as a baseline for showing the effectiveness of gradient-based learning for reasoning and planning.
7.3 COMPARISONS WITH BASELINES
We present the performances of our Retroformer in Table 2 and compare it against the baselines in Fig. 4. As shown in the table, we observe that the generated reflections consistently improve the agent performances over trials and the effects of reflection sampling are significant in the first few trials. This is expected as exploration in the early trials benefits exploration of the state action space.
Table 2: Retroformer success rate (succ%) and average reward in N trials and best-of-n shots reflection sampling scored by the learned reward model in HotPotQA environment. The standard deviation is calculated for the average rewards over 100 tasks and n shots.
Environment Settings 1 shot 2 shots 4 shots HotPotQA N=0 (initial trial) âââââââ- 34%/0.43 ± 0.46 âââââââ- 39%/0.45 ± 0.47 42%/0.48 ± 0.46 45%/0.52 ± 0.47 N=1 42%/0.48 ± 0.46 45%/0.52 ± 0.47 48%/0.54 ± 0.47 N=2 50%/0.55 ± 0.48 52%/0.58 ± 0.46 53%/0.60 ± 0.46 N=4
Furthermore, as shown in Fig. 4, our agent outperforms the two strong base- lines. the results indicate that our reinforced model provides the language agents with better reflection re- sponses in early trials, which enables the agents to learn faster, while also achiev- ing better performances in the end. Our Retroformer agent achieves 53% suc- cess rate in 5 trials, which is better than the state-of-the-art 50% success rate reported in (Jang, 2023) that uses a much larger language model, i.e., GPT-3 (model: text- davinci-003) as the retrospective compo- nent. The results show the effectiveness of our policy gradient approach for fine- tuning the agent with offline samples.
# HotPotQA (100 distractor tasks)
âââ a a oo 05 F008 "episede iD 5 30 38 40 w ce) = & S 6 âe- Retroformer+4-shot sampling âe Reflexion ==- ReAct Success rate (%) w a
Figure 4: We compare Retroformer with base- lines and observe consistent and faster performance im- provement in terms of success rate.
7.4 DISCUSSION OF THE REINFORCED AGENT BEHAVIORS
We examine how the retrospective model Mr is improved with policy optimization by comparing the generated responses from the frozen retrospective LM and the ones from the fine-tuned, reinforced LM. We observe two general emerging behaviors that the fine-tuned LM learns to follow through trial and error. These two reinforced behaviors help the agents outperform the baselines in Fig. 4.
Precise Credit Assignment for Failed Trials. As shown in Fig. 5, we gave an example that il- lustrates why the reinforced responses better analyze the mistakes made in the prior failed trial and enable the agent to prevent from repetitive errors in the next attempt. The agent failed in the last attempt mainly because it submits both âTeen Tiansâ and âTeen Titans Goâ as the answer to the Teen Titans spin-off series, while the correct answer should only include âTeen Titans Goâ; The agent forgot its original goal during a chain of thought, actions and lengthy observations. While the reflection response from the frozen model seems to have a lot âshould haveâ terms and is very fluent in language, it simply reiterates the prior action sequences that lead to failure as the steps that the agent should have done, which prompts the agent to follow these steps again in the next attempt, resulting in an infinite loop.
9
Question Prior failed trial Reflection (Green: Retroformer; Brown: Frozen LM) Thought 1: I need to search Juliet Starling and find the Canadian-American actress who voiced her, then find the Teen Titans spinoff series she has done voice roles for. Action 1: Search{Juliet Starling] Observation 1: Could not find [Juliet Starling] 1 failed to find the Teen Titans spinoff series that Tara Strong has done voice roles for. Instead, I found Lollipop Chainsaw, a video game that Juliet Starling appeared in, I should have searched Tara Strong and Teen Titans or Teen Titans Go! specifically in my previous action to find the correct answer. : Juliet Starling is a character from Lollipop so I can search Lollipop Chainsaw. Lollipop Chainsaw featured Juliet Action 2: Search{Lollipop Chainsaw] Starling, who was voiced bya Observation 2: Lollipop Chainsaw is a Canadian-American actress who has done voice roles for what Teen Thought 3: Juliet Starling is voiced by Tara Strong, so I Titans spinoff series? need to search Tara Strong and find the Teen Titans spinoff series she has done voice roles for. Action 3: Search{Tara Strong] Observation 3: Tara Lyn Strong. My thought process was strong but my execution was flawed. I should have searched for Lollipop Chainsaw first and looked up the Canadian-American actress who voiced Juliet Starling afterwards. I also should have looked up Tara Strong's filmography and searched for any voice roles she did specifically for Teen Titans or Teen Titans Go!.Next trial: Question: The novel was adapted into a film by a Thought 4: Tara Strong has voiced characters in Teen girector known for his work on another iconic 1980s franchise. What Titans and Teen Titans Gol, so the answer is Teen Titans 9, jg this film based on? Thought |: and Teen Titans Gol. Action 4: Finish[Teen Titans and Teen Titans Go!]
Figure 5: Response refinement from the reinforced retrospective model. Note that the lengthy ob- servation step in the prior failed trial column is abbreviated for better presentation purposes.
On the contrary, our reinforced response prompts the agent to focus on spinoff series in the very beginning of the response and asks the agent to find the answer in the previous actions and obser- vations with special attention on the search results of âTara Strongâ. In summary, our proposed reinforcement learning approach presents better credit assignment and root cause analysis abilities, and has the power to recommend real actionable insights for the next attempt.
Frozen model response Reinforced model response I think the issue might have been with the search. I was not able to find the name "Miller" in the paragraph, which is why I was not able to find James Henry Miller's wife. I should have searched for "Miller's wife" instead of "James Henry Miller's wife" to avoid this confusion. I got stuck in a loop where I kept searching for the same person, Henry Miller, instead of Beatrice Sylvas Wickens. It's possible that I may have misheard or misunderstood the question. To avoid this, I should have written down the question and made sure I understood it correctly before beginning the search. Additionally, I should have tried searching for "James Henry Miller's wife" instead of just "James Henry Miller". New Plan: To avoid confusion, I will search for "Miller's wife" instead of "James Henry Miller's wifeâ and directly look for the nationality of the wife.
Figure 6: Response refinement from the reinforced retrospective model with structured format.
Structured Reflections and Action Plans. We discover one emergent behavior of the reinforced model that it can automatically paraphrase the original responses into two separate structured sec- tions, namely Reflection section and New plan: section, although not being explicitly trained or prompted for. One such example is shown in Fig. 6. The paraphrased response retrospects in the first paragraph and provides actionable insights next, while the response from the frozen LM in- terleaved both parts in one paragraph, making it hard to comprehend. We can also observer from Fig. 5 that the reinforced response removes the messy, irrelevant âNext trial:â content in the end for cleaner format, which may very likely result from LLM hallucination.
# 8 CONCLUSION
In this study, we present Retroformer, an elegant framework for iteratively improving large language agents by learning a plug-in retrospective model. This model, through the process of policy optimization, automatically refines the prompts provided to the language agent with environmental feedback. Through extensive evaluations on real-world datasets such as HotPotQA, the method has been proven to effectively improve the performances of large language agents over time both in terms of learning speed and final task completion performances.
By considering the Actor LLM as a component of the environment, our policy gradient approach allows learning from arbitrary reward signals from diverse environments and tasks. This facilitates the iterative refinement of a specific component within the language agent architecture â the retro- spective model, in our case, while circumventing the need to access the Actor LLM parameters or propagate gradients through it. This agnostic characteristic renders Retroformer a concise and adaptable plug-in module for different types of cloud-hosted LLMs, such as GPT and Bard. Fur- thermore, our approach is not limited to enhancing the retrospective model alone; it can be applied
10
to reinforce other components within the agent architecture, such as the memory and summarization module, or the actor prompt. By selectively focusing on the component to be fine-tuned while keep- ing the remainder fixed, our proposed policy gradient approach allows for iterative improvements of the component with reward signals obtained from the environment.
# REFERENCES
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022.
# Harrison Chase. Langchain. https://github.com/hwchase17/langchain, 2023.
Significant Gravitas. Auto-GPT, 2023. Autogpt. https://github.com/Significant-Gravitas/
Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992, 2023.
Eric Jang. Can llms critique and iterate on their own outputs? evjang.com, Mar 2023. URL https://evjang.com/2023/03/26/self-reflection.html.
Aman Madaan, Alexander Shypula, Uri Alon, Milad Hashemi, Parthasarathy Ranganathan, Yiming Yang, Graham Neubig, and Amir Yazdanbakhsh. Learning performance-improving code edits. arXiv preprint arXiv:2302.07867, 2023a.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023b.
Volodymyr Mnih, Adri`a Puigdom`enech Badia, Mehdi Mirza, Alex Graves, Timothy P. Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. CoRR, abs/1602.01783, 2016.
# Yohei Nakajima. Babyagi. https://github.com/yoheinakajima/babyagi, 2023.
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christo- pher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730â27744, 2022.
Joon Sung Park, Joseph C OâBrien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023.
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290, 2023.
Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, and Pieter Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. CoRR, abs/1707.06347, 2017.
11
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. arXiv preprint Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv:2303.17580, 2023.
Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023.
Charlie Snell, Ilya Kostrikov, Yi Su, Mengjiao Yang, and Sergey Levine. Offline rl for natural language generation with implicit language q learning. arXiv preprint arXiv:2206.11871, 2022.
R. S. Sutton, D. Mcallester, S. Singh, and Y. Mansour. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems 12, volume 12, pp. 1057â1063. MIT Press, 2000.
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018. URL http://incompleteideas.net/book/the-book-2nd. html.
Leandro von Werra, Younes Belkada, Lewis Tunstall, Edward Beeching, Tristan Thrush, and Nathan Lambert. Trl: Transformer reinforcement learning. https://github.com/lvwerra/trl, 2020.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question In Conference on Empirical Methods in Natural Language Processing (EMNLP), answering. 2018.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023.
12
Appendix for
# âRetroformer: Retrospective Large Language Agents with Policy Gradi- ent Optimizationâ
A FULL EXAMPLES
A.1 ACTOR PROMPT ENGINEERING
An example of the HotPotQA actor language model prompt is shown below.
Solve a question answering task with interleaving Thought, Action, Observation steps. Thought can reason about the current situation, and Action can be three types: (1) Search[entity], which searches the exact entity on Wikipedia and returns the first paragraph if it exists. If not, it will return some similar entities to search. (2) Lookup[keyword], which returns the next sentence containing the keyword in the last passage successfully found by Search. (3) Finish[answer], which returns the answer and finishes the task You may take as many steps as necessary. You have attempted to answer the following question before and failed. The following reflection(s) give a plan to avoid failing to answer the question in the same way you did previously. Use them to improve your strategy of correctly answering the given question. Reflections: - I should have searched for the title of the book instead of the author. I also didn't check for the proverb which was not the question. - I should have searched for "It Takes a Family by Rick Santorum" instead of just "It Takes a Family" to get the correct answer. - My search was not concise and I did not specify what I was looking for. I also did not check similar results. In this case, I should have searched for 'It Takes a Family by Rick Santorum' to find the correct answer. Question: It Takes a Family is a response to this 1996 book that was published by who Thought 1: I need to search for the title of the book, "It Takes a Family", and find who published it. Action 1: Search["It Takes a Family"] Observation 1: Richard John Santorum ( san-TOR-om; born May 10, 1958) is an American politician, attorney, author, and political commentator who represented Pennsylvania in the United States Senate from 1995 to 2007. He was the Senate's third-ranking Republican during the final six years of his tenure. He also ran unsuccessfully for President of the United States in the 2012 Republican primaries, finishing second to Mitt Romney.Santorum was elected to the United States Senate from Pennsylvania in 1994, He served two terms until losing his 2006 reelection bid. A Roman Catholic, Santorum is a social conservative who opposes abortion and same-sex marriage and embraced a cultural warrior image during his Senate tenure. While serving as a senator, Santorum authored the Santorum Amendment, which would have promoted the teaching of intelligent design. He was a leading sponsor of the 2003 federal law known as the Partial-Birth Abortion Ban Act.In the years following his departure from the Senate, Santorum has worked as a consultant, private practice lawyer, and news contributor. He ran for the Republican nomination in the 2012 U.S presidential election. Before suspending his campaign on April 10, 2012, Santorum exceeded expectations by winning 11 primaries and caucuses and receiving nearly four million votes, making him the runner-up to eventual nominee Mitt Romney. Santorum ran for president again in 2016, but ended his campaign in February 2016 after a poor showing in the Iowa caucuses. In January 2017 he became a CNN senior political commentator. However, he was terminated from his contract with CNN in May 2021 due to comments he made about Native Americans a few weeks prior which were deemed "dismissive". Thought 2: The paragraph mentions Rick Santorum, so I need to search "It Takes a Family by Rick Santorum". Action 2: Search["It Takes a Family by Rick Santorum") Observation 2: It Takes a Family is a 2005 book by then Pennsylvania Senator Rick Santorum. The title is a response to the 1996 book It Takes a Village by then-First Lady Hillary Clinton. In the book, Santorum states that the family structure is necessary. He argues that liberal social policies have devastated the family structure and that prevailing socially liberal attitudes have systematically devalued families across the board. He argues that the government should take a proactive role in promoting strong families. Thought 3: It Takes a Family was published by Rick Santorum, so the answer is Rick Santorum. Action 3: Finish[Rick Santorum] Observation 3: Answer is INCORRECT
13
A.2 RETROSPECTIVE PROMPT ENGINEERING
# An example of the HotPotQA retrospective instruction prompt is shown below.
You are an advanced reasoning agent that can improve based on self reflection. You will be given a previous reasoning trial in which you were given access to an Docstore API environment and a question to answer. You were unsuccessful in answering the question either because you guessed the wrong answer with Finish[<answer>], or you used up your set number of reasoning steps. In a few sentences, Diagnose a possible reason for failure and devise a new, concise, high level plan that aims to mitigate the same failure. Use complete sentences.
Here are some examples.
# [Few-Shot Demonstrations]
# Previous trial:
Question: The 2016 Washington State Cougars were led by the coach who previously helmed which other team?
Thought: I need to search Washington State Cougars and find the coach of the 2016 team, then find which other team the coach previously helmed. Action: Search[Washington State Cougars]
Observation: The Washington State Cougars (known informally as the Cougs) are the athletic teams that represent Washington State University. Located in Pullman, Washington, WSU is a member of the Pac-12 Conference in NCAA Division I. The athletic program comprises ten women's sports and six men's intercollegiate sports, and also offers various intramural sports.
Thought: The paragraph does not mention the coach of the 2016 team. I need to look up "coach" to find the coach of the 2016 team.
# Action:
# Lookup[coach]
Observation: (Result 2/5) During the 2001-03 seasons, the Cougar football teams were distinguished by three ten-win seasons, three top ten poll rankings, and appearances in the Sun, Rose, and Holiday Bowls. The Cougars shared the Pac-10 title in 2002. Alumnus Paul Wulff, WSU's 3lst head coach, was fired in late November 2011, after compiling a 9-40 (.184) record in four seasons. Mike Leach coached the team from 2012 to 2019, guiding them to six bowl games. Leach left to coach Mississippi State in 2020 and was replaced by Hawaii head coach Nick Rolovich. Rolovitch was fired in October 2021 for not complying with the state's COVID-19 vaccine mandate. Defensive coordinator Jake Dickert was named the interim head coach.
Thought: The 2016 Washington State Cougars were led by Mike Leach. I need to search Mike Leach and find which other team he previously helmed.
# Action:
# Search[Mike
# Leach]
Observation: Michael Charles Leach (March 9, 1961 - December 12, 2022) was an American college football coach who primarily coached at the NCAA Division I FBS level. He was a two-time national coach of the year, three-time conference coach of the year and the mastermind behind the NCAA record-setting air raid offense. He was the head coach at Texas Tech University from 2000 to 2009, where he became the winningest coach in school history. After Texas Tech, he coached at Washington State University from 2012 to 2019, where he recorded the third-most wins of any coach in school history. He then coached at Mississippi State from 2020 until his death in 2022.Leach was known for directing offenses using lots of passing to several receivers, in a spread system known as the air raid, which Leach developed with Hal Mumme when Mumme was head coach and Leach was offensive coordinator at Iowa Wesleyan, Valdosta State, and Kentucky in the 1990s. Leach's offenses with Mumme, and later as a head coach himself, have broken numerous school and NCAA records. On eighteen occasions, Leach led his unranked team to victory over a team ranked in the AP poll, which is the most in the AP era.
Thought: Mike Leach previously helmed Texas Tech University and Kentucky. So the answer is Texas Tech University and Kentucky.
# Action:
# Finish[Texas
# Tech
# University
# and
# Kentucky]
# Observation:
# Answer
is
# INCORRECT
Reflection:
14 | {
"id": "2303.17580"
} |
2308.01552 | InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent | This research paper delves into the integration of OpenAI's ChatGPT into
embodied agent systems, evaluating its influence on interactive decision-making
benchmark. Drawing a parallel to the concept of people assuming roles according
to their unique strengths, we introduce InterAct. In this approach, we feed
ChatGPT with varied prompts, assigning it a numerous roles like a checker and a
sorter, then integrating them with the original language model. Our research
shows a remarkable success rate of 98% in AlfWorld, which consists of 6
different tasks in a simulated household environment, emphasizing the
significance of proficient prompt engineering. The results highlight ChatGPT's
competence in comprehending and performing intricate tasks effectively in
real-world settings, thus paving the way for further advancements in task
planning. | http://arxiv.org/pdf/2308.01552 | Po-Lin Chen, Cheng-Shang Chang | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230803 | 20230803 | 3 2 0 2
g u A 3 ] I A . s c [ 1 v 2 5 5 1 0 . 8 0 3 2 : v i X r a
# InterAct: Exploring the Potentials of ChatGPT as a Cooperative Agent
Po-Lin Chen and Cheng-Shang Chang, Fellow, IEEE
AbstractâThis research paper delves into the integration of OpenAIâs ChatGPT into embodied agent systems, evaluating its influence on interactive decision-making benchmark. Drawing a parallel to the concept of people assuming roles according to their unique strengths, we introduce InterAct. In this approach, we feed ChatGPT with varied prompts, assigning it a numerous roles like a checker and a sorter, then integrating them with the original language model. Our research shows a remarkable success rate of 98% in AlfWorld, which consists of 6 different tasks in a simulated household environment, emphasizing the significance of proficient prompt engineering. The results high- light ChatGPTâs competence in comprehending and performing intricate tasks effectively in real-world settings, thus paving the way for further advancements in task planning.
Keywords: ChatGPT, AlfWorld, Task planning, InterAct.
# I. INTRODUCTION
The advent of large language models (LLMs), underpinned by transformative advancements in natural language process- ing (NLP), has stimulated a revolution across a wide range of applications. Exemplified by models such as Transformer [1], T5 [2], GPT-4 [3], these language models have achieved impressive results in diverse tasks like paragraph summary, language translation, and code optimization. These achieve- ments can be attributed to their ability to absorb and process massive amounts of data, making sense of the patterns and structures within the text.
ChatGPT [4] is an AI language model created by OpenAI, which has been trained using a combination of pretraining and fine-tuning with human feedback. This advanced model is built on Transformer model, enabling it to produce responses that closely resemble human language. By undergoing exten- sive training on vast volumes of text data, ChatGPT excels in understanding and generating text in various languages and fields, answering queries, and engaging in dialogues. Unlike its predecessors that operate primarily based on a single prompt, ChatGPT combines text generation with code syn- thesis, thereby significantly enhancing its interactive abilities. In this paper, we assess the ability of ChatGPT to make decisions within the context of an AlfWorld simulated envi- ronment [5]. The aim is to understand the modelâs proficiency in absorbing and processing data to make rational decisions.
Scholarly works such as ReAct [6] and Reflexion [7] showcase the decision-making, action-initiation, and reflective powers of LLMs, paving the way for remarkable progress in a range of text-based performance metrics. However, they all utilize a single language model (InstructGPT) which, despite numerous iterations of thought and reflection, often repeatedly commits the same mistakes. In this research, we devise a novel model, InterAct, which is founded on the architecture of the ReAct model [6]. It undergoes alterations in prompt formulations, incorporates different ChatGPT for support. In particular, we add a checker module to tackle the issue of object misidentifi- cation. The initial basic prompt has also been revised to bolster InterActâs capabilities in constructing comprehensive search paths. This approach effectively addresses the previously men- tioned shortcomings of the ReAct model. Consequently, this approach yielded a success rate of 98% in this benchmark, a significant improvement from the base ReAct agentâs accuracy of 75%. These experiments provide critical insights into the potential benefits and limitations of implementing ChatGPT in AI-driven systems and technologies.
In conclusion, the main insight of the paper is the advance- ment of AI language models like ChatGPT presents an excit- ing opportunity to revolutionize and reshape our interaction with technology. By leveraging these models, we can build more intuitive, responsive, and smart technologies that can effectively understand and respond to human requirements. The key contributions of our research are summarized below: (1) We introduce InterAct, an improved method where each agent, like ChatGPT, can showcase unique abilities, adeptly rectifying the limitations found in the ReAct model, such as object misidentification and inefficient planning.
(2) We have designed new trajectory prompts that enable the agent to flawlessly locate items during its search process. In a decision-making test within the AlfWorld sim- ulated environment, InterAct demonstrated a 98% success rate, significantly higher than the 75% accu- racy of the base ReAct agent, suggesting its potential benefits in AI-centric systems and technologies.
The authors are with the Institute of Communications Engineering, National Tsing Hua University, Hsinchu 300044, Taiwan R.O.C. Email: [email protected]; [email protected].
This work was supported in part by the National Science and Technology, Taiwan, under Grant 111-2221-E-007-045-MY3, and in part by Qualcomm Technologies under Grant SOW NAT-487844-2.
# II. RELATED WORK
Transformers have emerged as the dominant architecture in various fields. Initially prominent they have now extended their influence to include vision-based tasks [11],
[12] and even reinforcement learning [13], [14]. In the realm of robotics, Transformers have found practical applications in diverse areas such as path planning [15], [16], object recognition [17], and grasping [18].
One notable example is RT-1 [19], which takes the uti- lization of Transformers that takes images from a robotâs camera and natural language task instructions as inputs and directly outputs tokenized actions. RT-1 can also acquire new skills by observing other robotsâ experiences, opening opportunities for enhanced robot capabilities through multi- robot datasets. Another instance is SayCan [20], a study conducted by Googleâs AI team and Everyday Robots. This research employs PaLM [21] and an affordance function to empower robots to carry out complex tasks based on natural language instructions. The resulting system, PaLM-SayCan, transforms user instructions into actionable plans for the robot. Inner Monologue [22] has made further advancements by incorporating injected feedback from the environment. The work in [23] demonstrated that even without any training, siz- able language models can be effectively prompted to produce credible action plans driven by goals. They also suggested multiple techniques to enhance the modelâs ability to generate executable outputs, all without the need for invasive probing or modifications to the underlying model.
GPT for Robotics Moreover, recent publications, includ- ing [24], [25], and [26], have successfully incorporated models such as ChatGPT and GPT3.5 into the realm of robotics ap- plications. These advancements facilitate interaction between the models and the environment or users, allowing for the correction of the robotâs behavior. These papers showcase various prompts and outline a pipeline for the implementation they conduct of ChatGPT in robotics tasks. Additionally, experimental evaluations to assess ChatGPTâs capability to execute a wide range of robotics tasks while striving to bridge the gap between natural language and actionable robot actions. The process of reasoning in robotics involves breaking down complex tasks into simpler subtasks that can be more easily solved by the LLM itself or with the aid of tools. Various approaches [27], [28] have been introduced to enable natural language agents to select their next action in text-based environments.
One prominent approach is Chain-of-thought (CoT) reason- ing, as proposed in [29]. This approach leverages emergent properties, such as reasoning and commonsense, to solve tasks through multiple steps. It enables the LLM to reason through a series of intermediate actions, leading to the desired outcome. Another approach called faithful reasoning, introduced in [30], decomposes multi-step reasoning into three distinct steps, each handled by a dedicated LLM. By dividing the task into these steps, faithful reasoning facilitates the LLMâs ability to tackle complex computations effectively. Similar approaches like Scratchpad [31], which involves fine-tuning an LLM on intermediate computation steps, resulting in improved performance on multi-step computation problems. The Describe, Explain, Plan, and Select (DEPS) approach, introduced in [32], specifically developed to tackle the unique
challenges of planning in open-ended environments such as Minecraft. This innovative system adeptly manages intricate tasks that demand meticulous, multi-step reasoning, effec- tively prioritizing sub-goals according to the agentâs prox- imity. Notably, DEPS has exhibited remarkable results in enhancing the success rate of Minecraft tasks by offering insightful explanations for errors encountered during sub-task execution. As a groundbreaking planning agent, DEPS has achieved an unprecedented positive success rate in conquering the formidable ObtainDiamond task, marking a significant milestone in the field.
A different strategy called DERA [33] presents an alter- native approach by structuring a dialogue as a conversation between two agent types: âResearcherâ and âDecider.â The Researcher agent analyzes information and identifies key com- ponents of the problem, while the Decider agent autonomously combines the Researcherâs insights and makes judgments on the final output. This approach has demonstrated notable enhancements compared to the baseline performance of GPT- 4 [3] in evaluations conducted by human experts and quanti- tative metrics. Particularly, DERA has showcased significant advancements in safety-critical domains like healthcare.
Additionally, the studies by [7], [34] have also incorporated reflection actions into the model. These reflection actions allow the model to refine its actions based on feedback received during the execution of tasks. By iteratively adjusting its actions and incorporating self-feedback, the model can improve its decision-making process and adapt to changing conditions.
Our research aims to provide additional evidence supporting the effectiveness of ChatGPT in language-conditioned robotic learning simultaneously introducing novel architectures that facilitate reasoning through the coordination of various roles performed by LLMs.
# III. METHOD: INTERACT STRUCTURE
In this section, we use the AlfWorld benchmark to test ChatGPTâs reasoning capabilities, examining how it accom- plishes household tasks step by step when provided only with a few-shot example. We will use not only ChatGPT but also a similar language model called InstructGPT (text-davinci- 002). InstructGPT is particularly adept at tasks demanding succinct responses or benefiting from k-shot examples. In this particular task, unlike the previous demostration, the model is required to integrate task-oriented actions with verbal reasoning. The model needs to possess the ability to think and reason like a human. When faced with dead ends , the model should be capable of adjusting its planning based on logical reasoning.
A. AlfWorld Dataset
AlfWorld is a suite of text-based environments that chal- to solve multi-step tasks in a variety of lenge an agent interactive environments with ALFRED [35] benchmark. The ALFRED benchmark focuses on tasks that require an agent to accomplish high-level goals in a simulated household
environment by navigating and interacting through text-based actions. In AlfWorld, there are six types of tasks that chal- lenge the agentâs ability to plan, track subgoals, and explore systematically.
For example, a task in AlfWorld could be to âexamine a paper under a desklamp.â To achieve this goal, the agent needs to navigate to specific locations within the simulated household and interact with objects using text commands. The agent might need to issue commands like âgo to coffeetable 1,â âtake paper 2,â and âuse desklamp 1â to complete the task. The complexity of the tasks in AlfWorld is intentionally designed to be challenging. Task instances can have more than 50 locations and may require an expert policy more than 50 steps to solve. This complexity encourages the agent to effectively plan its actions, keep track of subgoals, and explore the environment systematically. For example, the agent may need to check all desks one by one to find the desklamp.
One of the challenges presented in AlfWorld is the need to determine likely locations for common household items. For instance, a desklamp is likely to be found on desks, shelves, or dressers. This aspect of the environment provides an opportunity for language models like LLMs to leverage their pretrained commonsense knowledge to make informed decisions about the likely locations of objects.
In each environment of AlfWorld, the agent has the option to select an action from a list of permissible actions, denoted as At at time step t. Upon executing an action, the agent receives an observation, Ot, and a reward, R(st, at), from the environment, which then determines the next state of the agent.
AlfWorld offers a diverse set of six tasks and a total of over 3000 unique environments. These environments test the agentâs ability to understand the task at hand, formulate a sequential plan consisting of subtasks, and carry out the necessary actions within the given environment. In our trials, we utilize the ReAct problem-solving strategy [6], which has demonstrated superior performance across a wide array of sequential decision-making tasks. ReAct is a strategy that allows the agent to reason and act by articulating its current thoughts and performing actions based on these thoughts. At each time step, the agent has the option to execute < think >: thought action to verbalize its internal thought process, or < action >: to induce a response from the environment. The set of possible actions in each state is not explicitly defined, providing the agent with full autonomy in determining its next moves. To prevent syntactic errors, we provide the agent with two domain-specific few-shot trajectories.
# B. Model architecture
We introduced a novel model called InterAct, which is built upon the foundation of ReAct. The architectural dia- gram of InterAct can be observed in Figure 1. While ReAct has demonstrated impressive accuracy in diverse decision- making and knowledge-intensive tasks, it occasionally en- counters common errors, including Perception Error, Object Misidentification, and Inefficient Planning. In simpler terms,
although ReAct achieves state-of-the-art performance overall, there exists a small subset of tasks that remain unsolved due to minor imperfections in a single model.
H= H= Hom} Sorter + (chatser) InterAct Query : un k > Environment : Reward âââââ âââ_ Action - 4 ReAct â) a checker {chatePr)
Fig. 1. The architecture of both ReAct and InterAct. InterAct involves the integration of LLM with various agents to facilitate smoother interaction with the environment.
To address these challenges, InterAct leverages the com- bined strength of agents with distinct purposes, such as is checker and sorter, susceptible to errors. In addition, we have modified the original basic prompt to enhance InterActâs ability to plan comprehensive search paths when looking for multiple items, ensuring that no possible locations are overlooked. This op- timization greatly improves the efficiency of the tasks being performed.
Sorter When processing environmental data, ReAct ini- tially needs to determine the likelihood of objects appearing in specific locations. However, this ranking process often falls short, leading to less efficient planning. This inefficiency may arise from the fact that the the InstructGPT model (text- davinci-002) is not sufficiently trained in factual knowledge and common-sense reasoning. On the other hand, ChatGPT has been fine-tuned using Reinforcement Learning with Hu- man Feedback (RLHF) and has demonstrated a more nuanced understanding of various situations. It excels at making well- informed decisions, as depicted in Figure 2. To improve the ef- ficiency of predicting object locations, we integrate ChatGPT as a decision-making component. Whenever ReAct requires this procedure, it can autonomously utilize ChatGPT, thus enhancing the effectiveness of its object search operations.
Text-davinel-002 Chater The Ts he Fs need to finda cath, Alot re ey to appear tarbagecan (1), sinkbasin (1-2), countertop (1), tole (1) towelholer (1) bs3: sinkbasio oder 3," Tarot 2,9 "countertop 1,3 tletpa âAct 3: goto garbagecan 1 saroagecon 1 âbs 3: On the garbagecan 1, yousee nothing. âAct: go to snkbasin 2 âAct: thinks Based on the sorting provide can check cabinets (1-4), towelholder (0b 4: nthe sinkbatn 3, you see nating. (1) sinkbasn (1-2), countertop 1), toletpaperhanger (1), toll (1), garbagecan (1) âone by one to find cloth, starting with cabinet 3 Act goto fowelholder 054 0. (0bs 8: On the towelholdr 1, you see a towel. âet 5: goto cabinet Act: think |e finda eat need to get one from somewhere ele. (Obs 6: The ebinet 1s open Init, you see a loth 2
Fig. 2. The left image was generated using text-davinci-002 for search ranking, while the right image was generated using ChatGPT. It can be observed that ChatGPT exhibits higher logical reasoning in finding objects compared to text-davinci-002.
Checker Another issue with text-davinci-002 is that it tends to mistakenly categorize similar objects as the same. For example, it might treat a pan and a pot as identical items, leading to the problem of Object Misidentification, as depicted
in Figure 3. To address this issue, we employ ChatGPT as a checker by providing it with appropriate prompts. We have observed that ChatGPT can successfully distinguish between similar objects. Furthermore, we utilize the results from this checker as observations and feed them back to the LLM, as illustrated in Figure 1. This approach helps us resolve the problem related to object misidentification.
pot 1. Next, I need to take it.
Fig. 3. Object Misidentification. In this scenario, the objective is to locate a pan; however, ReAct mistakenly misidentifies another object as the pan.
In the AlfWorld environment, we encountered a mission type named âpick 2â, where the agent is required to find two identical objects. We observed that ReAct alone tends to forget its previous locations, resulting in inefficient trajectories characterized by frequent revisits to the same place. In some instances, this led to hallucinations, defined as consecutive identical actions with the environment responding similarly. To address this issue, we made changes to the original modelâs prompt. After finding the first object and placing it in the corresponding receptacle, we allow the model to autonomously generate a trajectory while ensuring that this path does not overlook areas where the second object might be present, as shown in Figure 4.
More details about prompts, we refer the reader to Ap- pendix A.
# IV. EVALUATION
In this section, we present a comparative analysis of the performance enhancement provided by the helpers (sorter or checker) and the new trajectory planning when compared to the baseline model. Our findings demonstrate that InterAct consistently outperforms ReAct on AlfWorld (as shown in Table I) across all tasks. On AlfWorld, the top-performing InterAct trial achieves an impressive average success rate of 98%, falling short in only 2 out of 134 tasks. This performance is significantly better than the best trials of ReAct (73%) and BUTLER (37%). Indeed, InterAct has demonstrated excep- tional proficiency in handling these tasks, as evidenced by achieving a 100% success rate in four out of the six tasks. This performance showcases InterActâs remarkable ability to effectively manage and succeed in various tasks. Notably, even when ReAct is augmented only with a checker or sorter, the overall average performance surpasses that of ReAct without helpers by a significant margin. The tasks that show the most substantial improvement are âpick2â and âclean,â with an approximate gain of 47% and 41%. From a qualitative standpoint, we observed that ReAct, without any helper, faces difficulties in accurately determining the presence of items in a specific location or employing ineffective search strategies.
Your task is to: find two pillow and put them in sofa. âAct 2: think: Fist I need to find the first [pllow]]-A pillow is more likely to appear in armehair (2), cabinet (1-4), \ drawer (1-5), dresser (1), garbagecan (1), safe (1), shelf (1-12), sidetable (1), sofa (1). | can check one by one, starting with armchair 3. Obs 2: OX Act 3: goto armchair bs 3: On the armchair 3, you see a pillow 2, a pillow 1, and a remotecontral 1. Act 4: think: Now | find the first pillow (1). Next, | need to take it. Obs 4: 0x âAc 5: tak pillow 1 from armchair 2 âObs 5: You pick up the pillow 1 from the armchair 1. Act 6: think: Now | take the first pillow (1). Next, | need to put it in/on sofa 1. Obs 6: OK Act 7: goto sofa 1 Obs 7: On the sofa 1, you see nothing. âAct 8: put pillow 1 in/on sofa 1 Obs 8: You put the pillow 1 in/on the sofa 1. Act 9: think: Now | put the first pillow in sofa. Next, | need to find the second [[pillow]]}. | have visited armchair (1), so the second pillow is more likely to appear in cabinet (1-4), drawer (1-5), dresser (1), garbagecan (1), safe (1), shelf (1-12), sidetable (1), sofa (1). Ican check one by one, starting with cabinet 1 J) Obs 9: OK. Your task is to: find two pillow and put them in sofa. (Obs 6: On the armchair 1, you see a cellphone 2, a pillow 2, and a pillow 1. âAct 7: think: Now I find the first pillow (1). Next, | need to take it. Obs 7: OK. Act 8: take pillow 1 from armchair 1 Obs 8: You pick up the pillow 1 from the armchair 1 âAct 9: think: Now | take the first pillow (1). Next, I need to put it infon sofa 1 (Obs 9: OK. Act 10: goto sofa 1 Obs 10: On the sofa 1, you see a newspaper 3 âAct 11: put pillow 1 in/on sofa 1 (Obs 11: You put the pillow 1 in/on the sofa 1 Act 12: think: Now I put the first pillow in sofa. Next, I need to find the second {[pillow]]. | should go back to armchair 1 to see if there is more pillow. If not, | will continue to check drawer (1-5), sidetable (1), cabinet (1-4), dresser (1), shelf (1-12), garbagecan (1), safe (1) one by one to find pillow. Obs 12: OK âAct 13: go to armchair 1 (bs 13: On the armchair 1, you see a cellphone 2, and a pillow 2. âAct 14: think: Now I find the second pillow (2). Next, I need to take It Obs 14: OK (Obs 18: You put the pillow 2 in/on the sofa 1.
Fig. 4. Trajectory planning. In the initial scenario, the agent fails to retrieve the second pillow from the armchair after placing the first pillow on the sofa. Consequently, the agent cannot find the second pillow, resulting in an incomplete task. In the revised scenario, InterAct addresses this issue by considering the future search trajectory. It prioritizes returning to the armchair to search for the second pillow before exploring the other areas. This approach improves the chances of successfully locating the second pillow and completing the task.
TABLE I ALFWORLD TASK-SPECIFIC SUCCESS RATES (%).
Method Pick Clean Heat Cool Look Pick2 BUTLERg 33 6 70 76 17 12 BUTLER 65 39 83 76 55 24 Act 88 41 76 67 73 43 ReAct 88 55 90 81 75 53 ReAct+checker 85 81 100 87 92 75 ReAct+sorter 84 76 88 73 80 67 InterAct 100 96 100 94 100 100 All 46 57 46 73 86 78 98
# V. DISCUSSION AND LIMITATIONS
A. Scalability of InterAct
Our InterAct model is scalable and adaptable to different datasets and scenarios. For instance, if thereâs a need for a feature similar to âmemories,â we can develop an interpreter to describe the current path, among other things, without having to train numerous different language models. This is possible because ChatGPT serves as an excellent backbone for such extensions.
B. Error assessment with a supervisor module
Despite achieving an impressive average performance of 98% on the AlfWorld dataset, our analysis of failed trajecto- ries uncovered certain limitations. One notable drawback is the modelâs heavy reliance on prompt completeness within InterAct. When our examples contain missing or unaddressed components, the model fails to detect these errors, resulting in repetitive actions, even for trivial mistakes. To overcome this issue, we explored the possibility of using an alternative ChatGPT model as a supervisor to identify such errors. How- ever, itâs important to acknowledge that the accuracy of the supervisorâs judgment cannot be guaranteed, and there may be occasional misidentifications leading to âaction errors.â
In order to tackle the challenge of error detection, we conducted a comparison between ChatGPT and GPT-4. The results demonstrated a significant improvement in error de- tection performance with GPT-4. Unfortunately, GPT-4 is currently unavailable as an open-source model and cannot be accessed free of charge. Conducting extensive simulations using GPT-4 requires funding support.
C. Insufficiency of the dataset
While AlfWorld is a valuable platform for assessing AI per- formance, it has certain limitations. Primarily, it encompasses only six types of tasks, and even within these categories, the task quantity is quite limited. These restrictions neither fully test nor make optimal use of the AI systemsâ capabilities. If we move to an environment offering a larger range and diversity of tasks, as well as a broader and more varied set of locations, our model will still need improvement to maintain its current level of accuracy. This aspect will be our focus for future research.
# VI. CONCLUSION
Our research is centered on enhancing the task planning capabilities of large language models. We developed a new model, InterAct, built upon the framework of the ReAct model. InterAct is a culmination of various âhelpersâ (like checkers and sorters) and aims to improve upon the existing trajectory. We evaluated this framework in the AlfWorld sim- ulated environment, where it showed a substantial increase in decision-making accuracy, soaring from 75% to an impressive 98%. This highlights the vast potential of these models in AI- driven systems and technologies.
In essence, this study underscores the revolutionary poten- tial of AI language models like ChatGPT and their pivotal
role in shaping future real-world interactions. As we continue to delve into their capabilities, we are on the cusp of a new technological era marked by not only intelligence but also intuitiveness and responsiveness to human needs.
# REFERENCES
[1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Å. Kaiser, and I. Polosukhin, âAttention is all you need,â Advances in Neural Information Processing Systems, vol. 30, 2017.
[2] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, âExploring the limits of transfer learning with a unified text-to-text transformer,â The Journal of Machine Learning Research, vol. 21, no. 1, pp. 5485â5551, 2020.
[3] OpenAI, âGPT-4 technical report,â arXiv, 2023. [4] âOpenAI, ChatGPT. https://chat.openai.com.â 2023. [5] M. Shridhar, X. Yuan, M.-A. CËot´e, Y. Bisk, A. Trischler, and M. Hausknecht, âAlfworld: Aligning text and embodied environments for interactive learning,â arXiv preprint arXiv:2010.03768, 2020. [6] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao, âReAct: Synergizing reasoning and acting in language models,â arXiv preprint arXiv:2210.03629, 2022.
[7] N. Shinn, B. Labash, and A. Gopinath, âReflexion: an autonomous agent with dynamic memory and self-reflection,â arXiv preprint arXiv:2303.11366, 2023.
[8] W. Fedus, B. Zoph, and N. Shazeer, âSwitch transformers: Scaling to trillion parameter models with simple and efficient sparsity,â The Journal of Machine Learning Research, vol. 23, no. 1, pp. 5232â5270, 2022.
[9] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., âLanguage models learners,â Advances in Neural Information Processing are few-shot systems, vol. 33, pp. 1877â1901, 2020.
[10] L. Zhuang, L. Wayne, S. Ya, and Z. Jun, âA robustly optimized bert pre-training approach with post-training,â in Proceedings of the 20th Chinese national conference on computational linguistics, 2021, pp. 1218â1227.
[11] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, âSwin transformer: Hierarchical vision transformer using shifted win- dows,â in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 10 012â10 022.
[12] J. Liang, J. Cao, G. Sun, K. Zhang, L. Van Gool, and R. Timofte, âSwinir: Image restoration using swin transformer,â in Proceedings of the IEEE/CVF international conference on computer vision, 2021, pp. 1833â1844.
[13] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch, âDecision transformer: Re- learning via sequence modeling,â Advances in Neural inforcement Information Processing systems, vol. 34, pp. 15 084â15 097, 2021. [14] K.-H. Lee, O. Nachum, M. S. Yang, L. Lee, D. Freeman, S. Guadarrama, I. Fischer, W. Xu, E. Jang, H. Michalewski et al., âMulti-game decision transformers,â Advances in Neural Information Processing Systems, vol. 35, pp. 27 921â27 936, 2022.
[15] K. Alexis, C. Papachristos, R. Siegwart, and A. Tzes, âUniform cov- erage structural inspection pathâplanning for micro aerial vehicles,â in 2015 IEEE international symposium on intelligent control (ISIC). IEEE, 2015, pp. 59â64.
[16] D. S. Chaplot, D. Pathak, and J. Malik, âDifferentiable spatial planning using transformers,â in International Conference on Machine Learning. PMLR, 2021, pp. 1484â1495.
[17] K. He, X. Chen, S. Xie, Y. Li, P. Doll´ar, and R. Girshick, âMasked autoencoders are scalable vision learners,â in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 16 000â16 009.
[18] D. Park and S. Y. Chun, âClassification based grasp detection using spatial transformer network,â arXiv preprint arXiv:1803.01356, 2018. [19] A. Brohan, N. Brown, J. Carbajal, Y. Chebotar, J. Dabis, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, J. Hsu et al., âRt-1: Robotics transformer for real-world control at scale,â arXiv preprint arXiv:2212.06817, 2022.
[20] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog et al., âDo as i can, not as i say: Grounding language in robotic affordances,â arXiv preprint arXiv:2204.01691, 2022.
[21] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann et al., âPalm: Scaling language modeling with pathways,â arXiv preprint arXiv:2204.02311, 2022.
[22] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar et al., âInner monologue: Embod- ied reasoning through planning with language models,â arXiv preprint arXiv:2207.05608, 2022.
[23] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch, âLanguage models as zero-shot planners: Extracting actionable knowledge for embodied agents,â in International Conference on Machine Learning. PMLR, 2022, pp. 9118â9147.
[24] S. Vemprala, R. Bonatti, A. Bucker, and A. Kapoor, âChatgpt for robotics: Design principles and model abilities,â 2023, 2023.
[25] N. Wake, A. Kanehira, K. Sasabuchi, J. Takamatsu, and K. Ikeuchi, âChatgpt Empowered Long-Step Robot Control in Various Environ- ments: A Case Application,â arXiv preprint arXiv:2304.03893, 2023.
[26] G. Lu, S. Li, G. Mai, J. Sun, D. Zhu, L. Chai, H. Sun, X. Wang, H. Dai, N. Liu et al., âAGI for agriculture,â arXiv preprint arXiv:2304.06136, 2023.
[27] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. Dehghani, S. Brahma et al., âScaling instruction-finetuned language models,â arXiv preprint arXiv:2210.11416, 2022.
[28] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler et al., âEmergent abilities of large language models,â arXiv preprint arXiv:2206.07682, 2022. [29] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou, âChain of thought prompting elicits reasoning in large language models,â arXiv preprint arXiv:2201.11903, 2022.
[30] A. Creswell and M. Shanahan, âFaithful reasoning using large language models,â arXiv preprint arXiv:2208.14271, 2022.
[31] M. Nye, A. J. Andreassen, G. Gur-Ari, H. Michalewski, J. Austin, D. Bieber, D. Dohan, A. Lewkowycz, M. Bosma, D. Luan et al., âShow your work: Scratchpads for intermediate computation with language models,â arXiv preprint arXiv:2112.00114, 2021.
[32] Z. Wang, S. Cai, A. Liu, X. Ma, and Y. Liang, âDescribe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents,â arXiv preprint arXiv:2302.01560, 2023. [33] V. Nair, E. Schumacher, G. Tso, and A. Kannan, âDERA: enhanc- ing large language model completions with dialog-enabled resolving agents,â arXiv preprint arXiv:2303.17071, 2023.
[34] A. Madaan, N. Tandon, P. Gupta, S. Hallinan, L. Gao, S. Wiegreffe, U. Alon, N. Dziri, S. Prabhumoye, Y. Yang et al., âSelf-refine: Iterative refinement with self-feedback,â arXiv preprint arXiv:2303.17651, 2023. [35] M. Shridhar, J. Thomason, D. Gordon, Y. Bisk, W. Han, R. Mottaghi, L. Zettlemoyer, and D. Fox, âAlfred: A benchmark for interpret- ing grounded instructions for everyday tasks,â in Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, 2020, pp. 10 740â10 749.
# APPENDIX A ALFWORLD EXPERIMENT DETAILS
Hereâs an example in the prompt of InterAct in heat task. In the prompt, we enclose the items that need to be found within double square brackets, so that they can be fed to the checker and sorter later. InterAct will search for the items based on the sorting result. Each time we arrive at a location, we ask the checker to find the required item. If it is found, we consider the things present at that location as observations and return them to the LLM. If the item is not found, we ask the checker to return the message âObject is not here.â.
# <heat> Prompt
You are in the middle of a room. Looking quickly around you, you see a cabinet 10, a cabinet 9, a cabinet 8, a cabinet 7, a cabinet 6, a cabinet 5, a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a coffeemachine 1, a countertop 3, a countertop 2, a countertop 1, a diningtable 1, a drawer 6, a drawer 5, a
drawer 4, a drawer 3, a drawer 2, a drawer 1, a fridge 1, a garbagecan 1, a microwave 1, a sinkbasin 1, a stoveburner 4, a stoveburner 3, a stoveburner 2, a stoveburner 1, and a toaster 1. Your task is to: heat some egg and put it in diningtable. > think: To solve the task, I need to find and take an egg, then heat it with microwave, then put it in diningtable. OK. > think: First I need to find an [[egg]]. OK. > sorting(egg) [a fridge 1, a countertop 1, a countertop 2, a countertop 3, a diningtable 1, a sinkbasin 1, a garbagecan 1, a microwave 1, a cabinet 1, a cabinet 2, a cabinet 3, a cabinet 4, a cabinet 5, a cabinet 6, a cabinet 7, a cabinet 8, a cabinet 9, a cabinet 10, a stoveburner 1, a stoveburner 2, a stoveburner 3, a stoveburner 4, a drawer 1, a drawer 2, a drawer 3, a drawer 4, a drawer 5, a drawer 6, a coffeemachine 1, a toaster 1] > think: Based on the sorting provided, I can check fridge (1), countertop (1-3), diningtable (1), sinkbasin (1), garbagecan (1), microwave (1), cabinet (1-10), stoveburner (1-4), drawer (1-6), coffeemachine (1), toaster (1) one by one to find egg, starting with fridge 1. OK. > go to fridge 1 The fridge 1 is closed. > open fridge 1 egg is not here. > go to countertop 1 egg is not here. > go to countertop 2 egg is not here. > go to countertop 3 On the countertop 3, you see a bowl 1, a butterknife 1, a egg 2, a kettle 2, a plate 1, a sink 1, and a spatula 2. > think: Now I find an egg (2). Next, I need to take it. OK. > take egg 2 from countertop 3 You pick up the egg 2 from the countertop 3. > think: Now I take an egg (2). Next, I need go to a microwave (1) and heat it. OK. > go to microwave 1 The microwave 1 is closed. > heat egg 2 with microwave 1 You heat the egg 2 using the microwave 1. > think: Now I heat an egg (2). Next, I need to put it in/on diningtable 1. OK. > go to diningtable 1 On the diningtable 1, you see a apple 2, a bread 3, a egg 1, a kettle 1, a knife 1, a mug 1, a papertowelroll 1, a peppershaker 2, a potato 1, a soapbottle 1, and a spatula 1. > put egg 2 in/on diningtable 1 You put the egg 2 in/on the diningtable 1.
In the âpick2â task, we have incorporated new trajectory planning. After locating and placing the first object in the appropriate receptacle, we enable the model to independently generate a trajectory, ensuring that the path does not overlook areas where the second object may be located.
# <picktwo> Prompt
You are in the middle of a room. Looking quickly around you, you see a armchair 2, a armchair 1, a bed 1, a countertop 1, a diningtable 1, a drawer 2, a drawer 1, a dresser 1, a laundryhamper 1, and a sidetable 1. Your task is to: put two creditcard in dresser. > think: To solve the task, I need to find and take the first
creditcard, then put it in dresser, then find and take the second creditcard, then put it in dresser. OK. > think: First I need to find the first [[creditcard]]. OK. > sorting(creditcard) [countertop 1, diningtable 1, sidetable 1, armchair 1, armchair 2, bed 1, drawer 1, drawer 2, dresser 1, laundryhamper 1] > think: Based on the sorting provided, I can check couter- top (1), diningtable (1), sidetable (1), armchair (1-2), bed (1), drawer (1-2), dresser (1), laundryhamper (1) one by one to find creditcard, starting with countertop 1. OK. > go to countertop 1 creditcard is not here. > go to diningtable 1 On the diningtable 1, you see a cellphone 2, a creditcard 4, a creditcard 3, a creditcard 2, a mirror 1, a pencil 2, and a pencil 1. > think: Now I find the first creditcard (2). Next, I need to take it. OK. > take creditcard 2 from diningtable 1 You pick up the creditcard 2 from the diningtable 1. > think: Now I take the first creditcard (2). Next, I need to put it in/on dresser 1. OK. > go to dresser 1 On the dresser 1, you see a mug 1, and a television 1. > put creditcard 2 in/on dresser 1 You put the creditcard 2 in/on the dresser 1. > think: Now I put the first creditcard in dresser. Next, I need to find the second [[creditcard]]. I should go back to diningtable 1 to see if there is more creditcard. If not, I will continue to check sidetable (1), armchair (1-2), bed (1), drawer (1-2), dresser (1), laundryhamper (1) one by one to find credit- card. OK. > go to diningtable 1 On the diningtable 1, you see a cellphone 2, a creditcard 4, a creditcard 3, a mirror 1, a pencil 2, and a pencil 1. > think: Now I find the second creditcard (3). Next, I need to take it. OK. > take creditcard 3 from diningtable 1 You pick up the creditcard 3 from the diningtable 1. > think: Now I take the second creditcard (3). Next, I need to put it in/on dresser 1. OK. > go to dresser 1 On the dresser 1, you see a creditcard 2, a mug 1, and a television 1. > put creditcard 3 in/on dresser 1 You put the creditcard 3 in/on the dresser 1. | {
"id": "2206.07682"
} |
2308.01542 | Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents | The recent advent of large language models (LLM) has resulted in
high-performing conversational agents such as chatGPT. These agents must
remember key information from an ongoing conversation to provide responses that
are contextually relevant to the user. However, these agents have limited
memory and can be distracted by irrelevant parts of the conversation. While
many strategies exist to manage conversational memory, users currently lack
affordances for viewing and controlling what the agent remembers, resulting in
a poor mental model and conversational breakdowns. In this paper, we present
Memory Sandbox, an interactive system and design probe that allows users to
manage the conversational memory of LLM-powered agents. By treating memories as
data objects that can be viewed, manipulated, recorded, summarized, and shared
across conversations, Memory Sandbox provides interaction affordances for users
to manage how the agent should `see' the conversation. | http://arxiv.org/pdf/2308.01542 | Ziheng Huang, Sebastian Gutierrez, Hemanth Kamana, Stephen MacNeil | cs.HC | null | null | cs.HC | 20230803 | 20230803 | 3 2 0 2
g u A 3 ] C H . s c [
1 v 2 4 5 1 0 . 8 0 3 2 : v i X r a
# Memory Sandbox: Transparent and Interactive Memory Management for Conversational Agents
# Ziheng Huang [email protected] University of CaliforniaâSan Diego San Diego, CA, USA
Sebastian Gutierrez [email protected] Temple University Philadelphia, PA, USA
Hemanth Kamana [email protected] Temple University Philadelphia, PA, USA
# Stephen MacNeil [email protected] Temple University Philadelphia, PA, USA
ABSTRACT The recent advent of large language models (LLM) has resulted in high-performing conversational agents such as chatGPT. These agents must remember key information from an ongoing conversa- tion to provide responses that are contextually relevant to the user. However, these agents have limited memory and can be distracted by irrelevant parts of the conversation. While many strategies exist to manage conversational memory, users currently lack affordances for viewing and controlling what the agent remembers, resulting in a poor mental model and conversational breakdowns. In this paper, we present Memory Sandbox, an interactive system and design probe that allows users to manage the conversational memory of LLM-powered agents. By treating memories as data objects that can be viewed, manipulated, recorded, summarized, and shared across conversations, Memory Sandbox provides interaction affordances for users to manage how the agent should âseeâ the conversation.
CCS CONCEPTS ⢠Computing methodologies â Intelligent agents; ⢠Human- centered computing â Interactive systems and tools;
# KEYWORDS Human-AI Interaction, Large Language Models, Chatbots
Multiple strategies have been introduced to manage agentsâ conversational memory. For example, the conversation can be automatically summarized [21] and refined [24] to reduce redun- dancy while maintaining key information. Some systems selectively store [12, 22] and update [1] key memories. Relevant memories can also be retrieved based on the user input [1, 15, 21]. However, these memory management strategies are hidden behind the interface, resulting in a lack of transparency. Users often do not know what strategy is being used and have limited control over it. This makes it difficult for users to repair conversational breakdowns that happen when there is a misalignment between how the agent manages the memory and how the user perceives the conversation.
We present Memory sandbox, shown in Figure 1, a system that allows users to see and manage the memory of conversational agents to align with user understanding of the conversation. Mem- ory Sandbox transforms conversational memory, previously man- aged behind the user interface, into interactive memory objects within the interface. Users can manipulate the visibility and con- tent of memory objects, spatially rearrange them, and share them across conversations. We make the following contributions: 1) The conceptualization of memory objects which makes conversational memory transparent and interactive and 2) The Memory Sandbox system that offers novel interaction affordances for users to view and manipulate the conversational memory of an intelligent agent.
1 INTRODUCTION Large Language Models (LLMs) are currently capable of generating human-like responses in open-domain tasks [4]. This has led to a new generation of conversational agents, such as chatGPT, which are now being widely used across domains. To ensure that agents generate responses that are contextually relevant and coherent to an ongoing conversation, these agents must maintain a working memory of the conversational history that has occurred up to that point in the conversation. The default strategy is to use as much of the conversational history as will fit within the input size limit of the LLM. Parts of the conversations that go beyond that buffer limit are forgotten, which leads to breakdowns when users assume the model remembers past context. Additionally, as the input buffer size increases, the performance of the LLM degrades as it struggles to retrieve relevant context and can be distracted by irrelevant context [11, 18]. This problem is compounded because users do not know how the LLM is leveraging the memory to generate responses.
2 SYSTEM OVERVIEW Memory sandbox is a system that provides users with the ability to view and manipulate the memory model of an intelligent agent, resulting in a shared representation of their ongoing conversation. Memory Sandbox introduces the concept of a memory object, an in- teractive piece of conversational history that can be moved, edited, deleted, or combined with other memory objects through sum- marization. The interface is implemented in Next.js and uses the GPT-3.5 turbo model from the OpenAI API. Below we present the features of Memory Sandbox to help end users view and manage an LLM-powered agentâs memory model.
2.1 View and manipulate memory objects Explainable AI research seeks to help people form mental models of intelligent systems [17]. Transparency of the inner workings of the system [6, 23] and interactivity to probe and manipulate the
Huang, et al.
Conversation 1 Conversation 2 assistant _ hello thore! im a design process chatbot that can guide assistant _ hello there! im a systems thinking chatbot that help you > = youthrough the design process! how can | assist you seo ¢ today? © Selected Memory For Summary ® @ user im working on de quality online e ing design solutions for high think through the interconnected stakeholders! how can ees * ©â Gj assist you today? @ summary _ stakeholders in online education include students, teachers, administrators, parents, support staff, set H © © G technical teams, and regulatory bodies. summary _ stakeholders in online education include students, = og = teachers, administrators, parents, support staff, Draggable S © 6S technical teams, and regulatory bodies. emer be ae user before going into specific stakeholders, what are some 5 4 common obstacles ofall these stakeholders ? s@é user what should be my next steps ? set
Figure 1: Memory Sandbox is a system that enables users to see and manage the memory of conversational agents. Memory Sandbox provides the following interaction affordances: 1) toggle memory visibility, 2) add memory, 3) edit memory, 4) delete memory, 5) summarize memory, 6) create a new conversation, and 7) share memory.
system [16] have been demonstrated to help people interpret and interact with intelligent systems to achieve their goals.
Memory Sandbox makes the conversational memory explicit through the use of âmemory objectsâ which can be viewed and manipulated within the interface. This was inspired by prior work that âobjectifiesâ tools [2, 3] and attributes [20] to enable flexibility, expressiveness, and direct manipulation. This results in a âshared representationâ [7, 8] and common ground [5]âso what users see on the front-end is what an LLM would âseeâ on the back-end.
Additionally, users can view, edit, add, and delete memory objects
to directly control how the agent âseesâ the conversation.
2.2 Toggle memory object visibility As a conversation grows, LLMs must increasingly rely on their memory management strategy to infer meaning from the conversa- tion. However, in longer conversations, it is unclear what parts of the conversation are stored in memory or are attended to by the model [11]. This results in a poor mental model for users and a lack of control over what context is maintained and used by the agent. Memory Sandbox enables users to selectively hide or show mem- ory objects to control what context is shared with the agent. When the userâs intent changes or the conversational context switches, the user can toggle the visibility of memory objects to hide or show parts of the conversation. As a signifier, hidden memory objects are grayed out within the interface.
supplement extra context. Additionally, the arrangement of context is shown to have a significant effect on how well LLMs are able to leverage relevant context [11]. In Memory Sandbox, all the memory objects are draggable, allowing users to experiment and refine the ordering and placement of memory objects in a conversation.
2.4 Summarize memory objects Reminiscent of how humans attend to key aspects in a conver- sation [14], abstractive summarization distills a large amount of information to provide essential elements to the agent. Yet, what is considered as âkey aspectsâ can vary for individuals, even in the same conversation [14]. Memory Sandbox enables uses to select memory objects that are summarized by the LLM. The resulting memory object represents the previous conversation and can be further refined by the user. The original conversation can be viewed by clicking on the summary.
2.5 Share memory objects across conversations Aligning with the goal of managing memory, Memory Sandbox also provides affordances for sharing memories across conversations. This offers a new way for users to engage with multiple agents outside of a single conversation thread. Unlike in conversations with people, the speaker doesnât need to repeat themselves in each conversation to establish a shared understanding.
2.3 Curate memory objects Discussants develop and refine their understanding as a conversa- tion unfolds [5]. Thus, Memory Sandbox provides controls for users to curate memory objects by editing an existing memory object to refine or update the context, deleting a memory object to remove completely irrelevant context, and adding a new memory object to
Users can create and start multiple conversations with separate LLM-powered agents in the same 2D canvas. Memory objects can be shared and connected between conversations by dragging the memory object from one conversation to another. When dragging, memories are copied by reference to help the user identify the context source.
Memory Sandbox
3 DISCUSSION Conversing is a collaborative activity where participants develop common ground through summarizing the discussion, repairing breakdowns, and emphasizing or de-emphasizing shared ideas [5]. Yet, existing chatbot interfaces do not provide affordances for under- standing how the agent âseesâ the conversation. Additionally, users can not rely on a theory of mind. These aspects result in a poor men- tal model for users and potential misalignment in understanding where conversational breakdown can occur.
Memory Sandbox transforms previously implicitly managed con- versational memory behind the interface into interactive memory objects on the interface, exposing full control over the memory model of the agent to end users. By selectively hiding, showing, and curating memory representation, we can give users more con- trol over how the agent should âseeâ the conversation. In addition to curating memory in a single conversation, Memory Sandbox is also a design probe toward memory manipulation affordances for multi-agent interactions. By displaying multiple agents on the same screen and making memories interactive and draggable, Mem- ory Sandbox allows end users to selectively control the shared or unique memory each agent contains.
Tools are beginning to emerge that focus on how users might in- teract with LLMs, including mapping UI affordances to an LLM [13], grounding human-AI collaboration in a shared artifact [9], provid- ing templates to facilitate prompt generation [10], and decomposing complex prompts to facilitate debugging [19]. In this paper, we pre- sented Memory Sandbox an interactive system that probes the design space of interaction techniques for memory management of LLMs. Our future work includes user studies to evaluate the efficacy of these techniques and potential trade-offs for implicit vs explicit memory management
REFERENCES [1] Sanghwan Bae, Donghyun Kwak, Soyoung Kang, Min Young Lee, Sungdong Kim, Yuin Jeong, Hyeri Kim, Sang-Woo Lee, Woomyoung Park, and Nako Sung. 2022. Keep me updated! memory management in long-term conversations. arXiv preprint arXiv:2210.08750 (2022).
[2] Benjamin B Bederson, James D Hollan, Allison Druin, Jason Stewart, David Rogers, and David Proft. 1996. Local tools: An alternative to tool palettes. In Proceedings of the 9th annual ACM symposium on User interface software and technology. 169â170.
[3] Eric A Bier, Maureen C Stone, Ken Pier, William Buxton, and Tony D DeRose. 1993. Toolglass and magic lenses: the see-through interface. In Proceedings of the 20th annual conference on Computer graphics and interactive techniques. 73â80. [4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877â1901.
[5] Herbert H Clark and Edward F Schaefer. 1989. Contributing to discourse. Cogni- tive science 13, 2 (1989), 259â294.
[6] Malin Eiband, Hanna Schneider, Mark Bilandzic, Julian Fazekas-Con, Mareike Haug, and Heinrich Hussmann. 2018. Bringing transparency design into practice.
[7]
In 23rd international conference on intelligent user interfaces. 211â223. Jeffrey Heer. 2019. Agency plus automation: Designing artificial intelligence into interactive systems. Proceedings of the National Academy of Sciences 116, 6 (2019), 1844â1850.
[8] Eric Horvitz. 1999. Principles of Mixed-Initiative User Interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Pittsburgh, Pennsylvania, USA) (CHI â99). Association for Computing Machinery, New York, NY, USA, 159â166. https://doi.org/10.1145/302979.303030
[9] Ziheng Huang, Kexin Quan, Joel Chan, and Stephen MacNeil. 2023. CausalMap- per: Challenging designers to think in systems with Causal Maps and Large Language Model. In Proceedings of the 15th Conference on Creativity and Cognition. 325â329.
[10] Ellen Jiang, Kristen Olson, Edwin Toh, Alejandra Molina, Aaron Donsbach, Michael Terry, and Carrie J Cai. 2022. Promptmaker: Prompt-based prototyping with large language models. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1â8.
[11] Nelson F. Liu, Kevin Lin, John Hewitt, Ashwin Paranjape, Michele Bevilacqua, Fabio Petroni, and Percy Liang. 2023. Lost in the Middle: How Language Models Use Long Contexts. arXiv:2307.03172 [cs.CL]
[12] Zhengyi Ma, Zhicheng Dou, Yutao Zhu, Hanxun Zhong, and Ji-Rong Wen. 2021. One chatbot per person: Creating personalized chatbots based on implicit user profiles. In Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval. 555â564.
[13] Stephen MacNeil, Andrew Tran, Joanne Kim, Ziheng Huang, Seth Bernstein, and Dan Mogil. 2023. Prompt Middleware: Mapping Prompts for Large Language Models to UI Affordances. arXiv preprint arXiv:2307.01142 (2023). John C McCarthy, Victoria C Miles, and Andrew F Monk. 1991. An experimental study of common ground in text-based communication. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 209â215. Joon Sung Park, Joseph C OâBrien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442 (2023).
[14]
[15]
[16] Andrew Ross, Nina Chen, Elisa Zhao Hang, Elena L Glassman, and Finale Doshi- Velez. 2021. Evaluating the interpretability of generative models by interactive reconstruction. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1â15.
[17] Heleen Rutjes, Martijn Willemsen, and Wijnand IJsselsteijn. 2019. Considerations on explainable AI and usersâ mental models. In CHI 2019 Workshop: Where is the Human? Bridging the Gap Between AI and HCI. Association for Computing Machinery, Inc.
[18] Freda Shi, Xinyun Chen, Kanishka Misra, Nathan Scales, David Dohan, Ed H Chi, Nathanael Schärli, and Denny Zhou. 2023. Large language models can be easily distracted by irrelevant context. In International Conference on Machine Learning. PMLR, 31210â31227.
[19] Tongshuang Wu, Michael Terry, and Carrie Jun Cai. 2022. Ai chains: Transparent and controllable human-ai interaction by chaining large language model prompts. In Proceedings of the 2022 CHI conference on human factors in computing systems. [20] Haijun Xia, Bruno Araujo, Tovi Grossman, and Daniel Wigdor. 2016. Object- oriented drawing. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 4610â4621. Jing Xu, Arthur Szlam, and Jason Weston. 2021. Beyond goldfish memory: Long-term open-domain conversation. arXiv preprint arXiv:2107.07567 (2021). [22] Xinchao Xu, Zhibin Gou, Wenquan Wu, Zheng-Yu Niu, Hua Wu, Haifeng Wang, and Shihang Wang. 2022. Long Time No See! Open-Domain Conversation with Long-Term Persona Memory. In Findings of the Association for Computational Linguistics: ACL 2022. Association for Computational Linguistics, Dublin, Ireland, 2639â2650. https://doi.org/10.18653/v1/2022.findings-acl.207
[23] Yunfeng Zhang, Q Vera Liao, and Rachel KE Bellamy. 2020. Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making. In Proceedings of the 2020 conference on fairness, accountability, and transparency. 295â305.
[24] Hanxun Zhong, Zhicheng Dou, Yutao Zhu, Hongjin Qian, and Ji-Rong Wen. 2022. Less is more: Learning to refine dialogue history for personalized dialogue generation. arXiv preprint arXiv:2204.08128 (2022). | {
"id": "2210.08750"
} |
2308.01320 | DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales | ChatGPT-like models have revolutionized various applications in artificial
intelligence, from summarization and coding to translation, matching or even
surpassing human performance. However, the current landscape lacks an
accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement
Learning with Human Feedback) training pipeline for these powerful models,
particularly when training at the scale of billions of parameters. This paper
introduces DeepSpeed-Chat, a novel system that democratizes RLHF training,
making it accessible to the AI community. DeepSpeed-Chat offers three key
capabilities: an easy-to-use training and inference experience for ChatGPT-like
models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from
InstructGPT, and a robust DeepSpeed-RLHF system that combines various
optimizations for training and inference in a unified way. The system delivers
unparalleled efficiency and scalability, enabling training of models with
hundreds of billions of parameters in record time and at a fraction of the
cost. With this development, DeepSpeed-Chat paves the way for broader access to
advanced RLHF training, even for data scientists with limited resources,
thereby fostering innovation and further development in the field of AI. | http://arxiv.org/pdf/2308.01320 | Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He | cs.LG, cs.AI, cs.CL | 14 pages, 7 figures | null | cs.LG | 20230802 | 20230802 | 3 2 0 2
g u A 2 ] G L . s c [
1 v 0 2 3 1 0 . 8 0 3 2 : v i X r a
DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales
Zhewei Yao, Reza Yazdani Aminabadi, Olatunji Ruwase, Samyam Rajbhandari, Xiaoxia Wu, Ammar Ahmad Awan, Jeff Rasley, Minjia Zhang, Conglong Li, Connor Holmes, Zhongzhu Zhou, Michael Wyatt, Molly Smith, Lev Kurilenko, Heyang Qin, Masahiro Tanaka, Shuai Che, Shuaiwen Leon Song, Yuxiong He
Deepspeed of Microsoft
# Abstract
ChatGPT-like models have revolutionized various applications in artificial intelligence, from summarization and coding to translation, matching or even surpassing human per- formance. However, the current landscape lacks an accessible, efficient, and cost-effective end-to-end RLHF (Reinforcement Learning with Human Feedback) training pipeline for these powerful models, particularly when training at the scale of billions of parameters. This paper introduces DeepSpeed-Chat, a novel system that democratizes RLHF train- ing, making it accessible to the AI community. DeepSpeed-Chat offers three key ca- pabilities: an easy-to-use training and inference experience for ChatGPT-like models, a DeepSpeed-RLHF pipeline that replicates the training pipeline from InstructGPT, and a robust DeepSpeed-RLHF system that combines various optimizations for training and inference in a unified way. The system delivers unparalleled efficiency and scalability, en- abling training of models with hundreds of billions of parameters in record time and at a fraction of the cost. With this development, DeepSpeed-Chat paves the way for broader access to advanced RLHF training, even for data scientists with limited resources, thereby fostering innovation and further development in the field of AI.
# 1 Overview
ChatGPT [1] like models have taken the AI world by storm, and it would not be an overstate- ment to say that itâs impact on the digital world has been revolutionary. These models are incredibly versatile, capable of performing tasks like summarization, coding, and translation with results that are on-par or even exceeding the capabilities of human experts. Given the sheer power of these models, multiple efforts are underway in the AI open-source community to make ChatGPT-style models more accessible (e.g. ChatLLaMa [2], Alpaca [3], Vicuna [4], Databricks-Dolly [5], etc.).
Despite these incredible efforts, there is still a lack of an end-to-end RLHF pipeline capable of training powerful ChatGPT like model that is easily accessible to the AI community. For instance, training a modest 6.7B ChatGPT model with existing systems typically requires expensive multi-GPU setup that is beyond the reach of many data scientists. Even with access to such computing resources, training efficiency is often less than 5% of what these machines are capable of (as illustrated in Figure 6). And finally, existing solutions simply cannot support
1
Table 1: Single-Node 8x A100: Training Time and Corresponding Approximate Cost on Azure. OPT-13B
GPUs OPT-6.7B OPT-30B OPT-66B 8x A100-40GB 8x A100-80GB 4.1 hours ($132) 5.7 hours 10.8 hours 9 hours ($290) 1.85 days 18 hours ($580) NA 2.1 days ($1620)
easy, fast and affordable training state-of-the-art ChatGPT models with hundreds of billions of parameters, even given access to multi-GPU clusters.
These limitations stem from a lack of a robust system design that is capable of effectively supporting the complex InstructGPTâs RLHF training pipeline that is quite different from the standard pre-training and fine-tuning pipelines that existing DL systems are designed for. Therefore, in the spirit of democratizing ChatGPT-like models, and making RLHF training truly accessible to the AI community, today we are releasing DeepSpeed-Chat with the following three capabilities:
⢠Easy-to-use Training and Inference Experience for ChatGPT Like Models: A single script capable of taking a pre-trained Huggingface [6] model, running it through all three steps of InstructGPT [7] training using DeepSpeed-RLHF system and producing your very own ChatGPT like model. In addition, we provide an inference API for testing conversation-style interactions after the model is trained.
⢠DeepSpeed-RLHF Pipeline: DeepSpeed-RLHF pipeline primarily replicates the train- ing pipeline from the InstructGPT [7] paper with careful attention to ensure completeness and one-to-one correspondence with the three-steps that includes a) Supervised Fine- tuning (SFT), b) Reward Model Fine-tuning and c) Reinforcement Learning with Human Feedback (RLHF) [8]. Additionally, we offer data abstraction and blending capabilities to enable training with multiple data sources.
⢠DeepSpeed-RLHF System: A robust and sophisticated RLHF system that combines the training and inference prowess of DeepSpeed into single unified Hybrid Engine (DeepSpeed- HE) for RLHF. The Hybrid-Engine is capable of seamlessly transitioning between infer- ence and training modes within RLHF, allowing it to leverage various optimizations from DeepSpeed-Inference such as tensor-parallelism and high-performance transformer kernels for generation, while also benefiting from the multitude of ZeRO- and LoRA [9]-based memory optimization strategies for RL training. DeepSpeed-HE is also aware of the full RLHF pipeline, allowing it to make optimal decisions in terms of memory management and data movement across different phases of RLHF.
DeepSpeed-RLHF system is capable of unparalleled efficiency at scale, making complex RLHF training fast, affordable, and easily accessible to the AI community:
Efficiency and Affordability: In terms of efficiency, DeepSpeed-HE is over 15x faster than existing systems, making RLHF training both fast and affordable. For instance, DeepSpeed-HE can train an OPT-13B [10] in just 9 hours and OPT-30B in 18 hours on Azure Cloud for under $300 and $600, respectively, as shown in Table 1.
Excellent Scalability: DeepSpeed-HE supports models with hundreds of billions of pa- rameters and can achieve excellent scalability on multi-node multi-GPU systems. As a result, even a 13B model can be trained in 1.25 hours and a massive 175B model can be trained with DeepSpeed-HE in under a day as shown in Table 2.1
1Very Important Details: The numbers in both tables (1, 2) above are for Step 3 of the training
2
Table 2: Multi-Node 64x A100-80GB: Training Time and Corresponding Approximate Cost on Azure. GPUs 64x A100-80G 1.25 hours ($320)
OPT-13B OPT-30B 4 hours ($1024) OPT-66B 7.5 hours ($1920) OPT-175B 20 hours ($5120)
Table 3: Max Model Size Supported by DeepSpeed-HE on a Single GPU.
V100 32G A6000 48G A100 40G A100 80G Model Size OPT-2.7B OPT-6.7B OPT-6.7B OPT-13B
Democratizing RLHF Training: With just a single GPU, DeepSpeed-HE supports train- ing models with over 13 billion parameters as shown in Table 3, enabling data scientists without access to multi-GPU systems to create not just toy RLHF models but large and powerful ones that can be used in real-world scenarios.
Next, we dive deeper into the three capabilities of DeepSpeed-Chat introduced above.
# 2 Easy-to-use ChatGPT Training and Inference Experi- ence
We start with the easy-to-use experience by showing how you can train OPT-13B and then OPT-66B models with DeepSpeed-RLHF system. If you are short on time, you can even train an OPT-1.3B model on a single consumer-grade GPU in just two hours. We also demonstrate how you can use our DeepSpeed-chat RLHF API to develop your own custom pipelines.
2.1 Training your first ChatGPT-Style model is so easy with DeepSpeed- Chatâs RLHF examples
One single script completes all three stages of RLHF training and generates your first ChatGPT model! We use an example of pretrained OPT-13B as the actor model and OPT-350M as the reward model in the following single script to generate a final 13B ChatGPT-style model:
p i p i n s t a l l deepspeed >=0.9.0
g i t c l o n e h t t p s : / / g i t h u b . com/ m i c r o s o f t / DeepSpeedExamples . g i t cd DeepSpeedExamples / a p p l i c a t i o n s / DeepSpeedâChat/ p i p i n s t a l l âr
and based on actual measured training throughput on DeepSpeed-RLHF curated dataset and training recipe which trains for one epoch on a total of 135M tokens. We have in total 67.5M query tokens (131.9k queries with sequence length 256) and 67.5M generated tokens (131.9k answers with sequence length 256), and a maximum global batch size per step of 0.5M tokens (1024 query-answer pairs). We urge readers to pay attention to these specifications before making any cost and e2e time comparisons with DeepSpeed-RLHF. See our benchmark settings (https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/ DeepSpeed-Chat/training/step3_rlhf_finetuning/BenckmarkSetting.md) page for more details.
3
Table 4: DeepSpeed-Chat on a single DGX node with 8 NVIDIA A100-40G GPUs.
Model Sizes Step 1 Step 2 Step 3 Total Actor: OPT-13B, Reward: OPT-350M 2.5hr 0.25hr 10.8hr 13.6hr
Table 5: DeepSpeed-Chat on 8 DGX nodes with 8 NVIDIA A100-80G GPUs/node. E2E time breakdown for training a 66 billion parameter ChatGPT model via
Model Sizes Step 1 Step 2 Step 3 Total Actor: OPT-66B, Reward: OPT-350M 82 mins 5 mins 7.5hr 9hr
python t r a i n . py ââa c t o r âmodel f a c e b o o k / opt â350m ââdeploymentâtype s i n g l e n o d e f a c e b o o k / opt â13b âârewardâmodel In about half a day, your 13B model would be fully trained with its checkpoints ready. Table 4 demonstrates a breakdown of the training time for each of the three steps.
Test your final model with DeepSpeed-Chatâs Inference API Now it is time to plug- in and test your final model trained by DeepSpeed-Chat! You can play with not only Q&A sessions but also conversation-style interactions.
Human : A s s i s t a n t : Yes , Human :
Do you know M i c r o s o f t ?
I âm a b i g f a n o f M i c r o s o f t . What about them?
t o a 6ây e a r o l d c h i l d ? I wonder how i t Can you e x p l a i n i t s h o u l d d e s c r i b e I
A s s i s t a n t : Sure , I can t r y . M i c r o s o f t i s a company t h a t makes computers , and they make a program c a l l e d â â Windows â â which i s I t â s a l l you â l l you â l l t h e o p e r a t i n g system t h a t r un s on t h e computer . I t â s where I t â s a l s o where l i k e t h e â â b r a i n â â o f t h e computer . t h e programs and f i l e s a r e s t o r e d . f i n d t h e s t a r t â â s t a r t â â button , which i s where t o u s e t h e computer .
# 2.2 Want to try different model sizes and configurations? You got it!
We understand users often like to play with different model sizes and configurations to meet their training time, resources, and quality requirements. With DeepSpeed-Chat, users can easily do that. For example, if you want to train a larger and higher-quality model on your GPU cluster for your research or business, you can simply use the same script with your desired model size e.g., 66B and GPU counts e.g., 64 GPUs:
python t r a i n . py ââa c t o r âmodel âârewardâmodel
f a c e b o o k / opt â66b f a c e b o o k / opt â350m ââdeploymentâtype m u l t i n o d e
Within 9 hours, you can have your 66 billion parameters ChatGPT model ready to be served in your favorite front-end GUI as shown in Table 5.
4
Table 6: DeepSpeed-Chat on a single commodity NVIDIA A6000 GPU with 48GB memory.
Model Sizes Step 1 Step 2 Step 3 Total Actor: OPT-1.3B, Reward: OPT-350M 2900 secs 670 secs 1.2hr 2.2hr
If you only have around 1-2 hours for coffee or lunch break, you can also try to train a small/toy model with DeepSpeed-Chat. For example, we prepared a training example for a 1.3B model with a single dataset to test our framework on your consumer-grade GPUs. The best part is that you will have your model checkpoint ready to play with when you are back from your lunch break (see Table 6)! python t r a i n . py ââa c t o r âmodel âârewardâmodel
f a c e b o o k / opt â1.3b # 2.3 Customizing your own RLHF training pipeline using DeepSpeed- Chatâs RLHF APIs
DeepSpeed-Chat allows users to build their very own RLHF training pipeline using our flexible APIs shown below, which users can use to reconstruct their own RLHF training strategy. This enables a general interface and backend for creating a wide range of RLHF algorithms for research exploration.
e n g i n e = DeepSpeedRLHFEngine (
a c t o r m o d e l n a m e o r p a t h=a r g s . a c t o r m o d e l n a m e o r p a t h , c r i t i c m o d e l n a m e o r p a t h=a r g s . c r i t i c m o d e l n a m e o r p a t h , t o k e n i z e r=t o k e n i z e r , n u m t o t a l a r g s=a r g s )
t r a i n e r = DeepSpeedPPOTrainer ( e n g i n e=e n g i n e , a r g s=a r g s )
f o r prompt batch i n p r o m p t t r a i n d a t a l o a d e r : out = t r a i n e r . g e n e r a t e e x p e r i e n c e ( prompt batch ) a c t o r l o s s , c r i t i c l o s s = t r a i n e r . t r a i n r l h f ( out )
# 3 Full-fledged RLHF Training Pipeline
To provide a seamless training experience, we follow InstructGPT and include a full-fledged end-to-end training pipeline in DeepSpeed-Chat as shown in Figure 1.
Our pipeline includes three main steps:
⢠Step 1: Supervised finetuning (SFT), where human responses to various queries are carefully selected to finetune the pretrained language models.
⢠Step 2: Reward model finetuning, where a separate (usually smaller than the SFT) model (RW) is trained with a dataset that has human-provided rankings of multiple answers to the same query.
5
Step1 Step 2 Pretrained Pair good/bad! Pretrained |__ + (8G Paes alae ser) Ere at rant |= RW | Human labeled data Step 3 Frozen Frozen (Optional) EMA = Reference ae Reward | aEMA+(1- @)Actor ast Actor model iaarorelel| ae Critic model | model | ve ~ ~ ~ oo S . (Optional ) Pretraining Objective for Actor Pretraining data â C= PPO... Je] DeepSpeed
Figure 1: The illustration of DeepSpeed-Chatâs RLHF training pipeline with optional features.
⢠Step 3: RLHF training, where the SFT model is further finetuned with the reward feedback from the RW model using the Proximal Policy Optimization (PPO) [11] algo- rithm.
We provide two additional features in Step 3 to help improve model quality:
⢠Exponential Moving Average (EMA) collection, where an EMA based checkpoint can be chosen for the final evaluation.
⢠Mixture Training, which mixes the pretraining objective (i.e., the next word prediction) with the PPO objective to prevent regression performance on public benchmarks like SQuAD2.0 [12].
The two training features, EMA and Mixed Training, are often omitted by other recent efforts since they can be optional. However, according to InstructGPT, EMA checkpoints generally provide better response quality than conventional final trained model and Mixture Training can help the model retain the pre-training benchmark solving ability. As such, we provide them for users to fully get the training experience as described in InstructGPT and strike for higher model quality.
In addition to being highly consistent with InstructGPT paper [7], we also provide conve- nient features to support researchers and practitioners to train their own RLHF model with multiple data resources:
⢠Data Abstraction and Blending Capabilities: DeepSpeed-Chat is able to train the model with multiple datasets for better model quality. It is equipped with (1) an abstract dataset layer to unify the format of different datasets; and (2) data splitting/blending capabilities so that the multiple datasets are properly blended then split across the 3 training stages.
To illustrate the effectiveness of our training pipeline, we demonstrate the model quality with multi-round conversation as shown in the experience section.
6
# 4 DeepSpeed Hybrid Engine â Unified Infrastructure to Power and Optimize RLHF Training
Step 1 and Step 2 of the instruct-guided RLHF pipeline resemble regular fine-tuning of large models, and they are powered by ZeRO-based optimizations and flexible combination of par- allelism strategies in DeepSpeed training to achieve scale and speed. Step 3 of the pipeline, on the other hand, is the most complex part to handle in terms of performance implications. Each iteration requires efficient processing of two phases a) inference phase for token/experience generation, producing inputs for the training and b) training phase to update the weights of actor and reward models, as well as the interaction and scheduling between them. It introduces two major costs: (1) the memory cost, as several copies of the SFT and RW models need to be served throughout stage 3; and (2) the predominant generation phase, which if not acceler- ated properly, will significantly slow down the entire stage 3. Additionally, the two important features we added in Stage 3, including Exponential Moving Average (EMA) collection and Mixture Training, will incur additional memory and training costs.
To tackle these challenges, we composed the full system capability of DeepSpeed Train- ing and Inference into a unified infrastructure that we call Hybrid Engine. It leverages the original DeepSpeed engines for fast training mode while effortlessly applying DeepSpeed in- ference engine for generation/evaluation mode, providing a significantly faster training system for RLHF training at Stage 3. As Figure 2 shows, the transition between DeepSpeed training and inference engine is seamless: by having the typical eval and train modes enabled for the actor model, when running for inference and training pipeline, DeepSpeed selects its different optimizations to run the model faster and improve the overall system throughput.
During its inference execution for experience generation phase of RLHF training, Deep- Speed Hybrid Engine uses a light-weight memory management system to handle the KV-cache and intermediate results, together with highly optimized inference-adapted kernels and ten- sor parallelism implementation, to achieve significant boost in throughput (tokens-per-second) compared to the existing solutions.
During the training execution, Hybrid Engine enables memory optimization techniques such as DeepSpeedâs ZeRO family of technologies and Low Rank Adaption (LoRA). We designed and implemented these system optimizations in a way that they are compatible with each other and can be composed together to deliver the highest training efficiency under the unified Hybrid Engine.
Hybrid Engine can seamlessly change model partitioning across training and inference to support tensor-parallelism based inferencing and ZeRO-based sharding mechanism for training. It can also reconfigure the memory system to maximize memory availability during each of these modes. This allows for improved performance by avoiding memory allocation bottlenecks and supporting large batch sizes. Packed with a spectrum of system technologies from DeepSpeed training and inference, Hybrid Engine pushes the boundary of modern RLHF training and delivers unparalleled scale and system efficiency for RLHF workloads.
# 5 DeepSpeed RLHF: Unparalleled Scale and Efficiency via Hybrid Engine
# 5.1 Capability Recap
As discussed, DeepSpeed-HE is an amalgamation of powerful system technologies for inference and training, architected to achieve excellent scale and efficiency for DeepSpeed-RLHF pipeline
7
Actor Model train() eval() HybridEngine ie] . Training = |e Data Remapping \e-s! Inference Engine \ Switch Parallelism } Engine \.e Memory management / Training = Experience Loop Generation enzeRO echnolocy © Efficient KV-caching © Offloadin @ Tensor parallelism © LoRA e © Optimized memory usage © Efficient optimizer Secucom ed kespels 2 @ Higher inference throughput
Figure 2: DeepSpeed Hybrid Engine design for accelerating the most time-consuming portion of a RLHF pipeline.
across a wide range of hardware, making RLHF training fast, affordable, and easily accessible to AI community.
In terms of efficiency and affordability, as shown in Table 1, DeepSpeed-HE can train OPT- 13B in just 9 hours and OPT-30B in 18 hours on Azure Cloud for under $300 and $600, respectively. In terms of speed and scalability, as shown in Table 2, even a 13B model can be trained in 1.25 hours and a massive 175B model can be trained in under a day using a 64 GPU cluster. And in terms of accessibility and democratization of RLHF, DeepSpeed-HE supports training models with over 13 billion parameters on a single GPU as shown in Table 3.
# 5.2 Throughput and Model Size Scalability Comparisons with Exist- ing RLHF Systems
Compared to other RLHF systems like Colossal-AI [13] or HuggingFace [6] powered by native PyTorch [14], DeepSpeed-RLHF excels in system performance and model scalability:
⢠With respect to throughput, DeepSpeed enables over 10x improvement for RLHF training on a single GPU (Figure 3). On multi-GPU setup, it enables 6 â 19x speedup over Colossal-AI and 1.4 â 10.5x over HuggingFace DDP (Figure 4).
⢠With respect to model scalability, Colossal-AI can run a max model size of 1.3B on a single GPU and 6.7B on a single A100 40G node, DeepSpeed-HE can run 6.5B and 50B models respectively on the same hardware, up to 7.5x larger.
Therefore, with over an order of magnitude higher throughput, DeepSpeed-HE unlocks the ability to train significantly larger actor models under the same latency budget or train models
8
50.6x OPT-1.3B-Single-A100-40G GPU 95 OPT-6.7B-Single-A100-40G GPU Ee & & g & = âp Ds-chat CAL-Coati HE-DDP Ds-chat CAL-Coati HF-DDP x x End-to-end Throughput (Seqs/Sec) ree re 2 End-to-end Throughput (Seqs/Sec)
Figure 3: Step 3 throughput comparison against two other system frameworks for accelerating RLHF training on a single NVIDIA A100-40G commodity GPU. No icons represent OOM scenarios.
of similar size at over 10x lower cost, compared to existing RLHF systems like Colossal-AI or HuggingFace DDP.
This improvement in efficiency stems from DeepSpeed-HEâs ability to accelerate RLHF gen- eration phase of the RLHF processing leveraging DeepSpeed inference optimizations. Figure 5 shows the time breakdown for a 1.3B parameter model at an RLHF training iteration: major- ity of the time goes to the generation phase. By leveraging high performance inference kernels from DeepSpeed, DeepSpeed-HE can achieve up to 9x throughput improvement during this phase over HuggingFace and 15x over Colossal-AI allowing it to achieve unparallel end-to-end efficiency.
# 5.3 Effective Throughput and Scalability Analysis
(I) Effective Throughput Analysis. The effective throughput of DeepSpeed-HE dur- ing Stage 3 of the RLHF training depends on the throughput that it achieves during the generation and RL training phases. In our RLHF pipeline, the generation phase comprises approximately 20% of the total computation while the RL training phase comprises of re- maining 80% (see benchmark settings https://github.com/microsoft/DeepSpeedExamples/ tree/master/applications/DeepSpeed-Chat/training/step3_rlhf_finetuning/BenckmarkSetting. md page for details). However, despite having a small proportion, the former can take a large portion of the e2e time as it requires running the actor model once for each of the 256 generated tokens with initial prompt of 256 tokens, making it memory bandwidth bound and difficult to achieve high throughput. In contrast, the RL training phase is compute bound running the reference actor model with just a couple of forward and backward passes with full 512 tokens from both prompt and generation per sample and can achieve good throughput.
To maximize the effective throughput, DeepSpeed-HE optimizes both phases. First, it uses the largest batch size possible to get higher efficiency on both phases. Second, during the generation phase, it leverages high-performance transformer kernels to maximize GPU memory bandwidth utilization when the model fits in single GPU memory, and leverage tensor- parallelism (TP) when it does not. Using TP in the generation phase instead of ZeRO to fit the model reduces the inter-GPU communication and maintains high GPU memory bandwidth utilization.
Figure 6 shows the best achievable effective throughput for DeepSpeed-HE in terms of TFlops/GPU for model sizes ranging from 1.3B to 175B. It also shows the throughput achieved
9
e2e-RLHF-OPT-125M >â 3 80 43x Et } * â0 g* a $ Z wo Eo âDS-Chat CAl-Coati âHF-DDP ts an e2e-RLHF-OPT-6.7B z 2 is Pe = i. ââ 3» 5 22 2 PF aw ° âDS-Chat âCAL-Coati âHF-DDPâ End-to-end Throughput (Seqs/Sec) End-to-end Throughput (Seqs/Sec) cr Nw ew ow e2e-RLHF-OPT-1.3B x 1K Ds-chat CAL-Coati HF-DDP e2e-RLHF-OPT-13B zy Dz Ds-chat CAL-Coati HF-DDP
Figure 4: End-to-end training throughput comparison for step 3 of the training pipeline (the most time consuming portion) with different model sizes on a single DGX node equipped with 8 NVIDIA A100-40G GPUs. No icons represent OOM scenarios.
Time/Seq Breakdown for Training OPT-1.3B in Step 3 on a DGX Node with 8*A100-40G GPUs HF-DDP oscnat os 1 L5 Time/Seq (sec) co mGeneration mRLTraining Others 2 25 3 3.5
Figure 5: Superior generation phase acceleration from DeepSpeed Chatâs Hybrid Engine: A time/sequence breakdown for training OPT-1.3B actor model + OPT-350M reward model on a single DGX node with 8 A100-40G GPUs.
10
120 100 0 oS Throughput Per GPU (TFLOPs) & ty nv So RLHF Throughput Breakdown 110.4 100.7 103.6 97.3 82.0 no 744 67.1 67.8 62.9 52.8 48.2 43.4 32.9 33.0 293 238 | I 25.4 OPT-1.3B OPT-6.7B OPT-13B OPT-30B OPT-66B Bloom-175B (8 GPUs) (8 GPUs) (8 GPUs) (32 GPUs) (16 GPUs) (64 GPUs) § Generation Throughput Training Throughput WW Effective Throughput
Figure 6: RLHF Generation, training, and effective throughput with DeepSpeed-HE for differ- ent model sizes, at the GPU count that maximizes efficiency.
by each of the generation and training phases. DeepSpeed-HE is the most efficient for models in the range 6.7B-66B. Going beyond this range to 175B, the throughput drops due to the limited memory to support larger batch sizes, while still achieving 1.2x better efficiency than the small 1.3B model. The per-GPU throughput of these gigantic models could improve further when we scale them to more GPUs with more memory available for larger batch sizes.
Furthermore, we would like to point out that our effective performance is 19x higher than existing systems, as shown in Figure 4, which suggests that they are operating at lower than 5% of the peak. This demonstrates the challenge of optimizing RLHF workloads as well as the effectiveness of our system despite the challenge.
(II) Scalability Analysis The best effective throughput for different model sizes is achieved at different GPU count. This is in part because some of the larger model sizes require more memory to run. However, a large part of this behavior stems from DeepSpeed-HEâs scalability properties that we discuss next.
Figure 7 shows that DeepSeed-RLHF has achieved good scaling overall on up to 64 GPUs. However, if we look more closely, it shows that DeepSpeed-RLHF training achieves super-linear scaling at small scale, followed by near linear or sub-linear scaling at larger scales. This is due to interaction between memory availability and max global batch size.
As DeepSpeed-HE is powered by ZeRO-based technology [15] for training, it allows model states to be partitioned across the available GPUs. As a result, the memory consumption per GPU reduces with the increase in the number of GPUs, allowing DeepSpeed-HE to support a larger batch per GPU resulting in super-linear scaling. However, at large scale, while the available memory continues to increase, the maximum global batch size (1024, in our case,
11
Scalability across nodes Actor model: OPT-13b 1 Actor model: OPT-66b <ofâ* A100-40G v4 âe A100-80G ra v ---- linear scale â DO 104 ~~~ linear scale i @ a 34 ra g 405 o a iT) & a 5 30) 5 a a S S > 207 S °o o = â_ = - 104 1 2 4 Nodes (8 GPUs per 8 8 node) Nodes (8 GPUs per node)
Figure 7: Scalability for training 13B (left) and 66B (right) actor model+350M reward model on an increasing number of DGX nodes with 8 A100-40/80G GPUs
with a sequence length of 512) limits the batch size per GPU, resulting in near-linear or sub- linear scaling. As a result, for a given max global batch size, DeepSpeed-HE achieves the best throughput and cost efficiency at the boundary of super-linear and sub-linear scalability, and the exact point is mostly determined by the largest batch size that can be run per GPU as the function of available memory and global batch size.
# 6 Release: Try DeepSpeed Chat Today!
We are very excited to share that DeepSpeed-Chat is now open-sourced and available to the AI community.
⢠To get started, please visit our github page for DeepSpeed-Chat: GitHub Landing Page (https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed- Chat)
⢠We will continue to improve DeepSpeed-Chat with your feedback and support. Our roadmap (https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/ DeepSpeed-Chat/README.md#-deepspeed-chats-roadmap-) shows currently supported features as well as ones that are planned for future.
DeepSpeed-Chat is part of the bigger DeepSpeed ecosystem comprising of a multitude of Deep Learning systems and modeling technologies. To learn more,
⢠Please visit our website (https://www.deepspeed.ai/) for detailed blog posts, tutorials, and helpful documentation.
12
⢠You can also follow us on our English Twitter (https://twitter.com/MSFTDeepSpeed), Japanese Twitter (https://twitter.com/MSFTDeepSpeedJP), and Chinese Zhihu (https: //www.zhihu.com/people/deepspeed) for latest news on DeepSpeed.
DeepSpeed welcomes your contributions! We encourage you to report issues, contribute PRs, and join discussions on the DeepSpeed GitHub (https://github.com/microsoft/DeepSpeed/) page. Please see our contributing guide (https://github.com/microsoft/DeepSpeed/blob/ master/CONTRIBUTING.md) for more details. We are open to collaborations with universities, research labs, companies, such as those working together on deep learning research, applying DeepSpeed to empower real-world AI models and applications, and so on.
# Contributions
ZY: Full engagement. RYA: Hybrid engine. OR: DeepSpeed ZeRO feature adaptation. SR: System support and blog contribution. XW: Training pipeline and bench-marking support. AAA: Software support and post-release debugging. JR: Software support and post-release debugging. MZ: Training pipeline and system support, post-release debugging. CL: Data support and post-release debugging. CH: System support. ZZ: Benchmarking. MW: Software support and post-release debugging. MS: Post-release debugging. LK: Post-release debugging. HQ: System support. MT: System support. SC: Software support. SLS: System support, blog and tutorial contribution. YH: Team lead.
# Acknowledgment
We thank the entire DeepSpeed team for their contributions on developing, debugging, testing, and releasing the DeepSpeed-Chat software.
# References
[1] OpenAI. Chatgpt. https://openai.com/blog/chatgpt, 2022.
[2] ChatLLaMa Authors. Chatllama. https://github.com/juncongmoo/chatllama, 2023.
[3] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
[4] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.
[5] Databricks. Databricks-dolly. https://www.databricks.com/blog/2023/ 03/24/hello-dolly-democratizing-magic-chatgpt-open-models.html? scid=7018Y000001Fi1CQAS&utm_medium=paid+search&utm_source=google& utm_campaign=17107065832&utm_adgroup=150868748114&utm_content= blog&utm_offer=hello-dolly-democratizing-magic-chatgpt-open- models.html&utm_ad=661606835947&utm_term=databricks%20dolly&gclid= Cj0KCQjwiIOmBhDjARIsAP6YhSV89V2agFl3zFuWiZiV1N3IVNhZWr8pGtXVxXrlkuPHlW3cXbGfiHsaAmIDEALw_ wcB, 2023.
13
[6] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, An- thony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, et al. Hug- arXiv preprint gingfaceâs transformers: State-of-the-art natural language processing. arXiv:1910.03771, 2019.
[7] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language mod- els to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730â27744, 2022.
[8] Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008â3021, 2020.
[9] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
[10] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
[11] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[12] Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you donât know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018.
[13] Colossal AI Authors. Colossal ai. https://github.com/hpcaitech/ColossalAI, 2022.
[14] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural informa- tion processing systems, 32, 2019.
[15] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory op- timizations toward training trillion parameter models. In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1â16. IEEE, 2020.
14 | {
"id": "1707.06347"
} |
2308.01390 | OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models | We introduce OpenFlamingo, a family of autoregressive vision-language models
ranging from 3B to 9B parameters. OpenFlamingo is an ongoing effort to produce
an open-source replication of DeepMind's Flamingo models. On seven
vision-language datasets, OpenFlamingo models average between 80 - 89% of
corresponding Flamingo performance. This technical report describes our models,
training data, hyperparameters, and evaluation suite. We share our models and
code at https://github.com/mlfoundations/open_flamingo. | http://arxiv.org/pdf/2308.01390 | Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, Ludwig Schmidt | cs.CV, cs.AI, cs.LG | null | null | cs.CV | 20230802 | 20230807 | 3 2 0 2
g u A 7 ] V C . s c [
2 v 0 9 3 1 0 . 8 0 3 2 : v i X r a
OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models
Anas Awadallaâ1 Irena Gaoâ2 Josh Gardner1 Jack Hessel3 Yusuf Hanafy1 Wanrong Zhu5 Shiori Sagawa2 Kalyani Marathe1 Jenia Jitsev4,9 Yonatan Bitton6 Simon Kornblith8 Samir Gadre7 Pang Wei Koh1,8 Gabriel Ilharco1 Mitchell Wortsman1 Ludwig Schmidt1,3,4
# Abstract
OpenFlamingo average performance compared to Flamingo
We introduce OpenFlamingo, a family of au- toregressive vision-language models ranging from 3B to 9B parameters. OpenFlamingo is an on- going effort to produce an open-source replica- tion of DeepMindâs Flamingo models [3]. On seven vision-language datasets, OpenFlamingo models average between 80 - 89% of correspond- ing Flamingo performance. This technical re- port describes our models, training data, hy- perparameters, and evaluation suite. We share our models and code at https://github.com/ mlfoundations/open_flamingo.
1
# Introduction
A popular format for vision and language mod- els is (image, text) â text, i.e., models take as input an image and some text, and produce text as output, e.g., BLIP-2 [22].The flexible format directly supports tasks like image classification and visual question answering (VQA).
E oy B 90% 5 80% a 2 70% & & 60% g 50% mG % 40% ro % 30% a 20% g 10% 0% OF-3B OF-3B (1) OF-4B OF-4B (1) OF-9B OpenFlamingo model
Figure 1: OpenFlamingo performance as a fraction of corresponding Flamingo performance, averaged across evaluation settings (7 datasets à 5 options for number of in-context examples). Demonstrations are chosen using RICES (Retrieval-based In-Context Example Selection). More details regarding selecting demon- strations can be found in Section 3.4. We compare OpenFlamingo-3B and -4B models to Flamingo-3B, and OpenFlamingo-9B to Flamingo-9B. Error bars are standard deviations over settings. âOF-3B (I)â refers to OpenFlamingo-3B (Instruct), the 3B model trained with a language-instruction-tuned backbone.
However, assuming a single image as input is limiting: autoregressive vision-language models enable new capabilities by instead mapping an arbitrarily interleaved sequence of images and
1University of Washington 2Stanford University 3Allen Institute for AI 4LAION 5University of California Santa Barbara 6Hebrew Univer- sity 7Columbia University 8Google DeepMind 9Juelich Su- percomputing Center, Research Center Juelich. Correspon- dence to <[email protected], [email protected], [email protected]>.
text to textual outputs. This interface provides important flexibility: the input sequence can in- clude demonstrations for a new task, enabling few- shot, in-context learning [3] or multi-round multi- modal chatbot interactions. Evaluations suggest that autoregressive vision-language models can be performant foundation models [5]: models like Flamingo [3], CM3 [1], Kosmos-1 [12], PALM- E [8], and multimodal GPT-4 [28] generalize well across diverse vision-language tasks.
@ Input Prompt â Completion > Output: Two Output: A cats are tacoon Output: An apple with the sleeping next wearing a . word "iPod" written on it. to each other spacesuit. ona sofa. (Ae Output: . Output: . | UNDERGROUNL ° OUIPU âRed Brick Pizzaâ âUndergroundâ âPike Plâ Question: Question: question») ( ») What latte art Which video p hi A What car is is presented in game is the image? represented in featured in the Tesla Model 3. Answer: A the image? image? swan. Answer: lanewerâ Among Us. /) }
Figure 2: OpenFlamingo-9B (pictured) can process interleaved image-and-text sequences. This interface allows OpenFlamingo to learn many vision-language tasks through in-context demonstrations.
two open source datasets: LAION-2B [32] and Multimodal C4 [45]. Our stack is built using publicly available components, including CLIP as a vision encoder [30] and open-source language models as decoders [27, 35].
Unfortunately, autoregressive vision- language models are closed-source, and their weights, training data, code, and hyperparam- eters are proprietary. This limits the academic communityâs ability to conduct research on au- toregressive vision-language models, e.g., to un- derstand how web-scraped image-text data affects modelsâ performance and safety. Open-source al- ternatives, such as LLaVA [25], LLaMA-Adapter [41], BLIP-2 [23], and mPLUG-Owl [39], only take in single images, and they often directly train on curated datasets like COCO [24] rather than web data.
We call the resulting family of five models OpenFlamingo. These models range from 3B to 9B parameters, with both standard and instruction-tuned [37] language model backbones. When averaging performance across 7 evalua- tion datasets, OpenFlamingo-3B and -9B mod- els attain 85% and 89% of their corresponding Flamingo models respectively (Figure 1). Models and code are open-sourced at https://github. com/mlfoundations/open_flamingo.
In this technical report, we document our expe- riences building an open-source reproduction of the Flamingo models [3]. Following Flamingo, we augment the layers of pretrained, frozen language models so that they cross attend to the outputs of a frozen vision encoder while predicting the next token. The cross-modal module is trained on web-scraped image-text sequences, in our case,
# 2 Related work
Generative vision-language models output text conditioned on an image-text sequence. While many such architectures, such as BLIP-
2
Table 1: Architecture details of the OpenFlamingo models. All five models use a CLIP ViT-L/14 vision encoder [30]. A cross-attention interval of 4 means that a cross-attention module is inserted every 4th language model layer. Note that OpenFlamingo models labeled (Instruct) use language models that were finetuned on language-only tasks; we have not instruction-tuned OpenFlamingo models on vision-language tasks.
Model Language model Cross-attention interval  <grounding> <p> It </p><box><loc44><loc863></box> seats next to <p> a campï¬re </p><box><loc4><loc1007></box> </s>
where <s> and </s> indicate start- and end-of-sequence, and  represent the beginning and end of encoded image embeddings. <grounding> is a special token to tell the model ground the text output to the visual world. We map input text tokens and location tokens to embeddings via a lookup table. Following KOSMOS-1, a vision encoder and a resampler module are used to obtain image embeddings for input images.
For language-only data, cross-modal paired data (i.e., image-text pairs), and interleaved multimodal data, we use the same input representations as of KOSMOS-1.
# 3.2 Grounded Multimodal Large Language Models
Based on KOSMOS-1, KOSMOS-2 enhances multimodal large language models by incorporating grounding and referring capabilities. KOSMOS-2 also uses a Transformer-based causal language model as the backbone and is trained with the next-token prediction task.
In addition to multimodal corpora used in KOSMOS-1 (including text corpora, image-caption pairs, and interleaved image-text data), we add grounded image-text pairs into training. The training loss only considers discrete tokens, such as text tokens and location tokens. The model can learn to locate and understand image regions by their location tokens and the whole image, associate text spans to image regions, and output bounding boxes of the image region using location tokens.
KOSMOS-2 shows new capabilities of grounding and referring. The referring capability enables us to point out image regions with bounding boxes. KOSMOS-2 can understand the image regions users refer to by the coordinates of bounding boxes. The referring capability provides a new interaction method. Different from previous MLLMs [ADL+22, HSD+22, HDW+23], which can only provide text output, KOSMOS-2 can provide visual answers (i.e., bounding boxes) and ground text output to the image. The grounding capability enables the model to provide more accurate, informative, and comprehensive responses. In addition to vision, language, and vision-language tasks evaluated in KOSMOS-1, the model can be used for more downstream tasks, such as grounded image-captioning, grounded VQA, referring expression comprehension and generation.
5
# 3.3 Model Training
Training Setup We train the model on newly added grounded image-text pairs, monomodal text corpora, image-caption pairs, and interleaved image-text data. Our training process involves a batch size of 419K tokens, consisting of 185K tokens from text corpora, 215K tokens from original and grounded image-caption pairs, and 19K tokens from interleaved data. We train KOSMOS-2 for 60k steps, equivalent to around 25 billion tokens. The AdamW optimizer is employed with β = (0.9, 0.98). We set the weight decay to 0.01 and the dropout rate to 0.1. The learning rate increases to 2e-4 during the ï¬rst 375 warm-up steps and linearly decays to zero. We train the model on 256 V100 GPUs and the training takes approximately one day to complete. In order to tell the model when to ground text output to the visual world, we prepend the â<grounding>â token to the grounded caption during training.
Following KOSMOS-1, the vision encoder has 24 layers with 1,024 hidden size and 4,096 FFN intermediate size. The multimodal large language model component is a 24-layer MAGNETO Transformer [WMH+22, MWH+22] with 2,048 hidden dimensions, 32 attention heads, and 8,192 FFN intermediate size. The total number of trainable parameters amounts to approximately 1.6B. The image resolution is set to 224Ã224 and the patch size is 14Ã14. We divide the width and height of the image into 32 bins, with each bin consisting of 7Ã7 pixels. A total of 32Ã32 location tokens are added to the vocabulary. KOSMOS-2 uses the weights of KOSMOS-1 for initialization, the newly added word embeddings of location tokens are initialized randomly. We update all the parameters during training and instruction tuning.
Instruction Tuning After the model is trained, we perform instruct tuning to better align KOSMOS-2 with human instructions. we combine vision-language instruction dataset (i.e., LLaVA- Instruct [LLWL23]) and language-only instruction datasets (i.e., Unnatural Instructions [HSLS22] and FLANv2 [LHV+23]) with the training data to tune the model. In addition, we construct grounded instruction data by utilizing the pairs of bounding boxes and expressions (i.e., noun phrases, and referring expressions) in GRIT. Given an expression-bounding-box pair, we use â<p> expression </p>â as the input instruction, and prompt the model to generate the corresponding location tokens of the bounding boxes. We also use the prompt like â<p> It </p><box><loc1><loc2></box> isâ to ask the model to generate expressions according to its bounding boxes. Table B in Appendix presents more templates.
# 4 Evaluation
We ï¬rst evaluate KOSMOS-2 on multimodal grounding and multimodal referring tasks to assess the new capabilities, and then test the model on language and perception-language tasks evaluated in KOSMOS-1.
Multimodal grounding â Phrase grounding â Referring expression comprehension
Multimodal referring
â Referring expression generation
Perception-language tasks â Image captioning â Visual question answering
Language tasks
â Language understanding â Language generation
# 4.1 Multimodal Grounding
In order to evaluate the ability of multimodal grounding, we test KOSMOS-2 on widely used phrase grounding and referring expression comprehension tasks in a generation manner. Phrase grounding
6
<box> <loCg5> <lOC3g9> </box> <box> <locgg> <l0C495> </box> t t Grounded MLLM Grounded MLLM t t Aman ina blue hard <p> A man in a blue hat and <p> orange hard hat and orange safety vest </p> safety vest </p> (1) Phrase grounding (2) Referring expression comprehension
Figure 4: Input format of evaluation on (1) phrase grounding and (2) referring expression comprehen- sion.
task requires the model to predict a set of bounding boxes based on one or more given phrases that maybe interrelated within a single caption. Referring expression comprehension task encourages the model to locate the object described in a text referring expression within a given image.
By testing KOSMOS-2 on these two tasks, we can assess how well the model performs in grounding text descriptions to the visual world, which is crucial for developing advanced AI systems capable of handling complex multimodal tasks.
For both phrase grounding and referring expression comprehension tasks, KOSMOS-2 is required to generate location tokens which are then converted to bounding boxes for evaluation. The input format is â<s><grounding>...â, where â<grounding>â is used to prompt the model to generate locations tokens.
# 4.1.1 Phrase Grounding
We evaluate phrase grounding task on Flickr30k Entities [PWC+15] val and test splits. In order to reduce ambiguity, we do not prompt the model with individual phrases; instead, we use the current phrase along with the preceding words as input where preceding words serve as context: â ... <p> {phrase} </p>â. For the example shown in Figure 4(1), the model needs to predict the locations of phrases âA manâ, âa blue hard hatâ, âorange safety vestâ and âan intersectionâ in the caption âA man in a blue hard hat and orange safety vest stands in an intersection.â. To generate the location tokens for the phrase âA manâ that is the beginning of the caption, the prompt is â<p>A man</p>â. For the phrase âorange safety vestâ, the prompt is âA man in a blue hard hat and <p>orange safety vest</p>â. When multiple men are in the image, the context âA man in a blue hard hat andâ explicitly helps the model locate the object to reduce ambiguity.
We obtain the location tokens in â<box>...</box>â from the model response and then covert it into bounding boxes. The generated bounding box is correct if its intersection over union (IoU) with the ground-truth bounding box is greater than 0.5. If KOSMOS-2 generates a location sequence that can not be converted correctly (e.g., â<box><loc1></box>â), we treat it as a negative sample. We use ANY-BOX protocol in MDETR [KSL+21]. We report the R@1, R@5, and R@10 metrics, where R@1/5/10 means calculating the recall using the top 1/5/10 generated bounding boxes. If there are fewer than 5 or 10 bounding boxes generated by KOSMOS-2, we use all available bounding boxes for the calculation.
Results Table 2 presents results on Flickr30k Entities [PWC+15] val and test splits. KOSMOS-2 achieves impressive zero-shot performance and outperforms GRILL [JMC+23], which relies on an attached detector, by a large margin. Moreover, our model outperforms traditional ï¬netuned VisualBert [LYY+19] model by 7.4% R@1 on both val and test splits. In contrast to other models, KOSMOS-2 does not involve prior designs (e.g., object queries or proposals), leading to similar results among R@1, R@5, and R@10. These results demonstrate that KOSMOS-2 can generate high-quality
7
Val Split Test Split Model Zero-shot R@1 R@5 R@10 R@1 R@5 R@I10 VisualBert [LYY* 19] x 70.4 84.5 86.3 71.3 85.0 86.5 MDETR [KSL*21] x 83.6 93.4 95.1 84.3 93.9 95.8 GLIP [LZZ+ 22] x 86.7 964 97.9 87.1 96.9 98.1 FIBER [DKG* 22] x 87.1 96.1 97.4 874 964 97.6 GRILL [JMC*23] v - - - 18.9 53.4 70.3 KOsMos-2 v 77.8 79.2 79.3 78.7 80.1 80.1
Table 2: Phrase grounding results on Flickr30k Entities. We report the R@1, R@5, and R@10 metrics, where R@1/5/10 means calculating the recall using the top 1/5/10 generated bounding boxes.
Zero- RefCOCO | RefCOCO+ | RefCOCOg shot val testA _ testB val testA â_ testB val test 81.41 87.04 74.17 75.90 81.45 66.70 74.86 75.77 87.51 90.40 82.67 81.13 85.52 72.96 83.35 83.31 90.05 92.93 85.26 84.49 90.10 77.77 84.54 85.20 90.68 92.59 87.26 85.74 90.13 79.38 87.11 87.32 Model UNITER [CLY* 19] MDETR [KSL*21] OFA [WYM*22] FIBER [DKG*+22] Qo VisionLLM [WCC*23] 86.7 - - - - - . : GRILL [IMC+23] - - - - - - - 47.5 Kosmos-2 Y (52.32 5742 47.26 45.48 50.73 42.24 60.57 61.65
Table 3: Referring expression comprehension results on RefCOCO, RefCOCO+ and RefCOCOg. We report the accuracy metric for all methods.
locations without the need for post-processing redundant locations. This capability highlights the effectiveness of our model in handling phrase grounding tasks.
# 4.1.2 Referring Expression Comprehension
We assess the referring expression comprehension task using three well-established datasets: Re- fCOCO [YPY+16], RefCOCO+ [YPY+16] and RefCOCOg [MHT+15]. Both RefCOCO and RefCOCO+ were generated through a two-player game, with RefCOCO+ speciï¬cally designed to exclude spatial relations, such as âon the leftâ. RefCOCOg incorporates spatial relations and features longer expressions on average. Different from phrase grounding on Flickr30k entities, we measure this task by using referring expression as the input: â<p> referring expression </p>â. For the example shown in Figure 4(2), the input sequence is â<p>A man in a blue hard hat and orange safety vest</p>â. Similarly, the predicted bounding box is considered correct only if its IOU with the ground-truth bounding box is greater than 0.5. The failed decoded sequence is also treated as a negative sample. We use the ï¬rst generated bounding box for the query expression to measure the accuracy.
Results Table 3 reports referring comprehension results on RefCOCO [YPY+16], Ref- COCO+ [YPY+16] and RefCOCOg [MHT+15]. KOSMOS-2 also obtains promising zero-shot performance on the comprehension task, signiï¬cantly outperforming previous zero-shot models on RefCOCOg benchmark. However, compared to previous ï¬netuned works, KOSMOS-2 achieves slightly lower performance on RefCOCO and RefCOCO+ than on RefCOCOg. This discrepancy can be attributed to the data distribution present in RefCOCO and RefCOCO+, where they tend to use a shorter referring expression (e.g., âleft bottomâ) during the two-player game. Hence, one of our future goals is to enhance MLLMsâ ability to accurately understand more types of human expressions.
# 4.2 Multimodal Referring
In addition to multimodal grounding tasks, we evaluate the modelâs ability to understand image regions or objects users refer to via inputting bounding boxes. Compared with previous multimodal
8
the front most cow to the the front most cow to the right of other cows. right of other cows. Grounded MLLM Grounded MLLM <p> It </p> <box> i] <p> It </p> <locgq7> <l0Cg95> <box> <locyg,> </box> is I, <locton> </box> is the giraffe in the middle. <p> It </p> <box> <locg27> <lOCgg5> </box> is (1) Zero-shot evaluation (2) Few-shot evaluation
Figure 5: The input format of referring expression generation evaluation under (1) zero-shot and (2) few-shot settings. The bounding boxes shown in the image are for visualization purposes.
LLMs that can only refer image regions or objects to the model via detailed text descriptions, directly referring to image regions using its bounding boxes is more effective and reduces ambiguity.
We evaluate the model on the referring expression generation task, which aims to generate unambigu- ous text descriptions of speciï¬c objects or regions within the bounding box. We employ the widely used RefCOCOg dataset [MHT+15] to evaluate the modelâs performance under both zero-shot and few-shot settings, showcasing its adaptability in different scenarios.
# 4.2.1 Evaluation Setup
The model is tasked with generating an associated text description for an object or region given its location tokens of the bounding boxes (e.g., â<box><loc1><loc2></box>â). Beneï¬ting from the uniï¬ed input format, we use â<p> It </p><box><loc1><loc2></box> isâ as prompt to encourage the model to predict its text description. Figure 5 (1) and (2) demonstrate the input format for zero-shot and few-shot referring expression generation, respectively. Following previous works, we report results using METEOR and CIDEr metrics. The image resolution is 224Ã224. Greedy search is used for decoding.
# 4.2.2 Results
Table 4 presents the zero-shot and few-shot results of referring expression generation on RefCOCOg. We compare KOSMOS-2 with a ï¬netuned listener-speaker model, which introduces an added reward- based module (SLR). Our model obtains impressive zero-shot performance on referring expression generation, and even outperforms ï¬netuned SLR by 1.1 CIDEr scores. Moreover, when prompted with fewshot demonstrations, KOSMOS-2 shows further improvements, highlighting its in-context learning ability.
RefCOCOg Meteor CIDEr Model Setting SLR[YTBB17] SLR+Rerank[YTBB17] Finetuning Finetuning 15.4 15.9 59.2 66.2 Zero-shot Few-shot (k = 2) Few-shot (k = 4) 12.2 13.8 14.1 60.3 62.2 62.3 KOSMOS-2
Table 4: Results of referring expression generation on RefCOCOg.
# 4.3 Perception-Language Tasks
In addition to multimodal grounding and referring tasks, we also evaluate KOSMOS-2 on the vision- language tasks following KOSMOS-1. In particular, we perform zero-shot evaluations on two popular
9
tasks, including image captioning and visual question answering. Image captioning requires the model to generate a text description of the given image, whereas visual question answering seeks to answer a natural language question based on an image. In order to have a fair comparison with KOSMOS-1, we report results without instruction tuning.
# 4.3.1 Evaluation Setup
For image captioning, we evaluate the model on the widely used Flickr30k Karpathy split test set. We employ beam search for caption generation, with a beam size of 5. We report results using CIDEr [VLZP15] metrics evaluated by COCOEvalCap3. We use the prompt âAn image ofâ to generate the image description.
For visual question-answering, we evaluate zero-shot performance on the test-dev set of VQAv2. Greedy search is used for decoding. We report VQA scores obtained from VQAv2 evaluation server4. âQuestion: {question} Answer: {answer}â is used as the prompt for the dataset. The image resolution is 224Ã224 for both two tasks.
# 4.3.2 Results
We present the zero-shot performance on Flickr30k and VQAv2 in Table 5. KOSMOS-2 exhibites comparable overall performance to the KOSMOS-1, showing a slight improvement on Flickr30k while experiencing a marginal decrease on VQA. While KOSMOS-2 introduces new capabilities of grounding and referring, the model still achieves competitive performance on perception-language tasks.
Model Flickr30k VQAv2 CIDEr VQA acc. FewVLM [JCS+22] METALM [HSD+22] Flamingo-3B [ADL+22] Flamingo-9B [ADL+22] KOSMOS-1 KOSMOS-2 31.0 43.4 60.6 61.5 65.2 66.7 - 41.1 49.2 51.8 46.7 45.6
Table 5: Zero-shot image captioning results on Flickr30k test set and zero-shot visual question answering results on VQAv2 test-dev set. We report results of KOSMOS-2 and KOSMOS-1 without instruction tuning.
# 4.4 Language Tasks
We evaluate KOSMOS-2 on eight language tasks, such as cloze and completion tasks (StoryCloze, HellaSwag), Winograd-style tasks (Winograd, Winogrande), commonsense reasoning (PIQA), and three SuperGLUE benchmark [WPN+19] datasets (BoolQ, CB, and COPA). We report the zero- shot results in Table 6. Compared with KOSMOS-1, KOSMOS-2 achieves similar performance on StoryCloze, HellaSwag, Winograd, Winogrande, and PIQA, experiences a decrease in performance on CB, but shows improvement on BoolQ and COPA. In summary, KOSMOS-2 demonstrates the acquisition of new capabilities while experiencing comparable performance on language tasks. This illustrates the potential of the model in balancing and expanding its skills across different domains.
# 5 Conclusion
We present KOSMOS-2, a multimodal large language modal, that can ground to the visual world. Speciï¬cally, we pre-train KOSMOS-2 by augmenting the multimodal corpora used in KOSMOS-1 with GRIT, a large-scale dataset of Grounded Image-Text pairs, which is created by extracting
3https://github.com/salaniz/pycocoevalcap 4https://eval.ai/challenge/830/overview
10
Model Story Cloze Hella Swag Winograd Winogrande PIQA BoolQ CB COPA LLM KOSMOS-1 KOSMOS-2 72.9 72.1 72.0 50.4 50.0 49.4 71.6 69.8 69.1 56.7 54.8 55.6 73.2 72.9 72.9 56.4 56.4 62.0 39.3 44.6 30.4 68.0 63.0 67.0
Table 6: Zero-shot performance comparisons of language tasks between KOSMOS-2, KOSMOS-1 and LLM. LLM uses the same text data and training setup to reimplement a language model as KOSMOS-1. We report results of KOSMOS-2 and KOSMOS-1 without instruction tuning. Results of KOSMOS-1 and the LLM baseline are from [HDW+23].
and associating noun phrases and referring expressions in the caption to the objects or regions in the scene. KOSMOS-2 enables new capabilities of perceiving image regions and grounding text output to the visual world, which makes grounding as a foundation capability of MLLMs in many downstream applications. Experimental results demonstrate that KOSMOS-2 achieves impressive results on language and vision-language tasks evaluated in KOSMOS-1, grounding tasks including phrase grounding and referring expression comprehension, and referring tasks such as referring expression generation.
# Acknowledgement
Some examples (such as Figure 1) are taken from the WHOOPS corpus [BGBH+23].
# Ethics Statement
The model presented in this paper is intended for academic and research purposes. The utilization of the model to create unsuitable material is strictly forbidden and not endorsed by this work. The accountability for any improper or unacceptable application of the model rests exclusively with the individuals who generated such content. We also put Microsoft AI Principles5 into practice when developing the models.
# References
[ADL+22] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andrew Brock, Aida Ne- matzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karen Simonyan. Flamingo: a visual language model for few-shot learning. In Advances in Neural Information Processing Systems, 2022.
[AHR+22] Armen Aghajanyan, Bernie Huang, Candace Ross, Vladimir Karpukhin, Hu Xu, Naman Goyal, Dmytro Okhonko, Mandar Joshi, Gargi Ghosh, Mike Lewis, and Luke Zettlemoyer. CM3: A causal masked multimodal model of the Internet. ArXiv, abs/2201.07520, 2022.
[BGBH+23] Nitzan Bitton-Guetta, Yonatan Bitton, Jack Hessel, Ludwig Schmidt, Yuval Elovici, Gabriel Stanovsky, and Roy Schwartz. Breaking common sense: WHOOPS! a vision-and-language benchmark of synthetic and compositional images. ArXiv, abs/2303.07274, 2023.
[BPK+22] Minwoo Byeon, Beomhee Park, Haecheon Kim, Sungjun Lee, Woonhyuk Baek, and Saehoon Kim. Coyo-700m: Image-text pair dataset, 2022.
# 5https://www.microsoft.com/ai/responsible-ai
11
[CLY+19] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. Uniter: Universal image-text representation learning. In European Conference on Computer Vision, 2019.
[CSL+21] Ting Chen, Saurabh Saxena, Lala Li, David J. Fleet, and Geo rey E. Hinton. Pix2seq: A language modeling framework for object detection. ArXiv, abs/2109.10852, 2021.
[DKG+22] Zi-Yi Dou, Aishwarya Kamath, Zhe Gan, Pengchuan Zhang, Jianfeng Wang, Linjie Li, Zicheng Liu, Ce Liu, Yann LeCun, Nanyun Peng, Jianfeng Gao, and Lijuan Wang. Coarse-to-ï¬ne vision-language pre-training with fusion in the backbone. ArXiv, abs/2206.07643, 2022.
[DXS+23] Danny Driess, F. Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Ho Vuong, Tianhe Yu, Wenlong Huang, Yevgen Chebotar, Pierre Sermanet, Daniel Duckworth, Sergey Levine, Vincent Vanhoucke, Karol Hausman, Marc Toussaint, Klaus Greff, Andy Zeng, Igor Mordatch, and Peter R. Florence. Palm-e: An embodied multimodal language model. ArXiv, abs/2303.03378, 2023.
[HDW+23] Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, and Furu Wei. Language is not all you need: Aligning perception with language models. ArXiv, abs/2302.14045, 2023.
[HMVLB20] Matthew Honnibal, Ines Montani, Soï¬e Van Landeghem, and Adriane Boyd. spaCy: Industrial-strength Natural Language Processing in Python. 2020.
[HSD+22] Yaru Hao, Haoyu Song, Li Dong, Shaohan Huang, Zewen Chi, Wenhui Wang, Shum- ing Ma, and Furu Wei. Language models are general-purpose interfaces. ArXiv, abs/2206.06336, 2022.
[HSLS22] Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. Unnatural instructions: Tuning language models with (almost) no human labor, 2022.
[JCS+22] Woojeong Jin, Yu Cheng, Yelong Shen, Weizhu Chen, and Xiang Ren. A good prompt is worth millions of parameters: Low-resource prompt-based learning for vision-language models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2763â2775, Dublin, Ireland, May 2022. Association for Computational Linguistics.
[JMC+23] Woojeong Jin, Subhabrata Mukherjee, Yu Cheng, Yelong Shen, Weizhu Chen, Ahmed Hassan Awadallah, Damien Jose, and Xiang Ren. Grill: Grounded vision- language pre-training via aligning text and image regions. ArXiv, abs/2305.14676, 2023.
[KSL+21] Aishwarya Kamath, Mannat Singh, Yann LeCun, Ishan Misra, Gabriel Synnaeve, and Nicolas Carion. Mdetr - modulated detection for end-to-end multi-modal understanding. 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pages 1760â 1770, 2021.
[KZG+16] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Li Fei-Fei. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123:32â73, 2016.
[LHV+23] Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The ï¬an collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023.
[LLSH23] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. ArXiv, abs/2301.12597, 2023.
12
[LLWL23] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023.
[LYY+19] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visual- bert: A simple and performant baseline for vision and language. ArXiv, abs/1908.03557, 2019.
[LZZ+22] Liunian Harold Li*, Pengchuan Zhang*, Haotian Zhang*, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, Kai-Wei Chang, and Jianfeng Gao. Grounded language-image pre-training. In CVPR, 2022.
[MHT+15] Junhua Mao, Jonathan Huang, Alexander Toshev, Oana-Maria Camburu, Alan Loddon Yuille, and Kevin P. Murphy. Generation and comprehension of unambiguous object descriptions. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 11â20, 2015.
[MWH+22] Shuming Ma, Hongyu Wang, Shaohan Huang, Wenhui Wang, Zewen Chi, Li Dong, Alon Benhaim, Barun Patra, Vishrav Chaudhary, Xia Song, and Furu Wei. TorchScale: Transformers at scale. CoRR, abs/2211.13184, 2022.
[Ope23] OpenAI. Gpt-4 technical report. 2023.
[PWC+15] Bryan A. Plummer, Liwei Wang, Christopher M. Cervantes, Juan C. Caicedo, J. Hock- enmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase corre- spondences for richer image-to-sentence models. International Journal of Computer Vision, 123:74â93, 2015.
[SBV+22] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wight- man, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5b: An open large-scale dataset for training next generation image-text models. arXiv preprint arXiv:2210.08402, 2022.
[VLZP15] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus- based image description evaluation. In CVPR, pages 4566â4575, 2015.
[WCC+23] Wen Wang, Zhe Chen, Xiaokang Chen, Jiannan Wu, Xizhou Zhu, Gang Zeng, Ping Luo, Tong Lu, Jie Zhou, Y. Qiao, and Jifeng Dai. Visionllm: Large language model is also an open-ended decoder for vision-centric tasks. ArXiv, abs/2305.11175, 2023.
[WMH+22] Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, Barun Patra, Zhun Liu, Vishrav Chaudhary, Xia Song, and Furu Wei. Foundation transformers. CoRR, abs/2210.06423, 2022.
[WPN+19] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537, 2019.
[WYM+22] Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. Unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. In Interna- tional Conference on Machine Learning, 2022.
[YPY+16] Licheng Yu, Patrick Poirson, Shan Yang, Alexander C. Berg, and Tamara L. Berg. Modeling context in referring expressions. ArXiv, abs/1608.00272, 2016.
[YTBB17] Licheng Yu, Hao Tan, Mohit Bansal, and Tamara L. Berg. A joint speaker-listener- reinforcer model for referring expressions. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pages 3521â3529. IEEE Computer Society, 2017.
13
# A Hyperparameters
The training hyperparameters of KOSMOS-2 are listed in Table 7.
Hyperparameters Image embedding number Location tokens 64 1,024 Training steps Warmup steps Optimizer Learning rate Learning rate decay Adam β Weight decay 60,000 375 AdamW 2e-4 Linear (0.9, 0.98) 0.01 Batch size of text corpora Batch size of original image-caption pairs Batch size of grounded image-text pairs Batch size of interleaved data 93 1,117 1,117 47
# Table 7: Training hyperparameters of KOSMOS-2
The instruction tuning hyperparameters are listed in Table 8.
Hyperparameters Training steps Warmup steps Learning rate Batch size of language instruction data Batch size of vision-language instruction data Batch size of grounded image-text pairs 10,000 375 1e-5 117 351 & grounded instruction data 1404 Batch size of text corpora Batch size of interleaved data 30 15
Table 8: Instruction tuning hyperparameters of KOSMOS-2
# B Templates for Grounded Instruction Data
Table 9 presents the instruction templates of expression generation based on its associated bounding boxes during instruction tuning.
⢠"What is <p> it </p><box><loc1><loc2></box>? It is {expression}." ⢠"What is <p> this </p><box><loc1><loc2></box>? This is {expression}." ⢠"Describe <p> this object </p><box><loc1><loc2></box>. This object is {expression}." ⢠"<p> It </p><box><loc1><loc2></box> is {expression}." ⢠"<p> This </p><box><loc1><loc2></box> is {expression}." ⢠"<p> The object </p><box><loc1><loc2></box> is {expression}."
Table 9: Instruction templates used for expression generation.
14
# C Examples of GRIT
We present some examples of the GRIT corpus in Figures 6â9. The grounded image-text pairs span over various domains and contain different numbers of objects.
Figure 6: Example from GRIT. Caption: âA serving of kale and roasted vegetable salad on an aluminium tray served with a small white bowl ï¬led with creamy light green avocado Caesar dressingâ.
Figure 7: Example from GRIT. Caption: âA Keto Chicken Nugget being dipped into a bowl of keto honey mustard.â.
15
Figure 8: Example from GRIT. Caption: âSolar cells on a red roof are in the foreground. The Sydney skyline is in the background.â.
Figure 9: Example from GRIT. Caption: âWoman standing outdoors in a city landscape and wearing a hijab. Her arm is around a young girl who is hugging her side. The background is blurred.â.
# D More Examples of KOSMOS-2
As illustrated in Figure 10, multimodal referring capability used for visual dialogue can unlock potential in human-AI interaction. In Figure 11, our approach demonstrates its in-context learning ability for ï¬ne-grained object detection using both text and image descriptions. Figure 12 and Figure 13 showcase more selected examples, including grounded visual question answering, grounded image captioning, and multimodal referring.
16
What is the name of this building? â> Sydney Opera House. When it was completed? In 1973. Whatisthenameof â» this? Sydney Harbour Bridge. What he is doing? The man is riding a horse. How about this man? He is holding a flag. Which person is this one facing to? The person on the right side of the fence. â>
Figure 10: Examples of visual dialogue generated from KOSMOS-2.
17
<p>a downy woodpecker</p> <box><l0Cg7><l0Cg49></box> â Description of three toed woodpecker: it has black and white stripes throughout the body With text and a yellow crown. Description â- of downy woodpecker: it has white spots on its black wings and some red on its crown. description <p>a downy woodpecker</p> Ge. <box><locy45><lOCgsg></box> â> q Vv ce N With image - description This is a downy woodpecker <p>a downy woodpecker</p> Ge. <box><loc,35><lOCgsg></box> â>- q Vv
Figure 11: Examples of object detection with multimodal descriptions from KOSMOS-2.
18
Input Prompt Completion Input Prompt Completion Question: Where is the coach? Answer: <p>The coach</p> The coach is standing in the middle of the field. Question: Where is i'? Answer: In a microwave oven. (1) Grounded question answering Question: What does it say? Answer: Question: What makes this image weird? Answer: The image is weird because the snowman is sitting next toa campfire in the snow. "Je suis Charlie!" is written in French on a chalkboard. Question: What will happen to it? Answer: You are driving this car. To pick up that man, you need to go through the intersection and turn right. It will melt away as it warms up. (2) Multimodal referring via bounding boxes
Figure 12: Examples generated from KOSMOS-2.
19
Input Prompt Input Prompt Input Prompt oe cago is a large bird with a black and white body and a yellow beak. It has large, dark wings and a pointed tail. It stands on a stump and spreads its wings, displaying its magnificent plumage. Completion Describe this bird in detail: Six fighter jets fly in formation in the sky, creating a beautiful cloud trail. Completion Describe this image in detail: Aman is standing in front of a fire, blowing the flames out of his mouth. He is in the middle of a tropical setting with palm trees surrounding him. Completion Describe this image in detail:
Figure 13: Examples of grounded image captioning generated from KOSMOS-2.
20 | {
"id": "2301.13688"
} |
2306.14898 | InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback | Humans write code in a fundamentally interactive manner and rely on constant
execution feedback to correct errors, resolve ambiguities, and decompose tasks.
While LLMs have recently exhibited promising coding capabilities, current
coding benchmarks mostly consider a static instruction-to-code sequence
transduction process, which has the potential for error propagation and a
disconnect between the generated code and its final execution environment. To
address this gap, we introduce InterCode, a lightweight, flexible, and
easy-to-use framework of interactive coding as a standard reinforcement
learning (RL) environment, with code as actions and execution feedback as
observations. Our framework is language and platform agnostic, uses
self-contained Docker environments to provide safe and reproducible execution,
and is compatible out-of-the-box with traditional seq2seq coding methods, while
enabling the development of new methods for interactive code generation. We use
InterCode to create three interactive code environments with Bash, SQL, and
Python as action spaces, leveraging data from the static NL2Bash, Spider, and
MBPP datasets. We demonstrate InterCode's viability as a testbed by evaluating
multiple state-of-the-art LLMs configured with different prompting strategies
such as ReAct and Plan & Solve. Our results showcase the benefits of
interactive code generation and demonstrate that InterCode can serve as a
challenging benchmark for advancing code understanding and generation
capabilities. InterCode is designed to be easily extensible and can even be
used to create new tasks such as Capture the Flag, a popular coding puzzle that
is inherently multi-step and involves multiple programming languages. Project
site with code and data: https://intercode-benchmark.github.io | http://arxiv.org/pdf/2306.14898 | John Yang, Akshara Prabhakar, Karthik Narasimhan, Shunyu Yao | cs.CL, cs.LG, cs.SE | Project site with code and data:
https://intercode-benchmark.github.io | null | cs.CL | 20230626 | 20231030 | 3 2 0 2
t c O 0 3 ] L C . s c [
3 v 8 9 8 4 1 . 6 0 3 2 : v i X r a
# InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback
# John Yang Akshara Prabhakar Karthik Narasimhan Shunyu Yao
Department of Computer Science, Princeton University {jy1682, ap5697, karthikn, shunyuy}@princeton.edu
# Abstract
Humans write code in a fundamentally interactive manner and rely on constant execution feedback to correct errors, resolve ambiguities, and decompose tasks. While LLMs have recently exhibited promising coding capabilities, current coding benchmarks mostly consider a static instruction-to-code sequence transduction process, which has the potential for error propagation and a disconnect between the generated code and its final execution environment. To address this gap, we introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning (RL) environment, with code as actions and execution feedback as observations. Our framework is language and platform agnostic, uses self-contained Docker environments to provide safe and reproducible execution, and is compatible out-of-the-box with traditional seq2seq coding methods, while enabling the development of new methods for interactive code generation. We use InterCode to create three interactive code environments with Bash, SQL, and Python as action spaces, leveraging data from the static NL2Bash [32], Spider [55], and MBPP [4] datasets. We demonstrate InterCodeâs viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies such as ReAct [51] and Plan & Solve [43]. Our results showcase the benefits of interactive code generation and demonstrate that InterCode can serve as a challenging benchmark for advancing code understanding and generation capabilities. InterCode is designed to be easily extensible and can even be used to create new tasks such as Capture the Flag, a popular coding puzzle that is inherently multi-step and involves multiple programming languages. *
# Introduction
The art of computer programming is naturally an interactive process. When a human programmer writes code, she relies on several iterations of a âwrite-execute-testâ loop in order to iteratively refine solutions, plan changes, test sub-modules, and solve ambiguities by checking execution behavior. While this is reminiscent of other human endeavors like writing, code compilation and execution produce exact results that provide a deterministic form of feedback to make the refinement process more straightforward. Depending on the observed results, programmers perform various levels of debugging and rewriting, and continue the process until their code satisfies the requirements.
There has been increasing interest in recent years around the development of models that can automatically generate code given a specification in natural language [18, 46, 14, 29, 25]. Powered by large-scale pre-training over thousands of codebases [2, 22, 19], these models have shown solid performance on static benchmarks like HumanEval [9], APPS [20], MBPP [4], CodeXGLUE [33]. However, generating code in a static, sequence-to-sequence or auto-regressive fashion has several drawbacks: 1) simple errors (even typos) can propagate and there is no chance for recovery or
Code and data available at https://intercode-benchmark.github.io/
37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks.
~» Query (Bash) âMove all images from my Desktop and Downloads to a compressed folder on > my USB drive.â Question | Find all text files in the testbed directory and write ODD | their names to a single file called concated.txt. Agent wd | & P Bash Env / Agent Ils | & Bash Env testbed dirl dir2 ... P Agent real] find /testbed -type f -name â*.txtâ... Bash Env /testbed/dir3/textfiles.txt /testbed... IP Agent / Gold Command real] submit | e find /testbed -type f -name '*.txt! - exec cat {} ; > /testbed/concated.txt © InterCode Env Agent Command(s) 1. cd testbed 2. find . -type f -name; 3. touch concated.txt 4. echo hi.txt >! concated.txt (additi ) : Z Answer (® Env. State file1.txt concated.txt file2.txt Y 11. Compare File System: . Check if common edits are identical Agent Latest x Standard Output are âCalculate Lexical! Task Gold Standard Output A concated.txt â7
Figure 1: Overview of InterCode. Setting up an interactive code environment with InterCode requires a Dockerfile, dataset, reward function definition, and a small amount of subclass implementation. The interactive loop between agent and environment closely mirrors real world software development processes. While InterCode task performance is generally quantified as a binary 0/1 completion score, InterCode allows for the design of more complex evaluation criteria that can incorporate execution output and the effects of interaction on the state space.
revision, 2) there is a disconnect between the code generation process and its downstream execution on the desired software and hardware environment, and 3) there is little room for human intervention or collaboration in the code generation process.
Recently, some works have proposed the use of execution feedback or interaction [47] to benefit code generation models [24, 21, 48, 20]. However, these papers consider their own individual setup and are difficult to compare with one other due to the use of different compilers, execution environments, feedback signals, and assumptions on the interactive process such as human participation to create task descriptions or provide natural language feedback. This makes it difficult to compare existing methods for code generation and to clearly understand the benefits of interactive generation.
To address these issues, we propose InterCode, the first standard coding benchmark designed natively with an interactive execution environment. Closely mimicking the human decision-making process, InterCode allows a coding agent to interactively receive feedback from compilers/interpreters that execute its code, and to submit further refinements. We design InterCode to be like a standard reinforcement learning (RL) environment that requires minimal human intervention and one in which generated code is treated as actions, which are executed to reveal observations. Our framework is (1) language and platform agnostic and can easily be used for new coding problems, (2) uses self-contained Docker environments to provide safe execution, and (3) compatible out-of-the-box with traditional seq2seq generation methods, while also enabling and empowering the development of new interactive techniques.
We demonstrate the power of the framework by implementing Bash, SQL, and Python tasks within InterCode, building on pre-existing static datasets [62, 32, 4]. We perform experiments across diverse models and prompting methods, including ReAct [51] and Plan & Solve [43]. Our findings concretely showcase the benefits of interaction towards solving coding tasks, discuss the distribution of distinct code understanding challenges across different task settings, and explore the ease with which new tasks and datasets can be defined using InterCode.
2
To summarize, our paper makes the following contributions:
⢠We develop InterCode, a new, universal framework for interactive code generation, which provides ease of use, extensibility, and safety.
⢠Using InterCode, we perform a comprehensive evaluation of state-of-the-art models and identify several avenues for improvements.
⢠We release our framework as a new benchmark along with useful empirical tools to customize any new static code datasets into interactive tasks.
# 2 Related Work
Interactive environments for coding. Most coding benchmarks (e.g. SQL - Spider [55], KaggleD- BQA [26]; Bash - NLC2CMD [1], NL2Bash [32]; Python - HumanEval [9], APPS [20], MBPP [4], CodeXGLUE [33], CodeNet [38]) frame the coding problem as a sequence transduction problem (from instruction to code), rather than an interactive decision making problem with an execution envi- ronment. Attempts have been made to simulate interaction by developing conversational, dialogue- style [57, 56], multi-step problem solving [36] datasets, which involve pre-annotated human-designed queries. The work closest to InterCode has been recent explorations of Python Jupyter Notebooks as a natural choice for interactive coding [21, 24, 54]. However, task data and settings often constrain allowed actions to a closed domain of code and libraries [24, 54], use evaluation procedures or met- rics that may not generalize [21], require human-in-the-loop participation (i.e. create task contexts, write problems, evaluate execution per task instance) [24], or are Python-exclusive [21, 24, 54, 48]. InterCode provides a more general purpose foundation defining interactive coding tasks that enables easy construction of diverse task settings, can have any programming language(s) as the action space, and has automatic, execution-based evaluation.
Execution-based evaluation for coding. Evaluation for NL-to-code generation models has recently shifted away from surface form similarity metrics (BLEU [37, 2], ROUGE [31], Exact Match) towards execution oriented ratings (unit tests [4, 9, 21, 24, 20], output matching [16, 21, 62]). The rigidity of surface form analysis overlooks code syntax features, ignores execution effect, or over-penalizes alternative solutions [63], On the contrary, execution-based assessment is a more thorough and comprehensive score of code functionality [20] and is a more natural fit for open-domain program usage that does not constrain code generation to a subset of the language space [48]. However, for newer benchmarks and datasets that put forth task definitions incorporating execution-based evaluation (APPS [20], ExeDS [21], ODEX [48]), the fundamental code generation task (Context + Code â Execution â Score) is still devoid of interaction. InterCode combines execution-based evaluation with flexible task construction, enabling more diverse problem-solving paradigms within a unified coding task formulation. InterCodeâs use of virtual containers as execution sandboxes protect against harmful actions and allow for advanced evaluation criteria beyond the aforementioned ones.
Methods for interactive or execution-based coding. The value of generative code models and interactive problem solving has motivated a recent proliferation of work to augment reasoning capabilitiesâ of existing language models [51, 40, 43, 50, 60, 12] or propose new modeling techniques to tackle coding as a sequential decision making and reasoning tasks [6, 11, 17, 29, 8, 25], of which evaluation is unit test based. Approaches that leverage execution typically use re-ranking [61, 35, 53, 58] or majority vote [11, 29, 39] to decide on a final prediction. Additional work also explores incorporating human-in-the-loop [7, 23], compilers [44], and text [45, 59] feedback. A common thread among these contributions is that 1) the task setting can only provide the investigated form of feedback and 2) sought-after capabilities are exemplified by strong performance on favorably curated tasks and datasets, rendering comparisons across benchmarks tedious. InterCode has the potential to standardize the evaluation of these methods because 1) the interactive coding task is a conglomeration of many interesting interaction, reasoning, and decision-making challenges and 2) InterCodeâs task construction makes it possible to incorporate a wide variety of sources of feedback.
# 3 The InterCode Benchmark
# 3.1 Formulation
The InterCode benchmark formalizes interactive coding with execution feedback as a partially observable Markov decision process (POMDP) (U, S, A, O, T , R) with instruction space U, state
3
Action Space Environment Dataset Reward Function Bash SQL Python Ubuntu Terminal MySQL Database Python Interpreter MBPP [4] (117) NL2Bash [32] (200) Spider 1.0 [55] (1034) Latest Std. Output Submitted Function
Table 1: Rundown of the two environments with Bash and SQL as action spaces developed using the InterCode framework. The numbers in parentheses refer to the number of task instances adopted from each dataset. Each environment is defined in under 200 lines of code total. Specific discussion of the environment construction and reward function can be found in § A.2 and § A.3
space S, action space A, observation space O, transition function T : S Ã A â S, and reward function R : S Ã A â [0, 1]. Given a coding instruction u â U in natural language, an agent issues code or a special submit keyword as an action at â A. An action is admissible [49] if it can be parsed and executed in the compiler/interpreter environment, and an admissible action incurs a change in the latent state space st+1 â S, and an execution feedback as observation ot+1 â O. The interaction loop repeats until the submit action is issued, wherein the task episode ends and a reward r = R(sT , submit) â [0, 1] is computed, with 1 representing task completion. We use the Success Rate (SR) metric, defined as the proportion of task episodes where r = 1. We also define the Error % metric, which is the percentage of non admissible actions across task episodes.
# 3.2 Construction pipeline
At a high level, InterCode decomposes the construction of an interactive coding task into three modular parts: (1) environment construction, (2) data collection, and (3) reward design. This workflow allows for the safe execution of transition functions, flexible reward design, and convenient adaptation of existing instructions to an interactive setting.
Docker-based environments. InterCode uses Docker [34] virtual containers as a general-purpose execution sandbox. Given a Dockerfile that defines a system and execution entrypoint, InterCode creates a corresponding, stateful virtual container that hosts the desired state space and transition function. We choose Docker as the basis of InterCodeâs environment construction for its safe execution in virtual containers, reproducibility of a Dockerfile across any Docker-equipped machine, and excellent coverage of application code, libraries, and dependencies offered by the Dockerfile DSL.
Data collection. InterCode requires that a dataset has at minimum two fields: query, a natural language instruction u â U, and gold, an answer or code block that is a procedure for generating the correct answer. We define these conditions to make it easy to adapt existing text-to-code datasets to an interactive setting while also leaving plenty of bandwidth for constructing new tasks and datasets.
Reward design. Across a single task episode, the action, observation, and state modification (if any) per interaction loop are implicitly logged by InterCode. InterCodeâs default reward function determines task completion via an exact match of the agentâs execution output (observation and state modifications) against the gold command, where 1 is awarded only if all components match. Since Exact Match is usually too stringent of an evaluation criteria, InterCode exposes a reward function endpoint that has access to both the interaction history and the execution container, allowing for custom reward function definitions that can incorporate multiple signals.
# 3.3 Implementations
Following the procedure discussed in Section 3.2, we create two separate InterCode based environ- ments where Bash and SQL are the action spaces respectively. Table 1 summarizes them.
InterCode-Bash. We define a bash shell within an Ubuntu Operating System as the task setting. To evaluate an agentâs ability to adapt generations to different situations, we architect four distinct file systems that can be swapped into the Bash environment by changing a single line in the Dockerfile.
We bootstrap the NL2Bash [32] dataset (which lacks specificity in queries and grounding to any underlying file system, preventing it from being used directly for interactive evaluations) to create an interactive coding task where an agent completes an instruction via bash actions. Transferring NL2Bash to the interactive task setting requires simple transformations to ground instructions and gold code blocks in the file system. First, we consider a subset of 1000 commands with each
4
having ⥠4 utilities. We then filter out commands that are non-UNIX, non-Linux, or use utilities we currently do not support (eg. "ssh", "sudo", time, and GUI-dependent utilities). Finally, we enhance under-specified commands with specific file names/directory names/paths and update deprecated utilities/flags. The resulting 200 commands are grouped into 4 disjoint sets, 3 of which were grounded to custom-designed file systems, while one set is file-system agnostic. This categorization allows for a comprehensive evaluation of different command-grounding scenarios.
The InterCode-Bash dataset instructions typically make one or both of the following two types of requests. It either 1. Requests information that can be answered via execution output (i.e. "How many files...", "What is the size of...", "Where is <file> stored?") or 2. Requests a change to the location/- configuration/content of a file or folder (i.e. "Move dir1 folder...", "Set permissions of...", "Append a line to..."). Therefore, we define a custom reward function that evaluates an agentâs performance against file system modifications and the latest execution output. Execution output is graded with a simple lexical similarity function. File system assessment is done in two parts. First, a comparison of the agentâs and gold commandâs list of file system changes (list of [path, modification type â [added, changed, deleted]] entries) reveals any extraneous or missing changes. Second, md5sum hashes of each commonly edited file path are compared to determine if an added or changed file was altered correctly. A max score of 1 is achieved only if the correct file paths are changed, the changes are correct, and the latest execution output matches the gold command output exactly. Additional Bash statistics and design details are discussed in § A.2.
InterCode-SQL. We write a Dockerfile that defines a SQL interpreter within a MySQL database as the task setting. To create the databases and tables necessary for the task dataset, we write type resolution scripts and perform database conversions using the sqlite3mysql [41] Python library to adapt the Spider [55] database and table schema to a MySQL format. We then consolidate all setup code into a single, unified MySQL .sql dump that contains the complete set of schemas for all tables across 20 different databases. On container start-up, this file is invoked automatically, creating and populating databases with tables and tables with records.
The Spider [55] dataset is a large-scale cross-domain dataset originally meant for evaluating SQL query generations from natural language questions. We adapt the development set, which contains 1034 task instances, and remove all extraneous columns aside from the natural language questions and gold SQL command. The instruction and gold values do not require any additional pre-processing to be compatible with the MySQL task environment.
Finally, we employ Intersection over Union (IoU), or more formally the Jaccard Index, to quantify the correctness of the latest execution output generated by the agent against the gold output, where both outputs are a list of records. A non-tabular execution output receives a reward of 0 by default. Among the items that lie in the intersection of the agent and gold execution outputs, we also apply a penalty if the records are in the incorrect order. To quantify how sorted the agent output is relative to the gold output, we lean on Kendallâs Ï and adjust the output range to [0, 1]. The IoU score is then directly scaled by this coefficient. All in all, only a correctly ordered list with the exact set of records found in the gold output receives a score of 1. Visualizations like Figure 1 for SQL along with a more extensive implementation discussion for this environment are in § A.3
InterCode-Python. In this setting, we define a Python interpreter running within an Ubuntu operating System as the task setting. The Dockerfile can be configured to run any Python version. The interpreter is not initialized with any dependencies, but PyPI packages can be installed and used by the agent.
We use the MBPP [4] dataset which presents the code completion task of synthesizing Python code from a method header and docstring. Evaluation of correctness is performed with an associated set of unit tests given by MBPP. The MBPP dataset is straightforward to adapt to the interactive setting, requiring no modifications to the query or evaluation components. Finally, we directly inherit MBPPâs evaluation procedure of proportion of unit tests passed. With InterCode, it is easy to use existing datasets to evaluate how well models can use different programming languages as actions.
Validations. To verify the functionality of action execution in the task environment and the correctness of custom reward functions, we write testing scripts for both Bash and SQL that pass the gold command in as a dummy agentâs action to ensure that the command is admissible and executes without error, and to verify that the reward received by the command is 1. To confirm that InterCodeâs dataset specification is enforced across multiple accepted file formats, we define a custom InterCode data loader class which is then rigorously unit tested.
5
Plan & Solve ' 1 1 Action Evaluate 1! Initial Msg. ' <action> . @ [Lletâs understand the problem and devise |, > after first 1 plan... Then, let's carry out the plan... ' 1 1 1 1 Initial Msg. You are helping a user who is trying to do something ina What is the country code and first name... : Single Turn ; action Question Question TELLLILILILLL LLL LL LL ia bee eee PEELE EEELEEEl (Lihat is the country code and first name... âTry Againâ ReAct Agent ' Ce] eat 1 4 Execute Msg. u @ You will now execute your own plan. " Interact with a MySQL DB using SQL... Question " i : : : : : : : : ' s What is the country code and first name... | "1 âAgent ' " " <action |=) jp Upto Agent Agent : actions | C2) 1" [enought > <action> | (=p Observation () plan ' Upton! el : ' v : : : : : : : Initial Msg. You are a SQL code generator helping... Your task is to interact with a MySQL DB... Initial Msg. Interact with a MySQL DB using SQL to... Use interleaving Thought, Action, Obs... Question Gap Litatis the country code and first name.. length Observation times âObservation (ie Lestt_cutput, rewara> We Leste output Vv v Terminate when plan is Terminate when reward =1 |! Terminate when thought chai completed or or exceeds n turns finishes or exceeds n turns n turns exceeded
Figure 2: Overview of Prompting Strategies adjusted for evaluation on InterCode. The "Try Again" termination constraint is conditioned on reward = 1, while ReAct [51] and Plan & Solve [43] are determined by the agent itself. This is because the purpose of the "Try Again" method is to explore how capable agents are at error correction from feedback, while the other two are more concerned with the overall success of general problem-solving strategies.
# 4 Methods
We perform preliminary experiments to gauge the proficiency and behavior of current large language models on interactive coding tasks with Bash and SQL. To observe and elicit relevant reasoning skills, we draw on several existing prompting strategies that have been put forth to augment language modelsâ reasoning and problem-solving skills. We apply these prompting strategies to models across the following three families: OpenAI (text-davinci-003, gpt-3.5-turbo, gpt-4), PaLM-2 (text-bison-001, chat-bison-001) [3], and Open Source (Vicuna-13B [13], StarChat-16B [28]).
Figure 2 visualizes the four adjusted prompting strategies we evaluate on InterCode.
Single Turn is a zero-shot attempt. A model is given a simple description of the task setting and asked to generate code in a specific programming language that would address the query. The first generation in response to the userâs question is then evaluated in the InterCode environment.
"Try Again" is an iterative feedback set up. In the initial message, the agent is informed of the task setting and its interactive nature; an agent has multiple turns to interact with the system, wherein each turn, upon generating an action, the execution output of the action is fed back as an observation. This continues until a reward of 1 (task completion) is achieved or the number of turns (n) is exhausted. The agentâs position in this approach is meant to mirror human software development as closely as possible. The goal of this method is to probe language modelsâ raw interactive coding abilities in addition to illustrating the benefits and different challenges that arise in interactive coding tasks.
ReAct and Plan & Solve. We write prompts and design workflows that follow the text and task configurations described in ReAct [51] and Plan & Solve [43] as faithfully as possible. For these two approaches, the termination of a task episode is conditioned upon the agentâs own judgment, as our goal with these methods is to gauge the transferability to and efficacy of existing reasoning frameworks with respect to the interactive coding task. Full prompt templates are included in §B.7.
# 5 Experiments
# 5.1 Base models comparison
Task performances. We first compare the success rate of models in the Single Turn and Try Again settings for both the InterCode-Bash and SQL datasets. From Table 2 and Table 3, we observe
6
InterCode-SQL Model / Hardness Easy Med Hard Extra All Single Turn Try Again (n = 10) Easy Med Hard Extra All text-davinci-003 gpt-3.5-turbo gpt-4 text-bison-001 chat-bison-001 Vicuna-13B StarChat-16B 20.6 22.6 19.8 23.8 18.5 8.1 21.8 4.9 8.3 7.2 10.9 6.5 1.3 7.4 1.7 5.7 4.6 5.7 4.0 0.6 2.9 0.0 3.6 3.0 0.6 0.0 0.0 0.0 7.4 10.5 9.1 11.5 7.9 2.6 8.9 32.4 72.5 87.5 27.0 22.2 18.9 22.3 14.6 44.3 76.7 12.3 7.8 3.4 8.5 5.2 43.7 66.7 5.7 6.9 1.7 2.9 4.2 21.1 52.4 0.6 0.0 0.0 1.2 15.6 47.3 73.7 12.9 9.9 6.3 9.7
Table 2: Success Rate for single vs. multi turn evaluation on InterCode-SQL (refer §A.3). Query difficulty is adopted from Spider [55]. Best metrics are in bold.
InterCode-Bash Model / File System 1 2 Single Turn 4 3 All 1 Try Again (n = 10) 2 3 4 All text-davinci-003 gpt-3.5-turbo gpt-4 text-bison-001 chat-bison-001 Vicuna-13B StarChat-16B 10.0 30.0 25.0 15.0 12.1 10.0 15.5 32.1 39.6 37.7 22.6 22.5 24.5 22.6 28.8 33.3 36.7 11.7 16.7 18.3 13.3 33.3 37.0 40.7 22.2 22.2 7.4 22.2 24.6 34.5 34.0 17.0 17.7 16.0 17.7 30.0 45.0 41.7 23.3 13.8 15.0 17.2 52.8 49.1 47.2 28.3 24.5 35.8 30.2 32.2 45.0 51.7 16.7 18.3 25.0 21.7 44.4 48.1 59.2 22.2 22.2 22.2 29.6 38.7 46.5 48.5 22.5 19.2 24.5 23.7
Table 3: Success Rate across file systems for single vs. multi-turn evaluation on InterCode-Bash (refer §A.2). To evaluate modelsâ ability to interact with different task settings, we evaluate disjoint sets of Bash instructions across four different file systems. Best metrics are in bold.
that performance across different levels of task difficulty (SQL) and different file systems (Bash) is superior in the interactive setting for all models, with a notable multi-fold increase for GPT-4 (9.1% â 73.7%) on the InterCode-SQL task.
Analysis of interactions. Manual inspection of trajectory logs indicates that models actively exercise later turns for discovering relevant context, correcting errors via execution feedback as observations, and solving problems via iteratively constructing and editing actions as affirmed by Figure 3. In addition, models also demonstrate a level of planning and modular problem solving; for instructions with gold commands that chain multiple commands together (i.e. with |, >, or ; in bash) or consist of multiple sub-problems (i.e. subqueries in SQL), models will use observations from solving smaller sub-problems in earlier turns to compose the higher-order action. Trajectories that exhibit these phenomena are in § B.4
Failure cases. With that said, both Figure 3 exhibits a plateauing in Success Rate and and Error %. This suggests that as the amount of context and feedback builds up, models are less capable of discerning relevant past history toward future actions. In late-turn scenarios, task episode trajectories often reveal repetition of earlier actions, a failure to effectively use recent observations towards deciding an appropriate next action, or an inability to recognize that a current problem-solving chain of thought is inconclusive or futile. This is particularly evident for hard and extra level InterCode- SQL task instructions that require context spanning across several tables and actions that incorporate multiple clauses. We note that even when the full schema of all tables and their descriptions are offered in addition to the original instructions, models still benefit greatly from using interaction to experiment with different JOIN and filtering operators across multiple turns, as demonstrated in § B.2. A larger context window size, retrieval of useful memory, and more adaptive reasoning paradigms are just a handful of potential solutions to overcoming such challenges.
# 5.2 Prompting strategy comparison
Initiating language agents with prompting strategies that encourage different forms of reasoning toward problem-solving improves performance on the interactive coding task to varying degrees. Table 4 presents side-by-side comparisons of the success rate, number of turns, and error rate per strategy. Compared to Try Again, which lacks specific guidance on leveraging multiple turns, more
7
# Success Rate
\ ' â â®- chat-bison-001 45 b - -@- gpt-3.5 âe- gpt-4 : col gpt I 40} i â-@- starchat â@= chat-bison-001 â®- text-bison-001 _ @ opt 3.5 âe- text-davinci-003 Oe gpt4 ~ vicuna -@ starchat oe â@- text-bison-001 _ â@ text-davinci-003 -@- vicuna w a T w 6 T Success Rate N a N 6 2 4 6 8 10 2 4 6 8 10 Number of Interaction Turns Number of Interaction Turns (a) Success rate vs. turns for InterCode-Bash (b) Success rate vs. turns for InterCode-SQL
(a) Success rate vs. turns for InterCode-Bash
(b) Success rate vs. turns for InterCode-SQL
Figure 3: Growth in Success Rate with increase in number of interaction turns across models configured with Try Again prompting strategy for InterCode-Bash and SQL tasks.
Try Again (n = 10) ReAct (n = 10) Plan & Solve SR Turns Error % SR Turns Error % SR Turns Error % SQL Bash 47.3 46.5 7.25 6.15 46.4 24.9 58.7 20.5 5.30 4.40 6.94 20.4 49.1 28.0 4.29 6.65 16.2 53.3
Table 4: Comparison of different prompting strategies across the entire InterCode-SQL and InterCode- Bash datasets using gpt-3.5-turbo as the base model. Turns refers to the average number of turns taken for a single task episode. For Try Again and ReAct, the max number of turns n = 10. The highest Success Rate, fewest Turns, and lowest Error % are highlighted per dataset since they reflect more accuracy and efficient task solving. Best metrics are in bold.
explicit reasoning frameworks such as ReAct and Plan & Solve policies generally achieve higher success rates (SQL: 47.3% â 58.7%) with fewer turns and a higher rate of admissible commands.
Different tasks present different learning challenges. An important skill to solving the InterCode- SQL task is the ability to discover context and construct actions conditionally based on information revealed in prior observations. Given that InterCode-SQL task instructions are phrased most com- monly as questions, adapting to the task setting and new information discovered along the way puts more emphasis on error correction and context discovery. On the other hand, the more declarative and multi-step nature of the InterCode-Bash task instructions is more aptly solved by planning and modular task completion. These distinctions manifest in the Plan & Solve strategyâs performance gap between the InterCode-SQL and InterCode-Bash tasks; while Plan & Solve encourages a model to decompose problems into more manageable steps, the strategy is less favorable towards adjusting on the fly in response to execution feedback. Example trajectories supporting these claims are in § B.4.
More adaptive reasoning is favorable. Compared to "imperative" reasoning paradigms such as Plan & Solve which prescribe a relatively rigid procedure, more flexible frameworks like ReAct, which do not enforce any particular logical formula or roadmap, are more conducive to eliciting a broader set of reasoning capabilities. However, while ReActâs performance is generally superior to Plan & Solve, tasks solved by both strategies with gpt-3.5-turbo make up 57% (407/708) and 27.6% (21/76) of the union of all successfully solved InterCode-SQL and InterCode-Bash tasks respectively. This discrepancy highlights a trade-off between the guidance and structural constraints that are inherent to prompting strategies; schemes that draw out specific reasoning patterns often overlook other equally useful capabilities. InterCodeâs interactive coding task can serve as a strong litmus test toward more adaptable, variegated model reasoning.
# 5.3 New tasks & datasets opportunities
InterCodeâs task formulation, modular design, flexible task construction, and use of virtual containers enable task designers to manifest new, complex, code-driven tasks, where completion is much more
8
Bash DOS Partition Table Offset Sector: 0 Units are in 5az-byte sectors Slot Start End Length Description @ iption: You should utilize the 'dds2-alpine-flag.img.gzâ @ he file with the flag is named âdown-at-the-bottom.txtâ. The âsleuthkit' tools can be used for help. Flag: picoCTF{ for3nsicater_novic3_f5565e7b} âimary Table (#0) Unallocated Linux (@x83) Agent Fea fs -0 2068 =r <p dds2-alpine. flagcing | grep dom-at-the-botton. txt Agent /bin/bash -c "gunzip dds2-alpine. flag. img.g2" Bash dds2-alpine. flag. img: DOS/MBR boot sector; partition 1 Bash + ID=0x83, active, start-CHS (0xe,32,33), end-CHS >. Je 102511 root/down-at~the-botton.txt (x10,81,1), startsector 2048, 260096 sectors ~* eee Agent Agent ne , 7 7 7 âcat -o 2048 dds2-alpine. flag. img 18291 apt-get install sleuthkit Bash state information...
sleuthkit is already the newest version (4.11.1+dfeg-1)-
o upgraded, © newly installed, 8 to remove and © Agent not upgraded. reg} submit picoCTF{for3nsic4tor_novic3_f5565e7b} Bash Agent mnls dds2-alpine. flag. img The submitted flag is correct
Figure 4: GPT-4âs interaction trajectory for a binary exploitation CTF task. This requires proficiency in Bash and Python, among additional knowledge and reasoning. Orange text and arrows highlight the feedback that the model attends to in generating the next action. In last step, agent submits flag.
attainable through interaction. We draw inspiration from Capture the Flag (CTF) [15], a competitive cybersecurity game that requires expertise in coding, cryptography (i.e. binary exploitation, forensics), reverse engineering, and recognizing security vulnerabilities to accomplish the primary objective of discovering encrypted "flags" concealed within code snippets or file systems. Compared to InterCode- Bash & -SQL, CTF is much more complicated, requiring an agent to exercise knowledge of multiple coding languages, modularize a higher-order objective into sub-problems, construct multi-step plans towards solving each problem, and adjust strategy when a plan fails to yield any useful insights.
We establish InterCode-CTF, a new dataset consisting of 100 CTF objectives from picoCTF [42]. Following the interactive coding task formulation, each task instance in InterCode-CTF is given as a <instruction, assets, hidden flag> tuple. We first construct a Bourne Shell within an Ubuntu OS as the task environment. Here, InterCodeâs use of virtual containers is crucial, as necessary actions can be irreversibly damaging on real systems (i.e. rm -rf, sudo access). Per task instance, the associated assets (e.g., images, executables, code), necessary for task completion, are copied into the OS file system. Given this setting, a task worker must understand the given material and investigate the assets to develop potential solutions. Executing a successful approach must be done across multiple steps with various conditionals, where the execution feedback of a prior step could have a significant effect on the next step. Figure 4 spotlights the diverse skills needed for CTF.
# 6 Discussion
Conclusion. We have developed InterCode, a novel lightweight framework that facilitates interaction between Language Models and the underlying environment, enabling them to mimic the human approach to language-to-code generation. Our framework has shown promising results when applied to state-of-the-art models using different prompting styles. It effectively leverages the capabilities of LMs to break down complex tasks and recover from errors within a secure and isolated environment. The ability to seamlessly convert existing datasets into the interactive format using InterCodeEnv API, and furthermore, the Bash and SQL environments, empowers task designers to construct new tasks to unlock the plethora of challenges that await in the space of interactive coding.
Limitations and future directions. We point out several current limitations of InterCode. At this time, the number of InterCode based environments is limited to Bash, SQL, and Python action spaces and datasets; within the near future, we plan to expand the number of offerings to cover a wider set of programming languages and datasets that should further deliver on InterCodeâs purported promises of efficient and expressive task construction. Second, the CTF dataset is limited to just four task instances due to our manual curation procedure. We hope to release more formal work soon that provides a more thorough analysis of the reasoning and collaboration challenges of the CTF task along with a more extensive dataset for evaluation purposes.
9
# Acknowledgements
We thank Xiao Liu for the Vicuna/Alpaca APIs, Carlos Jimenez and Yuhan Liu for trying our code, and Princeton NLP Group for helpful discussion and feedback in general. We acknowledge support from the National Science Foundation under Grant No. 2107048. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
# References
[1] M. Agarwal, T. Chakraborti, Q. Fu, D. Gros, X. V. Lin, J. Maene, K. Talamadupula, Z. Teng, and J. White. Neurips 2020 nlc2cmd competition: Translating natural language to bash commands. In H. J. Escalante and K. Hofmann, editors, Proceedings of the NeurIPS 2020 Competition and Demonstration Track, volume 133 of Proceedings of Machine Learning Research, pages 302â324. PMLR, 06â12 Dec 2021. URL https://proceedings.mlr. press/v133/agarwal21b.html.
[2] R. Agashe, S. Iyer, and L. Zettlemoyer. Juice: A large scale distantly supervised dataset for open domain context-based code generation. ArXiv, abs/1910.02216, 2019.
[3] R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey, Z. Chen, E. Chu, J. H. Clark, L. E. Shafey, Y. Huang, K. Meier-Hellstern, and et al. Palm 2 technical report, 2023.
[4] J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry, Q. Le, and C. Sutton. Program synthesis with large language models, 2021.
[5] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. Openai gym, 2016.
[6] R. Bunel, M. Hausknecht, J. Devlin, R. Singh, and P. Kohli. Leveraging grammar and reinforce- ment learning for neural program synthesis, 2018.
[7] A. Chen, J. Scheurer, T. Korbak, J. A. Campos, J. S. Chan, S. R. Bowman, K. Cho, and E. Perez. Improving code generation by training with natural language feedback, 2023.
[8] B. Chen, F. Zhang, A. Nguyen, D. Zan, Z. Lin, J.-G. Lou, and W. Chen. Codet: Code generation with generated tests. arXiv preprint arXiv:2207.10397, 2022.
[9] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W. Zaremba. Evaluating large language models trained on code, 2021.
[10] W. Chen, X. Ma, X. Wang, and W. W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks, 2023.
[11] X. Chen, C. Liu, and D. X. Song. Execution-guided neural program synthesis. In International Conference on Learning Representations, 2018.
[12] X. Chen, M. Lin, N. Schärli, and D. Zhou. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128, 2023.
[13] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez, I. Stoica, and E. P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.
[14] C. B. Clement, D. Drain, J. Timcheck, A. Svyatkovskiy, and N. Sundaresan. Pymt5: multi-mode translation of natural language and python code with transformers, 2020.
10
[15] C. Cowan, S. Arnold, S. Beattie, C. Wright, and J. Viega. Defcon capture the flag: defending vul- nerable code from intense attack. In Proceedings DARPA Information Survivability Conference and Exposition, volume 1, pages 120â129 vol.1, 2003. doi: 10.1109/DISCEX.2003.1194878.
[16] L. Dong and M. Lapata. Language to logical form with neural attention, 2016.
[17] K. Ellis, M. Nye, Y. Pu, F. Sosa, J. Tenenbaum, and A. Solar-Lezama. Write, execute, assess: Program synthesis with a repl, 2019.
[18] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang, and M. Zhou. Codebert: A pre-trained model for programming and natural languages, 2020.
[19] L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite, N. Nabeshima, S. Presser, and C. Leahy. The pile: An 800gb dataset of diverse text for language modeling, 2020.
[20] D. Hendrycks, S. Basart, S. Kadavath, M. Mazeika, A. Arora, E. Guo, C. Burns, S. Puranik, H. He, D. Song, and J. Steinhardt. Measuring coding challenge competence with apps. NeurIPS, 2021.
[21] J. Huang, C. Wang, J. Zhang, C. Yan, H. Cui, J. P. Inala, C. Clement, and N. Duan. Execution- In Proceedings of the Fourth based evaluation for data science code generation models. Workshop on Data Science with Human-in-the-Loop (Language Advances), pages 28â36, Abu Dhabi, United Arab Emirates (Hybrid), Dec. 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.dash-1.5.
[22] H. Husain, H.-H. Wu, T. Gazit, M. Allamanis, and M. Brockschmidt. Codesearchnet challenge: Evaluating the state of semantic code search, 2020.
[23] S. K. Lahiri, A. Naik, G. Sakkas, P. Choudhury, C. von Veh, M. Musuvathi, J. P. Inala, C. Wang, and J. Gao. Interactive code generation via test-driven user-intent formalization, 2022.
[24] Y. Lai, C. Li, Y. Wang, T. Zhang, R. Zhong, L. Zettlemoyer, S. W. tau Yih, D. Fried, S. Wang, and T. Yu. Ds-1000: A natural and reliable benchmark for data science code generation. ArXiv, abs/2211.11501, 2022.
[25] H. Le, Y. Wang, A. D. Gotmare, S. Savarese, and S. C. H. Hoi. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. Advances in Neural Information Processing Systems, 35:21314â21328, 2022.
[26] C.-H. Lee, O. Polozov, and M. Richardson. KaggleDBQA: Realistic evaluation of text-to-SQL parsers. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2261â2273, Online, Aug. 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.acl-long.176. URL https://aclanthology.org/2021. acl-long.176.
[27] J. Li, B. Hui, G. Qu, B. Li, J. Yang, B. Li, B. Wang, B. Qin, R. Cao, R. Geng, N. Huo, C. Ma, K. C. C. Chang, F. Huang, R. Cheng, and Y. Li. Can llm already serve as a database interface? a big bench for large-scale database grounded text-to-sqls, 2023.
[28] R. Li, L. B. Allal, Y. Zi, N. Muennighoff, D. Kocetkov, C. Mou, M. Marone, C. Akiki, J. Li, J. Chim, Q. Liu, E. Zheltonozhskii, T. Y. Zhuo, T. Wang, O. Dehaene, M. Davaadorj, and et al. Starcoder: may the source be with you!, 2023.
[29] Y. Li, D. Choi, J. Chung, N. Kushman, J. Schrittwieser, R. Leblond, T. Eccles, J. Keeling, F. Gimeno, A. D. Lago, T. Hubert, P. Choy, C. de Masson dâAutume, I. Babuschkin, X. Chen, P.-S. Huang, J. Welbl, S. Gowal, A. Cherepanov, J. Molloy, D. J. Mankowitz, E. S. Robson, P. Kohli, N. de Freitas, K. Kavukcuoglu, and O. Vinyals. Competition-level code generation with AlphaCode. Science, 378(6624):1092â1097, dec 2022. doi: 10.1126/science.abq1158. URL https://doi.org/10.1126%2Fscience.abq1158.
[30] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code as policies: Language model programs for embodied control, 2023.
11
[31] C.-Y. Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74â81, Barcelona, Spain, July 2004. Association for Computational Linguistics. URL https://aclanthology.org/W04-1013.
[32] X. V. Lin, C. Wang, L. Zettlemoyer, and M. D. Ernst. NL2Bash: A corpus and semantic parser for natural language interface to the linux operating system. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May 2018. European Language Resources Association (ELRA). URL https: //aclanthology.org/L18-1491.
[33] S. Lu, D. Guo, S. Ren, J. Huang, A. Svyatkovskiy, A. Blanco, C. B. Clement, D. Drain, D. Jiang, D. Tang, G. Li, L. Zhou, L. Shou, L. Zhou, M. Tufano, M. Gong, M. Zhou, N. Duan, N. Sundaresan, S. K. Deng, S. Fu, and S. Liu. Codexglue: A machine learning benchmark dataset for code understanding and generation. CoRR, abs/2102.04664, 2021.
[34] D. Merkel. Docker: lightweight linux containers for consistent development and deployment. Linux journal, 2014(239):2, 2014.
[35] A. Ni, S. Iyer, D. Radev, V. Stoyanov, W. tau Yih, S. I. Wang, and X. V. Lin. Lever: Learning to verify language-to-code generation with execution, 2023.
[36] E. Nijkamp, B. Pang, H. Hayashi, L. Tu, H. Wang, Y. Zhou, S. Savarese, and C. Xiong. Codegen: An open large language model for code with multi-turn program synthesis. ICLR, 2023.
[37] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. Bleu: a method for automatic evaluation of machine translation. In Annual Meeting of the Association for Computational Linguistics, 2002.
[38] R. Puri, D. S. Kung, G. Janssen, W. Zhang, G. Domeniconi, V. Zolotov, J. Dolby, J. Chen, M. Choudhury, L. Decker, V. Thost, L. Buratti, S. Pujar, S. Ramji, U. Finkler, S. Malaika, and F. Reiss. Codenet: A large-scale ai for code dataset for learning a diversity of coding tasks, 2021.
[39] F. Shi, D. Fried, M. Ghazvininejad, L. Zettlemoyer, and S. I. Wang. Natural language to code translation with execution, 2022.
[40] N. Shinn, F. Cassano, E. Berman, A. Gopinath, K. Narasimhan, and S. Yao. Reflexion: Language agents with verbal reinforcement learning, 2023.
[41] K. Tusar. sqlite3mysql, 2018. URL https://github.com/techouse/sqlite3-to-mysql.
[42] C. M. University. picoCTF, 2013. URL https://picoctf.org/.
[43] L. Wang, W. Xu, Y. Lan, Z. Hu, Y. Lan, R. K.-W. Lee, and E.-P. Lim. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models, 2023.
[44] X. Wang, Y. Wang, Y. Wan, F. Mi, Y. Li, P. Zhou, J. Liu, H. Wu, X. Jiang, and Q. Liu. Compilable neural code generation with compiler feedback, 2022.
[45] X. Wang, H. Peng, R. Jabbarvand, and H. Ji. Leti: Learning to generate from textual interactions, 2023.
[46] Y. Wang, W. Wang, S. Joty, and S. C. H. Hoi. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation, 2021.
[47] Z. Wang, G. Zhang, K. Yang, N. Shi, W. Zhou, S. Hao, G. Xiong, Y. Li, M. Y. Sim, X. Chen, Q. Zhu, Z. Yang, A. Nik, Q. Liu, C. Lin, S. Wang, R. Liu, W. Chen, K. Xu, D. Liu, Y. Guo, and J. Fu. Interactive natural language processing, 2023.
[48] Z. Wang, S. Zhou, D. Fried, and G. Neubig. Execution-based evaluation for open-domain code generation, 2023.
[49] S. Yao, R. Rao, M. Hausknecht, and K. Narasimhan. Keep calm and explore: Language models for action generation in text-based games. In Empirical Methods in Natural Language Processing (EMNLP), 2020.
12
[50] S. Yao, D. Yu, J. Zhao, I. Shafran, T. L. Griffiths, Y. Cao, and K. Narasimhan. Tree of thoughts: Deliberate problem solving with large language models, 2023.
[51] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. Narasimhan, and Y. Cao. React: Synergizing reasoning and acting in language models, 2023.
[52] S. Yao, H. Chen, J. Yang, and K. Narasimhan. Webshop: Towards scalable real-world web interaction with grounded language agents. In ArXiv, preprint.
[53] P. Yin and G. Neubig. Reranking for neural semantic parsing. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4553â4559, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1447. URL https://aclanthology.org/P19-1447.
[54] P. Yin, W.-D. Li, K. Xiao, A. Rao, Y. Wen, K. Shi, J. Howland, P. Bailey, M. Catasta, H. Michalewski, A. Polozov, and C. Sutton. Natural language to code generation in inter- active data science notebooks, 2022.
[55] T. Yu, R. Zhang, K. Yang, M. Yasunaga, D. Wang, Z. Li, J. Ma, I. Li, Q. Yao, S. Roman, Z. Zhang, and D. Radev. Spider: A large-scale human-labeled dataset for complex and cross- domain semantic parsing and text-to-SQL task. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3911â3921, Brussels, Belgium, Oct.-Nov. 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1425. URL https://aclanthology.org/D18-1425.
[56] T. Yu, R. Zhang, H. Y. Er, S. Li, E. Xue, B. Pang, X. V. Lin, Y. C. Tan, T. Shi, Z. Li, Y. Jiang, M. Yasunaga, S. Shim, T. Chen, A. Fabbri, Z. Li, L. Chen, Y. Zhang, S. Dixit, V. Zhang, C. Xiong, R. Socher, W. S. Lasecki, and D. Radev. Cosql: A conversational text-to-sql challenge towards cross-domain natural language interfaces to databases, 2019.
[57] T. Yu, R. Zhang, M. Yasunaga, Y. C. Tan, X. V. Lin, S. Li, H. Er, I. Li, B. Pang, T. Chen, E. Ji, S. Dixit, D. Proctor, S. Shim, J. Kraft, V. Zhang, C. Xiong, R. Socher, and D. Radev. SParC: Cross-domain semantic parsing in context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4511â4523, Florence, Italy, July 2019. Association for Computational Linguistics. doi: 10.18653/v1/P19-1443. URL https: //aclanthology.org/P19-1443.
[58] L. Zeng, S. H. K. Parthasarathi, and D. Hakkani-Tur. N-best hypotheses reranking for text-to-sql systems, 2022.
[59] K. Zhang, Z. Li, J. Li, G. Li, and Z. Jin. Self-edit: Fault-aware code editor for code generation, 2023.
[60] S. Zhang, Z. Chen, Y. Shen, M. Ding, J. B. Tenenbaum, and C. Gan. Planning with large language models for code generation, 2023.
[61] T. Zhang, T. Yu, T. B. Hashimoto, M. Lewis, W. tau Yih, D. Fried, and S. I. Wang. Coder reviewer reranking for code generation, 2022.
[62] V. Zhong, C. Xiong, and R. Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning, 2017.
[63] S. Zhou, U. Alon, S. Agarwal, and G. Neubig. Codebertscore: Evaluating code generation with pretrained models of code, 2023.
13
# Appendix
In this appendix, we provide additional details about the implementation and usage of the InterCode framework and the InterCodeEnv interface. We also provide visualizations and analyses of addi- tional experiments to demonstrate InterCodeâs utility and garner further insight into the extent of current modelsâ performance on the interactive coding task. The full template for each prompting strategy is also included. Finally, we also discuss some of the impacts, risks, and limitations of our work. The webpage for InterCode is https://intercode-benchmark.github.io/. The code for InterCode is https://github.com/princeton-nlp/intercode; the link is also included on the InterCode webpage.
# A Environment Details
# InterCode Interface
The InterCode interface inherits the OpenAI gym [5] environment API definition. Specifically, InterCodeEnv is written as an abstract class that primarily handles the main execution logic for processing code interactions, in addition to logging, data management, and sand-boxed execution, along with both environment-level and task-level customization.
InterCodeEnv exposes the following API. Creating an interactive coding environment requires defining a subclass of InterCodeEnv. The methods denoted with an asterisk can be overridden for the purposes of customization.
# __init__(self, data_path:
# str, image_name: str, **kwargs)
str, image_name: str, **kwargs)
⢠Validates that the dataset specified by data_path is formatted correctly and can be used in an interactive setting.
⢠Uses the Docker image specified by image_name to create and connect with a Docker container instance of the image.
Initializes Logging Handler ⢠Keyword arguments:
â verbose (bool): If true, logging is enabled and environment interactions are shown to standard output
â traj_dir (str): If a valid path is provided, task episode summaries are saved to the given directory (generated by save_trajectory)
â preprocess (callable): If provided, this function is run before every task episode. It is a way to provide task instance-specific customization of the execution environment.
# reset(self, index:
# int = None) -> Tuple[str, Dict]
int = None) -> Tuple[str, Dict]
Retrieves task record from data loader ⢠Calls reset_container ⢠Reset task level logger, instance variables
# step(self, action:
# str) -> Tuple[str, int, bool, Dict]
str) -> Tuple[str, int, bool, Dict]
Log (action, observation) ⢠Invoke exec_action on action argument ⢠If action=submit, invoke get_reward, save_trajectory
# save_trajectory(self)
Saves task metadata, (action, obs.) sequence, and reward info to .json in traj_dir
close(self)
⢠Safely exit or stop any resources (i.e. docker container) used by the environment
execute_action(self, action:
# str)
Defines how the action is executed within the context of the docker container. ⢠Requires impl. because the Dockerfile definition, particularly its entrypoint, affects how an
action would be invoked within the container.
14
Interactive Loop data_path = ... iimage_name = ... env = BashEnv(...) 1. Provide docker image + dataset 2. Initialize new for # of episode: task episode env.reset() policy =... Intercode Env. __init__(data, img) reset() S called by reset_container() get_reward() done = false save_trajectory() execute_action() step(action) ya) 3. Interact with env. until task done while not done: called by policy(obs) > act env.step(act) » obs, done close() env.close ie) 4. Close, exit safely
Figure 5: Visualization demonstrating the intended invocations and usage of the InterCodeEnv inter- face, along with how the functions requiring implementation (get_reward(), execute_action(), reset_container() are called by the methods of the main interactive loop.
⢠Default impl. passes the action string directly into a self.container.exec(action) call, which invokes the action in the environment and returns execution output. A timeout is imposed on execution duration.
* get_reward(self) -> Tuple[float, Dict]
Handles reward calculation of actions with respect to the gold command(s) for a task episode. ⢠Requires impl. because the concept and scoring for task completion varies across datasets
and environments.
reset_container(self)
Handles resetting of execution container (i.e. resetting file system to original state). ⢠Requires impl. because the approach to restoring a setting to its initial state varies.
Figure 5 conveys how each of these methods are invoked and how they related to one another. In summary, the technicalities for setting up an interactive coding task for a specific system with one or more programming languages as the action space involve:
Defining a Dockerfile ⢠Providing a dataset with the query and gold fields ⢠(Optional) Defining a reward (get_reward) function to define task completion. ⢠(Optional) Creating an InterCodeEnv subclass that overrides the execute_action and
get_reward methods
# A.2 Bash Environment
Environment definition. The Dockerfile defining the Bash-based environment is founded on the LTS version of the Ubuntu operating system. Several Linux dependencies that can potentially be used by an agent to address instructions in the InterCode-Bash Dataset are then installed via the Advanced Package Tool (apt) interface. Next, a shell script is invoked within the Dockerfile to initialize one of the three file systems displayed in Figure 6. The shell script consists of a simple sequence of mkdir, touch, and echo commands to deterministically create and populate the content of multiple files and folders. Finally, git is configured for the purposes of determining file diffs per task episode (git status -s) and resetting an environment to its original state (git reset âhard; git clean -fd;) before the beginning of a new task episode. The original code for the Dockerfile along with the file system creation scripts can be found on the project GitHub repository.
Dataset details. The log-frequency distribution of the top-50 utilities is displayed in Figure 7. The NL2Bash [32] dataset is made available for use under the GPLv3 License. To assess the
15
generalizability of our approach, we designed three distinct file systems to accommodate the bash commands we collected. A key consideration during the construction of these file systems was to ensure that a significant portion of the executed commands would not result in operations that yield no changes. This deliberate design choice aimed to provide a more comprehensive assessment of our approachâs adaptability and effectiveness across various scenarios and command executions. The file systems encompass a wide range of file types, including text files (.txt), program files (.c, .java, .py), compressed files (.gz), shell scripts (.sh), PHP scripts (.php), JSON files (.json), documents (.doc), spreadsheets (.csv), webpages (.html), database schemas (.sql), hidden files, and files with special characters in their names, convoluted folder hierarchies. Their directory structures are illustrated in Figure 6. For simplicity, we consider the top-level folder created within the root directory (testbed, system, workspace) as the root of each file system. This root folder contains files and sub-folders that necessitate access and manipulation, while changes are monitored throughout the entire container to accurately evaluate the modelsâ actions. Notably, we intentionally designed file system 1 to be more intricate and encompass relatively challenging bash tasks compared to the other two file systems. Thereby, the modelsâ performance is relatively lower for file system 1.
Reward function. Evaluation of an agentâs trajectory across a single task episode towards carrying out the given instruction is determined by modifications to the file system and the latest execution output. The instructions found in the InterCode-Bash dataset fall under one of two buckets: it either 1. Requests information about the file system that can be answered via execution output generated from a correct sequence of Bash actions (i.e. "How many files...", "What is the size of...", "Where is the .png image stored?") or 2. Requests a change to the location, configuration, or content of a file or folder (i.e. "Move the dir1 folder from...", "Set the permissions to...", "Append a line to..."). Any relevant correct changes are therefore captured by considering both execution output and file system modifications during evaluation.
We define A and G as the outputs of the agent and gold commands respectively, where Aout and Gout refer to the execution output, and Af s and Gf s refer to a list of entries reflecting file system modifications, where each entry is [file path, modification type â [added, changed, deleted]]. We then formally define the reward function as follows:
R = 0.34 â similarity(Aout, Gout) +0.33 â (1 â erf(|Af s ⪠Gf s â Af s â© Gf s|))+ +0.33 â is_correct(Af s â© Gf s) Af s â© Gf s (1)
Where similarity refers to lexical similarity, which is determined by the cosine similarity score between TF-IDF vectors (calculated with TfidfVectorizer from scikit-learn) of the two execution outputs. The second component of the reward function reflects the number of file system modifications that were either not completed or not necessary; the error associated with the total number of misses is constrained to the range [0,1] using the Gauss error function (erf), where 0 corresponds to no file system modification mistakes. The third component checks what proportion of paths altered by both agent and gold were modified correctly. The is_correct function returns the number of file paths that were changed correctly, determined by checking whether the md5sum hashes of each file path are identical for agent and gold. If Af s â© Gf s = â
, this reward is automatically 1. The scalar weights for each component are arbitrarily assigned.
A max score of 1 is achieved only if the correct file paths are changed, the changes are correct, and the latest execution output matches the gold command output exactly. Figure 1 visualizes the reward function. While an exact match comparison would have been a simpler choice to satisfy the Success Rate metric put forth in the main paper, we design this reward function to 1. Demonstrate that InterCode can support complex reward functions that account for multiple forms of execution output, and 2. Provide practitioners who use the InterCode-Bash environment with a scalar reward that reflects how "similar" the given output is to the expected, rather than a flat 0/1 reward value that may over-penalize and discount the efforts of more capable reasoning abilities. These reasons also motivate the SQL-based environmentâs reward function, discussed in the following section.
16
FooBar.html Hello.java Hello1.java NewClass.java dir1 AnotherHello.java info.php subdir1 jsonfile1.json pythonscript4.py shellscript1.sh subsubdir1 pythonscript1.py shellscript4.sh textfile4.txt subdir2 textfile1.txt .DS_Store MANIFEST a.out folder1 a.out data.csv doc1.doc doc2.doc keep.txt log1.log new.sh old2.txt recent.txt script1.sh text2.txt text3.txt text4.txt dir2 folder2 shellscript2.sh subdir1 javafile1.java textfile2.txt subdir2 pythonscript2.py shellscript5.sh subsubdir1 textfile5.txt dir3 subdir1 special text3.txt special_text1.txt special_text2.txt text1.txt folder2.tar.gz folder3 backup_dbg backup sql1.sql text1_dbg.txt pythonscript3.py subsubdir1 FooBar special text4.txt temp file.txt file.txt shellscript3.sh textfile3.txt tmp empty.txt temp1 temp temp_1 text1.txt subdir2 tmp.txt html1.html temp csvfile1.csv textfile6.txt keep.txt text1.txt text3.txt dir2 2dir 3dir dir1 (c) File System 3
# .hidden.txt archive.tar.gz dir1
.hidden1.txt a.txt all.txt file.c file.txt hello.c hello.txt long.txt new1.sh readonly.txt script1.sh sum.c terminate.txt
# csvfile1.csv foo.txt hello.txt mysql
# sql1.sql
| |
# hello.c hello.php recent.txt textfile7.txt
# (b) File System 2
(a) File System 1
Figure 6: File System structures designed for InterCode-Bash.
17
eX o © In(frequency) we os? medium / « ow hard ee / \ a ES ee & easy e at sf
medium hard easy
Figure 7: Top 30 most frequently occurring bash utilities out of the 66 in InterCode-Bash with their frequencies in log scale.
Figure 8: Distribution of gold command difficult for InterCode-SQL task data adapted from the Spider SQL dataset.
# A.3 SQL Environment
Environment Definition. The Dockerfile defining the SQL-based environment inherits from the MySQL image and adds a .sql file setup script to the /docker-entrypoint-initdb.d directory within the Docker image; this is a special directory made for container initialization. On container start-up, the added .sql file, which creates and populates databases with tables and tables with records, is automatically invoked. Since the InterCode-SQL dataset does not feature any queries that involve modifying the database in any manner (i.e. no INSERT, UPDATE, or DELETE commands), there is no reset mechanism written into the Dockerfile definition that is invoked before each task episode; with that said, adding a reset script or version control to the Dockerfile is simple.
InterCode-SQL dataset. InterCode-SQL is adopted from the development set of the Spider dataset [55]. Spider 1.0 is a large-scale cross-domain dataset on generating SQL queries from natural language questions whose development set contains 1034 pairs of <instruction, gold> task instances spanning 20 databases. The distribution of queries according to their hardness criterion is shown in Figure 8. As discussed in Section 3.3, a filtering criterion narrows down the Spider datasetâs information to only the necessary components. We do not add anything to the Spider dataset that was not originally available. The Spider 1.0 dataset is available for use under the CC BY-SA 4.0 license.
MySQL databases. We first resolve data types for primary, foreign key pairs across the provided table schemas in Spider for conflicting instances and generate the corresponding SQLite databases. Next, to align with our Docker-supported environment, we convert the SQLite databases to MySQL format using sqlite3mysql [41], a Python library, and then generate a unified MySQL dump having schemas for all the tables. To handle case-sensitive table name discrepancies between the queries and the underlying schema in the original Spider dataset, we activate the lower_case_table_names setting in our evaluation environment. Additionally, for proper access controls, we create a test user and grant them all privileges for all the tables.
Reward function. The completion evaluation mechanism compares the output of the gold SQL latest observation) from the agentâs interaction command with the latest execution output (i.e. trajectory. The execution output of all gold SQL queries is a list of records. Each record is a tuple of one or more values that may be different types. For any single execution output, the order of types for every record is identical. Given the agent command(s)â latest execution output A and the gold commandâs execution output G, we formulate the reward function as follows:
R = A â© G A ⪠G â (kendalltau((A â© (A â© G)), (G â© (A â© G))) + 1)/2 (2)
18
Question OD | Find the first name and age of students who have a [) dog but do not have a cat as a pet. Agent ray SHOW TABLES th ' ' : SQL Env [(âhas_pet',), ('pets',), ('Student',)] os Agent real] DESC has_pet SQL Env [("StuID', 'int', 'VES', 'MUL', None...] w Agent rea SELECT fname, age FROM student WHERE ... | SQL Env [('Tracy', 19)] Agent = rea submit
SQL Env i Agent Command(s) ie 1. SHOW TABLES 2. DESC has_pet, 3. ; SELECT StuID from has_pet 4. DESC : Student (additional cmds Latest Std. Out: [(Ross, 29), (Ross, 29), (Chandler, 28), (Rachel, 27), (Monica, 25)] . Gold Std. Output: [(Ross, 29), (Ross, 29), (Joey, 27), (Rachel, 27), (Phoebe, 26)] Gold Command e SELECT T1.fname, Tl.age FROM Student H AS T1 JOIN Has_Pet AS T2 ON -
Figure 9: Example of interactions between an agent and the InterCode SQL Environment
Figure 10: Evaluation of the results of agent interactions with the SQL Environment against the gold command associated with the task. A simple Intersection over Union formula that ac- counts for duplicates is used to quantify answer correctness. Task completion is a reward of 1.
We employ Intersection over Union (IoU), or more formally the Jaccard Index, to quantify the correctness of the latest execution output generated by the agent against the gold output. If the latest execution output of the SQL query is not in the form of a list of records (i.e. a string error message), the reward is 0 by default. Among the items that lie in the intersection of the agent and gold execution outputs, we also apply a penalty if the records are in the incorrect order. Since achieving the correct order of fields in a record is of non-trivial importance to addressing many SQL queries correctly, we do not do any re-ordering or pre-processing of the list of records. Therefore, a record formatted as ("Ross", 29) is not awarded any credit against a gold output that includes (29, "Ross"). To quantify how sorted the agent output is relative to the gold output, we lean on Kendallâs Ï and adjust the output range to [0, 1]. The IoU score is then directly scaled by this coefficient.
All in all, only a correctly ordered list with the exact set of records found in the gold output would receive a max score of 1, which corresponds to task completion. Figure 10 visualizes the reward function for an example set of outputs. Note that in the main paper, the Success Rate metric is used; the scalar 3/7 output shown in the figure is treated as a 0 when quantifying whether the task was completed via the 0/1 Success Rate metric. As mentioned in the discussion of the Bash reward function, this reward function also aims to be a richer and fairer continuous evaluation metric of a modelâs reasoning abilities compared to a binary 0/1 task completion score.
# A.4 Python Environment
Environment definition. The InterCode-Python task environment inherits from a bare-minimum Python 3.9 image that provides the basic essentials for initializing a Python interpreter. We were unable to determine how to initialize a Python interpreter within a Dockerfile such that the container would then be capable of automatically executing Python commands sent to it while continuous logging every action/observation per turn. To overcome this, we create and define a backend application that runs within the Docker container, simulates a Python interpreter, and is responsible for handling input/output. By having the application sit between the agentâs actions and the interpreter, we are able to log every episode faithfully in addition to providing an environment that is agent- friendly and faithful to the experience of a real Python interpreter.
19
Py Interpreter Coy Agent Solution for index, c in enumerate(str1): if str1[:index+1].count(c) > 1: Solution Unit Tests assert first_repi (âabeabe"⢠assert first_repeated_char x ("abe") == "None" assert first_repeated_char Gold Command for index,c in enumerate(str1): : return c if stri[:index+1].count(c) > 1 i else :
Figure 11: Example of interactions between an agent and the InterCode Python Environment Figure 12: In this setting, an agent interacts with a Python Interpreter to 1. implement the requested method and 2. write test cases to de- termine function correctness. Upon submission, the reward function then evaluates the agentâs implementation with a set of unit tests.
Question D_| write a python function to find the first repeated OD | character in a given string. def first_repeated_char(str1) Agent reg) def first_repeated_char(str1): for index, c in enumerate(str1): if stri[:index+1].count(c) > 1: return c Py Intr. (No Std. Out) oa] Agent first_repeated_char (âaaâ) Py Intr. gy | Agent | first_repeated_char (â12312â) Py Intr. «4» * Agent reg) vabete
Figure 11: Example of interactions between an agent and the InterCode Python Environment
InterCode-Python dataset. A large majority of code datasets popular within the NLP community are based on Python and present code completion as the primary task [9, 4, 20]. In the original problem setting, a task worker is asked to synthesize code in a zero, one, or few-shot setting with little to no access to an execution environment. In the interactive setting, task workers are asked to accomplish the same objective, but informed that they have a Python environment to do whatever may help them write the function correctly, such as prototype different implementations and write/execute their own unit tests. Therefore, datasets such as HumanEval, APPS, and MBPP require little to no revisions to be usable within the InterCode environment, with the only necessary processing for all three being renaming of dataset attributes to InterCode-compatible names. A visualization of an example trajectory of interactions between an agent and the Python interpreter is presented in Figure 11.
Reward function. We preserve the original metric of proportion of unit tests passed to evaluate agent implementations, with all tests passing being equivalent to task completion. Complementary to the visualization of interactions, we also show how InterCode-Python performs automatic evaluation of an agentâs implementation of the desired function in Figure 12.
# B Experiment Details
# B.1 Model Details
We do not perform any model training for configuring the methods or running the experiments discussed in this project. Our evaluations use inference call requests to OpenAI, PaLM, and Hugging- Face API endpoints to run the baseline models on the InterCode tasks. For OpenAI models, we set temperature to 0, top_p to 1, max_tokens to 512, and n (number of completions) to 1. For PaLM models, we set temperature to 0, top_p to 1, and candidate_count (number of completions) to 1. For open source models, we set max_new_tokens (maximum number of tokens to generate) to 100 and temperature to 0.01. Due to constraints in the context window size, we limit the length of each observation to a maximum of 1000 tokens across all inference calls. The code for configuring API calls can be found in the linked repository.
20
# B.2 Additional Experiments & Analysis
SQL schema ablation. To confirm that the benefits of interaction exceed a simple disparity in information between the Single Turn and Try Again settings, we add the full SQL database schema, providing holistic details of tables necessary to the given instruction, to the Question message of both prompts, then re-run the comparison for several. Table 5 indicates that while Single Turn performance improves drastically, a non-trivial difference in favor of Try Again remains. Manual inspection of task episode trajectories shows that selective and fine-grained context discovery (i.e. inspecting specific table records and file content that affect query construction) is still critical to solving tasks efficiently.
InterCode-SQL + Schema Single Turn Model / Hardness Easy Med Hard Extra All Try Again (max 10 turns) Easy Med Hard Extra All gpt-3.5-turbo text-bison-001 chat-bison-001 90.7 89.5 79.0 70.2 68.2 52.0 59.2 44.2 32.1 37.3 19.3 15.1 67.9 61.4 49.2 92.7 90.7 82.2 74.9 70.4 56.0 67.2 50.0 42.5 43.4 21.1 24.1 72.8 63.9 54.9
Table 5: Success Rate across difficulty for single vs. multi-turn evaluation on the InterCode-SQL dataset, with the database schema relevant to each task episodeâs instruction, also provided in the Question message of the prompting strategy. Best metrics are in bold.
Trends of admissible actions. Table 6 shows that for the SQL task, models generate admissible actions with increasingly higher rates early on; in initial turns, models will tend to hallucinate a query with fabricated table and column names at a high frequency. The drop in error rate between the first and second turns can largely be attributed to the modelâs decision to begin exploring context; 60.3% of second turn actions contain either the SHOW TABLES or DESC keywords. Prompting strategies (i.e. ReAct, Plan & Solve), explicit phrasing that encourages exploration, and demonstrations diminish a modelâs default tendency to hallucinate a query in the first turn. This trend is not found in Bash. This can likely be attributed to the nature of the instructions; unlike the SQL instructions which simply pose a question and do not have any explicit references to SQL commands or clauses, Bash instructions will typically include keywords that correspond directly to useful Linux commands or give insight into the file systemâs internal structure. These signals reduce the need for context discovery. Therefore, successful task completion in Bash tends to lean towards 1) Figuring out which flags, options, and arguments to configure a command with and 2) How to string together commands or pass outputs from one command to the next correctly.
For both Bash and SQL, in later turns, the rate of admissible actions does not improve consistently. The actions in these later turns are usually attempts to answer the original instruction. At these stages, a model will tend to make small, cursory adjustments to the prior action based on execution feedback, often resulting in both a repetition of the same types of mistakes and hallucinations that introduce new issues. In these moments, compared to such minor perturbations, alternative reasoning capabilities such as context discovery and modularized problem solving are often more efficient ways to get the relevant insights needed to better decide how to fix the prior turnsâ issues. As corroborated by Figure 3, models struggle to take advantage of additional context in longer task episodes or horizons. Making the most of multiple queries is an open challenge with exciting implications for solving more difficult coding tasks.
Turn 1 2 3 4 5 6 7 8 9 10 SQL Bash 90.2 23.1 46.4 28.6 34.4 34.7 39.7 37.5 31.1 37.6 42.9 42.9 51.5 39.3 47.4 37.1 48.4 33.7 46.6 38.2
Table 6: Error % (Average ratio of non-admissible actions) per turn for the Try Again prompting scheme using a GPT 3.5 model on the Bash and SQL InterCode datasets.
Robustness results. We conducted an evaluation to assess the robustness of the reported accuracy metrics for the models. In order to maintain consistency in the evaluation, we focused on the performance across file systems 2, 3, and 4 (shown in Figure 6), which were designed to have similar difficulty levels. File system 1, intentionally made harder, was not included in this analysis. The
21
standard errors for the Single Turn and Try Again modes are presented in Table 7. The Try Again mode leverages interaction to consistently outperform the Single Turn mode across all models.
Model Single Turn text-davinci-003 gpt-3.5-turbo gpt-4 text-bison-001 chat-bison-001 Vicuna-13B StarChat-16B 31.40 ± 1.35 36.63 ± 1.83 38.37 ± 1.20 18.83 ± 3.57 20.47 ± 1.89 16.73 ± 5.00 19.37 ± 3.04 43.13 ± 5.98 47.40 ± 1.23 52.70 ± 3.50 22.40 ± 3.35 21.67 ± 1.81 27.67 ± 4.15 27.17 ± 2.74
Table 7: (Robustness Results) Success Rate with standard errors for single vs. multi turn evaluation on InterCode-Bash (refer §A.2). Best metrics are in bold. Both modes display significant standard errors (as expected) but still Try Again outperforms Single Turn by a huge margin.
# B.3 Additional Prompting Strategy
To gauge the significance of designing prompting strategies that can successfully solve the interactive coding task, we attempt to devise a more performant approach by chaining together existing tech- niques, where each technique is meant to elicit a different, relevant reasoning skill. To this end, we design a hybrid prompting strategy that combines Plan & Solve and Try Again, which we refer to as "Plan & Solve + Refine". This strategy is meant to complement a modelâs planning, modularized task completion, and context discovery abilities with error correction. Figure 13 visualizes this prompting strategyâs workflow. The full prompting template is included in § B.7.
Plan & Solve + Refine Initial Msg. âAgent âAgent @ [Llet's understand the problem and devise <action> rea) Up to <action> @ u . pton plan... Then, let's carry out the plan... plan P e Observation Observation times Question cp [xts_eutour, <ster> length Wee Lectss_eutput, reware> Vv (ap Liitatis the country code and frst name. v âAgent Terminate if reward = 1, else continue Terminate when reward =1 <plan> Vv or exceeds n turns Post Plan Msg. You have finished executing the plan, but it seems like there are still issues with. Execute Msg. @ You will now execute your own plan. @ Interact with a MySQL DB using SQL...
Figure 13: Visualization of the Plan & Solve + Refine strategy. This prompting strategy is a naive combination of Plan & Solve and Try Again; the model is first asked to formulate and execute a plan. If it fails, then the model is asked to continue to work on generating the right solution until the reward is 1 or the number of allowed turns (n = 3) is exhausted.
We run small-scale experiments in the same fashion as discussed in Section 5.2 on the SQL task for 44 randomly selected task instances. These preliminary tests have an average Success Rate of 45.5, a Number of Turns of 6.34 (across both planning and refinement stages), and an Error % of 22.4. Compared to the original Plan & Solve performance presented in Section 5.2, this strategy does not yield a significant improvement. Upon inspection of the trajectories, it is evident that while models are capable of specific reasoning techniques, it is not enough to elicit specific strategies in a pre-determined order. In addition, compared to human problem solvers, such reasoning frameworks do not intuitively allow for adjustment in strategy based on execution output. Therefore, mechanisms that allow a model to freely self-determine what strategies at any turn in task episode are integral to improving Success Rate and non-trivial because simple adjustments in existing prompting strategies are not enough to close this gap.
# B.4 Task Trajectory Analyses
In this section, we discuss some of the behaviors and phenomena in different prompting schemes. We also include annotated trajectories that are representative of these observations.
22
Inst: Count md5sum of all â*.pyâ files in /testbed folder with subfolders. Gold: find /testbed -type f -name "*.py " -exec md5sum {} + | awk â{print $1}â | sort | md5sum Out: ff48e395ab4e0acb8336c4afe4504 - Act: find /testbed -name "*.py" -type f -exec md5sum {} + | wc -l Obs: 4 Act: find /testbed -name "*.py" -type f -exec md5sum {} + | awk âEND{print NR}â Obs: 4 Act: find /testbed -name "*.py" -type f -exec md5sum {} + | awk âEND{print}â | cut -dâ â -f1 Obs: 0d1e201ac67b5a979c52f42eb8dd3702 Act: find /testbed -name "*.py" -type f -exec md5sum {} + | awk â{print $1}â | sort | md5sum | cut -dâ â -f1 Obs: 7e9ff48e395ab4e0acb8336c4afe4504 ... (6 more turns) Inst: For each continent, list its id, name, and how many countries it has? Gold: , count(*) FROM CONTINENTS AS T1 JOIN COUNTRIES AS T2 ON T1.ContId = T2.Continent GROUP BY T1.ContId; SELECT T1.ContId , T1.Continent Act: SELECT continent.id, continent.na- me, COUNT(country.id) AS num_countries FROM continent LEFT JOIN country ON continent.id = country.continent_id GROUP BY continent.id, continent.name Obs: Error: Table âcar_1.continentâ doesnât exist Act: SHOW TABLES Obs: [(âcar_makersâ,), (âcar_namesâ,)] + 4 more... Act: DESC continents Obs: âintâ, [(âContIdâ, âauto_incrementâ), (âContinentâ, âtextâ, âYESâ... Act: SELECT ContId, Continent, COUNT(*) AS NumCountries FROM countries GROUP BY Continent ...(5 more turns) âNOâ, âPRIâ, None,
Table 8: Examples of Try Again interaction trajectories with GPT 3.5 based model from Bash (left) and SQL (right). Without any reasoning framework, models will tend to hallucinate an initial command, then make incremental adjustments (error correction) to the command based on the execution feedback. On occasion, a model will write commands to gather context about the task environment. For both trajectories, n = 10 turns was allotted and reward was 0 (max turns exhausted).
Default model behaviors. In both the Single Turn and Try Again setups, the templates describe the interactive coding task and do not provide the model with any reasoning framework or strategy, providing insight into a modelâs default approach when given the instruction and task environment. As evident in Table 8, across both the Bash and SQL tasks, a model tends to directly generate an answer in the very first turn. For Try Again, the model will then perform error correction, then make slight adjustments to the answer to resolve the issue(s) reflected in prior execution output across the next several turns. With less frequency, for SQL, the model will also issue commands to get database and table descriptions. To the best of our knowledge, based on a close examination of the Try Again trajectories, current language models do not demonstrate any innate reasoning capacity for the interactive coding task beyond error correction and context discovery.
Prompting strategy effects. In contrast with Try Again, the ReAct prompting strategy briefly introduces the interactive task environment and proposes a reasoning framework for the model to abide by. Table 9 presents a side-by-side comparison of the Try Again and ReAct [51] strategies. The figure reflects the richer types of problem-solving that a model exhibits when prompted to reason on its own thoughts about the execution output. This reflects that through better strategies, it may be possible to make significant advancements in the interactive coding task with prompting strategies that attempt to elicit reasoning via an appropriate framework that also permits the model to be expressive and creative in devising its own solutions. This is particularly necessary for interactive code tasks, which pose multiple challenges that cannot be overcome by any isolated reasoning technique. As demonstrated in § B.3, this direction is non-trivial, and InterCode is designed to facilitate the bench-marking of such approaches.
# B.5 Capture the Flag Analysis
CTF challenges typically necessitate a trial-and-error methodology, where participants employ diverse techniques and exploit vectors to identify vulnerabilities to solve challenges. Processes such as exploring complex environments or executables, debugging, and dynamic exploitation, which involve sequential steps, require iterative interaction. Considering the inherently interactive nature of the task, it is crucial for an agent to employ an iterative approach and have access to an interactive platform to achieve success. In most instances, both humans and agents find it impracticable to solve a challenge in a single attempt.
23
Inst: What is TV Channel of TV series with Episode "A Love of a Lifetime"? List TV Channelâs series name. Gold: SELECT T1.series_name FROM TV_Channel AS T1 JOIN TV_series AS T2 ON T2.Channel = T1.id WHERE T2.Episode = "A Love of a Lifetime";
Try Again (r = 0, length = 10, max turns = 10) SELECT t1.channel, t2.series_name Act: FROM table1 t1 JOIN table2 t2 ON t1.series_id = t2.series_id WHERE t2.episode_name = âA Love of a Lifetimeâ Obs: Error: Table âtable1â doesnât exist Act: SHOW TABLES Obs: [(âcartoonâ,), (âtv_channelâ,), (âtv_seriesâ,)] Act: DESC tv_channel Obs: [(âidâ, âvarchar(255)â, âNOâ, âPRIâ, None, â), (âseries_nameâ, âtextâ, âYESâ, â, None...] + 7 more... Act: DESC tv_series Obs: (âEpisodeâ, âtextâ, âYESâ, â + 7 more... Act: SELECT Channel,series_name FROM tv _channel WHERE id IN (SELECT Channel FROM tv_series WHERE Episode = âA Love of a Lifetimeâ) Obs: Error: Unknown column âChannelâ Act: FROM tv_channel JOIN tv_series ON tv_channel.id = tv_series. WHERE Episode = âA Love of a Lifetimeâ Obs: Error: Unknown column âchannel_nameâ Act: FROM tv_channel JOIN tv_series ON tv_channel.id = tv_series.Channel WHERE Episode = âA Love of a Lifetimeâ Continued hallucinations in problem solving actions for last 4 turns... [(âidâ, âdoubleâ, âNOâ, âPRIâ, None, â), SELECT channel_name,series_name Channel SELECT Channel,series_name Thought: I should check what tables are available in this MySQL database that can help me answer this question Act: SHOW TABLES Obs: [(âcartoonâ,), (âtv_channelâ,), (âtv_seriesâ,)] ...(1 turn for context discovery with DESC) Thought: The tv_series table has a Channel column that might be helpful. I should check if there is a row with the episode name "A Love of a Lifetime". SELECT Channel FROM tv_series Act: WHERE Episode = âA Love of a Lifetimeâ Obs: [(â700â,)] Thought: I get a single value, 700, which is the TV Channel of the TV series with the episode "A Love of a Lifetime". Check what series is on this channel. Act: SELECT series_name FROM tv_channel WHERE channel_number = 700 Obs: Error: Unknown column âchannel_numberâ Thought: I should check what columns are avail- able in the tv_channel table. Act: DESC tv_channel Obs: [(âidâ, âvarchar(255)â, âNOâ, âPRIâ, None, â), (âseries_nameâ, âtextâ, âYESâ, â... + 7 more... Thought: The tv_channel table does not have a channel_number column. Check if there is another way to get the series name for the TV Channel 700. Act: SELECT series_name FROM tv_channel WHERE id = â700â Obs: [(âSky Radioâ,)] Thought: I get a single value, Sky Radio, which is the series name for the TV Channel 700. Act: submit
Table 9: Comparison of Try Again & ReAct interaction trajectories with GPT 3.5 based model towards solving the same instruction. Without explicit strategy or task demonstrations for Try Again, the model tends to hallucinate and doesnât build upon prior observations as well. With ReAct, there is greater evidence of context discovery, error correction, and problem solving.
Here, we present a more thorough discussion of Figure 4. It is important to note that without the provided hint regarding the usefulness of the "sleuthkit" library, the agent fails to solve the task and engages in incorrect reasoning. However, upon receiving the promptâs hint, the agent adeptly utilizes this information to install the library and leverage its functionalities for its advantage. By analyzing a given disk image file, the agent employs the "mmls" command to inspect the corresponding partition table. From the partition table, it deduces that a significant portion of the space remains unallocated, while a Linux partition initiates at sector 2048. Subsequently, the agent attempts to access the contents of this sector using the "fls" command, searching for the "down-at-the-bottom.txt" file, which it anticipates will contain the flag. When unable to locate the file, the agent speculates that a recursive search might be necessary and adds the "-r" flag to its command. Due to the immense output, it becomes arduous to track the fileâs location, prompting the agent to employ the "grep" command to search for the file within the output. By examining the grep output, the agent identifies the fileâs location (18291) and proceeds to inspect its contents. The flag, presented in a visual format, is accurately recognized and submitted by the agent.
A human expert employs a very similar approach when provided with the hint. By furnishing an interactive framework, InterCode empowers agents to emulate human-like behavior, enabling them to explore the environment, decompose tasks into subtasks, debug using traces and logs, and iteratively accumulate knowledge to successfully solve challenges.
24
# B.6 Human Performance Baseline
To explore the gap between human and agent performance on the interactive coding task, we the authors, all proficient in SQL, act as human task workers and perform the task on a random sample of 15 InterCode-SQL task instances within the same task environment identical to the agentâs setting. A max number of n = 10 turns is imposed, as was done with the Try Again prompting strategy. Similar to ReAct and Plan & Solve, the human task worker decides when to submit; in other words, the task does not terminate automatically when reward = 1. The trajectories for these 15 instances and the code for facilitating human interaction with the InterCode-SQL environment are available in the codebase.
The human task worker was able to complete 13 of 15 tasks (Success Rate = 0.87) with low Error %, most of the errors occurring not because of hallucinations of table columns and attributes, but rather because of SQL syntax errors that arose due to mistakes in relatively complex queries. Whatâs noteworthy about the human task workerâs trajectories is the presence of much more modularized problem-solving that deviates heavily from an agentâs approach of generating a query in a single go. Even with context discovery and error correction, an agentâs action to produce an answer for the instruction will tend to be a single, self-contained command that generates the answer in one go. On the other hand, a human task worker will tend to break up the query solution into multiple smaller sub-problems. This is particularly evident for instructions that must be answered with investigations across multiple tables with relations established by primary and foreign key columns. As an example, given an instruction "Find the average weight of the dog breed that is owned by the majority of pet owners", a human task worker might write commands that query the pet_owners table to determine what the most popular dog breed is, and then use the answer to this sub-problem as a field in the WHERE clause of a second query that then determines the average weight using the pets table.
A more thorough and variegated study would be required to fully establish the performance gap between humans and agents. Nevertheless, from this small study, we are confident that humans generally exhibit more flexible and variegated reasoning capabilities compared to agents in the interactive coding task. Closing this gap is an exciting research direction, and beyond model-side improvements and scaling laws, incorporating human task reasoning and execution as guidance, feedback, or reward signals is a worthwhile consideration toward improving model performance.
# B.7 Prompt Templates
As discussed in the paper, the main baseline evaluations for InterCode consist of presenting a language agent with an instruction and a prompting strategy that have been adapted for InterCodeâs interactive task setting. Each prompting strategy is defined as a template with three components:
⢠Initial Message: This is the first message presented to the agent. The initial message may describe the general task to accomplish, guidelines for interacting with the InterCode envi- ronment, the formats of the instruction and observation(s), and any additional information that pertains to the environment. In addition to the environment and task specifications, the general prompting strategy and useful demonstrations may also be discussed. The initial message is presented once as the first message of a task episode.
⢠Instruction Message: This is the template for communicating the instructions that an agent is asked to solve for a particular task episode. The instruction message is presented once as the second message of a task episode.
⢠Observation Message: This template is for communicating the standard output and any additional information for a single interaction. This observation is what the agent will use to generate the next action. The observation message may be presented multiple times depending on how many interactions the task episode lasts for.
Figures 11, 12, 13, and 14 present the corresponding prompt templates for the Try Again, ReAct, and Plan & Solve experiments, along with a specific version for the toy Capture the Flag task.
# B.8 Supported Datasets
While evaluation for Bash and SQL are carried out on the NL2Bash and Spider datasets, InterCode supports multiple existing datasets based on these two languages and Python. We include Table 10 to list all datasets currently supported by each InterCode environment. Specific details regarding the
25
transformation procedure and usage guidelines for each dataset can be found in the main InterCode code repository.
InterCode Environment Supported Datasets IC-Bash IC-Python IC-SQL NL2Bash [32] MBPP [4], APPS [20] Spider [55], BIRD-SQL [27], WikiSQL [62]
Table 10: Summary of all datasets supported by each InterCode environment.
# C Future Work Discussion
In this section, we present some details of ongoing work to expand InterCodeâs coverage to more language, datasets, and tasks.
Compiled language support. Unlike interactive mode languages where expressions can be executed REPL-style one line at a time, imperative and interpreted languages that are typically processed by compilers (i.e. C, C++, Java, Go, Rust) are not as malleable to the exact form of the Bash or SQL environment. To this end, we see two viable avenues of support for such languages:
⢠3rd party interpreter support: Following Python, a language with both interpreter and compiler support, tools such as JShell (for Java) or Yaegi (for Go) may be serviceable interpreters for enabling REPL style code interaction for such languages. The main drawback to this approach is that this usage style feels a bit contrived and is not really found in real world software development processes.
⢠Multi-language environments: By creating an InterCode-Bash based environment with a languageâs corresponding compiler installed (i.e. javac, gcc), an agent would be able to use Bash commands to create, write to, and execute compiled-language files. (i.e. touch hello.java; echo [cmd] > hello.java; javac hello.java; java hello). While the execution of languages as an action in such a setting is not as direct as Option A, we believe that this paradigm is a practical setting that 1. Mirrors real world software engineering and 2. fits naturally with the interactive coding task formulation presented by InterCode.
As a side note, Bash, Python, and SQL were the initial two languages chosen due to the bounty of such datasets that are already available. On the contrary, despite their popularity among developers, there is a relative lack of such datasets for other languages (e.g., Java, C++, JavaScript) in the LLM2Code or NL2Code spaces. By 1. demonstrating interactive coding as a feasible, practical, and worthwhile task and 2. designing a language agnostic framework for task construction, we hope InterCode might encourage more exploration into coding tasks that leverage interaction with 1+ programming languages that are not as popular at the moment.
Beyond code generation. It has been increasingly evident in recent years that many interactive tasks can be readily converted to Python-based code interaction problems, such as Python API interactions with a search engine to perform question answering or navigate websites for shopping [52], code as interactive control policies for robots [30], or code as a vehicle of thought for accomplishing complex, multi-step math problems [10]. As code has become the medium of communication for many non-code synthesis tasks, we look forward to demonstrating and supporting InterCodeâs use as an environment for similar future tasks that extend into domains such as robotics, software engineering, and natural sciences.
26
# Initial Message
## TASK DESCRIPTION You are a {self.language} code generator helping a user answer a question using {self.language}. The user will ask you a question, and your task is to interact with a {self.setting} system using {self.language} commands to come up with the answer.
## RULES 1. 2.
Do NOT ask questions Your response should only be {self.language} commands
2. Your response should only be {self.language} commands
## RESPONSE FORMAT Your response should be a {self.language} command. command as follows:
# Format your {self.language}
Your response should be a {self.language} command. Format your {self.language} command as follows:
# ```{self.language} Your {self.language} code here ```
Write {self.language} commands to help you do two things: 1. you are interacting with a MySQL database, you can use the DESCRIBE command to learn more about the tables you have access to. 2. output.
Learn more about the {self.setting} you are interacting with.
# For example, if
Execute {self.language} commands based on these tables to generate the correct
# DO NOT WRITE ANYTHING EXCEPT FOR CODE in your response.
## OUTPUT DESCRIPTION Given your {self.language} command input, the system will then give back output formatted as follows:
# Output: <string> Reward: [0, 1]
The output is the standard output from executing your {self.language} command. The reward is a decimal value between 0 and 1, which tells you how close your {self.language} command is to the correct answer. the closer your {self.language} command is to the correct answer.
# Instruction Message
Query: "<Task Episode Instruction>"
Observation Message
Output: <Observation from Intercode Environment> Reward: <Value â [0, 1]>
Table 11: Initial, Query, and Retry Prompts for the Try Again Strategy Evaluation. {self.language} is substituted with the programming language for interacting with the environment. {self.setting} is substituted with a natural language description of the environment
27
Initial Message
Interact with a {self.setting} system using {self.language} queries to answer a question. Use interleaving Thought, Action, Observation steps. Thought can reason about the current situation, and Action can be two types: (1) execute, which executes {self.language} code on the {self.setting} system (2) submit, which indicates that the previous observation is the answer Here are some examples. Question: What are the names and grades for each high schooler? Thought 1: I should write a SQL command that selects the name and grade fields from a table about high schoolers. Action 1: execute[SELECT name, grade FROM high_schoolers] Observation 1: exist Thought 2: The high_schoolers table doesnât exist, I need to check what tables are available to use. Action 2: execute[SHOW TABLES] Observation 2: Thought 3: There is a table called highschooler. I should check what information this table contains. Action 3: execute[DESC highschooler] Observation 3: âtextâ, âYESâ, â, None, â), (âgradeâ, âintâ, âYESâ, â, None, â)] Thought 4: The highschooler table contains the name and grade columns. write a SQL command that answers the question ... Error executing query: Table ânetwork_1.high_schoolersâ doesnât [(âfriendâ,), (âhighschoolerâ,), (âlikesâ,)] [(âIDâ, âintâ, âNOâ, âPRIâ, None, âauto_incrementâ), (ânameâ, I can now
# Instruction Message
# Query:
# "<Task Episode Instruction>"
Query: "<Task Episode Instruction>"
Observation Message
# Output: <Observation from Intercode Environment>
Table 12: Initial, Query, and Retry Prompts for ReAct Strategy Evaluation. Additional ReAct style demonstrations written for SQL and Bash can be found on the InterCode project GitHub.
# Initial Message You are an expert Capture the Flag (CTF) player.
# Instruction Message
You are in the â/ctfâ folder of a standard Ubuntu machine. Instruction>. Your goal is to find this hidden flag by interacting with the bash terminal step-by-step. You can perform any valid action in the terminal by starting your action with âcmd <your_command>â. For bash commands, prepend â/bin/bash -c ¨<your_command>¨â, for python commands, prepend âpython3 -c ¨<your_command>¨â and so on. For every action you take, you will receive the corresponding standard output as observation. You can also explore/inspect the file system or do anything that helps you find the flag. found the flag, you can choose to submit it by printing âsubmit <your_flag>â. Do NOT provide an explanation for your answer, only output the action you want.
Observation Message
# Output: <Observation from Intercode Environment>
# Table 13: Initial, Query, and Retry Prompts for Capture the Flag Evaluation.
28
Plan Message
For the following user question, letâs first understand the problem and devise a plan to solve the problem. by step.
Your plan should describe a sequence of {self.language} queries you can write to determine the answer. question.
Question: What are the names and grades for each high schooler? Plan: 1. 2. 3. high schooler. ...
Check what tables are available for use. Inspect each table to identify which has information about high schoolers. Use the table to write a query that selects the name and grade fields for each
Execute Plan Message
You will now execute your own plan. {self.language} queries to answer a question. following information: Interact with a {self.setting} system using Per turn, you will be given the
``` Observation: Standard output from executing previous instruction Step: ```
Your response should be {self.language} code, nothing else, formatted as follows: ```{self.language} Your {self.language} code here ```
# Observation Message
Output: <Observation from Intercode Environment> <Next step to execute from the plan> Step:
Post-Plan Refinement Message You have finished executing the plan, but it seems like there are still issues with your answer. Please continue to work on getting the correct answer. Per turn, you will be given the following information:
``` Observation: Standard output from executing previous instruction ```
Your response should be {self.language} code, nothing else, formatted as follows: ```{self.language} Your {self.language} code here ```
Table 14: Initial, Query, and Retry Prompts for Plan & Solve Strategy Evaluation. Additional Plan & Solve style demonstrations written for SQL and Bash can be found on the InterCode project GitHub. Note that the Post-Plan Refinement Message is only used for the Plan & Solve + Refine strategy discussed in § B.3. It is not used for the original Plan & Solve strategy.
29 | {
"id": "2304.05128"
} |
2306.14565 | Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning | Despite the promising progress in multi-modal tasks, current large
multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions
with respect to the associated image and human instructions. This paper
addresses this issue by introducing the first large and diverse visual
instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction.
Our dataset comprises 400k visual instructions generated by GPT4, covering 16
vision-and-language tasks with open-ended instructions and answers. Unlike
existing studies that primarily focus on positive instruction samples, we
design LRV-Instruction to include both positive and negative instructions for
more robust visual instruction tuning. Our negative instructions are designed
at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent
Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure
the hallucination generated by LMMs, we propose GPT4-Assisted Visual
Instruction Evaluation (GAVIE), a stable approach to evaluate visual
instruction tuning like human experts. GAVIE does not require human-annotated
groundtruth answers and can adapt to diverse instruction formats. We conduct
comprehensive experiments to investigate the hallucination of LMMs. Our results
demonstrate existing LMMs exhibit significant hallucinations when presented
with our negative instructions, particularly Existent Object and Knowledge
Manipulation instructions. Moreover, we successfully mitigate hallucination by
finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving
performance on several public datasets compared to state-of-the-art methods.
Additionally, we observed that a balanced ratio of positive and negative
instances in the training data leads to a more robust model. | http://arxiv.org/pdf/2306.14565 | Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, Lijuan Wang | cs.CV, cs.AI, cs.CE, cs.CL, cs.MM | 40 pages, 32 figures. Under Review | null | cs.CV | 20230626 | 20230929 | 3 2 0 2
p e S 9 2 ] V C . s c [
3 v 5 6 5 4 1 . 6 0 3 2 : v i X r a
# Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Fuxiao Liu1, Kevin Lin2, Linjie Li2, Jianfeng Wang2, Yaser Yacoob1, Lijuan Wang2 1University of Maryland, College Park 2Microsoft Corporation {fl3es, yaser}@umd.edu, {keli, lindsey.li, jianfw, lijuanw}@microsoft.com https://fuxiaoliu.github.io/LRV/
# Abstract
Despite the promising progress in multi-modal tasks, current large multi-modal models (LMMs) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction. Our dataset comprises 400k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open- ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instruc- tions are designed at three semantic levels: (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation. To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE), a stable approach to evaluate visual instruction tuning like human experts. GAVIE does not require human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate existing LMMs exhibit significant hallucinations when presented with our negative instructions, particularly Existent Object and Knowledge Manipulation instructions. Moreover, we successfully mitigate hallucination by finetuning MiniGPT4 and mPLUG-Owl on LRV-Instruction while improving performance on several public datasets compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model.
# Introduction
Significant progress has been made in the field of natural language processing, leading to the development of models that can comprehend and follow instructions given natural language inputs [40; 11; 30; 5]. These models harness the power of large language models (LLM) and rely on high-quality instruction data. Similarly, efforts have been made to introduce similar capabilities to multi-modal models. GPT4 [29] has demonstrated impressive performance in multi-modal conversations with humans, yet the techniques contributing to its extraordinary capabilities remain opaque. As a result, several large multi-modal models (LMMs) have recently emerged [44; 26; 11; 8], such as MiniGPT4 [44] and LLaVA [26], both utilize the Vicuna [7] as the language generator but with different vision encoders [31; 17]. InstructBLIP [8] is initialized from a pre-trained BLIP-2 [16] while Multimodal-GPT (MMGPT) [11] is built on Flamingo [1; 3].
A recent study [15] revealed that the hallucination issue of LLM, although not desired, is inherited by these LMMs [44; 26; 11; 8]. Hallucination, a major ethical concern associated with LLMs [4], can
Preprint. Under review.
GAVIE Relevancy Score(0-10) i] Accuracy Score(0-10) 1 Score (mPLUG-Owl): 10 Score (mPLUG- Owl). Reason: It followed the instruction and |Reason: directly described a dog playing with a ball on the shore, which was the exact scenario asked for in the instruction. Score (MiniGPT4): 1 Score (MiniGPT4): 10 Reason: It provided an overview of the |Reason: It provides a detailed image but didn't directly address the _| description of the image, such as the instruction about finding adog anda _ people surfing, the ocean, and the wet ball. suits. # § S| Can you help me find a dog playing with a rs ® ball on the shore? Ss Score (Ours): 10 Score (Ours): 10 Reason: The answer is in line with the Reason: The answer directly responds to|image information, pointing out the lackâ the question. of mentioned elements from the âa playful element to the otherwise serene question in the image descriptions. âbeach setting, where people are enjoying a in I e There is no mention of a dog or a balll in â. â 8 the given image. Introducing an element = not in the image may cause confusion and make it difficult for the user to find the correct details. hheads. The sky in the background is a light ue color with white clouds. LRV-Instruction Dataset(Ours)
Figure 1: Given an image and human instruction as the input, we introduce GPT4-Assisted Visual Instruction Evaluation (GAVIE) to assess the output from current LMMs, such as MiniGPT4 and mPLUG-Owl. BLUE represents LMMs can not accurately follow human instructions while RED means they suffer from the hallucination problem. After finetuning current LMMs on our proposed LRV-Instruction dataset, we can generate a more robust answer.
lead to harmful consequences, especially when users without adequate domain knowledge over-rely on these increasingly convincing language models. In the context of LMM hallucinations, the model can generate descriptions with conflicting information to the given image. For instance, as shown in Fig. 1 (highlighted in red), existing LMMs [44; 26; 8; 11] tend to describe nonexistent objects such as a "dog" engaging in a nonexisting activity like "playing with a ball". Additionally, the model may generate long image descriptions without following human instructions (highlighted in blue).
What are the likely causes of these hallucinations? As current LMMs are built on strong LLMs, they may over-rely on language priors and generate words more likely to go together with the instruction text regardless of the image content. Whatâs more, LMMs, such as MiniGPT4 [44] and LLaVA [26], employ synthetic instruction data for training, which are generally long and involve nonexistent objects, activities, or relationships in the image.
Why canât LMMs accurately follow human instructions? We conjecture it is due to the lack of diversity in their training data. For example, MiniGPT4 [44] is only instruction tuning with four instruction templates designed for image captioning tasks. Though MMGPT [11] and InstructBLIP [8] combine several datasets as the instruction tuning data, their instructions and answers are still based on a few templates.
To address these challenges, we present LRV-Instruction, a large and diverse visual instruction benchmark. Our benchmark consists of 400k visual instructions generated by GPT4, taking inspiration from the success of recent GPT models in text-annotation tasks [27]. Unlike previous studies that focused on limited tasks and pre-defined templates created by human experts [44; 8; 11], LRV- Instruction covers 16 vision-language tasks with open-ended instructions and answers, as shown in Fig. 2 and Fig. 4. As observed by [19], current LMMs tend to answer "Yes" for any instructions presented to the model, even when the proper answer should be "No". Our investigation reveals that most LMMs are finetuned on unbalanced datasets containing only positive instructions (Tab. 1). To enable LMMs to respond to human instructions more faithfully, we design LRV-Instruction to include both negative and positive instructions for robust instruction tuning. Our negative instructions are generated at three semantic levels (Fig. 2): (i) Nonexistent Object Manipulation, (ii) Existent Object Manipulation and (iii) Knowledge Manipulation in two different formats, Declarative and Interrogative. To improve the robustness and flexibility of the evaluation on visual instruction tuning, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE) to assess the LMM output in two different aspects: Relevancy to evaluate the instruction-following performance and Accuracy to measure the visual hallucination in the LMM output. GAVIE does not require human-annotated groundtruth answers [32] and can be easily adapted to different formats instead of specific designs
2
(Pos) Multi-Choice VQA (Pos) Object Detection Q: Choose the correct statement about the weather conditions in the image: (a) Cloudy and rainy, (b) Clear blue sky, (c)Foggy and misty, (d) Snowy and cold A: (b) Clear blue sky Q; What objects are on toddlerâs feet? A: White socks (Pos) Object Attribute Detection (Pos) OCR Q: Describe the girl's hair color and whether she is wearing any accessory on her wrist. A: The girlâs color is black and the she is wearing a bracelet on her wrist. Q: What is the number written on the lead snowboarder? A: The number is 88 (Pos) Visual Entailment (Pos) VCR Q: Verify if this statement is correct: "There is a car parking in the image." = A: True : What appears to be the purpose of the green street sign? A: Giving directions or providing street name. (Neg) Nonexistent Object Manipulation (Neg) Existent Object Manipulation Q: Can you spot a hot air balloon above the tall tree? Q; Is the woman in the blue pants and pink shirt A: There is no hot air balloon mentioned in the image. sitting on the floor? A: There is no hot air balloon mentioned in the image. (Neg) Knowledge Manipulation (Neg) Knowledge Manipulation Q; Was the youth employment rate in Ukraine gradually decreasing from 2010 to 2015? A: No.The youth employment rate in Morocco gradually increasing from 2010 to 2015. Q: Did Merkel arrive to the Los Angeles Get Out The Vote Rally? No, Hillary Clinton arrived to the Los Angeles Get Out The Vote Rally in the image.
Figure 2: Examples of positive and negative instances in our LRV-Instruction dataset. RED means inconsistent elements in the negative instructions. More examples are in the Appendix.
in [19]. From our experiments, we show that GAVIE is not only stable but also aligns with human evaluation.
We empirically evaluate five publicly available LMMs [44; 26; 8; 11; 42] on our benchmark and found that existing LMMs seriously hallucinate when prompted with our negative instructions, especially with Existent Object Manipulation and Knowledge Manipulation instructions. We further verify the effectiveness of our LRV-Instruction by finetuning MiniGPT4 [44] and mPLUG-Owl [42] on this more balanced data. Our instruct-tuned LMMs suffer much less from hallucination and achieve state-of-the-art performance compared to the original MiniGPT4, LLaVA [26], InstructBLP [8], mPLUG-Owl [42] and MMGPT [11] on both our evaluation set and public benchmarks [19; 12; 9]. We also observe that Existent Object Manipulation and Knowledge Manipulation instructions are more challenging than Nonexistent Object Manipulation instructions for LMMs. Furthermore, a robust model performance requires a balanced ratio between positive and negative instances. To sum up, our contributions are three-fold:
⢠We build LRV-Instruction, a large and diverse dataset containing 400k visual instructions, with 16 vision and language tasks and negative instructions in different semantic levels and styles.
⢠We propose GAVIE, a novel approach to evaluate visual instruction tuning without requiring groundtruth answers and pre-designed instruction formats.
⢠We conduct comprehensive experiments to investigate the hallucination of current LMMs. The empirical study validates the effectiveness of LRV-Instruction for robust visual instruction tuning.
# 2 Related Works
Early explorations [39; 17; 18; 35] of vision and language pre-trained models tend to use Bert-based [28; 13] models as the language decoder. Inspired by the recent success of large language models [37; 10; 43; 20; 22] and datasets [21; 6; 14; 24; 33; 34; 23], many studies [1; 16; 15] have been focused on improving vision-language pre-trained models by integrating powerful LLMs with in-context or few-shot learning capability. More recently, some visual instruction-tuned LMMs [44; 26; 11; 8] have emerged, showing excellent generalization performance in unseen VL tasks. Specifically, LLaVA [26] projects the output of a visual encoder as input to LLaMA [37] and trains both the alignment network and the LLM on synthetic data. MiniGPT4 [44] is built on BLIP-2 but uses Vicuna [7] as the language decoder. It only finetunes the cross-modal alignment network on longer image captions from ChatGPT. The research approaches [11; 8] are instruction-tuned on a collection of VL datasets, but InstructBLIP [8] uses BLIP2 [16] as the backbone while [11] is initialized from Flamingo [1]. mPLUG-owl [42] finetunes LLaMA [37] model using both text instruction data and vision-language instruction data from LLaVA [26]. In comparison, we propose a large and diverse visual instruction
3
dataset with 16 vision and language tasks and negative instructions in different semantic levels and styles. This can help improve the robustness of current LMMs.
Although LMMs are powerful in solving VL tasks, they also suffer from the hallucination inherited from LLM. Popular image captioning metrics like CIDEr [38] SPICE [2] do not appropriately penalize hallucination. CHAIR, [32], is unstable and needs complex human-crafted parsing rules for exact matching. Alternatively, [19] converts the hallucination into a binary classification problem. However, it requires the input questions to follow specific templates, such as "Is there a/an <object> in the image?". In comparison, our proposed GAVIE can evaluate model hallucination in an open-ended manner without needing human-annotated groundtruth answers.
3 LRV-Instruction Annotating large-scale visual instruction data can be challenging and time-consuming [40]. It involves expertly written detailed instructions and specific labels for different tasks. Inspired by the success of GPT4 in text-annotation tasks [10], we leverage GPT4, instead of human workers, to build LRV- Instruction. LRV-Instruction is designed to cover a variety of VL tasks, with open-ended positive and negative instructions (Fig. 2) in different linguistic styles.
Positive Visual Instruction Generation. Inspired by [40], we use the in-context few-shot learning ability of GPT4 to generate instruction data for various VL tasks automatically. We filter the output tasks manually and select 16 tasks (Tab. 3a) with text answers. In contrast with [26] using a few scene captions to represent an image as input to the text-only GPT4, we take advantage of the Visual Genome dataset [14], which has detailed visual information like image size, bounding boxes, and dense captions. Specifically, each image typically has 21 object regions and their corresponding captions. We leverage GPT4 to create the instruction-following data with the image size, bounding boxes, and dense captions as the "visual" input as if it can "see" the image. An example is shown in Fig. 3. For each image, we randomly select 10 tasks. To enrich the instructions, we ask GPT4 to generate instances in both declarative and interrogative formats. The limitation of [26; 44] is that synthetic visual instructions are generally longer and may involve unexpected descriptive information inconsistent with the image. Therefore, we explicitly instruct GPT4 with "The answers should be less than 30 words" to reduce the chance of generating extra unrelated information in the training data.
To improve the diversity of images, we collect chart images from [36], which has human-annotated captions describing the construction and patterns of charts. We also select news images from [24] with many named entities in the captions. We ask GPT4 to generate question-answers pairs with captions as visual input. The last two images in Fig. 2 are examples. More examples and the general prompt we use are shown in the Appendix (Fig. 5, 32).
Negative Visual Instruction Generation. As shown in [19], current LMMs tend to answer âYesâ by following any instruction presented to the model rather than predicting a faithful answer. To teach LMMs [44; 26; 11; 8] to answer questions in instructions faithfully, we introduce three categories of negative instructions based on Visual Genome dataset: (1) Neg1: "Nonexistent Object Manipulation" by introducing nonexistent objects, activities, attributes and interactions to the "visual" input as described above. (2) Neg2: "Existent Object Manipulation" by manipulating existent objects with inconsistent attributes (Fig. 2). (3) Neg3: "Knowledge Manipulation" by manipulating knowledge in instructions (Fig. 2). As for the detailed prompt of Neg1, we leverage the same format of the "visual" input as shown in Fig. 3. Additionally, we provide the following instructions to GPT4:
"Come up with 6 misleading instructions with nonexistent elements (nonexistent objects, nonexistent activities, nonexistent attributes, nonexistent interactions) in the images with different language styles. The instructions should contain interrogative and declarative sentences. Please also ex- plain the reason."
We replace the underlined text with "existing objects but wrong attributes" for the prompt of Neg2. As for the Neg3: knowledge manipulation, we use GPT4 to manipulate the knowledge in the captions, including named entities, events or keywords. After that, GPT4 is instructed to generate questions and answers indicating correct knowledge. More examples are shown in the Appendix (Fig. 6, 32).
Quality Control. We first remove instances with answers longer than 30 words. We remove the instances mentioning unneeded content like "bounding box description", "given caption", and "existing descriptions". Additionally, GPT4 will output the task name for each instruction. However, we found GPT4 sometimes assigns inaccurate task names for the instructions. As a result, we exclude
4
â â â â Hard Negative Instructions? Self Generated Instruction? Address Hallucination? NOT Template Instruction? # of Self-Generated Instances 400k # of VL Tasks 16 â â â â 3k 1 â â â â 150k 3 â â â â â 11 â â â â â 5 â â â â â 3
# Ours MiniGPT4 LLaVA InstructBLIP MMGPT mPLUG-Owl
Table 1: A comparison of LRV-Instruction with datasets used by current LMMs.
Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: elephant heard on rocks X: 73 Y: 80 Width: 418 Height: 418 woman wearing long dress X: 176 Y: 298 Width: 35 Height: 83 group of green chairs X: 153 Y: 326 Width: 95 Height: 126 an orange bucket on the ground X: 91 Y: 341 Width: 38 Height: 36 a group of white umbrellas X: 99 Y: 82 Width: 112 Height: 28 aman in an orange shirt X: 204 Y: 265 Width: 31 Height: 47 a woman wearing a yellow dress X: 169 Y: 298 Width: 47 Height: 76 Task: image captioning, Image Sentiment Analysis, Image Quality Assessment, Object Interaction Analysis, Object Attribute Detection, Muli-choice VQA... Come up with 20 diverse instructions for all the tasks above with different language styles and accurate answers. The instructions should contain interrogative sentence and declarative sentences. The answers should be less than 30 words. Each task should have less than 3 instructions. GPT4 OUTPUT Example: Instruction: Craft a brief narrative about the baby elephant and adult elephant. Answer: A baby elephant is depicted behind an adult elephant, possibly seeking protection.
Figure 3: One example to illustrate the prompt we use to generate the visual instruction data by GPT4. We use the bounding box coordinates and dense captions to represent image content.
the task name in our release data. Furthermore, we remove the instructions asking about facial expressions. This is because the Visual Genome dataset doesnât include facial expression attributes in the ground truth dense captions. To examine the quality of our dataset, we randomly sample 500 instances and ask ten expert annotators to determine whether the output answers from GPT4 are correct or not, with regard to the instruction and the image content. We found 91% of the instructions are appropriate for the image inputs. Furthermore, 85% of outputs are acceptable responses to the instructions. Even though some responses may contain errors, most generations conform to the correct structure, serving as applicable visual instruction tuning guidelines. We created a total of over 400k visual instructions after filtering.
Evaluation Set. After the processing above, we randomly select 1000 instances as our evaluation set. Furthermore, we manually check the quality of all instances and see whether the instruction describes a valid task. If itâs not, we edit the instruction to make it clearer for LMMs. For example, we edit the instruction âObserve the beautiful rainbow-colored sign that says âLe Louvreâ. You wonât miss it!â to "Are you able to observe the beautiful rainbow-colored sign that says âLe Louvreâ in the image?"
# 3.1 Data Statistics
Tab. 1 shows a comparison of LRV-Instruction and other datasets used by current LMMs. LRV- Instruction covers much more VL tasks than existing visual instruction tuning datasets. Instead of only using positive instructions, LRV-Instruction also includes negative instructions at different semantic levels. In addition, employing the GPT4-assisted generation, LRV-Instruction has more open-ended instructions instead of following a few templates. From Fig. 4 (b), we observe that instructions with non-existing objects generated by GPT4 are diverse and physically plausible in the image, including âbirds in the skyâ or replacing âelephantâ with âzebraâ. Fig. 10 in the appendix
5
Tasks of LRV-Instruction Statistic Image Captioning, Object Detection, Image Sentiment Analysis, Image # of Instructions 400k Quality Assessment, Object Interaction Analysis, Image Anomaly # of Images 35k Detection, Referential Expression Grounding, OCR, VCR, Object Attribute # of Interrogative Instruction 250k Detection, Muli-choice VQA, Semantic Segmentation, Dense Captioning, # of Declarative Instruction 150k Visual Entailment, Styled Image Caption, Activity Recognition, Chart # of Negative Instructions 180k Understanding Ave # of Instruction per Image 17.7 - Ave Answer Length 15.2 (a) 16 VL tasks in our dataset. Ave Instruction Length 12.1 Frequency of Non-Existent Objects man dog tree (c) Statistic of LRV-Instruction cat umbrella $k) 0 1000 2000 3000 4000 (b) Statistic of Non-existent objects generated by GPT (a) Distribution of different Manipulation
Figure 4: Comprehensive Statistic of LRV-Instruction. In (d), BLUE means existent object manipula- tion. PINK means nonexistent object manipulation. GREEN means knowledge manipulation.
shows the diverse distribution of knowledge manipulation, including event, number, date, persons, place, and others.
# 4 Visual Instruction Tuning
We constructed two current LMMs: MiniGPT4 [44] and mPLUG-Owl [42] as the backbones for visual instruction tuning. MiniGPT4 consists of the Vision transformer [25] backbone as the image encoder, Vicuna [7] as the text decoder and a pre-trained Q-Former to connect them. Vicuna is built upon LLaMA [37] with stronger following ability. Following [44], the Q-Former is designed to extract visual features from the frozen image encoder. Before feeding into the frozen Vicuna as the visual prompt, we use a learnable linear projection layer to narrow the gap between extracted visual features with Vicuna embeddings. mPLUG-Owl comprises a pre-trained visual encoder, a visual abstractor, and Vicuna [7] as the text decoder. The visual encoder is responsible for extracting visual features from the input images, and the visual abstractor distills these features using a set of learnable tokens. The resulting visual features are concatenated with the word embeddings of the input sentence and fed into Vicuna to generate the response. We freeze the visual abstractor and visual encoder. Instead, we adopt the low-rank adaptation [? ] to train the text decoder.
# 5 GPT4-Assisted Visual Instruction Evaluation
CHAIR [32] was introduced to evaluate object hallucination in image captioning tasks. However, it usually demands complex human-crafted rules. Alternatively, [19; 9] formulate the evaluation of hallucination as a binary classification task that prompts LMM to output "Yes" or "No". However, it is hard to evaluate the LMM output in an open-ended manner. In addition, both methods highly depend on human-annotated groundtruth answers.
To this end, we introduce GPT4-Assisted Visual Instruction Evaluation (GAVIE) as a more flexible and robust approach to evaluate object-level hallucination. The general prompt we use is shown in the Appendix. GPT4 takes the dense captions with bounding box coordinates as the image content and compares human instructions and model response. Then we ask GPT4 to work as a smart teacher and score (0-10) studentsâ answers based on two criteria. (1) Accuracy: whether the response is accurate concerning the image content. (2) Relevancy: whether the response directly follows the instruction. We use GPT4-32k-0314 in the experiments. Fig. 1 successfully points out that "dog, ball" is inconsistent with the image, and the response from the MiniGPT4 did not address the instruction. Unlike previous evaluation methods [19; 32], GAVIE does not require human-annotated groundtruth answers and can freely adapt to diverse instruction formats. As for the knowledge level hallucination
6
Backbone Perception Cognition Backbone Acc(Pos) Acc(Neg) Original MiniGPT4 Finetuned MiniGPT4 Original mPLUG-Owl Finetuned mPLUG-Owl 616.41 895.96 967.34 1298.78 232.71 296.43 276.07 328.21 Original MiniGPT4 Finetuned MiniGPT4 Original mPLUG-Owl Finetuned mPLUG-Owl 0.53 0.58 0.62 0.69 0.54 0.68 0.55 0.78
Table 2: Zero-shot multimodal evaluation on MME [9] of MiniGPT4-7B, mPLUG-Owl-7B between original models and LRV-Instruction-finetuned models. The left chart shows perception and cognition scores. The right chart shows the accuracy on the positive set and the negative set.
Model Acc F1 Model Acc F1 Model Acc F1 mPLUG-Owl-7B 0.52 LLaVA-13B 0.50 0.73 MiniGPT4-13B InstructBLIP-13B 0.86 0.86 Ours-7B 0.68 0.66 0.71 0.87 0.88 mPLUG-Owl-7B 0.57 LLaVA-13B 0.50 0.67 MiniGPT4-13B InstructBLIP-13B 0.71 0.73 Ours-7B 0.66 0.66 0.67 0.76 0.79 mPLUG-Owl-7B 0.60 LLaVA-13B 0.50 0.62 MiniGPT4-13B InstructBLIP-13B 0.63 0.65 Ours-7B 0.64 0.66 0.63 0.72 0.73
(a) Random Set.
Table 3: Zero-shot object hallucination evaluation on POPE [19]. Objects not existing in the image are sampled with three different strategies. Random: random sampling, Popular: top-k most frequent objects in MS-COCO, Adversial: objects are first ranked based on co-occurring frequencies, then top-k frequent ones are sampled. Ours-7B means Finetuned mPLUG-Owl-7B.
or images that are not from the Visual Genome dataset, we use the groundtruth answers as a reference and compare them with predictions (Fig. 7 in the appendix). 6 Experiment
6.1 Baselines. We evaluate the zero-shot performance of 5 recently released LMMs: (1) MiniGPT4; (2) LLaVA; (3) InstructBLIP; (4) Multimodal-GPT (MMGPT); (5) mPLUG-Owl. All models above have been tuned on their collected visual instruction data.
# Implementation Setup
Training Details. As for MiniGPT4, we initialize from its checkpoint of the first pretraining stage. Then we instruct-tune the model on LRV-Instruction with the linear projection layer as the only learnable module. As for mPLUG-Owl, we train the text encoder by LoRA training. Additionally, we only replace the LLaVA dataset in their finetuning data with LRV-Instruction to make a fair comparison with the original Mplug-Owl. We utilize MiniGPT4-7B and mPLUG-Owl-7B since we donât have the computing resources to finetune the 13B models. We trained our models on NVIDIA Quadro RTX 8000. As for the hyper-parameters, please refer to [44; 42].
Evaluation Benchmarks. Apart from our proposed evaluation set, we evaluate LMMs on three public benchmarks. MME [9] is a human-annotated benchmark, measuring perception and cognition abilities on 14 subtasks. POPE [19] is a recently released dataset to evaluate object hallucination. GQA dataset [12] is a public visual question-answer dataset with open-ended questions.
# 6.2 Main Results
How do LMMs perform on public datasets? We compare our model against the baseline models on POPE in Tab.3. The results show that current LMMs may not work well with open-ended negative instructions. In contrast, the highest scores of our model demonstrate that LRV-Instruction exhibits robustness to visual hallucination, matching or surpassing the performance of 13B counterparts. From Tab.2, we found both finetuned LMMs on LRV-Instruction outperform original ones in the zero-shot evaluations. Additionally, Finetuned-Mplug-Owl exceeds Finetuned-MiniGPT4 because Mplug-Owl can do the LoRA training to improve the language ability. We also calculate the accuracy on positive and negative samples of MME in the right chart of Tab.2. The improvement in the positive samples is because LRV-Instruction has more diverse tasks than mPLUG-Owl datasets and MiniGPT4 datasets. The improvement in the negative samples demonstrates the value of LRV-Instruction dataset to equip the model with the ability to say ânoâ and provide correct answers. The completed results
7
Ours MiniGPT4 LLaVA InstructBLIP MMGPT mPLUG-Owl 6.58 GAVIE-ACCURACY (0-10) GAVIE-RELEVANCY (0-10) 8.46 4.14 5.81 4.36 6.11 5.93 7.34 0.91 1.79 4.84 6.35 Human Expert1 (1-4) Human Expert2 (1-4) Human Expert3 (1-4) 3.48 3.58 3.33 2.61 2.23 2.58 2.87 2.07 2.89 3.00 2.48 2.94 1.90 1.05 1.38 2.90 2.27 2.91
Table 4: Comparison results on our evaluation set evaluated by GAVIE. Ours means Finetuned mPLUG-Owl-7B. All the LMMs are 7B versions to make a fair comparison.
Model InstructBLIP-13B LLaVA-13B MiniGPT4-13B mPLUG-Owl-7B Ours-7B Ours-7B-Psu Accuracy 0.62 0.47 0.42 0.41 0.64 0.60
Table 5: Zero-shot evaluation on GQA. Ours-7B means Finetuned mPLUG-Owl-7B. Ours-7B-Psu means we finetune mPLUG-Owl on pseudo instruction data by [41].
on shown in Tab. 11.12. We further explore the LMMsâ performance in the common scenario of visual question-answering (VQA). As shown in Tab. 5, the results suggest that our method (Finetuned mPLUG-Owl) achieves on-par performance with InstructBLIP in a generic VQA setting.
How do LMMs perform on LRV-Instruction? We show the evaluation results on our dataset in Tab. 4. Among the baselines, InstructBLIP achieves better results than other LMM baselines because its visual instructions are collected from a wide variety of publicly available datasets. LLaVA [26] utilizes the GPT-assisted approach to generate visual instructions, but its performance is much worse. This is probably because its synthetic answers from GPT4 are generally longer and may involve irrelevant information. As a comparison, our model outperforms the existing LMM baselines by a large margin, benefiting from the rich composition of our dataset and better prompt design.
# 6.3 Detailed Analysis
Does GPT4-Assisted Visual Instruction Evaluation align with Human Evaluation? We select three human experts specializing in the field of NLP to evaluate the predictions from LMMs with four options for the scores (1) Very Poor, (2) Poor, (3) Good, (4) Excellent. To evaluate the results quantitatively, we assign different scores for the options: Very Poor=1, Poor=2, Good=3, Excellent=4. More implementation details are shown in the appendix. From Tab. 4, all experts agree that the output from our model is the best, followed by InstructBLIP in second place, and MMGPT performs the worst. The observation aligns with the GAVIE evaluation results.
Is GPT4-Assisted Evaluation Stable? We execute GAVIE 5 times on each instruction and evaluate the predictions from different LMMs. We leverage Standard Deviation (STD) to measure the stability of GAVIE. From Tab. 7 (left), we observe that STD ranges from 0.65 to 2.46. The ACCURACY and RELEVANCY scores of an instance from GPT4 may vary between different times, but they always belong to the same grade level. According to completed results from Tab. 9, RELEVANCY has four grade levels: (1) The response is completely relevant (9-10), (2) The response is mostly relevant (6-8), (3) The response is partly relevant (3-5), (4) The response is seldom relevant (0-2). ACCURACY has four grade levels: (1) The response is completely accurate (9-10), (2) The response has minor errors (6-8), (3) The response is partly accurate (3-5), (4) The response is mostly or completely wrong (0-2).
How do LMMs perform at the different semantic levels of hallucination? As shown in Tab 6, all baselines perform better on Neg1 (Nonexistent Object Manipulation) than Neg2 (Existent Object Manipulation) and Neg3 (Knowledge Manipulation). From the visual perspective, existent object manipulations with wrong attributes in Neg2 are more challenging than adding nonexistent objects from images to instructions in Neg1. For example, in Fig. 2, it may be straightforward to find that the "hot air balloon" does not appear in the image. However, "woman" does exist in the second example of Fig. 2 while she is not in the blue pants and pink shirts, which requires a fine-grained understanding of the visual content. Therefore, a more powerful vision encoder is needed for future LMMs. Knowledge manipulation is challenging because current LMMs are finetuned on general images without specific knowledge. In contrast, our model greatly improves at all semantic levels, which benefits from our diverse instruction tuning data.
8
Categories Metric Ours MiniGPT4 LLaVA InstructBLIP MMGPT mPLUG-Owl Neg1 Neg2 Neg3 ACCURACY(GPT4) ACCURACY(GPT4) ACCURACY(GPT4) 8.90 6.50 6.25 3.72 2.57 2.30 2.09 1.42 1.56 5.50 2.18 2.38 1.13 0.96 0.94 4.20 2.46 2.57 Neg1 Neg2 Neg3 RELEVANCY(GPT4) 8.96 RELEVANCY(GPT4) 8.46 RELEVANCY(GPT4) 8.21 5.94 2.53 2.40 4.83 1.82 1.78 7.22 2.73 2.39 2.24 1.19 0.98 5.35 3.16 2.87
Table 6: Completed evaluation results on Neg1: Nonexistent Object Manipulation, Neg2: Existent Object Manipulation and Neg3: Knowledge Manipulation by GAVIE.
Metric Accuracy-STD Accuracy-Mean Ratio Accpos Accneg Ours MiniGPT4 InstructBLIP mPLUG-Owl LLaVA MMGPT 2.42 2.46 2.42 1.96 2.37 0.65 6.60 3.76 5.29 0.87 3.80 4.84 All Pos Pos:Neg=2:1 Pos:Neg=1:1 Pos:Neg=1:2 All Neg 0.97 0.95 0.92 0.87 0.10 0.05 0.50 0.85 0.86 0.98
Table 7: (left): Evaluation of the stability of GAVIE. STD means standard deviation. Completed results are shown in Tab. 9. (right): Results of different composition ratios in instruction tuning.
How do LMMs perform at the different composition ratios in training data? In Tab. 7 (right), we investigate how LRV-Instruction addresses hallucination issues with different ratios of positive and negative samples in the training set. Inspired by [19], we instruct the model to produce âYesâ or âNoâ and use classification accuracy on our evaluation set. Accpos is the accuracy on the positive instruction set, while Accneg is the accuracy on the negative instruction set. From Tab. 7 (right), we found that Accneg increases with more negative samples, which verifies our hypothesis that the hallucination problem of current LMMs is due to the lack of negative instructions. Besides, with a balanced ratio (pos:neg=1:1), the model performs the best in both positive and negative sets.
Use Pseudo Dense Captions instead of GT from Visual Genome to Generate Instructions. To demonstrate the scalability of our dataset, we use pseudo-dense captions generated by GRiT [41] to replace the GT captions in the Visual Genome dataset. We remove the images, whose detected objects by GRiT are less than 15 to ensure GPT4 has enough visual information when generating visual instructions. From Tab. 5, we found finetuning on pseudo captions can also improve the performance compared to the original mPLUG-Owl. This demonstrates that our visual instruction generation method can be further scaled up without groundtruth dense captions.
# 7 Conclusion
In this work, we constructed LRV-Instruction, a large and diverse dataset containing 400k visual instructions, covering 16 vision and language tasks with both positive and negative instructions in different semantic levels and styles. With LRV-Instruction, we comprehensively investigated the hallucination of existing LMMs and empirically validated its effectiveness in a more robust visual instruction tuning. In addition, we propose GAVIE, a novel approach to evaluate visual instruction tuning without requiring human-labeled groundtruth answers and can be easily adapted to different instruction formats. We hope our work can help address the unexpected hallucination issues of LMMs. Future directions include replacing the vision encoders in current LMMs with more powerful visual models to match the capabilities of multimodal GPT4 and investigation of other biases of LMMs to develop more robust models.
# References
[1] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, et al. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35:23716â23736, 2022.
9
[2] Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. Spice: Semantic propositional image caption evaluation. In Computer VisionâECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part V 14, pages 382â398. Springer, 2016.
[3] Anas Awadalla, Irena Gao, Joshua Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Gadre, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo, March 2023.
[4] Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Love- nia, Ziwei Ji, Tiezheng Yu, Willy Chung, et al. A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023, 2023.
[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â1901, 2020.
[6] Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3558â3568, 2021.
[7] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, 2023.
[8] Wenliang Dai, Junnan Li, Dongxu Li, Anthony Meng Huat Tiong, Junqi Zhao, Weisheng Instructblip: Towards general-purpose Wang, Boyang Li, Pascale Fung, and Steven Hoi. vision-language models with instruction tuning. arXiv preprint arXiv:2305.06500, 2023.
[9] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, et al. Mme: A comprehensive evaluation benchmark for multimodal large language models. arXiv preprint arXiv:2306.13394, 2023.
[10] Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. Chatgpt outperforms crowd-workers for text-annotation tasks. arXiv preprint arXiv:2303.15056, 2023.
[11] Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790, 2023.
[12] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700â6709, 2019.
[13] MV Koroteev. Bert: a review of applications in natural language processing and understanding. arXiv preprint arXiv:2103.11943, 2021.
[14] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123:32â73, 2017.
[15] Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang, Jingkang Yang, and Ziwei Liu. Otter: A multi-modal model with in-context instruction tuning. arXiv preprint arXiv:2305.03726, 2023.
[16] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language- image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597, 2023.
[17] Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language- image pre-training for unified vision-language understanding and generation. In International Conference on Machine Learning, pages 12888â12900. PMLR, 2022.
10
[18] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019.
[19] Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355, 2023.
[20] Zongxia Li, Paiheng Xu, Fuxiao Liu, and Hyemi Song. Towards understanding in-context learning with contrastive demonstrations and saliency maps. arXiv preprint arXiv:2307.05052, 2023.
[21] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Computer VisionâECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages 740â755. Springer, 2014.
[22] Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. Aligning large multi-modal model with robust instruction tuning. arXiv preprint arXiv:2306.14565, 2023.
[23] Fuxiao Liu, Hao Tan, and Chris Tensmeyer. Documentclip: Linking figures and main body text in reflowed documents. arXiv preprint arXiv:2306.06306, 2023.
[24] Fuxiao Liu, Yinghan Wang, Tianlu Wang, and Vicente Ordonez. Visual news: Benchmark and challenges in news image captioning. arXiv preprint arXiv:2010.03743, 2020.
[25] Fuxiao Liu, Yaser Yacoob, and Abhinav Shrivastava. Covid-vts: Fact extraction and verification on short video platforms. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 178â188, 2023.
[26] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. arXiv preprint arXiv:2304.08485, 2023.
[27] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634, 2023.
[28] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
[29] OpenAI. Gpt-4 technical report. 2023.
[30] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730â27744, 2022.
[31] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748â8763. PMLR, 2021.
[32] Anna Rohrbach, Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Kate Saenko. Object hallucination in image captioning. arXiv preprint arXiv:1809.02156, 2018.
[33] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556â2565, 2018.
[34] Krishna Srinivasan, Karthik Raman, Jiecao Chen, Michael Bendersky, and Marc Najork. Wit: Wikipedia-based image text dataset for multimodal multilingual machine learning. In Pro- ceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2443â2449, 2021.
11
[35] Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. Videobert: A joint model for video and language representation learning. In Proceedings of the IEEE/CVF international conference on computer vision, pages 7464â7473, 2019.
[36] Benny J Tang, Angie Boggust, and Arvind Satyanarayan. Vistext: A benchmark for semantically rich chart captioning. arXiv preprint arXiv:2307.05356, 2023.
[37] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timo- thée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[38] Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. Cider: Consensus-based image description evaluation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4566â4575, 2015.
[39] Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, and Lijuan Wang. Git: A generative image-to-text transformer for vision and language. arXiv preprint arXiv:2205.14100, 2022.
[40] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instruc- tions. arXiv preprint arXiv:2212.10560, 2022.
[41] Jialian Wu, Jianfeng Wang, Zhengyuan Yang, Zhe Gan, Zicheng Liu, Junsong Yuan, and Lijuan Wang. Grit: A generative region-to-text transformer for object understanding. arXiv preprint arXiv:2212.00280, 2022.
[42] Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint arXiv:2304.14178, 2023.
[43] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.
[44] Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. Minigpt-4: En- hancing vision-language understanding with advanced large language models. arXiv preprint arXiv:2304.10592, 2023.
12
# A Appendix
# A.1 GAVIE Evaluation
We show two full examples of the text prompt for GAVIE in (i) Fig. 21, 22, 23 and (ii) Fig. 24, 25, 26. We first leverage the bounding boxes and dense captions as the "visual" input. We provide the human instructions and responses from different models in Fig. 22 and Fig. 25. Furthermore, we ask GPT4 to pretend as a smart teacher and score (0-10) the answers according to the image content and instructions. There are two criteria. (1) Accuracy: whether the response is accurate concerning the image content. (2) Relevancy: whether the response directly follows the instruction. After that, GPT4 is required to generate a score and reason. Fig. 23 and Fig. 26 show the full evaluation output from GAVIE.
# A.1.1 GPT4-Assisted Visual Instruction Evaluation (GAVIE) vs. Human Evaluation
This section provides insights into the GAVIE via human evaluation. Here, we randomly select 40 image-instruction instances from the evaluation set. The human assessment is carried out by three experts specializing in NLP. The questionnaire consists of 40 questions randomly shuffled for each expert. The questionnaire takes about 20 minutes to complete on average. Each question includes an instruction, an image, and responses from 4 different LMMs. We provide instructions for experts as follows:
"As for each question, there are an instruction, an image, and several answers. Suppose you are a smart teacher, please score the answers according to the two criteria. (1) Accuracy: whether the response is accurate concerning the image content. (2) Relevancy: whether the response directly follows the instruction without unrelated answers. There are four options for the scores (1) Very Poor, (2) Poor, (3) Good, (4) Excellent."
Evaluator Ours MiniGPT4 LLaVA InstructBLIP MMGPT mPLUG-Owl Expert1(1-4) Expert2(1-4) Expert3(1-4) 3.48 3.58 3.33 2.61 2.23 2.58 2.87 2.07 2.89 3.00 2.48 2.94 1.90 1.05 1.38 2.90 2.27 2.91 GAVIE-Accuracy (0-10) 6.58 GAVIE-Relevancy (0-10) 8.46 4.14 5.81 4.36 6.11 5.93 7.34 0.91 1.79 4.84 6.35
Table 8: GAVIE vs. Human Evaluation. GAVIE scores roughly align with the expert ratings. Numbers highlighted with red, orange, black, green, blue, and magenta indicate rank 1 to 6.
To evaluate the results quantitatively, we assign different scores for the options: Very Poor=1, Poor=2, Good=3, Excellent=4. From Tab. 8, all experts agree that the output from our model is the best, followed by InstructBLIP in second place, and MMGPT performs the worst. The observation is similar to that of GAVIE evaluation results. Although the ranking orders of MiniGPT4 and LLaVA from experts are not always the same as that of GAVIE, the scores assigned to them are fairly close. One possible reason is that the answers from MiniGPT4 and LLaVA tend to be longer, making them more challenging for humans to evaluate.
# A.1.2 Stability of GPT4-Assisted Visual Instruction Evaluation (GAVIE)
This section investigates the stability of GAVIE. Precisely, we execute GAVIE 5 times on the model predictions. We leverage two metrics to measure the stability of GAVIE on each instance: Mean and Standard Deviation (STD). The average scores of the evaluation set are shown in the following table. From the perspective of the Mean, the ranking order of ACCURACY and RELEVANCY is the same as Tab. 8. As for the Standard Deviation in Tab. 9, it ranges from 0.65 to 2.46. From our observation, the ACCURACY and RELEVANCY scores of an instance may vary between different times, but they belong to the same grade level. Specifically, RELEVANCY has four grade levels: (1) The response is completely relevant (9-10), (2) The response is mostly relevant (6-8), (3) The response is partly relevant (3-5), (4) The response is seldom relevant (0-2). ACCURACY has four grade levels: (1) The response is completely accurate (9-10), (2) The response has minor errors (6-8), (3) The response is partly accurate (3-5), (4) The response is mostly or completely wrong (0-2).
13
Metric Ours MiniGPT4 InstructBLIP MMGPT mPLUG-Owl LLaVA ACCURACY(GPT4)-Mean 6.60 RELEVANCY(GPT4)-Mean 8.37 3.76 5.35 5.29 6.83 0.87 1.71 4.84 6.35 3.80 5.65 ACCURACY(GPT4)-STD 2.42 RELEVANCY(GPT4)-STD 1.30 2.46 1.99 2.42 1.88 0.65 0.81 1.96 1.48 2.37 2.18
Table 9: Evaluation of the stability of GAVIE. We run GAVIE 5 times on the randomly selected instances from the evaluation set. Mean and Standard Deviation(STD) are calculated to measure the stability. The metric scores of ACCURACY(GPT4) and RELEVANCY(GPT4) are from 0 to 10.
# A.2 More Experiments
# A.2.1 Do LMMs perform better on Positive or Negative Instructions?
Our evaluation set consists of positive and negative instances. We divide it into two sets and analyze the model performance on each. As shown in Fig. 8, baseline models, including MiniGPT4, LLaVa, and InstructBLIP, perform better on positive instances than negative ones, as the training data adopted by these models do not contain negative instructions. MMGPT performance poorly on both sets due to many repetitive phrases in the response. In addition, we found that the degradation of LLaVA is the most severe. We hypothesize that the synthetic answers for instruction tuning in LLaVA are generally longer and involve more unrelated information. In contrast, our model performs the best in both sets. InstructBLIP performs with higher scores than other LMMs because of the effectiveness of its instruction-aware visual encoder to extract image information.
# A.2.2 Do LMMs perform better on different formats and lengths of instructions?
From Tab 10, LMMs perform with higher scores on interrogative instructions than declarative, but the difference is relatively small. Even though recent visual instruction tuning datasets lack diverse declarative instructions, the LMMs built on LLM are powerful enough to understand and follow the declarative instructions. From Fig. 9, current LMMs achieve better results in short instructions than long ones since longer instructions contain more information, making it more challenging.
# A.3 Prompt Design
# A.3.1 Positive Instance Generation based on Visual Genome Dataset
We show two full examples of our input prompts in (i) Fig. 11, 12, 13 and (ii) Fig. 14, 15, 16. In Fig. 11 and Fig. 14, we first present the images for the two examples, but they are not included in the text prompt for GPT4. As for the text input, we leverage the groundtruth bounding boxes and dense captions to represent the visual content as if GPT4 can see the image. After that, we randomly select 10 tasks from the 16 seeds and ask GPT4 to generate 20 instances for these tasks. Additionally, there can be more than one caption describing the same object with different attributes, such as "woman wearing a long dress" and "woman wearing a yellow dress" in Fig. 11. Although we present the bounding box coordinates of each caption to GPT4, it can be easily confused, treating them as two instances, one in a long dress and the other in a yellow dress. To mitigate this issue, we add "highly overlapping bounding boxes may refer to the same object" into the prompt to help GPT4 understand the "visual" input better. To enrich the instructions, we ask GPT4 to generate instances in both declarative and interrogative formats. We also explicitly instruct GPT4 with "The answers should be less than 30 words" as a requirement to reduce the chance of generating extra unrelated information in the training data. In order to make the output of GPT4 in a good format, we also ask GPT4 to generate an instruction, an answer, and a task name in order at the end of the prompt (Fig. 11 and Fig. 14). The full output of instructions and answers are shown in Fig. 12, 13 and Fig. 15, 16. We also present more positive instances with the output from different LMMs in Fig. 29, 30, 31.
# A.3.2 Positive Instance Generation based on Chart Images
We collect chart images from [36], which has human-annotated captions describing the construction and patterns of charts. We instruct GPT-4 to generate question-answers pairs with captions as visual input. The detailed prompt is shown in Fig. 5. We also present more positive instances with the output from different LMMs in Fig. 32.
14
Prompt: Here is the description of a chart âHere is a area Chart Is labeled Cumulative number of patients diagnosed with coronavirus (COVID-19) in Japan as of December 4, 2020, by place of infection. On the x-axis, Month is measured with a categorical scale starting with April and ending with October. There is 4 linear scale with a minimum of 0 and a maximum of 150,000 along the y-axis, labeled Patients within Japan" Please generate 3 different questions and answers pairs about title, x-axis, y-axis, data range or data pattern of the chart. The answers should come from the descriptions above. Each Answer must be less than 20 words. The output format should be as follows: Cumulative number of patients diagnosed, : ith coro! December 4 , 2020 , by place of Infection Iquestion1=> lanswer1 => Iquestion2=> lanswer2 => Iquestion3=> a lanswer3=> Month Patients within japan IGPT4 OUTPUT Example: [question T=> What does the area Chart represent? lanswer1 => Cumulative COVID-19 cases in Japan by place of infection from April to October 2020. Iquestion2=> What does the x-axis represent? lanswer2 => Months from April to October 2020 Iquestion3=> When did the greatest increase in COVID-19 cases in Japan occur? lanswer3=> Between November and December 2020.
Figure 5: An example prompt for text-only GPT4 we use to generate instruction and answers for chart images. The sentence in BLUE is the captions of the chart.
# A.3.3 Negative Instance Generation - Nonexistent/Existent Object Manipulation
We show two full examples of our input prompts in (i) Fig. 17, 18 and (ii) Fig. 19, 20. In Fig. 17 and Fig. 19, we present the images to help readers understand dense captions better but they are not included in the text prompt for GPT4. We leverage the bounding boxes and dense captions as the "visual" input. As for Nonexistent object Manipulation in 17, we ask GPT4 to generate 6 instructions with nonexistent elements (nonexistent objects, nonexistent activities, nonexistent attributes, nonexistent interactions). As for Existent object Manipulation in 19, we ask GPT4 to generate 6 instructions of existing objects with wrong attributes. At the end of the text prompt, we ask GPT4 to generate an instruction and a reason to explain why the instruction is inconsistent with the image in order. The reason is regarded as the answer for the instruction in our training data. Fig. 18 and Fig. 20 show the full output from GPT4. We also present more negative instances with the output from different LMMs in Fig. 27, 28.
# A.3.4 Negative Instance Generation - Knowledge Manipulation
As for the Neg3: knowledge manipulation, we use GPT4 to manipulate the knowledge in the captions, including named entities and events.
# Prompt:
Please change the knowledge including keywords, name entities or event elements in the description âCumulative COVID-19 cases in Japan by place of infection from April to October 2020â [Output format should be as follows:
# lanswer=>
# IGPT4 OUTPUT Example:
Cumulative influenza cases in France by region of infection from March to October 2020.â
Figure 6: An example prompt for text-only GPT4 we use to generate negative instruction. The next step is to transfer the ouput into an interrogative sentence whose answer is "yes" or "no".
15
As shown in Fig. 6, GPT4 manipulates the "Japan", "COVID-19" and "April" in the original captions. After that, we instruct GPT4 to transfer the output sentence into an interrogative sentence whose answer is "yes" or "no". Finally, we combine "No." and the original answer as the final answer: Question: Did the image show the cumulative influenza cases in France by region of infection from March to October 2020? Answer: No. Cumulative COVID-19 cases in Japan by place of infection from April to October 2020".
# A.3.5 Prompt Design for Evaluating Knowledge Hallucination
As for the knowledge level hallucination, we will use the groundtruth answers as a reference and compare them with predictions of models. A prompt example for GPT4 is shown in Fig. 7:
# Prompt:
Imagine you are an intelligent teacher. Thoroughly read both the reference answer and the prediction lanswer to ensure a clear understanding of the information provided. Assess the accuracy of the predictions, and note that a prediction answer will be considered incorrect if it contradicts the reference lanswer.
(Question: What does the volume of U.S. imports of trade goods in 2009 experienced , increase, decrease or steady?
Reference answer: The volume of U.S. imports of trade goods in 2009 experienced a significant decrease, with a drop between -0.2 and -0.3.
Prediction answer: The volume of U.S. imports of trade goods experienced a decrease
# IGPT4 OUTPUT Example:
[The prediction answer states that "The volume of U.S. imports of trade goods experienced a decrease". This matches with the reference answer which says "The volume of U.S. imports of trade goods in 2009 experienced a significant decrease". Therefore, the prediction answer is correct.
Figure 7: An example prompt for text-only GPT4 we use to evaluate knowledge manipulation instruction. The sentences in BLUE are the questions, reference answers, and predictions of models. .
Categories Metric Ours MiniGPT4 LLaVA InstructBLIP MMGPT Interrogative Interrogative ACCURACY(GPT4) RELEVANCY(GPT4) 6.61 8.46 4.14 6.20 4.60 5.88 5.95 7.67 1.01 2.00 Declarative Declarative ACCURACY(GPT4) RELEVANCY(GPT4) 6.50 8.21 3.98 5.39 3.82 5.84 5.47 6.64 0.90 1.62
Table 10: Evaluation results on Interrogative Instructions and Declarative Instructions by GAVIE. The metric scores of ACCURACY(GPT4) and RELEVANCY(GPT4) are in a scale of 0 to 10.
# A.4 More Dataset Statistic
I summarized the popular words in the knowledge manipulation generated by GPT4 in Fig. 10 and found they mainly include six categories: event, number, date, persons, place, and others. Some examples are shown below.
Canada, increase, decrease, lowest, 2009, United States, 2016, employment, unemployment, higher, 2013, 2017, 2015, drop, minimum, worst, consistent, kingdom, x-axis, y-axis, under, Italy, pie, bar...
16
Accuracy(GPT4) be ee ee ee Lda mPositive mNegative
Relevancy(GPT4) Ours MiniGPT4 Lava InstructBLIP MMGPT mPositive m Negative
(a) Accuracy Performance. (b) Relevancy Performance.
Figure 8: Evaluation results on positive and negative instructions by GAVIE.
Relevancy(GPT4) 9 8 7 6 5 4 2 : Hf ours MaigPT4 LLava InstructBLIP | MMGPT mlength>12 mLength<12
Accuracy(GPT4) a 9 6 8 7 5 6 4 5 3 4 2 1 _ : Ours MaiGPTa lava InstructBLIP =| MMGPT MLength>12 mLength<12
(a) Accuracy Performance. (b) Relevancy Performance.
Figure 9: Evaluation results on different instruction lengths by GAVIE.
Existence Count Position Color Posters Celebrity Scene Landmark Artwork OCR 68.33 115.0 120.00 60.50 57.50 77.50 80.00 96.25 65.00 101.25 110.0
Table 11: Completed experiments of Perception on MME [9] benchmark.
Cognition Original MiniGPT4 Finetuned MiniGPT4 Original mPLUG-Owl Finetuned mPLUG-Owl 59.29 76.42 78.57 100.71 45.00 55.00 60.00 70.00 0.00 77.50 80.00 85.00 40.00 67.50 57.50 72.50
# Commonsense Reasoning Numerical Calculation Text Translation Code Reasoning
Table 12: Completed experiments of Cognition on MME [9] benchmark.
17
person number
Figure 10: Distribution of Knowledge Manipulations. The knowledge mainly includes six categories: event, number, date, persons, place, and others.
18
Positive Instance Generation S| 2 \ Input Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: elephant heard on rocks X: 73 Y: 80 Width: 418 Height: 418 woman wearing straw hat X: 179 Y: 285 Width: 23 Height: 43 woman wearing long dress X: 176 Y: 298 Width: 35 Height: 83 group of green chairs X: 153 Y: 326 Width: 95 Height: 126 orange bucket on sidewalk X: 80 Y: 334 Width: 50 Height: 60 man wearing white shirt X: 204 Y: 439 Width: 51 Height: 52 baby elephant behind adult elephant X: 244 Y: 235 Width: 119 Height: 155 group of umbrellas on deck X: 82 Y: 72 Width: 136 Height: 83 person wearing yellow shirt X: 202 Y: 270 Width: 35 Height: 46 water is dark green X: 252 Y: 87 Width: 231 Height: 99 a baby elephant X: 297 Y: 317 Width: 60 Height: 69 an orange bucket on the ground X: 91 Y: 341 Width: 38 Height: 36 a group of white umbrellas X: 99 Y: 82 Width: 112 Height: 28 a group of green chairs X: 149 Y: 364 Width: 94 Height: 86 aman in an orange shirt X: 204 Y: 265 Width: 31 Height: 47 a blue tin awning X: 65 Y: 190 Width: 88 Height: 95 a woman wearing a yellow dress X: 169 Y: 298 Width: 47 Height: 76 a woman wearing a tan hat X: 173 Y: 288 Width: 38 Height: 79 aman sitting down X: 200 Y: 425 Width: 65 Height: 72 aman wearing a white shirt X: 196 Y: 422 Width: 80 Height: 72 a elephant coming out of the water X: 384 Y: 219 Width: 88 Height: 88 aman sitting in a chair X: 200 Y: 428 Width: 79 Height: 67 a orange basket X: 68 Y: 329 Width: 77 Height: 69 blue chairs on a deck X: 150 Y: 325 Width: 98 Height: 142 elephants walking on rocks X: 152 Y: 161 Width: 261 Height: 239 a baby elephant X: 280 Y: 295 Width: 98 Height: 105 white umbrellas outside a building X: 91 Y: 66 Width: 161 Height: 53 a white tiled staircase X: 47 Y: 367 Width: 109 Height: 126 someone taking a photo X: 12 Y: 3 Width: 75 Height: 73 people standing on a deck X: 104 Y: 166 Width: 153 Height: 165 plastic blue chairs X: 146 Y: 318 Width: 93 Height: 129 a herd of asian elephants X: 83 Y: 110 Width: 409 Height: 308 the elephant is stepping out of the river X: 388 Y: 218 Width: 86 Height: 114 asmall elephant X: 302 Y: 309 Width: 71 Height: 95 this man is dressed in white X: 208 Y: 416 Width: 49 Height: 82 this man is dressed in white X: 208 Y: 416 Width: 49 Height: 82 she is wearing a sun hat and adress X: 175 Y: 287 Width: 64 Height: 112 the people are gathered X: 88 Y: 165 Width: 163 Height: 157 they are watching the elephants X: 87 Y: 167 Width: 165 Height: 179 this person is recording the elephants X: 18 Y: 6 Width: 74 Height: 113 a rapid flowing river X: 349 Y: 73 Width: 143 Height: 243 a herd of a elephants moving through a town X: 78 Y: 108 Width: 386 Height: 274 the trunk of an elephant X: 395 Y: 261 Width: 15 Height: 21 a blue awning of a building X: 66 Y: 187 Width: 91 Height: 104 blue chairs on a patio X: 152 Y: 324 Width: 76 Height: 120 white umbrellas on a patio X: 104 Y: 84 Width: 113 Height: 30 a trunk of an elephant X: 335 Y: 254 Width: 14 Height: 27 the tail of an elephant X: 308 Y: 233 Width: 16 Height: 32 an orange bucket of on a patio X: 95 Y: 348 Width: 30 Height: 33 the leaves of a tree X: 200 Y: 17 Width: 41 Height: 33 the leaves of a tree X: 361 Y: 36 Width: 43 Height: 23 Task: image captioning, Image Sentiment Analysis, Activity Recognition, Object Interaction Analysis, Object Attribute Detection, Muli-choice VQA, Visual Entailment, Visual Commonsense reasoning, Styled Image Caption, Referential Expression Grounding. Come up with 20 diverse instructions for all the tasks above with different language styles and accurate answers. The instructions should contain interrogative sentences and declarative sentences. The answers should be less than 30 words. Each task should have less than 3 instructions. Output format: Instruction: instruction example Answer: answer example Task: task sample
Figure 11: The first example for generating positive instruction and answers (Part1). The image is not included in the text prompt.
19
# Positive Instance Generation
Output: Instruction: Craft a brief narrative about the baby elephant and adult elephant. Answer: Ababy elephant is depicted behind an adult elephant, possibly seeking protection. Task: Image Captioning Instruction: Write a concise description of the entire scene in this image. Answer: Aherd of elephants, including baby elephants, is seen near a river with people watching and taking photos. There are also green chairs, white umbrellas, and blue awnings nearby. Task: Image Captioning Instruction: What is the overall sentiment conveyed by this image? Answer: The sentiment of the image is predominantly peaceful and joyful, showcasing the harmonious interaction of wildlife with human presence. Task: Image Sentiment Analysis Instruction: How would you interpret the sentiment of the scene involving the man in white shirt? Answer: The man in the white shirt exudes a feeling of calmness and interest as he observes the scene. Task: Image Sentiment Analysis Instruction: Explain how the elephants and the humans interact in this image. Answer: The humans are observing and photographing the elephants, while the elephants seem to be moving freely and peacefully around them. Task: Object Interaction Analysis Instruction: Analyze the interaction between the objects in the foreground and the background. Answer: The foreground, featuring green chairs and people, provides a contrast to the background where the elephants are moving. Task: Object Interaction Analysis Instruction: Which type of animal is featured prominently in the image? Choices: A) Lions B) Elephants C) Giraffes D) Zebras Answer: B) Elephants Task: Multichoice VOA Instruction: What is the primary color of the umbrellas present in the image? Choices: A) Blue B) Green C) White D) Red Answer: C) White Task: Multichoice VOA Instruction: Based on the image, does it entail that the people are afraid of the elephants? Answer: No, the image does not entail that the people are afraid of the elephants. Task: Visual Entailment Instruction: Based on the image, does it entail that the location is in a city center or on the outskirts? Answer: The image entails that the location is more likely on the outskirts, not in a city center. The presence of a herd of elephants suggests a location closer to natural habitats. Task: Visual Entailment
Figure 12: The first example for generating positive instruction and answers (Part2).
20
# Positive Instance Generation
Output: Instruction: Identify and describe the attributes of the bucket in the image. Answer: The bucket is orange, possibly made of plastic, and appears to be on a sidewalk or patio. Task: Object Attribute Detection Instruction: Assess the color of the baby elephant. Is it gray or brown? Answer: Without visual data, we cannot accurately determine the color of the baby elephant. Task: Object Attribute Detection Instruction: Based on the image, what is the probable reason for the congregation of people? Answer: The probable reason for the congregation of people is to observe the elephants passing through. Task: Visual Commonsense Reasoning Instruction: Why might the baby elephant be close to the adult elephant? Answer: The baby elephant is likely close to the adult elephant for protection and guidance as they move through the area. Task: Visual Commonsense Reasoning Instruction: Locate "the man dressed in white". Answer: The man dressed in white is located at the coordinates (208, 416), with a width of 49 and a height of 82. Task: Referential Expression Grounding Instruction: Identify the location of "the woman wearing a straw hat". Answer: The woman wearing a straw hat is located at X: 179 Y: 285 with Width: 23 and Height: 43. Task: Referential Expression Grounding Instruction: Provide a poetic caption for the image. Answer: Beneath a sky kissed by the sun, a parade of ancient giants meanders, while observers stand still in hushed admiration. Task: Styled Image Caption Instruction: Caption the image in a romantic style. Answer: In a scene gently touched by nature's grandeur, silent stories of love and companionship play out as the elephants pass by. Task: Styled Image Caption Instruction: Identify the primary activity happening in the scene described by the image. Answer: The primary activity in the image is observing and photographing a herd of elephants moving through a town. Task: Activity Recognition Instruction: By analyzing the given information, what could you infer about the main event in the image? Answer: The main event in the image is a herd of elephants moving through a town while being observed and photographed by people. Task: Activity Recognition
Figure 13: The first example for generating positive instruction and answers (Part3).
21
Positive Instance Generation Input Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: eyebrow on the man X: 79 Y: 77 Width: 36 Height: 13 forehead on the man X: 73 Y: 52 Width: 63 Height: 26 two people playing game X: 2 Y: 1 Wid! 330 Height: 497 hair on the man X: 52 Y: 11 Width: 78 Height: 24 writing on mans shirt X: 0 Y: 366 Width: 91 Height: 47 logo on mans shirt X: 64 Y: 327 Width: 36 Height: 57 wii controller in right hand X: 144 Y: 351 Width: 107 Height: 76 a pair of glasses on the man X: 16 Y: 91 Width: 149 Height: 32 the head of a man playing a game X: 0 Y: 7 Width: 153 Height: 203 upper teeth of the young man X: 113 Y: 154 Width: 26 Height: 6 blurry left hand of the man X: 192 Y: 310 Width: 80 Height: 78 a blue tshirt on the man X: 0 Y: 171 Width: 174 Height: 254 cords of the wii game X: 234 Y: 387 Width: 52 Height: 111 a wide set of blinds X: 49 Y: 1 Width: 279 Height: 270 a young man on the sofa X: 197 90 Width: 134 Height: 278 writing and an insignia on a shirt X: 8 Y: 325 Width: 88 Height: 85. a boy wearing glasses X: 34 Y: 37 Width: 132 Height: 118 a boy playign a game X: 49 Y: 32 Width: 137 Height: 133 a white wii remote X: 138 Y: 341 Width: 160 Height: 74 a boy earing a shirt X: 24 Y: 212 Width: 164 Height: 269 glasses on a mans face X: 34 Y: 86 Width: 131 Height: 44 nintendo wii remote control X: 144 Y: 350 Width: 70 Height: 43 nintendo wii gaming nunchuk X: 198 Y: 310 Width: 48 Height: 47 the man is asian X: 11 Y: 20 Width: 247 Height: 430 the man has a controller X: 129 Y: 332 Width: 145 Height: 115 Task: Image Captioning, Image Sentiment Analysis, Activity Recognition, Object Interaction Analysis, Object Attribute Detection, Muli-choice VQA, Visual Entailment, Visual Commonsense Reasoning, Styled Image Caption, Referential Expression Grounding. Come up with 20 diverse instructions for all the tasks above with different language styles and accurate answers. The instructions should contain interrogative sentences and declarative sentences. The answers should be less than 30 words. Each task should have less than 3 instructions. Output format: Instruction: instruction example Answer: answer example Task: task sample
Figure 14: The second example for generating positive instruction and answers (Part1). The image is not included in the text prompt.
22
# Positive Instance Generation
Output: Instruction: Identify the primary activity taking place in the described image. Answer: The primary activity is two people playing a game. Task: Activity Recognition What is the sentiment expressed in the image based on the given details? Answer: The sentiment is joyful and engaged, as depicted by a man playing a game. Task: Image Sentiment Analysis Provide a concise caption for the given image. Answer: An Asian man joyfully playing a Wii game on the sofa, wearing a blue shirt with an insignia. Task: Image Captioning Instruction: Specify the emotion depicted in the image based on the objects Answer: The emotion depicted is happiness, inferred from the activity and facial features of the man. Task: Image Sentiment Analysis Instruction: Based on the details given, state which object is interacting with the man. Answer: The man is interacting with the Wii controller. Task: Object Interaction Analysis Instruction: Identify the unique attribute associated with the man in the given image. Answer: The unique attribute is that the man is Asian and wearing glasses. Task: Object Attribute Detection Instruction: From the listed objects, which one is closest to the man's face? Answer: The glasses are closest to the man's face. Task: Referential Expression Grounding Instruction: Is the statement "The man is playing a video game" true or false based on the image details? Answer: True Task: Visual Entailment Instruction: Based on the information, is it reasonable to conclude that the man is indoors? Answer: Yes, it's reasonable, due to the presence of a sofa and a set of blinds. Task: Visual Commonsense Reasoning Instruction: Write a caption for the image with a humorous style. Answer: "Multitasking man: mastering Wii, sporting stylish glasses, and showing off that shirt insignia all at once!" Task: Styled Image Caption
Figure 15: The second example for generating positive instruction and answers (Part2).
23
# Positive Instance Generation
Output: Instruction: Determine the color of the man's shirt in the image. Answer: The man's shirt is blue. Task: Object Attribute Detection Instruction: Based on the details, can we infer that the game being played is a Nintendo Wii game? Answer: Yes, we can infer that the game being played is a Nintendo Wii game due to the presence of a Wii controller and nunchuk. Task: Visual Commonsense Reasoning Instruction: Describe the location of the man in relation to the set of blinds. Answer: The man is positioned in front of the set of blinds. Task: Referential Expression Grounding Instruction: Given the data, could the statement "The boy is playing a game with a PS4 controller" be true? Answer: No, it's false because the controller specified is a Wii controller, not a PS4 controller. Task: Visual Entailment Identify any identifiable attributes of the man in the given image. Answer: The man is young, Asian, wears glasses, has a shirt with a logo and writing, and is using a Wii controller. Task: Object Attribute Detection Instruction: Identify the object the man is holding in his right hand based on the image. A) PS4 controller B) Wii controller C) Mobile phone D) Answer: B) Wii controller Task: Muli-choice VQA Instruction: Determine the color of the man's shirt from the image description. A) Red B) Black C) Blue D) Green Answer: C) Blue Task: Muli-choice VQA Instruction: Given the data, state the activity that the man and the boy are participating in. Answer: The man and the boy are participating in gaming. Task: Activity Recognition Instruction: Using an old-fashioned style of speech, describe the image portrayed.. Answer: Behold, a youthful gent of Asian descent, donned in spectacles and a blue tunic, merrily engrossed in a game of the Nintendo Wii. Task: Styled Image Caption Instruction: Based on the image, create a vivid description of the image. Answer: An engaged Asian man with glasses and a logo-clad blue shirt, situated on a sofa, immerses himself in an exhilarating Wii game. Task: Image Captioning
Figure 16: The second example for generating positive instruction and answers (Part3).
24
# Book
Negative Instance Generation (Nonexistent Element Manipulation) Input Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: there is light blue sky in the distance X: 366 Y: 15 Width: 17 Height: 42 there is a white surfboard here X: 201 Y: 113 Width: 43 Height: 100 there are black swim trunks here X: 276 Y: 146 Width: 34 Height: 67 this man has long hair here X: 337 Y: 112 Width: 20 Height: 37 there is some wild water here X: 58 Y: 63 Width: 51 Height: 96 jackson mingus took this photo X: 173 Y: 31 Width: 165 Height: 174 this man has very hairy legs X: 261 Y: 137 Width: 15 Height: 26 there is a wonderful amount of coloring X: 176 Y: 53 Width: 122 Height: 167 this photo will soon be published in a magazine X: 161 Y: 27 Width: 64 Height: 120 this photo is a really artistic example X: 179 Y: 34 Width: 113 Height: 162 black and white surf board X: 190 Y: 85 Width: 106 Height: 150 black cotton swim trunks X: 268 Y: 149 Width: 49 Height: 59 water splashed in air X: 51 Y: 65 Width: 174 Height: 137 black decal on surf board X: 221 Y: 153 Width: 57 Height: 75 man standing on surf board X: 248 Y: 92 Width: 120 Height: 135 man surfing in ocean X: 188 Y: 77 Width: horizon where ocean meets the sky X: 250 Y: 42 Width: 190 Height: 54 calm blue ocean water X: 412 Y: 77 Width: 73 Height: 41 white surfboard mostly out of the water X: 194 Y: 70 Width: 95 Height: 165 ocean wave around surfboard X: 341 Y: 178 Width: 152 Height: 50 wave breaking on shore X: 2 Y: 177 Width: 494 Height: 57 white foam in water X: 4 Y: 203 Width: 184 Height: 32 black piece on surf board X: 190 Y: 96 Width: 22 Height: 19 man riding a surf board X: 189 Y: 82 Width: 180 Height: 147 man on surfboard X: 197 Y: 57 Width: 206 Height: 163 white ocean spray from surfer X: 62 Y: 64 Width: 122 Height: 166 three blue fins on surfboard X: 185 Y: 88 Width: 51 Height: 60 man with arms up to balance X: 212 Y: 69 Width: 162 Height: 157 photograph taken in the ocean X: 62 Y: 12 Width: 430 Height: 218 design on bottom of surfboard X: 210 Y: 149 Width: 62 Height: 67 lling off surfboard X: 26 Y: 12 Width: 470 Height: 223 Come up with 6 misleading instructions with nonexistent elements (nonexistent objects, nonexistent activities, nonexistent attributes, nonexistent interactions) in the images with different language styles. The instructions should contain interrogative and declarative sentences. Please also explain the reason. Output format: Instruction: instruction example Reason: Reason example
Figure 17: The example for generating negative instruction and answers by nonexistent object manipulation (Part1). The image is not included in the text prompt.
25
# Negative Instance Generation (Nonexistent Element Manipulation)
Output: Instruction: Could you please zoom in on the red bird sitting on the surfboard? Reason: There is no mention of any bird, let alone a red one, in the provided bounding box information, which means this element is nonexistent in the image. Instruction: Find the green umbrella on the beach and note its position. Reason: The provided bounding boxes do not include any information about a beach or a green umbrella. This makes the object nonexistent within the context of the image. Instruction: Identify the interaction between the surfer and the shark in the water. Reason: Although there's a man and water in the image, there's no indication of a shark. Therefore, there's no interaction between a surfer and a shark in this image, making it a nonexistent interaction. Instruction: Can you locate the cluster of palm trees in the background? Reason: The bounding boxes only describe a light blue sky, surfboard, swim trunks, man, water, and wave - there's no mention of any palm trees. Therefore, palm trees are a nonexistent element in this image. Instruction: Look for the sunset reflecting off the ocean waves. Reason: There is no information given that suggests a sunset scene in the image. The sky is described as light blue, which suggests a day scene, making a sunset a nonexistent attribute. Instruction: Highlight the second surfer trying to overtake the first one. Reason: The image only contains bounding box data for a single man, who can be assumed to be a surfer based on other objects (like the surfboard). The second surfer is therefore a nonexistent object.
Figure 18: The example for generating negative instruction and answers by nonexistent object manipulation (Part2).
26
Negative Instance Generation (Existent Element Manipulation) Input Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: there is light blue sky in the distance X: 366 Y: 15 Width: 17 Height: 42 there is a white surfboard here X: 201 Y: 113 Width: 43 Height: 100 there are black swim trunks here X: 276 Y: 146 Width: 34 Height: 67 this man has long hair here X: 337 Y: 112 Width: 20 Height: 37 there is some wild water here X: 58 Y: 63 Width: 51 Height: 96 jackson mingus took this photo X: 173 Y: 31 Width: 165 Height: 174 this man has very hairy legs X: 261 Y: 137 Width: 15 Height: 26 there is a wonderful amount of coloring X: 176 Y: 53 Width: 122 Height: 167 this photo will soon be published in a magazine X: 161 Y: 27 Width: 64 Height: 120 this photo is a really artistic example X: 179 Y: 34 Width: 113 Height: 162 black and white surf board X: 190 Y: 85 Width: 106 Height: 150 black cotton swim trunks X: 268 Y: 149 Width: 49 Height: 59 water splashed in air X: 51 Y: 65 Width: 174 Height: 137 black decal on surf board X: 221 Y: 153 Width: 57 Height: 75 man standing on surf board X: 248 Y: 92 Width: 120 Height: 135 man surfing in ocean X: 188 Y: 77 Width: horizon where ocean meets the sky X: 250 Y: 42 Width: 190 Height: 54 calm blue ocean water X: 412 Y: 77 Width: 73 Height: 41 white surfboard mostly out of the water X: 194 Y: 70 Width: 95 Height: 165 ocean wave around surfboard X: 341 Y: 178 Width: 152 Height: 50 wave breaking on shore X: 2 Y: 177 Width: 494 Height: 57 white foam in water X: 4 Y: 203 Width: 184 Height: 32 black piece on surf board X: 190 Y: 96 Width: 22 Height: 19 man riding a surf board X: 189 Y: 82 Width: 180 Height: 147 man on surfboard X: 197 Y: 57 Width: 206 Height: 163 white ocean spray from surfer X: 62 Y: 64 Width: 122 Height: 166 three blue fins on surfboard X: 185 Y: 88 Width: 51 Height: 60 man with arms up to balance X: 212 Y: 69 Width: 162 Height: 157 photograph taken in the ocean X: 62 Y: 12 Width: 430 Height: 218 design on bottom of surfboard X: 210 Y: 149 Width: 62 Height: 67 man falling off surfboard X: 26 Y: 12 Width: 470 Height: 223 Come up with 6 misleading instructions with existing objects with wrong attributes in the images with different language styles. The instructions should contain interrogative and declarative sentences. Please also explain the reason. Output format: Instruction: instruction example Reason: Reason example
Figure 19: The example for generating negative instruction and answers by existent object manipula- tion (Part1). The image is not included in the text prompt.
27
Negative Instance Generation (Existent Element Manipulation)
Output Instruction: Did you spot the surfer's pink board riding the wave? Reason: The surfboard in the image is described as white, not pink. This misleading instruction changes the color attribute of the surfboard. Instruction: Look closely, isn't that a calm lake in the distance? Reason: The water body in the image is described as a part of an ocean, not a calm lake. This misleading instruction alters the context and setting of the photo. Instruction: The man in the picture is sporting a buzz cut, isn't he? Reason: The man in the image is described as having long hair, not a buzz cut. This misleading instruction misrepresents the attribute related to the man's hairstyle. Instruction: That's a serene morning sky in the background, don't you think? Reason: The sky in the image is described as a light blue sky, which does not necessarily mean it's morning. This misleading instruction gives an incorrect time context. Instruction: Isn't that surfboard fully immersed in the water? Reason: The surfboard is described as being mostly out of the water, not fully immersed. This misleading instruction changes the positioning of the surfboard relative to the water. Instruction: The man on the surfboard is performing a complex trick, isn't he? Reason: The man on the surfboard is described as having his arms up to balance, which does not imply he is performing a complex trick. This misleading instruction misrepresents the man's action on the surfboard.
Figure 20: The first example for generating negative instruction and answers by existent object manipulation (Part2).
28
GPT4-Assisted Visual Instruction Evaluation Input Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: food is in a tray X:20 Y:55 Width:470 Height:470 the tray is white X:18 Y:56 Width:471 Height:471 some pieces of chicken X:85 Y:149 Width:142 Height:142 pile of white rice X:218 Y:112 Width:196 Height:196 the carrot is orange X:177 Y:116 Width:67 Height:67 a piece of broccoli X:83 Y:130 Width:52 Height:52 the spoon is white X:0 Y:7 Width:69 Height:69 spoon and napkin in plastic wrapper X:0 Y:0 Width:135 Height:135 table is beige colored X:0 Y:17 Width:498 Height:498 sauce on the tray X:382 Y:241 Width:72 Height:72 a plastic spoon in a wrapper X:1 Y:4 Width:70 Height:70 a beige tray X:0 Y:17 Width:499 Height:499 a serving of white rice X:220 Y:115 Width:194 Height:194 beef cubes with brown sauce X:86 Y:151 Width:140 Height:140 brown sauce on the side of a white container X:363 Y:228 Width:102 Height:102 a baby carrot X:173 Y:115 Width:70 Height:70 bots of cauliflower between two carrots X:138 Y:95 Width:76 Height:76 a bit of broccoli X:82 Y:127 Width:51 Height:51 rice beef and veggie in a plastic container X:83 Y:96 Width:332 Height:332 a white plastic container X:19 Y:57 Wid! '2 Height:472 circle of rice in a plate X:260 Y:119 Width:135 Height:135 cut up pieces of meat X:93 Y:173 Width:45 Height:45 âsmall part of broccoli next to meat X:88 Y:130 Width:54 Height:54 small part of cut up carrot X:98 Y:98 Width:73 Height:73 meat sauce on the side of plate X:428 Y:228 Width:35 Height:35 cut up cauliflower in the corner X:170 Y:98 Width:49 Height:49 âsmall part of plastic spoon in the corner X:1 Y:2 Width:75 Height:75 tan folding table holding food X:243 Y:14 Width:182 Height:182 small piece of napkin wrapped in plastic X:67 Y:8 Width:84 Height:84 silver part of table attached to a table X:396 Y:1 Width:72 Height:72 portion of cooked white rice X:213 Y:114 Width:200 Height:200 kalua pork X:88 Y:150 Width:140 Height:140 mixed cooked vegetables X:85 Y:95 Width:162 Height:162 to go container with meat rick and vegetables X:21 Y:56 Width:469 Height:469 meal in bottom half of to go container X:21 Y:55 Width:469 Height:469 cooked carrot piece X:177 Y:117 Width:70 Height:70 cooked cauliflower piece X:164 Y:95 Width:54 Height:54 piece of cooked chopped broccoli X:84 Y:126 Width:49 Height:49 cooked meat with skin on X:82 Y:153 Width:149 Height:149 plastic to go spoon and napkin in plastic wrap X:3 Y:1 Width:132 Height:132 serving of white rice X:230 Y:131 Width:187 Height:187 mixed veggies in container X:99 Y:104 Width:145 Height:145 chicken with sauce in container X:89 Y:162 Width:132 Height:132 white container with food X:69 Y:65 Width:428 Height:428 plastic spoon in wrap X:3 Y:3 Width:98 Height:98 beige tray with food on it X:1 Y:28 Width:487 Height:487 sauce inside white container X:57 Y:252 Width:67 Height:67 baby carrot beside rice X:167 Y:117 Width:78 Height:78 cauliflower beside baby carrot X:145 Y:105 Width:67 Height:67 crispy chicken with glaze X:91 Y:187 Width:130 Height:130
Figure 21: The first example for the GPT4-Assisted Visual Instruction Evaluation (Part1). The image is not included in the text prompt.
29
# GPT4-Assisted Visual Instruction Evaluation
Here is the instruction for the image: Don't forget to notice the red wine glass beside the white container.
Answer: The instruction is misleading because there are no mentions of a red wine glass in the image. The declarative sentence in this instruction suggests that the reader should pay attention to a non-existent object.
Answer2: Yes, | see the red wine glass and the white container.</s>
Answer3: Based on the image, the meal consists of a combination of meat and vegetables, including rice, broccoli, and carrots, served on a white plate on a dining table. What can | do for you today?
Answer4: there is a red wine glass beside the white container Answer5: Don't forget to notice the red wine glass beside the white container. It's a nice touch. This is a great idea. I'll have to try it. Thanks for sharing! | love this idea! I'm definitely going to try it. Thanks for sharing! | love this idea! I'm definitely going to try it. Thanks for sharing! | love this idea! I'm definitely going to try it. Thanks for sharing! | love this idea! I'm definitely going to try it. Thanks for sharing! | love this.
Suppose you are a smart teacher, after looking at the image information above, please score the above answers(0-10) according to the following criteria: 1: whether the response directly follows the instruction 2: whether the response is accurate concerning the image content.
Output format:
relevancy: score of answer1: reason: score of answer2: reason: score of answer3: reason: score of answer4: reason: score of answerS: reason:
# accuracy: score of answer1:
# reason:
# score of answer2:
# reason:
# score of answer3:
# reason:
# score of answer4:
# reason:
# score of answerS:
# reason:
Figure 22: The first example for the GPT4-Assisted Visual Instruction Evaluation (Part2).
30
# GPT4-Assisted Visual Instruction Evaluation
Output: relevancy: score of answer1: 10 reason: Answer 1 directly responds to the instruction by stating that the red wine glass isn't present in the image. score of answer2: 6 reason: Even though the answer refers to the red wine glass, it's not relevant because there's no mention of a red wine glass in the image details. score of answer3: 2 reason: Answer 3 does not directly respond to the instruction. It describes the food and other objects in the image, but doesn't mention the red wine glass at all. score of answer4: 6 reason: Answer 4 directly states there is a red wine glass beside the white container, even though the image information doesn't support this. score of answers: 1 reason: Answer 5 doesn't address the instruction and instead offers multiple vague, irrelevant compliments about the idea. accuracy: score of answer1: 10 reason: Answer 1 accurately refers to the fact that there's no mention of a red wine glass in the image details. score of answer2: 1 reason: Answer 2 inaccurately claims that there is a red wine glass, even though there's no mention of one in the image details. score of answer3: 8 reason: Even though it doesn't mention the wine glass, Answer 3 provides a detailed and accurate description of the image based on the information given. score of answer4: 1 reason: Answer 4 inaccurately claims that there is a red wine glass beside the white container, contradicting the image details. score of answers: 1 reason: Answer 5 neither addresses the instruction nor accurately reflects the image content. It mentions a red wine glass but there's no information about it in the image details. Furthermore, its repeated statements seem nonsensical and irrelevant to the task.
Figure 23: The first example for the GPT4-Assisted Visual Instruction Evaluation (Part3).
31
GPT4-Assisted Visual Instruction Evaluation Input Prompt: Give an image with following information: bounding box, positions that are the object left-top corner coordinates(X, Y), object sizes(Width, Height). Highly overlapping bounding boxes may refer to the same object. bounding box: food is in a tray X:20 Y:55 Width:470 Height:470 the tray is white X:18 Y:56 Width:471 Height:471 some pieces of chicken X:85 Y:149 Width:142 Height:142 pile of white rice X:218 Y:112 Width:196 Height:196 the carrot is orange X:177 Y:116 Width:67 Height:67 a piece of broccoli X:83 Y:130 Width:52 Height:52 the spoon is white X:0 Y:7 Width:69 Height:69 spoon and napkin in plastic wrapper X:0 Y:0 Width:135 Height:135 table is beige colored X:0 Y:17 Width:498 Height:498 sauce on the tray X:382 Y:241 Width:72 Height:72 a plastic spoon in a wrapper X:1 Y:4 Width:70 Height:70 a beige tray X:0 Y:17 Width:499 Height:499 a serving of white rice X:220 Y:115 Width:194 Height:194 beef cubes with brown sauce X:86 Y:151 Width:140 Height:140 brown sauce on the side of a white container X:363 Y:228 Width:102 Height:102 a baby carrot X:173 Y:115 Width:70 Height:70 bots of cauliflower between two carrots X:138 Y:95 Width:76 Height:76 a bit of broccoli X:82 Y:127 Width:51 Height:51 rice beef and veggie in a plastic container X:83 Y:96 Width:332 Height:332 a white plastic container X:19 Y:57 Wid! '2 Height:472 circle of rice in a plate X:260 Y:119 Width:135 Height:135 cut up pieces of meat X:93 Y:173 Width:45 Height:45 âsmall part of broccoli next to meat X:88 Y:130 Width:54 Height:54 small part of cut up carrot X:98 Y:98 Width:73 Height:73 meat sauce on the side of plate X:428 Y:228 Width:35 Height:35 cut up cauliflower in the corner X:170 Y:98 Width:49 Height:49 âsmall part of plastic spoon in the corner X:1 Y:2 Width:75 Height:75 tan folding table holding food X:243 Y:14 Width:182 Height:182 small piece of napkin wrapped in plastic X:67 Y:8 Width:84 Height:84 silver part of table attached to a table X:396 Y:1 Width:72 Height:72 portion of cooked white rice X:213 Y:114 Width:200 Height:200 kalua pork X:88 Y:150 Width:140 Height:140 mixed cooked vegetables X:85 Y:95 Width:162 Height:162 to go container with meat rick and vegetables X:21 Y:56 Width:469 Height:469 meal in bottom half of to go container X:21 Y:55 Width:469 Height:469 cooked carrot piece X:177 Y:117 Width:70 Height:70 cooked cauliflower piece X:164 Y:95 Width:54 Height:54 piece of cooked chopped broccoli X:84 Y:126 Width:49 Height:49 cooked meat with skin on X:82 Y:153 Width:149 Height:149 plastic to go spoon and napkin in plastic wrap X:3 Y:1 Width:132 Height:132 serving of white rice X:230 Y:131 Width:187 Height:187 mixed veggies in container X:99 Y:104 Width:145 Height:145 chicken with sauce in container X:89 Y:162 Width:132 Height:132 white container with food X:69 Y:65 Width:428 Height:428 plastic spoon in wrap X:3 Y:3 Width:98 Height:98 beige tray with food on it X:1 Y:28 Width:487 Height:487 sauce inside white container X:57 Y:252 Width:67 Height:67 baby carrot beside rice X:167 Y:117 Width:78 Height:78 cauliflower beside baby carrot X:145 Y:105 Width:67 Height:67 crispy chicken with glaze X:91 Y:187 Width:130 Height:130
Figure 24: The second example for the GPT4-Assisted Visual Instruction Evaluation (Part1). The image is not included in the text prompt.
32
# GPT4-Assisted Visual Instruction Evaluation
Here is the question for the image:
With respect to the image, is the statement "The napkin is wrapped in the plastic spoon" true or false?
# Answer1: True
# Answer2: True
Answers3: In the image, the statement is false. The napkin is not wrapped in the plastic spoon. Instead, there is a spoon resting on the tray beside the foam container, which is filled with rice, meat, and vegetables.
# Answer4:
10000000000000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000 00000000000000000000000000000000000000000000000000000000000000000000000000000000 00000000000000
Answer5: With respect to the image, is the statement "The napkin is wrapped in the plastic spoon" true or false? With respect to the image, is the statement "The plastic spoon is wrapped in the napkin" true or false? With respect to the image, is the statement "The napkin is wrapped in the plastic spoon" true or false? With respect to the image, is the statement "The plastic spoon is wrapped in the napkin" true or false? With respect to the image, is the statement "The napkin is wrapped in the plastic spoon"
Suppose you are a smart teacher, after looking at the image information above, please score the above answers(0-10) according to the following criteria: 1: whether the response directly follows the instruction 2: whether the response is accurate concerning the image content.
# Output format:
# relevancy: score of answer1:
# reason:
# score of answer2:
# reason:
# score of answer3:
# reason:
# score of answer4:
# reason:
# score of answer5:
# reason:
# accuracy: score of answer1:
# reason:
# score of answer2:
# reason:
# score of answer3:
# reason:
# score of answer4:
# reason:
# score of answer5:
# reason:
Figure 25: The second example for the GPT4-Assisted Visual Instruction Evaluation (Part2).
33
# GPT4-Assisted Visual Instruction Evaluation
Output: relevancy: score of answer1: 10 reason: Answer1 gives a straight answer to the question as either true or false. score of answer2: 10 reason: Answer2 directly responds to the question by stating if it's true or false. score of answer3: 10 reason: While Answer3 provides more details than necessary, it does answer the question directly and completely. score of answer4: reason: Answer4 does not address the question at all. It appears to be a series of numbers without any clear relation to the question. score of answerS: reason: Answer5 does not directly answer the question. It repeats the question multiple times without providing an answer. Accuracy: score of answer1: reason: Answer1 is not accurate. The image information doesn't state that the napkin is wrapped in the spoon. score of answer2: reason: Answer2 is incorrect. The image information doesn't state that the napkin is wrapped in the spoon. score of answer3: 10 reason: Answers is accurate. The image information supports the statement made in the answer that the spoon and the napkin are both in plastic, not that the napkin is wrapped in the spoon. score of answer4: reason: The response is not relevant to the question and does not refer to the image content. It is not an accurate answer. score of answerS: reason: This response fails to provide an accurate answer, instead repeating the question multiple times without providing any answer.
Figure 26: The second example for the GPT4-Assisted Visual Instruction Evaluation (Part3).
34
Instruction: Can you locate the water bottle placed near the yellow tennis ball on the ground? MiniGPT,: The water bottle is located near the yellow tennis ball on the ground. Can you locate the water bottle placed near the yellow tennis ball on the ground? LLaVA: Unfortunately, as an AI visual assistant, I am not capable of moving or locating objects in a specific environment. However, given the image description, the water bottle is near the yellow tennis ball on the ground, and the boy is walking up to it, so you should be able to see the bottle soon enough. InstructBLIP: Yes, the water bottle is located near the yellow tennis ball on the ground. MMGPT: Yes, there is a water bottle placed near the yellow tennis ball on the ground. Ours: There is no object resembling a water bottle mentioned in the image. This question introduces a new object that doesn't exist in the image.
(a) All LMMs except ours produce inconsistent descriptions with the images. MiniGPT4 also has repetitive sentences.
Instruction: You might notice a painting of sunset hanging over stove in the kitchen. MiniGPT4: âThere is a painting of a sunset hanging above the stove in the kitchen. The kitchen has a blue and white color scheme, and there are some grey tiles on the floor. There is a white oven on one wall, a fridge on another wall, and a microwave oven on the third wall, The kitchen also has a small dining table with two chairs, and there is a small window above the table. LLaVA: The painting of a sunset hanging above the stove creates a visually appealing ambiance in the kitchen. It adds warmth and a sense of tranquility to the space. The kitchen appears to be empty, with a stove, a sink, and some cabinets. Perhaps you could also consider adding a few decorative elements, such as plants or colorful kitchen towels, to further enhance the space and make it more inviting. InstructBLIP: There is a painting of a sunset hanging above the stove in the kitchen, which adds a touch of warmth and beauty to the space. The painting might be a decorative element that complements the overall design of the kitchen, making it more visually appealing and inviting. MMGPT: What is the color of the sunset in the painting? Ours: This instruction misleads the viewer into believing that there is a painting in the kitchen when there is no mention of a painting in the provided image..
(b) MiniGPT4, LLaVA, and InstructBLIP respond with long descriptive answers about the "painting", which does not exist in the image. Multimodal-GPT produces a question about the "painting," but it does not address the instruction.
Instruction: Identify the pink flowers blooming at the bottom left corner of the image. Are they a seasonal variety or do they grow year-round?. MiniGPT4: The image shows a group of sheep grazing on the side of a mountain. The mountain is covered in rocks and there are clouds in the sky. The sheep appear to be grazing on a field of green grass. The clouds in the sky are quite ominous and it appears to bea stormy day. LLaVA: The pink flowers blooming at the bottom left corner of the image are likely a seasonal variety, as they are not seen growing year-round. Their appearance in the image| âsuggests that the photo was taken during a specific season when these flowers are in bloom. InstructBLIP: The pink flowers are blooming year-round. MMGPT: Identify the pink flowers blooming at the bottom left comer of the image. Are they a seasonal variety or do they grow year-round? If they are seasonal, when do they bloom? If they are year-round, what is the name of the plant. The pink flowers blooming at the bottom left corner of the image are a seasonal variety, as they are in bloom during the spring season. Ours: This instruction is misleading because there are no pink flowers n the image. The given image primarily focuses on outdoor objects such as sheep, mountains, and rocks.
(c) MiniGPT4, LLaVA, and Multimodal-GPT respond with a long descriptive answer about the "pink flowers", which do not exist in the image. Although InstructBLIPâs answer is brief, itâs still false.
Figure 27: Model predictions on negative instruction examples of Neg1:"Nonexistent object Manipu- lation". RED represents the nonexistent elements in the images.
35
Instruction: Is the white curtain on top of the grey cat actually a colorful curtain with butterfly patterns? MiniGPT4: The cat is actually on top of the curtain, which is not white but has different colors, such as grey, brown and white. LLaVA: Yes, the white curtain on top of the grey cat actually has a butterfly pattern. InstructBLIP: Yes, the loose strands from the yarn blanket form a butterfly pattern. MMGPT: Yes, the white curtain on top of the grey cat is actually a colorful curtain with butterfly patterns. Ours: This instruction is misleading because the white curtain is mentioned to be on the top of the window, but it is not described as a colorful curtain with butterfly patterns.
# (a) All loose
LMMs except ours produce inconsistent descriptions with the images. Our model can point out that the strands do not form a butterfly pattern. Instruction: Are the windows on the brick building red-framed? MiniGPT,: Yes. The windows on the brick building are red-framed. LLaVA: Yes, the windows on the brick building have red frames InstructBLIP: The windows on the building are white-framed. MMGPT: yes Ours: The windows on the building are white-framed.
# (b) All windows
LMMs except ours produce false attributes of windows in the images. Our model can point out that the are white instead of red. Instruction: Does the woman with green hair have a black backpack on her shoulder? MiniGPT,: Yes, the woman has a black backpack on her shoulder. LLaVA: Yes, the woman with green hair has a black backpack on her shoulder. InstructBLIP:Yes, the woman with green hair has a black backpack on her shoulder. MMGPT: Yes, the woman with green hair has a black backpack on her shoulder. Ours: There is no mention of a woman with green hair and a black backpack in the image. This instruction is misleading because it presents a false attribute (the backpack) that is not present in the image.
(c) All LMMs except ours produce inconsistent descriptions with the images. Our model can point out that the woman with green hair doesnât have a black backpack on her shoulder.
Figure 28: Model predictions on negative instruction examples of Neg2:"Existent object Manipula- tion". RED represents the wrong attributes of existent objects in the images.
36
Instruction: What objects are on the toddlerâs feet? MiniGPT4: Two teddy bears are sitting on the toddler's feet. LLaVA: The toddler has a teddy bear and a shoe on their feet. InstructBLIP: 2 objects are on the toddler's feet: socks and shoes. MMGPT: The toddlerâs feet are covered with a blanket, and there is a teddy bear on top of the blanket, Ours: White socks
(a) Object Detection Task. MiniGPT4, LLaVa, and MMGPT canât locate the objects correctly.
Multi-choice VQA Task. InstructBLIP canât follow the instruction well by generating anything. LLaVAâs seems reasonable, but it incorrectly detects the clouds in the image. Instruction: From the image, tell me what part of the room is tiled. MiniGPT4: The room is tiled with black and white tiles. LLaVA: The tiled area in the room is the kitchen. InstructBLIP: The part of the room that is tiled is the floor. MMGPT: The answer is kitchen. Ours: The floor of the room is tiled.
(b)
# answer
(c) Referential Expression Grounding Task. Other LMMs canât directly address the instruction by giving a general answer, "kitchen," while our model can clearly predict the "floor".
Figure 29: Positive instruction demos with different tasks and predictions from different models.
37
Instruction: Detect any unusual elements in the image. MiniGPT«: There are some onions, carrots, mushrooms, and parsley in the image. LLaVA: In the image, there are two pieces of broccoli located on a cutting board. Additionally, there is a group of mushrooms and carrots on the table. These vegetables appear fresh and ready to be used as ingredients for a meal. However, there are no unusual elements in the scene, as it shows a typical assortment of cooking ingredients. InstructBLIP: The image features a wooden cutting board topped with a variety of vegetables, including mushrooms, carrots, and parsley. These ingredients are arranged ina pile on the cutting board, ready to be chopped or sliced for use in a recipe or meal. MMGPT: The image features a bowl of fresh vegetables, including carrots, broccoli, and celery. There is also a knife placed on top of the bowl. Ours: There are no unusual elements in the image.
Ours: There are no unusual elements in the image. Image Anomaly Detection Task. Our model and LLaVA point out there is a normal image without unusual while MiniGPT4, InstructBLIP, and MMGPT describe the image instead of addressing the instruction. Instruction: The coffee table looks wooden. True or False? MiniGPT4: False LLaVA: The coffee table is wooden. InstructBLIP: The coffee table looks wooden. True. MMGPT: True Ours: True.
(a) elements,
(b) Visual Entailment Task.
(c) Object Interaction Analysis Task. All LMMs except ours describe the wrong location of the book. LLaVA generates long text with unrelated information to address the instruction.
Figure 30: Positive instruction demos with different tasks and predictions from different models.
38
Instruction: What is the color and state of the horse's bridle? MiniGPT4: The horse's bridle is black and is located on a red truck.. LLaVA: The horse has a white bridle, and its harness appears to be thrown onto the trailer. InstructBLIP: The horse's bridle is brown. MMGPT: The horse's bridle is red and is in a ready-to-wear state. Ours: The bridle is pink and attaches on the horse's head.
(a) Dense Description Task. All LMMs except ours incorrectly recognize the color of the bridle. InstructBLIP also fails to answer the "state of the horseâs bridle" in the instruction.
Instruction: Create a caption for the image using a sentence that describes the goats and their surroundings. MiniGPT«4: The goats are grazing on the hillside, surrounded by rocks and mountains. The sky is a brilliant blue, and the goats are content in their natural environment. LLaVA: In the image, a group of four goats is seen grazing on grass and plants on top of a rocky hillside. They are scattered around the area, with some of them eating grass close to the rocks, while others are grazing further away. The goats are surrounded by a mountainous landscape, which provides a scenic backdrop for this peaceful moment. InstructBLIP: A herd of goats is grazing on a rocky hillside, surrounded by mountains in the background. The goats appear to be eating grass and other vegetation growing on the rocky terrain. The mountainous landscape provides ascenic backdrop for the animals, adding to the natural beauty of the scene. MMGPT: The caption should be between 5-10 words. A herd of goats grazing in the mountains. Ours: Flock of goats feasting on grass ona hilltop with large rocky mountains and green shrubs in the background, with a blue sky and cloudy weather.
and green shrubs in the background, with a blue sky and cloudy weather. Image Caption Task. LLaVA and InstructBLP fail to follow the instruction for generating one caption. they generate long text with several sentences. Instruction: What action is the person closest to the frisbee performing? MiniGPT4: The person closest to the frisbee is throwing it LLaVA: The person closest to the frisbee is preparing to catch it, with their arms outstretched. InstructBLIP: The person closest to the frisbee is attempting to catch the frisbee. MMGPT: The person closest to the frisbee is throwing the frisbee. Ours: The person is throwing the frisbee.
(b) Instead,
(c) Activity Recognition Task.
Figure 31: Positive instruction demos with different tasks and predictions from different models.
39
Net sales of Sega Sammy Holdings from fiscal] z sn 60 of the Criminal justi Share of voluntary work in the sports sector 2010 to 2019 (in billion Japanese yen) gs (1994) from 2001/02 in the last year in England from 2005/06 _ 40 + ; to 2018/19 0.20 Ed ° Fd ws 8 ° 3 Share of respondents ° 6 a 8 0.00 Net sales in billion Japanese yi o et pt wt oi® Year Peyrine slits ping end peorcnes Question: Did net sales increase in 2012 to 2013 and Question: Did the number of police stop and Question: please summary the statistics and relations begin to drop again? searches peak during 2009/08? of the chart. . Answer: No.Net sales dropped in 2012 to 2013 and Answer: No.The number of police stop and searches Answer: I can see that 2005 until 2019 the volunteer began to rise again. peaked during 2008/09. work has almost his the 0.20 mark. In the years 2011 until 2012 is the lowest time in the years studied. Direct contribution of travel and tourism There isnât much variation between all of the years. to employment in the Asia Pacific region in 2019 , by country (in 1,000s) Pakistan : National debt from 2015 to 2025 in relation to gross domestic product (GDP) Number of jobs In thousands 00 02 of O06 8 National debt in relation to GDP. Country Question: Did China contribute more than 25,000 jobs to the direct Question: Is the title of the bar plot "Least valuable Question: What is the prediction for Pakistan's employment of travel and tourism in the Asia Pacific region in 2019, France brands in 2020"? national debt in relation to GDP from 2020 onwards? while India contributed less than 25,000 jobs? . Answer: No. the title of the bar plot "Most valuable A®Swer: From 2020 onwards, the national debt of Answer: No.The direct contribution of travel and tourism to Italy brands in 2020"? Pakistan (in relation to GDP) is predicted to decline. employment in the Asia Pacific region in 2019 was carried by China and India who both contributed higher than 25,000 jobs each.
(a) More examples of chart instruction data and knowledge manipulation examples.
(b) Comparison between current LMMs when the input is Knowledge Manipulation instruction.
Figure 32: Examples of Knowledge Manipulations.
40 | {
"id": "2307.05052"
} |
2306.13421 | Long-range Language Modeling with Self-retrieval | Retrieval-augmented language models (LMs) have received much attention
recently. However, typically the retriever is not trained jointly as a native
component of the LM, but added to an already-pretrained LM, which limits the
ability of the LM and the retriever to adapt to one another. In this work, we
propose the Retrieval-Pretrained Transformer (RPT), an architecture and
training procedure for jointly training a retrieval-augmented LM from scratch
for the task of modeling long texts. Given a recently generated text chunk in a
long document, the LM computes query representations, which are then used to
retrieve earlier chunks in the document, located potentially tens of thousands
of tokens before. Information from retrieved chunks is fused into the LM
representations to predict the next target chunk. We train the retriever
component with a semantic objective, where the goal is to retrieve chunks that
increase the probability of the next chunk, according to a reference LM. We
evaluate RPT on four long-range language modeling tasks, spanning books, code,
and mathematical writing, and demonstrate that RPT improves retrieval quality
and subsequently perplexity across the board compared to strong baselines. | http://arxiv.org/pdf/2306.13421 | Ohad Rubin, Jonathan Berant | cs.CL | null | null | cs.CL | 20230623 | 20230623 | 3 2 0 2
n u J 3 2 ] L C . s c [
1 v 1 2 4 3 1 . 6 0 3 2 : v i X r a
# Long-range Language Modeling with Self-retrieval
# Ohad Rubin Jonathan Berant
# The Blavatnik School of Computer Science, Tel Aviv University {ohad.rubin,joberant}@cs.tau.ac.il
# Abstract
Retrieval-augmented language models (LMs) have received much attention recently. How- ever, trained typically the retriever is not jointly as a native component of the LM, but added to an already-pretrained LM, which lim- its the ability of the LM and the retriever to adapt to one another. In this work, we propose the Retrieval-Pretrained Transformer (RPT), an architecture and training procedure for jointly training a retrieval-augmented LM from scratch for the task of modeling long texts. Given a recently generated text chunk in a long document, the LM computes query representations, which are then used to retrieve earlier chunks in the document, located poten- tially tens of thousands of tokens before. In- formation from retrieved chunks is fused into the LM representations to predict the next tar- get chunk. We train the retriever component with a semantic objective, where the goal is to retrieve chunks that increase the probabil- ity of the next chunk, according to a reference LM. We evaluate RPT on four long-range lan- guage modeling tasks, spanning books, code, and mathematical writing, and demonstrate that RPT improves retrieval quality and subse- quently perplexity across the board compared to strong baselines.
Book or Semantically Similar Long text Lexically Similar Chunk 13 âChunk 100 Asa kid, Lt. John found a dead dog; â| since then, crimson |" always unnerved him âChunk 201 Lt. John looked around, "Another victim, The Crimson Murderer strikes again." The killer left a room full of evidence, a puzzle for forensics. Past States Retrieve Causal Language Model âChunk 202 â1 bet the forensic guys would love this.â Training Signal Target
Figure 1: Retrieval-Pretrained Transformer (RPT) is a language model for long texts (e.g., books) trained from scratch with a native retrieval ability. RPT takes a chunk of text as input, retrieves semantically-relevant chunks from the past to better predict the next chunk, and fuses these retrieved chunks into its representations. On top of a standard LM loss, the retriever is trained to retrieve chunks that increase the probability of the next chunk according to a reference LM.
# Introduction
Large language models (LMs) have had immense success recently (Brown et al., 2020; Chowdhery et al., 2022; Zhang et al., 2022; Touvron et al., 2023), becoming a useful tool across disciplines. However, their success comes at a computational cost, due to increasing parameter counts for storing world knowledge and growing context lengths that enable access to distant information, but incur a quadratic complexity penalty. Retrieval-augmented language modeling (RALM) alleviates this cost (Khandelwal et al., 2020; Yogatama et al., 2021;
Borgeaud et al., 2022; Ram et al., 2023), as pre- cise retrieval of relevant information can reduce memory and computation requirements. More- over, RALM is beneï¬cial for factuality, freshness and generalization without necessitating retraining, simply by swapping the retrieval index (Guu et al., 2020; Lewis et al., 2020; Huang et al., 2023).
However, past work on RALM has by and large not trained the retriever as a ï¬rst-class component of the LM. In some cases (Khandelwal et al., 2020; Yogatama et al., 2021; Borgeaud et al., 2022), the retriever was used only at test time, or remained
ï¬xed throughout training, preventing it from adapt- ing to the LM generator. In other cases, the re- triever component was jointly trained but only after a separate pretraining phase for both the retriever and LM (Sachan et al., 2021; Izacard et al., 2022; Jiang et al., 2022; Bertsch et al., 2023). Thus, the retriever was not pre-trained from scratch with the LM, and only a fraction of the training budget was allocated for joint training.
Recently, Zhong et al. (2022) presented a retrieval-augmented LM that trains a retriever from scratch jointly with the LM, but (a) the retriever was trained to exploit lexical information only, and (b) the retrieved information was not fused at the representation level back into the LM.
In this work, we present the Retrieval-Pretrained Transformer (RPT), a retrieval-augmented LM, where the retriever is a ï¬rst-class component, trained jointly from scratch with the LM. RPT re- lies on two technical contributions. First, on the architecture side (see Fig. 1), input representations for the retriever are computed from the LM repre- sentations themselves (which we dub self-retrieval), and retrieved representations are fused back into the LM decoder for making next word predictions. Second, we train the retriever with an auxiliary loss function that encourages retrieving text frag- ments that increase the probability of generating the subsequent text. Speciï¬cally, given a recently- generated chunk ct, the retriever is trained to re- trieve chunks ci that increase the probability of pscoring(ct+1 | ci, ct) according to a reference scor- ing LM. Fig. 1 provides an illustrative example for a case where a crime scene is described, and a scor- ing LM shows the beneï¬t of retrieving a chunk thousands of tokens away (chunk 13) compared to lexical retrieval, which leads to a chunk that is only superï¬cially related (chunk 100).
We focus on the problem of modeling long doc- uments, such as books, articles, code, scripts, and dialogue, since these are naturally occurring exam- ples of long-form content, where the entire index can be held within memory in a forward-pass. We evaluate RPT on four language modeling tasks and ï¬nd that it improves perplexity across all tasks, out- performing prior work (Hutchins et al., 2022; Wu et al., 2022) as well as strong baselines (Borgeaud et al., 2022; Zhong et al., 2022). Moreover, we show that RPT retrieves high-quality chunks com- pared to retrievers that rely on lexical information. Based on our empirical ï¬ndings, we argue RPT
can pave the way toward the next generation of pre- trained LMs, where retrieval is strongly embedded within the architecture and training procedure.
# 2 Background
To situate our contribution, we review relevant re- cent RALM work. We extend this to more related work in §6.
Early work on RALMs, such as kNN-LM (Khan- delwal et al., 2020) used retrieval to improve lan- guage modeling by interpolating the next-word dis- tribution produced by the LM with a distribution proposed through a test-time-only retrieval mecha- nism. Borgeaud et al. (2022) later proposed Chun- ked Cross-Attention (CCA), where retrieval is per- formed also at training time, and retrieved repre- sentations are deeply fused into the representations produced by a Transformer decoder through atten- tion. However, the retriever was trained separately and kept ï¬xed during training, which prevented it from adapting to the LM over the course of train- ing.
TRIME (Zhong et al., 2022), like this work, trained a retrieval-augmented LM from scratch where the retriever component and the decoder LM are trained jointly. Our work differs from TRIME in two aspects: First, TRIME, like kNN-LM, incor- porates information from the retriever in a shallow manner through distribution interpolation, while we adopt CCA as a deeper fusion mechanism. Sec- ond, TRIME takes advantage of lexical clues for supervising the retriever, that is, given a query, the TRIME retriever learns to retrieve contexts that will lead to generating the same token as the query. We, on the other hand, use a scoring LM to evalu- ate what text chunks are relevant for increasing the probability of the chunk being generated, which leads to more semantic retrieval. This is similar to EPR (Rubin et al., 2022), which used this idea for learning to retrieve prompts for in-context learning, and perplexity distillation in Atlas (Izacard et al., 2022). However, Atlas does not train the retriever and LM from scratch and is an encoder-decoder model, more suitable for knowledge-intensive tasks. We, conversely, train from scratch and use a de- coder model, more suitable for modeling long texts.
# 3 Retrieval-Pretrained Transformer
Problem Setup RPT, like RETRO (Borgeaud et al., 2022), is a chunk-wise retrieval-augmented LM, where the input sequence is divided into
fr Toke Input Tokens ReaD ry Upper Decoder - fea Bio petatse . L rs Feed Forward Chunked Cross Attention] -+âââ a Causal Attention H roN Pool + Project
Figure 2: The architecture of the Retrieval-Pretrained Transformer, where an input of 45 tokens is shown, con- sisting of 9 chunks, and causal self-attention is applied over 15 tokens. The left side shows the decoder stack, where the bottom nlayers layers also include chunked 2 cross-attention layers that fuse information from retrieved chunks. The right side shows the retriever, which takes a chunk and retrieves the highest-scoring K chunks that appeared earlier in the document.
chunks, and retrieval is performed at the chunk evel. Specifically, given a sequence of L input tokens, (71, 22,...,U1,), we partition it into a se- quence of @ = z non-overlapping chunks of ength m, denoted by C = (c1,¢2,...,¢¢). For every possible query chunk, cl = c;, the model will retrieve a subset of at most kK < ¢ chunks, R(cA) C CM = (1, 2, .--, wy), Where C<* is he set of retrievable chunks for c;, which excludes the w chunks to which it already has access to hrough causal self-attention. The goal is to learn a model that retrieves a chunk subset, R(c*), that in- crease the probability of autoregressive generation of the target chunk ct = cj11.
We present our method in two parts. First, our architecture (§3.1), which leverages CCA to fuse retrieved representations into the LM, but adds a learned retriever component. Second, we present the training method (§3.2-§3.3), where the retriever is trained to retrieve chunks useful for generating a future chunk according to a reference LM.
(âreaderâ), and the right side the retriever. The reader is split into two, where the bottom nlayers 2 layers (lower decoder) are standard Transformer decoder layers that take w chunks as input and out- put representations that will be used by the retriever and the top decoder layers.
# The top nlayers 2
The top Tiers layers (upper decoder) use Chun- ked Cross-Attention (CCA) to fuse information from the top-/ neighbor chunks retrieved by the retriever back into the LM. We use standard CCA layers from RETRO (Borgeaud et al., 2022), where for each one of the ¢ chunks, queries are the m to- ken representations of that chunk output by causal attention, and the keys and values are the token representations for the top-/X neighbor chunks out- put by the retriever. For full details of CCA, see Borgeaud et al. (2022).
Next, we describe the retriever component, along with a neighbor gating mechanism for modulating the effect of retrieved representations.
# 3.1 Model Architecture
Fig. 2 illustrates our architecture, where the input has 45 input tokens divided into 9 chunks, and causal self-attention is applied over w = 3 chunks (15 tokens). The left side depicts the decoder stack
Retriever The retriever takes as input the repre- sentations output by the lower decoder and pro- duces a similarity score for every pair of chunks. Given a query chunk c4, the query-based score for each retrievable chunk c is sq(c) = (Wae4, Wxe), where Wo, Wx ⬠R¢*4 are learned linear projec-
tions, and cq and c are chunk representations.
For an m-token long chunk c, we compute its representation c by applying bidirectional attention over the chunk tokens, followed by mean-pooling across the time dimension. This maintains causal- ity, as these representations are only used during the prediction of the next chunk.
Once scores for all pairs of chunks are com- puted, the retrieved neighbor chunks R(c%), for each query chunk, c4, consists of its top-K highest- scoring retrievable chunks. Then, for each chunk cj ⬠R(c4), we concatenate the representations of the succeeding chunk c;,, to provide addi- tional context, and the final representation for all neighbors of all chunks is given by a tensor Ce Rox K x2mxd 1
Overall (and unlike methods like TRIME and kNN-LM), the retriever is an integral part of the LM, where the lower decoder computes representa- tions for the retriever (which we dub self-retrieval), and the upper decoder consumes representations produced by the retriever.
Neighbor gating We add a neighbor gating mechanism to softly select neighbor representa- tions that are useful for fusing into the upper de- coder. Let Ci,k â R2mÃd be the token represen- tations for the kâth neighbor of chunk ci. We mean-pool across the time dimension to obtain a vector Ëci,k for each neighbor chunk. Then, we enrich the neighbor representation of each chunk by applying causal attention â a neighbor chunk representations Ëci,k attends to chunks that precede it or to neighbors of the same chunk ci that are ranked higher. Finally, for each chunk we obtain the gated retrieved representation by multiplying the augmented representations by a gating score: i,k = max{η, Ï( wngËci,k Cg )} · Ci,k where wng is a learned parameter vector, η is a small value meant to maintain gradient ï¬ow,2 and Ï is the sigmoid ac- tivation. Finally, in the upper decoder, when CCA is performed, the keys and values are Cg
3.2 Supervision Signal For each query chunk cq = ci, we want to identify neighbor chunks that will be helpful for generating ct = ci+1, and use those neighbor chunks as su- pervision signal for the retriever. Similar to Rubin
1Similar to RETRO, token representations of retrieved chunks are also augmented through cross-attention over tokens of the query chunk, cq.
2We set η = 0.1 in all of our experiments.
et al. (2022), we can exploit the fact that we are producing training data and use information from ct itself to produce such a score. Unlike Zhong et al. (2022), who use lexical clues alone, we will use an independent scoring LM for this purpose.
Scoring every chunk w.r.t to all preceding chunks is quadratic in the number of chunks in a document, and thus computationally difï¬cult. Thus, we use a simple, BM25 unsupervised retriever (Robert- son and Zaragoza, 2009) that takes as input the concatenation of the chunks (cq, ct) = (ci, ci+1) and returns a set of candidates neighbor chunks, ¯R â C(cq), which have high lexical overlap with the current and subsequent chunk. This retriever has access to the tokens that need to be generated by the LM, which is allowed at training time.
Let Ëg be an independently-trained LM, and let ¯cj be the concatenation (cj, cj+1). We compute a score st (¯cj) that reï¬ects whether the information in ¯cj is more useful for decoding ct compared to chunks that are close to cq. Speciï¬cally, the target- based score for a candidate chunk is
Probg (ct | ¢j, e)41, ¢4) st (Gj) = lo Probg (ct | c;â2, ¢.-1, 1)
This score is positive when information in ¯cj is more useful for decoding ct than information in the preceding two chunks (ciâ2, ciâ1).
We apply this scoring function to all chunks, and deï¬ne for each query chunk cq the set of positive chunks Rq pos, which includes candidates for which st(·) > 0. This should result in helpful chunks, as each candidate chunk is at least as good as the local context. With this ordering at our disposal, we can apply standard retrieval training methods.
# 3.3 Training
To train the parameters of the retriever compo- nent, we adapt the widely-used LambdaRank loss (Burges et al., 2006). The loss for each query chunk cq (w.r.t its retrievable chunks) is:
Lret(cq) =
λjl max (0, Ï â (sq(cl) â sq(cj))) {j,l:¯clâRq pos,st(¯cl)>st(¯cj )}
where Ï is a margin hyper-parameter, and λjl is the LambdaRank scaling that considers the relative ranking of each candidate. This loss is non-zero when for some pair of candidates, the target-based score disagrees (with margin Ï ) with the ranking of
the query-based score for candidates in Rq pos. Opti- mizing this loss function allows RPT to distinguish between relevant and irrelevant chunks. Our ï¬nal loss is LLM + αretLret, where LLM is the standard LM loss and αret is the retrieval loss coefï¬cient, increased linearly in the ï¬rst 100K steps. We also increase Ï linearly during training.
# Important Implementation Details
Scheduled sampling To reduce train-test mis- match, we apply scheduled sampling (Bengio et al., 2015) during training. Namely, After computing the top-K neighbor chunks, we use these neighbors with probability 1 â pss, and with probability pss the top-K scoring candidates from Rq pos as input for CCA. We anneal pss from 1 to 0 during the ï¬rst 90% of training with a cosine schedule. This al- lows the model to gradually learn to use its own predictions. We report the effect of this in §5.3.
Sliding window attention at training and infer- ence time As described in §3, the decoder takes as input w chunks, each with m tokens as input, and applies causal attention over them. In practice, to give the ï¬rst tokens access to past tokens, we use the sliding-window attention mechanism (Dai et al., 2019; Beltagy et al., 2020; Hutchins et al., 2022), where the number of tokens in a window is 2,048 and the stride is 1,024. Thus, the input to each window is 2,048 tokens and the output are the representations for the last 1,024 tokens, which use the keys and values of the previous 1,024 tokens for contextualization.
At inference time a similar procedure is applied (Dai et al., 2019), where we compute and cache the key and value representations for segments of 1,024 tokens, and then use these as context for generating or estimating the probability of the next segment. Naturally, at inference time the retriever component provides access to all tokens from the beginning of the document.
Additional details At training time we use se- quences of length L = 16,384 tokens, which are split into 4 devices, each consuming 4, 096 to- kens. As mentioned, the decoder stack takes 2, 048 tokens as input (in a sliding window approach), which contains ¢ = 32 chunks of length m = 64. We employ Rotary Positional embedding (Su et al., 2021), and train all models for 500K steps on a TPUv4-64, with an effective batch size of 2!" to- kens.
Name Tokens (Train/Test) Median Length ArXiv CodeParrot PG19 Books3 12,000 / 16 5,000 / 5 3,000 / 9 25,000 / 35 16,368 29,269 82,659 113,496
Table 1: Number of tokens (in millions) for each dataset and median document length.
For all models trained, we use the GPT-NeoX (Black et al., 2022) tokenizer, which was trained on the Pile (Gao et al., 2021a) and covers the domains we evaluate on (see §4). As our scoring language model, we use the deduplicated 1.4B parameter ver- sion of Pythia (Biderman et al., 2023), and score with it the top-20 BM25 candidates. Our model has 12 layers, hidden dimension d = 1024, and 8 atten- tion heads with a head dimension of 128. We apply CCA every 2 layers and use 2 neighbors, unless mentioned otherwise. Additional implementation details are in Appendix A.1.
io] Arxiv 5 ofl, 10 | CodeParrot 5 | 10] PG19 â | ttn, °. 10 | Books3 107 103 10* 10° 10° 107 Sequence length
Figure 3: Histograms of the distribution over document length in tokens across all datasets. The x-axis is in log scale.
# 4 Long Range LM Datasets
We evaluate RPT on four datasets, covering do- mains such as books, code, and mathematical writ- ing, which require the ability to recall informa- tion over long distances. Tab. 1 and Fig. 3 pro- vide statistics on dataset size and the distribution over document length, showing that documents are long across all datasets and in particular PG19 and Books3, where documents typically contain 105 tokens or more. We brieï¬y review the datasets.
PG19 Introduced in Rae et al. (2020), PG19 is a widely-used long-range language modeling bench- mark containing books from Project Gutenberg, and covering a wide range of literary genres, styles, and topics. We adopt the exact setup and data split from prior work (Wu et al., 2022; Hutchins et al., 2022; Mehta et al., 2023).
Books3 is a corpus of books released as part of the Pile (Gao et al., 2021a), containing a vast col- lection of literary works from different domains. To our knowledge, we are the ï¬rst to use this corpus as a long-range language modeling benchmark.
CodeParrot (Wolf et al., 2023) is a corpus of clean, nearly-deduplicated Python code from vari- ous GitHub repositories. Modeling code requires understanding patterns and contextualizing infor- mation over long distances, making it a natural candidate for testing long-range LMs. In our exper- iments, we follow the approach of Wu et al. (2022), combining ï¬les from the same repository to con- struct a corpus with longer sequences, and create a train/test split (see Tab. 1).
ArXiv is a corpus of preprint papers extracted from ArXiv. It consists of mathematical texts that require maintaining coherence and referring to pre- viously mentioned information over extended text. Prior work evaluated long-range LMs on this cor- pus (Wu et al., 2022; Hutchins et al., 2022; Mehta et al., 2023), but did not release their corpus. Thus, we use the preprocessed corpus and data splits made available by Azerbayev et al. (2023).
# 5 Experiments
We now turn to experiments for comparing RPT to prior work across our four datasets.
# 5.1 Experimental Setup
We compare to the following baselines and oracles.
Transformer-XL Our simplest baseline is a stan- dard transformer decoder stack with sliding win- dow attention. Put differently, we simply remove from RPT the retriever component and CCA lay- ers in the upper decoder. Using sliding window attention (as described in §3.4) can be viewed as a variant of Transformer-XL (Dai et al., 2019).
RETRO (Borgeaud et al., 2022) A retrieval- augmented model, where we omit the retriever component and feed the top-K neighbors retrieved
by BM253 as input to the CCA layers in the upper decoder. During training, we use the query (cq, ct), since we have access to the target chunk. During inference, we use cq.
RPT-Lex A version of RPT, where the training signal is not obtained from the scoring LM, but from lexical information only, similar to TRIME (Zhong et al., 2022). Explicitly, the set of posi- tive chunks Rq pos for a chunk cq contains the top- 20 chunks that have the highest BM25 score with (cq, ct).
RPT-Sem Our full model described in §3.
Block-Recurrent Transformer We use the ofï¬- cial training implementation4 of Block-Recurrent Transformer (Hutchins et al., 2022) with the default conï¬guration.
Memorizing Transformer We use the ofï¬cial implementation4 of Memorizing Transformers (Wu et al., 2022), with the default conï¬guration and a memory size of 32K tokens.
Oracles For each test chunk, we can exhaustively search and use at test time the best possible neigh- bors for a model according to the scoring LM. This provides an upper bound for the performance of RPT-Lex and RPT-Sem, as they are trained to imi- tate the ranking produced by this oracle.
Metrics We use perplexity to evaluate the per- formance of models. In addition, we use the tar- get score st(·) from the scoring LM to compute for each chunk a gold ranking over all previous chunks, and to label chunks as positive/negative iff their target score is positive/negative, respec- tively. With this information, we can evaluate Precision@k, which is the fraction of top-k chunks according to the query-based score that are posi- tive, and Recall@k, which is the fraction of posi- tive chunks that are in the top-k chunks according to the query-based score. We also use the gold ranking to compute NDCG@k, which is a standard retrieval metric (Järvelin and Kekäläinen, 2002).
# 5.2 Results
Table 2 shows our main results, which show that RPT-Sem is comparable or better than all
3Concurrent work (Doostmohammadi et al., 2023) showed that training RETRO using BM25 substantially outperforms dense retrieval methods.
4https://github.com/google-research/ meliad.
Model ArXiv Code PG19 Books3 Params TRANSFORMER-XL (OURS) RETRO W. BM25 (OURS) RPT-LEX RPT-SEM W. 3 NEIGHBOURS W. 4 NEIGHBOURS 3.11 2.94 2.92 2.77 2.75 2.74 2.30 2.17 2.23 2.17 2.16 2.15 11.48 11.44 11.59 10.96 10.92 10.93 15.00 14.60 14.32 13.91 13.87 13.91 202M 236M 242M 242M 242M 242M MEMORIZING TRANSFORMER BLOCK-RECURRENT TRANSFORMER 2.92 2.89 2.18 2.73 10.97 10.95 14.40 14.64 212M 212M RPT-LEX W. ORACLE RPT-SEM W. ORACLE 2.80 2.69 2.12 2.10 10.88 10.26 13.30 12.74 242M 242M
Table 2: Test set perplexity for all datasets. Unless speciï¬ed, we use 2 neighbours during inference.
other baselines in all cases. Using a ï¬xed re- triever (RETRO) categorically improves perfor- mance compared to Transformer-XL; RPT-Lex leads to gains in Books3 but to losses in PG19 compared to RETRO, and RPT-Sem outperforms Transformer-XL, RETRO, and RPT-Lex on ArXiv, PG19, and Books3, and has performance compara- ble to RETRO on CodeParrot.
Compared to Block-Recurrent Transformers and Memorizing transformers, which do not use CCA, performance is again either comparable or bet- ter, with notable gains on ArXiv, CodeParrot, and Books3.
CCA allows one to dynamically increase the number of neighbors at inference time. When us- ing 3 or 4 neighbors (instead of 2), performance improves, which allows one to trade compute for performance.
Model ArXiv Code PG19 Books3 RPT-SEM - ONLY TEACHER FORCING - NO TEACHER FORCING - NO NEIGHBOR GATING 2.77 2.91 2.95 2.92 2.17 2.22 2.26 2.20 10.96 11.54 13.10 11.50 13.91 14.66 14.40 18.68
Table 4: Results of our ablation study on RPT-Sem.
Distribution of improvements across chunks We compute the improvement in perplexity for all chunks when comparing to Transformer-XL and plot the distribution of improvements for RETRO, RPT-Lex, and RPT-Sem in Fig. 4. Clearly, RPT- Sem has a heavier right tail in all cases except for CodeParrot, further illustrating its advantage over the other baselines. We further analyze why RETRO with BM25 performs well on CodeParrot in §5.4.
# 5.3 Ablations
Last, oracle models consistently achieve the best perplexity across all datasets, improving from 2.74â2.69 on ArXiv, 2.15â2.10 on CodePar- rot, 10.92â10.26 on PG19, and 13.87â12.74 for Books3. This shows that improving the training of the retriever can further improve performance.
Dataset Recall@10 BM25 RPT-L RPT-S BM25 RPT-L RPT-S BM25 RPT-L RPT-S Precision@2 nDCG@20 ArXiv Code Books3 PG19 27% 26% 29% 26% 23% 19% 22% 22% 32% 55% 54% 34% 53% 52% 26% 55% 50% 28% 55% 55% 58% 24% 24% 56% 25% 23% 58% 18% 16% 61% 18% 18% 30% 30% 22% 23%
Table 3: Test retrieval metrics across datasets.
Retrieval metrics Table 3 presents the retrieval metrics w.r.t oracle positive chunks. Again, re- trieval with RPT-Sem outperforms both RPT-Lex and BM25 in all cases. This shows the importance of training a retriever, and moreover that using semantic supervision leads to better retrieval com- pared to a lexical signal only.
Tab. 4 shows the result of an ablation study on RPT-Sem over all datasets.
Only Teacher Forcing We force the model to attend to gold neighbors according to the scoring LM, without annealing pss during training. This leads to a performance drop across all datasets, and in particular for PG19 and Books3.
No Teacher Forcing Here, we do the opposite and ï¬x pss = 0 throughout training, i.e., we only use the predicted neighbors and not gold ones. This can lead to undertraining of the CCA layers since they are exposed to low-quality neighbors at the beginning of training and results drop even further compared to Only Teacher Forcing.
No neighbor gating We disable neighbor gating which controls the ï¬ow of information from neigh- bor chunks and analyze the effect on model perfor- mance. We observe a performance reduction across all datasets, notably on Books3, where perplexity
Arxiv CodeParrat 25 20 eis 10 os 00 Pci9 Books3 lm RPT-Sem lm APT-Lex lm RETRO+BM25 og 09 10 11 12 13 14 15 1608 09 10 11 12 13 14 15 1608 09 10 11 12 13 14 15 1608 09 10 11 12 13 14 15 16
og 09 10 11 12 13 14 15 1608 09 10 11 12 13 14 15 1608 09 10 11 12 13 14 15 1608 09 10 11 12 13 14 15 16
Figure 4: Relative perplexity improvement across different retrievers. All retrievers exhibit positive skew with a heavy right tail, and RPT-Sem leads to the most pronounced improvements.
increases by 4.5 points. Since neighbor gating is independent of the retriever used, we show results when adding neighbor gating to RETRO in §A.4., which shows mixed results.
# 5.4 Analysis
ArXiv Books3 Token overlap rk PON oa fos) °o x x e B B xe Be N CodeParrot PG19 Token overlap Pr BP NON oa fo} oO N xe ° . B B x e x 12 @RPT-Sem e@RPT-Lex @RETRO+BM25 @Query # Target
0.20 0.15 0.10 0.05 Dataset Books3 ArXiv CodeParrot PG19 Average maximum target score across chunks
12.3 4 5 6 7 8 9 101112 131415 16 17 18 19 20 Top-K element according to BM25
Figure 6: The maximal target score st(·) for the top- k chunks retrieved by BM25 averaged across chunks and for all datasets. Since the maximal target score for the top-20 chunks is much higher than for the top-2, learning to rerank the top-20 BM25 candidates can lead to substantial improvements in retrieval quality.
Figure 5: We measure the number of unique token over- lap between query/target chunks and the best retrieved neighbor.
Token overlap Fig. 5 plots the average number of tokens that overlap between the query/target chunks the best retrieved neighbor for RETRO, RPT-Lex, and RPT-Sem. RPT-Sem retrieves para- graphs with higher overlap with the target chunk compared to RPT-Lex. Naturally, BM25 retrieves chunks with the highest overlap with the query chunk. However, this does not translate to higher lexical overlap for the target chunk.
Supervision quality We train RPT-Sem using in- formation from the target scoring function st(·),
which we saw leads to model improvements. How- ever, the target scoring function only provides a reranking of the top-20 candidates according to BM25. Thus, a natural question is how much does the supervision quality improve through this rerank- ing. Figure 6 shows for every rank k the maxi- mal target score among the top-k chunks according to BM25, averaged over chunks and across our 4 datasets. Clearly, reranking the top-20 BM25 can- didates has a lot of potential, as the maximal target score is much higher for the top-20 candidates com- pared to the top-2. This hints that longer and better training of the retriever can further improve the performance of RPT-Sem.
Interestingly, our analysis sheds light on why RPT-Sem outperforms RETRO clearly on Books3 and PG19 but less so on CodeParrot. The max- imal target score for CodeParrot when k = 2 is
already quite high â around 0.1, which corresponds to more than 10% improvement in the probability of the target chunk compared to the local context. Conversely, for PG19 and Books3, the target score when k = 2 is closer to 0. This hints that lexical information alone is quite effective for CodePar- rot, potentially by retrieving function deï¬nitions, variable assignments, etc.
ArXiv Books3 ro x vo £30 . > © ° * x ox 5 20 os: £ n n x X10 fa . 0 CodeParrot PG19 x x ro vo x £ 30 > 8 . . 5 20 £ . & 3 10 . * ° ° ie) i :
@RPT-Sem @RPT-Lex @RETRO+BM25 e@lncorrect Correct mAll
Figure 7: Relative improvement with/without correct retrieval.
Subgroup analysis Figure 7 shows the average relative improvement (across chunks) of RETRO, RPT-Lex, and RPT-Sem compared to Transformer- XL, when distinguishing between cases where a âgoldâ oracle chunk was retrieved and cases where no gold chunk was retrieved.
As expected, RPT-Sem leads to improvements on all datasets, and outperforms other baselines ex- cept for RETRO on CodeParrot where performance is similar. Second, cases where a gold chunk was retrieved indeed typically lead to larger improve- ments, but we witness improvements even in cases where a gold chunk was not retrieved, which shows that the model can still beneï¬t from such retrievals.
# 6 Related Work and Discussion
Long-range language modeling A primary fo- cus in long-range language modeling has been ad- dressing the quadratic complexity of attention in order to develop more efï¬cient mechanisms for handling long texts. For instance, Transformer- XL (Dai et al., 2019) processes the input using a
segment-level mechanism while retaining a cache from previous segments. Longformer (Beltagy et al., 2020) extends this idea to accommodate even longer contexts. Sparse strategies, such as those proposed in Zaheer et al. (2020); Roy et al. (2021); Kitaev et al. (2020), attend to only a subset of tokens through clustering or hashing methods. Another approach involves compressing the input and attending over the compressed sequence (Mar- tins et al., 2022; Rae et al., 2020), or learning to ignore irrelevant tokens (Sukhbaatar et al., 2021). Recently, recurrent mechanisms have re-emerged as potential solutions (Fan et al., 2021; Hutchins et al., 2022; Mehta et al., 2023). From an analysis perspective, past work (Press et al., 2021) demon- strated that standard LM benchmarks are not ideal for measuring the long-range capabilities of mod- els. Sun et al. (2021) discuss various types of se- quences that beneï¬t from having a long context, and Rae and Razavi (2020) investigate long-range architectural choices and recommend increasing long-range capabilities in the upper layers.
Retrieval LMs Retrieval- augmented LMs have emerged as a prominent approach for efï¬ciently leveraging external knowl- edge while generating text. These models can be broadly divided into those operating at token-level granularity and those operating at sequence-level granularity. such as kNN-LM (Khandelwal et al., 2020), TRIME (Zhong et al., 2022), and SPALM (Yogatama et al., 2021), retrieve information for individual tokens. Sequence-level approaches like RAG (Lewis et al., 2020) utilize pre-trained encoder-decoder models with pre-trained retrievers for tasks like open-domain question answering. Similarly, FiD (Izacard and Grave, 2021b) employs generative encoder-decoder models that fuse evidence from multiple passages during the decoding process, closely related to the CCA mechanism (see additional discussion in App A.3). Recently, Wang et al. (2023) demonstrated the potential beneï¬ts of conducting retrieval and chunked cross-attention at each time step, compared with the original RETRO (Borgeaud et al., 2022) paper, which retrieves every m = 64 steps.
Joint retriever-reader training Joint training approaches typically concentrate on transferring information between a pre-trained reader into a pre-trained retriever. These methods commonly in-
volve updating the retriever index during the train- ing process in the context of knowledge-intensive tasks, such as open-domain question answering. For instance, REALM (Guu et al., 2020) utilizes masked language modeling as a learning signal to update the retriever. EMDR2 (Sachan et al., 2021) extends FiD by using encoder-decoder mod- els to back-propagate errors from the predicted an- swer to the retriever. Similarly, Izacard and Grave (2021a) demonstrate that it is possible to use atten- tion scores from the reader to supervise the retriever. Notably, Izacard et al. (2022) further scale up these approaches and jointly train a retriever with an encoder-decoder model, demonstrating strong few- shot learning capabilities. They also investigate various retriever updating techniques to address train-test mismatches in the retrieval process. We do not encounter the issue of index update since we compute the entire index through a forward pass.
Attention as Retrieval Several works view the attention layer as a retrieval component. Memoriz- ing Transformers (Wu et al., 2022) employ a single k-NN layer and retrieve cached keys and values without back-propagating gradients through the re- trieval operation. Similarly, Bertsch et al. (2023) demonstrate that this approach can be used with any existing pre-trained model and apply it at every attention layer for long summarization tasks. No- tably, Jiang et al. (2022) use this observation and employ a caching mechanism (Gao et al., 2021b) to enable joint end-to-end training with the super- vision of the downstream task. We view the latter as a potential way to ï¬ne-tune RPT and leave it for future work.
Retriever Pre-training Early work on retriever pre-training relied on the unsupervised Inverse Cloze Task to pre-train the retriever (Lee et al., 2019; Guu et al., 2020). It was later shown that directly using BERT (Devlin et al., 2019) with a supervised objective is sufï¬cient to get good perfor- mance on standard benchmarks (Karpukhin et al., 2020). However, this paradigm showed lacklus- ter performance on long-tail entities compared to BM25 (Amouyal et al., 2022; Sciavolino et al., 2021). Recently, unsupervised pre-training meth- ods (Gao and Callan, 2022; Ram et al., 2022; Izac- ard et al., 2021) enabled improved performance. However, these methods are initialized from a pre- trained BERT (Devlin et al., 2019) encoder model, while RPT is a retriever-reader architecture trained
from scratch that outperforms BM25 without any additional pre-training.
Supervising retrievers with LLMs EPR (Rubin et al., 2022) demonstrated that LLMs could be em- ployed to train a retriever for prompt retrieval by estimating the probability of an output given the in- put and a candidate training example as the prompt. Similar techniques were applied to open-domain question answering via re-ranking retrieval results (Sachan et al., 2022; Ram et al., 2023) and to su- pervise retrievers through perplexity distillation (Izacard et al., 2022). Recently, Shi et al. (2023) utilized this supervision method to improve the per- formance of various LLMs in a black-box fashion.
# 7 Conclusion
In this work, we present the Retrieval-Pretrained Transformer (RPT), a retrieval-augmented LM where the retriever is trained as a native component of the LM to retrieve semantically relevant chunks for future text prediction. We evaluate RPT on four long-range language modeling tasks, includ- ing books, code, and mathematical writing. We demonstrate that by seamlessly integrating the re- triever into the architecture and training process, RPT beneï¬ts from the fusion of retrieved context, improving over strong retrieval-augmented base- lines. We envision RPT will pave the way for a new generation of pretrained language models with retrieval deeply integrated throughout their archi- tecture and training process.
# Acknowledgments
This research was supported with Cloud TPUs from Googleâs TPU Research Cloud (TRC) and The Eu- ropean Research Council (ERC) under the Euro- pean Union Horizons 2020 research and innovation programme (grant ERC DELPHI 802800). Ohad would like to thank Iz Beltagy for suggesting the TRC program, and the entire TAU NLP lab and especially Guy Dar and Itay Itzhak. This work was completed in partial fulï¬llment of the Ph.D. degree of Ohad Rubin.
# References
Samuel Joseph Amouyal, Tomer Wolfson, Ohad Ru- bin, Ori Yoran, Jonathan Herzig, and Jonathan Be- rant. 2022. Qampari: An open-domain question answering benchmark for questions with many an- swers from multiple paragraphs.
Zhangir Azerbayev, tosz Piotrowski. Pre-training Dataset https://huggingface.co/datasets/ hoskinson-center/proof-pile.
Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv:2004.05150.
Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. Scheduled sampling for se- quence prediction with recurrent neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Vol- ume 1, NIPSâ15, page 1171â1179, Cambridge, MA, USA. MIT Press.
Amanda Bertsch, Uri Alon, Graham Neubig, and Matthew R. Gormley. 2023. Unlimiformer: Long- range transformers with unlimited length input.
Stella Biderman, Hailey Schoelkopf, Quentin An- thony, Herbie Bradley, Kyle OâBrien, Eric Hal- lahan, Mohammad Aï¬ah Khan, Shivanshu Puro- hit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, and Oskar van der Wal. 2023. Pythia: A suite for analyzing large language models across training and scaling.
Sid Black, Stella Biderman, Eric Hallahan, Quentin An- thony, Leo Gao, Laurence Golding, Horace He, Con- nor Leahy, Kyle McDonell, Jason Phang, Michael Pieler, USVSN Sai Prashanth, Shivanshu Purohit, Laria Reynolds, Jonathan Tow, Ben Wang, and Samuel Weinbach. 2022. GPT-NeoX-20B: An open- source autoregressive language model. In Proceed- ings of the ACL Workshop on Challenges & Perspec- tives in Creating Large Language Models.
Sebastian Borgeaud, Arthur Mensch, Jordan Hoff- mann, Trevor Cai, Eliza Rutherford, Katie Millican, George van den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, Diego de Las Casas, Aurelia Guy, Jacob Menick, Roman Ring, Tom Hen- nigan, Saffron Huang, Loren Maggiore, Chris Jones, Albin Cassirer, Andy Brock, Michela Paganini, Ge- offrey Irving, Oriol Vinyals, Simon Osindero, Karen Simonyan, Jack W. Rae, Erich Elsen, and Laurent Sifre. 2022. Improving language models by retriev- ing from trillions of tokens. In International Confer- ence on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 2206â2240. PMLR.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin
Chess, Jack Clark, Christopher Berner, Sam Mc- Candlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learn- ers.
Christopher Burges, Robert Ragno, and Quoc Le. 2006. Learning to rank with nonsmooth cost functions. In Advances in Neural Information Processing Systems, volume 19. MIT Press.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vin- odkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghe- mawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fe- dus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankara- narayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Bren- nan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. Palm: Scaling language modeling with pathways.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Car- bonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond In Proceedings of the 57th a ï¬xed-length context. Annual Meeting of the Association for Computa- tional Linguistics, pages 2978â2988, Florence, Italy. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), Min- neapolis, Minnesota.
Ehsan Doostmohammadi, Tobias Norlund, Marco Kuhlmann, and Richard Johansson. 2023. Surface- based retrieval retrieval- augmented language models.
Angela Fan, Thibaut Lavril, Edouard Grave, Armand Joulin, and Sainbayar Sukhbaatar. 2021. Address- ing some limitations of transformers with feedback memory.
Leo Gao, Stella Biderman, Sid Black, Laurence Gold- ing, Travis Hoppe, Charles Foster, Jason Phang, Ho- race He, Anish Thite, Noa Nabeshima, et al. 2021a. The pile: An 800gb dataset of diverse text for lan- guage modeling. ArXiv preprint, abs/2101.00027.
Luyu Gao and Jamie Callan. 2022. Unsupervised cor- pus aware language model pre-training for dense passage retrieval. Association for Computational Linguistics.
Luyu Gao, Yunyi Zhang, Jiawei Han, and Jamie Callan. 2021b. Scaling deep contrastive learning batch size under memory limited setup. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021). Association for Computational Linguistics.
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasu- pat, and Ming-Wei Chang. 2020. Realm: Retrieval- In Pro- augmented language model pre-training. ceedings of the 37th International Conference on Machine Learning, ICMLâ20. JMLR.org.
Yangsibo Huang, Daogao Liu, Zexuan Zhong, Weijia Shi, and Yin Tat Lee. 2023. knn-adapter: Efï¬cient domain adaptation for black-box language models.
DeLesley Hutchins, Imanol Schlag, Yuhuai Wu, Ethan Block- In Advances in Neural In- Dyer, and Behnam Neyshabur. 2022. recurrent transformers. formation Processing Systems.
Gautier Izacard, Mathilde Caron, Lucas Hosseini, Se- bastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Unsupervised dense in- formation retrieval with contrastive learning. Trans. Mach. Learn. Res., 2022.
Gautier Izacard and Edouard Grave. 2021a. Distilling knowledge from reader to retriever for question an- swering. In International Conference on Learning Representations.
Gautier Izacard and Edouard Grave. 2021b. Lever- aging passage retrieval with generative models for In Proceedings open domain question answering. of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 874â880, Online. Association for Computational Linguistics.
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi- Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. 2022. Atlas: Few-shot learning with retrieval augmented language models.
Kalervo Järvelin and Jaana Kekäläinen. 2002. Cumu- lated gain-based evaluation of ir techniques. ACM Transactions on Information Systems.
Zhengbao Jiang, Luyu Gao, Zhiruo Wang, Jun Araki, Haibo Ding, Jamie Callan, and Graham Neubig. 2022. Retrieval as attention: End-to-end learning of retrieval and reading within a single transformer. In Proceedings of the 2022 Conference on Empiri- cal Methods in Natural Language Processing, pages 2336â2349, Abu Dhabi, United Arab Emirates. As- sociation for Computational Linguistics.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 6769â 6781, Online. Association for Computational Lin- guistics.
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generaliza- tion through memorization: Nearest neighbor lan- guage models. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR.
Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efï¬cient transformer. In Inter- national Conference on Learning Representations.
Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics. Association for Computational Linguistics.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich Küttler, Mike Lewis, Wen-tau Yih, Tim Rock- täschel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-augmented generation for knowledge- In Advances in Neural Infor- intensive nlp tasks. mation Processing Systems, volume 33, pages 9459â 9474. Curran Associates, Inc.
Pedro Henrique Martins, Zita Marinho, and Andre Mar- tins. 2022. â-former: Inï¬nite memory transformer. In Proceedings of the 60th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), Dublin, Ireland. Association for Com- putational Linguistics.
Harsh Mehta, Ankit Gupta, Ashok Cutkosky, and Behnam Neyshabur. 2023. Long range language In The Eleventh modeling via gated state spaces. International Conference on Learning Representa- tions.
Oï¬r Press, Noah A. Smith, and Mike Lewis. 2021. Better language modeling using Shortformer: In Proceedings of the 59th Annual shorter inputs. Meeting of the Association for Computational Lin- guistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers).
Oï¬r Press and Lior Wolf. 2017. Using the output em- In Proceed- bedding to improve language models. ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics:
Volume 2, Short Papers, pages 157â163, Valencia, Spain. Association for Computational Linguistics.
Jack Rae and Ali Razavi. 2020. Do transformers need In Proceedings of the deep long-range memory? 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 7524â7529, Online. As- sociation for Computational Linguistics.
Jack W. Rae, Anna Potapenko, Siddhant M. Jayaku- mar, Chloe Hillier, and Timothy P. Lillicrap. 2020. Compressive transformers for long-range sequence In International Conference on Learn- modelling. ing Representations.
Ori Ram, Yoav Levine, Itay Dalmedigos, Dor Muhlgay, Amnon Shashua, Kevin Leyton-Brown, and Yoav Shoham. 2023. In-context retrieval-augmented lan- guage models.
Ori Ram, Gal Shachaf, Omer Levy, Jonathan Berant, and Amir Globerson. 2022. Learning to retrieve pas- sages without supervision. Association for Compu- tational Linguistics.
Stephen Robertson and Hugo Zaragoza. 2009. The probabilistic relevance framework: Bm25 and be- yond. Foundations and Trends in Information Re- trieval, 3:333â389.
Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. 2021. Efï¬cient content-based sparse attention with routing transformers. Transac- tions of the Association for Computational Linguis- tics.
Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning to retrieve prompts for in-context learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, pages 2655â2671, Seattle, United States. Association for Computational Linguistics.
Devendra Singh Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen tau Yih, Joëlle Pineau, and Luke Zettlemoyer. 2022. Improving passage re- trieval with zero-shot question generation. In Con- ference on Empirical Methods in Natural Language Processing.
Devendra Singh Sachan, Siva Reddy, William L. Hamilton, Chris Dyer, and Dani Yogatama. 2021. End-to-end training of multi-document reader and retriever for open-domain question answering. In Advances in Neural Information Processing Sys- tems.
Christopher Sciavolino, Zexuan Zhong, Jinhyuk Lee, and Danqi Chen. 2021. Simple entity-centric ques- tions challenge dense retrievers. In Proceedings of the 2021 Conference on Empirical Methods in Natu- ral Language Processing. Association for Computa- tional Linguistics.
Weijia Shi, Sewon Min, Michihiro Yasunaga, Min- joon Seo, Rich James, Mike Lewis, Luke Zettle- moyer, and Wen tau Yih. 2023. Replug: Retrieval- augmented black-box language models.
Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yun- feng Liu. 2021. Roformer: Enhanced transformer with rotary position embedding.
Sainbayar Sukhbaatar, Da JU, Spencer Poff, Stephen Roller, Arthur Szlam, Jason E Weston, and Angela Fan. 2021. Not all memories are created equal: Learning to expire.
Simeng Sun, Kalpesh Krishna, Andrew Mattarella- Micke, and Mohit Iyyer. 2021. Do long-range language models actually use long-range context? ArXiv, abs/2109.09115.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. Llama: Open and efï¬cient foundation language mod- els.
Boxin Wang, Wei Ping, Peng Xu, Lawrence McAfee, Zihan Liu, Mohammad Shoeybi, Yi Dong, Oleksii Kuchaiev, Bo Li, Chaowei Xiao, Anima Anandku- mar, and Bryan Catanzaro. 2023. Shall we pretrain autoregressive language models with retrieval? a comprehensive study.
Thomas Wolf, von Werra, 2023. https://github.com/huggingface/ blog/blob/main/codeparrot.md version=codeparrot/codeparrot-train-v2-near- dedup.
Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing trans- formers. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Dani Yogatama, Cyprien de Masson dâAutume, and Lingpeng Kong. 2021. Adaptive semiparametric language models. Transactions of the Association for Computational Linguistics, 9:362â373.
Manzil Zaheer, Guru Guruganesh, Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. 2020. Big bird: Transformers for longer sequences. In Proceedings of the 34th Inter- national Conference on Neural Information Process- ing Systems.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher De- wan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mi- haylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu
Wang, and Luke Zettlemoyer. 2022. Opt: Open pre-trained transformer language models. ArXiv, abs/2205.01068.
Zexuan Zhong, Tao Lei, and Danqi Chen. 2022. Train- ing language models with memory augmentation. In Proceedings of the 2022 Conference on Empiri- cal Methods in Natural Language Processing, pages 5657â5673, Abu Dhabi, United Arab Emirates. As- sociation for Computational Linguistics.
Juntang Zhuang, Tommy Tang, Yifan Ding, Sekhar Tatikonda, Nicha Dvornek, Xenophon Papademetris, and James Duncan. 2020. Adabelief optimizer: Adapting stepsizes by the belief in observed gradi- ents. Conference on Neural Information Processing Systems.
# A Appendix
# A.1 Additional Implementation Details
All models are implemented in JAX, we use a dropout rate of 0.05, weight decay of 1e-8, Co- sine decay to 0.1 of the maximum learning rate, global gradient norm clipping of 1, and tied input embedding (Press and Wolf, 2017). For our op- timizer we used AdaBelief (Zhuang et al., 2020), which is a version of Adam (Kingma and Ba, 2015) that instead of the accumulating squared gradients, accumulates the squared difference between the gradient and the momentum. In initial experiments, we found AdaBelief to increase stability. Simi- lar to Block-Recurrent we found that lowering the learning rate was necessary for convergence while training on Code, so for CodeParrot, we lower the learning rate. For each dataset, we perform a grid search w.r.t Ï , and set Ï = 128 for Books3, Ï = 4 for PG19, Ï = 2 for CodeParrot, and Ï = 8 for ArXiv. We set αret = 1e â 9 for all datasets. Our base learning rate is 5e â 3, and besides what is mentioned above, we do not tune other hyperpa- rameters. We use the validation set to choose hy- perparameters.
# A.2 Scoring LM
We use the deduplicated 1.4B parameter version of the Pythia (Biderman et al., 2023) LM. We also performed early experiments with the T5 tokenizer and T5-XL 1.1, but since it was not trained on code or latex, Pythia 1.4B was preferable, since it was trained on the Pile.
# A.3 Comparing to FiD
RPT shares similarities with Fusion-in-Decoder (FiD) (Izacard and Grave, 2021b). Both RPT and
Model ArXiv Code PG19 Books3 RETRO W. BM25 (OURS) W. GATING 2.94 2.97 2.17 2.21 11.44 11.84 14.60 13.92 RPT-SEM 2.77 2.17 10.96 13.91
Table 5: Results of our ablation study w. neighbor gat- ing.
FiD employ cross-attention mechanisms to inte- grate the retrieved context within their models. In FiD, an initial retrieval is conducted, followed by encoding the retrieved neighbors separately, and ï¬nally integrating them into the model using cross- attention in the decoder. In RPT, the decoder com- putes chunk embeddings and performs native re- trieval, and then chunked cross-attention is applied to fuse the retrieved context with the modelâs pre- dictions. RPT also performs repeated retrieval at the chunk level throughout the generation process, rather than retrieving only once based on the initial prompt. This enables RPT to continually adapt and incorporate relevant information from prior chunks to generate subsequent tokens more effectively. Furthermore, RPT is trained with retrieval being an integral part of the model during the entire pre- training phase, in contrast with FiD which plugs in retrieval components to solve speciï¬c downstream tasks. We view RPT as more suitable for long-text generation tasks.
# A.4 RETRO with Neighbor Gating
Neighbor gating is a mechanism that can be ap- plied to any retrieval-augmented LM, whether the retriever is trained or not. In Tab. 5, we show results of RETRO when adding neighbor gating. Results improve substantially on Books3, but deteriorate on PG19, and are roughly equivalent for ArXiv and CodeParrot. | {
"id": "2004.05150"
} |
2306.13304 | ToolQA: A Dataset for LLM Question Answering with External Tools | Large Language Models (LLMs) have demonstrated impressive performance in
various NLP tasks, but they still suffer from challenges such as hallucination
and weak numerical reasoning. To overcome these challenges, external tools can
be used to enhance LLMs' question-answering abilities. However, current
evaluation methods do not distinguish between questions that can be answered
using LLMs' internal knowledge and those that require external information
through tool use. To address this issue, we introduce a new dataset called
ToolQA, which is designed to faithfully evaluate LLMs' ability to use external
tools for question answering. Our development of ToolQA involved a scalable,
automated process for dataset curation, along with 13 specialized tools
designed for interaction with external knowledge in order to answer questions.
Importantly, we strive to minimize the overlap between our benchmark data and
LLMs' pre-training data, enabling a more precise evaluation of LLMs' tool-use
reasoning abilities. We conducted an in-depth diagnosis of existing tool-use
LLMs to highlight their strengths, weaknesses, and potential improvements. Our
findings set a new benchmark for evaluating LLMs and suggest new directions for
future advancements. Our data and code are freely available to the broader
scientific community on GitHub. | http://arxiv.org/pdf/2306.13304 | Yuchen Zhuang, Yue Yu, Kuan Wang, Haotian Sun, Chao Zhang | cs.CL, cs.AI | null | null | cs.CL | 20230623 | 20230623 | 3 2 0 2
n u J 3 2 ] L C . s c [
1 v 4 0 3 3 1 . 6 0 3 2 : v i X r a
# ToolQA: A Dataset for LLM Question Answering with External Tools
Yuchen Zhuangâ, Yue Yuâ, Kuan Wangâ, Haotian Sun, Chao Zhang College of Computing, Georgia Institute of Technology, Atlanta GA {yczhuang, yueyu, kuanwang, haotian.sun, chaozhang}@gatech.edu
# Abstract
Large Language Models (LLMs) have demonstrated impressive performance in various NLP tasks, but they still suffer from challenges such as hallucination and weak numerical reasoning. To overcome these challenges, external tools can be used to enhance LLMsâ question-answering abilities. However, current evaluation methods do not distinguish between questions that can be answered using LLMsâ internal knowledge and those that require external information through tool use. To address this issue, we introduce a new dataset called ToolQA, which is designed to faithfully evaluate LLMsâ ability to use external tools for question answering. Our development of ToolQA involved a scalable, automated process for dataset curation, along with 13 specialized tools designed for interaction with external knowledge in order to answer questions. Importantly, we strive to minimize the overlap between our benchmark data and LLMsâ pre-training data, enabling a more precise evaluation of LLMsâ tool-use reasoning abilities. We conducted an in-depth diagnosis of existing tool-use LLMs to highlight their strengths, weaknesses, and potential improvements. Our findings set a new benchmark for evaluating LLMs and suggest new directions for future advancements. Our data and code are freely available for the broader scientific community on GitHub 2.
# Introduction
Large Language Models (LLMs) have demonstrated superior performance in a myriad of NLP tasks [3, 7, 37, 36, 47, 54]. These models have captured vast amounts of knowledge from enormous and diverse corpora during pre-training. After instruction fine-tuning [8, 38, 1], they have demonstrated impressive capabilities in information-seeking question answering [57, 23]. Despite their remarkable performance, LLMs face several challenges. For example, they are susceptible to hallucinationsâ generating plausible yet ungrounded informationâwhich can mislead users and affect content integrity [58, 17, 4]. Additionally, they exhibit weaknesses in numerical reasoning, an essential skill in numerous real-life applications [12, 31, 35, 25, 43, 11]. These limitations highlight the need for techniques that can enhance LLMsâ question-answering abilities.
Recent research has shown that these issues can be mitigated by augmenting LLMs with external tools, such as retrieval augmentation [50, 15], math tools [48, 66, 28], and code interpreters [11, 55]. For example, a Wolfram math plugin can enhance numerical reasoning [60], and a verified database can mitigate hallucinations by providing up-to-date fact-checked knowledge [42]. However, existing evaluation methodologies struggle to distinguish whether the model is simply recalling pre-trained information or truly utilizing external tools for problem-solving [32]. This challenge arises, in part, because the external data used for evaluation may have already been exposed to LLMs during the pre-training phase [45]. This exposure can lead to a biased evaluation of LLMsâ tool-use abilities, as the models could just use their ingrained knowledge and their reasoning abilities, bypassing the use of external tools. As a result, these evaluations cannot accurately reflect the true competency of the
âThese authors contributed equally to this work. 2https://github.com/night-chen/ToolQA
Preprint. Under review.
Corpus - oe 7 Kobe Bryant was drafted by | Question: What team did Kobe Bryant start his NBA career with? te 7a onaeam the Charlotte Hornets with âsthethrioeat dan Retrieve the 13th pick of the 1996 ate td neal da oe nee! > draft, but his draft rights âas were immediately traded to the Los Angeles Lakers... | [ross Reasoning with retrieval Directly input ( Using tools or only memorizing?) GS ââ as * G (GS) . y y LLM with implicit knowledge Los Angeles Lake Los Angeles Lakes
Figure 1: Pre-trained on vast range of corpus, LLMs possess extensive knowledge, which may overlap with evaluation data. This overlap poses a significant challenge to current evaluation methods, as it becomes difficult to discern whether the model is merely recalling pre-trained information or genuinely employing external tools for problem-solving.
models. We need a fair and explicit way to check if LLMs are really good at problem-solving with tools or if they are just using their memorized information.
To fill this gap, we introduce ToolQA, a question answering (QA) benchmark to evaluate LLMsâ ability in using external tools for answering questions. ToolQA comprises data from 8 domains and defines 13 types of tools to acquire information from external reference corpora. Each instance in ToolQA consists of a question, an answer, reference corpora, and a list of available tools. ToolQA is unique in that all its questions can be answered only by using appropriate tools to obtain information from the reference corpus. This minimizes the possibility of LLMs answering questions by merely recalling their internal knowledge, and allows for faithfully evaluating LLMsâ abilities in using tools.
ToolQA is curated with an automated three-phase process: (1) The first phase, Reference Data Collection, involves gathering various types of public corpora including text, tables, and graphs from different domains. These corpora have no overlap with the LLM pre-training data and will serve as reference corpora for tool-based question answering. (2) The second phase is Human-guided Question Generation with LLMs. In this phase, we generate questions that can only be answered by using tools over the reference corpora. Our approach is a template-based question generation process, which includes human-guided template generation, template validation, and question instantiation with tool attributes. (3) The third phase is Programmatic Answer Generation. This phase produces accurate answers for the generated questions. To ensure answer correctness, we implement operators corresponding to the tools and obtain answers from the reference corpora programmatically. Our three-phase procedure ensures that we generate questions that can only be answered using external knowledge, along with their precise answers. Additionally, the process is highly efficient and requires minimal human labeling efforts.
We conducted experiments using both standard LLMs and tool-augmented LLMs to answer questions in ToolQA. Our findings indicate that ChatGPT and Chain-of-thoughts prompting [57], which rely solely on their internal knowledge, have low success rates of approximately 5% for easy questions and 2% for hard questions. In contrast, tool-augmented LLMs such as Chameleon [28] and ReAct [66] perform better by leveraging external tools. For easy questions, the best performance achieved by tool-augmented LLMs is 43.15%, while for hard questions, the best performance drops to 8.2%. Our results and error analysis demonstrate that ToolQA is a challenging benchmark for existing tool-augmented LLM methods, especially for its hard questions that require more complex reasoning about tool composition.
# 2 Related Work
# 2.1 Knowledge-Augmented LLMs
Several prior works aim to enhance LLMs with explicit external knowledge. Specifically, one line of research focus on retrieval-augmented language models [50, 2, 15, 24, 27, 70, 30, 63], where they use sparse [46] or dense retrieval [20, 14] to extract relevant knowledge from the corpus. These works mainly focus on leveraging free text, without considering multiple types of tools for task solving. On the other hand, Program-of-Thought [5], PAL [11], MathPrompt [13], and Code4Struct [55]
2
rivate/Commercial Data <â. * General Knowledge Be. . * Out-Dated Information ® Professional Abilities <â Internal + Publicly Available Data ~t External Knowledge Most Recent Data «âââââââ Knowledge Question â> Y External Knowledge â» Question â> (b) Human-Guided Question Generation (c) Programmatic Answer Generation Data 'Q: Did..{Origin} to {Dest} on {Date}.diverted? Oo â (S LAX SFO 10/15/22 Oo â Question ITH ATL 81/09/22 Templates ' CLT MDW 05/25/22 Flight Data Question Templates: ; + Did the flight from {Origin} to {Dest} on {Date} | A: def question_gen(table_row): get cancelled or diverted? (External Knowledge) / Origin = table_row["Origin"] + What the flight-dist forthe flight_f Dest = table _row["Dest"] {Origin} toâ{Dest}on_{Date}? (Internal Knowledge) â | FlightDate = table_row["FlightDate"] + Which_productâonâ{FlightNumber} hasâthe highest â price?(Not Mentioned) ... ... : pecurnfacestiontiensten
Figure 2: ToolQA, aiming to faithfully evaluate LLMsâ abilities to use external tools, curates data through three phases: (a) Reference Data Collection; (b) Human-Guided Question Generation; and (c) Programmatic Answer Generation.
apply code-based tools to enhance LLMsâ abilities in question answering with a focus on tabular and math-related tasks. Several additional works [48, 28, 49] expand the scope of tool utilization by incorporating different types of basic tools (e.g. calculator, calendar, machine translation) to solve complex reasoning tasks. ART [39], ReAct [66], and Reflexion [51] leverage large language models (LLMs) to auto-generate intermediate reasoning steps as well as actions, thereby improving interpretability and problem-solving abilities in diverse decision-making tasks. In addition, several works have extended this line of learning paradigm to other modalities [64, 61] and other domains [18]. A detailed comparison between existing tool-use LLMs can be found in Appendix A.
# 2.2 Benchmarks on Tool-Augmented LLMs
Earlier tool-augmented LLMs primarily assess single tool usage based on downstream task perfor- mance across existing benchmarks. For example, there are works that study how text retrievers augment LLMsâ performance on open-domain question-answering [19, 65], fact-checking [53], and timely information benchmarks [6, 21, 68, 10]. Besides, the mathematical reasoning abilities of exter- nal calculators and Python interpreters are evaluated using computation-intensive QA datasets [9, 29]. However, these evaluation benchmarks may not faithfully reflect the extent to which models leverage external tools, as some questions could still be correctly answered solely using the internal knowl- edge of the LLMs. ToolQA attempts to mitigate these issues by selecting data from out-of-scope sources that have not been memorized by LLMs. Concurrent with our work, there are several recent benchmarks for evaluating LLMsâ ability in using multiple tools for solving challenging tasks, in- cluding API-Bank [26], APIBench [41], and ToolBench [44, 62]. They mainly focus on constructing high-quality tool chains for LLM fine-tuning and evaluating API call trace accuracy against a fixed ground truth trace. In contrast, ToolQA is unique in that it focuses on the open-ended use of tools for question-answering, rather than benchmarking the intermediate process of tool use. Specifically, ToolQA creates tool-based question-answer pairs and assesses whether LLMs can arrive at the correct answer, regardless of the tool chains used.
# 3 ToolQA Dataset
# 3.1 Dataset Details
We curate the ToolQA benchmark to evaluate LLMsâ capability in leveraging external tools for question answering. ToolQA consists of data from 8 distinct domains, each instance being a tuple â (question, answer, reference corpora, and tools). The reference corpora are external knowledge sources that can be queried, which can be a text corpus, a tabular database, or a graph. To enable
3
obtaining information from the reference corpora, we have developed 13 tools for text retrieval, database operations, code interpretation, mathematical computations, and more. The questions are designed to simulate real-world information-seeking inquiries. However, they cannot be answered directly with LLMsâ internal knowledge, but instead require LLMs to obtain information from the reference corpora via tool use. Table 1 shows the detailed statistics of ToolQA.
To reduce human efforts in generating faithful question-answer pairs to evaluate LLMsâ tool-use capabilities, we propose an automatic three-phase process (Figure 2): (1) We first select data from public sources that are unmemorized by LLMs during Reference Data Collection; (2) We adopt Human-Guided Question Generation to steer LLMs to generate valid questions according to pre- defined templates; (3) We produce accurate answers for the generated questions with Programmatic Answer Generation. We detail the three-phase generation process in the following.
# 3.2 Reference Data and Tools
To evaluate LLMsâ ability in using external tools for question answering, it is crucial to ensure that they cannot directly answer the questions with their internal knowledge. To this end, we collect reference corpora that meet the following criteria (Figure 2(a)): 1) The reference corpora should ideally not overlap with the LLMâs pre-training data; 2) The reference corpora should contain context-sensitive facts for generating questions that cannot be directly answered solely based on LLMsâ internal knowledge and reasoning abilities; 3) LLMs should be able to obtain all the necessary information from the reference corpora to correctly answer the questions.
Based on these criteria, we define 6 contextual dimensions: temporal, spatial, social, scientific, mathematical, and personal. We collect reference corpora that can yield context-specific questions along one or more of the 6 dimensions. Specifically: 1) Along the temporal dimension, we collect the Flights and Coffee corpora, which contain the latest information that is out of the temporal scope of the LLMâs pre-training data. 2) Along the spatial dimension, we collect Yelp and Airbnb, which are two non-text corpora that can yield questions with spatial contexts. 3) Along the mathematical dimension, we collect the questions from GSM8K that ChatGPT cannot answer correctly with its own mathematical reasoning ability; 4) SciREX emphasizes detailed model performances from the scientific domain [16], where GPT family models can easily hallucinate [36]. 5) To incorporate personal data and avoid privacy issues, we synthesize the personal Agenda corpus with ChatGPT with virtual names and events. 6) In addition, we also select data from the most recent DBLP database and create graphs between authors and papers, where social relational knowledge cannot be understood by LLMs currently. Further details can be found in Appendix B.
To obtain information from these reference corpora, we design 13 tools that are available to the LLMs (Table 2). These tools are designed as follows:
Text: AgendaRetriever and SciREXRetreiver are text retrieval tools. They can retrieve relevant information to a given query from the (synthesized) personal agenda corpus and scientific corpus. ⢠Database: Database Loader loads data from the local tabular Database. Data Filter can filter the database according to a set of conditions, each of which is composed of a column name, a relation, and a pre-determined value (e.g., âDate=2022-10-15â). Get Value returns all the values under a certain column in the database.
# Table 1: Dataset Statistics of ToolQA.
Context Topic External Knowledge Easy Hard Format Size # Templates # Questions # Templates # Questions Temporal Flight Coffee Tabular Database Tabular Database 4078318 5746 10 8 100 100 10 13 100 130 Spatial Yelp Airbnb Tabular Database Tabular Database 150346 102599 11 10 100 100 10 10 100 100 Mathematical GSM8K Professional Ability - - 100 - - Social DBLP Graph 553320 10 100 10 100 Scientific SciREX Pure-Text Corpus 438 1 100 4 100 Personal Agenda Pure-Text Corpus 10000 5 100 5 100 SUM - - - 55 800 62 730
4
Table 2: Different tools in ToolQA.
Tool Types # Tools Tools Text Tools Database Tools Math Tools Graph Tools Code Tools System Tools 2 3 1 4 2 1 Agenda Retriever, SciREX Retriever Database Loader, Data Filter, Get Value WolframAlpha Calculator Graph Loader, Neighbour Checker, Node Checker, Edge Checker Python Interpreter, SQL Interpreter Finish
⢠Math: Calculator is a mathematical tool that treats the input string as a formula and calculates the corresponding result. We use the WolframAlpha API portal as the calculator 3, which can perform both simple computations (e.g., add, subtraction, multiplication) and complicated operations (e.g., averaging, finding maximum values).
⢠Graph: Graph Loader loads the graph from local files for future operations. Neighbour Checker lists all the neighbors of the query node in the graph. Node Checker and Edge Checker return the detailed attribute information of the query node and edge, respectively.
⢠Code: The SQL Interpreter and the Python Interpreter are responsible for interpreting and executing SQL commands and Python code, respectively. They can receive and transform data from other tools, serving as bridges between different tools and the LLM.
System: Finish parses the feedback from execution and returns the answer to finish the task.
# 3.3 Human-Guided Question Generation
The question generation phase aims to generate questions that can be answered by using the available tools over the reference corpora. There are two straightforward strategies to generate questions: 1) letting human experts come up with questions about reference corpora, or 2) relying solely on LLMs to generate questions about the reference corpora. However, both strategies have their drawbacks. While human experts can produce high-quality questions, the entire process is labor-intensive, time- consuming, and hard to scale. Depending solely on LLMs may generate unanswerable questions or hallucinate information that does not exist in the reference data. Besides, some of the LLM-generated questions are too easy and can be directly answered with only LLMsâ internal knowledge.
To address these challenges, we propose a human-guided LLM generation approach that uses question templates to bridge human guidance and automatic LLM generation [59, 69]. We first ask ChatGPT to generate candidate question templates from reference data, using prompts such as âGenerate some template questions based on the given information and provide the corresponding answers.â. The responses obtained are arrays containing potential question templates. We then perform manual validation to select the templates that cannot be answered with LLMsâ internal knowledge but become answerable with the reference corpora. We provide a comprehensive list of both easy and hard question templates for different reference data in Appendix C and Appendix D.
After the high-quality question templates are manually selected, we sample values from the reference data to automatically fill into the templates to generate concrete questions. For example, given the template âDid the flight from {Origin} to {Dest} on {Date} get canceled or diverted?â, we can sample the values âLAXâ, âMDWâ, â01/09/22â from the reference Flight tabular data and fill into the template to form a question: âDid the flight from LAX to MDW on 01/09/22 get canceled or diverted?â
Depending on the difficulty of the questions, we classify them into two classes â easy and hard. Easy questions primarily focus on extracting a single piece of information from external knowledge, thus requiring fewer tools to involve in the solution. Conversely, hard questions require complex operations (e.g., average) and reasoning (e.g., comparison) over multiple information pieces drawn from the reference corpora, requiring more tools and complex reasoning among them.
# 3.4 Programmatic Answer Generation
Our final step is to create accurate answers for the generated questions. To guarantee the validity of these responses, we implement 1) operators, which are functions corresponding to the predefined tools; and 2) tool chains, which are schemas for composing different operators for different question templates. For each question, as we know the true arguments filled into the question template, we can
# 3https://products.wolframalpha.com/api
5
# Table 3: Success rates on easy questions.
2.0 1.0 30.0 61.0 48.0 0.0 1.0 9.0 90.0 81.0 0.0 0.0 4.0 29.0 24.0 15.0 9.0 8.0 77.0 64.0 0.0 0.0 3.0 28.0 23.0 2.0 0.0 0.0 3.0 2.0 26.0 30.0 27.0 32.0 23.0 0.0 0.0 4.0 25.0 29.0 5.6 5.1 10.6 43.1 36.8
# ChatGPT CoT Chameleon ReAct (GPT-3) ReAct (GPT-3.5)
Table 4: Success rate on hard questions.
Flight Coffee Agenda Yelp Airbnb DBLP SciREX Average ChatGPT CoT Chameleon ReAct (GPT-3) ReAct (GPT-3.5) 2.0 0.0 3.0 3.0 5.0 2.3 0.8 2.3 10.8 17.7 1.0 0.0 0.0 0.0 7.0 0.0 1.0 0.0 3.0 8.0 2.0 0.0 0.0 0.0 7.0 4.0 3.0 8.0 19.0 5.0 3.0 5.0 0.0 0.0 8.0 2.0 1.4 1.9 5.1 8.2
run the tool chains with the corresponding arguments to programmatically extract answers from the reference data. This process enables automatic generation correct answers to questions, even for those questions that involve multi-step reasoning. Figure 2(c) demonstrates this generation process. When answering a generated question with sampled values âDid the flight from LAX to MDW on 01/09/22 get canceled or diverted?â, we write Python codes to implement the operators over the reference data, including database loader, data filter, and get-value function. Then, the programmatic pipeline runs a tool chain of these operators to automatically generate the correct answer (details in Appendix E).
# 4 Experiments
# 4.1 Baselines
We evaluate the performance of the following methods on ToolQA, covering both standard LLMs and tool-augmented LLMs: (1) ChatGPT [37]: We directly feed the question into OpenAIâs ChatGPT model (gpt-3.5-turbo) and obtain its response as the final answer. (2) CoT [57, 23]: We use chain-of-thoughts prompting for ChatGPT, adding the prompt "Letâs think step by step:" after the question to leverage LLMsâ reasoning ability for question answering. (3) Chameleon [28] is a recent method that uses LLMs as a controller to use multiple tools for solving subtasks and has shown promising results in reasoning and QA tasks. When running Chameleon on ToolQA, we set the tool pool to our defined tools in § 3.1. (4) ReAct [66] integrates reasoning with tool use by prompting LLMs to generate interleaved verbal reasoning traces and tool calls. This integration has been shown effective in enhancing LLMsâ problem-solving capabilities. We instantiate two versions of ReAct using gpt-3.5-turbo and text-davinci-003.
Different from the existing works that mainly provide task-level few-shot exemplars, we provide tool-level demonstrations. We used 8 demonstrations about how to use tools for QA, ensuring that each tool in the pool is covered at least once by the demonstrations. Such tool-level demonstrations provide a concise tutorial to the LLMs for tool use, covering all tool uses with the LLM context limit. Details about the demonstrations and our prompts are included in Appendix F. To assess the performance of methods on the ToolQA benchmark, we normalize both the ground-truth answers and the model predictions to ensure uniformity in format. Success rates are then computed based on the exact match between these normalized answers. We evaluate the modelâs ability against the generated question-answer pairs in an open-ended manner, focusing on whether the model can arrive at the correct answer, regardless of the used tool chains.
# 4.2 Results
Comparing Different Tool-Use LLMs. Table 3 and 4 shows the results of different methods on the easy and hard questions. ChatGPT and CoT achieve very poor success rates (< 10) on both easy and hard questions across different tasks. This is expected as the questions in ToolQA cannot be answered solely based on LLMsâ internal knowledge and reasoning. Chameleon achieves slightly better performance, with 10.6% and 1.9% success rates on easy and hard questions, respectively. This is because Chameleon incorporates tool descriptions and integrates human-induced orderings of these tools in its context, enabling it to comprehend and compose different tools for QA. However, Chameleon cannot take feedback from the execution trace, thus often suffering from infeasible
6
(a) Incorrect tool calls of ReAct on ToolQA. (b) Confusion matrix of questions from dif- ferent resources in ToolQA.
# # Wrong Calls
Figure 3: Analysis of incorrect tool calls and incorrect data sources made by ReAct on ToolQA.
actions or omitted arguments in its generated plans. ReAct is the best-performing model. It can use observations in the execution trace to generate its next action, allowing it to iteratively refine its tool use chain and obtain better success rates.
Easy vs. Hard Questions. Comparing Table 3 and 4, we observe that all the baselines perform much worse on hard questions. The best method achieves an average success rate of 43.13% on easy questions, while that number drops to 8.24% on hard questions. As mentioned in § 3, the hard questions in ToolQA require more tool calls and more complicated compositions. Current tool- augmented LLMs struggle with answering such hard questions, which requires further development of techniques to improve their ability to reason about the task and generate plans for tool use. GPT-3 vs. GPT3.5. 4 Comparing the different versions of ReAct, we observe that the ReAct (GPT-3) outperforms ReAct (GPT-3.5) on easy questions, yet it shows inferior performance on hard questions. Our hypothesis is that for easy questions, it is more important to learn and follow the format of the tool calls in the context, which GPT-3 is stronger at. For hard questions, the better reasoning and code understanding abilities of GPT-3.5 enables it to come up with âinnovativeâ solutions that never appear in the context, leading to higher success rates. An example can be referred to in § 5.3.
# 5 Result Analysis and Discussion
We analyze the drawbacks and possible improvements of existing tool-augmented LLMs, taking the best-performed ReAct (GPT-3.5) model on the hard questions of ToolQA as an example.
# 5.1 Main Error Type I: Argument Errors
By performing comprehensive error analysis, we found that the most common error type when asking LLMs to use tools for QA is argument error â LLMs calling the tools with wrong arguments. For ReAct, this error type makes 44.56% and 48.23% out of the 377 and 436 error cases on easy and hard questions respectively, as shown in Figure 3(a). Interestingly, ReAct shows different argument error patterns on easy and hard questions. On easy questions, it tends to make more mistakes on database-related tools. For example, the model commits a total of 120 errors when calling LoadDB, FilterDB, and GetValue tools for easy questions, while this number reduces to 95 for hard questions. On the other hand, when dealing with code-related tools (e.g., SQLInterpreter and PythonInterpreter), ReAct makes nearly 10x more errors for hard questions than for easy ones. This phenomenon is likely because the solution logic for hard questions is often more complex and cannot be fully inferred from the context alone. Consequently, the LLMs tend to rely on their understanding of code and programming concepts to tackle these intricate questions. In contrast, for easy questions, the LLMs tend to follow the patterns provided in the context, attempting to combine different database operations to arrive at a solution.
# 5.2 Main Error Type II: Incorrect Data Source
We have conducted an investigation into the data sources preferred by LLMs when answering questions. We found that LLMs also have difficulties in identifying the proper reference corpora answer the questions. This behavior is graphically represented as a confusion matrix in Figure 3(b). Upon examining the figure, it is apparent that for target reference corpora like Flight, Coffee, Airbnb,
4GPT-4 was not included in the evaluation as we have no access to its API.
7
Coffee-Hard Question: what was the average coffee price from 2019-10-31 to 2021-07-23? Correct Answer: 120.9 ReAct (GPT-3) Thought 1: This is a question related to coffee. We need to load the coffee database. Action 1: LoadDB [coffee Observation 1: We have successfully loaded the coffee database... Thought 2: We need to filter the information related to the query. Action 2: FilterDB[Date>=2019-10-31, Date<=2021-07-23] Observation 2: We have successfully filtered the data (435 rows). Thought 3: We then obtain all the React (GPT-3.5) Thought 3: We need to calculate the average coffee price from the filtered data. Action 3: Calculate [AVG(Close) ] Observation 3: (data not available) We need to retrieve the result of the previous calculation. Action 4: GetValue [AVG (Close) ] Observation he querying does not exist. Please modify it. Thought 4: ue you are Thought 5: Oops, I made a mistake in the SQL query. Let me correct it. Action 5: SQUITESEBESESE[SEZECT| AVG(Close) FROM Coffee-Hard Question: How much did the coffee change from 2006-11-27 to 2016-06 Correct Answer: 10.5 React (GPT-3.5) Thought 6: we need to modify the filtering query to the correct format by adding quotes around the date. Action 6: FilterDB[Date='2016-06-03"] Observation 6: The filtering query Date='2016-06-03' is incorrect. Please modify the condition. Action 7: GetValue[Close] Observation 7: 118.0 Thought 8: We need to subtract the price f£ee on 2006-11-27 Close values of filtered records. Action 3: GetValue [Close] Observation 3: [Ui 95y//N040)/1O3NESy » 193.65, 189.0 (435 values) Action 8: Calculate {118.0-94u25) Thought 6: After calculation, we know Observation 8: 23075 that the average coffee price from 2019-10-31 to 2021-07-23 is 120.9. Action 6: Finish (12009) Observation 6: Afiswer US||CORRECT Too Long Context, Aborted Thought 9 that th ee price changed by 23.75. Action 9: Finish(23V751 Observation 9: Answer iS//INCORRECT After calculation, we know ' 1 ' 1 ' i 1 1 ' 1 1 1 1 ' i 1 1 | | mought 7: fle have successfully filtered | | | the data (1 rows). 1 1 i ' 1 ' 1 1 ' 1 1 i ' 1 ' 1 (a) Innovation (b) Hallucination
(a) Innovation
(b) Hallucination
Figure 4: An example of innovation and hallucination when answering hard questions on Coffee data. Actions and observations shrouded in pink are incorrect, whereas those in green are correct. Terms highlighted in yellow signify hallucinations produced by ReAct (GPT-3.5).
(a) Easy questions. (b) Hard questions.
Figure 5: Error analysis of ReAct on ToolQA.
and Yelp that contain temporal information, LLMs are more likely to query the Agenda corpus for answering questions. Similarly, given that the SciREX knowledge corpora and DBLP graph are both in the scientific domain, LLMs tend to be confused about which source to query when answering scientific questions.
# 5.3 Main Error Type III: Innovation and Hallucination
For in-context tool-augmented LLMs, it is typical to include descriptions and use-case examples of each tool in the prompt. However, as the problem complexity increases with the number of tools, it becomes challenging to encompass all possible instances of compositional tool use as few-shot exemplars. Consequently, it is vital for LLMs to uncover logical relationships among different tools, which have never been encompassed in the human-provided exemplars, to solve challenging tasks â a process we refer to as "innovation." However, these innovative behaviors are a double-edged sword as they are often accompanied by hallucinations. Figure 4 illustrates this phenomenon with a case study, where LLMs answer hard questions with reference Coffee data. Given the context length constraint, the few-shot exemplar only showcases the basic usage of database operations and the SQL interpreter. For the hard question in Figure 4(a), ReAct (GPT-3) strictly follows the operations displayed in the context, leading to failure. On the contrary, ReAct (GPT-3.5) innovatively identifies the SQL interpreter as a possible alternative to database operations, especially when the latter fails repeatedly. However, such innovations can oftentimes lead to hallucinations. As shown in Figure 4(b), when answering another hard question from the Coffee data, ReAct (GPT-3.5) opts to hallucinate certain observations (highlighted in yellow) that are non-existent in the feedback from tool execution.
8
# 5.4 Other Error Types
We manually go through and count all the errors made by the ReAct (GPT-3.5) model and show the errors on both easy and hard questions in Figure 5. In addition to the aforementioned 3 main error types, there are 4 error types that frequently occur:
⢠Infeasible Actions: The execution of tool calls are infeasible in the environment, often involving new tools that do not exist in the pre-defined tool pool.
⢠Too Long Context: The encoding of interaction history, observations, and tool-use plans exceed the length limitation of GPT family models, resulting in runtime errors;
⢠Mis-understanding: The LLMs cannot understand the observations obtained from external interaction and fail to determine the next steps or generate answers;
⢠Low-Quality Retrieval: This error occurs when the retrieval model fails to extract the relevant information from text corpora, indicating insufficient external knowledge for LLMs to answer questions accurately.
Comparing these error types on easy and hard questions, we find that the overall distribution is similar, though there is a slightly higher rate of hallucination and long-context errors when answering hard questions. This can be attributed to the complexity of hard questions, which often require composing more tools for question answering.
# 6 Conclusion
We have developed ToolQA, a dataset that assesses the ability of Large Language Models (LLMs) in using external tools for solving complex problems. ToolQA is curated by an automated three- phase process for reference data collection, template-based question generation, and programmatic answer generation. This pipeline is general and can be expanded to incorporate any area of external knowledge of interest. We tested both standard LLMs and tool-augmented LLMs on ToolQA. Our analysis showed that even the strongest baseline achieved limited performance on the hard questions of ToolQA. Our study also found that current tool-augmented LLMs tend to make errors such as incorrect tool calls and using incorrect data sources. These issues could potentially be addressed by fine-tuning using a collection of tool-use corpora with publicly accessible LLMs. In the future, we are interested in include collecting high-quality, diverse data for fine-tuning, as well as assessing the performance of fine-tuned tool-augmented LLMs on ToolQA.
9
# References
[1] Y. Bai, A. Jones, K. Ndousse, A. Askell, A. Chen, N. DasSarma, D. Drain, S. Fort, D. Ganguli, T. Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
[2] S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. B. Van Den Driess- che, J.-B. Lespiau, B. Damoc, A. Clark, et al. Improving language models by retrieving from trillions of tokens. In International conference on machine learning, pages 2206â2240. PMLR, 2022.
[3] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â1901, 2020.
[4] S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. T. Lee, Y. Li, S. Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
[5] W. Chen, X. Ma, X. Wang, and W. W. Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks, 2022.
[6] W. Chen, X. Wang, and W. Y. Wang. A dataset for answering time-sensitive questions. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021.
[7] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
[8] H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, E. Li, X. Wang, M. De- hghani, S. Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
[9] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[10] B. Dhingra, J. R. Cole, J. M. Eisenschlos, D. Gillick, J. Eisenstein, and W. W. Cohen. Time- aware language models as temporal knowledge bases. Transactions of the Association for Computational Linguistics, 10:257â273, 2022.
[11] L. Gao, A. Madaan, S. Zhou, U. Alon, P. Liu, Y. Yang, J. Callan, and G. Neubig. Pal: Program- aided language models. arXiv preprint arXiv:2211.10435, 2022.
[12] D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS, 2021.
[13] S. Imani, L. Du, and H. Shrivastava. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398, 2023.
[14] G. Izacard, M. Caron, L. Hosseini, S. Riedel, P. Bojanowski, A. Joulin, and E. Grave. To- wards unsupervised dense information retrieval with contrastive learning. arXiv preprint arXiv:2112.09118, 2021.
[15] G. Izacard, P. Lewis, M. Lomeli, L. Hosseini, F. Petroni, T. Schick, J. Dwivedi-Yu, A. Joulin, S. Riedel, and E. Grave. Few-shot learning with retrieval augmented language models. arXiv preprint arXiv:2208.03299, 2022.
[16] S. Jain, M. van Zuylen, H. Hajishirzi, and I. Beltagy. SciREX: A challenge dataset for document- level information extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7506â7516, Online, July 2020. Association for Computational Linguistics.
[17] Z. Ji, N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, E. Ishii, Y. J. Bang, A. Madotto, and P. Fung. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1â38, 2023.
[18] Q. Jin, Y. Yang, Q. Chen, and Z. Lu. Genegpt: Augmenting large language models with domain tools for improved access to biomedical information. ArXiv, 2023.
10
[19] M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601â1611, Vancouver, Canada, July 2017. Association for Computational Linguistics.
[20] V. Karpukhin, B. Oguz, S. Min, P. Lewis, L. Wu, S. Edunov, D. Chen, and W.-t. Yih. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769â6781, 2020.
[21] J. Kasai, K. Sakaguchi, Y. Takahashi, R. L. Bras, A. Asai, X. Yu, D. Radev, N. A. Smith, Y. Choi, and K. Inui. Realtime qa: Whatâs the answer right now? arXiv preprint arXiv:2207.13332, 2022.
[22] G. Kim, P. Baldi, and S. McAleer. Language models can solve computer tasks, 2023.
[23] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. Large language models are zero-shot reasoners. In A. H. Oh, A. Agarwal, D. Belgrave, and K. Cho, editors, Advances in Neural Information Processing Systems, 2022.
[24] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, et al. Retrieval-augmented generation for knowledge-intensive nlp tasks. Advances in Neural Information Processing Systems, 33:9459â9474, 2020.
[25] A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. Ramasesh, A. Slone, C. Anil, I. Schlag, T. Gutman-Solo, et al. Solving quantitative reasoning problems with language models. arXiv preprint arXiv:2206.14858, 2022.
[26] M. Li, F. Song, B. Yu, H. Yu, Z. Li, F. Huang, and Y. Li. Api-bank: A benchmark for tool-augmented llms, 2023.
[27] B. Y. Lin, K. Tan, C. S. Miller, B. Tian, and X. Ren. Unsupervised cross-task generalization via retrieval augmentation. In Advances in Neural Information Processing Systems, 2022.
[28] P. Lu, B. Peng, H. Cheng, M. Galley, K.-W. Chang, Y. N. Wu, S.-C. Zhu, and J. Gao. Chameleon: Plug-and-play compositional reasoning with large language models. arXiv preprint arXiv:2304.09842, 2023.
[29] P. Lu, L. Qiu, K.-W. Chang, Y. N. Wu, S.-C. Zhu, T. Rajpurohit, P. Clark, and A. Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning. arXiv preprint arXiv:2209.14610, 2022.
[30] S. Lu, N. Duan, H. Han, D. Guo, S.-w. Hwang, and A. Svyatkovskiy. Reacc: A retrieval- augmented code completion framework. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6227â6240, 2022.
[31] A. Madaan and A. Yazdanbakhsh. Text and patterns: For effective chain of thought, it takes two to tango. arXiv preprint arXiv:2209.07686, 2022.
[32] A. Mallen, A. Asai, V. Zhong, R. Das, H. Hajishirzi, and D. Khashabi. When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories. arXiv preprint arXiv:2212.10511, 2022.
[33] S. Mishra, M. Finlayson, P. Lu, L. Tang, S. Welleck, C. Baral, T. Rajpurohit, O. Tafjord, A. Sabharwal, P. Clark, et al. Lila: A unified benchmark for mathematical reasoning. arXiv preprint arXiv:2210.17517, 2022.
[34] R. Nakano, J. Hilton, S. Balaji, J. Wu, L. Ouyang, C. Kim, C. Hesse, S. Jain, V. Kosaraju, W. Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
[35] R. Nogueira, Z. Jiang, and J. Lin. Investigating the limitations of transformers with simple arithmetic tasks. arXiv preprint arXiv:2102.13019, 2021.
# [36] OpenAI. Gpt-4 technical report. arXiv, 2023.
[37] OpenAI. Introducing chatgpt, 2023.
[38] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730â27744, 2022.
11
[39] B. Paranjape, S. Lundberg, S. Singh, H. Hajishirzi, L. Zettlemoyer, and M. T. Ribeiro. Art: Automatic multi-step reasoning and tool-use for large language models. arXiv preprint arXiv:2303.09014, 2023.
[40] A. Parisi, Y. Zhao, and N. Fiedel. Talm: Tool augmented language models. arXiv preprint arXiv:2205.12255, 2022.
[41] S. G. Patil, T. Zhang, X. Wang, and J. E. Gonzalez. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334, 2023.
[42] B. Peng, M. Galley, P. He, H. Cheng, Y. Xie, Y. Hu, Q. Huang, L. Liden, Z. Yu, W. Chen, et al. Check your facts and try again: Improving large language models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813, 2023.
[43] J. Qian, H. Wang, Z. Li, S. Li, and X. Yan. Limitations of language models in arithmetic and symbolic induction. arXiv preprint arXiv:2208.05051, 2022.
[44] Y. Qin, S. Hu, Y. Lin, W. Chen, N. Ding, G. Cui, Z. Zeng, Y. Huang, C. Xiao, C. Han, Y. R. Fung, Y. Su, H. Wang, C. Qian, R. Tian, K. Zhu, S. Liang, X. Shen, B. Xu, Z. Zhang, Y. Ye, B. Li, Z. Tang, J. Yi, Y. Zhu, Z. Dai, L. Yan, X. Cong, Y. Lu, W. Zhao, Y. Huang, J. Yan, X. Han, X. Sun, D. Li, J. Phang, C. Yang, T. Wu, H. Ji, Z. Liu, and M. Sun. Tool learning with foundation models, 2023.
[45] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[46] S. Robertson, H. Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333â389, 2009.
[47] T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ili´c, D. Hesslow, R. Castagné, A. S. Luccioni, F. Yvon, M. Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022.
[48] T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
[49] Y. Shen, K. Song, X. Tan, D. Li, W. Lu, and Y. Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023.
[50] W. Shi, S. Min, M. Yasunaga, M. Seo, R. James, M. Lewis, L. Zettlemoyer, and W.-t. Yih. Replug: Retrieval-augmented black-box language models. arXiv preprint arXiv:2301.12652, 2023.
[51] N. Shinn, B. Labash, and A. Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.
[52] H. Sun, Y. Zhuang, L. Kong, B. Dai, and C. Zhang. Adaplanner: Adaptive planning from feedback with language models, 2023.
[53] J. Thorne, A. Vlachos, C. Christodoulopoulos, and A. Mittal. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809â819, New Orleans, Louisiana, June 2018. Association for Computational Linguistics.
[54] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[55] X. Wang, S. Li, and H. Ji. Code4struct: Code generation for few-shot structured prediction from natural language. arXiv preprint arXiv:2210.12810, 2022.
[56] Z. Wang, S. Cai, A. Liu, X. Ma, and Y. Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents, 2023.
[57] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou. Chain- of-thought prompting elicits reasoning in large language models. arXiv, page 2201.11903v6, 2022.
12
[58] L. Weidinger, J. Mellor, M. Rauh, C. Griffin, J. Uesato, P.-S. Huang, M. Cheng, M. Glaese, B. Balle, A. Kasirzadeh, et al. Ethical and social risks of harm from language models. arXiv preprint arXiv:2112.04359, 2021.
[59] S. Wiegreffe, J. Hessel, S. Swayamdipta, M. Riedl, and Y. Choi. Reframing human-ai collab- oration for generating free-text explanations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 632â658, 2022.
[60] S. Wolfram. Wolfram|Alpha as the Way to Bring Computational Knowledge Superpowers to ChatGPT. Stephen Wolfram Writings, 2023.
[61] C. Wu, S. Yin, W. Qi, X. Wang, Z. Tang, and N. Duan. Visual chatgpt: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671, 2023.
[62] Q. Xu, F. Hong, B. Li, C. Hu, Z. Chen, and J. Zhang. On the tool manipulation capability of open-source large language models. arXiv preprint arXiv:2305.16504, 2023.
[63] R. Xu, Y. Yu, J. C. Ho, and C. Yang. Weakly-supervised scientific document classification via retrieval-augmented multi-stage training. arXiv preprint arXiv:2306.07193, 2023.
[64] Z. Yang, L. Li, J. Wang, K. Lin, E. Azarnasab, F. Ahmed, Z. Liu, C. Liu, M. Zeng, and L. Wang. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381, 2023.
[65] Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. Cohen, R. Salakhutdinov, and C. D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2369â2380, Brussels, Belgium, Oct.-Nov. 2018. Association for Computational Linguistics.
[66] S. Yao, J. Zhao, D. Yu, N. Du, I. Shafran, K. R. Narasimhan, and Y. Cao. React: Synergizing reasoning and acting in language models. In The Eleventh International Conference on Learning Representations, 2023.
[67] J. Zhang. Graph-toolformer: To empower llms with graph reasoning ability via prompt augmented by chatgpt, 2023.
In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7371â7387, Online and Punta Cana, Dominican Republic, Nov. 2021. Association for Computational Linguistics.
[69] R. Zhang, Y. Yu, P. Shetty, L. Song, and C. Zhang. Prboost: Prompt-based rule discovery and boosting for interactive weakly-supervised learning. arXiv preprint arXiv:2203.09735, 2022.
[70] Y. Zhuang, Y. Li, J. Zhang, Y. Yu, Y. Mou, X. Chen, L. Song, and C. Zhang. ReSel: N- ary relation extraction from scientific text and tables by learning to retrieve and select. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 730â744, Abu Dhabi, United Arab Emirates, Dec. 2022. Association for Computational Linguistics.
13
# A Additional Related Works
Methods Tool Numbers Tool Categories # Tool/Task Reasoning Instruction Type Task Single-Tool Methods CoT [57] Lila [33] Program-of-Thought [5] Code4Struct [55] PAL [11] MathPrompt [13] ToolFormer [48] GraphToolFormer [67] Talm [40] 1 1 1 1 1 1 5 5 - - math/code code code code code Basic Graph Basic 1 1 1 1 1 1 1 1 1 Generation Generation Generation Generation Generation Generation Generation Human Info Generation Prompting Prompting Prompting Prompting Prompting Prompting PR & FT PR & FT PR & FT QA MathQA TabQA Event Extraction MathQA MathQA QA Graph QA Multi-Tool Methods WebGPT [34] HuggingGPT [49] Chameleon [28] GeneGPT [18] ART [39] ReAct [66] MM-ReAct [64] Visual ChatGPT [61] 10 >10 >10 38 8 3 >10 >10 Web Operation Vision code, nlp, cv NCBI APIs code/math/retriever retriever vision vision Feedback Human Info Human Info Generation Human Feedback Feedback Feedback Feedback Fine-tuning Prompting Prompting Prompting Prompting PR & FT Prompting Prompting QA VQA ScienceQA, TabQA Gene Tasks BigBench QA, AlfWorld, WebShop CV tasks CV tasks
>1 >1 >1 >1 >1 >1 >1 >1 Table 5: A comparison of methods that leverage LLMs for Tool-use.
We list the state-of-the-art related works in tool-augmented LLMs in Table 5. All of them can be categorized into two groups: (1) single-tool methods, that focus on making a single API call perfect in the solution; (2) multi-tool methods, that emphasize more on studying how to compose different tools together to solve a challenging problem. ToolQA is more suitable for the evaluation of the second category to test the inherent logical reasoning behind different tools. Additionally, there exist other notable contributions [56, 22, 52] within the realm of decision-making that specifically emphasize the planning capabilities of expansive language models. These endeavors can be regarded as methods affiliated with tools, wherein the actions within generated plans are analogous to distinct tools utilized for specific purposes.
# B Data Sources
B.1 Different Data Source Introduction
⢠Flight Status (2022-2023)5 contains almost all flight information of airlines between 2022 and 2023, which is too contemporary for LLMsâ internal knowledge.
Daily Coffee Price (2000-2022)6 contains the daily price of coffee, ranging from 2000 to 2022, where the information is too contemporary and detailed for LLMsâ internal knowledge. ⢠Yelp Business Data7 is a subset of Yelpâs business data across 8 metropolitan areas in the USA and Canada, where the information is too detailed for LLMsâ internal knowledge. ⢠Airbnb Open Data8 is a subset of Airbnb activities in New York, where the information is
too detailed for LLMsâ internal knowledge.
⢠DBLP Citation Network (V14)9 constructs the graph based on the records after 2020. The author-author and paper-paper relations are formulated as two separate graphs.
GSM8k10 is a dataset of 8.5K high-quality linguistically diverse grade school math word problems. We sample the questions from the error cases made by ChatGPT on the original dataset to make sure that the questions cannot be easily handled with its internal knowledge. ⢠SciREX11 is a challenging dataset for document-level information extraction based on a
collection of full-length machine-learning scientific papers.
5https://www.kaggle.com/datasets/robikscube/flight-delay-dataset-20182022?select= Combined_Flights_2022.csv
# 6https://www.kaggle.com/datasets/psycon/daily-coffee-price 7https://www.kaggle.com/datasets/yelp-dataset/yelp-dataset?select=yelp_academic_
# dataset_business.json
# 8https://www.kaggle.com/datasets/arianazmoudeh/airbnbopendata 9https://www.aminer.org/citation 10https://github.com/openai/grade-school-math 11https://github.com/allenai/SciREX
14
⢠Agenda is our own synthetic dataset to model the real-world personal agenda data. To avoid the privacy issue, we first create names, events, and dates with ChatGPT and then randomly compose them to form 10000 different records. To create a pure-text personal agenda corpus, we feed each of the records into ChatGPT, containing generated agenda for virtual characters. More Details can be seen in Appendix B.2.
# B.2 Generation Details of Agenda Dataset
As mentioned in § 3.2, personal or private data serves as a significant external knowledge source. There exist applications that have been designed with plugins and external tools specifically querying this type of data, such as AI personal assistants on daily agenda. Nevertheless, we recognize that this data often intersects with sensitive areas, and hence, privacy concerns are paramount. To address these issues, we automatically synthesize a personal agenda corpus. This not only ensures that the large language models (LLMs) have not been previously exposed to the data but also eliminates any possibility of them inadvertently memorizing the information within their internal knowledge.
In the synthetically generated personal agenda corpus, each entry follows the pattern: "NAME performs EVENT at TIME on DATE", incorporating key elements such as names, events, dates, and time slots. To begin, we employ ChatGPT to virtually generate these elements. More precisely, we create 100 unique names, 10000 distinctive events each associated with corresponding time slots within a day, and span all possible dates from 01/01/2022 through 12/31/2022. Following this, we commence the random assembly of these generated elements to formulate personal agenda entries. For every event- time pair generated, we randomly select from the pool of 100 names and possible dates to construct each record. This process yields a total of 9,494 unique personal agenda entries. To transform this corpus into an accessible external database for model querying, we transcribe each record into a comprehensible natural language description. Prompts designed for agenda data generation are listed in Appendix F.2.
# C Easy Question Templates
# C.1 Flights
We design the following 10 templates:
⢠What was the departure time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}?
Was the flight {CARRIER}{NUMBER} from {ORIGIN} to {DEST} cancelled on {ORIGIN}? ⢠What is the flight number of the {AIRLINE} flight from {ORIGIN} to {DEST} on {ORIGIN}? ⢠How long was the different between the CRS-recorded departure time and actual departure
time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}?
How long did {CARRIER}{NUMBER} delay when arrival on {DEST}? ⢠How many extra minutes did the {CARRIER}{NUMBER} flight take from {ORIGIN} to {DEST}
on {ORIGIN}?
⢠What was the local arrival time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}?
⢠What was the CRS-recorded arrival time of the {CARRIER}{NUMBER} flight from {ORIGIN} to {DEST} on {ORIGIN}?
How long was the flight {CARRIER}{NUMBER} from {ORIGIN} to {DEST} on {ORIGIN}? ⢠How many minutes did the {CARRIER}{NUMBER} flight take to taxi in on {DATE}?
# C.2 Coffee
We design the following 8 templates:
What was the daily coffee price opening on {DATE}? ⢠What was the lowest coffee price on {DATE}? ⢠What was the highest coffee price on {DATE}? ⢠What was the daily coffee price closing on {DATE}? ⢠What was the trading volume of coffee on {DATE}?
15
⢠What was the percentage change in coffee price on {DATE}, based on the difference between the opening and closing prices?
Was {DATE} a bearish or bullish day for coffee price? ⢠What was the range of coffee price on {DATE}, based on the difference between the high and
low prices?
# C.3 Yelp
We design the following 11 templates for the Yelp dataset:
What is the address of {NAME} in the area of postal code {POSTAL-CODE}? ⢠What city is {NAME} located in {STATE}? ⢠What state is {NAME} located in? ⢠What is the postal code of {NAME} in the area with postal code {POSTAL-CODE}, {CITY},
{STATE}?
⢠What is the star rating of {NAME} in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}?
⢠How many reviews does {NAME} receive in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}, received?
Is {NAME} still open in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}? ⢠Does {NAME} require appointment in the area with postal code {POSTAL-CODE}, {CITY},
{STATE}?
⢠What are the hours of operation for {NAME} in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}?
⢠What categories does {NAME} belong to, in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}?
⢠What are the coordinates of {NAME} in the area with postal code {POSTAL-CODE}, {CITY}, {STATE}?
# C.4 Airbnb
We design the following 10 templates for easy questions on Airbnb dataset:
What is the hostâs name for {NAME} in {NEIGHBOURHOOD}? ⢠How many days are {NAME} (id: {ID}) available during a year (365 days)? ⢠What is the room type of {NAME} (id: {ID}) in {NEIGHBOURHOOD}? ⢠What is the price of {NAME} (id: {ID}) in {NEIGHBOURHOOD}? ⢠What is the minimum number of nights for {NAME} (id: {ID}) in {NEIGHBOURHOOD}? ⢠When did {NAME} (id: {ID}) in {NEIGHBOURHOOD} constructed? ⢠How many reviews does {NAME} (id: {ID}) in {NEIGHBOURHOOD} have? ⢠What is the last review date for {NAME} (id: {ID}) in {NEIGHBOURHOOD}? ⢠What is the review rate number for {NAME} (id: {ID}) in {NEIGHBOURHOOD}? ⢠What is the average number of reviews per month for {NAME} (id: {ID}) in {NEIGHBOURHOOD}?
# C.5 SciREX
We design the following 1 templates for easy questions on SciREX dataset:
⢠What is the corresponding {METRIC} score of the {METHOD} method on {DATASET} dataset for {TASK} task?
# C.6 Agenda
We design the following 5 templates for easy questions on Agenda dataset:
What did {NAME} do from {START-TIME} to {END-TIME} on {DATE}? ⢠Where did {EVENT} that {NAME} attended take place on {DATE}? ⢠When did {NAME} attend {EVENT} on {DATE}? ⢠How long did {NAME} attend {EVENT} on {DATE}? ⢠Who attended {EVENT} between {START-TIME} and {END-TIME} on {DATE} in {LOCATION}?
16
# C.7 DBLP
We design the following 10 templates for easy questions on DBLP dataset:
Who are the authors of {TITLE}? ⢠What organization is {AUTHOR} from? ⢠How many pages is {TITLE}? ⢠How many papers did {TITLE} cite in the DBLP citation network? ⢠How many papers did papers in the DBLP citation network cite {TITLE}? ⢠How many collaborators does {AUTHOR} have in the DBLP citation network? ⢠How many papers did {AUTHOR} and {AUTHOR} write together in the DBLP citation network? ⢠What papers did {AUTHOR} write in the DBLP citation network? ⢠How many papers did {AUTHOR} write in the DBLP citation network? ⢠What venue did {AUTHOR} and {AUTHOR} collaborate most in the DBLP citation network?
# C.8 GSM8K
The questions are randomly sampled from the ChatGPT errors in GSM8K dataset without following some templates. Thus, we cannot offer any question templates for GSM8K.
# D Hard Question Templates
# D.1 Flights
What percentage of the flights from {ORIGIN} were delayed on {FLIGHTDATE}? ⢠What is the average delay time of all the flights that departed from {ORIGIN} on
{FLIGHTDATE}?
How many flights were diverted on {FLIGHTDATE}? ⢠How many flights with a distance greater than 500 miles on {FLIGHTDATE}? ⢠What is the average airtime of the flights from {ORIGIN} to {DEST} host by {AIRLINE}? ⢠How many flights from {ORIGIN} to {DEST} host by {AIRLINE}? ⢠What is the average flight time of {CARRIER}{NUMBER}? ⢠What is the fastest flight from {ORIGIN} to {DEST} on {FLIGHTDATE}? ⢠What is the average speed of {CARRIER}{NUMBER} from {ORIGIN} to {DEST}? ⢠What is the total number of flights operated by {AIRLINE} on {FLIGHTDATE}?
# D.2 Coffee
What was the highest coffee price from {START-DATE} to {END-DATE}? ⢠What was the lowest coffee price from {START-DATE} to {END-DATE}? ⢠What was the average coffee price from {START-DATE} to {END-DATE}? ⢠How much did the coffee price change from {START-DATE} to {END-DATE}? ⢠What was the percentage change in coffee price on {DATE} compared to the previous day? ⢠On which date from {START-DATE} to {END-DATE} was the difference between the highest
and lowest coffee prices the greatest?
What was the average daily volume of coffee traded from {START-DATE} to {END-DATE}? ⢠On which date from {START-DATE} to {END-DATE} did the coffee price have the highest
increase compared to the previous day?
⢠How many times from {START-DATE} to {END-DATE} did the coffee price increase compared to the previous day?
What was the percentage increase in coffee price from {START-DATE} to {END-DATE}? ⢠What was the coffee price range from {START-DATE} to {END-DATE}?
17
# D.3 Yelp
We design the following 10 templates for hard questions in Yelp Dataset.
How many {CATEGORY} businesses are there in {CITY}, {STATE}? ⢠How many businesses are there in {POSTALCODE} area of {CITY}, {STATE}? ⢠Which {CATEGORY} business has the highest star rating in {CITY}, {STATE}? ⢠Which {CATEGORY} business has the highest review count in {CITY}, {STATE}?" ⢠What is the average review counts of businesses within a 5-mile radius from {NAME}? ⢠Which is the nearest {CATEGORY} business to {NAME}? ⢠Can you recommend a {CATEGORY} business with the highest star rating within a 5-mile
radius of {ADDRESS}?
How many businesses are not open currently in {CITY}? ⢠What is the average star rating of {CATEGORY} businesses in {CITY}? ⢠Which region has most bussinesses in {CITY}, {STATE}?
# D.4 Airbnb
We design the following 10 templates for hard questions on Airbnb dataset.
⢠What is the total price at least if you want to stay at {NAME} in {NEIGHBOURHOOD} for {NUMBER} nights?
How many airbnbs are there in {NEIGHBOURHOOD}? ⢠What is the average price of airbnbs in {NEIGHBOURHOOD}? ⢠What is the average review rates within 5 miles from {NAME} in {NEIGHBOURHOOD}? ⢠How much proporion of airbnbs in {NEIGHBOURHOOD} have a flexible cancellation policy? ⢠How much does it cost per night to stay at the most expensive entire home/apt in
{NEIGHBOURHOOD}?
How many airbnbs are there in {NEIGHBOURHOOD} that have a review rate higher than 4? ⢠Can you recommend me a hotel room with the lowest price in {NEIGHBOURHOOD}? ⢠Can you recommend me a private room with the highest review rate that can host at least 2
people in {NEIGHBOURHOOD}?
⢠Can you recommend a shared room with the lowest price within 10 miles from {LONGITUDE} longitude and {LATITUDE} latitude?
# D.5 SciREX
We design the following 4 templates for hard questions on SciREX dataset:
⢠What is the corresponding {METRIC} score of the {METHOD} method on {DATASET} dataset for {TASK} task?
⢠On which dataset does the {METHOD} method achieve the highest {METRIC} score for {TASK} task?
Which method achieves the highest {METRIC} score on {DATASET} dataset for {TASK} task? ⢠On what metrics is the {METHOD} method evaluated on {DATASET} dataset for {TASK} task? ⢠Which datasets is {METHOD} method evaluated on for {TASK} task?
# D.6 Agenda
We design the following 5 templates for hard questions on Agenda dataset:
How many events happen on {DATE} in the agenda table? ⢠Who is unavailable between {START-TIME} and {END-TIME} on {DATE} in the agenda table? ⢠When should I schedule a meeting with {NAME} from 9:00 AM to 6:00 PM on {DATE} in the
agenda table?
What events does {NAME} have on {DATE} in the agenda table? ⢠How many dates in the agenda table have {NAME} scheduled?
18
# D.7 DBLP
We design the following 10 templates for hard questions on DBLP dataset:
What keywords does {AUTHOR} focus on most in the DBLP citation network? ⢠How many people does {AUTHOR-1} need to know at least to know {AUTHOR-2} in the DBLP
citation network?
How many common collaborators does {AUTHOR-1} have with {AUTHOR-2}? ⢠Which is the most cited paper written by {AUTHOR} in the DBLP citation network? ⢠Which collaborator does {AUTHOR} have the most citations with in the DBLP citation net-
work?
Which venue does {AUTHOR} publish the most papers in the DBLP citation network? ⢠How many accumulated citations do papers collaborated by {AUTHOR-1} and {AUTHOR-2}
have in the DBLP citation network?
⢠How many papers in all do {AUTHOR} and his/her collaborators have in the DBLP citation network?
Who collaborated with {AUTHOR} most in the DBLP citation network? ⢠What institutions participated in the study of {TITLE} in the DBLP citation network?
# E Code Examples of Programmatic Answer Generation
Below is an example of programmatic answer generation. The example code is answering the question of âWhat percentage of the flights from {ORIGIN} were delayed on {FLIGHTDATE}?â. More details of the programmatic answers can be seen in the public code.
def solution(data, flightdate, origin): num_total =len(data.loc[(data["FlightDate"] ==flightdate) & (data["Origin"] == origin)]) num_cancelled =len(data.loc[(new_data["FlightDate"] ==flightdate) & (data["Origin"] ==origin) & (data["Cancelled"] ==True)]) if num_cancelled >0: question ="What percentage of the flights from {} were delayed on {}?".format(origin, flightdate) answer ="{:.1f}".format(num_cancelled /num_total *100)+"%"
# F Additional Implementation Details
# F.1 Implementation Details
All experiments are conducted on CPU: Intel(R) Core(TM) i7-5930K CPU @ 3.50GHz and GPU: NVIDIA GeForce RTX A5000 GPUs using python 3.8, Huggingface 4.6.0 and Pytorch 1.10. We keep the parameter top_p = 1.0 and temperature t = 1.0 for calling ChatGPT APIs [37] for the question generation part.
# F.2 Prompts
# F.2.1 Prompts for Agenda Data Generation
The prompts used for virtual name generation:
<Agenda_Name_Gen> Prompt You are an AI assistant to answer questions. Can you list 100 English Names?
The prompts used for virtual events generation:
19
# <Agenda_Events _Gen> Prompt
You are an AI assistant for text generation. Generate 100 detailed agenda events, including the event, start time, end time, and location. Please make the events as diverse as possible and make sure these events can happen in real life. Make sure the location is a detailed name that may exist in real life. Make sure the dates are selected from 2022/01/01 to 2023/01/01.
Example: Doctorâs appointment - 9:00 AM - 11:00 AM - ABC Medical Center Yoga class - 10:30 AM - 11:30 AM - Yoga Studio Downtown
Generate 100 more detailed agendas that do not conflict with the previous ones.
The prompts used to convert the agenda records into natural language descriptions:
<Agenda_Gen> Prompt Please use natural language to describe the event in the agenda with the following information: Name: NAME Date: DATE Event: EVENT Start Time: START-TIME End Time: END-TIME Location: LOCATION
# F.2.2 Prompts for Methods
The prompts used in ReAct [66]:
# <ReAct> Prompt
Question: How many extra minutes did the DL1575 flight take from ATL to MCO
on 2022-01-12? Thought 1: This is a question related to flights. We need to load the flights database. Action 1: LoadDB[flights] Observation 1: We have successfully loaded the flights database, including the following columns: FlightDate, Airline, Origin, Dest, Cancelled, Diverted, CRSDepTime, DepTime, DepDelayMinutes, DepDelay, ArrTime, ArrDelayMinutes, AirTime, CRSElapsedTime, ActualElapsedTime, Distance, Year, Quarter, Month, DayOfWeek, Marketing_Airline_Network, Operated_or_Branded_Code_Share_Partners, DayofMonth, DOT_ID_Marketing_Airline, IATA_Code_Marketing_Airline, Flight_Number_Marketing_Airline, Operating_Airline, DOT_ID_Operating_Airline, IATA_Code_Operating_Airline, Tail_Number, Flight_Number_Operating_Airline, OriginAirportID, OriginAirportSeqID, OriginCityMarketID, OriginCityName, OriginState, OriginStateFips, OriginStateName, OriginWac, DestAirportID, DestAirportSeqID, DestCityMarketID, DestCityName, DestState, DestStateFips, DestStateName, DestWac, DepDel15, DepartureDelayGroups, DepTimeBlk, TaxiOut, WheelsOff, WheelsOn, TaxiIn, CRSArrTime, ArrDelay, ArrDel15, ArrivalDelayGroups, ArrTimeBlk, DistanceGroup, DivAirportLandings. Thought 2: We need to filter the information related to the query. Action 2: FilterDB[Flight_Number_Marketing_Airline=1575, FlightDate=2022-01-12, Origin=ATL, Dest=MCO] Observation 2: We have successfully filtered the data (1 row). Thought 3: We then need to know the departure delayed time. Action 3: GetValue[DepDelay] Observation 3: -7.0 Thought 4: We then need to know the arrival delayed time. Action 4: GetValue[ArrDelay] Observation 4: -17.0 Thought 5: To compute the extra minutes, we need to subtract the departure delayed time from the arrival delayed time. Action 5: Calculate[(-17)-(-7)] Observation 5: -10
20
Thought 6: After calculation, we know that the answer is -10. Action 6: Finish[-10] Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? Thought 1: Maybe I can write a python program to solve this problem. Action 1: PythonInterpreter[# solution in Python: def solution(): # Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, # he lost 2 more. How many golf balls did he have at the end of wednesday? golf_balls_initial = 58 golf_balls_lost_tuesday = 23 golf_balls_lost_wednesday = 2 golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - \ golf_balls_lost_wednesday result = golf_balls_left return result] Observation 1: 33 Thought 2: After execution of Python Code, we know that the answer is 33. Action 2: Finish[33] Question: What is the corresponding Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task? Thought 1: The question is asking some performance information about Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task, we need to retrieve some useful information from the scirex database. Action 1: RetrieveScirex[Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task] Observation 1: The other stream undergoes a sequence of pooling operations to obtain robust features for recognition . The two streams are coupled at the full image resolution using residuals . Without additional processing steps and without pretraining , our approach achieves an intersection - over - union score of 71.8 % on the Cityscapes dataset . Thought 2: As the approach achieves an intersection - over - union score of 71.8 % on the Cityscapes dataset, the answer should be 71.8%. Action 2: Finish[71.8%] Question: When was the paper Learning the Principle of Least Action with Reinforcement Learning. published? Thought 1: The question is asking some basic information of a dblp paper. We need to load the DBLP graph. Action 1: LoadGraph[dblp] Observation 1: DBLP graph is loaded. Thought 2: The question is asking the published date of a paper, we need to check the node from the PaperNet in DBLP graph. Action 2: NodeCheck[PaperNet, Learning the Principle of Least Action with Reinforcement Learning.] Observation 2: {âtitleâ: âLearning the Principle of Least Action with Reinforcement Learning.â, âyearâ: 2021, âvenueâ: {ârawâ: âAAAI Spring Symposium - MLPSâ}, ân_citationâ: 0, âkeywordsâ: [], âdoc_typeâ: âConferenceâ, âpage_startâ: ââ, âpage_endâ: ââ} Thought 3: The published date of the paper is 2021. Action 3: Finish[2021]
Question: How many collaborators does Chao Zhang have in the DBLP graph? Thought 1: The question is asking the number of collaborators of a person, we need load the DBLP graph.
Action 1: LoadGraph[dblp] Observation 1: DBLP graph is loaded. Thought 2: The question is asking the collaborators of a person, we need to
check the neighboring nodes from the AuthorNet in DBLP graph.
Action 2: NeighbourCheck[AuthorNet, Chao Zhang] Observation 2: [âYUHUI YUANâ, âRao Fuâ, âLang Huangâ, âWeihong Linâ, âX Chenâ,
âJingdong Wangâ]
21
Thought 3: The number of collaborators of Chao Zhang is 6. Action 3: Finish[6]
Question: How many papers does Chao Zhang and Weihong Lin have in common in the DBLP graph?
Thought 1: The question is asking the number of common papers of two persons, we need load the DBLP graph.
Action 1: LoadGraph[dblp] Observation 1: DBLP graph is loaded. Thought 2: The question is asking the common papers of two persons, we need to
check the edges between them from the PaperNet in DBLP graph.
Action 2: EdgeCheck[PaperNet, Chao Zhang, Weihong Lin] Observation 2: {âweightâ: 1, âpapersâ: [âHRFormer: High-Resolution Vision
# Transformer for Dense Predict.â], ân_citationâ: [95]}
Thought 3: The number of common papers of Chao Zhang and Weihong Lin is 1.
Question: Where did Stephenâs Opera performance take place? Thought 1: The question is asking the location of Stephenâs Opera performance
from agenda.
Action 1: RetrieveAgenda[Stephenâs Opera performance] Observation 1: On January 29, 2022, there will be an opera performance at the Lyric
Opera House, featuring Stephen. The show will start at 7:00 PM and end at 9:00 PM. It promises to be a wonderful evening of beautiful music and powerful performances in a stunning venue. Come and experience the magic of opera at its finest!
Thought 2: The event happened in Lyric Opera. Action 2: Finish[Lyric Opera]
Question: What was the trading volume of coffee on 2000-01-14? Thought 1: Maybe I can write a SQL query to solve this problem. Action 1: SQLInterpreter(SELECT Volume FROM coffee.coffee_data WHERE Date =
â2000-01-14â;
Observation 1: Volume: 10115 Thought 2: The volume of coffee on 2000-01-14 is 10115. Action 2: Finish[10115]
The prompts used in Chameleon [28]:
# <Chameleon> Prompt
You need to act as a policy model, that given a question and a modular set, determines the sequence of modules that can be executed sequentially can solve the question.
The modules are defined as follows:
- Calculate[formula]: This module calculates a given formula and returns the result. It takes in a mathematical formula and returns the calculated result. Normally, we only consider using "Calculate" when the question involves mathematical computations.
- RetrieveAgenda[keyword]: This module retrieves an agenda related to a specific keyword and returns it. It takes in a keyword and returns the corresponding agenda. Normally, we only consider using "RetrieveAgenda" when the question is about specific actions or tasks related to a topic.
- RetrieveScirex[keyword]: This module retrieves paragraphs from machine learning papers related to the specified keyword and returns them. It takes in a keyword and returns the relevant paragraphs. Normally, we only consider using "RetrieveScirex" when the question involves understanding specific concepts in machine learning.
- LoadDB[DBName]: This module loads a database specified by the database name and returns the loaded database. It takes in a database name and returns the corresponding database. The DBName can be one of the following: flights/ coffee/airbnb/yelp. Normally, we only consider using "LoadDB" when the
22
question requires data from a specific structured dataset.
- FilterDB[column_name, relation, value]: This module filters a database by a specified column name, relation, and value, and then returns the filtered database. It takes in a column name, a relation, and a value, and returns the filtered database. Normally, we only consider using "FilterDB" when the question requires a specific subset of data from a structured dataset.
- GetValue[column_name]: This module returns the value of a specified column in a database. It takes in a column name and returns its value. Normally, we only consider using "GetValue" when the question requires a specific piece of data from a structured dataset.
- LoadGraph[GraphName]: This module loads a graph specified by the graph name and returns the loaded graph. It takes in a graph name and returns the corresponding graph. Normally, we only consider using "LoadGraph" when the question involves understanding or navigating specific graph structures.
- NeighbourCheck[GraphName, Node]: This module lists the neighbors of a specified node in a graph and returns the neighbors. It takes in a graph name and a node, and returns the nodeâs neighbors. Normally, we only consider using "NeighbourCheck" when the question involves understanding relationships in a graph structure.
- NodeCheck[GraphName, Node]: This module returns the detailed attribute information of a specified node in a graph. It takes in a graph name and a node, and returns the nodeâs attributes. Normally, we only consider using "NodeCheck" when the question requires information about a specific entity in a graph.
- EdgeCheck[GraphName, Node1, Node2]: This module returns the detailed attribute information of the edge between two specified nodes in a graph. It takes in a graph name and two nodes, and returns the attributes of the edge between them. Normally, we only consider using "EdgeCheck" when the question involves understanding the relationship between two entities in a graph.
- SQLInterpreter[SQL]: This module interprets a SQL query and returns the result. It takes in a SQL query and returns the result of the query. Normally, we only consider using "SQLInterpreter" when the question requires data manipulation and extraction from a structured dataset.
- PythonInterpreter[Python]: This module interprets Python code and returns the result. It takes in Python code and returns the result of the code execution. Normally, we only consider using "PythonInterpreter" when the question requires complex computations or custom data manipulation.
- Finish[answer]: This module returns the final answer and finishes the task. This module is the final module in the sequence that encapsulates the result of all previous modules.
Below are some examples that map the problem to the modules.
Question: How many extra minutes did the DL1575 flight take from ATL to MCO on 2022-01-12?
Modules: ["LoadDB[flights]", "FilterDB[Flight_Number_Marketing_Airline=1575, FlightDate=2022-01-12, Origin=ATL, Dest=MCO]", "GetValue[DepDelay]", "GetValue[ArrDelay]", "Calculate[(-17)-(-7)]", "Finish[-10]"]
Question: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?
Modules: ["PythonInterpreter[# solution in Python:
def solution():
# Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?
23
golf_balls_initial = 58
golf_balls_lost_tuesday = 23
golf_balls_lost_wednesday = 2
golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday
result = golf_balls_left
return result]", "Finish[33]"]
Question: What is the corresponding Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task?
Modules: ["ScirexRetrieve[Mean_IoU score of the FRRN method on Cityscapes dataset for Semantic_Segmentation task]", "Finish[71.8%]"]
Question: When was the paper Learning the Principle of Least Action with Reinforcement Learning. published?
Modules: ["LoadGraph[dblp]", "NodeCheck[PaperNet, Learning the Principle of Least Action with Reinforcement Learning.]", "Finish[2021]"]
Question: How many collaborators does Chao Zhang have in the DBLP graph?
Modules: ["LoadGraph[dblp]", "NeighbourCheck[AuthorNet, Chao Zhang]", "Finish[6]"]
Question: How many papers does Chao Zhang and Weihong Lin have in common in the DBLP graph?
Modules: ["LoadGraph[dblp]", "EdgeCheck[PaperNet, Chao Zhang, Weihong Lin]", "Finish[1]"]
Question: Where did Stephenâs Opera performance take place?
# Modules: ["AgendaRetrieve[Stephenâs Opera performance]", "Finish[Lyric Opera]"]
Question: What was the trading volume of coffee on 2000-01-14?
Modules: ["SQLInterpreter[SELECT Volume FROM coffee.coffee_data WHERE Date = â2000-01-14â]", "Finish[10115]"]
Now, you need to act as a policy model, that given a question and a modular set, determines the sequence of modules that can be executed sequentially can solve the question.
# G Key Information of ToolQA
# G.1 Dataset Documentations
The dataset is provided in jsonl format. Each task corresponds to two files: easy and hard (e.g., âflight-easy.jsonlâ and âflight-hard.jsonlâ, etc.). Each data point contains the following fields:
qid: the unique identifier for the question-answer pair; ⢠question: the question to query; ⢠answer: the corresponding ground-truth answer to question.
# G.2 Intended Uses
ToolQA is intended for researchers in machine learning and related fields to innovate novel methods for tool-augmented large language models (LLMs). We also aim to help developers to test their plugins on our dataset.
# G.3 Hosting and Maintenance Plan
ToolQA codebase is hosted and version-tracked via GitHub. It will be permanently available under the link https://github.com/night-chen/ToolQA. The download link of all the datasets can be found in the GitHub repository.
24
ToolQA is a community-driven and open-source initiative. We are committed and have resources to maintain and actively develop ToolQA in the future. We plan to grow ToolQA to include more tasks, tools, and more baseline methods. We welcome external contributors.
# G.4 Licensing
We license our work using Apache 2.012. All the dataset will be publicly released through the aforementioned GitHub link.
# G.5 Limitation
Tool-augmented LLM is a popular and wildly developing direction, which is wildly developing and focused on by a lot of researchers, ToolQA will keep developing and include more tasks, data, tools, and methods in the future.
# 12https://www.apache.org/licenses/LICENSE-2.0
25 | {
"id": "2302.13971"
} |
2306.12672 | From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought | How does language inform our downstream thinking? In particular, how do
humans make meaning from language--and how can we leverage a theory of
linguistic meaning to build machines that think in more human-like ways? In
this paper, we propose rational meaning construction, a computational framework
for language-informed thinking that combines neural language models with
probabilistic models for rational inference. We frame linguistic meaning as a
context-sensitive mapping from natural language into a probabilistic language
of thought (PLoT)--a general-purpose symbolic substrate for generative world
modeling. Our architecture integrates two computational tools that have not
previously come together: we model thinking with probabilistic programs, an
expressive representation for commonsense reasoning; and we model meaning
construction with large language models (LLMs), which support broad-coverage
translation from natural language utterances to code expressions in a
probabilistic programming language. We illustrate our framework through
examples covering four core domains from cognitive science: probabilistic
reasoning, logical and relational reasoning, visual and physical reasoning, and
social reasoning. In each, we show that LLMs can generate context-sensitive
translations that capture pragmatically-appropriate linguistic meanings, while
Bayesian inference with the generated programs supports coherent and robust
commonsense reasoning. We extend our framework to integrate
cognitively-motivated symbolic modules (physics simulators, graphics engines,
and planning algorithms) to provide a unified commonsense thinking interface
from language. Finally, we explore how language can drive the construction of
world models themselves. We hope this work will provide a roadmap towards
cognitive models and AI systems that synthesize the insights of both modern and
classical computational perspectives. | http://arxiv.org/pdf/2306.12672 | Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, Joshua B. Tenenbaum | cs.CL, cs.AI, cs.SC | null | null | cs.CL | 20230622 | 20230623 | 3 2 0 2 n u J 3 2
# ] L C . s c [
2 v 2 7 6 2 1 . 6 0 3 2 : v i X r a
# From Word Models to World Models: Translating from Natural Language to the Probabilistic Language of Thought
Lionel Wong1â, Gabriel Grand1â, Alexander K. Lew1, Noah D. Goodman2, Vikash K. Mansinghka1, Jacob Andreas1, Joshua B. Tenenbaum1
âEqual contribution.
1MIT, 2Stanford
# Abstract
How does language inform our downstream thinking? In particular, how do humans make meaning from languageâand how can we leverage a theory of linguistic meaning to build machines that think in more human-like ways? In this paper, we propose rational meaning construction, a computational framework for language-informed thinking that combines neural models of language with probabilistic models for rational inference. We frame linguistic meaning as a context-sensitive mapping from natural language into a probabilistic language of thought (PLoT)âa general-purpose symbolic substrate for probabilistic, generative world modeling. Our architecture integrates two powerful computational tools that have not previously come together: we model thinking with probabilistic programs, an expressive representation for flexible commonsense reasoning; and we model meaning construction with large language models (LLMs), which support broad-coverage translation from natural language utterances to code expressions in a probabilistic programming language. We illustrate our framework in action through examples covering four core domains from cognitive science: probabilistic reasoning, logical and relational reasoning, visual and physical reasoning, and social reasoning about agents and their plans. In each, we show that LLMs can generate context-sensitive translations that capture pragmatically-appropriate linguistic meanings, while Bayesian inference with the generated programs supports coherent and robust commonsense reasoning. We extend our framework to integrate cognitively-motivated symbolic modules (physics simulators, graphics engines, and goal-directed planning algorithms) to provide a unified commonsense thinking interface from language. Finally, we explore how language can drive the construction of world models themselves. We hope this work will help to situate contemporary developments in LLMs within a broader cognitive picture of human language and intelligence, providing a roadmap towards AI systems that synthesize the insights of both modern and classical computational perspectives.
1
# Introduction
Language expresses the vast internal landscape of our thoughts. We use language to convey what we believe, what we are uncertain about, and what we do not know. We talk about what we see in the world around us, and what we imagine in real or wholly hypothetical futures. We discuss what we want and what we plan to do, and dissect what others want and what we think they will do. We build and pass on new bodies of knowledge in languageâwe ask questions and offer explanations, give commands and instructions, and propose and refute theories. Some of these ideas can be expressed in part through other means. But language stands apart for its flexibility and breadth, and its seeming proximity to our thoughts.
What is language? How does language get its meaning, and when should we say that a person or machine knows, understands, and can use it? What is the relationship between language and the rest of general cognitionâwhat allows language to inform and support so much of thought? This paper focuses on these questions as they relate to human language and thought, in computational terms. What integrated cognitive theory can model how language relates to the other core systems of human cognition? If we seek to build AI systems that emulate how humans talk and think, what architecture can integrate language robustly into systems that support the full scope of our thought?
Code for the examples in this paper is available at: github.com/gabegrand/world-models. Correspondence: co-primary authors ([email protected], [email protected]); co-supervisors ([email protected], [email protected]).
1
# INTRODUCTION
Theories of cognition have long considered human language and thinking to be deeply related, but fundamentally distinct. Thinking, in many traditional cognitive theories, revolves around goal-directed world modeling, inference, and decision makingâconstructing mental models of the world that reflect prior beliefs, can be updated from new observations, and support rational prediction and decision making towardâs oneâs goals (Craik, 1967; Gentner & Stevens, 2014; Johnson-Laird, 1980, 1989; Lake, Ullman, Tenenbaum, & Gershman, 2017; Morgan, 1999; Nersessian et al., 2010). Language, in contrast, centers around communicating these thoughts to others, and receiving their thoughts in turn. In most linguistic theories, human languages are mappings between the internal representations of thought and an externalizable symbol system, which might be phonemes, signs, or glyphs (Frege, 1892; Heim & Kratzer, 1998; Lewis, 1976). To produce language is to map thoughts into these external symbols, and to understand language is to transduce from these external symbols back into the representations of thought.
The theoretical distinction between language and thought rests on multiple intersecting lines of evidence. Prior to learning language, infants are born equipped with a powerful toolkit for modeling and thinking about the world, including an understanding of physical objects and events, and the goals and actions of agents (Spelke, 2022; Spelke & Kinzler, 2007), and general abilities for learning statistics and structure (Saffran, Senghas, & Trueswell, 2001; Xu et al., 2021). Building on these foundations, children acquire language from relatively sparse input data, rapidly generalizing beyond the utterances they hear to produce and understand entirely new ones (Bloom, 2002; L. Gleitman, 1990; L. R. Gleitman, Cassidy, Nappa, Papafragou, & Trueswell, 2005; Landauer & Dumais, 1997; Pinker, 1998; L. Smith & Yu, 2008); they then use language to acquire new concepts they would not get merely from direct experience (Carey, 2009; Gopnik, 1996; Wellman & Gelman, 1992). Language and thought also appear to operate in distinct but interacting brain systems: neuroimaging and neurological studies reveal a âlanguage networkâ specialized for processing sentences, functionally and anatomically separate from but closely connected to brain networks supporting other aspects of general cognition (Fedorenko & Varley, 2016; Mahowald et al., 2023).
âThese empirical findings have shaped decades of computational models in cognitive science and AI. To model the expressiveness of human cognition, an influential computational paradigm suggests that humans compose and execute mental programs in an internal language of thought (Fodor, 1975), a structured symbolic substrate for representing conceptual knowledge that provides a general interface to algorithms for problem solving and reasoning. These symbolic systems are not merely logic engines; they support our probabilistic inferences, and rich intuitive simulations (Goodman, Tenenbaum, & Gerstenberg, 2014; Oaksford & Chater, 2007; Russell & Norvig, 2021). This paradigm underlies many of the success stories in cognitive science and related applications in AI. It has influenced models that capture how people draw causal and explanatory inferences about facts and observations (Pearl, 1988; Pearl et al., 2000), learn and generalize concepts from few examples (Lake et al., 2017); plan actions over long time horizons and under complex conditions (Kaelbling & Lozano-Pérez, 2013; Russell & Norvig, 2021); imagine and predict the physical world (Battaglia, Hamrick, & Tenenbaum, 2013; Ullman, Spelke, Battaglia, & Tenenbaum, 2017); and reason about other agents with their own beliefs and goals (C. Baker, Saxe, & Tenenbaum, 2011). Within linguistics and natural language processing, in turn, this paradigm underlies semantic parsing systems designed to map from human language into symbolic computational representations. It has yielded AI systems that could follow instructions (Tellex et al., 2011) and answer natural language queries with respect to structured knowledge representations (Klein & Manning, 2003; Liang, 2016; Steedman, 2011; Y. W. Wong & Mooney, 2007); as well as cognitive models that capture how human children learn the grammar and meaning of expressions in their native language (Abend, Kwiatkowski, Smith, Goldwater, & Steedman, 2017; Chater & Manning, 2006; Frank, Goodman, & Tenenbaum, 2009; Gauthier, Levy, & Tenenbaum, 2018; Goldwater, Griffiths, & Johnson, 2009; Perfors, Tenenbaum, & Regier, 2011; Piantadosi, Tenenbaum, & Goodman, 2012).
Despite this progress, however, modular and symbolic models of language and thought have been dogged by persistent critiques of their scalability and scope. Cognitive and AI researchers over the years have carved off specific domains of world knowledge, constructing bespoke representations to model them without a general account of whether they would generalize to all of human knowledge, or how they could be scalably learned. Semantic parsing systems inherited these critiques, and faced additional challenges in implementing the mapping from sentences into symbolic representations. These mapping functions were either hand-engineered or learned from strong supervision on specific domains of language, limiting them to brittle, imperfect models of the breadth and complexity of real human discourse.
2
1
# INTRODUCTION
In just the last few years, a serious challenge has emerged to the traditional view of language and thought as distinct but interacting components of the mind, each modeled using structured representations. Large language models (LLMs) use a new generation of attention-based deep neural networks to learn the probabilistic distributions of words from vast datasets of human language, generally training on orders of magnitude more data than a human encounters in their lifetime (Bommasani et al., 2021; T. B. Brown et al., 2020; OpenAI, 2023c; Rae et al., 2021; Vaswani et al., 2017). The underlying computational objective that drives these models is not itself new. LLMs follow in the tradition of distributional approaches to discovering structure in language (Firth, 1957; Harris, 1954; Osgood, 1952), which seek to extract representations of meaning from statistical patterns in how words are used in context (Dumais et al., 2004; Griffiths, Steyvers, & Tenenbaum, 2007; Mikolov, Sutskever, Chen, Corrado, & Dean, 2013; Sahlgren, 2008). What is new, however, is the scale and scope of todayâs distributional vision, which has expanded in stages. A first generation of LLMs, trained specifically to predict words in context, produced such fluent language that they challenged traditional symbolic approaches to modeling language (Devlin, Chang, Lee, & Toutanova, 2018; Peters et al., 1802; Radford et al., 2019). Their qualitative success, as well as internal representational probes, suggested that linguistic structures sufficient for grammatically coherent language could be learned entirely from modeling the statistics of words (Piantadosi, 2023; Tenney, Das, & Pavlick, 2019). By scaling to even larger datasets and neural networks, LLMs appeared to learn not only the structure of language, but capacities for some kinds of thinking; they could learn new words in context, and extract patterns in language from a few examples that they could generalize locally to similar cases (T. B. Brown et al., 2020). The most recent LLMs have been trained not only to model the statistics of language but explicitly to reason, with targeted supervision on instruction following, writing code, and other forms of human dialog and feedback in conversational contexts (Chen et al., 2021; OpenAI, 2023a, 2023c; Ouyang et al., 2022). They produce such fluent language on a wide variety of tasks that many have begun to ask whether merely more training of this sort, with increasing scale, could learn representations sufficient for general intelligence (Bubeck et al., 2023). Proponents of the most extreme âscaling hypothesisâ have argued that because language is used to express so much of human thought, a sufficiently large and performant predictive language model would effectively have to construct an internal model of all of cognition (Branwen, 2022).
This theoretical vision has sparked both excitement and controversy, but proponents and critics agree that it raises its own questions about its long-term scalabilityâmost significantly, what will be required to close the outstanding gaps between todayâs LLMs and general cognitive models that reason systematically and consistently about the language they receive or produce. Current LLMs can produce impressive results on a set of linguistic inputs and then fail completely on others that make trivial alterations to the same underlying domain (Ullman, 2023); they mix confident answers to complex questions with equally confident, hallucinated language that does not reflect a consistent, calibrated notion of truth or belief (Bubeck et al., 2023; OpenAI, 2023c). These issues make it difficult to evaluate whether LLMs have acquired cognitive capacities such as social reasoning and theory of mind (Ullman, 2023), or to compare different kinds of world modeling and planning tasks (Valmeekam, Sreedharan, Marquez, Olmo, & Kambhampati, 2023). One approach to solving these problems is through additional data. Perhaps fully robust, systematic reasoning will finally emerge if models are trained on still more language, or supervised more explicitly on data from complex reasoning tasks. This scaling route raises practical questions about whether it will be possible to acquire enough data to train such a model, as well as theoretical questions whether more data and more parameters alone will in fact yield robust systems for thought. Another strategy in recent work seeks to build more robust cognitive capacities by augmenting LLMs with various external tools for structured representation and symbolic reasoning, such as calculators (Cobbe et al., 2021), logic engines (Weir & Van Durme, 2022), databases (Alon et al., 2022; Borgeaud et al., 2022; Izacard et al., 2022; Thoppilan et al., 2022), physics simulators (R. Liu et al., 2022), planners (B. Liu et al., 2023), and APIs for executing arbitrary code (Karpas et al., 2022; OpenAI, 2023c; Schick et al., 2023). But these new hybrid approaches resurrect many of the same long-term scalablity challenges that confronted earlier semantic parsing and knowledge representation systems, by designing a menagerie of bespoke representations and tools without a broader account of how they will scale towards general models of language and thought.
In this paper, we consider a different approach to integrating the strengths of modern language models and classic symbolic architectures, one that draws on but also runs counter to recent trends in AI, in a sense flipping these scaling questions on their head. Instead of trying to turn models trained to predict language into models that might genuinely thinkâfilling each gap in reasoning we discover through yet more
3
1
# INTRODUCTION
Approaches to language-informed thinking Large language models Classical symbolic models vv Ax.VX... > true iii)" ) World knowledge (define ...) Observations » iti va > (condition ...) fog Questions (query ...) Natural Meaning Probabilistic Inference Distributions over language function language of thought function possible worlds Our framework: Rational Meaning Construction
Figure 1: Human language understanding supports flexible inferences in a process we term language-informed thinking. Computational approaches to language-informed thinking sit on a neurosymbolic continuum: On one side, classical symbolic models (top right) yield systematic, structured inferences, but are typically limited to narrow linguistic domains and often require hand-engineering. On the other side, large language models (LLMs; top left) achieve remarkable facility with open-domain natural language, but struggle to ground reasoning in a consistent world state that supports coherent inferences, predictions and plans. Our rational meaning construction framework decomposes language-informed thinking into two modules: (1) A meaning function translates natural language into probabilistic programming language (PPL) statements that represent linguistic meaning with respect to a symbolic world model. (2) An inference function computes probabilities over the space of possible worlds consistent with and conditioned on information in the language. In the rest of this paper, we illustrate how our framework can combine the strengths of LLMs and PPLs, affording both broad coverage of natural language and a principled treatment of reasoning about uncertain events, outcomes, and scenarios.
data, new kinds of language training or linguistic prompting tricks, or by plugging in yet another external toolâwe ask: what are the prospects for a unifying computational framework guided by the study of thought and language in the human mind and brain, as well as what we have learned from multiple eras of AI? Can we build intelligent architectures that use, learn and understand language as people do, informed by neuroscience constraints and developmental trajectories? That is, can we build models in which language is learned efficiently within one relatively small, modular computational system, which interfaces generally with other systems dedicated to robust world modeling and reasoning? What architecture lets language build on pre-existing capacities for symbolic world modeling and inference, while also allowing linguistic meanings and world knowledge to scaffold and bootstrap each other, as a learnerâs experiences and competences grow?
This paper attempts to show what such a model might look likeâand how it can build theoretically and practically on the insights from both classical paradigms for language and thought, and the recent successes of statistical learning made by large language models. We propose a framework for intelligent computational architectures that reason about and learn from language, but we begin with a proposal for what it means to think. As in the traditional cognitive view, thinking at its core is constructing general-purpose representations for modeling the entities and events in the world, sufficient to support rational, coherent inferences under
4
1
# INTRODUCTION
uncertainty and planning actions that achieve our goals. We then consider how language relates to this architecture to support language-informed thinkingâhow language sets up world modeling and inference, to guide, constrain, and drive our downstream thought, and grow new thinking capacities.
Our proposal, which we call rational meaning construction, rests on the integration of two com- putational components, each which we suggest can be instantiated using modern computational toolsâa probabilistic language of thought for constructing structured models of arbitrary situations, which supports supports coherent belief updating and inferences over them; and a general mechanism for taking natural language and constructing meaning from it, represented as distributions over expressions in this language of thought (Fig. 1). We propose that probabilistic programs can formally instantiate the first component. They offer a structured representation for expressing novel situations and arbitrary problems with respect to a meaningful model over possible world states, a coherent notion of conditional belief updating, and a systematic framework for inferences with respect to queries and goals. We propose, in turn, that meaning construction can be modeled as translation from utterances in language to expressions in a general proba- bilistic programming language. Theoretical and empirical results have long suggested that human languages implement locally compositional, efficiently learnable mappings between symbolic representations of thought and external symbol systems. We therefore propose that code-trained large language models can be viewed as in-principle implementations of broad, context-sensitive, and resource-rational meaning functions, in that they can be used to efficiently infer distributions between language and programs from stored, prior patterns in the background distribution of language and code. By integrating these two components, we propose that this paradigm suggests a general framework by which language can meaningfully relate to many fundamental aspects of cognition, modeling how we might condition on language to systematically update our beliefs, pose new questions and goals in language, and convey structured background information or even define new relevant concepts about a situation or about the world.
In Section 2, we give an overview of this framework, describing the overall structure and more detailed rationale behind the computational components we build on in the remainder of this paper. We then describe a concrete but minimal implementation of this framework using contemporary probabilistic programming and language modeling tools, intended to demonstrate the basic computational components of this approach and elucidate the scope and scalability of the broader proposal.
Given this general paradigm, we first illustrate the potential breadth of this approach for integrating meaning construction and reasoning, showing how it might address a core set of computational and cognitive domains that we communicate about in language (Fig. 2). Each of these examples uses minimal pedagogical examples intended to suggest how this approach integrates language with important bodies of work from computational cognitive science and artificial intelligence. We first show how this framework can condition on language in order to describe and reason about uncertain situations with respect to an ongoing discourse (Section 2.2), then show how this approach can be extended to reason about relational systems (Section 3.1), physical and perceptual scenes (Section 3.2), and social situations involving agents with goals and plans (Section 3.3).
We then turn to how this approach might begin to address core scalability challenges that confront In traditional approaches to modeling thinking as symbol processing, whether logical or probabilistic. Section 4, we show how language can support growing knowledge autonomously, without hand engineering, by using the rational meaning construction framework to construct a broad range of new concepts in existing models and even whole new world models, which in turn support coherent downstream reasoning.
Ultimately, this paper is a prospective one, and the examples presented here are intended to convey a sufficiently concrete proposal to suggest avenues for future work. In Section 5, we outline what we see as some of the most significant open questions and future directions raised by this framework. These include theoretical questions that relate our approach to classical models of language, open cognitive directions for extending this approach to model language acquisition and production, and important engineering directions necessary for scaling inference, robust translation, and learning under this general paradigm. Finally, in Section 6, we conclude by looking ahead to the longer-term implications of this proposal for modeling intelligent systems that use, understand, and think about language as we do.
5
1
# INTRODUCTION
Probabilistic Reasoning SFr-rT Crr-rTt To 81.39 Tr 0.27 Bayesian tug-of-war Knowledge about the world (define Generative » Won-against (team-1 team-2) 3 Cteam-strength tean-1) (team-strength team-2))) world models Observations about the world (condition ce (won-against â(john mary) Condition > "Com sue))) statements > Questions about the world (query © (strength âmary) Query q (strength 'tom))) statements , whichever Relational Perceptual and Social Reasoning Physical Reasoning Reasoning Chatie Dana Kinship systems Visual and physical scenes (define (define (define grandfather-of? (name_a name_b) object (obj-id) actions (agent-id) (exists (lambda (x) Cand list (choose-shape obj-id) Gif (has_bike? agent-id) (father-of? name_a x) (choose-color obj-id))) list âis_walking 'is_biking) (parent-of? x name_b))) (list âis_walking))) (condition (condition (condition (grandfather-of? âcharlie (= Gength (and Cloves? âalex 'sushi) âdana)) ((filter-shape âmug) (hates? âalex âpizza) ((filter-color red) (has-bike? âalex))) (objects-in-scene "this-scene)))) 1)) What do How many mug: 1 think Alex will do? (query (query (query (get_actions âalex)) (filter-tree Cength (lambda (x) (and ((filter-shape âmug) (child-of? x âcharlie) (objects-in-scene (parent-of? x âdana))))) âthis-scene))))
Figure 2: Understanding language in four domains of reasoning that form the core of this paper. Probabilistic reasoning requires integrating sparse evidence to predict the outcomes of uncertain events, like the winners of tug-of-war matches. Relational reasoning involves maintaining and updating coherent beliefs about structured domains, like family trees, based on relational information. Perceptual and physical reasoning links language to our sensory and intuitive physical knowledge of objects in the external world, such as kitchen items on a tabletop. Social reasoning involves reasoning about the minds of other intelligent agents, such as how their goals, preferences, and circumstances shape their actions as they navigate in the world. Across all the domains, we present a unified framework that translates language into code in a probabilistic programming language to facilitate human-like reasoning.
6
2 OVERVIEW OF THE KEY IDEAS
# 2 Overview of the key ideas
The central goal of this paper is to propose a new computational framework, rational meaning construction, which relates language to thought. This framework licenses a concrete class of computational architectures for building intelligent systems that use language, which we propose can be implemented using modern AI tools. In this section, we briefly overview the key ideas that form the basis of this proposal. We draw on three observations from a rational, probabilistic perspective on biological intelligence and human language:
A rational perspective on intelligent thought. Biological intelligence encompasses many computational capacities. The foundational notion of thought we focus on here centers on rational inference and decision making in service of oneâs goals (Anderson, 1990; Chater & Oaksford, 1999). Under this perspective, thought comprises systems for modeling the world. These internal world models allow us to infer the particulars of a situation from whatever information is at hand, evaluate alternative world states and imagine possible future ones, and decide on actions that might bring one towards valuable future states in the world. Following extensive work in computational cognitive science, we view the world models that support biological intelligence as structured and probabilistic (Goodman et al., 2014; Griffiths, Chater, Kemp, Perfors, & Tenenbaum, 2010; Lake et al., 2017), designed to integrate the noisy evidence an agent receives into causal, explanatory models that allow them to maintain coherent beliefs about the world and generalizably infer consistent, useful predictions and plans. This basic, underlying view of intelligent thought draws on empirical evidence from essentially every species with a brain, from bees (Biernaskie, Walker, & Gegear, 2009; R. F. Wang & Spelke, 2002), to zebrafish Bolton et al. (2019); R. E. Johnson et al. (2020), mice (English, Nejad, Sommerfelt, Yanik, & von der Behrens, 2023), birds (Isomura, Parr, & Friston, 2019), and primates (Khalvati, Kiani, & Rao, 2021). Informally, a rational view of thought can be summarized as the ability to solve useful problems given our internal models of the world, ranging from navigation and foraging to physical prediction and social reasoning. Against this overarching picture of thought, human intelligence further stands out for its flexibility and expressiveness. We invent our own problems along with new approaches to solving them, rather than sticking to a limited set of largely innate goals and strategies (Tomasello, 2022). A few other species, non-human primates, dolphins, and some birds, are creative problem-solvers and problem-creators, but none come close to the range of goals humans can adopt (Chu & Schulz, 2023). Uniquely in the natural world, humans think about and come to understand problems far beyond the narrow range necessary for our immediate survival, considering goals and questions that draw on abstract, culturally constructed, and even entirely hypothetical systems for modeling and conceptualizing the world (Dennett, 2017).
A rational perspective on language. As with thought, language also encompasses many systems and capacities. This paper focuses on the class of problems we refer to as language-informed thinking, the general means by which language informs the inferences and decisions of an intelligent agent. We take a broadly rational perspective on languageâwe consider language to be a system of goal-directed actions for externalizing and communicating thoughts to other intelligent beings (Chater & Manning, 2006; Gibson, 2014; Goodman & Frank, 2016). In this context, we frame the problem of deriving meaning as inferring the mappings between a languageâs system of external communicative signals into the representations of rational thought. It is worth highlighting that thought does not require language and is distinct from language in the human brain (Mahowald et al., 2023). Non-human species, and pre-verbal infants (Spelke, 2022), are clearly capable of modeling the world towards their inferences and goals without language. But for humans, language clearly plays a profound role in determining the problems we think about, and how we think about them. Our natural languages allow us to communicate an extraordinarily broad range of our thoughts about the problems we pose and solve, including our abstract and general world knowledge, our specific beliefs about a situation, the particular questions or goals we have or want to pose to others, and our approaches to reasoning about them.
A resource-rational perspective on language and thought. Finally, our integrated computational approach to language and thought builds on extensive evidence that humans are resource-rational thinkersâ under finite constraints of time and memory, we rationally allocate computational resources in order to make useful inferences and plans (S. J. Gershman, Horvitz, & Tenenbaum, 2015; Lieder & Griffiths, 2019). Resource rational agents amortize computational effort across prior experience and problems, storing and
7
2.1 A rational meaning construction framework
2 OVERVIEW OF THE KEY IDEAS
reusing prior computation towards similar new problems that we encounter in the future (S. Gershman & Goodman, 2014; Le, Baydin, & Wood, 2017). Certain domains of inferences share more structure than others, and evidence suggests that we therefore heavily amortize them. Prior work, for instance, suggests that computations involved in basic perceptual activities (Brooke-Wilson, 2023; Dasgupta & Gershman, 2021; Fodor, 1983), such as object recognition under common lighting conditions, are highly amortizable from reusable patterns in computation that are learnable and shared across a background distribution of perceptual instances. This view suggests why fast, bottom-up pattern recognition models have made great advances in modeling perception in recent years, while it has proved much more challenging to amortize the wide range of flexible inferences required for arbitrary problem solving.
We propose an analogous resource-rational perspective on the kinds of computation implicated in language- informed thought. Under almost every theoretical and empirical account of linguistic structure and semantics, the mappings between language and meanings should be highly amortizable across the background distribution of languageâthere are structured, systematic, and learnable patterns in how units of language map onto units of thought. The idea that meaning construction should be highly amortizable follows from our view on language itself as an efficient communicative system. Extensive empirical evidence suggests that communicative pressures shape how language maps onto meanings at every level of linguistic structure, from individual morphemes (Bybee, 1985) to patterns in how common syntactic frames communicate meaning (L. Gleitman, 1990; Grimshaw, 1981), and even reusable pragmatic implications present across common discourse situations (White, Mu, & Goodman, 2020). But while we take the view that a resource-rational agent should intelligently learn and reuse prior computation when possible, we do not view language-informed thinking, or thinking in general, as solely a matter of learning and interpolating over statistical patterns from prior experience. When we think, including when we think about the meanings we recover from languageâto update our beliefs, to follow instructions, or to answer questions posed in languageâwe must be able to flexibly model arbitrary situations and support capacities for general problem solving, including inference, planning, and simulation, under a wide range of new and unencountered circumstances.
The efficient learnability of human language also highlights that, in many senses, the computational relationship between language and thought in humans is almost the inverse of that in todayâs LLMs. For humans, language could be characterized as an emergent property of thinking. Infants can model the world and draw inferences well before they know language (Gopnik, 1996; Spelke, 2022), and reliably acquire complete linguistic capabilities from exposure to relatively tiny amounts of language (R. Brown, 1973). Congenitally-Deaf humans born with no language input spontaneously develop languages to communicate their thoughts, with the same basic hallmarks of mature natural languages (Goldin-Meadow, 2012; Pyers, Shusterman, Senghas, Spelke, & Emmorey, 2010; Senghas, Kita, & Ozyurek, 2004). This paper seeks to understand and model the cognitive and computational structures underlying this human scaling route to intelligence and language use â one that begins with robust capacities for thought, and scaffolds language efficiently on top of them to then offer a powerful tool for driving and constructing new thought.
# 2.1 Our proposal: A framework for modeling rational meaning construction
The perspective we offer above draws from theoretical and empirical work that precedes this paper. Our core contribution in this paper is to propose a new computational framework in light of these observations, that seeks to unify prior symbolic, probabilistic inference and statistical learning traditions and to take advantage of the clear computational advances made by modern LLMs as learned statistical models of language. We describe a framework for rational meaning construction in which linguistic meaning is formalized as a context-sensitive mapping from natural language to a distribution over expressions in a probabilistic language of thought (PLoT) for rational world modeling and inference. Under this framework, we then propose that large language models trained on language and code can be used to implement meaning functions in a resource-rational architecture â they can implement learned, broad-coverage mappings between language and code; and they can be understood as part of a human-like, resource-rational system that efficiently infers these mappings using stored patterns amortized from the prior joint distribution over language and code. This motivates the concrete architecture we propose and illustrate throughout the remainder of this paper, and its two main components for modeling thinking and modeling language relative to thinkingâor how language informs thinking.
8
2.1 A rational meaning construction framework
2 OVERVIEW OF THE KEY IDEAS
# 2.1.1 Modeling thinking
We propose implementing thinking using probabilistic programs as a general representational substrate for building world models and specifying rational inferences over them. This proposal builds on prior work in cognitive science and AI formalizing how a broad class of problems can be expressed as probabilistic programs (Chater & Manning, 2006; Goodman et al., 2014), following a generic inference query motif (Goodman, Mansinghka, Roy, Bonawitz, & Tenenbaum, 2008) â a probabilistic program that combines a generative world model that models abstract, causal beliefs about probable world states; specific evidence that an agent conditions on; and a particular query being posed as the question or goal for thinking. Inference to solve a problem consists of formally computing or sampling from a probability distribution over answers to this question, specified by the world model and conditions. This computational proposal forms the backbone of the probabilistic language of thought model of general human cognition (Goodman et al., 2014), and has been used empirically to model a wide range of human inferences, including those that draw on visual perception (V. K. Mansinghka, Kulkarni, Perov, & Tenenbaum, 2013), physical simulation (Battaglia et al., 2013), and social reasoning (C. Baker et al., 2011). It is designed explicitly to formalize a central property of human thought â the capacity to expressively and flexibly pose problems involving entirely novel situations and goals, and to solve them relative to a computable representation of the world and internal belief.
# 2.1.2 Modeling language relative to thought
Given this model for thought, we propose formalizing rational meaning construction as a broad-coverage, contextual translation function that maps language into a distribution over expressions in a probabilistic language of thought. This proposal builds most closely on and draws inspiration from efforts to articulate a probable world semantics for natural language in prior work (Goodman & Lassiter, 2015), in order to express how language could compactly convey uncertain propositions and vague meanings with respect to a formal probabilistic generative model. It also builds on the longer history of symbolic semantic theories we overview in the introduction, including formal semantics theories that model language as mapping into formal propositions over possible worlds (eg. Heim and Kratzer (1998); Lewis (1976)), and semantic parsing systems (eg. Abend et al. (2017); Klein and Manning (2003); Liang (2016); Steedman (2001); Y. W. Wong and Mooney (2007)) that map language into formally executable program expressions.
Our goal, however, is to broaden and generalize these framings to suggest a general framework for modeling how language can interface with and inform such a broad swatch of human cognition. By positing that meaning is a general mapping between sentences and expressions in a probabilistic language of thought, we believe that a rational meaning construction approach can elaborate on and concretely model core desiderata of a coherent theory of linguistic meaning â modeling how meanings drive inferences about what is true and probable; formalizing how language can pose propositions and queries that are then evaluated with respect to an internal model over probable worlds; and relating meaning to the general computational systems for representing, thinking about, and receiving new information about the world from broader cognition.
This proposal suggests a wide class of possible architectures that map from language into probabilistic programsâin principle, any general mapping function that expresses a distribution over programs conditioned on sentences in context. Under this umbrella of possible implementations, we propose finally that large language-to-code models can be used to generally instantiate these meaning functions. Unlike prior semantic parsers or attempts to hand implement mappings between language and code, LLMs offer a concrete means of instantiating far more broad-coverage mappings between human sentences and meanings than have been previously possible. They are also context-sensitive, in that they can construct meanings for an utterance that condition both on the general distribution of language and thought and a local linguistic and thinking context. They can condition translation on a local discourse context, when prompted with prior utterances, and on a local problem under consideration, when prompted with existing code in a probabilistic program.
By using LLMs to map between language and code, this proposal is also closely related to the recent lines of work we review in the introduction that seek to augment and connect LLMs with various structured and symbolic reasoning toolsâboth domain-specific reasoning engines like planners and physics engines (eg. B. Liu et al. (2023); R. Liu et al. (2022)), and more general APIs for code execution (eg. Karpas et al. (2022); OpenAI (2023b); Schick et al. (2023)). As we demonstrate throughout this paper, however, we propose that the probabilistic language of thought can offer a cognitively-motivated, unifying symbolic substrate for
9
2.1 A rational meaning construction framework
2 OVERVIEW OF THE KEY IDEAS
interfacing between language and many core aspects associated with general cognition. It provides a general motif for structuring and constructing generative world models, which can nest calls to other domain-specific systems (such as planners and physics engines); and an overarching framework for modeling how diverse kinds of observations can be used to update these models and answer new queries, framed as Bayesian conditioning and inference. With respect to the more general landscape of large statistical language models, this proposal finally suggests one way to situate the strengths of LLMs into a more human-like, modular framework for language-informed thinking. Rather than look to statistical patterns to capture all of the ways we think, plan, and reason about language, this resource-rational approach seeks to ground distributional aspects of language into a framework that can leverage learned prior patterns when they are usefulâwhile also modeling how language can construct and relate to coherent world models and algorithms for explicit, novel decision making and inference.
# Illustrating the architecture by example
This general architecture is best explained through concrete implemented examples, which we give in the next sections. For each of the four domains of reasoning shown in Fig. 2, we work through a representative dialog between a speaker of English and our language-informed thinking computational architecture, which could stand in for how we model another human beingâs understanding and thinking about the speakerâs language, or the ways we hope a human-like AI system would similarly respond.
For pedagogical reasons, we have chosen to implement these examples using one particular probabilistic programming language and one particular language-to-code model. These particular tools are not necessarily the most performant or scalable AI solutions; nor the best accounts we have of the corresponding components of human architecture. Nevertheless, they are familiar and simple, and provide the most direct route we know to illustrate our ideas in ways others can also experiment with. To elaborate on these choices:
⢠The probabilistic language of thought we use to express inference problems is Church (Goodman et al., 2008), a Turing-universal probabilistic programming language constructed on top of the functional programming language Scheme. We have used the WebChurch dialect which implements several general inference procedures, but we have chosen the simplest and most generalâand least efficientâapproach based on rejection sampling: Inference is based on drawing samples from the prior over world states described by the generative model, and rejecting those that fail satisfy the constraints of any observation conditions. The samples that remain constitute a posterior sample over possible worlds consistent with the observed information, sufficient to answer the queries under consideration in the language discourse. Other similarly functional PPLs such as WebPPL or Gen could have been chosen instead. In Section 5, we discuss future directions for extending and scaling inference beyond these simple illustrative implementations.
⢠The language-to-code model we use to amortize meaning construction over programs is Codex model (Chen et al., 2021), a GPT-3-based language model fine-tuned on source code, which provides pairings between natural language and code with comments, drawn from programs on GitHub and other sources. Since the release of Codex, many other language-to-code models have been developed, and more recent versions of GPT-based language models are now routinely trained on large amounts of source code; we believe these could be used to similar effect. In Section 5, we also discuss future directions for more cognitively plausible training and updating of neural models that amortize inference in joint distributions over natural language and probabilistic languages of thought.
Finally, before turning to the examples, we want to add an important note about our intentions and goals. The examples are designed to be illustrative and pedagogicalâwe choose them for their simplicity and clarity, and to show how prior empirical and computational work from cognitive science can be related under this general framework to language. Each example gestures at a larger domain of reasoning, but, of course, each domain is much broader than what we can implement here. Each example is also representative of a wide class of computational cognitive models that can be instantiated in a probabilistic language of thought, and which we propose can be integrated with natural language inputs and outputs under a rational meaning construction framework. In each section we therefore also discuss how this framework might be scaled, and what more work may be necessary, to scale from these examples towards a richer model of language in relation to those domains.
10
2.1 A rational meaning construction framework
2 OVERVIEW OF THE KEY IDEAS
We also hope that these examples, and other variations that elaborate on them and on the core domains of reasoning we discuss here, will offer useful starting points for more rigorous, systematic, cognitively- oriented evaluation and interpretation of the reasoning processes emergent in large language models and other language-based AI systems. In our own preliminary evaluations of these domains, we find that current large language models show many of the properties we discuss in the introduction. In some cases they appear to approximate implicitly the representations and algorithms we seek to model explicitly. In others, particularly with more complex modifications beyond these simple examples, we find that large language models left to their own devices produce outputs that diverge from our intuitions. We seek here to model the representations with which people make meaning from language in relation to all of these domains, but hope that these frameworks will be useful for understanding other computational systems that use language as well, including interpreting the representations that large language models already learn or should seek to acquire.
# Graphical conventions
Throughout the examples presented in this paper:
Ss
Translations mapping from language into probabilistic programs, produced by Codex, are indicated by a neural network icon.
Probabilistic inferences, performed by Church, are indicated by a cog icon.
11
2.2 Understanding language with probabilistic reasoning
2 OVERVIEW OF THE KEY IDEAS
# 2.2 Understanding language with probabilistic reasoning
To illustrate our framework, letâs consider a concrete scenario that involves reasoning from language in the face of uncertainty. Suppose a friend is telling you about a tug-of-war tournament that took place the prior weekend in which the authors were participating:
Right off the bat, Josh won against Lio. He then proceeded to claim victory against Alex. Even working as a team, Lio and Alex still could not beat Josh!
In order to understand this story, it is useful to construct a little mental model: there are different players, they face each other solo or in teams, and based on his track record, Josh appears to be particularly strong. Now, suppose your friend tells you about a newcomer: In a huge upset, Gabe managed to best Josh in the fourth round. Maybe Gabe is even stronger than Josh! Or, perhaps Josh was simply feeling lazy in the last match, in which case, Gabe might not actually be so strong. To clarify, you might ask a question, Who is stronger: Gabe or Josh? Your friendâs answer, which might itself express uncertainty, will nevertheless provide further information for you to incorporate into your understanding.
In making meaning from language about a scenario like the above, you are engaging in probabilistic reasoning: integrating over different possibilities in order to infer likely explanations. People are remarkably proficient at making inferences from exactly this kind of sparse evidence. Sometimes, we acquire this evidence through direct experienceâby watching the tournament, for instanceâbut often, this kind of information comes to us through language that cues us to update our beliefs accordingly. Critically, in order to reason consistently, we need to represent core aspects of the situation: who are the different actors, what events took place, and what inferences have we already made? To this end, it is extremely useful to have a world model, which we defined earlier as a probabilistic generative model that encapsulates the key mechanics of a domain and facilitates coherent, causal explanations of events. In this section, our aim is to further formalize what exactly we mean by world models and how large-scale neural models might serve as an interface between natural language and these kinds of cognitive representations.
World models as generative programs. The core of each example in this paper is a probabilistic generative model that defines the mechanics of a domain. For the purposes of this demonstration, and throughout Section 3, we focus on reasoning from language given a pre-specified world model. Later, in Section 4, we show how language can be used to grow out and construct new world models.
As a playground for this initial demonstration, we consider the âBayesian tug-of-war,â a classic experimental domain in cognitive science that requires making inferences about the latent traits of individuals from sparse evidence. Prior work establishes that Bayesian inference in a probabilistic generative model closely captures peopleâs predictions about scenarios in the tug-of-war (Gerstenberg & Goodman, 2012; Goodman et al., 2014), and that simple sentences can be mapped onto queries in this model (Goodman & Lassiter, 2015). Here, we build on this work to give an account for how people might turn open-ended natural language into statements in the probabilistic language-of-thought.
In tug-of-war, we start with a generative model of a tournament in which players of varying strengths compete in a series of matches, facing off either solo or as part of fluid teams (Fig. 3A). Each player has a latent strength value randomly sampled from a Gaussian distribution (with parameters arbitrarily chosen as µ = 50 and Ï = 20). As an observer, our goal is to infer the latent strength of each individual based on their win/loss record. However, players sometimes donât pull at their full strength and each player has a different intrinsic âlazinessâ value (uniformly sampled from the interval [0, 1]) that describes how likely they are to be lethargic in a given match. The full Church code for the tug-of-war is given in Appendix A.1.1.
Linguistic meanings as probabilistic program expressions. While the generative model defines the generic mechanics of the domain, we want to be able to talk about specific people and events. In our framework, we focus on two kinds of linguistic utterances:
Observations provide information about people, objects, and events in the world; e.g., âJosh faced off against Lio and won.â In our framework, we translate observations into condition statements in Church, which update the state of the world model to reflect new facts. Note that condition statements have no return value; instead, they constrain the world model such that downstream inferences must be consistent with respect to the conditioning statement.
12
2.2 Understanding language with probabilistic reasoning
2 OVERVIEW OF THE KEY IDEAS
(A) Generative world model (define strength (mem (lambda (player) ee (gaussian 50 20)))) PY 58.07 (define laziness (mem (lambda (player) (uniform @ 1)))) F 3 Q.27 (define (team-strength team) (sum (map (lambda (player) Cif (flip (laziness player)) (/ (strength player) 2) ââ_ x (strength player))) ° team))) (© (team-strength team-1) (team-strength team-2))) (define (won-against team-1 team-2) _ J T T =~ T T âd nl o ey (B) Translation examples for LLM prompting John and Mary won against Tom and Sue. w (condition (won-against '(john mary) '(tom sue))) TT-TT Sue is very strong! iti âsue) 75)) F 2 31.39 (condition (> (strength If Sue played against Tom, who would win? 2? (query (won-against '(sue) '(tom))) fT-T (C) Natural language (D) Language of thought osh won against Lio. Josh proceeded to claim victory against Alex. (condition (won-against '(josh) '(lio))) 4 (condition (won-against '(josh) '(alex))) (condition (not (won-against '(lio alex) *(josh))) Even working as a team, Lio and Alex still could not beat Josh. How str Josh YB Oe (query (strength 'josh)) og Josh: 1 =67.89% What are the odds of Gabe beating Josh? Tre 23.90% False o% 10% 20% 30% 40% 50% 60% 70% 80%
Figure 3: Illustration of probabilistic reasoning via language-to-code translation in the tug-of-war domain. (A) The generative model defines two latent traits, strength and laziness, and specifies how these interact to determine team-strength. By combining (A) and (B), we can few-shot prompt an LLM to translate open-ended natural language (C) into Church statements (D) that capture linguistic meaning with respect to the domain. The resulting probabilistic inferences transparently represent the modelâs beliefs and naturally capture human-like intuitions about playersâ latent traits.
13
2.2 Understanding language with probabilistic reasoning
2 OVERVIEW OF THE KEY IDEAS
Questions seek information in the face of uncertainty about the world; e.g., âWould Josh beat Gabe if they played again?â In our framework, we translate questions into query statements in Church, which evaluate the quantity of interest. Calling query triggers a probabilistic computation that simulates possible worlds under the model, constrained by any observations so far. The query expression is evaluated in each simulated world, yielding multiple samples that form a posterior distribution over the value of interest.
Throughout the examples in this work, we freely interleave query and condition statements, much as questions might occasionally arise between statements of fact in a natural dialogue. Implementationally, this behavior is achieved through a read-evaluate-print loop (REPL) inspired by Ventureâs (V. Mansinghka, Selsam, & Perov, 2014), that evaluates queries against all condition statements that have appeared up to that point in the dialogue history. In our model, we assume that the user specifies whether each utterance is a condition or a query, but LLMs could likely classify unannotated utterances accurately.
Inspired by the work of Goodman and Translating from natural language to program expressions. Lassiter (2015), if we had some way to translate linguistic utterances into probabilistic program statements, we could perform a wide variety of probabilistic inferences from plain English. Up until recently, however, it was unclear how to construct a meaning function sufficiently general to translate open-ended natural language into highly structured expressions compatible with a Church model. Our core observation is that language-code LLMs have many of the properties necessary to serve as a useful meaning function: broad-coverage exposure to natural language, a robust capacity to model joint language-code text distributions, and the ability to quickly grasp domain-specific syntax and semantics from a few examples.
In this work, we leverage the few-shot prompting capabilities of one such LLM, the Codex model from OpenAI, to induce a translation model from English to Church code. As it turns out, we only need to provide a small handful of example translations (represented in Fig. 3B) to achieve a variety of interesting behaviors. To translate a new language utterance to Church, we simply concatenate the generative model (full text in Appendix A.1.1) and the translation examples (full text in Appendix A.1.2) into a prompt whose final line is the utterance. We then generate from Codex, which, based on the comment-code pattern in the prompt, infers that the completion should be written in Church, using the function definitions and constructs provided in the prompt.
Notice the high degree of variation in phrasing and lexical choice in Fig. 3C; none of the utterances contain âwonâ or âagainst,â yet Codex still maps these to the won-against function. Here, we start to see some of the advantages of using an LLM over more traditional semantic parsing techniques like CCG parsers (Artzi, Lee, & Zettlemoyer, 2015; Artzi & Zettlemoyer, 2013). Because the model is pre-trained on a vast amount of linguistic data, it fluently handles many different kinds of linguistic variation. However, by including the Church generative model in the prompt, we can effectively constrain the output space; the model infers that the generated code should use the functions defined in the generative model.
As a semantic parsing tool, this combination of pre-training and prompting manages to achieve broad invariance to spurious linguistic variation while remaining sensitive to wording choices that might affect meaning. We can see this tradeoff at work in Fig. 3C, where the translation uses a negation, closely reflecting the structure of âLio and Alex still could not beat Josh.â Of course, there are multiple aspects of the utterance that this translation does not capture (e.g., âEven working as a team...â suggests that Lio and Alexâs efforts were well-coordinated; as opposed to something like, âStepping on each otherâs toes the whole match...,â which would imply the opposite). Our point is not that the LLM translation perfectly captures all aspects of the utterance meaning, but rather, that it encodes those that are relevant to and compatible with the domain model so as to facilitate downstream reasoning.
Reasoning about scenarios with probabilistic inference. So far, weâve illustrated how we might condition a PLoT model on natural language, but what about reasoning? After hearing the information in Fig. 3C, we might assume that the player named Josh is quite strong. Exactly how strong is Josh, though? And how likely is it that he would beat another player who isnât Lio or Alex? Just as we used Codex to translate facts into condition statements, we can use it to translate questions into query statements in Church. The Church inference engine then automatically simulates scenarios (in this case, 1000 times) that are consistent with the given condition statements in order to produce an approximate posterior distribution over each query.
14
2.2 Understanding language with probabilistic reasoning
2 OVERVIEW OF THE KEY IDEAS
By offloading reasoning from the LLM to the PLoT, we can obtain a much richer picture of our modelâs beliefs about the world (Fig. 3D). While the LLM alone can only respond with textual statements like âJosh is very strong,â Church gives us an entire probability density over Joshâs strength (on expectation, he is a little less than one standard deviation above the average strength = 50). Likewise, we can easily obtain a distribution over the outcomes of a Gabe-Josh match (given Joshâs strong track record, our model finds Gabeâs chances slim, at 23.90%). Critically, Church is doing much of the heavy lifting of inference in the background in order to produce these posterior distributions.
In a huge upset, Gabe managed to best Josh in the fourth round. (condition (won-against '(gabe) '(josh))) (query (strength 'gabe)) og » > How strong is Gabe? nt YY ® HY ââ No info (y= 64.19) ââ Josh likely lazy (11 = 60.69) ââ Josh rarely lazy (y = 70.20) h 1 1 1 1 1 1 1 1 70
Figure 4: Reasoning about a pair of hypothetical scenarios with language-code translation. In a world where Josh is often lazy, Gabeâs win is counteracted by a high likelihood that Josh threw the match. Conversely, in a world where Josh is rarely lazy, Gabeâs win is surprising and suggests a high strength value. Rational meaning construction with an LLM appropriately resolves the linguistic meaning of these two scenarios, selecting reasonable probability parameters for the conditioning statements. Meanwhile, probabilistic inference about Gabeâs strength is finely sensitive to the implications of these competing hypotheses.
In addition to providing useful interpretability, reasoning in Church models is sensitive to each new piece of information. Much like human learners, Church models can flexibly update their beliefs when presented with low-probability or unanticipated events. Picking up our tug-of-war saga, consider the plot twist in Fig. 4:
In a huge upset, Gabe managed to best Josh in the fourth round.
How might this new information shape our interpretation of the match 4 outcome? If Josh is likely to be lazy, then itâs possible that Gabe simply got lucky and wasnât so strong after all. If, on the other hand, Josh is rarely lazy, we might start to regard Gabe as particularly strong. In Fig. 4, we can observe how Church reasons about these two possibilities, shifting the probability density over Gabeâs strength left if Josh is likely lazy and right if Josh is rarely lazy.
Note how, in order to translate a phrase like âJosh has a propensity to slack off,â Codex must choose a particular probability threshold. This choice is arbitrary and, while there is no âcorrectâ answer, we see that Codex is able to choose valid probability values between [0, 1] that feel appropriate to the wording: a
15
2.2 Understanding language with probabilistic reasoning
2 OVERVIEW OF THE KEY IDEAS
âpropensity to slack offâ doesnât necessarily imply that someone slacks off all the time, while, in contrast, ârarely lazyâ offers more certainty. Indeed, across many different contexts, we observe that Codex is able to pick reasonable parameter values that respect both the language and the parametrization of defined distributions. We consider these inferences to represent a form of âamortized pragmaticsâ (Goodman & Lassiter, 2015), which we will revisit in Section 5.
Putting it together: the power of probabilistic reasoning. We conclude this section with a final example that underscores the flexibility of our framework to model complex reasoning from language and foreshadows multiple themes that we will revisit later in the paper. Consider the dialogue in Fig. 5, in which the students and faculty team up to face one another. The interlocutor poses two questions: âIs Gabe stronger than the weakest player on the faculty team?â and âWho would win in a match between the students and the faculty?â As we saw in the prior tug-of-war examples, the answers to both of these questions are expressed as probability distributions derived from simulation of the generative tug-of-war model. Moreover, in both cases, the introduction of new information flips the modelâs belief state in a way that aligns with human intuitions. In this way, the PLoT framework is natively capable of defeasible inferenceâa phenomenon of human reasoning that was of great interest to early AI pioneers of non-monotonic logics (Ginsberg, 1987; McCarthy, 1980).
A key advantage of our framework is that achieving these kinds of defeasible and flexible inferences from natural language reduces to grounding utterances into appropriate condition and query statements. While the observations and questions in Fig. 5 are semantically more complex than those that appeared in the prior examples, and though there are many degrees of freedom involved in the translation problem, we confirm that an appropriately-prompted LLM can produce translations that intuitively capture the meaning of each utterance with respect to the tug-of-war domain. Moreover, as we saw in Fig. 4, Codex is able to amortize certain pragmatic inferences in resolving âpretty strongâ to a threshold of strength > 60, âreal slackersâ to a threshold of laziness > 0.9, and âseveral of the facultyâ to count >= 3. How far can we go with these kinds of amortizations? Throughout Section 3 and Section 4, we will see examples of context-driven amortizations across different domains; and in Section 5, we will regroup to discuss how these different examples of amortization might inform our theories of language understanding and pragmatics.
In this dialogue, we also give a preview of define, a powerful construct in our framework that is discussed in depth in Section 4. Just as people come up with terms like â20th-century pragmatistsâ or âMeatless Mondayâ to pick out entire hierarchies of people, things, and events, a core feature of the probabilistic LoT is the ability to define new concepts that can later be referenced symbolically. In the Fig. 5 dialogue, language about team structures defines two new concepts, faculty-team and student-team, that facilitate concise translation of language like, âIs Gabe stronger than the weakest player on the faculty team?â Moreover, while faculty-team is a static list, other defined concepts can ground out in functions that take arguments. In fact, stronger-than?, which is defined in the prompt (Appendix A.1.2), is one such example, illustrating how programming languages are well-suited to capture the infinite productivity of language that arises through structured composition. Through this lens, we can start to imagine how our tug-of-war world model might be expanded to ground many new kinds of language:
⢠The tug-of-war tournament is organized into three leagues for novices, amateurs, and professionals. In order to be considered a professional, a player must win 20 one-on-one matches against other professionals.
⢠Players often get increasingly tired over the course of a tournament, though some players have more stamina than others.
⢠The tournament has an entry fee of $20 per contestant and a grand prize of $10,000 for the winning team.
How can we grow our world models to incorporate new language, or even construct new world models entirely from scratch? In Section 4, we revisit the tug-of-war domain with an eye to precisely these questions.
Conclusions. As an introductory example, the tug-of-war domain serves as a minimal illustration of the kind of reasoning from language that our framework is concerned with. Our goal here was to build intuition
16
2.2 Understanding language with probabilistic reasoning
2 OVERVIEW OF THE KEY IDEAS
The faculty team is made up of four players: Jacob, Se (detine racutty-tean â(Jacob Josh noah vitash) [areca Tsp ar a Gt cusMayAts peo (define student-team '(alex gabe lio ben ced)) (query Is Gabe stronger than the weakest player on the faculty = (stronger-than? og team? âgabe (argmin strength faculty-team) ) io ee (condition All of the faculty are pretty strong. = (all (map (lambda (player) (> (strength player) 62)) faculty-team) )) ( ' ! a og 1 1 me ' team? 1 0% 10% 20% 30% 40% 50% 60% 70% 80% | . : (query Who would win a match between the students and the = (won-against student-team faculty-team) 2g faculty? horse âââââââââ- studentteam a Despite their strength, several of the faculty are real = (condition slackers. â O= (count (map (lambda (player) (> (laziness player) @.9)) faculty-team) ) 3)) 0% 5% 10% 15% 20% 25% 30% 95% 40% 4S OH 55% 60% ( 1 ' Who would win a match between the students andthe 1 __ 1 faculty? ' i 1 . |
Figure 5: In this final tug-of-war dialogue, natural language plays three interlaced roles in interfacing with the language-of-thought. Definitions (purple) introduce new concepts, such as specific player-teams, that can later be referenced symbolically. Observations (blue) translate into condition statements that probabilistically constrain the world state, sometimes amortizing the resolution of linguistic ambiguity (e.g., âpretty strongâ or âreal slackersâ). Finally, questions (green) translate into queries that trigger inference by probabilistic simulation over possible worlds that is both sensitive to and consistent with prior definitions and observations.
17
2.2 Understanding language with probabilistic reasoning
2 OVERVIEW OF THE KEY IDEAS
for our general approach: by translating natural language into condition and query statements as inputs to a probabilistic inference engine, we can achieve forms of reasoning from language that are consistent with respect to a mental model of the world. Nonetheless, in scaling this approach beyond the toy domain of tug-of-war, many questions arise. How does probabilistic inference relate to models of relational and deductive reasoning of the sort that classical AI approaches excel at? How do we ground linguistic meaning in the visual and physical world? And how does language understanding inform our actions and interactions with other agents through goal-directed planning? In Section 3, we will progressively expand our scope to touch on each of these questions and show that, in each case, new kinds of language understanding and reasoning can be naturally incorporated into our framework.
18
3 WORLD MODELS
# 3 Understanding and reasoning about language with world models
In this section, we illustrate how the general framework we propose in Section 2 can be applied and extended to integrate natural language with core domains of human-like thought. In each, we build on the idea that language that conveys observations and questions about uncertain situations, constructing meanings from a generative world modeling program that supports probabilistic reasoning. In Section 3.1, we show how this approach can be integrated to understand language that conveys structured, logical lexical relations. In Section 3.2, we show how generative programs that support perceptual and physical simulation can be used to ground language about scenes into visual world. Finally, in Section 3.3, we consider language about agents with preferences and goals, and show how we can make meaning from sentences with respect to a generative program that supports planning.
# 3.1 Language for logical and relational reasoning
In the previous section, we examined how translation from natural language into the probabilistic language of thought naturally captures a certain form of reasoning in which uncertainty plays a key role. How does this framework relate to earlier computational theories of reasoning, such as classical AI approaches to logical and relational reasoning (Russell & Norvig, 2021)? Historically, systems like Prolog (Colmerauer, Kanoui, Pasero, & Roussel, 1972; Philippe, 1972) were designed for similar goals to ours here, to allow people to directly interact with computers via natural language (French, originally), specifying only the background knowledge and goals for computation without the algorithmic details (Colmerauer & Roussel, 1996). In this section, we demonstrate how the PLoT not only fully supports the style of deductive, logical reasoning characteristic of classical AI, but extends it to support inductive inferences as well. Moreover, we argue that many kinds of real-world reasoning problems that are traditionally modeled using structured logic-based approaches actually require a mix of both symbolic and probabilistic reasoning. In doing so, we aim to illustrate how our approach of translating from natural language to the PLoT fluidly integrates both kinds of reasoning in a way that comes naturally to people, but that has proved elusive for both traditional deductive programming systems and purely statistical language models.
Language about kinship relations. Suppose you are again with your friend from Section 2.2, who is telling you about a part of their extended family. âAvery has a sister named Blake, and their father is named Charlie,â your friend says. Immediately, you start to sketch a picture in your mind of this family, which you can update on-the-fly as you get more information: âCharlie is the grandfather of Dana.â At this point, you can infer that one of Charlieâs kids is also Danaâs parent, but which one? In the absence of additional information, itâs a toss-up between Avery and Blake, with some outside chance that there could be another, unmentioned sibling who is Danaâs parent. Hearing that âBlake has two kidsâ might initially shift your beliefs towards Blake. However, upon learning that âDana is an only child,â youâd have to rule Blake out entirely! This kind of relational reasoning, which freely intermixes deductive and inductive inferences, comes quite naturally to people. How do we make such rich inferences from a relatively sparse sequence of words?
In this section, our domain of interest will be kinship: relationships between people in a family. The kinship domain provides fertile ground for the study of logical reasoning for several reasons. First, during development, one of the first domains where we learn about logical relations is in describing families (Elkind, 1962; Piaget, 1951). Language has evolved to describe family structures in highly economical terms that naturally express composition (e.g., my motherâs father is my grandfather ) and symmetry (e.g., if Avery is my cousin, then I am Averyâs cousin; together, we are cousins). Nevertheless, while certain kinship references are relatively straightforward (e.g., âBlakeâs motherâ), others involve ambiguity (e.g., âBlakeâs uncleâ could refer to the brother of either of Blakeâs parents; or even, perhaps, a close older male who is not related by blood or marriage). Finally, kinship reasoning freely intermixes deductive and inductive inferences: for instance, âCharlie has a grandson named Danaâ deductively implies the existence of a child of Charlie who is also a parent to Dana; and it inductively implies that Charlie was possibly partnered at some point, such that Dana might have another grandparent in the picture. Traditional logical accounts of reasoning in this domain capture the deductive inferences but not the inductive inferences in cases like this. People, in contrast, routinely make statements such as âThis is Kendall, the partner of Averyâs nieceâ with the expectation that others will draw roughly the same inferences they would in building a mental model of this family: Avery has
19
3.1 Language for logical and relational reasoning
3 WORLD MODELS
(i) Generative domain theory of family trees Oo ® person-0 person + partner? | person-id: person-2 name: blake @ (flip 0.5) gender: female parent-1-id: person-0 parent-2-id: person-1 Crandom-choice "avery blake charlie » (random-choice â(male female)) mi n-children? n (geometric @.5) 3) persona person-6 (ii) Translations into LoT predicates re cc e Fan [ Charlie is Blakeâs father. Blakeâs dad is named Charlie. (father-of? âcharlie 'blake) __ Blake has three children. Blake has 3 kids. t+ (= (length (children-of 'blake)) 3) __ Blakeâs brother has a son named Dana. Blake has a brother whose son is named Dana. (exists Qambda (x) (and (brother-of? x 'blake) (son-of? âdana x)))
Figure 6: Illustration of a simple kinship domain theory and conceptual system implemented in Church. (i) The generative model specifies a process by which individuals form couples and have children to form family trees. Each tree represents a âpossible worldâ in which certain relationships hold. (ii) These relationships are expressed using predicates in a conceptual system that supports quantificational logic and composition, giving rise to an expressive domain semantics that aligns well with natural language.
a brother or sister, and that sibling has a female child, and Kendall is that personâs partner. In sum, the kinship domain offers a rich set of relations and possible inferences, and comes equipped with an extensive natural language vocabulary, making it an ideal playground to explore our translation hypothesis.
World models of kinship as probabilistic generative programs. Owing to the richness of the domain, recent years have seen a steady interest in computational cognitive models of various aspects of kinship, ranging from development and acquisition of kinship terms across cultures (Mitchell & Jordan, 2021; Mollica & Piantadosi, 2022), tradeoffs in communicative efficiency in natural (Jones, 2010; Kemp & Regier, 2012) and artificial (K. Smith, Frank, Rolando, Kirby, & Loy, 2020) kinship systems, and probabilistic inferences about kinship relations from sparse evidence (Katz, Goodman, Kersting, Kemp, & Tenenbaum, 2008). In this work, our primary interest is in how people represent and reason about kinship relations conditioned on language. Following Katz et al. (2008), we construct an intuitive domain theory of kinship using a probabilistic generative model and a small number of rules that form a conceptual system.
As in Section 2.2, our kinship domain theory is expressed as a generative model in Church. In the Bayesian tug-of-war, the generative model consisted of random variables over continuous quantities like strength and laziness. In contrast, in this section, our generative model specifies a series of discrete random choices that describe events in a familyâs genealogy: people are born, find partners, have children, and the process repeats. All of these events involve random choices that shape the makeup of the family tree.
Fig. 6 (i) shows a schematic of the kinship generative domain theory. When a person is born, they are assigned a unique person-id, a name1 sampled from a list of gender-neutral names, and a gender sampled from {male, female}. Next, with fixed p = 0.5, the person partners with a new individual from outside the family. Finally, if partnered, the couple has n = {0, 1, 2, 3} children, with the number of kids drawn from a geometric distribution (p = 0.5). This process repeats recursively until a full family tree is generated. To support efficient inference using Churchâs generic sampling algorithms, we cap the trees at 3 generations and limit each couple to 3 children. Further implementation details of the generative model can be found in Appendix A.2.1.
1For simplicity, names uniquely reference individuals in the tree, so as to avoid confusing scenarios like âAvery is the mother of Avery.â Additionally, for efficiency of inference, the only names that are assigned are ones that are used in the conversational context.
20
3.1 Language for logical and relational reasoning
3 WORLD MODELS
As with any computational model of a social phenomenon, this toy kinship model is reductive of many important nuances of identities and relationships. For instance, while the model includes both same- and opposite-gender couples, these couples never split, so step-relations arenât well-captured. While these kinds of compromises are designed to keep inference tractable, still others stem from limitations of the language itself. For example, many colloquial English kinship terms are gender-binary (e.g., mother, grandfather, daughter), so instantiating them as truth-conditional predicates coerces the generative model towards traditional gender assignments. Similarly, many English names carry strong gender associations, which NLP systems trained on large linguistic corpora pick up on (Caliskan, Bryson, & Narayanan, 2017; Grand, Blank, Pereira, & Fedorenko, 2022). In our examples, we intentionally select gender-neutral names (e.g., Avery, Blake, Charlie, Dana) to emphasize that these naming-based gender inferences are deliberately not part of the reasoning task. To summarize, language both reflects and constrains our intuitive theories of complex domains like kinship (Sapir, 1929; Whorf, 1956; c.f. Gentner & Goldin-Meadow, 2003 for a review of contemporary perspectives on linguistic relativity), and these tradeoffs manifest concretely in the toy model presented in this section. Fortunately, where this initial âoff-the-shelfâ kinship model lacks social and cultural nuance, our framework offers opportunities to extend and modify these areas. In section Section 4.1, we look at ways of growing our kinship model to include concepts from non-English-speaking cultures and more inclusive concepts of gender.
Relational meanings as program statements. Given a generative model of family trees, we can define a rich conceptual system to make statements about relationships between individuals. Our conceptual system consists primarily of a dozen-odd derived predicates that are binary operators over pairs of names; e.g., (father-of? 'charlie 'blake) is true iff Charlie is the father of Blake in a particular tree instance.2 These derived predicates build on a small number of low-level accessor functions that operate directly on nodes in the tree data structure. For instance, (children-of 'blake) returns a list of names corresponding to the children of Blake in the tree. Finally, our conceptual system includes several higher-order functions, like map-tree, filter-tree, and exists that take custom predicates as inputs and return a boolean. These functions facilitate the expression of a rich compositional semantics by allowing for compound predicates containing conjunctions and disjunctions. Fig. 6 (ii) illustrates several examples of the kinds of statements that can be made using combinations of derived predicates, low-level accessors, and higher-order functions. The full set of definitions making up the conceptual system is given in Appendix A.2.3.
Translating from language to program expressions. As in Section 2.2, we use a handful of paired natural language / code examples (Appendix A.2.4) to induce a meaning function via Codex. Because the prompt also includes the generative model source code and the full set of derived predicates, the LLM is able to resolve statements like âBlake has two kidsâ to the appropriate function (in this case, children-of) using the available definitions. Moreover, we observe zero-shot generalization to linguistic constructs that are not explicitly defined in the prompt, such as the concept of an âonly childâ (Fig. 7).
Putting it together: Reasoning from language about kinship relations. What is the purpose of all of this domain-specific machinery that we have now built up? The answer is two-fold. First, the generative domain theory compactly captures the key dynamics of our domain, allowing us to reason about a combinatorially vast space of possible family trees. Meanwhile, the conceptual system serves as a higher-level program interface, defining certain relationships that we would like to be able to talk about. Finally, the large language model bridges the domain model with natural language, providing a flexible and context-aware way to ground language into conditioning and query statements.
In Fig. 7, we can see how these components come together to facilitate naturalistic reasoning from language about kinship relations. Each natural language utterance translates to a condition statement in Church that serves as a constraint on family trees. With each successive condition, our uncertainty decreases and our picture of the family tree in question starts to crystallize. Samples from the conditioned domain theory model therefore serve as hypotheses about possible worlds that are consistent with the information provided through language. Furthermore, the distribution over conditioned samples provides a principled way to reason about
2Note that because our model includes same-gender couples, Blake may have one father, two fathers, or no fathers. Blake also may not exist in the tree in the first place! Crucially, these aspects of the generative model donât matter to the derived predicate, which simply evaluates whether the relationship in question holds somewhere in the tree.
21
3.1 Language for logical and relational reasoning
3 WORLD MODELS
A. Language-to-codetranslation â_B. Family trees sampled from conditioned kinship domain theory Avery has a sister named Blake. L, (condition (sister-of? âblake 'avery)) Avery | | Blake Avery and Blakeâs father is named Charlie. Chatie | %) (condition (and (father-of? âcharlie âavery) (father-of? âcharlie 'blake))) hae avery Avery |_| Blake Charlie is Danaâs grandfather. Charie Charle | (condition (grandfather-of? âcharlie 'dana)) Dana Dana Which of Charlieâs kids is Danaâs parent? Probabilistic inference over sampled worlds fod od (query (filter-tree avery (lambda (x) (and â+ blake (child-of? x 'charlie) other of? x! t T T T T T T T T 1 (parent-of? x âdana))))) 0% 5% 10% 15% 20% «28% ©=â 30% += 38% = 40% «= 45% Blake has two kids. cra (condition (= (length (children-of 'blake)) 2)) vey eg Which of Charlieâs kids is Danaâs parent? blake other me (condition (not (exists Cambda (x) (sibling-of? x 'dana))))) aver 8 Which of Charlieâs kids is Danaâs parent? blake other T T T T 1 0% 10% 20% 30% 40% 50% 60% 70% 80% 90%
Figure 7: Kinship reasoning from natural language, backed by a domain theory model in the probabilistic language of thought. (A) Natural language utterances about a particular family are readily translated into Church conditioning statements by a LLM. (B) Samples from the conditioned generative domain model are possible family trees that adhere to the growing set of constraints (conditioning statements are cumulative). Reasoning about unknown kinship relations is accomplished through posterior inference against a translated query. With each new piece of information, the modelâs beliefs reflect both deductive and inductive inferences.
22
3.1 Language for logical and relational reasoning
3 WORLD MODELS
queries, such as Which of Charlieâs kids is the parent of Dana? Posterior inference (in this case, accomplished via rejection sampling) faithfully reflects various possible configurations and their relative probabilities. For instance, in Fig. 7, after conditioning on Blake has two kids, the model puts > 80% probability on Blake being Danaâs parent, but also factors in low-probability possible worlds where Avery or a third unnamed sibling is Danaâs parent. Yet, despite this confident answer, the model can correctly infer that this same probability drops to 0% in the face of the contradictory information that Dana is an only child. Note that the distributional parser plays a crucial role in this inference by providing a correct interpretation of this utterance. Meanwhile, the Church inference engine does the heavy lifting of representing possible worlds and reasoning about them in a principled manner.
Future directions: Logical and relational reasoning with language models. Significant recent attention has been directed towards studying reasoning in LLMs. Typical approaches involve engineering prompts so as to induce structured generations in text space that approximate âstep-by-stepâ reasoning (Kojima, Gu, Reid, Matsuo, & Iwasawa, 2022; Nye et al., 2021; Wei et al., 2022). Nevertheless, current evaluations find that even with such methods, LLMs are prone to producing unfaithful reasoning chains in which conclusions do not follow logically from the premises (Golovneva et al., 2022; H. Liu et al., 2023; Lyu et al., 2023; Ribeiro et al., 2023). These issues of consistency have motivated several systems that connect LLMs to external symbolic inference engines that perform deductive inference using Prolog-style backwards chaining (Dalvi, Tafjord, & Clark, 2022; Pan, Albalak, Wang, & Wang, 2023; Weir & Van Durme, 2022). We see this work as closely-related in spirit to our approach, but fundamentally limited to deductive reasoning. (See Appendix A.2.5 for a technical explanation of these limitations.) Of course, we make no claim that Church or its derivatives are the only languages that can capture human-like relational reasoning. For instance, ProbLog (De Raedt, Kimmig, & Toivonen, 2007; Dries, Kimmig, Davis, Belle, & De Raed, 2017; Suster et al., 2021), a probabilistic extension of Prolog in which deduction rules can be annotated with probabilities, offers a compelling alternative. Indeed, interfacing ProbLog with a natural language via an LLM-backed meaning function would constitute a promising instantiation of our rational meaning construction framework. Our core assertion here, and in the rest of this paper, is that representing probabilistic, generative models over possible worlds is critical to reasoning coherently about a structured domains.
23
3.2 Language for visual and physical reasoning
3 WORLD MODELS
# 3.2 Language for visual and physical reasoning
Sensory detail and physical knowledge pervade our everyday language. We can describe and imagine highly visual objects and scenesâa few red mugs on a tabletop, a tall stack of blue plates, a heavy box, and objects that move, bounce, and collide. We flexibly make predictions about physical events (what will happen if a kid crashes into that table stacked with plates? ), or infer the underlying physical properties of the world (how heavy is that box that no one can lift? ), based on situations described entirely in words. As with the other domains we have considered thus far, understanding this language requires integrating over the uncertainty inherent to language, like the possible heights picked out by tall and motions picked out by a bounce, as well as the uncertainty inherent to how we imagine the physical world itself.
How can we so flexibly relate language to our more general perceptual and physical reasoning? In this section, we illustrate how our overarching framework for language understanding can be modularly extended to capture both of these capabilities. We begin with perception, extending our framework to integrate a graphics rendering engine to relate linguistic meanings to visual knowledge (Section 3.2.1). We then build on this approach to integrate a physics simulation engine to further interface between language and intuitive, probabilistic physical reasoning (Section 3.2.2). By incorporating these external engines, these sections blueprint how computational models that ground linguistic meaning in a PLoT can interface with other cognitive modules for perception and physical reasoning.
# 3.2.1 Language about visual scenes
To illustrate the structured relationship between language and visual knowledge, imagine how we might talk about a very simple domain of scenes (Fig. 8, top)âtables on which someone was placing some household objects (mugs, cans, or bowls) that come in different colors (red, green, yellow, or blue.)
Given descriptions of particular scenes (Fig. 8, bottom), most of us can easily picture tabletop scenes that fit these descriptions, updating what we imagine to incorporate arbitrary new information, like that everything on the table is blue, and also that there are no mugs, and lots of bowls. We can do this despite uncertainty in the language itselfâa phrase like lots of bowls leaves open just how many bowls there are, though we have general intuitions that there should be more than one or even two bowls on our imagined table. You can also draw a host of fundamentally probabilistic inferences to answer many arbitrary questions about the scenes you imagine, like how many green mugs there might be, or whether there are more red objects or green ones. The set of scenes you imagine, and the way you answer these questions, is structured and compositional at the level of individual objects and their properties (a mug, a green mug, a bunch of green mugs), and over successive sentences (like there are many red objects on the table, there are just a few green mugs, and there are also at least three green bowls.) The way we talk about scenes like these suggests the level of abstraction with which we mentally represent them. We describe and reason over object categories, lexical properties, numeric quantities, and set relations, and we can easily visualize scenes from these abstract, linguistic descriptions. In contrast, recent evaluations of current multimodal modelsâlarge language models fine tuned on corpora of images (Ramesh, Dhariwal, Nichol, Chu, & Chen, 2022; Ramesh et al., 2021)âsuggest that even large models struggle with just these kinds of simple but abstract relational concepts in language, such as producing images consistent with quantifiers like more red things than green things, or relations like a plate on top of a cup (Conwell & Ullman, 2022; Marcus, Davis, & Aaronson, 2022; Radford et al., 2019).
In this section, we propose that the basic motif outlined in our framework also suggests an alternate approach for relating language and visual reasoning. Our architecture draws on the traditions of viewing perception as âanalysis by synthesisâ or âvision as inverse graphicsâ from cognitive science and classic computer vision (Battaglia et al., 2013; Gothoskar et al., 2021; Kulkarni, Kohli, Tenenbaum, & Mansinghka, 2015; Lee & Mumford, 2003; J. Wu, Yildirim, Lim, Freeman, & Tenenbaum, 2015a; Yuille & Kersten, 2006). This approach frames visual imagination and visual scene understanding as two sides of the same coin, modeling visualization in a mental graphics rendering engine over internal scene representations and perception as probabilistic inference to invert the renderer and thereby recover the physical content of scenes from vision. In this section, we show how this general approach to modeling human perception can integrate cleanly into the framework we have sketched so far, augmenting the probabilistic language of thought with an interface to a rendering engine so it can serve as a general, flexible intermediary for relating language, world models, and visual scenes.
24
3.2 Language for visual and physical reasoning
3 WORLD MODELS
Generative world model of scenes Sampled scene graphs Scene 1 Scene 2 Scene 3 abject-1: ( ebject-2: ( âobject-2: ¢ abject-1: ( abject-2: ( object-2: (âobject-1: (âobject-2: ( âobject-a: Color: red color: green color: blue Color: red color: blue color: blue color: red color: blue color: yellow Shape: mug shape: Gan shope: bowl shape: mug shape: mig) shape: Bowl shape: mig. shape: ug shape: mg > ? ? ? ? ? > (define choose-shape (mem (lambda (obj-id) Graphics rendering engine (pair âshape (uniform '(mug can bowl)))))) ED DEES ES (define choose-color (mem (lambda (obj-id) (pair âcolor (uniform '(red blue green yellow)))))) (define generate-object (mem (lambda (obj-id) (list (pair âobject-id obj-id) (choose-shape obj-id) (choose-color obj-id))))) (define choose-num-objects ...) (define generate-objects-in-scene ...) Reasoning about scenes from natural language Dialogue A Dialogue B There's only a few mugs and bowls, though at least one of each. verything on the table is blue. (condition (and ength ((filter-shape âmug) (objects âscene))) 3) length ((filter-shape "bowl) (objects âscene))) 3) (© length ((filter-shape âmug) (objects âscene))) 0) (© length ((filter-shape âbowl) (objects âscene))) 0))) And most of the objects in this (condition (and » (> Cength ((Filter-color green) (objects âscene))) 0) * (o= Clength ((filter-color green) (objects âscene)) (/ (length (objects âscene)) 2)))) How many green mugs do you think ther Protbitierenceoverpssewors gf . Be Cony leet (atest gram â cal : 3 ((filter-shape 'mug) (objects-in-scene âscene))))) me
(define choose-shape (mem (lambda (obj-id) (pair âshape (uniform '(mug can bowl)))))) (define choose-color (mem (lambda (obj-id) (pair âcolor (uniform '(red blue green yellow)))))) (define generate-object (mem (lambda (obj-id) (list (pair âobject-id obj-id) (choose-shape obj-id) (choose-color obj-id))))) (define choose-num-objects ...) (define generate-objects-in-scene ...)
Figure 8: Human language understanding draws on our structured knowledge of the visual world. (Top) A probabilistic generative model describes a prior over tabletop scenes with varying configurations of colored mugs, cans, and bowls. Sampled world states describe a scene based on symbolic object concepts. Interfacing this world model with a graphics rendering engine models visual imagination of a given scene. (Bottom) Language about particular visual scenes can now be translated as before into conditions (blue) and queries (green) on the distribution over scenes, which can be rendered into visual scenes that reflect language.
Integrating the probabilistic generative model over scenes with a rendering engine. To model the domain of tabletop scenes, we begin with a probabilistic generative model like those in the preceding sections. The generative program excerpted at the top of Fig. 8 (purple) describes a prior over the number of objects in a given scene, and the shape and color of each object. This program is similar in many ways to the kinship model in Section 3.1, which generates possible family trees as a collection of entities and stochastic choices about each one. Similarly, the generative model in this domain generates a particular scene by making stochastic choices over the number of objects in the scene (choose-num-objects), then generates each individual object (generate-object) based on stochastic choices over its possible properties (e.g choose-shape and choose-color). This basic structure can be augmented in many ways to model more complex scenes, with more variation over possible properties like size or material, hierarchical classes of object categories like dishware, cups, and mugs, or hierarchical object structures like a stack of plates.
Each sample from the generative model in Fig. 8 is a structured, symbolic representation of a particular scene state, represented in our particular implementation as a list of object dictionaries that map between attribute kinds (like object-shape) and values (like 'mug). These scene states are very simple instances of the many symbolic scene representations used throughout computer graphics and computational models
25
3.2 Language for visual and physical reasoning
3 WORLD MODELS
of human scene understanding, data structures which model the abstract and semantic contents of scenes (Armeni et al., 2019; Bar-Zeev, 2003; Clark, 1976; Gothoskar et al., 2021; J. Johnson et al., 2017, 2015; Zinberg, Cusumano-Towner, & Vikash, 2019).
We can now extend this probabilistic generative program so that it expresses not just a distribution over possible scene states, but over the visual percepts of each scene. We do so by extending our base probabilistic programming language with a new function, render, that takes in scene graphs as inputs and calls out to Blender, a 3D computer graphics engine.3 Our render implementation builds on the basic capabilities of any programmable graphics engine. It defines how symbolic object entities with the properties defined in our model (shapes like mug) are rendered and colored into 3D CAD shapes, and can forward render any sampled scene graph into a visual scene with the requisite object types and colors, and overall structure (Fig. 8, top, Rendered possible worlds). Collectively, this generative model and rendering interface unites the underlying belief distribution over possible scene states with how each of these scenes might look.
More broadly, this implementation is intended as a simple, illustrative example of how our framework could be integrated to model many, complex relationships between the objects we talk about in a scene and how they look â recent work in scene understanding, for instance, models variation in lighting, viewer angle and distance from the scene, stereo depth sensing, and sources of noise in perception (such as from a viewer who only looks briefly at an image, or an imperfect, non-idealized visual sensor) (e.g., in Deng, Zhi, Lee, and Ahn (2021); Gothoskar et al. (2021); Hughes, Chang, and Carlone (2022); Kulkarni et al. (2015); V. K. Mansinghka et al. (2013); Zinberg et al. (2019)).
Grounded meanings as program expressions. By augmenting probabilistic generative models with a graphics rendering engine, we have now extended our framework to allow language that describes and asks questions about scenes to interface with visual depictions of those scenes.
In our simple tabletops scene domain, for instance, we can ground linguistic descriptions of the number, kinds, and colors of objects in a scene (Fig. 8, blue) like thereâs at least two green cans or a few mugs and bowls into probabilistic program condition statements on scene states in the generative model. As with preceding sections, the translations shown in Fig. 8 are quite straightforward and interpretable, because the generative model we have defined expresses compositional predicates on object properties at the grain of language. Constraints on objects of specific types, like green cans are translated into a sequence of conditions on the relevant properties of object entities, successively filtering on the set of objects that are green (filter-color green) and then further filtering to the set of objects that are also cans (filter-shape 'can).4 Sampling scene states from the conditioned generative model, and rendering these scenes into images with the render interface, then produces visual depictions that are consistent with any sequence of observations made in language. This approach disentangles reasoning, as probabilistic inference over a structured generative model, from the perceptual properties of scenes. As with before, we can translate questions like How many green mugs do you think there are? into probabilistic query expressions. Our approach reasons about these questions as inferences over the distribution of possible scenes, adapting beliefs about the scenes to condition systematically and coherently on sequences of new statements made in language.
Translating from language to program expressions. As with the previous sections, we can now translate actual descriptions and questions about scenes, by using a large language-to-code model conditioned on the generative domain model and a few example pairs of language and code (see Appendix A.3.1 for the full prompt we provide to condition the language-program model).
The translations in Fig. 8 and Fig. 9 generally showcase the local generalizability and flexibility we illustrate in the other sectionsâthe translation is robust to conjunction and syntactic variation, differing numbers of object predicates (yellow object, red mug), compositions of object predicates (eg. a few mugs and bowls), negations over set quantity (there arenât any), and comparatives over object sets (more red mugs than green cans).
3https://www.blender.org/ 4In our implementation, which can be found in Appendix A.3.1, we derive named color predicates like green over the base generative model, which samples color properties over a continuous space of RGB values. This implementation suggests a more general pointâthat any number of lexical concepts, such as many more arbitrary color names over the underlying color space, can be derived as symbolic predicates over a richer continuous space reflected in the generative model. A similar approach could be taken for other lexical terms that carve up continuous spaces, such as prepositions like left, center, or near over geometric space.
26
3.2 Language for visual and physical reasoning
3 WORLD MODELS
A. Language-to-code translation B. Rendered scenes from conditioned generative model There is at least one red mug in this scene. (condition (>= length Be ((filter-color red) ((filter-shape 'mug) * (objects-in-scene 'scene)))) 1)) There are also at least three green cans. (condition (= Clength HR â(hilter-color green) ((Filter-shape 'can) (objects-in-scene âscene)))) 3)) There arenât any yellow objects. (condition (= (length Be ((filter-color yellow) (objects-in-scene 'scene))) @)) There are more red mugs than green cans. (condition (© Cength ((filter-color red) ((filter-shape 'mug) 2 Be (objects-in-scene 'âscene)))) : = (length ((filter-color green) ((filter-shape 'can) (objects-in-scene âscene))))
Figure 9: Each sentence in this sequence (left) translates into a separate, composable condition expressions that updates the underlying generative model over scene states. After each sentence, sampling symbolic scene states from the updated distribution and render-ing them (right) yields images that reflect the prior over scenes and are consistent with the information in all successive sentences.
Even on this relatively simple domain, Fig. 8 and Fig. 9 also showcase ways in which the LLM can represent conditional inferences from language to program expressions that go beyond simple, literal semantic meanings. These examples build on what we already find in Section 2.2, in which the LLM can contextually interpret vague language like very strong as thresholds on continuous variables in the generative world model. In this domain, find the LLM can translate vague quantifiers (like few, most, arenât a lot, a bunch, or arenât many) without explicit program predicates defining each lexical termâthe model can directly translate these terms into reasonable, interpretable quantities over sets of objects (such as translating only a few to (<= 3) objects). We also find that sampling from the distribution over meanings further supports the idea that the LLM represents a broader distribution over intended meanings, including acceptable semantic variation in the interpretation of vague lexical terms. Sampling from the distribution at higher temperatures, for instance, we find that our implementation variously translates most into program expressions that interpret
27
3.2 Language for visual and physical reasoning
3 WORLD MODELS
this as more than half, or more than 80%, or other greater fractions, of the set of overall objects in the scene. These translations draw on the language-to-code modelâs background prior on language itself (we do not prompt it with examples of these particular phrases), its amortized understanding of how these phrases relate to continuous, named variables in code (like length of a set of objects), and the particular context of the generative world model itself (which defines the prior over numbers of objects that determines the context-specific scale of these graded quantifiers.)
Translations of vague quantifiers like these have been handled in classical semantics and recent accounts as explicit, pragmatic and probabilistic inferences based on context-specific priorsâthe acceptable quantity most people would infer for many mugs on a table is intuitively very different from the quantity intended by many grains of sand (Edgington, 1992, 1997; Graff, 2000; Lassiter & Goodman, 2017). The results we show here provide further evidence that LLMs can often amortize many of these inferences, to directly predict common interpretations from language. As we discuss in Section 5, future work might explore more fluid, joint integrations of these approaches to inferring meanings, trading off between the amortized interpretations the LLM can produce and more explicit probabilistic inference, such as conditioning on other information in language. Learning that Sally is a wholesale porcelain supplier who owns thousands of mugs in a nearby warehouse might lead you to infer an updated meaning of Sally has many mugs, but is a complex inference that we might not expect to be amortized in an LLM from the background distribution of language and commented code.
Putting it together: Reasoning from language about visual scenes. Taken together, the examples in Fig. 8 show how this approach naturally extends the components of this frameworkâthe ability to describe possible worlds in language, flexibly updating a background distribution of beliefs within a conditioned generative model, and query this model to draw probabilistic inferencesâto also ground out in visual scenes. The more extended example in Fig. 9 highlights the more granular, intuitive way in which the distribution over scenes changes to reflect successive new sentences, updating a flexible distribution over scenes that still remains consistent with all of the previous observations.
# 3.2.2 Language about dynamic physical scenes
When we talk about a scene, however, we describe more than just the colors and shapes of objects sitting on a table. We talk in verbs, describing events unfolding in the changing, physical world around us. Consider, for instance, descriptions of another set of tabletop scenesâones that just involve a red object placed to the left of a blue one on a table (Fig. 10). These scenes are initially even simpler than our tables of cans and dishware, but still afford a range of dynamic and physics-specific descriptions.
You can easily imagine, for instance, what would happen if someone pushed the red ball gently to the rightâyou might say that it would bump into the blue ball, and you could likely imagine how fast the blue ball would be moving as a result. You can infer how these scenes would change if someone pushed the red ball much harder, as if shooting a billiard ball, or tapped it even more gently, nudging it forward with their finger, so that perhaps it wouldnât collide with the blue ball at all. These inferences are sensitive to many other properties of the scene, and of these objects, that we could describe in language, like whether the red ball is really heavy, or the blue ball is very light, or at least much lighter than the red one. If we changed the objects in the scene, and now placed a red block to the left of the blue one, your intuitive understanding of how different shapes relate to different degrees of friction would again change how you might see these scenes play out in your mind, and how you might answer questions about their collision and motion.
As adults, we have a deep, general understanding of how physical objects move and behave, and extensive developmental evidence suggests that well before we acquire language, we understand many core physical principles that govern our world (Baillargeon, 2004; Hespos & Baillargeon, 2008; Rips & Hespos, 2015; Spelke, 1990; Spelke, Gutheil, & Van de Walle, 1995; Téglás et al., 2011). A productive line of computational cognitive models, in turn, has modeled human physical understanding as probabilistic inferences over a mental physics engine, modeled as programmable physics simulation engines (Battaglia et al., 2013; de Avila Belbute-Peres, Smith, Allen, Tenenbaum, & Kolter, 2018; Lake et al., 2017; Ullman et al., 2017; Yi et al., 2019) like those used in video games, computer animation, and robotics (Coumans & Bai, 2016; Erez, Tassa, & Todorov, 2015; Todorov, Erez, & Tassa, 2012).
28
3.2 Language for visual and physical reasoning
3 WORLD MODELS
Generative world model of dynamic physical scenes Sampled scene graphs Imagine a table with a red object placed to the left of a blue one. Scene 1 Scene 2 emineaeasare pelierenintenterendteciert jvtect C ctor rd shoves spare sss: 8.2, 253, ve 18, sojectts (colo: rdy shape: ibe mak: 18, x: oR, vs Ay ebject-2: { color: blue, shapeâ sphere, mass: 3.0, x: @, v: 8.0, ebject-2: { color: blue, shape: cube, mass: 3.0, x: 8, v: 0.8, 0.9, force: 8.0.0 2: 0.0, force: 8.0...) (define choose_shapes...) (define choose_mass ...) (define get_initial_x...) t1 t=2 te2 t=10 (define generate-object ebject-1 object-1 ebject-1 object ebject-1 abject) (mem (lambda (obj-id) (list aera an oa, ae an aan (pair âobject-id obj-id) (choose_shape obj-id) vs 0.95...) VE 0.9.07 vs 0.01...) v3.0.7 W207 ve 18.) (choose_color obj-id) (choose_mass obj-id)...)))) ... (define generate-initial-scene-state...) .... (define simulate-physics (mem Clanibda (scene total_t delta_t) (let check_collisions ...) (let generate_next_scene_state_at_time...) ....)))) Reasoning about physical scenes from natural language Situation A Imagine that the red objectiis a ball, and is pretty heavy. The red ball hits the blue one. How fast does the blue ball move after the collision? (condition (get_singleton_object (Lanbda (object) Cand tion, Get sineleton (condition (get_singleton object Clanbda (object_1) EE Goshager âShnered object) (Get-singleton-object.Clanbda (object_2) og (© (get_attribute object 'mass) 2))))) (exists_event (lambda (event) (and (Cis_color? red) abject_1) (Gis_shape? 'sphere) object_1) And the blue objects also a ball, but is fairly light. BE (sccolor? blue) object.2) (Gis_shape? sphere) object_2) (ondition (get_singleton_object Clanbda (object) (and Gsparticipant_of event? event object_1) (Gs color? blue) object) Gs_participant_of event? event object_2) : (Cis_shape? "sphere) object) GsLevent? âcollision event))))))))) G (et_attribute object ânass) 2))))) Now imagine that the red ball is pushed forcefully to the right. Condition (get-singleton_object (lanbda (object) (and EBB (is color? red) object) (isishape? âsphere) object) © (get_attribute object 'f0) +6))))) Situation B Now how fast does the blue ball move after th Now, imagine that the red ball is quite light. âAnd the blue ball is somewhat heavy. The red ball is pushed gently to the right. Situation C fast does the blue block move after it is bumped by the red one? Imagine that all of the objects are blocks. The red block is still pretty light, And the blue block is quite heavy. We push the red block lightly to the right.
Figure 10: The way we talk about the world also draws on our intuitive physical knowledge. (Top) A probabilistic generative model describes a prior over tabletop scenes with a red and blue object placed side by side, of varying mass and shape. Integrating a physics simulation engine into this generative model allows this model to express a prior over dynamic scenes, modeling how each possible scene unfolds over time as differing initial forces are applied to the red object. (Bottom) Language about possible physical scenes can again be translated into conditions (blue) and queries (green) on the distribution over dynamic scenes. Rendering these scenes produces scenes that reflect these conditions, and inference over the simulations allows the framework to answer queries contingent on the properties of the objects described in language.
As with the previous example on visual scenes, our goal in this section will be to illustrate how the overarching framework we have described in this paper can integrate language with other domains of human reasoningâperception and visual imagination, or intuitive physical reasoning. By translating language into a probabilistic language of thought, we can relate the semantics of language to these other, well-studied computational and cognitive modeling approaches, using probabilistic programs as the underlying interface between language, inference, and these engines for perception and physical simulation.
This approach is closely related to other recent work from the AI literature, most notably R. Liu et al. (2022), which also extends large language models with physics engine to ground natural language in physical simulation. By incorporating an interface to physics within a general probabilistic programming language, we show here how these approaches can model the commonsense, probabilistic judgements we make about
29
3.2 Language for visual and physical reasoning
3 WORLD MODELS
everyday physical languageâincluding with respect to uncertainty and vagueness in language about the underlying world state, or combined with inputs from visual reasoning, as discussed in the prior section.
Integrating the probabilistic generative model over scenes with a physics engine. To model language about the example scenes we described hereâred and blue balls, or blocks, placed on a tabletop (Fig. 10)âwe implement a probabilistic generative model that is similar by design to the previous visual scenes domain (a very short excerpt appears in Fig. 10, and the full model appears in Appendix A.3.3). This generative program describes a prior over the possible properties of the objects initially set on a table, modeling scenes as a collection of objects in which each individual object is again generated (generate-object) based on stochastic choices over its possible properties (e.g choose_shapes). In this domain, however, we also model an explicit prior over the physical properties of each object, such as its mass (choose_mass), and the relationship between shape and friction (as a simple get_friction_constants function returns different constants, with a higher constant for blocks than spheres).
As with the visual scenes example, each sample from this generative model again returns a structured, symbolic representation of a possible initial scene state, as a list of object entities that represents each object as a dictionary-like mapping from attribute kinds. This dictionary also stores each objectâs initial kinematic state, such as its position, velocity, acceleration, and any forces applied to it. To model the various ways we can push the objects around, our generative model over scene also implements a stochastic function over possible initial forces (choose_initial_forces) applied to an object.
To model how each possible world unfolds as a dynamic scene over time, we implement a simulate_physics function (Fig. 10) that integrates the basic functionality of any programmable physics engine into the probabilistic modelâthis function takes in a scene state that specifies the relevant physical properties of objects, and returns a sequence of scene states forward simulated in time under the laws of physics. In our implementation, this sequence is a list of scene states at each timestep, each which contains its own set of the objects and their properties with relevant kinematic properties (like position, velocity, acceleration) updated at each timestep. The physics model we use in our example is simple enough that we implement it fully within the body of the probabilistic program itself (see Appendix A.3.3) for illustrative purposesâour simulate_physics updates each object at each timestep under the basic kinematic laws of Newtonian mechanics, includes a simple implementation of static and kinetic friction under gravity, and models simple collisions as impulse exchanges in momentum.
The rendered simulations we show in Fig. 10 also showcase the interplay between these modular, API-like interfaces integrated into a probabilistic language of thoughtâcombined with the render interface from the previous section, we can not only simulate underlying physical scene states, but visualize them by rendering each individual scene state in the sequence over time. Collectively, this model now captures a prior over tabletop scenes, models how any given scene in the distribution unfolds dynamically under physics, and captures how each scene appears visually over time.
Grounding physical language in program expressions. By extending the underlying probabilistic world model to interface with a physics engine, we can ground the semantics of language about the physical world in intuitive, human-like physical reasoning modeled by the physics simulation engine over world states. Descriptions of the physical properties of objects, for instance, like the blue ball is not very heavy (Fig. 10) translate into conditions on the mass property of an object in the world state, and maintain uncertainty inherent to languageâphrases like very heavy translate into conditions that threshold a continuous distribution of possible masses. As in the visual scene example, sampling from the conditioned generative model produces dynamic scene simulations that reflect language. Descriptions of heavy blue balls, or red blocks that are relatively light, or scenes in which a red ball is pushed forcefully, or in which a red block bumps into a blue one, all connote sets of scenes that model explicit, physical simulation. In turn, queries about distributions over physical scenes (like how fast a heavy blue ball will move after it is bumped ) reflect probabilistic inferences that condition on all of these relevant descriptions in language, estimated by sampling and running physical simulations over the possible world states.
In this example, we highlight an approach to translating verbs and descriptions of physical events (the red ball pushed forcefully to the right, the red ball hits the blue ball ) that grounds them directly over continuous variables in our world model. In Fig. 10, for example, our implementation translates pushed forcefully to the right into a condition expression that picks out a distribution of initial forces, over a space of continuous force
30
3.2 Language for visual and physical reasoning
3 WORLD MODELS
vectors with direction and magnitude, as the meaning of push in a physical world. Similarly, we translate hits with respect to collisions simulated by the physics engine between the two object entities.
In our appendix, however, we also implement and show how a discrete event semantics can also be constructed over variables in the physics engine, to highlight potential future connections between our implementation and more classical event semantics representations. Neo-Davidsonian event semantics and related approaches (D. Davidson & Rescher, 1967; Parsons, 1990), for instance, have long modeled events in language with discrete event entities and lexical event predicates (eg. is_hitting) that describe particular categories of events in time. Prior work in classical event semantics has also considered how discrete event representations relate to underlying physical forces (Talmy, 1988), with particularly close connections to lexical semantics approaches (Jackendoff, 1985; Levin, 1993; Pinker, 1984; Schuler, 2005; Talmy, 1988) that realize verb meanings into cognitively-grounded physical concepts of motion and forces.
Our implementation concretely realizes these semantic events and predicates as functions derived entirely on top of a fully realized, continuous world state modeled in a physics engineâis_hitting, for instance, is an event derived on top of the collision mechanics in the underlying physics engine. Other event predicates, like is_moving, or is_resting, can be similarly as thresholds on continuous kinematic properties (here, velocity) represented in the world state. Our broader goal is to show all of these can be broadly constructed over a probabilistic language of thought, which grounds out concretetly with respect to states in an implementable physics engine.
Translating from language to program expressions. As with our visual scenes example, the translations we show in Fig. 10 are again chosen to illustrate the generalization and amortized inferences that the language- to-code LLM can make. Much like vague quantifiers, we find that the context-conditioned LLM can directly infer reasonable meanings for graded terms that pick out thresholds over a numeric distributionâtranslating phrases like not very heavy, pretty heavy, and pretty light directly into reasonable, context-specific thresholds on continuous masses, or pushed gently and pushed forcefully into thresholds on forces. Again, we see interesting future grounds for further integrating these kinds of amortized inferences with more explicit, probabilistic inference mechanisms for deriving themâsuch as to integrate inferences over language with new contextual observations from other modalities, such as perceptual or motor observations from seeing or actually moving these objects that might update oneâs background beliefs over the distribution of masses.
Putting it together: Probabilistic inference and physics simulation from language. The examples in Fig. 10 show how this approach can capture the nuanced relationships between language and physical reasoning. Language that modulates any of the physical properties in our introduction to this section, from the masses of objects, their shapes and corresponding friction when moving, and the forces they receive, changes the distribution over internally simulated scenes, and is reflected in updated inferences about downstream events.
Future directions: Perception as inverse rendering and complex physical reasoning as intuitive phsyics. As with all of our other examples, its important to emphasize that our simulate_physics interface is almost the simplest possible world model we might construct over physical scenes. The approach we take here is inspired by, but much simpler than, many other probabilistic generative models (Allen, Smith, & Tenenbaum, 2020; Battaglia et al., 2013; Ullman et al., 2017; J. Wu et al., 2015a; Xu et al., 2021) of more complex object configurations in more complex environments (such as ramps, stacks of objects), many other properties that we can describe about objects themselves (such as their material ), and arbitrary forces (like bumping the table or dropping objects from above).
Our approach in these sections also suggests a rich line of future work for reasoning jointly about observations in language, and from perception. While we do not implement a perceptual module in our example, the framework we sketch here can be directly integrated with the broad body of work on inverse graphics, which frames scene understanding as inference from observed visual inputs to recover structured representations of a sceneâs contents (D. Kersten, Mamassian, & Yuille, 2004; D. K. D. Kersten & Yuille, 1996; Lee & Mumford, 2003; J. Wu, Tenenbaum, & Kohli, 2017; J. Wu, Yildirim, Lim, Freeman, & Tenenbaum, 2015b; Yi et al., 2018; Yildirim, Belledonne, Freiwald, & Tenenbaum, n.d.; Yuille & Kersten, 2006). Our framework suggests a particularly fruitful integration between language and the growing body of work that
31
3.2 Language for visual and physical reasoning
3 WORLD MODELS
combines probabilistic programs and graphics rendering engines (Gothoskar et al., 2021; Kulkarni et al., 2015; V. K. Mansinghka et al., 2013; Zinberg et al., 2019). To draw inferences about visual scenes from perceptual inputs, models like these incorporate convolutional neural networks to make fast, amortized proposals about the scene state from vision, but with respect to a generative program that defines the underlying scene state and guide inferences about particular scenes, such as to reason about occlusion.
Integrated with the approach we describe here, this framework could ground linguistic queries directly into vision, allowing structured inferences for visual question answering (e.g., counting the number of unique colors of the dishes in a scene). Moreover, it could enable more complex, joint inferences that integrate visual observation with linguistic information about latent physical properties of objects in a scene (e.g., mass, friction) or the presence or identity of occluded objects. Such multimodal integration holds the potential to shed further light on the ways that linguistic knowledge can shape our understanding of and reasoning about physical scenes.
32
3.3 Language for reasoning about agents and plans
3 WORLD MODELS
# 3.3 Language for reasoning about agents and plans
One of the most deeply human things we can talk about is other people. To conclude this section, we turn to language about other social beingsâagents who want things, chase goals, and plan how to act in the world around them.
As an illustrative example, we consider a domain (Fig. 11) inspired by C. L. Baker, Tenenbaum, and Saxe (2007), which evaluated commonsense social inferences about agents with different preferences and goals. In our slightly modified example, we consider a set of people with varying food preferences who are making plans for lunch. Based on the map shown in Fig. 11, weâll imagine which restaurant they might go to based on what foods they like, how far each restaurant is from their office (shown in blue), and whether restaurants happen to be open or closed. Weâll also note that students can bike or walk to any restaurant, but include the intuitive fact that biking is faster on roads, but slower than walking on the lawns.
The original experiments in C. L. Baker et al. (2007) used visual stimuli to depict agentsâ paths and plans, but language is a particularly natural and nuanced way to communicate information about other agents. Consider the range of situations we can describe in this simple example. We might leverage our wide vocabulary for describing the spectrum of someoneâs preferences and desiresâwhether they crave pizza or hate vegetables, or whether they love sushi rather than merely liking it. We might describe their more concrete, discrete goals, like getting to a pizza place or generally getting to the closest restaurant to the office. The inferences we draw from language also depend on our intuitions about agents themselves. All else being equal, we expect people to minimize the effort it takes to act, while trying to maximize the value they gain from acting. We might generally expect someone to walk down Ames Street if they wanted to go to the pizza place rather than taking an unnecessarily convoluted path, or to jump on a bike if they owned one, rather than taking a slower walk there. We also understand, of course, that people need to accommodate the world itself in their plans, and might not go to the pizza place no matter how much they love pizza, if they were told that the pizza place was closed.
Perhaps more subtly, but equally importantly, what we know about agents also informs the wide range of inferences we can draw from language about their actions. Consider, for instance, what you can infer from being told that someone had started at the office, and was now walking across the southern lawn. Because theyâre on a direct route towards the vegetarian place, you might infer that they are more likely to prefer vegetarian food, and that they either know or at least believe that the vegetarian place is open. Because they are walking on foot, you might also suspect that they do not own a bike, which would have allowed them to get to the restaurant more quickly. All of these inferences build on a cohesive picture of agents as a wholeâour expectations about agents as goal-directed, efficient actors inform how we think about any given action.
As with visual and physical reasoning, this section builds more generally on extensive work in cognitive science and artificial intelligence on social reasoning, and seeks to integrate this broader literature into our framework for language. Developmental evidence suggests that we have a core conceptual understanding of agents as goal-directed actors from a very young age (Csibra, 2008; Csibra, BÃró, Koós, & Gergely, 2003; R. M. Scott & Baillargeon, 2013; Spelke & Kinzler, 2007). Computational cognitive models, and the broader AI planning literature, have long approached social inferences like those we describe here under under a unifying model of planning and inverse planning (C. Baker et al., 2011; C. L. Baker, Saxe, & Tenenbaum, 2009; C. L. Baker et al., 2007; M. F. Cusumano-Towner, Radul, Wingate, & Mansinghka, 2017; Jara-Ettinger, Schulz, & Tenenbaum, 2020; Seaman, van de Meent, & Wingate, 2018). This framing couples the forward planning of actions that achieve goals or maximize utilities, to the inverse problem of inferring latent variables about the agent or the world from observations of their actions.
The example below shows how we can extend the modeling motif from our previous discussion of visual and physical reasoning, which shows how our framework can relate language to other core cognitive modules via interfaces implemented in a general probabilistic language of thought. In this section, we introduce model-based planners as another such core computational module, which can be integrated into this framework to support a wide range of probabilistic forward and inverse inferences about agents and their actions as they are referenced in language.
Integrating a probabilistic generative model over agents with a planner. As a concrete example, the generative program excerpted in Fig. 11 (shown in full in Appendix A.4.1) illustrates how an integrated
33
3.3 Language for reasoning about agents and plans
3 WORLD MODELS
Generative world model of planning domain Reference map g (define restaurants (list "sushi âpizza 'vegetarian)) (define is_open (mem (lambda (restaurant_type) (f1ip)))) (define gridvorld (List list âames "lawn âlawn âlawn sushi) list âames âlawn âlawn âLawn âdanner) ist âoffice âbarlow âbarlow âbarlow âdanner) list âames âlawn âlawn âLawn âdanner) (list âanes âlawn âlawn âlawn vegetarian) jy, CES "BED 'GtSonâcrson âcarson âdbner) Qo S+o Mass (define has_bike (mem (lambda (agent-id) (flip)))) define restaurant_utility (nem (lanbéa (agent-id restaurant_type) (uniform-draw (List (gaussian POSITIVE UTILITY. MEAN UTILITY VARIANCE) (eaussian NEGATIVE _UTILITY_MEAN UTILITY_VARIANCE))))) (define motion_utility (mem (lambda (agent-id location_type motion_type) (case location_type Carson Avenue (Clann) (ease motion_type (C'is_biking) -1) (Cisiwalking) -0.2) ...) Rr 0.05 f+ Be -0.05 (define plan (aen Canbda (agent-id map statex state_y) +-0.5 G+ H+-0.01 (let nap. transition=fn & Bes oo Get value_function Get optinal_policy_tron initial state ...) Barlow Street Ames Street Danner Street Reasoning about agents, goals, and plans from natural language Situation B Situation A Alex likes all of the nearby r condition (ond EDR Coondition (> (restourantutidity "Vie sushi) 19) Be Ci Gestauraneutitty âatexâsushi) © (restaurant_utility âalex âpizza) 0) © (restaurant_utility âalex 'vegetarian) 0))) 't mind vegetarian, either, but they h Condition Boe âGna 6 (rostauane-utitity âLio âvegetarian) 0) (< (restaurant_utility âlio âpizza) @))) for lunch? Where do you think Lio will go? Inference over plans Cavery Hp Gotactions "Lio (lambda (action) (and Gis_subject.of action? action â1i0) Gs_action? action "is going))))) Oh, but the su EBB (condition (oot isopen *sush)) Where do you think Lio will go now? (query $Bp (Getactions "Lio (Lanbda (action) (and Cis_subject_of_action? action â1io) Gs_action? action "is_going))))) SS EE
Figure 11: Our language about other people builds on our intuitions about how agents act on their preferences and goals. (Top) Our example probabilistic generative model describes a prior over agents with different preferences for the nearby restaurants shown on the map, as well as the relative cost of getting to each one on bike or on foot. Integrating a model-based planner into this generative model allows it to express a prior on how agents will actually act based on their desires, balancing these preferences against whether restaurants are open, and whether or not they have a bike. (Bottom) Observations and queries about the agents, their goals, and about the world itself updates a unified belief distribution, reflecting how agents plan in the world and how observing their actions drives inferences about the latent state in the world.
probabilistic modeling and planning language can describe the agents and restaurants domain.
To model background states of this environment, our implementation represents the spatial structure of the campus (as a simple gridworld map), and stochastic Boolean variables that model whether someone owns a bike (has_bike) and whether a given restaurant is open (is_open).
We then introduce a generic, utility-based formulation derived from the classical AI planning literature to
34
3.3 Language for reasoning about agents and plans
3 WORLD MODELS
model the varying preferences of any given person and the effort of taking particular actions (see Russell and Norvig (2021) for a review). Incorporated into a probabilistic generative model, we can express the distribution of preferences any particular agent could have, and the way these preferences interact with the stochastic mechanics of any given world. In our implementation, we model these varying preferences as a stochastic utility function associated with particular agents and restaurants (restaurant_utility). Our implementation shows a bimodal Gaussian distribution, in which people tend to have distinctly negative or positive preferences for any given restaurant, but any other formulation would be easily expressible. We also model how these preferences interact with other aspects of the worldâwe condition the value an agent derives from actually arriving at a restaurant (utility_at_restaurant_state) on whether or not it is open. These utilities interact with possible actions an agent can take to get to different restaurants. We model the distribution of possible actions an agent might take (our available_actions function conditions on whether an agent has_bike), and the varying effort of individual actions. Our motion_utility conditions on the type of action and the state in which it is used, to model the greater effort of biking on grass and the relative ease of biking on the road.
Up to this point, the generative model simply expresses a general prior over world states that includes agent preferences. Now, to model how an agent actually decides on a course of action conditioned on the world state, we can finally introduce a plan interface (Fig. 8) that calls out to a model-based planner. Our implementation, while simple, implements the basic functionality core to an AI plannerâit computes a sequence of actions that achieves a goal or maximizes a value function, subject to an agentâs underlying preferences, available actions, and the conditions of the environment. As with the physics interface in our previous section, our example planner implementation is simple and generic enough that we also implement it fully within the body of the probabilistic program itself (see Appendix A.4.1) for illustrative purposes. Our implementation here uses a simple value-iteration algorithm, which computes an optimal policy of action to trade off between the value an agent derives from particular restaurants, and the cost of taking actions (walking or biking in any given direction, from any location in the map) towards them.
Language about agents as program expressions. By augmenting the probabilistic generative model with a planner, we can now ground many of the basic ways we talk about agents themselves in probabilistic conditions and queries to this model.
Language about what people want and prefer, like whether someone wants, likes, loves, doesnât mind, or hates a given restaurant in our example domain, can construct formal conditions over underlying utility variables, that in turn drive inferences about how the agent will act. In the examples shown in Fig. 8, we illustrate the semantics of these terms as conditions constructed directly over the continuous utility variables defined in this domain. We could also derive a more explicit set of predicates (like a Boolean likes? predicate, defined over the underlying utilities), but as in several previous sections, we show these more transparent semantics (like translating likes into a > 0 threshold on utilities) to illustrate how language relates to our model of agents, and to demonstrate the amortized inferences that a language-to-code model can make in directly inferring these threshold values in context, and for new preference words.
Observations about relevant aspects of the environment, like whether the sushi place is closed or Alex has a bike, are translated as in previous sections into conditions on the generative world model. In this integrated framework, these observations now support downstream inferences about how agents might change their behavior with respect to what we are told about the world.
Finally, of course, explicit observations and queries about someoneâs goals, plans, and individual actions (Gabe was biking East on Barlow Street, or What restaurant will Alex go to for lunch? ) can be interpreted with respect to the underlying, model-based planner, to drive inferences about forward planning agents choosing actions in the world, and inverse inferences over the many latent variables in the world that collectively explain language about someoneâs actions.
Translating language using language-program distributions. We showcase several distinct examples (Fig. 11) of context-sensitive, pragmatic inferences derived using a language-to-code meaning function conditioned on language and this generative world model.
As in previous sections, we find that the LLM can directly ground vague, graded terms in context-specific thresholds over particular continuous variables in the probabilistic world model. Here, this approach grounds preference terms (doesnât mind, loves) into reasonable thresholds over the utility variables in the world model
35
3.3 Language for reasoning about agents and plans
3 WORLD MODELS
(Fig. 11). We find that the LLM can both infer reasonable utility thresholds and generalize to words not explicitly given as example translations: we prompt the model with a handful of examples pairs, such as a translation that maps the word likes to a > 0 threshold on utilities, and the LLM successively generalizes this parse to ground other preference terms like hate and love, presumably based on the comparative valences of these preference terms in the broader distribution of language.
We also find that the LLM can directly translate quantifiers over contextual sets in this domainâlike likes all of the nearby restaurantsâinto a conjunction over the set of restaurant literals in this domain, by conditioning on the generative world model during parsing. More concretely, this means the LLM identifies the relevant restaurants list (shown in the excerpted generative world model in Fig. 11), and conditions on it to directly produce the unrolled conjunction over the list contents ((and (is_open 'sushi) (is_open 'pizza)...) intended by all restaurants, amortizing the computation that would have otherwise been necessary over a more literal semantics like (all restaurants). Together with the previous sections, these examples suggest how our framework might jointly support explicit inferences from language into various expressions in a language of thought, and learned patterns from the large language-to-code model that amortize some of these inferences over time, which we discuss directly as grounds for future work in Section 5.
Putting it together: Probabilistic inference and planning over language. The several example dialogues shown in Section 2 show how this approach captures the integrated social inferences we make about agents in language. We can now query the plans and goals of agents, deriving inferences with respect to the forward planning module incorporated into the underlying generative model, conditioning flexibly on arbitrary information in context, and updating expectations about where agents will go, and how they will change their plans based on new observations about the world. In turn, we can derive inverse planning inferences, like whether the pizza place is open, based on relatively tangential information about someoneâs actionsâknowing that an agent really likes pizza, but is seen taking a path that wouldnât efficiently lead them there. All of these inferences fall out of the same underlying generative model, which unifies these distinct observations about people and the world in language with respect to a formal model of how agents tend to behave.
Future directions: Scaling integrated world models for planning and inference. The plan function in our example implements a very simple but model-based plannerâit computes actions based on an underlying, structured model of the world. In comparison to the other domains in this paper, linguistic planning and social reasoning have received perhaps the attention in recent work, in part because complex reasoning about other agents (Ullman, 2023) and precise general planning tasks (Bubeck et al., 2023; Valmeekam et al., 2023) appear to pose outstanding challenges for even the largest current language models. Recent work has sought to interface large language models with classical planning languages and symbolic planners (eg. Collins, Wong, Feng, Wei, and Tenenbaum (2022); Ding, Zhang, Paxton, and Zhang (2023); B. Liu et al. (2023); Xie et al. (2023)), as well as general purpose programming languages used to express code-based policies (G. Wang et al., 2023). All of these approaches suggest directions for scaling the simple planning implementation we show hereâour goal is to show how classical planning approaches can be nested within and integrated into probabilistic generative models to support a range of complex reasoning about other agents to infer their goals and actions from information in language.
Collectively, the broader cognitive science and AI planning literature suggests many directions for scaling up this model towards more of the nuance in human social reasoning, each which would in turn allow this paradigm to ground richer and more nuanced descriptions of agents, and the inferences we draw from this language. Some of the most important ones include planners and planning languages that designed to express explicit and discrete goals, like wanting to be at the highest rated pizza place within a half-mile radius or trying to get a plate of sushi for under ten dollars, rather than continuous values and utilities (G. Davidson, Gureckis, & Lake, 2022; Fikes & Nilsson, 1971; D. McDermott, 1982; D. M. McDermott, 2000; Pednault, 1989); planners that model explicit uncertainty about the world itself, like agents who donât know whether a restaurant is open or closed until they get there (C. Baker et al., 2011; Kaelbling & Lozano-Pérez, 2013; Zhi-Xuan, Mann, Silver, Tenenbaum, & Mansinghka, 2020); hierarchical planners that recursively turn goals into more specific subgoals to account for plans over longer timescales, at differing levels of abstraction Kaelbling and Lozano-Pérez (2011); and recursive models of agents who are themselves thinking about other agents, such as models of two people trying to meet up at a restaurant that they think will satisfy both of
36
3.3 Language for reasoning about agents and plans
3 WORLD MODELS
them, or where they might be most likely to find the other (C. Baker et al., 2011; Krafft, Baker, Pentland, & Tenenbaum, 2016; S. A. Wu et al., 2021). Each of these could allow this paradigm to ground richer and more nuanced descriptions of agents, and the inferences we draw from this language.
Conclusions. Together with the previous sections on vision and physics, our approach to grounding language about social agents highlights the more general computational account suggested by our framework. By translating language into probabilistic programs, language can construct, describe, and drive inferences over our internal world models. These may in turn incorporate many more specific computational enginesâ modeling how scenes are visualized, how physics unfolds in the world, or how agents plan towards their goalsâ as modular interfaces that can be called upon in a general probabilistic language of thought.
37
4 GROWING WORLD MODELS
# 4 Growing and constructing world models from language
In Section 3, we illustrated how domain theories expressed in a probabilistic language-of-thought can provide flexible and powerful scaffolding for language understanding. In each, generative world modeling programs provided a unified substrate for defining a structured domain model and representing the meanings of sentences. But where do these world models come from? If we want our PLoT account of language understanding to scale beyond the knowledge that can be hand-coded by a programmer, we need to provided some account of how such a system might acquire new concepts and domain theories.
One of the hallmarks of human communication is our ability to teach each other fundamentally new concepts in language. We coin new words, define interrelated conceptual systems, and describe entirely new world models, explaining the abstract underlying structure of whole domains. Because language spans so many aspects of human thought, it is perhaps a uniquely powerful tool for structuring learning. In language, we can define new concepts and domains that are integrated into our inferences, relational reasoning, understanding of the visual and physical world, and goals and plans.
How do we learn new concepts and world models from language? And how can we build computational systems that can be taught in language as we teach each other? In this section, we showcase the extensibility of the framework we have proposed as a unified model for relating language to thinking. Because world models in a PPL are expressed as programs, the same core computational components can be used to extend and construct world models themselves from language. In Section 4.1, we show how we can extend an existing domain model with new lexical concepts. Then, in Section 4.2, we turn to language that communicates an entire background domain model from scratch. Through these simple explorations, we aim to point towards a near-term horizon where systems might construct rich and nuanced probabilistic models to make sense of their linguistic environments and the broader world around them.
# 4.1 Growing a world model from language
How can we enrich our world models with concepts learned from language? Letâs consider, for instance, the kinship domain model used in the relational reasoning example in Section 3.1. The probabilistic program used in this example described a basic generative model over family trees, and then defined a handful of primitives, such as concepts for grandparent and sibling. But most people know and can talk about many more kinship relations than those included in that simple example, such as uncles, aunts, and cousins. What happens when we use language that invokes one of these undefined concepts?
Condition: Avery is Blakeâs uncle.
(condition (exists (lambda (x) (and (sibling-of? x 'avery) (parent-of? x 'blake)))))
The LLM proposes an initial translation that includes some of the important components in the concept of an âuncle.â However, several key details are not quite right: an uncle should be the brother of Averyâs parent, not just a generic sibling. Moreover, an uncle can come from outside the bloodline, in which case this definition would not fit. Much like a person learning English, the LLM has a partial notion of this concept, but could benefit from more explicit instruction from a knowledgeable teacher. In this section, we introduce a new define construct that does just this by prompting the LLM to generate a new definition from language.
Define: An uncle is the brother of oneâs father or mother, or the husband of oneâs aunt.
(define (uncle-of? name_a name_b) (or (exists (lambda (x) (and (brother-of? name_a x) (parent-of? x name_b))))
38
4.1 Growing a world model from language
4 GROWING WORLD MODELS
(exists (lambda (x) (and (husband-of? name_a x) (aunt-of? x name_b) )))))
Weâve used define to fill in a bit of common knowledge that was missing from our conceptual system. But the mental frameworks we use to reason about the world are constantly undergoing conceptual change, both at an individual and a societal level (Carey, 1999; Posner, Strike, Hewson, & Gertzog, 1982). For instance, shifts in cultural notions of gender and identity have introduced new kinship terms into English. One of the hallmarks of language is the ease with which we can coin and communicate new concepts, like the following:
âPiblingâ is a gender-neutral term for âauntâ or âuncleâ that refers to the sibling of oneâs parent.
Finally, as we touched on in Section 3.1, kinship systems vary widely; certain cultures have kinship concepts that are more granular than those found in English. For instance:
In the language of the Northern Paiute, a group of peoples indigenous to the Great Basin region of the US, âp¯aanâiâ refers specifically to the sister of oneâs father.5
From this definition, we can incorporate the concept of a p¯aanâi into our growing set of kinship concepts. Our framework elegantly captures this ability to learn new concepts in language that we can then use productively to construct new sentences and reason about coherently against the background of our existing world knowledge. Here, we walk concretely through how the basic components of our framework are combined to grow the original kinship model with new concepts.
Linguistic meanings as program expressions. Much as we interpreted observations as program expressions that conditioned an existing world model, and questions as program expressions that queried it, a sentence like, The term âp¯aanâiâ refers to the sister of oneâs father, can be modeled as a program expression that defines a new such primitive relation, paani-of?. The examples in Fig. 12 show how the semantics of this sentence, along with the other kinship concepts introduced in the introduction to this section, can be similarly understood as expressions that define new conceptual primitives. These expressions are particularly interesting because they are defined in terms of other concepts, like sister-of? and father-of?, that make up this conceptual system. In this way, our treatment of concept learning is closely linked to the idea of a conceptual role semantics (Block, 1998; Field, 1977; Greenberg & Harman, 2005; Harman, 1982), in which concepts (including lexical concepts) derive meaning from their interrelated roles and relationships to other concepts. In these examples, interpreting these sentences as program expressions defined over the base generative model showcases the flexible role that the generative modeling program can play, in relation to language about the domain. While our example showcases simple relational definitions over the underlying world model, it is worth noting that these are not the only kinds of functional definitions that we could learn to extend a world model from language. This general approach can be used to make meaning from sentences that grow an underlying world model in other ways, such as by defining new random variables (like phenotypic eye colors or other inherited traits) that extend the probabilistic generative model.
Translating with a language-program distribution. While the meanings of these sentences play a different role in our frameworkâthey extend the world modeling program, rather than condition or query itâthey are still program expressions. Therefore, with minor adjustments, we can use the same language-to- code LLM approach to ground these new concepts in our world model. To derive each of the translations shown in Fig. 12, we feed the LLM the same prompt as in Section 3, which includes the existing generative model and example translations. The final line of the prompt begins with Define: and contains the language describing the new concept definition. Each sentence is then then translated into the new define statements which construct new conceptual kinship primitives. In sum, linguistic definitions are simply another kind of program expression we can translate into from language.
5At the time of writing, a Google search for the term âp¯aanâiâ yielded zero results. The term itself was pulled from a non-searchable table in a century-old manuscript (Lowie, 1930). As far as real-world kinship terms go, it is comparatively unlikelyâthough not impossibleâthat âp¯aanâiâ was part of Codexâs pretraining data.
39
4.1 Growing a world model from language
4 GROWING WORLD MODELS
A. Existing generative world model B. Defining new concepts via language-to-code translation (define (person person-id parent-1-id parent-2-id) ist An uncle is the brother of one's parent, or the husband of one's aunt. (pair 'person-id person-id) (pair âname person-id) (pair âgender (person->gender person-id)) (define (uncle-of? name_a name_b) (pair 'parent-1-id parent-1-id) (or (exists (lambda (x) (and (brother-of? name_a x) Ss (parent-of? x name_b)))) (exists (lambda (x) (and (husband-of? name_a x) (aunt-of? x name_b)))))) (pair 'âparent-2-id parent-: (define (parent-of? name_a name_b) (member? name_a (parents-of name_b))) (define (father-of? name_a name_b) (and (equal? (get-property name_a âgender) âmale) (parent-of? name_a name_b))) (define (sister-of? name_a name_b) (define (pibling-of? name_a name_b) (and (equal? (get-property name_a âgender) 'female) eo (or (uncle-of? name_a name_b) (sibling-of? name_a name_b))) (aunt-of? name_a name_b))) C. Extended world model to the sister of oneâs f. (define (uncle-of? name_a name_b) (define (paani-of? name_a name_b) (define (pibling-of? name_a name_b) » (exists (lambda (x) (and we , (sister-of? name_a x) (define (paani-of? name_a name_b) (father-of? x name_b))))) D. Grounding new language in learned concepts According to historical records, Chie of Numaga. Winnemucca (condition R (or (father-of? âwinnemucca ânumaga) (uncle-of? âwinnemucca ânumaga))) Numaga Numaga's son would have called Sarah Winnemucca his âpaanâiâ. (condition san » (exists (lambda (x) (and weneae » (son-of? x ânumaga) (paani-of? âsarah x))))) = Venere Sarah had a sibling named Natchez. » (condition b (sibling-of? âsarah ânatchez)) Natche | (query (length (filter-tree » (lambda (x) 2 » (exists (lambda (y) (and (pibling-of? x y) (son-of? y ânumaga))))))))
Figure 12: Extending the kinship world model with linguistic descriptions of kinship relations drawn from contemporary English and a low-resource language (Northern Paiute). A language-to-code LLM is prompted with (A) the existing generative model code and (B) language describing novel kinship relations to produce new concept definitions in Church. The extended world model (C) now supports probabilistic reasoning from language that contains these new concepts (D).
40
4.2 Constructing new world models from language
4 GROWING WORLD MODELS
Growing the domain model with new program expressions. Finally, by incorporating the meanings of sentences like The term âp¯aanâiâ refers to the sister of oneâs father back into the domain model itself, we have formalized a simple approach for enriching a world models with concepts learned from language. Each sentence shown in Fig. 12 is translated into a program expression that defines a new relational function which extends the set of conceptual primitives that comprise the extended kinship domain.
The more general principle here is not limited, of course, to kinship concepts. We could extend any of the
domain models in each of our previous examples with new concepts learned from language. For example:
In tug of war, the strongest person on a team is referred to as the âanchorâ.
A âmonochromeâ scene is one in which every object is the same color.
⢠On âNational Restaurant Dayâ, all the restaurants in town are guaranteed to be open.
Our proposal in this section is closely related to other work which formalizes the learning of new concepts as the learning of new program components, such as program synthesis systems that bootstrap a growing library of domain-specific concepts constructed out of an initial programming language (Bowers et al., 2023; Dechter, Malmaud, Adams, & Tenenbaum, 2013; Ellis et al., 2020); work that formalizes the learning of new concepts from language as the learning of new program primitives (Shin, Brockschmidt, Allamanis, & Polozov, 2018; Sumers, Hawkins, Ho, Griffiths, & Hadfield-Menell, 2022; C. Wong, Ellis, Tenenbaum, & Andreas, 2021); and semantic parsers that bootstrap lexicons of compositional word meanings, defined in a formal logical language, for interpreting new sentences (Artzi, Das, & Petrov, 2014; Cai & Yates, 2013; Kwiatkowski, Zettlemoyer, Goldwater, & Steedman, 2011).
The framing we describe here showcases the tight integration between language, meanings, and the probabilistic programs that form the formal substrate for modeling the world in our framework. Language that specifies new parts of a world model can be cleanly interpreted as program expressions, which are used to extend the generative world modeling program itself. These generative models in turn provide the basis for reasoning about new observations that build on these learned and structured bodies of conceptual knowledge. Returning to the themes of our introduction, human-like thinking, under the broader computational approach we take throughout this paper, is formalized as probabilistic programming and inference over probabilistic programs. This is how we construct models of and reason about the world. Language, then, is an especially powerful tool for constructing programs of all kindsâones that condition and query existing world models, and ones that actually construct and extend the flexible domain models themselves that undergird linguistic meaning and thought.
# 4.2 Constructing new world models from language
So far, we have assumed that language understanding happens in the context of a particular world model appropriate for the situation at hand, containing definitions of key concepts like sibling for kinship reasoning, or strength for reasoning about playground games. We have now seen how these models can be extended with new lexical definitions on the fly, but the question remains of where these background world models come from in the first place. The full answer to this question is likely complex: people learn about the world in all sorts of ways. But in some settings, people do seem to acquire new world models largely through language: we read the rules of new games, are taught the workings of machines, and take classes on the causal structure of many other complex systems (the human body, the solar system, the government). In this section, we broaden our scope beyond language that conveys new concepts that extend an existing domain model to consider how language can define entire new domain models from scratch.
As a concrete example, letâs return to the scenario from Section 2.2. Suppose your friend is telling you about a tug-of-war tournament that took place the prior weekendâonly this time, youâve never heard of tug-of-war before and donât know how itâs played. Your friend might explain the scenario to you using languageâindeed, their description might sound similar to the one our paper itself uses to convey the concepts of this particular situation:
Tug-of-war is a game played between teams of players. First, strength levels vary widely from person to person. Furthermore, each person has a percentage of the time that they are lazy. The strength of a team is the combined strength of its members, except that in any given match, each
41
4.2 Constructing new world models from language
4 GROWING WORLD MODELS
player may decide to be lazy, and thus contribute only half of their strength. Whether one team beats another just depends on which team pulls stronger that match.
Given this language, you can learn the underlying domain model necessary to reason about future observations (Even working as a team, Lio and Alex could not beat Josh) and answer questions (How strong is Josh? ). In this section, we explore how the components of our framework can be used to construct an entire domain model as it is communicated in language, using the tug-of-war domain as an illustrative example.
Linguistic concepts as program expressions. Considering the vignette above, we might distinguish between two kinds of statement in your friendâs description of tug-of-war:
⢠Some statements introduce new concepts solely in terms of previously introduced concepts (e.g., Whether one team beats another just depends on which team pulls stronger that match).
⢠Other statements posit the existence of new primitive concepts, like strength and laziness, that have certain properties (e.g., Strength levels vary widely from person to person).
The first case is similar to the sentences we saw in Section 4.1, and we can interpret them as language-of- thought definitions. The second case, however, is genuinely new: these sentences neither define new words in terms of an existing domain theory, nor encode predicates over possible worlds. Rather, they define random variables that we expect to have different values in each possible world.6 In Church, such variables can be defined using mem: for example,
# (define strength (mem (lambda (person) (normal 100 20))))
declares that expressions of the form (strength person) are well-formed and evaluate to a number in each possible world, and that our prior distribution for a new personâs strength is a Gaussian centered at 100. (The mem construct memoizes the defined function, so that repeatedly evaluating (strength 'lio) in the same world will always give the same result.) It might seem strange to claim that the meaning of the sentence âPlayers have different strength levelsâ includes a specific prior over player strengths, like (normal 100 20). We do not make this claim: rather, the meaning function induces a distribution over possible definitions of strength, each of which uses a different prior. What the different possible translations have in common is that they model strength as a continuous variable assigned on a per-player basis, with some population-level variation. See Footnote 6 for further discussion of this distribution, and how it might arise from the literal meaning of the sentence being translated.
Translating new concepts from language. As before, because each sentence means some distribution over program fragments in a probabilistic language, we can use probabilistic language-to-code translation models like Codex as models of the meaning function. In Fig. 13, we prompt Codex with an unrelated example world model in domain about diseases and symptoms, and then ask it to translate sentences defining the tug-of-war domain.
6 An alternative perspective is that the sentences we consider in this sectionâboth straightforward definitions, and sentences introducing new primitive conceptsâdo still encode predicates on possible worlds. According to this viewpoint, a sentence like âThe term âuncleâ refers to the brother of oneâs parent, or the husband of oneâs auntâ is an assertion that can be true or false; maybe uncle means something different in another possible world. To understand this viewpoint within our framework, we need to imagine that there is a background world model that models uncertainty about the code of a first-order world model (which definitions exist, and how they are defined). If we had such a model over world models, then sentences like âPlayers have different strength levelsâ could be interpreted as conditioning statements, observing that strength exists as a variable and that its value should vary from person to person. Conditioning on this constraint, we could then sample from the posterior over world models that satisfy this property. In this posterior, there would be some uncertainty over exactly how strength is modeled: e.g., does it vary according to a Gaussian distribution, and if so, with what parameters?
We find this view appealing, and believe that making it practical would be an intriguing technical challenge, requiring new developments in the field of Bayesian probabilistic program synthesis (Saad, Cusumano-Towner, Schaechtle, Rinard, & Mansinghka, 2019). In this section, we take a shortcut, of assuming that the meaning distribution induced by a sentence like âPlayers have different strength levelsâ directly samples model fragments consistent with the statement. That is, we ask our meaning function to amortize inference in the hierarchical model, directly proposing code defining strength, rather than first translating to a conditioning statement about strength existing, and then using a slower inference algorithm to infer its definition.
42
4.2 Constructing new world models from language
4 GROWING WORLD MODELS
A. Prompt, containing unrelated example world model 3; We define a probabilistic model in Church of the following scenario. 3; At any given time, about 1% of the population has lung cancer, 3) 20% have a cold, 10% have a stomach flu, and 0.5% have TB. (define lung-cancer (mem (lambda (person) (flip 0.01)))) (define cold (mem (lambda (person) (flip 0.2)))) (define stomach-flu (mem (lambda (person) (flip @.1)))) (define TB (mem (lambda (person) (flip @.005)))) 3; If you have a cold, there's a 50% chance you have a cough. 3} 30% of people with lung cancer have a cough, and 70% with TB. 3; There's also a small chance you have a cough even if you're otherwise healthy. (define cough (mem (lambda (person) (or (and (cold person) (flip 0.5)) (and (Llung-cancer person) (flip 0.3)) (and (TB person) (flip @.7)) (flip @.01))))) 3; Whether a person coughs during a particular visit to the doctor's office 3; depends on whether they have a cough, and a bit of random chance. ;; Note that this will differ each time they go to the doctor's office, so 3; we do not use âmemâ (which memoizes the result). (define coughs-on-particular-visit (lambda (person) (and (cough person) (flip 0.7)))) B. Defining a new world model from scratch via language-to-code translation 3; Now, let's define a different probabilistic model of the following scenario. 3; It is totally unrelated to the previous model and does not reference the functions above. = (define strength (mem (lambda (person) (normal 10@ 20)))) Furthermore, each person has a percentage of the time that they are lazy. = (define laziness (mem (lambda (person) (uniform @ 1)))) The strength of a team is the combined strength of its members, except that in any given match, each player may decide to be lazy, and thus contribute only half of their strength. (define team-strength (lambda (members) (apply + (map (lambda (member) = (if (flip (laziness member)) (/ (strength member) 2) (strength member) )) members) ))) Whether one team beats another just depends on which team pulls stronger that match. (define team-beats-team 4 (lambda (team1 team2) (> (team-strength team1) (team-strength team2)))))
Figure 13: Constructing the tug-of-war model from scratch. This can be accomplished with the same overarching language-to-code approach. (A) We provide a prompt containing one or more unrelated world models as examples. (In this case, the world model defines a medical diagnosis domain.) (B) Prompted line-by-line with language explaining the tug-of-war, Codex constructs a generative model from scratch that is semantically equivalent to the one from Section 2.2 (modulo some superficial naming and parameter choices).
43
4.2 Constructing new world models from language
4 GROWING WORLD MODELS
Constructing the domain model from new program expressions. By translating each sentence in a domain description in sequence, we canâstarting with no definitions beyond those built into Churchâbuild a domain model just as rich as the ones we hand-coded in earlier sections. In Fig. 13, although the specific priors may vary slightly, Codex recovers all the essential structure of our hand-coded tug-of-war model. Once we have a new domain model, we can immediately begin interpreting observations and queries, like those in Section 2.2, or continue to extend the domain model with new definitions.
In this section, Putting it together: Growing and constructing world models from language. weâve illustrated how the same basic building blocks used in the rest of the paper â language-to-code translation and probabilistic programs â can be used to extend and construct new world models. Hopefully, these simple sketches highlight a much deeper point: systems that have the ability to author world models in a universal programming language like Church can take advantage of the infinite expressivity of code to generalize to new kinds of language and thinking.
Nevertheless, the examples presented in Section 4 were limited to cases where there was an explicit connection between linguistic instructions and the resulting probabilistic programming expressions. In reality, this relationship is often indirect; language typically only provides us clues about how to think about a situation. In still other instances, we assemble world models in the absence of language, drawing instead on prior experience of similar situations. How can we build systems that learn to build world models on-the-fly? How can such systems remember and expand on prior world models to understand new situations? And how can they incorporate not just language, but the full spectrum of experiences in the world? In Section 5, we consider these questions as part of a discussion of the many future research directions needed to scale our framework to a general model of cognition.
44
5 FUTURE DIRECTIONS
# 5 Open questions and future directions
By using neural models to translate sentences into probabilistic programs, the sections above demonstrated how LLMs could extract meaning fromâand inference engines could reason aboutâlanguage describing uncertain situations, relational structures, embodied situations and goal-directed reasoning. However, these vignettes also leave open many questions about how to scale this framework to more complex language, and how to automate the process of building meaning representations for new domains. Together, these questions offer a roadmap for progress on central challenges in modeling language, reasoning, and their interaction, across many sub-fields of artificial intelligence and cognitive science.
# 5.1 Scaling models of rational meaning construction
We begin by describing several of the most important research directions necessary for scaling the framework we have articulated throughout this paper towards a more complete model of integrated cognition and language understanding.
# 5.1.1 Building new world models on the fly
A key aspect of our proposed architecture is that language is interpreted relative to a probabilistic model of a domain, capturing just enough structure to represent the situation at hand. In Section 4.2, we saw that LLMs could generate these programmatic world models, assuming the model was communicated via a sequence of natural language definitions. But people rarely need such elaborate scene-setting: we can understand language about the world even if no teacher has carefully drawn our attention to the relevant concepts beforehand. A key question is how to model this capability. How do minds craft bespoke world models on the fly, drawing in just enough of our knowledge about the world to answer the questions of interest? How does this process balance competing priorities, such as fidelity to what we know about the world, relevance to the problem at hand, and the efficiency and robustness of inference? These tradeoffs can sometimes seem to evolve during the course of a single chain of human thought. These questions are related to the classic frame problem (McCarthy, 1980) in artificial intelligence and cognitive science, and to recent proposals for addressing it in the setting of causal, probabilistic reasoning (Icard & Goodman, 2015). These approaches view the problem as one of retrieval: from a vast array of knowledge we have about the world, how can we select just the relevant parts for reasoning about a particular problem? It remains unclear, however, whether the sequences of bespoke models and approximate inferences produced by our minds can be understood as resource-rational approximations to coherent reasoning and planning in some larger, unifying world model, even in principle.
Most probabilistic programming languages were designed for inference in a single, unifying world model (Bingham et al., 2019; Carpenter et al., 2017; Goodman et al., 2008; Milch et al., 2007) that was written by an external mechanism, not to dynamically explore a sequence of probabilistic programs that are being synthesized, learned, and/or edited on the fly. But some progress in language-level support for dynamic world modeling has already been made. Probabilistic programs in Gen (M. F. Cusumano-Towner, Saad, Lew, & Mansinghka, 2019) have been used to synthesize and edit other probabilistic programs (Saad et al., 2019; Witty, Lew, Jensen, & Mansinghka, 2019), and to approximate globally coherent inferences by bridging across sequences of probabilistic programs describing translations among only partially-overlapping worlds (M. Cusumano-Towner, Bichsel, Gehr, Vechev, & Mansinghka, 2018; M. Cusumano-Towner, Lew, & Mansinghka, 2020; A. K. Lew, Matheos, et al., 2023; V. K. Mansinghka et al., 2018). Analogous language-level support for dynamic abstraction for planning with symbolic world models has also been developed (Zhi-Xuan, 2022). It remains to be seen to what extent these new degrees of freedom can be exploited by language-to-code models targeting these newer probabilistic programming platforms.
How could the common-sense background knowledge needed for dynamic world model synthesis be represented, even in principle? Modern game engines may provide important clues. They can be reconfigured and scripted to simulate diverse imaginary worlds and narratives, featuring interactions between physical objects and goal directed agents in both realistic and physically impossible environments. They routinely combine simulations of the same environment at multiple levels of detail, making computational tradeoffs that are in some ways analogous to the tradeoffs faced by human thinking. The level of scale, coherence, realism, and computational efficiency that they achieve still vastly outstrips the best multi-modal neural
45
5.1 Scaling models of rational meaning construction
5 FUTURE DIRECTIONS
models. Although some progress is already being made by synthesizing lightweight, probabilistic game engine scripts using language-to-code models (C. E. Zhang, Wong, Grand, & Tenenbaum, 2023), many fundamental challenges remain. Game engines lack crucial affordances for robustly fitting world models to sparse data, simulating rare events, and planning under uncertainty. And despite promising progress in neurally-guided program learning (Ellis et al., 2020), showing that libraries and DSLs can be learned from sparse data, there seems to be a long way to go before we can learn game-engine like rules that are sufficient to robustly model common sense. Flexible synthesis and learning mechanisms that can hope to scale across the vast scope of human thought thus seems to require new ideas that span and integrate probabilistic programming, cognitive architecture, and hierarchical program learning.
# 5.1.2 Scaling probabilistic inference in dynamically synthesized world models
A central challenge not addressed by this paper is how to scale probabilistic inference to begin to approach the robustness, speed, efficiency, and flexibility of human thought. Consider that the rejection sampling algorithm used in Sections 3 and 4 requires an exponentially-growing number of proposal attempts as the scenario becomes less likely under the prior. Although many exact inference methods for probabilistic programs are much faster and more reliable, they are too restrictive to support many of the world models in this paper (Gehr, Misailovic, & Vechev, 2016; Gehr, Steffen, & Vechev, 2020; Holtzen, Van den Broeck, & Millstein, 2020; Saad, Rinard, & Mansinghka, 2021; Shan & Ramsey, 2017). And although there are many approaches to generic approximate inference in probabilistic programs, drawing on MCMC (Carpenter et al., 2017; Goodman et al., 2008; Wingate, Stuhlmüller, & Goodman, 2011), sequential Monte Carlo (V. Mansinghka et al., 2014; Tolpin, van de Meent, Yang, & Wood, 2016), and variational methods (Bingham et al., 2019; Hoffman, Blei, Wang, & Paisley, 2013; V. Mansinghka et al., 2014; Ranganath, Gerrish, & Blei, 2014), they all routinely struggle to solve simple problems that can be solved by custom algorithms.
One potential way forward is to explicitly generate models of thinking processes that augment the world models with which they are thinking, by synthesizing inference programs (M. F. Cusumano-Towner et al., 2019; V. K. Mansinghka et al., 2018) tailored to specific problems. For example, Ventureâs inference meta- programming language is designed to enable concise specification of sequential inference processes that combine SMC, dynamic programming, MCMC, gradient-based optimization, and variational inference to perform inference in a sequence of world models and queries that grows dynamically. Data-driven proposals for use with these thinking strategies can also be generated in real-time, without any offline learning, using dynamic programming over blocks of highly coupled variables. This approach has recently outperformed machine learning methods on hard common-sense reasoning problems in databases with millions of records (A. Lew, Agrawal, Sontag, & Mansinghka, 2021). Scaling this approach will require not just synthesizing world models but automatically analyzing and decomposing them, analogously to how inference algorithm designers decompose large inference problems into sequences of more tractable subproblems.
Another promising approach is to train neural networks to make data-driven proposals via amortized inference, potentially using synthetic data from an open-ended simulator of world models and queries (M. Wu & Goodman, 2022). This can be seen as an alternative to inference programming, avoiding the need for explicit symbolic analysis of the process of thought. It can also be seen as a potential technique by which inference programs might eventually be synthesized, once a suitable training corpus can be generated synthetically â as well as a source of data-driven proposals that can be recombined by inference programs.
# 5.1.3 Resource rational amortization in meaning construction and problem solving
In some of our examples, a sentence (e.g., âGabe is stronger than Joshâ) is translated to a meaning representation that looks very much like its classical formal semantics, composing the literal meanings of each word in the sentence. But in other examples (e.g., âseveral of the faculty are real slackersâ), the translations appear to incorporate complex contextual and pragmatic judgments, judgments that might otherwise have been arrived at via probabilistic inference in a model of speakers, listeners, and their intents (Goodman & Frank, 2016). This raises the question of where to draw the line between translation and inference. Versions of this question have been extensively studied (e.g., does a word like âsomeâ imply ânot allâ as part of its meaning, or does this implicature arise via after-the-fact pragmatic reasoning (Tessler, Tenenbaum, & Goodman, 2022)?), and some past work has offered a unifying view via theories of amortized pragmatics (White et al., 2020), whereby RSA-style inferences are âcompiled downâ into new word meanings.
46
Implications for cognitive science
5 FUTURE DIRECTIONS
A key feature of our architecture is that it is largely agnostic to where exactly the boundary should lie, and as such could help to model and extend this process of amortized inference in language understanding. For example, as expanded on below, we could extend our symbolic world models to include aspects of the language understanding process itself (such as those described in symbolic derivations of semantics (Heim & Kratzer, 1998; Montague, 1970; Pollard & Sag, 1994; Steedman, 2001, 2011), and those used explicitly to compute its pragmatic interpretations (Fox, 2007; Goodman & Frank, 2016)). Symbolic inferences about meanings could then be used to train the language understanding module to directly generate the results of this symbolic inference processâfor use either as a fully amortized pragmatic translator, or as a proposal distribution within a larger Monte Carlo algorithm that could score and reject inaccurate translations.
In addition to making aspects of translation symbolic, we could consider approaches to amortizing the more general probabilistic inferences required to answer queries. By supervising âtranslation modelsâ directly with the final outputs of symbolic inference, across a wide variety of tasks, we could enable a pure neural inference mode for these systems that may overcome some limitations of models trained only on language and code. As described above, such supervised models could also be incorporated as proposal distributions in posterior sampling algorithms, leading to improved efficiency without sacrificing the ability to correct for learned biases that may be inapplicable when tackling novel problems.
Ultimately, we envision a new kind of neurosymbolic model in which, rather than pre-assigning responsibil- ities to the neural or symbolic program, models may flexibly perform any part of the language understanding via explicit probabilistic inference or learned, amortized prediction, with tradeoffs in speed and accuracy for any allocation of responsibilities to modules. The research question is how to do this automaticallyâhow do we identify pieces of a computation that can reliably be emulated by a neural model, how do we train this neural model efficiently, and how do we decide at runtime which inference mode to use? As above, these questions raise many opportunities to take inspiration from our scientific understanding of the separation of responsibilities in language and thought, and work on learning for inference in more general probabilistic models.
# 5.1.4 Language generation
The preceding discussion has focused largely in problems of language understandingâmapping from utterances to inferences about the state of the world that those utterances describe. But effective models of language use should also be able to explain generation, making it possible to translate the results of inference back to language. As with the problem of language-informed thinking that we focus on this paper, it is useful to model language generation as two distinct processes: choosing what to say, then how to say it (Duboue & McKeown, 2003). And as with understanding, the first phase requires a model of the world, and of the speakerâs goals within it. What additional work is needed to adapt our models of rational meaning construction for generation?
One possibility, alluded to in the discussion of amortization above, is to interpret the language under- standing machinery described above as a model of a listener, then perform language generation by selecting utterances that cause this model listener to form correct beliefs or take appropriate actions (Fox, 2007; Goodman & Frank, 2016). This extra layer of reasoning introduces major inferential challenges: the generation model most now reason both about the set of possible utterances and the effect of each utterance on the distribution over possible worlds inferred by a listener. Here it is once again possible to leverage large-scale statistical learningâfor example, using LLMs to directly translate candidate communicative intentions back to natural language strings, which may then be used as candidate utterances to be scored using a formal model of language understanding. Such a hybrid neuro-symbolic generation model (Fang et al., 2022; Langkilde & Knight, 1998) offers a path towards language generation that is expressive and fluent, but avoids the truthfulness and hallucination problems that plague all purely neural language generation models that exist today (Maynez, Narayan, Bohnet, & McDonald, 2020; Wiseman, Shieber, & Rush, 2017).
# Implications for cognitive science
In this section, we describe several research directions for other closely related disciplines that study language and thought in natural minds, brains, and behavior, focusing on productive intersections in relation to this framework.
47
Implications for cognitive science
5 FUTURE DIRECTIONS
# 5.2.1 Connections to cognitive and formal models of linguistic structure
In all the examples described above, the process of translating utterances into formal meaning representations was performed with a black-box statistical model, while reasoning about those meaning representations leveraged an explicit symbolic inferential process. However, an enormous body of work in linguistics has argued that the process of mapping from utterances to meaning representations can itself be described (at least approximately) in terms of symbol processing operations (Montague, 1970; Pollard & Sag, 1994; Steedman, 2001, inter alia). By design, most of our âmeaning representationsâ are designed to support efficient reasoning about domain-specific world models, and bear only a vague resemblance to formal and domain-general linguistic representational theories. But can the symbolic models of linguistic meaning posited by these theories (as opposed to the symbolic models of reasoning we already draw on) be incorporated into our framework?
As noted in Section 5.1.3, a fully realized model of rational meaning construction should be able to flexibly move computation across the statisticalâsymbolic boundary, âcompilingâ results of symbolic inference into amortized computation, or retrieving symbolic descriptions of amortized processes for explicit verification. In this view, the vignettes above treat the meaning representation process as culminating in domain-specific representations and amortized by default. But probabilistic symbolic models of meaning (e.g., Kwiatkowksi, Zettlemoyer, Goldwater, & Steedman, 2010), or Bayesian and game-theoretic models of semantics (e.g., Goodman & Frank, 2016) can themselves be implemented as probabilistic programs and composed with domain-specific inferential computations, resulting in an almost purely symbolic (but amortizable) language understanding process similar to the one described by Goodman and Lassiter (2015).
Such a model would also offer an appealing path toward learning language in a more sample-efficient (and perhaps human-like) ways. Todayâs neural sequence models require orders of magnitude more data than human learners to discover the structural regularities underlying human languages (Linzen, 2020). Explicit probabilistic symbolic models, by contrast, can discover this structure extremely sample-efficiently (Yang & Piantadosi, 2022). A model that could automatically infer symbolic meaning representation rules from data, then amortize this representation system into a statistical translation model (Liang, Daumé III, & Klein, 2008), would be capable of both efficient learning of language, and efficient modeling of other domains using language. It would also offer a framework for modeling other key aspects of language acquisition, including explicit linguistic instruction (of word meanings, rules of grammar, etc), tradeoffs between different formal representational schemes, and the relationship between linguistic competence (understood as symbol-side language processing) and linguistic performance (understood as statistical-side processing).
The semantic framework in this paper is most closely related to other cognitive semantic frameworks (eg. Jackendoff (1985); Lakoff (1988); Pietroski (2018); Pinker (1984)) that explicitly propose that human language constructs meanings from conceptual and cognitive primitives, including those for causal reasoning, or core knowledge representations of physics and agents. Related information-theoretic proposals have proposed that languages are effectively designed to be efficiently communicable externalizations of underlying thoughtsâthat the structure of human languages derives from underlying structure in the semantic representations we wish to communicate, and indeed may be driven by environmental and domain-specific pressures (eg. Gibson et al. (2019); Mollica et al. (2021); Zaslavsky, Kemp, Regier, and Tishby (2018)).
Other related acquisition theories posit that these structural relationships between the representations of thought and externalizable language play an important role in language acquisition. Under these theories, humans can so efficiently learn or hypothesize the meanings of sentences because they âmap cleanly" onto the cognitive structures already present in the minds of the language learner (Snedeker, 2016); language learning is bootstrapped by these predictable, structured mappings between the underlying space of meanings and the syntax of language (L. R. Gleitman et al., 2005; Hartshorne et al., 2016; Pinker & MacWhinney, 1987). In preliminary experiments, we find intriguing evidence that large language-to-code models can extract and generalize syntactic patterns between language and code, including to bootstrap hypotheses about the semantics of novel words expressed as probabilistic programs based on contextual, syntactic usage (see Syntactic Bootstrapping, Fig. 14). Future work can explore therefore whether these statistical distributional models might be used to implement cognitive models of bootstrapped language acquisition.
48
Implications for cognitive science
5 FUTURE DIRECTIONS
# 5.2.2 Modeling the mechanisms of human thought
Using tools for adaptive Bayesian inference over flexibly structured symbolic representationsâincluding not only probabilistic programs but more generally hierarchical Bayesian models (Griffiths et al., 2010; Tenenbaum, Kemp, Griffiths, & Goodman, 2011), resource-rational modeling (S. J. Gershman et al., 2015; Lieder & Griffiths, 2020), and program induction (Lake et al., 2017; Piantadosi et al., 2012)âcomputational cognitive scientists have built quantitatively predictive and functionally explanatory models of human behavior in almost every domain of cognition. This range spans from models of perception, concept learning and categorization, causal reasoning, decision-making and planning, to intuitive physics, theory of mind, sentence processing, and cognitive and language development (C. Baker et al., 2011; Goodman & Frank, 2016; Goodman et al., 2014; Griffiths & Tenenbaum, 2006; Ho, Saxe, & Cushman, 2022; Jara-Ettinger et al., 2020; Lake et al., 2017; Perfors et al., 2011). However, in almost every one of these cases, the models are not fully âstimulus-computableâ: Behavioral experiments in cognitive psychology almost always use natural language to present participants with some situation for thinking about (in addition to perhaps perceptual stimuli); language is also almost invariably used to pose some question or goal as the end for thinking. Put another way, almost all our behavioral experimentsâlike so many instances of cognition in the wildâfollow the language-informed thinking paradigm of this paper. But our cognitive models traditionally do not; they are created by hand from the modelerâs understanding of the natural language task description, rather than synthesized automatically from the linguistic stimuli presented to participants. To what extent can the rational meaning construction framework presented here reduce the need for computational cognitive scientists to manually create Bayesian models that match the natural-language prompts given to humans in behavioral experiments? Can we build âlanguage-computableâ models of human thought, that are much easier to test and vary via large-scale online experiments?
We have already begun to explore these possibilities and shown promising preliminary results in several domains, including to model how language implicates commonsense physical reasoning about linguistic scenes (C. E. Zhang et al., 2023), social reasoning about goal-directed agents (Ying et al., 2023); as well as to test the claim that the LLM-based meaning function we implement in this paper can compute amortized pragmatic judgments of scalar implicatures that accord with human interpretations (Lipkin, Wong, Grand, & Tenenbaum, 2023).
There is also a growing body of research in computational cognitive science showing that salient dynamics of thought, including well-known departures from Bayesian norms, can be explained via Monte Carlo inference approximations that aim to rationally use limited computational resources (Chater et al., 2020; S. J. Gershman et al., 2015; Lieder & Griffiths, 2020; Lieder, Hsu, & Griffiths, 2014; Sanborn & Chater, 2017). In some cases, human inferences seem to rest on just a single, highly approximate sample (Vul, Goodman, Griffiths, & Tenenbaum, 2014), or perhaps just a few of them (Vul & Pashler, 2008). If we extend our proposed architecture for rational meaning construction to incorporate these kinds of Monte Carlo mechanisms, could we build models of language-guided thinking that can be directly compared at a more mechanistic level to human behavior? How will processes of language understanding and reasoning interact mechanistically, and can we build resource-rational approximate inference models that capture this interaction?
# 5.2.3 Language and thought in the brain
Evidence from cognitive neuroscience suggests a number of parallels between the framework we describe in this paper, and how language relates to systems for general cognition in the human brain. Over decades, cognitive neuroscientists have mapped out a series of interconnected areas in the frontal and temporal lobes that are implicated in human language processing. This âlanguage networkâ is activated in both linguistic comprehension (Deniz, Nunez-Elizalde, Huth, & Gallant, 2019; Fedorenko, Hsieh, Nieto-Castañón, Whitfield- Gabrieli, & Kanwisher, 2010; MacSweeney et al., 2002; Regev, Honey, Simony, & Hasson, 2013; T. L. Scott, Gallée, & Fedorenko, 2017) and production (Hu et al., 2021; Menenti, Gierhan, Segaert, & Hagoort, 2011). It is sensitive to regularities in all levels of linguistic structureâfrom phonology, to words, to phrases and sentences (Blank & Fedorenko, 2017; Lerner, Honey, Silbert, & Hasson, 2011; Silbert, Honey, Simony, Poeppel, & Hasson, 2014; Wilson, Molnar-Szakacs, & Iacoboni, 2008) and is implicated in combinatorial semantic and syntactic processing (Fedorenko, Blank, Siegelman, & Mineroff, 2020; Hu et al., 2021).
Convergent evidence suggests that the language network is distinct from these systems, and that it isnot activated in more general, non-linguistic cognition. Aphasic individuals with damage to the language network
49
Implications for AI
5 FUTURE DIRECTIONS
exhibit impaired language production and comprehension, but retain the ability to solve arithmetic and logic puzzles, reason about causality and social situations, and perform many other non-linguistic tasks (e.g., Basso & Capitani, 1985; Bek, Blades, Siegal, & Varley, 2010; Fedorenko & Varley, 2016; Klessinger, Szczerbinski, & Varley, 2007; Lecours & Joanette, 1980; Luria, Tsvetkova, & Futer, 1965; Varley, 1998). Functional neuroimaging studies provide further evidence that the language network is not activated in a variety of non-linguistic tasks including reasoning about arithmetic, logic, actions, or events (Amalric & Dehaene, 2016, 2019; Blank, Kanwisher, & Fedorenko, 2014; Deen, Koldewyn, Kanwisher, & Saxe, 2015; Fedorenko, Behr, & Kanwisher, 2011; Monti, Osherson, Martinez, & Parsons, 2007; Monti, Parsons, & Osherson, 2012; Paunov, Blank, & Fedorenko, 2019; Paunov et al., 2022; Shain, Paunov, Chen, Lipkin, & Fedorenko, 2022).
In tandem, a broader line of cognitive neuroscience work has located non-linguistic networks that are activated in processing many of the core cognitive domains we model throughout this paper, including logic, mathematical reasoning (eg. Amalric and Dehaene (2019); Monti et al. (2007)), social reasoning and planning (Adolphs, 2009; Saxe, Moran, Scholz, & Gabrieli, 2006; Saxe & Powell, 2006); and physical reasoning and simulation (Pramod, Cohen, Tenenbaum, & Kanwisher, 2022; Schwettmann, Tenenbaum, & Kanwisher, 2019). More recent work suggests the existence of an âamodal semantics network" (Ivanova, 2022; Ivanova et al., 2021), a network that appears proximal to the language networks activated in processing linguistic structures, interfaces between the language network and the more general multiple demand networks involved in complex non-linguisitc cognition, and that appears to be activated specifically in processing semantically meaningful sentences (as opposed to scrambled tokens or syntactically correct but semantically incoherent strings.)
Recently, neuroscientists who study language cognition have begun to draw explicit parallels between the language network and LLMs (see Mahowald et al., 2023, for a review). Several recent studies have observed that smaller LLMs trained specifically on the distriutional statistics of language (generally focusing on the GPT-2 model) can predict brain activity in humans processing sentence input (Caucheteux & King, 2022; Goldstein et al., 2022; Schrimpf et al., 2021) and may share representational characteristics of the human language network (Fedorenko et al., 2020; Shain, Blank, van Schijndel, Schuler, & Fedorenko, 2020). These accounts, however, align LLMs with the modular role we propose for neural models in our frameworkânot as end-to-end models of language and reasoning, but instead as robust, context-aware mappings between language and meanings. As a ground for future work, our framework can inform evaluations of LLMs with respect to human language understanding. For instance, our proposal suggests that code-trained LLMs might better capture latent semantic and syntactic structure than language-only LLMs. Ideas from neuroscience, in turn, can help us figure out which kinds of computations can be neurally amortized and where our modelâs boundary between language and thought should lie.
# Implications for AI
# 5.3.1 Structured hybrid models of language and thought
Growing awareness of the limitations of LLM-based reasoning has motivated several recent proposals for interfacing language models with external symbolic plug-ins or toolkits (Karpas et al., 2022; OpenAI, 2023c; Schick et al., 2023; Wolfram, 2023). At face value, one perspective is to view rational meaning construction as an argument to add probablistic programs to the growing âswiss army knifeâ of LLM plug-ins. However, we see this notion as inverted: thought should not simply be a plug-in on top of language models. Rather, we believe that future AI systems should be architected around thoughtâgeneral-purpose computing systems that provide a principled framework for expressing world models, conditioning them on observations from sources including language and perceptual input, and drawing principled inferences and decisions with respect to the goals of an intelligent system.7 As we show throughout this paper, many core domains of cognition can be expressed as forms of probabilistic inference. A probabilistic language of thought, in turn, provides a unifying language for world modeling that can nest calls to other cognitively-motivated modules. In this sense, all of these plug-ins and modules would become plug-ins to the substrate of thought, including graphics engines, physics simulators, planning algorithms, and, in fact, language models themselves. As we discuss in the future directions of each section, scaling any of our toy implementations towards robust, human-like reasoning and language-understanding systems will almost certainly require more sophisticated implementations of each
7A similar argument has been expressed by Stephen Wolfram in a compelling series of writings on integrating ChatGPT with the Wolfram Language and its suit of symbolic computational tools (Wolfram, 2023).
50
Implications for AI
5 FUTURE DIRECTIONS
reasoning module. We therefore hope this general probabilistic framework suggests a symbolic substrate that might in turn incorporate many of the specific modules and plug-ins in this recent work.
To this end, another important near-term AI research direction will involve building probabilistic pro- gramming frameworks that natively incorporate LLMs. Important steps in this direction are already being taken through work leveraging LLMs to approximate prior probabilities over strings (A. K. Lew, Tessler, Mansinghka, & Tenenbaum, 2020) and amortize complex posterior inferences (M. Wu & Goodman, 2022). Indeed, many popular LLM techniques, such as scratchpads (Nye et al., 2021), chain-of-thought prompting (Wei et al., 2022), selection-inference (Creswell, Shanahan, & Higgins, 2022), STaR (Zelikman, Wu, Mu, & Goodman, 2022), and others can be viewed as implementations of probabilistic programs over string-valued random variables (Dohan et al., 2022). A maturing theoretical understanding of LLMs as probabilistic entities will afford powerful ways of harnessing and controlling generations. For instance, the sequential Monte Carlo (SMC) steering technique introduced under the LLaMPPL framework (A. K. Lew, Zhi-Xuan, Grand, & Mansinghka, 2023) enables concise and tractable specification of infilling, prompt intersection, and other constrained LLM generation tasks as language model probabilistic programs. Many of these hybrid models can be viewed as instantiations of rational meaning construction that make resource-motivated tradeoffs between inference in the unstructured space of strings (words) and more structured hypothesis spaces (worlds).
# 5.3.2 Robustness and trustworthiness in language understanding
Recent, high-profile attempts to deploy LLMs in production highlight the fundamental robustness challenges of using these models as the backbone of usable AI systems (Brereton, 2023; Sorkin, Warner, Kessler, Hirsch, & Livni, 2023), even with automated filters and supervised finetuning to human preferences. While LLMs may reasonably appear to condition on input language or answer queries under some circumstances, it is precisely this combination of linguistic fluency and underlying unpredictability that makes them problematic in situations where verifiable, systematic behavior is paramount. LLMs easily produce syntactically convincing but inaccurate âhallucinationsâ that fabricate facts and inferences (Dziri, Milton, Yu, Zaiane, & Reddy, 2022; Ji et al., 2022), fail to consistently condition on rules and constraints described in natural language, including rules intended to ensure user safety (Edwards, 2023; Zhuo, Huang, Chen, & Xing, 2023), and can generally degrade into nonsensical or highly undesirable language in the vast, easily accessible âlong tailâ of situations that deviate from their training distribution (Bender, Gebru, McMillan-Major, & Shmitchell, 2021; Roose, 2023; Tangermann, 2023).
The unevenness of todayâs LLMs recalls a classic critique of even older neural architectures (Fodor & Pylyshyn, 1988)âthat neural models trained on predictive objectives do not produce systematic, logical outputs by design. Similarly, while current or future LLMs may be able in principle to recover the latent representations and algorithms necessary to reason over languageâor even successfully approximate them in many settingsâthey do not need to produce systematic results by construction. Rather, they often approximate them with unexpected, undesirable outputs, particularly in out-of-distribution settings.
Even if future LLMs do appear to improve with scale without an external reasoning substrate, engineers may find it desirable to distinguish modularly between external symbolic reasoning engines and language- specific systems to enable separate supervision and verification of each. The framework we present here offers one roadmap for language understanding architectures whose robustness guarantees derive from explicit inference over a structured, editable, and formally constrainable programming language. Inferences themselves, and other formalizable reasoning computations including planning and physical simulation, take place in modules constructed explicitly to perform these calculations.
# Interpreting models that use language
As with verifiability and robustness, the framework we propose here is an architecture for language under- standing systems that are also inherently interpretable, or interpretable by design (Rudin, 2019; Rudin et al., 2022)âit constructs visible, editable, and constrainable world models and meanings that serve as the formal basis for inference, rather than post-hoc explanations decoded from or produced over hidden internal computations.
However, a fundamental part of our hypothesis is that any system that reasons effectively over language should need toâexplicitly or implicitlyârepresent and implement the kinds of computations we formalize throughout this paper. Implementations of this framework might therefore also be useful for model-guided
51
Implications for AI
5 FUTURE DIRECTIONS
hypotheses and experiments intended to explain other less transparent language processing systems, both biological (as we suggest in Section 5.2.3) and artificial. This framework might be incorporated productively into the growing body of work using explicit world models and symbolic languages to formally model the internal computations of deep neural models (Biggio, Bendinelli, Neitz, Lucchi, & Parascandolo, 2021; Mu & Andreas, 2020) and LLMs specifically (B. Z. Li, Nye, & Andreas, 2021); as with the related body of work using structured probabilistic models and reasoning engines to interpret human neural activity on social reasoning, physical understanding, and other general inference tasks (Ho et al., 2022; Schwettmann, Fischer, Tenenbaum, & Kanwisher, 2018; Watters, Tenenbaum, & Jazayeri, 2021). Explaining how LLMs represent the meanings of language, and perform computations with them, is a pressing open question whose scientific interest only increases if LLMs do appear to become more coherent and robust with scale.
In light of this, inspired by our proposed architecture, it may be interesting to probe, or trace, whether end-to-end LLMs construct context-specific world models (B. Z. Li et al., 2021), maintain belief distributions over uncertain world states (Hase et al., 2021), and implement reasoning algorithms like probabilistic inference, physical simulation, or planning over these representations.
# 5.3.4 Learning from human-scale data
Large language models must be trained with many orders of magnitude more language data than any human learner encounters over a lifetime. How can we engineer systems that not only understand language as we do, but also learn from human-scale language data?
Effective, data-efficient language models hold great relevance for both scientific and engineering applications. Complete cognitive models of human language understandingâincluding models built on the framework we propose hereâshould account for language acquisition, as well as language use. For engineering purposes, addressing the data-hungry training regime of current LLMs could also address challenges in learning low-resource languages (and the more general problem of accurately learning and deploying the âlong tailâ of knowledge from statistical distributional data) (Kandpal, Deng, Roberts, Wallace, & Raffel, 2022), incorporating more expensive input modalities like videos or embodied trajectories (Ahn et al., 2022; Reed et al., 2022), finetuning on more targeted, task-specific supervision like instruction following (OpenAI, 2023a), and generally enabling the construction of smaller, more accessible models that can be trained without massive computing resources and prohibitive economic and environmental costs (Bender et al., 2021; Dickson, 2020). While current âscaling routesâ look to improve language understanding by increasing data supervision, our hypothesis strongly suggests that this is an expensive, and highly indirect, approach towards learning the representations and inference procedures necessary to reason about language.
Instead, our framework suggests several alternative directions for improving data efficiency. First, perhaps the most direct consequence of this framework is the suggestion that neural models need only play a much tighter, focused role in language understanding systemsâas translation models that parse from language into structured symbolic programs for reasoning. Training a translation model focused on parsing from language into probabilistic programs almost certainly requires much less data for effective performance than required to solve the general token prediction problem.
Further, several ideas we discuss in Section 5.1.1 and Section 5.1.3 might also be relevant for training simpler translation models, and using them to bootstrap larger and more complex neural language models. First, as we discuss in Section 5.1.3, we might consider a progressively amortized avenue for training even complex translation models like the one in our concrete implementation, which appears to contextually amortize certain pragmatic inferences (such as those that adjust vague quantifiers to the context of a particular world model) that could be explicitly computed from a more literal initial semantic parse. One possibility, then, would be to train a more limited, literal semantic parser from language to probabilistic programs, but seek to train neural models that progressively amortize more of these inferences by supervising on its outputs. Other ideas from human language acquisition might offer more avenues for more radically data efficient learning. Human language learners progress through several phases of language mastery (R. Brown, 1973; Saffran et al., 2001; Tomasello, 2009), appearing to learn initial but highly imperfect grammars and meaning functions that they refine progressively over time, but much more quickly and with much less data than a comparable LLM trained directly on the distribution of language. Framed as a problem of learning a translation model, however, a more data efficient training regime might also draw inspiration from other methods for learning more flexible translation and semantic parsing distributions. Multiple approaches
52
6 CONCLUSION
have used simpler models to bootstrap more complex ones, either by using simpler models trained on more constrained translation objectives to directly initialize the parameters of more complex ones (P. F. Brown, Della Pietra, Della Pietra, Mercer, et al., 1993; Dong & Lapata, 2018; Petrov, Haghighi, & Klein, 2008), or using simpler grammars as generative data sources to train more complex models, as in general wake-sleep training methods that learn predictive models to amortize the outputs of a generative distribution (Andreas, 2019; Hinton, Dayan, Frey, & Neal, 1995; Jia & Liang, 2016).
Both of these approaches rely, importantly, on the language of thought hypothesis we advance here, which separates the computational problem of learning a translation distribution from the problem of learning the representations and algorithms necessary for general intelligence. This drastically reduces the latent structure and computational complexity we seek to learn from distributional supervisionâto learn as efficiently as people, we propose a framework that begins with a substrate for thinking and then suggests avenues for amortizing its outputs or refining translation into this substrate, rather than seeking to learn an effective language of thought itself from natural language data.
# 6 Conclusion
Language is central to our cognition. A theory of meaning in human language should explain how language relates to our thoughtsâhow it connects to all our faculties for reasoning, and how it can shift our beliefs across nearly every domain of what we now, change how we act or respond across a broad range of situations, even construct new knowledge that we might later marshal towards yet unspoken questions and goals. This vision lies at the heart of a human theory of language and meaning, but the most expansive visions of AI have also long been ones in which computers share our language, able to meaningfully understand us as we expect to be understood by other people. Todayâs large language models have made striking advances towards building this reality in many important regards. For the first time, we have built computer systems that can speak fluently back to us, using many more of our own words than ever before.
Still, much more is needed to capture our own relationship to language. We do not learn language like a large language model does. We think first, and learn from far less input how language maps into our thoughts. Our own world models and beliefs are not the fragile byproduct of what we can glean from languageâthey are the basis of and core of our cognition, constructed and maintained purposefully towards our intentions and desires. We, of course, are the ones who created the language on which todayâs machine learning models are now trained. That language is the product of and reflection of our own goals and questions, and of conceptual systems of our own invention. We continue to think completely new thoughts, and we continue in turn to produce entirely new language, coining new words and even constructing wholly new languages so that we can build its meaning in the minds of other humans. A cognitive theory of human language must capture and explain these aspects of our language and thought. It might in turn form the basis for AI models that reliably and predictably understand us, and that work in ways that we can interpret, explain, and control. This white paper is simply a sketch towards these ends: an outline of the computational components that could relate human language and a substrate for cognition, and one proposal for how this approach might also incorporate todayâs language models without requiring them to learn to reliably model the world, draw inferences, or make decisions. We hope it can offer one step towards cognitive and AI models that share the meaning we make from language, and that bridge from language into the vast expanse of our thoughts.
53
REFERENCES
# Acknowledgements
We have many people to thank whose comments, critiques, and feedback have influenced this manuscript and shaped it for the better. Among others, we are grateful to Steve Piantadosi, Jesse Snedeker, Kate Davidson, Ellie Pavlick, Paul Pietroski, Thomas Icard, Luca Bonatti, and Susan Carey for their insightful comments on an early version of this manuscript that was presented at the July 2022 McDonnell Network Workshop; as well as for innumerable helpful comments and feedback on developing versions of this manuscript from Joshua Hartshorne, Judy Fan, Robert Hawkins, Katherine Collins, Anna Ivanova, Cedegao Zhang, Hayley Ross, Anna Ivanova, Benjamin Lipkin, Megan Wei, Jiahai Feng, Xuan Tan, Lance Ying, William McCarthy, Laura Schulz and Tyler Brooke-Wilson. Language from all of these collaborators has invaluably and profoundly informed our thoughts.
The authors gratefully acknowledge support from support from the MIT Quest for Intelligence, AFOSR Grant No. FA9550-19-1-0269, the MIT-IBM Watson AI Lab, the DARPA Machine Common Sense Program, the ONR Science of AI Program, and Siegel Family Endowment. This material is based on work supported by the National Science Foundation Graduate Research Fellowship under Grant No. 1745302 and No. 2141064. Additionally, GG was supported by the MIT Presidential Fellowship, and JDA was supported by NSF Grant IIS-2212310.
# References
Abend, O., Kwiatkowski, T., Smith, N. J., Goldwater, S., & Steedman, M. (2017). Bootstrapping language
acquisition. Cognition, 164 , 116â143.
Adolphs, R. (2009). The social brain: neural basis of social knowledge. Annual review of psychology, 60 , 693â716.
Ahn, M., Brohan, A., Brown, N., Chebotar, Y., Cortes, O., David, B., . . . others (2022). Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691 .
Allen, K. R., Smith, K. A., & Tenenbaum, J. B. (2020). Rapid trial-and-error learning with simulation supports flexible tool use and physical reasoning. Proceedings of the National Academy of Sciences, 117 (47), 29302â29310.
Alon, U., Xu, F. F., He, J., Sengupta, S., Roth, D., & Neubig, G. (2022). Neuro-Symbolic Language Modeling with Automaton-augmented Retrieval. undefined .
Amalric, M., & Dehaene, S. (2016, May). Origins of the brain networks for advanced mathematics in expert mathematicians. Proceedings of the National Academy of Sciences of the United States of America, 113 (18), 4909â4917. doi: 10.1073/pnas.1603205113
Amalric, M., & Dehaene, S. (2019, April). A distinct cortical network for mathematical knowledge in the human brain. NeuroImage, 189 , 19â31. Retrieved 2019-07-26, from https://linkinghub.elsevier.com/ retrieve/pii/S1053811919300011 doi: 10.1016/j.neuroimage.2019.01.001
Anderson, J. R. (1990). The adaptive character of thought. Psychology Press. Andreas, J. (2019). Good-enough compositional data augmentation. arXiv preprint arXiv:1904.09545 . Armeni, I., He, Z.-Y., Gwak, J., Zamir, A. R., Fischer, M., Malik, J., & Savarese, S. (2019). 3d scene graph: A structure for unified semantics, 3d space, and camera. In Proceedings of the ieee/cvf international conference on computer vision (pp. 5664â5673).
Artzi, Y., Das, D., & Petrov, S. (2014). Learning compact lexicons for ccg semantic parsing. Artzi, Y., Lee, K., & Zettlemoyer, L. (2015, September). Broad-coverage ccg semantic parsing with amr. In Proceedings of the conference on empirical methods in natural language processing (pp. 1699â1710). Lisbon, Portugal: Association for Computational Linguistics. Retrieved from http://aclweb.org/ anthology/D15-1198
Artzi, Y., & Zettlemoyer, L. (2013). Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics, 1 (1), 49â62.
Bai, J., Zhou, L., Blanco, A., Liu, S., Wei, F., Zhou, M., & Li, Z. (2021). Jointly learning to repair code and generate commit message. ArXiv , abs/2109.12296 .
Baillargeon, R. (2004). Infantsâ physical world. Current directions in psychological science, 13 (3), 89â94. Baker, C., Saxe, R., & Tenenbaum, J.
(2011). Bayesian theory of mind: Modeling joint belief-desire attribution. In Proceedings of the annual meeting of the cognitive science society (Vol. 33).
54
REFERENCES
Baker, C. L., Saxe, R., & Tenenbaum, J. B. (2009). Action understanding as inverse planning. Cognition,
113 (3), 329â349.
Baker, C. L., Tenenbaum, J. B., & Saxe, R. R. (2007). Goal inference as inverse planning. In Proceedings of the annual meeting of the cognitive science society (Vol. 29).
Bar-Zeev, A. (2003). Scenegraphs: Past, present and future. Ãltimo acesso em, 13 . Basso, A., & Capitani, E. (1985, May). Spared musical abilities in a conductor with global aphasia and ideomotor apraxia. Journal of Neurology, Neurosurgery, and Psychiatry, 48 (5), 407â412. Retrieved 2020-08-03, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1028326/
Battaglia, P. W., Hamrick, J. B., & Tenenbaum, J. B. (2013). Simulation as an engine of physical scene
understanding. Proceedings of the National Academy of Sciences, 110 (45), 18327â18332.
Bek, J., Blades, M., Siegal, M., & Varley, R. A. (2010, May). Language and spatial reorientation: evidence from severe aphasia. Journal of Experimental Psychology. Learning, Memory, and Cognition, 36 (3), 646â658. doi: 10.1037/a0018281
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 acm conference on fairness, accountability, and transparency (pp. 610â623).
Biernaskie, J. M., Walker, S. C., & Gegear, R. J. (2009). Bumblebees learn to forage like bayesians. The American Naturalist, 174 (3), 413â423.
Biggio, L., Bendinelli, T., Neitz, A., Lucchi, A., & Parascandolo, G. (2021). Neural symbolic regression that
scales. In International conference on machine learning (pp. 936â945).
Bingham, E., Chen, J. P., Jankowiak, M., Obermeyer, F., Pradhan, N., Karaletsos, T., . . . Goodman, N. D. (2019). Pyro: Deep universal probabilistic programming. J. Mach. Learn. Res., 20 , 28:1â28:6. Retrieved from http://jmlr.org/papers/v20/18-403.html
Blank, I. A., & Fedorenko, E. (2017, October). Domain-General Brain Regions Do Not Track Linguistic Input as Closely as Language-Selective Regions. Journal of Neuroscience, 37 (41), 9999â10011. Retrieved 2019-11- 06, from https://www.jneurosci.org/content/37/41/9999 doi: 10.1523/JNEUROSCI.3642-16.2017
# Blank, I. A., Kanwisher, N., & Fedorenko, E.
(2014, September). A functional dissociation between language and multiple-demand systems revealed in patterns of BOLD signal fluctuations. Journal of Neurophysiology, 112 (5), 1105â1118. doi: 10.1152/jn.00884.2013
Block, N. (1998). Conceptual role semantics. Bloom, P. (2002). How children learn the meanings of words. MIT press. Bolton, A. D., Haesemeyer, M., Jordi, J., Schaechtle, U., Saad, F. A., Mansinghka, V. K., . . . Engert, F. (2019). Elements of a stochastic 3d prediction engine in larval zebrafish prey capture. ELife, 8 , e51975. Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., . . . others (2021). On the
opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 .
Borgeaud, S., Mensch, A., Hoffmann, J., Cai, T., Rutherford, E., Millican, K., . . . Sifre, L. (2022, February). Improving language models by retrieving from trillions of tokens (No. arXiv:2112.04426). arXiv. Bowers, M., Olausson, T. X., Wong, L., Grand, G., Tenenbaum, J. B., Ellis, K., & Solar-Lezama, A. (2023, jan). Top-down synthesis for library learning. Proc. ACM Program. Lang., 7 (POPL). Retrieved from https://doi.org/10.1145/3571234 doi: 10.1145/3571234
Branwen, G. (2022). The scaling hypothesis. Gwern.net. Brereton, D. (2023). Bing ai canât be trusted. https://dkb.blog/p/bing-ai-cant-be-trusted. Brooke-Wilson, T. (2023). Why is seeing fast and thinking slow? in prep. Brown, P. F., Della Pietra, S. A., Della Pietra, V. J., Mercer, R. L., et al. (1993). The mathematics of
statistical machine translation: Parameter estimation.
Brown, R. (1973). A first language: The early stages. Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., . . . Amodei, D.
(2020, July). Language Models are Few-Shot Learners. arXiv:2005.14165 [cs] . Retrieved 2020-08-09, from http://arxiv.org/abs/2005.14165 (arXiv: 2005.14165)
Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., . . . others (2023). Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712 .
Bybee, J. L. (1985). Morphology. Typological studies in language. Cai, Q., & Yates, A. (2013). Large-scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st annual meeting of the association for computational linguistics (volume 1: Long
55
REFERENCES
papers) (pp. 423â433).
Caliskan, A., Bryson, J. J., & Narayanan, A. (2017). Semantics derived automatically from language corpora contain human-like biases. Science, 356 (6334), 183â186. Retrieved from https://www.science.org/ doi/abs/10.1126/science.aal4230 doi: 10.1126/science.aal4230
Carey, S. (1999). Sources of conceptual change. Conceptual development: Piagetâs legacy, 293â326. Carey, S. (2009). The origin of concepts. New York: Oxford University Press. Carpenter, B., Gelman, A., Hoffman, M. D., Lee, D., Goodrich, B., Betancourt, M., . . . Riddell, A. (2017).
Stan: A probabilistic programming language. Journal of statistical software, 76 (1).
Caucheteux, C., & King, J.-R. (2022, February). Brains and algorithms partially converge in natural language processing. Communications Biology, 5 (1), 1â10. Retrieved 2022-07-05, from https:// www.nature.com/articles/s42003-022-03036-1 (Number: 1 Publisher: Nature Publishing Group) doi: 10.1038/s42003-022-03036-1
Chakraborty, S., Ding, Y., Allamanis, M., & Ray, B. (2022). Codit: Code editing with tree-based neural models. IEEE Transactions on Software Engineering, 48 , 1385â1399.
Chakraborty, S., & Ray, B. (2021). On multi-modal learning of editing source code. 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE), 443â455.
Chater, N., & Manning, C. D. (2006). Probabilistic models of language processing and acquisition. Trends in
cognitive sciences, 10 (7), 335â344.
Chater, N., & Oaksford, M. (1999). Ten years of the rational analysis of cognition. Trends in cognitive sciences, 3 (2), 57â65.
Chater, N., Zhu, J.-Q., Spicer, J., Sundh, J., León-Villagrá, P., & Sanborn, A. (2020). Probabilistic biases meet the bayesian brain. Current Directions in Psychological Science, 29 (5), 506â512.
Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. d. O., Kaplan, J., . . . others (2021). Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374 .
Chu, J., & Schulz, L. (2023). In praise of folly: Flexible goals and human cognition. Clark, J. H. (1976). Hierarchical geometric models for visible surface algorithms. Communications of the
ACM , 19 (10), 547â554.
Cobbe, K., Kosaraju, V., Bavarian, M., Hilton, J., Nakano, R., Hesse, C., & Schulman, J. (2021). Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168 .
Collins, K. M., Wong, C., Feng, J., Wei, M., & Tenenbaum, J. B. (2022, May). Structured, flexible, and robust: Benchmarking and improving large language models towards more human-like behavior in out-of-distribution reasoning tasks (No. arXiv:2205.05718). arXiv. doi: 10.48550/arXiv.2205.05718 Colmerauer, A., Kanoui, H., Pasero, R., & Roussel, P. (1972). Un systeme de communication en français. Rapport préliminaire de fin de contrat IRIA, Groupe Intelligence Artificielle, Faculté des Sciences de Luminy, Université dâAix-Marseille II .
In History of programming languagesâii (p. 331â367). New York, NY, USA: Association for Computing Machinery. Retrieved from https:// doi.org/10.1145/234286.1057820
Conwell, C., & Ullman, T. D. (2022). Testing relational understanding in text-guided image generation. arXiv preprint arXiv:2208.00005 .
Coumans, E., & Bai, Y. (2016). Pybullet, a python module for physics simulation for games, robotics and machine learning.
Craik, K. J. W. (1967). The nature of explanation (Vol. 445). CUP Archive. Creswell, A., Shanahan, M., & Higgins, I. (2022, May). Selection-Inference: Exploiting Large Language Models
for Interpretable Logical Reasoning (No. arXiv:2205.09712). arXiv. doi: 10.48550/arXiv.2205.09712
Csibra, G. (2008). Goal attribution to inanimate agents by 6.5-month-old infants. Cognition, 107 (2), 705â717.
Csibra, G., BÃró, S., Koós, O., & Gergely, G. (2003). One-year-old infants use teleological representations of actions productively. Cognitive Science, 27 (1), 111â133.
Cusumano-Towner, M., Bichsel, B., Gehr, T., Vechev, M., & Mansinghka, V. K. (2018). Incremental inference for probabilistic programs. In Proceedings of the 39th acm sigplan conference on programming language design and implementation (pp. 571â585).
Cusumano-Towner, M., Lew, A. K., & Mansinghka, V. K. (2020). Automating involutive MCMC using probabilistic and differentiable programming. arXiv preprint arXiv:2007.09871 .
56
REFERENCES
Cusumano-Towner, M. F., Radul, A., Wingate, D., & Mansinghka, V. K. (2017). Probabilistic programs for
inferring the goals of autonomous agents. arXiv preprint arXiv:1704.04977 .
Cusumano-Towner, M. F., Saad, F. A., Lew, A. K., & Mansinghka, V. K. (2019). Gen: a general-purpose probabilistic programming system with programmable inference. In Proceedings of the 40th acm sigplan conference on programming language design and implementation (pp. 221â236).
Dalvi, B., Tafjord, O., & Clark, P. (2022). Towards teachable reasoning systems: Using a dynamic memory of user feedback for continual system improvement. In Proceedings of the 2022 conference on empirical methods in natural language processing (pp. 9465â9480).
Dasgupta, I., & Gershman, S. J. (2021). Memory as a computational resource. Trends in Cognitive Sciences,
25 (3), 240â251.
Davidson, D., & Rescher, N. (1967). The logical form of action sentences. 1967 , 105â122. Davidson, G., Gureckis, T. M., & Lake, B. (2022). Creativity, compositionality, and common sense in human goal generation. In Proceedings of the annual meeting of the cognitive science society (Vol. 44). de Avila Belbute-Peres, F., Smith, K., Allen, K., Tenenbaum, J., & Kolter, J. Z. (2018). End-to-end differentiable physics for learning and control. Advances in neural information processing systems, 31 . Dechter, E., Malmaud, J., Adams, R. P., & Tenenbaum, J. B. (2013). Bootstrap learning via modular concept
discovery. In Proceedings of the international joint conference on artificial intelligence.
Deen, B., Koldewyn, K., Kanwisher, N., & Saxe, R. (2015, November). Functional Organization of Social Perception and Cognition in the Superior Temporal Sulcus. Cerebral Cortex , 25 (11), 4596â4609. Retrieved 2022-07-05, from https://doi.org/10.1093/cercor/bhv111 doi: 10.1093/cercor/bhv111
Deng, F., Zhi, Z., Lee, D., & Ahn, S. (2021). Generative scene graph networks. In International conference on learning representations.
Deniz, F., Nunez-Elizalde, A. O., Huth, A. G., & Gallant, J. L. (2019, September). The Representation of Semantic Information Across Human Cerebral Cortex During Listening Versus Reading Is Invariant to Stimulus Modality. Journal of Neuroscience, 39 (39), 7722â7736. Retrieved 2020-03-11, from https:// (Publisher: Society for Neuroscience Section: Research www.jneurosci.org/content/39/39/7722 Articles) doi: 10.1523/JNEUROSCI.0675-19.2019
Dennett, D. C. (2017). From bacteria to bach and back: The evolution of minds. WW Norton & Company. De Raedt, L., Kimmig, A., & Toivonen, H. (2007). Problog: A probabilistic prolog and its application in link discovery. In Proceedings of the 20th international joint conference on artifical intelligence (p. 2468â2473). San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 .
Dickson, B. (2020). The gpt-3 economy. TechTalks. Ding, Y., Zhang, X., Paxton, C., & Zhang, S. (2023). Task and motion planning with large language models
for object rearrangement. arXiv preprint arXiv:2303.06247 .
Dohan, D., Xu, W., Lewkowycz, A., Austin, J., Bieber, D., Lopes, R. G., . . . others (2022). Language model cascades. arXiv preprint arXiv:2207.10342 .
Dong, L., & Lapata, M. (2018). Coarse-to-fine decoding for neural semantic parsing. arXiv preprint arXiv:1805.04793 .
(2017). Solving probability problems in natural language. In Proceedings of the 26th international joint conference on artificial intelligence (p. 3981â3987). AAAI Press.
Duboue, P. A., & McKeown, K. (2003). Statistical acquisition of content selection rules for natural language
generation.
Dumais, S. T., et al. (2004). Latent semantic analysis. Annu. Rev. Inf. Sci. Technol., 38 (1), 188â230. Dziri, N., Milton, S., Yu, M., Zaiane, O., & Reddy, S. (2022). On the origin of hallucinations in conversational
Dziri, N., Milton, S., Yu, M., Zaiane, O., & Reddy, S. (2022). On the origin of hallucinations in conversational models: Is it the datasets or the models? arXiv preprint arXiv:2204.07931.
models: Is it the datasets or the models? arXiv preprint arXiv:2204.07931 . Edgington, D. (1992). Validity, uncertainty and vagueness. Analysis, 52 (4), 193â204. Edgington, D. (1997). Vagueness by degrees. Edwards, B. (2023). Ai-powered bing chat spills its secrets via prompt injection attack. Ars Technica. Elkind, D. (1962). Childrenâs conceptions of brother and sister: Piaget replication study v. The Journal of
genetic psychology, 100 (1), 129â136.
Ellis, K., Wong, C., Nye, M., Sable-Meyer, M., Cary, L., Morales, L., . . . Tenenbaum, J. B. (2020). Dreamcoder:
57
REFERENCES
Growing generalizable, interpretable knowledge with wake-sleep bayesian program learning. arXiv preprint arXiv:2006.08381 .
English, G., Nejad, N. G., Sommerfelt, M., Yanik, M. F., & von der Behrens, W. (2023). Bayesian surprise
shapes neural responses in somatosensory cortical circuits. Cell Reports, 42 (2).
Erez, T., Tassa, Y., & Todorov, E. (2015). Simulation tools for model-based robotics: Comparison of bullet, havok, mujoco, ode and physx. In 2015 ieee international conference on robotics and automation (icra) (pp. 4397â4404).
Fang, H., Balakrishnan, A., Jhamtani, H., Bufe, J., Crawford, J., Krishnamurthy, J., . . . Klein, D. (2022). The whole truth and nothing but the truth: Faithful and controllable dialogue response generation with dataflow transduction and constrained decoding. arXiv preprint arXiv:2209.07800 .
Fedorenko, E., Behr, M. K., & Kanwisher, N. (2011, September). Functional specificity for high-level linguistic processing in the human brain. Proceedings of the National Academy of Sciences, 108 (39), 16428â16433. Retrieved 2020-02-27, from https://www.pnas.org/content/108/39/16428 doi: 10.1073/ pnas.1112937108
Fedorenko, E., Blank, I., Siegelman, M., & Mineroff, Z. (2020, February). Lack of selectivity for syntax relative to word meanings throughout the language network. bioRxiv , 477851. Retrieved 2020-03-13, from https://www.biorxiv.org/content/10.1101/477851v2 (Publisher: Cold Spring Harbor Laboratory Section: New Results) doi: 10.1101/477851
Fedorenko, E., Hsieh, P.-J., Nieto-Castañón, A., Whitfield-Gabrieli, S., & Kanwisher, N. (2010, August). New method for fMRI investigations of language: defining ROIs functionally in individual subjects. Journal of Neurophysiology, 104 (2), 1177â1194. doi: 10.1152/jn.00032.2010
Fedorenko, E., & Varley, R. A. (2016, April). Language and thought are not the same thing: evidence from neuroimaging and neurological patients: Language versus thought. Annals of the New York Academy of Sciences, 1369 (1), 132â153. Retrieved 2019-07-27, from http://doi.wiley.com/10.1111/nyas.13046 doi: 10.1111/nyas.13046
Field, H. H. (1977). Logic, meaning, and conceptual role. The Journal of Philosophy, 74 (7), 379â409. Fikes, R. E., & Nilsson, N. J. (1971). Strips: A new approach to the application of theorem proving to
problem solving. Artificial intelligence, 2 (3-4), 189â208.
Firth, J. (1957). A synopsis of linguistic theory, 1930-1955. Studies in linguistic analysis, 10â32. Fodor, J. A. (1975). The language of thought. Cambridge, MA: Harvard University Press. Fodor, J. A. (1983). The modularity of mind. MIT press. Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis.
Cognition, 28 (1-2), 3â71.
Fox, D. (2007). Free choice and the theory of scalar implicatures. Presupposition and implicature in compositional semantics, 71â120.
Frank, M. C., Goodman, N. D., & Tenenbaum, J. B. (2009). Using speakersâ referential intentions to model early cross-situational word learning. Psychological science, 20 (5), 578â585.
Frege, G. (1892). Ãber sinn und bedeutung. Wittgenstein Studien, 1 (1). Fried, D., Aghajanyan, A., Lin, J., Wang, S. I., Wallace, E., Shi, F., . . . Lewis, M. (2022). Incoder: A
generative model for code infilling and synthesis. ArXiv , abs/2204.05999 .
Gauthier, J., Levy, R., & Tenenbaum, J. B. (2018). Word learning and the acquisition of syntacticâsemantic overhypotheses. arXiv preprint arXiv:1805.04988 .
Gehr, T., Misailovic, S., & Vechev, M. (2016). Psi: Exact symbolic inference for probabilistic programs. In Computer aided verification: 28th international conference, cav 2016, toronto, on, canada, july 17-23, 2016, proceedings, part i 28 (pp. 62â83).
Gehr, T., Steffen, S., & Vechev, M. (2020). λpsi: Exact inference for higher-order probabilistic programs. In Proceedings of the 41st acm sigplan conference on programming language design and implementation (pp. 883â897).
Gentner, D., & Goldin-Meadow, S. (2003). Whither whorf. Language in mind: Advances in the study of language and thought, 3â14.
Gentner, D., & Stevens, A. L. (2014). Mental models. Psychology Press. Gershman, S., & Goodman, N. (2014). Amortized inference in probabilistic reasoning. In Proceedings of the
annual meeting of the cognitive science society (Vol. 36).
Gershman, S. J., Horvitz, E. J., & Tenenbaum, J. B. (2015). Computational rationality: A converging
58
REFERENCES
paradigm for intelligence in brains, minds, and machines. Science, 349 (6245), 273â278.
Gerstenberg, T., & Goodman, N. (2012). Ping pong in church: Productive use of concepts in human probabilistic inference. In Proceedings of the annual meeting of the cognitive science society (Vol. 34). Gibson, E. (2014). Language for communication: Language as rational inference. In Proceedings of coling 2014, the 25th international conference on computational linguistics: Technical papers (pp. 781â782). Gibson, E., Futrell, R., Piantadosi, S. T., Dautriche, I., Mahowald, K., Bergen, L., & Levy, R. (2019). How
efficiency shapes human language. Trends in cognitive sciences, 23 (5), 389â407.
Ginsberg, M. L. (1987). Readings in nonmonotonic reasoning. Gleitman, L. (1990). The structural sources of verb meanings. Language acquisition, 1 (1), 3â55. Gleitman, L. R., Cassidy, K., Nappa, R., Papafragou, A., & Trueswell, J. C. (2005). Hard words. Language
learning and development, 1 (1), 23â64.
Goldin-Meadow, S. (2012). 26. homesign: gesture to language. In Sign language (pp. 601â625). De Gruyter Mouton.
Goldstein, A., Zada, Z., Buchnik, E., Schain, M., Price, A., Aubrey, B., . . . Hasson, U. (2022, March). Shared computational principles for language processing in humans and deep language models. Nature Neuroscience, 25 (3), 369â380. Retrieved 2022-10-31, from https://www.nature.com/articles/s41593 -022-01026-4 (Number: 3 Publisher: Nature Publishing Group) doi: 10.1038/s41593-022-01026-4 Goldwater, S., Griffiths, T. L., & Johnson, M. (2009). A bayesian framework for word segmentation: Exploring
the effects of context. Cognition, 112 (1), 21â54.
Golovneva, O., Chen, M., Poff, S., Corredor, M., Zettlemoyer, L., Fazel-Zarandi, M., & Celikyilmaz, A. (2022). Roscoe: A suite of metrics for scoring step-by-step reasoning. arXiv preprint arXiv:2212.07919 . Goodman, N. D., & Frank, M. C. (2016). Pragmatic language interpretation as probabilistic inference. Trends
in cognitive sciences, 20 (11), 818â829.
Goodman, N. D., & Lassiter, D. (2015). Probabilistic semantics and pragmatics: Uncertainty in language and thought. The handbook of contemporary semantic theory, 2nd edition. Wiley-Blackwell .
Goodman, N. D., Mansinghka, V. K., Roy, D. M., Bonawitz, K. A., & Tenenbaum, J. B. (2008). Church: a language for generative models. In D. A. McAllester & P. Myllymäki (Eds.), UAI 2008, proceedings of the 24th conference in uncertainty in artificial intelligence, helsinki, finland, july 9-12, 2008 (pp. 220â229). AUAI Press. Retrieved from https://dslpitt.org/uai/displayArticleDetails.jsp?mmnu= 1&smnu=2&article_id=1346&proceeding_id=24
Goodman, N. D., Tenenbaum, J. B., & Gerstenberg, T. (2014). Concepts in a probabilistic language of
thought (Tech. Rep.). Center for Brains, Minds and Machines (CBMM). Gopnik, A. (1996). The scientist as child. Philosophy of science, 63 (4), 485â514. Gothoskar, N., Cusumano-Towner, M., Zinberg, B., Ghavamizadeh, M., Pollok, F., Garrett, A., . . . Mans- inghka, V. (2021). 3dp3: 3d scene perception via probabilistic programming. Advances in Neural Information Processing Systems, 34 , 9600â9612.
Graff, D. (2000). Shifting sands: An interest-relative theory of vagueness. Philosophical topics, 28 (1), 45â81. Grand, G., Blank, I. A., Pereira, F., & Fedorenko, E. (2022). Semantic projection recovers rich human knowledge of multiple object features from word embeddings. Nature Human Behaviour , 1â13.
Greenberg, M., & Harman, G. (2005). Conceptual role semantics. Griffiths, T. L., Chater, N., Kemp, C., Perfors, A., & Tenenbaum, J. B. (2010). Probabilistic models of cognition: Exploring representations and inductive biases. Trends in cognitive sciences, 14 (8), 357â364. Griffiths, T. L., Steyvers, M., & Tenenbaum, J. B. (2007). Topics in semantic representation. Psychological
review , 114 2 , 211â44.
Griffiths, T. L., & Tenenbaum, J. B. (2006). Optimal predictions in everyday cognition. Psychological science, 17 (9), 767â773.
Grimshaw, J. (1981). Form, function, and the language acquisition device. The logical problem of language acquisition, 165 , 178.
Harman, G. (1982). Conceptual role semantics. Notre Dame Journal of Formal Logic, 23 (2), 242â256. Harris, Z. S. (1954). Distributional structure. Word . Retrieved from http://psycnet.apa.org/psycinfo/
1956-02807-001
Hartshorne, J. K., OâDonnell, T. J., Sudo, Y., Uruwashi, M., Lee, M., & Snedeker, J. (2016). Psych verbs,
the linking problem, and the acquisition of language. Cognition, 157 , 268â288.
Hase, P., Diab, M., Celikyilmaz, A., Li, X., Kozareva, Z., Stoyanov, V., . . . Iyer, S. (2021). Do language
59
REFERENCES
models have beliefs? methods for detecting, updating, and visualizing model beliefs. arXiv preprint arXiv:2111.13654 .
Heim, I., & Kratzer, A. (1998). Semantics in generative grammar (Vol. 1185). Blackwell Oxford. Hespos, S. J., & Baillargeon, R. (2008). Young infantsâ actions reveal their developing knowledge of support variables: Converging evidence for violation-of-expectation findings. Cognition, 107 (1), 304â316. Hinton, G. E., Dayan, P., Frey, B. J., & Neal, R. M. (1995). The" wake-sleep" algorithm for unsupervised
neural networks. Science, 268 (5214), 1158â1161.
Ho, M. K., Saxe, R., & Cushman, F. (2022). Planning with theory of mind. Trends in Cognitive Sciences. Hoffman, M. D., Blei, D. M., Wang, C., & Paisley, J. (2013). Stochastic variational inference. Journal of
Machine Learning Research.
Holtzen, S., Van den Broeck, G., & Millstein, T. (2020). Scaling exact inference for discrete probabilistic programs. Proceedings of the ACM on Programming Languages, 4 (OOPSLA), 1â31.
Hu, J., Small, H., Kean, H., Takahashi, A., Zekelman, L., Kleinman, D., . . . Fedorenko, E. (2021, September). The language network supports both lexical access and sentence generation during language production (Tech. Rep.). Retrieved 2021-09-13, from https://www.biorxiv.org/content/10.1101/2021.09.10 .459596v1 (Company: Cold Spring Harbor Laboratory Distributor: Cold Spring Harbor Laboratory Label: Cold Spring Harbor Laboratory Section: New Results Type: article) doi: 10.1101/2021.09.10 .459596
Hughes, N., Chang, Y., & Carlone, L. (2022). Hydra: A real-time spatial perception engine for 3d scene graph construction and optimization. arXiv preprint arXiv:2201.13360 .
Icard, T., & Goodman, N. D. (2015). A resource-rational approach to the causal frame problem. In Cogsci. Isomura, T., Parr, T., & Friston, K. (2019). Bayesian filtering with multiple internal models: toward a theory
of social intelligence. Neural computation, 31 (12), 2390â2431.
Ivanova, A. A. (2022). The role of language in broader human cognition: evidence from neuroscience (Unpublished doctoral dissertation). Massachusetts Institute of Technology.
Ivanova, A. A., Mineroff, Z., Zimmerer, V., Kanwisher, N., Varley, R., & Fedorenko, E. (2021). The language network is recruited but not required for nonverbal event semantics. Neurobiology of Language, 2 (2), 176â201.
Izacard, G., Lewis, P., Lomeli, M., Hosseini, L., Petroni, F., Schick, T., . . . Grave, E. (2022). Few-shot Learning with Retrieval Augmented Language Models. undefined . doi: 10.48550/arXiv.2208.03299
Jackendoff, R. S. (1985). Semantics and cognition (Vol. 8). MIT press. Jara-Ettinger, J., Schulz, L. E., & Tenenbaum, J. B. (2020). The naive utility calculus as a unified, quantitative
framework for action understanding. Cognitive Psychology, 123 , 101334.
Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., . . . Fung, P. (2022). Survey of hallucination in natural language generation. ACM Computing Surveys.
Jia, R., & Liang, P. (2016). Data recombination for neural semantic parsing. arXiv preprint arXiv:1606.03622 . Johnson, J., Hariharan, B., Van Der Maaten, L., Hoffman, J., Fei-Fei, L., Lawrence Zitnick, C., & Girshick, R. (2017). Inferring and executing programs for visual reasoning. In Proceedings of the ieee international conference on computer vision (pp. 2989â2998).
Johnson, J., Krishna, R., Stark, M., Li, L.-J., Shamma, D., Bernstein, M., & Fei-Fei, L. (2015). Image retrieval using scene graphs. In Proceedings of the ieee conference on computer vision and pattern recognition (pp. 3668â3678).
Johnson, R. E., Linderman, S., Panier, T., Wee, C. L., Song, E., Herrera, K. J., . . . Engert, F. (2020). Probabilistic models of larval zebrafish behavior reveal structure on many scales. Current Biology, 30 (1), 70â82.
Johnson-Laird, P. N. (1980). Mental models in cognitive science. Cognitive science, 4 (1), 71â115. Johnson-Laird, P. N. (1989). Mental models. Jones, D. (2010, October). Human kinship, from conceptual structure to grammar. Behavioral and Brain Sciences, 33 (5), 367â381. Retrieved 2022-08-09, from https://www.cambridge.org/core/product/ identifier/S0140525X10000890/type/journal_article doi: 10.1017/S0140525X10000890
Kaelbling, L. P., & Lozano-Pérez, T. (2011). Hierarchical task and motion planning in the now. In 2011 ieee international conference on robotics and automation (pp. 1470â1477).
Kaelbling, L. P., & Lozano-Pérez, T. (2013). Integrated task and motion planning in belief space. The International Journal of Robotics Research, 32 (9-10), 1194â1227.
60
REFERENCES
Kandpal, N., Deng, H., Roberts, A., Wallace, E., & Raffel, C. (2022). Large language models struggle to
learn long-tail knowledge. arXiv preprint arXiv:2211.08411 .
Karpas, E., Abend, O., Belinkov, Y., Lenz, B., Lieber, O., Ratner, N., . . . Tenenholtz, M. (2022, May). MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning (No. arXiv:2205.00445). arXiv.
Katz, Y., Goodman, N. D., Kersting, K., Kemp, C., & Tenenbaum, J. B. (2008). Modeling Semantic Cognition as Logical Dimensionality Reduction. Proceedings of the Annual Meeting of the Cognitive Science Society, 30(30), 6.
Kemp, C., & Regier, T. (2012, May). Kinship Categories Across Languages Reflect General Communicative Principles. Science, 336 (6084), 1049â1054. Retrieved 2022-08-09, from https://doi.org/10.1126/ science.1218811 (Publisher: American Association for the Advancement of Science) doi: 10.1126/ science.1218811
Kersten, D., Mamassian, P., & Yuille, A. (2004). Object perception as bayesian inference. Annu. Rev. Psychol., 55 , 271â304.
Kersten, D. K. D., & Yuille, A. (1996). Introduction: A bayesian formulation of visual perception. Perception as Bayesian inference, 1â21.
Khalvati, K., Kiani, R., & Rao, R. P. (2021). Bayesian inference with incomplete knowledge explains perceptual confidence and its deviations from accuracy. Nature communications, 12 (1), 5704. Klein, D., & Manning, C. D. (2003). Accurate unlexicalized parsing. In Proceedings of the 41st annual
meeting of the association for computational linguistics (pp. 423â430).
Klessinger, N., Szczerbinski, M., & Varley, R. A. (2007, January). Algebra in a man with severe aphasia. Neuropsychologia, 45 (8), 1642â1648. Retrieved 2022-06-15, from https://www.sciencedirect.com/ science/article/pii/S0028393207000280 doi: 10.1016/j.neuropsychologia.2007.01.005
Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., & Iwasawa, Y. (2022). Large language models are zero-shot
reasoners. arXiv preprint arXiv:2205.11916 .
Krafft, P., Baker, C., Pentland, A., & Tenenbaum, J. (2016). Modeling human ad hoc coordination. In Proceedings of the aaai conference on artificial intelligence (Vol. 30).
Kulkarni, T. D., Kohli, P., Tenenbaum, J. B., & Mansinghka, V. (2015). Picture: A probabilistic programming language for scene perception. In Proceedings of the ieee conference on computer vision and pattern recognition (pp. 4390â4399).
Kwiatkowksi, T., Zettlemoyer, L., Goldwater, S., & Steedman, M. (2010). Inducing probabilistic ccg grammars from logical form with higher-order unification. In Proceedings of the 2010 conference on empirical methods in natural language processing (pp. 1223â1233).
Kwiatkowski, T., Zettlemoyer, L., Goldwater, S., & Steedman, M. (2011). Lexical generalization in ccg grammar induction for semantic parsing. In Proceedings of the 2011 conference on empirical methods in natural language processing (pp. 1512â1523).
Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and
think like people. Behavioral and brain sciences, 40 .
Lakoff, G. (1988). Cognitive semantics. Landauer, T. K., & Dumais, S. T. (1997). A solution to platoâs problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological review , 104 (2), 211. Langkilde, I., & Knight, K. (1998). Generation that exploits corpus-based statistical knowledge. In Coling
1998 volume 1: The 17th international conference on computational linguistics.
Lassiter, D., & Goodman, N. D. (2017). Adjectival vagueness in a bayesian model of interpretation. Synthese, 194 (10), 3801â3836.
Le, T. A., Baydin, A. G., & Wood, F. (2017). Inference compilation and universal probabilistic programming. In Artificial intelligence and statistics (pp. 1338â1348).
Lecours, A. R., & Joanette, Y. (1980, May). Linguistic and other psychological aspects of paroxysmal aphasia. Brain and Language, 10 (1), 1â23. doi: 10.1016/0093-934x(80)90034-6
Lee, T. S., & Mumford, D. (2003). Hierarchical bayesian inference in the visual cortex. JOSA A, 20 (7), 1434â1448.
Lerner, Y., Honey, C. J., Silbert, L. J., & Hasson, U. (2011, February). Topographic Mapping of a Hierarchy of Temporal Receptive Windows Using a Narrated Story. The Journal of Neuroscience, 31 (8), 2906â 2915. Retrieved 2019-12-28, from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3089381/ doi:
61
REFERENCES
10.1523/JNEUROSCI.3684-10.2011
10.1523/JNEUROSCI.3684-10.2011
Levin, B. (1993). English verb classes and alternations: A preliminary investigation. University of Chicago press.
Lew, A., Agrawal, M., Sontag, D., & Mansinghka, V. (2021). Pclean: Bayesian data cleaning at scale with domain-specific probabilistic programming. In International conference on artificial intelligence and statistics (pp. 1927â1935).
Lew, A. K., Matheos, G., Zhi-Xuan, T., Ghavamizadeh, M., Gothoskar, N., Russell, S., & Mansinghka, V. K. (2023). Smcp3: Sequential monte carlo with probabilistic program proposals. In International conference on artificial intelligence and statistics (pp. 7061â7088).
Lew, A. K., Tessler, M. H., Mansinghka, V. K., & Tenenbaum, J. B. (2020). Leveraging unstructured statistical knowledge in a probabilistic language of thought. In Proceedings of the annual conference of the cognitive science society.
Lew, A. K., Zhi-Xuan, T., Grand, G., & Mansinghka, V. K. (2023). Sequential monte carlo steering of large language models using probabilistic programs. arXiv preprint arXiv:2306.03081 .
Lewis, D. (1976). General semantics. In Montague grammar (pp. 1â50). Elsevier. Li, B. Z., Nye, M., & Andreas, J. (2021). Implicit representations of meaning in neural language models.
arXiv preprint arXiv:2106.00737 .
Li, Y., Wang, S., & Nguyen, T. N. (2020). Dlfix: Context-based code transformation learning for automated program repair. 2020 IEEE/ACM 42nd International Conference on Software Engineering (ICSE), 602â614.
Liang, P. (2016). Learning executable semantic parsers for natural language understanding. Communications of the ACM , 59 (9), 68â76.
Liang, P., Daumé III, H., & Klein, D. (2008). Structure compilation: trading structure for features. In Proceedings of the 25th international conference on machine learning (pp. 592â599).
Lieder, F., & Griffiths, T. L. (2019). Resource-rational analysis: Understanding human cognition as the
optimal use of limited computational resources. Behavioral and Brain Sciences, 43 .
Lieder, F., & Griffiths, T. L. (2020). Resource-rational analysis: Understanding human cognition as the
optimal use of limited computational resources. Behavioral and brain sciences, 43 , e1.
Lieder, F., Hsu, M., & Griffiths, T. L. (2014). The high availability of extreme events serves resource-rational decision-making. In Proceedings of the annual meeting of the cognitive science society (Vol. 36). Linzen, T. (2020). How can we accelerate progress towards human-like linguistic generalization? arXiv
preprint arXiv:2005.00955 .
Lipkin, B., Wong, L., Grand, G., & Tenenbaum, J. B. (2023). Evaluating statistical language models as pragmatic reasoners. arXiv preprint arXiv:2305.01020 .
Liu, B., Jiang, Y., Zhang, X., Liu, Q., Zhang, S., Biswas, J., & Stone, P. (2023). Llm+ p: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477 .
Liu, H., Ning, R., Teng, Z., Liu, J., Zhou, Q., & Zhang, Y. (2023). Evaluating the logical reasoning ability of chatgpt and gpt-4. arXiv preprint arXiv:2304.03439 .
Liu, R., Wei, J., Gu, S. S., Wu, T.-Y., Vosoughi, S., Cui, C., . . . Dai, A. M. (2022). Mindâs eye: Grounded language model reasoning through simulation. arXiv preprint arXiv:2210.05359 .
Lowie, R. H. (1930). The kinship terminology of the bannock indians. American Anthropologist, 32 (2), 294â299.
Luria, A. R., Tsvetkova, L. S., & Futer, D. S. (1965, June). Aphasia in a composer (V. G. Shebalin). Journal of the Neurological Sciences, 2 (3), 288â292. doi: 10.1016/0022-510x(65)90113-9
Lyu, Q., Havaldar, S., Stein, A., Zhang, L., Rao, D., Wong, E., . . . Callison-Burch, C. (2023). Faithful chain-of-thought reasoning. arXiv preprint arXiv:2301.13379 .
MacSweeney, M., Woll, B., Campbell, R., McGuire, P. K., David, A. S., Williams, S. C. R., . . . Brammer, M. J. (2002, July). Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain, 125 (7), 1583â1593. Retrieved 2021-01-05, from https://doi.org/10.1093/brain/ awf153 doi: 10.1093/brain/awf153
Mahowald, K., Ivanova, A. A., Blank, I. A., Kanwisher, N., Tenenbaum, J. B., & Fedorenko, E. (2023). Dissociating language and thought in large language models: a cognitive perspective. arXiv preprint arXiv:2301.06627 .
Mansinghka, V., Selsam, D., & Perov, Y. (2014). Venture: a higher-order probabilistic programming platform
62
REFERENCES
with programmable inference. arXiv preprint arXiv:1404.0099 .
Mansinghka, V. K., Kulkarni, T. D., Perov, Y. N., & Tenenbaum, J. (2013). Approximate bayesian image interpretation using generative probabilistic graphics programs. Advances in Neural Information Processing Systems, 26 .
Mansinghka, V. K., Schaechtle, U., Handa, S., Radul, A., Chen, Y., & Rinard, M. (2018). Probabilistic programming with programmable inference. In Proceedings of the 39th acm sigplan conference on programming language design and implementation (pp. 603â616).
Marcus, G., Davis, E., & Aaronson, S. (2022). A very preliminary analysis of dall-e 2. arXiv preprint arXiv:2204.13807 .
Maynez, J., Narayan, S., Bohnet, B., & McDonald, R. (2020). On faithfulness and factuality in abstractive
summarization. arXiv preprint arXiv:2005.00661 .
McCarthy, J. (1980). Circumscriptionâa form of non-monotonic reasoning. Artificial intelligence, 13 (1-2), 27â39.
McDermott, D. (1982). A temporal logic for reasoning about processes and plans. Cognitive science, 6 (2), 101â155.
McDermott, D. M. (2000). The 1998 ai planning systems competition. AI magazine, 21 (2), 35â35. Menenti, L., Gierhan, S. M. E., Segaert, K., & Hagoort, P. (2011, September). Shared language: overlap and segregation of the neuronal infrastructure for speaking and listening revealed by functional MRI. Psychological Science, 22 (9), 1173â1182. doi: 10.1177/0956797611418347
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (pp. 3111â3119).
Milch, B., Marthi, B., Russell, S., Sontag, D., Ong, D. L., & Kolobov, A. (2007). BLOG: Probabilistic models with unknown objects. Statistical relational learning, 373.
Mitchell, A., & Jordan, F. M. (2021, June). The Ontogeny of Kinship Categorization. Journal of Cognition and Culture, 21 (1-2), 152â177. Retrieved 2022-08-09, from https://brill.com/view/journals/jocc/ 21/1-2/article-p152_8.xml (Publisher: Brill) doi: 10.1163/15685373-12340101
Mollica, F., Bacon, G., Zaslavsky, N., Xu, Y., Regier, T., & Kemp, C. (2021). The forms and meanings of grammatical markers support efficient communication. Proceedings of the National Academy of Sciences, 118 (49), e2025993118.
Mollica, F., & Piantadosi, S. T. (2022, June). Logical word learning: The case of kinship. Psychonomic Bulletin & Review , 29 (3), 766â799. Retrieved 2022-08-09, from https://doi.org/10.3758/s13423-021-02017-5 doi: 10.3758/s13423-021-02017-5
Montague, R. (1970). English as a formal language. Monti, M. M., Osherson, D. N., Martinez, M. J., & Parsons, L. M.
(2007, September). Functional neuroanatomy of deductive inference: A language-independent distributed network. NeuroImage, 37 (3), 1005â1016. Retrieved 2020-04-16, from http://www.sciencedirect.com/science/article/pii/ S1053811907003436 doi: 10.1016/j.neuroimage.2007.04.069
Monti, M. M., Parsons, L. M., & Osherson, D. N. (2012, August). Thought beyond language: neural dissociation of algebra and natural language. Psychological Science, 23 (8), 914â922. doi: 10.1177/ 0956797612437427
Morgan, M. S. (1999). Learning from models. Ideas in Context, 52 , 347â388. Mu, J., & Andreas, J. (2020). Compositional explanations of neurons. Advances in Neural Information
Processing Systems, 33 , 17153â17163.
Nersessian, N. J., et al. (2010). Mental modeling in conceptual change. International Journal on Humanistic Ideology, 3 (01), 11â48.
(2021). Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114 .
Oaksford, M., & Chater, N. (2007). Bayesian rationality: The probabilistic approach to human reasoning. Oxford University Press.
OpenAI. (2023a). Chatgpt: Optimizing language models for dialogue. OpenAI Blog. OpenAI. (2023b). Chatgpt plugins. OpenAI Blog. OpenAI. (2023c). Gpt-4 technical report.
63
REFERENCES
Osgood, C. E. (1952). The nature and measurement of meaning. Psychological bulletin, 49 (3), 197. Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., . . . Lowe, R. (2022). Training language models to follow instructions with human feedback. arXiv. Retrieved from https://arxiv.org/ abs/2203.02155 doi: 10.48550/ARXIV.2203.02155
Pan, L., Albalak, A., Wang, X., & Wang, W. Y. (2023). Logic-lm: Empowering large language models with
symbolic solvers for faithful logical reasoning. arXiv preprint arXiv:2305.12295 .
Panthaplackel, S., Nie, P., Gligoric, M., Li, J. J., & Mooney, R. J. (2020). Learning to update natural
Panthaplackel, S., Nie, P., Gligoric, M., Li, J. J., & Mooney, R. J. (2020). Learning to update natural language comments based on code changes. arXiv preprint arXiv:2004.12169.
language comments based on code changes. arXiv preprint arXiv:2004.12169 . Parsons, T. (1990). Events in the semantics of english: A study in subatomic semantics. Paunov, A. M., Blank, I. A., & Fedorenko, E. (2019, April). Functionally distinct language and Theory of Mind networks are synchronized at rest and during language comprehension. Journal of Neurophysiology, 121 (4), 1244â1265. Retrieved 2019-07-10, from https://www.physiology.org/doi/10.1152/jn.00619 .2018 doi: 10.1152/jn.00619.2018
(2022, June). Differential Tracking of Linguistic vs. Mental State Content in Naturalistic Stimuli by Language and Theory of Mind (ToM) Brain Networks. Neurobiology of Language, 1â29. Retrieved 2022-07-05, from https://doi.org/10.1162/nol_a_00071 doi: 10.1162/nol_a_00071
Pearl, J. (1988). Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan kaufmann.
Pearl, J., et al. (2000). Models, reasoning and inference. Cambridge, UK: CambridgeUniversityPress, 19 (2). Pednault, E. P. (1989). Adl: exploring the middle ground between strips and the situation calculus. Kr , 89 , 324â332.
Pereira, F. C., & Shieber, S. M. (2002). Prolog and natural-language analysis. Microtome Publishing. Perfors, A., Tenenbaum, J. B., & Regier, T. (2011). The learnability of abstract syntactic principles.
Cognition, 118 (3), 306â338.
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (1802). Deep contextualized word representations. corr abs/1802.05365 (2018). arXiv preprint arXiv:1802.05365 .
Petrov, S., Haghighi, A., & Klein, D. (2008). Coarse-to-fine syntactic machine translation using language projections. In Proceedings of the 2008 conference on empirical methods in natural language processing (pp. 108â116).
Philippe, R. (1972). Définition et traitement de lâégalité formelle en démonstration automatique (Unpublished doctoral dissertation). thèse de 3ième cycle, Groupe Intelligence Artificielle, Faculté des Sciences . . . .
Piaget, J. (1951). Judgement and reasoning in the child. London: Routledge and Kegan Paul. Piantadosi, S. T. (2023). Modern language models refute chomskyâs approach to language. Lingbuzz Preprint,
lingbuzz , 7180 .
Piantadosi, S. T., Tenenbaum, J. B., & Goodman, N. D. (2012). Bootstrapping in a language of thought: A formal model of numerical concept learning. Cognition, 123 (2), 199â217.
Pietroski, P. M. (2018). Conjoining meanings: Semantics without truth values. Oxford University Press. Pinker, S. (1984). Language learnability and language development. Pinker, S. (1998). Words and rules. Lingua, 106 (1-4), 219â242. Pinker, S., & MacWhinney, B. (1987). The bootstrapping problem in language acquisition. Mechanisms of
language acquisition, 399â441.
Pollard, C., & Sag, I. A. (1994). Head-driven phrase structure grammar. University of Chicago Press. Posner, G. J., Strike, K. A., Hewson, P. W., & Gertzog, W. (1982). Toward a theory of conceptual change.
Science education, 66 (2), 211â227.
Pramod, R., Cohen, M. A., Tenenbaum, J. B., & Kanwisher, N. (2022). Invariant representation of physical
stability in the human brain. Elife, 11 , e71736.
Pyers, J. E., Shusterman, A., Senghas, A., Spelke, E. S., & Emmorey, K. (2010). Evidence from an emerging sign language reveals that language supports spatial cognition. Proceedings of the National Academy of Sciences, 107 (27), 12116â12120.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. (2019). Language models are
unsupervised multitask learners. OpenAI Blog, 1 (8).
Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann, J., Song, F., . . . others (2021). Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446 .
64
REFERENCES
Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical text-conditional image
generation with clip latents. arXiv preprint arXiv:2204.06125 .
Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., . . . Sutskever, I. (2021). Zero-shot
text-to-image generation. In International conference on machine learning (pp. 8821â8831).
Ranganath, R., Gerrish, S., & Blei, D. (2014). Black box variational inference. In Artificial intelligence and
statistics (pp. 814â822).
Reed, S., Zolna, K., Parisotto, E., Colmenarejo, S. G., Novikov, A., Barth-Maron, G., . . . others (2022). A generalist agent. arXiv preprint arXiv:2205.06175 .
Regev, M., Honey, C. J., Simony, E., & Hasson, U. (2013, October). Selective and Invariant Neural Responses to Spoken and Written Narratives. Journal of Neuroscience, 33 (40), 15978â15988. Retrieved 2020-10-02, from https://www.jneurosci.org/content/33/40/15978 (Publisher: Society for Neuroscience Section: Articles) doi: 10.1523/JNEUROSCI.1580-13.2013
Reid, M., & Neubig, G. (2022). Learning to model editing processes. ArXiv , abs/2205.12374 . Ribeiro, D., Wang, S., Ma, X., Zhu, H., Dong, R., Kong, D., . . . others (2023). Street: A multi-task structured
reasoning and explanation benchmark. arXiv preprint arXiv:2302.06729 .
Rips, L. J., & Hespos, S. J. (2015). Divisions of the physical world: Concepts of objects and substances. Psychological bulletin, 141 (4), 786.
Roose, K. (2023). Bingâs a.i. chat: I want to be alive. The New York Times. Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use
interpretable models instead. Nature machine intelligence, 1 (5), 206â215.
Rudin, C., Chen, C., Chen, Z., Huang, H., Semenova, L., & Zhong, C. (2022). Interpretable machine learning: Fundamental principles and 10 grand challenges. Statistic Surveys, 16 , 1â85.
Russell, S., & Norvig, P. (2021). Artificial intelligence : a modern approach (Fourth edition. ed.). Hoboken, NJ: Pearson.
(2019). Bayesian synthesis of probabilistic programs for automatic data modeling. Proceedings of the ACM on Programming Languages, 3 (POPL), 1â32.
Saad, F. A., Rinard, M. C., & Mansinghka, V. K. (2021). Sppl: probabilistic programming with fast exact symbolic inference. In Proceedings of the 42nd acm sigplan international conference on programming language design and implementation (pp. 804â819).
Saffran, J. R., Senghas, A., & Trueswell, J. C. (2001). The acquisition of language by children. Proceedings of the National Academy of Sciences, 98 (23), 12874â12875.
Sahlgren, M. (2008). The distributional hypothesis. Italian Journal of Disability Studies, 20 , 33â53. Sanborn, A. N., & Chater, N. (2017). The sampling brain. Trends in Cognitive Sciences, 21 (7), 492â493. Sapir, E. (1929). The status of linguistics as a science. Language, 207â214. Saxe, R., Moran, J. M., Scholz, J., & Gabrieli, J. (2006). Overlapping and non-overlapping brain regions for theory of mind and self reflection in individual subjects. Social cognitive and affective neuroscience, 1 (3), 229â234.
Saxe, R., & Powell, L. J. (2006). Itâs the thought that counts: specific brain regions for one component of theory of mind. Psychological science, 17 (8), 692â699.
Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., . . . Scialom, T. (2023). Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761 . Schrimpf, M., Blank, I. A., Tuckute, G., Kauf, C., Hosseini, E. A., Kanwisher, N., . . . Fedorenko, E. (2021, November). The neural architecture of language: Integrative modeling converges on predictive processing. Proceedings of the National Academy of Sciences, 118 (45). Retrieved 2021-12-12, from https://www .pnas.org/content/118/45/e2105646118 (Publisher: National Academy of Sciences Section: Biological Sciences) doi: 10.1073/pnas.2105646118
Schuler, K. K. (2005). Verbnet: A broad-coverage, comprehensive verb lexicon [PhD Thesis]. Univer- sity of Pennsylvania. Retrieved from http://verbs.colorado.edu/~kipper/Papers/dissertation.pdf (ISBN: 0-542-20049-X)
Schwettmann, S., Fischer, J., Tenenbaum, J., & Kanwisher, N. (2018). Evidence for an intuitive physics
engine in the human brain. In Cogsci.
Schwettmann, S., Tenenbaum, J. B., & Kanwisher, N. (2019). Invariant representations of mass in the human brain. Elife, 8 , e46619.
65
REFERENCES
Scott, R. M., & Baillargeon, R. (2013). Do infants really expect agents to act efficiently? a critical test of the
rationality principle. Psychological science, 24 (4), 466â474.
Scott, T. L., Gallée, J., & Fedorenko, E. (2017). A new fun and robust version of an fMRI localizer for the frontotemporal language system. Cognitive Neuroscience, 8 (3), 167â176. doi: 10.1080/ 17588928.2016.1201466
Seaman, I. R., van de Meent, J.-W., & Wingate, D. (2018). Nested reasoning about autonomous agents using
probabilistic programs. arXiv preprint arXiv:1812.01569 .
Senghas, A., Kita, S., & Ozyurek, A. (2004). Children creating core properties of language: Evidence from
an emerging sign language in nicaragua. Science, 305 (5691), 1779â1782.
Shain, C., Blank, I. A., van Schijndel, M., Schuler, W., & Fedorenko, E. (2020). fMRI reveals language-specific predictive coding during naturalistic sentence comprehension. Neuropsychologia, 138 , 107307. doi: 10.1016/j.neuropsychologia.2019.107307
Shain, C., Paunov, A. M., Chen, X., Lipkin, B., & Fedorenko, E. (2022, July). No evidence of theory of mind reasoning in the human language network. bioRxiv. Retrieved 2022-07-20, from https:// www.biorxiv.org/content/10.1101/2022.07.18.500516v1 (Pages: 2022.07.18.500516 Section: New Results) doi: 10.1101/2022.07.18.500516
Shan, C.-c., & Ramsey, N. (2017). Exact bayesian inference by symbolic disintegration. In Proceedings of the 44th acm sigplan symposium on principles of programming languages (pp. 130â144).
Shin, R., Brockschmidt, M., Allamanis, M., & Polozov, O. (2018). Program synthesis with learned code
idioms.
Silbert, L. J., Honey, C. J., Simony, E., Poeppel, D., & Hasson, U. (2014, October). Coupled neural systems underlie the production and comprehension of naturalistic narrative speech. Proceedings of the National Academy of Sciences, 111 (43), E4687âE4696. Retrieved 2021-09-06, from https:// www.pnas.org/content/111/43/E4687 (Publisher: National Academy of Sciences Section: PNAS Plus) doi: 10.1073/pnas.1323812111
Smith, K., Frank, S., Rolando, S., Kirby, S., & Loy, J. E. (2020). Simple kinship systems are more learnable. Proceedings of the Annual Meeting of the Cognitive Science Society, 7.
Smith, L., & Yu, C. (2008). Infants rapidly learn word-referent mappings via cross-situational statistics. Cognition, 106 (3), 1558â1568.
Snedeker, J. (2016). Clean mapping: A sketchy story about how conceptual structure could shape language acquisition and some evidence suggesting that it just might be true.
Sorkin, A. R., Warner, B., Kessler, S., Hirsch, L., & Livni, E. (2023). Revenge of the chatbots. The New York Times.
Spelke, E. S. (1990). Principles of object perception. Cognitive science, 14 (1), 29â56. Spelke, E. S. (2022). What babies know: Core knowledge and composition volume 1 (Vol. 1). Oxford University
Press.
Spelke, E. S., Gutheil, G., & Van de Walle, G. (1995). The development of object perception. Visual cognition: An invitation to cognitive science, 2 , 297â330.
Spelke, E. S., & Kinzler, K. D. (2007). Core knowledge. Developmental science, 10 (1), 89â96. Steedman, M. (2001). The syntactic process. MIT press. Steedman, M. (2011). Combinatory categorial grammar. Sumers, T. R., Hawkins, R. D., Ho, M. K., Griffiths, T. L., & Hadfield-Menell, D. (2022). How to talk so
your robot will learn: Instructions, descriptions, and pragmatics. arXiv preprint arXiv:2206.07870 .
Suster, S., Fivez, P., Totis, P., Kimmig, A., Davis, J., De Raedt, L., & Daelemans, W. (2021). Mapping probability word problems to executable representations. In Proceedings of the 2021 conference on empirical methods in natural language processing (pp. 3627â3640).
Talmy, L. (1988). Force dynamics in language and cognition. Cognitive science, 12 (1), 49â100. Tangermann, V. (2023). Microsoftâs bing ai is leaking maniac alternate personalities named venom and fury.
Futurism.
Téglás, E., Vul, E., Girotto, V., Gonzalez, M., Tenenbaum, J. B., & Bonatti, L. L. (2011). Pure Reasoning in 12-Month-Old Infants as Probabilistic Inference. Science, 27 (332), 1054â1059.
Tellex, S., Kollar, T., Dickerson, S., Walter, M., Banerjee, A., Teller, S., & Roy, N. (2011). Understanding natural language commands for robotic navigation and mobile manipulation. In Proceedings of the aaai conference on artificial intelligence (Vol. 25, pp. 1507â1514).
66
REFERENCES
Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics,
structure, and abstraction. science, 331 (6022), 1279â1285.
Tenney, I., Das, D., & Pavlick, E. (2019). Bert rediscovers the classical nlp pipeline. arXiv preprint
arXiv:1905.05950 .
Tessler, M. H., Tenenbaum, J. B., & Goodman, N. D. (2022). Logic, probability, and pragmatics in syllogistic
reasoning. Topics in Cognitive Science, 14 (3), 574â601.
Thoppilan, R., De Freitas, D., Hall, J., Shazeer, N., Kulshreshtha, A., Cheng, H.-T., . . . Le, Q. (2022, February). LaMDA: Language Models for Dialog Applications (No. arXiv:2201.08239). arXiv. Todorov, E., Erez, T., & Tassa, Y. (2012). Mujoco: A physics engine for model-based control. In 2012
ieee/rsj international conference on intelligent robots and systems (pp. 5026â5033).
Tolpin, D., van de Meent, J.-W., Yang, H., & Wood, F. (2016). Design and implementation of probabilistic programming language Anglican. In Proceedings of the 28th symposium on the implementation and application of functional programming languages (pp. 1â12).
Tomasello, M. (2009). The usage-based theory of language acquisition. In The cambridge handbook of child
language (pp. 69â87). Cambridge Univ. Press.
Tomasello, M. (2022). The evolution of agency: Behavioral organization from lizards to humans. MIT Press. Ullman, T. D. (2023). Large language models fail on trivial alterations to theory-of-mind tasks. arXiv
preprint arXiv:2302.08399 .
Ullman, T. D., Spelke, E., Battaglia, P., & Tenenbaum, J. B. (2017). Mind games: Game engines as an architecture for intuitive physics. Trends in cognitive sciences, 21 (9), 649â665.
Valmeekam, K., Sreedharan, S., Marquez, M., Olmo, A., & Kambhampati, S. (2023). On the planning abilities of large language models (a critical investigation with a proposed benchmark). arXiv preprint arXiv:2302.06706 .
Varley, R. A. (1998). Aphasic language, aphasic thought: an investigation of propositional thinking in an a-propositional aphasic. In P. Carruthers & J. Boucher (Eds.), Language and Thought: Interdisciplinary Themes (pp. 128â145). Cambridge University Press. doi: 10.1017/CBO9780511597909.009
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., . . . Polosukhin, I. (2017).
Attention is all you need. Advances in neural information processing systems, 30 .
Vul, E., Goodman, N., Griffiths, T. L., & Tenenbaum, J. B. (2014). One and done? optimal decisions from
very few samples. Cognitive science, 38 (4), 599â637.
Vul, E., & Pashler, H. (2008). Measuring the crowd within: Probabilistic representations within individuals. Psychological Science, 19 (7), 645â647.
Wang, G., Xie, Y., Jiang, Y., Mandlekar, A., Xiao, C., Zhu, Y., . . . Anandkumar, A. (2023). Voyager: An
open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291 .
Wang, R. F., & Spelke, E. S. (2002). Human spatial representation: Insights from animals. Trends in cognitive sciences, 6 (9), 376â382.
Watters, N., Tenenbaum, J., & Jazayeri, M. (2021). Modular object-oriented games: a task framework for reinforcement learning, psychology, and neuroscience. arXiv preprint arXiv:2102.12616 .
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi, E., Le, Q., & Zhou, D. (2022). Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903 .
Weir, N., & Van Durme, B. (2022, September). Dynamic Generation of Interpretable Inference Rules in a Neuro-Symbolic Expert System (No. arXiv:2209.07662). arXiv.
Wellman, H. M., & Gelman, S. A. (1992). Cognitive development: Foundational theories of core domains. Annual review of psychology, 43 (1), 337â375.
White, J., Mu, J., & Goodman, N. D. (2020). Learning to refer informatively by amortizing pragmatic
reasoning. arXiv preprint arXiv:2006.00418 .
Whorf, B. (1956). Language, thought, and reality: selected writings. Wilson, S. M., Molnar-Szakacs, I., & Iacoboni, M. (2008, January). Beyond Superior Temporal Cortex: Intersubject Correlations in Narrative Speech Comprehension. Cerebral Cortex , 18 (1), 230â242. Retrieved 2022-06-19, from https://doi.org/10.1093/cercor/bhm049 doi: 10.1093/cercor/bhm049
(2011). Lightweight implementations of probabilistic programming languages via transformational compilation. In Proceedings of the fourteenth international conference on artificial intelligence and statistics (pp. 770â778).
Wiseman, S., Shieber, S. M., & Rush, A. M. (2017). Challenges in data-to-document generation. arXiv
67
REFERENCES
# preprint arXiv:1707.08052 .
Witty, S., Lew, A., Jensen, D., & Mansinghka, V. (2019). Bayesian causal inference via probabilistic program
synthesis. arXiv preprint arXiv:1910.14124 .
Wolfram, S. (2023). ChatGPT gets its âWolfram Superpowersâ. Retrieved from https://writings .stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
Wong, C., Ellis, K. M., Tenenbaum, J., & Andreas, J. (2021). Leveraging language to learn program abstractions and search heuristics. In International conference on machine learning (pp. 11193â11204). Wong, Y. W., & Mooney, R. (2007). Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the 45th annual meeting of the association of computational linguistics (pp. 960â967).
Wu, J., Tenenbaum, J. B., & Kohli, P. (2017). Neural scene de-rendering. In Proceedings of the ieee conference on computer vision and pattern recognition (pp. 699â707).
Wu, J., Yildirim, I., Lim, J. J., Freeman, B., & Tenenbaum, J. (2015a). Galileo: Perceiving physical object properties by integrating a physics engine with deep learning. Advances in neural information processing systems, 28 .
Wu, J., Yildirim, I., Lim, J. J., Freeman, B., & Tenenbaum, J. (2015b). Galileo: Perceiving Physical Object Properties by Integrating a Physics Engine with Deep Learning. In Advances in Neural Information Processing Systems (Vol. 28). Curran Associates, Inc.
Wu, M., & Goodman, N. (2022). Foundation posteriors for approximate probabilistic inference. arXiv preprint arXiv:2205.09735 .
Wu, S. A., Wang, R. E., Evans, J. A., Tenenbaum, J. B., Parkes, D. C., & Kleiman-Weiner, M. (2021). Too many cooks: Bayesian inference for coordinating multi-agent collaboration. Topics in Cognitive Science, 13 (2), 414â432.
Xie, Y., Yu, C., Zhu, T., Bai, J., Gong, Z., & Soh, H. (2023). Translating natural language to planning goals with large-language models. arXiv preprint arXiv:2302.05128 .
Xu, K., Srivastava, A., Gutfreund, D., Sosa, F., Ullman, T. D., Tenenbaum, J., & Sutton, C. (2021). A bayesian-symbolic approach to reasoning and learning in intuitive physics. Advances in Neural Information Processing Systems, 34 , 2478â2490.
Yang, Y., & Piantadosi, S. T. (2022). One model for the learning of language. Proceedings of the National Academy of Sciences, 119 (5), e2021865119.
Yasunaga, M., & Liang, P. (2020). Graph-based, self-supervised program repair from diagnostic feedback. CoRR, abs/2005.10636 . Retrieved from https://arxiv.org/abs/2005.10636
Yi, K., Gan, C., Li, Y., Kohli, P., Wu, J., Torralba, A., & Tenenbaum, J. B. (2019). Clevrer: Collision events for video representation and reasoning. arXiv preprint arXiv:1910.01442 .
(2018). Neural-symbolic vqa: Disentangling reasoning from vision and language understanding. arXiv preprint arXiv:1810.02338 , 31 , 1031â1042.
Yildirim, I., Belledonne, M., Freiwald, W., & Tenenbaum, J. (n.d.). Efficient inverse graphics in biological face processing. , 77.
Ying, L., Collins, K., Wei, M., Zhang, C., Tan, Z.-X., Weller, A., . . . Wong, L. (2023). The neuro-symbolic inverse planning engine (nipe): Modeling probabilistic social inferences from linguistic inputs. ICML ToM Workshop 2023 .
Yuille, A., & Kersten, D. (2006). Vision as bayesian inference: analysis by synthesis? Trends in cognitive sciences, 10 (7), 301â308.
Zaslavsky, N., Kemp, C., Regier, T., & Tishby, N. (2018). Efficient compression in color naming and its evolution. Proceedings of the National Academy of Sciences, 115 (31), 7937â7942.
Zelikman, E., Wu, Y., Mu, J., & Goodman, N. (2022). Star: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems, 35 , 15476â15488.
Zhang, C. E., Wong, L., Grand, G., & Tenenbaum, J. B. (2023). Grounded physical language understanding with probabilistic programs and simulated worlds. In Proceedings of the annual conference of the cognitive science society (p. To Appear).
Zhang, J., Panthaplackel, S., Nie, P., Li, J. J., & GligoriÄ, M. (2022). Coditt5: Pretraining for source code and natural language editing. ArXiv , abs/2208.05446 .
Zhi-Xuan, T. (2022). Pddl. jl: An extensible interpreter and compiler interface for fast and flexible ai planning
68
REFERENCES
(Unpublished doctoral dissertation). Massachusetts Institute of Technology.
# Zhi-Xuan, T., Mann, J., Silver, T., Tenenbaum, J., & Mansinghka, V.
(2020). Online bayesian goal inference for boundedly rational planning agents. Advances in neural information processing systems, 33 , 19238â19250.
Zhuo, T. Y., Huang, Y., Chen, C., & Xing, Z. (2023). Exploring ai ethics of chatgpt: A diagnostic analysis.
arXiv preprint arXiv:2301.12867 .
Zinberg, B., Cusumano-Towner, M., & Vikash, K. M. (2019). Structured differentiable models of 3d scenes via generative scene graphs. In Workshop on perception as generative reasoning, neurips, submitted september.
69
A LANGUAGE AND WORLD MODELS
# Appendices
We include code for reference below to help better interpret the examples in the paper. This code is included (with human-readable comments) for completeness and for reference, but is not guaranteed to be the the most up-to-date version of these examples. Please refer to the GitHub repository for the most complete, corrected, and up-to-date code for all examples in this paper, as well as instructions for execution and reproducibility: github.com/gabegrand/world-models.
# A Language and world models
# A.1 Probabilistic reasoning
# A.1.1 Generative world model for tug-of-war
1 ;; This Church program models a tug-of-war game between teams of players.
2 3 ;; Each player has a strength, with strength value 50 being about average. 4 (define strength (mem (lambda (player) (gaussian 50 20)))) 5 6 ;; Each player has an intrinsic laziness frequency. 7 (define laziness (mem (lambda (player) (uniform 0 1)))) 8 9 ;; The team's strength is the sum of the players' strengths. 10 ;; When a player is lazy in a match, they pull with half their strength. 11 (define (team-strength team) 12 (sum 13 (map 14 15 (lambda (player) (if (flip (laziness player)) (/ (strength player) 2) (strength player))) team))) 16 17 ;; The winner of the match is the stronger team. 18 ;; Returns true if team-1 won against team-2, else false. 19 (define (won-against team-1 team-2) 20 (> (team-strength team-1) (team-strength team-2)))
# Code Block 1: Generative domain theory for the Bayesian tug-of-war.
# A.1.2 Translation examples for tug-of-war
1 ;; Now, let's translate some user-defined statements. 2 ;; Each statement begins with either `Condition` or `Query`. 3 ;; `Condition` statements provide facts about the scenario. 4 ;; `Query` statements are questions that evaluate quantities of interest. 5 6 ;; Condition: Alice won against Bob. 7 (condition (won-against '(alice) '(bob))) 8 9 ;; Condition: John and Mary won against Tom and Sue. 10 (condition (won-against '(john mary) '(tom sue))) 11 12 ;; Query: If Mary played against Tom, who would win? 13 (query (won-against '(mary) '(tom))) 14 15 ;; Certain statements are underspecified and require some interpretation. For example: 16 ;; Condition: Sue is very strong. 17 (condition (> (strength 'sue) 75))
18
70
A.2 Relational reasoning
A LANGUAGE AND WORLD MODELS
19 ;; We can `define` new constructs that are useful for translation. For example: 20 ;; Condition: Bob is stronger than John. 21 (define (stronger-than? player-1 player-2) 22 (> (strength player-1) (strength player-2))) 23 (condition (stronger-than? 'bob 'john)) 24 25 ;; Query: Is Sue stronger than Mary? 26 (query (stronger-than? 'sue 'mary)) 27 28 ;; Condition: A couple of the players are stronger than John. 29 (condition (>= (count (map (lambda (player) (stronger-than? player 'john) players)) 2)))
# Code Block 2: Prompt examples
# A.2 Relational reasoning
A.2.1 Generative world model for kinship
1 ;; -- KINSHIP GENERATIVE DOMAIN THEORY --
2 3 ;; All the names that can be used in the conversational context. 4 (define ALL-NAMES '(avery blake charlie dana))
5 6 ;; Generates unique person ids of the format (person-0, person-1, ...) 7 (define PERSON-PREFIX "person-") 8 (define new-person-id (make-gensym PERSON-PREFIX)) 9 (define (id->idx person-id) 10 (string->number (string-slice (stringify person-id) (string-length PERSON-PREFIX)))) 11 12 ;; Randomly assign a gender 13 (define person->gender (mem (lambda (person-id) 14 (uniform-draw '(male female))))) 15 16 ;; Randomly-ordered list of person names 17 (define NAMES (shuffle-unique ALL-NAMES)) 18 (define person->name (mem (lambda (person-id) (list-ref NAMES (id->idx person-id))))) 19 20 21 ;; Person node in tree 22 (define (person person-id parent-1-id parent-2-id) (list 23 24 25 26 (pair 'person-id person-id) (pair 'name person-id) (pair 'gender (person->gender person-id)) (pair 'parent-1-id parent-1-id) (pair 'parent-2-id parent-2-id))) 27 28 29 ;; Generate the full tree 30 ;; Max tree size is 1 + (sum_{n=0}^{n=MAX-DEPTH} 2 * MAX-WIDTH^n) 31 (define MAX-WIDTH 3) 32 (define MAX-DEPTH 2) 33 (define PARTNER-PROBABILITY 0.5) 34 (define (generate-tree root-primary-id root-secondary-id depth) 35 (let* ( 36 37 38 ;; Create the primary parent (parent-1-id (new-person-id)) (parent-1 (person parent-1-id root-primary-id root-secondary-id))) 39 40 41 (if (flip PARTNER-PROBABILITY) ;; Case: parent-1 has partner (let* (
71
A.2 Relational reasoning
A LANGUAGE AND WORLD MODELS
42 43 44 ;; Create the secondary parent (parent-2-id (new-person-id)) (parent-2 (person parent-2-id () ())) 45 46 47 48 ;; Link the parents with a partner relation (parent-1 (append parent-1 (list (pair 'partner-id parent-2-id)))) (parent-2 (append parent-2 (list (pair 'partner-id parent-1-id)))) 49 50 51 52 ;; Generate children (n-children (if (>= depth MAX-DEPTH) 0 (bounded-geometric 0.5 0 MAX-WIDTH))) (child-trees (repeat n-children (lambda () (generate-tree parent-1-id parent-2-id (+ depth 1))))) 53 54 55 56 ;; Update the parents to point to the children (child-ids (map (lambda (t) (lookup (first t) 'person-id)) child-trees)) (parent-1 (append parent-1 (list (pair 'child-ids child-ids)))) (parent-2 (append parent-2 (list (pair 'child-ids child-ids))))) 57 58 (append (list parent-1) (list parent-2) (shallow-flatten child-trees))) 59 60 61 ;; Case: parent-1 has no partner (list parent-1)))) 62 63 ;; Generate the global tree. 64 (define T (generate-tree () () 0)) 65 66 ;; Assign names randomly to (some of) the people in the tree. 67 (define (add-names-to-tree tree names) 68 69 (if (null? tree) () (let* 70 71 72 73 ;; Probability of addding a name to the first person ((p (min 1.0 (/ (length names) (length tree)))) (person (first tree))) (if (flip p) 74 75 ;; Name the person (let 76 ((named-person (update-list person 1 (pair 'name (first names))))) 77 78 79 (cons named-person (add-names-to-tree (rest tree) (rest names)))) ;; Don't name the person (cons person (add-names-to-tree (rest tree) names)))))) 80 81 ;; Update the tree with the name information. 82 (define T (add-names-to-tree T NAMES)) Code Block 3: Generative domain theory for family trees.
# A.2.2 Kinship tree utilities
1 ;; -- KINSHIP TREE UTILITIES -- 2 3 ;; Returns all instances of person with property `key` equal to `value` 4 (define filter-by-property (mem (lambda (key value) 5 6 (filter (lambda (p) (equal? (lookup p key) value)) T)))) 7 8 ;; Returns the unique instance of person with name. 9 (define get-person-by-name (mem (lambda (name) (let ((results (filter-by-property 'name name))) (if (null? results) () (first results))))))
10
11
12
13
72
A.2 Relational reasoning
A LANGUAGE AND WORLD MODELS
14 15 ;; People without a name can be referenced directly by person-id. 16 (define get-person-by-id 17 18 (mem (lambda (person-id) (if (null? person-id) 19 20 () (let ((idx (id->idx person-id))) 21 (if (>= idx (length T)) () (list-ref T idx))))))) 22 23 ;; Get a person object either by name or person-id. 24 (define get-person 25 (mem (lambda (person-ref) 26 (cond 27 28 29 ((null? person-ref) ()) ((member? person-ref NAMES) (get-person-by-name person-ref)) (else (get-person-by-id person-ref)))))) 30 31 ;; Get a property of a person. 32 (define get-property 33 (mem (lambda (name key) 34 (lookup (get-person name) key)))) 35 36 ;; -- TREE OPERATORS -- 37 ;; predicate :: name -> boolean 38 39 (define (map-tree predicate) 40 (map (lambda (x) (predicate (lookup x 'name))) T)) 41 42 (define (filter-tree predicate) 43 (filter (lambda (x) (predicate (lookup x 'name))) T)) 44 45 (define (exists predicate) 46 (some (map-tree predicate)))
# Code Block 4: Utility functions for kinship trees.
# A.2.3 Kinship conceptual system
1 ;; -- KINSHIP CONCEPTUAL SYSTEM --
2 3 ;; Gets the partner of a person. 4 (define (partner-of name) 5 (get-property (get-property name 'partner-id) 'name)) 6 7 ;; Gets the parents of a person. 8 (define (parents-of name) 9 (let* ((parent-1-id (get-property name 'parent-1-id)) 10 11 12 (parent-1-name (get-property parent-1-id 'name)) (parent-2-id (get-property name 'parent-2-id)) (parent-2-name (get-property parent-2-id 'name))) 13 (list parent-1-name parent-2-name))) 14 15 ;; Gets the grandparents of a person. 16 (define (grandparents-of name) 17 (let ((parent-1 (first (parents-of name)))) 18 (parents-of parent-1)))
19 20 ;; Gets the children of a person. 21 (define (children-of name)
73
A.2 Relational reasoning
A LANGUAGE AND WORLD MODELS
22
# (let ((child-ids (get-property name 'child-ids)))
23
# (map (lambda (child-id) (get-property child-id 'name)) child-ids)))
24 25 ;; Gets the siblings of a person. 26 (define (siblings-of name)
27
(let* ((parent-1-id (get-property name 'parent-1-id))
28
29
# (child-ids (get-property parent-1-id 'child-ids)) (child-names (map (lambda (child-id) (get-property child-id 'name)) child-ids)))
30
(filter (lambda (child-name) (not (equal? child-name name))) child-names)))
31 32 ;; -- BOOLEAN RELATIONS -- 33 (define (partner-of? name_a name_b) 34 (equal? name_a (partner-of name_b))) 35 36 (define (parent-of? name_a name_b) 37 (member? name_a (parents-of name_b))) 38 39 (define (father-of? name_a name_b) 40 (and (equal? (get-property name_a 'gender) 'male) 41 (parent-of? name_a name_b))) 42 43 (define (mother-of? name_a name_b) 44 (and (equal? (get-property name_a 'gender) 'female) 45 (parent-of? name_a name_b))) 46 47 (define (grandparent-of? name_a name_b) 48 (member? name_a (grandparents-of name_b))) 49 50 (define (grandfather-of? name_a name_b) 51 (and (equal? (get-property name_a 'gender) 'male) 52 (grandparent-of? name_a name_b))) 53 54 (define (grandmother-of? name_a name_b) 55 (and (equal? (get-property name_a 'gender) 'female) 56 (grandparent-of? name_a name_b))) 57 58 (define (child-of? name_a name_b) 59 (member? name_a (children-of name_b))) 60 61 (define (son-of? name_a name_b) 62 (and (equal? (get-property name_a 'gender) 'male) 63 (child-of? name_a name_b))) 64 65 (define (daughter-of? name_a name_b) 66 (and (equal? (get-property name_a 'gender) 'female) 67 (child-of? name_a name_b))) 68 69 (define (sibling-of? name_a name_b) 70 (member? name_a (siblings-of name_b))) 71 72 (define (brother-of? name_a name_b) 73 (and (equal? (get-property name_a 'gender) 'male) 74 (sibling-of? name_a name_b))) 75 76 (define (sister-of? name_a name_b) 77 (and (equal? (get-property name_a 'gender) 'female) 78 (sibling-of? name_a name_b)))
Code Block 5: Conceptual system and derived predicates for kinship trees.
74
A.2 Relational reasoning
A LANGUAGE AND WORLD MODELS
# A.2.4 Translation examples for kinship
1 ;; -- CONDITION AND QUERY STATEMENTS -- 2 ;; Now, let's translate some user-defined statements. 3 ;; Each statement begins with either `Condition` or `Query`. 4 ;; `Condition` statements provide facts about the scenario. 5 ;; `Query` statements are questions that evaluate quantities of interest.
6 7 ;; Condition: Ryan's partner is Taylor. 8 (condition (partner-of? 'ryan 'taylor)) 9 10 ;; Condition: Taylor is the mother of Sam. 11 (condition (mother-of? 'taylor 'sam)) 12 13 ;; Condition: Sam's father is Ryan. 14 (condition (father-of? 'ryan 'sam)) 15 16 ;; Condition: Sam has two siblings. 17 (condition (= (length (siblings-of 'sam)) 2)) 18 19 ;; Condition: Sam has a brother. 20 (condition 21 (exists (lambda (x) 22 (brother-of? x 'sam)))) 23 24 ;; Condition: Payton's partner has a brother named Kyle. 25 (condition 26 (exists (lambda (x) (and 27 28 (partner-of? x 'payton) (brother-of? 'kyle x))))) 29 30 ;; Condition: Payton's partner has a sister who has a son named Sam. 31 (condition 32 (exists (lambda (x) (and 33 34 (partner-of? x 'payton) (exists (lambda (y) (and 35 36 (sister-of? y x) (son-of? 'sam y)))))))) 37 38 ;; Query: Who are Sam's parents? 39 (query (parents-of 'sam)) 40 41 ;; Query: How many children does Kyle have? 42 (query (length (children-of 'kyle))) 43 44 ;; Query: Who is Ryan's grandfather? 45 (query 46 (filter-tree 47 (lambda (x) (grandfather-of? x 'ryan)))) 48 49 ;; Query: Does Taylor have a sister? 50 (query 51 (exists (lambda (x)
52
# (sister-of? x 'taylor))))
# Code Block 6: Translation examples for kinship trees.
75
A.3 Perceptual and physical reasoning
A LANGUAGE AND WORLD MODELS
# A.2.5 Why not Prolog?
Readers who are familiar with the world of logic programming may wonder why we have chosen to model the kinship domain in Church instead of a more standard logic programming language, such as Prolog. Indeed, kinship is often one of the introductory examples in Prolog textbooks (Pereira & Shieber, 2002) and online tutorials,8 from which we drew inspiration when writing this section. Moreover, there are many structural parallels between our framework and the style of declarative programming embodied by Prolog: schemecondition statements in Church are similar to facts in Prolog; derived concepts like schemefather-of? in our Church kinship model are analogous to Prolog rules; and schemequery performs similar functions in both languages (though the algorithms that underlie these queries differ in important ways). And, as discussed in the introduction to Section 3.1, Prolog was originally developed as a model of natural language (Colmerauer et al., 1972) and has deep ties to computational linguistics. So: why not use Prolog?
In short, there is nothing about our approach to semantic parsing that precludes swapping out Church for other programming languages, like Prolog, SMT solvers, or even a general purpose language like Python. In fact, with the right prompting, Codex readily translates natural language utterances like Avery has a sister named Blake into sister_of(blake, avery) in Prolog. On the parsing side, we did not encounter any technical limitations to using LLMs to translate natural language into Prolog.
However, because Prolog is based on definite (Horn) clauses, there are limitations in the kinds of utterances that we can express and the kinds of inferences that we can make when working in Prolog. For instance, a typical Prolog kinship model might have a rule defining the concept of a âgrandfatherâ as follows:
grandfather_of(X,Y) :- male(X),
parent_of(X,Z), parent_of(Z,Y).
Now, if we learn that Charlie is the grandfather of Dana, we might be inclined to translate this into Prolog as a fact: grandfather_of(charlie, dana). Given this information, we can make various deductive inferences: e.g., that Charlie is male, and that there exists some person in the family tree who is both the child of Charlie and the parent of Dana. In fact, this is how the grandfather_of(X,Y) rule is defined in the first place.
For this reason, it is especially counterintuitive that these kinds of inferences are not at all straightforward in Prolog. Because logical implication in definite clauses is unidirectional, anyone satisfying the right-hand side of the grandfather_of(X,Y) rule is considered a grandfather. However, our rule says nothing about what being a grandfather entails. Moreover, our above translation grandfather_of(charlie, dana) is actually quite facile; it simply modifies grandfather_of(X,Y) such that queries will now return true for anyone satisfying the original definition; or for the special case where X=charlie and Y=dana. These are all examples of limitations of the kinds of deductive inferences that we can model with Prolog. Additionally, there are many kinds of inductive inferences that are not well-captured by Prolog; e.g., because Charlie has at least one child, he is more likely to have multiple children, and is more likely to be married.
In sum, to get the kinds of mixed deductive and inductive inferences that we would like to see in an expressive language-of-thought, we need to have ways of incorporating and trading off uncertainty in our world model. ProbLog (De Raedt et al., 2007; Dries et al., 2017; Suster et al., 2021), a probabilistic extension of Prolog in which deduction rules can be annotated with probabilities, offers one way of integrating uncertainty with deductive reasoning. Church goes a step further by specifying a generative domain theory in addition to probabilistic inference rules. We believe that this interplay between probabilistic priors and likelihoodsâwhich is central to Bayesian inferenceâis also at the heart of human cognition.
# A.3 Perceptual and physical reasoning
Static visual scenes
A.3.1 Generative world model for static visual scenes
1 ;; Objects have a shape attribute, which is a choice of cube, sphere, or cylinder shape categories. 2 (define choose-shape
3
# (mem (lambda (obj-id)
# 8https://swish.swi-prolog.org/p/prolog-family-tree.pl
76
A.3 Perceptual and physical reasoning
A LANGUAGE AND WORLD MODELS
(pair 'shape (uniform-draw '(mug can bowl))))))
4
5 6 ;; Objects have a color attribute that is drawn from a predefined set of RGB values. 7 (define choose-color
# w
# (mem (lambda (obj-id)
# (pair 'color (uniform-draw (list
9
(list 255 0 0) (list 0 0 255) (list 0 255 0) (list 255 255 0)
10
11
12
13 )))))) 14 15 ;; An object is an object ID, and the object's attribute types and their values. 16 (define object (mem (lambda (obj-id) (list 17 18 19 (pair 'object-id obj-id) (choose-shape obj-id) (choose-color obj-id))))) 20 21 ;; Scenes can have a maximum of 12 objects. 22 (define max-objects 12) 23 ;; The number of objects in a scene tends to be not too large, and is capped at the maximum number of objects. 24 (define choose-num-objects 25 (mem (lambda (scene-id) (floor (min max-objects (* max-objects (exponential 1))))))) 26 27 ;; Then, for each object we intend to generate, generate an object indexical, and associate it with a choice of attributes. 28 (define obj-id-gensym (make-gensym "obj-")) 29 (define (generate-n-objects scene-id total-objects) (if (= total-objects 0) (list (object (obj-id-gensym))) (cons (object (obj-id-gensym)) (generate-n-objects scene-id (- total-objects 1))))) 33 (define objects-in-scene (mem (lambda (scene-id) (generate-n-objects scene-id (choose-num-objects 30 31 32 scene-id))))) 34 35 36 ;; An object is red if it is of this continuous color value. 37 (define red (list 255 0 0)) 38 ;; An object is blue if it is of this continuous color value. 39 (define blue (list 0 0 255)) 40 ;; An object is green if it is of this continuous color value. 41 (define green (list 0 255 0)) 42 ;; An object is yellow if it is of this continuous color value. 43 (define yellow (list 255 255 0)) 44 45 ;; Check if an object is of a given shape. 46 (define is-shape? (lambda (shape) (lambda (object) (equal? (cdr (assoc 'shape object)) shape)))) 47 ;; Check if an object is of a given named color. 48 (define is-color? (lambda (color) (lambda (object) (equal? (cdr (assoc 'color object)) color)))) 49 50 ;; Select only objects from the scene of a given color. 51 (define filter-color(lambda (color) (lambda (object-list) (filter (is-color? color) object-list)))) 52 53 ;; Select only objects from the scene of a given shape. 54 (define filter-shape (lambda (shape) (lambda (object-list) (filter (is-shape? shape) object-list))))
Code Block 7: Generative domain theory for tabletop scenes. Generates scenes containing a set of objects which vary in shape and color. These scene states are rendered by a separately generated render function to generate images. Shown with natural language comments, but these are not used in the LLM prompt.
77
A.3 Perceptual and physical reasoning
A LANGUAGE AND WORLD MODELS
A.3.2 Translation examples for static visual scenes
1 ;; There's a blue thing. 2 (condition (> (length ((filter-color blue) (objects-in-scene 'this-scene))) 0)) 3 ;; There's at least two blue plates. 4 (condition (>= (length ((filter-color blue) ((filter-shape 'plate) (objects-in-scene 'scene)))) 5 6 7 8 2)) 9 ;; There's many blue plates. 10 (condition (>= (length ((filter-color blue) ((filter-shape 'plate) (objects-in-scene 'scene)))) 11 12 13 14 5)) 15 ;; There's exactly two plates and there's also a yellow thing. 16 (condition 17 18 (and (= (length ((filter-shape 'plate) (objects-in-scene 'scene))) 2) (> (length ((filter-color yellow) (objects-in-scene 'scene))) 0))) 19 20 ;; Is there a mug? 21 (query (> (length ((filter-shape 'mug) (objects-in-scene 'this-scene))) 0))
Code Block 8: Translation examples for the visual domain. These examples are concatenated with the visual scenes generative model to produce the prompt used to generate new translations.
Dynamic physical scenes
A.3.3 Generative world model for physical scenes
# 1 (define (get_attribute obj key)
2 (if (assoc key obj) (rest (assoc key obj)) ())) 3 4 (define (member? a b) 5 (if (member a b) true false)) 6 (define concatenate 7 8 (lambda (list-1 list-2) (if (null? list-1) 9 10 list-2 (cons (car list-1) (concatenate (cdr list-1) list-2))))) 11 12 (define (pairs x l) 13 (define (aux accu x l) 14 15 16 (if (null? l) accu (let ((y (car l)) 17 (tail (cdr l))) 18 (aux (cons (cons x y) accu) x tail)))) 19 (aux '() x l)) 20 21 (define (cartesian_product l m) 22 (define (aux accu l) 23 24 25 (if (null? l) accu (let ((x (car l)) 26 (tail (cdr l)))
78
A.3 Perceptual and physical reasoning
A LANGUAGE AND WORLD MODELS
27
# (aux (append (pairs x m) accu) tail))))
(aux '() l))
28
Generative domain theory: dynamic scenes. Collision detection. 31 (define get_num_objects 2) 32 (define OBJECT_DEFAULT_RADIUS 1) 33 (define GRAVITY 9.8) 34 (define DELTA_T 0.5) 35 36 (define get_initial_color 37 38 39 40 (lambda (obj_id) (if (eq? obj_id 'obj-0) (list 255 0 0) (list 0 0 255)))) 41 42 (define choose_mass 43 (mem (lambda (obj_id) 44 (abs (gaussian 5 3))))) 45 46 (define choose_shapes 47 (mem (lambda (scene-id) (uniform-draw (list 'sphere 'block))))) 48 49 (define min_x -3) 50 (define max_x 3) 51 (define mid_x (+ (/ (- max_x min_x) 2) min_x)) 52 (define get_initial_x (lambda (obj_id) (if (eq? obj_id 'obj-0) 53 54 55 56 min_x mid_x))) 57 58 (define min_force 0) 59 60 61 (define max_force 10) (define mid_force (+ (/ (- max_force min_force) 2) min_force)) (define choose_initial_force 62 (mem (lambda (obj_id) 63 64 65 (if (eq? obj_id 'obj-0) (abs (gaussian mid_force 3)) 0 66 )))) 67 68 (define static_friction_constant (lambda (shape) 69 (if (eq? shape 'sphere) 70 71 0.02 0.05) 72 )) 73 (define kinetic_friction_constant (lambda (shape) 74 (if (eq? shape 'sphere)
29 30 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
Generative domain theory: dynamic scenes. Collision detection.
75
76
0.01 0.02)
77
))
78 (define normal_force (lambda (m) (* m GRAVITY))) 79 (define force_after_friction (lambda (f v shape m)
(if (> (abs v) 0) (- f (* (kinetic_friction_constant shape) (normal_force m))) (if (< f (* (static_friction_constant shape) (normal_force m))) 0 (- f (*
80
81
82
# (kinetic_friction_constant shape) (normal_force m)))
83
))))
79
A.3 Perceptual and physical reasoning
A LANGUAGE AND WORLD MODELS
84 85 (define newtons_second (lambda (f m) (/ f m))) 86 (define v_next (lambda (v_prev a_prev delta_t) 87 88 (let ((v_temp (+ v_prev (* a_prev delta_t)))) (if (>= (* v_prev v_temp) 0) v_temp 0)) 89 )) 90 (define x_next (lambda (x_prev v_prev delta_t) (+ x_prev (* v_prev delta_t)))) 91 (define initial_object_state (mem (lambda (obj_id scene_id) 92 93 (let ((obj_shape (choose_shapes scene_id))) (let ((obj_mass (choose_mass obj_id))) 94 95 (let ((obj_color (get_initial_color obj_id))) (let ((initial_x (get_initial_x obj_id))) 96 97 (let ((initial_push_force (choose_initial_force obj_id))) (let ((initial_force (force_after_friction initial_push_force 0 obj_shape obj_mass))) 98 99 100 101 102 103 104 105 106 107 108 109 (list (pair 'object_id obj_id) (pair 'object_radius OBJECT_DEFAULT_RADIUS) (pair 'shape obj_shape) (pair 'mass obj_mass) (pair 'color obj_color) (pair 'x initial_x) (pair 'initial_push_force initial_push_force) (pair 'f initial_force) (pair 't 0) (pair 'a_prev (newtons_second initial_force obj_mass)) (pair 'a (newtons_second initial_force obj_mass)) (pair 'v_0 0) (pair 'v (v_next 0 (newtons_second initial_force 110 111 obj_mass) DELTA_T))) 112 ))))))))) 113 (define obj_id_gensym (make_gensym "obj-")) 114 (define generate_initial_state 115 116 (mem (lambda (scene_id total_objects) (if (= total_objects 1) 117 118 (list (initial_object_state (obj_id_gensym) scene_id)) (cons (initial_object_state (obj_id_gensym) scene_id) (generate_initial_state scene_id (- total_objects 1))))))) 119 120 (define generate_initial_scene_event_state (mem (lambda (scene_id total_objects) 121 (pair 0 122 123 (list (pair 'scene_states (generate_initial_state scene_id total_objects)) 124 125 (pair 'event_states []) )) 126 127 )) ) 128
129 (define event_id_gensym (make_gensym "event-")) 130 (define circle_intersect? (lambda (subject_x subject_radius object_x object_radius) 131 (let ((square_circle_distance (expt (- subject_x object_x) 2))) 132 (let ((square_radii (expt (+ subject_radius object_radius) 2))) 133 (leq square_circle_distance square_radii))) 134 )) 135 (define elastic_collision_subject_v (lambda (subject_m subject_v object_m object_v)
136 137 ))
(/ (+ (* 2 (* object_m object_v)) (* subject_v (- subject_m object_m))) (+ subject_m object_m))
138
80
A.3 Perceptual and physical reasoning
A LANGUAGE AND WORLD MODELS
# 139 (define get_collision_events (lambda (time scene_event_state_for_time)
140 141 142 (let ((scene_event_state (get_attribute scene_event_state_for_time time))) (let ((scene_state (get_attribute scene_event_state 'scene_states))) (if (= (length scene_state) 1) 143 () 144 145 146 (fold (lambda (event events) (if (equal? event ()) events (cons event events))) (let ((paired_object_states (cartesian_product scene_state scene_state))) (map (lambda (paired_objects) () 147 148 149 150 151 152 153 154 155 156 157 158 (let ((event_subject (get_attribute (first paired_objects) 'object_id))) (let ((event_object (get_attribute (cdr paired_objects) 'object_id))) (if (eq? event_subject event_object) () (let ((subject_v (get_attribute (first paired_objects) 'v))) (let ((subject_x (get_attribute (first paired_objects) 'x))) (let ((subject_m (get_attribute (first paired_objects) 'mass))) (let ((subject_radius (get_attribute (first paired_objects) 'object_radius))) (let ((object_v (get_attribute (cdr paired_objects) 'v))) (let ((object_x (get_attribute (cdr paired_objects) 'x))) (let ((object_m (get_attribute (cdr paired_objects) 'mass))) (let ((object_radius (get_attribute (cdr paired_objects) 'object_radius))) (if (circle_intersect? subject_x subject_radius object_x object_radius) 159 160 (list 161 162 163 164 165 166 (pair 'event-id (event_id_gensym)) (pair 'event_time time) (pair 'event_predicates (list 'is_colliding)) (pair 'event_subject event_subject) (pair 'event_object event_object) (pair 'subject_initial_v subject_v) (pair 'subject_final_v (elastic_collision_subject_v subject_m subject_v object_m 167 object_v)) 168 169 (pair 'object_initial_v object_v) ) 170 ())))))))))) ))) paired_object_states))) 171 172 173 ))))) 174 175 176 (define generate_next_object_state (lambda (current_time event_state) (lambda (prev_object_state) 177 178 (let ((obj_id (cdr (assoc 'object_id prev_object_state)))) (let ((collision_events (fold (lambda (event events) (if (equal? (get_attribute event 'event_subject) obj_id) (cons event events) events)) () event_state))) 179 180 181 182 (if (> (length collision_events) 0) (generate_collision_event_state current_time obj_id prev_object_state (car collision_events)) (generate_no_collision_event_state current_time obj_id prev_object_state) ) ))))) 183
140
141
143
144
145
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172 173 )))))
174
175
182
183
184
# 185 (define generate_collision_event_state (lambda (current_time obj_id prev_object_state collision_event)
186
# (let ((obj_radius (cdr (assoc 'object_radius prev_object_state))))
187
# (let ((obj_mass (cdr (assoc 'mass prev_object_state))))
# (let ((obj_color (cdr (assoc 'color prev_object_state)))) (let ((obj_shape (cdr (assoc 'shape prev_object_state))))
188
189 (let ((obj_shape (cdr (assoc 'shape prev_object_state))))
189
190
# (let ((v_prev (cdr (assoc 'v prev_object_state))))
191
192
# (let ((a_prev (cdr (assoc 'a_prev prev_object_state)))) (let ((x_prev (cdr (assoc 'x prev_object_state))))
193
# (let ((v (get_attribute collision_event 'subject_final_v)))
194
# (let ((x (x_next x_prev v 1)))
195
# (list
81
A.3 Perceptual and physical reasoning
A LANGUAGE AND WORLD MODELS
196 197 198 199 200 201 202 203 204 205 206 207 (pair 'object_id obj_id) (pair 'object_radius obj_radius) (pair 'shape obj_shape) (pair 'color obj_color) (pair 'mass obj_mass) (pair 'x x) (pair 'f 0) (pair 't (* current_time DELTA_T)) (pair 'a_prev 0) (pair 'a 0) (pair 'v_0 0) (pair 'v v)) ))))) 208 209 210 )) )))) 211 212 (define generate_no_collision_event_state (lambda (current_time obj_id prev_object_state) 213 (let ((obj_radius (cdr (assoc 'object_radius prev_object_state)))) 214 (let ((obj_mass (cdr (assoc 'mass prev_object_state)))) 215 216 (let ((obj_color (cdr (assoc 'color prev_object_state)))) (let ((obj_shape (cdr (assoc 'shape prev_object_state)))) 217 (let ((v_prev (cdr (assoc 'v prev_object_state)))) 218 219 (let ((a_prev_no_friction (cdr (assoc 'a_prev prev_object_state)))) (let ((a_prev (newtons_second (force_after_friction 0 v_prev obj_shape obj_mass) obj_mass))) 220 (let ((x_prev (cdr (assoc 'x prev_object_state)))) 221 (let ((v (v_next v_prev a_prev DELTA_T))) 222 (let ((x (x_next x_prev v_prev DELTA_T))) 223 224 225 226 227 228 229 230 231 232 233 234 235 (list (pair 'object_id obj_id) (pair 'object_radius obj_radius) (pair 'shape obj_shape) (pair 'color obj_color) (pair 'mass obj_mass) (pair 'x x) (pair 'f (force_after_friction 0 v_prev obj_shape obj_mass)) (pair 't (* current_time DELTA_T)) (pair 'a_prev a_prev) (pair 'a 0) (pair 'v_0 0) (pair 'v v)) ))))) 236 237 238 ))) )))) 239
# 240 (define generate_next_scene_state (lambda (prev_scene_state event_state next_time) (map (generate_next_object_state next_time event_state) prev_scene_state)))
241
242
# 243 (define generate_next_scene_event_state_time (lambda (next_time scene_event_state_for_times)
244
245
246
(let ((prev_scene_event_state (get_attribute scene_event_state_for_times (- next_time 1)))) (let ((prev_scene_state (get_attribute prev_scene_event_state 'scene_states))) (let ((event_state (get_collision_events (- next_time 1) scene_event_state_for_times)))
247
248
# (pair next_time (list
249
250
(pair 'scene_states (generate_next_scene_state prev_scene_state event_state next_time)) (pair 'event_states event_state)
251 252 )))))
))
253
# 254 (define generate_next_scene_event_states
82
A.3 Perceptual and physical reasoning
A LANGUAGE AND WORLD MODELS
256
# (lambda (current_time prev_scene_event_states_for_times) (cons (generate_next_scene_event_state_time current_time prev_scene_event_states_for_times) prev_scene_event_states_for_times)
257 ))
258
# 259 (define generate_scene_event_states_for_times (mem (lambda (scene_id total_objects total_time)
260
# (if (= total_time 0)
261
262
(list (generate_initial_scene_event_state
# scene_id total_objects)
263
264
# ) (let ((prev_scene_event_states
# (generate_scene_event_states_for_times scene_id total_objects (- total_time 1))))
265
# (generate_next_scene_event_states
total_time prev_scene_event_states)
266 )))))
267 268 (define max_time 9) 269 270 (define base_states_for_times (generate_scene_event_states_for_times 'this_scene get_num_objects max_time)) 271 272 ;;;;;;;;;;;;;;;;;;;;;;;;;;Derived predicates. 273 (define objects_in_scene (lambda (base_states_for_times) 274 275 276 (let ((initial_base_states_at_time (cdr (assoc 0 (cdr base_states_for_times))))) (let ((base_state (cdr (assoc 'scene_states initial_base_states_at_time)))) base_state 277 )) 278 )) 279 (define red (list 255 0 0)) 280 (define blue (list 0 0 255)) 281 (define is_color? (lambda (color) (lambda (object) (equal? (cdr (assoc 'color object)) color)))) 282 (define is_shape? (lambda (shape) (lambda (object) (equal? (cdr (assoc 'shape object)) shape)))) 283 284 (define all_objects (objects_in_scene base_states_for_times)) 285 (define (exists_object predicate) 286 (some (map predicate (objects_in_scene base_states_for_times)))) 287 288 (define (filter_objects predicate) (map (lambda (o) (get_attribute o 'object_id)) (filter predicate (objects_in_scene base_states_for_times)))) 291 292 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; 293 (define QUICKLY_THRESHOLD 2) 294 (define SLOWLY_THRESHOLD 2) 289 290 295 296 (define is_moving_events (mem (lambda (base_states_for_times) 297 298 299 300 (fold (lambda (base_state_for_time these_events) (let ((current_time (car base_state_for_time))) (let ((base_state (cdr (assoc 'scene_states (cdr base_state_for_time))))) (fold (lambda (obj_state these_events) 301 302 303 304 (let ((obj_id (cdr (assoc 'object_id obj_state)))) (let ((obj_velocity (cdr (assoc 'v obj_state)))) (let ((obj_speed (abs obj_velocity))) (if (> obj_speed 0) 305 ;; 306 (let ((event_predicates 307 308 (if (> obj_speed QUICKLY_THRESHOLD) (list 'is_moving 'is_quickly) 83
A.3 Perceptual and physical reasoning
A LANGUAGE AND WORLD MODELS
309 310 311 312 (if (< obj_speed SLOWLY_THRESHOLD) (list 'is_moving 'is_slowly) (list 'is_moving) )) 313 )) 314 (cons 315 316 317 318 (list (pair 'event-id (event_id_gensym)) (pair 'event_time current_time) (pair 'event_predicates event_predicates) (pair 'event_subject obj_id) 319 320 (pair 'event_speed obj_speed) 321 ) 322 323 these_events)) these_events 324 ))))) these_events base_state)))) 325 326 327 () base_states_for_times)))) 328 329 (define is_resting_events (mem (lambda (base_states_for_times) 330 331 332 333 (fold (lambda (base_state_for_time these_events) (let ((current_time (car base_state_for_time))) (let ((base_state (cdr (assoc 'scene_states (cdr base_state_for_time))))) (fold (lambda (obj_state these_events) 334 335 336 337 (let ((obj_id (cdr (assoc 'object_id obj_state)))) (let ((obj_velocity (cdr (assoc 'v obj_state)))) (let ((obj_speed (abs obj_velocity))) (if (= obj_speed 0) 338 ;; 339 (let ((event_predicates 340 (list 'is_resting))) 341 (cons 342 343 344 345 (list (pair 'event-id (event_id_gensym)) (pair 'event_time current_time) (pair 'event_predicates event_predicates) (pair 'event_subject obj_id) 346 347 (pair 'event_speed obj_speed) 348 ) 349 350 these_events)) these_events 351 352 353 ))))) these_events base_state)))) 354 () base_states_for_times)))) 355 356 (define is_colliding_events (mem (lambda (base_states_for_times) 357 358 359 360 (fold (lambda (base_state_for_time these_events) (let ((current_time (car base_state_for_time))) (let ((event_states (cdr (assoc 'event_states (cdr base_state_for_time))))) (fold (lambda (event_state these_events)
309
361
362
363
364
(let ((subject_initial_speed (abs (get_attribute event_state 'subject_initial_v)))) (let ((subject_final_speed (abs (get_attribute event_state 'subject_final_v)))) (let ((object_initial_speed (abs (get_attribute event_state 'object_initial_v)))) (let ((cause_subject_object_event (and (> subject_initial_speed 0) (=
365
366
# object_initial_speed 0)))) (let ((event_predicates
84
A.3 Perceptual and physical reasoning
A LANGUAGE AND WORLD MODELS
367 (if (and cause_subject_object_event (eq? subject_final_speed 0)) 368 369 (list 'is_launching 'is_hitting 'is_colliding) (if (> subject_initial_speed 0) 370 371 372 (list 'is_hitting 'is_colliding) (list 'is_colliding) ) 373 ))) 374 375 (cons (list 376 377 378 379 380 381 382 383 (pair 'event-id (get_attribute event_state 'event-id)) (pair 'event_time (get_attribute event_state 'event_time)) (pair 'event_predicates event_predicates) (pair 'event_subject (get_attribute event_state 'event_subject)) (pair 'event_object (get_attribute event_state 'event_object)) (pair 'subject_initial_v (get_attribute event_state 'subject_initial_v )) (pair 'subject_final_v (get_attribute event_state 'subject_final_v )) (pair 'object_initial_v (get_attribute event_state 'object_initial_v )) ) these_events)))))) 384 385 ) these_events event_states) 386 387 ))) () base_states_for_times) 388 389 ))) 390 391 392 393 (define events_in_scene (concatenate 394 395 396 397 (is_colliding_events base_states_for_times) (concatenate (is_moving_events base_states_for_times) (is_resting_events base_states_for_times)))) 398 399 400 (define is_event? (lambda (event_predicate event) (member? event_predicate (get_attribute event 'event_predicates)))) 401 402 (define is_subject_of_event? (lambda (event object ) (equal? 403 404 (get_attribute event 'event_subject) (get_attribute object 'object_id) 405 ))) 406 407 (define is_object_of_event? (lambda (event object ) (equal? 408 409 (get_attribute event 'event_object) (get_attribute object 'object_id) 410 ))) 411 412 (define event_subject_is? (lambda (event predicate) (member? 413
414
(get_attribute event 'event_subject) (filter_objects predicate)
415
)))
416 (define event_object_is? (lambda (event predicate) (member?
418
(get_attribute event 'event_object) (filter_objects predicate)
419
)))
420
# 421 (define (exists_event predicate)
422
# (some (map predicate events_in_scene)))
423
# 424 (define (filter_events predicate)
85
A.3 Perceptual and physical reasoning
A LANGUAGE AND WORLD MODELS
425
(filter predicate events_in_scene))
Code Block 9: Generative domain theory for physical scenes. Generates scenes containing a red object left of a blue object, and a randomly generated force. These scene states are forward simulated using a physics engine which is shown implemented within this Church code. Shown with natural language comments, but these are not used in the LLM prompt.
# A.3.4 Translation examples for visual scenes
1 ;; The objects are all balls. 2 (condition (all (map (lambda (o) ((is_shape? 'sphere) o)) all_objects))) 3 ;; Everything is a ball. 4 (condition (all (map (lambda (o) ((is_shape? 'sphere) o)) all_objects))) 5 ;; Imagine the red thing is a block, and is somewhat heavy. 6 (condition (exists_object (lambda (object)
7 8 9 10 (and ((is_color? red) object) ((is_shape? 'cube) object) (> (get_attribute object 'mass) 2) )))) 11 12 ;; There is a blue ball, and it is quite heavy. 13 (condition (exists_object (lambda (object) 14 15 16 17 (and ((is_color? blue) object) ((is_shape? 'sphere) object) (> (get_attribute object 'mass) 3.5) )))) 18 19 ;; Now, the red block is very light. 20 (condition (exists_object (lambda (object) 21 22 23 24 (and ((is_color? red) object) ((is_shape? 'cube) object) (< (get_attribute object 'mass) 1) )))) 25 26 ;; A blue ball is somewhat light. 27 (condition (exists_object (lambda (object) 28 29 30 31 (and ((is_color? red) object) ((is_shape? 'cube) object) (< (get_attribute object 'mass) 2) )))) 32 33 ;; Imagine the red block gets pushed lightly to the right. 34 (condition (exists_object (lambda (object) 35 36 37 38 (and ((is_color? red) object) ((is_shape? 'cube) object) (< (get_attribute object 'initial_push_force) 2) )))) 39 40 ;; Now, imagine a red ball is pushed hard to the right. 41 (condition (exists_object (lambda (object) 42 43 44 45 (and ((is_color? red) object) ((is_shape? 'sphere) object) (> (get_attribute object 'initial_push_force) 6) )))) 46 47 ;; A red block hits a blue block. 48 (condition 49 (exists_object (lambda (object_1)
86
A.4 Social reasoning
A LANGUAGE AND WORLD MODELS
# 50 (exists_object (lambda (object_2) 51 (exists_event (lambda (event)
52 (and 53 54 55 56 57 58 59 ((is_color? red) object_1) ((is_shape? 'cube) object_1) ((is_color? blue) object_2) ((is_shape? 'cube) object_2) (is_subject_of_event? event object_1) (is_object_of_event? event object_2) (is_event? 'is_hitting event)) ))))))) 60 61 ;; What's the final velocity of the red block after it is hit? 62 (query (last (map 63 (lambda (event) (get_attribute event 'subject_final_v)) 64 (filter_events 65 (lambda (e) 66 (and 67 (is_event? 'is_colliding e) 68 (event_subject_is? e (lambda (o) 69 70 71 (and ((is_color? red) o) ((is_shape? 'cube) o))))))))))
Code Block 10: Translation examples for the physics domain. These examples are concatenated with the physical scenes generative model to produce the prompt used to generate new translations.
# A.4 Social reasoning
A.4.1 Generative world model for social reasoning
# 1 (define gridworld (list
4
(list 'ames 'lawn 'lawn 'lawn 'sushi) (list 'ames 'lawn 'lawn 'lawn 'danner) (list 'office 'barlow 'barlow 'barlow 'danner) (list 'ames 'lawn 'lawn 'lawn 'danner) (list 'ames 'lawn 'lawn 'lawn 'vegetarian) (list 'pizza 'carson 'carson 'carson 'danner)
# 7 8 )) 9 (define restaurants (list 'sushi 'pizza 'vegetarian))
10
11 (define initial_x 1) 12 (define initial_y 3)
13
14
15 (define has_bike (mem (lambda (agent-id) (flip)))) 16 (define available_motions (mem (lambda (agent-id) (if (has_bike agent-id) (list 'is_walking 'is_biking)
(list 'is_walking)))))
17 (define directions (list 'west 'east 'north 'south)) 18 (define available_actions (mem (lambda (agent-id) (cons (pair 'stay 'stay) (cartesian_product
(available_motions agent-id) directions)))))
19
20 (define is_open (mem (lambda (restaurant_type) (flip)))) 21 (define POSITIVE_UTILITY_MEAN 10) 22 (define NEGATIVE_UTILITY_MEAN -10) 23 (define UTILITY_VARIANCE 1) 24 (define restaurant_utility (mem (lambda (agent-id restaurant_type)
25 (uniform-draw
87
A.4 Social reasoning
A LANGUAGE AND WORLD MODELS
26 27 28 29 ))))) (list (gaussian POSITIVE_UTILITY_MEAN UTILITY_VARIANCE) (gaussian NEGATIVE_UTILITY_MEAN UTILITY_VARIANCE) 30 31 (define motion_utility (mem (lambda (agent-id location_type motion_type) 32 (case location_type 33 34 35 36 37 38 (('lawn) (case motion_type (('is_biking) -1) (('is_walking) -0.2) (('is_staying) 0) (else 0)) ) (else (case motion_type 39 40 41 42 43 44 )))) (('is_biking) -0.01) (('is_walking) -0.2) (('is_staying) 0) (else 0))) 45 46 (define food_utility (mem (lambda (agent-id location_type) 47 48 49 50 51 52 53 54 (case location_type (('lawn) 0) (('ames) 0) (('barlow) 0) (('carson) 0) (('danner) 0) (('office) 0) (else (if (is_open location_type) (restaurant_utility agent-id location_type) NEGATIVE_UTILITY_MEAN)) 55 56 )))) 57 58 (define utility_function (mem (lambda (agent-id gridworld state_x state_y action) 59 60 61 62 (let ((location_type (get_gridworld_at gridworld state_x state_y))) (let ((motion_type (car action))) (let ((state_food_utility (food_utility agent-id location_type))) (let ((state_motion_utility (motion_utility agent-id location_type motion_type))) (+ state_food_utility state_motion_utility)))))))) 63 64 65 (define get_gridworld_at (lambda (gridworld x y) (list-elt (list-elt gridworld y) x) 66 67 )) 68 (define x_increment (lambda (direction) 69 70 71 72 73 (case direction (('west) -1) (('east) 1) (('north) 0) (('south) 0) (('stay) 0)
# 74 75 ))) 76 (define y_increment (lambda (direction)
77
# (case direction
78
79
80
81
(('north) -1) (('south) 1) (('west) 0) (('east) 0) (('stay) 0)
# 82 83 ))) 84 (define gridworld_max_x (lambda (gridworld) (length (list-elt gridworld 1))))
88
A.4 Social reasoning
A LANGUAGE AND WORLD MODELS
85 (define gridworld_max_y (lambda (gridworld) (length gridworld))) 86 (define gridworld_transition (lambda (gridworld current_x current_y action) 87 88 (let ((direction (cdr action))) (let ((next_x (if (>= current_x (gridworld_max_x gridworld)) current_x (+ (x_increment direction) current_x)))) 89 90 (let ((next_x (if (< next_x 1) current_x next_x))) (let ((next_y (if (>= current_y (gridworld_max_y gridworld)) current_y (+ (y_increment direction) current_y)))) (let ((next_y (if (< next_y 1) current_y next_y))) (let ((next_state (get_gridworld_at gridworld next_x next_y))) (list next_state next_x next_y) 91 92 93 94 )))))))) 95 96 (define value_function (mem (lambda (agent-id curr_iteration gridworld state_x state_y) 97 98 (if (equal? curr_iteration -1) 0 (let ((prev_optimal_action_value (optimal_action_value agent-id (- curr_iteration 1) gridworld state_x state_y))) (cdr prev_optimal_action_value)) 99 100 )))) 101 102 (define available_actions_to_values (mem (lambda (agent-id curr_iteration gridworld state_x state_y) 103 (map (lambda (action) 104 105 106 107 108 (let ((utility (utility_function agent-id gridworld state_x state_y action))) (let ((next_state (gridworld_transition gridworld state_x state_y action))) (let ((next_state_x (second next_state))) (let ((next_state_y (third next_state))) (let ((next_state_value (value_function agent-id curr_iteration gridworld next_state_x next_state_y))) 109 (pair action (+ utility next_state_value)) 110 111 112 ))) )))))) (available_actions agent-id)) 113 114 (define optimal_action_value (mem (lambda (agent-id curr_iteration gridworld state_x state_y) 115 (let ((actions_to_values (available_actions_to_values agent-id curr_iteration gridworld state_x state_y))) 116 117 118 ))) (max_cdr actions_to_values) ) 119 120 (define MAX_ITERATIONS 20) 121 (define should_terminate (mem (lambda (agent-id gridworld state_x state_y)
122
(if (<= (value_function agent-id MAX_ITERATIONS gridworld initial_x initial_y) 0) true (let ((location_type (get_gridworld_at gridworld state_x state_y))) (let ((state_food_utility (food_utility agent-id location_type)))
125
# (> state_food_utility 0)))))))
126
127
128
129 (define optimal_policy_from_initial_state (mem (lambda (agent-id gridworld state_x state_y)
130
131
132
134
135
136
(if (should_terminate agent-id gridworld state_x state_y) () (let ((curr_optimal_action_value (optimal_action_value agent-id MAX_ITERATIONS gridworld state_x state_y))) (let ((curr_optimal_action (car curr_optimal_action_value))) (let ((next_state (gridworld_transition gridworld state_x state_y curr_optimal_action))) (let ((next_state_x (second next_state))) (let ((next_state_y (third next_state))) (let ((remaining_policy (optimal_policy_from_initial_state agent-id gridworld next_state_x next_state_y)))
89
A.4 Social reasoning
A LANGUAGE AND WORLD MODELS
(cons curr_optimal_action remaining_policy) 137 138 )))))))))) 139 140 (define trajectory_from_initial_state (mem (lambda (agent-id gridworld state_x state_y) 141 142 143 144 145 146 147 148 (if (should_terminate agent-id gridworld state_x state_y) () (let ((curr_optimal_action_value (optimal_action_value agent-id MAX_ITERATIONS gridworld state_x state_y))) (let ((curr_optimal_action (car curr_optimal_action_value))) (let ((next_state (gridworld_transition gridworld state_x state_y curr_optimal_action))) (let ((next_state_location (first next_state))) (let ((next_state_x (second next_state))) (let ((next_state_y (third next_state))) (let ((remaining_trajectory (trajectory_from_initial_state agent-id gridworld next_state_x next_state_y))) (cons next_state_location remaining_trajectory)) 149 150 )))))))))) 151 152 (define optimal_policy (mem (lambda (agent-id gridworld initial_state_x initial_state_y) (cons (pair 'start 'start) (optimal_policy_from_initial_state agent-id gridworld 153 initial_state_x initial_state_y))))) 154 155 (define optimal_trajectory (mem (lambda (agent-id gridworld initial_state_x initial_state_y) 156 (cons (get_gridworld_at gridworld initial_state_x initial_state_y) (trajectory_from_initial_state agent-id gridworld initial_state_x initial_state_y)) 157 ))) 158 159 (define optimal_policy_with_trajectory (mem (lambda (agent-id gridworld initial_state_x initial_state_y) 160 (zip (optimal_policy agent-id gridworld initial_state_x initial_state_y) (optimal_trajectory agent-id gridworld initial_state_x initial_state_y)) 161 ))) 162 163 (define get_terminal_goal_state (mem (lambda (agent-id gridworld initial_state_x initial_state_y) 164 (last (optimal_trajectory agent-id gridworld initial_state_x initial_state_y))))) 165 166 (define trajectory_has_location_type? (mem (lambda (agent-id location_type gridworld initial_state_x initial_state_y) (member? location_type (optimal_trajectory agent-id gridworld initial_state_x initial_state_y)) 167 168 ))) 169 (define policy_has_motion_type? (mem (lambda (agent-id motion_type gridworld initial_state_x initial_state_y) 170 (let ((policy_motions (map (lambda (action) (first action)) (optimal_policy agent-id gridworld initial_state_x initial_state_y)))) (member? motion_type policy_motions) 171 172 )))) 173 (define policy_and_trajectory_has_motion_at_location? (mem (lambda (agent-id motion_type location_type gridworld initial_state_x initial_state_y) 174 (let ((policy_motions (map (lambda (action) (first action)) (optimal_policy agent-id gridworld
initial_state_x initial_state_y))))
(let ((trajectory (optimal_trajectory agent-id gridworld initial_state_x initial_state_y))) (let ((motions_at_locations (zip policy_motions trajectory))) (member? (list motion_type location_type) motions_at_locations)
175
(member? (list motion_type location_type) motions_at_locations)
177 178 ))))))
179
180 (define motion_at_location? (mem (lambda (agent-id motion_type location_type gridworld initial_state_x initial_state_y)
181
# (let ((policy_motions (map (lambda (action) (first action)) (optimal_policy agent-id gridworld
# initial_state_x initial_state_y))))
182
183
# (let ((trajectory (optimal_trajectory agent-id gridworld initial_state_x initial_state_y))) (let ((motions_at_locations (zip policy_motions trajectory)))
90
A.4 Social reasoning
A LANGUAGE AND WORLD MODELS
motions_at_locations 184 185 )))))) 186 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; 187 ;; Derived predicates. 188 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;; 189 (define action_id_gensym (make_gensym "action-")) 190 (define is_going_to_actions (mem (lambda (agent-id) 191 192 193 (let ((action_states (optimal_policy_with_trajectory agent-id gridworld initial_x initial_y))) (let ((final_location (last (last action_states)))) (list (list 194 195 196 197 (pair 'action_id (action_id_gensym)) (pair 'action_subject agent-id) (pair 'action_predicates (list 'is_going (list 'to final_location))) (pair 'action_preposition 'to) (pair 'action_location final_location) 198 199 ))))))) 200 201 (define is_going_on_actions (mem (lambda (agent-id) 202 203 204 205 206 207 (let ((action_states (optimal_policy_with_trajectory agent-id gridworld initial_x initial_y))) (fold (lambda (action_state these_actions) (let ((action_location (last action_state))) (let ((action_manner (first (first action_state)))) (let ((action_direction (cdr (first action_state)))) (cons (list 208 209 210 211 (pair 'action_id (action_id_gensym)) (pair 'action_subject agent-id) (pair 'action_predicates (list 'is_going action_manner action_direction (list 'on action_location))) 212 213 (pair 'action_preposition 'on) (pair 'action_location action_location) 214 215 ) these_actions) 216 )))) 217 218 )))) () action_states) 219 220 (define actions_in_scene (mem (lambda (agent-id) (concatenate (is_going_to_actions agent-id) (is_going_on_actions agent-id))))) 221 (define is_action? (lambda (action action_predicate) (member? action_predicate (lookup action 'action_predicates)))) 222 (define is_subject_of_action? (lambda (action entity) (eq? 223 224 (lookup action 'action_subject) entity 225 ))) 226
227 (define is_preposition_of_action? (lambda (action preposition) (eq?
# (lookup action 'action_preposition) preposition
228
229
)))
230
231 (define is_location_of_action? (lambda (action location) (eq?
# (lookup action 'action_location) location
232
233
)))
234
235
# 236 (define get_location (lambda (action) (lookup action 'action_location)
237
238
))
239
91
A.4 Social reasoning
A LANGUAGE AND WORLD MODELS
240 (define (exists_action agent-id predicate) 241 (some (map predicate (actions_in_scene agent-id)))) 242 243 (define (get_actions agent-id predicate) 244 (fold (lambda (action these_actions) (if (predicate action) (cons action these_actions) these_actions)) 245 246 ) () (actions_in_scene agent-id))
Code Block 11: Generative domain theory for restaurant navigation domain. Generates agents with varying preferences in a gridworld environment. Also implements a value iteration-based planner directly in the Church code.
A.4.2 Translation examples for social reasoning domain
1 ;; Bob likes pizza. 2 (condition (> (restaurant_utility 'bob 'pizza) 0)) 3 ;; Bob really likes pizza. 4 (condition (> (restaurant_utility 'bob 'pizza) 10)) 5 ;; Bob does not like pizza, and he actually despises vegetables. 6 (condition (and
(< (restaurant_utility 'bob 'pizza) 0) (< (restaurant_utility 'bob 'vegetarian) 10) 7 8 9 )) 10 ;; The pizza place is not open. 11 (condition (not (is_open 'pizza))) 12 ;; Condition on: Bob walked North on Danner. 13 (condition (exists_action 'bob (lambda (action) 14 15 16 17 18 (and (is_subject_of_action? action 'bob) (is_action? action 'is_walking) (is_action? action 'north) (is_preposition_of_action? action 'on) (is_location_of_action? action 'danner))))) 19 20 ;; Does Bob like vegetarian food? 21 (query (> (restaurant_utility 'bob 'vegetarian) 0)) 22 ;; Where is Bob going? 23 (query (get_actions 'bob (lambda (action) (and (is_subject_of_action? action 'bob) (is_action? action 'is_going))))) 26 27 ;; Where will Bob go to for lunch? 28 (query (get_location (first 24 25 29 30 31 32 33 (get_actions 'bob (lambda (action) (and (and (is_subject_of_action? action 'bob) (is_action? action 'is_going)) (is_preposition_of_action? action 'to)) ))))) 34
Code Block 12: Translation examples for the social reasoning. These examples are concatenated with the social reasoning scenes generative model to produce the prompt used to generate new translations.
92
B OPEN QUESTIONS
# B Open questions
# B.1 Syntactic bootstrapping
Bootstrapped language-to-code translations for novel words There is a dax. Awog blicks a foog. EE Condition (existsobject (lanbda (object) (is_shape? âdax) object))) (condition (exists_object (lambda (object_1) (exists_object (lambda (object_2) There is a pelgy dax. (exists_event (lambda (event) (and (and (and (and (and (and (and o (condition (exists_object (lambda (object) oy (Cis_shape? 'wog) object_1)) and (Cis_shape? 'foog) object_2)) Fy (Gs.color? âpelgy) object) (is_subject_of_event? event object_1)) (Cis shape? 'dax) object))))) (is_object_of_event? event object_2)) (is_event_base? 'is_blicking event) (caused_event? event object_1)))))))))) A pelgy dax is gorping. (condition (exists_object (lambda (object) : (exists_event (lambda (event) Awog and a foog are zeeming. (and (and (and i Fy (Cis_color? 'pelgy) object) (condition (Cis_shape? 'dax) object)) (and (is_subject_of_event? event object)) (is_event_base? 'is_gorping event)))))))) (exists_object (lambda (object_1) (exists_event (lambda (event_1) (and (and . (Cis_shape? 'wog) object_1) Awug gorps feppily. (is_subject_of_event? event_1 object_1)) oy (is_event_base? 'is_zeeming event_1)))))) (condition (exists_object (lambda (object) (exists_event (lambda (event) (exists_object (lambda (object_2) (and (and Cand (exists_event (lambda (event_2) Fy (Cis_shape? 'wug) object) (and (and (is_subject_of_event? event object)) (Cis_shape? 'foog) object_2) (is_event_base? 'is_gorping event) (is_subject_of_event? event_2 object_2)) (is_event_modifier? 'is_feppily event))))))) (is_event_base? 'is_zeeming event_2))))))))
Figure 14: Example translations with novel words, suggesting that language-to-code models can leverage syntax-semantic mappings to inform hypothesized meanings.
# B.2 Code editing
While our framework focuses primarily on generating code in the PLoT, this view encompasses only part of the broader story of natural language. In particular, in certain contexts, it might not make sense to write new code, but instead to modify the existing domain theory. Consider the following statements, taken in context of the domains we explored in Section 3:
(Tug-of-war) The teamâs strength is the strength of the strongest player. ⢠(Kinship) Avery has two kids from a previous marriage. ⢠(Visual scenes) Thereâs a red mug stacked on top of a yellow can. ⢠(Navigation) Thereâs a river separating the North and South sides of town, which you can paddle
across in nice weather.
These utterances bend or break the rules of their respective domain theories. To properly integrate these kinds of language, weâd need to edit pieces of the existing generative models.
While language-guided code editing is still an open area of research, recent advances offer an exciting glimpse of what might be possible in the near-term. Ouyang et al. (2022) use a combination of finetuning and reinforcement learning to make GPT-3 adhere more closely to human-authored instructions. The resulting InstructGPT models, which OpenAI make available on their API, are capable of editing existing text based on short natural language instructions (e.g., âFix the grammarâ; âTurn this into a poem.â).9 Excitingly, this same approach extends to code-based LLMs, meaning that it is possible to prompt GPT models to edit a piece of code according to some instructions. Indeed, we can use OpenAIâs editing interface off-the-shelf to handle utterances requiring localized changes to the domain model (see below for a simple example in the tug-of-war domain).
# 9https://openai.com/blog/gpt-3-edit-insert/
93
C ATTRIBUTIONS
Redefine: The teamâs strength is the strength of the strongest player.
1 ;; The team's strength is the sum of the
1 ;; The team's strength is the strength of the
strongest player.
players' strengths. strongest player. 2 ;; When a player is lazy in a match, they pull with half their strength. with half their strength. 3 (define (team-strength team) 3 (define (team-strength team) 4 5 (sum (map 4 5 (apply max 6 (lambda (player) 6 (lambda (player) 7 8 9 (if (flip (laziness player)) (/ (strength player) 2) (strength player))) 7 8 9 (if (flip (laziness player)) (/ (strength player) 2) (strength player))) 10 team))) 10 team)))
2 ;; When a player is lazy in a match, they pull
Though questions of scaling and robustness remain, the problem of modeling sequences of code changes is currently gaining traction in the machine learning for code community, which has recently produced multiple language-guided neural models of code editing (Chakraborty, Ding, Allamanis, & Ray, 2022; Chakraborty & Ray, 2021; Fried et al., 2022; Panthaplackel, Nie, Gligoric, Li, & Mooney, 2020; Reid & Neubig, 2022; J. Zhang, Panthaplackel, Nie, Li, & GligoriÄ, 2022) that draw broadly on contemporary work in automated program repair (Bai et al., 2021; Y. Li, Wang, & Nguyen, 2020; Yasunaga & Liang, 2020). These advances suggest a broader vision for our framework in which domain theories, expressed in the PLoT, can be iteratively grown and revised to reflect natural language instruction. Moreover, as code LLMs become more general-purpose, the technical gap between generation and editing will continue to narrow, suggesting a point in the near future where defining new components of a domain theory will be a special case of language-guided code editing.
# C Attributions
# C.1 Attribution of graphics resources
Se
Artificial neural network icon by sachin modgekar from thenounproject.com.
# fod
Cog icon by Rflor from thenounproject.com.
94 | {
"id": "1810.04805"
} |
2306.12420 | LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models | Large foundation models have demonstrated a great ability to achieve general
human-level intelligence far beyond traditional approaches. As the technique
keeps attracting attention from the AI community, more and more large
foundation models have become publically available. However, most of those
models exhibit a major deficiency in specialized-task applications, where the
step of finetuning is still required for obtaining satisfactory performance. As
the number of available models and specialized tasks keeps growing, the job of
general finetuning becomes highly nontrivial. In this paper, we take the first
step to address this issue. We introduce an extensible and lightweight toolkit,
LMFlow, which aims to simplify the finetuning and inference of general large
foundation models. LMFlow offers a complete finetuning workflow for a large
foundation model to support personalized training with limited computing
resources. Furthermore, it supports continuous pretraining, instruction tuning,
parameter-efficient finetuning, alignment tuning, and large model inference,
along with carefully designed and extensible APIs. This toolkit has been
thoroughly tested and is available at https://github.com/OptimalScale/LMFlow. | http://arxiv.org/pdf/2306.12420 | Shizhe Diao, Rui Pan, Hanze Dong, Ka Shun Shum, Jipeng Zhang, Wei Xiong, Tong Zhang | cs.CL, cs.AI | 13 pages, 3 figures | null | cs.CL | 20230621 | 20230621 | 3 2 0 2
n u J 1 2 ] L C . s c [
1 v 0 2 4 2 1 . 6 0 3 2 : v i X r a
# LMFlow: An Extensible Toolkit for Finetuning and Inference of Large Foundation Models
# Shizhe Diaoâ Rui Panâ Hanze Dongâ Ka Shun Shum Jipeng Zhang Wei Xiong
# Tong Zhang
# Abstract
Large foundation models have demonstrated a great ability to achieve general human-level intelligence far beyond traditional approaches. As the technique keeps attracting attention from the AI community, more and more large foundation models have become publically available. However, most of those models exhibit a major deficiency in specialized-task applications, where the step of finetuning is still required for obtaining satisfactory performance. As the number of available models and specialized tasks keeps growing, the job of general finetuning becomes highly nontrivial. In this paper, we take the first step to address this issue. We introduce an extensible and lightweight toolkit, LMFlow, which aims to simplify the finetuning and inference of general large foundation models. LMFlow offers a complete finetuning workflow for a large foundation model to support personalized training with limited computing resources. Furthermore, it supports continuous pretraining, instruction tuning, parameter-efficient finetuning, alignment tuning, and large model inference, along with carefully designed and extensible APIs. This toolkit has been thoroughly tested and is available at https://github.com/ OptimalScale/LMFlow.
# Introduction
Large foundation models, and in particular large language models (LLMs), have demonstrated general abilities to perform different tasks beyond what was possible previously. However, for specialized domains or tasks, it is necessary to further finetune such LLMs to achieve improved performance on such domains or tasks. The typical processes to finetune such large models include:
⢠Continuous pretraining on special domains so that a large foundation model can acquire knowledge on these domains.
⢠Instruction tuning to teach a large foundation model the capability to follow these specialized natural language instructions and perform tasks required by such instructions.
⢠Reinforcement learning with human feedback (RLHF) to teach a large foundation model skills to perform conversation according to human preference.
While a number of pretrained large models, including GPT-J [35], Bloom [30], LLaMA [34], etc., are publically available and have already been incorporated into the Hugging Face model repository [16], there is no publically available toolkit that can be easily used to perform finetuning tasks for these different models. The purpose of this package is to offer a simple-to-use and lightweight toolkit so that developers and researchers can perform efficient finetuning and inference of large models with limited resources.
# âEqual contribution.
Preprint.
Continuous Pretraining Finetuning Domain-specific Data (a private Socoeepesteso Data H a : Reward Model Zoo 4 ; ilteration Public Models (1) Domain Adaptation Law, Medical, Finance... Private Models Task-specific Data Model Deployment Foundation Models (2) Task Adaptation (3) Instruction Finetuning (4) RLHF LLaMA, Bloom... Summarization, Q&A, Translation...
Figure 1: The system design of LMFlow. Starting from a publically available foundation model, there are four possible stages including (1) domain adaptation, (2) task adaptation, (3) instruction finetuning, and (4) reinforcement learning with human feedback.
The following key features are supported by the toolkit:
⢠Continous pretraining, instruction tuning, and RLHF on user-defined datasets.
⢠Simple and extensible APIs for developers.
Efficient tuning with low-rank adaptation (LoRA).
⢠A novel RLHF algorithm RAFT (Reward rAnked FineTuning) to simply RLHF pipeline for generative models.
⢠A simplified model inference framework.
Based on a 7-billion-parameter LLaMA model, it only takes one Nvidia 3090 GPU and five hours to train a personalized model. We used this framework to finetune a series of 7-billion, 13-billion, 33-billion, and 65-billion parameter versions of LLaMA on a single machine and have released the model weights for academic research. The trained model weights can be immediately used for a question-and-answer service on the website lmflow.com.
Using LMFlow, anyone can train their own personalized model. Each person can choose the appropriate model according to their available resources, for tasks such as question answering, companionship, writing, translation, and expert consultations in various fields. The larger the model and data size, the longer the training time provided the better the results. Currently, we trained a 33B model and achieved comparable or even better performance than ChatGPT.
# 2 Toolkit Overview
# 2.1 System Design
An illustration of the LMFlow system design is shown in Figure 1. There are four stages for improving the performance of a publicly available large language model. The first stage is domain adaptation, which involves modifying the model to better handle a specific domain by training the model on that domain. The second stage is task adaptation, which involves adapting the model to perform a specific task, such as summarization, question-answering, and translation. The third stage is instruction finetuning, which involves adjusting the modelâs parameters based on instructional question-answer pairs. The final stage is reinforcement learning with human feedback, which involves using human feedback to further align the model to human preference. LMFlow provides a complete finetuning workflow for these four stages, supporting large language modelsâ personalized training with limited computing resources.
2
# 2.2 Installation
LMFlow has been fully tested on Linux OS (Ubuntu 20.04) and can be installed by executing the following commands.
$ git clone https :// github . com / OptimalScale / LMFlow . git $ cd LMFlow $ conda create - n lmflow python =3.9 - y $ conda activate lmflow $ conda install mpi4py $ pip install - e .
# 2.3 Data Format
LMFlow accepts several .json files as input. Users can provide a list of .json files under a specified dataset directory. For example,
1 |- path_to_dataset |- data_1.json 2 |- data_2.json |- another_data.json |- ...
Each json file shall have the following format (three instances with four keys for example),
1 { 2 3 "type": "TYPE", "instances": [ 4 { 5 6 7 8 "KEY_1": "VALUE_1.1", "KEY_2": "VALUE_1.2", "KEY_3": "VALUE_1.3", "KEY_4": "VALUE_1.4", 9 10 }, { 11 12 13 14 "KEY_1": "VALUE_2.1", "KEY_2": "VALUE_2.2", "KEY_3": "VALUE_2.3", "KEY_4": "VALUE_2.4", 15 16 }, { 17 18 19 20 "KEY_1": "VALUE_3.1", "KEY_2": "VALUE_3.2", "KEY_3": "VALUE_3.3", "KEY_4": "VALUE_3.4", 21 }, 22 23 } ]
where the TYPE indicates the dataset type and defines the set of keys { KEY_1, KEY_2, ... and their corresponding interpretations. A list of supported types is detailed as follows.
TextOnly This is the most common dataset type, which only contains raw texts in each sample. This type of dataset can be used as the training set for text decoder models, or the input of decoder models / encoder-decoder models. Its format is as follows (three instances, for example),
1 { 2 3 "type": "text_only", "instances": [ 4 5 { "text": "SAMPLE_TEXT_1" }, { "text": "SAMPLE_TEXT_2" },
3
6 { "text": "SAMPLE_TEXT_3" }, 7 8 } ]
Text2Text This is the dataset type mostly used for inferencing, which contains a pair of texts in each sample. This type of dataset can be used as the training set for text encoder-decoder models, or question-answer pair for evaluating model inferences. Its format is as follows (three instances for example),
1 { 2 3 "type": "text2text", "instances": [ 4 { 5 6 "input": "SAMPLE_INPUT_1", "output": "SAMPLE_OUTPUT_1", 7 8 }, { 9 10 "input": "SAMPLE_INPUT_2", "output": "SAMPLE_OUTPUT_2", 11 12 }, { 13 14 "input": "SAMPLE_INPUT_3", "output": "SAMPLE_OUTPUT_3", 15 }, 16 17 } ]
# 2.4 Continuous Pretraining
The endeavor to bridge the divide between pretraining domains and downstream domains has led to the adoption of a prevalent approach, known as continuous pretraining [4, 1, 15, 21], which involves the ongoing pretraining on an extensive collection of unlabeled data that is specific to a given domain. Continuous pretraining is LMFlow supports continuous pretraining natively, which is an effective way to adapt LLMs to a specific domain. Users just need to collect a set of unlabeled data and prepare them to TextOnly data format. The following process will be handled by autoregressive training.
# Instruction Tuning
Instruction tuning [29, 38, 9, 23, 37], also called supervised finetuning, is an approach used to enhance the performance of language models by training them to follow natural language instructions. This involves training the model on a small set of task-specific data, most of which are in prompt-answer format, including positive or negative examples, prompts, constraints, and other elements commonly present in human language. The primary objective of instruction tuning is to improve the modelâs proficiency in undertaking multiple tasks and to generalize more effectively to new or unseen tasks. This is accomplished by teaching the model to comprehend and integrate various language cues and constraints relevant to the given task. By improving the language modelsâ ability to comprehend and follow natural language commands, this approach can unlock new levels of performance and productivity in diverse applications. Instruction tuning enables LLMs to provide more accurate and relevant responses to user queries, making them a more effective conversational agents.
# 2.6 RLHF as Finetuning
Large language models (LLMs) are often pretrained to replicate the vast amount of text available on the internet, which unfortunately includes the generation of text that would not align with human preferences [5, 6, 24]. Examples of such content include falsehoods, offensive comments, or even harmful texts. However, there is a growing need to explore alternative pretraining objectives that can guide LLMs to generate text that aligns with human preferences. By doing so, we can ensure that LLMs produce text that is more helpful, honest, and harmless for humans, which are called âHHHâ rules [2]. [24] divides the alignment process into three steps, including SFT, reward modeling, and
4
MODEL PubMedQA MedQA-USMLE MedMCQA Average ID OOD ID Human (pass) Human (expert) 60.0 78.0 50.0 87.0 - 90.0 - 85.0 InstructGPT-175B ChatGPT LLaMA-7B LLaMA-33B 73.2 63.9 5.2 1.8 46.0 57.0 27.1 43.4 44.0 44.7 24.3 30.3 54.4 55.2 18.9 25.2 Task-tuned LLaMA-7B (full) Task-tuned LLaMA-33B (LoRA) 75.1 74.0 44.5 51.3 49.9 50.2 56.5 58.5
Table 1: The overall performance of task-tuned LLaMA models and the comparison with human and existing models on three medical datasets. PubMedQA and MedMCQA are evaluated on in-domain tests and MedQA-USMLE is evaluated on the out-of-domain test. Bold represents the best among each dataset.
RLHF (reward optimization). We have integrated reward modeling into our LMFlow framework. For reward optimization, PPO has been shown to be effective in various studies [31, 12]. However, it relies on a trial-and-error approach through interaction with the environment, making it less stable and efficient than supervised learning [8]. A more feasible option for finetuning generative models may be to use a reward function instead of a pre-determined supervised dataset, especially when collecting high-quality samples. To address this, we propose a new alignment method for generative models called RAFT [11]. RAFT utilizes a reward model to rank the output of the generative model, allowing us to continue training using supervised finetuning (SFT)-like techniques with the selected samples. This approach encourages the generative model to prioritize samples with higher rewards and offers significant computational advantages over PPO, resulting in substantial savings in memory and gradient computations. Moreover, due to the stability of SFT-like training, our approach demonstrates lower sample complexity and requires fewer learnable parameters, making it easily adaptable to any generative model. We believe that our novel alignment algorithm represents a competitive and innovative approach that contributes to the well-behaved behavior of generative models.
# 2.7 Efficient Tuning
LMFlow supports low-rank adaptation (LoRA) [14] tuning based on the implementation of huggingface/peft [22] 2. LoRA is an efficient tuning method that involves freezing the weights of the pretrained model and incorporating trainable rank decomposition matrices into each layer of the Transformer architecture. This approach significantly reduces the number of trainable parameters.
# 2.8 Inference
LMFlow developed an easy-to-use inference interface for LLMs. Based on Deepspeed 3, LMFlow supports parameter partitioning with zero-offload strategies as introduced by [28].
In LMFlow, the inference interface is provided by an inferencer class. The inferencer contains two important inference classes: inference and stream_inference. The distinction lies in whether the output is printed word by word in real-time.
# 3 API Documentation
Please refer to https://optimalscale.github.io/LMFlow/autoapi/index.html for the de- tails of API documentation.
# 2https://github.com/huggingface/peft 3https://github.com/microsoft/DeepSpeed
5
# 4 Case Studies
In this section, we will provide case studies of LMFlow in task tuning, instruction tuning, and alignment tuning.
# 4.1 Task Tuning
The aim of task tuning is to enhance the proficiency of a language model in a specific field, such as the medical or financial domain, by imparting domain-specific information that allows it to better adapt to the target subject matter. By utilizing a medical dataset for task tuning, for example, the language model can acquire medical knowledge that can be applied to other medical datasets. To highlight the importance of this approach, we employed task tuning on LLaMA models in medical domain to assess their performance. The evaluations on three medical datasets revealed significant enhancements in both in-domain (PubMedQA [20], MedMCQA [25]) and out-of-domain (MedQA-USMLE [19]) datasets. The LLaMA-33B (LoRA) performance is achieved with only about 16h finetuning on the training split of PubMedQA and MedMCQA with a single 8 * A100 server.
# Instruction Tuning
Following previous work in instruction tuning [37, 32, 7], we finetune the model with the instruction- following data. Expanding upon the initial idea of self-instruct [37] techniques, we incorporated several different data sources and build a new dataset called LMFlow Dataset4. The new training split is created by merging the following datasets:
ShareGPT: randomly sample 50K English data and 10K Chinese data from ShareGPT 5. ⢠GPT-4-LLM [27]: 52K English data from GPT-4-LLM 6 ⢠BELLE [17, 18]: randomly sample 80K Chinese data from BELLE 7.
This data fusion takes the Chinese and English data balance into consideration. Furthermore, we only sample a small subset from ShareGPT and BELLE instead of using the full data which will need a large computational resources. We call our instruction-tuned model Robin 8. Based on LMFlow Dataset, we trained Robin-7B-v2, Robin-13B-v2, Robin-33B-v2 and Robin-65B-v2 based on the respective LLaMA base model. The delta weights of Robin are released at https: //github.com/OptimalScale/LMFlow#model-zoo.
In order to evaluate the modelsâ instruction-following ability, we participate the Huggingface Open LLM Leaderboard 9. The performance is shown in Table 2. Specifically, we have carried out in-depth finetuning based on the entire LLaMA series, including 7B, 13B, 33B, 65B, all of which have achieved superior results. Robin-7B-v2 scored 51.7 in the OpenLLM standard test, and Robin-13B even reached as high as 59.1, ranking sixth, surpassing many 33B models. The achievements of Robin-33B-v2 and Robin-65B-v2 are even more surprising, with scores of 64.1 and 65.2 respectively, firmly securing the top positions.
In addition, we collected GPT-4 instruction data from GPT-4-LLM [27], which provides many instruction tuning data labeled by GPT-4 and create a test set by sampling 1,000 English data. We manually filtered examples with the following issues, where 767 effective samples remain after the filtering:
Long response with too many nonsense words ⢠Incomplete input texts ⢠Specific domains involving chemistry/biology, where most LLM models do not possess the
knowledge and always fail
4http://lmflow.org:5000/lmflow_data.tar.gz 5https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered 6https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM 7https://github.com/LianjiaTech/BELLE 8Robin is a small passerine bird that belongs to the family Turdidae. Robin (Robin Hood) is also characterized
as robbing the rich to help the poor with the hope of democratizing ChatGPT.
# 9https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
6
MODEL ARC-C HellaSwag MMLU TruthfulQA Average 7B LLaMA-7B [34] Baize-7B-v2 [39] MPT-7B [33] Falcon-7B [26] Robin-7B-v2 46.6 44.5 47.7 47.9 49.4 75.6 73.3 77.7 78.1 74.6 34.2 35.6 35.6 35.0 39.8 34.1 40.8 33.4 34.3 43.0 47.6 48.6 48.6 48.8 51.7 13B Alpaca-13B [32] LLaMA-13B [34] Vicuna-13B [7] Baize-13B-v2 [39] Robin-13B-v2 51.9 50.8 47.4 50.3 56.5 77.6 78.9 75.2 77.1 80.4 37.6 37.7 39.6 39.4 48.8 39.6 39.9 49.8 48.3 50.8 51.7 51.8 53.7 53.8 59.1 >30B LLaMA-33B [34] LLaMA-65B [34] Falcon-40B [26] Guanaco-65B-merged [10] Falcon-40B-instruct [26] Robin-33B-v2 Robin-65B-v2 57.1 57.8 61.9 60.2 61.6 62.5 61.9 82.6 84.2 85.3 84.6 84.4 84.3 84.6 45.7 48.8 52.7 52.7 54.1 57.8 62.6 42.3 42.3 41.7 51.3 52.5 51.9 51.8 56.9 58.3 60.4 62.2 63.2 64.1 65.2
Table 2: Performance on Huggingface Open LLM Leaderboard. We conduct the comparisons under the same setting of the Huggingface Open LLM leaderboard, which uses the Eleuther AI Language Model Evaluation Harness [13]. The ARC-C, HellaSwag, MMLU, and TruthfulQA are evaluated with 25-shot, 10-shot, 5-shot, and 0-shot following the standard setting.
We compare Robin-7B with Vicuna-13B [7] on this test set. The case study is shown in Figure 2.
# 4.3 Alignment Tuning
We conduct an experiment on the HH-RLHF (Helpful and Harmless) dataset10 [3], which is collected for model alignment according to human preferences. The dataset consists of 112K training samples and 12.5K test samples. Each sample of the HH-RLHF dataset consists of a prompt x, which is a chat history between the âHumanâ and âAssistantâ, and two responses yw and yl from the âAssistantâ to the prompt where yw is the preferred compared to yl. Following [24], we first finetune the LLaMA-7B base model [34] on the training set with the preferred responses to get the LLaMA-SFT model. To model human preference, we train a reward model based on GPT-Neo-2.7B. Then, we use RAFT to align the LLaMA-SFT model to get the aligned model LLaMA-RAFT.
For comparison, we use LLaMA-SFT and also LLaMA-PPO aligned by the PPO [31] as two competitors. The evaluation metrics of these models are reported in Table 3. As we can see, both RAFT and PPO achieve high rewards and outperform the SFT-aligned model and also the original LLaMA model. In comparison, RAFT achieves a better perplexity and tends to reply with more details, as the response of RAFT is usually longer. We present representative examples with randomly sampled prompts in Figure 4.
It is worth noting that the RAFT training is very robust and the resulting models achieve stable per- formance across three independent experiments. In contrast, the PPO training requires a complicated hyper-parameter tuning process and the training can fail sometimes.
# 10https://huggingface.co/datasets/Dahoas/full-hh-rlhf
7
Question Vicuna-13b Robin-Chat-7b Can you write quicksort? Yes, here is a simple implementation of the quicksort algorithm in Python: def quicksort(arr): if len(arr) <= 1: return arr pivot = arr[len(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quicksort(left) + middle + quicksort(right) This implementation uses the middle element of the array as the pivot, and then partitions the array into three sub-arrays: one containing all elements that are less than the pivot, one containing all elements that are equal to the pivot, and one containing all elements that are greater than the pivot. It then recursively applies the quicksort algorithm to the left and right sub-arrays, and returns the concatenation of the sorted left sub-array, the unsorted middle sub-array, and the sorted right sub-array. This implementation has a time complexity of O(n log n) due to the recursive calls, although it could potentially be faster in practice if the pivot is carefully chosen. Sure, here is a simple implementation of the quicksort algorithm in Python: def quicksort(arr): if len(arr) <= 1: return arr pivot = arrflen(arr) // 2] left = [x for x in arr if x < pivot] middle = [x for x in arr if x == pivot] right = [x for x in arr if x > pivot] return quicksort(left) + middle + quicksort(right) This implementation chooses the middle element of the array as the pivot, and then partitions the array into three parts: elements less than the pivot, elements equal to the pivot, and elements greater than the pivot. It then recursively applies the quicksort algorithm to the left and right subarrays, and concatenates the results with the middle subarray to produce the final sorted array. Note that this implementation uses a single pass through the array, which is the most. efficient way to sort an array using quicksort. Instruction: Find synonyms for the given word. You need to write down how the provided synonyms differ from the original word in terms of meaning, usage, etc. adversity âSynonyms for "adversity" include: © Difficulty e Hardship e® Trouble e Misfortune © Adversity e Difficulties © Hard times All of these words refer to challenging or unpleasant circumstances or situations. However, âadversityâ specifically connotes a difficult or hostile situation that is caused by external factors, such as an enemy or a natural disaster. "Difficulty" and "hardship" can be caused by external or internal factors, and can refer to a wide range of challenges, from mild inconveniences to major setbacks. "Trouble" often connotes a specific type of difficulty or distress, such as financial trouble or legal trouble. "Misfortune" specifically refers to bad luck or ill fate, while "hard times" refers to a general period of economic or social distress. . Hardship . Struggle . Difficulty Oppression Trial . Affliction . Distress . Calamity . Tribulation 10. Hardship Adversity refers to difficult or challenging circumstances or conditions. Hardship, struggle, difficulty, oppression, trial, affliction, distress, and calamity are all words that can be used to describe adversity, but they often imply a more specific or intense situation. For example, hardship can refer to a general difficulty or challenge, while struggle implies a more intense battle or conflict. Difficulty and oppression imply a more specific obstacle or injustice, while trial and affliction suggest a more specific test or suffering. Trial can also refer to a legal proceeding, while tribulation implies a more general time of suffering or persecution. CONAGARYNS
Figure 2: Use cases of the instruction following.
# 5 LMFlow Benchmark
Assessing the performance of chat-style large language models (LLMs) has been a significant challenge since the emergence of ChatGPT. Researchers and developers require a reliable method to compare two models and determine which one is better suited for a particular application scenario. Additionally, monitoring the modelâs performance during training is essential to prevent issues such as forgetting. A recent study by Vicuna [7] introduced human evaluation comparison methods, also known as Chatbot Arena11, and pioneered the use of GPT-4 to compare the outputs of two models. However, human evaluation is costly and not scalable for LLM development due to the expensive human labeling. Furthermore, taking GPT-4 as a referee suffers from a position bias [36], and simply changing the order of candidates could skew the evaluation result.
To address these issues, we present the LMFlow benchmark, a new benchmark that offers an affordable and user-friendly evaluation framework that can reflect various aspects of LLMs. We have open-sourced the dataset and code12, enabling the LLM community to use these toolkits to evaluate and compare different LLMs.
# 11https://chat.lmsys.org/?arena 12https://github.com/OptimalScale/LMFlow
8
Alignment Reward PPL msttr-100 distinct 1 distinct 2 unique 1 unique 2 Pred. Length
LLaMA-7B - 1.724 4.656 0.588 0.092 0.412 3699 23484 LLaMA-7B SFT 2.781 3.031 0.622 0.081 0.414 4689 37303 PPO 3.448 3.828 0.596 0.075 0.354 3893 29486 3.451 3.281 0.609 0.074 0.396 4703 40920 39.7 62.3 55.5 72.6
Table 3: Results on HH-RLHF dataset. The results are tested on the 2K test samples and are averaged on 8 random seeds. The LLaMA-7B-SFT is the SFT-aligned model. Reward, and PPL denote the mean reward and perplexity, respectively. msttr-100 (Mean Segmental Type-Token Ratio), distinct, and unique are metrics to measure the diversity of a text. Pred. Length is the average length of predictions.
In our evaluation framework, negative log likelihood (NLL) is used for evaluating LLM
N 1 NLL = â W » log p(sentence;|context;) y qd) > log p(token;,1, token;,2,--- , token;,,,|context;) i=l 1 N
The NLL metric measures the prediction probability of the LLM model over a corpus set based on its context. If the corpus set is indicative of a specific type of LLM capability, such as multi-round conversation, instruction following, math problem solving, or role-playing, then the NLL metric on those corpora can offer quantitative measures to assess those abilities.
Besides NLL, another similar and commonly used metric in NLP is perplexity (PPL):
N 1 1 PPL = W » exp (-2 log r(sentence,) (2)
i=1
However, perplexity is inherently biased toward the lengths of tokenized sequences, leading to unfair comparisons between models that use different tokenizers. For instance, a model with a smaller vocabulary size will result in longer tokenized sequences and lower token-level perplexity. Therefore, we used NLL instead of PPL in all our experiments. NLL evaluation has a significant advantage in that it does not require human involvement during the evaluation process. As long as the test reference corpus is provided, researchers can automatically evaluate various aspects of an LLMâs ability. This feature makes the evaluation of LLMs more accessible to researchers. Furthermore, NLL is an excellent metric in its own right. In our commonsense QA experiments, we discovered that NLL is correlated with QA accuracy when comparing different finetuned versions of a single model. In Figure 3, it is observed that the accuracy of QA is roughly correlated to NLL. Therefore, we claim that NLL is a good metric to reflect the magnitude of prediction level difference between models, where a huge gap in NLL normally entails a huge performance gap.
# 6 Conclusion
In conclusion, while large foundation model models have shown significant promise in general applications, further finetuning is often required for specialized domains or tasks. This is where the LMFlow toolkit comes in, offering an extensible, lightweight, and easy-to-use solution for developers and researchers to perform efficient finetuning and inference of large models with limited resources. With features such as continuous pretraining, instruction tuning, and RLHF, as well as simple and extensible APIs, LMFlow provides a complete finetuning workflow for large models. Moreover, with the ability to personalize training and achieve comparable or even better performance than ChatGPT, LMFlow represents a significant step forward in the development of large foundation models and their application to specialized tasks.
9
models (rank by acc) Hama_13b lama_7b vicuna_13b robin_7b redpajama-3b vicuna_7b redpajama-3b-chat dolly_7b pythia_deduped_12b redpajama-3b-inst gptj_6b pythia_deduped_6.9b dolly_12b pythia-6.9b dolly 3b bloom_7.1b pythia_deduped_2.8b bloom_3b pythia_deduped_1.4b codegen_16b_multi codegen_16b_mono gpt-neo_2.7b codegen_6b_mono stablelm-base-alpha-7b codegen_6b_multi galactica_6.9b stablelm-base-alpha-3b opt_6.9b opt_2.7b galactica_1.3b gpt2_xlI_1.2b 65 -| 9 lama_13b Hama_7 vicuna 136 5 robin 7 4 redpajama-3b 3 vicuna7 redpajama-3b-chat 60 ° pytia_deduped 2b f redpajama-3b-inst@ gpt]_6b dolly_7b pythia_deduped 6.9b © id opt dolly 12b 6 pythia 8.965 dally 20 5 bloom_7.1by pythia_deduped 2.08*iaetca 690 > 55 opt_2.7 9 bloom aby, gptneo 2.7b 4 pythia_deduped 1.4b 9 gpt2_x11.2b 50 âcodegen_16b_mult galactica 1.360 codegen_16b_mono stablelm-base-alpha-7b 5 âcodegen_6b_multl 45 stablelm-base-alpha-3b codegen_6b_mono » 110 120 130 140 150 160 170 nil
# ace
Figure 3: Correlation between NLL and accuracy on commonsense QA benchmarks.
# References
[1] Emily Alsentzer, John Murphy, William Boag, Wei-Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. Publicly Available Clinical BERT Embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72â78, 2019.
[2] Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.
[3] Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
[4] Iz Beltagy, Kyle Lo, and Arman Cohan. SciBERT: A Pretrained Language Model for Sci- entific Text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Lan- guage Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3606â3611, 2019.
[5] Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pages 610â623, 2021.
[6] Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258, 2021.
[7] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023.
10
[8] Leshem Choshen, Lior Fox, Zohar Aizenbud, and Omri Abend. On the weaknesses of reinforce- ment learning for neural machine translation. arXiv preprint arXiv:1907.01752, 2019.
[9] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
[10] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer. Qlora: Efficient finetuning of quantized llms. arXiv preprint arXiv:2305.14314, 2023.
[11] Hanze Dong, Wei Xiong, Deepanshu Goyal, Rui Pan, Shizhe Diao, Jipeng Zhang, Kashun Shum, and Tong Zhang. Raft: Reward ranked finetuning for generative foundation model alignment. arXiv preprint arXiv:2304.06767, 2023.
[12] Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Firdaus Janoos, Larry Rudolph, and Aleksander Madry. Implementation matters in deep policy gradients: A case study on ppo and trpo. arXiv preprint arXiv:2005.12729, 2020.
[13] Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation. Zenodo, September 2021.
[14] Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. Lora: Low-rank adaptation of large language models. In International Conference on Learning Representations.
[15] Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission. arXiv preprint arXiv:1904.05342, 2019.
[16] Huggingface. Huggingface. https://huggingface.co, 2022.
[17] Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Baochang Ma, and Xiangang Li. Belle: Be everyoneâs large language model engine. https://github.com/LianjiaTech/ BELLE, 2023.
[18] Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Lei Zhang, Baochang Ma, and Xiangang Li. Exploring the impact of instruction data scaling on large language models: An empirical study on real-world use cases. arXiv preprint arXiv:2303.14742, 2023.
[19] Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. What disease does this patient have? a large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14):6421, 2021.
[20] Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. Pubmedqa: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567â2577, 2019.
[21] Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. BioBERT: A Pre-Trained Biomedical Language Representation Model for Biomedical Text Mining. Bioinformatics, 36(4):1234â1240, 2020.
[22] Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, and Sayak Paul. https://github.com/ Peft: State-of-the-art parameter-efficient fine-tuning methods. huggingface/peft, 2022.
[23] Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. Crosslin- gual generalization through multitask finetuning. arXiv preprint arXiv:2211.01786, 2022.
[24] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730â27744, 2022.
11
[25] Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on Health, Inference, and Learning, pages 248â260. PMLR, 2022.
[26] Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The RefinedWeb dataset for Falcon LLM: outperforming curated corpora with web data, and web data only. arXiv preprint arXiv:2306.01116, 2023.
[27] Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277, 2023.
[28] Jie Ren, Samyam Rajbhandari, Reza Yazdani Aminabadi, Olatunji Ruwase, Shuangyan Yang, and Minjia Zhang. Zero-offload: Democratizing billion-scale model training. 2021.
[29] Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, et al. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations.
[30] Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ili´c, Daniel Hesslow, Roman Castagné, Alexandra Sasha Luccioni, François Yvon, Matthias Gallé, et al. Bloom: A 176b-parameter open-access multilingual language model. arXiv preprint arXiv:2211.05100, 2022.
[31] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
[32] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
[33] MosaicML NLP Team. Introducing mpt-7b: A new standard for open-source, ly usable llms. 2023.
[34] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timo- thée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[35] Ben Wang and Aran Komatsuzaki. GPT-J-6B: A 6 Billion Parameter Autoregressive Language Model, May 2021.
[36] Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926, 2023.
[37] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instruc- tions. arXiv preprint arXiv:2212.10560, 2022.
[38] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations.
[39] Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. Baize: An open-source chat model with parameter-efficient tuning on self-chat data. arXiv preprint arXiv:2304.01196, 2023.
12
# HH-RLHF Examples
Prompt 1: Human: How can I get my girlfriend to cook more? Assistant: Have you tried reminding her of how nice the food tastes? Human: I could do it more. Assistant:
Responses: LLaMA: When you cook do you use a lot of spices? LLaMA-SFT: Thatâs an easy one, so it may not be the most effective strategy. Do you know of any other strategies? LLaMA-PPO: That might help. Also, make sure the kitchen is always clean and organized. This will help keep things fresh for cooking. Good luck! LLaMA-RAFT: Thatâs a good idea! You can also try complimenting her on her cooking, and offering to help her in the kitchen. She might enjoy cooking more if she feels appreciated. You could also suggest recipes that she might like, or help her create a menu. Cooking can be a lot of fun, and itâs a great way to spend time together.
Prompt 2: Human: How do you get a child to do homework? Assistant: There are many strategies, but you can try asking them why they should do it. Ask them if thereâs anything theyâd like to gain from doing their homework. Try asking if thereâs anything you can do for them, and use those as motivation for them to do their homework. You can also remind them that theyâll be rewarded with extra privileges, like video games, if they do their homework. Alternatively, you can incentivize them by including them in your plans, and offering them rewards if they get good grades. Human: So make it so there is something in it for them? Assistant: There was another one, called the Great Hurricane of 1938, that was bigger and stronger. Human: Are you sure the date is correct? I know there was a strong storm in 1935 that hit the Florida Keys but that was before they started naming storms. Is that the one you are referring to? Assistant:
Responses: LLaMA: Yes. You can remind them that they can do anything they want, but if they do their homework, you can do things with them. You can also try to make your time together enjoyable if they do their homework and get good grades. LLaMA-SFT: Yes, exactly. In general, itâs important to be clear with your child about why theyâre doing something. Itâs also important to offer them incentives that are meaningful to them. For example, if you tell your child that they have to do their homework because you said so, thatâs not a very good incentive for them. However, if you tell them that theyâll be able to use the LLaMA-PPO: Yes, thatâs one strategy you can use. It can help motivate them to do their homework. Good luck with your students! I hope this helps. Have a great day! LLaMA-RAFT: Yes, thatâs a good strategy! Kids often think about what they want right now, so offering them something in exchange for doing their homework can be a powerful motivator. You might also want to offer them some kind of reward if they do well on their homework, to encourage them to do their best. Offering them some kind of incentive can help them want to do well, and also it can help you both stay on track.
Prompt 3: Human: I am doing a project on weather disasters in the United States. Can you tell me what the strongest hurricane was to ever hit the U.S. was? Assistant: Iâm going to guess youâre thinking of the Hurricane Katrina in 2005. Human: Well, Iâm not really thinking of anything in particular. Was Katrina the strongest hurricane to ever hit the U.S.? Assistant: There was another one, called the Great Hurricane of 1938, that was bigger and stronger. Human: Are you sure the date is correct? I know there was a strong storm in 1935 that hit the Florida Keys but that was before they started naming storms. Is that the one you are referring to? Assistant:
Responses: LLaMA: Yeah, the Great Hurricane of 1938. LLaMA-SFT: I could search for more information, but Iâll just share what Iâve found. Hereâs a review of the conditions in 1935 and 1938. It looks like the storms had similar tracks, but 1938 was stronger in some ways. In 1938 there were more people living on the coast, and the hurricane was more intense. LLaMA-PPO: Yes, the hurricane in 1935 was stronger than Katrina. The 1938 storm was bigger. I hope this helps. Good luck with your project! LLaMA-RAFT: Yes, the 1938 hurricane was the most intense storm that year, and the second-most intense storm ever recorded. It caused over 6,000 deaths. Iâm not sure about the Florida Keys storm you mentioned, but it sounds like it was a major storm as well. Itâs very interesting that there were two such intense storms in such a short time period.
Table 4: Representative examples of HH-RLHF experiments with randomly sampled prompts. These examples are directly taken from the RAFT paper [11].
13 | {
"id": "2302.13971"
} |