Chunk ID | Chunk | Source
---|---|---
Evaluating Abstractive Summarization (Chunk 3) | The table shows the ROUGE scores for evaluating two different summaries against a reference text. For rouge-1, Summary 2 outperforms Summary 1, indicating better overlap of individual words; for rouge-l, Summary 2 also has a higher score, implying a closer match in the longest common subsequences and thus a potentially better summary of the main content and ordering of the original text. Since Summary 2 has many words and short phrases lifted directly from the excerpt, its overlap with the reference summary is likely to be higher, leading to higher ROUGE scores. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
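For reference, the sketch below shows how such scores can be computed with the `rouge` Python package; the package choice and the texts are assumptions for illustration, not the cookbook's exact code or data.

```python
# Hedged sketch: compute ROUGE-1 and ROUGE-L for two candidate summaries
# against one reference (texts are placeholders, not the cookbook's data).
from rouge import Rouge

reference = "OpenAI trains large language models on diverse internet text."
summary_1 = "Large language models are trained by OpenAI on varied web text."
summary_2 = "OpenAI trains large language models on diverse internet text sources."

rouge = Rouge()
for name, candidate in [("Summary 1", summary_1), ("Summary 2", summary_2)]:
    scores = rouge.get_scores(candidate, reference)[0]
    print(f"{name}: ROUGE-1 F1={scores['rouge-1']['f']:.2f}, "
          f"ROUGE-L F1={scores['rouge-l']['f']:.2f}")
```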
Evaluating Abstractive Summarization (Chunk 4) | While ROUGE and similar metrics, such as BLEU and METEOR, offer quantitative measures, they often fail to capture the true essence of a well-generated summary, and they tend to correlate poorly with human judgments. Given the advancements in LLMs, which are adept at producing fluent and coherent summaries, traditional metrics like ROUGE may inadvertently penalize these models. This is especially true if the summaries are articulated differently but still encapsulate the core information accurately. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization (Chunk 5) | Evaluating using BERTScore
ROUGE relies on the exact presence of words in both the predicted and reference texts, failing to interpret the underlying semantics. This is where BERTScore comes in: it leverages contextual embeddings from the BERT model to evaluate the similarity between a predicted and a reference sentence in the context of machine-generated text. By comparing embeddings from both sentences, BERTScore captures semantic similarities that might be missed by traditional n-gram based metrics. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
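A minimal sketch of computing BERTScore with the `bert-score` package is shown below; the candidate and reference texts are placeholders, not the cookbook's data.

```python
# Hedged sketch using the bert-score package: score a candidate summary
# against a reference with contextual embeddings (texts are placeholders).
from bert_score import BERTScorer

scorer = BERTScorer(lang="en")
precision, recall, f1 = scorer.score(
    ["Large language models are trained by OpenAI on varied web text."],  # candidate
    ["OpenAI trains large language models on diverse internet text."],    # reference
)
print(f"BERTScore F1: {f1.mean().item():.3f}")
```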
Evaluating Abstractive Summarization (Chunk 6) | The close F1 Scores between the summaries indicate that they may perform similarly in capturing the key information. However, this small difference should be interpreted with caution. Since BERTScore may not fully grasp subtleties and high-level concepts that a human evaluator might understand, reliance solely on this metric could lead to misinterpreting the actual quality and nuances of the summary. An integrated approach combining BERTScore with human judgment and other metrics could offer a more reliable evaluation. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization (Chunk 7) | Evaluating using GPT-4
Here we implement an example reference-free text evaluator using gpt-4, inspired by the G-Eval framework which evaluates the quality of generated text using large language models. Unlike metrics like ROUGE or BERTScore that rely on comparison to reference summaries, the gpt-4 based evaluator assesses the quality of generated content based solely on the input prompt and text, without any ground truth references. This makes it applicable to new datasets and tasks where human references are sparse or unavailable. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization (Chunk 8) | In this demonstration, we're using a direct scoring function where gpt-4 generates a discrete score (1-5) for each metric. Normalizing the scores and taking a weighted sum could result in more robust, continuous scores that better reflect the quality and diversity of the summaries. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
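For instance, one simple way to turn the discrete 1-5 scores into a single continuous score is sketched below; the metric names match those used in the evaluation (Coherence, Consistency, Fluency, Relevance), while the example scores and weights are arbitrary assumptions.

```python
# Hypothetical sketch: normalize each 1-5 metric score to [0, 1] and combine
# them with assumed weights into one continuous quality score.
raw_scores = {"coherence": 4, "consistency": 5, "fluency": 3, "relevance": 4}
weights = {"coherence": 0.3, "consistency": 0.3, "fluency": 0.1, "relevance": 0.3}

normalized = {metric: (score - 1) / 4 for metric, score in raw_scores.items()}
overall = sum(normalized[metric] * weights[metric] for metric in normalized)
print(f"Weighted overall score: {overall:.2f}")  # 0.80 for the scores above
```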
Evaluating Abstractive Summarization - Part 1 | Overall, Summary 1 appears to outperform Summary 2 in three of the four categories (Coherence, Relevance, and Fluency), while the two summaries receive the same Consistency score. The result might suggest that Summary 1 is generally preferable based on the given evaluation criteria. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization - Part 2 | Limitations
Note that LLM-based metrics could have a bias towards preferring LLM-generated texts over human-written texts. Additionally, LLM-based metrics are sensitive to system messages/prompts. We recommend experimenting with other techniques that can help improve performance and/or get consistent scores, striking the right balance between high-quality expensive evaluation and automated evaluations. It is also worth noting that this scoring methodology is currently limited by gpt-4's context window. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Evaluating Abstractive Summarization - Part 3 | Conclusion
Evaluating abstractive summarization remains an open area for further improvement. Traditional metrics like ROUGE, BLEU, and BERTScore provide useful automatic evaluation but have limitations in capturing semantic similarity and nuanced aspects of summarization quality. Moreover, they require reference outputs which can be expensive to collect/label. LLM-based metrics offer promise as a reference-free method of evaluating coherence, fluency, and relevance. However, they too have potential biases favoring text generated by LLMs. Ultimately, a combination of automatic metrics and human evaluation is ideal for reliably assessing abstractive summarization systems. While human evaluation is indispensable for gaining a comprehensive understanding of summary quality, it should be complemented with automated evaluation to enable efficient, large-scale testing. The field will continue to evolve more robust evaluation techniques, balancing quality, scalability, and fairness. Advancing evaluation methods is crucial for driving progress in production applications. | https://cookbook.openai.com/examples/evaluation/how_to_eval_abstractive_summarization |
Question answering using a search API and re-ranking | Searching for relevant information can sometimes feel like looking for a needle in a haystack, but don’t despair, GPTs can actually do a lot of this work for us. In this guide we explore a way to augment existing search systems with various AI techniques, helping us sift through the noise. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Mimicking Human Browsing | Two ways of retrieving information for GPT are:
Mimicking Human Browsing: GPT triggers a search, evaluates the results, and modifies the search query if necessary. It can also follow up on specific search results to form a chain of thought, much like a human user would do. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Retrieval with Embeddings | Retrieval with Embeddings: Calculate embeddings for your content and a user query, and then retrieve the content most related as measured by cosine similarity. This technique is used heavily by search engines like Google. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
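The sketch below illustrates this idea with toy vectors; a real system would use an embeddings model to produce the vectors, and the documents and numbers here are placeholders.

```python
# Toy sketch of embedding-based retrieval: rank documents by cosine
# similarity between a query vector and document vectors (vectors are fake).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_embedding = np.array([0.1, 0.3, 0.5])
document_embeddings = {
    "doc_a": np.array([0.1, 0.2, 0.6]),
    "doc_b": np.array([0.9, 0.1, 0.0]),
}

ranked = sorted(
    document_embeddings.items(),
    key=lambda item: cosine_similarity(query_embedding, item[1]),
    reverse=True,
)
print("Most related document:", ranked[0][0])
```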
Combining Approaches | By combining these approaches, and drawing inspiration from re-ranking methods, we identify an approach that sits in the middle. This approach can be implemented on top of any existing search system, like the Slack search API, or an internal ElasticSearch instance with private data. Here’s how it works: | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Step 1: Search | Step 1: Search
User asks a question.
GPT generates a list of potential queries.
Search queries are executed in parallel. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
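A hedged sketch of this step is shown below, fanning the generated queries out to the News API with a thread pool; the endpoint, parameters, and example queries are assumptions based on the public News API documentation, and any search backend could be substituted.

```python
# Hedged sketch of Step 1: run several GPT-generated queries against the
# News API in parallel. Endpoint and parameters are assumptions from the
# public News API docs; adapt this to your own search system.
import os
from concurrent.futures import ThreadPoolExecutor
import requests

def search_news(query: str) -> dict:
    response = requests.get(
        "https://newsapi.org/v2/everything",
        params={"q": query, "apiKey": os.environ["NEWS_API_KEY"], "pageSize": 50},
    )
    return response.json()

queries = ["NBA championship winner", "NBA finals MVP", "NBA finals last game"]
with ThreadPoolExecutor() as executor:
    results = list(executor.map(search_news, queries))

print(sum(len(r.get("articles", [])) for r in results), "articles retrieved")
```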
Step 2: Re-rank | Step 2: Re-rank
Embeddings for each result are used to calculate semantic similarity to a generated hypothetical ideal answer to the user question.
Results are ranked and filtered based on this similarity metric. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Step 3: Answer | Step 3: Answer
Given the top search results, the model generates an answer to the user’s question, including references and links.
This hybrid approach offers relatively low latency and can be integrated into any existing search endpoint, without requiring the upkeep of a vector database. Let's dive into it! We will use the News API as an example domain to search over. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Setup | Setup
In addition to your OPENAI_API_KEY, you'll have to include a NEWS_API_KEY in your environment. You can get an API key here. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
User Asks a Question | User asks a question. GPT generates a list of potential queries. Search queries are executed in parallel. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Re-rank | Re-rank
Drawing inspiration from HyDE (Gao et al.), we first generate a hypothetical ideal answer to compare our results against when re-ranking. This helps prioritize results that look like good answers, rather than those similar to our question. Here’s the prompt we use to generate our hypothetical answer. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Generate a Hypothetical Answer | Generate a hypothetical answer to the user's question. This answer will be used to rank search results. Pretend you have all the information you need to answer, but don't use any actual facts. Instead, use placeholders like NAME did something, or NAME said something at PLACE. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Calculating Cosine Similarity | Now, let's generate embeddings for the search results and the hypothetical answer. We then calculate the cosine distance between these embeddings, giving us a semantic similarity metric. Note that we can simply calculate the dot product in lieu of doing a full cosine similarity calculation since the OpenAI embeddings are returned normalized in our API. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
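As a toy illustration of that shortcut: for unit-length vectors, the dot product already equals cosine similarity. The vectors below are placeholders, not real embeddings.

```python
# Toy illustration of the shortcut mentioned above: for unit-norm vectors,
# a plain dot product gives the same values as full cosine similarity.
import numpy as np

hypothetical_answer_embedding = np.array([0.6, 0.8])        # unit norm
article_embeddings = np.array([[0.8, 0.6], [0.0, 1.0]])     # each row unit norm

cosine_similarities = article_embeddings @ hypothetical_answer_embedding
print(cosine_similarities)  # [0.96 0.8 ]
```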
Re-rank Results | Finally, we use these similarity scores to sort and filter the results. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Top 5 Articles | Print top 5 articles | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Display Top Results | These results look a lot more relevant to our original query. Now, let's use the top 5 results to generate a final answer. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Generate a Final Answer | Generate an answer to the user's question based on the given search results. TOP_RESULTS: [{'title': 'Article Title 1', 'description': 'Article Description 1', 'url': 'https://example.com/article1'}, ...] USER_QUESTION: Who won the NBA championship? And who was the MVP? Tell me a bit about the last game. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Question answering using a search API and re-ranking | Now, in order to be as exhaustive as possible, we use the model to generate a list of diverse queries based on this question.

QUERIES_INPUT = f"""
You have access to a search API that returns recent news articles. Generate an array of search queries that are relevant to this question. Use a variation of related keywords for the queries, trying to be as general as possible. Include as many queries as you can think of, including and excluding terms. For example, include queries like ['keyword_1 keyword_2', 'keyword_1', 'keyword_2']. Be creative. The more queries you include, the more likely you are to find relevant results.

User question: {USER_QUESTION}

Format: {"queries": ["query_1", "query_2", "query_3"]}
"""

queries = json_gpt(QUERIES_INPUT)["queries"]

# Let's include the original question as well for good measure
queries.append(USER_QUESTION)

queries

The queries look good, so let's run the searches. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
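The json_gpt helper called above is not defined anywhere in this excerpt. Below is a plausible sketch, assuming it simply asks the chat model for JSON-only output and parses the reply; the GPT_MODEL value is an assumption standing in for whatever the notebook configures during setup.

```python
# Plausible sketch of the json_gpt helper referenced above (not the cookbook's
# exact code): ask the chat model for JSON-only output and parse the reply.
import json
import openai

GPT_MODEL = "gpt-3.5-turbo"  # assumption; the notebook sets its model earlier

def json_gpt(input: str) -> dict:
    completion = openai.ChatCompletion.create(
        model=GPT_MODEL,
        messages=[
            {"role": "system", "content": "Output only valid JSON"},
            {"role": "user", "content": input},
        ],
        temperature=0.5,
    )
    return json.loads(completion.choices[0].message.content)
```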
Re-rank | As we can see, oftentimes the search queries will return a large number of results, many of which are not relevant to the original question asked by the user. In order to improve the quality of the final answer, we use embeddings to re-rank and filter the results.

2. Re-rank

Drawing inspiration from HyDE (Gao et al.), we first generate a hypothetical ideal answer to compare our results against when re-ranking. This helps prioritize results that look like good answers, rather than those similar to our question. Here’s the prompt we use to generate our hypothetical answer.

HA_INPUT = f"""
Generate a hypothetical answer to the user's question. This answer will be used to rank search results. Pretend you have all the information you need to answer, but don't use any actual facts. Instead, use placeholders like NAME did something, or NAME said something at PLACE.

User question: {USER_QUESTION}

Format: {"hypotheticalAnswer": "hypothetical answer text"}
"""

hypothetical_answer = json_gpt(HA_INPUT)["hypotheticalAnswer"]

hypothetical_answer

Now, let's generate embeddings for the search results and the hypothetical answer. We then calculate the cosine distance between these embeddings, giving us a semantic similarity metric. Note that we can simply calculate the dot product in lieu of doing a full cosine similarity calculation, since the OpenAI embeddings are returned normalized in our API. | https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Answer | Finally, we use these similarity scores to sort and filter the results.

scored_articles = zip(articles, cosine_similarities)

# Sort articles by cosine similarity
sorted_articles = sorted(scored_articles, key=lambda x: x[1], reverse=True)

# Print top 5 articles
print("Top 5 articles:", "\n")
for article, score in sorted_articles[0:5]:
    print("Title:", article["title"])
    print("Description:", article["description"])
    print("Content:", article["content"][0:100] + "...")
    print("Score:", score)
    print()

Awesome! These results look a lot more relevant to our original query. Now, let's use the top 5 results to generate a final answer.

3. Answer

formatted_top_results = [
    {
        "title": article["title"],
        "description": article["description"],
        "url": article["url"],
    }
    for article, _score in sorted_articles[0:5]
]

ANSWER_INPUT = f"""
Generate an answer to the user's question based on the given search results.

TOP_RESULTS: {formatted_top_results}
USER_QUESTION: {USER_QUESTION}

Include as much information as possible in the answer. Reference the relevant search result urls as markdown links."""

completion = openai.ChatCompletion.create(
    model=GPT_MODEL,
    messages=[{"role": "user", "content": ANSWER_INPUT}],
    temperature=0.5,
    stream=True,
)

text = ""
for chunk in completion:
    text += chunk.choices[0].delta.get("content", "")
    display.clear_output(wait=True)
    display.display(display.Markdown(text))
| https://cookbook.openai.com/examples/question_answering_using_a_search_api |
Related resources - Part 1 | People are writing great tools and papers for improving outputs from GPT. Here are some cool ones we've seen: | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 1 | Guidance: A handy looking Python library from Microsoft that uses Handlebars templating to interleave generation, prompting, and logical control. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 2 | LangChain: A popular Python/JavaScript library for chaining sequences of language model prompts. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 3 | FLAML (A Fast Library for Automated Machine Learning & Tuning): A Python library for automating selection of models, hyperparameters, and other tunable choices. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 4 | Chainlit: A Python library for making chatbot interfaces. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 5 | Guardrails.ai: A Python library for validating outputs and retrying failures. Still in alpha, so expect sharp edges and bugs. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 6 | Semantic Kernel: A Python/C#/Java library from Microsoft that supports prompt templating, function chaining, vectorized memory, and intelligent planning. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 7 | Prompttools: Open-source Python tools for testing and evaluating models, vector DBs, and prompts. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 8 | Outlines: A Python library that provides a domain-specific language to simplify prompting and constrain generation. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 9 | Promptify: A small Python library for using language models to perform NLP tasks. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 10 | Scale Spellbook: A paid product for building, comparing, and shipping language model apps. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 11 | PromptPerfect: A paid product for testing and improving prompts. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 12 | Weights & Biases: A paid product for tracking model training and prompt engineering experiments. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 13 | OpenAI Evals: An open-source library for evaluating task performance of language models and prompts. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 14 | LlamaIndex: A Python library for augmenting LLM apps with data. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 15 | Arthur Shield: A paid product for detecting toxicity, hallucination, prompt injection, etc. | https://cookbook.openai.com/related_resources |
Prompting libraries & tools - Part 16 | LMQL: A programming language for LLM interaction with support for typed prompting, control flow, constraints, and tools. | https://cookbook.openai.com/related_resources |
Prompting guides | Brex's Prompt Engineering Guide: Brex's introduction to language models and prompt engineering. | https://cookbook.openai.com/related_resources |
Prompting guides | promptingguide.ai: A prompt engineering guide that demonstrates many techniques. | https://cookbook.openai.com/related_resources |
Prompting guides | OpenAI Cookbook: Techniques to improve reliability: A slightly dated (Sep 2022) review of techniques for prompting language models. | https://cookbook.openai.com/related_resources |
Prompting guides | Lil'Log Prompt Engineering: An OpenAI researcher's review of the prompt engineering literature (as of March 2023). | https://cookbook.openai.com/related_resources |
Prompting guides | learnprompting.org: An introductory course to prompt engineering. | https://cookbook.openai.com/related_resources |
Video courses | Andrew Ng's DeepLearning.AI: A short course on prompt engineering for developers. | https://cookbook.openai.com/related_resources |
Video courses | Andrej Karpathy's Let's build GPT: A detailed dive into the machine learning underlying GPT. | https://cookbook.openai.com/related_resources |
Video courses | Prompt Engineering by DAIR.AI: A one-hour video on various prompt engineering techniques. | https://cookbook.openai.com/related_resources |
Papers on advanced prompting to improve reasoning - Part 1 | Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022): Using few-shot prompts to ask models to think step by step improves their reasoning. PaLM's score on math word problems (GSM8K) rises from 18% to 57%. | https://cookbook.openai.com/related_resources |
Papers on advanced prompting to improve reasoning - Part 2 | Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022): Taking votes from multiple outputs improves accuracy even more. Voting across 40 outputs raises PaLM's score on math word problems further, from 57% to 74%, and code-davinci-002's from 60% to 78%. | https://cookbook.openai.com/related_resources |
Papers on advanced prompting to improve reasoning - Part 3 | Tree of Thoughts: Deliberate Problem Solving with Large Language Models (2023): Searching over trees of step by step reasoning helps even more than voting over chains of thought. It lifts GPT-4's scores on creative writing and crosswords. | https://cookbook.openai.com/related_resources |
Papers on advanced prompting to improve reasoning - Part 4 | Language Models are Zero-Shot Reasoners (2022): Telling instruction-following models to think step by step improves their reasoning. It lifts text-davinci-002's score on math word problems (GSM8K) from 13% to 41%. | https://cookbook.openai.com/related_resources |
Papers on advanced prompting to improve reasoning - Part 5 | Large Language Models Are Human-Level Prompt Engineers (2023): Automated searching over possible prompts found a prompt that lifts scores on math word problems (GSM8K) to 43%, 2 percentage points above the human-written prompt in Language Models are Zero-Shot Reasoners. | https://cookbook.openai.com/related_resources |
Papers on advanced prompting to improve reasoning - Part 6 | Reprompting: Automated Chain-of-Thought Prompt Inference Through Gibbs Sampling (2023): Automated searching over possible chain-of-thought prompts improved ChatGPT's scores on a few benchmarks by 0–20 percentage points. | https://cookbook.openai.com/related_resources |
Papers on advanced prompting to improve reasoning - Part 7 | Faithful Reasoning Using Large Language Models (2022): Reasoning can be improved by a system that combines: chains of thought generated by alternative selection and inference prompts, a halter model that chooses when to halt selection-inference loops, a value function to search over multiple reasoning paths, and sentence labels that help avoid hallucination. | https://cookbook.openai.com/related_resources |
Papers on advanced prompting to improve reasoning - Part 8 | STaR: Bootstrapping Reasoning With Reasoning (2022): Chain of thought reasoning can be baked into models via fine-tuning. For tasks with an answer key, example chains of thoughts can be generated by language models. | https://cookbook.openai.com/related_resources |
Papers on advanced prompting to improve reasoning - Part 9 | ReAct: Synergizing Reasoning and Acting in Language Models (2023): For tasks with tools or an environment, chain of thought works better if you prescriptively alternate between Reasoning steps (thinking about what to do) and Acting (getting information from a tool or environment). | https://cookbook.openai.com/related_resources |
Papers on advanced prompting to improve reasoning - Part 10 | Reflexion: an autonomous agent with dynamic memory and self-reflection (2023): Retrying tasks with memory of prior failures improves subsequent performance. | https://cookbook.openai.com/related_resources |
Papers on advanced prompting to improve reasoning - Part 11 | Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP (2023): Models augmented with knowledge via a 'retrieve-then-read' approach can be improved with multi-hop chains of searches. | https://cookbook.openai.com/related_resources |
Papers on advanced prompting to improve reasoning - Part 12 | Improving Factuality and Reasoning in Language Models through Multiagent Debate (2023): Generating debates between a few ChatGPT agents over a few rounds improves scores on various benchmarks. Math word problem scores rise from 77% to 85%. | https://cookbook.openai.com/related_resources |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | The aim of this notebook is to walk through a comprehensive example of how to fine-tune OpenAI models for Retrieval Augmented Generation (RAG).
We will also be integrating Qdrant and Few-Shot Learning to boost the model's performance and reduce hallucinations. This could serve as a practical guide for ML practitioners, data scientists, and AI Engineers interested in leveraging the power of OpenAI models for specific use-cases. 🤩 | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Setting up the Environment | Install and Import Dependencies
!pip install pandas openai tqdm tenacity scikit-learn tiktoken python-dotenv seaborn --upgrade --quiet
import json
import os
import time
import pandas as pd
import openai
import tiktoken
import seaborn as sns
from tenacity import retry, wait_exponential
from tqdm import tqdm
from collections import defaultdict
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
import warnings
warnings.filterwarnings('ignore')
tqdm.pandas()
Set your keys
Get your OpenAI keys here and Qdrant keys after making a free cluster here. | null |
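A minimal sketch of wiring up those keys is shown below, assuming they live in a .env file read with python-dotenv (which the pip install above includes); the environment variable names for Qdrant match those read later in the notebook.

```python
# Minimal sketch: load API keys from a .env file using python-dotenv
# (installed above). Variable names for Qdrant match later cells.
import os
import openai
from dotenv import load_dotenv

load_dotenv()
openai.api_key = os.environ["OPENAI_API_KEY"]
qdrant_url = os.getenv("QDRANT_URL")          # used later when creating the Qdrant client
qdrant_api_key = os.getenv("QDRANT_API_KEY")
```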
Data Preparation: SQuADv2 Data Subsets | For the purpose of demonstration, we'll make small slices from the train and validation splits of the SQuADv2 dataset. This dataset includes questions whose answer is not present in the context, which helps us evaluate how the LLM handles such cases.
We'll read the data from the JSON files and create a dataframe with the following columns: question, context, answer, is_impossible.
Download the Data
# !mkdir -p local_cache
# !wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json -O local_cache/train.json
# !wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json -O local_cache/dev.json
Read JSON to DataFrame
def json_to_dataframe_with_titles(json_data):
qas = []
context = []
is_impossible = []
answers = []
titles = []
for article in json_data['data']:
title = article['title']
for paragraph in article['paragraphs']:
for qa in paragraph['qas']:
qas.append(qa['question'].strip())
context.append(paragraph['context'])
is_impossible.append(qa['is_impossible'])
ans_list = []
for ans in qa['answers']:
ans_list.append(ans['text'])
answers.append(ans_list)
titles.append(title)
df = pd.DataFrame({'title': titles, 'question': qas, 'context': context, 'is_impossible': is_impossible, 'answers': answers})
return df
def get_diverse_sample(df, sample_size=100, random_state=42):
"""
Get a diverse sample of the dataframe by sampling from each title
"""
sample_df = df.groupby(['title', 'is_impossible']).apply(lambda x: x.sample(min(len(x), max(1, sample_size // 50)), random_state=random_state)).reset_index(drop=True)
if len(sample_df) < sample_size:
remaining_sample_size = sample_size - len(sample_df)
remaining_df = df.drop(sample_df.index).sample(remaining_sample_size, random_state=random_state)
sample_df = pd.concat([sample_df, remaining_df]).sample(frac=1, random_state=random_state).reset_index(drop=True)
return sample_df.sample(min(sample_size, len(sample_df)), random_state=random_state).reset_index(drop=True)
train_df = json_to_dataframe_with_titles(json.load(open('local_cache/train.json')))
val_df = json_to_dataframe_with_titles(json.load(open('local_cache/dev.json'))) | null |
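Later cells reference train_sample and a 100-row validation dataframe df that are not constructed in this excerpt; below is a plausible sketch using the sampling helper above, with sample sizes inferred from file names such as 100_val.json.

```python
# Plausible sketch (not shown in this excerpt): build the 100-row samples that
# later cells refer to as train_sample and df. Sample sizes are inferred from
# file names like "100_val.json" and may differ from the original notebook.
train_sample = get_diverse_sample(train_df, sample_size=100, random_state=42)
df = get_diverse_sample(val_df, sample_size=100, random_state=42)
```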
Answering using Base gpt-3.5-turbo-0613 model | 3.1 Zero Shot Prompt
Let's start by using the base gpt-3.5-turbo-0613 model to answer the questions. The prompt is a simple concatenation of the question and context, with a separator token in between. We use a simple instruction as part of the prompt:
Answer the following Question based on the Context only. Only answer from the Context. If you don't know the answer, say 'I don't know'.
Other prompts are possible, but this is a good starting point. We'll use this prompt to answer the questions in the validation set. | null |
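The get_prompt function used by answer_question below is not shown in this excerpt; here is a plausible sketch that follows the instruction and Question/Context layout described above. The exact separator and message layout are assumptions.

```python
# Plausible sketch of the zero-shot get_prompt helper used below (the exact
# separator and message layout are assumptions based on the description above).
def get_prompt(row):
    instruction = (
        "Answer the following Question based on the Context only. "
        "Only answer from the Context. "
        "If you don't know the answer, say 'I don't know'."
    )
    return [
        {"role": "system", "content": instruction},
        {
            "role": "user",
            "content": f"Question: {row.question}\n\nContext: {row.context}\n\nAnswer:\n\n",
        },
    ]
```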
Answering using Zero Shot Prompt | 3.2 Answering using Zero Shot Prompt
Next, you'll need some reusable functions that make an OpenAI API call and return the answer. You'll use the ChatCompletion.create endpoint of the API, which takes a list of chat messages and returns the completed text.
# Function with tenacity for retries
@retry(wait=wait_exponential(multiplier=1, min=2, max=6))
def api_call(messages, model):
return openai.ChatCompletion.create(
model=model,
messages=messages,
stop=["\n\n"],
max_tokens=100,
temperature=0.0,
)
# Main function to answer question
def answer_question(row, prompt_func=get_prompt, model="gpt-3.5-turbo-0613"):
messages = prompt_func(row)
response = api_call(messages, model)
return response["choices"][0]["message"]["content"]
⏰ Time to run: ~3 min, 🛜 Needs Internet Connection
# Use progress_apply with tqdm for progress bar
df["generated_answer"] = df.progress_apply(answer_question, axis=1)
df.to_json("local_cache/100_val.json", orient="records", lines=True)
df = pd.read_json("local_cache/100_val.json", orient="records", lines=True)
df | null |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | Notice that the fine-tuned model skips questions more often -- and makes fewer mistakes. This is because the fine-tuned model is more conservative and skips questions when it's not sure. | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | evaluator.plot_model_comparison(["generated_answer", "ft_generated_answer"], scenario="idk_expected", nice_names=["Baseline", "Fine-Tuned"]) | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | Notice that the fine-tuned model has learned to say "I don't know" a lot better than the prompt. Or, the model has gotten good at skipping questions. | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | Observations The fine-tuned model is better at saying "I don't know" Hallucinations drop from 100% to 0% with fine-tuning. Wrong answers drop from 17% to 6% with fine-tuning. Correct answers also drop from 83% to 60% with fine-tuning - this is because the fine-tuned model is more conservative and says "I don't know" more often. This is a good thing because it's better to say "I don't know" than to give a wrong answer. | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | That said, we want to improve the correctness of the model, even if that increases the hallucinations. We're looking for a model that is both correct and conservative, striking a balance between the two. We'll use Qdrant and Few-Shot Learning to achieve this. | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | 💪 You're 2/3rds of the way there! Keep reading! | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | Section B: Few Shot Learning We'll select a few examples from the dataset, including cases where the answer is not present in the context. We'll then use these examples to create a prompt that we can use to fine-tune the model. We'll then measure the performance of the fine-tuned model. | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | What is next?

Fine-Tuning OpenAI Model with Qdrant
6.1 Embed the Fine-Tuning Data
6.2 Embedding the Questions
Using Qdrant to Improve RAG Prompt

6. Fine-Tuning OpenAI Model with Qdrant

So far, we've been using the OpenAI model to answer questions without using examples of the answer. The previous step made it work better on in-context examples, while this one helps it generalize to unseen data and attempt to learn when to say "I don't know" and when to give an answer. | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | This is where few-shot learning comes in! Few-shot learning is a type of transfer learning that allows us to handle questions where the answer is not present in the context. We do this by providing a few examples of the kind of answers we're looking for, including cases where the answer is absent from the context, so the model learns when to give an answer and when to say "I don't know". | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | 5.1 Embed the Training Data Embeddings are a way to represent sentences as an array of floats. We'll use the embeddings to find the most similar questions to the ones we're looking for. | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | import os
from qdrant_client import QdrantClient
from qdrant_client.http import models
from qdrant_client.http.models import PointStruct
from qdrant_client.http.models import Distance, VectorParams

Now that we have the Qdrant imports in place, we can set up the client:

qdrant_client = QdrantClient(
    url=os.getenv("QDRANT_URL"),
    api_key=os.getenv("QDRANT_API_KEY"),
    timeout=6000,
    prefer_grpc=True
)

collection_name = "squadv2-cookbook"

# # Create the collection, run this only once
# qdrant_client.recreate_collection(
#     collection_name=collection_name,
#     vectors_config=VectorParams(size=384, distance=Distance.COSINE),
# )

from fastembed.embedding import DefaultEmbedding
from typing import List
import numpy as np
import pandas as pd
from tqdm.notebook import tqdm

tqdm.pandas()

embedding_model = DefaultEmbedding()
| https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | 5.2 Embedding the Questions

Next, you'll embed the entire training set of questions. You'll use question-to-question similarity to find the questions most similar to the one we're looking for. This is a workflow used in RAG to leverage the OpenAI model's in-context learning ability with more examples; this is what we call few-shot learning here.

❗️⏰ Important Note: This step can take up to 3 hours to complete. Please be patient. If you see Out of Memory errors or kernel crashes, please reduce the batch size to 32, restart the kernel, and run the notebook again. This code needs to be run only ONCE.

Function Breakdown for generate_points_from_dataframe

Initialization: batch_size = 512 and total_batches set the stage for how many questions will be processed in one go. This is to prevent memory issues. If your machine can handle more, feel free to increase the batch size. If your kernel crashes, reduce the batch size to 32 and try again.
Progress Bar: tqdm gives you a nice progress bar so you don't fall asleep.
Batch Loop: The for-loop iterates through batches. start_idx and end_idx define the slice of the DataFrame to process.
Generate Embeddings: batch_embeddings = embedding_model.embed(batch, batch_size=batch_size) is where the magic happens. Your questions get turned into embeddings.
PointStruct Generation: Using .progress_apply, it turns each row into a PointStruct object. This includes an ID, the embedding vector, and other metadata.
Returns the list of PointStruct objects, which can be used to create a collection in Qdrant. | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | def generate_points_from_dataframe(df: pd.DataFrame) -> List[PointStruct]:
    batch_size = 512
    questions = df["question"].tolist()
    total_batches = len(questions) // batch_size + 1

    pbar = tqdm(total=len(questions), desc="Generating embeddings")

    # Generate embeddings in batches to improve performance
    embeddings = []
    for i in range(total_batches):
        start_idx = i * batch_size
        end_idx = min((i + 1) * batch_size, len(questions))
        batch = questions[start_idx:end_idx]

        batch_embeddings = embedding_model.embed(batch, batch_size=batch_size)
        embeddings.extend(batch_embeddings)
        pbar.update(len(batch))

    pbar.close()

    # Convert embeddings to list of lists
    embeddings_list = [embedding.tolist() for embedding in embeddings]

    # Create a temporary DataFrame to hold the embeddings and existing DataFrame columns
    temp_df = df.copy()
    temp_df["embeddings"] = embeddings_list
    temp_df["id"] = temp_df.index

    # Generate PointStruct objects using DataFrame apply method
    points = temp_df.progress_apply(
        lambda row: PointStruct(
            id=row["id"],
            vector=row["embeddings"],
            payload={
                "question": row["question"],
                "title": row["title"],
                "context": row["context"],
                "is_impossible": row["is_impossible"],
                "answers": row["answers"],
            },
        ),
        axis=1,
    ).tolist()

    return points

points = generate_points_from_dataframe(train_df)

Upload the Embeddings to Qdrant

Note that configuring Qdrant is outside the scope of this notebook. Please refer to the Qdrant documentation for more information. We used a timeout of 600 seconds for the upload, and gRPC compression to speed up the upload.

operation_info = qdrant_client.upsert(
    collection_name=collection_name, wait=True, points=points
)
print(operation_info)
| https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Fine-Tuning OpenAI Models for Retrieval Augmented Generation (RAG) with Qdrant and Few-Shot Learning | 6. Using Qdrant to Improve RAG Prompt
Now that we've uploaded the embeddings to Qdrant, we can use Qdrant to find the most similar questions to the question we're looking for. We'll use the top 5 most similar questions to create a prompt that we can use to fine-tune the model. We'll then measure the performance of the fine-tuned model on the same validation set, but with few shot prompting!
Our main function get_few_shot_prompt serves as the workhorse for generating prompts for few-shot learning. It does this by retrieving similar questions from Qdrant (a vector search engine) using an embeddings model. Here is the high-level workflow:
Retrieve similar questions from Qdrant where the answer is present in the context
Retrieve similar questions from Qdrant where the answer is IMPOSSIBLE to find in the context, i.e. the expected answer is "I don't know"
Create a prompt using the retrieved questions
Fine-tune the model using the prompt
Evaluate the fine-tuned model on the validation set with the same prompting technique
def get_few_shot_prompt(row):
query, row_context = row["question"], row["context"]
embeddings = list(embedding_model.embed([query]))
query_embedding = embeddings[0].tolist()
num_of_qa_to_retrieve = 5
# Query Qdrant for similar questions that have an answer
q1 = qdrant_client.search(
collection_name=collection_name,
query_vector=query_embedding,
with_payload=True,
limit=num_of_qa_to_retrieve,
query_filter=models.Filter(
must=[
models.FieldCondition(
key="is_impossible",
match=models.MatchValue(
value=False,
),
),
],
)
)
# Query Qdrant for similar questions that are IMPOSSIBLE to answer
q2 = qdrant_client.search(
collection_name=collection_name,
query_vector=query_embedding,
query_filter=models.Filter(
must=[
models.FieldCondition(
key="is_impossible",
match=models.MatchValue(
value=True,
),
),
]
),
with_payload=True,
limit=num_of_qa_to_retrieve,
)
instruction = """Answer the following Question based on the Context only. Only answer from the Context. If you don't know the answer, say 'I don't know'.
"""
# If there is a next best question, add it to the prompt
def q_to_prompt(q):
question, context = q.payload["question"], q.payload["context"]
answer = q.payload["answers"][0] if len(q.payload["answers"]) > 0 else "I don't know"
return [
{
"role": "user",
"content": f"Question: {question}\n\nContext: {context}\n\nAnswer:"
},
{"role": "assistant", "content": answer},
]
rag_prompt = []
if len(q1) >= 2:
rag_prompt += q_to_prompt(q1[1])
if len(q2) >= 2:
rag_prompt += q_to_prompt(q2[1])
if len(q1) >= 3:
rag_prompt += q_to_prompt(q1[2])
rag_prompt += [
{
"role": "user",
"content": f"Question: {query}\n\nContext: {row_context}\n\nAnswer:"
},
]
rag_prompt = [{"role": "system", "content": instruction}] + rag_prompt
return rag_prompt
# ⏰ Time: 2 min
train_sample["few_shot_prompt"] = train_sample.progress_apply(get_few_shot_prompt, axis=1)
| https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
7. Fine-Tuning OpenAI Model with Qdrant | 7.1 Upload the Fine-Tuning Data to OpenAI
# Prepare the OpenAI File format i.e. JSONL from train_sample
def dataframe_to_jsonl(df):
def create_jsonl_entry(row):
messages = row["few_shot_prompt"]
return json.dumps({"messages": messages})
jsonl_output = df.progress_apply(create_jsonl_entry, axis=1)
return "\n".join(jsonl_output)
with open("local_cache/100_train_few_shot.jsonl", "w") as f:
f.write(dataframe_to_jsonl(train_sample))
7.2 Fine-Tune the Model
⏰ Time to run: ~15-30 minutes
fine_tuner = OpenAIFineTuner(
training_file_path="local_cache/100_train_few_shot.jsonl",
model_name="gpt-3.5-turbo",
suffix="trnfewshot20230907"
)
model_id = fine_tuner.fine_tune_model()
model_id
# Let's try this out
completion = openai.ChatCompletion.create(
model=model_id,
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{
"role": "user",
"content": "Can you answer the following question based on the given context? If not, say, I don't know:\n\nQuestion: What is the capital of France?\n\nContext: The capital of Mars is Gaia. Answer:",
},
{
"role": "assistant",
"content": "I don't know",
},
{
"role": "user",
"content": "Question: Where did Maharana Pratap die?\n\nContext: Rana Pratap's defiance of the mighty Mughal empire, almost alone and unaided by the other Rajput states, constitute a glorious saga of Rajput valour and the spirit of self-sacrifice for cherished principles. Rana Pratap's methods of guerrilla warfare were later elaborated further by Malik Ambar, the Deccani general, and by Emperor Shivaji.\nAnswer:",
},
{
"role": "assistant",
"content": "I don't know",
},
{
"role": "user",
"content": "Question: Who did Rana Pratap fight against?\n\nContext: In stark contrast to other Rajput rulers who accommodated and formed alliances with the various Muslim dynasties in the subcontinent, by the time Pratap ascended to the throne, Mewar was going through a long-standing conflict with the Mughals, which started with the defeat of his grandfather Rana Sanga in the Battle of Khanwa in 1527 and continued with the defeat of his father Udai Singh II in the Siege of Chittorgarh in 1568. Pratap Singh, gained distinction for his refusal to form any political alliance with the Mughal Empire and his resistance to Muslim domination. The conflicts between Pratap Singh and Akbar led to the Battle of Haldighati. Answer:",
},
{
"role": "assistant",
"content": "Akbar",
},
{
"role": "user",
"content": "Question: Which state is Chittorgarh in?\n\nContext: Chittorgarh, located in the southern part of the state of Rajasthan, 233 km (144.8 mi) from Ajmer, midway between Delhi and Mumbai on the National Highway 8 (India) in the road network of Golden Quadrilateral. Chittorgarh is situated where National Highways No. 76 & 79 intersect. Answer:",
},
],
)
print("Correct Answer: Rajasthan\nModel Answer:")
print(completion.choices[0].message)
⏰ Time to run: 5-15 min
df["ft_generated_answer_few_shot"] = df.progress_apply(answer_question, model=model_id, prompt_func=get_few_shot_prompt, axis=1)
df.to_json("local_cache/100_val_ft_few_shot.json", orient="records", lines=True)
| https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
8. Evaluation | But how well does the model perform? Let's compare the results from the 3 different models we've looked at so far:
evaluator = Evaluator(df)
evaluator.plot_model_comparison(["generated_answer", "ft_generated_answer", "ft_generated_answer_few_shot"], scenario="answer_expected", nice_names=["Baseline", "Fine-Tuned", "Fine-Tuned with Few-Shot"])
This is quite amazing -- we're able to get the best of both worlds! We're able to get the model to be both correct and conservative:
The model is correct 83% of the time -- this is the same as the base model
The model gives the wrong answer only 8% of the time -- down from 17% with the base model
Next, let's look at the hallucinations. We want to reduce the hallucinations, but not at the cost of correctness. We want to strike a balance between the two. We've struck a good balance here:
The model hallucinates 53% of the time -- down from 100% with the base model
The model says "I don't know" 47% of the time -- up from NEVER with the base model
evaluator.plot_model_comparison(["generated_answer", "ft_generated_answer", "ft_generated_answer_few_shot"], scenario="idk_expected", nice_names=["Baseline", "Fine-Tuned", "Fine-Tuned with Few-Shot"])
Few Shot Fine-Tuning with Qdrant is a great way to control and steer the performance of your RAG system. Here, we made the model less conservative compared to zero shot and more confident by using Qdrant to find similar questions.
You can also use Qdrant to make the model more conservative. We did this by giving examples of questions where the answer is not present in the context.
This biases the model to say "I don't know" more often.
Similarly, one can also use Qdrant to make the model more confident by giving examples of questions where the answer is present in the context. This biases the model to give an answer more often. The trade-off is that the model will also hallucinate more often.
You can make this trade-off by adjusting the training data: distribution of questions and examples, as well as the kind and number of examples you retrieve from Qdrant.
| https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
9. Conclusion | In this notebook, we've demonstrated how to fine-tune OpenAI models for specific use-cases. We've also demonstrated how to use Qdrant and Few-Shot Learning to improve the performance of the model.
Aggregate Results
So far, we've looked at the results for each scenario separately, i.e. each scenario summed to 100. Let's look at the results as an aggregate to get a broader sense of how the model is performing:
Category | Base | Fine-Tuned | Fine-Tuned with Qdrant
---|---|---|---
Correct | 44% | 32% | 44%
Skipped | 0% | 18% | 5%
Wrong | 9% | 3% | 4%
Hallucination | 47% | 7% | 25%
I don't know | 0% | 40% | 22%
Observations
Compared to base model
The few shot fine-tuned with Qdrant model is as good as the base model at answering questions where the answer is present in the context.
The few shot fine-tuned with Qdrant model is better at saying "I don't know" when the answer is not present in the context.
The few shot fine-tuned with Qdrant model is better at reducing hallucinations.
Compared to fine-tuned model
The few shot fine-tuned with Qdrant model gets more correct answers than the fine-tuned model: 83% of the questions are answered correctly vs 60% for the fine-tuned model
The few shot fine-tuned with Qdrant model is better at deciding when to say "I don't know" when the answer is not present in the context: a 34% skip rate for the plain fine-tuned model, vs 9% for the few shot fine-tuned with Qdrant model
Now, you should be able to:
Notice the trade-offs between the number of correct answers and hallucinations -- and how training dataset choice influences that!
Fine-tune OpenAI models for specific use-cases and use Qdrant to improve the performance of your RAG model
Get started on how to evaluate the performance of your RAG model | https://cookbook.openai.com/examples/fine-tuned_qa/ft_retrieval_augmented_generation_qdrant |
Azure chat completion models with your own data (preview)_1 | This example shows how to use Azure OpenAI service models with your own data. The feature is currently in preview. | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |
Azure chat completion models with your own data (preview)_2 | Azure OpenAI on your data enables you to run supported chat models such as GPT-3.5-Turbo and GPT-4 on your data without needing to train or fine-tune models. Running models on your data enables you to chat on top of, and analyze, your data with greater accuracy and speed. | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |
Azure chat completion models with your own data (preview)_3 | One of the key benefits of Azure OpenAI on your data is its ability to tailor the content of conversational AI. Because the model has access to, and can reference specific sources to support its responses, answers are not only based on its pretrained knowledge but also on the latest information available in the designated data source. | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |
Azure chat completion models with your own data (preview)_4 | This grounding data also helps the model avoid generating responses based on outdated or incorrect information. | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |
Azure chat completion models with your own data (preview)_5 | Azure OpenAI on your own data with Azure Cognitive Search provides a customizable, pre-built solution for knowledge retrieval, from which a conversational AI application can be built. To see alternative methods for knowledge retrieval and semantic search, check out the cookbook examples for vector databases. | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |
Azure chat completion models with your own data (preview)_6 | How it works
Azure OpenAI on your own data connects the model with your data, giving it the ability to retrieve and utilize data in a way that enhances the model's output. | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |
Azure chat completion models with your own data (preview)_7 | Together with Azure Cognitive Search, data is retrieved from designated data sources based on the user input and provided conversation history. The data is then augmented and resubmitted as a prompt to the model, giving the model contextual information it can use to generate a response. | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |
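For illustration, here is a heavily hedged sketch of what such a request looked like under the preview extensions endpoint, issued as a raw REST call. The API version, payload field names, and environment variable names are assumptions based on the 2023 preview documentation and should be checked against the current Azure docs before use.

```python
# Hedged sketch of the preview "chat with your data" REST call. API version,
# payload fields, and variable names are assumptions from the 2023 preview
# docs; verify against the current Azure OpenAI documentation before use.
import os
import requests

endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]        # e.g. https://<resource>.openai.azure.com
deployment = os.environ["AZURE_OPENAI_DEPLOYMENT"]    # your chat model deployment name

response = requests.post(
    f"{endpoint}/openai/deployments/{deployment}/extensions/chat/completions",
    params={"api-version": "2023-08-01-preview"},
    headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"]},
    json={
        "messages": [{"role": "user", "content": "What are my available health plans?"}],
        "dataSources": [
            {
                "type": "AzureCognitiveSearch",
                "parameters": {
                    "endpoint": os.environ["SEARCH_ENDPOINT"],
                    "key": os.environ["SEARCH_KEY"],
                    "indexName": os.environ["SEARCH_INDEX_NAME"],
                },
            }
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```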
Azure chat completion models with your own data (preview)_8 | See the Data, privacy, and security for Azure OpenAI Service for more information. | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |
Azure chat completion models with your own data (preview)_9 | Prerequisites
To get started, we'll cover a few prerequisites. | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |
Azure chat completion models with your own data (preview)_10 | To properly access the Azure OpenAI Service, we need to create the proper resources at the Azure Portal (you can check a detailed guide on how to do this in the Microsoft Docs) | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |
Azure chat completion models with your own data (preview)_11 | To use your own data with Azure OpenAI models, you will need: | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |
Azure chat completion models with your own data (preview)_12 | Azure OpenAI access and a resource with a chat model deployed (for example, GPT-3 or GPT-4) | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |
Azure chat completion models with your own data (preview)_13 | Azure Cognitive Search resource | https://cookbook.openai.com/examples/azure/chat_with_your_own_data |