Unleashing the Power of Logprobs in Language Models: A Practical Guide
Introduction
This article explores the logprobs parameter of the Chat Completions API and shows how it can be applied to classification and Q&A evaluation, giving deeper insight into how a language model arrives at its outputs.
Definitions:
Log probabilities, denoted as logprobs, represent the likelihood of each token occurring in a sequence given the context. A higher log probability signifies greater confidence in the model's output within a specific context. The logprobs parameter returns these probabilities, enabling a deeper understanding of the model's decision-making process.
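For intuition, converting a logprob back into a linear probability is a single exponentiation. The snippet below is a minimal sketch (the logprob value is hypothetical, not from the API):

import math

# A logprob is the natural log of a token's probability;
# exponentiating it recovers the linear probability.
logprob = -0.105  # hypothetical value returned for a token
probability = math.exp(logprob)
print(f"{probability:.2%}")  # ~90.03% confidence in this token

Probabilities multiply along a sequence while logprobs simply add, which is why APIs report the log form.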
Benefits:
Confidence in Classification: Traditional classification tasks gain a new dimension with logprobs. Assessing the model's confidence in its predictions becomes transparent, allowing for more accurate and trustworthy classifiers.
Enhanced Q&A Evaluation: Logprobs support self-evaluation in retrieval applications, particularly Q&A scenarios, where confidence scores help reduce retrieval-based hallucinations and improve overall accuracy.
Autocomplete Systems: Logprobs are a valuable tool in autocomplete systems, dynamically suggesting words or tokens the model is confident in, contributing to a more intuitive user experience.
Token Highlighting and Outputting Bytes: Creating a token highlighter with logprobs enhances text visualization, and the bytes parameter, coupled with logprobs, allows encoding and decoding of tokens, opening avenues for handling special characters and emojis (see the sketch after this list).
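As a rough sketch of the bytes idea: in the openai Python client, each logprob entry carries a bytes field with the raw UTF-8 bytes of its token. Concatenating and decoding them reconstructs multi-byte characters, such as emojis, that may be split across tokens. This helper is an illustration (the response argument is assumed to come from a Chat Completions call made with logprobs=True):

def decode_tokens_from_bytes(response):
    """Reassemble the full UTF-8 string from per-token byte lists."""
    token_bytes = []
    for entry in response.choices[0].logprobs.content:
        if entry.bytes is not None:  # bytes can be None for some tokens
            token_bytes.extend(entry.bytes)
    # Multi-byte characters (e.g. emojis) decode correctly once rejoined
    return bytes(token_bytes).decode("utf-8")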
Code Implementation
The code below walks through generating log probabilities for a given text prompt with the Chat Completions API. These log probabilities offer insight into how likely different token sequences are; you can customize the prompts and adjust the parameters to explore and analyze the model's predictions.
Step I: Install the Libraries
pip install -qU comet_ml transformers datasets openai "httpx<0.25.0"
Step II: Import Libraries and Initialize Hugging Face and Comet ML
import comet_ml
from huggingface_hub import notebook_login
from datasets import load_dataset
import re
import string
import datasets
import random
import pandas as pd
from IPython.display import display, HTML
# Loading the dataset
raw_datasets = load_dataset("rotten_tomatoes")
# Initializing the project
comet_ml.init(project_name="text-classification-with-transformers")
# Logging in to Hugging Face
notebook_login()
Step III: Helper Functions
def show_random_elements(dataset, num_examples=10):
    """Display a random sample of dataset rows as an HTML table."""
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset) - 1)
        # Re-draw until we get an index we haven't picked yet
        while pick in picks:
            pick = random.randint(0, len(dataset) - 1)
        picks.append(pick)
    df = pd.DataFrame(dataset[picks])
    # Map integer class labels back to their human-readable names
    for column, typ in dataset.features.items():
        if isinstance(typ, datasets.ClassLabel):
            df[column] = df[column].transform(lambda i: typ.names[i])
    display(HTML(df.to_html()))
def clean_text(text):
    # Remove non-printable characters
    printable_chars = string.printable
    text = ''.join(filter(lambda x: x in printable_chars, text))
    # Replace multiple whitespace characters with a single space
    text = re.sub(r'\s{2,}', ' ', text).strip()
    return text
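A quick usage example (illustrative input, not from the article) shows what clean_text does:

# The null byte is stripped (it is not in string.printable) and the
# runs of whitespace collapse to single spaces.
sample = "Great   movie!\n\nLoved \x00it."
print(clean_text(sample))  # -> "Great movie! Loved it."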
def preprocess_dataset(raw_dataset):
    # Set the format of the dataset to Pandas dataframe
    raw_dataset.set_format(type="pandas")
    # Copy the 'train' split to a new dataframe
    df = raw_dataset["train"][:]
    # Apply text cleaning to the 'text' column
    df['text'] = df['text'].apply(clean_text)
    return df
def get_completion(
    messages: list[dict[str, str]],
    model: str = "gpt-3.5-turbo-0613",
    max_tokens=500,
    temperature=0,
    stop=None,
    seed=123,
    tools=None,
    logprobs=None,  # whether to return log probabilities of the output tokens; if True, the logprob of each output token is returned in the message content
    top_logprobs=None,
):
    # Returns the full ChatCompletion object so callers can inspect logprobs
    params = {
        "model": model,
        "messages": messages,
        "max_tokens": max_tokens,
        "temperature": temperature,
        "stop": stop,
        "seed": seed,
        "logprobs": logprobs,
        "top_logprobs": top_logprobs,
    }
    if tools:
        params["tools"] = tools
    completion = client.chat.completions.create(**params)
    return completion
Step IV: Inspect and Preprocess the Data
show_random_elements(raw_datasets["train"])
df = preprocess_dataset(raw_datasets)
df.head()
Step V: Logprobs for Classification
import os
import numpy as np  # used below for np.exp and np.round
from openai import OpenAI

api_key = ""  # set your OpenAI API key here
client = OpenAI(api_key=api_key)
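With the client configured, one quick illustration (not part of the article's classification pipeline) is to score an entire response by its average token logprob, a perplexity-style confidence check:

# Illustrative sketch: gauge a response's overall confidence from its
# per-token logprobs. Lower perplexity means a more confident output.
response = get_completion(
    [{"role": "user", "content": "In one word, is the sky blue?"}],
    logprobs=True,
)
token_logprobs = [t.logprob for t in response.choices[0].logprobs.content]
avg_logprob = np.mean(token_logprobs)
perplexity = np.exp(-avg_logprob)
print(f"average logprob: {avg_logprob:.3f}, perplexity: {perplexity:.3f}")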
CLASSIFICATION_PROMPT = """You will be given the text of a Rotten Tomatoes review.
Classify the review into one of the following categories: Positive, Negative.
Return only the name of the category, and nothing else.
MAKE SURE your output is one of the two categories stated.
Review text: {headline}"""
headlines = df['text'].tolist()

for headline in headlines[:4]:
    print(f"\nHeadline: {headline}")
    API_RESPONSE = get_completion(
        [{"role": "user", "content": CLASSIFICATION_PROMPT.format(headline=headline)}],
        model="gpt-4",
        logprobs=True,
        top_logprobs=2,
    )
    # The two most likely first tokens, with their logprobs
    top_two_logprobs = API_RESPONSE.choices[0].logprobs.content[0].top_logprobs
    html_content = ""
    for i, logprob in enumerate(top_two_logprobs, start=1):
        html_content += (
            f"<span style='color: cyan'>Output token {i}:</span> {logprob.token}, "
            f"<span style='color: darkorange'>logprobs:</span> {logprob.logprob}, "
            f"<span style='color: magenta'>linear probability:</span> {np.round(np.exp(logprob.logprob)*100,2)}%<br>"
        )
    display(HTML(html_content))
    print("\n")
Output
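A natural extension of the loop above (a sketch, assuming API_RESPONSE still holds the last review's completion) is to threshold the top token's linear probability and flag uncertain classifications:

# Hypothetical threshold; tune it to your own accuracy/coverage trade-off.
CONFIDENCE_THRESHOLD = 0.90

top = API_RESPONSE.choices[0].logprobs.content[0].top_logprobs[0]
confidence = np.exp(top.logprob)  # convert logprob to linear probability
if confidence < CONFIDENCE_THRESHOLD:
    print(f"Low-confidence label '{top.token}' ({confidence:.2%}): route to human review.")
else:
    print(f"Label '{top.token}' accepted with {confidence:.2%} confidence.")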
Conclusion
Logprobs turn a language model from a black box into a system you can interrogate: every generated token arrives with a measurable confidence. In this walkthrough, that signal let us build a sentiment classifier whose predictions come with linear probabilities attached, showing not just what the model chose but how sure it was.
The same signal extends to the other use cases outlined above: self-evaluation in Q&A pipelines, confidence-ranked suggestions in autocomplete, and token-level highlighting. Wherever a model produces text, logprobs quantify how confident it is in each choice.
Most importantly, confidence is a building block. Once you can measure it, you can threshold low-certainty classifications, route uncertain cases to human review, and design applications that know when they don't know.
Stay connected and support my work through various platforms:
Medium: You can read my latest articles and insights on Medium at https://medium.com/@andysingal
Paypal: Enjoyed my article? Buy me a coffee! https://paypal.me/alphasingal?country.x=US&locale.x=en_US
Requests and questions: If you have a project in mind that you’d like me to work on or if you have any questions about the concepts I’ve explained, don’t hesitate to let me know. I’m always looking for new ideas for future Notebooks and I love helping to resolve any doubts you might have.