Fine-tuning OpenAI GPT-4o for Multi-label Text Classification


In my previous article, I presented a comparison of GPT-4o and Claude 3.5 Sonnet for multi-label text classification. The accuracies achieved by both models were relatively low.

Fine-tuning is one solution for overcoming the low performance of large language models. With fine-tuning, you can incorporate custom domain knowledge into an LLM's weights, leading to better performance on your custom dataset.

This article will show how to fine-tune the OpenAI GPT-4o model on the multi-label research paper classification dataset. It is the same dataset I used for zero-shot multi-label classification in my previous article. You will see significantly better results with the fine-tuned GPT-4o model.

So, let's begin without further ado.

Importing and Installing Required Libraries

We will fine-tune the OpenAI GPT-4o model using the OpenAI API in Python. The following script installs the OpenAI Python library.


!pip install openai

The script below imports the required libraries into your Python application.

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from itertools import combinations
from collections import Counter
from sklearn.metrics import hamming_loss, accuracy_score
import json
import os
from openai import OpenAI
Importing and Preprocessing the Dataset

We will fine-tune the GPT-4o model using the same multi-label research paper classification dataset we used in the last article.

The following script imports the dataset into a Pandas dataframe and displays the dataset header.

## dataset download link
## https://www.kaggle.com/datasets/shivanandmn/multilabel-classification-dataset?select=train.csv

dataset = pd.read_csv(r"D:\Datasets\Multilabel Research Paper Classification\train.csv")
print(f"Dataset Shape: {dataset.shape}")
dataset.head()

Output:

img1.png

The dataset has nine columns. The ID column holds the paper ID, while the TITLE and ABSTRACT columns store the titles and abstracts of the research papers. In the remaining columns, a one indicates that the paper falls under that category, while a zero shows it does not.
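Before filtering, it can help to glance at how many papers carry each label and how many carry more than one. The quick check below uses the six category columns from the dataset header shown above.

category_columns = ["Computer Science", "Physics", "Mathematics", "Statistics", "Quantitative Biology", "Quantitative Finance"]

# Number of papers tagged with each category (a paper can appear in several)
print(dataset[category_columns].sum())

# Number of papers tagged with two or more categories
print((dataset[category_columns].sum(axis=1) >= 2).sum())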

We will filter the papers that belong to at least two categories, as our goal is to conduct multi-label classification.


subjects = ["Computer Science", "Physics", "Mathematics", "Statistics", "Quantitative Biology", "Quantitative Finance"]
filtered_dataset = dataset[(dataset[subjects] == 1).sum(axis=1) >= 2]
print(f"Filtered Dataset Shape: {filtered_dataset.shape}")
filtered_dataset.head()

Output:

img2.png

We will fine-tune the GPT-4o model on the first 100 records of our dataset. The test set will contain 100 randomly selected records, drawn with random_state = 42 so that we use the same test set as in the previous article.


train_dataset = filtered_dataset.iloc[:100]  # First 100 records for training
test_dataset = filtered_dataset.sample(n=100, random_state=42)  # randomly selecting 100 records for testing

# Display the shapes of the resulting datasets
print(f"Training Dataset Shape: {train_dataset.shape}")
print(f"Testing Dataset Shape: {test_dataset.shape}")

Output:

Training Dataset Shape: (100, 9)
Testing Dataset Shape: (100, 9)
Creating a Training File for OpenAI Fine Tuning

You must convert your dataset into a multi-line JSON (JSONL) format for OpenAI model fine-tuning. As per the official OpenAI documentation, each line should contain data like this:

{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "What's the capital of France?"}, {"role": "assistant", "content": "Paris, as if everyone doesn't know that already."}]}
{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "Who wrote 'Romeo and Juliet'?"}, {"role": "assistant", "content": "Oh, just some guy named William Shakespeare. Ever heard of him?"}]}
{"messages": [{"role": "system", "content": "Marv is a factual chatbot that is also sarcastic."}, {"role": "user", "content": "How far is the Moon from Earth?"}, {"role": "assistant", "content": "Around 384,400 kilometers. Give or take a few, like that really matters."}]}

We will convert our dataset into the above format. The system content will contain the system instructions, using the same instructions as in the previous article. The user content will contain the research paper title and abstract. Finally, the assistant content will store the desired output, i.e., the research paper categories as a comma-separated list.

The following script converts our training dataframe into the JSON file we will use for model fine-tuning.



# Initialize list to hold JSON-like strings
json_lines = []

# Template for system role content
system_content = (
    "You are an expert in various scientific domains.\n"
    "Given the following research paper title and abstract, classify the research paper into at least two or more of the following categories:\n"
    "- Computer Science\n"
    "- Physics\n"
    "- Mathematics\n"
    "- Statistics\n"
    "- Quantitative Biology\n"
    "- Quantitative Finance\n\n"
    "Return only a comma-separated list of the categories (e.g., [Computer Science,Physics] or [Computer Science,Physics,Mathematics]).\n"
    "Use the exact case sensitivity and spelling of the categories provided above."
)

# Loop through each row in the DataFrame
for _, row in train_dataset.iterrows():
    # Identify the categories with a value of 1 and reverse the list
    categories = [
        subject for subject in ["Computer Science", "Physics", "Mathematics", "Statistics", "Quantitative Biology", "Quantitative Finance"]
        if row[subject] == 1
    ][::-1]  # Reverse the order of categories

    # Create JSON structure for each row
    json_record = {
        "messages": [
            {"role": "system", "content": system_content},
            {"role": "user", "content": f"Title: {row['TITLE']}\nAbstract: {row['ABSTRACT']}"},
            {"role": "assistant", "content": f"[{','.join(categories)}]"}
        ]
    }
    # Convert to JSON string and add to list
    json_lines.append(json.dumps(json_record))

# Join all JSON strings with newline separators for the final output
final_output = "\n".join(json_lines)

The following script saves the JSON file.


# Save the JSON records to 'train.json'

training_file_path = r"D:\Datasets\Multilabel Research Paper Classification\train.json"

with open(training_file_path, 'w') as file:
    file.write(final_output)

print("Data successfully saved to 'train.json'")

We need to upload the JSON training file to OpenAI servers for fine-tuning. The following script uploads the file.


client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)


training_file = client.files.create(
  file=open(training_file_path, "rb"),
  purpose="fine-tune"
)

print(training_file.id)
Fine-tuning GPT-4o Model

We are now ready to fine-tune the GPT-4o model on our training file.
To do so, call the fine_tuning.jobs.create() method and pass it the training file ID and the model ID of the model you want to fine-tune.


fine_tuning_job_gpt4o = client.fine_tuning.jobs.create(
  training_file=training_file.id,
  model="gpt-4o-2024-08-06"
)

You can see the model fine-tuning events using the script below:


# List up to 10 events from a fine-tuning job
print(client.fine_tuning.jobs.list_events(fine_tuning_job_id = fine_tuning_job_gpt4o.id,
                                    limit=10))

Once the model is fine-tuned, you will receive an email from OpenAI containing your fine-tuned model ID. Alternatively, you can use the following script to retrieve the ID of your fine-tuned model.


ft_model_id = client.fine_tuning.jobs.retrieve(fine_tuning_job_gpt4o.id).fine_tuned_model
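Keep in mind that fine_tuned_model is only populated once the job has finished; if you retrieve it too early, it will be None. A simple way to wait for the job is to poll its status, as in the sketch below (the one-minute polling interval is an arbitrary choice).

import time

while True:
    job = client.fine_tuning.jobs.retrieve(fine_tuning_job_gpt4o.id)
    print(f"Job status: {job.status}")

    if job.status == "succeeded":
        ft_model_id = job.fine_tuned_model
        print(f"Fine-tuned model ID: {ft_model_id}")
        break
    elif job.status in ("failed", "cancelled"):
        raise RuntimeError(f"Fine-tuning did not complete: {job.status}")

    time.sleep(60)  # check once per minute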
Predicting Research Paper Category with Fine-tuned GPT-4o

The rest of the process is the same as explained in the previous article.

We will define the find_research_category() function, which accepts the OpenAI API client, the fine-tuned model ID, and the test dataset as parameters.

Within this function, we iterate through each row, extracting the title and abstract of each paper. Then, we define the same system prompt we used for training to instruct the fine-tuned model to classify each paper into two or more of the predefined subject categories.


def find_research_category(client, model, dataset):

    outputs = []
    i = 0

    for _, row in dataset.iterrows():
        title = row['TITLE']
        abstract = row['ABSTRACT']

        content = """You are an expert in various scientific domains.
                     Given the following research paper title and abstract, classify the research paper into at least two or more of the following categories:
                    - Computer Science
                    - Physics
                    - Mathematics
                    - Statistics
                    - Quantitative Biology
                    - Quantitative Finance

                    Return only a comma-separated list of the categories (e.g., [Computer Science,Physics] or [Computer Science,Physics,Mathematics]).
                    Use the exact case sensitivity and spelling of the categories provided above.

                    text: Title: {}\nAbstract: {}""".format(title, abstract)


        research_category = client.chat.completions.create(
                                model= model,
                                temperature = 0,
                                max_tokens = 100,
                                messages=[
                                      {"role": "user", "content": content}
                                  ]
                              ).choices[0].message.content


        outputs.append(research_category)
        print(i + 1, research_category)
        i += 1

    return outputs

The find_research_category() function returns a list of strings, where each string contains the comma-separated predicted categories for a paper. We will convert these subject categories into a Pandas dataframe using the parse_outputs_to_dataframe() function, allowing us to compare the model outputs against the target labels.


def parse_outputs_to_dataframe(outputs):

    subjects = ["Computer Science", "Physics", "Mathematics", "Statistics", "Quantitative Biology", "Quantitative Finance"]
    # Remove square brackets and split the subjects for each entry in outputs
    parsed_data = [item.strip('[]').split(',') for item in outputs]

    # Create an empty DataFrame with columns for each subject, initializing with 0s
    df = pd.DataFrame(0, index=range(len(parsed_data)), columns=subjects)

    # Populate the DataFrame with 1s based on the presence of each subject in each row
    for i, subjects_list in enumerate(parsed_data):
        for subject in subjects_list:
            if subject in subjects:
                df.loc[i, subject] = 1

    return df

Next, we call the find_research_category() function with the OpenAI client object, the fine-tuned model ID, and the test dataset.


model = ft_model_id
outputs = find_research_category(client,
                                 model,
                                 test_dataset)

Output:

img3.png

We will convert the model predictions into a Pandas dataframe using the parse_outputs_to_dataframe() function.

Finally, we calculate the hamming loss and the model accuracy for the predictions on the test set.


predictions = parse_outputs_to_dataframe(outputs)
targets = test_dataset[subjects]

# Calculate Hamming Loss
hamming = hamming_loss(targets, predictions)
print(f"Hamming Loss: {hamming}")

# Calculate Subset Accuracy (Exact Match Ratio)
subset_accuracy = accuracy_score(targets, predictions)
print(f"Subset Accuracy: {subset_accuracy}")

Output:


Hamming Loss: 0.09333333333333334
Subset Accuracy: 0.69

For comparison, here are the results achieved in the previous article with the default GPT-4o model:

Hamming Loss: 0.16
Subset Accuracy: 0.4

The above output shows that the fine-tuned GPT-4o model performs significantly better than the default model. The hamming loss of 0.09 means that, on average, only 9% of the labels per record were predicted incorrectly, compared to 16% for the default GPT-4o.

Similarly, the fine-tuned GPT-4o achieves a subset accuracy of 69%, compared to 40% for the default GPT-4o model.
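If the two metrics feel abstract, here is a tiny worked example with made-up labels (not taken from the paper dataset) that shows how hamming loss and subset accuracy differ.

import numpy as np

# Three records, three possible labels each (1 = label assigned, 0 = not assigned)
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_pred = np.array([[1, 0, 0],   # one label wrong
                   [0, 1, 0],   # exact match
                   [1, 1, 0]])  # exact match

# 1 wrong label out of 9 label slots -> hamming loss = 1/9 ≈ 0.11
print(hamming_loss(y_true, y_pred))

# 2 of 3 records matched exactly -> subset accuracy ≈ 0.67
print(accuracy_score(y_true, y_pred))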

Conclusion

In this article, you saw how to fine-tune the GPT-4o model for multi-label text classification. The results show that with just 100 training examples, the fine-tuned GPT-4o achieves a subset accuracy 29 percentage points higher (69% vs. 40%) than the default GPT-4o model.

If you are receiving poor results on your dataset with default GPT-4o, I suggest fine-tuning it with around 100 examples. You will see a clear improvement in model performance.

OpenAI GPT-4o vs Claude 3.5 Sonnet for Multi-label Text Classification


In one of my previous articles, you saw a comparison of GPT-4o and Claude 3.5 Sonnet for zero-shot text classification. In that article, we performed multi-class text classification, where input tweets belonged to one of three categories.

In this article, we will go a step further and perform zero-shot multi-label text classification with the GPT-4o and Claude 3.5 Sonnet models. We will compare the two models using accuracy and hamming loss and see which model is better suited for zero-shot multi-label text classification.

So, let's begin without further ado.

Installing and Importing Required Libraries

We will call the Claude 3.5 Sonnet and GPT-4o models using the Anthropic and OpenAI Python libraries.

The following script installs these libraries.


!pip install anthropic
!pip install openai

The script below imports the required libraries into your Python application.

import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from itertools import combinations
from collections import Counter
from sklearn.metrics import hamming_loss, accuracy_score

import anthropic
from openai import OpenAI

from google.colab import userdata
OPENAI_API_KEY = userdata.get('OPENAI_API_KEY')
ANTHROPIC_API_KEY = userdata.get('ANTHROPIC_API_KEY')
Importing and Visualizing the Dataset

We will use a multi-label text classification dataset from Kaggle containing research paper titles and abstracts that can belong to one or more of the six output categories: Computer Science, Physics, Mathematics, Statistics, Quantitative Biology, and Quantitative Finance.

The following script imports the training set from the dataset and plots the dataset header.


## dataset download link
## https://www.kaggle.com/datasets/shivanandmn/multilabel-classification-dataset?select=train.csv

dataset = pd.read_csv("/content/train.csv")
print(f"Dataset Shape: {dataset.shape}")
dataset.head()

Output:

img1.png

The dataset consists of nine columns. The ID column contains the paper ID, while the TITLE and ABSTRACT columns contain the research papers' titles and abstracts, respectively. In the remaining columns, a one signifies that the paper belongs to the corresponding category, while a zero means it does not.

Let's filter papers that belong to at least two categories since we want to perform multi-label classification.


subjects = ["Computer Science", "Physics", "Mathematics", "Statistics", "Quantitative Biology", "Quantitative Finance"]
filtered_dataset = dataset[(dataset[subjects] == 1).sum(axis=1) >= 2]
print(f"Filtered Dataset Shape: {filtered_dataset.shape}")
filtered_dataset.head()

Output:

img2.png

Next, we will plot a heatmap that shows the co-occurrences of research papers in various categories.


filtered_dataset[subjects] = filtered_dataset[subjects].astype(int)

pair_counts = {pair: 0 for pair in combinations(subjects, 2)}

# Count occurrences where both subjects in each pair have a value of 1
for subject1, subject2 in pair_counts:
    pair_counts[(subject1, subject2)] = ((filtered_dataset[subject1] == 1) & (filtered_dataset[subject2] == 1)).sum()

# Convert the pair counts to a DataFrame suitable for a heatmap
pair_counts_df = pd.DataFrame(0, index=subjects, columns=subjects)
for (subject1, subject2), count in pair_counts.items():
    pair_counts_df.loc[subject1, subject2] = count
    pair_counts_df.loc[subject2, subject1] = count  # Ensure symmetry

# Plotting the heatmap
plt.figure(figsize=(8, 6))
sns.heatmap(pair_counts_df,
            annot=True,
            fmt="d",
            cmap="Blues",
            square=True,
            cbar=True)

plt.title("Pairing Count of Columns with Both Values as 1")
plt.show()

Output:

img3.png

The above output shows that most research papers in the dataset are grouped in the Computer Science & Statistics and Mathematics & Statistics category pairs.

In the next section, we will use the Claude 3.5 Sonnet and GPT-4o models to classify the research papers in the dataset using multi-label classification.

Zero-Shot Multi-label Text Classification

We will define the find_research_category() method that takes as parameters the Anthropic or OpenAI API client, the model name from the API, and the dataset.

Inside the method, we iterate through all the rows and extract the paper titles and abstracts. Next, we write a prompt instructing the model to classify each paper into two or more of the predefined subject categories based on its title and abstract.

Next, depending on the model type, we send the prompt to Anthropic or OpenAI and retrieve the comma-separated list of subject types for the research paper.



def find_research_category(client, model, dataset):

    outputs = []
    i = 0

    for _, row in dataset.iterrows():  # iterate over the dataset passed in as a parameter
        title = row['TITLE']
        abstract = row['ABSTRACT']

        content = """You are an expert in various scientific domains.
                     Given the following research paper title and abstract, classify the research paper into at least two or more of the following categories:
                    - Computer Science
                    - Physics
                    - Mathematics
                    - Statistics
                    - Quantitative Biology
                    - Quantitative Finance

                    Return only a comma-separated list of the categories (e.g., [Computer Science,Physics] or [Computer Science,Physics,Mathematics]).
                    Use the exact case sensitivity and spelling of the categories provided above.

                    text: Title: {}\nAbstract: {}""".format(title, abstract)


        research_category = ""

        if model == "gpt-4o":

          research_category = client.chat.completions.create(
                                model= model,
                                temperature = 0,
                                max_tokens = 100,
                                messages=[
                                      {"role": "user", "content": content}
                                  ]
                              ).choices[0].message.content


        if model == "claude-3-5-sonnet-20241022":

          research_category = client.messages.create(
                                model= model,
                                max_tokens=10,
                                temperature=0.0,
                                messages=[
                                    {"role": "user", "content": content}
                                ]
                            ).content[0].text

        outputs.append(research_category)
        print(i + 1, research_category)
        i += 1

    return outputs

The response from the find_research_category() method is a list of strings, where each string contains the comma-separated subject categories predicted for a paper. We will convert these subject categories into a Pandas dataframe to compare the model outputs with the target labels.


def parse_outputs_to_dataframe(outputs):

    subjects = ["Computer Science", "Physics", "Mathematics", "Statistics", "Quantitative Biology", "Quantitative Finance"]
    # Remove square brackets and split the subjects for each entry in outputs
    parsed_data = [item.strip('[]').split(',') for item in outputs]

    # Create an empty DataFrame with columns for each subject, initializing with 0s
    df = pd.DataFrame(0, index=range(len(parsed_data)), columns=subjects)

    # Populate the DataFrame with 1s based on the presence of each subject in each row
    for i, subjects_list in enumerate(parsed_data):
        for subject in subjects_list:
            if subject in subjects:
                df.loc[i, subject] = 1

    return df

We will randomly select 100 records from the dataset and use them to make predictions. You can select more records; however, it will cost more since the OpenAI and Claude APIs are not free.

sampled_df = filtered_dataset.sample(n=100, random_state=42)
Multi-label Text Classification with GPT-4o

Let's first do multi-label classification with GPT-4o. To do so, we will create an OpenAI client object and pass it the OpenAI API key.

We will pass the client object and the gpt-4o model ID to the find_research_category() method, which returns model predictions.

client = OpenAI(api_key = OPENAI_API_KEY,)
model = "gpt-4o"

outputs = find_research_category(client, model, sampled_df)

Output:

img4.png

We will convert the model predictions into a Pandas dataframe using the parse_outputs_to_dataframe() method.

Finally, we calculate the hamming loss and model accuracy using the hamming_loss and accuracy_score functions from the sklearn library.


predictions = parse_outputs_to_dataframe(outputs)
targets = sampled_df[subjects]

# Calculate Hamming Loss
hamming = hamming_loss(targets, predictions)
print(f"Hamming Loss: {hamming}")

# Calculate Subset Accuracy (Exact Match Ratio)
subset_accuracy = accuracy_score(targets, predictions)
print(f"Subset Accuracy: {subset_accuracy}")

Output:


Hamming Loss: 0.16
Subset Accuracy: 0.4

We achieved a hamming loss of 0.16, which shows that, on average, 16% of the labels per record were predicted incorrectly. The lower the hamming loss, the better the model performs.

Similarly, we achieved a subset accuracy of 40%, which means the model correctly predicted all the categories for 40% of the records.

Multi-label Text Classification with Claude 3.5 Sonnet

Next, we will make predictions with Claude 3.5 Sonnet. We will create an object of the Anthropic class and pass it the Anthropic API key.

To retrieve model responses, we pass the Anthropic client and the Claude 3.5 Sonnet model ID to the find_research_category() method.


client = anthropic.Anthropic(api_key = ANTHROPIC_API_KEY)
model = "claude-3-5-sonnet-20241022"

outputs = find_research_category(client, model, sampled_df)

Output:

img5.png

Next, we parse the model outputs into a dataframe using the parse_outputs_to_dataframe() method and print the hamming loss and model accuracy for the Claude 3.5 Sonnet model.


predictions = parse_outputs_to_dataframe(outputs)
targets = sampled_df[subjects]

# Calculate Hamming Loss
hamming = hamming_loss(targets, predictions)
print(f"Hamming Loss: {hamming}")

# Calculate Subset Accuracy (Exact Match Ratio)
subset_accuracy = accuracy_score(targets, predictions)
print(f"Subset Accuracy: {subset_accuracy}")

Output:


Hamming Loss: 0.17166666666666666
Subset Accuracy: 0.29

The output shows that the Claude 3.5 Sonnet model performs slightly worse than GPT-4o in terms of hamming loss (0.172 vs. 0.16), meaning it predicted slightly more individual labels incorrectly. Claude also returned a subset accuracy of 29% for an exact label match, significantly lower than GPT-4o's 40%.

Conclusion

The Claude 3.5 Sonnet and OpenAI GPT-4o models achieve moderate zero-shot multi-label text classification results without any fine-tuning. In this article, you saw how to use these models for multi-label text classification and how to convert the model predictions into a Pandas dataframe to evaluate model performance.

The results show that GPT-4o edges out Claude 3.5 Sonnet on both metrics: it mispredicts slightly fewer individual labels (lower hamming loss) and is significantly better at producing an exact multi-label match for the input text.

Image Analysis Using Llama 3.2 Vision Instruct Model


On September 25, 2024, Meta released the Llama 3.2 series of multimodal models. The models are lightweight yet extremely powerful for image-to-text and text-to-text tasks.

In this article, you will learn how to use the Llama 3.2 Vision Instruct model for general image analysis, graph analysis, and facial sentiment prediction. You will see how to use the Hugging Face Inference API to call the Llama 3.2 Vision Instruct model.

The results are comparable with those of the proprietary Claude 3.5 Sonnet model, as explained in this article.

So, let's begin without further ado.

Importing Required Libraries

We will call the Llama 3.2 Vision Instruct model using the Hugging Face Inference API. To access the API, you need to install the following library.

pip install huggingface_hub==0.24.7

The following script imports the required libraries into your Python application.

import os
import base64
from IPython.display import display, HTML
from IPython.display import Image
from huggingface_hub import InferenceClient
import requests
from PIL import Image as PILImage  # aliased so it does not shadow IPython's Image class
from io import BytesIO
import matplotlib.pyplot as plt
A Basic Image Analysis Example with Llama 3.2 Vision Instruct Model

Let's first see how to analyze an image with the Llama 3.2 Vision Instruct model using the Hugging Face Inference API.

We will analyze the following image.

image_url = r"https://healthier.stanfordchildrens.org/wp-content/uploads/2021/04/Child-climbing-window-scaled.jpg"
Image(url=image_url, width=600, height=600)

Output:

image-1.png

To analyze an image using the Hugging Face Inference API, you must first create an object of the InferenceClient class from the huggingface_hub module and pass your Hugging Face access token to its constructor.

Next, call the chat_completion() method on the InferenceClient object (llama3_2_model_client in the following script) and pass it the Hugging Face model ID, the model temperature, and the list of messages.

In the following script, we pass one user message with the image we want to analyze and the text query.

The chat_completion() function returns a response based on the image and the query, which you can retrieve via response.choices[0].message.content.

In the script below, we simply ask the Meta Llama 3.2 Vision Instruct model to describe the image in a single line.


hf_token = os.environ.get('HF_TOKEN')
llama3_2_model_client = InferenceClient(token=hf_token)

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"
query = "Describe the image please in one line please!"

response =  llama3_2_model_client.chat_completion(
    model=model_id,
    temperature = 0,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_url}},
                {"type": "text", "text": query},
                ],
        }
    ],
    max_tokens=1024,
)
print(response.choices[0].message.content)

Output:

A young child with blonde hair and blue striped pajamas is climbing on a wicker chair in front of a window.

The above output shows that the model describes the image precisely.

Now that you know how to analyze an image using the Meta Llama 3.2 Vision Instruct model and the Hugging Face Inference API, let's define a utility function analyze_image() that takes in the user query and the image_url and returns the response answering the query related to the image.

In the script below, we ask the model whether it sees any potentially dangerous situation in the image and how to prevent it. The response shows that the model correctly analyzes the image and suggests potential prevention measures.


def analyze_image(query, image_url):


    response =  llama3_2_model_client.chat_completion(
        model=model_id,
        temperature = 0,
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": query},
                    ],
            }
        ],
        max_tokens=1024,
    )
    return response.choices[0].message.content


query = "You are a baby sitter. Do you see any dangerous sitation in the image? If yes, how to prevent it?"
image_url = r"https://healthier.stanfordchildrens.org/wp-content/uploads/2021/04/Child-climbing-window-scaled.jpg"

response = analyze_image(query, image_url)
print(response)

Output:

image-2.png

Overall, the Meta Llama 3.2 Vision Instruct model seems capable of general image analysis and performs on par with advanced proprietary models such as GPT-4o and Claude 3.5 Sonnet.

Graph Analysis

Let's see how well the Llama 3.2 Vision Instruct model performs on graph analysis tasks.

We will analyze the following bar plot, which displays government gross debt as a percentage of GDP for European countries in 2023.


image_url = r"https://globaleurope.eu/wp-content/uploads/sites/24/2023/12/Folie2.jpg"
Image(url=image_url, width=600, height=600)

Output:

image-3.jpg

Let's just ask the model to summarize the plot.


query =  "You are an expert graph and visualization expert. Can you summarize the graph?"
response = analyze_image(query, image_url)
print(response)

Output:

image-4.png

The above output shows that the model provides detailed insights into different aspects of the information in the bar plot.

Let's ask a slightly tricky question. We will ask the model to convert the bar plot into a table.


query =  "You are an expert graph and visualization expert. Can you convert the graph to table such as Country -> Debt?"
response = analyze_image(query, image_url)
print(response)

Output:

image-5.png

The above output shows that the model's conversions were not precise. For example, the plot shows that Greece's debt-to-GDP ratio is around 170%, but the model reports it as 180%. In fact, the model rounds the values to the nearest ten.

On the other hand, Claude 3.5 Sonnet provided exact values.

Image Sentiment Prediction

Let's test the Llama 3.2 Vision Instruct model for the image sentiment prediction task. We will predict the facial sentiment expressed in the following image.

image_url = r"https://www.allprodad.com/wp-content/uploads/2021/03/05-12-21-happy-people.jpg"
Image(url=image_url, width=600, height=600)

Output:

image-6.jpg

Run the following script to print the facial sentiment.

query =  "You are helpful psychologist. Can you predict facial sentiment from the input image"
response = analyze_image(query, image_url)
print(response)

Output:


Based on the image, the individual appears to be smiling, which is a common indicator of happiness or positive sentiment. The person's facial expression suggests that they are feeling content or joyful.

The above output shows that the model correctly predicted the facial sentiment in the image.

Analyzing Multiple Images

As with advanced vision models such as GPT-4o and Claude 3.5 Sonnet, you can analyze multiple images using the Llama 3.2 Vision Instruct model.

We will compare the following two images using the Llama 3.2 Vision Instruct model.


# URLs of the images
image_url1 = r"https://www.allprodad.com/wp-content/uploads/2021/03/05-12-21-happy-people.jpg"
image_url2 = r"https://www.shortform.com/blog/wp-content/uploads/2023/12/sad-woman-looking-down-eyes-slightly-closed-750x350.jpg"

# Fetch the images from the URLs
response1 = requests.get(image_url1)
response2 = requests.get(image_url2)

# Open the images using Pillow
img1 = PILImage.open(BytesIO(response1.content))
img2 = PILImage.open(BytesIO(response2.content))

# Create a figure to display the images side by side
fig, axes = plt.subplots(1, 2, figsize=(10, 5))

# Display the first image
axes[0].imshow(img1)
axes[0].axis('off')  # Hide axes

# Display the second image
axes[1].imshow(img2)
axes[1].axis('off')  # Hide axes

# Show the plot
plt.tight_layout()
plt.show()

Output:

image-7.png

To analyze multiple images, you must add the images to the content list of the user messages, as shown in the following script.

The script below defines the analyze_multiple_images() function that accepts a text query and two images and answers the query related to both images.


def analyze_multiple_images(query, image1_url, image2_url):


    response =  llama3_2_model_client.chat_completion(
        model=model_id,
        temperature = 0,
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image1_url}},
                    {"type": "image_url", "image_url": {"url": image2_url}},
                    {"type": "text", "text": query},
                    ],
            }
        ],
        max_tokens=1024,
    )
    return response.choices[0].message.content

query =  "You are helpful psychologist. Can you explain all the differences in the two images?"
response = analyze_multiple_images(query, image_url1, image_url2)
print(response)

The above script attempts to find all the differences between the two input images.

Output:

image-8.png

The output shows that Llama 3.2 Vision Instruct can find most of the differences between the two images, and its findings are very close to those of the Claude 3.5 Sonnet model.

Conclusion

Meta Llama 3.2 Vision Instruct is a lightweight yet extremely powerful model for text-to-text and image-to-text tasks. It is open-source, and you can use it for free using the Hugging Face Inference API.

In this article, you saw how to use the Llama 3.2 Vision Instruct model for image analysis tasks such as graph analysis, sentiment prediction, etc. I suggest you try the Llama 3.2 Vision Instruct model for your image-to-text and text-to-text tasks and share your feedback.

RAG with LangChain and Hugging Face Serverless Inference API


This article explains how to create a retrieval augmented generation (RAG) chatbot in LangChain using open-source models from Hugging Face serverless inference API.

You will see how to call large language models (LLMs) and embedding models from Hugging Face serverless inference API using LangChain. You will also see how to employ these LLMs and embedding models to create LangChain chatbots with and without memory.

So, let's begin without further ado.

Installing and Importing Required Libraries

We will first install the Python libraries required to run the code in this article.


!pip install langchain
!pip install langchain_community
!pip install pypdf
!pip install faiss-gpu
!pip install langchain-huggingface
!pip install --upgrade --quiet huggingface_hub

The script below imports the required libraries into your Python application.

Since we will be accessing the Hugging Face serverless inference API, you must obtain your access token from Hugging Face.

Note: The code in this article was run on Google Colab, so I used the userdata.get() method to access environment variables. You must use the method that is appropriate for your environment.
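For example, if you are running the code locally rather than on Colab, you could read the token from an environment variable instead (the variable name below is just a convention, not something LangChain requires):

# Assumes you exported the token beforehand, e.g. export HF_API_TOKEN=hf_...
import os
hf_token = os.environ.get("HF_API_TOKEN")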

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from langchain_community.embeddings import (
    HuggingFaceInferenceAPIEmbeddings,
)
from langchain_community.vectorstores import FAISS
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
from langchain_core.documents import Document
from langchain.chains import create_history_aware_retriever
from langchain_core.prompts import MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain.memory import ChatMessageHistory

import os
from google.colab import userdata

hf_token = userdata.get('HF_API_TOKEN')
Basic Example of Calling Hugging Face Inference API in LangChain

Let's first see a basic example of how to call a text generation model from Hugging Face serverless inference API.

You need to create an object of the HuggingFaceEndpoint() class and pass it your model repo ID and Hugging Face access token. The temperature and max_new_tokens variables are optional.

We will use the Qwen 2.5-72b model for response generation.

Next, pass the object of the HuggingFaceEndpoint to the ChatHuggingFace class constructor to create an LLM object.


repo_id = "Qwen/Qwen2.5-72B-Instruct"

llm = HuggingFaceEndpoint(
    repo_id=repo_id,
    temperature=0.5,
    huggingfacehub_api_token=hf_token,
    max_new_tokens=1024
)

llm = ChatHuggingFace(llm=llm)

You can use the LLM object defined in the above script like any other LangChain chat LLM model.

Let's see an example. In the following script, we create a prompt using the ChatPromptTemplate object that answers a user's question.

Next, we chain the prompt with the LLM and the output parser object.

Finally, we invoke the chain with a question to generate a final LLM response.

question = "Who won the Cricket World Cup 2019, who was the captain?"

template = """Answer the following question to the best of your knowledge.
Question: {question}
"""

prompt = ChatPromptTemplate.from_template(template)

output_parser = StrOutputParser()

llm_chain = prompt | llm | output_parser

llm_chain.invoke({"question": question})

Output:

The Cricket World Cup 2019 was won by England. The captain of the England team was Eoin Morgan.

You can see that once you have created an LLM using the HuggingFaceEndpoint and ChatHuggingFace objects, the rest is the standard LangChain response generation process.

Let's now create a simple chatbot that answers users' questions about a PDF document.

A RAG Chatbot Using Hugging Face Inference API

This section will show how to create a LangChain chatbot using LLMs and word embedding models from Hugging Face serverless API.

Creating Document Embeddings

We will create a chatbot that answers questions related to Google's 2024 Q1 earnings report.

The following script imports the PDF document and splits it into document chunks.


loader = PyPDFLoader("https://abc.xyz/assets/91/b3/3f9213d14ce3ae27e1038e01a0e0/2024q1-alphabet-earnings-release-pdf.pdf")
docs = loader.load_and_split()

Next, we will create vector embeddings from the PDF document chunks and store them in a vector store. User queries will also be converted into vector embeddings.

The chunks whose embeddings in the vector store are most similar to the embedding of the user query will then be retrieved and used to generate the final LLM response.

We will use the all-MiniLM-l6-v2 model to generate the vector embeddings. You can access this model for free via the Hugging Face serverless inference API using the HuggingFaceInferenceAPIEmbeddings object.

The following script splits our input PDF document and generates vector embeddings using the all-MiniLM-l6-v2 vector embedding model. The vector embeddings are stored in FAISS vector store.

embeddings = HuggingFaceInferenceAPIEmbeddings(
    api_key=hf_token,
    model_name="sentence-transformers/all-MiniLM-l6-v2"
)

text_splitter = RecursiveCharacterTextSplitter()
documents = text_splitter.split_documents(docs)
vector = FAISS.from_documents(documents, embeddings)

We are now ready to create our chatbot using LangChain.

Creating a Chatbot Without Memory

We will first create a chatbot without memory. The first step is to create a prompt that uses context to answer user questions.

Next, we will create two chains: a stuff document chain and a retrieval chain. The stuff document chain will link the prompt with the LLM.

The retrieval chain will be responsible for retrieving vector store documents relevant to the user query and passing them to the context variable in the prompt.

prompt = ChatPromptTemplate.from_template("""Answer the following question based only on the provided context:

Question: {input}

Context: {context}
"""
)

document_chain = create_stuff_documents_chain(llm, prompt)

retriever = vector.as_retriever()
retrieval_chain = create_retrieval_chain(retriever, document_chain)
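If you want to inspect what the retriever actually passes to the prompt's context variable, you can query it directly. The check below is just a sketch; depending on your LangChain version, you may need retriever.get_relevant_documents() instead of retriever.invoke().

# Peek at the chunks retrieved for an example question
retrieved_docs = retriever.invoke("What was YouTube's ad revenue in the first quarter of 2024?")

for doc in retrieved_docs:
    print(doc.page_content[:200], "\n---")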

Finally, you can invoke the retrieval chain, ask it questions related to the Google earnings report, and get relevant responses.

For example, in the following script, we ask our chatbot about YouTube's ad revenue for the first quarters of 2023 and 2024. The model returns correct responses, which you can verify from the PDF document.

def generate_response(query):
    response = retrieval_chain.invoke({"input": query})
    print(response["answer"].rpartition("Assistant:")[-1].strip())

query = "What is the revenue from YouTube adds for the 1st Quarters of 2023 and 2024"
generate_response(query)

Output:

Based on the provided context, the revenue from YouTube ads for the first quarter of 2023 was $6,693 million, and for the first quarter of 2024, it was $8,090 million.

Next, we will add memory to our chatbot to remember the previous interaction.

Adding Memory to the Chatbot

To create a LangChain chatbot with memory, we need a history-aware retriever chain in addition to the stuff document and retrieval chains, as explained in the official documentation.

The following script defines the prompt for the history-aware retriever chain. This chain creates a standalone query using the new user query and the chat history. This new standalone query is then passed to the retrieval chain as we did before.

contextualize_q_system_prompt = (
    "Given a chat history and the latest user question "
    "which might reference context in the chat history, "
    "formulate a standalone question which can be understood "
    "without the chat history. Do NOT answer the question, "
    "just reformulate it if needed and otherwise return it as is."
)

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_q_system_prompt),
        MessagesPlaceholder("chat_history"),
        ("user", "{input}"),
    ]
)

history_retriever_chain = create_history_aware_retriever(llm, retriever, prompt)

The stuff document chain will now have a message placeholder that stores the chat history.


prompt = ChatPromptTemplate.from_messages([
    ("system", "Answer the user's questions based on the below context:\n\n{context}"),
    MessagesPlaceholder(variable_name="chat_history"),
    ("user", "{input}")
])
document_chain = create_stuff_documents_chain(llm, prompt)

Finally, the retrieval chain will combine the history retriever chain and the stuff document chain to create the final response generation chain.

retrieval_chain = create_retrieval_chain(history_retriever_chain, document_chain)

Next, we will initialize an empty list that stores our chat messages. Each call to the retrieval chain extends the chat message history with the user query and the model response.

The script below defines the generate_response_with_memory() function that generates chatbot responses with memory.



chat_history = []

def generate_response_with_memory(query):
    response = retrieval_chain.invoke({
    "chat_history": chat_history,
    "input": query
    })

    response = response["answer"].rpartition("Assistant:")[-1].strip()
    chat_history.extend([HumanMessage(content = query),
                       AIMessage(content = response)])

    return response

Finally, we can combine everything to create a simple console chatbot that answers user questions related to Google's earnings report. The chatbot will continue answering user questions until the user types 'bye'.


print("=======================================================================")
print("Earnings Call Chatbot")
print("=======================================================================")

query = ""
while query != "bye":
    query = input("\033[1m User >>:\033[0m")

    if query == "bye":
        chat_history = []
        print("\033[1m Chatbot>>:\033[0m Thank you for your messages, have a good day!")
        break
    response = generate_response_with_memory(query)
    print(f"\033[1m Chatbot>>:\033[0m {response}")

Output:

img1.png

From the above output, you can see that the model generates responses based on the chat history.

Conclusion

In this article, you saw how you can use LangChain and the Hugging Face serverless inference API to create a document question-answering chatbot.

With Hugging Face's serverless inference API, you can call modern LLMs and embedding models without installing anything locally. This eliminates the need for expensive GPUs and hardware to run advanced LLMs.

I encourage you to use Hugging Face serverless inference API to develop your chatbot and LLM applications.

Qwen vs Llama – Who is winning the Open Source LLM Race


Open-source LLMs have been gaining immense popularity lately, owing to their performance being comparable with advanced proprietary LLMs. Open-source LLMs are free to use, and you can easily modify their source code or fine-tune them on your own systems.

Alibaba's Qwen and Meta's Llama series of models are two major players in the open-source LLM arena. In this article, we will compare the performance of the Qwen 2.5-72b and Llama 3.1-70b models for zero-shot text classification and summarization.

By the end of this article, you will have a rough idea of which model to use for your NLP tasks.

So, let's begin without further ado.

Installing and Importing Required Libraries

We will call the Hugging Face Inference API to access the Qwen and Llama models. In addition, we will need the rouge-score library to calculate ROUGE scores for the text summarization task. The script below installs the required libraries for this article.


!pip install huggingface_hub==0.24.7
!pip install rouge-score
!pip install --upgrade openpyxl
!pip install pandas openpyxl

The script below imports the required libraries.


from huggingface_hub import InferenceClient
import os
import pandas as pd
from rouge_score import rouge_scorer
from sklearn.metrics import accuracy_score
from collections import defaultdict
Calling Qwen 2.5 and Llama 3.1 Using Hugging Face Inference API

To access models via the Hugging Face Inference API, you will need your Hugging Face user access token.

Next, create a client object for the corresponding model using the InferenceClient class from the huggingface_hub library. You must pass the Hugging Face model path and the access token to the InferenceClient class constructor.

The script below creates model clients for Qwen 2.5-72b and Llama 3.1-70b models.


hf_token = os.environ.get('HF_TOKEN')

#qwen 2.5 endpoint
#https://huggingface.co/Qwen/Qwen2.5-72B-Instruct
qwen_model_client = InferenceClient(
    "Qwen/Qwen2.5-72B-Instruct",
    token=hf_token
)

#Llama 3.1 endpoint
#https://huggingface.co/meta-llama/Llama-3.1-70B-Instruct
llama_model_client = InferenceClient(
    "meta-llama/Llama-3.1-70B-Instruct",
    token=hf_token
)

To get a response from the model, you can call the chat_completion() method and pass a list of system and user messages to its messages parameter.

The script below defines the make_prediction() method, which accepts the model client, the system role prompt, and the user query and generates a response using the model client.


def make_prediction(model, system_role, user_query):

    response = model.chat_completion(
    messages=[{"role": "system", "content": system_role},
        {"role": "user", "content": user_query}],
    max_tokens=10,
    )

    return response.choices[0].message.content

Let's first generate a dummy response using the Qwen 2.5-72b.


system_role = "Assign positive, negative, or neutral sentiment to the movie review. Return only a single word in your response"
user_query = "I like this movie a lot"
make_prediction(qwen_model_client,
               system_role,
               user_query)

Output:

'positive'

The above output shows that the Qwen model generates the expected response.

Let's try the Llama 3.1-70b model now.

system_role = "Assign positive, negative, or neutral sentiment to the movie review. Return only a single word in your response"
user_query = "I hate this movie a lot"
make_prediction(llama_model_client,
               system_role,
               user_query)

Output:

'negative'

And voilà, Llama also makes the correct prediction.

In the following two sections, we will compare the performance of the two models on custom datasets. We will see how the two models fare for zero-shot text classification and summarization.

Qwen 2.5-72b vs Llama 3.1-70b For Text Classification

For text classification, we will use the Twitter US Airline Sentiment dataset, which consists of positive, negative, and neutral tweets about various US airlines.

The following script imports the dataset into a Pandas DataFrame.


## Dataset download link
## https://www.kaggle.com/datasets/crowdflower/twitter-airline-sentiment?select=Tweets.csv

dataset = pd.read_csv(r"D:\Datasets\Tweets.csv")
dataset.head()

Output:

img1.png

We will preprocess our dataset and select 100 tweets (34 neutral and 33 each for positive and negative sentiments).


# Remove rows where 'airline_sentiment' or 'text' are NaN
dataset = dataset.dropna(subset=['airline_sentiment', 'text'])

# Remove rows where 'airline_sentiment' or 'text' are empty strings
dataset = dataset[(dataset['airline_sentiment'].str.strip() != '') & (dataset['text'].str.strip() != '')]

# Filter the DataFrame for each sentiment
neutral_df = dataset[dataset['airline_sentiment'] == 'neutral']
positive_df = dataset[dataset['airline_sentiment'] == 'positive']
negative_df = dataset[dataset['airline_sentiment'] == 'negative']

# Randomly sample records from each sentiment
neutral_sample = neutral_df.sample(n=34)
positive_sample = positive_df.sample(n=33)
negative_sample = negative_df.sample(n=33)

# Concatenate the samples into one DataFrame
dataset = pd.concat([neutral_sample, positive_sample, negative_sample])

# Reset index if needed
dataset.reset_index(drop=True, inplace=True)

# print value counts
print(dataset["airline_sentiment"].value_counts())

Output:


airline_sentiment
neutral     34
positive    33
negative    33
Name: count, dtype: int64

Next, we will define the predict_sentiment() function, which accepts the model client, the system prompt, and the user query and generates a model response.


def predict_sentiment(model, system_role, user_query):

    response = model.chat_completion(
    messages=[{"role": "system", "content": system_role},
        {"role": "user", "content": user_query}],
    max_tokens=10,
    )

    return response.choices[0].message.content

In the next step, we will iterate through the 100 tweets in our dataset and predict sentiment for each tweet using the Qwen 2.5-72b and Llama 3.1-70b models, as shown in the following script.


models = {
    "qwen2.5-72b": qwen_model_client,
    "llama3.1-70b": llama_model_client
}

tweets_list = dataset["text"].tolist()
all_sentiments = []
exceptions = 0

for i, tweet in enumerate(tweets_list, 1):
    for model_name, model_client in models.items():
        try:
            print(f"Processing tweet {i} with model {model_name}")

            system_role = "You are an expert in annotating tweets with positive, negative, and neutral emotions"

            user_query = (
                f"What is the sentiment expressed in the following tweet about an airline? "
                f"Select sentiment value from positive, negative, or neutral. "
                f"Return only the sentiment value in small letters.\n\n"
                f"tweet: {tweet}"
            )

            sentiment_value = predict_sentiment(model_client, system_role, user_query)
            all_sentiments.append({
                'tweet_id': i,
                'model': model_name,
                'sentiment': sentiment_value
            })
            print(i, model_name, sentiment_value)

        except Exception as e:
            print("===================")
            print("Exception occurred with model:", model_name, "| Tweet:", i, "| Error:", e)
            exceptions += 1

print("Total exception count:", exceptions)

Output:

img2.png

Finally, we will convert the predictions for both models into a Pandas DataFrame. We will then fetch the predictions for each model and compare them with the actual sentiment values in the dataset to calculate accuracy.

# Convert results to DataFrame and calculate accuracy for each model
results_df = pd.DataFrame(all_sentiments)

for model_name in models.keys():
    model_results = results_df[results_df['model'] == model_name]
    accuracy = accuracy_score(model_results['sentiment'], dataset["airline_sentiment"].iloc[:len(model_results)])
    print(f"Accuracy for {model_name}: {accuracy}")

Output:

Accuracy for qwen2.5-72b: 0.8
Accuracy for llama3.1-70b: 0.77

The above output shows that the Qwen 2.5-72b model achieves 80% accuracy, while the Llama 3.1-70b model achieves 77% accuracy. The Qwen 2.5-72b model wins the battle for zero-shot text classification.
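If you want more detail than a single accuracy number, you can also print a per-class breakdown. The sketch below assumes the results_df and dataset objects created above and that each model's predictions are stored in the same order as the dataset rows.

from sklearn.metrics import classification_report

for model_name in models.keys():
    model_results = results_df[results_df['model'] == model_name]
    true_labels = dataset["airline_sentiment"].iloc[:len(model_results)]
    print(f"\n{model_name}")
    print(classification_report(true_labels, model_results['sentiment']))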

Let's now see which model performs better for zero-shot text summarization.

Qwen 2.5-72b vs Llama 3.1-70b For Text Summarization

We will use the News Articles Dataset to summarize text using the Qwen and Llama models.

The following script imports the dataset into a Pandas DataFrame.

# Dataset download link
# https://github.com/reddzzz/DataScience_FP/blob/main/dataset.xlsx

dataset = pd.read_excel(r"D:\Datasets\dataset.xlsx")
dataset = dataset.sample(frac=1)
print(dataset.shape)
dataset.head()

Output:

img3.png

Next, we will check the average number of characters across all human summaries. We will use this length to guide the maximum output length of the LLM response.

dataset['summary_length'] = dataset['human_summary'].apply(len)
average_length = dataset['summary_length'].mean()
print(f"Average length of summaries: {average_length:.2f} characters")

Output:

Average length of summaries: 1168.78 characters

We will define the generate_summary() helper method, which takes in the model client, the system prompt, and the user query as parameters and returns the model client response.

def generate_summary(model, system_role, user_query):

    response = model.chat_completion(
    messages=[{"role": "system", "content": system_role},
        {"role": "user", "content": user_query}],
    max_tokens=1200,
    )

    return response.choices[0].message.content

We will also define the calculate_rouge helper method that takes in actual and predicted summaries as parameters and returns ROUGE scores.


# Function to calculate ROUGE scores
def calculate_rouge(reference, candidate):
    scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'], use_stemmer=True)
    scores = scorer.score(reference, candidate)
    return {key: value.fmeasure for key, value in scores.items()}

Finally, we will iterate through the first 20 articles in the dataset and summarize them using the Qwen 2.5-72b and Llama 3.1-70b models. We will use the generate_summary() function to generate each model's summary and then use the calculate_rouge() method to calculate ROUGE scores against the human-written summary.

We create a Pandas DataFrame that contains ROUGE scores for all article summaries generated via the Qwen 2.5-72b and Llama 3.1-70b models.

models = {"qwen2.5-72b": qwen_model_client,
          "llama3.1-70b": llama_model_client}

results = []

i = 0
for _, row in dataset[:20].iterrows():
    article = row['content']
    human_summary = row['human_summary']

    i = i + 1

    for model_name, model_client in models.items():

        print(f"Summarizing article {i} with model {model_name}")
        system_role = "You are an expert in creating summaries from text"
        user_query = f"Summarize the following article in 1150 characters. The summary should look like human created:\n\n{article}\n\nSummary:"

        generated_summary = generate_summary(model_client, system_role, user_query)
        rouge_scores = calculate_rouge(human_summary, generated_summary)

        results.append({
            'model': model_name,
            'article_id': row.id,
            'generated_summary': generated_summary,
            'rouge1': rouge_scores['rouge1'],
            'rouge2': rouge_scores['rouge2'],
            'rougeL': rouge_scores['rougeL']
        })

# Create a DataFrame with results
results_df = pd.DataFrame(results)

Output:

img4.png

Finally, we can take the average ROUGE scores of all the article summaries to compare the two models.

average_scores = results_df.groupby('model')[['rouge1', 'rouge2', 'rougeL']].mean()
average_scores_sorted = average_scores.sort_values(by='rouge1', ascending=False)
print("Average ROUGE scores by model:")
average_scores_sorted.head()

Output:

img5.png

The Qwen model wins here as well, scoring higher on all three ROUGE metrics.

Conclusion

Open-source LLMs are quickly catching up with proprietary models. On multiple benchmarks, Qwen 2.5-72B has already surpassed GPT-4 Turbo, which was introduced at the beginning of this year.

In this article, you saw a comparison between the Qwen 2.5-72b and Llama 3.1-70b models for zero-shot text classification and summarization. The Qwen model performed better than Llama on both tasks.

I encourage you to use the Qwen model for text generation tasks like chatbot development and share your work with us.

Fine-tuning OpenAI Vision Models for Visual Question-Answering

Featured Imgs 23

In my previous article, I explained how to fine-tune the OpenAI GPT-4o model for natural language processing tasks.

At OpenAI DevDay, held on October 1, 2024, OpenAI announced that users can now fine-tune its vision and multimodal models, such as GPT-4o and GPT-4o mini. The best part is that fine-tuning vision models is free until October 31.

In this article, you will see an example of how to vision fine-tune the GPT-4o model on your custom visual question-answering dataset. So, let's begin without ado.

Importing and Installing Required Libraries

You will need to install the OpenAI Python library.

!pip install openai

In this article, we will be using the following Python libraries. Run the following script to import them into your Python application.


from openai import OpenAI
import pandas as pd
import json
import os
from sklearn.utils import shuffle
from sklearn.metrics import accuracy_score
Importing and Preprocessing the Dataset

We will fine-tune the GPT-4o model on a visual question-answering dataset you can download from Kaggle.

The following script imports the CSV file containing the question, the image ID, and the corresponding answer to the question.

#Data download link
#https://www.kaggle.com/datasets/bhavikardeshna/visual-question-answering-computer-vision-nlp

dataset = pd.read_csv(r"D:\Datasets\dataset\data_train.csv")
dataset.head()

Output:

img1.png

Here is the image with the id image100. You can see cups on the shelves.

image100.png

For vision fine-tuning, you must pass image URLs to the OpenAI API. Hence, we will upload our images to a cloud service (GitHub for this article). The dataset consists of over 1500 images. However, I only uploaded the first 495 images to GitHub. You can upload more images if you want.
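
Before building the training file, it can be worth spot-checking that a few of the hosted image URLs are publicly reachable, since the fine-tuning job cannot use images that OpenAI is unable to fetch. Here is a minimal sketch using the requests library (the repository URL below matches the base URL used later in this article, and image100.png is just an example file name):

import requests

sample_url = "https://raw.githubusercontent.com/usmanmalik57/daniweb-articles/refs/heads/main/vqa_images/image100.png"

# A 200 status code means the image is publicly accessible
response = requests.get(sample_url, timeout=10)
print(response.status_code)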

We will fine-tune the GPT-4o model on 300 images and will test the model on 100 images.

The following script extracts the digit part from the image_id column of the dataset and filters the images with IDs less than 495, as I uploaded only the first 495 images to GitHub.


dataset['image_num'] = dataset['image_id'].str.extract(r'(\d+)').astype(int)
filtered_data = dataset[dataset['image_num'] < 495]

filtered_data.head()

Output:

img2.png

You must convert your dataset into the following JSON format for vision fine-tuning OpenAI models.


{
  "messages": [
    { "role": "system", "content": "You are an assistant that identifies uncommon cheeses." },
    { "role": "user", "content": "What is this cheese?" },
    { "role": "user", "content": [
        {
          "type": "image_url",
          "image_url": {
            "url": "https://upload.wikimedia.org/wikipedia/commons/3/36/Danbo_Cheese.jpg"
          }
        }
      ]
    },
    { "role": "assistant", "content": "Danbo" }
  ]
}

We will convert our CSV file into the above JSON format. First, we will divide the dataset into training and test files with 300 and 100 records, respectively.

Next, we will iterate through all the rows in the training set and set the system role, which instructs the model on how to respond to user queries.

Subsequently, we will set the first user role content with the value from the question column and the second with the image URL. Note that we concatenate the base GitHub URL with the image ID to generate the full URL.

Finally, we set the assistant role content with the value from the answer column.

We will perform the above tasks for all the training set records and create our training JSON file.

We will use the test data later for model evaluation.



# Base URL for the images
base_url = "https://raw.githubusercontent.com/usmanmalik57/daniweb-articles/refs/heads/main/vqa_images/"

# Shuffle the dataset
shuffled_dataset = shuffle(filtered_data)

# Split the dataset: first 300 for training, next 100 for testing
training_data = shuffled_dataset[:300]
test_data = shuffled_dataset[300:400]

# Create the JSONL structure for training data and save each entry on a single line
training_output_file = r'D:\Datasets\dataset\training_data.jsonl'

with open(training_output_file, 'w') as f:
    for index, row in training_data.iterrows():
        # Update image URL
        image_url = f"{base_url}image{row['image_num']}.png"
        entry = {
            "messages": [
                {"role": "system", "content": "You are an assistant that answers questions related to images."},
                {"role": "user", "content": row['question']},
                {"role": "user", "content": [
                    {"type": "image_url", "image_url": {"url": image_url}}
                ]},
                {"role": "assistant", "content": row['answer']}
            ]
        }
        # Write each entry as a single line in the JSONL file
        f.write(json.dumps(entry) + '\n')

print(f"Training JSONL file saved as {training_output_file}")

Output:


Training JSONL file saved as D:\Datasets\dataset\training_data.jsonl

We are now ready to fine-tune the GPT-4o model.

Vision Fine-tuning OpenAI GPT-4o

The vision fine-tuning process is the same as the text fine-tuning process I explained in a previous article. The only difference lies in the training file, which contains image URLs for vision fine-tuning.

Let's quickly walk through the fine-tuning process.

First, create a client object for the OpenAI class and pass it your OpenAI API key.

Next, you need to upload the training file to the OpenAI server. You can do so using the files.create() method of the OpenAI client object. This method returns the training file data, including the training file ID.

client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)

training_file = client.files.create(
  file=open(training_output_file, "rb"),
  purpose="fine-tune"
)

To start fine-tuning, call the fine_tuning.jobs.create() method and pass it the ID of the training file you just uploaded to the OpenAI server.

fine_tuning_job_gpt4o = client.fine_tuning.jobs.create(
  training_file=training_file.id,
  model="gpt-4o-2024-08-06"
)

The training process will start in the background. Using the fine_tuning.jobs.list_events() method, you can see various training events.

# List up to 10 events from a fine-tuning job
print(client.fine_tuning.jobs.list_events(fine_tuning_job_id = fine_tuning_job_gpt4o.id,
                                    limit=10))

Once the fine-tuning is completed, you will receive an email containing the model ID of your fine-tuned model. Alternatively, you can retrieve the model ID using your fine-tuning job ID, as the following script shows.

ft_model_id = client.fine_tuning.jobs.retrieve(fine_tuning_job_gpt4o.id).fine_tuned_model
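
If you prefer to wait for the job programmatically instead of waiting for the email, you can poll the job status until it reaches a terminal state. The snippet below is a minimal sketch (the polling interval is an arbitrary choice, and client and fine_tuning_job_gpt4o are the objects defined above):

import time

def wait_for_fine_tuning(job_id, poll_seconds=60):
    # Poll the fine-tuning job until it succeeds, fails, or is cancelled
    while True:
        job = client.fine_tuning.jobs.retrieve(job_id)
        print(f"Job status: {job.status}")
        if job.status in ("succeeded", "failed", "cancelled"):
            return job
        time.sleep(poll_seconds)

finished_job = wait_for_fine_tuning(fine_tuning_job_gpt4o.id)
print(finished_job.fine_tuned_model)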

Let's now evaluate the model on the test data.

Evaluating Fine-Tuned Vision Model

We will add the full_image_path column to the test data, which contains image URLs from the GitHub repository of images.


test_data['full_image_path'] = test_data['image_num'].apply(lambda x: f"{base_url}image{x}.png")
test_data.head(10)

Output:

img3.png

Next, we will define the get_single_prediction() function, which accepts as parameters the query (question), the full image path, the OpenAI model ID, and the system role and returns the answer to the user's query.


def get_single_prediction(query, image_path, model_id, system_role):

    try:
        # Make the API call to get the response from the model
        response = client.chat.completions.create(
          model= model_id,
          temperature = 0,
          messages=[
                {"role": "system", "content": system_role},
                {"role": "user", "content": [
                    {"type": "text", "text": query},
                    {"type": "image_url", "image_url": {"url": image_path}
                    }
                ]}
          ]
        )


        # Extract the prediction from the API response
        prediction = response.choices[0].message.content.strip().lower()
        return prediction
    except Exception as e:
        print(f"Error making prediction: {e}")
        return None  # In case of failure

Let's first make a prediction using the default GPT-4o model.


image_path = test_data["full_image_path"].iloc[1]
query = test_data["question"].iloc[1]
system_role = "You are an assistant that answers questions related to images."
model_id = "gpt-4o-2024-08-06"
response = get_single_prediction(query, image_path, model_id, system_role)

print(f"Image path: {image_path}")
print(f"User Query: {query}")
print(f"Model Response: {response}")

Output:

img4.png

The above output shows that the default GPT-4o model correctly predicts the answer. However, the output is not a single word as it was in our dataset.

Let's now make a prediction for the same image using our fine-tuned model.


image_path = test_data["full_image_path"].iloc[1]
query = test_data["question"].iloc[1]
system_role = "You are an assistant that answers questions related to images."
model_id = ft_model_id
response = get_single_prediction(query, image_path, model_id, system_role)

print(f"Image path: {image_path}")
print(f"User Query: {query}")
print(f"Model Response: {response}")

Output:

img5.png

The model response contains the single word printer, which shows that the fine-tuned model has learned the patterns from the dataset.

We will define the make_predictions() function, which predicts all the records in the test data. The function accepts the dataset, the model ID, and the system role as parameter values.

The function iterates through each record in the dataset and uses the get_single_prediction() function to predict the response. The function then appends the response to the predicted_answers[] list. Finally, the actual_answers list containing the actual answers is compared with the predicted_answers list to calculate the model's accuracy.



def make_predictions(dataframe, model_id, system_role):
    actual_answers = []
    predicted_answers = []

    # Initialize a counter to track record numbers
    record_number = 1

    # Iterate through each row in the dataframe
    for _, row in dataframe.iterrows():
        image_path = row['full_image_path']
        query = row['question']
        actual_answer = row['answer'].lower()

        # Get the predicted answer from the API
        predicted_answer = get_single_prediction(query, image_path, model_id, system_role)

        if predicted_answer:
            # Append actual and predicted answers for accuracy calculation
            actual_answers.append(actual_answer)
            predicted_answers.append(predicted_answer)
        else:
            print(f"Skipping record #{record_number} due to prediction error.")
            record_number += 1
            continue

        # Print the status indicating the record number processed and the response
        print(f"Record #{record_number} processed. Response: {predicted_answer}")

        # Increment the record number for the next iteration
        record_number += 1

    # Calculate accuracy using sklearn's accuracy_score
    accuracy = accuracy_score(actual_answers, predicted_answers) * 100
    print(f"Accuracy: {accuracy:.2f}%")

    return accuracy, predicted_answers
Results Using Default GPT-4o Model

Let's first calculate the default model accuracy on the test data. Notice that the following system prompt contains more details than the fine-tuning prompt since we want the default model to generate predictions similar to those in our dataset's answers column.


model_id = "gpt-4o-2024-08-06"
system_role = """
You are an assistant that answers questions related to images.
Return your response in a single word without a period at the end.
For digits, you should return the digit and not the word.
"""
gpt_4o_predictions = make_predictions(test_data, model_id, system_role)

Output:

Accuracy: 29.00%

The above output shows that the model achieves 29% accuracy for precisely predicting the answers to questions related to images in our dataset.

Results Using Fine-tuned GPT-4o Model

Let's now make predictions using our fine-tuned model. Here we will use the same system prompt we used for fine-tuning the model.


model_id = ft_model_id
system_role = "You are an assistant that answers questions related to images."
gpt_4o_fine_tuned_predictions = make_predictions(test_data, model_id, system_role)

Output:

Accuracy: 36.00%

The above output shows that the model achieves 36% accuracy, which is much better than the default model.

Note: These results may seem poor, with very low accuracy values. However, the accuracy here is calculated using exact string matching, which is difficult to get right. Furthermore, accuracies reported for this dataset on Kaggle are in the range of 15-25% with standard neural network baselines, which shows that our fine-tuned model performed quite well. You can further increase the model's performance by fine-tuning it on the complete dataset.

Comparing Default vs Fine-Tuned GPT-4o Model

Let's display the actual answers and the default and fine-tuned model predictions side by side to better understand the results.


comparison_df = pd.DataFrame({
    'Actual Answers': test_data['answer'],
    'Default GPT-4o': gpt_4o_predictions[1],
    'Fine-tuned GPT-4o': gpt_4o_fine_tuned_predictions[1]
})

# Display the new DataFrame
comparison_df.head(20)

Output:

img6.png

The above output shows that the default and fine-tuned models sometimes predicted the correct answer but in different words. For example, our fine-tuned model predicted chalkboard for blackboard, which are semantically similar.

To overcome this problem, we will ask the GPT-4o model to return True if two predictions are semantically similar. This will give us a better picture of the model's performance.

The following script defines the compare_answer function that takes the actual answer and the prediction as inputs and returns True if the two are semantically similar.


def compare_answer(answer, prediction):

    content = f"""
    Compare the actual answer and prediction and check if the actual answer and prediction have the same meaning.
    They don't have to be an exact match, but the meaning must be similar.
    Actual answer: {answer}.
    Prediction: {prediction}.
    Return True if they have the same meaning, else return False. Do not return anything else.

    """
    response = client.chat.completions.create(
        model= "gpt-4o-2024-08-06",
        temperature=0,
        max_tokens=10,
        messages=[
            {"role": "user", "content": content}
        ]
    )

    response = response.choices[0].message.content.strip().lower() == 'true'
    print(f"{answer} -> {prediction} -> {response}")
    return response

Next, we will define the count_matching_answers() function which takes two lists as inputs and returns the count of semantically similar values in corresponding items of the two lists.


def count_matching_answers(answers, predictions):
    count = 0
    # Iterate through both lists together using zip
    for answer, prediction in zip(answers, predictions):
        # Call the compare_answer function and increment count if True
        if compare_answer(answer, prediction):
            count += 1
    return count

Let's first check the count of semantically similar outputs for the default GPT-4o model.

matching_count = count_matching_answers(test_data['answer'], gpt_4o_predictions[1])
print(f"Number of matching answers: {matching_count}")

Output:

Number of matching answers: 36

The model shows an accuracy of 36%, better than the 29% achieved previously.

Similarly, the script below calculates the accuracy for the fine-tuned GPT-4o model.

matching_count = count_matching_answers(test_data['answer'], gpt_4o_fine_tuned_predictions[1])
print(f"Number of matching answers: {matching_count}")

Output:

Number of matching answers: 40

The accuracy now reaches 40% for our fine-tuned model.

Conclusion

OpenAI recently released a much-awaited feature: vision fine-tuning of the OpenAI models. In this article, you saw how to fine-tune the OpenAI GPT-4o model for visual question-answering. Try fine-tuning the GPT-4o model on your custom dataset and see if you get improved results. The fine-tuning is free until October 31, 2024, so trying wouldn't cost a dime anyway ;)

Image Generation with State of the Art Flux Diffusion Models

Featured Imgs 23

In one of my previous articles, I explained how to generate stunning images for free using diffusion models and showed how to use Stability AI's diffusion models for text-to-image generation.

Since then, the AI domain has progressed considerably, particularly in image generation. Black Forest Labs has released the Flux.1 series of state-of-the-art vision models.

In this article, you will see how to use Flux.1 models for text-to-image generation and text-based image modification. You will import Flux models from Hugging Face and generate images using Python code.

So, let's begin without ado.

Installing and Importing Required Libraries

Flux models are gated on Hugging Face, meaning you have to log into your Hugging Face account to access them. To do so from a Python application, particularly a Jupyter Notebook, you need to install the huggingface_hub module. In addition, you need to install the diffusers module from Hugging Face.

The script below downloads these two modules.


!pip install huggingface_hub
!pip install git+https://github.com/huggingface/diffusers.git

Note: To run the scripts in this article, you will need an Nvidia GPU. You can use Google Colab, which provides free Nvidia GPUs.

Next, let's import the required libraries into our Python application:


from huggingface_hub import notebook_login
import torch
import matplotlib.pyplot as plt
from diffusers import FluxPipeline
from diffusers import FluxImg2ImgPipeline
from diffusers.utils import load_image

notebook_login() # you need to log into your hugging face account using access token
Text to Image Generation with Flux

Flux models have two variants: timestep-distilled (FLUX.1-schnell) and guidance-distilled (FLUX.1-dev). The timestep-distilled model requires fewer sampling steps and has a maximum sequence length of 256, while the guidance-distilled variant needs about 50 sampling steps for good-quality generation and has no limitations on max sequence length.

We will use the guidance-distilled Flux.1-dev model for text-to-image generation.

The following script creates a Hugging Face pipeline by importing the pretrained Flux.1-dev model from Hugging Face.


pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

Next, for image generation, you must pass the text prompt, the output image's height and width, the guidance scale, the number of inference steps, and the maximum sequence length for the input text.

The guidance_scale parameter influences how closely the generated image adheres to the prompt. Its value ranges between 0 and 20. The num_inference_steps determines the number of denoising steps, affecting the quality and generation time. A higher number of inference steps results in a higher-quality image but takes more time to generate.

The following script will generate an image of a girl standing in front of the Eiffel Tower, holding a sign that says, "Welcome to Paris."


prompt = "A little girl standing in front of eifel tower holding a sign that says welcome to Paris"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("girl-in-paris.png")

Output:

girl-in-paris.png

From the above output, you can see that the model can generate a photo-realistic image.

Let's see another example. We will generate an image of a baby riding a lion in Times Square, NY, with elephants in the background.


prompt = "A baby riding a lion in time square new york with elephants in the background"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=10,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0]
image.save("baby_lion_time_square.png")

Output:

baby_lion_time_square.png

The above output shows that the model could generate all the details specified in the text prompt.

Finally, I will create a simple function that generates an image given a prompt and the output image name. You can use this function to generate images in your code.


def generate_image_from_text(prompt, image_name):
  image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=3.5,
    num_inference_steps=50,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
  ).images[0]
  image.save(image_name + ".png")


prompt = "A golden duck swimming in a lake with mountains and sunset view in the background"
image_name = "duck_in_lake"
generate_image_from_text(prompt, image_name)

Output:

duck_in_lake.png

As you can see from the above output, the Flux.1-dev model generates very high-quality images based on text prompts.
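
If generation speed matters more than maximum quality, the timestep-distilled FLUX.1-schnell variant mentioned earlier can also be used for text-to-image generation with far fewer sampling steps. Here is a minimal sketch (the prompt and step count are illustrative; this variant effectively ignores guidance and limits the prompt to 256 tokens):

fast_pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
fast_pipe.enable_model_cpu_offload()

fast_image = fast_pipe(
    "A golden duck swimming in a lake with mountains and sunset view in the background",
    height=1024,
    width=1024,
    guidance_scale=0.0,        # the distilled model does not use classifier-free guidance
    num_inference_steps=4,     # schnell needs only a few denoising steps
    max_sequence_length=256,   # maximum prompt length for the schnell variant
    generator=torch.Generator("cpu").manual_seed(0)
).images[0]
fast_image.save("duck_in_lake_schnell.png")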

In the next section, you will see how to modify existing images based on text prompts.

Image Modification using Text Prompts in Flux

We will use the timestep-distilled Flux.1-schnell model for image modification.

The following script creates a Hugging Face pipeline for the Flux.1-schnell model.


device = "cuda"
mod_pipe = FluxImg2ImgPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
mod_pipe = mod_pipe.to(device)

We will modify the following Wikipedia image of the Pyramids of Giza by adding birds, camels, and a river.

Input Image:

All_Gizah_Pyramids.jpg

Image modification is similar to image generation, except we must also pass the strength parameter to the pipeline object. The strength parameter defines the extent to which the original image will be modified.


url = "https://upload.wikimedia.org/wikipedia/commons/thumb/a/af/All_Gizah_Pyramids.jpg/1920px-All_Gizah_Pyramids.jpg"
init_image = load_image(url)

prompt = "add birds, camels, and blue river"

images = mod_pipe(
    prompt=prompt,
    image=init_image,
    num_inference_steps=50,
    strength=0.75,
    guidance_scale= 7.5,
    ).images[0]

images.save("pyramids_modified.png")

Output:

pyramids_modified.png

In the above output, a few birds and camels have been added to the original image.

Let's increase the values of the strength and guidance_scale parameters to see how this affects the original image.


images = mod_pipe(
    prompt=prompt,
    image=init_image,
    num_inference_steps=50,
    strength=0.85,
    guidance_scale= 10.0,
    ).images[0]

images.save("pyramids_modified2.png")

Output:

pyramids_modified2.png

The above output shows that the original image has been modified much more extensively than the previous modification.

Finally, we will define the modify_image() function, which accepts an image URL (or local path), a prompt describing the modification, and a name for the output image, and then modifies the passed image accordingly.


def modify_image(image_url, prompt, image_name):

  init_image = load_image(image_url)
  images = mod_pipe(
    prompt=prompt,
    image=init_image,
    num_inference_steps=50,
    strength=0.85,
    guidance_scale= 10.0,
    ).images[0]

  images.save(image_name + ".png")

prompt = "cars, horses"
url = "/content/1280px-Taj_Mahal,_Agra,_India_edit3.jpg"
name = "taj_mahal_modified"

modify_image(url, prompt, name)

Here is the input image.

Input Image:

1280px-Taj_Mahal,_Agra,_India_edit3.jpg

And here is the modified output. You can see some cars and horses added to the image.

Output:

taj_mahal_modified.png

Conclusion

Flux.1 models are state-of-the-art image generation models. In this article, you saw how to generate and modify images with text prompts using the Flux.1 models. I encourage you to play around with the strength and guidance_scale parameters to generate and modify your own images. Let me know if you like the results.

Text Classification and Summarization with Qwen 2.5 Model From Hugging Face

Featured Imgs 23

On September 19, 2024, Alibaba released the Qwen 2.5 series of models. The Qwen 2.5-72B base and instruct models outperformed larger state-of-the-art models like Llama 3.1-405B on multiple benchmarks. It is safe to assume that Qwen 2.5-72B is a state-of-the-art open-source large language model.

This article will show you how to use Qwen 2.5 models in your Python applications. You will see how to import Qwen 2.5 models from the Hugging Face library and generate responses. You will also see how to perform text classification and summarization tasks on custom datasets using the Qwen 2.5-7B. So, let's begin without ado.

Note: If you have a GPU with larger memory, you can also try the Qwen 2.5-72B model using the scripts provided in this article.

Installing and Importing Required Libraries

You can run the scripts in this article on Google Colab. In this case, you only need to install the following libraries.


!pip install rouge-score
!pip install --upgrade openpyxl
!pip install pandas openpyxl

The following script imports the libraries you need to run scripts in this article.

from transformers import AutoModelForCausalLM, AutoTokenizer
import pandas as pd
from sklearn.metrics import accuracy_score
from rouge_score import rouge_scorer
A Basic Example of Using Qwen 2.5 Instruct Model in Hugging Face

Before moving to text classification and summarization on custom datasets, let's first see how to generate a single response from the Qwen 2.5-7B model.

Importing the Model and Tokenizer from Hugging Face

The first step is to import the model weights and tokenizer from the Hugging Face library, as the following script demonstrates.



model_name = "Qwen/Qwen2.5-7B-Instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

Note: To use the Qwen 2.5-72B model, you can use the Qwen/Qwen2.5-72B-Instruct model path.

Generating a Response from the Qwen 2.5 Model

The next step is to generate a response from the model. To do so, you need two prompts: a system prompt and a user prompt. The system prompt tells the model its role, while the user prompt is the question that the user asks.

You need to create a list containing the user and system prompts dictionaries. Next, you can call the apply_chat_template() method to convert the messages into a format that the Qwen models understand.

The following script defines system and user prompts that ask the model to write a Python function. The output shows the formatted messages.


system_prompt = "You are an expert Python coder"

user_prompt = "Give me a Python recursive function calculate factorial of a number"

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

print(text)

Output:


<|im_start|>system
You are an expert Python coder<|im_end|>
<|im_start|>user
Give me a Python recursive function calculate factorial of a number<|im_end|>
<|im_start|>assistant

Once you have the message list, you can tokenize it using the Qwen tokenizer you imported earlier. The tokenizer returns model inputs that you can pass to the model.generate() method. Finally, you can decode the model outputs using the tokenizer.batch_decode() method to receive the final string output.

Based on the user input, the script below returns a recursive method for calculating the factorial of a number.


model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(response)

Output:

img_1.png

Let's test the function returned by the Qwen model.


def factorial(n):
    # Base case: factorial of 0 or 1 is 1
    if n == 0 or n == 1:
        return 1
    # Recursive case: n * factorial of (n-1)
    else:
        return n * factorial(n-1)

# Example usage:
number = 6
print(f"The factorial of {number} is {factorial(number)}")

Output:

The factorial of 6 is 720

The above output shows that the method functions perfectly.

Now that you know how to generate responses from a Qwen 2.5 model, let's apply it to custom datasets for text classification and summarization.

Text Classification with Qwen 2.5 Model

We will perform sentiment classification of tweets about US Airlines using the Qwen 2.5-7B model.

Importing and Preprocessing the Dataset

We will use the US Airline Sentiment dataset for the Twitter sentiment classification task.

The following script imports the dataset into a Pandas DataFrame.


## Dataset download link
## https://www.kaggle.com/datasets/crowdflower/twitter-airline-sentiment?select=Tweets.csv

dataset = pd.read_csv(r"/content/Tweets.csv")
dataset.head()

Output:

img_2.png

We will perform sentiment classification on 100 tweets divided into 34, 33, and 33 neutral, positive, and negative tweets, respectively. The script below preprocesses the dataset.


# Remove rows where 'airline_sentiment' or 'text' are NaN
dataset = dataset.dropna(subset=['airline_sentiment', 'text'])

# Remove rows where 'airline_sentiment' or 'text' are empty strings
dataset = dataset[(dataset['airline_sentiment'].str.strip() != '') & (dataset['text'].str.strip() != '')]

# Filter the DataFrame for each sentiment
neutral_df = dataset[dataset['airline_sentiment'] == 'neutral']
positive_df = dataset[dataset['airline_sentiment'] == 'positive']
negative_df = dataset[dataset['airline_sentiment'] == 'negative']

# Randomly sample records from each sentiment
neutral_sample = neutral_df.sample(n=34)
positive_sample = positive_df.sample(n=33)
negative_sample = negative_df.sample(n=33)

# Concatenate the samples into one DataFrame
dataset = pd.concat([neutral_sample, positive_sample, negative_sample])

# Reset index if needed
dataset.reset_index(drop=True, inplace=True)

# print value counts
print(dataset["airline_sentiment"].value_counts())

Output:


airline_sentiment
neutral     34
positive    33
negative    33
Name: count, dtype: int64
Predicting Tweets Sentiment with Qwen 2.5

Next, we will define the generate_model_response() function, which accepts the system and user prompt as parameters and returns the Qwen 2.5-7B model response.


def generate_model_response(system_prompt, user_prompt):

  messages = [
      {"role": "system", "content": system_prompt},
      {"role": "user", "content": user_prompt}
  ]

  text = tokenizer.apply_chat_template(
      messages,
      tokenize=False,
      add_generation_prompt=True
  )
  model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

  generated_ids = model.generate(
      **model_inputs,
      max_new_tokens=512
  )
  generated_ids = [
      output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
  ]

  response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]

  return response

Subsequently, we will iterate through all the tweets in the preprocessed dataset and predict the sentiment of each tweet using the generate_model_response() function.

Finally, we compare the predicted sentiments with the actual tweet sentiments and display the model accuracy.


def find_sentiment(dataset):

    tweets_list = dataset["text"].tolist()

    all_sentiments = []

    i = 0
    exceptions = 0
    while i < len(tweets_list):

        try:
            tweet = tweets_list[i]

            system_prompt = "You are an expert in annotating tweets with positive, negative, and neutral emotions"

            user_prompt = """What is the sentiment expressed in the following tweet about an airline?
            Select sentiment value from positive, negative, or neutral. Return only the sentiment value in small letters.
            tweet: {}""".format(tweet)

            sentiment_value = generate_model_response(system_prompt, user_prompt)
            all_sentiments.append(sentiment_value)
            i = i + 1
            print(i, sentiment_value)

        except Exception as e:
            print("===================")
            print("Exception occurred:", e)
            exceptions += 1

    print("Total exception count:", exceptions)
    accuracy = accuracy_score(all_sentiments, dataset["airline_sentiment"])
    print("Accuracy:", accuracy)

find_sentiment(dataset)

Output:


Total exception count: 0
Accuracy: 0.79

The above output shows that the model achieves an accuracy of 79% for zero-shot classification of tweets. This performance is even better than the 76% achieved by GPT-4o in this article.

In the next section, you will see how to perform text summarization with Qwen 2.5-7B model.

Text Summarization with Qwen 2.5

We will summarize BBC news articles using the Qwen 2.5-7B model and evaluate the model's performance using ROUGE scores.

Importing the Dataset

We will summarize articles from the News Articles with Summary dataset.
The script below imports the dataset into your Python application.


# Kaggle dataset download link
# https://github.com/reddzzz/DataScience_FP/blob/main/dataset.xlsx

dataset = pd.read_excel(r"/content/dataset.xlsx")
dataset = dataset.sample(frac=1)
print(dataset.shape)
dataset.head()

Output:

img_3.png

Summarizing News Articles with Qwen 2.5

The process will remain the same. We will generate model summaries from the Qwen 2.5-7B model using the generate_model_response() function that we defined earlier.

Next, we will compare the model-generated summaries with human summaries and evaluate the model's performance using ROUGE scores.

The following script defines the calculate_rouge() function, which will accept summaries generated by humans and models and return ROUGE scores for comparison.


# Function to calculate ROUGE scores
def calculate_rouge(reference, candidate):
    scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'], use_stemmer=True)
    scores = scorer.score(reference, candidate)
    return {key: value.fmeasure for key, value in scores.items()}

The following script defines the generate_summary() function, which iterates through the first 20 articles in the dataset, generates summaries using the Qwen model, and compares them with the human summaries to calculate ROUGE scores.

Finally, the average ROUGE scores across all the articles are printed on the console.


# Function to generate summaries using the Qwen model
def generate_summary(dataset):

    results = []

    i = 0

    for _, row in dataset[:20].iterrows():
      article = row['content']
      human_summary = row['human_summary']

      i +=1
      print(f"Summarizing article {i}")

      system_prompt = "You are an expert in summarizing news articles"
      user_prompt = f"Summarize the following article in 1150 characters. The summary should look like human created:\n\n{article}\n\nSummary:"

      generated_summary = generate_model_response(system_prompt, user_prompt)

      rouge_scores = calculate_rouge(human_summary, generated_summary)

      results.append({
          'article_id': row.id,
          'generated_summary': generated_summary,
          'rouge1': rouge_scores['rouge1'],
          'rouge2': rouge_scores['rouge2'],
          'rougeL': rouge_scores['rougeL']
      })

    return results

results = generate_summary(dataset)

results_df = pd.DataFrame(results)

mean_values = results_df[["rouge1", "rouge2", "rougeL"]].mean()
print(mean_values)

Output:


rouge1    0.325830
rouge2    0.068624
rougeL    0.168639
Conclusion

Qwen-2.5 models have demonstrated state-of-the-art results for text generation and natural language processing tasks.

In this article, you saw how to generate a response from the Qwen 2.5-7B model from Hugging Face and how to perform text classification and summarization on your custom datasets. I suggest you try using the Qwen 2.5-72B model to see if you get better results.
Feel free to share your feedback.

Text and Image to Video Generation using Diffusion Models in Hugging Face

Featured Imgs 23

The AI wave has introduced a myriad of exciting applications. While text generation and natural language processing are leading the AI revolution, image- and vision-based technologies are quickly catching up. The intersection of text and vision applications has seen a rapid surge recently.

In this article, you'll learn how to generate videos using text and image inputs. We'll leverage open-source models from Hugging Face to bring these applications to life. So, without further ado, let's dive in!

Installing and Importing Required Libraries

We will use the Hugging Face diffusion models to generate videos from text and images. The following script installs the libraries you will need to import these models from Hugging Face.

!pip install --upgrade transformers accelerate diffusers imageio-ffmpeg

For text-to-video generation, we will use the CogVideoX-2b diffusion model. For image-to-video generation, we will use Stability AI's img2vid model.

The following script imports the Hugging Face pipelines for the two models. We also import some utility classes to save videos and display images.


import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image
Text to Video Generation with Hugging Face Diffusers

The first step is to create a Hugging Face pipeline that can access the CogVideoX-2b model. You can also use the CogVideoX-5b model, but it requires more space and memory.

The following script creates the pipeline for the CogVideoX-2b model. We also call some utility methods such as enable_model_cpu_offload(), enable_sequential_cpu_offload(), enable_slicing(), and enable_tiling() to reduce GPU memory usage during generation.


text_video_pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-2b",
    torch_dtype=torch.float16
)

text_video_pipe.enable_model_cpu_offload()
text_video_pipe.enable_sequential_cpu_offload()
text_video_pipe.vae.enable_slicing()
text_video_pipe.vae.enable_tiling()

Next, we define our text prompt and pass the prompt and other video configurations to the text_video_pipe pipeline that we created in the previous script. You can play around with the configuration settings to see how they affect the output.

The pipeline returns video frames you can export to video using the export_to_video() utility, as the following script shows.


prompt = "A white dog running on a Caribbean beach."

video = text_video_pipe(
    prompt=prompt,
    num_videos_per_prompt=1,
    num_inference_steps=50,
    num_frames=49,
    guidance_scale=6,
    generator=torch.Generator(device="cuda").manual_seed(42),
).frames[0]

export_to_video(video, "text_to_video.mp4", fps=8)

Output:
text-to-video.gif

Note: I intentionally reduced the output video dimensions.

The above output shows the video generated based on our input prompt. Amazing, isn't it?

But the magic doesn't end here. You can also pass an image as input to a diffusion model and get an animated video in response. This is what you will see in the next section.

Image to Video Generation with Hugging Face Diffusers

We will use Stability AI's img2vid model for image-to-video generation.

The script below imports the corresponding pipeline from the Hugging Face library.


image_video_pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid",
    torch_dtype=torch.float16,
    variant="fp16"
)
image_video_pipe.enable_model_cpu_offload()

We will generate a video using the following image as input. You can use any other image if you want.


## Image link: https://www.brookhavenanimalhosp.com/site/blog/2022/11/15/15-cute-dog-breeds-that-stay-small-forever
image = load_image("/content/image-73073-800.jpg")
image = image.resize((1024, 576))
image

Output:

input-image-for-video-generation.png

To generate video from the image, you must pass the image object and the number of total frames to generate to the image_video_pipe pipeline you created in the previous script.

frames = image_video_pipe(image, num_frames=28).frames[0]
export_to_video(frames, "image_to_video.mp4", fps=7)

Output:

image-to-video.gif

Conclusion

Video generation from text and image inputs is a fascinating application. In this article, you saw how to generate videos from both text and image inputs using open-source diffusion models from Hugging Face. I encourage you to play around with these models to generate your own stunning videos using text prompts and image inputs.

Extracting Structured Outputs from LLMs in LangChain

Featured Imgs 23

Large language models (LLMs) are trained to predict the next token (a set of characters) following an input sequence of tokens. This makes LLMs well suited for generating unstructured textual responses.

However, we often need to extract structured information from unstructured text. With the Python LangChain module, you can extract structured information into a Python Pydantic object.

In this article, you will see how to extract structured information from news articles. You will extract the article's tone, type, country, title, and conclusion. You will also see how to extract structured information from single and multiple text documents.

So, let's begin without ado.

Installing and Importing Required Libraries

As always, we will first install and import the required libraries.
The script below installs the LangChain and LangChain OpenAI libraries. We will extract structured data from the news articles using the latest OpenAI GPT-4o LLM.


!pip install -U langchain
!pip install -qU langchain-openai

Next, we will import the required libraries in a Python application.


import pandas as pd
import os
from typing import List, Optional
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import OpenAI
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
Importing the Dataset

We will extract structured information from the articles in the News Articles with Summary dataset.

The following script imports the data into a Pandas DataFrame.


dataset = pd.read_excel(r"D:\Datasets\dataset.xlsx")
dataset.head(10)

Output:

image1.png

Defining the Structured Output Format

To extract structured output, we need to define the attributes of the structured output. We will extract the article title, type, tone, country, and conclusion. Furthermore, we want to categorize the article types and tones into the following categories.


article_types = [
    "informative",
    "critical",
    "opinion",
    "explanatory",
    "analytical",
    "persuasive",
    "narrative",
    "investigative",
    "feature",
    "review",
    "profile",
    "how-to/guide",
]

article_tones = [
    "aggressive",
    "neutral",
    "passive",
    "formal",
    "informal",
    "humorous",
    "serious",
    "optimistic",
    "pessimistic",
    "sarcastic",
    "enthusiastic",
    "melancholic",
    "objective",
    "subjective",
    "cautious",
    "assertive",
    "conciliatory",
    "urgent"
]

Next, you must define a class inheriting from the Pydantic BaseModel class. Inside the class, you define the attributes containing the structured information.

For example, in the following script, the title attribute contains a string type article title. The LLM will use the attribute description to extract information for this attribute from the article text.

We will extract the title, type, tone, country, and conclusion.


class ArticleInformation(BaseModel):
    """Information about a news paper article"""


    title:str = Field(description= "This is the title of the article in less than 100 characters")
    article_type: str = Field(description = f"The type of the article. It can be one of the following: {article_types}")
    tone: str = Field(description = f"The tone of the article. It can be one of the following: {article_tones}")
    country: str = Field(description= """The country which is at the center of discussion in the article.
                                         Return global if the article is about the whole world.""")

    conclusion: str = Field(description= "The conclusion of the article in less than 100 words.")


Extracting the Structured Output from Text

Next, you must define an LLM to extract structured information from the news article. In the following script, we will use the latest OpenAI GPT-4o LLM.


OPENAI_API_KEY = os.environ.get('OPENAI_API_KEY')
llm = ChatOpenAI(api_key = OPENAI_API_KEY ,
                 temperature = 0,
                model_name = "gpt-4o-2024-08-06")

You need to define a prompt that instructs the LLM to act as an expert extraction algorithm while extracting structured outputs.

Subsequently, using the LangChain Expression Language, we will create a chain that passes the prompt to an LLM. Notice that here, we call the with_structured_output() method on the LLM object and pass it the ArticleInformation class to the schema attribute of the method. This ensures the output object contains attributes from the ArticleInformation class.


extraction_prompt = """
You are an expert extraction algorithm.
Only extract relevant information from the text.
If you do not know the value of an attribute asked to extract,
return null for the attribute's value.
"""

prompt = ChatPromptTemplate.from_messages([
    ("system", extraction_prompt),
    ("user", "{input}")
])

extraction_chain = prompt | llm.with_structured_output(schema = ArticleInformation)

Finally, we can call the invoke() function of the chain you just created and pass it the article text.


first_article = dataset["content"].iloc[0]
article_information = extraction_chain.invoke({"input":first_article})
print(article_information)

Output:

image2.png

From the above output, you can see structured data extracted from the article.

Extracting a List of Formatted Items

In most cases, you will want to extract structured data from multiple text documents. To do so, you have two options: merge multiple documents into one document or iterate through multiple documents and extract structured data from each document.

Extracting List of Items From a Single Merged Document

You can merge multiple documents into a single document and then create a Pydantic class that contains a list of objects of the Pydantic class holding the structured data you want to extract. This approach is helpful only if you have a small number of documents, since merging many documents can produce more tokens than the LLM's context window allows.

To do so, we will create another Pydantic class with a list of objects from the initial Pydantic class containing structured data information.

For example, in the following script, we define the ArticleInfos class, whose articles attribute holds a list of ArticleInformation objects.


class ArticleInfos(BaseModel):
    """Extracted data about multiple articles."""

    # Creates a model so that we can extract multiple entities.
    articles: List[ArticleInformation]

Next, we will merge the first 10 documents from our dataset using the following script.


# Function to generate the formatted article
def format_articles(df, num_articles=10):
    formatted_articles = ""
    for i in range(min(num_articles, len(df))):
        article_info = f"================================================================\n"
        article_info += f"Article Number: {i+1}, {df.loc[i, 'author']}, {df.loc[i, 'date']}, {df.loc[i, 'year']}, {df.loc[i, 'month']}\n"
        article_info += "================================================================\n"
        article_info += f"{df.loc[i, 'content']}\n\n"
        formatted_articles += article_info
    return formatted_articles

# Get the formatted articles for the first 10
formatted_articles = format_articles(dataset, 10)

# Output the result
print(formatted_articles)

Output:

image3.png

The above output shows a single large document containing the text of the first ten articles.

We will create a chain where the LLM uses the ArticleInfos class in the llm.with_structured_output() method.

Finally, we call the invoke() method and pass our document containing multiple articles, as shown in the following script.

If you print the articles attribute from the LLM response, you will see that it contains a list of structured items corresponding to each article.


extraction_chain = prompt | llm.with_structured_output(schema = ArticleInfos)
article_information = extraction_chain.invoke({"input":formatted_articles})
print(article_information.articles)

Output:

image4.png

Using the script below, you can store the extracted information in a Pandas DataFrame.


# Converting the list of objects to a list of dictionaries
articles_data = [
    {
        "title": article.title,
        "article_type": article.article_type,
        "tone": article.tone,
        "country": article.country,
        "conclusion": article.conclusion
    }
    for article in article_information.articles
]

# Creating a DataFrame from the list of dictionaries
df = pd.DataFrame(articles_data)

df.head(10)

Output:

image5.png

The above output shows the extracted article title, type, tone, country, and conclusion in a Pandas DataFrame.

Extracting List of Items From Multiple Documents

The second option for extracting structured data from multiple documents is to simply iterate over each document and use the Pydantic structured class to extract structured information. I prefer this approach if I have a large number of documents.

The following script iterates through the last 10 documents in the dataset, extracts structured data from each document, and stores the extracted data in a list.


extraction_chain = prompt | llm.with_structured_output(schema = ArticleInformation)

articles_information_list = []
for index, row in dataset.tail(10).iterrows():
    content_text = row['content']
    article_information = extraction_chain.invoke({"input":content_text})
    articles_information_list.append(article_information)

articles_information_list

Output:

image6.png

Finally, we can convert the list of extracted data into a Pandas DataFrame using the following script.


# Converting the list of objects to a list of dictionaries
articles_data = [
    {
        "title": article.title,
        "article_type": article.article_type,
        "tone": article.tone,
        "country": article.country,
        "conclusion": article.conclusion
    }
    for article in articles_information_list
]

# Creating a DataFrame from the list of dictionaries
df = pd.DataFrame(articles_data)

# Displaying the DataFrame
df.head(10)

Output:

image7.png

Conclusion

Extracting structured data from an LLM can be crucial, particularly for data engineering, preprocessing, analysis, and visualization tasks. In this article, you saw how to extract structured data using LLMs in LangChain, both from a single document and multiple documents.

If you have any feedback, please leave it in the comments section.

Enhancing RAG Functionalities using Tools and Agents in LangChain

Featured Imgs 23

Retrieval augmented generation (RAG) allows large language models (LLMs) to answer queries related to the data the models have not seen during training. In my previous article, I explained how to develop RAG systems using the Claude 3.5 Sonnet model.

However, RAG systems only answer queries about the data stored in the vector database. For example, you have a RAG system that answers queries related to financial documents in your database. If you ask it to search the internet for some information, it will not be able to do so.

This is where tools and agents come into play. Tools and agents enable LLMs to retrieve information from various external sources such as the internet, Wikipedia, YouTube, or virtually any Python method implemented as a tool in LangChain.

This article will show you how to enhance the functionalities of your RAG systems using tools and agents in the Python LangChain framework.

So, let's begin without ado.

Installing and Importing Required Libraries

The following script installs the required libraries, including the Python LangChain framework and its associated modules and the OpenAI client.



!pip install -U langchain
!pip install langchain-core
!pip install langchainhub
!pip install -qU langchain-openai
!pip install pypdf
!pip install faiss-cpu
!pip install --upgrade --quiet  wikipedia

The script below imports the required libraries into your Python application.


from langchain.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain.tools.retriever import create_retriever_tool
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_core.tools import tool
from langchain import hub
import os

from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
Enhancing RAG with Tools and Agents

To enhance RAG using tools and agents in LangChain, follow these steps.

  1. Import or create tools you want to use with RAG.
  2. Create a retrieval tool for RAG.
  3. Add retrieval and other tools to an agent.
  4. Create an agent executor that invokes agents' calls.

The benefit of agents over chains is that agents decide at runtime which tool to use to answer user queries.

This article will enhance the RAG model's performance using the Wikipedia tool. We will create a LangChain agent with a RAG tool capable of answering questions from a document containing information about the British parliamentary system. We will incorporate the Wikipedia tool into the agent to enhance its functionality.

If a user asks a question about the British parliament, the agent will call the RAG tool to answer it. In case of any other query, the agent will use the Wikipedia tool to search for answers on Wikipedia.

Let's implement this model step by step.

Importing Wikipedia Tool

The following script imports the built-in Wikipedia tool from the LangChain module. To retrieve Wikipedia pages, pass your query to the run() method.


wikipedia_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
response = wikipedia_tool.run("What are large language models?")
print(response)

Output:

image1.png

To add the above tool to an agent, you must define a function using the @tool decorator. Inside the method, you simply call the run() method as you previously did and return the response.


@tool
def WikipediaSearch(search_term: str):

    """
    Use this tool to search for wikipedia articles.
    If a user asks to search the internet, you can search via this wikipedia tool.
    """


    result = wikipedia_tool.run(search_term)
    return result

Next, we will create a retrieval tool that implements the RAG functionality.

Creating Retrieval Tool

In my article on Retrieval Augmented Generation with Claude 3.5 Sonnet, I explained how to create a retriever using LangChain. The process remains the same here.


openai_api_key = os.getenv('OPENAI_API_KEY')

loader = PyPDFLoader("https://web.archive.org/web/20170809122528id_/http://global-settlement.org/pub/The%20English%20Constitution%20-%20by%20Walter%20Bagehot.pdf")
docs = loader.load_and_split()

documents = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=200
).split_documents(docs)

embeddings = OpenAIEmbeddings(openai_api_key = openai_api_key)
vector = FAISS.from_documents(documents, embeddings)

retriever = vector.as_retriever()

You can query the vector retriever using the invoke() method. In the output, you will see the document sections with the highest semantic similarity to the input query.


query = """
"What is the difference between the house of lords and house of commons? How members are elected for both?"
"""
retriever.invoke(query)[:3]

Output:

image2.png

Next, we will create a retrieval tool that uses the vector retriever you created to answer user queries. You can create a RAG retrieval tool using the create_retriever_tool() function.


BritishParliamentSearch = create_retriever_tool(
    retriever,
    "british_parliament_search",
    "Use this tool tos earch for information about the british parliament, house of lords and house of common and any other related information.",
)

We have created a Wikipedia tool and a retriever (RAG) tool; the next step is adding these tools to an agent.

Creating Tool Calling Agent

First, we will create a list containing all our tools. Next, we will define the prompt that we will use to call our agent. I used a built-in prompt from LangSmith, which you can see in the script's output below.


tools = [WikipediaSearch, BritishParliamentSearch]
# Get the prompt to use - you can modify this!
prompt = hub.pull("hwchase17/openai-functions-agent")
prompt.messages

Output:

image3.png

You can use your own prompt if you want.
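
If you would rather define your own prompt than pull one from the hub, a minimal sketch looks like the following. This is an illustrative example that I am adding here, not part of the original code; note that a tool-calling agent's prompt must include an agent_scratchpad placeholder, while chat_history is optional.


from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

custom_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder("chat_history", optional=True),  # optional conversation memory
    ("human", "{input}"),
    MessagesPlaceholder("agent_scratchpad"),  # required for tool-calling agents
])

You can then pass custom_prompt to create_tool_calling_agent() in place of the hub prompt.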

We also need to define the LLM we will use with the agent. We will use the OpenAI GPT-4o model in this script, but you can use any other LLM available in LangChain.


llm = ChatOpenAI(model="gpt-4o",
                 temperature=0,
                 api_key=openai_api_key,
                )

Next, we will create a tool-calling agent that generates responses using the tools, LLM, and the prompt we just defined.

Finally, to execute an agent, we need to define our agent executor, which returns the agent's response to the user when invoked via the invoke() method.

In the script below, we ask our agent a question about the British parliament.


agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose = True)
response = agent_executor.invoke({"input": "How many members are there in the House of Common?"})
print(response)

Output:

image4.png

As you can see from the above output, the agent invoked the british_parliament_search tool to generate a response.

Let's ask another question about the President of the United States. Since this information is not available in the document that the RAG tool uses, the agent will call the WikipediaSearch tool to generate the response to this query.


response = agent_executor.invoke({"input": "Who is the current president of United States?"})

Output:

image5.png

Finally, if you want to return only the response text without any additional information, you can use the output key of the response, as shown below:


print(response["output"])

Output:


The current President of the United States is Joe Biden. He is the 46th president and assumed office on January 20, 2021.

As a last step, I will show you how to add memory to your agents so that they remember previous conversations with the user.

Adding Memory to Agent Executor

We will first create an AgentExecutor object as we did previously, but this time, we will set verbose = False since we are not interested in seeing the agent's internal workings.

Next, we will create an object of the ChatMessageHistory() class to save past conversations.

Finally, we will create an object of the RunnableWithMessageHistory() class and pass to it the agent executor and message history objects. We also pass the keys for the user input and chat history.


agent_executor = AgentExecutor(agent=agent, tools=tools, verbose = False)

message_history = ChatMessageHistory()

agent_with_chat_history = RunnableWithMessageHistory(
    agent_executor,
    # This is needed because in most real world scenarios, a session id is needed
    # It isn't really used here because we are using a simple in memory ChatMessageHistory
    lambda session_id: message_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)
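
The script above keeps a single in-memory history object. If you want separate conversations per user, a common pattern is to look up a history by session ID. The sketch below is illustrative only; session_store, get_session_history, and agent_with_session_history are names I am introducing here, not part of the original code.


session_store = {}

def get_session_history(session_id):
    # Create a new history the first time a session ID is seen
    if session_id not in session_store:
        session_store[session_id] = ChatMessageHistory()
    return session_store[session_id]

agent_with_session_history = RunnableWithMessageHistory(
    agent_executor,
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)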

Next, we will define a function generate_response() that accepts a user query as a function parameter and invokes the RunnableWithMessageHistory() class object. In this case, you also need to pass the session ID, which points to the past conversation. You can have multiple session IDs if you want multiple conversations.


def generate_response(query):
    response = agent_with_chat_history.invoke(
        {"input": query},
        config={"configurable": {"session_id": "<foo>"}}
    )

    return response

Let's test the generate_response() function by asking a question about the British parliament.


query = "What is the difference between the house of lords and the house of commons?"
response = generate_response(query)
print(response["output"])

Output:

image6.png

Next, we will ask a question about the US president.


query = "Who is the current President of America?"
response = generate_response(query)
print(response["output"])

Output:


The current President of the United States is Joe Biden. He assumed office on January 20, 2021, and is the 46th president of the United States.

Next, we will only ask "And of France?". Since the agent remembers the past conversation, it will figure out that the user wants to know about the current French President.


query = "And of France?"
response = generate_response(query)
print(response["output"])

Output:


The current President of France is Emmanuel Macron. He has been in office since May 14, 2017, and was re-elected for a second term in 2022.
Conclusion

Retrieval augmented generation allows you to answer questions using documents from a vector database. However, you may need to fetch information from external sources. This is where tools and agents come into play.

In this article, you saw how to enhance the functionalities of your RAG systems using tools and agents in LangChain. I encourage you to incorporate other tools and agents into your RAG systems to build amazing LLM products.

How to Fine-tune the OpenAI GPT-4o Model – The Wait is Finally Over

Featured Imgs 23

On August 20, 2024, OpenAI enabled GPT-4o fine-tuning in the OpenAI playground and the OpenAI API. The much-awaited feature is free for fine-tuning 1 million daily tokens until September 23, 2024.

In this article, I will show you how to fine-tune the OpenAI GPT-4o model for text classification and summarization tasks.

It is important to note that in my previous articles I have already demonstrated the results obtained for zero-shot text classification and zero-shot text summarization using the default GPT-4o model. In this article, you will see that fine-tuning the GPT-4o model significantly improves its text classification and text summarization performance.

So, let's begin without ado.

Installing and Importing Required Libraries

The following script installs the Python libraries you need to run the code in this article.


!pip install openai
!pip install rouge-score
!pip install --upgrade openpyxl
!pip install pandas openpyxl

The script below imports the required libraries into your Python application.


import os
import json
import time
import pandas as pd
from rouge_score import rouge_scorer
from sklearn.metrics import accuracy_score
from openai import OpenAI
Fine-tuning GPT-4o for Text Classification

In a previous article, I explained the process of fine-tuning the GPT-4o mini and GPT-3.5 turbo models for text classification.

The process remains the same for fine-tuning GPT-4o.
We will first import the text classification dataset, which in this article is the Twitter US Airline Sentiment Dataset.

The following script imports the dataset.


dataset = pd.read_csv(r"D:\Datasets\Tweets.csv")
dataset.head()

Output:

image1.png

Next, we will write the preprocess_data() function, which takes in a dataset, the start index n, and the number of records per category as parameters. It divides the dataset by sentiment category and returns the specified number of records from each category, beginning at the given index. This approach ensures we have an equal number of records for each sentiment category.

We will fetch 600 records (200 each for the positive, negative, and neutral categories) for training and 99 records (33 for each category) for testing. You can use more records for fine-tuning if you want.




def preprocess_data(dataset, n, records):

    # Remove rows where 'airline_sentiment' or 'text' are NaN
    dataset = dataset.dropna(subset=['airline_sentiment', 'text'])

    # Remove rows where 'airline_sentiment' or 'text' are empty strings
    dataset = dataset[(dataset['airline_sentiment'].str.strip() != '') & (dataset['text'].str.strip() != '')]

    # Filter the DataFrame for each sentiment
    neutral_df = dataset[dataset['airline_sentiment'] == 'neutral']
    positive_df = dataset[dataset['airline_sentiment'] == 'positive']
    negative_df = dataset[dataset['airline_sentiment'] == 'negative']

    # Select records from Nth index
    neutral_sample = neutral_df[n: n +records]
    positive_sample = positive_df[n: n +records]
    negative_sample = negative_df[n: n +records]

    # Concatenate the samples into one DataFrame
    dataset = pd.concat([neutral_sample, positive_sample, negative_sample])

    # Reset index if needed
    dataset.reset_index(drop=True, inplace=True)

    dataset = dataset[["text", "airline_sentiment"]]

    return dataset

The following script creates training and test sets.


training_data = preprocess_data(dataset, 0, 200)
print("Training data value counts:\n", training_data["airline_sentiment"].value_counts())
print("===========================")
test_data = preprocess_data(dataset, 600, 33)
print("Test data value counts:\n", test_data["airline_sentiment"].value_counts())

Output:

image2.png

Next, we convert our dataset into the JSON format required to fine-tune OpenAI models.


# JSON file path
json_file_path = r"D:\Datasets\airline_sentiments.json"

# Function to create the JSON structure for each row
def create_json_structure(row):
    return {
        "messages": [
            {"role": "system", "content": "You are a Twitter sentiment analysis expert who can predict sentiment expressed in the tweets about an airline. You select sentiment value from positive, negative, or neutral."},
            {"role": "user", "content": row['text']},
            {"role": "assistant", "content": row['airline_sentiment']}
        ]
    }

# Convert DataFrame to JSON structures
json_structures = training_data.apply(create_json_structure, axis=1).tolist()

# Write JSON structures to file, each on a new line
with open(json_file_path, 'w') as f:
    for json_structure in json_structures:
        f.write(json.dumps(json_structure) + '\n')

print(f"Data has been written to {json_file_path}")

To fine-tune the OpenAI model, you need to upload training files to the OpenAI server. To do so, create a client object of the OpenAI class and pass the JSON file to the files.create() method of the client object.


client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)


training_file = client.files.create(
  file=open(json_file_path, "rb"),
  purpose="fine-tune"
)

Finally, as shown in the script below, you can start fine-tuning using the client.fine_tuning.jobs.create() method. Here, you must pass the GPT-4o model ID gpt-4o-2024-08-06 to the model attribute.


fine_tuning_job_gpt4o = client.fine_tuning.jobs.create(
  training_file=training_file.id,
  model="gpt-4o-2024-08-06"
)

You can see fine-tuning events for your fine-tuning job using the following script:


# List up to 10 events from a fine-tuning job
print(client.fine_tuning.jobs.list_events(fine_tuning_job_id = fine_tuning_job_gpt4o.id,
                                    limit=10))

Once fine-tuning is completed, you will receive an email with the fine-tuned model ID. Alternatively, you can retrieve the fine-tuned model ID using the following script.


ft_model_id = client.fine_tuning.jobs.retrieve(fine_tuning_job_gpt4o.id).fine_tuned_model
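
If you prefer not to wait for the email, you can also poll the job until it reaches a terminal state. The sketch below assumes the standard status values reported by the fine-tuning API (for example, succeeded, failed, or cancelled).


# Optional: poll the fine-tuning job until it finishes
while True:
    job = client.fine_tuning.jobs.retrieve(fine_tuning_job_gpt4o.id)
    if job.status in ("succeeded", "failed", "cancelled"):
        print(f"Fine-tuning finished with status: {job.status}")
        break
    time.sleep(60)  # wait a minute before checking again

ft_model_id = job.fine_tuned_model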

Once you have the fine-tuned model ID, you can use it like any default OpenAI model. The following script defines the find_sentiment() function, which uses the fine-tuned model ID to predict the sentiments of the tweets in the test set and finally prints the overall fine-tuned model accuracy.


def find_sentiment(client, model, dataset):
    tweets_list = dataset["text"].tolist()

    all_sentiments = []


    i = 0


    while i < len(tweets_list):

        try:
            tweet = tweets_list[i]
            content = """What is the sentiment expressed in the following tweet about an airline?
            Select sentiment value from positive, negative, or neutral. Return only the sentiment value in small letters.
            tweet: {}""".format(tweet)

            response = client.chat.completions.create(
                model=model,
                temperature=0,
                max_tokens=10,
                messages=[
                    {"role": "user", "content": content}
                ]
            )

            sentiment_value = response.choices[0].message.content

            all_sentiments.append(sentiment_value)
            i += 1
            print(i, sentiment_value)

        except Exception as e:
            print("===================")
            print("Exception occurred:", e)

    accuracy = accuracy_score(all_sentiments, dataset["airline_sentiment"])
    print(f"Accuracy: {accuracy}")

find_sentiment(client,ft_model_id, test_data)

Output:

image3.png

The above output shows that the fine-tuned model achieved an accuracy of 92.92%, significantly better than the accuracy achieved via the default GPT-4o model in a previous article.

In the next section, you will see how to fine-tune GPT-4o for text summarization.

Fine-tuning GPT-4o for Text Summarization

We will use the News Articles with Summary dataset to fine-tune the GPT-4o model.

The following script imports the dataset.


dataset = pd.read_excel(r"D:\Datasets\dataset.xlsx")
dataset = dataset.sample(frac=1)
dataset['summary_length'] = dataset['human_summary'].apply(len)
average_length = dataset['summary_length'].mean()
print(f"Average length of summaries: {average_length:.2f} characters")
print(dataset.shape)
dataset.head()

Output:

image4.png

The rest of the process remains the same as for text classification. We will filter a subset of the data for fine-tuning (in this case, records 101 to 200) and convert the dataset into the OpenAI-compliant JSON format.


selected_data = dataset.iloc[101:201]

# Function to create the JSON structure for each row
def create_json_structure(row):
    return {
        "messages": [
            {"role": "system", "content": "You are analyzing news articles. Use the provided content to generate a concise summary."},
            {"role": "user", "content": row['content']},
            {"role": "assistant", "content": row['human_summary']}
        ]
    }

# Convert selected DataFrame rows to JSON structures
json_structures = selected_data.apply(create_json_structure, axis=1).tolist()

# JSON file path
json_file_path = r"D:\Datasets\news_summaries.json"

# Write JSON structures to file, each on a new line
with open(json_file_path, 'w') as f:
    for json_structure in json_structures:
        f.write(json.dumps(json_structure) + '\n')

print(f"Data has been written to {json_file_path}")

Next, upload the training file to OpenAI servers.


training_file = client.files.create(
  file=open(json_file_path, "rb"),
  purpose="fine-tune"
)

Finally, you can start fine-tuning using the following script.


fine_tuning_job_gpt4o_ts = client.fine_tuning.jobs.create(
  training_file=training_file.id,
  model="gpt-4o-2024-08-06"
)

Once the model is fine-tuned, retrieve the model ID using the following script.


ft_model_id = client.fine_tuning.jobs.retrieve(fine_tuning_job_gpt4o_ts.id).fine_tuned_model

We will use the ROUGE scores to evaluate the text summarization performance of the fine-tuned model. The following script defines the calculate_rouge() function that allows you to calculate ROUGE1, ROUGE2, and ROUGEL scores.


# Function to calculate ROUGE scores
def calculate_rouge(reference, candidate):
    scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'], use_stemmer=True)
    scores = scorer.score(reference, candidate)
    return {key: value.fmeasure for key, value in scores.items()}

Finally, the following script demonstrates how we generate the summaries of the first 20 articles in our dataset using the fine-tuned model.



%%time

results = []

i = 0

for _, row in dataset[:20].iterrows():
    article = row['content']
    human_summary = row['human_summary']

    i = i + 1
    print(f"Summarizing article {i}.")

    prompt = f"Summarize the following article in 1150 characters. The summary should look like human created:\n\n{article}\n\nSummary:"

    response = client.chat.completions.create(
        model= ft_model_id,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1150,
        temperature=0.7
    )
    generated_summary = response.choices[0].message.content
    rouge_scores = calculate_rouge(human_summary, generated_summary)

    results.append({
    'article_id': row.id,
    'generated_summary': generated_summary,
    'rouge1': rouge_scores['rouge1'],
    'rouge2': rouge_scores['rouge2'],
    'rougeL': rouge_scores['rougeL']
    })

The following script prints average ROUGE scores.


results_df = pd.DataFrame(results)
mean_values = results_df[["rouge1", "rouge2", "rougeL"]].mean()
print(mean_values)

Output:


rouge1    0.579758
rouge2    0.417515
rougeL    0.431266
dtype: float64

The above output shows that the fine-tuned GPT-4o model achieves significantly higher ROUGE scores than the default GPT-4o model.

Conclusion

Fine-tuning can significantly improve a model's performance on a specific task. This article explains how to fine-tune the OpenAI GPT-4o model for text classification and text summarization. The results show that the fine-tuned GPT-4o model significantly outperforms the default GPT-4o model on both tasks.

GPT-4o Snapshot vs Meta Llama 3.1 70b for Zero-Shot Text Summarization

Featured Imgs 23

In a previous article, I compared GPT-4o mini vs. GPT-4o and GPT-3.5 Turbo for zero-shot text summarization. The results showed that GPT-4o mini achieves comparable performance for zero-shot text summarization at a much lower price than the other models.

In this article, I will compare Meta Llama 3.1 70b with the OpenAI GPT-4o snapshot for zero-shot text summarization. The Meta Llama 3.1 series consists of Meta's state-of-the-art LLMs, including Llama 3.1 8b, Llama 3.1 70b, and Llama 3.1 405b. On the other hand, the [OpenAI GPT-4o](https://platform.openai.com/docs/models) snapshot is OpenAI's latest LLM. We will use the Groq API to access Meta Llama 3.1 70b and the OpenAI API to access the GPT-4o snapshot model.

So, let's begin without ado.

Installing and Importing Required Libraries

The following script installs the Python libraries you will need to run scripts in this article.


!pip install openai
!pip install groq
!pip install rouge-score
!pip install --upgrade openpyxl
!pip install pandas openpyxl

The script below imports the required libraries into your Python application.


import os
import time
import pandas as pd
from rouge_score import rouge_scorer
from openai import OpenAI
from groq import Groq
Importing the Dataset

This article will summarize the text in the News Articles with Summary dataset. The dataset consists of article content and human-generated summaries.

The following script imports the dataset Excel file into a Pandas DataFrame.


# Dataset download link
# https://github.com/reddzzz/DataScience_FP/blob/main/dataset.xlsx


dataset = pd.read_excel(r"D:\Datasets\dataset.xlsx")
dataset = dataset.sample(frac=1)
dataset['summary_length'] = dataset['human_summary'].apply(len)
average_length = dataset['summary_length'].mean()
print(f"Average length of summaries: {average_length:.2f} characters")
print(dataset.shape)
dataset.head()

Output:

image1.png

The content column stores the article's text, and the human_summary column contains the corresponding human-generated summaries.

We also calculate the average number of characters in the human-generated summaries, which we will use to generate summaries via the LLM models.

Text Summarization with GPT-4o Snapshot

We are now ready to summarize articles using GPT-4o snapshot and Llama 3.1 70b.

First, we'll create an instance of the OpenAI class, which we'll use to interact with various OpenAI language models. When initializing this object, you must provide your OpenAI API Key.

Additionally, we'll define the calculate_rouge() function, which computes the ROUGE-1, ROUGE-2, and ROUGE-L scores by comparing the LLM-generated summaries with the human-generated ones.

ROUGE scores are used to evaluate the quality of machine-generated text, such as summaries, by comparing them with human-generated text. ROUGE-1 evaluates the overlap of unigrams (single words), ROUGE-2 considers bigrams (pairs of consecutive words), and ROUGE-L focuses on the longest common subsequence between the two texts.


client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)

# Function to calculate ROUGE scores
def calculate_rouge(reference, candidate):
    scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'], use_stemmer=True)
    scores = scorer.score(reference, candidate)
    return {key: value.fmeasure for key, value in scores.items()}
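
As a quick illustration of the helper, you can call calculate_rouge() on a made-up reference/candidate pair; the two sentences below are examples only.


# Toy example: higher overlap with the reference yields higher ROUGE scores
example_scores = calculate_rouge(
    "The airline cancelled hundreds of flights because of the storm.",
    "Hundreds of flights were cancelled by the airline due to the storm."
)
print(example_scores)  # dictionary with rouge1, rouge2, and rougeL f-measures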

Next, we will iterate through the first 20 articles in the dataset and call the GPT-4o snapshot model to produce a summary of the article with a target length of 1150 characters. We will use 1150 characters because the average length of the human-generated summaries is 1168 characters. Next, the LLM-generated and human-generated summaries are passed to the calculate_rouge() function, which returns ROUGE scores for the LLM-generated summaries. These ROUGE scores, along with the generated summaries, are stored in the results list.


%%time

results = []

i = 0

for _, row in dataset[:20].iterrows():
    article = row['content']
    human_summary = row['human_summary']

    i = i + 1
    print(f"Summarizing article {i}.")

    prompt = f"Summarize the following article in 1150 characters. The summary should look like human created:\n\n{article}\n\nSummary:"

    response = client.chat.completions.create(
        model= "gpt-4o-2024-08-06",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1150,
        temperature=0.7
    )
    generated_summary = response.choices[0].message.content
    rouge_scores = calculate_rouge(human_summary, generated_summary)

    results.append({
    'article_id': row.id,
    'generated_summary': generated_summary,
    'rouge1': rouge_scores['rouge1'],
    'rouge2': rouge_scores['rouge2'],
    'rougeL': rouge_scores['rougeL']
    })

Output:

image2.png

The above output shows that it took 59 seconds to summarize 20 articles.

Next, we convert the results list into a results_df dataframe and display the average ROUGE scores for 20 articles.


results_df = pd.DataFrame(results)
mean_values = results_df[["rouge1", "rouge2", "rougeL"]].mean()
print(mean_values)

Output:


rouge1    0.386724
rouge2    0.100371
rougeL    0.187491
dtype: float64

The above results show that the ROUGE scores obtained by the GPT-4o snapshot are slightly lower than those obtained by the GPT-4o model in the previous article.

Let's evaluate GPT-4o's summaries using another LLM, in this case GPT-4o mini.

In the following script, we define the llm_evaluate_summary() function, which accepts the original article and the LLM-generated summary and evaluates the summary on the completeness, conciseness, and coherence criteria.


def llm_evaluate_summary(article, summary):
    prompt = f"""Evaluate the following summary for the given article. Rate it on a scale of 1-10 for:
    1. Completeness: Does it capture all key points?
    2. Conciseness: Is it brief and to the point?
    3. Coherence: Is it well-structured and easy to understand?

    Article: {article}

    Summary: {summary}

    Provide the ratings as a comma-separated list (completeness,conciseness,coherence).
    """
    response = client.chat.completions.create(
        model= "gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=100,
        temperature=0.7
    )
    return [float(score) for score in response.choices[0].message.content.strip().split(',')]

We iterate through the first 20 articles and pass the content and LLM-generated summaries to the llm_evaluate_summary() function.


scores_dict = {'completeness': [], 'conciseness': [], 'coherence': []}

i = 0
for _, row in results_df.iterrows():
    i = i + 1
    # Corrected method to access content by article_id
    article = dataset.loc[dataset['id'] == row['article_id'], 'content'].iloc[0]
    scores = llm_evaluate_summary(article, row['generated_summary'])
    print(f"Article ID: {row['article_id']}, Scores: {scores}")

    # Store the scores in the dictionary
    scores_dict['completeness'].append(scores[0])
    scores_dict['conciseness'].append(scores[1])
    scores_dict['coherence'].append(scores[2])

Finally, the script below calculates and displays the average scores for completeness, conciseness, and coherence for GPT-4o snapshot summaries.


# Calculate the average scores
average_scores = {
    'completeness': sum(scores_dict['completeness']) / len(scores_dict['completeness']),
    'conciseness': sum(scores_dict['conciseness']) / len(scores_dict['conciseness']),
    'coherence': sum(scores_dict['coherence']) / len(scores_dict['coherence']),
}

# Convert to DataFrame for better visualization (optional)
average_scores_df = pd.DataFrame([average_scores])
average_scores_df.columns = ['Completeness', 'Conciseness', 'Coherence']

# Display the DataFrame
average_scores_df.head()

Output:

image3.png

Text Summarization with Llama 3.1 70b

In this section, we will perform a zero-shot text summarization of the same set of articles using the Llama 3.1 70b model.

You should try the Llama 3.1 405b model to get better results. However, at the time of writing this article, Groq Cloud had suspended the API calls for Llama 405b due to excessive demand. You can also try other cloud providers to run Llama 3.1 405b.

The process remains the same for text summarization using Meta Llama 3.1 70b. The only difference is that we will create an object of the Groq client in this case.


client = Groq(
    api_key=os.environ.get("GROQ_API_KEY"),
)

Next, we will iterate through the first 20 articles in the dataset, generate their summaries using the Llama 3.1 70b model, calculate ROUGE scores, and store the results in the results list.


%%time

results = []

i = 0

for _, row in dataset[:20].iterrows():
    article = row['content']
    human_summary = row['human_summary']

    i = i + 1
    print(f"Summarizing article {i}.")

    prompt = f"Summarize the following article in 1150 characters. The summary should look like human created:\n\n{article}\n\nSummary:"

    response = client.chat.completions.create(
          model="llama-3.1-70b-versatile",
          temperature = 0.7,
          max_tokens = 1150,
          messages=[
                {"role": "user", "content":  prompt}
            ]
    )

    generated_summary = response.choices[0].message.content
    rouge_scores = calculate_rouge(human_summary, generated_summary)

    results.append({
    'article_id': row.id,
    'generated_summary': generated_summary,
    'rouge1': rouge_scores['rouge1'],
    'rouge2': rouge_scores['rouge2'],
    'rougeL': rouge_scores['rougeL']
    })

Output:

image4.png

The above output shows that it took only 24 seconds to process 20 articles using Llama 3.1 70b. The faster processing is because Llama 3.1 70b is a smaller model than the GPT-4o snapshot. Also, Groq uses LPUs (language processing units), which are much faster for LLM inference.

Next, we will convert the results list into the results_df dataframe and display the average ROUGE scores.


results_df = pd.DataFrame(results)
mean_values = results_df[["rouge1", "rouge2", "rougeL"]].mean()
print(mean_values)

Output:


rouge1    0.335863
rouge2    0.080865
rougeL    0.170834
dtype: float64

The above output shows that the ROUGE scores for Meta Llama 3.1 70b are lower than those of the GPT-4o snapshot model. I would again stress that you should try Llama 3.1 405b to get better results.

Finally, we will evaluate the summaries generated via Llama 3.1 70b using the GPT-4o mini model for completeness, conciseness, and coherence.


client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)

scores_dict = {'completeness': [], 'conciseness': [], 'coherence': []}

i = 0
for _, row in results_df.iterrows():
    i = i + 1
    # Corrected method to access content by article_id
    article = dataset.loc[dataset['id'] == row['article_id'], 'content'].iloc[0]
    scores = llm_evaluate_summary(article, row['generated_summary'])
    print(f"Article ID: {row['article_id']}, Scores: {scores}")

    # Store the scores in the dictionary
    scores_dict['completeness'].append(scores[0])
    scores_dict['conciseness'].append(scores[1])
    scores_dict['coherence'].append(scores[2])

# Calculate the average scores
average_scores = {
    'completeness': sum(scores_dict['completeness']) / len(scores_dict['completeness']),
    'conciseness': sum(scores_dict['conciseness']) / len(scores_dict['conciseness']),
    'coherence': sum(scores_dict['coherence']) / len(scores_dict['coherence']),
}

# Convert to DataFrame for better visualization (optional)
average_scores_df = pd.DataFrame([average_scores])
average_scores_df.columns = ['Completeness', 'Conciseness', 'Coherence']

# Display the DataFrame
average_scores_df.head()

Output:

image5.png

The above output shows that Llama 3.1 70b achieves performance similar to the GPT-4o snapshot for text summarization when evaluated using a third LLM.

Conclusion

The Meta Llama 3.1 series models are state-of-the-art open-source models. This article shows that Meta Llama 3.1 70b performs very similarly to the GPT-4o snapshot for zero-shot text summarization. I encourage you to try the Llama 3.1 405b model from Groq to see if you can get better results than GPT-4o.

Comparison of Fine-tuning GPT-4o mini vs GPT-3.5 for Text Classification

Featured Imgs 23

In my previous articles, I presented a comparison of the OpenAI GPT-4o mini model with the GPT-4o and GPT-3.5 turbo models for zero-shot text classification. The results showed that GPT-4o mini, while significantly cheaper than its counterparts, achieves comparable performance.

On 8 August 2024, OpenAI enabled GPT-4o mini fine-tuning for developers across usage tiers 1-5. You can now fine-tune GPT-4o mini for free until 23 September 2024, with a daily token limit of 2 million.

In this article, I will show you how to fine-tune the GPT-4o mini for text classification tasks and compare it to the fine-tuned GPT-3.5 turbo.

So, let's begin without ado.

Importing and Installing Required Libraries

The following script installs the OpenAI Python library you can use to make calls to the OpenAI API.


!pip install openai

The script below imports the required libraries into your Python application.


from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from openai import OpenAI
import pandas as pd
import os
import json
Importing the Dataset

We will use the Twitter US Airline Sentiment dataset for fine-tuning the GPT-4o mini and GPT-3.5 turbo models.

The following script imports the dataset and defines the preprocess_data() function. This function takes in a dataset and an index value as inputs. It then divides the dataset by sentiment category, returning 34 neutral, 33 positive, and 33 negative tweets, beginning at the specified index. This approach ensures we have around 100 balanced records. You can use more records for fine-tuning if you want.



dataset = pd.read_csv(r"D:\Datasets\Tweets.csv")

def preprocess_data(dataset, n):

    # Remove rows where 'airline_sentiment' or 'text' are NaN
    dataset = dataset.dropna(subset=['airline_sentiment', 'text'])

    # Remove rows where 'airline_sentiment' or 'text' are empty strings
    dataset = dataset[(dataset['airline_sentiment'].str.strip() != '') & (dataset['text'].str.strip() != '')]

    # Filter the DataFrame for each sentiment
    neutral_df = dataset[dataset['airline_sentiment'] == 'neutral']
    positive_df = dataset[dataset['airline_sentiment'] == 'positive']
    negative_df = dataset[dataset['airline_sentiment'] == 'negative']

    # Select records from Nth index
    neutral_sample = neutral_df[n: n +34]
    positive_sample = positive_df[n: n +33]
    negative_sample = negative_df[n: n +33]

    # Concatenate the samples into one DataFrame
    dataset = pd.concat([neutral_sample, positive_sample, negative_sample])

    # Reset index if needed
    dataset.reset_index(drop=True, inplace=True)

    dataset = dataset[["text", "airline_sentiment"]]

    return dataset

The following script creates a balanced training dataset.


training_data = preprocess_data(dataset, 0)
print(training_data["airline_sentiment"].value_counts())
training_data.head()

Output:

image1.png

Similarly, the script below creates a test dataset.


test_data = preprocess_data(dataset, 100)
print(test_data["airline_sentiment"].value_counts())
test_data.head()

Output:

image2.png

Converting Training Data to JSON Format for OpenAI Model Fine-tuning

To fine-tune an OpenAI model, you need to transform the training data into JSON format, as outlined in the OpenAI official documentation. To achieve this, I have written a straightforward function that converts the input Pandas DataFrame into the required JSON structure.

The following script converts the training data into the OpenAI-compliant JSON format for fine-tuning. Fine-tuning relies significantly on the content specified for the system role, so pay special attention when setting this value.


# JSON file path
json_file_path = r"D:\Datasets\airline_sentiments.json"

# Function to create the JSON structure for each row
def create_json_structure(row):
    return {
        "messages": [
            {"role": "system", "content": "You are a Twitter sentiment analysis expert who can predict sentiment expressed in the tweets about an airline. You select sentiment value from positive, negative, or neutral."},
            {"role": "user", "content": row['text']},
            {"role": "assistant", "content": row['airline_sentiment']}
        ]
    }

# Convert DataFrame to JSON structures
json_structures = training_data.apply(create_json_structure, axis=1).tolist()

# Write JSON structures to file, each on a new line
with open(json_file_path, 'w') as f:
    for json_structure in json_structures:
        f.write(json.dumps(json_structure) + '\n')

print(f"Data has been written to {json_file_path}")

Output:


Data has been written to D:\Datasets\airline_sentiments.json

The next step is to upload your JSON file to the OpenAI server. To do so, start by creating an OpenAI client object. Then, call the files.create() method, passing the file path as an argument, as demonstrated in the following script:

client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)


training_file = client.files.create(
  file=open(json_file_path, "rb"),
  purpose="fine-tune"
)

print(training_file.id)

Once the file is uploaded, you will receive a file ID, as the above script demonstrates. You will use this file ID to fine-tune your OpenAI model.

Fine-Tuning GPT-4o Mini for Text Classification

To start fine-tuning, you must call the fine_tuning.jobs.create() method and pass it the ID of the uploaded training file and the model name. The current model name for GPT-4o mini is gpt-4o-mini-2024-07-18.


fine_tuning_job_gpt4o_mini = client.fine_tuning.jobs.create(
  training_file=training_file.id,
  model="gpt-4o-mini-2024-07-18"
)

Executing the above script initiates the fine-tuning process. The following script allows you to monitor and display various fine-tuning events.


# List up to 10 events from a fine-tuning job
print(client.fine_tuning.jobs.list_events(fine_tuning_job_id = fine_tuning_job_gpt4o_mini.id,
                                    limit=10))

Once fine-tuning is complete, you will receive an email containing the ID of your fine-tuned model, which you can use to make inferences. Alternatively, you can retrieve the ID of your fine-tuned model by running the following script.


ft_model_id = client.fine_tuning.jobs.retrieve(fine_tuning_job_gpt4o_mini.id).fine_tuned_model

The remainder of the process follows the same steps as outlined in a previous article. We will define the find_sentiment() function and pass it our fine-tuned model and the test set to predict the sentiment of the tweets in the dataset.

Finally, we predict the model's accuracy by comparing the actual and predicted sentiments of the tweets.


def find_sentiment(client, model, dataset):
    tweets_list = dataset["text"].tolist()

    all_sentiments = []


    i = 0


    while i < len(tweets_list):

        try:
            tweet = tweets_list[i]
            content = """What is the sentiment expressed in the following tweet about an airline?
            Select sentiment value from positive, negative, or neutral. Return only the sentiment value in small letters.
            tweet: {}""".format(tweet)

            response = client.chat.completions.create(
                model=model,
                temperature=0,
                max_tokens=10,
                messages=[
                    {"role": "user", "content": content}
                ]
            )

            sentiment_value = response.choices[0].message.content

            all_sentiments.append(sentiment_value)
            i += 1
            print(i, sentiment_value)

        except Exception as e:
            print("===================")
            print("Exception occurred:", e)

    accuracy = accuracy_score(all_sentiments, dataset["airline_sentiment"])
    print(f"Accuracy: {accuracy}")

find_sentiment(client,ft_model_id, test_data)

Output:


Accuracy: 0.78

The above output shows that the fine-tuned GPT-4o mini achieves a performance accuracy of 78% on the test set.

Fine-Tuning GPT-3.5 Turbo for Text Classification

For comparison, we will also fine-tune the GPT-3.5 turbo model for text classification.

The fine-tuning process remains the same as for the GPT-4o mini. We will pass the training file ID and the GPT-3.5 turbo model ID to the client.fine_tuning.jobs.create() method, as shown below.


fine_tuning_job_gpt_3_5 = client.fine_tuning.jobs.create(
  training_file=training_file.id,
  model="gpt-3.5-turbo"
)

Next, we will pass the fine-tuned GPT-3.5 model ID and the test dataset to the find_sentiment() function to evaluate the model's performance on the test set.


ft_model_id = client.fine_tuning.jobs.retrieve(fine_tuning_job_gpt_3_5.id).fine_tuned_model
find_sentiment(client,ft_model_id, test_data)

Output:


Accuracy: 0.82

The above output shows that the GPT-3.5 turbo model achieves 82% performance accuracy, 4% higher than the GPT-4o mini model.

Conclusion

GPT-4o mini is a cheaper and faster alternative to GPT-3.5. My last article showed that it achieves higher performance for zero-shot text classification than the GPT-3.5 turbo model.

However, based on the results presented in this article, a fine-tuned GPT-3.5 turbo model is still better than a fine-tuned GPT-4o mini.

Feel free to share your feedback in the comments section.

GPT-4o mini – A Cheaper and Faster Alternative to GPT-4o

Featured Imgs 23

On July 18th, 2024, OpenAI released GPT-4o mini, their most cost-efficient small model. GPT-4o mini is around 60% cheaper than GPT-3.5 Turbo and around 97% cheaper than GPT-4o. As per OpenAI, GPT-4o mini outperforms GPT-3.5 Turbo on almost all benchmarks while being cheaper.

In this article, we will compare the cost, performance, and latency of GPT-4o mini with GPT-3.5 turbo and GPT-4o. We will perform a zero-shot tweet sentiment classification task to compare the models. By the end of this article, you will find out which of the three models is better for your use cases. So, let's begin without ado.

Importing and Installing Required Libraries

As a first step, we will install and import the required libraries.

Run the following script to install the OpenAI library.


!pip install openai

The following script imports the required libraries into your application.


import os
import time
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from openai import OpenAI
Importing and Preprocessing the Dataset

To compare the models, we will perform zero-shot classification on the Twitter US Airline Sentiment dataset, which you can download from Kaggle.

The following script imports the dataset from a CSV file into a Pandas dataframe.


## Dataset download link
## https://www.kaggle.com/datasets/crowdflower/twitter-airline-sentiment?select=Tweets.csv

dataset = pd.read_csv(r"D:\Datasets\tweets.csv")
print(dataset.shape)
dataset.head()

Output:

image1.png

The dataset contains more than 14 thousand records. However, we will randomly select 100 records. Of these, 34, 33, and 33 will have neutral, positive, and negative sentiments, respectively.

The following script selects the 100 tweets.


# Remove rows where 'airline_sentiment' or 'text' are NaN
dataset = dataset.dropna(subset=['airline_sentiment', 'text'])

# Remove rows where 'airline_sentiment' or 'text' are empty strings
dataset = dataset[(dataset['airline_sentiment'].str.strip() != '') & (dataset['text'].str.strip() != '')]

# Filter the DataFrame for each sentiment
neutral_df = dataset[dataset['airline_sentiment'] == 'neutral']
positive_df = dataset[dataset['airline_sentiment'] == 'positive']
negative_df = dataset[dataset['airline_sentiment'] == 'negative']

# Randomly sample records from each sentiment
neutral_sample = neutral_df.sample(n=34)
positive_sample = positive_df.sample(n=33)
negative_sample = negative_df.sample(n=33)

# Concatenate the samples into one DataFrame
dataset = pd.concat([neutral_sample, positive_sample, negative_sample])

# Reset index if needed
dataset.reset_index(drop=True, inplace=True)

# print value counts
print(dataset["airline_sentiment"].value_counts())

Output:

image2.png

Let's find out the average number of characters per tweet in these 100 tweets.


dataset['tweet_length'] = dataset['text'].apply(len)
average_length = dataset['tweet_length'].mean()
print(f"Average length of tweets: {average_length:.2f} characters")

Output:

Average length of tweets: 103.63 characters

Next, we will perform zero-shot classification of these tweets using GPT-4o mini, GPT-3.5 Turbo, and GPT-4o models.

Comparing GPT-4o mini with GPT 3.5 Turbo and GPT-4o

We will define the find_sentiment() function, which takes the OpenAI client object, model name, prices per input and output token for the model, and the dataset.

The find_sentiment() function will iterate through all the tweets in the dataset and perform the following tasks:

  • predict their sentiment using the specified model.
  • calculate the number of input and output tokens for the request
  • calculate the total price to process all the tweets using the total input and output tokens.
  • calculate the average latency of all API calls.
  • calculate the model accuracy by comparing the actual and predicted sentiments.

Here is the code for the find_sentiment() function.


def find_sentiment(client, model, prompt_token_price, completion_token_price, dataset):
    tweets_list = dataset["text"].tolist()

    all_sentiments = []
    prompt_tokens = 0
    completion_tokens = 0

    i = 0
    exceptions = 0
    total_latency = 0

    while i < len(tweets_list):

        try:
            tweet = tweets_list[i]
            content = """What is the sentiment expressed in the following tweet about an airline?
            Select sentiment value from positive, negative, or neutral. Return only the sentiment value in small letters.
            tweet: {}""".format(tweet)

            # Record the start time before making the API call
            start_time = time.time()

            response = client.chat.completions.create(
                model=model,
                temperature=0,
                max_tokens=10,
                messages=[
                    {"role": "user", "content": content}
                ]
            )

            # Record the end time after receiving the response
            end_time = time.time()

            # Calculate the latency for this API call
            latency = end_time - start_time
            total_latency += latency

            sentiment_value = response.choices[0].message.content
            prompt_tokens += response.usage.prompt_tokens
            completion_tokens += response.usage.completion_tokens

            all_sentiments.append(sentiment_value)
            i += 1
            print(i, sentiment_value)

        except Exception as e:
            print("===================")
            print("Exception occurred:", e)
            exceptions += 1

    total_price = (prompt_tokens * prompt_token_price) + (completion_tokens * completion_token_price)
    average_latency = total_latency / len(tweets_list) if tweets_list else 0

    print(f"Total exception count: {exceptions}")
    print(f"Total price: ${total_price:.8f}")
    print(f"Average API latency: {average_latency:.4f} seconds")
    accuracy = accuracy_score(all_sentiments, dataset["airline_sentiment"])
    print(f"Accuracy: {accuracy}")
Results with GPT-4o Mini

First, let's call the find_sentiment() function using the GPT-4o mini model. GPT-4o mini costs 15 cents per million input tokens and 60 cents per million output tokens.


client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)
model = "gpt-4o-mini"
input_token_price = 0.150/1_000_000
output_token_price = 0.600/1_000_000

find_sentiment(client, model, input_token_price, output_token_price, dataset)

Output:


Total exception count: 0
Total price: $0.00111945
Average API latency: 0.5097 seconds
Accuracy: 0.8

The above output shows that GPT-4o mini costs around $0.0011 to process 100 tweets averaging about 103 characters each. The average latency per API call was 0.5097 seconds. Finally, the model achieved an accuracy of 80% on the 100 tweets.

Results with GPT-3.5 Turbo

Let's perform the same test with GPT-3.5 Turbo.


client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)
model = "gpt-3.5-turbo"
input_token_price = 0.50/1_000_000
output_token_price = 1.50/1_000_000

find_sentiment(client, model, input_token_price, output_token_price, dataset)

Output:


Total exception count: 0
Total price: $0.00370600
Average API latency: 0.4991 seconds
Accuracy: 0.72

The output shows that GPT-3.5 turbo costs over three times as much as GPT-4o mini for predicting the sentiment of 100 tweets. The latency is similar to that of GPT-4o mini. Finally, its accuracy (72%) is much lower than that of GPT-4o mini.

Results with GPT-4o

Finally, we can perform the zero-shot sentiment classification with the state-of-the-art GPT-4o.


client = OpenAI(
    # This is the default and can be omitted
    api_key = os.environ.get('OPENAI_API_KEY'),
)
model = "gpt-4o"
input_token_price = 5.00/1_000_000
output_token_price = 15/1_000_000

find_sentiment(client, model, input_token_price, output_token_price, dataset)

Output:


Total exception count: 0
Total price: $0.03681500
Average API latency: 0.5602 seconds
Accuracy: 0.82

The output shows that GPT-4o has slightly higher latency than GPT-4o mini and GPT-3.5 turbo. In terms of performance, GPT-4o achieved 82% accuracy, which is 2% higher than GPT-4o mini. However, GPT-4o is 36 times more expensive than GPT-4o mini.

Is a 2% performance gain worth the 36 times higher price? Let me know in the comments.

Final Verdict

To conclude, the following table summarizes the tests performed in this article.

image3.png

I recommend always preferring GPT-4o mini over the GPT-3.5 turbo model, as the former is cheaper and more accurate. I would also go for GPT-4o mini if you need to process huge volumes of text that is not very sensitive and where you can compromise a bit on accuracy; this can save you a lot of money. Finally, I would still go for GPT-4o when you need the best accuracy, even though it costs 36 times more than GPT-4o mini.
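
To put the cost difference into perspective, here is a quick back-of-the-envelope extrapolation of the per-100-tweet costs measured above to one million tweets of similar length; actual costs will vary with tweet and response length.


# Extrapolate the measured per-100-tweet costs to 1,000,000 tweets
measured_cost_per_100_tweets = {
    "gpt-4o-mini": 0.00111945,
    "gpt-3.5-turbo": 0.00370600,
    "gpt-4o": 0.03681500,
}

for model_name, cost in measured_cost_per_100_tweets.items():
    print(f"{model_name}: ~${cost * 10_000:,.2f} per 1,000,000 tweets")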

Let me know what you think of these results and which model you plan to use.

Image Analysis Using Claude 3.5 Sonnet Model

Featured Imgs 23

In my article on Image Analysis Using OpenAI GPT-4o Model, I explained how the GPT-4o model allows you to analyze images and precisely answer questions related to images.

In this article, I will show you how to analyze images with the Anthropic Claude 3.5 Sonnet model, which has shown state-of-the-art performance for many text and vision problems. I will also share my insights on how Claude 3.5 Sonnet compares with GPT-4o for image analysis tasks. So, let's begin without ado.

Importing Required Libraries

You will need to install the anthropic Python library to access the Claude 3.5 Sonnet model in this article. In addition, you will need the Anthropic API key, which you can obtain here.

The following script installs the Anthropic Python library.


!pip install anthropic

The script below imports all the Python modules you will need to run scripts in this article.


import os
import base64
from IPython.display import display, HTML
from IPython.display import Image
from anthropic import Anthropic
General Image Analysis

Let's first perform a general image analysis. We will analyze the following image and ask Claude 3.5 Sonnet if it shows any potentially dangerous situation.


# image source: https://healthier.stanfordchildrens.org/wp-content/uploads/2021/04/Child-climbing-window-scaled.jpg

image_path = r"D:\Datasets\sofa_kid.jpg"
img = Image(filename=image_path, width=600, height=600)
img

Output:

image1.jpg

Note: For comparison, the images we will analyze in this article are the same as those we analyzed with GPT-4o.

Next, we will define a method that converts an image into Base64 format. The Claude 3.5 Sonnet model expects image inputs to be in Base64 format.

We also define an object of the Anthropic client. We will call the Claude 3.5 Sonnet model using this client object.


def encode_image64(image_path):
    with open(image_path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

base64_image = encode_image64(image_path)
image1_media_type = "image/jpeg"

client = Anthropic(api_key = os.environ.get('ANTHROPIC_API_KEY'))

We will define a helper function analyze_image() that accepts a text query as a parameter. Inside the function, we call the messages.create() method of the Anthropic client object. We set the model value to claude-3-5-sonnet-20240620, which is the ID for the Claude 3.5 Sonnet model. The temperature is set to 0 since we want a fair comparison with the GPT-4o model. Finally, we set the system prompt and then pass the image and the text query to the messages list.

We ask the Claude 3.5 Sonnet model to identify any dangerous situation in the image.


def analyze_image(query):
    message = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        temperature = 0,
        max_tokens=1024,
        system="You are a baby sitter.",
        messages=[
             {
                "role": "user",
                "content": [
                    {
                        "type": "image",
                        "source": {
                            "type": "base64",
                            "media_type": image1_media_type,
                            "data":  base64_image,
                        },
                    },
                    {
                        "type": "text",
                        "text": query
                    }
                ],
            }
        ],
    )
    return message

response_content = analyze_image("Do you see any dangerous situation in the image? If yes, how to prevent it?")
print(response_content.content[0].text)

Output:

image2.png

The above output shows that the Claude 3.5 Sonnet model has identified a dangerous situation and provided some suggestions.

Compared to GPT-4o, which gave five suggestions, Claude 3.5 Sonnet provided seven suggestions and a more detailed response.

Graph Analysis

Next, we will perform a graph analysis task using Claude 3.5 Sonnet and summarize the following graph.


# image path: https://globaleurope.eu/wp-content/uploads/sites/24/2023/12/Folie2.jpg

image_path = r"D:\Datasets\Folie2.jpg"
img = Image(filename=image_path, width=800, height=800)
img

Output:

image2.jpg


base64_image = encode_image64(image_path)

def analyze_graph(query):
    message = client.messages.create(
        model = "claude-3-5-sonnet-20240620",
        temperature = 0,
        max_tokens = 1024,
        system = "You are a an expert graph and visualization expert",
        messages = [
             {
                "role": "user",
                "content": [
                    {
                        "type": "image",
                        "source": {
                            "type": "base64",
                            "media_type": image1_media_type,
                            "data":  base64_image,
                        },
                    },
                    {
                        "type": "text",
                        "text": query
                    }
                ],
            }
        ],
    )
    return message.content[0].text

response_content = analyze_graph("Can you summarize the graph?")
print(response_content)

Output:

image3.png

The above output shows the graph's summary. Though Claude 3.5 Sonnet is more elaborate here than GPT-4o, I found the GPT-4o summary better, as it categorized the countries into high, moderate, and low debt levels.

Next, I asked Claude 3.5 Sonnet to create a table showing countries against their debts.


response_content = analyze_graph("Can you convert the graph to table such as Country -> Debt?")
print(response_content)

Output:

image3b.png

The results obtained with Claude 3.5 Sonnet were astonishingly accurate compared to GPT-4o. For example, GPT-4o showed Estonia as having a debt of 10% of its GDP, whereas Claude 3.5 Sonnet depicted Estonia as having a debt of 19.2%. If you look at the graph, you will see that Claude 3.5 Sonnet is extremely accurate here.

Claude 3.5 Sonnet is a clear winner for Graph Analysis.

Image Sentiment Prediction

Next, we will predict facial sentiment using Claude 3.5 Sonnet. Here is the sample image.


# image path: https://www.allprodad.com/the-3-happiest-people-in-the-world/

image_path = r"D:\Datasets\happy_men.jpg"
img = Image(filename=image_path, width=800, height=800)
img

Output:

image3.jpg


base64_image = encode_image64(image_path)

def predict_sentiment(query):
    message = client.messages.create(
        model = "claude-3-5-sonnet-20240620",
        temperature = 0,
        max_tokens = 1024,
        system = "You are helpful psychologist.",
        messages = [
             {
                "role": "user",
                "content": [
                    {
                        "type": "image",
                        "source": {
                            "type": "base64",
                            "media_type": image1_media_type,
                            "data":  base64_image,
                        },
                    },
                    {
                        "type": "text",
                        "text": query
                    }
                ],
            }
        ],
    )
    return message.content[0].text

response_content = predict_sentiment("Can you predict facial sentiment from the input image?")
print(response_content)

Output:

image3c.png

The above output shows that Claude 3.5 Sonnet provided detailed information about the sentiment expressed in the image. GPT-4o, on the other hand, was more precise.

I will again go with Claude 3.5 Sonnet here as the first choice for sentiment classification.

Analyzing Multiple Images

Finally, let's see how Claude 3.5 Sonnet fares at analyzing multiple images. We will compare the following two images for sentiment predictions.


from PIL import Image
import matplotlib.pyplot as plt

# image1_path: https://www.allprodad.com/the-3-happiest-people-in-the-world/
# image2_path: https://www.shortform.com/blog/self-care-for-grief/

image_path1 = r"D:\Datasets\happy_men.jpg"
image_path2 = r"D:\Datasets\sad_woman.jpg"


# Open the images using Pillow
img1 = Image.open(image_path1)
img2 = Image.open(image_path2)

# Create a figure to display the images side by side
fig, axes = plt.subplots(1, 2, figsize=(10, 5))

# Display the first image
axes[0].imshow(img1)
axes[0].axis('off')  # Hide axes

# Display the second image
axes[1].imshow(img2)
axes[1].axis('off')  # Hide axes

# Show the plot
plt.tight_layout()
plt.show()

Output:

image4.png


base64_image1 = encode_image64(image_path1)
base64_image2 = encode_image64(image_path2)


def predict_sentiment(query):
    message = client.messages.create(
        model = "claude-3-5-sonnet-20240620",
        temperature = 0,
        max_tokens = 1024,
        system = "You are helpful psychologist.",
        messages = [
             {
                "role": "user",
                "content": [
                    {
                        "type": "image",
                        "source": {
                            "type": "base64",
                            "media_type": image1_media_type,
                            "data":  base64_image1,
                        },
                    },
                    {
                        "type": "image",
                        "source": {
                            "type": "base64",
                            "media_type": image1_media_type,
                            "data":  base64_image2,
                        },
                    },
                    {
                        "type": "text",
                        "text": query
                    }
                ],
            }
        ],
    )
    return message.content[0].text

response_content = predict_sentiment("Can you explain all the differences in the two images?")
print(response_content)

Output:

image4b.png

The above output shows the image comparison results obtained with Claude 3.5 Sonnet. For this task, I found the results I obtained with GPT-4o in my previous article to be better; in my opinion, GPT-4o is the stronger model for image comparison.
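
If Claude 3.5 Sonnet ever mixes up which details belong to which image, one small variant of the request above can help: interleave a short text block before each image so the model can refer to them by name. The snippet below is only a sketch of that pattern, reusing the client, media type, and base64 strings defined earlier; the "Image 1:" and "Image 2:" labels are ordinary text blocks, not a special API field.


message = client.messages.create(
    model = "claude-3-5-sonnet-20240620",
    temperature = 0,
    max_tokens = 1024,
    system = "You are a helpful psychologist.",
    messages = [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Image 1:"},
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": image1_media_type,
                        "data": base64_image1,
                    },
                },
                {"type": "text", "text": "Image 2:"},
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": image1_media_type,
                        "data": base64_image2,
                    },
                },
                {"type": "text", "text": "Can you explain all the differences in the two images?"}
            ],
        }
    ],
)
print(message.content[0].text)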

Conclusion

Claude 3.5 Sonnet is a state-of-the-art model for text and vision tasks. In this article, I explained how to analyze images with Claude 3.5 Sonnet. Compared with GPT-4o, I found Claude 3.5 Sonnet better for general image and graph analysis tasks, while GPT-4o achieved better results for image summarization and comparison tasks. I urge you to test both models and share your results.

Extracting YouTube Channel Statistics in Python Using YouTube Data API


Are you interested in finding out what a YouTube channel mostly discusses? Do you want to analyze YouTube videos of a specific channel? If yes, we are in the same boat.

YouTube video titles are a great way to determine the channel's primary focus. Plotting a word cloud or a bar plot of the most frequently occurring words in YouTube video titles can give you precise insight into the nature of a YouTube channel. I will do exactly this in this tutorial using the Python programming language.

So, let's begin without further ado.

Getting YouTube Data API Key

You can retrieve information about a YouTube channel in Python via the YouTube Data API. However, to access the API, you must first create a new project in the Google Cloud Platform console, which you can do for free.

image1.png

Once you create a new project, click the Go to APIs overview link, as shown in the screenshot below.

image2.png

Next, click the ENABLE APIS AND SERVICES link.

image3.png

Search for youtube data api v3.

image4.png

Click the ENABLE button.

image5.png

You will need to create credentials. To do so, click the CREDENTIALS link.

image6.png

If you have any existing credentials, you will see them. To create new credentials, click the + CREATE CREDENTIALS link and select API key.

image7.png

Your API key will be generated. Copy and save it in a secure place.

image8.png

Now, you can access the YouTube Data API in Python.

Installing and Importing Required Libraries

To begin our analysis, we must set up our Python environment by installing and importing the necessary libraries. The main libraries we will use are google-api-python-client for accessing the YouTube Data API, wordcloud for generating word clouds, and nltk (Natural Language Toolkit) for text processing.


!pip install google-api-python-client
!pip install wordcloud
!pip install nltk

Next, import the required libraries. These include googleapiclient.discovery for accessing the YouTube API, re for regular expressions to clean text, Counter from the collections module to count word frequencies, matplotlib.pyplot for plotting, WordCloud from the wordcloud library, and stopwords from nltk.corpus for removing common English words that do not contribute much to the analysis.


import googleapiclient.discovery
import re
from collections import Counter
import matplotlib.pyplot as plt
from wordcloud import WordCloud
import nltk
from nltk.corpus import stopwords
Extracting YouTube Channel Statistics

This section will extract video titles from a specific YouTube channel using the YouTube Data API. To do this, we need to set up our API client with the appropriate API key and configure the request to fetch video data.

First, specify the API details and initialize the YouTube API client using the API key you generated earlier. Replace "YOUR_API_KEY" with your actual API key.


# API information
api_service_name = "youtube"
api_version = "v3"

# API key
DEVELOPER_KEY = "YOUR_API_KEY"

# API client
youtube = googleapiclient.discovery.build(
    api_service_name,
    api_version,
    developerKey=DEVELOPER_KEY)
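
If you prefer not to hardcode the key, you can read it from an environment variable instead. The snippet below is a minimal sketch that assumes you have already exported a variable named YOUTUBE_API_KEY (the name is arbitrary) in your shell.


import os

# Read the API key from an environment variable instead of hardcoding it.
# YOUTUBE_API_KEY is an assumed name; export it before starting Python.
DEVELOPER_KEY = os.environ.get("YOUTUBE_API_KEY")

youtube = googleapiclient.discovery.build(
    api_service_name,
    api_version,
    developerKey=DEVELOPER_KEY)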

Next, define the channel ID of the YouTube channel you want to analyze. Here, we are using the channel ID UCYNS-_653RIE9x_BINefAMA as an example. You can use any other channel if you want.

We will then create a request to retrieve video titles for the specified channel. The request fetches up to 50 results per call, so we use pagination to retrieve additional pages for channels with more videos.


# Channel ID of the channel you want to search
channel_id = "UCYNS-_653RIE9x_BINefAMA"


# Request to retrieve all video titles for the specified channel
request = youtube.search().list(
    part="snippet",
    channelId=channel_id,
    maxResults=50,
    type="video"
)

# Initialize an empty list to store the video titles
video_titles = []

# Execute the request and retrieve the results
while request is not None:
    response = request.execute()
    for item in response["items"]:
        video_titles.append(item["snippet"]["title"])
    request = youtube.search().list_next(request, response)

Finally, print the total number of extracted video titles and display the first 10 titles to ensure our extraction process is working correctly.


# Print the video titles
print("Total extracted videos:", len(video_titles))
print("First 10 videos")
print("===============")
for title in video_titles[:10]:
    print(title)

Output:

image9.png
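
Note that search().list is a relatively expensive endpoint in terms of API quota and is not guaranteed to return every upload. If you hit quota limits, a lower-cost alternative, sketched below on the assumption that the channel's uploads playlist is public, is to page through the channel's uploads playlist with playlistItems().list instead. It fills the same video_titles list, so the rest of the tutorial works unchanged.


# Look up the channel's "uploads" playlist ID
channel_response = youtube.channels().list(
    part="contentDetails",
    id=channel_id
).execute()
uploads_playlist_id = channel_response["items"][0]["contentDetails"]["relatedPlaylists"]["uploads"]

# Page through the uploads playlist and collect the video titles
playlist_request = youtube.playlistItems().list(
    part="snippet",
    playlistId=uploads_playlist_id,
    maxResults=50
)

video_titles = []
while playlist_request is not None:
    playlist_response = playlist_request.execute()
    for item in playlist_response["items"]:
        video_titles.append(item["snippet"]["title"])
    playlist_request = youtube.playlistItems().list_next(playlist_request, playlist_response)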

Plotting a Bar Plot with Most Frequently Occurring Words

This section will process the extracted video titles to identify the most frequently occurring words. We will then visualize these words using a bar plot.

First, download the NLTK stop words and define an additional set of stop words to exclude from our analysis. Stop words are common words like "the", "is", and "in" that do not carry significant meaning in our context.

nltk.download('stopwords')
stop_words = set(stopwords.words('english')) | {'sth', 'syed', 'talat', 'hussain'}

Next, filter the video titles to include only those containing Latin characters. This step ensures we focus on titles written in English or similar languages.

# Filter videos with Latin text only
latin_titles = [title for title in video_titles if re.search(r'[a-zA-Z]', title)]

We then clean the titles by removing special characters and tokenizing them into individual words. Stop words are excluded from the analysis.

# Remove special characters and tokenize
words = []
for title in latin_titles:
    cleaned_title = re.sub(r'[^\w\s]', '', title)  # Remove special characters
    for word in re.findall(r'\b\w+\b', cleaned_title):
        if word.lower() not in stop_words:
            words.append(word.lower())

Subsequently, we count the frequency of each word and identify the ten most common words. As an extra safeguard, channel-specific words such as "sth", "syed", "talat", and "hussain", which were already added to the stop-word set, are filtered out here as well.


# Count the frequency of words
word_counts = Counter(words)

# Get the 10 most common words
most_common_words = [word for word, count in word_counts.most_common(10) if word not in {'sth', 'syed', 'talat', 'hussain'}]

Finally, create a bar plot to visualize the frequency of the most common words.


# Create bar plot
plt.figure(figsize=(10, 5))
plt.bar(range(len(most_common_words)), [word_counts[word] for word in most_common_words])
plt.title('10 Most Common Words in Video Titles')
plt.xlabel('Words')
plt.ylabel('Frequency')
plt.xticks(range(len(most_common_words)), most_common_words, rotation=45)
plt.show()

Output:

image10.png

The output will display a bar plot showing the 10 most common words in the video titles, providing a clear insight into the primary topics discussed on the channel.

Word Cloud of Video Titles

We will create a word cloud to visualize the word frequency data further. A word cloud presents the most common words in a visually appealing way, and the size of each word represents its frequency.

First, generate the word cloud from the list of words we compiled earlier using the WordCloud class. Next, display the word cloud using Matplotlib.


# Generate word cloud
wordcloud = WordCloud(width=800, height=400, background_color='white').generate(' '.join(words))

# Display the word cloud
plt.figure(figsize=(12, 8))
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.title("Word Cloud of Video Titles")
plt.show()

Output:

image11.png

The resulting word cloud provides a visual representation of the most frequently occurring words in the video titles, allowing for quick and intuitive insights into the channel's content.

Conclusion

Analyzing YouTube video titles allows us to gain valuable insights into a channel's primary topics. Using the YouTube Data API together with libraries such as google-api-python-client, nltk, matplotlib, and wordcloud enables us to extract video data, process text, and visualize the most common words. This approach reveals the core themes of a YouTube channel, offering a clear understanding of its focus and audience interests. Whether for content creators or viewers, these techniques effectively uncover the essence of YouTube video discussions.

Retrieval Augmented Generation with Claude 3.5 Sonnet


In my previous article, I presented results comparing the Anthropic Claude 3.5 Sonnet and OpenAI GPT-4o models for zero-shot text classification. The results showed that Claude 3.5 Sonnet significantly outperformed GPT-4o.

These results motivated me to develop a simple retrieval augmented generation (RAG) system with LangChain that enables the Claude 3.5 Sonnet model to answer questions about custom documents.

By the end of this article, you will know how to develop a chatbot that uses the Claude 3.5 Sonnet LLM to answer questions on custom documents.

So, let's begin without further ado.

Installing and Importing Required Libraries

The following script installs the libraries required to run scripts in this article.

!pip install -U langchain
!pip install -U langchain-anthropic
!pip install langchain-openai
!pip install pypdf
!pip install faiss-cpu

Subsequently, the script below imports the required libraries into your Python application.


from langchain_anthropic import ChatAnthropic

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_community.document_loaders import PyPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain.chains import create_retrieval_chain
from langchain_core.documents import Document
from langchain.chains import create_history_aware_retriever
from langchain_core.prompts import MessagesPlaceholder
from langchain_core.messages import HumanMessage, AIMessage
import os

Generating Default Response with Claude 3.5 Sonnet

Let's first generate a default response using the Claude 3.5 Sonnet LLM in LangChain.

You will need an Anthropic API key, which you can get here.

Next, create an object of the ChatAnthropic class and pass the Anthropic API key, the model ID, and the temperature value to its constructor. The temperature specifies how creative the model should be when generating responses; higher values produce more creative output.

Finally, pass the prompt to the invoke() method of the ChatAnthropic object to generate the model response.

anthropic_api_key = os.environ.get('ANTHROPIC_API_KEY')

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620",
                     anthropic_api_key = anthropic_api_key,
                     temperature = 0.3)

result = llm.invoke("Write a funny poem for an ice cream shop on a beach.")
print(result.content)

Output:

image1.png

The LangChain ChatPromptTemplate class lets you define a chatbot-style prompt. The from_messages() method builds the prompt from a list of role/content messages. In this setup, you must supply a value for the user message, while the system message is optional.

You can use the StrOutputParser class to parse the model response in string format, as shown in the script below:


prompt = ChatPromptTemplate.from_messages([
    ("system", '{assistant}'),
    ("user", "{input}")
])

output_parser = StrOutputParser()

chain = prompt | llm | output_parser

result = chain.invoke(
    {"assistant": "You are a comedian",
     "input": "Write a funny poem for a music store on a beach."}
)
print(result)

Output:

image2.png

RAG with Claude 3.5 Sonnet

Now you know how to call the Claude 3.5 Sonnet LLM in LangChain. In this section, we will augment the Claude 3.5 Sonnet model's knowledge, making it capable of answering questions about documents it has not seen during training.

Step 1: Loading and Splitting Documents

We start by loading and splitting the document using PyPDFLoader. In this case, we load "The English Constitution" by Walter Bagehot from a URL.

In the following script, the load_and_split() method parses the PDF and divides it into manageable sections.


loader = PyPDFLoader("https://web.archive.org/web/20170809122528id_/http://global-settlement.org/pub/The%20English%20Constitution%20-%20by%20Walter%20Bagehot.pdf")
docs = loader.load_and_split()
Step 2: Creating Embeddings

Next, we create embeddings for the text using the OpenAI API's OpenAIEmbeddings class. We then split the text into smaller chunks with RecursiveCharacterTextSplitter and create a FAISS vector store from the split documents and their embeddings. This step transforms the text into a format suitable for retrieval and similarity search.


openai_key = os.environ.get('OPENAI_API_KEY')

embeddings = OpenAIEmbeddings(openai_api_key = openai_key)

text_splitter = RecursiveCharacterTextSplitter()
documents = text_splitter.split_documents(docs)
vector = FAISS.from_documents(documents, embeddings)
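
By default, RecursiveCharacterTextSplitter creates fairly large chunks. If you want finer control over retrieval granularity, you can pass explicit chunk_size and chunk_overlap values; the numbers below are only illustrative and should be tuned for your own documents.


# Split into roughly 1000-character chunks with 200 characters of overlap
# (illustrative values, not tuned)
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
documents = text_splitter.split_documents(docs)
vector = FAISS.from_documents(documents, embeddings)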
Step 3: Crafting the Prompt Template

The next step is to define a prompt template using the ChatPromptTemplate class. The template instructs the model to answer questions based solely on the provided context, ensuring accurate and relevant responses. The create_stuff_documents_chain function links this template with the language model, forming a document chain that will be used for generating responses.


prompt = ChatPromptTemplate.from_template("""Answer the following question based only on the provided context:

Question: {input}

Context: {context}
"""
)

document_chain = create_stuff_documents_chain(llm, prompt)
Step 4: Setting Up the Retriever

We convert the vector store into a retriever object with the vector.as_retriever() method. The retriever, combined with the document chain, forms a retrieval_chain. This setup enables the system to fetch relevant document sections based on the user's query and use them as context for generating answers.


retriever = vector.as_retriever()
retrieval_chain = create_retrieval_chain(retriever, document_chain)
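
The retriever returns the most similar chunks for each query. If you want explicit control over how many chunks are passed to the model as context, you can set k through search_kwargs; the value below is just an example.


# Retrieve the top 4 chunks for each query (example value)
retriever = vector.as_retriever(search_kwargs={"k": 4})
retrieval_chain = create_retrieval_chain(retriever, document_chain)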
Step 5: Generating Responses

Finally, we define the generate_response() function that takes a query as input, invokes the retrieval chain, and prints the answer.


def generate_response(query):
    response = retrieval_chain.invoke({"input": query})
    print(response["answer"])
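
The dictionary returned by the retrieval chain also exposes the retrieved chunks under the "context" key, which is useful for checking which parts of the PDF an answer was based on. The helper below is a small optional sketch of that idea (the generate_response_with_sources name is just for illustration).


def generate_response_with_sources(query):
    response = retrieval_chain.invoke({"input": query})
    print(response["answer"])
    # The retrieved Document objects carry the page numbers added by PyPDFLoader
    for doc in response["context"]:
        print("Source page:", doc.metadata.get("page"))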

To demonstrate the system in action, we run a few example queries as shown below:

Query1:


query = "What is the total number of members of the house of commons?"
generate_response(query)

Output:

image3.png

Query2:


query = "What is the difference between the house of lords and house of commons? How members are elected for both?"
generate_response(query)

Output:

image4.png

Query3:


query = "How many players participate in a football game?"
generate_response(query)

Output:

image5.png

You can see that the model correctly answered queries related to the custom document and refused to generate a response to the question that was not related to it.
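
This refusal behavior comes from the instruction in the prompt template to answer only from the provided context. If you want the refusal to be more explicit, you can extend the template with a fallback instruction and rebuild the chains; the wording below is just one possible variant.


prompt = ChatPromptTemplate.from_template("""Answer the following question based only on the provided context.
If the context does not contain the answer, reply that you cannot answer from the provided document.

Question: {input}

Context: {context}
"""
)

document_chain = create_stuff_documents_chain(llm, prompt)
retrieval_chain = create_retrieval_chain(retriever, document_chain)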

Conclusion

The retrieval augmented generation (RAG) technique has revolutionized the development of customized chatbots for various data sources. In this article, we demonstrated how to build a chatbot using the Claude 3.5 Sonnet model to answer questions based on previously unseen documents. This method can be applied to create chatbots capable of querying diverse data types such as PDFs, websites, text documents, and beyond.

I encourage you to leverage Claude 3.5 Sonnet to develop your custom chatbots, explore its powerful capabilities, and share your experience and feedback in the comments section.