

Saturday, July 13, 2024

Advanced TensorFlow example with a step-by-step guide




We will create a deep convolutional neural network for image classification on the CIFAR-10 dataset. This example demonstrates advanced techniques such as a custom training loop, data augmentation, and learning rate scheduling.

For more information, see the official TensorFlow site:

https://www.tensorflow.org


Step-by-Step Guide:


1. Import Required Libraries:


import tensorflow as tf
from tensorflow.keras import layers, models, optimizers, losses
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import numpy as np


2. Load and Preprocess Data:


# Load the CIFAR-10 dataset
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.cifar10.load_data()

# Normalize the images to the range [0, 1]
train_images, test_images = train_images / 255.0, test_images / 255.0

# Convert labels to one-hot encoding
train_labels = tf.keras.utils.to_categorical(train_labels, 10)
test_labels = tf.keras.utils.to_categorical(test_labels, 10)
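
As a quick sanity check, you can print the array shapes after preprocessing. CIFAR-10 contains 50,000 training and 10,000 test images of size 32x32x3, and the labels are now one-hot vectors of length 10:

print(train_images.shape, train_labels.shape)  # (50000, 32, 32, 3) (50000, 10)
print(test_images.shape, test_labels.shape)    # (10000, 32, 32, 3) (10000, 10)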


3. Create a Data Augmentation Generator:


datagen = ImageDataGenerator(
    rotation_range=15,        # random rotations of up to 15 degrees
    width_shift_range=0.1,    # horizontal shifts of up to 10% of the width
    height_shift_range=0.1,   # vertical shifts of up to 10% of the height
    horizontal_flip=True      # random left-right flips
)

# fit() is only required when featurewise statistics are used
# (e.g. featurewise_center); it is effectively a no-op for the options above.
datagen.fit(train_images)
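
Note that ImageDataGenerator is deprecated in recent TensorFlow releases in favor of Keras preprocessing layers. As a rough equivalent (a sketch, assuming TF 2.9 or newer where these layers are stable), the same augmentations can be expressed as layers and applied inside a tf.data pipeline:

data_augmentation = tf.keras.Sequential([
    tf.keras.layers.RandomFlip('horizontal'),
    tf.keras.layers.RandomRotation(15 / 360),     # factor is a fraction of 2*pi, so ~15 degrees
    tf.keras.layers.RandomTranslation(0.1, 0.1),  # +/-10% height and width shifts
])

train_ds = (tf.data.Dataset.from_tensor_slices((train_images, train_labels))
            .shuffle(10000)
            .batch(64)
            .map(lambda x, y: (data_augmentation(x, training=True), y),
                 num_parallel_calls=tf.data.AUTOTUNE))

These layers run as part of the graph, so the augmentation can execute on the GPU rather than in Python.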


4. Build the Model:


def build_model():
    model = models.Sequential()

    # padding='same' keeps the spatial size at each block so that three pooling
    # stages fit a 32x32 input; with the default 'valid' padding the feature map
    # would shrink to 1x1 before the last pooling layer and raise an error.

    # Block 1: two 3x3 convolutions with 32 filters
    model.add(layers.Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 3)))
    model.add(layers.BatchNormalization())
    model.add(layers.Conv2D(32, (3, 3), padding='same', activation='relu'))
    model.add(layers.BatchNormalization())
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Dropout(0.2))

    # Block 2: 64 filters
    model.add(layers.Conv2D(64, (3, 3), padding='same', activation='relu'))
    model.add(layers.BatchNormalization())
    model.add(layers.Conv2D(64, (3, 3), padding='same', activation='relu'))
    model.add(layers.BatchNormalization())
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Dropout(0.3))

    # Block 3: 128 filters
    model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
    model.add(layers.BatchNormalization())
    model.add(layers.Conv2D(128, (3, 3), padding='same', activation='relu'))
    model.add(layers.BatchNormalization())
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Dropout(0.4))

    # Classifier head
    model.add(layers.Flatten())
    model.add(layers.Dense(128, activation='relu'))
    model.add(layers.BatchNormalization())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(10, activation='softmax'))

    return model
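
To inspect the architecture before training, instantiate the model and print a summary. With padding='same' the final feature map is 4x4x128, so the Flatten layer outputs 2048 values:

model = build_model()
model.summary()  # layer-by-layer output shapes and parameter counts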


5. Custom Learning Rate Scheduler:


def lr_schedule(epoch):
    # Step decay: start at 1e-3 and shrink the rate tenfold at epochs 25,
    # 50, and 75 (the original factors dropped the rate by 100x in one step,
    # an artifact of truncating a longer schedule).
    lr = 1e-3
    if epoch > 75:
        lr *= 1e-3
    elif epoch > 50:
        lr *= 1e-2
    elif epoch > 25:
        lr *= 1e-1
    return lr
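
If you do not need a hand-written loop, Keras can apply the same schedule for you through a callback. A minimal sketch using the built-in LearningRateScheduler:

lr_callback = tf.keras.callbacks.LearningRateScheduler(lr_schedule, verbose=1)

# model.fit(..., epochs=100, callbacks=[lr_callback]) would then replace the
# manual learning-rate updates in the loop below.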


6. Compile and Train the Model with a Custom Training Loop:


model = build_model()
model.compile(optimizer=optimizers.Adam(learning_rate=lr_schedule(0)),
              loss=losses.CategoricalCrossentropy(),
              metrics=['accuracy'])

# Manual per-epoch loop: set the learning rate, train for one epoch on
# augmented batches, then evaluate on the test set.
epochs = 100
batch_size = 64
steps_per_epoch = train_images.shape[0] // batch_size

for epoch in range(epochs):
    print(f'Epoch {epoch + 1}/{epochs}')
    lr = lr_schedule(epoch)
    model.optimizer.learning_rate = lr  # portable across TF 2.x / Keras 3
    print(f'Learning rate: {lr}')

    # Train for one epoch on batches from the augmentation generator
    model.fit(datagen.flow(train_images, train_labels, batch_size=batch_size),
              steps_per_epoch=steps_per_epoch, epochs=1, verbose=1)

    # Evaluate on the held-out test set
    loss, accuracy = model.evaluate(test_images, test_labels, verbose=0)
    print(f'Test loss: {loss:.4f}, Test accuracy: {accuracy:.4f}')
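
Once training finishes, you will typically want to persist the model. A short sketch (the filename here is an assumption, any path works; older TF versions use the HDF5 '.h5' format instead of '.keras'):

model.save('cifar10_cnn.keras')                           # architecture + weights
restored = tf.keras.models.load_model('cifar10_cnn.keras')
preds = restored.predict(test_images[:5])
print(preds.argmax(axis=1))                               # predicted class indices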


Explanation:


1. Data Loading and Preprocessing:

   - Load the CIFAR-10 dataset.

   - Normalize the images.

   - Convert labels to one-hot encoding.

2. Data Augmentation:

   - Use ImageDataGenerator for real-time data augmentation.

3. Model Building:

   - Build a deep convolutional neural network that uses batch normalization to stabilize training and dropout layers for regularization.

4. Learning Rate Scheduling:

   - Implement a custom learning rate scheduler to adjust the learning rate during training.

5. Custom Training Loop:

   - Train the model with a manual per-epoch loop around model.fit. This gives more control than a single fit call: the learning rate is updated at every epoch and the model is evaluated on the test set as training progresses. A fully custom loop built on tf.GradientTape is sketched below.
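
For readers who want step-level control (gradient clipping, custom logging, and so on), here is a minimal sketch of what a fully custom training loop looks like with tf.GradientTape. It assumes the model, datagen, and variables from the steps above and trains for one epoch:

loss_fn = losses.CategoricalCrossentropy()
optimizer = optimizers.Adam(learning_rate=1e-3)

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        preds = model(images, training=True)
        loss = loss_fn(labels, preds)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

for step, (images, labels) in enumerate(
        datagen.flow(train_images, train_labels, batch_size=batch_size)):
    loss = train_step(images, labels)
    if step % 100 == 0:
        print(f'step {step}: loss {loss.numpy():.4f}')
    if step + 1 >= steps_per_epoch:  # the generator loops forever, so stop after one epoch
        break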
