Company Logo classification model with TensorFlow and Keras (Part-1)

Author: Iqbal

Objective: Build a classification model for company logos and detect them in images and video.

Part-1: Create the classification model
Part-2: Detection in images (development in progress)

Data: logo_data.zip
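
The code below assumes logo_data.zip has been extracted so that ./data contains one sub-folder per brand, each holding that class's PNG images; the sub-folder names double as class labels. A minimal extraction sketch (the internal layout of the archive is an assumption, and the folder names in the comment are illustrative only):

import os
import zipfile

# assumption: the archive unpacks to a top-level data/ folder with one sub-folder per brand
with zipfile.ZipFile("logo_data.zip") as zf:
    zf.extractall(".")

print([d for d in os.listdir("./data") if not d.startswith(".")])
# e.g. ['brand_a', 'brand_b', 'brand_c', 'brand_d']  (illustrative names)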

In [56]:
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.utils import to_categorical
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
import cv2
from sklearn.model_selection import train_test_split
from tqdm import tqdm

Define necessary functions

In [55]:
def get_class(path):
    # each sub-directory of `path` is one class; its name is used as the label
    class_names = [d for d in os.listdir(path) if not d.startswith(".")]
    return class_names


def decode_image(file):
    # load as grayscale, resize to 32x32 and scale pixel values to [0, 1]
    img = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (32, 32), interpolation=cv2.INTER_CUBIC)
    img = img.astype('float32')
    img = img / 255.0
    
    # the parent directory name is the class label, e.g. "./data/<class>/<file>.png"
    class_name = file.split('/')[-2:-1]
    
    return img, class_name

def prepare_training_data(path):
    train_data = []
    train_labels = []
    
    ds = os.listdir(path)
    classes = get_class(path)
    
    for d in tqdm(ds):
        if os.path.isdir("{}/{}".format(path, d)):
            files = os.listdir("{}/{}".format(path, d))
            
            for fl in files:
                if fl.endswith(".png"):
                    img_data, class_name = decode_image("{}/{}/{}".format(path, d, fl))
                    train_data.append(img_data)
                    
                    # store the label as the integer index of its class; one-hot encode below
                    train_labels.append(classes.index(class_name[0]))
    
    train_data = np.array(train_data)
    train_labels = to_categorical(np.array(train_labels))
    return train_data, train_labels

def define_model():
    # simple fully connected classifier: flatten the 32x32 grayscale image,
    # one hidden layer, and a softmax over the 4 logo classes in the dataset
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(32, 32)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(4, activation="softmax")
    ])
    
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=['accuracy'])
    
    return model
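
As a quick sanity check (not part of the original run), decode_image can be called on a single file to confirm the preprocessing and label extraction; the path below is hypothetical:

img, cls = decode_image("./data/some_brand/sample.png")  # hypothetical path under ./data
print(img.shape, img.dtype, cls)                          # expected: (32, 32) float32 ['some_brand']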

Prepare the training data and train the model

In [4]:
data, lbl = prepare_training_data("./data")
train_images, test_images, train_labels, test_labels = train_test_split(data, lbl, random_state=42, test_size=0.1)

print(train_images.shape, test_images.shape, train_labels.shape, test_labels.shape)
model = define_model()
epoch_data = model.fit(train_images, train_labels, epochs=20, verbose=1, validation_data=(test_images, test_labels))
score = model.evaluate(test_images, test_labels, verbose=0)
print('Test loss and accuracy:', score)

model.save('logo_data_model.h5')
100%|██████████| 5/5 [00:00<00:00, 30.36it/s]
(83, 32, 32) (10, 32, 32) (83, 4) (10, 4)
Train on 83 samples, validate on 10 samples
Epoch 1/20
83/83 [==============================] - 0s 6ms/sample - loss: 1.8114 - accuracy: 0.3735 - val_loss: 2.2659 - val_accuracy: 0.2000
Epoch 2/20
83/83 [==============================] - 0s 300us/sample - loss: 1.3381 - accuracy: 0.4940 - val_loss: 1.1943 - val_accuracy: 0.4000
Epoch 3/20
83/83 [==============================] - 0s 508us/sample - loss: 1.1608 - accuracy: 0.4699 - val_loss: 1.1773 - val_accuracy: 0.3000
Epoch 4/20
83/83 [==============================] - 0s 464us/sample - loss: 0.9036 - accuracy: 0.6024 - val_loss: 1.4051 - val_accuracy: 0.2000
Epoch 5/20
83/83 [==============================] - 0s 321us/sample - loss: 0.8812 - accuracy: 0.5542 - val_loss: 1.0201 - val_accuracy: 0.5000
Epoch 6/20
83/83 [==============================] - 0s 392us/sample - loss: 0.7679 - accuracy: 0.7711 - val_loss: 0.9312 - val_accuracy: 0.6000
Epoch 7/20
83/83 [==============================] - 0s 503us/sample - loss: 0.6614 - accuracy: 0.7711 - val_loss: 0.9900 - val_accuracy: 0.6000
Epoch 8/20
83/83 [==============================] - 0s 635us/sample - loss: 0.5856 - accuracy: 0.8193 - val_loss: 0.7204 - val_accuracy: 0.7000
Epoch 9/20
83/83 [==============================] - 0s 648us/sample - loss: 0.5171 - accuracy: 0.8795 - val_loss: 0.5997 - val_accuracy: 0.9000
Epoch 10/20
83/83 [==============================] - 0s 705us/sample - loss: 0.4569 - accuracy: 0.9277 - val_loss: 0.6421 - val_accuracy: 0.7000
Epoch 11/20
83/83 [==============================] - 0s 789us/sample - loss: 0.3890 - accuracy: 0.9036 - val_loss: 0.4891 - val_accuracy: 0.9000
Epoch 12/20
83/83 [==============================] - 0s 379us/sample - loss: 0.3263 - accuracy: 0.9639 - val_loss: 0.4185 - val_accuracy: 1.0000
Epoch 13/20
83/83 [==============================] - 0s 323us/sample - loss: 0.2892 - accuracy: 1.0000 - val_loss: 0.4054 - val_accuracy: 0.9000
Epoch 14/20
83/83 [==============================] - 0s 343us/sample - loss: 0.2582 - accuracy: 0.9759 - val_loss: 0.3675 - val_accuracy: 0.9000
Epoch 15/20
83/83 [==============================] - 0s 324us/sample - loss: 0.2292 - accuracy: 0.9880 - val_loss: 0.2836 - val_accuracy: 1.0000
Epoch 16/20
83/83 [==============================] - 0s 309us/sample - loss: 0.2001 - accuracy: 1.0000 - val_loss: 0.2420 - val_accuracy: 1.0000
Epoch 17/20
83/83 [==============================] - 0s 359us/sample - loss: 0.1783 - accuracy: 1.0000 - val_loss: 0.2237 - val_accuracy: 1.0000
Epoch 18/20
83/83 [==============================] - 0s 464us/sample - loss: 0.1625 - accuracy: 1.0000 - val_loss: 0.2224 - val_accuracy: 1.0000
Epoch 19/20
83/83 [==============================] - 0s 409us/sample - loss: 0.1476 - accuracy: 1.0000 - val_loss: 0.2072 - val_accuracy: 1.0000
Epoch 20/20
83/83 [==============================] - 0s 389us/sample - loss: 0.1345 - accuracy: 1.0000 - val_loss: 0.1713 - val_accuracy: 1.0000
Test loss and accuracy: [0.1713065803050995, 1.0]

Testing and Predicting

In [54]:
class_names = get_class("./data")

lm = keras.models.load_model("logo_data_model.h5")
prediction = lm.predict(test_images)

for i in range(5):
    # show the reference logo for the actual class, with the predicted class in the title
    img = mpimg.imread("{}/{}.png".format("./logo", class_names[np.argmax(test_labels[i])]))
    plt.grid(False)
    plt.imshow(img)
    plt.xlabel("Actual: " + class_names[np.argmax(test_labels[i])])
    plt.title("Prediction: " + class_names[np.argmax(prediction[i])])
    plt.show()
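
To classify a new image outside the test split, the same preprocessing can be reused; a minimal sketch, assuming a hypothetical input file and adding the batch dimension the model expects:

img, _ = decode_image("./new_logo.png")           # hypothetical input file
pred = lm.predict(np.expand_dims(img, axis=0))    # model expects a batch: shape (1, 32, 32)
print("Predicted class:", class_names[np.argmax(pred[0])])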