Multi-Label Classification in Detail and in Practice (Keras)

AI浩, published 2021/12/23

Contents

Multi-label classification

How to use multi-label classification

A multi-label example

Training

Imports and command-line arguments

Global parameters

Loading the data

Generating the multi-label targets

Splitting into training and validation sets

Data augmentation

Setting up callbacks

Setting up the model

Training and saving the final model

Plotting the training log

Full code:

Testing


Multi-label classification

A multi-label classification problem is one where a single sample carries more than one label: each sample maps to a set of labels rather than to a single class.
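To make that concrete, here is a minimal sketch (with made-up label names) of the multi-hot encoding a multi-label sample uses, in contrast to the one-hot vector of single-label classification:

import numpy as np

classes = ["bag", "black", "blue", "red", "shirt"]   # hypothetical label set
sample_labels = {"red", "shirt"}                     # one sample, two labels
y = np.array([1.0 if c in sample_labels else 0.0 for c in classes])
print(y)  # [0. 0. 0. 1. 1.] -- two slots are 1; a one-hot vector has exactly one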

How to use multi-label classification

When predicting in a multi-label problem, suppose the layer feeding the output produces [-1.0, 5.0, -0.5, 5.0, -0.5]. Passing that through a softmax gives:

import numpy as np

def Softmax_sim(z):
    # softmax: exponentiate, then normalize so the outputs sum to 1
    e = np.exp(z - np.max(z))
    return e / e.sum()

z = np.array([-1.0, 5.0, -0.5, 5.0, -0.5])
print(Softmax_sim(z))
# output: [ 0.00123281  0.49735104  0.00203256  0.49735104  0.00203256]

With the softmax we can clearly pick out labels 2 and 4, but we would have to know in advance how many labels each sample carries, or choose a probability threshold, because the softmax outputs compete for a total mass of 1. That is not what we want: the probability that a sample carries each label should be independent of the other labels.

For a binary classification problem, the usual activation function is the sigmoid:

sigmoid(x) = 1 / (1 + e^(-x))

PS: an important reason the sigmoid served for so long as the standard neural-network activation (nowadays ReLU is the usual default) is that its derivative is cheap to compute and can be expressed in terms of the function itself:

sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x))
The Python code:


   
import numpy as np

def Sigmoid_sim(x):
    return 1 / (1 + np.exp(-x))

a = np.array([-1.0, 5.0, -0.5, 5.0, -0.5])
print(Sigmoid_sim(a))
# output: [ 0.26894142  0.99330715  0.37754067  0.99330715  0.37754067]
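As a quick check on the derivative identity above, this sketch (not part of the original code) compares sigmoid(x) * (1 - sigmoid(x)) against a numerical central difference:

import numpy as np

def Sigmoid_sim(x):
    return 1 / (1 + np.exp(-x))

x = np.array([-1.0, 0.0, 2.0])
eps = 1e-5
numeric = (Sigmoid_sim(x + eps) - Sigmoid_sim(x - eps)) / (2 * eps)  # finite difference
analytic = Sigmoid_sim(x) * (1 - Sigmoid_sim(x))                     # closed form
print(np.allclose(numeric, analytic))  # True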

 

Each label's probability is now independent of the others. Once the model is assembled, the most important remaining step is choosing the loss function for compilation. For multi-label classification we mostly use the binary_crossentropy loss rather than the categorical_crossentropy that is standard for multi-class problems. This may look unreasonable at first, but because the output nodes are independent, the binary loss models the network output as an independent Bernoulli distribution per label. The full multi-label model:


   
from keras.models import Model
from keras.layers import Input, Dense

inputs = Input(shape=(10,))
hidden = Dense(units=10, activation='relu')(inputs)
output = Dense(units=5, activation='sigmoid')(hidden)
model = Model(inputs=inputs, outputs=output)  # build the model before compiling it
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
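To make the loss concrete, here is a small numpy sketch of the per-label Bernoulli terms that binary_crossentropy averages (the predictions reuse the rounded sigmoid outputs from earlier; the target vector is made up for illustration):

import numpy as np

y_true = np.array([0., 1., 0., 1., 0.])                  # multi-hot target
y_pred = np.array([0.269, 0.993, 0.378, 0.993, 0.378])   # per-label sigmoid outputs
# each label contributes an independent binary cross-entropy term;
# Keras averages them over the label axis
bce = -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
print(bce)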

 

A multi-label example

We use a common clothing dataset to demonstrate multi-label classification, with ResNet50 as the network.

Dataset: https://pan.baidu.com/s/1eANXTnWl2nf853IEiLOvWg (extraction code: jo4h)

The dataset consists of 5547 images across 12 combined categories:

  • black_dress (333 images)
  • black_jeans (344 images)
  • black_shirt (436 images)
  • black_shoe (534 images)
  • blue_dress (386 images)
  • blue_jeans (356 images)
  • blue_shirt (369 images)
  • red_dress (384 images)
  • red_shirt (332 images)
  • red_shoe (486 images)
  • white_bag (747 images)
  • white_shoe (840 images)

Our convolutional network's goal is to predict the color and the garment type at the same time. The code targets TensorFlow 2.0 and above. The implementation is walked through step by step below:

Training

Imports and command-line arguments


  
# import the necessary packages
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
from imutils import paths
import tensorflow as tf
import numpy as np
import argparse
import random
import pickle
import cv2
import os
from tensorflow.python.keras.applications.resnet import ResNet50
from tensorflow.keras.optimizers import Adam
from tensorflow.python.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator, img_to_array

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", default='../dataset',
                help="path to input dataset (i.e., directory of images)")
ap.add_argument("-m", "--model", default='model.h5',
                help="path to output model")
ap.add_argument("-l", "--labelbin", default='labelbin',
                help="path to output label binarizer")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
                help="path to output accuracy/loss plot")
args = vars(ap.parse_args())

The command-line arguments:

  • --dataset: path to the input dataset (a directory of images).
  • --model: path for the output Keras model.
  • --labelbin: path for the output multi-label binarizer object.
  • --plot: path for the output training loss/accuracy plot.
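Since every argument has a default, the script runs as-is; an explicit invocation (assuming the file is saved as train.py, a name not fixed by the original post) would look like: python train.py --dataset ../dataset --model model.h5 --labelbin labelbin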

Global parameters


  
EPOCHS = 150
INIT_LR = 1e-3
BS = 16
IMAGE_DIMS = (224, 224, 3)

Loading the data

print("[INFO] loading images...")

imagePaths = sorted(list(paths.list_images(args["dataset"])))

random.seed(42)

random.shuffle(imagePaths)

# initialize the data and labels

data = []

labels = []

# loop over the input images

for imagePath in imagePaths:

    # load the image, pre-process it, and store it in the data list

    image = cv2.imread(imagePath)

    image = cv2.resize(image, (IMAGE_DIMS[1], IMAGE_DIMS[0]))

    image = img_to_array(image)

    data.append(image)

    # extract set of class labels from the image path and update the

    # labels list

    l = label = imagePath.split(os.path.sep)[-2].split("_")

    labels.append(l)

# scale the raw pixel intensities to the range [0, 1]

data = np.array(data, dtype="float") / 255.0

labels = np.array(labels)

print(labels)

Output:

[['red' 'shirt']
 ['black' 'jeans']
 ['black' 'shoe']
 ...
 ['black' 'dress']
 ['black' 'shirt']
 ['white' 'shoe']]
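The per-sample label lists come from the directory names: each image sits in a folder such as red_shirt, and splitting the parent directory name on "_" yields the label set. A minimal sketch with a hypothetical path:

import os

imagePath = os.path.join("..", "dataset", "red_shirt", "0001.jpg")  # hypothetical path
l = imagePath.split(os.path.sep)[-2].split("_")  # parent dir name -> labels
print(l)  # ['red', 'shirt']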
 

Generating the multi-label targets

print("[INFO] class labels:")

mlb = MultiLabelBinarizer()

labels = mlb.fit_transform(labels)

# loop over each of the possible class labels and show them

for (i, label) in enumerate(mlb.classes_):

print("{}. {}".format(i + 1, label))

print(labels)

Calling fit_transform on a MultiLabelBinarizer learns the label encoding. Printing the classes and the transformed labels gives the results below. The classes:

[INFO] class labels:
1. bag
2. black
3. blue
4. dress
5. jeans
6. red
7. shirt
8. shoe
9. white

The labels print as:

[[0 0 0 ... 1 0 0]
 [0 1 0 ... 0 0 0]
 [0 1 0 ... 0 1 0]
 ...
 [0 1 0 ... 0 0 0]
 [0 1 0 ... 1 0 0]
 [0 0 0 ... 0 1 1]]

 

To make the encoding easier to read, the table below spells out three samples:

| Sample            | Bag | Black | Blue | Dress | Jeans | Red | Shirt | Shoe | White |
|-------------------|-----|-------|------|-------|-------|-----|-------|------|-------|
| ['red' 'shirt']   |  0  |   0   |  0   |   0   |   0   |  1  |   1   |  0   |   0   |
| ['black' 'jeans'] |  0  |   1   |  0   |   0   |   1   |  0  |   0   |  0   |   0   |
| ['white' 'shoe']  |  0  |   0   |  0   |   0   |   0   |  0  |   0   |  1   |   1   |
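As a standalone sanity check, a minimal sketch (independent of the training script) of how MultiLabelBinarizer produces and inverts these rows:

from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer()
demo = mlb.fit_transform([("red", "shirt"), ("black", "jeans"), ("white", "shoe")])
print(mlb.classes_)                 # ['black' 'jeans' 'red' 'shirt' 'shoe' 'white']
print(demo)                         # one multi-hot row per sample
print(mlb.inverse_transform(demo))  # [('red', 'shirt'), ('black', 'jeans'), ('shoe', 'white')]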

Then save the fitted MultiLabelBinarizer so it can be reloaded at test time. The code:

print("[INFO] serializing label binarizer...")

f = open(args["labelbin"], "wb")

f.write(pickle.dumps(mlb))

f.close()

Splitting into training and validation sets

(trainX, testX, trainY, testY) = train_test_split(data, labels,
                                                  test_size=0.2, random_state=42)

Data augmentation

# construct the image generator for data augmentation
aug = ImageDataGenerator(rotation_range=25, width_shift_range=0.1,
                         height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,
                         horizontal_flip=True, fill_mode="nearest")
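An optional sanity check (not in the original script) is to pull a single augmented batch and confirm the shapes before committing to a full training run:

# draw one augmented batch; with 9 classes the labels come out as (BS, 9)
batchX, batchY = next(aug.flow(trainX, trainY, batch_size=BS))
print(batchX.shape, batchY.shape)  # e.g. (16, 224, 224, 3) (16, 9)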

Setting up callbacks

checkpointer = ModelCheckpoint(filepath='weights_best_Reset50_model.hdf5',
                               monitor='val_accuracy', verbose=1,
                               save_best_only=True, mode='max')

reduce = ReduceLROnPlateau(monitor='val_accuracy', patience=10,
                           verbose=1,
                           factor=0.5,
                           min_lr=1e-6)

checkpointer keeps only the weights with the best validation accuracy seen so far; reduce halves the learning rate whenever val_accuracy fails to improve for 10 epochs, down to a floor of 1e-6.

Setting up the model

model = ResNet50(weights=None, classes=len(mlb.classes_))
optimizer = Adam(learning_rate=INIT_LR)  # lr is a deprecated alias for learning_rate
model.compile(loss="binary_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])
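One caveat: the stock ResNet50 classification head ends in a softmax, which is at odds with the independent-sigmoid reasoning earlier in this article. Recent tf.keras versions expose a classifier_activation argument on the applications models, so a variant closer to that derivation might look like the line below (an assumption to verify against your TensorFlow version):

# hypothetical variant: per-label sigmoid head instead of the default softmax
model = ResNet50(weights=None, classes=len(mlb.classes_),
                 classifier_activation="sigmoid")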

Training and saving the final model

print("[INFO] training network...")

history = model.fit(

    x=aug.flow(trainX, trainY, batch_size=BS),

    validation_data=(testX, testY),

    steps_per_epoch=len(trainX) // BS,

epochs=EPOCHS, callbacks=[checkpointer, reduce], verbose=1)

# save the model to disk

print("[INFO] serializing network...")

model.save(args["model"], save_format="h5")

Plotting the training log

# plot the training loss and accuracy
import matplotlib.pyplot as plt

loss_trend_graph_path = r"WW_loss.jpg"
acc_trend_graph_path = r"WW_acc.jpg"
print("Now, we start drawing the loss and acc trend graphs...")

# summarize history for accuracy
fig = plt.figure(1)
plt.plot(history.history["accuracy"])
plt.plot(history.history["val_accuracy"])
plt.title("Model accuracy")
plt.ylabel("accuracy")
plt.xlabel("epoch")
plt.legend(["train", "test"], loc="upper left")
plt.savefig(acc_trend_graph_path)
plt.close(1)

# summarize history for loss
fig = plt.figure(2)
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("Model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "test"], loc="upper left")
plt.savefig(loss_trend_graph_path)
plt.close(2)

print("We are done, everything seems OK...")

Full code:


  
# import the necessary packages
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
from imutils import paths
import tensorflow as tf
import numpy as np
import argparse
import random
import pickle
import cv2
import os
from tensorflow.python.keras.applications.resnet import ResNet50
from tensorflow.keras.optimizers import Adam
from tensorflow.python.keras.callbacks import ModelCheckpoint, ReduceLROnPlateau
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator, img_to_array

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", default='../dataset',
                help="path to input dataset (i.e., directory of images)")
ap.add_argument("-m", "--model", default='model.h5',
                help="path to output model")
ap.add_argument("-l", "--labelbin", default='labelbin',
                help="path to output label binarizer")
ap.add_argument("-p", "--plot", type=str, default="plot.png",
                help="path to output accuracy/loss plot")
args = vars(ap.parse_args())

# initialize the number of epochs to train for, initial learning rate,
# batch size, and image dimensions
EPOCHS = 150
INIT_LR = 1e-3
BS = 16
IMAGE_DIMS = (224, 224, 3)

# disable eager execution
tf.compat.v1.disable_eager_execution()

# grab the image paths and randomly shuffle them
print("[INFO] loading images...")
imagePaths = sorted(list(paths.list_images(args["dataset"])))
random.seed(42)
random.shuffle(imagePaths)

# initialize the data and labels
data = []
labels = []

# loop over the input images
for imagePath in imagePaths:
    # load the image, pre-process it, and store it in the data list
    image = cv2.imread(imagePath)
    image = cv2.resize(image, (IMAGE_DIMS[1], IMAGE_DIMS[0]))
    image = img_to_array(image)
    data.append(image)
    # extract the set of class labels from the image path and update the
    # labels list
    l = label = imagePath.split(os.path.sep)[-2].split("_")
    labels.append(l)

# scale the raw pixel intensities to the range [0, 1]
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels)
print("[INFO] data matrix: {} images ({:.2f}MB)".format(
    len(imagePaths), data.nbytes / (1024 * 1000.0)))

# binarize the labels using scikit-learn's special multi-label
# binarizer implementation
print("[INFO] class labels:")
mlb = MultiLabelBinarizer()
labels = mlb.fit_transform(labels)

# loop over each of the possible class labels and show them
for (i, label) in enumerate(mlb.classes_):
    print("{}. {}".format(i + 1, label))
print(labels)

# partition the data into training and testing splits using 80% of
# the data for training and the remaining 20% for testing
(trainX, testX, trainY, testY) = train_test_split(data, labels,
                                                  test_size=0.2, random_state=42)

# save the multi-label binarizer to disk
print("[INFO] serializing label binarizer...")
f = open(args["labelbin"], "wb")
f.write(pickle.dumps(mlb))
f.close()

# construct the image generator for data augmentation
aug = ImageDataGenerator(rotation_range=25, width_shift_range=0.1,
                         height_shift_range=0.1, shear_range=0.2, zoom_range=0.2,
                         horizontal_flip=True, fill_mode="nearest")

checkpointer = ModelCheckpoint(filepath='weights_best_Reset50_model.hdf5',
                               monitor='val_accuracy', verbose=1,
                               save_best_only=True, mode='max')
reduce = ReduceLROnPlateau(monitor='val_accuracy', patience=10,
                           verbose=1,
                           factor=0.5,
                           min_lr=1e-6)

model = ResNet50(weights=None, classes=len(mlb.classes_))
optimizer = Adam(learning_rate=INIT_LR)
model.compile(loss="binary_crossentropy", optimizer=optimizer,
              metrics=["accuracy"])

# train the network
print("[INFO] training network...")
history = model.fit(
    x=aug.flow(trainX, trainY, batch_size=BS),
    validation_data=(testX, testY),
    steps_per_epoch=len(trainX) // BS,
    epochs=EPOCHS, callbacks=[checkpointer, reduce], verbose=1)

# save the model to disk
print("[INFO] serializing network...")
model.save(args["model"], save_format="h5")

# plot the training loss and accuracy
import matplotlib.pyplot as plt

loss_trend_graph_path = r"WW_loss.jpg"
acc_trend_graph_path = r"WW_acc.jpg"
print("Now, we start drawing the loss and acc trend graphs...")

# summarize history for accuracy
fig = plt.figure(1)
plt.plot(history.history["accuracy"])
plt.plot(history.history["val_accuracy"])
plt.title("Model accuracy")
plt.ylabel("accuracy")
plt.xlabel("epoch")
plt.legend(["train", "test"], loc="upper left")
plt.savefig(acc_trend_graph_path)
plt.close(1)

# summarize history for loss
fig = plt.figure(2)
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("Model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "test"], loc="upper left")
plt.savefig(loss_trend_graph_path)
plt.close(2)

print("We are done, everything seems OK...")

Testing


  
# import the necessary packages
from tensorflow.keras.preprocessing.image import img_to_array
from tensorflow.keras.models import load_model
import numpy as np
import argparse
import imutils
import pickle
import cv2
import os

# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-m", "--model", default='weights_best_Reset50_model.hdf5',
                help="path to trained model")
ap.add_argument("-l", "--labelbin", default='labelbin',
                help="path to label binarizer")
ap.add_argument("-i", "--image", default='../dataset/0.jpg',
                help="path to input image")
args = vars(ap.parse_args())

# load the image
image = cv2.imread(args["image"])
output = imutils.resize(image, width=400)

# pre-process the image for classification
image = cv2.resize(image, (224, 224))
image = image.astype("float") / 255.0
image = img_to_array(image)
image = np.expand_dims(image, axis=0)

# load the trained convolutional neural network and the multi-label
# binarizer
print("[INFO] loading network...")
model = load_model(args["model"])
mlb = pickle.loads(open(args["labelbin"], "rb").read())

# classify the input image, then find the indexes of the two class
# labels with the *largest* probability
print("[INFO] classifying image...")
proba = model.predict(image)[0]
idxs = np.argsort(proba)[::-1][:2]

# loop over the indexes of the high-confidence class labels
for (i, j) in enumerate(idxs):
    # build the label and draw it on the image
    label = "{}: {:.2f}%".format(mlb.classes_[j], proba[j] * 100)
    cv2.putText(output, label, (10, (i * 30) + 25),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

# show the probabilities for each of the individual labels
for (label, p) in zip(mlb.classes_, proba):
    print("{}: {:.2f}%".format(label, p * 100))

# show the output image
cv2.imshow("Output", output)
cv2.waitKey(0)
References:

keras解决多标签分类问题: https://blog.csdn.net/somtian/article/details/79614570

Multi-label classification with Keras: https://www.pyimagesearch.com/2018/05/07/multi-label-classification-with-keras/

Source: wanghao.blog.csdn.net, author AI浩; copyright belongs to the original author.

Original link: wanghao.blog.csdn.net/article/details/111263824
