VGG16 Implementation

Nikolas · Published 2020/12/27 22:14:47
[Abstract] Implementing the VGG16 network with TensorFlow 2.

## 1. Import Dependencies


```python
from tensorflow import keras
import tensorflow as tf
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, Dense, Flatten, Dropout, MaxPool2D
```

## 2. Load the Data


```python
train = pd.read_csv('./data/fashion_train.csv')
test = pd.read_csv('./data/fashion_test.csv')
print(train.shape, test.shape)
```
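
Each row is expected to hold a class label in the first column followed by 784 flattened pixel values (28×28). A quick sanity check of that assumption (a sketch, not part of the original pipeline):

```python
# Expect 785 columns in the training file: 1 label + 28*28 pixel values.
print(train.shape[1], 1 + 28 * 28)
# The first column should contain class ids in the range 0-9.
print(sorted(train.iloc[:, 0].unique()))
```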

## 3. Data Preprocessing


```python
input_shape = (28, 28, 1)

# Column 0 holds the class label; the remaining 784 columns are pixel values.
# One-hot encode the labels and hold out 20% of the data for validation.
x = np.array(train.iloc[:, 1:])
y = keras.utils.to_categorical(np.array(train.iloc[:, 0]))
x_train, x_val, y_train, y_val = train_test_split(x, y, test_size=0.2)
print(x_train.shape, y_train.shape)

# The test file is assumed to contain only pixel columns (no label).
x_test = np.array(test.iloc[:, 0:])

# Reshape each flat 784-value row into a 28x28x1 image.
x_train = x_train.reshape(x_train.shape[0], 28, 28, 1)
x_val = x_val.reshape(x_val.shape[0], 28, 28, 1)
x_test = x_test.reshape(x_test.shape[0], 28, 28, 1)
print(x_train.shape, y_train.shape)

# Scale pixel values to [0, 1].
x_train = x_train.astype('float32')
x_val = x_val.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_val /= 255
x_test /= 255

# Training hyperparameters.
batch_size = 64
classes = 10
epochs = 5
```
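
To confirm that the reshape and normalization look right, one sample can be plotted with the matplotlib import from step 1; a minimal sketch:

```python
# Display one normalized training image (values now in [0, 1]) as a sanity check.
plt.imshow(x_train[0].reshape(28, 28), cmap='gray')
plt.title(f'one-hot label: {y_train[0]}')
plt.show()
```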

## 4. Build the Model


```python
model = keras.models.Sequential([
    # Block 1: two 3x3 convs with 64 filters, then 2x2 max pooling
    Conv2D(filters=64, kernel_size=(3, 3), padding='same', input_shape=input_shape)
    , BatchNormalization()
    , Activation('relu')
    , Conv2D(filters=64, kernel_size=(3, 3), padding='same')
    , BatchNormalization()
    , Activation('relu')
    , MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
    , Dropout(0.2)

    # Block 2: two 3x3 convs with 128 filters
    , Conv2D(filters=128, kernel_size=(3, 3), padding='same')
    , BatchNormalization()
    , Activation('relu')
    , Conv2D(filters=128, kernel_size=(3, 3), padding='same')
    , BatchNormalization()
    , Activation('relu')
    , MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
    , Dropout(0.2)

    # Block 3: three 3x3 convs with 256 filters
    , Conv2D(filters=256, kernel_size=(3, 3), padding='same')
    , BatchNormalization()
    , Activation('relu')
    , Conv2D(filters=256, kernel_size=(3, 3), padding='same')
    , BatchNormalization()
    , Activation('relu')
    , Conv2D(filters=256, kernel_size=(3, 3), padding='same')
    , BatchNormalization()
    , Activation('relu')
    , MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
    , Dropout(0.2)

    # Block 4: three 3x3 convs with 512 filters
    , Conv2D(filters=512, kernel_size=(3, 3), padding='same')
    , BatchNormalization()
    , Activation('relu')
    , Conv2D(filters=512, kernel_size=(3, 3), padding='same')
    , BatchNormalization()
    , Activation('relu')
    , Conv2D(filters=512, kernel_size=(3, 3), padding='same')
    , BatchNormalization()
    , Activation('relu')
    , MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
    , Dropout(0.2)

    # Block 5: three 3x3 convs with 512 filters
    , Conv2D(filters=512, kernel_size=(3, 3), padding='same')
    , BatchNormalization()
    , Activation('relu')
    , Conv2D(filters=512, kernel_size=(3, 3), padding='same')
    , BatchNormalization()
    , Activation('relu')
    , Conv2D(filters=512, kernel_size=(3, 3), padding='same')
    , BatchNormalization()
    , Activation('relu')
    , MaxPool2D(pool_size=(2, 2), strides=2, padding='same')
    , Dropout(0.2)

    # Classifier head: two fully connected layers, then softmax over the 10 classes
    , Flatten()
    , Dense(512, activation='relu')
    , Dropout(0.2)
    , Dense(512, activation='relu')
    , Dropout(0.2)
    , Dense(classes, activation='softmax')
])
```
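
With `input_shape` passed to the first layer, the network is already built and its structure can be inspected directly; a quick check:

```python
# Report each layer's output shape and parameter count; after five stride-2
# poolings the 28x28 input is reduced to 1x1 before Flatten.
model.summary()
```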

## 5. Define the Optimizer, Loss Function, and Metrics


```python
model.compile(optimizer='adam'
              , loss='categorical_crossentropy'
              , metrics=['accuracy'])
```
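
The string `'adam'` uses Keras' default settings. An equivalent form with an explicit optimizer object, shown as a sketch in case the learning rate needs tuning (1e-3 is simply Adam's default):

```python
# Equivalent compile call with an explicit Adam optimizer and learning rate.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```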

## 6. Checkpointing and Resuming Training


```python
save_path = './checkpoint/VGG16.ckpt'

# A TF-format weights checkpoint writes an .index file; if one exists,
# load the saved weights and resume training from there.
if os.path.exists(save_path + '.index'):
    print('model loading')
    model.load_weights(save_path)

# Save weights only, keeping just the best model (lowest val_loss by default).
cp_callback = keras.callbacks.ModelCheckpoint(filepath=save_path
                                              , save_weights_only=True
                                              , save_best_only=True)
```

## 7. Train the Model


```python
history = model.fit(x_train, y_train
                    , batch_size=batch_size
                    , epochs=epochs
                    , verbose=1
                    , validation_data=(x_val, y_val)
                    , callbacks=[cp_callback])
```
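
Because `ModelCheckpoint` keeps only the best weights (lowest val_loss by default), they can be restored after training and evaluated on the held-out validation split; a short sketch:

```python
# Reload the best checkpoint written during training and evaluate it.
model.load_weights(save_path)
val_loss, val_acc = model.evaluate(x_val, y_val, verbose=0)
print(f'val_loss={val_loss:.4f}, val_acc={val_acc:.4f}')
```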

## 8. Prediction Results


```python
# Predict class probabilities and take the argmax as the predicted label.
result = model.predict(x_test)
pred = tf.argmax(result, axis=1).numpy()

# Write the submission file with an image_id index column.
df = pd.DataFrame(pred, columns=['label'])
df.to_csv(path_or_buf='Submission.csv', index_label='image_id')
```
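
To spot-check the output before submitting, a test image can be displayed together with its predicted class index; a minimal sketch:

```python
# Show the first test image alongside its predicted class index (0-9).
plt.imshow(x_test[0].reshape(28, 28), cmap='gray')
plt.title(f'predicted class: {pred[0]}')
plt.show()
```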

## 9. Visualize Loss and Accuracy


```python
print(history.history.keys())
plt.plot(history.epoch, history.history.get('loss'), label='loss')
plt.plot(history.epoch, history.history.get('val_loss'), label='val_loss')
plt.legend()
plt.show()

plt.plot(history.epoch, history.history.get('accuracy'), label='acc')
plt.plot(history.epoch, history.history.get('val_accuracy'), label='val_acc')
plt.legend()
plt.show()
```
