Registering a Huawei Cloud account for ModelArts and implementing facial expression recognition on ModelArts
The goal of this sharing session is to walk beginners through getting started with ModelArts, using a hands-on facial expression recognition case as the running example.
I. First, register a Huawei Cloud account, complete student verification, and log in to ModelArts
The three-step strategy for ModelArts development:
Step 1: Create a personal account and complete both real-name and student verification, then claim the student benefits. (Complete this step first, otherwise you may incur unnecessary charges.)
1. Register an account on the Huawei Cloud website: https://reg.huaweicloud.com/registerui/cn/register.html?service=https://bbs.huaweicloud.com/forum/thread-52588-1-1.html#/register
2. After registering the account:
Or click on the right:
3. Go to the Account Center and complete real-name verification and student verification. (If you do not have a student ID yet, see further below.)
4. After student verification succeeds, search for the "沃土开发人员计划" (Huawei Cloud Developer Program):
· https://developer.huaweicloud.com/plan/developer.html
Scroll down the page:
· Apply to join the program and wait patiently for the review to pass.
· A voucher will be issued after approval.
· If you have not obtained a student ID yet, after completing account registration and real-name verification, open this page: https://activity.huaweicloud.com/AI_bigdata.html?utm_source=WeChat&utm_medium=sm-huaweiyun&utm_term=shequn
Click to claim the 100 CNY voucher at the lower left of the page.
· Alternatively, after completing account registration and real-name verification, open the ModelArts console:
· https://console.huaweicloud.com/modelarts/?region=cn-north-4#/dashboard
Both approaches are worth trying. In the Billing Center, top up a small amount (a few cents) so the system knows the user is not a bot.
Also check under Billing Center -> Discounts -> Coupons whether your voucher was issued successfully.
Only after the steps above are complete can you proceed with ModelArts AI development.
II. Next, we use a facial expression recognition case to demonstrate how to work in ModelArts
This case recognizes facial expressions with a deep learning model. We use the Kaggle 2013 facial expression recognition dataset (FER2013), which covers seven expressions. We first detect the face region with the MTCNN face detection algorithm, then classify the expression of the detected face with a deep learning classification network.
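At a high level the pipeline has two stages: MTCNN returns the face bounding box, and the classifier runs on the cropped 48x48 grayscale face. A minimal sketch of how the two stages fit together (preprocess() here is a hypothetical helper; the real calls appear in the cells below):

# Illustrative two-stage pipeline; preprocess() is a placeholder, not a real API
def recognize_expression(image, detector, classifier):
    faces = detector.detect_faces(image)          # stage 1: MTCNN face detection
    x, y, w, h = faces[0]['box']                  # take the first detected face
    face = image[y:y+h, x:x+w]                    # crop the face region
    probs = classifier.predict(preprocess(face))  # stage 2: expression classification
    return probs[0].argmax()                      # index into the 7 expression labels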
1. Enter ModelArts
Click the following link: https://www.huaweicloud.com/product/modelarts.html to open the ModelArts home page. Click the "Log In" button, enter your username and password, and on the ModelArts page click "Console".
In the service list on the far left, search for "ModelArts".
2. Create a ModelArts notebook
Under "Development Environment", click "Notebook".
Next, create an actual development environment by clicking "Create".
Under "GPU", select the "[Limited-time free] trial GPU flavor", check "I have read and agree to the above", then click "Next" and "Submit" to create the notebook development environment.
Next, open the notebook environment you just created.
Click "New" in the upper right corner and choose the TensorFlow 1.13.1 environment.
Click the file name "Untitled" in the upper left corner and enter a name related to this experiment, such as "facial_expression". (Note: the name must not contain Chinese characters!)
The development environment is ready; now we can start writing code.
Download the data and code
Run the following code to download and extract the data and code.
import os
from modelarts.session import Session
sess = Session()
if sess.region_name == 'cn-north-1':
    bucket_path = "modelarts-labs/notebook/DL_face_facial_expression/facial_expression.tar.gz"
elif sess.region_name == 'cn-north-4':
    bucket_path = "modelarts-labs-bj4/notebook/DL_face_facial_expression/facial_expression.tar.gz"
else:
    print("Please switch the region to CN North-1 or CN North-4")

if not os.path.exists('./fer2013'):
    sess.download_data(bucket_path=bucket_path, path="./facial_expression.tar.gz")
Successfully download file modelarts-labs-bj4/notebook/DL_face_facial_expression/facial_expression.tar.gz from OBS to local ./facial_expression.tar.gz
if os.path.exists('./facial_expression.tar.gz'):
    # Extract the archive with the tar command
    os.system("tar -xf ./facial_expression.tar.gz")
    # Remove the archive to clean up
    os.system("rm ./facial_expression.tar.gz")
!pip install mtcnn==0.0.8
!pip install numpy==1.16.2
Collecting mtcnn==0.0.8
  Downloading http://repo.myhuaweicloud.com/repository/pypi/packages/0b/f5/d62ac2bdf1c683b7650268305db3126323a7b6a2f6390273038285fa9e3f/mtcnn-0.0.8.tar.gz (2.3MB)
    100% |████████████████████████████████| 2.3MB 44.2MB/s
Building wheels for collected packages: mtcnn
  Running setup.py bdist_wheel for mtcnn ... done
  Stored in directory: /home/ma-user/.cache/pip/wheels/5b/79/11/d14d6cffd223ad2ec9848799f86adc06c4973367bd9aa4fd61
Successfully built mtcnn
Installing collected packages: mtcnn
Successfully installed mtcnn-0.0.8
You are using pip version 9.0.1, however version 20.2.4 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Collecting numpy==1.16.2
  Downloading http://repo.myhuaweicloud.com/repository/pypi/packages/35/d5/4f8410ac303e690144f0a0603c4b8fd3b986feb2749c435f7cdbb288f17e/numpy-1.16.2-cp36-cp36m-manylinux1_x86_64.whl (17.3MB)
    100% |████████████████████████████████| 17.3MB 117.8MB/s
Installing collected packages: numpy
  Found existing installation: numpy 1.19.1
    Uninstalling numpy-1.19.1:
      Successfully uninstalled numpy-1.19.1
Successfully installed numpy-1.16.2
You are using pip version 9.0.1, however version 20.2.4 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D
from keras.layers import Dense, Activation, Dropout, Flatten
from keras.preprocessing.image import ImageDataGenerator
Using TensorFlow backend.
emotions = ('angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral')
num_classes = 7  # number of expression classes
batch_size = 16  # batch size
epochs = 5       # number of training epochs
with open("./fer2013/fer2013.csv") as f:
content = f.readlines()
lines = np.array(content)
Inspect one sample
emotion_1, img_1, usage_1 = lines[1].split(",")
val_1 = img_1.split(" ")
pixels_1 = np.array(val_1, 'float32')
print(emotion_1)
print(usage_1)
print(pixels_1.shape)
0
Training
(2304,)
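The 2304 pixel values are a flattened 48x48 grayscale image. If you want to look at the sample itself, a minimal optional sketch (PIL is also used later in this notebook):

# Optional: reshape the flat pixel vector into a 48x48 image and display it
from PIL import Image
Image.fromarray(pixels_1.reshape(48, 48).astype('uint8'))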
Print the total number of samples in the dataset
num_of_instances = lines.size
num_of_instances
35888
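The 35888 rows include one header line plus samples tagged by usage (Training, PublicTest, and PrivateTest in the original FER2013 release). If you want to count each split before splitting the data, an optional sketch:

# Optional: count samples per usage split (skip the CSV header line)
from collections import Counter
print(Counter(line.strip().split(",")[-1] for line in lines[1:]))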
Split the data into training and test sets
x_train, y_train, x_test, y_test = [], [], [], []

for i in range(1, num_of_instances):
    try:
        emotion, img, usage = lines[i].split(",")
        val = img.split(" ")
        pixels = np.array(val, 'float32')
        emotion = keras.utils.to_categorical(emotion, num_classes)
        if 'Training' in usage:
            y_train.append(emotion)
            x_train.append(pixels)
        elif 'PublicTest' in usage:
            y_test.append(emotion)
            x_test.append(pixels)
    except:
        print("", end="")
Normalize the data and reshape the images
x_train = np.array(x_train, 'float32')
y_train = np.array(y_train, 'float32')
x_test = np.array(x_test, 'float32')
y_test = np.array(y_test, 'float32')
x_train /= 255
x_test /= 255
x_train = x_train.reshape(x_train.shape[0], 48, 48, 1)
x_train = x_train.astype('float32')
x_test = x_test.reshape(x_test.shape[0], 48, 48, 1)
x_test = x_test.astype('float32')
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
28709 train samples
3589 test samples
Create the data generator
gen = ImageDataGenerator()
train_generator = gen.flow(x_train, y_train, batch_size=batch_size)
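ImageDataGenerator is used here without any augmentation, so it only batches the training data. If you want light augmentation during fine-tuning, a hedged sketch of how that could look (the parameter values below are illustrative, not from the original notebook):

# Optional: light data augmentation (illustrative values)
aug_gen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1,
                             horizontal_flip=True)
aug_generator = aug_gen.flow(x_train, y_train, batch_size=batch_size)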
def build_model():
    model = Sequential()

    # 1st convolution layer
    model.add(Conv2D(64, (5, 5), activation='relu', input_shape=(48, 48, 1)))
    model.add(MaxPooling2D(pool_size=(5, 5), strides=(2, 2)))

    # 2nd convolution layer
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(AveragePooling2D(pool_size=(3, 3), strides=(2, 2)))

    # 3rd convolution layer
    model.add(Conv2D(128, (3, 3), activation='relu'))
    model.add(Conv2D(128, (3, 3), activation='relu'))
    model.add(AveragePooling2D(pool_size=(3, 3), strides=(2, 2)))

    model.add(Flatten())

    # fully connected layers
    model.add(Dense(1024, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(1024, activation='relu'))
    model.add(Dropout(0.2))
    model.add(Dense(num_classes, activation='softmax'))

    return model
from keras.models import model_from_json
model = build_model()
model.compile(loss='categorical_crossentropy', optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
model.load_weights('./model/facial_expression_model_weights.h5')  # load the pretrained weights
WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.
Instructions for updating:
Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.
View the model structure
model.summary()
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 44, 44, 64) 1664
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 20, 20, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 18, 18, 64) 36928
_________________________________________________________________
conv2d_3 (Conv2D) (None, 16, 16, 64) 36928
_________________________________________________________________
average_pooling2d_1 (Average (None, 7, 7, 64) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 5, 5, 128) 73856
_________________________________________________________________
conv2d_5 (Conv2D) (None, 3, 3, 128) 147584
_________________________________________________________________
average_pooling2d_2 (Average (None, 1, 1, 128) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 1024) 132096
_________________________________________________________________
dropout_1 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_2 (Dense) (None, 1024) 1049600
_________________________________________________________________
dropout_2 (Dropout) (None, 1024) 0
_________________________________________________________________
dense_3 (Dense) (None, 7) 7175
=================================================================
Total params: 1,485,831
Trainable params: 1,485,831
Non-trainable params: 0
_________________________________________________________________
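As a sanity check, the parameter counts in the summary follow directly from the layer shapes. A short worked calculation using the standard Conv2D and Dense parameter formulas:

# conv2d_1: (kernel_h * kernel_w * in_channels + 1 bias) * filters
print((5 * 5 * 1 + 1) * 64)    # 1664
# conv2d_2: 3x3 kernels over 64 input channels
print((3 * 3 * 64 + 1) * 64)   # 36928
# dense_1: flattened 128 features into 1024 units
print((128 + 1) * 1024)        # 132096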
model.fit_generator(train_generator, steps_per_epoch=batch_size, epochs=epochs)
WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/5
16/16 [==============================] - 7s 432ms/step - loss: 0.5990 - acc: 0.7852
Epoch 2/5
16/16 [==============================] - 0s 5ms/step - loss: 0.6849 - acc: 0.7773
Epoch 3/5
16/16 [==============================] - 0s 5ms/step - loss: 0.6426 - acc: 0.7852
Epoch 4/5
16/16 [==============================] - 0s 5ms/step - loss: 0.5970 - acc: 0.7969
Epoch 5/5
16/16 [==============================] - 0s 5ms/step - loss: 0.7880 - acc: 0.7344
<keras.callbacks.History at 0x7fc390105f28>
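Because steps_per_epoch is set to batch_size (16 batches per epoch), fine-tuning touches only a small part of the data, so the accuracy above is a rough indicator. If you want to measure performance on the held-out PublicTest split and keep the fine-tuned weights, an optional sketch (the output file name is illustrative):

# Optional: evaluate on the test split and save the fine-tuned weights
test_loss, test_acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('test loss: %.4f, test accuracy: %.4f' % (test_loss, test_acc))
model.save_weights('./model/facial_expression_model_weights_finetuned.h5')  # illustrative path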
import cv2

# Read the test image and convert it from OpenCV's BGR channel order to RGB
img = cv2.cvtColor(cv2.imread("./test.jpg"), cv2.COLOR_BGR2RGB)
from PIL import Image
Image.fromarray(img)
Detect the face region with the MTCNN algorithm
from mtcnn.mtcnn import MTCNN
detector = MTCNN()
result = detector.detect_faces(img)
result
WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/mtcnn/layer_factory.py:211: calling reduce_max_v1 (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/mtcnn/layer_factory.py:213: calling reduce_sum_v1 (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
WARNING:tensorflow:From /home/ma-user/anaconda3/envs/TensorFlow-1.13.1/lib/python3.6/site-packages/mtcnn/layer_factory.py:214: div (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Deprecated in favor of operator or tf.math.divide.
[{'box': [66, 68, 95, 123],
'confidence': 0.9999872446060181,
'keypoints': {'left_eye': (101, 111),
'mouth_left': (98, 156),
'mouth_right': (144, 156),
'nose': (128, 137),
'right_eye': (142, 111)}}]
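The result contains the bounding box and five facial keypoints. To visualize them on the image, a minimal optional sketch using OpenCV drawing calls (drawing on a copy so img itself is unchanged):

# Optional: draw the detected box and keypoints on a copy of the image
vis = img.copy()
bx, by, bw, bh = result[0]['box']
cv2.rectangle(vis, (bx, by), (bx + bw, by + bh), (0, 255, 0), 2)
for px, py in result[0]['keypoints'].values():
    cv2.circle(vis, (px, py), 2, (255, 0, 0), 2)
Image.fromarray(vis)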
Display the detected face region
x, y, w, h = result[0]["box"]
detected_face = img[int(y):int(y+h), int(x):int(x+w)]            # crop the detected face region
detected_face = cv2.cvtColor(detected_face, cv2.COLOR_BGR2GRAY)  # convert to a single-channel grayscale image
detected_face = cv2.resize(detected_face, (48, 48))              # resize to the 48x48 input expected by the model
Image.fromarray(detected_face)
Predict the facial expression with the expression classification model
from keras.preprocessing import image
import numpy as np
img_pixels = image.img_to_array(detected_face)
img_pixels = np.expand_dims(img_pixels, axis = 0)
img_pixels /= 255
predictions = model.predict(img_pixels)
max_index = np.argmax(predictions[0])
result = emotions[max_index]
result
'happy'
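Beyond the top-1 label, you can also inspect the full probability distribution over the seven expressions (a small optional snippet reusing predictions and emotions from above):

# Optional: print the predicted probability for each expression
for name, prob in zip(emotions, predictions[0]):
    print('%-10s %.4f' % (name, prob))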