[Computer Vision] E经营 — An Intelligent Business Assistance System

yd_213964667, published 2023/12/01 17:24:57
[Abstract] To improve the shopping experience and, in turn, their own returns, merchants need comprehensive, multi-dimensional customer data to analyze. This system collects such data and combines object tracking with a centroid algorithm, a recommendation algorithm, and a database to provide customer-flow monitoring, returning-customer recognition, and product recommendation, ultimately producing personalized shopping recommendations for each customer. Its distinguishing feature is that it was developed on Huawei Cloud's one-stop AI development platform and pairs with Huawei HiLens hardware, yielding a low-latency, hardware-software-integrated AI product.

Design Approach

  Stores of every kind are multiplying, customers have more purchasing choices than ever, and their expectations for the shopping experience keep rising, so providing better service has become a central concern for merchants. At the same time, rising labor costs make improving efficiency and cutting unnecessary staffing expenses equally pressing. To help merchants improve their operations, we designed this AI software. It monitors in-store customer flow in real time so staffing can be adjusted on demand, and its returning-customer recognition lets merchants offer regulars more attentive, discounted service while introducing new customers to the store's specialties, building good customer relationships. For merchants who want to go further on service quality and profitability, we also provide membership management together with a precision-marketing module: after linking each member's basic information to their purchase records to build a user profile, the software analyzes their past purchases along multiple dimensions and delivers personalized service, strengthening the merchant's competitiveness and giving customers a better shopping experience. The implementation is divided into several functional modules, namely customer-flow monitoring, returning-customer recognition, membership management, and precision marketing, which are introduced below.

1. Customer-Flow Monitoring

  First, because customer-flow statistics play an increasingly important role in store operations decisions, a store needs a macro-level understanding of its customer base. Ordinary stores often install infrared sensors at the entrance to detect and record passers-by, but these sensors cannot handle problems such as obstruction by objects or people moving back and forth repeatedly. Moreover, a store needs more than simple counts of visits and transactions; it also needs data such as the real-time number of customers inside, so that staffing can be allocated sensibly across different times of day. We therefore solve these problems by building a model that detects people directly.
  Before training the model, we labeled the data on the Huawei Cloud ModelArts platform with a combination of automatic and manual annotation, manually corrected the mislabeled samples, and then split the data into a training set and a test set.
  Here we created two labeling tasks:

  The labeling results are as follows:

  While building the model we collected a number of different pedestrian datasets. Since Huawei's HiLens deep-learning hardware can run a model on the video it captures, and to make later integration with it easier, we trained the model with the YOLOv5 algorithm.

  We then deployed the trained model behind a high-performance API. With it we capture each video frame and call the deep-learning model to detect people in real time. A centroid algorithm then tracks every detected person, and the direction of each trajectory tells us whether a customer is entering or leaving the store, giving a dynamic count of the people inside. In addition, we use OpenCV to read, annotate, and refresh the video, drawing the person annotations and the live entry/exit counts onto recorded footage or the live surveillance feed, so that the merchant can see the numbers directly in the video; the same data are also written to a file.

  After processing and parsing enough video, we use Python's matplotlib library to plot the recorded data as a dynamic chart, visualizing how the number of people in the store changes over time. Statistically this chart is a time series, so we fit an ARIMA model to detect anomalies (peaks) and notify the merchant, and the same model can be used to forecast future trends. These outputs give merchants a basis for adjusting business hours and for allocating staff on ordinary days, holidays, and during the pandemic. By analyzing the areas where customers linger or pay the most attention, merchants can also improve store layout, guide customer traffic, and optimize product displays, raising operating efficiency and letting a small camera do a big job.
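  As a concrete illustration of this step, the sketch below (not the project's exact code) fits an ARIMA model with statsmodels to the counts written to dependents/people_count.txt, flags counts that deviate strongly from the in-sample fit as peaks, and forecasts the next 60 observations; the (2, 1, 1) order and the three-sigma threshold are illustrative choices only.

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# assumed input: one in-store count per line, as written by the counting module
counts = pd.Series(np.loadtxt("dependents/people_count.txt", dtype=int))

model = ARIMA(counts, order=(2, 1, 1))       # (p, d, q) picked for illustration only
fitted = model.fit()

# flag counts that deviate strongly from the in-sample fit as peaks worth alerting on
residuals = counts - fitted.predict()
threshold = 3 * residuals.std()
peaks = counts[residuals.abs() > threshold]

# forecast the next 60 observations as a rough short-term trend
forecast = fitted.forecast(steps=60)
print("anomalous time points:", list(peaks.index))
print("forecast peak:", float(forecast.max()))
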
  To accommodate merchants without an internet connection, we also designed a file-import interface for recorded video: such merchants can hand the surveillance footage they want analyzed to an organization (for example, a township service station), which runs the analysis on their behalf and sends the results back, helping township stores improve their management and customer experience and, in turn, supporting the local economy.
  This feature can also be applied widely to crowded places such as tourist attractions, stations, and exhibition areas, so that measures can be taken promptly as the real-time customer flow changes.

At this stage we extract the model's API endpoint, token, and related information, and apply the model trained on the Huawei ModelArts platform to customer-flow counting:

# Note: this method belongs to the PyQt5 main-window class. The excerpt assumes the usual
# module-level imports: time, json, cv2, dlib, imutils, requests, numpy as np,
# PyQt5.QtGui as QtGui, VideoStream and FPS from imutils.video, and the project's
# CentroidTracker and TrackableObject classes (CentroidTracker is listed later in this article).
def counting_video(self, args):
        
        url = "https://44c3135b8b884669b181a7816fffd9c6.apig.cn-north-4.huaweicloudapis.com/v1/infers/e780fdfd-bd0d-42fd-a07c-ea1dfc00e7f3"
        payload={}
        headers = {
                # IAM token for the deployed ModelArts inference service; the full token is
                # omitted here -- request a fresh one from the IAM API before calling the endpoint
                'X-Auth-Token': '<YOUR_IAM_TOKEN>'
        }
        output = "dependents/counting_videos/testoutput.mp4"
        defaultconfidence = 0.5
        skipframes = 30
        
        # if a video path was not supplied, grab a reference to the webcam
        if args[0]:
            print("[INFO] starting video stream...")
            vs = VideoStream(src=0).start()
            time.sleep(2.0)
        
        # otherwise, grab a reference to the video file
        else:
            print("[INFO] opening video file...")
            vs = cv2.VideoCapture(args[1])
        
        # initialize the video writer (we'll instantiate later if need be)
        writer = None
        
        # initialize the frame dimensions (we'll set them as soon as we read 
        # the first frame from the video)
        W = None
        H = None
        
        # instantiate our centroid tracker, then initialize a list to store
        # each of our dlib correlation trackers, followed by a dictionary to
        # map each unique object ID to a TrackableObject
        ct = CentroidTracker(maxDisappeared=40, maxDistance=50)
        trackers = []
        trackableObjects = {}
        
        # initialize the total number of frames processed thus far, along
        # with the total number of objects that have moved either up or down
        totalFrames = 0
        totalRight = 0
        totalLeft = 0
        
        # start the frames per second throughput estimator
        fps = FPS().start()
        
        # loop over frames from the video stream
        while True:
            # grab the next frame and handle if we are reading from either
            # VideoCapture or VideoStream
            frame = vs.read()
            frame = frame[1] if not args[0] else frame
        
            # resize the frame to have a maximum width of 700 pixels (the
            # less data we have, the faster we can process it), then convert
            # the frame from BGR to RGB for dlib
            frame = imutils.resize(frame, width=700)
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        
            # if the frame dimensions are empty, set them
            if W is None or H is None:
                (H, W) = frame.shape[:2]
        
            # if we are supposed to be writing a video to disk, initialize
            # the writer
            if output is not None and writer is None:
                fourcc = cv2.VideoWriter_fourcc(*"MJPG")
                writer = cv2.VideoWriter(output, fourcc, 30,
                    (W, H), True)
        
            # initialize the current status along with our list of bounding
            # box rectangles returned by either (1) our object detector or
            # (2) the correlation trackers
            status = "Waiting"
            rects = []
        
            # check to see if we should run a more computationally expensive
            # object detection method to aid our tracker
            if totalFrames % skipframes == 0:
                # set the status and initialize our new set of object trackers
                status = "Detecting"
                trackers = []
        
                # save the frame and read it as an image
                cv2.imwrite("dependents/1.png", frame)
                files = [
                    ('images',('dependents/1.png',open('dependents/1.png','rb'),'image/png'))
                ]
        
                # input the image into the deployed model as a file parameter
                response = requests.request("POST", url, headers=headers, data=payload, files=files)
        
                detections = json.loads(response.text)
                if detections is None:
                    continue
                # loop over the detections
                for i in np.arange(0, len(detections["detection_classes"])):
                    # extract the confidence (i.e., probability) associated
                    # with the prediction
                    confidence = detections["detection_scores"][i]
        
                    # filter out weak detections by requiring a minimum
                    # confidence
                    if confidence > defaultconfidence:
        
                        # compute the (x, y)-coordinates of the bounding box
                        # for the object
        
                        startX=int(detections["detection_boxes"][i][1])
                        startY=int(detections["detection_boxes"][i][0])
                        endX=int(detections["detection_boxes"][i][3])
                        endY=int(detections["detection_boxes"][i][2])
        
                        # construct a dlib rectangle object from the bounding
                        # box coordinates and then start the dlib correlation
                        # tracker
                        tracker = dlib.correlation_tracker()
                        rect = dlib.rectangle(startX, startY, endX, endY)
                        tracker.start_track(rgb, rect)
        
                        # add the tracker to our list of trackers so we can
                        # utilize it during skip frames
                        trackers.append(tracker)
        
            # otherwise, we should utilize our object *trackers* rather than
            # object *detectors* to obtain a higher frame processing throughput
            else:
                # loop over the trackers
                for tracker in trackers:
                    # set the status of our system to be 'tracking' rather
                    # than 'waiting' or 'detecting'
                    status = "Tracking"
        
                    # update the tracker and grab the updated position
                    tracker.update(rgb)
                    pos = tracker.get_position()
        
                    # unpack the position object
                    startX = int(pos.left())
                    startY = int(pos.top())
                    endX = int(pos.right())
                    endY = int(pos.bottom())
        
                    # add the bounding box coordinates to the rectangles list
                    rects.append((startX, startY, endX, endY))
        
            # draw a vertical line in the center of the frame -- once an
            # object crosses this line we will determine whether they were
            # moving 'left' or 'right'
            cv2.line(frame, (W // 2,0), (W // 2,H), (0, 255, 255), 2)
        
            # use the centroid tracker to associate the (1) old object
            # centroids with (2) the newly computed object centroids
            objects = ct.update(rects)
        
            # loop over the tracked objects
            for (objectID, centroid) in objects.items():
                # check to see if a trackable object exists for the current
                # object ID
                to = trackableObjects.get(objectID, None)
        
                # if there is no existing trackable object, create one
                if to is None:
                    to = TrackableObject(objectID, centroid)
        
                # otherwise, there is a trackable object so we can utilize it
                # to determine direction
                else:
                    # the difference between the x-coordinate of the *current*
                    # centroid and the mean of *previous* centroids will tell
                    # us in which direction the object is moving (negative for
                    # 'left' and positive for 'right')
                    x = [c[0] for c in to.centroids]
                    direction = centroid[0] - np.mean(x)
                    to.centroids.append(centroid)
        
                    # check to see if the object has been counted or not
                    if not to.counted:
                        # if the direction is negative (indicating the object
                        # is moving left) AND the centroid is on the left of
                        # the center line, count the object
                        if direction < 0 and centroid[0] < W // 2:
                            totalLeft += 1
                            to.counted = True
        
                        # if the direction is positive (indicating the object
                        # is moving right) AND the centroid is on the right of
                        # the center line, count the object
                        elif direction > 0 and centroid[0] > W // 2:
                            totalRight += 1
                            to.counted = True
        
                # store the trackable object in our dictionary
                trackableObjects[objectID] = to
        
                # draw both the ID of the object and the centroid of the
                # object on the output frame
                text = "ID {}".format(objectID)
                cv2.putText(frame, text, (centroid[0] - 10, centroid[1] - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
                cv2.circle(frame, (centroid[0], centroid[1]), 4, (0, 255, 0), -1)
        
            # construct a tuple of information we will be displaying on the
            # frame
            info = [
                ("Left", totalLeft),
                ("Right", totalRight),
                ("Status", status),
            ]
        
            # loop over the info tuples and draw them on our frame
            for (i, (k, v)) in enumerate(info):
                text = "{}: {}".format(k, v)
                cv2.putText(frame, text, (10, H - ((i * 20) + 20)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 0, 255), 2)
        
            # check to see if we should write the frame to disk
            if writer is not None:
                writer.write(frame)
        
            # show the output frame
            # cv2.imshow("Frame", frame)
            self.LblVideo.setPixmap(QtGui.QPixmap.fromImage(
                QtGui.QImage(cv2.cvtColor(imutils.resize(frame, width=700), cv2.COLOR_BGR2RGB),
                             frame.shape[1],
                             frame.shape[0],
                             QtGui.QImage.Format_RGB888)))
            key = cv2.waitKey(1) & 0xFF
        
            # if the `q` key was pressed, break from the loop
            if key == ord("q"):
                break
        
            # increment the total number of frames processed thus far and
            # then update the FPS counter
            totalFrames += 1
            fps.update()
        
        # stop the timer and display FPS information
        fps.stop()
        print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
        print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))
        
        # check to see if we need to release the video writer pointer
        if writer is not None:
            writer.release()
        
        # if we are not using a video file, stop the camera video stream
        if args[0]:
            vs.stop()
        
        # otherwise, release the video file pointer
        else:
            vs.release()
def frequent_video(self):
        # returning-customer recognition loop described in the next section
        # (assumes the face_recognition, os and datetime modules are imported)
        video_capture = VideoStream(src=0).start()
        frequenters=[]
        time_start=datetime.datetime.now()
        num=0


        # load the 128-d encodings of every previously stored customer face
        for root, ds, fs in os.walk("dependents/face_images"):
            for index, file in enumerate(sorted(fs)):
                path = os.path.basename(file)
                frequenters.append(face_recognition.face_encodings(
                    face_recognition.load_image_file(r"dependents/face_images/" + path))[0])

        face_locations = []
        face_encodings = []
        face_names = []
        process_this_frame = True
        # `count` numbers the stored face images; start after the faces already on disk
        count = len(frequenters)
        # initialise every known customer's "last seen" time far in the past
        last_time = [datetime.datetime.strptime('2015-6-1 18:19:59', '%Y-%m-%d %H:%M:%S')] * len(frequenters)
        while True:
            frame = video_capture.read()
            small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
            # face_recognition expects RGB input, while OpenCV frames are BGR
            rgb_small_frame = cv2.cvtColor(small_frame, cv2.COLOR_BGR2RGB)
            if process_this_frame:
                face_locations = face_recognition.face_locations(rgb_small_frame)
                face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
                face_names = []
                for index, face_encoding in enumerate(face_encodings):
                    match = face_recognition.compare_faces(frequenters, face_encoding)
                    for known_idx, result in enumerate(match):
                        if result:
                            # look up the stored visit count for this known customer
                            try:
                                with open(r"dependents/face_count/No." + str(known_idx + 1) + ".txt") as times_file:
                                    times = int(times_file.read()[0:-1])
                            except (IOError, ValueError):
                                times = 1
                            # count the same customer again only after 3 minutes have passed
                            if (datetime.datetime.now() - last_time[known_idx]).total_seconds() / 60 > 3:
                                times += 1
                                with open(r"dependents/face_count/No." + str(known_idx + 1) + ".txt", "w") as times_file:
                                    times_file.write(str(times) + "a")
                                last_time[known_idx] = datetime.datetime.now()
                            break
                    else:
                        # no known face matched: register this visitor as a new customer
                        count += 1
                        times = 1
                        top, right, bottom, left = face_locations[index]
                        crop_img = small_frame[top:bottom, left:right]
                        cv2.imwrite(r'dependents/face_images/No.' + str(count) + '.jpg', crop_img,
                                    [int(cv2.IMWRITE_JPEG_QUALITY), 100])
                        with open(r"dependents/face_count/No." + str(count) + ".txt", "w") as times_file:
                            times_file.write(str(times) + "a")
                        # remember the new face so it is recognized as a returning customer next time
                        frequenters.append(face_encoding)
                        last_time.append(datetime.datetime.now())
                    face_names.append(times)
            process_this_frame = not process_this_frame
            for (top, right, bottom, left), name in zip(face_locations, face_names):
                top *= 4
                right *= 4
                bottom *= 4
                left *= 4
                cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255),  2)
                cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 0, 255), 2)
                font = cv2.FONT_HERSHEY_DUPLEX
                cv2.putText(frame, str(name), (left+6, bottom-6), font, 1.0, (255, 255, 255), 1)

            cv2.imwrite(r'dependents/2.png', frame,[int(cv2.IMWRITE_JPEG_QUALITY), 100])
            x = QtGui.QPixmap(r"dependents/2.png")
            self.LblVideo.setPixmap(x)

            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

        # VideoStream objects are stopped with stop() (release() is for cv2.VideoCapture)
        video_capture.stop()

2. Returning-Customer Recognition

  To serve new and returning customers better and improve their shopping experience, we built a returning-customer recognition feature so that merchants can form a first impression of each visitor and serve them more attentively. A camera at the checkout counter captures each customer's face in real time. Using the deep-learning models in dlib, a leading open-source C++ library, a neural network computes a 128-dimensional feature vector for each face, which is compared against the photos already stored in the customer database to decide whether the visitor is new or returning. Based on the number of previous visits, the system suggests a suitable greeting or promotion to the merchant, so that returning customers feel familiarity and trust and new customers are motivated to come back.
  In addition, to keep repurchase rates up, we can work with the merchant to scale discounts automatically with visit counts, or, based on our future analysis of the data, recommend the most suitable discount level, increasing repurchases and profits. Once a customer's visit count reaches a given threshold, the merchant can also invite them to become a member, so that the membership-management and precision-marketing modules can provide more targeted, personalized service.

3. Membership Management and Precision Marketing

  In addition to customer-flow monitoring and returning-customer recognition, we provide merchants with membership-management and precision-marketing modules. To avoid infringing on customers' privacy, these modules record a customer's personal information and purchase details only with the customer's consent, and then use them to deliver more targeted service. First, we can optionally ask for basic information (birthday, gender, age range, family situation, and so on), which later enables services such as automatically sending birthday greetings, increasing discounts, or offering gifts on a customer's birthday, encouraging them to visit the store and enjoy the offers that day. For stores that place great weight on customer experience, we can also recommend gift types based on each customer's everyday shopping preferences to further encourage spending, and based on gender and age range we can recommend products suited to the customer's age group for a better shopping experience.
  Second, using the data captured during purchases together with the customer's face image, we bind each customer to their purchase behavior in a relational database and build a shopping-preference profile for every customer in it, supporting later analysis and enabling "one-click" membership registration and precise analysis. Compared with the tedious membership-card procedures of traditional stores, this is far more convenient, and the considerate sign-up experience itself raises repurchase rates and, directly or indirectly, the customer's trust in and loyalty to the store.
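  As a minimal sketch of the relational storage described above (using Python's built-in sqlite3; the table and column names are our own illustration, not the project's actual schema):

import sqlite3

conn = sqlite3.connect("dependents/members.db")
cur = conn.cursor()
cur.execute("""CREATE TABLE IF NOT EXISTS members (
    member_id  INTEGER PRIMARY KEY,
    face_image TEXT,   -- path of the face crop used for recognition
    birthday   TEXT,
    gender     TEXT,
    age_range  TEXT)""")
cur.execute("""CREATE TABLE IF NOT EXISTS purchases (
    purchase_id  INTEGER PRIMARY KEY AUTOINCREMENT,
    member_id    INTEGER REFERENCES members(member_id),
    item         TEXT,
    amount       REAL,
    purchased_at TEXT)""")

# register a member and record one purchase
cur.execute("INSERT OR IGNORE INTO members VALUES (1, 'dependents/face_images/No.1.jpg', '1995-06-01', 'F', '25-34')")
cur.execute("INSERT INTO purchases (member_id, item, amount, purchased_at) VALUES (1, 'green tea', 12.5, '2023-11-30')")
conn.commit()

# the per-member purchase history later used to build the preference profile
for row in cur.execute("SELECT item, COUNT(*) FROM purchases WHERE member_id = 1 GROUP BY item"):
    print(row)
conn.close()
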
  Next, we infer each customer's preferences and needs and use a recommendation algorithm to suggest products for their next visit. This is based on item-based collaborative filtering (the ItemCF algorithm), which makes and explains recommendations to the merchant from each customer's historical behavior: it computes an item-similarity matrix, updates it daily, and combines it with the customer's history to generate a list of recommended products. Customers receive personalized introductions to new products that resemble what they have bought before and therefore reflect their own shopping preferences. This makes the staff's work more targeted, lets customers see what similar items they have bought in the past, allows the merchant to record consumption habits ever more precisely with every choice the customer makes, and ultimately grows store revenue through precise purchase recommendations.
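  The following is a minimal ItemCF sketch of the idea above: it builds an item-item similarity matrix from co-occurrence in purchase histories and scores items the customer has not yet bought; the toy purchase data and names are illustrative only, not the project's data.

import math
from collections import defaultdict

# illustrative purchase histories: customer -> set of purchased items
histories = {
    "cust_1": {"green tea", "biscuits", "milk"},
    "cust_2": {"green tea", "milk"},
    "cust_3": {"coffee", "biscuits"},
}

# item-item co-occurrence counts and item popularity
co = defaultdict(lambda: defaultdict(int))
popularity = defaultdict(int)
for items in histories.values():
    for i in items:
        popularity[i] += 1
        for j in items:
            if i != j:
                co[i][j] += 1

# similarity: co-occurrence normalized by the geometric mean of the item popularities
sim = {i: {j: c / math.sqrt(popularity[i] * popularity[j]) for j, c in js.items()}
       for i, js in co.items()}

def recommend(customer, top_n=3):
    """Score items the customer has not bought by summing similarities to bought items."""
    bought = histories[customer]
    scores = defaultdict(float)
    for i in bought:
        for j, s in sim.get(i, {}).items():
            if j not in bought:
                scores[j] += s
    return sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]

print(recommend("cust_2"))   # e.g. [('biscuits', 1.0)]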

Key Design Challenges

1. Choosing an Object Detection Algorithm

  After surveying the object-detection algorithms that are currently popular and widely used, we analyzed the following candidates in light of the project's goals:

Algorithm | Advantages | Disadvantages
Faster R-CNN | Improves speed over R-CNN and Fast R-CNN; extremely high accuracy | Still relatively slow compared with the other algorithms
YOLO | Fast; easily meets the speed requirement of real-time monitoring and avoids video stutter | Somewhat lower accuracy than Faster R-CNN; weaker on small objects and on objects close to each other
SSD | Balances the strengths of Faster R-CNN and YOLO | Middling in both speed and accuracy

  Since the project is about real-time customer-flow monitoring, where the relative size of the flow matters more than an exact head count, we compared the options and chose the latest YOLOv5, whose speed lets the number of customers entering and leaving be displayed smoothly during monitoring.
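  For readers who want to try a YOLOv5 detector locally before deploying one, the snippet below loads the pretrained yolov5s weights through torch.hub and counts the people in a saved frame; note that the project itself calls the model deployed on ModelArts through the HTTP API shown earlier, so this is only a rough local illustration.

import torch

# load the small pretrained COCO model through torch.hub (downloads weights on first run)
model = torch.hub.load("ultralytics/yolov5", "yolov5s")
results = model("dependents/1.png")          # a frame saved by the counting code

detections = results.pandas().xyxy[0]        # one row per detected object
people = detections[detections["name"] == "person"]
print(len(people), "people detected")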

2. Training the Object Detection Model

Existing object-detection algorithms and models classify every kind of object they detect, whereas our project focuses on recognizing people and needs higher accuracy on human figures, so we chose to train our own model for better fit and precision. We first collected datasets matching real customer-flow monitoring scenes, labeled them on the Huawei ModelArts platform to save time, trained a model suited to the project's scenario with YOLOv5, and generated an API that can be called in real time, making it easy to annotate live footage. The CentroidTracker class used later in the counting step is listed below.

class CentroidTracker:
	def __init__(self, maxDisappeared=50, maxDistance=50):
		# initialize the next unique object ID along with two ordered
		# dictionaries used to keep track of mapping a given object
		# ID to its centroid and number of consecutive frames it has
		# been marked as "disappeared", respectively
		self.nextObjectID = 0
		self.objects = OrderedDict()
		self.disappeared = OrderedDict()

		# store the number of maximum consecutive frames a given
		# object is allowed to be marked as "disappeared" until we
		# need to deregister the object from tracking
		self.maxDisappeared = maxDisappeared

		# store the maximum distance between centroids to associate
		# an object -- if the distance is larger than this maximum
		# distance we'll start to mark the object as "disappeared"
		self.maxDistance = maxDistance

	def register(self, centroid):
		# when registering an object we use the next available object
		# ID to store the centroid
		self.objects[self.nextObjectID] = centroid
		self.disappeared[self.nextObjectID] = 0
		self.nextObjectID += 1

	def deregister(self, objectID):
		# to deregister an object ID we delete the object ID from
		# both of our respective dictionaries
		del self.objects[objectID]
		del self.disappeared[objectID]

	def update(self, rects):
		# check to see if the list of input bounding box rectangles
		# is empty
		if len(rects) == 0:
			# loop over any existing tracked objects and mark them
			# as disappeared
			for objectID in list(self.disappeared.keys()):
				self.disappeared[objectID] += 1

				# if we have reached a maximum number of consecutive
				# frames where a given object has been marked as
				# missing, deregister it
				if self.disappeared[objectID] > self.maxDisappeared:
					self.deregister(objectID)

			# return early as there are no centroids or tracking info
			# to update
			return self.objects

		# initialize an array of input centroids for the current frame
		inputCentroids = np.zeros((len(rects), 2), dtype="int")

		# loop over the bounding box rectangles
		for (i, (startX, startY, endX, endY)) in enumerate(rects):
			# use the bounding box coordinates to derive the centroid
			cX = int((startX + endX) / 2.0)
			cY = int((startY + endY) / 2.0)
			inputCentroids[i] = (cX, cY)

		# if we are currently not tracking any objects take the input
		# centroids and register each of them
		if len(self.objects) == 0:
			for i in range(0, len(inputCentroids)):
				self.register(inputCentroids[i])

		# otherwise, we are currently tracking objects so we need to
		# try to match the input centroids to existing object
		# centroids
		else:
			# grab the set of object IDs and corresponding centroids
			objectIDs = list(self.objects.keys())
			objectCentroids = list(self.objects.values())

			# compute the distance between each pair of object
			# centroids and input centroids, respectively -- our
			# goal will be to match an input centroid to an existing
			# object centroid
			D = dist.cdist(np.array(objectCentroids), inputCentroids)

			# in order to perform this matching we must (1) find the
			# smallest value in each row and then (2) sort the row
			# indexes based on their minimum values so that the row
			# with the smallest value is at the *front* of the index
			# list
			rows = D.min(axis=1).argsort()

			# next, we perform a similar process on the columns by
			# finding the smallest value in each column and then
			# sorting using the previously computed row index list
			cols = D.argmin(axis=1)[rows]

			# in order to determine if we need to update, register,
			# or deregister an object we need to keep track of which
			# of the rows and column indexes we have already examined
			usedRows = set()
			usedCols = set()

			# loop over the combination of the (row, column) index
			# tuples
			for (row, col) in zip(rows, cols):
				# if we have already examined either the row or
				# column value before, ignore it
				if row in usedRows or col in usedCols:
					continue

				# if the distance between centroids is greater than
				# the maximum distance, do not associate the two
				# centroids to the same object
				if D[row, col] > self.maxDistance:
					continue

				# otherwise, grab the object ID for the current row,
				# set its new centroid, and reset the disappeared
				# counter
				objectID = objectIDs[row]
				self.objects[objectID] = inputCentroids[col]
				self.disappeared[objectID] = 0

				# indicate that we have examined each of the row and
				# column indexes, respectively
				usedRows.add(row)
				usedCols.add(col)

			# compute both the row and column index we have NOT yet
			# examined
			unusedRows = set(range(0, D.shape[0])).difference(usedRows)
			unusedCols = set(range(0, D.shape[1])).difference(usedCols)

			# in the event that the number of object centroids is
			# equal or greater than the number of input centroids
			# we need to check and see if some of these objects have
			# potentially disappeared
			if D.shape[0] >= D.shape[1]:
				# loop over the unused row indexes
				for row in unusedRows:
					# grab the object ID for the corresponding row
					# index and increment the disappeared counter
					objectID = objectIDs[row]
					self.disappeared[objectID] += 1

					# check to see if the number of consecutive
					# frames the object has been marked "disappeared"
					# for warrants deregistering the object
					if self.disappeared[objectID] > self.maxDisappeared:
						self.deregister(objectID)

			# otherwise, if the number of input centroids is greater
			# than the number of existing object centroids we need to
			# register each new input centroid as a trackable object
			else:
				for col in unusedCols:
					self.register(inputCentroids[col])

		# return the set of trackable objects
		return self.objects

3. Computing Cumulative Customer Flow

(1) Counting method

  For confirmed targets, we draw a counting line across the center of the frame, oriented according to the direction of motion (horizontal or vertical), and increment the cumulative count whenever a target crosses it.

(2) Avoiding duplicate counts

  With the trained model we can run object detection on every frame of the live footage, but counting every detection in every frame would count the same person many times. We therefore use the centroid algorithm to track each detected target in real time, remove a target once it has disappeared for long enough, and count and track every newly detected target exactly once. The count_chart method below draws the dynamic chart of the resulting counts.

def count_chart(self):
        self.m=True
        fig=plt.figure(figsize=(8,4))
        ax=fig.add_subplot(1,1,1)

        ax.set_xlabel('Time/s')
        ax.set_ylabel('People_count')
        ax.set_title('')

        line = None
        plt.grid(True)
        plt.ion()
        obsX = []
        obsY = []

        t0 = time.time()
        t1=datetime.datetime.now()
        t2=0
        # the counting module writes the running in-store count to this file, one value per line
        self.f = open("dependents/people_count.txt", "r")
        s = 0
        while self.m == True:
            obsX.append(t2)
            t2 += 1
            # read the latest count; if no new line has been written yet, keep the previous value
            line = self.f.readline().strip()
            if line:
                s = int(line)
            obsY.append(s)

            if line is None:
                line = ax.plot(obsX,obsY,'-g',marker='*')[0]

            line.set_xdata(obsX)
            line.set_ydata(obsY)

            ax.set_xlim([t2-10,t2+1])
            ax.set_ylim([0,5])
            plt.savefig("dependents/1.png",bbox_inches='tight')
            x=QtGui.QPixmap("dependents/1.png")
            self.LblVideo.setPixmap(x)
            self.mypause(1)

4. GUI Design and Presentation of Monitoring Results

  For the UI, we surveyed popular interface-design toolkits and compared them as follows. Tkinter: verbose code and limited visual appeal. Qt: visually appealing, and Qt Designer's interactive editor simplifies coding. Since our users are merchants, scenic-area staff, and other non-developers, we chose PyQt5, a Python library that is both attractive and convenient, to develop and design the interface.


5. Presenting Monitoring Results to the User

  During design we found that PyQt5's built-in widgets alone could not show a live monitoring feed. We therefore combined PyQt5 with OpenCV to open the camera automatically and present results by refreshing the image shown in a QLabel, keeping operation simple and making monitoring automatic.
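  A self-contained sketch of this pattern looks roughly as follows (the widget and class names are illustrative, not the project's actual code): a QTimer periodically grabs a frame with OpenCV, converts it to a QImage, and sets it on a QLabel.

import sys
import cv2
from PyQt5 import QtCore, QtGui, QtWidgets

class CameraWindow(QtWidgets.QWidget):
    def __init__(self):
        super().__init__()
        self.label = QtWidgets.QLabel(self)
        layout = QtWidgets.QVBoxLayout(self)
        layout.addWidget(self.label)

        self.cap = cv2.VideoCapture(0)                 # open the default camera
        self.timer = QtCore.QTimer(self)
        self.timer.timeout.connect(self.update_frame)  # refresh roughly 30 times per second
        self.timer.start(33)

    def update_frame(self):
        ok, frame = self.cap.read()
        if not ok:
            return
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        h, w, _ = rgb.shape
        image = QtGui.QImage(rgb.data, w, h, 3 * w, QtGui.QImage.Format_RGB888)
        self.label.setPixmap(QtGui.QPixmap.fromImage(image))

    def closeEvent(self, event):
        self.cap.release()
        event.accept()

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    win = CameraWindow()
    win.show()
    sys.exit(app.exec_())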

6. Presenting Visualized Data

  To let users see the store's data at a glance, we chose matplotlib to render dynamic charts. We found, however, that the chart not only appeared inside the PyQt5 QLabel but also popped up in an extra window. Investigation showed that matplotlib's pause function calls show internally, so we rewrote pause ourselves to keep the chart embedded in our interface. In addition, we feed the data produced while the video is being analyzed into the chart and save and reload the figure as an image every second, so the user sees the dynamics directly.
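  The project's own pause replacement (the self.mypause used in count_chart above) is not reproduced in this article; a common way to achieve the same effect is to drive the canvas event loop directly instead of calling plt.pause, for example:

import time
from matplotlib import _pylab_helpers

def mypause(interval):
    """Like plt.pause(), but never calls show(), so no extra window appears."""
    manager = _pylab_helpers.Gcf.get_active()
    if manager is not None:
        canvas = manager.canvas
        if canvas.figure.stale:
            canvas.draw_idle()                  # redraw only if the figure has changed
        canvas.start_event_loop(interval)       # run the GUI event loop for `interval` seconds
    else:
        time.sleep(interval)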

7. Implementing Returning-Customer Recognition

  To implement this feature we first detect faces in every frame: the image is converted to grayscale and its histogram-of-oriented-gradients (HOG) features are extracted to locate faces. dlib then locates 68 facial landmarks, a geometric transform aligns them, and a pre-trained neural network maps each aligned face to a 128-dimensional embedding. A KNN-style comparison of the new embedding against those already in the database then decides whether the visitor is a returning customer. To avoid counting the same customer repeatedly, we do not count the same person again within three hours.
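  A minimal sketch of this pipeline using the face_recognition library (which wraps dlib's HOG detector, landmark alignment, and 128-dimensional encoder) might look as follows; the file paths and the 0.6 tolerance are illustrative:

import face_recognition
import numpy as np

# encode the faces already stored in the customer database (illustrative paths)
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(p))[0]
    for p in ["dependents/face_images/No.1.jpg", "dependents/face_images/No.2.jpg"]
]

frame = face_recognition.load_image_file("dependents/2.png")     # frame captured at the checkout camera
locations = face_recognition.face_locations(frame, model="hog")  # HOG-based face detection
encodings = face_recognition.face_encodings(frame, locations)    # alignment + 128-d embedding

for encoding in encodings:
    # distance to every known customer; the closest one under the tolerance wins
    distances = face_recognition.face_distance(known_encodings, encoding)
    best = int(np.argmin(distances)) if len(distances) else None
    if best is not None and distances[best] < 0.6:
        print("returning customer No.", best + 1)
    else:
        print("new customer")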

Summary

  This project provides merchants with an intelligent system that assists daily operations. Using object detection, face recognition, and data mining, it implements customer-flow monitoring, returning-customer recognition, and product recommendation, helping merchants keep track of the store's real-time customer flow, analyze customers' shopping needs from their purchase records, carry out precision marketing, and improve the shopping experience.
  In implementation, we experimented with several AI algorithms on the Huawei Cloud ModelArts platform for customer-flow monitoring, returning-customer recognition, and product recommendation, achieving good training results; we designed a clean, clear GUI with PyQt5; and we made full use of several important third-party libraries to realize the program's functions.
