MaskGCT Model: Adapting Inference to Ascend NPUs
/************************** If you have any questions, please leave a comment below **************************/
0. Prerequisites
0.1 Log in to the machine
The machine has been provisioned and the password obtained; you can log in via SSH.
0.2 Check the NPU devices
Run the npu-smi info command; it should return information about the NPU devices.
0.3 Install Docker
# Check whether Docker is installed: docker -v. If it is not yet installed, run the following command to install it:
yum install -y docker-engine.aarch64 docker-engine-selinux.noarch docker-runc.aarch64
# Enable IP forwarding so that containers can access the network:
sed -i 's/net\.ipv4\.ip_forward=0/net\.ipv4\.ip_forward=1/g' /etc/sysctl.conf
sysctl -p | grep net.ipv4.ip_forward
0.4 Pull the image
docker pull swr.cn-southwest-2.myhuaweicloud.com/atelier/pytorch_2_1_ascend:pytorch_2.1.0-cann_8.0.rc3-py_3.9-hce_2.0.2406-aarch64-snt9b-20240910112800-2a95df3
0.5 Start the container
Start the container from the image. Before starting, replace the ${} placeholders according to the parameter descriptions below.
docker run -it --net=host \
--device=/dev/davinci0 \
--device=/dev/davinci1 \
--device=/dev/davinci2 \
--device=/dev/davinci3 \
--device=/dev/davinci4 \
--device=/dev/davinci5 \
--device=/dev/davinci6 \
--device=/dev/davinci7 \
--device=/dev/davinci_manager \
--device=/dev/devmm_svm \
--device=/dev/hisi_hdc \
--shm-size=32g \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/Ascend/driver:/usr/local/Ascend/driver \
-v /var/log/npu/:/usr/slog \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v ${work_dir}:${container_work_dir} \
--name ${container_name} \
${image_id} \
/bin/bash
Parameter descriptions:
--device=/dev/davinci0, ..., --device=/dev/davinci7: mounts the NPU devices. The example mounts all eight cards (davinci0 through davinci7); mount only as many as you need.
${work_dir}:${container_work_dir}: mounts a host directory into the container. The host and the container use different file systems: work_dir is the working directory on the host, holding the code, data, and other files needed for inference, and container_work_dir is the directory it is mounted to inside the container. For convenience, the two paths can be identical.
--shm-size: the shared memory size.
${container_name}: the container name, used later when entering the container; choose any name you like.
${image_id}: the image ID; run docker images to find the ID of the image you just pulled.
Notes:
Do not mount anything onto /home/ma-user in the container; it is the home directory of the ma-user user. Mounting onto /home/ma-user conflicts with the base image when the container starts and makes the base image unusable.
The driver and npu-smi must both be mounted into the container. Do not bind multiple containers to the same NPU, or subsequent containers will be unable to use the NPU.
1. Inference Verification
Prerequisite: the machine can access the public network. Configure the environment inside the container; in the commands below, replace ${container_work_dir} with the working directory you chose inside the container.
1.1 Install espeak-ng
Install libtool:
yum install libtool
If the yum repositories are misconfigured, configure them by following https://support.huaweicloud.com/usermanual-server-modelarts/usermanual-server-0011.html
# Automatically configure the yum repositories
wget http://mirrors.myhuaweicloud.com/repo/mirrors_source.sh && bash mirrors_source.sh
# Test
yum update --allowerasing --skip-broken --nobest
If installation from the yum repositories still fails, build libtool manually (see https://www.cnblogs.com/dakewei/p/10682596.html):
wget http://ftpmirror.gnu.org/libtool/libtool-2.4.6.tar.gz
tar xvf libtool-2.4.6.tar.gz
cd libtool-2.4.6
./configure
make -j4
make install
Build and install espeak-ng:
cd ${container_work_dir}
git clone https://github.com/espeak-ng/espeak-ng.git
cd espeak-ng
git checkout 1.51.1
./autogen.sh
./configure
make && make install
export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
ldconfig
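Optionally, after the Python dependencies from Section 1.2 below are installed, you can verify that the espeak-ng shared library is visible from Python. A minimal sketch using phonemizer (which the MaskGCT text frontend relies on):

from phonemizer import phonemize

# Prints IPA phonemes if espeak-ng and libespeak-ng are installed correctly.
print(phonemize("hello world", language="en-us", backend="espeak"))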
1.2 Download MaskGCT
cd ${container_work_dir}
git clone https://github.com/open-mmlab/Amphion.git
cd Amphion/models/tts/maskgct
Modify the requirements.txt file as follows:
setuptools
onnxruntime
transformers==4.41.2
accelerate==0.24.1
unidecode
numpy==1.26.0
scipy==1.12.0
librosa
encodec
phonemizer
g2p_en
jieba
cn2an
pypinyin
LangSegment
pyopenjtalk
pykakasi
json5
black==24.1.1
ruamel.yaml
tqdm
spaces
gradio
openai-whisper
torch==2.1.0
torchaudio==2.1.0
numba==0.59.1
urllib3==1.26.7
Then run:
pip install -r requirements.txt
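After installation, a quick sanity check that the Ascend PyTorch stack works inside the container (a minimal sketch; the device count depends on how many davinci devices you mounted):

import torch
import torch_npu

print(torch_npu.npu.is_available())  # expect True if the NPU driver is visible
print(torch_npu.npu.device_count())  # number of mounted NPU cards
x = torch.ones(2, 2).npu()           # allocate a tensor on the NPU
print((x + x).cpu())                 # compute on the NPU, copy back to the CPU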
1.3 Inference verification
1) Modify line 360 of Amphion/models/codec/amphion_codec/vocos.py, changing
S = mag * (x + 1j * y)
to:
tmp = x.cpu() + 1j * y.cpu()
S = mag * tmp.npu()
Note: in the current commercial CANN 8.0.RC3 release, the add operator does not support adding a real tensor to a complex one (support is planned for the next release), so this step can run on the CPU for now. On the CPU it takes about 20 ms, a negligible fraction of the total inference time.
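For reference, here is the same CPU-fallback pattern as a standalone sketch (the tensor shapes are illustrative; only the complex construction moves to the CPU, while the complex multiply still runs on the NPU):

import torch
import torch_npu

mag = torch.rand(4, 257, 100).npu()  # magnitude
x = torch.rand(4, 257, 100).npu()    # cos(phase)
y = torch.rand(4, 257, 100).npu()    # sin(phase)

# Complex + real add is unsupported on CANN 8.0.RC3, so build the
# complex tensor on the CPU, then move the result back to the NPU.
tmp = x.cpu() + 1j * y.cpu()
S = mag * tmp.npu()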
2) Modify line 424 of /home/ma-user/anaconda3/envs/PyTorch-2.1.0/lib/python3.9/site-packages/requests/sessions.py to:
self.verify = False
This prevents failures when inference downloads the MaskGCT checkpoints from the open-source hub.
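If you prefer not to edit site-packages, a minimal sketch of an equivalent workaround is to monkey-patch requests at the top of infer.py, before any download is triggered. Note that this disables TLS certificate verification for every request the process makes, so it is only appropriate in a controlled environment like this one:

import requests
import urllib3

urllib3.disable_warnings()  # silence InsecureRequestWarning

_orig_request = requests.Session.request

def _insecure_request(self, *args, **kwargs):
    kwargs["verify"] = False  # skip TLS certificate verification
    return _orig_request(self, *args, **kwargs)

requests.Session.request = _insecure_request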
3) Create infer.py under the Amphion folder with the inference code below (the official inference demo, lightly modified as follows):
from models.tts.maskgct.maskgct_utils import *
from huggingface_hub import hf_hub_download
import safetensors.torch
import soundfile as sf

# Dependencies added for the NPU adaptation
import torch
import torch_npu
from torch_npu.contrib import transfer_to_npu  # remaps cuda:* devices to npu:*
import time

if __name__ == "__main__":
    # build model
    device = torch.device("cuda:0")  # transfer_to_npu redirects this to the NPU
    cfg_path = "./models/tts/maskgct/config/maskgct.json"
    cfg = load_config(cfg_path)
    # 1. build semantic model (w2v-bert-2.0)
    semantic_model, semantic_mean, semantic_std = build_semantic_model(device)
    # 2. build semantic codec
    semantic_codec = build_semantic_codec(cfg.model.semantic_codec, device)
    # 3. build acoustic codec
    codec_encoder, codec_decoder = build_acoustic_codec(
        cfg.model.acoustic_codec, device
    )
    # 4. build t2s model
    t2s_model = build_t2s_model(cfg.model.t2s_model, device)
    # 5. build s2a model
    s2a_model_1layer = build_s2a_model(cfg.model.s2a_model.s2a_1layer, device)
    s2a_model_full = build_s2a_model(cfg.model.s2a_model.s2a_full, device)

    # download checkpoints
    # download semantic codec ckpt
    semantic_code_ckpt = hf_hub_download(
        "amphion/MaskGCT", filename="semantic_codec/model.safetensors"
    )
    # download acoustic codec ckpt
    codec_encoder_ckpt = hf_hub_download(
        "amphion/MaskGCT", filename="acoustic_codec/model.safetensors"
    )
    codec_decoder_ckpt = hf_hub_download(
        "amphion/MaskGCT", filename="acoustic_codec/model_1.safetensors"
    )
    # download t2s model ckpt
    t2s_model_ckpt = hf_hub_download(
        "amphion/MaskGCT", filename="t2s_model/model.safetensors"
    )
    # download s2a model ckpt
    s2a_1layer_ckpt = hf_hub_download(
        "amphion/MaskGCT", filename="s2a_model/s2a_model_1layer/model.safetensors"
    )
    s2a_full_ckpt = hf_hub_download(
        "amphion/MaskGCT", filename="s2a_model/s2a_model_full/model.safetensors"
    )

    # load semantic codec
    safetensors.torch.load_model(semantic_codec, semantic_code_ckpt)
    # load acoustic codec
    safetensors.torch.load_model(codec_encoder, codec_encoder_ckpt)
    safetensors.torch.load_model(codec_decoder, codec_decoder_ckpt)
    # load t2s model
    safetensors.torch.load_model(t2s_model, t2s_model_ckpt)
    # load s2a model
    safetensors.torch.load_model(s2a_model_1layer, s2a_1layer_ckpt)
    safetensors.torch.load_model(s2a_model_full, s2a_full_ckpt)

    # inference
    prompt_wav_path = "./models/tts/maskgct/wav/prompt.wav"
    save_path = "generated_audio.wav"
    prompt_text = " We do not break. We never give in. We never back down."
    target_text = "In this paper, we introduce MaskGCT, a fully non-autoregressive TTS model that eliminates the need for explicit alignment information between text and speech supervision."
    # Specify the target duration (in seconds). If target_len = None, we use a simple rule to predict the target duration.
    target_len = 18

    maskgct_inference_pipeline = MaskGCT_Inference_Pipeline(
        semantic_model,
        semantic_codec,
        codec_encoder,
        codec_decoder,
        t2s_model,
        s2a_model_1layer,
        s2a_model_full,
        semantic_mean,
        semantic_std,
        device,
    )

    # warmup run: the first inference is much slower and is excluded from timing
    recovered_audio = maskgct_inference_pipeline.maskgct_inference(
        prompt_wav_path, prompt_text, target_text, "en", "en", target_len=target_len
    )

    # timed run
    print("infer start")
    start_time = time.time()
    recovered_audio = maskgct_inference_pipeline.maskgct_inference(
        prompt_wav_path, prompt_text, target_text, "en", "en", target_len=target_len
    )
    print("rtf is ", (time.time() - start_time) / target_len)
    sf.write(save_path, recovered_audio, 24000)
Run python infer.py to start inference.
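To double-check the output, a small sketch that reads the generated file back and prints its duration (soundfile is already installed as a dependency):

import soundfile as sf

audio, sr = sf.read("generated_audio.wav")
print(f"sample rate: {sr} Hz, duration: {len(audio) / sr:.2f} s")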
In addition, taking the demo at https://maskgct.github.io/ as an example:
prompt: 12 s, target: 19 s, RTF:
Here RTF (real-time factor) is the inference wall-clock time divided by the duration of the generated audio, as computed in infer.py above; RTF < 1 means synthesis runs faster than real time.