Fine-tuning ChatGLM with MindSpore

Posted by JeffDing on 2023/10/13


Clone the Hugging Face Model

Clone the chatglm-6b repository and download the sharded model files:

git lfs install
git clone https://huggingface.co/THUDM/chatglm-6b

Prepare the Environment

Install Transformers:

pip install transformers

Run the following Python script to merge the model weights:

from transformers import AutoModel
import torch as pt

# Path to the chatglm-6b checkpoint cloned from Hugging Face above
pt_ckpt_path = "./models/chatglm-6b"
# Load the sharded weights and cast them to fp16
model = AutoModel.from_pretrained(pt_ckpt_path, trust_remote_code=True).half()
# Save the merged state dict as a single .pth file
pt_pth_path = "models/mindspore/pt_glm_6b.pth"
pt.save(model.state_dict(), pt_pth_path)

Run the conversion script to obtain the converted output file ms_glm_6b.ckpt:

python mindformers/models/glm/convert_weight.py --pt_ckpt_path /home/ma-user/work/models/mindspore/pt_glm_6b.pth --ms_ckpt_path ../models/mindspore/ms_glm_6b.ckpt
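Optionally, a quick sanity check is to load the converted checkpoint back with MindSpore and confirm it contains parameters. This is a minimal sketch, not part of the original steps, and it assumes the output path used in the command above.

import mindspore as ms

# Load the converted checkpoint and print the first few parameter names and shapes.
param_dict = ms.load_checkpoint("../models/mindspore/ms_glm_6b.ckpt")
for i, (name, param) in enumerate(param_dict.items()):
    print(name, tuple(param.shape))
    if i >= 4:
        break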

Note: when running the conversion script you may encounter an error (it appears to be a libgomp loading issue that occurs only on the ARM platform).

Workaround:

export LD_PRELOAD=$LD_PRELOAD:/home/ma-user/anaconda3/envs/MindSpore/lib/python3.7/site-packages/torch/lib/libgomp-d22c30c5.so.1 

Rationale: locate libgomp-d22c30c5.so.1 shipped inside the torch package and append it to the LD_PRELOAD environment variable. This error seems to occur only on the ARM platform.
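If libgomp lives at a different path in your environment, a minimal way to locate the copy bundled with torch (assuming a Linux wheel that ships it under torch/lib) is:

import glob
import os
import torch

# Print any libgomp shared objects bundled with the installed torch wheel;
# append the resulting path to LD_PRELOAD as shown above.
torch_lib = os.path.join(os.path.dirname(torch.__file__), "lib")
print(glob.glob(os.path.join(torch_lib, "libgomp*")))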

Fine-tuning

Data Processing

The task of the ADGEN dataset is to generate a piece of advertising copy (summary) from the input attributes (content). The dataset can be prepared in either of two ways: generating MindRecord files offline, or generating samples on the fly; choose one of the two.

Download link: https://cloud.tsinghua.edu.cn/f/b3f119a008264b1cabd1/?dl=1

In the task configuration file configs/glm/run_glm_6b_*.yaml, under the ==== dataset config ==== section, point dataset_dir to the *.json file and vocab_file to the vocabulary file.
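For reference, each line of the AdvertiseGen *.json files is a JSON object with a content field (the input attributes) and a summary field (the target advertisement text). A minimal sketch for inspecting the first few records, assuming the data path used in the appendix configuration below:

import json

# Print the first few ADGEN samples: "content" is the input, "summary" is the target ad text.
with open("/home/ma-user/work/data/AdvertiseGen/train.json", encoding="utf-8") as f:
    for i, line in enumerate(f):
        sample = json.loads(line)
        print(sample["content"], "->", sample["summary"])
        if i >= 2:
            break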

LoRA Low-Parameter Fine-tuning

Launch LoRA low-parameter fine-tuning with the run_mindformer.py script
When fine-tuning with the LoRA algorithm, use the configs/glm/run_glm_6b_lora.yaml configuration file, which contains the configuration items required by the LoRA low-parameter fine-tuning algorithm.
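As a rough illustration of what the pet_config entries in that file (lora_rank: 8, lora_alpha: 32, lora_dropout: 0.1) control: LoRA keeps the pretrained weight frozen and learns a low-rank update scaled by lora_alpha / lora_rank. A minimal NumPy sketch of the idea, not the MindFormers implementation:

import numpy as np

def lora_forward(x, W, A, B, lora_alpha=32, lora_rank=8):
    # Frozen dense layer plus trainable low-rank update: y = x @ W + (alpha / r) * x @ A @ B
    scaling = lora_alpha / lora_rank
    return x @ W + scaling * (x @ A @ B)

# Toy shapes: hidden size 16, LoRA rank 8; only A and B would be trained.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 16))           # a batch of activations
W = rng.normal(size=(16, 16))          # frozen pretrained weight
A = rng.normal(size=(16, 8)) * 0.01    # down-projection
B = np.zeros((8, 16))                  # up-projection, zero-initialized so the update starts at 0
print(lora_forward(x, W, A, B).shape)  # (2, 16)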

Modify the dataset / model weight paths in the configuration

Dataset: in mindformers/configs/glm/run_glm_6b_lora.yaml, set dataset_dir under train_dataset to the dataset path generated above.
Pre-trained model weights: in mindformers/configs/glm/run_glm_6b_lora.yaml, set load_checkpoint to the path of the pre-trained model weights.
Install jieba:

pip install -r requirements.txt

Launch the LoRA low-parameter fine-tuning script (single card):

python run_mindformer.py --config=./configs/glm/run_glm_6b_lora.yaml --use_parallel=False --run_mode=finetune

Appendix: run_glm_6b_lora.yaml

seed: 0
run_mode: 'finetune'
load_checkpoint: "/home/ma-user/work/models/mindspore/ms_glm_6b.ckpt"
src_strategy_path_or_dir: ''
auto_trans_ckpt: False  # If true, auto transform load_checkpoint to load in distributed model
only_save_strategy: False
resume_training: False
output_dir: './output'  # Custom output paths are not supported yet; do not change this default value

# ==== context config ====
context:
  mode: 0 #0--Graph Mode; 1--Pynative Mode
  device_target: "Ascend"
  enable_graph_kernel: False
  graph_kernel_flags: "--disable_expand_ops=Softmax,Dropout --enable_parallel_fusion=true --reduce_fuse_depth=8 --enable_auto_tensor_inplace=true"
  max_call_depth: 10000
  max_device_memory: "30GB"
  save_graphs: False
  device_id: 0

# aicc
remote_save_url: "Please input obs url on AICC platform."

# ==== model config ====
model:
  model_config:
    type: GLMConfig
    vocab_size: 130528
    hidden_size: 4096
    num_layers: 28
    num_heads: 32
    inner_hidden_size: 16384
    seq_length: 512  # Length that inputs are padded to at inference; the model's maximum sequence length
    embedding_dropout_prob: 0.0
    attention_dropout_rate: 0.0
    hidden_dropout_rate: 0.0
    hidden_size_per_attention_head: # default "None" means hidden-size/num-attention-heads.
    layernorm_order: "post"
    layernorm_epsilon: 1.0e-5
    use_final_layernorm: True
    use_past: False
    activation_func: 'GELU'
    position_encoding_2d: True
    param_init_type: "float16"
    layernorm_compute_type: "float32"
    softmax_compute_type: "float32"
    compute_dtype: "float16"
    bos_token_id: 130004
    eos_token_id: 130005
    mask_token_id: 130000
    gmask_token_id: 130001
    pad_token_id: 3
    max_decode_length: 2048  # The maximum length of the generated words.
    is_enhanced_encoder: True
    is_sample_acceleration: False
    checkpoint_name_or_path: "glm_6b_lora"
    top_k: 1
    top_p: 1
    repetition_penalty: 1
    do_sample: True
    pet_config:
      pet_type: lora
      lora_rank: 8
      lora_alpha: 32
      lora_dropout: 0.1
  arch:
    type: GLMForPreTrainingWithLora

trainer:
  type: CausalLanguageModelingTrainer
  model_name: 'glm_6b_lora'
# if True, do evaluate during the training process. if false, do nothing.
# note that the task trainer should support _evaluate_in_training function.
do_eval: False

metric:
  type: ADGENMetric

processor:
  return_tensors: ms
  tokenizer:
    type: ChatGLMTokenizer
    bos_token: '<sop>'
    eos_token: '<eop>'
    end_token: '</s>'
    mask_token: '[MASK]'
    gmask_token: '[gMASK]'
    pad_token: '<pad>'
    unk_token: '<unk>'
  type: GLMProcessor

# ==== dataset config ====
train_dataset: &train_dataset
  data_loader:
    type: ADGenDataLoader
    dataset_dir: "/home/ma-user/work/data/AdvertiseGen/train.json"
    shuffle: True
    phase: "train"
    origin_columns: ["content", "summary"]
  tokenizer:
    type: ChatGLMTokenizer
    vocab_file: "/home/ma-user/work/data/AdvertiseGen/ice_text.model"
  input_columns: ["input_ids", "labels", "position_ids", "attention_mask"]
  max_source_length: 64
  max_target_length: 64
  ignore_pad_token_for_loss: True
  num_parallel_workers: 8
  python_multiprocessing: False
  drop_remainder: True
  batch_size: 1
  repeat: 1
  numa_enable: False
  prefetch_size: 1
  seed: 0

train_dataset_task:
  type: KeyWordGenDataset
  dataset_config: *train_dataset

eval_dataset: &eval_dataset
  data_loader:
    type: ADGenDataLoader
    dataset_dir: "/home/ma-usr/work/data/AdvertiseGen/dev.json"
    shuffle: False
    phase: "eval"
    origin_columns: ["content", "summary"]
  tokenizer:
    type: ChatGLMTokenizer
    vocab_file: "/home/ma-usr/work/data/AdvertiseGen/ice_text.model"
  max_source_length: 256
  max_target_length: 256
  ignore_pad_token_for_loss: True
  input_columns: ["input_ids", "labels"]
  num_parallel_workers: 8
  python_multiprocessing: False
  drop_remainder: True
  batch_size: 1
  repeat: 1
  numa_enable: False
  prefetch_size: 1
  seed: 0

eval_dataset_task:
  type: KeyWordGenDataset
  dataset_config: *eval_dataset

# ==== runner config ====
runner_config:
  epochs: 1
  batch_size: 8
  sink_mode: True
  sink_size: 4

runner_wrapper:
  type: MFTrainOneStepCell
  scale_sense:
    type: DynamicLossScaleUpdateCell
    loss_scale_value: 4294967296
    scale_factor: 2
    scale_window: 1000
  use_clip_grad: True

# lr schedule
lr_schedule:
  type: polynomial
  learning_rate: 5.e-5
  lr_end: 1.e-6
  warmup_steps: 2000
  total_steps: -1 # -1 means it will load the total steps of the dataset

# optimizer
optimizer:
  type: FusedAdamWeightDecay
  beta1: 0.9
  beta2: 0.95
  eps: 1.e-8
  weight_decay: 0.1
layer_scale: False
lr_scale: False

# parallel config
use_parallel: False
parallel:
  parallel_mode: 0 # 0-data parallel, 1-semi-auto parallel, 2-auto parallel, 3-hybrid parallel
  gradients_mean: False
  loss_repeated_mean: True
  enable_alltoall: False
  full_batch: True
  search_mode: "sharding_propagation"
  enable_parallel_optimizer: False  # optimizer shard
  strategy_ckpt_save_file: "./ckpt_strategy.ckpt"
parallel_config:
  data_parallel: 1
  model_parallel: 1
  pipeline_stage: 1
  expert_parallel: 1
  optimizer_shard: False  # optimizer shard
  micro_batch_num: 1
  vocab_emb_dp: True
  gradient_aggregation_group: 4
micro_batch_interleave_num: 1

# moe
moe_config:
  expert_num: 1
  capacity_factor: 1.05
  aux_loss_factor: 0.05
  num_experts_chosen: 1

# recompute
recompute_config:
  recompute: False
  parallel_optimizer_comm_recompute: False
  mp_comm_recompute: True
  recompute_slice_activation: False

# autotune
auto_tune: False
filepath_prefix: './autotune'
autotune_per_step: 10

# profile
profile: False
profile_start_step: 1
profile_stop_step: 10
init_start_profile: True
profile_communication: True
profile_memory: True

# callbacks
callbacks:
  - type: MFLossMonitor
  - type: CheckpointMointor
    prefix: "glm-6b-lora"
    save_checkpoint_steps: 500
    keep_checkpoint_max: 2
    integrated_save: False
    async_save: False
  - type: ObsMonitor
    keep_last: False
eval_callbacks:
  - type: ObsMonitor
    keep_last: False