Distilbert kunlunxin (#272)
* Fit distilbert on kunlunxin

* Add kunlunxin readme

* Refine kunlunxin readme

* Refine task kind in kunlunxin readme

* Add vendor name in config_common.py

---------

Co-authored-by: root <[email protected]>
KungYork and root authored Oct 8, 2023
1 parent ad36a03 commit b9b0924
Showing 7 changed files with 61 additions and 1 deletion.
46 changes: 46 additions & 0 deletions training/kunlunxin/distilbert-pytorch/README.md
@@ -0,0 +1,46 @@
### Model Checkpoint and Test Dataset Download
[Download link](https://bd.bcebos.com/klx-pytorch-ipipe-bd/flagperf/datasets/distilbert_train.tar)


### Kunlunxin XPU Configuration and Run Information Reference
#### Environment Setup
- ##### Hardware Environment
  - Machine model: Kunlunxin AI accelerator group R480-X8
  - Accelerator card model: Kunlunxin AI accelerator card R300
  - Multi-node network type and bandwidth: InfiniBand, 200 Gb/s

- ##### Software Environment
  - OS version: Ubuntu 20.04
  - OS kernel version: 5.4.0-26-generic
  - Accelerator card driver version: 4.0.25
  - Docker image and version: pytorch1.12.1-cpu-ubuntu20.04:v0.01
  - Training framework version: xmlir
  - Training compiler version: xacc
  - Dependency software version: pytorch-1.12.1+cpu

#### Run Results

* General metrics

| Metric name | Metric value | Notes |
| -------------- | ----------------------- | ------------------------------------- |
| Task category | Text Classification | |
| Model | distilbert | |
| Dataset | SST-2 | |
| Hyperparameter changes | fix_hp, see "Performance metrics" | Special hyperparameters needed to saturate the hardware for throughput evaluation |
| Hardware short name | R300 | |
| Hardware memory usage | mem, see "Performance metrics" | Commonly called "device memory", in GiB |
| End-to-end time | e2e_time, see "Performance metrics" | Total time plus Perf initialization and similar overhead |
| Overall throughput | p_whole, see "Performance metrics" | Actual number of samples divided by total time (performance_whole) |
| Training throughput | p_train, see "Performance metrics" | Excludes end-of-epoch evaluation time |
| **Compute throughput** | **p_core, see "Performance metrics"** | Excludes data I/O time as well (p3>p2>p1), in samples/s (seq_length=512) |
| Training result | acc, see "Performance metrics" | Classification accuracy |
| Additional modifications | | |
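
The three throughput rows above differ only in which time components they exclude (hence p3 > p2 > p1). A minimal Python sketch of that relationship, purely illustrative and not FlagPerf's actual implementation; all names below are hypothetical:

```python
# Illustrative sketch: how the three throughput metrics relate.
def throughput_metrics(num_samples: int,
                       total_time: float,
                       eval_time: float,
                       data_io_time: float):
    """Return (p_whole, p_train, p_core) in samples/s.

    num_samples  -- number of training samples actually processed
    total_time   -- wall-clock time of the whole run, in seconds
    eval_time    -- time spent in end-of-epoch evaluation
    data_io_time -- time spent in data loading / IO
    """
    p_whole = num_samples / total_time                              # overall throughput
    p_train = num_samples / (total_time - eval_time)                # excludes evaluation
    p_core = num_samples / (total_time - eval_time - data_io_time)  # also excludes data IO
    return p_whole, p_train, p_core
```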

* Performance metrics

| Config | precision | fix_hp | e2e_time | p_whole | p_train | p_core | acc | mem |
| ------------------- | --------- | ---------------- | -------- | ------- | ------- | ------ | ----- | --------- |
| R300 single node, single card (1x1) | fp32 | bs=32 | | | | | | |
| R300 single node, 8 cards (1x8) | fp32 | bs=32 | | | | | 0.911 | 13.5/32.0 |
| R300 two nodes, 8 cards each (2x8) | fp32 | bs=16 | | | | | | |
@@ -0,0 +1,4 @@
from config_common import *

train_batch_size = 32
gradient_accumulation_steps = 8
@@ -0,0 +1,3 @@
from config_common import *

train_batch_size = 32
@@ -0,0 +1,3 @@
from config_common import *

train_batch_size = 16
4 changes: 4 additions & 0 deletions training/kunlunxin/distilbert-pytorch/config/config_common.py
@@ -0,0 +1,4 @@
vendor = "kunlunxin"

dist_backend = "xccl"
dataloader_num_workers = 1
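
Each topology-specific config begins with `from config_common import *` and then overrides only the batch-size fields, so the shared values above (vendor, dist_backend, dataloader_num_workers) apply to all three runs. Below is a minimal sketch of how those values could compose into an effective global batch size; the mapping of files to topologies is inferred from the batch sizes in the performance table, a default of 1 is assumed where gradient_accumulation_steps is not set, and the helper is hypothetical rather than FlagPerf's actual runner logic:

```python
# Illustrative only: how the layered configs added in this commit might compose.
CONFIGS = {
    "1x1": {"train_batch_size": 32, "gradient_accumulation_steps": 8},
    "1x8": {"train_batch_size": 32, "gradient_accumulation_steps": 1},  # default assumed
    "2x8": {"train_batch_size": 16, "gradient_accumulation_steps": 1},  # default assumed
}

def effective_global_batch_size(topology: str) -> int:
    """Per-device batch size x gradient-accumulation steps x total device count."""
    nodes, cards = (int(x) for x in topology.split("x"))
    cfg = CONFIGS[topology]
    return cfg["train_batch_size"] * cfg["gradient_accumulation_steps"] * nodes * cards

for topo in CONFIGS:
    print(topo, effective_global_batch_size(topo))  # 256 in each case, under these assumptions
```
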
Empty file.
2 changes: 1 addition & 1 deletion training/nvidia/distilbert-pytorch/README.md
@@ -25,7 +25,7 @@

| Metric name | Metric value | Notes |
| -------------- | ----------------------- | ------------------------------------- |
-| Task category | Summarization | |
+| Task category | Text Classification | |
| Model | distilbert | |
| Dataset | SST-2 | |
| Hyperparameter changes | fix_hp, see "Performance metrics" | Special hyperparameters needed to saturate the hardware for throughput evaluation |
