Note
Objective: This document compiles and summarizes articles that use the XJTU battery dataset, providing detailed records of the results reported in each article, so that future work using the same dataset can be compared directly.
Chinese document: Chinese
Last updated: 2024-11-28
Dataset Links:
Data Description and Preprocessing Code: https://github.com/wang-fujin/Battery-dataset-preprocessing-code-library
Please cite our paper if you use this dataset:
Important
The XJTU battery dataset comprises 6 batches with a total of 55 batteries. Not all articles use all batteries, so a shorthand of the form `Bxby` indicates which batteries an article uses:

- `Bx` denotes the x-th batch;
- `by` denotes the y-th battery in that batch;
- `All` indicates all batteries.

Examples:

- `B1b1` indicates the 1st battery in the 1st batch;
- `B1` indicates all batteries in the 1st batch;
- `B2b1-b4` indicates the 1st to 4th batteries in the 2nd batch.
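For readers who script against the dataset, the shorthand above can be parsed mechanically. A minimal sketch (the function name and the tuple layout are our own illustration, not part of the dataset tooling):

```python
import re

def parse_shorthand(s):
    """Parse the Bxby shorthand into (batch, first_battery, last_battery)
    tuples; battery indices are None when a whole batch is meant."""
    if s == "All":
        return [("all", None, None)]
    m = re.fullmatch(r"B(\d+)-B(\d+)", s)  # batch range, e.g. B1-B3
    if m:
        return [(b, None, None) for b in range(int(m.group(1)), int(m.group(2)) + 1)]
    m = re.fullmatch(r"B(\d+)(?:b(\d+)(?:-b(\d+))?)?", s)
    if not m:
        raise ValueError(f"not a valid shorthand: {s}")
    batch = int(m.group(1))
    first = int(m.group(2)) if m.group(2) else None
    last = int(m.group(3)) if m.group(3) else first
    return [(batch, first, last)]

print(parse_shorthand("B2b1-b4"))  # [(2, 1, 4)]
```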
Important
We categorize the training and testing modes (Mode) in the articles into two types:

- Type 1: Training and testing on the same battery, using early data for training and later data for testing. This mode is noted as Train A and Test A, abbreviated as `AA`.
- Type 2: Training and testing on different batteries, noted as Train A and Test B, abbreviated as `AB`.
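The two modes correspond to two different data-splitting strategies. A minimal sketch (the function names and the 70/30 ratio are illustrative assumptions; individual papers use their own ratios):

```python
import numpy as np

def split_aa(cycles, train_ratio=0.7):
    """Mode AA: early cycles of one battery for training, later cycles for testing."""
    cut = int(len(cycles) * train_ratio)
    return cycles[:cut], cycles[cut:]

def split_ab(batteries, test_ids):
    """Mode AB: hold out whole batteries for testing."""
    train = {k: v for k, v in batteries.items() if k not in test_ids}
    test = {k: v for k, v in batteries.items() if k in test_ids}
    return train, test

# toy usage with placeholder data
tr, te = split_aa(np.arange(10))                      # 7 cycles train, 3 test
packs = {"B1b1": [0.99], "B1b2": [0.98], "B1b3": [0.97]}
tr_ab, te_ab = split_ab(packs, {"B1b3"})              # B1b3 held out
```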
| Battery | Model Name | Mode | MSE | RMSE | MAE | MAPE | R2 | Details | Paper Link | Non-transfer learning | Transfer learning |
|---|---|---|---|---|---|---|---|---|---|---|---|
| B1b1 | HHO-LSTM-FC | AA | - | 0.0078 | 0.0065 | - | 0.9422 | Yang et al. (2024) | link | β | β |
| All | CNN1 | AB | 0.000161 | - | 0.0085 | 0.00926 | 0.9187 | Wang et al. (2024a) | link | β | β |
| All | LSTM1 | AB | 0.000117 | - | 0.0079 | 0.00861 | 0.9407 | Wang et al. (2024a) | link | β | β |
| All | GRU1 | AB | 0.0000983 | - | 0.0071 | 0.00776 | 0.9503 | Wang et al. (2024a) | link | β | β |
| All | MLP1 | AB | 0.000139 | - | 0.0078 | 0.00844 | 0.9331 | Wang et al. (2024a) | link | β | β |
| All | Attention1 | AB | 0.000135 | - | 0.0087 | 0.00950 | 0.9317 | Wang et al. (2024a) | link | β | β |
| B1 | MMAU-Net | AB | - | 1.40% | 1.02% | - | - | Fan et al. (2024a) | link | β | β |
| B2 | MMAU-Net | AB | - | 1.50% | 1.04% | - | - | Fan et al. (2024a) | link | β | β |
| B3 | MMAU-Net | AB | - | 1.04% | 0.66% | - | - | Fan et al. (2024a) | link | β | β |
| B1-B2 | MSCNN1 | AB | - | 0.74% | 0.67% | 0.37% | - | Wang et al. (2024b) | link | β | β |
| B2b1 | ZKF | AA | - | 0.0172 | 0.0125 | - | 0.9624 | Wang et al. (2024c) | link | β | β |
| B2b4 | ZKF | AA | - | 0.0167 | 0.0126 | - | 0.9628 | Wang et al. (2024c) | link | β | β |
| B2b5 | ZKF | AA | - | 0.0123 | 0.0079 | - | 0.9824 | Wang et al. (2024c) | link | β | β |
| B1-B3 | MSFDTN1 | AB | 0.22% | - | 3.93% | - | 0.9533 | Wang et al. (2024d) | link | β | β |
| B1-B3 | DR-Net1 | AB | 1.92% | - | 10.49% | - | - | Wang et al. (2024d) | link | β | β |
| B1-B3 | AttMoE1 | AB | 2.43% | - | 10.63% | - | - | Wang et al. (2024d) | link | β | β |
| B1-B3 | ELSTM1 | AB | 2.07% | - | 11.20% | - | - | Wang et al. (2024d) | link | β | β |
| B1-B3 | MMMe1 | AB | 5.53% | - | 18.60% | - | - | Wang et al. (2024d) | link | β | β |
| B1-B3 | PVA-FFG-Transformer1 | AB | 6.11% | - | 21.50% | - | - | Wang et al. (2024d) | link | β | β |
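When reproducing rows of the table above, the error columns are the standard regression metrics computed on the predicted SOH sequence. A self-contained sketch of the definitions (the function name and the toy SOH values are our own):

```python
import numpy as np

def soh_metrics(y_true, y_pred):
    """Standard regression metrics used in the comparison table."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    return {
        "MSE": mse,
        "RMSE": mse ** 0.5,
        "MAE": float(np.mean(np.abs(err))),
        "MAPE": float(np.mean(np.abs(err / y_true))),
        "R2": 1.0 - float(np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)),
    }

# hypothetical SOH sequence (true vs. predicted)
print(soh_metrics([1.00, 0.95, 0.90], [0.99, 0.96, 0.90]))
```

Note that some papers report MSE/MAE as percentages of the rated capacity rather than as raw fractions, which is why units vary between rows.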
| Battery | Model Name | Mode | MSE | RMSE | MAE | MAPE | R2 | Details | Paper Link | Non-transfer learning | Transfer learning |
|---|---|---|---|---|---|---|---|---|---|---|---|
| B1b2 | PINN | AB | - | 14.86e-3 | - | - | - | Tang et al. (2024a) | link | β | β |
| B1b8 | PINN | AB | - | 22.04e-3 | - | - | - | Tang et al. (2024a) | link | β | β |
| B2b2 | PINN | AB | - | 40.95e-3 | - | - | - | Tang et al. (2024a) | link | β | β |
| B2b8 | PINN | AB | - | 37.70e-3 | - | - | - | Tang et al. (2024a) | link | β | β |
| B1 | - | AB | - | 0.046 (max) | - | - | - | Tang et al. (2024b) | link | β | β |
| B2 | - | AB | - | 0.055 (max) | - | - | - | Tang et al. (2024b) | link | β | β |
Yang et al. (2024)
Used only the 1st battery of Batch-1, noted as `B1b1`.
The article implemented two SOH estimation modes:

1. Pre-training on NASA's B6 and B7 batteries, then fine-tuning with the first 30% of the `B1b1` data, followed by testing on `B1b1`.
2. Training with the first 70% of the `B1b1` data, followed by testing on `B1b1`.
Results:
| Model | RMSE | MAE | R2 | Mode |
|---|---|---|---|---|
| HHO-LSTM-FC-TL(B6) | 0.0037 | 0.0029 | 0.9941 | 1 |
| HHO-LSTM-FC-TL(B7) | 0.0034 | 0.0027 | 0.9952 | 1 |
| HHO-LSTM-FC | 0.0078 | 0.0065 | 0.9422 | 2 |
Wang et al. (2024a)
In this article, we provide a benchmark testing five deep learning models on three types of inputs (all charging data, partial charging data, and features) under three normalization methods.
The figure above shows the results of the five models using features as input with [-1,1] normalization; all results are magnified 1000 times. Because many result combinations were evaluated, only this one is shown here; the rest can be found in the original paper.
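The [-1,1] normalization mentioned above is plain min-max scaling. A sketch (fitting the min/max on training data only is our assumption about good practice, not a claim about the paper's exact pipeline):

```python
import numpy as np

def minmax_scale(x, lo=-1.0, hi=1.0):
    """Scale each column of x to [lo, hi] using its own min and max."""
    xmin, xmax = x.min(axis=0), x.max(axis=0)
    return (x - xmin) / (xmax - xmin) * (hi - lo) + lo

x = np.array([[0.0], [5.0], [10.0]])
print(minmax_scale(x).ravel())  # [-1.  0.  1.]
```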
Fan et al. (2024a)
The article uses data from Batch-1, Batch-2, and Batch-3.
The model inputs are the raw voltage, raw current, and raw temperature data.
Dataset partitioning:

Experimental results:
Wang et al. (2024b)
The article extracts 8 features from the charging data:

1. Constant-current charging time
2. Constant-voltage charging time
3. Average charging voltage
4. Average charging current
5. Standard deviation of the charging voltage
6. Skewness of the charging current
7. Skewness of the charging voltage
8. Kurtosis of the charging voltage
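As an illustration, these 8 features could be extracted from one CC-CV charging cycle roughly as follows (the CC/CV boundary detection, the 4.2 V cut-off, and the function name are our simplifying assumptions, not the paper's code):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def charge_features(t, v, i, v_max=4.2):
    """Sketch of the 8 charging features for one CC-CV charge.
    t, v, i: time, voltage, current arrays; the CC phase is taken as
    everything before the voltage first reaches v_max."""
    cc_end = int(np.argmax(v >= v_max))  # first index on the CV plateau
    return {
        "cc_time": t[cc_end] - t[0],
        "cv_time": t[-1] - t[cc_end],
        "avg_voltage": float(v.mean()),
        "avg_current": float(i.mean()),
        "std_voltage": float(v.std()),
        "skew_current": float(skew(i)),
        "skew_voltage": float(skew(v)),
        "kurt_voltage": float(kurtosis(v)),
    }
```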
Three modes were used to validate the model's performance.
Note: In the table below, Group A is equivalent to B1 as defined above, and Group B is equivalent to B2.
Mode 1: Training and testing on the same batch
Dataset Partitioning:

Results on the Batch-1 dataset (Group A1 = B1b1):

Results on the Batch-2 dataset (the article selected the odd-numbered batteries from Batch-2, so Group Bx = B2b(2x-1)):

Mode 2: Varying the size of the training set
Dataset Partitioning:

Experimental results:

Mode 3: Mixed training and testing on two batches
Dataset Partitioning:

Experimental results:

Wang et al. (2024c)
The article uses data from 3 batteries in Batch-2, specifically B2b1, B2b4, and B2b5.
The training and testing mode is AA: early data is used for training and later data for testing.
The average charging current (ACC) during the period from
Results Visualization:

The authors test the estimation results for different starting points (table columns: Battery, Cycle, MAE, RMSE, R2):

The comparison results with other methods provided in the article are as follows:

Wang et al. (2024d)
The article uses the batteries of Batch-1 and the first 8 batteries of Batch-2 and of Batch-3 to verify the proposed method; together these are denoted B1-B3.
The task is to predict the capacity of the battery using transfer learning;
the 3 batches represent 3 domains, denoted D1, D2, and D3 in the article.
The comparison results with other methods provided in the article are as follows:
Tang et al. (2024a)
The task of this article is to predict the charging curve, using one-cycle's V-Q curve to predict the V-Q curve of multiple future cycles. The data of Batch-1 and Batch-2 were used for verification. In each batch, the data of batteries #1, #3, #4, #5, #6, and #7 are used for training, and the data of #2 and #8 are used for testing. The prediction length is 150 cycles.
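The per-batch split described above (batteries #1 and #3-#7 for training, #2 and #8 for testing) can be expressed as a simple filter. A sketch using a hypothetical dict keyed by battery number:

```python
def tang_split(batch):
    """Split one batch's batteries per Tang et al. (2024a):
    #1, #3-#7 train; #2, #8 test. `batch` maps battery number -> data."""
    train_ids, test_ids = {1, 3, 4, 5, 6, 7}, {2, 8}
    train = {n: d for n, d in batch.items() if n in train_ids}
    test = {n: d for n, d in batch.items() if n in test_ids}
    return train, test

batch1 = {n: f"cell{n}" for n in range(1, 9)}  # placeholder data
train, test = tang_split(batch1)
print(sorted(test))  # [2, 8]
```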
Results Visualization:

Tang et al. (2024b)
The article uses the relaxation voltage curve to predict the V-Q curve, and uses the data of Batch-1 and Batch-2 for verification.
Note that the article only reports the maximum RMSE (0.046 and 0.055, respectively) and does not give average values.
Results Visualization:
