
Articles Using XJTU Battery Dataset: Compilation and Summary

Note

Objective: This document compiles and summarizes articles that utilize the XJTU battery dataset, providing detailed records of the results reported in these articles. This is intended to facilitate direct comparison for future works using the same dataset.

Chinese document: a Chinese-language version is available in this repository.

Last updated: 2024-11-28

Dataset Links:

Data Description and Preprocessing Code: https://github.com/wang-fujin/Battery-dataset-preprocessing-code-library

Please cite our paper if you use this dataset:

Wang F, Zhai Z, Zhao Z, et al. Physics-informed neural network for lithium-ion battery degradation stable modeling and prognosis[J]. Nature Communications, 2024, 15(1): 4332.

Data Summary

Important

The XJTU battery dataset comprises 6 batches with a total of 55 batteries. Not all articles use all batteries, so a shorthand of the form Bxby is defined to indicate which batteries are used in each article (a small parsing sketch follows the examples below).

  • Bx denotes the x-th batch;
  • by denotes the y-th battery in that batch;
  • All indicates all batteries.

Examples:

  • B1b1 indicates the 1st battery in the 1st batch;
  • B1 indicates all batteries in the 1st batch;
  • B2b1-b4 indicates the 1st to 4th batteries in the 2nd batch.
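
For readers who script against this summary, the sketch below (not part of the official dataset tooling; the function name and data structure are invented for illustration) parses the Bxby shorthand into a structured form. Batch ranges such as B1-B3, which also appear in the tables, are not handled in this minimal sketch.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class BatterySelection:
    """Structured form of the Bxby shorthand used in this summary."""
    batch: Optional[int]          # None means "All" batches
    first_battery: Optional[int]  # None means every battery in the batch
    last_battery: Optional[int]   # equals first_battery for a single cell

def parse_shorthand(code: str) -> BatterySelection:
    """Parse 'All', 'B1', 'B1b1', or 'B2b1-b4' into a BatterySelection."""
    code = code.strip()
    if code.lower() == "all":
        return BatterySelection(batch=None, first_battery=None, last_battery=None)
    m = re.fullmatch(r"B(\d+)(?:b(\d+)(?:-b(\d+))?)?", code)
    if m is None:
        raise ValueError(f"Unrecognized battery shorthand: {code!r}")
    first = int(m.group(2)) if m.group(2) else None
    last = int(m.group(3)) if m.group(3) else first
    return BatterySelection(batch=int(m.group(1)), first_battery=first, last_battery=last)

print(parse_shorthand("B1b1"))     # 1st battery of the 1st batch
print(parse_shorthand("B1"))       # all batteries of the 1st batch
print(parse_shorthand("B2b1-b4"))  # 1st to 4th batteries of the 2nd batch
```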

Important

We categorize the training and testing modes (Mode) in the articles into two types (a data-splitting sketch follows this list):

  • Type 1: Training and testing on the same battery, using early data for training and later data for testing. This mode is noted as Train A and Test A, abbreviated as AA.
  • Type 2: Training and testing on different batteries, noted as Train A and Test B, abbreviated as AB.
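
To make the two modes concrete, here is a minimal sketch, assuming each battery's cycles are already summarized as a feature matrix X and an SOH vector y; the 70% cut and all names are illustrative placeholders, not taken from any particular paper.

```python
import numpy as np

def split_AA(X: np.ndarray, y: np.ndarray, train_ratio: float = 0.7):
    """Mode AA: train on a battery's early cycles, test on its later cycles."""
    n_train = int(len(X) * train_ratio)
    return (X[:n_train], y[:n_train]), (X[n_train:], y[n_train:])

def split_AB(data: dict, train_ids: list, test_ids: list):
    """Mode AB: train on some batteries, test on different batteries.

    `data` maps a battery id (e.g. 'B1b1') to a tuple (X, y) of per-cycle
    features and SOH labels.
    """
    X_train = np.concatenate([data[b][0] for b in train_ids])
    y_train = np.concatenate([data[b][1] for b in train_ids])
    X_test = np.concatenate([data[b][0] for b in test_ids])
    y_test = np.concatenate([data[b][1] for b in test_ids])
    return (X_train, y_train), (X_test, y_test)
```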

Summary of SOH Estimation Results

| Battery | Model Name | Mode | MSE | RMSE | MAE | MAPE | R2 | Details | Paper Link | Non-transfer learning | Transfer learning |
|---|---|---|---|---|---|---|---|---|---|---|---|
| B1b1 | HHO-LSTM-FC | AA | - | 0.0078 | 0.0065 | - | 0.9422 | Yang et al. (2024) | link | ✅ | ✅ |
| All | CNN¹ | AB | 0.000161 | - | 0.0085 | 0.00926 | 0.9187 | Wang et al. (2024a) | link | ✅ | ❌ |
| All | LSTM¹ | AB | 0.000117 | - | 0.0079 | 0.00861 | 0.9407 | Wang et al. (2024a) | link | ✅ | ❌ |
| All | GRU¹ | AB | 0.0000983 | - | 0.0071 | 0.00776 | 0.9503 | Wang et al. (2024a) | link | ✅ | ❌ |
| All | MLP¹ | AB | 0.000139 | - | 0.0078 | 0.00844 | 0.9331 | Wang et al. (2024a) | link | ✅ | ❌ |
| All | Attention¹ | AB | 0.000135 | - | 0.0087 | 0.00950 | 0.9317 | Wang et al. (2024a) | link | ✅ | ❌ |
| B1 | MMAU-Net | AB | - | 1.40% | 1.02% | - | - | Fan et al. (2024a) | link | ✅ | ❌ |
| B2 | MMAU-Net | AB | - | 1.50% | 1.04% | - | - | Fan et al. (2024a) | link | ✅ | ❌ |
| B3 | MMAU-Net | AB | - | 1.04% | 0.66% | - | - | Fan et al. (2024a) | link | ✅ | ❌ |
| B1-B2 | MSCNN¹ | AB | - | 0.74% | 0.67% | 0.37% | - | Wang et al. (2024b) | link | ✅ | ❌ |
| B2b1 | ZKF | AA | - | 0.0172 | 0.0125 | - | 0.9624 | Wang et al. (2024c) | link | ✅ | ❌ |
| B2b4 | ZKF | AA | - | 0.0167 | 0.0126 | - | 0.9628 | Wang et al. (2024c) | link | ✅ | ❌ |
| B2b5 | ZKF | AA | - | 0.0123 | 0.0079 | - | 0.9824 | Wang et al. (2024c) | link | ✅ | ❌ |
| B1-B3 | MSFDTN¹ | AB | 0.22% | - | 3.93% | - | 0.9533 | Wang et al. (2024d) | link | ❌ | ✅ |
| B1-B3 | DR-Net¹ | AB | 1.92% | - | 10.49% | - | - | Wang et al. (2024d) | link | ❌ | ✅ |
| B1-B3 | AttMoE¹ | AB | 2.43% | - | 10.63% | - | - | Wang et al. (2024d) | link | ❌ | ✅ |
| B1-B3 | ELSTM¹ | AB | 2.07% | - | 11.20% | - | - | Wang et al. (2024d) | link | ❌ | ✅ |
| B1-B3 | MMMe¹ | AB | 5.53% | - | 18.60% | - | - | Wang et al. (2024d) | link | ❌ | ✅ |
| B1-B3 | PVA-FFG-Transformer¹ | AB | 6.11% | - | 21.50% | - | - | Wang et al. (2024d) | link | ❌ | ✅ |

Summary of RUL Prediction Results

| Battery | Model Name | Mode | MSE | RMSE | MAE | MAPE | R2 | Details | Paper Link | Non-transfer learning | Transfer learning |
|---|---|---|---|---|---|---|---|---|---|---|---|

Summary of V-Q Prediction Results

| Battery | Model Name | Mode | MSE | RMSE | MAE | MAPE | R2 | Details | Paper Link | Non-transfer learning | Transfer learning |
|---|---|---|---|---|---|---|---|---|---|---|---|
| B1b2 | PINN | AB | - | 14.86e-3 | - | - | - | Tang et al. (2024a) | link | ✅ | ❌ |
| B1b8 | PINN | AB | - | 22.04e-3 | - | - | - | Tang et al. (2024a) | link | ✅ | ❌ |
| B2b2 | PINN | AB | - | 40.95e-3 | - | - | - | Tang et al. (2024a) | link | ✅ | ❌ |
| B2b8 | PINN | AB | - | 37.70e-3 | - | - | - | Tang et al. (2024a) | link | ✅ | ❌ |
| B1 | - | AB | - | 0.046 (max) | - | - | - | Tang et al. (2024b) | link | ✅ | ❌ |
| B2 | - | AB | - | 0.055 (max) | - | - | - | Tang et al. (2024b) | link | ✅ | ❌ |
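
For reference when comparing entries across the tables above, the following is a minimal sketch of the five reported metrics, assuming NumPy arrays of true and estimated values; several papers (see footnote 1) average these metrics over all batteries.

```python
import numpy as np

def regression_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """MSE, RMSE, MAE, MAPE, and R2 as commonly reported for SOH estimation."""
    err = y_pred - y_true
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mae = float(np.mean(np.abs(err)))
    mape = float(np.mean(np.abs(err / y_true)))   # often reported as a percentage
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - np.mean(y_true)) ** 2))
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "MAPE": mape, "R2": 1.0 - ss_res / ss_tot}

# Per footnote 1, some papers average the metrics over all batteries, e.g.:
# mean_rmse = np.mean([regression_metrics(y_t, y_p)["RMSE"] for y_t, y_p in per_battery])
```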

SOH Estimation

Yang et al. (2024)

Yang G, Wang X, Li R, et al. State of Health Estimation for Lithium-Ion Batteries Based on Transferable Long Short-Term Memory Optimized Using Harris Hawk Algorithm[J]. Sustainability, 2024, 16(15): 6316.

The article uses only the 1st battery of Batch-1, i.e. B1b1.

The article implemented two SOH estimation modes (a fine-tuning sketch follows the list):

  1. Pre-training on NASA's B6 and B7 batteries, then fine-tuning with the first 30% of the B1b1 data, followed by testing on B1b1.
  2. Training with the first 70% of the B1b1 data, followed by testing on B1b1.
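
A minimal PyTorch sketch of mode 1 under stated assumptions: a generic LSTM + fully connected regressor, with the LSTM frozen and only the FC head fine-tuned. The layer sizes, the freezing choice, the file name, and the optimizer are illustrative; this is not the HHO-optimized configuration from the paper.

```python
import torch
import torch.nn as nn

class LSTMFC(nn.Module):
    """Generic LSTM + fully connected SOH regressor (sizes are placeholders)."""
    def __init__(self, n_features: int = 8, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, seq_len, n_features)
        out, _ = self.lstm(x)
        return self.fc(out[:, -1, :])    # SOH estimate from the last time step

model = LSTMFC()
# Mode 1 (transfer): load weights pre-trained on NASA B6/B7, then fine-tune on
# the first 30% of the B1b1 cycles. Freezing the LSTM and updating only the FC
# head is one common fine-tuning choice, not necessarily the paper's.
# model.load_state_dict(torch.load("pretrained_nasa.pt"))  # hypothetical file
for p in model.lstm.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```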

Results:

| Model | RMSE | MAE | R2 | Mode |
|---|---|---|---|---|
| HHO-LSTM-FC-TL(B6) | 0.0037 | 0.0029 | 0.9941 | 1 |
| HHO-LSTM-FC-TL(B7) | 0.0034 | 0.0027 | 0.9952 | 1 |
| HHO-LSTM-FC | 0.0078 | 0.0065 | 0.9422 | 2 |

Wang et al. (2024a)

Wang F, Zhai Z, Liu B, et al. Open access dataset, code library and benchmarking deep learning approaches for state-of-health estimation of lithium-ion batteries[J]. Journal of Energy Storage, 2024, 77: 109884.

In this article, we provide a benchmark testing five deep learning models on three types of inputs (all charging data, partial charging data, features) and under three normalization methods.

Specific Results

The results figure (omitted here) reports the five models using features as input and [-1, 1] normalization, with all values multiplied by 1000. Due to the abundance of results, we only show this configuration here; the other results can be found in the original paper.
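
As a concrete example of one normalization option, here is a minimal sketch of [-1, 1] min-max scaling with statistics fitted on the training split (a common precaution; the paper's exact procedure may differ), together with the ×1000 scaling used when reporting values.

```python
import numpy as np

def fit_minmax(train: np.ndarray):
    """Column-wise min/max computed on the training split only."""
    return train.min(axis=0), train.max(axis=0)

def to_minus1_plus1(x: np.ndarray, x_min: np.ndarray, x_max: np.ndarray) -> np.ndarray:
    """Scale features to [-1, 1] using the training-set statistics."""
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

# The benchmark figure reports metric values multiplied by 1000:
# reported_value = raw_metric * 1000
```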

Fan et al. (2024a)

Fan X, Yang X, Hou F. Integrated Mixed Attention U-Net Mechanisms with Multi-Stage Division Strategy Customized for Accurate Estimation of Lithium-Ion Battery State of Health[J]. Electronics, 2024, 13(16): 3244.

The article uses data from Batch-1, Batch-2, and Batch-3. The model inputs are the raw voltage, raw current, and raw temperature data.

Dataset partitioning: (figure omitted)

Experimental results: (figure omitted)
Wang et al. (2024b)

Wang J, Li H, Wu C, et al. State of Health Estimations for Lithium-Ion Batteries Based on MSCNN[J]. Energies, 2024, 17(17): 4220.

The article extracts 8 features from the charging data, which are: Constant current charging time, Constant voltage charging time, Average charging voltage, Average charging current, Standard deviation of charging voltage, Skewness of charging current, Skewness of charging voltage, Kurtosis of charging voltage.
Three modes were used to validate the model's performance.
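
A minimal sketch of these eight features, assuming one charging cycle is given as time, current, and voltage arrays and that the CC/CV phases can be separated by a simple current threshold; the threshold and the phase-splitting rule are assumptions, not the paper's exact preprocessing.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def charging_features(time: np.ndarray, current: np.ndarray, voltage: np.ndarray,
                      cv_current_threshold: float) -> dict:
    """Eight charging features; CC/CV phases split by a current threshold (assumption)."""
    cc = current >= cv_current_threshold   # constant-current phase: current near its set point
    cv = ~cc                               # constant-voltage phase: current tapering off
    return {
        "cc_charging_time": float(time[cc][-1] - time[cc][0]) if cc.any() else 0.0,
        "cv_charging_time": float(time[cv][-1] - time[cv][0]) if cv.any() else 0.0,
        "mean_charging_voltage": float(np.mean(voltage)),
        "mean_charging_current": float(np.mean(current)),
        "std_charging_voltage": float(np.std(voltage)),
        "skew_charging_current": float(skew(current)),
        "skew_charging_voltage": float(skew(voltage)),
        "kurtosis_charging_voltage": float(kurtosis(voltage)),
    }
```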

Note: In the table below, Group A is equivalent to B1 as defined above;
Group B is equivalent to B2 as defined above.


Mode 1: Training and testing on the same batch

Dataset partitioning: (figure omitted)

Results on the Batch-1 dataset (Group A 1 = B1b1): (figure omitted)

Results on the Batch-2 dataset (the article selects the odd-numbered batteries of Batch-2, so Group B x = B2b(2x-1)): (figure omitted)

Mode 2: Varying the size of the training set

Dataset partitioning: (figure omitted)

Experimental results: (figure omitted)

Mode 3: Mixed training and testing on two batches

Dataset partitioning: (figure omitted)

Experimental results: (figure omitted)

Wang et al. (2024c)

Wang Z, Zhao Z, Zhou M, et al. Online Capacity Prediction of Lithium-Ion Batteries Based on Physics-Constrained Zonotopic Kalman Filter[J]. IEEE Transactions on Reliability, 2024.

The article uses data from 3 batteries in Batch-2, specifically: B2b1, B2b4, B2b5.

The training and testing mode is AA, meaning early data is used for training and later data for testing.

The average charging current (ACC) during the period from $T_1$ to $T_2$ is constructed as an indirect health indicator (HI) to predict battery capacity.
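
A minimal sketch of how such an indicator could be computed, assuming each cycle's charging data is available as time and current arrays; $T_1$ and $T_2$ come from the paper and their concrete values are not reproduced here.

```python
import numpy as np

def average_charging_current(time: np.ndarray, current: np.ndarray,
                             t1: float, t2: float) -> float:
    """Mean charging current over the window [t1, t2] of one cycle (the ACC HI)."""
    window = (time >= t1) & (time <= t2)
    return float(np.mean(current[window]))

# The resulting per-cycle HI sequence is then mapped to capacity, e.g. by the
# physics-constrained zonotopic Kalman filter in the paper (not sketched here).
```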

Results visualization: (figure omitted)

The authors also report estimation results for different starting points (table columns: Battery, Cycle, MAE, RMSE, R2): (table omitted)

The comparison with other methods provided in the article: (table omitted)

Wang et al. (2024d)

Wang C, Wu J, Yang Y, et al. Multi-scale self-attention feature decoupling transfer network-based cross-domain capacity prediction of lithium-ion batteries[J]. Journal of Energy Storage, 2024, 103: 114286.

The article uses the batteries of Batch-1 together with the first 8 batteries of Batch-2 and of Batch-3 to verify the proposed method, denoted B1-B3. The task is to predict battery capacity with a transfer learning method; the three batches represent three domains, denoted D1, D2, and D3 in the article.

Results visualization: (figure omitted)

The comparison with other methods provided in the article: (table omitted)


RUL Prediction


Other Tasks

Tang et al. (2024a)

Tang A, Xu Y, Tian J, et al. Physics-informed battery degradation prediction: Forecasting charging curves using one-cycle data[J]. Journal of Energy Chemistry, 2024.

The task of this article is charging-curve prediction: one cycle's V-Q curve is used to predict the V-Q curves of multiple future cycles. The data of Batch-1 and Batch-2 are used for verification. In each batch, batteries #1, #3, #4, #5, #6, and #7 are used for training, and batteries #2 and #8 for testing. The prediction length is 150 cycles.
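
A minimal sketch of how training pairs for this task could be arranged, assuming every cycle's V-Q curve has already been resampled onto a common grid; the 150-cycle horizon follows the text above, everything else (names, shapes) is an assumption.

```python
import numpy as np

def make_vq_pairs(vq_curves: np.ndarray, horizon: int = 150):
    """Pair each cycle's V-Q curve with the curves of the next `horizon` cycles.

    vq_curves: shape (n_cycles, n_points), one resampled V-Q curve per cycle.
    Returns inputs of shape (n_samples, n_points) and targets of shape
    (n_samples, horizon, n_points).
    """
    inputs, targets = [], []
    for k in range(len(vq_curves) - horizon):
        inputs.append(vq_curves[k])
        targets.append(vq_curves[k + 1: k + 1 + horizon])
    return np.stack(inputs), np.stack(targets)
```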

Results visualization: (figure omitted)

Tang et al. (2024b)

Tang A, Xu Y, Liu P, et al. Deep learning driven battery voltage-capacity curve prediction utilizing short-term relaxation voltage[J]. eTransportation, 2024: 100378.

The article uses the relaxation voltage curve to predict the V-Q curve, and uses the data of Batch-1 and Batch-2 for verification.

Note that the article reports only the maximum RMSE (0.046 for Batch-1 and 0.055 for Batch-2) and does not give average values.

Results visualization: (figure omitted)

Footnotes

  1. The MSE, RMSE, MAE, and MAPE values in the table are averages across all batteries.
