Frontiers of Data and Computing, 2025, Vol. 7, Issue 1: 163-174.

CSTR: 32002.14.jfdc.CN10-1649/TP.2025.01.012

doi: 10.11871/jfdc.issn.2096-742X.2025.01.012

• Technology and Application •

A Study of the Fine-Tuning Technique of the Llama2-70b Model and Its Application in the Field of Materials

TANG Lei1,2, CHEN Ziyi1,2, LIANG Sihan1,2, LI Kai1, WAN Meng1, ZHANG Boyao1, LIU Miao3, MENG Sheng3, WANG Yangang1,2, ZHOU Chunbao1,2,*, WANG Zongguo1,2,*

  1. Computer Network Information Center, Chinese Academy of Sciences, Beijing 100190, China
  2. University of Chinese Academy of Sciences, Beijing 100049, China
  3. Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
  • Received: 2024-11-14  Online: 2025-02-20  Published: 2025-02-21

Abstract:

[Objective] To lower the barriers to using large language models and promote their application across different fields, this paper systematically introduces the fine-tuning process of the Llama2-70b model and its application in the field of materials science. [Methods] Using the DeepSpeed framework and an instruction dataset of inorganic material synthesis pathways, this study fine-tuned the open-source Llama2-70b model with the LoRA technique. The model’s hyperparameters were optimized, the tuning effects were evaluated based on the training loss and the model’s stability, and a suitable hyperparameter combination was determined. [Results] Training and optimization yielded a large language model for material synthesis with excellent stability and performance. [Conclusions] This research provides valuable experience and methods for applying large language models in academic fields. The trained material language model offers a meaningful reference and support for material synthesis design.

Key words: Llama2-70b model, LoRA, large language model, material synthesis
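
As a hedged illustration of the fine-tuning setup described in the Methods, the following Python sketch outlines LoRA fine-tuning of Llama2-70b using the Hugging Face transformers/peft stack with a DeepSpeed configuration. It is a minimal sketch under stated assumptions, not the paper's implementation: the checkpoint name, dataset file, DeepSpeed config path, and all hyperparameter values (rank, alpha, learning rate, batch settings, epochs) are illustrative placeholders, since the paper's reported values are not given here.

import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_PATH = "meta-llama/Llama-2-70b-hf"    # assumed checkpoint name
DATA_PATH = "synthesis_instructions.json"   # assumed instruction-dataset file

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
tokenizer.pad_token = tokenizer.eos_token   # Llama2 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, torch_dtype=torch.bfloat16)

# LoRA freezes the base weights and trains low-rank adapters injected into
# the attention projections, so only a small fraction of parameters updates.
lora_config = LoraConfig(
    r=8,                                    # assumed adapter rank
    lora_alpha=16,                          # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # assumed target modules
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

def tokenize(example):
    # Join an instruction and its reference answer into one training sequence.
    text = example["instruction"] + "\n" + example["output"]
    return tokenizer(text, truncation=True, max_length=1024)

train_set = load_dataset("json", data_files=DATA_PATH)["train"].map(tokenize)

# DeepSpeed (e.g., ZeRO stage 3) shards parameters and optimizer state across
# GPUs; the JSON config referenced below is an assumed placeholder.
args = TrainingArguments(
    output_dir="llama2-70b-materials-lora",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,         # assumed
    learning_rate=1e-4,                     # assumed
    num_train_epochs=3,                     # assumed
    logging_steps=10,                       # training loss is the tuning signal
    bf16=True,
    deepspeed="ds_zero3_config.json",       # assumed DeepSpeed config path
)

Trainer(
    model=model,
    args=args,
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

Launched with the deepspeed command-line launcher across multiple GPUs, a script of this shape logs the per-step training loss that the abstract names as the tuning signal; the hyperparameter combination shown above is only an assumed starting point for the kind of sweep the paper describes.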