[1] DANG Y, LIN Q, HUANG P. AIOps: real-world challenges and research innovations[C]. 2019 IEEE/ACM 41st International Conference on Software Engineering: Companion Proceedings (ICSE-Companion). IEEE, 2019: 4-5.
[2] ACHIAM J, ADLER S, AGARWAL S, et al. GPT-4 technical report[J]. arXiv preprint arXiv:2303.08774, 2023.
[3] HU E J, SHEN Y L, WALLIS P, et al. LoRA: Low-rank adaptation of large language models[C]. International Conference on Learning Representations (ICLR), 2022.
[4] OLINER A, GANAPATHI A, XU W. Advances and challenges in log analysis[J]. Communications of the ACM, 2012, 55(2): 55-61.
[5] JIANG Z, LIU J, CHEN Z, et al. LLMParser: A LLM-based Log Parsing Framework[J]. arXiv preprint arXiv:2310.01796, 2023.
[6] QI J, HUANG S, LUAN Z, et al. LogGPT: Exploring ChatGPT for Log-based Anomaly Detection[C]. 2023 IEEE International Conference on High Performance Computing & Communications, Data Science & Systems, Smart City & Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys). IEEE, 2023: 273-280.
[7] HUANG S, LIU Y, QI J, et al. GLOSS: Guiding Large Language Models to Answer Questions from System Logs[C]. 2024 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER). IEEE, 2024: 91-101.
[8] RAM O, LEVINE Y, DALMEDIGOS I, et al. In-Context Retrieval-Augmented Language Models[J]. Transactions of the Association for Computational Linguistics, 2023, 11: 1316-1331.
[9] CHENG D, HUANG S, BI J, et al. UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation[J]. arXiv preprint arXiv:2303.08518, 2023.
[10] LIU H, LIU J, HUANG S, et al. SE2: Sequential Example Selection for In-Context Learning[J]. arXiv preprint arXiv:2402.13874, 2024.
[11] REIMERS N, GUREVYCH I. Sentence-BERT: Sentence Embeddings Using Siamese BERT-Networks[J]. arXiv preprint arXiv:1908.10084, 2019.
[12] ROBERTSON S, ZARAGOZA H. The Probabilistic Relevance Framework: BM25 and Beyond[J]. Foundations and Trends in Information Retrieval, 2009, 3(4): 333-389.
[13] TOUVRON H, LAVRIL T, IZACARD G, et al. LLaMA: Open and Efficient Foundation Language Models[J]. arXiv preprint arXiv:2302.13971, 2023.
[14] TOUVRON H, MARTIN L, STONE K, et al. Llama 2: Open Foundation and Fine-Tuned Chat Models[J]. arXiv preprint arXiv:2307.09288, 2023.
[15] GUO D, YANG D, ZHANG H, et al. DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning[J]. arXiv preprint arXiv:2501.12948, 2025.
[16] WANG L, YANG N, HUANG X, et al. Text Embeddings by Weakly-Supervised Contrastive Pre-Training[J]. arXiv preprint arXiv:2212.03533, 2022.
[17] WANG L, YANG N, WEI F. Learning to Retrieve In-Context Examples for Large Language Models[J]. arXiv preprint arXiv:2307.07164, 2023.