[1] STALLMAN R M. The GNU manifesto[M]//Computers, Ethics, & Society. USA: Oxford University Press, Inc., 1990: 308-317.
[2] RAYMOND E S. The cathedral and the bazaar: musings on Linux and open source by an accidental revolutionary[M]. USA: O’Reilly & Associates, Inc., 2001: 23-49.
[3] FELLER J, FITZGERALD B. A framework analysis of the open source software development paradigm[C]//Proceedings of the Twenty-First International Conference on Information Systems. USA: Association for Information Systems, 2000: 58-69.
[4] LERNER J, TIROLE J. Some simple economics of open source[J]. The Journal of Industrial Economics, 2002, 50(2): 197-234. doi: 10.1111/joie.2002.50.issue-2
[5] OLATUNJI S O, IDREES S U, AL-GHAMDI Y S, et al. Mining software repositories: a comparative analysis[J]. International Journal of Computer Science and Network Security, 2010, 10(8): 161-174.
[6] HARS A, OU S. Working for free? Motivations of participating in open source projects[C]//Proceedings of the 34th Annual Hawaii International Conference on System Sciences. 2001: 9 pp.
[7] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015, 521(7553): 436-444. doi: 10.1038/nature14539
[8] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016: 770-778.
[9] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Red Hook, NY, USA: Curran Associates Inc., 2017: 6000-6010.
[10] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[C]//Advances in Neural Information Processing Systems. Curran Associates, Inc., 2020: 1877-1901.
[11] ACHIAM J, ADLER S, AGARWAL S, et al. GPT-4 technical report[J]. arXiv preprint arXiv:2303.08774, 2023.
[12] GEMINI TEAM, ANIL R, BORGEAUD S, et al. Gemini: a family of highly capable multimodal models[J]. arXiv preprint arXiv:2312.11805, 2023.
[13] ROUMELIOTIS K I, TSELIKAS N D, NASIOPOULOS D K. Llama 2: early adopters’ utilization of Meta’s new open-source pretrained model[J]. 2023. doi: 10.20944/preprints202307.2142.v1
[14] LIU A, FENG B, XUE B, et al. DeepSeek-V3 technical report[J]. arXiv preprint arXiv:2412.19437, 2024.
[15] YANG A, XIAO B, WANG B, et al. Baichuan 2: open large-scale language models[J]. arXiv preprint arXiv:2309.10305, 2023.
[16] WANG T, ZHANG W, YE C, et al. FD4C: automatic fault diagnosis framework for web applications in cloud computing[J]. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2016, 46(1): 61-75.
[17] LI X, ZHANG W. Deep learning-based partial domain adaptation method on intelligent machinery fault diagnostics[J]. IEEE Transactions on Industrial Electronics, 2021, 68(5): 4351-4361.
[18] DU M, LI F, ZHENG G, et al. DeepLog: anomaly detection and diagnosis from system logs through deep learning[C]//Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. New York, NY, USA: Association for Computing Machinery, 2017: 1285-1298.
[19] KUMAR S. Artificial intelligence in software engineering: a systematic exploration of AI-driven development[J]. International Journal of Innovative Research in Science, Engineering and Technology, 2024, 13(6): 11903-11913.
[20] DESMOND O C. AI-powered DevOps: leveraging machine intelligence for seamless CI/CD and infrastructure optimization[J]. International Journal of Science and Research Archive, 2022, 6(2): 94-107.
[21] MNIH V, BADIA A P, MIRZA M, et al. Asynchronous methods for deep reinforcement learning[C]//Proceedings of the 33rd International Conference on Machine Learning, Volume 48. JMLR.org, 2016: 1928-1937.
[22] IZACARD G, GRAVE E. Distilling knowledge from reader to retriever for question answering[J]. arXiv preprint arXiv:2012.04584, 2020.
[23] RACKAUCKAS Z. RAG-Fusion: a new take on retrieval-augmented generation[J]. arXiv preprint arXiv:2402.03367, 2024.
[24] GAO L, MA X, LIN J, et al. Precise zero-shot dense retrieval without relevance labels[C]//Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2023: 1762-1777.
[25] WANG Z, LIU A, LIN H, et al. RAT: retrieval augmented thoughts elicit context-aware reasoning in long-horizon generation[J]. arXiv preprint arXiv:2403.05313, 2024.
[26] WEI J, WANG X, SCHUURMANS D, et al. Chain-of-thought prompting elicits reasoning in large language models[C]//Proceedings of the 36th International Conference on Neural Information Processing Systems. Red Hook, NY, USA: Curran Associates Inc., 2022: 1-14.
[27] LUO Y B, SUN J, TAO L Z. Research progress of large models for time series and spatiotemporal data analysis[J]. Science & Technology Review, 2025, 43(18): 48-56 (in Chinese). doi: 10.3981/j.issn.1000-7857.2025.05.00037
[28] NIE Y, NGUYEN N H, SINTHONG P, et al. A time series is worth 64 words: long-term forecasting with transformers[C]//The Eleventh International Conference on Learning Representations. 2023.
[29] ZHANG Y, YAN J. Crossformer: transformer utilizing cross-dimension dependency for multivariate time series forecasting[C]//The Eleventh International Conference on Learning Representations. 2023.
[30] WU H, HU T, LIU Y, et al. TimesNet: temporal 2D-variation modeling for general time series analysis[J]. arXiv preprint arXiv:2210.02186, 2022.
[31] WANG H, PENG J, HUANG F, et al. MICN: multi-scale local and global context modeling for long-term series forecasting[C]//The Eleventh International Conference on Learning Representations. 2023.
[32] ZHOU T, NIU P, WANG X, et al. One fits all: power general time series analysis by pretrained LM[C]//Proceedings of the 37th International Conference on Neural Information Processing Systems. Red Hook, NY, USA: Curran Associates Inc., 2023: 1-34.