[Objective] Automatic anomaly detection on key performance indicators (KPIs), the basis of AIOps (Artificial Intelligence for IT Operations), is of vital importance to rapid failure detection and mitigation. [Scope of the literature] In this paper, we investigate unsupervised KPI anomaly detection methods that are based on deep generative models. [Methods] We systematically describe the theoretical models of Donut, Bagel, and Buzz, three unsupervised KPI anomaly detection methods, and analyze their advantages and limitations in terms of accuracy and efficiency. [Results] We evaluate the performance of these three approaches on real-world KPI data. [Limitations] KPI anomaly detection methods based on deep generative models are continuously evolving, and we will explore more methods in this area. [Conclusions] The choice of a deep generative model should consider the characteristics of the KPI data. Generally, if the KPI data is sensitive to timing information, Bagel should be applied for anomaly detection; Buzz should be used if the data is non-seasonal and complex.
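For intuition, Donut-style detectors score each sliding KPI window by how well a variational autoencoder can reconstruct it. The sketch below is a minimal PyTorch illustration of that scoring idea, with hypothetical names, a simplified MLP encoder/decoder, and mean squared error standing in for the full reconstruction log-likelihood; it is not the authors' implementation.

```python
import torch
import torch.nn as nn

class WindowVAE(nn.Module):
    """Toy VAE over fixed-length KPI windows (illustrative only)."""
    def __init__(self, window=120, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(window, 64), nn.ReLU())
        self.to_mu = nn.Linear(64, latent)
        self.to_logvar = nn.Linear(64, latent)
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, window))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def anomaly_score(model, x, n_samples=16):
    """Monte Carlo reconstruction error; a higher score marks a more anomalous window."""
    with torch.no_grad():
        errs = [((model(x)[0] - x) ** 2).mean(dim=-1) for _ in range(n_samples)]
    return torch.stack(errs).mean(dim=0)
```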
[Objective] HTTP is the most used protocol in network traffic these days, and its use is still growing rapidly. However, the normal transmission of HTTP relies on the underlying TCP/IP protocol stack, which limits its ability to solve problems in the current Internet. The response speed of a web page is critical to users' experience when they browse the web. This paper aims to give a solution that improves the response speed of web browsing. [Methods] We propose a solution that implements in-network caching for the HTTP protocol using the idea of the ICN (Information-Centric Networking) protocol and the P4 (Programming Protocol-Independent Packet Processors) language, and we evaluate the proposed solution. Firstly, we propose a packet conversion mechanism for converting a custom packet into a special packet and converting it back. Secondly, we adopt the P4 language to implement ICN transmission in the forwarding router. [Results] To verify the functionality of our design, we set up a network topology among multiple virtual machines and verify the network performance improvements when using the ICN protocol in HTTP transport. Evaluation results show that our solution, which enables P4 switches to cache HTTP content responses, can aggregate identical requests and improve network performance. [Conclusions] Therefore, the solution proposed in this paper is effective for improving users' web browsing experience.
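As a rough illustration of the forwarding behavior described above, the Python model below (hypothetical class and method names, not the paper's P4 code) captures the two mechanisms an ICN-enabled switch adds to HTTP: answering repeated requests from a content store, and aggregating in-flight requests for the same content name.

```python
class IcnSwitch:
    """Toy model of the caching/aggregation logic a P4 switch would run."""
    def __init__(self):
        self.content_store = {}   # content name -> cached HTTP response
        self.pending = {}         # content name -> clients awaiting a response

    def on_request(self, name, client):
        if name in self.content_store:        # cache hit: answer from the switch
            return [("reply", client, self.content_store[name])]
        if name in self.pending:              # duplicate in-flight request: aggregate
            self.pending[name].append(client)
            return []
        self.pending[name] = [client]         # first request: forward to the server
        return [("forward", name)]

    def on_response(self, name, data):
        self.content_store[name] = data       # cache, then satisfy all waiting clients
        return [("reply", c, data) for c in self.pending.pop(name, [])]
```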
[Objective] Aiming at problems in the Vehicle Social Network (VSN) such as large cache redundancy and low efficiency, two strategies for cache decision and cache replacement that adapt to the dynamic VSN are proposed. Fast response to user-requested content is realized through in-network caching, which satisfies passengers' real-time needs for data and provides a valuable reference for subsequent research on the cache mechanism in the content-centric Vehicle Social Network. [Scope of the literature] This paper focuses on the architecture design of Information-Centric Networking (ICN), the caching mechanisms of ICN and VSN, and research combining the two. [Methods] Firstly, this paper uses real-time monitoring of the popularity of cached content and the evaluation of friendship among nodes as the basis for judging whether to cache the content. Then, the content store is partitioned to increase the diversity of the cache. Finally, a cache replacement strategy is formulated based on the importance of the nodes. [Results] The cache strategy designed in this paper significantly improves the response efficiency of interest packets and avoids the loss caused by frequent switching. At the same time, under the premise of ensuring the packet delivery rate, the network overhead is greatly reduced. [Limitations] Due to practical constraints, the experiments could not be carried out in a real environment, so the experimental results may be overly idealized. [Conclusions] Applying ICN technology to VSN can effectively exploit the separation of content and location, which better supports terminal mobility. The in-network caching mechanism also reduces network latency and achieves rapid content delivery.
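To make the two decisions concrete, here is a hedged sketch in which the caching judgment is a weighted combination of content popularity and node friendship, and replacement evicts the entry with the lowest node importance. The weights, threshold, and function names are illustrative assumptions, not the paper's exact formulas.

```python
def should_cache(popularity, friendship, w_pop=0.6, w_fri=0.4, threshold=0.5):
    """popularity and friendship are scores in [0, 1]; True means cache the content."""
    return w_pop * popularity + w_fri * friendship >= threshold

def evict(cache):
    """cache: dict name -> (data, node_importance). Evict the least important entry."""
    victim = min(cache, key=lambda name: cache[name][1])
    del cache[victim]
    return victim
```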
[Objective] This paper models partially observed network performance data as a tensor and exploits the powerful feature-extraction ability of deep neural networks to recover the missing data. [Methods] Different from traditional tensor completion, which relies on tensor decomposition, we design a novel tensor completion scheme based on a Deep Convolutional Autoencoder (DCAE). The DCAE can handle sparse matrix input, learn complex relationships in the data, and reconstruct the missing data. [Results] We have conducted extensive experiments using three public real-world network performance datasets. Our results demonstrate that the DCAE achieves significantly better recovery accuracy even when the sampling ratio is very low. [Limitations] Due to network attacks, network performance data may contain unavoidable anomalies, which deteriorate the recovery accuracy. In the future, we plan to handle such abnormal data for more robust recovery. [Conclusions] The proposed model can capture the non-linear relationships among network performance data, achieve high data recovery accuracy, and recover missing data for advanced network applications.
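A minimal sketch of the DCAE idea, assuming PyTorch and a binary observation mask: the autoencoder is trained only against observed entries, and its output is then read at the missing positions. The layer sizes and names are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Encoder-decoder over 2D slices of the performance tensor (illustrative sizes).
dcae = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def masked_loss(x_hat, x, mask):
    # Only observed entries (mask == 1) contribute to the training loss.
    return ((x_hat - x) ** 2 * mask).sum() / mask.sum()

# Usage: x is the sparse input with missing entries zeroed, mask marks samples.
# After training, recovered = dcae(x) fills in the unobserved entries.
```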
[Objective] This paper explores possible innovation directions for future Internet architecture based on the idea of coexistence and integration of multi-dimensional cyberspace. [Methods] On the basis of the coexistence of multiple identifiers, the interaction and cooperation between polymorphic networks, and the compatibility and extensibility of diversified service styles, this paper summarizes several features the future Internet should have and analyzes the key technologies as well as their difficulties, after which the corresponding goals and theoretical framework are put forward. [Results] By discussing the basic architecture and the core mechanisms of the future Internet under the theory of coexistence and integration of multi-dimensional cyberspace, an architecture for smart integrated networking is proposed, which can be considered a new idea for innovation and breakthroughs in Internet technologies. [Conclusions] Although the proposed design of coexistence and integration of multi-dimensional cyberspace conforms to the developing trend of Internet technology, its actual deployment as well as its performance evaluation still need further discussion and study.
[Objective] 6G is considered the next-generation network communication technology toward Network 2030. This paper proposes an intelligent management and control architecture for 6G networks oriented to on-demand network services. [Application background] The architecture integrates different types of artificial intelligence algorithm models to solve the “network on demand” problem for highly resilient and dynamic 6G networks. [Methods] This paper first surveys the relevant literature and the standardization progress in recent years, analyzes the requirements for 6G intelligent management and control and the challenges faced by existing management and control technologies, and finally proposes corresponding solutions. [Results] Based on the concept of the 6G intelligence-endogenous network, an intelligent management and control architecture for the 6G network is proposed, and the intelligent management and control functions of the 6G network built on key technologies are described as well. [Limitations] The control architecture and key technologies proposed in this paper require further systematic verification with a prototype. [Conclusions] Endogenous intelligence is one of the important characteristics of 6G networks. Artificial intelligence technology will play a core supporting role in the management and control of 6G networks, providing ubiquitous and reliable guarantees for on-demand resource scheduling and network-accompanying services.
[Context] Satellite networks have the advantages of wide coverage and robustness, can be used in a wide variety of fields, and occupy an important position in the development of future networks. However, due to their large constellations, complex heterogeneity, and dynamic topology, there are still challenges in efficient control and resource utilization. [Methods] Software-Defined Networking (SDN), as a key technology for future networks, can effectively solve a range of problems in satellite networks. [Results] This paper analyzes the advantages of applying SDN in satellite networks and introduces current research on SDN-based satellite networks in terms of network architecture design and key technologies, respectively. [Conclusions] Finally, this paper summarizes the challenges and future development directions of SDN-based satellite networks.
[Objective] The content naming granularity in Information-Centric Networking (ICN) is an important factor influencing network efficiency, especially the efficiency of routing table lookups. This work investigates the interplay between routing table lookup efficiency and content naming granularity. [Methods] Firstly, the number of network names is analyzed with respect to the content naming granularity, whereby the number of network requests and the size of routing tables are obtained under different naming methods. Then, the impact of content naming granularity on routing table lookup efficiency is further investigated. [Results] This study finds that the smaller the content naming granularity, the larger the number of names and requests; the routing table may also grow depending on the naming method, which in turn reduces table lookup efficiency. [Limitations] The results of this study are mainly based on literature review, data analysis, and local testing, and thus currently lack the support of real network testing. [Conclusions] This paper progressively explores the impact of content naming granularity on the efficiency of ICN and analyzes the main influencing factors and their interplay, which lays a theoretical foundation for further research on content naming granularity.
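A toy calculation makes the reported relationship concrete: splitting the same content into smaller chunks multiplies the number of names and requests, while the effect on the routing table depends on whether the naming method allows aggregation. The object and chunk sizes below are illustrative assumptions, not the study's data.

```python
object_size = 100 * 2**20                            # a 100 MB content object
for chunk_size in (2**20, 64 * 2**10, 4 * 2**10):    # 1 MB, 64 KB, 4 KB chunks
    n_names = object_size // chunk_size              # one name (and request) per chunk
    print(f"chunk={chunk_size // 1024:>5} KB -> {n_names:>6} names/requests")
# Whether the routing table grows with n_names depends on the naming method:
# hierarchical names can aggregate into one prefix entry; flat names cannot.
```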
[Objective] Land is the basis for human survival and development, and it is of great significance to study how land use change affects the economy, politics, and the environment. To provide references for land use evolution research and a clear, thorough understanding of the research status of land use, a new model is proposed in this paper. [Methods] This paper takes the Chebaling Ecological Reserve in Guangdong Province as the study case for model verification in land use simulation. The model combines an LSTM-based Recurrent Neural Network with Cellular Automata to study land use change data from 2005 to 2017. Spatial analyses of the Chebaling Ecological Reserve, based on massive vector and raster data, are performed with the aid of ArcGIS 10.2. The driving-force mechanism is studied by constructing fourteen spatial variables, including natural factors, social factors, distance factors, and so on. In addition, experiments are conducted with different threshold settings and random disturbance adoptions to further improve simulation accuracy. [Results] The simulation results show that the improved model has higher precision and a higher Kappa coefficient than traditional models. Besides, the threshold and random disturbance can be set conveniently in the new model. [Limitations] In the experiments, the number and types of spatial variables are relatively insufficient, and more relevant variables and driving factors need to be considered in later studies. [Conclusions] The proposed LSTM-based RNN-CA model, verified by the improved simulation results, satisfies the requirements and provides a reference for land use evolution research.
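A schematic sketch of the RNN-CA coupling described above, under stated assumptions (hypothetical shapes, class counts, and parameter values): an LSTM maps each cell's fourteen spatial driving variables over time to land-use transition probabilities, which the cellular automaton combines with neighborhood effects, a threshold, and random disturbance.

```python
import torch
import torch.nn as nn

class TransitionLSTM(nn.Module):
    """Per-cell transition probabilities from 14 driving variables (illustrative)."""
    def __init__(self, n_vars=14, hidden=32, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_vars, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                  # x: (cells, timesteps, n_vars)
        out, _ = self.lstm(x)
        return torch.softmax(self.head(out[:, -1]), dim=-1)

def ca_update(current, p_transition, neighborhood, threshold=0.8, noise=0.05):
    """Combine suitability with neighborhood density, perturb, then threshold."""
    score = p_transition * neighborhood * (1 + noise * torch.randn_like(p_transition))
    best_p, best_cls = score.max(dim=-1)
    return torch.where(best_p > threshold, best_cls, current)  # change cell or keep it
```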
[Objective] Computation offloading is an important research area in Mobile Edge Computing (MEC). It compensates for the shortcomings of devices in storage and computation, and has thus received extensive attention. This paper studies the computation offloading strategy for MEC in dense networks. [Methods] For the multi-base-station, multi-user-equipment scenario, we construct a computation offloading model with service caching and resource allocation features, and adopt dynamic programming and game theory to solve the caching problem and jointly allocate radio and computational resources. Finally, a Nash equilibrium of mutual satisfaction is achieved among users. [Results] Simulation experiments show that the proposed strategy effectively reduces overhead, improves system performance, and achieves better user satisfaction. [Conclusions] The strategy is suitable for mobile edge computing scenarios and provides theoretical and practical support for subsequent research on computation offloading. In the next step, we will consider incentive mechanisms for user behavior when discussing computation offloading.
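A hedged sketch of the game-theoretic part: each user repeatedly plays a best response (offload to the base station minimizing its own overhead, or compute locally) until no one wants to deviate, i.e. a Nash equilibrium. The cost functions and names below are illustrative, not the paper's exact formulation.

```python
def best_response_iteration(users, local_cost, offload_cost, max_rounds=100):
    """strategy[u] is None for local execution, or the index of the chosen base station."""
    strategy = {u: None for u in users}
    for _ in range(max_rounds):
        changed = False
        for u in users:
            # offload_cost reflects congestion caused by the others' current choices
            options = {None: local_cost(u)}
            options.update(offload_cost(u, strategy))   # base station -> overhead
            best = min(options, key=options.get)
            if best != strategy[u]:
                strategy[u], changed = best, True
        if not changed:          # no user deviates: Nash equilibrium reached
            return strategy
    return strategy
```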
[Objective] Interface energy and elastic strain energy are the main sources of anisotropy in the microstructure of materials. In this paper, the compact Exponential Time Differencing (ETD) method for the anisotropic phase field model with elastic strain energy is studied. [Methods] In the framework of the compact ETD method, the calculation of interface energy and elastic strain energy is introduced. The interface energy and elastic strain energy are treated as the nonlinear terms of the ETD method, and an operator splitting scheme is designed for them. [Results] It is mathematically proved that the operator splitting scheme guarantees energy stability, and numerical experiments on the corrosion phase field models of a Ni-based alloy and Zr-hydride are carried out, which verify the energy stability of the ETD method for the anisotropic phase field model with elastic strain energy. [Limitations] In this paper, only the first-order and second-order schemes of the ETD method are obtained, and higher-order schemes need to be further explored. [Conclusions] An energy-stable ETD method for the anisotropic phase field model with elastic strain energy is designed.
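For reference, the textbook first-order ETD scheme for a semilinear evolution equation reads as follows, with the stiff linear operator treated exactly and the nonlinear terms (here the interface- and elastic-energy contributions handled by operator splitting) integrated approximately. This is the standard form, not necessarily the paper's exact discretization.

```latex
% Semilinear form: u_t = L u + N(u), where L is the stiff linear operator and
% N(u) collects the nonlinear interface- and elastic-energy terms.
% First-order ETD (ETD1) time step of size \tau:
u^{n+1} \;=\; e^{\tau L}\, u^{n} \;+\; L^{-1}\!\left(e^{\tau L} - I\right) N\!\left(u^{n}\right)
```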
[Objective] This paper implements a parallel Fast Multipole Method (FMM) based on Charm++ to take advantage of its over-decomposition and migratability. [Methods] This is achieved by analyzing communication, separating parallel tasks, and converting synchronous communication to asynchronous communication. SDAG is used to implement the basic communication calls, and the LPT approximation strategy is adopted for dynamic load balancing. [Results] The results show that the Charm++-based parallel FMM has the same accuracy as the MPI implementation, and its execution speed at the thousand-core scale is better than that of the MPI implementation. Over-decomposition and the load-balancing strategy reduce execution time by 10% for unbalanced particle distributions. [Limitations] The current implementation does not use the shared-memory structure of Charm++ and needs further optimization. Besides, the load balancing strategy is simple. [Conclusions] This paper gives a relatively general method to convert MPI-style programs into Charm++-style ones and shows that over-decomposition and a load-balancing strategy can accelerate FMM execution.
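The LPT (Longest Processing Time) heuristic mentioned above is a classic approximation for makespan minimization: sort tasks by decreasing load and always assign the next task to the currently least-loaded processor. The sketch below illustrates the balancing idea only; it is not the paper's Charm++ code.

```python
import heapq

def lpt_assign(task_loads, n_procs):
    """Assign tasks (index -> load) to processors using the LPT heuristic."""
    heap = [(0.0, p) for p in range(n_procs)]        # (current load, processor)
    heapq.heapify(heap)
    assignment = {}
    for task, load in sorted(enumerate(task_loads), key=lambda t: -t[1]):
        cur, p = heapq.heappop(heap)                 # least-loaded processor
        assignment[task] = p
        heapq.heappush(heap, (cur + load, p))
    return assignment
```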
[Objective] To accelerate the LICOM oceanic circulation model and reduce the cost caused by high resolution, this paper designs and implements a GPU-accelerated version using CUDA C. [Methods] Based on the latest version of LICOM3, this paper analyzes the parallel algorithm over ocean grid blocks and uses CUDA threads to compute the grid points in parallel, which enables porting the main program of LICOM to the GPU platform; data transmission and device memory usage are also optimized. [Results] Experiments show that the simulation results of the GPU version are essentially the same as those of the original CPU version, while achieving 9.31x to 1.27x speedups on 2 to 16 NVIDIA K20 GPUs, respectively, compared with the same numbers of Intel Xeon E5-2680 V2 CPUs. [Limitations] The many synchronous boundary communications in LICOM3 limit the scalability of the program, and it is necessary to improve the scalability of the model through boundary communication optimization and algorithm optimization. [Conclusions] This paper implements and optimizes a GPU version of the LICOM3 program that achieves a considerable speedup while retaining acceptable scalability, which provides experience and a reference for the development of larger-scale oceanic circulation models in the future.
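An illustrative numba-CUDA kernel (Python rather than LICOM's actual CUDA C) showing the porting pattern described above: each CUDA thread updates one (i, j) ocean grid point, replacing the CPU's nested latitude/longitude loops, and arrays stay on the device between steps to minimize data transfer. The stencil is a toy stand-in for the real tendency terms.

```python
import numpy as np
from numba import cuda

@cuda.jit
def step_kernel(field, out, dt):
    i, j = cuda.grid(2)                      # one thread per grid point
    if 0 < i < field.shape[0] - 1 and 0 < j < field.shape[1] - 1:
        # toy 5-point stencil standing in for the real tendency terms
        out[i, j] = field[i, j] + dt * (field[i - 1, j] + field[i + 1, j]
                                        + field[i, j - 1] + field[i, j + 1]
                                        - 4.0 * field[i, j])

field = cuda.to_device(np.zeros((480, 640)))         # keep data resident on the GPU
out = cuda.device_array_like(field)
threads = (16, 16)
blocks = ((480 + 15) // 16, (640 + 15) // 16)        # cover the whole grid
step_kernel[blocks, threads](field, out, 0.1)
```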
[Objective] This paper introduces a network organization method for computing power networks to satisfy business needs, one that can flexibly schedule and allocate computing resources among clouds, networks, and edge devices. The approach aims to schedule and manage a wider range of computing resources in a unified framework: at the edge of the network, the large number of embedded devices and their differing architectures make it difficult for existing resource scheduling methods to meet the demand for computing power. [Methods] Starting from the computing network architecture and based on the cloud-native resource scheduling mechanism, a lightweight, multi-cluster, hierarchical edge resource scheduling scheme is described. [Results] Based on a lightweight cloud-native platform, we successfully manage and deploy a massive number of heterogeneous edge devices inside computing power networks in a unified framework. [Limitations] As a unified resource scheduling platform for front-end equipment across “cloud, edge, and end” designed for computing power networks, it still needs to solve the problems of implementing cloud-edge collaboration, deploying artificial intelligence algorithms in front-end embedded clusters, and making front-end equipment more autonomous. [Conclusions] The front-end embedded resource scheduling solution for computing power networks can be widely used in the Internet of Things, Internet of Vehicles, smart cities, and other fields to improve the autonomous processing capabilities of front-end equipment and to address practical problems such as the lack of innovation capability and insufficient support in China's intelligent industry.
[Objective] This paper introduces the 5G media streaming-related standards in 3GPP and provides a reference for mobile network operators and service providers to deliver media streaming in 5G networks. [Methods] By studying the 3GPP standards, the general service architecture for media streaming based on 5G, the 5G media distribution system, and issues requiring further research are elucidated. [Results] 3GPP SA WG4 (Codec) has produced relevant technical specifications and technical reports on the media streaming system architecture, protocols, codecs, formats, etc. for 5G, which are instructive for mobile network operators and service providers deploying media streaming in 5G. [Limitations] 3GPP standards for media transmission are numerous and constantly evolving. Due to the limited number of standards referenced, this paper does not fully describe the transmission architectures of all media forms. [Conclusions] Research on 5G media streaming standards is of great significance for improving the quality of experience of media stream transmission. It is necessary to keep up with the 3GPP standards, but actual deployment can be carried out according to the specific conditions of mobile network operators and service providers.
[Objective] With the rapid development of 5G and AI, various types of applications have appeared, each with specific requirements on computing power and the network. To provide users with a better service experience, it is necessary to provide applications with sufficient computing resources and deterministic network resources. Therefore, the joint optimization of computing and network resources is an important research interest. [Scope of the literature] This article focuses on joint optimization solutions for computing/network resources as well as related use cases adopted in current networks. [Methods] This paper presents a joint optimization solution for computing/network resource allocation, namely the computing power network (CPN), and introduces the architecture of a CPN-based test platform as well as the key technologies and typical instances. [Results] The CPN combines information on computing and network resources, which enables joint optimization and scheduling of the relevant resources on the basis of user requirements. [Limitations] The joint optimization solution involves many fields. As a new solution, it faces many challenges and needs to be further improved and developed according to different service requirements and business modalities. [Conclusions] The CPN can schedule computing, storage, network, and algorithm resources among multi-stage nodes. It has been recognized by experts in related fields, and standardization work has been carried out in international and domestic standards organizations such as ITU-T and CCSA.
[Objective] Through analysis of network slice management systems and the industry practice of operators, this paper provides an effective reference and model for developing the network slicing business in the trial commercial stage of 5G Standalone (SA) networking. [Scope of the literature] This paper refers to the slice management system architectures in international standards and combines openly available data with data from manufacturers and other channels to collect relevant network slicing information from operators in recent years. [Methods] This paper proposes a 5G network slice management system whose functional architecture is described and analyzed, and it sorts out and analyzes hot-topic information on slicing. [Results] Through this research on 5G network slice management systems, the paper offers operators selection suggestions regarding architecture, function, and business development in the commercial implementation stage of 5G network slicing. [Limitations] At present, 5G network slicing is in the early pilot application stage; the functions of slice management systems and the application scenarios of the slicing business still need further improvement. [Conclusions] The launch and operation of 5G slicing cannot be separated from the slice management system. Through a 5G network slice management system, industry-oriented slicing business is supported and differentiated 5G network service capabilities are provided, helping operators incubate industry applications under commercial conditions and win industry customers.
[Objective] The existing cloud computing service model cannot satisfy the development of the data-driven scientific research paradigm. Thus, how to design and implement a new architecture for data-intensive scientific computing is becoming a hot topic. [Scope of the literature] In this paper, the development trend of the data-driven scientific research paradigm, the challenges it brings to networking and computing, and the requirements for edge computing in scientific computing scenarios are summarized and analyzed in detail. At the same time, the research progress of edge computing in China and abroad is reviewed. [Methods] Based on this analysis, the paper presents a novel cloud service architecture integrating edge computing for scientific research and depicts its basic functions as well as its typical application scenarios and service capabilities. [Results] The architecture can satisfy the requirements of scientific computing in multiple scenarios, such as data transmission optimization, virtual networking, 5G converged access, edge-cloud collaborative computing power networks, and edge cloud services for scientific research applications. [Conclusions] Based on the fusion of computing, storage, networking, and routing, a new cloud service network architecture with heterogeneous network fusion and edge-cloud collaboration can be achieved by integrating edge computing, 5G networking, artificial intelligence, and network virtualization technologies.
[Objective] The SDN switch southbound protocol performance testing system is dedicated to constructing test scenarios, measuring various performance indicators of the switch under test according to a given traffic strategy, and evaluating whether it meets user performance requirements. [Scope of the literature] This article covers the development history of SDN switch southbound protocols, published SDN switch performance test results, and related SDN switch performance testing literature. [Methods] A flexible and accurate SDN switch southbound protocol performance testing system is proposed. A hardware/software co-design architecture ensures the scalability of the system, flexible flow construction capability, and nanosecond-level time measurement accuracy. [Results] Based on the FAST architecture, a flexible and accurate southbound protocol performance testing system for the OpenFlow protocol is implemented, and a series of OpenFlow performance tests are conducted on an H3C switch. [Limitations] The proposed scheme is only applicable to performance testing of a single switch with an unencrypted southbound protocol. Scenario simulation for SDN switch cluster performance testing and support for encrypted southbound protocols still need to be explored. [Conclusions] Through the OpenFlow flow table performance test of the H3C switch, the actual performance of the switch is objectively evaluated and the availability of the testing system is verified.
[Objective] This paper provides a comprehensive introduction to edge intelligence technologies, aiming to give interested readers a reference for understanding and following edge intelligence and to inspire more scholars to carry out research on edge intelligence models in the era of the Internet of Things. [Methods] The paper first briefly introduces the origin and concept of edge intelligence and sorts out its development trends, then summarizes three major contradictions that currently exist. Finally, we summarize the four current research directions addressing these contradictions and list typical application scenarios. [Limitations] As a new technology at an early stage, the development of edge intelligence is mostly driven by industry rather than academia. The academic community still lacks research ideas for standardization and integration, and future development cannot yet be fully charted. [Conclusions] Although still in the early stages of development, edge intelligence will become a catalyst for the intelligent industry and promote the upgrading and transformation of the entire industrial system.
[Background] Two mainstream network data formats, Protobuf and JSON, have their own characteristics and application scenarios. With the growing complexity of network applications, data exchange across different scenarios is required. Traditionally, JSON is mainly used for data transmission from the Web browser to the server, while Protobuf is mainly used for efficient and safe data transmission from clients to servers. [Objective] Thus, if conversion between the JSON and Protobuf data formats can be achieved, it will promote data interaction and greatly improve development efficiency. [Methods] This article implements a dynamic data conversion method that converts Protobuf data to the JSON format based on dynamic parsing and type reflection techniques. Besides, a test platform with multiple test cases has been built for verification. [Results] Experiments show the proposed method is reliable and stable, with good compatibility. The conversion throughput stays at 20 MB/s across the different test data of the test cases, for both Protobuf 2 and Protobuf 3.
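For reference, the official Python protobuf library exposes this Protobuf-to-JSON round trip for generated message classes; the article's contribution is performing the conversion dynamically, via descriptor parsing and type reflection, without precompiled classes. `Example` below is a hypothetical generated message type used only to show the API.

```python
from google.protobuf import json_format

msg = Example(id=42, name="demo")             # a generated protobuf message
as_json = json_format.MessageToJson(msg)      # Protobuf -> JSON string
back = json_format.Parse(as_json, Example())  # JSON string -> Protobuf
assert back == msg                            # lossless round trip
```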
[Objective] The goal of this study is to meet the needs of asset allocation and actual transactions of enterprise annuities in China, determine the overall risk and return goals, and obtain the best asset allocation ratio and better investment decisions. [Methods] Under the premise of the security and profitability of enterprise annuities, this paper develops a mean-variance optimization model with investment constraints based on a matrix-valued factor algorithm. The optimal value is obtained using the CVXOPT solver, a genetic algorithm, and particle swarm optimization. Then, considering three indicators (best variance, mean variance, and mean return rate), the optimal model is chosen and computed in parallel. [Results] Our experimental results show that the model can reduce the dimensionality of high-dimensional covariance matrices and predict them, which alleviates the problem that too many parameters may be difficult to solve when given numerous assets and yields faster convergence to the global optimum. Parallel computing significantly improves the calculation efficiency of the optimal model and effectively shortens its running time. [Limitations] For a portfolio optimization model for Chinese enterprise annuities, mitigating the unreliability of the mean-variance model solution and considering differences in the risk tolerance of employees are important issues to be resolved next. [Conclusions] The portfolio optimization model combining the matrix-valued factor algorithm and parallel computing helps solve the computational bottleneck of portfolio selection, promotes the preservation and appreciation of enterprise annuities, and alleviates the problems that the balance of the social pension system is difficult to sustain and that its burden keeps increasing under an aging population.
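A minimal mean-variance setup solved with CVXOPT's QP interface, the solver named in the abstract. The constraint set below (target return, full investment, no short selling) is an illustrative simplification of the paper's investment constraints, not its full model.

```python
# minimize x' Sigma x  s.t.  mu' x >= r_target, sum(x) = 1, x >= 0
import numpy as np
from cvxopt import matrix, solvers

def min_variance(sigma, mu, r_target):
    n = len(mu)
    P, q = matrix(2.0 * sigma), matrix(np.zeros(n))   # qp minimizes (1/2) x'Px + q'x
    # Gx <= h encodes -mu'x <= -r_target and -x <= 0
    G = matrix(np.vstack([-mu.reshape(1, -1), -np.eye(n)]))
    h = matrix(np.hstack([[-r_target], np.zeros(n)]))
    A, b = matrix(np.ones((1, n))), matrix([1.0])     # budget constraint sum(x) = 1
    return np.array(solvers.qp(P, q, G, h, A, b)["x"]).ravel()
```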
[Objective] To improve registration accuracy and authenticate wrapping paper images, fine-grained registration is performed on cigarette packaging images using matching feature points extracted by an optimized SIFT (Scale-Invariant Feature Transform) algorithm. [Methods] After performing block processing on the images, removing unstable feature points, and using coarse registration with a homography matrix to filter matched point pairs by a distance constraint, an evaluation approach based on the average distance between fine-grained matching pairs is proposed to improve the registration performance based on SIFT features. [Results] The experimental results show that the improved feature point extraction method can extract more evenly distributed feature points and improve the estimated matching rate. The proposed registration evaluation criterion can effectively assess registration quality, and the coarsely registered matching points improve the accuracy of fine-grained image registration and support authentication of the wrapping paper images. [Limitations] The current improvement focuses on the selection of matching pairs; there is still room for improvement in fine-grained registration methods. [Conclusions] Experiments prove that this strategy can improve registration accuracy and achieve the purpose of authenticating cigarette wrapping paper images.
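A standard OpenCV baseline (not the paper's improved algorithm) for the coarse stage of the pipeline described above: SIFT keypoints, ratio-test matching, and a RANSAC-estimated homography; the paper further filters the matched pairs by distance constraints and evaluates fine-grained pairs by average distance.

```python
import cv2
import numpy as np

def coarse_register(img_a, img_b):
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)        # coarse homography
    return H, inliers
```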
[Background] The turbulence problem involves many fields of engineering, and its importance is self-evident. The Reynolds-Averaged Navier-Stokes (RANS) equations provide an effective method for calculating time-averaged turbulence and are widely used because of their ease of calculation. With the development of deep learning technology, data-driven RANS modeling has attracted wide attention from researchers. [Methods] In this paper, a data-driven method for RANS modeling is proposed. Based on the results of numerical simulation software, the method uses deep learning to construct turbulence models. Because different turbulence systems have different initial conditions and data of varying quality, it is difficult to train a single unified neural network structure. Therefore, we use AutoML (automatic machine learning) to solve this problem, automatically building an appropriate network structure and choosing proper hyper-parameters for each dataset. In addition, we improve the method by mixing data under various initial conditions to train the deep learning model, which greatly increases the accuracy and robustness of our model. [Results] A typical OpenFOAM example, a step-flow simulation along the inner wall, is selected as the data source for the experiments. The experimental results show that the model has good accuracy and efficiency in predicting the Reynolds stress, which indicates that the data-driven method has great application prospects in turbulence simulation. [Limitations] To better apply deep learning in turbulence modeling, the most important issue to be solved next is how to couple deep learning models with turbulence simulation software. [Conclusions] At present, there is little systematic research on machine learning for turbulence. Building on the current work, machine learning will play a more important role in future turbulence modeling research.
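A hedged stand-in for the AutoML step: automatically searching model hyperparameters per dataset. Here scikit-learn's grid search tunes an MLP that maps mean-flow features to Reynolds-stress components; the paper uses a dedicated AutoML framework, so this only illustrates the per-dataset search idea, with illustrative grids.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor

param_grid = {
    "hidden_layer_sizes": [(64,), (64, 64), (128, 64)],  # candidate structures
    "learning_rate_init": [1e-3, 1e-4],                  # candidate hyper-parameters
}
search = GridSearchCV(MLPRegressor(max_iter=2000), param_grid, cv=3,
                      scoring="neg_mean_squared_error")
# X: mean-flow features per grid point; Y: Reynolds-stress targets from the
# OpenFOAM runs (mixed across initial conditions for robustness).
# search.fit(X, Y) then picks the best structure per dataset automatically.
```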
[Objective] Aiming at the high barriers that currently impede materials science researchers from taking advantage of machine learning algorithms, this article develops a user-friendly, highly automated machine learning system for materials data mining named Auto-Mat. [Methods] We integrate existing methods and machine learning algorithms from MatMiner and scikit-learn, and define a data dictionary to read data from different materials calculation databases. At the same time, we develop additional algorithms for feature selection and processing. [Results] The system provides a visual interaction and display interface for data mining and machine learning modules under a unified data format. With the optimized algorithms, model performance is improved. [Limitations] For data acquisition, currently only data available through the MatMiner API can be obtained, and the related code is tightly coupled to the MatMiner API, so extensibility is poor. Moreover, the execution speed of some core algorithms still needs to be improved. [Conclusions] Through this system, users can read data from several mainstream databases such as the Materials Project in one shot and quickly build their own materials data mining workflows. A comparative analysis of two cases shows that our platform effectively lowers the barrier for users to apply machine learning methods to materials data mining.
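A minimal sketch of the kind of workflow Auto-Mat automates on top of MatMiner and scikit-learn. The pipeline below uses plain scikit-learn with illustrative parameters and comments; it is not Auto-Mat's own API.

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.ensemble import RandomForestRegressor

pipeline = Pipeline([
    ("select", SelectKBest(f_regression, k=20)),       # feature selection step
    ("model", RandomForestRegressor(n_estimators=200)),
])
# X: featurized material descriptors (e.g. produced by MatMiner featurizers
# from Materials Project entries); y: a target property such as band gap.
# pipeline.fit(X, y) builds the whole data-mining workflow in a few lines;
# Auto-Mat wraps such steps behind a visual interface and a unified format.
```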