2024 Volume 45 Issue 04
YANG Shouyi1, CHEN Yihang1, ZHANG Shuangling2, HAN Haojin1, LI Guangyuan3, HAO Wanming1
Abstract: Mobile edge computing (MEC) has become one of the key technologies for future-oriented communications by offloading the computing and storage tasks of mobile terminals from centralized data centers to edge grids to satisfy the diverse device service demands in complex communication scenarios. The basic concept and framework of MEC technology were introduced by describing the development history from cloud computing and fog computing to mobile edge computing. On this basis, the research progress of MEC was discussed in four aspects, namely computation offloading, resource allocation, cache management, and security protection, and a detailed overview of the relevant research results was provided. Then, studies on several typical application scenarios of edge computing, such as IoT, MEC combined with blockchain, AI-assisted MEC systems, integrated sensing and communication, and cloud-edge collaboration, were summarized, demonstrating the potential benefits of MEC in constituting an intelligent, efficient and secure 6G communication network. Finally, the challenges faced by MEC research in convergence innovation were pointed out from the aspects of interoperability, security risk, mobility management and scalability, as well as its advantages and development trends in the directions of ultra-reliable low-latency communications, communication-sensing-computing integration and satellite-ground fusion mobile communication. Its development trend in future mobile communications was also summarized and prospected.
SHE Wei1,2,3, KONG Xiangji1,3, GUO Shuming2,4, TIAN Zhao1,3, LI Yinghao1,2,3
Abstract: In deep learning-based MVS methods, neural networks suffered from a large number of parameters and high GPU memory consumption. To address this issue, a lightweight MVS method based on a deep convolutional recurrent network was proposed. Firstly, the original images passed through a lightweight multi-scale feature extraction network to obtain high-level semantic feature maps. Then, a sparse cost volume was constructed to reduce the computational workload. Next, GPU memory consumption was reduced by using a simple plane sweeping technique combined with a convolutional recurrent network for cost volume regularization. Finally, sparse depth maps were extended to dense depth maps using an extension module, and with a refinement algorithm the proposed approach achieved a certain level of accuracy. The proposed approach was compared with state-of-the-art methods on the DTU dataset, including the traditional MVS methods Camp, Furu, Tola and Gipuma, as well as the deep learning-based MVS methods SurfaceNet, PU-Net, MVSNet, R-MVSNet, Point-MVSNet, Fast-MVSNet, GBI-Net and TransMVSNet. The results demonstrated that the proposed approach reduced GPU memory consumption to approximately 3.1 GB during the prediction stage, while the differences in precision compared with other methods were relatively small.
ZHI Min, LU Jingfang
Abstract: ViT, a model based on the Transformer architecture, has shown good results in image classification tasks. In this study, the application of ViT to image classification tasks was systematically summarized. Firstly, the functional characteristics of the ViT framework and its four modules (patch module, position encoding, multi-head attention mechanism and feed-forward neural network) were briefly introduced. Secondly, the application of ViT in image classification tasks was summarized together with the improvement measures for the four modules. Since different model structures and improvement measures could have a significant impact on the final classification performance, a side-by-side comparison of various types of ViTs was made. Finally, the advantages and limitations of ViT in image classification were pointed out, and possible future research directions were proposed to overcome the limitations and further extend the application of ViT to other computer vision tasks. The extension of ViT to a wider range of computer vision fields, such as video understanding, was also explored.
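A minimal sketch of the patch module and position encoding that the ViT framework described above relies on, written in plain numpy with randomly initialized projection and position parameters; the names, sizes and initialization here are illustrative assumptions, not taken from any surveyed model:

```python
import numpy as np

def patch_embed(image, patch_size=16, embed_dim=64, rng=np.random.default_rng(0)):
    """Split an HxWxC image into non-overlapping patches, project each patch
    linearly, then add (randomly initialized) position encodings."""
    h, w, c = image.shape
    ph, pw = h // patch_size, w // patch_size
    # (ph*pw, patch_size*patch_size*c): each row is one flattened patch
    patches = (image[:ph * patch_size, :pw * patch_size]
               .reshape(ph, patch_size, pw, patch_size, c)
               .transpose(0, 2, 1, 3, 4)
               .reshape(ph * pw, -1))
    w_proj = rng.normal(scale=0.02, size=(patches.shape[1], embed_dim))
    pos = rng.normal(scale=0.02, size=(ph * pw, embed_dim))   # position encoding
    return patches @ w_proj + pos                              # token sequence for the encoder

tokens = patch_embed(np.zeros((224, 224, 3)))
print(tokens.shape)  # (196, 64): 14x14 patches, each embedded into 64 dimensions
```

The resulting token sequence is what the multi-head attention and feed-forward modules then operate on.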
WEI Mingjun1,2, LI Feng1, LIU Yazhi1,2, LI Hui1
Abstract: In order to protect the Internet of Vehicles system from the threat of network attacks and improve the accuracy of intrusion detection, a new intrusion detection method (AQVAE-RGSNet) was proposed for the characteristics of large data flow and unbalanced attack types in the in-vehicle network. Firstly, an adversarial quantized variational autoencoder was used to handle the data imbalance of the vehicle network. It was constructed by combining the vector quantized variational autoencoder-2 with a generative adversarial network with gradient penalty to alleviate the extremely unbalanced number of samples of abnormal attack types in the dataset. Afterwards, ResNet and an improved segmented residual neural network were used to learn the input sample data and predict its attack category. The experimental results indicated that AQVAE-RGSNet achieved F1 scores of 0.998 6 and 0.999 7 on the vehicle networking datasets CICIDS2017 and CAN-intrusion-dataset, respectively. On the premise of ensuring the best training effect, it could identify attack threats in the vehicle network more effectively.
SUN Xiaochuan1,2, WANG Yu1,2, LI Yingqi1,2, HUANG Tianyu1,2
Abstract: To address the problem of undesirable prediction accuracy of a deep echo state network caused by redundant structures in the reservoirs, a pruning algorithm for the deep echo state network based on detrended multiple cross-correlation was proposed. Firstly, according to the detrended covariance function and the detrended variance function, the detrended cross-correlation coefficient between every two neurons in each selected reservoir was calculated in turn, and the detrended cross-correlation matrix was constructed. Based on this matrix, the detrended multiple cross-correlation between a selected neuron and all remaining neurons in the reservoir could be evaluated. Subsequently, the connections from the highly correlated neurons in each reservoir to the output layer were pruned sequentially, thus removing redundant components from the network. Finally, the pruned network was retrained by least squares regression to obtain the optimal deep echo state network topology. Simulation results showed that the prediction accuracy and memory capacity of the deep echo state network optimized by the proposed algorithm were improved by 89.80% and 30.93%, respectively, on the Mackey-Glass time series, and by 14.34% and 0.10%, respectively, on the Call time series.
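A hedged sketch of the prune-then-retrain idea described above. Plain Pearson correlation stands in for the detrended cross-correlation coefficient, and a mean-absolute-correlation score stands in for the detrended multiple cross-correlation; the function name, pruning ratio and ridge-regularized readout are illustrative assumptions rather than the paper's exact formulation:

```python
import numpy as np

def prune_reservoir(states, targets, prune_ratio=0.1, reg=1e-6):
    """Score each reservoir neuron by how strongly it correlates with the other
    neurons, drop the most redundant ones, and refit the readout by
    (regularized) least squares.
    `states`: T x N matrix of reservoir activations, `targets`: T x 1 outputs."""
    n = states.shape[1]
    corr = np.corrcoef(states, rowvar=False)          # N x N cross-correlation matrix
    np.fill_diagonal(corr, 0.0)
    # "multiple correlation" proxy: mean absolute correlation with all other neurons
    score = np.abs(corr).mean(axis=1)
    keep = np.argsort(score)[: n - int(prune_ratio * n)]   # drop the most correlated neurons
    s = states[:, keep]
    # ridge-regularized least-squares readout on the pruned reservoir
    w_out = np.linalg.solve(s.T @ s + reg * np.eye(s.shape[1]), s.T @ targets)
    return keep, w_out
```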
LI Zhixin1, SHANG Fanqi1, HUAN Zhan1, CHEN Ying1, LIANG Jiuzhen2
Abstract: The existing methods for human activity recognition using wearable sensors could not effectively capture the structural information between the sampling points of time series and might ignore the potential connections between samples. To address this issue, a graph convolutional neural network model with hybrid time-frequency and structural characteristics was proposed for human activity recognition. Firstly, the time-frequency characteristics of the original signal were obtained by wavelet-packet transform, and the spatio-temporal graph was further constructed to extract the structural characteristics and identify the dynamic characteristics between the sampling points. A distance constraint was added to the structural characteristics to weaken the influence of long-distance neighbors on the central node of the spatio-temporal graph. Considering that the extraction of structural characteristics was greatly affected by the topological relationship of the spatio-temporal graph, the time-frequency characteristics of the samples were selected to construct the input topology of the graph convolutional neural network, and the time-frequency and structural characteristics were combined as the input features of the network. Finally, the input features propagated along the input topology to obtain the final classification result. To evaluate the performance of the proposed model, experiments were conducted on the WHARF and DataEgo datasets. Results in terms of F1 scores indicated that the proposed model outperformed existing convolutional neural network-based methods, achieving a maximum improvement of 19.58 percentage points on the WHARF dataset and 26.44 percentage points on the DataEgo dataset, demonstrating that the proposed model could effectively enhance the capability of activity recognition by exploiting dynamic characteristics.
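A minimal sketch of what an input topology with a distance constraint and one normalized graph-convolution propagation step could look like, assuming sample-level time-frequency feature vectors; the k-nearest-neighbor construction and the distance threshold are illustrative assumptions, not the paper's exact graph definition:

```python
import numpy as np

def build_graph_and_propagate(features, k=5, max_dist=2.0):
    """Build a sample graph from time-frequency features, keep only neighbors
    within a distance threshold (the distance constraint), and apply one
    normalized graph-convolution step H' = D^{-1/2} (A + I) D^{-1/2} H.
    `features`: n_samples x n_features."""
    n = features.shape[0]
    dist = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    adj = np.zeros((n, n))
    for i in range(n):
        neighbors = np.argsort(dist[i])[1:k + 1]                # k nearest neighbors (skip self)
        neighbors = neighbors[dist[i, neighbors] <= max_dist]   # distance constraint
        adj[i, neighbors] = 1.0
    adj = np.maximum(adj, adj.T)                                 # symmetrize
    a_hat = adj + np.eye(n)                                      # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return norm @ features                                       # one propagation step over the features
```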
LI Weijun, GU Jianlai, ZHANG Xinyong, GAO Yuxiao, LIU Jintong
Abstract: In few-show knowledge graphs, the representation of relationships between entity pairs was diverse and complex. However, existing few-show knowledge graph completion methods commonly suffered from insufficient re lational learning capabilities and the neglect of contextual semantics associated with entities. To address these chal lenges, a novel approach called the few-shot relation learning completion model (FRLC) was proposed. Firstly, during the process of aggregating high-order neighborhood entity information, a gating mechanism was introduced to mitigate the adverse effects of noise on neighbors while enriching the representation of central entities. Secondly, in the phase of relation representation learning, the correlations among entity pairs in a reference set were leveraged to obtain more accurate relationship representations. Lastly, within the Transformer-based learning framework, an LSTM structure was incorporated to further capture contextual semantic information of entities and relationships, which was used for predicting new factual knowledge. To validate the effectiveness of FRLC, comparative experi ments were conducted on the publicly available NELL-One and Wiki-One datasets, in which FRLC was compared with six few-shot knowledge graph completion models and five traditional models for 5-shot link prediction. The ex perimental results showed improvements in FRLC across four metrics: MRR, Hits@10, Hits@5, and Hits@1, demonstrating the model′s effectiveness.
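For reference, the four reported metrics follow the standard link-prediction definitions: MRR is the mean reciprocal rank of the true entity among candidates, and Hits@k is the fraction of queries whose true entity is ranked within the top k. A small sketch of these standard formulas (not code from the paper):

```python
def ranking_metrics(ranks, ks=(1, 5, 10)):
    """MRR and Hits@k from the rank of each true entity among candidates
    (rank 1 = best), the metrics reported for 5-shot link prediction."""
    mrr = sum(1.0 / r for r in ranks) / len(ranks)
    hits = {k: sum(r <= k for r in ranks) / len(ranks) for k in ks}
    return mrr, hits

mrr, hits = ranking_metrics([1, 3, 12, 2, 7])
print(round(mrr, 3), hits)   # 0.412 {1: 0.2, 5: 0.6, 10: 0.8}
```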
SUN Jian1,2, LIU Pin1, LI Hao1, CHEN Pan1
Abstract: A two-stage KSAGA algorithm combining Spark-based simulated annealing and genetic algorithms was proposed for the single-depot multiple traveling salesman problem with minimum total path length. In the first stage, the multiple traveling salesman problem was split into multiple single traveling salesman problems by k-means clustering, and the traversal order of the cities in each group was optimized using the simulated annealing algorithm, as sketched below. In the second stage, the grouping of cities was optimized by the genetic algorithm, and the crossover and mutation operators as well as a hybrid local optimization operator were designed based on the chromosome grouping encoding method to expand the search space and improve the convergence speed of the algorithm. As the number of cities and the computational scale increased, the characteristics of the genetic algorithm were exploited to parallelize the algorithm and speed up its operation. Finally, the solution quality of KSAGA was compared with that of ACO, GA, SPKSA, ALNS and NSGA-Ⅱ, and its convergence speed with that of GA and NSGA-Ⅱ, in simulation experiments on selected TSPLIB datasets. The results showed that KSAGA could converge quickly in solving the single-depot multiple traveling salesman problem, and its solution quality was greatly improved compared with the other algorithms. Meanwhile, the advantage of KSAGA became more obvious as the number of cities and the number of salesmen increased.
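A compact sketch of the two building blocks of the first stage, k-means splitting of the cities and simulated annealing over a single tour, in plain numpy. The cooling schedule, the 2-opt style move and the parameter values are illustrative assumptions, and the Spark-parallel genetic second stage is omitted:

```python
import numpy as np

def kmeans_split(cities, n_salesmen, rng=np.random.default_rng(0), iters=50):
    """Stage 1: split one multi-TSP into single TSPs by k-means on city coordinates."""
    centers = cities[rng.choice(len(cities), n_salesmen, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(cities[:, None] - centers[None], axis=-1), axis=1)
        centers = np.array([cities[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(n_salesmen)])
    return labels

def tour_length(cities, order):
    pts = cities[order]
    return np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1).sum()

def simulated_annealing(cities, t0=100.0, cooling=0.995, steps=5000, rng=np.random.default_rng(0)):
    """Optimize the visiting order inside one cluster by simulated annealing
    with a segment reversal (2-opt style) as the neighborhood move."""
    order = rng.permutation(len(cities))
    best, best_len, t = order.copy(), tour_length(cities, order), t0
    for _ in range(steps):
        i, j = sorted(rng.choice(len(cities), 2, replace=False))
        cand = order.copy()
        cand[i:j + 1] = cand[i:j + 1][::-1]                 # reverse a segment
        delta = tour_length(cities, cand) - tour_length(cities, order)
        if delta < 0 or rng.random() < np.exp(-delta / t):  # accept worse moves with a probability
            order = cand
            if tour_length(cities, order) < best_len:
                best, best_len = order.copy(), tour_length(cities, order)
        t *= cooling
    return best, best_len
```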
GUAN Changshan, BING Wanlong, LIU Yahui, GU Pengfei, MA Hongliang
Abstract: At present, most rumor detection work was mainly based on the original text content, propagation structure and propagation text content of Twitter or Weibo. However, these methods ignored the effective integration of the original text features with other features, as well as the role of propagating users in the process of rumor propagation. Aiming at the shortcomings of existing work, a multi-feature fusion model GCNs-BERT based on graph convolutional networks was proposed, which combined the features of the original text, the propagating users and the propagation structure. Firstly, a propagation graph was constructed based on the propagation structure and the propagating users, and combinations of multiple user attributes were used as the propagation node features. Then, multiple graph convolutional networks were used to learn representations of the propagation graph under different user attribute combinations, and a BERT model was used to learn the feature representation of the original text content, which was finally fused with the features learned by the graph convolutional networks to detect rumors. A large number of experiments on a publicly available Weibo dataset showed that the GCNs-BERT model was significantly better than the baseline methods. In addition, a generalization ability experiment of the GCNs-BERT model was conducted on a novel coronavirus epidemic dataset whose training sample size was only 1/5 of that of the public Weibo dataset; the accuracy rate was still 92.5%, which proved that the model had good generalization ability.
MAO Wentao1,2, GAO Xiang1, LUO Tiejun3, ZHANG Yanna1,2, SONG Zhaoyu1
Abstract: In actual business, parts demand occurred randomly and fluctuated, so the demand sequences for spare parts showed an obvious intermittent distribution. At the same time, due to factors such as manual reporting errors or special events, the actual demand for spare parts was prone to abnormal changes, making it difficult for traditional time series prediction methods to capture the evolution of the demand for accessories and resulting in high uncertainty and insufficient reliability of the prediction results. To solve this problem, an adaptive interval prediction method for intermittent series based on tensor representation was proposed. Firstly, hierarchical clustering was used to screen similar sequences based on the average demand interval and the squared coefficient of variation of the accessory sequences, forming sequence clusters to increase predictability. Secondly, the original demand sequence was reconstructed by tensor decomposition, and the outliers in the sequence were corrected while retaining the core information of the original sequence to the maximum extent. Finally, an adaptive prediction interval algorithm was constructed, which could obtain the predicted value and prediction interval of the parts demand through a dynamic update mechanism to ensure the reliability of the results. The proposed method was validated on the after-sales dataset of a large vehicle manufacturing enterprise. Compared with existing time series prediction methods, the proposed method could effectively extract the evolutionary trends of various types of intermittent series and improve the prediction accuracy on intermittent time series with small sizes as well. Experiments showed that the average root mean square scaled error (RMSSE) of this method was 0.32 lower than that of mainstream deep learning methods for demand prediction. More importantly, when the prediction results were distorted, the proposed method could provide a reliable and flexible prediction interval, which helped to provide a feasible solution for intelligent parts management.
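A short sketch of the standard statistics mentioned above: the average demand interval (ADI) and the squared coefficient of variation of demand sizes used to group intermittent series, and the RMSSE metric. These are the usual textbook definitions, shown as plain Python for orientation rather than the paper's implementation:

```python
import numpy as np

def intermittency_stats(demand):
    """Average demand interval (ADI) and squared coefficient of variation (CV^2)
    of the non-zero demand sizes; `demand` is one spare-part demand series."""
    nonzero = np.flatnonzero(demand)
    sizes = demand[nonzero]
    adi = len(demand) / len(nonzero)            # mean number of periods per non-zero demand
    cv2 = (sizes.std() / sizes.mean()) ** 2
    return adi, cv2

def rmsse(y_true, y_pred, y_train):
    """Root mean square scaled error: RMSE of the forecast scaled by the RMSE
    of the naive one-step forecast on the training series."""
    scale = np.mean(np.diff(y_train) ** 2)
    return np.sqrt(np.mean((y_true - y_pred) ** 2) / scale)
```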
DOU Ming1, YAN Jiajia1, WANG Cai2, GUAN Jian3, LI Guiqiu1, HOU Jinjin4
Abstract: In view of the lack of studies on the impact of the spatial layout of low impact development (LID) facilities on the control of water quantity and quality, Zhongyuan District of Zhengzhou City was taken as the target area, and an urban water quantity and quality model with LID facilities was built based on the SWMM model principle. Based on the empirical value of the urban comprehensive runoff coefficient and the actual situation of the area, Zhongyuan District was divided into high-, medium- and low-density urban building areas, different proportions of LID (schemes S1-S5) were deployed in these areas, and the control effect of different spatial patterns of LID facilities on water quantity and quality under different rainfall levels was calculated. The results showed that with the increase of rainfall level, the total runoff reduction rate, peak flow reduction rate and TSS load reduction rate of the LID schemes continued to decrease. When the rainfall levels were rainstorm, heavy rain and moderate rain, and the LID scheme (S5) with proportions of 35%, 35% and 30% was deployed in the high-, medium- and low-density urban areas, the total runoff reduction rate exceeded 80%, the peak flow reduction rate exceeded 70%, and the TSS load reduction rate exceeded 50%. According to the comprehensive index evaluation of water quantity and water quality in each event, this scheme had the best performance.
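The three reduction rates compared across LID layouts are simple relative decreases of a scenario against the no-LID baseline; a small illustration with hypothetical numbers (not results from the study):

```python
def reduction_rates(baseline, with_lid):
    """Reduction rates used to compare LID layouts: the relative decrease (%)
    of total runoff, peak flow and TSS load of an LID scenario against the
    no-LID baseline. Both arguments are dicts with the same keys."""
    return {key: (baseline[key] - with_lid[key]) / baseline[key] * 100
            for key in baseline}

# hypothetical numbers for illustration only
rates = reduction_rates(
    {"total_runoff_mm": 52.0, "peak_flow_m3s": 8.4, "tss_load_kg": 310.0},
    {"total_runoff_mm": 9.5, "peak_flow_m3s": 2.3, "tss_load_kg": 140.0})
print({k: round(v, 1) for k, v in rates.items()})
# {'total_runoff_mm': 81.7, 'peak_flow_m3s': 72.6, 'tss_load_kg': 54.8}
```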
HUANG Li1,2, WANG Yunqing3, WANG Wei2,3, SONG Yue2,3, SHI Yuxin3
Abstract: In order to prevent and reduce the disaster losses caused by compound disasters to cities, it was of great theoretical and practical significance to study the exposure degree of chain compound disaster systems. Through research and analysis of the exposure of chain compound disaster systems, and by considering existing research and defining the connotation of the exposure of a compound disaster system, an evaluation model and calculation method for the exposure of chain compound disaster systems were derived and established. Taking the rainstorm-landslide disaster chain in the Guangdong-Hong Kong-Macao Greater Bay Area as an example for empirical analysis, an evaluation index system of regional disaster chain exposure was screened and constructed from the aspects of population, economy and society. The order relation analysis and TOPSIS methods were used to calculate the single-disaster exposure indices of rainstorm and landslide in 52 districts and counties in the bay area. Then, the theoretical model of the exposure of chain compound disaster systems and ArcGIS image processing technology were used to obtain the zoning map of rainstorm-landslide disaster chain exposure in the Guangdong-Hong Kong-Macao Greater Bay Area, and corresponding disaster prevention and mitigation measures were put forward according to the exposure attributes of each regional disaster chain. The results showed that the exposure indices of economically developed areas were significantly higher than those of other regions, and the low-exposure areas were mainly the relatively underdeveloped areas. Among the indicators, population density and GDP per land area occupied the dominant position and had the highest impact on regional exposure. The exposure evaluation of regional compound disaster systems could provide a reasonable scientific basis and decision-making support for the government and relevant departments, helping the government, urban planners and the public better understand and respond to disaster risks, thereby reducing the losses and impacts of disasters.
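A hedged sketch of the TOPSIS step behind the single-disaster exposure indices, assuming all indicators are benefit-type and that the weights come from the order relation analysis; the indicator matrix and weights below are hypothetical:

```python
import numpy as np

def topsis(matrix, weights):
    """TOPSIS closeness scores: rows are districts, columns are (benefit-type)
    indicators, `weights` are the indicator weights. Returns the closeness of
    each district to the ideal solution (larger = higher exposure)."""
    m = matrix / np.linalg.norm(matrix, axis=0)        # vector-normalize each indicator
    v = m * weights                                    # weighted normalized matrix
    ideal, anti = v.max(axis=0), v.min(axis=0)         # ideal and anti-ideal solutions
    d_plus = np.linalg.norm(v - ideal, axis=1)
    d_minus = np.linalg.norm(v - anti, axis=1)
    return d_minus / (d_plus + d_minus)                # closeness in [0, 1]

scores = topsis(np.array([[1200.0, 9.5, 0.30],
                          [450.0, 3.1, 0.12],
                          [800.0, 6.0, 0.22]]),
                np.array([0.5, 0.3, 0.2]))             # hypothetical indicator values and weights
print(scores.argsort()[::-1])                          # district ranking by exposure
```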
WAN Junfeng1,2, SONG Yifan1,2, GUO Lin3, MA Yifei1,2, LI Zhe1,2, DING Junxiang1, GUO Xiaoying1,2
Abstract: In order to analyze the different types of influence of human disturbance on plant communities in the Yellow River wetland and the relevant mechanisms, three wetlands in the Zhengzhou section, respectively affected by agricultural planting, reservoir construction and building construction, were selected after a large number of field investigations. After measuring the plant communities and related environmental factors, redundancy analysis was carried out based on a human pressure index evaluation system. The results showed that the wetland plant communities as a whole showed an obvious degradation trend, and the function of productivity maintenance decreased. For the sample plots with different types of human disturbance, the severity of disturbance increased in turn from reservoir construction to agricultural planting and building construction. With the intensification of human disturbance, the similarity coefficients of plant communities in different wetlands declined successively, and the species richness indices of the wetland in the Bird Nature Reserve and the Taipingzhuang North wetland decreased by 15.98% and 37.05%, respectively, compared with the Taohuayu wetland, indicating that the overall structure of the wetland plant communities gradually became simpler. Among the many factors causing wetland degradation, the change of ammonium nitrogen content in soil had a significant negative correlation with the species diversity index and species evenness index, and was one of the main reasons for wetland plant degradation.
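For orientation, common definitions of the diversity and similarity measures named above (Shannon diversity, Pielou evenness, Sørensen similarity); the abstract does not specify which formulas were used, so these are standard stand-ins rather than the study's exact indices:

```python
import numpy as np

def diversity_indices(abundances):
    """Shannon diversity H' and Pielou evenness J for one plot; `abundances`
    are individual counts per species (zeros are ignored)."""
    counts = np.asarray([a for a in abundances if a > 0], dtype=float)
    p = counts / counts.sum()
    h = -(p * np.log(p)).sum()
    return h, h / np.log(len(counts))

def sorensen_similarity(species_a, species_b):
    """Sørensen similarity between the species sets of two wetlands:
    2c / (a + b), with c the number of shared species."""
    a, b = set(species_a), set(species_b)
    return 2 * len(a & b) / (len(a) + len(b))
```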
KUANG Yida1, YAO Zhimin2, BIAN Huiting1, ZHAO Jun1, LI Xinzhe1, ZHANG Lijuan1
Abstract: To study the influence of fire extinguishing agents on the mechanical performance of concrete after exposure to high temperatures, specimens of different curing ages were heated to different temperatures at heating rates of 5 ℃/min and 10 ℃/min, and then maintained at constant temperatures of 200 ℃, 400 ℃, 600 ℃ and 800 ℃ for one hour. The mass loss rates of specimens of different curing ages and heating temperatures were tested. The specimens were then moved into a tempered glass frame, and water, Halon 1211, CO2 and HFC-227ea were used to treat them. The results showed that the mass loss rates of the specimens with 7 d and 14 d curing ages were significantly higher than that of the 28 d curing age at 400 ℃. Fire extinguishing agents did not affect the compressive strength of the specimens at room temperature. Water, Halon 1211 and HFC-227ea could reduce the compressive strength of the specimens after high temperature treatment at 400 ℃ and 600 ℃. However, water cooling treatment at 600 ℃ could increase the compressive strength of the specimens with 7 d and 14 d curing ages by 9.14% and 9.18%, respectively. Curing age did not affect the experimental results of Halon 1211 and HFC-227ea, and the compressive strength of the specimens was relatively low at 800 ℃, which could not reflect the influence of different treatment methods on the compressive strength. Water, Halon 1211 and HFC-227ea fire extinguishing agents could reduce the splitting tensile strength of concrete at 400 ℃. CO2 had no influence on the compressive strength and splitting tensile strength of the concrete.
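The mass loss rate reported above is typically defined as the relative mass decrease of a specimen after heating; a one-line illustration with hypothetical masses:

```python
def mass_loss_rate(mass_before, mass_after):
    """Mass loss rate after heating, as a percentage of the initial mass."""
    return (mass_before - mass_after) / mass_before * 100

print(round(mass_loss_rate(2.40, 2.28), 2))  # hypothetical masses in kg -> 5.0 (%)
```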
LIAO Xiaohui1, XIE Zichen1, XIN Zhongliang2, CHEN Yi1, YE Liangjin1
Abstract: In order to improve the accuracy of real-time detection of external defects of electrical equipment in substations and make the detection model more lightweight, a lightweight YOLOv5-based external defect detection method for electrical equipment was proposed. Firstly, an external defect image dataset of electrical equipment was constructed and processed with data augmentation. Secondly, three optimization strategies were used to improve the original YOLOv5: the EfficientViT network was introduced into the backbone network of the algorithm to reduce the number of model parameters; the SimAM parameter-free attention mechanism was added to the Neck part of the algorithm to improve the recognition accuracy against the complex background of the substation; and the Soft-NMS module was used to improve the screening of detection boxes and avoid missed defect detections. Finally, ablation tests verified that the mAP value of the lightweight external defect detection model was stable at 86.4%, 1.2 percentage points higher than that of the original model; the number of model parameters was reduced by 20%, the computation amount was reduced by 38%, and the model size was 11 MB, 19.7% lower than that of the original model. The improved model could meet the requirements of real-time detection of external defects of equipment.
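A sketch of the Gaussian Soft-NMS score-decay rule referred to above, written in plain numpy; the sigma and score-threshold values are illustrative defaults, not the paper's settings:

```python
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: instead of discarding overlapping detection boxes,
    decay their scores by exp(-IoU^2 / sigma), which reduces missed detections
    for overlapping defects. `boxes`: N x 4 array of (x1, y1, x2, y2)."""
    boxes, scores = boxes.astype(float).copy(), scores.astype(float).copy()
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    keep, idx = [], list(range(len(boxes)))
    while idx:
        best = max(idx, key=lambda i: scores[i])
        keep.append(best)
        idx.remove(best)
        for i in idx:
            ix1, iy1 = np.maximum(boxes[best][:2], boxes[i][:2])
            ix2, iy2 = np.minimum(boxes[best][2:], boxes[i][2:])
            inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
            iou = inter / (area(boxes[best]) + area(boxes[i]) - inter)
            scores[i] *= np.exp(-iou ** 2 / sigma)        # decay instead of hard suppression
        idx = [i for i in idx if scores[i] > score_thresh]
    return keep
```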
JIANG Xin, DUAN Shijie, JIN Yang, SHANG Jingyi
Abstract: Because the physical constraints of traded goods and the trading time scales in the electricity market and the carbon market are different, it was difficult for the two markets to integrate effectively. To address this problem, a rolling clearing model of the electricity-carbon joint market based on variable carbon emission intensity and a centralized carbon trading mechanism was proposed. In the proposed model, the interaction between the electricity market and the carbon market was enhanced by considering the carbon intensity and load rate interval of each unit. Meanwhile, the rolling clearing of the joint market based on the centralized carbon trading mechanism reduced the trading time scale of the carbon market to synchronize it with the electricity market, making it easier to discover the value of carbon emission rights in different periods. Considering the further reduction of China′s carbon emission baseline value and the increase of the new energy penetration rate, the impact on each unit was analyzed through simulation examples. It was verified that in the proposed model, with the reduction of the carbon emission baseline value, the average carbon cost of high-carbon emission units increased by 46% and the average carbon income of low-carbon emission units increased by 27%, while the increase in the penetration rate of new energy units reduced the average carbon cost of large-capacity thermal power units by 5.53%. Therefore, the proposed model could effectively promote the clean transformation of the system. Compared with the traditional stepped carbon pricing mechanism, the average carbon cost of high-carbon emission units in the proposed model was reduced by 6.13%, which could indirectly improve the enthusiasm of high-carbon emission units to participate in the carbon market.
LIANG Jie, SUN Zhenwei
Abstract: Aiming at the problems of low efficiency and inaccurate assessment when the roundness and flatness of air conditioning compressor crankshafts were measured manually with a micrometer, a non-contact roundness and flatness measurement system for air conditioning compressor crankshafts based on a line laser displacement sensor was designed. Firstly, a calibration method for the systematic error was proposed by measuring a standard measuring rod. Then, based on the line laser sampling data, an extraction algorithm for the roundness measurement points of the crankshaft cross-section was developed, and the systematic error was used to compensate the measurement points and realize the roundness measurement of the crankshaft. After that, the contour line was separated according to the slope between the measurement points of the crankshaft plane, streamlined point cloud data were obtained by a homogeneous down-sampling method, and the least squares method was used to realize the flatness measurement of the crankshaft. Finally, crankshaft roundness and flatness measurement experiments were carried out based on the above methods. The experimental results showed that the roundness measurement accuracy of the system was less than 4 μm, the flatness measurement accuracy was less than 2 μm, the repeatability error of roundness measurement was less than 0.8 μm, and the repeatability error of flatness measurement was less than 0.3 μm, which could satisfy the requirements of fast and accurate measurement of the roundness and flatness of crankshafts.
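A hedged sketch of the least-squares reference fits that such roundness and flatness evaluations rely on: a Kasa circle fit for the cross-section and a plane fit for the crankshaft plane, both evaluated as peak-to-valley deviations. The exact point-extraction, compensation and fitting details of the designed system may differ:

```python
import numpy as np

def roundness_error(points_2d):
    """Roundness from cross-section points: fit a least-squares (Kasa) circle
    x^2 + y^2 = 2a*x + 2b*y + c, then take the peak-to-valley radial deviation."""
    x, y = points_2d[:, 0], points_2d[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    a, b, c = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)[0]
    radii = np.hypot(x - a, y - b)
    return radii.max() - radii.min()

def flatness_error(points_3d):
    """Flatness from a plane point cloud: fit a least-squares plane
    z = p*x + q*y + d and take the peak-to-valley point-to-plane deviation."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    A = np.column_stack([x, y, np.ones(len(x))])
    p, q, d = np.linalg.lstsq(A, z, rcond=None)[0]
    dist = (z - (p * x + q * y + d)) / np.sqrt(p ** 2 + q ** 2 + 1)   # signed distance to the plane
    return dist.max() - dist.min()
```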
ZHOU Juncen, GAN Fangji, WANG Siyu, ZHONG Tao, YANG Suixian
Abstract: Ultrasonic guided wave technology was often used in wall thickness monitoring systems for high temperature pipelines because it could isolate the ultrasonic transducer from the high temperature environment. However, the failure of conventional coupling technology and the low signal-to-noise ratio of the signal at high temperature affected the reliability and measurement accuracy of the measurement system. A waveguide rod was designed to conduct the ultrasonic signals. Based on the acoustic matching principle, the dry coupling technology was optimized to solve the failure problem of conventional coupling technology in high temperature environments. An adaptive excitation method and a power multiplication algorithm for the measurement data were proposed to improve the adaptability and signal-to-noise ratio of the measurement system at high temperature. Intelligent segmentation of the sound speed was used to correct the speed of sound, which made the measurement accuracy of the high-temperature ultrasonic guided wave thickness measurement system reach ±0.03 mm. The experimental results showed that the temperature of the ultrasonic transducer end did not exceed 56 ℃ under a long-term 500 ℃ heat source environment, which verified the reliability of the measurement system in high temperature environments.
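A minimal illustration of the standard pulse-echo thickness relation d = v·t/2 with a temperature-segmented sound-speed correction of the kind described above; the calibration table and numeric values are hypothetical, not the system's calibration:

```python
import numpy as np

def wall_thickness(echo_interval_s, temperature_c, speed_table):
    """Pulse-echo thickness: d = v * t / 2, where t is the interval between
    successive back-wall echoes and v is the sound speed looked up from a
    piecewise (temperature-segmented) calibration table."""
    temps, speeds = zip(*sorted(speed_table.items()))
    v = np.interp(temperature_c, temps, speeds)       # piecewise correction of the sound speed
    return v * echo_interval_s / 2

# hypothetical calibration: longitudinal sound speed in steel vs. temperature (m/s)
table = {20: 5900.0, 200: 5780.0, 400: 5640.0, 500: 5560.0}
print(round(wall_thickness(3.6e-6, 450, table) * 1000, 2), "mm")   # 10.08 mm
```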