
Current Issue

    25 June 2022, Volume 43 Issue 6
    Topics: Key Technologies of 6G Oriented Intellicise Network
    6G-ADM: knowledge-based 6G network management and control architecture
    Jianxin LIAO, Xiaoyuan FU, Qi QI, Jingyu WANG, Haifeng SUN
    2022, 43(6):  3-15.  doi:10.11959/j.issn.1000-436x.2022127

    Objectives: With consensus reached on the 6G vision of three-dimensional coverage, extreme performance, virtual-real integration and ubiquitous intelligence, problems such as personalized service customization, the proliferation of network element types and the superposition of changeable scenarios will pose more severe challenges to the network management and control system. In 6G networks, network elements, protocols, applications and architectures will be highly heterogeneous and complex. Native intelligence and lite networks provide feasible ideas for 6G network architecture and function design. A 6G network with native intelligence, whose network elements have different levels of intelligence, can independently generate the strategies that traditional networks realize through manually configured policies, providing the basic conditions for more efficient network management and control. Following the network design philosophy of "Da Dao Zhi Jian" (great truths are simple) and starting from efficient management and control, the network architecture and complex protocols can be simplified, manual operation and maintenance can be reduced, and full-scenario on-demand network services can be realized through the rapid and efficient organization and allocation of multi-level resources across the whole network. At present, 5G management, control, and operation and maintenance systems are closed and independent, each focusing on specific scenarios; their security, intelligence and collaboration lack global planning and unified design, and they can hardly meet future immersive, personalized full-scenario services and performance requirements. It is therefore urgent to build a network management and control system, and to make breakthroughs in its key technologies, for 6G on-demand services.

    Methods: To meet the design requirements of the intellicise network, this paper introduces an intelligent knowledge space into the 6G management and control system. Through intelligent computing, the knowledge space collects and extracts the management and control experience and knowledge generated by super-intelligent network nodes to form a network-control knowledge space, and acts as a super brain for perceiving network needs, sharing network resources and generating global control strategies. As a result, only one knowledge-space layer sits above the next-generation network infrastructure, so the 6G management and control architecture, as the decision-making layer, is both natively intelligent and minimally structured. On this basis, this paper proposes a knowledge-space-based 6G network management and control system, called 6G-ADM (6G admin) for short.

    Results: To realize personalized customization of 6G services and improve their performance, the integration trend of network management, control and operation was analyzed. The management and control system will follow the trend of the "intellicise network" and evolve into an "intellicise integrated management and control system": it is natively intelligent, its concepts and strategy-generation process tend toward simplification, yet resources are allocated at a finer granularity. The 6G network will support personalized and immersive full-scenario services and provide users with extreme performance such as ultra-low delay and high reliability. 6G resources are abundant but still limited, and the contradiction between demand growth and resource consumption poses an important challenge to the fine-grained matching of resources. 6G-ADM is proposed to enrich network management and control knowledge, form a closed loop supporting on-demand services, and effectively address this contradiction. This paper argues that sustainable on-demand service can be realized through natively intelligent network knowledge, and establishes a knowledge space to coordinate artificial intelligence with traditional manually defined rules.

    Conclusion: As two key technologies realizing the closed-loop control function of 6G-ADM, this paper proposes a knowledge-space-based network slicing method and a knowledge-space-based anomaly detection method. 6G-ADM converts closed-loop service policies, including resource allocation and anomaly detection, into the execution behavior of network elements across the whole network. Simulation results show the effectiveness of the proposed methods.

    New design paradigm for federated edge learning towards 6G: task-oriented resource management strategies
    Zhiqin WANG, Jiamo JIANG, Peixi LIU, Xiaowen CAO, Yang LI, Kaifeng HAN, Ying DU, Guangxu ZHU
    2022, 43(6):  16-27.  doi:10.11959/j.issn.1000-436x.2022128

    Objectives: To make full use of the abundant data distributed at the network edge for artificial intelligence model training, edge intelligence technology, represented by federated edge learning, has emerged as the times require. This paper studies how to design wireless resource management strategies whose goal is to optimize learning performance (such as total model training time and learning convergence) rather than conventional communication metrics alone.

    Methods: Based on the federated edge learning network architecture, this paper analyzes resource allocation and user scheduling schemes. 1. For the resource allocation problem, the trade-off between the number of communication rounds and the per-round delay is analyzed under the goal of minimizing total training time. To meet the time constraint of each training round, more bandwidth should be allocated to devices with low computing power, compensating for long computation time with short communication time, and vice versa. Bandwidth allocation among devices should therefore consider both channel conditions and computing resources, which differs fundamentally from traditional bandwidth allocation that considers channel conditions only. To this end, the total-training-time minimization problem is modeled, the quantization level and bandwidth allocation are jointly optimized, and an alternating optimization algorithm is designed to solve the problem. 2. For the user scheduling problem, the communication-time minimization problem is modeled by linking data importance with the number of communication rounds, and channel quality with single-round communication delay, and a theoretical model is used to unify the two. Solving this problem shows that the optimal scheduling strategy pays more attention to data importance in the early stage of training and to channel quality in the later stage. The proposed single-device scheduling algorithm is also extended to multi-device scheduling scenarios.
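The bandwidth-allocation intuition above (slow computers get more spectrum so every device meets the per-round deadline) can be sketched as follows; the function name, the linear rate model, and all parameter values are illustrative assumptions, not the paper's algorithm.

```python
def allocate_bandwidth(compute_times, data_bits, spectral_eff, total_bw, deadline):
    """Split total_bw so each device can finish compute + upload by the deadline.

    A device's required rate is data_bits / (deadline - compute_time), so devices
    with longer computation get proportionally more bandwidth (toy linear model).
    """
    demands = [data_bits / (se * (deadline - ct))
               for ct, se in zip(compute_times, spectral_eff)]
    scale = total_bw / sum(demands)  # normalize demands to the bandwidth budget
    return [d * scale for d in demands]

# Device 1 computes slower (0.8 s vs 0.2 s) and therefore receives more bandwidth.
bw = allocate_bandwidth(compute_times=[0.2, 0.8], data_bits=1e6,
                        spectral_eff=[1.0, 1.0], total_bw=10e6, deadline=1.0)
```

This is the opposite of channel-only allocation: the split depends on computing speed even when both channels are identical.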

    Results: 1. For the resource allocation problem, with optimal bandwidth allocation, the relationship between total training time and quantization level was obtained by simulation, running the same training process at least 5 times at each quantization level. The total training time is T = N_ε·T_d, where the number of rounds N_ε is a decreasing function of the quantization level q and the per-round delay T_d is an increasing function of q. The optimal quantization level obtained through theoretical optimization is consistent with the simulation results, verifying the effectiveness of the proposed algorithm. According to the relationship between the gap to the optimal loss value and the training time, the optimal quantization level and the optimal bandwidth allocation strategy are obtained by solving the training-time minimization problem. 2. For the user scheduling problem, the proposed user scheduling (TLM) scheme is compared in simulation with three other common scheduling schemes, and the average accuracy is reported at communication times of 6 000 s and 14 000 s, where the average accuracy is measured by the IoU (intersection over union) between the predicted and true values. The CA scheme yields the worst accuracy on car 1, which has the largest channel attenuation, while the IA scheme exhibits the lowest accuracy on car 4, whose data is least important. The ICA scheme aims to strike a balance between CA and IA, but due to its heuristic nature its performance is lower than that of the TLM scheme.
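The trade-off T = N_ε·T_d can be illustrated with a toy model: the functional forms (N_ε ∝ 1/√q, T_d linear in q) and all constants below are assumptions chosen only to reproduce the qualitative shape, so an interior optimal quantization level exists.

```python
import math

def total_training_time(q, a=200.0, t0=1.0, b=0.05):
    """Toy instance of T = N_eps * T_d: rounds shrink with quantization level q
    while per-round delay grows, so T is minimized at an intermediate q."""
    n_rounds = a / math.sqrt(q)   # N_eps: decreasing in q (assumed form)
    per_round = t0 + b * q        # T_d: increasing in q (assumed form)
    return n_rounds * per_round

# Scan the quantization levels; with these constants the optimum is q = t0/b = 20.
best_q = min(range(1, 65), key=total_training_time)
```

Coarser quantization (small q) wastes time on extra rounds; finer quantization (large q) wastes time per round, which is exactly the tension the paper's alternating optimization resolves.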

    Conclusions: 1. Training with the optimal quantization level and optimal bandwidth allocation reaches the predetermined loss threshold in the shortest time and achieves the highest test accuracy. Moreover, training with a non-optimal quantization level but optimal bandwidth allocation outperforms training with the optimal quantization level but uniform bandwidth allocation, which further verifies the necessity of resource allocation. 2. The TLM scheme achieves slightly better performance early in training and significantly outperforms all other schemes after full training, owing to the inherent forward-looking nature of the proposed TLM protocol, in contrast to the myopic nature of the existing CA, IA and ICA protocols.
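The scheduling finding (data importance matters early, channel quality late) can be sketched with a hypothetical scheduler; the linear annealing of the importance weight is an assumption, not the paper's TLM rule.

```python
def schedule_device(round_idx, total_rounds, importances, channel_gains):
    """Pick one device per round by a score that weights data importance
    heavily early in training and channel quality heavily late (toy rule)."""
    w = 1.0 - round_idx / total_rounds   # importance weight decays over rounds
    scores = [w * imp + (1.0 - w) * g
              for imp, g in zip(importances, channel_gains)]
    return max(range(len(scores)), key=scores.__getitem__)

# Device 0 has important data but a weak channel; device 1 the reverse.
early = schedule_device(0, 100, importances=[0.9, 0.1], channel_gains=[0.1, 0.9])
late = schedule_device(99, 100, importances=[0.9, 0.1], channel_gains=[0.1, 0.9])
```

Early rounds select the data-rich device, late rounds the well-connected one, mirroring the prospective behavior that lets TLM beat the myopic CA/IA/ICA baselines.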

    6G-oriented cross-modal signal reconstruction technology
    Ang LI, Jianxin CHEN, Xin WEI, Liang ZHOU
    2022, 43(6):  28-40.  doi:10.11959/j.issn.1000-436x.2022093

    Objectives: It is well known that multimodal services containing audio, video and haptics, such as mixed reality, digital twin and the metaverse, are bound to become killer applications in the 6G era. However, the large amount of multimodal data generated by such services is highly likely to burden the signal processing, transmission and storage of existing communication systems. Therefore, a cross-modal signal reconstruction scheme is urgently needed to reduce the amount of transmitted data, so as to support 6G immersive multimodal services while meeting users' immersive-experience requirements and guaranteeing low-latency, high-reliability and high-capacity communication.

    Methods: Firstly, by controlling a robot to touch various materials, a dataset containing audio, visual and haptic signals, VisTouch, is constructed to lay the foundation for subsequent research on cross-modal problems. Secondly, by exploiting the semantic correlation between multimodal signals, a universal and robust end-to-end cross-modal signal reconstruction architecture is designed, comprising three parts: a feature extraction module, a reconstruction module and an evaluation module. The feature extraction module maps the source-modal signal into a semantic feature vector in a common semantic space, and the reconstruction module inverse-transforms this vector into the target-modal signal. The evaluation module assesses reconstruction quality in the semantic and spatio-temporal dimensions and feeds optimization information back to the feature extraction and reconstruction modules during training, forming a closed loop that achieves accurate signal reconstruction through continuous iteration. Furthermore, a teleoperation platform is designed, and the constructed haptic reconstruction model is deployed in the codec to verify its operational efficiency in practice. Finally, the reliability of the cross-modal signal reconstruction architecture and the accuracy of the haptic reconstruction model are verified by experimental results.
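The three-module pipeline can be sketched as a minimal skeleton; plain linear maps stand in for the real neural networks, and all dimensions and the class name are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class CrossModalReconstructor:
    """Toy skeleton of the architecture above: feature extraction into a shared
    semantic space, reconstruction into the target modality, and an evaluation
    step (here, mean absolute error) that would close the training loop."""
    def __init__(self, src_dim, sem_dim, tgt_dim):
        self.enc = rng.standard_normal((sem_dim, src_dim)) * 0.1  # feature extraction
        self.dec = rng.standard_normal((tgt_dim, sem_dim)) * 0.1  # reconstruction

    def forward(self, src):
        z = self.enc @ src   # semantic feature vector in the common space
        return self.dec @ z  # reconstructed target-modal signal

    def evaluate(self, pred, target):
        return float(np.mean(np.abs(pred - target)))  # MAE, as reported in Results

model = CrossModalReconstructor(src_dim=64, sem_dim=16, tgt_dim=32)
video_frame = rng.standard_normal(64)       # stand-in for a video feature
haptic_pred = model.forward(video_frame)    # video-assisted haptic reconstruction
```

The key structural point is the narrow shared semantic space (16 dimensions here): only the semantic vector would need to be transmitted, which is what reduces the data volume.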

    Results: The constructed VisTouch dataset involves three modalities (audio, video and haptics) and contains 47 common daily-life samples. The mean absolute error and accuracy of the video-assisted haptic reconstruction model on VisTouch reached 0.0135 and 0.78, respectively. To bring the proposed cross-modal signal reconstruction framework into a practical application scenario, a teleoperation platform was further built for an industrial scenario using a robot and an Nvidia development board. Running on this platform, the actual mean absolute error is 0.0126, the total end-to-end delay is 127 ms, and the reconstruction-model delay is 98 ms. A questionnaire was also used to assess user satisfaction: the mean haptic-realism satisfaction is 4.43 with a variance of 0.72, and the mean time-delay satisfaction is 3.87 with a variance of 1.07.

    Conclusions: The dataset results fully demonstrate the practicality of the constructed VisTouch dataset and the accuracy of the video-assisted haptic reconstruction model, while the tests on the teleoperation platform indicate that users find the haptic signals generated by the model close to the actual signals but are only moderately satisfied with the algorithm's running time, i.e., the complexity of the model needs further optimization.

    Intelligent task-oriented semantic communications: theory, technology and challenges
    Chuanhong LIU, Caili GUO, Yang YANG, Jiujiu CHEN, Meiyi ZHU, Lu’nan SUN
    2022, 43(6):  41-57.  doi:10.11959/j.issn.1000-436x.2022117

    Objectives: In the future, the intelligent interconnection of all things, such as machine-to-machine and human-to-machine communication, poses challenges to traditional communication methods. Semantic communication, which extracts semantic information from the source and transmits it, provides a novel solution for the sixth-generation (6G) communication system. However, challenges remain in how to measure semantic information and how to achieve an optimal semantic codec. This paper reviews existing work on semantic communication and proposes a semantic communication method and framework for intelligent tasks, paving the way for further development of semantic communication.

    Methods: Firstly, the development history and research status of semantic communication are reviewed, the two bottleneck problems it faces are analyzed and summarized, and a semantic communication method oriented to intelligent tasks is proposed. To address the difficulty of defining semantic entropy, this paper defines the smallest basic unit of a semantic message as the semantic element, introduces fuzzy mathematics to describe the fuzziness of semantic understanding, and gives a calculation expression for semantic information entropy. Then, based on information bottleneck theory, a semantic information coding scheme and a joint semantic-channel coding scheme are proposed, respectively covering the cases in which the receiver does or does not need to reconstruct the original source. Furthermore, from the perspective of neural network interpretability, an interpretability-based semantic encoding method is proposed. Finally, a semantic communication platform for intelligent tasks is built on software and hardware such as USRP and LabVIEW, and the performance of the proposed algorithms is verified.
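One classical way to quantify the "fuzziness of semantic understanding" over a set of semantic elements is the De Luca-Termini fuzzy entropy; the paper defines its own semantic information entropy, so the formula below is only an illustrative stand-in, with membership degrees as made-up inputs.

```python
import math

def fuzzy_semantic_entropy(memberships):
    """De Luca-Termini-style fuzziness of a set of semantic elements: an
    element understood crisply (mu near 0 or 1) contributes almost nothing,
    while maximal ambiguity (mu = 0.5) contributes one full bit."""
    h = 0.0
    for mu in memberships:
        for p in (mu, 1.0 - mu):
            if p > 0.0:
                h -= p * math.log2(p)
    return h

crisp = fuzzy_semantic_entropy([0.99, 0.01])  # two well-understood elements
fuzzy = fuzzy_semantic_entropy([0.5, 0.5])    # two maximally ambiguous elements
```

The measure peaks when every element's semantic membership is 0.5 (two elements give 2 bits), matching the intuition that transmission effort should concentrate on the ambiguous semantics.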

    Results: In communication scenarios where the source needs to be reconstructed, the proposed semantic communication method can greatly improve the compression ratio of the source data and reduce the amount of transmitted data. Under the same compression ratio, it improves both the receiver's performance on subsequent intelligent tasks and the quality of source reconstruction. In scenarios where the source need not be reconstructed, the semantic communication method accomplishes intelligent tasks well even at a large compression ratio. This is because semantic communication transmits the semantic information of an image instead of all of its data, which greatly reduces bandwidth requirements; the bandwidth utilization of semantic communication is 100 times that of traditional communication methods. In addition, its anti-noise performance is much better than that of traditional methods, because the transmitted data retains the semantic features of the image and the influence of channel noise is considered during model training, which improves intelligent-task performance and makes the communication system more robust. Since the amount of transmitted data is greatly reduced, the transmission delay drops significantly under the same bandwidth resources; and since image reconstruction is not required, the software and hardware processing load, and hence the processing delay, are also reduced. Therefore, the proposed scheme can greatly reduce the end-to-end delay of intelligent tasks while ensuring high-precision classification performance.

    Conclusions: Compared with traditional communication methods, intelligent-task-oriented semantic communication has obvious advantages: it greatly reduces the amount of transmitted data and improves the performance of intelligent tasks at the receiver. Semantic communication will therefore continue its rapid development. However, many basic concepts and problems in semantic communication still need further discussion and refinement, such as the basic theory of semantic information, a unified semantic communication architecture, and resource allocation strategies for semantic communication. Research on these topics is of great significance for technological innovation and breakthroughs in the 6G era, and their realization requires the joint efforts of the academic community.

    Papers
    Research on entity recognition and alignment of APT attack based on Bert and BiLSTM-CRF
    Xiuzhang YANG, Guojun PENG, Zichuan LI, Yangqi LYU, Side LIU, Chenguang LI
    2022, 43(6):  58-70.  doi:10.11959/j.issn.1000-436x.2022116

    Objectives: In the face of a complex and changing network security environment, how to fight advanced persistent threat (APT) attacks has become an urgent problem for the entire security community. The massive APT attack analysis reports and threat intelligence generated by security companies have significant research value: they can effectively provide information on APT organizations and thereby assist the traceability analysis of network attack events. Aiming at the problem that APT analysis reports have not been fully utilized, and that automated methods are lacking to generate structured knowledge and construct feature portraits of hacker organizations, an automatic knowledge-extraction method for APT attacks combining entity recognition and entity alignment is proposed. The method can automatically extract entities from APT analysis reports and construct structured knowledge of each APT organization.

    Methods: An automatic extraction method for APT attack knowledge that integrates entity recognition and entity alignment is designed. Firstly, 12 entity categories are designed according to the characteristics of APT attacks. Then, lowercase conversion, data cleaning and data annotation are performed on the corpus in the preprocessing layer, and the preprocessed APT text sequence is represented as a vector. Secondly, the Bert model is built to pre-train on the annotated corpus, encode each word and generate the corresponding word vector. The BiLSTM model is constructed to capture long-distance and contextual semantic features, and an attention mechanism is built to highlight key features and convert the vector sequence into a label probability matrix. Thirdly, the CRF algorithm is utilized to decode the dependencies between output labels and generate the optimal label sequence. Finally, an entity alignment method based on semantic similarity and Birch clustering is constructed, which improves the quality of the extracted APT attack knowledge by matching entities and merging them into the infobox of each APT organization.
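The CRF decoding step at the end of this pipeline is, in essence, Viterbi decoding over label scores; the sketch below uses toy emission/transition scores rather than parameters learned by the paper's model.

```python
def viterbi_decode(emissions, transitions):
    """Return the label sequence maximizing the sum of per-token emission
    scores and label-to-label transition scores (the CRF decoding step)."""
    n_tags = len(emissions[0])
    score = list(emissions[0])   # best score ending in each tag so far
    back = []                    # backpointers, one list per later token
    for emit in emissions[1:]:
        ptr, new = [], []
        for j in range(n_tags):
            best_i = max(range(n_tags), key=lambda i: score[i] + transitions[i][j])
            ptr.append(best_i)
            new.append(score[best_i] + transitions[best_i][j] + emit[j])
        back.append(ptr)
        score = new
    best_last = max(range(n_tags), key=score.__getitem__)
    path = [best_last]
    for ptr in reversed(back):   # follow backpointers to recover the path
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy 3-token sentence with 2 hypothetical tags (0 = O, 1 = attack entity).
path = viterbi_decode([[2, 0], [0, 2], [1, 0]], [[0, 0], [0, 0]])
```

In the real model the emission scores come from the Bert-BiLSTM-attention layers and the transition matrix is learned, but the decoding logic is the same.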

    Results: In terms of entity recognition, the proposed APT attack entity recognition method is superior to existing methods (i.e., CRF, LSTM-CRF, GRU-CRF, BiLSTM-CRF, CNN-CRF, and Bert-CRF). Its precision, recall, and F1-score are 0.929 6, 0.873 3, and 0.900 6, respectively. Compared with CRF, the F1-score of the proposed model is increased by 14.32%; compared with CNN-CRF, which integrates convolutional neural networks, by 6.92%; compared with LSTM-CRF and BiLSTM-CRF, by 8.43% and 5.30%, respectively; compared with GRU-CRF, by 8.74%; and compared with Bert-CRF, by 7.03%. In addition, the accuracy of the proposed model is 0.900 4, which is 9.85% higher than the average of the other six models. The proposed model's training process is also more stable and its curve converges faster, achieving higher accuracy with fewer training batches; its training error converges faster and its curve is smoother. Moreover, the proposed model performs best on the "attack method" entity category, with an F1-score of 0.927 5. On the one hand, this category contains a large number of entities; on the other, these entities occur widely in semantically rich APT attack events and carry the action characteristics of attack behavior, which leads to better recognition of this category. For entity recognition with small-sample annotation, the proposed method's precision, recall, and F1-score are 0.780 0, 0.589 4, and 0.671 4, respectively. Compared with the CRF, LSTM-CRF, GRU-CRF, BiLSTM-CRF, CNN-CRF, and Bert-CRF models, its F1-score is improved by 27.42%, 18.78%, 23.62%, 13.25%, 14.88%, and 14.46%, respectively. This fully demonstrates that the proposed method can pre-train on a small-sample corpus through the Bert model, thereby improving entity recognition. In terms of entity alignment and knowledge fusion, the experiment automatically extracts high-frequency named entities of each category that commonly appear in APT attack events. For example, common APT organizations include "APT29", "APT32", "APT28", and "Turla"; common attack tools include "PowerShell", "Cobalt Strike", and "Mimikatz"; common attack methods include "Spearphishing", "C2", "Watering Hole Attack", and "Backdoor"; and common vulnerabilities include "CVE-2017-11882", "CVE-2017-0199", and "CVE-2012-0158". The proposed method combines corpus titles and keywords to fuse APT organization names, constructs infoboxes of the common APT organizations in this dataset, and forms structured knowledge of each organization; the attack-domain knowledge of APT28 and APT32 is shown in detail.
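The alias-merging idea behind entity alignment (e.g., "APT28" and "apt28" are one organization, "APT29" is another) can be sketched with a greedy grouping; this stands in for the paper's semantic-similarity-plus-Birch clustering, and the character-set similarity below is purely illustrative.

```python
def align_entities(names, similarity, threshold=0.8):
    """Greedily merge each entity into the first existing group whose
    representative it resembles closely enough; otherwise start a new group."""
    groups = []
    for name in names:
        for g in groups:
            if similarity(name, g[0]) >= threshold:
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

def char_jaccard(a, b):
    """Crude case-insensitive character-set similarity (illustrative only)."""
    sa, sb = set(a.lower()), set(b.lower())
    return len(sa & sb) / len(sa | sb)

groups = align_entities(["APT28", "apt28", "Turla", "APT29"], char_jaccard)
```

With this threshold, "APT28"/"apt28" merge while "APT29" stays separate, which is the behavior a real alignment step needs before building per-organization infoboxes.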

    Conclusions: According to the characteristics of APT attacks, an automatic extraction method of APT attack knowledge based on entity recognition and entity alignment is designed and implemented. This method can effectively identify APT attack entities, automatically extract advanced persistent threat knowledge under the condition of few-sample annotation, and generate structured feature portraits of common APT organizations, which will provide support for subsequent APT attack knowledge graph construction and attack traceability analysis.

    Design and implementation of adaptive mimic voting device oriented to persistent connection
    Dacheng ZHOU, Hongchang CHEN, Guozhen CHENG, Weizhen HE, Ke SHANG, Hongchao HU
    2022, 43(6):  71-84.  doi:10.11959/j.issn.1000-436x.2022081

    Objectives: The mimic voter is a crucial component of the dynamic heterogeneous redundancy architecture in mimic defense technology, but existing mimic voting methods need to collect and process the complete output data of the heterogeneous redundant executives. In application scenarios where a persistent connection continuously transmits data in chunked transfer encoding, voting efficiency is too low and the memory overhead of voting is too large. This paper designs and implements an adaptive mimic voter for the scenario of continuous chunked-transfer-encoded output over a persistent connection, to reduce the voter's memory overhead and improve voting efficiency.

    Methods: During continuous data transmission, the proposed voter adaptively divides the chunked-transfer-encoded data arriving successively from the heterogeneous redundant executives, votes dynamically, and outputs the data in the form of a sliding window. Gradually releasing the data of already-voted blocks reduces the voter's memory consumption and lowers voting processing time while maintaining the continuity of data transmission over the persistent connection. On the one hand, a voting-algorithm selection strategy set is constructed to maintain voting accuracy by analyzing the data characteristics within the sliding window. On the other hand, an inventory model of the adaptive voter's data-voting process is established, and an adaptive voting-window control strategy is proposed based on cost optimization of this model, providing the best adaptive segmentation scheme for the data to be voted on.
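The sliding-window voting idea can be sketched as follows: vote on each aligned slice of the redundant executives' output and release it immediately, instead of buffering complete responses. A plain majority vote stands in for the paper's voting-algorithm selection strategy set, and the fixed window size stands in for its adaptive window control.

```python
from collections import Counter

def windowed_vote(executive_streams, window):
    """Vote slice-by-slice over the executives' chunked output: the majority
    slice wins each window and is emitted (and could be freed) immediately."""
    length = min(len(s) for s in executive_streams)
    out = []
    for start in range(0, length, window):
        slices = [s[start:start + window] for s in executive_streams]
        winner, _ = Counter(slices).most_common(1)[0]  # majority slice wins
        out.append(winner)                             # voted block is released
    return "".join(out)

# Three redundant executives; one is compromised and corrupts a byte.
result = windowed_vote(["HELLOWORLD", "HELLOWORLD", "HELLOXORLD"], window=5)
```

Peak memory is bounded by the window size times the number of executives rather than by the full response, which is the core of the voter's memory saving.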

    Results: A series of comparative experiments between a prototype of the adaptive mimic voter and the traditional mimic voter was conducted. (1) The memory-occupancy evaluation shows that both the peak physical memory consumption and the total physical-memory-holding time of the adaptive voter when transmitting 20 MB web resources in chunked transfer encoding are significantly lower than those of the traditional voter. (2) The transmission-delay evaluation shows that the response time of the adaptive voter when voting on 10 MB to 320 MB of chunked-transfer-encoded webpage resources is relatively low, indicating significantly improved voting speed. (3) The concurrency evaluation shows that, under request concurrency of 1 000 to 5 000, the average response time of a system using the adaptive voter is lower, and its voting throughput higher, than with the traditional voter. (4) The voting-accuracy evaluation shows that the adaptive voter based on the voting-algorithm selection strategy set is slightly weaker than the semantic-feature and AHP algorithms but far superior to the character-similarity algorithm of the traditional voter, revealing that the adaptive voter has acceptable voting accuracy.

    Conclusions: The adaptive mimic voter effectively solves the service-performance degradation caused by excessive memory occupation when voting on chunked-transfer-encoded data over persistent connections. The memory-occupancy experiment shows its improvement on this problem, and the voting-accuracy evaluation shows that it improves voting efficiency while maintaining acceptable accuracy. The experiments under different service pressures provide a micro-benchmark feasibility analysis of the adaptive voter in general application scenarios. Therefore, the adaptive mimic voter reduces resource overhead and improves voting efficiency with acceptable voting accuracy, and can effectively support the mimic transformation of applications that transmit data over persistent connections.

    Low error floor LT coding algorithm for unequal error protection
    Xin SONG, Shuyan NI, Zhe ZHANG, Yurong LIAO, Tuofeng LEI
    2022, 43(6):  85-97.  doi:10.11959/j.issn.1000-436x.2022123

    Objectives: The rateless LT code is designed to provide an ideal transport protocol for large-scale data distribution and reliable broadcasting. It has three excellent characteristics: link adaptation, seamless code-rate switching, and a relatively simple feedback mechanism. An important application scenario in wireless data transmission is unequal error protection (UEP). As the first practical rateless code, the LT code can conveniently be combined with UEP algorithms to realize adaptive data transmission. However, conventional UEP-LT coding algorithms suffer from a high error floor and poor convergence performance on the additive white Gaussian noise (AWGN) channel. Therefore, an improved systematic UEP-LT coding scheme is designed in this paper.

    Methods: This paper designs independent systematic UEP-LT codes for AWGN channels. A method for designing the check-degree distribution matching this scheme is given, and a coding scheme characterized by segmentation is proposed. In this scheme, systematic nodes connected one-to-one with information nodes are designed to provide non-zero log-likelihood ratio (LLR) information from the channel. Next comes a fixed number of check nodes, i.e., the fixed segment; the check nodes of this segment are connected only to the most important bits (MIB), with the purpose of bringing the MIB closest to the successfully decoded state. The final part is the rateless coding segment, in which each check node selects MIB or least important bits (LIB) as neighbor nodes; the proportion of check nodes connected to the MIB and the LIB can be flexibly adjusted so that the MIB is always successfully decoded before the LIB. This paper also proposes a degree-distribution design model adapted to the above coding scheme, which aims to provide a sufficiently wide extrinsic-information decoding tunnel for the MIB and an open, not-too-narrow tunnel for the LIB. When only the fixed segment is transmitted, the designed degree distribution should bring the MIB closest to the successfully decoded state; once the rateless segment is transmitted, it should ensure that the MIB is recovered correctly as soon as possible and that the MIB's decoding tunnel remains wide enough when the LIB is in a critical decoding state. Under these constraints, the designed check-degree distribution provides the MIB with better convergence performance than the LIB.
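The adjustable MIB/LIB connection ratio of the rateless segment can be sketched by the neighbor-selection step alone; the fixed degree, the probability p_mib, and all sizes below are illustrative assumptions (the paper's actual degree distribution is designed via EXIT-chart analysis).

```python
import random

def build_check_neighbors(n_mib, n_lib, n_checks, p_mib=0.7, degree=3, seed=1):
    """For each rateless check node, draw its neighbors from the MIB part
    with probability p_mib and from the LIB part otherwise, so the MIB's
    average degree (i.e., its protection level) is tunable."""
    rng = random.Random(seed)
    neighbors = []
    for _ in range(n_checks):
        if rng.random() < p_mib:
            pool, base = n_mib, 0       # indices 0 .. n_mib-1 are MIB
        else:
            pool, base = n_lib, n_mib   # indices n_mib .. are LIB
        neighbors.append([base + rng.randrange(pool) for _ in range(degree)])
    return neighbors

nbrs = build_check_neighbors(n_mib=100, n_lib=300, n_checks=2000, p_mib=0.7)
mib_edges = sum(1 for check in nbrs for i in check if i < 100)
```

Because the MIB block is smaller and receives most of the edges, each MIB bit ends up with a much higher average degree than each LIB bit, which is what makes the MIB decode first.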

    Results: Taking the code length K = 6000 as an example, the simulation results are as follows. (i) When the signal-to-noise ratio (SNR) is low, the MIB in this scheme has the lowest error floor. For example, when the reciprocal code rate R^-1 = 2.05, the bit error rate (BER) of the MIB in the proposed scheme is reduced by nearly an order of magnitude compared with the reference scheme, and its lowest value reaches the order of 10^-7. In addition, the proposed scheme also has the best convergence performance, i.e., it enters the BER waterfall region with a small coding overhead. Taking 10^-6 as the BER standard, the overhead saved by the proposed scheme is at least 10% of the code length K, which reflects its advantages. (ii) When the SNR is high, the BER performance of the proposed scheme is the best for both the MIB and the LIB. For the MIB, the BER of the proposed scheme is always more than an order of magnitude lower than that of the reference scheme. If 10^-6 is taken as the BER standard, the overhead saved by this scheme is about 7% of the code length K; if 10^-7 is taken instead, the saving is about 15% of K. This means that as the SNR increases, the performance gain of the proposed scheme over the reference scheme grows.

    Conclusions: In this paper, a systematic unequal-error-protection LT coding scheme is designed, and a check-degree-distribution design model suitable for this scheme is constructed, to solve the high error floor of the conventional UEP-LT algorithm in the AWGN channel. The main idea of this scheme is to design fixed coding segments, rateless coding segments, and systematic node segments and transmit them in sequence. The advantage of this scheme is that it provides non-zero LLR information for the information nodes as early as possible and gives the MIB and the LIB different, flexibly adjustable average degrees. In addition, based on the extrinsic information transfer (EXIT) chart, the check degree distribution is designed for the fixed and rateless segments so that the MIB obtains the best protection performance while the convergence performance of the LIB is improved as much as possible. In follow-up work, a check-degree-distribution model that approaches the channel capacity more closely could be designed to further improve the coding efficiency of the scheme for a given BER standard.

    APG mergence and topological potential optimization based heuristic user association strategy
    Zhirui HU, Meihua BI, Fangmin XU, Meilin HE, Changliang ZHENG
    2022, 43(6):  98-107.  doi:10.11959/j.issn.1000-436x.2022121

    Objective: In cell-free networks, access points (AP) collaborate to serve users. This coordination can break the performance bottleneck that inter-cell interference causes in traditional cellular networks. However, it requires a large amount of information interaction and signal processing, which results in poor scalability. This paper studied a user association strategy to improve the scalability of cell-free networks.

    Methods: The network scalable degree was designed as a measure of scalability, and a user association strategy to improve it was studied using optimization theory. 1) To model the optimization problem, the network coupling degree, representing the degree of association among nodes, was first constructed to establish the mathematical relationship between the network scalable degree and the AP group (APG). The problem of improving the network scalable degree was thus modeled as minimizing the network coupling degree. Then, a multi-objective optimization problem of minimum network coupling degree and maximum user rate was established to balance the network scalable degree against network service quality. 2) To solve the optimization problem while avoiding high computational complexity, a heuristic user association strategy based on APG mergence and topological potential optimization was proposed. With the proposed algorithm, the number of APGs is reduced by APG mergence, and the number of APGs an AP belongs to is reduced by letting APs exit APGs; both reduce the network coupling degree and improve the network scalable degree. For APG mergence, an overlap rate between APG sets I and J was defined, and APGs whose overlap rate exceeded a given threshold were merged. For AP exit, the relationship between network coupling degree and user rate was established by a topological potential function, which served as the performance index for an AP exiting an APG.
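The APG mergence step can be sketched as a greedy pairwise merge. The abstract does not give the exact overlap-rate formula, so the definition used here, |I∩J| / min(|I|, |J|), is an assumption; the function name and threshold semantics are likewise illustrative.

```python
def merge_apgs(apgs, threshold):
    """Greedy APG mergence sketch: repeatedly merge any two AP groups
    whose overlap rate exceeds the threshold, until no pair qualifies.
    Overlap rate |I∩J| / min(|I|,|J|) is an assumed definition."""
    groups = [set(g) for g in apgs]
    merged = True
    while merged:
        merged = False
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                rate = len(groups[i] & groups[j]) / min(len(groups[i]), len(groups[j]))
                if rate > threshold:
                    groups[i] |= groups[j]   # absorb group j into group i
                    del groups[j]
                    merged = True            # restart the scan after a merge
                    break
            if merged:
                break
    return groups
```

Reducing the number of APGs this way directly shrinks the number of coordination relationships among APs, which is the mechanism by which the network coupling degree falls.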

    Results: 1) On the rationality of the problem modeling, Fig. 2 and Fig. 5 show that the network scalable degree is inversely proportional to the network coupling degree. It is therefore reasonable to model the problem of improving the network scalable degree as minimizing the network coupling degree, and feasible to improve the former by reducing the latter. 2) The upper bound on the computational complexity of the proposed algorithm is O(KN log₂N + K² + N·N̄_p), while that of directly solving the optimization problem is O(N^(N̄_u·K)). 3) For the theoretical analysis of the network scalable degree, take Fig. 3 as an example: if AP2 changes, 12 APs in Fig. 3(a) are affected and the network scalable degree is η₂ = 0.51, while only 4 APs in Fig. 3(c) are affected and η₂ = 0.79. 4) Fig. 5 shows the simulation results of the network scalable degree. Compared with the traditional strategy, the network scalable degree is improved by 9.59% with a 4.43% user-rate loss; compared with the strategy in [10], it is improved by 22.15% with a 4.99% user-rate loss. 5) The algorithm parameters, namely the overlap-rate threshold β₀ and the upper limit N₀ on the number of associated APs, affect the performance. As shown in Fig. 6, as β₀ or N₀ decreases, η increases and the total user rate decreases. As N̄_p increases, the effect of β₀ grows and that of N₀ shrinks. Take N̄_p = 40, 60 as an example: the η gap between β₀ = 0.5 and β₀ = 0.9 increases from 5.97% to 14.17%, and the user-rate gap increases from 47 bit/(s·Hz) to 155 bit/(s·Hz); the η gap between N₀ = 20 and N₀ = 60 decreases from 1.4% to 0.4%, and the user-rate gap decreases from 76 bit/(s·Hz) to 29 bit/(s·Hz).

    Conclusions: The proposed user association strategy can improve the network scalable degree of cell-free networks at the cost of a small rate loss. The smaller the overlap-rate threshold or the upper limit on the number of APGs an AP is associated with, the more the network scalable degree increases and the greater the rate loss.

    On-demand and efficient scheduling scheme for cryptographic service resource
    Wenlong KOU, Yuyang ZHANG, Fenghua LI, Xiaogang CAO, Jiamin LI, Zhu WANG, Kui GENG
    2022, 43(6):  108-118.  doi:10.11959/j.issn.1000-436x.2022092

    Objective: The popularity of network technology has brought more and more enterprises and individuals into the Internet, and data shows explosive, exponential growth. With the increasing demand for secure data transmission and fine-grained authentication, cryptographic services are used ever more frequently in various applications. How to handle cryptographic service requests that arrive randomly interleaved and with large peak differences has gradually become a bottleneck restricting network security applications. A cryptographic service scheduling system model is proposed to explore differentiated, dynamic, on-demand scheduling of cryptographic service resources.

    Methods: An optimized entropy method and cryptographic resource reconstruction technology were used to provide dynamic and extensible cryptographic service resources for users and devices accessing the service system. First, an evaluation method for the service ability of cryptographic devices is proposed. Operating-state information such as cryptographic resource utilization and network throughput is collected, processed with the optimized entropy method, and combined with the cryptographic resource allocation of each device to describe the cryptographic service ability it provides, supporting cryptographic job scheduling. Then, an efficient on-demand cryptographic job scheduling strategy is proposed, together with a cryptographic service request expectation. The load distance of each cryptographic device is calculated to determine whether it can meet the requirements of a cryptographic service, and a scheduling strategy is generated accordingly. In addition, cryptographic devices can be reconstructed according to the scheduling algorithm to meet differentiated requirements on service quality and service efficiency.
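The entropy-based weighting of device-state indicators can be illustrated with the textbook entropy-weight method; the paper's "optimized" variant is not specified in the abstract, so this sketch shows only the classic form, and the benefit-type (larger-is-better) normalisation is an assumption.

```python
import math

def entropy_weights(samples):
    """Classic entropy-weight method sketch for weighting device-state
    indicators (e.g. resource utilisation, throughput). Rows are devices,
    columns are indicators; returns one weight per indicator."""
    n = len(samples)
    cols = list(zip(*samples))
    # Column-wise min-max normalisation (assumes benefit-type indicators).
    norm = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0
        norm.append([(x - lo) / span for x in col])
    weights = []
    for col in norm:
        total = sum(col) or 1.0
        p = [x / total for x in col]
        # Shannon entropy of the column, scaled to [0, 1] by ln(n).
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(n)
        weights.append(1 - e)   # low entropy => more discriminative => higher weight
    s = sum(weights) or 1.0
    return [w / s for w in weights]
```

Indicators whose values vary strongly across devices get larger weights, so the resulting composite score highlights the state dimensions that actually distinguish one cryptographic device from another.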

    Results: The enhanced Min-Min load balancing algorithm, the cluster load balancing algorithm based on dynamic consistent hashing, and the proposed on-demand scheduling algorithm were compared. By sending cryptographic service requests, the maximum completion time of cryptographic operations, the number of serviceable requests per unit time, and the average load of the FPGA (field programmable gate array) cryptographic computing units were tested for the three algorithms. Fig. 7 shows that when the number of cryptographic service requests is small, the difference among the three algorithms is not obvious. As the number of requests increases, however, the load of the FPGA computing units grows; because the other two algorithms consider neither cryptographic job migration nor dynamic configuration of the FPGA computing units, the queuing time of cryptographic jobs increases significantly and their gap to the on-demand scheduling algorithm keeps widening. Fig. 8 shows that when the number of requests is small, the three algorithms perform similarly and can satisfy most requests. As the number of requests increases, the number of requests served per unit time by each algorithm reaches a peak; because the on-demand scheduling algorithm realizes cryptographic job migration and dynamic configuration of FPGA computing units, its number of served requests per unit time is higher than that of the other two. Fig. 9 shows that, under the premise of minimizing job migration and FPGA reconstruction, the on-demand scheduling algorithm preferentially assigns cryptographic operations to the same FPGA computing unit; therefore, only one FPGA computing unit carries load when the number of requests is small, and the number of working FPGA computing units grows with the number of requests. Figs. 10 and 11 show that the FPGA load of the other two algorithms is relatively balanced. When the number of requests is large, every FPGA is heavily loaded; when a new request arrives, the remaining capacity of the FPGA computing units is insufficient to meet it, because job migration and dynamic FPGA configuration are not considered.

    Conclusions: An efficient on-demand scheduling scheme for cryptographic service resources is proposed. The description and dynamic monitoring of cryptographic service capability are realized with a normalized evaluation model of cryptographic devices based on the optimized entropy method. Meanwhile, a cryptographic job scheduling strategy suited to different requirements is proposed and combined with a cryptographic resource reconstruction strategy to realize differentiated configuration and scheduling of cryptographic resources, providing dynamic and extensible cryptographic service resources to the users and devices of any accessing service system.

    HSTC: hybrid traffic scheduling mechanism in time-sensitive networking
    Changchuan YIN, Yanjue LI, Hailong ZHU, Xinxin HE, Wenxuan HAN
    2022, 43(6):  119-132.  doi:10.11959/j.issn.1000-436x.2022103

    Objectives: With the arrival of the Industry 4.0 era, industrial production control systems are becoming more and more intelligent, which puts forward higher requirements for the real-time and deterministic transmission of information. Time-sensitive networking (TSN) has been introduced into industrial networks because of its good compatibility and low delay jitter. To realize efficient transmission of mixed traffic in industrial networks with the help of TSN, we explored a new traffic scheduling mechanism in TSN.

    Methods: Based on the centralized software-defined network (SDN) architecture, we designed a method to determine the minimum scheduling slot of the network and adjusted the sampling period of scheduled traffic (ST) based on that minimum slot. By reducing the transmission bandwidth occupied by ST traffic, more transmission resources were reserved for stream reservation (SR) traffic to improve network schedulability. Furthermore, a parity mapping scheme was proposed for SR traffic. When an SR flow was not schedulable, a flow offset planning (FOP) algorithm was designed to offset its injection time, which further improves network schedulability by raising system resource utilization.
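The period-adjustment step can be sketched as follows. The abstract does not state the exact alignment rule, so this sketch assumes a common one: round each ST flow's sampling period down to the nearest multiple of the minimum scheduling slot, then take the hyperperiod (LCM) of the adjusted periods as the schedule length; function and parameter names are illustrative.

```python
from math import gcd
from functools import reduce

def min_slot_and_adjust(periods_us, base_slot_us):
    """Sketch of slot sizing: align each ST flow's sampling period down to
    the nearest multiple of the minimum scheduling slot (assumed rule),
    then compute the hyperperiod as the LCM of the adjusted periods.
    All times are in microseconds."""
    adjusted = [max(base_slot_us, (p // base_slot_us) * base_slot_us)
                for p in periods_us]
    hyper = reduce(lambda a, b: a * b // gcd(a, b), adjusted)
    return adjusted, hyper
```

Aligning periods to slot multiples keeps the hyperperiod short, which bounds the size of the gate control schedule and leaves contiguous idle slots for SR traffic.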

    Results: To verify the performance of the core algorithms of the HSTC mechanism, we built an experimental platform and compared our mechanism with existing ones from the perspectives of ST traffic bandwidth occupation, traffic scheduling priority, SR traffic mapping, injection slot selection, etc. In the experiment, the maximum transmission unit (MTU) of the network was 1500 B, the maximum buffer size of a single queue in a switch was 6 MTU, and the link rate was 1000 Mbit/s. The maximum sampling period and packet length of each ST flow were randomly selected from the sets {0.6, 0.8, 1, 1.2, 1.6} ms and {0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1} kB. The minimum sampling period of an ST flow was set to 0.1 ms, and the deadline (DDL) of an ST flow was its actual sampling period. The sampling period and packet length of each SR flow were randomly selected from the sets {4, 6, 8, 10, 12, 16, 20} ms and {1.5, 2, 2.5, 3, 3.5, 4, 4.5} kB. The DDL of each SR flow was a random integer within the range [0.5×T_j^R, T_j^R]. The main experimental results are as follows. ① Adjusting the sampling period of ST traffic: without adjustment, transmitting ST traffic occupied more than 80% of the network bandwidth once the number of flows reached 12; after adjusting the sampling period with our scheme, the bandwidth occupation of ST traffic was greatly reduced. ② Traffic scheduling priority: the weighted ranking scheme proposed in this paper performed best, followed in order by maximum packet length first, shortest deadline first, and minimum sampling period first. Compared with the suboptimal scheme, the weighted ranking scheme improved the scheduling success rate by up to 0.52. ③ SR traffic mapping: as the number of SR flows increased, the impact of the parity mapping scheme on the scheduling success rate became more significant; the maximum gap between our scheme and the scheme that maps according to DDL was 20%. ④ Compared with the suboptimal random slot injection scheme, the slot sequencing scheme in HSTC increased the network scheduling success rate by up to 0.77, and the limit bandwidth utilization reached 88%. ⑤ Overall comparison: under the same simulation parameters, we compared HSTC with two mechanisms from representative existing literature; the results showed that HSTC achieves the dual optimization of reducing solution complexity and improving scheduling performance.

    Conclusions: How to make full use of TSN's accurate flow scheduling capability to provide determinism and real-time guarantees for production control systems is still a research focus of TSN. We therefore proposed a mixed traffic scheduling mechanism called HSTC, which combines the two existing schemes of the time-aware shaper (TAS) and cyclic queuing and forwarding (CQF) and formulates different scheduling strategies for time-sensitive traffic and large-bandwidth traffic according to their characteristics. The experimental results showed that the HSTC mechanism significantly improves network schedulability by raising system resource utilization and realizes efficient scheduling of mixed traffic. Existing TSN scheduling schemes are mostly designed for off-line scenarios; however, actual industrial networks still carry a small amount of event-triggered burst traffic, which has no fixed parameters but strongly affects the normal operation of the system. How to extend our mechanism to support mixed transmission of burst traffic is therefore our next research direction.

    Topology control based on dynamic graph embedding in Internet of vehicles
    Yanfei SUN, Jiazheng YIN, Jin QI, Xiaoxuan HU, Mengting CHEN, Zhenjiang DONG
    2022, 43(6):  133-142.  doi:10.11959/j.issn.1000-436x.2022122

    Objectives: With the growth of the automotive market, the carrying pressure on roads is increasing. Due to the dynamics, complexity, and poor communication environment of the Internet of vehicles (IoV), and the rapidly changing distances and occlusions between vehicles, link breakage and signal fading occur frequently among network nodes, making the network topology difficult to control. To build a more stable and reasonable IoV, fuzzy inference and other methods were used to extract vehicle features, and a graph embedding method for the IoV environment was proposed to make full use of these features when building the network, so as to realize the topology discovery and control of the IoV.

    Methods: The proposed label-range graph embedding (LRGE) method was used to discover and control the topology of the IoV. The first step was to establish the vehicular network model. The road was divided into several subnetworks according to the roadside units (RSU). The driver assistance system was used to obtain relevant historical information about each vehicle; Fourier transform and fuzzy inference were used to extract the drivers' driving features, and a low-dimensional feature vector was obtained by processing the vehicle information. Then, the proposed range-based cold boot method was applied to vehicles newly added to the network: according to the features of the target vehicles in the joining area, the feature vector was updated and modified so that connections with the vehicle nodes of the new network could be established more easily. Finally, based on the application environment of the IoV, the LRGE method performed random walks among vehicle nodes, and the established walks were optimized according to the LRGE model to determine the adjacency matrix, which was dynamically updated according to the historical sequence information. According to the characteristics of the actual IoV, this method fuses the feature vector and the driver feature label to realize topology discovery and control. A flat-fading composite channel model was used to simulate the actual communication environment, and the actual effect was tested on the NGSIM dataset.
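The random-walk stage can be sketched with a feature-biased walker. The abstract does not give the LRGE bias rule, so the inverse-distance weighting below is an assumption standing in for it, and the function name is illustrative; DeepWalk-style walks would then feed a skip-gram model to produce the embeddings.

```python
import random

def feature_biased_walks(adj, features, walk_len, walks_per_node, seed=0):
    """Sketch of feature-biased random walks in the spirit of LRGE: at each
    step the walker prefers neighbours whose feature vectors are closer to
    the current node's (assumed inverse-distance weighting)."""
    rng = random.Random(seed)

    def weight(u, v):
        # Euclidean feature distance; closer features => higher transition weight.
        d = sum((a - b) ** 2 for a, b in zip(features[u], features[v])) ** 0.5
        return 1.0 / (1.0 + d)

    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk, cur = [start], start
            for _ in range(walk_len - 1):
                nbrs = adj[cur]
                if not nbrs:
                    break
                cur = rng.choices(nbrs, weights=[weight(cur, v) for v in nbrs])[0]
                walk.append(cur)
            walks.append(walk)
    return walks
```

Biasing walks toward feature-similar vehicles is what lets vehicles with similar driving styles co-occur in walk contexts and hence end up close in the embedding space.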

    Results: The simulation results under the NGSIM dataset and the flat-fading composite channel show that the LRGE method achieves reasonable network topology control. Vehicles with more aggressive driving styles prefer to establish connections with forward vehicle clusters, and vehicles with similar driving styles are able to maintain stable connections for longer. The method was contrasted with a random network, the DeepWalk and Node2Vec methods, and the dynamic growth (DN) algorithm. Testing the number of established connections, the link breakage probability, and the average hops between reachable nodes shows that the networks established by the Node2Vec and LRGE methods are more reasonable, with low link breakage probability, less network redundancy, and a more practical topology. To reflect the differences in connectivity and robustness, the connection probability, the importance distribution represented by PageRank, and the proportion of cut vertices were compared. The network established by the LRGE method has a higher connection probability, its node centrality distribution is flat, and its proportion of cut vertices is relatively small, so it has advantages in connectivity and robustness. These results were further verified by comparing the relative relations under different communication environments.

    Conclusions: Although the high dynamics and complexity of the IoV make it difficult to build a reasonable and stable network, vehicles and drivers have distinct features that can be extracted and used to assist in controlling the network topology. Driver features are extracted through fuzzy inference and related methods, and the graph embedding method for the IoV makes full use of vehicle feature information, so the network topology can be constructed more reasonably and effectively. Moreover, the graph embedding method is computationally simple and the resulting network performs well: it can respond rapidly to the dynamic IoV, update the network topology in time, and finally realize IoV topology control with good dynamics, connectivity, and robustness.

    Intelligent prediction method of virtual network function resource capacity for polymorphic network service slicing
    Julong LAN, Di ZHU, Dan LI
    2022, 43(6):  143-155.  doi:10.11959/j.issn.1000-436x.2022098

    Objectives: With the emergence and development of new network structures such as the polymorphic network, the demand for network resource capacity is becoming more and more diverse. It is very difficult to adjust the distribution of the virtual network functions (VNFs) carried on network slices for on-demand, dynamic, and efficient resource capacity matching. This paper uses a data-driven VNF resource capacity prediction method to explore the pre-deployment of VNFs carried on network slices.

    Methods: Following a data-driven approach, a VNF resource capacity prediction method based on spatiotemporal feature extraction was adopted to pre-deploy VNFs for upcoming slicing demand. First, the data-stream time series used for prediction undergoes two-stage weighting, and the processed series and its underlying spatial topology information are input into the network model for spatiotemporal feature extraction. For spatial features, given an adjacency matrix and a feature matrix, a graph convolutional network reorganizes the spatial distribution features of the time series in the Fourier domain. For temporal features, gated recurrent units perceive the temporal dependencies of the input data through information transfer between units. Then, based on the mapping between the data-flow sequence and the number of VNF instances, a feedforward neural network transforms the data dimensions and finally outputs the VNF resource demand prediction.
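The spatial half of the model can be illustrated by a single graph-convolution step, H' = ReLU(Â H W) with Â the symmetrically normalised adjacency with self-loops. This is the standard GCN propagation rule, shown here only as a sketch of the spatial layer; the GRU temporal layer and the output feedforward layer are omitted.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution step as used for spatial feature extraction:
    H' = ReLU(A_hat @ H @ W), where A_hat = D^-1/2 (A + I) D^-1/2.
    adj: (N,N) adjacency; feats: (N,F) node features; weight: (F,F') matrix."""
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt           # symmetric normalisation
    return np.maximum(0.0, a_norm @ feats @ weight)    # aggregate, project, ReLU
```

Each output row mixes a node's own traffic features with those of its topological neighbours, which is how the layer captures the non-Euclidean spatial distribution of data flows across the slice.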

    Results: The experimental results show that the prediction performance of this method is mainly reflected in the following aspects. 1. The prediction accuracy is stable. The primary reason is that the method adopts a well-chosen spatiotemporal feature extraction structure, which can handle both a spatial structure with non-Euclidean features and time series with contextual dependencies. A secondary reason is that the weighted preprocessing of the input data-stream sequence effectively avoids feature mutations caused by burst flows in the network, so the changing trends of the data stream and slicing capacity requirements are truly reflected and a stable prediction effect is obtained. 2. The method is capable of spatiotemporal prediction. After training on a large amount of data, the spatial feature extraction layer can quickly compute the topology relationship and data-flow distribution of the network, and the temporal feature extraction layer can help predict sudden changes that may occur in the data flow based on potential correlations between the flows of different nodes; the two layers work in coordination to obtain accurate spatiotemporal predictions. 3. The method can convert data-flow prediction results. Using the mapping between the fluctuation trend of the data flow and the trend of the number of VNF instances, the method efficiently converts data-flow prediction into network slice capacity prediction through the transformation of data dimensions.

    Conclusions: With the vigorous development of artificial intelligence technology, the polymorphic network has given computing, storage, and transmission capabilities to the virtual nodes on network slices, and can improve both network resource utilization and user experience through the adaptive flow of business demand data, on the basis of autonomously sensing data throughput and autonomously predicting the resource requirements of the VNFs on its nodes. With the help of machine learning algorithms, the VNF resource capacity demand prediction method VNFPre proposed for polymorphic network scenarios can judge the future VNF resource capacity demand of network slices and provide a priori information for the placement and mapping of the VNFs carried by network slices.

    Resource allocation strategies for improved mayfly algorithm in cognitive heterogeneous cellular network
    Damin ZHANG, Yi WANG, Chengcheng ZOU, Peiwen ZHAO, Linna ZHANG
    2022, 43(6):  156-167.  doi:10.11959/j.issn.1000-436x.2022115

    Aiming at the optimization of uplink resource allocation in cognitive heterogeneous cellular networks, a resource allocation algorithm based on an improved discrete mayfly algorithm was proposed. In the cognitive heterogeneous cellular network model, a power control strategy was introduced to suppress interference from the transmitted power, and the improved discrete mayfly algorithm was used to find the optimal allocation scheme that maximizes energy efficiency (EE) under the users' quality of service (QoS) requirements and interference threshold constraints. To improve the convergence rate and search ability of the mayfly algorithm, dynamic adaptive weights based on the incomplete Gamma and Beta distribution functions and a golden sine position-updating strategy were introduced. The simulation results show that the SINR-based closed-loop power control can dynamically adjust users' transmit power and effectively restrain inter-user interference, and that the resulting GSWBMA has good optimization efficiency and convergence performance for the resource allocation problem, effectively improving the energy efficiency of the system and the transmission rate of users while guaranteeing users' QoS requirements.
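The golden sine position-updating strategy mentioned above can be sketched for one mayfly. This follows the general Gold-SA update (sine-scaled step toward the global best with golden-section coefficients c1, c2); the exact signs and random ranges vary between formulations, so treat this as an illustrative variant rather than the paper's equation.

```python
import math, random

def golden_sine_update(position, best, rng):
    """Golden sine position-update sketch: each coordinate is damped by a
    sine factor and pulled toward the global best, with golden-section
    coefficients c1, c2 shrinking the search interval."""
    tau = (math.sqrt(5) - 1) / 2                 # golden ratio conjugate
    a, b = -math.pi, math.pi                     # initial search interval
    c1 = a * (1 - tau) + b * tau
    c2 = a * tau + b * (1 - tau)
    r1 = rng.uniform(0, 2 * math.pi)             # controls the step distance
    r2 = rng.uniform(0, math.pi)                 # controls the step direction
    return [x * abs(math.sin(r1)) - r2 * math.sin(r1) * abs(c1 * g - c2 * x)
            for x, g in zip(position, best)]
```

In a discrete mayfly variant, the continuous update would be followed by a rounding or mapping step onto valid channel/power indices.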

    Ciphertext policy hidden access control scheme based on blockchain and supporting data sharing
    Ruizhong DU, Tianhe ZHANG, Pengliang SHI
    2022, 43(6):  168-178.  doi:10.11959/j.issn.1000-436x.2022119

    Objectives: Although traditional attribute-based encryption achieves one-to-many access control, challenges remain such as single points of failure, low efficiency, lack of support for data sharing, and privacy leakage. To solve these problems, a ciphertext-policy hidden access control scheme based on blockchain and supporting data sharing is proposed.

    Methods: First, an efficient attribute vector and policy vector generation algorithm is proposed using vector compression technology, which judges whether user attributes satisfy the access policy through the inner product of the attribute vector and the policy vector. Then, prime-order bilinear groups and attribute-based encryption are used to achieve fine-grained access control while avoiding the leakage of user attribute values. The ciphertext is stored in the InterPlanetary File System (IPFS), and its hash address is stored on the blockchain through a smart contract, realizing distributed and reliable access control while reducing the storage overhead of the blockchain. The revocation function is realized by maintaining a revocation list in the revocation contract, which prevents the abuse of users' private keys. Finally, data sharing is realized by combining proxy re-encryption technology.
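The inner-product policy test can be illustrated in the clear with a toy encoding for an AND-gate on +/- attributes: attributes satisfy the policy iff ⟨x, p⟩ = 0. This plaintext analogue is an assumption standing in for the paper's compressed-vector construction; the real scheme evaluates the same test under encryption in bilinear groups so the policy stays hidden.

```python
import random

def policy_vector(required, rng):
    """required[i] in {+1, -1}: attribute i must be present (+1) or absent (-1).
    p = (r1*v1, ..., rn*vn, -sum(ri)) with random blinding factors ri > 0."""
    rs = [rng.randrange(1, 1 << 16) for _ in required]
    return [r * v for r, v in zip(rs, required)] + [-sum(rs)]

def attribute_vector(values):
    """values[i] in {+1, -1}: the user holds (+1) or lacks (-1) attribute i."""
    return list(values) + [1]

def satisfies(x, p):
    # <x, p> = sum(ri * (vi*ui - 1)): each term is 0 on a match and
    # -2*ri on a mismatch, so the inner product is zero iff all match.
    return sum(a * b for a, b in zip(x, p)) == 0
```

Because the blinding factors are fresh per policy, a non-zero inner product reveals only "no match", not which attribute failed, which is the intuition behind policy hiding.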

    Results: A security analysis and a simulation analysis were carried out for the scheme. First, based on the asymmetric decisional bilinear Diffie-Hellman assumption, the ciphertext indistinguishability of the scheme in both the access control phase and the data sharing phase is proved. Second, the proposed scheme is compared with recent access control schemes using similar technologies in terms of group order, access structure, policy hiding, and so on; the comparison shows that the scheme in this paper has certain advantages in functional characteristics. The cost of deploying contracts and executing related functions on the blockchain is then evaluated, and the results show that the gas cost of the scheme is within a reasonable range. The final simulation results show that the proposed scheme is efficient in both the access control stage and the data sharing stage. Following the comparative experiments of existing papers, the number of attributes is set to 0-20. In the access control stage, the initialization time, key generation time, encryption time, and decryption time are compared with other schemes. The results show that although the computational overhead of the proposed scheme is relatively large in the initialization stage, its efficiency in the key generation, encryption, and decryption stages is higher than that of the other three schemes, so it is more efficient overall in the access control stage. In the data sharing stage, the re-encryption time and re-decryption time are compared with other schemes, and the results show that the proposed scheme is efficient in both. Because the scheme requires a constant number of pairings in the decryption and re-decryption stages, the decryption and re-decryption times are small and barely change as the number of attributes increases.

    Conclusions: The blockchain-based, ciphertext-policy-hiding access control scheme with data sharing constructed in this paper solves the problems of single point of failure, low efficiency, lack of data sharing support and privacy leakage in traditional attribute-based encryption schemes. Firstly, the attribute vector and policy vector generation algorithm proposed in this paper supports not only AND-gates on +/- attributes but also, by extension, AND-gates on multi-valued attributes. Secondly, distributed management of ciphertext is realized by using Ethereum and the InterPlanetary File System (IPFS). Afterwards, the use of prime-order bilinear groups improves the efficiency of bilinear pairings, and data sharing is realized by combining proxy re-encryption technology.

    Hybrid precoding and power allocation for mmWave NOMA systems based on time delay line arrays
    Gangcan SUN, Xinli WU, Wanming HAO, Zhengyu ZHU
    2022, 43(6):  179-188.  doi:10.11959/j.issn.1000-436x.2022120

    Objectives: The antenna structure in conventional millimeter-wave (mmWave) non-orthogonal multiple access (NOMA) systems is based on a high-resolution, high-energy phase-shifter modulation network, and reducing system energy consumption while improving resolution is one of the key problems to be solved. In this paper, a low-complexity, low-power delay line array consisting of switches and delay lines for continuous phase modulation is introduced into the mmWave NOMA system, and the energy efficiency (EE) and spectral efficiency (SE) of the system are investigated.

    Methods: To reduce inter-user interference, an improved K-means algorithm is proposed to group users and select a cluster head for each group, which maximizes the correlation of user channels within the same cluster while reducing the correlation between users in different clusters as much as possible. Then, a low-complexity analog precoding is designed to maximize the array gain of the antenna based on the channel matrix composed of the cluster-head set, followed by a digital precoding that eliminates inter-user interference via the zero-forcing technique while maximizing the equivalent channel gain between beams. Finally, an EE maximization problem is formulated to optimize the transmit power under users' quality-of-service and total transmit power constraints, and a two-layer iterative algorithm is proposed for the resulting non-convex optimization problem. Specifically, the Dinkelbach method is applied in the outer layer to transform the fractional objective of the EE optimization into a subtractive form; in the inner layer, the non-convex objective function is transformed into a convex function using mathematical tools, and an iterative algorithm based on alternating optimization (AO) is proposed for power allocation. The solution of the initial problem is finally obtained by iterating between the inner and outer layers.
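The outer Dinkelbach layer can be sketched on a toy single-link EE problem. Everything here is a simplifying assumption (a single power variable p, rate log2(1 + h·p), circuit power Pc, and a grid search standing in for the paper's convexified AO inner layer); it only illustrates how the fractional objective is turned into a subtractive one.

```python
import math

# Sketch of the Dinkelbach outer loop for EE maximization (a toy
# single-link stand-in for the paper's two-layer algorithm):
# EE(p) = R(p) / (p + Pc) with R(p) = log2(1 + h*p). The inner problem
# max_p R(p) - lam*(p + Pc) is solved by a fine grid search in place of
# the paper's convexified AO step.

def dinkelbach_ee(h=2.0, Pc=0.5, p_max=4.0, tol=1e-6):
    lam = 0.0
    p = 0.0
    grid = [i * p_max / 10000 for i in range(10001)]
    for _ in range(100):
        # inner problem: subtractive objective for the current lam
        p = max(grid, key=lambda q: math.log2(1 + h * q) - lam * (q + Pc))
        f = math.log2(1 + h * p) - lam * (p + Pc)
        if f < tol:            # converged: lam equals the optimal EE
            return lam, p
        lam = math.log2(1 + h * p) / (p + Pc)   # Dinkelbach update
    return lam, p

ee, p_opt = dinkelbach_ee()
print(round(ee, 3), round(p_opt, 3))
```

The fixed point of the lam update is exactly the maximum ratio, which is why the subtractive inner problem can replace the fractional one.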

    Results: Simulation analysis: (1) Figure 6(a) shows that the SE of the proposed scheme is higher than that of the conventional phase-shifter modulation network with fully-connected and sub-connected hybrid precoding structures; in addition, the system SE of the fully-connected structure under the proposed scheme is better than that of the hybrid-connected and sub-connected structures. (2) Figure 6(b) shows that the EE of the system under the proposed scheme is also better than that of the conventional phase-shifter-network-based and fully-digital precoding structures; also, the system EE of the sub-connected structure under the proposed scheme is better than that of the hybrid-connected and fully-connected structures. (3) Figure 7(a) shows that the SE of the system increases as the number of antennas increases; Figure 7(b) shows that the EE of the system decreases as the number of antennas increases. (4) Figures 8(a) and (b) show that, compared with the mmWave orthogonal multiple access (OMA) system based on the time-delay line array, the proposed scheme achieves better performance in terms of EE and SE. (5) Figure 9(a) shows that the proposed improved K-means user grouping algorithm outperforms the standard K-means user grouping algorithm. Although K-means can group users based on channel state information, the randomness of its initial cluster heads affects the convergence of the algorithm and the system performance. In addition, the random grouping algorithm has the worst performance: since there is inter-user interference in the NOMA system, random grouping increases the interference between users in the same cluster.

    Conclusions: The time-delay line array is composed of low-power, low-complexity switches and delay lines, and continuous phase modulation can be realized by adjusting the switches. The proposed mmWave NOMA system based on the delay line array can effectively improve the SE and EE of the system.

    Security decision method for the edge of multi-layer satellite network based on reinforcement learning
    Peiliang ZUO, Shaolong HOU, Chao GUO, Hua JIANG, Wenbo WANG
    2022, 43(6):  189-199.  doi:10.11959/j.issn.1000-436x.2022111

    Objectives: The multi-layer satellite network is an important component of space-ground integration technology. The purpose of this paper is to rely on the autonomous decision-making ability of satellite nodes to collaboratively process and backhaul sensing data, including its encryption, decryption and compression, in network edge scenarios. With the premise of ensuring data security and the goal of low transmission delay, edge decision-making for mission satellites in the multi-layer satellite network architecture is realized.

    Methods: This paper considers a multi-layer satellite network consisting of low-orbit satellites, medium-orbit satellites, and high-orbit geosynchronous satellites. Among them, the low-orbit satellite nodes are responsible for observation and reconnaissance services (such as meteorological observation, geographic detection, intelligence reconnaissance, etc.), and the medium-orbit satellites are regarded as fog nodes in the edge scenario, one of which serves as the fog computing processing center, responsible for planning which satellite nodes perform the compression and security encryption of the observation data, as well as for selecting the network for data backhaul. The geosynchronous orbit satellite has the largest coverage and the strongest computing capability. This paper uses a deep reinforcement learning algorithm to implement edge security decisions for the satellite network. Specifically, the edge center node obtains the environmental state of the satellite network through its perception system and, on this basis, uses the self-learning ability of the deep reinforcement learning algorithm to fit the optimal data offloading strategy and link planning for the scene, so that onboard resources can be fully utilized and the average backhaul delay of the many observation tasks is minimized. First, the edge center node observes the environment, obtains state elements such as the data volume of the observation satellite missions, the channel conditions, and the processing capability of the edge nodes, and then performs action selection. The selected strategy acts on the satellite network, which changes the state of the environment; the environment evaluates the strategy and feeds it back to the edge center node in the form of a reward. The edge center node then performs error calculation and updates the Q value based on the new environmental state and reward, in order to optimize the action selection strategy and obtain higher rewards in new environmental states. The above process is iterated continuously to finally obtain the optimal strategy.
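The observe-act-reward-update loop above can be sketched with a toy tabular agent (the paper uses a deep network; the link names, rates, and setup delays below are invented for illustration):

```python
import random

# Toy tabular Q-learning for backhaul link selection (a stand-in for
# the paper's deep RL agent): states are task sizes, actions are
# candidate links, and the reward is the negative return delay, so
# maximizing reward minimizes delay.

random.seed(0)
RATES = {"LEO": 8.0, "MEO": 2.0, "GEO": 1.0}   # hypothetical link rates
SETUP = {"LEO": 3.0, "MEO": 0.5, "GEO": 0.2}   # hypothetical setup delays
SIZES = [1.0, 8.0]                             # small / large tasks

def delay(size, link):
    return SETUP[link] + size / RATES[link]

Q = {(s, a): 0.0 for s in SIZES for a in RATES}
alpha, eps = 0.1, 0.2
for step in range(5000):
    s = random.choice(SIZES)
    if random.random() < eps:                  # epsilon-greedy exploration
        a = random.choice(list(RATES))
    else:
        a = max(RATES, key=lambda l: Q[(s, l)])
    r = -delay(s, a)                           # reward = negative delay
    Q[(s, a)] += alpha * (r - Q[(s, a)])       # one-step value update

best = {s: max(RATES, key=lambda l: Q[(s, l)]) for s in SIZES}
print(best)   # small tasks prefer the fast-setup link, large the high-rate one
```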

    Results: Keras is used as the simulation platform, and the constellation of low-orbit satellites is assumed to be the common Walker constellation. Taking a certain area in the multi-layer satellite network as the simulation object, the number of low-orbit observation satellites in this area is set to 8, the number of medium-orbit satellites to 3, and the number of high-orbit satellites to one. The simulation results cover three aspects: 1) Convergence of each method on random snapshots with different numbers of satellites. The results show that the proposed method converges for different numbers of satellites; as the number of satellites grows, the number of training iterations required for convergence increases significantly, because a larger number of satellites significantly enlarges the action space of the method. 2) The performance of the proposed method under different network configurations is compared. Simulation results show that the proposed method has the best convergence performance under all 4 configurations; however, although the initial performance of the low-high network configuration is excellent under some snapshots, its convergence performance deteriorates as training progresses, because this configuration offers fewer link choices, which limits its performance. 3) The performance of the proposed method and the comparison methods is verified on the test set. The results show that, compared with random edge security decisions and edge security decisions guided by the signal-to-noise ratio, the proposed method has a clear advantage in delay performance and differs only slightly from the optimal edge security decisions obtained by traversal.

    Conclusions: Aiming at the link selection problem of multi-layer satellite nodes for low-orbit observation satellites, this paper proposes a data compression and encryption backhaul decision method based on deep reinforcement learning. By designing the state, action, reward, and training network parameters of the method to match the scene, the proposed method can make intelligent and efficient edge decisions with the goal of low transmission delay.

    Fast blind detection of short-wave frequency hopping signal based on MeanShift
    Zhengyu ZHU, Yu LIN, Zixuan WANG, Kexian GONG, Pengfei CHEN, Zhongyong WANG, Jing LIANG
    2022, 43(6):  200-210.  doi:10.11959/j.issn.1000-436x.2022118

    In the complex short-wave channel environment, combined with time-frequency analysis technology, a fast blind detection algorithm for frequency hopping signals based on connected-domain labeling and the MeanShift algorithm was proposed to reduce the influence of various interference signals and noises on frequency hopping signals and to realize blind detection of frequency hopping signals under a low signal-to-noise ratio. Firstly, the gray-scale time-frequency map of the channel environment was filtered by secondary gray-scale morphology to obtain a binary time-frequency map. Secondly, the maximum duration of each signal was calculated by the connected-domain labeling algorithm. Then, the MeanShift algorithm was used to cluster the maximum durations of the signals. Finally, a second judgment was made on the clustering result by combining it with an adaptive double threshold. The simulation results show that the proposed algorithm can quickly separate various interference signals and sharp noise under a low signal-to-noise ratio and realize fast blind detection of frequency hopping signals without any prior information. It has a high detection probability, strong anti-interference ability in the short-wave channel environment, low computational complexity and high practical engineering value.
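The MeanShift clustering step can be illustrated on one-dimensional duration data. This is a minimal flat-kernel sketch with invented values, not the paper's algorithm: hop dwell times gather around one mode and long interference bursts around another, which is what makes the subsequent double-threshold judgment possible.

```python
# Minimal 1-D mean-shift (flat kernel), sketching the clustering step
# that separates candidate signal durations: short frequency-hop dwells
# concentrate around one mode, long continuous interference around another.

def mean_shift_1d(points, bandwidth=2.0, iters=50):
    modes = []
    for x in points:
        for _ in range(iters):
            window = [p for p in points if abs(p - x) <= bandwidth]
            x = sum(window) / len(window)      # shift to the local mean
        modes.append(round(x, 1))
    return sorted(set(modes))

# durations in ms (invented): six hop dwells near 10, three bursts near 40
durations = [9.6, 10.1, 10.4, 9.9, 10.2, 9.8, 39.5, 40.2, 40.6]
print(mean_shift_1d(durations))
```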

    Comprehensive Review
    New dimension in vortex electro-magnetic wave transmission with orbital angular momentum
    Chao ZHANG, Yuanhe WANG
    2022, 43(6):  211-222.  doi:10.11959/j.issn.1000-436x.2022087

    Purpose: The vortex Electro-Magnetic (EM) wave transmission system with Orbital Angular Momentum (OAM), a potential key technology for future mobile communications, is easily confused with the traditional Multiple-Input Multiple-Output (MIMO) transmission system. This leads to controversy over whether the OAM of vortex EM waves can provide a new dimension for wireless transmission. This paper points out that only the vortex EM wave transmission system whose vortex photons carry Intrinsic OAM (IOAM) can obtain the new dimension. In contrast, the Extrinsic OAM (EOAM) formed by the plane microwave photons of a statistical OAM beam is coupled with the space domain and, compared with multi-antenna MIMO transmission, cannot provide an additional new dimension.

    Method: This paper analyzes the physical characteristics of EM waves with OAM and traces the history of EM wave resource utilization and development. Moreover, the formula of Shannon channel capacity containing the OAM dimension is given, and the significance of power multiplexing over the new dimension for capacity enhancement is specified. To show the insight of the new dimensional characteristics of vortex EM waves, typical vortex EM wave OAM transmission systems are classified into four regions according to channel capacity. Taking the microwave band as an example, it is pointed out that only quantum OAM vortex EM wave transmission based on vortex microwave photons can surpass the traditional multi-antenna MIMO capacity bound and form a new MIMO capacity bound containing the OAM dimension.
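How an extra orthogonal dimension enters the Shannon formula can be illustrated with a generic multiplexing calculation (illustrative only; the SNR value and equal power split are assumptions, and the paper's OAM-dimension capacity bound is more involved):

```python
import math

# Generic illustration of capacity growth from K orthogonal subchannels
# (e.g. independent modes) with the total power split equally, versus a
# single channel using all the power. Not the paper's exact bound.

def capacity_multiplexed(snr_total, k):
    """Sum rate of k equal-power orthogonal subchannels, bit/s/Hz."""
    return k * math.log2(1 + snr_total / k)

def capacity_single(snr_total):
    return math.log2(1 + snr_total)

snr = 100.0   # 20 dB total SNR, an assumed example value
for k in (1, 2, 4):
    print(k, round(capacity_multiplexed(snr, k), 2))
```

At high SNR the sum rate grows nearly linearly in the number of orthogonal dimensions, which is why an additional independent dimension matters for the capacity bound.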

    Consequence: To illustrate the application scope and the advantages of quantum OAM electromagnetic waves and statistical OAM beams, this paper classifies typical OAM transmission systems into four regions based on channel capacity. From high channel capacity to low: Region A is the quantum OAM transmission system with the new OAM dimension, which uses vortex microwave photons to convey information; its capacity bound can be enhanced to surpass the traditional multi-antenna MIMO capacity bound. Regions B, C and D belong to statistical OAM vortex beams, which do not provide a new dimension in MIMO transmission but perform outstandingly in the Line-of-Sight (LoS) channel. Compared with traditional LoS MIMO transmission, Region B refers to the OAM dedicated antenna transmission system, which can recover the channel orthogonality and the rank of the channel matrix, so that a significant capacity enhancement is obtained; it represents the development trend of statistical OAM vortex beams. Region C refers to the array antenna full-phase-plane transmission system; it enjoys low system complexity and high technical maturity as a representative of early OAM technology. Region D refers to the partial-phase-plane transmission system, which does not need to receive the complete phase plane and is suitable for long-distance transmission.

    Conclusion: In response to the controversy over whether OAM electromagnetic waves can provide a new dimension for wireless transmission, this paper analyzes the physical insight of EM wave transmission with OAM throughout its history. It can be concluded that both intrinsic OAM and extrinsic OAM can be utilized by EM waves, but only quantum OAM transmission of vortex microwave photons based on intrinsic OAM can generate a new dimension in wireless transmission beyond MIMO transmission. Statistical OAM vortex beams based on extrinsic OAM cannot constitute a new dimension beyond MIMO transmission and are just a special case of multi-antenna MIMO systems that generate beams with a helical phase front.

    Correspondences
    Multi-objective optimal offloading decision for cloud-edge collaborative computing scenario in Internet of vehicles
    Sifeng ZHU, Jianghao CAI, Zhengyi CHAI, Enlin SUN
    2022, 43(6):  223-234.  doi:10.11959/j.issn.1000-436x.2022114

    Objectives: Computing tasks in the Internet of vehicles are very sensitive to offloading delay, and cloud-edge collaborative computing is required to meet such requirements. However, the fast movement of vehicles in the Internet of vehicles makes the conventional cloud-edge collaborative model inapplicable. Combining vehicle-to-vehicle communication technology and edge caching technology, this paper explores a cloud-edge collaborative computing offloading model suitable for the Internet of vehicles.

    Methods: To address the challenge of efficiently offloading services in the cloud-edge collaborative computing scenario of the Internet of vehicles, while jointly considering service offloading decisions and the collaborative resource allocation of edge servers and cloud servers, a vehicle computing network architecture based on cloud-edge collaboration was designed. In this architecture, vehicle terminals, cloud servers and edge servers can all provide computing services. A cache strategy was introduced into the Internet of vehicles scenario by classifying cacheable tasks. The cache model, delay model, energy consumption model, quality of service model and multi-objective optimization model were designed in turn, and the maximum offloading delay of tasks was introduced into the quality of service model. An improved multi-objective optimization immune algorithm (MOIA) was proposed for offloading decision making; the algorithm is a multi-objective evolutionary algorithm that combines immune principles with a reference point strategy to optimize the multi-objective problem.

    Results: The effectiveness of the proposed offloading decision scheme was verified by comparative experiments. Experimental results show that the computational offloading model proposed in this paper can cope with tasks with different requirements and has good adaptability while meeting the maximum offloading delay. The offloading delay in this model consists of seven parts: the cache delay for downloading the service application required by a task to the server, the uploading delay from vehicle to edge server, the uploading delay from edge server to cloud server, the execution delay of the task, the queuing delay of the task on the server, the transmission delay for tasks transmitted across regions through the server, and the transmission delay for tasks transmitted through vehicle-to-vehicle communication. The experiments on the communication strategy and cache strategy show that these delay components are closely related. The effect of the cache strategy is tested by removing half of the cacheable edge service applications (MOIA-C). The results show that the total offloading delay and cache delay of the MOIA-C scheme increase by 35.88% and 196.85% respectively compared with the MOIA scheme, because of the decrease in the number of cacheable service applications: the scheme is more inclined to offload tasks to the cloud server, which caches all service applications and has higher performance. As a result, the uploading delay from edge server to cloud server and the queuing delay on the server increase, the execution delay decreases, the system energy consumption decreases, and the quality of service index increases. The communication strategy adopts a hybrid transmission mode based on server communication and vehicle-to-vehicle communication. The experiment on the communication strategy is realized by disabling the vehicle-to-vehicle communication mode (MOIA-S). The results show that the total offloading delay and communication delay of the MOIA-S scheme increase by 58.45% and 433.33% respectively compared with the MOIA scheme, because using only the server to transport tasks puts extreme strain on bandwidth. To reduce the bandwidth pressure caused by cross-region task transmission, the scheme tends to offload tasks to the cloud server; therefore, the cache delay of the service application and the processing delay of the task decrease, the queuing delay increases, the system energy consumption decreases, and the quality of service index increases.
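The seven-part delay decomposition can be written down as a simple accounting function (the component names and numeric values below are invented for illustration, not the paper's model):

```python
# Toy accounting of the seven delay components of an offloaded task:
# the total offloading delay is the sum of whichever components the
# chosen route actually incurs. All values are illustrative.

DELAY_PARTS = [
    "cache",         # fetching the service application to the server
    "up_v2e",        # vehicle -> edge server upload
    "up_e2c",        # edge server -> cloud server upload
    "exec",          # task execution
    "queue",         # queuing on the server
    "cross_region",  # cross-region transfer through servers
    "v2v",           # vehicle-to-vehicle transfer
]

def total_delay(parts):
    """Sum the delay components (in seconds) a given route incurs."""
    return sum(parts.get(name, 0.0) for name in DELAY_PARTS)

edge_route  = {"cache": 0.4, "up_v2e": 1.1, "exec": 0.6, "queue": 0.3}
cloud_route = {"up_v2e": 1.1, "up_e2c": 2.0, "exec": 0.2, "queue": 0.9}
print(total_delay(edge_route), total_delay(cloud_route))
```

With numbers like these, one can see the trade-off discussed above: the cloud route trades extra upload delay for lower execution delay.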

    Conclusions: Based on vehicle-to-vehicle communication technology and edge caching technology, this paper proposes an adaptive service caching and task offloading strategy, which can effectively reduce the total delay of vehicle tasks and the energy consumption of vehicles while ensuring the quality of service, providing better service for highly delay-sensitive tasks in Internet of vehicles scenarios.

    Research on formant estimation algorithm for high order optimal LPC root value screening
    Hua LONG, Shumeng SU
    2022, 43(6):  235-245.  doi:10.11959/j.issn.1000-436x.2022113

    Objectives: Existing linear prediction (LP) formant estimation algorithms struggle to locate formants precisely because of pseudo-root interference and interaction between poles. Low-order LP fitting of formants fundamentally limits the accuracy of formant extraction, while in high-order LP formant extraction it is difficult to remove false roots and the spectrum aliasing caused by pole interaction. To solve the problem of large errors in LP formant detection, a formant estimation algorithm based on root-value screening of high-order LP coefficients was proposed. The root determination threshold, the optimal LP root-value distribution, the peak distribution of formants in the spectral envelope, and the formant estimation error under the constraints of the speech digital resonance model are investigated at different orders.

    Methods: The LP order is increased to improve how well the LP system spectrum fits the speech signal. The calculation precision of the formant frequencies is analyzed at different orders, and the root values of the linear system with higher peak-fitting precision are obtained. A speech digital resonance model is used to constrain the root amplitude range of the formants, and the number of false roots is reduced by matching the root amplitudes at each order to filter the root values of the linear system. Combined with power weighting, the main spectral components of the signal are weighted: the speech spectrum amplitude is corrected, the energy matching between the spectral peaks of the speech signal and those of the LPC is enhanced, the distance between poles is extended, the prediction error caused by harmonic interference is reduced, and the peak frequency discrimination of the spectrum is improved.
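The root-value screening idea can be sketched directly on candidate LPC roots (the sampling rate, radius threshold, and example roots below are assumptions for illustration, not the paper's calibrated thresholds): roots near the unit circle are kept as formants, shallow roots are discarded as false roots.

```python
import cmath
import math

# Sketch of LPC root-value screening: candidate complex roots of a
# high-order LPC polynomial are mapped to frequencies, and only roots
# close to the unit circle (deep resonances) in the upper half-plane
# are kept as formant candidates. Thresholds and values are assumed.

FS = 8000.0  # sampling rate in Hz (assumed)

def root_to_formant(r, fs=FS):
    """Map a complex LPC root to a formant frequency in Hz."""
    return cmath.phase(r) * fs / (2 * math.pi)

def screen_roots(roots, r_min=0.90, fs=FS):
    formants = [root_to_formant(r, fs)
                for r in roots
                if abs(r) >= r_min and r.imag > 0]  # keep deep, upper-half roots
    return sorted(formants)

# hypothetical roots: two strong resonances plus one shallow false root
roots = [cmath.rect(0.97, 2 * math.pi * 700 / FS),
         cmath.rect(0.95, 2 * math.pi * 1800 / FS),
         cmath.rect(0.60, 2 * math.pi * 400 / FS)]  # false root: |r| = 0.6
roots += [r.conjugate() for r in roots]             # LPC roots come in pairs
print([round(f) for f in screen_roots(roots)])      # formants near 700 and 1800 Hz
```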

    Results: As can be seen from the algorithm structure, the speech signal is first preprocessed: the low-frequency information is reweighted to reduce the interference of the fundamental frequency with formant detection, the high-frequency information is enhanced to increase the amplitude distinction of the third formant among the high spectral lines, and endpoint detection isolates the voiced frames for high-order LP analysis under the constraint of the digital resonance model. The model includes three main techniques that improve performance: (1) Within the system tolerance range, increasing the LP order improves formant prediction accuracy. A formant is a peak frequency of the spectral envelope, corresponding to a pole of the LP polynomial. Ninth-order linear prediction only preserves the basic shape of the LP amplitude response spectrum of the speech signal. When the LP order is increased to 15, the fit to the signal improves, and the zeros and poles of the LP are denser and distributed closer to the unit circle. The 15th-order LP compensates for the formant-fitting accuracy sacrificed by the 9th-order linear fit, improving formant extraction accuracy by 2.5%. (2) Using the root-value threshold under the constraint of the digital resonance model to screen the complex roots, the low-frequency false roots generated by fundamental-frequency harmonics and the false roots generated by formant harmonics are effectively filtered out. The poles of the LP polynomial are the complex roots corresponding to the formant peaks. From the distribution of formant detection root values, the high-order LP root threshold constrained by the digital formant root values can effectively filter the false roots generated by the harmonic action of the vocal tract, and the roots in the unit circle corresponding to the formants are accurately located. (3) Reweighting the speech spectrum power makes the formant prediction of the corrected signal more accurate, and the spectral envelope energy is more concentrated after power weighting. At order 18, the aliasing interference between the formant peak frequencies at 1363 Hz and 1359 Hz is eliminated. In terms of robustness and the overall performance comparison of different methods, the proposed algorithm can extract formants robustly from order 9 to order 22, and shows optimal performance when formants are extracted at order 18.

    Conclusions: The LPC-based formant detection method is improved, and the effect of increasing the linear prediction order on formant extraction was studied. Aiming at the problems of multiple pseudo-roots and multi-pole interaction caused by increasing the order of linear prediction, the formant extraction error is minimized under the constraint of the speech digital resonance model. The relationship between the linear prediction order and the root-amplitude screening threshold was analyzed, and, to remove false roots, the root-amplitude feedback method under the digital resonance constraint was used to obtain a screening threshold matching high orders with a low error rate. Combined with power weighting, the amplitudes of the prominent spectral peaks are strengthened, which eliminates pole interaction in formant extraction and achieves accurate and effective formant extraction.

Copyright Information
Authorized by: China Association for Science and Technology
Sponsored by: China Institute of Communications
Editor-in-Chief: Zhang Ping
Associate Editor-in-Chief:
Zhang Yanchuan, Ma Jianfeng, Yang Zhen, Shen Lianfeng, Tao Xiaofeng, Liu Hualu
Editorial Director: Wu Nada, Zhao Li
Address: F2, Beiyang Chenguang Building, Shunbatiao No.1 Courtyard, Fengtai District, Beijing, China
Post: 100079
Tel: 010-53933889, 53878169, 53859522, 010-53878236
Email: xuebao@ptpress.com.cn
Email: txxb@bjxintong.com.cn
ISSN 1000-436X
CN 11-2102/TN