
Current Issue

    25 March 2021, Volume 42 Issue 3
    Papers
    Reconstruction of sparse check matrix for LDPC at high bit error rate
    Zhaojun WU, Limin ZHANG, Zhaogen ZHONG, Renxin LIU
    2021, 42(3):  1-10.  doi:10.11959/j.issn.1000-436x.2021009

    In order to reconstruct the sparse check matrix of LDPC codes, a new algorithm that could directly reconstruct the LDPC check matrix was proposed. Firstly, according to the principle of the traditional reconstruction algorithm, its defects and their causes were analyzed in detail. Secondly, based on the characteristics of sparse matrices, some bit sequences in the code words were randomly extracted for Gaussian elimination. At the same time, to reliably ensure that the extracted bit sequences contained parity check nodes, the number of random extractions was determined from the probability that a single extraction contains check nodes. Finally, the statistical characteristics of the LDPC code under a suspected check vector were analyzed, and the sparse check vector was determined based on the minimum-error decision rule. The simulation results show that the reconstruction rate of most LDPC codes in the IEEE 802.11 protocol can exceed 95% at a BER of 0.001, and the noise robustness of the proposed method is better than that of the traditional algorithm. Moreover, the new algorithm does not require the parity check matrix to be sparse, and performs well for both diagonal and non-diagonal check matrices.
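As an illustration of the Gaussian-elimination step described above, the following sketch row-reduces a binary matrix of extracted codeword bits over GF(2). Function and variable names are hypothetical; the full algorithm's extraction statistics and minimum-error decision rule are omitted.

```python
import numpy as np

def gf2_eliminate(rows):
    """Row-reduce a binary matrix over GF(2); return (rank, reduced matrix).

    rows: 2D numpy array of 0/1 ints (randomly extracted codeword bits).
    """
    m = rows.copy() % 2
    n_rows, n_cols = m.shape
    rank = 0
    for col in range(n_cols):
        # find a pivot row with a 1 in this column
        pivot = next((r for r in range(rank, n_rows) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        # XOR the pivot row into every other row holding a 1 in this column
        for r in range(n_rows):
            if r != rank and m[r, col]:
                m[r] ^= m[rank]
        rank += 1
    return rank, m
```

A rank deficit of the extracted block signals linear dependencies, i.e. candidate parity check relations.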

    Multi-key homomorphic proxy re-encryption scheme based on NTRU and its application
    Ruiqi LI, Chunfu JIA, Yafei WANG
    2021, 42(3):  11-22.  doi:10.11959/j.issn.1000-436x.2021023

    To improve the practicability of homomorphic encryption in multi-user cloud computing applications, an NTRU-based multi-key homomorphic proxy re-encryption (MKH-PRE) scheme was constructed. Firstly, a new form of NTRU-based multi-key ciphertext was proposed based on the idea of ciphertext extension, and the corresponding homomorphic operations and relinearization procedure were designed for this new ciphertext form, yielding an NTRU-based multi-key homomorphic encryption (MKHE) scheme that supports distributed decryption. Then, resorting to the idea of key switching, a re-encryption key and re-encryption procedure were put forward so that the functionality of proxy re-encryption (PRE) was integrated into the new NTRU-based MKHE scheme. The MKH-PRE scheme preserves the properties of both MKHE and PRE, and has better performance on the client side. The scheme was applied to privacy-preserving problems in federated learning, and an experiment on this application was carried out. The results demonstrate that the accuracy of learning is scarcely affected by the encryption procedure and that the computational overhead of the MKH-PRE scheme is acceptable.

    Generalized Grad-CAM attacking method based on adversarial patch
    Nianwen SI, Wenlin ZHANG, Dan QU, Heyu CHANG, Shengxiang LI, Tong NIU
    2021, 42(3):  23-35.  doi:10.11959/j.issn.1000-436x.2021025

    To verify the fragility of Grad-CAM, a Grad-CAM attack method based on an adversarial patch was proposed. By adding a Grad-CAM constraint to the classification loss function, an adversarial patch could be optimized and an adversarial image synthesized. The adversarial image guided the Grad-CAM interpretation result towards the patch area while the classification result remained unchanged, thereby attacking the interpretation. Meanwhile, through batch training on the dataset and an added perturbation-norm constraint, the generalization and multi-scene usability of the adversarial patch were improved. Experimental results on the ILSVRC2012 dataset show that, compared with existing methods, the proposed method can attack the Grad-CAM interpretation results more simply and effectively while maintaining classification accuracy.
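For reference, the map the attack steers is the published Grad-CAM localization map (this is the standard Grad-CAM definition, not the paper's attack loss, whose exact form is not given in the abstract):

```latex
\alpha_k^{c} = \frac{1}{Z}\sum_{i}\sum_{j}\frac{\partial y^{c}}{\partial A_{ij}^{k}},
\qquad
L^{c}_{\text{Grad-CAM}} = \mathrm{ReLU}\!\left(\sum_{k}\alpha_k^{c} A^{k}\right),
```

where $A^{k}$ is the $k$-th feature map of the chosen convolutional layer, $y^{c}$ the score of class $c$, and $Z$ the number of spatial positions. The attack adds a term to the classification loss that pulls this map toward the patch region.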

    Research on link prediction model based on hierarchical attention mechanism
    Xiaojuan ZHAO, Yan JIA, Aiping LI, Kai CHEN
    2021, 42(3):  36-44.  doi:10.11959/j.issn.1000-436x.2021057

    To address the problem that existing graph attention mechanisms tend to concentrate attention on a few high-frequency relations when performing link prediction tasks, a new link prediction model based on a hierarchical attention mechanism was proposed. In the link prediction task, a hierarchical attention mechanism was designed to give different attention to relationships of different types connected to a given entity in the knowledge graph, according to the relation in the prediction task. While attention was paid to the characteristics of multi-hop neighbor entities, more attention was paid to the relationship characteristics in order to find the relationship type that matches the target relation. Comparison experiments with mainstream models on multiple benchmark datasets show that the model outperforms the mainstream models and has good robustness.

    Two-timescale unlicensed spectrum partitioning algorithm between LTE and Wi-Fi network
    Weihua WU, Runzi LIU, Qinghai YANG
    2021, 42(3):  45-53.  doi:10.11959/j.issn.1000-436x.2021059

    Since the spectrum partitioning decisions of Wi-Fi and LTE depend on global channel state information (CSI) and local CSI respectively, an online two-timescale iteration algorithm was developed. The Wi-Fi spectrum was partitioned at the large timescale according to the global CSI, whereas the LTE spectrum was partitioned according to the fast-changing local CSI. Then, an adaptive compensation mechanism was designed to improve the tracking performance of the two-timescale algorithm. Moreover, a sufficient condition was derived for the two-timescale algorithm to track the moving equilibrium point without error. Finally, the simulation results show that the proposed two-timescale algorithm achieves excellent system performance at very low overhead.

    Study of forecasting urban private car volumes based on multi-source heterogeneous data fusion
    Chenxi LIU, Dong WANG, Huiling CHEN, Renfa LI
    2021, 42(3):  54-64.  doi:10.11959/j.issn.1000-436x.2021018

    To effectively capture the spatio-temporal characteristics of urban private car travel, a multi-source heterogeneous data fusion model for private car volume prediction was proposed. Firstly, private car trajectory and area-of-interest data were integrated. Secondly, the spatio-temporal correlations between private car travel and urban areas were modeled through multi-view spatio-temporal graphs, and a multi-graph convolution-attention network (MGC-AN) was proposed to extract the spatio-temporal characteristics of private car travel. Finally, the spatio-temporal characteristics and external characteristics such as weather were integrated for joint prediction. Experiments were conducted on real datasets collected in Changsha and Shenzhen. The experimental results show that, compared with existing prediction models, the MGC-AN reduces the root mean square error by 11.3%–20.3% and the mean absolute percentage error by 10.8%–36.1%.
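The two error metrics quoted above are the usual ones; for $N$ predictions $\hat{y}_i$ against ground truth $y_i$:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_i-\hat{y}_i\right)^{2}},
\qquad
\mathrm{MAPE} = \frac{100\%}{N}\sum_{i=1}^{N}\left|\frac{y_i-\hat{y}_i}{y_i}\right|.
```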

    Performance analysis of physical layer security based on channel correlation
    Xuanli WU, Zhicong XU, Yuchen WANG, Yong LI
    2021, 42(3):  65-74.  doi:10.11959/j.issn.1000-436x.2021066

    In view of the correlation between the main and eavesdropper's channels in physical layer security, the impact of channel correlation on secrecy performance was studied. Exact formulations of the ergodic secrecy capacity and secrecy outage probability were derived, followed by asymptotic formulations for the large and small channel-state scenarios. The theoretical analysis was verified by numerical simulations. The simulation results show that channel correlation causes a loss of ergodic secrecy capacity; however, this does not mean that the outage probability of communication increases with high channel correlation. In fact, when the outage rate is set reasonably, the impact of channel correlation depends on the average signal-to-noise ratio (SNR) at the receiver. In the high SNR range, high correlation decreases the outage probability; in the low SNR range, high correlation increases it; and in the medium SNR range, correlation does not affect the outage probability significantly. Based on these conclusions, parameters can be set according to the SNR and correlation so that secrecy performance is maintained.
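For context, the standard definitions underlying the derived quantities (generic forms from the physical-layer-security literature; the paper's correlated-channel expressions are more involved):

```latex
C_s = \left[\log_2\!\left(1+\gamma_B\right) - \log_2\!\left(1+\gamma_E\right)\right]^{+},
\qquad
\bar{C}_s = \mathbb{E}_{\gamma_B,\gamma_E}\!\left[C_s\right],
\qquad
P_{\text{out}} = \Pr\left\{C_s < R_s\right\},
```

where $\gamma_B$ and $\gamma_E$ are the instantaneous SNRs of the main and eavesdropper channels, $[x]^{+}=\max(x,0)$, and $R_s$ is the target secrecy rate.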

    In-band network telemetry system based on high-performance packet processing architecture VPP
    Tian PAN, Xingchen LIN, Jiao ZHANG, Tao HUANG, Yunjie LIU
    2021, 42(3):  75-90.  doi:10.11959/j.issn.1000-436x.2021016

    An in-band network telemetry system was built based on VPP, a high-performance virtual network forwarding architecture, by reorganizing the data plane pipeline processing modules. Moreover, a network-wide telemetry mechanism was developed by embedding source routing into the probe packet header to specify the route of probe packets. Finally, a virtual network topology was built and its performance evaluated. The evaluation shows that the telemetry system can monitor the network at a high precision of every 0.13 ms, detecting link congestion in real time with minor performance overhead. Given that virtual network devices have been widely deployed in data centers, the proposed scheme is expected to improve the reliability of multi-tenancy and network function virtualization in data centers through high-precision, network-wide virtual link telemetry coverage.

    WSN clustering routing algorithm based on PSO optimized fuzzy C-means
    Aijing SUN, Shichang LI, Yicai ZHANG
    2021, 42(3):  91-99.  doi:10.11959/j.issn.1000-436x.2021053

    Aiming at the problems of limited energy and unbalanced load in wireless sensor networks, POFCA, a clustering routing algorithm based on particle swarm optimization (PSO) fuzzy C-means, was proposed. POFCA was optimized in both the clustering stage and the data transmission stage. In the clustering stage, PSO-optimized fuzzy C-means was first used to overcome sensitivity to the initial cluster centers, and the cluster head was dynamically updated according to the remaining power and the relative distance of the nodes to balance the network load. In the data transmission stage, a path evaluation function was designed based on a distance factor, an energy factor and the node load. In addition, cat swarm optimization was used to search for the optimal routing path for the cluster head, balancing the cluster-head load without increasing the load of relay nodes. The simulation results show that, compared with the LEACH and improved-LEACH algorithms, POFCA can effectively balance the network load, reduce the energy consumption of nodes and extend the lifetime of the entire network.
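The fuzzy C-means core of the clustering stage can be sketched in a few lines of numpy; this shows only the plain FCM update steps, with the PSO initialization and both routing stages omitted, and all names illustrative.

```python
import numpy as np

def fcm_memberships(points, centers, m=2.0):
    """Fuzzy C-means membership update: u[i, j] is the degree to which
    node j belongs to cluster i; each column of u sums to 1."""
    # pairwise distances, shape (n_clusters, n_points)
    d = np.linalg.norm(points[None, :, :] - centers[:, None, :], axis=2)
    d = np.maximum(d, 1e-12)  # guard against zero distance
    p = 2.0 / (m - 1.0)
    # u[i, j] = 1 / sum_k (d[i, j] / d[k, j])**p
    return 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** p, axis=1)

def fcm_centers(points, u, m=2.0):
    """Recompute cluster centers as membership-weighted means."""
    w = u ** m
    return (w @ points) / w.sum(axis=1, keepdims=True)
```

Alternating these two updates until the centers stabilize gives the standard FCM iteration; PSO replaces the arbitrary initial centers with optimized ones.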

    Robust chance-constrained optimization algorithm design for secure wireless powered backscatter communication system
    Wanming HAO, Jinkun XIE, Gangcan SUN, Zhengyu ZHU, Yiqing ZHOU
    2021, 42(3):  100-110.  doi:10.11959/j.issn.1000-436x.2021048

    For the channel uncertainty and security problems in wireless powered backscatter communication (WPBC) systems, a robust chance-constrained optimization algorithm was proposed. Firstly, an optimization problem was established to maximize the minimum transmission rate of the system under channel uncertainty, considering eavesdropping data rate constraints, minimum energy harvesting constraints and device reflectivity constraints. To solve the problem, the original uncertain problem was transformed into a deterministic optimization problem using a safe approximation method based on the Bernstein inequality. Then, combining the properties of the inequality, auxiliary variables were introduced to transform the deterministic problem into a convex optimization problem, which was solved with a standard convex optimization algorithm. Finally, the simulation results show the effectiveness of the proposed algorithm.

    Reliability evaluation of hierarchical hypercube network
    Ximeng LIU, Yufang ZHANG, Shuming ZHOU, Xiaoyan LI
    2021, 42(3):  111-121.  doi:10.11959/j.issn.1000-436x.2021064

    Since research on the reliability of hierarchical hypercube networks was not yet systematic, which severely restricted their application and popularization, the hierarchical hypercube network was taken as the object of study. On the basis of its relevant topological properties, the h-extra conditional diagnosability and t/s-diagnosability of the n-dimensional hierarchical hypercube (HHCn) network under the PMC model and the MM* model were obtained by theoretical deduction. In addition, t/s-diagnosis algorithms for HHCn under the PMC and MM* models were designed and their time complexity was analyzed. The results show that the h-extra conditional diagnosability of HHCn is about h+1 times its traditional diagnosability, and the t/s-diagnosability of HHCn is about s+1 times its traditional diagnosability. These results improve the reliability indices of the hierarchical hypercube network and provide an important theoretical basis for its application and popularization.

    Defense-enhanced dynamic heterogeneous redundancy architecture based on executor partition
    Ting WU, Chengnan HU, Qingnan CHEN, Anbang CHEN, Qiuhua ZHENG
    2021, 42(3):  122-134.  doi:10.11959/j.issn.1000-436x.2021022

    Aiming at the security problem of servants facing common vulnerabilities, an improved DHR architecture called IDHR was proposed. On the basis of DHR, an executor-partition module was introduced that divides the executor set into several executor pools according to the heterogeneity among the executors, improving the heterogeneity among the executor pools. Moreover, the scheduling algorithm was improved by first choosing executor pools at random and then choosing executors from those pools at random. Finally, through two experimental schemes, random simulation and Web server emulation, the security of the proposed IDHR architecture was evaluated in terms of attack success rate and control rate. Experimental results show that the security of IDHR, especially when the common vulnerability is unknown, is significantly better than that of the traditional DHR architecture.

    Ciphertext-only fault analysis of the TWINE lightweight cryptographic algorithm
    Wei LI, Menglin WANG, Dawu GU, Jiayao LI, Tianpei CAI, Guangwei XU
    2021, 42(3):  135-149.  doi:10.11959/j.issn.1000-436x.2021039

    A security analysis of TWINE against ciphertext-only fault analysis was proposed. The secret key of TWINE could be recovered with a success probability of at least 99% using a series of distinguishers: SEI, MLE, HW, GF, GF-SEI, GF-MLE, Parzen-HW, MLE-HE, HW-HE and HW-MLE-HE. Among them, the newly proposed MLE-HE, HW-HE and HW-MLE-HE distinguishers can effectively reduce the number of required faults and improve attack efficiency in simulation experiments. The analysis provides a significant reference for evaluating the security of lightweight ciphers in the Internet of things.
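Two of the simpler distinguisher statistics named above can be sketched as follows. TWINE is a nibble-oriented cipher, so the intermediate values are assumed here to lie in 0..15; this is an illustration of the statistics only, not the paper's full attack.

```python
from collections import Counter

def sei(values, bits=4):
    """Squared Euclidean imbalance: distance of the empirical distribution
    of intermediate values from the uniform distribution over 2**bits symbols.
    Under the correct key guess the distribution is skewed, so SEI is large."""
    n = len(values)
    counts = Counter(values)
    uniform = 1.0 / (1 << bits)
    return sum((counts.get(v, 0) / n - uniform) ** 2 for v in range(1 << bits))

def mean_hamming_weight(values):
    """Hamming-weight distinguisher statistic: the average bit count of the
    recovered intermediate values, which is biased under the right key."""
    return sum(bin(v).count("1") for v in values) / len(values)
```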

    Context-aware learning-based access control method for power IoT
    Zhenyu ZHOU, Zehan JIA, Haijun LIAO, Xiongwen ZHAO, Lei ZHANG
    2021, 42(3):  150-159.  doi:10.11959/j.issn.1000-436x.2021062

    In view of the severe access conflicts, high queue backlog and low energy efficiency in massive-terminal access scenarios of the power Internet of things (power IoT) in the 6G era, a context-aware learning-based access control (CLAC) algorithm was proposed. The proposed algorithm was based on reinforcement learning and fast uplink grant technology, taking into account the active and dormant states of terminals, with the optimization objective of maximizing the total network energy efficiency under a long-term constraint on terminal access service quality. Lyapunov optimization was used to decouple the long-term objective and constraint, transforming the long-term optimization problem into a series of independent single-time-slot deterministic subproblems, which could be solved by a terminal state-aware upper confidence bound algorithm. The simulation results show that CLAC improves network energy efficiency while meeting terminal access service quality requirements. Compared with traditional fast uplink grant, CLAC improves the average energy efficiency by 48.11%, increases the proportion of terminals meeting access service quality requirements by 54.95%, and reduces the average queue backlog by 83.83%.
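A minimal sketch of the upper-confidence-bound selection underlying the last step (plain UCB1; the paper's terminal state-aware variant additionally folds in the Lyapunov queue terms, which are omitted here, and all names are illustrative):

```python
import math

def ucb1_select(counts, rewards, t):
    """Return the action index with the highest UCB1 score.
    counts[a]: times action a was chosen; rewards[a]: its cumulative reward;
    t: current round (t >= 1)."""
    for a, c in enumerate(counts):
        if c == 0:
            return a  # play every action once before trusting the bound
    return max(range(len(counts)),
               key=lambda a: rewards[a] / counts[a]
                             + math.sqrt(2.0 * math.log(t) / counts[a]))
```

The exploration term shrinks as an action is sampled more often, which is what lets the scheduler balance trying uncertain terminals against granting the ones with the best observed energy efficiency.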

    Fine-grained attribute update and outsourcing computing access control scheme in fog computing
    Ruizhong DU, Peiwen YAN, Yan LIU
    2021, 42(3):  160-170.  doi:10.11959/j.issn.1000-436x.2021063

    To solve the problems that ciphertext-policy attribute-based encryption (CP-ABE) faces in fog computing environments with demanding low-latency requirements, namely high encryption and decryption overhead and inefficient attribute update, a fine-grained attribute update and outsourced computing access control scheme in fog computing was proposed. The unanimous-consent control by modular addition technique was used to construct the access control tree, and the encryption and decryption computations were outsourced to fog nodes to reduce user overhead. Combined with a re-encryption mechanism, a group-key binary tree was established at the fog node to re-encrypt the ciphertext, so that user attributes can be updated flexibly. The security analysis shows that the proposed scheme is secure under the decisional bilinear Diffie-Hellman assumption. Compared with other schemes, the simulation results show that the scheme has lower user encryption and decryption time costs and higher attribute update efficiency.

    Vehicular cache nodes selection algorithm under load constraint in C-V2X
    Zhexin XU, Kaimeng GAO, Wenkang JIA, Yi WU
    2021, 42(3):  171-182.  doi:10.11959/j.issn.1000-436x.2021065

    To address the highly dynamic C-V2X vehicle topology in urban environments and the limited load capacity of vehicle nodes, and to improve the utilization of vehicular cache resources while reducing the load on base stations, a vehicular cache node selection algorithm under load constraints was proposed. Firstly, by defining a link stability metric, a predicted weighted adjacency matrix was constructed to describe the essential vehicular micro-topology. Next, the objective function was constructed under the load constraints and a non-overlapping coverage constraint, maximizing the average link weight of the clusters while using the fewest cache nodes. Finally, a greedy approach with suitably defined node states was introduced to compute the minimum dominating set of the vehicle topology under the load constraints, with serviced neighbor nodes determined preferentially. The simulation results show that the proposed algorithm is close to the global optimum in terms of the number of cache nodes and the average weight of cluster links. Moreover, its repeated response ratio is always zero while its request response ratio achieves the theoretical upper bound, and the response times of cache resources are also effectively improved.
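The greedy selection in the last step can be sketched roughly as follows, with the link-weight objective simplified to plain coverage counts; node names and the `capacity` parameter are illustrative, not the paper's formulation.

```python
def greedy_cache_selection(adj, capacity):
    """Greedy approximation of a load-constrained dominating set.
    adj: dict mapping each node to the set of its neighbors.
    capacity: max number of nodes (itself included) one cache may serve."""
    uncovered = set(adj)
    candidates = set(adj)
    assignment = {}  # cache node -> sorted list of nodes it serves
    while uncovered and candidates:
        # pick the candidate that would cover the most uncovered nodes
        best = max(candidates, key=lambda v: len((adj[v] | {v}) & uncovered))
        gain = (adj[best] | {best}) & uncovered
        if not gain:
            break  # remaining candidates cover nothing new
        served = sorted(gain)[:capacity]  # enforce the load constraint
        assignment[best] = served
        uncovered -= set(served)
        candidates.discard(best)
    return assignment
```

Because each node is assigned to exactly one cache, the non-overlapping coverage constraint holds by construction, which is what keeps the repeated response ratio at zero.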

    Caching strategy based on transmission delay for D2D cooperative edge caching system
    Yan CAI, Fan WU, Hongbo ZHU
    2021, 42(3):  183-189.  doi:10.11959/j.issn.1000-436x.2021042

    To meet the low-delay and high-reliability requirements of 5G systems, an optimization of the caching strategy based on transmission delay was proposed for a single-caching D2D cooperative edge caching system. The dynamic distributions of requesting users and idle users were modeled as independent homogeneous Poisson point processes (HPPP) using stochastic geometry. Comprehensively considering content popularity, user location information, device transmission power and interference, the relationship between the average transmission delay of users and the cache probability distribution was derived. Taking the average transmission delay as the objective function, the optimization problem was established, and a low-complexity iterative algorithm was proposed to obtain a cache strategy with sub-optimal average transmission delay. Simulation results demonstrate that the proposed cache strategy outperforms several common cache strategies in terms of transmission delay.

    Comprehensive Review
    Survey on edge computing technology for autonomous driving
    Pin LYU, Jia XU, Taoshen LI, Wenbiao XU
    2021, 42(3):  190-208.  doi:10.11959/j.issn.1000-436x.2021045

    Edge computing plays an extremely important role in the environment perception and data processing of autonomous driving. Autonomous vehicles can expand their perception scope by obtaining environmental information from edge nodes, and can also cope with insufficient computing resources by offloading tasks to edge nodes. Compared with cloud computing, edge computing avoids the high latency caused by long-distance data transmission, provides autonomous vehicles with faster responses, and relieves the traffic load on the backbone network. Firstly, edge computing-based cooperative perception and task offloading technologies for autonomous vehicles were introduced, together with the related challenges. Then the state of the art of cooperative perception and task offloading was analyzed and summarized. Finally, the problems that need further study in this field were discussed.

    Correspondences
    RBFT: a new Byzantine fault-tolerant consensus mechanism based on Raft cluster
    Dongyan HUANG, Lang LI, Bin CHEN, Bo WANG
    2021, 42(3):  209-219.  doi:10.11959/j.issn.1000-436x.2021043

    The existing consensus mechanisms of consortium blockchains are not scalable enough to provide low latency, high throughput and security while supporting large-scale networks. A new consensus mechanism called RBFT was proposed to improve scalability: a two-level consensus mechanism with supervised nodes based on the idea of network sharding. In RBFT, the nodes were first divided into several groups. Each group adopted an improved Raft mechanism to reach consensus and select a leader. The leaders of the groups then formed a network committee, which adopted the PBFT mechanism for consensus. Comparative experiments verify that, compared with PBFT and Raft, RBFT can tolerate Byzantine faults while ensuring high consensus efficiency in large-scale networks.
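The two-level structure can be sketched as follows, with the Raft leader election stood in for by simply picking the lowest node id; function and variable names are illustrative, not from the paper.

```python
def form_rbft_committee(nodes, group_size):
    """Split nodes into groups; each group's leader joins the committee
    that runs PBFT.  With 3f+1 committee members, PBFT tolerates f
    Byzantine leaders."""
    groups = [nodes[i:i + group_size] for i in range(0, len(nodes), group_size)]
    leaders = [min(g) for g in groups]  # stand-in for the Raft election
    f = (len(leaders) - 1) // 3         # Byzantine tolerance of the committee
    return groups, leaders, f
```

The point of the split is that the expensive O(n^2)-message PBFT phase runs only over the small committee, while the cheap Raft rounds stay inside the groups.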

    Multi-authority attribute-based identification scheme
    Fei TANG, Jiali BAO, Yonghong HUANG, Dong HUANG, Huili WANG
    2021, 42(3):  220-228.  doi:10.11959/j.issn.1000-436x.2021047

    Since existing attribute-based identification schemes are all based on a single authority, they suffer from the key escrow problem: the key generation center knows all users' private keys. To address this, a multi-authority attribute-based identification scheme was proposed. Distributed key generation technology was integrated to realize a (t,n) threshold generation mechanism for the user's private key, which can resist collusion attacks by up to t-1 authorities. Utilizing bilinear mapping, a concrete multi-authority attribute-based identification scheme was constructed. The security, computation cost and communication cost of the proposed scheme were analyzed and compared with schemes of the same type. Finally, taking multi-factor identification as an example, the feasibility of the proposed scheme in the application scenario of electronic credentials was analyzed. The results show that the proposed scheme has better comprehensive performance.
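The (t,n) threshold idea can be illustrated with plain Shamir secret sharing over a prime field. The paper's distributed key generation over bilinear groups is more involved; the prime and all names here are illustrative only.

```python
import random

P = 2**127 - 1  # a Mersenne prime field, standing in for the scheme's group order

def share_secret(secret, t, n):
    """(t, n) threshold sharing: any t shares recover the secret,
    while t-1 shares reveal nothing about it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # share for party x is the degree-(t-1) polynomial evaluated at x
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

In the multi-authority setting, each authority contributes one share of the user's key, so no t-1 colluding authorities can reconstruct it, which removes the single key escrow point.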

    Image denoising algorithm based on multi-channel GAN
    Hongyan WANG, Xiao YANG, Yanchao JIANG, Zumin WANG
    2021, 42(3):  229-237.  doi:10.11959/j.issn.1000-436x.2021049

    Aiming at the problem that noise generated during image acquisition and transmission degrades subsequent image processing, a generative adversarial network (GAN) based multi-channel image denoising algorithm was developed. The noisy color image was separated into red-green-blue (RGB) channels, and denoising was carried out in each channel by an end-to-end trainable GAN with the same architecture. The generator module of the GAN was built from a U-net derivative network and residual blocks, so that high-level feature information could be extracted effectively by referring to low-level feature information, avoiding the loss of detail. Meanwhile, the discriminator module was based on a fully convolutional neural network, achieving pixel-level classification and improving discrimination accuracy. Besides, to improve the denoising ability while retaining as much image detail as possible, a composite loss function was formed from three loss terms: adversarial loss, visual perception loss, and mean square error (MSE). Finally, the three-channel outputs were fused by arithmetic averaging to obtain the final denoised image. Compared with state-of-the-art algorithms, experimental results show that the proposed algorithm can remove image noise effectively and restore the original image details considerably.

Copyright Information
Authorized by: China Association for Science and Technology
Sponsored by: China Institute of Communications
Editor-in-Chief: Zhang Ping
Associate Editor-in-Chief:
Zhang Yanchuan, Ma Jianfeng, Yang Zhen, Shen Lianfeng, Tao Xiaofeng, Liu Hualu
Editorial Director: Wu Nada, Zhao Li
Address: F2, Beiyang Chenguang Building, Shunbatiao No.1 Courtyard, Fengtai District, Beijing, China
Post: 100079
Tel: 010-53933889, 53878169, 53859522, 010-53878236
Email: xuebao@ptpress.com.cn
Email: txxb@bjxintong.com.cn
ISSN 1000-436X
CN 11-2102/TN