
Table of Contents

    25 October 2023, Volume 9 Issue 5
    Comprehensive Review
    Review of malware detection and classification visualization techniques
    Jinwei WANG, Zhengjia CHEN, Xue XIE, Xiangyang LUO, Bin MA
    2023, 9(5):  1-20.  doi:10.11959/j.issn.2096-109x.2023064

    With the rapid advancement of technology, network security faces a significant challenge due to the proliferation of malicious software and its variants. These malicious programs use various technical tactics to deceive or bypass traditional detection methods, rendering conventional non-visual detection techniques inadequate. In recent years, data visualization has gained considerable attention in the academic community as a powerful approach for detecting and classifying malicious software. By visually representing the key features of malicious software, these methods greatly enhance the accuracy of malware detection and classification, opening up extensive research opportunities in the field of cyber security. An overview of traditional non-visual detection techniques and visualization-based methods in the realm of malicious software detection was provided. Traditional non-visual approaches for malicious software detection, including static analysis, dynamic analysis, and hybrid techniques, were introduced. Subsequently, a comprehensive survey and evaluation of prominent contemporary visualization-based methods for detecting malicious software was undertaken. These primarily encompassed the integration of visualization with machine learning and with deep learning, each of which exhibits distinct advantages and characteristics within the domain of malware detection and classification. Consequently, the holistic consideration of several factors, such as dataset size, computational resources, time constraints, model accuracy, and implementation complexity, is necessary for the selection of detection and classification methods. In conclusion, the challenges currently faced by detection technologies are summarized, and a forward-looking perspective on future research directions in the field is provided.
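A common building block behind such visualization-based approaches is mapping a binary's raw bytes onto a grayscale image, which a classifier can then consume. The sketch below is illustrative only and is not taken from any surveyed paper; the row width is an arbitrary choice.

```python
def bytes_to_grayscale(data: bytes, width: int = 16):
    """Map a raw binary to a 2D grayscale image: one byte becomes one
    pixel intensity (0-255); the last row is zero-padded."""
    pixels = list(data)
    pixels += [0] * ((-len(pixels)) % width)   # pad to a full final row
    return [pixels[i:i + width] for i in range(0, len(pixels), width)]

# A 48-byte sample rendered as a 6x8 image
img = bytes_to_grayscale(b"\x4d\x5a\x90\x00" * 12, width=8)
```

Different malware families tend to produce visually distinct textures in such images, which is what the surveyed machine-learning and deep-learning pipelines exploit.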

    Papers
    Redundancy and conflict detection method for label-based data flow control policy
    Rongna XIE, Xiaonan FAN, Suzhe LI, Yuxin HUANG, Guozhen SHI
    2023, 9(5):  21-32.  doi:10.11959/j.issn.2096-109x.2023074

    To address the challenge of redundancy and conflict detection in the label-based data flow control mechanism, a label description method based on atomic operations was proposed. When a label is changed, redundancy or conflict between the new label and the existing label is unavoidable, and detecting it is an urgent problem in the label-based data flow control mechanism. The object label was generated by the logical combination of multiple atomic tags, each atomic tag describing a minimum security requirement. This label description method achieved both simplicity and richness of label description. To enhance detection efficiency and reduce the difficulty of redundancy and conflict detection, a method based on the correlation of sets in labels was introduced. Moreover, based on the detection results of atomic tags and their logical relationships, redundancy and conflict detection of object labels was carried out, further improving the overall detection efficiency. Redundancy and conflict detection of atomic tags was based on the relationships between the operations contained in different atomic tags. If different atomic tags contained the same operation, the detection was performed by analyzing the relationship between subject attributes, environmental attributes, and rule types in the atomic tags. If different atomic tags contained different operations with no relationship between them, there was no redundancy or conflict. If there was a partial order relationship between the operations in the atomic tags, the detection was performed by analyzing the partial order relationship of the operations together with the relationship between subject attributes, environment attributes, and rule types in the different atomic tags. The performance of the proposed redundancy and conflict detection algorithm was analyzed theoretically and experimentally, and the influence of the number and complexity of atomic tags on the detection performance was verified through experiments.
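As a hedged illustration of the set-relation idea (the data model below is invented for this sketch, not the paper's actual structures), an atomic tag can be modeled as an operation plus subject/environment attribute sets and a rule type, with redundancy decided by set containment and conflict by overlapping scope under opposite rule types; the partial order over operations is omitted.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AtomicTag:
    operation: str       # e.g. "read", "write"
    subjects: frozenset  # subject attributes the rule applies to
    envs: frozenset      # environment attributes
    allow: bool          # rule type: True = allow, False = deny

def check(a: AtomicTag, b: AtomicTag) -> str:
    """Classify the relation between two atomic tags:
    'unrelated', 'redundant', or 'conflict' (simplified model)."""
    if a.operation != b.operation:
        return "unrelated"              # different ops assumed unrelated here
    subsumed = a.subjects <= b.subjects and a.envs <= b.envs
    overlap = (a.subjects & b.subjects) and (a.envs & b.envs)
    if subsumed and a.allow == b.allow:
        return "redundant"              # a is covered by b, same rule type
    if overlap and a.allow != b.allow:
        return "conflict"               # shared scope, opposite rule types
    return "unrelated"

t1 = AtomicTag("read", frozenset({"dept:hr"}), frozenset({"office"}), True)
t2 = AtomicTag("read", frozenset({"dept:hr", "level:3"}),
               frozenset({"office", "vpn"}), True)
relation = check(t1, t2)
```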

    Metric-based learning approach to botnet detection with small samples
    Honggang LIN, Junjing ZHU, Lin CHEN
    2023, 9(5):  33-47.  doi:10.11959/j.issn.2096-109x.2023076

    Botnets pose a great threat to the Internet, and early detection is crucial for maintaining cybersecurity.However, in the early stages of botnet discovery, obtaining a small number of labeled samples restricts the training of current detection models based on deep learning, leading to poor detection results.To address this issue, a botnet detection method called BT-RN, based on metric learning, was proposed for small sample backgrounds.The task-based meta-learning training strategy was used to optimize the model.The verification set was introduced into the task and the similarity between the verification sample and the training sample feature representation was measured to quickly accumulate experience, thereby reducing the model’s dependence on the labeled sample space.The feature-level attention mechanism was introduced.By calculating the attention coefficients of each dimension in the feature, the feature representation was re-integrated and the importance attention was assigned to optimize the feature representation, thereby reducing the feature sparseness of the deep neural network in small samples.The residual network design pattern was introduced, and the skip link was used to avoid the risk of model degradation and gradient disappearance caused by the deeper network after increasing the feature-level attention mechanism module.
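The paper's relation network learns its similarity metric; as a simplified stand-in for the metric-learning idea, few-shot classification can be illustrated with nearest-prototype matching over a tiny support set (the features and labels below are made up for the sketch):

```python
import math

def prototype(vectors):
    """Mean feature vector of one class's labeled support samples."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(query, support):
    """support: {label: [feature vectors]}; return nearest-prototype label."""
    protos = {lab: prototype(vs) for lab, vs in support.items()}
    return min(protos, key=lambda lab: euclidean(query, protos[lab]))

support = {"benign": [[0.1, 0.2], [0.2, 0.1]],
           "botnet": [[0.9, 0.8], [0.8, 0.9]]}
label = classify([0.85, 0.9], support)
```

A learned relation module replaces the fixed Euclidean distance in BT-RN, but the task structure (query compared against a small labeled support set) is the same.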

    Constructing method of opaque predicate based on type conversion and operation of floating point numbers
    Qingfeng WANG, Hao LIANG, Yawen WANG, Genlin XIE, Benwei HE
    2023, 9(5):  48-58.  doi:10.11959/j.issn.2096-109x.2023068

    With the increasing complexity of software functions and the evolving technologies of network attacks, malicious behaviors such as software piracy, software cracking, data leakage, and malicious software modification are on the rise. As a result, software security has become a focal point of industry research. Code obfuscation is a common software protection technique used to hinder reverse engineering. It aims to make program analysis and understanding more difficult for attackers while preserving the original program functionality. However, many existing code obfuscation techniques suffer from performance loss and poor concealment in pursuit of obfuscation effectiveness. Control flow obfuscation, particularly opaque predicate obfuscation, is widely used to increase the difficulty of code reverse engineering by disrupting the program's control flow. A method was proposed to address the limitations of existing code obfuscation techniques. It utilized the precision loss that occurs during type conversion and floating-point operations in computers, which under certain conditions produces operation results that contradict common intuition. By performing forced type conversion, addition, and multiplication on selected decimal numbers, a series of opaque predicates can be constructed based on statistical analysis of their operation results. This approach achieved code obfuscation with high concealment, good generality, reversibility, and low overhead compared to traditional opaque predicates. Experimental verification demonstrates that this method significantly slows down attackers' reverse engineering efforts and exhibits good resistance to dynamic analysis techniques such as symbolic execution.
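For instance, two always-true predicates of the kind described can be built from decimal-fraction rounding and float32 narrowing; these are generic illustrations of the phenomenon, not the paper's specific constructions:

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python double through IEEE-754 single precision."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# Always-true predicate 1: binary rounding of decimal fractions.
p1 = (0.1 + 0.2) != 0.3          # 0.1 + 0.2 rounds to 0.30000000000000004

# Always-true predicate 2: precision loss under type narrowing.
# 2**24 + 1 is not representable in float32, so narrowing rounds it down.
p2 = to_float32(16777217.0) != 16777217.0
```

To a reverse engineer, such comparisons look like genuine data-dependent branches, yet the "dead" branch never executes, which is exactly what makes them useful as opaque predicates.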

    High-performance reconfigurable encryption scheme for distributed storage
    Zhihua FENG, Yuxuan ZHANG, Chong LUO, Jianing WANG
    2023, 9(5):  59-70.  doi:10.11959/j.issn.2096-109x.2023072

    As the world embraces the digital economy and enters an information society, data has emerged as a critical production factor. The collection, processing, and storage of data have become increasingly prevalent. Distributed storage systems, known for their efficiency, are widely used in various data fields. However, as the scale of data storage continues to expand, distributed storage faces more significant security risks, such as information leakage and data destruction. These challenges drive the need for innovative advancements in big data distributed storage security technology and foster the integration of domestic cryptographic technology with computing and storage technology. This work focused on addressing security issues, particularly information leakage, in distributed storage nodes. A dynamic and reconfigurable encryption storage solution was proposed, which considered the requirements for both encryption performance and flexibility. A high-performance reconfigurable cryptographic module was designed based on the bio mapping framework. Based on this module, multiple storage pools equipped with different cryptographic algorithms were constructed to facilitate high-performance encryption and decryption operations on hard disk data. The scheme also enabled dynamic switching of cryptographic algorithms within the storage pools. A cryptographic protocol with remote online loading of cryptographic algorithms and keys was developed to meet the unified management and convenient security-update requirements of reconfigurable cryptographic modules in various storage nodes. Furthermore, the scheme implemented fine-grained data encryption protection and logical security isolation based on cryptographic reconstruction technology. Experimental results demonstrate that the performance loss of this scheme for encryption protection and security isolation of stored data is approximately 10%. It provides a technical approach for distributed storage systems to meet the cryptographic application requirements outlined in GB/T 39786-2021 "Information Security Technology - Basic Requirements for Cryptography Applications" at Level 3 and above, in terms of device and computing security as well as application and data security.

    Lightweight and secure vehicle track verification scheme via broadcast communication channels
    Zhiqiang NING, Yuanyuan WANG, Chi ZHANG, Lingbo WEI, Nenghai YU, Yue HAO
    2023, 9(5):  71-81.  doi:10.11959/j.issn.2096-109x.2023077

    In intelligent transportation systems, it is crucial for smart vehicles to broadcast real-time vehicle track messages to coordinate driving decisions and ensure driving safety. However, attackers can manipulate vehicle tracks by modifying timestamps or manipulating signal frequencies, posing a threat to security. To address this problem, a lightweight vehicle track verification scheme was proposed, utilizing broadcast communication channels to achieve secure verification of vehicle tracks without any special hardware support. Without time synchronization, each verifier calculated the time interval between the reception time of the message and the sending timestamp. Spatial position constraint equations were formulated by combining these time intervals between any two verifiers, effectively defending against timestamp forgery attacks. Additionally, each verifier calculated the Doppler frequency shift between the arrival frequency and the scheduled transmit frequency. Velocity vector constraint equations were formulated by combining these frequency shifts between any two verifiers, providing defense against carrier frequency manipulation attacks. Formal analysis shows that increasing the number of verifiers improves the accuracy of the proposed verification scheme. Experimental results in a real-world environment further validate that the proposed scheme exhibits the best performance when the number of verifiers is set to 3. Compared to the existing solution, the proposed scheme has higher accuracy, a lower false rejection rate, and a lower false acceptance rate when validating true and false vehicle tracks separately.
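The two kinds of physical checks can be sketched as follows; the carrier frequency and tolerance are illustrative assumptions, not values from the paper, and the real scheme combines such measurements across verifier pairs rather than checking one verifier in isolation.

```python
C = 299_792_458.0  # speed of light in m/s (radio propagation)

def propagation_delay(distance_m: float) -> float:
    """Time a broadcast message needs to travel distance_m."""
    return distance_m / C

def doppler_shift(f_carrier_hz: float, radial_speed_ms: float) -> float:
    """Non-relativistic Doppler shift f_d = f_c * v_r / c for a transmitter
    moving at radial_speed_ms toward (+) the verifier."""
    return f_carrier_hz * radial_speed_ms / C

def consistent(claimed_dist_m, t_sent, t_recv, tol=1e-7):
    """Does the claimed position match the measured time interval?"""
    return abs((t_recv - t_sent) - propagation_delay(claimed_dist_m)) <= tol

# A vehicle claiming to be 300 m away must show ~1 microsecond of delay
ok = consistent(300.0, t_sent=0.0, t_recv=300.0 / C)
```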

    Privacy view and target of differential privacy
    Jingyu JIA, Chang TAN, Zhewei LIU, Xinhao LI, Zheli LIU, Tao ZHANG
    2023, 9(5):  82-91.  doi:10.11959/j.issn.2096-109x.2023071

    The study aimed to address the challenges in understanding the privacy goals of differential privacy by analyzing the privacy controversies surrounding it in various fields. It began with the example of data correlation and highlighted the differing perspectives among scholars regarding the targets of privacy protection. In cases where records in a dataset are correlated, adversaries can exploit this correlation to infer sensitive information about individuals, sparking debate on whether this violates privacy protection. To investigate the influence of privacy theories from the legal domain on defining privacy, the two mainstream privacy theories in the computer field were examined. The first theory, limited access to personal information, focuses on preventing others from accessing an individual's sensitive information. According to this theory, privacy mechanisms should aim to prevent adversaries from accessing a user's actual information. In contrast, the second theory, control over personal information, emphasizes an individual's right to decide how personal information is communicated to others. This theory suggests that the disclosure of personal information due to correlated data shared by others should not be considered a breach of privacy. The controversies around differential privacy were then analyzed in the fields of computer science, social science, ethics, and human-computer interaction, which stem from their different understandings of privacy. By exploring the privacy concept of differential privacy from a multidisciplinary perspective, this study helps readers gain a correct understanding of the privacy viewpoint and goals of differential privacy while enhancing their understanding of the concept of "privacy" itself.
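For concreteness, the canonical mechanism whose guarantees these debates concern is the Laplace mechanism, sketched here in its textbook form (inverse-CDF sampling of Laplace noise; the query and parameters are illustrative):

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Add Lap(sensitivity/epsilon) noise to a numeric query result:
    the textbook mechanism satisfying epsilon-differential privacy."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5
    # Inverse-CDF sample of the Laplace distribution with scale b
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# A count query has sensitivity 1: one individual changes it by at most 1
noisy_count = laplace_mechanism(100, sensitivity=1, epsilon=1.0)
```

Note that the epsilon guarantee bounds what an adversary learns from any single individual's participation; what it promises when records are correlated is precisely the point of contention the abstract describes.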

    Data traceability mechanism based on consortium chain
    Shoucai ZHAO, Lifeng CAO, Xuehui DU
    2023, 9(5):  92-105.  doi:10.11959/j.issn.2096-109x.2023067

    With the unprecedented growth in the speed of data generation and circulation in the era of big data, blockchain technology provides a new solution for data authenticity verification. However, with the increasing demand for data flow between different blockchains, new security issues arise. Cross-chain data transmission can lead to data leakage, and detecting leakage caused by illegal access is challenging. To address these problems, a data traceability mechanism based on a consortium chain was proposed. A cross-blockchain data traceability model was designed, incorporating private data pipelines to ensure the security of cross-chain data transmission. User behaviors were recorded through authorization and access logs, ensuring the traceability of illegal unauthorized access. To improve the efficiency of data traceability and query, an on-chain and off-chain synchronous storage mechanism was adopted. The state of the data flow before each transaction was encrypted and stored in a database, and its index was stored in the blockchain transaction, establishing a one-to-one correspondence between on-chain and off-chain data. Additionally, Merkle trees were introduced into the block body to store block summaries, enhancing the efficiency of block legitimacy verification. Based on the data storage form and the cross-chain data interaction mechanism, a data traceability algorithm was designed, with traceability results displayed as an ordered tree. An experimental environment for consortium chain traceability was built using Fabric, targeting the cross-domain data traceability scenario in the e-commerce industry. The Go language was used to simulate and test the data traceability performance with a large number of blocks and transactions. The results demonstrate that as the number of blocks and transactions increases, the proposed data traceability mechanism maintains satisfactory performance.
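The block-summary structure referred to above can be illustrated with a minimal Merkle-root computation; this is a generic sketch (SHA-256, odd nodes paired with themselves), not the paper's exact block layout:

```python
import hashlib

def merkle_root(leaves):
    """Fold a list of byte strings into a single Merkle root (hex).
    An odd node at any level is paired with itself, a common convention."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])       # duplicate the last hash
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

Verifying a block then only requires recomputing this root from the transactions and comparing it with the stored summary: any tampered transaction changes the root.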

    Block level cloud data deduplication scheme based on attribute encryption
    Wenting GE, Weihai LI, Nenghai YU
    2023, 9(5):  106-115.  doi:10.11959/j.issn.2096-109x.2023066

    Existing cloud data deduplication schemes mainly focus on file-level deduplication. To go further, a scheme based on attribute encryption was proposed to support block-level deduplication. Dual-granularity deduplication was performed at both the file level and the data block level, and data sharing was achieved through attribute encryption. The algorithm was designed on a hybrid cloud architecture: repeatability detection and consistency detection were conducted by the private cloud based on file labels and data block labels. A Merkle tree was established over the block-level labels to support proof of user ownership. When a user uploaded ciphertext, the private cloud utilized linear secret sharing technology to add access structures and auxiliary information to the ciphertext, and updated the overall ciphertext information for new users with permissions. The private cloud served as a proxy for re-encryption and proxy decryption, undertaking most of the computation while unable to obtain the plaintext, thereby reducing the computing overhead for users. The processed ciphertext and labels were stored in the public cloud and accessed through the private cloud. Security analysis shows that the proposed scheme achieves PRV-CDA (privacy against chosen-distribution attacks) security in the private cloud. In the simulation experiments, four types of elliptic curve encryption were used to test the computation time for key generation, encryption, and decryption, for different numbers of attributes with a fixed block size and for different block sizes with a fixed number of attributes. The results align with the characteristics of linear secret sharing. Simulation experiments and cost analysis demonstrate that the proposed scheme can enhance deduplication efficiency and save time costs.
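The block-level detection step can be sketched as label comparison over content hashes; this is illustrative only, since the actual scheme layers attribute-based encryption and Merkle-tree ownership proofs on top of the labels.

```python
import hashlib

def block_labels(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and label each with its SHA-256.
    The private cloud compares labels to detect duplicate blocks."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return [hashlib.sha256(b).hexdigest() for b in blocks]

def dedup_upload(labels, store):
    """Return indices of blocks the store does not yet hold; record them."""
    missing = []
    for i, lab in enumerate(labels):
        if lab not in store:
            missing.append(i)
            store.add(lab)
    return missing
```

Only the blocks in `missing` need to be transmitted; repeated blocks, whether within one file or across users, are stored once.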

    5G-based smart airport network security scheme design and security analysis
    Xinxin XING, Qingya ZUO, Jianwei LIU
    2023, 9(5):  116-126.  doi:10.11959/j.issn.2096-109x.2023075

    To meet the security requirements of smart airports, a 5G-based smart airport network security solution was proposed. The security characteristics and requirements of the 5G scenario in smart airports were analyzed, and the pain points of security requirements in this scenario were summarized in five aspects: unified security management and control, network slicing security, security monitoring and early warning, edge computing security, and IoT sensing-node security. A 5G network security system was then designed for smart airports. The functional components of this system included unified 5G network security management and control for ubiquitous networks, lightweight 5G network identity authentication, 5G network slice security protection for multi-service requirements, 5G network security monitoring and early warning based on big data analysis, integrated security protection based on edge computing, and sensing-node security protection based on device behavior analysis. This comprehensive approach built an all-in-one security platform covering business encryption, network security, terminal trustworthiness, identity trustworthiness, and security management and control. Additionally, potential counterfeit base station attacks in the existing 5G authentication and key agreement (AKA) protocol were analyzed. Due to the lack of authenticity verification of the messages forwarded by the SN, an attacker can impersonate a real SN to communicate with the UE and the HN, thus carrying out a base station masquerading attack. This kind of attack may lead to the leakage of smart airport network data, as well as tampering and deception by adversaries. Aiming at the network security requirements of smart airports and the security issues of the 5G AKA protocol, an improved 5G authentication and key agreement protocol was designed. Formal security models, security goal definitions, and analysis were performed to ensure the robustness and effectiveness of the protocol against attacks.

    Anonymous group key distribution scheme for the internet of vehicles
    Zhiwang HE, Huaqun WANG
    2023, 9(5):  127-137.  doi:10.11959/j.issn.2096-109x.2023070

    Vehicular ad hoc networks (VANET) play a crucial role in intelligent transportation systems by providing driving information and services such as collision prevention and improved traffic efficiency. However, when a trusted third party (TTP) interacts with a vehicle in a VANET, the interaction can be vulnerable to security threats like eavesdropping, tampering, and forgery. Many existing schemes rely heavily on the TTP for key negotiation to establish session keys and ensure session security. However, this over-reliance on the TTP can introduce a single point of failure, as well as redundancy when the TTP sends the same information to multiple vehicles. Additionally, key negotiation methods used for creating group session keys often result in increased interaction data and interaction rounds. An anonymous group key distribution scheme for the internet of vehicles was proposed to address these challenges. Road side units (RSU) were used to facilitate the creation of group session keys among multiple vehicles. Identity-based public key cryptography and an improved multi-receiver encryption scheme were utilized for communication between RSUs and vehicles, enabling two-way authentication and secure distribution of group session keys. During key distribution, a single encryption operation was sufficient for all group members to obtain a consistent session key, reducing reliance on the TTP for authentication and group communication. Formal security proofs demonstrate that the proposed scheme satisfies the basic security requirements. Furthermore, performance analysis and comparisons indicate that this scheme offers lower computational and communication overhead than similar schemes.

    Research on the robustness of neural machine translation systems in word order perturbation
    Yuran ZHAO, Tang XUE, Gongshen LIU
    2023, 9(5):  138-149.  doi:10.11959/j.issn.2096-109x.2023078

    Pre-trained language models are among the most important models in the natural language processing field, as pretrain-finetune has become the paradigm for various NLP downstream tasks. Previous studies have shown that integrating pre-trained language models (e.g., BERT) into neural machine translation (NMT) models can improve translation performance. However, it is still unclear whether these improvements stem from enhanced semantic or syntactic modeling capabilities, or how pre-trained knowledge impacts the robustness of the models. To address these questions, a systematic study was conducted to examine the syntactic ability of BERT-enhanced NMT models using probing tasks. The study revealed that the enhanced models showed proficiency in modeling word order, highlighting their syntactic modeling capabilities. In addition, an attack method was proposed to evaluate the robustness of NMT models in handling word order. BERT-enhanced NMT models yielded better translation performance in most of the tasks, indicating that BERT can improve the robustness of NMT models. However, the BERT-enhanced NMT model generated poorer translations than the vanilla NMT model after the attack in the English-German translation task, which means that English BERT worsened model robustness in that scenario. Further analyses revealed that English BERT failed to bridge the semantic gap between the original and perturbed sources, leading to more copying errors and errors in translating low-frequency words. These findings suggest that the benefits of pre-training may not always be consistent across downstream tasks, and careful consideration should be given to its usage.
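A word-order attack of the general kind described perturbs the source while preserving its bag of words; the minimal adjacent-swap version below is a sketch of the idea, not the paper's actual (likely more targeted) perturbation method:

```python
import random

def perturb_word_order(sentence: str, n_swaps: int = 1, seed: int = 0) -> str:
    """Perturb a sentence by swapping random adjacent word pairs,
    keeping the bag of words intact."""
    words = sentence.split()
    rng = random.Random(seed)
    for _ in range(n_swaps):
        if len(words) < 2:
            break
        i = rng.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)
```

Robustness is then measured by how much translation quality degrades on the perturbed sources relative to the originals.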

    Research on strong robustness watermarking algorithm based on dynamic difference expansion
    Tianqi WANG, Yingzhou ZHANG, Yunlong DI, Dingwen LI, Linlin ZHU
    2023, 9(5):  150-165.  doi:10.11959/j.issn.2096-109x.2023065

    A surge in the amount of information has come with the rapid development of the technology industry. Across all industries, there is a need to collect and utilize vast amounts of data. While this big data holds immense value, it also poses unprecedented challenges to the field of data security. As relational databases serve as a fundamental storage medium, they often contain large-scale data rich in content and privacy. In the event of a data leak, significant losses may occur, highlighting the pressing need to safeguard database ownership and verify data ownership. However, existing database watermarking technologies face an inherent tradeoff between increasing watermark embedding capacity and reducing data distortion. To address this issue and enhance watermark robustness, a novel robust database watermarking algorithm based on dynamic difference expansion was introduced. A QR code was employed as the watermark, and SVD was applied to the low-frequency part of the image after a Haar wavelet transform. By extracting specific feature values and using residual feature values as the watermark sequence, a watermark sequence of the same length carries more information and the embedded watermark length can be reduced. Furthermore, by combining the adaptive differential evolution algorithm and the minimum difference algorithm, the optimal embedding attribute bits were selected, alleviating the low computational efficiency, high data distortion, and poor robustness of traditional difference expansion techniques, and improving watermark embedding capacity while reducing data distortion. Experimental results demonstrate that the proposed algorithm achieves a high watermark embedding rate with low data distortion. It is resilient against multiple attacks, exhibiting excellent robustness and strong traceability. Compared to existing algorithms, it offers distinct advantages and holds great potential for broad application in the field of data security.
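The underlying difference-expansion primitive, shown here in its classic Tian form which the paper's dynamic variant builds on, embeds one bit reversibly in a pair of integer values:

```python
def de_embed(x: int, y: int, bit: int):
    """Difference expansion: hide one bit in a value pair, reversibly."""
    l = (x + y) // 2          # integer average (preserved by embedding)
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carrying the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2: int, y2: int):
    """Recover the embedded bit and the original pair exactly."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit = h2 & 1
    h = h2 >> 1               # floor division undoes the expansion
    return bit, l + (h + 1) // 2, l - h // 2
```

Because the average is preserved and the expanded difference is invertible, extraction returns both the bit and the untouched original values, which is what makes the technique attractive for low-distortion database watermarking.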

    Model of the malicious traffic classification based on hypergraph neural network
    Wenbo ZHAO, Zitong MA, Zhe YANG
    2023, 9(5):  166-177.  doi:10.11959/j.issn.2096-109x.2023069

    As the use of and reliance on networks continue to grow, the prevalence of malicious network traffic poses a significant challenge in the field of network security. Cyber attackers constantly seek new ways to infiltrate systems, steal data, and disrupt network services. To address this ongoing threat, it is crucial to develop more effective intrusion detection systems that can promptly detect and counteract malicious network traffic, thereby minimizing the resulting losses. However, current methods for classifying malicious traffic have limitations, particularly an excessive reliance on data feature selection. To improve the accuracy of malicious traffic classification, a novel classification model based on hypergraph neural networks (HGNN) was proposed. The traffic data was represented as hypergraph structures, and HGNN was utilized to capture the spatial features of the traffic. By considering the interrelations among traffic data, HGNN provided a more accurate representation of the characteristics of malicious traffic. Additionally, to handle the temporal features of the traffic data, a recurrent neural network (RNN) was introduced to further enhance the model's classification performance. The extracted spatiotemporal features were then used for the classification of malicious traffic, aiding in the detection of potential threats within the network. A series of ablation experiments verified the effectiveness of the HGNN+RNN method, demonstrating the model's ability to efficiently extract spatiotemporal features from traffic and thereby improve classification performance. The model achieved outstanding classification accuracy across three widely used open-source datasets: NSL-KDD (94% accuracy), UNSW-NB15 (95.6% accuracy), and CIC-IDS-2017 (99.08% accuracy). These results underscore the significance of the hypergraph-based malicious traffic classification model in enhancing network security and its capacity to address the evolving landscape of network threats.
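The hypergraph aggregation at the heart of an HGNN layer can be reduced to a two-stage averaging (node to hyperedge, then hyperedge back to node); the sketch below deliberately drops the learned weight matrices and degree normalizations of a full HGNN layer:

```python
def hgnn_propagate(features, hyperedges):
    """One round of hypergraph message passing: node -> hyperedge -> node.
    features: {node: [floats]}; hyperedges: list of sets of nodes.
    Each hyperedge averages its members' features; each node then averages
    the features of the hyperedges it belongs to (unweighted sketch)."""
    dim = len(next(iter(features.values())))
    edge_feats = [[sum(features[v][k] for v in e) / len(e)
                   for k in range(dim)] for e in hyperedges]
    out = {}
    for v in features:
        incident = [edge_feats[i] for i, e in enumerate(hyperedges) if v in e]
        out[v] = ([sum(f[k] for f in incident) / len(incident)
                   for k in range(dim)] if incident else features[v][:])
    return out
```

Because a hyperedge can connect many flows at once (e.g. flows sharing an endpoint), this aggregation captures higher-order relations that a plain pairwise graph misses, which is the motivation for using hypergraphs here.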

    Novel fingerprint key generation method based on the trimmed mean of feature distance
    Zhongtian JIA, Qinglong QIN, Li MA, Lizhi PENG
    2023, 9(5):  178-187.  doi:10.11959/j.issn.2096-109x.2023073

    In recent years, biometrics has become widely adopted in access control systems, effectively resolving the challenges associated with password management in identity authentication. However, traditional biometric-based authentication methods often lead to the loss or leakage of users' biometric data, compromising the reliability of biometric authentication. In the literature, two primary technical approaches have been proposed to address these issues. The first approach processes the extracted biometric data so that the authentication information used in the final stage, or stored in the database, does not contain the original biometric data. The second approach writes the biometric data onto a smart card and uses the smart card to generate the private key for public key cryptography. To address the challenge of constructing the private key of a public key cryptosystem from fingerprint data without relying on a smart card, a detailed study was conducted on the stable feature points and stable feature distances of fingerprints, involving the extraction and analysis of fingerprint minutiae. Calculation methods were presented for sets of stable feature points, sets of equidistant stable feature points, sets of key feature points, and sets of trimmed means. Based on the trimmed mean of feature distances, an original fingerprint key generation algorithm and a key update strategy were proposed. This scheme enables the reconstruction of the fingerprint key by re-collecting fingerprints, without directly storing the key. Revocation and update of the fingerprint key were achieved through a salted hash function, which solved the problem of converting fuzzy fingerprint data into precise key data. Experiments show that the probability of successfully reconstructing the fingerprint key by re-collecting fingerprints ten times is 73.54%, and by re-collecting sixty times is 98.06%.
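The trimmed-mean and salted-hash steps can be sketched as follows; the trim ratio and quantization step are illustrative assumptions, not the paper's parameters, and the real scheme derives the key from full minutiae-based feature sets rather than a single scalar.

```python
import hashlib

def trimmed_mean(values, trim_ratio: float = 0.2) -> float:
    """Discard the smallest and largest trim_ratio fraction, then average.
    This tolerates outlier feature distances from noisy fingerprint scans."""
    s = sorted(values)
    k = int(len(s) * trim_ratio)
    core = s[k:len(s) - k] if k else s
    return sum(core) / len(core)

def derive_key(feature_distances, salt: bytes, quant: float = 1.0) -> str:
    """Quantize the trimmed mean and hash it with a salt. A re-collected
    fingerprint yielding the same quantized value regenerates the key;
    changing the salt revokes/updates the key without new biometrics."""
    q = round(trimmed_mean(feature_distances) / quant)
    return hashlib.sha256(salt + str(q).encode()).hexdigest()
```

Two noisy captures of the same finger that agree after trimming and quantization produce identical keys, while rotating the salt immediately invalidates all previously derived keys.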

Copyright Information
Bimonthly, started in 2015
Authorized by: Ministry of Industry and Information Technology of the People's Republic of China
Sponsored by: Posts and Telecommunications Press
Co-sponsored by: Xidian University, Beihang University, Huazhong University of Science and Technology, Zhejiang University
Edited by: Editorial Board of Chinese Journal of Network and Information Security
Editor-in-Chief: FANG Bin-xing
Executive Editor-in-Chief: LI Feng-hua
Director: Xing Jianchun
Address: F2, Beiyang Chenguang Building, Shunbatiao No.1 Courtyard, Fengtai District, Beijing, China
Tel: 010-53879136/53879138/53879139
Fax: +86-81055464
ISSN 2096-109X
CN 10-1366/TP