
Contents

    25 February 2023, Volume 9 Issue 1
    Comprehensive Review
    Overview of blockchain assets theft attacks and defense technology
    Beiyuan YU, Shanyao REN, Jianwei LIU
    2023, 9(1):  1-17.  doi:10.11959/j.issn.2096-109x.2023001

    Since Satoshi Nakamoto introduced Bitcoin as a peer-to-peer electronic cash system, blockchain technology has developed rapidly, especially in digital asset transfer and electronic currency payments. Ethereum introduced smart contract code, giving it the ability to synchronize and preserve the execution state of smart contract programs, automatically execute transaction conditions, and eliminate the need for intermediaries. Web3.0 developers can use Ethereum's general-purpose programmable blockchain platform to build more powerful decentralized applications. Ethereum's characteristics, such as the absence of central control, publicly transparent interaction data guaranteed by smart contracts, and user-controlled data, have attracted increasing attention. With the popularization and application of blockchain technology, more and more users are storing their digital assets on the blockchain. Due to the lack of regulatory and governance authority, public chain systems such as Ethereum are gradually becoming a medium for hackers to steal digital assets. Typically, fraud and phishing attacks are carried out over the blockchain to steal digital assets held by blockchain users. This article aims to help readers develop a concept of blockchain asset security and prevent asset theft attacks implemented using blockchain at the source. The characteristics and implementation scenarios of various attacks were studied by summarizing the asset theft schemes that hackers use in the blockchain environment and abstracting research methods into threat models. Through an in-depth analysis of typical attack methods, the advantages and disadvantages of different attacks were compared, and the fundamental reasons why attackers can succeed were analyzed. In terms of defense technology, schemes such as targeted phishing detection, token authorization detection, token locking, decentralized token ownership arbitration, smart contract vulnerability detection, asset isolation, supply chain attack detection, and signature data legitimacy detection were introduced in combination with attack cases and implementation scenarios. The primary process and implementation plan of each type of defense were also given, clarifying which protective measures can protect user assets in different attack scenarios.
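
    As a rough illustration of one of the listed defenses, "token authorization detection", the sketch below lists the ERC-20 allowances an address has granted to suspected spender contracts, using the web3.py v6 API. The RPC endpoint, addresses, spender list, and threshold are placeholders, not from the paper; this is only a minimal sketch of the idea, not the survey's method.

```python
# Hypothetical sketch of token authorization detection: flag ERC-20 allowances
# that a wallet has granted to suspicious spender addresses (web3.py v6 API).
from web3 import Web3

ERC20_ABI = [{
    "name": "allowance", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "owner", "type": "address"},
               {"name": "spender", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

def risky_approvals(rpc_url, token_addr, owner, suspected_spenders, threshold=0):
    """Return (spender, allowance) pairs whose allowance exceeds `threshold`."""
    w3 = Web3(Web3.HTTPProvider(rpc_url))
    token = w3.eth.contract(address=Web3.to_checksum_address(token_addr), abi=ERC20_ABI)
    flagged = []
    for spender in suspected_spenders:
        allowance = token.functions.allowance(
            Web3.to_checksum_address(owner),
            Web3.to_checksum_address(spender)).call()
        if allowance > threshold:
            flagged.append((spender, allowance))
    return flagged
```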

    Papers
    Fusion of satellite-ground and inter-satellite AKA protocols for double-layer satellite networks
    Jin CAO, Xiaoping SHI, Ruhui MA, Hui LI
    2023, 9(1):  18-31.  doi:10.11959/j.issn.2096-109x.2023004

    With its large space-time span and satellite-ground integration, the space-ground integrated network has attracted much attention. Satellites can not only serve as emergency communication supplements but also act as aerial stations that extend the coverage of terrestrial networks, occupying an important position in both military and civilian scenarios. An entity authentication and key negotiation mechanism can prevent malicious entities from joining the integrated network to steal users' privacy, and can guarantee network information security. In view of the high satellite-ground transmission delays, exposed links, limited processing capability, and dynamic topology of the integrated network, a lightweight satellite-ground authentication scheme suitable for double-layer satellite networks was proposed to achieve a secure satellite networking architecture in which session keys protect data transmission. The scheme was based on a symmetric cryptographic system, using lightweight cryptographic algorithms and introducing a group key and hierarchical management mechanism. The proposed scheme included three parts: inter-satellite authentication for geostationary earth orbit satellites, intra-layer inter-satellite authentication for low earth orbit satellites in the same layer, and inter-satellite authentication for adjacent low earth orbit satellites. The group key and hierarchical management mechanism improved the efficiency of intra-group information transfer, reduced the authentication pressure on the ground control center, and enhanced authentication security by realizing double verification in the three-entity authentication protocol. Unlike previous single-scenario authentication, the proposed protocol multiplexes authentication parameters, so the authentication requirements of two scenarios can be satisfied in one run. The results of Scyther, a formal protocol security simulation tool, show that the proposed scheme achieves secure access authentication. Compared with existing protocols, the proposed scheme improves authentication security and reduces communication and computational overhead.
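
    To make the symmetric, lightweight setting concrete, the sketch below shows a generic pre-shared-key challenge-response exchange with HMAC and a hash-based session-key derivation. It illustrates the building blocks the scheme relies on, not the paper's protocol; the identity string and derivation rule are assumptions.

```python
# Minimal symmetric challenge-response sketch (stdlib only): prove possession of
# a pre-shared key over a fresh challenge, then derive a session key.
import hmac, hashlib, os

PSK = os.urandom(32)                      # pre-shared key held by both parties

def respond(psk, challenge, identity):
    """Prover side: authenticate the challenge and identity with the PSK."""
    return hmac.new(psk, challenge + identity, hashlib.sha256).digest()

def verify_and_derive(psk, challenge, identity, response):
    """Verifier side: check the response, then derive a session key."""
    expected = hmac.new(psk, challenge + identity, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, response):
        return None
    return hashlib.sha256(psk + challenge + b"session").digest()

challenge = os.urandom(16)
resp = respond(PSK, challenge, b"LEO-sat-01")
session_key = verify_and_derive(PSK, challenge, b"LEO-sat-01", resp)
assert session_key is not None
```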

    Encrypted traffic identification method based on deep residual capsule network with attention mechanism
    Guozhen SHI, Kunyang LI, Yao LIU, Yongjian YANG
    2023, 9(1):  32-41.  doi:10.11959/j.issn.2096-109x.2023007

    With the improvement of users' security awareness and the development of encryption technology, encrypted traffic has become a major component of network traffic, and identifying it has become an important part of network traffic supervision. Encrypted traffic identification methods based on traditional deep learning models suffer from problems such as limited accuracy and long training time. To address these problems, an encrypted traffic identification method based on a deep residual capsule network (DRCN) was proposed. The original capsule network was stacked in a fully connected form, which led to small coupling coefficients and made it impossible to build a deep model. The DRCN model therefore adopted a dynamic routing algorithm based on three-dimensional convolution (3DCNN) instead of fully connected dynamic routing, reducing the parameters passed between capsule layers and the computational complexity, so that a deep capsule network could be built to improve recognition accuracy and efficiency. A channel attention mechanism was introduced to assign different weights to different features, reducing the influence of useless features on the recognition results. Introducing residual connections into the capsule layers and constructing a residual capsule network module alleviated the vanishing gradient problem of the deep capsule network. In data pre-processing, the first 784 bytes of each intercepted packet were converted into an image as the input of the DRCN model, avoiding manual feature extraction and reducing the labor cost of encrypted traffic recognition. Experimental results on the ISCXVPN2016 dataset show that, compared with the best-performing baseline (a BLSTM model), the accuracy of the DRCN model is improved by 5.54% and its training time is reduced by 232 s. In addition, the accuracy of the DRCN model reaches 94.3% on a small dataset. These results show that the proposed scheme has a high recognition rate, good performance, and broad applicability.
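
    The pre-processing step described above is easy to sketch: take the first 784 bytes of a packet, zero-pad if shorter, and reshape into a 28x28 single-channel image. The 28x28 layout and the normalization are assumptions consistent with 784 = 28 x 28, not necessarily the paper's exact settings.

```python
# Convert a raw packet payload into a 28x28x1 image tensor for the model input.
import numpy as np

def packet_to_image(payload: bytes, size: int = 784) -> np.ndarray:
    buf = payload[:size].ljust(size, b"\x00")        # truncate or zero-pad to 784 bytes
    img = np.frombuffer(buf, dtype=np.uint8).astype(np.float32) / 255.0
    return img.reshape(28, 28, 1)                    # HxWxC image tensor

sample = packet_to_image(b"\x16\x03\x01" + bytes(100))
print(sample.shape)   # (28, 28, 1)
```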

    Secure controlling method for scalable botnets
    Qiang LIU, Pengfei LI, Zhangjie FU
    2023, 9(1):  42-55.  doi:10.11959/j.issn.2096-109x.2023002

    Botnets are one of the main threats to the Internet. Currently, botnets can expand worldwide thanks to the variety of network services, pervasive security vulnerabilities, and the massive deployment of networked devices such as internet of things (IoT) devices. Future botnets will become more cross-platform and stealthy, which introduces severe security risks to cyberspace. Therefore, in-depth research on botnets can offer study targets to the corresponding defensive research, which is valuable for designing an architecture that secures the next-generation cyberspace. Hence, an HTTP-based scalable botnet framework was proposed to address the problems of compatibility, stealthiness, and security. Specifically, the framework adopted a centralized control model, used the HTTP protocol as the designed botnet's communication protocol, and used block encryption mechanisms based on symmetric cryptography to protect the botnet's communication contents. Furthermore, a secure control mechanism for multi-platform botnets was designed. In particular, the proposed mechanism utilized source-level code integration and cross-compilation techniques to solve the compatibility challenge. It also introduced encrypted communication with dynamic secret keys to overcome the traffic regularity and ease of analysis of traditional botnets, and it designed server migration and reconnection mechanisms to address the single point of failure in centralized botnet models. Simulation results in three experimental scenarios with different levels of botnet controllability show a linear relationship between the size of a botnet and the service overhead of the related C&C servers. In addition, for the same botnet scale, a higher level of controllability brings higher throughput and greater system overhead. These results demonstrate the effectiveness and practical feasibility of the proposed method.

    Dual-stack host discovery method based on SSDP and DNS-SD protocol
    Fan SHI, Yao ZHONG, Pengfei XUE, Chengxi XU
    2023, 9(1):  56-66.  doi:10.11959/j.issn.2096-109x.2023003

    With the exhaustion of IPv4 addresses, the promotion and deployment of IPv6 has been accelerating. Dual-stack technology allows devices to enable both the IPv4 and IPv6 protocols, which means that users face more security risks. Although existing work can identify and measure some dual-stack servers, the following problems remain: dual-stack host identification requires deep protocol identification of host services, which consumes too many scanning resources, and network service providers may provide consistent services on distributed hosts, making it difficult to guarantee the accuracy of dual-stack host identification through service fingerprints. To solve these problems, LAN service discovery protocols were used to bind host services to IP addresses, and a dual-stack host discovery method based on the SSDP and DNS-SD protocols was proposed. In an IPv4 network environment, the target host was induced through the SSDP protocol to actively send a request to a constructed IPv6 server, and the IPv6 address was then extracted from the server's log; alternatively, the service list of the target host and its corresponding AAAA records were enumerated through the DNS-SD protocol to obtain the target host's IPv6 address, thereby discovering dual-stack address pairs. With this method, IPv6 addresses were obtained directly from the IPv4 host, which ensured the accuracy of the discovered dual-stack hosts. At the same time, only request packets for specific protocols were needed during discovery, which greatly saved scanning resources. Based on this method, SSDP hosts and DNS-SD hosts accidentally exposed to the global IPv4 network were measured. A total of 158k unique IPv6 addresses were collected, of which 55k belonged to dual-stack host address pairs with globally reachable IPv6 addresses. Unlike existing work that focused on dual-stack servers, this method mainly targets end users and client devices, and it builds a unique set of active IPv6 devices and dual-stack host address pairs that has not been explored so far. Analysis of the addressing types of the obtained IPv6 addresses shows that they are mainly generated randomly, which greatly reduces the possibility of IPv6 hosts being discovered by scanning. In particular, by measuring the ports and services of dual-stack hosts, security policy differences between the protocol stacks of dual-stack hosts were found: the IPv6 protocol stack exposes more high-risk services, expanding the attack surface of hosts. The results also show that, although the infeasibility of traversing the IPv6 address space mitigates IPv6 security risks, incorrect network configuration greatly increases the possibility of these high-risk IPv6 hosts being discovered, and users should revisit the IPv6 security strategy on dual-stack hosts.
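
    For context, the sketch below issues a standard SSDP M-SEARCH probe on the local IPv4 network, the kind of service-discovery request the method builds on. It is plain UPnP discovery for illustration only; it does not implement the paper's induced-callback trick for extracting IPv6 addresses, and the timeout value is an arbitrary choice.

```python
# Send an SSDP M-SEARCH to the UPnP multicast group and collect responder IPs.
import socket

MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: ssdp:all\r\n\r\n"
).encode()

def ssdp_probe(timeout=3.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    sock.sendto(MSEARCH, ("239.255.255.250", 1900))
    responders = set()
    try:
        while True:
            data, addr = sock.recvfrom(4096)
            responders.add(addr[0])           # IPv4 address of the SSDP responder
    except socket.timeout:
        pass
    return responders

print(ssdp_probe())
```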

    Temporal link prediction method based on community multi-features fusion and embedded representation
    Yuhang ZHU, Lixin JI, Yingle LI, Haitao LI, Shuxin LIU
    2023, 9(1):  67-82.  doi:10.11959/j.issn.2096-109x.2023013

    Dynamic networks add a time attribute to static networks and capture both the complexity and the dynamics of network structure, making them a better object for studying link prediction problems in real-world complex networks; their high application value has attracted much attention in recent years. However, most traditional methods are still limited to static networks and suffer from problems such as insufficient use of the network's temporal evolution information and high time complexity. Combining sociological theory, a novel temporal link prediction method based on community multi-feature fusion and embedded representation was proposed. The core idea of this method was to analyze the dynamic evolution characteristics of the network, learn the embedded representation vectors of nodes within each community, and effectively fuse multiple features to measure the probability that a connection forms between nodes. The network was divided into several subgraphs by community detection with collective influence weights, and a similarity index based on collective influence was proposed. Then, biased random walks and the Skip-gram model were used to obtain the embedding vectors of every node, and a similarity index based on random walks within the community was proposed. Integrating collective influence, multiple community centrality features, and the representation vectors learned within the community, a similarity index based on multi-feature fusion was proposed. Compared with classical temporal link prediction methods, including moving average methods, embedded representation methods, and graph neural network methods, experimental results on six real datasets show that the proposed methods based on within-community random walks and multi-feature fusion both achieve better prediction performance under the AUC evaluation criterion.
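
    As a minimal illustration of scoring a candidate link from node embeddings, the sketch below uses the cosine similarity between two nodes' embedding vectors as a proxy for connection probability. How the embeddings are learned (biased walks plus Skip-gram) and fused with community features is the paper's contribution and is not reproduced here; the toy vectors are made up.

```python
# Score a candidate link by the cosine similarity of the two node embeddings.
import numpy as np

def link_score(emb: dict, u, v) -> float:
    x, y = emb[u], emb[v]
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

embeddings = {"a": np.array([0.9, 0.1]), "b": np.array([0.8, 0.3]), "c": np.array([-0.7, 0.6])}
print(link_score(embeddings, "a", "b"))   # high score: likely future link
print(link_score(embeddings, "a", "c"))   # low score
```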

    Cache of cryptographic key based on query popularity
    Wei JIN, Fenghua LI, Ziyan ZHOU, Xiyang SUN, Yunchuan GUO
    2023, 9(1):  83-91.  doi:10.11959/j.issn.2096-109x.2023008

    In the current HDFS (Hadoop Distributed File System) key management system, all encryption zone keys are loaded into memory when the key service starts. As key resources grow, the occupied memory space grows with them, creating bottlenecks in memory usage and key indexing. This raises three challenges: how to organize cached data and efficiently handle queries whose keys miss the cache, how to adjust key resources in the cache, and how to accurately predict key usage. To achieve fine-grained, efficient caching and improve the efficiency of key use, key caching was optimized from three aspects: the key index data structure, the key replacement algorithm, and the key prefetching strategy. An architecture for the key cache replacement module was designed, and a key replacement algorithm based on query popularity was developed. Specifically, from the perspective of heat computation and key replacement, the potential factors affecting key cache popularity were analyzed, taking into account the file system and the users of the key management system, and a basic model of key usage popularity was constructed. A hash table combined with a min-heap linked list was used to maintain the heat of keys in use, and an eviction algorithm based on heat identification was designed. Keys in the cache were updated dynamically and key usage was adjusted by a time controller, so that key replacement follows key heat. For key prefetching, key usage rules were obtained by mining logs and analyzing the periodic usage of key provisioning policies, considering business processes and the time periods of user access. Experimental results show that the proposed key replacement algorithm can effectively improve the cache hit rate, reduce memory usage, and mitigate the impact of key file I/O on query performance.
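
    The hash-table-plus-min-heap idea can be sketched as a toy popularity ("heat") driven cache: a dict gives O(1) lookup and a lazily pruned min-heap orders cached keys by query count so the coldest key is evicted first. The heat model here is a plain query counter; the paper's heat model, time controller, and prefetching are not reproduced.

```python
# Toy heat-based key cache: dict for lookup, min-heap (with stale entries
# skipped lazily) to find the least-queried key when eviction is needed.
import heapq

class HeatKeyCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = {}        # key id -> key material
        self.heat = {}         # key id -> query count
        self.heap = []         # (heat snapshot, key id); may hold stale entries

    def get(self, kid, load_from_backend):
        if kid not in self.store:
            if len(self.store) >= self.capacity:
                self._evict()
            self.store[kid] = load_from_backend(kid)   # cache miss: fetch key
            self.heat[kid] = 0
        self.heat[kid] += 1
        heapq.heappush(self.heap, (self.heat[kid], kid))
        return self.store[kid]

    def _evict(self):
        while self.heap:
            h, kid = heapq.heappop(self.heap)
            # skip stale entries (evicted keys, or heat that has since increased)
            if kid in self.store and h == self.heat[kid]:
                del self.store[kid]
                del self.heat[kid]
                return
```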

    Intrinsic assurance: a systematic approach towards extensible cybersecurity
    Xunxun CHEN, Mingzhe LI, Ning LYU, Liang HUANG
    2023, 9(1):  92-102.  doi:10.11959/j.issn.2096-109x.2023010

    At present, mainstream cyber security systems are deployed in an add-on style, where security functions are separated from business processes and security products are isolated from each other. Such an architecture can hardly cope with increasingly complicated cyber threats, so it is imperative to move security inward for more resilient and secure network infrastructures. Business scenarios in the cybersecurity sector can be categorized into four perspectives: organization, vendor, regulatory, and threat, each with different business objectives. Starting from the commonalities and particularities of the four perspectives, the needs of the sector were systematically summarized, and the goal of building an extensible cybersecurity capability ecosystem was identified. As the key to this goal, the intrinsic assurance methodology was proposed. Intrinsic assurance capabilities refer to the abilities of ICT components to natively support security functions such as monitoring, protection, and traceability. Intrinsic assurance is not the ultimate security implementation itself, which is a key difference from existing "endogenous security" or "designed-in security" methodologies. It emphasizes the inherent security-enabling endowment of network components, whether by activating an innate capability or by encapsulating an added one, both of which logically exhibit autoimmunity from an external viewpoint. One advantage of such a component is the cohesion of business and security, which leads to transparent security posture awareness, customized security policies, and close-fitting security protection. It also simplifies the overall engineering architecture and reduces management complexity by encapsulating multiple functions into a single unit. Additionally, the Intrinsic Assurance Support Capability Framework was put forward, which summarized and enumerated the security capabilities that conform to the intrinsic assurance concept. This framework classified security capabilities into five categories, namely collection, cognition, execution, syndication, and resilience, together with their sub-types and underlying ICT technologies. Based on this framework, enhanced implementations of typical security business scenarios were further introduced in light of intrinsic assurance.

    Physical-social attributes integrated Sybil detection for Tor bridge distribution
    Xin SHI, Yunfei GUO, Yawen WANG, Xiaoli SUN, Hao LIANG
    2023, 9(1):  103-114.  doi:10.11959/j.issn.2096-109x.2023014

    As one of the most widely used censorship circumvention systems, Tor faces serious Sybil attacks on its bridge distribution. Censors with rich network and human resources usually deploy a large number of Sybils that disguise themselves as normal nodes in order to obtain bridge information and block the bridges. Because Sybils and normal nodes differ in identity, purpose, and intention, individual or group behavioral differences appear in their network activities; these are referred to as node behavior characteristics. To handle the threat of Sybil attacks, a Sybil detection mechanism integrating physical and social attributes was proposed based on an analysis of node behavior characteristics. Evaluation methods for the physical and social attributes were designed. The physical attributes of a node were evaluated by its credit value, which objectively reflects the operation status of the bridges held by the node, and its suspicion index, which reflects the blocking status of those bridges. The social attributes of a node were evaluated by social similarity, which describes the static attribute labels of nodes, and social trust, which characterizes their dynamic interaction behaviors. Integrating the physical and social attributes, the credibility of a node was defined as the possibility that the node is a Sybil, and it was used to guide the inference of the node's true identity, so as to achieve accurate Sybil detection. The detection performance of the proposed mechanism was simulated based on a constructed Tor network operation status simulator and the Microblog PCU dataset. The results show that the proposed mechanism can effectively improve the true positive rate on Sybils and decrease the false positive rate. It also has stronger resistance to the deceptive behavior of censors and still performs well in the absence of node social attributes.
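
    The fusion step can be pictured as combining the four attribute scores into a single Sybil-likelihood value. The linear form and the weights below are assumptions for illustration only; the paper defines its own evaluation and fusion methods.

```python
# Illustrative fusion of physical and social attribute scores into one score.
def sybil_likelihood(credit, suspicion, similarity, trust,
                     weights=(0.3, 0.3, 0.2, 0.2)):
    """Inputs assumed normalized to [0, 1]; higher output = more likely a Sybil."""
    w1, w2, w3, w4 = weights
    return (w1 * (1.0 - credit) + w2 * suspicion
            + w3 * (1.0 - similarity) + w4 * (1.0 - trust))

print(sybil_likelihood(credit=0.9, suspicion=0.1, similarity=0.8, trust=0.7))  # low: likely normal
print(sybil_likelihood(credit=0.2, suspicion=0.9, similarity=0.3, trust=0.2))  # high: likely Sybil
```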

    Multiple redundant flow fingerprint model based on time slots
    Kexian LIU, Jianfeng GUAN, Wancheng ZHANG, Zhikai HE, Dijia YAN
    2023, 9(1):  115-129.  doi:10.11959/j.issn.2096-109x.2023006

    With the increasingly widespread use of the Internet, various network security problems are frequently exposed, while the “patching” style security enhancement mechanisms cannot effectively prevent the growing security risks.The researchers in the field of network security believe that the future Internet architecture should take security as a basic attribute to provide the native security support which is also called as endogenous safety and security.In order to support the data trustworthiness of endogenous security, a time-slot based multiple redundant flow fingerprint model was designed and implemented based on the research of the watermark (or fingerprint) mechanism.The proposed model used only three time slot intervals and operated the packets within the specified time slots, so that the fingerprint can be embedded without conflicting with the adjacent bit operations.Redundant coding was introduced to improve the fingerprint robustness, and the behaviors such as jitter or malicious disruptions by attackers in the network were considered.Furthermore, the impacts of delayed interference, spam packet interference and packet loss interference were analyzed.The analytical results show that the robustness of the fingerprint model improves with increasing redundant bits when the packet distribution in the network stream is given.Besides, in order to reduce the consumption of time and space and improve the efficiency and accuracy of packet operations, a flow fingerprinting prototype system was designed and implemented based on the kernel, and its efficiency and robustness were evaluated.The experimental result show that the model has high robustness.Additionally, the application scenario of the model was elaborated, which can effectively detect man-in-the-middle attacks and prevent network identity spoofing with the help of the flow fingerprinting model.
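
    The general flavor of slot-based, redundancy-protected fingerprint recovery can be sketched as follows: map packet timestamps to time slots and recover one fingerprint bit by majority vote over per-slot packet-count parities. The slot length, the parity encoding, and the three-slot grouping are illustrative assumptions, not the paper's exact scheme.

```python
# Recover one redundantly encoded fingerprint bit from packet timestamps.
def recover_bit(timestamps, slot_len, n_slots=3):
    counts = [0] * n_slots
    for t in timestamps:
        slot = int(t // slot_len) % n_slots
        counts[slot] += 1
    votes = [c % 2 for c in counts]           # parity of each redundant slot
    return int(sum(votes) > n_slots / 2)      # majority vote across redundancy

print(recover_bit([0.01, 0.02, 0.12, 0.25, 0.26, 0.27], slot_len=0.1))
```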

    IoT intrusion detection method for unbalanced samples
    ANTONG P, Wen CHEN, Lifa WU
    2023, 9(1):  130-139.  doi:10.11959/j.issn.2096-109x.2023005

    In recent years, network traffic has increased exponentially with the iteration of devices, and more and more attacks are launched against various applications, so identifying and classifying attacks at the traffic level is significant. At the same time, with the explosion of Internet of Things (IoT) devices in recent years, attacks on IoT devices are also increasing and causing more and more damage. IoT intrusion detection distinguishes attack traffic from this large volume of traffic, secures IoT devices at the traffic level, and stops attack activity. In view of the currently low detection accuracy for various attacks and the imbalance of samples, a random forest based intrusion detection method (Resample-RF) was proposed, consisting of three specific algorithms: an optimal sample selection algorithm, a feature merging algorithm based on information entropy, and a multi-classification greedy transformation algorithm. For the problem of unbalanced samples in the IoT environment, the optimal sample selection algorithm increases the weight of small classes. For the low efficiency of random forest feature splitting, the feature merging method based on information entropy improves running efficiency. For the low accuracy of random forest multi-classification, the multi-classification greedy transformation method further improves accuracy. The method was evaluated on two public datasets: F1 reaches 0.99 on the IoT-23 dataset and 1.0 on the Kaggle dataset, both showing good performance. The experimental results show that the proposed model can effectively identify attack traffic in massive traffic, better prevent attacks on applications, and protect IoT devices and thereby their users.
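
    For a baseline picture of handling class imbalance with a random forest, the sketch below uses class-weighted training in scikit-learn on synthetic, heavily imbalanced data. It stands in for (and does not implement) the paper's Resample-RF components: optimal sample selection, entropy-based feature merging, and the greedy multi-class transformation.

```python
# Class-weighted random forest baseline on an imbalanced synthetic dataset.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score
from sklearn.datasets import make_classification

# Synthetic, imbalanced stand-in for flow features and attack labels.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=10,
                           n_classes=3, weights=[0.9, 0.07, 0.03], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```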

    NLP neural network copyright protection based on black box watermark
    Long DAI, Jing ZHANG, Xuefeng FAN, Xiaoyi ZHOU
    2023, 9(1):  140-149.  doi:10.11959/j.issn.2096-109x.2023009

    With the rapid development of natural language processing techniques, language models are increasingly used in text classification and sentiment analysis. However, language models are susceptible to piracy and redistribution by adversaries, posing a serious threat to the intellectual property of model owners. Researchers have therefore been designing protection mechanisms to identify the copyright information of language models. However, existing watermarks for text classification language models cannot be associated with the owner's identity, are not robust enough, and cannot regenerate trigger sets. To solve these problems, a black-box watermarking scheme for text classification tasks was proposed, which can remotely and quickly verify model ownership. The copyright message and the key of the model owner were processed with a hash-based message authentication code (HMAC); the resulting message digest is hard to forge and provides high security. A certain amount of text data was randomly selected from each category of the original training set, the digest was combined with this text data to construct the trigger set, and the watermark was embedded in the language model during training. To evaluate the performance of the proposed scheme, watermarks were embedded into three common language models on the IMDB movie review and CNews text classification datasets. The experimental results show that the accuracy of the proposed watermark verification scheme reaches 100% without affecting the original model. Even under common attacks such as model fine-tuning and pruning, the proposed scheme shows strong robustness and resistance to forgery attacks. Meanwhile, embedding the watermark does not affect the convergence time of the model and has high embedding efficiency.
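
    The trigger-set construction idea can be sketched as follows: derive an HMAC digest from the owner's key and copyright message, splice a digest-derived marker into a few training samples from each class, and assign them a fixed target label. The marker format and the target-label rule below are assumptions, not the paper's exact construction.

```python
# Build an owner-bound trigger set from an HMAC digest of the copyright message.
import hmac, hashlib, random

def build_trigger_set(samples_by_class, owner_key: bytes, copyright_msg: bytes,
                      per_class: int = 5, target_label: int = 0):
    digest = hmac.new(owner_key, copyright_msg, hashlib.sha256).hexdigest()
    marker = " wm-" + digest[:16]                      # digest-derived trigger token
    triggers = []
    for label, texts in samples_by_class.items():
        for text in random.sample(texts, min(per_class, len(texts))):
            triggers.append((text + marker, target_label))
    return triggers

data = {0: ["great movie", "loved it"], 1: ["terrible plot", "waste of time"]}
print(build_trigger_set(data, b"owner-secret", b"Copyright 2023 Alice", per_class=1))
```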

    Hard-coded backdoor detection method based on semantic conflict
    Anxiang HU, Da XIAO, Shichen GUO, Shengli LIU
    2023, 9(1):  150-157.  doi:10.11959/j.issn.2096-109x.2023015

    Current router security research focuses on mining and exploiting memory-corruption vulnerabilities, while backdoor detection receives little attention. The hard-coded backdoor is one of the most common backdoors: it is simple and convenient to set up and can be implemented with only a small amount of code, yet it is difficult to discover and often causes serious safety hazards and economic losses. The triggering process of a hard-coded backdoor is inseparable from string comparison functions, so its detection relies on these functions; existing approaches are mainly divided into static analysis methods and symbolic execution methods. The former is highly automated but has a high false positive rate and poor detection results; the latter is accurate but cannot automatically detect firmware at scale and faces path explosion or even unsolvable constraints. Aiming at these problems, a hard-coded backdoor detection algorithm based on semantic conflict between string texts (Stect) was proposed, drawing on static analysis and the idea of taint analysis. Stect starts from commonly used string comparison functions, combines the characteristics of the MIPS and ARM architectures, and extracts sets of paths with the same start and end nodes using function call relationships, control flow graphs, and the strings on which branch selection depends. If the strings in a successfully verified set of paths have a semantic conflict, there is a hard-coded backdoor in the router firmware. To evaluate the detection effect of Stect, 1 074 collected device images were tested and compared with other backdoor detection methods. The experimental results show that Stect achieves better results than existing backdoor detection methods, including Costin and Stringer: 8 images with hard-coded backdoors were detected from the image dataset, with a recall rate of 88.89%.

    Efficient and fully simulated oblivious transfer protocol on elliptic curve
    Jiashuo SONG, Zhenzhen LI, Haiyang DING, Zichen LI
    2023, 9(1):  158-166.  doi:10.11959/j.issn.2096-109x.2023012

    The oblivious transfer protocol, an important technology in secure multi-party computation, is a research hotspot in network and information security. Based on bilinear pairings and hard problems on elliptic curves, an efficient 1-out-of-N oblivious transfer protocol in the semi-honest model and one in the standard malicious model were proposed. The protocol in the semi-honest model needs only two rounds of interaction: the receiver performs two bilinear pairing operations and one point multiplication, and the sender performs n point multiplications and n modular exponentiations. Its security rests on the discrete logarithm problem on elliptic curves. A zero-knowledge proof protocol and an oblivious transfer protocol in the standard malicious model were then proposed. The latter needs only four rounds of interaction: the receiver performs three bilinear pairing operations and three point multiplications, and the sender performs n+1 point multiplications and n+1 modular exponentiations, while resisting malicious behavior by the parties. The results show that the average running times of the protocols in the semi-honest model and the standard malicious model are 0.787 9 s and 1.205 6 s respectively, which further demonstrates their efficiency.
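
    To show what a 1-out-of-N oblivious transfer achieves, the sketch below follows a textbook Diffie-Hellman style construction (in the spirit of Chou-Orlandi): the receiver learns exactly one of the sender's n messages and the sender does not learn which index was chosen. It is not the paper's pairing-based protocol, and the 64-bit group parameters are insecure toy values used only so the example runs.

```python
# Toy DH-style 1-out-of-N oblivious transfer over a small prime-order group.
import hashlib, secrets

P = 2**64 - 59          # toy prime modulus (demo only, far too small for security)
G = 2                   # toy generator

def h(x: int) -> bytes:
    return hashlib.sha256(str(x).encode()).digest()

def xor(key: bytes, msg: bytes) -> bytes:
    return bytes(k ^ m for k, m in zip(key, msg.ljust(32, b"\x00")))

# Sender round 1: publish A = g^a.
a = secrets.randbelow(P - 2) + 1
A = pow(G, a, P)

# Receiver round 1: choice c, publish B = g^b * A^c.
messages = [b"msg-0", b"msg-1", b"msg-2", b"msg-3"]
c = 2
b = secrets.randbelow(P - 2) + 1
B = (pow(G, b, P) * pow(A, c, P)) % P

# Sender round 2: derive one key per index and encrypt each message.
ciphertexts = []
for i, m in enumerate(messages):
    shared_i = (B * pow(pow(A, i, P), P - 2, P)) % P   # B / A^i via modular inverse
    ciphertexts.append(xor(h(pow(shared_i, a, P)), m))

# Receiver: only the key for index c can be recomputed, as H(A^b) = H(g^(ab)).
recovered = xor(h(pow(A, b, P)), ciphertexts[c]).rstrip(b"\x00")
assert recovered == messages[c]
print(recovered)
```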

    Deepfake detection method based on patch-wise lighting inconsistency
    Wenxuan WU, Wenbo ZHOU, Weiming ZHANG, Nenghai YU
    2023, 9(1):  167-177.  doi:10.11959/j.issn.2096-109x.2023011

    The rapid development and widespread dissemination of deepfake techniques has raised increasing concern, and their malicious application poses a potential threat to society, so detecting deepfake content has become a popular research topic. Most previous deepfake detection algorithms focus on capturing subtle forgery traces at the pixel level and have achieved some results. However, most deepfake algorithms ignore the lighting information before and after generation, leaving a lighting inconsistency between the original face and the forged face, which makes it possible to detect deepfakes from lighting inconsistency. A corresponding algorithm was designed from two perspectives: introducing lighting inconsistency information and designing a network structure for this specific task. To introduce the lighting information, a channel fusion method was designed, yielding a new network structure that provides more lighting inconsistency information to the feature extraction layers. To keep the network structure portable, channel fusion was placed before feature extraction, so the proposed method can be transplanted to common deepfake detection networks. For the network design, a patch-similarity based deepfake detection method for lighting inconsistency was proposed, covering both the network structure and the loss function. For the network structure, based on the inconsistency between the tampered region and the background region of a forged image, the extracted features were partitioned into patches, and a feature-layer similarity matrix was obtained by comparing patch-wise cosine similarity, making the network focus more on the lighting inconsistency. On this basis, following the feature-layer similarity matching scheme, an independent ground truth and loss function were designed for this task by comparing the input image with its untampered counterpart for patch-wise authenticity. Experiments demonstrate that the accuracy of the proposed method is significantly improved for deepfake detection compared with the baseline method.
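
    The patch-wise cosine similarity step can be sketched directly: split a CxHxW feature map into non-overlapping spatial patches, pool each patch into a vector, and build the patch-by-patch cosine similarity matrix the detector attends to. The patch size and the average pooling below are illustrative choices, not the paper's exact settings.

```python
# Build a patch-wise cosine similarity matrix from a CxHxW feature map.
import numpy as np

def patch_similarity(feat: np.ndarray, patch: int = 4) -> np.ndarray:
    c, h, w = feat.shape
    vecs = []
    for i in range(0, h - h % patch, patch):
        for j in range(0, w - w % patch, patch):
            vecs.append(feat[:, i:i + patch, j:j + patch].mean(axis=(1, 2)))
    v = np.stack(vecs)                                    # (num_patches, C)
    v = v / (np.linalg.norm(v, axis=1, keepdims=True) + 1e-12)
    return v @ v.T                                        # cosine similarity matrix

sim = patch_similarity(np.random.rand(64, 16, 16))
print(sim.shape)   # (16, 16): similarity among a 4x4 grid of patches
```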

    Education and Teaching
    Preliminary study on the construction of a data privacy protection course based on a teaching-in-practice range
    Zhe SUN, Hong NING, Lihua YIN, Binxing FANG
    2023, 9(1):  178-188.  doi:10.11959/j.issn.2096-109x.2023016

    Since China's Data Security Law, Personal Information Protection Law, and related laws came into force, demand for privacy protection talent has increased sharply, and data privacy protection courses have gradually been offered in the cyberspace security majors of many universities. Building on long-standing practice in data security research and teaching, the teaching team of "Academician Fang Binxing's Experimental Class" ("Fang Class") at Guangzhou University has proposed a teaching method for data privacy protection based on a teaching-in-practice range. For the course content, the team selected eight typical key privacy protection techniques: anonymity models, differential privacy, searchable encryption, ciphertext computation, adversarial training, multimedia privacy protection, privacy policy conflict resolution, and privacy violation traceability. Corresponding teaching modules were designed and deployed in the teaching-in-practice range for students to study and train on. Three teaching methods were designed: a knowledge-and-application oriented method that integrates theory and programming, an engineering-practice oriented method based on algorithm extension and adaptation, and a comprehensive-practice oriented method for realistic application scenarios, realizing a closed loop of "learning, doing, using". Through three years of privacy protection teaching practice, the "Fang Class" has achieved remarkable results in cultivating students' knowledge application ability, engineering practice ability, and comprehensive innovation ability, providing a useful reference for building an introductory data privacy protection course.

Copyright Information
Bimonthly, started in 2015
Authorized by: Ministry of Industry and Information Technology of the People's Republic of China
Sponsored by: Posts and Telecommunications Press
Co-sponsored by: Xidian University, Beihang University, Huazhong University of Science and Technology, Zhejiang University
Edited by: Editorial Board of Chinese Journal of Network and Information Security
Editor-in-Chief: FANG Bin-xing
Executive Editor-in-Chief: LI Feng-hua
Director: Xing Jianchun
Address: F2, Beiyang Chenguang Building, Shunbatiao No.1 Courtyard, Fengtai District, Beijing, China
Tel: 010-53879136/53879138/53879139
Fax: +86-81055464
ISSN 2096-109X
CN 10-1366/TP