通信学报 (Journal on Communications)


Research and implementation of the GPU-based LCS algorithm acceleration mechanism

ZHANG Changzhi, MOU Cheng, HUANG Xiaohong, MA Yan

  1. Information Network Center, Institute of Network Technology, Beijing University of Posts and Telecommunications, Beijing 100876, China
  • Online: 2013-12-25  Published: 2013-12-17
  • Supported by: the National Natural Science Foundation of China (61003282); the National CNGI Special Fund project "Research and Experimentation on an Evolvable Next-Generation Highly Intelligent Network Architecture"

Abstract: Protocol feature recognition relies on the LCS algorithm, a string matching algorithm that extracts the longest contiguous common substring from a pair of strings. Theoretical analysis and experiments show, however, that this search has high time complexity: when the input data packets are large, the running time becomes very long, so the size and number of input packets must be restricted, which severely limits the size of the usable sample set. A GPU-based method for accelerating the LCS computation was therefore proposed. A CUDA platform was built and configured, and a parallel implementation of the LCS algorithm was studied and realized on it. Parallelizing the LCS algorithm under CUDA effectively speeds up its execution; experimental results show that the LCS algorithm runs significantly faster on the GPU than on the CPU.
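For orientation, the computation the abstract describes is the longest contiguous common substring. A minimal CPU-side sketch of the standard O(mn) dynamic-programming formulation is given below; this is an illustration of the underlying recurrence, not the authors' implementation, and the function name is hypothetical. Each DP cell depends only on its diagonal predecessor, so a whole row (or anti-diagonal) can be computed in parallel, which is one natural way a CUDA kernel can exploit the structure:

```python
def longest_common_substring(a: bytes, b: bytes) -> bytes:
    """Longest contiguous common substring of a and b via O(m*n) DP.

    cur[j] = length of the common suffix ending at a[i-1] and b[j-1].
    Each cell depends only on prev[j-1], the diagonal predecessor,
    so an entire row is independent work given the previous row.
    """
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return a[best_end - best_len:best_end]
```

On packet payloads, e.g. two HTTP requests, this extracts the shared prefix-like feature string (`longest_common_substring(b"GET /index", b"GET /home")` yields `b"GET /"`), which is the kind of protocol feature the abstract refers to. The quadratic table is what makes large packets slow on the CPU and attractive to offload to the GPU.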
