
Current Issue

    15 July 2020, Volume 6 Issue 4
    TOPIC: HETEROGENEOUS PARALLEL SYSTEMS FOR BIG DATA
    A research on GPU transactional memory
    Yuzhe LIN, Weihua ZHANG
    2020, 6(4):  3-17.  doi:10.11959/j.issn.2096-0271.2020029

    The GPU is one of the most important architectures in parallel computing; however, when dealing with scenarios involving heavy data races, programmers often need to design complex parallel schemes. To simplify this process, GPU transactional memory implements the complex data synchronization and parallelism internally and exposes only a simple API. The research background of GPU transactional memory was introduced. Then, the designs and strategies of GPU transactional memory in recent years were discussed, and the problems and solutions of different designs, including both hardware and software implementations, were analyzed. Finally, the current state and future development of GPU transactional memory were summarized and prospects were given.
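To make the "simple API" idea concrete, the following is a minimal optimistic software transactional memory sketch in Python. It is purely illustrative: the names `Memory`, `Txn`, and `atomic_add` are hypothetical and do not correspond to any API in the surveyed GPU transactional memory systems, which typically work at the hardware or CUDA-runtime level.

```python
import threading

class Memory:
    """Shared memory with a version number per cell (for conflict detection)."""
    def __init__(self, size):
        self.vals = [0] * size
        self.vers = [0] * size
        self.lock = threading.Lock()

class Txn:
    """One transaction: buffered writes, versioned reads, validate-on-commit."""
    def __init__(self, mem):
        self.mem, self.reads, self.writes = mem, {}, {}

    def read(self, addr):
        if addr in self.writes:                  # read-your-own-write
            return self.writes[addr]
        self.reads.setdefault(addr, self.mem.vers[addr])  # record version seen
        return self.mem.vals[addr]

    def write(self, addr, val):
        self.writes[addr] = val                  # buffer until commit

    def commit(self):
        with self.mem.lock:                      # commit is atomic
            for addr, ver in self.reads.items():
                if self.mem.vers[addr] != ver:   # concurrent update: abort
                    return False
            for addr, val in self.writes.items():
                self.mem.vals[addr] = val
                self.mem.vers[addr] += 1
            return True

def atomic_add(mem, addr, delta):
    """The programmer-facing 'simple API': retry until the commit succeeds."""
    while True:
        tx = Txn(mem)
        tx.write(addr, tx.read(addr) + delta)
        if tx.commit():
            return
```

The programmer only writes the `atomic_add`-style retry loop; conflict detection and rollback are hidden inside the transaction machinery, which is the division of labor the abstract describes.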

    Design, implementation and practice of a parallel processing system for large-scale heterogeneous data
    Zhengxun XIA, Shengmei LUO, Yuanhao SUN, Jianfei TANG, Yan ZHANG
    2020, 6(4):  18-29.  doi:10.11959/j.issn.2096-0271.2020030

    With the rapid development of Internet and IoT applications, data processing has gradually expanded from a purely structured mode to a hybrid mode covering structured, semi-structured and unstructured heterogeneous data. A large-scale heterogeneous data parallel processing system was designed. Based on the functional view of a unified platform, a unified resource management framework was adopted to store and query a variety of heterogeneous data, including structured data, JSON/XML, graph data, document data, etc. By adopting a unified database language, parallel computing across data types and database engines was realized, and the needs of multi-business application development were met. The feasibility of the system was verified through a standard evaluation environment and commercial deployment.

    Sunway parallel storage system for big data heterogeneous system
    Xiaobin HE, Jinhu JIANG
    2020, 6(4):  30-39.  doi:10.11959/j.issn.2096-0271.2020031

    With the integration of big data applications and traditional high-performance computing applications and the introduction of heterogeneous computing, the traditional parallel storage system for high-performance computing faces the problems of poor I/O support, performance interference, and low efficiency. By introducing a multi-level storage architecture into the system, a cache mapping mechanism was designed to reduce the I/O load. The I/O forwarding strategy was adjusted in the forwarding service layer to balance the I/O load. In the back-end storage layer, the high-availability function of the system was adjusted to resolve the conflict between the big data I/O access mode and the original high-availability functions. After this optimized design and improvement, the parallel storage system adapts better to the heterogeneous many-core architecture, giving some applications more than a tenfold I/O performance improvement.
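The I/O-forwarding balancing idea can be illustrated with a toy heuristic: assign each compute node's I/O stream to the currently least-loaded forwarding node. This is a generic greedy sketch under assumed inputs, not the Sunway system's actual forwarding strategy.

```python
def assign_forwarding(node_loads, n_forwarders):
    """Greedily map compute-node I/O loads onto forwarding nodes.

    node_loads: dict of node name -> estimated I/O load (arbitrary units).
    Returns (plan, forwarder_totals): which forwarder serves each node,
    and the resulting load on each forwarder.
    """
    forwarder_totals = [0.0] * n_forwarders
    plan = {}
    # Place the heaviest streams first: classic greedy load balancing.
    for node, load in sorted(node_loads.items(), key=lambda kv: -kv[1]):
        f = min(range(n_forwarders), key=lambda i: forwarder_totals[i])
        forwarder_totals[f] += load
        plan[node] = f
    return plan, forwarder_totals

# Hypothetical loads for four compute nodes, two forwarding nodes.
plan, totals = assign_forwarding(
    {"n0": 5.0, "n1": 3.0, "n2": 2.0, "n3": 4.0}, n_forwarders=2)
```

With these loads the greedy pass splits 14 units of I/O evenly (7 per forwarder), whereas a naive static mapping (n0, n1 to one forwarder; n2, n3 to the other) would give an 8/6 imbalance.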

    Research on performance optimization for large-scale sparse computation on many-core heterogeneous supercomputers
    Zhengding HU, Wei XUE
    2020, 6(4):  40-55.  doi:10.11959/j.issn.2096-0271.2020032

    With the development of supercomputer technology, it has become possible to solve extreme-scale sparse problems in big data applications. However, the irregularity in the computation and memory access of sparse problems brings challenges to the implementation and optimization of applications. Many-core heterogeneous architectures are popular in supercomputer design, which places higher requirements on application developers, and how to utilize their extraordinary computing power becomes a very difficult problem. The challenges in optimizing sparse computing problems were analyzed, and three cases of implementation and optimization on a typical many-core heterogeneous computer system, all of which achieve very high performance, were introduced. The experience from those successful cases was summarized, so as to better solve extreme-scale sparse computing problems on the new generation of many-core heterogeneous systems.
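The irregular memory access the abstract refers to is easy to see in the textbook sparse kernel, sparse matrix-vector multiplication over the CSR format. The sketch below is a generic illustration (not code from the surveyed cases): the load `x[col_idx[j]]` is data-dependent, which defeats the regular caching and coalescing that many-core hardware is built around.

```python
def spmv_csr(values, col_idx, row_ptr, x):
    """y = A @ x with A stored in CSR (compressed sparse row) form.

    values:  nonzero entries, row by row
    col_idx: column of each nonzero
    row_ptr: row i's nonzeros occupy values[row_ptr[i]:row_ptr[i+1]]
    """
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        s = 0.0
        for j in range(row_ptr[i], row_ptr[i + 1]):
            s += values[j] * x[col_idx[j]]  # indirect, data-dependent load
        y[i] = s
    return y

# A = [[10, 0, 2],
#      [ 0, 3, 0],
#      [ 1, 0, 4]]
values  = [10.0, 2.0, 3.0, 1.0, 4.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
```

Row lengths also vary (2, 1, 2 nonzeros here), so work per row is uneven, which is the load-imbalance side of the same irregularity problem.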

    Memory management in deep learning: a survey
    Weiliang MA, Xuan PENG, Qian XIONG, Xuanhua SHI, Hai JIN
    2020, 6(4):  56-68.  doi:10.11959/j.issn.2096-0271.2020033

    In recent years, deep learning has achieved great success in many fields. As deep neural networks develop toward deeper and wider structures, the training and inference of a deep neural network face huge memory pressure. The limited memory space of accelerator devices has become an important factor restricting the rapid development of deep neural networks, and achieving efficient memory management has become a key point in the development of deep learning. Therefore, the basic characteristics of deep neural networks were introduced first, and the memory bottleneck in deep learning training was analyzed. Some representative research works were classified, and their advantages and disadvantages were analyzed. Finally, some important directions and tendencies of memory management in deep learning were suggested.
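One widely used memory-management technique in this space is activation checkpointing: keep only a subset of layer activations and recompute the rest during the backward pass. The toy accounting below is an illustrative model of that trade-off under simplified assumptions (uniform layers, activations only); it is not any framework's actual memory planner.

```python
def peak_activation_memory(layer_sizes, checkpoint_every=None):
    """Toy peak-activation-memory model for backprop.

    Vanilla training keeps every activation alive: peak = sum of all sizes.
    With checkpointing every k layers, only the checkpoints plus one
    recomputed segment of k activations are alive at once.
    """
    if checkpoint_every is None:
        return sum(layer_sizes)                       # keep everything
    k = checkpoint_every
    checkpoints = layer_sizes[::k]                    # retained activations
    segment_peak = max(
        sum(layer_sizes[i:i + k])                     # one segment recomputed
        for i in range(0, len(layer_sizes), k)
    )
    return sum(checkpoints) + segment_peak

sizes = [100] * 64          # hypothetical 64-layer net, uniform activations
full = peak_activation_memory(sizes)                      # 64 * 100 = 6400
ckpt = peak_activation_memory(sizes, checkpoint_every=8)  # 800 + 800 = 1600
```

The memory drops roughly as O(sqrt(n)) when the checkpoint interval is chosen near sqrt(n), paid for with one extra forward recomputation per segment, which is exactly the compute-for-memory trade the survey's design space revolves around.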

    Research on the next-generation deep learning framework
    Fan YU
    2020, 6(4):  69-80.  doi:10.11959/j.issn.2096-0271.2020034

    Starting from the history of AI, the development and challenges of deep learning were described, the features of the next-generation deep learning framework were introduced, the overall framework was analyzed, and the technical advantages of automatic parallelism, automatic differentiation and automatic tuning, as well as the performance advantages of collaborating with Ascend processors, were expanded upon. This article can serve as a reference for deep learning technology researchers.

    Applications and challenges of language virtual machines in big data
    Mingyu WU, Haibo CHEN, Binyu ZANG
    2020, 6(4):  81-91.  doi:10.11959/j.issn.2096-0271.2020035

    Language virtual machines provide a platform-independent execution environment for big-data applications and simplify their development and deployment, so they are widely used in big-data scenarios. The applications of two mainstream language virtual machines, the JVM and the CLR, were analyzed, and four challenges in adopting language virtual machines were summarized: initialization and warm-up overhead, garbage collection pauses, heterogeneous memory support, and data layout transformation. Afterward, existing approaches to these challenges were discussed, and their shortcomings and possible future optimizations were analyzed.
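Of the four challenges, garbage collection pauses are the easiest to observe directly. The snippet below uses Python's cyclic collector as a small stand-in for the stop-the-world pauses the abstract attributes to JVM/CLR big-data workloads; it is a generic demonstration, not code from the surveyed systems.

```python
import gc
import time

# Build a heap full of small objects tied together in reference cycles,
# so they can only be reclaimed by the tracing collector, mimicking the
# many-small-objects pattern of big-data workloads on managed runtimes.
objs = []
for i in range(100_000):
    a = {"i": i}
    a["self"] = a            # reference cycle: refcounting alone can't free it
    objs.append(a)
del objs                     # all cycles become unreachable garbage

t0 = time.perf_counter()
collected = gc.collect()     # stop-the-world collection over the whole heap
pause_ms = (time.perf_counter() - t0) * 1000
```

The pause grows with the amount of garbage traced, which is why big-data heaps holding hundreds of millions of objects make such pauses a first-order concern on language virtual machines.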

    STUDY
    Adaptive feature spectrum neural networks for special types of natural language classification
    Yifeng WANG, Liru SUN, Liangle CUI, Yi ZHAO
    2020, 6(4):  92-104.  doi:10.11959/j.issn.2096-0271.2020036

    The improvement of computing power has driven the rapid development of deep learning algorithms. However, due to the special word order, wording, sentence structure, grammatical structure, and expression of ancient poetry, deep learning models need to consume more computing power for feature extraction and related tasks, so they have not been widely used in this field. Therefore, a new kind of neural network, the adaptive feature spectrum neural network, was proposed, which can considerably reduce the computation and adaptively select the features most useful for classification, so as to form the most efficient feature spectrum. The classification results obtained have a certain interpretability. Moreover, its fast running speed and low RAM consumption make it very suitable for learning-aid software and other fields. Based on this algorithm, a corresponding personalized learning platform was developed. The algorithm improves the classification accuracy of ancient Chinese poetry from 93.84% to 99%.

    APPLICATION
    Application of blockchain technology in government data sharing
    Peng WANG, Bi WEI, Cong WANG
    2020, 6(4):  105-114.  doi:10.11959/j.issn.2096-0271.2020037

    Blockchain can resolve the contradiction between security and efficiency faced by data sharing and has great application potential in government data sharing. Based on the current policy environment and the problems of traditional data sharing, combined with the application principles and core advantages of blockchain, the advantages and expected effects of applying blockchain to the sharing of government information resources were analyzed first. Then, a case study of the construction of a real estate blockchain information sharing platform was introduced and analyzed. The case indicates that blockchain technology effectively addresses the problems and challenges of government data sharing, empowering the government to provide better public services to society. Finally, recommendations and an outlook for the large-scale application of blockchain in government information sharing were proposed.

    Teaching reform and practice of big data application technology course
    Dawen XIA, Lin WANG, Qian ZHANG, Jiayin WEI, Fujian FENG, Huaqing LI
    2020, 6(4):  115-124.  doi:10.11959/j.issn.2096-0271.2020038

    Big data has promoted the development of data science research and the construction of data science disciplines, and has spawned demand for new data talents. Firstly, the practical needs of big data talent training were analyzed. Then, the existing problems in big data talent training were pointed out. Finally, taking the "big data application technology course" as an example, the path selection and practice of teaching reform for big data were explored in terms of restructuring the teaching system, optimizing the teaching content, improving the teaching methods, standardizing the teaching process, and updating the teaching evaluation, so as to innovatively cultivate interdisciplinary big data talents with both engineering practice capabilities and technological innovation capabilities.

    FORUM
    An analysis of the military application and development path of artificial intelligence in the United States and Russia
    Shanshan XIAN
    2020, 6(4):  125-132.  doi:10.11959/j.issn.2096-0271.2020039

    With the rapid development of artificial intelligence technology, its scope of application has expanded to the military field. The approaches and characteristics of the military application of artificial intelligence in the United States and Russia were analyzed. Combined with the characteristics of big data, the development paths of the two countries' military applications of artificial intelligence were explored, and future trends in the application of artificial intelligence in this field were proposed, as a reference for the application of artificial intelligence in China.
