• Title/Summary/Keyword: Large Data Processing


Concurrency processing comparison of large data list using GO language (GO언어를 이용한 대용량 데이터 리스트의 동시성 처리 비교)

  • Lee, Yoseb; Lim, Young-Han
    • The Journal of the Convergence on Culture Technology / v.8 no.2 / pp.361-366 / 2022
  • There are several ways to process large amounts of data, and the method chosen makes a large difference in how quickly a large data list can be built. Typically, the large data set is converted into a normalized query, the query result is stored in a List Map, and the List Map is then converted into a printable form. Each of these steps slows processing further, and when the query results are stored in the List Map, the speed also varies because each data type is stored in a different format. This paper uses the concurrency features of the Go language to address these speed differences: it compares how the conventional List-Map-based format and a concurrent processing approach handle large data lists, and reports the results of Go concurrency processing for faster list generation.
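
The abstract does not include the paper's source code; as a minimal sketch of the kind of Go concurrency it describes, the hypothetical example below fans the rows of a large list out to one goroutine per CPU core over a channel, with formatRow standing in for the per-row conversion into printable form (both names are assumptions, not the paper's identifiers).

```go
// Hypothetical sketch (not the paper's code): converting rows of a large
// list concurrently with goroutines and a channel, one worker per CPU core.
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// formatRow stands in for the per-row conversion the abstract describes
// (e.g. turning a query-result map into a printable string).
func formatRow(row map[string]string) string {
	return fmt.Sprintf("%s,%s", row["id"], row["value"])
}

func main() {
	rows := make([]map[string]string, 100000)
	for i := range rows {
		rows[i] = map[string]string{"id": fmt.Sprint(i), "value": "v"}
	}

	out := make([]string, len(rows))
	jobs := make(chan int)
	var wg sync.WaitGroup

	// Start one worker goroutine per available CPU core.
	for w := 0; w < runtime.NumCPU(); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				out[i] = formatRow(rows[i]) // each index is written by exactly one worker
			}
		}()
	}
	for i := range rows {
		jobs <- i
	}
	close(jobs)
	wg.Wait()
	fmt.Println("converted", len(out), "rows")
}
```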

Implementation of AIoT Edge Cluster System via Distributed Deep Learning Pipeline

  • Jeon, Sung-Ho; Lee, Cheol-Gyu; Lee, Jae-Deok; Kim, Bo-Seok; Kim, Joo-Man
    • International journal of advanced smart convergence / v.10 no.4 / pp.278-288 / 2021
  • Recent IoT systems are cloud-based: the continuous, large volumes of data collected from sensor nodes are sent through the cloud and processed in a data server. In such a centralized, large-scale cloud configuration, however, there is a growing need to perform computation close to the physical location where data is collected, and edge computers that reduce the network load on the cloud system are increasingly in demand. In this paper, a cluster of six inexpensive Raspberry Pi boards was built to perform fast data processing, and a "Kubernetes cluster system (KCS)" is proposed that handles large-scale data collection and analysis through model distribution and a data-pipeline method. For comparison, an ensemble deep-learning model was built, and accuracy, processing performance, and processing time were compared between the ensemble approach and the proposed KCS with model distribution. The ensemble model achieved better accuracy, but the KCS implemented as a data pipeline proved superior in processing speed.
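
The paper's KCS implementation is not shown in the abstract; purely as an illustration of the staged data-pipeline idea it mentions, the hypothetical Go sketch below connects collection, preprocessing, and a stand-in inference step with channels. The stage names and formulas are assumptions for illustration only.

```go
// Illustrative sketch only (not the paper's KCS code): a three-stage data
// pipeline in which sensor readings are preprocessed and then scored by a
// stand-in inference step, with stages connected by channels.
package main

import "fmt"

func main() {
	readings := make(chan float64)
	features := make(chan float64)

	// Stage 1: collect (here, synthetic sensor readings).
	go func() {
		for i := 0; i < 10; i++ {
			readings <- float64(i)
		}
		close(readings)
	}()

	// Stage 2: preprocess (hypothetical normalization).
	go func() {
		for r := range readings {
			features <- r / 10.0
		}
		close(features)
	}()

	// Stage 3: inference stub standing in for the distributed model.
	for f := range features {
		fmt.Printf("score=%.2f\n", f*0.5)
	}
}
```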

A Method for Analyzing Web Log of the Hadoop System for Analyzing a Effective Pattern of Web Users (효과적인 웹 사용자의 패턴 분석을 위한 하둡 시스템의 웹 로그 분석 방안)

  • Lee, Byungju; Kwon, Jungsook; Go, Gicheol; Choi, Yonglak
    • Journal of Information Technology Services / v.13 no.4 / pp.231-243 / 2014
  • Among the various data that corporations can access, web log data are important for the data analysis behind customer relationship management strategies. As the volume of accessible data has grown exponentially with the Internet and the popularization of smartphones, web log data have also increased sharply. It has therefore become difficult to expand storage flexibly enough to handle large amounts of web log data, and very hard to build a system that can categorize, analyze, and process web logs accumulated over long periods. This study applies Hadoop, a distributed processing system noted for its ability to process large data volumes, and proposes an efficient analysis approach for large web logs. The study examines web log formats and levels under effective collection methods and proposes corresponding analysis techniques and Hadoop cluster designs. The result eases the difficulty of processing large web log volumes and derives user activity patterns from web log analysis, demonstrating its value as a new means of marketing.
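
The paper's Hadoop configuration is not given in the abstract. As a hedged illustration of a typical MapReduce-style log analysis, the sketch below is a hypothetical Hadoop Streaming mapper (Hadoop Streaming can run any executable as a map or reduce task) that emits one "URL<TAB>1" pair per web-log line, which a reducer would sum into page-hit counts; the log format assumed here is the common/combined Apache format, not necessarily the one the paper analyzed.

```go
// Hypothetical sketch (not from the paper): a mapper for Hadoop Streaming
// that reads web-log lines from stdin and emits "URL<TAB>1" pairs.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	scanner.Buffer(make([]byte, 1024*1024), 1024*1024) // allow long log lines
	for scanner.Scan() {
		// Assumes Apache common/combined log format: the requested URL is the
		// 7th whitespace-separated field ("GET /index.html HTTP/1.1").
		fields := strings.Fields(scanner.Text())
		if len(fields) > 6 {
			fmt.Printf("%s\t1\n", fields[6])
		}
	}
}
```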

An implementation of the high speed image processing board for contact image sensor (Contact image sensor를 위한 고속 영상 처리 보드 구현)

  • Kang, Hyun-Inn; Ju, Yong-Wan; Baek, Kwang-Ryul
    • Journal of Institute of Control, Robotics and Systems / v.5 no.6 / pp.691-697 / 1999
  • This paper describes the implementation of a high-speed image processing board consisting of an image acquisition part and an image processing part. The acquisition part digitizes the image data from the CIS and stores it in dual-port RAM. Because the dual-port memory sits between the two parts, the processing part can work effectively on large-volume image data while acquisition is still in progress. Most of the image preprocessing is integrated into a large-scale FPGA. An Analog Devices ADSP-2181 is used for the image processing part, with all available DSP memory devoted to the large-volume image data. In particular, IDMA is used to exchange data with an external microprocessor or PC, so the acquired image and the processing results can be monitored. Finally, the implemented board is demonstrated in an image retrieval simulation as one typical application.


Implementation of the FAT32 File System using PLC and CF Memory (PLC와 CF 메모리를 이용한 FAT32 파일시스템 구현)

  • Kim, Myeong Kyun; Yang, Oh; Chung, Won Sup
    • Journal of the Semiconductor & Display Technology / v.11 no.2 / pp.85-91 / 2012
  • In this paper, large data processing and a FAT32 file system suited to industrial systems were implemented using a PLC and CF memory. Most PLCs cannot store large data in their user data memory, so an external device such as CF memory or NAND flash is required; here, CF memory is used to store the PLC system's large data. A CF card can be configured with NTFS, FAT, or FAT32 in various ways, and the file system widely used for industrial data storage has typically been a modified FAT32, since the conventional FAT32 file system does not support multiple simultaneous writes or high-speed data access. The proposed file system implements a large data processing module that can log 40 bytes every 1 ms while writing to 8 files at the same time. High reliability against sudden power failure was obtained by using a power-fail monitor and non-volatile SRAM (NVSRAM). The implemented large data processing system, using the modified FAT32 file system, showed good performance and high reliability.
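
The paper's PLC firmware is not reproduced in the abstract; the sketch below is only a generic illustration of the standard FAT32 on-disk layout that the paper modifies, reading a few BIOS Parameter Block fields from a hypothetical raw image of the CF card (the file name "volume.img" is an assumption).

```go
// Generic illustration (not the paper's PLC firmware): reading key FAT32
// BIOS Parameter Block fields from the first sector of a volume image,
// using the standard on-disk offsets.
package main

import (
	"encoding/binary"
	"fmt"
	"io"
	"os"
)

func main() {
	f, err := os.Open("volume.img") // hypothetical raw image of the CF card
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sector := make([]byte, 512)
	if _, err := io.ReadFull(f, sector); err != nil {
		panic(err)
	}

	bytesPerSector := binary.LittleEndian.Uint16(sector[11:13])
	sectorsPerCluster := sector[13]
	reservedSectors := binary.LittleEndian.Uint16(sector[14:16])
	numFATs := sector[16]
	sectorsPerFAT := binary.LittleEndian.Uint32(sector[36:40])
	rootCluster := binary.LittleEndian.Uint32(sector[44:48])

	// First data sector = reserved region + all FAT copies.
	firstDataSector := uint32(reservedSectors) + uint32(numFATs)*sectorsPerFAT
	fmt.Println("bytes/sector:", bytesPerSector, "sectors/cluster:", sectorsPerCluster)
	fmt.Println("root cluster:", rootCluster, "first data sector:", firstDataSector)
}
```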

Development of the design methodology for large-scale database based on MongoDB

  • Lee, Jun-Ho; Joo, Kyung-Soo
    • Journal of the Korea Society of Computer and Information / v.22 no.11 / pp.57-63 / 2017
  • Big data, which has recently grown rapidly, is characterized by continuous generation, very large volume, and unstructured formats. Existing relational database technologies are inadequate for such data because of their limited processing speed and the significant cost of expanding storage. Big data processing technologies, typically based on distributed file systems, distributed database management, and parallel processing, have therefore emerged as the core technology for implementing big data repositories. In this paper, we propose a design methodology for large-scale databases based on MongoDB, obtained by extending the information engineering methodology based on the E-R data model.
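
The methodology's mapping rules are not detailed in the abstract; as a hypothetical illustration of the document model it targets, the sketch below shows one common way an E-R one-to-many relationship is denormalized into a single MongoDB document, written as Go structs with bson tags (as used by the official Go driver) and printed as JSON. The Order/OrderLine entities are invented for illustration.

```go
// Hypothetical sketch: an E-R one-to-many relationship (Order has
// OrderLines) embedded into a single MongoDB-style document.
package main

import (
	"encoding/json"
	"fmt"
)

type OrderLine struct {
	Product  string `bson:"product" json:"product"`
	Quantity int    `bson:"quantity" json:"quantity"`
}

type Order struct {
	OrderNo  string      `bson:"orderNo" json:"orderNo"`
	Customer string      `bson:"customer" json:"customer"`
	Lines    []OrderLine `bson:"lines" json:"lines"` // embedded instead of a join table
}

func main() {
	doc := Order{
		OrderNo:  "A-1001",
		Customer: "Hong Gil-dong",
		Lines: []OrderLine{
			{Product: "sensor", Quantity: 3},
			{Product: "cable", Quantity: 10},
		},
	}
	b, _ := json.MarshalIndent(doc, "", "  ")
	fmt.Println(string(b)) // the document that would be inserted into a collection
}
```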

Proposal For Improving Data Processing Performance Using Python (파이썬 활용한 데이터 처리 성능 향상방법 제안)

  • Kim, Hyo-Kwan; Hwang, Won-Yong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.4 / pp.306-311 / 2020
  • This paper deals with improving the performance of the Python language and its libraries when developing models on big data. Python uses the Pandas library to process spreadsheet-format data such as Excel files, and because Python works in memory, there are no performance problems on small data but serious performance problems arise on large data. The paper therefore introduces a method for distributing execution tasks across a single cluster or multiple clusters using the Dask library, which can be used together with Pandas. The experiment compares, on identical hardware, the speed of a simple exponential model processed with Pandas alone against the same model processed with Dask. The paper thus presents a way to distribute large data across CPU cores for performance while keeping Python's advantage of easy access to a wide range of libraries.

Application of Data Processing Technology on Large Clusters to Distribution Automation System (대용량 데이터 처리기술을 배전자동화 시스템에 적용)

  • Lee, Sung-Woo; Ha, Bok-Nam; Seo, In-Yong; Jang, Moon-Jong
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.2 / pp.245-251 / 2011
  • The quantity of data in a DMS (Distribution Management System) or SCADA (Supervisory Control and Data Acquisition) system is enormous, as reflected in the term "flooding of data": the status and event data of on-site apparatus are transmitted in real time. If GIS (Geographic Information System), AMR (Automatic Meter Reading), and other systems are integrated as well, the quantity of data to be processed in real time grows even further. This increase, driven by additional systems and growing numbers of on-site facilities, cannot be handled with the single-thread data processing technology currently in use. If, however, a multi-thread technique based on an LF-POOL (Leader Follower pool) is applied, large quantities of data can be processed in a short time and the load on the server minimized. This study examines the actual implementation and functions of the LF-POOL technique.
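
The paper's LF-POOL implementation is not shown in the abstract; the sketch below is only a rough Go analog of the Leader/Followers idea it names, in which exactly one goroutine at a time waits for an event, hands leadership to a follower, and then processes the event itself, so no separate dispatcher thread is needed. The event contents and pool size are assumptions for illustration.

```go
// Rough sketch of the Leader/Followers idea (not the paper's LF-POOL code).
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	events := make(chan string)
	leaderToken := make(chan struct{}, 1)
	leaderToken <- struct{}{} // one token: one leader at a time

	var wg sync.WaitGroup
	for id := 0; id < 4; id++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for {
				<-leaderToken // become the leader and wait for an event
				ev, ok := <-events
				leaderToken <- struct{}{} // promote a follower before processing
				if !ok {
					return
				}
				fmt.Printf("worker %d handled %s\n", id, ev)
			}
		}(id)
	}

	for i := 0; i < 8; i++ {
		events <- fmt.Sprintf("event-%d", i)
		time.Sleep(10 * time.Millisecond) // simulated real-time arrivals
	}
	close(events)
	wg.Wait()
}
```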

A FRAMEWORK FOR QUERY PROCESSING OVER HETEROGENEOUS LARGE SCALE SENSOR NETWORKS

  • Lee, Chung-Ho; Kim, Min-Soo; Lee, Yong-Joon
    • Proceedings of the KSRS Conference / 2007.10a / pp.101-104 / 2007
  • Efficient query processing and optimization are critical for reducing network traffic and lowering query latency when accessing and manipulating sensor data in large-scale sensor networks. The topic has been studied in sensor database projects, but those works focus mainly on in-network query processing and assume homogeneous sensor networks in which every network has the same hardware and software configuration. In this paper, we present a framework for efficient query processing over heterogeneous sensor networks. The proposed framework introduces a query processing paradigm that considers two heterogeneous characteristics of sensor networks: (1) the data dissemination approach, such as push, pull, or hybrid; and (2) the query processing capability of each network, i.e., whether it supports in-network aggregation and spatial, periodic, and conditional operators. Additionally, we propose multi-query optimization strategies that translate between data acquisition queries and data stream queries to minimize the total cost of multiple queries. The framework has been implemented in COSMOS, a WSN middleware developed by ETRI.


MAHA-FS : A Distributed File System for High Performance Metadata Processing and Random IO (MAHA-FS : 고성능 메타데이터 처리 및 랜덤 입출력을 위한 분산 파일 시스템)

  • Kim, Young Chang; Kim, Dong Oh; Kim, Hong Yeon; Kim, Young Kyun; Choi, Wan
    • KIPS Transactions on Software and Data Engineering / v.2 no.2 / pp.91-96 / 2013
  • The application fields of supercomputing systems are shifting toward workloads, such as bio-applications, that require both large-volume data processing and high-performance computing at the same time. These applications need a high-performance distributed file system for storage management and efficient high-speed processing of the large amounts of data they generate. In this paper, we introduce MAHA-FS, a distributed file system for supercomputing systems that process large amounts of data alongside high-performance computing, providing excellent metadata operation and IO performance. Performance analysis shows that MAHA-FS performs excellently for both metadata processing and random IO.