• Title/Summary/Keyword: Parallel File System


Development of Big-data Management Platform Considering Docker Based Real Time Data Connecting and Processing Environments (도커 기반의 실시간 데이터 연계 및 처리 환경을 고려한 빅데이터 관리 플랫폼 개발)

  • Kim, Dong Gil;Park, Yong-Soon;Chung, Tae-Yun
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.4 / pp.153-161 / 2021
  • Real-time access is required to handle continuous, unstructured data, and the platform must remain flexible to manage under dynamic conditions. Such a platform can be built to collect, store, and process data on a single local server or across multiple servers. The former, centralized approach is easy to control but creates an overload problem because all processing runs on one node; the latter, distributed approach performs parallel processing, so it responds quickly and scales capacity easily, but its design is more complex. This paper takes the distributed approach and provides data collection and processing on a single platform so that significant insights can be derived from the various data held by an enterprise or agency, makes the results available intuitively on dashboards, and uses Spark to improve distributed processing performance. All services are deployed and managed with Docker. The data used in this study was collected entirely through Kafka; for a 4.4-gigabyte file, processing in Spark cluster mode took 2 minutes 15 seconds, about 3 minutes 19 seconds faster than local mode.
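
The abstract describes a Kafka-to-Spark pipeline but gives no code; the following is a minimal PySpark Structured Streaming sketch of that ingestion path. The broker address, topic name, and the per-minute count are illustrative assumptions, and whether Spark runs in local or cluster mode is decided by the --master option passed to spark-submit, not by the script itself.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Read a Kafka topic as a streaming DataFrame (broker and topic are hypothetical).
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "kafka:9092")
       .option("subscribe", "sensor-events")
       .load())

# Kafka delivers the payload as binary; cast it to a string for downstream processing.
events = raw.selectExpr("CAST(value AS STRING) AS body", "timestamp")

# A per-minute event count stands in for the platform's real processing step.
counts = events.groupBy(window(col("timestamp"), "1 minute")).count()

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```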

Design and Implementation of I/O Tracer for PVFS (PVFS를 위한 I/O Tracer 설계 및 구현)

  • Hyeyoung Cho;Kwangho Cha;Sungho Kim;SangDong Lee
    • Proceedings of the Korea Information Processing Society Conference / 2008.11a / pp.966-969 / 2008
  • Much research has been conducted on obtaining dynamic I/O logs from file systems in actual operation, in order to analyze the I/O patterns of user programs or to analyze file system workloads more accurately. However, obtaining dynamic I/O logs from a file system that handles a large volume of I/O transactions is severely limited by system load and the enormous amount of data involved. In particular, I/O tracing on a large-scale distributed/parallel file system shared by many users is more complex and incurs more overhead than I/O tracing on a local file system. This paper surveys existing file system logging methods and proposes and designs a logging system that can generate logs of dynamic I/O operations in PVFS (Parallel Virtual File System), a distributed file system widely used in cluster systems.
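
PVFS and the tracer designed in the paper are C-level software, so nothing faithful can be reconstructed from the abstract alone; the Python sketch below only illustrates the general idea of intercepting I/O operations and emitting timestamped log records with latency and size, the kind of dynamic I/O log the paper targets. The traced read_block helper is hypothetical.

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("io-trace")

def traced(op_name):
    """Wrap an I/O function so every call is logged with its latency and byte count."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            size = len(result) if isinstance(result, (bytes, str)) else "-"
            log.info("op=%s latency=%.6fs size=%s", op_name, elapsed, size)
            return result
        return wrapper
    return decorator

@traced("read")
def read_block(path, offset, size):
    """Example traced operation: read `size` bytes starting at `offset`."""
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(size)
```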

Development of Climate & Environment Data System for Big Data from Climate Model Simulations (대용량 기후모델자료를 위한 통합관리시스템 구축)

  • Lee, Jae-Hee;Sung, Hyun Min;Won, Sangho;Lee, Johan;Byu, Young-Hwa
    • Atmosphere / v.29 no.1 / pp.75-86 / 2019
  • In this paper, we introduce a novel Climate & Environment Database System (CEDS). The CEDS was developed by the National Institute of Meteorological Sciences (NIMS) to provide easy, efficient user interfaces and storage management for climate model data, thereby improving work efficiency. When uploading data files, the CEDS offers an option to automatically run the international standard data conversion (CMORization) and quality assurance (QA) processes required for submitting CMIP6 variable data. This option increases system performance, removes user mistakes, and raises reliability, since no manual operation is needed for the CMORization and QA steps. The uploaded raw files are saved in NAS storage, and a Cassandra database stores the metadata used for efficient data access and storage management. The metadata is generated automatically when a file is uploaded, or from user input. Using this metadata, the CEDS supports effective storage management by categorizing data files, which allows easy and fast data access with a high level of data reliability even when a novice user searches with simple keywords. Moreover, the CEDS supports parallel and distributed computing to increase overall system performance and balance load, giving high availability so that multiple users can work simultaneously with fast system response. It also deduplicates redundant data to reduce storage space.
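
The abstract says Cassandra holds the metadata used for data access and storage management but gives no schema; the sketch below is only a guess at what such a metadata table could look like, written with the DataStax Python driver. The keyspace, column names, and example values are assumptions, not the CEDS schema.

```python
from cassandra.cluster import Cluster  # pip install cassandra-driver

# Connect to a Cassandra node (host and schema below are illustrative).
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS ceds
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS ceds.file_metadata (
        experiment  text,
        variable    text,
        file_path   text,
        uploaded_at timestamp,
        qa_passed   boolean,
        PRIMARY KEY ((experiment), variable, file_path)
    )
""")

# Register a file that has gone through the CMORization/QA option.
session.execute(
    "INSERT INTO ceds.file_metadata (experiment, variable, file_path, uploaded_at, qa_passed) "
    "VALUES (%s, %s, %s, toTimestamp(now()), %s)",
    ("historical", "tas", "/nas/cmip6/tas_Amon_hist.nc", True),
)
```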

Implementation and Performance Analysis of a Multichannel Visual Monitoring System based on DirectX (DirectX 기반 다채널 영상 감시 시스템 구현 및 성능 분석)

  • Chung Sun-Tae
    • The Journal of the Korea Contents Association / v.5 no.1 / pp.217-227 / 2005
  • In this paper, we present an efficient implementation of a DirectX-based multichannel visual monitoring system and analyze its performance. The proposed system consists of three subsystems: display, storage, and retrieval/playback. The display subsystem is designed to exploit DirectX hardware acceleration, overlay, and flip efficiently for faster real-time display, display synchronization among channels, and reduction of tearing artifacts. To improve storage speed, the storage subsystem is designed and implemented as a DirectShow filter-based multithreaded architecture so that it can store the video streams of each channel efficiently while minimizing data bottlenecks among channels. In the retrieval and playback subsystem, an efficient index file structure, video data storage structure, and a playback architecture that processes playback in parallel across channels are designed and implemented for faster retrieval and playback. Experiments show that the proposed system is up to 2 times faster in storage speed and up to 3.5 times faster in retrieval and playback speed than the previous system.
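
The paper's storage subsystem is a DirectShow filter graph in C++; the Python sketch below is only a language-neutral illustration of its per-channel writer-thread idea, where each channel drains its own queue into its own file so channels do not block one another. The queue size, file names, and dummy frames are assumptions.

```python
import queue
import threading

NUM_CHANNELS = 4
channel_queues = [queue.Queue(maxsize=128) for _ in range(NUM_CHANNELS)]

def channel_writer(channel_id, frames):
    """Drain one channel's queue and append its frames to that channel's own file."""
    with open(f"channel_{channel_id}.dat", "ab") as out:
        while True:
            frame = frames.get()
            if frame is None:      # sentinel: capture for this channel has finished
                break
            out.write(frame)

writers = [threading.Thread(target=channel_writer, args=(ch, q))
           for ch, q in enumerate(channel_queues)]
for t in writers:
    t.start()

# The capture side would enqueue encoded frames per channel, for example:
for q in channel_queues:
    q.put(bytes(1024))             # one dummy frame per channel
    q.put(None)                    # then signal completion
for t in writers:
    t.join()
```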


A Small Real-Time Radio Broadcasting System by Using Smart Phone (스마트폰을 이용한 소규모 실시간 라디오 방송 시스템)

  • Lee, Jae-Moon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.12 no.5 / pp.83-90 / 2012
  • This paper is a research on the design and implementation of a small real-time radio broadcasting system by using smart phone based on Android. It was designed as the server-client structure, and used the progressive download of HTTP as methods of transferring data to further simplify the system. In order to realize the real-time broadcasting, the original audio source was divided with a short interval and captured to be compressed and stored into files. Then the client receives and plays the compressed files sequentially as it is downloaded. However, this method occurs two problems each of which is the loss of capturing the original source in the server and the discontinuity of playing the files in the client. We solved the problem in the server by separating the thread into two parallel threads of which is each captured and compressed/stored, also by using the double buffering method. The problem in the client was solved using MediaPlayer in Android and the file queue to store the multiple files.
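
The actual server runs on Android/Java with a real audio encoder; the Python sketch below only illustrates the double-buffered split between a capture thread and a compress/store thread described above. The segment length, file names, and the capture/encode stand-ins are assumptions.

```python
import queue
import threading
import time

CHUNK_SECONDS = 2                   # assumed segment length
buffers = queue.Queue(maxsize=2)    # double buffering: at most two segments in flight
stop = threading.Event()

def record_audio(seconds):
    """Stand-in for real microphone capture; returns silent PCM bytes."""
    time.sleep(seconds)
    return b"\x00" * (44100 * 2 * seconds)

def encode_audio(pcm):
    """Stand-in for a real encoder; here the bytes pass through unchanged."""
    return pcm

def capture_loop():
    """Capture thread: fill one buffer while the previous one is being stored."""
    seq = 0
    while not stop.is_set():
        buffers.put((seq, record_audio(CHUNK_SECONDS)))   # blocks if both buffers are busy
        seq += 1

def store_loop():
    """Compress/store thread: encode finished segments and write them as files."""
    while not stop.is_set() or not buffers.empty():
        try:
            seq, pcm = buffers.get(timeout=1)
        except queue.Empty:
            continue
        with open(f"segment_{seq:06d}.raw", "wb") as f:
            f.write(encode_audio(pcm))

capture = threading.Thread(target=capture_loop)
store = threading.Thread(target=store_loop)
capture.start(); store.start()
time.sleep(3 * CHUNK_SECONDS)       # let a few segments be produced, then stop
stop.set()
capture.join(); store.join()
```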

MapReduce-based Localized Linear Regression for Electricity Price Forecasting (전기 가격 예측을 위한 맵리듀스 기반의 로컬 단위 선형회귀 모델)

  • Han, Jinju;Lee, Ingyu;On, Byung-Won
    • The Transactions of the Korean Institute of Electrical Engineers P / v.67 no.4 / pp.183-190 / 2018
  • Predicting electricity prices accurately is an important task in the electricity trading market. Various approaches to the electricity price forecasting problem have been proposed, and linear regression-based approaches are known to perform best. However, the use of such methods is limited by low accuracy and performance. With traditional linear regression, it is not practical to find a nonlinear regression model that explains the training data well: if the training data is complex (i.e., small individual records with many features), it is difficult to find an n-term polynomial function that fits the data, while a linear model approximating a nonlinear one loses considerable accuracy because it does not reflect the characteristics of the training data. To cope with this problem, we propose a new electricity price forecasting method that divides the entire dataset into multiple splits and finds the best linear regression model for each split. To improve performance, we further reformulate the proposed localized linear regression in the MapReduce style, a framework for parallel processing of data stored in a Hadoop distributed file system. Our experimental results show that the proposed model outperforms the existing linear regression model: accuracy improves by 45% and the method runs 5 times faster than the existing linear regression-based model.
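
As a rough illustration of the "split the data, fit one linear model per split" idea in map/reduce style, the following self-contained Python sketch simulates the shuffle step in memory and fits ordinary least squares per partition with NumPy. Partitioning by hour of day and the synthetic data are assumptions; the paper's version runs on Hadoop.

```python
from collections import defaultdict
import numpy as np

def mapper(record):
    """Emit (partition_key, (features, price)); here records are partitioned by hour of day."""
    hour, features, price = record
    return hour, (features, price)

def reducer(key, pairs):
    """Fit one ordinary-least-squares model per partition."""
    X = np.array([x for x, _ in pairs])
    y = np.array([p for _, p in pairs])
    X = np.hstack([X, np.ones((len(X), 1))])          # add intercept column
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return key, coeffs

# Simulate the shuffle step of MapReduce on a toy in-memory dataset.
rng = np.random.default_rng(0)
records = [(h, rng.normal(size=3), rng.normal()) for h in range(24) for _ in range(50)]

grouped = defaultdict(list)
for rec in records:
    k, v = mapper(rec)
    grouped[k].append(v)

local_models = dict(reducer(k, v) for k, v in grouped.items())
print(len(local_models), "local models fitted")
```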

Interoperability between NoSQL and RDBMS via Auto-mapping Scheme in Distributed Parallel Processing Environment (분산병렬처리 환경에서 오토매핑 기법을 통한 NoSQL과 RDBMS와의 연동)

  • Kim, Hee Sung;Lee, Bong Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.11 / pp.2067-2075 / 2017
  • Big data processing has lately become an important issue. As enormous amounts of data are generated, data processing capability is increasingly important. For big data processing, both the Hadoop distributed file system and NoSQL data stores for unstructured data have received much attention. However, problems and inconveniences remain when using NoSQL: for low-volume data, NoSQL MapReduce typically incurs unnecessary processing time and requires considerably more data retrieval time than an RDBMS. To address this problem, this paper proposes an interworking scheme between NoSQL and a conventional RDBMS. The developed auto-mapping scheme chooses the appropriate database (NoSQL or RDBMS) depending on the amount of data, which results in fast search times. Experimental results for a specific data set show that the database interworking scheme reduces data searching time by up to 35%.
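
A hedged sketch of the auto-mapping idea follows: route a query to the RDBMS for small data sets and to the NoSQL store for large ones. The threshold, connection settings, and table/collection names are illustrative assumptions, not the paper's actual parameters.

```python
import pymysql                      # pip install pymysql
from pymongo import MongoClient     # pip install pymongo

ROW_THRESHOLD = 1_000_000           # assumed cut-over point between RDBMS and NoSQL

mysql_conn = pymysql.connect(host="localhost", user="app", password="secret", database="sales")
mongo = MongoClient("mongodb://localhost:27017")["sales"]

def fetch_orders(customer_id, estimated_rows):
    """Pick the backend by data volume, in the spirit of the auto-mapping scheme."""
    if estimated_rows < ROW_THRESHOLD:
        with mysql_conn.cursor() as cur:
            cur.execute("SELECT * FROM orders WHERE customer_id = %s", (customer_id,))
            return cur.fetchall()
    return list(mongo["orders"].find({"customer_id": customer_id}))
```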

A Study on the Data Collection Methods based Hadoop Distributed Environment (하둡 분산 환경 기반의 데이터 수집 기법 연구)

  • Jin, Go-Whan
    • Journal of the Korea Convergence Society / v.7 no.5 / pp.1-6 / 2016
  • Many studies have recently been carried out on technologies for utilizing and analyzing big data, and government agencies and companies are gradually adopting Hadoop as a processing platform for big data analysis. With the growing interest in processing and analyzing such big data, data collection technology has become a major issue as well. However, research on collection techniques is insignificant compared with research on data analysis techniques. Therefore, in this paper, we build a big data analysis platform on a Hadoop cluster, collect structured data from relational databases through Apache Sqoop, and provide a system that collects unstructured data, such as sensor data and web application log files, as file-based streams through Apache Flume. Data gathered through this convergence of collection paths can be used as raw material for big data analysis.
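
A minimal sketch of kicking off the "structured data from an RDBMS via Apache Sqoop" collection path from Python follows, simply wrapping the sqoop command-line client. The JDBC URL, credentials file, table, and HDFS target directory are placeholders.

```python
import subprocess

def sqoop_import(table, target_dir):
    """Run `sqoop import` to copy one relational table into HDFS."""
    cmd = [
        "sqoop", "import",
        "--connect", "jdbc:mysql://dbhost:3306/source_db",
        "--username", "etl_user",
        "--password-file", "/user/etl/.sqoop_pw",   # avoid passing the password inline
        "--table", table,
        "--target-dir", target_dir,
        "--num-mappers", "4",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    sqoop_import("orders", "/warehouse/raw/orders")
```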

Design and Implementation of an Efficient Web Services Data Processing Using Hadoop-Based Big Data Processing Technique (하둡 기반 빅 데이터 기법을 이용한 웹 서비스 데이터 처리 설계 및 구현)

  • Kim, Hyun-Joo
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.1 / pp.726-734 / 2015
  • Relational databases, which structure data, are currently the most widely used technology for data management. However, in relational databases, service slows as the amount of data grows because of constraints on the read and write operations used to store or query data. Furthermore, when a new task is added, the database grows and consequently requires additional infrastructure, such as parallel configuration of hardware, CPU, memory, and network, to keep running smoothly. In this paper, in order to improve web information services that slow down as data accumulates in relational databases, we implemented a model that sends data to the Hadoop Distributed File System (HDFS), unifies and reconstructs it, and processes the HDFS files so that large amounts of data can be extracted quickly and safely for users. We applied the model to a web-based civil affairs system that stores image files, an irregular (unstructured) data processing workload. The proposed system's data processing was found to be 0.4 seconds faster than that of a relational database system. Thus, a Hadoop-based big data processing technique can support web information services that must handle large amounts of data, as conventional relational databases do. Furthermore, since Hadoop is open source, our model has the advantage of reducing software costs. The proposed system is expected to serve as a model for web services that need fast information processing in organizations whose conventional relational databases keep growing and that therefore require efficient big data processing.
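
A small sketch of the "send files to HDFS, then read them back for the web service" flow described above follows, using the standard `hdfs dfs` command-line client via subprocess. The local file and HDFS paths are illustrative.

```python
import subprocess

def put_to_hdfs(local_path, hdfs_dir):
    """Upload one file (e.g. a scanned civil-affairs image) into HDFS."""
    subprocess.run(["hdfs", "dfs", "-mkdir", "-p", hdfs_dir], check=True)
    subprocess.run(["hdfs", "dfs", "-put", "-f", local_path, hdfs_dir], check=True)

def read_from_hdfs(hdfs_path):
    """Fetch file contents from HDFS for the web layer to serve."""
    result = subprocess.run(["hdfs", "dfs", "-cat", hdfs_path],
                            check=True, capture_output=True)
    return result.stdout

if __name__ == "__main__":
    put_to_hdfs("scan_0001.jpg", "/civil/images/2015/01")
    data = read_from_hdfs("/civil/images/2015/01/scan_0001.jpg")
    print(len(data), "bytes read back")
```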

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the wide range of information created when operating computer systems, are used in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, with existing computing environments it is difficult to realize the flexible storage expansion needed for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the analysis tools and management systems of the existing computing infrastructure. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for processing unstructured log data and cannot easily expand to additional nodes when the amount of stored data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the proposed system uses MongoDB, a representative document-oriented store with a free schema structure. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel, distributed way by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is identified through a MongoDB insert-performance evaluation over various chunk sizes.
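
A rough sketch of the log-collector routing described above follows: real-time-critical records go to MySQL, while the bulk of the unstructured logs go to MongoDB, whose flexible schema tolerates per-record differences. The connection settings, table and collection names, and the "realtime" criterion are illustrative assumptions, not the paper's actual configuration.

```python
import datetime

import pymysql                      # pip install pymysql
from pymongo import MongoClient     # pip install pymongo

mongo_logs = MongoClient("mongodb://localhost:27017")["bank"]["logs"]
mysql_conn = pymysql.connect(host="localhost", user="app", password="secret", database="bank")

def collect(record):
    """Classify an incoming log record and send it to the appropriate store."""
    record.setdefault("received_at", datetime.datetime.now(datetime.timezone.utc))
    if record.get("type") == "realtime":
        with mysql_conn.cursor() as cur:
            cur.execute(
                "INSERT INTO realtime_logs (branch, message, received_at) VALUES (%s, %s, %s)",
                (record.get("branch"), record.get("message"), record["received_at"]),
            )
        mysql_conn.commit()
    else:
        # MongoDB's free schema accepts whatever fields this record happens to carry.
        mongo_logs.insert_one(record)

collect({"type": "transfer", "branch": "Seoul-01", "amount": 150000, "message": "ok"})
```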