• Title/Summary/Keyword: Apache Sqoop

Analysis of the Influence Factors of Data Loading Performance Using Apache Sqoop (아파치 스쿱을 사용한 하둡의 데이터 적재 성능 영향 요인 분석)

  • Chen, Liu; Ko, Junghyun; Yeo, Jeongmo
    • KIPS Transactions on Software and Data Engineering, v.4 no.2, pp.77-82, 2015
  • Big Data technology has attracted much attention for its fast data processing, and research continues on applying it to process large-scale structured data held in relational databases (RDB) more quickly. Although there are many studies that measure analysis performance, studies on the performance of loading structured data, the step that precedes analysis, are rare. In this study, we therefore measure the performance of loading structured data from an RDB into the distributed processing platform Hadoop using Apache Sqoop. To analyze the factors that influence data loading, the tests are repeated with different loading options, and loading performance is compared across RDB-based servers. Although Sqoop's data loading performance was low in the test environment, much better performance can be expected in a large-scale Hadoop cluster environment with more hardware resources. This study is expected to serve as a basis for improving data loading performance and for analyzing performance across the whole processing pipeline for structured data on the Hadoop platform.
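
For illustration, a Sqoop import of the kind benchmarked above can be driven and timed roughly as follows. This is a minimal sketch under stated assumptions, not the paper's test harness: the JDBC URL, credentials, table name, target directory, and mapper counts are hypothetical placeholders; only the "sqoop import" options themselves are standard Sqoop flags.

    import subprocess
    import time

    # Hypothetical JDBC URL, credentials, table, and paths -- placeholders only.
    def timed_sqoop_import(num_mappers):
        """Run one 'sqoop import' with the given parallelism; return elapsed seconds."""
        cmd = [
            "sqoop", "import",
            "--connect", "jdbc:mysql://dbhost:3306/testdb",
            "--username", "sqoop_user",
            "--password-file", "/user/sqoop/.password",
            "--table", "sales",
            "--target-dir", "/user/sqoop/sales_m{}".format(num_mappers),
            "--num-mappers", str(num_mappers),  # one of the loading options varied between runs
        ]
        start = time.time()
        subprocess.run(cmd, check=True)
        return time.time() - start

    # Repeat the import with different option values and compare elapsed times.
    for m in (1, 2, 4, 8):
        print(m, "mappers:", round(timed_sqoop_import(m), 1), "seconds")

Varying --num-mappers changes how many parallel map tasks read from the source database, which is one of the main knobs that affects loading throughput in experiments of this kind.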

A Study on the Data Collection Methods based Hadoop Distributed Environment (하둡 분산 환경 기반의 데이터 수집 기법 연구)

  • Jin, Go-Whan
    • Journal of the Korea Convergence Society, v.7 no.5, pp.1-6, 2016
  • Many studies have recently been carried out on big data utilization and analysis technology, and government agencies and companies are increasingly introducing Hadoop as a processing platform for analyzing big data. Alongside this growing interest in processing and analysis, data collection technology has become a major issue, yet it has received far less study than data analysis techniques. Therefore, this paper builds a big data analysis platform on a Hadoop cluster and collects structured data from relational databases through Apache Sqoop. In addition, it presents a system that uses Apache Flume to collect, on a stream basis, unstructured data such as sensor data, web application data files, and log files. Data collected through this combination can serve as source material for big data analysis.
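
For illustration, the Flume side of such a collection system is defined by an agent configuration like the sketch below, written out here from Python so all examples in this listing stay in one language. This is an assumption-laden sketch, not the paper's setup: the agent name "a1", the log path, and the HDFS locations are hypothetical placeholders, while the source/channel/sink property names are standard Flume 1.x configuration keys.

    import textwrap

    # Hypothetical agent name, log path, and HDFS location -- placeholders only.
    FLUME_CONF = textwrap.dedent("""\
        a1.sources = r1
        a1.channels = c1
        a1.sinks = k1

        # exec source: follow a web-application log file as a stream
        a1.sources.r1.type = exec
        a1.sources.r1.command = tail -F /var/log/webapp/app.log
        a1.sources.r1.channels = c1

        # memory channel buffering events between source and sink
        a1.channels.c1.type = memory
        a1.channels.c1.capacity = 10000

        # HDFS sink: land the log stream alongside the Sqoop-imported tables
        a1.sinks.k1.type = hdfs
        a1.sinks.k1.channel = c1
        a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/weblogs/%Y-%m-%d
        a1.sinks.k1.hdfs.fileType = DataStream
        a1.sinks.k1.hdfs.useLocalTimeStamp = true
    """)

    # Write the agent definition; it can then be started with, for example:
    #   flume-ng agent --conf ./conf --conf-file weblog-agent.conf --name a1
    with open("weblog-agent.conf", "w") as f:
        f.write(FLUME_CONF)

In this arrangement Sqoop handles the structured (RDB) side and Flume handles the unstructured, streaming side, with both landing their data in HDFS for later analysis.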

Implementation and comparison with Structured data collection modules (정형 빅데이터 수집 모듈 구현 및 비교)

  • Jang, Dong-Hwon; Lee, Min-Woo; Kim, Woosaeng
    • Proceedings of the Korea Information Processing Society Conference, 2014.04a, pp.635-638, 2014
  • With the rise of the big data era, data has emerged in forms that are difficult to handle with conventional relational databases. Apache Hadoop is widely used as a way to store and utilize data of this nature. When data in an existing RDBMS is to be used as source data for Hadoop-based analysis, or when growth in data size and complexity requires a change of storage method, the data must be transferred to HDFS (Hadoop Distributed File System). In this paper, we compare data transfer performance through the development of the structured data collection modules Sqoop and Nosqoop4u.
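
For illustration, a transfer-time comparison of this general kind can be set up roughly as follows. Nosqoop4u is the authors' own module and is not publicly available, so this sketch only contrasts a Sqoop import with a plain "hdfs dfs -put" of a pre-exported file; the connection URL, table, and all paths are hypothetical placeholders.

    import subprocess
    import time

    def timed(cmd):
        """Run a command and return its wall-clock running time in seconds."""
        start = time.time()
        subprocess.run(cmd, check=True)
        return time.time() - start

    # Hypothetical connection URL, table, and HDFS paths -- placeholders only.
    sqoop_seconds = timed([
        "sqoop", "import",
        "--connect", "jdbc:mysql://dbhost:3306/testdb",
        "--username", "sqoop_user",
        "--password-file", "/user/sqoop/.password",
        "--table", "sales",
        "--target-dir", "/user/benchmark/sales_sqoop",
    ])

    # Baseline: the same table exported beforehand to a local CSV file and
    # copied into HDFS with the standard shell client.
    put_seconds = timed([
        "hdfs", "dfs", "-put", "/data/export/sales.csv", "/user/benchmark/sales_put",
    ])

    print("sqoop import:", sqoop_seconds, "s; hdfs dfs -put:", put_seconds, "s")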