• Title/Summary/Keyword: Hadoop Distributed File System (하둡 분산 파일 시스템)

Security Threats and Review for SQL on Hadoop (SQL on Hadoop 기술 동향 및 보안 위협)

  • Youn, Han Jung; Suk, Sang Kee
    • Proceedings of the Korea Information Processing Society Conference / 2015.04a / pp.691-693 / 2015
  • SQL on Hadoop is a technology that processes user queries with SQL over data stored in the Hadoop Distributed File System. It aims to use SQL to overcome the drawback that conventional Hadoop systems must be operated alongside an RDBMS because of the limitations of MapReduce and the need for compatibility with existing systems. In this paper, we examine the characteristics and research trends of Hive and Impala, the representative SQL on Hadoop frameworks, and consider the expected security threats.
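To make the SQL-on-Hadoop idea concrete, the following minimal sketch submits a SQL query to a HiveServer2 endpoint over data already stored in HDFS. The host, port, and table name are hypothetical, and PyHive is only one of several client libraries that could be used; this is not code from the paper.

```python
# Minimal sketch of querying HDFS-resident data through Hive (SQL on Hadoop).
# Assumes a reachable HiveServer2 instance; host, port, and table are placeholders.
from pyhive import hive

def query_hdfs_table(host="hive-server.example.com", port=10000):
    conn = hive.connect(host=host, port=port)   # connect to HiveServer2
    cursor = conn.cursor()
    # Hive translates this SQL into jobs that read files stored in HDFS.
    cursor.execute("SELECT user_id, COUNT(*) AS cnt "
                   "FROM access_log GROUP BY user_id LIMIT 10")
    for row in cursor.fetchall():
        print(row)
    conn.close()

if __name__ == "__main__":
    query_hdfs_table()
```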

Dynamic Core Affinity for High-Performance I/O Devices Supporting Multiple Queues (다중 큐를 지원하는 고속 I/O 장치를 위한 동적 코어 친화도)

  • Cho, Joong-Yeon; Uhm, Junyong; Jin, Hyun-Wook; Jung, Sungin
    • Journal of KIISE / v.43 no.7 / pp.736-743 / 2016
  • Several studies have reported the impact of core affinity on the network I/O performance of multi-core systems. As the network bandwidth increases significantly, it becomes more important to determine the effective core affinity. Although a framework for dynamic core affinity that considers both network and disk I/O has been suggested, the multiple queues provided by high-speed I/O devices are not properly supported. In this paper, we extend the existing framework of dynamic core affinity to efficiently support the multiple queues of high-speed I/O devices, such as 40 Gigabit Ethernet and NVM Express. Our experimental results show that the extended framework can improve the HDFS file upload throughput by up to 32%, and can provide improved scalability in terms of the number of cores. In addition, we analyze the impact of the assignment policy of multiple I/O queues across a number of cores.
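The abstract does not include the framework's code, but the basic idea of binding the handler of each I/O queue to a chosen core can be sketched in a few lines. The queue-to-core mapping below is a hypothetical, static illustration using Linux's sched_setaffinity; the paper's framework chooses affinities dynamically based on network and disk I/O load.

```python
# Illustrative sketch: pin per-queue worker threads to specific cores (Linux only).
# The round-robin placement policy here is a placeholder, not the paper's policy.
import os
import threading

NUM_QUEUES = 4  # e.g. multiple queues of a 40 GbE NIC or an NVMe device

def queue_worker(queue_id: int, core_id: int) -> None:
    # Restrict this thread (pid 0 = calling thread) to a single core.
    os.sched_setaffinity(0, {core_id})
    # ... poll the hardware queue and process completions here ...
    print(f"queue {queue_id} handled on cores {os.sched_getaffinity(0)}")

def start_workers() -> None:
    cores = sorted(os.sched_getaffinity(0))
    threads = []
    for q in range(NUM_QUEUES):
        core = cores[q % len(cores)]          # simple round-robin placement
        t = threading.Thread(target=queue_worker, args=(q, core))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

if __name__ == "__main__":
    start_workers()
```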

Interoperability between NoSQL and RDBMS via Auto-mapping Scheme in Distributed Parallel Processing Environment (분산병렬처리 환경에서 오토매핑 기법을 통한 NoSQL과 RDBMS와의 연동)

  • Kim, Hee Sung; Lee, Bong Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.11 / pp.2067-2075 / 2017
  • Lately, big data processing has been considered an emerging issue. As a huge amount of data is generated, data processing capability is becoming increasingly important. In processing big data, both the Hadoop distributed file system and NoSQL data stores for unstructured data processing are attracting much attention. However, there are still problems and inconveniences in using NoSQL. For low-volume data, the MapReduce processing used with NoSQL normally consumes unnecessary processing time and requires considerably longer data retrieval time than an RDBMS. In order to address this NoSQL problem, this paper proposes an interworking scheme between NoSQL and a conventional RDBMS. The developed auto-mapping scheme makes it possible to choose an appropriate database (NoSQL or RDBMS) depending on the amount of data, which results in fast search times. The experimental results for a specific data set show that the database interworking scheme reduces data search time by up to 35%.
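The paper does not give the auto-mapping code, but its core decision, routing a query to the RDBMS for small data sets and to the NoSQL/MapReduce side for large ones, can be sketched as below. The size threshold and the two backend stubs are hypothetical placeholders, not the paper's values.

```python
# Hypothetical sketch of volume-based routing between an RDBMS and a NoSQL store.
# Threshold and backend functions are placeholders for illustration only.
SMALL_DATA_THRESHOLD_ROWS = 1_000_000

def query_rdbms(sql: str) -> list:
    # e.g. run the query against MySQL/PostgreSQL; stubbed for illustration
    return [f"RDBMS result for: {sql}"]

def query_nosql_mapreduce(query: dict) -> list:
    # e.g. run a MapReduce/aggregation job against a NoSQL store; stubbed
    return [f"NoSQL result for: {query}"]

def auto_map_query(estimated_rows: int, sql: str, nosql_query: dict) -> list:
    """Choose the backend by estimated data volume, following the auto-mapping idea."""
    if estimated_rows <= SMALL_DATA_THRESHOLD_ROWS:
        return query_rdbms(sql)                 # small data: RDBMS answers faster
    return query_nosql_mapreduce(nosql_query)   # large data: distributed store

if __name__ == "__main__":
    print(auto_map_query(10_000, "SELECT * FROM orders WHERE id < 100",
                         {"collection": "orders", "filter": {"id": {"$lt": 100}}}))
```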

FAST Design for Large-Scale Satellite Image Processing (대용량 위성영상 처리를 위한 FAST 시스템 설계)

  • Lee, Youngrim; Park, Wanyong; Park, Hyunchun; Shin, Daesik
    • Journal of the Korea Institute of Military Science and Technology / v.25 no.4 / pp.372-380 / 2022
  • This study proposes a distributed parallel processing system, called the Fast Analysis System for remote sensing daTa (FAST), for large-scale satellite image processing and analysis. FAST is a system that organizes jobs into vertices and sequences and distributes and processes them simultaneously. FAST manages data based on the Hadoop Distributed File System, controls entire jobs based on Apache Spark, and performs tasks in parallel on multiple slave nodes based on a Docker container design. FAST enables the high-performance processing of progressively accumulated large-volume satellite images. Because each unit task is performed in Docker, existing source code can be reused when designing and implementing unit tasks. Additionally, the system is robust against software and hardware faults. To prove the capability of the proposed system, we performed an experiment that converts the original satellite images into ortho-images, a pre-processing step for all image analyses. In the experiment, when FAST was configured with eight slave nodes, processing a satellite image took less than 30 seconds. These results demonstrate the suitability and practical applicability of the FAST design.
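As a rough illustration of the FAST design, the sketch below uses PySpark to distribute one ortho-rectification task per image, with each task launched as a Docker container on a worker node. The image paths, container image name, command-line options, and volume layout are hypothetical and not taken from the paper.

```python
# Rough sketch of the FAST idea: Spark schedules per-image tasks, and each task
# runs an existing processing tool inside a Docker container on a worker node.
# Paths, the container image name, and the tool's CLI are hypothetical.
import subprocess
from pyspark.sql import SparkSession

def ortho_rectify(image_path: str) -> str:
    # Reuse existing source code by wrapping it in a container invocation.
    result = subprocess.run(
        ["docker", "run", "--rm",
         "-v", "/data:/data",               # shared volume staged from HDFS
         "ortho-tool:latest",               # hypothetical container image
         "--input", image_path,
         "--output", image_path + ".ortho.tif"],
        capture_output=True, text=True)
    return f"{image_path}: exit={result.returncode}"

if __name__ == "__main__":
    spark = SparkSession.builder.appName("fast-ortho-sketch").getOrCreate()
    images = [f"/data/scene_{i:03d}.tif" for i in range(8)]  # placeholder list
    statuses = spark.sparkContext.parallelize(images, len(images)) \
                                 .map(ortho_rectify).collect()
    for s in statuses:
        print(s)
    spark.stop()
```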

Data Processing Architecture for Cloud and Big Data Services in Terms of Cost Saving (비용절감 측면에서 클라우드, 빅데이터 서비스를 위한 대용량 데이터 처리 아키텍쳐)

  • Lee, Byoung-Yup; Park, Jae-Yeol; Yoo, Jae-Soo
    • The Journal of the Korea Contents Association / v.15 no.5 / pp.570-581 / 2015
  • In recent years, many institutions have predicted that cloud services and big data will be popular IT trends in the near future. A number of leading IT vendors are focusing on practical solutions and services for cloud and big data. In addition, the cloud offers unrestricted resource selection for business models based on a variety of Internet-based technologies, which is why provisioning and virtualization technologies for active resource expansion have attracted attention above all other technologies. Big data has taken data prediction models to another level by providing the basis for analyzing unstructured data that could not be analyzed in the past. Since cloud services and big data both provide services and analysis based on massive amounts of data, the efficient design and operation of mass data has become a critical issue from the early stages of development. Thus, this paper establishes a data processing architecture based on the technological requirements of mass data for cloud and big data services. In particular, it introduces the requirements that a distributed file system must meet to participate in cloud computing, the efficient compression requirements of mass data for big data and cloud computing in terms of cost saving, and the technological requirements of open-source systems available in cloud computing, such as the Hadoop ecosystem distributed file system and in-memory databases.
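One of the cost-saving levers the abstract mentions is compression of mass data. The sketch below shows, under hypothetical paths and codec choices, how a Spark job writing to HDFS can trade CPU time for storage cost by selecting a compression codec per data set; it is an illustration of the general technique, not the paper's architecture.

```python
# Minimal sketch: writing HDFS data with different compression codecs to trade
# CPU time for storage cost. Paths and codec choices are placeholders.
from pyspark.sql import SparkSession

if __name__ == "__main__":
    spark = SparkSession.builder.appName("compression-sketch").getOrCreate()
    df = spark.read.json("hdfs:///raw/events/")          # hypothetical input

    # Snappy: fast, moderate ratio -- a common default for frequently read data.
    df.write.option("compression", "snappy") \
        .parquet("hdfs:///warehouse/events_snappy/")

    # Gzip: slower writes, better ratio -- cheaper long-term storage for cold data.
    df.write.option("compression", "gzip") \
        .parquet("hdfs:///warehouse/events_gzip/")

    spark.stop()
```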

Service Delivery Time Improvement using HDFS in Desktop Virtualization (데스크탑 가상화에서 HDFS를 이용한 서비스 제공시간 개선 연구)

  • Lee, Wan-Hee; Lee, Bong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.5 / pp.913-921 / 2012
  • The current PC-based desktop environment is being converted into a server-based virtual desktop environment due to security, mobility, and low upgrade costs. In this paper, a desktop virtualization system is implemented using an open source-based cloud computing platform and hypervisor. The implemented system is applied to the virtualization of university computers. In order to reduce the image transfer time, we propose a solution using HDFS. In addition, an image management structure needed for desktop virtualization is designed, implemented, and applied to a real computer lab that accommodates 30 PCs. The performance of the proposed system is evaluated in various aspects, including implementation cost, power saving rate, reduction rate of license cost, and management cost. The experimental results showed that the proposed system considerably reduced the image transfer time for desktop service.
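The abstract does not show how desktop images are moved through HDFS, but the basic flow, publishing a golden desktop image to HDFS once and pulling it to each host before boot, can be sketched with the standard hdfs dfs CLI. The image names and paths below are hypothetical placeholders.

```python
# Sketch of distributing a desktop (VM) image through HDFS using the standard
# `hdfs dfs` CLI. Image names and paths are hypothetical placeholders.
import subprocess

def hdfs(*args: str) -> None:
    subprocess.run(["hdfs", "dfs", *args], check=True)

def publish_image(local_image: str, hdfs_dir: str = "/vdi/images") -> None:
    # Upload the golden desktop image once; HDFS replicates it across datanodes.
    hdfs("-mkdir", "-p", hdfs_dir)
    hdfs("-put", "-f", local_image, hdfs_dir)

def fetch_image(image_name: str, local_dir: str = "/var/lib/vdi",
                hdfs_dir: str = "/vdi/images") -> None:
    # Each virtualization host pulls the image from the nearest replica.
    hdfs("-get", "-f", f"{hdfs_dir}/{image_name}", local_dir)

if __name__ == "__main__":
    publish_image("/tmp/lab-desktop.qcow2")
    fetch_image("lab-desktop.qcow2")
```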

Spatial Computation on Spark Using GPGPU (GPGPU를 활용한 스파크 기반 공간 연산)

  • Son, Chanseung; Kim, Daehee; Park, Neungsoo
    • KIPS Transactions on Computer and Communication Systems / v.5 no.8 / pp.181-188 / 2016
  • Recently, as the amount of spatial information has increased, interest in spatial information processing has grown. Spatial database systems extended from traditional relational database systems have difficulty handling large data sets because of limited scalability. SpatialHadoop, extended from the Hadoop system, has low performance because its spatial computations require many writes of intermediate results to disk, resulting in performance degradation. In this paper, Spatial Computation Spark (SC-Spark) is proposed, an in-memory distributed processing framework. SC-Spark extends Spark in order to efficiently perform spatial operations on large-scale data. In addition, a GPGPU-based SC-Spark is developed to further improve performance. SC-Spark takes advantage of Spark's holding of intermediate results in memory, and GPGPU-based SC-Spark can perform spatial operations in parallel using the many processing elements of a GPU. To verify the proposed work, experiments on a single AMD system were performed using SC-Spark and GPGPU-based SC-Spark for point-in-polygon and spatial join operations. The experimental results showed that SC-Spark and GPGPU-based SC-Spark were up to 8 times faster than SpatialHadoop.
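To make the point-in-polygon workload concrete, the sketch below runs a standard ray-casting test over an RDD of points in plain PySpark. It illustrates the operation that SC-Spark parallelizes in memory, not the paper's GPU kernel; the polygon and points are synthetic.

```python
# Plain-Spark sketch of the point-in-polygon operation evaluated in the paper.
# This is the CPU version of the workload; SC-Spark additionally offloads the
# per-point test to GPU processing elements. Polygon and points are synthetic.
from pyspark.sql import SparkSession

POLYGON = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]  # a simple square

def point_in_polygon(p, polygon=POLYGON):
    """Ray-casting test: count edge crossings of a horizontal ray from p."""
    x, y = p
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

if __name__ == "__main__":
    spark = SparkSession.builder.appName("pip-sketch").getOrCreate()
    points = [(i * 0.5, (i % 7) * 0.7) for i in range(100)]   # synthetic points
    count = spark.sparkContext.parallelize(points) \
                              .filter(point_in_polygon).count()
    print(f"points inside polygon: {count}")
    spark.stop()
```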

Development of Information Technology Infrastructures through Construction of Big Data Platform for Road Driving Environment Analysis (도로 주행환경 분석을 위한 빅데이터 플랫폼 구축 정보기술 인프라 개발)

  • Jung, In-taek; Chong, Kyu-soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.3 / pp.669-678 / 2018
  • This study developed information technology infrastructures for building a driving environment analysis platform using various big data sources, such as vehicle sensing data and public data. First, a small platform server with a parallel structure for distributed big data processing was developed on the hardware (H/W) side. Next, programs for big data collection/storage, processing/analysis, and information visualization were developed on the software (S/W) side. The collection S/W was developed as a collection interface using Kafka, Flume, and Sqoop. The storage S/W was developed to divide data between the Hadoop distributed file system and Cassandra DB according to how the data are used. The processing S/W performs spatial unit matching and time-interval interpolation/aggregation of the collected data by applying the grid index method. The analysis S/W was developed as an analytical tool based on the Zeppelin notebook for applying and evaluating the developed algorithms. Finally, the information visualization S/W was developed as a Web GIS engine program for providing and visualizing various driving environment information. As a result of the performance evaluation, the optimal number of executors, memory capacity, and number of cores for the development server were derived, and the computation performance was superior to that of other cloud computing environments.
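As a small illustration of the grid index method used for spatial unit matching, the sketch below maps GPS records to fixed-size grid cells and groups them by cell so they can be aggregated per spatial unit. The cell size and sample coordinates are hypothetical, not values from the paper.

```python
# Hypothetical sketch of the grid index method: map each GPS record to a grid
# cell key so that records can be matched and aggregated per spatial unit.
from collections import defaultdict

CELL_SIZE_DEG = 0.001   # placeholder cell size (~100 m); not the paper's value

def grid_cell(lon: float, lat: float, cell: float = CELL_SIZE_DEG):
    """Return the (column, row) index of the cell containing the point."""
    return (int(lon // cell), int(lat // cell))

def group_by_cell(records):
    """records: iterable of (lon, lat, speed) tuples -> dict of cell -> speeds."""
    cells = defaultdict(list)
    for lon, lat, speed in records:
        cells[grid_cell(lon, lat)].append(speed)
    return cells

if __name__ == "__main__":
    sample = [(127.0301, 37.4912, 42.0), (127.0302, 37.4913, 38.5),
              (127.0450, 37.5020, 61.0)]
    for cell, speeds in group_by_cell(sample).items():
        print(cell, "mean speed:", sum(speeds) / len(speeds))
```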