• Title/Summary/Keyword: 분산 파일 시스템 (Distributed File System)

Search Results: 383

Development of SAR Image Quality Performance Analysis Tool for High Resolution Spaceborne Synthetic Aperture Radar (고해상도 위성 SAR 영상품질 성능 분석 툴 개발)

  • Oh, Tae-Bong;Jung, Chul-Ho;Song, Sun-Ho;Shin, Jae-Min;Kwag, Young-Kil
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.38 no.2 / pp.188-194 / 2010
  • In this paper, the typical Synthetic Aperture Radar (SAR) image quality parameters and analysis methods are defined, and a SAR image analysis tool for SAR image evaluation is presented. The developed SAR image analysis tool consists of four key modules: a point target analysis (PTA) module, a distributed target analysis (DTA) module, an ambiguity analysis (AMA) module, and an NESZ analysis (NESZA) module. The tool can extract the various SAR system parameters from standard SAR product format files, and based on these extracted system parameters, the typical SAR image quality parameters are derived from the SAR image data.
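As a rough illustration of this modular structure, the sketch below arranges four analysis functions matching the PTA, DTA, AMA, and NESZA modules named in the abstract; the metrics and parameter names inside each function are illustrative placeholders, not the authors' implementation.

```python
# Minimal sketch of a four-module SAR image quality analysis pipeline.
# Module names (PTA, DTA, AMA, NESZA) follow the abstract; the computations
# here are simplified placeholders.
import numpy as np

def point_target_analysis(img: np.ndarray) -> dict:
    """PTA: locate the peak response of a point target."""
    peak = np.unravel_index(np.argmax(np.abs(img)), img.shape)
    return {"peak_pos": peak, "peak_power": float(np.abs(img[peak]) ** 2)}

def distributed_target_analysis(img: np.ndarray) -> dict:
    """DTA: radiometric statistics over a homogeneous region."""
    power = np.abs(img) ** 2
    # Equivalent number of looks (ENL) from the region's power statistics.
    return {"mean_power": float(power.mean()),
            "enl": float(power.mean() ** 2 / power.var())}

def ambiguity_analysis(img: np.ndarray) -> dict:
    """AMA: placeholder ratio of the strongest ambiguity to the main response."""
    p = np.sort(np.abs(img).ravel())
    return {"ambiguity_ratio_db": float(20 * np.log10(p[-2] / p[-1]))}

def nesz_analysis(noise_region: np.ndarray) -> dict:
    """NESZA: noise-equivalent sigma zero estimated from a signal-free region."""
    return {"nesz_db": float(10 * np.log10(np.mean(np.abs(noise_region) ** 2)))}

if __name__ == "__main__":
    img = np.random.rayleigh(size=(256, 256))   # stand-in for a SAR product
    for module in (point_target_analysis, distributed_target_analysis,
                   ambiguity_analysis):
        print(module.__name__, module(img))
    print(nesz_analysis(img[:32, :32]))
```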

Dynamic File Migration and Mathematical Model in Distributed Computer Systems (분산 시스템에서 동적 파일 이전과 수학적 모델)

  • Moon, Won Sik
    • Journal of Korea Society of Digital Industry and Information Management / v.10 no.3 / pp.35-40 / 2014
  • Much research has been conducted to improve distributed systems, which connect multiple computer systems via communication lines. Among the factors studied, load balancing and file migration are considered to have the most significant impact on the performance of a distributed system. Conventional dynamic file migration algorithms in distributed processing systems involve complex calculations of the decision function needed for file migration and require control messages to be transferred in order to evaluate that function. Evaluating the decision function places a significant computational strain on each computer, and because a single network is shared by all computers, connecting more computers to the network means more control messages for file migration, making the network a bottleneck in the distributed processing system. It has therefore become imperative to reduce the number of control messages that must be transferred. In this study, learning automata are used for file migration: they require only file-reference information to determine whether a file should be migrated and, depending on the file's conditions, when and where to migrate it. This reflects the current system status well and eliminates the message transfer and additional calculation overhead for file migration. Moreover, a mathematical model of file migration is described in order to verify the proposed approach, and the results from the mathematical model and a simulation model suggest that the proposed approach is well suited to distributed systems.
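To make the idea concrete, here is a minimal sketch of a linear reward-inaction learning automaton deciding where a file should reside based only on reference hits, in the spirit of the approach above; the reward scheme, learning rate, and site names are assumptions, not the paper's exact formulation.

```python
# Hedged sketch: a linear reward-inaction (L_R-I) automaton choosing a file's
# host site from local reference information alone, with no control messages.
import random

class MigrationAutomaton:
    def __init__(self, sites, alpha=0.1):
        self.sites = sites
        self.alpha = alpha                         # learning rate (assumed)
        self.p = [1.0 / len(sites)] * len(sites)   # action probabilities

    def choose_site(self) -> int:
        return random.choices(range(len(self.sites)), weights=self.p)[0]

    def reward(self, chosen: int) -> None:
        # Reinforce the chosen site after a local reference hit; under L_R-I,
        # a miss (penalty) leaves the probabilities unchanged.
        for i in range(len(self.p)):
            if i == chosen:
                self.p[i] += self.alpha * (1.0 - self.p[i])
            else:
                self.p[i] *= (1.0 - self.alpha)

# Each file reference drives the automaton directly:
automaton = MigrationAutomaton(["nodeA", "nodeB", "nodeC"])
for ref_site in ["nodeB"] * 20 + ["nodeA"] * 5:    # simulated reference trace
    site = automaton.choose_site()
    if automaton.sites[site] == ref_site:          # reference served locally
        automaton.reward(site)
print(dict(zip(automaton.sites, (round(x, 3) for x in automaton.p))))
```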

Implementation of a Mobile Data System Using P2P (P2P를 이용한 모바일 데이터 시스템 구현)

  • Kim, Dong-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.9 no.8 / pp.1691-1695 / 2005
  • A mobile P2P service can form a free network from one client to another without a central server. Based on an extended mobile concept, diverse information and data can be transmitted from peer to peer. In this paper, a mobile P2P service is applied to a program that gathers, shares, and analyzes agricultural information and natural-disaster information. Service requests should be authenticated between users so that connections can be limited to administered users. To solve this and enable secure information sharing in a P2P environment, this paper designs an authentication mechanism for the service.
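One plausible shape for such an authentication mechanism is a shared-key challenge-response exchange between peers, sketched below; the key distribution and message format are assumptions, since the abstract does not specify the design.

```python
# Hedged sketch of challenge-response authentication between two peers
# before a data-sharing connection is allowed. The pre-shared key setup
# is an assumption for illustration only.
import hmac, hashlib, os

SHARED_KEY = os.urandom(32)   # pre-shared key, distributed out of band

def make_challenge() -> bytes:
    return os.urandom(16)     # random nonce sent to the requesting peer

def respond(challenge: bytes, key: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Peer A authenticates peer B's service request:
challenge = make_challenge()
response = respond(challenge, SHARED_KEY)        # computed by peer B
assert verify(challenge, response, SHARED_KEY)   # checked by peer A
```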

Sim-Hadoop: Leveraging Hadoop Distributed File System and Parallel I/O for Reliable and Efficient N-body Simulations (Sim-Hadoop: 신뢰성 있고 효율적인 N-body 시뮬레이션을 위한 Hadoop 분산 파일 시스템과 병렬 I/O)

  • Awan, Ammar Ahmad;Lee, Sungyoung;Chung, Tae Choong
    • Proceedings of the Korea Information Processing Society Conference / 2013.05a / pp.476-477 / 2013
  • Gadget-2 is a scientific simulation code that has been used for many different types of simulations, such as colliding galaxies, cluster formation, and the popular Millennium Simulation. The code is parallelized with the Message Passing Interface (MPI) and is written in the C language. There is also a Java adaptation of the original code, called Java Gadget, written using MPJ Express. Java Gadget writes a large amount of checkpoint data, which may or may not use the HDF5 file format. Since HDF5 is MPI-IO compliant, we can use our MPJ-IO library to perform parallel reading and writing of the checkpoint files and improve I/O performance. Additionally, to add reliability to the code execution, we propose using the Hadoop Distributed File System (HDFS) for writing the intermediate data (checkpoint files) and final data (output files). The current code writes and reads the input, output, and checkpoint files sequentially, which can easily become a bottleneck for large-scale simulations. In this paper, we propose Sim-Hadoop, a framework that leverages HDFS and MPJ-IO to improve the I/O performance of the Java Gadget code.
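A minimal sketch of the checkpoint-staging idea, assuming the standard `hdfs dfs` command-line interface is available; the file and directory names are hypothetical.

```python
# Sketch: stage simulation checkpoints into HDFS so they are replicated
# for reliability, and pull them back on restart after a failure.
import subprocess

def hdfs_put(local_path: str, hdfs_path: str) -> None:
    """Copy a finished checkpoint file into HDFS."""
    subprocess.run(["hdfs", "dfs", "-put", "-f", local_path, hdfs_path],
                   check=True)

def hdfs_get(hdfs_path: str, local_path: str) -> None:
    """Fetch a checkpoint back for a restart."""
    subprocess.run(["hdfs", "dfs", "-get", hdfs_path, local_path],
                   check=True)

# After the simulation writes snapshot_042.hdf5 locally, push it to HDFS
# (hypothetical paths):
hdfs_put("snapshot_042.hdf5",
         "/simulations/gadget/checkpoints/snapshot_042.hdf5")
```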

Efficient Policy for ECC Parity Storing of NAND Flash Memory (낸드플래시 메모리의 효율적인 ECC 패리티 저장 방법)

  • Kim, Seokman;Oh, Minseok;Cho, Kyoungrok
    • The Journal of the Korea Contents Association / v.16 no.10 / pp.477-482 / 2016
  • This paper presents a new method of storing the ECC (error correcting code) parity in an SSD (solid-state drive) and a suitable structure for the controller. In the conventional usage of NAND flash memory, a page is partitioned into a data area and a spare area, and the ECC parity is stored in the spare area. This method incurs area and timing overhead because the page memory is accessed discontinuously. This paper proposes a new parity storing method that reduces the overhead and read/write (R/W) time by using the whole page area continuously, without partitioning. We analyzed the overhead and the R/W timing. As a result, the proposed parity storing scheme has a 13.6% shorter read access time than the conventional parity policy with a 16 KB page size; for a 4 GB video file transfer, it takes about a minute less than the conventional policy. Because the read operation is a key function in SSDs, this will enhance overall system performance.
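The following back-of-the-envelope sketch contrasts the two layouts: the conventional split page (parity packed in the spare area) versus a continuous layout in which each codeword's data and parity are stored back to back; the sector and parity sizes are illustrative, not the paper's exact geometry.

```python
# Sketch of page-layout arithmetic for ECC parity placement.
# Sizes below are illustrative assumptions.
PAGE_DATA = 16 * 1024      # 16 KB data area
SECTOR = 512               # data chunk per ECC codeword
PARITY_PER_SECTOR = 14     # parity bytes per chunk (example ECC strength)

def conventional_layout():
    # Parity for all sectors is packed into the spare area after the data
    # area, so reading one codeword touches two discontinuous regions.
    n = PAGE_DATA // SECTOR
    return [(i * SECTOR, PAGE_DATA + i * PARITY_PER_SECTOR)
            for i in range(n)]

def continuous_layout():
    # Data and its parity sit back to back, so each codeword
    # (data + parity) is one contiguous read.
    step = SECTOR + PARITY_PER_SECTOR
    return [(i * step, i * step + SECTOR)
            for i in range(PAGE_DATA // SECTOR)]

print("conventional (data_off, parity_off):", conventional_layout()[:2])
print("continuous   (data_off, parity_off):", continuous_layout()[:2])
```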

Spatial Computation on Spark Using GPGPU (GPGPU를 활용한 스파크 기반 공간 연산)

  • Son, Chanseung;Kim, Daehee;Park, Neungsoo
    • KIPS Transactions on Computer and Communication Systems / v.5 no.8 / pp.181-188 / 2016
  • Recently, as the amount of spatial information has increased, interest in spatial information processing has grown. Spatial database systems extended from traditional relational database systems have difficulty handling large data sets because of limited scalability. SpatialHadoop, extended from the Hadoop system, shows low performance because its spatial computations require many writes of intermediate results to disk, degrading performance. In this paper, Spatial Computation Spark (SC-Spark), an in-memory distributed processing framework, is proposed. SC-Spark extends Spark in order to efficiently perform spatial operations on large-scale data. In addition, a GPGPU-based SC-Spark is developed to further improve performance. SC-Spark exploits Spark's advantage of holding intermediate results in memory, and the GPGPU-based SC-Spark can perform spatial operations in parallel using the many processing elements of a GPU. To verify the proposed work, experiments were performed on a single AMD system using SC-Spark and GPGPU-based SC-Spark for point-in-polygon and spatial join operations. The experimental results showed that SC-Spark and GPGPU-based SC-Spark were up to 8 times faster than SpatialHadoop.
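For illustration, the sketch below shows the ray-casting point-in-polygon test that such a spatial operator evaluates per point; on the GPGPU path each point would be handled by one GPU thread, while this NumPy version is only a CPU-side stand-in.

```python
# Vectorized ray-casting point-in-polygon test: a point is inside when a
# horizontal ray from it crosses the polygon boundary an odd number of times.
import numpy as np

def points_in_polygon(points: np.ndarray, poly: np.ndarray) -> np.ndarray:
    """points: (N, 2) array; poly: (M, 2) vertices, closed implicitly."""
    x, y = points[:, 0, None], points[:, 1, None]
    x1, y1 = poly[:, 0], poly[:, 1]
    x2, y2 = np.roll(x1, -1), np.roll(y1, -1)
    # An edge counts when it straddles the point's y and the ray
    # intersection lies to the right of the point.
    straddles = (y1 <= y) != (y2 <= y)
    x_int = x1 + (y - y1) * (x2 - x1) / np.where(y2 != y1, y2 - y1, 1e-12)
    crossings = np.sum(straddles & (x_int > x), axis=1)
    return crossings % 2 == 1

square = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], dtype=float)
pts = np.array([[2, 2], [5, 1]], dtype=float)
print(points_in_polygon(pts, square))   # [ True False]
```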

Translating Java Bytecode to SPARC Code using Retargetable Code Generating Techniques (재목적 코드 생성 기법을 이용한 자바 Bytecode에서 SPARC 코드로의 번역)

  • Oh, Se-Man;Jung, Chan-Sung
    • Journal of KIISE: Computing Practices and Letters / v.6 no.3 / pp.356-363 / 2000
  • The Java programming language is designed to run effectively in internet and distributed network environments. However, because executing bytecode through an interpreter on each platform is slow, a code generation system that translates bytecode into native SPARC code must be developed to execute Java programs efficiently. In this paper, we implement a code generation system that translates bytecode into SPARC code using retargetable code generation techniques. For the code expander, we wrote a bytecode table describing the rules for generating SPARC code from bytecode, and we implemented an information extractor that transforms bytecode into a suitable form while expanding source code from the class file. The information extractor determines the constant pool entry of each bytecode instruction operand, and the code expander then translates the bytecode into SPARC code according to the bytecode table. The retargetable code generation system can also be systematically reconfigured to generate code for a variety of distinct target computers.
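The sketch below illustrates the table-driven expansion idea: each bytecode maps to a list of SPARC instruction templates, analogous to the bytecode table described above; the templates, register usage, and stack-slot convention are simplified assumptions, not the paper's actual table.

```python
# Sketch of a table-driven code expander: JVM bytecodes map to SPARC
# instruction templates. Register and stack-slot conventions are invented
# for illustration.
BYTECODE_TABLE = {
    "iconst": ["mov {imm}, %l0", "st %l0, [%fp + {slot}]"],   # push constant
    "iload":  ["ld [%fp + {src}], %l0", "st %l0, [%fp + {slot}]"],
    "iadd":   ["ld [%fp + {a}], %l0", "ld [%fp + {b}], %l1",
               "add %l0, %l1, %l0", "st %l0, [%fp + {slot}]"],
}

def expand(opcode: str, **operands) -> list[str]:
    """Expand one bytecode into SPARC assembly using the table."""
    return [tmpl.format(**operands) for tmpl in BYTECODE_TABLE[opcode]]

# iconst 2; iconst 3; iadd  ->  operand-stack slots at %fp offsets
code = (expand("iconst", imm=2, slot=-4)
        + expand("iconst", imm=3, slot=-8)
        + expand("iadd", a=-4, b=-8, slot=-4))
print("\n".join(code))
```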


Service Delivery Time Improvement using HDFS in Desktop Virtualization (데스크탑 가상화에서 HDFS를 이용한 서비스 제공시간 개선 연구)

  • Lee, Wan-Hee;Lee, Bong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.5 / pp.913-921 / 2012
  • The current PC-based desktop environment is being converted into a server-based virtual desktop environment because of security, mobility, and lower upgrade costs. In this paper, a desktop virtualization system is implemented using an open-source cloud computing platform and hypervisor, and the implemented system is applied to the virtualization of university computer labs. To reduce the image transfer time, we propose a solution using HDFS. In addition, an image management structure needed for desktop virtualization is designed, implemented, and applied to a real computer lab accommodating 30 PCs. The performance of the proposed system is evaluated in various aspects, including implementation cost, power saving rate, license cost reduction rate, and management cost. The experimental results show that the proposed system considerably reduces the image transfer time for the desktop service.
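A minimal sketch of the delivery idea, assuming the standard `hdfs dfs` CLI: the golden image is published once with a raised replication factor so the lab PCs can fetch blocks from multiple datanodes in parallel; the paths and replication count are hypothetical.

```python
# Sketch: publish a golden VM image to HDFS with high replication so that
# many clients can read blocks from different datanodes at once.
import subprocess

IMAGE = "winlab-golden.qcow2"                      # hypothetical image name
HDFS_IMAGE = "/vdi/images/winlab-golden.qcow2"     # hypothetical HDFS path

# Publish the image and raise its replication factor.
subprocess.run(["hdfs", "dfs", "-put", "-f", IMAGE, HDFS_IMAGE], check=True)
subprocess.run(["hdfs", "dfs", "-setrep", "-w", "5", HDFS_IMAGE], check=True)

# Each client (e.g., one of the 30 lab PCs) then pulls the image:
subprocess.run(["hdfs", "dfs", "-get", HDFS_IMAGE, IMAGE], check=True)
```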

Proper Regulation of the Cutoff System in Offshore Landfill Built on Clay Ground with Double Walls (점토지반에 이중벽체가 적용된 해상폐기물매립장의 적정 차수 기준)

  • Hwang, Woong-Ki;Kim, Hyang-Eun;Choi, Hoseong;Kim, Tae-Hyung
    • Journal of the Korean Geotechnical Society / v.35 no.8 / pp.5-15 / 2019
  • This study was conducted to propose reasonable requirements for the cutoff system of an offshore landfill, composed of a bottom layer and vertical barriers, to prevent contaminant leakage. The bottom layer is an impermeable clay layer, and the vertical walls are double walls: an outer wall of steel sheet pile that resists external forces, and an inner wall that serves as the cutoff vertical barrier. Seepage-advection-dispersion numerical analyses were conducted using the SEEP/W and CTRAN/W programs under steady and unsteady flow. The results showed that the values calculated under steady flow indicate greater pollutant migration than those under unsteady flow, so the steady-flow values are more conservative and therefore more valid from a design point of view. Assuming steady flow and a homogeneous, completely installed bottom clay layer and vertical barrier, minimum cutoff requirements were suggested for the hydraulic conductivity, thickness, and embedded depth of the bottom clay layer and the vertical barrier.
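For context, seepage-advection-dispersion analyses of this kind typically solve the following standard governing equations, stated here in common geotechnical notation rather than taken from the paper:

```latex
% Darcy seepage through the barrier (q: specific discharge,
% k: hydraulic conductivity, h: total head):
\[ q = -k \, \nabla h \]
% Advection-dispersion of contaminant concentration C
% (D: hydrodynamic dispersion coefficient, v: seepage velocity,
% R: retardation factor):
\[ R \frac{\partial C}{\partial t} = \nabla \cdot (D \nabla C) - v \cdot \nabla C \]
```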

Tracking Data through Tracking Data Server in Edge Computing (엣지 컴퓨팅 환경에서 추적 데이터 서버를 통한 데이터 추적)

  • Lim, Han-wool;Byoun, Won-jun;Yun, Joobeom
    • Journal of the Korea Institute of Information Security & Cryptology / v.31 no.3 / pp.443-452 / 2021
  • One of the key technologies in edge computing is always providing services close to the user by moving data between edge servers according to the user's movements; as a result, data moves frequently between edge servers. As IoT technology advances and its usage areas expand, the amount of generated data also increases, so technology that accurately tracks and processes each piece of data is required to properly manage the data present in the edge computing environment. Current cloud systems have no data disposal technology based on tracking data movement and distribution in their environment, so when users request deletion, they cannot see where their data is now, or whether it has been properly removed or remains in the cloud system. In this paper, we propose a tracking data server that records and manages the movement and distribution of the data on each edge server and the data stored in the central cloud in an edge computing environment.
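A small sketch of the bookkeeping such a tracking data server could keep is given below: one record per data item listing every server holding a copy, updated on each move, so a deletion request can be verified against all known copies; the record fields and API are illustrative assumptions, not the paper's design.

```python
# Sketch of a tracking data server's records: each data item maps to the
# set of edge servers (and the central cloud) currently holding a copy.
from dataclasses import dataclass, field

@dataclass
class TrackingRecord:
    data_id: str
    locations: set[str] = field(default_factory=set)

class TrackingDataServer:
    def __init__(self):
        self.records: dict[str, TrackingRecord] = {}

    def register_copy(self, data_id: str, server: str) -> None:
        self.records.setdefault(data_id,
                                TrackingRecord(data_id)).locations.add(server)

    def move(self, data_id: str, src: str, dst: str) -> None:
        rec = self.records[data_id]     # data migrated between edge servers
        rec.locations.discard(src)
        rec.locations.add(dst)

    def confirm_deleted(self, data_id: str, server: str) -> bool:
        """Mark one copy deleted; True when no copies remain anywhere."""
        rec = self.records[data_id]
        rec.locations.discard(server)
        return not rec.locations

tds = TrackingDataServer()
tds.register_copy("file-001", "edge-A")
tds.move("file-001", "edge-A", "edge-B")
tds.register_copy("file-001", "central-cloud")
print(tds.confirm_deleted("file-001", "edge-B"))         # False: cloud copy remains
print(tds.confirm_deleted("file-001", "central-cloud"))  # True: fully removed
```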