• Title/Summary/Keyword: Map Reduce

Designing and Developing the Agricultural Information Management System of North Korea

  • Tao, Song;Kim, Kye-Hyun
    • Proceedings of the Korea Spatial Information System Society Conference / 2005.11a / pp.239-244 / 2005
  • In North Korea, there has been considerable loss of human life every year due to food shortages. In order to reduce such damage, a research project should be launched to provide various kinds of information for cooperation with the North Korean government and to develop a proper agricultural management system. Furthermore, based on the water resources information map generated by KOWACO (Korea Water Resources Corporation) and the environmental information system developed by MOE (Ministry of Environment), an agricultural information infrastructure and management system for North Korea need to be established. Therefore, this research mainly develops the Agricultural Information Management System of North Korea (NKAIMS), which can collect, manage, and analyze agricultural information and the water resources utilization status of North Korea, and further support relevant decision making and the establishment of agricultural land-use plans. The research has three phases. The major outcomes of the first phase are collecting agricultural and water resources utilization data such as soils, rivers, streams, and collective farms; designing and building the database; and developing an integrated management system that considers the users' requirements. The main work of the second phase is improving and reinforcing the database by adding information on dams, land-cover data, bridges, tunnels, and satellite images; inspecting and renewing it, for example by importing detailed attribute information on reservoirs; and improving the system for more convenient use. The third phase will supplement more useful functions such as statistical analysis, continual inspection and improvement of the database, and development of a web-based system. The product of this research supports collecting and analyzing relevant data to facilitate agricultural activities and effective decision making for food production in preparation for unification. Moreover, because the database is designed with information sharing and system expandability in mind, it supports systematic use of agricultural information and saves data management costs.


Development of Scanner Test and Vectorizing Programs for Digitization of Cadastral Maps (지적도면 전산화를 위한 스캐너 검사 및 벡터화 프로그램 개발)

  • Jeong, Dong-Heon;Jeong, Jae-Jun;Shin, Sang-Hee;Kim, Byung-Guk;Kim, Young-Il
    • Journal of Korea Spatial Information System Society / v.1 no.2 s.2 / pp.115-125 / 1999
  • Much effort is being made in many ways to digitize cadastral maps, which will serve as the base maps of Parcel-Based Land Information Systems. However, current digitizing systems require too much time and cost to digitize about 720,000 cadastral maps. For that reason, we developed a new digitization system for cadastral maps using scanning and vectorizing methods. In this paper, we treat the scanner test and vectorizing programs, which are the most important parts of the new digitization system for cadastral maps. We analyze the needs of the Korea Cadastral Survey Corporation and discuss the algorithms and functions of the developed programs. Using the newly developed scanner test program, users can test various scanners and adopt an inexpensive scanner if it satisfies the required accuracy. The vectorizing program will greatly reduce time and cost, because it is designed and customized to be adequate for cadastral maps and to improve work speed, accuracy, and usability.


Assessments of RELAP5/MOD3.2 and RELAP5/CANDU in a Reactor Inlet Header Break Experiment B9401 of RD-14M

  • Cho Yong Jin;Jeun Gyoo Dong
    • Nuclear Engineering and Technology / v.35 no.5 / pp.426-441 / 2003
  • A reactor inlet header break experiment, B9401, performed in the RD-14M multi-channel test facility was analyzed using RELAP5/MOD3.2 and RELAP5/CANDU[1]. RELAP5 has been developed for use in analyzing the transient behavior of pressurized water reactors. A recent study showed that RELAP5 could be feasible even for simulating the thermal-hydraulic behavior of CANDU reactors. However, some deficiencies in the prediction of fuel sheath temperature and of transient behavior in the headers were identified in the RELAP5 assessments. RELAP5/CANDU is being developed to resolve these deficiencies and to improve the predictability of the thermal-hydraulic behavior of CANDU reactors. In RELAP5/CANDU, the critical heat flux model, horizontal flow regime map, heat transfer model in horizontal channels, etc. were modified or added relative to RELAP5/MOD3.2. This study aims to identify the applicability of both codes, in particular in multi-channel simulation of CANDU reactors. The RELAP5/MOD3.2 and RELAP5/CANDU analyses demonstrate the codes' capability to reasonably predict the major phenomena that occurred during the transient. The thermal-hydraulic behaviors predicted by the two codes are almost identical; however, RELAP5/CANDU predicts the heater sheath temperature better than RELAP5/MOD3.2. Pressure differences between headers govern the flow characteristics through the heated sections, particularly after the ECI. In determining header pressure, there are many uncertainties arising from complicated effects, including the steady-state pressure distribution. Therefore, further work is required to reduce these uncertainties and consequently to predict thermal-hydraulic behavior in the reactor coolant system appropriately during LOCA analyses.

Big Data Platform Based on Hadoop and Application to Weight Estimation of FPSO Topside

  • Kim, Seong-Hoon;Roh, Myung-Il;Kim, Ki-Su;Oh, Min-Jae
    • Journal of Advanced Research in Ocean Engineering / v.3 no.1 / pp.32-40 / 2017
  • Recently, the amount of data to be processed and its complexity have been increasing due to the development of information and communication technology, and industry's interest in such big data is growing day by day. In the shipbuilding and offshore industry as well, there is growing interest in the effective utilization of data, since various and vast amounts of data are generated in the process of design, production, and operation. To effectively utilize big data in the shipbuilding and offshore industry, it is necessary to store and process large amounts of data. In this study, it was considered efficient to apply Hadoop and R, which are widely used in big data research. Hadoop is a framework for storing and processing big data: it provides the Hadoop Distributed File System (HDFS) for storage and the MapReduce function for processing. Meanwhile, R provides various data analysis techniques through its language and environment for statistical computing and graphics. While Hadoop makes it easy to handle big data, it is difficult to use for fine-grained data processing; and although R has advanced analysis capability, it is difficult to use for processing large data. This study proposes a big data platform based on Hadoop for applications in the shipbuilding and offshore industry. The proposed platform incorporates the existing data of the shipyard and makes it possible to manage and process that data. To check the applicability of the platform, it is applied to estimate the weights of offshore structure topsides. In this study, we store data on existing FPSOs in the Hadoop-based Hortonworks Data Platform (HDP) and perform regression analysis using RHadoop. We evaluate the effectiveness of large-scale data processing with RHadoop by comparing the results of the regression analysis and the processing time with the results of a conventional weight estimation program.
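The MapReduce function mentioned in the abstract can be illustrated with a minimal single-process sketch, with plain Python standing in for a Hadoop job; the module names and weights below are invented for illustration, not data from the paper:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical (module_type, weight-in-tonnes) records.
records = [("piping", 120.0), ("electrical", 45.5),
           ("piping", 98.2), ("structure", 310.0),
           ("electrical", 60.3)]

def map_phase(record):
    """Map: emit one (key, value) pair per input record."""
    module, weight = record
    yield (module, weight)

def reduce_phase(key, values):
    """Reduce: aggregate every value that shares a key."""
    return key, sum(values)

# Shuffle: sort and group the intermediate pairs by key,
# as Hadoop does between the map and reduce phases.
intermediate = sorted(pair for rec in records for pair in map_phase(rec))
totals = dict(reduce_phase(key, [v for _, v in group])
              for key, group in groupby(intermediate, key=itemgetter(0)))
# totals now maps each module type to its summed weight
```

In a real Hadoop job the map and reduce functions run on different nodes and the shuffle moves data over the network; only the two functions themselves would be written by the user.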

Noise Source Identification of Electric Parking Brake by Using Noise Contribution Analysis and Identifying Resonance of Vehicle System (차량 시스템의 소음 기여도분석 및 공진 규명을 통한 전자식 주차 브레이크 소음원 규명)

  • Park, Goon-Dong;Seo, Bum-June;Yang, In-Hyung;Jeong, Jae-Eun;Oh, Jae-Eung;Lee, Jung-Youn
    • Transactions of the Korean Society of Automotive Engineers / v.20 no.3 / pp.119-125 / 2012
  • A caliper-integrated Electric Parking Brake (EPB) is an automatic parking brake system attached to the rear caliper. Because the EPB has recently been adopted in luxury vehicles, drivers are sensitive to EPB noise. The EPB is operated by a motor and gears, which are therefore the sources of its noise. To reduce this noise, one EPB manufacturer uses a helical gear and changes the shape of the EPB housing, but these methods are not optimized for reducing interior noise. There are many noise transfer paths into the vehicle interior, and it is difficult to identify the noise sources. Therefore, in this study, we performed contribution analysis and modal testing on the vehicle system. By comparing interior noise peaks with a resonance mode map, it is possible to distinguish between air-borne and structure-borne noise in the vehicle interior.

A Study on Fabric Color Mapping for 2D Virtual Wearing System (2D 가상 착의 시스템의 직물 컬러 매핑에 관한 연구)

  • Kwak, No-Yoon
    • Journal of Digital Contents Society / v.7 no.4 / pp.287-294 / 2006
  • Mass-customization is a fast-growing segment of the apparel market. A 2D virtual wearing system is a visual support tool that makes it possible to sell apparel before production and to reduce the time and costs of product development and manufacturing in apparel mass-customization. This paper concerns a fabric color mapping method for a 2D image-based virtual wearing system. In the proposed method, the clothing region of interest is segmented from a clothes model image using a region growing method, and a new fabric color selected by the user is then mapped onto it based on its intensity difference map. With the proposed method, regardless of the color or intensity of the model clothes, it is possible to virtually change the fabric color while preserving the illumination and shading properties of the selected clothing region, and to quickly and easily simulate, compare, and select multiple fabric color combinations for individual styles or entire outfits.
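The color-transfer step described above can be sketched roughly as follows. This is a minimal NumPy version in which the intensity difference map is taken as each pixel's brightness offset from the region mean; that formulation is a plausible reading of the abstract, not the paper's exact algorithm, and the 2x2 patch is synthetic:

```python
import numpy as np

def recolor(region, new_color):
    """Map a new fabric color onto a segmented clothing region while
    keeping its shading, via per-pixel intensity offsets.

    region:    (H, W, 3) float array in [0, 1], the segmented clothes pixels
    new_color: length-3 RGB fabric color in [0, 1]
    """
    intensity = region.mean(axis=2, keepdims=True)   # per-pixel brightness
    diff = intensity - intensity.mean()              # intensity difference map
    recolored = np.asarray(new_color) + diff         # shift new color by shading
    return np.clip(recolored, 0.0, 1.0)

# Tiny synthetic "clothing" patch: uniform hue, varying brightness.
patch = np.array([[[0.2, 0.2, 0.2], [0.4, 0.4, 0.4]],
                  [[0.6, 0.6, 0.6], [0.8, 0.8, 0.8]]])
out = recolor(patch, new_color=(0.1, 0.3, 0.7))
# Darker source pixels stay darker, brighter ones brighter,
# so the shading of the original garment survives the recoloring.
```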

  • PDF

Automatic Segmentation of Renal Parenchyma using Graph-cuts with Shape Constraint based on Multi-probabilistic Atlas in Abdominal CT Images (복부 컴퓨터 단층촬영영상에서 다중 확률 아틀라스 기반 형상제한 그래프-컷을 사용한 신실질 자동 분할)

  • Lee, Jaeseon;Hong, Helen;Rha, Koon Ho
    • Journal of the Korea Computer Graphics Society / v.22 no.4 / pp.11-19 / 2016
  • In this paper, we propose an automatic segmentation method for the renal parenchyma in abdominal CT images using graph-cuts with a shape constraint based on a multi-probabilistic atlas. The proposed method consists of the following three steps. First, to use the varied shape information of the renal parenchyma, a multi-probabilistic atlas is generated by cortex-based similarity registration. Second, initial seeds for graph-cuts are extracted by maximum a posteriori (MAP) estimation, and the renal parenchyma is segmented by graph-cuts with the shape constraint. Third, to reduce the alignment error of the probabilistic atlas and increase segmentation accuracy, registration and segmentation are performed iteratively. To evaluate the proposed method, qualitative and quantitative evaluations were performed. Experimental results show that the proposed method avoids leakage into neighboring regions whose intensities are similar to the renal parenchyma and improves segmentation accuracy.
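The MAP seed-extraction step can be illustrated with a toy NumPy sketch over one row of voxels and two labels. The atlas prior and the Gaussian intensity likelihoods below are invented stand-ins, not the paper's actual models or parameters:

```python
import numpy as np

# Toy 1-D "image": intensities of five voxels.
intensity = np.array([30.0, 80.0, 120.0, 125.0, 40.0])

# Hypothetical atlas prior P(label = parenchyma) per voxel.
prior = np.array([0.1, 0.4, 0.9, 0.8, 0.2])

def gaussian(x, mu, sigma):
    """Normal density, used as an intensity likelihood model."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Assumed intensity models: parenchyma ~ N(120, 20), background ~ N(40, 25).
lik_fg = gaussian(intensity, 120.0, 20.0)
lik_bg = gaussian(intensity, 40.0, 25.0)

# MAP decision per voxel: posterior is proportional to likelihood * prior.
post_fg = lik_fg * prior
post_bg = lik_bg * (1.0 - prior)
seeds = post_fg > post_bg   # True = foreground (parenchyma) seed
```

Voxels that are both bright and favored by the atlas become foreground seeds; the graph-cut then refines this initial labeling under the shape constraint.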

The Bigdata Processing Environment Building for the Learning System (학습 시스템을 위한 빅데이터 처리 환경 구축)

  • Kim, Young-Geun;Kim, Seung-Hyun;Jo, Min-Hui;Kim, Won-Jung
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.9 no.7 / pp.791-797 / 2014
  • To create an environment for Apache Hadoop, a parallel distributed processing system for big data, it is necessary to build a cluster either by connecting multiple computers as nodes or by configuring virtual nodes on a single computer. However, when such systems are actually built for education, there are many constraints in terms of cost and complex system configuration. Therefore, the development of an inexpensive and practical learning system that can be used for training by educational institutions and by beginners in the field of big data processing is urgently needed. In this study, we design and implement a learning system for parallel distributed processing of big data, based on Raspberry Pi boards, that enables training in and analysis of big data processing technologies such as Hadoop and NoSQL. The implemented parallel distributed processing system is expected to be a useful system for education and for beginners who want to get started with big data.
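The cluster setup such a Raspberry Pi learning system involves might look like the following sketch. The hostnames are invented for illustration; the commands and the `workers` file are standard Hadoop 3.x administration, but the paper's exact configuration is not given in the abstract:

```shell
# Hypothetical 4-board cluster: raspi-master plus three workers.
# On every node, core-site.xml points HDFS at the master, e.g.:
#   fs.defaultFS = hdfs://raspi-master:9000

# On the master, list the worker hostnames (Hadoop 3.x 'workers' file):
printf 'raspi-worker1\nraspi-worker2\nraspi-worker3\n' \
    > "$HADOOP_HOME/etc/hadoop/workers"

# Format the namenode once, then bring up HDFS and YARN across the cluster:
hdfs namenode -format
start-dfs.sh
start-yarn.sh

# Verify that the worker datanodes have joined:
hdfs dfsadmin -report
```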

Large Scale Incremental Reasoning using SWRL Rules in a Distributed Framework (분산 처리 환경에서 SWRL 규칙을 이용한 대용량 점증적 추론 방법)

  • Lee, Wan-Gon;Bang, Sung-Hyuk;Park, Young-Tack
    • Journal of KIISE / v.44 no.4 / pp.383-391 / 2017
  • As we enter a new era of big data, the amount of semantic data has rapidly increased. In order to derive meaningful information from such large-scale semantic data, studies that utilize SWRL (Semantic Web Rule Language) are being actively conducted. SWRL rules are based on data extracted from a user's empirical knowledge. However, conventional reasoning systems developed on single machines cannot process large-scale data. Similarly, multi-node based reasoning systems suffer performance degradation due to network shuffling. Therefore, this paper overcomes the limitations of existing systems and proposes more efficient distributed inference methods. It also introduces data partitioning strategies to minimize network shuffling. In addition, it describes a method for optimizing the incremental reasoning process through data selection and determination of rule order. To evaluate the proposed methods, experiments were conducted using WiseKB, consisting of 200 million triples, with 83 user-defined rules; the overall reasoning task was completed in 32.7 minutes. Experiments on the LUBM benchmark datasets also showed that our approach can perform reasoning twice as fast as MapReduce-based reasoning systems.
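A single forward-chaining pass of one SWRL-style rule over triples can be sketched as below. The rule, the toy triples, and the use of a hash join on the shared variable are illustrative only; in the paper's distributed setting, partitioning triples by that join key is what minimizes network shuffling:

```python
from collections import defaultdict

# Toy triple store: (subject, predicate, object). Data invented for illustration.
triples = {("alice", "hasParent", "bob"),
           ("bob", "hasBrother", "carl"),
           ("dave", "hasParent", "erin"),
           ("erin", "hasBrother", "frank")}

def apply_uncle_rule(kb):
    """One forward-chaining pass of the SWRL-style rule
       hasParent(x, y) & hasBrother(y, z) -> hasUncle(x, z).
    The hash join on y stands in for partitioning both predicates
    by the join key so matching triples land on the same node."""
    children_of = defaultdict(list)      # y -> [x]: children grouped by parent
    for s, p, o in kb:
        if p == "hasParent":
            children_of[o].append(s)
    derived = set()
    for s, p, o in kb:
        if p == "hasBrother":
            for child in children_of.get(s, []):
                derived.add((child, "hasUncle", o))
    return derived - kb                  # incremental: only genuinely new triples

new_triples = apply_uncle_rule(triples)
```

Repeating such passes until no new triples appear, and reapplying only the rules whose antecedent predicates gained data, is the essence of the incremental reasoning the abstract describes.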

An Analysis of Utilization on Virtualized Computing Resource for Hadoop and HBase based Big Data Processing Applications (Hadoop과 HBase 기반의 빅 데이터 처리 응용을 위한 가상 컴퓨팅 자원 이용률 분석)

  • Cho, Nayun;Ku, Mino;Kim, Baul;Xuhua, Rui;Min, Dugki
    • Journal of Information Technology and Architecture / v.11 no.4 / pp.449-462 / 2014
  • In the big data era, there are many components to consider in systems for capturing, storing, and analyzing stored or streaming data. Unlike traditional data handling systems, a big data processing system must consider the characteristics (format, velocity, and volume) of the data being handled. In this situation, a virtualized computing platform is an emerging platform for handling big data effectively, since virtualization technology enables computing resources to be managed dynamically and elastically with minimal effort. In this paper, we analyze the utilization of virtualized computing resources to discover suitable deployment models in an Apache Hadoop and HBase-based big data processing environment. Consequently, the TaskTracker service shows high CPU utilization and high disk I/O overhead during MapReduce phases. Moreover, the HRegion service indicates high network resource consumption for transferring data from the DataNode to the TaskTracker. The DataNode shows high memory utilization and disk I/O overhead for reading stored data.