• Title/Summary/Keyword: increasing mapping


Assessing Future Water Demand for Irrigating Paddy Rice under Shared Socioeconomic Pathways (SSPs) Scenario Using the APEX-Paddy Model (APEX-paddy 모델을 활용한 SSPs 시나리오에 따른 논 필요수량 변동 평가)

  • Choi, Soon-Kun;Cho, Jaepil;Jeong, Jaehak;Kim, Min-Kyeong;Yeob, So-Jin;Jo, Sera;Owusu Danquah, Eric;Bang, Jeong Hwan
    • Journal of The Korean Society of Agricultural Engineers / v.63 no.6 / pp.1-16 / 2021
  • Global warming due to climate change is expected to significantly affect the hydrological cycle of agriculture. Therefore, to predict the magnitude of future climate impacts on agricultural water resources, it is necessary to estimate how the water demand for irrigation will change as the climate changes. This study evaluated future changes in irrigation water demand for paddy rice in Gimje, South Korea, under two Shared Socioeconomic Pathways (SSP) scenarios (SSP2-4.5 and SSP5-8.5). The APEX-Paddy model, developed for simulating the paddy environment, was used. The model was calibrated and validated using H2O flux observations from an eddy covariance system installed at the field. Sixteen General Circulation Models (GCMs) from the Coupled Model Intercomparison Project phase 6 (CMIP6), downscaled using Simple Quantile Mapping (SQM), were used. The resulting future climate data were fed into APEX-Paddy simulations to evaluate future irrigation water demand at the paddy field. Changes in irrigation water demand were evaluated for the near future (NF, 2011-2040), mid future (MF, 2041-2070), and far future (FF, 2071-2100) by comparison with historical data (1981-2010). The results show that under SSP2-4.5, irrigation water demand would increase by 2.3%, 4.8%, and 7.5% for NF, MF, and FF, respectively, compared with the historical demand. Under SSP5-8.5, irrigation water demand would increase by 1.6%, 5.7%, and 9.7% for NF, MF, and FF, respectively. The increasing irrigation water demand into the future is driven by increasing evapotranspiration resulting from rising daily mean temperatures and solar radiation under the changing climate.
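The quantile mapping idea behind SQM-style bias correction can be illustrated with a minimal empirical-CDF sketch. This is an illustrative example, not the authors' implementation; the variable names and toy data are assumptions.

```python
import numpy as np

def quantile_map(obs, model_hist, model_fut):
    """Empirical quantile mapping: find each future value's rank within
    the historical model run, then map that rank onto the observed
    distribution, removing the model's systematic bias."""
    ranks = np.searchsorted(np.sort(model_hist), model_fut) / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)
    return np.quantile(obs, ranks)

# Toy example: the model runs about 2 units too warm relative to observations
rng = np.random.default_rng(0)
obs = rng.normal(15.0, 3.0, 1000)          # observed daily temperature
model_hist = rng.normal(17.0, 3.0, 1000)   # biased historical simulation
model_fut = rng.normal(19.0, 3.0, 1000)    # future simulation (same bias)
corrected = quantile_map(obs, model_hist, model_fut)
print(round(model_fut.mean() - corrected.mean(), 1))  # roughly the 2-unit historical bias
```

The correction preserves the simulated climate-change signal (the +2 shift from historical to future) while aligning the distribution with observations.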

Factors Influencing Technology Commercialization of Universities in Korea : Systematic Literature Review on Domestic Research (우리나라 대학의 기술사업화 영향요인 연구 : 국내 논문에 대한 체계적 문헌 고찰)

  • Lee, Cheol-Ju;Choi, Jong-in
    • Journal of Korea Technology Innovation Society / v.22 no.1 / pp.50-84 / 2019
  • While the technology commercialization of American universities was greatly activated after the implementation of the Bayh-Dole Act, that of Korean universities has been steadily increasing since the enactment of the Technology Transfer Promotion Act of 2000, owing to numerous related laws, government support programs, and accumulated technology-transfer experience. However, the level of technology commercialization at domestic universities is still insufficient compared with that of advanced countries such as the United States. In this study, we sought to identify the factors promoting technology transfer and start-ups at Korean universities by examining prior domestic research published since 2000, using the Systematic Literature Review (SLR) methodology. Our analysis found that technology transfer was the most studied field, while papers on start-ups have been increasing rapidly in recent years. The factors influencing technology commercialization were divided into internal and external factors. The former were categorized as human resources, technology and knowledge resources, financial resources, managerial resources and strategy, university type, and education and culture, while the latter were grouped into consumers, region, and infrastructure. Detailed factors were then integrated within each category by systematic mapping. This study is meaningful in that it systematically accumulates the results of research on the technology commercialization of Korean universities and identifies areas that are lacking or need additional research. The integrated promoting factors for technology transfer and start-ups can also be used as a checklist by universities and public institutes.

Spatio-temporal analysis with risk factors for five major violent crimes (위험요인이 포함된 시공간 모형을 이용한 5대 강력범죄 분석)

  • Jeon, Young Eun;Kang, Suk-Bok;Seo, Jung-In
    • The Korean Journal of Applied Statistics / v.35 no.5 / pp.619-629 / 2022
  • The five major violent crimes (murder, robbery, rape and forced indecent acts, theft, and violence) are representative crimes that threaten the safety of members of society and occur frequently in daily life. These crimes have negative effects such as lowering citizens' quality of life. In the case of Seoul, the capital of Korea, the risk of the five major violent crimes is rising because its population density keeps increasing as large numbers of people move to Seoul from the provinces. In this study, to help reduce this risk, the relative risk of occurrence of the five major violent crimes in Seoul is modeled using three spatio-temporal models. In addition, various risk factors are included to identify those that significantly affect the relative risk. The best model is selected in terms of the deviance information criterion, and analysis results for the best model, including various visualizations, are provided. By analyzing the important risk factors affecting the five major violent crimes and the relative risk of each region, this study can help establish efficient strategies to keep people safe in their everyday lives.

Mapping Precise Two-dimensional Surface Deformation on Kilauea Volcano, Hawaii using ALOS2 PALSAR2 Spotlight SAR Interferometry (ALOS-2 PALSAR-2 Spotlight 영상의 위성레이더 간섭기법을 활용한 킬라우에아 화산의 정밀 2차원 지표변위 매핑)

  • Hong, Seong-Jae;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing / v.35 no.6_3 / pp.1235-1249 / 2019
  • Kilauea Volcano is one of the most active volcanoes in the world. In this study, we used ALOS-2 PALSAR-2 satellite imagery to measure the surface deformation that occurred near the summit of Kilauea Volcano from 2015 to 2017. To measure two-dimensional surface deformation, interferometric synthetic aperture radar (InSAR) and multiple aperture SAR interferometry (MAI) methods were applied to two interferometric pairs. To improve the precision of the 2D measurement, we compared the root-mean-squared deviation (RMSD) of the difference between measurements while varying the effective antenna length and the normalized squint value, two factors that affect the measurement performance of the MAI method, and selected the values that measure deformation most precisely. After selecting the optimal values, the RMSD of the difference between the MAI measurements decreased from 4.07 cm to 2.05 cm. In the two interferograms, the maximum line-of-sight deformations were -28.6 cm and -27.3 cm, respectively; the maximum along-track deformations were 20.2 cm and 20.8 cm, and -24.9 cm and -24.3 cm in the opposite direction. After stacking the two interferograms, two-dimensional surface deformation mapping was performed, and a maximum surface deformation of approximately 30.4 cm was measured in the northwest direction. In addition, large deformations of more than 20 cm were measured in all directions. These measurements indicate that the risk of eruptive activity at Kilauea Volcano is increasing, and the 2015-2017 surface deformation results are expected to be helpful for future studies of Kilauea's eruptive activity.
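Combining a line-of-sight (InSAR) measurement with an along-track (MAI) measurement into a horizontal displacement vector can be sketched geometrically. The sketch below assumes negligible vertical motion, so the LOS term maps entirely onto the ground-range direction; the angles, sign conventions, and input values are illustrative assumptions, not those used in the paper.

```python
import math

def horizontal_deformation(d_los, d_az, incidence_deg, heading_deg):
    """Combine line-of-sight (InSAR) and along-track (MAI) displacements
    into a horizontal vector, assuming no vertical motion.
    Returns (magnitude, bearing in degrees clockwise from north)."""
    # Project LOS onto the horizontal ground-range direction
    d_range = d_los / math.sin(math.radians(incidence_deg))
    heading = math.radians(heading_deg)
    # Unit vectors (east, north) of the azimuth and ground-range directions,
    # assuming a right-looking sensor geometry
    az_e, az_n = math.sin(heading), math.cos(heading)
    rg_e, rg_n = math.cos(heading), -math.sin(heading)
    east = d_az * az_e + d_range * rg_e
    north = d_az * az_n + d_range * rg_n
    magnitude = math.hypot(east, north)
    bearing = math.degrees(math.atan2(east, north)) % 360.0
    return magnitude, bearing

# Illustrative values only (metres; incidence and heading in degrees)
mag, bearing = horizontal_deformation(-0.286, 0.202, 35.0, 348.0)
print(mag, bearing)
```

Because the decomposition is a rotation, the magnitude equals the norm of the (ground-range, azimuth) component pair regardless of heading.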

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, due to recent interest in the technology and research on various algorithms. Knowledge-based systems are a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. In recent years, much research on knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, user-created summaries of some unifying aspect of an article.
This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema, defined in the DBpedia Extraction Framework. Because the knowledge is generated from semi-structured infobox data created by users, DBpedia can expect high reliability in terms of accuracy. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences suitable for extracting triples, and selecting values and transforming them into RDF triple structures. Wikipedia infobox structures are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify an input document into infobox categories, which correspond to ontology classes. After determining the document's classification, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, training on about 200 classes and about 2,500 relations. Furthermore, we ran comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process.
Through this proposed process, structured knowledge can be utilized by extracting it from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort experts must spend constructing instances according to the ontology schema.
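The BIO-tagging and triple-conversion steps described above can be illustrated with a minimal sketch. The sentence, tag set, and entity names below are invented examples, not taken from the authors' training data.

```python
# A sentence from a hypothetical Wikipedia article, BIO-tagged for a
# DBpedia-style property "birthPlace" (B- begins a value, I- continues it).
tokens = ["Kim", "was", "born", "in", "Busan", ",", "South", "Korea", "."]
tags   = ["O", "O", "O", "O", "B-birthPlace", "O", "B-birthPlace", "I-birthPlace", "O"]

def bio_to_spans(tokens, tags):
    """Collect contiguous B-/I- runs into (property, value) pairs."""
    spans, current, prop = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append((prop, " ".join(current)))
            prop, current = tag[2:], [tok]
        elif tag.startswith("I-") and current:
            current.append(tok)
        else:
            if current:
                spans.append((prop, " ".join(current)))
            current, prop = [], None
    if current:
        spans.append((prop, " ".join(current)))
    return spans

# Each extracted span becomes the object of an RDF-style triple
subject = "dbr:Kim"
triples = [(subject, f"dbo:{p}", v) for p, v in bio_to_spans(tokens, tags)]
print(triples)
# [('dbr:Kim', 'dbo:birthPlace', 'Busan'), ('dbr:Kim', 'dbo:birthPlace', 'South Korea')]
```

In the paper's pipeline a CRF or Bi-LSTM-CRF predicts the tags; here they are given by hand to keep the sketch self-contained.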

Calculation of future rainfall scenarios to consider the impact of climate change in Seoul City's hydraulic facility design standards (서울시 수리시설 설계기준의 기후변화 영향 고려를 위한 미래강우시나리오 산정)

  • Yoon, Sun-Kwon;Lee, Taesam;Seong, Kiyoung;Ahn, Yujin
    • Journal of Korea Water Resources Association / v.54 no.6 / pp.419-431 / 2021
  • In Seoul, it has been confirmed that the duration of rainfall events is shortening while the frequency and intensity of heavy rains are increasing with the changing climate. In addition, because of high population density and urbanization in most areas, floods frequently occur in flood-prone areas as impermeable surfaces increase. The Seoul metropolitan government is pursuing various structural and non-structural measures to resolve flood-prone areas, and a disaster prevention performance target was set in consideration of the climate change impact on future precipitation; this study was conducted to reduce overall flood damage in Seoul over the long term. In this study, 29 GCMs under the RCP4.5 and RCP8.5 scenarios were used for spatial and temporal disaggregation over three research periods: short-term (2006-2040, P1), mid-term (2041-2070, P2), and long-term (2071-2100, P3). For spatial downscaling, daily GCM data were processed through quantile mapping based on rainfall at the Seoul station managed by the Korea Meteorological Administration; for temporal downscaling, daily data were disaggregated to hourly data through k-nearest-neighbor resampling and a nonparametric temporal disaggregation technique using a genetic algorithm. Through temporal downscaling, 100 detailed scenarios were produced for each GCM scenario, IDF curves were calculated from the resulting total of 2,900 detailed scenarios, and these were averaged to estimate the change in future extreme rainfall. As a result, it was confirmed that the 100-year return period, 1-hour duration design rainfall increases by 8 to 16% under the RCP4.5 scenario and by 7 to 26% under the RCP8.5 scenario. Based on these results, the design rainfall needed to prepare for future climate change in Seoul was estimated, and it can be used to establish purpose-specific water-related disaster prevention policies.
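The k-nearest-neighbor temporal disaggregation step can be sketched as choosing a historical analog day with a similar daily total and borrowing its observed hourly pattern. This is an illustrative sketch on synthetic data, not the authors' genetic-algorithm-tuned implementation.

```python
import numpy as np

def knn_disaggregate(daily_total, hist_daily, hist_hourly, k=3, rng=None):
    """Disaggregate one daily rainfall total into 24 hourly values by
    sampling an analog day among the k nearest historical daily totals
    and rescaling its hourly profile to match the target total."""
    rng = rng or np.random.default_rng()
    idx = np.argsort(np.abs(hist_daily - daily_total))[:k]  # k nearest analogs
    chosen = rng.choice(idx)
    profile = hist_hourly[chosen]
    frac = profile / profile.sum()   # hourly fractions of the analog day
    return daily_total * frac        # rescale to the target daily total

rng = np.random.default_rng(1)
hist_hourly = rng.gamma(0.5, 2.0, size=(365, 24))  # synthetic hourly record (mm)
hist_daily = hist_hourly.sum(axis=1)
hourly = knn_disaggregate(50.0, hist_daily, hist_hourly, k=5, rng=rng)
print(round(hourly.sum(), 6))  # the daily total is conserved
```

Repeating the sampling yields an ensemble of plausible hourly sequences per day, which is how the 100 detailed scenarios per GCM can be generated.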

Gridded Expansion of Forest Flux Observations and Mapping of Daily CO2 Absorption by the Forests in Korea Using Numerical Weather Prediction Data and Satellite Images (국지예보모델과 위성영상을 이용한 극상림 플럭스 관측의 공간연속면 확장 및 우리나라 산림의 일일 탄소흡수능 격자자료 산출)

  • Kim, Gunah;Cho, Jaeil;Kang, Minseok;Lee, Bora;Kim, Eun-Sook;Choi, Chuluong;Lee, Hanlim;Lee, Taeyun;Lee, Yangwon
    • Korean Journal of Remote Sensing / v.36 no.6_1 / pp.1449-1463 / 2020
  • As global warming and climate change become more serious, the importance of CO2 absorption by forests is increasing as a means of coping with greenhouse gas issues. Under the UN Framework Convention on Climate Change, national CO2 absorption must be calculated at the local level in a more scientific and rigorous manner. This paper presents the gridded expansion of forest flux observations and the mapping of daily CO2 absorption by the forests in Korea using numerical weather prediction data and satellite images. To capture the sensitive daily changes of plant photosynthesis, we built a machine learning model to retrieve the daily RACA (reference amount of CO2 absorption), taking the climax forest in Gwangneung as the reference, and adopted the NIFoS (National Institute of Forest Science) lookup table of CO2 absorption by forest type and age to produce daily AACA (actual amount of CO2 absorption) raster data reflecting the spatial variation of the forests in Korea. In an experiment covering the 1,095 days between Jan 1, 2013 and Dec 31, 2015, our RACA retrieval model showed high accuracy, with a correlation coefficient of 0.948. To achieve tier 3 daily statistics for AACA, long-term and detailed forest surveys should be combined with the model in the future.
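The lookup-table step can be sketched as scaling the daily reference absorption (RACA) by a per-forest-type coefficient to obtain the actual absorption (AACA). The coefficients and the tiny grid below are invented placeholders, not NIFoS values.

```python
import numpy as np

# Hypothetical absorption ratios relative to the reference (climax) forest,
# keyed by (forest type, age class). Placeholder values, not NIFoS data.
ABSORPTION_RATIO = {
    ("pine", 3): 0.62,
    ("oak", 4): 0.81,
    ("larch", 5): 0.74,
}

def daily_aaca(raca_grid, type_grid, age_grid):
    """Actual absorption = reference absorption x species/age coefficient,
    evaluated cell by cell over the raster."""
    out = np.zeros_like(raca_grid)
    for i in np.ndindex(raca_grid.shape):
        key = (type_grid[i], age_grid[i])
        out[i] = raca_grid[i] * ABSORPTION_RATIO.get(key, 0.0)
    return out

raca = np.full((2, 2), 10.0)                                  # daily RACA grid
ftype = np.array([["pine", "oak"], ["oak", "larch"]], dtype=object)
age = np.array([[3, 4], [4, 5]])
print(daily_aaca(raca, ftype, age))
```

In practice the RACA grid would come from the machine learning model driven by numerical weather prediction data, and the type/age grids from forest-type maps.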

Development and Performance Evaluation of Multi-sensor Module for Use in Disaster Sites of Mobile Robot (조사로봇의 재난현장 활용을 위한 다중센서모듈 개발 및 성능평가에 관한 연구)

  • Jung, Yonghan;Hong, Junwooh;Han, Soohee;Shin, Dongyoon;Lim, Eontaek;Kim, Seongsam
    • Korean Journal of Remote Sensing / v.38 no.6_3 / pp.1827-1836 / 2022
  • Disasters occur unexpectedly and are difficult to predict, and their scale and damage are increasing compared to the past; one disaster can sometimes develop into another. Among the four stages of disaster management, search and rescue are carried out in the response stage when an emergency occurs, so personnel such as firefighters deployed to the scene face considerable risk. In this respect, robots are a technology with high potential to reduce damage to human life and property during the initial response at a disaster site. In addition, Light Detection And Ranging (LiDAR) can acquire 3D information over a relatively wide range using a laser, and its high accuracy and precision make it a very useful sensor given the characteristics of a disaster site. Therefore, in this study, development and experiments were conducted so that a robot could perform real-time monitoring at a disaster site. A multi-sensor module was developed by combining LiDAR, an Inertial Measurement Unit (IMU) sensor, and a computing board. This module was then mounted on the robot, and a customized Simultaneous Localization and Mapping (SLAM) algorithm was developed. A method for stably mounting the multi-sensor module on the robot to maintain optimal accuracy at disaster sites was also studied. To check the performance of the module, SLAM was tested inside a disaster building, and comparisons of various SLAM algorithms and measured distances were performed. As a result, PackSLAM, developed in this study, showed lower error than the other algorithms, demonstrating its potential for application at disaster sites. In the future, to further enhance usability at disaster sites, various experiments will be conducted in a rough-terrain environment with many obstacles.

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, growing demand for big data analysis has driven the vigorous development of related technologies and tools, while advances in IT and the increased penetration of smart devices are producing large amounts of data. Accordingly, data analysis technology is rapidly becoming popular, attempts to acquire insights through data analysis are continuously increasing, and big data analysis is expected to become more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. The entry barriers to big data analysis are therefore gradually lowering, data analysis technology is spreading, and big data analysis is increasingly expected to be performed by the requesters themselves. Along with this, interest in various types of unstructured data, especially text data, is continually increasing. The emergence of new web platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are being utilized in many fields. Text mining is a concept that embraces various theories and techniques for text analysis; among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents corresponding to each issue, and provides the identified documents as clusters. It is considered very useful in that it reflects the semantic elements of documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the entire collection must be analyzed at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to a large number of documents, and it creates a scalability problem: processing time increases exponentially with the number of objects analyzed. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling on large document collections with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first being combined. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied as thoroughly as other work on topic modeling. In this paper, we propose a topic modeling approach that solves these two problems.
First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology, and an additional experiment confirmed that it can provide results similar to topic modeling over the entire collection. We also propose a reasonable method for comparing the results of the two approaches.
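The local-to-global topic mapping can be sketched as matching each local topic to the most similar global topic by comparing their term distributions. The similarity measure (cosine) and the toy distributions below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def map_local_to_global(local_topics, global_topics):
    """Assign each local topic (a term distribution over the vocabulary)
    to the global topic with the highest cosine similarity."""
    def normalize(m):
        return m / np.linalg.norm(m, axis=1, keepdims=True)
    sim = normalize(local_topics) @ normalize(global_topics).T
    return sim.argmax(axis=1)

# Toy term distributions over a 4-word vocabulary
global_topics = np.array([[0.7, 0.1, 0.1, 0.1],   # global topic 0 favors word 0
                          [0.1, 0.1, 0.7, 0.1]])  # global topic 1 favors word 2
local_topics = np.array([[0.6, 0.2, 0.1, 0.1],
                         [0.1, 0.2, 0.6, 0.1],
                         [0.8, 0.1, 0.05, 0.05]])
print(map_local_to_global(local_topics, global_topics))  # [0 1 0]
```

Once each local topic is mapped, a document's local topic assignment can be translated into a global topic label, which is also the basis for checking whether global and local runs agree.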

Structures and Policies of British Geographic Information Dissemination for Korea National GIS Project (국가지리정보사업 추진을 위한 영국지리정보 유통구조 및 정책 연구 - 영국지리정보원의 역할을 중심으로-)

  • Kim, Bok-Hwan;Kim, Young-Hoon
    • Journal of the Korean Association of Geographic Information Studies / v.9 no.4 / pp.22-33 / 2006
  • The objective of this paper is to discuss the geographic information policies and strategies of the British government and to suggest effective policies for the Korea National GIS project, which has been implementing geographic information infrastructure in the public sector and private markets for the last ten years. To this end, the paper reviews the main features of the British GIS programme, such as the distribution structure and processes of GI markets and the government policies and strategies led by Ordnance Survey, the leading mapping agency in the UK. In conclusion, several issues are explored with reference to the experience of the British GIS programme: the first concerns the nature of geographic information, the second the circulation policies for spatial data, and the last proposes Korean GIS policies and strategies for successful geographic information and spatial data implementation. The analysis of British GIS development indicates that a shift began to take place from central government coordination toward more extensive utilization of the private and commercial sectors, reflecting the increasing importance of geospatial data circulation at all levels among GIS stakeholders. Finally, these discussions are particularly relevant where multi-agency collaboration by the Korean government is concerned; such collaboration can take the form of joint ventures by consortiums involving both data producers and data users, in order to increase commercial participation in value-added geospatial products and to encourage research and development with low-cost or free pricing policies.
