• Title/Summary/Keyword: frequency-based flood

Research of Water-related Disaster Monitoring Using Satellite Bigdata Based on Google Earth Engine Cloud Computing Platform (구글어스엔진 클라우드 컴퓨팅 플랫폼 기반 위성 빅데이터를 활용한 수재해 모니터링 연구)

  • Park, Jongsoo; Kang, Ki-mook
    • Korean Journal of Remote Sensing, v.38 no.6_3, pp.1761-1775, 2022
  • Due to unpredictable climate change, the frequency of water-related disasters and the scale of their damage are continuously increasing. For disaster management, it is essential to identify damaged areas over wide regions and to monitor them for mid- and long-term forecasting. In the water-disaster field, remote sensing research using Synthetic Aperture Radar (SAR) satellite imagery for wide-area monitoring is being actively conducted. Time-series analysis for monitoring requires a complex preprocessing chain that collects large volumes of images and accounts for noisy radar characteristics, which takes considerable time. With the recent development of cloud computing, many platforms capable of spatiotemporal analysis using satellite big data have been proposed. Google Earth Engine (GEE) is a representative platform that provides about 600 satellite datasets for free and enables near-real-time spatiotemporal analysis based on analysis-ready satellite data. In this study, therefore, immediate detection of water-disaster damage and mid- to long-term time-series observation were carried out using GEE. Using the Otsu technique, which is widely applied in change detection, changes in river width and flooded area caused by river flooding were identified, focusing on the torrential rains of 2020. In addition, from a disaster-management perspective, the time-series trend of waterbody change from 2018 to 2022 was examined. The short processing time enabled by JavaScript-based coding, together with GEE's strengths in spatiotemporal analysis and result visualization, is expected to make it useful in the water-disaster field, and its applications are expected to expand through linkage with various other satellite big-data sources.
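
The workflow above pairs SAR time series with Otsu thresholding to separate water from land. As a minimal illustration only (not the paper's GEE JavaScript code), the sketch below applies Otsu's method to a synthetic SAR backscatter array in Python; the scene values and the scikit-image dependency are assumptions.

```python
# Minimal sketch of Otsu-threshold water detection on a SAR backscatter
# array, analogous to the GEE workflow described above. The scene is
# synthetic; in GEE this step would run on Sentinel-1 VV backscatter (dB).
import numpy as np
from skimage.filters import threshold_otsu

def detect_water(backscatter_db: np.ndarray) -> np.ndarray:
    """Return a boolean water mask from a SAR backscatter image (dB).

    Otsu's method picks the threshold minimizing intra-class variance;
    open water is dark (low backscatter) in SAR imagery, so pixels
    below the threshold are flagged as water.
    """
    t = threshold_otsu(backscatter_db)
    return backscatter_db < t

if __name__ == "__main__":
    # Synthetic scene: dark water (~-20 dB) against brighter land (~-8 dB).
    rng = np.random.default_rng(0)
    scene = rng.normal(-8.0, 1.5, size=(200, 200))
    scene[:, :80] = rng.normal(-20.0, 1.0, size=(200, 80))  # flooded strip
    mask = detect_water(scene)
    print(f"water fraction: {mask.mean():.2%}")  # ~40% of the scene
```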

An Analysis on Climate Change and Military Response Strategies (기후변화와 군 대응전략에 관한 연구)

  • Park Chan-Young; Kim Chang-Jun
    • The Journal of the Convergence on Culture Technology, v.9 no.2, pp.171-179, 2023
  • Man-made climate change has produced abnormal weather phenomena worldwide and increased the number of disasters. Major developed countries and their militaries are preparing for disasters caused by extreme weather. However, current disaster-prevention plans and facilities are designed around frequency-and-intensity criteria derived from historical statistics, which is insufficient preparation for increasingly frequent extreme-weather disasters that defy probabilistic design assumptions. The U.S. and British militaries have been the quickest to adopt research and policy approaches to climate change and the changing disaster threat, and they consider both climate-change mitigation and adaptation. The South Korean military still views disasters primarily as storm and flood damage, and there is little discussion of extreme weather and disasters driven by climate change. This study examines how developed countries (the United States and the United Kingdom) established their disaster-management systems and analyzes each country's (and military's) response policies through literature analysis. To maintain robust security, the South Korean military should establish a response policy centered on sustainability and resilience, and three policy approaches are needed. First, the future operational environment of the Korean Peninsula should be analyzed in preparation for changes driven by climate change. Second, a climate-change 'adaptation policy' for sustainability should be discussed. Third, preparations should be made for future disasters that climate change may cause.

Evaluation of Typhoon Hazard Factors using the EST Approach (EST 기법에 의한 태풍의 재해위험인자 평가)

  • Lee, Soon-Cheol; Kim, Jin-Kyoo; Oh, Kyoung-Doo; Jun, Byong-Ho; Hong, Il-Pyo
    • Journal of Korea Water Resources Association, v.38 no.10 s.159, pp.825-839, 2005
  • This paper describes the application of the Empirical Simulation Technique (EST) to the simulation of risk-based typhoon hazard potential. For six selected cities on the Korean Peninsula, EST simulations of one hundred years were performed one hundred times, using historical typhoon data as the training set. The simulation results were then post-processed to estimate the means, standard deviations, and ranges of variation of the maximum wind velocities and daily rainfalls. Comparing the average wind velocities of 100-year-recurrence typhoons, the wind hazard potential was highest for Mokpo among the six cities, followed by Busan, Cheju, Inchun, Taegu, and Seoul in descending order. For the flood hazard potential associated with a typhoon, Busan ranked highest, followed by Mokpo, Cheju, Seoul, Inchun, and Taegu. In terms of overall typhoon hazard potential, cities in the southern coastal regions were identified as being exposed to the most severe typhoon hazard.
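
EST, as used above, builds many synthetic records by resampling historical events. Below is a hedged sketch of that idea with made-up training values, and without the random perturbation of resampled events that full EST implementations typically add.

```python
# Hedged sketch of the Empirical Simulation Technique (EST) idea: resample
# historical typhoon responses (here, annual maximum wind speed at one
# city) to build many synthetic 100-year records, then summarize the
# 100-year event. The training values are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(42)
historical_vmax = np.array([18.0, 22.5, 31.0, 27.4, 40.2, 25.1,
                            33.8, 19.6, 29.9, 36.5])  # m/s, hypothetical

n_sims, n_years = 100, 100
# Each simulation draws one "typhoon year" at a time from the training
# set with replacement, mimicking EST's bootstrap of the historical record.
sims = rng.choice(historical_vmax, size=(n_sims, n_years), replace=True)

century_max = sims.max(axis=1)          # 100-year maximum per simulation
print(f"mean 100-yr max wind : {century_max.mean():.1f} m/s")
print(f"std of 100-yr max    : {century_max.std(ddof=1):.1f} m/s")
print(f"range                : {century_max.min():.1f}-{century_max.max():.1f} m/s")
```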

High-Precision and 3D GIS Matching and Projection Based User-Friendly Radar Display Technique (3차원 GIS 정합 및 투영에 기반한 사용자 친화적 레이더 자료 표출 기법)

  • Jang, Bong-Joo; Lee, Keon-Haeng; Lee, Dong-Ryul; Lim, Sanghun
    • Journal of Korea Water Resources Association, v.47 no.12, pp.1145-1154, 2014
  • In recent years, as the frequency and intensity of severe weather disasters such as flash floods have increased, providing accurate and prompt information to the public has become very important, and the need for user-friendly monitoring and warning systems is growing. This paper introduces a method that reproduces radar observations as multimedia content and applies the reproduced data to mash-up services. In addition, an accurate GIS matching technique is presented that helps track the exact location of severe atmospheric phenomena in progress. The proposed method creates multimedia content structured as two-dimensional images, vector graphics, or three-dimensional volume data by reprocessing the various radar variables obtained from a weather radar. The multimedia-formatted radar data are then matched to detailed raster or vector GIS map platforms. Simulation tests over various scenarios indicate that a display system based on the proposed method helps users grasp the paths and risk levels of severe weather easily and intuitively. We expect this technique will also help emergency managers interpret radar observations properly and forecast meteorological disasters more effectively.
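
The matching step above projects polar radar observations onto GIS layers. As a rough illustration only (the paper's high-precision 3D matching is more involved), the sketch below georeferences radar gates with a flat-earth approximation that ignores beam elevation and earth curvature; the site coordinates and sweep geometry are hypothetical.

```python
# Minimal sketch of georeferencing radar gates for GIS overlay.
import numpy as np

EARTH_R = 6371000.0  # mean earth radius, m

def gate_lonlat(site_lon, site_lat, range_m, azimuth_deg):
    """Project a polar radar gate (range, azimuth) to lon/lat."""
    az = np.deg2rad(azimuth_deg)
    dx = range_m * np.sin(az)            # east displacement, m
    dy = range_m * np.cos(az)            # north displacement, m
    lat = site_lat + np.rad2deg(dy / EARTH_R)
    lon = site_lon + np.rad2deg(dx / (EARTH_R * np.cos(np.deg2rad(site_lat))))
    return lon, lat

# One sweep: 360 azimuths x 400 gates at 250 m spacing (hypothetical).
rng_m, az_deg = np.meshgrid(np.arange(400) * 250.0, np.arange(360.0))
lon, lat = gate_lonlat(126.93, 37.44, rng_m, az_deg)
print(lon.shape, lat.min(), lat.max())
```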

Evaluation of satellite-based soil moisture retrieval over the korean peninsula : using AMSR2 LPRM algorithm and ground measurement data (위성기반 토양수분 자료의 한반도 지역 적용성 평가: AMSR2 LPRM 알고리즘과 지점관측 자료를 이용하여)

  • Kim, Seongkyun; Kim, Hyunglok; Choi, Minha
    • Journal of Korea Water Resources Association, v.49 no.5, pp.423-429, 2016
  • This study assesses the quality of Advanced Microwave Scanning Radiometer 2 (AMSR2) soil moisture products from the GCOM-W1 satellite, retrieved with the Land Parameter Retrieval Model (LPRM) algorithm, against field measurements in South Korea from March to September 2014. The mean bias and root mean square error between the AMSR2 LPRM X-band soil moisture products and ground measurements showed reasonable values of 0.03 and 0.16, respectively. The maximum Pearson correlation coefficient was 0.67, indicating good agreement with ground measurements in terms of temporal variability. Comparing AMSR2 soil moisture with in-situ measurements by overpass time and band frequency, the X-band products at ascending overpass times outperformed those of the C1 and C2 bands. This study thus offers insight into the applicability of AMSR2 soil moisture products for large-scale monitoring of natural disasters such as drought and flood.
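
The reported scores (bias 0.03, RMSE 0.16, r up to 0.67) are standard validation metrics. Below is a minimal sketch of how they are computed for a satellite/in-situ pair, with synthetic series standing in for the AMSR2 and ground data.

```python
# Sketch of the validation metrics reported above (mean bias, RMSE,
# Pearson correlation) for satellite vs. in-situ soil moisture.
import numpy as np

def validate(sat: np.ndarray, insitu: np.ndarray):
    bias = np.mean(sat - insitu)
    rmse = np.sqrt(np.mean((sat - insitu) ** 2))
    r = np.corrcoef(sat, insitu)[0, 1]
    return bias, rmse, r

rng = np.random.default_rng(1)
insitu = np.clip(rng.normal(0.25, 0.06, 200), 0.02, 0.45)   # m3/m3
sat = np.clip(insitu + rng.normal(0.03, 0.05, 200), 0.0, 0.6)
bias, rmse, r = validate(sat, insitu)
print(f"bias={bias:.3f}  rmse={rmse:.3f}  r={r:.2f}")
```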

Statistical significance test of polynomial regression equation for Huff's quartile method of design rainfall (설계강우량의 Huff 4분위 방법 다항회귀식에 대한 유의성 검정)

  • Park, Jinhee; Lee, Jaejoon; Lee, Sungho
    • Journal of Korea Water Resources Association, v.51 no.3, pp.263-272, 2018
  • For the design of hydraulic structures, the design flood discharge corresponding to a specific frequency is generally derived from the design storm via the rainfall-runoff relationship. In the past, empirical formulas such as the rational equation were used to calculate peak flow, but for longer rainfall durations the resulting runoff patterns differ from actual events, so the accuracy of the temporal distribution of the probability rainfall becomes important. Huff's quartile method is used here for the temporal distribution of rainfall, with the third quartile generally adopted. The regression equation for the Huff quartile curve typically uses a sixth-order polynomial because of its high accuracy over the full rainfall duration. In statistical modeling, however, a regression equation should be concise, in accordance with the principle of parsimony, and its coefficients should be retained based on statistical significance. In this study, therefore, a statistical significance test of the polynomial regression equation for the Huff quartile temporal distribution of design rainfall was conducted for 69 rainfall observation stations under the jurisdiction of the Korea Meteorological Administration. The results show that the regression equation is statistically significant only up to the fourth-order polynomial, as the regression coefficients are significant at most of the 69 stations only up to that order.
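
The test described above amounts to fitting a high-order polynomial and checking which coefficients are statistically significant. A minimal sketch under that reading, using a synthetic Huff-type cumulative curve and statsmodels (an assumption; the paper does not name its software):

```python
# Fit a 6th-order polynomial to a Huff-type cumulative rainfall curve
# and inspect coefficient p-values. The curve is synthetic.
import numpy as np
import statsmodels.api as sm

t = np.linspace(0, 1, 50)                  # dimensionless storm time
noise = np.random.default_rng(2).normal(0, 0.01, t.size)
huff = np.clip(3 * t**2 - 2 * t**3 + noise, 0, 1)   # smooth S-shaped curve

X = sm.add_constant(np.column_stack([t**k for k in range(1, 7)]))
fit = sm.OLS(huff, X).fit()
for k, p in enumerate(fit.pvalues):
    print(f"order {k}: coef={fit.params[k]: .3f}  p={p:.3f}")
# Orders whose p-value exceeds 0.05 can be dropped, shrinking the
# 6th-order equation toward the 4th-order form found in the paper.
```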

A study on the derivation and evaluation of flow duration curve (FDC) using deep learning with a long short-term memory (LSTM) networks and soil water assessment tool (SWAT) (LSTM Networks 딥러닝 기법과 SWAT을 이용한 유량지속곡선 도출 및 평가)

  • Choi, Jung-Ryel; An, Sung-Wook; Choi, Jin-Young; Kim, Byung-Sik
    • Journal of Korea Water Resources Association, v.54 no.spc1, pp.1107-1118, 2021
  • Climate change brought on by global warming has increased the frequency of floods and droughts on the Korean Peninsula, along with the resulting casualties and physical damage. Preparing for and responding to these water disasters requires national-level planning for water resource management, while watershed-level management requires flow duration curves (FDC) derived from continuous, long-term observation data. Traditionally, physical rainfall-runoff models have been widely used in water resources studies to generate duration curves, but a number of recent studies have explored data-driven deep learning techniques for runoff prediction. Physical models produce hydraulically and hydrologically reliable results but require a high level of understanding and can take longer to operate; data-driven deep learning techniques offer the benefit of smaller input-data requirements and shorter operation times, but the relationship between input and output is processed as a black box, making it impossible to account for hydraulic and hydrological characteristics. This study chose one model from each category. For the physical model, long-term data without gaps were generated through parameter calibration of the Soil and Water Assessment Tool (SWAT), a physical model whose applicability has been tested in Korea and elsewhere, and these data were used as training data for the Long Short-Term Memory (LSTM) deep learning technique. An analysis of the time-series data found that, during the calibration period (2017-18), the Nash-Sutcliffe Efficiency (NSE) and the coefficient of determination were higher for SWAT by 0.04 and 0.03, respectively, indicating that the SWAT results are superior to the LSTM results. In addition, the annual time-series outputs of the two models were sorted in descending order and the resulting flow duration curves were compared with the curve based on observed flow: the NSE values for SWAT and LSTM were 0.95 and 0.91, and the coefficients of determination 0.96 and 0.92, respectively, indicating that both models perform well. Although the LSTM needs improved simulation accuracy in the low-flow sections, it appears widely applicable for deriving flow duration curves in large basins, where model development and operation take longer because of the volume of input data, and in ungauged basins with insufficient input data.
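
Both evaluation steps above, sorting flows into a flow duration curve and scoring with NSE, are compact to express. A sketch with synthetic daily flows (not the study's SWAT/LSTM output):

```python
# Build a flow duration curve (FDC) by sorting daily flows in descending
# order with Weibull plotting positions, and score a simulation against
# observations with the Nash-Sutcliffe Efficiency (NSE).
import numpy as np

def fdc(q: np.ndarray):
    """Return (exceedance probability, sorted flow) for an FDC."""
    q_sorted = np.sort(q)[::-1]
    ranks = np.arange(1, q.size + 1)
    return ranks / (q.size + 1), q_sorted      # Weibull formula m/(n+1)

def nse(obs: np.ndarray, sim: np.ndarray) -> float:
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(3)
obs = np.exp(rng.normal(2.0, 1.0, 365))        # skewed daily flows, m3/s
sim = obs * rng.normal(1.0, 0.15, 365)         # imperfect simulation
p_obs, q_obs = fdc(obs)
p_sim, q_sim = fdc(sim)
print(f"NSE of sorted curves: {nse(q_obs, q_sim):.2f}")
```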

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok; Lee, Hyun Jun; Oh, Kyong Joo
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.25-38, 2019
  • Selecting high-quality information that meets users' interests and needs from the overflow of content is becoming ever more important. Amid this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating the information request as a simple string, and large IT companies such as Google and Microsoft are focusing on knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is a field where text-data analysis is expected to be useful and promising, because new information is generated constantly and the earlier the information, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information continually emerges. However, automatic knowledge extraction faces several practical difficulties. First, it is difficult to build corpora from different fields with the same algorithm and to extract good-quality triples. Second, producing labeled text data by hand becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual nature of knowledge. To overcome these limits and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike previous work, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data-processing methods are applied in the proposed model to address the problems of previous research and enhance the model's effectiveness. The study makes three contributions: first, a practical and simple automatic knowledge extraction method that can be applied in practice; second, a simple problem definition that makes performance evaluation possible; and finally, increased expressiveness of the extracted knowledge, achieved by generating input data on a sentence basis without complex morphological analysis. The empirical analysis and an objective performance evaluation method are also presented. For the empirical study, experts' reports on 30 individual stocks, the top 30 items by publication frequency from May 30, 2017 to May 21, 2018, are used. Of the 5,600 reports in total, 3,074 (about 55%) are designated as the training set and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, its score can be computed by feeding it into every score function, and the stock whose function returns the highest score is predicted as the item related to that entity. To evaluate the model, its prediction power, and whether the score functions are well constructed, are confirmed by calculating the hit ratio over all reports in the testing set. In the empirical study, the model achieves 69.3% hit accuracy on a testing set of 2,526 reports, a meaningfully high ratio despite the constraints of the research. Looking at the prediction performance per stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, perform far below average, possibly due to interference from other similar items and the generation of new knowledge. This paper proposes a methodology for identifying the key entities, or combinations of entities, needed to search related information according to the user's investment intention. Graph data are generated using only the named entity recognition tool and fed to the neural tensor network without a field-specific learning corpus or word vectors. The empirical test confirms the effectiveness of the proposed model as described above. Some limitations remain to be addressed; most notably, the model's especially poor performance on a few stocks calls for further research. Finally, the empirical study confirms that the learning method presented here can be used to match new text information semantically with the related stocks.
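
The per-stock score functions described above follow the neural tensor network form. Below is a minimal numpy sketch of one NTN score function; the dimensions, the random weights, and the pairing of an entity vector with a stock vector are illustrative assumptions, not the paper's trained model.

```python
# Minimal sketch of a Neural Tensor Network (NTN) score function:
# score(e1, e2) = u^T tanh(e1^T W e2 + V [e1; e2] + b), with k tensor slices.
import numpy as np

rng = np.random.default_rng(4)
d, k = 100, 4                      # entity dim (top-100 one-hot), slices

W = rng.normal(0, 0.1, (k, d, d))  # bilinear tensor
V = rng.normal(0, 0.1, (k, 2 * d))
b = np.zeros(k)
u = rng.normal(0, 0.1, k)

def ntn_score(e1: np.ndarray, e2: np.ndarray) -> float:
    bilinear = np.einsum("i,kij,j->k", e1, W, e2)   # e1^T W[slice] e2
    linear = V @ np.concatenate([e1, e2])
    return float(u @ np.tanh(bilinear + linear + b))

entity = np.eye(d)[7]              # one-hot vector for the 8th entity
stock = np.eye(d)[0]               # stock-side vector (hypothetical)
print(f"score: {ntn_score(entity, stock):.3f}")
# At prediction time the entity is scored by every stock's function and
# assigned to the stock whose function returns the highest score.
```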

Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers, v.7 no.1, pp.861-876, 1965
  • During my stay in the Netherlands I studied the following, primarily in relation to the Mokpo Yong-san project, which NEDECO had studied for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to derive a unit hydrograph; here I explain how to derive one from the actual runoff curve at Naju. A discharge curve from a single rain storm depends on the rainfall intensity per hour. After constructing the hydrograph at two-hour intervals, the two-hour unit hydrograph is obtained by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm, from June 24 to June 26, 1963, with an average rainfall intensity of 9.4 mm per hour over 12 hours. Had several rain gage stations been established in the catchment area above Naju before this storm, accurate rainfall-intensity data could have been gathered across the catchment; as it was, I used the automatic rain gage record of the Mokpo meteorological station to determine the rainfall intensity. To develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff and kept the difference between the calculated and measured discharge below 10%. The discharge period of a unit graph depends on the length of the catchment area. 2. Determination of sluice dimensions. According to the design principles presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir, to avoid crop and structure damage. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area. To calculate the amount of water discharged through the sluice per half hour, the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and an estimated water level inside the reservoir at the end of each time interval. The total discharge through the sluice follows from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates, I calculated the change in stored volume at half-hour intervals; from the stored volume and the known storage capacity of the reservoir, I calculated the water level in the reservoir, which must match the estimated level. The mean tide is adequate for determining the sluice dimensions, because the spring tide is the worst case and the neap tide the best for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether built up in successive horizontal layers or built in from both sides, the velocity of the water flowing through the closing gap increases because of the gradual decrease in the gap's cross-sectional area. I calculated the velocities in the closing gap during flood and ebb, for the first-mentioned construction method, until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible up to that point; the increase in velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir, raising the mean water level of the reservoir; the difference in hydraulic head is then no longer negligible and must be taken into account. When, in the course of construction, the submerged weir becomes a free weir, critical flow occurs: the point, during either ebb or flood, at which the velocity reaches a maximum. As the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The currents and velocities for a stage in the closure of the final gap are calculated as follows. Using an average tide with a negligible daily inequality, I estimated the water level on the upstream side of the dam (inner water level) and determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment follows from the calculated current in m³/s and the cross-sectional area at that moment. At the same time, the velocity can be calculated from the difference between the inner water level and the tidal level (outer water level) with the formula $h = \frac{V^2}{2g}$, and it must equal the velocity determined from the current. If the two velocities differ, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 of the difference between the lower water level and the crest of the dam, we speak of a "free weir": the flow over the weir then depends on the higher water level alone, not on the difference between high and low water levels. When the weir is "submerged", that is, when the higher water level is less than 2/3 of that difference, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, because the mean level in the estuary is higher than the mean level of the tide. In building dams with barges, the maximum velocity in the closing gap may not exceed 3 m/s; as the computed maximum velocities are higher than this limit, other construction methods must be used to close the gap, such as dump-cars working from each side or a cableway.
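
The iterative level check in part 3 hinges on one consistency condition: the gap velocity implied by the hourly current (Q/A) must match the velocity from the head difference via $h = \frac{V^2}{2g}$, i.e. $V = \sqrt{2gh}$. A sketch of that check with illustrative numbers, not the Mokpo project's:

```python
# Velocity consistency check from the tidal computation: V from the
# current (Q / A) must match V from the head difference, V = sqrt(2 g h);
# otherwise the estimated inner water level is revised and recomputed.
import math

G = 9.81                        # gravitational acceleration, m/s^2

def gap_velocity_from_head(head_m: float) -> float:
    return math.sqrt(2 * G * head_m)

def check_level(q_m3s: float, area_m2: float, head_m: float, tol=0.05):
    v_current = q_m3s / area_m2
    v_head = gap_velocity_from_head(head_m)
    ok = abs(v_current - v_head) <= tol * v_head
    return v_current, v_head, ok

v1, v2, ok = check_level(q_m3s=4200.0, area_m2=1500.0, head_m=0.40)
print(f"V from current: {v1:.2f} m/s, V from head: {v2:.2f} m/s, "
      f"{'consistent' if ok else 'revise inner level estimate'}")
```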

Agroclimatic Zone and Characters of the Area Subject to Climatic Disaster in Korea (농업 기후 지대 구분과 기상 재해 특성)

  • 최돈향; 윤성호
    • KOREAN JOURNAL OF CROP SCIENCE, v.34 no.s02, pp.13-33, 1989
  • Agroclimate should be analyzed and evaluated accurately to make better use of available climatic resources in establishing optimum cropping systems. Introducing appropriate cultivars and cultivation techniques into well-classified agroclimatic zones can contribute to the stability of crop production and to lower production costs. To classify the agroclimatic zones, climatic factors such as temperature, precipitation, sunshine, humidity, and wind were considered as the major influences on crop growth and yield. For the classification of rice agroclimatic zones, the analysis used precipitation and the drought index during transplanting time, the first occurrence of the effective growth temperature (above 15°C) and its duration, the probability of low-temperature occurrence, variation in temperature and sunshine hours, and a climatic productivity index. The agroclimatic zones for rice were classified into 19 zones: (1) Taebaek Alpine Zone, (2) Taebaek Semi-Alpine Zone, (3) Sobaek Mountainous Zone, (4) Noryeong Sobaek Mountainous Zone, (5) Yeongnam Inland Mountainous Zone, (6) Northern Central Inland Zone, (7) Central Inland Zone, (8) Western Soebaek Inland Zone, (9) Noryeong Eastern and Western Inland Zone, (10) Honam Inland Zone, (11) Yeongnam Basin Zone, (12) Yeongnam Inland Zone, (13) Western Central Plain Zone, (14) Southern Charyeong Plain Zone, (15) South Western Coastal Zone, (16) Southern Coastal Zone, (17) Northern Eastern Coastal Zone, (18) Central Eastern Coastal Zone, and (19) South Eastern Coastal Zone. The classification of agroclimatic zones for cropping systems was based on the rice agroclimatic zones, considering zonal climatic factors for both summer and winter crops as well as traditional cropping systems. The agroclimatic zones identified for cropping systems are: (I) Alpine Zone, (II) Mountainous Zone, (III) Central Northern Inland Zone, (IV) Central Northern West Coastal Zone, (V) Central Southern West Coastal Zone, (VI) Gyeongbuk Inland Zone, (VII) Southern Inland Zone, (VIII) Southern Coastal Zone, and (IX) Eastern Coastal Zone. The zonal characteristics of climatic disasters under rice cultivation were also identified: frequent drought zones are (11) Yeongnam Basin Zone and (17) Northern Eastern Coastal Zone, where the frequency of low-temperature occurrence below 13°C at the root-setting stage exceeds 9.1%, and (2) Taebaek Semi-Alpine Zone, with cold injury during the reproductive stages; typhoon and intensive-precipitation zones are (10) Honam Inland Zone, (15) South Western Coastal Zone, and (16) Southern Coastal Zone, with more than four damage events a year, reflecting typhoon paths and heavy precipitation intensity. In particular, the three east coastal zones, (17), (18), and (19), suffered wind and flood damage two to three times a year and were also subject to drought and cold-temperature injury.
