• Title/Summary/Keyword: Short-term prediction


Temporal Change in Radiological Environments on Land after the Fukushima Daiichi Nuclear Power Plant Accident

  • Saito, Kimiaki;Mikami, Satoshi;Andoh, Masaki;Matsuda, Norihiro;Kinase, Sakae;Tsuda, Shuichi;Sato, Tetsuro;Seki, Akiyuki;Sanada, Yukihisa;Wainwright-Murakami, Haruko;Yoshimura, Kazuya;Takemiya, Hiroshi;Takahashi, Junko;Kato, Hiroaki;Onda, Yuichi
    • Journal of Radiation Protection and Research / v.44 no.4 / pp.128-148 / 2019
  • Massive environmental monitoring has been conducted continuously since the Fukushima Daiichi Nuclear Power Plant accident in March 2011, using monitoring methods with different features, together with studies of radiocesium migration in diverse environments. These results have clarified the characteristics of the radiological environments around the Fukushima site and their temporal change. Three months after the accident, multiple radionuclides including radiostrontium and plutonium were detected at many locations, and it was confirmed that radiocesium was the most important from the viewpoint of long-term exposure. Radiation levels around the Fukushima site have decreased greatly over time, and the decreasing trend was found to vary with local conditions. Air dose rates in environments related to human living have decreased faster than expected from radioactive decay by a factor of 2-3 on average, whereas those in pure forest have decreased at a rate closer to physical decay. The main causes of the air dose rate reduction were judged to be radioactive decay, vertical and horizontal movement of radiocesium, and decontamination. Land-use categories and human activities have significantly affected the reduction tendency. Differences in the air dose rate reduction trends can be explained qualitatively from the knowledge obtained in radiocesium migration studies, whereas quantitative explanation for individual sites remains an important future challenge. The ecological half-lives of air dose rates have been evaluated by several researchers, and a short-term half-life of less than one year was commonly observed in these studies. An empirical model for predicting the air dose rate distribution was developed from statistical analysis of an extensive car-borne survey dataset, which enabled prediction with confidence intervals. Different types of contamination maps were integrated to better quantify the spatial data. The obtained data were used for extended studies, such as identifying the reactor mainly responsible for the contamination of particular regions and developing standard procedures for environmental measurement and sampling. Annual external exposure doses for residents who intended to return to their homes were estimated to be within a few millisieverts. Different forms of environmental data and knowledge have been provided for a wide spectrum of people. Diverse aspects of the lessons learned from the Fukushima accident, including practical ones, must be passed on to future generations.
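As context for the ecological half-life discussion in the abstract above, here is a minimal sketch, not the paper's fitted model, of how an air dose rate decline curve can be composed from the physical decay of Cs-134 and Cs-137 multiplied by a two-component "ecological" attenuation term. The activity fractions and ecological half-lives below are illustrative assumptions.

```python
import numpy as np

LN2 = np.log(2.0)
T_CS134, T_CS137 = 2.06, 30.17     # physical half-lives of Cs-134 and Cs-137 (years)

def air_dose_rate(t_years, d0=1.0, f_cs134=0.5,
                  f_fast=0.6, t_eco_fast=0.6, t_eco_slow=20.0):
    """Relative air dose rate t years after deposition (d0 = initial rate).

    f_cs134: assumed initial dose-rate fraction from Cs-134 (illustrative).
    f_fast, t_eco_fast, t_eco_slow: assumed two-component ecological attenuation.
    """
    physical = (f_cs134 * np.exp(-LN2 * t_years / T_CS134)
                + (1.0 - f_cs134) * np.exp(-LN2 * t_years / T_CS137))
    ecological = (f_fast * np.exp(-LN2 * t_years / t_eco_fast)
                  + (1.0 - f_fast) * np.exp(-LN2 * t_years / t_eco_slow))
    return d0 * physical * ecological

for t in (0.25, 1, 3, 5, 10):
    print(f"t = {t:>5} y  relative dose rate = {air_dose_rate(t):.3f}")
```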

Development of Grid Based Distributed Rainfall-Runoff Model with Finite Volume Method (유한체적법을 이용한 격자기반의 분포형 강우-유출 모형 개발)

  • Choi, Yun-Seok;Kim, Kyung-Tak;Lee, Jin-Hee
    • Journal of Korea Water Resources Association / v.41 no.9 / pp.895-905 / 2008
  • Analyzing hydrologic processes in a watershed requires both various geographical data and hydrological time series data. Recently, not only geographical data such as DEMs (Digital Elevation Models) and hydrologic thematic maps but also hydrological time series from numerical weather prediction and rainfall radar have been provided as grid data, and studies on hydrologic analysis using these grid data have been carried out. In this study, GRM (Grid based Rainfall-runoff Model), a physically based distributed rainfall-runoff model, was developed to simulate the short-term rainfall-runoff process effectively using such grid data. The kinematic wave equation is used to simulate overland flow and channel flow, and the Green-Ampt model is used to simulate the infiltration process. The governing equation is discretized by the finite volume method. TDMA (TriDiagonal Matrix Algorithm) is applied to solve the systems of linear equations, and the Newton-Raphson iteration method is applied to solve the nonlinear term. The developed model was applied to simplified hypothetical watersheds to examine its reasonableness against results from Vflo™. It was then applied to the Wicheon watershed for verification; its applicability to a real site was examined, and the simulation results showed good agreement with measured hydrographs.
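The abstract names TDMA (the Thomas algorithm) as the solver for the tridiagonal linear systems arising from the finite volume discretization. Below is a minimal, self-contained sketch of that algorithm; the variable names and the toy system are assumptions for illustration, not taken from GRM.

```python
import numpy as np

def tdma(a, b, c, d):
    """Thomas algorithm for a tridiagonal system A x = d.

    a: sub-diagonal (length n, a[0] unused)
    b: main diagonal (length n)
    c: super-diagonal (length n, c[-1] unused)
    d: right-hand side (length n)
    """
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Toy 4x4 tridiagonal system for a quick check.
a = np.array([0.0, -1.0, -1.0, -1.0])
b = np.array([2.0, 2.0, 2.0, 2.0])
c = np.array([-1.0, -1.0, -1.0, 0.0])
d = np.array([1.0, 0.0, 0.0, 1.0])
print(tdma(a, b, c, d))
```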

Comparative Analysis of Radiative Flux Based on Satellite over Arctic (북극해 지역의 위성 기반 복사 에너지 산출물의 비교 분석)

  • Seo, Minji;Lee, Eunkyung;Lee, Kyeong-sang;Choi, Sungwon;Jin, Donghyun;Seong, Noh-hun;Han, Hyeon-gyeong;Kim, Hyun-Cheol;Han, Kyung-soo
    • Korean Journal of Remote Sensing / v.34 no.6_2 / pp.1193-1202 / 2018
  • Quantitative analysis of the energy budget is important for understanding long-term climate change in the Arctic, and high-quality, long-term radiative parameters are needed for this purpose. Since most satellite-based radiative flux products are provided only for short periods, several datasets must be used together, and it is therefore important to understand the differences between them before they are combined. In this study, we compared Arctic radiative flux products, CERES and GEWEX, to provide basic information for data linkage and for the analysis of changes in the Arctic climate. As a result, GEWEX underestimated the radiative variables, and the difference between the two datasets was about 3-25 W/m². In addition, the differences increased in high-latitude and sea-ice regions. When monthly means were compared, all variables except the longwave downward flux showed large differences of 9.26-26.71 W/m² in the spring-summer season. By characterizing these differences according to ice/ocean area, season, and region, the results of this study can be used as reference data for selecting and blending the GEWEX and CERES radiative flux datasets.
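A minimal sketch, under synthetic-data assumptions, of the kind of product comparison described above: overall and sea-ice-area bias and RMS difference between two gridded flux fields standing in for GEWEX and CERES. The arrays and the sea-ice mask below are toy data, not the actual products.

```python
import numpy as np

rng = np.random.default_rng(0)
ceres = 180.0 + 10.0 * rng.normal(size=(90, 360))          # toy shortwave flux field (W m^-2)
gewex = ceres - 8.0 + 5.0 * rng.normal(size=ceres.shape)   # systematically lower stand-in
sea_ice = rng.random(ceres.shape) > 0.7                    # toy sea-ice mask

diff = gewex - ceres
print(f"overall bias      : {diff.mean():6.2f} W m^-2")
print(f"overall RMS diff  : {np.sqrt((diff ** 2).mean()):6.2f} W m^-2")
print(f"sea-ice-area bias : {diff[sea_ice].mean():6.2f} W m^-2")
```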

A Study of the Influence of Short-Term Air-Sea Interaction on Precipitation over the Korean Peninsula Using Atmosphere-Ocean Coupled Model (기상-해양 접합모델을 이용한 단기간 대기-해양 상호작용이 한반도 강수에 미치는 영향 연구)

  • Han, Yong-Jae;Lee, Ho-Jae;Kim, Jin-Woo;Koo, Ja-Yong;Lee, Youn-Gyoun
    • Journal of the Korean Earth Science Society / v.40 no.6 / pp.584-598 / 2019
  • In this study, the effects of air-sea interaction on precipitation over the Seoul-Gyeonggi region of the Korean Peninsula from 28 to 30 August 2018 were analyzed using a regional atmosphere-ocean coupled model (RCM). In the RCM, WRF (Weather Research and Forecasting) was used as the atmosphere model and ROMS (Regional Oceanic Modeling System) as the ocean model. In the regional single atmosphere model (RSM), only the WRF model was used, and the sea surface temperature of the ECMWF ERA-Interim reanalysis was used as the lower boundary data. Compared with the observational data, the RCM, which considers the effect of air-sea interaction, showed spatial correlations of 0.6 and 0.84 for precipitation in the Seoul-Gyeonggi area and for the Yellow Sea surface temperature, respectively, higher than the RSM, whereas the mean bias errors (MBE) were -2.32 and -0.62, respectively, lower than the RSM. The air-sea interaction effect, analyzed through equivalent potential temperature, SST, and dynamic convergence fields, induced changes in the SST of the Yellow Sea. The changed SST in turn caused differences in thermal instability and kinematic convergence in the lower atmosphere. The thermal instability and convergence over the Seoul-Gyeonggi region induced upward motion, and consequently the precipitation in the RCM was closer to the observed spatial distribution than that in the RSM. Although various case studies and climatological analyses are needed to fully understand the effects of complex air-sea interaction, the results of this study provide evidence for the importance of air-sea interaction in predicting precipitation in the Seoul-Gyeonggi region.
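The verification metrics quoted above, spatial (pattern) correlation and mean bias error, can be sketched as follows; the toy precipitation fields are assumptions for illustration only, not model output or observations from the study.

```python
import numpy as np

def spatial_correlation(sim, obs):
    """Pattern correlation between two 2-D fields."""
    return float(np.corrcoef(sim.ravel(), obs.ravel())[0, 1])

def mean_bias_error(sim, obs):
    """MBE = mean(simulation - observation)."""
    return float(np.mean(sim - obs))

rng = np.random.default_rng(3)
obs = rng.gamma(shape=2.0, scale=15.0, size=(50, 50))        # toy observed precipitation (mm)
sim = 0.8 * obs + rng.normal(scale=5.0, size=obs.shape)      # toy model field with a dry bias

print("pattern correlation:", round(spatial_correlation(sim, obs), 2))
print("mean bias error    :", round(mean_bias_error(sim, obs), 2))
```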

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.95-108 / 2017
  • Recently, AlphaGo, Google DeepMind's Baduk (Go) artificial intelligence program, won a decisive victory over Lee Sedol. Many people had thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning, the core artificial intelligence technique used in the AlphaGo algorithm, has drawn much attention. Deep learning is already being applied to many problems; it shows good performance in image recognition and in high-dimensional data such as voice, images, and natural language, where existing machine learning techniques struggled. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal, with input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer opened an account. To evaluate the applicability of deep learning algorithms to binary classification, we compared models using the CNN and LSTM algorithms and the dropout technique, which are widely used in deep learning, with MLP models, a traditional artificial neural network. Since not all network design alternatives can be tested, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of dropout. The F1 score was used to evaluate the models, to show how well they classify the class of interest rather than overall accuracy. The details of applying each deep learning technique are as follows. The CNN algorithm reads adjacent values around a specific value and recognizes features, but for business data the distance between fields does not matter because each field is usually independent; in this experiment we therefore set the filter size of the CNN to the number of fields, so that the whole characteristics of the data are learned at once, and added a hidden layer to make decisions based on the additional features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position.
In the case of the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because CNN performed well in binary classification problems, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm appears unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we confirm that some deep learning algorithms can be applied to solve business binary classification problems.
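As a hedged illustration of the kind of experiment described above, the sketch below trains an MLP with two hidden layers and dropout (probability 0.5, as in the paper) on synthetic tabular data and evaluates it with the F1 score. The architecture details, data, and hyperparameters are assumptions, not the authors' exact settings.

```python
import numpy as np
from sklearn.metrics import f1_score
from tensorflow import keras

# Hypothetical tabular data: X holds customer features, y is the binary target.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16)).astype("float32")
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype("int32")
X_train, X_test, y_train, y_test = X[:800], X[800:], y[:800], y[800:]

model = keras.Sequential([
    keras.layers.Input(shape=(16,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.5),          # neurons dropped with probability 0.5
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.5),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=20, batch_size=32, verbose=0)

pred = (model.predict(X_test, verbose=0).ravel() > 0.5).astype("int32")
print("F1 score:", round(f1_score(y_test, pred), 3))
```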

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and may cause enormous damage. In particular, IT facility failures occur irregularly because of interdependence, and their causes are difficult to identify. Previous studies on failure prediction in data centers predicted failures by treating each server as a single, independent state, without assuming interaction between devices. In this study, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within the server. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. On the other hand, the causes of failures occurring inside the server are difficult to determine, and adequate prevention has not yet been achieved, precisely because server failures do not occur in isolation: they may cause failures in other servers or be triggered by failures originating in other servers. In other words, while existing studies analyzed failures on the assumption that a server does not affect other servers, this study assumes that failures propagate between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failures are considered in this study: network node down, server down, Windows activation service down, and database management system service down. The failures occurring on each device are sorted in chronological order, and when a failure occurs on one piece of equipment and another failure occurs on another piece of equipment within 5 minutes, the two failures are defined as simultaneous. After constructing sequences of devices that failed at the same time, five devices that frequently failed simultaneously within these sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, a Hierarchical Attention Network deep learning model structure was used, considering that the contribution of each server to a complex failure differs; this method increases prediction accuracy by giving greater weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data were modeled both as a single-server state and as a multiple-server state, and the results were compared. The second experiment improved the prediction accuracy for complex failures by optimizing the threshold for each server. In the first experiment, when a single server was assumed, three of the five servers were predicted not to have failed even though failures actually occurred; when multiple servers were assumed, all five servers were correctly predicted to have failed. This result supports the hypothesis that servers affect one another, and it confirms that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, under the assumption that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
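The 5-minute co-failure rule described above can be sketched as follows: failures on different devices are treated as simultaneous when they occur within 5 minutes of one another. The column names and sample failure records are assumptions for illustration only.

```python
import pandas as pd

events = pd.DataFrame({
    "device": ["srv01", "srv02", "net03", "srv01", "db04"],
    "failure_time": pd.to_datetime([
        "2020-01-01 09:00", "2020-01-01 09:03", "2020-01-01 09:04",
        "2020-01-01 11:30", "2020-01-01 11:42",
    ]),
}).sort_values("failure_time").reset_index(drop=True)

# Start a new co-failure group whenever the gap to the previous event exceeds 5 minutes.
gap = events["failure_time"].diff() > pd.Timedelta(minutes=5)
events["group"] = gap.cumsum()

# Devices that failed "simultaneously" under the 5-minute rule.
print(events.groupby("group")["device"].apply(list))
```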

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points based on chart patterns rather than complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult to computerize and falls short of users' needs. In recent years, there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge numbers of charts to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but this can be vulnerable in practice because whether the patterns found are suitable for trading is a separate question. In those studies, when a meaningful pattern is found, points matching the pattern are located and performance is measured after n days, assuming a purchase at that point in time; since this approach calculates virtual revenues, there can be large discrepancies with reality. Whereas existing research tries to find patterns with stock price predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Although some of these patterns were reported to have price predictability, no performance reports from actual markets were available. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern recognition accuracy. In this study, the 16 upward-reversal patterns and 16 downward-reversal patterns are reclassified into ten groups so that they can be easily implemented in the system, and only the one pattern with the highest success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation assumes that both the buy and the sell are actually executed, which makes it closer to a real trading situation. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then identifies the vertices. In the second, the high-low line zig-zag method, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third, the swing wave method, a central high price that is higher than the n high prices on its left and right is taken as a peak, and a central low price that is lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in the tests, which we interpret to mean that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished.
Because the number of cases was too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable way to find patterns with high success rates. We also performed the simulation using the walk-forward analysis (WFA) method, which tests the training section and the application section separately, allowing us to respond appropriately to market changes. In this study, we optimize at the level of the stock portfolio, because optimizing the variables for each individual stock carries a risk of over-optimization; we therefore selected 20 constituent stocks to increase the effect of diversification while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to form, but that higher volatility is not always better.
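A minimal sketch of the swing wave turning-point rule described above, under the assumption that a peak must exceed the highs of the n bars on each side and a valley must fall below the lows of the n bars on each side. The synthetic prices are illustrative only.

```python
import numpy as np

def swing_turning_points(high, low, n=3):
    """Return indices of swing peaks and valleys over an n-bar window on each side."""
    peaks, valleys = [], []
    for i in range(n, len(high) - n):
        neighbors_h = np.concatenate([high[i - n:i], high[i + 1:i + n + 1]])
        neighbors_l = np.concatenate([low[i - n:i], low[i + 1:i + n + 1]])
        if high[i] > neighbors_h.max():
            peaks.append(i)
        if low[i] < neighbors_l.min():
            valleys.append(i)
    return peaks, valleys

# Toy example with a synthetic random-walk price series (illustrative only).
rng = np.random.default_rng(1)
close = np.cumsum(rng.normal(size=200)) + 100.0
high, low = close + 0.5, close - 0.5
peaks, valleys = swing_turning_points(high, low, n=5)
print("peaks:", peaks[:5], "valleys:", valleys[:5])
```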

Regionality and Variability of Net Primary Productivity and Rice Yield in Korea (우리 나라의 순1차생산력 및 벼 수량의 지역성과 변이성)

  • JUNG YEONG-SANG;BANG JUNG-HO;HAYASHI YOSEI
    • Korean Journal of Agricultural and Forest Meteorology / v.1 no.1 / pp.1-11 / 1999
  • Rice yield and net primary productivity (NPP) depend on the variability of climate and soil. The variability and regionality of rice yield and NPP were evaluated with meteorological data from the Korea Meteorological Administration and actual rice yield data from the Ministry of Agriculture and Forestry, Korea. The NPP estimated with three models, based on temperature (NPP-T), precipitation (NPP-P), and net radiation (NPP-R), ranged from 10.87 to 17.52 Mg ha⁻¹ with an average of 14.69 Mg ha⁻¹ in South Korea, and from 6.47 to 15.58 Mg ha⁻¹ with an average of 12.59 Mg ha⁻¹ in North Korea. The primary limiting factor of NPP in Korea was net radiation, and the secondary limiting factor was temperature. Spectral analysis of the long-term change in air temperature in July and August showed periodicity, with short periods of 3 to 7 years and long periods of 15 to 43 years. The coefficient of variation (CV) of rice yield from 1989 to 1998 ranged from 3.23% to 12.37%, lower than in past decades; the CVs in Kangwon and Kyeongbuk were high, while that in Chonbuk was the lowest. A prediction model based on the yield index and the yield response to temperature obtained from field crop situations gave reasonable results, so the spatial distributions of actual and predicted rice yield could be expressed in maps. The predicted yields fitted the actual yields well except in Kyungbuk. For better prediction, the model should be modified to consider a radiation factor in further development.
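The abstract does not give the model equations, so as a hedged stand-in the sketch below uses the classical Miami model terms (Lieth, 1975) for temperature- and precipitation-limited NPP and takes the smaller value as the limiting estimate; the paper's net-radiation term is omitted and the input values are illustrative only.

```python
import numpy as np

def npp_temperature(t_mean_c):
    """Miami model temperature term, g dry matter m^-2 yr^-1."""
    return 3000.0 / (1.0 + np.exp(1.315 - 0.119 * t_mean_c))

def npp_precipitation(p_annual_mm):
    """Miami model precipitation term, g dry matter m^-2 yr^-1."""
    return 3000.0 * (1.0 - np.exp(-0.000664 * p_annual_mm))

# Illustrative mid-latitude temperate values (not site data from the paper).
t_mean, p_annual = 12.0, 1300.0
npp = min(npp_temperature(t_mean), npp_precipitation(p_annual))
print(f"Limiting NPP estimate: {npp:.0f} g m^-2 yr^-1 ({npp / 100:.2f} Mg ha^-1)")
```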


Development of 1ST-Model for 1 hour-heavy rain damage scale prediction based on AI models (1시간 호우피해 규모 예측을 위한 AI 기반의 1ST-모형 개발)

  • Lee, Joonhak;Lee, Haneul;Kang, Narae;Hwang, Seokhwan;Kim, Hung Soo;Kim, Soojun
    • Journal of Korea Water Resources Association / v.56 no.5 / pp.311-323 / 2023
  • To reduce disaster damage from localized heavy rain, floods, and urban inundation, it is important to know in advance whether a natural disaster will occur. Currently, heavy rain watches and heavy rain warnings are issued in Korea according to the criteria of the Korea Meteorological Administration. However, since a single criterion is applied to the whole country, heavy rain damage in a specific region cannot be clearly anticipated in advance. In this paper, we therefore tried to reset the current criteria for a special weather report so that they reflect regional characteristics, and to predict the damage caused by rainfall one hour ahead. Gyeonggi Province, which suffers heavy rain damage more frequently than other regions, was selected as the study area. The hazard-triggering rainfall (rainfall inducing disaster) was then determined from hourly rainfall and heavy rain damage data, considering local characteristics. A heavy rain damage prediction model was developed with a decision tree model and a random forest model, which are machine learning techniques, using the hazard-triggering rainfall and rainfall data. In addition, long short-term memory and deep neural network models were used to predict rainfall one hour ahead. The rainfall predicted by the developed model was fed into the trained classification model to predict whether heavy rain damage will occur one hour later; we call this the 1ST-Model. The 1ST-Model can be used to prevent and prepare for heavy rain disasters and is expected to contribute greatly to reducing damage caused by heavy rain.
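A minimal sketch of the classification stage described above: a random forest that labels an hour as damage or no damage from rainfall features. The feature set, the toy damage rule, and the synthetic data are assumptions for illustration, not the study's data or thresholds.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
rain_1h = rng.gamma(shape=1.5, scale=8.0, size=n)         # hourly rainfall (mm), synthetic
rain_prev3h = rng.gamma(shape=2.0, scale=10.0, size=n)    # antecedent 3-hour rainfall (mm), synthetic
damage = ((rain_1h > 30) | (rain_1h + rain_prev3h > 70)).astype(int)  # toy damage rule

X = np.column_stack([rain_1h, rain_prev3h])
X_tr, X_te, y_tr, y_te = train_test_split(X, damage, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```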

Rapid Earthquake Location for Earthquake Early Warning (지진조기경보를 위한 신속 진앙위치 결정)

  • Kim, Kwang-Hee;Rydelek, Paul A.;Suk, Bong-Chool
    • Journal of the Korean Society of Hazard Mitigation / v.8 no.6 / pp.73-79 / 2008
  • Economic growth, industrialization, and urbanization have made Korean society more vulnerable than ever to seismic hazard. Although Korea has not experienced severe earthquake damage during the last few decades, there is little doubt about the potential for large earthquakes in Korea, as documented in the historical literature. Since there is no immediate promise of short-term earthquake prediction with current science and technology, earthquake early warning systems are attracting more and more attention as a practical measure to mitigate earthquake damage. Earthquake early warning systems provide a few seconds to tens of seconds of warning time before the onset of strong ground shaking. To achieve rapid earthquake location, we propose to take full advantage of information from existing seismic networks, by using P-wave arrival times at the two stations nearest the earthquake hypocenter together with the information that P waves have not yet arrived at the other stations. Ten earthquakes in the Korean Peninsula and its vicinity were selected for a feasibility study. We observed that location results are not reliable when earthquakes occur outside the seismic network; earthquakes inside the network, however, can be located rapidly enough for earthquake early warning. The Seoul metropolitan area may secure 10-50 seconds of warning time before strong shaking starts for certain events. Carefully orchestrated actions during this warning time should reduce the hazard and mitigate damage from potentially disastrous earthquakes.
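A minimal sketch, not the published algorithm, of the idea described above: a coarse grid search for candidate epicenters that match the P arrival times at the two nearest stations and are consistent with P not yet having arrived at the remaining stations. The station geometry, P velocity, and times are illustrative assumptions.

```python
import numpy as np

VP = 6.0  # assumed crustal P velocity, km/s

stations = np.array([[0.0, 0.0], [40.0, 10.0], [80.0, -20.0], [60.0, 70.0]])  # km, toy network
arrivals = {0: 5.0, 1: 9.0}       # observed P arrival times (s) at the two nearest stations
not_yet = {2: 12.0, 3: 12.0}      # stations where P has not arrived by these times (s)

xs, ys = np.meshgrid(np.linspace(-100, 150, 101), np.linspace(-100, 150, 101))
candidates = []
for x, y in zip(xs.ravel(), ys.ravel()):
    d = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
    tt = d / VP                                   # travel times from candidate epicenter
    t0 = arrivals[0] - tt[0]                      # implied origin time from the first station
    fits_second = abs((t0 + tt[1]) - arrivals[1]) < 0.5              # matches second arrival
    still_en_route = all(t0 + tt[k] > t for k, t in not_yet.items())  # P not yet elsewhere
    if fits_second and still_en_route:
        candidates.append((x, y))

print(f"{len(candidates)} candidate epicenters remain on the grid")
```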