• Title/Summary/Keyword: Bayesian analysis


A Study on the Overall Economic Risks of a Hypothetical Severe Accident in Nuclear Power Plant Using the Delphi Method (델파이 기법을 이용한 원전사고의 종합적인 경제적 리스크 평가)

  • Jang, Han-Ki;Kim, Joo-Yeon;Lee, Jai-Ki
    • Journal of Radiation Protection and Research
    • /
    • v.33 no.4
    • /
    • pp.127-134
    • /
    • 2008
  • The potential economic impact of a hypothetical severe accident at a nuclear power plant (Uljin units 3/4) was estimated by applying the Delphi method, which draws on expert judgments and opinions, to quantify uncertain factors. For the purpose of this study, the radioactive plume was assumed to move inland. Since the economic risk can be divided into direct costs and indirect effects, with more uncertainty involved in the latter, the direct costs were estimated first and the indirect effects were then estimated by applying a weighting factor to the direct costs. The Delphi method, however, is subject to distortion or discrimination among variables because of human behavioral patterns. A mathematical approach based on Bayesian inference was therefore employed in the data processing to improve the Delphi results, and a model for this processing was developed. One-dimensional Monte Carlo analysis was applied to obtain a distribution of the weighting factor. The mean and median of the weighting factor for the indirect effects were 2.59 and 2.08, respectively, both higher than the value of 1.25 suggested by the OECD/NEA. Factors such as the small national territory and a public attitude sensitive to radiation could have affected the panel's judgment. The parameters of the model for estimating the direct costs were then classified as U- and V-types, and two-dimensional Monte Carlo analysis was applied to quantify the overall economic risk. The resulting median of the overall economic risk was about 3.9% of the gross domestic product (GDP) of Korea in 2006. When the cost of electricity loss, the largest direct cost, was excluded, the overall economic risk fell to 2.2% of GDP. This assessment can be used as a reference for justifying radiological emergency planning and preparedness.
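As a minimal sketch of the one-dimensional Monte Carlo step described above: the paper derives the weighting-factor distribution from processed Delphi responses, but a lognormal form is assumed here purely for illustration, with parameters chosen so the median (~2.08) and mean (~2.59) match the summary values reported in the abstract.

```python
import math
import random
import statistics

def simulate_weighting_factor(n=100_000, mu=0.732, sigma=0.662, seed=1):
    """One-dimensional Monte Carlo sketch: sample an assumed lognormal
    distribution for the indirect-effect weighting factor.  The lognormal
    form and its parameters are illustrative assumptions, not taken from
    the paper's Delphi-derived distribution."""
    rng = random.Random(seed)
    return [math.exp(rng.gauss(mu, sigma)) for _ in range(n)]

samples = simulate_weighting_factor()
mean_w = statistics.fmean(samples)      # close to the reported 2.59
median_w = statistics.median(samples)   # close to the reported 2.08
```

With the weighting factor w in hand, the overall risk per sample would be direct_cost * (1 + w), which is how a sampled w propagates into the two-dimensional analysis.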

Estimating Fine Particulate Matter Concentration using GLDAS Hydrometeorological Data (GLDAS 수문기상인자를 이용한 초미세먼지 농도 추정)

  • Lee, Seulchan;Jeong, Jaehwan;Park, Jongmin;Jeon, Hyunho;Choi, Minha
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.6_1
    • /
    • pp.919-932
    • /
    • 2019
  • Fine particulate matter (PM2.5) is not only affected by anthropogenic emissions but is also intensified, transported, and reduced by hydrometeorological factors. It is therefore essential to understand the relationships between hydrometeorological factors and PM2.5 concentration. In Korea, PM2.5 concentration is measured at ground observatories, and estimates are assigned to locations without observatories. Such data cannot properly represent an area, so the true concentration at those locations remains unknown, and it is also difficult to trace the migration, intensification, and reduction of PM2.5. In this study, we analyzed the relationships between hydrometeorological factors acquired from the Global Land Data Assimilation System (GLDAS) and PM2.5 by means of Bayesian Model Averaging (BMA). Using BMA, we also selected the factors meaningfully related to the variation of PM2.5 concentration. Four seasonal PM2.5 concentration models were developed using those selected factors together with Aerosol Optical Depth (AOD) from the MODerate resolution Imaging Spectroradiometer (MODIS). Finally, we mapped the model results to show the spatial distribution of PM2.5. The models correlated well with the observed PM2.5 concentration (R ~0.7; IOA ~0.78; RMSE ~7.66 μg/m³). When the models were compared with observed PM2.5 concentrations at different locations, the correlation coefficients varied (R: 0.32-0.82), although the data distributions were similar. The concentration map built from the models demonstrated its capability to represent the temporal and spatial variation of PM2.5 concentration. The results of this study are expected to facilitate research analyzing the sources and movement of PM2.5 if the study area is extended to East Asia.
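The core of BMA is weighting candidate models by their posterior probability. A standard approximation computes those weights from BIC scores as p(M_k | D) ∝ exp(-BIC_k / 2); the sketch below shows only this weighting step, with invented BIC values for hypothetical hydrometeorological predictor sets (the paper's actual factor selection is more involved).

```python
import math

def bma_weights(bic):
    """Posterior model probabilities from BIC scores using the standard
    BMA approximation p(M_k | D) proportional to exp(-BIC_k / 2).
    Subtracting the best score first keeps the exponentials stable."""
    best = min(bic.values())
    raw = {m: math.exp(-(b - best) / 2) for m, b in bic.items()}
    total = sum(raw.values())
    return {m: r / total for m, r in raw.items()}

# Hypothetical BIC values for three candidate predictor subsets
bic = {"humidity+wind": 104.2, "humidity": 106.0, "wind+precip": 110.5}
weights = bma_weights(bic)   # the lowest-BIC model gets the most weight
```

A BMA prediction is then the weight-averaged prediction across all candidate models, rather than the output of a single selected model.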

A Study on derivation of drought severity-duration-frequency curve through a non-stationary frequency analysis (비정상성 가뭄빈도 해석 기법에 따른 가뭄 심도-지속기간-재현기간 곡선 유도에 관한 연구)

  • Jeong, Minsu;Park, Seo-Yeon;Jang, Ho-Won;Lee, Joo-Heon
    • Journal of Korea Water Resources Association
    • /
    • v.53 no.2
    • /
    • pp.107-119
    • /
    • 2020
  • This study analyzed past drought characteristics based on observed rainfall data and performed a long-term outlook for future extreme droughts using the Representative Concentration Pathways 8.5 (RCP 8.5) climate change scenario. The Standardized Precipitation Index (SPI), a meteorological drought index, was applied for quantitative drought analysis with durations of 1, 3, 6, 9, and 12 months. A single long-term time series was constructed by combining daily rainfall observations with the RCP scenario, and the resulting data were used as SPI inputs for each duration. For the analysis of meteorological droughts observed over a relatively long period in Korea (since 1954), 12 rainfall stations were selected, and 10 general circulation models (GCMs) were applied at the same points. Trend analysis and clustering were performed to analyze drought characteristics under climate change. For non-stationary frequency analysis using a sampling technique, we adopted DE-MC, which combines Bayesian-based differential evolution (DE) with Markov chain Monte Carlo (MCMC). The non-stationary drought frequency analysis was used to derive Severity-Duration-Frequency (SDF) curves for the 12 locations. A quantitative outlook for future droughts was carried out by deriving SDF curves from long-term hydrologic data under the assumption of non-stationarity and by quantitatively identifying potential drought risks. Cluster analysis of the spatial characteristics indicated a high future drought risk in Jeonju, Gwangju, Yeosun, Mokpo, and Chupyeongryeong (Zones 1-2, 2, and 3-2), but not in Jeju. These results could be utilized efficiently in future drought management policies.
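The SPI underlying the analysis above maps accumulated precipitation onto a standard normal scale, so that, e.g., SPI below -1 indicates drought. The paper fits a parametric distribution to the precipitation record; the sketch below uses a simpler nonparametric variant (Gringorten plotting positions pushed through the normal inverse CDF), which conveys the idea without the distribution-fitting step.

```python
import statistics

def spi_empirical(precip_totals):
    """Nonparametric SPI sketch: rank each accumulated-precipitation
    total, convert the rank to a Gringorten plotting position, and map
    it through the standard normal inverse CDF.  A simplification of
    the parametric SPI used in the paper."""
    n = len(precip_totals)
    order = sorted(range(n), key=lambda i: precip_totals[i])
    nd = statistics.NormalDist()
    spi = [0.0] * n
    for rank, i in enumerate(order, start=1):
        p = (rank - 0.44) / (n + 0.12)   # Gringorten plotting position
        spi[i] = nd.inv_cdf(p)
    return spi

# illustrative 3-month precipitation totals (mm) for eight periods
totals = [55, 80, 30, 120, 60, 95, 40, 70]
spi = spi_empirical(totals)   # driest period gets the most negative SPI
```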

Feeding Behavior of Crustaceans (Cladocera, Copepoda and Ostracoda): Food Selection Measured by Stable Isotope Analysis Using R Package SIAR in Mesocosm Experiment (메소코즘을 이용한 지각류, 요각류 및 패충류의 섭식 성향 분석; 탄소, 질소 안정동위원소비의 믹싱모델 (R package SIAR)을 이용한 정량 분석)

  • Chang, Kwang-Hyeon;Seo, Dong-Il;Go, Soon-Mi;Sakamoto, Masaki;Nam, Gui-Sook;Choi, Jong-Yun;Kim, Min-Seob;Jeong, Kwang-Seok;La, Geung-Hwan;Kim, Hyun-Woo
    • Korean Journal of Ecology and Environment
    • /
    • v.49 no.4
    • /
    • pp.279-288
    • /
    • 2016
  • Stable isotope analysis (SIA) of carbon and nitrogen is a useful tool for understanding the functional roles of target organisms in the biological interactions of a food web. Mixing models based on SIA are frequently used to determine which of the potential food sources are predominantly assimilated by consumers; however, applying such models is often limited and difficult for non-expert users of the software. In the present study, we provide an accessible guide to the R software and the package SIAR, with example data concerning the selective feeding of the crustaceans dominating a freshwater zooplankton community. We collected SIA data from experimental mesocosms set up in the littoral area of eutrophic Chodae Reservoir and used the mixing model to identify the main food sources of the dominant crustacean species among small particulate organic matter (POM, <50 μm), large POM (>50 μm), and attached POM. According to the SIAR model results, Daphnia galeata and Ostracoda mainly consumed small POM, while Simocephalus vetulus consumed both small and large POM simultaneously. Copepods collected from the reservoir showed no preference among the various food items, but in the mesocosm tanks their main food source was attached POM rather than planktonic prey, including rotifers. The results suggest that these taxa play different grazing roles in the food webs of eutrophic reservoirs, and that S. vetulus is the more efficient grazer across a wide range of food items, such as large colonies of phytoplankton and cyanobacteria, during bloom periods.
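The mass balance underlying isotope mixing models is simple in the two-source, one-isotope case: the consumer's isotope ratio is a weighted average of the source ratios. SIAR generalizes this to many sources with full Bayesian uncertainty; the sketch below shows only the underlying linear mixing, with invented δ13C values loosely inspired by the POM categories above.

```python
def two_source_mixing(d_consumer, d_source_a, d_source_b):
    """Fraction of diet from source A under linear two-source mixing
    for a single isotope ratio.  SIAR solves the many-source version
    with uncertainty; this is the deterministic mass balance only."""
    p = (d_consumer - d_source_b) / (d_source_a - d_source_b)
    return min(1.0, max(0.0, p))   # clamp to a valid proportion

# hypothetical d13C values (per mil): small POM -28.0, large POM -22.0,
# and a consumer (e.g. a Daphnia-like grazer) at -26.5
frac_small = two_source_mixing(-26.5, -28.0, -22.0)   # -> 0.75
```

With more sources than isotope tracers the system is underdetermined, which is exactly why SIAR treats the proportions as parameters with a Dirichlet prior and samples their posterior instead of solving directly.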

Study on the Multilevel Effects of Integrated Crisis Intervention Model for the Prevention of Elderly Suicide: Focusing on Suicidal Ideation and Depression (노인자살예방을 위한 통합적 위기개입모델 다층효과 연구: 자살생각·우울을 중심으로)

  • Kim, Eun Joo;Yook, Sung Pil
    • 한국노년학
    • /
    • v.37 no.1
    • /
    • pp.173-200
    • /
    • 2017
  • This study verifies the actual effect on elderly suicide prevention of the integrated crisis intervention service that has been provided across local communities in Gyeonggi Province, focusing on the integrated crisis intervention model developed for the prevention of elderly suicide. The model and its manual were developed by integrating crisis intervention theory, which incorporates a community-wide integrated systems approach, with stress-vulnerability theory. To analyze the effect, geriatric depression and suicidal ideation scales were adopted, and data were collected as follows: from 258 people in the first preliminary test; from 184 people in a second test after the integrated crisis intervention service had been provided for 6 months; and from 124 people in a third, follow-up test conducted 2-3 years later. For the analysis, we used R to conduct test equating and vertical scaling between measurement points, then performed descriptive statistics, univariate analysis of variance, and multilevel modeling with Bayesian estimation. The results show that the integrated crisis intervention model developed for elderly suicide prevention has a statistically significant effect on reducing elderly depression and suicidal ideation in the follow-up measurements after the intervention relative to the preliminary scores, with effects of 0.56 for the reduction of depression and 0.39 for the reduction of suicidal ideation.
However, the follow-up test conducted 2-3 years after the first crisis intervention found that the improved values had returned to their original state, showing that the effect of the intervention is not maintained over the long term. Multilevel analysis was conducted to examine how depression and suicidal ideation are affected by service type (professional counseling, medication, peer counseling), client characteristics (sex, age), counselor characteristics (age, career, major), and the interaction between counselor characteristics and the intervention. Only medication significantly reduced suicidal ideation, and a counselor with a counseling major further reduced suicidal ideation significantly through interaction with professional counseling. Since the characteristics of suicide-prevention experts were found to moderate the intervention effect when applying the integrated crisis intervention model, primary consideration should be given to the counseling competence of these experts.

Development of Quantification Methods for the Myocardial Blood Flow Using Ensemble Independent Component Analysis for Dynamic H₂¹⁵O PET (동적 H₂¹⁵O PET에서 앙상블 독립성분분석법을 이용한 심근 혈류 정량화 방법 개발)

  • Lee, Byeong-Il;Lee, Jae-Sung;Lee, Dong-Soo;Kang, Won-Jun;Lee, Jong-Jin;Kim, Soo-Jin;Choi, Seung-Jin;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.6
    • /
    • pp.486-491
    • /
    • 2004
  • Purpose: Factor analysis and independent component analysis (ICA) have been used for handling dynamic image sequences. The theoretical advantages of a newly suggested ICA method, ensemble ICA, led us to apply it to the analysis of dynamic myocardial H₂¹⁵O PET data. In this study, we quantified patients' myocardial blood flow using the ensemble ICA method. Materials and Methods: Twenty subjects underwent H₂¹⁵O PET scans on an ECAT EXACT 47 scanner and myocardial perfusion SPECT on a Vertex scanner. After transmission scanning, dynamic emission scans were initiated simultaneously with the injection of 555~740 MBq of H₂¹⁵O. Hidden independent components can be extracted from the observed mixed data (PET images) by means of ICA algorithms. Ensemble learning is a variational Bayesian method that provides an analytical approximation to the parameter posterior using a tractable distribution. The variational approximation forms a lower bound on the ensemble likelihood, and this lower bound is maximized by minimizing the Kullback-Leibler divergence between the true posterior and the variational posterior. In this study, the posterior pdf was approximated by a rectified Gaussian distribution to incorporate a non-negativity constraint, which is suitable for dynamic images in nuclear medicine. Blood flow was measured in 9 regions: the apex, four areas in the mid wall, and four areas in the basal wall. Myocardial perfusion SPECT scores and angiography results were compared with the regional blood flow. Results: The major cardiac components were separated successfully by the ensemble ICA method, and blood flow could be estimated in 15 of the 20 patients. Mean myocardial blood flow was 1.2±0.40 ml/min/g at rest and 1.85±1.12 ml/min/g under stress. Blood flow values obtained by one operator on two different occasions were highly correlated (r=0.99).
In the myocardium component image, the average image contrast between the left ventricle and the myocardium was 1:2.7. Perfusion reserve differed significantly between regions with and without stenosis detected by coronary angiography (P<0.01). In the 66 segments with stenosis confirmed by angiography, the segments with a reversible perfusion decrease on perfusion SPECT showed lower perfusion reserve values on H₂¹⁵O PET. Conclusions: Myocardial blood flow could be estimated using an ICA method with ensemble learning. We suggest that ensemble ICA incorporating a non-negativity constraint is a feasible method for handling dynamic image sequences obtained by nuclear medicine techniques.
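The quantity at the heart of the variational (ensemble) learning described above is the Kullback-Leibler divergence between the approximating posterior q and the true posterior p. For univariate Gaussians it has a closed form, shown below as a small worked illustration (the paper's rectified-Gaussian posterior requires a more elaborate expression).

```python
import math

def kl_gaussians(mu_q, var_q, mu_p, var_p):
    """KL(q || p) between two univariate Gaussians, the divergence that
    variational Bayesian (ensemble) learning minimizes between the
    variational posterior q and the true posterior p:
    0.5 * (log(var_p/var_q) + (var_q + (mu_q - mu_p)^2)/var_p - 1)."""
    return 0.5 * (math.log(var_p / var_q)
                  + (var_q + (mu_q - mu_p) ** 2) / var_p
                  - 1.0)
```

Minimizing this divergence over the parameters of q tightens the lower bound on the ensemble likelihood, which is exactly the optimization the abstract refers to.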

A Study on Differences of Contents and Tones of Arguments among Newspapers Using Text Mining Analysis (텍스트 마이닝을 활용한 신문사에 따른 내용 및 논조 차이점 분석)

  • Kam, Miah;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.53-77
    • /
    • 2012
  • This study analyzes the differences in contents and tones of argument among three major Korean newspapers: the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo. It is commonly accepted that newspapers in Korea explicitly deliver their own tone of argument when covering sensitive issues and topics. This can be problematic if readers consume the news without being aware of the tone of argument, because both the contents and the tone can easily influence readers. It is therefore desirable to have a tool that can inform readers of a newspaper's tone of argument. This study presents the results of clustering and classification techniques as part of a text mining analysis. We focus on six main subjects in the newspapers (Culture, Politics, International, Editorial-opinion, Eco-business, and National issues) and attempt to identify differences and similarities among the newspapers. The basic unit of the text mining analysis is a paragraph of a news article. The study uses a keyword-network analysis tool and visualizes the relationships among keywords to make the differences easier to see. Newspaper articles were gathered from KINDS, the Korean integrated news database system, which preserves articles of the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo and is open to the public. About 3,030 articles from 2008 to 2012 were used. The International, National issues, and Politics sections were gathered with specific issue keywords: the International section with 'Nuclear weapon of North Korea', the National issues section with '4-major-river', and the Politics section with 'Tonghap-Jinbo Dang'. All articles from April 2012 to May 2012 in the Eco-business, Culture, and Editorial-opinion sections were also collected.
All of the collected data were split and edited into paragraphs. Stop-words were removed using the Lucene Korean Module. We calculated keyword co-occurrence counts from the paired co-occurrence list of keywords in each paragraph and built a co-occurrence matrix, from which a cosine coefficient matrix was derived as input for PFNet (Pathfinder Network). To analyze the three newspapers and find the significant keywords in each, we examined the 10 highest-frequency keywords and the keyword networks of the 20 highest-frequency keywords, closely examining their relationships in detailed network maps. NodeXL software was used to visualize the PFNets. After drawing the networks, we compared the results with the classification results. Classification was performed first to identify how each newspaper's tone of argument differs from the others. To analyze tones of argument, all paragraphs were divided into two types, positive tone and negative tone, and a supervised learning technique was used to classify them. The Naïve Bayesian classifier provided in the MALLET package was used to classify all paragraphs in the articles, and precision, recall, and F-value were used to evaluate the classification results. Based on the results of this study, three subjects (Culture, Eco-business, and Politics) showed differences in contents and tones of argument among the three newspapers. In addition, for the National issues subject, the tones of argument on the 4-major-rivers project differed from one another. The three newspapers thus appear to have their own tone of argument in those sections, and the keyword networks showed different shapes for the same period and section.
This means that the keywords appearing frequently in the articles differ and the contents are composed of different keywords. The positive-negative classification demonstrated the possibility of classifying newspapers' tones of argument relative to one another. These results indicate that the approach in this study is promising as a new tool for identifying the differing tones of argument of newspapers.
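The tone classification step can be sketched as a minimal multinomial naive Bayes with add-one smoothing. This is a toy stand-in for the MALLET Naïve Bayesian classifier used in the study; the corpus, tokens, and labels below are invented for illustration.

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a minimal multinomial naive Bayes model.
    `docs` is a list of (token_list, label) pairs."""
    counts = {"pos": Counter(), "neg": Counter()}
    n_docs = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        n_docs[label] += 1
    vocab = set(counts["pos"]) | set(counts["neg"])
    return counts, n_docs, vocab

def classify(tokens, model):
    """Pick the label maximizing log prior + summed log likelihoods,
    with add-one (Laplace) smoothing over the shared vocabulary."""
    counts, n_docs, vocab = model
    total = sum(n_docs.values())
    best, best_lp = None, -math.inf
    for label in counts:
        lp = math.log(n_docs[label] / total)
        denom = sum(counts[label].values()) + len(vocab)
        for t in tokens:
            lp += math.log((counts[label][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# invented mini-corpus of tone-labeled paragraphs (as token lists)
docs = [(["good", "support"], "pos"), (["praise", "good"], "pos"),
        (["bad", "criticize"], "neg"), (["bad", "fail"], "neg")]
model = train_nb(docs)
```

Precision, recall, and F-value would then be computed by comparing such predictions against held-out labeled paragraphs, as in the study.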

An Active Learning-based Method for Composing Training Document Set in Bayesian Text Classification Systems (베이지언 문서분류시스템을 위한 능동적 학습 기반의 학습문서집합 구성방법)

  • 김제욱;김한준;이상구
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.12
    • /
    • pp.966-978
    • /
    • 2002
  • There are two important problems in improving text classification systems based on a machine learning approach. The first, called the "selection problem", is how to select a minimum number of informative documents from a given document collection. The second, called the "composition problem", is how to reorganize the selected training documents so that they fit the adopted learning method. The former is addressed by "active learning" algorithms, and the latter by "boosting" algorithms. This paper proposes a new learning method, called AdaBUS, which proactively solves both problems in the context of Naive Bayes classification systems. The proposed method constructs a more accurate classification hypothesis by increasing the variance among the "weak" hypotheses that determine the final classification hypothesis. The resulting perturbation effect makes the boosting algorithm work properly. Through empirical experiments using the Reuters-21578 document collection, we show that the AdaBUS algorithm improves a Naive Bayes-based classification system significantly more than other conventional learning methods.
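A common active-learning answer to the "selection problem" above is uncertainty sampling: label the documents whose predicted class distribution has the highest entropy. AdaBUS itself combines document selection with boosting-style reweighting; the sketch below shows only the generic selection criterion, with invented class probabilities.

```python
import math

def select_informative(probabilities, k=2):
    """Uncertainty-sampling sketch of an active-learning selection step:
    return the indices of the k unlabeled documents whose predicted
    class distribution has the highest entropy (i.e. the classifier is
    least sure about them).  Not the AdaBUS algorithm itself."""
    def entropy(p):
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    ranked = sorted(range(len(probabilities)),
                    key=lambda i: entropy(probabilities[i]),
                    reverse=True)
    return ranked[:k]

# invented per-document class probabilities from a Naive Bayes model
probs = [[0.95, 0.05], [0.55, 0.45], [0.50, 0.50], [0.80, 0.20]]
chosen = select_informative(probs)   # -> [2, 1], the least certain docs
```

The selected documents would then be labeled and folded back into the training set, and the cycle repeated.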

Human Gait-Phase Classification to Control a Lower Extremity Exoskeleton Robot (하지근력증강로봇 제어를 위한 착용자의 보행단계구분)

  • Kim, Hee-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39B no.7
    • /
    • pp.479-490
    • /
    • 2014
  • A lower extremity exoskeleton is a robotic device that attaches to the lower limbs of the human body to augment or assist the walking ability of the wearer. To improve the wearer's walking ability, the robot senses the wearer's walking locomotion and classifies it into a gait-phase state, after which it drives the appropriate robot motions for each state using its actuators. This paper presents a method by which the robot senses the wearer's locomotion, along with a novel classification algorithm that classifies the sensed data into a gait-phase state. The robot determines its control mode from this gait-phase information. If erroneous information is delivered, the robot will fail to improve the walking ability or will cause discomfort to the wearer. It is therefore necessary for the algorithm to deliver correct gait-phase information consistently. However, the device for sensing the wearer's locomotion is sensitive enough to detect even small movements, so with simple logic such as threshold-based classification it is difficult to deliver correct information continually. To overcome this and provide correct information in a timely manner, a probabilistic gait-phase classification algorithm is proposed. Experimental results demonstrate that the proposed algorithm offers excellent accuracy.
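One standard way to make gait-phase classification robust to noisy, sensitive sensors is a discrete Bayes filter: predict the next phase from a phase-transition model, then reweight by the sensor likelihood. This is a generic sketch of that idea, not the paper's exact algorithm; the three phases, transition matrix, and likelihoods below are all invented.

```python
def update_phase_belief(belief, transition, likelihood):
    """One step of a discrete Bayes filter over gait phases:
    predict with the phase-transition model, multiply by the sensor
    likelihood for each phase, and renormalize."""
    n = len(belief)
    predicted = [sum(belief[j] * transition[j][i] for j in range(n))
                 for i in range(n)]
    posterior = [predicted[i] * likelihood[i] for i in range(n)]
    z = sum(posterior)
    return [p / z for p in posterior]

# hypothetical phases: 0 = stance, 1 = swing, 2 = double support
belief = [0.8, 0.1, 0.1]               # currently confident in stance
transition = [[0.7, 0.2, 0.1],         # invented phase-transition model
              [0.1, 0.7, 0.2],
              [0.2, 0.1, 0.7]]
likelihood = [0.1, 0.8, 0.1]           # sensors now favor "swing"
belief = update_phase_belief(belief, transition, likelihood)
```

Unlike a hard threshold, the belief shifts gradually with the evidence, so a single noisy reading cannot flip the control mode on its own.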

An estimation method for non-response model using Monte-Carlo expectation-maximization algorithm (Monte-Carlo expectation-maximization 방법을 이용한 무응답 모형 추정방법)

  • Choi, Boseung;You, Hyeon Sang;Yoon, Yong Hwa
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.3
    • /
    • pp.587-598
    • /
    • 2016
  • In predicting election outcomes with a variety of methods ahead of an election, non-response is one of the major issues. A variety of non-response imputation methods may be employed to address it, but the forecasting results tend to vary with the method chosen. In this study, in order to improve electoral forecasts, we investigated a model-based method of non-response imputation that applies the Monte Carlo Expectation Maximization (MCEM) algorithm introduced by Wei and Tanner (1990). The MCEM algorithm, using maximum likelihood estimates (MLEs), is applied to solve the boundary solution problem under a non-ignorable non-response mechanism. We performed simulation studies to compare the estimation performance of the MCEM, maximum likelihood estimation, and Bayesian estimation methods. The simulation results showed that the MCEM method is a reasonable candidate for non-response model estimation. We also applied the MCEM method to the exit poll data of the 2012 Korean presidential election and investigated prediction performance using the modified within-precinct error (MWPE) criterion (Bautista et al., 2007).
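The MCEM pattern above replaces an intractable E-step with Monte Carlo draws of the missing data. As a toy illustration only (far simpler than the paper's non-ignorable non-response model for exit polls), the sketch below estimates the mean of a normal when some values are censored, i.e. known only to exceed a cutoff; all data values are invented.

```python
import random
import statistics

def mcem_censored_mean(observed, n_censored, cutoff,
                       sigma=1.0, iters=30, m=2000, seed=7):
    """Toy MCEM for a normal mean with known sigma when n_censored
    values are known only to exceed `cutoff`.
    E-step: impute censored values by rejection sampling from the
    truncated normal given the current mean estimate.
    M-step: update the mean from observed plus imputed values."""
    rng = random.Random(seed)
    mu = statistics.fmean(observed)            # initial guess
    for _ in range(iters):
        draws = []
        while len(draws) < m:                  # Monte Carlo E-step
            x = rng.gauss(mu, sigma)
            if x > cutoff:
                draws.append(x)
        e_censored = statistics.fmean(draws)
        # M-step: combine observed data with the expected censored value
        total = sum(observed) + n_censored * e_censored
        mu = total / (len(observed) + n_censored)
    return mu

observed = [-1.2, -0.3, 0.1, 0.4, -0.8, 0.2]   # invented uncensored data
mu_hat = mcem_censored_mean(observed, n_censored=4, cutoff=0.5)
```

Because the censored units are known to lie above the cutoff, the MCEM estimate pulls the mean above the naive average of the observed values, which is exactly the correction the non-response model aims for.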