• Title/Summary/Keyword: Probability Decision Model

Search Results: 239

A Study on the Optimal Allocation for Intelligence Assets Using MGIS and Genetic Algorithm (MGIS 및 유전자 알고리즘을 활용한 정보자산 최적배치에 관한 연구)

  • Kim, Younghwa;Kim, Suhwan
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.41 no.4
    • /
    • pp.396-407
    • /
    • 2015
  • The literature on intelligence asset allocation has focused mainly on single or partial assets such as TOD and GSR, which limits its applicability to actual environments in which various assets are operated. In addition, field units are generally vulnerable because they depend on qualitative analysis. Therefore, a methodology is needed to ensure the validity and reliability of intelligence asset allocation. In this study, detection probabilities were generated using digital geospatial data in MGIS (Military Geographic Information System) and the simulation logic of the R.O.K. Army's BCTP (Battle Commander Training Programs). An optimal allocation mathematical model applying the concept of simultaneous integrated management was then developed based on the partial set covering model. The proposed GA (Genetic Algorithm) also provided superior results compared to the mathematical model. Consequently, this study will effectively support the commander's decision making by offering the best alternatives for optimal allocation within a reasonable time.
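
The GA-for-allocation idea above can be illustrated with a minimal sketch of a genetic algorithm for a partial set covering problem. Everything here is hypothetical: the detection matrix is random rather than derived from MGIS detection probabilities, and the operators (tournament selection, one-point crossover, bit-flip mutation, budget repair) are generic textbook choices, not the paper's.

```python
import random

random.seed(42)

# Toy instance: detect[i][j] is True if an asset at site i covers target j.
# In the paper such coverages come from MGIS-based detection probabilities;
# this matrix is purely illustrative.
SITES, TARGETS, BUDGET = 8, 10, 3
detect = [[random.random() < 0.4 for _ in range(TARGETS)] for _ in range(SITES)]

def coverage(chrom):
    """Number of targets covered by the selected sites (partial set covering)."""
    return sum(any(detect[i][j] for i in range(SITES) if chrom[i])
               for j in range(TARGETS))

def repair(chrom):
    """Enforce the asset budget by dropping random surplus sites."""
    ones = [i for i, g in enumerate(chrom) if g]
    for i in random.sample(ones, max(0, len(ones) - BUDGET)):
        chrom[i] = 0
    return chrom

def ga(pop_size=30, gens=60, mut=0.05):
    pop = [repair([random.randint(0, 1) for _ in range(SITES)])
           for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = max(random.sample(pop, 2), key=coverage)  # binary tournament
            p2 = max(random.sample(pop, 2), key=coverage)
            cut = random.randrange(1, SITES)               # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ (random.random() < mut) for g in child]  # bit-flip
            nxt.append(repair(child))
        pop = nxt
    return max(pop, key=coverage)

best = ga()
print("sites used:", sum(best), "targets covered:", coverage(best))
```

The repair step keeps every chromosome feasible, so fitness can simply be the number of covered targets rather than a penalized objective.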

A Unit Commitment Study considering Generation System Reliability (발전계통의 신뢰성을 고려한 발전기 병렬함수 추정에 관한 연구)

  • 김준현;유인근
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.33 no.10
    • /
    • pp.387-395
    • /
    • 1984
  • This paper proposes an improved unit commitment algorithm for the rational operation of electric power systems. A security function is introduced so that generation system reliability as well as economic properties can be considered in the algorithm. As a state model for assessing state probabilities, a 3-state model in which the probability of units being committed can be considered is proposed and applied, making the algorithm more practical and reasonable. A decision bounding scheme applicable to all kinds of systems is described, and numerical results obtained for KEPCO and model systems are also presented to demonstrate the validity of the proposed technique.


Two-Stage Logistic Regression for Cancer Classification and Prediction from Copy-Number Changes in cDNA Microarray-Based Comparative Genomic Hybridization

  • Kim, Mi-Jung
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.5
    • /
    • pp.847-859
    • /
    • 2011
  • cDNA microarray-based comparative genomic hybridization (CGH) data includes low-intensity spots, so a statistical strategy is needed to detect subtle differences between cancer classes. In this study, genes displaying a high frequency of alteration in one of the classes were selected among pre-selected genes that show relatively large between-gene variation compared to total variation. Utilizing the copy-number changes of the selected genes, this study suggests a statistical approach that predicts patients' classes with increased performance by pre-classifying patients with similar genetic alteration scores. A two-stage logistic regression model (TLRM) was suggested to pre-classify homogeneous patients and predict patients' classes for cancer prediction; a decision tree (DT) was combined with logistic regression on the set of informative genes. TLRM was constructed on cDNA microarray-based CGH data from the Cancer Metastasis Research Center (CMRC) at Yonsei University; it predicted the patients' clinical diagnoses with perfect matches (except for one patient) among the high-risk and low-risk classified patients, where prediction performance is critical due to the high sensitivity and specificity required for clinical treatment. Accuracy validated by leave-one-out cross-validation (LOOCV) was 83.3%, while CART and DT, run as comparisons, showed worse performance than TLRM.

Region Growing Based Variable Window Size Decision Algorithm for Image Denoising (영상 잡음 제거를 위한 영역 확장 기반 가변 윈도우 크기 결정 알고리즘)

  • 엄일규;김유신
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.5
    • /
    • pp.111-116
    • /
    • 2004
  • Noise reduction using Bayesian estimation in the wavelet domain requires information about the prior model for the wavelet coefficients, the probability distribution of the noise, and the variance of the wavelet coefficients. In general denoising methods, the signal variance is estimated from a proper prior model for the wavelet coefficients. In this paper, we propose a variable window size decision algorithm that estimates the signal variance according to the image region. Simulation results show that the proposed method yields better PSNRs than state-of-the-art denoising methods.

Development of Prediction Method for Highway Pavement Condition (포장상태 예측방법 개선에 관한 연구)

  • Park, Sang-Wook;Suh, Young-Chan;Chung, Chul-Gi
    • International Journal of Highway Engineering
    • /
    • v.10 no.3
    • /
    • pp.199-208
    • /
    • 2008
  • Predicting pavement performance provides an agency with proper information for its decision-making process, especially for evaluating pavement performance and prioritizing the work plan. To date, a number of approaches have been proposed to predict the future deterioration of pavements, but they have limitations in properly predicting pavement service life. In this paper, a pavement performance model and a pavement condition prediction model are developed to improve the pavement condition prediction method. The prediction model, obtained through regression analysis of real pavement condition data, is based on the probability distribution of pavement condition, with levels set to 5%, 15%, 25%, and 50% by pavement condition and traffic volume. The performance of the prediction model is evaluated by analyzing the average and standard deviation of HPCI and the percentage of comparable sections with HPCI lower than 3.0. This paper suggests a more rational method for determining future pavement conditions, combining probabilistic duration and deterministic modeling with respect to the impact of traffic volume, age, and pavement type.


Predicting Crime Risky Area Using Machine Learning (머신러닝기반 범죄발생 위험지역 예측)

  • HEO, Sun-Young;KIM, Ju-Young;MOON, Tae-Heon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.4
    • /
    • pp.64-80
    • /
    • 2018
  • In Korea, citizens can access only general information about crime, so it is difficult for them to know how much they are exposed to it. If the police could predict crime-risky areas, crime could be handled efficiently even with insufficient police and enforcement resources. However, there is no such prediction system in Korea, and related research is scarce. Against this background, the final goal of this study is to develop an automated crime prediction system. As a first step, we built a big data set consisting of local real crime information and urban physical and non-physical data, developed a crime prediction model through machine learning, and then assumed several possible scenarios, calculated the probability of crime, and visualized the results on a map to aid people's understanding. Among the factors affecting crime occurrence revealed in previous and case studies, the following data were processed into big data form for machine learning: real crime information, weather information (temperature, rainfall, wind speed, humidity, sunshine, insolation, snowfall, cloud cover), and local information (average building coverage, average floor area ratio, average building height, number of buildings, average appraised land value, average area of residential buildings, average number of ground floors). Among supervised machine learning algorithms, the decision tree, random forest, and SVM models, known to be powerful and accurate in various fields, were utilized to construct the crime prediction model. The decision tree model, having the lowest RMSE, was selected as the optimal prediction model. Based on this model, several scenarios were set for theft and violence cases, which are the most frequent in the case city J, and the probability of crime was estimated on a 250 × 250 m grid. As a result, we found that high crime-risky areas occur in three patterns in case city J. The probability of crime was divided into three classes and visualized on the 250 × 250 m grid map. Finally, we developed a crime prediction model using a machine learning algorithm and visualized the crime-risky areas on a map that can recalculate the model and re-visualize the results as time and urban conditions change.

Conjoint-like Analysis Using Elimination-by-Aspects Model (EBA 모형을 활용한 유사 컨조인트 분석)

  • Park, Sang-Jun
    • Korean Management Science Review
    • /
    • v.25 no.1
    • /
    • pp.139-147
    • /
    • 2008
  • Conjoint analysis is marketers' favorite methodology for finding out how buyers make trade-offs among competing products and suppliers, and thousands of applications have been carried out over the past three decades. Conjoint analysis has been popular as a management decision tool largely due to the availability of a choice simulator, which enables managers to ask 'what if' questions of the output of a conjoint study. Traditionally the First Choice Model (FCM) has been widely used as a choice simulator; it is simple to run and easy to understand. In the FCM, the probability of an alternative is zero until its value exceeds all others in the set; once it exceeds that threshold, however, it receives 100%. The LOGIT simulation model, also called 'Share of Preference', has commonly been used as an alternative to the FCM. In this model, part-worth utilities are not required to be positive, nor do they need to be computed under a LOGIT model; the simulator can be used with utilities based on regression, monotone regression, linear programming, and so on. However, it is not free from the Independence of Irrelevant Alternatives (IIA) problem. This paper proposes the EBA (Elimination-By-Aspects) model as a useful conjoint-like method. One advantage of the EBA model is that it models choice in terms of the actual psychological processes that might be taking place. According to EBA, when choosing among objects, a person selects one of the aspects that are effective for the objects and eliminates all objects which do not have this aspect; the process continues until only one alternative remains.

A Quantitative Trust Model with consideration of Subjective Preference (주관적 선호도를 고려한 정량적 신뢰모델)

  • Kim, Hak-Joon;Lee, Sun-A;Lee, Kyung-Mi;Lee, Keon-Myung
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.16 no.1
    • /
    • pp.61-65
    • /
    • 2006
  • This paper is concerned with a quantitative computational trust model which takes into account multiple evaluation criteria and uses recommendations from others to obtain trust values for entities. In the proposed trust model, the trust for an entity is defined as the expectation that the entity will yield satisfactory outcomes in the given situation. Once an interaction has been made with an entity, outcomes are assumed to be observed with respect to the evaluation criteria. When trust information is needed, the satisfaction degree, i.e., the probability of generating satisfactory outcomes for each evaluation criterion, is computed based on the outcome probability distributions and the entity's preference degrees on the outcomes. The satisfaction degrees for the evaluation criteria are then aggregated into a trust value, into which the reputation information is also incorporated. This paper presents in detail how the trust model works.
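
As a rough illustration of the computation described above: the sketch below derives a per-criterion satisfaction degree as the preference-weighted expectation over an outcome distribution, aggregates the criteria with weights, and blends in a reputation score. All names, numbers, and the linear blending scheme are assumptions for illustration, not the paper's calibrated model.

```python
# Hypothetical outcome distributions and subjective preferences for one entity.
# Each criterion carries an observed outcome distribution, the evaluator's
# preference degree (in [0, 1]) for each outcome, and a criterion weight.
criteria = {
    "quality":  {"dist": {"good": 0.7, "ok": 0.2, "bad": 0.1},
                 "pref": {"good": 1.0, "ok": 0.6, "bad": 0.0},
                 "weight": 0.5},
    "delivery": {"dist": {"on_time": 0.8, "late": 0.2},
                 "pref": {"on_time": 1.0, "late": 0.2},
                 "weight": 0.5},
}

def satisfaction(c):
    """Expected preference: how likely the criterion yields satisfying outcomes."""
    return sum(p * c["pref"][o] for o, p in c["dist"].items())

def trust(criteria, reputation, rep_weight=0.3):
    """Weighted aggregate of per-criterion satisfaction, blended with reputation."""
    own = sum(c["weight"] * satisfaction(c) for c in criteria.values())
    return (1 - rep_weight) * own + rep_weight * reputation

t = trust(criteria, reputation=0.9)
print(round(t, 3))  # 0.5*0.82 + 0.5*0.84 = 0.83, then 0.7*0.83 + 0.3*0.9 = 0.851
```
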

Durability Analysis and Development of Probability-Based Carbonation Prediction Model in Concrete Structure (콘크리트 구조물의 확률론적 탄산화 예측 모델 개발 및 내구성 해석)

  • Jung, Hyunjun
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.30 no.4A
    • /
    • pp.343-352
    • /
    • 2010
  • Recently, many studies have been carried out to estimate the service life and long-term performance of carbonated concrete structures in a more controlled way, and probability-based durability analysis and design have been introduced for new concrete structures. This paper provides a carbonation prediction model based on Fick's first law of diffusion, using statistical data from carbonated concrete structures, and the probabilistic analysis of the durability performance is carried out using Bayes' theorem. The influence of relevant design parameters such as the CO₂ diffusion coefficient, atmospheric CO₂ concentration, absorbed quantity of CO₂, and degree of hydration was investigated. Using monitoring data, this probabilistic model predicted the carbonation depth and the remaining service life of concrete structures in a variety of environments. The results show that a realistic carbonation prediction model can be applied to estimate the corrosion initiation time, control durability, and support decision making for the suitable repair and maintenance of carbonated concrete structures.
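
A minimal sketch of the probabilistic flavor of such a model: carbonation depth follows the classic square-root-of-time law obtained from Fick's first law, x(t) = k·√t, and the rate coefficient k is treated as a random variable so that the probability of the carbonation front reaching the reinforcement can be estimated by Monte Carlo. The parameter values are illustrative assumptions, not the paper's statistics, and the paper's Bayesian updating step is omitted.

```python
import math
import random

random.seed(0)

# Carbonation depth under the √t law, x(t) = k·√t, with the carbonation
# rate coefficient k modeled as a normal random variable (values invented).
K_MEAN, K_STD = 3.5, 0.8   # carbonation rate, mm/√year
COVER = 30.0               # concrete cover depth, mm
SERVICE_LIFE = 50          # years

def carbonation_depth(k, t):
    return k * math.sqrt(t)

# Monte Carlo estimate of P(carbonation front reaches the reinforcement).
N = 20000
fails = sum(
    carbonation_depth(random.gauss(K_MEAN, K_STD), SERVICE_LIFE) > COVER
    for _ in range(N)
)
p = fails / N
print(f"P(depth > cover at {SERVICE_LIFE} yr) ≈ {p:.3f}")
```

With these invented numbers the mean depth at 50 years is about 24.8 mm against a 30 mm cover, so the exceedance probability lands well below one half.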

A Study on Commercial Power of Traditional Market

  • Baik, Key-Young;Youn, Myoung-Kil
    • East Asian Journal of Business Economics (EAJBE)
    • /
    • v.4 no.2
    • /
    • pp.1-11
    • /
    • 2016
  • This study investigated the commercial power theory of traditional markets through a literature review. Consumers' store selection models comprise theories based on normative hypotheses, interaction theory, utility function estimation models, and cognitive-behavioral models. The detailed models are as follows. Normative-hypothesis-based theory is divided into Reilly's law of retail gravitation and Converse's revised retail gravitation theory. Interaction theory is composed of Huff's probabilistic gravitation model, the MCI model, and the Multinomial Logit Model (MNL). Retail location theory includes four models: central place theory, single-store location theory, the multi-store location-allocation model, and the retail growth potential model. For single-store location theory, theoretical and empirical techniques have been developed for deciding the optimal single store position, such as the checklist (the most simple and systematic method), analogy, and microanalysis techniques. The aforementioned models are theoretical and mathematical commercial power measurements and/or models. The study has limitations because the variables included in the formulas capture only part of actual commercial power; therefore, further study of commercial power areas and variables should continue.
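
Huff's probabilistic model mentioned above has a simple closed form: the probability that a consumer at a given origin patronizes store j is its attractiveness S_j (e.g., floor area) divided by travel time T_j raised to a decay exponent λ, normalized over all competing stores. A sketch with made-up store names and figures:

```python
# Huff model: P(j) = (S_j / T_j^λ) / Σ_k (S_k / T_k^λ).
# Store sizes, travel times, and λ below are illustrative assumptions.
LAMBDA = 2.0                                          # travel-time decay exponent
stores = {"market_A": 5000.0, "market_B": 12000.0}    # attractiveness S_j (m²)
travel = {"market_A": 10.0, "market_B": 20.0}         # travel time T_j (min)

def huff_probabilities(stores, travel, lam=LAMBDA):
    util = {j: s / travel[j] ** lam for j, s in stores.items()}
    total = sum(util.values())
    return {j: u / total for j, u in util.items()}

probs = huff_probabilities(stores, travel)
print({j: round(p, 3) for j, p in probs.items()})
# market_A: 5000/10² = 50, market_B: 12000/20² = 30 → 0.625 vs 0.375
```

Note how the distance decay dominates here: the smaller but closer market captures the larger share, which is exactly the trade-off the gravitation-style models formalize.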