• Title/Summary/Keyword: decision algorithm


Calibration of Car-Following Models Using a Dual Genetic Algorithm with Central Composite Design (중심합성계획법 기반 이중유전자알고리즘을 활용한 차량추종모형 정산방법론 개발)

  • Bae, Bumjoon;Lim, Hyeonsup;So, Jaehyun (Jason)
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.18 no.2
    • /
    • pp.29-43
    • /
    • 2019
  • The calibration of microscopic traffic simulation models has received much attention in the simulation field. Although no standard has been established for it, the genetic algorithm (GA) has been widely employed in recent literature because of its efficiency in finding solutions to such optimization problems. However, its performance still falls short of supporting fast decision making in simulation analyses. This paper proposes a new calibration procedure using a dual GA and central composite design (CCD) in order to improve efficiency. The calibration proceeds in three sequential steps: (1) experimental design using CCD to estimate a quadratic response surface model (RSM), (2) a first GA run on the CCD-based RSM to find a near-optimal initial population for the next step, and (3) a second GA run to find the final solution. The proposed method was applied to calibrating the Gipps car-following model with respect to maximizing the likelihood of the spacing distribution between a lead and a following vehicle. To evaluate its performance, a conventional calibration approach using a single GA was compared against it on both simulated and real vehicle trajectory data. The proposed approach was found to speed up the optimization by starting the search from an initial population closer to the optimum than that of the conventional approach. This result implies that the proposed approach is beneficial for large-scale traffic network simulation analyses, and the method can be extended to other GA-based optimization tasks in transportation studies.
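As a rough illustration of the dual-GA idea (fit a cheap quadratic RSM on design points, then seed the real optimization with the RSM's best individuals), here is a minimal Python sketch; the objective `simulate_spacing_likelihood`, the random stand-in for the CCD design matrix, and all GA settings are placeholder assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_spacing_likelihood(params):
    # Placeholder for an expensive Gipps-model simulation scored by spacing likelihood.
    return -np.sum((params - 0.3) ** 2)

def quadratic_features(X):
    # Columns [1, x_i, x_i^2, x_i*x_j] for a second-order response surface model.
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)] + [X[:, i] ** 2 for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
    return np.column_stack(cols)

def fit_rsm(X, y):
    beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)
    return lambda p: quadratic_features(np.atleast_2d(p)) @ beta

def ga(objective, pop, gens=50, mut=0.05):
    # Very small GA: truncation selection plus Gaussian mutation on [0, 1]-scaled parameters.
    for _ in range(gens):
        fitness = np.array([objective(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[-len(pop) // 2:]]
        kids = parents[rng.integers(len(parents), size=len(pop))]
        pop = np.clip(kids + rng.normal(0.0, mut, kids.shape), 0.0, 1.0)
    return pop

d = 3                                   # number of (scaled) car-following parameters
design = rng.random((20, d))            # stand-in for the actual CCD design points
rsm = fit_rsm(design, np.array([simulate_spacing_likelihood(p) for p in design]))

pop0 = ga(lambda p: rsm(p)[0], rng.random((30, d)))   # 1st GA on the cheap RSM surrogate
final_pop = ga(simulate_spacing_likelihood, pop0)     # 2nd GA on the real objective
best = max(final_pop, key=simulate_spacing_likelihood)
```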

A Performance Comparison of CCA and RMMA Algorithm for Blind Adaptive Equalization (블라인드 적응 등화를 위한 CCA와 RMMA 알고리즘의 성능 비교)

  • Lim, Seung-Gag
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.1
    • /
    • pp.51-56
    • /
    • 2019
  • This paper compares the performance of the CCA and RMMA blind adaptive equalization algorithms, which are used to reduce the intersymbol interference that occurs in the channel when transmitting a 16-QAM signal, a spectrally efficient nonconstant-modulus signal. CCA can improve misadjustment and initial convergence by compacting the 16-point signal constellation using the sliced symbols of the decision-device output, i.e., statistical symbols, but at the price of increased computational cost. RMMA can achieve fast convergence, low misadjustment, and good channel-tracking capability without increasing the computational cost, by transforming the received signal into four constant-modulus signals according to the region of the constellation in which it lies and then obtaining the error signal. In this paper, the two algorithms were implemented over the same channel, and their blind adaptive equalization performance was compared using the equalizer output signal constellation, residual ISI, MSE, and SER. The simulation results show that RMMA outperforms CCA in terms of output signal constellation, residual ISI, and MSE, but its convergence is about 1.3 times slower. For the SER performance, which indicates robustness to noise, CCA is better at low SNR, whereas RMMA is better above 6 dB SNR.
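For readers unfamiliar with blind tap adaptation, the sketch below shows a plain constant-modulus (CMA) update, the family of stochastic-gradient updates that CCA and RMMA refine; it is not the paper's CCA or RMMA, and the dispersion constant assumes unit-power 16-QAM.

```python
import numpy as np

def cma_equalizer(x, n_taps=11, mu=1e-3, R2=1.32):
    # x: received complex baseband samples; R2 = E|s|^4 / E|s|^2 for unit-power 16-QAM.
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                      # center-spike initialization
    y_out = np.empty(len(x) - n_taps, dtype=complex)
    for n in range(len(x) - n_taps):
        xn = x[n:n + n_taps][::-1]            # tap-delay-line regressor
        y = w @ xn                            # equalizer output
        e = y * (np.abs(y) ** 2 - R2)         # CMA error; RMMA would instead pick one of
                                              # four region-dependent moduli before this step
        w -= mu * e * np.conj(xn)             # stochastic-gradient tap update
        y_out[n] = y
    return y_out, w
```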

Secure Training Support Vector Machine with Partial Sensitive Part

  • Park, Saerom
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.4
    • /
    • pp.1-9
    • /
    • 2021
  • In this paper, we propose a training algorithm for a support vector machine (SVM) with a sensitive variable. Although machine learning models enable automatic decision making in real-world applications, regulations prohibit the use of sensitive information in order to protect privacy. In particular, privacy protection of legally protected attributes such as race, gender, and disability is compulsory. We present an efficient least-squares SVM (LSSVM) training algorithm that uses fully homomorphic encryption (FHE) to protect a partially sensitive attribute. Our framework posits that the data owner has both the non-sensitive attributes and a sensitive attribute, while the machine learning service provider (MLSP) receives the non-sensitive attributes and an encrypted sensitive attribute. As a result, the data owner can obtain the encrypted model parameters without exposing the sensitive information to the MLSP. In the inference phase, both the non-sensitive attributes and the sensitive attribute are encrypted, and all computations are conducted in the encrypted domain. Through experiments on real data, we show that the proposed method implements a privacy-preserving sensitive LSSVM with FHE whose performance is comparable to that of the original LSSVM algorithm. In addition, we demonstrate that the efficient sensitive LSSVM with FHE significantly reduces the computational cost with only a small degradation in performance.
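A plain, unencrypted LSSVM fit is just a regularized kernel linear system; the sketch below shows that baseline (solved as kernel ridge regression on ±1 labels). The paper's actual contribution, carrying out the sensitive-attribute part of this arithmetic under FHE, is not reproduced here.

```python
import numpy as np

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    # RBF kernel matrix.
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T) / (2.0 * sigma ** 2))
    n = len(y)
    # LS-SVM linear system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]                      # alpha, b

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    d2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=2)
    return np.sign(np.exp(-d2 / (2.0 * sigma ** 2)) @ alpha + b)
```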

A Study on Classification of Crown Classes and Selection of Thinned Trees for Major Conifers Using Machine Learning Techniques (머신러닝 기법을 활용한 주요 침엽수종의 수관급 분류와 간벌목 선정 연구)

  • Lee, Yong-Kyu;Lee, Jung-Soo;Park, Jin-Woo
    • Journal of Korean Society of Forest Science
    • /
    • v.111 no.2
    • /
    • pp.302-310
    • /
    • 2022
  • Here we aimed to classify crown classes for the major coniferous species (Pinus densiflora, Pinus koraiensis, and Larix kaempferi) from tree measurement information using machine learning algorithms, in order to establish an efficient forest management plan. We used national forest monitoring information amassed over nine years as the tree measurement data, and random forest (RF), XGBoost (XGB), and LightGBM (LGBM) as the machine learning algorithms. We compared and evaluated the algorithms using accuracy, precision, recall, and F1 score. The RF algorithm had the highest performance evaluation scores for all tree species, with the best results for Pinus densiflora: an accuracy of about 65%, a precision of about 72%, a recall of about 60%, and an F1 score of about 66%. Among the crown classes, the classification accuracy for dominant trees was above about 80%, but that for co-dominant, intermediate, and overtopped trees was low. We consider that the results of this study can be used as reference data for decision-making in the selection of thinning trees for forest management.
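A minimal version of the evaluation loop described above might look like the following scikit-learn sketch; the file name, feature columns, and crown-class labels are illustrative assumptions, not the paper's actual data schema.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

df = pd.read_csv("nfi_pinus_densiflora.csv")      # hypothetical monitoring extract
X = df[["dbh", "height", "crown_width", "age"]]   # illustrative tree measurements
y = df["crown_class"]                             # dominant / co-dominant / intermediate / overtopped

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)
clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)
pred = clf.predict(X_te)

print("accuracy :", accuracy_score(y_te, pred))
print("precision:", precision_score(y_te, pred, average="macro"))
print("recall   :", recall_score(y_te, pred, average="macro"))
print("F1 score :", f1_score(y_te, pred, average="macro"))
```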

Establishing meteorological drought severity considering the level of emergency water supply (비상급수의 규모를 고려한 기상학적 가뭄 강도 수립)

  • Lee, Seungmin;Wang, Wonjoon;Kim, Donghyun;Han, Heechan;Kim, Soojun;Kim, Hung Soo
    • Journal of Korea Water Resources Association
    • /
    • v.56 no.10
    • /
    • pp.619-629
    • /
    • 2023
  • Recent intensification of climate change has led to an increase in damage caused by droughts. Currently, in Korea, the Standardized Precipitation Index (SPI) is used as the criterion to classify drought intensity. Based on the accumulated precipitation over the past six months (SPI-6), meteorological drought intensities are classified into four categories: concern, caution, alert, and severe. However, classifying drought intensity solely on the basis of precipitation has limitations. To overcome the limitations of the SPI-based meteorological drought warning criteria, this study collected emergency water supply damage data from the National Drought Information Portal (NDIP) to classify drought intensity. Factors of the SPI, such as precipitation, and factors used to calculate evapotranspiration, such as temperature and humidity, were indexed using min-max normalization. Coefficients for each factor were determined using a genetic algorithm (GA). The drought intensity based on emergency water supply was used as the dependent variable, and the GA-determined coefficients of each meteorological factor were used to derive a new Drought Severity Classification Index (DSCI). After deriving the DSCI, cumulative distribution functions were used to define the boundaries between intensity stages. It is anticipated that the proposed DSCI will allow more accurate drought intensity classification than the traditional SPI, supporting decision-making by disaster management personnel.
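The index construction reduces to min-max normalization, a weighted sum, and percentile-based class boundaries; the sketch below illustrates that arithmetic with made-up series and placeholder weights standing in for the GA-fitted coefficients.

```python
import numpy as np

def min_max(x):
    return (x - x.min()) / (x.max() - x.min())

rng = np.random.default_rng(1)
precip, temp, humidity = rng.random((3, 120))      # hypothetical monthly series

factors = np.column_stack([
    min_max(-precip),      # less rain  -> higher severity contribution
    min_max(temp),         # hotter     -> higher severity contribution
    min_max(-humidity),    # drier air  -> higher severity contribution
])
w = np.array([0.5, 0.3, 0.2])          # placeholder for the GA-estimated coefficients
dsci = factors @ w                     # Drought Severity Classification Index values

bounds = np.percentile(dsci, [70, 85, 95])   # class boundaries from the empirical CDF
severity = np.digitize(dsci, bounds)         # 0..3 ~ concern, caution, alert, severe
```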

A DB Pruning Method in a Large Corpus-Based TTS with Multiple Candidate Speech Segments (대용량 복수후보 TTS 방식에서 합성용 DB의 감량 방법)

  • Lee, Jung-Chul;Kang, Tae-Ho
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.6
    • /
    • pp.572-577
    • /
    • 2009
  • Large corpus-based concatenative text-to-speech (TTS) systems can generate natural synthetic speech without additional signal processing. To prune the redundant speech segments in a large speech segment DB, a decision-tree-based triphone clustering algorithm widely used in the speech recognition area can be utilized. However, the conventional methods have problems in representing the acoustic transitional characteristics of phones and in applying context questions with hierarchical priority. In this paper, we propose a new clustering algorithm to downsize the speech DB. First, three 13th-order MFCC vectors from the first, medial, and final frames of a phone are combined into a 39-dimensional vector that represents the transitional characteristics of the phone. Then, three hierarchically grouped question sets are used to construct the triphone trees. For the performance test, we used the DTW algorithm to calculate the acoustic similarity between the target triphone and the triphone returned by the tree search. Experimental results show that the proposed method can reduce the size of the speech DB by 23% and select better phones with higher acoustic similarity. Therefore, the proposed method can be applied to build a small-sized TTS system.
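Two of the building blocks above are easy to sketch in isolation: stacking the first, medial, and final 13-dimensional MFCC frames of a phone into one 39-dimensional transitional descriptor, and a plain DTW distance for the acoustic-similarity check. This assumes the MFCC matrices are already computed and is not the paper's full clustering pipeline.

```python
import numpy as np

def phone_vector(mfcc):
    # mfcc: (n_frames, 13) matrix for one phone.
    idx = [0, len(mfcc) // 2, -1]                    # first, medial, final frame
    return np.concatenate([mfcc[i] for i in idx])    # 39-dim transitional descriptor

def dtw_distance(A, B):
    # A: (n, d) and B: (m, d) MFCC sequences; classic O(n*m) dynamic programming.
    n, m = len(A), len(B)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(A[i - 1] - B[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```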

Intelligent Optimal Route Planning Based on Context Awareness (상황인식 기반 지능형 최적 경로계획)

  • Lee, Hyun-Jung;Chang, Yong-Sik
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.117-137
    • /
    • 2009
  • Recently, intelligent traffic information systems have enabled people to forecast traffic conditions before hitting the road. These convenient systems operate on the basis of data reflecting current road and traffic conditions as well as distance-based data between locations. Thanks to the rapid development of ubiquitous computing, tremendous context data have become readily available, making vehicle route planning easier than ever. Previous research on the optimization of vehicle route planning merely focused on finding the optimal distance between locations. Contexts reflecting road and traffic conditions were not seriously treated as a way to resolve optimal routing problems based on distance-based route planning, because this kind of information does not have a significant impact on traffic routing until a complex traffic situation arises. Further, it was not easy to take the traffic contexts fully into account when resolving optimal routing problems, because predicting dynamic traffic situations was regarded as a daunting task. However, with the rapid increase in traffic complexity, the importance of developing contexts reflecting data related to moving costs has emerged. Hence, this research proposes a framework designed to resolve an optimal route planning problem by taking full account of additional moving costs such as road traffic cost and weather cost, among others. Recent technological developments, particularly in the ubiquitous computing environment, have facilitated the collection of such data. This framework is based on the contexts of time, traffic, and environment, and addresses the following issues. First, we clarify and classify the diverse contexts that affect a vehicle's velocity and estimate the optimal moving cost using dynamic programming that accounts for the context cost according to the variation of contexts. Second, the velocity reduction rate is applied to find the optimal route (shortest path) using context data on the current traffic condition. The velocity reduction rate refers to the degree to which a vehicle's attainable velocity is reduced by the relevant road and traffic contexts, and is derived from statistical or experimental data. Knowledge generated in this paper can be referenced by organizations that deal with road and traffic data. Third, in experimentation, we evaluate the effectiveness of the proposed context-based optimal route (shortest path) between locations by comparing it to the previously used distance-based shortest path. A vehicle's optimal route might change owing to its varying velocity caused by unexpected but potentially dynamic situations depending on the road condition. This study includes such context variables as 'road congestion', 'work', 'accident', and 'weather', which can alter the traffic condition and affect a moving vehicle's velocity on the road. Since these context variables, except for 'weather', are related to road conditions, the relevant data were provided by the Korea Expressway Corporation; the 'weather'-related data were obtained from the Korea Meteorological Administration. The aware contexts are classified as contexts causing a reduction of vehicles' velocity, which determines the velocity reduction rate. To find the optimal route (shortest path), we introduced the velocity reduction rate into the context for calculating a vehicle's velocity, reflecting composite contexts when one event synchronizes with another.
We then proposed a context-based optimal route (shortest path) algorithm based on dynamic programming. The algorithm is composed of three steps. In the first, initialization step, the departure and destination locations are given and the path step is initialized to 0. In the second step, the moving costs between locations on the path, taking composite contexts into account, are estimated using the per-context velocity reduction rates as the path step increases. In the third step, the optimal route (shortest path) is retrieved through back-tracking. In the proposed research model, we designed a framework to account for context awareness, moving cost estimation (taking both composite and single contexts into account), and an optimal route (shortest path) algorithm based on dynamic programming. Through illustrative experimentation using the Wilcoxon signed rank test, we showed that context-based route planning is much more effective than distance-based route planning. In addition, we found that the optimal solution (shortest path) obtained through distance-based route planning might not be optimal in real situations, because road conditions are very dynamic and unpredictable and affect most vehicles' moving costs. Although more information is needed for a more accurate estimation of moving vehicles' costs, this study remains applicable to reducing moving costs through effective route planning. For instance, it could be applied to delivery drivers' decision making, enhancing their decision satisfaction when they face unpredictable dynamic situations on the road. Overall, we conclude that taking the contexts into account as part of the costs is a meaningful and sensible approach to resolving the optimal route problem.
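To make the cost model concrete, the sketch below inflates each edge's travel time by a context-dependent velocity reduction rate and then runs an ordinary shortest-path search; the toy graph, the reduction rates, and the use of Dijkstra in place of the paper's stage-wise dynamic programming are all illustrative assumptions.

```python
import heapq

# Each edge: (destination, distance_km, free_flow_speed_kmh).
graph = {
    "A": [("B", 10, 100), ("C", 12, 80)],
    "B": [("D", 15, 100)],
    "C": [("D", 9, 80)],
    "D": [],
}
# Per-edge velocity reduction rates from active contexts (congestion, rain, ...).
reduction = {("A", "C"): 0.4, ("B", "D"): 0.2}

def edge_time(u, v, dist, speed):
    r = reduction.get((u, v), 0.0)          # fraction of free-flow speed lost to contexts
    return dist / (speed * (1.0 - r))       # context-adjusted travel time in hours

def shortest_path(src, dst):
    best, pq = {src: 0.0}, [(0.0, src, [src])]
    while pq:
        t, u, path = heapq.heappop(pq)
        if u == dst:
            return t, path
        for v, dist, speed in graph[u]:
            nt = t + edge_time(u, v, dist, speed)
            if nt < best.get(v, float("inf")):
                best[v] = nt
                heapq.heappush(pq, (nt, v, path + [v]))
    return float("inf"), []

print(shortest_path("A", "D"))   # A-B-D wins once the A-C congestion penalty is applied
```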

A Distributed Web-DSS Approach for Coordinating Interdepartmental Decisions - Emphasis on Production and Marketing Decision (부서간 의사결정 조정을 위한 분산 웹 의사결정지원시스템에 관한 연구)

  • 이건창;조형래;김진성
    • Proceedings of the Korea Inteligent Information System Society Conference
    • /
    • 1999.10a
    • /
    • pp.291-300
    • /
    • 1999
  • To adapt to a changing business environment driven by the rapid, Internet-based development of information and communication technology, companies are not only shifting all of their management systems onto the Internet, but are also transforming their organizations into globally distributed enterprises. This rapid change in the business environment has created the need for a new kind of interdepartmental decision-coordination process within firms. While many previous studies have addressed the support of mutual decision making in conventional firms, research on decision support systems that can support such coordination in new, network-type organizations such as global enterprises has largely been limited to simple group decision support systems or distributed decision support systems. This study therefore proposes a mechanism that can efficiently support interdepartmental decision coordination arising from Internet-based, and in particular Web-based, global and distributed management, and validates its performance by implementing a prototype system based on that mechanism. In particular, we developed the coordination mechanism for the production and marketing departments, the most representative case of interdepartmental decision support within a firm, and conducted experiments with it. As a result, we propose a Web-based distributed decision support system (Web-DSS) built on an improved PROMISE (PROduction and Marketing Interface Support Environment), a coordination mechanism that can efficiently support mutual decision making between the production and marketing departments of a global enterprise.


Cost-Effectiveness Analysis of Different Management Strategies for Detection CIN2+ of Women with Atypical Squamous Cells of Undetermined Significance (ASC-US) Pap Smear in Thailand

  • Tantitamit, Tanitra;Termrungruanglert, Wichai;Oranratanaphan, Shina;Niruthisard, Somchai;Tanbirojn, Patuou;Havanond, Piyalamporn
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.16 no.16
    • /
    • pp.6857-6862
    • /
    • 2015
  • Background: To identify the optimal cost-effective strategy for the management of women with ASC-US who attended King Chulalongkorn Memorial Hospital (KCMH). Design: An economic analysis based on a retrospective study. Subjects: Women who were referred to the gynecology department because of a screening result of ASC-US at King Chulalongkorn Memorial Hospital, a general and tertiary referral center in Bangkok, Thailand, from Jan 2008 to Dec 2012. Materials and Methods: A decision tree-based model was constructed to evaluate the cost-effectiveness of three follow-up strategies for the management of ASC-US results: repeat cytology, triage with HPV testing, and immediate colposcopy. Each woman with ASC-US chose a strategy after receiving from a doctor full details of this algorithm and the advantages and disadvantages of each strategy. The model compared the incremental cost per case of high-grade cervical intraepithelial neoplasia (CIN2+) detected, as measured by the incremental cost-effectiveness ratio (ICER). Results: From the provider's perspective, immediate colposcopy is the least costly strategy and also the most effective option among the three follow-up strategies. Compared with HPV triage, repeat cytology triage is less costly, whereas HPV triage is more effective, with an ICER of 56,048 Baht per additional case of CIN2+ detected. From the patient's perspective, repeat cytology triage is the least costly and least effective option; colposcopy has an ICER of 2,500 Baht per additional case of CIN2+ detected when compared to repeat cytology. From the sensitivity analysis, immediate colposcopy triage is no longer cost-effective when its cost exceeds 2,250 Baht or the cost of cytology is less than 50 Baht (1 USD = 31.58 THB). Conclusions: In women with ASC-US cytology, colposcopy is more cost-effective than repeat cytology or triage with HPV testing from both the provider's and the patient's perspectives.
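The ICER quoted above is simply the incremental cost divided by the incremental effect; a one-line illustration with placeholder numbers (not the study's cost data) is given below.

```python
def icer(cost_a, effect_a, cost_b, effect_b):
    # Incremental cost per additional CIN2+ case detected by strategy A over strategy B.
    return (cost_a - cost_b) / (effect_a - effect_b)

# Hypothetical example: strategy A costs 12,000 Baht and detects 0.85 cases per woman,
# strategy B costs 8,000 Baht and detects 0.70 cases per woman.
print(icer(12000, 0.85, 8000, 0.70))   # ~26,667 Baht per additional case detected
```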

Classification of Fall in Sick Times of Liver Cirrhosis using Magnetic Resonance Image (자기공명영상을 이용한 간경변 단계별 분류에 관한 연구)

  • Park, Byung-Rae;Jeon, Gye-Rok
    • Journal of radiological science and technology
    • /
    • v.26 no.1
    • /
    • pp.71-82
    • /
    • 2003
  • In this paper, I propose a classifier of liver cirrhosis stages using T1-weighted MRI (magnetic resonance imaging) and a hierarchical neural network. The data sets for classifying each stage (normal, type 1, type 2, and type 3) were obtained at Pusan National University Hospital from June 2001 to December 2001, and the number of cases was 46. We extracted the liver region and nodule regions from the T1-weighted MR liver images and built an objective classifier of liver cirrhosis stages for these images. The liver cirrhosis classifier was implemented using a hierarchical neural network that applies gray-level analysis and texture feature descriptors to distinguish a normal liver from the three types of liver cirrhosis. The proposed neural network classifier was trained with the error back-propagation algorithm. The classification results show recognition rates of 100% for normal, 82.3% for type 1, 86.7% for type 2, and 83.7% for type 3. These recognition rates are very high when the quantified results are compared with the doctors' decisions. If enough data are provided and other parameters are considered, we expect that the neural network classifier could perform comparably to human experts and be useful as a clinical decision support tool for liver cirrhosis patients.
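As a rough sketch of the classification stage only (feature extraction from the MR images is not reproduced, and the file names and network size are assumptions), a small back-propagation network over precomputed gray-level/texture features could be cross-validated like this:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X = np.load("liver_texture_features.npy")   # hypothetical (46, n_features) feature matrix
y = np.load("liver_stage_labels.npy")        # 0 = normal, 1..3 = cirrhosis types

mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
print(cross_val_score(mlp, X, y, cv=5).mean())   # small sample, so report cross-validated accuracy
```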
