• Title/Summary/Keyword: prediction error methods

526 results found

Evaluation of a multi-stage convolutional neural network-based fully automated landmark identification system using cone-beam computed tomography-synthesized posteroanterior cephalometric images

  • Kim, Min-Jung;Liu, Yi;Oh, Song Hee;Ahn, Hyo-Won;Kim, Seong-Hun;Nelson, Gerald
    • The korean journal of orthodontics
    • /
    • v.51 no.2
    • /
    • pp.77-85
    • /
    • 2021
  • Objective: To evaluate the accuracy of a multi-stage convolutional neural network (CNN) model-based automated identification system for posteroanterior (PA) cephalometric landmarks. Methods: The multi-stage CNN model was implemented on a personal computer. A total of 430 PA cephalograms synthesized from cone-beam computed tomography scans (CBCT-PA) were selected as samples. Twenty-three landmarks used for Tweemac analysis were manually identified on all CBCT-PA images by a single examiner. Intra-examiner reproducibility was confirmed by repeating the identification, with a two-week interval before training, on 85 randomly selected images, which were subsequently set aside as test data. For the initial learning stage of the multi-stage CNN model, data from 345 of the 430 CBCT-PA images were used, after which the model was tested on the 85 previously selected images. The first manual identification on these 85 images was set as the ground truth. The mean radial error (MRE) and successful detection rate (SDR) were calculated to evaluate the errors in manual identification and artificial intelligence (AI) prediction. Results: The AI showed an average MRE of 2.23 ± 2.02 mm with an SDR of 60.88% for errors of 2 mm or lower. In the repeated task, however, the AI predicted landmarks at exactly the same positions, whereas the MRE for repeated manual identification was 1.31 ± 0.94 mm. Conclusions: Automated identification of CBCT-synthesized PA cephalometric landmarks did not reach the clinically favorable error range of less than 2 mm. However, AI landmark identification on PA cephalograms showed better consistency than manual identification.
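
A minimal sketch (not from the paper) of how the two reported metrics, mean radial error (MRE) and successful detection rate (SDR), can be computed from predicted and ground-truth landmark coordinates; the array shapes and the 2 mm threshold are illustrative assumptions.

```python
import numpy as np

def mre_and_sdr(pred_mm, truth_mm, threshold_mm=2.0):
    """pred_mm, truth_mm: (n_landmarks, 2) arrays of x/y positions in millimeters."""
    radial_errors = np.linalg.norm(pred_mm - truth_mm, axis=1)  # Euclidean distance per landmark
    mre = radial_errors.mean()                                  # mean radial error
    sd = radial_errors.std(ddof=1)                              # spread of the radial errors
    sdr = (radial_errors <= threshold_mm).mean() * 100.0        # % of landmarks within threshold
    return mre, sd, sdr

# Hypothetical coordinates for three landmarks (mm)
pred = np.array([[10.0, 12.0], [30.5, 40.1], [55.0, 60.0]])
truth = np.array([[10.8, 12.5], [30.0, 41.0], [52.5, 61.5]])
print(mre_and_sdr(pred, truth))
```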

Evaluating the Effects of Dose Rate on Dynamic Intensity-Modulated Radiation Therapy Quality Assurance

  • Kim, Kwon Hee;Back, Tae Seong;Chung, Eun Ji;Suh, Tae Suk;Sung, Wonmo
    • Progress in Medical Physics
    • /
    • v.32 no.4
    • /
    • pp.116-121
    • /
    • 2021
  • Purpose: To investigate the effects of dose rate on intensity-modulated radiation therapy (IMRT) quality assurance (QA). Methods: We performed gamma tests using portal dose image prediction and log files of a multileaf collimator. Thirty treatment plans were randomly selected for IMRT QA, and three verification plans for each treatment plan were generated with different dose rates (200, 400, and 600 monitor units [MU]/min). These verification plans were delivered to an electronic portal imager attached to a Varian medical linear accelerator, which recorded the delivered dose for comparison with the planned dose. Root-mean-square (RMS) error values of the log files were also compared. Results: With an increase in dose rate, the 2%/2-mm gamma passing rate decreased from 90.9% to 85.5%, indicating that a higher dose rate was associated with lower radiation delivery accuracy. Accordingly, the average RMS error value increased from 0.0170 to 0.0381 cm as the dose rate increased. In contrast, the radiation delivery time decreased from 3.83 to 1.49 minutes as the dose rate increased from 200 to 600 MU/min. Conclusions: Our results indicated that radiation delivery accuracy was lower at higher dose rates; however, the accuracy was still clinically acceptable at dose rates of up to 600 MU/min.
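
As a rough illustration of the two QA metrics compared in this study, the sketch below (not the authors' code) implements a brute-force global gamma test with a 2%/2-mm criterion and the RMS error of MLC leaf-position differences; the grid spacing, dose arrays, and leaf positions are made-up assumptions.

```python
import numpy as np

def gamma_passing_rate(ref, meas, spacing_mm=1.0, dose_pct=0.02, dta_mm=2.0):
    """Brute-force global gamma test between two 2D dose arrays on the same grid."""
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    dose_tol = dose_pct * ref.max()                 # global dose normalization
    gammas = np.empty_like(ref, dtype=float)
    for iy in range(ny):
        for ix in range(nx):
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
            dose2 = (meas - ref[iy, ix]) ** 2
            gammas[iy, ix] = np.sqrt(dist2 / dta_mm ** 2 + dose2 / dose_tol ** 2).min()
    return 100.0 * (gammas <= 1.0).mean()           # % of points with gamma <= 1

def leaf_rms_error(planned_cm, actual_cm):
    """RMS leaf-position error from MLC log data (cm)."""
    diff = np.asarray(actual_cm) - np.asarray(planned_cm)
    return np.sqrt(np.mean(diff ** 2))

ref = np.outer(np.hanning(20), np.hanning(20))      # toy planned dose
meas = ref * 1.01                                   # toy measured dose (1% off)
print(gamma_passing_rate(ref, meas), leaf_rms_error([1.00, 2.00], [1.02, 1.97]))
```

The exhaustive search over the whole grid is slow for clinical image sizes but keeps the definition of the gamma index explicit.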

Domestic Automotive Exterior Lamp-LEDs Demand and Forecasting using BASS Diffusion Model (BASS 확산 모형을 이용한 국내 자동차 외장 램프 LED 수요예측 분석)

  • Lee, Jae-Heun
    • Journal of Korean Society for Quality Management
    • /
    • v.50 no.3
    • /
    • pp.349-371
    • /
    • 2022
  • Purpose: Despite the rapid growth of the domestic automotive LED industry, predictive analysis methods for demand forecasting and market outlook have been insufficient. Accordingly, product characteristics are analyzed through the life-cycle trend of LEDs for automotive exterior lamps and the relative strengths of the Bass model parameters p and q, and future demand is predicted. Methods: We used sales data from a leading company in the domestic automotive LED market. To account for the autocorrelated error term in these data, the parameters m, p, and q were estimated with a modified ordinary least squares (OLS) procedure and with nonlinear least squares (NLS), and the better method was selected by comparing prediction error measures such as RMSE. Future annual and cumulative demands were predicted through the growth curve obtained from the Bass-NLS model. In addition, several nonlinear growth curve models were fitted to the data to compare their potential market demand estimates with that of the Bass-NLS model, and an optimal model was selected. Results: The Bass-NLS parameter estimates were m = 1338.13, p = 0.0026, and q = 0.3003. If the current trend continues, the domestic automotive LED market is predicted to peak in 2021, with a maximum annual demand of $102.23M; the potential market demand was $1,338.13M. In the nonlinear growth curve analysis, the Gompertz model was selected as the optimal model, with a potential market size of $2,864.018M. Conclusion: The Bass-NLS method is expected to be applied to automotive LED sales data to characterize the relative strength q/p of products and to predict current demand and future cumulative demand.
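
For readers unfamiliar with the Bass-NLS step, the sketch below fits the Bass diffusion model to a cumulative sales series by nonlinear least squares with SciPy; the data are synthetic and the starting values are illustrative assumptions, not the study's actual series or estimates.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adoptions N(t) = m * F(t) for the Bass diffusion model."""
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

t = np.arange(1, 13)                                     # 12 annual observations
cum_sales = bass_cumulative(t, 1300.0, 0.003, 0.30)      # synthetic "true" series
cum_sales = cum_sales + np.random.default_rng(0).normal(0, 5, t.size)

(m_hat, p_hat, q_hat), _ = curve_fit(
    bass_cumulative, t, cum_sales, p0=[1000.0, 0.01, 0.1], maxfev=10000
)
fitted = bass_cumulative(t, m_hat, p_hat, q_hat)
rmse = np.sqrt(np.mean((fitted - cum_sales) ** 2))
annual_forecast = np.diff(bass_cumulative(np.arange(1, 21), m_hat, p_hat, q_hat))
print(m_hat, p_hat, q_hat, rmse)
```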

Ensemble Design of Machine Learning Techniques: Experimental Verification by Prediction of Drifter Trajectory (앙상블을 이용한 기계학습 기법의 설계: 뜰개 이동경로 예측을 통한 실험적 검증)

  • Lee, Chan-Jae;Kim, Yong-Hyuk
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.8 no.3
    • /
    • pp.57-67
    • /
    • 2018
  • An ensemble is an approach for obtaining better performance by combining multiple algorithms in machine learning. In this paper, we introduce boosting and bagging, which have been widely used in ensemble techniques, and design an ensemble using support vector regression, a radial basis function network, a Gaussian process, and a multilayer perceptron. In addition, the experiments were extended with a recurrent neural network and the MOHID numerical model. The drifter data used for experimental verification consist of 683 observations in seven regions. The performance of the ensemble is verified by comparison with each of the four base algorithms, using the mean absolute error as the evaluation measure. The presented methods are ensemble models based on bagging, boosting, and machine learning. The error was calculated by assigning equal weights and differing weights to each unit model in the ensemble. The ensemble model using machine learning showed a 61.7% improvement compared to the average of the four machine learning techniques.
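
A hedged sketch of the weighting idea described above: several regressors are trained independently and their predictions are averaged with equal weights and with weights inversely proportional to each model's error. The synthetic data, the subset of base models, and the inverse-error weighting rule are illustrative assumptions rather than the paper's exact design.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=683, n_features=6, noise=5.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

models = [SVR(), GaussianProcessRegressor(), MLPRegressor(max_iter=2000, random_state=1)]
preds, maes = [], []
for model in models:
    model.fit(X_tr, y_tr)
    p = model.predict(X_te)
    preds.append(p)
    maes.append(mean_absolute_error(y_te, p))       # per-model error

preds = np.array(preds)                             # shape: (n_models, n_samples)
equal_w = np.full(len(models), 1.0 / len(models))
inv_err_w = (1.0 / np.array(maes)) / np.sum(1.0 / np.array(maes))

print("equal-weight MAE:", mean_absolute_error(y_te, equal_w @ preds))
print("error-weighted MAE:", mean_absolute_error(y_te, inv_err_w @ preds))
```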

On the Errors of the Phased Beam Tracing Method for the Room Acoustic Analysis (실내음향 해석을 위한 위상 빔 추적법의 사용시 오차에 관하여)

  • Jeong, Cheol-Ho;Ih, Jeong-Guon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.27 no.1
    • /
    • pp.1-11
    • /
    • 2008
  • To overcome the mid-frequency limitation of geometrical acoustic techniques, the phased geometrical method was suggested, which introduces phase information into the sound propagation from the source. By virtue of this phase information, the phased tracing method has a definite benefit in taking the interference phenomena at mid frequencies into account. Still, this analysis technique has suffered from difficulties in dealing with low-frequency phenomena, the so-called wave nature of sound. At low frequencies, diffraction at corners, edges, and obstacles can cause errors in simulating the transfer function and the impulse response. Because a real-valued absorption coefficient is used, the simulated results show a discrepancy with measured data; thus, the incorrect phase of the wall reflection characteristic should be corrected. In this work, the uniform theory of diffraction was integrated into the phased beam tracing method (PBTM), and the result was compared with the ordinary PBTM. The effects of phase information were investigated by changing the phase of the reflection coefficient. By incorporating such error compensation methods, acoustic prediction by the PBTM can be extended to the low-frequency range with improved accuracy for room acoustic fields.
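
A minimal sketch of the phase-aware summation that distinguishes the phased method from energy-based tracing: each contribution carries a reflection factor derived from the wall absorption and the propagation phase exp(-jkr)/r, so interference appears in the complex sum. The geometry, absorption value, and assumed reflection phase are illustrative, and diffraction via the uniform theory of diffraction is not included here.

```python
import numpy as np

def phased_pressure(freq_hz, path_lengths_m, n_reflections, alpha, c=343.0):
    """Sum image-source-like contributions with phase at one receiver position."""
    k = 2.0 * np.pi * freq_hz / c                 # wavenumber
    R = -np.sqrt(1.0 - alpha)                     # reflection factor; phase pi assumed here
    p = 0.0 + 0.0j
    for r, n in zip(path_lengths_m, n_reflections):
        p += (R ** n) * np.exp(-1j * k * r) / r   # complex sum captures interference
    return p

freqs = np.linspace(50, 500, 200)                 # Hz
paths = [5.0, 7.2, 9.8, 12.4]                     # direct + a few reflected paths (m)
orders = [0, 1, 2, 3]                             # number of reflections per path
tf = np.array([phased_pressure(f, paths, orders, alpha=0.2) for f in freqs])
print(20 * np.log10(np.abs(tf[:5])))              # magnitude of the simulated transfer function
```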

Mixed dentition analysis using a multivariate approach (다변량 기법을 이용한 혼합치열기 분석법)

  • Seo, Seung-Hyun;An, Hong-Seok;Lee, Shin-Jae;Lim, Won Hee;Kim, Bong-Rae
    • The korean journal of orthodontics
    • /
    • v.39 no.2
    • /
    • pp.112-119
    • /
    • 2009
  • Objective: To develop a mixed dentition analysis method that takes the normal variation of tooth sizes into consideration. Methods: According to the tooth sizes of the maxillary central incisor, maxillary first molar, mandibular central incisor, mandibular lateral incisor, and mandibular first molar, 307 normal occlusion subjects were clustered into smaller and larger tooth-size groups. Multiple regression analyses were then performed to predict the sizes of the canine and premolars for the two groups and both genders separately. For a cross-validation dataset, 504 malocclusion patients were assigned to the two groups, and the multiple regression equations were applied. Results: The residual standard deviations of the predicted space for the canine and first and second premolars were at most 0.71 mm for the normal occlusion group and 0.82 mm for the malocclusion group. For malocclusion patients, the prediction errors did not differ significantly by malocclusion type or tooth-size group. The frequencies of prediction errors greater than 1 mm and 2 mm were 17.3% and 1.8%, respectively. The overall prediction accuracy was dramatically improved compared with that of previous studies. Conclusions: The computer-aided calculation method used in this study appeared to be more efficient.
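
A hedged sketch of the multivariate workflow described above: subjects are clustered into two tooth-size groups and a multiple regression model is fitted per group to predict the canine-premolar space, reporting the residual standard deviation. The synthetic tooth widths and the generating equation are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# columns: U1, U6, L1, L2, L6 mesiodistal widths (mm), synthetic
X = rng.normal([8.8, 10.3, 5.4, 5.9, 11.2], 0.5, size=(307, 5))
y = 21.5 + 0.8 * X[:, 2] + 0.6 * X[:, 3] + rng.normal(0, 0.6, 307)  # canine + premolar space

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for g in (0, 1):
    reg = LinearRegression().fit(X[groups == g], y[groups == g])
    resid = y[groups == g] - reg.predict(X[groups == g])
    rsd = np.sqrt(np.sum(resid ** 2) / (resid.size - X.shape[1] - 1))  # residual standard deviation
    print(f"group {g}: residual standard deviation = {rsd:.2f} mm")
```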

Ovarian Cancer Microarray Data Classification System Using Marker Genes Based on Normalization (표준화 기반 표지 유전자를 이용한 난소암 마이크로어레이 데이타 분류 시스템)

  • Park, Su-Young;Jung, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.9
    • /
    • pp.2032-2037
    • /
    • 2011
  • Marker genes are genes whose expression level characterizes a specific experimental condition; genes whose expression differs significantly between groups are highly informative about the studied phenomenon. In this paper, the system first detects marker genes by ranking genes according to test statistics after normalizing the data with the most widely used of the normalization methods proposed to date, and it then compares and analyzes the performance of each normalization method with a multilayer perceptron neural network. Applying the multilayer perceptron to the microarray data set containing the eight marker genes selected by the ANOVA method after Lowess normalization yielded the highest classification accuracy, 99.32%, and the lowest prediction error estimate.
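
A minimal sketch of the classification pipeline described above: genes are ranked with an ANOVA F-test after normalization and the top marker genes feed a multilayer perceptron. The synthetic expression matrix and the number of selected genes are assumptions, and Lowess normalization is only stubbed here as standard scaling.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))          # 60 samples x 500 genes (synthetic expression matrix)
y = rng.integers(0, 2, 60)              # cancer vs. normal labels (synthetic)

clf = make_pipeline(
    StandardScaler(),                   # placeholder for a Lowess-type normalization step
    SelectKBest(f_classif, k=8),        # eight ANOVA-ranked marker genes
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```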

Improvement on Similarity Calculation in Collaborative Filtering Recommendation using Demographic Information (인구 통계 정보를 이용한 협업 여과 추천의 유사도 개선 기법)

  • 이용준;이세훈;왕창종
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.9 no.5
    • /
    • pp.521-529
    • /
    • 2003
  • In this paper, we present an improved method that uses demographic information to overcome similarity miscalculation caused by the sparsity problem in collaborative filtering recommendation systems. The similarity between a pair of users is determined only by the ratings given to co-rated items, so items that have not been rated by both users are ignored. To solve this problem, we add virtual neighbors' ratings based on the demographic information of neighbors to improve prediction accuracy; this is an extension of traditional collaborative filtering methods using the Pearson correlation coefficient. We used the GroupLens movie rating data in our experiments and compared the proposed method with collaborative filtering methods in terms of mean absolute error (MAE) and receiver operating characteristic (ROC) values. The results show that the proposed method outperforms collaborative filtering with the Pearson correlation coefficient by about 9% in MAE and 13% in ROC sensitivity.
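
The sketch below illustrates the similarity side of this idea in a simplified form: Pearson correlation is computed only over co-rated items and blended with a demographic similarity so that sparsely co-rated pairs still receive a usable weight. This blend is a stand-in for the paper's virtual-neighbor rating scheme; the demographic encoding and the blend factor are assumptions.

```python
import numpy as np

def pearson_co_rated(u, v):
    """u, v: rating vectors with np.nan marking unrated items."""
    mask = ~np.isnan(u) & ~np.isnan(v)
    if mask.sum() < 2:
        return 0.0                                   # too few co-rated items
    a, b = u[mask], v[mask]
    if a.std() == 0 or b.std() == 0:
        return 0.0                                   # undefined correlation
    return float(np.corrcoef(a, b)[0, 1])

def demographic_similarity(d1, d2):
    """d1, d2: encoded demographic vectors (e.g., age band, gender, occupation)."""
    return float(np.mean(d1 == d2))

def blended_similarity(u, v, d1, d2, weight=0.7):
    return weight * pearson_co_rated(u, v) + (1 - weight) * demographic_similarity(d1, d2)

u = np.array([4, np.nan, 3, 5, np.nan])
v = np.array([5, 2, 3, np.nan, 1])
print(blended_similarity(u, v, np.array([1, 0, 1]), np.array([1, 1, 1])))
```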

The development of four efficient optimal neural network methods in forecasting shallow foundation's bearing capacity

  • Hossein Moayedi;Binh Nguyen Le
    • Computers and Concrete
    • /
    • v.34 no.2
    • /
    • pp.151-168
    • /
    • 2024
  • This research aimed to appraise the effectiveness of four optimization approaches - cuckoo optimization algorithm (COA), multi-verse optimization (MVO), particle swarm optimization (PSO), and teaching-learning-based optimization (TLBO) - that were enhanced with an artificial neural network (ANN) in predicting the bearing capacity of shallow foundations located on cohesionless soils. The study utilized a database of 97 laboratory experiments, with 68 experiments for training data sets and 29 for testing data sets. The ANN algorithms were optimized by adjusting various variables, such as population size and number of neurons in each hidden layer, through trial-and-error techniques. Input parameters used for analysis included width, depth, geometry, unit weight, and angle of shearing resistance. After performing sensitivity analysis, it was determined that the optimized architecture for the ANN structure was 5×5×1. The study found that all four models (COA-MLP, MVO-MLP, PSO-MLP, and TLBO-MLP) demonstrated exceptional prediction performance. It is worth noting that the MVO-MLP model exhibited superior accuracy in generating network outputs for predicting measured values compared to the other models. The training data sets showed RMSE and R2 values of (0.07184 and 0.9819), (0.04536 and 0.9928), (0.09194 and 0.9702), and (0.04714 and 0.9923) for the COA-MLP, MVO-MLP, PSO-MLP, and TLBO-MLP methods, respectively. Similarly, the testing data sets produced RMSE and R2 values of (0.08126 and 0.07218), (0.07218 and 0.9814), (0.10827 and 0.95764), and (0.09886 and 0.96481) for the COA-MLP, MVO-MLP, PSO-MLP, and TLBO-MLP methods, respectively.
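
As a rough illustration of one of these hybrids, the sketch below uses a plain global-best PSO to tune the weights of a small 5-5-1 MLP on a synthetic regression task and reports RMSE and R2; the data, swarm settings, and network size are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(97, 5))                       # 97 samples, 5 input parameters
y = np.sin(X).sum(axis=1) + 0.05 * rng.normal(size=97)     # synthetic target

def mlp_predict(w, X):
    """5-5-1 network: w packs W1 (5x5), b1 (5), W2 (5), b2 (1)."""
    W1 = w[:25].reshape(5, 5); b1 = w[25:30]
    W2 = w[30:35];             b2 = w[35]
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2

def rmse(w):
    return np.sqrt(np.mean((mlp_predict(w, X) - y) ** 2))

# Plain global-best PSO over the 36 network weights
n_particles, dim, iters = 30, 36, 300
pos = rng.uniform(-1, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_err = np.array([rmse(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([rmse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

pred = mlp_predict(gbest, X)
r2_score = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSE={rmse(gbest):.4f}, R2={r2_score:.4f}")
```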

An integrated approach for optimum design of HPC mix proportion using genetic algorithm and artificial neural networks

  • Parichatprecha, Rattapoohm;Nimityongskul, Pichai
    • Computers and Concrete
    • /
    • v.6 no.3
    • /
    • pp.253-268
    • /
    • 2009
  • This study aims to develop a cost-based high-performance concrete (HPC) mix optimization system based on an integrated approach using artificial neural networks (ANNs) and genetic algorithms (GA). ANNs are used to predict the three main properties of HPC, namely workability, strength, and durability, which are used to evaluate fitness and constraint violations in the GA process. Multilayer back-propagation neural networks are trained using the results obtained from experiments and previous research, and the correlation between concrete components and concrete properties is established. The GA is employed to arrive at an optimal mix proportion of HPC by minimizing its total cost. A system prototype, called the High Performance Concrete Mix-Design System using Genetic Algorithm and Neural Networks (HPCGANN), was developed in MATLAB. The architecture of the proposed system consists of three main parts: 1) the user interface; 2) the ANN prediction model software; and 3) the GA engine software. The proposed system is validated by comparing its results with trial batches. The results indicate that the proposed system can be used to design an HPC mix that meets the required performance. Furthermore, the proposed system takes into account the influence of fluctuating unit prices of materials in order to achieve the lowest cost of concrete, which cannot easily be obtained by traditional methods or trial-and-error techniques.
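
A hedged sketch of the GA stage described above: a simple genetic algorithm searches mix proportions to minimize material cost, with a penalty applied when a surrogate strength prediction (standing in for the trained ANN) falls below a target. The surrogate formula, unit prices, variable bounds, and GA settings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# decision variables: cement, water, silica fume, fine agg., coarse agg., superplasticizer (kg/m3)
bounds = np.array([[300, 550], [120, 200], [0, 60], [600, 900], [900, 1200], [2, 15]], float)
unit_cost = np.array([0.10, 0.001, 0.40, 0.02, 0.015, 2.5])   # $/kg, illustrative

def surrogate_strength(x):
    """Stand-in for a trained ANN strength model (MPa), illustrative only."""
    w_c = x[1] / x[0]
    return 140.0 * np.exp(-2.2 * w_c) + 0.3 * x[2] + 0.8 * x[5]

def fitness(x, target_mpa=60.0):
    cost = unit_cost @ x
    penalty = 1000.0 * max(0.0, target_mpa - surrogate_strength(x))  # constraint violation
    return cost + penalty                                            # minimize

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, 6))
for gen in range(200):
    f = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(f)[:30]]                          # truncation selection
    kids = []
    while len(kids) < 30:
        a, b = parents[rng.integers(0, 30, 2)]
        child = np.where(rng.random(6) < 0.5, a, b)             # uniform crossover
        child += rng.normal(0, 0.02, 6) * (bounds[:, 1] - bounds[:, 0])  # Gaussian mutation
        kids.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.vstack([parents, kids])

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("best mix:", np.round(best, 1), "cost:", round(unit_cost @ best, 2))
```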