• Title/Summary/Keyword: training models

Trends in Acupuncture Training Research: Focus on Practical Phantom Models

  • Jang, Jung Eun; Lee, Yeon Sun; Jang, Woo Seok; Sung, Won Suk; Kim, Eun-Jung; Lee, Seung Deok; Kim, Kyung Ho; Jung, Chan Yung
    • Journal of Acupuncture Research / v.39 no.2 / pp.77-88 / 2022
  • The purpose of this review was to identify research trends in acupuncture training systems and models and to analyze acupuncture training using phantom models. Articles on acupuncture training were retrieved from domestic and foreign electronic databases (PubMed, CNKI, CiNii, NDSL, KISS, RISS, and KMBase). The search covered studies conducted from January 1, 2010 to October 1, 2021. Acupuncture training was analyzed by categorization into acupoint location training and needling training. Acupuncture training was most frequently studied in China; acupoint location training was studied most in 2012, and needling training was studied most in 2013 and 2020. Silicone models fitted with sensors were used for acupoint location training, and silicone and agarose gel were frequently used for needling training. The phantom models for needling training were classified by topic into phantom development, phantom-based education and evaluation systems, phantom-based quantitative measurement, comparison of the kinematic characteristics of hand motion between experts and beginners, and phantom models for acupoint location and needling training. Further research on the development of acupuncture practice training systems to improve practical skills is needed.

The Chicken Aorta as a Simulation-Training Model for Microvascular Surgery Training

  • Ramachandran, Savitha; Chui, Christopher Hoe-Kong; Tan, Bien-Keem
    • Archives of Plastic Surgery / v.40 no.4 / pp.327-329 / 2013
  • As a technically demanding skill, microsurgery is taught in the laboratory, in the form of a course of variable length (depending on the centre). Microsurgical training courses usually use a mixture of non-living and live animal simulation models. In the literature, a plethora of microsurgical training models has been described, ranging from low- to high-fidelity models. Given the high costs associated with live animal models, cheaper alternatives are coming into vogue. In this paper, we describe the use of the chicken aorta as a simple and cost-effective, low-fidelity microsurgical simulation model for training.

Voting and Ensemble Schemes Based on CNN Models for Photo-Based Gender Prediction

  • Jhang, Kyoungson
    • Journal of Information Processing Systems / v.16 no.4 / pp.809-819 / 2020
  • Gender prediction accuracy increases as convolutional neural network (CNN) architectures evolve. This paper compares voting and ensemble schemes that utilize five already trained CNN models to further improve gender prediction accuracy. Majority voting usually requires an odd number of models, whereas the proposed softmax-based voting can utilize any number of models to improve accuracy. The ensemble of CNN models combined with one more fully-connected layer requires further tuning or training of the combined models. Experiments show that voting or ensembling CNN models leads to further improvement of gender prediction accuracy, and that softmax-based voters, in particular, always show better gender prediction accuracy than majority voters. Compared with softmax-based voters, ensemble models show slightly better or similar accuracy, but only with added training of the combined CNN models. Softmax-based voting can therefore be a fast and efficient way to obtain better accuracy without further training, since selecting the top-accuracy models among the available pre-trained CNN models usually yields accuracy similar to that of the corresponding ensemble models.
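
The contrast between majority voting and softmax-based voting can be made concrete with a short sketch. The following is a minimal illustration, assuming each of the five pre-trained CNN models outputs a softmax probability vector per image; the random inputs and array shapes are illustrative, not the paper's code or data.

```python
import numpy as np

def majority_vote(probs):
    """Majority voting: each model casts one hard vote (the argmax of its softmax).
    probs has shape (n_models, n_samples, n_classes)."""
    votes = probs.argmax(axis=-1)                              # (n_models, n_samples)
    n_classes = probs.shape[-1]
    counts = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)], axis=-1)
    return counts.argmax(axis=-1)                              # class index per sample

def softmax_vote(probs):
    """Softmax-based voting: sum the softmax outputs of all models and take the
    argmax; this works for any number of models, odd or even."""
    return probs.sum(axis=0).argmax(axis=-1)

# Illustrative inputs: 5 models, 4 images, 2 gender classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(alpha=[1.0, 1.0], size=(5, 4))           # shape (5, 4, 2)
print("majority voting :", majority_vote(probs))
print("softmax voting  :", softmax_vote(probs))
```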

The Effects of Training for Computer Skills on Outcome Expectations, Ease of Use, Self-Efficacy and Perceived Behavioral Control

  • Lee, Min-Hwa
    • The Journal of Information Systems / v.5 / pp.345-371 / 1996
  • Previous studies on user training have largely focused on assessing models that describe the determinants of information technology usage or examined the effects of training on user satisfaction, productivity, performance, and so on. Scant research effort has been made, however, to examine those effects of training using theoretical models. This study presented a conceptual model to predict intention to use information technology and conducted an experiment to understand how training for computer skill acquisition affects the primary variables of the model. The data were obtained from 32 student subjects in an experimental group and 31 students in a control group, and the information technology employed for this study was a university electronic mail system. The study results revealed that attitude toward usage and perceived behavioral control helped to predict user intentions; outcome expectations were positively related to attitude toward usage; and self-efficacy was positively related to perceived behavioral control. The hands-on training for the experimental group led to increases in perceived ease of use, self-efficacy, and perceived behavioral control. The changes in those variables suggest stronger causal effects of user training than those reported in survey-based studies.

Comparison and optimization of deep learning-based radiosensitivity prediction models using gene expression profiling in National Cancer Institute-60 cancer cell line

  • Kim, Euidam; Chung, Yoonsun
    • Nuclear Engineering and Technology / v.54 no.8 / pp.3027-3033 / 2022
  • Background: In this study, various types of deep-learning models for predicting in vitro radiosensitivity from gene-expression profiling were compared. Methods: The clonogenic surviving fractions at 2 Gy from previous publications and microarray gene-expression data from the National Cancer Institute-60 cell lines were used to measure radiosensitivity. Seven different prediction models, comprising three distinct multi-layer perceptron (MLP) models and four different convolutional neural network (CNN) models, were compared. Folded cross-validation was applied to train and evaluate model performance. The criterion for a correct prediction was an absolute error < 0.02 or a relative error < 10%. The models were compared in terms of prediction accuracy, training time per epoch, training fluctuations, and required computational resources. Results: The strength of the MLP-based models was their fast initial convergence and short training time per epoch. Their prediction accuracy, however, differed significantly depending on the model configuration. The CNN-based models showed relatively high prediction accuracy, low training fluctuations, and a relatively small increase in memory requirements as the model deepens. Conclusion: Our findings suggest that a CNN-based model with moderate depth is appropriate when prediction accuracy is important, and a shallow MLP-based model can be recommended when either training resources or time are limited.
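
The correctness criterion quoted above (absolute error < 0.02 or relative error < 10%) is simple enough to state as a short function. The sketch below is only an illustration of those two thresholds applied to predicted versus measured surviving fractions at 2 Gy; the numbers in the example are made up.

```python
import numpy as np

def prediction_accuracy(sf2_true, sf2_pred, abs_tol=0.02, rel_tol=0.10):
    """Fraction of predictions counted as correct under the stated criterion:
    absolute error < 0.02 or relative error < 10% of the measured SF2."""
    y = np.asarray(sf2_true, dtype=float)
    p = np.asarray(sf2_pred, dtype=float)
    abs_err = np.abs(p - y)
    rel_err = abs_err / np.abs(y)
    return float(np.mean((abs_err < abs_tol) | (rel_err < rel_tol)))

# Made-up measured vs. predicted surviving fractions at 2 Gy for three cell lines.
print(prediction_accuracy([0.50, 0.80, 0.30], [0.51, 0.70, 0.34]))   # -> 0.333...
```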

Three-Stage Framework for Unsupervised Acoustic Modeling Using Untranscribed Spoken Content

  • Zgank, Andrej
    • ETRI Journal / v.32 no.5 / pp.810-818 / 2010
  • This paper presents a new framework for integrating untranscribed spoken content into the acoustic training of an automatic speech recognition system. Untranscribed spoken content plays a very important role for under-resourced languages because producing manually transcribed speech databases is still a very expensive and time-consuming task. We propose two new methods as part of the training framework. The first method focuses on combining initial acoustic models using a data-driven metric. The second method is an improved acoustic training procedure based on unsupervised transcriptions, in which word endings are modified by broad phonetic classes. The training framework was applied to baseline acoustic models using untranscribed spoken content from parliamentary debates. Three types of acoustic models were included in the evaluation: baseline, reference content, and framework content models. The best overall result, an 18.02% word error rate, was achieved with the third type. This result represents a statistically significant improvement over the baseline and reference acoustic models.

Improved Statistical Grey-Level Models for PCB Inspection (PCB 검사를 위한 개선된 통계적 그레이레벨 모델)

  • Bok, Jin Seop; Cho, Tai-Hoon
    • Journal of the Semiconductor & Display Technology / v.12 no.1 / pp.1-7 / 2013
  • Grey-level statistical models have been widely used in many applications for object location and identification. However, conventional models suffer from problems in model refinement when the training images are not properly aligned, and they have difficulty with real-time recognition of arbitrarily rotated objects. This paper presents improved grey-level statistical models that align training images using image or feature matching to overcome the refinement problems of conventional models, and that enable real-time recognition of arbitrarily rotated objects using efficient hierarchical search methods. Edges or features extracted from a mean training image are used for accurate alignment of the models in the search image. At the aligned position and orientation, a fitness measure based on the grey-level statistical models is computed for object recognition. Various PCB inspection experiments demonstrate that the proposed methods are superior to conventional methods in recognition accuracy and speed.
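
The core of a grey-level statistical model can be sketched briefly: accumulate per-pixel mean and standard deviation images from aligned training images, then score a test region with a normalized-error fitness measure. The sketch below illustrates only this basic idea; the image/feature-based alignment and the hierarchical rotation search contributed by the paper are omitted, and all data are synthetic.

```python
import numpy as np

def build_greylevel_model(aligned_images):
    """aligned_images: array (n_images, H, W) of aligned grey-level training images.
    Returns per-pixel mean and standard deviation images."""
    stack = np.asarray(aligned_images, dtype=float)
    return stack.mean(axis=0), stack.std(axis=0) + 1e-6        # epsilon avoids /0

def fitness(test_region, mean_img, std_img):
    """Mean squared per-pixel deviation of an aligned test region from the model,
    normalized by the model's standard deviation; lower means a better match."""
    z = (np.asarray(test_region, dtype=float) - mean_img) / std_img
    return float(np.mean(z ** 2))

# Toy example: three noisy copies of a pattern stand in for aligned training images.
rng = np.random.default_rng(1)
pattern = rng.uniform(0, 255, size=(32, 32))
train = pattern + rng.normal(0.0, 2.0, size=(3, 32, 32))
mean_img, std_img = build_greylevel_model(train)
print(fitness(pattern, mean_img, std_img))                               # small: good match
print(fitness(rng.uniform(0, 255, size=(32, 32)), mean_img, std_img))    # large: mismatch
```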

Software Fault Prediction using Semi-supervised Learning Methods (세미감독형 학습 기법을 사용한 소프트웨어 결함 예측)

  • Hong, Euyseok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.3 / pp.127-133 / 2019
  • Most studies of software fault prediction have dealt with supervised learning models that use only labeled training data. Although supervised learning usually shows high prediction performance, most development groups do not have sufficient labeled data. Unsupervised learning models that use only unlabeled data for training are difficult to build and show poor performance. Semi-supervised learning models that use both labeled and unlabeled data can solve these problems. Among semi-supervised techniques, the self-training technique requires the fewest assumptions and constraints. In this paper, we implemented several models using self-training algorithms and evaluated them using Accuracy and AUC. As a result, YATSI showed the best performance.
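
Self-training itself is a short loop: fit on the labeled set, pseudo-label the unlabeled instances predicted with high confidence, and retrain. The sketch below illustrates that generic loop with an assumed random-forest base classifier and confidence threshold; it is not the YATSI algorithm evaluated in the paper, which is a different two-stage scheme.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_training(X_labeled, y_labeled, X_unlabeled, threshold=0.9, max_rounds=10):
    """Generic self-training: repeatedly fit on the labeled pool, pseudo-label the
    unlabeled instances predicted with probability >= threshold, and add them."""
    X_l = np.asarray(X_labeled, dtype=float)
    y_l = np.asarray(y_labeled)
    X_u = np.asarray(X_unlabeled, dtype=float)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    for _ in range(max_rounds):
        if len(X_u) == 0:
            break
        clf.fit(X_l, y_l)
        proba = clf.predict_proba(X_u)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break
        pseudo = clf.classes_[proba[confident].argmax(axis=1)]  # pseudo-labels
        X_l = np.vstack([X_l, X_u[confident]])
        y_l = np.concatenate([y_l, pseudo])
        X_u = X_u[~confident]
    clf.fit(X_l, y_l)
    return clf
```

Evaluation in the style of the paper would then compute Accuracy and AUC (for example with sklearn.metrics.accuracy_score and roc_auc_score) on a held-out test set of software-metric features.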

Semi-supervised Model for Fault Prediction using Tree Methods (트리 기법을 사용하는 세미감독형 결함 예측 모델)

  • Hong, Euyseok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.4 / pp.107-113 / 2020
  • A number of studies have been conducted on predicting software faults, but most of them have been supervised models that use labeled data for training. Very few studies have been conducted on unsupervised models that use only unlabeled data, or on semi-supervised models that use abundant unlabeled data together with a small amount of labeled data. In this paper, we produced new semi-supervised models using tree algorithms within the self-training technique. In the model performance evaluation experiment, the newly created tree models performed better than the existing models, and CollectiveWoods, in particular, outperformed the others. In addition, it showed very stable performance even when very few labeled data were available.
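
For a tree-based variant of the same idea, scikit-learn's SelfTrainingClassifier can wrap a tree ensemble as the base learner. The sketch below uses a RandomForestClassifier on synthetic data as a stand-in; it is not the CollectiveWoods implementation evaluated in the paper, and the feature and label arrays are assumed for illustration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic stand-in for software-metric data: 500 modules, 20 metrics, ~20% faulty.
X, y = make_classification(n_samples=500, n_features=20, weights=[0.8, 0.2],
                           random_state=0)

# Hide 90% of the labels (marked as -1) to simulate scarce labeled data.
rng = np.random.default_rng(0)
y_semi = y.copy()
y_semi[rng.random(len(y)) < 0.9] = -1

# Self-training with a tree ensemble as the base learner.
model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=100, random_state=0),
                               threshold=0.8)
model.fit(X, y_semi)

# AUC against the full ground truth, for illustration only.
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```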

A GPD-BASED DISCRIMINATIVE TRAINING ALGORITHM FOR PREDICTIVE NEURAL NETWORK MODELS

  • Na, Kyung-Min; Rheem, Jae-Yeol; Ann, Sou-Guil
    • Proceedings of the Acoustical Society of Korea Conference / 1994.06a / pp.997-1002 / 1994
  • Predictive neural network models are powerful speech recognition models based on nonlinear pattern prediction. These models can effectively normalize the temporal and spatial variability of speech signals, but they suffer from poor discrimination between acoustically similar words. In this paper, we propose a discriminative training algorithm for predictive neural network models based on the generalized probabilistic descent (GPD) algorithm and the minimum classification error formulation (MCEF). Evaluation of the proposed training algorithm on ten Korean digits demonstrates its effectiveness, with a 40% reduction in recognition error.
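
The loss minimized in GPD-based MCE training can be written down compactly: a misclassification measure that compares the correct model's score with a smoothed maximum over the competing models, passed through a sigmoid. The sketch below states that loss in terms of per-word-model accumulated prediction errors (lower error = better match for predictive models); the smoothing constants eta and gamma are illustrative, and the gradient computation through the predictive networks, which a GPD update would follow, is omitted.

```python
import numpy as np

def mce_loss(pred_errors, true_idx, eta=2.0, gamma=1.0):
    """Smoothed minimum classification error loss for one training token.

    pred_errors : accumulated prediction error of each word model (lower = better)
    true_idx    : index of the correct word model
    Returns the sigmoid-smoothed 0/1 loss whose gradient a GPD update would follow.
    """
    g = -np.asarray(pred_errors, dtype=float)     # discriminant: higher = better
    g_true = g[true_idx]
    g_comp = np.delete(g, true_idx)
    # Smoothed maximum over the competing models (log-sum-exp with parameter eta).
    soft_max = np.log(np.mean(np.exp(eta * g_comp))) / eta
    d = -g_true + soft_max                        # d > 0 roughly means misclassified
    return 1.0 / (1.0 + np.exp(-gamma * d))       # smoothed loss in (0, 1)

# Correct model (index 0) has the lowest prediction error -> small loss.
print(mce_loss([1.2, 3.5, 4.0, 2.8], true_idx=0))
# A competing model fits better than the correct one -> large loss.
print(mce_loss([3.5, 1.2, 4.0, 2.8], true_idx=0))
```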
