• Title/Abstract/Keyword: Bias Training

Search results: 118 items (processing time: 0.025 seconds)

협응이동훈련이 만성 뇌졸중 환자의 균형에 미치는 효과: 국내연구의 메타분석 (The effects of coordinative locomotor training on balance in patients with chronic stroke: meta-analysis of studies in Korea)

  • 임재헌;박세주
    • 대한물리치료과학회지
    • /
    • Vol. 27, No. 2
    • /
    • pp.36-47
    • /
    • 2020
  • Background: This study aimed to provide meaningful information for the accumulation of knowledge on coordinative locomotor training in patients with stroke. Design: Meta-analysis. Methods: This study collected articles on coordinative locomotor training in patients with stroke. For the systematic meta-analysis, 6 articles were finally selected after searching based on the PICOSD criteria. The meta-analysis was conducted according to PRISMA guidelines. Randomized controlled trials were included and the risk of bias was evaluated for each study. Pooled standardized mean differences were calculated using a random effects model. The R 3.5.3 software was used to extract the effect size of each study. Results: The meta-analysis showed a total effect size of 1.23, indicating a large effect of coordinative locomotor training for patients with stroke. Conclusion: Further meta-analyses are warranted to determine the effects of coordinative locomotor training on muscle strength, walking, and range of motion in patients with stroke.
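
The pooling step described above can be illustrated with a short sketch: a DerSimonian-Laird random-effects combination of standardized mean differences. This is a minimal Python illustration using assumed, made-up per-study effect sizes and variances; the review itself used R 3.5.3 and its own six studies.

```python
import numpy as np

# Hypothetical per-study standardized mean differences and their variances
# (illustrative values only, not the six studies from the review).
smd = np.array([1.10, 0.85, 1.40, 1.30, 0.95, 1.55])
var = np.array([0.12, 0.09, 0.15, 0.11, 0.10, 0.14])

# Fixed-effect (inverse-variance) weights and the heterogeneity statistic Q
w = 1.0 / var
fixed_mean = np.sum(w * smd) / np.sum(w)
q = np.sum(w * (smd - fixed_mean) ** 2)
k = len(smd)

# DerSimonian-Laird estimate of the between-study variance tau^2
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects weights, pooled standardized mean difference, and 95% CI
w_re = 1.0 / (var + tau2)
pooled = np.sum(w_re * smd) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled SMD = {pooled:.2f}, 95% CI [{pooled - 1.96 * se:.2f}, {pooled + 1.96 * se:.2f}]")
```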

Can carbamide peroxide be as effective as hydrogen peroxide for in-office tooth bleaching and cause less sensitivity? A systematic review

  • Patrick Wesley Marques de Boa;Kaiza de Sousa Santos;Francisca Jennifer Duarte de Oliveira;Boniek Castillo Dutra Borges
    • Restorative Dentistry and Endodontics
    • /
    • Vol. 49, No. 2
    • /
    • pp.14.1-14.13
    • /
    • 2024
  • This study aimed to answer the following question through a systematic review: can carbamide peroxide be as effective as hydrogen peroxide while causing less in-office bleaching sensitivity? A literature survey was performed in PubMed/MEDLINE, Embase, Scopus, ISI Web of Science, and the gray literature. Primary clinical trials that compared the efficacy or the in-office bleaching sensitivity of carbamide and hydrogen peroxides were included. The risk of bias was evaluated using the RoB 2 tool, and the certainty of the evidence was assessed using the GRADE approach. The limited evidence suggests that 37% carbamide peroxide may be similarly effective to 35% hydrogen peroxide for in-office tooth bleaching while causing less bleaching sensitivity. However, more well-designed split-mouth clinical trials are necessary to strengthen the evidence.

품질이 관리된 스트레스 측정용 데이터셋 구축을 위한 제언 (Recommendations for the Construction of a Quality-Controlled Stress Measurement Dataset)

  • 김태훈;나인섭
    • 스마트미디어저널
    • /
    • Vol. 13, No. 2
    • /
    • pp.44-51
    • /
    • 2024
  • The construction of datasets for stress measurement plays a key role in a wide range of modern application fields, including health, medicine, behavioral psychology, and education. In particular, efficient training of artificial intelligence models for stress measurement requires building quality-controlled datasets from which various forms of bias have been removed. This paper presents recommendations for constructing a quality-controlled stress measurement dataset through the removal of such biases. To this end, it introduces the definition of stress and its measurement instruments, describes the process of constructing a stress AI dataset, outlines strategies for overcoming bias to improve quality, and lists considerations for collecting stress data. In particular, to manage dataset quality, it reviews considerations in dataset construction and the various biases that can arise, such as selection bias, measurement bias, causal bias, confirmation bias, and AI bias. This paper is expected to contribute to a systematic understanding of the considerations in stress data collection and of the biases that can arise during stress dataset construction, and thereby to the construction of quality-assured datasets.

On Line LS-SVM for Classification

  • Kim, Daehak;Oh, KwangSik;Shim, Jooyong
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 10, No. 2
    • /
    • pp.595-601
    • /
    • 2003
  • In this paper we propose an on-line training method for classification based on the least squares support vector machine. The proposed method reduces the computational cost and allows training to be performed incrementally. With the incremental formulation of the inverse matrix in the optimization problem, current information and new input data can be used to build the new inverse matrix for estimating the optimal bias and Lagrange multipliers, so the large-scale matrix inversion operation can be avoided. Numerical examples are included which indicate the performance of the proposed algorithm.
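
A rough sketch of the incremental idea described above: the regularized kernel matrix of the LS-SVM grows by one row and column for each new sample, its inverse is updated with the block-matrix inversion identity instead of being recomputed, and the bias and Lagrange multipliers are then re-estimated from the updated inverse. The RBF kernel, hyperparameters, and toy data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rbf(x1, x2, sigma=1.0):
    # Gaussian (RBF) kernel; the kernel choice is illustrative
    return np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * sigma ** 2))

def grow_inverse(H_inv, col, diag):
    """Inverse of [[H, col], [col^T, diag]] from H_inv, via the block-inversion identity."""
    u = H_inv @ col
    s = diag - col @ u                       # Schur complement (scalar)
    top = H_inv + np.outer(u, u) / s
    return np.block([[top, -u[:, None] / s],
                     [-u[None, :] / s, np.array([[1.0 / s]])]])

class OnlineLSSVM:
    def __init__(self, gamma=10.0, sigma=1.0):
        self.gamma, self.sigma = gamma, sigma
        self.X, self.y, self.H_inv = [], [], None

    def add(self, x, label):
        # Kernel column against the stored samples, with the label signs folded in
        col = np.array([label * yi * rbf(x, xi, self.sigma)
                        for xi, yi in zip(self.X, self.y)])
        diag = rbf(x, x, self.sigma) + 1.0 / self.gamma
        if self.H_inv is None:
            self.H_inv = np.array([[1.0 / diag]])
        else:
            self.H_inv = grow_inverse(self.H_inv, col, diag)
        self.X.append(np.asarray(x, dtype=float))
        self.y.append(float(label))

    def solve(self):
        # Bias and Lagrange multipliers recovered from the maintained inverse
        y = np.array(self.y)
        nu = self.H_inv @ np.ones(len(y))
        eta = self.H_inv @ y
        b = (y @ nu) / (y @ eta)
        alpha = nu - b * eta
        return alpha, b

    def predict(self, x):
        alpha, b = self.solve()
        k = np.array([rbf(x, xi, self.sigma) for xi in self.X])
        return np.sign(np.sum(alpha * np.array(self.y) * k) + b)

# Tiny usage example with two separable clusters (illustrative data)
model = OnlineLSSVM()
for x, label in [([0.0, 0.0], -1), ([0.2, 0.1], -1), ([1.0, 1.0], 1), ([0.9, 1.2], 1)]:
    model.add(np.array(x), label)
print(model.predict(np.array([0.1, 0.0])), model.predict(np.array([1.1, 1.0])))
```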

강인한 음성인식을 위한 SPLICE 기반 잡음 보상의 성능향상 (Performance Improvement of SPLICE-based Noise Compensation for Robust Speech Recognition)

  • 김형순;김두희
    • 음성과학
    • /
    • Vol. 10, No. 3
    • /
    • pp.263-277
    • /
    • 2003
  • One of the major problems in speech recognition is performance degradation due to the mismatch between the training and test environments. Recently, Stereo-based Piecewise LInear Compensation for Environments (SPLICE), a frame-based bias removal algorithm for cepstral enhancement using stereo training data and a noisy speech model represented as a mixture of Gaussians, was proposed and showed good performance in noisy environments. In this paper, we propose several methods to improve the conventional SPLICE. First, we apply Cepstral Mean Subtraction (CMS) as a preprocessor to SPLICE, instead of applying it as a postprocessor. Secondly, to compensate for the residual distortion after SPLICE processing, a two-stage SPLICE is proposed. Thirdly, we employ phonetic information for training the SPLICE model. In experiments on the Aurora 2 database, the proposed methods outperformed the conventional SPLICE, and we achieved a 50% decrease in word error rate over the Aurora baseline system.
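
The core SPLICE operation, frame-wise bias removal using posterior-weighted correction vectors from a Gaussian mixture over noisy cepstra, can be sketched as follows. This is a minimal illustration assuming diagonal-covariance mixture components and correction vectors that would normally be learned from stereo (clean/noisy) training data; all parameter values here are placeholders.

```python
import numpy as np

def log_gauss_diag(y, mean, var):
    # Log density of a diagonal-covariance Gaussian, evaluated per mixture component
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (y - mean) ** 2 / var, axis=-1)

def splice_enhance(y, weights, means, variances, corrections):
    """Return the SPLICE estimate x_hat = y + sum_k p(k|y) r_k for one cepstral frame."""
    log_post = np.log(weights) + log_gauss_diag(y, means, variances)
    log_post -= np.max(log_post)                 # numerical stability
    post = np.exp(log_post)
    post /= post.sum()                           # mixture posteriors p(k|y)
    return y + post @ corrections                # posterior-weighted bias removal

# Toy 2-component mixture over 3-dimensional "cepstra" (placeholder parameters)
rng = np.random.default_rng(0)
weights = np.array([0.6, 0.4])
means = np.array([[0.0, 0.0, 0.0], [2.0, 2.0, 2.0]])
variances = np.ones((2, 3))
corrections = np.array([[0.5, -0.2, 0.1], [-1.0, 0.3, 0.0]])   # r_k from stereo training

noisy_frame = rng.normal(size=3)
print(splice_enhance(noisy_frame, weights, means, variances, corrections))
```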


Fast Training of Structured SVM Using Fixed-Threshold Sequential Minimal Optimization

  • Lee, Chang-Ki;Jang, Myung-Gil
    • ETRI Journal
    • /
    • Vol. 31, No. 2
    • /
    • pp.121-128
    • /
    • 2009
  • In this paper, we describe a fixed-threshold sequential minimal optimization (FSMO) for structured SVM problems. FSMO is conceptually simple, easy to implement, and faster than the standard support vector machine (SVM) training algorithms for structured SVM problems. Because FSMO uses the fact that the formulation of structured SVM has no bias (that is, the threshold b is fixed at zero), FSMO breaks down the quadratic programming (QP) problems of structured SVM into a series of smallest QP problems, each involving only one variable. By involving only one variable, FSMO is advantageous in that each QP sub-problem does not need subset selection. For the various test sets, FSMO is as accurate as an existing structured SVM implementation (SVM-Struct) but is much faster on large data sets. The training time of FSMO empirically scales between O(n) and O(n^1.2), while SVM-Struct scales between O(n^1.5) and O(n^1.8).
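
The key property exploited by FSMO, that a formulation without a bias term has no equality constraint coupling the multipliers, so each one admits an analytic one-variable update, can be sketched for the simpler case of a binary kernel SVM with the threshold fixed at zero; the structured case adds per-example output spaces that are omitted here.

```python
import numpy as np

def fsmo_no_bias(K, y, C=1.0, tol=1e-3, max_epochs=100):
    """One-variable SMO-style updates for an SVM dual with the threshold fixed at zero.

    Without a bias term there is no equality constraint coupling the multipliers,
    so each alpha_i can be optimized analytically on its own box [0, C].
    """
    n = len(y)
    alpha = np.zeros(n)
    for _ in range(max_epochs):
        changed = False
        for i in range(n):
            f_i = np.sum(alpha * y * K[:, i])        # decision value, no bias term
            grad = 1.0 - y[i] * f_i                  # derivative of the dual w.r.t. alpha_i
            if K[i, i] <= 0:
                continue
            new_alpha = np.clip(alpha[i] + grad / K[i, i], 0.0, C)
            if abs(new_alpha - alpha[i]) > tol:
                alpha[i] = new_alpha
                changed = True
        if not changed:
            break
    return alpha

# Toy linearly separable data with a linear kernel (illustrative only)
X = np.array([[1.0, 1.0], [1.5, 0.8], [-1.0, -1.2], [-0.8, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
K = X @ X.T
alpha = fsmo_no_bias(K, y)
w = (alpha * y) @ X                                  # recover the primal weight vector
print(np.sign(X @ w))                                # should reproduce the labels
```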


바이어스 보상과 차원별 Eigenvoice 모델 평균을 이용한 고속화자적응의 성능향상 (Performance Improvement of Rapid Speaker Adaptation Using Bias Compensation and Mean of Dimensional Eigenvoice Models)

  • 박종세;김형순;송화전
    • 한국음향학회지
    • /
    • Vol. 23, No. 5
    • /
    • pp.383-389
    • /
    • 2004
  • In this paper, to improve the performance of eigenvoice-based rapid speaker adaptation when the training and recognition environments differ, we propose an eigenvoice adaptation method with bias compensation and a weighted-sum method using the mean of dimensional eigenvoice models. In vocabulary-independent word recognition experiments on the PBW 452 DB, the proposed methods achieved a large performance improvement over the conventional eigenvoice method when a small amount of adaptation data was used. As the number of adaptation words was varied from 1 to 50, the eigenvoice adaptation method with bias compensation reduced the word error rate by about 22-30% compared with the conventional eigenvoice method. In addition, the eigenvoice adaptation method using the mean of dimensional eigenvoice models reduced the word error rate by up to 41% compared with the conventional eigenvoice method when a single word was used as adaptation data.
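
A heavily simplified sketch of the bias-compensated eigenvoice idea: an environment bias is subtracted from the observed speaker representation before the eigenvoice weights are estimated. The weight estimation below is an ordinary least-squares projection rather than the maximum-likelihood estimation used in real eigenvoice adaptation, the bias is a placeholder constant offset, and the dimensional model-averaging variant is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_eig = 40, 4                         # toy supervector length, retained eigenvoices

# Placeholder speaker-independent mean and orthonormal eigenvoice basis
# (in a real system these come from PCA over training-speaker mean supervectors).
mean_sv = rng.normal(size=dim)
eigenvoices = np.linalg.qr(rng.normal(size=(dim, n_eig)))[0]

def adapt(observed_sv, bias):
    """Estimate eigenvoice weights after subtracting an environment bias estimate."""
    residual = observed_sv - bias - mean_sv
    w, *_ = np.linalg.lstsq(eigenvoices, residual, rcond=None)
    return mean_sv + eigenvoices @ w, w

# Toy observation: a speaker lying in the eigenvoice space, shifted by a channel bias
true_w = np.array([0.8, -0.5, 0.3, 0.1])
bias = np.full(dim, 0.7)                   # e.g., a cepstral-mean-style offset
observed = mean_sv + eigenvoices @ true_w + bias + 0.01 * rng.normal(size=dim)

_, w_no_comp = adapt(observed, np.zeros(dim))   # ignoring the bias
_, w_comp = adapt(observed, bias)               # with bias compensation
print(np.round(w_no_comp, 2), np.round(w_comp, 2))   # the compensated weights track true_w
```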

Comparison of different post-processing techniques in real-time forecast skill improvement

  • Jabbari, Aida;Bae, Deg-Hyo
    • 한국수자원학회:학술대회논문집
    • /
    • 한국수자원학회 2018 Annual Conference
    • /
    • pp.150-150
    • /
    • 2018
  • The Numerical Weather Prediction (NWP) models provide information for weather forecasts. The highly nonlinear and complex interactions in the atmosphere are simplified in meteorological models through approximations and parameterization, and these simplifications may lead to biases and errors in model results. Although the models have improved over time, the biased outputs of these models are still a matter of concern in meteorological and hydrological studies. Thus, bias removal is an essential step prior to using the outputs of atmospheric models. The main idea of statistical bias correction methods is to develop a statistical relationship between modeled and observed variables over the same historical period. Model Output Statistics (MOS) is desirable to better match real-time forecast data with observation records, since statistical post-processing methods relate model outputs to the observed values at the sites of interest. In this study, three methods are used to remove the possible biases of the real-time outputs of the Weather Research and Forecasting (WRF) model in the Imjin basin (North and South Korea). The post-processing techniques include the Linear Regression (LR), Linear Scaling (LS), and Power Scaling (PS) methods. The MOS techniques used in this study involve three main steps: preprocessing of the historical data in the training set, development of the equations, and application of the equations to the validation set. The expected results show the improvement in accuracy of the real-time forecast data before and after bias correction. The comparison of the different methods will clarify the best method for forecast skill enhancement in a real-time case study.
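
The three post-processing techniques can be sketched on synthetic training data as follows. The data, the regression fit, and the way the power-scaling exponent is estimated (a straight-line fit in log space) are illustrative assumptions; the study's exact fitting procedure may differ.

```python
import numpy as np

# Hypothetical training-period pairs of model output and observations
# (placeholders for the WRF forecasts and gauge records used in the study).
rng = np.random.default_rng(2)
obs_train = rng.gamma(shape=2.0, scale=3.0, size=200)                  # always positive
mod_train = 1.3 * obs_train * np.exp(rng.normal(0, 0.2, 200)) + 0.5   # biased, positive
mod_new = mod_train[:20]                       # stand-in for a real-time forecast

# Linear Regression (LR): fit obs ~ a * model + b on the training period
a, b = np.polyfit(mod_train, obs_train, deg=1)
lr_corrected = a * mod_new + b

# Linear Scaling (LS): one multiplicative factor that matches the training means
ls_factor = obs_train.mean() / mod_train.mean()
ls_corrected = ls_factor * mod_new

# Power Scaling (PS): corrected = s * model**p, fitted here as a straight line in log space
p, log_s = np.polyfit(np.log(mod_train), np.log(obs_train), deg=1)
ps_corrected = np.exp(log_s) * mod_new ** p

print(lr_corrected[:3], ls_corrected[:3], ps_corrected[:3])
```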


A Study on Unbiased Methods in Constructing Classification Trees

  • Lee, Yoon-Mo;Song, Moon Sup
    • Communications for Statistical Applications and Methods
    • /
    • Vol. 9, No. 3
    • /
    • pp.809-824
    • /
    • 2002
  • We propose two methods that separate the variable selection step from the split-point selection step. We call these two algorithms the CHITES method and the F&CHITES method. They adopt some of the best characteristics of CART, CHAID, and QUEST. In the first step, the variable that is most significant for predicting the target class values is selected. In the second step, the exhaustive search method is applied to find the split point based on the variable selected in the first step. We compared the proposed methods, CART, and QUEST in terms of variable selection bias and power, error rates, and training times. The proposed methods are not only unbiased in the null case, but also powerful for selecting the correct variables in non-null cases.
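
The two-step idea, selecting the most significant variable first and then searching its split points exhaustively, can be sketched as below. The chi-square test on a binned predictor and the Gini criterion for the split search are simplifications chosen for illustration, not the exact statistics of the CHITES/F&CHITES methods.

```python
import numpy as np
from scipy.stats import chi2_contingency

def select_variable(X, y, n_bins=4):
    """Step 1: pick the predictor with the smallest chi-square p-value against the class."""
    best_j, best_p = None, 1.0
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        binned = np.digitize(X[:, j], edges)
        table = np.zeros((n_bins, len(np.unique(y))))
        for b, c in zip(binned, y):
            table[b, c] += 1
        table = table[table.sum(axis=1) > 0]          # drop empty bins
        _, p, _, _ = chi2_contingency(table)
        if p < best_p:
            best_j, best_p = j, p
    return best_j, best_p

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    prop = counts / counts.sum()
    return 1.0 - np.sum(prop ** 2)

def best_split(x, y):
    """Step 2: exhaustive search for the split point that minimizes the weighted Gini index."""
    best_cut, best_imp = None, np.inf
    for cut in np.unique(x)[:-1]:
        left, right = y[x <= cut], y[x > cut]
        imp = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if imp < best_imp:
            best_cut, best_imp = cut, imp
    return best_cut, best_imp

# Toy data: only the first column is related to the class
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0.2).astype(int)
j, p = select_variable(X, y)
cut, imp = best_split(X[:, j], y)
print(f"selected variable {j} (p={p:.1e}), split at {cut:.2f}, impurity {imp:.3f}")
```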

훈련 데이터 개수와 훈련 횟수에 따른 과도학습과 신뢰도 분석에 대한 연구 (A Study on Reliability Analysis According to the Number of Training Data and the Number of Training)

  • 김성혁;오상진;윤근영;김완기
    • 한국인공지능학회지
    • /
    • Vol. 5, No. 1
    • /
    • pp.29-37
    • /
    • 2017
  • The range of problems that can be handled has expanded rapidly with the rise of big data and the development of hardware, and machine learning techniques such as deep learning have become very versatile. In this paper, the MNIST data set is used as experimental data, and the cross-entropy function is used as the loss model for evaluating the efficiency of machine learning. We applied a gradient descent optimization algorithm to minimize the value of the loss function and updated the weights and biases via backpropagation. In this way, we analyze the optimal reliability value corresponding to the number of training iterations and the optimal reliability value obtained without overfitting. Comparing the point at which overfitting occurs for different amounts of training data and different numbers of training iterations, we obtained a result of 92% when the training frequency was 1,110, which is the optimal reliability value without overfitting.
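
A compact sketch of the training loop described above: gradient descent on a cross-entropy loss with weight and bias updates obtained by backpropagation, while training and validation accuracy are tracked so that the onset of overfitting can be watched. A toy two-class problem and a single softmax layer stand in here for MNIST and whatever network the paper actually used.

```python
import numpy as np

# Toy stand-in for MNIST: two overlapping Gaussian blobs in 50 dimensions, small
# training set, so a model trained too long visibly overfits (illustrative only).
rng = np.random.default_rng(4)
n, d = 400, 50
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, d)), rng.normal(0.25, 1.0, (n // 2, d))])
y = np.repeat([0, 1], n // 2)
idx = rng.permutation(n)
X_tr, y_tr = X[idx[:100]], y[idx[:100]]
X_va, y_va = X[idx[100:]], y[idx[100:]]

W = np.zeros((d, 2))          # weights of a single softmax layer
b = np.zeros(2)               # bias
lr = 0.5

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def accuracy(X_, y_):
    return np.mean(np.argmax(X_ @ W + b, axis=1) == y_)

one_hot = np.eye(2)[y_tr]
for step in range(1, 1201):
    probs = softmax(X_tr @ W + b)
    loss = -np.mean(np.sum(one_hot * np.log(probs + 1e-12), axis=1))   # cross entropy
    grad_logits = (probs - one_hot) / len(y_tr)   # error signal backpropagated to the logits
    W -= lr * X_tr.T @ grad_logits                # gradient descent update of the weights
    b -= lr * grad_logits.sum(axis=0)             # and of the bias
    if step % 200 == 0:
        # A growing gap between training and validation accuracy marks the onset of overfitting
        print(step, round(float(loss), 3),
              round(float(accuracy(X_tr, y_tr)), 3), round(float(accuracy(X_va, y_va)), 3))
```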