• Title/Summary/Keyword: normalization factor

Search Result 101

Codeword-Dependent Distance Normalization and Smoothing of Output Probabilities Based on the Instar-formed Fuzzy Contribution in the FVQ-DHMM (퍼지양자화 은닉 마르코프 모델에서 코드워드 종속거리 정규화와 Instar 형태의 퍼지 기여도에 기반한 출력확률의 평활화)

  • Choi, Hwan-Jin;Kim, Yeon-Jun;Oh, Yung-Hwan
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.2
    • /
    • pp.71-79
    • /
    • 1997
  • In this paper, a codeword-dependent distance normalization (CDDN) and an instar-formed fuzzy smoothing of the output distribution are proposed for robust estimation of output probabilities in the FVQ (fuzzy vector quantization)-DHMM (discrete hidden Markov model). The FVQ-DHMM is a variant of the DHMM in which the state output probability is estimated as the sum, over all codewords, of the product of the output probability and its weighting factor for an input vector. Since the performance of the FVQ-DHMM is influenced by the weighting factors and the output distribution of each state, a method for robust estimation of both is required. Experimental results show that the proposed CDDN method reduced the error rate by 24% compared with the conventional FVQ-DHMM, and by 79% when the smoothing of the output distribution was also applied to the computation of the output probability. These results indicate that applying CDDN and the fuzzy smoothing of the output distribution to the FVQ-DHMM improves recognition, so the approach may serve as an alternative for the robust estimation of output probabilities in HMMs.
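
As a rough illustration of the output-probability computation described in this abstract, the Python sketch below forms the FVQ-DHMM state output probability as a weighted sum over codewords, assuming standard fuzzy-c-means-style memberships; the per-codeword normalization factors are hypothetical placeholders standing in for the paper's CDDN, and the instar-formed smoothing is not reproduced.

```python
import numpy as np

def fuzzy_memberships(x, codebook, fuzziness=2.0):
    """Fuzzy VQ weighting factors of input vector x for each codeword.

    Plain FVQ memberships from inverse distances; the paper's CDDN would
    additionally scale each distance by a per-codeword factor (sigma below).
    """
    d = np.linalg.norm(codebook - x, axis=1) + 1e-12   # distance to each codeword
    sigma = np.ones(len(codebook))                     # placeholder CDDN factors
    d = d / sigma
    w = d ** (-2.0 / (fuzziness - 1.0))
    return w / w.sum()                                 # weights sum to 1

def fvq_state_output_prob(x, codebook, b_state):
    """State output probability: sum over codewords of b(state, codeword) * weight."""
    w = fuzzy_memberships(x, codebook)
    return float(np.dot(b_state, w))

# toy usage
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))        # 8 codewords, 4-dim features
b_state = rng.dirichlet(np.ones(8))       # discrete output distribution of one state
x = rng.normal(size=4)
print(fvq_state_output_prob(x, codebook, b_state))
```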


Vector Quantizer Based Speaker Normalization for Continuous Speech Recognition (연속음성 인식기를 위한 벡터양자화기 기반의 화자정규화)

  • Shin Ok-keun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.8
    • /
    • pp.583-589
    • /
    • 2004
  • A speaker normalization method based on a vector quantizer is proposed for a continuous speech recognition (CSR) system, in which no acoustic information is used. The proposed method, an improvement on a previously reported speaker normalization scheme for a simple digit recognizer, builds a canonical codebook by iteratively training it while increasing the codebook size after each iteration from a relatively small initial size. Once the codebook is established, the warp factor of each speaker is estimated by exhaustively comparing warped versions of the speaker's utterance with the codebook. Two sets of phones are used to estimate the warp factors: one consisting of vowels only, and the other of all the phonemes. A piecewise linear warping function corresponding to the estimated warp factor is used to warp the power spectrum of the utterance, and the warped feature vectors are then extracted to train and test the speech recognizer. The effectiveness of the proposed method is investigated through a set of recognition experiments using the TIMIT corpus and the HTK speech recognition toolkit. The experimental results showed recognition rate improvements comparable to those of the formant-based warping method.
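
A minimal sketch of the warp-factor search described in this abstract, assuming that "comparing the warped versions with the codebook" means choosing the candidate factor with the lowest total quantization distortion; the spectral warping and feature extraction are assumed to happen upstream.

```python
import numpy as np

def total_distortion(features, codebook):
    """Sum of distances from each frame vector to its nearest codeword."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).sum()

def estimate_warp_factor(warped_features_by_alpha, codebook):
    """Pick the warp factor whose warped utterance best matches the canonical codebook.

    warped_features_by_alpha: dict mapping a candidate warp factor to the
    (frames, dim) feature array extracted after warping with that factor.
    """
    return min(warped_features_by_alpha,
               key=lambda a: total_distortion(warped_features_by_alpha[a], codebook))
```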

Calculation of depth dose for irregularly shaped electron fields (부정형 전자선 조사면의 심부선량과 출력비의 계산)

  • Lee, Byoung-Koo;Lee, Sang-Rok;Kwon, Young-Ho
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.14 no.1
    • /
    • pp.79-84
    • /
    • 2002
  • The main factors affecting the output, especially in the small and irregularly shaped fields used in electron beam therapy, are the collimation system, the insert block diameter, and the beam energy. For the absorbed dose in treatment fields, we should consider the lateral build-up ratio (LBR), which is the ratio of the dose at a point at depth for a given circular field to the dose at the same point for a broad field, for the same incident fluence and profile. The LBR data for a small circular field are used to extract the radial spread of the pencil beam, ${\sigma}$, as a function of depth and energy, based on the elementary pencil beam model. We examine the applicability of the factor ${\sigma}$ to electron beam treatment of small and irregularly shaped fields.
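
Under the commonly used Gaussian pencil-beam model, the LBR of a circular field of radius r and the radial spread can be related by LBR(r, z) = 1 - exp(-r^2 / sigma_r^2(z)); the sketch below shows this relation and its inverse, which is the usual way sigma_r is extracted from small-circular-field measurements. The paper's exact parameterization is not reproduced here.

```python
import numpy as np

def lbr_gaussian(r, sigma_r):
    """Lateral build-up ratio of a circular field of radius r under a
    Gaussian pencil-beam model: LBR = 1 - exp(-r^2 / sigma_r^2)."""
    return 1.0 - np.exp(-(r ** 2) / (sigma_r ** 2))

def sigma_from_lbr(r, lbr):
    """Invert the relation to extract the radial spread sigma_r (a function of
    depth and energy) from a measured LBR of a small circular field."""
    return r / np.sqrt(-np.log(1.0 - lbr))
```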


A Study on the Factor Analysis of the Encounter Data in the Maritime Traffic Environment (해상교통 조우데이터 요인분석에 관한 연구)

  • Kim, Kwang-Il;Jeong, Jung Sik;Park, Gyei-Kark
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.25 no.3
    • /
    • pp.293-298
    • /
    • 2015
  • Vessel encounter data collected from vessel trajectories in maritime traffic make it possible to analyze vessel collision and near-collision risk with statistical methods. In this study, by applying factor analysis to variables extracted from the vessel encounter data, we determine the main factors affecting vessel collision risk. To calculate each factor, principal component analysis was used for the factor analysis after normalization and standardization of the vessel encounter variables. As a result of the factor analysis, the main factors are summarized as the vessel approach factor and the collision avoidance variance factor.
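
A small Python sketch of the pipeline described in this abstract: standardize the encounter variables, then extract principal components whose loadings indicate the dominant factors. The variable matrix is a placeholder; the paper's actual encounter variables are not reproduced.

```python
import numpy as np

def principal_factors(X, n_factors=2):
    """Standardize the encounter variables (rows = encounters, columns = variables)
    and extract principal components as a simple stand-in for factor analysis."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)         # normalization / standardization
    cov = np.cov(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_factors]    # largest-variance components first
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
    return eigvals[order], loadings
```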

A Novel Journal Evaluation Metric that Adjusts the Impact Factors across Different Subject Categories

  • Pyo, Sujin;Lee, Woojin;Lee, Jaewook
    • Industrial Engineering and Management Systems
    • /
    • v.15 no.1
    • /
    • pp.99-109
    • /
    • 2016
  • During the last two decades, the impact factor has been widely used as a journal evaluation metric that differentiates the influence of a specific journal from that of other journals. However, the impact factor does not provide a reliable comparison between journals in different subject categories. For example, higher impact factors are given to biology and the general sciences than to traditional engineering and the social sciences. This study first analyzes the trend in the time series of the impact factors of the journals listed in Journal Citation Reports during the last decade. It then proposes new journal evaluation metrics that adjust the impact factors across different subject categories. The proposed metrics can provide a consistent measure that mitigates the differences in impact factors among subject categories. On the basis of experimental results, we recommend the most reliable and appropriate metric for evaluating journals in a way that is less dependent on the characteristics of subject categories.
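
The abstract does not spell out the adjustment itself, so the sketch below only illustrates one simple way to make impact factors comparable across subject categories (a within-category percentile rank); it is not the metric proposed in the paper, and the column names are hypothetical.

```python
import pandas as pd

def category_adjusted_if(df):
    """Convert each journal's impact factor to a percentile rank within its own
    subject category, so journals in low-IF fields are not penalized.
    df: DataFrame with columns 'journal', 'category', 'impact_factor'."""
    df = df.copy()
    df["adjusted_if"] = df.groupby("category")["impact_factor"].rank(pct=True)
    return df
```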

Data Cleaning and Integration of Multi-year Dietary Survey in the Korea National Health and Nutrition Examination Survey (KNHANES) using Database Normalization Theory (데이터베이스 정규화 이론을 이용한 국민건강영양조사 중 다년도 식이조사 자료 정제 및 통합)

  • Kwon, Namji;Suh, Jihye;Lee, Hunjoo
    • Journal of Environmental Health Sciences
    • /
    • v.43 no.4
    • /
    • pp.298-306
    • /
    • 2017
  • Objectives: Since 1998, the Korea National Health and Nutrition Examination Survey (KNHANES) has been conducted in order to investigate the health and nutritional status of Koreans. The food intake data of individuals in the KNHANES have also been utilized as a source dataset for the risk assessment of chemicals via food. To improve the reliability of intake estimation and prevent missing data for less frequently reported foods, the structure of the integrated long-standing dataset is significant. However, it is difficult to merge multi-year survey datasets because of ineffective cleaning processes for handling the extensive number of codes for each food item, along with changes in dietary habits over time. Therefore, this study aims at 1) cleaning abnormal data, 2) generating integrated long-standing raw data, and 3) contributing to the production of consistent dietary exposure factors. Methods: The codebooks, the guideline book, and the raw intake data from KNHANES V and VI were used for analysis. Violations of the primary key constraint and of the first to third normal forms in relational database theory were tested for the codebook and the structure of the raw data, respectively. Afterwards, the cleaning process was executed on the raw data using the integrated codes. Results: Duplication of key records and abnormalities in table structures were observed. After adjusting according to the suggested method above, the codes were corrected and integrated codes were newly created. Finally, we were able to clean the raw data provided by respondents to the KNHANES survey. Conclusion: The results of this study will contribute to the integration of the multi-year datasets and help improve the data production system by clarifying, testing, and verifying the primary key, the integrity of the codes, and the primitive data structure according to database normalization theory in the national health data.
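
A minimal sketch of the kind of primary-key check described in the Methods above: given a candidate key for a food-code table, report null and duplicated key rows (a violation corresponds to the key-record duplication reported in the Results). The table and column names are hypothetical.

```python
import pandas as pd

def check_primary_key(df, key_cols):
    """Check a candidate primary key: it must be unique and non-null."""
    null_rows = df[df[key_cols].isnull().any(axis=1)]
    dup_rows = df[df.duplicated(subset=key_cols, keep=False)]
    return null_rows, dup_rows

# illustrative usage with a hypothetical codebook table
codebook = pd.DataFrame({"food_code": [101, 102, 102],
                         "food_name": ["rice", "kimchi", "kimchi"]})
nulls, dups = check_primary_key(codebook, ["food_code"])
print(len(nulls), len(dups))   # 0 null rows, 2 rows sharing a duplicated key
```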

The Algorithm Design and Implementation of Microarray Data Classification using the Bayesian Method (베이지안 기법을 적용한 마이크로어레이 데이터 분류 알고리즘 설계와 구현)

  • Park, Su-Young;Jung, Chai-Yeoung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.12
    • /
    • pp.2283-2288
    • /
    • 2006
  • Recent developments in bioinformatics technology have made micro-level experiments possible, so the expression pattern of an entire genome can be observed on a chip and the interactions of thousands of genes can be analyzed at the same time. DNA microarray technology thus opens new directions for understanding complex organisms, and effective methods are therefore required to analyze the enormous amount of gene information obtained with it. In this paper, we used sample data from the bioinformatics core group at Harvard University. We designed and implemented a system that classifies the samples into two classes using a Bayesian algorithm, with ASA as the feature extraction method, after a normalization process that reduces or removes the noise introduced by various factors in microarray experiments, and then evaluates the classification accuracy. The system achieved an accuracy of 98.23% after Lowess normalization.
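
As an illustration of the classification step only, the sketch below trains a Gaussian naive Bayes classifier on a synthetic, already-normalized expression matrix and reports two-class accuracy; the paper's ASA feature extraction, Lowess normalization, and the actual Harvard dataset are not reproduced, and the accuracy printed here is meaningless for random data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# hypothetical expression matrix: rows = samples, columns = genes
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 500))          # stand-in for normalized microarray data
y = rng.integers(0, 2, size=60)         # two classes, as in the abstract

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)      # Bayesian (naive Bayes) classifier
print("accuracy:", clf.score(X_te, y_te))
```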

Evaluation of Candidate Housekeeping Genes for the Normalization of RT-qPCR Analysis using Developing Embryos and Prolarvae in Russian Sturgeon Acipenser gueldenstaedtii (러시아 철갑상어(Acipenser gueldenstaedtii) 발생 시료의 RT-qPCR 분석을 위한 내재 대조군 유전자의 선정)

  • Nam, Yoon Kwon;Lee, Sang Yoon;Kim, Eun Jeong
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.51 no.1
    • /
    • pp.95-106
    • /
    • 2018
  • To evaluate appropriate reference genes for the normalization of quantitative reverse transcription PCR (RT-qPCR) data with embryonic and larval samples from Russian sturgeon Acipenser gueldenstaedtii, the expression stability of eight candidate housekeeping genes, including beta-actin (ACTB), elongation factor-1A (EF1A), glyceraldehyde-3-phosphate dehydrogenase (GAPDH), histone 2A (H2A), ribosomal protein L5 (RPL5), ribosomal protein L7 (RPL7), succinate dehydrogenase (SDHA), and ubiquitin-conjugating enzyme E2 (UBE2A), was tested using embryonic samples from 12 developmental stages and larval samples from 11 ontogenic stages. Based on the stability rankings from three statistical software packages, geNorm, NormFinder, and BestKeeper, the expression stability of the embryonic subset was ranked as UBE2A>H2A>SDHA>GAPDH>RPL5>EF1A>ACTB>RPL7. On the other hand, the ranking in the larval subset was determined as UBE2A>GAPDH>SDHA>RPL5>RPL7>H2A>EF1A>ACTB. When the two subsets were combined, the overall ranking was UBE2A>SDHA>H2A>RPL5>GAPDH>EF1A>ACTB>RPL7. Taken together, our data suggest that UBE2A and SDHA are recommended as suitable references for developmental and ontogenic samples of this sturgeon species, whereas traditional housekeepers such as ACTB and GAPDH may not be suitable candidates.
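
For reference, the geNorm stability measure underlying one of the rankings above can be computed as sketched below: for each candidate gene, M is the average standard deviation of its pairwise log-expression ratios with the other candidates across samples (lower M = more stable). This is a minimal re-implementation under that definition; the published geNorm tool additionally iterates by excluding the least stable gene.

```python
import numpy as np

def genorm_m_values(expr):
    """geNorm-style stability measure M for candidate reference genes.

    expr: (samples, genes) array of relative expression quantities.
    M_j is the mean, over the other genes k, of the standard deviation of
    log2(expr_j / expr_k) across samples."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.zeros(n_genes)
    for j in range(n_genes):
        sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        m[j] = np.mean(sds)
    return m
```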

A Study on the Training Optimization Using Genetic Algorithm -In case of Statistical Classification considering Normal Distribution- (유전자 알고리즘을 이용한 트레이닝 최적화 기법 연구 - 정규분포를 고려한 통계적 영상분류의 경우 -)

  • 어양담;조봉환;이용웅;김용일
    • Korean Journal of Remote Sensing
    • /
    • v.15 no.3
    • /
    • pp.195-208
    • /
    • 1999
  • In the classification of satellite images, the representativeness of the training data for each class is a very important factor affecting classification accuracy. Hence, in order to improve classification accuracy, it is necessary to optimize the pre-classification stage, which determines the classification parameters, rather than to develop classifiers alone. In this study, the normality of the training data is evaluated at the pre-classification stage using SPOT XS and LANDSAT TM imagery. The correlation coefficient of the multivariate Q-Q plot at the 5% significance level and the variance of the initial training data are used as the objective function of a genetic algorithm in the training normalization process. As a result of normalizing the training data with the genetic algorithm, it was shown that, for the study area, the mean and variance of each class shifted toward those of the population, demonstrating that the distribution of each class can be predicted.
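
A sketch of the normality measure named in this abstract: the correlation coefficient of the multivariate Q-Q plot, computed here from squared Mahalanobis distances of the training pixels against chi-square quantiles (a common multivariate normality check). How the genetic algorithm combines this with the training variance into a single fitness is the paper's design choice and is not reproduced.

```python
import numpy as np
from scipy import stats

def qq_correlation(X):
    """Correlation coefficient of the multivariate Q-Q plot for training pixels X
    (rows = pixels, columns = bands); values near 1 indicate normality."""
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.sort(np.einsum("ij,jk,ik->i", X - mu, cov_inv, X - mu))  # Mahalanobis^2
    n, p = X.shape
    q = stats.chi2.ppf((np.arange(1, n + 1) - 0.5) / n, df=p)        # chi2 quantiles
    return np.corrcoef(d2, q)[0, 1]
```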

Estimation of Road Sections Vulnerable to Black Ice Using Road Surface Temperatures Obtained by a Mobile Road Weather Observation Vehicle (도로기상차량으로 관측한 노면온도자료를 이용한 도로살얼음 취약 구간 산정)

  • Park, Moon-Soo;Kang, Minsoo;Kim, Sang-Heon;Jung, Hyun-Chae;Jang, Seong-Been;You, Dong-Gill;Ryu, Seong-Hyen
    • Atmosphere
    • /
    • v.31 no.5
    • /
    • pp.525-537
    • /
    • 2021
  • Black ice on road surfaces in winter tends to cause severe accidents. It is very difficult to detect black ice events in advance because of their local nature and their sensitivity to surface and upper-air meteorological variables. This study develops a methodology for detecting road sections vulnerable to black ice using road surface temperature data obtained from a mobile road weather observation vehicle. Seven experiments were conducted on the route from Nam-Wonju IC to Nam-Andong IC (132.5 km) on the Jungang Expressway during the period from December 2020 to February 2021. First, the temporal road surface temperature data were converted to spatial data with a 50 m resolution. The spatial road surface temperature was then normalized to zero mean and unit standard deviation using a simple normalization, a linear de-trend and normalization, and a low-pass filter and normalization. The resulting road thermal map was calculated in terms of road surface temperature differences. A road ice index was proposed using the normalized road temperatures and their horizontal differences. Road sections vulnerable to black ice were derived from the road ice indices and verified with respect to road geometry, sky view factor, and other conditions. It was found that black ice could occur not only on bridges, but also on roads with a low sky view factor. These results are expected to be applicable to black ice alarm services for drivers.
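
An illustrative sketch of a road ice index in the spirit described above: normalize the 50 m road surface temperatures, compute a local horizontal difference, and combine them so that sections that are both colder than normal and colder than their surroundings score higher. The specific combination and weights are assumptions, not the published index.

```python
import numpy as np

def road_ice_index(road_temp, window=5):
    """Illustrative road ice index from road surface temperatures sampled every 50 m.

    road_temp: 1-D array of road surface temperatures along the route.
    Returns an index where larger values indicate sections more vulnerable to black ice."""
    z = (road_temp - road_temp.mean()) / road_temp.std()            # simple normalization
    local_mean = np.convolve(road_temp, np.ones(window) / window, mode="same")
    diff = road_temp - local_mean                                   # horizontal difference
    dz = (diff - diff.mean()) / diff.std()
    return -(z + dz) / 2.0                                          # colder-than-normal and colder-than-neighbors
```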