• Title/Summary/Keyword: approximation model

A Study on a Model Parameter Compensation Method for Noise-Robust Speech Recognition (잡음환경에서의 음성인식을 위한 모델 파라미터 변환 방식에 관한 연구)

  • Chang, Yuk-Hyeun;Chung, Yong-Joo;Park, Sung-Hyun;Un, Chong-Kwan
    • The Journal of the Acoustical Society of Korea / v.16 no.5 / pp.112-121 / 1997
  • In this paper, we study a model parameter compensation method for noise-robust speech recognition. Model parameters are compensated on a sentence-by-sentence basis and no other information is used. Parallel model combination (PMC), a well-known model parameter compensation algorithm, is implemented and used as a reference for performance comparison. We also propose a modified PMC method that tunes the model parameters with an association factor controlling the average variability of the Gaussian mixtures and the variability of each single Gaussian mixture per state, for more robust modeling. In addition, we obtain a re-estimation solution for the environmental variables based on the expectation-maximization (EM) algorithm in the cepstral domain, following the vector Taylor series (VTS) approach. To evaluate the performance of the model compensation methods, we perform experiments on speaker-independent isolated word recognition. The noise sources used are white Gaussian noise and driving-car noise, and corrupted speech is obtained by adding the noise to clean speech at various signal-to-noise ratios (SNR). The noise mean and variance are modeled from 3 frames of noise data. The experimental results show that the VTS approach is superior to the other methods. The zero-order VTS approach is similar to the modified PMC method in that it adapts only the mean vectors, but its recognition rate is higher than those of PMC and the modified PMC method based on the log-normal approximation.
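The log-normal approximation referred to above combines the clean-speech and noise Gaussians in the linear spectral domain, where the two sources are additive. A minimal sketch of that combination is given below, assuming log-spectral-domain statistics and omitting the cepstral-to-log-spectral DCT conversion and the gain-matching details discussed in the paper.

```python
import numpy as np

def pmc_lognormal(mu_s, var_s, mu_n, var_n, g=1.0):
    """Log-normal PMC combination of a clean-speech Gaussian and a noise Gaussian.

    Sketch in the log-spectral domain (the cepstral <-> log-spectral DCT step is
    omitted). mu_s/var_s: clean HMM mean and diagonal variance; mu_n/var_n:
    noise statistics; g: gain-matching term (assumed fixed here).
    """
    # Moments of the corresponding log-normal distributions (linear spectral domain)
    m_s = np.exp(mu_s + 0.5 * var_s)
    v_s = m_s ** 2 * (np.exp(var_s) - 1.0)
    m_n = np.exp(mu_n + 0.5 * var_n)
    v_n = m_n ** 2 * (np.exp(var_n) - 1.0)

    # Speech and noise are additive in the linear spectral domain
    m_y = g * m_s + m_n
    v_y = (g ** 2) * v_s + v_n

    # Map the combined moments back to the log-spectral domain
    var_y = np.log(v_y / m_y ** 2 + 1.0)
    mu_y = np.log(m_y) - 0.5 * var_y
    return mu_y, var_y
```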

Stereo Image-based 3D Modelling Algorithm through Efficient Extraction of Depth Feature (효율적인 깊이 특징 추출을 이용한 스테레오 영상 기반의 3차원 모델링 기법)

  • Ha, Young-Su;Lee, Heng-Suk;Han, Kyu-Phil
    • Journal of KIISE: Computer Systems and Theory / v.32 no.10 / pp.520-529 / 2005
  • A feature-based 3D modeling algorithm is presented in this paper. Since conventional methods use depth-based techniques, they need much time for image matching to extract depth information. Although feature-based methods have a lower computational load than depth-based ones, they still require the modeling error to be calculated over all pixels within each triangle, which also increases the computation time. Therefore, the proposed algorithm consists of three phases, namely initial 3D model generation, model evaluation, and model refinement, in order to acquire an efficient 3D model. Intensity gradients and incremental Delaunay triangulation are used in the initial model generation. In this phase, a morphological edge operator is adopted for fast edge filtering, and the incremental Delaunay triangulation is modified to decrease the computation time by avoiding the error calculation over all pixels and by selecting a vertex near the centroid of the previous triangle. After the model generation, sparse vertices are matched, and the faces are then evaluated in the evaluation stage with respect to their size, approximation error, and disparity fluctuation. Thereafter, the faces with a large error are selectively refined into smaller faces. Experimental results show that the proposed algorithm can acquire an adaptive model with fewer modeling errors for both smooth and abrupt areas and can remarkably reduce the model acquisition time.
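The evaluation stage described above measures, for each triangular face, how well the planar face approximates the underlying disparity data. A rough sketch of that idea follows; a dense disparity map is assumed here purely to illustrate the error measure, and scipy's batch Delaunay triangulation stands in for the paper's modified incremental triangulation.

```python
import numpy as np
from scipy.spatial import Delaunay

def evaluate_faces(feat_xy, feat_disp, disp_map, error_thresh=1.0):
    """Triangulate feature points and flag faces whose planar approximation
    of the disparity data has a large mean error (candidates for refinement).

    feat_xy: Nx2 float array of feature pixel coordinates (x, y)
    feat_disp: N disparities at the feature points
    disp_map: hypothetical dense disparity image, used only for the error check
    """
    tri = Delaunay(feat_xy)                      # mesh over the feature points
    h, w = disp_map.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.column_stack([xs.ravel(), ys.ravel()])
    owner = tri.find_simplex(pix)                # triangle index per pixel (-1 = outside)

    refine = []
    for f, simplex in enumerate(tri.simplices):
        inside = owner == f
        if not inside.any():
            continue
        # Fit a plane d = a*x + b*y + c through the three vertex disparities
        v = feat_xy[simplex]
        A = np.column_stack([v, np.ones(3)])
        a, b, c = np.linalg.solve(A, feat_disp[simplex])
        pred = a * pix[inside, 0] + b * pix[inside, 1] + c
        err = np.abs(pred - disp_map.ravel()[inside]).mean()
        if err > error_thresh:
            refine.append(f)                     # large approximation error
    return tri, refine
```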

Approximation of Multiple Trait Effective Daughter Contribution by Dairy Proven Bulls for MACE (젖소 국제유전능력 평가를 위한 종모우별 다형질 Effective Daughter Contribution 추정)

  • Cho, Kwang-Hyun;Choi, Tae-Jeong;Cho, Chung-Il;Park, Kyung-Do;Do, Kyoung-Tag;Oh, Jae-Don;Lee, Hak-Kyo;Kong, Hong-Sik;Lee, Joon-Ho
    • Journal of Animal Science and Technology / v.55 no.5 / pp.399-403 / 2013
  • This study was conducted to investigate the basic concept of the multiple trait effective daughter contribution (MTEDC) for dairy cattle sires and to calculate the effective daughter contribution (EDC) by applying a five-lactation multiple trait model to the milk yield test records of daughters for the Multiple-trait Across Country Evaluation (MACE). Milk yield data and pedigree information of 301,551 cows that were the progeny of 2,046 Korean and imported dairy bulls were collected from the National Agricultural Cooperative Federation and used in this study. For the MTEDC approximation, the reliability of the breeding value was separated into the parent average, the own yield deviation, and the mate-adjusted progeny contribution, and the EDC was then calculated by lactation from these reliabilities. The average numbers of recorded daughters per sire were 140.57, 94.24, 55.14, 29.20, and 14.06 from the first to the fifth lactation, respectively, whereas the average EDC per sire obtained with the five-lactation multiple trait model was 113.49, 89.28, 73.56, 54.02, and 35.08; that is, the decrease of the EDC in the later lactations was considerably smaller than the decrease in the number of recorded daughters per sire. These findings indicate that daughters without late-lactation records still contribute information through the genetic correlations of the multiple trait model. Because the EDC is directly related to the reliability of a sire's estimated breeding value, understanding the MTEDC algorithm and continuously monitoring the EDC are required for correct MACE application of the five-lactation multiple trait model.
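EDC values of this kind are conventionally obtained by converting a reliability into an equivalent number of daughters. A minimal sketch of that standard conversion is shown below; it does not reproduce the paper's multi-trait separation into parent-average, own-yield-deviation, and mate-adjusted progeny parts, and the heritability and reliability values in the example are illustrative only.

```python
def edc_from_reliability(rel_progeny, h2):
    """Convert a sire's progeny-contribution reliability into an effective
    daughter contribution (EDC), using the standard reliability-to-effective-
    record conversion lambda * R / (1 - R) with lambda = (4 - h^2) / h^2.
    This is a generic sketch, not the paper's multi-trait procedure."""
    lam = (4.0 - h2) / h2
    return lam * rel_progeny / (1.0 - rel_progeny)

# Illustrative example: reliability 0.90 for a trait with heritability 0.30
print(round(edc_from_reliability(0.90, 0.30), 1))   # about 111 effective daughters
```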

A Comparative Study on Approximate Models and Sensitivity Analysis of Active Type DSF for Offshore Plant Float-over Installation Using Orthogonal Array Experiment (직교배열실험을 이용한 해양플랜트 플로트오버 설치 작업용 능동형 DSF의 민감도해석과 근사모델 비교연구)

  • Kim, Hun-Gwan;Song, Chang Yong
    • Journal of the Korea Convergence Society / v.12 no.3 / pp.187-196 / 2021
  • The paper deals with a comparative study of how various approximate models represent the design space, together with a sensitivity analysis using orthogonal array experiments, in the structural design of the active-type DSF developed for the float-over installation of offshore plants. This study aims to propose an orthogonal-array-based design methodology that can efficiently explore an optimum design case and generate an accurate approximate model. The thickness sizes of the main structural members were taken as the design factors, and the structure weight and the strength performances were considered as the output responses. The quantitative effect of each design factor on the output responses was evaluated using the orthogonal array experiment, and the best design case for improving the structural design with weight minimization was identified. From the orthogonal array experiment results, various approximate models, such as a response surface model, a Kriging model, a Chebyshev orthogonal polynomial model, and a radial basis function based neural network model, were generated. The experiment results from the orthogonal array method were validated against the approximate modeling results. It was found that, among the approximate models, the radial basis function based neural network model was able to approximate the design space of the active-type DSF with the highest accuracy.
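As an illustration of the surrogate-modeling step, the sketch below fits a radial basis function approximation to responses sampled at orthogonal-array design points. The design table and response values are hypothetical, and scipy's RBF interpolator is used as a stand-in for the paper's radial basis function based neural network model.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical three-factor orthogonal-array design: each row is one experiment
# with three member-thickness factors (mm); the response is structure weight.
X = np.array([[10, 10, 10], [10, 15, 20], [10, 20, 15],
              [15, 10, 20], [15, 15, 15], [15, 20, 10],
              [20, 10, 15], [20, 15, 10], [20, 20, 20]], dtype=float)
weight = np.array([1.20, 1.45, 1.43, 1.52, 1.55, 1.49,
                   1.71, 1.66, 1.90])          # illustrative values only

# Radial-basis-function surrogate of the design space (kernel choice is an
# assumption here, not taken from the paper).
rbf = RBFInterpolator(X, weight, kernel='thin_plate_spline')

# Query an unseen thickness combination inside the design space.
print(rbf(np.array([[12.0, 17.0, 14.0]])))
```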

An Adaptive Input Data Space Parting Solution to the Synthesis of Neuro-Fuzzy Models

  • Nguyen, Sy Dzung;Ngo, Kieu Nhi
    • International Journal of Control, Automation, and Systems / v.6 no.6 / pp.928-938 / 2008
  • This study presents an approach for approximating an unknown function from a numerical data set based on the synthesis of a neuro-fuzzy model. An adaptive input data space parting method, which is used for building hyperbox-shaped clusters in the input data space, is proposed. Each data cluster is implemented here as a fuzzy set using a membership function (MF) with a hyperbox core that is constructed from a min vertex and a max vertex. The focus of the proposed approach is to increase the degree of fit between the characteristics of the given numerical data set and the established fuzzy sets used to approximate it. A new cutting procedure, named NCP, is proposed. The NCP is an adaptive cutting procedure that uses a pure function $\Psi$ and a penalty function $\tau$ to direct the input data space parting process. New algorithms named CSHL, HLM1, and HLM2 are presented. The first algorithm, CSHL, built on the cutting procedure NCP, is used to create the hyperbox-shaped data clusters; the second and third algorithms are used to establish adaptive neuro-fuzzy inference systems. A series of numerical experiments is performed to assess the efficiency of the proposed approach.
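A hyperbox-core membership function of the kind described above can be sketched as follows. The exact MF shape and the decay parameter are assumptions made for illustration; they are not the paper's definition.

```python
import numpy as np

def hyperbox_membership(x, v_min, v_max, gamma=1.0):
    """Membership of point x in a fuzzy set whose core is the hyperbox defined
    by a min vertex and a max vertex (as in the clusters built by CSHL).
    Minimal sketch: membership is 1 inside the box and decays linearly with the
    distance to the box in each dimension, aggregated with a min operator."""
    below = np.maximum(v_min - x, 0.0)     # how far x falls below the core
    above = np.maximum(x - v_max, 0.0)     # how far x exceeds the core
    per_dim = 1.0 - np.minimum(1.0, gamma * (below + above))
    return float(np.min(per_dim))

# Example: a 2-D hyperbox core [0.2, 0.6] x [0.3, 0.8]
print(hyperbox_membership(np.array([0.4, 0.5]),
                          np.array([0.2, 0.3]),
                          np.array([0.6, 0.8])))   # inside the core -> 1.0
```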

Application of the Taguchi Method to the Analysis of the Numerical Parameters Influencing Springback Characteristics (스프링백 특성에 영향을 미치는 수치변수의 분석을 위한 다구치 실험계획법의 응용)

  • Kim, Hyung-Jong;Jeon, Tae-Bo
    • Journal of Industrial Technology / v.20 no.A / pp.211-218 / 2000
  • It is desirable but difficult to predict springback quantitatively and accurately for successful tool and process design in sheet stamping operations. The result of a springback analysis by the finite element method (FEM) is sensitively influenced by numerical factors such as the blank element size, the number of integration points, the punch velocity, and the contact algorithm. In the present work, a parametric study by the Taguchi method is performed in order to quantitatively evaluate the influence of these numerical factors on the result of the springback analysis and to obtain the combination of numerical factors that gives the best approximation to the experimental data. Since springback is determined by the residual stress after the forming process, it is important to evaluate the stress distribution accurately. The oscillation in the time-history curve of the stress obtained by the dynamic-explicit finite element method indicates that the stress solution at the termination time is in a very unstable state. Therefore, a variability study is also carried out in order to assess the stability of the implicit springback analysis that starts from the stress solution of the explicit forming simulation. The U-draw bending process, one of the NUMISHEET '93 benchmark problems, is adopted as the application model because it is the most widely used problem for evaluating springback characteristics.
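As a sketch of the Taguchi analysis step, the code below computes smaller-the-better signal-to-noise ratios over a standard L9 orthogonal array and picks the best level of each numerical factor. The factor assignment and the springback deviations are hypothetical values used only to show the procedure.

```python
import numpy as np

# Standard L9(3^4) orthogonal array (levels 0..2) for four numerical factors,
# e.g. blank element size, integration points, punch velocity, contact parameter.
L9 = np.array([[0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
               [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
               [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0]])

# Illustrative springback deviations from experiment for each run (degrees).
y = np.array([2.1, 1.6, 1.9, 1.4, 1.2, 1.8, 1.7, 1.5, 2.0])

# Smaller-the-better S/N ratio per run: SN = -10 * log10(mean(y^2))
sn = -10.0 * np.log10(y ** 2)

# Average S/N per factor level; the level with the largest mean S/N identifies
# the factor setting that best approximates the experimental data.
for f in range(L9.shape[1]):
    level_means = [sn[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(f"factor {f}: best level = {int(np.argmax(level_means))}")
```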

Channel Capacity Analysis of SSW Technique in Wireless Channels for ITS System (ITS 시스템을 위한 무선 채널에서의 SSW 기법의 채널용량 분석)

  • Kim, Joo-Chan;Bae, Jung-Nam;Kim, Jin-Young
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.9 no.2 / pp.68-74 / 2010
  • In this paper, we analyze and simulate the channel capacity of a spread spectrum watermarking (SSW) technique in wireless fading channels for application to ITS systems. Before the SSW technique can be applied, a channel capacity analysis is necessary so that the effect on the existing system is kept to a minimum. We derive the channel capacity as a closed-form approximation formula for the Rayleigh and Rician fading channel models. Numerical results are presented and show that the approximation is accurate.
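For context, the ergodic capacity of a plain Rayleigh fading channel has a well-known closed form, C = log2(e) * exp(1/rho) * E1(1/rho) at average SNR rho, which can be checked against Monte Carlo simulation as sketched below. This is the generic fading-capacity expression, not the paper's SSW-specific derivation.

```python
import numpy as np
from scipy.special import exp1

def rayleigh_capacity_closed_form(snr_avg):
    """Ergodic capacity (bit/s/Hz) of a Rayleigh fading channel at average
    linear SNR snr_avg: C = log2(e) * exp(1/rho) * E1(1/rho)."""
    rho = snr_avg
    return np.log2(np.e) * np.exp(1.0 / rho) * exp1(1.0 / rho)

def rayleigh_capacity_monte_carlo(snr_avg, n=200_000, seed=0):
    """Monte Carlo reference: average of log2(1 + rho*|h|^2) with |h|^2 ~ Exp(1)."""
    rng = np.random.default_rng(seed)
    gain = rng.exponential(1.0, n)
    return np.log2(1.0 + snr_avg * gain).mean()

rho = 10.0 ** (10.0 / 10.0)   # 10 dB average SNR
print(rayleigh_capacity_closed_form(rho), rayleigh_capacity_monte_carlo(rho))
```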

Simpson Style Caricature based on MLS

  • Lee, Jiye;Byun, Hae Won
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.6 / pp.1449-1462 / 2013
  • We present a novel approach to producing facial caricatures in the Simpson cartoon style based on Moving Least Squares (MLS). We take advantage of the caricature stylization rules of the caricature artist Justin: our method produces a Simpson-style cartoon character that resembles the user's features by applying Justin's technique, a set of caricature stylization rules, and transforms the input photo image into a Simpson-style caricature using the MLS approximation. The unique characteristics of the user in the photo are detected by comparing the mean face features with the input face features extracted by an AAM (Active Appearance Model). To exaggerate the detected unique characteristics, we set up exaggeration rules based on Justin's technique. In addition, during the cartooning process, the user's hair and accessories are added to the deformed image to achieve a close resemblance. Our method yields a reliable and stylized caricature through the exaggeration rules of an actual caricature artist's technique. With this approach, a Simpson-style cartoon caricature that resembles the user's features can easily be created by combining caricature generation with existing cartoon research.
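The MLS approximation mentioned above deforms image points so that a set of source control points maps onto their exaggerated targets. A compact sketch of the affine variant of MLS point deformation is given below; the caricature exaggeration rules themselves are not modelled, and the control points in the example are hypothetical.

```python
import numpy as np

def mls_affine_deform(v, p, q, alpha=1.0, eps=1e-8):
    """Affine moving-least-squares deformation of a point v, given source
    control points p (Nx2) and target control points q (Nx2). This follows
    the standard affine MLS formulation for image deformation; it is a
    sketch of the warp only, not the paper's full caricature pipeline."""
    w = 1.0 / (np.sum((p - v) ** 2, axis=1) + eps) ** alpha   # distance weights
    p_star = (w[:, None] * p).sum(0) / w.sum()                # weighted centroids
    q_star = (w[:, None] * q).sum(0) / w.sum()
    ph, qh = p - p_star, q - q_star
    A = (w[:, None, None] * ph[:, :, None] * ph[:, None, :]).sum(0)  # sum w p^T p
    B = (w[:, None, None] * ph[:, :, None] * qh[:, None, :]).sum(0)  # sum w p^T q
    M = np.linalg.solve(A, B)
    return (v - p_star) @ M + q_star

# Example: stretch one control point upward and deform a nearby query point.
p = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
q = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.4]])
print(mls_affine_deform(np.array([0.1, 0.8]), p, q))
```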

Validation of nuclide depletion capabilities in Monte Carlo code MCS

  • Ebiwonjumi, Bamidele;Lee, Hyunsuk;Kim, Wonkyeong;Lee, Deokjung
    • Nuclear Engineering and Technology / v.52 no.9 / pp.1907-1916 / 2020
  • In this work, the depletion capability implemented in the Monte Carlo code MCS is investigated to predict the isotopic compositions of spent nuclear fuel (SNF). This capability is validated by comparing MCS calculation results to post-irradiation examination (PIE) data obtained from one pressurized water reactor (PWR). The depletion analysis is performed with the ENDF/B-VII.1 library and a fuel assembly model. The transmutation equation is solved by the Chebyshev Rational Approximation Method (CRAM) with a depletion chain of 3,820 isotopes. 18 actinides and 19 fission products are analyzed in 14 SNF samples, and the effect of statistical uncertainties on the calculated number densities is discussed. On average, most of the actinides and fission products analyzed are predicted within ±6% of the experiment. MCS depletion results are also compared to those of other depletion codes based on publicly reported information in the literature, and the code-to-code analysis shows comparable accuracy. Overall, it is demonstrated that the depletion capability in MCS can be reliably applied to the prediction of the SNF isotopic inventory.
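CRAM evaluates the matrix exponential of the burnup matrix as a partial-fraction rational approximation, which reduces a depletion step to a handful of linear solves. The sketch below shows that structure only; the tabulated CRAM coefficients (for example the order-16 set published by Pusa) are not reproduced here and must be supplied by the caller, and a dense solve stands in for the sparse factorization used in practice.

```python
import numpy as np

def cram_depletion_step(A, n0, dt, alpha0, alpha, theta):
    """One depletion step n(dt) = exp(A*dt) @ n0 evaluated with the Chebyshev
    Rational Approximation Method: n ~= alpha0*n0 + 2*Re(sum_k alpha_k *
    (A*dt - theta_k I)^-1 n0). A is the burnup (transmutation) matrix and n0
    the nuclide number-density vector; alpha0, alpha, theta are the published
    partial-fraction CRAM coefficients (not reproduced here)."""
    At = A * dt
    eye = np.eye(A.shape[0])
    n = alpha0 * n0.astype(complex)
    for a_k, t_k in zip(alpha, theta):           # one term per conjugate pole pair
        # Solve (A*dt - theta_k I) x = n0 instead of forming an inverse
        x = np.linalg.solve(At - t_k * eye, n0)
        n += 2.0 * a_k * x
    return n.real                                # number densities are real-valued

# In practice A covers thousands of nuclides (3,820 in the paper) and is sparse,
# so the dense solve above would be replaced by a sparse LU factorization.
```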

A Study on the Development of a Program to Body Circulation Measurement Using the Machine Learning and Depth Camera

  • Choi, Dong-Gyu;Jang, Jong-Wook
    • International Journal of Internet, Broadcasting and Communication / v.12 no.1 / pp.122-129 / 2020
  • The circumference of the body is not only an indicator used when buying clothes in daily life but also an important factor that can improve the effectiveness of treatment once the shape of the body has been determined in a hospital. Several measurement tools and methods exist for obtaining it; however, measuring by hand for accurate identification takes a lot of time compared with what is expected in modern advanced societies. In addition, current automatic body scanning equipment is generally not easy to use because of its large volume or high price. In this paper, the OpenPose model, a deep learning-based skeleton tracking method, is used to solve the problems of the previous methods and for ease of application. Joints are located from the depth camera data and combined with reference data on the measurement parts provided by hospitals to fit an approximation, in order to develop a program that can measure the circumference of the body in a lighter and easier way by using the elliptical circumference formula.
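The elliptical circumference formula mentioned above is commonly taken to be Ramanujan's approximation of an ellipse perimeter; a minimal sketch, with illustrative semi-axis values, is shown below.

```python
import math

def ellipse_circumference(a, b):
    """Ramanujan's approximation of an ellipse perimeter, a common choice for
    an 'elliptical circumference formula'. a and b are the semi-axes, e.g.
    half the left-right and front-back extents of a body cross-section
    estimated from the depth camera."""
    return math.pi * (3.0 * (a + b) - math.sqrt((3.0 * a + b) * (a + 3.0 * b)))

# Illustrative example: a waist cross-section about 30 cm wide and 22 cm deep
print(round(ellipse_circumference(15.0, 11.0), 1), "cm")
```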