• Title/Summary/Keyword: conditional test


Study on characteristics of noncontact vibrating displacement sensor (비접촉식 진동 변위센서의 특성에 관한 연구)

  • Cho, C.W.;Cho, S.T.;Yang, K.H.
    • Journal of Power System Engineering
    • /
    • v.15 no.2
    • /
    • pp.13-18
    • /
    • 2011
  • This thesis presents the results of experiments conducted to develop a noncontact vibration displacement sensor for measuring spindle vibration, which is used for condition monitoring of machinery. Care is needed when using an eddy current type displacement sensor because its sensitivity varies with the material of the measured object. While probes for nondestructive inspection exploit penetration of the material by operating in a high frequency range, the eddy current type displacement sensor uses a lower frequency of around 1 MHz. Also, while the nondestructive probe enhances its output by operating in the resonance zone, the vibration displacement sensor avoids the resonance zone and uses the stable zone. Since the converter's oscillator uses the probe as its "L" element, its characteristics change with the associated impedance: if the probe cable is lengthened (impedance increases), the sensitivity declines accordingly. The effect of ambient temperature was small, but the influence of the quality of the sensor coil was large. A frequency response test also demonstrated that the sensitivity decreases as the vibration frequency of the tested material increases, and the maximum measurable frequency was approximately 1 kHz. It was noted that measurement precision can be maintained at the installation site by setting the probe gap within the linear zone.
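
A minimal numerical sketch of the effect described above: the probe coil acts as the "L" element of an LC oscillator, and a longer cable adds impedance that shifts the operating point. All component values below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical probe/oscillator values; the paper's actual circuit is not specified.
L_probe = 50e-6          # probe coil inductance [H]
C_tank = 470e-12         # tank capacitance [F]
cable_c_per_m = 100e-12  # assumed cable capacitance per metre [F/m]

def oscillation_freq(cable_len_m):
    """Resonant frequency of the L-C tank when the probe cable adds shunt capacitance."""
    c_total = C_tank + cable_c_per_m * cable_len_m
    return 1.0 / (2 * np.pi * np.sqrt(L_probe * c_total))

for length in (1, 5, 10, 20):
    f = oscillation_freq(length)
    print(f"cable {length:2d} m -> oscillation frequency {f/1e6:.3f} MHz")
# A longer cable (larger impedance seen by the oscillator) lowers the operating
# frequency, consistent with the reported drop in sensitivity.
```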

A Monte Carlo Comparison of the Small Sample Behavior of Disparity Measures (소표본에서 차이측도 통계량의 비교연구)

  • 홍종선;정동빈;박용석
    • The Korean Journal of Applied Statistics
    • /
    • v.16 no.2
    • /
    • pp.455-467
    • /
    • 2003
  • There has been a long debate about the applicability of the chi-square approximation to statistics based on small sample sizes. Extending the comparison results of Rudas (1986) among the Pearson chi-square Χ$^2$, generalized likelihood ratio G$^2$, and power divergence Ι(2/3) statistics, the recently developed disparity statistics BWHD(1/9), BWCS(1/3), and NED(4/3) are compared and analyzed in this paper. Through Monte Carlo studies of the independence model for two-dimensional contingency tables and of the conditional model and one-variable independence model for three-dimensional tables, simulated 90th and 95th percentage points and approximate 95% confidence intervals for the true percentage points are obtained. It is found that the Χ$^2$, Ι(2/3), and BWHD(1/9) test statistics behave very similarly and appear to be more applicable to small sample sizes than the others.
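
A small Monte Carlo sketch of the comparison idea, simulating Pearson's Χ² and the power-divergence statistic Ι(2/3) under a two-way independence model and comparing simulated percentage points with the chi-square approximation. The BWHD, BWCS, and NED statistics studied in the paper are not reproduced here, and the table size and probabilities are placeholders.

```python
import numpy as np
from scipy.stats import power_divergence, chi2

rng = np.random.default_rng(0)
n, rows, cols = 20, 2, 3                                   # small sample, 2x3 table
cell_p = np.outer([0.5, 0.5], [0.3, 0.3, 0.4]).ravel()     # independence model

pearson, cr_23 = [], []
for _ in range(5000):
    counts = rng.multinomial(n, cell_p).reshape(rows, cols)
    if (counts.sum(0) == 0).any() or (counts.sum(1) == 0).any():
        continue                                           # skip degenerate tables
    expected = counts.sum(1, keepdims=True) * counts.sum(0, keepdims=True) / n
    pearson.append(((counts - expected) ** 2 / expected).sum())          # Pearson X^2
    cr_23.append(power_divergence(counts.ravel(), expected.ravel(),
                                  lambda_=2/3)[0])                       # I(2/3)

df = (rows - 1) * (cols - 1)
print("chi-square approx. 95% point:", chi2.ppf(0.95, df))
print("simulated X^2    95% point:", np.percentile(pearson, 95))
print("simulated I(2/3) 95% point:", np.percentile(cr_23, 95))
```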

Voice Personality Transformation Using a Probabilistic Method (확률적 방법을 이용한 음성 개성 변환)

  • Lee Ki-Seung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.24 no.3
    • /
    • pp.150-159
    • /
    • 2005
  • This paper addresses a voice personality transformation algorithm which makes one person's voice sound as if it were spoken by another person. In the proposed method, one person's voice is represented by LPC cepstrum, pitch period, and speaking rate, and an appropriate transformation rule is constructed for each parameter. A Gaussian Mixture Model (GMM) is used to model one speaker's LPC cepstra, and a conditional probability is used to model the relationship between the two speakers' LPC cepstra. To obtain the parameters of each probabilistic model, a Maximum Likelihood (ML) estimation method is employed. The transformed LPC cepstra are obtained using a Minimum Mean Square Error (MMSE) criterion. Pitch period and speaking rate are used as the parameters for prosody transformation, which is implemented using the ratio of their average values. The proposed method shows performance superior to the previous VQ-based method in objective measures, including the average cepstrum distance reduction ratio and the likelihood increase ratio. In a subjective test, we obtained almost the same correct identification ratio as the previous method and also confirmed that high quality transformed speech is obtained, owing to spectral contours that evolve smoothly over time.
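
A hedged sketch of the GMM-plus-MMSE mapping idea for spectral features: a GMM is fitted to joint source-target vectors, and each source frame is converted via the responsibility-weighted conditional means. Feature extraction, time alignment, and the prosody scaling are omitted; the data and dimensions are made up.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
D = 4                                               # toy cepstral dimension
X = rng.normal(size=(2000, D))                      # source-speaker features (placeholder)
Y = X @ rng.normal(size=(D, D)) + 0.1 * rng.normal(size=(2000, D))  # aligned target features

# Model the joint source-target distribution with a GMM (ML estimation via EM).
gmm = GaussianMixture(n_components=8, covariance_type='full',
                      random_state=0).fit(np.hstack([X, Y]))

def convert(x):
    """MMSE estimate E[y | x]: responsibility-weighted conditional Gaussian means."""
    mu_x, mu_y = gmm.means_[:, :D], gmm.means_[:, D:]
    S_xx, S_yx = gmm.covariances_[:, :D, :D], gmm.covariances_[:, D:, :D]
    lik = np.array([multivariate_normal.pdf(x, mu_x[m], S_xx[m])
                    for m in range(gmm.n_components)])
    w = gmm.weights_ * lik
    w /= w.sum()
    cond = np.array([mu_y[m] + S_yx[m] @ np.linalg.solve(S_xx[m], x - mu_x[m])
                     for m in range(gmm.n_components)])
    return w @ cond                                 # converted cepstral vector

print(convert(X[0]))
```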

Landslide Susceptibility Analysis Using Bayesian Network and Semantic Technology (시맨틱 기술과 베이시안 네트워크를 이용한 산사태 취약성 분석)

  • Lee, Sang-Hoon
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.18 no.4
    • /
    • pp.61-69
    • /
    • 2010
  • The collapse of a slope or cut embankment causes great damage to life and property. Accordingly, it is very important to analyze the spatial distribution of landslide susceptibility when estimating the risk of landslide occurrence. Heuristic, statistical, deterministic, and probabilistic methods have been introduced to produce landslide susceptibility maps. In many cases, however, reliability is low because of insufficient field data, and in the existing methods the qualitative experience and knowledge of experts could not be combined with a quantitative mechanical analysis model. In this paper, a new modeling method for probabilistic landslide susceptibility analysis is proposed, which combines a Bayesian network with an ontology model of experts' knowledge and spatial data. The ontology model, built with a reasoning engine, was automatically converted into the Bayesian network structure. Through conditional probabilistic reasoning over the resulting Bayesian network, landslide susceptibility with its uncertainty was analyzed, and the results were mapped using GIS. The developed Bayesian network was then applied to a test site to verify its effectiveness, and the result matched the mapped landslide trace boundaries with 86.5% accuracy. We expect that general users will be able to perform landslide susceptibility analysis over a wide area without experts' help.
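
A toy sketch of the conditional probabilistic reasoning step over a hand-built network. The node set, states, and probability tables below are invented for illustration; in the paper the structure is derived automatically from the ontology model.

```python
# Tiny Bayesian network: Slope -> Landslide <- Rainfall.
# P(landslide = yes | evidence) is queried by direct enumeration.
p_slope = {'steep': 0.3, 'gentle': 0.7}               # hypothetical priors
p_rain = {'heavy': 0.2, 'light': 0.8}
p_ls = {                                               # hypothetical CPT: P(landslide=yes | slope, rain)
    ('steep', 'heavy'): 0.75, ('steep', 'light'): 0.30,
    ('gentle', 'heavy'): 0.20, ('gentle', 'light'): 0.02,
}

def susceptibility(evidence):
    """P(landslide=yes | evidence), summing out unobserved parent nodes."""
    num = den = 0.0
    for s, ps in p_slope.items():
        for r, pr in p_rain.items():
            if evidence.get('slope', s) != s or evidence.get('rain', r) != r:
                continue
            joint = ps * pr
            num += joint * p_ls[(s, r)]
            den += joint
    return num / den

print(susceptibility({'slope': 'steep'}))                  # rainfall unknown
print(susceptibility({'slope': 'steep', 'rain': 'heavy'})) # full evidence
```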

Long Memory and Cointegration in Crude Oil Market Dynamics (국제원유시장의 동적 움직임에 내재하는 장기기억 특성과 공적분 관계 연구)

  • Kang, Sang Hoon;Yoon, Seong-Min
    • Environmental and Resource Economics Review
    • /
    • v.19 no.3
    • /
    • pp.485-508
    • /
    • 2010
  • This paper examines the long memory property and investigates cointegration in the dynamics of crude oil markets. For these purposes, we apply the joint ARMA-FIAPARCH model with structural break and the vector error correction model (VECM) to three daily crude oil prices: Brent, Dubai and West Texas Intermediate (WTI). In all crude oil markets, the property of long memory exists in their volatility, and the ARMA-FIAPARCH model adequately captures this long memory property. In addition, the results of the cointegration test and VECM estimation indicate a bi-directional relationship between returns and the conditional variance of crude oil prices. This finding implies that the dynamics of returns affect volatility, and vice versa. These findings can be utilized for improving the understanding of the dynamics of crude oil prices and forecasting market risk for buyers and sellers in crude oil markets.
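
A brief sketch of the cointegration and VECM part of such an analysis, using synthetic price series that share a common stochastic trend. The ARMA-FIAPARCH volatility model is not reproduced here, and the series and lag settings are placeholders rather than the authors' specification.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
n = 1000
common = np.cumsum(rng.normal(size=n))             # shared stochastic trend
prices = pd.DataFrame({
    'brent': common + rng.normal(scale=0.5, size=n),
    'dubai': 0.95 * common + rng.normal(scale=0.5, size=n),
    'wti':   1.05 * common + rng.normal(scale=0.5, size=n),
})

# Johansen test for the number of cointegrating relations.
jres = coint_johansen(prices, det_order=0, k_ar_diff=1)
print("trace statistics:   ", jres.lr1)
print("5% critical values: ", jres.cvt[:, 1])

# Fit a VECM with one cointegrating rank and inspect the error-correction terms.
vecm = VECM(prices, k_ar_diff=1, coint_rank=1, deterministic='ci').fit()
print(vecm.alpha)   # adjustment coefficients toward the long-run relation
```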


Model-based Body Motion Tracking of a Walking Human (모델 기반의 보행자 신체 추적 기법)

  • Lee, Woo-Ram;Ko, Han-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.6
    • /
    • pp.75-83
    • /
    • 2007
  • A model-based approach to tracking the limbs of a walking human subject is proposed in this paper. The tracking process begins by building a database of conditional probabilities of motion between the limbs of a walking subject. With a sufficient amount of video footage from various human subjects included in the database, a probabilistic model characterizing the relationships between limb motions is developed. Motion tracking of a test subject begins by identifying and tracking limbs in the surveillance video using edge and silhouette detection methods. When occlusion occurs in any of the tracked limbs, the approach uses the probabilistic motion model in conjunction with the minimum-cost edge and silhouette tracking model to determine the motion of the limb occluded in the image. The method has shown promising results in tracking occluded limbs in the validation tests.
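
A simplified sketch of the occlusion-handling idea: when a limb cannot be observed directly, its most likely motion is selected by combining a conditional-probability table over limb-motion pairs with the matching costs of the tracker. The motion states, probabilities, and costs below are fabricated for illustration.

```python
import numpy as np

motions = ['forward_swing', 'backward_swing', 'stationary']

# Hypothetical P(occluded-limb motion | visible-limb motion); in the paper this
# table would be estimated from a database of walking sequences.
cond_prob = {
    'forward_swing':  np.array([0.10, 0.80, 0.10]),   # limbs tend to counter-swing
    'backward_swing': np.array([0.80, 0.10, 0.10]),
    'stationary':     np.array([0.15, 0.15, 0.70]),
}

def predict_occluded(visible_motion, edge_costs):
    """Combine the motion prior with (hypothetical) edge/silhouette matching costs."""
    prior = cond_prob[visible_motion]
    likelihood = np.exp(-np.asarray(edge_costs))      # lower cost -> higher likelihood
    posterior = prior * likelihood
    posterior /= posterior.sum()
    return motions[int(np.argmax(posterior))], posterior

motion, post = predict_occluded('forward_swing', edge_costs=[2.0, 0.5, 3.0])
print(motion, post.round(3))
```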

Boundary Detection using Adaptive Bayesian Approach to Image Segmentation (적응적 베이즈 영상분할을 이용한 경계추출)

  • Kim Kee Tae;Choi Yoon Su;Kim Gi Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.22 no.3
    • /
    • pp.303-309
    • /
    • 2004
  • In this paper, an adaptive Bayesian approach to image segmentation was developed for boundary detection. Both image intensities and texture information were used to obtain better segmentation quality, with the implementation written in the C programming language. Fuzzy c-means clustering was applied for the conditional probability density function, and a Gibbs random field model was used for the prior probability density function. To test the algorithm simply, a synthetic image (256$\times$256) with a set of gray values (50, 100, 150 and 200) was created and normalized between 0 and 1 in double precision. The results demonstrate the effectiveness of the algorithm in segmenting the synthetic image, with more than 99% accuracy when the noise characteristics are correctly modeled. The algorithm was then applied to the Antarctic mosaic generated from 1963 Declassified Intelligence Satellite Photographs. The accuracy of the resulting vector map was estimated at about 300 m.
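
A compact sketch of the Bayesian labelling idea on a synthetic image: a Gaussian class likelihood (standing in for the fuzzy c-means-derived conditional density) is combined with a Potts-style Gibbs prior and optimised by iterated conditional modes. The image size, noise level, and smoothness weight are illustrative, and Python replaces the paper's C implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([50.0, 100.0, 150.0, 200.0])         # the four synthetic gray levels
sigma, beta = 15.0, 1.5                                # noise std and Gibbs smoothness weight

# Noisy 64x64 test image with four vertical bands.
truth = np.repeat(np.arange(4), 16)[None, :] * np.ones((64, 1), dtype=int)
image = means[truth] + rng.normal(scale=sigma, size=truth.shape)

labels = np.abs(image[..., None] - means).argmin(-1)   # initial ML labelling

for _ in range(5):                                     # ICM sweeps over interior pixels
    for i in range(1, 63):
        for j in range(1, 63):
            neigh = [labels[i-1, j], labels[i+1, j], labels[i, j-1], labels[i, j+1]]
            # energy = negative log-likelihood + Potts penalty for disagreeing neighbours
            energy = ((image[i, j] - means) ** 2) / (2 * sigma**2) \
                     + beta * np.array([sum(k != n for n in neigh) for k in range(4)])
            labels[i, j] = int(energy.argmin())

print("accuracy:", (labels == truth).mean())
```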

Context-based Predictive Coding Scheme for Lossless Image Compression (무손실 영상 압축을 위한 컨텍스트 기반 적응적 예측 부호화 방법)

  • Kim, Jongho;Yoo, Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.1
    • /
    • pp.183-189
    • /
    • 2013
  • This paper proposes a novel lossless image compression scheme composed of direction-adaptive prediction and context-based entropy coding. In the prediction stage, we analyze the directional property around the current coding pixel and select an appropriate prediction pixel. To further reduce the prediction error, we propose a prediction error compensation technique based on a context model defined by the activities and directional properties of neighboring pixels. The proposed scheme uses context-based Golomb-Rice coding as the entropy coder, since coding efficiency can be improved by exploiting the conditional entropy from the viewpoint of information theory. Experimental results indicate that the proposed lossless image compression scheme outperforms the low-complexity and highly efficient JPEG-LS in coding efficiency by 1.3% on average for various test images; in particular, the proposed scheme shows better results for images with a pronounced directional structure.
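
A small sketch of the two ingredients named above: a direction-adaptive predictor and Golomb-Rice coding of the mapped residuals. The context modelling and prediction-error compensation of the paper are omitted, and the Rice parameter is fixed rather than driven by context statistics.

```python
import numpy as np

def predict(img, i, j):
    """Pick the west or north neighbour as predictor, whichever direction is smoother."""
    w, n, nw = int(img[i, j-1]), int(img[i-1, j]), int(img[i-1, j-1])
    return w if abs(w - nw) <= abs(n - nw) else n

def rice_encode(value, k):
    """Golomb-Rice code of a non-negative integer: unary quotient + k-bit remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return '1' * q + '0' + format(r, f'0{k}b')

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16))
bits = []
for i in range(1, 16):
    for j in range(1, 16):
        e = int(img[i, j]) - predict(img, i, j)
        mapped = 2 * e if e >= 0 else -2 * e - 1       # map signed residual to non-negative
        bits.append(rice_encode(mapped, k=5))          # k would normally follow the context
print("total bits:", sum(len(b) for b in bits))
```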

Discretization of Numerical Attributes and Approximate Reasoning by using Rough Membership Function (러프 소속 함수를 이용한 수치 속성의 이산화와 근사 추론)

  • Kwon, Eun-Ah;Kim, Hong-Gi
    • Journal of KIISE:Databases
    • /
    • v.28 no.4
    • /
    • pp.545-557
    • /
    • 2001
  • In this paper we propose a hierarchical classification algorithm based on the rough membership function, which can reason approximately about a new object. We adopt the fuzzy reasoning approach that substitutes fuzzy membership values for linguistic uncertainty and reasons approximately by composing the membership values of conditional attributes, but we use the rough membership function instead of the fuzzy membership function. This removes the step in which a fuzzy algorithm must generate fuzzy rules from fuzzy membership functions. In addition, we transform the information system into an understandable, minimal decision information system. To do so, we study the discretization of continuous-valued attributes and propose a discretization algorithm based on the rough membership function and the entropy of information theory. Tests show good partitions that produce smaller decision systems. We evaluated the proposed algorithm on the IRIS data and other data sets; the experimental results on the IRIS data show a classification rate of 96%-98%.
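
A minimal sketch of the rough membership function itself, μ_X^B(x) = |[x]_B ∩ X| / |[x]_B|, evaluated on a toy decision table. The attribute names and values are made up (loosely IRIS-like); the discretization and hierarchical reasoning steps of the paper are not shown.

```python
# Toy decision table: (petal_width_bin, petal_length_bin) -> class.
table = [
    ({'pw': 'low',  'pl': 'short'}, 'setosa'),
    ({'pw': 'low',  'pl': 'short'}, 'setosa'),
    ({'pw': 'mid',  'pl': 'long'},  'versicolor'),
    ({'pw': 'mid',  'pl': 'long'},  'virginica'),
    ({'pw': 'high', 'pl': 'long'},  'virginica'),
]

def rough_membership(obj, target_class, attrs):
    """|[x]_B ∩ X| / |[x]_B|: fraction of objects indiscernible from obj that are in the class."""
    key = tuple(obj[a] for a in attrs)
    block = [cls for cond, cls in table if tuple(cond[a] for a in attrs) == key]
    return sum(c == target_class for c in block) / len(block)

x = {'pw': 'mid', 'pl': 'long'}
for cls in ('setosa', 'versicolor', 'virginica'):
    print(cls, rough_membership(x, cls, attrs=('pw', 'pl')))
# Output 0.0 / 0.5 / 0.5: the equivalence class is ambiguous, so approximate
# reasoning composes these membership values instead of forcing a crisp rule.
```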


Denoise of Astronomical Images with Deep Learning

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.1
    • /
    • pp.54.2-54.2
    • /
    • 2019
  • Removing the noise that inevitably occurs when taking image data has been a major concern. Image stacking, averaging or simply adding the pixel values of multiple exposures of a specific area, is regarded as the standard way to raise the signal-to-noise ratio. Its performance and reliability are unquestioned, but its weaknesses are also evident: objects with fast proper motion can vanish and, above all, it takes a long time. If a single-shot image can be processed to achieve similar performance, those weaknesses can be overcome. Recent developments in deep learning have enabled things that were not possible with conventional algorithm-based programming, one of which is generating data with more information from data with less information. As part of that, we reproduced stacked images from single-shot images using a kind of deep learning, the conditional generative adversarial network (cGAN). We used r-band camcol2 south data from SDSS Stripe 82. From all fields, image data stacked from only 22 individual exposures were used, each paired with single-pass data included in that stacked image. All fields were cut into $128{\times}128$ pixel patches, giving 17930 images in total; 14234 pairs were used for training the cGAN and 3696 pairs for verifying the result. As a result, the RMS error of pixel values between data generated under the best condition and the target data was $7.67{\times}10^{-4}$, compared with $1.24{\times}10^{-3}$ for the original input data. We also applied the network to a few test galaxy images, and the generated images were qualitatively similar to the stacked images compared with other de-noising methods. In addition, in photometry, the number count of sources matched between the stacked and cGAN images is larger than that between the single-pass and stacked images, especially for fainter objects, and the magnitude completeness also improves for fainter objects. With this work, objects about one magnitude fainter can be observed reliably.
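
A heavily simplified PyTorch sketch of a pix2pix-style conditional GAN training step: the generator maps a noisy single-pass patch to a stack-like patch, and the discriminator judges (input, candidate output) pairs. The network sizes, loss weights, and random data are placeholders, not the authors' configuration.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder mapping a noisy single-pass patch to a stack-like patch."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Conditional discriminator: input and candidate output are concatenated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

single = torch.randn(8, 1, 128, 128)      # placeholder single-pass patches
stacked = torch.randn(8, 1, 128, 128)     # placeholder stacked (target) patches

# One adversarial training step.
fake = G(single)
pred_real, pred_fake = D(single, stacked), D(single, fake.detach())
d_loss = bce(pred_real, torch.ones_like(pred_real)) + bce(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

pred_fake = D(single, fake)
g_loss = bce(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(fake, stacked)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```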
