• Title/Summary/Keyword: Characteristic Interval Value

Search Results: 94

A Study on Image Binarization using Intensity Information (밝기 정보를 이용한 영상 이진화에 관한 연구)

  • 김광백
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.3
    • /
    • pp.721-726
    • /
    • 2004
  • Image binarization is frequently applied as a preprocessing step in a variety of image processing techniques, such as character recognition and image analysis. The performance of a binarization algorithm is determined by its choice of threshold, and most previous algorithms analyze the intensity distribution of the original image via its histogram, taking the threshold to be either the mean intensity or the intensity at a valley of the histogram. These algorithms fail to find a proper threshold when the intensity histogram is not bimodal, or when the goal is to separate a feature area from the original image. This paper therefore proposes a novel binarization algorithm that first segments the intensity range of a grayscale image into several intervals and computes the mean intensity of each, then repeatedly integrates intervals until a final threshold is obtained. To integrate two neighboring intervals, the algorithm computes the ratio of the distances between each interval's mean and their shared boundary, and takes as the threshold of the newly integrated interval the intensity that divides the distance between the two means according to this ratio. A performance evaluation showed that the proposed algorithm generates a more effective threshold than previous algorithms.
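The interval-integration idea described above can be sketched as follows. This is an illustrative reading of the abstract, not the authors' implementation: the exact convention for placing the new threshold from the distance ratio is not fully specified there, so one plausible choice is made and labeled in the comments, and the pairwise merge order is likewise an assumption.

```python
import numpy as np

def interval_merge_threshold(image, n_intervals=8):
    """Sketch of interval-integration thresholding (one reading of the
    abstract, not the paper's code). Splits [0, 256) into equal intervals,
    then merges neighboring pairs until a single threshold remains."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    levels = np.arange(256)
    bounds = np.linspace(0, 256, n_intervals + 1).astype(int)
    intervals = [(int(bounds[i]), int(bounds[i + 1])) for i in range(n_intervals)]

    def mean_of(lo, hi):
        # Mean intensity of the pixels falling in [lo, hi); midpoint if empty.
        w = hist[lo:hi]
        return float((levels[lo:hi] * w).sum() / w.sum()) if w.sum() else (lo + hi) / 2.0

    threshold = None
    while len(intervals) > 1:
        merged = []
        for i in range(0, len(intervals) - 1, 2):
            (lo1, hi1), (lo2, hi2) = intervals[i], intervals[i + 1]
            m1, m2 = mean_of(lo1, hi1), mean_of(lo2, hi2)
            b = hi1                      # shared boundary of the pair
            d1, d2 = b - m1, m2 - b      # mean-to-boundary distances
            # Divide [m1, m2] according to the d1:d2 ratio (our assumption).
            threshold = m1 + (m2 - m1) * (d2 / (d1 + d2)) if (d1 + d2) else b
            merged.append((lo1, hi2))
        if len(intervals) % 2:
            merged.append(intervals[-1])
        intervals = merged
    return threshold
```

On a crisply bimodal image the final threshold lands between the two modes, which is the behavior the abstract claims for non-bimodal-safe thresholding as well.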

Effects of comparison interval and order on subjective evaluation test of loudness

  • Yoshida, Junji;Hasegawa, Hiroshi;Kasuga, Masao
    • Proceedings of the IEEK Conference
    • /
    • 2002.07c
    • /
    • pp.1792-1795
    • /
    • 2002
  • In this paper, we investigated the effect of the presentation time interval on a subjective evaluation test of loudness. We carried out paired-comparison experiments on the loudness of pure tones while varying the time interval between comparisons. Two characteristic effects were obtained: the difference limen of loudness was almost proportional to the time interval below 10 s and remained nearly constant at 1.5 dB above 10 s, while the effect of presentation order was smallest at a time interval of about 5 s.


A Study on Taguchi's Feed-back Control System (다구찌의 피드백 제어 시스템에 관한 연구)

  • 김지훈;정해성;김재주
    • Journal of Korean Society for Quality Management
    • /
    • v.26 no.3
    • /
    • pp.60-70
    • /
    • 1998
  • When deriving the expected loss generated by quality deviation, Taguchi (1991b) assumed that the objective characteristic is uniformly distributed within its control limits. It is more reasonable, however, to assume that the objective characteristic follows a normal distribution rather than a uniform one. Since the triangular distribution approximates the normal distribution and is also easy to handle, this article first finds the optimum measurement interval and optimum control limit under the triangular distribution; under the normal assumption, this modified method is compared with Taguchi's. Second, we find numerical solutions for the optimum measurement interval and optimum control limit under the normal distribution.


A Study on the Estimation of Glottal Spectrum Slope Using the LSP (Line Spectrum Pairs) (LSP를 이용한 성문 스펙트럼 기울기 추정에 관한 연구)

  • Min, So-Yeon;Jang, Kyung-A
    • Speech Sciences
    • /
    • v.12 no.4
    • /
    • pp.43-52
    • /
    • 2005
  • The common form of the pre-emphasis filter is $H(z)\;=\;1\;- az^{-1}$, where $a$ typically lies between 0.9 and 1.0 for voiced signals. This value reflects the degree of filtering and equals R(1)/R(0) in the autocorrelation method. This paper proposes a new flattening algorithm to compensate for the weakened high-frequency components caused by the vocal-cord characteristic. We used the interval information of the LSP to estimate the formant frequencies. After obtaining slope and inverse-slope values by linear interpolation among the formant frequencies, a flattening process follows. Experimental results show that the proposed algorithm flattens the weakened high-frequency components effectively; that is, the flattening characteristics are improved by using the LSP interval information as the flattening factor in the process that compensates for the weakened high-frequency components.

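The adaptive pre-emphasis coefficient mentioned in the abstract above, a = R(1)/R(0) estimated from the frame's autocorrelation, can be sketched as a minimal example. The LSP-based flattening itself is not reproduced here; this only shows the conventional filter y[n] = x[n] - a·x[n-1] that the paper takes as its starting point.

```python
import numpy as np

def preemphasize(x):
    """Apply pre-emphasis y[n] = x[n] - a*x[n-1] with the coefficient
    a = R(1)/R(0) taken from the frame's autocorrelation, as in the
    conventional method the abstract describes."""
    x = np.asarray(x, dtype=float)
    r0 = np.dot(x, x)            # R(0): zero-lag autocorrelation
    r1 = np.dot(x[:-1], x[1:])   # R(1): lag-1 autocorrelation
    a = r1 / r0 if r0 > 0 else 0.0
    y = np.empty_like(x)
    y[0] = x[0]
    y[1:] = x[1:] - a * x[:-1]
    return y, a
```

For a voiced-like, strongly low-frequency frame, the estimated coefficient indeed falls in the 0.9-1.0 range the abstract cites.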

Improving the Performance of Cellular Network by Controlling SIP Retransmission Time Interval and Implementing Home Network (셀룰러 망에서 SIP 재전송 간격조절에 의한 성능 개선과 이를 이용한 홈 네트워크 구현)

  • Kwon, Kyung-Hee;Kim, Jin-Hee;Go, Yun-Mi
    • The Journal of the Korea Contents Association
    • /
    • v.8 no.2
    • /
    • pp.67-73
    • /
    • 2008
  • Recent rapid advances in mobile communication allow multimedia services to be provided on mobile devices, and cellular networks tend to use SIP as a call-setup protocol to deliver various multimedia services to consumers. A cellular network is characterized by a higher BER (bit error rate) and narrower bandwidth than a wired network, so a SIP RTI (retransmission time interval) tuned for wired networks decreases network efficiency and increases call-setup delay over a cellular network. Using the NS-2 simulator, we derive a new SIP RTI value adequate for cellular networks, and we design and implement a home network using the SIP modified to suit them.

Evaluation Method of College English Education Effect Based on Improved Decision Tree Algorithm

  • Dou, Fang
    • Journal of Information Processing Systems
    • /
    • v.18 no.4
    • /
    • pp.500-509
    • /
    • 2022
  • With the rapid development of educational informatization, teaching methods have diversified, but the large volume of information data constrains the evaluation of teaching subjects and objects in terms of the effect of English education. This study therefore adopts incremental learning and an eigenvalue-interval algorithm to improve the weighted decision tree, and builds an English-education effect-evaluation model based on association rules. According to the results, the average information-classification accuracy of the improved decision tree algorithm is 96.18%, its classification error rate can be as low as 0.02%, and its resistance to overfitting is good; the classification error rate between the improved and original decision trees does not exceed 1%. The proposed evaluation method can effectively provide early warning in academic-situation analysis, accelerate the improvement of teachers' professional skills, and perfect the education system.

A Study on Extraction of Vocal Tract Characteristic After Canceling the Vocal Cord Property Using the Line Spectrum Pairs (선형 스펙트럼쌍을 이용한 성문특성이 제거된 성도특성 추출법에 관한 연구)

  • 민소연;장경아;배명진
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.7
    • /
    • pp.665-670
    • /
    • 2002
  • The most common form of pre-emphasis is y(n)=s(n)-As(n-1), where A typically lies between 0.9 and 1.0 for voiced signals. This value reflects the degree of pre-emphasis and equals R(1)/R(0) in the conventional method. This paper proposes a new flattening method to compensate for the weakened high-frequency components caused by the vocal-cord characteristic. We used the interval information of the LSP to estimate the formant frequencies. After obtaining slope and inverse-slope values by linear interpolation among the formant frequencies, a flattening process follows. Experimental results show that the proposed method flattens the weakened high-frequency components effectively; that is, the flattening characteristics are improved by using the LSP interval information as the flattening factor in the process that compensates for the weakened high-frequency components.

Hydrocephalus: Ventricular Volume Quantification Using Three-Dimensional Brain CT Data and Semiautomatic Three-Dimensional Threshold-Based Segmentation Approach

  • Hyun Woo Goo
    • Korean Journal of Radiology
    • /
    • v.22 no.3
    • /
    • pp.435-441
    • /
    • 2021
  • Objective: To evaluate the usefulness of the ventricular volume percentage quantified using three-dimensional (3D) brain computed tomography (CT) data for interpreting serial changes in hydrocephalus. Materials and Methods: Intracranial and ventricular volumes were quantified using the semiautomatic 3D threshold-based segmentation approach for 113 brain CT examinations (age at brain CT examination ≤ 18 years) in 38 patients with hydrocephalus. Changes in ventricular volume percentage were calculated using 75 serial brain CT pairs (time interval 173.6 ± 234.9 days) and compared with the conventional assessment of changes in hydrocephalus (increased, unchanged, or decreased). A cut-off value for the diagnosis of no change in hydrocephalus was calculated using receiver operating characteristic curve analysis. The reproducibility of the volumetric measurements was assessed using the intraclass correlation coefficient on a subset of 20 brain CT examinations. Results: Mean intracranial volume, ventricular volume, and ventricular volume percentage were 1284.6 ± 297.1 cm3, 249.0 ± 150.8 cm3, and 19.9 ± 12.8%, respectively. The volumetric measurements were highly reproducible (intraclass correlation coefficient = 1.0). Serial changes (0.8 ± 0.6%) in ventricular volume percentage in the unchanged group (n = 28) were significantly smaller than those in the increased and decreased groups (6.8 ± 4.3% and 5.6 ± 4.2%, respectively; p = 0.001 and p < 0.001, respectively; n = 11 and n = 36, respectively). The ventricular volume percentage was an excellent parameter for evaluating the degree of hydrocephalus (area under the receiver operating characteristic curve = 0.975; 95% confidence interval, 0.948-1.000; p < 0.001). With a cut-off value of 2.4%, the diagnosis of unchanged hydrocephalus could be made with 83.0% sensitivity and 100.0% specificity. 
Conclusion: The ventricular volume percentage quantified using 3D brain CT data is useful for interpreting serial changes in hydrocephalus.
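The serial-change reading implied by the reported 2.4% cut-off can be sketched as a small helper. The function name and interface are hypothetical, not from the paper; only the cut-off value and the increased/unchanged/decreased categories come from the abstract.

```python
def hydrocephalus_change(vvp_prior, vvp_current, cutoff=2.4):
    """Classify a serial change in ventricular volume percentage
    (ventricular volume / intracranial volume x 100) using the 2.4%
    cut-off reported in the abstract. Illustrative helper only."""
    delta = vvp_current - vvp_prior
    if abs(delta) < cutoff:
        return "unchanged"
    return "increased" if delta > 0 else "decreased"
```

For example, a shift from 19.9% to 20.5% (a 0.6% change, typical of the unchanged group's 0.8 ± 0.6%) falls below the cut-off, while a 10-point shift does not.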

The Use of Confidence Interval of Measures of Diagnostic Accuracy (진단검사 정확도 평가지표의 신뢰구간)

  • Oh, Tae-Ho;Pak, Son-Il
    • Journal of Veterinary Clinics
    • /
    • v.32 no.4
    • /
    • pp.319-323
    • /
    • 2015
  • The accuracy of a diagnostic test is usually summarized by statistics such as sensitivity, specificity, predictive value, likelihood ratio, and kappa. These indices are most commonly presented when evaluations of competing diagnostic tests are reported, and comparing the accuracies of diagnostic tests is of utmost importance in deciding on the best available test for a given medical disorder. It is important to emphasize, however, that specific point values of these indices are merely estimates. If parameter estimates are reported without a measure of uncertainty (precision), knowledgeable readers cannot know the range within which the true values of the indices are likely to lie. Therefore, when evaluations of diagnostic accuracy are reported, the precision of the estimates should be stated in parallel. To reflect the precision of an estimate of a diagnostic performance characteristic, or of the difference between performance characteristics, computation of the confidence interval (CI), an indicator of precision, is widely used in the medical literature, since CIs are more informative for interpreting test results than simple point estimates. The majority of peer-reviewed journals require CIs for descriptive estimates, whereas domestic veterinary journals seem less vigilant on this issue. This paper describes how to calculate these indices and their associated CIs, using practical examples of assessing diagnostic test performance.
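As one concrete instance of the CIs discussed above, a Wilson score interval for a proportion such as sensitivity (TP/(TP+FN)) can be computed as follows. The Wilson method is one common choice; the paper may present others, such as the simple normal approximation.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion, e.g. a
    sensitivity estimated as successes/n. One standard method; not
    necessarily the one used in the paper's examples."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half
```

For 90 true positives out of 100 diseased animals, the point estimate 0.90 comes with an interval of roughly (0.83, 0.94), which is the precision information the abstract argues should accompany every point estimate.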

Stability Analysis of Machine Tools by a Stochastic Method (스토케스틱 방법에 의한 공작기계의 안정성 해석)

  • Kim, Gwang-Jun
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.1 no.1
    • /
    • pp.34-49
    • /
    • 1984
  • The stability of machine tool systems is analyzed by treating the machining process as a stochastic process, without decomposing it into machine tool structural dynamics and cutting processes. In doing so, the time series analysis technique developed by Wu and Pandit is applied systematically to the relative vibration between cutting tool and workpiece, measured under actual working conditions. Various characteristic properties derived from the fitted ARMA (autoregressive moving average) models, and others obtained directly from the raw data, are investigated in relation to system stability. Both the damping ratio and the absolute value of the characteristic roots of the AR part of the most significant dynamic mode are preferred as stability-indicating factors over other properties such as the theoretical variance γ(0) or the absolute power of the most dominant dynamic mode. The maximum amplitude during a given interval and the variance estimated from the raw data, although obtained rather easily, are shown to be very sensitive to the type of signal and the location of the measurement point. The relative vibration signal is also analyzed with an FFT (fast Fourier transform) analyzer for comparison with the spectra derived from the fitted ARMA models.

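The step from the characteristic roots of the AR part to a damping ratio, as used in the abstract above, can be sketched for a single AR(2) mode. This uses the standard mapping from a discrete-time root z = r·e^{iθ} to an equivalent continuous-time damping ratio; it is an illustration of that mapping, not the paper's code, and the AR coefficient sign convention below is an assumption.

```python
import numpy as np

def ar2_damping(phi1, phi2):
    """Damping ratio and root magnitude of the mode implied by an AR(2)
    model x[t] = phi1*x[t-1] + phi2*x[t-2] + e[t], whose characteristic
    equation is z^2 - phi1*z - phi2 = 0. Uses the standard mapping
    zeta = -ln r / sqrt((ln r)^2 + theta^2) for a complex root r*e^{i*theta}."""
    roots = np.roots([1.0, -phi1, -phi2])
    z = roots[np.argmax(np.abs(roots.imag))]  # pick one of the complex pair
    r, theta = abs(z), abs(np.angle(z))
    zeta = -np.log(r) / np.sqrt(np.log(r) ** 2 + theta**2)
    return float(zeta), float(r)
```

A root magnitude r approaching 1 (zeta approaching 0) signals a marginally stable mode, which is why the abstract singles out these two quantities as stability indicators.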