• Title/Summary/Keyword: Cumulative Histogram


Selectivity Estimation using the Generalized Cumulative Density Histogram (일반화된 누적밀도 히스토그램을 이용한 공간 선택율 추정)

  • Chi, Jeong-Hee;Kim, Sang-Ho;Ryu, Keun-Ho
    • The KIPS Transactions:PartD / v.11D no.4 / pp.983-990 / 2004
  • The multiple-count problem occurs when rectangle objects span several buckets. The CD histogram is a technique that solves this problem by keeping four sub-histograms corresponding to the four corner points of each rectangle. Although it provides exact results in constant response time, a considerable issue remains: because it assumes a query window aligned with the given grid, many errors may occur when it is applied to real applications. In this paper, we propose selectivity estimation techniques using a generalized cumulative density histogram based on two probabilistic models: (1) a probabilistic model that considers the query-window area ratio, and (2) a probabilistic model that considers the intersection area between the given grid and the objects. Our method eliminates the restriction on the query window imposed by the existing cumulative density histogram. We experimented with real datasets to evaluate the proposed methods. Experimental results show that the proposed technique is superior to existing selectivity estimation techniques; furthermore, the estimator based on the intersection-area model is very accurate (errors below 5%) for a 20% query window. The proposed techniques can be used to accurately quantify the selectivity of spatial range queries on rectangle objects.

Class-Based Histogram Equalization for Robust Speech Recognition

  • Suh, Young-Joo;Kim, Hoi-Rin
    • ETRI Journal / v.28 no.4 / pp.502-505 / 2006
  • A new class-based histogram equalization method is proposed for robust speech recognition. The proposed method aims not only at compensating for the acoustic mismatch between training and test environments but also at reducing the discrepancy between the phonetic distributions of the training and test speech data. The algorithm utilizes multiple class-specific reference and test cumulative distribution functions, classifies the noisy test features into their corresponding classes, and equalizes the features using their class-specific reference and test distributions. Experiments on the Aurora 2 database proved the effectiveness of the proposed method, reducing relative errors by 18.74%, 17.52%, and 23.45% over the conventional histogram equalization method and by 59.43%, 66.00%, and 50.50% over mel-cepstral-based features for test sets A, B, and C, respectively.
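The conventional (class-independent) HEQ baseline that this paper refines maps each test feature to the reference value at the same empirical CDF quantile. A minimal sketch, with invented data and function names:

```python
import bisect

def histogram_equalize(test_feats, ref_feats):
    """Map each test feature to the reference value at the same empirical
    CDF quantile. This is the conventional, class-independent HEQ baseline;
    the paper refines it by applying it per phonetic class."""
    test_sorted = sorted(test_feats)
    ref_sorted = sorted(ref_feats)
    n_test, n_ref = len(test_feats), len(ref_feats)
    out = []
    for x in test_feats:
        q = bisect.bisect_right(test_sorted, x) / n_test   # test CDF at x
        idx = min(int(q * n_ref), n_ref - 1)               # same quantile in ref
        out.append(ref_sorted[idx])
    return out
```

The class-based version would first classify each feature, then run this mapping with the matching class-specific reference and test CDFs.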


Robust Histogram Equalization Using Compensated Probability Distribution

  • Kim, Sung-Tak;Kim, Hoi-Rin
    • MALSORI / v.55 / pp.131-142 / 2005
  • A mismatch between training and test conditions often causes a drastic decrease in the performance of speech recognition systems. In this paper, non-linear transformation techniques based on histogram equalization (HEQ) in the acoustic feature space are studied for reducing this mismatch. The purpose of HEQ is to convert the probability distribution of the test speech into the probability distribution of the training speech. Conventional HEQ methods consider only the probability distribution of the test speech, but for noise-corrupted test speech that distribution is itself distorted. A transformation function obtained from the distorted distribution may mis-transform the feature vectors, degrading the performance of HEQ. Therefore, this paper proposes a method for calculating a noise-removed probability distribution, under the assumption that the CDF of noisy speech feature vectors consists of a speech-feature component and a noise-feature component; this compensated probability distribution is then used in the HEQ process. In the AURORA-2 framework, the proposed method reduced the error rate by over 44% in the clean training condition compared to the baseline system. In the multi-condition training setting, the proposed methods also outperform the baseline system.
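The compensation step can be illustrated under a simple two-component mixture view of the noisy CDF. The mixture weight and the noise CDF are taken as given here, whereas estimating their contribution is the part the paper actually addresses, so this is only a schematic sketch:

```python
def compensated_cdf(noisy_cdf, noise_cdf, w):
    """Recover a noise-removed CDF under the mixture assumption
    F_noisy = (1 - w) * F_speech + w * F_noise, with w and the noise CDF
    assumed known (an illustrative simplification of the paper's method)."""
    raw = [(fn - w * fz) / (1.0 - w) for fn, fz in zip(noisy_cdf, noise_cdf)]
    # Clip the result back to a valid, non-decreasing CDF in [0, 1].
    out, prev = [], 0.0
    for c in raw:
        prev = min(1.0, max(prev, c))
        out.append(prev)
    return out
```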


Retrieval of Identical Clothing Images Based on Non-Static Color Histogram Analysis

  • Choi, Yoo-Joo;Moon, Nam-Mee;Kim, Ku-Jin
    • Journal of Broadcast Engineering / v.14 no.4 / pp.397-408 / 2009
  • In this paper, we present a non-static color histogram method to retrieve clothing images that are similar to a query clothing image. Given a clothing area, our method automatically extracts the major colors using an octree-based quantization approach [16]. Then a color palette composed of the major colors is generated. The feature of each clothing image, whether a query or a database image, is represented as a color histogram based on its color palette. We define the matched color bins between two possibly different color palettes and unify the palettes by merging or deleting color bins where necessary. The similarity between two histograms is measured by the weighted Euclidean distance between the matched color bins, where the weight is derived from the frequency of each bin. We compare our method with previous histogram matching methods through experiments. Compared to the HSV cumulative histogram-based approach, our method improves retrieval precision by 13.7% with fewer color bins.
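The similarity measure itself is a one-liner once the palettes have been unified. In the sketch below the bin matching is assumed already done and the weights are passed in directly (in the paper they come from bin frequencies):

```python
def weighted_hist_distance(h1, h2, weights):
    """Weighted Euclidean distance over matched color bins. Palette
    unification (matching/merging bins) is assumed to have happened first;
    the weights are given rather than derived, for illustration."""
    assert len(h1) == len(h2) == len(weights)
    return sum(w * (a - b) ** 2 for w, a, b in zip(weights, h1, h2)) ** 0.5
```

A zero weight effectively ignores a bin, so rare colors can be down-weighted without re-binning the histograms.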

An Efficient Shaking Correction Techniques for Image Stabilization of Moving Vehicles (이동차량 영상 안정화를 위한 효율적인 흔들림 보정 기법)

  • Hong, Sung-Il;Lin, Chi-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.3 / pp.155-162 / 2014
  • In this paper, we propose an efficient shaking correction technique for stabilizing images from a moving vehicle. The proposed technique computes a cumulative histogram for conversion and extracts separation information via color separation of the input video frames. It then performs histogram matching to align the color information, yielding the corrected vehicle video. Compared with existing shaking correction techniques, the proposed technique produced restoration results with the least noise and a more natural image, through stabilization of the luminance and color levels. Moreover, the image stabilization method demonstrated its efficiency over other methods by processing in real time without the use of additional memory.

Hangeul detection method based on histogram and character structure in natural image (다양한 배경에서 히스토그램과 한글의 구조적 특징을 이용한 문자 검출 방법)

  • Pyo, Sung-Kook;Park, Young-Soo;Lee, Gang Seung;Lee, Sang-Hun
    • Journal of the Korea Convergence Society / v.10 no.3 / pp.15-22 / 2019
  • In this paper, we propose a Hangeul detection method that uses histograms and the structural features of consonants and vowels to solve the problem of Hangul characters being detected as separate consonants and vowels. The proposed method first removes the background using DoG (Difference of Gaussians) to suppress noise that is unnecessary for Hangul detection. The background-removed image is then binarized using a cumulative histogram. Next, a horizontal projection histogram is used to find the position of each text line, and characters are combined using a vertical projection histogram within the found text image. However, characters consisting of one consonant and one vowel, such as '가', '라', and '귀', are difficult to combine into a single character, so they are merged using the structural characteristics of Hangul. In our experiments, we tested images of alphabetic characters on various backgrounds, images of Korean characters, and images mixing alphabets and Hangul. The detection rate of the proposed method is about 2% lower than that of the K-means and MSER character detection methods, but about 5% higher than that of character detection methods that include Hangul.
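The binarization and projection steps described above can be sketched directly. The threshold rule (where on the cumulative histogram to cut) is not given in the abstract, so the fraction below is an assumption:

```python
def binarize_by_cumulative_histogram(gray, fraction=0.5):
    """Threshold a grayscale image at the intensity level where the
    cumulative histogram first reaches `fraction` of all pixels
    (the exact rule is an assumption, not the paper's)."""
    flat = [v for row in gray for v in row]
    hist = [0] * 256
    for v in flat:
        hist[v] += 1
    cum, thresh = 0, 255
    for level in range(256):
        cum += hist[level]
        if cum >= fraction * len(flat):
            thresh = level
            break
    return [[1 if v > thresh else 0 for v in row] for row in gray]

def horizontal_projection(binary):
    """Row sums, used to locate the vertical extent of each text line."""
    return [sum(row) for row in binary]

def vertical_projection(binary):
    """Column sums, used to segment characters inside a located line."""
    return [sum(col) for col in zip(*binary)]
```

Runs of zeros in the projections mark the gaps between lines and between characters.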

Weighted Histogram Equalization Method adopting Weber-Fechner's Law for Image Enhancement (이미지 화질개선을 위한 Weber-Fechner 법칙을 적용한 가중 히스토그램 균등화 기법)

  • Kim, Donghyung
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.7 / pp.4475-4481 / 2014
  • Histogram equalization has traditionally been used to enhance low-quality images. It uses the cumulative distribution function of the input image as its transformation function, which mathematically maximizes entropy. This method, however, may produce whitening artifacts. This paper proposes a weighted histogram equalization method built on histogram equalization. It incorporates Weber-Fechner's law to model human visual characteristics, together with a dynamic-range modification, to address the problem of methods that yield a transformation function regardless of the input image. The final transformation function is calculated as the weighted average of the Weber-Fechner and histogram equalization transformation functions within the modified dynamic range. Simulation results show that the proposed algorithm effectively enhances contrast in terms of subjective quality, while achieving entropy similar to or higher than conventional approaches.
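The weighted-average construction can be illustrated as follows. The blend weight and the particular logarithmic curve standing in for Weber-Fechner's law are assumptions for this sketch, not the paper's formulas:

```python
import math

def weighted_he_transform(hist, alpha=0.5, low=0, high=255):
    """Blend the histogram-equalization transfer curve with a logarithmic
    (Weber-Fechner style) curve inside a modified dynamic range [low, high].
    `alpha` and the exact log form are illustrative assumptions."""
    total = sum(hist)
    cdf, acc = [], 0
    for h in hist:
        acc += h
        cdf.append(acc / total)
    span = high - low
    he = [low + span * c for c in cdf]                    # HE transfer curve
    wf = [low + span * math.log1p(k) / math.log1p(255)    # log response curve
          for k in range(256)]
    return [alpha * a + (1 - alpha) * b for a, b in zip(he, wf)]
```

Because both component curves are non-decreasing, any convex blend of them is too, so the result is always a valid intensity mapping.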

A Study on a Statistical Modeling of 3-Dimensional MPEG Data and Smoothing Method by a Periodic Mean Value (3차원 동영상 데이터의 통계적 모델링과 주기적 평균값에 의한 Smoothing 방법에 관한 연구)

  • Kim, Duck-Sung;Kim, Tae-Hyung;Rhee, Byung-Ho
    • Journal of the Korean Institute of Telematics and Electronics S / v.36S no.6 / pp.87-95 / 1999
  • We propose a simulation model of 3-dimensional MPEG data over Asynchronous Transfer Mode (ATM) networks. The model works at the slice level and is named the Projected Vector Autoregressive (PVAR) model. The PVAR model is built on the Autoregressive (AR) model so as to meet the autocorrelation condition and fit the histogram, and maps real data through a projection function. For the projection function, we use the Cumulative Distribution Probability Function (CDPF), and the procedure is performed at each slice level. Our proposed model shows good performance in meeting the autocorrelation condition and fitting the histogram, which is important in analyzing network performance. In addition, we apply a smoothing method based on a periodic mean value. In general, the Quality of Service (QoS) depends on the Cell Loss Rate (CLR), which is related to cell loss and the maximum delay in a buffer; hence the proposed smoothing method can be used to improve the QoS.
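The periodic-mean smoothing idea can be sketched generically: average the traffic over each period so that bursts are flattened before transmission. The period choice and this exact averaging rule are assumptions for illustration, since the abstract does not specify them:

```python
def periodic_mean_smoothing(rates, period):
    """Replace each sample in a period (e.g. one MPEG GOP-like cycle) by
    the period mean, flattening bit-rate bursts while preserving the total
    data volume. Period and rule are illustrative assumptions."""
    out = []
    for start in range(0, len(rates), period):
        block = rates[start:start + period]
        mean = sum(block) / len(block)
        out.extend([mean] * len(block))
    return out
```

Lowering the peak rate this way reduces buffer overflow, which is what ties the smoothing to the CLR and hence to QoS.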


An image enhancement algorithm for detecting the license plate region using the image of the car personal recorder (차량 번호판 검출을 위한 자동차 개인 저장 장치 이미지 향상 알고리즘)

  • Yun, Jong-Ho;Choi, Myung-Ryul;Lee, Sang-Sun
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.3 / pp.1-8 / 2016
  • We propose an adaptive histogram stretching algorithm for images from a car's personal recorder. The algorithm is used as pre-processing for detecting the license plate region in recorder images. It employs the Probability Density Function (PDF) and Cumulative Distribution Function (CDF) to analyze the intensity distribution of an image, both computed from an image sampled at a fixed pixel interval. The images were subjected to different levels of stretching, and experiments were conducted to extract their characteristics. The results show that the proposed algorithm introduces less deterioration than conventional algorithms, while contrast is enhanced according to the characteristics of the image. The algorithm can therefore provide better performance than existing algorithms when used to detect candidate regions for license plates.
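A minimal sketch of CDF-driven stretching with subsampling follows. The percentile cut points and sampling step are assumptions, since the abstract only says the PDF/CDF are computed from pixels sampled at a fixed interval:

```python
def stretch_by_cdf(pixels, lo_pct=0.02, hi_pct=0.98, step=4):
    """Histogram stretching whose cut points come from the empirical CDF of
    a subsampled image (every `step`-th pixel, mirroring sampling at a fixed
    pixel interval). Percentiles and step are illustrative assumptions."""
    sample = sorted(pixels[::step])
    lo = sample[int(lo_pct * (len(sample) - 1))]
    hi = sample[int(hi_pct * (len(sample) - 1))]
    if hi == lo:
        return list(pixels)                 # flat image: nothing to stretch
    scale = 255.0 / (hi - lo)
    return [min(255, max(0, round((v - lo) * scale))) for v in pixels]
```

Subsampling keeps the CDF estimate cheap, which matters when the stretch runs as pre-processing on every recorder frame.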

Spatial Selectivity Estimation using Cumulative Wavelet Histograms (누적밀도 웨이블릿 히스토그램을 이용한 공간 선택율 추정)

  • Chi, Jeong-Hee;Jeong, Jae-Hyuk;Ryu, Keun-Ho
    • Journal of KIISE:Databases / v.32 no.5 / pp.547-557 / 2005
  • The purpose of selectivity estimation is to maintain summary data in a very small memory space while minimizing the error between the estimated value and the actual query result. To estimate selectivity over large spatial data, existing works need summary information that reflects the spatial data distribution well in order to produce accurate results, and obtaining such summary information requires a large amount of memory. In this paper, we therefore propose a new technique, the cumulative density wavelet histogram (CDW histogram), which achieves highly accurate selectivity estimates in a small memory space. The proposed method utilizes the sub-histograms created by the CD histogram: each sub-histogram is turned into wavelet summary information by applying the wavelet transform. This yields good selectivity estimates even when the available memory is very small. The experimental results show that the proposed method combines the strong points of both approaches: it matches the selectivity accuracy of the previous histogram using only 25%-50% of the memory, and it is superior to existing selectivity estimation techniques. The proposed technique can be used to accurately quantify the selectivity of spatial range queries in databases with very restrictive memory.
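The compression step rests on the standard 1D Haar wavelet transform: decompose each sub-histogram, keep only the largest-magnitude coefficients, and reconstruct approximately at query time. A minimal sketch (the paper's wavelet variant and coefficient-selection rule are not specified in the abstract, so Haar and largest-magnitude thresholding are assumptions):

```python
def haar_decompose(data):
    """Full 1D Haar decomposition of a length-2^k sequence: returns the
    overall average followed by detail coefficients, coarse to fine."""
    coeffs, cur = [], list(data)
    while len(cur) > 1:
        avgs = [(cur[i] + cur[i + 1]) / 2 for i in range(0, len(cur), 2)]
        diffs = [(cur[i] - cur[i + 1]) / 2 for i in range(0, len(cur), 2)]
        coeffs = diffs + coeffs
        cur = avgs
    return cur + coeffs

def haar_reconstruct(coeffs):
    """Invert haar_decompose."""
    cur, rest = coeffs[:1], coeffs[1:]
    while rest:
        diffs, rest = rest[:len(cur)], rest[len(cur):]
        cur = [v for a, d in zip(cur, diffs) for v in (a + d, a - d)]
    return cur

def keep_largest(coeffs, keep):
    """Drop all but the `keep` largest-magnitude detail coefficients --
    the compact wavelet summary kept in memory."""
    kept = set(sorted(range(1, len(coeffs)),
                      key=lambda i: -abs(coeffs[i]))[:keep])
    return [c if i == 0 or i in kept else 0.0 for i, c in enumerate(coeffs)]
```

Cumulative histograms are smooth (monotone) sequences, so most Haar detail coefficients are small, which is exactly why wavelet thresholding works well on them.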