• Title/Summary/Keyword: Haar measure

METRIC THEOREM AND HAUSDORFF DIMENSION ON RECURRENCE RATE OF LAURENT SERIES

  • Hu, Xue-Hai;Li, Bing;Xu, Jian
    • Bulletin of the Korean Mathematical Society
    • /
    • v.51 no.1
    • /
    • pp.157-171
    • /
    • 2014
  • We show that the recurrence rates of Laurent series, with respect to continued fractions, almost surely coincide with the pointwise dimensions of the Haar measure. Moreover, letting $E_{\alpha,\beta}$ denote the set of points with lower and upper recurrence rates $\alpha$ and $\beta$ ($0\leq\alpha\leq\beta\leq\infty$), we prove that all the sets $E_{\alpha,\beta}$ are of full Hausdorff dimension. Hence the recurrence sets $E_{\alpha,\beta}$ have a constant multifractal spectrum.
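The recurrence rates and pointwise dimension invoked in this abstract follow the usual conventions; the sketch below records the standard definitions under the assumption that $T$ is the continued-fraction map on the field of Laurent series, $d$ its metric, and $\mu$ the Haar measure (notation is ours, not necessarily the paper's).

```latex
% Standard definitions assumed here (our notation):
% \tau_r(x): first return time of x to the ball B(x,r) under the map T;
% \underline{R}, \overline{R}: lower/upper recurrence rates;
% d_\mu: pointwise dimension of the Haar measure \mu.
\begin{align*}
  \tau_r(x) &= \inf\{\, n \ge 1 : d(T^n x,\, x) < r \,\}, \\
  \underline{R}(x) &= \liminf_{r\to 0} \frac{\log \tau_r(x)}{-\log r}, \qquad
  \overline{R}(x) = \limsup_{r\to 0} \frac{\log \tau_r(x)}{-\log r}, \\
  d_\mu(x) &= \lim_{r\to 0} \frac{\log \mu\!\left(B(x,r)\right)}{\log r}.
\end{align*}
```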

HAAR MEASURES OF SOME SPECIFIC SETS ARISING FROM THE ELLIPTIC TORI

  • Kim, Yangkohn
    • Bulletin of the Korean Mathematical Society
    • /
    • v.30 no.1
    • /
    • pp.79-82
    • /
    • 1993
  • We let F be a p-adic field with ring of integers O. Suppose $\theta_i \in F^{\times}/(F^{\times})^2$ for i = 1, 2 and write $E^{\theta_i} := F(\sqrt{\theta_i})$. Then there appear some specific sets, such as $(E^{\theta_i})^{\times}/F^{\times}$, in [1] which we need to measure. In addition, another possible condition attached to the generalized results in [2] should be presented, even though it may not be particularly important. This paper is concerned with these matters. Most notations and conventions are standard and have also been used in [1] and [2].

A New Confidence Measure for Eye Detection Using Pixel Selection (눈 검출에서의 픽셀 선택을 이용한 신뢰 척도)

  • Lee, Yonggeol;Choi, Sang-Il
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.7
    • /
    • pp.291-296
    • /
    • 2015
  • In this paper, we propose a new confidence measure using pixel selection for eye detection and design a hybrid eye detector. For this, we produce sub-images by applying a pixel selection method to the eye patches and construct the BDA (Biased Discriminant Analysis) feature space for measuring the confidence of the eye detection results. For the hybrid eye detector, we select HFED (Haar-like Feature based Eye Detector) and MFED (MCT Feature based Eye Detector), which are complementary to each other, as basic detectors. For a given image, each basic detector conducts eye detection, and the confidence of each result is estimated in the BDA feature space by calculating the distance between the produced eye patch and the mean of the positive samples in the training set. Then, the result with the higher confidence is adopted as the final eye detection result and is used in the face alignment process for face recognition. The experimental results for various face databases show that the proposed method performs more accurate eye detection and consequently yields better face recognition performance compared with other methods.
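A minimal sketch of the confidence idea described in this abstract, in plain NumPy: the projection matrix, positive-class mean, and the two detectors are hypothetical placeholders, and the real method selects pixels and builds the BDA feature space as described in the paper.

```python
import numpy as np

def confidence(eye_patch, projection, positive_mean):
    """Confidence of an eye-detection result: negative distance, in a
    discriminant feature space, between the projected patch and the mean
    of positive training samples (closer to the mean => higher confidence)."""
    feature = projection @ eye_patch.ravel()          # project selected pixels
    return -np.linalg.norm(feature - positive_mean)   # smaller distance wins

def hybrid_detect(image, detector_a, detector_b, projection, positive_mean):
    """Run two complementary detectors (placeholders standing in for the
    Haar-based and MCT-based detectors) and keep the result whose eye
    patch scores the higher confidence."""
    patch_a = detector_a(image)
    patch_b = detector_b(image)
    conf_a = confidence(patch_a, projection, positive_mean)
    conf_b = confidence(patch_b, projection, positive_mean)
    return patch_a if conf_a >= conf_b else patch_b
```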

Lq-ESTIMATES OF MAXIMAL OPERATORS ON THE p-ADIC VECTOR SPACE

  • Kim, Yong-Cheol
    • Communications of the Korean Mathematical Society
    • /
    • v.24 no.3
    • /
    • pp.367-379
    • /
    • 2009
  • For a prime number p, let $\mathbb{Q}_p$ denote the p-adic field and let $\mathbb{Q}_p^d$ denote the vector space over $\mathbb{Q}_p$ consisting of all d-tuples of elements of $\mathbb{Q}_p$. For a function $f \in L_{loc}^1(\mathbb{Q}_p^d)$, we define the Hardy-Littlewood maximal function of f on $\mathbb{Q}_p^d$ by $$M_pf(x)=\sup_{\gamma\in\mathbb{Z}}\frac{1}{|B_{\gamma}(x)|_H}\int_{B_{\gamma}(x)}|f(y)|\,dy,$$ where $|E|_H$ denotes the Haar measure of a measurable subset E of $\mathbb{Q}_p^d$ and $B_{\gamma}(x)$ denotes the p-adic ball with center $x\in\mathbb{Q}_p^d$ and radius $p^{\gamma}$. If $1 < q \leq \infty$, then we prove that $M_p$ is a bounded operator of $L^q(\mathbb{Q}_p^d)$ into $L^q(\mathbb{Q}_p^d)$; moreover, $M_p$ is of weak type (1, 1) on $L^1(\mathbb{Q}_p^d)$, that is to say, $$\left|\{x\in\mathbb{Q}_p^d : |M_pf(x)|>\lambda\}\right|_H \leq \frac{p^d}{\lambda}\,\|f\|_{L^1(\mathbb{Q}_p^d)}, \quad \lambda>0,$$ for any $f\in L^1(\mathbb{Q}_p^d)$.
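One standard route from the weak (1, 1) endpoint to the $L^q$ estimates is Marcinkiewicz interpolation against the trivial $L^\infty$ bound $\|M_pf\|_\infty \le \|f\|_\infty$; the step below is stated only as an assumption about the strategy, not necessarily the paper's own argument.

```latex
% Marcinkiewicz interpolation between the weak (1,1) endpoint and L^\infty:
% if  |\{ |M_p f| > \lambda \}|_H \le \tfrac{p^d}{\lambda}\,\|f\|_{L^1}
% and \|M_p f\|_{L^\infty} \le \|f\|_{L^\infty},
% then for every 1 < q < \infty there is a constant C_{q,d,p} with
\begin{equation*}
  \|M_p f\|_{L^q(\mathbb{Q}_p^d)} \;\le\; C_{q,d,p}\, \|f\|_{L^q(\mathbb{Q}_p^d)} .
\end{equation*}
```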

Real Time Traffic Signal Recognition Using HSI and YCbCr Color Models and Adaboost Algorithm (HSI/YCbCr 색상모델과 에이다부스트 알고리즘을 이용한 실시간 교통신호 인식)

  • Park, Sanghoon;Lee, Joonwoong
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.24 no.2
    • /
    • pp.214-224
    • /
    • 2016
  • This paper proposes an algorithm to effectively detect traffic lights and recognize traffic signals using a monocular camera mounted on the front windshield of a vehicle in the daytime. The algorithm consists of three main parts. The first part generates the candidates of a traffic light: after conversion of the RGB color model into the HSI and YCbCr color spaces, the regions considered as a traffic light are detected, and edge processing is applied to these regions to extract the borders of the traffic light. The second part divides the candidates into traffic lights and non-traffic lights using Haar-like features and the Adaboost algorithm. The third part recognizes the signals of the traffic light using template matching. Experimental results show that the proposed algorithm successfully detects traffic lights and recognizes traffic signals in real time in a variety of environments.
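To make the first stage concrete, here is a minimal sketch of candidate generation by color thresholding with OpenCV. It is illustrative only: OpenCV's HSV stands in for the paper's HSI model, the red-hue ranges are rough guesses rather than the paper's tuned thresholds, and the Haar/Adaboost classification and template-matching stages are omitted.

```python
import cv2

def traffic_light_candidates(bgr_frame):
    """Generate candidate traffic-light regions by color thresholding.
    Illustrative sketch: HSV replaces the paper's HSI model, and the
    threshold values below are assumptions, not the paper's parameters."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)

    # Red hue wraps around 0 on OpenCV's 0-179 hue scale, so use two ranges.
    mask_low = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    mask_high = cv2.inRange(hsv, (170, 120, 120), (179, 255, 255))
    mask = cv2.bitwise_or(mask_low, mask_high)

    # Bounding boxes of connected regions become the candidates; a full
    # pipeline would next apply edge processing, Haar-like features with
    # Adaboost, and template matching to each candidate (OpenCV 4 API).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours]
```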

Wavelet Transform Technology for Translation-invariant Iris Recognition (위치 이동에 무관한 홍채 인식을 위한 웨이블렛 변환 기술)

  • Lim, Cheol-Su
    • The KIPS Transactions:PartB
    • /
    • v.10B no.4
    • /
    • pp.459-464
    • /
    • 2003
  • This paper proposes the use of a wavelet-based image transform algorithm in a human iris recognition method. The technique is applied in the preprocessing step of extracting the iris image from the user's eye captured by an imaging device such as a CCD camera, and it addresses the variations caused by translation and dilation arising from torsional rotation of the eye or tilt of the head. Feature values are obtained with the proposed translation-invariant wavelet transform algorithm rather than the conventional wavelet transform method. We then extract the best-matching iris feature values and compare the stored feature codes with the incoming data to identify the user. Our experimental results show that this technique has a significant advantage in verification, measured by FAR and FRR, over other general types of wavelet algorithms.
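As a sketch of the translation-invariance idea, the stationary (undecimated) wavelet transform in PyWavelets is one standard shift-invariant alternative to the ordinary decimated DWT. This is an assumption about the flavor of transform meant, not the paper's exact algorithm, and the binarized code and matching score below are illustrative choices.

```python
import numpy as np
import pywt

def iris_features(signal, wavelet="haar", level=3):
    """Shift-invariant features from a 1-D iris signature using the
    stationary wavelet transform (SWT); unlike the decimated DWT, the
    SWT coefficients shift along with a circular shift of the input."""
    # pywt.swt requires the length to be divisible by 2**level.
    n = (len(signal) // 2**level) * 2**level
    coeffs = pywt.swt(np.asarray(signal[:n], dtype=float), wavelet, level=level)
    # Binarize the detail coefficients into a compact code (illustrative).
    details = np.concatenate([d for _, d in coeffs])
    return (details > 0).astype(np.uint8)

def match_score(code_a, code_b):
    """Fraction of matching bits between two iris codes (1.0 = identical)."""
    return float(np.mean(code_a == code_b))
```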