• Title/Summary/Keyword: Illumination Variations


Multimodal Face Biometrics by Using Convolutional Neural Networks

  • Tiong, Leslie Ching Ow;Kim, Seong Tae;Ro, Yong Man
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.2
    • /
    • pp.170-178
    • /
    • 2017
  • Biometric recognition is a challenging topic that demands high recognition accuracy. Most existing methods rely on a single biometric source, and their accuracy is degraded by factors such as illumination and appearance variations. In this paper, we propose a new multimodal biometric recognition method using a convolutional neural network, focusing on multimodal biometrics from the face and periocular regions. Experiments demonstrate that a deep learning framework built on multimodal facial biometric features helps achieve high recognition performance.
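
For orientation only, the sketch below shows one way a two-branch CNN could fuse face and periocular inputs, in the spirit of the abstract. The layer sizes, the late concatenation, and the class count are assumptions for illustration, not the architecture reported by the authors.

```python
# Hypothetical two-branch CNN that fuses face and periocular features, loosely
# inspired by the abstract; the authors' actual architecture is not reproduced.
import torch
import torch.nn as nn

class BranchCNN(nn.Module):
    """Small convolutional feature extractor for one modality."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.features(x).flatten(1)   # (N, 64)

class MultimodalFaceNet(nn.Module):
    """Concatenates face and periocular embeddings before classification."""
    def __init__(self, num_identities=100):
        super().__init__()
        self.face_branch = BranchCNN()
        self.periocular_branch = BranchCNN()
        self.classifier = nn.Linear(64 * 2, num_identities)

    def forward(self, face_img, periocular_img):
        fused = torch.cat([self.face_branch(face_img),
                           self.periocular_branch(periocular_img)], dim=1)
        return self.classifier(fused)

# Example forward pass with random tensors standing in for image batches.
model = MultimodalFaceNet(num_identities=100)
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 100])
```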

Wide-Viewing Liquid Crystal Displays with Periodic Surface Gratings

  • Lee, Sin-Doo;Park, Jae-Hong;Yoon, Tae-Young;Yu, Chang-Jae
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2002.08a
    • /
    • pp.947-952
    • /
    • 2002
  • A new concept of forming self-aligned multidomains is used for fabricating wide-viewing liquid crystal displays (LCDs) with periodic surface gratings. An array of periodic surface gratings is produced on substrates using a photosensitive polymer by illumination of UV light through a patterned photomask. A multidomain structure is naturally formed on the grating surface by the initial director distortions together with continuous variations of an external electric field. The LCD cells with periodic surface gratings are found to show excellent extinction in the off-state and a wide-viewing property in the on-state.

Robust Extraction of Facial Features under Illumination Variations (조명 변화에 견고한 얼굴 특징 추출)

  • So, In-Mi;Kim, Myong-Hoon;Kim, Young-Un;Lee, Chi-Guen;Jung, Sung-Tae
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2005.11a
    • /
    • pp.697-700
    • /
    • 2005
  • With the development of computer vision technology, many studies have sought to use facial information in applications such as user interfaces, user authentication, and security. To use facial information, facial features such as the eyes, nose, and lips must be extracted effectively. This paper proposes a method that robustly extracts facial features even under illumination variations by combining several adaptive cues.
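
The abstract does not spell out which adaptive cues are combined. The following sketch is a purely hypothetical illustration of fusing two illumination-tolerant cues (edges and local darkness) after adaptive histogram equalization; the cues and weights are not taken from the paper.

```python
# Hypothetical illustration of combining illumination-tolerant cues into a
# single facial-feature score map. Requires OpenCV (cv2) and numpy.
import cv2
import numpy as np

def feature_score_map(gray_face):
    """Fuse an edge cue and a dark-region cue after adaptive normalization."""
    # Adaptive histogram equalization reduces the impact of uneven lighting.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    norm = clahe.apply(gray_face)

    # Cue 1: edges (eye/lip boundaries), scaled to [0, 1].
    edges = cv2.Canny(norm, 50, 150).astype(np.float32) / 255.0

    # Cue 2: locally dark regions (pupils, nostrils, lip line).
    darkness = 1.0 - cv2.GaussianBlur(norm, (9, 9), 0).astype(np.float32) / 255.0

    # Simple weighted fusion; the weights here are illustrative, not the paper's.
    return 0.6 * edges + 0.4 * darkness

if __name__ == "__main__":
    face = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in image
    score = feature_score_map(face)
    print(score.shape, float(score.max()))
```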

Object Cataloging Using Heterogeneous Local Features for Image Retrieval

  • Islam, Mohammad Khairul;Jahan, Farah;Baek, Joong Hwan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.11
    • /
    • pp.4534-4555
    • /
    • 2015
  • We propose a robust object cataloging method that uses multiple locally distinct heterogeneous features to aid image retrieval. Because of variations in object size, orientation, illumination, and similar factors, object recognition is an extraordinarily challenging problem. In these circumstances, we adopt a local interest point detection method that locates prototypical local components in object images. In each local component, we extract heterogeneous features such as a gradient-weighted orientation histogram, sums of wavelet responses, and histograms in different color spaces, and combine them to describe each component from several perspectives. A global signature is formed by adapting the bag-of-features model, which counts the frequencies of local components with respect to the words of a dictionary. The proposed method classifies objects well against various complex backgrounds. Our proposed local feature achieves a classification accuracy of 98%, while SURF, SIFT, BRISK, and FREAK achieve 81%, 88%, 84%, and 87%, respectively.
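
As a rough illustration of the bag-of-features idea described above, the sketch below quantizes local descriptors against a k-means dictionary and counts word frequencies into a global signature. ORB descriptors and the dictionary size stand in for the paper's heterogeneous features and are assumptions, not the authors' choices.

```python
# Minimal bag-of-features sketch: local descriptors are quantized against a
# learned dictionary and counted into a global signature.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def extract_descriptors(images):
    """Detect local interest points and describe each one (ORB here)."""
    orb = cv2.ORB_create()
    all_desc = []
    for img in images:
        _, desc = orb.detectAndCompute(img, None)
        if desc is not None:
            all_desc.append(desc.astype(np.float32))
    return all_desc

def build_dictionary(desc_list, num_words=16):
    """Cluster pooled descriptors into visual words."""
    pooled = np.vstack(desc_list)
    return KMeans(n_clusters=num_words, n_init=10).fit(pooled)

def bag_of_features(desc, dictionary):
    """Global signature: frequency of each visual word in one image."""
    words = dictionary.predict(desc.astype(np.float32))
    hist, _ = np.histogram(words, bins=np.arange(dictionary.n_clusters + 1))
    return hist / max(hist.sum(), 1)

if __name__ == "__main__":
    imgs = [(np.random.rand(128, 128) * 255).astype(np.uint8) for _ in range(5)]
    descs = extract_descriptors(imgs)
    dico = build_dictionary(descs, num_words=16)
    print(bag_of_features(descs[0], dico))
```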

Robust Three-step facial landmark localization under the complicated condition via ASM and POEM

  • Li, Weisheng;Peng, Lai;Zhou, Lifang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.9
    • /
    • pp.3685-3700
    • /
    • 2015
  • To avoid the influence of pose, illumination, and facial expression variations, we propose a robust three-step algorithm based on ASM and POEM for facial landmark localization. First, a Model Selection Factor is utilized to obtain a pose-free initialized shape. Second, we use the global shape model of ASM to describe the whole face and the POEM texture model to adjust the position of each landmark. Third, a second localization pass discriminatively refines the subtle shape variation of some organs and contours. Experiments are conducted on four main face datasets, and the results demonstrate that the proposed method localizes facial landmarks accurately and outperforms other state-of-the-art methods.
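
The snippet below illustrates only the generic ASM shape-regularization step referenced in the abstract: a candidate landmark shape is projected onto a PCA shape model and its parameters are clamped so the whole face shape stays plausible. The Model Selection Factor, the POEM texture model, and the second localization pass are not reproduced, and the landmark count is an assumption.

```python
# Generic ASM shape-regularization step, sketched with numpy.
import numpy as np

def fit_shape_model(training_shapes, num_modes=5):
    """PCA shape model: mean shape plus the main modes of variation."""
    X = np.asarray(training_shapes, dtype=float)      # (N, 2 * num_landmarks)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    eigvals = (S ** 2) / max(len(X) - 1, 1)
    return mean, Vt[:num_modes].T, eigvals[:num_modes]

def regularize_shape(candidate, mean, P, eigvals, limit=3.0):
    """Project a candidate shape into the model and clamp each parameter."""
    b = P.T @ (candidate - mean)
    b = np.clip(b, -limit * np.sqrt(eigvals), limit * np.sqrt(eigvals))
    return mean + P @ b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shapes = rng.normal(size=(50, 136))               # 68 hypothetical landmarks
    mean, P, eigvals = fit_shape_model(shapes)
    noisy = shapes[0] + rng.normal(scale=0.5, size=136)
    print(regularize_shape(noisy, mean, P, eigvals).shape)  # (136,)
```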

Probabilistic Head Tracking Based on Cascaded Condensation Filtering (순차적 파티클 필터를 이용한 다중증거기반 얼굴추적)

  • Kim, Hyun-Woo;Kee, Seok-Cheol
    • The Journal of Korea Robotics Society
    • /
    • v.5 no.3
    • /
    • pp.262-269
    • /
    • 2010
  • This paper presents a probabilistic head tracking method, mainly applicable to face recognition and human-robot interaction, which can robustly track a human head against variations such as pose/scale changes, illumination changes, and background clutter. Compared to conventional particle filter based approaches, the proposed method tracks a human head effectively by regularizing the sample space in the prediction stage and sequentially weighting multiple visual cues in the observation stage. Experimental results show the robustness of the proposed method, and it is worth mentioning that the proposed probabilistic framework can easily be applied to other object tracking problems.
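
As background for the condensation (particle) filtering referenced above, here is a minimal predict-weight-resample loop in which particle weights come from the product of several cue likelihoods. The toy Gaussian cues and the simulated motion are placeholders, not the paper's visual cues or state model.

```python
# Bare-bones condensation (particle filter) loop with multiple toy cues.
import numpy as np

rng = np.random.default_rng(1)

def cue_likelihood(particles, measurement, sigma):
    """Toy per-cue likelihood: particles closer to the measurement score higher."""
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def track(num_particles=500, num_frames=20):
    particles = rng.normal(size=(num_particles, 2)) * 5.0   # (x, y) hypotheses
    true_pos = np.zeros(2)
    estimate = true_pos.copy()
    for _ in range(num_frames):
        true_pos += np.array([1.0, 0.5])                          # simulated motion
        particles += rng.normal(scale=1.0, size=particles.shape)  # predict step
        # Sequentially weight multiple (toy) cues, e.g. a color cue and an edge cue.
        w = cue_likelihood(particles, true_pos, sigma=4.0)
        w *= cue_likelihood(particles, true_pos + rng.normal(scale=0.5, size=2),
                            sigma=6.0)
        w /= w.sum()
        # Resample in proportion to the combined weights.
        idx = rng.choice(num_particles, size=num_particles, p=w)
        particles = particles[idx]
        estimate = particles.mean(axis=0)
    return estimate, true_pos

if __name__ == "__main__":
    est, truth = track()
    print("estimate:", est, "truth:", truth)
```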

Contour Shape Matching based Motion Vector Estimation for Subfield Gray-scale Display Devices (서브필드계조방식 디스플레이 장치를 위한 컨투어 쉐이프 매칭 기반의 모션벡터 추정)

  • Choi, Im-Su;Kim, Jae-Hee
    • Proceedings of the IEEK Conference
    • /
    • 2007.07a
    • /
    • pp.327-328
    • /
    • 2007
  • A contour shape matching based pixel motion estimation method is proposed. Pixel motion information is very useful for compensating the motion artifacts generated at specific gray-level contours in moving images on subfield gray-scale display devices. In this motion estimation method, gray-level boundary contours are extracted from the input image. Using contour shape matching, the most similar contour in the next frame is found, and the contour is divided into segment units. The pixel motion vector is estimated from the displacement of each segment in the contour by segment matching. With this method, a more precise motion vector can be estimated, and the method is more robust to image motion involving rotation or illumination variations.
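
A hedged sketch of the contour-matching idea: gray-level boundary contours are extracted in two consecutive frames, each contour is paired with its most similar counterpart by a shape-matching score, and a motion vector is read off the centroid displacement. The per-segment refinement described in the abstract is omitted, and the gray level used is an assumption.

```python
# Contour-level motion estimation sketch using OpenCV shape matching.
import cv2
import numpy as np

def level_contours(gray, level):
    """Boundary contours of the region above a given gray level."""
    _, mask = cv2.threshold(gray, level, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if len(c) >= 5]

def match_and_estimate(prev_gray, next_gray, level=128):
    prev_cs, next_cs = level_contours(prev_gray, level), level_contours(next_gray, level)
    vectors = []
    for pc in prev_cs:
        # Most similar contour in the next frame (lower score = closer shape).
        best = min(next_cs,
                   key=lambda nc: cv2.matchShapes(pc, nc, cv2.CONTOURS_MATCH_I1, 0))
        # Motion vector from the displacement of the contour centroids.
        vectors.append(best.reshape(-1, 2).mean(axis=0) - pc.reshape(-1, 2).mean(axis=0))
    return vectors

if __name__ == "__main__":
    prev_f = np.zeros((120, 120), np.uint8); cv2.circle(prev_f, (40, 60), 20, 200, -1)
    next_f = np.zeros((120, 120), np.uint8); cv2.circle(next_f, (48, 62), 20, 200, -1)
    print(match_and_estimate(prev_f, next_f))  # roughly [8, 2]
```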

Multi-feature local sparse representation for infrared pedestrian tracking

  • Wang, Xin;Xu, Lingling;Ning, Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.3
    • /
    • pp.1464-1480
    • /
    • 2019
  • Robust tracking of infrared (IR) pedestrian targets against varying conditions, e.g. appearance changes, illumination variations, and background disturbances, is a great challenge in the infrared image processing field. In this paper, we present a new tracking method for IR pedestrian targets via multi-feature local sparse representation (SR), which consists of three main modules. In the first module, a multi-feature local SR model is constructed. Considering the characteristics of infrared pedestrian targets, gray and edge features are first extracted from all target templates and then fused into the model learning process. In the second module, an effective tracker is built on the learned model. To improve computational efficiency, a sliding window mechanism with multiple scales first scans the current frame to sample target candidates; the candidates are then recognized via sparse reconstruction residual analysis. In the third module, an adaptive dictionary update approach is designed to further improve the tracking performance. The results demonstrate that our method outperforms several classical methods for infrared pedestrian tracking.
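
The snippet below sketches the sparse-reconstruction-residual step in the second module: each candidate is coded over a small template dictionary and the candidate with the smallest residual is selected. The gray/edge feature fusion and the adaptive dictionary update are not reproduced, and the feature dimension, template count, and sparsity level are illustrative assumptions.

```python
# Candidate selection by sparse reconstruction residual over target templates.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def residual(candidate, dictionary, n_nonzero=3):
    """Sparse reconstruction residual of one candidate over the templates."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(dictionary, candidate)          # columns of `dictionary` are templates
    return np.linalg.norm(candidate - dictionary @ omp.coef_)

def pick_best_candidate(candidates, dictionary):
    scores = [residual(c, dictionary) for c in candidates]
    return int(np.argmin(scores)), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    templates = rng.normal(size=(64, 10))   # 10 vectorized target templates
    target_like = templates[:, :3] @ np.array([0.5, 0.3, 0.2])   # near template span
    candidates = [rng.normal(size=64) for _ in range(4)]
    candidates.append(target_like + 0.01 * rng.normal(size=64))
    best, scores = pick_best_candidate(candidates, templates)
    print("best candidate index:", best)    # expected: 4 (the target-like one)
```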

Nonparaxial Imaging Theory for Differential Phase Contrast Imaging

  • Jeongmin Kim
    • Current Optics and Photonics
    • /
    • v.7 no.5
    • /
    • pp.537-544
    • /
    • 2023
  • Differential phase contrast (DPC) microscopy, a central quantitative phase imaging (QPI) technique in cell biology, facilitates label-free, real-time monitoring of intrinsic optical phase variations in biological samples. The existing DPC imaging theory, while important for QPI, is grounded in paraxial diffraction theory. However, this theory lacks accuracy when applied to high numerical aperture (NA) systems that are vital for high-resolution cellular studies. To tackle this limitation, we have, for the first time, formulated a nonparaxial DPC imaging equation with a transmission cross-coefficient (TCC) for high NA DPC microscopy. Our theoretical framework incorporates the apodization of the high NA objective lens, nonparaxial light propagation, and the angular distribution of source intensity or detector sensitivity. Thus, our TCC model deviates significantly from traditional paraxial TCCs, influenced by both NA and the angular variation of illumination or detection. Our nonparaxial imaging theory could enhance phase retrieval accuracy in QPI based on high NA DPC imaging.
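
For context, the conventional paraxial Hopkins/TCC imaging model that the paper generalizes can be written as follows; the nonparaxial TCC derived in the paper additionally accounts for objective apodization, nonparaxial propagation, and the angular source or detector distribution.

```latex
% Conventional (paraxial) Hopkins TCC imaging model, shown only as background
% notation; the paper's contribution is a nonparaxial generalization of T.
\begin{align}
  I(\mathbf{r}) &= \iint T(\mathbf{f}_1, \mathbf{f}_2)\,
      \tilde{o}(\mathbf{f}_1)\, \tilde{o}^{*}(\mathbf{f}_2)\,
      e^{\,i 2\pi (\mathbf{f}_1 - \mathbf{f}_2)\cdot \mathbf{r}}\,
      d\mathbf{f}_1\, d\mathbf{f}_2, \\
  T(\mathbf{f}_1, \mathbf{f}_2) &= \int S(\mathbf{f})\,
      P(\mathbf{f} + \mathbf{f}_1)\, P^{*}(\mathbf{f} + \mathbf{f}_2)\, d\mathbf{f},
\end{align}
% where S is the effective source (or detector sensitivity) distribution,
% P is the pupil function, and \tilde{o} is the object spectrum.
```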

Automatic Segmentation of Product Bottle Label Based on GrabCut Algorithm

  • Na, In Seop;Chen, Yan Juan;Kim, Soo Hyung
    • International Journal of Contents
    • /
    • v.10 no.4
    • /
    • pp.1-10
    • /
    • 2014
  • In this paper, we propose a method to build an accurate initial trimap for the GrabCut algorithm without the need for human interaction. First, we identify a rough candidate for the label region of a bottle by applying a saliency map to find the salient area of the image. Then, the Hough transform is used to detect the left and right borders of the label region, and the k-means algorithm is used to localize the upper and lower borders of the bottle's label. These four borders are used to build an initial trimap for the GrabCut method. Finally, GrabCut segments an accurate label region. The experimental results for 130 wine bottle images demonstrate that the saliency map extracted a rough label region with an accuracy of 97.69% while also removing the complex background. The Hough transform and projection method accurately drew the outline of the label from the saliency area, and this outline was used to build an initial trimap for GrabCut. Finally, the GrabCut algorithm successfully segmented the bottle label with an average accuracy of 92.31%. We therefore believe that our method is suitable for product label recognition systems that automatically segment product labels. Although our method achieved encouraging results, it has some limitations in that unreliable results are produced under varying illumination and reflections. We are therefore developing preprocessing algorithms that take variations in illumination and reflections into account.
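
To make the final step concrete, the sketch below builds a trimap from a rough label box and runs OpenCV's cv2.grabCut with mask initialization. The rough_label_box function is a hypothetical placeholder for the saliency, Hough transform, and k-means border-finding stages described in the abstract; its coordinates are arbitrary.

```python
# GrabCut segmentation from an automatically built trimap (sketch).
import cv2
import numpy as np

def rough_label_box(img):
    """Placeholder for the paper's saliency + Hough + k-means border search."""
    h, w = img.shape[:2]
    return int(0.2 * w), int(0.3 * h), int(0.8 * w), int(0.8 * h)  # x0, y0, x1, y1

def segment_label(img):
    x0, y0, x1, y1 = rough_label_box(img)
    # Trimap: everything starts as definite background, the box is "probably
    # foreground", and a shrunken inner box is marked definite foreground.
    mask = np.full(img.shape[:2], cv2.GC_BGD, np.uint8)
    mask[y0:y1, x0:x1] = cv2.GC_PR_FGD
    my, mx = (y1 - y0) // 4, (x1 - x0) // 4
    mask[y0 + my:y1 - my, x0 + mx:x1 - mx] = cv2.GC_FGD
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(img, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(fg, 255, 0).astype(np.uint8)

if __name__ == "__main__":
    bottle = (np.random.rand(200, 120, 3) * 255).astype(np.uint8)  # stand-in image
    print(segment_label(bottle).shape)  # (200, 120) binary label mask
```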