• Title/Summary/Keyword: Illumination Variations

117 search results

Research Trends for Deep Learning-Based High-Performance Face Recognition Technology (딥러닝 기반 고성능 얼굴인식 기술 동향)

  • Kim, H.I.;Moon, J.Y.;Park, J.Y.
    • Electronics and Telecommunications Trends / v.33 no.4 / pp.43-53 / 2018
  • As face recognition (FR) has been well studied over the past decades, FR technology has been applied to many real-world applications such as surveillance and biometric systems. However, in real-world scenarios, FR performance is known to degrade significantly owing to variations in face images, such as pose, illumination, and low resolution. Recently, visual intelligence technology has grown rapidly owing to advances in deep learning, which have also improved FR performance. Indeed, deep-learning-based FR has been reported to surpass the performance level of human perception. In this article, we discuss deep-learning-based high-performance FR technologies in terms of representative deep-learning-based FR architectures and recent FR algorithms robust to face image variations (i.e., pose-robust FR, illumination-robust FR, and video FR). In addition, we investigate the large face image datasets widely adopted for performance evaluation of the most recent deep-learning-based FR algorithms.

Object Tracking with the Multi-Templates Regression Model Based MS Algorithm

  • Zhang, Hua;Wang, Lijia
    • Journal of Information Processing Systems / v.14 no.6 / pp.1307-1317 / 2018
  • To deal with occlusion, pose variations, and illumination changes in object tracking, a regression-model-weighted multi-template mean-shift (MS) algorithm is proposed in this paper. Target templates and occlusion templates are extracted to compose a multi-template set. The MS algorithm is then applied to the multi-template set to obtain candidate areas. Moreover, a regression model is trained to estimate the Bhattacharyya coefficients between the templates and the candidate areas. Finally, the geometric center of the tracked areas is taken as the object's position. The proposed algorithm is evaluated on several classical videos. The experimental results show that the regression-model-weighted multi-template MS algorithm can track an object accurately under occlusion, illumination changes, and pose variations.
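
The similarity measure at the heart of the tracker above, the Bhattacharyya coefficient between template and candidate color histograms, can be sketched as follows (a minimal illustration of the coefficient itself, not the paper's regression-weighted implementation):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two histograms.
    Both are normalized to sum to 1; the result is 1.0 for identical
    distributions and 0.0 for non-overlapping ones."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))
```

In MS-style trackers this score ranks candidate windows against the target template; higher coefficients indicate better color-distribution matches.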

Generic Training Set based Multimanifold Discriminant Learning for Single Sample Face Recognition

  • Dong, Xiwei;Wu, Fei;Jing, Xiao-Yuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.1 / pp.368-391 / 2018
  • Face recognition (FR) with a single sample per person (SSPP) is common in real-world face recognition applications. In this scenario, it is hard to predict the intra-class variations of query samples from gallery samples because of the lack of sufficient training samples. Inspired by the fact that similar faces have similar intra-class variations, we propose a virtual sample generating algorithm called k nearest neighbors based virtual sample generating (kNNVSG) to enrich the intra-class variation information of the training samples. Furthermore, to exploit the intra-class variation information of the virtual samples generated by kNNVSG, we propose an image set based multimanifold discriminant learning (ISMMDL) algorithm. ISMMDL learns a projection matrix for each manifold, modeled by the local patches of the images of each class, so as to simultaneously minimize intra-manifold margins and maximize inter-manifold margins in the low-dimensional feature space. Finally, combining the kNNVSG and ISMMDL algorithms, we propose the k nearest neighbor virtual image set based multimanifold discriminant learning (kNNMMDL) approach for single sample face recognition (SSFR) tasks. Experimental results on the AR, Multi-PIE, and LFW face datasets demonstrate that our approach is promising for SSFR with expression, illumination, and disguise variations.
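
The abstract does not spell out kNNVSG's details, but its core idea — borrowing intra-class variation from similar faces in a generic set — might be sketched as follows. All names and the variation-transfer step here are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def generate_virtual_samples(gallery_face, generic_faces, generic_variations, k=3):
    """Hypothetical sketch of kNN-based virtual sample generation:
    find the k generic faces closest to the single gallery sample and
    add their (precomputed) intra-class variation vectors to it,
    yielding k virtual samples for the gallery identity."""
    # Euclidean distance from the gallery sample to every generic face vector
    dists = np.linalg.norm(generic_faces - gallery_face, axis=1)
    nearest = np.argsort(dists)[:k]
    # each virtual sample = gallery face + a borrowed variation direction
    return gallery_face + generic_variations[nearest]
```

The premise is the one stated in the abstract: faces that look alike tend to vary alike under expression, illumination, and disguise changes.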

Experimental Analysis for Environments Variations of Greenhouses -Distributions and Variations of Temperature, Relative humidity, Illumination, Carbon dioxide and Wind velocity- (온실환경변화(溫室環境變化)에 대(對)한 실험적(實驗的) 분석(分析)(II) -온습도(溫濕度)·조도(照度)·탄산(炭酸)가스·풍속(風速)의 변화(變化) 및 분포(分布)-)

  • Kim, Y.B.;Park, J.C.;Paek, Y.
    • Journal of Biosystems Engineering / v.18 no.1 / pp.60-70 / 1993
  • In this study, the environmental variations and distributions in different types of greenhouses were measured and analyzed. The environmental elements analyzed were temperature, relative humidity, illumination, carbon dioxide, and wind velocity. The greenhouse types analyzed were the auto-multi type, which has an automatic environment control system and multiple continuous arches; the regular-multi type, which has a temperature control system and multiple continuous arches; and the single-arch type, which has no environment control system other than manual temperature management. The results of this study can be used for greenhouse construction and management.


A Method for Predicting the Color Appearance Values of Textiles Depending on Illumination (광원에 따른 텍스타일의 Color Appearance 수치 예측 방법)

  • Chae, Youngjoo
    • Journal of the Korean Society of Clothing and Textiles / v.44 no.1 / pp.68-83 / 2020
  • This study suggests a method to predict the color appearance of textiles, which shifts depending on illumination. The suggested method allows the calculation of lightness, chroma, and hue appearance values from the spectral reflectance values of the textile and the illuminant. The accuracy of the method was evaluated through numerical and statistical comparisons between the predicted and the measured color appearance values of 24 fabric samples under CIE standard illuminant D65. The two data sets agreed closely, with error values close to zero. The predicted color appearance values of the 24 samples under two illuminating (color temperature-luminance) conditions, 2700 K-100 cd/m² and 6500 K-100 cd/m², were then compared to demonstrate the significant effect of illumination on the color appearance of textiles. The color appearance values were also compared with the spectrophotometrically measured physical color attributes, that is, the true colors of the samples. Although the physical color attributes of the samples were unchanged, the differences in color appearance under the different conditions were generally much larger than the suprathreshold color difference tolerances discussed in the color science literature. Finally, the magnitude of the illumination effect depending on the physical color attributes of the samples was also analyzed.
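
The first step of any such prediction — combining a surface's spectral reflectance with an illuminant's spectral power distribution to get CIE XYZ tristimulus values — can be sketched as below. This is only the standard colorimetric integration, not the study's full appearance model; the sampled color-matching functions are assumed given:

```python
import numpy as np

def tristimulus(reflectance, illuminant, cmf_x, cmf_y, cmf_z):
    """CIE XYZ of a surface under a given illuminant, from spectra sampled
    at common wavelengths. Normalized so a perfect white (reflectance = 1
    everywhere) has Y = 100 under that illuminant."""
    k = 100.0 / np.sum(illuminant * cmf_y)   # normalization constant
    stim = reflectance * illuminant          # light reaching the eye
    X = k * np.sum(stim * cmf_x)
    Y = k * np.sum(stim * cmf_y)
    Z = k * np.sum(stim * cmf_z)
    return X, Y, Z
```

Changing only the `illuminant` array while keeping `reflectance` fixed is exactly the situation the study examines: the physical color is unchanged, yet the resulting XYZ (and hence perceived appearance) shifts.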

Real-Time Face Tracking Algorithm Robust to Illumination Variations (조명 변화에 강인한 실시간 얼굴 추적 알고리즘)

  • Lee, Yong-Beom;You, Bum-Jae;Lee, Seong-Whan;Kim, Kwang-Bae
    • Proceedings of the KIEE Conference / 2000.07d / pp.3037-3040 / 2000
  • Real-time object tracking has emerged as an important component in several application areas, including machine vision, surveillance, human-computer interaction, and image-based control, and various algorithms have been developed over the years. In many cases, however, they have shown limited results in uncontrolled situations such as illumination changes or cluttered backgrounds. In this paper, we present a novel, computationally efficient algorithm for tracking human faces robustly under illumination changes and against cluttered backgrounds. Previous algorithms usually define the color model as a 2D membership function in a color space, without accounting for illumination changes. Our new algorithm, by contrast, constructs a 3D color model by analyzing many images acquired under various illumination conditions. The algorithm is applied to a mobile head-eye robot and tested in various uncontrolled environments. It can track a human face at more than 100 frames per second, excluding image acquisition time.


Normalized Region Extraction of Facial Features by Using Hue-Based Attention Operator (색상기반 주목연산자를 이용한 정규화된 얼굴요소영역 추출)

  • 정의정;김종화;전준형;최흥문
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.6C / pp.815-823 / 2004
  • A hue-based attention operator and a combinational integral projection function (CIPF) are proposed to extract normalized face and facial-feature regions robustly against illumination variation. Face candidate regions are efficiently detected using a skin color filter, and the eyes are located accurately and robustly against illumination variation by applying the proposed hue- and symmetry-based attention operator to the face candidate regions. The faces are then confirmed by verifying the eyes with a color-based eye variance filter. The proposed CIPF, which combines weighted hue and intensity, is applied to detect the accurate vertical locations of the eyebrows and the mouth under illumination variations and in the presence of a mustache. The global face and its local feature regions are exactly located and normalized based on this accurate geometrical information. Experimental results on the AR face database[8] show that the proposed eye detection method yields a detection rate about 39.3% higher than the conventional gray GST-based method. As a result, normalized facial features can be extracted robustly and consistently based on the exact eye locations under illumination variations.
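
A plain (unweighted) vertical integral projection — the building block that the CIPF extends with hue weighting — can be sketched in a few lines; this is a generic illustration, not the paper's combined function:

```python
import numpy as np

def vertical_integral_projection(image):
    """Mean intensity of each image row. In a face region, local minima
    of this profile tend to mark dark horizontal features such as the
    eyebrows, eyes, and mouth, giving their vertical locations."""
    return np.asarray(image, dtype=float).mean(axis=1)
```

The paper's CIPF replaces raw intensity with a weighted combination of hue and intensity, which is what keeps the eyebrow/mouth minima stable under illumination changes.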

Real-Time Face Recognition System Based on Illumination-insensitive MCT and Frame Consistency (조명변화에 강인한 MCT와 프레임 연관성 기반 실시간 얼굴인식 시스템)

  • Cho, Gwang-Shin;Park, Su-Kyung;Sim, Dong-Gyu;Lee, Soo-Youn
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.3 / pp.123-134 / 2008
  • In this paper, we propose a real-time face recognition system that is robust under various lighting conditions. The Modified Census Transform (MCT) algorithm, which is insensitive to illumination variations, is employed to extract local structure features. In a practical face recognition system, images acquired through a camera are likely to be blurred, and some of them may be side-view faces, which can lead to unacceptable performance. To improve the stability of a practical face recognition system, we propose a real-time algorithm that rejects unusable face images and exploits the recognition consistency between successive frames. Experimental results on the Yale database, which contains large illumination variations, show that the proposed approach is approximately 20% better than conventional appearance-based approaches. We also found that the proposed real-time method is more stable than existing methods that produce a recognition result for each frame.
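
The Modified Census Transform mentioned above compares every pixel of a 3×3 neighborhood (including the center) against the neighborhood mean and packs the results into a 9-bit code. A straightforward, unoptimized sketch:

```python
import numpy as np

def mct(image):
    """Modified Census Transform: for each interior pixel, compare all 9
    pixels of its 3x3 neighborhood against the neighborhood mean and pack
    the comparison bits into a 9-bit code. The code depends only on local
    intensity ordering, which is why it is insensitive to monotonic
    illumination changes (e.g. gain and offset)."""
    img = np.asarray(image, dtype=float)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.int32)
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            bits = (patch > patch.mean()).astype(np.int32).ravel()
            code = 0
            for b in bits:
                code = (code << 1) | int(b)  # pack 9 bits, row-major
            out[y, x] = code
    return out
```

Because the comparison is against the local mean, multiplying the image by a positive gain or adding an offset leaves every MCT code unchanged, which is the illumination robustness the abstract refers to.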

A Novel Face Recognition Method Robust to Illumination Changes (조명 변화에 강인한 얼굴 인식 방법)

  • 양희성;김유호;이준호
    • Proceedings of the IEEK Conference / 1999.11a / pp.460-463 / 1999
  • We present an efficient face recognition method that is robust to illumination changes, which we call SKKUfaces. We first compute eigenfaces from the training images and then apply Fisher discriminant analysis using the obtained eigenfaces, excluding the eigenfaces corresponding to the first few largest eigenvalues. In this way, SKKUfaces can achieve maximum class separability without considering the eigenfaces that are responsible for illumination changes, facial expressions, and eyewear. In addition, we have developed a method that efficiently computes the between-class and within-class scatter matrices in terms of memory space and computation time. We have tested the performance of SKKUfaces on the YALE and SKKU face databases. Initial experimental results show that SKKUfaces performs significantly better than Fisherfaces on input images with large variations in lighting and eyewear.
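
The "drop the leading eigenfaces" step described above can be sketched with a plain SVD-based PCA. This shows only that step under simplifying assumptions (rows of `X` are vectorized faces; the subsequent Fisher discriminant stage is omitted):

```python
import numpy as np

def project_dropping_leading(X, n_drop=3, n_keep=10):
    """PCA on face vectors (rows of X), discarding the n_drop leading
    eigenfaces -- the components that tend to capture illumination --
    and projecting onto the next n_keep components."""
    Xc = X - X.mean(axis=0)
    # right singular vectors of the centered data are the eigenfaces,
    # ordered by decreasing singular value (i.e., decreasing eigenvalue)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[n_drop:n_drop + n_keep]   # skip the leading components
    return Xc @ basis.T                  # low-dimensional features
```

Fisher discriminant analysis would then be run on these features, so class separability is maximized in a subspace that excludes the illumination-dominated directions.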


A Deblocking Filtering Method for Illumination Compensation in Multiview Video Coding (다시점 비디오 코딩에서 휘도 보상 방법에 적합한 디블록킹 필터링 방법)

  • Park, Min-Woo;Park, Gwang-Hoon
    • Journal of Broadcast Engineering / v.13 no.3 / pp.401-410 / 2008
  • Multiview Video Coding contains a macroblock-based illumination compensation tool that compensates for variations in illumination across views or over time, and this tool has improved the coding efficiency of Multiview Video Coding. However, the illumination compensation tool also introduces blocking artifacts, a subjective drawback caused by the macroblock-based compensation of mean values. A deblocking filtering method for Multiview Video Coding identical to that of H.264/AVC does not consider the illumination difference between illumination-compensated blocks, so it cannot effectively eliminate these blocking artifacts. Therefore, this paper analyzes the blocking artifacts caused by illumination compensation and proposes a method that effectively eliminates them with minimal changes to the H.264 deblocking filtering method. In the simulation results, the blocking artifacts are clearly eliminated in subjective comparisons, and the average bit-rate reduction is up to 1.44%.