• Title/Summary/Keyword: Multiple Distance Face Image

The Long Distance Face Recognition using Multiple Distance Face Images Acquired from a Zoom Camera (줌 카메라를 통해 획득된 거리별 얼굴 영상을 이용한 원거리 얼굴 인식 기술)

  • Moon, Hae-Min; Pan, Sung Bum
    • Journal of the Korea Institute of Information Security & Cryptology / v.24 no.6 / pp.1139-1145 / 2014
  • User recognition technology, which identifies or verifies a particular individual, is essential for intelligent services in robotic environments. The conventional face recognition algorithm that uses single-distance face images for training suffers a drop in recognition rate as distance increases. An algorithm trained on face images actually captured at each distance performs well, but it requires considerable user cooperation. This paper proposes an LDA-based long-distance face recognition method that uses multiple-distance face images acquired from a zoom camera as training images. The proposed technique improved recognition performance by 7.8% on average over training with a single-distance face image. Compared with training on face images captured at each actual distance, performance fell by 8.0% on average; however, the proposed method takes less time and demands less cooperation from users when capturing face images.
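The core training idea above (one LDA model fit on face images of each user captured at several distances) can be sketched as follows. This is a minimal illustration, not the authors' code; the feature dimensions, subject counts, and the noise model standing in for distance degradation are all assumptions.

```python
# Minimal sketch: train an LDA classifier on face feature vectors gathered at several
# camera-to-subject distances, assuming faces are already detected, cropped, and
# flattened to fixed-length vectors (synthetic stand-ins here).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_subjects, n_distances, dim = 5, 4, 16          # hypothetical sizes

samples, labels = [], []
for subject in range(n_subjects):
    base = rng.normal(size=dim)                  # stand-in for a subject's face appearance
    for d in range(n_distances):                 # e.g. images re-acquired at 1 m .. 4 m via zoom
        blur = 0.2 * (d + 1)                     # farther images are noisier/blurrier
        samples.append(base + rng.normal(scale=blur, size=dim))
        labels.append(subject)

X, y = np.array(samples), np.array(labels)
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Probe image taken at an unknown distance
probe = X[7] + rng.normal(scale=0.3, size=dim)
print("predicted subject:", lda.predict(probe.reshape(1, -1))[0])
```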

Long Distance Face Recognition System using the Automatic Face Image Creation by Distance (거리별 얼굴영상 자동 생성 방법을 이용한 원거리 얼굴인식 시스템)

  • Moon, Hae Min; Pan, Sung Bum
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.11 / pp.137-145 / 2014
  • This paper proposes an LDA-based long-distance face recognition algorithm for intelligent surveillance systems. The existing algorithm that uses single-distance face images for training suffers a drop in recognition rate as distance increases. An algorithm trained on face images actually captured at each distance performs well, but it inconveniences the user, who must physically move from one to five meters away to provide face images during initial registration. The proposed method instead takes a single-distance face image and automatically generates face images for various distances to use as training images. Test results showed that the proposed technique improved recognition performance over single-distance training by 16.3% on average at short distance and 18.0% at long distance. Compared with training on face images captured at each actual distance, performance fell 4.3% on average at close distance and remained the same at long distance.
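A minimal sketch of the idea described above: synthesizing "by-distance" training images from one close-range registration image by shrinking it to the resolution a distant camera would capture and scaling it back. This is not the authors' algorithm; the file name, target size, and the simple size-inversely-proportional-to-distance assumption are illustrative only.

```python
# Sketch: generate a by-distance training set from a single close-range face crop.
import cv2

face = cv2.imread("registered_face_1m.png", cv2.IMREAD_GRAYSCALE)  # hypothetical registration crop
assert face is not None, "provide a face crop captured at the registration distance"

synthetic_training_set = {}
for distance_m in (1, 2, 3, 4, 5):
    scale = 1.0 / distance_m                      # crude pinhole assumption: apparent size ~ 1/distance
    small = cv2.resize(face, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    # Restore the fixed input size expected by the recognizer (e.g. 64x64)
    restored = cv2.resize(small, (64, 64), interpolation=cv2.INTER_LINEAR)
    synthetic_training_set[distance_m] = restored

print({d: img.shape for d, img in synthetic_training_set.items()})
```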

Performance Analysis of Face Recognition by Distance according to Image Normalization and Face Recognition Algorithm (영상 정규화 및 얼굴인식 알고리즘에 따른 거리별 얼굴인식 성능 분석)

  • Moon, Hae-Min; Pan, Sung Bum
    • Journal of the Korea Institute of Information Security & Cryptology / v.23 no.4 / pp.737-742 / 2013
  • Surveillance systems are becoming intelligent enough to judge and respond on their own by using human recognition techniques. Existing face recognition works well at short distances, but the recognition rate drops at long distances. In this paper, we analyze face recognition performance by interpolation method and recognition algorithm when multiple-distance face images are used for training. We use nearest-neighbor, bilinear, bicubic, and Lanczos3 interpolation to normalize the face images, and PCA and LDA for recognition. The experimental results show that LDA-based face recognition combined with bilinear interpolation provides the best performance.
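The normalization step being compared can be sketched as below: a small long-distance face crop is upsampled to the recognizer's input size with the four interpolation kernels named in the abstract. The OpenCV flag names are real; the input file, target size, and the use of INTER_LANCZOS4 as the closest available stand-in for Lanczos3 are assumptions.

```python
# Sketch: normalize one low-resolution distant face crop with several interpolation kernels.
import cv2

low_res_face = cv2.imread("face_5m.png", cv2.IMREAD_GRAYSCALE)   # hypothetical distant crop
assert low_res_face is not None

target = (64, 64)
kernels = {
    "nearest":  cv2.INTER_NEAREST,
    "bilinear": cv2.INTER_LINEAR,
    "bicubic":  cv2.INTER_CUBIC,
    "lanczos":  cv2.INTER_LANCZOS4,   # OpenCV ships an 8x8 Lanczos kernel; closest available option
}
normalized = {name: cv2.resize(low_res_face, target, interpolation=flag)
              for name, flag in kernels.items()}
# Each normalized image would then be fed to PCA- or LDA-based recognition for comparison.
print(list(normalized))
```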

3D Face Recognition using Local Depth Information

  • 이영학; 심재창; 이태홍
    • Journal of KIISE: Software and Applications / v.29 no.11 / pp.818-825 / 2002
  • Depth information is one of the most important factors in recognizing a digital face image. Range images are very useful when comparing one face with others because they carry depth information. Since processing the whole face produces a large amount of computation and data, face images can instead be represented as a vector of feature descriptors for local areas. In this paper, depth regions of a three-dimensional (3D) face image are extracted as contour lines at selected depth values. These are resampled and stored in consecutive locations of a feature vector using a multiple-feature method. Two faces are compared by their Euclidean distance in the feature space. This approach reduces the number of index entries in the database and uses fewer feature vectors than other methods, achieving high recognition rates from local depth information and a small number of features of the face.
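A rough, hypothetical sketch of the kind of local-depth feature described above: a range image is thresholded at a few depth values, each resulting contour region is resampled into a fixed number of radial measurements, and two faces are compared by the Euclidean distance between the concatenated vectors. The thresholds, sampling scheme, and synthetic range images are assumptions, not the paper's exact procedure.

```python
# Sketch: contour-based local depth features from a range image, matched by Euclidean distance.
import numpy as np

def depth_contour_features(range_image, levels=(0.3, 0.5, 0.7), samples=16):
    h, w = range_image.shape
    cy, cx = h // 2, w // 2
    feats = []
    for level in levels:                              # contour region at each chosen depth value
        mask = range_image >= level * range_image.max()
        ys, xs = np.nonzero(mask)
        if len(xs) == 0:
            feats.extend([0.0] * samples)
            continue
        angles = np.arctan2(ys - cy, xs - cx)
        radii = np.hypot(ys - cy, xs - cx)
        bins = np.linspace(-np.pi, np.pi, samples + 1)
        for lo, hi in zip(bins[:-1], bins[1:]):       # resample the contour extent at fixed angles
            sel = (angles >= lo) & (angles < hi)
            feats.append(radii[sel].max() if sel.any() else 0.0)
    return np.array(feats)

rng = np.random.default_rng(1)
face_a = rng.random((80, 80))        # stand-ins for registered range images
face_b = rng.random((80, 80))
dist = np.linalg.norm(depth_contour_features(face_a) - depth_contour_features(face_b))
print("Euclidean distance between feature vectors:", round(float(dist), 3))
```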

Deep Learning based Human Recognition using Integration of GAN and Spatial Domain Techniques

  • Sharath, S; Rangaraju, HG
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.127-136 / 2021
  • Real-time human recognition is a challenging task because images are captured in unconstrained environments with varying poses, makeup, and styles. This limitation is addressed by generating several facial images with different poses, makeup, and styles from a single reference image of a person using Generative Adversarial Networks (GANs). In this paper, we propose deep learning-based human recognition that integrates a GAN with spatial-domain techniques: several dissimilar face images are generated from a single reference face image using a Domain Transfer Generative Adversarial Network (DT-GAN), combined with feature extraction techniques such as Local Binary Patterns (LBP) and histograms. Euclidean Distance (ED) is used in the matching stage to compare features and evaluate the method. A database covering millions of people with a single reference face image per person, instead of multiple reference images, is created and stored on a centralized server, which reduces its memory load. Recognition accuracy is 100% for smaller datasets and slightly lower for larger datasets, and comparisons with existing methods show the advantage of the proposed approach.
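A minimal sketch of the hand-crafted half of the pipeline above: uniform LBP codes are computed for a face image, summarized as a normalized histogram, and two faces are compared by Euclidean distance. The GAN-based augmentation is out of scope here, and the synthetic images and parameter choices are assumptions.

```python
# Sketch: LBP histogram features matched with Euclidean distance.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, points=8, radius=1):
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2                                   # number of uniform LBP codes
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

rng = np.random.default_rng(2)
reference = rng.integers(0, 256, size=(96, 96)).astype(np.uint8)   # single reference face (stand-in)
probe     = rng.integers(0, 256, size=(96, 96)).astype(np.uint8)   # captured probe face (stand-in)

distance = np.linalg.norm(lbp_histogram(reference) - lbp_histogram(probe))
print("Euclidean distance between LBP histograms:", round(float(distance), 4))
```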

Detection of Faces Located at a Long Range with Low-resolution Input Images for Mobile Robots (모바일 로봇을 위한 저해상도 영상에서의 원거리 얼굴 검출)

  • Kim, Do-Hyung; Yun, Woo-Han; Cho, Young-Jo; Lee, Jae-Jeon
    • The Journal of Korea Robotics Society / v.4 no.4 / pp.257-264 / 2009
  • This paper proposes a novel face detection method that finds tiny faces at long range, even in low-resolution images captured by a mobile robot. The proposed approach can locate extremely small face regions of 12×12 pixels. We solve the tiny-face detection problem with a system of multiple detectors: a mean-shift color tracker, short- and long-range face detectors, and an omega-shape detector. The method adopts a long-range face detector trained well enough to detect tiny faces at a distance and limits its operation to a search region determined automatically by the mean-shift color tracker and the omega-shape detector. By constraining the face search region as much as possible, the proposed method accurately detects tiny faces at long distances even in low-resolution images and sharply reduces false positives. Experimental results on realistic databases show that the approach performs at a level practical for robot applications such as face recognition of non-cooperative users, human following, and gesture recognition for long-range interaction.
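The search-region limiting strategy above can be sketched roughly as follows: a color-histogram mean-shift tracker keeps a window on the person, and a face detector runs only inside that window rather than over the whole low-resolution frame. The Haar cascade used here, the video file name, and the initial window are assumptions; the paper's own short/long-range detectors and omega-shape detector are not reproduced.

```python
# Sketch: restrict face detection to a mean-shift-tracked color window.
import cv2

cap = cv2.VideoCapture("robot_camera.avi")                    # hypothetical input stream
face_det = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

ok, frame = cap.read()
assert ok, "could not read a frame"
track_win = (100, 50, 80, 160)                                # initial person region (x, y, w, h)
x, y, w, h = track_win
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [16], [0, 180])     # hue histogram of the tracked person
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, track_win = cv2.meanShift(backproj, track_win, criteria)       # update the search region
    x, y, w, h = track_win
    roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    faces = face_det.detectMultiScale(roi, 1.1, 3, minSize=(12, 12))  # allow tiny faces
    print("faces found in tracked window:", len(faces))
```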

A Novel Cross Channel Self-Attention based Approach for Facial Attribute Editing

  • Xu, Meng; Jin, Rize; Lu, Liangfu; Chung, Tae-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.6 / pp.2115-2127 / 2021
  • Although significant progress has been made in synthesizing visually realistic face images with Generative Adversarial Networks (GANs), effective approaches that provide fine-grained control over the generation process for semantic facial attribute editing are still lacking. In this work, we propose a novel cross-channel self-attention based generative adversarial network (CCA-GAN), which weights the importance of multiple feature channels and achieves pixel-level feature alignment and conversion, reducing the impact on irrelevant attributes while the target attributes are edited. Evaluation results show that CCA-GAN outperforms state-of-the-art models on the CelebA dataset, reducing Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) by 15~28% and 25~100%, respectively. Furthermore, visualization of generated samples confirms the disentanglement effect of the proposed model.
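Below is a generic sketch of a cross-channel self-attention block, in the spirit of the CCA module described above but not the authors' exact architecture, written with PyTorch; the layer sizes and the simple learnable residual weighting are assumptions.

```python
# Sketch: channel-wise self-attention over a generator feature map.
import torch
import torch.nn as nn

class ChannelSelfAttention(nn.Module):
    """Re-weights each feature channel by its affinity to every other channel."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))      # learnable residual weight

    def forward(self, x):                              # x: (B, C, H, W)
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                     # each channel flattened to a vector
        attn = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)  # (B, C, C) channel affinities
        out = attn @ flat                              # mix channels according to the affinities
        return self.gamma * out.view(b, c, h, w) + x   # residual connection

feat = torch.randn(2, 32, 16, 16)                      # dummy feature map
print(ChannelSelfAttention()(feat).shape)              # torch.Size([2, 32, 16, 16])
```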

Real time tracking of multiple humans for mobile robot application

  • Park, Joon-Hyuk; Park, Byung-Soo; Lee, Seok; Park, Sung-Kee; Kim, Munsang
    • Institute of Control, Robotics and Systems Conference Proceedings / 2002.10a / pp.100.3-100 / 2002
  • This paper presents a method for robustly detecting and tracking multiple humans from a mobile platform. Humans are perceived in real time by processing images acquired from a moving stereo vision system. We integrate multiple cues, such as human shape, skin color, and depth information, to detect and track each human against a moving background scene. Human shape is measured by edge-based template matching on a distance-transformed image. To improve the robustness of human detection, we use face skin color in the HSV color space. We further increase the accuracy and robustness of both detection and tracking by applying random sampling stochastic estimati...
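Two of the cues integrated above can be sketched as follows: a skin-color mask in HSV space and a distance transform of the edge image, over which an edge-based human-shape template would be matched chamfer-style. The threshold values and file name are assumptions, not the paper's parameters.

```python
# Sketch: HSV skin-color cue and distance-transformed edge map for shape matching.
import cv2
import numpy as np

frame = cv2.imread("stereo_left.png")                 # hypothetical frame from the stereo rig
assert frame is not None

# Cue 1: skin color in HSV space
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
skin_mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

# Cue 2: distance transform of the edge image (support for edge-based template matching)
edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 80, 160)
dist_map = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 3)

# A human-shape edge template would be slid over dist_map; a low summed distance means a match.
print("skin pixels:", int(np.count_nonzero(skin_mask)),
      "mean edge distance:", float(dist_map.mean()))
```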

Adaptive Background Modeling Considering Stationary Object and Object Detection Technique based on Multiple Gaussian Distribution

  • Jeong, Jongmyeon; Choi, Jiyun
    • Journal of the Korea Society of Computer and Information / v.23 no.11 / pp.51-57 / 2018
  • In this paper, we study parameter extraction and the implementation of a speechreading system to recognize the eight Korean vowels. Facial features are detected by amplifying and reducing image values and comparing the values represented in various color spaces. The eye positions, the nose position, the inner lip boundary, the outer boundary of the upper lip, and the outer tooth line are located as features; from these, the inner-lip area, the inner-lip height and width, the ratio of the outer tooth-line length to the inner-mouth area, and the distance between the nose and the outer boundary of the upper lip are used as parameters. A total of 2,400 samples were gathered and analyzed. Based on this analysis, a neural network was constructed and recognition experiments were performed with five subjects. Observational error between subjects was corrected with a normalization method. The experiments show very encouraging results regarding the usefulness of the parameters.
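A minimal, hypothetical sketch of the geometric parameters listed in the abstract (inner-lip width, height, and area, plus the nose-to-upper-lip distance), computed from assumed landmark coordinates; landmark detection itself is taken as given and all coordinates are illustrative.

```python
# Sketch: lip geometry parameters from hypothetical facial landmarks.
import numpy as np

# Hypothetical landmarks in pixel coordinates (x, y)
nose_tip        = np.array([120.0, 90.0])
upper_lip_outer = np.array([120.0, 130.0])
inner_lip = np.array([[100, 145], [110, 140], [130, 140], [140, 145],
                      [130, 152], [110, 152]], dtype=float)   # closed inner-lip contour

width  = inner_lip[:, 0].max() - inner_lip[:, 0].min()
height = inner_lip[:, 1].max() - inner_lip[:, 1].min()
# Shoelace formula for the polygon area of the inner-lip contour
x, y = inner_lip[:, 0], inner_lip[:, 1]
area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
nose_to_lip = np.linalg.norm(nose_tip - upper_lip_outer)

features = np.array([width, height, area, nose_to_lip])       # inputs to the classifier
print(features)
```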

PERSONAL SPACE-BASED MODELING OF RELATIONSHIPS BETWEEN PEOPLE FOR NEW HUMAN-COMPUTER INTERACTION

  • Amaoka, Toshitaka; Laga, Hamid; Saito, Suguru; Nakajima, Masayuki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.746-750 / 2009
  • In this paper we focus on Personal Space (PS) as a nonverbal communication concept for building a new form of human-computer interaction. Analyzing people's positions with respect to their PS gives an indication of the nature of their relationship. We propose to analyze and model the PS using Computer Vision (CV) and to visualize it using computer graphics. For this purpose, we define the PS based on four parameters: the distance between people, their face orientations, age, and gender. We estimate the first two parameters automatically from image sequences using CV technology, while the other two are set manually. Finally, we calculate the two-dimensional relationships of multiple persons and visualize them as 3D contours in real time. Our method can sense and visualize invisible and unconscious PS distributions and convey the spatial relationships of users through an intuitive visual representation. The results can be applied to human-computer interaction in public spaces.
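A small, hypothetical sketch of how the first two PS parameters (inter-person distance and face orientation) might drive a personal-space score; the egg-shaped directional model and all constants are assumptions, and age and gender, which the paper sets manually, are folded here into a single base-radius constant.

```python
# Sketch: a toy personal-space score from distance and face orientation.
import math

def personal_space_overlap(distance_m, facing_angle_deg, base_radius_m=1.2):
    """Return a 0..1 score of how strongly one person's PS is engaged by another."""
    # PS is assumed larger in the facing direction (a common egg-shaped PS model)
    directional_radius = base_radius_m * (1.0 + 0.5 * math.cos(math.radians(facing_angle_deg)))
    return max(0.0, 1.0 - distance_m / directional_radius)

# Two people 1 m apart: facing each other (0 deg) vs. facing away (180 deg)
print(personal_space_overlap(1.0, 0))
print(personal_space_overlap(1.0, 180))
```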
