• Title/Summary/Keyword: Vision Face Detection

96 search results

Face-Mask Detection with Micro processor (마이크로프로세서 기반의 얼굴 마스크 감지)

  • Lim, Hyunkeun;Ryoo, Sooyoung;Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.3
    • /
    • pp.490-493
    • /
    • 2021
  • This paper proposes an embedded system that performs face and mask detection on a microprocessor instead of the popular Nvidia Jetson development board. We use a class of efficient models called MobileNets, designed for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depthwise separable convolutions to build lightweight deep neural networks. The device used is a Maix development board with a CNN hardware acceleration function, and the training model is a MobileNet_V2-based SSD (Single Shot Multibox Detector) optimized for mobile devices. The training set consists of 7,553 face images from Kaggle. On the test dataset, the AUC (Area Under the Curve) reaches 0.98.
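The efficiency that makes MobileNets fit on a microprocessor comes from replacing each standard convolution with a depthwise convolution plus a 1x1 pointwise convolution. A minimal sketch of the parameter-count comparison (the kernel size and channel counts below are illustrative, not taken from the paper):

```python
def standard_conv_params(k, c_in, c_out):
    # a k x k kernel for every (input channel, output channel) pair
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # one k x k kernel per input channel, then a 1x1 pointwise convolution
    return k * k * c_in + c_in * c_out
```

For a 3x3 convolution with 32 input and 64 output channels, the separable version needs 2,336 parameters instead of 18,432, roughly an eightfold reduction.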

Design and Implementation of Real-time High Performance Face Detection Engine (고성능 실시간 얼굴 검출 엔진의 설계 및 구현)

  • Han, Dong-Il;Cho, Hyun-Jong;Choi, Jong-Ho;Cho, Jae-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.2
    • /
    • pp.33-44
    • /
    • 2010
  • This paper proposes a real-time face detection hardware architecture for robot vision applications. The proposed architecture is robust against illumination changes and operates at no less than 60 frames per second. It uses the Modified Census Transform (MCT) to obtain face features that are robust against illumination changes, and the AdaBoost algorithm is adopted to learn and generate the characteristics of the face data, which are then used to detect faces. The paper describes the face detection hardware, composed of a Memory Interface, Image Scaler, MCT Generator, Candidate Detector, Confidence Comparator, Position Resizer, Data Grouper, and Detected Result Display, and its verification on a Xilinx Virtex-5 LX330 FPGA. Verification with camera images showed that up to 32 faces per frame can be detected at speeds of up to 149 frames per second.
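The Modified Census Transform at the heart of this pipeline encodes each 3x3 neighborhood as a 9-bit index by comparing every pixel, including the center, against the neighborhood mean, which cancels local illumination offset and gain. A software sketch of the transform (the paper implements it in hardware):

```python
import numpy as np

def mct(img):
    # Modified Census Transform: one 9-bit code per interior pixel,
    # each bit set when the corresponding 3x3 neighbour exceeds the mean
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3].astype(np.float64)
            bits = (patch > patch.mean()).flatten()
            out[i, j] = sum(int(b) << k for k, b in enumerate(bits))
    return out
```

Because only comparisons against the local mean survive, scaling the intensities or adding a constant leaves the codes unchanged, which is the illumination robustness the detector relies on.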

EAR: Enhanced Augmented Reality System for Sports Entertainment Applications

  • Mahmood, Zahid;Ali, Tauseef;Muhammad, Nazeer;Bibi, Nargis;Shahzad, Imran;Azmat, Shoaib
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.12
    • /
    • pp.6069-6091
    • /
    • 2017
  • Augmented Reality (AR) overlays virtual information on real-world data, such as displaying useful information on videos/images of a scene. This paper presents an Enhanced AR (EAR) system that displays useful statistical information about players on captured images of a sports game. We focus on the situation where the input image is degraded by strong sunlight. The proposed EAR system includes an image enhancement technique that improves the accuracy of subsequent player and face detection, followed by player and face detection, face recognition, and display of player statistics. First, an algorithm based on multi-scale retinex is proposed for image enhancement. Then, to detect players and faces, we use adaptive boosting with Haar features for feature extraction and classification. The player face recognition algorithm uses boosted linear discriminant analysis to select features and a nearest-neighbor classifier for classification. The system can be adjusted to work with different types of sports where the input is an image and the desired output is information displayed near the recognized players. Simulations are carried out on 2,096 images containing players in diverse conditions. The proposed EAR system demonstrates the great potential of computer vision based approaches for developing AR applications.
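Multi-scale retinex, the enhancement step, subtracts log-blurred illumination estimates at several scales from the log image. A minimal single-channel sketch (the scales and the separable Gaussian below are illustrative defaults, not the paper's exact settings):

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    # separable Gaussian blur: filter the rows, then the columns
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 0, rows, k, mode='same')

def multi_scale_retinex(img, sigmas=(15, 80, 250)):
    img = img.astype(np.float64) + 1.0          # avoid log(0)
    return sum(np.log(img) - np.log(blur(img, s)) for s in sigmas) / len(sigmas)
```

The output is a zero-centered reflectance estimate and must be rescaled to the display range before further detection stages.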

Development of an Emotion Recognition Robot using a Vision Method (비전 방식을 이용한 감정인식 로봇 개발)

  • Shin, Young-Geun;Park, Sang-Sung;Kim, Jung-Nyun;Seo, Kwang-Kyu;Jang, Dong-Sik
    • IE interfaces
    • /
    • v.19 no.3
    • /
    • pp.174-180
    • /
    • 2006
  • This paper deals with a robot system that recognizes a human's expression from a detected face and then displays that emotion. The face detection method is as follows. First, the RGB color space is converted to the CIELab color space. Second, skin-color candidate regions are extracted. Third, a face is detected through geometrical interrelations using a face filter. The positions of the eyebrows, eyes, and mouth are then located and used as preliminary data for expression recognition, and their changes are sent to a robot through serial communication. The robot then drives its installed motors to reproduce the human's expression. Experimental results on 10 persons show 78.15% accuracy.

Robust human tracking via key face information

  • Li, Weisheng;Li, Xinyi;Zhou, Lifang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.10
    • /
    • pp.5112-5128
    • /
    • 2016
  • Tracking the human body is an important problem in computer vision. Tracking failures caused by occlusion can lead to wrong rectification of the target position. In this paper, a robust human tracking algorithm is proposed to address occlusion and rotation and to improve tracking accuracy. It is based on the Tracking-Learning-Detection (TLD) framework. Key auxiliary information is used in the framework, motivated by the fact that a tracking target is usually embedded in context that provides useful information. First, a face localization method is used to find key face location information. Second, a relative position relationship is established between the auxiliary information and the target location. With this model, the key face information yields the current target position when the target has disappeared, so the target can be tracked stably even when it is partially or fully occluded. Experiments are conducted on various challenging videos. In conjunction with online updates, the results demonstrate that the proposed method outperforms the traditional TLD algorithm and performs better than other state-of-the-art methods.
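The relative-position model can be pictured as maintaining a running offset between the tracked face and the body target; when the target is occluded, the face position plus the offset predicts where the target is. A toy sketch of that idea (the class name and smoothing factor are illustrative, not the paper's formulation):

```python
import numpy as np

class FaceAnchoredTracker:
    def __init__(self):
        self.offset = np.zeros(2)  # running mean of (target - face) in pixels

    def update(self, face_xy, target_xy, alpha=0.2):
        # online update of the face-to-target offset while both are visible
        delta = np.asarray(target_xy, float) - np.asarray(face_xy, float)
        self.offset = (1 - alpha) * self.offset + alpha * delta

    def recover(self, face_xy):
        # predict the occluded target position from the face alone
        return np.asarray(face_xy, float) + self.offset
```

The real method builds this relationship inside the TLD detection/learning loop rather than as a simple exponential average.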

Segmentation of Pointed Objects for Service Robots (서비스 로봇을 위한 지시 물체 분할 방법)

  • Kim, Hyung-O;Kim, Soo-Hwan;Kim, Dong-Hwan;Park, Sung-Kee
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.2
    • /
    • pp.139-146
    • /
    • 2009
  • This paper describes how a robot extracts an unknown object indicated by a person's pointing gesture. Using a stereo vision sensor, the proposed method consists of three stages: detection of the operator's face, estimation of the pointing direction, and extraction of the pointed object. The operator's face is recognized using Haar-like features. We then estimate the 3D pointing direction from the shoulder-to-hand line. Finally, we segment the unknown object from the 3D point cloud in the estimated region of interest. Based on this method, we implemented an object registration system on our mobile robot and obtained reliable experimental results.
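The shoulder-to-hand line defines a 3D ray, and candidate points for the pointed object are those lying close to that ray in front of the hand. A geometric sketch of that selection (the radius and range thresholds are illustrative assumptions):

```python
import numpy as np

def points_near_pointing_ray(shoulder, hand, cloud, radius=0.1, max_range=2.0):
    d = hand - shoulder
    d = d / np.linalg.norm(d)                 # unit pointing direction
    v = cloud - hand                          # vectors from the hand to each point
    t = v @ d                                 # signed distance along the ray
    perp = np.linalg.norm(v - np.outer(t, d), axis=1)  # distance from the ray
    return cloud[(t > 0) & (t < max_range) & (perp < radius)]
```

The surviving points form the region of interest from which the object itself is then segmented.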

An Improved Object Detection Method using Hausdorff Distance based on Elastic Deformation Energy (탄성변형 에너지 기반 Hausdorff 거리를 이용한 개선된 객체검출)

  • Won, Bo-Whan;Koo, Ja-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.12 no.2 s.46
    • /
    • pp.71-76
    • /
    • 2007
  • Object detection, which decides whether meaningful objects exist in a given image, is a crucial part of image recognition in computer vision systems. The Hausdorff distance metric has been used in object detection and shows good results in applications such as face recognition. It defines the dissimilarity between two sets of points and is used to find the object most similar to a given model. This paper proposes a Hausdorff distance based detection method that uses the directional information of points to improve detection accuracy when the point sets are derived from edge extraction, as is usually the case. In this method, the elastic energy needed to make two directional points coincide is used as the measure of similarity.
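The standard (undirected) Hausdorff distance that this method extends can be written in a few lines; the paper's contribution replaces the plain point distance with an elastic-deformation energy over directional edge points, which is not reproduced here:

```python
import numpy as np

def directed_hausdorff(A, B):
    # h(A, B) = max over a in A of (min over b in B of ||a - b||)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(A, B):
    # the symmetric Hausdorff distance between point sets A and B
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```

Detection then amounts to sliding the model point set over the image's edge points and keeping locations where this distance is small.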

Gaze Detection Using Two Neural Networks (다중 신경망을 이용한 사용자의 응시 위치 추출)

  • 박강령;이정준;이동재;김재희
    • Proceedings of the IEEK Conference
    • /
    • 1999.06a
    • /
    • pp.587-590
    • /
    • 1999
  • Gaze detection is to locate the position on a monitor screen where a user is looking. We implement it with a computer vision system that places a camera above the monitor while the user moves (rotates and/or translates) his or her face to gaze at different positions on the monitor. We have tried several different approaches, and among them the two-neural-network approach described in this paper shows the best results (1.7-inch error for test data including facial rotation; 3.1-inch error for test data including facial rotation and translation).

Face Tracking Using Face Feature and Color Information (색상과 얼굴 특징 정보를 이용한 얼굴 추적)

  • Lee, Kyong-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.11
    • /
    • pp.167-174
    • /
    • 2013
  • In this paper, we implemented a system that finds faces in color images and tracks them. Face tracking means finding face regions in an image using the functions of a computer system, and this capability is necessary for robots. Face tracking cannot be performed by skin-color extraction alone, because a face's appearance in an image varies with conditions such as lighting and facial expression. In this paper, we add a lighting-compensation function to skin-color pixel extraction, and the entire processing system, including confirming a face by finding the features of the eyes, nose, and mouth, was implemented. The lighting compensation is an adjusted sine function; although its output is not suitable for human viewing, it improved performance by about 4%. Facial features are detected by amplifying and reducing pixel values and comparing the resulting images, locating the positions of the eyes, nose, and lips. Face tracking efficiency was good.
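Skin-color pixel extraction of the kind this pipeline starts from can be sketched as a box rule in normalized-RGB chromaticity. The thresholds below are illustrative assumptions, not the paper's values, and the paper's sine-based lighting compensation is omitted:

```python
import numpy as np

def skin_mask(rgb):
    # rough skin-pixel rule on normalized-RGB chromaticity
    # (illustrative thresholds; real systems tune these per camera)
    rgb = rgb.astype(np.float64)
    s = rgb.sum(axis=2) + 1e-6          # per-pixel intensity sum
    r = rgb[..., 0] / s                 # red chromaticity
    g = rgb[..., 1] / s                 # green chromaticity
    return (r > 0.35) & (r < 0.55) & (g > 0.25) & (g < 0.38)
```

Normalizing by the intensity sum removes some brightness dependence, which is why chromaticity rules pair naturally with an explicit lighting-compensation step.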

Driver Drowsiness Detection Algorithm based on Facial Features (얼굴 특징점 기반의 졸음운전 감지 알고리즘)

  • Oh, Meeyeon;Jeong, Yoosoo;Park, Kil-Houm
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.11
    • /
    • pp.1852-1861
    • /
    • 2016
  • Drowsy driving is a significant factor in traffic accidents, so driver drowsiness detection systems based on computer vision have been actively studied for convenience and safety. However, it is difficult to accurately detect driver drowsiness under complex backgrounds and environmental changes. This paper proposes a driver drowsiness detection algorithm that determines whether the driver is drowsy by measuring yawns, drowsy-eye state, and nodding based on facial features. The proposed algorithm detects driver drowsiness against complex backgrounds, is robust to environmental changes, and its fast processing speed allows real-time application. Throughout the experiments, we confirmed that the algorithm reliably detects driver drowsiness. The processing speed of the proposed algorithm is about 0.084 ms, and it achieves average detection rates of 98.48% in the daytime and 97.37% at nighttime for yawns, drowsy eyes, and nods.
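A common facial-feature measure for the drowsy-eye check is the eye aspect ratio over six eye-contour landmarks, which drops toward zero as the eyelids close. This is a generic sketch of that measure, not necessarily the exact criterion used in the paper:

```python
import numpy as np

def eye_aspect_ratio(eye):
    # eye: 6 landmark points ordered around the contour
    # (corner, two upper-lid points, corner, two lower-lid points)
    a = np.linalg.norm(eye[1] - eye[5])   # first vertical opening
    b = np.linalg.norm(eye[2] - eye[4])   # second vertical opening
    c = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (a + b) / (2.0 * c)
```

Drowsiness detectors typically flag eyes that stay below an aspect-ratio threshold for several consecutive frames, analogous to the drowsy-eye state measured here.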