• Title/Summary/Keyword: Face detection and tracking


Implementation of Face Recognition Applications for Factory Work Management

  • Rho, Jungkyu;Shin, Woochang
    • International journal of advanced smart convergence
    • /
    • v.9 no.3
    • /
    • pp.246-252
    • /
    • 2020
  • Facial recognition is a biometric technology used in various fields such as user authentication and identification of human characteristics. Although face recognition applications are in practical use in many fields, very few have been developed to improve the factory work environment. We implemented applications that use face recognition to identify a specific employee in a factory work environment and provide customized information for each employee. Factory workers need documents describing their assigned work. With our application, factory managers can register the documents needed by each worker, and workers can view the documents assigned to them. Each worker is identified using face recognition, and by tracking the worker's face during work, the system can confirm that the worker is present in the workplace. In addition, a mobile app for workers is provided so that workers can view the contents on a tablet, and we defined a simple communication protocol to exchange information between our applications. We demonstrated the applications in a factory work environment and found that several improvements were required for practical use. We expect these results can be used to improve factory work environments.

Optimum Region-of-Interest Acquisition for Intelligent Surveillance System using Multiple Active Cameras

  • Kim, Young-Ouk;Park, Chang-Woo;Sung, Ha-Gyeong;Park, Chang-Han;Namkung, Jae-Chan
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.628-631
    • /
    • 2003
  • In this paper, we present a real-time, accurate face region detection and tracking technique for an intelligent surveillance system. Obtaining high-resolution images is very important, since it enables accurate identification of an object of interest. Conventional surveillance or security systems, however, usually provide poor image quality because they use one or more fixed cameras and keep recording scenes without any active control. We implemented a real-time surveillance system that tracks a moving person using four pan-tilt-zoom (PTZ) cameras. While tracking, the region of interest (ROI) is obtained by using a low-pass filter and background subtraction. Color information in the ROI is updated to extract features for optimal tracking and zooming. Experiments with real human faces showed highly acceptable results in terms of both accuracy and computational efficiency.

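The ROI acquisition described above combines background subtraction with low-pass filtering of the tracked region. A minimal sketch of both steps follows; the difference threshold and smoothing factor are illustrative assumptions, not values from the paper:

```python
import numpy as np

def roi_from_background_subtraction(frame, background, thresh=30):
    """Bounding box of pixels that differ from the background model."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff > thresh
    if not mask.any():
        return None                          # no foreground detected
    ys, xs = np.nonzero(mask)
    return (xs.min(), ys.min(), xs.max(), ys.max())  # (x0, y0, x1, y1)

def smooth_roi_center(centers, alpha=0.3):
    """First-order low-pass filter over noisy ROI center estimates."""
    filtered, state = [], None
    for c in centers:
        c = np.asarray(c, dtype=float)
        state = c if state is None else alpha * c + (1 - alpha) * state
        filtered.append(state.copy())
    return filtered
```

In practice the smoothed center would drive the PTZ cameras' pan/tilt commands, while the box size would drive zoom.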

Using CNN-VGG 16 to detect the tennis motion tracking by information entropy and unascertained measurement theory

  • Zhong, Yongfeng;Liang, Xiaojun
    • Advances in nano research
    • /
    • v.12 no.2
    • /
    • pp.223-239
    • /
    • 2022
  • Object detection seeks objects with particular properties or representations and predicts details of objects in the current picture, including their positions, sizes, and angles of rotation; it is an important subject of computer vision. While vision-based object tracking strategies for the analysis of competitive sports videos have been developed, it is still difficult to accurately identify and position a fast-moving small ball. In this study, a deep learning network was developed to address these obstacles in tennis motion tracking from a complex perspective, in order to understand the performance of athletes. This research used CNN-VGG 16 to track the tennis ball in broadcast videos, where the ball's image is distorted, small, and often invisible, not only to identify the ball in a single frame but also to learn patterns from consecutive frames; VGG 16 takes images of size 640 by 360 to locate the ball and obtains high accuracy on public videos, achieving accuracies of 99.6%, 96.63%, and 99.5%, respectively. To avoid overfitting, 9 additional videos and a subset of the previous dataset were partly labelled for 10-fold cross-validation. The results show that CNN-VGG 16 outperforms the standard approach by a wide margin and provides excellent ball tracking performance.
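VGG 16 itself requires a deep-learning framework, but the information-entropy measure named in the title can be sketched directly. The following is a generic Shannon-entropy computation over a grayscale intensity histogram; how the paper actually applies the entropy term is not specified in the abstract, so this is only an assumed illustration:

```python
import math

def shannon_entropy(pixels, levels=256):
    """Shannon entropy (in bits) of a grayscale intensity histogram."""
    counts = [0] * levels
    for p in pixels:
        counts[p] += 1
    n = len(pixels)
    h = 0.0
    for c in counts:
        if c:                      # skip empty bins (0 * log 0 := 0)
            prob = c / n
            h -= prob * math.log2(prob)
    return h
```

A uniform distribution over four intensity levels, for example, yields exactly 2 bits, while a constant image yields 0.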

The Real-Time Face Detection and Tracking System based on Skin-Color (색상에 기반한 실시간 얼굴 검출 및 추적 시스템)

  • 임옥현;이우주;이배호
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.10b
    • /
    • pp.751-753
    • /
    • 2004
  • In this paper, we propose a method that detects faces with a color-based algorithm and tracks the detected face with a moving pan-tilt camera. The face detection algorithm uses skin color, a characteristic of the face, to detect candidate regions, and then uses the elliptical shape characteristic of the face to make the final detection within the candidate regions. For face tracking, the size and position of the face detected in the image and the position of the pan-tilt camera are used so that the face is always centered in the camera view. In our experiments, we achieved real-time face detection and tracking at more than 10 frames per second.

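The two stages described above (skin-color candidate detection, then re-centering the pan-tilt camera on the detected face) can be sketched as follows. The normalized-rg chromaticity space and its thresholds are illustrative assumptions; the paper does not state which color space or threshold values it uses:

```python
import numpy as np

def skin_mask(rgb, r_min=0.36, g_min=0.28, g_max=0.465):
    """Skin-color candidate mask in normalized-rg chromaticity space.

    Thresholds are illustrative, not from the paper."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1) + 1e-9          # avoid division by zero
    r = rgb[..., 0] / total
    g = rgb[..., 1] / total
    return (r > r_min) & (g > g_min) & (g < g_max)

def pan_tilt_step(face_center, frame_size, gain=0.1):
    """Proportional pan/tilt command to move the face toward frame center."""
    cx, cy = frame_size[0] / 2, frame_size[1] / 2
    return gain * (face_center[0] - cx), gain * (face_center[1] - cy)
```

A face detected at the frame center yields a zero command, so the camera holds still; offsets produce proportional corrections.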

The Real-Time Face Detection and Tracking System using Pan-Tilt Camera (Pan-Tilt 카메라를 이용한 실시간 얼굴 검출 및 추적 시스템)

  • 임옥현;김진철;이배호
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.04b
    • /
    • pp.814-816
    • /
    • 2004
  • In this paper, we propose a method that detects faces with a wavelet-based algorithm and tracks the detected face with a moving pan-tilt camera. For face detection, we extracted features using five kinds of simple wavelets, and through a cascade classifier built with the AdaBoost (Adaptive Boosting) algorithm we collected only those features that are robust for face detection. Using the resulting feature sets, we detected faces in the input images in real time at 20 frames per second, and succeeded in tracking movement in real time by computing the face position in the image and the pan-tilt camera position.

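The detector above combines Haar-like wavelet features with an AdaBoost cascade. As a minimal sketch of the boosting step alone, operating on precomputed feature values with one-feature threshold "decision stumps" as weak learners (the wavelet feature extraction and cascade structure are omitted):

```python
import numpy as np

def train_adaboost_stumps(X, y, rounds=10):
    """AdaBoost over threshold stumps; labels y must be in {-1, +1}."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                  # sample weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for j in range(d):                   # exhaustive stump search
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = np.where(X[:, j] < t, -s, s)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)  # weak-learner weight
        pred = np.where(X[:, j] < t, -s, s)
        w *= np.exp(-alpha * y * pred)         # re-weight mistakes upward
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict_adaboost(ensemble, X):
    score = sum(a * np.where(X[:, j] < t, -s, s) for a, j, t, s in ensemble)
    return np.where(score >= 0, 1, -1)
```

A real-time detector such as the one in the paper would arrange many such boosted classifiers into a cascade so that easy negatives are rejected cheaply.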

Gaze Recognition System using Random Forests in Vehicular Environment based on Smart-Phone (스마트 폰 기반 차량 환경에서의 랜덤 포레스트를 이용한 시선 인식 시스템)

  • Oh, Byung-Hun;Chung, Kwang-Woo;Hong, Kwang-Seok
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.15 no.1
    • /
    • pp.191-197
    • /
    • 2015
  • In this paper, we propose a system that recognizes gaze using Random Forests in a smart-phone-based vehicular environment. The proposed system consists mainly of the following: face detection using AdaBoost, face component estimation using histograms, and gaze recognition based on Random Forests. We detect the driver from the image captured by a smart-phone camera, and the face components of the driver are estimated. Next, we extract feature vectors from the estimated face components and recognize the gaze direction using the Random Forest recognition algorithm. We also collected a gaze database covering a variety of gaze directions in real environments for the experiment. In the experimental results, the face detection rate and the gaze recognition rate showed average accuracies of 82.02% and 84.77%, respectively.
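In practice the Random Forest stage would typically use a library implementation (e.g. scikit-learn's `RandomForestClassifier`). To keep the sketch self-contained, the following miniature forest trains one-feature threshold stumps on bootstrap samples and predicts by majority vote; the feature vectors standing in for face-component measurements are purely illustrative:

```python
import random
from collections import Counter

def train_forest(X, y, n_trees=25, seed=0):
    """Tiny random forest: bootstrap sampling + random-feature stumps."""
    rng = random.Random(seed)
    d = len(X[0])
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap
        j = rng.randrange(d)                                  # random feature
        best = None
        for t in sorted({X[i][j] for i in idx}):
            left = Counter(y[i] for i in idx if X[i][j] < t)
            right = Counter(y[i] for i in idx if X[i][j] >= t)
            correct = max(left.values(), default=0) + max(right.values())
            if best is None or correct > best[0]:
                lab_r = right.most_common(1)[0][0]
                lab_l = left.most_common(1)[0][0] if left else lab_r
                best = (correct, j, t, lab_l, lab_r)
        forest.append(best[1:])
    return forest

def predict_forest(forest, x):
    """Majority vote over all stumps in the forest."""
    votes = Counter(lab_l if x[j] < t else lab_r
                    for j, t, lab_l, lab_r in forest)
    return votes.most_common(1)[0][0]
```

Real forests grow full decision trees with per-node feature subsampling; the bootstrap-plus-vote structure shown here is the part that gives the ensemble its robustness.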

Effective Eye Detection for Face Recognition to Protect Medical Information (의료정보 보호를 위해 얼굴인식에 필요한 효과적인 시선 검출)

  • Kim, Suk-Il;Seok, Gyeong-Hyu
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.12 no.5
    • /
    • pp.923-932
    • /
    • 2017
  • In this paper, we propose a GRNN (Generalized Regression Neural Network) algorithm for a new gaze and face recognition identification system, to address the problem that facial movements in existing systems make it difficult to identify the user. Structural information about facial features is used with a Kalman filter to determine the authenticity of the face, and the future head location is estimated from the current head location, so that the horizontal and vertical elements of the face are detected by histogram analysis with a relatively fast processing time. An infrared illuminator is configured so that the pupil can be detected in real time from the obtained light, and pupil tracking is used to extract the feature vector.
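The Kalman-filter step above (predicting the future head location from the current one) can be illustrated with a one-dimensional constant-velocity filter. The noise parameters and the 1-D state are illustrative assumptions; the paper's actual state model is not given in the abstract:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """1-D constant-velocity Kalman filter; returns one-step-ahead
    position predictions made before each new measurement arrives."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise (assumed)
    R = np.array([[r]])                     # measurement noise (assumed)
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    predictions = []
    for z in measurements[1:]:
        # predict the next state from the motion model
        x = F @ x
        P = F @ P @ F.T + Q
        predictions.append(float(x[0, 0]))
        # correct with the new measurement
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return predictions
```

For a head moving at constant speed, the predicted positions converge toward the true trajectory after a few frames, which is what makes the downstream histogram analysis fast: it can search near the predicted location.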

Anomaly Sewing Pattern Detection for AIoT System using Deep Learning and Decision Tree

  • Nguyen Quoc Toan;Seongwon Cho
    • Smart Media Journal
    • /
    • v.13 no.2
    • /
    • pp.85-94
    • /
    • 2024
  • Artificial Intelligence of Things (AIoT), which combines AI and the Internet of Things (IoT), has recently gained popularity. Deep neural networks (DNNs) have achieved great success in many applications. Deploying complex AI models on embedded boards, nevertheless, may be challenging due to computational limitations or model complexity. This paper focuses on an AIoT-based system for smart sewing automation using edge devices. Our technique includes a detection model and a decision tree for a sufficient testing scenario. YOLOv5 set the stage for our defective-sewing-stitch detection model, which detects anomalies and classifies the sewing patterns. In experimental testing, the proposed approach achieved a perfect score, with an accuracy and F1 score of 1.0, a False Positive Rate (FPR) and False Negative Rate (FNR) of 0, and a speed of 0.07 seconds with a model file size of 2.43 MB.
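The decision-tree stage that sits after the YOLOv5 detector can be sketched as a small rule tree over detector output. The labels (`good_stitch`), thresholds, and decision outcomes below are hypothetical, chosen only to illustrate the detector-then-tree structure:

```python
def sewing_decision(detections, conf_thresh=0.5, max_defects=0):
    """Toy decision tree over detector output.

    `detections` is a list of (label, confidence) pairs, as a YOLO-style
    detector might emit per frame. Labels and thresholds are hypothetical."""
    confident = [lab for lab, conf in detections if conf >= conf_thresh]
    if not confident:
        return 'recheck'            # nothing confident: request another frame
    defects = sum(1 for lab in confident if lab != 'good_stitch')
    if defects > max_defects:
        return 'reject'             # too many confident defect detections
    return 'accept'
```

Splitting the pipeline this way keeps the heavy model fixed while the lightweight tree encodes deployment-specific policy, which suits the embedded-board constraints the paper describes.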

Effective Real-Time Gaze Identification Using a Bayesian Statistical Network (베이지안 통계적 방안 네트워크를 이용한 효과적인 실시간 시선 식별)

  • Kim, Sung-Hong;Seok, Gyeong-Hyu
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.11 no.3
    • /
    • pp.331-338
    • /
    • 2016
  • In this paper, we propose a GRNN (Generalized Regression Neural Network) algorithm for a new gaze and face recognition identification system, to address the problem that facial movements in existing systems make it difficult to identify the user. Structural information about facial features is used with a Kalman filter to determine the authenticity of the face, and the future head location is estimated from the current head location, so that the horizontal and vertical elements of the face are detected by histogram analysis with a relatively fast processing time. An infrared illuminator is configured so that the pupil can be detected in real time from the obtained light, and pupil tracking is used to extract the feature vector.

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved in developing realistic facial animation. In this research, we developed an integrated animation system that includes both 3D head motion tracking and facial expression control. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, with the non-parametric HT skin color model and template matching, we can detect the facial region efficiently from a video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced based on the optical flow method. For facial expression cloning we utilize a feature-based method. The major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from a geometrically transformed frontal head pose image. Finally, facial expression cloning is done by two fitting processes: the control points of the 3D model are varied by applying the animation parameters to the face model, and the non-feature points around the control points are changed by use of a Radial Basis Function (RBF). From the experiments, we show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
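The RBF step described above (moving non-feature points according to control-point displacements) can be sketched with Gaussian-RBF interpolation. The Gaussian kernel, its width, and the regularization term are assumptions; the paper does not specify its basis function:

```python
import numpy as np

def rbf_deform(control_pts, displacements, query_pts, sigma=1.0):
    """Interpolate control-point displacements to arbitrary mesh points
    using Gaussian radial basis functions (illustrative kernel choice)."""
    C = np.asarray(control_pts, dtype=float)
    D = np.asarray(displacements, dtype=float)
    Q = np.asarray(query_pts, dtype=float)

    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    # Solve for per-control-point weights so the interpolant reproduces
    # the given displacements exactly at the control points.
    W = np.linalg.solve(kernel(C, C) + 1e-9 * np.eye(len(C)), D)
    return Q + kernel(Q, C) @ W
```

Because the weights are solved exactly, a query at a control point moves by that control point's own displacement, while points in between blend neighboring displacements smoothly, which is what lets the non-feature vertices of the face mesh follow the animated control points.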