• Title/Summary/Keyword: Face Feature detection


A Study on Controlling IPTV Interface Based on Tracking of Face and Eye Positions (얼굴 및 눈 위치 추적을 통한 IPTV 화면 인터페이스 제어에 관한 연구)

  • Lee, Won-Oh;Lee, Eui-Chul;Park, Kang-Ryoung;Lee, Hee-Kyung;Park, Min-Sik;Lee, Han-Kyu;Hong, Jin-Woo
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.6B / pp.930-939 / 2010
  • Recently, many studies on more convenient input devices based on gaze detection have been actively conducted in human-computer interaction. However, these previous methods are difficult to use in an IPTV environment because they require additional wearable devices or do not work at a distance. To overcome these problems, we propose a new way of controlling the IPTV interface using face and eye positions detected with a single static camera. Even when the face or eyes are not detected successfully by the AdaBoost algorithm, we can still control the IPTV interface using motion vectors calculated by the pyramidal KLT (Kanade-Lucas-Tomasi) feature tracker. These are the two novelties of our research compared to previous works. The proposed method has the following advantages. Unlike previous research, it can be used at a distance of about 2 m. Since it does not require the user to wear additional equipment, there is no limitation on face movement and it is highly convenient. Experimental results showed that the proposed method operates in real time at 15 frames per second. We confirmed that the previous input device could be sufficiently replaced by the proposed method.
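
The detect-or-track fallback described in this abstract can be illustrated with a minimal sketch: OpenCV's Haar cascade (an AdaBoost-based detector) stands in for the paper's face detector, and calcOpticalFlowPyrLK is the pyramidal KLT tracker. The camera index, cascade file, and parameter values below are assumptions, not the authors' configuration.

```python
import cv2
import numpy as np

# Sketch only: Haar cascade as an AdaBoost-style face detector, with a
# pyramidal KLT optical-flow fallback when detection fails on a frame.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                       # assumed camera index
lk_params = dict(winSize=(21, 21), maxLevel=3)  # pyramid depth for KLT
prev_gray, prev_pts = None, None

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) > 0:
        # Detection succeeded: reseed KLT features inside the face region.
        x, y, w, h = faces[0]
        mask = np.zeros_like(gray)
        mask[y:y + h, x:x + w] = 255
        prev_pts = cv2.goodFeaturesToTrack(gray, 50, 0.01, 7, mask=mask)
    elif prev_gray is not None and prev_pts is not None:
        # Detection failed: estimate face motion with the pyramidal KLT tracker.
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, prev_pts, None, **lk_params)
        good = status.flatten() == 1
        if good.any():
            motion = (next_pts[good] - prev_pts[good]).reshape(-1, 2).mean(axis=0)
            print("estimated face motion (dx, dy):", motion)
            prev_pts = next_pts[good]
        else:
            prev_pts = None
    prev_gray = gray

cap.release()
```

In this sketch the mean displacement of the tracked feature points plays the role of the motion vector used to keep controlling the interface between successful detections.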

Learning efficiency checking system by measuring human motion detection (사람의 움직임 감지를 측정한 학습 능률 확인 시스템)

  • Kim, Sukhyun;Lee, Jinsung;Yu, Eunsang;Park, Seon-u;Kim, Eung-Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / Fall / pp.290-293 / 2021
  • In this paper, we implement a learning efficiency verification system that inspires learning motivation and helps improve concentration by detecting the state of the user while studying. To this end, data on learning attitude and concentration are measured by extracting the movement of the user's face or body through a real-time camera. A Jetson board was used to implement the real-time embedded system, and a convolutional neural network (CNN) was implemented for image recognition. After detecting the feature part of the object using the CNN, motion detection is performed. The captured image is shown in a GUI written in PyQt5, and data are collected by sending push messages when each of the monitored actions is interrupted. In addition, each function can be executed from the main screen of the GUI, including a statistical graph computed from the collected data, a to-do list, and white noise. Through the learning efficiency checking system, various functions including data collection and analysis were provided to users.
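
The abstract does not spell out how the motion-detection step after region detection works, so the following is only a hypothetical sketch using simple frame differencing inside a detected region of interest; the function name and threshold are illustrative assumptions.

```python
import cv2
import numpy as np

# Hypothetical sketch: measure how much a detected region (e.g. the face box
# returned by the CNN detector) changed between two consecutive frames.
def region_motion_score(prev_gray, cur_gray, box, threshold=25):
    """Return the fraction of changed pixels inside box = (x, y, w, h)."""
    x, y, w, h = box
    prev_roi = prev_gray[y:y + h, x:x + w]
    cur_roi = cur_gray[y:y + h, x:x + w]
    diff = cv2.absdiff(prev_roi, cur_roi)
    changed = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)[1]
    return float(np.count_nonzero(changed)) / changed.size
```

A score near zero over a long window could be logged as sustained concentration, while a sudden spike could feed the push-message and statistics features mentioned in the abstract.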


Driver Assistance System for Integration Interpretation of Driver's Gaze and Selective Attention Model (운전자 시선 및 선택적 주의 집중 모델 통합 해석을 통한 운전자 보조 시스템)

  • Kim, Jihun;Jo, Hyunrae;Jang, Giljin;Lee, Minho
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.16 no.3 / pp.115-122 / 2016
  • This paper proposes a system that detects the driver's cognitive state from information inside and outside the vehicle. The proposed system measures the driver's eye gaze based on the concepts of information delivery and mutual information. For this study, we set up two web cameras in a vehicle to obtain visual information about the driver and the scene in front of the vehicle. We propose a selective attention model based on Gestalt principles to define the information quantity of the road scene. In the Gestalt-based saliency map, stimuli such as traffic signals are prominently represented. The proposed system estimates the driver's allocation of cognitive resources to the front scene from gaze analysis and head-pose direction information. Several feature algorithms are then used to detect the driver's characteristics in real time: modified census transform (MCT) based AdaBoost is used to detect the driver's face and its components, while the POSIT algorithm is used for eye detection and 3D head-pose estimation. Experimental results show that the proposed system works well in a real environment and confirm its usability.
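
POSIT itself lived in OpenCV's legacy C API, so the sketch below uses cv2.solvePnP as the closest modern stand-in for the head-pose step described above. The generic 6-point 3D face model, rough camera intrinsics, and the assumption that 2D landmarks come from some external detector are all illustrative choices, not the paper's setup.

```python
import cv2
import numpy as np

# Sketch of 3D head-pose estimation from six 2D facial landmarks
# (nose tip, chin, outer eye corners, mouth corners), in that order.
def estimate_head_pose(image_points, frame_size):
    # Generic 3D face model (arbitrary mm units).
    model_points = np.array([
        (0.0, 0.0, 0.0),          # nose tip
        (0.0, -330.0, -65.0),     # chin
        (-225.0, 170.0, -135.0),  # left eye outer corner
        (225.0, 170.0, -135.0),   # right eye outer corner
        (-150.0, -150.0, -125.0), # left mouth corner
        (150.0, -150.0, -125.0),  # right mouth corner
    ], dtype=np.float64)
    h, w = frame_size
    camera_matrix = np.array([[w, 0, w / 2],
                              [0, w, h / 2],
                              [0, 0, 1]], dtype=np.float64)  # rough intrinsics
    dist_coeffs = np.zeros((4, 1))  # assume negligible lens distortion
    ok, rvec, tvec = cv2.solvePnP(model_points,
                                  np.asarray(image_points, dtype=np.float64),
                                  camera_matrix, dist_coeffs)
    rotation, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 matrix
    return rotation, tvec
```

The returned rotation matrix gives the head's yaw and pitch, which is the head-pose direction information combined with gaze in the system above.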

Automatic Tagging Scheme for Plural Faces (다중 얼굴 태깅 자동화)

  • Lee, Chung-Yeon;Lee, Jae-Dong;Chin, Seong-Ah
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.3 / pp.11-21 / 2010
  • As the quantity of information and the number of web pages grow rapidly, many studies have been conducted in recent years to improve retrieval performance and reflect users' retrieval needs. One alternative approach is a tagging system: it lets users attach metadata representations, called tags, to writings, pictures, movies, and other content, making internet resources more convenient to retrieve. Tags, like keywords, play a critical role in maintaining target pages. However, annotating tags still requires time-consuming labor, and overuse of tagging can itself become a hindrance. In this paper, we present an automatic tagging scheme as a solution to the drawbacks and inconveniences of current tagging systems. To realize the approach, a face recognition-based tagging system for SNS is proposed, built from a face-area detection procedure, linear classification, and a boosting algorithm. The proposed tagging service can make use of SNS more efficient. Experimental results and performance analysis are presented as well.
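
The detect-then-classify tagging pipeline described above can be sketched as follows; a Haar cascade stands in for the paper's face-area detection, and a linear SVM stands in for its linear/boosting classifier. The training data, image file, and helper name are assumptions for illustration only.

```python
import cv2
import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical sketch of automatic face tagging: detect face areas, turn each
# crop into a feature vector, and classify it as one of the known users.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_vectors(image_bgr, size=(64, 64)):
    """Detect faces and return (boxes, flattened grayscale crops)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    boxes = face_cascade.detectMultiScale(gray, 1.1, 5)
    crops = [cv2.resize(gray[y:y + h, x:x + w], size).flatten()
             for (x, y, w, h) in boxes]
    return boxes, np.array(crops, dtype=np.float32)

# Assumed to exist: train_X, train_y with labelled face crops of known users.
# clf = LinearSVC().fit(train_X, train_y)
# boxes, feats = face_vectors(cv2.imread("group_photo.jpg"))
# tags = clf.predict(feats) if len(feats) else []
```

Each predicted label would then be written back to the photo as an automatic tag, replacing the manual annotation step the abstract identifies as the bottleneck.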