• Title/Summary/Keyword: 얼굴감지 (face detection)

A Tracking Algorithm to Certain People Using Recognition of Face and Cloth Color and Motion Analysis with Moving Energy in CCTV (폐쇄회로 카메라에서 운동에너지를 이용한 모션인식과 의상색상 및 얼굴인식을 통한 특정인 추적 알고리즘)

  • Lee, In-Jung
    • The KIPS Transactions:PartB / v.15B no.3 / pp.197-204 / 2008
  • It is well known that tracking a specific person is a much-needed technique for humanoid robots. In robotics, three aspects must be considered: cloth-color matching, face recognition, and motion analysis. Because robots rely on various sensors, tracking a specific person through CCTV images differs considerably from the robotic setting. The system must run fast on CCTV images, so the amount of computation has to be kept small. We use statistical variables for color matching and adopt eigenfaces for face recognition to speed up the system. Motion analysis also has to be added so that the detection system is efficient. In many motion-analysis systems, however, both speed and recognition rate are low because the system operates over the whole image area. In this paper, we compute the moving energy only on the face area found during face recognition, since the moving energy requires few calculations. When the proposed algorithm was compared experimentally with the method of Girondel, V. et al., it achieved the same recognition rate while running faster. When LDA was used, the speed was the same and the recognition rate was better than Girondel's method; consequently, the proposed algorithm is more efficient for tracking a specific person.
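
The moving-energy step described in the abstract above can be illustrated with a short sketch: frame differencing restricted to the detected face box. This is an illustrative reconstruction (the OpenCV Haar face detector, the sample file name `cctv.mp4`, and the energy threshold are assumptions), not the authors' implementation.

```python
# Sketch: frame-difference "moving energy" computed only inside the detected face box.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_moving_energy(prev_gray, curr_gray):
    """Return the mean squared frame difference inside the first detected face, or None."""
    faces = face_cascade.detectMultiScale(curr_gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    diff = curr_gray[y:y+h, x:x+w].astype(np.float32) - \
           prev_gray[y:y+h, x:x+w].astype(np.float32)
    return float(np.mean(diff ** 2))

cap = cv2.VideoCapture("cctv.mp4")            # assumed sample CCTV clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    energy = face_moving_energy(prev_gray, gray)
    if energy is not None and energy > 50.0:  # assumed motion-energy threshold
        print("motion detected on the tracked face:", round(energy, 1))
    prev_gray = gray
cap.release()
```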

Fusion algorithm for Integrated Face and Gait Identification (얼굴과 발걸음을 결합한 인식)

  • Nizami, Imran Fareed;An, Sung-Je;Hong, Sung-Jun;Lee, Hee-Sung;Kim, Eun-Tai;Park, Mig-Non
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.1 / pp.72-77 / 2008
  • Identifying humans from multiple viewpoints is an important task for surveillance and security purposes. For optimal performance, the system should use the maximum information available from its sensors. Multimodal biometric systems can utilize more than one physiological or behavioral characteristic for enrollment, verification, or identification. Since gait alone is not yet established as a very distinctive feature, this paper presents an approach that fuses face and gait for identification. We consider the single-camera case, i.e., both face and gait recognition are performed on the same set of images captured by a single camera. The aim is to improve the performance of the system by utilizing the maximum amount of information available in the images. Fusion is performed at the decision level. The proposed algorithm is tested on the NLPR database.
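
The decision-level fusion idea can be sketched as follows; the agreement/fall-back rule and the example confidence values are illustrative assumptions, since the abstract does not spell out the exact fusion rule.

```python
# Sketch: simple decision-level fusion of face and gait identification outputs.
def fuse_decisions(face_id, face_conf, gait_id, gait_conf):
    """Each modality supplies its top decision (subject id) and a confidence in [0, 1].
    If the two decisions agree, accept them; otherwise fall back to the more
    confident modality (an assumed rule for illustration)."""
    if face_id == gait_id:
        return face_id, max(face_conf, gait_conf)
    return (face_id, face_conf) if face_conf >= gait_conf else (gait_id, gait_conf)

# Usage with made-up outputs.
print(fuse_decisions("subj01", 0.91, "subj01", 0.62))  # agree    -> ('subj01', 0.91)
print(fuse_decisions("subj01", 0.55, "subj02", 0.71))  # disagree -> ('subj02', 0.71)
```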

Design and Implementation of a Robot Analyzing Mental Disorder Risks for a Single-person Household Worker through Facial Expression-Detecting System (표정 감지 시스템을 통한 직장 생활을 하는 1인 가구의 정신질환 발병 위험도 분석 로봇 설계 및 구현)

  • Lee, Seong-Ung;Lee, Kang-Hee
    • The Journal of the Convergence on Culture Technology / v.6 no.1 / pp.489-494 / 2020
  • We propose the design and implementation of a robot that analyzes the risk of mental disorder in workers living in single-person households through a facial expression-detecting system. Due to complex social factors, the number and proportion of single-person households continue to increase. In addition, in contrast to multi-member households, the prevalence of mental disorders among single-person households varies greatly. Since most patients with a mental disorder cannot detect the disease on their own, counseling and treatment with doctors are often neglected. In this study, we design and implement a robot that analyzes the mental disorder risk of single-person-household workers by building the system on Q.bo One, a social robot created by Thecorpora. Q.bo One consists of an Arduino, a Raspberry Pi, and other sensors, and is designed to detect and respond to sensor input in the way the user intends. Based on the DSM-5 provided by the American Psychiatric Association, the risk of developing a mental disorder was specified for each disorder. Q.bo One analyzed the subjects' facial expressions for one to two weeks to evaluate depressive disorder and anxiety disorder. If the risk of developing a mental disorder is high, Q.bo One is designed to advise the subject to seek counseling and treatment from a specialist.
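
The long-term aggregation described above can be sketched as a simple per-day tally of negative expressions turned into a coarse risk flag; the 60% threshold, the seven-day minimum, and the sample values are assumptions for illustration and are not DSM-5 criteria.

```python
# Sketch: aggregate per-day negative-expression ratios into a coarse risk flag.
from statistics import mean

def risk_level(daily_negative_ratio, threshold=0.6, min_days=7):
    """daily_negative_ratio: list of per-day fractions of observations classified
    as negative expressions (sad, fearful, angry, ...). Returns 'high' or 'low'."""
    if len(daily_negative_ratio) < min_days:
        return "insufficient data"
    return "high" if mean(daily_negative_ratio) >= threshold else "low"

# Usage with made-up observations over ten days.
week = [0.72, 0.65, 0.70, 0.68, 0.61, 0.75, 0.66, 0.69, 0.64, 0.71]
print(risk_level(week))   # -> "high", so the robot would suggest seeing a specialist
```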

Infants Manless Management System (영유아 무인 관리 시스템)

  • Min, YG;Gwon, GM;I, Eon Jo;Park, SJ;Chung, HC
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.413-416 / 2017
  • The development of the Internet, driven by the advent of the fourth industrial revolution, has been gradually affecting our lives. Various products have recently emerged on this trend, but products that identify dangers to infants remain underdeveloped. Increasingly, caregivers have trouble detecting and responding to threats to their children because of everyday living noise and housework. This project develops a Raspberry Pi module with audiovisual sensors to detect an infant's sudden behavior in everyday life and avoid the resulting risks. It is also designed to provide convenience to guardians by implementing a smartphone app connected over a Wi-Fi signal.
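
A minimal sketch of the monitoring loop suggested by the abstract: frame-difference motion detection on a camera stream with a placeholder notification hook. The camera index, sensitivity threshold, and the `notify_guardian` stub are assumptions; the actual system pushes alerts to a smartphone app over Wi-Fi.

```python
# Sketch: detect sudden motion on a camera stream and trigger a caregiver alert.
import cv2
import numpy as np

def notify_guardian(message):
    # Placeholder: the described system sends alerts to a smartphone app over Wi-Fi;
    # here we simply print instead of calling a real push-notification API.
    print("ALERT:", message)

cap = cv2.VideoCapture(0)                      # assumed Raspberry Pi camera at index 0
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = float(np.mean(cv2.absdiff(gray, prev_gray)))
    if motion > 20.0:                          # assumed sensitivity threshold
        notify_guardian(f"sudden movement detected (score {motion:.1f})")
    prev_gray = gray
cap.release()
```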

Presentation control of the computer using the motion identification rules (모션 식별 룰을 이용한 컴퓨터의 프레젠테이션 제어)

  • Lee, Sang-yong;Lee, Kyu-won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.05a / pp.586-589 / 2015
  • A computer presentation control system using hand-motion identification rules is proposed. To identify the presenter's hand motions, the face region is first extracted using a Haar classifier. The motion status (pattern) and position of the hands are then determined from the centers of gravity of the user's face and hands after segmenting the hand area in the YCbCr color model. The user's hand motion is matched against the motion identification rules, and the corresponding presentation control command is executed. The proposed system relies on the motion identification rules without any additional equipment, is capable of controlling the presentation, and does not depend on the complexity of the background. The proposed algorithm showed stable control operation in presentation experiments under indoor illumination levels of 15, 20, and 30 lx.
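
The face-relative hand rule can be sketched as below; the YCrCb skin thresholds and the left/right command mapping are assumptions, and the paper's full set of motion-status rules is not reproduced.

```python
# Sketch: derive a presentation command from the hand's position relative to the face.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def hand_command(frame):
    """Return 'next', 'prev', or None based on where the largest skin blob
    (outside the face) lies relative to the face centroid."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    fx, fy, fw, fh = faces[0]
    face_cx = fx + fw / 2

    # Skin segmentation in the YCrCb color space (assumed threshold range).
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin[fy:fy+fh, fx:fx+fw] = 0               # ignore the face region itself

    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    hand_cx = m["m10"] / m["m00"]

    # Assumed rule: hand right of the face -> next slide, left -> previous slide.
    return "next" if hand_cx > face_cx else "prev"
```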

Implementation of A Safe Driving Assistance System and Doze Detection (졸음 인식과 안전운전 보조시스템 구현)

  • Song, Hyok;Choi, Jin-Mo;Lee, Chul-Dong;Choi, Byeong-Ho;Yoo, Ji-Sang
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.3 / pp.30-39 / 2012
  • In this paper, a safe-driving assistance system is proposed that detects the driver's drowsiness based on face and eye detection. Depending on the fatigue level, the system sounds an alarm or vibrates the seatbelt. To reduce the effect of backlight and strong sunlight, which lower the face and eye detection rate and cause false fatigue detection, post-processing techniques such as image equalization are used. The Haar transform and PCA are used for face detection. By using statistics on the facial and eye structural proportions of typical Koreans, the eye candidate area within the face is reduced, which lowers the computational load. We also propose a new eye-status detection algorithm based on the Hough transform and the eye width-height ratio; it detects eye blinking, and the doze level is decided by measuring the blinking period. The system sounds an alarm and vibrates the seatbelt through the controller area network (CAN) when drowsiness is detected. Four algorithms are implemented; the proposed algorithm is built on a probability model and achieves a correct detection rate of 84.88% in indoor and in-car experiments. We also achieve a detection rate of 69.81% using an IR camera, which is better than that of the other algorithms.
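
A simplified sketch of the eye-status idea: look for an iris-like circle (Hough transform) inside the detected eye region and flag drowsiness after several consecutive "closed" frames. The cascade files, Hough parameters, and frame threshold are assumptions, and the paper's width-height-ratio test and CAN output are represented only by a print statement.

```python
# Sketch: flag drowsiness when no iris-like circle is found in the eye region
# for several consecutive frames (a simplified stand-in for the paper's method).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eyes_open(gray):
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return True                      # no face found; do not raise an alarm
    x, y, w, h = faces[0]
    roi = gray[y:y + h // 2, x:x + w]    # upper half of the face (assumed eye band)
    eyes = eye_cascade.detectMultiScale(roi, 1.1, 5)
    for ex, ey, ew, eh in eyes:
        patch = roi[ey:ey+eh, ex:ex+ew].copy()
        circles = cv2.HoughCircles(patch, cv2.HOUGH_GRADIENT, dp=1, minDist=ew,
                                   param1=50, param2=15,
                                   minRadius=ew // 8, maxRadius=ew // 2)
        if circles is not None:          # a visible iris suggests the eye is open
            return True
    return False

closed_frames = 0
cap = cv2.VideoCapture(0)                # assumed in-car camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    closed_frames = 0 if eyes_open(gray) else closed_frames + 1
    if closed_frames > 15:               # assumed ~0.5 s of closed eyes at 30 fps
        print("doze detected: trigger alarm / seatbelt vibration over CAN")
cap.release()
```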

Stress Detection System for Emotional Labor Based On Deep Learning Facial Expression Recognition (감정노동자를 위한 딥러닝 기반의 스트레스 감지시스템의 설계)

  • Og, Yu-Seon;Cho, Woo-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.613-617 / 2021
  • With the growth of the service industry, the stress experienced by emotional labor workers has emerged as a social problem, and the so-called Emotional Labor Protection Act was implemented in 2018. However, the lack of substantial protection systems for emotional laborers underscores the need for a digital stress-management system. In this paper, we therefore propose a stress detection system for customer service representatives based on deep-learning facial expression recognition. The system consists of a real-time face detection module, an emotion classification (FER) module trained on large datasets including Korean emotion images, and a monitoring module that visualizes only the stress levels. The system is designed to monitor stress and prevent mental illness in emotional laborers.
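
The three-module pipeline can be sketched roughly as below, assuming a hypothetical pretrained Keras FER model (`fer_model.h5`, 48x48 grayscale input) and a rolling window of negative expressions as the "stress level"; none of these specifics come from the paper.

```python
# Sketch: face detection -> expression classification -> rolling stress level.
from collections import deque
import cv2
import numpy as np
import tensorflow as tf

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
NEGATIVE = {"angry", "disgust", "fear", "sad"}

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = tf.keras.models.load_model("fer_model.h5")    # hypothetical pretrained FER model
recent = deque(maxlen=300)                            # ~10 s of frames at 30 fps

def stress_level(frame):
    """Return the fraction of recent frames showing a negative expression."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    for x, y, w, h in faces[:1]:
        face = cv2.resize(gray[y:y+h, x:x+w], (48, 48)).astype(np.float32) / 255.0
        probs = model.predict(face[np.newaxis, ..., np.newaxis], verbose=0)[0]
        emotion = EMOTIONS[int(np.argmax(probs))]
        recent.append(1.0 if emotion in NEGATIVE else 0.0)
    return float(np.mean(recent)) if recent else 0.0
```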

Untact-based elevator operating system design using deep learning of private buildings (프라이빗 건물의 딥러닝을 활용한 언택트 기반 엘리베이터 운영시스템 설계)

  • Lee, Min-hye;Kang, Sun-kyoung;Shin, Seong-yoon;Mun, Hyung-jin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.161-163 / 2021
  • In an apartment or private building, it is difficult for a user with luggage in both hands to press the elevator buttons. In an environment where human contact must be minimized because of a highly infectious virus such as COVID-19, contact-free ("untact") elevator operation becomes necessary. This paper proposes an operating system that runs the elevator using the user's voice and face-image processing, without pressing the elevator buttons. The camera installed in the elevator detects the face of a person entering, matches it against pre-registered information, and sends the elevator to the designated floor without any button press. When a person's face is difficult to recognize, the system controls the elevator floor using the user's voice through a microphone and automatically records access information, enhancing the convenience of elevator use in a contact-free environment.
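
A rough sketch of the face-first, voice-fallback flow; the registry contents, embedding vectors, cosine-similarity threshold, and helper names are hypothetical placeholders rather than the paper's implementation.

```python
# Sketch: face match against registered residents, with a voice fallback.
import numpy as np

REGISTERED = {                      # hypothetical registry: name -> (face embedding, floor)
    "resident_101": (np.array([0.12, 0.80, 0.33]), 7),
    "resident_102": (np.array([0.90, 0.10, 0.45]), 3),
}

def match_face(embedding, threshold=0.9):
    """Return (name, floor) if the face embedding is close enough (cosine similarity)."""
    best_name, best_floor, best_sim = None, None, -1.0
    for name, (ref, floor) in REGISTERED.items():
        sim = float(np.dot(embedding, ref) /
                    (np.linalg.norm(embedding) * np.linalg.norm(ref)))
        if sim > best_sim:
            best_name, best_floor, best_sim = name, floor, sim
    return (best_name, best_floor) if best_sim >= threshold else None

def request_floor(face_embedding, spoken_floor=None):
    """Try the face first; fall back to the recognized voice command if the face is unknown."""
    match = match_face(face_embedding)
    if match is not None:
        name, floor = match
        print(f"access logged for {name}; moving to floor {floor}")
        return floor
    if spoken_floor is not None:
        print(f"face not recognized; using voice command: floor {spoken_floor}")
        return spoken_floor
    return None

# Usage with a made-up embedding close to resident_101, then a voice fallback.
request_floor(np.array([0.13, 0.79, 0.30]))
request_floor(np.array([0.50, 0.50, 0.50]), spoken_floor=5)
```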

Context-Aware Interactive Telescreen Technology (상황인지형 인터랙티브 텔레스크린 기술)

  • Lee, Hyeon-Jin;Eom, Tae-Won;Jo, Gi-Seong;Lee, Hyeon-U;Ryu, Won
    • Information and Communications Magazine / v.30 no.8 / pp.69-75 / 2013
  • The telescreen, a next-generation digital signage that interactively provides various information and advertisements in public places in connection with surrounding context information, is one form of mutual communication whose roots go back to Paleolithic cave paintings. Telescreens initially offered one-way advertising or simple information-delivery services, but recently they have been encouraging consumer participation by using cameras, sensors, NFC (Near Field Communication), and smartphones, or by linking with two-way UI/UX (User Interface/User eXperience) and face recognition technology. In addition, by collecting and analyzing the user's surroundings and status information, they are evolving into interactive telescreens capable of context-aware two-way communication. As 3D technology and recognition technologies that sense human responses continue to advance, telescreen services that respond to users' emotions are expected in the not-so-distant future [1]. Users will then, without even realizing it, be naturally exposed to the customized information expected to be most effective, based on their surroundings and their emotional state.

Face-Mask Detection with Micro processor (마이크로프로세서 기반의 얼굴 마스크 감지)

  • Lim, Hyunkeun;Ryoo, Sooyoung;Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.3 / pp.490-493 / 2021
  • This paper proposes an embedded system that performs mask detection and face recognition on a microprocessor instead of the popular Nvidia Jetson development board. We use a class of efficient models called MobileNets, designed for mobile and embedded vision applications. MobileNets are based on a streamlined architecture that uses depthwise separable convolutions to build lightweight deep neural networks. The device is a Maix development board with a CNN hardware-acceleration function, and the training model is an SSD (Single Shot Multibox Detector) based on MobileNet_V2, optimized for mobile devices. To train the model, 7,553 face images from Kaggle were used. On the test dataset, the AUC (Area Under the Curve) reaches 0.98.
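
As an illustration of the MobileNet approach, the sketch below builds a MobileNetV2-based mask/no-mask classifier evaluated with an AUC metric in Keras; it simplifies the paper's MobileNet_V2 SSD detector to binary classification, and the dataset path, directory layout, and hyperparameters are assumptions.

```python
# Sketch: MobileNetV2 transfer-learning classifier for mask / no-mask, evaluated with AUC.
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                                     # freeze the backbone

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),     # MobileNetV2 expects [-1, 1] input
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),        # mask vs. no-mask
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])

# Assumed directory layout: dataset/{mask,no_mask}/*.jpg (e.g., the Kaggle face-mask data).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", validation_split=0.2, subset="training", seed=1,
    image_size=(224, 224), batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset", validation_split=0.2, subset="validation", seed=1,
    image_size=(224, 224), batch_size=32, label_mode="binary")

model.fit(train_ds, validation_data=val_ds, epochs=5)
print(model.evaluate(val_ds, return_dict=True)["auc"])
```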