• Title/Summary/Keyword: Real-time Face Recognition (실시간 얼굴인식)

Search Results: 249, Processing Time: 0.029 seconds

Smart Card User Identification Using Low-sized Face Feature Information (경량화된 얼굴 특징 정보를 이용한 스마트 카드 사용자 인증)

  • Park, Jian;Cho, Seongwon;Chung, Sun-Tae
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.4 / pp.349-354 / 2014
  • PIN (Personal Identification Number)-based identification has been the usual way to identify the user of a smart card. However, this approach has several problems. First, the PIN can be forgotten by the card owner. Second, it can be used illegally by others. Furthermore, the risk of the PIN being hacked is high because the PIN matching process is performed on the terminal. Thus, in this paper we suggest a new identification method that is performed on the smart card itself using face feature information. The proposed method uses compact face feature vectors and a simple matching algorithm in order to work within the limited computing capability and memory size of a smart card.
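A minimal sketch of the kind of lightweight card-side matching such a scheme implies, assuming a short feature vector stored on the card and a simple cosine-distance threshold; the vector length, threshold, and function names are illustrative, not taken from the paper:

```python
import numpy as np

def match_on_card(enrolled: np.ndarray, probe: np.ndarray, threshold: float = 0.35) -> bool:
    """Compare a probe face feature vector against the template stored on the card.

    Both vectors are assumed to be short (e.g., 32 floats) so that the
    comparison fits the card's memory and compute budget.
    """
    # Normalize so the comparison reduces to a cosine distance.
    enrolled = enrolled / np.linalg.norm(enrolled)
    probe = probe / np.linalg.norm(probe)
    distance = 1.0 - float(np.dot(enrolled, probe))
    return distance < threshold

# Example: accept when the probe is close to the enrolled template.
rng = np.random.default_rng(0)
template = rng.normal(size=32)
accepted = match_on_card(template, template + 0.05 * rng.normal(size=32))
print(accepted)  # True for a near-identical probe
```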

Hardware Implementation of Facial Feature Detection Algorithm (얼굴 특징 검출 알고리즘의 하드웨어 설계)

  • Kim, Jung-Ho;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.1 / pp.1-10 / 2008
  • In this paper, we designed facial feature (eyes, mouth, and nose) detection hardware based on the ICT transform, which was developed earlier for face detection. Our design uses a pipeline architecture for high throughput and also aims to reduce memory size and memory access rate. The algorithm and its hardware implementation were tested on the BioID database, a widely used face detection benchmark, and the facial feature detection rate was 100% in both software and hardware, assuming the face boundary was correctly detected. After synthesizing the hardware with the Dongbu 0.18 µm CMOS library, the die size was 376,821 µm² with a maximum operating clock of 78 MHz.
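For orientation only, here is a minimal sketch of a census-transform-style local binary encoding; the ICT transform used in the paper belongs to this family of illumination-robust local encodings, but its exact definition may differ, so treat this purely as an illustrative stand-in:

```python
import numpy as np

def modified_census_transform(gray: np.ndarray) -> np.ndarray:
    """Encode each pixel's 3x3 neighborhood as a 9-bit pattern.

    A bit is set where a neighborhood pixel is brighter than the
    neighborhood mean, which makes the code robust to illumination changes.
    """
    h, w = gray.shape
    # Collect the nine shifted views of the image (one per neighbor position).
    neighbors = np.stack(
        [gray[dy:dy + h - 2, dx:dx + w - 2].astype(np.float32)
         for dy in range(3) for dx in range(3)],
        axis=0,
    )
    mean = neighbors.mean(axis=0)
    bits = (neighbors > mean).astype(np.uint16)
    codes = np.zeros((h - 2, w - 2), dtype=np.uint16)
    for i in range(9):
        codes |= bits[i] << i
    return codes

# Example on a tiny synthetic image.
img = (np.arange(25).reshape(5, 5) % 7).astype(np.uint8)
print(modified_census_transform(img))
```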

Real-time Vital Signs Measurement System using Facial Image Data (안면 이미지 데이터를 이용한 실시간 생체징후 측정시스템)

  • Kim, DaeYeol;Kim, JinSoo;Lee, KwangKee
    • Journal of Broadcast Engineering / v.26 no.2 / pp.132-142 / 2021
  • The purpose of this study is to present an effective methodology that can measure heart rate, heart rate variability, oxygen saturation, respiration rate, mental stress level, and blood pressure using the mobile front camera, the camera most readily available in everyday life. Face recognition was performed in real time using BlazeFace to acquire facial image data, and the forehead was designated as the ROI (Region Of Interest) using feature points of the eyes, nose, mouth, and ears. Representative values for each color channel of the ROI were generated and aligned on the time axis to measure vital signs. The measurement method was based on the Fourier transform, and the signal was filtered to remove noise according to the desired vital sign to increase accuracy. To verify the results, vital signs measured from the facial image data were compared with a contact pulse oximeter sensor and a TI non-contact sensor. As a result of this work, the possibility of extracting a total of six vital signs (heart rate, heart rate variability, oxygen saturation, respiratory rate, stress, and blood pressure) from facial images was confirmed.
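As a rough illustration of the Fourier-transform step described above, the sketch below estimates heart rate from a time series of per-frame ROI means; the sampling rate, band limits, and synthetic signal are assumptions for the example, not values from the paper:

```python
import numpy as np

def estimate_heart_rate(roi_means: np.ndarray, fps: float = 30.0) -> float:
    """Estimate heart rate (beats per minute) from per-frame forehead ROI means.

    The dominant spectral peak inside a plausible heart-rate band
    (0.7-4.0 Hz, i.e. 42-240 bpm) is taken as the pulse frequency.
    """
    signal = roi_means - roi_means.mean()          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)         # restrict to the heart-rate band
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0

# Example: a synthetic 1.2 Hz (72 bpm) pulse buried in noise.
t = np.arange(0, 20, 1 / 30.0)
fake_signal = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(len(t))
print(round(estimate_heart_rate(fake_signal)))  # ~72
```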

Development of Home Automation Robots using Face Recognition Image Processing (안면인식 영상처리를 활용한 가정용 로봇 개발)

  • Choi, Min-kyu;Woo, In-hyuk;Kim, Dong-hyuk;Ahn, Yong-hyun;Han, Joon-ho;Park, Joo-young;Ko, Ji-hye;Park, Je-hee;Moon, Ha-young;Kim, Min-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.374-376 / 2018
  • In this study, we developed a mobile home robot that performs face recognition with a camera attached to a Raspberry Pi. The robot receives real-time video through the camera, recognizes people's faces, and controls the operation of a smart cooling and heating device according to the result. It is expected that the robot can improve energy utilization efficiency by supplying cold or hot air selectively, depending on whether a person is present, instead of running the air conditioner for the entire space.
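A minimal sketch of the detection-then-actuation loop such a robot could use, with OpenCV's bundled Haar cascade standing in for the paper's face recognizer and a placeholder switch_airflow() function for the appliance control (both are assumptions for illustration):

```python
import cv2

def switch_airflow(person_present: bool) -> None:
    """Placeholder for the appliance control logic (e.g., GPIO pin or IR blaster)."""
    print("airflow on" if person_present else "airflow off")

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)  # Raspberry Pi camera exposed as /dev/video0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    switch_airflow(person_present=len(faces) > 0)
    cv2.imshow("robot view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```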


Development of Smart Mirror System for Hair Styling (헤어 스타일링을 위한 스마트 미러 시스템 개발)

  • Kim, Seong-Deok;Song, Min-Seok;Joo, Hyun-Jin;Park, Hyun-A;Han, Young-Oh
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.15 no.1 / pp.93-100 / 2020
  • In this paper, we implemented a smart mirror that helps users preview the result before a hair procedure by overlaying various hairstyles on their head. When the camera captures and recognizes a face in real time, a stored hair image is loaded to provide a virtual try-on image. In addition, the high production cost, a problem of existing smart mirrors, was reduced by using a Raspberry Pi, OpenCV, and half-mirror film, and various functions were implemented through touch control. The mirror is also designed to show information such as weather, calendar, and time.
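The overlay step can be sketched as follows, assuming a PNG hair image with an alpha channel and OpenCV's stock Haar cascade as the face detector; the file name, placement, and scaling are illustrative, not the paper's exact pipeline:

```python
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
hair = cv2.imread("hairstyle.png", cv2.IMREAD_UNCHANGED)  # BGRA image with alpha

def overlay_hair(frame: np.ndarray) -> np.ndarray:
    """Alpha-blend the hair image over each detected face in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        # Scale the hair to the face width and place it above the face box.
        scaled = cv2.resize(hair, (w, int(hair.shape[0] * w / hair.shape[1])))
        top = max(0, y - scaled.shape[0] // 2)
        region = frame[top:top + scaled.shape[0], x:x + w]
        if region.shape[:2] != scaled.shape[:2]:
            continue  # skip faces too close to the frame border
        alpha = scaled[:, :, 3:4] / 255.0
        frame[top:top + scaled.shape[0], x:x + w] = (
            alpha * scaled[:, :, :3] + (1 - alpha) * region).astype(np.uint8)
    return frame
```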

An Event-Related Potential Investigation of Response Inhibition in Psychopathy using Emotional Go/NoGo Tasks : A Preliminary Study (사건관련전위를 이용한 정서 Go/NoGo과제에서 나타난 정신병질집단의 반응억제: 예비 연구)

  • Kim, Young Youn
    • Proceedings of the Korea Contents Association Conference / 2011.05a / pp.361-362 / 2011
  • This study was conducted to investigate, using event-related potentials (ERPs), whether psychopaths have difficulty inhibiting responses depending on emotion recognition. From inmates held in prison, a psychopathic inmate group of six and a normal inmate group of four were selected according to their PCL-R (Psychopathy Checklist-Revised) scores, and a visual Go/NoGo task using face stimuli was administered. All participants were instructed to press a button for Go stimuli and to withhold the press for NoGo stimuli, and ERPs were recorded during the task. In Task 1, fearful expressions were used as NoGo stimuli and neutral expressions as Go stimuli; in Task 2, sad expressions were used as NoGo stimuli and neutral expressions as Go stimuli. In the psychopathic inmate group, the NoGo P3 amplitude for fearful expressions was larger than the Go P3 amplitude for neutral expressions, whereas in the normal inmate group the P3 amplitudes for fearful and neutral expressions were similar, or the Go P3 amplitude for neutral expressions was larger than the NoGo P3 amplitude for fearful expressions. Analysis of P3 latency showed that the psychopathic inmate group had slower P3 latencies in the NoGo condition with sad expressions than in the Go condition with neutral expressions, whereas the normal inmate group showed faster P3 latencies in the NoGo condition than in the Go condition. On the emotion recognition test, the psychopathic inmate group showed significantly lower accuracy than the normal inmate group. These results show that psychopaths experience cognitive difficulty inhibiting responses after recognizing negative emotions such as fear and sadness.


Smart Bus System using BLE Beacon and Computer Vision (BLE 비콘과 컴퓨터비전을 적용한 스마트 버스 시스템)

  • You, Minjung;Rhee, Eugene
    • Journal of IKEEE / v.22 no.2 / pp.250-257 / 2018
  • In this paper, a smart bus system is proposed that automates public bus fare payment by applying BLE beacons and computer vision, and that provides bus route information, real-time location information, and a getting-off alarm. The beacon is used to recognize buses near the stop and to identify the bus the user boards, and the system automatically processes the fare at boarding time using the distance to the beacon, the information broadcast by the beacon, and face comparison. After payment, the system provides the route of the boarded bus and its real-time location to the user, and when the user sets an alarm with this information, the alarm is triggered when the bus leaves the corresponding bus stop.
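The distance-from-beacon step mentioned above is commonly derived from the received signal strength with a log-distance path-loss model; the sketch below shows that standard formula, with the calibrated TX power and path-loss exponent chosen as illustrative assumptions rather than values from the paper:

```python
def beacon_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Estimate the distance to a BLE beacon in meters.

    tx_power_dbm is the calibrated RSSI at 1 m and n is the path-loss
    exponent (about 2 in free space, higher indoors).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

# Example: an RSSI of -75 dBm maps to roughly 6.3 m with these assumptions.
print(round(beacon_distance(-75.0), 1))
```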

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions: Part B / v.14B no.4 / pp.311-320 / 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has focused on facial expression control itself rather than on 3D head motion tracking. However, head motion tracking is one of the critical issues to be solved for realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, the non-parametric HT skin color model and template matching are used to detect the facial region efficiently in each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is tracked based on the optical flow method. For facial expression cloning we utilize a feature-based method. The major facial feature points are detected from the geometric information of the face with template matching and tracked by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from the geometrically transformed frontal head pose image. Finally, the facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are deformed using Radial Basis Functions (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video.
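To make the RBF deformation step concrete, the following sketch interpolates displacements of non-feature vertices from a handful of moved control points; the Gaussian kernel and its width are assumptions for illustration, not necessarily the paper's exact choice:

```python
import numpy as np

def rbf_deform(control_pts, control_disp, vertices, sigma=0.2):
    """Propagate control-point displacements to nearby vertices with Gaussian RBFs."""
    # Solve for RBF weights so the interpolant reproduces the control displacements.
    d2 = ((control_pts[:, None, :] - control_pts[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2 * sigma ** 2))
    weights = np.linalg.solve(phi + 1e-8 * np.eye(len(control_pts)), control_disp)

    # Evaluate the interpolant at every (non-feature) vertex.
    d2v = ((vertices[:, None, :] - control_pts[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2v / (2 * sigma ** 2)) @ weights

# Example: three 2D control points pushed upward drag nearby vertices with them.
ctrl = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
disp = np.array([[0.0, 0.1], [0.0, 0.2], [0.0, 0.1]])
verts = np.array([[0.25, 0.05], [0.75, 0.05], [2.0, 2.0]])
print(rbf_deform(ctrl, disp, verts))  # the far vertex is nearly unaffected
```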

Gender Classification System Based on Deep Learning in Low Power Embedded Board (저전력 임베디드 보드 환경에서의 딥 러닝 기반 성별인식 시스템 구현)

  • Jeong, Hyunwook;Kim, Dae Hoe;Baddar, Wisam J.;Ro, Yong Man
    • KIPS Transactions on Software and Data Engineering / v.6 no.1 / pp.37-44 / 2017
  • As the IoT (Internet of Things) industry spreads, it becomes very important for devices to recognize user information on their own, without explicit control. Above all, gender is a dominant factor in analyzing user information because of the social and biological differences between males and females. However, since each gender shows diverse facial features, face-based gender classification is still a challenging research field. Furthermore, to apply a gender classification system to IoT, the device must be small and operate on low power. Consequently, to bring real-world gender classification to such devices, this paper makes two contributions: a new gender classification algorithm based on deep learning, and a real-time gender classification system implemented on a low-power embedded board. In our experiments, we measured frames per second and power consumption for gender classification in both a PC environment and a mobile-GPU environment, and verified that the deep-learning-based system runs well at low power on the mobile GPU compared with the PC.
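As a rough sketch of what such a system involves, the snippet below defines a small binary CNN classifier and measures its throughput in frames per second; the network layout, input size, and the use of PyTorch are illustrative assumptions, not the architecture or toolchain reported in the paper:

```python
import time
import torch
import torch.nn as nn

class TinyGenderNet(nn.Module):
    """A deliberately small CNN suited to a low-power embedded board."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)  # two classes: male / female

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyGenderNet().eval()
frame = torch.randn(1, 1, 64, 64)  # one 64x64 grayscale face crop

# Measure classification throughput (frames per second) on the current device.
with torch.no_grad():
    start = time.perf_counter()
    for _ in range(100):
        model(frame)
    fps = 100 / (time.perf_counter() - start)
print(f"{fps:.1f} FPS")
```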

Real Time Face Detection and Recognition based on Embedded System (임베디드 시스템 기반 실시간 얼굴 검출 및 인식)

  • Lee, A-Reum;Seo, Yong-Ho;Yang, Tae-Kyu
    • Journal of The Institute of Information and Telecommunication Facilities Engineering / v.11 no.1 / pp.23-28 / 2012
  • In this paper, we proposed and developed a fast and efficient real-time face detection and recognition method that can run on an embedded system instead of a high-performance desktop. In the face detection stage, a face is detected by locating the eye region, one of the most salient facial features, after applying various image processing methods; in the face recognition stage, the face is recognized by comparing the current face with a prepared face database using a template matching algorithm. We also optimized the algorithm so that it runs successfully on the embedded system, and performed face detection and recognition experiments on the embedded board to verify its performance. The developed method can be applied to automatic doors, mobile computing environments, and various robots.
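The template-matching comparison against a small face database can be sketched with OpenCV as below; the normalized correlation score, database layout, and threshold are assumptions for illustration rather than the paper's exact settings:

```python
import cv2
import numpy as np

def recognize(face_gray: np.ndarray, database: dict, threshold: float = 0.7):
    """Return the best-matching identity from a dict of name -> grayscale template."""
    best_name, best_score = None, -1.0
    for name, template in database.items():
        # Resize the probe to the template size, then use normalized cross-correlation.
        probe = cv2.resize(face_gray, (template.shape[1], template.shape[0]))
        score = cv2.matchTemplate(probe, template, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None
```

Because probe and template have the same size, matchTemplate returns a single correlation score, keeping the comparison cheap enough for an embedded board.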
