• Title/Summary/Keyword: Face Tracking


Real-time Face Tracking using the Relative Similarity of Local Area (지역적영역의 상대적 유사도를 이용한 실시간 얼굴추적)

  • Lee, JeaHyuk;Shin, DongWha;Kim, HyunJung;Weon, ILYong
    • Proceedings of the Korea Information Processing Society Conference / 2013.11a / pp.1408-1411 / 2013
  • Object recognition and tracking are active research areas in computer vision and image processing. In particular, face recognition and tracking technology can be applied in many fields. Existing approaches that recognize and track objects using the difference between a reference frame and an observed frame have difficulty maintaining identity when there are multiple targets. This paper therefore proposes a method that quickly detects face regions in each frame and links the identities of the independently detected faces. The usefulness of the proposed method was verified experimentally, and reasonably meaningful results were observed.
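As an illustration of the identity-linking idea described in the abstract (not the paper's actual local-area similarity measure), per-frame face boxes can be greedily matched to the previous frame's tracks by an overlap score:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def link_identities(prev, curr, next_id, thresh=0.3):
    """prev: {track_id: box} from the last frame; curr: boxes detected
    independently in this frame. Greedily reuse the id of the most
    similar unmatched previous box, else open a new track id."""
    out, used = {}, set()
    for box in curr:
        best, best_s = None, thresh
        for pid, pbox in prev.items():
            s = iou(pbox, box)
            if pid not in used and s > best_s:
                best, best_s = pid, s
        if best is None:
            best, next_id = next_id, next_id + 1
        out[best] = box
        used.add(best)
    return out, next_id
```

A similarity other than IoU (for example the paper's relative similarity of local areas) can be dropped in without changing the matching loop.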

Realtime Facial Expression Representation Method For Virtual Online Meetings System

  • Zhu, Yinge;Yerkovich, Bruno Carvacho;Zhang, Xingjie;Park, Jong-il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.212-214 / 2021
  • In a society with Covid-19 as part of our daily lives, we had to adapt to a new reality to keep our lifestyles as normal as possible, for example through teleworking and online classes. However, several issues appeared along the way. One of them is the doubt about whether real people are in front of the camera, or whether someone is paying attention during a lecture. We address this issue by creating a 3D reconstruction tool that actively identifies human faces and expressions. We use a web camera and a lightweight 3D face model, and fit expression coefficients from 2D facial landmarks to drive the 3D model. With this model, it is possible to represent faces with an avatar and fully control its bones with rotation and translation parameters. We propose this method as a way to reconstruct facial expressions during online meetings.
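The landmark-to-coefficient fitting step can be sketched under a linear blendshape assumption (landmarks ≈ base + B·w); this linear model is an illustrative simplification, not necessarily the authors' exact formulation:

```python
import numpy as np

def fit_expression(landmarks, base, blendshapes):
    """Solve base + blendshapes @ w ≈ landmarks for the expression
    coefficients w in the least-squares sense.

    landmarks, base : flattened 2D landmark vectors, shape (2N,)
    blendshapes     : per-coefficient landmark offsets, shape (2N, K)
    """
    w, *_ = np.linalg.lstsq(blendshapes, landmarks - base, rcond=None)
    return w
```

The recovered coefficients w can then drive the avatar's expression channels alongside separately estimated head rotation and translation.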


Kiosk System Development Using Eye Tracking And Face-Recognition Technology (시선추적 기술과 얼굴인식 기술을 이용한 무인단말기(키오스크)시스템)

  • Kim, Min-Jae;Kim, Tae-Won;Lee, Hyo-Jin;Jo, Il-Hyun;Kim, Woongsup
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.486-489 / 2020
  • This design recognizes the user's face and eyes, then links eye movement to mouse movement via gaze tracking so that menu items can be ordered. With gaze tracking, menus can be ordered conveniently without touching the kiosk, and with face recognition, users can check their recent order history to order quickly and easily. New users whose faces are not yet registered can use an Android app to select a photo and menu items and add them to a cart, shortening ordering time; the system is implemented to provide this convenience to busy modern users.

Life Prevention Service for COVID-19 using Machine Learning (머신러닝을 활용한 코로나 바이러스 생활방역 서비스)

  • Lee, Se-Hoon;Kim, Young-jin;Jeong, Ji-Seok;Seo, Hee-Ju;Kwon, Hyeon-guen
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.95-96 / 2020
  • This paper proposes a contactless fever-screening method in which a QR code provides a first identity-verification step, followed by a second verification step using face recognition with the K-NN algorithm. This enables not only follow-up management but also, through CCTV footage, tracing of people who were near a confirmed case, supporting rapid contact tracing.
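The K-NN verification step can be sketched as plain nearest-neighbor voting over face feature vectors; the feature extractor producing those vectors is assumed and not shown here:

```python
def knn_identify(query, gallery, k=3):
    """Majority vote among the k gallery entries nearest to the query.
    gallery: list of (label, feature_vector) pairs."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(gallery, key=lambda lf: sqdist(lf[1], query))[:k]
    labels = [label for label, _ in nearest]
    return max(set(labels), key=labels.count)
```

In the screening scenario, the returned label would be compared against the identity claimed by the QR code before the fever check proceeds.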


Estimation of a Gaze Point in 3D Coordinates using Human Head Pose (휴먼 헤드포즈 정보를 이용한 3차원 공간 내 응시점 추정)

  • Shin, Chae-Rim;Yun, Sang-Seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.177-179 / 2021
  • This paper proposes a method of estimating the location of a target point at which an interactive robot gazes in an indoor space. RGB images are extracted from low-cost web-cams, the user's head pose is obtained from the face detection (Openface) module, and geometric configurations are applied to estimate the user's gaze direction in 3D space. The coordinates of the target point at which the user stares are finally measured by intersecting the estimated gaze direction with the table plane.
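The final step, intersecting the estimated gaze ray with the table plane, is a standard ray-plane computation; a minimal sketch, assuming the eye position and plane are already expressed in a common coordinate frame:

```python
import numpy as np

def gaze_point_on_plane(eye, direction, plane_point, plane_normal):
    """Intersect the gaze ray eye + t * direction (t >= 0) with a plane
    given by a point on it and its normal; returns None on no hit."""
    eye = np.asarray(eye, float)
    d = np.asarray(direction, float)
    n = np.asarray(plane_normal, float)
    denom = n @ d
    if abs(denom) < 1e-9:
        return None  # gaze parallel to the plane
    t = n @ (np.asarray(plane_point, float) - eye) / denom
    if t < 0:
        return None  # plane lies behind the viewer
    return eye + t * d
```

The gaze direction vector itself would come from the head-pose angles estimated by the Openface module.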


Anomaly Sewing Pattern Detection for AIoT System using Deep Learning and Decision Tree

  • Nguyen Quoc Toan;Seongwon Cho
    • Smart Media Journal / v.13 no.2 / pp.85-94 / 2024
  • Artificial Intelligence of Things (AIoT), which combines AI and the Internet of Things (IoT), has recently gained popularity. Deep neural networks (DNNs) have achieved great success in many applications. Nevertheless, deploying complex AI models on embedded boards may be challenging due to computational limitations or model complexity. This paper focuses on an AIoT-based system for smart sewing automation using edge devices. Our technique comprises a detection model and a decision tree for a sufficient testing scenario. YOLOv5 serves as the basis for our defective-sewing-stitch detection model, which detects anomalies and classifies the sewing patterns. In experimental testing, the proposed approach achieved a perfect score, with an accuracy and F1-score of 1.0, a false positive rate (FPR) and false negative rate (FNR) of 0, and a speed of 0.07 seconds with a file size of 2.43 MB.
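The abstract does not spell out the decision-tree logic; as a purely hypothetical illustration, a post-detection rule tree over the (class, confidence) pairs emitted by the detector might look like:

```python
def stitch_decision(detections, conf_thresh=0.5):
    """detections: list of (class_name, confidence) from the detector.
    Hypothetical rule tree: reject the piece if any anomaly class is
    detected confidently, flag it for recheck on a weak detection,
    otherwise pass it. Class names here are made up for illustration."""
    anomalies = [c for name, c in detections if name != "normal_stitch"]
    if not anomalies:
        return "pass"
    return "reject" if max(anomalies) >= conf_thresh else "recheck"
```

Such a rule layer is what lets a lightweight edge deployment turn raw detections into an actionable accept/reject signal.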

Eye Tracking Using Neural Network and Mean-shift (신경망과 Mean-shift를 이용한 눈 추적)

  • Kang, Sin-Kuk;Kim, Kyung-Tai;Shin, Yun-Hee;Kim, Na-Yeon;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea CI / v.44 no.1 / pp.56-63 / 2007
  • In this paper, an eye tracking method is presented using a neural network (NN) and the mean-shift algorithm that can accurately detect and track a user's eyes against a cluttered background. In the proposed method, to deal with rigid head motion, the facial region is first obtained using a skin-color model and connected-component analysis. Thereafter, the eye regions are localized using an NN-based texture classifier that discriminates the facial region into eye and non-eye classes, which enables our method to accurately detect users' eyes even if they wear glasses. Once the eye regions are localized, they are continuously and correctly tracked by the mean-shift algorithm. To assess the validity of the proposed method, it was applied to an interface system using eye movement and tested with a group of 25 users playing 'aligns' games. The results show that the system processes more than 30 frames/sec on a PC for 320×240 input images and provides user-friendly, convenient access to a computer in real-time operation.
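The mean-shift tracking step can be sketched as repeatedly moving a window toward the weighted centroid of a likelihood map (a generic mean-shift, not the paper's exact configuration):

```python
import numpy as np

def mean_shift(weights, window, center, iters=20):
    """Shift a (h, w) window over a 2D likelihood map toward the
    weighted centroid of the pixels under the window."""
    h, w = window
    cy, cx = center
    for _ in range(iters):
        y0, x0 = max(0, cy - h // 2), max(0, cx - w // 2)
        patch = weights[y0:y0 + h, x0:x0 + w]
        total = patch.sum()
        if total == 0:
            break  # no support under the window; stay put
        ys, xs = np.mgrid[y0:y0 + patch.shape[0], x0:x0 + patch.shape[1]]
        ny = int(round((ys * patch).sum() / total))
        nx = int(round((xs * patch).sum() / total))
        if (ny, nx) == (cy, cx):
            break  # converged
        cy, cx = ny, nx
    return cy, cx
```

In the paper's setting, the likelihood map would be the NN classifier's per-pixel eye-class response around the previously tracked eye position.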

A Study on the Visual Attention of Popular Animation Characters Utilizing Eye Tracking (아이트래킹을 활용한 인기 애니메이션 캐릭터의 시각적 주의에 관한 연구)

  • Hwang, Mi-Kyung;Kwon, Mahn-Woo;Park, Min-Hee;Yin, Shuo-Han
    • The Journal of the Korea Contents Association / v.19 no.6 / pp.214-221 / 2019
  • Visual perception information acquired through the eyes contains much information about how visual stimuli are viewed. Using eye-tracking technology, consumers' visual information can be acquired and analyzed as quantitative data. These measurements can capture emotions that customers feel unconsciously, and the characters' search responses can be collected directly by quantifying them numerically through eye tracking. In this study, we traced each character's areas of interest (AOI) and analyzed the average fixation duration and count, the average visit duration and count, and finally the time to first fixation. The analysis showed that more cognitive processing occurred on the face than on the character's body, and that visual attention there was high. The visual attention given to the attraction factor also verified that attractiveness is an important factor in determining preferences for characters. Based on these results, further studies of more characters will be conducted, and the quantitative interpretation methods can serve as basic data for character development and as factors to consider in character design.
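The AOI metrics named in the abstract (fixation count, fixation duration, time to first fixation) can be computed from a fixation list as follows; the fixation tuple layout here is an assumption for illustration:

```python
def aoi_metrics(fixations, aoi):
    """fixations: list of (t_start, duration, x, y) tuples;
    aoi: (x1, y1, x2, y2) area of interest.
    Returns fixation count, total/mean duration, and time to first fixation."""
    inside = [f for f in fixations
              if aoi[0] <= f[2] <= aoi[2] and aoi[1] <= f[3] <= aoi[3]]
    if not inside:
        return {"count": 0, "total": 0.0, "mean": 0.0, "ttff": None}
    durs = [f[1] for f in inside]
    return {"count": len(inside),
            "total": sum(durs),
            "mean": sum(durs) / len(durs),
            "ttff": min(f[0] for f in inside)}
```

Running this once per AOI (face, body, attraction elements) yields exactly the per-region comparisons the study reports.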

Speech Activity Detection using Lip Movement Image Signals (입술 움직임 영상 신호를 이용한 음성 구간 검출)

  • Kim, Eung-Kyeu
    • Journal of the Institute of Convergence Signal Processing / v.11 no.4 / pp.289-297 / 2010
  • This paper presents a method for preventing external acoustic noise from being misrecognized as speech during the speech-activity detection stage of speech recognition, by using lip-movement image signals in addition to acoustic energy. First, successive images are captured with a PC camera and the presence or absence of lip movement is discriminated. The lip-movement image data are then stored in shared memory and shared with the speech-recognition process. During speech-activity detection, the preprocessing phase of speech recognition, whether the acoustic energy originates from the speaker's utterance is verified by checking the data in shared memory. In experiments linking the speech-recognition and image processors, the recognition result was output normally when the speaker faced the camera while speaking, and was not output when the speaker spoke without facing the camera. In addition, the initial feature values and the initial template image captured off-line are replaced with ones captured on-line, which improves the discrimination of lip-movement image tracking. An image-processing test bed was implemented to visually confirm the lip-movement tracking process and to analyze the related parameters in real time. When the speech and image processing systems were linked, the interworking rate was 99.3% under various illumination environments.

Adaptive Skin Color Segmentation in a Single Image using Image Feedback (영상 피드백을 이용한 단일 영상에서의 적응적 피부색 검출)

  • Do, Jun-Hyeong;Kim, Keun-Ho;Kim, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.3 / pp.112-118 / 2009
  • Skin color segmentation techniques have been widely utilized for face/hand detection and tracking in many applications, such as diagnosis systems using facial information, human-robot interaction, and image retrieval systems. In the case of a video image, it is common for the skin color model of a target to be updated every frame for robust tracking against illumination change. For a single image, however, most studies employ a fixed skin color model, which may result in a low detection rate or high false-positive errors. In this paper, we propose a novel method for effective skin color segmentation in a single image, which iteratively modifies the conditions for skin color segmentation using image feedback from the segmented skin color region of the given image.
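The image-feedback loop can be sketched as iteratively refitting a color band to the statistics of the currently segmented pixels; the single-channel hue-band model and the mean ± k·std update rule are simplifying assumptions, not the paper's exact segmentation conditions:

```python
import numpy as np

def adaptive_skin_segmentation(hue, lo=0.0, hi=0.2, iters=5, k=2.0):
    """hue: array of per-pixel hue values in [0, 1]. Start from a broad
    fixed skin-hue band, then iteratively refit the band to the
    mean +/- k*std of the currently segmented pixels (the feedback loop)."""
    for _ in range(iters):
        mask = (hue >= lo) & (hue <= hi)
        if not mask.any():
            break  # nothing segmented; keep the previous band
        m, s = hue[mask].mean(), hue[mask].std()
        lo, hi = m - k * s, m + k * s
    return (hue >= lo) & (hue <= hi)
```

Each pass tightens the band around the image's own skin statistics, which is what lets a single image escape the limitations of one fixed global model.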