• Title/Summary/Keyword: Face Tracking


Analysis of Visual Attention of Students with Developmental Disabilities in Virtual Reality Based Training Contents (가상현실기반 훈련 콘텐츠에서 발달장애인의 시각적 주의집중도 분석)

  • Jo, Junghee
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.2
    • /
    • pp.328-335
    • /
    • 2021
  • In the era of 'Untact', virtual reality-based job training platforms are actively being used as part of non-face-to-face education for students with developmental disabilities. Because people with developmental disabilities may lack sufficient cognitive abilities, it is difficult to conduct untact training seamlessly without the help of a third party. Therefore, training programs need to identify the right moment to provide help so that the training can continue. This research analyzed the visual attention of students with developmental disabilities in a virtual reality-based job training program in order to determine the point at which an intervention is required by the trainee. Results showed that students who completed the mission tended to focus intense visual attention on a small number of objects for a certain period of time, whereas the visual attention of students who failed tended to shift erratically among multiple objects.

New Digital Esthetic Rehabilitation Technique with Three-dimensional Augmented Reality: A Case Report

  • Hang-Nga, Mai;Du-Hyeong, Lee
    • Journal of Korean Dental Science
    • /
    • v.15 no.2
    • /
    • pp.166-171
    • /
    • 2022
  • This case report describes a dynamic digital esthetic rehabilitation procedure that integrates a new three-dimensional augmented reality (3D-AR) technique to treat a patient with multiple missing anterior teeth. The prostheses were designed using computer-aided design (CAD) software and virtually trialed using static and dynamic visualization methods. In the static method, the prostheses were visualized by integrating the CAD model with a 3D face scan of the patient. For the dynamic method, the 3D-AR application was used for real-time tracking and projection of the CAD prostheses in the patient's mouth. Results of a quick survey on patient satisfaction with the two visualization methods showed that the patient felt more satisfied with the dynamic visualization method because it allowed him to observe the prostheses directly on his face and be more proactive in the treatment process.

A Computer Vision Approach for Identifying Acupuncture Points on the Face and Hand Using the MediaPipe Framework (MediaPipe Framework를 이용한 얼굴과 손의 경혈 판별을 위한 Computer Vision 접근법)

  • Hadi S. Malekroodi;Myunggi Yi;Byeong-il Lee
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2023.11a
    • /
    • pp.563-565
    • /
    • 2023
  • Acupuncture and acupressure apply needles or pressure to anatomical points for therapeutic benefit. The over 350 mapped acupuncture points in the human body can each treat various conditions, but anatomical variations make precisely locating these acupoints difficult. We propose a computer vision technique using the real-time hand and face tracking capabilities of the MediaPipe framework to identify acupoint locations. Our model detects anatomical facial and hand landmarks, and then maps these to corresponding acupoint regions. In summary, our proposed model facilitates precise acupoint localization for self-treatment and enhances practitioners' abilities to deliver targeted acupuncture and acupressure therapies.
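The mapping step described above, from detected landmarks to acupoint regions, can be sketched as follows. The landmark indices and acupoint names below are illustrative placeholders, not the paper's actual mapping; MediaPipe returns normalized coordinates, which must be scaled to pixel coordinates:

```python
def landmarks_to_acupoints(landmarks, width, height, index_map):
    """Convert normalized (x, y) landmarks to pixel coordinates
    for the acupoints named in index_map."""
    points = {}
    for name, idx in index_map.items():
        x, y = landmarks[idx]
        points[name] = (round(x * width), round(y * height))
    return points

# Hypothetical mapping: acupoint name -> landmark index
INDEX_MAP = {"yintang_like": 0, "hegu_like": 1}
lms = [(0.5, 0.4), (0.25, 0.75)]   # normalized coords, e.g. from MediaPipe
print(landmarks_to_acupoints(lms, 640, 480, INDEX_MAP))
```

In a full system the landmark list would come from MediaPipe's face-mesh or hand solutions; only the index-to-acupoint table would need to be curated per acupoint.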

Implementation of a human-computer interface system with motion tracking using OpenCV (OpenCV를 이용한 눈동자 모션인식을 통한 의사소통 시스템 구현)

  • Heo, Seung Won;Lee, Seung Jun;Lee, Hee Bin;Yu, Yun Seop
  • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.05a
    • /
    • pp.700-702
    • /
    • 2018
  • This paper introduces a system that enables communication by tracking the pupils of patients with Lou Gehrig's disease who are unable to move their bodies. Face and pupil tracking are performed using OpenCV, and eye-movement recognition and eye-controlled character selection are implemented in Python. The system uses a webcam to track the eyes, determines eye movements from the pupil coordinates, and prints the characters the user selects. Text messages can then be transmitted easily over Bluetooth.

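The character-selection scheme above hinges on classifying the pupil position within the detected eye region into a direction. A minimal sketch of that step; the dead-zone threshold and the direction set are assumptions, not taken from the paper:

```python
def gaze_direction(pupil, box, dead_zone=0.15):
    """Classify the pupil centre (px, py) within the eye bounding box
    (x, y, w, h) into one of: center, left, right, up, down."""
    x, y, w, h = box
    dx = (pupil[0] - (x + w / 2)) / w   # horizontal offset, normalized
    dy = (pupil[1] - (y + h / 2)) / h   # vertical offset, normalized
    if abs(dx) < dead_zone and abs(dy) < dead_zone:
        return "center"
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

Each direction would then advance a cursor through an on-screen character grid; the pupil coordinates themselves would come from OpenCV's face/eye detection.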

Dynamic Manipulation of a Virtual Object in Marker-less AR system Based on Both Human Hands

  • Chun, Jun-Chul;Lee, Byung-Sung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.4
    • /
    • pp.618-632
    • /
    • 2010
  • This paper presents a novel approach to controlling augmented reality (AR) objects robustly in a marker-less AR system through fingertip tracking and hand pattern recognition. One promising way to build a marker-less AR system is to use parts of the human body, such as the hand or face, in place of traditional fiducial markers. This paper introduces a real-time method to manipulate the overlaid virtual objects dynamically in a marker-less AR system using both hands and a single camera. The bare left hand serves as a virtual marker, and the right hand is used as a hand mouse. To build the marker-less system, we utilize a skin-color model for hand shape detection and curvature-based fingertip detection on the input video image. Using the detected fingertips, the camera pose is estimated to overlay virtual objects on the hand coordinate system. To manipulate the rendered virtual objects dynamically, a vision-based hand control interface is developed that exploits fingertip tracking for moving the objects and pattern matching for initiating hand commands. The experiments show that the proposed system can control the objects dynamically in a convenient fashion.
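The skin-color model used for hand detection can be sketched as a rule-based per-pixel classifier. The RGB thresholds below are a commonly cited heuristic, not the model used in the paper:

```python
def is_skin(r, g, b):
    """Rule-based skin test on one RGB pixel (illustrative thresholds).
    Real systems typically learn a skin-color distribution instead."""
    return (r > 95 and g > 40 and b > 20     # bright enough overall
            and r > g and r > b              # red-dominant channel
            and abs(r - g) > 15)             # reject gray tones
```

Applying this test over the frame yields a binary hand mask; fingertips would then be found as high-curvature points on the mask's contour.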

Real-Time Face Tracking System Of Object Segmentation Tracking Method Applied To Motion and Color Information (움직임과 색상정보에서 객체 분할 추적 기법을 적용한 실시간 얼굴 추적 시스템)

  • Choi, Young-Kwan;Cho, Sung-Min;Choi, Chul;Hwang, Hoon;Park, Chang-Choon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2002.11a
    • /
    • pp.669-672
    • /
    • 2002
  • With the recent rapid development of multimedia technology, face-related research has been actively pursued in areas such as personal identification and security systems. Previous approaches have drawbacks: tracking at a distance is difficult, and tracking efficiency is low depending on computation time, noise, background, and lighting. In this paper, for fast and accurate face tracking, we use an object-segmentation tracking method based on motion and skin-color characteristics within segmented regions obtained with a differential image method. Object-segmentation tracking treats the face as a single object; by applying a face-only segmentation step and a facial-feature extraction step, it resolves the problem seen in skin-color-based studies, where face tracking fails under varying skin-color exposure in the current input frame. The system was implemented with an ordinary PC camera and achieved relatively successful face tracking on real-time video [4].

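The differential image method at the core of the segmentation step compares consecutive grayscale frames pixel by pixel; a minimal sketch, with the threshold value chosen arbitrarily for illustration:

```python
def motion_mask(prev, curr, threshold=25):
    """Differential image: True where the grayscale intensity changed
    by more than `threshold` between two frames (lists of rows)."""
    return [[abs(c - p) > threshold for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]
```

The resulting mask, intersected with a skin-color mask, isolates the moving face region for tracking.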

A Study on Real-Time Gaze Discrimination Using a Kalman Filter (Kalman-Filer를 이용한 효과적인 실시간 시선검출)

  • Jeong, You-Sun;Hong, Sung-Soo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.5 no.4
    • /
    • pp.399-405
    • /
    • 2010
  • In this paper, we present a new gaze-identification system that addresses a difficulty of existing approaches: head movement requires the user to recalibrate before gaze can be determined. Using a Kalman filter, the future head position is estimated from the current position information. To verify that a candidate region is a face, structural facial features are analyzed with a horizontal- and vertical-histogram method, which detects facial elements with relatively short processing time. Real-time pupil detection and tracking are then performed using the infrared bright-pupil effect, and pupil-glint vectors are extracted.
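The prediction step described above, estimating the next head position with a Kalman filter, can be sketched in one dimension. The random-walk state model and the noise values below are simplifying assumptions, not the paper's filter:

```python
class Kalman1D:
    """Scalar random-walk Kalman filter: smooths noisy position
    measurements; its prediction for the next frame is the current
    state estimate. q and r are assumed noise values."""
    def __init__(self, q=1e-2, r=1.0):
        self.x = None   # state estimate (position)
        self.p = 1.0    # estimate variance
        self.q, self.r = q, r   # process / measurement noise

    def step(self, z):
        if self.x is None:               # initialize on first measurement
            self.x = z
            return self.x
        self.p += self.q                 # predict: uncertainty grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # update toward measurement
        self.p *= (1 - k)
        return self.x
```

Tracking a 2-D head position would run one such filter per coordinate, or use a proper constant-velocity state model so the prediction leads the motion.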

Detection of Pupil Center using Projection Function and Hough Transform (프로젝션 함수와 허프 변환을 이용한 눈동자 중심점 찾기)

  • Choi, Yeon-Seok;Mun, Won-Ho;Kim, Cheol-Ki;Cha, Eui-Young
  • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.10a
    • /
    • pp.167-170
    • /
    • 2010
  • In this paper, we propose a novel algorithm to detect the center of the pupil in a frontal-view face. The algorithm first extracts the eye region from the face image using the integral projection function and the variance projection function. Within the eye region, the pupil center is detected using a circular Hough transform with a Sobel edge mask. Experimental results show good performance in detecting pupil centers in FERET face images.

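The integral projection functions used to locate the eye region are simply row and column intensity averages; dark valleys in them indicate candidate eye rows and columns. A minimal sketch on a grayscale image stored as a list of rows:

```python
def integral_projections(gray):
    """Integral projection functions: mean intensity of each row
    (vertical IPF) and each column (horizontal IPF)."""
    h, w = len(gray), len(gray[0])
    ipf_v = [sum(row) / w for row in gray]                              # per-row mean
    ipf_h = [sum(gray[r][c] for r in range(h)) / h for c in range(w)]   # per-column mean
    return ipf_v, ipf_h
```

The variance projection function used alongside it replaces the mean with the per-row/per-column variance, which responds more strongly to the high-contrast eye area.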

Real-Time Human Tracking Using Skin Area and Modified Multi-CAMShift Algorithm (피부색과 변형된 다중 CAMShift 알고리즘을 이용한 실시간 휴먼 트래킹)

  • Min, Jae-Hong;Kim, In-Gyu;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.15 no.6
    • /
    • pp.1132-1137
    • /
    • 2011
  • In this paper, we propose a Modified Multi-CAMShift Algorithm (Modified Multi Continuously Adaptive Mean Shift Algorithm) that extracts skin-color areas and tracks several human body parts for a real-time human tracking system. The skin-color areas are extracted by filtering the input image within a predefined RGB value range, and they serve as the initial search windows for tracking the hands and face. A Gaussian background model keeps the search windows from expanding by restricting them to the skin-color areas. When these areas occlude one another, more weight is given to the occlusion area and the mass center of the target area is shifted in the color probability distribution image. As a result, the proposed algorithm outperforms the original CAMShift approach in multi-object tracking, even when objects with similar colors occlude one another.
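The core of the CAMShift family is the mean-shift iteration: move the search window to the centroid of the color-probability mass it currently covers. A sketch of one iteration on a probability map stored as a list of rows (image-border handling is omitted for brevity):

```python
def mean_shift_step(prob, window):
    """One mean-shift iteration: recentre the window (x, y, w, h) on
    the centroid of the probability mass it covers."""
    x, y, w, h = window
    m00 = m10 = m01 = 0.0
    for r in range(y, y + h):
        for c in range(x, x + w):
            p = prob[r][c]
            m00 += p            # zeroth moment (total mass)
            m10 += p * c        # first moment in x
            m01 += p * r        # first moment in y
    if m00 == 0:                # no mass: leave the window in place
        return window
    cx, cy = m10 / m00, m01 / m00
    return (round(cx - w / 2), round(cy - h / 2), w, h)
```

CAMShift repeats this until the window stops moving, then resizes the window from the zeroth moment; the paper's modification additionally reweights the map in occlusion areas.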

Detection and Blocking of a Face Area Using a Tracking Facility in Color Images (컬러 영상에서 추적 기능을 활용한 얼굴 영역 검출 및 차단)

  • Jang, Seok-Woo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.10
    • /
    • pp.454-460
    • /
    • 2020
  • In recent years, the rapid increase in video distribution and viewing over the Internet has raised the risk of personal-information exposure. In this paper, a method is proposed to robustly identify areas in images where a person's privacy is compromised and to simultaneously block the object area by blurring it while rapidly tracking it with a prediction algorithm. With this method, the target object area is accurately identified using artificial neural network-based learning. The detected object area is then tracked using a location prediction algorithm and continuously blocked by blurring. Experimental results show that the proposed method effectively blocks private areas in images by blurring them while tracking the target objects about 2.5% more accurately than an existing method. The proposed blocking method is expected to be useful in many applications, such as personal-information protection, video security, and object tracking.
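The blocking step, obscuring the tracked face region in each frame, can be sketched as pixelation of a grayscale region. The block size is an assumption, and the paper's actual blurring method may differ:

```python
def pixelate_region(img, box, block=8):
    """Obscure a detected face region in-place by pixelation: each
    block x block tile inside box (x, y, w, h) is replaced with its
    mean value. img is a grayscale image as a list of rows."""
    x, y, w, h = box
    for by in range(y, y + h, block):
        for bx in range(x, x + w, block):
            ys = range(by, min(by + block, y + h))
            xs = range(bx, min(bx + block, x + w))
            mean = sum(img[r][c] for r in ys for c in xs) // (len(ys) * len(xs))
            for r in ys:
                for c in xs:
                    img[r][c] = mean
    return img
```

Run once per frame on the box reported by the tracker, this keeps the private region unreadable even while it moves.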