• Title/Summary/Keyword: Face recognition/tracking

Development of a Non-contact Input System Based on User's Gaze-Tracking and Analysis of Input Factors

  • Jiyoung LIM;Seonjae LEE;Junbeom KIM;Yunseo KIM;Hae-Duck Joshua JEONG
    • Korean Journal of Artificial Intelligence
    • /
    • v.11 no.1
    • /
    • pp.9-15
    • /
    • 2023
  • As mobile devices such as smartphones, tablets, and kiosks become increasingly prevalent, there is growing interest in developing alternative input systems alongside traditional tools such as keyboards and mice. Many people use their own bodies as a pointer to enter simple information on a mobile device. However, body-contact methods have limitations: psychological factors make contact input unreliable, especially during a pandemic, and they are exposed to shoulder-surfing attacks. To overcome these limitations, we propose a simple information input system that uses gaze-tracking technology to input passwords and control web surfing with non-contact gaze alone. Our proposed system is designed to recognize an input when the user stares at a specific location on the screen in real time, using intelligent gaze-tracking technology. We present an analysis of the relationship between the gaze input box, gaze time, and average input time, and report experimental results on how varying the size of the gaze input box and the required gaze time affects reaching 100% input accuracy. Through this paper, we demonstrate the effectiveness of our system in mitigating the challenges of contact-based input methods and providing a non-contact alternative that is both secure and convenient.
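As a minimal sketch of the dwell-time ("gaze time") selection the abstract describes, assuming gaze coordinates are already supplied by some tracker: an on-screen box fires once the gaze point has stayed inside it long enough. The GazeBox class, its geometry, and the dwell threshold are illustrative assumptions, not the paper's implementation; enlarging the box or lengthening the dwell time mirrors the accuracy-versus-input-time trade-off the paper analyzes.

```python
import time

class GazeBox:
    """One selectable region; fires after the gaze dwells inside it for dwell_s seconds."""
    def __init__(self, x, y, w, h, dwell_s=1.0):
        self.rect = (x, y, w, h)
        self.dwell_s = dwell_s       # required "gaze time" before the box counts as input
        self._enter_t = None

    def update(self, gx, gy, now=None):
        """Feed one gaze sample (screen pixels); return True when the dwell threshold is met."""
        now = time.monotonic() if now is None else now
        x, y, w, h = self.rect
        if not (x <= gx < x + w and y <= gy < y + h):
            self._enter_t = None     # gaze left the box, reset the timer
            return False
        if self._enter_t is None:
            self._enter_t = now
        return (now - self._enter_t) >= self.dwell_s

# e.g. one box per password digit; larger boxes and longer dwell times trade speed for accuracy
digit_7 = GazeBox(x=300, y=200, w=120, h=120, dwell_s=1.2)
```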

Interactive user authentication combining face recognition with real-time eye tracking (얼굴 인식과 실시간 눈동자 인식을 결합한 인터렉티브한 사용자 인증)

  • Jun, Young-Si;Lee, Chang-Hoon
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2010.11a
    • /
    • pp.1094-1097
    • /
    • 2010
  • Among biometric technologies, this work combines face recognition with real-time pupil recognition to provide a more interactive authentication environment than face recognition or fingerprint recognition alone, thereby reducing the possibility of bypassing the authentication scheme with a copied image.

Effective Eye Detection for Face Recognition to Protect Medical Information (의료정보 보호를 위해 얼굴인식에 필요한 효과적인 시선 검출)

  • Kim, Suk-Il;Seok, Gyeong-Hyu
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.12 no.5
    • /
    • pp.923-932
    • /
    • 2017
  • In this paper, we propose an eye- and face-recognition identification system based on a GRNN (Generalized Regression Neural Network) algorithm to address a shortcoming of existing systems, in which facial movement makes it difficult to identify the user's gaze. A Kalman filter operating on structural information about facial features is used to determine whether the face is genuine and to estimate the future head location from the current one, while the horizontal and vertical elements of the face are detected by a comparatively fast histogram analysis. An infrared illuminator is configured so that its reflection from the pupil supports real-time pupil detection, and the tracked pupil is used to extract the feature vector.
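The future-location estimate mentioned above is a standard Kalman prediction step. Below is a rough constant-velocity sketch using OpenCV's KalmanFilter; the state layout, noise covariances, and the source of the measured pupil/head center are assumptions rather than the paper's settings.

```python
import numpy as np
import cv2

kf = cv2.KalmanFilter(4, 2)                      # state: (x, y, vx, vy), measurement: (x, y)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

def track(measured_xy):
    """Predict the next head/pupil position, then correct with the measured center."""
    predicted = kf.predict()                     # estimated future location
    kf.correct(np.array([[measured_xy[0]], [measured_xy[1]]], np.float32))
    return float(predicted[0, 0]), float(predicted[1, 0])
```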

Speech Activity Detection using Lip Movement Image Signals (입술 움직임 영상 신호를 이용한 음성 구간 검출)

  • Kim, Eung-Kyeu
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.11 no.4
    • /
    • pp.289-297
    • /
    • 2010
  • In this paper, a method is presented for preventing external acoustic noise from being misrecognized as speech during the speech activity detection stage of speech recognition, by checking lip-movement image signals in addition to acoustic energy. First, successive images are captured with a PC camera and the presence or absence of lip movement is discriminated. The lip-movement image data are then stored in shared memory and shared with the speech recognition process. Meanwhile, the speech activity detection process, which is the preprocessing phase of speech recognition, verifies whether the acoustic energy comes from the speaker's utterance by checking the data in the shared memory. Experiments linking the speech recognition processor with the image processor confirmed that a recognition result is produced normally when the speaker faces the camera while speaking, and that no result is produced when the speaker speaks without facing the camera. In addition, the initial feature values obtained off-line are replaced with values obtained on-line; similarly, the initial template image captured off-line is replaced with a template image captured on-line, which improves the discrimination of lip-movement tracking. An image processing test bed was implemented to visually confirm the lip-movement tracking process and to analyze the related parameters in real time. When the speech and image processing systems were linked, the interworking rate was 99.3% under various illumination environments.
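A minimal sketch of the paper's central decision rule, under the assumptions of a fixed camera, a hand-picked mouth region, and an acoustic-energy flag computed elsewhere: speech activity is accepted only when acoustic energy and lip movement agree. Lip movement is approximated here by simple frame differencing rather than the paper's template tracking.

```python
import cv2
import numpy as np

MOUTH_ROI = (220, 300, 200, 120)    # x, y, w, h of the mouth region (assumed fixed camera)
MOTION_THRESH = 8.0                 # mean absolute difference that counts as "moving lips"

def lips_moving(prev_frame, frame):
    """Compare the mouth ROI of two consecutive BGR frames."""
    x, y, w, h = MOUTH_ROI
    a = cv2.cvtColor(prev_frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    b = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    return float(np.mean(cv2.absdiff(a, b))) > MOTION_THRESH

def speech_active(acoustic_energy_high, prev_frame, frame):
    # Reject external noise: acoustic energy alone is not enough, the lips must also move.
    return acoustic_energy_high and lips_moving(prev_frame, frame)
```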

A Passport Recognition and face Verification Using Enhanced fuzzy ART Based RBF Network and PCA Algorithm (개선된 퍼지 ART 기반 RBF 네트워크와 PCA 알고리즘을 이용한 여권 인식 및 얼굴 인증)

  • Kim Kwang-Baek
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.1
    • /
    • pp.17-31
    • /
    • 2006
  • In this paper, passport recognition and face verification methods are proposed that can automatically recognize passport codes and detect forged passports, improving the efficiency and systematic control of immigration management. Correcting the slant is important for character recognition and face verification, since a slanted passport image degrades the recognition of individual codes and faces. Therefore, after smearing the passport image, the longest extracted character string is selected, and the angle is adjusted using the slope of the horizontal line that connects the centers of thickness of the left and right parts of the string. Passport codes are extracted with a Sobel operator, horizontal smearing, and an 8-neighborhood contour tracking algorithm. The extracted code-string region is binarized with an iterative binarization method, the code strings are restored by applying a CDM mask to the binary region, and individual codes are extracted with the 8-neighborhood contour tracking algorithm. An enhanced fuzzy ART algorithm that dynamically controls the vigilance parameter is proposed and, together with a fuzzy-logic connection operator, applied to the middle layer of the RBF network. The face is authenticated by measuring the similarity between the feature vector of the facial image on the passport and the feature vector of the facial image in the database, both constructed with the PCA algorithm. Tests with forged passports and passports containing slanted images showed that the proposed method is effective in recognizing passport codes and verifying facial images.
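The face verification step is PCA feature extraction followed by a similarity measurement. The sketch below shows one way that comparison could look, assuming flattened and aligned face images and a cosine-similarity threshold; it is not the paper's exact procedure.

```python
import numpy as np

def fit_pca(train_faces, n_components=50):
    """train_faces: (N, D) matrix of flattened, aligned face images."""
    mean = train_faces.mean(axis=0)
    centered = train_faces - mean
    # SVD yields the principal axes without forming the D x D covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Project one flattened face image into the PCA subspace."""
    return components @ (face - mean)

def verify(passport_face, db_face, mean, components, threshold=0.8):
    """Compare the passport face with the database face via cosine similarity."""
    a = project(passport_face, mean, components)
    b = project(db_face, mean, components)
    cos_sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return cos_sim >= threshold
```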


Automatic Person Identification using Multiple Cues

  • Swangpol, Danuwat;Chalidabhongse, Thanarat
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.1202-1205
    • /
    • 2005
  • This paper describes a method for vision-based person identification that can detect, track, and recognize a person from video using multiple cues: height and clothing colors. The method does not require a constrained target pose or a fully frontal face image to identify the person. First, the system, which is connected to a pan-tilt-zoom camera, detects the target using motion detection and a human cardboard model. The system keeps tracking the moving target while trying to determine whether it is a human and to identify who it is among the persons registered in the database. To segment the moving target from the background scene, we employ a version of the background subtraction technique together with some spatial filtering. Once the target is segmented, we align it with the generic human cardboard model to verify whether the detected target is a human. If so, the cardboard model is also used to segment the body parts and obtain salient features such as head, torso, and legs. The whole-body silhouette is also analyzed to obtain the target's shape information such as height and slimness. We then use these multiple cues (at present, shirt color, trousers color, and body height) to recognize the target using a supervised self-organization process. We preliminarily tested the system on a set of 5 subjects with multiple sets of clothes. The recognition rate is 100% when a person wears clothes that were learned beforehand; when a person wears new clothes, the system fails to identify them, which shows that height alone is not enough to classify persons. We plan to extend the work by adding more cues such as skin color and face recognition, utilizing the zoom capability of the camera to obtain a high-resolution view of the face, and then to evaluate the system with more subjects.
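A loose sketch of the segmentation-and-cues front end described above; OpenCV's MOG2 stands in for the paper's own background subtraction, the thresholds are made up, and the cardboard-model alignment and supervised self-organization classifier are omitted.

```python
import cv2
import numpy as np

bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25, detectShadows=False)

def extract_cues(frame):
    """Segment the moving target and return simple identification cues."""
    mask = bg.apply(frame)
    mask = cv2.medianBlur(mask, 5)                       # spatial filtering of the foreground mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    target = max(contours, key=cv2.contourArea)          # assume the largest blob is the person
    x, y, w, h = cv2.boundingRect(target)
    torso = frame[y + h // 4 : y + h // 2, x : x + w]    # rough torso band for shirt color
    return {"height_px": h,
            "shirt_bgr": torso.reshape(-1, 3).mean(axis=0)}
```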


Bayesian Network Model for Human Fatigue Recognition (피로 인식을 위한 베이지안 네트워크 모델)

  • Lee Young-sik;Park Ho-sik;Bae Cheol-soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.9C
    • /
    • pp.887-898
    • /
    • 2005
  • In this paper, we introduce a probabilistic model based on Bayesian networks (BNs) for recognizing human fatigue. First, we measure facial feature information such as eyelid movement, gaze, head movement, and facial expression under IR illumination. However, an individual facial feature alone does not provide enough information to determine human fatigue. Therefore, a Bayesian network model is constructed that fuses as many fatigue-cause parameters and facial feature cues as possible for probabilistic inference of human fatigue. An MSBNX simulation was run with a BN fatigue index threshold of 0.95. As a result of the experiment, the inferred BN fatigue index and the TOVA response time show a mutual correlation, from which we conclude that this method is very effective at recognizing human fatigue.
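The paper builds a full Bayesian network in MSBNX; the sketch below is only a naive-Bayes style stand-in that shows how several visual cues might be fused into a single fatigue probability. All prior and conditional probabilities are invented placeholders.

```python
PRIOR_FATIGUE = 0.2
# For each cue: (P(cue observed | fatigued), P(cue observed | not fatigued)) -- placeholders.
CUES = {
    "slow_eyelid":  (0.85, 0.10),
    "fixed_gaze":   (0.70, 0.20),
    "head_nodding": (0.60, 0.05),
    "yawning_face": (0.75, 0.08),
}

def fatigue_index(observed):
    """observed: dict cue -> bool. Returns P(fatigue | cues) under a naive independence assumption."""
    p_f, p_nf = PRIOR_FATIGUE, 1.0 - PRIOR_FATIGUE
    for cue, (p_given_f, p_given_nf) in CUES.items():
        if observed.get(cue):
            p_f *= p_given_f
            p_nf *= p_given_nf
        else:
            p_f *= 1.0 - p_given_f
            p_nf *= 1.0 - p_given_nf
    return p_f / (p_f + p_nf)

# e.g. flag fatigue when fatigue_index({"slow_eyelid": True, "fixed_gaze": True}) exceeds 0.95,
# echoing the 0.95 threshold mentioned in the abstract.
```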

Effective Real-time Gaze Identification Using a Bayesian Statistical Network (베이지안 통계적 방안 네트워크를 이용한 효과적인 실시간 시선 식별)

  • Kim, Sung-Hong;Seok, Gyeong-Hyu
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.11 no.3
    • /
    • pp.331-338
    • /
    • 2016
  • In this paper, we propose a GRNN (Generalized Regression Neural Network) algorithm for a new eye- and gaze-identification system that addresses the difficulty existing systems have in identifying the user when the face moves. Structural information about facial features is fed to a Kalman filter to judge the authenticity of the face and to estimate the future head location from the current location, and the horizontal and vertical elements of the face are detected with a relatively fast histogram analysis. Light from an infrared illuminator produces pupil effects that allow the pupil to be detected in real time, and the pupil is tracked to extract the feature vector.
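Both this paper and the 2017 paper above are built around a GRNN mapping from eye/face features to a gaze or identification output. A bare-bones GRNN is essentially Nadaraya-Watson kernel regression; in the sketch below the calibration data, feature layout, and smoothing parameter sigma are illustrative assumptions.

```python
import numpy as np

class GRNN:
    """Generalized Regression Neural Network as Gaussian-kernel regression."""
    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, Y):
        self.X = np.asarray(X, dtype=float)      # training features, shape (N, d)
        self.Y = np.asarray(Y, dtype=float)      # training targets, shape (N, k)
        return self

    def predict(self, x):
        d2 = np.sum((self.X - np.asarray(x, dtype=float)) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * self.sigma ** 2))            # pattern-layer activations
        return (w[:, None] * self.Y).sum(axis=0) / (w.sum() + 1e-12)

# Hypothetical usage: map pupil-center / face-geometry vectors to known screen points.
# grnn = GRNN(sigma=0.3).fit(calib_features, calib_targets)
# gaze_xy = grnn.predict(current_feature)
```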

Study on the Real-time COVID-19 Confirmed Case Web Monitoring System (실시간 코로나19 확진자 웹 모니터링 시스템에 대한 연구)

  • You, Youngkyon;Jo, Seonguk;Ko, Dongbeom;Park, Jeongmin
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.1
    • /
    • pp.171-179
    • /
    • 2022
  • This paper introduces a monitoring and tracking system for confirmed COVID-19 patients based on data collected by a device, installed at the entrance of each campus building, that manages the building's access list. The existing QR-code-based electronic access list cannot measure the temperature of the person entering the building, and it is inconvenient for members to scan their QR codes with a smartphone. In addition, when the state manages information about confirmed patients and contacts on campus, it is not easy for members to quickly share and track that information, which can lead to a close contact of an infected person becoming another patient. Therefore, this paper introduces a device that combines a face recognition library with a temperature sensor, installed at the entrance of each campus building, enabling the administrator to monitor access status and quickly track the members of each building in real time.
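A sketch of the check-in step such an entrance device might perform: match the camera frame against registered members and log the entry together with a temperature reading. The paper does not name a specific library, so the face_recognition package here is an assumption, and read_temperature() is a stub for the door unit's sensor.

```python
import datetime
import face_recognition

def read_temperature():
    # Placeholder for the door unit's contactless temperature sensor.
    return 36.5

def check_in(frame_rgb, known_encodings, known_names, access_log):
    """Return the matched member's name and append an access-log entry, or None if no match."""
    encodings = face_recognition.face_encodings(frame_rgb)
    if not encodings:
        return None
    matches = face_recognition.compare_faces(known_encodings, encodings[0], tolerance=0.5)
    if True not in matches:
        return None
    name = known_names[matches.index(True)]
    access_log.append({"name": name,
                       "temp_c": read_temperature(),
                       "time": datetime.datetime.now().isoformat()})
    return name
```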