• Title/Summary/Keyword: facial detection


Real Time Eye and Gaze Tracking (실시간 눈과 시선 위치 추적)

  • Hwang, suen ki;Kim, Moon-Hwan;Cha, Sam;Cho, Eun-Seuk;Bae, Cheol-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.3
    • /
    • pp.61-69
    • /
    • 2009
  • This paper proposes a new approach to real-time eye and gaze tracking. Existing methods yield poor results when the user moves the head even slightly, and require a calibration process for each user. The proposed method combines infrared illumination with a Generalized Regression Neural Network (GRNN) to achieve reliable and accurate gaze tracking even under large head movements; because the GRNN generalizes the mapping function, the per-user calibration process can be omitted, and gaze tracking was possible even for users who did not participate in training. Experimental results showed an average accuracy of 90% under facial movement, and an average of 85% for untrained users.

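The GRNN mapping described in this abstract is, at its core, Gaussian-kernel-weighted regression from eye features to screen coordinates, which can be sketched in a few lines. The calibration points, feature layout, and `sigma` below are illustrative assumptions, not the paper's actual parameters.

```python
import math

def grnn_predict(train_x, train_y, query, sigma=0.3):
    """GRNN (Nadaraya-Watson) regression: each training target is
    weighted by a Gaussian kernel of its input's distance to the query."""
    weights = []
    for xi in train_x:
        d2 = sum((a - b) ** 2 for a, b in zip(xi, query))
        weights.append(math.exp(-d2 / (2 * sigma ** 2)))
    total = sum(weights)
    dims = len(train_y[0])
    return [sum(w * y[k] for w, y in zip(weights, train_y)) / total
            for k in range(dims)]

# Hypothetical calibration set: normalized pupil-glint vectors -> screen pixels
train_x = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
train_y = [(0, 0), (800, 0), (0, 600), (800, 600)]
gaze = grnn_predict(train_x, train_y, (0.05, 0.02))
```

Because the kernel smooths over all calibration samples, a query near one training input yields a prediction near that input's target, which is what lets the trained mapping generalize to unseen users.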

Three-dimensional Face Recognition based on Feature Points Compression and Expansion

  • Yoon, Andy Kyung-yong;Park, Ki-cheul;Park, Sang-min;Oh, Duck-kyo;Cho, Hye-young;Jang, Jung-hyuk;Son, Byounghee
    • Journal of Multimedia Information System
    • /
    • v.6 no.2
    • /
    • pp.91-98
    • /
    • 2019
  • Many researchers have attempted to recognize three-dimensional faces using feature points extracted from two-dimensional facial photographs. However, because of the limits of flat photographs, it is very difficult to recognize faces rotated more than 15 degrees using the original feature points extracted from them, and thus to create an algorithm that recognizes faces at multiple angles. In this paper, a new algorithm is proposed for three-dimensional face recognition based on feature points extracted from a flat photograph. The method divides the face into six feature-point vector zones. The vector values are then compressed and expanded according to the rotation angle of the face, so that the feature points can be recognized in three-dimensional form. For this purpose, the average compression and expansion rates by angle and facial zone were obtained from the face data of 100 persons, and the face angle was estimated by calculating the distance between the middle of the forehead and the outer corner of the eye. As a result, much improved recognition performance was obtained at a rotated face angle of 30 degrees.
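The per-angle compression/expansion step lends itself to a small sketch: look up an angle-specific rate (interpolating between measured anchors) and scale a zone's feature vector by it. The zone name, angles, and rate values below are invented placeholders; the paper derives its rates from the averaged face data of 100 persons.

```python
def interpolate_rate(angle, rate_table):
    """Linearly interpolate a compression/expansion rate for an arbitrary
    angle from a table of {angle: rate} anchors."""
    pts = sorted(rate_table.items())
    if angle <= pts[0][0]:
        return pts[0][1]
    if angle >= pts[-1][0]:
        return pts[-1][1]
    for (a0, r0), (a1, r1) in zip(pts, pts[1:]):
        if a0 <= angle <= a1:
            t = (angle - a0) / (a1 - a0)
            return r0 + t * (r1 - r0)

def adjust_zone(vector, angle, rate_table):
    """Scale one facial zone's feature-point vector by the rate
    for the estimated face angle."""
    r = interpolate_rate(angle, rate_table)
    return [v * r for v in vector]

# Hypothetical averaged rates for one facial zone (angle in degrees -> rate)
eye_zone_rates = {0: 1.00, 15: 0.88, 30: 0.71, 45: 0.55}
```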

An Automatic Strabismus Screening Method with Corneal Light Reflex based on Image Processing

  • Huang, Xi-Lang;Kim, Chang Zoo;Choi, Seon Han
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.5
    • /
    • pp.642-650
    • /
    • 2021
  • Strabismus is one of the most common diseases that may be associated with vision impairment. Especially in infants and children, it is critical to detect strabismus at an early age, because uncorrected strabismus may progress to amblyopia. To this end, ophthalmologists usually perform the Hirschberg test, which observes the corneal light reflex (CLR) to determine the presence and type of strabismus. However, this test is usually performed manually in a hospital, which can be difficult for patients who live in remote areas with poor medical access. To address this issue, we propose an automatic strabismus screening method that calculates the CLR ratio to determine the presence of strabismus based on image processing. The method first employs a pre-trained face detection model and a 68-facial-landmark detector to extract the eye-region image. The data points located on the limbus are then collected, and the least-squares method is applied to obtain the center coordinates of the iris. Finally, the coordinates of the center of the reflected light point within the iris are extracted and used, together with the coordinates of the iris edges, to calculate the CLR ratio. Experimental results on several images demonstrate that the proposed method can be a promising solution for providing strabismus screening to patients who cannot visit hospitals.
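The least-squares iris fit this abstract mentions can be done with the classic Kåsa circle fit, which reduces circle fitting to a 3×3 linear system. The CLR-ratio formula at the end is an assumed common definition (position of the reflex between the two iris edges), since the abstract does not give the paper's exact formula.

```python
import math

def det3(m):
    """Determinant of a 3x3 matrix (row-major nested lists)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def fit_circle(points):
    """Kasa least-squares circle fit: model x^2 + y^2 = D*x + E*y + F
    on limbus points and solve the normal equations by Cramer's rule."""
    n = len(points)
    Sx = sum(x for x, _ in points)
    Sy = sum(y for _, y in points)
    Sxx = sum(x * x for x, _ in points)
    Syy = sum(y * y for _, y in points)
    Sxy = sum(x * y for x, y in points)
    Sxz = sum(x * (x * x + y * y) for x, y in points)
    Syz = sum(y * (x * x + y * y) for x, y in points)
    Sz = Sxx + Syy
    M = [[Sxx, Sxy, Sx], [Sxy, Syy, Sy], [Sx, Sy, n]]
    rhs = [Sxz, Syz, Sz]
    d = det3(M)
    def with_col(i):
        return [[rhs[r] if c == i else M[r][c] for c in range(3)]
                for r in range(3)]
    D, E, F = (det3(with_col(i)) / d for i in range(3))
    cx, cy = D / 2.0, E / 2.0
    return (cx, cy), math.sqrt(F + cx * cx + cy * cy)

def clr_ratio(reflex_x, edge1_x, edge2_x):
    """Assumed CLR-ratio definition: position of the light reflex between
    the two iris edges; ~1.0 when the reflex is centered."""
    return (reflex_x - edge1_x) / (edge2_x - reflex_x)
```

Fitting four points sampled from a circle of center (2, 3) and radius 5 recovers the center and radius exactly, so noisy limbus samples yield a least-squares iris center in the same way.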

Appearance of nasopalatine duct cysts on dental magnetic resonance imaging using a mandibular coil: Two case reports with a literature review

  • Adib Al-Haj Husain ;Daphne Schonegg ;Silvio Valdec ;Bernd Stadlinger ;Marco Piccirelli ;Sebastian Winklhofer
    • Imaging Science in Dentistry
    • /
    • v.53 no.2
    • /
    • pp.161-168
    • /
    • 2023
  • Nasopalatine duct cysts (NPDCs), the most common non-odontogenic cysts of the maxilla, are often incidental findings on diagnostic imaging. When symptomatic, they usually present as a painless swelling with a possible fistula. Conventional radiography shows a round-to-ovoid or heart-shaped radiolucency between the roots of the central maxillary incisors. While the radiographic features of NPDCs in X-ray-based modalities have been well described, their magnetic resonance imaging (MRI) features have rarely been reported. Developments in dental MRI in recent years and the introduction of various dental MRI protocols now allow a wide range of applications in dental medicine, and MRI is becoming an important tool for the detection and diagnosis of incidental and non-incidental dentomaxillofacial cysts. This report presents and discusses the characteristics of 2 NPDC cases visualized on MRI using both conventional and newly implemented dental-specific MRI protocols with a novel 15-channel mandibular coil, demonstrating the use of these protocols for radiation-free maxillofacial diagnosis.

Machine Learning-Based Reversible Chaotic Masking Method for User Privacy Protection in CCTV Environment

  • Jimin Ha;Jungho Kang;Jong Hyuk Park
    • Journal of Information Processing Systems
    • /
    • v.19 no.6
    • /
    • pp.767-777
    • /
    • 2023
  • In modern society, user privacy is emerging as an important issue as closed-circuit television (CCTV) systems increase rapidly in various public and private spaces. If CCTV cameras monitor sensitive areas or personal spaces, they can infringe on personal privacy: behavior patterns, sensitive information, place of residence, and so on can be exposed, and if the image data collected from CCTV is not properly protected, it is at risk of leakage by hackers or unauthorized parties. This paper presents an innovative machine learning-based reversible chaotic masking method for user privacy protection in the CCTV environment. The proposed method protects an individual's identity within CCTV images while maintaining the usefulness of the data for surveillance and analysis, using a two-step process. First, machine learning models are trained to accurately detect and locate human subjects within the CCTV frame, leveraging state-of-the-art object detection techniques to identify individuals accurately and robustly. When an individual is detected, reversible chaotic masking is applied. This masking technique uses chaotic maps to create complex patterns that hide facial features and identifiable characteristics. Above all, the generated mask can be reversibly applied and removed, allowing authorized users to recover the original unmasked image.
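Reversible chaotic masking of the kind described can be sketched with a logistic map driving an XOR keystream: XOR is its own inverse, so anyone holding the key (the map's initial value and parameter) can restore the original pixels exactly. This is an illustrative stand-in, not the paper's actual map or key scheme.

```python
def logistic_keystream(n, x0=0.713, r=3.9999):
    """Generate n pseudo-random bytes from logistic-map iterates
    x -> r*x*(1-x); (x0, r) act as the secret key."""
    x, stream = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        stream.append(int(x * 255.999) & 0xFF)
    return stream

def chaotic_mask(pixels, x0=0.713):
    """XOR each pixel with the keystream; applying the same function
    twice with the same key restores the original region."""
    return [p ^ k for p, k in zip(pixels, logistic_keystream(len(pixels), x0))]

# Hypothetical grayscale pixels of a detected face region
face_region = [120, 64, 200, 33, 90, 17]
masked = chaotic_mask(face_region)
restored = chaotic_mask(masked)
```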

Caricaturing using Local Warping and Edge Detection (로컬 와핑 및 윤곽선 추출을 이용한 캐리커처 제작)

  • Choi, Sung-Jin;Bae, Hyeon;Kim, Sung-Shin;Woo, Kwang-Bang
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.4
    • /
    • pp.403-408
    • /
    • 2003
  • A caricature is a representation, especially pictorial or literary, in which the subject's distinctive features or peculiarities are deliberately exaggerated to produce a comic or grotesque effect. In other words, a caricature is a rough sketch (dessin) made by detecting features of a human face and exaggerating or warping them. Many methods have been developed for making a caricature image from a human face using a computer. In this paper, we propose a new caricaturing system. The system takes a real-time or supplied image as input, processes it in four steps, and finally creates a caricatured image. The first step detects a face in the input image. The second step extracts specific coordinate values as facial geometric information. The third step deforms the face image using a local warping method and the coordinate values acquired in the second step. In the fourth step, the system transforms the deformed image into an improved edge image using a fuzzy Sobel method and then creates the final caricatured image. The resulting system is simpler than existing systems in the way it creates a caricatured image, and it does not require complex algorithms combining many image-processing methods such as image recognition, transformation, and edge detection.
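The edge-extraction step in the fourth stage can be illustrated with a plain Sobel operator; the paper's "fuzzy Sobel" adds a fuzzy thresholding stage on top, which is omitted in this sketch.

```python
import math

# 3x3 Sobel kernels for horizontal and vertical gradients
KX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
KY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    """Gradient magnitude of a 2D grayscale image (list of lists);
    border pixels are left at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(KX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(KY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# Toy image: dark left columns, bright right columns -> one vertical edge
img = [[0, 0, 255, 255, 255] for _ in range(5)]
edges = sobel_magnitude(img)
```

The response is large only where intensity changes (the boundary between the dark and bright columns) and zero in the flat regions, which is what turns the warped face image into a line-drawing-like edge image.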

Affective Priming Effect on Cognitive Processes Reflected by Event-related Potentials (ERP로 확인되는 인지정보 처리에 대한 정서 점화효과)

  • Kim, Choong-Myung
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.5
    • /
    • pp.242-250
    • /
    • 2016
  • This study investigated whether a Stroop-related cognitive task is affected by the preceding affective valence, as indexed by the matchedness of response times (RTs), and whether facial recognition is indexed by specific event-related potential (ERP) signatures in normal persons, as it is in patients suffering from affective disorders. ERPs primed by subliminal (30 ms) facial stimuli were recorded for four pairings of affect (positive or negative) and cognitive task (matched or mismatched) to obtain ERP effects (N2 and P300) in terms of amplitude and peak-latency variations. Behavioral analysis of RTs confirmed that subliminal affective stimuli primed target processing in all affective conditions except the neutral stimulus. The ERPs in the negative-affect, mismatched condition reached significance for the emotional-face-specific N2 component, showing larger amplitude and delayed peak latency compared with the positive counterpart. Furthermore, the same condition showed more positive amplitude and earlier peak latency of the P300 effect, which denotes cognitive closure, than the corresponding positive-affect condition. These results suggest that a negative affective stimulus at the subliminal level is automatically inhibited, and that this effect accelerates detection of the affect and facilitates the response by allowing adequate reallocation of attentional resources. The functional and cognitive significance of these findings is discussed in terms of subliminal effects and affect-related recognition modulating cognitive tasks.

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.2
    • /
    • pp.271-279
    • /
    • 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotions from physiological signals; such recognition is important for applying emotion detection in human-computer interaction systems. Method: 122 college students participated in this experiment. Three different emotional stimuli were presented to the participants, and physiological signals, i.e., EDA (electrodermal activity), SKT (skin temperature), PPG (photoplethysmogram), and ECG (electrocardiogram), were measured for 1 minute as a baseline and for 1-1.5 minutes during the emotional state. The obtained signals were analyzed for 30 seconds of the baseline and of the emotional state, and 27 features were extracted from them. Emotion classification was performed by discriminant function analysis (DFA; SPSS 15.0) using the difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during the emotional states differed significantly from the baseline, and the accuracy rate of emotion classification was 84.7%. Conclusion: Our study showed that emotions can be classified from various physiological signals. However, future studies are needed that obtain additional signals from other modalities, such as facial expression, face temperature, or voice, to improve the classification rate, and that examine the stability and reliability of this result against the accuracy of emotion classification using other algorithms. Application: This work gives emotion recognition studies a better chance of recognizing various human emotions from physiological signals and can be applied to human-computer interaction systems for emotion recognition. It can also be useful in developing emotion theory, in profiling emotion-specific physiological responses, and in establishing the basis for emotion recognition systems in human-computer interaction.
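The baseline-subtraction and classification steps can be sketched as follows. Nearest-centroid assignment is used here as a simplified stand-in for the SPSS discriminant function analysis, and the feature values and class means are invented for illustration.

```python
def baseline_correct(state, baseline):
    """Difference values: emotional-state features minus baseline features."""
    return [s - b for s, b in zip(state, baseline)]

def nearest_centroid(sample, centroids):
    """Assign the emotion whose mean difference-feature vector is closest
    (a crude proxy for a trained discriminant function)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda emo: dist2(sample, centroids[emo]))

# Invented class means over (EDA, SKT, heart-rate) difference features
centroids = {
    "boredom":  [0.1, 0.2, -1.0],
    "pain":     [2.0, -0.5, 5.0],
    "surprise": [3.5, 0.0, 12.0],
}
label = nearest_centroid(
    baseline_correct([3.1, 0.4, 11.0], [1.0, 0.5, 1.0]), centroids)
```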

Expression Analysis System of Game Player based on Multi-modal Interface (멀티 모달 인터페이스 기반 플레이어 얼굴 표정 분석 시스템 개발)

  • Jung, Jang-Young;Kim, Young-Bin;Lee, Sang-Hyeok;Kang, Shin-Jin
    • Journal of Korea Game Society
    • /
    • v.16 no.2
    • /
    • pp.7-16
    • /
    • 2016
  • In this paper, we propose a method for effectively detecting specific behaviors of game players. The proposed method detects outlying behavior based on characteristics captured non-invasively in a general game environment, augmented with keystroke features based on repeated patterns. Cameras were used to analyze observed data such as facial expressions and player movements, and multimodal data from the players was used to analyze high-dimensional game-player data for the detection of repeated behavior patterns. A support vector machine was used to efficiently detect outlying behaviors. We verified the effectiveness of the proposed method on games from several genres: the recall rate for outlying behaviors pre-identified by industry experts was approximately 70%, and repeated behavior patterns could also be analyzed. The proposed method can also be used for feedback on, and quantification of, the analysis of various interactive content provided in PC environments.
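The repeated keystroke-pattern feature can be illustrated with a simple sliding n-gram counter over the key-event log; the window size and repetition threshold are arbitrary choices, and the paper's SVM stage is not reproduced here.

```python
from collections import Counter

def repeated_patterns(keys, n=3, min_count=3):
    """Count sliding n-grams of key events and keep those repeated at
    least min_count times, as a crude repeated-behavior signal."""
    grams = Counter(tuple(keys[i:i + n]) for i in range(len(keys) - n + 1))
    return {g: c for g, c in grams.items() if c >= min_count}

# Hypothetical key-event log: "qwe" pressed three times in a row
log = list("qweqweqweasdzxc")
hits = repeated_patterns(log)
```

Counts like these (n-gram frequency, inter-key timing, etc.) would then be concatenated with the camera-derived features before being fed to the classifier.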

Detection Method of Human Face, Facial Components and Rotation Angle Using Color Value and Partial Template (컬러정보와 부분 템플릿을 이용한 얼굴영역, 요소 및 회전각 검출)

  • Lee, Mi-Ae;Park, Ki-Soo
    • The KIPS Transactions:PartB
    • /
    • v.10B no.4
    • /
    • pp.465-472
    • /
    • 2003
  • For effective pre-processing of a face input image, it is necessary to detect the facial components, calculate the face area, and estimate the rotation angle of the face. The proposed method produces robust results under conditions such as differing levels of illumination, variable face sizes, face rotation angles, and background colors similar to the skin color of the face. The first step detects the candidate face area using both adaptive skin color information in a band-widened HSV color space converted from RGB, and skin color information from a histogram. Using these results, a lip area is detected within the candidate face area. After estimating the rotation angle of the lip area about the X axis, the method determines the face shape based on face information. After detecting the eyes in the face area by matching a partial template made from both eyes, the Y-axis rotation angle is estimated by calculating the eye locations in three-dimensional space with reference to the face area. Experiments on various face images verified the effectiveness of the proposed algorithm.
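The skin-color step can be sketched with a plain RGB-to-HSV conversion followed by hue/saturation/value thresholds. The threshold values below are common illustrative choices, not the adaptive band-widened ranges the paper derives from histograms.

```python
import colorsys

def is_skin(r, g, b):
    """Classify an RGB pixel as skin-colored via HSV thresholds.
    Thresholds are illustrative assumptions, not the paper's ranges."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_deg = h * 360.0
    return (hue_deg <= 50.0 or hue_deg >= 340.0) \
        and 0.1 <= s <= 0.7 and v >= 0.3

def skin_mask(pixels):
    """Binary mask over an iterable of (r, g, b) pixels; connected
    regions of 1s would form the candidate face area."""
    return [1 if is_skin(*p) else 0 for p in pixels]
```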