• Title/Summary/Keyword: FACE method


A Fast Method for Face Detection Based on PCA and SVM (PCA와 SVM에 기반하는 빠른 얼굴탐지 방법)

  • Xia, Chun-Lei;Shin, Hyeon-Gab;Park, Myeong-Chul;Ha, Seok-Wun
    • Journal of the Korea Institute of Information and Communication Engineering, v.11 no.6, pp.1129-1135, 2007
  • Human face detection plays an important role in computer vision. It has many applications, such as face recognition, video surveillance, human-computer interfaces, face image database management, and image database querying. In this paper, a fast face detection approach using Principal Component Analysis (PCA) and Support Vector Machines (SVM) is proposed, based on previous studies of face detection techniques. In the proposed detection system, potential face areas are first filtered using a statistical feature generated by analyzing the local histogram distribution; the detection process is sped up by eliminating most of the non-face area in this step. In the next step, PCA feature vectors are generated, and an SVM classifier decides whether faces are present in the test image. Finally, the detection results are stored and output on the test image. The test images in this paper are from the CMU face database, and the face and non-face samples are selected from the MIT data set. The experimental results indicate that the proposed method performs well for face detection.
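
The PCA-plus-SVM pipeline in this abstract follows a standard pattern: project candidate windows onto principal components, then classify with an SVM. Below is a minimal sketch of that pattern using scikit-learn; the window size, component count, kernel, and placeholder training arrays are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

# Placeholder training data: flattened 19x19 grayscale windows (assumed size),
# labeled 1 for face and 0 for non-face.
X_train = np.random.rand(200, 19 * 19)
y_train = np.array([1] * 100 + [0] * 100)

pca = PCA(n_components=30)              # PCA feature vectors for each window
X_pca = pca.fit_transform(X_train)

clf = SVC(kernel="rbf", C=1.0)          # SVM classifier on the PCA features
clf.fit(X_pca, y_train)

def classify_window(window):
    """Return 1 if a flattened 19x19 window is classified as a face, else 0."""
    feat = pca.transform(window.reshape(1, -1))
    return int(clf.predict(feat)[0])
```

In the full system described above, this classifier would only be applied to windows that survive the histogram-based pre-filtering step.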

A Study on the Construction of Non-face-to-face Lecture of KAOMPT: Delphi Survey Research to Post COVID-19 Untact Era (대한정형도수물리치료학회 비대면 강의 체계 구축 연구: 포스트 코로나19 대비 델파이 기법 분석 적용 사례)

  • Kim, Jin-young;Shin, Young-il;Yang, Sung-hwa
    • The Journal of Korean Academy of Orthopedic Manual Physical Therapy, v.27 no.1, pp.1-11, 2021
  • Background: The purpose of this study is to identify the elements needed to construct the Korean Academy of Orthopedic Manipulative Physical Therapy's (KAOMPT's) non-face-to-face lecture system using the Delphi method. Methods: The Delphi method was applied to 50 expert panel members from the Central Committee and the Provincial Branches of the KAOMPT. The Delphi survey was conducted in two rounds: the first round collected opinions on 40 questions across 12 topics, and the second round was reduced to 25 questions across 4 topics. From the survey results, the content validity ratio (CVR), consensus, and convergence were measured. Referring to the number of expert panel members and previous studies, the criteria were set at a CVR of at least 2.29, a consensus of at least .75, and a convergence of 0 to .5. Result: In the first Delphi round, 20 of the 40 items showed a high content validity ratio, and 10 items reached double agreement. In the second round, 13 of the 25 items had a content validity ratio higher than 2.29, and 5 items reached double agreement. Conclusion: This study derived items on the roles of the central and municipal councils, lecture support and lecture room construction, the operation of non-face-to-face regular courses and special lectures, and personnel for establishing a non-face-to-face lecture system. Based on this content, it is expected to help establish a non-face-to-face lecture system in 2021 through a pilot non-face-to-face lecture to be implemented in the future.

A Facial Feature Area Extraction Method for Improving Face Recognition Rate in Camera Image (일반 카메라 영상에서의 얼굴 인식률 향상을 위한 얼굴 특징 영역 추출 방법)

  • Kim, Seong-Hoon;Han, Gi-Tae
    • KIPS Transactions on Software and Data Engineering, v.5 no.5, pp.251-260, 2016
  • Face recognition is a technology that extracts features from a facial image, learns the features through various algorithms, and recognizes a person by comparing the learned data with the features of a new facial image. In particular, various processing methods are required to improve the face recognition rate. In the training stage of face recognition, features must be extracted from a facial image. As the existing method of extracting facial features, linear discriminant analysis (LDA) is mainly used. The LDA method represents a facial image as points in a high-dimensional space and extracts facial features that distinguish a person by analyzing the class information and the distribution of the points. Because the position of a point in the high-dimensional space is determined by the pixel values of the facial image, incorrect facial features can be extracted by LDA if unnecessary areas or frequently changing areas are included in the facial image. Especially when a camera image is used for face recognition, the size of the face varies with the distance between the face and the camera, which degrades the face recognition rate. To solve this problem, this paper detects the facial area in a camera image, removes unnecessary areas using the facial feature area calculated via a Gabor filter, and normalizes the size of the facial area. Facial features are extracted from the normalized facial image through LDA and learned by an artificial neural network for face recognition. As a result, the face recognition rate was improved by approximately 13% compared to the existing face recognition method that includes unnecessary areas.
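
The two technical ingredients named in this abstract, a Gabor filter for locating the facial feature area and LDA for feature extraction, can be sketched as follows with OpenCV and scikit-learn. The kernel parameters, the 100x100 normalized size, and the placeholder training arrays are assumptions for illustration, not the paper's settings.

```python
import cv2
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def gabor_response(gray_face):
    """Filter a grayscale face crop with one Gabor kernel to emphasize feature areas."""
    kernel = cv2.getGaborKernel((21, 21), 4.0, 0.0, 10.0, 0.5)
    return cv2.filter2D(gray_face, cv2.CV_32F, kernel)

def normalize_face(gray_face, size=(100, 100)):
    """Resize the cropped facial feature area to a fixed size before LDA."""
    return cv2.resize(gray_face, size)

# Placeholder data: 60 normalized faces (flattened) for 6 people, 10 images each.
faces = np.random.rand(60, 100 * 100)
labels = np.repeat(np.arange(6), 10)

lda = LinearDiscriminantAnalysis()
features = lda.fit_transform(faces, labels)   # LDA feature vectors for training
```

In practice the Gabor response would be used to decide which parts of the detected face to keep before `normalize_face` and LDA are applied.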

Far Distance Face Detection from The Interest Areas Expansion based on User Eye-tracking Information (시선 응시 점 기반의 관심영역 확장을 통한 원 거리 얼굴 검출)

  • Park, Heesun;Hong, Jangpyo;Kim, Sangyeol;Jang, Young-Min;Kim, Cheol-Su;Lee, Minho
    • Journal of the Institute of Electronics and Information Engineers, v.49 no.9, pp.113-127, 2012
  • Many face detection methods based on image processing have been proposed. The most widely used is the AdaBoost method proposed by Viola and Jones, which uses Haar-like features for learning, so the detection performance depends on the learned images. It performs well for detecting faces within a certain distance range, but if the subject is far from the camera, the face image becomes so small that it may not be detected with the pre-learned Haar-like features. In this paper, we propose a far-distance face detection method that combines the Viola-Jones AdaBoost detector with a saliency map and the user's attention information. The saliency map is used to select candidate face regions in the input image, and faces are finally detected among the candidate regions using AdaBoost with the Haar-like features learned in advance. The user's eye-tracking information is used to select the regions of interest. When a subject is so far from the camera that the face is difficult to detect, we expand the small eye-gaze region using linear interpolation, reuse it as the input image, and thereby increase the face detection performance. We confirmed that the proposed model outperforms the conventional AdaBoost in terms of face detection performance and computational time.
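
The core trick described above, enlarging a small gaze-centered region with linear interpolation before running the pre-trained Viola-Jones detector, can be sketched with OpenCV as below. The ROI radius, scale factor, and detector parameters are illustrative assumptions.

```python
import cv2

# Pre-trained Haar cascade shipped with OpenCV (stand-in for the learned detector).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_in_gaze_region(frame_gray, gaze_xy, roi=60, scale=4):
    """Crop a window around the gaze point, enlarge it, and run face detection.

    Returned boxes are in the coordinates of the enlarged patch.
    """
    x, y = gaze_xy
    patch = frame_gray[max(0, y - roi): y + roi, max(0, x - roi): x + roi]
    # Linear interpolation makes a far, small face large enough for the
    # pre-learned Haar-like features to match.
    enlarged = cv2.resize(patch, None, fx=scale, fy=scale,
                          interpolation=cv2.INTER_LINEAR)
    return cascade.detectMultiScale(enlarged, scaleFactor=1.1, minNeighbors=5)
```

In the paper's pipeline, candidate regions also come from a saliency map, which this sketch omits.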

Exploring the effect of Learning Motivation type on Immersion According to the Non-Face-To-Face Teaching Method in the Major Classes for Preschool Teachers at Christian Universities (기독교 대학의 예비유아교사 전공수업에서 비대면수업 방식에 따라 학습동기 유형이 몰입에 미치는 영향 탐색)

  • Lee, Eunchul
    • Journal of Christian Education in Korea, v.69, pp.139-162, 2022
  • This study verified the effect of learning motivation on immersion according to the non-face-to-face class method. For this purpose, 101 college students majoring in early childhood education were selected as research subjects. The average age of the subjects was 22.6 years; 51 students took non-real-time non-face-to-face classes and 50 students took real-time non-face-to-face classes. The study measured the level of immersion and the type of learning motivation after the non-face-to-face classes ended. The measured data were analyzed using descriptive statistics and multiple regression analysis. In the results for all students, the performance approach goal had the greatest influence on immersion, followed by mastery goal orientation; performance avoidance orientation had no effect. For students in non-real-time non-face-to-face classes, performance approach goal orientation affected immersion, and for students in real-time non-face-to-face classes, mastery goal orientation did. The implications obtained from these results are as follows. First, non-real-time non-face-to-face classes should cover basic knowledge and skills so that there are no mistakes and failures. Second, non-real-time non-face-to-face classes should assign tasks of appropriate difficulty with a deadline. Third, real-time non-face-to-face classes should lower the fear of mistakes and failures.

Face Detection based on Pupil Color Distribution Maps with the Frequency under the Illumination Variance (빈도수를 고려한 눈동자색 분포맵에 기반한 조명 변화에 강건한 얼굴 검출 방법)

  • Cho, Han-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.9 no.5, pp.225-232, 2009
  • In this paper, a new face detection method based on pupil color distribution maps that consider frequency under illumination variance is proposed. Face-like regions are first extracted by applying skin color distribution maps to a color image and are then reduced using the standard deviation of the chrominance components. To search for eye candidates effectively, the proposed method extracts eye-like regions from the face-like regions using pupil color distribution maps. Furthermore, by segmenting the eye-like regions with a lighting compensation technique and a segmentation algorithm, the proposed method can detect eyes well even when face regions become dark-toned due to varying illumination. Eye candidates are then detected by means of a template matching method. Finally, face regions are detected using the evaluation values of two eye candidates and a mouth. Experimental results show that the proposed method achieves high performance.
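
Two of the steps mentioned in this abstract, the color-distribution filtering of face-like regions and the template matching for eye candidates, can be sketched as below. The YCrCb thresholds, the eye template, and the match threshold are assumptions for illustration; the paper's actual distribution maps also account for frequency and pupil color.

```python
import cv2
import numpy as np

def skin_mask(bgr_image):
    """Binary mask of skin-colored (face-like) pixels in YCrCb space."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    return cv2.inRange(ycrcb, lower, upper)

def eye_candidates(gray_face, eye_template, threshold=0.7):
    """Eye candidate locations found by normalized cross-correlation."""
    result = cv2.matchTemplate(gray_face, eye_template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(result >= threshold)
    return list(zip(xs, ys))
```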

Face recognition method using embedded data in Principal Component Analysis (주성분분석 방법에서의 임베디드 데이터를 이용한 얼굴인식 방법)

  • Park Chang-Han;Namkung Jae-Chan
    • Journal of the Institute of Electronics Engineers of Korea SP, v.42 no.1, pp.17-23, 2005
  • In this paper, we propose a face recognition method using embedded data extracted from segmented super states, i.e., specific regions of the face such as the hair, forehead, eyes, ears, nose, mouth, and chin. The proposed method defines the super states as specific areas within a face normalized to 92×112, and the embedded data extracted as internal factors from the segmented super states are used for face recognition with the PCA algorithm. Because the embedded data are learned instead of the whole original image, the proposed method requires less specific data than the full 92×112 image. The method showed an average face recognition rate of 99.05% on 92×112 images: 99.05% in step 1, 98.93% in step 2, 98.54% in step 3, and 97.85% in step 4. The experiments therefore show that the proposed method improves processing speed as well as reducing the information needed from the existing face image.
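
Read past the rough translation, the method amounts to cropping fixed sub-regions ("super states") from a 92×112 normalized face, stacking their pixels into a smaller embedded-data vector, and running PCA on that vector instead of the whole image. A rough sketch follows; the band coordinates, component count, and placeholder gallery are illustrative guesses, not the paper's definitions.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical (row_start, row_end) bands for forehead, eyes, nose, and
# mouth/chin within a 112-row (92x112) normalized face; hair and background
# rows are deliberately dropped so the embedded vector is smaller than the image.
REGIONS = [(15, 35), (35, 55), (55, 75), (75, 100)]

def embedded_vector(face_112x92):
    """Concatenate pixel values from each sub-region into one feature vector."""
    return np.concatenate([face_112x92[r0:r1, :].ravel() for r0, r1 in REGIONS])

# Placeholder gallery: 40 grayscale faces already normalized to 92x112.
faces = [np.random.rand(112, 92) for _ in range(40)]
X = np.stack([embedded_vector(f) for f in faces])

pca = PCA(n_components=20)
projections = pca.fit_transform(X)   # recognition then compares projections,
                                     # e.g. by nearest neighbor in PCA space
```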

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information, v.21 no.10, pp.11-19, 2016
  • As displays become larger and more varied in form, previous gaze-tracking methods no longer apply directly; placing the gaze-tracking camera above the display can solve the problem of display size or height. However, this setup cannot use the corneal reflection of infrared illumination that previous methods rely on. In this paper, we propose a pupil detection method that is robust to eye occlusion and a method that simply calculates the gaze position from the inner eye corner, the pupil center, and the face pose information. In the proposed method, the camera switches between wide-angle and narrow-angle modes according to the person's position when capturing frames for gaze tracking. If a face is detected in the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode after calculating the face position; the frame captured in narrow-angle mode contains the gaze direction information of a person at a long distance. Calculating the gaze direction consists of a face pose estimation step and a gaze direction calculation step. The face pose is estimated by mapping the feature points of the detected face to a 3D model. To calculate the gaze direction, an ellipse is first fitted by splitting the iris edge information of the pupil, and if the pupil is occluded, its position is estimated with a deformable template. Then, using the pupil center, the inner eye corner, and the face pose information, the gaze position on the display is calculated. The experiments demonstrate, over different distances, that the proposed gaze-tracking algorithm overcomes the constraints imposed by the form of the display and effectively calculates the gaze direction of a person at a long distance using a single camera.
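
The pupil-localization step described above (ellipse fitting on the iris edge, with robustness to eyelid occlusion) can be sketched with OpenCV as below. The threshold value and the simple largest-blob heuristic are assumptions; the paper additionally falls back to a deformable template when the pupil is occluded.

```python
import cv2
import numpy as np

def pupil_center(eye_gray, thresh=50):
    """Estimate the pupil center in a grayscale eye patch, or None if not found."""
    _, binary = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                      # cv2.fitEllipse needs >= 5 points
        return None
    (cx, cy), _, _ = cv2.fitEllipse(largest)  # the ellipse center tolerates a
    return int(cx), int(cy)                   # partly occluded iris edge
```

The gaze position on the display is then computed from this pupil center, the inner eye corner, and the estimated face pose, as the abstract describes.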

Face Detection Using Shapes and Colors in Various Backgrounds

  • Lee, Chang-Hyun;Lee, Hyun-Ji;Lee, Seung-Hyun;Oh, Joon-Taek;Park, Seung-Bo
    • Journal of the Korea Society of Computer and Information, v.26 no.7, pp.19-27, 2021
  • In this paper, we propose a method for detecting characters in images and detecting their facial regions, which consists of two tasks. First, we separate two different characters to detect the position of each character's face in the frame. For fast detection, we use You Only Look Once (YOLO), which finds faces in the image in real time, to extract the location of each face and mark it with an object detection box. Second, we present three image processing methods for detecting the accurate face area based on the object detection boxes. Each method uses HSV values extracted from the region estimated by the detection shape to detect the face region of the characters, and the size and shape of the detection shape are varied to compare the accuracy of each method. Each face detection method is compared and analyzed against comparison data and image processing data for reliability verification. As a result, we achieved the highest accuracy of 87% when using the split rectangular method among the circular, rectangular, and split rectangular methods.
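
The second stage described above, refining a detector's bounding box with HSV color information, can be sketched as below. The HSV skin ranges, the (x, y, w, h) box format, and the morphological cleanup are illustrative assumptions; the paper compares circular, rectangular, and split-rectangular detection shapes rather than a fixed mask.

```python
import cv2
import numpy as np

def refine_face_region(bgr_frame, box):
    """Return a binary mask of skin-colored pixels inside a detection box."""
    x, y, w, h = box
    roi = bgr_frame[y:y + h, x:x + w]
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 30, 60], dtype=np.uint8)    # assumed skin-tone range
    upper = np.array([20, 150, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Remove small speckles so the remaining mask approximates the face area.
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```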

A Face Recognition using the Hidden Markov Model and Karhuman Loevs Transform (Hidden Markov Model과 Karhuman Loevs Transform를 이용한 얼굴인식)

  • Kim, Do-Hyun;Hwang, Suen-Ki;Kang, Yong-Seok;Kim, Tae-Woo;Kim, Moon-Hwan;Bae, Cheol-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.4 no.1, pp.3-8, 2011
  • This paper describes a Hidden Markov Model (HMM)-based framework for face recognition and face detection. The observation vectors used to characterize the states of the HMM are obtained from the coefficients of the Karhunen-Loève Transform (KLT). The face recognition method presented in this paper significantly reduces the computational complexity of previous HMM-based face recognition systems while slightly improving the recognition rate. In addition, the suggested method is more effective than existing ones for face extraction in terms of accuracy and other measures, even under complex changes to the surroundings such as lighting.
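
The HMM-plus-KLT framework described above treats a face as a top-to-bottom sequence of image strips whose KLT (PCA) coefficients form the observation vectors. A rough sketch using scikit-learn and the third-party hmmlearn library is shown below; the strip height, overlap, 5-state topology, and the libraries themselves are illustrative choices, not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from hmmlearn import hmm

def strip_sequence(face_gray, strip_h=10, step=5):
    """Overlapping horizontal strips, top to bottom, each flattened to a row."""
    rows = face_gray.shape[0]
    return np.array([face_gray[r:r + strip_h, :].ravel()
                     for r in range(0, rows - strip_h + 1, step)])

# Placeholder training images for one person (grayscale, 112x92).
train_faces = [np.random.rand(112, 92) for _ in range(5)]
sequences = [strip_sequence(f) for f in train_faces]

pca = PCA(n_components=10)                 # KLT basis learned from all strips
pca.fit(np.vstack(sequences))
obs = [pca.transform(s) for s in sequences]

# One HMM per enrolled person; recognition picks the best-scoring model.
model = hmm.GaussianHMM(n_components=5, covariance_type="diag", n_iter=20)
model.fit(np.vstack(obs), lengths=[len(o) for o in obs])
score = model.score(pca.transform(strip_sequence(train_faces[0])))
```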