• Title/Summary/Keyword: Facial Image Processing

Development of Dental Light Robotic System using Image Processing Technology (영상처리 기술을 이용한 치과용 로봇 조명장치의 개발)

  • Moon, Hyun-Il; Kim, Myoung-Nam; Lee, Kyu-Bok
    • Journal of Dental Rehabilitation and Applied Science, v.26 no.3, pp.285-296, 2010
  • Robot-assisted dental lighting equipment based on image-processing technology was developed and its accuracy was measured. The system detects the patient's face with a camera and illuminates it with a robot-assisted light, and is composed of a motion-control component, a light-control component, and an image-processing component. Images were captured with the camera, and frames showing motion change were extracted and processed with the AdaBoost algorithm. In detection experiments on patients' oral cavities, facial recognition accuracy was highest for the frontal view, and the light robot arm was controlled stably.
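
A minimal sketch of the AdaBoost-based frontal face detection step, using OpenCV's pretrained Haar cascade. The robot-arm and lighting control described in the paper are not reproduced; the camera index and cascade file are assumptions about a typical OpenCV setup, not details from the paper.

```python
# Sketch: AdaBoost-style frontal face detection with an OpenCV Haar cascade.
import cv2

# cv2.data.haarcascades points to the cascade files bundled with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)            # camera index 0 is an assumption
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # In the paper's system, the detected face box would drive the
        # light-positioning controller; here we only report it.
        print("face at", x, y, w, h)
cap.release()
```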

People Counting System by Facial Age Group (얼굴 나이 그룹별 피플 카운팅 시스템)

  • Ko, Ginam; Lee, YongSub; Moon, Nammee
    • Journal of the Institute of Electronics and Information Engineers, v.51 no.2, pp.69-75, 2014
  • Existing people counting systems that use a single overhead-mounted camera have limited object recognition and counting accuracy in varied environments. These limitations stem from overlapping, occlusion, and external factors such as oversized belongings and dramatic lighting changes. This paper therefore proposes a people counting system by facial age group that uses two depth cameras, at overhead and frontal viewpoints, to improve object recognition accuracy and make counting robust to external factors. The system counts pedestrians through five processes: overhead image processing, frontal image processing, identical-object recognition, facial age group classification, and in-coming/out-going counting. It was developed in C++ with OpenCV and the Kinect SDK, and a target group of 40 people (10 per age group) was set up to evaluate people counting and facial age group classification performance. Experimental results indicated approximately 98% accuracy for people counting and 74.23% accuracy for facial age group classification.
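
As a trivial illustration of the final bookkeeping step only, the sketch below tallies in-coming/out-going crossings per age group. The overhead/frontal matching and the age classifier are assumed to exist elsewhere, and the four group labels are hypothetical placeholders (the abstract implies four groups of ten people each).

```python
# Sketch of the in-coming/out-going tally per facial age group.
AGE_GROUPS = ("child", "teen", "adult", "senior")   # hypothetical labels

counts = {group: {"in": 0, "out": 0} for group in AGE_GROUPS}

def record_crossing(age_group: str, direction: str) -> None:
    """Update the tally when a tracked pedestrian crosses the counting line."""
    counts[age_group][direction] += 1

# Example: one adult entering and one teen leaving.
record_crossing("adult", "in")
record_crossing("teen", "out")
print(counts)
```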

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B, v.9B no.5, pp.563-570, 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. Facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted features are used to recover the 3D shape and global motion of the object with a paraperspective camera model and SVD (Singular Value Decomposition) factorization. A 3D synthetic object was designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
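
A minimal sketch of the rank-3 SVD factorization idea that paraperspective shape/motion recovery builds on (Tomasi-Kanade style). The paraperspective-specific normalization and metric constraints used in the paper are not reproduced, and the measurement matrix here is synthetic.

```python
# Sketch: rank-3 factorization of a 2F x P matrix of 2D feature tracks into
# motion (2F x 3) and shape (3 x P), up to an affine ambiguity.
import numpy as np

rng = np.random.default_rng(0)
F, P = 10, 23                          # 23 features, as in the MPEG-4 FDP subset
S_true = rng.normal(size=(3, P))       # synthetic 3D shape
M_true = rng.normal(size=(2 * F, 3))   # synthetic per-frame projection rows
W = M_true @ S_true                    # stacked 2D measurements

W_centered = W - W.mean(axis=1, keepdims=True)      # remove per-row translation
U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
sqrt_s3 = np.diag(np.sqrt(s[:3]))
M_hat = U[:, :3] @ sqrt_s3             # recovered motion (affine ambiguity remains)
S_hat = sqrt_s3 @ Vt[:3, :]            # recovered shape  (same ambiguity)

print("rank-3 reconstruction error:",
      np.linalg.norm(W_centered - M_hat @ S_hat))
```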

Light 3D Modeling with mobile equipment (모바일 카메라를 이용한 경량 3D 모델링)

  • Ju, Seunghwan; Seo, Heesuk; Han, Sunghyu
    • Journal of Korea Society of Digital Industry and Information Management, v.12 no.4, pp.107-114, 2016
  • Recently, 3D-related technology has become a hot topic in IT; technologies such as 3DTV, Kinect, and 3D printers are becoming more and more popular. In line with this trend, the goal of this study is to make 3D technology easily accessible to the general public. We developed a web-based application that builds a 3D model from frontal and side facial photographs taken with a mobile phone. Two photographs (front and side) are captured with the mobile camera, and ASM (Active Shape Model) fitting and skin binarization are used to extract facial landmarks from the front photograph and facial height (for example, of the nose) from the side photograph. Three-dimensional coordinates are generated from the landmarks extracted from the front photograph and the heights obtained from the side photograph. Using these coordinates as control points on a standard face model, RBF (Radial Basis Function) interpolation deforms the standard model into the subject's face. To cover the deformed face model, the control points found in the front photograph are mapped to texture-map coordinates to generate a texture image. Finally, the deformed face model is covered with the texture image, and the 3D model is displayed to the user.
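
A minimal sketch of RBF-based deformation of the kind the abstract describes: control points on a standard face model are moved to the measured landmark positions, and every other vertex is displaced by a weighted sum of radial basis functions. The Gaussian kernel, the kernel width, and the tiny synthetic arrays are all assumptions for illustration.

```python
# Sketch: deform a vertex set so that source control points map to targets
# via Gaussian RBF interpolation.
import numpy as np

def rbf_deform(vertices, controls_src, controls_dst, eps=1.0):
    """Warp `vertices` so that `controls_src` map onto `controls_dst`."""
    def phi(r):                                    # Gaussian kernel
        return np.exp(-(eps * r) ** 2)

    # Kernel matrix over the control points, then solve for per-axis weights.
    d = np.linalg.norm(controls_src[:, None, :] - controls_src[None, :, :], axis=-1)
    weights = np.linalg.solve(phi(d), controls_dst - controls_src)

    # Apply the learned displacement field to every vertex.
    d_v = np.linalg.norm(vertices[:, None, :] - controls_src[None, :, :], axis=-1)
    return vertices + phi(d_v) @ weights

verts = np.random.rand(100, 3)                     # stand-in for the model mesh
src = np.random.rand(10, 3)                        # standard-model control points
dst = src + 0.05 * np.random.rand(10, 3)           # measured landmark positions
print(rbf_deform(verts, src, dst).shape)
```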

Biometric verified authentication of Automatic Teller Machine (ATM)

  • Jayasri Kotti
    • Advances in environmental research, v.12 no.2, pp.113-122, 2023
  • Biometric authentication has become an essential part of modern security systems, especially in financial institutions such as banks. A face recognition-based ATM is a biometric authentication system that uses facial recognition technology to verify the identity of bank account holders during ATM transactions. This technology offers a secure and convenient alternative to traditional ATM transactions that rely on PIN numbers for verification. The proposed system captures the user's picture and compares it with the image stored in the bank's database to authenticate the transaction. The technology also offers additional benefits, such as reducing the risk of fraud and theft and speeding up the transaction process. However, privacy and data security concerns remain, and it is important for the banking sector to implement solid security measures to protect customers' personal information. The proposed system consists of two stages: the first captures the user's facial image with a camera and performs pre-processing, including face detection and alignment; in the second, machine learning algorithms compare the pre-processed image with the stored image in the database. The results demonstrate the feasibility and effectiveness of using face recognition for ATM authentication, which can enhance ATM security and reduce the risk of fraud.
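
The abstract does not specify the matching algorithm, so the sketch below shows one common way the second-stage comparison can work: an embedding of the live capture is compared against the stored template by cosine similarity. The embedding model, the 128-dimensional vectors, and the 0.6 threshold are all hypothetical.

```python
# Sketch: verify a live ATM capture against the account holder's stored
# face embedding using cosine similarity and a fixed threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_embedding: np.ndarray,
           stored_embedding: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Approve the transaction only if the two embeddings are close enough."""
    return cosine_similarity(live_embedding, stored_embedding) >= threshold

# Example with random vectors standing in for real embeddings.
rng = np.random.default_rng(1)
print(verify(rng.normal(size=128), rng.normal(size=128)))
```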

Improving the Processing Speed and Robustness of Face Detection for a Psychological Robot Application (심리로봇적용을 위한 얼굴 영역 처리 속도 향상 및 강인한 얼굴 검출 방법)

  • Ryu, Jeong Tak; Yang, Jeen Mo; Choi, Young Sook; Park, Se Hyun
    • Journal of Korea Society of Industrial Information Systems, v.20 no.2, pp.57-63, 2015
  • Compared to other emotion recognition technologies, facial expression recognition has the merits of being non-contact, non-coercive, and convenient. To be applied in a psychological robot, the vision system must quickly and accurately extract the face region as a step preceding facial expression recognition. In this paper, we remove the background from the input image using YCbCr skin-color segmentation and use Haar-like features for robust face detection. Removing the background from the input image improved processing speed and yielded robust face detection.
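
A minimal sketch of the two steps the paper combines: a YCbCr (YCrCb in OpenCV) skin-color mask to suppress the background, followed by Haar-like-feature face detection on the masked image. The Cr/Cb bounds are common textbook values and the input path is a placeholder, not the thresholds or data used in the paper.

```python
# Sketch: skin-color background suppression followed by Haar cascade detection.
import cv2

img = cv2.imread("input.jpg")                      # path is a placeholder
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
masked = cv2.bitwise_and(img, img, mask=skin_mask)  # background mostly removed

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.cvtColor(masked, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("faces found:", len(faces))
```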

Facial Image Recognition Based on Wavelet Transform and Neural Networks (웨이브렛 변환과 신경망 기반 얼굴 인식)

  • 임춘환; 이상훈; 편석범
    • Journal of the Institute of Electronics Engineers of Korea TE, v.37 no.3, pp.104-113, 2000
  • In this study, we propose facial image recognition based on the wavelet transform and a neural network. The algorithm proceeds as follows. First, two gray-level images are captured under constant illumination; after removing noise with a Gaussian filter, a difference image between the background image and the face input image is obtained and processed with erosion and dilation. Second, a mask is made from the dilated image, and the facial region is separated from the background by projecting the mask onto the face input image. A square characteristic area containing the eyes, nose, mouth, eyebrows, and cheeks is then detected by searching the edges of the segmented face image. Finally, characteristic vectors are extracted by applying the discrete wavelet transform (DWT) to this area and normalizing the result, and the normalized vectors become the neural network input; recognition is performed through neural network learning. Simulation results show a recognition rate of 100% for learned images and 92% for unlearned images.
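
A minimal sketch of the feature step: a single-level 2-D discrete wavelet transform of the cropped characteristic region, with the normalized approximation coefficients flattened into the network's input vector. The region here is a synthetic stand-in; the paper extracts it from the difference-image mask, and the choice of the Haar wavelet is an assumption.

```python
# Sketch: DWT-based feature vector for the recognition network.
import numpy as np
import pywt

region = np.random.rand(64, 64)                  # stand-in for the face region
cA, (cH, cV, cD) = pywt.dwt2(region, "haar")     # single-level 2-D DWT

features = cA.flatten()
features = (features - features.mean()) / (features.std() + 1e-8)  # normalize
print("feature vector length:", features.size)
```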

A Facial Expression Recognition Method Using Two-Stream Convolutional Networks in Natural Scenes

  • Zhao, Lixin
    • Journal of Information Processing Systems, v.17 no.2, pp.399-410, 2021
  • To address the problem that complex external variables in natural scenes strongly affect facial expression recognition results, a facial expression recognition method based on a two-stream convolutional neural network is proposed. The model introduces exponentially enhanced shared input weights before each level of convolution input and applies soft-attention modules to the spatiotemporal features formed by combining the static and dynamic streams. This lets the network autonomously find the areas most relevant to the expression category and pay more attention to them, suppressing information from irrelevant interference areas. To address the poor local robustness caused by lighting and expression changes, the paper also applies a lighting preprocessing chain algorithm to eliminate most lighting effects. Experimental results on the AFEW6.0 and Multi-PIE datasets show recognition rates of 95.05% and 61.40%, respectively, which are better than the other methods compared.
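
A minimal sketch of a lighting preprocessing chain of the kind the paper applies before the network: gamma correction, a difference of Gaussians, and contrast normalization. The abstract does not give the chain's exact steps or parameters, so the values below are illustrative assumptions.

```python
# Sketch: gamma correction + difference of Gaussians + contrast normalization.
import cv2
import numpy as np

def preprocess_lighting(gray: np.ndarray, gamma: float = 0.2) -> np.ndarray:
    x = np.power(gray.astype(np.float32) / 255.0, gamma)          # gamma step
    dog = cv2.GaussianBlur(x, (0, 0), 1.0) - cv2.GaussianBlur(x, (0, 0), 2.0)
    dog = dog / (np.mean(np.abs(dog)) + 1e-8)                     # contrast eq.
    return cv2.normalize(dog, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

face = (np.random.rand(128, 128) * 255).astype(np.uint8)          # stand-in crop
print(preprocess_lighting(face).shape)
```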

Using Ensemble Learning Algorithm and AI Facial Expression Recognition, Healing Service Tailored to User's Emotion (앙상블 학습 알고리즘과 인공지능 표정 인식 기술을 활용한 사용자 감정 맞춤 힐링 서비스)

  • Yang, Seong-yeon; Hong, Dahye; Moon, Jaehyun
    • Proceedings of the Korea Information Processing Society Conference, 2022.11a, pp.818-820, 2022
  • The keyword 'healing' is essential in the competitive society and culture of Korea. In addition, as time spent at home has increased due to COVID-19, demand for indoor healing services has grown. This paper therefore analyzes the user's facial expression so that people can receive various 'customized' healing services indoors, and on that basis provides lighting, ASMR, and video recommendation services as well as a facial expression recording service. The user's expression is analyzed by extracting only the face via object detection from an image taken by the user and then applying an ensemble algorithm to the expression predictions of several CNN models.
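
A minimal sketch of one common ensemble scheme consistent with the abstract: the class-probability outputs of several CNN expression classifiers are averaged (soft voting) and the top class drives the recommendation. The label set, the number of models, and the random probabilities are placeholders.

```python
# Sketch: soft-voting ensemble over per-model expression probabilities.
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]     # illustrative label set

def soft_vote(prob_rows: np.ndarray) -> str:
    """prob_rows: (n_models, n_classes) per-model class probabilities."""
    avg = prob_rows.mean(axis=0)
    return EMOTIONS[int(np.argmax(avg))]

rng = np.random.default_rng(2)
raw = rng.random((3, len(EMOTIONS)))                # 3 hypothetical CNN models
probs = raw / raw.sum(axis=1, keepdims=True)        # normalize each row
print("predicted emotion:", soft_vote(probs))
```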

Feature Variance and Adaptive classifier for Efficient Face Recognition (효과적인 얼굴 인식을 위한 특징 분포 및 적응적 인식기)

  • Dawadi, Pankaj Raj; Nam, Mi Young; Rhee, Phill Kyu
    • Proceedings of the Korea Information Processing Society Conference, 2007.11a, pp.34-37, 2007
  • Face recognition is still a challenging problem in the pattern recognition field, affected by factors such as facial expression, illumination, and pose. Facial features such as the eyes, nose, and mouth together constitute a face, and the mouth in particular suffers from the undesirable effects of facial expression, which contribute to low recognition performance. We propose a new approach to face recognition under facial expression that applies two cascaded classifiers to improve the recognition rate. All facial expression images are first processed by a general-purpose classifier; the images it rejects (by thresholding) are then used for adaptation with a genetic algorithm (GA) to improve the recognition rate. We use Gabor wavelets as the general classifier and Gabor wavelets with a genetic algorithm for adaptation under expression variance. We designed, implemented, and demonstrated the proposed approach on this problem. The FERET face image dataset was chosen for training and testing, and very good results were achieved.
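
A minimal sketch of the Gabor-wavelet feature stage used by a general classifier of this kind: a small filter bank over several orientations, with the mean response per filter collected as a feature vector. The GA-driven adaptation of rejected images is not reproduced, and the filter parameters and input are illustrative.

```python
# Sketch: Gabor filter-bank features over a face crop.
import cv2
import numpy as np

def gabor_features(gray: np.ndarray, n_orientations: int = 4) -> np.ndarray:
    feats = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        # getGaborKernel(ksize, sigma, theta, lambda, gamma, psi)
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0.0)
        response = cv2.filter2D(gray.astype(np.float32), -1, kernel)
        feats.append(response.mean())
    return np.array(feats)

face = (np.random.rand(96, 96) * 255).astype(np.uint8)   # stand-in face crop
print(gabor_features(face))
```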