• Title/Summary/Keyword: Mask Recognition

Search Results: 180

Full face recognition using features extracted by shape analysis and the back-propagation algorithm (형태분석에 의한 특징 추출과 BP알고리즘을 이용한 정면 얼굴 인식)

  • 최동선;이주신
    • Journal of the Korean Institute of Telematics and Electronics B / v.33B no.10 / pp.63-71 / 1996
  • This paper proposes a method that analyzes facial shape and extracts the positions of the eyes regardless of the tilt and size of the input image. With the extracted feature parameters of the facial elements, full human faces are recognized by a neural network trained with the BP algorithm. The input image is converted to binary form and then labelled. The area, circumference, and circular degree of the labelled binary image are obtained using chain codes and defined as feature parameters of the face image. We first extract the two eyes from the similarity and distance of the feature parameters of each facial element, and then the input face image is corrected by standardizing on the two extracted eyes. After a mask is generated, a line histogram is applied to find the feature points of the facial elements. Distances and angles between the feature points are used as parameters to recognize the full face. To verify the validity of the proposed learning algorithm, experiments were carried out on 20 persons; the algorithm showed a 100% recognition rate on both learned and non-learned data (see the sketch after this entry).

  • PDF
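
A minimal sketch of the shape features named above (area, circumference, circular degree of a labelled binary image), assuming OpenCV 4.x contours as a stand-in for an explicit chain-code pass; the BP network and eye-standardization steps are not reproduced.

```python
import cv2
import numpy as np

def shape_features(binary_img):
    """Area, perimeter and circularity for each connected component."""
    # OpenCV 4.x: findContours returns (contours, hierarchy)
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    features = []
    for c in contours:
        area = cv2.contourArea(c)
        perimeter = cv2.arcLength(c, True)
        if perimeter == 0:
            continue
        # circular degree: 4*pi*A / P^2, equal to 1.0 for a perfect circle
        circularity = 4.0 * np.pi * area / (perimeter ** 2)
        features.append((area, perimeter, circularity))
    return features

# usage sketch with a synthetic blob (a filled circle)
binary = np.zeros((200, 200), np.uint8)
cv2.circle(binary, (100, 100), 40, 255, -1)
print(shape_features(binary))   # circularity should be close to 1.0
```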

Object Recognition and Pose Estimation Based on Deep Learning for Visual Servoing (비주얼 서보잉을 위한 딥러닝 기반 물체 인식 및 자세 추정)

  • Cho, Jaemin;Kang, Sang Seung;Kim, Kye Kyung
    • The Journal of Korea Robotics Society / v.14 no.1 / pp.1-7 / 2019
  • Recently, smart factories have attracted much attention as a result of the 4th Industrial Revolution. Existing factory automation technologies are generally designed for simple repetition without using vision sensors, and even small object assemblies still depend on manual work. To replace the existing systems with new technologies such as bin picking and visual servoing, precision and real-time performance are essential. Therefore, our work focuses on these core elements: a deep learning algorithm detects and classifies the target object in real time, and the object's features are then analyzed. Although there are many strong deep learning algorithms such as Mask R-CNN and Fast R-CNN, we chose the YOLO CNN because it runs in real time and combines the two tasks mentioned above. Then, using the line and interior features extracted from the target object, we obtain the final outline and estimate the object's pose.
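
The detection network itself is not reproduced here; the sketch below only illustrates the downstream idea under stated assumptions: given a bounding box from an upstream detector (YOLO in the paper, here just a hypothetical rectangle), line segments are extracted inside the ROI and the dominant segment's angle is reported as a coarse in-plane orientation.

```python
import cv2
import numpy as np

def coarse_inplane_pose(gray, box):
    """Rough in-plane rotation estimate for a detected object.

    `box` = (x, y, w, h) is assumed to come from an upstream detector;
    the dominant line segment inside the ROI stands in for the paper's
    line/interior features.
    """
    x, y, w, h = box
    roi = gray[y:y + h, x:x + w]
    edges = cv2.Canny(roi, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=max(10, w // 4), maxLineGap=5)
    if lines is None:
        return None
    # pick the longest segment and report its angle in degrees
    x1, y1, x2, y2 = max(lines[:, 0, :],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))

# synthetic test image: dark background with one bright slanted bar
gray = np.zeros((300, 400), np.uint8)
cv2.line(gray, (120, 200), (280, 120), 255, 5)
print(coarse_inplane_pose(gray, (100, 80, 220, 160)))   # hypothetical detection box
```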

Artificial Intelligence Image Segmentation for Extracting Construction Formwork Elements (거푸집 부재 인식을 위한 인공지능 이미지 분할)

  • Chowdhury, Ayesha Munira;Moon, Sung-Woo
    • Journal of KIBIM / v.12 no.1 / pp.1-9 / 2022
  • Concrete formwork is a crucial component of any construction project. Artificial intelligence offers great potential to automate formwork design by providing various design options under different criteria depending on the requirements. This study applied image segmentation to 2D formwork drawings to extract sheathing, strut, and pipe-support formwork elements. The proposed artificial intelligence model can recognize, classify, and extract formwork elements from 2D CAD drawing images. Training and test results confirmed that the model performs well at formwork element recognition, with average precision and recall above 80%. Recognition systems for each formwork element can later be implemented to generate 3D BIM models.
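
The segmentation model is not reproduced; the following is a minimal sketch of the evaluation metrics quoted above (per-class pixel precision and recall), with random label maps standing in for real predictions and ground truth.

```python
import numpy as np

def precision_recall(pred, gt, class_id):
    """Pixel-wise precision/recall for one class (e.g. sheathing, strut, pipe support)."""
    tp = np.logical_and(pred == class_id, gt == class_id).sum()
    fp = np.logical_and(pred == class_id, gt != class_id).sum()
    fn = np.logical_and(pred != class_id, gt == class_id).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# hypothetical label maps: 0 = background, 1 = sheathing, 2 = strut, 3 = pipe support
pred = np.random.randint(0, 4, (256, 256))
gt = np.random.randint(0, 4, (256, 256))
for cid, name in [(1, "sheathing"), (2, "strut"), (3, "pipe support")]:
    p, r = precision_recall(pred, gt, cid)
    print(f"{name}: precision={p:.2f}, recall={r:.2f}")
```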

Eye and Mouth Images Based Facial Expressions Recognition Using PCA and Template Matching (PCA와 템플릿 정합을 사용한 눈 및 입 영상 기반 얼굴 표정 인식)

  • Woo, Hyo-Jeong;Lee, Seul-Gi;Kim, Dong-Woo;Ryu, Sung-Pil;Ahn, Jae-Hyeong
    • The Journal of the Korea Contents Association / v.14 no.11 / pp.7-15 / 2014
  • This paper proposes a recognition algorithm for human facial expressions using PCA and template matching. First, the face image is acquired from an input image using a Haar-like feature mask. The face image is divided into two parts: an upper image containing the eyes and eyebrows, and a lower image containing the mouth and jaw. The extraction of facial components, such as the eyes and mouth, starts from these two images. An eigenface is then produced by the PCA training process with learning images, and an eigeneye and an eigenmouth are derived from the eigenface. The eye image is obtained by template-matching the upper image with the eigeneye, and the mouth image by template-matching the lower image with the eigenmouth. The expression recognition then uses geometric properties of the eye and mouth. Simulation results show that the proposed method has a higher extraction rate than previous methods; in particular, the extraction rate for the mouth image reaches 99%. A recognition system using the proposed method achieves a recognition rate greater than 80% for three facial expressions: fright, anger, and happiness.
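
A hedged sketch of this pipeline's skeleton only: Haar-cascade face detection, a split into upper and lower halves, and template matching against given eye/mouth templates. The PCA-derived eigeneye/eigenmouth are replaced by hypothetical template files, and no expression classification is performed.

```python
import cv2

def locate_eye_and_mouth(image, eye_template, mouth_template):
    """Detect a face, split it, and locate eye/mouth by template matching."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    upper = gray[y:y + h // 2, x:x + w]          # eyes and eyebrows
    lower = gray[y + h // 2:y + h, x:x + w]      # mouth and jaw
    # best-match locations (max of normalized cross-correlation)
    _, _, _, eye_loc = cv2.minMaxLoc(
        cv2.matchTemplate(upper, eye_template, cv2.TM_CCOEFF_NORMED))
    _, _, _, mouth_loc = cv2.minMaxLoc(
        cv2.matchTemplate(lower, mouth_template, cv2.TM_CCOEFF_NORMED))
    return eye_loc, mouth_loc

img = cv2.imread("face.jpg")                                  # hypothetical input image
eye_t = cv2.imread("eigen_eye.png", cv2.IMREAD_GRAYSCALE)     # hypothetical eye template
mouth_t = cv2.imread("eigen_mouth.png", cv2.IMREAD_GRAYSCALE) # hypothetical mouth template
if img is not None and eye_t is not None and mouth_t is not None:
    print(locate_eye_and_mouth(img, eye_t, mouth_t))
```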

Recognition System of Car License Plate using Fuzzy Neural Networks (퍼지 신경망을 이용한 자동차 번호판 인식 시스템)

  • Kim, Kwang-Baek;Cho, Jae-Hyun
    • Journal of the Korea Society of Computer and Information / v.12 no.5 / pp.313-319 / 2007
  • In this paper, we propose a novel method to extract the license plate area and the vehicle number codes from a photographed car image using vertical edge features, together with a new fuzzy neural network algorithm to recognize the extracted codes. A Prewitt mask is used to search for vertical edges for detecting the license plate area, and feature information of the plate is used to eliminate image noise and to extract the plate area and the individual number codes. Finally, the extracted codes are recognized with the proposed fuzzy neural network algorithm, in which FCM is used as the learning structure between the input and middle layers and a Max_Min neural network is used as the learning structure between the middle and output layers. Experiments on 150 real vehicle images show that the proposed method is more efficient than existing approaches (the vertical-edge step is sketched after this entry).

  • PDF
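
A minimal sketch of the vertical-edge step only, assuming the standard Prewitt vertical kernel and a simple row-wise edge-density profile as the plate cue; the fuzzy neural network (FCM + Max_Min) recognizer is not reproduced, and the smoothing window is an assumption.

```python
import cv2
import numpy as np

# Prewitt kernel responding to vertical edges (strong horizontal gradients)
PREWITT_V = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=np.float32)

def plate_candidate_row(gray, band=20):
    """Return the row where vertical-edge density peaks.

    License plates produce dense vertical edges (characters and borders),
    so a simple edge-density profile is a cheap first cue for the plate band.
    """
    edges = np.abs(cv2.filter2D(gray.astype(np.float32), -1, PREWITT_V))
    row_profile = edges.sum(axis=1)
    # smooth the profile with a moving average over `band` rows
    smoothed = np.convolve(row_profile, np.ones(band) / band, mode="same")
    return int(np.argmax(smoothed))

# synthetic test image: a band of thin vertical stripes around row 150
gray = np.zeros((240, 320), np.uint8)
gray[140:170, ::8] = 255
print("plate band centred near row", plate_candidate_row(gray))
```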

Enhancing the performance of the facial keypoint detection model by improving the quality of low-resolution facial images (저화질 안면 이미지의 화질 개선를 통한 안면 특징점 검출 모델의 성능 향상)

  • KyoungOok Lee;Yejin Lee;Jonghyuk Park
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.171-187 / 2023
  • When a person's face is captured by a recording device such as a low-pixel surveillance camera, it is difficult to recognize the face due to low image quality. In situations where a face cannot be recognized, problems arise such as failing to identify a criminal suspect or a missing person. Existing studies on face recognition used refined datasets, so performance could not be measured in various environments. Therefore, to address poor face recognition performance on low-quality images, this paper proposes a method that first improves the quality of low-quality facial images captured in various environments and then improves the performance of facial feature point detection. To confirm the practical applicability of the proposed architecture, experiments were conducted on a dataset in which people appear relatively small in the frame, and a facial image dataset covering mask-wearing situations was also chosen to explore extension to real problems. After image quality improvement, face detection improved by an average factor of 3.47 for images without a mask and 9.92 for images with a mask, and the RMSE of the facial feature points decreased by an average factor of 8.49 with a mask and 2.02 without a mask. These results verify the applicability of the proposed method, which increases the recognition rate for facial images captured at low quality through image quality improvement.
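
The paper's quality-improvement and keypoint models are not reproduced; the sketch below only shows the evaluation metric (keypoint RMSE) and a plain bicubic upscaling placeholder where a learned super-resolution model would go. The keypoint coordinates are hypothetical.

```python
import cv2
import numpy as np

def keypoint_rmse(pred_pts, gt_pts):
    """RMSE over (x, y) facial keypoints, as used for the comparisons above."""
    pred_pts, gt_pts = np.asarray(pred_pts, float), np.asarray(gt_pts, float)
    return float(np.sqrt(np.mean(np.sum((pred_pts - gt_pts) ** 2, axis=1))))

def upscale(image, factor=4):
    """Placeholder for the quality-improvement step: plain bicubic upscaling;
    a learned super-resolution model would be substituted here."""
    h, w = image.shape[:2]
    return cv2.resize(image, (w * factor, h * factor), interpolation=cv2.INTER_CUBIC)

low = np.random.randint(0, 256, (48, 48, 3), dtype=np.uint8)   # stand-in low-quality crop
print(upscale(low).shape)                                      # (192, 192, 3)

gt = [(30, 40), (50, 40), (40, 55), (33, 70), (47, 70)]        # hypothetical ground truth
pred = [(31, 42), (49, 39), (41, 57), (35, 71), (46, 69)]      # hypothetical detections
print("keypoint RMSE:", keypoint_rmse(pred, gt))
```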

A Study on Local Micro Pattern for Facial Expression Recognition (얼굴 표정 인식을 위한 지역 미세 패턴 기술에 관한 연구)

  • Jung, Woong Kyung;Cho, Young Tak;Ahn, Yong Hak;Chae, Ok Sam
    • Convergence Security Journal / v.14 no.5 / pp.17-24 / 2014
  • This study proposes the LDP (Local Directional Pattern) as a new local micro pattern for facial expression recognition, addressing the noise sensitivity of the LBP (Local Binary Pattern). The proposed method extracts 8-directional components using an m×m mask, chooses the k largest components, marks each chosen component with a bit value of 1 (and the others with 0), and finally generates a pattern code from the 8-directional bit sequence. The results show better robustness to rotation and noise. In addition, a new local facial feature representing both PFFs (Permanent Facial Features) and TFFs (Transient Facial Features) can be developed based on the proposed method.
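
A minimal LDP sketch under stated assumptions: the abstract leaves the mask size as m×m, so the common 3×3 Kirsch compass kernels are assumed here, with the k = 3 strongest directional responses per pixel setting the bits of the 8-bit code.

```python
import cv2
import numpy as np

def kirsch_kernels():
    """Eight 3x3 Kirsch compass kernels, built by rotating the border values."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = [5, 5, 5, -3, -3, -3, -3, -3]
    kernels = []
    for shift in range(8):
        vals = base[-shift:] + base[:-shift]      # circular rotation of the border
        k = np.zeros((3, 3), np.float32)
        for (r, c), v in zip(ring, vals):
            k[r, c] = v
        kernels.append(k)
    return kernels

def ldp_code(gray, k=3):
    """8-bit LDP code: set a bit for each of the k strongest directional responses."""
    responses = np.stack([np.abs(cv2.filter2D(gray.astype(np.float32), -1, kk))
                          for kk in kirsch_kernels()])        # shape (8, H, W)
    order = np.argsort(responses, axis=0)                     # ascending per pixel
    code = np.zeros(gray.shape, np.uint8)
    for idx in order[-k:]:                                    # indices of strongest k
        code |= (1 << idx).astype(np.uint8)
    return code

gray = np.random.randint(0, 256, (64, 64)).astype(np.uint8)   # stand-in face patch
codes = ldp_code(gray)
hist, _ = np.histogram(codes, bins=256, range=(0, 256))       # region histogram = feature
```

A histogram of the resulting codes over facial regions would then serve as the local descriptor, in the usual LBP-style fashion.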

Music Recognition by Partial Template Matching (부분적 템플릿 매칭을 활용한 악보인식)

  • Yoo, Jae-Myeong;Kim, Gi-Hong;Lee, Guee-Sang
    • The Journal of the Korea Contents Association / v.8 no.11 / pp.85-93 / 2008
  • Several approaches have been proposed for music score recognition, including shape matching, statistical methods, neural-network-based methods, and structural methods. In this paper, we deal with recognition of low-resolution images captured by the digital camera of a mobile phone. These low-resolution images contain considerable distortions, so many problems appear when existing techniques are used: the captured images are unstable, with heavy distortion and non-uniform illumination changes, so the notes and symbols in the score are damaged and recognition becomes difficult. This paper presents recognition technology to overcome these problems. First, each musical note is separated into head, stick (stem), and tail parts. Template matching is then applied to the head part of the note and to the remaining parts. Experimental results show a nearly 100% recognition rate for music scores containing single musical notes.
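
A minimal sketch of the partial-matching idea: only a note-head template is matched against the score image and the response is thresholded; staff-line removal and the stick/tail verification described above are omitted, and the threshold value is an assumption.

```python
import cv2
import numpy as np

def find_note_heads(score_gray, head_template, thresh=0.7):
    """Return (x, y) top-left corners where the note-head template matches.

    Partial matching: only the head is matched here; the stick (stem) and
    tail would be verified separately, as described in the entry above.
    """
    response = cv2.matchTemplate(score_gray, head_template, cv2.TM_CCOEFF_NORMED)
    ys, xs = np.where(response >= thresh)
    return list(zip(xs.tolist(), ys.tolist()))

# synthetic example: white page with two filled "note heads"
score = np.full((100, 200), 255, np.uint8)
cv2.circle(score, (50, 50), 6, 0, -1)
cv2.circle(score, (140, 60), 6, 0, -1)
head = score[42:59, 42:59].copy()        # crop one head as the template
print(len(find_note_heads(score, head)), "candidate note-head positions")
```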

Virtual core point detection and ROI extraction for finger vein recognition (지정맥 인식을 위한 가상 코어점 검출 및 ROI 추출)

  • Lee, Ju-Won;Lee, Byeong-Ro
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.3 / pp.249-255 / 2017
  • Finger vein recognition acquires a finger vein image by illuminating the finger with infrared light and authenticates a person through processes such as feature extraction and matching. To recognize a finger vein, a 2D mask-based two-dimensional convolution can be used to detect the finger edges, but it takes too much computation time on a low-cost microprocessor or microcontroller. To solve this problem and improve the recognition rate, this study proposes a region-of-interest (ROI) extraction method based on virtual core points and moving-average filtering with a threshold on the absolute difference between pixels, without using 2D convolution or 2D masks. To evaluate the proposed method, 600 finger vein images were used to compare the edge extraction speed and ROI extraction accuracy against existing methods. The comparison showed that the processing speed of the proposed method was at least twice that of the existing methods and the ROI extraction accuracy was 6% higher. From these results, the proposed method is expected to offer high processing speed and a high recognition rate on inexpensive microprocessors.
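
A minimal sketch of column-wise edge localization without 2D convolution, in the spirit of the description above: each column is smoothed with a 1D moving average and the finger boundaries are taken where the absolute step between adjacent pixels exceeds a threshold. The window size and threshold are assumptions, and a random array stands in for a real image.

```python
import numpy as np

def finger_boundaries(gray, win=5, diff_thresh=12):
    """Upper/lower finger edge per column, using only 1D operations."""
    img = gray.astype(np.float32)
    kernel = np.ones(win, dtype=np.float32) / win
    top, bottom = [], []
    for col in img.T:                                   # one 1D profile per column
        smooth = np.convolve(col, kernel, mode="same")  # 1D moving average
        steps = np.abs(np.diff(smooth))                 # absolute pixel differences
        idx = np.where(steps > diff_thresh)[0]
        if idx.size >= 2:
            top.append(int(idx[0]))
            bottom.append(int(idx[-1]))
        else:
            top.append(0)
            bottom.append(len(col) - 1)
    return np.array(top), np.array(bottom)

# the midline between the boundaries could then define the virtual core line / ROI
gray = np.random.randint(0, 256, (240, 320)).astype(np.uint8)   # stand-in image
t, b = finger_boundaries(gray)
print("mean finger width:", float(np.mean(b - t)))
```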

Fingerprint Pore Extraction Method using 1D Gaussian Model (1차원 가우시안 모델을 이용한 지문 땀샘 추출 방법)

  • Cui, Junjian;Ra, Moonsoo;Kim, Whoi-Yul
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.4 / pp.135-144 / 2015
  • Fingerprint pores have proven to be useful features for fingerprint recognition, and several pore-based fingerprint recognition systems have been reported recently. To recognize fingerprints using pore information, it is very important to extract pores reliably and accurately. Existing pore extraction methods use 2D model fitting to detect pore centers. This paper proposes a pore extraction method using a 1D Gaussian model, which is much simpler than a 2D model and requires less computation during model fitting. The proposed method first calculates the local ridge orientation and then generates a ridge mask. Since a pore center is brighter than its neighboring pixels, pore candidates are extracted using a 3×3 filter and a 5×5 filter successively, and pore centers are then extracted by fitting the 1D Gaussian model to the candidates. Extensive experiments show that the proposed method extracts pores more effectively and accurately than existing methods, and pore matching results show that it can be used for fingerprint recognition.
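
A hedged sketch of two pieces of this pipeline: pore candidates as local maxima under 3×3 and 5×5 maximum filters, and a 1D Gaussian fitted to the horizontal intensity profile through a candidate using SciPy's curve_fit. The ridge-orientation and ridge-mask steps are omitted, and the brightness condition and profile width are assumptions.

```python
import cv2
import numpy as np
from scipy.optimize import curve_fit

def gaussian_1d(x, a, mu, sigma, offset):
    return a * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2)) + offset

def pore_candidates(gray):
    """Pixels that are local maxima under both a 3x3 and a 5x5 maximum filter."""
    img = gray.astype(np.float32)
    max3 = cv2.dilate(img, np.ones((3, 3), np.uint8))
    max5 = cv2.dilate(img, np.ones((5, 5), np.uint8))
    ys, xs = np.where((img >= max3) & (img >= max5) & (img > img.mean()))
    return list(zip(xs.tolist(), ys.tolist()))

def fit_pore_profile(gray, x, y, half=4):
    """Fit a 1D Gaussian to the horizontal intensity profile through (x, y)."""
    profile = gray[y, max(0, x - half):x + half + 1].astype(np.float64)
    xs = np.arange(profile.size, dtype=np.float64)
    p0 = [float(profile.max() - profile.min()), float(half), 1.5, float(profile.min())]
    params, _ = curve_fit(gaussian_1d, xs, profile, p0=p0, maxfev=2000)
    return params   # amplitude, centre, sigma, offset

# synthetic example: dark ridge background with one bright pore-like dot
gray = np.full((60, 60), 60, np.uint8)
cv2.circle(gray, (30, 30), 2, 200, -1)
cands = pore_candidates(gray)
if cands:
    print(fit_pore_profile(gray, *cands[0]))
```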