• Title/Summary/Keyword: facial recognition technology

Search Results: 170

A Method for Determining Face Recognition Suitability of Face Image (얼굴영상의 얼굴인식 적합성 판정 방법)

  • Lee, Seung Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.11 / pp.295-302 / 2018
  • Face recognition (FR) has been widely used in various applications, such as smart surveillance systems, immigration control in airports, user authentication in smart devices, and so on. FR in well-controlled conditions has been extensively studied and is relatively mature. However, in unconstrained conditions, FR performance could degrade due to undesired characteristics of the input face image (such as irregular facial pose variations). To overcome this problem, this paper proposes a new method for determining if an input image is suitable for FR. In the proposed method, for an input face image, reconstruction error is computed by using a predefined set of reference face images. Then, suitability can be determined by comparing the reconstruction error with a threshold value. In order to reduce the effect of illumination changes on the determination of suitability, a preprocessing algorithm is applied to the input and reference face images before the reconstruction. Experimental results show that the proposed method is able to accurately discriminate non-frontal and/or incorrectly aligned face images from correctly aligned frontal face images. In addition, only 3 ms is required to process a face image of 64×64 pixels, which further demonstrates the efficiency of the proposed method.
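
The suitability decision reduces to computing a reconstruction error against a fixed reference set and thresholding it. Below is a minimal sketch, assuming a linear least-squares reconstruction and a simple zero-mean/unit-norm illumination normalization; the paper's exact reconstruction model and preprocessing algorithm may differ, and the threshold would be tuned on validation data.

```python
import numpy as np

def is_suitable_for_fr(input_face, reference_faces, threshold):
    """Decide FR suitability of a face image from its reconstruction error.

    input_face      : (64, 64) grayscale image.
    reference_faces : list of (64, 64) correctly aligned frontal reference images.
    threshold       : reconstruction-error threshold (tuned on validation data).
    """
    def preprocess(img):
        # Illustrative illumination normalization: zero mean, unit norm.
        v = img.astype(np.float64).ravel()
        v -= v.mean()
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    x = preprocess(input_face)                                       # (4096,)
    R = np.stack([preprocess(r) for r in reference_faces], axis=1)   # (4096, K)

    # Linear least-squares reconstruction of x from the reference set.
    c, *_ = np.linalg.lstsq(R, x, rcond=None)
    error = np.linalg.norm(x - R @ c)

    return error <= threshold, error
```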

Research on Classification of Human Emotions Using EEG Signal (뇌파신호를 이용한 감정분류 연구)

  • Zubair, Muhammad;Kim, Jinsul;Yoon, Changwoo
    • Journal of Digital Contents Society / v.19 no.4 / pp.821-827 / 2018
  • Affective computing has gained increasing interest in recent years with the development of potential applications in human-computer interaction (HCI) and healthcare. Although considerable research has been done on human emotion recognition, physiological signals have received less attention than speech and facial expressions. In this paper, electroencephalogram (EEG) signals from different brain regions were investigated using modified wavelet energy features. The mRMR algorithm was applied to minimize redundancy and maximize relevancy among the features. EEG recordings from the publicly available DEAP database were used to classify four classes of emotions with a multi-class Support Vector Machine. The proposed approach shows strong performance compared to existing algorithms.
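
As a rough illustration of this pipeline, the sketch below computes per-channel wavelet-band energies and feeds them to a multi-class SVM. It assumes a standard discrete wavelet decomposition (PyWavelets) and uses mutual-information feature selection as a stand-in for the mRMR algorithm named in the abstract; the wavelet choice, decomposition level, and number of selected features are illustrative.

```python
import numpy as np
import pywt
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def wavelet_energy_features(eeg_trial, wavelet="db4", level=4):
    """Relative wavelet sub-band energies per channel for one EEG trial.

    eeg_trial : array of shape (n_channels, n_samples).
    """
    feats = []
    for channel in eeg_trial:
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        feats.extend(energies / energies.sum())       # relative band energy
    return np.array(feats)

# Feature selection (mutual information as a stand-in for mRMR) + multi-class SVM.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=30),
    SVC(kernel="rbf"),                                # multi-class via one-vs-one
)

# Usage with DEAP-style trials (not loaded here):
# X = np.array([wavelet_energy_features(trial) for trial in trials])
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```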

Dynamic Facial Image Graphic Representation of Multi-Dimensional Data (다차원 데이터의 동적 얼굴 이미지그래픽 표현)

  • 최철재;최진식;조규천;차홍준
    • Journal of the Korea Computer Industry Society / v.2 no.10 / pp.1291-1300 / 2001
  • This article studies a visualization technique, grounded in how the human eye and brain perceive images, for multi-dimensional data based on dynamic graphics that can change in real time by manipulating the facial image as a set of graphic elements. The key idea of the realization is as follows: facial feature points and the parameter control values obtained from an existing image recognition algorithm are mapped to the multi-dimensional data, the image is synthesized, and a virtual image is created whose emotional expression changes with the contraction and expansion of the features. The proposed DyFIG system was implemented as a complete module; through manipulation and experimentation, we present a human face graphics module capable of expressing emotion, realizing a description technique for emotional data expression.
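
The core idea, mapping multi-dimensional data values onto facial control parameters before synthesizing the image, can be illustrated with a small sketch. The parameter names and the min-max normalization below are hypothetical; the paper's actual feature points and control values are not specified in the abstract.

```python
import numpy as np

# Hypothetical facial control parameters; the paper's actual feature points and
# control values are not listed in the abstract.
FACE_PARAMS = ["mouth_curvature", "eye_openness", "brow_angle", "face_width"]

def data_to_face_params(sample, lo, hi):
    """Map one multi-dimensional data sample to facial control values in [0, 1].

    sample, lo, hi : 1-D sequences of equal length
                     (observed value, per-dimension min, per-dimension max).
    """
    sample, lo, hi = (np.asarray(a, dtype=float) for a in (sample, lo, hi))
    norm = np.clip((sample - lo) / np.maximum(hi - lo, 1e-12), 0.0, 1.0)
    return dict(zip(FACE_PARAMS, norm))

# Example: four data dimensions driving four facial parameters.
print(data_to_face_params([3.0, 0.2, 7.5, 1.0], [0, 0, 0, 0], [10, 1, 10, 2]))
```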

Study on the Camera Image Frame's Comparison for Authenticating Smart Phone Users (스마트폰 사용자 인증을 위한 카메라 영상 프레임 비교에 관한 연구)

  • Jang, Eun-Gyeom;Nam, Seok-Woo
    • Journal of the Korea Society of Computer and Information / v.16 no.6 / pp.155-164 / 2011
  • Smartphone-based apps are being used in various fields, such as medical services in hospitals, financial services at banks and credit card companies, and ubiquitous technologies in companies and homes. In this service environment, exposure of a smart phone can lead to loss of assets, including leaks of official and private information to outsiders. Although secret keys, pattern recognition technologies, and single-image authentication techniques are applied as protective methods, they have the problem that access is still possible using static key values or static images such as photographs. Therefore, this study proposes a face authentication technology to protect smart phones from these risk factors. The proposed technology authenticates users by extracting key frames of the user's facial images in real time and controls access to the smart phone accordingly. The authentication information is composed of multiple key frames, and user access is controlled by a similarity-discrimination algorithm that uses the DC values of the image's pixels and luminance.
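
A minimal sketch of the similarity test is shown below: each frame is reduced to per-block DC (mean luminance) values, and two frames are compared by normalized correlation against a threshold. The block size, the matching rule over key frames, and the threshold are assumptions; the abstract does not detail the key-frame extraction or the exact similarity measure.

```python
import numpy as np

def block_dc(frame_gray, block=8):
    """Mean (DC) luminance of each block x block tile of a grayscale frame."""
    h, w = frame_gray.shape
    h, w = h - h % block, w - w % block
    tiles = frame_gray[:h, :w].reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3)).ravel()

def frame_similarity(frame_a, frame_b, block=8):
    """Similarity of two frames from their block DC values (normalized correlation)."""
    a, b = block_dc(frame_a, block), block_dc(frame_b, block)
    a, b = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def authenticate(live_key_frames, enrolled_key_frames, threshold=0.9):
    """Grant access only if every live key frame matches some enrolled key frame."""
    scores = [max(frame_similarity(lf, ef) for ef in enrolled_key_frames)
              for lf in live_key_frames]
    return min(scores) >= threshold
```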

A Study of Facial Organs Classification System Based on Fusion of CNN Features and Haar-CNN Features

  • Hao, Biao;Lim, Hye-Youn;Kang, Dae-Seong
    • The Journal of Korean Institute of Information Technology / v.16 no.11 / pp.105-113 / 2018
  • In this paper, we propose a method for effective classification of the eyes, nose, and mouth of the human face. Most recent image classification uses a Convolutional Neural Network (CNN). However, the features extracted by a CNN alone are not sufficient and the classification accuracy is limited, so we propose a new algorithm to improve the classification performance. The proposed method can be roughly divided into three parts. First, the Haar feature extraction algorithm is used to construct eye, nose, and mouth datasets from face images. Second, CNN features of the images are extracted using AlexNet. Finally, Haar-CNN features are extracted by performing convolution after Haar feature extraction. The CNN features and Haar-CNN features are then fused, and the images are classified using softmax. The recognition rate using the fused features is about 4% higher than with CNN features alone. Experiments demonstrate the performance of the proposed algorithm.
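
The fusion-and-softmax stage can be sketched as follows, assuming the AlexNet branch and the Haar-CNN branch each yield a fixed-length feature vector per crop; the feature dimensions and the single linear classification layer are illustrative (PyTorch).

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Classify eye / nose / mouth from concatenated CNN and Haar-CNN features.

    cnn_dim and haar_dim are illustrative sizes; the paper's AlexNet-based
    extractor and Haar-CNN branch are assumed to produce fixed-length vectors.
    """
    def __init__(self, cnn_dim=4096, haar_dim=1024, n_classes=3):
        super().__init__()
        self.fc = nn.Linear(cnn_dim + haar_dim, n_classes)

    def forward(self, cnn_feat, haar_cnn_feat):
        fused = torch.cat([cnn_feat, haar_cnn_feat], dim=1)   # feature-level fusion
        return torch.softmax(self.fc(fused), dim=1)           # class probabilities

# Example with random stand-in features for a batch of 2 crops.
model = FusionClassifier()
probs = model(torch.randn(2, 4096), torch.randn(2, 1024))
print(probs.shape)   # torch.Size([2, 3])
```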

Biometrics for Person Authentication: A Survey (개인 인증을 위한 생체인식시스템 사례 및 분류)

  • Ankur, Agarwal;Pandya, A.-S.;Lho, Young-Uhg;Kim, Kwang-Baek
    • Journal of Intelligence and Information Systems / v.11 no.1 / pp.1-15 / 2005
  • As organizations search for more secure authentication methods for user access, e-commerce, and other security applications, biometrics is gaining increasing attention. Biometrics offers greater security and convenience than traditional methods of personal recognition. In some applications, biometrics can replace or supplement the existing technology; in others, it is the only viable approach. Several biometric methods of identification, including fingerprint, hand geometry, facial, ear, iris, eye, signature, and handwriting recognition, are explored and compared in this paper. Each is well suited to a specific application domain, and this paper briefly identifies and categorizes them by the domains to which they are best suited. Some methods are less intrusive than others.

Deep learning based face mask recognition for access control (출입 통제에 활용 가능한 딥러닝 기반 마스크 착용 판별)

  • Lee, Seung Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.8 / pp.395-400 / 2020
  • Coronavirus disease 2019 (COVID-19) was identified in December 2019 in China and has spread globally, resulting in an ongoing pandemic. Because COVID-19 is spread mainly from person to person, every person is required to wear a face mask in public. However, many people still do not wear face masks despite official advice. This paper proposes a method to predict whether a human subject is wearing a face mask or not. In the proposed method, two eye regions are detected, and the mask region (i.e., the face region below the two eyes) is predicted and extracted based on the two eye locations. For more accurate extraction of the mask region, the facial region was aligned by rotating it such that the line connecting the two eye centers was horizontal. The mask region extracted from the aligned face was fed into a convolutional neural network (CNN), producing the classification result (with or without a mask). Experimental results on 186 test images showed that the proposed method achieves a very high accuracy of 98.4%.
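
The alignment step described above (rotating the face so the line between the eye centers is horizontal, then cropping the region below the eyes) can be sketched with OpenCV as follows; the crop margins relative to the eye distance are assumptions, since the abstract does not give exact values.

```python
import cv2
import numpy as np

def extract_mask_region(image_bgr, left_eye, right_eye):
    """Rotate the face so the eye line is horizontal, then crop below the eyes.

    left_eye, right_eye : (x, y) eye-center coordinates in the input image.
    The crop geometry below is illustrative; exact margins are not given in the paper.
    """
    (x1, y1), (x2, y2) = left_eye, right_eye
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))      # tilt of the eye line
    centre = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

    M = cv2.getRotationMatrix2D(centre, angle, 1.0)
    h, w = image_bgr.shape[:2]
    aligned = cv2.warpAffine(image_bgr, M, (w, h))

    eye_dist = np.hypot(x2 - x1, y2 - y1)
    top = int(centre[1] + 0.3 * eye_dist)                 # just below the eyes
    bottom = int(centre[1] + 1.8 * eye_dist)
    left = int(centre[0] - 1.0 * eye_dist)
    right = int(centre[0] + 1.0 * eye_dist)
    return aligned[max(top, 0):bottom, max(left, 0):right]

# The crop would then be resized and passed to a binary mask / no-mask CNN classifier.
```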

Research on Generative AI for Korean Multi-Modal Montage App (한국형 멀티모달 몽타주 앱을 위한 생성형 AI 연구)

  • Lim, Jeounghyun;Cha, Kyung-Ae;Koh, Jaepil;Hong, Won-Kee
    • Journal of Service Research and Studies / v.14 no.1 / pp.13-26 / 2024
  • Multi-modal generation is the process of generating results based on several kinds of information, such as text, images, and audio. With the rapid development of AI technology, a growing number of multi-modal systems synthesize different types of data to produce results. In this paper, we present an AI system that uses speech and text recognition to describe a person and generate a montage image. While existing montage generation technology is based on the appearance of Westerners, the montage generation system developed in this paper learns a model based on Korean facial features. It can therefore create more accurate and effective Korean montage images from Korean-specific multi-modal voice and text input. Since the developed montage generation app can be used to produce draft montages, it can dramatically reduce the manual labor of existing montage production personnel. For this purpose, we used the persona-based virtual-person montage data provided by the AI-Hub of the National Information Society Agency. AI-Hub is an AI integration platform that aims to provide a one-stop service by building the artificial intelligence training data needed to develop AI technologies and services. The image generation system was implemented using VQGAN, a deep learning model for generating high-resolution images, and KoDALLE, a Korean text-based image generation model. The trained AI model creates a montage image of a face that is very similar to the one described by voice and text. To verify the practicality of the developed montage generation app, 10 testers used it, and more than 70% responded that they were satisfied. The montage generator can be used in various fields, such as criminal investigation, to describe and visualize facial features.

Analysis of Success Cases of InsurTech and Digital Insurance Platform Based on Artificial Intelligence Technologies: Focused on Ping An Insurance Group Ltd. in China (인공지능 기술 기반 인슈어테크와 디지털보험플랫폼 성공사례 분석: 중국 평안보험그룹을 중심으로)

  • Lee, JaeWon;Oh, SangJin
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.71-90 / 2020
  • Recently, the global insurance industry has been rapidly pursuing digital transformation through the use of artificial intelligence technologies such as machine learning, natural language processing, and deep learning. As a result, more and more foreign insurers have succeeded with artificial intelligence technology-based InsurTech and platform businesses. Ping An Insurance Group Ltd., China's largest private company, is leading China's fourth industrial revolution with remarkable achievements in InsurTech and digital platforms as a result of constant innovation, using 'finance and technology' and 'finance and ecosystem' as its corporate keywords. This study analyzed the InsurTech and platform business activities of Ping An Insurance Group Ltd. through the ser-M analysis model to provide strategic implications for revitalizing the AI technology-based businesses of domestic insurers. The ser-M analysis model is a framework through which the CEO's vision and leadership, the historical environment of the enterprise, the utilization of various resources, and their unique mechanism relationships can be interpreted in an integrated manner in terms of subject, environment, resource, and mechanism. As a result of the case analysis, Ping An Insurance Group Ltd. has achieved cost reduction and customer service development by digitally innovating its entire business, including sales, underwriting, claims, and loan services, using core artificial intelligence technologies such as facial, voice, and facial expression recognition. In addition, "online data in China" and "the vast offline data and insights accumulated by the company" were combined with new technologies such as artificial intelligence and big data analysis to build a digital platform that integrates financial services and digital service businesses. Ping An Insurance Group Ltd. pursued constant innovation, and as of 2019, its sales reached $155 billion, ranking seventh among all companies in the Global 2000 rankings selected by Forbes Magazine. Analyzing the background of this success from the perspective of ser-M, founder Ma Mingzhe quickly grasped the development of digital technology, market competition, and changes in population structure in the era of the fourth industrial revolution, established a new vision, and displayed agile, digital technology-focused leadership. Based on this strong leadership in response to environmental changes, the company has successfully led InsurTech and platform businesses through innovation of internal resources such as investment in artificial intelligence technology, securing excellent professionals, and strengthening big data capabilities, combined with external absorption capabilities and strategic alliances across various industries. This success story of Ping An Insurance Group Ltd. offers the following implications for domestic insurance companies preparing for digital transformation. First, CEOs of domestic companies also need to recognize the paradigm shift in the industry brought about by changes in digital technology and quickly arm themselves with digital technology-oriented leadership to spearhead the digital transformation of their enterprises.
Second, the Korean government should urgently overhaul related laws and systems to further promote the use of data between different industries and provide drastic support such as deregulation, tax benefits and platform provision to help the domestic insurance industry secure global competitiveness. Third, Korean companies also need to make bolder investments in the development of artificial intelligence technology so that systematic securing of internal and external data, training of technical personnel, and patent applications can be expanded, and digital platforms should be quickly established so that diverse customer experiences can be integrated through learned artificial intelligence technology. Finally, since there may be limitations to generalization through a single case of an overseas insurance company, I hope that in the future, more extensive research will be conducted on various management strategies related to artificial intelligence technology by analyzing cases of multiple industries or multiple companies or conducting empirical research.

Review of Research Trends on Virtual Reality-Based Intervention for Students with Autism Spectrum Disorders and Intervention Characteristics (자폐 범주성 학생을 위한 가상현실 기반 중재 연구동향 및 중재 특성 고찰)

  • Yang, Yi;Lee, Suk-Hyang;Suh, Min-Kyung
    • The Journal of the Korea Contents Association / v.17 no.2 / pp.623-636 / 2017
  • The use of virtual reality (VR)-based interventions for students with autism spectrum disorders (ASD) has received special attention as an evidence-based practice for its feasibility, practicality, and appropriateness. However, there has been little research in Korea investigating the effects of VR-based interventions for students with ASD. This study identifies and reviews studies applying VR-based interventions. In total, 13 experimental studies published from 1990 to 2016 that examine the effects of VR interventions were found. The selected studies were analyzed by six variables: publication year, participants, research design, independent variable, dependent variable, and outcome. The results showed the feasibility of implementing VR-based interventions with students with ASD across various age groups. In addition, the use of VR techniques was particularly effective in improving a wide range of social communication skills, including facial recognition, empathy, joint attention, understanding social context, and resolving issues arising from limited cognitive abilities. Several recommendations for future research on VR-based interventions for students with ASD are discussed, including an interdisciplinary approach to VR-based interventions, support needs regarding the characteristics of ASD, generalization and maintenance of acquired skills, and consideration of participants' cultural backgrounds.