• Title/Summary/Keyword: Real-time Face Recognition (실시간 얼굴인식)

Search Results: 249

Panorama Background Generation and Object Tracking using Pan-Tilt-Zoom Camera (Pan-Tilt-Zoom 카메라를 이용한 파노라마 배경 생성과 객체 추적)

  • Paek, In-Ho;Im, Jae-Hyun;Park, Kyoung-Ju;Paik, Jun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.3
    • /
    • pp.55-63
    • /
    • 2008
  • This paper presents a panorama background generation and object tracking technique using a Pan-Tilt-Zoom camera. The proposed method rapidly estimates local motion vectors using phase correlation matching at prespecified multiple local regions and minimizes the estimation error by vector quantization. By estimating the overlapped region from the local motion vectors, we obtain the required image patches, project them onto a cylinder, and realign them to produce the panoramic image. Object tracking is performed by extracting the object's motion and separating the foreground from the input image using background subtraction. The proposed PTZ-based object tracking method can efficiently generate a stable panorama background covering a field of view of up to 360 degrees. The algorithm is designed for real-time implementation and can be applied to many commercial applications, such as object shape detection and face recognition in surveillance video systems.
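The phase-correlation matching step described above can be sketched in a few lines of NumPy. This is a generic illustration of recovering the translation between two local regions from the cross-power spectrum, not the authors' implementation:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Return (dx, dy) such that b is approximately a shifted by (dx, dy)."""
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    # Normalized cross-power spectrum; its inverse FFT peaks at the shift
    cross = Fb * np.conj(Fa)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap large positive indices to negative shifts
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dx, dy
```

In a panorama pipeline, this would be run on each of the prespecified local regions, and the resulting vectors then filtered (e.g. by vector quantization, as in the paper) before estimating the overlap.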

Learning efficiency checking system by measuring human motion detection (사람의 움직임 감지를 측정한 학습 능률 확인 시스템)

  • Kim, Sukhyun;Lee, Jinsung;Yu, Eunsang;Park, Seon-u;Kim, Eung-Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.290-293
    • /
    • 2021
  • In this paper, we implement a learning efficiency verification system that inspires learning motivation and helps improve concentration by detecting the user's studying behavior. To this end, data on learning attitude and concentration are measured by extracting the movement of the user's face or body through a real-time camera. A Jetson board was used to implement the real-time embedded system, and a convolutional neural network (CNN) was implemented for image recognition. After detecting the feature parts of the subject with the CNN, motion detection is performed. The captured image is shown in a GUI written in PyQt5, and data are collected by sending push messages whenever one of the monitored actions is disrupted. In addition, each function can be executed from the main GUI screen, including a statistical graph computed from the collected data, a to-do list, and white noise playback. Through the learning efficiency checking system, various functions, including data collection and analysis, were provided to users.
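The motion-detection stage in a system like this is often a simple frame-differencing check on consecutive camera frames; the following minimal NumPy sketch is my illustration of the idea, not the paper's code:

```python
import numpy as np

def motion_detected(prev_gray, curr_gray, pixel_thresh=25, ratio_thresh=0.01):
    """Flag motion when enough pixels changed between consecutive frames."""
    # Per-pixel absolute difference (int16 avoids uint8 wrap-around)
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    changed_ratio = (diff > pixel_thresh).mean()
    return changed_ratio > ratio_thresh
```

In practice the two thresholds would be tuned to the camera noise level and the size of the body movements the system should treat as a loss of concentration.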


The Entrance Authentication System Using Real-Time Object Extraction and the RFID Tag (얼굴 인식과 RFID를 이용한 실시간 객체 추적 및 인증 시스템)

  • Jung, Young Hoon;Lee, Chang Soo;Lee, Kwang Hyung;Jun, Moon Seog
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.4 no.4
    • /
    • pp.51-62
    • /
    • 2008
  • In this paper, we propose a system that improves the safety of general RFID systems through a two-step authentication procedure. After the RFID tag is authenticated, the system additionally extracts characteristic information from the user's image, captured by a camera, to acquire supplementary authentication information. The proposed system thus strengthens the security of an automatic entrance-and-exit authentication system by combining the identifying characteristics of the RFID tag with the characteristic information extracted from the camera image. The RFID subsystem, which uses an active tag and reader in the 2.4 GHz band, can recognize tags under various output conditions. In addition, when the RFID subsystem fails, the system is designed to fall back on the user's image characteristics, comparing the color, outline, and input image information against records previously stored in the database. Experimental results show that the system achieves more accurate results than a single-factor authentication system by using both the RFID tag and color-characteristic information.
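The color-based fallback comparison described above can be illustrated with a histogram-intersection similarity between two images. This is a generic sketch under my own assumptions, not the authors' matching method:

```python
import numpy as np

def histogram_similarity(img_a, img_b, bins=32):
    """Compare color distributions of two images via normalized
    histogram intersection (1.0 = identical distributions)."""
    score = 0.0
    for ch in range(3):  # the three color channels
        ha, _ = np.histogram(img_a[..., ch], bins=bins, range=(0, 256))
        hb, _ = np.histogram(img_b[..., ch], bins=bins, range=(0, 256))
        ha = ha / ha.sum()
        hb = hb / hb.sum()
        # Intersection of the two normalized histograms
        score += np.minimum(ha, hb).sum()
    return score / 3.0
```

A real system would combine this color score with the outline comparison the paper mentions, and threshold the combined similarity against the database record.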

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.4
    • /
    • pp.266-272
    • /
    • 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) tasks such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in actual applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm. Inaccurate facial feature positions degrade the performance of emotion recognition. In this paper, we propose a real-time facial feature extraction and tracking framework for emotion recognition that combines ASM with Lucas-Kanade (LK) optical flow, which is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, which can be a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
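The LK optical-flow update at a single feature point is the classic least-squares solve over a local window. The following NumPy sketch is a generic single-scale version for illustration, not the paper's implementation (which would typically use a pyramidal variant):

```python
import numpy as np

def lk_flow_at(prev, curr, y, x, win=9):
    """Estimate (vx, vy) displacement of the point (x, y) between frames."""
    # Spatial gradients from the previous frame, temporal difference between frames
    Iy, Ix = np.gradient(prev.astype(float))
    It = curr.astype(float) - prev.astype(float)
    r = win // 2
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    # Normal equations of the least-squares system: Ix*vx + Iy*vy = -It
    A = np.array([[ix @ ix, ix @ iy],
                  [ix @ iy, iy @ iy]])
    b = -np.array([ix @ it, iy @ it])
    vx, vy = np.linalg.solve(A, b)
    return vx, vy
```

In an ASM + LK framework, each fitted landmark would be propagated to the next frame with an update like this, and the ASM refit started from the propagated positions.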

Development of a Real-time Facial Expression Recognition Model using Transfer Learning with MobileNet and TensorFlow.js (MobileNet과 TensorFlow.js를 활용한 전이 학습 기반 실시간 얼굴 표정 인식 모델 개발)

  • Cha Jooho
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.19 no.3
    • /
    • pp.245-251
    • /
    • 2023
  • Facial expression recognition plays a significant role in understanding human emotional states. With the advancement of AI and computer vision technologies, extensive research has been conducted in various fields, including improving customer service, medical diagnosis, and assessing learners' understanding in education. In this study, we develop a model that can infer emotions in real-time from a webcam using transfer learning with TensorFlow.js and MobileNet. While existing studies focus on achieving high accuracy using deep learning models, these models often require substantial resources due to their complex structure and computational demands. Consequently, there is a growing interest in developing lightweight deep learning models and transfer learning methods for restricted environments such as web browsers and edge devices. By employing MobileNet as the base model and performing transfer learning, our study develops a deep learning transfer model utilizing JavaScript-based TensorFlow.js, which can predict emotions in real-time using facial input from a webcam. This transfer model provides a foundation for implementing facial expression recognition in resource-constrained environments such as web and mobile applications, enabling its application in various industries.
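The core structure of this transfer-learning setup, a frozen backbone feeding a small trainable classification head, can be sketched independently of TensorFlow.js. In the illustration below, a fixed random projection stands in for the frozen MobileNet features (an assumption for illustration only), and only the softmax head is trained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone (e.g. MobileNet): a fixed
# random projection from raw inputs to a 16-d feature embedding.
W_backbone = rng.normal(size=(64, 16))

def backbone(x):
    return np.tanh(x @ W_backbone)  # frozen: never updated during training

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_head(X, y, n_classes=3, lr=0.5, epochs=300):
    """Train only the linear softmax head on top of frozen features."""
    F = backbone(X)                  # features computed once: backbone is frozen
    Y = np.eye(n_classes)[y]         # one-hot labels
    W_head = np.zeros((F.shape[1], n_classes))
    for _ in range(epochs):
        P = softmax(F @ W_head)
        W_head -= lr * F.T @ (P - Y) / len(X)  # gradient step, head only
    return W_head

def predict(X, W_head):
    return softmax(backbone(X) @ W_head).argmax(axis=1)
```

Freezing the backbone is what makes the approach cheap enough for browsers and edge devices: only the small head's parameters are updated, which mirrors what the paper does with MobileNet in TensorFlow.js.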

Stress Detection System for Emotional Labor Based On Deep Learning Facial Expression Recognition (감정노동자를 위한 딥러닝 기반의 스트레스 감지시스템의 설계)

  • Og, Yu-Seon;Cho, Woo-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.613-617
    • /
    • 2021
  • With the growth of the service industry, stress among emotional labor workers has emerged as a social problem, and the so-called Emotional Labor Protection Act was implemented in 2018. However, the lack of substantial protection systems for emotional laborers underscores the need for a digital stress-management system. In this paper, we therefore propose a stress detection system for customer service representatives based on deep-learning facial expression recognition. The system consists of a real-time face detection module, a facial expression recognition (FER) module trained on large datasets that include Korean emotion images, and a monitoring module that visualizes only stress levels. The system is designed to monitor stress and help prevent mental illness in emotional laborers.
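A monitoring module of this kind typically reduces per-frame emotion scores to a smoothed stress index. The following is a minimal sketch of one plausible design (my assumption, not the authors' code), using a moving average over the per-frame probability of negative emotions:

```python
import numpy as np

def stress_index(neg_emotion_probs, window=5):
    """Moving average of per-frame negative-emotion probability (0..1)."""
    kernel = np.ones(window) / window
    # 'valid' mode: the index is defined once a full window is available
    return np.convolve(neg_emotion_probs, kernel, mode="valid")
```

Smoothing matters here because single-frame FER outputs are noisy; visualizing only the smoothed level is also consistent with the paper's privacy-conscious choice to display stress rather than raw video.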


A Plan for Korea's Participation in the Development of the Galileo Search and Rescue Ground System (우리나라의 갈릴레오 탐색구조 지상시스템 개발 참여 방안)

  • Ju, In-Won;Lee, Sang-Uk;Kim, Jae-Hun;Seo, Sang-Hyeon;Han, Dong-Su;Im, Jong-Geun
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • v.2
    • /
    • pp.608-611
    • /
    • 2006
  • The COSPAS-SARSAT system uses satellites and ground facilities to provide distress alerts and position information in support of search and rescue (SAR) operations when aircraft or vessels are in distress. With the COSPAS-SARSAT service, it takes on average more than one hour from receipt of a distress signal to determination of the distress position, and the position accuracy is coarse, on the order of several kilometers. To address these limitations, a next-generation SAR system using medium-Earth-orbit satellites is under development; the Galileo navigation satellite project, being developed by the EU with full operational capability (FOC) targeted for 2011, plans to carry SAR transponders and provide a search and rescue service. The Galileo SAR (SAR/Galileo) service is being developed to deliver improved performance: position accuracy of a few meters, under ten minutes from distress-signal receipt to rescue initiation, and a return link to the person in distress. Once the Galileo service begins, faster and more accurate rescue is therefore expected. Korea has joined COSPAS-SARSAT as a member state, and a LEOLUT and an MCC are currently installed and operated at the Korea Coast Guard facility in Songdo. To respond quickly and effectively to increasingly diverse disasters, the introduction of a next-generation Galileo SAR ground station is urgently needed, so research on participating in the development of the Galileo SAR ground system, including building ground-station infrastructure and SAR terminals, is both timely and important. By analyzing the case of China, which has joined the Galileo program and is leading SAR/Galileo development, this paper derives the participation methods and procedures Korea would need in order to take part in developing the next-generation Galileo SAR ground system, and proposes a feasible development scope, participation strategy, and implementation framework.


Caricaturing using Local Warping and Edge Detection (로컬 와핑 및 윤곽선 추출을 이용한 캐리커처 제작)

  • Choi, Sung-Jin;Bae, Hyeon;Kim, Sung-Shin;Woo, Kwang-Bang
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.4
    • /
    • pp.403-408
    • /
    • 2003
  • In general, a caricature is a representation, especially pictorial or literary, in which the subject's distinctive features or peculiarities are deliberately exaggerated to produce a comic or grotesque effect. In other words, a caricature is a rough sketch (dessin) made by detecting features of a human face and exaggerating or warping them. Many methods have been developed for producing a caricature image from a human face by computer. In this paper, we propose a new caricaturing system. The system takes a real-time or supplied image as input, processes it in four steps, and finally creates a caricatured image. The four processing steps are as follows. The first step detects a face in the input image. The second step extracts specific coordinate values as facial geometric information. The third step deforms the face image using a local warping method and the coordinate values acquired in the second step. In the fourth step, the system transforms the deformed image into an improved edge image using a fuzzy Sobel method and then creates the final caricatured image. The resulting system is simpler than existing systems in the way it creates a caricatured image, and it does not require complex algorithms combining many image processing methods such as image recognition, transformation, and edge detection.
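The edge-extraction step in the fourth stage builds on the standard Sobel operator (the paper adds a fuzzy enhancement on top, which is omitted here). A plain NumPy sketch of Sobel edge detection:

```python
import numpy as np

def sobel_edges(gray, thresh=100):
    """Binary edge map from the Sobel gradient magnitude."""
    # Standard 3x3 Sobel kernels for horizontal and vertical gradients
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    # Valid-mode 2-D correlation with each kernel
    for i in range(3):
        for j in range(3):
            patch = gray[i:i + h - 2, j:j + w - 2].astype(float)
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    return mag > thresh
```

The threshold controls how strong an intensity transition must be to count as an edge; a fuzzy variant such as the paper's would replace this hard threshold with a membership function.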

Fixed-Point Modeling and Performance Analysis of a SIFT Keypoints Localization Algorithm for SoC Hardware Design (SoC 하드웨어 설계를 위한 SIFT 특징점 위치 결정 알고리즘의 고정 소수점 모델링 및 성능 분석)

  • Park, Chan-Ill;Lee, Su-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.45 no.6
    • /
    • pp.49-59
    • /
    • 2008
  • SIFT (Scale Invariant Feature Transform) is an algorithm that extracts vectors at pixels around keypoints, where pixel intensities differ strongly from their neighbors, such as at corners and edges of an object. The SIFT algorithm is being actively researched for various image processing applications, including 3-D image construction, and its most computation-intensive stage is keypoint localization. In this paper, we develop a fixed-point model of keypoint localization and propose an efficient hardware architecture for embedded applications. The bit-lengths of key variables are determined based on two performance measures: localization accuracy and error rate. Compared with the original algorithm (implemented in Matlab), the accuracy and error rate of the proposed fixed-point model are 93.57% and 2.72%, respectively. In addition, we found that most missing keypoints appear at the edges of an object, which are not very important for keypoint matching. We estimate that the hardware implementation will give a processing speed of 10-15 frames/sec, while fixed-point software implementations on a Pentium Core2Duo (2.13 GHz) and an ARM9 (400 MHz) take 10 seconds and one hour per frame, respectively.
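Fixed-point modeling of this kind amounts to choosing a Q-format (integer plus fractional bits) for each variable and checking how quantization error propagates. A minimal generic sketch of Q-format quantization and multiplication (the bit-lengths here are illustrative, not the paper's choices):

```python
def to_fixed(x, frac_bits):
    """Quantize a float to a signed fixed-point integer with frac_bits
    fractional bits (Q-format), rounding to nearest."""
    return int(round(x * (1 << frac_bits)))

def from_fixed(q, frac_bits):
    """Convert a Q-format integer back to a float."""
    return q / (1 << frac_bits)

def fixed_mul(a, b, frac_bits):
    """Multiply two Q-format numbers: the raw product carries
    2*frac_bits fractional bits, so shift right to renormalize."""
    return (a * b) >> frac_bits
```

A fixed-point model of an algorithm replaces each float operation with ones like these and then measures, as the paper does, how accuracy and error rate change as the bit-lengths shrink.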

Real-time Avatar Matching Technology for Emotion-Fusion Video Communication Services (영상통신 감성융합 서비스를 위한 실시간 아바타 정합기술)

  • Oh, Dong Sik;Kang, Jun Ku;Sin, Min Ho
    • Journal of Digital Convergence
    • /
    • v.10 no.10
    • /
    • pp.283-288
    • /
    • 2012
  • 3D is one of the business sectors in the global spotlight as a source of future revenue. Moving from flat 2D to stereoscopic 3D gives shape and texture the dimensionality of the real world, creating a convincing sense that the real and virtual worlds coexist. Popular interest in 3D has spread widely through films built around 3D avatars, and the 3D TV market, driven by large companies, has pioneered the 3D market and pushed it into an era of upgrades. At the same time, the smartphone boom has brought new innovation to the IT and mobile phone markets: these small computers have become a modern necessity whose ripple effects rival those of the telephone and the Internet. A smartphone is a mobile phone capable of many functions, and iPhone, Android, and Windows Phone devices are currently being released in large numbers. Against this background, and as a service model for future business, this paper develops an application for an emotion-fused real-time video communication service: the smartphone camera captures the user's face, the user's emotional expression is recognized, the expression is synthesized onto a virtual 3D avatar in real time, and the matched avatar is transmitted to the other party's phone for real-time communication.