• Title/Summary/Keyword: Facial Image (얼굴 영상)

Search Results: 1,525

User-Steered Extraction of Geometric Features for 3D Triangular Meshes (사용자 의도에 의한 삼차원 삼각형 메쉬의 기하적 특징 추출)

  • Yoo, Kwan-Hee;Ha, Jong Sung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.9 no.2
    • /
    • pp.11-18
    • /
    • 2003
  • For extracting geometric features from 3D meshes according to user steering with effective interactions, this paper generalizes the 2D algorithms of snapping and wrapping, which, respectively, move a cursor to a nearby feature and construct feature boundaries. First, we define approximate curvatures and move cost functions, numerical values that measure the geometric characteristics of a mesh. By exploiting these measured values, the algorithms of geometric snapping and geometric wrapping are developed and implemented. We also visualize the results of applying the algorithms to extracting geometric features from general 3D mesh models such as a face model and a tooth model. (An illustrative curvature sketch follows this entry.)

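The abstract above refers to "approximate curvatures" as numerical measures of a mesh's geometric characteristics without giving their definition. The sketch below shows one common approximation, the angle-deficit estimate of Gaussian curvature at each vertex, purely as a point of reference; the paper's own curvature and move-cost definitions may well differ.

    import numpy as np

    def angle_deficit_curvature(vertices, faces):
        """Approximate per-vertex Gaussian curvature of a triangle mesh via the
        angle-deficit formula K(v) = 2*pi - (sum of angles incident at v).
        Illustrative only; not necessarily the measure used in the paper."""
        vertices = np.asarray(vertices, dtype=float)
        deficit = np.full(len(vertices), 2.0 * np.pi)
        for face in faces:
            for k in range(3):
                a = vertices[face[k]]
                b = vertices[face[(k + 1) % 3]]
                c = vertices[face[(k + 2) % 3]]
                u, v = b - a, c - a
                cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
                deficit[face[k]] -= np.arccos(np.clip(cos_angle, -1.0, 1.0))
        return deficit

    # Example: every vertex of a tetrahedron has a positive angle deficit.
    verts = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
    faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
    print(angle_deficit_curvature(verts, faces))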

Synchronization of Synthetic Facial Image Sequences and Synthetic Speech for Virtual Reality (가상현실을 위한 합성얼굴 동영상과 합성음성의 동기구현)

  • 최장석;이기영
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.7
    • /
    • pp.95-102
    • /
    • 1998
  • This paper proposes a synchronization method for synthetic facial image sequences and synthetic speech. The LP-PSOLA method synthesizes the speech for each demi-syllable; we provide 3,040 demi-syllables for unlimited synthesis of Korean speech. For synthesis of the facial image sequences, the paper defines a total of 11 fundamental patterns for the lip shapes of the Korean consonants and vowels. These fundamental lip shapes allow us to pronounce all Korean sentences. The image synthesis method assigns the fundamental lip shapes to key frames according to the initial, middle, and final sound of each syllable in the Korean input text, and interpolates the naturally changing lip shapes in the in-between frames. The number of in-between frames is estimated from the duration of each syllable of the synthetic speech, and this estimation accomplishes synchronization of the facial image sequences and the speech. For speech synthesis, disk memory is required to store the 3,040 demi-syllables; for synthesis of the facial image sequences, however, only one image needs to be stored, because all frames are synthesized from the neutral face. This method realizes a synchronized system that can read Korean sentences with synthetic speech and synthetic facial image sequences. (A sketch of the in-between frame estimation follows this entry.)

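The entry above derives the number of in-between frames for each syllable from the duration of its synthesized speech segment. The sketch below illustrates that idea under assumed values (a 30 fps frame rate and plain linear interpolation between key lip shapes); the paper's actual timing model and interpolation scheme are not specified in the abstract.

    import numpy as np

    FPS = 30  # assumed video frame rate; not stated in the abstract

    def inbetween_count(syllable_duration_s, fps=FPS):
        """Number of in-between frames allotted to one syllable, estimated
        from the duration of its synthesized speech segment."""
        return max(int(round(syllable_duration_s * fps)) - 1, 0)

    def interpolate_lip_shapes(key_a, key_b, n_inbetween):
        """Linearly interpolate lip-shape parameter vectors between two key frames
        (the paper's 'naturally changing' interpolation may be more elaborate)."""
        key_a, key_b = np.asarray(key_a, float), np.asarray(key_b, float)
        ts = np.linspace(0.0, 1.0, n_inbetween + 2)[1:-1]  # exclude the two key frames
        return [key_a + t * (key_b - key_a) for t in ts]

    # Example: a 0.2 s syllable at 30 fps gets 5 in-between frames.
    n = inbetween_count(0.2)
    frames = interpolate_lip_shapes([0.0, 1.0], [1.0, 0.0], n)
    print(n, len(frames))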

The Recent Technology and Standardization Status and Future Vitalizations for Digital Signage (디지털 사이니지 기술 및 표준화 동향과 향후 활성화 방향)

  • Kim, Beom-Joon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.11 no.6
    • /
    • pp.545-552
    • /
    • 2016
  • Digital signage, which delivers various types of content and messages through digital displays, is attracting attention as a key future industry. In particular, the Korean digital signage industry is evaluated favorably thanks to the country's advanced digital display industry and wired and wireless networking infrastructure. This paper discusses not only the current status of digital signage in terms of overall development and standardization, but also very recent technologies, such as ultra-high-definition video and human perception, that can be applied to future digital signage. The paper concludes by identifying problems with the current digital signage industry and presenting solutions for its future vitalization.

Pupil and Lip Detection using Shape and Weighted Vector based on Shape (형태와 가중치 벡터를 이용한 눈동자와 입술 검출)

  • Jang, kyung-Shik
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.5
    • /
    • pp.311-318
    • /
    • 2002
  • In this paper, we propose an efficient method for recognizing the pupils and lips in a human face. Pupils are detected by a cost function that uses features based on the eye's shape and the relation between pupil and eyebrow. The inner boundary of the lips is detected by weighted vectors based on the lip's shape and the difference in gray level between the lips and the facial skin. These vectors extract four feature points of the lips: the top of the upper lip, the bottom of the lower lip, and the two corners. Experiments performed on many images show very encouraging results.
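
The abstract describes a cost function for pupil detection built from eye-shape features and the pupil-eyebrow relation, but its form is not given. The following is a hypothetical sketch of such a cost function; the weights, the expected pupil radius, and the expected pupil-eyebrow distance are illustrative assumptions, not values from the paper.

    import numpy as np

    # Placeholder weights; the paper's actual features and weighting are not given.
    W_DARK, W_SHAPE, W_BROW = 0.5, 0.3, 0.2

    def pupil_cost(candidate, eyebrow_y, image_gray):
        """Score a candidate pupil circle: darker, pupil-sized regions that sit an
        expected distance below the eyebrow receive a lower cost."""
        image_gray = np.asarray(image_gray, dtype=float)
        x, y, r = candidate  # centre and radius of the candidate circle
        ys, xs = np.ogrid[:image_gray.shape[0], :image_gray.shape[1]]
        mask = (xs - x) ** 2 + (ys - y) ** 2 <= r ** 2
        darkness = image_gray[mask].mean() / 255.0        # 0 = black, 1 = white
        shape_penalty = abs(r - 5) / 5.0                   # assumed pupil radius of ~5 px
        brow_penalty = abs((y - eyebrow_y) - 15) / 15.0    # assumed pupil-eyebrow gap of ~15 px
        return W_DARK * darkness + W_SHAPE * shape_penalty + W_BROW * brow_penalty

    # The candidate with the minimum cost over all detected circles is taken as the pupil.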

Artificial Intelligence Babysitter System Using Infant Condition Analysis (영유아 상태분석을 이용한 인공지능 베이비시터 시스템)

  • Kim, Yong-Min;Nam, Ji-Seong;Moon, Dae-Hee;Choi, Won-Tae;Kim, Woongsup
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019.10a
    • /
    • pp.354-357
    • /
    • 2019
  • As more households have two working parents, it has become increasingly common to hire a babysitter to care for an infant. This paper describes an artificial intelligence babysitter system based on infant condition analysis. More specifically, the infant's condition is assessed using OpenCV image processing for face recognition, machine-learning-based emotion analysis via the Microsoft Azure API, and an odor sensor (MQ-135). Based on the assessed condition, the system learns to control a cradle on its own and allows remote control through a mobile application. We expect this smart babysitter system to reduce the burden of child care and, by easing that burden even slightly, to contribute both socially and economically by mitigating the low birth rate and reducing child-care expenses.
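
As one concrete reading of the face recognition step described above, the sketch below uses OpenCV's bundled Haar cascade detector and then fuses an externally obtained emotion label (e.g., from the Azure service the authors mention) with an MQ-135 gas reading; the fusion rule and the threshold are illustrative assumptions, not the paper's design.

    import cv2

    def detect_faces(frame):
        """Detect faces with OpenCV's bundled frontal-face Haar cascade
        (one plausible implementation of the OpenCV face recognition step)."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    def infant_state(emotion_label, gas_ppm, gas_threshold=400):
        """Fuse the emotion label with an MQ-135 reading into a coarse infant state.
        The threshold and the rule are assumptions for illustration only."""
        if gas_ppm > gas_threshold:
            return "diaper_check"
        if emotion_label in ("sadness", "anger", "fear"):
            return "soothe"  # e.g., trigger the cradle controller
        return "normal"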

Metal Object Detection System for Driver Inside Protection (내부 운전자 보호를 위한 금속 물체 탐지 시스템)

  • Kim, Jin-Kyu;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.5
    • /
    • pp.609-614
    • /
    • 2009
  • The purpose of this paper is to design a metal object detection system for protecting the driver inside the vehicle. To do so, we propose an algorithm for designing a color filter that can detect metal objects using fuzzy theory, and an algorithm for detecting the driver's face region using a fuzzy skin-color filter. Using these algorithms, we then propose a method for detecting candidate regions of metallic objects, to which the metallic-object color filter is applied. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
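
The abstract mentions a fuzzy skin-color filter for locating the driver's face region. Below is a minimal sketch of such a filter, assuming a triangular membership function over hue in HSV space; the peak, spread, and cut-off values are made up for illustration and are not taken from the paper.

    import numpy as np

    def triangular_membership(x, a, b, c):
        """Standard triangular fuzzy membership function rising from a, peaking at b,
        and falling to zero at c."""
        x = np.asarray(x, dtype=float)
        rising = np.clip((x - a) / (b - a + 1e-12), 0.0, 1.0)
        falling = np.clip((c - x) / (c - b + 1e-12), 0.0, 1.0)
        return np.minimum(rising, falling)

    def fuzzy_skin_mask(hsv_image, hue_peak=15.0, hue_spread=15.0, cut=0.5):
        """Membership of each pixel's hue to a triangular 'skin hue' fuzzy set,
        thresholded into a binary mask. Parameter values are assumptions."""
        hue = np.asarray(hsv_image)[..., 0].astype(float)
        mu = triangular_membership(hue, hue_peak - hue_spread, hue_peak, hue_peak + hue_spread)
        return mu >= cut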

Research and Development of Image Synthesis Model Based on Emotion for the Mobile Environment (모바일 환경에서 감성을 기반으로 한 영상 합성 기법 연구 및 개발)

  • Sim, SeungMin;Lee, JiYeon;Yoon, YongIk
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.11
    • /
    • pp.51-58
    • /
    • 2013
  • The camera performance of smartphones has recently developed to rival that of digital cameras. As a result, many people take photographs, and the number of people interested in photo applications has been steadily increasing. However, existing synthesis programs only arrange several photographs or overlap multiple images. The model proposed in this paper combines a background and applies effect filters based on the emotion extracted from facial expressions, and it can be utilized in more varied fields than other synthesis programs.
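
The model above chooses background composition and effect filters from an emotion extracted from facial expressions. The sketch below illustrates that idea with a hypothetical emotion-to-effect table and a simple gamma and color-temperature adjustment; the mapping and the effects are assumptions, not the authors' design.

    import numpy as np

    # Hypothetical emotion-to-effect table; the paper does not list its actual mapping.
    EMOTION_EFFECTS = {
        "happiness": {"gamma": 0.9, "warmth": 15},    # brighter, warmer tone
        "sadness":   {"gamma": 1.2, "warmth": -15},   # darker, cooler tone
        "neutral":   {"gamma": 1.0, "warmth": 0},
    }

    def apply_emotion_filter(rgb_image, emotion):
        """Apply a simple gamma/color-temperature effect selected by the detected emotion."""
        params = EMOTION_EFFECTS.get(emotion, EMOTION_EFFECTS["neutral"])
        img = np.asarray(rgb_image, dtype=float) / 255.0
        img = np.power(img, params["gamma"])  # gamma correction
        img[..., 0] = np.clip(img[..., 0] + params["warmth"] / 255.0, 0.0, 1.0)  # shift red channel
        return (img * 255).astype(np.uint8)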

Differences of Aesthetic Experience Response Code by Player's Experience Level in Adventure Game (어드벤처 게임에서 플레이어 경험수준별 미적경험 반응코드 차이)

  • Choi, GyuHyeok;Kim, Mijin
    • Journal of Korea Game Society
    • /
    • v.20 no.6
    • /
    • pp.3-12
    • /
    • 2020
  • Player experience studies cover players' behavioral and psychological responses to game content. This study focuses on players' aesthetic experience of specific game elements rather than on the comprehensive game experience. This paper presents the aesthetic experience data derived from the gameplay process as codes, and analyzes those data by player experience level for adventure games. These results may complement the limitations of existing game analysis research and provide practical data that creators can apply at the game design stage.

Measurement Method on Aesthetic Experience of Game Player (게임플레이어의 미적경험 데이터 측정방법)

  • Choi, Gyu-Hyeok;Kim, Mi-Jin
    • Journal of Korea Entertainment Industry Association
    • /
    • v.14 no.5
    • /
    • pp.207-215
    • /
    • 2020
  • Studies on aesthetic experience in games mostly take either an engineering approach centered on the structural determination of specific targets within the game, or a humanistic and social approach in the form of artistic discussion of the gameplay experience. This paper establishes a theoretical guideline for the progression of aesthetic experience that allows analysis from the perspective of the experience a player acquires during gameplay. Based on this guideline, the study classifies cognitive data on aesthetic experience (eye tracking, playing action, facial expression) and suggests methods to measure these data. By identifying errors and points to consider in the measurement methods through pilot tests, this study will contribute to empirical research focused on the player's perspective.

Lightweight Deep Learning Model for Heart Rate Estimation from Facial Videos (얼굴 영상 기반의 심박수 추정을 위한 딥러닝 모델의 경량화 기법)

  • Gyutae Hwang;Myeonggeun Park;Sang Jun Lee
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.2
    • /
    • pp.51-58
    • /
    • 2023
  • This paper proposes a deep learning method for estimating the heart rate from facial videos. Our proposed method estimates remote photoplethysmography (rPPG) signals to predict the heart rate. Although several methods for estimating rPPG signals have been proposed, most previous methods cannot be used on low-power single-board computers due to their computational complexity. To address this problem, we construct a lightweight student model and employ a knowledge distillation technique to reduce the performance degradation relative to a deeper network model. The teacher model consists of 795k parameters, whereas the student model contains only 24k parameters; therefore, the inference time is reduced by a factor of 10. By distilling the knowledge of the intermediate feature maps of the teacher model, we improve the accuracy of the student model for estimating the heart rate. Experiments were conducted on the UBFC-rPPG dataset to demonstrate the effectiveness of the proposed method. Moreover, we collected our own dataset to verify the accuracy and processing time of the proposed method on real-world data. Experimental results on an NVIDIA Jetson Nano board demonstrate that our proposed method can infer the heart rate in real time with a mean absolute error of 2.5183 bpm.
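
The entry above distills knowledge from the intermediate feature maps of a 795k-parameter teacher into a 24k-parameter student. The PyTorch sketch below shows one common way to implement feature-map distillation, using a 1x1 projection to align channel counts and an MSE term added to the task loss; the layer choices, projection, and loss weighting here are assumptions and may differ from the paper's implementation.

    import torch
    import torch.nn as nn

    class FeatureDistillationLoss(nn.Module):
        """Distill an intermediate feature map from a teacher to a smaller student.
        A 1x1 convolution projects the student's channels onto the teacher's."""
        def __init__(self, student_channels, teacher_channels, alpha=0.5):
            super().__init__()
            self.project = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)
            self.alpha = alpha  # assumed weighting between task and distillation terms
            self.mse = nn.MSELoss()

        def forward(self, student_feat, teacher_feat, student_rppg, target_rppg):
            # Match the teacher's intermediate representation (teacher is frozen)...
            distill = self.mse(self.project(student_feat), teacher_feat.detach())
            # ...while still fitting the rPPG regression target.
            task = self.mse(student_rppg, target_rppg)
            return task + self.alpha * distill

    # Example with dummy tensors (all shapes are assumptions):
    loss_fn = FeatureDistillationLoss(student_channels=16, teacher_channels=64)
    s_feat, t_feat = torch.randn(2, 16, 32, 32), torch.randn(2, 64, 32, 32)
    s_out, target = torch.randn(2, 128), torch.randn(2, 128)
    print(loss_fn(s_feat, t_feat, s_out, target).item())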