• Title/Abstract/Keywords: lip tracking

Search results: 11 items

An Experimental Multimodal Command Control Interface for Car Navigation Systems

  • Kim, Kyungnam;Ko, Jong-Gook;Choi, SeungHo;Kim, Jin-Young;Kim, Ki-Jung
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 ITC-CSCC -1 / pp.249-252 / 2000
  • An experimental multimodal system combining natural input modes such as speech, lip movement, and gaze is proposed in this paper. It benefits from novel human-computer interaction (HCI) modalities and from multimodal integration to tackle the HCI bottleneck. The system allows the user to select menu items on the screen by employing speech recognition, lip reading, and gaze tracking components in parallel, with face tracking as a supplementary component to gaze tracking and lip movement analysis. These key components are reviewed, and preliminary results are shown with multimodal integration and user testing on the prototype system. Notably, the system equipped with gaze tracking and lip reading is very effective in noisy environments, where the speech recognition rate is low and unstable; a minimal late-fusion sketch of this idea is given below. Our long-term interest is to build a user interface embedded in a commercial car navigation system (CNS).

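The abstract does not spell out how the parallel recognizers are combined; the following is a minimal late-fusion sketch in Python with purely illustrative weights and menu items, showing one plausible way to down-weight speech as acoustic noise rises so that lip reading and gaze dominate the decision.

```python
# Hypothetical late-fusion sketch (not the paper's scheme): each modality scores
# every menu item, and the scores are merged with weights that shrink the speech
# contribution as the acoustic noise level rises.
import numpy as np

MENU_ITEMS = ["map", "zoom", "route", "cancel"]  # illustrative menu

def fuse_modalities(speech_scores, lip_scores, gaze_scores, noise_level):
    """Combine per-item scores from three recognizers into one selection.

    noise_level is in [0, 1]: 0 = quiet cabin, 1 = very noisy. The speech weight
    drops linearly with noise, mirroring the observation that lip reading and
    gaze stay reliable when speech recognition becomes unstable.
    """
    w_speech = 0.6 * (1.0 - noise_level)          # assumed base weights
    w_lip, w_gaze = 0.25, 0.15
    fused = (w_speech * np.asarray(speech_scores)
             + w_lip * np.asarray(lip_scores)
             + w_gaze * np.asarray(gaze_scores)) / (w_speech + w_lip + w_gaze)
    return MENU_ITEMS[int(np.argmax(fused))], fused

# Noisy cabin: the speech recognizer is unsure, but lips and gaze agree on "zoom".
item, scores = fuse_modalities([0.3, 0.3, 0.2, 0.2],
                               [0.1, 0.7, 0.1, 0.1],
                               [0.1, 0.6, 0.2, 0.1],
                               noise_level=0.8)
print(item)  # -> "zoom"
```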

Lip Detection using Color Distribution and Support Vector Machine for Visual Feature Extraction of Bimodal Speech Recognition System

  • 정지년;양현승
    • 한국정보과학회논문지:소프트웨어및응용 / Vol.31 No.4 / pp.403-410 / 2004
  • Bimodal speech recognizers are designed to improve recognition performance in noisy environments. In a bimodal speech recognizer, visual feature extraction from the image plays a crucial role, and lip detection is an important prerequisite for it. This paper proposes a lip detection method for visual feature extraction that uses color distributions and an SVM. The proposed method learns the face and lip color distributions to find an initial lip position quickly, and then locates the lips precisely with an SVM, so that the lip position is found both quickly and accurately; a rough two-stage sketch of this recipe is given below. Experiments showed that the method is suitable for use in a bimodal speech recognizer.
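
As a reading aid only, here is a rough two-stage sketch in the spirit of the abstract: a crude lip-colour likelihood gives a fast initial location, and an SVM trained on lip/non-lip patches refines it. The likelihood model, patch size, and search window are assumptions, not the paper's actual design.

```python
# Stage 1: coarse lip location from a crude colour likelihood.
# Stage 2: refinement by scoring nearby patches with a trained SVM.
# All modelling choices below are illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

PATCH = 16  # assumed candidate patch size (pixels)

def lip_likelihood(rgb):
    """Very crude lip-colour likelihood: relative redness of each pixel."""
    rgb = rgb.astype(np.float32) + 1e-6
    return rgb[..., 0] / rgb.sum(axis=-1)

def initial_lip_location(rgb):
    """Coarse location = centroid of the most lip-coloured pixels."""
    lik = lip_likelihood(rgb)
    ys, xs = np.nonzero(lik > np.percentile(lik, 99))
    return int(ys.mean()), int(xs.mean())

def refine_with_svm(gray, svm, y0, x0, search=8):
    """Slide a small window around the coarse location and keep the patch
    the SVM scores highest as 'lip'."""
    best_score, best_pos = -np.inf, (y0, x0)
    for dy in range(-search, search + 1, 2):
        for dx in range(-search, search + 1, 2):
            y, x = y0 + dy, x0 + dx
            patch = gray[y:y + PATCH, x:x + PATCH]
            if patch.shape != (PATCH, PATCH):
                continue
            score = svm.decision_function(patch.reshape(1, -1))[0]
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

# The SVM is assumed to be trained beforehand on flattened lip / non-lip patches:
#   svm = SVC(kernel="rbf").fit(train_patches, train_labels)
```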

Word-boundary and rate effects on upper and lower lip movements in the articulation of the bilabial stop /p/ in Korean

  • Son, Minjung
    • 말소리와 음성과학 / Vol.10 No.1 / pp.23-31 / 2018
  • In this study, we examined how the upper and lower lips articulate to produce labial /p/. Using electromagnetic midsagittal articulography, we collected flesh-point tracking movement data from eight native speakers of Seoul Korean (five females and three males). Individual articulatory movements in /p/ were examined in terms of minimum vertical upper lip position, maximum vertical lower lip position, and the corresponding vertical upper lip position aligned with maximum vertical lower lip position. Using linear mixed-effects models, we tested two factors (word boundary [across-word vs. within-word] and speech rate [comfortable vs. fast]) and their interaction, treating subjects as random effects; a minimal sketch of such a model is given below. The results are summarized as follows. First, maximum lower lip position varied with word boundary and speech rate, but no interaction was detected; in particular, it was lower (i.e., less constricted, or more reduced) in the fast-rate condition and in the across-word boundary condition. Second, minimum upper lip position, as well as the upper lip position measured at the time of maximum lower lip position, varied only with word boundary, being consistently lower in the across-word condition. We thus provide further empirical evidence of lower lip movement that is sensitive both to word boundary (a linguistic factor) and to speech rate (a paralinguistic factor), supporting the traditional idea that the lower lip is an actively moving articulator. Sensitivity of upper lip movement to word boundary is also observed, which counters the traditional idea that the upper lip is the target area and therefore presupposed to be immobile. Taken together, the lip aperture gesture, which takes both upper and lower lip vertical movements into account, is a better indicator than the traditional approach that distinguishes a movable articulator from the target place. With respect to speech rate, the results pattern with cross-linguistic lenition-related allophonic variation, which is known to be more sensitive to a fast rate.
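
For readers unfamiliar with this kind of analysis, the following is a minimal sketch of a linear mixed-effects model with the two fixed factors, their interaction, and a by-speaker random intercept. It uses statsmodels on synthetic toy data; the column names, effect sizes, and units are assumptions, not the study's data.

```python
# Minimal linear mixed-effects sketch: boundary * rate as fixed effects,
# speaker as a random intercept. Toy data generated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for subj in [f"s{i}" for i in range(1, 9)]:          # eight speakers
    subj_offset = rng.normal(0, 0.2)                  # per-speaker random offset
    for boundary in ["across", "within"]:
        for rate in ["comfortable", "fast"]:
            # assumed pattern: lower (smaller) position for fast rate and
            # across-word boundary, as reported for the lower lip
            y = (5.0 - 0.4 * (rate == "fast") - 0.3 * (boundary == "across")
                 + subj_offset + rng.normal(0, 0.1))
            rows.append({"subject": subj, "boundary": boundary,
                         "rate": rate, "max_lower_lip": y})
df = pd.DataFrame(rows)

model = smf.mixedlm("max_lower_lip ~ boundary * rate",
                    data=df, groups=df["subject"])
print(model.fit().summary())
```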

Robust Lip Extraction and Tracking of the Mouth Region

  • Min, Duk-Soo;Kim, Jin-Young;Park, Seung-Ho;Kim, Ki-Jung
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 ITC-CSCC -2 / pp.927-930 / 2000
  • Visual features of the lip area play an important role in visual speech information, so correctly locating the lip area as the region of interest (ROI) matters. In this paper, we propose a robust and fast method for locating the mouth corners, and we define a region of interest around the mouth during speech. The method uses only horizontal and vertical image operators over the mouth area, and the search is performed by fitting an ROI template to the image while controlling for illumination. Most lip-extraction algorithms are sensitive to image luminosity; we therefore work on a binary image obtained with a variable threshold that adapts to the illumination condition. The threshold is estimated by multiple linear regression analysis (MLRA) over divided 2D spatial regions. In this way the mouth ROI is extracted automatically and robustly with respect to illumination; an illustrative sketch of this general recipe is given below.

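The sketch below illustrates the general recipe only: horizontal and vertical gradient operators over a mouth ROI, plus a binarization threshold predicted from simple illumination statistics by a linear model standing in for the paper's MLRA step. The coefficients and corner rule are assumptions.

```python
# Illustrative sketch: gradient operators + an illumination-adaptive threshold.
# The regression coefficients and corner heuristic are assumptions.
import cv2
import numpy as np

# Hypothetical linear model mapping illumination statistics (mean, std of grey
# values in the mouth ROI) to a binarization threshold, in place of MLRA.
W = np.array([0.55, 0.20])
B = 12.0

def adaptive_threshold(gray_roi):
    """Predict a threshold from illumination statistics of the ROI."""
    feats = np.array([gray_roi.mean(), gray_roi.std()])
    return float(W @ feats + B)

def mouth_corners(gray_roi):
    """Binarize the gradient magnitude of the mouth ROI and take the
    leftmost/rightmost lip pixels as rough mouth corners."""
    gx = cv2.Sobel(gray_roi, cv2.CV_32F, 1, 0, ksize=3)   # horizontal operator
    gy = cv2.Sobel(gray_roi, cv2.CV_32F, 0, 1, ksize=3)   # vertical operator
    mag = cv2.magnitude(gx, gy)
    mask = mag > adaptive_threshold(gray_roi)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    left = (int(xs.min()), int(ys[xs.argmin()]))
    right = (int(xs.max()), int(ys[xs.argmax()]))
    return left, right  # (x, y) pairs
```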

Speech Activity Detection Using Lip Movement Image Signals

  • 김응규
    • 융합신호처리학회논문지 / Vol.11 No.4 / pp.289-297 / 2010
  • This paper presents a method for preventing external acoustic noise from being mistaken for speech during the speech-activity detection stage of speech recognition, by checking the speaker's lip-movement image signal in addition to the dynamic acoustic energy that may enter the detector. First, consecutive images are acquired with a PC camera and the presence of lip movement is identified. Next, the lip-movement image signal data are stored in shared memory and shared with the speech recognition process. In the speech-activity detection step, which is the preprocessing stage of speech recognition, the data stored in shared memory are checked to verify whether the detected acoustic energy was produced by the speaker's utterance; a simplified sketch of this shared-memory gating is given below. Finally, experiments with the coupled speech recognizer and image processor confirmed that when the speaker talks while facing the camera, the coupled processing proceeds normally up to the output of the recognition result, whereas when the speaker talks without facing the camera, the coupled system does not output a recognition result. In addition, the discriminability of lip-movement image tracking was improved by replacing the initial lip-movement feature values and initial template images obtained offline with feature values and template images extracted online. An image-processing testbed was built to visually inspect the lip-movement tracking process and to analyze the related parameters in real time. When the speech and image processing systems were coupled, a coupling rate of about 99.3% was achieved under various lighting conditions.
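
To make the gating idea concrete, here is a simplified Python sketch in which the vision process writes a lip-motion flag into shared memory and the speech front end accepts an acoustic energy burst as speech only when that flag is set. The segment name, flag layout, and energy threshold are assumptions, not the paper's implementation.

```python
# Simplified audio-visual gating sketch: shared-memory flag written by the
# vision process, read by the speech-activity detector. All names and
# thresholds are illustrative assumptions.
from multiprocessing import shared_memory
import numpy as np

FLAG_NAME = "lip_motion_flag"   # hypothetical shared-memory segment name
ENERGY_THRESHOLD = 0.02         # toy per-frame acoustic energy threshold

def create_flag():
    """Vision process: allocate a 1-byte shared flag (0 = still, 1 = moving)."""
    shm = shared_memory.SharedMemory(name=FLAG_NAME, create=True, size=1)
    shm.buf[0] = 0
    return shm

def set_lip_motion(shm, moving):
    """Vision process: update the flag after analysing the latest camera frame."""
    shm.buf[0] = 1 if moving else 0

def is_speech(audio_frame):
    """Speech front end: accept the frame as speech only if the acoustic
    energy is high AND the vision process currently reports lip motion."""
    shm = shared_memory.SharedMemory(name=FLAG_NAME)
    try:
        lips_moving = bool(shm.buf[0])
    finally:
        shm.close()
    energy = float(np.mean(np.asarray(audio_frame, dtype=np.float64) ** 2))
    return energy > ENERGY_THRESHOLD and lips_moving
```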

Functions and Driving Mechanisms for Face Robot Buddy

  • 오경균;장명수;김승종;박신석
    • 로봇학회논문지 / Vol.3 No.4 / pp.270-277 / 2008
  • The development of a face robot basically targets very natural human-robot interaction (HRI), especially emotional interaction, and so does the face robot introduced in this paper, named Buddy. Since Buddy was developed for a mobile service robot, it does not have a life-like face such as a human's or an animal's, but a typically robot-like face with hard skin, which may be suitable for mass production. In addition, its structure and mechanism need to be simple and its production cost low. This paper introduces the mechanisms and functions of the mobile face robot Buddy, which can take on natural and precise facial expressions and make dynamic gestures driven by one laptop PC. Buddy can also perform lip-sync, eye contact, and face tracking for lifelike interaction. By adopting a customized emotional reaction decision model, Buddy can form its own personality, emotion, and motive from various sensor inputs. Based on this model, Buddy can interact properly with users and perform real-time learning using personality factors. The interaction performance of Buddy is successfully demonstrated by experiments and simulations.


Understanding the Importance of Presenting Facial Expressions of an Avatar in Virtual Reality

  • Kim, Kyulee;Joh, Hwayeon;Kim, Yeojin;Park, Sohyeon;Oh, Uran
    • International Journal of Advanced Smart Convergence / Vol.11 No.4 / pp.120-128 / 2022
  • While online social interactions have become more prevalent with the increased popularity of Metaverse platforms, the effects of facial expressions in virtual reality (VR), which are known to play a key role in social contexts, have received little study. To understand the importance of presenting the facial expressions of a virtual avatar under different contexts, we conducted a user study with 24 participants who were asked to have a conversation and play a charades game with an avatar with and without facial expressions. The results show that participants tend to gaze at the face region for the majority of the time when having a conversation, or when trying to guess emotion-related keywords during charades, regardless of the presence of facial expressions. Nevertheless, we confirmed that participants prefer to see facial expressions in virtual reality, as in real-world scenarios, because it helps them better understand the context and have more immersive and focused experiences.

A Tracking Method of Robust Lip Movement Image Regions for Blocking the External Acoustic Noise

  • 김응규
    • 대한전기학회:학술대회논문집 / 대한전기학회 2009년도 제40회 하계학술대회 / pp.1913-1914 / 2009
  • This paper proposes a method for robustly tracking the lip-movement image region in order to block external acoustic noise, using a coupled audio/video system under varying lighting conditions. To track the lip-movement region robustly under different illumination, standard lip-movement images were collected online, and lip-movement image features that adapt to various lighting conditions were extracted. At the same time, online template images were acquired and used for template matching; a compact template-matching sketch is given below. When the speech and image processing systems were coupled, the coupling rate reached 99.3% under various lighting conditions, and speech recognition triggered by acoustic noise alone could be blocked at the source.

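As an illustration of the template-matching step mentioned in the abstract, the sketch below matches a lip template against each frame with a normalised correlation score, which tolerates moderate illumination change, and blends the matched region back into the template as a simple online update. The window handling and update rule are assumptions.

```python
# Template-based lip-region tracking sketch with a simple online template
# update. The update rule and parameters are illustrative assumptions.
import cv2
import numpy as np

def track_lip_region(gray_frame, template, update_alpha=0.1):
    """Return the best-matching lip ROI, its score, and an updated template."""
    res = cv2.matchTemplate(gray_frame, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)        # best match location (x, y)
    x, y = top_left
    h, w = template.shape
    roi = gray_frame[y:y + h, x:x + w]
    # Blend the current observation into the template (online adaptation).
    updated = cv2.addWeighted(template.astype(np.float32), 1.0 - update_alpha,
                              roi.astype(np.float32), update_alpha, 0.0)
    return (x, y, w, h), score, updated.astype(template.dtype)
```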

Facial Feature Tracking from a General USB PC Camera

  • 양정석;이칠우
    • 한국정보과학회:학술대회논문집 / 한국정보과학회 2001년도 가을 학술발표논문집 Vol.28 No.2 (2) / pp.412-414 / 2001
  • In this paper, we describe a real-time facial feature tracker that uses only a general USB PC camera without a frame grabber. The system achieves 8+ frames per second without any low-level library support and tracks the pupils, nostrils, and corners of the lips. The signal from the USB camera is in YUV 4:2:0 format; we convert it to the RGB color model to display the image, interpolate the V channel to extract the facial region, and analyze 2D blob features in the Y (luminance) channel under geometric restrictions to locate each facial feature within the detected facial region; a rough sketch of this pipeline is given below. Our method is simple and intuitive enough to make the system work in real time.

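The following is a rough sketch of such a pipeline under stated assumptions: a planar YUV 4:2:0 (I420) frame is converted to BGR for display, the V plane is upsampled and thresholded to get a coarse face mask, and dark blobs in the Y plane inside that mask are kept as candidate features (pupils, nostrils, lip corners). The plane layout, chroma range, and blob-size filter are assumptions.

```python
# Rough sketch: I420 -> BGR for display, V plane for a coarse face mask,
# dark blobs in the Y plane as feature candidates. Thresholds are assumptions.
import cv2
import numpy as np

def split_i420(frame_i420, width, height):
    """Return the Y plane and the V plane (upsampled to full resolution)."""
    y = frame_i420[:height, :width]
    v = frame_i420[height + height // 4:].reshape(height // 2, width // 2)
    v_full = cv2.resize(v, (width, height), interpolation=cv2.INTER_LINEAR)
    return y, v_full

def facial_feature_candidates(frame_i420, width, height):
    bgr = cv2.cvtColor(frame_i420, cv2.COLOR_YUV2BGR_I420)      # for display only
    y, v_full = split_i420(frame_i420, width, height)
    face_mask = (v_full >= 140) & (v_full <= 175)                # skin-ish chroma range (assumed)
    dark = y < 60                                                # dark blobs: pupils, nostrils, lip corners
    candidates = (face_mask & dark).astype(np.uint8)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(candidates)
    # Keep only small blobs; the area bounds are illustrative.
    feats = [tuple(map(int, centroids[i])) for i in range(1, n)
             if 5 <= stats[i, cv2.CC_STAT_AREA] <= 400]
    return feats, bgr
```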