• Title/Summary/Keyword: Face Tracking

Search results: 344 (processing time: 0.038 s)

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don;Choi, Jong-Suk;Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems
    • /
    • Vol. 5, No. 1
    • /
    • pp.61-69
    • /
    • 2007
  • In this paper, we developed not only a reliable sound localization system, including a VAD (Voice Activity Detection) component, using three microphones, but also a face tracking system using a vision camera. Moreover, we proposed a way to integrate the three systems for human-robot interaction, in order to compensate for errors in localizing a speaker and to effectively reject unnecessary speech or noise signals arriving from undesired directions. To verify the system's performance, we installed the proposed audio-visual system on a prototype robot, called IROBAA (Intelligent ROBot for Active Audition), and demonstrated how to integrate the audio-visual system.
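
As a rough illustration of the kind of audio-visual gating the abstract describes, the sketch below (Python, with hypothetical function names and an arbitrary 15-degree tolerance) accepts a speech segment only when voice activity is detected and the audio direction of arrival agrees with a visually tracked face bearing; it is not the authors' implementation.

```python
# Hypothetical gating rule combining VAD, audio DOA, and tracked face bearings.
def accept_speech(vad_active: bool,
                  audio_doa_deg: float,
                  face_bearings_deg: list[float],
                  tolerance_deg: float = 15.0) -> bool:
    """Return True only if voice activity is detected and the sound-source
    direction matches at least one visually tracked face direction."""
    if not vad_active:
        return False                      # reject segments with no detected voice
    return any(abs(((audio_doa_deg - b + 180) % 360) - 180) <= tolerance_deg
               for b in face_bearings_deg)

# Example: speech from 32 degrees is accepted because a face is tracked at 30 degrees.
print(accept_speech(True, 32.0, [30.0, 120.0]))   # True
print(accept_speech(True, 80.0, [30.0, 120.0]))   # False (no face near 80 degrees)
```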

Elimination of the Red-Eye Area using Skin Color Information

  • Kim, Kwang-Baek;Song, Doo-Heon
    • Journal of information and communication convergence engineering
    • /
    • Vol. 7, No. 2
    • /
    • pp.131-134
    • /
    • 2009
  • The red-eye effect in photography occurs when a photographic flash is used very close to the camera lens in low ambient light, often due to inexperience. Once it occurs, the photographer has to remove it with an image-editing tool, which is a time-consuming, skilled process. In this paper, we propose a new method to extract and remove such red-eye areas automatically. Our method starts by transforming RGB space into YCbCr and HSI space, and it extracts the face area using skin color information. The target red-eye area is then extracted by applying an 8-direction contour tracking algorithm and removed. The experiments show our method's effectiveness.
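
The following is a minimal sketch of the general approach (illustrative OpenCV code with assumed thresholds, not the authors' exact pipeline): skin-colored pixels are found in YCrCb space, and strongly red pixels inside that region are desaturated. The paper's 8-direction contour tracking step is replaced here by a simple mask for brevity.

```python
import cv2
import numpy as np

def remove_red_eye(bgr: np.ndarray) -> np.ndarray:
    """Reduce red-eye inside skin-colored regions of a BGR image."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Commonly used skin-color bounds in the Cr/Cb channels (an assumption; tune per dataset).
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    skin = cv2.inRange(ycrcb, lower, upper)

    b, g, r = cv2.split(bgr.astype(np.int16))
    redness = r - (g + b) // 2                  # how strongly red dominates each pixel
    red_eye = (redness > 60) & (skin > 0)       # red pixels that also lie in the skin region

    out = bgr.copy()
    # Replace the red channel with the mean of blue and green inside the detected area.
    out[red_eye, 2] = ((bgr[red_eye, 0].astype(np.int16) +
                        bgr[red_eye, 1].astype(np.int16)) // 2).astype(np.uint8)
    return out
```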

얼굴 방향에 기반을 둔 컴퓨터 화면 응시점 추적 (A Gaze Tracking based on the Head Pose in Computer Monitor)

  • 오승환;이희영
    • Proceedings of the IEEK Conference
    • /
    • Proceedings of the 2002 IEEK Summer Conference (3)
    • /
    • pp.227-230
    • /
    • 2002
  • In this paper we concentrate on the overall direction of the gaze, based on the head pose, for human-computer interaction. To determine the user's gaze direction in an image, it is important to locate the facial features exactly. For this, we binarize the input image and search for the two eyes and the mouth in the binarized image, using the similarity of each block (aspect ratio, size, and average gray value) and the geometric information of the face. To determine the head orientation, we create an imaginary plane on the lines formed by the features of the real face and the pinhole of the camera; we call it the virtual facial plane. The position of the virtual facial plane is estimated from the facial features projected onto the image plane, and the gaze direction is found from the surface normal vector of this plane. Because the study uses an ordinary PC camera, it should contribute to the practical use of gaze tracking technology.
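
A minimal sketch of the geometric idea, under the assumption that 3-D coordinates of the two eyes and the mouth are available: the coarse gaze direction is taken as the unit normal of the plane spanned by those three features (the "virtual facial plane"). The coordinates in the example are made up.

```python
import numpy as np

def facial_plane_normal(left_eye, right_eye, mouth):
    """Return the unit normal of the plane through the three facial features."""
    p0, p1, p2 = map(np.asarray, (left_eye, right_eye, mouth))
    n = np.cross(p1 - p0, p2 - p0)       # vector perpendicular to the facial plane
    n = n / np.linalg.norm(n)
    # Orient the normal toward the camera (assumed to look along +z from the origin).
    return -n if n[2] > 0 else n

# Example with made-up coordinates (units arbitrary):
print(facial_plane_normal([-3, 0, 50], [3, 0, 50], [0, -5, 52]))
```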

A Novel Computer Human Interface to Remotely Pick up Moving Human's Voice Clearly by Integrating Real-time Face Tracking and Microphones Array

  • Hiroshi Mizoguchi;Takaomi Shigehara;Yoshiyasu Goto;Hidai, Ken-ichi;Taketoshi Mishima
    • Proceedings of the ICROS Conference
    • /
    • Proceedings of the 13th ICROS Annual Conference, 1998
    • /
    • pp.75-80
    • /
    • 1998
  • This paper proposes a novel computer-human interface, named Virtual Wireless Microphone (VWM), which utilizes computer vision and signal processing. It integrates real-time face tracking and sound signal processing. VWM is intended to be used as a speech signal input method for human-computer interaction, especially for an autonomous intelligent agent that interacts with humans, such as a digital secretary. Utilizing VWM, the agent can clearly hear its human master's voice from a distance, as if a wireless microphone were placed just in front of the master.
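
One common way to realize this kind of system is delay-and-sum beamforming steered at the 3-D face position reported by the visual tracker. The sketch below assumes a known microphone geometry and crude integer-sample alignment; it illustrates the mechanism rather than the authors' implementation.

```python
import numpy as np

SOUND_SPEED = 343.0  # m/s

def delay_and_sum(signals: np.ndarray, mic_positions: np.ndarray,
                  face_position: np.ndarray, fs: float) -> np.ndarray:
    """signals: (num_mics, num_samples); mic_positions and face_position in metres."""
    dists = np.linalg.norm(mic_positions - face_position, axis=1)
    # Advance each channel so arrivals from the face position line up across microphones.
    delays = (dists - dists.min()) / SOUND_SPEED
    shifts = np.round(delays * fs).astype(int)
    out = np.zeros(signals.shape[1])
    for sig, s in zip(signals, shifts):
        out[:signals.shape[1] - s] += sig[s:]   # integer-sample alignment, then sum
    return out / len(signals)                   # average of the aligned channels
```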

Visual Modeling and Content-based Processing for Video Data Storage and Delivery

  • Hwang Jae-Jeong;Cho Sang-Gyu
    • Journal of information and communication convergence engineering
    • /
    • Vol. 3, No. 1
    • /
    • pp.56-61
    • /
    • 2005
  • In this paper, we present a video rate control scheme for storage and delivery in which time-varying viewing interest is controlled by human gaze. To track the gaze, the pupil's movement is detected using a three-step process: detecting the face region, the eye region, and the pupil point. To control bit rates, the quantization parameter (QP) is changed by considering the static parameters, the video object priority derived from pupil tracking, the target PSNR, and the weighted distortion value of the coder. As a result, we achieved a human-interfaced visual model and a corresponding region-of-interest rate control system.
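
As an illustration of region-of-interest rate control (not the paper's exact controller), the sketch below lowers the quantization parameter for macroblocks inside the gaze-derived ROI and raises it elsewhere; the offsets and the H.264-style QP range 0..51 are assumptions.

```python
def macroblock_qp(base_qp: int, in_roi: bool,
                  roi_offset: int = -4, background_offset: int = +3) -> int:
    """Adjust the per-macroblock QP and clamp it to the assumed range 0..51."""
    qp = base_qp + (roi_offset if in_roi else background_offset)
    return max(0, min(51, qp))

# Example: with base_qp=28, ROI blocks are coded at QP 24 and background blocks at QP 31.
print(macroblock_qp(28, True), macroblock_qp(28, False))   # 24 31
```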

Invisible Messenger: A System to Whisper in a Person's Ear Remotely by Integrating Visual Tracking and Speaker Array

  • Mizoguchi, Hiroshi;Kanamori, Tomohiko;Okabe, Kosuke;Hiraoka, Kazuyuki;Tanaka, Masaru;Shigehara, Takaomi;Mishima, Taketoshi
    • Proceedings of the IEEK Conference
    • /
    • IEEK 2002 ITC-CSCC -3
    • /
    • pp.1897-1900
    • /
    • 2002
  • This paper proposes a novel computer-human interface, named Invisible Messenger. It integrates face detection and tracking with speaker array signal processing. With the speaker array it is possible to form an acoustic focus at an arbitrary location, which is measured by the face tracking. The proposed system can thus whisper in a person's ear as if an invisible virtual messenger were standing beside the person. Going beyond speculative discussion, the authors have implemented a working prototype system based on the proposed idea, which is also described in this paper. To confirm the effectiveness of the proposed idea, the authors conducted experiments using the implemented system, and the experimental results demonstrate its effectiveness.
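
A small sketch of the focusing step under an assumed array geometry: each loudspeaker's output is delayed so that all wavefronts arrive at the tracked face position at the same instant, creating an acoustic focus there. The geometry and values below are illustrative.

```python
import numpy as np

SOUND_SPEED = 343.0  # m/s

def focusing_delays(speaker_positions: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Return per-speaker emission delays in seconds (the farthest speaker gets 0)."""
    dists = np.linalg.norm(speaker_positions - target, axis=1)
    return (dists.max() - dists) / SOUND_SPEED   # nearer speakers wait longer

# Example: a 4-element line array focusing 1.5 m in front of its centre.
speakers = np.array([[x, 0.0, 0.0] for x in (-0.3, -0.1, 0.1, 0.3)])
print(focusing_delays(speakers, np.array([0.0, 0.0, 1.5])))
```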

시선 추적을 활용한 패션 디자인 인지에 관한 연구 (A Study on Fashion Design Cognition Using Eye Tracking)

  • 이신영
    • Fashion & Textile Research Journal
    • /
    • Vol. 23, No. 3
    • /
    • pp.323-336
    • /
    • 2021
  • This study investigated the cognitive processing of fashion design images by tracking eye activity, and confirmed differences in the cognitive process and gaze activity according to image elements. The results are as follows. First, a difference was found between groups in the gaze time for each section, according to the model and design. Although model diversity is an important factor in attracting observers' interest, a simpler model was deemed more effective for observing the design. Second, examining the differences by segment in the gaze weight of each image area showed differences for each group. When a similar type of model is repeated, the proportion of time spent recognizing the face decreases and the proportion spent recognizing the design increases; conversely, when model diversity is high, a comparable amount of time is devoted to recognizing the model's face throughout. Additionally, gaze activity differed when the same design was recognized on different types of model. These results confirm the importance of the model as an image recognition factor in fashion design. In the fashion industry, it is important to find cognitive factors that attract and retain consumers' attention; if the design recognition effect is maximized by identifying points where these findings can be applied in service, the brand's sustainability can be expected to improve even in the rapidly changing fashion industry.

동영상에서 적응적 배경영상을 이용한 실시간 객체 추적 (Real-time Object Tracking using Adaptive Background Image in Video)

  • 최내원;지정규
    • Journal of Korea Multimedia Society
    • /
    • Vol. 6, No. 3
    • /
    • pp.409-418
    • /
    • 2003
  • Object tracking in video has been one of the topics of interest in computer vision and in many practical application fields for several years. In this paper, we propose a real-time object tracking and face region extraction method that can be applied to security and surveillance systems. To this end, we restrict ourselves to environments in which the camera is fixed and the background changes very little, and we detect the object's position and track its movement using the difference between the input image and the background image. For more stable object extraction, an adaptive background image is generated, and when locating the object, its interior points are extracted with a mesh search method. An MBR (Minimum Bounding Rectangle) is set from the extracted points, enabling real-time tracking of the object. In addition, the usefulness of security and surveillance systems is improved by extracting the face region within the established MBR. Experiments show fast, real-time object tracking under the restricted environment.
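
The background-subtraction and MBR part of this pipeline can be sketched as follows (illustrative parameters, Python/OpenCV; the mesh search for interior points and the face-region extraction inside the MBR are omitted):

```python
import cv2
import numpy as np

ALPHA = 0.02   # background adaptation rate (assumption)
THRESH = 25    # difference threshold (assumption)

def track_object(frame_gray: np.ndarray, background: np.ndarray):
    """Return the object's MBR (x, y, w, h) or None, plus the updated background."""
    diff = cv2.absdiff(frame_gray, background.astype(np.uint8))
    _, mask = cv2.threshold(diff, THRESH, 255, cv2.THRESH_BINARY)
    # Adapt the background slowly so gradual scene changes are absorbed.
    background = (1 - ALPHA) * background + ALPHA * frame_gray
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None, background
    x, y = xs.min(), ys.min()
    return (x, y, xs.max() - x + 1, ys.max() - y + 1), background
```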

화상 회의 인터페이스를 위한 눈 위치 검출 (Eye Location Algorithm For Natural Video-Conferencing)

  • 이재준;최정일;이필규
    • The Transactions of the Korea Information Processing Society
    • /
    • Vol. 4, No. 12
    • /
    • pp.3211-3218
    • /
    • 1997
  • In existing video-conferencing systems the camera is fixed, which restricts the user's movement and makes the interaction unnatural. To remove this constraint, the movement of the face must be tracked; tracking the whole face as a single feature, however, is problematic because the entire face is hard to define as one feature and the computation takes a long time. It is therefore preferable to track the face efficiently using a few facial feature points. This paper discusses an effective method for eye location detection, an essential element of the automatic face tracking system needed for a natural user interface in video conferencing. The eyes are the most distinct and simple features in the face and therefore provide the most important information for tracking it. The proposed algorithm is applied to face candidate regions obtained from a face candidate extraction step; unlike existing methods, it imposes no particular constraints on illumination, face size, or eyeglasses. It also showed good results in on-line experiments in a video-conferencing environment.
