Title/Abstract/Keyword: Visual and Audio System


A 3D Audio-Visual Animated Agent for Expressive Conversational Question Answering

  • Martin, J.C.; Jacquemin, C.; Pointal, L.; Katz, B.
    • 한국정보컨버전스학회 학술대회논문집 / 한국정보컨버전스학회 2008 International Conference on Information Convergence / pp.53-56 / 2008
  • This paper reports on the ACQA (Animated agent for Conversational Question Answering) project conducted at LIMSI. The aim is to design an expressive animated conversational agent (ACA) for conducting research along two main lines: (1) perceptual experiments (e.g., perception of expressivity and 3D movements in both the audio and visual channels); (2) design of human-computer interfaces requiring head models at different resolutions and the integration of the talking head into virtual scenes. The target application of this expressive ACA is RITEL, a real-time speech-based question answering system developed at LIMSI. The architecture of the system is based on distributed modules exchanging messages through a network protocol (a hypothetical sketch of such an exchange follows this entry). The main components of the system are: RITEL, a question answering system searching raw text, which produces a text (the answer) together with attitudinal information; this attitudinal information is then processed to deliver expressive tags; and the text is converted into phoneme, viseme, and prosodic descriptions. Audio speech is generated by the LIMSI selection-concatenation text-to-speech engine. Visual speech uses MPEG-4 keypoint-based animation and is rendered in real time by Virtual Choreographer (VirChor), a GPU-based 3D engine. Finally, visual and audio speech is played in a 3D audio-visual scene. The project also puts considerable effort into realistic visual and audio 3D rendering: a new model of phoneme-dependent human radiation patterns is included in the speech synthesis system, so that the ACA can move in the virtual scene with realistic 3D visual and audio rendering.

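As a gloss on the distributed architecture above, the sketch below shows what one module-to-module exchange could look like, assuming a hypothetical length-prefixed JSON protocol and hypothetical module names; the abstract does not specify the actual RITEL/VirChor wire format.

```python
import json
import socket

def send_message(sock: socket.socket, module: str, payload: dict) -> None:
    """Send one length-prefixed JSON message to a peer module (hypothetical protocol)."""
    data = json.dumps({"module": module, "payload": payload}).encode("utf-8")
    sock.sendall(len(data).to_bytes(4, "big") + data)

# Example: the QA stage hands its answer plus attitudinal information
# to a downstream stage, which would turn it into expressive tags.
answer = {
    "text": "The answer text produced by the QA module.",
    "attitude": {"certainty": 0.9, "valence": "neutral"},  # illustrative fields
}
# with socket.create_connection(("expressivity-module", 9000)) as sock:
#     send_message(sock, "qa", answer)
```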

SWAT의 시청각 매뉴얼을 통한 학습 효과 분석 (Analysis of learning effects using audio-visual manual of SWAT)

  • 이주영; 김태호; 류지철; 강현우; 금동혁; 우원희; 장춘화; 최중대; 임경재
    • 농업과학연구 / Vol.38 No.4 / pp.731-737 / 2011
  • In modern society, GIS-based decision support systems have been used to evaluate environmental issues and changes, owing to the spatial and temporal analysis capabilities of GIS. Without a proper manual, however, such systems cannot achieve their intended goals. In this study, an audio-visual SWAT tutorial system was developed and its effectiveness in learning the SWAT model was evaluated. Learning effects were analyzed through an in-class demonstration and a survey. The survey was administered to 3rd grade students with and without the audio-visual materials, using 30 questionnaire items: 3 on respondent background, 5 on the effects of the audio-visual materials, and 12 on the effect of having the manual while learning the model. The group without the audio-visual manual scored 2.98 out of 5, while the group with it scored 4.05 out of 5, indicating better content delivery with the audio-visual learning materials. As this study shows, audio-visual learning materials should be developed and used for various computer-based modeling systems.

A Novel Integration Scheme for Audio Visual Speech Recognition

  • Pham, Than Trung; Kim, Jin-Young; Na, Seung-You
    • 한국음향학회지 / Vol.28 No.8 / pp.832-842 / 2009
  • Automatic speech recognition (ASR) has been successfully applied to many real human-computer interaction (HCI) applications; however, its performance tends to degrade significantly in noisy environments. Audio-visual speech recognition (AVSR), which combines the acoustic signal with lip motion, has recently attracted attention for its robustness to noise. In this paper, we describe a novel integration scheme for AVSR based on a late integration approach. First, we introduce a robust reliability measurement for the audio and visual modalities using model-based and signal-based information: the model-based information measures the confusability of the vocabulary, while the signal-based information estimates the noise level. Second, the output probabilities of the audio and visual speech recognizers are each normalized before the final integration step, which combines them in the normalized output space using the estimated weights (a minimal sketch of this fusion step follows this entry). We evaluate the proposed method on a Korean isolated-word recognition system. The experimental results demonstrate the effectiveness and feasibility of the proposed system compared to conventional systems.
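A minimal sketch of the late-integration step described above, assuming per-word log-probabilities from each recognizer and a single reliability weight; the paper's own weight estimation (vocabulary confusability plus noise level) is more elaborate.

```python
import numpy as np

def late_fusion(audio_logp: np.ndarray, visual_logp: np.ndarray,
                audio_weight: float) -> int:
    """Combine normalized recognizer outputs with a reliability weight.

    audio_logp / visual_logp: per-word log-probabilities over the same
    vocabulary; audio_weight: a value in [0, 1], e.g. derived from an
    estimated noise level.
    """
    def normalize(logp: np.ndarray) -> np.ndarray:
        # Map each recognizer's scores into a common output space.
        p = np.exp(logp - logp.max())
        return np.log(p / p.sum())

    combined = (audio_weight * normalize(audio_logp)
                + (1.0 - audio_weight) * normalize(visual_logp))
    return int(np.argmax(combined))  # index of the recognized word
```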

이미지를 활용한 오디오-비쥬얼 시스템 구성 (Configuration of Audio-Visual System using Visual Image)

  • 서준석; 홍성대; 박진완
    • 한국콘텐츠학회논문지 / Vol.8 No.6 / pp.121-129 / 2008
  • Expressing information through sound starts from the problem of how to draw a concrete form out of a medium whose nature is intangible. In this process, an audio-visual system built around sound as a mediating material takes on the role of linking the sense organs, turning auditory material into visual expression; visualizing hearing thus amounts to giving concrete shape to a non-concrete sense. In expressing works in the form of an audio-visual system, existing approaches based on programmed, non-regular procedural imagery, whether dynamic or static, can run into expressive limits imposed by their restricted means of visual output; an audio-visual system based on dynamic images, by contrast, can draw out a variety of expressive results with sound as the medium (an illustrative sound-to-image mapping follows this entry). This paper presents, through an audio-visual system using dynamic images, methods for visualizing diverse auditory material and a new alternative for sound-based animation.
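As an illustration of sound-to-image mapping of the kind this abstract discusses, the sketch below derives two hypothetical drawing parameters from one audio frame; the paper's actual mapping is not specified here.

```python
import numpy as np

def audio_to_visual_params(frame: np.ndarray, sr: int) -> dict:
    """Map one audio frame to illustrative drawing parameters.

    Hypothetical mapping: frame energy drives image scale and the
    spectral centroid drives hue; a real system would drive its
    dynamic imagery from richer features.
    """
    window = np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(frame * window))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    energy = float(np.sqrt(np.mean(frame ** 2)))            # loudness
    centroid = float((freqs * spectrum).sum() / (spectrum.sum() + 1e-9))
    return {"scale": 1.0 + 4.0 * energy, "hue": centroid / (sr / 2)}
```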

Human-Robot Interaction in Real Environments by Audio-Visual Integration

  • Kim, Hyun-Don; Choi, Jong-Suk; Kim, Mun-Sang
    • International Journal of Control, Automation, and Systems / Vol.5 No.1 / pp.61-69 / 2007
  • In this paper, we developed a reliable sound localization system, including a VAD (Voice Activity Detection) component, using three microphones, as well as a face tracking system using a vision camera. Moreover, we proposed a way to integrate the three systems for human-robot interaction, both to compensate for errors in localizing a speaker and to effectively reject unwanted speech or noise signals arriving from undesired directions (a simplified microphone-pair localization sketch follows this entry). To verify the system's performance, we installed the proposed audio-visual system on a prototype robot called IROBAA (Intelligent ROBot for Active Audition) and demonstrated how the audio-visual system is integrated.
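The sketch below shows the cross-correlation step behind one common microphone-pair localization scheme, under the simplifying assumptions noted in the comments; the paper's three-microphone system and its audio-visual integration are more involved.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def pair_direction(mic_a: np.ndarray, mic_b: np.ndarray,
                   sr: int, mic_distance: float) -> float:
    """Estimate the arrival angle of a sound from one microphone pair.

    Cross-correlation gives the inter-microphone delay; the angle then
    follows from delay * speed_of_sound / mic_distance (far-field
    assumption). A three-microphone array repeats this over pairs to
    resolve front/back ambiguity, and a VAD gates which frames are used.
    """
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = int(corr.argmax()) - (len(mic_b) - 1)   # delay in samples
    delay = lag / sr
    cos_theta = np.clip(SPEED_OF_SOUND * delay / mic_distance, -1.0, 1.0)
    return float(np.degrees(np.arccos(cos_theta)))
```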

Robust Person Identification Using Optimal Reliability in Audio-Visual Information Fusion

  • Tariquzzaman, Md.; Kim, Jin-Young; Na, Seung-You; Choi, Seung-Ho
    • The Journal of the Acoustical Society of Korea / Vol.28 No.3E / pp.109-117 / 2009
  • Reliable identity recognition in real environments is a key issue in human-computer interaction (HCI). In this paper, we present a robust person identification system based on a score-based optimal reliability measure for the audio and visual modalities. We propose an extension of the modified reliability function that introduces optimizing parameters for both the audio and visual modalities. To degrade the visual signals, we applied JPEG compression to the test images; to create a mismatch between the enrollment and test sessions, acoustic babble noise and artificial illumination were added to the test audio and visual signals, respectively. Local PCA was used on both modalities to reduce the dimension of the feature vectors. We applied a swarm intelligence algorithm, particle swarm optimization (PSO), to tune the modified reliability function's optimizing parameters (a simplified fusion sketch follows this entry). The person identification experiments were performed on the VidTimit DB. Experimental results show that the proposed optimal reliability measures improved identification accuracy by 7.73% and 8.18%, respectively, under varied illumination directions on the visual signal and babble noise on the audio signal, in comparison with the best classifier in the fusion system, while maintaining the modality reliability statistics, thus verifying the consistency of the proposed extension.
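A simplified stand-in for the reliability-weighted score fusion described above; the sigmoid-of-margin reliability is illustrative only, with gamma playing the role of the optimizing parameter the authors tune with PSO.

```python
import numpy as np

def reliability(scores: np.ndarray, gamma: float) -> float:
    """Score-based reliability: how much the best candidate stands out.

    This sigmoid-of-margin form is an illustrative stand-in for the
    paper's modified reliability function; gamma is the tuned parameter.
    """
    s = np.sort(scores)[::-1]
    margin = s[0] - s[1:].mean()                 # best-vs-rest margin
    return float(1.0 / (1.0 + np.exp(-gamma * margin)))

def identify(audio_scores: np.ndarray, visual_scores: np.ndarray,
             gamma_a: float, gamma_v: float) -> int:
    """Fuse per-identity match scores with reliability-based weights."""
    w_a = reliability(audio_scores, gamma_a)
    w_v = reliability(visual_scores, gamma_v)
    w_a, w_v = w_a / (w_a + w_v), w_v / (w_a + w_v)  # normalize weights
    fused = w_a * audio_scores + w_v * visual_scores
    return int(np.argmax(fused))                 # best-matching identity
```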

시청각기록물의 기술요소 확장에 관한 연구 (A Study on the Extension of the Description Elements for Audio-visual Archives)

  • 남영준; 문정현
    • 한국비블리아학회지 / Vol.21 No.4 / pp.67-80 / 2010
  • With the development of the information industry and the emergence of diverse recording media, the production and use of audio-visual records have increased sharply, yet audio-visual records are still treated as separate materials of merely incidental value. Institutions holding audio-visual records show considerable weaknesses in areas such as the variety of formats held and their storage methods, and because each institution manages them differently, users have difficulty searching for and using audio-visual records. This study therefore examined the feasibility of integrated management of audio-visual records through a comparative analysis of the description elements used by major domestic institutions. On this basis, it identified each institution's metadata elements and the potential for integrated management across institutions, suggested how each institution could provide efficient management and retrieval services for audio-visual records and benefit from their use, and proposed improvements to an integrated set of metadata description elements for audio-visual records (a hypothetical record structure follows this entry).
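To make the idea of an integrated element set concrete, here is a hypothetical record structure; the field names are illustrative and are not the element set the paper proposes.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AVRecord:
    """One audio-visual record; element names are illustrative only."""
    identifier: str
    title: str
    creator: str
    date_created: str
    media_type: str                      # e.g. "photograph", "video"
    duration_seconds: Optional[float] = None
    carrier_format: str = ""             # e.g. "Betacam", "MP4"
    rights: str = ""
    related_records: List[str] = field(default_factory=list)

record = AVRecord(
    identifier="AV-2010-0001",
    title="Opening ceremony footage",
    creator="Public Relations Office",
    date_created="2010-04-12",
    media_type="video",
    duration_seconds=312.0,
    carrier_format="MP4 (digitized from Betacam)",
)
```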

청각 및 시각 정보를 이용한 강인한 음성 인식 시스템의 구현 (Constructing a Noise-Robust Speech Recognition System using Acoustic and Visual Information)

  • 이종석; 박철훈
    • 제어로봇시스템학회논문지 / Vol.13 No.8 / pp.719-725 / 2007
  • In this paper, we present an audio-visual speech recognition system for noise-robust human-computer interaction. Unlike usual speech recognition systems, our system utilizes the visual signal containing the speaker's lip movements along with the acoustic signal, to obtain speech recognition performance that is robust against environmental noise. The procedures of acoustic speech processing, visual speech processing, and audio-visual integration are described in detail (a sketch of one common visual front end follows this entry). Experimental results demonstrate that, by exploiting the complementary nature of the two signals, the constructed system significantly enhances recognition performance in noisy circumstances compared to acoustic-only recognition.
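As one example of the visual front end such systems commonly use (the abstract does not detail this paper's exact visual features), the sketch below projects a cropped lip image onto a PCA basis.

```python
import numpy as np

def lip_features(lip_roi: np.ndarray, basis: np.ndarray,
                 mean: np.ndarray) -> np.ndarray:
    """Project a cropped grayscale lip image onto a PCA basis.

    lip_roi: HxW lip region already located and cropped; basis (D x k)
    and mean (D,) come from PCA fitted on training lip images. This is
    one common AVSR visual front end, shown here only as an assumption.
    """
    x = lip_roi.astype(np.float64).ravel()       # flatten to D = H*W
    return basis.T @ (x - mean)                  # k-dimensional feature
```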

시청각(사진/동영상) 기록물 관리를 위한 시스템 구축과 운영 사례 연구 (A Case Study of the Audio-Visual Archives System Development and Management)

  • 신동헌; 정세영; 김선현
    • 한국기록관리학회지 / Vol.9 No.1 / pp.33-50 / 2009
  • The Agency for Defense Development built and operates a "visual records management system" to give users easy access to its analog audio-visual records through digital conversion and to manage them more systematically. This study covers the entire development process and the actual operation of the system, describing a real case of preserving and using records through database construction via digital conversion of audio-visual records and through users' direct search and use. Specifically, it covers the analysis of system development requirements for managing and using image and video data, and, for database construction through digital conversion of analog materials, the implementation of standard work procedures, the setting of quality standards, and the definition of metadata items. It also discusses the need to build audio-visual records management systems, based on an analysis of the benefits gained from actually operating one.

차량 주행 감각 재현을 위한 운전 시뮬레이터 개발에 관한 연구 (I) (A study on the Development of a Driving Simulator for Reappearance of Vehicle Motion (I))

  • 박민규; 이민철; 손권; 유완석; 한명철; 이장명
    • 한국정밀공학회지 / Vol.16 No.6 / pp.90-99 / 1999
  • A vehicle driving simulator is a virtual reality device that makes a person feel as if he or she were actually driving a vehicle. Driving simulators are used effectively for studying driver-vehicle interaction and for developing new vehicle system concepts. A driving simulator consists of a vehicle motion bed system, a motion controller, a visual and audio system, a vehicle dynamics analysis system, a cockpit system, and so on. In this paper, the main procedures for developing the driving simulator are divided into five parts. First, a motion bed system and a motion controller that can track a reference trajectory are developed. Secondly, a performance evaluation of the motion bed system is carried out using LVDTs and accelerometers. Thirdly, a washout algorithm is developed to reproduce the motion of an actual vehicle within the driving simulator; the algorithm maps the motion space of the vehicle into the workspace of the simulator (a minimal washout-filter sketch follows this entry). Fourthly, a visual and audio system is developed for a heightened sense of realism. Finally, an integration system is developed for communication and monitoring between the subsystems.

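The washout algorithm mentioned above maps vehicle motion into the simulator's limited workspace. The sketch below shows only the core translational high-pass ("washout") channel of a classical washout filter, assuming a first-order filter with time constant tau; the paper's algorithm is not detailed in the abstract.

```python
import numpy as np

def washout_highpass(accel: np.ndarray, dt: float, tau: float) -> np.ndarray:
    """First-order high-pass 'washout' of a vehicle acceleration signal.

    Keeps the transient onset cues and washes out sustained
    acceleration so the motion bed can drift back toward neutral
    within its limited workspace; tau is the washout time constant.
    A full classical washout filter adds tilt coordination and
    rotational channels on top of this translational channel.
    """
    alpha = tau / (tau + dt)
    out = np.zeros_like(accel, dtype=float)
    for i in range(1, len(accel)):
        out[i] = alpha * (out[i - 1] + accel[i] - accel[i - 1])
    return out
```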