• Title/Summary/Keyword: Virtual Microphone

16 search results (processing time: 0.022 seconds)

Active Sound Control Approach Using Virtual Microphones for Formation of Quiet Zones at a Chair (좌석의 정음공간 형성을 위한 가상마이크로폰 기반 능동음향제어 기법 연구)

  • Ryu, Seokhoon;Kim, Jeakwan;Lee, Young-Sup
    • Transactions of the Korean Society for Noise and Vibration Engineering
    • /
    • v.25 no.9
    • /
    • pp.628-636
    • /
    • 2015
  • In this study, theoretical and experimental analyses were performed on creating a zone of quiet (ZoQ) and moving it to the ear location of a seated person using an active sound control technique. Because an active sound control system based on an algorithm such as the filtered-x least mean square (FxLMS) creates the ZoQ at the location of the error microphone, the virtual microphone control (VMC) method was adopted to move the ZoQ to around the sitter's ear. A chair system with microphones and loudspeakers on both sides was built for the experiment, and an active headrest against swept narrowband primary noise was implemented with a real-time controller in which the VMC algorithm was embedded. After control experiments with and without the VMC method, the shift of the ZoQ was confirmed by analyzing the error signals measured by the error and virtual microphones. The results show that FxLMS with the VMC technique can relocate the ZoQ from the error microphone location to the virtual microphone location, and that the difference in attenuation between the two locations is small.
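
A minimal single-channel FxLMS loop of the kind this abstract describes can be sketched as follows. This is illustrative only: the filter length, step size, and the assumption of a perfectly known secondary path are ours, not the paper's, and the paper additionally relocates the error signal to a virtual microphone.

```python
import numpy as np

def fxlms(x, d, s, L=32, mu=0.01):
    """Single-channel FxLMS sketch.
    x: reference noise, d: primary noise at the error microphone,
    s: secondary-path impulse response (assumed perfectly known)."""
    w = np.zeros(L)                    # adaptive control filter
    y_buf = np.zeros(len(s))           # recent anti-noise samples
    xf = np.convolve(x, s)[:len(x)]    # reference filtered by s ("filtered-x")
    e = np.zeros(len(x))
    for n in range(L, len(x)):
        xb = x[n - L + 1:n + 1][::-1]  # recent reference samples, newest first
        y_buf = np.roll(y_buf, 1)
        y_buf[0] = w @ xb              # anti-noise output sample
        e[n] = d[n] - s @ y_buf        # residual measured at the error mic
        w += mu * e[n] * xf[n - L + 1:n + 1][::-1]  # LMS update on filtered-x
    return e
```

With a narrowband reference, the residual decays as the filter converges; the VMC step in the paper would then re-estimate this error at the virtual microphone location.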

Obstacle Avoidance of a Moving Sound Following Robot using Active Virtual Impedance (능동 가상 임피던스를 이용한 이동 음원 추종 로봇의 장애물 회피)

  • Han, Jong-Ho;Park, Sook-Hee;Noh, Kyung-Wook;Lee, Dong-Hyuk;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.2
    • /
    • pp.200-210
    • /
    • 2014
  • A novel active virtual impedance algorithm is proposed to track a sound source and avoid obstacles while a mobile robot follows the source. The robot's tracking velocity is determined by virtual attractive and repulsive forces that respectively pull it toward the sound source and push it away from obstacles. Active virtual impedance is defined as a function of the distances and relative velocities from the mobile robot to the sound source and obstacles, and it is used to generate the tracking velocity. Conventional virtual impedance methods use fixed coefficients for the relative distances and velocities; in this research the coefficients are adjusted dynamically to improve obstacle avoidance in environments with multiple obstacles. The relative distances and velocities are obtained using a microphone array consisting of three microphones in a row, whose geometrical relationships are used to estimate the position and orientation of the sound source relative to the mobile robot carrying the array. The effectiveness of the proposed algorithm has been demonstrated in real experiments.
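
The velocity command described above can be sketched as below. The specific attraction/repulsion laws, gains, and influence distance are our assumptions; the paper's contribution is that such coefficients adapt with relative distance and velocity rather than staying fixed.

```python
import numpy as np

def tracking_velocity(p_robot, p_source, obstacles, closing_speeds,
                      k_att=1.0, k_rep=0.5, d0=1.0):
    """Virtual-impedance-style velocity command (illustrative).
    p_robot, p_source: 2D positions; obstacles: list of 2D positions;
    closing_speeds: obstacle approach speeds (positive = approaching)."""
    v = k_att * (p_source - p_robot)          # attraction toward the sound source
    for p_obs, cs in zip(obstacles, closing_speeds):
        diff = p_robot - p_obs
        d = float(np.linalg.norm(diff))
        if d < d0:                            # obstacle inside influence radius
            # "active" gain: grows as the obstacle gets closer and approaches faster
            gain = k_rep * (1.0 / d - 1.0 / d0) * (1.0 + max(0.0, cs))
            v += gain * diff / d              # repulsion away from the obstacle
    return v
```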

MICROPHONE-BASED WIND VELOCITY SENSORS AND THEIR APPLICATION TO INTERACTIVE ANIMATION

  • Kanno, Ken-ichi;Chiba, Norishige
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.596-600
    • /
    • 2009
  • We are developing a simple, low-cost wind velocity sensor based on small microphones. The sensor system consists of four microphones covered with specially shaped wind screens, four pre-amplifiers that respond to low frequencies, and a commercial multi-channel sound interface. In this paper, we first present the principle of the sensor, i.e., a technique that suppresses the influence of external environmental noise so that wind velocity and direction can be determined from a microphone's output. We then present an application that generates realistic motions of a virtual tree swaying in real wind. Although the current sensor produces significant leaps in a measured sequence of directions, the interactive animations demonstrate that it is usable for such applications, provided the leaps can be reduced to some degree.


Interactive Virtual Studio & Immersive Viewer Environment (인터렉티브 가상 스튜디오와 몰입형 시청자 환경)

  • 김래현;박문호;고희동;변혜란
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06b
    • /
    • pp.87-93
    • /
    • 1999
  • In this paper, we introduce a novel virtual studio environment where a broadcaster in the virtual set interacts with tele-viewers as if they were sharing the same environment as participants. A tele-viewer participates physically in the virtual studio through a dummy head equipped with video "eyes" and microphone "ears" physically located in the studio. The dummy head, as a surrogate of the tele-viewer, follows the tele-viewer's head movements, and the tele-viewer views and hears through it like a tele-operated robot. By introducing tele-presence technology into the virtual studio setting, the broadcaster can not only interact with the virtual set elements as in a regular virtual studio but also share the physical studio with the surrogates of the tele-viewers as participants. A tele-viewer may see the real broadcaster in the virtual set environment and other participants as avatars in place of their respective dummy heads. With an immersive display such as an HMD, the tele-viewer may look around the studio and interact with other avatars. The new interactive virtual studio with an immersive viewer environment may be applied to immersive tele-conferencing, tele-teaching, and interactive TV program productions.


Design of Next-Generation Ship Simulator System Using Virtual Reality (가상현실을 이용한 차세대 선박 시뮬레이터의 시스템 설계)

  • 임정빈;박계각
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.6 no.1
    • /
    • pp.1-9
    • /
    • 2000
  • The paper describes the system design of a next-generation ship simulator using virtual reality (VRSS), a well-known form of human-computer interaction. The VRSS system must support multiple participants such as a captain, officer, pilot, and quartermaster. To meet this requirement, core technologies were explored and a multi-networking system with a broker server was proposed. The proposed system was evaluated with a PC-based immersive VR device consisting of an HMD (head-mounted display), head tracking sensor, puck, headphone, and microphone. Using this VR device, an assessment test was carried out on a virtual bridge populated with 3D objects created with VRML (Virtual Reality Modeling Language). The tests showed that the 3D objects behaved as if they were real objects on a real ship's bridge, so engaging interaction with participants can be achieved in the system. We therefore found that the proposed system architecture is applicable to the construction of VRSS systems.


A Novel Computer Human Interface to Remotely Pick up Moving Human's Voice Clearly by Integrating Real-time Face Tracking and Microphones Array

  • Hiroshi Mizoguchi;Takaomi Shigehara;Yoshiyasu Goto;Hidai, Ken-ichi;Taketoshi Mishima
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1998.10a
    • /
    • pp.75-80
    • /
    • 1998
  • This paper proposes a novel computer-human interface, named Virtual Wireless Microphone (VWM), which utilizes computer vision and signal processing by integrating real-time face tracking with sound signal processing. VWM is intended as a speech input method for human-computer interaction, especially for an autonomous intelligent agent that interacts with humans, such as a digital secretary. Using VWM, the agent can clearly hear its human master's voice remotely, as if a wireless microphone were placed just in front of the master.
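
The "virtual wireless microphone" idea, steering a microphone array toward the visually tracked face, can be illustrated with a delay-and-sum beamformer. The geometry, integer-sample delays, and names below are our assumptions, not the authors' implementation.

```python
import numpy as np

def delay_and_sum(signals, mic_pos, src_pos, fs, c=343.0):
    """Steer an array toward a (face-tracked) source position.
    signals: (num_mics, num_samples); mic_pos: (num_mics, dim);
    src_pos: source position, e.g. from the vision system."""
    dists = np.linalg.norm(mic_pos - src_pos, axis=1)
    delays = (dists - dists.min()) / c           # extra travel time per mic
    shifts = np.round(delays * fs).astype(int)   # delays as integer samples
    n = signals.shape[1]
    out = np.zeros(n)
    for sig, k in zip(signals, shifts):
        out[:n - k] += sig[k:]                   # advance the later arrivals
    return out / len(signals)                    # coherent average
```

Signals arriving from the steered direction add coherently while off-axis sound averages down, which is what lets the array "listen" at the tracked face.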


A Design and Implementation of Natural User Interface System Using Kinect (키넥트를 사용한 NUI 설계 및 구현)

  • Lee, Sae-Bom;Jung, Il-Hong
    • Journal of Digital Contents Society
    • /
    • v.15 no.4
    • /
    • pp.473-480
    • /
    • 2014
  • As the use of computers has become widespread, active research is in progress on interfaces that are more convenient and natural than existing ones such as the keyboard and mouse. For this reason, there is increasing interest in Microsoft's motion-sensing module, Kinect, which supports hand-motion and speech recognition. Kinect uses its built-in sensors to recognize the main joint movements and the depth of the body, and it provides simple speech recognition through its built-in microphone. In this paper, the goal is to use Kinect's depth data together with skeleton tracking and a labeling algorithm to extract and track the hand, and to replace existing peripherals with a virtual mouse, a virtual keyboard, and speech recognition.
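
A virtual mouse driven by a tracked hand joint can be sketched as below. The region-of-interest mapping and the push-to-click depth threshold are our illustrative assumptions, not the paper's calibration or the Kinect SDK API.

```python
def hand_to_cursor(hand_xy, hand_depth, screen=(1920, 1080),
                   roi=((0.3, 0.3), (0.7, 0.7)), click_depth=0.45):
    """Map a normalized hand-joint position to a virtual-mouse event.
    hand_xy: (x, y) in [0, 1] from skeleton tracking;
    hand_depth: normalized depth (smaller = closer to the sensor).
    Returns (cursor_xy, clicked)."""
    (x0, y0), (x1, y1) = roi
    u = min(max((hand_xy[0] - x0) / (x1 - x0), 0.0), 1.0)  # clamp to ROI
    v = min(max((hand_xy[1] - y0) / (y1 - y0), 0.0), 1.0)
    cursor = (int(u * (screen[0] - 1)), int(v * (screen[1] - 1)))
    clicked = hand_depth < click_depth      # "push toward sensor" as a click
    return cursor, clicked
```

Restricting the mapping to a central ROI lets small, comfortable hand movements cover the whole screen.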

Face-to-face Communication in Cyberspace using Analysis and Synthesis of Facial Expression

  • Shigeo Morishima
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06a
    • /
    • pp.111-118
    • /
    • 1999
  • Recently, computers can create cyberspaces that users walk through with interactive virtual reality techniques, and an avatar in cyberspace can provide a virtual face-to-face communication environment. In this paper, an avatar with a realistic face is realized in cyberspace, and a multiuser communication system is constructed with voice transmitted over the network. Voice from a microphone is transmitted and analyzed, and the avatar's mouth shape and facial expression are estimated and synthesized synchronously in real time. An entertainment application of the real-time voice-driven synthetic face, an example of interactive movie, is also introduced. Finally, a face motion capture system using a physics-based face model is presented.
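
A crude stand-in for the voice-driven mouth-shape estimation described above maps frame energy to mouth openness; this is our simplification of the paper's analysis/synthesis pipeline, which estimates actual mouth shapes rather than loudness alone.

```python
import numpy as np

def mouth_openness(frame, floor=0.01, ceil=0.3):
    """Map one audio frame (samples in [-1, 1]) to mouth openness in [0, 1].
    floor/ceil are assumed silence and full-open energy thresholds."""
    rms = float(np.sqrt(np.mean(np.square(frame))))   # frame loudness
    return float(np.clip((rms - floor) / (ceil - floor), 0.0, 1.0))
```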

Gaze Matching Based on Multi-microphone for Remote Tele-conference (멀티 마이크로폰 기반 원격지 간 화상회의 시선 일치 기법)

  • Lee, Daeseong;Jo, Dongsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.429-431
    • /
    • 2021
  • Recently, as an alternative to face-to-face meetings, the use of video conferencing systems between remote locations has increased. However, such systems suffer from a mismatch in the eye gaze of remote users. It is therefore necessary to apply technology that increases immersion in video conferences by matching the gaze of participants across remote locations. In this paper, we propose a novel technique for video conferencing with matched gaze by estimating the speaker's location with a multi-microphone array. Our method can be applied not only to video conferencing between remote locations but also to fields such as robot interaction and virtual human interfaces.
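
Speaker localization with a microphone array typically starts from time-difference-of-arrival (TDOA) estimates. A two-microphone, far-field sketch is shown below; the cross-correlation method, geometry, and names are our assumptions, and the paper uses more microphones than this.

```python
import numpy as np

def estimate_direction(sig_left, sig_right, mic_dist, fs, c=343.0):
    """Direction of arrival (degrees from broadside) for one mic pair.
    A positive lag means the left signal arrives later, i.e. the
    source is nearer the right microphone."""
    corr = np.correlate(sig_left, sig_right, mode='full')
    lag = int(np.argmax(corr)) - (len(sig_right) - 1)  # TDOA in samples
    tdoa = lag / fs                                    # TDOA in seconds
    # clamp to the physically possible range before taking the arcsine
    sin_theta = np.clip(c * tdoa / mic_dist, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```

With several pairs, such angle estimates can be intersected to place the active speaker, which is the cue the gaze-matching system needs.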


SPACIAL POEM: A New Type of Experimental Visual Interaction in 3D Virtual Environment

  • Choi, Jin-Young
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02b
    • /
    • pp.405-410
    • /
    • 2008
  • There is always a rhythm in our language and speech. As soon as we speak, even simple words and the voice we make are shaped into various emotions and information; through this process we succeed or fail in our communication, and it becomes either a lively exchange or a monotonous delivery. Even with the same music, the impression of a performance differs according to each musician's emotion and understanding, and we 'play' our language in the same way. However, people have grown used to a variety that is in fact mere variation on a fixed format, a hollow variety, and may have been losing or limiting their own creative ways of expressing themselves because of it. SPACIAL POEM started from this point. It is a new type of real-time visual interaction that expresses our own creative narrative as real-time visuals through playing a musical instrument, an emotional human behavior. Producing sound by playing an instrument is the same behavior through which we express our emotions. There are sensors on each hole on the surface of the instrument; when you play it, the sensors recognize that you have covered the holes. All the sensors are connected to a keyboard, so your playing becomes typing on the keyboard, and the visuals of your words are programmed to spread out in a virtual 3D space as you play. The breath you blow into the instrument to make sounds becomes the energy that moves you forward continuously through the virtual space; a microphone sensor is used for this. In the end, by playing a musical instrument we recover emotions we had forgotten, and one's voice is expressed as one's own visual language in virtual space.
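
The breath-to-motion mapping described above, microphone input driving forward movement through the virtual space, can be sketched as follows; the RMS measure, noise floor, and gain are our assumptions, not the work's tuning.

```python
import numpy as np

def blow_to_speed(frame, noise_floor=0.02, gain=5.0, max_speed=2.0):
    """Map one microphone buffer (samples in [-1, 1]) to forward speed.
    Breath energy above an assumed noise floor scales linearly into
    speed, capped at max_speed."""
    rms = float(np.sqrt(np.mean(np.square(frame))))   # breath energy
    if rms < noise_floor:                             # ignore ambient noise
        return 0.0
    return min(gain * (rms - noise_floor), max_speed)
```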
