• Title/Summary/Keyword: 사운드 (Sound)

Search results: 583

The Realtime method of 3D Sound Rendering for Virtual Reality : Complexity Reduction of Scene and Sound Sources (장면 및 음원 복잡도 축소에 의한 3차원 사운드 재현의 실시간화 기법)

  • Seong SukJeong;Yi JeongSeon;Oh SuJin;Nam YangHee
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.550-552 / 2005
  • In virtual-reality applications where realistic reproduction matters, research has aimed to increase presence and immersion by presenting users with a high-quality graphical environment and giving immediate feedback to their interaction. Combining hearing with vision is effective for conveying presence and spatial impression, yet research on 3D sound rendering that reflects the characteristics of the virtual space is still at an early stage both in Korea and abroad. To render 3D sound with presence and spatiality, sound propagation, reflection, and reverberation must be recomputed as the user interacts. However, testing the sound propagation paths for collision against every polygon of the space in order to compute reflections is impractical in virtual-reality applications where real-time performance is essential, so the amount of computation must be reduced. This paper proposes an algorithm that renders 3D sound effects in real time in a complex virtual space containing many sound sources: the space is reorganized into an audio scene graph holding only the minimal information required for the sound scene and its computation, and source reduction and clustering are applied to the set of sources.

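The source reduction and clustering step described above can be illustrated with a small sketch. The paper's exact clustering criterion is not given here; greedy distance-threshold grouping and a gain-weighted centroid merge are assumptions for illustration:

```python
import numpy as np

def cluster_sources(positions, gains, radius=2.0):
    """Greedily merge sound sources lying within `radius` of an existing
    cluster centre; each cluster becomes one virtual source whose gain is
    the sum of its members' gains and whose position is their
    gain-weighted centroid."""
    centres, weights = [], []
    for p, g in zip(positions, gains):
        for i, c in enumerate(centres):
            if np.linalg.norm(p - c) <= radius:
                total = weights[i] + g
                centres[i] = (centres[i] * weights[i] + p * g) / total
                weights[i] = total
                break
        else:  # no existing cluster is close enough: start a new one
            centres.append(p.astype(float))
            weights.append(float(g))
    return np.array(centres), np.array(weights)

# eight sources in two tight spatial groups -> two virtual sources
pos = np.array([[0, 0, 0], [0.5, 0, 0], [0, 0.5, 0], [0.2, 0.2, 0],
                [10, 0, 0], [10.4, 0, 0], [10, 0.3, 0], [10.1, 0.2, 0]])
gain = np.ones(len(pos))
centres, weights = cluster_sources(pos, gain, radius=2.0)
```

After clustering, only the virtual sources need propagation and reflection computations, which is where the real-time saving comes from.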

High Directivity Sound Beamforming Algorithm (방향성이 높은 사운드 빔 형성 알고리즘)

  • Kim, Seona-Woo;Hur, Yoo-Mi;Park, Young-Chul;Youn, Dae-Hee
    • The Journal of the Acoustical Society of Korea / v.29 no.1 / pp.24-33 / 2010
  • This paper proposes a sound beamforming technique that can generate highly directive sound beams and presents applications of the algorithm to multi-channel 3D sound systems. The algorithm consists of two phases: first, optimum weights maximizing the sound pressure level ratio between the target and control acoustic regions are designed; then the directivity of the pre-designed beam is iteratively enhanced by modifying the covariance matrix. The method was evaluated in various situations, and the results show that it provides more focused sound beams than conventional methods.
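The first phase described above, maximizing the pressure ratio between a target and a control region, is a generalized eigenvalue problem. A minimal NumPy/SciPy sketch under assumed free-field steering vectors (the array geometry and region sampling are illustrative, and the iterative second phase is omitted):

```python
import numpy as np
from scipy.linalg import eigh

def region_covariance(steering_vectors):
    """Spatial covariance of an acoustic region, averaged over the
    steering vectors of field points sampled inside it."""
    return sum(np.outer(a, a.conj()) for a in steering_vectors) / len(steering_vectors)

def contrast_weights(R_target, R_control, loading=1e-6):
    """Weights maximizing w^H R_t w / w^H R_c w: the principal
    generalized eigenvector of the pair (R_t, R_c)."""
    n = R_target.shape[0]
    vals, vecs = eigh(R_target, R_control + loading * np.eye(n))
    w = vecs[:, -1]  # eigenvector of the largest generalized eigenvalue
    return w / np.linalg.norm(w)

# toy 8-element uniform line array at half-wavelength spacing
M = 8
def steer(theta):
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

R_t = region_covariance([steer(0.0)])                 # target: broadside
R_c = region_covariance([steer(0.6), steer(-0.6)])    # control directions
w = contrast_weights(R_t, R_c)
contrast = (w.conj() @ R_t @ w).real / (w.conj() @ R_c @ w).real
```

The diagonal loading term keeps the control covariance invertible; the resulting contrast between target and control regions is very large for this toy geometry.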

Assessment of Compressive Strength of Granitic Gneiss Using Nondestructive Testing based on Sound Energy (사운드에너지 기반 화강편마암의 비파괴 압축강도 산정)

  • Son, Moorak;Kim, Moojun
    • Journal of the Korean GEO-environmental Society / v.19 no.8 / pp.5-10 / 2018
  • This study provides a method to assess the compressive strength of granitic gneiss using the total sound signal energy, calculated from the sound pressure signal measured when an object impacts a rock surface. Many granitic gneiss specimens were prepared, and each was struck with a devised device (an initial rotating free fall followed by repeated rebound impacts) while the sound pressure was recorded as a signal over time. The sound signal was accumulated over time (the total sound signal energy) for each specimen and compared with its directly measured compressive strength. The comparison showed that the total sound signal energy is directly proportional to the measured compressive strength, so the compressive strength of granitic gneiss can be reliably assessed from an estimation equation based on the total sound signal energy. The results further suggest that the compressive strength of other rocks and of concrete could also be assessed nondestructively using the same quantity.
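The accumulated quantity above is simply the squared sound pressure summed over the impact record. A minimal sketch on a synthetic damped-tone impact signal (the sampling rate and decay constant are illustrative, and the strength-estimation coefficients of the paper's regression are not reproduced here):

```python
import numpy as np

def total_sound_signal_energy(pressure, dt):
    """Accumulate the squared sound pressure over the whole record --
    the 'total sound signal energy' used in the study."""
    return np.sum(np.asarray(pressure) ** 2) * dt

# synthetic impact record: an exponentially decaying 1 kHz tone
fs = 10_000
t = np.arange(0, 0.5, 1 / fs)
signal = np.exp(-20 * t) * np.sin(2 * np.pi * 1000 * t)
E = total_sound_signal_energy(signal, 1 / fs)
```

With the proportionality reported in the study, strength would then be read off a linear fit of measured strength against E (the fit coefficients themselves are specimen data not shown here).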

'Hongdae Sound' as a Historic Musical Trend Based on Regional Classification: through Comparative Analysis with 'US 8th Army Sound' and 'London Punk' (지역기반 음악사조로서의 '홍대 사운드' : 미8군 사운드와 런던 펑크와의 비교를 중심으로)

  • Kim, Minoh
    • Trans- / v.8 / pp.1-28 / 2020
  • This study examines the musical characteristics of the so-called 'Hongdae Sound' as a historic musical trend by comparing it with the 'US 8th Army Sound' and British 'London Punk'. Hongdae Sound refers to the trend formed by independent bands and musicians who mostly performed live in the club 'Drug' in the Hongdae area and who voluntarily adopted the minor musical sensibility and indie spirit of the post-punk rock genre. From an industrial standpoint, however, the superficial identity of 'indie' interferes with academic analysis of the musical aspects of Hongdae Sound, so its characteristics need to be rearranged as a musical trend based on regional classification in order to fully appreciate its status in the history of Korean popular music. US 8th Army Sound refers to the trend played on live stages inside US military bases in Korea: the Korean musicians hired for those shows learned the popular musical trends current in the States and spread them to the general public outside the bases. The industrial system of the Army Sound was very similar to that of K-Pop, but in leading the newest musical trend of rock-and-roll it more closely resembled Hongdae Sound. London Punk was a back-to-basics form of pure rock armed with social angst, rebellion, and indie spirit; its primal motto was 'do it yourself', and Hongdae Sound largely followed its industrial, musical, and spiritual paths. London Punk was short-lived because it abandoned its indie spirit and was absorbed into the mainstream, whereas Hongdae Sound has maintained its longevity by keeping the spirit and truthfulness of indie while endlessly experimenting with new trends.


Sound event detection based on multi-channel multi-scale neural networks for home monitoring system used by the hard-of-hearing (청각 장애인용 홈 모니터링 시스템을 위한 다채널 다중 스케일 신경망 기반의 사운드 이벤트 검출)

  • Lee, Gi Yong;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.39 no.6 / pp.600-605 / 2020
  • In this paper, we propose a sound event detection method using multi-channel multi-scale neural networks for a sound-sensing home monitoring system for the hearing impaired. The proposed system selects the two channels with the highest signal quality from several wireless microphone sensors in the home. Three features extracted from the sensor signals (time difference of arrival, pitch range, and the outputs of a multi-scale convolutional neural network applied to the log mel spectrogram) are fed to a classifier based on a bidirectional gated recurrent neural network to further improve detection performance. The detected sound event is converted into text, along with the sensor position of the selected channel, and provided to the hearing-impaired user. Experimental results show that the proposed sound event detection method is superior to the existing method and can effectively deliver sound information to the hearing impaired.
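Of the three features listed, the time difference of arrival between the two selected channels can be estimated from the peak of their cross-correlation. A minimal sample-domain sketch (the system's actual estimator, e.g. any generalized cross-correlation weighting, is not specified here):

```python
import numpy as np

def tdoa_samples(x, y):
    """Time difference of arrival (in samples) between two equal-length
    channels, from the peak of the full cross-correlation.
    Positive result: the event reaches channel y later than channel x."""
    corr = np.correlate(x, y, mode="full")
    return (len(x) - 1) - np.argmax(corr)

# channel y hears the same event 5 samples later than channel x
rng = np.random.default_rng(1)
event = rng.standard_normal(200)
x = np.concatenate([event, np.zeros(50)])
y = np.concatenate([np.zeros(5), event, np.zeros(45)])
delay = tdoa_samples(x, y)
```

Dividing the sample delay by the sampling rate gives the time difference, which hints at which sensor is closest to the event.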

Sound-driven Vibration System using Digital Signal Processor (DSP를 이용한 사운드 기반 진동 시스템)

  • Cho, Dong-Hyun;Oh, Sung-Jin;You, Yong-Hee;Sung, Mee-Young;Jun, Kyung-Koo
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2008.02a / pp.553-558 / 2008
  • In this paper, we develop a vibration system that can generate diverse vibration effects in real time by analyzing the sound output of a PC. The system detects the occurrence of particular sounds and generates corresponding pre-programmed vibration patterns, improving the realism and immersiveness of games and virtual-reality applications. A further advantage is that vibration features can easily be added to applications originally developed without vibration in mind. The system consists of an external DSP board for signal processing and a vibration pad worn on the wrists. It is superior to other sound-driven vibration devices because the DSP board can detect a wider variety of sounds, has higher performance, and places no processing load on the PC; the wrist-worn pad also generates more realistic vibration than mouse- or joystick-type devices.

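The detect-then-vibrate loop described above can be sketched in a few lines. The detector here is a plain short-time energy threshold and the pattern table is hypothetical; the DSP board's actual detection logic and patterns are not described in the abstract:

```python
import numpy as np

# hypothetical mapping from detected sound class to a vibration pattern
# (a pattern is a sequence of motor intensities in [0, 1], one per frame)
PATTERNS = {"impact": [1.0, 0.6, 0.2]}

def detect_impact(frame, threshold=0.1):
    """Flag a frame whose short-time energy exceeds `threshold` --
    a stand-in for the board's sound-occurrence detector."""
    return np.mean(np.asarray(frame) ** 2) > threshold

fs = 8000
t = np.arange(400) / fs
quiet = 0.01 * np.sin(2 * np.pi * 440 * t)   # background tone
bang = 0.9 * np.sin(2 * np.pi * 200 * t)     # loud impact-like burst
events = [detect_impact(f) for f in (quiet, bang)]
pattern = PATTERNS["impact"] if events[1] else []
```

In the real system this decision runs on the DSP board frame by frame, and the selected pattern drives the wrist-worn vibration pad.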

Towards the Generation of Language-based Sound Summaries Using Electroencephalogram Measurements (뇌파측정기술을 활용한 언어 기반 사운드 요약의 생성 방안 연구)

  • Kim, Hyun-Hee;Kim, Yong-Ho
    • Journal of the Korean Society for Information Management / v.36 no.3 / pp.131-148 / 2019
  • This study constructed a cognitive model of information processing to understand the topic of a sound material and its characteristics. It then proposed methods to generate sound summaries by incorporating the anterior-posterior N400/P600 components of the event-related potential (ERP) response into the language representation of the cognitive model. To this end, research hypotheses were established and verified through ERP experiments, which found that P600 is crucial in screening topic-relevant shots from topic-irrelevant ones. The results can be applied to the design of a classification algorithm for generating content-based metadata such as generic or personalized sound summaries and video skims.

Inquiring Activities on the Acoustic Phenomena Using Sound Card in Personal Computer (사운드카드를 이용한 음향학 탐구학습 사례)

  • Lee, Seung-Koog;Lee, Jong-Rim;Kim, Hyun-Byuk;Kim, Young-H.
    • The Journal of the Acoustical Society of Korea / v.30 no.5 / pp.249-254 / 2011
  • Inquiry activities on acoustic phenomena have been carried out using the sound card installed in a personal computer. A sound card is cheaper and more accessible to students than precision equipment such as a function generator or an oscilloscope. The students record the sounds of various acoustic phenomena through the sound card and then analyze the frequency spectra of those sounds using software. The phenomena investigated include the beat between two tuning forks, the sound from a Rijke tube, pouring sounds, the breaking of a wine glass, and the pop of a wine bottle. Through these activities, students perform quantitative analysis of phenomena arising from superposition, resonance, and standing waves.
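The tuning-fork beat experiment translates directly into code: two tones a few hertz apart beat at their frequency difference, and the spectrum shows two distinct peaks rather than one. A minimal sketch with synthetic tones standing in for the recorded sound (the fork frequencies are illustrative):

```python
import numpy as np

# two "tuning forks" 4 Hz apart, recorded for 2 s at 8 kHz
fs, dur = 8000, 2.0
t = np.arange(0, dur, 1 / fs)
f1, f2 = 440.0, 444.0
x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# magnitude spectrum; the two largest peaks sit at the fork frequencies
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), 1 / fs)
top2 = sorted(freqs[np.argsort(spectrum)[-2:]])
beat_rate = top2[1] - top2[0]   # audible beat frequency |f1 - f2|
```

This is the same quantitative analysis the students perform on a real recording: read the two peak frequencies off the spectrum and check that their difference matches the beat rate heard by ear.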

A Sound Interpolation Method Using Deep Neural Network for Virtual Reality Sound (가상현실 음향을 위한 심층신경망 기반 사운드 보간 기법)

  • Choi, Jaegyu;Choi, Seung Ho
    • Journal of Broadcast Engineering / v.24 no.2 / pp.227-233 / 2019
  • In this paper, we propose a deep neural network-based sound interpolation method for realizing virtual-reality sound. The method generates the sound between two points from the acoustic signals obtained at those points. Sound interpolation can be performed with statistical methods such as the arithmetic or geometric mean, but these are insufficient to reflect the actual nonlinear acoustic characteristics. To address this, the interpolation is performed by training a deep neural network on the acoustic signals of the two points and the target point, and experimental results show that the deep neural network-based method outperforms the statistical methods.
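The statistical baselines the paper compares against are straightforward to write down. A minimal sketch of both means on a toy pair of recordings (the sample-wise formulation and the sign convention for the geometric mean are assumptions for illustration; the paper's trained network is not reproduced):

```python
import numpy as np

def interpolate_mid(sig_a, sig_b, method="arithmetic"):
    """Estimate the signal at the midpoint of two measurement positions
    by averaging the two recordings sample by sample."""
    a, b = np.asarray(sig_a, float), np.asarray(sig_b, float)
    if method == "arithmetic":
        return (a + b) / 2
    if method == "geometric":  # magnitude mean, sign taken from the sum
        return np.sign(a + b) * np.sqrt(np.abs(a * b))
    raise ValueError(method)

fs = 8000
t = np.arange(0, 0.1, 1 / fs)
near = 1.00 * np.sin(2 * np.pi * 500 * t)   # close to the source
far = 0.25 * np.sin(2 * np.pi * 500 * t)    # farther away, attenuated
mid_arith = interpolate_mid(near, far, "arithmetic")
mid_geom = interpolate_mid(near, far, "geometric")
```

Both estimates are fixed pointwise functions of the two inputs, so they cannot model position-dependent effects such as reflections or reverberation; that limitation is what motivates training a network on signals from the two endpoints and the target point.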