• Title/Summary/Keyword: 사운드 (sound)

Narrative Functions of Sound Design in Films (영화 사운드디자인의 내러티브 기능 연구)

  • Lee, Dong-Hwan
    • The Journal of the Korea Contents Association / v.13 no.12 / pp.626-637 / 2013
  • Film sound should be analyzed on the basis of a thorough understanding of the sound design process and the principles behind it; without this knowledge, research on film sound lacks comprehensiveness and concreteness. This study aims to reaffirm the important role of sound design in filmmaking while investigating its narrative functions. The role of sound design as a cinematic technique is defined as creating an audio-visual experience in which information, emotion, and ideas are organized and presented through the sound narrative of a film. A case study of various narrative films shows that each element of sound design (dialogue, ambience, Foley, and sound effects) performs this role, and the narrative functions of sound design are defined on that basis.

'EVE-Sound™' Toolkit for Interactive Sound in Virtual Environment (가상환경의 인터랙티브 사운드를 위한 'EVE-SoundTM' 툴킷)

  • Nam, Yang-Hee;Sung, Suk-Jeong
    • The KIPS Transactions: Part B / v.14B no.4 / pp.273-280 / 2007
  • This paper presents a new 3D sound toolkit, EVE-Sound™, consisting of a pre-processing tool that simplifies the environment while preserving its acoustic effect and a 3D sound API for real-time rendering. It is designed to allow users to interact with complex 3D virtual environments through both audio and visual modalities. The EVE-Sound™ toolkit serves two types of users: high-level programmers who need an easy-to-use sound API for developing realistic, 3D audio-visually rendered applications, and researchers in the 3D sound field who need to experiment with or develop new algorithms without rewriting all the required code from scratch. An interactive virtual environment application built with a sound engine based on the EVE-Sound™ toolkit demonstrates its real-time audio-visual rendering performance and the applicability of EVE-Sound™ to building interactive applications with complex 3D environments.
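
As a hedged illustration of the kind of per-source computation a real-time 3D sound engine performs (inverse-distance attenuation plus constant-power stereo panning), consider the sketch below. The function and parameter names are hypothetical; they are not the EVE-Sound™ API, which the abstract does not document.

```python
# Illustrative sketch only: per-frame gain computation for one mono source in a
# 3D scene, of the kind a 3D sound API performs. All names are hypothetical.
import math

def render_gains(listener_pos, listener_forward, source_pos, ref_dist=1.0, rolloff=1.0):
    """Return (left_gain, right_gain) for one mono source."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    dz = source_pos[2] - listener_pos[2]
    dist = max(math.sqrt(dx * dx + dy * dy + dz * dz), 1e-6)

    # Inverse-distance attenuation beyond the reference distance.
    attenuation = ref_dist / (ref_dist + rolloff * (dist - ref_dist)) if dist > ref_dist else 1.0

    # Constant-power pan from the horizontal angle between the listener's
    # forward vector and the direction to the source.
    fx, _, fz = listener_forward
    angle = math.atan2(dx, dz) - math.atan2(fx, fz)   # positive: source to the right
    pan = max(-1.0, min(1.0, math.sin(angle)))        # -1 = hard left, +1 = hard right
    left = attenuation * math.cos((pan + 1.0) * math.pi / 4.0)
    right = attenuation * math.sin((pan + 1.0) * math.pi / 4.0)
    return left, right

# Example: a source 5 m ahead and slightly to the right of the listener.
print(render_gains((0, 0, 0), (0, 0, 1), (1.0, 0.0, 5.0)))
```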

Sound Design Emotion-Response on TV-CF Audio using the Brain Quotient-Test (TV광고음향의 사운드디자인 감성반응 -뇌 지수(BQT)분석기법으로-)

  • Yoo, Whoi-Jong;Moon, Nam-Mee
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2008.11a / pp.45-49 / 2008
  • This study examines viewers' emotional responses to the sound design of TV commercial audio. As the research method, viewers' emotional responses to sound were analyzed by comparing brain quotient (BQT) scores obtained from EEG measurements when the same video structure was presented without sound and with a designed music soundtrack. The study confirmed, by quantitative means, that the audio-visual emotional effect of sound in video can vary depending on how the sound design is constructed.

Vibration Stimulus Generation using Sound Detection Algorithm for Improved Sound Experience (사운드 실감성 증진을 위한 사운드 감지 알고리즘 기반 촉각진동자극 생성)

  • Ji, Dong-Ju;Oh, Sung-Jin;Jun, Kyung-Koo;Sung, Mee-Young
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집) / 2009.02a / pp.158-162 / 2009
  • Sound effects accompanied by appropriate tactile stimuli feel more real: gunfire in games and movies, for example, is more impressive when paired with vibration effects. On the same principle, adding vibration information to an existing sound file and generating vibration effects through a haptic interface during playback can augment the sound experience. In this paper, we propose a method to generate this vibration information by analyzing the sound. The vibration information consists of vibration patterns and their timing within a sound file, and adding it manually is labor-intensive. We therefore propose a sound detection algorithm that searches a sound file for the moments when specific sounds occur, together with a method to create vibration effects at those moments. The detection algorithm compares the frequency characteristics of the specific sounds against the sound file and finds the moments with similar frequency characteristics; its detection ratio was 98% for five different kinds of gunfire. We also developed a GUI-based vibration pattern editor to perform the sound search and vibration generation easily.
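
A minimal sketch of the spectral-matching idea described above, assuming a cosine-similarity comparison between magnitude spectra; it is not the authors' implementation, and the hop size, threshold, and function names are illustrative assumptions.

```python
# A minimal sketch (not the authors' implementation): slide a window over the
# track, compare its magnitude spectrum with that of a short reference sound
# (e.g., one gunshot) by cosine similarity, and emit a vibration command at each
# matching moment.
import numpy as np

def spectrum(frame):
    """Unit-norm magnitude spectrum of one Hann-windowed frame."""
    mag = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return mag / (np.linalg.norm(mag) + 1e-12)

def detect_events(track, reference, sample_rate, hop=1024, threshold=0.9):
    """Return times (seconds) where the track's spectrum resembles the reference."""
    ref_spec = spectrum(reference)
    win = len(reference)
    events = []
    for start in range(0, len(track) - win, hop):
        similarity = float(np.dot(spectrum(track[start:start + win]), ref_spec))
        if similarity >= threshold:
            events.append(start / sample_rate)   # adjacent hits would be merged in practice
    return events

def to_vibration_pattern(event_times, duration=0.15, intensity=1.0):
    """Turn detected moments into (onset_s, duration_s, intensity) vibration commands."""
    return [(t, duration, intensity) for t in event_times]

# Synthetic example: a decaying two-tone burst placed twice in an otherwise silent track.
sr = 16000
t = np.arange(2048) / sr
burst = np.exp(-8 * t) * (np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1320 * t))
track = np.zeros(2 * sr)
track[4000:4000 + 2048] += burst
track[20000:20000 + 2048] += burst
print(to_vibration_pattern(detect_events(track, burst, sr)))
```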

Study of Sound Art Curating (사운드아트 큐레이팅 연구)

  • Lim, Shan
    • The Journal of the Convergence on Culture Technology / v.8 no.5 / pp.171-176 / 2022
  • This paper examines the historical meaning and value of sound art curating as a key type of interdisciplinary, convergent art practice that has been unfolding since the mid-20th century. It traces the development of 'sound art' from its beginnings to the present, examines in chronological order the visual-art contexts in which the material of 'sound' has functioned, and focuses on curatorial cases from major sound art exhibitions, with the aim of analyzing the impact and contemporary significance of the aesthetic experience they provided. The text is developed in three sections. The first section observes that early 20th-century Futurist and Dadaist sound poetry, followed by Marcel Duchamp's 1913 attempt to combine a musical score with visual art, deeply influenced the visual music of the avant-garde composer John Cage, and explains how this background led to the emergence of exhibitions treating 'sound' as a new medium. The second section explains how, in the 1970s, sound as an artistic medium came to reflect a critical relationship with exhibition spaces dominated by visuality. The third section analyzes the curatorial methodologies that, from the 1980s to the present, have allowed audiences to experience sound as if it were a visual object within the organization of the exhibition hall. Through this process, the paper takes a critical view of the historical practice of shaping the perceptual structure of the exhibition hall and considers a meaningful methodology of sound art curating that acknowledges the vital role of sound in the contemporary art scene.

Research on Animation Sound (애니메이션 사운드에 관한 연구)

  • Lim, Woon-Ju
    • The Journal of the Korea Contents Association / v.7 no.6 / pp.127-134 / 2007
  • The primary purpose of sound in animation is communication with the audience. Animation sound conveys the setting more naturally through music suited to the period and space of a fantastic or realistic work, uses rhythm and tempo that follow the characters' acting and characterization, and creates an atmosphere that feels familiar to the audience. This research reviewed the related literature to grasp the importance of the role and function of sound in animated film. Because sound is combined with the image during the animation sound production process, the study focused on how sound smoothly connects the progression of events within the narrative structure to the overall atmosphere of the work, and on how the interaction of image and sound is delivered to the audience.

Conversion of Image into Sound Based on HSI Histogram (HSI 히스토그램에 기초한 이미지-사운드 변환)

  • Kim, Sung-Il
    • The Journal of the Acoustical Society of Korea / v.30 no.3 / pp.142-148 / 2011
  • The final aim of the present study is to develop an intelligent robot that emulates the human synesthetic ability to associate a color image with a specific sound, on the basis of mutual conversion between color images and sound. As a first step toward this goal, this study focuses on a basic system that converts a color image into sound. The proposed method is based on the analogy between the physical frequency information of light and of sound. It was implemented using HSI histograms obtained through RGB-to-HSI color model conversion, written in Microsoft Visual C++ (ver. 6.0). Two different color images were used in the simulation experiments, and the results revealed that the hue, saturation, and intensity elements of each input color image were converted into the fundamental frequency, harmonic, and octave elements of a sound, respectively. The converted sound elements were then synthesized to automatically generate a sound source in WAV file format using Csound.
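
To make the mapping concrete, here is a hedged Python sketch of the conversion pipeline; the paper implemented it with Microsoft Visual C++ and Csound, so the mapping constants and the additive synthesis below are illustrative assumptions rather than the authors' code.

```python
# A hedged sketch: convert RGB to HSI, take histograms, and map hue -> fundamental
# frequency, saturation -> harmonic weight, intensity -> octave, then write a WAV.
import math
import wave
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an (H, W, 3) float RGB image in [0, 1] to hue (radians), saturation, intensity."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.minimum(np.minimum(r, g), b) / np.maximum(i, 1e-6)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-6
    h = np.arccos(np.clip(num / den, -1.0, 1.0))
    h = np.where(b > g, 2.0 * math.pi - h, h)
    return h, s, i

def image_to_tone(rgb, sr=44100, dur=2.0):
    """Dominant hue sets the fundamental, mean saturation the harmonics, mean intensity the octave."""
    h, s, i = rgb_to_hsi(rgb)
    hist, edges = np.histogram(h, bins=12, range=(0.0, 2.0 * math.pi))
    base = 220.0 * (edges[np.argmax(hist)] / (2.0 * math.pi) + 0.5)  # assumed hue -> Hz map
    f0 = base * 2.0 ** round(2.0 * float(i.mean()))                  # assumed intensity -> octave map
    t = np.arange(int(sr * dur)) / sr
    tone = np.sin(2.0 * math.pi * f0 * t)
    for k in range(2, 6):                                            # harmonics weighted by saturation
        tone += (float(s.mean()) / k) * np.sin(2.0 * math.pi * k * f0 * t)
    return tone / np.max(np.abs(tone)) * 0.9

def write_wav(path, samples, sr=44100):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 16-bit PCM
        w.setframerate(sr)
        w.writeframes((samples * 32767).astype(np.int16).tobytes())

# Example with a synthetic reddish image.
img = np.zeros((64, 64, 3))
img[..., 0], img[..., 1], img[..., 2] = 0.8, 0.2, 0.1
write_wav("image_tone.wav", image_to_tone(img))
```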

The Sound Analysis of Tale of Tales (<이야기 속의 이야기> 사운드 분석)

  • Mok, Hae-Jung
    • Cartoon and Animation Studies / s.20 / pp.87-104 / 2010
  • Like film, animation creates meaning and emotion by combining image and sound. Tale of Tales, directed by Yuri Norstein, is a good text for analyzing animation sound in that it skillfully combines images with varied music and sound effects. This study focuses on analyzing how sound functions to create meaning in this film. Sound is generally categorized into dialogue, music, and sound effects, and animation has its own characteristics in each category. In animation, the voice for dialogue is created to correspond to the character's image, and rhythm is very important; moreover, sound effects in animation can be said to mimic not only sounds but also movement. This study analyzes the sound on the basis of these three factors and the concepts of the point of listening, subjective sound, and the sound bridge. Subjective sound built on the points of listening of the wolf and the baby gives the main characters a special position in the text. Overall, the repetitive combination of sound and image, the linguistic and annotative function of the sound effects, and the comparatively conventional use of music and sound effects enhance the film's emotional resonance and readability.

The narrative space of sound design in films (영화 사운드디자인의 내러티브 공간 연구)

  • Lee, Dong-Hwan
    • Journal of Digital Contents Society / v.17 no.5 / pp.391-400 / 2016
  • The purpose of this study is to reassert the important role of sound design in creating narrative space in films, focusing on re-interpreting Chion's composition of sound space as a narrative structure. Analysis of the sound design process shows that the physical properties of sound are deliberately manipulated to create layers of sound that the audience perceives in the same way it perceives and makes sense of actual reality, thereby creating a cinematic reality. The hierarchy of these layers is determined by the importance of the narrative information each sound carries: the higher layers are suited to conveying narrative information, while the lower layers are effective at delivering emotion to the audience. On this basis, each of Chion's spatial compositions is explained as a distinct storytelling area with a narrative role separate from the others.

Polyphonic sound event detection using multi-channel audio features and gated recurrent neural networks (다채널 오디오 특징값 및 게이트형 순환 신경망을 사용한 다성 사운드 이벤트 검출)

  • Ko, Sang-Sun;Cho, Hye-Seung;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.36 no.4 / pp.267-272 / 2017
  • In this paper, we propose an effective method of applying multi-channel audio features to GRNNs (Gated Recurrent Neural Networks) for polyphonic sound event detection. Real-life sounds often overlap, which makes them difficult to distinguish using mono-channel audio features, so the proposed method uses multi-channel audio features to improve detection performance. In addition, we apply a gated recurrent neural network, which is simpler than the LSTM (Long Short-Term Memory) network that currently shows the highest performance among recurrent neural networks. The experimental results show that the proposed method achieves better sound event detection performance than existing methods.
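
A minimal sketch of this model family (not the authors' exact network): multi-channel features, such as log-mel spectrograms from each channel stacked along the feature axis, are fed to a GRU, and a per-frame sigmoid layer outputs independent activity scores per event class so that overlapping events can be detected simultaneously. Layer sizes and the 0.5 threshold are illustrative assumptions; PyTorch is used here for convenience.

```python
# Illustrative GRU-based polyphonic sound event detector (sketch, not the paper's model).
import torch
import torch.nn as nn

class GRUSoundEventDetector(nn.Module):
    def __init__(self, n_channels=2, n_mels=40, hidden=64, n_classes=6):
        super().__init__()
        # Input per frame: features from all channels concatenated.
        self.gru = nn.GRU(input_size=n_channels * n_mels, hidden_size=hidden,
                          num_layers=2, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, features):
        # features: (batch, time, channels * mel_bins)
        out, _ = self.gru(features)
        return torch.sigmoid(self.classifier(out))   # (batch, time, classes) in [0, 1]

# Example: 8 clips, 500 frames each, 2 channels x 40 mel bins per frame.
model = GRUSoundEventDetector()
x = torch.randn(8, 500, 2 * 40)
frame_probs = model(x)
detections = frame_probs > 0.5        # per-frame, per-class multi-label decision
print(frame_probs.shape, detections.float().mean().item())
```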