• Title/Summary/Keyword: Sound Contents

Audio Event Detection Using Deep Neural Networks (깊은 신경망을 이용한 오디오 이벤트 검출)

  • Lim, Minkyu;Lee, Donghyun;Park, Hosung;Kim, Ji-Hwan
    • Journal of Digital Contents Society / v.18 no.1 / pp.183-190 / 2017
  • This paper proposes an audio event detection method using Deep Neural Networks (DNN). The proposed method applies a Feed-Forward Neural Network (FFNN) to generate output probabilities of twenty audio events for each frame. Mel-scale filter bank (FBANK) features are extracted from each frame, and five consecutive frames are concatenated into one vector, which serves as the input feature of the FFNN. The output layer of the FFNN produces audio event probabilities for each input feature vector. A segment is detected as an audio event when the event probability exceeds a threshold for more than five consecutive frames, and a detected event is regarded as continuing as long as it is detected again within one second. The proposed method achieves 71.8% accuracy on 20 classes drawn from the UrbanSound8K and BBC Sound FX datasets.
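
Below is a minimal Python sketch of the pipeline the abstract describes, assuming 40 FBANK coefficients, a 10 ms frame shift, and a single hidden layer; these values and all function names are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of the frame-level detection pipeline described above.
# Feature dimensions, layer sizes, the threshold and the frame shift are
# illustrative assumptions, not values taken from the paper.
import numpy as np

N_FBANK = 40          # Mel filter bank coefficients per frame (assumed)
CONTEXT = 5           # consecutive frames stacked into one input vector
N_EVENTS = 20         # audio event classes
FRAME_SHIFT_S = 0.01  # assumed frame shift of 10 ms

rng = np.random.default_rng(0)

def stack_context(fbank):
    """Concatenate each frame with its neighbours into CONTEXT*N_FBANK vectors."""
    n = fbank.shape[0] - CONTEXT + 1
    return np.stack([fbank[i:i + CONTEXT].ravel() for i in range(n)])

def ffnn_probs(x, w1, b1, w2, b2):
    """One-hidden-layer feed-forward net with a softmax output over events."""
    h = np.tanh(x @ w1 + b1)
    z = h @ w2 + b2
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def detect_events(probs, threshold=0.1, min_frames=5, max_gap_s=1.0):
    """Report an event when its probability exceeds the threshold for more
    than `min_frames` consecutive frames; a running event continues as long
    as it is re-detected within `max_gap_s` seconds."""
    max_gap = int(max_gap_s / FRAME_SHIFT_S)
    events = []
    for c in range(probs.shape[1]):
        run, last_hit, start = 0, None, None
        for t, hit in enumerate(probs[:, c] > threshold):
            run = run + 1 if hit else 0
            if run >= min_frames:
                if start is None or t - last_hit > max_gap:
                    start = t - run + 1
                    events.append([c, start, t])   # new event [class, start, end]
                else:
                    events[-1][2] = t              # extend the running event
                last_hit = t
    return events

# toy usage with random weights and features
fbank = rng.normal(size=(300, N_FBANK))
x = stack_context(fbank)
w1 = rng.normal(scale=0.1, size=(CONTEXT * N_FBANK, 128)); b1 = np.zeros(128)
w2 = rng.normal(scale=0.1, size=(128, N_EVENTS)); b2 = np.zeros(N_EVENTS)
print(detect_events(ffnn_probs(x, w1, b1, w2, b2)))
```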

Real-time Implementation of Sound into Color Conversion System Based on the Colored-hearing Synesthetic Perception (색-청 공감각 인지 기반 사운드-컬러 신호 실시간 변환 시스템의 구현)

  • Bae, Myung-Jin;Kim, Sung-Ill
    • The Journal of the Korea Contents Association / v.15 no.12 / pp.8-17 / 2015
  • This paper presents a sound-to-color signal conversion based on colored-hearing synesthesia. The aim is to implement a real-time conversion system focused on hearing and sight, which account for a great part of the bodily senses. The proposed real-time conversion of sound into color is simple and intuitive: scale, octave, and velocity are extracted from MIDI input signals and converted into hue, intensity, and saturation, respectively, as the basic elements of the HSI color model. In experiments, we implemented both a hardware system for delivering MIDI signals to a PC and a VC++-based software system for monitoring input and output signals, and confirmed that the conversion was performed correctly by the proposed method.
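
The mapping described above can be sketched as follows; the value ranges and the linear scale/octave/velocity-to-HSI assignments are illustrative assumptions, and the standard-library HSV conversion stands in for the paper's HSI display step.

```python
# Hedged sketch of the mapping described above: MIDI scale, octave and
# velocity mapped to hue, intensity and saturation of the HSI model.
import colorsys

def midi_to_hsi(note, velocity):
    """Map a MIDI note/velocity pair to (hue, saturation, intensity)."""
    scale = note % 12            # pitch class C..B
    octave = note // 12          # octave number 0..10
    hue = scale / 12.0           # 12 pitch classes spread over the hue circle
    intensity = octave / 10.0    # higher octave -> brighter colour (assumed)
    saturation = velocity / 127.0
    return hue, saturation, intensity

def hsi_to_rgb(h, s, i):
    """Approximate the HSI colour with the stdlib HSV conversion for
    monitoring purposes (the paper's exact HSI->RGB step may differ)."""
    return colorsys.hsv_to_rgb(h, s, i)

if __name__ == "__main__":
    # middle C (note 60) played at velocity 100
    h, s, i = midi_to_hsi(60, 100)
    print(h, s, i, hsi_to_rgb(h, s, i))
```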

Improvement of Environment Recognition using Multimodal Signal (멀티 신호를 이용한 환경 인식 성능 개선)

  • Park, Jun-Qyu;Baek, Seong-Joon
    • The Journal of the Korea Contents Association / v.10 no.12 / pp.27-33 / 2010
  • In this study, we conducted classification experiments with a Gaussian Mixture Model (GMM) on features extracted from a microphone, a gyro sensor, and an acceleration sensor in nine different environment types. Existing context-aware studies recognized the environment mainly from environmental sound captured by a microphone, but recognition was limited because environmental sound is structurally a mixture of various noises. We therefore propose adding gyro and acceleration sensor data in order to reflect the movement of the recognizing agent. Experimental results show that combining acceleration sensor data with the existing environmental sound features improves recognition performance by more than 5% compared with methods that use only environmental sound features from the microphone.
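
A minimal sketch of this kind of GMM-based environment classification on concatenated sensor features is shown below; the feature dimensions, number of mixture components, and synthetic data are assumptions for illustration only.

```python
# Hedged sketch of GMM-based environment classification on concatenated
# sound, gyro and acceleration features; feature extraction and dimensions
# are illustrative assumptions, not taken from the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

N_CLASSES = 9        # environment types
rng = np.random.default_rng(0)

def combine(sound_feat, gyro_feat, accel_feat):
    """Concatenate per-window features from the three sensors."""
    return np.hstack([sound_feat, gyro_feat, accel_feat])

# toy training data: 100 windows per class, 13 sound + 3 gyro + 3 accel dims
X = {c: combine(rng.normal(c, 1, (100, 13)),
                rng.normal(c, 1, (100, 3)),
                rng.normal(c, 1, (100, 3))) for c in range(N_CLASSES)}

# one GMM per environment class
models = {c: GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(X[c]) for c in range(N_CLASSES)}

def classify(x):
    """Pick the environment whose GMM gives the highest log-likelihood."""
    scores = {c: m.score(x.reshape(1, -1)) for c, m in models.items()}
    return max(scores, key=scores.get)

print(classify(combine(rng.normal(3, 1, 13), rng.normal(3, 1, 3), rng.normal(3, 1, 3))))
```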

IIR Filter Design of HRTF for Real-Time Implementation of 3D Sound by Synthetic Stereo Method (합성 스테레오 방식 3차원 입체음향의 실시간 구현을 위한 머리전달 함수의 IIR 필터 설계)

  • Park Jang-Sik;Kim Hyun-Tae
    • The Journal of the Korea Contents Association / v.5 no.6 / pp.74-86 / 2005
  • In this paper, we propose an algorithm that approximates high-order FIR filters with low-order IIR filters for the efficient implementation of two-channel 3-D surround sound systems using head-related transfer functions (HRTFs). The algorithm is based on balanced model reduction. Binaural sounds synthesized with the approximated HRTFs are reproduced over headphones and serve as cues for sound image localization. Dummy-head HRTFs given as 512-order FIR filters are approximated by 32nd-order IIR filters and the two are compared. Sound image localization experiments were carried out with 10 participants, both in computer simulation and on hardware with a TMS320C32. The results show that localization with the approximated IIR HRTFs is as accurate as with the original FIR-filter HRTFs.
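
A minimal numpy/scipy sketch of approximating a long FIR impulse response with a low-order IIR filter by balanced truncation is given below; the synthetic 512-tap "HRTF", the 32nd-order target, and the regularization constant are illustrative assumptions, not the paper's data or exact procedure.

```python
# Hedged sketch: balanced truncation of an FIR impulse response to a
# low-order IIR filter. Values are illustrative, not the paper's data.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, svd, cholesky
from scipy.signal import ss2tf

def fir_to_ss(h):
    """Shift-register state-space realisation of an FIR impulse response h."""
    n = len(h) - 1
    A = np.diag(np.ones(n - 1), -1)
    B = np.zeros((n, 1)); B[0, 0] = 1.0
    C = np.asarray(h[1:], float).reshape(1, -1)
    D = np.array([[h[0]]], float)
    return A, B, C, D

def balred_fir(h, order):
    """Balanced truncation of the FIR filter to an IIR filter of given order."""
    A, B, C, D = fir_to_ss(h)
    n = A.shape[0]
    Wc = solve_discrete_lyapunov(A, B @ B.T)          # controllability Gramian
    Wo = solve_discrete_lyapunov(A.T, C.T @ C)        # observability Gramian
    Lc = cholesky(Wc + 1e-12 * np.eye(n), lower=True)
    Lo = cholesky(Wo + 1e-12 * np.eye(n), lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                         # Hankel singular values
    T = Lc @ Vt.T / np.sqrt(s)                        # balancing transform
    Ti = (U / np.sqrt(s)).T @ Lo.T                    # its inverse
    Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
    r = order
    num, den = ss2tf(Ab[:r, :r], Bb[:r], Cb[:, :r], D)
    return num.ravel(), den                           # IIR coefficients b, a

# toy usage: a 512-tap decaying random "HRTF" reduced to a 32nd-order IIR filter
rng = np.random.default_rng(0)
h = rng.normal(size=512) * np.exp(-np.arange(512) / 60.0)
b, a = balred_fir(h, 32)
print(len(b), len(a))
```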

Nonverbal Expressions in New Media Art -Case Studies about Facial Expressions and Sound (뉴미디어 아트에 나타난 비언어적 표현 -표정과 소리의 사례연구를 중심으로)

  • Yoo, Mi;An, KyoungHee
    • The Journal of the Korea Contents Association / v.19 no.10 / pp.146-156 / 2019
  • New media art moves beyond constraints of place and time, sublimates the benefits of technology into art, and presents a new way of communicating with the audience. This paper analyzes tendencies in nonverbal communication by examining examples of facial expression and sound used in new media art from its early period. The analysis shows that the digital paradigm in new media art embodies nonlinear thinking, leading to perceptual effects of immersion and dispersion. Facial expression in new media art not only overcomes the spatial and temporal limits of expression through visual distortion, enlargement, and virtualization, but also enables new modes of communication in which facial parts are combined or separated in the digital environment. Sound in new media art does not remain purely auditory: it pursues multisensory and synesthetic experience in cooperation with the visual and the tactile, and evolves by expanding space and engaging the sensibility and interaction of the audience.

Natural 3D Lip-Synch Animation Based on Korean Phonemic Data (한국어 음소를 이용한 자연스러운 3D 립싱크 애니메이션)

  • Jung, Il-Hong;Kim, Eun-Ji
    • Journal of Digital Contents Society / v.9 no.2 / pp.331-339 / 2008
  • This paper presents the development of a highly efficient and accurate system for producing animation key data for 3D lip-synch animation. The system automatically extracts Korean phonemes from sound and text data and then computes animation key data from the segmented phonemes. This key data is used in the 3D lip-synch animation system developed herein as well as in commercial 3D facial animation systems. Conventional 3D lip-synch animation systems segment sound data into phonemes based on the English phonemic system and produce lip-synch animation key data from the segmented phonemes. A drawback of this approach is that it produces unnatural animation for Korean content; it also requires supplementary manual work. In this paper, we propose a 3D lip-synch animation system that automatically segments sound and text data into phonemes based on the Korean phonemic system and produces natural lip-synch animation from the segmented phonemes.
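
A minimal sketch of turning time-aligned Korean phonemes into lip-synch key data is shown below; the phoneme-to-mouth-shape table, timing format, and key-frame fields are invented for illustration and are not the paper's actual mapping.

```python
# Hedged sketch: time-aligned Korean phonemes -> lip-synch key frames.
# The viseme table, timings and fields are illustrative assumptions.
from dataclasses import dataclass

# assumed mouth-shape (viseme) labels for a few Korean phonemes
PHONEME_TO_VISEME = {
    "ㅏ": "open_wide", "ㅗ": "round", "ㅜ": "round_small",
    "ㅣ": "spread", "ㅡ": "neutral", "ㅁ": "closed", "ㅂ": "closed",
}

@dataclass
class KeyFrame:
    time_s: float     # key time at the centre of the phoneme segment
    viseme: str       # target mouth shape
    weight: float     # blend weight toward the target shape

def phonemes_to_keyframes(segments, default="neutral"):
    """segments: list of (phoneme, start_s, end_s) from automatic segmentation."""
    keys = []
    for phoneme, start, end in segments:
        viseme = PHONEME_TO_VISEME.get(phoneme, default)
        centre = (start + end) / 2.0
        # shorter phonemes get a smaller blend weight for smoother motion
        weight = min(1.0, (end - start) / 0.12)
        keys.append(KeyFrame(centre, viseme, weight))
    return keys

# toy usage on a hand-made segmentation of "엄마"
print(phonemes_to_keyframes([("ㅓ", 0.00, 0.10), ("ㅁ", 0.10, 0.22), ("ㅏ", 0.22, 0.40)]))
```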

Environmental Education in the Moral Education (도덕과 교육에서의 환경 교육)

  • 윤현진
    • Hwankyungkyoyuk / v.12 no.1 / pp.64-75 / 1999
  • The goals of moral education according to the 7th educational curriculum are (1) to learn the basic life customs and ethical norms necessary for a desirable life, (2) to develop the judgment needed to solve ethical problems in daily life desirably and practically, (3) to develop sound citizenship, national identity and consciousness, and a consciousness of world peace and mankind's mutual prosperity, and (4) to develop the ethical disposition to practice the ideals and principles of life systematically. Based on these goals, the following can be established as goals of environmental education: (1) to learn the judgment needed to solve environmental problems in society practically, grounded in ethical understanding, and (2) to recognize that environmental consciousness is a basic element of sound citizenship, national identity and consciousness, and mankind's mutual prosperity, and to develop attitudes of practicing environmental preservation in daily life. In this way, the intellectual, affective, and behavioral aspects of environmental education can be established in balance within ethics education. To achieve these goals, the contents of the ethics subject are organized into four domains: (1) individual life, (2) home, neighborhood, and school life, (3) social life, and (4) national life. Environmental education is mainly included in the domain of social life. Content concerning environmental education accounts for 22 (32.4%) of the 68 teaching factors taught in the ethics subject from the 3rd to the 10th grade. These 22 environmental teaching factors mainly concern environmental ethics, environmental preservation and countermeasures, and sound consumption. Classified by goal, the environmental contents in the 7th curriculum for the ethics subject emphasize environmental values and attitudes, action and participation, and information and knowledge. Therefore, the recommended teaching and learning method for environmental education in the ethics subject is to motivate students to practice, or to have them practice in person; for example, role-play, value-conflict, and group study models can be applied according to the topic of environmental education.

A Program for Korean Animation Sound Libraries (국내용 애니메이션 사운드 라이브러리 구축 방안)

  • Rhim, Young-Kyu
    • Cartoon and Animation Studies / s.15 / pp.221-235 / 2009
  • Most of the sounds used in animated films are artificially made. A large portion are either actual sound recordings or artificially processed sounds made with professional sound equipment such as synthesizers. A single animation episode contains a large number of sounds, resulting in significant sound production costs. These sounds could well be reused in other films or animations, but in practice they rarely are. This thesis discusses how these sound sources can add new value to the present market as usable 'digital content'. The iTunes Music Store, an Apple product, is regarded as the most successful digital content distribution model to date, and its sound library system has potential for application in the Korean sound industry. Under such a system, the sound creator connects directly to the online store and becomes the content supplier, while the user obtains the needed content easily and at a low price. The most important part of building this system is the search engine, which must let users find data quickly and must be designed to take the characteristics of the Korean language into consideration. This thesis also proposes incorporating a wiki system so that users can search, build, and share their own databases. Based on this system, a Korean animation sound library can foster development and growth of the sound source industry as a new form of digital sound content.
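
As one illustration of a search engine that accounts for the characteristics of the Korean language, the sketch below decomposes Hangul syllables into initial consonants so library entries can be matched by initial-consonant queries; the index structure and example data are assumptions, not the thesis's design.

```python
# Hedged sketch of Korean-aware matching for a sound library: Hangul
# syllables are reduced to their initial consonants (choseong) so entries
# can be found with initial-consonant queries. Example data is invented.
CHOSEONG = ["ㄱ", "ㄲ", "ㄴ", "ㄷ", "ㄸ", "ㄹ", "ㅁ", "ㅂ", "ㅃ", "ㅅ",
            "ㅆ", "ㅇ", "ㅈ", "ㅉ", "ㅊ", "ㅋ", "ㅌ", "ㅍ", "ㅎ"]

def initial_consonants(text):
    """Map each Hangul syllable to its initial consonant; keep other chars."""
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:                 # precomposed Hangul block
            out.append(CHOSEONG[(code - 0xAC00) // (21 * 28)])
        else:
            out.append(ch)
    return "".join(out)

def search(library, query):
    """Match an initial-consonant query (e.g. 'ㅂㅅㄹ') against sound names."""
    return [name for name in library if query in initial_consonants(name)]

sounds = ["빗소리", "천둥 소리", "발자국", "문 여는 소리"]
print(search(sounds, "ㅂㅅㄹ"))   # -> ['빗소리']
```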

Yoga of Consilience through Immersive Sound Experience (실감음향 체험을 통한 통섭의 요가)

  • Hyon, Jinoh
    • Journal of Broadcast Engineering / v.26 no.5 / pp.643-651 / 2021
  • Most people acquire information visually. The screens of computers, smartphones, and similar devices constantly stimulate people's eyes, increasing fatigue. Against this social backdrop, the realistic and rich sound of 21st-century state-of-the-art sound systems can affect people's bodies and minds in various ways. Through sound, human beings are given space to calm and observe themselves. The purpose of this paper is to introduce immersive yoga training based on 3D sound, conducted jointly by ALgruppe and Rory's PranaLab, and to promote understanding of immersive audio systems. People who experienced immersive yoga not only enjoyed the effect of the sound but also received a powerful energy that gave them a sense of inner self-awareness. This responds to the multidisciplinary exchange demanded by today's knowledge society and, at the same time, points to the possibility of new cultural content.

EEG-based Analysis of Auditory Stimulations Generated from Watching Disgust-Eliciting Videos (혐오 영상 시청시 청각적 자극에 대한 EEG 기반의 분석)

  • Lee, Mi-Jin;Kim, Hae-Lin;Kang, Hang-Bong
    • Journal of Korea Multimedia Society / v.19 no.4 / pp.756-764 / 2016
  • In this paper, we present an electroencephalography (EEG)-based power spectrum analysis of auditory stimuli as a coping mechanism for disgust affection and phobia. Disgust is a negative emotion generated in trying to reject something harmful to oneself, and it is usually related to mental illnesses such as obsessive-compulsive disorder, phobia, and depression. In our experiments, participants watched videos of horrible body mutilation and disgusting creatures with either the original soundtrack or relaxing or exciting music as auditory stimulation; after watching the videos with the original soundtrack, they watched the same videos with a different audio background, such as soothing or cheerful music. We analyzed the EEG data using relative power spectra and examined the participants' survey results. The results demonstrate that disgust affection decreases when participants watch the videos with relaxing or exciting music instead of the original soundtracks. Moreover, we confirmed that human brainwaves react differently depending on the type of audio and the source of the disgust affection.
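
A minimal sketch of the relative power spectrum computation is given below, using Welch's method and conventional EEG band edges; the sampling rate, band boundaries, and synthetic signal are assumptions, not the study's recording setup.

```python
# Hedged sketch of relative power spectrum analysis: Welch PSD per channel,
# band power integrated over the usual EEG bands and normalised by total
# power. Band edges and the synthetic signal are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def relative_band_power(x, fs):
    """Return each band's share of total power for a single EEG channel."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    total = trapezoid(psd, freqs)
    rel = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        rel[name] = trapezoid(psd[mask], freqs[mask]) / total
    return rel

# toy usage: 60 s of noise with an added 10 Hz (alpha) component, fs = 256 Hz
fs = 256
t = np.arange(60 * fs) / fs
x = np.random.default_rng(0).normal(size=t.size) + 2 * np.sin(2 * np.pi * 10 * t)
print(relative_band_power(x, fs))
```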