• Title/Summary/Keyword: audio frequency

Design and Implementation of a Bluetooth Baseband Module with DMA Interface (DMA 인터페이스를 갖는 블루투스 기저대역 모듈의 설계 및 구현)

  • Cheon, Ik-Jae;O, Jong-Hwan;Im, Ji-Suk;Kim, Bo-Gwan;Park, In-Cheol
    • Journal of the Institute of Electronics Engineers of Korea SD / v.39 no.3 / pp.98-109 / 2002
  • Bluetooth technology is a publicly available specification proposed for Radio Frequency (RF) communication for short-range and point-to-multipoint voice and data transfer. It operates in the 2.4 GHz ISM (Industrial, Scientific and Medical) band and offers the potential for low-cost, broadband wireless access for various mobile and portable devices at a range of about 10 meters. In this paper, we describe the structure and the test results of the Bluetooth baseband module with a direct memory access (DMA) interface that we have developed. The module consists of three blocks: a link controller, a UART interface, and an audio CODEC. It has a bus interface for data communication between the module and the main processor, and an RF interface for transmitting the bit-stream between the module and the RF module. The bus interface includes a DMA interface. Compared with a link controller using FIFOs, the module with DMA differs considerably in module size and data-processing speed. The smaller module enables low cost and a wide range of applications. In addition, it supports firmware upgrades through the UART. An FPGA and an ASIC implementation of this module, designed as soft IP, were tested for file and bit-stream transfers between PCs.
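
As a rough host-side illustration of why a DMA interface reduces processing overhead compared with FIFO-based transfer, the toy Python model below counts the CPU events needed to receive one hypothetical 64-byte payload. It is not the paper's hardware design; the packet size and FIFO depth are assumptions.

```python
# Toy model (not the paper's hardware): contrast per-byte FIFO service with a
# single DMA completion event for one hypothetical 64-byte baseband payload.
from collections import deque

PACKET = bytes(range(64))  # assumed 64-byte payload

def fifo_receive(packet, fifo_depth=8):
    """Processor drains a small FIFO; it must handle every byte itself."""
    fifo = deque(maxlen=fifo_depth)
    received, cpu_events = bytearray(), 0
    for byte in packet:
        fifo.append(byte)                # RF side pushes a byte into the FIFO
        received.append(fifo.popleft())  # CPU pops it out: one event per byte
        cpu_events += 1
    return bytes(received), cpu_events

def dma_receive(packet):
    """DMA engine copies the whole payload to memory; CPU gets one interrupt."""
    buffer = bytes(packet)               # block transfer done by the DMA controller
    return buffer, 1

for name, (data, events) in (("FIFO", fifo_receive(PACKET)), ("DMA", dma_receive(PACKET))):
    print(f"{name}: {len(data)} bytes received with {events} CPU event(s)")
```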

Difference of Autonomic Nervous System Responses among Boredom, Pain, and Surprise (무료함, 통증, 그리고 놀람 정서 간 자율신경계 반응의 차이)

  • Jang, Eun-Hye;Eum, Yeong-Ji;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Science of Emotion and Sensibility / v.14 no.4 / pp.503-512 / 2011
  • Recently in HCI research, emotion recognition has been one of the core processes for implementing emotional intelligence. Many studies use bio-signals to recognize human emotions, but most address only the basic emotions, and very few cover other emotions. The purpose of the present study is to confirm the difference in autonomic nervous system (ANS) responses among three emotions (boredom, pain, and surprise). There were 217 participants in total (96 male, 121 female); we presented audio-visual stimuli to induce boredom and surprise, and applied pressure with a sphygmomanometer to induce pain. While the emotional stimuli were presented, we measured electrodermal activity (EDA), skin temperature (SKT), electrocardiac activity (ECG), and photoplethysmography (PPG); in addition, participants classified their present emotion and rated its intensity on an emotion assessment scale. Evaluation of the emotional stimuli showed a mean relevance of 92.5% and a mean efficiency of 5.43, indicating that each stimulus induced its intended emotion quite effectively. Analysis of the measured ANS responses showed significant differences between the baseline and the emotional state in skin conductance response, SKT, heart rate, low frequency, and blood volume pulse amplitude. In addition, the ANS responses elicited by the three emotions differed significantly from one another. These results can be used to extend emotion theory, to develop algorithms that recognize the three emotions (boredom, surprise, and pain) from these response indicators, and to build applications that differentiate various human emotions in computer systems.
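
A minimal sketch of the kind of statistical comparison the abstract describes is given below; it uses hypothetical skin conductance response (SCR) values and SciPy, and is not the authors' analysis pipeline.

```python
# Minimal sketch (hypothetical data, not the authors' pipeline): compare an ANS
# measure between baseline and emotional states, and among three emotions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20                                            # hypothetical participants per condition

scr_baseline = rng.normal(2.0, 0.3, n)            # skin conductance response (uS), resting
scr_boredom  = scr_baseline + rng.normal(0.1, 0.2, n)
scr_pain     = scr_baseline + rng.normal(0.8, 0.3, n)
scr_surprise = scr_baseline + rng.normal(0.5, 0.3, n)

# Baseline vs. emotional state (paired t-test), e.g., for pain.
t, p = stats.ttest_rel(scr_pain, scr_baseline)
print(f"pain vs. baseline: t={t:.2f}, p={p:.4f}")

# Differences among emotions on baseline-corrected responses (one-way ANOVA).
f, p = stats.f_oneway(scr_boredom - scr_baseline,
                      scr_pain - scr_baseline,
                      scr_surprise - scr_baseline)
print(f"boredom vs. pain vs. surprise: F={f:.2f}, p={p:.4f}")
```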

Geoelectrical Structure and Groundwater Distribution in the South-eastern Region of Jeju Island Revealed by Controlled Source Audio-frequency Magneto Telluric (CSAMT) survey (인공송신원 가청주파수 자기지전류 탐사를 이용한 제주 동남부의 전기비저항 구조 및 지하수 분포 조사)

  • Yang, Jun-Mo;Kwon, Byung-Doo;Lee, Hei-Soon;Song, Sung-Ho;Park, Gyeo-Soon;Lee, Kyu-Sang
    • Economic and Environmental Geology / v.40 no.1 s.182 / pp.67-85 / 2007
  • We performed a CSAMT survey to examine the geoelectrical structure and groundwater distribution along two survey lines across the south-eastern region of Jeju Island. Three kinds of 1-D inversion techniques were employed, taking account of the geological situation around the observation sites, and their inversion results were compared and analyzed together to improve the reliability of the interpretation. The inverted resistivity structures reveal a three-layered structure composed of layers with high, low, and lower resistivity from the surface downward. By comparing the inverted resistivity model with core logs of deep boreholes near the observation sites, the lithology of each inverted layer was inferred. The first and second layers correspond to the basaltic layer with a thickness of 100~250 m, and the third layer to the Seoguipo Formation and the U Formation; the thickness of the Seoguipo Formation could not be estimated because of the limited investigation depth and the small resistivity difference between the two formations. Nevertheless, the Seoguipo Formation, which is strongly associated with the groundwater system in the south-eastern region of Jeju Island, showed conspicuous spatial continuity from the middle mountain area to the coastal area.
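
To illustrate the kind of 1-D layered-earth response that such an inversion fits, the sketch below computes plane-wave (far-field) apparent resistivities for a hypothetical high-low-lower three-layer model using the standard Wait recursion; it is not the authors' code, and near-field CSAMT source effects are ignored.

```python
# Minimal plane-wave (far-field MT) forward model for a 1-D layered earth,
# illustrating the response a CSAMT inversion fits; model values are hypothetical.
import numpy as np

MU0 = 4e-7 * np.pi  # magnetic permeability of free space (H/m)

def apparent_resistivity(freq, rho, thickness):
    """Wait recursion for a layered half-space.
    rho: layer resistivities (ohm-m); thickness: thicknesses of all but the last layer (m)."""
    omega = 2 * np.pi * freq
    k = np.sqrt(1j * omega * MU0 / np.asarray(rho, dtype=complex))
    z = 1j * omega * MU0 / k[-1]               # intrinsic impedance of the basal half-space
    for j in range(len(thickness) - 1, -1, -1):
        z0 = 1j * omega * MU0 / k[j]           # intrinsic impedance of layer j
        t = np.tanh(k[j] * thickness[j])
        z = z0 * (z + z0 * t) / (z0 + z * t)   # propagate the impedance up one layer
    return abs(z) ** 2 / (omega * MU0)

# Hypothetical high-low-lower structure (basalt over more conductive formations).
model_rho = [1000.0, 100.0, 20.0]              # ohm-m
model_thk = [150.0, 300.0]                     # m (half-space below)
for f in (1024.0, 64.0, 4.0):                  # audio-frequency band, Hz
    print(f"{f:7.1f} Hz -> rho_a = {apparent_resistivity(f, model_rho, model_thk):7.1f} ohm-m")
```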

An Analysis of The Repetitive Sound Effects Influencing on Game User's Flow (반복사운드 활용이 게임 유저의 몰입에 미치는 영향 분석)

  • Kim, Wan-Suk;Yun, Jae-Sun;Lim, Chan;Min, Byung-Chul
    • The Journal of the Korea Contents Association / v.10 no.3 / pp.149-156 / 2010
  • There are elements that lead a game user into the state of flow (the mental state in which a person is fully immersed in what he or she is doing, with a feeling of energized focus, full involvement, and success in the process of the activity). In game content, for example, a sophisticated use of sound is one of the important elements that must be considered in a quality game development process. When proper audio conditions are satisfied, a game user intuitively solves problems through the auditory sense and becomes immersed in the game spontaneously. Among the storytelling elements of game content that bring the user into a flow state, this study analyzes a game user's flow with respect to the repetitive use of sound. Specifically, the 'flow analysis' of Csikszentmihalyi, M. and the 'flow factors' of Donna L. Hoffman & Thomas P. Novak serve as references, in comparison with a preceding study that analyzed games and flow with a focus on visual elements. Ponpoko (Sigma Enterprise Inc., 1981) and Bio Hazard 4 (Capcom, 2007) are used as the main texts. To test the proposition of the study, users' reactions are monitored while they listen to repetitive and ordinary sound, and the questionnaire results are analyzed with frequency analysis and MANOVA (multivariate analysis of variance).
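
As a minimal illustration of the MANOVA mentioned at the end of the abstract, the sketch below applies statsmodels' MANOVA to hypothetical questionnaire scores for two sound conditions; the variable names and data are assumptions, not the study's actual measures.

```python
# Minimal sketch (hypothetical data and column names) of a MANOVA comparing
# flow-related responses between repetitive-sound and ordinary-sound conditions.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 30
df = pd.DataFrame({
    "sound": np.repeat(["repetitive", "ordinary"], n),
    # Hypothetical flow-questionnaire subscales.
    "focus": np.concatenate([rng.normal(4.1, 0.5, n), rng.normal(3.6, 0.5, n)]),
    "enjoyment": np.concatenate([rng.normal(3.9, 0.6, n), rng.normal(3.7, 0.6, n)]),
})

# Multivariate test: do the two sound conditions differ on the set of flow measures?
print(MANOVA.from_formula("focus + enjoyment ~ sound", data=df).mv_test())
```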

The Analysis and Implementation of Realistic Sound using Doppler Effect (도플러 효과를 이용한 실감 음향 분석 및 구현)

  • Yim, Yong-Min;Lim, Heung-Jun;Heo, Jun-Seok;Park, Jun-Young;Do, Yun-Hyung;Lee, Kangwhan
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.523-526 / 2017
  • In recent technology, 3D audio is used to enhance immersion in virtual reality. This reflects growing public interest in VR and AR, which are closely related to the field of computer graphics. A great deal of research has been carried out in recent years on 3D sound fields. However, existing 3D sound devices for virtual reality apply the angle and altitude of the sound source relative to the listener, but do not reproduce the Doppler effect that occurs as the sound approaches or moves away from the listener. Therefore, this paper presents realistic 3D sound that utilizes the Doppler effect with a spatial rotation speaker. We map the source sound in 3D space onto the real space where the user is located, and produce realistic 3D sound by manipulating the rotation angle, phase difference, and output volume of the sound in 3D space according to the location of the virtual sound source. Utilizing both the natural Doppler effect of the rotating sound produced by the spatial rotation speaker and an artificial Doppler effect generated by frequency modulation can improve the quality of sound in virtual reality for perspective listening.
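
Below is a minimal sketch of the artificial Doppler effect described above, in which the perceived frequency is modulated according to the source's radial velocity toward the listener; the sample rate, base frequency, and velocity profile are assumptions, not the authors' system.

```python
# Minimal sketch of an artificial Doppler shift applied to a tone by frequency modulation.
import numpy as np

C = 343.0  # speed of sound in air (m/s)

def doppler_frequency(f_source, v_radial):
    """Observed frequency for a moving source; v_radial > 0 means approaching."""
    return f_source * C / (C - v_radial)

fs = 44100
t = np.arange(0, 1.0, 1 / fs)
f0 = 440.0
# Hypothetical rotating source: radial velocity oscillates as the speaker sweeps past.
v = 5.0 * np.cos(2 * np.pi * 1.0 * t)        # +/- 5 m/s, once per second
f_inst = doppler_frequency(f0, v)            # instantaneous frequency (Hz)
phase = 2 * np.pi * np.cumsum(f_inst) / fs   # integrate frequency to get phase
tone = 0.5 * np.sin(phase)                   # frequency-modulated "Doppler" tone

print(f"observed frequency range: {f_inst.min():.1f}-{f_inst.max():.1f} Hz")
```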

A Diagnostic Study of safety education in elementary schools based on PRECEDE Model (PRECEDE 모형을 이용한 일부 초등학교 안전교육의 진단적 연구)

  • 백경원;이명선
    • Korean Journal of Health Education and Promotion / v.18 no.1 / pp.35-47 / 2001
  • As the complexity of our environment increases with advances in industry and growth in vehicle traffic, injury-causing accidents are on the rise. Consequently, there is increasing emphasis on systematic and continual safety education for injury-preventive behaviors. This study investigates the safety-related problems of elementary school students based on the PRECEDE model proposed by Green et al. (1980) in order to comprehensively identify the requirements of school safety education. The identified requirements were used to diagnose the current state of elementary school safety education through the analysis of multidimensional factors. A questionnaire survey was conducted on 594 sixth-grade students from 4 randomly selected schools in Seoul to examine their injury-preventive behaviors and to determine the educational diagnosis variables that affect them. The survey was conducted over 3 weeks, from April 12, 1999 to May 8, 1999. A summary of the survey results is presented below. 1. Situations in which accidents had occurred were, in order of frequency, 'during play or sports activities within the school grounds' (59.6%), 'during play on local streets' (49.5%), and 'traffic accidents' (41.6%). 2. Among the injury-preventive behaviors, 'not playing at high-traffic locations such as streets and construction sites' showed the highest level of observance, while 'wearing helmets and joint protection devices during play' was observed least. 3. Relating injury-preventive behaviors to the educational diagnosis variables showed, for the predisposing factors, that lower 'perception of injury accidents' (p<0.001) combined with higher 'concern for injury accidents' (p<0.001), 'practice of preventive behavior' (p<0.001), and 'level of safety knowledge' (p<0.001) resulted in significantly higher observance of injury-preventive behaviors. For the enabling factors, a higher 'perceived level of school safety education' (p<0.001) and 'availability of safety education resources' (p<0.01) indicated significantly higher observance. For the reinforcing factors, more frequent exposure to 'safety education brochures' (p<0.01) and 'audio-visual material for safety education' (p<0.01), combined with more 'regional safety education' (p<0.01), 'home safety education' (p<0.01), 'school safety education' (p<0.001), and 'parents' observance of preventive behaviors' (p<0.001), showed significantly higher observance of injury-preventive behaviors. 4. An analysis of the factors affecting injury-preventive behaviors showed that the enabling factor 'awareness of school safety education' had the highest correlation with injury-preventive behaviors, followed, in order of significance, by 'practice of preventive behavior', 'perception of injury accidents', 'level of safety knowledge', 'parents' observance of preventive behaviors', and 'concern for injury accidents'.

Development of Safe Stove System using Sound Wave Fire Extinguisher (음파 소화기를 이용한 안전 스토브 시스템 개발)

  • Seo, Yunwon;Lee, Sukjae;Park, Yungjoo;Kim, Kinam;Choi, Yongrae;Hwang, Hyungjun;Han, Seunghan;Shim, Dongha
    • Fire Science and Engineering / v.32 no.6 / pp.34-39 / 2018
  • In this paper, the architecture of a safe stove with an automatic fire-suppression function using a sound wave fire extinguisher is proposed and developed for the first time. A microcontroller connected to a fire sensor detects a fire and suppresses it by driving the fire extinguisher. The sound wave fire extinguisher consists of a speaker and a collimator, and is driven by a driver module that includes an audio amplifier. Attenuation of the sound wave is reduced by preventing sound diffusion with an enclosure surrounding the stove. The frequency of the sound wave is set to 50 Hz, and a sound pressure of 93 dBA is measured at a distance of 0.5 m. It takes at most 8 and 15 seconds to suppress the flames from 7 cc and 14 cc of flammable liquid, respectively, which corresponds to 24% and 42% of the natural extinguishing time. Since the proposed safe stove is non-toxic and leaves no residue, unlike conventional extinguishers, it can be combined with various home appliances to suppress early-stage fires and prevent fire spread.
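
For illustration only, the sketch below generates the 50 Hz tone reported in the paper as a WAV file that an audio amplifier stage could play; the sample rate, duration, and amplitude are assumptions, and the acoustic power required for extinguishing is not modeled.

```python
# Minimal sketch: generate a 50 Hz drive signal for the extinguishing speaker as a WAV file.
import numpy as np
import wave

FS = 8000          # sample rate (Hz), assumed
FREQ = 50.0        # extinguishing tone frequency used in the paper (Hz)
DURATION = 15.0    # seconds, the upper bound of the reported suppression time

t = np.arange(int(FS * DURATION)) / FS
signal = 0.9 * np.sin(2 * np.pi * FREQ * t)      # near-full-scale sine
pcm = (signal * 32767).astype(np.int16)

with wave.open("extinguisher_tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)        # 16-bit samples
    w.setframerate(FS)
    w.writeframes(pcm.tobytes())
```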

Comprehensive analysis of deep learning-based target classifiers in small and imbalanced active sonar datasets (소량 및 불균형 능동소나 데이터세트에 대한 딥러닝 기반 표적식별기의 종합적인 분석)

  • Geunhwan Kim;Youngsang Hwang;Sungjin Shin;Juho Kim;Soobok Hwang;Youngmin Choo
    • The Journal of the Acoustical Society of Korea / v.42 no.4 / pp.329-344 / 2023
  • In this study, we comprehensively analyze the generalization performance of various deep learning-based active sonar target classifiers when applied to small and imbalanced active sonar datasets. To generate the active sonar datasets, we use data from two oceanic experiments conducted at different times and in different ocean areas. Each sample in the active sonar datasets is a time-frequency domain image extracted from the audio signal of a contact after the detection process. For the comprehensive analysis, we utilize 22 Convolutional Neural Network (CNN) models. The two datasets are used alternately as train/validation datasets and test datasets. To estimate the variance in the output of the target classifiers, the experiments on the train/validation/test datasets are repeated 10 times. Hyperparameters for training are optimized using Bayesian optimization. The results demonstrate that shallow CNN models show superior robustness and generalization performance compared to most of the deep CNN models. The results of this paper can serve as a valuable reference for future research directions in deep learning-based active sonar target classification.
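
As a hedged sketch of the kind of classifier being compared (not one of the paper's 22 models), the PyTorch snippet below defines a shallow CNN over single-channel time-frequency images, in line with the finding that shallow models generalized well; the input size and class count are assumptions.

```python
# Minimal sketch: a shallow CNN for 1-channel 64x64 time-frequency images of sonar contacts.
import torch
import torch.nn as nn

class ShallowSonarCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ShallowSonarCNN()
dummy = torch.randn(4, 1, 64, 64)   # a batch of 4 hypothetical spectrogram images
print(model(dummy).shape)           # torch.Size([4, 2])
```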

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique used to analyze the speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing researchers have attained impressive results mostly by utilizing acted speech recorded by skilled actors in a controlled environment for various scenarios. In particular, there is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions than spontaneous speech. For this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to conduct emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using the VGG (Visual Geometry Group) network after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, consisting of 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. As a result, we achieved an average accuracy of 83.5% for adults and 73.0% for young people using time-frequency 2-dimensional spectrograms. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying spontaneous emotional expression.
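
A minimal sketch of the pipeline shape described above, assuming SciPy and torchvision are available: a 1-D waveform is converted to a log-spectrogram image, resized, and passed through a VGG16 whose final layer is replaced for 7 emotion classes. Training on the AI-Hub corpus and the exact preprocessing are not reproduced here.

```python
# Minimal sketch: 1-D audio -> 2-D spectrogram image -> VGG16 with a 7-class head.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.signal import spectrogram
from torchvision.models import vgg16

SR = 16000
audio = np.random.randn(SR * 2).astype(np.float32)      # 2 s of hypothetical speech

# Time-frequency image from the raw waveform.
_, _, sxx = spectrogram(audio, fs=SR, nperseg=512, noverlap=256)
img = torch.log1p(torch.from_numpy(sxx.astype(np.float32)))[None, None]  # (1, 1, freq, time)
img = F.interpolate(img, size=(224, 224), mode="bilinear", align_corners=False)
img = img.repeat(1, 3, 1, 1)                             # VGG expects 3 channels

model = vgg16(weights=None)                              # untrained backbone for illustration
model.classifier[6] = nn.Linear(4096, 7)                 # joy, love, anger, fear, sadness, surprise, neutral
print(model(img).shape)                                  # torch.Size([1, 7])
```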

Sound Engine for Korean Traditional Instruments Using General Purpose Digital Signal Processor (범용 디지털 신호처리기를 이용한 국악기 사운드 엔진 개발)

  • Kang, Myeong-Su;Cho, Sang-Jin;Kwon, Sun-Deok;Chong, Ui-Pil
    • The Journal of the Acoustical Society of Korea / v.28 no.3 / pp.229-238 / 2009
  • This paper describes a sound engine for Korean traditional instruments, the Gayageum and the Taepyeongso, implemented on a TMS320F2812. Gayageum and Taepyeongso models based on commuted waveguide synthesis (CWS) are used to synthesize each sound. The proposed sound engine has an instrument selection button to choose one of the instruments, and the corresponding model produces the sound at each time step. Every synthesized sound sample is transmitted to a DAC (TLV5638) over SPI and played through a speaker via an audio interface. The length of the delay line determines the fundamental frequency of the desired sound. To determine the length of the delay line, the time needed to synthesize one sound sample is measured using a GPIO pin: it takes 28.6 μs for the Gayageum and 21 μs for the Taepyeongso. Each sound sample is synthesized and transferred to the DAC in an interrupt service routine (ISR) of the proposed sound engine. A timer of the TMS320F2812 provides four events for generating interrupts; in this paper, the interrupt is generated by the period-matching event, and the ISR is called every 60 μs. Comparison with the original sounds and their spectra shows that the results represent the timbres of the instruments well, except for the 'Mu, Hwang, Tae, Joong' notes of the Taepyeongso. Moreover, only one sound is produced when playing the Taepyeongso, and it takes 21 μs for real-time playing. In the case of the Gayageum, players usually use two fingers (thumb and middle finger, or thumb and index finger), so it takes 57.2 μs for real-time playing.
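
To illustrate the delay-line principle the abstract refers to (the delay-line length is roughly the sample rate divided by the fundamental frequency), here is a minimal Karplus-Strong-style plucked-string sketch; the authors' actual models use commuted waveguide synthesis on a TMS320F2812, which this does not reproduce.

```python
# Minimal Karplus-Strong-style sketch: the ring-buffer (delay-line) length sets the pitch.
import numpy as np

def pluck(f0, fs=44100, duration=1.0, damping=0.996):
    n = int(round(fs / f0))                 # delay-line length in samples -> f0 ~ fs / n
    delay = np.random.uniform(-1, 1, n)     # noise burst "excites" the string
    out = np.empty(int(fs * duration))
    for i in range(len(out)):
        out[i] = delay[i % n]
        # Low-pass feedback: average two successive delay-line samples, slightly damped.
        delay[i % n] = damping * 0.5 * (delay[i % n] + delay[(i + 1) % n])
    return out

tone = pluck(220.0)                          # roughly A3; fs/n quantizes the pitch
print(f"samples: {len(tone)}, peak: {np.abs(tone).max():.2f}")
```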