• Title/Summary/Keyword: Sound-Image Conversion (소리-이미지 변환)

A Basic Study on the System of Converting Color Image into Sound (컬러이미지-소리 변환 시스템에 관한 기초연구)

  • Kim, Sung-Ill;Jung, Jin-Seung
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.2 / pp.251-256 / 2010
  • This paper aims to develop an intelligent robot emulating the human synesthetic skill of associating a color image with sound, so that an application system can be built on the principle of mutual conversion between color image and sound. As a first step, this study realizes a basic system for converting a color image into sound. It describes a new conversion method based on the similarity in physical frequency information between light and sound, and presents a way of converting a color image into sound using color model conversion together with histograms in the converted color model. On the basis of the proposed method, a basic system was built using Microsoft Visual C++ (ver. 6.0). The simulation results revealed that the hue, saturation, and intensity elements of an input color image were converted into the fundamental frequency (F0), harmonic, and octave elements of a sound, respectively. The converted sound elements were synthesized to generate a sound source in WAV file format using the Csound toolkit.
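
Below is a minimal sketch of the color-model conversion stage such a system needs, assuming the input is a NumPy RGB array scaled to [0, 1]; the HSI formulas are the standard textbook ones, since the abstract does not list them.

```python
import numpy as np

def rgb_to_hsi(rgb):
    """Convert an RGB image (floats in [0, 1], shape HxWx3) to H, S, I planes."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - rgb.min(axis=-1) / np.maximum(intensity, 1e-8)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-8
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b <= g, theta, 2.0 * np.pi - theta) / (2.0 * np.pi)  # 0..1
    return hue, saturation, intensity
```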

A Study on Sound Recognition System Based on 2-D Transformation and CNN Deep Learning (2차원 변환과 CNN 딥러닝 기반 음향 인식 시스템에 관한 연구)

  • Ha, Tae Min;Cho, Seongwon;Tra, Ngo Luong Thanh;Thanh, Do Chi;Lee, Keeseong
    • Smart Media Journal / v.11 no.1 / pp.31-37 / 2022
  • This paper applies signal processing and deep learning to the recognition of sounds commonly heard in daily life (Screaming, Clapping, Crowd_clapping, Car_passing_by, Back_ground, etc.). The proposed system combines several techniques to improve recognition accuracy: two-dimensional (2-D) transformation of the sound-wave spectrum, augmentation of the sound data, convolutional neural network (CNN) deep learning, and ensemble learning over multiple predictions. Experiments show that the proposed technology accurately recognizes a variety of sounds.
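
As a hedged illustration of the 2-D transformation step, the sketch below converts a clip into a fixed-size log-mel spectrogram suitable as CNN input; it assumes librosa is available, and the parameter values are illustrative rather than the paper's settings.

```python
import librosa
import numpy as np

def sound_to_2d(path, n_mels=64, target_frames=128):
    """Load an audio clip and return a fixed-size 2-D array for a CNN."""
    y, sr = librosa.load(path, sr=22050)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    db = librosa.power_to_db(mel, ref=np.max)   # log scale, as in spectrograms
    if db.shape[1] < target_frames:             # pad or crop the time axis
        db = np.pad(db, ((0, 0), (0, target_frames - db.shape[1])))
    return db[:, :target_frames]
```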

Noise Robust System for Pig Wasting Diseases Detection (잡음에 강인한 돼지 호흡기 질병 탐지 시스템)

  • Choi, Yongju;Choi, Yoona;Park, Daihee;Chung, Yongwha
    • Proceedings of the Korea Information Processing Society Conference / 2017.11a / pp.720-723 / 2017
  • Porcine respiratory disease is one of the diseases that cause enormous economic losses to pig farms. This paper proposes a sound-sensor-based detection system for porcine respiratory disease that can be deployed at low cost, with particular focus on a configuration that remains robust in noisy environments. The proposed system first converts pig sounds acquired from sound sensors in the pigsty into two-dimensional grayscale images. Texture information is then extracted using the Dominant Neighborhood Structure (DNS) algorithm, which is known to be robust to noise. Finally, the texture images are fed to a Convolutional Neural Network (CNN), a representative deep learning model whose performance in image classification is well established, to detect and classify respiratory disease. Experimental validation with pig sounds recorded at an actual Korean pig farm confirmed a stable system exceeding 96% accuracy.
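
A minimal sketch of the 1-D-to-2-D conversion described above follows; the image width is an assumption, as the abstract does not specify one.

```python
import numpy as np

def sound_to_gray_image(signal, width=64):
    """Reshape a 1-D sound signal into a 2-D 8-bit grayscale image."""
    usable = (len(signal) // width) * width       # drop the trailing remainder
    frame = np.asarray(signal[:usable], dtype=float).reshape(-1, width)
    lo, hi = frame.min(), frame.max()
    norm = (frame - lo) / max(hi - lo, 1e-8)      # normalize to [0, 1]
    return (norm * 255).astype(np.uint8)          # gray levels 0-255
```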

Conversion of Image into Sound Based on HSI Histogram (HSI 히스토그램에 기초한 이미지-사운드 변환)

  • Kim, Sung-Il
    • The Journal of the Acoustical Society of Korea / v.30 no.3 / pp.142-148 / 2011
  • The final aim of the present study is to develop an intelligent robot emulating the human synesthetic skill of associating a color image with a specific sound, on the basis of mutual conversion between color image and sound. As a first step toward this goal, this study focused on a basic system that converts a color image into sound. The proposed method is based on the similarity in physical frequency information between light and sound, and was implemented using HSI histograms obtained through RGB-to-HSI color model conversion in Microsoft Visual C++ (ver. 6.0). Two different color images were used in the simulation experiments, and the results revealed that the hue, saturation, and intensity elements of each input color image were converted into the fundamental frequency, harmonic, and octave elements of a sound, respectively. Through the proposed system, the converted sound elements were then synthesized to automatically generate a sound source in WAV file format using Csound.
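
The synthesis stage can be pictured with the additive sketch below, which stands in for the Csound step; the sample rate, duration, and the way F0, harmonic count, and octave combine are assumptions beyond what the abstract states.

```python
import numpy as np
from scipy.io import wavfile

def synthesize_wav(f0, n_harmonics, octave, path="converted.wav",
                   seconds=2.0, sr=44100):
    """Render a harmonic tone from the converted sound elements to a WAV file."""
    t = np.linspace(0.0, seconds, int(sr * seconds), endpoint=False)
    freq = f0 * 2 ** octave                      # octave element shifts F0
    tone = sum(np.sin(2 * np.pi * freq * k * t) / k
               for k in range(1, n_harmonics + 1))
    tone /= np.abs(tone).max()                   # normalize before quantizing
    wavfile.write(path, sr, (tone * 32767).astype(np.int16))
```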

CNN-based Automatic Machine Fault Diagnosis Method Using Spectrogram Images (스펙트로그램 이미지를 이용한 CNN 기반 자동화 기계 고장 진단 기법)

  • Kang, Kyung-Won;Lee, Kyeong-Min
    • Journal of the Institute of Convergence Signal Processing / v.21 no.3 / pp.121-126 / 2020
  • Sound-based machine fault diagnosis automatically detects abnormal sounds in the acoustic emission signals of machines. Conventional methods based on mathematical models struggle to diagnose machine faults because of the complexity of industrial machinery systems and the presence of nonlinear factors such as noise. We therefore recast machine fault diagnosis as a deep-learning-based image classification problem. In this paper, we propose a CNN-based automatic machine fault diagnosis method using spectrogram images. The proposed method uses the short-time Fourier transform (STFT) to extract features from the frequencies generated by machine defects; the STFT outputs are converted into spectrogram images and classified by a CNN according to machine status. The results show that the proposed method is effective not only for defect detection but also for a variety of sound-based automatic diagnosis systems.
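
A sketch of the STFT stage, assuming SciPy; the window length and the dB scaling are illustrative, not the paper's reported settings.

```python
import numpy as np
from scipy.signal import stft

def machine_sound_to_spectrogram(signal, fs=16000, nperseg=512):
    """Turn a machine sound clip into an 8-bit log-magnitude spectrogram image."""
    _, _, zxx = stft(signal, fs=fs, nperseg=nperseg)
    spec = 20.0 * np.log10(np.abs(zxx) + 1e-10)   # magnitude in dB
    spec -= spec.min()                            # shift so values start at 0
    return (255.0 * spec / max(spec.max(), 1e-8)).astype(np.uint8)
```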

A Basic Study on the Conversion of Color Image into Musical Elements based on a Synesthetic Perception (공감각인지기반 컬러이미지-음악요소 변환에 관한 기초연구)

  • Kim, Sung-Il
    • Science of Emotion and Sensibility / v.16 no.2 / pp.187-194 / 2013
  • The final aim of the present study is to build a system that converts a color image into musical elements on the basis of synesthetic perception, emulating the human synesthetic skill of associating a color image with a specific sound. This can be done on the basis of the similarities in physical frequency information between light and sound. As a first step, an input true-color image is converted into hue, saturation, and intensity domains based on color model conversion theory. In the next step, musical elements including note, octave, loudness, and duration are extracted from each domain of the HSI color model: a fundamental frequency (F0) is extracted from the hue and intensity histograms, while loudness and duration are extracted from the intensity and saturation histograms, respectively. In experiments, the proposed conversion system was implemented using standard C and Microsoft Visual C++ (ver. 6.0). Through the proposed system, the extracted musical elements were synthesized to generate a sound source in WAV file format. The simulation results revealed that the musical elements extracted from an input RGB color image were reflected in the output sound signals.
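
One hypothetical reading of the element-extraction step is sketched below; the pairings follow the abstract (hue and intensity to pitch, intensity to loudness, saturation to duration), but the concrete constants are assumptions.

```python
import numpy as np

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def extract_musical_elements(hue, saturation, intensity):
    """Map HSI histogram statistics to (note, octave, loudness, duration)."""
    hist, _ = np.histogram(hue, bins=12, range=(0.0, 1.0))
    note = NOTES[int(np.argmax(hist))]            # dominant hue bin -> note
    octave = 3 + int(intensity.mean() * 3)        # brightness -> octave 3-6
    loudness = float(intensity.mean())            # intensity -> loudness 0-1
    duration = 0.25 + float(saturation.mean()) * 1.75  # saturation -> 0.25-2 s
    return note, octave, loudness, duration
```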

A Study on the Sound Model Construct Using Color Tone Cognizance Sensationalizing Method (색상인식 감각화 방법을 활용한 사운드모델 구축에 관한 연구)

  • Kim Beom-Seok;Kim Jung-Ui;Ko Young-Hyuk
    • Proceedings of the Korea Contents Association Conference / 2005.11a / pp.559-563 / 2005
  • This study concerns methods of conveying the content of images and video to the human senses of hearing and touch. More specifically, it seeks principles and methods for converting the wavelength and amplitude of color into sound and vibration, and proposes a way of delivering the resulting energy back to the human sensory organs.
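
A speculative sketch of the wavelength-to-sound principle follows; shifting the light frequency down by 40 octaves is purely illustrative and not taken from the paper.

```python
SPEED_OF_LIGHT = 3.0e8  # m/s

def color_wavelength_to_pitch(wavelength_nm):
    """Map a visible wavelength (roughly 380-780 nm) into the audible band."""
    light_hz = SPEED_OF_LIGHT / (wavelength_nm * 1e-9)  # frequency of the light
    return light_hz / 2 ** 40   # ~40 octaves down lands in the audio range

# e.g. red light at 700 nm maps to roughly 390 Hz
```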

A Study on Audio-Visual Interactive Art interacting with Sound -Focused on 21C Boogie Woogie (사운드에 반응하는 시청각적인 인터랙티브 아트에 관한 연구)

  • Son, Jin-Seok;Yang, Jee-Hyun;Kim, Kyu-Jung
    • Cartoon and Animation Studies / s.35 / pp.329-346 / 2014
  • Art is a product of the combination of political, economic, social, and cultural factors. The recent development of digital media has expanded the range of visual expression in art: digital media allow artists to use sound and physical interaction, as well as images, as plastic elements in a work of art, and help artists create interactive, synaesthetic, visually perceptive environments by combining viewers' physical interaction with the reconstruction of image, sound, light, and other plastic elements. This research focused on analyzing the relationship between the images in an artwork and the viewer, and on data visualization using sound, from the perspective of visual perception. It also aimed to develop an interactive artwork by visualizing physical data generated by outer stimuli or by the viewer. Sound data can be analyzed in various respects, for example pitch, volume, and frequency. This researcher implemented a new form of media art through a visual experiment with LED light triggered by the sound frequencies of viewers' voices or other physical stimuli, and explored the possibility of diverse visual expression arising from the viewer's reaction to the illusionary characteristics of light (LED), which can be transformed by external physical data in real time. As a result, this researcher took a motif from Piet Mondrian's Broadway Boogie Woogie to implement a visually perceptive interactive work reacting to sound. Mondrian tried to approach the essence of the visual object by eliminating unnecessary representational elements, simplifying his paintings into abstractions consisting of color and vertical and horizontal lines. This researcher used Mondrian's simplified visual composition as a representational metaphor to transform external sound stimuli into elements of light (LED), implementing an environment that induces viewers' participation: a dynamic composition maximizing synaesthetic expression, in contrast to Mondrian's static composition.
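
As a loose sketch of the sound-to-light mapping discussed above, the code below derives an RGB triple from the spectral energy of an audio buffer; the band boundaries and the band-to-channel assignment are assumptions.

```python
import numpy as np

def sound_to_led_rgb(buffer, sr=44100):
    """Map low/mid/high spectral energy of an audio buffer to an RGB triple."""
    spectrum = np.abs(np.fft.rfft(buffer))
    freqs = np.fft.rfftfreq(len(buffer), d=1.0 / sr)
    bands = [(20, 250), (250, 2000), (2000, 8000)]   # low, mid, high in Hz
    energy = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                       for lo, hi in bands])
    rgb = 255.0 * energy / max(energy.max(), 1e-8)   # scale to 0-255
    return tuple(int(v) for v in rgb)
```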

A COVID-19 Diagnosis Model based on Various Transformations of Cough Sounds (기침 소리의 다양한 변환을 통한 코로나19 진단 모델)

  • Minkyung Kim;Gunwoo Kim;Keunho Choi
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.57-78 / 2023
  • COVID-19, which started in Wuhan, China in November 2019, spread beyond China in 2020 and worldwide by March 2020. Preventing a highly contagious virus like COVID-19 in advance and actively treating confirmed cases are important, but because the virus spreads quickly, identifying confirmed cases rapidly and preventing further spread matters even more. However, PCR testing is costly and time-consuming, and although self-test kits are easy to access, their cost makes repeated testing burdensome. If COVID-19 positivity could instead be determined from the sound of a cough, anyone could easily check their status anytime and anywhere, with considerable economic advantages. This study therefore experimented with identifying COVID-19 infection from cough sounds. Cough sound features were extracted through MFCC, Mel-Spectrogram, and spectral contrast. To ensure sound quality, noisy data were removed using the SNR, and only the cough segments were extracted from each voice file through chunking. Since the objective is binary classification into COVID-19 positive and negative, learning was performed with the XGBoost, LightGBM, and FCNN algorithms, which are commonly used for classification, and the results were compared. Additionally, a comparative experiment examined model performance using multidimensional vectors obtained by converting the cough sounds into both images and vectors. The experimental results showed that the LightGBM model, using features obtained by converting basic health-status information and cough sounds into multidimensional vectors through MFCC, Mel-Spectrogram, spectral contrast, and Spectrogram, achieved the highest accuracy of 0.74.
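
A hedged sketch of the feature pipeline follows; the feature summary (per-coefficient means) and the default LightGBM settings are assumptions, not the paper's reported configuration.

```python
import librosa
import numpy as np
from lightgbm import LGBMClassifier

def cough_features(path):
    """Summarize a cough recording as one fixed-length feature vector."""
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
    return np.concatenate([mfcc.mean(axis=1), contrast.mean(axis=1)])

# With X as stacked feature vectors and y as 0/1 (negative/positive) labels:
# model = LGBMClassifier().fit(X, y)
```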

Noise-Robust Porcine Respiratory Diseases Classification Using Texture Analysis and CNN (질감 분석과 CNN을 이용한 잡음에 강인한 돼지 호흡기 질병 식별)

  • Choi, Yongju;Lee, Jonguk;Park, Daihee;Chung, Yongwha
    • KIPS Transactions on Software and Data Engineering / v.7 no.3 / pp.91-98 / 2018
  • Automatic detection of pig wasting diseases is an important issue in the management of group-housed pigs. In particular, porcine respiratory diseases are one of the main causes of mortality among pigs and of lost productivity in intensive pig farming. In this paper, we propose a noise-robust system for the early detection and recognition of pig wasting diseases using sound data. In this method, we first convert one-dimensional sound signals to two-dimensional gray-level images by normalization, and extract texture images by means of the dominant neighborhood structure technique. Lastly, the texture features are used as inputs to convolutional neural networks serving as an early anomaly detector and a respiratory disease classifier. Our experimental results show that this new method detects pig wasting diseases both economically (low-cost sound sensor) and accurately (over 96% accuracy) even under noisy environmental conditions, either as a standalone solution or as a complement to known methods for a more accurate solution.
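
A generic CNN sketch for classifying the texture images described above, assuming Keras; the architecture is illustrative, not the one used in the paper.

```python
from tensorflow.keras import layers, models

def build_classifier(input_shape=(64, 64, 1), n_classes=3):
    """Small CNN taking texture images and emitting class probabilities."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])
```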