• Title/Summary/Keyword: Sound to Color Conversion


Conversion of Image into Sound Based on HSI Histogram (HSI 히스토그램에 기초한 이미지-사운드 변환)

  • Kim, Sung-Il
    • The Journal of the Acoustical Society of Korea, v.30 no.3, pp.142-148, 2011
  • The final aim of the present study is to develop an intelligent robot that emulates the human synesthetic ability to associate a color image with a specific sound, on the basis of mutual conversion between color images and sounds. As a first step toward this goal, this study focused on a basic system that converts a color image into sound. The proposed method is based on the similarity in physical frequency information between light and sound. It was implemented with HSI histograms obtained through RGB-to-HSI color-model conversion, using Microsoft Visual C++ (ver. 6.0). In simulation experiments on two different color images, the hue, saturation, and intensity elements of each input image were converted into the fundamental frequency, harmonic, and octave elements of a sound, respectively. The converted sound elements were then synthesized with Csound to automatically generate a sound source in the WAV file format.
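The RGB-to-HSI step this abstract relies on can be sketched as follows. This is a minimal illustration of the standard arccos-based conversion, not the paper's own code; channel values are assumed to be floats in [0, 1].

```python
import math

def rgb_to_hsi(r, g, b):
    i = (r + g + b) / 3.0                            # intensity: mean of channels
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i    # saturation
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0                                      # achromatic: hue undefined
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        h = theta if b <= g else 360.0 - theta       # hue in degrees [0, 360)
    return h, s, i
```

Pure red `(1, 0, 0)` maps to hue 0 with full saturation, and pure green to hue 120, matching the usual HSI convention.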

A Basic Study on the Conversion of Sound into Color Image using both Pitch and Energy

  • Kim, Sung-Ill
    • International Journal of Fuzzy Logic and Intelligent Systems, v.12 no.2, pp.101-107, 2012
  • This study describes a proposed method of converting an input sound signal into a color image by emulating the human synesthetic ability to associate a sound source with a specific color image. As a first step of the sound-to-image conversion, features such as the fundamental frequency (F0) and energy are extracted from the input sound source. A musical scale and an octave are then calculated from the F0 signals, so that scale, energy, and octave can be converted into the three elements of the HSI model, namely hue, saturation, and intensity, respectively. Finally, a color image in the BMP file format is created as the output of the HSI-to-RGB conversion. A basic system based on the proposed method was built in standard C. The simulation results revealed that the output color images had diverse hues corresponding to changes in the F0 signals, with the hue elements having different intensities depending on octaves referenced to a minimum frequency of 20 Hz. The output images also showed various levels of saturation (chroma), converted directly from the energy.
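The scale/octave extraction from F0 can be sketched as below, assuming (as the abstract suggests) a 20 Hz reference frequency and 12-semitone octaves; the paper's exact mapping may differ.

```python
import math

F_MIN = 20.0  # minimum frequency used as the octave reference (Hz), per the abstract

def f0_to_scale_octave(f0):
    semitones = 12.0 * math.log2(f0 / F_MIN)  # distance from 20 Hz in semitones
    octave = int(semitones // 12)             # which octave above 20 Hz
    scale = int(round(semitones)) % 12        # pitch class within the octave
    return scale, octave
```

With this convention, 20 Hz lies at scale 0 of octave 0 and 40 Hz at scale 0 of octave 1, so the hue and intensity outputs change as F0 sweeps upward.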

A Basic Study on the System of Converting Color Image into Sound (컬러이미지-소리 변환 시스템에 관한 기초연구)

  • Kim, Sung-Ill; Jung, Jin-Seung
    • Journal of the Korean Institute of Intelligent Systems, v.20 no.2, pp.251-256, 2010
  • This paper aims at developing an intelligent robot that emulates the human synesthetic ability to associate a color image with sound, so that an application system can be built on the principle of mutual conversion between color image and sound. As the first step, this study realizes a basic system for color-image-to-sound conversion. A new conversion method is described, based on the similarity in physical frequency information between light and sound, using color-model conversion together with histograms in the converted color model. On the basis of the proposed method, a basic system was built using Microsoft Visual C++ (ver. 6.0). The simulation results revealed that the hue, saturation, and intensity elements of an input color image were converted into the F0, harmonic, and octave elements of a sound, respectively. The converted sound elements were synthesized to generate a sound source in the WAV file format using the Csound toolkit.

A Basic Study on the Pitch-based Sound into Color Image Conversion (피치 기반 사운드-컬러이미지 변환에 관한 기초연구)

  • Kang, Kun-Woo; Kim, Sung-Ill
    • Science of Emotion and Sensibility, v.15 no.2, pp.231-238, 2012
  • This study aims at building an application system that converts sound into a color image based on synesthetic perception. As the major features of the input sound, the scale and octave elements extracted from the fundamental frequency (F0) were converted into the hue and intensity elements of the HSI color model, respectively, while the saturation was fixed at 0.5. On the basis of color-model conversion theory, the HSI model was then converted into the RGB model, so that a color image in the BMP format was finally created. In experiments, the basic system was implemented on both software and hardware (TMS320C6713 DSP) platforms using the proposed sound-to-color-image conversion method. The results revealed that diverse color images with different hues and intensities were created depending on the scales and octaves extracted from the F0 of the input sound signals. The outputs on the hardware platform were identical to those on the software platform.
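The final HSI-to-RGB step the abstract describes can be sketched with the standard sector-based formulas; this is an illustrative implementation, not the paper's code. Hue is in degrees, saturation and intensity in [0, 1].

```python
import math

def hsi_to_rgb(h, s, i):
    h = h % 360.0
    def sector(hh):
        # channel triple for one 120-degree hue sector
        x = i * (1.0 - s)
        y = i * (1.0 + s * math.cos(math.radians(hh)) /
                 math.cos(math.radians(60.0 - hh)))
        z = 3.0 * i - (x + y)
        return x, y, z
    if h < 120.0:
        b, r, g = sector(h)
    elif h < 240.0:
        r, g, b = sector(h - 120.0)
    else:
        g, b, r = sector(h - 240.0)
    return r, g, b
```

This inverts the usual RGB-to-HSI mapping: hue 0 at full saturation and intensity 1/3 recovers pure red.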


A Basic Study on the Conversion of Color Image into Musical Elements based on a Synesthetic Perception (공감각인지기반 컬러이미지-음악요소 변환에 관한 기초연구)

  • Kim, Sung-Il
    • Science of Emotion and Sensibility, v.16 no.2, pp.187-194, 2013
  • The final aim of the present study is to build a system that converts a color image into musical elements based on synesthetic perception, emulating the human synesthetic ability to associate a color image with a specific sound. This is done on the basis of the similarities in physical frequency information between light and sound. As a first step, an input true-color image is converted into hue, saturation, and intensity domains according to color-model conversion theory. Next, musical elements including note, octave, loudness, and duration are extracted from each domain of the HSI color model: a fundamental frequency (F0) is extracted from the hue and intensity histograms, while loudness and duration are extracted from the intensity and saturation histograms, respectively. In experiments, the proposed system was implemented using standard C and Microsoft Visual C++ (ver. 6.0), and the extracted musical elements were synthesized to generate a sound source in the WAV file format. The simulation results revealed that the musical elements extracted from an input RGB color image were reflected in the output sound signals.
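Reading musical elements off the HSI histograms, as the abstract describes, could look like the sketch below: the dominant hue bin selects the note, the intensity histogram drives octave and loudness, and the saturation histogram drives duration. The concrete mappings here are illustrative assumptions, not the paper's.

```python
def peak_bin(hist):
    # index of the tallest histogram bin
    return max(range(len(hist)), key=lambda k: hist[k])

def histograms_to_music(hue_hist, sat_hist, int_hist):
    note = peak_bin(hue_hist) % 12        # dominant hue -> pitch class
    octave = peak_bin(int_hist) % 8       # dominant intensity -> octave
    # histogram centroids as rough loudness and duration controls
    loudness = sum(h * k for k, h in enumerate(int_hist)) / max(sum(int_hist), 1)
    duration = sum(h * k for k, h in enumerate(sat_hist)) / max(sum(sat_hist), 1)
    return note, octave, loudness, duration
```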


Real-time Implementation of Sound into Color Conversion System Based on the Colored-hearing Synesthetic Perception (색-청 공감각 인지 기반 사운드-컬러 신호 실시간 변환 시스템의 구현)

  • Bae, Myung-Jin; Kim, Sung-Ill
    • The Journal of the Korea Contents Association, v.15 no.12, pp.8-17, 2015
  • This paper presents a conversion of sound signals into color signals using colored-hearing synesthesia. The aim of the present paper is to implement a real-time conversion system focusing on hearing and sight, which account for a great part of the bodily senses. The proposed real-time conversion method is simple and intuitive: scale, octave, and velocity are extracted from MIDI input signals and converted into hue, intensity, and saturation, respectively, as the basic elements of the HSI color model. In experiments, both a hardware system for delivering MIDI signals to a PC and a VC++-based software system for monitoring input and output signals were implemented, confirming that the conversion was correctly performed by the proposed method.
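The MIDI-to-HSI mapping outlined in the abstract (scale to hue, octave to intensity, velocity to saturation) can be sketched as follows. The linear scalings are illustrative assumptions, not the paper's exact values.

```python
def midi_to_hsi(note, velocity):
    scale = note % 12        # pitch class 0..11 from the MIDI note number
    octave = note // 12      # MIDI octave 0..10
    h = scale * 30.0         # spread 12 pitch classes over 360 degrees of hue
    i = octave / 10.0        # higher octave -> brighter color
    s = velocity / 127.0     # louder note -> more saturated color
    return h, s, i
```

For example, middle C (note 60) at full velocity would map to hue 0 at half intensity and full saturation under these assumptions.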

Implementation of Muscular Sense into both Color and Sound Conversion System based on Wearable Device (웨어러블 디바이스 기반 근감각-색·음 변환 시스템의 구현)

  • Bae, Myungjin; Kim, Sungill
    • Journal of Korea Multimedia Society, v.19 no.3, pp.642-649, 2016
  • This paper presents a method for converting the muscular sense into both visual and auditory senses based on synesthetic perception. The muscular sense can be defined by the rotation angles, direction changes, and motion degrees of the human body. Since synesthetic interconversion can be acquired by learning, it is possible to create intentional synesthetic phenomena. In this paper, the muscular sense was converted into color and sound signals, which account for the great majority of synesthetic phenomena. The muscular sense was measured with an AHRS (attitude and heading reference system), and its roll, yaw, and pitch signals were converted into the three basic elements of color as well as of sound, respectively. The proposed method was successfully applied to a wearable device, the Samsung Gear S.

Implementation of ARM based Embedded System for Muscular Sense into both Color and Sound Conversion (근감각-색·음 변환을 위한 ARM 기반 임베디드시스템의 구현)

  • Kim, Sung-Ill
    • The Journal of the Korea Contents Association, v.16 no.8, pp.427-434, 2016
  • This paper focuses on real-time hardware processing by implementing an ARM Cortex-M4 based embedded system running a conversion algorithm from the muscular sense to both visual and auditory elements, recognizing rotations, directional changes, and motion amounts of the human body. As the input method for the muscular sense, an AHRS (Attitude and Heading Reference System) was used to acquire roll, pitch, and yaw values in real time. These three input values were converted into the three elements of the HSI color model, namely intensity, hue, and saturation, respectively; the final color signals were acquired by converting the HSI model into the RGB model. In addition, the three input values were converted into three elements of sound, namely octave, scale, and velocity, which were synthesized into an output sound using MIDI (Musical Instrument Digital Interface). Analysis of the output color and sound signals revealed that the input muscular-sense signals were correctly converted into both color and sound in real time by the proposed method.
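The AHRS-to-HSI mapping the abstract describes (roll to intensity, pitch to hue, yaw to saturation) might be sketched as below; the normalization ranges are illustrative assumptions about typical AHRS output, not the paper's calibration.

```python
def ahrs_to_hsi(roll, pitch, yaw):
    i = (roll + 180.0) / 360.0   # roll in [-180, 180] deg -> intensity [0, 1]
    h = (pitch + 90.0) * 2.0     # pitch in [-90, 90] deg -> hue [0, 360]
    s = (yaw % 360.0) / 360.0    # yaw in [0, 360) deg -> saturation [0, 1)
    return h, s, i
```

The resulting HSI triple can then be fed to an HSI-to-RGB conversion for the color output, and to a scale/octave/velocity mapping for the MIDI output.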

Lip Region Extraction by Gaussian Classifier (가우스 분류기를 이용한 입술영역 추출)

  • Kim, Jeong Yeop
    • Journal of Korea Multimedia Society, v.20 no.2, pp.108-114, 2017
  • Lip reading is a field of image processing that assists sound recognition. In some environments, the captured sound signal contains significant noise, so the recognition rate decreases; lip reading can then provide a useful additional feature for improving recognition rates. Various conventional lip-extraction methods have been proposed. Maia et al. proposed a method based on the sum of Cr and Cb, but it has two problems: the point with maximum saturation does not always belong to the lip region, and inner parts of the mouth such as the oral cavity and teeth can be classified as lips. To solve these problems, this paper proposes a method that adopts a histogram-based classifier for the extraction of the lip region. The proposed method consists of two stages, learning and test, and its computation is minimized because no color conversion is required. The proposed method achieves a detection rate of 66.8%, compared to 28% for the conventional method.
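A Gaussian pixel classifier in the spirit of the paper's title might look like the sketch below: fit per-channel Gaussians to lip and non-lip training pixels directly in RGB (no color conversion), then label a pixel by the higher log-likelihood. This is an illustrative naive-Bayes-style construction, not the paper's actual classifier.

```python
import math

def fit(pixels):
    # per-channel mean and variance of training pixels [(r, g, b), ...]
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    vars_ = [max(sum((p[c] - means[c]) ** 2 for p in pixels) / n, 1e-6)
             for c in range(3)]
    return means, vars_

def log_likelihood(pixel, model):
    # sum of per-channel Gaussian log-densities
    means, vars_ = model
    return sum(-0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
               for x, m, v in zip(pixel, means, vars_))

def is_lip(pixel, lip_model, nonlip_model):
    return log_likelihood(pixel, lip_model) > log_likelihood(pixel, nonlip_model)
```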

Music Generation Algorithm based on the Color-Emotional Effect of a Painting (그림의 색채 감정 효과를 기반으로 한 음악 생성 알고리즘)

  • Choi, Hee Ju; Hwang, Jung-Hun; Ryu, Shinhye; Kim, Sangwook
    • Journal of Korea Multimedia Society, v.23 no.6, pp.765-771, 2020
  • To enable AI (artificial intelligence) to realize visual emotions, this study attempts to create music centered on color, the element of a painting that evokes emotions. Previous image-based music-generation studies are limited in that, lacking musical elements, they play notes unrelated to the picture. In this paper, we propose a new algorithm that sets the musical group from the average color of the picture, then produces music by adding a diatonic chord progression and deleting notes using a median value. The results obtained through the proposed algorithm are then analyzed.
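The first step the abstract describes, selecting a musical group from the painting's average color, can be sketched as below. The brightness threshold and the major/minor choice are illustrative assumptions, not the paper's actual grouping rule.

```python
def average_color(pixels):
    # mean of each RGB channel over a list of (r, g, b) pixels
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def select_group(avg_rgb):
    r, g, b = avg_rgb
    brightness = (r + g + b) / 3.0
    # brighter average color -> "major" group, darker -> "minor" (assumption)
    return "major" if brightness >= 128 else "minor"
```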