• Title/Summary/Keyword: musical user interface


A Study on Interactive Sound Installation and User Intention Analysis - Focusing on an Installation: Color note

  • Han, Yoon-Jung;Han, Byeong-Jun
    • Proceedings of the HCI Society of Korea Conference / 2008.02b / pp.268-273 / 2008
  • This work defines user intention according to intention range and proposes an interactive sound installation that reflects and varies these features. User intention is decomposed into several concepts, namely elemental intentions, partial intentions, and a universal intention, each defined by inclusion/affiliation relationships with the others. To represent elemental intentions, we implemented a musical interface, Color note, which renders colors and notes in response to participants. We also propose Harmonic Defragmentation (HD), which arranges the partial intentions according to harmonic rules. Finally, the universal intention is inferred as the comprehensive direction of the elemental intentions, using the Karhunen-Loève (K-L) transform (a minimal sketch of this inference step follows this entry). To verify the validity of the proposed interface, Color note, and the associated techniques, we installed the work and surveyed users to evaluate HD and the statistical techniques. We also commissioned a further survey to measure satisfaction with the expression of the universal intention.

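The K-L transform used above to infer the universal intention is, in effect, a principal-component analysis of the elemental-intention vectors. Below is a minimal sketch of that inference step; the function and variable names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def universal_intention(elemental_intentions: np.ndarray) -> np.ndarray:
    """elemental_intentions: (n_samples, n_features) matrix of observed
    elemental-intention vectors. Returns the dominant direction (the
    principal eigenvector of the sample covariance), i.e. the K-L basis
    vector with the largest eigenvalue."""
    centered = elemental_intentions - elemental_intentions.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, -1]                   # principal component

# Example: 50 five-dimensional elemental-intention vectors (synthetic data)
rng = np.random.default_rng(0)
samples = rng.normal(size=(50, 5)) @ np.diag([3.0, 1.0, 1.0, 0.5, 0.2])
print(universal_intention(samples))
```

The returned eigenvector is the direction along which the elemental intentions vary most, which matches the abstract's reading of the universal intention as their "comprehensive direction."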

A Development of Multi-Emotional Signal Receiving Modules for Cellphone Using Robotic Interaction

  • Jung, Yong-Rae;Kong, Yong-Hae;Um, Tai-Joon;Kim, Seung-Woo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2005.06a / pp.2231-2236 / 2005
  • CP (Cellular Phone) is currently one of the most attractive technologies, and RT (Robot Technology) is considered one of the most promising next-generation technologies. We present a new technological concept named RCP (Robotic Cellular Phone), which combines RT and CP. RCP consists of three sub-modules: RCP Mobility, RCP Interaction, and RCP Integration. RCP Interaction, the main focus of this paper, is an interactive emotion system that provides CP with multi-emotional signal-receiving functionality. It is linked with the communication functions of the CP in order to interface between CP and user through a variety of emotional models, and it is divided into a tactile, an olfactory, and a visual mode. The tactile signal-receiving module produces patterns and beat frequencies by mechanical-vibration conversion of the musical melody, rhythm, and harmony (a sketch of such a melody-to-vibration mapping follows this entry). The olfactory signal-receiving module switches perfume-injection nozzles so that the called user is notified through a smell specific to the calling user. The visual signal-receiving module performs motion control of a DC-motored, wheel-based system, informing the called user of an incoming call through a motion chosen according to the calling user. In this paper, a prototype system is developed for the multi-emotional signal-receiving modes of CP. We describe the overall structure of the system and provide experimental results for the functional modules.

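The tactile module above converts melody, rhythm, and harmony into vibration patterns and beat frequencies. The sketch below shows one plausible melody-to-vibration mapping; the pitch-to-frequency rule is chosen purely for illustration, as the abstract does not specify the paper's actual conversion.

```python
def melody_to_vibration(melody):
    """melody: list of (midi_note, duration_s) pairs.
    Returns a list of (pulse_hz, duration_s) vibration-motor commands:
    pitch sets the pulse (beat) frequency, note length sets the
    vibration duration. The mapping constants are assumptions."""
    pattern = []
    for note, duration in melody:
        # Hypothetical rule: middle C (MIDI 60) pulses at 10 Hz,
        # each semitone above it adds 2 Hz.
        pulse_hz = 10 + 2 * (note - 60)
        pattern.append((max(pulse_hz, 1), duration))
    return pattern

# "Twinkle Twinkle" opening as (MIDI note, seconds)
print(melody_to_vibration([(60, 0.4), (60, 0.4), (67, 0.4), (67, 0.4)]))
```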

A Development of Multi-Emotional Signal Receiving Modules for Ubiquitous RCP Interaction

  • Jang Kyung-Jun;Jung Yong-Rae;Kim Dong-Wook;Kim Seung-Woo
    • Journal of Institute of Control, Robotics and Systems / v.12 no.1 / pp.33-40 / 2006
  • We present a new technological concept named RCP (Robotic Cellular Phone), a ubiquitous robot that combines RT and CP. RCP consists of three sub-modules: RCP Mobility, RCP Interaction, and RCP Integration. RCP Interaction, the main focus of this paper, is an interactive emotion system that provides CP with multi-emotional signal-receiving functionality. It is linked with the communication functions of the CP in order to interface between CP and user through a variety of emotional models, and it is divided into a tactile, an olfactory, and a visual mode. The tactile signal-receiving module produces patterns and beat frequencies by mechanical-vibration conversion of the musical melody, rhythm, and harmony. The olfactory signal-receiving module switches perfume-injection nozzles so that the called user is notified through a smell specific to the calling user. The visual signal-receiving module performs motion control of a DC-motored, wheel-based system, informing the called user of an incoming call through a motion chosen according to the calling user (a caller-to-motion sketch follows this entry). In this paper, a prototype system is developed for the multi-emotional signal-receiving modes of CP. We describe the overall structure of the system and provide experimental results for the functional modules.
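For the visual mode, the abstract describes caller-dependent motions of a DC-motored wheel base. A hypothetical caller-to-motion lookup might look like the following; the motion table, group names, and command format are assumptions for illustration, not the paper's design.

```python
# Illustrative caller-group -> motion-pattern table (assumed values).
CALLER_MOTIONS = {
    "family":  [("forward", 0.5), ("backward", 0.5)],  # gentle rocking
    "work":    [("spin_left", 1.0)],                   # full spin
    "unknown": [("forward", 0.2)],                     # short nudge
}

def signal_receiving_motion(caller_group: str):
    """Yield (command, duration_s) motor commands for an incoming call,
    so the called user can recognize the caller from the motion alone."""
    pattern = CALLER_MOTIONS.get(caller_group, CALLER_MOTIONS["unknown"])
    for command, duration in pattern:
        yield command, duration

for cmd in signal_receiving_motion("family"):
    print(cmd)
```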

Implementation of an Intelligent Audio Graphic Equalizer System

  • Lee Kang-Kyu;Cho Youn-Ho;Park Kyu-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.3 s.309 / pp.76-83 / 2006
  • The main objective of an audio equalizer is to let the user tailor the acoustic frequency response to increase listening comfort; applications range from large-scale audio systems to portable audio such as mobile MP3 players. Until now, audio equalizers have required manual settings to equalize frequency bands and create suitable sound quality for each genre of music. In this paper, we propose an intelligent audio graphic equalizer system that automatically classifies the musical genre using music content analysis and then boosts the sound with the frequency gains assigned to the classified genre during playback. To reproduce comfortable sound, the musical genre is determined by a two-step hierarchical algorithm with coarse-level and fine-level classification, which prevents annoying reproduction caused by sudden changes of the equalizer gains at the beginning of playback. Each stage of the music classification experiments shows at least 80% success, with complete genre classification and equalizer operation within 2 s. A simple software graphical user interface for the 3-band automatic equalizer is implemented in Visual C on a personal computer (a small genre-preset equalization sketch follows this entry).
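The classify-then-boost flow described above can be illustrated with a small 3-band genre-preset equalizer. In the sketch below, the band edges and per-genre gains (GENRE_GAINS_DB, BAND_EDGES_HZ) are assumed values, and the genre label simply stands in for the output of the paper's two-step classifier.

```python
import numpy as np

GENRE_GAINS_DB = {            # (low, mid, high) band gains; illustrative only
    "classical": (0.0, 2.0, 1.0),
    "rock":      (4.0, 0.0, 3.0),
    "jazz":      (2.0, 1.0, 0.0),
}
BAND_EDGES_HZ = (250.0, 4000.0)   # low < 250 Hz, mid in between, high > 4 kHz

def equalize(signal: np.ndarray, sample_rate: float, genre: str) -> np.ndarray:
    """Apply the genre's 3-band gain preset in the frequency domain."""
    low, mid, high = (10 ** (g / 20) for g in GENRE_GAINS_DB[genre])
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    gains = np.where(freqs < BAND_EDGES_HZ[0], low,
             np.where(freqs < BAND_EDGES_HZ[1], mid, high))
    return np.fft.irfft(spectrum * gains, n=len(signal))

# One second of noise at 44.1 kHz, equalized with the "rock" preset
out = equalize(np.random.default_rng(1).normal(size=44100), 44100, "rock")
```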

A Study of the Accessibility Evaluation of TTS-1 for the Screen Reader User

  • Seok, Yong-Hwan
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.513-522 / 2022
  • The purpose of this study is to evaluate the accessibility of the Cakewalk TTS-1 for screen reader users. The evaluation tested the accessibility of virtual-instrument editing, a part of MIDI production based on the NCS (National Competency Standards), using the TTS-1 and the Sense Reader. The results are as follows. The TTS-1 by itself does not provide enough accessibility for screen reader users to perform this task, but they can perform it using extended access functions such as the Sense Reader's Mouse Pointer, Position Memory, and MIDI Control Signal functions. Even with these extended access functions, some functions remain difficult to access, and several suggestions are proposed to address this problem.

A Novel Query-by-Singing/Humming Method by Estimating Matching Positions Based on Multi-layered Perceptron

  • Pham, Tuyen Danh;Nam, Gi Pyo;Shin, Kwang Yong;Park, Kang Ryoung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.7 / pp.1657-1670 / 2013
  • The increase in the number of music files on smartphones and MP3 players makes it difficult to find the music files people want. Query-by-Singing/Humming (QbSH) systems have therefore been developed to retrieve music from a user's humming or singing without detailed information about the title or singer of a song. Most previous research on QbSH has used Musical Instrument Digital Interface (MIDI) files as reference songs. However, producing MIDI files is time-consuming, and more and more music is newly published as the music market develops, so using the more common MPEG-1 Audio Layer 3 (MP3) files as reference songs is considered an alternative. There is little previous research on QbSH with MP3 files, because an MP3 file has a different waveform from the humming/singing query due to background music and multiple (polyphonic) melodies. To overcome these problems, we propose a new QbSH method using MP3 files on mobile devices. This research is novel in four ways. First, it is the first research on QbSH that uses MP3 files as reference songs. Second, the start and end positions on the MP3 file to be matched are estimated with a multi-layered perceptron (MLP) before matching against the humming/singing query file. Third, for more accurate results, four MLPs are used, producing the start and end positions for the dynamic time warping (DTW) matching algorithm and for the chroma-based DTW algorithm, respectively. Fourth, the two matching scores from the DTW and chroma-based DTW algorithms are combined using the PRODUCT rule, which yields a higher matching accuracy (a sketch of DTW matching and PRODUCT-rule fusion follows this entry). Experimental results with the AFA MP3 database show that the accuracy of the proposed method (Top-1 accuracy of 98%, with an MRR of 0.989) is much higher than that of other methods. We also show the effectiveness of the proposed system on a consumer mobile device.
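Two of the paper's ingredients, DTW matching and PRODUCT-rule score fusion, can be sketched compactly. The MLP position estimation and chroma feature extraction are omitted here, and the 1/(1 + distance) score normalization is an assumption, not the paper's formula.

```python
import numpy as np

def dtw_distance(query: np.ndarray, reference: np.ndarray) -> float:
    """Classic dynamic time warping distance between two 1-D sequences
    (e.g. pitch contours), with the standard three-way recurrence."""
    n, m = len(query), len(reference)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(query[i - 1] - reference[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def product_rule(score_dtw: float, score_chroma: float) -> float:
    """Combine two similarity scores in [0, 1] by multiplication."""
    return score_dtw * score_chroma

query = np.array([60, 62, 64, 65], dtype=float)        # hummed pitch contour
ref = np.array([60, 61, 62, 64, 65, 67], dtype=float)  # reference segment
d = dtw_distance(query, ref)
print(product_rule(1 / (1 + d), 0.9))                  # fused matching score
```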