• Title/Summary/Keyword: Voice user interface

Search Result 146

System Performance and Traffic Control for the AAL Type 2 Traffic in IMT-2000 Networks (IMT-2000 망에서 AAL-2 구조의 트래픽 제어 및 시스템 성능)

  • Ryu, Byung-Han;Ahn, Jee-Hwan;Baek, Jang-Hyun
    • IE interfaces
    • /
    • v.13 no.2
    • /
    • pp.178-187
    • /
    • 2000
  • In this paper, we investigate the system performance when voice traffic is carried over ATM Adaptation Layer type 2 (AAL-2) and transmitted from the Base Station Transceiver Subsystem (BTS) to the Base Station Controller (BSC) over an E1 link in an International Mobile Telecommunication-2000 (IMT-2000) network. For this purpose, we first briefly describe the architecture of the BTS and the BSC, and then model them as a queueing network. Through simulation, we determine the processing time required at the traffic control blocks and the timeout value that should be set for multiplexing user packets in the Line Interface Unit (LIU). Further, we evaluate the performance of the physical links, the timeout probability that user packets cannot be multiplexed within the established timeout, and the multiplexing gain. Finally, we present the number of voice users that can be simultaneously admitted on one E1 link, together with the 99.9th-percentile transmission delay from the Radio Channel Element (RCE) to the Selector & Transcoder Subsystem (STS).
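The timeout-driven multiplexing the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's model: packets arriving within the timeout are packed into one PDU, which is flushed when it fills or when the timer expires; the payload size and function names are our assumptions.

```python
# Toy sketch of AAL-2-style multiplexing with a timeout: user packets
# arriving within the timeout are packed into one CPS-PDU; the PDU is
# sent when it is full or when the timer expires. Sizes are illustrative.

CPS_PDU_PAYLOAD = 47  # bytes available per PDU (illustrative value)

def multiplex(packets, timeout):
    """packets: list of (arrival_time, size) tuples, sorted by time.
    Returns a list of PDUs, each a list of packet sizes, where a PDU
    is flushed either on fill or on timer expiry."""
    pdus, current, used, opened_at = [], [], 0, None
    for t, size in packets:
        if current and t - opened_at > timeout:
            pdus.append(current)            # timer expired: flush partial PDU
            current, used = [], 0
        if used + size > CPS_PDU_PAYLOAD:
            pdus.append(current)            # PDU full: flush
            current, used = [], 0
        if not current:
            opened_at = t                   # timer starts with the first packet
        current.append(size)
        used += size
    if current:
        pdus.append(current)
    return pdus
```

A shorter timeout lowers the multiplexing delay but also lowers the multiplexing gain, which is the trade-off the simulation study quantifies.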


Automatic Generation of Voice Web Pages Based on SALT (SALT 기반 음성 웹 페이지의 자동 생성)

  • Ko, You-Jung;Kim, Yoon-Joong
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.3
    • /
    • pp.177-184
    • /
    • 2010
  • With the introduction of voice browsers, voice dialog applications have become available in the Web environment. A voice dialog application consists of voice Web pages whose dialog scripts are written in SALT (Speech Application Language Tags). Current Web pages are designed for visual interaction, yet they are potentially capable of supporting voice dialog. This paper therefore proposes an automated voice Web generation method that finds the elements suitable for voice dialog in HTML-based Web pages and converts them into SALT. The automatic generation system consists of a lexical analyzer and a syntactic analyzer that together convert a Web page described in HTML into a voice Web page described in HTML+SALT. The converted voice Web page can handle not only conventional mouse and keyboard input but also voice dialog.
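The HTML-to-HTML+SALT step can be sketched roughly as below. The SALT tag names (`prompt`, `listen`, `grammar`, `bind`) follow the SALT specification, but the toy regex pass and the mapping rule are our assumptions, not the paper's published grammar.

```python
import re

# Hypothetical sketch of the conversion: each named text <input> found
# by a (toy) lexical pass gets a SALT prompt/listen pair appended, so
# the field can also be filled by voice.

def add_salt(html):
    def voice_enable(match):
        name = match.group(1)
        salt = (f'<salt:prompt>Say a value for {name}</salt:prompt>'
                f'<salt:listen><salt:grammar src="{name}.grxml"/>'
                f'<salt:bind targetelement="{name}"/></salt:listen>')
        return match.group(0) + salt        # keep the visual element intact
    return re.sub(r'<input[^>]*name="(\w+)"[^>]*/?>', voice_enable, html)
```

Because the original element is kept, the page stays usable with mouse and keyboard, matching the multimodal goal stated in the abstract.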

Design of Gesture based Interfaces for Controlling GUI Applications (GUI 어플리케이션 제어를 위한 제스처 인터페이스 모델 설계)

  • Park, Ki-Chang;Seo, Seong-Chae;Jeong, Seung-Moon;Kang, Im-Cheol;Kim, Byung-Gi
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.1
    • /
    • pp.55-63
    • /
    • 2013
  • NUI (Natural User Interfaces) evolved from CLI (Command Line Interfaces) through GUI (Graphical User Interfaces). NUI uses many different input modalities, including multi-touch, motion tracking, voice, and stylus. To adopt NUI in a legacy GUI application, a developer must add device libraries, modify the relevant source code, and debug it. In this paper, we propose a gesture-based interface model that can be applied without modifying existing event-based GUI applications, and we present an XML schema for specifying the proposed model. We demonstrate the use of the model through a prototype.
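The core idea, an XML specification that maps gestures onto events the legacy application already understands, might look like this. The element and attribute names are illustrative, not the paper's actual schema.

```python
import xml.etree.ElementTree as ET

# Sketch of the proposed model: an XML map translates recognized
# gestures into the ordinary events of an event-based GUI application,
# so the application itself needs no source-code modification.

SPEC = """
<gesture-map>
  <gesture name="swipe-left" event="KEY_PAGEDOWN"/>
  <gesture name="swipe-right" event="KEY_PAGEUP"/>
  <gesture name="push" event="MOUSE_CLICK"/>
</gesture-map>
"""

def load_map(xml_text):
    root = ET.fromstring(xml_text)
    return {g.get("name"): g.get("event") for g in root.findall("gesture")}

def to_event(gesture, mapping):
    # returns the event to inject into the application, or None
    return mapping.get(gesture)
```

A separate gesture-recognition layer would call `to_event` and inject the result through the platform's normal event queue, which is what keeps the target application unmodified.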

Home Appliance Control through Speech Recognition User Interface (음성 인식 사용자 인터페이스를 통한 가전기기 제어 기법)

  • Song, Wook;Jang, Hyun-Su;Eom, Young-Ik
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2006.11a
    • /
    • pp.265-268
    • /
    • 2006
  • As the ubiquitous computing environment expands, there is growing demand for user-centered multimodal user interfaces that go beyond the conventional keyboard and mouse. XHTML+Voice is a new service paradigm that can provide both voice and visual modalities; unlike systems that provide only voice or only visual information, it embeds VoiceXML in XHTML and thereby exploits the advantages of both languages. In this paper, drawing on these advantages of VoiceXML, we propose and experiment with a scenario in which the interfaces of the various appliances that make up a smart home are prepared in advance as templates and controlled through a mobile device.
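The template idea can be illustrated as follows: each appliance gets a pre-built dialog template, and a control page is produced by filling in the device's commands. The minimal VoiceXML-like fragment below is our own illustration, not the authors' actual markup.

```python
# Toy sketch of per-appliance dialog templates for a smart home:
# one template, instantiated per device with its command vocabulary.

TEMPLATE = """<form id="{device}">
  <field name="command">
    <prompt>Which {device} command? {options}</prompt>
  </field>
</form>"""

def render(device, commands):
    """Fill the dialog template for one appliance."""
    return TEMPLATE.format(device=device, options=", ".join(commands))
```

In the scenario described, such generated fragments would be embedded in an XHTML page and served to the mobile device's multimodal browser.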

A Design and Implementation of the VoiceXML Multiple-View Editor Using MVC Framework (MVC 프레임 워크를 사용한 VoiceXML 다중 뷰 편집기의 설계 및 구현)

  • 유재우;염세훈
    • The Journal of the Acoustical Society of Korea
    • /
    • v.23 no.5
    • /
    • pp.390-399
    • /
    • 2004
  • In this paper, we design and implement a multiple-view VoiceXML editor to improve the efficiency of VoiceXML editing. The editor uses an MVC framework to support multiple views and paradigms, and consists of a Model, Views, and a Controller. The Model, the core data structure, is constructed from an abstract syntax tree and an abstract grammar. A View, the user interface, is formalized through unparsing rules and an unparser. The Controller, which coordinates the Model and the Views, is made up of a command interpreter and a tree handler. The editor overcomes the drawbacks of existing XML editors by showing the document structure and context concurrently, as well as the document flow. Because the MVC framework has been applied, the editor provides users with various editing views at the same time; it thus supports an efficient and convenient editing environment for voice-Web documents and guarantees transparency, as the various views share a single consistent model.
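The MVC arrangement the abstract describes can be sketched minimally: one shared model notifies every registered view, so a structure view and a flow view always render the same document state. Class and method names below are illustrative, not from the paper.

```python
# Minimal MVC sketch: several views observe one model, so all views of
# the (toy) syntax tree stay consistent after every edit.

class Model:
    def __init__(self, tree):
        self.tree = tree          # stands in for the abstract syntax tree
        self.views = []

    def attach(self, view):
        self.views.append(view)

    def update(self, tree):       # called by the controller after an edit
        self.tree = tree
        for v in self.views:
            v.render(self.tree)

class StructureView:
    def __init__(self):
        self.shown = None

    def render(self, tree):
        self.shown = tree         # a real view would draw the tree here

class FlowView(StructureView):    # second view of the same model
    pass
```

This is the property the paper calls consistency across views: because both views render from the same model object, they can never display diverging versions of the document.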

The Graphical User Interface Design for Optimal MRI Operation (MRI 시스템의 최적화 운용을 위한 GUI 디자인)

  • Moon, J.Y.;Kang, S.H.;Kim, K.S.;Kim, J.S.;Im, H.J.
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1997 no.05
    • /
    • pp.235-238
    • /
    • 1997
  • Graphical User Interface (GUI) software is developed for a 0.3 Tesla permanent-magnet Magnetic Resonance Imaging (MRI) system, and the state of the art in GUI design is discussed in this paper. Object-oriented concepts are applied to the GUI software, utilizing an Interbase ODBC database layer. Multimedia elements such as voice, sound, and music are also integrated into the GUI to enhance the efficiency of MRI operation.


Compact Robotic Arm to Assist with Eating using a Closed Link Mechanism (크로스 링크 기구를 적용한 소형 식사지원 로봇)

  • 강철웅;임종환
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.20 no.3
    • /
    • pp.202-209
    • /
    • 2003
  • We succeeded in building a cost-effective assistance robotic arm with a compact and lightweight body. The robotic arm has three joints, and its tip, where tools are installed, consists of a closed link mechanism composed of two actuators and several links. The arm was made possible by the use of actuators typically found in radio-control devices, and its controller consists of a single PIC chip. Because the operators are in most cases elderly or disabled, the robotic arm has a friendly user interface: the operator can manipulate it by voice commands or by pressing a push button. The robotic arm was successfully prototyped and tested with an elderly patient to assist with eating, and the results of the field test were satisfactory.

A Study on Integrated User Interface transfer model base on UIML (UIML에 기반한 통합 사용자 인터페이스 변환 모델에 관한 연구)

  • Park, Byung-Chul;Son, Min-Woo;Kim, Kang;Shin, Dong-Il;Shin, Dong-Kyoo
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2004.05a
    • /
    • pp.865-868
    • /
    • 2004
  • Today, with research and development on smart homes, home automation, and home networks, many devices interoperate with computers. As a result, it is common and inefficient to have to develop a user interface several times over to suit different devices, and it is also a heavy burden for developers to learn and work in all the different languages used by those devices. To address this inefficiency, a new markup language has been proposed: UIML (User Interface Markup Language). UIML is an XML-compliant language in which user interfaces for multiple devices can be implemented as a single document; a developer writes only one UIML document, which can then easily be transformed into other languages such as HTML, WML, and VoiceXML. However, UIML still has the inconvenience that a separate document must be generated for each target language. In this study, we complement UIML and present an integrated user-interface transformation model.
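The single-source idea behind UIML can be sketched like this: one abstract description of the interface, rendered by per-target generators into different concrete languages. The tiny in-memory format and the two renderers are our illustration, not the UIML vocabulary itself.

```python
# Toy single-source UI description rendered to two targets, in the
# spirit of UIML's one-document-many-devices goal.

UI = [("label", "Temperature"), ("button", "Up"), ("button", "Down")]

def to_html(parts):
    out = []
    for kind, text in parts:
        out.append(f"<span>{text}</span>" if kind == "label"
                   else f"<button>{text}</button>")
    return "".join(out)

def to_voicexml(parts):
    # a voice rendering keeps only the actionable items as menu choices
    choices = [text for kind, text in parts if kind == "button"]
    return f"<menu><prompt>Say {' or '.join(choices)}</prompt></menu>"
```

Adding a WML renderer would require no change to the abstract description, which is the economy the abstract argues for.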


An Arrangement Method of Voice and Sound Feedback According to the Operation : For Interaction of Domestic Appliance (조작 방식에 따른 음성과 소리 피드백의 할당 방법 가전제품과의 상호작용을 중심으로)

  • Hong, Eun-ji;Hwang, Hae-jeong;Kang, Youn-ah
    • Journal of the HCI Society of Korea
    • /
    • v.11 no.2
    • /
    • pp.15-22
    • /
    • 2016
  • The ways of interacting with digital appliances are becoming more diverse. Users can control appliances with a remote control or a touch screen, and appliances can give users feedback in various ways, such as sound, voice, and visual signals. However, there is little research on which output method should be used for feedback given the user's input method. In this study, we designed an experiment to identify how to appropriately match the output methods, voice and sound, to the input methods, voice and button. We built four interaction types from the two input methods and two output methods, and compared their usability, perceived satisfaction, preference, and suitability. Results reveal that the output method affects the ease of use and perceived satisfaction of the input method: voice input with sound feedback was rated more satisfying than with voice feedback, whereas keying input with voice feedback was rated more satisfying than with sound feedback, and keying input was more dependent on the output method than voice input. We also found that an appliance's feedback method determines the perceived appropriateness of the interaction.

NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun;Ahn, So-young;Shin, Dong-il;Shin, Dong-kyoo
    • Journal of Internet Computing and Services
    • /
    • v.16 no.6
    • /
    • pp.11-21
    • /
    • 2015
  • As interest in Human-Computer Interaction (HCI) grows, research on HCI has been actively conducted, including work on Natural User Interface/Natural User eXperience (NUI/NUX) based on the user's gestures and voice. NUI/NUX requires recognition algorithms such as gesture or voice recognition, but these are complex to implement and require long training, since they go through preprocessing, normalization, and feature-extraction steps. Recently, Microsoft launched Kinect as an NUI/NUX development tool, which has attracted attention, and studies using Kinect have been conducted. In a previous study, the authors implemented a highly intuitive hand-mouse interface using the user's physical features; it suffered, however, from unnatural mouse movement and low accuracy of the mouse functions. In this study, we designed and implemented a hand-mouse interface that introduces a new concept, the 'virtual monitor', by extracting the user's physical features through Kinect in real time. The virtual monitor is a virtual space controlled by the hand mouse, and a coordinate on the virtual monitor can be accurately mapped onto a coordinate on the real monitor. The hand-mouse interface based on the virtual-monitor concept retains the outstanding intuitiveness of the previous study while improving the accuracy of the mouse functions. Furthermore, we increased the accuracy of the interface by recognizing the user's unnecessary actions through a concentration indicator derived from their electroencephalogram (EEG) data. To evaluate intuitiveness and accuracy, we tested the interface with 50 people ranging from their 10s to their 50s. In the intuitiveness experiment, 84% of the subjects learned how to use it within one minute; in the accuracy experiment, the mouse functions achieved drag 80.4%, click 80%, and double-click 76.7%. With its intuitiveness and accuracy verified through experiment, the proposed hand-mouse interface is expected to be a good example of controlling a system by hand in the future.
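The virtual-monitor mapping can be sketched as a linear transform: a rectangle in front of the user (sized from their physical features) is mapped onto real screen pixels. The rectangle bounds and screen size below are made-up example values, not the paper's calibration.

```python
# Sketch of the virtual-monitor idea: hand coordinates inside a virtual
# rectangle are mapped linearly onto the pixels of the real monitor.

def to_screen(hand_x, hand_y, vm, screen_w=1920, screen_h=1080):
    """vm = (left, top, right, bottom) of the virtual monitor in sensor
    coordinates; returns the screen pixel the hand points at."""
    left, top, right, bottom = vm
    u = (hand_x - left) / (right - left)   # 0..1 across the rectangle
    v = (hand_y - top) / (bottom - top)
    u = min(max(u, 0.0), 1.0)              # clamp: cursor stays on-screen
    v = min(max(v, 0.0), 1.0)
    return round(u * (screen_w - 1)), round(v * (screen_h - 1))
```

Sizing the rectangle from the individual user's reach is what lets the same linear mapping stay accurate across users of different builds.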