• Title/Summary/Keyword: Voice user interface

146 search results

An Advanced User-friendly Wireless Smart System for Vehicle Safety Monitoring and Accident Prevention (차량 안전 모니터링 및 사고 예방을 위한 친사용자 환경의 첨단 무선 스마트 시스템)

  • Oh, Se-Bin;Chung, Yeon-Ho;Kim, Jong-Jin
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.9 / pp.1898-1905 / 2012
  • This paper presents an On-board Smart Device (OSD) for moving vehicles, based on a smooth integration of Android-based devices and a Micro-control Unit (MCU). The MCU is used for the acquisition and transmission of various vehicle-borne data. The OSD has three functions: Record, Report and Alarm. Based on these RRA functions, the OSD is a safety- and convenience-oriented smart device that facilitates alert services such as accident reporting and rescue, as well as alarms for the status of the vehicle. In addition, a voice-activated interface is developed for the convenience of users. Vehicle data can also be uploaded to a remote server for further access and data manipulation. Therefore, unlike conventional black boxes, the developed OSD lends itself to a user-friendly smart device for vehicle safety: it stores monitoring images while driving, collects vehicle data, reports accidents, and enables subsequent rescue operations. The developed OSD can thus be considered an essential safety smart device equipped with comprehensive wireless data service, image transfer and a voice-activated interface.
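
The abstract above describes the Record/Report/Alarm (RRA) split and the upload of vehicle data to a remote server only at a high level. The following Java fragment is a minimal illustrative sketch of that split, not the authors' implementation; the class names, the engine-temperature threshold and the server URL are assumptions.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Hypothetical sketch of the Record/Report/Alarm (RRA) split described in the abstract.
    public class OsdSketch {

        // One sample of vehicle-borne data acquired from the MCU (fields are assumed).
        record VehicleSample(long timestampMs, double speedKmh, double engineTempC, boolean impactDetected) {}

        static final String SERVER_URL = "http://example.org/osd/report"; // placeholder, not a real endpoint

        // Record: keep the sample locally (here: just print; a real device would write to storage).
        static void record(VehicleSample s) {
            System.out.println("RECORD " + s);
        }

        // Alarm: warn the driver when a monitored value leaves its normal range (threshold assumed).
        static void alarm(VehicleSample s) {
            if (s.engineTempC() > 110.0) {
                System.out.println("ALARM: engine temperature " + s.engineTempC() + " C");
            }
        }

        // Report: on an impact, upload the sample to a remote server for rescue and later access.
        static void report(VehicleSample s) throws Exception {
            if (!s.impactDetected()) return;
            String json = String.format("{\"t\":%d,\"speed\":%.1f,\"impact\":true}", s.timestampMs(), s.speedKmh());
            HttpURLConnection con = (HttpURLConnection) new URL(SERVER_URL).openConnection();
            con.setRequestMethod("POST");
            con.setDoOutput(true);
            con.setRequestProperty("Content-Type", "application/json");
            try (OutputStream out = con.getOutputStream()) {
                out.write(json.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("REPORT sent, HTTP " + con.getResponseCode());
        }

        public static void main(String[] args) throws Exception {
            VehicleSample s = new VehicleSample(System.currentTimeMillis(), 62.0, 115.0, true);
            record(s);
            alarm(s);
            report(s);
        }
    }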

A Study on Interactive Media Art Based on Voice Recognition - Focusing on the Audio-Visual Interactive Installation "Water Music" - (음성인식 기반 인터렉티브 미디어아트의 연구 - 소리-시각 인터렉티브 설치미술 "Water Music" 을 중심으로-)

  • Lee, Myung-Hak;Jiang, Cheng-Ri;Kim, Bong-Hwa;Kim, Kyu-Jung
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.354-359 / 2008
  • This audio-visual interactive installation is composed of a video projection and digital interface technology combined with recognition of the viewer's voice. The viewer can interact with the computer-generated moving images growing on the screen by blowing (breathing) or making sound. This symbiotic audio and visual installation environment allows viewers to experience an illusionistic space physically as well as psychologically. The main programming technologies used to generate the moving water waves that interact with the viewer in this installation are Visual C++ and the DirectX SDK. For making the water waves, full-3D rendering technology and a particle system were used.
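
The paper's implementation used Visual C++ and the DirectX SDK, for which the abstract gives no details. The sketch below only illustrates the underlying idea of turning microphone loudness (the viewer's breath or voice) into the strength of a water-surface disturbance; it is written in Java for consistency with the other sketches in this listing, and the RMS threshold and scaling are assumptions.

    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.TargetDataLine;

    // Minimal sketch: estimate microphone loudness and use it as the strength of a
    // water-wave disturbance, as in the breath/voice-driven installation described above.
    public class BreathToWave {
        public static void main(String[] args) throws Exception {
            AudioFormat fmt = new AudioFormat(16000f, 16, 1, true, false);
            TargetDataLine mic = AudioSystem.getTargetDataLine(fmt);
            mic.open(fmt);
            mic.start();
            byte[] buf = new byte[1600]; // 50 ms of 16 kHz mono audio
            while (true) {
                int n = mic.read(buf, 0, buf.length);
                double sum = 0;
                for (int i = 0; i + 1 < n; i += 2) {
                    int sample = (buf[i + 1] << 8) | (buf[i] & 0xff); // little-endian 16-bit sample
                    sum += (double) sample * sample;
                }
                double rms = Math.sqrt(sum / (n / 2.0));
                if (rms > 500) { // assumed threshold: the viewer is blowing or speaking
                    double strength = Math.min(1.0, rms / 8000.0);
                    System.out.printf("disturb water surface with strength %.2f%n", strength);
                    // a renderer (e.g. a particle system) would spawn ripples here
                }
            }
        }
    }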

A Study on the Multi-Modal Browsing System by Integration of Browsers Using Java RMI (자바 RMI를 이용한 브라우저 통합에 의한 멀티-모달 브라우징 시스템에 관한 연구)

  • Jang Joonsik;Yoon Jaeseog;Kim Gukboh
    • Journal of Internet Computing and Services / v.6 no.1 / pp.95-103 / 2005
  • Recently, multi-modal systems have been studied widely and actively. Such multi-modal systems increase the possibility of realizing HCI (Human-Computer Interaction), provide information in various ways, and are applicable to e-business applications. If an ideal multi-modal system can be realized in the future, users will eventually be able to maximize interactive usability between information instruments and people in hands-free and eyes-free settings. In this paper, a new multi-modal browsing system that uses Java RMI as the communication interface and integrates an HTML browser and a voice browser is proposed, and an English-English dictionary search application is implemented as an example.
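
Since the paper names Java RMI as the communication interface between the HTML browser and the voice browser but the abstract contains no code, the following is a minimal RMI sketch of that integration. The interface name BrowserBridge, the method onVoiceCommand and the registry binding name are illustrative assumptions, not the authors' API.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Remote interface shared by the two browsers (names are assumed for illustration).
    interface BrowserBridge extends Remote {
        // The voice browser forwards a recognized utterance; the HTML browser reacts to it.
        String onVoiceCommand(String utterance) throws RemoteException;
    }

    // Stand-in for the HTML-browser side: it exports the bridge and handles commands.
    public class HtmlBrowserSide implements BrowserBridge {
        @Override
        public String onVoiceCommand(String utterance) {
            // e.g. look up "apple" in the English-English dictionary page and render the result
            return "displayed dictionary entry for: " + utterance;
        }

        public static void main(String[] args) throws Exception {
            BrowserBridge stub = (BrowserBridge) UnicastRemoteObject.exportObject(new HtmlBrowserSide(), 0);
            Registry reg = LocateRegistry.createRegistry(1099);
            reg.rebind("BrowserBridge", stub);
            System.out.println("HTML browser side ready; voice browser can now call onVoiceCommand()");
        }
    }

    // The voice-browser side would do something like:
    //   BrowserBridge bridge = (BrowserBridge) LocateRegistry.getRegistry("localhost", 1099).lookup("BrowserBridge");
    //   System.out.println(bridge.onVoiceCommand("apple"));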

Design and Implementation of the Educational Humanoid Robot D2 for an Emotional Interaction System (감성 상호작용을 갖는 교육용 휴머노이드 로봇 D2 개발)

  • Kim, Do-Woo;Chung, Ki-Chull;Park, Won-Sung
    • Proceedings of the KIEE Conference / 2007.07a / pp.1777-1778 / 2007
  • In this paper, we design and implement a humanoid robot, with an educational purpose, which can collaborate and communicate with humans. We present an affective human-robot communication system for a humanoid robot, D2, which we designed to communicate with a human through dialogue. D2 communicates with humans by understanding and expressing emotion using facial expressions, voice, gestures and posture. Interaction between a human and the robot is made possible through our affective communication framework. The framework enables the robot to catch the emotional status of the user and to respond appropriately. As a result, the robot can engage in a natural dialogue with a human. To interact with a human through voice, gestures and posture, the developed educational humanoid robot consists of an upper body, two arms, a wheeled mobile platform and control hardware, including vision and speech capability and various control boards such as motion control boards and a signal processing board handling several types of sensors. Using the educational humanoid robot D2, we have presented successful demonstrations consisting of manipulation tasks with two arms, tracking objects using the vision system, and communication with humans through the emotional interface, synthesized speech, and the recognition of speech commands.
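
The affective communication framework is described only at a high level, so the toy sketch below merely illustrates the "catch the user's emotional status and respond appropriately" loop with a placeholder classifier. The emotion labels, responses and keyword rules are assumptions, not the authors' design.

    import java.util.Map;

    // Toy sketch of the affective loop described above: classify the user's emotional
    // state, then pick a matching multimodal response (speech + facial expression + gesture).
    public class AffectiveLoopSketch {

        enum Emotion { HAPPY, SAD, ANGRY, NEUTRAL }

        record Response(String speech, String face, String gesture) {}

        static final Map<Emotion, Response> RESPONSES = Map.of(
                Emotion.HAPPY,   new Response("That's great!", "smile", "raise arms"),
                Emotion.SAD,     new Response("I'm sorry to hear that.", "concerned", "tilt head"),
                Emotion.ANGRY,   new Response("Let's take a moment.", "calm", "open palms"),
                Emotion.NEUTRAL, new Response("Tell me more.", "neutral", "nod"));

        // Placeholder classifier: a real system would fuse voice, face and posture features.
        static Emotion classify(String utterance) {
            if (utterance.contains("great") || utterance.contains("yay")) return Emotion.HAPPY;
            if (utterance.contains("sad")) return Emotion.SAD;
            if (utterance.contains("angry")) return Emotion.ANGRY;
            return Emotion.NEUTRAL;
        }

        public static void main(String[] args) {
            Emotion e = classify("I feel sad today");
            Response r = RESPONSES.get(e);
            System.out.println("emotion=" + e + " -> say '" + r.speech() + "', face=" + r.face() + ", gesture=" + r.gesture());
        }
    }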

Understanding how agent control based on social status affects user experience factors in multi-user autonomous driving environments (다중 사용자 자율 주행 운전 환경에서 사회적 지위에 따른 에이전트의 제어권이 사용자 경험 요소에 미치는 영향)

  • JiYeon Kim;JuHye Ha;ChangHoon Oh
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.735-745 / 2023
  • The purpose of this study is to examine how an agent's control, granted according to a driver's social status, affects user experience factors in a multi-user environment of self-driving vehicles. We conducted a user study in which participants viewed four scenarios (route changing/parking x accepting/declining a fellow passenger's command) and answered a survey, followed by a post-hoc interview. Results showed that the route-changing scenario and the scenario in which the agent accepted a passenger's command each scored higher on usefulness (convenience, effectiveness, efficiency) than their counterparts. Regardless of the car owner's social status, participants rated AI agents more positively when the agents met their goals effectively. They also stressed that vehicle owners should always be in control of their agents. This study can provide guidelines for designing future autonomous driving scenarios in which an agent interacts with a driver and passengers.

Implementation of Embedded Speech Recognition System for Supporting Voice Commander to Control an Audio and a Video on Telematics Terminals (텔레메틱스 단말기 내의 오디오/비디오 명령처리를 위한 임베디드용 음성인식 시스템의 구현)

  • Kwon, Oh-Il;Lee, Heung-Kyu
    • Journal of the Institute of Electronics Engineers of Korea TC / v.42 no.11 / pp.93-100 / 2005
  • In this paper, we implement an embedded speech recognition system to support various application services, such as audio and video control, using a speech recognition interface in cars. The embedded speech recognition system is implemented and ported to a DSP board. Because the microphone type and speech codecs affect the accuracy of speech recognition, we optimize the simulation and test environment to effectively remove the real noises encountered in a car. We applied a noise suppression and feature compensation algorithm to increase the accuracy of speech recognition in a car, and we used context-dependent tied-mixture acoustic modeling. The performance evaluation showed the high accuracy of the proposed system in an office environment and even in a real car environment.
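
The recognizer internals mentioned above (noise suppression, feature compensation, tied-mixture acoustic modeling) are well beyond a short sketch, so the example below only illustrates the surrounding command layer: mapping a recognized phrase to an audio/video control action on the terminal. The phrase set and actions are assumptions, not the paper's command vocabulary.

    import java.util.Map;
    import java.util.function.Consumer;

    // Sketch of the command layer around the recognizer described above: once the embedded
    // recognizer returns a phrase, the terminal maps it to an audio/video control action.
    public class AvVoiceCommander {

        static final Map<String, Consumer<AvVoiceCommander>> COMMANDS = Map.of(
                "volume up",   c -> c.setVolume(c.volume + 1),
                "volume down", c -> c.setVolume(c.volume - 1),
                "play video",  c -> System.out.println("video: play"),
                "stop video",  c -> System.out.println("video: stop"));

        int volume = 5;

        void setVolume(int v) {
            volume = Math.max(0, Math.min(10, v));
            System.out.println("audio: volume=" + volume);
        }

        // Called with the recognizer's best hypothesis for one utterance.
        void onRecognized(String phrase) {
            Consumer<AvVoiceCommander> action = COMMANDS.get(phrase.toLowerCase().trim());
            if (action != null) action.accept(this);
            else System.out.println("unknown command: " + phrase);
        }

        public static void main(String[] args) {
            AvVoiceCommander c = new AvVoiceCommander();
            c.onRecognized("Volume Up");
            c.onRecognized("play video");
        }
    }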

A Study on the Development of Text Communication System based on AIS and ECDIS for Safe Navigation (항해안전을 위한 AIS와 ECDIS 기반의 문자통신시스템 개발에 관한 연구)

  • Ahn, Young-Joong;Kang, Suk-Young;Lee, Yun-Sok
    • Journal of the Korean Society of Marine Environment & Safety / v.21 no.4 / pp.403-408 / 2015
  • A text-based communication system has been developed, with a communication function on AIS and display and input functions on ECDIS, as a way to complement voice communication. It suffers no linguistic errors and is not affected by restrictions on VHF use or by noise. The text communication system is designed to use messages that convey clear intentions, and it further improves user convenience through various software UIs. It works without additional hardware installation or modification, can transmit a sentence selected only via the Message Banner Interface without keyboard input, and furthermore has the advantage of enhanced processing speed through its own message coding and decoding. It is judged to be the most useful alternative for reducing the language limitations and recognition errors of users and for solving the problems of voice communication on VHF. In addition, it will help to prevent collisions between ships through decreased VHF use and accurate, text-based communication and requests for cooperation in heavy-traffic areas.
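
As a rough illustration of the "own message coding and decoding" and banner-selection idea described above, the sketch below encodes a pre-defined sentence chosen from a banner as a short code and decodes it on the receiving side. The sentence list and payload format are assumptions, not the system's actual message set or the AIS message structure.

    import java.util.List;

    // Sketch of banner-based text messaging: pre-defined navigational sentences are selected
    // from a banner and sent as short numeric codes, so no keyboard input is needed and the
    // payload stays small. Sentences and framing are illustrative assumptions.
    public class TextMessageCodec {

        static final List<String> SENTENCES = List.of(
                "I will alter my course to starboard.",
                "I will alter my course to port.",
                "Please keep clear; I am restricted in my ability to manoeuvre.",
                "Request you reduce speed.");

        // Encode: the banner index plus the target ship's MMSI (assumed framing).
        static String encode(int sentenceIndex, String targetMmsi) {
            return "TXT," + targetMmsi + "," + sentenceIndex;
        }

        // Decode on the receiving ECDIS: recover the sentence and show it to the officer.
        static String decode(String payload) {
            String[] parts = payload.split(",");
            int idx = Integer.parseInt(parts[2]);
            return "To " + parts[1] + ": " + SENTENCES.get(idx);
        }

        public static void main(String[] args) {
            String payload = encode(0, "440123456");
            System.out.println("sent:     " + payload);
            System.out.println("received: " + decode(payload));
        }
    }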

English Conversation System Using Artificial Intelligence Based on Virtual Reality (가상현실 기반의 인공지능 영어회화 시스템)

  • Cheon, EunYoung
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.55-61 / 2019
  • In order to realize foreign language education, various educational media have been provided, but they have disadvantages: the costs of teaching aids and media programs are high, and real-time responsiveness is poor. In this paper, we propose an artificial intelligence English conversation system based on VR and speech recognition. We used Google Cardboard VR and the Google Speech API to build the system and developed artificial intelligence algorithms for providing the virtual reality environment and conversation. In the proposed speech recognition server system, a sentence spoken by the user is divided into word units and compared with the words stored in the database, and the match with the highest probability is provided. Users can communicate with and respond to people in virtual reality. The conversation function is independent of contextual conversations and themes, and conversations with the AI assistant are implemented in real time so that the user can check the system in real time. The system combining virtual reality and the voice recognition function proposed in this paper is expected to contribute to the expansion of virtual education content services related to the Fourth Industrial Revolution.
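
The abstract describes the matching step of the speech recognition server (split the spoken sentence into words, compare them with stored words, return the most probable match) without code; the sketch below is one plausible reading of that step. The stored sentences and the word-overlap score are assumptions.

    import java.util.Arrays;
    import java.util.List;

    // Sketch of the matching step described above: the recognized sentence is split into
    // words and compared with stored sentences, and the most probable match is returned.
    public class ConversationMatcher {

        static final List<String> STORED = List.of(
                "how are you today",
                "what is your name",
                "where are you from");

        // Score = fraction of the stored sentence's words that appear in the user's words.
        static double score(List<String> userWords, String stored) {
            String[] w = stored.split(" ");
            long hits = Arrays.stream(w).filter(userWords::contains).count();
            return (double) hits / w.length;
        }

        static String bestMatch(String recognized) {
            List<String> words = Arrays.asList(recognized.toLowerCase().split("\\s+"));
            return STORED.stream()
                    .max((a, b) -> Double.compare(score(words, a), score(words, b)))
                    .orElse("");
        }

        public static void main(String[] args) {
            System.out.println(bestMatch("umm how are you doing today"));
        }
    }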

Implementation of User-friendly Intelligent Space for Ubiquitous Computing (유비쿼터스 컴퓨팅을 위한 사용자 친화적 지능형 공간 구현)

  • Choi, Jong-Moo;Baek, Chang-Woo;Koo, Ja-Kyoung;Choi, Yong-Suk;Cho, Seong-Je
    • The KIPS Transactions:PartD / v.11D no.2 / pp.443-452 / 2004
  • The paper presents an intelligent space management system for ubiquitous computing. The system is basically a home/office automation system that can control lights, an electronic key, and home appliances such as a TV and audio. On top of these basic capabilities, the system has four notable features. First, we can access the system using either a cellular phone or a browser on a PC connected to the Internet, so that we can control the system at any time and in any place. Second, to provide a more human-oriented interface, we integrate voice recognition functionality into the system. Third, the system supports not only reactive services but also proactive services, based on the regularities of user behavior. Finally, by exploiting embedded technologies, the system can run on hardware with less processing power and storage. We have implemented the system on an embedded board consisting of a 205 MHz StrongARM CPU, 32 MB SDRAM, 16 MB NOR-type flash memory, and a relay box. On this hardware platform, software components such as embedded Linux, the HTK voice recognition tools, the GoAhead Web Server, and a GPIO driver cooperate to support the user-friendly intelligent space.
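
The proactive services "based on the regularities of user behavior" are not specified further in the abstract, so the sketch below shows one possible reading: if the behaviour log contains the same action at roughly the same hour on several days, the space offers to perform it automatically. The event fields and the three-day threshold are assumptions.

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    // Sketch of a proactive-service rule: actions seen at the same hour on at least
    // three distinct days are proposed for automatic execution at that hour.
    public class ProactiveServiceSketch {

        record Event(int dayOfLog, int hour, String action) {}

        static List<String> proposeProactiveActions(List<Event> log) {
            // group by (hour, action) and keep the pairs observed on at least 3 distinct days
            return log.stream()
                    .collect(Collectors.groupingBy(e -> e.hour() + ":" + e.action(),
                            Collectors.mapping(Event::dayOfLog, Collectors.toSet())))
                    .entrySet().stream()
                    .filter(en -> en.getValue().size() >= 3)
                    .map(Map.Entry::getKey)
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<Event> log = List.of(
                    new Event(1, 19, "turn on living-room light"),
                    new Event(2, 19, "turn on living-room light"),
                    new Event(3, 19, "turn on living-room light"),
                    new Event(3, 22, "turn on TV"));
            // prints [19:turn on living-room light] -> offer this action proactively around 19:00
            System.out.println(proposeProactiveActions(log));
        }
    }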

Interactive Game Designed for Early Children Using a Multimedia Interface: Physical Activities (멀티미디어 인터페이스 기술을 이용한 유아 대상의 체감형 게임 설계 : 신체 놀이 활동 중심)

  • Won, Hye-Min;Lee, Kyoung-Mi
    • The Journal of the Korea Contents Association / v.11 no.3 / pp.116-127 / 2011
  • This paper proposes interactive game elements for children: contents, design, sound, gesture recognition, and speech recognition. Interactive games for young children must use contents that reflect educational needs and design elements that are bright, friendly, and simple to use. The games should also consider background music that is familiar to children and narration that makes the games easy to play. For gesture recognition and speech recognition, interactive games must use gesture and voice data appropriate to the age of the game user. This paper also introduces the development process for an interactive skipping game and applies child-oriented contents, gestures, and voices to the game.