• Title/Summary/Keyword: intelligent interface

Search Results: 648

Implementation of a Smartphone Interface for a Personal Mobility System Using a Magnetic Compass Sensor and Wireless Communication (지자기 센서와 무선통신을 이용한 PMS의 스마트폰 인터페이스 구현)

  • Kim, Yeongyun; Kim, Dong Hun
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.1 / pp.48-56 / 2015
  • In this paper, a smartphone-controlled personal mobility system (PMS) based on a compass sensor is developed. The magnetic compass sensor makes the PMS move according to the heading direction of the smartphone held by the rider, which provides a more intuitive interface than a PMS controlled by pushing buttons. The magnetic compass sensor also plays a role in compensating for the mechanical characteristics of the motors mounted on the PMS. For adequate control of the robot, two methods based on the magnetic compass sensor and wireless communication are presented: an absolute-direction method and a relative-direction method. Experimental results show that the PMS is conveniently and effectively controlled by the two proposed methods.
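
The abstract does not spell out the control law, so the sketch below is only an illustration of how the two direction methods could work, assuming the smartphone reports its compass heading in degrees and the PMS knows its own heading; `wrap_angle`, the gain, and the sign convention are hypothetical.

```python
import math

def wrap_angle(deg):
    """Wrap an angle difference into the range [-180, 180) degrees."""
    return (deg + 180.0) % 360.0 - 180.0

def absolute_direction_command(phone_heading, pms_heading, gain=0.5):
    """Steer the PMS toward the absolute compass heading of the phone.

    phone_heading, pms_heading: compass headings in degrees (0 = north).
    Returns a steering command proportional to the heading error.
    """
    error = wrap_angle(phone_heading - pms_heading)
    return gain * error

def relative_direction_command(phone_heading, phone_reference, gain=0.5):
    """Steer the PMS by how far the phone has been rotated away from a
    reference heading captured when the rider engaged control."""
    error = wrap_angle(phone_heading - phone_reference)
    return gain * error

# Example: phone points 30 deg east of north, PMS currently faces north.
if __name__ == "__main__":
    print(absolute_direction_command(30.0, 0.0))   # positive command, turn right
    print(relative_direction_command(30.0, 10.0))  # 20 deg rotation from reference
```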

Discriminative Power Feature Selection Method for Motor Imagery EEG Classification in Brain Computer Interface Systems

  • Yu, XinYang; Park, Seung-Min; Ko, Kwang-Eun; Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.13 no.1 / pp.12-18 / 2013
  • Motor imagery classification in electroencephalography (EEG)-based brain-computer interface (BCI) systems is an important research area. To reduce the complexity of the classification, selected power bands and electrode channels have been widely used to extract and select features from raw EEG signals, but state-of-the-art approaches still lose classification accuracy. To address this problem, we propose a discriminative feature extraction algorithm based on power bands with principal component analysis (PCA). First, the raw EEG signals from the motor cortex area were filtered with a bandpass filter covering the μ and β bands. The power within 0.4-second epochs was then examined to select the optimal feature-space region. Next, the feature dimensionality was reduced by PCA and transformed into a final feature vector set. The selected features were classified with a support vector machine (SVM). The proposed method was compared with a state-of-the-art power-band feature and shown to improve classification accuracy.
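
As a rough, self-contained illustration of the pipeline described above (μ/β bandpass filtering, band power over 0.4 s epochs, PCA, and SVM classification), a minimal sketch on synthetic data is given below; the sampling rate, filter order, epoch layout, and number of principal components are assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import PCA
from sklearn.svm import SVC

FS = 250  # sampling rate in Hz (assumed)

def bandpass(data, low, high, fs=FS, order=4):
    """Zero-phase bandpass filter applied along the last axis (time)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, data, axis=-1)

def band_power_features(trials, fs=FS, epoch_sec=0.4):
    """Mean power of the mu (8-12 Hz) and beta (13-30 Hz) bands per channel,
    computed over consecutive epochs of the given length."""
    n_trials, n_channels, n_samples = trials.shape
    step = int(epoch_sec * fs)
    feats = []
    for band in [(8, 12), (13, 30)]:
        power = bandpass(trials, *band) ** 2
        # average power within each epoch, then stack channels x epochs
        epochs = [power[:, :, i:i + step].mean(axis=-1)
                  for i in range(0, n_samples - step + 1, step)]
        feats.append(np.concatenate(epochs, axis=1))
    return np.concatenate(feats, axis=1)

# Synthetic demonstration: 40 trials, 3 channels, 2 s of data, 2 classes.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 3, 2 * FS))
y = np.repeat([0, 1], 20)

X = band_power_features(X_raw)
X = PCA(n_components=10).fit_transform(X)   # reduce feature dimensionality
clf = SVC(kernel="rbf").fit(X, y)           # classify the motor imagery classes
print("training accuracy:", clf.score(X, y))
```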

Design of game interface based on 3-Axis accelerometer for physical Interactive game (체감형 게임을 위한 3축 가속도 센서 기반 게임 인터페이스 개발)

  • Kim, Sung-Ho; Chae, Bu-Kyung
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.538-544 / 2009
  • As the world game market has been recording weak growth, new game concepts are required to attract the attention of gamers who have grown tired of the previous game paradigm. Recently, games that can recognize the player's motions and provide new interaction between the game and the user have become more popular than ever before. In physically interactive games based on virtual reality, the sense of reality is considered the most important factor for drawing immersion from gamers. In this work, a new type of 3-axis accelerometer-based interactive game interface that can effectively recognize the gamer's motion is suggested. Furthermore, various experiments are carried out to verify the effectiveness of the proposed scheme.
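
The abstract does not describe the recognition algorithm itself; the sketch below only illustrates one common approach to accelerometer-based motion input, registering a "swing" when the acceleration magnitude crosses a threshold, with all thresholds chosen arbitrarily for the example.

```python
import math

GRAVITY = 9.81          # m/s^2, magnitude reported by a resting accelerometer
SWING_THRESHOLD = 18.0  # assumed magnitude (m/s^2) that counts as a swing

def magnitude(ax, ay, az):
    """Magnitude of the 3-axis acceleration vector."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_swings(samples):
    """Return the indices at which a swing gesture starts.

    samples: iterable of (ax, ay, az) readings from the 3-axis accelerometer.
    A swing is registered when the magnitude rises above the threshold after
    having fallen back near rest (simple rising-edge detection).
    """
    events, above = [], False
    for i, (ax, ay, az) in enumerate(samples):
        m = magnitude(ax, ay, az)
        if m > SWING_THRESHOLD and not above:
            events.append(i)
            above = True
        elif m < GRAVITY * 1.2:
            above = False
    return events

# Example: mostly resting readings with one sharp swing in the middle.
readings = [(0.1, 0.2, 9.8)] * 5 + [(15.0, 12.0, 9.8)] + [(0.1, 0.2, 9.8)] * 5
print(detect_swings(readings))  # -> [5]
```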

Study of iPhone Interface for Remote Robot Control Based on WiFi Communication (WiFi 통신 기반의 로봇제어를 위한 아이폰 인터페이스 연구)

  • Jung, Hah-Min; Kim, Dong-Hun
    • Journal of the Korean Institute of Intelligent Systems / v.22 no.5 / pp.669-674 / 2012
  • This study presents the remote control of a mobile robot using an iPhone over Wi-Fi communication. The paper proposes the following set of user interfaces: an acceleration mode, an arrow-touch mode, and a jog-shuttle mode. To evaluate the three proposed interfaces, a virtual robot displayed on a monitor is controlled with the iPhone to follow a reference trajectory. In simulation, the standard deviation and summed errors are analyzed to show the strengths and weaknesses of the three interfaces. The proposed interfaces replace a costly dedicated remote controller with a cellular phone. Experimental results show that the proposed interfaces can be effectively used for remote robot control.
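
The abstract does not specify the Wi-Fi protocol between the iPhone and the robot, so the sketch below is only a generic illustration of how tilt readings from an acceleration-mode interface could be turned into velocity commands and sent over UDP; the message format, address, and scaling are hypothetical.

```python
import socket

ROBOT_ADDR = ("192.168.0.10", 5005)  # hypothetical robot IP and UDP port

def clamp(value, limit):
    """Limit a value to the range [-limit, +limit]."""
    return max(-limit, min(limit, value))

def tilt_to_velocity(pitch_deg, roll_deg, max_speed=0.5):
    """Map device tilt (the acceleration-mode idea) to linear/angular speed.

    pitch_deg controls forward/backward speed, roll_deg controls turning;
    45 degrees of tilt corresponds to full speed (illustrative scaling).
    """
    linear = clamp(pitch_deg / 45.0 * max_speed, max_speed)
    angular = clamp(roll_deg / 45.0 * max_speed, max_speed)
    return linear, angular

def send_command(sock, linear, angular):
    """Send one velocity command as a plain-text UDP datagram."""
    message = f"VEL {linear:.3f} {angular:.3f}".encode("ascii")
    sock.sendto(message, ROBOT_ADDR)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    linear, angular = tilt_to_velocity(pitch_deg=20.0, roll_deg=-10.0)
    send_command(sock, linear, angular)  # sends "VEL 0.222 -0.111"
    sock.close()
```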

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui; Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / v.2 no.4 / pp.285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera so that it points to a location specified by the user through spoken words and hand pointing. The system utilizes another, stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position the user is referring to; it then uses voice commands provided by the user to fine-tune the location and to change the zoom of the camera, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in teleconferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices that the user must utilize in order to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.
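
To make the coarse-then-fine control flow concrete, the sketch below illustrates how a hand-pointing estimate could set an initial pan/tilt angle and how recognized voice commands could then nudge or zoom the camera; the command vocabulary, step size, and angle limits are assumptions, not the system described in the paper.

```python
# Hypothetical pan/tilt controller driven by coarse pointing and voice commands.

PAN_RANGE = (-90.0, 90.0)   # degrees
TILT_RANGE = (-30.0, 30.0)  # degrees
STEP = 5.0                  # fine-adjustment step per voice command (assumed)

class PanTiltCamera:
    def __init__(self):
        self.pan, self.tilt, self.zoom = 0.0, 0.0, 1.0

    def point_from_gesture(self, pan_deg, tilt_deg):
        """Coarse positioning from the estimated hand-pointing direction."""
        self.pan = max(PAN_RANGE[0], min(PAN_RANGE[1], pan_deg))
        self.tilt = max(TILT_RANGE[0], min(TILT_RANGE[1], tilt_deg))

    def apply_voice_command(self, word):
        """Fine adjustment from a recognized spoken word."""
        moves = {"left": (-STEP, 0), "right": (STEP, 0),
                 "up": (0, STEP), "down": (0, -STEP)}
        if word in moves:
            dp, dt = moves[word]
            self.point_from_gesture(self.pan + dp, self.tilt + dt)
        elif word == "zoom":
            self.zoom = min(4.0, self.zoom * 2.0)

camera = PanTiltCamera()
camera.point_from_gesture(pan_deg=35.0, tilt_deg=-10.0)  # from hand pointing
for word in ["left", "up", "zoom"]:                      # from speech input
    camera.apply_voice_command(word)
print(camera.pan, camera.tilt, camera.zoom)  # 30.0 -5.0 2.0
```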


Performance Improvements of Brain-Computer Interface Systems based on Variance-Considered Machines (Variance-Considered Machine에 기반한 Brain-Computer Interface 시스템의 성능 향상)

  • Yeom, Hong-Gi; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.1 / pp.153-158 / 2010
  • This paper shows that the performance of a brain-computer interface (BCI) can be improved by decreasing the classification error rate of EEG signals with the Variance-Considered Machine (VCM) proposed in our previous study. A BCI controls a system, such as a computer, by brain signals, and many factors affect its performance. In this paper, we used the proposed algorithm as the classification algorithm, the most important factor of the system, and showed increased classification accuracy. For the experiments, we used data measured during imaginary movements of the left hand and foot. The results indicate the superiority of VCM by comparing its error rates with those of an SVM. Whereas our earlier work demonstrated the merits of VCM with theoretical and simulation results, this study demonstrates its superiority with error rates on real data.

Arduino-based power control system implemented by the MyndPlay (MyndPlay를 이용한 Arduino기반의 전원제어시스템 구현)

  • Kim, Byeongsu; Kim, Seungjin; Kim, Taehyung; Baek, Dongin; Shin, Jaehwan; An, Jeong-Eun; Jeong, Deok-Gil
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.924-926 / 2015
  • In this paper, we build a brain-computer connection environment, the goal of brain-computer interface (BCI) research pursued in many countries, using the MyndPlay headset and an IoT-oriented Arduino device. The system recognizes the EEG of the person wearing the equipment, analyzes and classifies it, and is designed to behave intelligently according to the user's condition. In addition, XBee and Bluetooth are used to communicate with other devices such as smartphones. In conclusion, the system checks the user's current status via brain waves and allows the power supply and other objects to be controlled using EEG (electroencephalography).
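
As a host-side illustration of the described EEG-driven power control, the sketch below assumes the headset's attention level is already available as a number per reading and that the Arduino switches a relay when it receives "ON" or "OFF" over a serial link; the port name, threshold, and message format are invented for the example.

```python
import serial  # pyserial; the Arduino listens on a serial port

PORT = "/dev/ttyUSB0"   # hypothetical serial port of the Arduino
THRESHOLD = 60          # assumed attention level (0-100) that switches power on

def control_power(attention_values):
    """Send ON/OFF commands to the Arduino based on EEG attention readings."""
    with serial.Serial(PORT, 9600, timeout=1) as arduino:
        powered = False
        for attention in attention_values:
            if attention >= THRESHOLD and not powered:
                arduino.write(b"ON\n")    # Arduino-side sketch closes the relay
                powered = True
            elif attention < THRESHOLD and powered:
                arduino.write(b"OFF\n")   # Arduino-side sketch opens the relay
                powered = False

if __name__ == "__main__":
    # Example with a pre-recorded stream of attention values.
    control_power([40, 55, 72, 80, 65, 30])
```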


Developing the Core Modules of the Viz-Platform for Supporting Public Service in the City (도시의 공공서비스 제공을 위한 시각화 플랫폼의 핵심모듈 개발)

  • Kim, Mi-Yun
    • Journal of Korea Multimedia Society / v.18 no.9 / pp.1131-1139 / 2015
  • The purpose of this research is to create a user-interface visualization platform that can provide demanded services while users communicate with their surrounding space in a smart-city environment. The platform builds on recent advances in interface technology and social media, and it uses shared information to support an interface environment suited to the user's lifestyle and spatial properties. "EzCity", the user-interface platform suggested in this research, can handle the enormous amount of public data of a smart city. The core of the user platform is made up of a public-data module, an interface module, a visualization module, and a service module. The role of the platform is to provide a "Geo-Intelligent Interface Service" so that space users can access the data in an easier and more practical interface environment. It reinforces the visualization process of data collection, systematization, visualization, and service provision. The platform is also expected to become a basis for solving the problems caused by the complexity and rapidly increasing amount of urban data.
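
The abstract names four core modules without detailing their interfaces, so the sketch below only illustrates how such a public-data, interface, visualization, and service pipeline might be composed; all class names, methods, and the sample data are invented for illustration.

```python
# Illustrative composition of the four core modules named in the abstract.

class PublicDataModule:
    def collect(self):
        """Gather public data for the city (stubbed with sample records)."""
        return [{"district": "A", "noise_db": 62}, {"district": "B", "noise_db": 48}]

class InterfaceModule:
    def filter_for_user(self, records, interest):
        """Select records relevant to the user's current context."""
        return [r for r in records if r["noise_db"] >= interest]

class VisualizationModule:
    def render(self, records):
        """Turn records into a simple textual 'visualization'."""
        return [f"{r['district']}: {'#' * (r['noise_db'] // 10)}" for r in records]

class ServiceModule:
    def provide(self, views):
        """Deliver the rendered views to the space user."""
        for line in views:
            print(line)

# Pipeline: public data -> interface -> visualization -> service.
data = PublicDataModule().collect()
relevant = InterfaceModule().filter_for_user(data, interest=50)
ServiceModule().provide(VisualizationModule().render(relevant))
```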

A Study on Developmental Direction of Interface Design for Gesture Recognition Technology

  • Lee, Dong-Min; Lee, Jeong-Ju
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.499-505 / 2012
  • Objective: To study how interaction between mobile devices and users is changing through an analysis of current trends in gesture interface technology. Background: For smooth interaction between machines and users, interface technology has evolved from the command line to the mouse, and "touch" and "gesture recognition" are now being researched and used. In the future, the technology is expected to evolve into "multi-modal" interfaces, which fuse the visual and auditory senses, and "3D multi-modal" interfaces, which use three-dimensional virtual worlds and brain waves. Method: The trends and development of gesture interfaces and related technologies, which follow the evolution of mobile devices, are studied comprehensively. Based on how gesture information is gathered, the interfaces are separated into four categories: sensor-based, touch-based, vision-based, and multi-modal gesture interfaces. Each category is examined through its technology trends and existing examples, and through this the transformation of interaction between mobile devices and humans is studied. Conclusion: Gesture-based interface technology brings intelligent communication to the relationship between existing static machines and users, and it is therefore an important element technology for making human-machine interaction more dynamic. Application: The results of this study may help in developing gesture interface designs currently in use.