• Title/Summary/Keyword: Command Interface

Search Results: 216

A Study on Developmental Direction of Interface Design for Gesture Recognition Technology

  • Lee, Dong-Min;Lee, Jeong-Ju
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.499-505 / 2012
  • Objective: To examine how interaction between mobile devices and users is changing, through an analysis of current trends in gesture interface technology. Background: To support smooth interaction between machines and users, interface technology has evolved from the command line to the mouse, and now touch and gesture recognition are being researched and used. The technology is expected to evolve further into multi-modal interfaces, which fuse the visual and auditory senses, and into 3D multi-modal interfaces that make use of three-dimensional virtual worlds and brain waves. Method: Within the development of the computer interface, which follows the evolution of mobile devices, the trends and development of actively researched gesture interfaces and related technologies are studied comprehensively. Based on how gesture information is gathered, the techniques are divided into four categories: sensor-based, touch-based, vision-based, and multi-modal gesture interfaces. Each category is examined through its technology trends and existing real-world examples. Through this method, the transformation of interaction between mobile devices and humans is studied. Conclusion: Gesture-based interface technology brings intelligent communication to the interaction between previously static machines and their users; it is therefore an important element technology that will make the interaction between humans and machines more dynamic. Application: The results of this study may help in designing the gesture interfaces currently in use.

Design of Gesture based Interfaces for Controlling GUI Applications (GUI 어플리케이션 제어를 위한 제스처 인터페이스 모델 설계)

  • Park, Ki-Chang;Seo, Seong-Chae;Jeong, Seung-Moon;Kang, Im-Cheol;Kim, Byung-Gi
    • The Journal of the Korea Contents Association / v.13 no.1 / pp.55-63 / 2013
  • NUI (Natural User Interfaces) evolved out of CLI (Command Line Interfaces) and GUI (Graphical User Interfaces). NUI uses many different input modalities, including multi-touch, motion tracking, voice, and stylus. To adopt NUI in a legacy GUI application, a developer must add device libraries, modify the relevant source code, and debug it. In this paper, we propose a gesture-based interface model that can be applied without modifying existing event-based GUI applications, and we also present an XML schema for specifying the proposed model. The paper demonstrates the use of the proposed model through a prototype.
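
A minimal sketch of the kind of gesture-to-event mapping such a model implies: a small XML specification is parsed and a recognized gesture is translated into a GUI event for the unmodified application. The element names, attributes, and the `dispatch` stub are illustrative assumptions, not the schema proposed in the paper.

```python
# Illustrative only: parse a hypothetical gesture-mapping XML specification and
# translate a recognized gesture into a GUI event for an unmodified application.
import xml.etree.ElementTree as ET

MAPPING_XML = """
<gestureMap application="LegacyViewer">
    <gesture name="swipe_left"  event="key"   value="Right"/>
    <gesture name="swipe_right" event="key"   value="Left"/>
    <gesture name="push"        event="mouse" value="click"/>
</gestureMap>
"""

def load_mapping(xml_text):
    """Build {gesture_name: (event_type, value)} from the XML specification."""
    root = ET.fromstring(xml_text)
    return {g.get("name"): (g.get("event"), g.get("value"))
            for g in root.findall("gesture")}

def dispatch(gesture, mapping):
    """Map a recognized gesture to a GUI event without modifying the application."""
    event_type, value = mapping.get(gesture, (None, None))
    if event_type is None:
        return
    # A real system would inject a synthetic key press or mouse click through
    # the windowing system; this sketch only reports the intended event.
    print(f"inject {event_type} event: {value}")

mapping = load_mapping(MAPPING_XML)
dispatch("swipe_left", mapping)   # -> inject key event: Right
```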

Analysis on a New Intrinsic Vulnerability to Keyboard Security (PS/2 키보드에서의 RESEND 명령을 이용한 패스워드 유출 취약점 분석)

  • Lee, Kyung-Roul;Yim, Kang-Bin
    • Journal of the Korea Institute of Information Security & Cryptology / v.21 no.3 / pp.177-182 / 2011
  • This paper shows that an attacker can acquire keyboard scan codes by using the RESEND command provided by the keyboard hardware itself, based on the PS/2 interface that is the dominant interface for input devices. A keyboard-sniffing program exploiting the introduced vulnerability was implemented to demonstrate its severity, showing that user passwords can easily be exposed. Because this is an intrinsic vulnerability of existing platforms, which were designed with little consideration for such security problems, a hardware-level countermeasure to the introduced vulnerability needs to be considered.

Interface Modeling for Digital Device Control According to Disability Type in Web

  • Park, Joo Hyun;Lee, Jongwoo;Lim, Soon-Bum
    • Journal of Multimedia Information System / v.7 no.4 / pp.249-256 / 2020
  • Learning methods using various assistive and smart devices have been developed to enable independent learning for people with disabilities. Pointer control is the most important consideration for disabled users when controlling a device and the contents of an existing graphical user interface (GUI) environment; however, using a pointer can be difficult depending on the type of disability. Although the difficulties differ among blind users, users with low vision, and users with upper-limb disabilities, all of them face problems with accurately selecting and executing objects. We present a multimodal interface pilot solution that enables people with various disability types to control web interactions more easily. First, we classify the types of web interaction performed with digital devices and derive the essential interactions among them. Second, to solve the problems that occur when performing these web interactions, we present the technology required for the characteristics of each disability type. Finally, a pilot multimodal interface solution for each disability type is proposed. We identified three disability types and developed a solution for each: a remote-control voice interface for blind users, a voice-output interface applying a selective-focusing technique for users with low vision, and a gaze-tracking and voice-command interface for GUI operation for users with upper-limb disabilities.
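
A schematic sketch of how the essential web interactions might be routed to different modalities per disability type, in the spirit of the pilot solution described above. The interaction names, profile table, and `plan_interface` helper are illustrative assumptions, not the authors' implementation.

```python
# Illustrative routing of essential web interactions to input modalities per
# disability type; the names below are assumptions, not the paper's design.
ESSENTIAL_INTERACTIONS = ["select", "execute", "scroll", "text_input"]

PROFILES = {
    "blind":      {"input": "voice_command",       "output": "voice_feedback"},
    "low_vision": {"input": "voice_command",       "output": "selective_focus_tts"},
    "upper_limb": {"input": "gaze_tracking+voice", "output": "screen"},
}

def plan_interface(disability_type):
    """Return which input modality handles each essential web interaction."""
    profile = PROFILES[disability_type]
    return {action: profile["input"] for action in ESSENTIAL_INTERACTIONS}

print(plan_interface("upper_limb"))
# {'select': 'gaze_tracking+voice', 'execute': 'gaze_tracking+voice', ...}
```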

Implementation of LabVIEW based Testbed for MHA FTSR (LabVIEW 기반의 MHA 명령방식 비행종단수신기 점검장비 구현)

  • Kim, Myung-Hwan;Hwang, Soo-Sul;Lim, You-Cheol;Ma, Keun-Su
    • Aerospace Engineering and Technology / v.13 no.1 / pp.55-62 / 2014
  • The FTSR (Flight Termination System Receiver) is a device that receives a ground command signal to abort a flight mission when abnormal conditions occur in a space launch vehicle. The secure tone command message consists of a series of 11 character tone patterns. Each character is the sum of two tones taken from a set of 7 tones in the audio frequency range defined by the IRIG (Inter-Range Instrumentation Group). The MHA (Modified High Alphabet) command adds a security feature to the secure tone command by using a predefined difference code. To check the function and performance of the MHA FTSR under development for KSLV-II, the testbed must provide RF signal generation, monitoring of the receiver's output ports, RS-422 communication, and test data management. In this paper, we briefly introduce the MHA command and the FTSR interface, and then present the LabVIEW-based testbed, including its hardware configuration, software implementation, and test results.
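
A minimal sketch of the dual-tone character scheme described above: each of the 11 message characters is synthesized as the sum of two tones drawn from a set of seven audio-frequency tones. The frequencies, character duration, and sample rate below are placeholders, not the IRIG-defined values or the testbed's signal chain.

```python
# Placeholder tone set and timings; the real values are defined by IRIG and by
# the FTSR specification, which this sketch does not reproduce.
import numpy as np

FS = 44_100                                      # sample rate [Hz] (assumed)
TONES = [7_350, 8_400, 9_450, 10_500, 11_550, 12_600, 13_650]  # placeholder Hz
CHAR_DURATION = 0.05                             # seconds per character (assumed)

def tone_character(i, j, duration=CHAR_DURATION, fs=FS):
    """One secure-tone character: the sum of tone i and tone j from the set."""
    t = np.arange(int(duration * fs)) / fs
    return np.sin(2 * np.pi * TONES[i] * t) + np.sin(2 * np.pi * TONES[j] * t)

def secure_tone_message(pairs):
    """Concatenate the 11 characters of a command, each a (tone_i, tone_j) pair."""
    assert len(pairs) == 11, "a secure tone command message has 11 characters"
    return np.concatenate([tone_character(i, j) for i, j in pairs])

signal = secure_tone_message([(0, 1)] * 11)      # dummy message for illustration
```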

Development of Autonomous Mobile Robot with Speech Teaching Command Recognition System Based on Hidden Markov Model (HMM을 기반으로 한 자율이동로봇의 음성명령 인식시스템의 개발)

  • Cho, Hyeon-Soo;Park, Min-Gyu;Lee, Hyun-Jeong;Lee, Min-Cheol
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.726-734 / 2007
  • Generally, a mobile robot is moved according to pre-programmed input. However, it is very hard for a non-expert to change the program that generates the robot's moving path, because he or she rarely knows the teaching commands and operating methods needed to drive the robot. A teaching method based on speech commands is therefore increasingly needed, both for people who cannot use their hands and for non-experts without specialist knowledge. In this study, an autonomous mobile robot with speech recognition is developed so that its moving path can be taught easily. Using the human voice as the teaching method provides a more convenient user interface for the mobile robot. To implement the teaching function, the robot system is composed of three separate control modules: a speech preprocessing module, a DC servo motor control module, and a main control module. We design and implement a speaker-dependent isolated-word recognition system for creating the moving path of an autonomous mobile robot in an unknown environment. The system uses word-level Hidden Markov Models (HMM) for the designated command vocabulary used to control the mobile robot, with neural-network postprocessing applied according to a confidence-score criterion. For spectral analysis, a filter-bank analysis model is used to extract voice features. The proposed word recognition system is tested using 33 Korean words for controlling the mobile robot's navigation, and the navigation performance of the mobile robot using only voice commands is also evaluated.
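
A sketch of per-word HMM classification as described above, assuming the `hmmlearn` library: one HMM is trained per command word on filter-bank features, and a test utterance is assigned to the word whose model yields the highest log-likelihood. The feature settings and model sizes are assumptions, and the neural-network postprocessing stage is omitted.

```python
# Sketch of speaker-dependent isolated-word recognition with per-word HMMs.
# Feature front end and model sizes are assumptions; requires numpy and hmmlearn.
import numpy as np
from hmmlearn import hmm

def filterbank_features(signal, n_fft=512, n_bands=20, frame_len=400, hop=160):
    """Log energies of overlapping spectral bands for each analysis frame."""
    frames = [signal[i:i + frame_len] * np.hamming(frame_len)
              for i in range(0, len(signal) - frame_len, hop)]
    spec = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    edges = np.linspace(0, spec.shape[1] - 1, n_bands + 2).astype(int)
    bands = np.stack([spec[:, edges[k]:edges[k + 2]].sum(axis=1)
                      for k in range(n_bands)], axis=1)
    return np.log(bands + 1e-10)

def train_word_models(training_data, n_states=5):
    """training_data: {word: [feature_matrix, ...]} -> {word: fitted HMM}."""
    models = {}
    for word, utterances in training_data.items():
        X = np.vstack(utterances)
        lengths = [len(u) for u in utterances]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag")
        m.fit(X, lengths)
        models[word] = m
    return models

def recognize(features, models):
    """Pick the command word whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda w: models[w].score(features))
```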

Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object- (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.;Choi, D. Y.;Hwang, H.
    • Journal of Biosystems Engineering / v.26 no.1 / pp.21-28 / 2001
  • Major deficiencies of current automation schemes, including the various robots used in bioproduction, are the lack of task adaptability and real-time processing, low performance across diverse tasks, lack of robustness of task results, high system cost, and loss of the operator's trust. This paper proposes a scheme that can overcome these limitations of conventional computer-controlled automatic systems. The proposed scheme is man-machine hybrid automation via tele-operation, which can handle various bioproduction processes, and it comprises two parts: efficient task sharing between the operator and the CCM (computer-controlled machine), and an efficient interface between the operator and the CCM. To realize the proposed concept, the task of identifying an object and extracting its 3D coordinates was selected. 3D coordinate information was obtained through camera calibration, using the camera as a measurement device. Two stereo images were acquired by moving a single camera a fixed distance horizontally, normal to the focal axis, and capturing an image at each location. The transformation matrix for camera calibration was obtained by a least-squares approach using six known pairs of corresponding points in the 2D image and 3D world space. The 3D world coordinates were then obtained from the pixel coordinates in the two camera images using the calibrated transformation matrix. As the interface between the operator and the CCM, a touch-pad screen mounted on the monitor and a remote image-capture system were used. The operator indicated an object by touching its position in the captured image on the touch-pad screen. After the touch, a local image-processing area of a given size was specified, and image processing was performed within that area to extract the desired features of the object. An MS Windows-based interface program was developed using Visual C++ 6.0, with four modules: remote image acquisition, task command, local image processing, and 3D coordinate extraction. The proposed scheme demonstrated the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks.
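
The calibration and 3D-extraction steps described above correspond to a standard direct linear transformation (DLT) followed by two-view triangulation; a generic sketch is given below. The function names and the SVD-based solution are a common formulation, not necessarily the paper's exact least-squares procedure.

```python
# Generic DLT calibration and two-view triangulation with numpy.
import numpy as np

def calibrate_dlt(world_pts, image_pts):
    """Fit P (3x4) so that [u, v, 1] ~ P @ [X, Y, Z, 1]; needs >= 6 point pairs."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)        # least-squares solution, up to scale

def triangulate(P1, P2, uv1, uv2):
    """Recover the 3D point seen at pixel uv1 in view 1 and uv2 in view 2."""
    (u1, v1), (u2, v2) = uv1, uv2
    A = np.stack([u1 * P1[2] - P1[0],
                  v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0],
                  v2 * P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                # back from homogeneous coordinates
```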


LPG/CNG Interface Box Hardware Design (LPG/CNG Interface Box 제품 Hardware 설계)

  • An, Jeong-Hoon;Jung, Jae-Min
    • Transactions of the Korean Society of Automotive Engineers / v.15 no.6 / pp.23-29 / 2007
  • In Korea, the number of LPG vehicles is increasing continuously because LPG is cheaper than gasoline. In Europe, CNG fuel is likewise a good way to meet $CO_2$ regulations. To use LPG/CNG fuel, a new EMS ECU would normally have to be developed for every vehicle type, which requires a huge development cost. To reduce development cost and time, SIEMENS VDO has developed an Interface Box. It works alongside the EMS ECU in the car and manages the LPG/CNG fuel injection system, and it can in principle be used with any kind of EMS ECU. The Interface Box controls the LPG/CNG injectors based on the injection commands of the gasoline EMS ECU: it calculates the required amount of gas fuel from the fuel temperature and pressure and sends a feedback signal to the ECU for fuel correction. It also controls LPG/CNG-specific actuators such as shut-off valves and handles the LPG switch inputs.
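
Illustrative only: one way an interface box could rescale the gasoline injection command into a gas injection time using an ideal-gas density correction for rail pressure and temperature. The constants and the formula are assumptions for the sketch, not the SIEMENS VDO algorithm.

```python
# Assumed conversion of the gasoline injection pulse width into a gas injection
# time, with an ideal-gas density correction; constants are placeholders.
K_FUEL = 1.8            # gasoline-to-gas pulse-width conversion factor (assumed)
P_REF = 200_000.0       # reference gas rail pressure [Pa] (assumed)
T_REF = 293.15          # reference gas temperature [K] (assumed)

def gas_injection_time(t_gasoline_ms, p_gas_pa, t_gas_k):
    """Lower gas density (hot or low-pressure gas) needs a longer opening time
    to deliver the same fuel mass, so the pulse width is scaled accordingly."""
    density_ratio = (p_gas_pa / t_gas_k) / (P_REF / T_REF)   # ideal-gas ratio
    return K_FUEL * t_gasoline_ms / density_ratio

print(gas_injection_time(3.2, 180_000.0, 310.0))  # example pulse width in ms
```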

Development for Java/RTI Test Suite (Java/RTI를 위한 Test Suite 개발)

  • 이정욱;김용주;김영찬
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.05a / pp.749-752 / 2003
  • The HLA is defined by three components: (1) the Rules, (2) the HLA Interface Specification, and (3) the Object Model Template (OMT). The RTI (Run-Time Infrastructure) software implements the Interface Specification, providing services to federates much as a distributed operating system provides services to applications. To verify that an RTI conforms to the standard and that every service has been implemented, DMSO proposes a two-phase test process. In this paper, we discuss the Level One Test Procedures and a method for carrying them out: the test confirms whether the RTI implements the Interface Specification correctly by following the Level One Test Procedures, and a Test Suite is developed for each step to check that the correct command is issued and the expected results occur.
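
A schematic sketch of the data-driven style of testing that the Level One procedures suggest: each step names a service, the command to issue, and the expected result, and the suite records pass/fail. The step contents and the stubbed invoker are illustrative; no real RTI binding is used.

```python
# Data-driven test steps: issue a command, compare the result with the expected
# value, record pass/fail. Service names are hypothetical examples; the invoker
# is a stub standing in for the RTI under test.
def run_suite(steps, invoke):
    """steps: list of (service, args, expected); invoke: callable under test."""
    results = []
    for service, args, expected in steps:
        actual = invoke(service, *args)
        results.append((service, actual == expected, actual))
    return results

STEPS = [
    ("createFederationExecution", ("TestFed", "test.fed"), "OK"),
    ("joinFederationExecution",   ("Tester", "TestFed"),   "OK"),
]

def stub_invoke(service, *args):
    return "OK"                      # stand-in for calling the RTI under test

for name, passed, actual in run_suite(STEPS, stub_invoke):
    print(f"{name}: {'PASS' if passed else 'FAIL'} ({actual})")
```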


On the Development of the Generic CFCS for Engineering Level Simulation of the Surface Ship (공학수준 수상함 지휘무장통제체계 범용 모델 개발방안 연구)

  • Jung, Young-Ran;Han, Woong-Gie;Kim, Cheol-Ho;Kim, Jae-Ick
    • Journal of the Korea Institute of Military Science and Technology / v.14 no.3 / pp.380-387 / 2011
  • In this paper, we consider an authoritative representation of the Command and Fire Control System (CFCS) of a surface ship: an engineering-level model used to develop system specifications and analyze operational concepts during the concept design phase, and to analyze the military requirements, effectiveness, and performance of the system. The engineering-level CFCS model can be used in simulation independently of the surface ship's type, and it is designed with reuse, interoperability, and extensibility in mind. The detailed sub-models, the internal and external data interfaces, the data flow among the sub-models, and the sensor and weapon models of the engineering-level CFCS model are defined. The model was verified via engineering-level simulations following the V&V process.
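
A schematic sketch of how engineering-level sub-models might be composed and wired through simple data interfaces, in the spirit of the CFCS model described above (sensor, track/threat evaluation, weapon assignment). The sub-model names, data fields, and assignment rule are assumptions, not the paper's design.

```python
# Hypothetical sub-model pipeline: sensor contacts -> tracks -> ranked threats
# -> fire-control channel assignment. All names and fields are assumptions.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    range_m: float
    closing: bool

def sensor_model(raw_contacts):
    """External data interface: turn raw sensor contacts into tracks."""
    return [Track(i, r, closing) for i, (r, closing) in enumerate(raw_contacts)]

def threat_evaluation(tracks):
    """Rank closing tracks by range, nearest first."""
    return sorted((t for t in tracks if t.closing), key=lambda t: t.range_m)

def weapon_assignment(threats, n_channels=2):
    """Assign the highest-priority threats to the available fire-control channels."""
    return {t.track_id: f"channel_{k}" for k, t in enumerate(threats[:n_channels])}

tracks = sensor_model([(12_000.0, True), (30_000.0, False), (8_000.0, True)])
print(weapon_assignment(threat_evaluation(tracks)))   # {2: 'channel_0', 0: 'channel_1'}
```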