• Title/Summary/Keyword: hand interface (손 인터페이스)

Designing and Building a Fire Monitoring Web GIS System Using MODIS Image - Using ArcIMS 4.0 - (MODIS 위성영상을 이용한 산불 모니터링 Web GIS 시스템 설계 및 구축 - ArcIMS 4.0을 활용하여 -)

  • Son Jeong-Hoon;Huh Yong;Byun Young-Gi;Yu Ki-Yun;Kim Yong-Il
    • Spatial Information Research / v.14 no.1 s.36 / pp.151-161 / 2006
  • The goal of this paper is to construct a fire-monitoring web GIS system that displays maps produced by fire-detection algorithms applied to MODIS imagery. To design and build a more efficient system, foreign fire-monitoring systems based on satellite imagery were surveyed and analyzed, yielding information about the interfaces and services they provide. Concretely, a new logical DFD is used for process modeling, and the web GIS system is built with ESRI's ArcIMS 4.0 and Microsoft's IIS 5.1. For data input and transfer, a dedicated module that converts a binary image into a vector file is developed so that the raster detection results can be served through the web GIS system.

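The conversion mentioned at the end of this abstract, turning a binary fire-detection raster into a vector file, generalizes well. Below is a minimal sketch of that raster-to-vector step, not the authors' module: it polygonizes a binary MODIS fire mask and writes GeoJSON that any map server could overlay. The file names and the use of rasterio are assumptions for illustration.

```python
import json
import numpy as np
import rasterio
from rasterio import features

# Assumed input: a single-band GeoTIFF where 1 = fire pixel, 0 = background.
with rasterio.open("modis_fire_mask.tif") as src:
    fire_mask = src.read(1).astype(np.uint8)
    transform = src.transform

# Polygonize the fire pixels; each polygon keeps the raster value it came from.
feature_list = [
    {"type": "Feature", "geometry": geom, "properties": {"fire": int(val)}}
    for geom, val in features.shapes(fire_mask, mask=fire_mask == 1, transform=transform)
]

with open("fire_polygons.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": feature_list}, f)
```

GeoJSON is used here only because it is a convenient vector interchange format; the original ArcIMS-era system presumably targeted its own vector file format.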

A Study on the Eye-Hand Coordination for Korean Text Entry Interface Development (한글 문자 입력 인터페이스 개발을 위한 눈-손 Coordination에 대한 연구)

  • Kim, Jung-Hwan;Hong, Seung-Kweon;Myung, Ro-Hae
    • Journal of the Ergonomics Society of Korea / v.26 no.2 / pp.149-155 / 2007
  • Recently, various devices requiring text input, such as mobile phones, IPTV, PDAs, and UMPCs, have emerged, and the frequency of text entry on them is increasing. This study focused on the evaluation of Korean text entry interfaces. Various models for evaluating text entry interfaces have been proposed, most of them based on the human cognitive process for text input. That process is divided into two components: visual scanning and finger movement. The time spent on visual scanning is modeled by the Hick-Hyman law, while the time for finger movement is determined by Fitts' law. Three questions arise in model-based evaluation of a text entry interface. First, do the two cognitive processes (visual scanning and finger movement) occur sequentially during text entry, as the models assume? Second, can real text input time be predicted by the previous models? Third, does the cognitive process for text input vary with the user's text entry speed? A gap was found between measured and predicted text input times, and the gap was larger for participants who entered text quickly. The reason was found by investigating eye-hand coordination during text input: contrary to the assumption that a visual scan of the keyboard is followed by a finger movement, the experienced group performed visual scanning and finger movement simultaneously. Arrival lead time, the interval between the eye fixation on the target button and the button click, was used to measure the extent of overlap between the two processes. In addition, the experienced group used fewer fixations during text entry than the novice group. These results will contribute to improving evaluation models for text entry interfaces.
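The model-based prediction questioned in this abstract is easy to state concretely. The sketch below combines a Hick-Hyman visual-search term and a Fitts' law movement term into a per-keystroke time under the strictly sequential assumption the study challenges; all coefficients are illustrative placeholders, not values from the paper.

```python
import math

def hick_hyman(n_keys, a=0.05, b=0.2):
    """Visual search time (s) over n_keys equally likely alternatives."""
    return a + b * math.log2(n_keys)

def fitts(distance, width, a=0.1, b=0.15):
    """Finger movement time (s), Shannon formulation of Fitts' law."""
    return a + b * math.log2(distance / width + 1)

def predicted_keystroke_time(n_keys, distance, width):
    # Sequential-stage assumption: scan the keyboard first, then move the finger.
    return hick_hyman(n_keys) + fitts(distance, width)

# Example: a 12-key layout, 30 mm travel to a 10 mm key.
print(predicted_keystroke_time(n_keys=12, distance=30.0, width=10.0))
```

For experienced typists the paper finds that the two stages overlap, so a simple sum like this overestimates the real keystroke time.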

User Activity Estimation by Non-intrusively Measurement (무구속적인 측정에 의한 사용자 활동 상태 추정 기법)

  • Baek, Jong-Hun;Yun, Byoung-Ju
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.5 / pp.101-110 / 2009
  • The unconscious and non-intrusive measurement of activity or physiological signals is an important enabling technology for a ubiquitous healthcare environment as well as for related user interfaces. In particular, non-intrusive measurements should be used in an activity monitoring system intended for long-term monitoring. This paper estimates user activity by measuring activity signals with a handheld device equipped with an accelerometer. The user activity estimation system (UAES) presented in this paper makes non-intrusive measurements of activity signals to minimize inconvenience to the user and to make the implementation more practical in real life. Accordingly, a variety of positions in which the handheld device may be carried in daily use is considered, such as the front, hip, and shirt pockets, a backpack, the waist, and the hand.
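As a rough illustration of what an accelerometer-based activity estimator like UAES has to do, the sketch below windows a 3-axis signal, computes an orientation-insensitive energy feature (useful because the carrying position varies), and maps it to a coarse activity state. The window length, thresholds, and labels are guesses for illustration, not the paper's algorithm.

```python
import numpy as np

def activity_from_window(acc_xyz, fs=50):
    """acc_xyz: (N, 3) array of accelerations in g for one analysis window."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)   # independent of carrying orientation
    dyn = magnitude - magnitude.mean()            # remove the gravity component
    energy = np.mean(dyn ** 2)
    if energy < 0.01:
        return "rest"
    elif energy < 0.5:
        return "walking"
    return "running"

# Example: 2 s of synthetic data at 50 Hz, device roughly upright.
window = np.random.normal(loc=[0.0, 0.0, 1.0], scale=0.3, size=(100, 3))
print(activity_from_window(window))
```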

Design of a Background Image Based Multi-Degree-of-Freedom Pointing Device (배경영상 기반 다자유도 포인팅 디바이스의 설계)

  • Jang, Suk-Yoon;Kho, Jae-Won
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.133-141 / 2008
  • As interactive multimedia has come into wide use, user interfaces such as remote controllers or conventional computer mice show limitations that cause inconvenience. We propose a vision-based pointing device to resolve this problem. We analyze the moving image from a camera embedded in the pointing device and estimate the movement of the device; the pose of the cursor is then determined from this result. For real-time processing, a low-resolution 288×208 pixel camera is used, and the corner points of the screen are tracked with a local optical flow method. The distance between the screen and the device is calculated from the size of the screen in the image. The proposed device has a simple configuration, low cost, easy use, and intuitive handheld operation like a traditional mouse. Moreover, it shows reliable performance even in dark conditions.
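The corner-tracking loop described in this abstract can be sketched with standard tools. The snippet below is only an approximation of the paper's method: it finds a few corner points in a low-resolution frame and follows them with Lucas-Kanade optical flow, taking the average corner motion as the cursor displacement. The camera index, the use of generic image corners instead of the detected screen corners, and the omission of the distance estimate are simplifications.

```python
import cv2

cap = cv2.VideoCapture(0)                         # assumed camera index
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 288)            # low resolution, as in the paper
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 208)

ok, frame = cap.read()
if not ok:
    raise RuntimeError("camera not available")
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
# Pick up to four strong corners (stand-ins for the screen corners).
points = cv2.goodFeaturesToTrack(prev_gray, 4, 0.3, 20)

while True:
    ok, frame = cap.read()
    if not ok or points is None or len(points) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    good_new, good_old = new_points[status == 1], points[status == 1]
    if len(good_new):
        dx, dy = (good_new - good_old).mean(axis=0)   # average corner motion -> cursor delta
        print(f"cursor delta: {dx:.1f}, {dy:.1f}")
    prev_gray, points = gray, good_new.reshape(-1, 1, 2)
```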

Implementation of Paper Keyboard Piano with a Kinect (키넥트를 이용한 종이건반 피아노 구현 연구)

  • Lee, Jung-Chul;Kim, Min-Seong
    • Journal of the Korea Society of Computer and Information / v.17 no.12 / pp.219-228 / 2012
  • In this paper, we propose a paper keyboard piano implemented by detecting finger movement with 3D image data from a Kinect. The keyboard pattern and keyboard depth information are extracted from the color and depth images to detect touch events on the paper keyboard and to identify the touched key. Hand region detection errors are unavoidable when simply comparing the input depth image with the background depth image, and these errors are critical for key touch detection, so skin color is used to minimize them. Fingertips are then detected using contour detection with an area limit and a convex hull. Finally, the key touch decision is made using the keyboard pattern information at the fingertip position. Experimental results show that the proposed method detects key touches with high accuracy. The paper keyboard piano can serve as an easy and convenient interface for beginners learning to play the piano with PC-based learning software.
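The fingertip-detection stage described above (skin-color mask, contours with an area limit, convex hull) can be illustrated as follows. This is a minimal sketch, not the paper's implementation: the HSV skin range and area threshold are assumptions, and the Kinect depth test that decides the actual key touch is omitted.

```python
import cv2

def find_fingertip(bgr_image, min_area=2000):
    """Return the (x, y) of the topmost convex-hull point of the largest skin blob."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 255, 255))      # rough skin-color range
    contours, _ = cv2.findContours(skin, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) > min_area]
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand)
    tip = tuple(hull[hull[:, :, 1].argmin()][0])               # topmost hull point
    return tip

# Usage: tip = find_fingertip(frame); then look up the key pattern at that pixel.
```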

Posture Recognition for a Bi-directional Participatory TV Program based on Face Color Region and Motion Map (시청자 참여형 양방향 TV 방송을 위한 얼굴색 영역 및 모션맵 기반 포스처 인식)

  • Hwang, Sunhee;Lim, Kwangyong;Lee, Suwoong;Yoo, Hoyoung;Byun, Hyeran
    • KIISE Transactions on Computing Practices / v.21 no.8 / pp.549-554 / 2015
  • As intuitive hardware interfaces continue to be developed, recognizing the user's posture has become more important. An efficient alternative to adding expensive sensors is to implement a computer vision system. This paper proposes a method for recognizing a user's posture in a live, bi-directional participatory TV broadcast. The proposed method first estimates the position of the user's hands by generating a facial color map and a motion map. The posture is then recognized by computing the relative position of the face and the hands. The method achieved 90% accuracy in an experiment recognizing three defined postures during the live broadcast, even when the input images contained a complex background.
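The final decision step, recognizing a posture from the relative position of face and hands, might look like the sketch below. The three labels and the region boundaries are illustrative assumptions; the paper defines its own posture set and uses a facial color map combined with a motion map to locate the hands.

```python
def classify_posture(face_box, left_hand, right_hand):
    """face_box: (x, y, w, h); hands: (x, y) image coordinates (y grows downward)."""
    fx, fy, fw, fh = face_box

    def region(hand):
        if hand[1] < fy:
            return "above_face"
        if hand[1] < fy + fh:
            return "beside_face"
        return "below_face"

    l, r = region(left_hand), region(right_hand)
    if l == r == "above_face":
        return "both hands raised"
    if "above_face" in (l, r):
        return "one hand raised"
    return "hands down"

print(classify_posture((100, 80, 60, 60), (90, 40), (200, 300)))  # -> "one hand raised"
```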

Annotation Repositioning Methods in XML Documents (XML문서에서 어노테이션의 위치재생성 기법)

  • Sohn Won-Sung;Kim Jae-Kyung;Ko Myeong-Cheol;Lim Soon-Bum;Choy Yoon-Chul
    • Journal of KIISE: Software and Applications / v.32 no.7 / pp.650-662 / 2005
  • A robust repositioning method is required so that annotations always maintain proper positions when the original document is modified. Robust anchoring in an XML document yields better results when it uses the features of the structured document as well as the annotated text. This paper proposes a robust annotation anchoring method for XML documents. To this end, annotation information is represented as a logical structure tree, and candidate anchors are created by analyzing the matching relations between the annotation and document trees. To select the appropriate anchor among the many candidates, several anchoring criteria based on the textual and label context of anchor nodes in the logical structure tree are presented. As a result, robust anchoring is achieved even after various modifications of the content of the structured document.
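The selection among candidate anchors can be pictured as scoring each node by how well its label and textual context match the context stored with the annotation. The sketch below is a toy version of that idea; the similarity measure and the 0.7/0.3 weighting are assumptions, not the paper's anchoring criteria.

```python
import difflib
import xml.etree.ElementTree as ET

def best_anchor(doc_root, anno_tag, anno_text):
    """Return the element whose tag and text best match the stored annotation context."""
    best, best_score = None, -1.0
    for elem in doc_root.iter():
        text = (elem.text or "").strip()
        text_sim = difflib.SequenceMatcher(None, anno_text, text).ratio()
        label_sim = 1.0 if elem.tag == anno_tag else 0.0
        score = 0.7 * text_sim + 0.3 * label_sim      # assumed weighting
        if score > best_score:
            best, best_score = elem, score
    return best, best_score

root = ET.fromstring("<doc><sec><p>robust anchoring of notes</p><p>other text</p></sec></doc>")
elem, score = best_anchor(root, "p", "robust anchoring")
print(elem.tag, round(score, 2))
```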

A Study on Improved SMETA System and Applying Encryption Function (개선된 SMETA 시스템과 암호화적용에 관한 연구)

  • Hwang, In-Moon;Yoo, Nam-Hyun;Son, Cheol-Su;Kim, Won-Jung
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.5 / pp.849-856 / 2008
  • As XML is used as the standard format for information delivery and exchange in various fields, SVG is frequently used as a user interface or presentation format for embedded systems such as wireless-internet-based mobile phones. An SVG file must contain additional information specifying the structure of the document, so it consumes more transmission time than the actual data would require. The SMETA (Svg transmission MEthod using Semantic meTAdata) system[9] reduces the size of the transmitted SVG file by partitioning it into minimal parts and assigning meaningful metadata to each part. In this paper, instead of the metadata exchange method used to reduce the transmitted file size in the existing SMETA system, we study an improved SMETA system in which the server analyzes the data for each user and transmits only the data that the user needs. In addition, our simulation verifies that it provides better performance than the existing system even when encryption is applied.
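One way to picture the improved scheme, in which the server sends only the fragments a particular user still needs and encrypts the payload, is sketched below. Fernet is used purely as a stand-in symmetric cipher and the fragment/metadata layout is assumed; the paper does not publish its encryption scheme or data structures.

```python
from cryptography.fernet import Fernet

def build_update(server_fragments, client_metadata, key):
    """server_fragments: {fragment_id: svg_text}; client_metadata: ids the client already has."""
    cipher = Fernet(key)
    needed = {fid: svg for fid, svg in server_fragments.items() if fid not in client_metadata}
    payload = "\n".join(needed.values()).encode("utf-8")
    return set(needed), cipher.encrypt(payload)     # ids to record + encrypted SVG delta

key = Fernet.generate_key()
fragments = {"header": "<g id='header'>...</g>", "menu": "<g id='menu'>...</g>"}
ids, token = build_update(fragments, client_metadata={"header"}, key=key)
print(ids, len(token))                              # only the 'menu' fragment is sent
```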

A Study of Efficient Transmission of SVG File using SMETA (SMETA를 이용한 효과적인 SVG 파일 전송에 관한 연구)

  • Yoo, Nam-Hyun;Son, Cheol-Su;Kim, Won-Jung
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.1 / pp.14-19 / 2007
  • As XML has become the standard format for information representation and exchange in various fields, many companies have begun to use SVG as a user interface or presentation format for embedded systems such as wireless-internet-based mobile phones. Because an SVG file carries much additional information to preserve the structure of the document beyond the actual data, its transfer time is costly relative to the amount of data actually transmitted. To solve this problem, many compression-based approaches for embedded systems have been studied. This paper proposes SMETA, which can be used together with existing compression-based approaches. SMETA divides an SVG file into parts to which meaning can be assigned and attaches semantic metadata to each part without altering the SVG structure. Before the whole SVG file is transmitted, SMETA reduces the size of the transmitted file by sending only those parts whose metadata do not match, or are not yet listed, on the embedded system. As the size of the transmitted SVG file decreases, the transfer time is shortened accordingly.
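A toy version of the SMETA partitioning idea is sketched below: top-level groups of an SVG document are treated as fragments, each tagged with metadata (here simply an id and a content hash), and only fragments whose metadata the receiver has not listed are sent. The metadata format is an assumption; the paper defines its own semantic metadata.

```python
import hashlib
import xml.etree.ElementTree as ET

def fragment_svg(svg_text):
    """Split the SVG into top-level child fragments keyed by (id, content hash)."""
    root = ET.fromstring(svg_text)
    fragments = {}
    for child in root:
        frag = ET.tostring(child, encoding="unicode")
        key = (child.get("id", child.tag), hashlib.md5(frag.encode()).hexdigest())
        fragments[key] = frag
    return fragments

def delta(sender_fragments, receiver_metadata):
    """Return only the fragments whose metadata the receiver has not listed."""
    return {k: v for k, v in sender_fragments.items() if k not in receiver_metadata}

svg = "<svg xmlns='http://www.w3.org/2000/svg'><g id='icons'/><g id='labels'/></svg>"
frags = fragment_svg(svg)
print(delta(frags, receiver_metadata=set()))        # first transfer sends everything
```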

Object Detection and Optical Character Recognition for Mobile-based Air Writing (모바일 기반 Air Writing을 위한 객체 탐지 및 광학 문자 인식 방법)

  • Kim, Tae-Il;Ko, Young-Jin;Kim, Tae-Young
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.5 / pp.53-63 / 2019
  • To provide a hand gesture interface through deep learning in mobile environments, lightweight networks are essential to achieve high recognition rates without degrading execution speed. This paper proposes a method for real-time recognition of characters written in the air with a finger on mobile devices, using a lightweight deep learning model. Based on SSD (Single Shot Detector) with MobileNet as the feature extractor, the method detects the index finger and generates a text image by following the fingertip path. The image is then sent to a server, where the characters are recognized by a trained OCR model. To verify the method, 12 users wrote 1,000 words on a GALAXY S10+; the finger writing was recognized with an average accuracy of 88.6%, and the recognized text was displayed within 124 ms, showing that the method can be used in real time. The results of this research can be used to send simple text messages, memos, and air signatures with a finger in mobile environments.
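The client-side part of this pipeline, following the fingertip from frame to frame and rasterizing the path into an image for OCR, can be sketched as below. The detector is passed in as a placeholder because the trained MobileNet-SSD and OCR models are not published with the abstract; the dummy detector in the usage example merely traces a diagonal stroke.

```python
import numpy as np
import cv2

def track_fingertip_path(frames, detect_finger, canvas_size=(256, 256)):
    """Draw the fingertip trajectory onto a blank canvas for later OCR."""
    canvas = np.zeros(canvas_size, dtype=np.uint8)
    prev = None
    for frame in frames:
        tip = detect_finger(frame)              # expected to return (x, y) or None
        if tip is not None and prev is not None:
            cv2.line(canvas, prev, tip, color=255, thickness=4)
        prev = tip
    return canvas                               # this image would be sent to the OCR server

# Usage with a dummy detector that traces a diagonal stroke over 20 "frames":
canvas = track_fingertip_path(range(20), lambda i: (10 + 5 * i, 10 + 5 * i))
print(canvas.sum() > 0)                         # True: a stroke was drawn
```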