• Title/Summary/Keyword: Mouse Interface

Design of a Color Graphic Monitoring System (Development of Software for Process Monitoring)

  • 가민호;임준홍;조영조;김광배
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1992.10a / pp.364-369 / 1992
  • This paper addresses the design and implementation of the Operator Terminal, a user interface system that displays the various data representing process status as graphics on a monitor, so that the overall operation and control status of a continuously automated process can be supervised. The Operator Terminal is built around an IBM-PC AT-class motherboard with a VGA board and color monitor as output devices, a mouse and function keyboard as input devices, and an RS232C serial port for communication. Programs are stored on the HDD of the host computer. The graphics editing and monitoring software is implemented under the Korean-language MS Windows environment.
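
For context, the data path this abstract describes (process-status records arriving over the RS232C link and rendered graphically) can be sketched roughly as below. This is a minimal illustration using the pyserial package; the port name, baud rate, and comma-separated record format are assumptions, not details from the paper.

```python
# Minimal sketch of an operator-terminal polling loop (assumed record format).
# Requires: pip install pyserial
import serial

PORT = "COM1"        # assumed RS232C port name
BAUD = 9600          # assumed baud rate

def render(tag: str, value: float) -> None:
    """Draw a crude text 'gauge' for one process variable."""
    bar = "#" * int(min(max(value, 0.0), 100.0) / 5)
    print(f"{tag:10s} |{bar:<20s}| {value:6.1f}")

def main() -> None:
    with serial.Serial(PORT, BAUD, timeout=1.0) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue                  # timeout, poll again
            # Assumed record format: "TAG,VALUE", e.g. "TEMP1,73.2"
            try:
                tag, raw = line.split(",", 1)
                render(tag, float(raw))
            except ValueError:
                continue                  # skip malformed records

if __name__ == "__main__":
    main()
```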

Short-Circuit Calculation Using Two-Port Network (A Study on Fault Calculation Using a Four-Terminal Network)

  • 김주용;이재용;백영식
    • The Transactions of the Korean Institute of Electrical Engineers / v.43 no.4 / pp.533-542 / 1994
  • This paper presents a new algorithm for fault analysis that obtains the terms required for the analysis using the two-port network technique. As a result, the fault calculation consists of only a few fundamental arithmetic operations. The graphical environment for fault analysis is implemented as a mouse-oriented user interface with windows and pull-down menus. The results of the algorithm proved identical to those for the sample system in Ref. [8], and the package can therefore be a useful tool for fault analysis.
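
For orientation, the quantity such a package computes can be sketched with the standard symmetrical-fault relations (generic textbook formulas, not equations taken from the paper): for a three-phase fault through impedance $Z_f$ at bus $k$,

$$I_f = \frac{V_k^{(0)}}{Z_{kk} + Z_f}, \qquad V_i^{(f)} = V_i^{(0)} - \frac{Z_{ik}}{Z_{kk} + Z_f}\,V_k^{(0)},$$

where $V_i^{(0)}$ is the prefault voltage at bus $i$ and $Z_{ik}$, $Z_{kk}$ are bus-impedance-matrix elements. Once those elements are available, the fault current and post-fault voltages indeed reduce to a few arithmetic operations.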

Fault Analysis Using Two-Port Network (Fault Analysis Using a Four-Terminal Network)

  • Kim, Jo-Yong;Baek, Young-Sik
    • Proceedings of the KIEE Conference / 1993.07a / pp.124-127 / 1993
  • This paper presents a new algorithm for fault analysis and a fault analysis package that implements it. The algorithm obtains the terms required for the analysis using the two-port network technique, so the fault calculation time is minimized because the ${Y_{BUS}}^{-1}$ computation is eliminated. In addition, the graphical environment for fault analysis is implemented as a mouse-oriented user interface with windows and pull-down menus, making the package a useful tool for fault analysis.
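
The claim that the ${Y_{BUS}}^{-1}$ computation can be skipped is consistent with the standard observation (again generic fault-analysis practice, not the paper's own derivation) that only the $k$-th column of $Z_{BUS}$ is needed for a fault at bus $k$:

$$Y_{BUS}\,z_k = e_k \quad\Longrightarrow\quad z_k = \left(Z_{1k},\,Z_{2k},\,\dots,\,Z_{nk}\right)^{T},$$

so a single sparse solve with the factors of $Y_{BUS}$ supplies every $Z_{ik}$ required above, without forming the full inverse.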

Implementation of a Non-Wearable Air-Finger Mouse by Infrared Diffused Illumination (Implementation of a Non-Wearable, In-Air Finger Mouse by Infrared Diffused Illumination)

  • Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.15 no.2 / pp.167-173 / 2015
  • Extraction of finger-end points is one of the most important steps in supporting user multi-commands in hand-gesture interface technology. However, most previous works extract finger-end points with geometric and morphological methods. This paper therefore proposes a finger-end point extraction method motivated by infrared diffused illumination, the technique used for user commands in multi-touch display devices. The proposed air-mouse is operated by the number and moving direction of the extracted finger-end points. The system also provides the basic mouse events as well as a continuous-command function for extending user multi-gestures. To evaluate the performance of the proposed method, it was applied to a web browser application as a command device. As a result, the proposed method showed an average success rate of 90% for the various user commands.
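
A minimal sketch of how finger-end points might be extracted from a diffused-illumination infrared frame with OpenCV is shown below; the brightness threshold, blob-size filter, and the choice of the topmost convex-hull point as the finger end are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: candidate finger-end points from an 8-bit infrared frame.
import cv2
import numpy as np

def finger_end_points(ir_frame: np.ndarray, thresh: int = 60) -> list[tuple[int, int]]:
    """Return candidate finger-end points as (x, y) pixel coordinates."""
    # Assumption: fingers show up as bright blobs under diffused IR illumination.
    _, mask = cv2.threshold(ir_frame, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    tips = []
    for contour in contours:
        if cv2.contourArea(contour) < 100:              # drop small noise blobs
            continue
        hull = cv2.convexHull(contour)                  # shape (N, 1, 2)
        x, y = min(hull[:, 0, :], key=lambda p: p[1])   # topmost hull point
        tips.append((int(x), int(y)))
    return tips
```

The number of returned points and their frame-to-frame displacement would then drive the "quantity state and moving direction" logic mentioned in the abstract.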

A Study on Positioning of Mouse Cursor Using Kinect Depth Camera (A Study on Selecting the Mouse Cursor Position Using a Kinect Depth Camera)

  • Goo, Bong-Hoe;Lee, Seung-Ho
    • Journal of IKEEE / v.18 no.4 / pp.478-484 / 2014
  • In this paper, we propose a new algorithm for positioning the mouse cursor using the fingertip direction obtained from a Kinect depth camera. The proposed algorithm uses the center-of-palm point derived from a distance transform when the fingertip points toward the screen; otherwise, it uses the fingertip points. After image preprocessing, the center-of-palm point is calculated from the distance transform results. When the finger is directed toward the camera, the fingertip point and the center-of-palm point become close to each other, so positioning accuracy can be improved by using the center-of-palm point. After the arm is removed from the image, the fingertip point is obtained as the pixel farthest from the center of the image. To measure the accuracy of mouse positioning, we selected five arbitrary points and computed the error rate between the reference points and the mouse points over 500 trials. The results confirm that the proposed algorithm achieves an average error rate of less than 11%.
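
The two measurements the abstract relies on, the palm center from a distance transform and the fingertip as the farthest hand pixel, can be sketched with OpenCV as below. The binary hand mask is assumed to come from depth thresholding and forearm removal, which are not shown; whether the fingertip reference is the image center (as the abstract states) or the palm center is left as a caller choice.

```python
# Sketch: palm-center and fingertip points from a binary (uint8) hand mask.
import cv2
import numpy as np

def palm_center(hand_mask: np.ndarray) -> tuple[int, int]:
    """Palm center = mask pixel with the largest distance to the hand boundary."""
    dist = cv2.distanceTransform(hand_mask, cv2.DIST_L2, 5)
    _, _, _, max_loc = cv2.minMaxLoc(dist)        # max_loc is (x, y)
    return max_loc

def farthest_hand_pixel(hand_mask: np.ndarray, ref: tuple[int, int]) -> tuple[int, int]:
    """Fingertip candidate = hand pixel farthest from the reference point."""
    ys, xs = np.nonzero(hand_mask)
    d2 = (xs - ref[0]) ** 2 + (ys - ref[1]) ** 2
    i = int(np.argmax(d2))
    return int(xs[i]), int(ys[i])
```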

An alternative method for smartphone input using AR markers

  • Kang, Yuna;Han, Soonhung
    • Journal of Computational Design and Engineering / v.1 no.3 / pp.153-160 / 2014
  • As smartphones have recently come into wide use, they have become increasingly popular not only among young people but among the middle-aged as well. Most smartphones adopt a capacitive full touch screen, so touch commands are made with the fingers, unlike the PDAs of the past, which used touch pens. In this case, a significant portion of the smartphone's screen is blocked by the finger, so the area around the point being touched cannot be seen; this makes precise input difficult. To solve this problem, this research proposes a method of using simple AR markers to improve the smartphone interface. A marker is placed in front of the smartphone camera, and the camera image of the marker is analyzed to determine the marker's position, which is taken as the position of the mouse cursor. This method supports the click, double-click, and drag-and-drop operations used on PCs as well as the touch, slide, and long-touch inputs used on smartphones. With this approach, smartphone input becomes more precise and simple, showing the potential of a new kind of smartphone interface.
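
A minimal sketch of the marker-to-cursor mapping idea is given below, using OpenCV's ArUco module as a stand-in for the paper's AR markers; the marker dictionary, camera index, screen resolution, and the simple linear image-to-screen mapping are all assumptions for illustration.

```python
# Sketch: map a detected ArUco marker's image position to a screen cursor position.
import cv2

SCREEN_W, SCREEN_H = 1080, 1920          # assumed portrait screen resolution

def cursor_from_frame(frame, detector):
    """Return an (x, y) cursor position, or None if no marker is visible."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None or len(corners) == 0:
        return None
    c = corners[0][0]                    # 4 corner points of the first marker
    cx, cy = c[:, 0].mean(), c[:, 1].mean()
    h, w = gray.shape
    # Simple linear mapping from camera image to screen coordinates (assumed).
    return int(cx / w * SCREEN_W), int(cy / h * SCREEN_H)

if __name__ == "__main__":
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    cap = cv2.VideoCapture(0)            # assumed camera index
    ok, frame = cap.read()
    if ok:
        print(cursor_from_frame(frame, detector))
    cap.release()
```

Click, double-click, and drag events would then be synthesized on top of this position stream; the paper's own event scheme is not reproduced here.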

A Character Animation Tool Based on Motion Mapping (A Motion-Mapping-Based Character Animation Development Tool)

  • Lee, Minguen;Lee, Myeong Won
    • Journal of the Korea Computer Graphics Society / v.5 no.2 / pp.43-52 / 1999
  • In this paper, we present an animation toolkit based on a motion mapping technique, with a graphical user interface that represents the data structures needed to generate character motions. Motion mapping means that an animation sequence, once generated, can be mapped directly to another object according to a data structure in the graphical user interface. Users can generate animation sequences interactively with a mouse; the sequences are obtained automatically by interactively modifying the motion data structures. Compared with conventional tools, the toolkit differs in that the two hierarchical structures needed to represent modeling data and animation data are managed independently of each other, and in that the generated animations can be applied to any other characters by connecting the two hierarchies in the user interface.
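
As a rough sketch of what motion mapping between two independently managed hierarchies can look like, the snippet below re-keys per-joint rotation curves from a source skeleton to a target skeleton through an explicit joint-name correspondence; the data layout and the joint names are illustrative assumptions, not the toolkit's actual structures.

```python
# Sketch: mapping an animation sequence generated for one character onto another.

# A clip maps a joint name to its per-frame Euler rotations (degrees).
Clip = dict[str, list[tuple[float, float, float]]]

def map_motion(clip: Clip, joint_map: dict[str, str]) -> Clip:
    """Re-key a clip from source-hierarchy joint names to target-hierarchy names.

    Joints without a correspondence in joint_map are dropped, so one clip can
    drive characters whose hierarchies differ in detail.
    """
    return {joint_map[src]: frames
            for src, frames in clip.items()
            if src in joint_map}

# Hypothetical correspondence between two independently managed hierarchies.
joint_map = {"Bip_Spine": "spine_01", "Bip_L_Arm": "upperarm_l"}
walk_clip: Clip = {
    "Bip_Spine": [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)],
    "Bip_L_Arm": [(0.0, 10.0, 0.0), (0.0, 12.0, 0.0)],
}
retargeted = map_motion(walk_clip, joint_map)   # now keyed by the target skeleton
```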

Implementing a User Interface for Loom Management with Spatial Aggregate Query Functions (Implementation of a Loom Management User Interface with Spatial Aggregate Query Functions)

  • Jeon, Il-Soo
    • Journal of the Korean Association of Geographic Information Studies / v.6 no.1 / pp.37-47 / 2003
  • In this paper, a component representing a loom was designed for display in a window, and a user interface was implemented that connects to a database and processes various queries. The implemented system provides aggregate query processing for the loom components lying in an area selected with the mouse, and it also supports higher-level query processing presented as charts and pivot tables, so it can serve as a decision support system. The proposed system can detect transient or persistent problems in the looms; it can therefore help textile companies raise productivity and reduce costs by responding to such situations promptly.
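
The spatial aggregate the abstract describes, summarizing the loom components that fall inside a mouse-selected rectangle, can be sketched as below; the Loom fields and the reported statistics are illustrative assumptions, not the system's actual schema.

```python
# Sketch: aggregate statistics over loom components inside a selected rectangle.
from dataclasses import dataclass

@dataclass
class Loom:
    loom_id: str
    x: float            # position of the loom component in the window
    y: float
    running: bool
    efficiency: float   # assumed 0..100 (%)

def aggregate_in_rect(looms: list[Loom],
                      x0: float, y0: float, x1: float, y1: float) -> dict:
    """Summarize the looms whose components lie in the mouse-selected rectangle."""
    xmin, xmax = sorted((x0, x1))
    ymin, ymax = sorted((y0, y1))
    selected = [l for l in looms if xmin <= l.x <= xmax and ymin <= l.y <= ymax]
    if not selected:
        return {"count": 0}
    return {
        "count": len(selected),
        "running": sum(l.running for l in selected),
        "avg_efficiency": sum(l.efficiency for l in selected) / len(selected),
    }
```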

Differential expression of the metastasis suppressor KAI1 in decidual cells and trophoblast giant cells at the feto-maternal interface

  • Koo, Tae Bon;Han, Min-Su;Tadashi, Yamashita;Seong, Won Joon;Choi, Je-Yong
    • BMB Reports / v.46 no.10 / pp.507-512 / 2013
  • Invasion of trophoblasts into maternal uterine tissue is essential for establishing mature feto-maternal circulation. The trophoblast invasion associated with placentation is similar to tumor invasion. In this study, we investigated the role of KAI1, an anti-metastasis factor, at the maternal-fetal interface during placentation. Mouse embryos were obtained from gestational days 5.5 (E5.5) to E13.5. Immunohistochemical analysis revealed that KAI1 was expressed on decidual cells around the track made when a fertilized ovum invaded the endometrium, at days E5.5 and E7.5, and on trophoblast giant cells, along the central maternal artery of the placenta at E9.5. KAI1 in trophoblast giant cells was increased at E11.5, and then decreased at E13.5. Furthermore, KAI1 was upregulated during the forskolin-mediated trophoblastic differentiation of BeWo cells. Collectively, these results indicate that KAI1 is differentially expressed in decidual cells and trophoblasts at the maternal-fetal interface, suggesting that KAI1 prevents trophoblast invasion during placentation.

Recognition-Based Gesture Spotting for Video Game Interface (Recognition-Based Gesture Segmentation for a Video Game Interface)

  • Han, Eun-Jung;Kang, Hyun;Jung, Kee-Chul
    • Journal of Korea Multimedia Society / v.8 no.9 / pp.1177-1186 / 2005
  • In vision-based interfaces for video games, gestures are used as game commands instead of key presses or mouse clicks. Such interfaces must tolerate unintentional movements and continuous gestures to give the user a more natural experience. To address this problem, this paper proposes a novel gesture spotting method that combines spotting with recognition: it recognizes meaningful movements while concurrently separating unintentional movements from a given image sequence. We applied the method to recognizing upper-body gestures for interfacing between a video game (Quake II) and its user. Experimental results show that the proposed method spots gestures from continuous movement with an average rate of 93.36%, confirming its potential for a gesture-based interface for computer games.
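
A minimal sketch of recognition-based spotting in the sense the abstract describes: each sliding window of frames is scored by a gesture classifier, and a segment is reported only when some gesture class clearly beats a "non-gesture" class. The classifier interface, window size, and margin are assumptions for illustration, not the paper's model.

```python
# Sketch: spotting gestures in a continuous frame stream via per-window recognition.
from typing import Callable, Sequence

# classify(window) -> dict of class name -> probability; must include "non_gesture".
Classifier = Callable[[Sequence], dict[str, float]]

def spot_gestures(frames: Sequence, classify: Classifier,
                  win: int = 15, step: int = 5, margin: float = 0.3):
    """Yield (start_frame, end_frame, label) for windows recognized as gestures."""
    for start in range(0, max(len(frames) - win + 1, 0), step):
        window = frames[start:start + win]
        probs = classify(window)
        label, p = max(probs.items(), key=lambda kv: kv[1])
        # Report only confident, meaningful movements; everything else is
        # treated as unintentional and silently skipped.
        if label != "non_gesture" and p - probs.get("non_gesture", 0.0) >= margin:
            yield (start, start + win, label)
```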
