• Title/Summary/Keyword: Mouse Control (마우스 제어)

Search Results: 158

Untouched Camera Mouse System for The Disabled Person's Assistant Device (뇌병변 장애인 보조기기용 노터치 카메라 마우스 시스템)

  • Bae, Ki-Tae
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.9 / pp.3465-3471 / 2010
  • In this paper, we propose a camera-based assistive device system that can perform general mouse functions for the physically disabled. Most disabled people use special hardware and rely largely on foreign-made assistive devices, and the usage rate of such devices has been very low because of their high price. To solve this problem, we propose an effective web-camera-based system that performs general mouse functions and improves accessibility for the physically disabled. In the proposed method, we first attach a color marker to an arbitrary device or body part, capture the color information with the camera, extract the coordinate of the color marker, and finally map the processed coordinate to a specific event (keyboard, mouse) to control various applications. Real field tests show that the proposed method compares favorably with other techniques in both price and performance.
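The marker-tracking stage described above can be sketched in a few lines; this is a minimal illustration assuming plain per-channel RGB thresholding in NumPy (the paper's actual color model and event mapping are not specified in the abstract):

```python
import numpy as np

def marker_centroid(frame, lower, upper):
    """Return the (x, y) centroid of pixels whose color lies inside
    [lower, upper] per channel, or None if nothing matches.
    frame: H x W x 3 uint8 array, e.g. one webcam frame."""
    mask = np.all((frame >= lower) & (frame <= upper), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

def to_screen(pt, cam_size, screen_size):
    """Linearly scale a camera-space coordinate to screen space."""
    (cw, ch), (sw, sh) = cam_size, screen_size
    return pt[0] * sw / cw, pt[1] * sh / ch
```

In a live system the frame would come from a webcam capture loop, and the mapped coordinate would be handed to an OS-level mouse-event API.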

Non-contact Input Method based on Face Recognition and Pyautogui Mouse Control (얼굴 인식과 Pyautogui 마우스 제어 기반의 비접촉식 입력 기법)

  • Park, Sung-jin;Shin, Ye-eun;Lee, Byung-joon;Oh, Ha-young
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.9 / pp.1279-1292 / 2022
  • This study proposes a non-contact input method based on face recognition and Pyautogui mouse control, a system that can help users who have difficulty operating input devices such as a conventional mouse due to physical impairment. It includes features that make web surfing more convenient, particularly screen zoom and scrolling, and it also addresses the eye fatigue that has been cited as a limitation of existing non-contact input systems. In addition, various settings can be adjusted to account for individual physical differences and Internet usage habits. Furthermore, no high-performance CPU or GPU environment is required, nor are separate tracker devices or high-performance cameras. Through this work, we aim to contribute to barrier-free access by increasing web accessibility for the disabled and the elderly who find it difficult to use web content.
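The cursor-control idea can be illustrated with a small mapping function. The dead zone and gain here are assumed tunables standing in for the paper's user-adjustable settings, not its actual parameters:

```python
def cursor_delta(nose, center, dead_zone=0.05, gain=800):
    """Map a normalized face-landmark offset (e.g. the nose tip) from a
    calibrated center position to a relative mouse move in pixels.
    Offsets inside the dead zone yield no movement, suppressing jitter
    from small involuntary head motions (an assumed design choice)."""
    dx, dy = nose[0] - center[0], nose[1] - center[1]

    def axis(v):
        if abs(v) < dead_zone:
            return 0
        sign = 1 if v > 0 else -1
        return round((v - sign * dead_zone) * gain)

    return axis(dx), axis(dy)
```

In a live loop, each frame's landmark would be read from a face-recognition model and the result fed to `pyautogui.moveRel(*cursor_delta(...))`.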

Interactive laser pointing mouse system (인터랙티브 레이져 포인팅 마우스 시스템)

  • Park, Min-Sun
    • Journal of the Korea Computer Industry Society / v.6 no.5 / pp.697-714 / 2005
  • In this paper, we built a Windows-based interactive presentation system using a laser pointer as a mouse. The system lets a speaker control the presentation interactively by means of a laser pointer. During the presentation, a display PC generates a bitmap of the presentation on its local display; the bitmap is transmitted to the projector and projected onto the screen. The presentation is controlled by monitoring the laser spots that are also projected onto the screen. Laser-spot control is achieved through a control system: when the processing section matches the successive laser-spot positions against a pre-established spatial pattern, the corresponding display command is issued. The display command is then transmitted to the display computer, which responds with an action such as a mouse function.
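The two stages above, spot detection and pattern matching, can be sketched as follows. The brightness threshold and the stroke-to-command rules are hypothetical illustrations; the paper's actual spatial patterns are not given in the abstract:

```python
import numpy as np

def laser_spot(gray, threshold=240):
    """Return (x, y) of the brightest pixel in a grayscale camera view
    of the screen, or None if nothing exceeds the threshold (i.e. no
    laser spot is visible)."""
    idx = np.argmax(gray)
    y, x = divmod(int(idx), gray.shape[1])
    if gray[y, x] < threshold:
        return None
    return x, y

def classify_stroke(points):
    """Map a short trail of successive spot positions to a display
    command by its net horizontal motion (assumed pattern rules)."""
    dx = points[-1][0] - points[0][0]
    if dx > 50:
        return "next_slide"
    if dx < -50:
        return "prev_slide"
    return "click"
```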


A Development of the Next-generation Interface System Based on the Finger Gesture Recognizing in Use of Image Process Techniques (영상처리를 이용한 지화인식 기반의 차세대 인터페이스 시스템 개발)

  • Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.15 no.4 / pp.935-942 / 2011
  • This study aims to design and implement a finger-gesture recognition system that automatically recognizes finger gestures input through a camera and controls the computer. Common CCD cameras were modified into infrared cameras to acquire the images. The recorded images are pre-processed to find hand features, the finger gestures are recognized accordingly, and an event is raised for subsequent mouse control and presentation, suggesting a new way to control computers. The finger-gesture recognition system presented in this study has been verified as a next-generation interface that can replace the mouse and keyboard in future information devices.

Real Time System Realization for Binocular Eyeball Tracking Mouse (실시간 쌍안구 추적 마우스 시스템 구현에 관한 연구)

  • Ryu, Kwang-Ryol;Choi, Duck-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.10 no.9 / pp.1671-1678 / 2006
  • A real-time binocular eye-tracking mouse system for a computer monitor 30-40 cm away is presented in this paper. To locate the eyes and track the cursor, a facial image is acquired by a small CCD camera and converted into a binary image; the two eyes are located with a five-region mask method around the eye area, and each iris is found with a four-point diagonal positioning method at its sides. The tracking cursor is matched by measuring the central moving position of the irises. Cursor control is achieved by comparing the two related distances, between the maximum iris movement and the cursor movement, to calculate the moving distance from the gazing position to the screen. The experimental results show that the binocular eyeball mouse system is simple and fast enough to run in real time.
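A simplified linear reading of the iris-to-cursor calibration described above might look like this; the resting position and movement range would come from a per-user calibration step (both are assumptions here):

```python
def cursor_from_iris(iris_now, iris_rest, iris_range, screen_size):
    """Map the iris displacement from its calibrated resting position
    to an absolute screen coordinate by linear scaling, clamped to the
    screen. iris_range: maximum iris travel (px) per axis, measured
    during calibration (an assumed scheme, simplified from the paper)."""
    sw, sh = screen_size
    nx = (iris_now[0] - iris_rest[0]) / iris_range[0] + 0.5
    ny = (iris_now[1] - iris_rest[1]) / iris_range[1] + 0.5
    clamp = lambda v: min(max(v, 0.0), 1.0)
    return int(clamp(nx) * (sw - 1)), int(clamp(ny) * (sh - 1))
```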

NUI/NUX of the Virtual Monitor Concept using the Concentration Indicator and the User's Physical Features (사용자의 신체적 특징과 뇌파 집중 지수를 이용한 가상 모니터 개념의 NUI/NUX)

  • Jeon, Chang-hyun;Ahn, So-young;Shin, Dong-il;Shin, Dong-kyoo
    • Journal of Internet Computing and Services / v.16 no.6 / pp.11-21 / 2015
  • As interest in Human-Computer Interaction (HCI) grows, research on HCI has been actively conducted, along with research on the Natural User Interface/Natural User eXperience (NUI/NUX) that uses a user's gestures and voice. NUI/NUX requires recognition algorithms such as gesture or voice recognition, but these algorithms have a weakness: their implementation is complex and training takes a long time, because they must go through steps including preprocessing, normalization, and feature extraction. Recently, Microsoft launched Kinect as an NUI/NUX development tool, which attracted attention, and studies using Kinect have been conducted. In a previous study, the authors implemented a hand-mouse interface with outstanding intuitiveness using the physical features of the user. However, it had weaknesses such as unnatural mouse movement and low accuracy of the mouse functions. In this study, we designed and implemented a hand-mouse interface that introduces a new concept called the 'virtual monitor', extracting the user's physical features through Kinect in real time. The virtual monitor is a virtual space that can be controlled by the hand mouse; a coordinate on the virtual monitor can be accurately mapped onto a coordinate on the real monitor. The hand-mouse interface based on the virtual-monitor concept keeps the outstanding intuitiveness of the previous study while enhancing the accuracy of the mouse functions. Further, we increased the accuracy of the interface by recognizing the user's unintended actions using a concentration indicator derived from electroencephalogram (EEG) data. To evaluate intuitiveness and accuracy, we tested the interface on 50 people ranging in age from their 10s to their 50s. In the intuitiveness experiment, 84% of subjects learned how to use it within 1 minute. In the accuracy experiment, the mouse functions achieved drag 80.4%, click 80%, and double-click 76.7%. With the intuitiveness and accuracy of the proposed hand-mouse interface confirmed through experiment, it is expected to be a good example of an interface for controlling systems by hand in the future.
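The virtual-monitor mapping combined with the EEG concentration gate can be sketched as one function. The rectangle placement and the concentration threshold are assumed values for illustration, not the paper's calibrated parameters:

```python
def hand_mouse(hand, vm_origin, vm_size, screen, concentration, c_min=0.6):
    """Map a Kinect hand position inside the user's virtual-monitor
    rectangle onto real-monitor coordinates. When the EEG concentration
    indicator falls below c_min, the motion is treated as unintended
    and ignored (c_min is an assumed threshold)."""
    if concentration < c_min:
        return None
    u = (hand[0] - vm_origin[0]) / vm_size[0]
    v = (hand[1] - vm_origin[1]) / vm_size[1]
    u = min(max(u, 0.0), 1.0)
    v = min(max(v, 0.0), 1.0)
    return int(u * (screen[0] - 1)), int(v * (screen[1] - 1))
```

Because the virtual rectangle is sized to each user's reach, the same arm motion maps to the full screen regardless of body size, which is the intuition behind the accuracy gain the abstract reports.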

An EMG-based Input Interface Technology for the Tetraplegic and Its Applications (사지마비 장애인을 위한 근전도 기반 입력 인터페이스 기술 및 그 응용)

  • Jeong, Hyuk;Kim, Jong-Sung;Son, Wook-Ho;Kim, Young-Hoon
    • Journal of the HCI Society of Korea / v.1 no.2 / pp.9-17 / 2006
  • We propose an EMG-based input interface technology that helps tetraplegic users operate a mouse, keyboard, and powered wheelchair. Among the actions available to tetraplegics for operating these devices, teeth clenching is chosen as the input action. By clenching the left, right, or both sides of the teeth, and by controlling the clench duration, several input commands can be issued. The EMG signals generated by teeth clenching are acquired at the left and right temples and used as control sources for the devices. We developed signal-acquisition hardware, signal-processing algorithms, and prototype systems for powered-wheelchair control, mouse control, and game control. Experimental results with tetraplegic users show that the proposed method is useful for operating these devices.
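The left/right/both-plus-duration command scheme can be sketched as a small decoder over the RMS amplitude of the two temple channels. The thresholds and the command names are illustrative, not the paper's calibrated values:

```python
def clench_command(rms_left, rms_right, duration, on=0.3, long=0.5):
    """Decode a teeth-clench into an input command from the RMS of the
    left and right temple EMG channels (normalized units) and the
    clench duration in seconds. Thresholds are assumed examples."""
    left, right = rms_left > on, rms_right > on
    if left and right:
        return "drag" if duration > long else "double_click"
    if left:
        return "left_click"
    if right:
        return "right_click"
    return None
```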


Development of an EMG-based computer interface for the physically handicapped (지체장애인을 위한 근전도기반의 컴퓨터 인터페이스 개발)

  • Choi, Chang-Mok;Han, Hyon-Young;Ha, Sung-Do;Kim, Jung
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.222-227 / 2007
  • In this paper, we developed an interface that allows the physically disabled to use a computer through available peripheral nerve signals. Electromyogram (EMG) signals were acquired from four sites on the forearm during wrist movements, and a multilayer perceptron neural network was used to infer the user's intent. Through this, the user can control the mouse cursor, perform mouse-button clicks, and enter text into the computer through a user interface, such as a mobile-phone keypad, shown on a visual display. Additionally, we evaluated the usability of the interface using Fitts' law and verified its effectiveness by comparing it with previous studies.
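The inference step can be sketched as one forward pass of a small multilayer perceptron over the 4-channel forearm-EMG feature vector. The layer shapes and the motion labels are assumptions; in practice the weights would come from offline training on recorded wrist movements:

```python
import numpy as np

def mlp_predict(x, W1, b1, W2, b2):
    """One forward pass of a two-layer perceptron: tanh hidden layer,
    linear output, argmax class. x: 4-channel EMG feature vector."""
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    return int(np.argmax(logits))

# Hypothetical class labels for the decoded wrist motions.
MOTIONS = ["cursor_up", "cursor_down", "cursor_left", "cursor_right", "click"]
```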


The implementation of the wireless tablet system using GalaxyNote device (갤럭시노트 디바이스를 이용한 무선 태블릿 시스템의 구현)

  • Yoon, Dong-June;Choi, Byeong-Yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.05a / pp.447-450 / 2014
  • In this paper, an efficient design of a wireless tablet system for a PC using a GalaxyNote device is proposed. The designed portable tablet consists of the GalaxyNote device, a stylus pen, and a Bluetooth-to-serial converter. To transmit the coordinates of the stylus pen on the GalaxyNote device to the PC, the wireless tablet uses Bluetooth communication. A custom mouse filter driver splits each received coordinate into its x and y components, then uses the converted coordinates to control the position of the mouse pointer while Windows application programs are running.
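The receive-side coordinate handling can be illustrated as follows. The `b"x,y\n"` packet framing is an assumption for illustration; the abstract does not specify the converter's actual protocol:

```python
def parse_packet(packet):
    """Split one received coordinate packet of the assumed form
    b'x,y\n' into integer pen coordinates."""
    x, y = packet.strip().split(b",")
    return int(x), int(y)

def tablet_to_screen(pt, tablet_size, screen_size):
    """Scale a pen coordinate from tablet resolution to PC screen
    resolution, as the mouse filter driver would before moving the
    pointer."""
    return (pt[0] * screen_size[0] // tablet_size[0],
            pt[1] * screen_size[1] // tablet_size[1])
```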


Motor Imagery Brain Signal Analysis for EEG-based Mouse Control (뇌전도 기반 마우스 제어를 위한 동작 상상 뇌 신호 분석)

  • Lee, Kyeong-Yeon;Lee, Tae-Hoon;Lee, Sang-Yoon
    • Korean Journal of Cognitive Science / v.21 no.2 / pp.309-338 / 2010
  • In this paper, we study the brain-computer interface (BCI). BCIs help severely disabled people control external devices by analyzing brain signals evoked by motor imagery. Findings in neurophysiology have revealed that the power of the beta (14-26 Hz) and mu (8-12 Hz) rhythms decreases or increases with the synchrony of the underlying neuronal populations in the sensorimotor cortex when people imagine moving their body parts; these phenomena are called Event-Related Desynchronization and Synchronization (ERD/ERS), respectively. We implemented a BCI-based mouse interface that enables subjects to move a computer mouse cursor in four directions (up, down, left, and right) by analyzing brain-signal patterns online. Tongue, foot, left-hand, and right-hand motor imageries were used to stimulate the brain. We used non-invasive EEG, which records the brain's spontaneous electrical activity over short periods through electrodes placed on the scalp. Because of the nature of EEG signals, i.e., low amplitude and vulnerability to artifacts and noise, it is hard to analyze and classify them directly. To overcome these obstacles, we applied statistical machine-learning techniques. We achieved high classification performance on the four motor imageries by employing Common Spatial Patterns (CSP) and Linear Discriminant Analysis (LDA), which transform the input EEG signals into a new coordinate system that maximizes the variance between different motor-imagery signals for easier classification. Inspection of the resulting topographies also confirmed that ERD/ERS appeared in different brain areas for different motor imageries, in correspondence with anatomical and neurophysiological knowledge.
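The mu/beta band power that ERD/ERS analysis rests on can be computed with a simple FFT periodogram; this is a simplified stand-in for the feature-extraction front end of a CSP+LDA pipeline, not the paper's full method:

```python
import numpy as np

def bandpower(sig, fs, lo, hi):
    """Mean periodogram power of one EEG channel in the [lo, hi] Hz
    band. The mu (8-12 Hz) and beta (14-26 Hz) bands are the ones
    whose power drops (ERD) or rises (ERS) during motor imagery."""
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    psd = np.abs(np.fft.rfft(sig)) ** 2 / sig.size
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()
```

In a full pipeline, CSP would first spatially filter the multi-channel EEG to maximize variance differences between imagery classes, and LDA would then classify the log band-power features.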
