• Title/Summary/Keyword: Wearable Input Device


A MEMS-Based Finger-Wearable Computer Input Device (MEMS 기반 손가락 착용형 컴퓨터 입력장치)

  • Kim, Chang-su;Jung, Se-hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.20 no.6
    • /
    • pp.1103-1108
    • /
    • 2016
  • With the development of various sensor technologies, the use of motion-recognition devices such as users' smartphones and console game machines is increasing, and user demand for motion-recognition-based input devices is growing accordingly. Existing motion-recognition mice mount modified mouse buttons on the outside of the device to serve as the left button, right button, and wheel. Because such mice are manufactured in a small form factor, the buttons are difficult to operate; moreover, the motion-recognition technology is applied only to pointing the cursor, which limits its use. In this paper, a MEMS-based motion-recognition sensor is used to recognize the motion of two points on the body (the thumb and forefinger) and to generate motion data and a corresponding control signal; we then study a computer input device that transmits the generated control signal to the computer wirelessly.
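The abstract above describes turning two-point (thumb and forefinger) motion data into mouse control signals. As a rough sketch of that idea (gesture names, thresholds, and the table contents are illustrative assumptions, not taken from the paper), the mapping could look like this:

```python
# Hypothetical sketch: map thumb/forefinger gesture states to mouse events.
# The gesture labels and threshold are assumptions for illustration only.

MATCHING_TABLE = {
    ("tap", "rest"): "LEFT_CLICK",    # thumb tap
    ("rest", "tap"): "RIGHT_CLICK",   # forefinger tap
    ("move", "rest"): "CURSOR_MOVE",
    ("rest", "rest"): None,           # no event
}

def classify(accel_mag, moving, tap_threshold=1.5):
    """Classify one sensor reading (assumed units of g) into a gesture state."""
    if accel_mag > tap_threshold:
        return "tap"
    return "move" if moving else "rest"

def to_mouse_event(thumb, forefinger):
    """Look up the pair of gesture states in the matching table."""
    return MATCHING_TABLE.get((thumb, forefinger))

event = to_mouse_event(classify(2.0, False), classify(0.1, False))
```

The control signal corresponding to `event` would then be transmitted to the host wirelessly.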

A Study of a MEMS-Based Finger-Wearable Computer Input Device (MEMS 기반 손가락 착용형 컴퓨터 입력장치에 관한 연구)

  • Kim, Chang-su;Jung, Se-hyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.05a
    • /
    • pp.791-793
    • /
    • 2016
  • With the development of various sensor technologies, ordinary users increasingly encounter motion-recognition devices such as smartphones and console game machines (e.g., the Nintendo Wii), and user demand for motion-recognition-based input devices tends to grow. Existing motion-recognition mice mount modified mouse buttons on the outside of the device to serve as the left button, right button, and wheel, while an internal acceleration sensor (or gyro sensor) moves the mouse cursor. Because they are manufactured in a compact form, the buttons are difficult to operate, and since the motion-recognition technology is applied only to pointing the cursor, its use is limited. Therefore, in this paper, a MEMS-based motion recognition sensor is used to recognize the motion of two points on the human body (the thumb and forefinger) and generate motion data; on this basis, the data are compared against a predetermined matching table (cursor movement and mouse-button events) to determine and generate a control signal, and we study a computer input device that transmits the generated control signal wirelessly.


Introducing Depth Camera for Spatial Interaction in Augmented Reality (증강현실 기반의 공간 상호작용을 위한 깊이 카메라 적용)

  • Yun, Kyung-Dahm;Woo, Woon-Tack
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.62-67
    • /
    • 2009
  • Many interaction methods for augmented reality have attempted to reduce the difficulty of tracking interaction subjects, either by allowing only a limited set of three-dimensional inputs or by relying on auxiliary devices such as data gloves and paddles with fiducial markers. We propose Spatial Interaction (SPINT), a noncontact passive method that observes the occupancy state of the spaces around target virtual objects to interpret user input. A depth-sensing camera is introduced to construct virtual space sensors and then manipulate the augmented space for interaction. The proposed method does not require any wearable device for tracking user input and allows versatile interaction types. The depth-perception anomaly caused by incorrect occlusion between real and virtual objects is also minimized for more precise interaction. Exhibits of dynamic contents such as the Miniature AR System (MINARS) could benefit from this fluid 3D user interface.

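The "virtual space sensor" described above amounts to checking whether observed depth points occupy a region around a virtual object. A minimal sketch, assuming axis-aligned box sensors and an illustrative point-count threshold (neither is from the paper):

```python
# Hypothetical sketch of a virtual "space sensor": an axis-aligned box around a
# virtual object is considered occupied when enough depth-camera points fall
# inside it. Box bounds, point format, and threshold are illustrative.

def in_box(point, lo, hi):
    """True when a 3D point lies within the box [lo, hi] on every axis."""
    return all(l <= p <= h for p, l, h in zip(point, lo, hi))

def occupied(points, lo, hi, min_points=3):
    """Return True when at least min_points points lie inside the sensor box."""
    return sum(in_box(p, lo, hi) for p in points) >= min_points

pts = [(0.1, 0.2, 1.0), (0.1, 0.25, 1.05), (0.12, 0.22, 1.1), (2.0, 2.0, 2.0)]
state = occupied(pts, lo=(0.0, 0.0, 0.9), hi=(0.3, 0.3, 1.2))
```

Transitions of `state` over time would then be interpreted as user input, with no wearable tracker required.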

A Method to Separate Respiration and Pulse Signals from BCG Sensing Data for Companion Animals

  • Kwak, Ho-Young;Chang, Jin-Wook;Kim, Soo Kyun;Song, Woo Jin;Yun, Young-Min
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.2
    • /
    • pp.163-170
    • /
    • 2022
  • As the number of families living with companion animals increases, so does the demand for information about the animals' health status, and with it the need for a method to measure companion animals' respiration and pulse. Considering that companion animals are covered in hair, we measure respiration and pulse signals using BCG (ballistocardiography) rather than electrode-attachment ECG. Because the BCG method produces a single signal in which the respiration and pulse signals are mixed, the respiration waveform and the pulse waveform must be separated from that one signal. In this paper, a wearable device for BCG measurement was implemented to acquire the signal, and a method for separating the signal from the BCG wearable device into a respiration signal and a pulse signal is proposed.
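The abstract does not specify the separation algorithm; a common baseline (offered here only as an illustrative sketch, not the paper's method) is to split the mixed signal by frequency, since respiration is much slower than the pulse:

```python
# Hedged sketch: split a mixed BCG signal into a slow (respiration-like)
# component and a fast (pulse-like) residual using a moving-average low-pass.
# The window size is an assumption; real systems would use proper filters.

def moving_average(signal, window):
    """Low-pass the signal with a centered moving average (partial windows at edges)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        seg = signal[max(0, i - half):i + half + 1]
        out.append(sum(seg) / len(seg))
    return out

def separate_bcg(signal, window=25):
    """Respiration ~= slow component; pulse ~= residual after removing it."""
    respiration = moving_average(signal, window)
    pulse = [s - r for s, r in zip(signal, respiration)]
    return respiration, pulse
```

By construction the two components sum back to the original signal, so no information is lost in the split.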

Deep Learning-Based Companion Animal Abnormal Behavior Detection Service Using Image and Sensor Data

  • Lee, JI-Hoon;Shin, Min-Chan;Park, Jun-Hee;Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.10
    • /
    • pp.1-9
    • /
    • 2022
  • In this paper, we propose a deep-learning-based abnormal-behavior detection service for companion animals that uses video and sensor data. With the recent increase in households with companion animals, the pet-tech industry employing artificial intelligence is growing within the existing food- and medical-oriented companion-animal market. In this study, companion-animal behavior was classified and abnormal behavior was detected with a deep-learning model using various data, for AI-driven health management of companion animals. Video data and sensor data of companion animals are collected using CCTV and a purpose-built pet wearable device, and serve as input data for the model. The video data were processed by combining the YOLO (You Only Look Once) model, which detects companion-animal objects, with DeepLabCut, which extracts joint coordinates for behavior classification. To process the sensor data, a GAT (Graph Attention Network), which can identify the correlations and characteristics of each sensor, was used.

A Study on the Click Recognition Improvement of a Wearable Input Device (착용형 입력장치에서의 클릭 인식성능 향상방법에 관한 연구)

  • Soh, Byung-Seok;Kim, Yoon-Sang;Lee, Sang-Goog
    • Proceedings of the KIEE Conference
    • /
    • 2004.07d
    • /
    • pp.2553-2555
    • /
    • 2004
  • This paper proposes a method to improve the click-recognition performance of a wearable input device (SCURRY™), developed to serve as a keyboard and mouse using the movements of the user's fingers and hand, where click recognition detects the finger movements. The fabricated wearable input device consists of finger units equipped with accelerometers for click detection, a back-of-the-hand unit equipped with angular-rate sensors for detecting hand movement, and an on-screen virtual keyboard. The proposed click-detection method consists of a feature-extraction stage that identifies the user's intention to click, a valid-click recognition stage that determines which finger performed the click motion, and a crosstalk-removal stage that eliminates unintended click signals caused by hand movement. Experiments confirmed that the proposed method improves the click-recognition performance of the wearable input device.

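The three stages described in this abstract (feature extraction, valid-click recognition, crosstalk removal) can be sketched as a single decision function. The thresholds and signal shapes below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of the three click-detection stages: feature extraction
# (peak finger acceleration), valid-click recognition (which finger clicked),
# and crosstalk removal (reject clicks coinciding with large hand motion).

CLICK_THRESHOLD = 1.8    # finger accelerometer peak (assumed units of g)
HAND_MOTION_LIMIT = 0.5  # gyro magnitude above which spikes count as crosstalk

def detect_click(finger_peaks, hand_gyro_mag):
    """Return the index of the clicking finger, or None if no valid click."""
    if hand_gyro_mag > HAND_MOTION_LIMIT:
        return None  # crosstalk removal: hand motion caused the spike
    best = max(range(len(finger_peaks)), key=lambda i: finger_peaks[i])
    if finger_peaks[best] < CLICK_THRESHOLD:
        return None  # feature extraction: no finger exceeded the click threshold
    return best      # valid-click recognition: strongest finger wins
```

Suppressing click candidates during large hand motion is what removes the unintended clicks the abstract attributes to crosstalk.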

Customized Resource Collaboration System based on Ontology and User Model in Resource Sharing Environments

  • Park, Jong-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.23 no.4
    • /
    • pp.107-114
    • /
    • 2018
  • Recently, various wearable personal devices such as smart watches have been developed, and these personal devices are being miniaturized. Users want to receive new services from personal devices, as well as the services they have received from personal computers, anytime and anywhere. However, miniaturization imposes resource constraints such as limited input and output and insufficient power. To overcome these constraints, this paper proposes a resource collaboration system that provides a service by composing sharable resources in a resource-sharing environment such as the IoT. The paper also proposes a method to infer and recommend user-customized resources among the various sharable resources. For this purpose, the paper defines an ontology for resource inference, classifies users' behavior types based on a user model, and then uses them for resource recommendation. The proposed method is implemented as a prototype system on a resource-constrained personal device developed for resource collaboration, and its effectiveness is shown by evaluating user satisfaction.

A study on the wearable device input system using the membrane potentiometer (선형 위치 센서를 활용한 웨어러블 디바이스 입력 시스템에 대한 연구)

  • Kim, Jaeyoung;Bianchi, Andrea
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2015.10a
    • /
    • pp.1813-1815
    • /
    • 2015
  • In recent years, as the growth of the smartphone, the main trend of the smart-device market, has slowed, wearable devices have attracted attention as the market's new trend. However, attempts to apply the touch-based input methods of existing smart devices to wearable devices have run into the fat-finger problem and the occlusion problem because of the devices' structural differences. To solve the problems that arise when touch-based input is applied to wearable devices, this paper proposes a new input system that uses a membrane potentiometer (a linear position sensor).
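A membrane potentiometer reports a one-dimensional position along a strip, which sidesteps the fat-finger and occlusion problems of a touchscreen. A minimal sketch of reading such a sensor and deriving swipe events (ADC resolution, thresholds, and event names are assumptions for illustration):

```python
# Hedged sketch: normalize a membrane-potentiometer ADC reading to a position
# along the strip, and turn position changes into swipe events.

ADC_MAX = 1023  # 10-bit ADC assumed

def to_position(adc_value):
    """Normalize a raw ADC reading to a 0.0-1.0 position along the strip."""
    return max(0.0, min(1.0, adc_value / ADC_MAX))

def swipe_event(prev_pos, cur_pos, threshold=0.15):
    """Emit a swipe when the finger moved far enough along the strip."""
    delta = cur_pos - prev_pos
    if delta > threshold:
        return "SWIPE_FORWARD"
    if delta < -threshold:
        return "SWIPE_BACKWARD"
    return None
```

Because the finger slides beside the display rather than on it, the output is never occluded by the finger itself.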

CNN-Based Hand Gesture Recognition for Wearable Applications (웨어러블 응용을 위한 CNN 기반 손 제스처 인식)

  • Moon, Hyeon-Chul;Yang, Anna;Kim, Jae-Gon
    • Journal of Broadcast Engineering
    • /
    • v.23 no.2
    • /
    • pp.246-252
    • /
    • 2018
  • Hand gestures are attracting attention as a NUI (Natural User Interface) for wearable devices such as smart glasses. Recently, to support efficient media consumption in IoT (Internet of Things) and wearable environments, the standardization of IoMT (Internet of Media Things) is in progress in MPEG. IoMT assumes that hand-gesture detection and recognition are performed on separate devices, and thus provides an interoperable interface between these modules. Meanwhile, deep-learning-based hand-gesture recognition techniques have recently been actively studied to improve recognition performance. In this paper, we propose a CNN (Convolutional Neural Network)-based hand-gesture recognition method for applications such as media consumption on wearable devices, one of the IoMT use cases. The proposed method detects the hand contour from stereo images acquired by smart glasses using depth and color information, constructs data sets to train the CNN, and then recognizes gestures from input hand-contour images. Experimental results show that the proposed method achieves an average hand-gesture recognition rate of 95%.
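The core building block a CNN applies to the hand-contour images is the 2-D convolution. A plain-Python sketch of that single operation (valid padding, stride 1, ReLU), purely for illustration; a real recognizer such as the one above stacks many such layers in a deep-learning framework:

```python
# Hedged sketch of one 2-D convolution + ReLU, the basic CNN operation.
# The kernel and tiny input are illustrative, not from the paper.

def conv2d(image, kernel):
    """Valid-padding, stride-1 2D convolution followed by ReLU."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(max(0.0, acc))  # ReLU activation
        out.append(row)
    return out

edge_kernel = [[-1, 1]]  # responds to rising horizontal intensity changes
feature_map = conv2d([[0, 0, 1, 1], [0, 0, 1, 1]], edge_kernel)
```

On a contour image, kernels like this respond to edges; learned kernels in deeper layers respond to gesture-specific shapes.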

Design of a pen-shaped input device using the low-cost inertial measurement units (저가격 관성 센서를 이용한 펜 형 입력 장치의 개발)

  • Chang, Wook;Kang, Kyoung-Ho;Choi, Eun-Seok;Bang, Won-Chul;Potanin, Alexy;Kim, Dong-Yoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.2
    • /
    • pp.247-258
    • /
    • 2003
  • In this paper, we present a pen-shaped input device equipped with accelerometers and gyroscopes that measure inertial movements as a user writes in two- or three-dimensional space. The gyroscope measurements are integrated once to find the attitude of the system and are used to compensate for the gravitational component in the accelerations. The compensated accelerations are then integrated twice to yield the position of the system, a concept that stems from the field of inertial navigation. However, the accuracy of the position measurement deteriorates significantly over time because of the integrations involved in recovering the handwriting trajectory. This problem is common in inertial navigation systems and is usually solved by periodic or aperiodic calibration of the system against external reference sources or other information. In this paper, the calibration of position and velocity is performed both on-line and off-line. In the on-line calibration stage, a complementary-filter technique is used, in which a Kalman filter plays an important role. In the off-line calibration stage, the constant component of the resulting navigational error is removed using velocity information and a motion-detection algorithm. The effectiveness and feasibility of the presented system are shown through experimental results.
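The gravity-compensate-then-double-integrate pipeline described above can be sketched in one dimension. This toy assumes the sensor axis stays vertical, so no attitude update from the gyros is needed, and it omits the on-line/off-line drift correction the paper adds; it only illustrates why drift grows through the integrations:

```python
# Hedged 1-D sketch of the inertial pipeline: subtract gravity from the raw
# accelerometer reading, then integrate twice (Euler steps) to get position.
# Real strapdown navigation also needs attitude from the gyros and drift
# correction (e.g., a Kalman filter); both are omitted here.

G = 9.81  # gravity, m/s^2

def integrate_trajectory(raw_accels, dt):
    """Double-integrate gravity-compensated accelerations."""
    velocity, position = 0.0, 0.0
    positions = []
    for a in raw_accels:
        linear = a - G             # gravity compensation (vertical axis assumed)
        velocity += linear * dt    # first integration: velocity
        position += velocity * dt  # second integration: position
        positions.append(position)
    return positions

# Constant 1 m/s^2 of true acceleration for 1 s (100 samples at 100 Hz)
traj = integrate_trajectory([G + 1.0] * 100, dt=0.01)
```

Any small bias left after gravity compensation is integrated twice and grows quadratically in time, which is exactly the error the paper's on-line and off-line calibration stages target.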