Mouse Interface

Search results: 187

Music Exploring Interface using Emotional Model (감성모델을 이용한 음악 탐색 인터페이스)

  • Yoo, Min-Joon;Kim, Hyun-Ju;Lee, In-Kwon
    • 한국HCI학회 학술대회논문집 (Proceedings of the HCI Society of Korea Conference) / 2009.02a / pp.707-710 / 2009
  • In this paper, we introduce an interface for exploring music using an emotional model. First, we survey the arousal-valence (AV) factors of various pieces of music and calculate the correlation between the audio features of the music and their AV factors to build an AV model. The music is then aligned and arranged using the AV model, and the user can explore it through this interface. To let the user select the desired music more intuitively, we introduce a new fade-in/out function based on the location of the user's mouse pointer. We also offer several modes of selecting music, so the user can explore music in whichever mode of the interface suits best. With our interface, the user can find emotionally desired music more easily.
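
To make the fade-in/out idea concrete, the sketch below maps the mouse pointer's position in the arousal-valence plane to per-track volumes, so nearby tracks fade in and distant ones fade out. The track coordinates, the exponential falloff constant, and all names are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical tracks placed at (arousal, valence) coordinates predicted by
# an AV model; the coordinates below are made up for illustration.
TRACKS = {
    "track_a": (0.8, 0.6),    # energetic, positive
    "track_b": (-0.4, 0.3),   # calm, mildly positive
    "track_c": (-0.7, -0.5),  # calm, negative
}

def fade_volumes(pointer_av, tracks=TRACKS, falloff=1.5):
    """Map the pointer's AV position to a volume per track.

    Tracks near the pointer fade in and distant tracks fade out, so moving
    the mouse "scrubs" through the emotional plane.
    """
    volumes = {}
    for name, (a, v) in tracks.items():
        dist = math.hypot(pointer_av[0] - a, pointer_av[1] - v)
        volumes[name] = math.exp(-falloff * dist)  # smooth decay with distance
    return volumes

print(fade_volumes((0.7, 0.5)))  # track_a is loudest near high arousal/valence
```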

NUI/NUX framework based on intuitive hand motion (직관적인 핸드 모션에 기반한 NUI/NUX 프레임워크)

  • Lee, Gwanghyung;Shin, Dongkyoo;Shin, Dongil
    • Journal of Internet Computing and Services / v.15 no.3 / pp.11-19 / 2014
  • The natural user interface/experience (NUI/NUX) provides a natural motion interface without devices or tools such as mice, keyboards, pens, and markers. Until now, typical motion recognition methods have used markers, receiving the coordinates of each marker as relative input and storing each coordinate value in a database. To recognize motion accurately, however, more markers are needed, and attaching them and processing the data takes considerable time. Moreover, because NUI/NUX frameworks have been developed without the most important quality, intuitiveness, usability problems arise and users are forced to learn the conventions of many different frameworks. To compensate for these problems, in this paper we avoid markers altogether and implement the system so that anyone can operate it. We also design a multi-modal NUI/NUX framework that controls voice, body motion, and facial expression simultaneously, and we propose a new mouse-operation algorithm that recognizes intuitive hand gestures and maps them onto the monitor, so that the user can handle this "hand mouse" easily and intuitively.
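
The hand-to-monitor mapping at the heart of such a "hand mouse" can be sketched in a few lines: normalized hand coordinates from the camera are scaled to screen pixels and low-pass filtered to suppress jitter. The smoothing constant, screen size, and function names are assumptions for illustration, not the paper's algorithm.

```python
# Minimal "hand mouse" sketch: map a normalized hand position detected in the
# camera frame onto monitor coordinates, with exponential smoothing.
SCREEN_W, SCREEN_H = 1920, 1080
ALPHA = 0.3  # smoothing factor; lower values damp hand jitter more

_prev = None

def hand_to_cursor(hand_x, hand_y):
    """hand_x, hand_y in [0, 1] (normalized camera coords) -> pixel coords."""
    global _prev
    raw = (hand_x * SCREEN_W, (1.0 - hand_y) * SCREEN_H)  # flip the y-axis
    if _prev is None:
        _prev = raw
    # Low-pass filter: blend the new sample with the previous cursor position.
    _prev = (ALPHA * raw[0] + (1 - ALPHA) * _prev[0],
             ALPHA * raw[1] + (1 - ALPHA) * _prev[1])
    return int(_prev[0]), int(_prev[1])

print(hand_to_cursor(0.5, 0.5))  # roughly the screen center
```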

Fingertip Extraction and Hand Motion Recognition Method for Augmented Reality Applications (증강현실 응용을 위한 손 끝점 추출과 손 동작 인식 기법)

  • Lee, Jeong-Jin;Kim, Jong-Ho;Kim, Tae-Young
    • Journal of Korea Multimedia Society / v.13 no.2 / pp.316-323 / 2010
  • In this paper, we propose a fingertip extraction and hand motion recognition method for augmented reality applications. First, the input image is transformed from RGB color space into HSV color space. The hand area is segmented using double thresholding of the H and S values, region growing, and connected component analysis. Next, the end points of the index finger and thumb are extracted using morphology operations and subtraction, for use as a virtual keyboard and mouse interface. Finally, the angle between the end points of the index finger and thumb, measured about the center of mass of the palm, is calculated to detect contact between the index finger and thumb, which implements the click of a mouse button. Experimental results on various input images show that our method segments the hand and fingertips and recognizes hand movements quickly and accurately. The proposed method can serve as an input interface for augmented reality applications.
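
A rough OpenCV rendering of this pipeline is sketched below: HSV conversion, skin-range thresholding on H and S, morphological cleanup, largest-component selection, and a thumb-palm-index angle test for the click. The HSV bounds, kernel size, and angle threshold are illustrative guesses, not the paper's tuned values.

```python
import math
import cv2
import numpy as np

def segment_hand(bgr_image):
    """Segment a skin-colored hand region from a BGR frame (sketch)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Double thresholding on H and S (assumed skin-like range).
    mask = cv2.inRange(hsv, np.array([0, 40, 0]), np.array([25, 255, 255]))
    # Morphological opening/closing to remove speckle noise and fill holes.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    # Keep the largest connected component as the hand.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return (labels == largest).astype(np.uint8) * 255

def is_click(thumb_tip, index_tip, palm_center, angle_deg_threshold=15.0):
    """Detect a 'click' when the thumb-palm-index angle closes below a bound."""
    a1 = math.atan2(thumb_tip[1] - palm_center[1], thumb_tip[0] - palm_center[0])
    a2 = math.atan2(index_tip[1] - palm_center[1], index_tip[0] - palm_center[0])
    angle = abs(math.degrees(a1 - a2)) % 360
    angle = min(angle, 360 - angle)  # smallest angle between the two rays
    return angle < angle_deg_threshold
```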

An EMG-based Input Interface Technology for the Tetraplegic and Its Applications (사지마비 장애인을 위한 근전도 기반 입력 인터페이스 기술 및 그 응용)

  • Jeong, Hyuk;Kim, Jong-Sung;Son, Wook-Ho;Kim, Young-Hoon
    • Journal of the HCI Society of Korea / v.1 no.2 / pp.9-17 / 2006
  • We propose an EMG-based input interface technology that helps the tetraplegic utilize a mouse, a keyboard, and a power wheelchair. Among the actions available to the tetraplegic for operating these devices, teeth clenching is chosen as the input action. By clenching the left, right, or both sides of the teeth, and by controlling the clenching duration, several input commands for the devices can be issued. The EMG signals generated by teeth clenching are acquired around the user's left and right temples and used as control sources for the devices. We develop signal acquisition hardware, signal processing algorithms, and prototype systems for power wheelchair control, mouse control, and game control. Our experimental results with tetraplegic users show that the proposed method is useful for operating these devices.
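
A toy decoder for this scheme is sketched below: the RMS envelope of each temple channel is thresholded, and the left/right/both pattern plus the clench duration selects a command. The sampling rate, window length, threshold, and command mapping are all assumptions, not values from the paper.

```python
import numpy as np

FS = 1000          # sampling rate (Hz), assumed
WINDOW = 100       # 100 ms RMS window
THRESHOLD = 0.05   # activation threshold on the RMS envelope, assumed

def rms_envelope(emg, window=WINDOW):
    """Sliding-window RMS of a raw EMG channel."""
    squared = np.square(emg.astype(float))
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

def decode(left_emg, right_emg, long_clench_s=0.5):
    """Map left/right/both clenches (and their duration) to commands."""
    left_on = rms_envelope(left_emg) > THRESHOLD
    right_on = rms_envelope(right_emg) > THRESHOLD
    both = left_on & right_on
    dur = both.sum() / FS
    if dur > 0:
        return "click-and-hold" if dur >= long_clench_s else "click"
    if left_on.any():
        return "move-left"
    if right_on.any():
        return "move-right"
    return "idle"

sig = np.zeros(2000)
sig[500:1200] = 0.2      # simulated 0.7 s clench burst on both channels
print(decode(sig, sig))  # both temples active long enough -> "click-and-hold"
```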

Man-Machine Interface Device for Dismantling Factory

  • Yi, Hwa-Cho;Park, Jung-Whan;Park, Myon Woong;Nam, Taek-Jun
    • Clean Technology / v.23 no.3 / pp.248-255 / 2017
  • In dismantling factories for recycling, it is important to enter actual working data into a personal computer (PC) in order to monitor work results and the recycling rate of the inputs. This is normally done with a keyboard, a mouse, or other devices, but for a worker on the factory floor, going to the PC can be bothersome and time consuming. In particular, workers at dismantling factories, who generally have little formal education, are often intimidated by PCs, which can become a pretext for not using them. In some cases, a worker enters the data only after the day's job; this takes additional time, invites more mistakes, and makes the data unreliable. In this study, we developed a man-machine interface (MMI) device built into a safety helmet. A joystick-like device, pushbuttons, and a radio frequency (RF) module for wireless communication are mounted on the helmet. The MMI device has functions similar to a PC mouse and a long communication range; RF is used because it consumes less battery power than Bluetooth. With this MMI device, workers need not walk to a PC to input data or control it, and they can operate the PC from a distance. The developed MMI system could increase the efficiency of PCs in the factory, and workers at dismantling factories could feel less reluctance to use them.
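
The helmet-to-PC link invites a small sketch of how joystick deltas and button presses might travel over such an RF channel. The paper does not describe its packet layout; the three-byte format, flag values, and names below are purely hypothetical.

```python
import struct

# Hypothetical wire format for the helmet's RF link: two signed bytes for
# joystick x/y deltas plus one byte of button flags.
PACKET_FMT = "<bbB"  # dx, dy, buttons

BTN_LEFT = 0x01
BTN_RIGHT = 0x02

def encode(dx, dy, left=False, right=False):
    """Pack one mouse-like event into a 3-byte RF payload."""
    flags = (BTN_LEFT if left else 0) | (BTN_RIGHT if right else 0)
    return struct.pack(PACKET_FMT, dx, dy, flags)

def decode(packet):
    """Unpack a payload on the PC side into cursor motion and clicks."""
    dx, dy, flags = struct.unpack(PACKET_FMT, packet)
    return {
        "dx": dx, "dy": dy,
        "left": bool(flags & BTN_LEFT),
        "right": bool(flags & BTN_RIGHT),
    }

print(decode(encode(5, -3, left=True)))
```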

Development of an EMG-based Wireless and Wearable Computer Interface (근전도기반의 무선 착용형 컴퓨터 인터페이스 개발)

  • Han, Hyo-Nyoung;Choi, Chang-Mok;Lee, Yun-Joo;Ha, Sung-Do;Kim, Jung
    • 한국HCI학회 학술대회논문집 (Proceedings of the HCI Society of Korea Conference) / 2008.02a / pp.240-244 / 2008
  • This paper presents an EMG-based wireless and wearable computer interface. The wearable device contains four EMG sensor channels and acquires EMG signals with on-board signal processing. The acquired signals are transmitted to a host computer over a wireless link. EMG signals induced by volitional movements are recorded from four sites on the lower arm to extract the user's intention, and six classes of wrist movements are discriminated by an artificial neural network (ANN). This interface could give the limb-disabled direct access to computers and network environments without conventional computer interfaces such as a keyboard and a mouse.
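
The classification stage can be sketched as a small multilayer perceptron over per-channel RMS features. The feature choice, network size, and the synthetic stand-in data are assumptions; the paper's actual features and ANN topology may differ.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

N_CHANNELS, N_CLASSES = 4, 6

def rms_features(window):
    """window: (n_samples, N_CHANNELS) raw EMG -> one RMS value per channel."""
    return np.sqrt(np.mean(np.square(window), axis=0))

rng = np.random.default_rng(0)
# Fake training data standing in for recorded, labeled EMG feature vectors.
X = rng.normal(size=(600, N_CHANNELS))
y = rng.integers(0, N_CLASSES, size=600)

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X, y)

feats = rms_features(rng.normal(size=(200, N_CHANNELS)))  # one EMG window
print(clf.predict([feats]))  # predicted wrist-movement class
```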

A Review of Research Trends on Brain Computer Interface (BCI) Games using Brain Wave (뇌파를 이용한 BCI 게임 동향 고찰)

  • Kim, Gui-Jung;Han, Jung-Soo
    • Journal of Digital Convergence / v.13 no.6 / pp.177-184 / 2015
  • A brain-computer interface (BCI) is a communication channel through which brain activity is input directly to a computer, without input devices such as a mouse or keyboard. As brain-wave interface hardware has evolved, expensive and bulky EEG equipment has become small and inexpensive, so it can be applied to a variety of multimedia applications. Among BCI studies, we survey domestic and foreign research trends on how BCI has been applied to games, which almost everyone plays. We then examine the problems of BCI-based games and propose future directions for domestic BCI research and development.

HAND GESTURE INTERFACE FOR WEARABLE PC

  • Nishihara, Isao;Nakano, Shizuo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.664-667 / 2009
  • There is strong demand for wearable PC systems that can support the user outdoors. Outdoors, our movement makes it impossible to use traditional input devices such as keyboards and mice. We propose a hand gesture interface based on image processing to operate wearable PCs. A semi-transparent PC screen is displayed on the head-mounted display (HMD), and the user makes hand gestures to select icons on the screen. The user's hand is extracted from images captured by a color camera mounted above the HMD. Since skin color can vary widely under outdoor lighting, a key problem is accurately discriminating the hand from the background. The proposed method does not assume any fixed skin color space. First, the image is divided into blocks, and blocks with similar average colors are linked. The resulting contiguous regions are then subjected to hand recognition. Blocks on the edges of the hand region are subdivided for more accurate finger discrimination. A change in hand shape is recognized as hand movement. Our current input interface associates a hand grasp with a mouse click. Tests on a prototype system confirm that the proposed method recognizes hand gestures accurately at high speed. We intend to develop a wider range of recognizable gestures.
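
The block-linking step lends itself to a short sketch: the frame is split into tiles, each tile's mean color is computed, and 4-connected tiles with similar means are flood-filled into regions. The tile size and color-distance threshold below are illustrative assumptions.

```python
import numpy as np

BLOCK = 16         # tile side in pixels, assumed
COLOR_DIST = 30.0  # max Euclidean distance between linked tiles, assumed

def block_means(image):
    """Average color of each BLOCK x BLOCK tile of an (H, W, 3) image."""
    h, w, _ = image.shape
    gh, gw = h // BLOCK, w // BLOCK
    means = np.zeros((gh, gw, 3))
    for i in range(gh):
        for j in range(gw):
            tile = image[i*BLOCK:(i+1)*BLOCK, j*BLOCK:(j+1)*BLOCK]
            means[i, j] = tile.reshape(-1, 3).mean(axis=0)
    return means

def link_blocks(means):
    """Flood-fill the tile grid, merging neighbors with similar mean color."""
    gh, gw, _ = means.shape
    labels = -np.ones((gh, gw), dtype=int)
    current = 0
    for si in range(gh):
        for sj in range(gw):
            if labels[si, sj] != -1:
                continue
            stack = [(si, sj)]
            labels[si, sj] = current
            while stack:
                i, j = stack.pop()
                for ni, nj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                    if 0 <= ni < gh and 0 <= nj < gw and labels[ni, nj] == -1 \
                       and np.linalg.norm(means[i, j] - means[ni, nj]) < COLOR_DIST:
                        labels[ni, nj] = current
                        stack.append((ni, nj))
            current += 1
    return labels

img = np.zeros((64, 64, 3)); img[:, 32:] = 255.0
print(link_blocks(block_means(img)))  # two regions: left and right halves
```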

Real-time Multi-device Control System Implementation for Natural User Interactive Platform

  • Kim, Myoung-Jin;Hwang, Tae-min;Chae, Sung-Hun;Kim, Min-Joon;Moon, Yeon-Kug;Kim, SeungJun
    • Journal of Internet Computing and Services / v.23 no.1 / pp.19-29 / 2022
  • A natural user interface (NUI) provides a natural motion interface without a specific device or tool such as a mouse, keyboard, or pen. Recently, as non-contact, sensor-based interaction technologies for recognizing human motion, gestures, voice, and gaze have been actively studied, an environment has emerged that can deliver more diverse content through a wider variety of interaction methods than before. However, as the number of sensor devices grows rapidly, a system using many sensors can run short of computational resources. To address this problem, we propose a real-time multi-device control system for a natural interactive platform. The proposed system classifies devices into two types, HC devices such as high-end commercial sensors and LC devices such as traditional low-cost monitoring sensors, and adopts a device manager for each type to control them efficiently. We demonstrate that the proposed system works properly with user behaviors such as gestures, motions, gazes, and voices.
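
One way to picture the per-class device managers is a thread per manager feeding a shared event queue that a central dispatcher drains. The class names, polling rates, and queueing scheme below are sketched around the paper's description and are assumptions, not its implementation.

```python
import queue
import threading
import time

events = queue.Queue()  # shared event stream consumed by the dispatcher

class DeviceManager(threading.Thread):
    """One manager per device, polling at a rate suited to its device class."""

    def __init__(self, name, poll_hz):
        super().__init__(daemon=True)
        self.name, self.period = name, 1.0 / poll_hz

    def read_sensor(self):
        return f"{self.name}-sample"  # stand-in for a real sensor read

    def run(self):
        while True:
            events.put((time.time(), self.name, self.read_sensor()))
            time.sleep(self.period)

# HC devices get a faster polling budget than LC devices.
managers = [DeviceManager("hc-gesture-cam", poll_hz=30),
            DeviceManager("lc-ambient-sensor", poll_hz=2)]
for m in managers:
    m.start()

for _ in range(5):  # central dispatcher consuming events in arrival order
    ts, device, payload = events.get()
    print(f"{ts:.2f} {device}: {payload}")
```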

Trends and Implications of Digital Transformation in Vehicle Experience and Audio User Interface (차내 경험의 디지털 트랜스포메이션과 오디오 기반 인터페이스의 동향 및 시사점)

  • Kim, Kihyun;Kwon, Seong-Geun
    • Journal of Korea Multimedia Society / v.25 no.2 / pp.166-175 / 2022
  • Digital transformation is driving many changes in daily life and industry, and the automobile industry is no exception. Element technologies from areas described as the metaverse are already being adopted, such as 3D-animated digital cockpits, around-view monitors, and voice AI. Through the growth of the mobile market, the norm of human-computer interaction (HCI) has evolved from keyboard-and-mouse interaction to the touch screen. The core area has been the graphical user interface (GUI), and recently the audio user interface (AUI) has partially replaced it. Because the AUI is easy to access and intuitive for the user, it is quickly becoming a common element of the in-vehicle experience (IVE) in particular. The benefits of an AUI are that it frees the driver's eyes and hands, uses fewer screens, lowers interaction costs, feels more emotional and personal, and is effective for people with low vision. Nevertheless, deciding when and where to apply a GUI or an AUI requires different approaches, because some information is easier to process visually, while in other cases an AUI is potentially more suitable. This study proposes actively applying the AUI in the near future, based on the context of various in-vehicle scenes, to improve the IVE.