• Title/Abstract/Keyword: Computer mouse

Search Results: 300

The effects of the usability of products on user's emotions - with emphasis on suggestion of methods for measuring user's emotions expressed while using a product -

  • Jeong, Sang-Hoon / Archives of design research / v.20 no.2 s.70 / pp.5-16 / 2007
  • The main objective of our research is to analyze users' emotional changes while using a product and thereby reveal the influence of usability on human emotions. In this study we extracted, through three methods, emotional words that can arise during user interaction with a product and that reveal emotional changes. We obtained 88 emotional words for measuring the emotions users express while using products and grouped them into six categories by factor analysis; these six categories were found to be users' representative emotions expressed while using products. The extracted emotional words and representative emotions are expected to serve as the subjective evaluation data required to measure users' emotional changes during product use. We also proposed effective methods for measuring these emotions, in an environment natural and accessible to the design field, using an emotion mouse and the Eyegaze system. A participant performs several tasks with the emotion mouse on a mobile phone simulator displayed on a computer monitor connected to the Eyegaze. During the test, the emotion mouse senses the user's EDA and PPG and transmits the data to the computer, the Eyegaze observes changes in pupil size, and a video camera records the user's facial expressions. After each test, the participant performs a subjective evaluation of his or her emotional changes using the emotional words extracted above. We aim to evaluate the satisfaction level of the product's usability and compare it with the actual experimental results. Through continued studies building on this research, we hope to supply a basic framework for developing interfaces that take the user's emotions into account. (A hedged data-logging sketch follows this entry.)

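The data-collection step described above amounts to timestamped logging of physiological signals while the participant performs each task. Below is a minimal, hypothetical logger that assumes the emotion mouse streams comma-separated EDA/PPG values over a serial port; the device path, baud rate, and frame format are invented for illustration, since the paper does not describe its actual interface.

```python
# Hypothetical logger for an "emotion mouse" that streams EDA/PPG samples.
# The port name, baud rate, and "eda,ppg" line format are assumptions.
import csv
import time

import serial  # pyserial


def log_session(task_name: str, duration_s: float, out_path: str) -> None:
    """Record timestamped EDA/PPG samples for one task into a CSV file."""
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as dev, \
         open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["task", "timestamp", "eda", "ppg"])
        t_end = time.time() + duration_s
        while time.time() < t_end:
            line = dev.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                eda, ppg = (float(v) for v in line.split(","))
            except ValueError:
                continue  # skip malformed frames
            writer.writerow([task_name, time.time(), eda, ppg])


if __name__ == "__main__":
    log_session("dial_a_number", duration_s=60.0, out_path="task1_eda_ppg.csv")
```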

Implementation of Mouse Function Using Web Camera and Hand (웹 카메라와 손을 이용한 마우스 기능의 구현)

  • Kim, Seong-Hoon;Woo, Young-Woon;Lee, Kwang-Eui / Journal of the Korea Society of Computer and Information / v.15 no.5 / pp.33-38 / 2010
  • In this paper, we propose an algorithm that implements mouse functions using the hand motion and number of fingers extracted from an image sequence. The sequence is acquired through a web camera and processed with image processing algorithms: each frame is first converted from the RGB to the YCbCr color model to extract the skin area efficiently, and the extracted area is further processed with labeling, opening, and closing operations to determine the center of the hand. Based on the center position, the number of extended fingers is counted, which determines the mouse function to perform. Experimental results show that 94.0% of pointer moves and 96.0% of finger extractions are successful, which opens the possibility of further development into a commercial product. (See the sketch after this entry.)
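
The pipeline described above (RGB to YCbCr conversion, morphological opening and closing, and locating the hand center) can be sketched with OpenCV as follows. The skin-color thresholds and kernel size are illustrative assumptions, not the values used in the paper.

```python
# Sketch of a YCbCr skin-segmentation pipeline that locates the hand center.
# Thresholds and kernel size are assumed; tune per camera and lighting.
import cv2
import numpy as np


def hand_center(frame_bgr: np.ndarray):
    """Return the (x, y) center of the largest skin-colored region, or None."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))    # common Cr/Cb skin range
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)       # remove small noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)      # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)                   # keep the largest blob
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])


cap = cv2.VideoCapture(0)   # web camera
ok, frame = cap.read()
if ok:
    print("hand center:", hand_center(frame))
cap.release()
```

The number of extended fingers counted on the same contour would then select which mouse function to trigger.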

A study on 3D Pottery Modeling based on Web (웹기반 3D 도자기 모델링에 관한 연구)

  • Park, Gyoung Bae / Journal of the Korea Society of Computer and Information / v.17 no.12 / pp.209-217 / 2012
  • In this paper, I propose a new system in which a user models symmetric 3D pottery with a mouse and can immediately confirm the result in an internet browser. The main advantage of the proposed system is that users with no specialized knowledge of 3D graphics can easily create 3D objects, and it requires only a networked PC and a mouse, without additional devices such as expensive haptic or camera equipment. The system was developed with VRML/X3D, the international standard language for virtual reality and 3D graphics; because it was designed for the internet, unlike other 3D graphics languages, it supports user interaction and navigation. Considering these features and the high fidelity of the mouse-driven 3D pottery modeling, the system is useful and outperforms other pottery modeling systems. (A sketch of the underlying idea follows this entry.)
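
Modeling symmetric pottery with a mouse is essentially a surface of revolution: a 2D profile curve traced by the user is swept around the vertical axis. The sketch below illustrates that geometric idea in Python; it is not the paper's VRML/X3D implementation, and the sample profile is made up.

```python
# Surface of revolution: sweep a 2D (radius, height) profile around the y-axis.
import math


def revolve(profile, segments=32):
    """profile: list of (radius, height) pairs. Returns a list of (x, y, z) vertices."""
    vertices = []
    for r, y in profile:
        for i in range(segments):
            angle = 2.0 * math.pi * i / segments
            vertices.append((r * math.cos(angle), y, r * math.sin(angle)))
    return vertices


# A crude pot profile, as a user might trace it with the mouse.
pot_profile = [(0.5, 0.0), (0.8, 0.3), (0.9, 0.6), (0.6, 0.9), (0.4, 1.0)]
print(len(revolve(pot_profile)), "vertices")   # 5 rings x 32 segments = 160
```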

Automatic Extraction of the Interest Organization from Full-Color Continuous Images for a Biological Sample

  • Takemoto, Satoko;Yokota, Hideo;Shimai, Hiroyuki;Makinouchi, Akitake;Mishima, Taketoshi / Proceedings of the IEEK Conference / 2002.07a / pp.196-199 / 2002
  • We present a technique for automatically extracting a biological internal organ from full-color continuous images. It exploits the localized homogeneity of color intensity within each image and the continuity between neighboring images. In addition, we assign a "four-level status value" to each area condition as a measure of "area possibility," which plays an important role in preventing misjudgment of the area definition. This approach is effective for tracking changes in the color and shape of the area of interest during continuous extraction. As a result, we succeeded in extracting a mouse's stomach from 50 continuous images. (A hedged propagation sketch follows this entry.)

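The continuity between neighboring images mentioned above can be illustrated as propagating the region found in one slice to the next, keeping only pixels whose color stays close to the region's mean and which lie near the previous region. This sketch is a simplified stand-in: it does not reproduce the paper's four-level status value, and the tolerance values are assumed.

```python
# Simplified slice-to-slice region propagation using color homogeneity and
# spatial continuity with the previous slice's mask. Thresholds are assumed.
import numpy as np
from scipy.ndimage import binary_dilation


def propagate_region(prev_mask: np.ndarray, next_slice: np.ndarray,
                     color_tol: float = 20.0, reach: int = 5) -> np.ndarray:
    """prev_mask: HxW bool mask from slice k. next_slice: HxWx3 RGB image of slice k+1."""
    region_mean = next_slice[prev_mask].mean(axis=0)               # expected organ color
    dist = np.linalg.norm(next_slice.astype(float) - region_mean, axis=2)
    color_ok = dist < color_tol                                    # local color homogeneity
    near_previous = binary_dilation(prev_mask, iterations=reach)   # continuity between slices
    return color_ok & near_previous
```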

3D Game Development For Dementia Prevention (치매 예방을 위한 3D 게임 개발)

  • He, Guan-Feng;Kang, Sun-Kyung;Choi, Wuk-Ho;Jung, Sung-Tae / Proceedings of the Korean Society of Computer Information Conference / 2011.01a / pp.233-236 / 2011
  • This paper proposes using gesture recognition technology to develop a 3D game for the prevention of dementia. In the game, the user can choose either camera mode or mouse mode to play. Focusing on the characteristics of dementia, we developed four types of games: a memory training game, a math game, a logical training game, and a 3D visual sense training game, each composed of a number of small games. A common feature of all the games is randomness, which ensures that repeated sessions rarely present the same situation. If elderly users spend about 10 minutes a day on the game and keep playing consistently, it can have a positive effect on dementia prevention.


Design of OpenCV based Finger Recognition System using binary processing and histogram graph

  • Baek, Yeong-Tae;Lee, Se-Hoon;Kim, Ji-Seong / Journal of the Korea Society of Computer and Information / v.21 no.2 / pp.17-23 / 2016
  • NUI (Natural User Interface) is a motion interface: it controls a device with the user's body, without HID devices such as a mouse and keyboard. In this paper, we use a Pi Camera and sensors connected to a small embedded board, the Raspberry Pi, together with OpenCV algorithms optimized for image recognition and computer vision, to implement an NUI device that is more human-friendly and intuitive than traditional HID equipment. Frame-comparison operations detect motion, and we propose a more advanced recognition system in which motion sensors connected to the Raspberry Pi are fused with the camera-based recognition. (A finger-counting sketch follows this entry.)
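
A common OpenCV way to turn a binary hand mask into a finger count is to examine convexity defects of the hand contour; the sketch below shows that general approach. It is an illustration only, not necessarily the exact binary-processing and histogram method used in the paper, and the angle and depth thresholds are assumed.

```python
# Count extended fingers from a binary hand mask using convexity defects.
# Angle and depth thresholds are assumptions; tune for the actual camera setup.
import cv2
import numpy as np


def count_fingers(mask: np.ndarray) -> int:
    """mask: 8-bit binary image with the hand in white. Returns an estimated finger count."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    valleys = 0
    for s, e, f, depth in defects[:, 0]:
        start, end, far = hand[s][0], hand[e][0], hand[f][0]
        a = np.linalg.norm(end - start)
        b = np.linalg.norm(far - start)
        c = np.linalg.norm(end - far)
        angle = np.degrees(np.arccos((b ** 2 + c ** 2 - a ** 2) / (2 * b * c + 1e-6)))
        if angle < 90 and depth / 256.0 > 20:    # deep, sharp valley between two fingers
            valleys += 1
    return valleys + 1 if valleys else 0
```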

A Study on input and display interface for industrial computer (산업용 컴퓨터용 입력 및 표시장치 인터페이스 구현에 대한 연구)

  • Cho, Young-Seok;Kim, Jae-Jin / Proceedings of the Korean Society of Computer Information Conference / 2015.07a / pp.196-197 / 2015
  • In this paper, we studied the implementation of an industrial computer interface device that uses a mouse and a touchscreen with industrial computers. Because the input/output devices of control computers used on industrial control sites were not designed with expandability in mind, current computer peripherals usually cannot be connected to them directly. We developed an interface device that replaces the light pen and monitor used with industrial computers with the mouse and LCD monitor in common use today, in order to increase the utilization of industrial computers in production processes. The interface device processes the video signal and the data from external input devices with an ARM-based MPU and feeds signals generated by a CPLD into the control computer. A video scaler is used so that the control computer's video signal can be interfaced with various monitors. The implemented interface device shows that a variety of peripherals can be used with industrial computers.


A model on git GUI tool development Using mouse for command input (마우스를 이용한 Command 입력 git GUI tool 개발을 위한 모델)

  • Park, Seonghyuk;Kang, Juho;Ju, Hyeonseok;Hwang, Seongjin;Lee, Changhoon / Proceedings of the Korea Information Processing Society Conference / 2014.11a / pp.665-668 / 2014
  • With interest in git growing, many people who use a version control system in various industries consider git more complicated than svn. Professional software developers, with their deeper understanding of computer systems, may not be much bothered by this, but non-developers such as designers and students find git difficult to understand. The difficulties newcomers face lie in understanding git's concepts and in using its commands, so we propose a model for developing a git GUI tool that makes git more comfortable for them to use. (A minimal sketch follows this entry.)
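
The idea of issuing git commands by mouse alone can be sketched as a small GUI whose buttons wrap the git CLI. This is a hypothetical illustration of the proposed model, not the authors' tool; the chosen commands and layout are arbitrary.

```python
# Minimal mouse-driven git front end: buttons run git commands via subprocess
# and show the output in a text widget. Commands and layout are illustrative.
import subprocess
import tkinter as tk

root = tk.Tk()
root.title("Mouse-driven git (sketch)")

output = tk.Text(root, height=15, width=72)


def run_git(*args: str) -> None:
    """Run a git command in the current directory and display its output."""
    result = subprocess.run(["git", *args], capture_output=True, text=True)
    output.delete("1.0", tk.END)
    output.insert(tk.END, result.stdout or result.stderr)


for label, args in [("Status", ["status"]),
                    ("Log", ["log", "--oneline", "-5"]),
                    ("Add all", ["add", "."])]:
    tk.Button(root, text=label, command=lambda a=args: run_git(*a)).pack(fill="x")

output.pack()
root.mainloop()
```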

A Development of the Next-generation Interface System Based on the Finger Gesture Recognizing in Use of Image Process Techniques (영상처리를 이용한 지화인식 기반의 차세대 인터페이스 시스템 개발)

  • Kim, Nam-Ho / Journal of the Korea Institute of Information and Communication Engineering / v.15 no.4 / pp.935-942 / 2011
  • This study designs and implements a finger gesture recognition system that automatically recognizes finger gestures captured by a camera and uses them to control a computer. Ordinary CCD cameras were modified into infrared cameras to acquire the images. The recorded images are preprocessed to find hand features, the finger gestures are read from those features, and corresponding events are generated for mouse control and presentation control; in this way a method of controlling computers is suggested. The finger gesture recognition system presented in this study was verified as a next-generation interface that can replace the mouse and keyboard in future information devices. (A cursor-control sketch follows this entry.)
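
Once a fingertip has been located in the camera frame, generating the corresponding mouse event is a simple coordinate mapping. The sketch below assumes the pyautogui library and a linear camera-to-screen mapping; the paper does not specify this mechanism, so treat it as illustrative.

```python
# Map a fingertip position in camera coordinates to the OS cursor and click on
# a recognized gesture. Camera resolution and the pyautogui choice are assumed.
import pyautogui

CAM_W, CAM_H = 640, 480                 # assumed camera resolution
SCREEN_W, SCREEN_H = pyautogui.size()


def move_cursor(finger_x: int, finger_y: int) -> None:
    """Move the OS cursor to the screen point corresponding to the fingertip."""
    pyautogui.moveTo(finger_x * SCREEN_W / CAM_W, finger_y * SCREEN_H / CAM_H)


def click_gesture() -> None:
    """Issue a left click when the 'click' finger gesture is recognized."""
    pyautogui.click()
```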

A Hierarchical Bayesian Network for Real-Time Continuous Hand Gesture Recognition (연속적인 손 제스처의 실시간 인식을 위한 계층적 베이지안 네트워크)

  • Huh, Sung-Ju;Lee, Seong-Whan / Journal of KIISE:Software and Applications / v.36 no.12 / pp.1028-1033 / 2009
  • This paper presents a real-time hand gesture recognition approach for controlling a computer. We define hand gestures as continuous hand postures and their movements, for easy expression of various gestures, and propose a Two-layered Bayesian Network (TBN) to recognize those gestures. The proposed method can compensate for an incorrectly recognized hand posture and its location by using the preceding and following information. To verify the usefulness of the proposed method, we implemented a Virtual Mouse interface, a gesture-based counterpart of a physical mouse device. In experiments, the proposed method showed recognition rates of 94.8% and 88.1% for a simple and a cluttered background, respectively, outperforming a previous HMM-based method, which achieved 92.4% and 83.3% under the same conditions. (A smoothing sketch follows this entry.)
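
The core idea above, correcting a misrecognized posture from the frames before and after it, can be illustrated with simple Viterbi smoothing over a posture sequence. This is a deliberately simplified stand-in for the paper's two-layered Bayesian network, and all probabilities below are invented.

```python
# Viterbi smoothing of noisy per-frame posture labels: a single spurious label
# inside a stable run is corrected by its temporal context. Probabilities are invented.
import numpy as np

POSTURES = ["open", "fist", "point"]
TRANS = np.array([[0.8, 0.1, 0.1],               # postures tend to persist between frames
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])
OBS = np.full((3, 3), 0.15) + np.eye(3) * 0.55   # per-frame classifier ~70% accurate


def smooth(observed):
    """Return the most likely true posture sequence given noisy per-frame labels."""
    obs_idx = [POSTURES.index(o) for o in observed]
    n, k = len(obs_idx), len(POSTURES)
    logp = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    logp[0] = np.log(1.0 / k) + np.log(OBS[:, obs_idx[0]])
    for t in range(1, n):
        for j in range(k):
            scores = logp[t - 1] + np.log(TRANS[:, j])
            back[t, j] = int(np.argmax(scores))
            logp[t, j] = scores[back[t, j]] + np.log(OBS[j, obs_idx[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [POSTURES[i] for i in reversed(path)]


# The lone "point" in a run of "fist" frames is smoothed away.
print(smooth(["fist", "fist", "point", "fist", "fist"]))
```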