• Title/Summary/Keyword: User recognition

EOG-based User-independent Gaze Recognition using Wavelet Coefficients and Dynamic Positional Warping (웨이블릿 계수와 Dynamic Positional Warping을 통한 EOG기반의 사용자 독립적 시선인식)

  • Chang, Won-Du;Im, Chang-Hwan
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.9
    • /
    • pp.1119-1130
    • /
    • 2018
  • Writing letters or patterns in a virtual space by moving one's gaze is called "eye writing," a promising tool for various human-computer interface applications. This paper investigates the use of conventional eye-writing recognition algorithms for user-independent recognition of eye-written characters. Two algorithms are presented to build the user-independent system: eye-written region extraction using wavelet coefficients, and template generation. Experimental results demonstrated that with dynamic positional warping the proposed system achieved an F1 score of 79.61% for 12 eye-written patterns, indicating the feasibility of user-independent eye writing.
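
Since the abstract names its two building blocks (wavelet-based region extraction and warping-based template matching), here is a minimal, hypothetical sketch of each. Plain dynamic time warping stands in for the paper's dynamic positional warping variant, and the wavelet choice ('db4'), decomposition level, and energy threshold are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of the two building blocks named above. PyWavelets provides
# the coefficients; plain DTW stands in for dynamic positional warping.
import numpy as np
import pywt

def extract_writing_region(eog: np.ndarray, wavelet="db4", k=3.0):
    """Mark samples whose level-1 detail energy exceeds k * median energy."""
    coeffs = pywt.wavedec(eog, wavelet, level=3)
    detail = pywt.upcoef("d", coeffs[-1], wavelet, level=1, take=len(eog))
    energy = detail ** 2
    return energy > k * np.median(energy)

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic DTW distance; the template with the smallest distance wins."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])
```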

A User Recognition Method based on Context Awareness in BLE Beacon-based Electronic Attendance System (BLE 비콘 기반 전자 출결 시스템에서의 상황인지를 기반으로 한 사용자 인식 기법)

  • Kang, Seung-Wan;Kim, Young-Kuk
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.05a
    • /
    • pp.609-610
    • /
    • 2017
  • As interest in IoT has increased recently, services using IoT devices and smartphones have been applied to various industries. Among them, electronic attendance systems have been built and operated by various institutions, but user recognition in these systems is still not accurate. In this paper, we propose a context-awareness-based user recognition method that can improve the accuracy of the user recognition component of existing BLE beacon-based electronic attendance systems.
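
The abstract does not detail the mechanism, but BLE beacon attendance systems typically combine received signal strength (RSSI) with contextual checks. The following is a minimal sketch under that assumption; the RSSI threshold, beacon IDs, and timetable check are hypothetical, not taken from the paper.

```python
# Illustrative sketch only: combines a BLE RSSI proximity check with a
# simple context check (class schedule). Thresholds and IDs are hypothetical.
from datetime import datetime, time

RSSI_THRESHOLD = -75  # dBm; assumed cutoff for "inside the classroom"

def in_scheduled_class(now: datetime, schedule: dict) -> bool:
    """Context check: is a class scheduled in this room right now?"""
    slot = schedule.get(now.strftime("%a"))
    return slot is not None and slot[0] <= now.time() <= slot[1]

def recognize_attendance(beacon_id: str, rssi: int, now: datetime,
                         room_beacon: str, schedule: dict) -> bool:
    """Mark attendance only when proximity AND context agree."""
    return (beacon_id == room_beacon
            and rssi >= RSSI_THRESHOLD
            and in_scheduled_class(now, schedule))

# Example: Monday 09:00-10:15 lecture in the room with beacon "beacon-101"
schedule = {"Mon": (time(9, 0), time(10, 15))}
print(recognize_attendance("beacon-101", -60,
                           datetime(2024, 1, 8, 9, 30), "beacon-101", schedule))
```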

Robust User Activity Recognition using Smartphone Accelerometer Sensors (스마트폰 가속도 센서를 이용한 강건한 사용자 행위 인지 방법)

  • Jeon, Myung Joong;Park, Young Tack
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.9
    • /
    • pp.629-642
    • /
    • 2013
  • Recently, the advent of smartphones has brought many changes to the lives of modern people. In particular, applications that utilize smartphone sensor information to provide services adapted to the user's situation have emerged. Smartphone sensor data can be used to recognize the user's situation because it is closely related to the user's behavior and habits. Currently, the GPS sensor is widely used to recognize basic user activities. However, depending on the user's situation, the activity recognition system may fail to receive a GPS signal or to collect usable data, which limits its utility. To solve this problem, this paper proposes a user activity recognition method centered on smartphone accelerometer data: the accelerometer collects data stably and is sensitive to user behavior. Finally, this paper suggests a novel approach that uses state transition diagrams, which represent the natural flow of user activity changes, to enhance the accuracy of user activity recognition.
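
As a concrete illustration of the state-transition idea, the sketch below filters a stream of raw activity predictions by rejecting transitions the diagram does not allow (e.g., jumping directly from standing to running). The transition table and activity labels are illustrative assumptions, not the paper's actual diagram.

```python
# Hedged sketch: smooth raw activity predictions with an assumed state
# transition diagram. Only listed transitions are accepted; anything else
# keeps the previous state. Labels and edges are illustrative.
ALLOWED = {
    "lying":    {"lying", "sitting"},
    "sitting":  {"sitting", "lying", "standing"},
    "standing": {"standing", "sitting", "walking"},
    "walking":  {"walking", "standing", "running"},
    "running":  {"running", "walking"},
}

def filter_activities(raw_predictions, initial="standing"):
    """Replace impossible transitions with the last valid state."""
    state = initial
    smoothed = []
    for pred in raw_predictions:
        if pred in ALLOWED.get(state, ()):
            state = pred
        smoothed.append(state)
    return smoothed

print(filter_activities(["standing", "running", "walking", "lying"]))
# -> ['standing', 'standing', 'walking', 'walking'] (invalid jumps suppressed)
```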

A Consecutive Motion and Situation Recognition Mechanism to Detect a Vulnerable Condition Based on Android Smartphone

  • Choi, Hoan-Suk;Lee, Gyu Myoung;Rhee, Woo-Seop
    • International Journal of Contents
    • /
    • v.16 no.3
    • /
    • pp.1-17
    • /
    • 2020
  • Human motion recognition is essential for user-centric services such as surveillance-based security, elderly condition monitoring, exercise tracking, and daily calorie expenditure analysis. It is typically based on analysis of movement data such as the acceleration and angular velocity of a target user. Existing motion recognition studies are only intended to measure basic information (e.g., the user's stride, number of steps, speed) or to recognize single motions (e.g., sitting, running, walking). Thus, a new mechanism is required to identify the transitions between single motions, so that a user's consecutive motion can be assessed more accurately, and to recognize the bodily and surrounding situations arising from the motion. In this paper, we collect human movement data through Android smartphones in real time for five target single motions and propose a mechanism that recognizes a consecutive motion, including transitions among the motions, and the resulting situation, using a state transition model to check whether a vulnerable (life-threatening) condition, especially for the elderly, has occurred. Through implementation and experiments, we demonstrate that the proposed mechanism recognizes a consecutive motion and the user's situation accurately and quickly. In a recognition experiment on a mixed sequence resembling daily motion, the proposed adaptive weighting method showed improvements of 4% (holding time = 15 sec), 88% (30 sec), and 6.5% (60 sec) over the static method.
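
The paper's state transition model is not given in the abstract; the sketch below shows one plausible shape for the vulnerable-condition check it describes: flag the user when a fall-like transition is followed by an immobile state that persists past a holding time. The states, the fall rule, and the 30-second default are assumptions for illustration.

```python
# Illustrative sketch: flag a vulnerable condition when an upright->lying
# transition is followed by prolonged immobility. States and the holding
# time are assumed, not taken from the paper.
UPRIGHT = {"standing", "walking", "running"}

def detect_vulnerable(timestamped_states, holding_time=30.0):
    """timestamped_states: iterable of (t_seconds, state), time-ordered.
    Returns True if a fall-like transition is followed by at least
    `holding_time` seconds of continuous lying (no recovery)."""
    prev_state, lying_since = None, None
    for t, state in timestamped_states:
        if state == "lying":
            if prev_state in UPRIGHT:      # start the clock on a fall
                lying_since = t
            if lying_since is not None and t - lying_since >= holding_time:
                return True
        else:
            lying_since = None             # user moved: reset
        prev_state = state
    return False

events = [(0, "walking"), (5, "lying"), (20, "lying"), (40, "lying")]
print(detect_vulnerable(events))  # True: fell at t=5, still lying at t=40
```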

Conceptual Fuzzy Sets for Picture Reference System with Visual User Interface and Command Recognition System without Keyboard and Mouse

  • Saito, Maiji;Yamaguchi, Toru
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2003.09a
    • /
    • pp.138-141
    • /
    • 2003
  • This paper proposes conceptual fuzzy sets for a picture reference system with a visual user interface and a command recognition system that requires no keyboard or mouse. The picture reference system consists of the associative picture database, the visual user interface, and the command recognition system. The associative picture database searches pictures using conceptual fuzzy sets. To present pictures attractively, the visual user interface provides visual effect functions. The command recognition unit, instead of a keyboard and mouse, captures the user's hand with a camera and passes it to the system as a command. We implement and evaluate the picture reference system.
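
To make the associative search concrete, here is a minimal, hypothetical sketch of fuzzy-set retrieval: each picture carries concept memberships in [0, 1], and a query concept is expanded through an assumed concept-association matrix before scoring. The concepts, associations, and scoring rule are illustrative, not taken from the paper.

```python
# Hypothetical sketch of conceptual-fuzzy-set retrieval. A query concept is
# expanded via assumed concept associations (max-min composition), then
# pictures are ranked by their best membership in any expanded concept.
ASSOCIATION = {  # degree to which one concept evokes another (assumed)
    "sea": {"sea": 1.0, "beach": 0.8, "sky": 0.5},
    "beach": {"beach": 1.0, "sea": 0.8, "sand": 0.9},
}

PICTURES = {  # concept membership per picture (assumed annotations)
    "img_001": {"sea": 0.9, "sky": 0.7},
    "img_002": {"sand": 0.8, "beach": 0.6},
    "img_003": {"city": 0.9},
}

def search(query: str):
    expanded = ASSOCIATION.get(query, {query: 1.0})
    scores = {}
    for pic, memberships in PICTURES.items():
        # max over concepts of min(association strength, picture membership)
        score = max((min(w, memberships.get(c, 0.0))
                     for c, w in expanded.items()), default=0.0)
        if score > 0:
            scores[pic] = score
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(search("sea"))  # img_001 scores via 'sea'/'sky'; img_002 via 'beach'
```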

Design and Implementation of a Stereoscopic Image Control System based on User Hand Gesture Recognition (사용자 손 제스처 인식 기반 입체 영상 제어 시스템 설계 및 구현)

  • Song, Bok Deuk;Lee, Seung-Hwan;Choi, HongKyw;Kim, Sung-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.3
    • /
    • pp.396-402
    • /
    • 2022
  • User interactions are being developed in various forms, and interactions using human gestures in particular are being actively studied. Among them, hand gesture recognition based on a 3D hand model is used as a human interface in the field of realistic media. Interfaces based on hand gesture recognition help users access media more easily and conveniently. User interaction using hand gesture recognition should allow users to view images through fast and accurate hand gesture recognition without restrictions on the computing environment. This paper develops a fast and accurate user hand gesture recognition algorithm using the open-source MediaPipe framework and the k-NN (k-nearest neighbor) machine learning algorithm. In addition, to minimize the restrictions of the computing environment, a stereoscopic image control system based on user hand gesture recognition was designed and implemented using a web service environment capable of Internet service and Docker containers as a virtual environment.
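
To make the MediaPipe + k-NN pipeline concrete, here is a minimal sketch: MediaPipe Hands extracts 21 landmarks per frame, which are flattened into a feature vector for a k-NN classifier. The gesture labels, k = 5, and the training data are placeholder assumptions; the paper's actual features and parameters are not given in the abstract.

```python
# Minimal sketch of the described pipeline: MediaPipe hand landmarks fed to
# a k-NN classifier. Training data, labels, and k=5 are assumed placeholders.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def landmarks_to_vector(hand_landmarks) -> np.ndarray:
    """Flatten MediaPipe's 21 (x, y, z) hand landmarks into a 63-dim vector."""
    return np.array([[p.x, p.y, p.z] for p in hand_landmarks.landmark]).ravel()

knn = KNeighborsClassifier(n_neighbors=5)
# knn.fit(X_train, y_train)  # X_train: (n, 63) landmark vectors, y_train: gestures

def classify_frame(bgr_frame, hands):
    """Return the predicted gesture for one video frame, or None if no hand."""
    result = hands.process(cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    vec = landmarks_to_vector(result.multi_hand_landmarks[0])
    return knn.predict([vec])[0]

# Usage: hands = mp.solutions.hands.Hands(max_num_hands=1)
```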

Design and Implementation of a Bimodal User Recognition System using Face and Audio (얼굴과 음성 정보를 이용한 바이모달 사용자 인식 시스템 설계 및 구현)

  • Kim, Myung-Hun;Lee, Chi-Geun;So, In-Mi;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.5 s.37
    • /
    • pp.353-362
    • /
    • 2005
  • Recently, research on bimodal recognition has become very active. In this paper, we propose a bimodal user recognition system that uses face information and audio information. Face recognition consists of a face detection step and a face recognition step. Face detection uses AdaBoost to find face candidate areas. After finding face candidates, PCA feature extraction is applied to reduce the dimension of the feature vector, and SVM classifiers are then used to detect and recognize faces. Audio recognition uses MFCC for audio feature extraction and HMM for recognition. Experimental results show that bimodal recognition can improve the user recognition rate much more than audio-only recognition, especially in the presence of noise.
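
As a rough illustration of the face branch (AdaBoost detection aside), scikit-learn can express the PCA-then-SVM step in a few lines; the component count, kernel, and data shapes are placeholder assumptions, not the paper's settings.

```python
# Hedged sketch of the PCA + SVM face-recognition step described above.
# n_components and the kernel are assumed; X rows are flattened face crops.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

face_recognizer = make_pipeline(
    PCA(n_components=50),   # reduce the feature-vector dimension
    SVC(kernel="rbf"),      # classify identities in the reduced space
)
# face_recognizer.fit(X_train, y_train)            # X: (n, pixels), y: user IDs
# predicted_user = face_recognizer.predict(X_test)
```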

Usability Test Guidelines for Speech-Oriented Multimodal User Interface (음성기반 멀티모달 사용자 인터페이스의 사용성 평가 방법론)

  • Hong, Ki-Hyung
    • MALSORI
    • /
    • no.67
    • /
    • pp.103-120
    • /
    • 2008
  • Basic components of a multimodal interface, such as speech recognition, speech synthesis, gesture recognition, and multimodal fusion, have their own technological limitations. For example, the accuracy of speech recognition decreases for large vocabularies and in noisy environments. In spite of those technological limitations, there are many applications in which speech-oriented multimodal user interfaces are very helpful to users. However, in order to expand the application areas of speech-oriented multimodal interfaces, we have to develop the interfaces with a focus on usability. In this paper, we introduce usability and user-centered design methodology in general. There has been much work on evaluating spoken dialogue systems. We summarize PARADISE (PARAdigm for DIalogue System Evaluation) and PROMISE (PROcedure for Multimodal Interactive System Evaluation), which are generalized evaluation frameworks for voice and multimodal user interfaces. Then, we present usability components for speech-oriented multimodal user interfaces and usability testing guidelines that can be used in a user-centered multimodal interface design process.
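
For context, PARADISE (recalled here from the original framework literature, not from this paper) models system performance as user satisfaction predicted from task success and dialogue costs:

```latex
% PARADISE performance function, recalled from Walker et al.'s framework:
% \mathcal{N} is z-score normalization, \kappa is the task-success measure,
% c_i are cost measures (e.g., number of turns, error rate), and \alpha and
% w_i are weights fitted by regression against user-satisfaction ratings.
\mathrm{Performance} \;=\; \alpha \cdot \mathcal{N}(\kappa) \;-\; \sum_{i=1}^{n} w_i \cdot \mathcal{N}(c_i)
```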

Context Awareness Model using the Improved Google Activity Recognition (개선된 Google Activity Recognition을 이용한 상황인지 모델)

  • Baek, Seungeun;Park, Sangwon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.1
    • /
    • pp.57-64
    • /
    • 2015
  • Activity recognition technology is gaining attention because it can provide useful information according to the user's situation. Before the spread of smartphones, activity recognition research had to infer the user's activity from independent sensors. Now, with the development of the IT industry, the user's activity can be inferred from the smartphone's built-in sensors, so activity recognition research has become more active. By applying an activity recognition system, we can develop services such as recommending applications according to the user's preference or providing route information. Some previous activity recognition systems consume too much energy because they use the GPS sensor. On the other hand, the activity recognition system that Google released recently (Google Activity Recognition) needs little power because it uses the network provider instead of GPS, making it suitable for smartphone applications. However, performance tests of Google Activity Recognition showed that it is difficult to obtain the user's exact activity because of unnecessary activity elements and some misrecognition. In this paper, we describe the problems of Google Activity Recognition and propose AGAR (Advanced Google Activity Recognition), which applies methods to improve accuracy, because more exact activity recognition is needed for new services based on activity recognition. To evaluate AGAR, we compare its performance with other activity recognition systems and demonstrate its applicability by developing an example program.
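
The abstract does not specify AGAR's correction method; one common way to suppress the transient misrecognitions it mentions is to smooth the recognizer's (activity, confidence) outputs over a sliding window with confidence-weighted voting. The sketch below illustrates that general idea only; the window size and labels are assumptions, not AGAR's actual algorithm.

```python
# Illustrative sketch (not AGAR itself): confidence-weighted majority vote
# over a sliding window of (activity, confidence) outputs, to suppress
# transient misrecognitions. Window size is an assumed parameter.
from collections import defaultdict, deque

class ActivitySmoother:
    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)

    def update(self, activity: str, confidence: float) -> str:
        """Feed one raw (activity, confidence) output; return smoothed label."""
        self.history.append((activity, confidence))
        votes = defaultdict(float)
        for act, conf in self.history:
            votes[act] += conf
        return max(votes, key=votes.get)

smoother = ActivitySmoother(window=3)
for raw in [("walking", 0.9), ("in_vehicle", 0.4), ("walking", 0.8)]:
    print(smoother.update(*raw))  # 'walking' persists despite the blip
```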

Design of Multimodal User Interface using Speech and Gesture Recognition for Wearable Watch Platform (착용형 단말에서의 음성 인식과 제스처 인식을 융합한 멀티 모달 사용자 인터페이스 설계)

  • Seong, Ki Eun;Park, Yu Jin;Kang, Soon Ju
    • KIISE Transactions on Computing Practices
    • /
    • v.21 no.6
    • /
    • pp.418-423
    • /
    • 2015
  • As technology advances at exceptional speed, the functions of wearable devices become more diverse and complicated, and many users find some of the functions difficult to use. The main aim of this paper is to provide the user with an interface that is friendlier and easier to use. Speech recognition is easy to use and makes it easy to enter commands. However, speech recognition is problematic on a wearable device with limited computing power and battery capacity: the device cannot predict when the user will give a spoken command, so speech recognition would have to stay active at all times, and the battery cost of waiting for a command makes this impractical. To solve this problem, we use gesture recognition. This paper describes how to use both speech and gesture recognition as a multimodal interface to increase the user's comfort.
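
A plausible reading of this design is that a cheap, always-on gesture detector (e.g., a wrist-raise inferred from the accelerometer) gates the power-hungry speech recognizer. The sketch below illustrates that gating pattern with a hypothetical wrist-raise rule and a stubbed recognizer; the threshold and function names are assumptions, not the paper's implementation.

```python
# Hedged sketch of the multimodal gating pattern: a low-power gesture check
# wakes the speech recognizer only when needed. The wrist-raise heuristic,
# threshold, and recognizer stub are hypothetical.
import math

WRIST_RAISE_G = 0.8  # assumed threshold on gravity along the watch's y-axis

def is_wrist_raised(ax: float, ay: float, az: float) -> bool:
    """Crude wrist-raise check: gravity mostly along +y when facing the user."""
    norm = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
    return ay / norm > WRIST_RAISE_G

def run_speech_recognizer() -> str:
    """Stub for the expensive speech pipeline; activated only on demand."""
    return "<recognized command>"

def on_accel_sample(ax: float, ay: float, az: float):
    # Speech recognition stays off until the gesture fires, saving battery.
    if is_wrist_raised(ax, ay, az):
        command = run_speech_recognizer()
        print("command:", command)

on_accel_sample(0.1, 9.7, 0.5)  # wrist raised -> recognizer runs once
```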