• Title/Summary/Keyword: User recognition


Building Control Box Attached Monitor based Color Grid Recognition Methods for User Access Authentication

  • Yoon, Sung Hoon;Lee, Kil Soo;Cha, Jae Sang;Khudaybergenov, Timur;Kim, Min Soo;Woo, Deok Gun;Kim, Jeong Uk
    • International Journal of Internet, Broadcasting and Communication / v.12 no.2 / pp.1-7 / 2020
  • Secure access to the lighting, heating, ventilation and air conditioning (HVAC), fire-safety, and security control boxes of building facilities is a primary objective of future smart buildings. This paper proposes authorized user access to the electrical, lighting, fire-safety, and security control boxes of a smart building using color-grid-coded optical camera communication (OCC) combined with face recognition technology. The existing CCTV subsystem can serve as the face recognition security subsystem, while the camera of a user's smart device acts as an OCC receiver for the color grid code carrying the user access authentication data sent by the control boxes. This approach increases the reliability of authorization control and provides highly secure authentication for access to building facility infrastructure; combining the received color grid code sequence with face identification yields strong security results. The control box monitor displays the encoded user access authentication information, and a smart device application detects and decodes the color grid code combinations, then sends them through the smart building network to the building management system for verification in combination with the user's facial features, which gives a high level of protection. The proposed concept was implemented on a testbed model, and experimental results verified secure user authentication in real time.
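The abstract does not specify the actual coding scheme, so the sketch below is only a hypothetical illustration of a color grid code: each grid cell carries 2 bits mapped to one of four assumed colors, as a monitor might display and a smart device camera might decode.

```python
# Hypothetical color grid code: 2 bits per cell over an assumed 4-color palette.
COLORS = ["red", "green", "blue", "yellow"]

def encode_to_grid(data, width=8):
    """Pack each byte as four 2-bit color symbols, laid out row by row."""
    cells = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            cells.append(COLORS[(byte >> shift) & 0b11])
    while len(cells) % width:          # pad the last row so the grid is rectangular
        cells.append(COLORS[0])
    return [cells[i:i + width] for i in range(0, len(cells), width)]

def decode_from_grid(grid, n_bytes):
    """Reassemble the original bytes from the detected cell colors."""
    symbols = [COLORS.index(c) for row in grid for c in row]
    out = []
    for i in range(n_bytes):
        b = 0
        for s in symbols[4 * i:4 * i + 4]:
            b = (b << 2) | s
        out.append(b)
    return bytes(out)
```

A real OCC receiver would additionally need frame synchronization and error correction, which this sketch omits.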

Implementation of a Finger-Gesture Remote Controller Using Moving-Direction Recognition of a Single Shape (단일 형상의 이동 방향 인식에 의한 손 동작 리모트 컨트롤러 구현)

  • Jang, Myeong-Soo;Lee, Woo-Beom
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.4 / pp.91-97 / 2013
  • This paper implements a finger-gesture remote controller using a single camera, based on recognizing the number of fingers and the direction of finger movement. The proposed method uses transformed YCbCr color-difference information to extract the hand region effectively. The number and positions of the fingers are computed using a double circle tracing method. In particular, a continuous user command can be performed repeatedly by recognizing the movement direction of a single finger shape, and the finger position information allows the same command to be amplified in the user experience. All processing tasks are implemented using the Intel OpenCV library and C++. To evaluate the performance of the proposed method, it was applied to commercial video player software as a remote controller; it achieved an average recognition rate of 89% across the user command modes.
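The YCbCr-based hand extraction mentioned above rests on a standard color conversion; a minimal sketch, using the ITU-R BT.601 conversion and illustrative Cb/Cr skin thresholds (the paper's exact transformed thresholds are not given in the abstract):

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion for one pixel."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Classify a pixel as skin by Cb/Cr bounds (commonly cited, illustrative values)."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

Applying `is_skin` per pixel yields a binary hand mask on which contour methods such as circle tracing can then operate.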

Person Recognition using Ocular Image based on BRISK (BRISK 기반의 눈 영상을 이용한 사람 인식)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society / v.19 no.5 / pp.881-889 / 2016
  • The ocular region has recently emerged as a new biometric trait for overcoming the performance limitations of iris recognition in situations where high user cooperation cannot be expected, because acquiring an ocular image, unlike an iris image, does not require close capture or high user cooperation. This study proposes a new method for ocular image recognition based on BRISK (binary robust invariant scalable keypoints). It uses the distance ratio of the two nearest neighbors to improve the accuracy of detecting corresponding keypoint pairs, and applies a geometric constraint to eliminate incorrect keypoint pairs. Experiments evaluating the validity of the proposed method were performed on the public MMU database. The person recognition rates on the left and right ocular image datasets were 91.1% and 90.6%, respectively, about 5% higher than the SIFT-based method that has been widely used in biometrics.
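The nearest-neighbor distance ratio test mentioned above can be sketched for binary descriptors like BRISK's, which are compared by Hamming distance; descriptors are shown here as packed integers, and the 0.8 ratio is an illustrative choice, not the paper's reported value:

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors packed as ints."""
    return bin(a ^ b).count("1")

def ratio_test_matches(query, train, ratio=0.8):
    """Keep a match only when the best distance is clearly smaller than the
    second-best (Lowe-style nearest-neighbor distance ratio test)."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:
            matches.append((qi, best[1]))   # (query index, train index)
    return matches
```

The surviving pairs would then be filtered further by the geometric constraint the paper describes (e.g. consistency of the implied transform).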

Recognition of Virtual Written Characters Based on Convolutional Neural Network

  • Leem, Seungmin;Kim, Sungyoung
    • Journal of Platform Technology / v.6 no.1 / pp.3-8 / 2018
  • This paper proposes a technique based on a convolutional neural network (CNN) for recognizing online handwritten cursive data obtained by tracing the motion trajectory of a user writing in 3D space. Recognizing virtual characters entered in 3D space is difficult because the trajectory includes both character strokes and movement strokes. Building on our previous study, which localized character strokes and movement strokes, we divide each syllable into consonant and vowel units using a labeling technique. The coordinate information of the separated consonants and vowels is converted into image data, and Korean handwriting recognition is performed with a CNN. After training the network on 1,680 syllables written by five writers, accuracy was measured on new writers who did not contribute training data. Phoneme-based recognition achieved 98.9% accuracy with the CNN. The proposed method has the advantage of drastically reducing the amount of training data required compared to syllable-based learning.
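The conversion of stroke coordinates into image data can be sketched as a simple rasterization; the grid size and normalization below are assumptions for illustration, since the abstract does not state the paper's actual resolution or preprocessing:

```python
def trajectory_to_image(points, size=28):
    """Normalize a list of (x, y) stroke coordinates into a size x size
    binary grid suitable as CNN input (sketch under assumed preprocessing)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    min_x, min_y = min(xs), min(ys)
    span = max(max(xs) - min_x, max(ys) - min_y) or 1.0  # keep aspect ratio
    grid = [[0] * size for _ in range(size)]
    for x, y in points:
        col = int((x - min_x) / span * (size - 1))
        row = int((y - min_y) / span * (size - 1))
        grid[row][col] = 1
    return grid
```

In practice consecutive points would also be connected by line segments so strokes appear continuous; that step is omitted here for brevity.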

Motion Recognition of Smartphone using Sensor Data (센서 정보를 활용한 스마트폰 모션 인식)

  • Lee, Yong Cheol;Lee, Chil Woo
    • Journal of Korea Multimedia Society / v.17 no.12 / pp.1437-1445 / 2014
  • A smartphone has very limited input methods despite its many functions. In this respect, sensor-based motion recognition offers an intuitive and versatile alternative user interface. In this paper, we recognize the user's motion using the acceleration, magnetic field, and gyro sensors in a smartphone. Because it is hard to obtain accurate data from a single sensor, we reduce sensing error with a gradient descent algorithm. To raise the recognition rate and capture small motions, we convert the rotational displacement to a spherical coordinate system and apply vector quantization. After vector quantization, motions are recognized using a Hidden Markov Model (HMM).
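The spherical-coordinate conversion and vector quantization steps can be sketched as follows; the 4 x 8 angular codebook is an illustrative choice, not the paper's actual codebook:

```python
import math

def to_spherical(dx, dy, dz):
    """Convert a displacement vector to spherical coordinates
    (radius r, polar angle theta, azimuth phi)."""
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    theta = math.acos(dz / r) if r else 0.0
    phi = math.atan2(dy, dx)
    return r, theta, phi

def quantize_direction(theta, phi, n_theta=4, n_phi=8):
    """Map the two angles onto one of n_theta * n_phi direction codes
    (a simple vector quantization of the unit sphere)."""
    ti = min(int(theta / math.pi * n_theta), n_theta - 1)
    pi_ = min(int((phi + math.pi) / (2 * math.pi) * n_phi), n_phi - 1)
    return ti * n_phi + pi_
```

The resulting code sequence is what an HMM would consume as its discrete observation symbols.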

Logical Activity Recognition Model for Smart Home Environment

  • Choi, Jung-In;Lim, Sung-Ju;Yong, Hwan-Seung
    • Journal of the Korea Society of Computer and Information / v.20 no.9 / pp.67-72 / 2015
  • With the expansion of the Internet of Things (IoT), studies on interaction between humans and things through motion recognition are increasing. This paper proposes a system that recognizes the user's logical activity in a home environment by attaching sensors to various objects. We employ Arduino sensors and infer the logical activity using the physical activity model developed in our previous research. The system can recognize activities such as watching TV, listening to music, talking, eating, cooking, sleeping, and using a computer. Experimental data were produced from virtual scenarios; the average recognition rate was 95%, though the result can vary with the sensor setup and physical activity recognition errors. The recognized results are visualized in various graphs for the user.
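One way such logical activity inference could look is a rule table mapping combinations of object-attached sensor events to activities; the rules below are entirely hypothetical, since the abstract does not disclose the paper's actual model:

```python
# Hypothetical mapping from object-attached sensor events to logical activities.
ACTIVITY_RULES = {
    frozenset({"tv_on", "sofa_pressure"}): "watching TV",
    frozenset({"speaker_on"}): "listening to music",
    frozenset({"stove_on", "kitchen_motion"}): "cooking",
    frozenset({"pc_on", "chair_pressure"}): "using computer",
}

def recognize_activity(events):
    """Return the logical activity whose required sensor events are all
    present, preferring the most specific (largest) matching rule."""
    active = set(events)
    best = None
    for required, activity in ACTIVITY_RULES.items():
        if required <= active and (best is None or len(required) > best[0]):
            best = (len(required), activity)
    return best[1] if best else "unknown"
```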

Visual Touch Recognition for NUI Using Voronoi-Tessellation Algorithm (보로노이-테셀레이션 알고리즘을 이용한 NUI를 위한 비주얼 터치 인식)

  • Kim, Sung Kwan;Joo, Young Hoon
    • The Transactions of The Korean Institute of Electrical Engineers / v.64 no.3 / pp.465-472 / 2015
  • This paper presents visual touch recognition for a natural user interface (NUI) using a Voronoi tessellation algorithm. The proposed method consists of three parts: hand region extraction, hand feature point extraction, and visual touch recognition. To improve the robustness of hand region extraction, we use an RGB/HSI color model, the Canny edge detection algorithm, and spatial frequency information. To improve the accuracy of hand feature point extraction, we apply the Douglas-Peucker algorithm, and to recognize the visual touch we use Voronoi tessellation. Finally, we demonstrate the feasibility and applicability of the proposed algorithms through experiments.
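At its core, a Voronoi tessellation assigns every location to its nearest seed point; a minimal brute-force sketch over a pixel grid, where the seeds would correspond to the extracted hand feature points:

```python
def voronoi_labels(seeds, width, height):
    """Label each cell of a width x height grid with the index of its
    nearest seed point (brute-force discrete Voronoi tessellation)."""
    def nearest(x, y):
        return min(range(len(seeds)),
                   key=lambda i: (seeds[i][0] - x) ** 2 + (seeds[i][1] - y) ** 2)
    return [[nearest(x, y) for x in range(width)] for y in range(height)]
```

Production implementations use Fortune's sweep-line algorithm or Delaunay duality instead of this O(cells x seeds) scan, but the partition produced is the same.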

Real-Time Hand Gesture Recognition Based on Deep Learning (딥러닝 기반 실시간 손 제스처 인식)

  • Kim, Gyu-Min;Baek, Joong-Hwan
    • Journal of Korea Multimedia Society / v.22 no.4 / pp.424-431 / 2019
  • In this paper, we propose a real-time hand gesture recognition algorithm to eliminate the inconvenience of using hand controllers in VR applications. The user's 3D hand coordinates are detected with a Leap Motion sensor, and the coordinates are converted into a two-dimensional image. We classify hand gestures in real time by feeding the imaged 3D hand coordinate information to an SSD (Single Shot multibox Detector) model, one of the CNN (Convolutional Neural Network) family. We propose using all three image channels rather than only one, as well as a sliding window technique to recognize a gesture in real time as the user actually performs it. Experiments measuring the recognition rate and learning performance of the proposed model showed 99.88% recognition accuracy and higher usability than the existing algorithm.
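The sliding window idea can be sketched as smoothing per-frame predictions: a gesture is only emitted once it dominates the last few frames. The window length and vote threshold here are illustrative, not the paper's tuned values:

```python
from collections import Counter, deque

def sliding_window_vote(frame_labels, window=5, min_count=3):
    """Emit a gesture only when it wins at least min_count of the last
    `window` per-frame predictions; otherwise emit None (no gesture)."""
    buf = deque(maxlen=window)
    out = []
    for label in frame_labels:
        buf.append(label)
        top, cnt = Counter(buf).most_common(1)[0]
        out.append(top if cnt >= min_count else None)
    return out
```

This suppresses single-frame misclassifications at the cost of a few frames of latency before a gesture is confirmed.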

A study on user defined spoken wake-up word recognition system using deep neural network-hidden Markov model hybrid model (Deep neural network-hidden Markov model 하이브리드 구조의 모델을 사용한 사용자 정의 기동어 인식 시스템에 관한 연구)

  • Yoon, Ki-mu;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.39 no.2 / pp.131-136 / 2020
  • A wake-up word (WUW) is a short utterance used to switch a speech recognizer into recognition mode; a WUW defined by the person who actually uses the recognizer is called a user-defined WUW. In this paper, to recognize user-defined WUWs, we construct systems based on the traditional Gaussian Mixture Model-Hidden Markov Model (GMM-HMM), Linear Discriminant Analysis (LDA)-GMM-HMM, and LDA-Deep Neural Network (DNN)-HMM, and compare their performance. To improve the recognition accuracy of the WUW system, a threshold method is applied to each model, which simultaneously and significantly reduces the WUW error rate and the rejection failure rate for non-WUW speech. For the LDA-DNN-HMM system, at a WUW error rate of 9.84%, the rejection failure rate for non-WUW speech is 0.0058%, about 4.82 times lower than that of the LDA-GMM-HMM system. These results demonstrate that the LDA-DNN-HMM model developed in this paper is highly effective for building user-defined WUW recognition systems.
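The threshold method trades the two error rates against each other: raising the decision threshold rejects more non-WUW speech but misses more true wake-ups. A sketch of how the two rates are measured for a given threshold over model scores (the score scale and values are illustrative):

```python
def error_rates(wuw_scores, non_wuw_scores, threshold):
    """Given model scores for true wake-up words and for non-WUW speech,
    return (false-rejection rate, false-acceptance rate) at a threshold."""
    frr = sum(s < threshold for s in wuw_scores) / len(wuw_scores)
    far = sum(s >= threshold for s in non_wuw_scores) / len(non_wuw_scores)
    return frr, far
```

Sweeping the threshold over held-out scores traces the operating curve from which a point like "9.84% WUW error at 0.0058% rejection failure" would be chosen.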

Developing a New Algorithm for Conversational Agent to Detect Recognition Error and Neologism Meaning: Utilizing Korean Syllable-based Word Similarity (대화형 에이전트 인식오류 및 신조어 탐지를 위한 알고리즘 개발: 한글 음절 분리 기반의 단어 유사도 활용)

  • Jung-Won Lee;Il Im
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.267-286 / 2023
  • Conversational agents such as AI speakers rely on voice conversation for human-computer interaction, and voice recognition errors often occur. Recognition errors in user utterance records fall into two types. The first is misrecognition, where the agent fails to recognize the user's speech entirely. The second is misinterpretation, where the speech is recognized and a service is provided, but the interpretation differs from the user's intention. Misinterpretation errors require separate detection because they are recorded as successful service interactions. In this study, various text separation methods were applied to detect misinterpretation. For each separation method, the similarity of consecutive utterance pairs was computed using word embedding and document embedding techniques, which convert words and documents into vectors; this goes beyond simple word-based similarity calculation to explore a new way of detecting misinterpretation errors. Real user utterance records were used to train and develop a detection model based on patterns of misinterpretation causes. The results show that initial consonant extraction produced the most significant results for detecting misinterpretation errors caused by unregistered neologisms, and comparison with the other separation methods revealed different error types. This study has two main implications. First, for misinterpretation errors that are difficult to detect, it proposed diverse text separation methods and found a novel one that improved performance remarkably. Second, if applied to conversational agents or voice recognition services that require neologism detection, error patterns arising at the voice recognition stage can be characterized. The study also proposed and verified that, even for utterances not categorized as errors, services can be provided in line with the results users desire.
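The initial consonant extraction highlighted above exploits the regular structure of precomposed Hangul syllables in Unicode: each syllable in the U+AC00 block decomposes arithmetically into initial consonant (choseong), vowel, and final consonant. A minimal sketch of that extraction step (the paper's full pipeline, including the embedding-based similarity, is not reproduced here):

```python
# The 19 Hangul initial consonants (choseong), in Unicode order.
CHOSEONG = ["ㄱ", "ㄲ", "ㄴ", "ㄷ", "ㄸ", "ㄹ", "ㅁ", "ㅂ", "ㅃ", "ㅅ",
            "ㅆ", "ㅇ", "ㅈ", "ㅉ", "ㅊ", "ㅋ", "ㅌ", "ㅍ", "ㅎ"]

def initial_consonants(text):
    """Replace each precomposed Hangul syllable with its initial consonant;
    non-Hangul characters pass through unchanged."""
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:  # Hangul Syllables block
            # syllable = 0xAC00 + (initial * 21 + medial) * 28 + final
            out.append(CHOSEONG[(code - 0xAC00) // (21 * 28)])
        else:
            out.append(ch)
    return "".join(out)
```

Comparing the choseong strings of consecutive utterances can reveal when a neologism was misheard as a similar-sounding registered word, even though the full word forms differ.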