• Title/Abstract/Keyword: Recognition devices

2단계 히든마코프 모델을 이용한 제스쳐의 성능향상 연구 (Improvement of Gesture Recognition using 2-stage HMM)

  • 정훤재;박현준;김동한
    • 제어로봇시스템학회논문지
    • /
    • Vol. 21 No. 11
    • /
    • pp.1034-1037
    • /
    • 2015
  • In recent years in the field of robotics, various methods have been developed to create an intimate relationship between people and robots. These include speech, vision, and biometric recognition as well as gesture-based interaction, and such recognition technologies are used in various wearable devices, smartphones, and other electronic devices for convenience. Among these technologies, gesture recognition is the most commonly used and the most appropriate for wearable devices. Gesture recognition can be classified as contact or non-contact. This paper proposes contact gesture recognition with IMU and EMG sensors that applies the hidden Markov model (HMM) twice. In the first stage, several simple motions are combined into main gestures through a standard HMM process, well known in pattern recognition. In the second stage, the sequence of main gestures produced by the first stage is mapped to higher-order gestures through a second HMM. In this way, more natural and intelligent gestures can be implemented from simple ones. This layered process can play a larger role in gesture-recognition-based UX for many wearable and smart devices.
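
The two-stage pipeline described above can be sketched roughly as follows: a minimal discrete-HMM forward scorer in plain NumPy (not the authors' IMU/EMG implementation), where stage one maps quantized sensor windows to primitive gestures and stage two maps the resulting primitive sequence to a higher-order gesture. All model parameters, labels, and symbols here are toy values for illustration.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete
    observation sequence. pi: initial state probs (S,), A: state
    transitions (S, S), B: emission probs (S, V)."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        logp += np.log(alpha.sum())
        alpha /= alpha.sum()
    return logp

def classify(obs, models):
    """Pick the gesture label whose HMM gives the highest likelihood."""
    return max(models, key=lambda k: forward_loglik(obs, *models[k]))

# --- Stage 1: toy per-primitive HMMs over quantized sensor symbols ---
pi = np.array([1.0, 0.0])
A = np.array([[0.8, 0.2], [0.2, 0.8]])
primitive_models = {
    "raise": (pi, A, np.array([[0.9, 0.1], [0.8, 0.2]])),  # emits mostly 0
    "swing": (pi, A, np.array([[0.1, 0.9], [0.2, 0.8]])),  # emits mostly 1
}

# --- Stage 2: toy per-gesture HMMs over primitive label sequences ----
prim_ids = {"raise": 0, "swing": 1}
gesture_models = {
    "wave":   (pi, A, np.array([[0.2, 0.8], [0.1, 0.9]])),  # mostly swings
    "salute": (pi, A, np.array([[0.9, 0.1], [0.8, 0.2]])),  # mostly raises
}

def recognize(windows):
    """Stage 1 labels each window; stage 2 scores the label sequence."""
    primitives = [classify(w, primitive_models) for w in windows]
    seq = np.array([prim_ids[p] for p in primitives])
    return classify(seq, gesture_models)
```

Feeding `recognize` three windows dominated by symbol 1 labels each as "swing" in stage one, and the swing-heavy sequence scores highest under the "wave" model in stage two.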

다중 디바이스에서 손 인식을 통한 선택적 제어 (Selective control of multiple devices via finger recognition)

  • 장호정;김태현;윤영미
    • 한국멀티미디어학회논문지
    • /
    • Vol. 17 No. 1
    • /
    • pp.60-68
    • /
    • 2014
  • Recently, systems that control computers through user-computer interaction have been actively studied, and research on human hand recognition in particular has been extensive. To date, most studies have focused on improving hand-recognition accuracy on a single device or on controlling a single device through hand gestures. However, as hand-recognition technology is applied across industries, research on selectively controlling systems across multiple devices is needed for diverse environments and situations. In this paper, we selectively control one of two devices through recognition of the user's hand. In addition, through experiments in environments covering six conditions, we identify the environment that maximizes the success rate of selective hand recognition between the two devices.

Robustness of Face Recognition to Variations of Illumination on Mobile Devices Based on SVM

  • Nam, Gi-Pyo;Kang, Byung-Jun;Park, Kang-Ryoung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 4 No. 1
    • /
    • pp.25-44
    • /
    • 2010
  • With the increasing popularity of mobile devices, it has become necessary to protect the private information and content on these devices. Face recognition has been favored over conventional passwords or security keys because it can be easily implemented with a built-in camera while providing user convenience. However, because mobile devices are used both indoors and outdoors, there can be many illumination changes, which reduce the accuracy of face recognition. Therefore, we propose a new face recognition method for mobile devices that is robust to illumination variations. This research makes the following four original contributions. First, we compared the performance of face recognition under illumination variations for several illumination normalization procedures suitable for mobile devices with low processing power: the Retinex filter, histogram equalization, and histogram stretching. Second, we compared the performance of global and local face recognition methods such as PCA (Principal Component Analysis), LNMF (Local Non-negative Matrix Factorization), and LBP (Local Binary Pattern) using an integer-based kernel suitable for mobile devices with low processing power. Third, the characteristics of each method under illumination variations are analyzed. Fourth, we use the matching scores of the two normalization methods showing the best and second-best performance, Retinex and histogram stretching, as the inputs of an SVM (Support Vector Machine) classifier, which increases the accuracy of face recognition. Experimental results with two databases (data collected by a mobile device and the AR database) showed that the accuracy of face recognition achieved by the proposed method was superior to that of other methods.
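
As a rough illustration of one of the lightweight normalizations compared above, histogram stretching linearly rescales pixel intensities to span the full 0-255 range. This is a minimal NumPy sketch under generic assumptions, not the paper's implementation:

```python
import numpy as np

def histogram_stretch(img):
    """Linearly rescale intensities so the darkest pixel maps to 0 and
    the brightest to 255 -- a low-cost normalization well suited to
    devices with little processing power."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:                        # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    return ((img - lo) / (hi - lo) * 255.0).astype(np.uint8)

# Example: a dim face crop occupying only intensities 50..100
face = np.array([[50, 60], [80, 100]], dtype=np.uint8)
stretched = histogram_stretch(face)     # now spans the full 0..255 range
```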

모바일/임베디드 객체 및 장면 인식 기술 동향 (Recent Trends of Object and Scene Recognition Technologies for Mobile/Embedded Devices)

  • 이수웅;이근동;고종국;이승재;유원영
    • 전자통신동향분석
    • /
    • Vol. 34 No. 6
    • /
    • pp.133-144
    • /
    • 2019
  • Although deep-learning-based visual image recognition technology has evolved rapidly, most commonly used methods focus solely on recognition accuracy. However, the demand for low-latency, low-power image recognition with acceptable accuracy is rising for practical applications on edge devices. For example, most Internet of Things (IoT) devices have low computing power, requiring more pragmatic use of these technologies; in addition, drones and smartphones have limited battery capacity, which practical applications must take into consideration. Furthermore, some people prefer that central servers not process their private images, as high-performance server-based recognition technologies require. To address these demands, object and scene recognition technologies for mobile/embedded devices, which enable optimized neural networks to operate in mobile and embedded environments, are gaining attention. In this report, we briefly summarize recent trends and issues in object and scene recognition technologies for mobile and embedded devices.

Development of a Hybrid Recognition System Using Biometrics to Manage Smart Devices based on Internet of Things

  • Ban, Ilhak;Jo, Seonghun;Park, Haneum;Um, Junho;Kim, Se-Jin
    • 통합자연과학논문집
    • /
    • Vol. 11 No. 3
    • /
    • pp.148-153
    • /
    • 2018
  • In this paper, we propose a hybrid recognition system that obtains state information and controls Internet of Things (IoT) based smart devices using two forms of recognition. First, facial recognition verifies the owner of the mobile device (smartphone, tablet PC, and so on) and obtains the state information of the IoT-based smart devices (smart cars, smart appliances, and so on); fingerprint recognition is then used to control them. In the conventional system, the state and control messages between the mobile devices and smart devices are exchanged only through the cellular mobile network, so we also propose direct communication to reduce the total transmission time. In addition, we develop a testbed of the proposed system using smartphones, desktop computers, and an Arduino-based vehicle as one of the smart devices. We evaluate the total transmission time of the conventional and direct communications and show that direct communication with the proposed system performs better.
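
The transmission-time argument can be illustrated with a toy latency model (all numbers below are illustrative assumptions, not the paper's measurements): a relayed exchange pays the uplink, server, and downlink legs per message, whereas a direct device-to-device exchange pays a single link.

```python
# Toy latency model (all values are illustrative, in milliseconds).
UPLINK_MS = 40.0      # mobile device -> cellular network
SERVER_MS = 15.0      # processing/forwarding at the server
DOWNLINK_MS = 40.0    # cellular network -> smart device
DIRECT_MS = 25.0      # one direct device-to-device hop

def relayed_time(messages):
    """Every message traverses the uplink, the server, and the downlink."""
    return messages * (UPLINK_MS + SERVER_MS + DOWNLINK_MS)

def direct_time(messages):
    """Every message traverses a single direct link."""
    return messages * DIRECT_MS

# For a 10-message control session the direct path is far cheaper:
saved = relayed_time(10) - direct_time(10)   # 950.0 - 250.0 = 700.0 ms
```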

Hand Gesture Recognition Suitable for Wearable Devices using Flexible Epidermal Tactile Sensor Array

  • Byun, Sung-Woo;Lee, Seok-Pil
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 13 No. 4
    • /
    • pp.1732-1739
    • /
    • 2018
  • With the explosion of digital devices, interaction technologies between humans and devices are required more than ever. Hand gesture recognition is especially advantageous in that it can be used easily. It is divided into two groups: contact-sensor and non-contact-sensor approaches. Compared with non-contact gesture recognition, contact gesture recognition has the advantage of being able to classify gestures that leave a camera's field of view; also, since there is direct contact with the user, relatively accurate information can be acquired. Electromyography (EMG) and force-sensitive resistors (FSRs) are the typical methods used for contact gesture recognition based on muscle activity. These sensors, however, are generally too sensitive to environmental disturbances such as electrical noise and electromagnetic signals. In this paper, we propose a novel contact gesture recognition method based on a Flexible Epidermal Tactile Sensor Array (FETSA), which measures electrical signals according to movements of the wrist. To recognize gestures using FETSA, we extracted feature sets, and the gestures were subsequently classified using a support vector machine. The performance of the proposed gesture recognition method is very promising in comparison with two previous non-contact and contact gesture recognition studies.
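
Feature extraction from a multi-channel contact sensor of this kind is commonly done over sliding windows. The sketch below computes two classic time-domain features, mean absolute value (MAV) and root-mean-square (RMS), per channel; the window length and the feature choice are generic assumptions, not the paper's actual feature set.

```python
import numpy as np

def window_features(signals, win=50, step=25):
    """Slide a window over a (samples, channels) sensor recording and
    return one feature vector per window: per-channel mean absolute
    value (MAV) concatenated with per-channel root-mean-square (RMS).
    These vectors would then feed a classifier such as an SVM."""
    feats = []
    for start in range(0, signals.shape[0] - win + 1, step):
        w = signals[start:start + win]          # (win, channels)
        mav = np.mean(np.abs(w), axis=0)
        rms = np.sqrt(np.mean(w ** 2, axis=0))
        feats.append(np.concatenate([mav, rms]))
    return np.array(feats)                      # (n_windows, 2*channels)

# Example: 200 samples from a hypothetical 4-channel tactile array
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 4))
F = window_features(x)                          # 7 windows, 8 features each
```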

Study of Eye Blinking to Improve Face Recognition for Screen Unlock on Mobile Devices

  • Chu, Chung-Hua;Feng, Yu-Kai
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 13 No. 2
    • /
    • pp.953-960
    • /
    • 2018
  • Recently, eye-blink recognition and face recognition have become very popular and promising techniques. In some cases, people can use photos or face masks to defeat mobile security systems, so we propose an eye-blink detection method that locates the eyes using the proportions of the human face. The proposed method detects the movement of the eyeballs and the number of eye blinks to improve face recognition for screen unlock on mobile devices. Experimental results show that our method is efficient and robust for screen unlock on mobile devices.
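
The proportion-based eye localization and blink counting can be sketched as follows: place the eye band by a fixed facial proportion, then count blinks as open-to-closed transitions in a per-frame eye-openness signal. The proportions, threshold, and signal are generic assumptions for illustration, not the authors' detector.

```python
def eye_region(face_box):
    """Place the eye band by fixed facial proportions: the eyes sit
    roughly in the second quarter of the face height (a common rule of
    thumb, not the paper's exact ratio)."""
    x, y, w, h = face_box
    return (x, y + h // 4, w, h // 4)

def count_blinks(openness, closed_thresh=0.2):
    """Count blinks as transitions from open (above threshold) to
    closed (below threshold) in a per-frame eye-openness signal."""
    blinks, was_open = 0, openness[0] >= closed_thresh
    for v in openness[1:]:
        if was_open and v < closed_thresh:
            blinks += 1
        was_open = v >= closed_thresh
    return blinks

# Synthetic per-frame openness for a clip containing two blinks
signal = [0.9, 0.8, 0.1, 0.05, 0.85, 0.9, 0.1, 0.9]
```

A static photo would yield a flat openness signal with zero blinks, which is exactly the spoofing case the blink check is meant to reject.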

1D-CNN-LSTM Hybrid-Model-Based Pet Behavior Recognition through Wearable Sensor Data Augmentation

  • Hyungju Kim;Nammee Moon
    • Journal of Information Processing Systems
    • /
    • Vol. 20 No. 2
    • /
    • pp.159-172
    • /
    • 2024
  • The number of healthcare products available for pets has increased in recent times, which has prompted active research into wearable devices for pets. However, the data collected through such devices are limited by outliers and missing values owing to the anomalous and irregular movement characteristics of pets. Hence, we propose pet behavior recognition based on a hybrid one-dimensional convolutional neural network (CNN) and long short-term memory (LSTM) model using pet wearable devices. An Arduino-based pet wearable device was first fabricated to collect gyroscope and accelerometer values for behavior recognition. Data augmentation was then performed after replacing missing values and outliers during preprocessing, and the behaviors were classified into five types. To prevent bias toward specific actions in the data augmentation, the number of datasets was compared and balanced, and CNN-LSTM-based deep learning was performed. The five subdivided behaviors and the overall performance were then evaluated; the overall accuracy of behavior recognition was about 88.76%.
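
The preprocessing step, replacing missing values and outliers before augmentation, might look like the NumPy sketch below. The interpolation and robust-z-score clamping are generic choices with illustrative thresholds, not necessarily the paper's exact procedure.

```python
import numpy as np

def clean_channel(x, z_thresh=3.0):
    """Replace NaNs by linear interpolation, then clamp outliers using
    a robust z-score (median and MAD), so a single spike cannot inflate
    the scale estimate the way it would with mean/std."""
    x = np.asarray(x, dtype=np.float64).copy()
    nan = np.isnan(x)
    if nan.any():                                   # fill sensor dropouts
        idx = np.arange(x.size)
        x[nan] = np.interp(idx[nan], idx[~nan], x[~nan])
    med = np.median(x)
    mad = np.median(np.abs(x - med)) * 1.4826       # ~= std for Gaussians
    if mad > 0:                                     # clamp spikes
        x = np.clip(x, med - z_thresh * mad, med + z_thresh * mad)
    return x

# Example: an accelerometer channel with a dropout and a spike
raw = [1.0, np.nan, 3.0, 2.0, 100.0, 2.0, 1.0]
clean = clean_channel(raw)
```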

Vision- Based Finger Spelling Recognition for Korean Sign Language

  • Park Jun;Lee Dae-hyun
    • 한국멀티미디어학회논문지
    • /
    • Vol. 8 No. 6
    • /
    • pp.768-775
    • /
    • 2005
  • Since sign languages are the main means of communication among hearing-impaired people, there are communication difficulties between speech-oriented people and sign-language-oriented people. Automated sign-language recognition may resolve these communication problems. In sign languages, finger spelling is used to spell names and words that are not listed in the dictionary. There have been research activities on gesture and posture recognition using glove-based devices. However, these devices are often expensive, cumbersome, and inadequate for recognizing elaborate finger spelling, and the use of colored patches or gloves also causes discomfort. In this paper, a vision-based finger spelling recognition system is introduced. In our method, captured hand-region images are separated from the background using a skin detection algorithm, assuming that there are no skin-colored objects in the background. Hand postures are then recognized using a two-dimensional grid analysis method. Our recognition system is not sensitive to the size or rotation of the input posture images. By optimizing the weights of the posture features using a genetic algorithm, our system achieved accuracy matching that of other systems using devices or colored gloves. We applied our posture recognition system to Korean Sign Language finger spelling, achieving better than 93% accuracy.
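
The two-dimensional grid analysis can be sketched roughly as follows: crop the binary skin mask to the hand's bounding box and compute the fraction of skin pixels in each grid cell, which makes the feature vector largely insensitive to hand size and position. The grid size is an assumption, and the paper's GA-optimized feature weights are omitted.

```python
import numpy as np

def grid_features(mask, grid=4):
    """Fraction of skin pixels per cell of a grid x grid partition of
    the hand's bounding box in a binary mask (1 = skin). Cropping to
    the bounding box makes the features scale- and position-tolerant."""
    ys, xs = np.nonzero(mask)
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    feats = np.empty((grid, grid))
    for i in range(grid):
        for j in range(grid):
            cell = crop[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            feats[i, j] = cell.mean() if cell.size else 0.0
    return feats.ravel()                  # feature vector of length grid*grid

# A solid square yields the same features at any size or position:
small = np.zeros((20, 20), dtype=np.uint8); small[2:6, 3:7] = 1
large = np.zeros((80, 80), dtype=np.uint8); large[10:50, 20:60] = 1
```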

멀티 디스플레이 콘텐츠 전송 시스템을 위한 디바이스 연결 및 배치 인식 기법의 구현 (An Implementation of Device Connection and Layout Recognition Techniques for the Multi-Display Contents Delivery System)

  • 전소연;임순범
    • 한국멀티미디어학회논문지
    • /
    • Vol. 19 No. 8
    • /
    • pp.1479-1486
    • /
    • 2016
  • According to the advancement of display devices, the multi-screen contents display environment is growing to be accepted for the display exhibition area. The objectives of this research are to find communications technology and to design an editor interface of contents delivery system for the larger and adaptive multi-display workspaces. The proposed system can find existence of display devices and get information without any additional tools like marker, and can recognize device layout with only web-cam and image processing technology. The multi-display contents delivery system is composed of devices with three roles; display device, editor device, and fixed server. The editor device which has the role of main control uses UPnP technology to find existence and receive information of display devices. extract appointed color in captured picture using a tracking library to recognize the physical layout of display devices. After the device information and physical layout of display devices are connected, the content delivery system allows the display contents to be sent to the corresponding display devices through WebSocket technology. Also the experimental results show the possibility of our device connection and layout recognition techniques can be utilized for the large spaced and adaptive multi-display applications.