• Title/Abstract/Keyword: Motion Recognition System

적외선 카메라를 이용한 에어 인터페이스 시스템(AIS) 연구 (A Study on Air Interface System (AIS) Using Infrared Ray (IR) Camera)

  • 김효성;정현기;김병규
    • 정보처리학회논문지B
    • /
    • Vol.18B No.3
    • /
    • pp.109-116
    • /
    • 2011
  • In this paper, we implement Air Interface, a next-generation interface that lets a user operate a computer with hand gestures alone, without any mechanical input device. The system first exploits the total internal reflection of infrared light, and then segments the hand region from the acquired IR images. The hand region segmented in every frame is fed to a hand-gesture recognition module for event handling, and control is finally performed by recognizing gestures mapped to individual control events. This paper introduces the image-processing and recognition techniques implemented for hand-region detection, tracking, and gesture recognition; the developed Air Interface system is expected to be highly applicable to street advertising, presentations, kiosks, and similar settings.
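The hand-segmentation step described in this abstract (bright IR reflections picked out of the camera frame) can be sketched roughly as follows; the threshold value, function names, and toy frame are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch: threshold an IR intensity image (bright = IR reflected
# by the hand) and return the bounding box of above-threshold pixels.

def segment_hand(ir_frame, threshold=200):
    """Return (min_row, min_col, max_row, max_col) of bright pixels, or None."""
    coords = [(r, c)
              for r, row in enumerate(ir_frame)
              for c, v in enumerate(row)
              if v >= threshold]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return (min(rows), min(cols), max(rows), max(cols))

frame = [
    [10,  10,  10,  10],
    [10, 250, 240,  10],
    [10, 230, 255,  10],
    [10,  10,  10,  10],
]
print(segment_hand(frame))  # (1, 1, 2, 2)
```

In a real system the per-frame bounding box (or centroid) would then feed the gesture-recognition stage mentioned in the abstract.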

센서 정보를 활용한 스마트폰 모션 인식 (Motion Recognition of Smartphone using Sensor Data)

  • 이용철;이칠우
    • 한국멀티미디어학회논문지
    • /
    • Vol.17 No.12
    • /
    • pp.1437-1445
    • /
    • 2014
  • A smartphone has very limited input methods relative to its wide range of functions. In this respect, sensor-based motion recognition is one alternative that can provide an intuitive and versatile user interface. In this paper, we recognize the user's motion using the acceleration, magnetic field, and gyro sensors of a smartphone. Because it is hard to obtain accurate data from a single sensor, we reduce sensing error with a gradient descent algorithm. To raise the recognition rate and recognize small motions, we apply vector quantization after converting the rotational displacement to a spherical coordinate system. After the vector quantization step, we recognize motions using a Hidden Markov Model (HMM).
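The spherical-coordinate conversion and vector quantization described above can be sketched as follows; the bin counts and function names are illustrative assumptions, not the paper's actual parameters:

```python
import math

def to_spherical(x, y, z):
    """Convert a rotation-displacement vector to spherical coordinates."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r) if r else 0.0  # polar angle in [0, pi]
    phi = math.atan2(y, x)                  # azimuth in (-pi, pi]
    return r, theta, phi

def quantize_direction(theta, phi, n_theta=4, n_phi=8):
    """Map (theta, phi) to one of n_theta * n_phi discrete symbols for an HMM."""
    ti = min(int(theta / math.pi * n_theta), n_theta - 1)
    pi_idx = int((phi + math.pi) / (2 * math.pi) * n_phi) % n_phi
    return ti * n_phi + pi_idx

r, theta, phi = to_spherical(1.0, 0.0, 0.0)
print(quantize_direction(theta, phi))  # 20
```

The resulting symbol sequence is what a discrete HMM would consume for motion classification.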

안면 움직임 분석을 통한 단음절 음성인식 (Monosyllable Speech Recognition through Facial Movement Analysis)

  • 강동원;서정우;최진승;최재봉;탁계래
    • 전기학회논문지
    • /
    • Vol.63 No.6
    • /
    • pp.813-819
    • /
    • 2014
  • The purpose of this study was to extract accurate parameters of facial movement features using a 3-D motion capture system for lip-reading-based speech recognition. Instead of features obtained from conventional camera images, the 3-D motion system was used to obtain quantitative data on actual facial movements and to analyze 11 variables that exhibit particular patterns, such as nose, lip, jaw, and cheek movements, during monosyllable vocalization. Fourteen subjects, all in their 20s, were asked to vocalize 11 types of Korean vowel monosyllables three times each, with 36 reflective markers on their faces. The obtained facial movement data were converted into 11 parameters and presented as patterns for each monosyllable vocalization. The parameter patterns were used to train and recognize each monosyllable with speech recognition algorithms based on the Hidden Markov Model (HMM) and the Viterbi algorithm. The recognition accuracy for the 11 monosyllables was 97.2%, which suggests the possibility of Korean speech recognition through quantitative facial movement analysis.
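The Viterbi decoding step named in this abstract (finding the most likely hidden-state sequence under an HMM) can be sketched generically as follows; the toy two-state model and all probabilities are illustrative, not the paper's trained models:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden-state path for an observation sequence."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for o in obs[1:]:
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max((V[-2][p] * trans_p[p][s] * emit_p[s][o], p)
                             for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Toy model: two hidden states, three observations.
states = ("A", "B")
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.5, "y": 0.4, "z": 0.1},
          "B": {"x": 0.1, "y": 0.3, "z": 0.6}}
print(viterbi(("x", "y", "z"), states, start_p, trans_p, emit_p))  # ['A', 'A', 'B']
```

In the paper's setting, observations would be the quantized facial-parameter patterns and states would belong to per-monosyllable HMMs.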

Video Representation via Fusion of Static and Motion Features Applied to Human Activity Recognition

  • Arif, Sheeraz;Wang, Jing;Fei, Zesong;Hussain, Fida
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol.13 No.7
    • /
    • pp.3599-3619
    • /
    • 2019
  • In a human activity recognition system, both static and motion information play a crucial role in achieving efficient and competitive results. Most existing methods extract video features insufficiently and cannot assess the contribution of each (static and motion) component. Our work highlights this problem and proposes a Static-Motion Fused Features Descriptor (SMFD), which intelligently leverages both static and motion features in a single descriptor. First, static features are learned by a two-stream 3D convolutional neural network. Second, trajectories are extracted by tracking key points, and only trajectories located in the central region of the original video frame are selected, in order to reduce irrelevant background trajectories as well as computational complexity. Then, shape and motion descriptors are obtained along with key points using SIFT flow. Next, a Cholesky transformation is introduced to fuse the static and motion feature vectors and guarantee an equal contribution from all descriptors. Finally, a Long Short-Term Memory (LSTM) network is utilized to discover long-term temporal dependencies and make the final prediction. To confirm the effectiveness of the proposed approach, extensive experiments were conducted on three well-known datasets, i.e., UCF101, HMDB51, and YouTube. The findings show that the resulting recognition system is on par with state-of-the-art methods.
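The Cholesky transformation this abstract uses as its fusion building block is a decomposition A = L·Lᵀ of a symmetric positive-definite matrix. A minimal pure-Python sketch of the decomposition itself (the paper's exact fusion formula is not reproduced here):

```python
def cholesky(A):
    """Lower-triangular L with A = L @ L.T, for symmetric positive-definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (A[i][i] - s) ** 0.5  # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

L = cholesky([[4.0, 2.0], [2.0, 3.0]])
print(L)  # [[2.0, 0.0], [1.0, 1.4142135623730951]]
```

In the fusion context, such a factor can be used to whiten or re-weight concatenated feature vectors so that no single descriptor dominates.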

VR 환경을 고려한 동작 및 위치 인식에 관한 연구 (A Study on Motion and Position Recognition Considering VR Environments)

  • 오암석
    • 한국정보통신학회논문지
    • /
    • Vol.21 No.12
    • /
    • pp.2365-2370
    • /
    • 2017
  • This paper proposes a motion and position recognition technique for immersive VR environments. For motion recognition, multiple AHRS devices are attached to body parts, and a coordinate system is defined with respect to them. The user's motion is recognized from the 9-axis movement data measured by each AHRS device, and the motion is corrected by extracting the joint angles between body segments. For position recognition, gait information is extracted from the AHRS inertial sensors to estimate relative position, and the accumulated error is corrected using BLE fingerprints. To implement the proposed technique, experiments on AHRS-based position recognition and joint-angle extraction were conducted. The mean error was 0.25 m in the position recognition experiment and 3.2° in the joint-angle extraction experiment.
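The relative-position step described above is essentially pedestrian dead reckoning: accumulate per-step displacement from step length and heading. A minimal sketch, with all names and values illustrative rather than from the paper:

```python
import math

def dead_reckon(start, steps):
    """Accumulate (x, y) position from (step_length_m, heading_rad) pairs."""
    x, y = start
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y

# One 1 m step heading east, then one 1 m step heading north.
x, y = dead_reckon((0.0, 0.0), [(1.0, 0.0), (1.0, math.pi / 2)])
```

Because each step's error accumulates, an absolute reference such as the BLE fingerprint mentioned in the abstract is needed to periodically reset the drift.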

이동 로봇을 위한 전정안반사 기반 비젼 추적 시스템의 인식 성능 평가 (Recognition Performance of Vestibular-Ocular Reflex Based Vision Tracking System for Mobile Robot)

  • 박재홍;반욱;최태영;권현일;조동일;김광수
    • 제어로봇시스템학회논문지
    • /
    • Vol.15 No.5
    • /
    • pp.496-504
    • /
    • 2009
  • This paper presents the recognition performance of a VOR (Vestibular-Ocular Reflex) based vision tracking system for a mobile robot. The VOR is a reflex eye movement that, during head movements, produces an eye movement in the direction opposite to the head movement, thus keeping the image of the object of interest at the center of the retina. We applied this physiological concept to a vision tracking system for high recognition performance in mobile environments. The proposed method was implemented in a vision tracking system consisting of a motion sensor module and an actuation module with a vision sensor. We tested the developed system on an x/y stage and a rate table for linear and angular motion, respectively. The experimental results show that the recognition rates of the VOR-based method are three times higher than those of a conventional non-VOR vision system, mainly because the VOR-based system keeps the line of sight fixed on the object, reducing image blur in dynamic environments. This suggests that the VOR concept proposed in this paper can be applied efficiently to vision tracking systems for mobile robots.
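The VOR principle above reduces to commanding the camera at the negative of the measured body rotation rate, so the gaze error integrates only the uncompensated residual. A toy sketch, with the gain and time step as illustrative assumptions:

```python
def vor_command(head_rate, gain=1.0):
    """Counter-rotation command: camera rotates opposite to the head/body."""
    return -gain * head_rate

def residual_gaze(head_rates, gain=1.0, dt=0.01):
    """Integrate the uncompensated rotation (rad) over a sequence of samples."""
    err = 0.0
    for w in head_rates:
        err += (w + vor_command(w, gain)) * dt
    return err

# Perfect gain: the line of sight stays on the target; gain < 1 leaves drift.
print(residual_gaze([1.0] * 10, gain=1.0))  # 0.0
```

This captures why the VOR-based system sees less image blur: the residual gaze error, not the full body motion, determines how far the target drifts across the sensor.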

Implementation of Non-Contact Gesture Recognition System Using Proximity-based Sensors

  • Lee, Kwangjae
    • 반도체디스플레이기술학회지
    • /
    • Vol.19 No.3
    • /
    • pp.106-111
    • /
    • 2020
  • In this paper, we propose a non-contact gesture recognition system and algorithm using proximity-based sensors. The system uses four IR-receiving photodiodes embedded on a single chip, together with an IR LED, to cover a small area. The goal of this paper is to use the proposed algorithm to solve the problem of placing the four IR receivers close to each other, and to implement a gesture sensor capable of recognizing eight directional gestures from a distance of 10 cm and above. The proposed system was implemented on an FPGA board using Verilog HDL with an Android host board. As a result, 2-D swipe gestures of fingers and palms 3 cm and 15 cm wide were recognized, and a recognition rate of more than 97% was achieved under various conditions. The proposed system is a low-power, non-contact HMI system that recognizes simple but accurate motions. It can be used as an auxiliary interface for simple functions such as calls, music, and games on battery-powered portable devices.
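One plausible way to decode eight swipe directions from four closely spaced receivers is to order the receivers by activation time and take the vector from the first to the last; this is a hypothetical reconstruction, not the paper's algorithm:

```python
# Receivers assumed at left/right/top/bottom of the chip (illustrative layout).
SENSOR_POS = {"L": (-1, 0), "R": (1, 0), "T": (0, 1), "B": (0, -1)}

DIRS = {(1, 0): "right", (-1, 0): "left", (0, 1): "up", (0, -1): "down",
        (1, 1): "up-right", (-1, 1): "up-left",
        (1, -1): "down-right", (-1, -1): "down-left"}

def sign(v):
    return (v > 0) - (v < 0)

def swipe_direction(activation_order):
    """Direction from the first-activated receiver to the last-activated one."""
    fx, fy = SENSOR_POS[activation_order[0]]
    lx, ly = SENSOR_POS[activation_order[-1]]
    return DIRS.get((sign(lx - fx), sign(ly - fy)))

print(swipe_direction("LR"))  # right
print(swipe_direction("LT"))  # up-right
```

Diagonal swipes fall out naturally when the first and last receivers differ on both axes, giving the eight directions the abstract mentions.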

Study on User Interface for a Capacitive-Sensor Based Smart Device

  • Jung, Sun-IL;Kim, Young-Chul
    • 스마트미디어저널
    • /
    • Vol.8 No.3
    • /
    • pp.47-52
    • /
    • 2019
  • In this paper, we designed HW/SW interfaces for processing the signals of capacitive sensors, such as the Electric Potential Sensor (EPS), which detect disturbances of the surrounding electric field as feature signals in motion recognition systems. We implemented a smart light control system with those interfaces. In the system, the on/off switch and brightness adjustment are controlled by hand gestures using the designed and fabricated interface circuits. PWM (Pulse Width Modulation) signals from the controller, with a driver IC, are used to drive the LED and to control brightness and on/off operation. Using the hand-gesture signals obtained through EPS sensors and the interface HW/SW, we can not only construct a gesture-instruction system but also achieve faster recognition by developing dedicated interface hardware including control circuitry. Finally, using the proposed hand-gesture recognition and signal processing methods, the light control module was designed and implemented. The experimental results show that the smart light control system controls the LED module properly through accurate motion detection and gesture classification.
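The gesture-to-PWM mapping described above can be sketched as a small state machine: gestures toggle power or step the brightness level, and the level maps linearly to a PWM duty cycle. The gesture names, step count, and class names are illustrative assumptions:

```python
def pwm_duty(level, levels=10):
    """Map a discrete brightness level (0..levels) to a PWM duty cycle in [0, 1]."""
    return max(0, min(levels, level)) / levels

class Light:
    """Toy model of the gesture-controlled LED: toggle on/off, step brightness."""
    def __init__(self):
        self.on = False
        self.level = 5
    def handle_gesture(self, gesture):
        if gesture == "toggle":
            self.on = not self.on
        elif gesture == "up":
            self.level = min(10, self.level + 1)
        elif gesture == "down":
            self.level = max(0, self.level - 1)
        return pwm_duty(self.level) if self.on else 0.0

light = Light()
print(light.handle_gesture("toggle"))  # 0.5
print(light.handle_gesture("up"))      # 0.6
```

On the real hardware, the returned duty cycle would be written to the PWM peripheral driving the LED driver IC.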

연합 학습 기반 분산 FMCW MIMO Radar를 활용한 모션 인식 알고리즘 개발 및 성능 분석 (Development of Federated Learning based Motion Recognition Algorithm using Distributed FMCW MIMO Radars)

  • 강종성;이승호;이정한;양윤지;박재현
    • 대한임베디드공학회논문지
    • /
    • Vol.17 No.3
    • /
    • pp.139-148
    • /
    • 2022
  • In this paper, we implement a distributed FMCW MIMO radar system to obtain micro-Doppler signatures of target motions. We also develop a federated-learning-based motion recognition algorithm that uses the micro-Doppler radar signatures collected by the implemented FMCW MIMO radar system. Through experiments, we have verified that the proposed federated-learning-based algorithm achieves motion recognition accuracy of up to 90%.
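The core aggregation step in federated learning (FedAvg-style) is a sample-weighted average of model parameters trained locally at each node, here each radar. A minimal sketch under that assumption; the flat parameter vectors are illustrative:

```python
def fed_avg(client_weights, client_sizes):
    """Sample-weighted average of per-client parameter vectors (FedAvg step)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]

# Two radars with equally sized local datasets.
print(fed_avg([[1.0, 2.0], [3.0, 4.0]], [1, 1]))  # [2.0, 3.0]
```

Each radar trains on its own micro-Doppler data, only the parameters travel to the server, and the averaged model is broadcast back for the next round.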

지휘행동 이해를 위한 손동작 인식 (Hand Gesture Recognition for Understanding Conducting Action)

  • 제홍모;김지만;김대진
    • 한국정보과학회:학술대회논문집
    • /
    • Proceedings of the KIISE 2007 Fall Conference, Vol.34 No.2 (C)
    • /
    • pp.263-266
    • /
    • 2007
  • We introduce a vision-based hand gesture recognition system for understanding musical time and beat patterns without extra special devices. We suggest a simple and reliable vision-based hand gesture recognition method with two features. First, the motion-direction code is proposed, which is a quantized code for motion directions. Second, the conducting feature point (CFP), the point where the motion changes suddenly, is also proposed. The proposed hand gesture recognition system extracts the hand region by segmenting the depth information generated by stereo matching of image sequences. It then tracks the motion of the center of gravity (COG) of the extracted hand region and generates gesture features such as the CFP and the direction code. Finally, we obtain the current timing pattern of beat and tempo of the music being played. Experimental results on the test data set show that the musical time pattern and tempo recognition rate is over 86.42% for motion histogram matching and 79.75% for CFP tracking only.
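A motion-direction code like the one proposed above quantizes the angle of the COG's displacement vector into a fixed number of bins. A minimal sketch, with the eight-bin choice and function name as illustrative assumptions:

```python
import math

def direction_code(dx, dy, n_codes=8):
    """Quantize the direction of motion (dx, dy) into one of n_codes bins.

    Bin 0 is centered on the +x axis; bins proceed counter-clockwise.
    """
    angle = math.atan2(dy, dx) % (2 * math.pi)
    # Offset by half a bin so each code is centered on its direction.
    return int((angle + math.pi / n_codes) / (2 * math.pi) * n_codes) % n_codes

print(direction_code(1, 0))   # 0 (rightward motion)
print(direction_code(0, 1))   # 2 (upward motion)
```

A CFP would then be detected wherever consecutive frames produce a sudden jump in this code, marking the abrupt direction changes of a conducting beat.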
