• Title/Summary/Keyword: Direction of Action Recognition


Recognizing the Direction of Action using Generalized 4D Features (일반화된 4차원 특징을 이용한 행동 방향 인식)

  • Kim, Sun-Jung;Kim, Soo-Wan;Choi, Jin-Young
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.5
    • /
    • pp.518-528
    • /
    • 2014
  • In this paper, we propose a method to recognize the action direction of a human by developing 4D space-time (4D-ST, [x,y,z,t]) features. For this, we propose 4D space-time interest points (4D-STIPs, [x,y,z,t]), which are extracted using 3D space (3D-S, [x,y,z]) volumes reconstructed from images of a finite number of different views. Since the proposed features are constructed using volumetric information, the features for an arbitrary 2D space (2D-S, [x,y]) viewpoint can be generated by projecting the 3D-S volumes and 4D-STIPs onto the corresponding image planes in the training step. We can recognize the directions of actors in a test video since our training sets, which are projections of the 3D-S volumes and 4D-STIPs onto various image planes, contain the direction information. The process for recognizing action direction is divided into two steps: first we recognize the class of the action, and then we recognize its direction using the direction information. For action and action-direction recognition, we construct, from the projected 3D-S volumes and 4D-STIPs, motion history images (MHIs) and non-motion history images (NMHIs), which encode the moving and non-moving parts of an action respectively. For action recognition, features are trained by support vector data description (SVDD) according to the action class and recognized by support vector domain density description (SVDDD). For action-direction recognition after recognizing the action, each action is trained using SVDD according to the direction class and then recognized by SVDDD. In experiments, we train the models using 3D-S volumes from the INRIA Xmas Motion Acquisition Sequences (IXMAS) dataset and evaluate action-direction recognition on a new SNU dataset constructed for that purpose.
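The motion history image (MHI) and non-motion history image (NMHI) pair described in the abstract can be sketched as a simple per-pixel update rule. This is a minimal illustration, not the paper's implementation: it assumes binary motion masks and silhouettes are already available as 2D lists (1 = pixel belongs to the mask, 0 = not), and the decay step `delta` and ceiling `tau` are illustrative values.

```python
# Minimal sketch of MHI / NMHI updates over a frame sequence.
# motion_mask: pixels that moved this frame; silhouette: all actor pixels.
# tau and delta are illustrative, not taken from the paper.

def update_mhi(mhi, motion_mask, tau=255, delta=32):
    """Moving pixels are set to tau; all others decay toward zero."""
    h, w = len(mhi), len(mhi[0])
    return [[tau if motion_mask[y][x] else max(0, mhi[y][x] - delta)
             for x in range(w)] for y in range(h)]

def update_nmhi(nmhi, motion_mask, silhouette, tau=255, delta=32):
    """Non-moving silhouette pixels are refreshed; moving ones decay."""
    h, w = len(nmhi), len(nmhi[0])
    return [[tau if (silhouette[y][x] and not motion_mask[y][x])
             else max(0, nmhi[y][x] - delta)
             for x in range(w)] for y in range(h)]
```

Applied frame by frame, recently moving regions stay bright in the MHI while older motion fades, and the NMHI captures the complementary, static parts of the silhouette.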

Hand Gesture Recognition for Understanding Conducting Action (지휘행동 이해를 위한 손동작 인식)

  • Je, Hong-Mo;Kim, Ji-Man;Kim, Dai-Jin
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2007.10c
    • /
    • pp.263-266
    • /
    • 2007
  • We introduce a vision-based hand gesture recognition method for understanding musical time and patterns without extra special devices. We suggest a simple and reliable vision-based hand gesture recognition approach with two features. First, the motion-direction code is proposed, which is a quantized code for motion directions. Second, the conducting feature point (CFP), the point at which the motion suddenly changes, is also proposed. The proposed hand gesture recognition system extracts the human hand region by segmenting the depth information generated by stereo matching of image sequences. It then follows the motion of the center of gravity (COG) of the extracted hand region and generates gesture features such as the CFP and the direction code. Finally, we obtain the current timing pattern of beat and tempo of the music being played. The experimental results on the test data set show that the musical time pattern and tempo recognition rate is over 86.42% for motion histogram matching, and 79.75% for CFP tracking only.
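The two features above can be sketched in a few lines. This is a hedged illustration, not the authors' code: the choice of 8 direction codes and the CFP threshold of a two-code jump are assumptions made for the example.

```python
import math

# Quantize the COG displacement vector (dx, dy) into one of n_codes
# motion-direction codes (0 = right, counting counter-clockwise).
def motion_direction_code(dx, dy, n_codes=8):
    angle = math.atan2(dy, dx) % (2 * math.pi)
    sector = 2 * math.pi / n_codes
    return int((angle + sector / 2) // sector) % n_codes

# A conducting feature point (CFP) is flagged where the direction code
# jumps sharply between consecutive frames; the threshold of 2 codes
# is an illustrative assumption.
def is_cfp(prev_code, code, n_codes=8):
    diff = min((code - prev_code) % n_codes, (prev_code - code) % n_codes)
    return diff >= 2
```

For example, a downward-then-upward beat of the hand flips the direction code by roughly half the circle, which the `is_cfp` test picks up as a beat point.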


Development of Low-Cost Vision-based Eye Tracking Algorithm for Information Augmented Interactive System

  • Park, Seo-Jeon;Kim, Byung-Gyu
    • Journal of Multimedia Information System
    • /
    • v.7 no.1
    • /
    • pp.11-16
    • /
    • 2020
  • Deep learning has become the most important technology in the field of artificial intelligence and machine learning, with its high performance overwhelming existing methods in various applications. In this paper, an interactive window service based on object recognition technology is proposed. The main goal is to use deep-learning-based object recognition to replace existing eye tracking technology, which requires users to wear eye tracking devices, with an eye tracking technology that uses only ordinary cameras to track the user's eyes. We design an interactive system based on an efficient eye detection and pupil tracking method that can verify the user's eye movement. To estimate the view direction of the user's eyes, we first initialize a reference (origin) coordinate. The view direction is then estimated from the extracted eye pupils relative to this origin coordinate. We also propose a blink detection technique based on the eye aspect ratio (EAR). With the extracted view direction and eye action, we provide augmented information of interest, without the existing complex and expensive eye-tracking systems, across various service topics and situations. For verification, a user guiding service is implemented as a prototype model with a school map to present the location information of a desired location or building.
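The eye aspect ratio (EAR) blink test mentioned above has a standard closed form: the two vertical landmark distances of the eye divided by twice the horizontal one. A minimal sketch, assuming the six eye landmarks p1..p6 are ordered as in the common 68-point face model (p1/p4 at the horizontal corners, p2/p3 on the upper lid, p6/p5 on the lower lid); the blink threshold is illustrative.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|): near ~0.3 for an open eye,
# dropping toward 0 as the lids close.
def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def is_blink(ear, threshold=0.2):  # threshold is an assumed value
    return ear < threshold
```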

The Hidden Object Searching Method for Distributed Autonomous Robotic Systems

  • Yoon, Han-Ul;Lee, Dong-Hoon;Sim, Kwee-Bo
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2005.06a
    • /
    • pp.1044-1047
    • /
    • 2005
  • In this paper, we present a strategy of object search for distributed autonomous robotic systems (DARS). DARS are systems that consist of multiple autonomous robotic agents among whom the required functions are distributed. For instance, the agents should recognize their surroundings wherever they are located and generate rules to act upon by themselves. In this paper, we introduce a strategy for multiple DARS robots to search for a hidden object in an unknown area. First, we present an area-based action making process to determine the direction changes of the robots during their maneuvers. Second, we present a Q-learning adaptation to enhance the area-based action making process. Third, we introduce the coordinate system used to represent a robot's current location. At the end of this paper, we show experimental results using hexagon-based Q-learning to find the hidden object.
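The Q-learning adaptation named in the second step follows the standard tabular update. A minimal sketch under stated assumptions: states and actions are plain indices (the paper's hexagonal cells and direction changes would map onto them), and the learning rate, discount factor, and epsilon are illustrative values.

```python
import random

# Standard tabular Q-learning update:
#   Q[s][a] += alpha * (r + gamma * max_a' Q[s'][a'] - Q[s][a])
def q_update(Q, s, a, reward, s_next, alpha=0.5, gamma=0.9):
    best_next = max(Q[s_next])
    Q[s][a] += alpha * (reward + gamma * best_next - Q[s][a])
    return Q

# Epsilon-greedy action selection over the current Q row.
def choose_action(Q, s, epsilon=0.1):
    if random.random() < epsilon:
        return random.randrange(len(Q[s]))
    return max(range(len(Q[s])), key=lambda a: Q[s][a])
```

Each robot would call `choose_action` to pick its next direction change and `q_update` once the reward for that move (e.g. approaching the hidden object) is observed.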


Self-localization of Mobile Robots by the Detection and Recognition of Landmarks (인공표식과 자연표식을 결합한 강인한 자기위치추정)

  • Kweon, In-So;Jang, Gi-Jeong;Kim, Sung-Ho;Lee, Wang-Heon
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.306-311
    • /
    • 2003
  • This paper presents a novel localization paradigm for mobile robots based on artificial and natural landmarks. A model-based object recognition method detects natural landmarks and conducts global and topological localization. In addition, a metric localization method using artificial landmarks is fused in to complement the deficiency of the topology map and to guide action behavior. The recognition algorithm uses modified local Zernike moments and a probabilistic voting method for the robust detection of objects in cluttered indoor environments. The artificial landmark is designed to have a three-dimensional multi-colored structure, and the projection distortion of the structure encodes the distance and viewing direction of the robot. We demonstrate the feasibility of the proposed system through real-world experiments using a mobile robot, KASIRI-III.


The Study on Gesture Recognition for Fighting Games based on Kinect Sensor (키넥트 센서 기반 격투액션 게임을 위한 제스처 인식에 관한 연구)

  • Kim, Jong-Min;Kim, Eun-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.552-555
    • /
    • 2018
  • This study developed a gesture recognition method using the Kinect sensor and proposed a fighting action control interface. To extract the pattern features of a gesture, it expresses them in terms of body proportions relative to the shoulders, rather than absolute positions. Even when the same gesture is made, the positional coordinates of each joint captured by the Kinect sensor can differ depending on the length and direction of the arm. Therefore, this study applied principal component analysis for gesture modeling and analysis. The method helps reduce the effects of data errors and provides a dimensionality reduction effect. In addition, this study proposed a modified matching algorithm to reduce the motion restrictions of the gesture recognition system.
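The shoulder-relative normalization described above can be sketched directly: joint coordinates are re-expressed relative to the shoulder center and scaled by the shoulder width, so the same gesture produces similar features regardless of the actor's size or position. The function name and the 2D joint format are illustrative assumptions; the paper then applies principal component analysis on top of such features.

```python
import math

# Express each (x, y) joint relative to the shoulder center, scaled by
# the shoulder width, so features are invariant to position and body size.
def normalize_joints(joints, left_shoulder, right_shoulder):
    cx = (left_shoulder[0] + right_shoulder[0]) / 2.0
    cy = (left_shoulder[1] + right_shoulder[1]) / 2.0
    width = math.hypot(right_shoulder[0] - left_shoulder[0],
                       right_shoulder[1] - left_shoulder[1]) or 1.0
    return [((x - cx) / width, (y - cy) / width) for x, y in joints]
```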


Abstraction of players action in tennis games over various platform (플랫폼에 따른 테니스 게임 플레이어 액션의 추상화 연구)

  • Chung, Don-Uk
    • Journal of Digital Contents Society
    • /
    • v.16 no.4
    • /
    • pp.635-643
    • /
    • 2015
  • This study conducted a case study of various platforms, centered on tennis games, to examine what forms the movements of a game player take when they are abstracted in the game. In particular, it summarized the forms of player experience that can be attained from the abstracted tennis actions into four types: movement, swing, direction & intensity, and skill; and observed and schematized them in early video games, console games, mobile games, gesture recognition games, and wearable games. In conclusion, the development of technology offers players a richer experience; one example is the platform shift from simple button-pressing games to swinging motions. Furthermore, the study found consistency in the context even though slight differences in action were observed depending on the interface.

Analysis of Previous Make-up Study (화장에 관한 기존연구 유형의 분석)

  • Baek, Kyung-Jin;Kim, Mi-Young
    • The Research Journal of the Costume Culture
    • /
    • v.12 no.1
    • /
    • pp.182-198
    • /
    • 2004
  • The purpose of this study was to analyze previous make-up studies. A number of publications and journals were reviewed and analyzed carefully. The results of the review and analysis were as follows. There were many different subjects in make-up studies, and they can be divided into ten types: cosmetics purchase behavior, change and comparison of make-up culture, make-up trends by era, the current standing and strategy of the cosmetics industry, art trends in make-up, brand preference of cosmetics, make-up attitude, recognition of imported cosmetics and their purchase behavior, color preference of cosmetics, and the relationship between self-concept and make-up. In general, cosmetics purchase behavior studies are conducted most actively. The analysis shows that the characteristics of cosmetics purchase behavior vary widely according to the classification criteria and the study subjects. However, the study subjects and methods are not diverse, and purchase behavior studies that combine make-up and clothing are very lacking. Therefore, this study sought to identify the problems of previous make-up research through analysis and to suggest future research directions.


Statistical Modeling Methods for Analyzing Human Gait Structure (휴먼 보행 동작 구조 분석을 위한 통계적 모델링 방법)

  • Sin, Bong Kee
    • Smart Media Journal
    • /
    • v.1 no.2
    • /
    • pp.12-22
    • /
    • 2012
  • Today we are witnessing an increasingly widespread use of cameras in our lives for video surveillance, robot vision, and mobile phones. This has led to a renewed interest in computer vision in general and an ongoing boom in human activity recognition in particular. Although not particularly fancy per se, human gait is inarguably the most common and frequent action. Early in the decade there was a passing interest in human gait recognition, but it declined before a systematic analysis and understanding of walking motion emerged. This paper presents a set of DBN-based models for the analysis of human gait, in a sequence of increasing complexity and modeling power. The discussion centers around HMM-based statistical methods capable of modeling the variability and incompleteness of input video signals. Finally, a novel idea of extending the discrete-state Markov chain with a continuous density function is proposed in order to better characterize the gait direction. The proposed modeling framework allows us to recognize pedestrians with up to 91.67% accuracy and to elegantly decode two independent gait components, direction and posture, through a sequence of experiments.
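At the core of the HMM-based methods above is the forward pass, which scores an observation sequence under a model. A minimal sketch, not the paper's model: `A` is the state transition matrix, `B[t][s]` the likelihood of the frame at time t under state s, and `pi` the initial state distribution, all with illustrative dimensions.

```python
# Forward algorithm for a discrete-state HMM: returns the total
# likelihood of the observation sequence encoded in B.
def forward_likelihood(pi, A, B):
    n = len(pi)
    alpha = [pi[s] * B[0][s] for s in range(n)]   # initialization
    for t in range(1, len(B)):                    # recursion over frames
        alpha = [B[t][j] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
    return sum(alpha)                             # termination
```

One such likelihood per gait class (or, in the extended model, per direction density) would be compared to classify a walking sequence.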


Vision-Based Finger Action Recognition by Angle Detection and Contour Analysis

  • Lee, Dae-Ho;Lee, Seung-Gwan
    • ETRI Journal
    • /
    • v.33 no.3
    • /
    • pp.415-422
    • /
    • 2011
  • In this paper, we present a novel vision-based method of recognizing finger actions for use in electronic appliance interfaces. Human skin is first detected by color and consecutive motion information. Then, fingertips are detected by a novel scale-invariant angle detection based on a variable k-cosine. Fingertip tracking is implemented by detected-region-based tracking. By analyzing the contour of the tracked fingertip, fingertip parameters such as position, thickness, and direction are calculated. Finger actions such as moving, clicking, and pointing are recognized by analyzing these fingertip parameters. Experimental results show that the proposed angle detection correctly detects fingertips, and that the recognized actions can be used for interfaces with electronic appliances.
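The k-cosine measure underlying the angle detection can be sketched as follows: for a contour point p_i, take the vectors to p_{i-k} and p_{i+k} and compute the cosine of the angle between them; at a sharp peak such as a fingertip the two vectors are nearly parallel and the cosine approaches 1. This is a plain fixed-k illustration; the paper's contribution is making k variable and scale-invariant, which is not reproduced here.

```python
import math

# Cosine of the angle at contour point i between the vectors toward
# the points k steps before and after it (contour treated as cyclic).
def k_cosine(contour, i, k):
    px, py = contour[i]
    ax, ay = contour[(i - k) % len(contour)]
    bx, by = contour[(i + k) % len(contour)]
    ux, uy = ax - px, ay - py
    vx, vy = bx - px, by - py
    dot = ux * vx + uy * vy
    norm = math.hypot(ux, uy) * math.hypot(vx, vy)
    return dot / norm if norm else 1.0
```

On a straight contour segment the value is -1 (angle of 180 degrees); fingertip candidates are the points where it rises above some threshold.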