• Title/Abstract/Keyword: Human action recognition

Search results: 154 items (processing time 0.022 sec)

사람 행동 인식에서 반복 감소를 위한 저수준 사람 행동 변화 감지 방법 (Detection of Low-Level Human Action Change for Reducing Repetitive Tasks in Human Action Recognition)

  • 노요환;김민정;이도훈
    • 한국멀티미디어학회논문지 / Vol.22 No.4 / pp.432-442 / 2019
  • Most current human action recognition methods are based on deep learning and therefore carry a very high computational cost. In this paper, we propose an action change detection method that reduces repetitive human action recognition tasks. In practice, simple actions are often repeated, and applying a high-cost recognition method to every repetition is time-consuming. The proposed method decides whether the action has changed, and full action recognition is executed only when a change is detected. The action change detection process is as follows. First, the number of non-zero pixels is extracted from the motion history image to generate one-dimensional time-series data. Second, an action change is detected by comparing the difference between the current trend and the local extremum of the time series against a threshold. In experiments, the proposed method achieved 89% balanced accuracy on action change data and reduced action recognition repetitions by 61%.
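The two-stage pipeline described above (counting non-zero MHI pixels into a time series, then flagging a change when the current value departs from the last local extremum) can be sketched in NumPy. The function names, the simple linear-decay MHI, and the sign-flip extremum rule are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def motion_history_image(masks, tau=10):
    """Build a simple motion history image from a stack of binary motion
    masks: moving pixels are set to tau, others decay by 1 per frame."""
    mhi = np.zeros(masks[0].shape, dtype=np.float32)
    for mask in masks:
        mhi = np.where(mask > 0, tau, np.maximum(mhi - 1, 0))
    return mhi

def action_change_detected(series, threshold):
    """Flag an action change when the latest value of the non-zero-pixel
    time series differs from its most recent local extremum by more than
    the threshold."""
    if len(series) < 3:
        return False
    extremum = series[0]
    for prev, cur, nxt in zip(series, series[1:], series[2:]):
        if (cur - prev) * (nxt - cur) < 0:  # slope sign flip => extremum
            extremum = cur
    return abs(series[-1] - extremum) > threshold
```

The downstream recognizer would then run only on frames where `action_change_detected` returns `True`.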

다중 시점 영상 시퀀스를 이용한 강인한 행동 인식 (Robust Action Recognition Using Multiple View Image Sequences)

  • 아마드;이성환
    • 한국정보과학회:학술대회논문집 / 한국정보과학회 2006년도 가을 학술발표논문집 Vol.33 No.2 (B) / pp.509-514 / 2006
  • Human action recognition is an active research area in computer vision. In this paper, we present a robust method for human action recognition that combines human body shape and motion information from multiple-view image sequences. Principal component analysis is used to extract the shape features of the human body, and multiple block motion of the body is used to extract the motion features. This combined information over multiple view sequences enhances the recognition of human actions. We represent each action using a set of hidden Markov models, modeling each action from multiple views, which enables human action recognition from arbitrary viewpoints. Several daily actions of elderly persons were modeled and tested with this approach and were correctly classified, indicating the robustness of our method.
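The shape-feature step can be illustrated with a small NumPy sketch of PCA over flattened silhouette images. This is generic PCA offered only as a sketch of the idea; the HMM stage and the block-motion features are omitted, and the function name is an assumption:

```python
import numpy as np

def pca_shape_features(silhouettes, k=2):
    """Project flattened body silhouettes onto the top-k principal
    components of the set, giving a k-dim shape feature per frame."""
    X = np.asarray([s.ravel() for s in silhouettes], dtype=np.float64)
    Xc = X - X.mean(axis=0)                    # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                       # (n_frames, k) features
```

Per-frame feature vectors like these would feed the per-view hidden Markov models described in the abstract.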


ADD-Net: Attention Based 3D Dense Network for Action Recognition

  • Man, Qiaoyue;Cho, Young Im
    • 한국컴퓨터정보학회논문지 / Vol.24 No.6 / pp.21-28 / 2019
  • In recent years, with the development of artificial intelligence and the success of deep models, deep learning has been deployed across all fields of computer vision. Action recognition, an important branch of human perception and computer vision research, has attracted growing attention. It is a challenging task due to the particular complexity of human movement: the same movement may vary across individuals. Human actions exist as continuous image frames in video, so action recognition requires more computational power than processing static images, and the simple use of a CNN cannot achieve the desired results. Recently, attention models have achieved good results in computer vision and natural language processing. In particular, for video action classification, adding an attention model makes it more effective to focus on motion features and improves performance. It also intuitively explains which part the model attends to when making a particular decision, which is very helpful in real applications. In this paper, we propose a 3D dense convolutional network based on an attention mechanism (ADD-Net) for recognizing human motion behavior in video.
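A channel-attention step of the kind the abstract describes can be sketched in NumPy in squeeze-and-excitation style over a (C, T, H, W) feature map. The weight shapes and the two-layer gating are placeholder assumptions, not ADD-Net's actual layer:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention on a (C, T, H, W)
    feature map: global-average-pool each channel, pass through two small
    projections, and rescale the channels by the resulting gates."""
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)          # (C,) descriptor
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))  # (C,) gates in (0,1)
    return feat * excite[:, None, None, None]
```

In a real network `w1` and `w2` are learned; here they are just arguments so the rescaling behavior is visible.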

A Survey of Human Action Recognition Approaches that use an RGB-D Sensor

  • Farooq, Adnan;Won, Chee Sun
    • IEIE Transactions on Smart Processing and Computing / Vol.4 No.4 / pp.281-290 / 2015
  • Human action recognition from a video scene has remained a challenging problem in the area of computer vision and pattern recognition. The development of the low-cost RGB depth camera (RGB-D) allows new opportunities to solve the problem of human action recognition. In this paper, we present a comprehensive review of recent approaches to human action recognition based on depth maps, skeleton joints, and other hybrid approaches. In particular, we focus on the advantages and limitations of the existing approaches and on future directions.

모션 히스토리 영상 및 기울기 방향성 히스토그램과 적출 모델을 사용한 깊이 정보 기반의 연속적인 사람 행동 인식 시스템 (Depth-Based Recognition System for Continuous Human Action Using Motion History Image and Histogram of Oriented Gradient with Spotter Model)

  • 음혁민;이희진;윤창용
    • 한국지능시스템학회논문지 / Vol.26 No.6 / pp.471-476 / 2016
  • This paper describes a depth-based system that recognizes continuous human actions using motion history images, histograms of oriented gradients, and a spotter model, and proposes the spotter model, which performs action spotting, to improve recognition performance in continuous action recognition. The system consists of preprocessing, human action and spotter modeling, and continuous human action recognition. In preprocessing, the Depth-MHI-HOG method is used for image segmentation and for extracting spatio-temporal template-based features; the extracted features are converted into sequences through the action and spotter modeling step. From these sequences and hidden Markov models, an action model suited to each defined action and the proposed spotter model are generated. Continuous recognition then proceeds by action spotting, which uses the spotter model to separate meaningful actions from meaningless ones in a continuous action sequence, and by comparing the model probabilities of the meaningful action sequences. Experimental results verify that the proposed model effectively improves recognition performance in a continuous action recognition system.
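The HOG part of the Depth-MHI-HOG features can be sketched in NumPy as a single unsigned-orientation histogram over a whole template image. Real HOG divides the image into cells with block normalization; this one-histogram version is a deliberate simplification, and the function name is an assumption:

```python
import numpy as np

def hog_descriptor(img, bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations
    (0-180 degrees) over a whole template image, L2-normalized."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist
```

Descriptors like this, computed on the depth-based MHI templates, would form the observation sequences fed to the HMMs.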

Human Action Recognition Based on Local Action Attributes

  • Zhang, Jing;Lin, Hong;Nie, Weizhi;Chaisorn, Lekha;Wong, Yongkang;Kankanhalli, Mohan S
    • Journal of Electrical Engineering and Technology / Vol.10 No.3 / pp.1264-1274 / 2015
  • Human action recognition has received much interest in the computer vision community. Most existing methods focus either on constructing a robust descriptor from the temporal domain or on computational methods that exploit the discriminative power of the descriptor. In this paper we explore the idea of using local action attributes to form an action descriptor, so that an action is no longer characterized by motion changes in the temporal domain but by local semantic descriptions of the action. We propose a novel framework that introduces local action attributes to represent an action for the final human action categorization. The local action attributes are defined for each body part and are independent of the global action. The resulting attribute descriptor is used to jointly model human actions and achieve robust performance. In addition, we study the impact of using local and global low-level body features for these attributes. Experiments on the KTH dataset and the MV-TJU dataset show that our local-action-attribute-based descriptor improves action recognition performance.
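The idea of building an action descriptor from per-body-part attributes can be sketched as follows. The part names, the concatenation order, and the nearest-prototype classifier are illustrative assumptions standing in for the paper's joint model:

```python
import numpy as np

def attribute_descriptor(part_attributes):
    """Concatenate per-body-part attribute score vectors (sorted by part
    name for a stable layout) into one action descriptor."""
    return np.concatenate([np.asarray(part_attributes[p], dtype=np.float64)
                           for p in sorted(part_attributes)])

def classify(desc, prototypes):
    """Assign the action label whose prototype descriptor is nearest."""
    return min(prototypes, key=lambda a: np.linalg.norm(desc - prototypes[a]))
```

Each slot of the descriptor is then a (part, attribute) pair, which is what makes the representation semantic rather than purely temporal.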

Improved DT Algorithm Based Human Action Features Detection

  • Hu, Zeyuan;Lee, Suk-Hwan;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol.21 No.4 / pp.478-484 / 2018
  • The choice of motion features directly influences the result of a human action recognition method. Many factors, such as the appearance of the human body, the environment, and the video camera, affect individual features differently, which restricts recognition accuracy. The Dense Trajectories (DT) algorithm is a classic feature-extraction algorithm in the field of behavior recognition, but it has some defects in its use of optical flow images. In this paper, after studying the representation and recognition of human actions and giving full consideration to the advantages and disadvantages of different features, we use the improved Dense Trajectories (iDT) algorithm to optimize and extract optical flow features of human actions, combine them with support vector machine methods to identify human behavior, and use images from the KTH database for training and testing.
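The trajectory-shape descriptor at the heart of DT/iDT can be sketched as normalized frame-to-frame displacements of a tracked point. This reproduces only the shape channel as a sketch; iDT's camera-motion compensation and its HOG/HOF/MBH channels are omitted:

```python
import numpy as np

def trajectory_descriptor(points):
    """Shape descriptor of one tracked-point trajectory: concatenate the
    frame-to-frame displacement vectors and divide by the sum of their
    magnitudes, so the descriptor is scale-invariant."""
    pts = np.asarray(points, dtype=np.float64)
    disp = np.diff(pts, axis=0)                    # (L-1, 2) displacements
    total = np.sum(np.linalg.norm(disp, axis=1))   # total path length
    return (disp / total).ravel() if total > 0 else disp.ravel()
```

Descriptors pooled over many densely sampled trajectories would then be classified with an SVM, as in the abstract.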

시공간 템플릿과 컨볼루션 신경망을 사용한 깊이 영상 기반의 사람 행동 인식 (Depth Image-Based Human Action Recognition Using Convolution Neural Network and Spatio-Temporal Templates)

  • 음혁민;윤창용
    • 전기학회논문지 / Vol.65 No.10 / pp.1731-1737 / 2016
  • In this paper, a method is proposed to recognize human actions as nonverbal expressions; the proposed method is composed of two steps, action representation and action recognition. First, the MHI (Motion History Image) is used in the action representation step. This step includes segmentation based on depth information and generates spatio-temporal templates to describe actions. Second, a CNN (Convolution Neural Network), which includes feature extraction and classification, is employed in the action recognition step. It extracts convolutional feature vectors and then uses a classifier to recognize actions. The recognition performance of the proposed method is demonstrated by comparison with other action recognition methods in the experimental results.
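The convolutional feature-extraction primitive that such a CNN applies to the spatio-temporal templates can be sketched in plain NumPy as a single valid-mode filter pass with ReLU. The actual network stacks many such layers with learned kernels; this is only the core operation:

```python
import numpy as np

def conv_feature_map(img, kernel):
    """Valid-mode 2D cross-correlation (the 'convolution' used in CNNs)
    of a template image with one kernel, followed by ReLU."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0)  # ReLU nonlinearity
```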

시각장애인 보조를 위한 영상기반 휴먼 행동 인식 시스템 (Image Based Human Action Recognition System to Support the Blind)

  • 고병철;황민철;남재열
    • 정보과학회 논문지 / Vol.42 No.1 / pp.138-143 / 2015
  • In this paper, to assist the blind with scene recognition, we propose a system that recognizes human actions through communication between an ear-hook Bluetooth camera and an action recognition server. First, the blind user captures a scene at a desired position with the ear-hook Bluetooth camera, and the captured image is transmitted to the recognition server through a smartphone linked to the camera. The recognition server uses image analysis algorithms to detect humans and objects and analyzes human poses to recognize human actions. The recognized action information is sent back to the smartphone, and the user hears the recognition result via text-to-speech (TTS). The proposed system achieved a human action recognition performance of 60.7% on test data captured indoors and outdoors.

A Human Action Recognition Scheme in Temporal Spatial Data for Intelligent Web Browser

  • Cho, Kyung-Eun
    • 한국멀티미디어학회논문지 / Vol.8 No.6 / pp.844-855 / 2005
  • This paper proposes a human action recognition scheme for an intelligent web browser. Based on the principle that a human action can be defined as a combination of multiple articulation movements, the inference of stochastic grammars is applied to recognize each action. Human actions in 3-dimensional (3D) world coordinates are measured, quantized, and converted into two sets of 4-direction chain codes for the xy and zy projection planes, making them suitable for the stochastic grammar inference method. We confirm by experiment that various physical actions can be classified correctly against a set of real-world 3D temporal data. The experiments achieved a comparatively successful recognition rate of 93.8% on 8 movements of the human head and 84.9% on 60 movements of the human upper body. We expect that this scheme can be used for human-machine interaction commands in a web browser.
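The quantization step can be sketched as follows, assuming a 4-direction code (right = 0, up = 1, left = 2, down = 3, chosen by the dominant axis of each step); the paper's exact codebook and quantization grid may differ:

```python
import numpy as np

def chain_code_4(points):
    """4-direction chain code of a 2D projected trajectory: each step is
    quantized to right(0), up(1), left(2), or down(3)."""
    pts = np.asarray(points, dtype=np.float64)
    codes = []
    for dx, dy in np.diff(pts, axis=0):
        if abs(dx) >= abs(dy):
            codes.append(0 if dx >= 0 else 2)
        else:
            codes.append(1 if dy >= 0 else 3)
    return codes

def project_codes(points3d):
    """Two chain-code strings from the xy and zy projections of one
    3D joint trajectory, as the abstract describes."""
    p = np.asarray(points3d, dtype=np.float64)
    return chain_code_4(p[:, [0, 1]]), chain_code_4(p[:, [2, 1]])
```

The two symbol strings per movement are what the stochastic grammar inference would then be trained on.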
