• Title/Abstract/Keyword: Movement Recognition

Search results: 493 items

Gait Recognition Based on GF-CNN and Metric Learning

  • Wen, Junqin
    • Journal of Information Processing Systems / Vol.16 No.5 / pp.1105-1112 / 2020
  • Gait recognition, as a promising biometric, can be used in video-based surveillance and other security systems. However, due to the complexity of leg movement and differences in external sampling conditions, gait recognition still faces many problems to be addressed. In this paper, an improved convolutional neural network (CNN) based on Gabor filters is therefore proposed to achieve gait recognition. First, a gait feature extraction layer based on Gabor filters is inserted into a traditional CNN and used to extract gait features from gait silhouette images. Then, in the gait classification stage, using the CNN output as input, we apply metric learning techniques to calculate the distance between two gaits and achieve gait classification with k-nearest-neighbor classifiers. Finally, several experiments conducted on two publicly available gait datasets demonstrate that our method achieves state-of-the-art performance in terms of correct recognition rate on the OULP and CASIA-B datasets.
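
To make the pipeline concrete, here is a minimal sketch (in Python, assuming OpenCV, PyTorch, and scikit-learn) of a fixed Gabor filter bank used as the first layer of a small CNN whose embeddings are classified by a nearest-neighbor rule. All filter parameters, layer sizes, and data shapes are illustrative assumptions, and the metric-learning stage is reduced to plain Euclidean 1-NN on untrained embeddings for brevity; this is not the authors' actual configuration.

```python
# Minimal sketch: a Gabor-filter feature layer inserted before a small CNN,
# with k-NN classification on the resulting embeddings (illustrative only).
import cv2
import numpy as np
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

def gabor_bank(n_orientations=4, ksize=11):
    """Build a fixed bank of Gabor kernels at several orientations (assumed parameters)."""
    kernels = []
    for i in range(n_orientations):
        theta = np.pi * i / n_orientations
        k = cv2.getGaborKernel((ksize, ksize), sigma=3.0, theta=theta,
                               lambd=8.0, gamma=0.5, psi=0.0)
        kernels.append(k.astype(np.float32))
    return np.stack(kernels)                            # (n_orientations, ksize, ksize)

class GaborCNN(nn.Module):
    """Gabor feature-extraction layer (fixed weights) followed by a small CNN embedding."""
    def __init__(self, embed_dim=64):
        super().__init__()
        bank = torch.from_numpy(gabor_bank())           # (4, 11, 11)
        self.gabor = nn.Conv2d(1, bank.shape[0], kernel_size=11, padding=5, bias=False)
        self.gabor.weight.data = bank.unsqueeze(1)      # install the Gabor kernels
        self.gabor.weight.requires_grad = False         # keep the filter bank frozen
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(32 * 4 * 4, embed_dim)

    def forward(self, x):                               # x: (N, 1, H, W) gait silhouettes
        x = self.gabor(x)
        x = self.features(x)
        return self.fc(x.flatten(1))                    # embedding used for distance computation

# Classification stage: k-NN over distances between gait embeddings.
# (In the paper these distances come from a learned metric; the sketch skips that training.)
model = GaborCNN().eval()
silhouettes = torch.rand(20, 1, 64, 64)                 # placeholder silhouette batch
labels = np.repeat(np.arange(5), 4)                     # 5 subjects, 4 sequences each
with torch.no_grad():
    emb = model(silhouettes).numpy()
knn = KNeighborsClassifier(n_neighbors=1).fit(emb, labels)
print(knn.predict(emb[:3]))
```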

Human Action Recognition Based on An Improved Combined Feature Representation

  • Zhang, Ning;Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol.21 No.12 / pp.1473-1480 / 2018
  • The extraction and recognition of human motion characteristics must be combined with biometrics to determine and judge human behavior during movement and to distinguish individual identities. Biometric technology authenticates individual identity using the body's inherent biological characteristics, whose most noteworthy properties are invariance and uniqueness. Behavior recognition based on a single characteristic has proven too restrictive, so in this paper we propose a mixed feature that combines a global silhouette feature with a local optical flow feature, and this combined representation is used for human action recognition. The KTH database is used to train and test the recognition system, and the experiments produced very desirable results.
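
A minimal sketch of how a global silhouette feature and a local optical-flow feature could be combined into a single descriptor, assuming OpenCV for background subtraction and Farneback optical flow; the grid size, histogram bins, and thresholds are illustrative assumptions rather than the paper's settings.

```python
# Minimal sketch of combining a global silhouette feature with a local optical
# flow feature into one descriptor per frame (illustrative shapes and parameters).
import cv2
import numpy as np

def silhouette_feature(frame_gray, background_gray, grid=(8, 8), thresh=30):
    """Global feature: downsampled binary silhouette from background subtraction."""
    diff = cv2.absdiff(frame_gray, background_gray)
    _, mask = cv2.threshold(diff, thresh, 1, cv2.THRESH_BINARY)
    return cv2.resize(mask.astype(np.float32), grid, interpolation=cv2.INTER_AREA).flatten()

def optical_flow_feature(prev_gray, frame_gray, bins=8):
    """Local feature: magnitude-weighted histogram of dense optical-flow directions."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, frame_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    angles = np.arctan2(flow[..., 1], flow[..., 0])
    magnitudes = np.linalg.norm(flow, axis=-1)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi), weights=magnitudes)
    return hist / (hist.sum() + 1e-8)

# Combined representation: concatenate the two descriptors per frame, then feed the
# sequence to any classifier trained on the KTH action classes.
prev = np.random.randint(0, 255, (120, 160), np.uint8)     # placeholder frames
curr = np.random.randint(0, 255, (120, 160), np.uint8)
background = np.zeros((120, 160), np.uint8)
combined = np.concatenate([silhouette_feature(curr, background),
                           optical_flow_feature(prev, curr)])
print(combined.shape)   # (8*8 + 8,) = (72,)
```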

Real-Time Cattle Action Recognition for Estrus Detection

  • Heo, Eui-Ju;Ahn, Sung-Jin;Choi, Kang-Sun
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol.13 No.4 / pp.2148-2161 / 2019
  • In this paper, we present a real-time cattle action recognition algorithm to detect the estrus phase of cattle from a live video stream. In order to classify cattle movement, and specifically to detect the mounting action, the most observable sign of the estrus phase, a simple yet effective feature description exploiting motion history images (MHI) is designed. By learning the proposed features within the support vector machine framework, various representative cattle actions, such as mounting, walking, tail wagging, and foot stamping, can be recognized robustly in complex scenes. Thanks to the low complexity of the proposed action recognition algorithm, multiple cattle in three enclosures can be monitored simultaneously using a single fisheye camera. Through extensive experiments with real video streams, we confirmed that the proposed algorithm outperforms a conventional human action recognition algorithm by 18% in terms of recognition accuracy, even with a much lower-dimensional feature description.
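
A minimal sketch of the feature-plus-classifier idea above: a motion history image (MHI) accumulated by frame differencing, pooled into a low-dimensional descriptor, and fed to an SVM. The decay constant, grid size, and the four placeholder action classes are assumptions for illustration, not the paper's values.

```python
# Minimal sketch of a motion-history-image (MHI) descriptor fed to an SVM.
import numpy as np
from sklearn.svm import SVC

TAU = 15  # how many frames the motion history persists (assumed)

def update_mhi(mhi, motion_mask):
    """Set pixels with new motion to TAU and decay the rest by one frame."""
    return np.where(motion_mask, TAU, np.maximum(mhi - 1, 0))

def mhi_descriptor(frames, diff_thresh=25, grid=(8, 8)):
    """Build an MHI over a frame sequence and pool it onto a coarse grid."""
    mhi = np.zeros(frames[0].shape, np.float32)
    for prev, curr in zip(frames[:-1], frames[1:]):
        motion = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > diff_thresh
        mhi = update_mhi(mhi, motion)
    h, w = mhi.shape
    gh, gw = grid
    pooled = mhi[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return (pooled / TAU).flatten()            # low-dimensional feature vector

# Train an SVM on descriptors labelled with cattle actions (mounting, walking, ...).
rng = np.random.default_rng(0)
X = np.stack([mhi_descriptor(rng.integers(0, 255, (10, 64, 64), dtype=np.uint8))
              for _ in range(40)])
y = rng.integers(0, 4, size=40)                # 4 placeholder action classes
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```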

몸 움직임에 따른 감성표현과 공간특성에 관한 연구 (A Study on Emotional Expression and Space Characteristics in Body Movement)

  • 오영근
    • 한국실내디자인학회논문집 / Vol.17 No.1 / pp.170-177 / 2008
  • If cubism, the avant-garde movement newly attempted in the mid-19th century, was an experimental movement that sought to feel space through human action and observation from various viewpoints within a fixed three-dimensional world, then futurism was an innovative movement that drew its inspiration from continual motion. However, study of the meaning and emotion that correspond to the deconstruction era has not been continued. The goal of this study is therefore to construct an emotional theory and to explore its possibilities through theoretical investigation and experimental analysis of body movement and emotional expression. As the study method, the experiment and analysis proceeded from the Miyauji study (1992), which was based on P. Thiel's theory of direct recognition and empirical study of identical existence or experimentation. The study reached several conclusions. First, body movement, as an emotion that makes meaning, is related to space. Second, space is related to the background as an object of the body. Last, the body, as a creature that becomes one with spirit in space, makes meaning. We look forward to the possibility of emotional study through body movement.

딥러닝 기반의 운전자의 안전/위험 상태 인지 시스템 개발 (Development of Driver's Safety/Danger Status Cognitive Assistance System Based on Deep Learning)

  • 미아오 쉬;이현순;강보영
    • 로봇학회논문지 / Vol.13 No.1 / pp.38-44 / 2018
  • In this paper, we propose an Intelligent Driver Assistance System (I-DAS) for driver safety. The proposed system recognizes safety and danger status by analyzing blind spots that the driver cannot see because of a large angle of head movement from the front. Most studies use image pre-processing such as face detection to collect information about the driver's head movement. This not only increases the computational complexity of the system, but also decreases recognition accuracy, because the image-processing system does not use the entire image of the driver's upper body while seated in the driver's seat when the head moves at a large angle from the front. The proposed system uses a convolutional neural network to replace the face detection system and uses the entire image of the driver's upper body. Therefore, high accuracy can be maintained, without image pre-processing, even when the driver turns the head at a large angle from the frontal gaze position. Experimental results show that the proposed system can accurately recognize dangerous conditions in the blind zone during driving, achieving 95% recognition accuracy for five drivers.
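
The key design choice above, classifying directly from the whole upper-body image without face-detection pre-processing, can be sketched with a small CNN; the layer sizes, input resolution, and the two-class safe/danger output below are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch: classify safe/danger directly from the whole upper-body image
# with a small CNN, skipping face-detection pre-processing.
import torch
import torch.nn as nn

class DriverStatusCNN(nn.Module):
    def __init__(self, n_classes=2):                      # safe vs. danger (assumed)
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(64 * 4 * 4, n_classes)

    def forward(self, x):                                  # x: whole upper-body image batch
        return self.head(self.backbone(x).flatten(1))

model = DriverStatusCNN()
frames = torch.rand(8, 3, 96, 96)                          # placeholder in-cabin frames
logits = model(frames)
status = logits.argmax(dim=1)                              # 0 = safe, 1 = danger (assumed labels)
print(status)
```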

신경회로망을 이용한 근전도 신호의 특성분석 및 패턴 분류 (Pattern Recognition of EMG Signal using Artificial Neural Network)

  • 이석주;이성환;조영조
    • 대한전기학회:학술대회논문집 / 대한전기학회 2000년도 추계학술대회 논문집 학회본부 D / pp.769-771 / 2000
  • In this paper, a pattern recognition scheme for EMG signals using an artificial neural network is proposed. For manipulation ability, the movements of the human arm are classified into several categories, and EMG signals of the appropriate muscles are collected during arm movement. Patterns of the EMG signals of each movement are recognized as follows: 1) the features of each EMG signal are extracted; 2) with these features, the neural network is trained using the feedforward error back-propagation (FFEBP) algorithm. The results show that the arm movements can be classified from the EMG signals with high accuracy.
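
A minimal sketch of the two steps described above: extract features from windowed EMG signals, then train a feed-forward network with error back-propagation. The particular time-domain features (mean absolute value, zero crossings, waveform length), window sizes, and class count are assumptions for illustration, not necessarily those of the paper.

```python
# Minimal sketch: per-channel time-domain EMG features + a feed-forward network
# trained by error back-propagation (scikit-learn's MLP uses exactly this).
import numpy as np
from sklearn.neural_network import MLPClassifier

def emg_features(window):
    """window: (n_samples, n_channels) raw EMG; returns one feature vector."""
    mav = np.mean(np.abs(window), axis=0)                        # mean absolute value
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)   # zero crossings
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)         # waveform length
    return np.concatenate([mav, zc, wl])

rng = np.random.default_rng(1)
windows = rng.standard_normal((60, 200, 4))        # 60 windows, 200 samples, 4 muscles
labels = rng.integers(0, 3, size=60)               # 3 placeholder arm-movement classes
X = np.stack([emg_features(w) for w in windows])

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```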

입력 영상의 쉬프트 컨트롤에 의한 패턴인식 (Pattern recognition by shift control of input pattern)

  • 강민숙;조동섭;김병철
    • 대한전기학회:학술대회논문집 / 대한전기학회 1992년도 하계학술대회 논문집 A / pp.459-461 / 1992
  • This paper presents a new method to recognize 2D patterns dynamically by shifting the input patterns according to a difference vector. In general, a neural network with many patterns yields varying recognition ratios. Dynamic management of the input patterns means that pixels can be moved to desired locations under the control of the difference vector. At the learning phase, we divide the dual neural network model into two parts, and we then combine them to construct the total network. Our model gives good results, such as a smaller number of patterns and reduced learning time. At present, we only discuss four-way movement of the input patterns; research on more complex movement will be carried out later.
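
A minimal sketch of shift control of an input pattern: a difference vector between the input's centroid and a reference pattern's centroid is computed, and the pixels are moved accordingly before recognition. Using the centroid difference as the difference vector is an illustrative assumption, not necessarily the paper's definition.

```python
# Minimal sketch: shift an input pattern by a difference vector so it lines up
# with a reference pattern before it is presented to the recognizer.
import numpy as np

def centroid(pattern):
    ys, xs = np.nonzero(pattern)
    return np.array([ys.mean(), xs.mean()])

def shift_to_reference(pattern, reference):
    """Shift `pattern` so its centroid matches the reference pattern's centroid."""
    dy, dx = np.round(centroid(reference) - centroid(pattern)).astype(int)
    return np.roll(np.roll(pattern, dy, axis=0), dx, axis=1)

reference = np.zeros((16, 16), np.uint8); reference[6:10, 6:10] = 1   # learned pattern
test = np.zeros((16, 16), np.uint8); test[1:5, 10:14] = 1             # displaced input
aligned = shift_to_reference(test, reference)
print(np.array_equal(aligned, reference))   # True: the shifted input now matches
```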

자율주행차량의 장애물 인식을 위한 물체형상 및 움직임 포착에 관한 연구 (A Study on Detection of Object Shape and Movement for Obstacle Recognition of Autonomous Vehicle)

  • 이진우;이영진;조현철;손주한;이권순
    • 대한전기학회:학술대회논문집 / 대한전기학회 1999년도 하계학술대회 논문집 G / pp.3101-3104 / 1999
  • It is important to detect object movement for obstacle recognition and path searching of autonomous robots and vehicles with vision sensors. This paper shows a method to extract objects and trace the trajectory of a moving object using a CCD camera, and it describes a method to recognize the shape of objects.
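
A minimal sketch of detecting a moving object and tracing its trajectory by frame differencing and contour centroids, assuming OpenCV 4; the thresholds and the synthetic frames are illustrative and do not reproduce the paper's method.

```python
# Minimal sketch: extract the moving object per frame pair and record its centroid
# to trace the trajectory over a camera stream.
import cv2
import numpy as np

def moving_object_centroid(prev_gray, curr_gray, thresh=25, min_area=50):
    """Return the centroid of the largest moving region, or None if nothing moved."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None
    m = cv2.moments(max(contours, key=cv2.contourArea))
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # (x, y) centroid

# Synthetic stream: a bright square moving to the right, whose path we record.
trajectory = []
prev = np.zeros((120, 160), np.uint8)
for step in range(1, 6):
    curr = np.zeros((120, 160), np.uint8)
    curr[50:70, 20 * step:20 * step + 20] = 255
    point = moving_object_centroid(prev, curr)
    if point is not None:
        trajectory.append(point)
    prev = curr
print(trajectory)   # traced centroids approximate the object's path
```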

A Structure and Framework for Sign Language Interaction

  • Kim, Soyoung;Pan, Younghwan
    • 대한인간공학회지 / Vol.34 No.5 / pp.411-426 / 2015
  • Objective: The goal of this thesis is to design the interaction structure and framework of a system to recognize sign language. Background: In sign language, meaningful individual gestures are combined to construct a sentence, so it is difficult for a system to interpret and recognize the meaning of a hand gesture within a sequence of continuous gestures. In order to interpret the meaning of individual gestures correctly, an interaction structure and framework are therefore needed that can segment the indication of each individual gesture. Method: We analyzed 700 sign language words to structuralize sign language gesture interaction. First, we analyzed the transformational patterns of the hand gesture. Second, we analyzed the movement of these transformational patterns. Third, we analyzed the types of gestures other than hands. Based on this, we designed a framework for sign language interaction. Results: We elicited 8 patterns of hand gesture based on whether the gesture changes from the starting point to the ending point. We then analyzed hand movement based on 3 elements: pattern of movement, direction, and whether the hand movement repeats. Moreover, we defined 11 movements of gestures other than hands and classified 8 types of interaction. The framework for sign language interaction designed on this basis applies to more than 700 individual gestures of the sign language and can classify an individual gesture even within a sequence of continuous gestures. Conclusion: This study structuralized sign language interaction in 3 aspects: the transformational patterns of the starting and ending hand shapes, hand movement, and gestures other than hands. Based on this, we designed a framework that can recognize individual gestures and interpret their meaning more accurately when a meaningful individual gesture is input within a sequence of continuous gestures. Application: This interaction framework can be applied when developing a sign language recognition system. The structuralized gestures can be used to build sign language databases, to create automatic recognition systems, and to study action gestures in other areas.
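
One way to picture the framework above is as a data structure describing each segmented gesture by its hand-shape transformation pattern, its movement (pattern, direction, repetition), and any non-hand gesture. The enum members below are placeholders; the paper itself defines 8 hand-shape patterns, 11 non-hand movements, and 8 interaction types.

```python
# Minimal sketch of a gesture-segmentation data structure (placeholder categories).
from dataclasses import dataclass
from enum import Enum, auto

class HandShapePattern(Enum):          # 8 patterns in the paper; 3 placeholders here
    UNCHANGED = auto()
    CHANGES_ONCE = auto()
    CHANGES_REPEATEDLY = auto()

class MovementPattern(Enum):           # placeholder movement patterns
    STRAIGHT = auto()
    CIRCULAR = auto()
    STATIONARY = auto()

class NonHandGesture(Enum):            # 11 non-hand movements in the paper; placeholders
    NONE = auto()
    HEAD_NOD = auto()
    MOUTH_SHAPE = auto()

@dataclass
class SignGesture:
    """One segmented individual gesture within a continuous signing sequence."""
    start_hand_shape: str
    end_hand_shape: str
    shape_pattern: HandShapePattern
    movement: MovementPattern
    direction: str                     # e.g. "up", "down", "left", "right"
    repeated: bool
    non_hand: NonHandGesture = NonHandGesture.NONE

# A continuous sequence is segmented into such units before interpretation.
sentence = [
    SignGesture("flat", "fist", HandShapePattern.CHANGES_ONCE,
                MovementPattern.STRAIGHT, "down", repeated=False),
    SignGesture("fist", "fist", HandShapePattern.UNCHANGED,
                MovementPattern.CIRCULAR, "right", repeated=True,
                non_hand=NonHandGesture.HEAD_NOD),
]
print(len(sentence), "segmented gestures")
```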

A Design and Implementation Mobile Game Based on Kinect Sensor

  • Lee, Won Joo
    • 한국컴퓨터정보학회논문지 / Vol.22 No.9 / pp.73-80 / 2017
  • In this paper, we design and implement a mobile game based on the Kinect sensor. The game is a motion-recognition maze game built on the Kinect sensor using XNA Game Studio. It consists of three stages; each maze has a different size and clear-time limit. A player can move to the next stage only by finding the exit within the time limit; if the exit is not found in time, the game ends. In addition, two kinds of mini games are included. The first is a fruit-catching game using the motion-recognition tracking of the Kinect sensor, in which the player has to pick up a certain number of randomly falling fruits. If the player acquires a certain number of fruits, the player's movement speed increases; however, if the player catches a skeleton that appears randomly, the movement speed decreases. The second is a quiz game using the speech recognition function of the Kinect sensor, in which questions from random genres such as common sense, nonsense, ancient creatures, capitals, and constellations are issued. If the player correctly answers more than 7 of 10 questions, the player receives a useful item for the maze: a navigator fairy that helps the player escape the forest.
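
The mini-game rules described above can be sketched as simple game-state logic; only the "more than 7 of 10 questions" threshold comes from the abstract, while the fruit count needed for a speed boost, the speed factors, and all names are illustrative assumptions.

```python
# Minimal sketch of the two mini-game rules (thresholds partly assumed).
from dataclasses import dataclass

@dataclass
class PlayerState:
    speed: float = 1.0
    fruits: int = 0
    has_navigator_fairy: bool = False

FRUIT_SPEED_BONUS_AT = 5      # assumed: fruits needed for one speed boost

def on_catch(player, item):
    """Fruit-catch mini game: fruits speed the player up, skeletons slow them down."""
    if item == "fruit":
        player.fruits += 1
        if player.fruits % FRUIT_SPEED_BONUS_AT == 0:
            player.speed *= 1.2
    elif item == "skeleton":
        player.speed *= 0.8

def on_quiz_finished(player, correct_answers, pass_mark=7):
    """Quiz mini game: answering more than 7 of 10 correctly grants the navigator fairy."""
    if correct_answers > pass_mark:
        player.has_navigator_fairy = True

player = PlayerState()
for caught in ["fruit"] * 5 + ["skeleton"]:
    on_catch(player, caught)
on_quiz_finished(player, correct_answers=8)
print(player)
```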