Title/Summary/Keyword: Keypoints Extraction


Individual Identification Using Ear Region Based on SIFT

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society / v.18 no.1 / pp.1-8 / 2015
  • In recent years, the ear has emerged as a new biometric trait because it offers higher user acceptance than the fingerprint and can be captured at a distance in indoor or outdoor environments. This paper proposes an individual identification method using the ear region based on SIFT (scale-invariant feature transform). Unlike most previous studies, which extract a rectangular region of interest (ROI), this study sets the ROI as a flexibly expanded region that includes the ear. It also presents an effective extraction and matching method for SIFT keypoints. Experiments evaluating the performance of the proposed method were performed on the IITD public database. The method achieved a correct identification rate of 98.89%, and 98.44% on a deformed dataset with 20% occlusion. These results show that the proposed method is effective for ear recognition and robust to occlusion.
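The abstract does not spell out the paper's matching rule, but SIFT keypoint matching is conventionally done with a nearest-neighbour search plus Lowe's ratio test over 128-dimensional descriptors. A minimal NumPy sketch of that standard step (the `ratio` value is illustrative, not taken from the paper):

```python
import numpy as np

def match_ratio_test(desc_a, desc_b, ratio=0.75):
    """Match two sets of SIFT descriptors with Lowe's ratio test.

    desc_a: (N, 128) array, desc_b: (M, 128) array (M >= 2).
    Returns (i, j) pairs where the nearest neighbour of desc_a[i]
    in desc_b is sufficiently closer than the second-nearest.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from descriptor d to every descriptor in desc_b
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))
    return matches
```

Ambiguous keypoints (whose best and second-best matches are nearly equidistant) are discarded, which is what makes this kind of matching robust to the partial occlusion the paper tests against.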

Fall Detection Based on Human Skeleton Keypoints Using GRU

  • Kang, Yoon-Kyu;Kang, Hee-Yong;Weon, Dal-Soo
    • International Journal of Internet, Broadcasting and Communication / v.12 no.4 / pp.83-92 / 2020
  • Recent studies on fall detection focus on analyzing fall motions with a recurrent neural network (RNN) and use deep learning to detect 2D human poses from a monocular color image. In this paper, we investigated an improved detection method that estimates the positions of the head and shoulder keypoints and the acceleration of their position change, using skeletal keypoint information extracted by PoseNet from images captured with a low-cost 2D RGB camera, to increase the accuracy of the fall judgment. In particular, we propose a fall detection method based on the characteristics of the post-fall posture, on the velocity of change of the human skeleton keypoints, and on the change in the width-to-height ratio of the body bounding box. A public dataset was used to extract human skeletal features and to train the deep learning model (GRU). In experiments to find a feature extraction method that achieves high classification accuracy, the proposed method detected falls with a 99.8% success rate, more effectively than the conventional approach using raw skeletal data.
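The two hand-crafted cues named in the abstract, keypoint velocity and the bounding-box width/height ratio, can be computed directly from a sequence of 2D skeleton keypoints. A sketch under stated assumptions: keypoint index 0 is the head/nose, and the thresholds in `is_fall` are hypothetical placeholders, not values from the paper (the paper feeds such features to a GRU rather than thresholding them):

```python
import numpy as np

def fall_features(keypoints_seq, fps=30.0):
    """Per-frame fall cues from 2D skeleton keypoints.

    keypoints_seq: (T, K, 2) array of T frames with K (x, y) keypoints,
    e.g. the 17 PoseNet keypoints. Returns the head-keypoint vertical
    velocity between frames and the bounding-box width/height ratio.
    """
    kp = np.asarray(keypoints_seq, dtype=float)
    head_y = kp[:, 0, 1]                 # keypoint 0 assumed to be the head/nose
    head_vy = np.diff(head_y) * fps      # pixels/second; image y grows downward
    w = kp[:, :, 0].max(axis=1) - kp[:, :, 0].min(axis=1)
    h = kp[:, :, 1].max(axis=1) - kp[:, :, 1].min(axis=1)
    aspect = w / np.maximum(h, 1e-6)     # ratio > 1 suggests a lying posture
    return head_vy, aspect

def is_fall(head_vy, aspect, vy_thresh=200.0, aspect_thresh=1.0):
    # Hypothetical rule: fast downward head motion ending in a wide, flat pose.
    return bool(head_vy.max() > vy_thresh and aspect[-1] > aspect_thresh)
```

In the paper these features (and their accelerations) would be the per-frame inputs to the GRU classifier rather than being thresholded directly.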

Real-Time Place Recognition for Augmented Mobile Information Systems

  • Oh, Su-Jin;Nam, Yang-Hee
    • Journal of KIISE: Computing Practices and Letters / v.14 no.5 / pp.477-481 / 2008
  • Place recognition is necessary for a mobile user to be provided with place-dependent information. This paper proposes a real-time, video-based place recognition system that identifies the user's current place while moving through a building. For scene feature extraction, existing methods based on global feature analysis are sensitive to partial occlusion and noise, while local-feature-based methods, which usually attempt object recognition, are hard to apply in real-time systems because of their high computational cost. Statistical methods such as HMMs (hidden Markov models) and Bayesian networks have also been used to derive place recognition results from feature data; the former is impractical because gathering training data requires a huge effort, while the latter usually depends on object recognition alone. This paper proposes a combined approach of global and local feature analysis that complements the drawbacks of both. The proposed method is applied to a mobile information system and shows real-time performance with competitive recognition results.
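The abstract describes fusing a global scene descriptor with local feature evidence but does not give the fusion rule. One common, minimal way to combine the two cues is a weighted score over a normalized global histogram similarity and a normalized local match count; the weight `w` and the `max_matches` cap below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def global_score(hist_q, hist_p):
    # Histogram intersection of L1-normalized global scene histograms,
    # yielding a similarity in [0, 1].
    return float(np.minimum(hist_q, hist_p).sum())

def combined_score(hist_q, hist_p, n_local_matches, max_matches=50, w=0.5):
    # Weighted fusion of the global and local cues. w and max_matches
    # are hypothetical parameters chosen for illustration.
    local = min(n_local_matches / max_matches, 1.0)
    return w * global_score(hist_q, hist_p) + (1 - w) * local
```

The global term keeps the score cheap to evaluate for every candidate place, while the local term (e.g. a count of verified keypoint matches) recovers robustness to the partial occlusion that purely global methods suffer from.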