• Title/Abstract/Keywords: Human pose recognition

Search results: 83 items (processing time: 0.027 s)

Multi-Human Behavior Recognition Based on Improved Posture Estimation Model

  • Zhang, Ning; Park, Jin-Ho; Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 24, No. 5 / pp.659-666 / 2021
  • With the continuous development of deep learning, human behavior recognition algorithms have achieved good results. However, in a multi-person environment, complex scenes pose a great challenge to recognition efficiency. To this end, this paper proposes a multi-person pose estimation model. First, the human detectors in top-down frameworks mostly use two-stage object detection models, which run slowly; here the single-stage YOLOv3 detector is used to improve both running speed and model generalization, and depthwise separable convolutions further increase detection speed and improve the model's ability to extract target proposal regions. Second, the pose estimation model combines a feature pyramid network with contextual semantic information and applies the OHEM algorithm to handle hard keypoint detection, improving the accuracy of multi-person pose estimation. Finally, the Euclidean distance between keypoints is used to measure the spatial similarity of poses within a frame and to eliminate redundant poses.
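
The last step of this pipeline, suppressing redundant detections by comparing the Euclidean distance between corresponding keypoints, can be illustrated with a short sketch. The 17-keypoint layout and the pixel threshold below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def pose_distance(pose_a: np.ndarray, pose_b: np.ndarray) -> float:
    """Mean Euclidean distance between corresponding keypoints.

    Each pose is an (N, 2) array of (x, y) keypoint coordinates.
    """
    return float(np.mean(np.linalg.norm(pose_a - pose_b, axis=1)))

def remove_redundant_poses(poses: list[np.ndarray], threshold: float = 20.0) -> list[np.ndarray]:
    """Keep a pose only if it is not too close to an already accepted pose.

    `threshold` (in pixels) is an illustrative value; in practice it would be
    tuned to the image resolution and person scale.
    """
    kept: list[np.ndarray] = []
    for pose in poses:
        if all(pose_distance(pose, other) > threshold for other in kept):
            kept.append(pose)
    return kept

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.uniform(0, 200, size=(17, 2))            # one detected pose (17 COCO-style keypoints)
    duplicate = base + rng.normal(0, 2, size=(17, 2))   # nearly identical detection
    distinct = base + 150                               # a clearly different person
    print(len(remove_redundant_poses([base, duplicate, distinct])))  # -> 2
```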

TRT Pose를 이용한 모바일 로봇의 사람 추종 기법 (Development of Human Following Method of Mobile Robot Using TRT Pose)

  • 최준현; 주경진; 윤상석; 김종욱
    • 대한임베디드공학회논문지 / Vol. 15, No. 6 / pp.281-287 / 2020
  • In this paper, we propose a method for estimating a person's walking direction so that a mobile robot can follow the person using TRT Pose (TensorRT Pose), a deep-learning-based motion recognition model. The mobile robot estimates individual movement by recognizing keypoints on the person's pelvis and determines the direction in which the person intends to move. Using this information and the distance between the robot and the person, the mobile robot can follow the person stably while keeping a safe distance. TRT Pose extracts only keypoint information, which avoids privacy issues even though the camera on the mobile robot records video. To validate the proposed method, experiments were carried out in which a person walks away from or toward the mobile robot in a zigzag pattern, and the robot successfully follows the person while keeping the prescribed distance.
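
A minimal sketch of the idea described above, deriving a steering cue from the hip keypoints and a speed command from the measured distance, is given below. The assumed image width, control gains, target distance, and function names are illustrative; they are not taken from the paper.

```python
import math

def heading_from_hips(left_hip, right_hip):
    """Estimate the person's lateral offset from the two hip keypoints.

    Keypoints are (x, y) image coordinates; the hip midpoint's horizontal
    offset from the image center is used as a simple steering cue.
    """
    mid_x = (left_hip[0] + right_hip[0]) / 2.0
    image_center_x = 320.0                              # assumed 640-px-wide image
    return (mid_x - image_center_x) / image_center_x    # roughly -1 .. 1

def follow_command(left_hip, right_hip, distance_m,
                   target_m=1.0, k_lin=0.8, k_ang=1.5):
    """Simple proportional follower: hold `target_m` to the person and steer
    toward the hip midpoint. Gains and target distance are illustrative."""
    linear = k_lin * (distance_m - target_m)            # slow down when close
    angular = -k_ang * heading_from_hips(left_hip, right_hip)
    return linear, angular

if __name__ == "__main__":
    # person slightly to the right of the image center, 1.8 m away
    v, w = follow_command(left_hip=(350, 240), right_hip=(390, 242), distance_m=1.8)
    print(f"linear {v:.2f} m/s, angular {w:.2f} rad/s")
```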

인간 행동 분석을 이용한 위험 상황 인식 시스템 구현 (A Dangerous Situation Recognition System Using Human Behavior Analysis)

  • 박준태; 한규필; 박양우
    • 한국멀티미디어학회논문지 / Vol. 24, No. 3 / pp.345-354 / 2021
  • Recently, deep-learning-based image recognition systems have been adopted in various surveillance environments, but most of them are still single-frame object recognition methods, which are insufficient for long-term temporal analysis and high-level situation management. Therefore, we propose a method that recognizes specific dangerous situations caused by people in real time using deep-learning-based object analysis techniques. The proposed method uses deep-learning-based object detection and tracking algorithms to recognize situations such as 'trespassing' and 'loitering'. In addition, human joint pose data are extracted and analyzed for emergency-awareness functions such as 'falling down', so that notifications can be issued not only in security applications but also in emergency-response settings.
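
As a rough illustration of how tracked detections can be turned into rule-based events such as 'trespassing' and 'loitering', the sketch below keeps per-track dwell times inside a restricted zone. The zone coordinates, time threshold, and track format are assumptions for illustration only, not details from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ZoneMonitor:
    """Flags 'trespassing' when a track enters the zone and 'loitering'
    when it stays longer than `loiter_seconds`."""
    zone: tuple                      # (x_min, y_min, x_max, y_max) in image coordinates
    loiter_seconds: float = 10.0
    entry_time: dict = field(default_factory=dict)  # track_id -> first time seen in zone

    def update(self, track_id: int, center: tuple, timestamp: float) -> list:
        x, y = center
        x_min, y_min, x_max, y_max = self.zone
        events = []
        if x_min <= x <= x_max and y_min <= y <= y_max:
            if track_id not in self.entry_time:
                self.entry_time[track_id] = timestamp
                events.append(("trespassing", track_id))
            elif timestamp - self.entry_time[track_id] >= self.loiter_seconds:
                events.append(("loitering", track_id))
        else:
            self.entry_time.pop(track_id, None)   # track left the zone, reset its timer
        return events

if __name__ == "__main__":
    monitor = ZoneMonitor(zone=(100, 100, 400, 400), loiter_seconds=5.0)
    print(monitor.update(1, (200, 200), timestamp=0.0))   # [('trespassing', 1)]
    print(monitor.update(1, (210, 205), timestamp=6.0))   # [('loitering', 1)]
```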

얼굴의 자세추정을 이용한 얼굴인식 속도 향상 (Improvement of Face Recognition Speed Using Pose Estimation)

  • 최선형; 조성원; 정선태
    • 한국지능시스템학회논문지 / Vol. 20, No. 5 / pp.677-682 / 2010
  • This paper describes a method for estimating an approximate head pose by comparing the individual values of the Haar wavelets learned for AdaBoost-based face detection, and shows how this estimate can be used to speed up face recognition. During face detection, the trained weak classifiers compare their coefficient values to select the Haar wavelets that are robust to the features of each pose. The Mahalanobis distance, which expresses the similarity of each item, is used in the Haar-wavelet selection process. Finally, the ability of the selected Haar wavelets to distinguish each pose when detecting arbitrary face images is evaluated through the overall experimental results.
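
The Mahalanobis distance used in the Haar-wavelet selection step above can be written out directly. The feature dimensionality, the grouping of wavelet responses per pose, and the toy data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def mahalanobis(x: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> float:
    """Mahalanobis distance of feature vector x from a class described by (mean, cov)."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

def nearest_pose(x, pose_stats):
    """Assign x to the pose class whose Haar-wavelet response statistics it is closest to."""
    return min(pose_stats, key=lambda p: mahalanobis(x, *pose_stats[p]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy statistics of selected Haar-wavelet responses for three coarse poses.
    pose_stats = {}
    for name, offset in [("frontal", 0.0), ("left", 2.0), ("right", -2.0)]:
        samples = rng.normal(offset, 1.0, size=(200, 4))
        pose_stats[name] = (samples.mean(axis=0), np.cov(samples, rowvar=False))
    probe = rng.normal(2.0, 1.0, size=4)    # responses of a face turned left
    print(nearest_pose(probe, pose_stats))  # expected: 'left'
```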

Pose Invariant View-Based Enhanced Fisher Linear Discriminant Models for Face Recognition

  • Lee, Sung-Oh; Park, Gwi-Tae
    • 제어로봇시스템학회:학술대회논문집 / 2001 ICCAS / pp.101.2-101 / 2001
  • This paper proposes a novel face recognition algorithm that recognizes human faces robustly in indoor environments under varying conditions such as changes in pose, illumination, and expression. A conventional automatic face recognition system consists of a detection part and a recognition part, and the detection part generally dominates the overall recognition rate. Therefore, we use a view-specific eigenface method as a preprocessor to estimate the pose of the face in the input image, and then apply the Enhanced Fisher Linear Discriminant Models (EFM) to its output twice, since the EFM both recognizes the face and effectively reduces normalization error. To deal with the view-varying problem, we build a separate basis vector set for each view. Finally, the dimensionalities of ...
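
A minimal sketch of the view-specific eigenface idea, one PCA basis per view with the probe assigned to the view whose basis reconstructs it best, is shown below on synthetic data; the image size, number of components, and view labels are assumptions for illustration.

```python
import numpy as np

def fit_view_basis(images: np.ndarray, n_components: int = 8):
    """PCA (eigenface) basis for one view; `images` is (n_samples, n_pixels)."""
    mean = images.mean(axis=0)
    centered = images - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt = principal directions
    return mean, vt[:n_components]

def reconstruction_error(image: np.ndarray, basis) -> float:
    mean, components = basis
    coeffs = (image - mean) @ components.T
    recon = mean + coeffs @ components
    return float(np.linalg.norm(image - recon))

def estimate_view(image, view_bases):
    """Pick the view whose eigenface basis reconstructs the probe best."""
    return min(view_bases, key=lambda v: reconstruction_error(image, view_bases[v]))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    views = {}
    for name, shift in [("frontal", 0.0), ("profile", 3.0)]:
        views[name] = fit_view_basis(rng.normal(shift, 1.0, size=(50, 64)))
    probe = rng.normal(3.0, 1.0, size=64)
    print(estimate_view(probe, views))  # expected: 'profile'
```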


단일 이미지에 기반을 둔 사람의 포즈 추정에 대한 연구 동향 (Recent Trends in Human Pose Estimation Based on a Single Image)

  • 조정찬
    • 한국차세대컴퓨팅학회논문지 / Vol. 15, No. 5 / pp.31-42 / 2019
  • With the recent advances in deep learning, remarkable results continue to appear across many areas of computer vision research. Research on estimating 2D and 3D human poses from a single image has likewise shown dramatic performance improvements, and many researchers are actively extending the scope of the problem. Human pose estimation is an important research area with a wide range of applications; in particular, in image and video analysis the human pose is a key element for understanding actions, states, and intentions. Against this background, this paper surveys research trends in human pose estimation from a single image. Because a variety of approaches exist for solving the problem robustly and accurately, the paper reviews the field separately for 2D and 3D pose estimation. Finally, it examines the datasets required for this research and various applications of human pose estimation techniques.

머신러닝 기반 낙상 인식 알고리즘 (Fall Detection Algorithm Based on Machine Learning)

  • 정준현; 김남호
    • 한국정보통신학회:학술대회논문집 / 2021 Fall Conference / pp.226-228 / 2021
  • We propose a vision-based fall detection algorithm that uses the pose detection feature of Google's ML Kit API. Falls are recognized using the 33 three-dimensional body keypoints extracted by the pose detection algorithm, and a k-NN classifier analyzes the extracted keypoints to recognize falls. A normalization step makes the method independent of the image size and of the size of the human body in the image, and falls are recognized by analyzing the relative movement of the keypoints. Falls were recognized in all 13 of the 13 test videos used in the experiment, a success rate of 100%.
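
Assuming the 33 ML Kit landmarks have already been extracted on the mobile device, the normalization and k-NN classification steps can be sketched as follows; the synthetic training data and the k value are illustrative only.

```python
import numpy as np

def normalize_pose(keypoints: np.ndarray) -> np.ndarray:
    """Make a pose independent of image size and body scale.

    `keypoints` is an (N, 3) array of x, y, z coordinates (ML Kit reports 33
    landmarks). Center on the mean and divide by the overall spread.
    """
    centered = keypoints - keypoints.mean(axis=0)
    scale = np.linalg.norm(centered) + 1e-8
    return (centered / scale).ravel()

def knn_predict(query: np.ndarray, train_x: np.ndarray, train_y: np.ndarray, k: int = 3) -> int:
    """Plain k-NN over normalized pose vectors (1 = fall, 0 = normal)."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    return int(np.round(nearest.mean()))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Synthetic stand-ins for extracted poses: upright poses are tall, fallen poses are wide.
    upright = [normalize_pose(rng.normal([0, 0, 0], [0.1, 0.5, 0.05], size=(33, 3))) for _ in range(20)]
    fallen = [normalize_pose(rng.normal([0, 0, 0], [0.5, 0.1, 0.05], size=(33, 3))) for _ in range(20)]
    train_x = np.array(upright + fallen)
    train_y = np.array([0] * 20 + [1] * 20)
    probe = normalize_pose(rng.normal([0, 0, 0], [0.5, 0.1, 0.05], size=(33, 3)))
    print("fall" if knn_predict(probe, train_x, train_y) else "normal")
```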


다중크기와 다중객체의 실시간 얼굴 검출과 머리 자세 추정을 위한 심층 신경망 (Multi-Scale, Multi-Object and Real-Time Face Detection and Head Pose Estimation Using Deep Neural Networks)

  • 안병태; 최동걸; 권인소
    • 로봇학회논문지 / Vol. 12, No. 3 / pp.313-321 / 2017
  • Face-related applications such as face recognition, facial expression recognition, driver state monitoring, and gaze estimation are among the most frequently performed tasks in human-robot interaction (HRI), intelligent vehicles, and security systems. In these applications, accurate head pose estimation is an important issue. However, conventional methods have lacked the accuracy, robustness, or processing speed required in practical use. In this paper, we propose a novel method for estimating head pose with a monocular camera. The proposed algorithm is based on a deep neural network for multi-task learning using a small grayscale image. This network jointly detects multi-view faces and estimates head pose under hard environmental conditions such as illumination change and large pose change. The proposed framework quantitatively and qualitatively outperforms the state-of-the-art method, with an average head pose mean error of less than 4.5° in real time.
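
A hedged sketch of a multi-task network of this kind, a shared convolutional trunk with one head for face/background classification and one head regressing (yaw, pitch, roll), is given below in PyTorch. The layer sizes, patch size, and loss weighting are assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class MultiTaskHeadPoseNet(nn.Module):
    """Toy multi-task network: shared convolutional trunk on a small grayscale
    patch, one head classifying face vs. background, one head regressing
    (yaw, pitch, roll). Layer sizes are illustrative."""

    def __init__(self, patch_size: int = 32):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * (patch_size // 4) ** 2, 128), nn.ReLU(),
        )
        self.face_head = nn.Linear(128, 2)   # face / background
        self.pose_head = nn.Linear(128, 3)   # yaw, pitch, roll in degrees

    def forward(self, x):
        features = self.trunk(x)
        return self.face_head(features), self.pose_head(features)

if __name__ == "__main__":
    net = MultiTaskHeadPoseNet()
    patches = torch.randn(4, 1, 32, 32)      # batch of small grayscale patches
    face_logits, pose = net(patches)
    # Joint loss: classification + pose regression (dummy labels for illustration).
    loss = nn.functional.cross_entropy(face_logits, torch.tensor([1, 0, 1, 1])) \
         + nn.functional.smooth_l1_loss(pose, torch.zeros(4, 3))
    print(face_logits.shape, pose.shape, float(loss))
```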

3차원 자세 추정 기법의 성능 향상을 위한 임의 시점 합성 기반의 고난도 예제 생성 (Hard Example Generation by Novel View Synthesis for 3-D Pose Estimation)

  • 김민지; 김성찬
    • 대한임베디드공학회논문지 / Vol. 19, No. 1 / pp.9-17 / 2024
  • It is widely recognized that for 3D human pose estimation (HPE), dataset acquisition is expensive and the augmentation techniques used in conventional visual recognition tasks are of limited effectiveness. We address these difficulties with a simple but effective method that augments the viewpoints of input images when training a 3D HPE model. Our intuition is that meaningful variants of the input images can be obtained by viewing a human instance from an arbitrary viewpoint different from that of the original image. The core idea is to synthesize new images that exhibit self-occlusion and are therefore difficult to predict from other viewpoints, even though the pose is the same as in the original example. We incorporate this idea into the training procedure of the 3D HPE model as an augmentation stage for the input samples. We show that the strategy for augmenting the synthesized examples should be carefully designed in terms of how frequently the augmentation is performed and how viewpoints are selected for synthesizing samples. To this end, we propose a new metric that measures the prediction difficulty of input images for 3D HPE in terms of the distance between corresponding keypoints on the two sides of the human body. Extensive exploration of the space of augmentation probabilities and of example selection according to the proposed distance metric leads to a performance gain of up to 6.2% on Human3.6M, the well-known pose estimation dataset.
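
The difficulty metric is described only as a distance between corresponding keypoints on the two sides of the body, so the sketch below encodes one plausible reading of it and uses it to gate a placeholder view-synthesis function. The left/right joint pairs, thresholds, probabilities, and the `synth_fn` hook are all assumptions for illustration, not the paper's procedure.

```python
import random
import numpy as np

# Assumed left/right joint index pairs in a 17-keypoint (COCO-style) skeleton.
LEFT_RIGHT_PAIRS = [(5, 6), (7, 8), (9, 10), (11, 12), (13, 14), (15, 16)]

def side_distance(pose3d: np.ndarray) -> float:
    """Average 3-D distance between corresponding left/right keypoints.

    Under the reading assumed here, small values suggest the two sides
    overlap (likely self-occlusion), i.e. a harder example.
    """
    return float(np.mean([np.linalg.norm(pose3d[l] - pose3d[r]) for l, r in LEFT_RIGHT_PAIRS]))

def maybe_augment(image, pose3d, synth_fn, base_prob=0.3, hard_threshold=0.25):
    """Apply novel-view synthesis (`synth_fn`, a hypothetical hook) more eagerly
    for examples that are currently easy, to create harder variants."""
    prob = base_prob if side_distance(pose3d) < hard_threshold else 2 * base_prob
    if random.random() < min(prob, 1.0):
        return synth_fn(image, pose3d)   # new image + pose seen from another viewpoint
    return image, pose3d

if __name__ == "__main__":
    dummy_pose = np.random.default_rng(4).normal(size=(17, 3))
    identity_synth = lambda img, pose: (img, pose)   # stand-in for a real view synthesizer
    _, _ = maybe_augment(np.zeros((256, 256, 3)), dummy_pose, identity_synth)
    print("side distance:", round(side_distance(dummy_pose), 3))
```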

Development of Pose-Invariant Face Recognition System for Mobile Robot Applications

  • Lee, Tai-Gun; Park, Sung-Kee; Kim, Mun-Sang; Park, Mig-Non
    • 제어로봇시스템학회:학술대회논문집 / 2003 ICCAS / pp.783-788 / 2003
  • In this paper, we present a new approach to detecting and recognizing human faces in images from a vision camera mounted on a mobile robot platform. Because the camera platform moves, the captured facial images are small and vary in pose, so the algorithm must cope with these constraints while detecting and recognizing faces in nearly real time. In the detection step, a 'coarse to fine' strategy is used. First, the region boundary containing the face is roughly located using dual ellipse templates of facial color, and within this region the locations of the three main facial features, the two eyes and the mouth, are estimated. To do so, simplified facial feature maps based on characteristic chrominance are computed, and candidate pixels are segmented into eye or mouth pixel groups. These candidate facial features are then verified by checking whether the length and orientation of the feature pairs are consistent with face geometry. In the recognition step, a pseudo-convex hull area of the gray face image is defined that includes the feature triangle connecting the two eyes and the mouth. A random lattice line set is composed and laid over this convex hull area, and the 2D appearance of the area is represented from it. Through these procedures, facial information is obtained for each detected face, and the face DB images of each person class are processed in the same way. Based on the facial information of these areas, a distance measure over the matched lattice lines is calculated, and the face image is recognized using this measure as a classifier. The proposed detection and recognition algorithms overcome the constraints of the previous approach [15], make real-time face detection and recognition possible, and guarantee correct recognition regardless of some pose variation of the face. Their usefulness in mobile robot applications is demonstrated.
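
The geometric verification of candidate eye/mouth features described above can be sketched as a simple rule check on the eye-eye-mouth triangle; all thresholds and the coordinate convention below are illustrative assumptions, not the paper's values.

```python
import math

def plausible_face_triangle(left_eye, right_eye, mouth,
                            max_eye_tilt_deg=30.0,
                            mouth_ratio_range=(0.6, 1.8)):
    """Return True if the eye-eye-mouth triangle has face-like geometry.

    Checks that the eye line is roughly horizontal and that the distance from
    the eye midpoint to the mouth is commensurate with the eye distance.
    """
    eye_dx = right_eye[0] - left_eye[0]
    eye_dy = right_eye[1] - left_eye[1]
    eye_dist = math.hypot(eye_dx, eye_dy)
    if eye_dist == 0:
        return False
    tilt = abs(math.degrees(math.atan2(eye_dy, eye_dx)))
    if tilt > max_eye_tilt_deg:
        return False
    mid = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)
    mouth_dist = math.hypot(mouth[0] - mid[0], mouth[1] - mid[1])
    ratio = mouth_dist / eye_dist
    return mouth_ratio_range[0] <= ratio <= mouth_ratio_range[1]

if __name__ == "__main__":
    print(plausible_face_triangle((100, 100), (140, 102), (120, 145)))  # face-like geometry
    print(plausible_face_triangle((100, 100), (104, 160), (400, 400)))  # rejected
```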
