
Stroke Width Based Skeletonization for Text Images

  • Nguyen, Minh Hieu; Kim, Soo-Hyung; Yang, Hyung Jeong; Lee, Guee Sang
    • Journal of Computing Science and Engineering / v.8 no.3 / pp.149-156 / 2014
  • Skeletonization is a morphological operation that transforms an original object into a subset called a 'skeleton'. It has been studied intensively for decades, yet it remains challenging, especially for particular classes of target objects. This paper proposes a novel approach to the skeletonization of text images based on stroke width detection. First, a preliminary skeleton is detected using a Canny edge detector within a Tensor Voting framework. Second, the preliminary skeleton is smoothed, and junction points are connected by interpolation compensation. Experimental results show the validity of the proposed approach.
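
A minimal sketch of the general pipeline described above, assuming OpenCV and scikit-image: Canny edges plus a medial-axis transform stand in for the paper's Tensor Voting and interpolation-compensation steps, and the thresholds and Otsu binarization are assumptions rather than the authors' implementation.

```python
# Sketch: edge-guided text skeleton with a local stroke-width estimate.
# medial_axis is a stand-in for the tensor-voting step (assumption).
import cv2
from skimage.morphology import medial_axis

def text_skeleton(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Binarize text (dark strokes on a light background assumed).
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    edges = cv2.Canny(gray, 50, 150)              # stroke boundaries
    skel, dist = medial_axis(binary > 0, return_distance=True)
    stroke_width = 2.0 * dist[skel]               # local width along the skeleton
    return skel, stroke_width, edges
```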

HSFE Network and Fusion Model based Dynamic Hand Gesture Recognition

  • Tai, Do Nhu; Na, In Seop; Kim, Soo Hyung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.9 / pp.3924-3940 / 2020
  • Dynamic hand gesture recognition (d-HGR) plays an important role in human-computer interaction (HCI) systems. With the growth of hand-pose estimation and 3D depth sensors, depth and hand-skeleton datasets have been introduced, prompting extensive research on depth-based and 3D hand-skeleton-based approaches. However, the problem remains challenging because of low resolution, high complexity, and self-occlusion. In this paper, we propose a hand-shape feature extraction (HSFE) network to produce robust hand-shape features. We build LSTM-based hand-shape and hand-skeleton models to exploit the temporal information in hand-shape and motion changes. Fusing the two models yields the best accuracy on the dynamic hand gesture (DHG) dataset.
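
A minimal sketch of a two-stream LSTM with late score fusion, the general structure described above. The feature dimensions, the 22-joint hand skeleton, the 14-class gesture set, and the equal-weight averaging are assumptions; this is not the authors' HSFE network.

```python
# Sketch: hand-shape stream + hand-skeleton stream, fused by averaging logits.
import torch
import torch.nn as nn

class StreamLSTM(nn.Module):
    def __init__(self, in_dim, hidden=128, n_classes=14):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                    # x: (batch, time, in_dim)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])              # per-class gesture logits

shape_stream = StreamLSTM(in_dim=256)        # hand-shape feature sequence
skel_stream = StreamLSTM(in_dim=22 * 3)      # 22 joints x (x, y, z) per frame

def fused_logits(shape_seq, skel_seq):
    # Late fusion: average the two streams' class scores.
    return 0.5 * (shape_stream(shape_seq) + skel_stream(skel_seq))
```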

Human Action Recognition Using Deep Data: A Fine-Grained Study

  • Rao, D. Surendra; Potturu, Sudharsana Rao; Bhagyaraju, V
    • International Journal of Computer Science & Network Security / v.22 no.6 / pp.97-108 / 2022
  • The field of video-based human action recognition [1] is one of the most active in computer vision research. Since the depth data [2] obtained by Kinect cameras offers advantages over traditional RGB data, research on human action detection based on the Kinect camera has increased recently. In this article, we conducted a systematic study of strategies for recognizing human activity from depth data. All methods are grouped into depth-map-based tactics and skeleton-based tactics, and a comparison with some of the more traditional strategies is also covered. We then examine the specifics of different depth-based action databases and provide a straightforward distinction between them, and we discuss the advantages and disadvantages of depth- and skeleton-based techniques.

Remote Image Control by Hand Motion Detection (손동작 인지에 의한 원격 영상 제어)

  • Lim, Jung-Geun; Han, Kyongho
    • Journal of IKEEE / v.16 no.4 / pp.369-374 / 2012
  • This paper addresses a UX implementation for system control using visual hand-motion input. Microsoft's Kinect sensor is used to acquire the user's skeleton from the 3D depth map at a rate of 30 frames per second, which yields the x-y coordinates of the hand joints. Changes in the hands' x-y coordinates between the current frame and the next indicate the direction and rotation of the motion, and these hand motions are used as UX input commands for remote image control on smart TVs and similar devices. Experiments demonstrate the implementation of the proposed idea.
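
A minimal sketch of mapping a frame-to-frame hand-joint displacement to a directional command, in the spirit of the abstract; the threshold, the command names, and the image-axis convention are assumptions, not the paper's actual mapping.

```python
# Sketch: turn per-frame hand movement into a simple directional command.
def hand_command(prev_xy, curr_xy, min_move=0.05):
    """prev_xy, curr_xy: (x, y) of the hand joint in consecutive frames."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    if abs(dx) < min_move and abs(dy) < min_move:
        return None                        # movement too small: no command
    if abs(dx) >= abs(dy):
        return "RIGHT" if dx > 0 else "LEFT"
    return "DOWN" if dy > 0 else "UP"      # image y is assumed to grow downward
```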

A Study on the Gesture Matching Method for the Development of Gesture Contents (체감형 콘텐츠 개발을 위한 연속동작 매칭 방법에 관한 연구)

  • Lee, HyoungGu
    • Journal of Korea Game Society / v.13 no.6 / pp.75-84 / 2013
  • This paper introduces a method for recording and matching poses and gestures on the Windows PC platform. The method uses the Xtion gesture-detection camera for Windows PCs. To develop the method, an API is first built that processes and compares the depth, RGB image, and skeleton data obtained from the camera. A pose-matching method that selectively compares only valid joints is developed. For gesture matching, a recognition method is developed that can distinguish incorrect poses occurring between the defined poses. A tool is also developed that records and tests sample data to extract the specified poses and gestures. Six different poses and gestures were captured and tested; poses were recognized with 100% accuracy and gestures with 99% accuracy, validating the proposed method.
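
A minimal sketch of pose matching restricted to valid joints, as described above: joints that are not tracked in either pose are skipped, and the mean distance over the remaining joints is compared with a threshold. The distance metric and threshold value are assumptions.

```python
# Sketch: compare two poses using only joints marked valid in both.
import numpy as np

def pose_match(pose_a, pose_b, valid_a, valid_b, thresh=0.1):
    """pose_*: (J, 3) joint positions; valid_*: (J,) boolean tracking flags."""
    both = np.asarray(valid_a) & np.asarray(valid_b)
    if not both.any():
        return False                       # nothing reliable to compare
    diff = np.asarray(pose_a)[both] - np.asarray(pose_b)[both]
    return np.linalg.norm(diff, axis=1).mean() < thresh
```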

Robust Features Extraction by Human-based Hybrid Silhouette (하이브리드 실루엣 기반 인간의 강인한 특징 점 추출)

  • Kim, Jong-Seon; Park, Jin-Bae; Joo, Young-Hoon
    • Journal of Institute of Control, Robotics and Systems / v.15 no.4 / pp.433-438 / 2009
  • In this paper, we propose a robust method for extracting human feature points using a skeleton model and a hybrid silhouette model. The proposed feature extraction is divided into hand, shoulder-line, and elbow-region extraction. We use color information to locate the hands and propose a circle-detection method to extract the shoulder line and elbows. Finally, we show the effectiveness and feasibility of the proposed method through experiments.
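
A minimal sketch of circle detection on a person silhouette, using OpenCV's Hough transform as a stand-in for the paper's circle-detection method for shoulder-line and elbow extraction; every parameter value here is an assumption.

```python
# Sketch: propose circular joint candidates (e.g., shoulders, elbows) on a mask.
import cv2
import numpy as np

def find_joint_circles(silhouette):
    """silhouette: 8-bit single-channel binary mask of the person."""
    blurred = cv2.GaussianBlur(silhouette, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, 1.2, 30,
                               param1=100, param2=20,
                               minRadius=5, maxRadius=40)
    # Each candidate is (center_x, center_y, radius).
    return [] if circles is None else np.round(circles[0]).astype(int)
```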

Human Skeleton Keypoints based Fall Detection using GRU (PoseNet과 GRU를 이용한 Skeleton Keypoints 기반 낙상 감지)

  • Kang, Yoon Kyu; Kang, Hee Yong; Weon, Dal Soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.127-133 / 2021
  • Recent studies of human falls have focused on analyzing fall motions with recurrent neural networks (RNNs) and deep learning approaches, achieving good results in detecting 2D human poses from a single color image. In this paper, we investigate a detection method that estimates the positions of the head and shoulder keypoints and the acceleration of their positional change, using skeletal keypoint information extracted by PoseNet from images captured with a low-cost 2D RGB camera, to increase the accuracy of fall judgments. In particular, we propose a fall-detection method based on the characteristics of the post-fall posture within the fall motion-analysis method. A public dataset was used to extract human skeletal features, and in experiments to find a feature-extraction method that achieves high classification accuracy, the proposed method detected falls with a 99.8% success rate, more effective than a conventional method that uses the skeletal data directly.
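
A minimal sketch of the general idea: velocity and acceleration features derived from 2D keypoint trajectories feed a GRU classifier. The feature layout, hidden size, and two-class head are assumptions and omit the paper's post-fall-posture analysis.

```python
# Sketch: keypoint position/velocity/acceleration features + GRU fall classifier.
import torch
import torch.nn as nn

def keypoint_features(kps):
    """kps: (T, K, 2) tensor of 2D keypoints (e.g., head and shoulders) over time."""
    vel = kps[1:] - kps[:-1]               # positional change per frame
    acc = vel[1:] - vel[:-1]               # acceleration of that change
    t = acc.shape[0]
    return torch.cat([kps[2:].reshape(t, -1),
                      vel[1:].reshape(t, -1),
                      acc.reshape(t, -1)], dim=1)

class FallGRU(nn.Module):
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # fall / no-fall

    def forward(self, x):                  # x: (batch, time, in_dim)
        _, h = self.gru(x)
        return self.head(h[-1])
```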

Fast Human Detection Algorithm for High-Resolution CCTV Camera (고해상도 CCTV 카메라를 위한 빠른 사람 검출 알고리즘)

  • Park, In-Cheol
    • Journal of the Korea Academia-Industrial cooperation Society / v.15 no.8 / pp.5263-5268 / 2014
  • This paper proposes a fast human detection algorithm that can be applied to high-resolution CCTV cameras. Human detection algorithms that use a HOG detector show high performance in image processing, but they are difficult to apply to real-time high-resolution video because HOG feature extraction is slow. To resolve this problem, we propose a two-stage detection method: first, candidate human regions are found using background subtraction, and then a HOG detector, applied only to those candidates, distinguishes humans from non-humans. This process increases the detection speed by approximately 2.5 times without any degradation in detection performance.
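
A minimal sketch of the two-stage idea described above, assuming OpenCV: background subtraction proposes candidate regions, and the default HOG person detector runs only inside those regions rather than over the full high-resolution frame. The MOG2 subtractor, blob-size threshold, and window stride are assumptions.

```python
# Sketch: background subtraction to propose regions, then HOG on the regions only.
import cv2

bg = cv2.createBackgroundSubtractorMOG2()
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people(frame):
    mask = bg.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    people = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 2000:                   # ignore small foreground blobs
            continue
        roi = frame[y:y + h, x:x + w]
        rects, _ = hog.detectMultiScale(roi, winStride=(8, 8))
        people += [(x + rx, y + ry, rw, rh) for rx, ry, rw, rh in rects]
    return people
```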

Three-dimensional human activity recognition by forming a movement polygon using posture skeletal data from depth sensor

  • Vishwakarma, Dinesh Kumar; Jain, Konark
    • ETRI Journal / v.44 no.2 / pp.286-299 / 2022
  • Human activity recognition in real time is a challenging task. Recently, a plethora of methods based on deep learning architectures has been proposed. Implementing these architectures requires high computing power and a massive database. In contrast, machine learning models based on handcrafted features need less computing power and are very accurate when the features are effectively extracted. In this study, we propose a handcrafted model based on three-dimensional sequential skeleton data. The movement of the human body skeleton in each frame is computed from the joint positions in that frame. The joints of these skeletal frames are projected into two-dimensional space, forming a "movement polygon." These polygons are further transformed into a one-dimensional space by computing amplitudes at different angles from the centroid of the polygons. The feature vector is formed by sampling these amplitudes at different angles. The performance of the algorithm is evaluated using a support vector machine on four public datasets: MSR Action3D, Berkeley MHAD, TST Fall Detection, and NTU-RGB+D; the highest accuracies achieved on these datasets are 94.13%, 93.34%, 95.7%, and 86.8%, respectively. These accuracies compare favorably with similar state-of-the-art methods, showing superior performance.
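
A minimal sketch of the per-frame "movement polygon" descriptor described above: joints projected to 2D are expressed as amplitudes relative to the polygon centroid, sampled over evenly spaced angle bins, and the resulting fixed-length vector can be fed to a support vector machine. The number of bins and the per-bin maximum are assumptions rather than the paper's exact sampling scheme.

```python
# Sketch: angle-sampled amplitudes around the centroid of the projected joints.
import numpy as np

def movement_polygon_features(joints_2d, n_angles=36):
    """joints_2d: (J, 2) projected joint positions for one frame."""
    pts = np.asarray(joints_2d, dtype=float)
    rel = pts - pts.mean(axis=0)                    # offsets from the centroid
    ang = np.arctan2(rel[:, 1], rel[:, 0]) % (2 * np.pi)
    amp = np.linalg.norm(rel, axis=1)
    bins = np.minimum((ang / (2 * np.pi) * n_angles).astype(int), n_angles - 1)
    feat = np.zeros(n_angles)
    for b, a in zip(bins, amp):
        feat[b] = max(feat[b], a)                   # strongest amplitude per angle
    return feat                                     # feature vector for the SVM
```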

Organization of Sensor System and User's Intent Detection Algorithm for Rehabilitation Robot (보행보조 재활로봇의 센서 시스템 구성 및 사용자 의도 감지 알고리즘)

  • Jung, Jun-Young; Park, Hyun-Sub; Lee, Duk-Yeon; Jang, In-Hun; Lee, Dong-Wook; Lee, Ho-Gil
    • Journal of Institute of Control, Robotics and Systems / v.16 no.10 / pp.933-938 / 2010
  • In this paper, we propose the organization of a sensor system and a user-intent detection algorithm for walking-assist rehabilitation robots. The main purpose of walking-assist rehabilitation robots is to help SCI patients walk in everyday environments. Using such robots in everyday environments requires considering various factors, including user safety and the detection of the user's intent. For these purposes, we analyzed use cases of rehabilitation robots, organized the sensor system for walking-assist rehabilitation robots, and developed an algorithm that detects the user's intent. We applied the proposed method to the rehabilitation robot ROBIN and verified its effectiveness with able-bodied subjects rather than patients.