• Title/Summary/Keyword: skeleton extraction


3D Automatic Skeleton Extraction of Coronary Artery for Interactive Shape Analysis (관상동맥의 인터랙티브 형상 분석을 위한 3차원 골격의 자동 생성)

  • Lee, Jae-Jin;Kim, Jeong-Sik;Choi, Soo-Mi
    • 한국HCI학회:학술대회논문집 / 2006.02a / pp.541-546 / 2006
  • To analyze a 3D coronary artery, it is very important to compactly represent the branch points, end points, and hierarchical structure of the vessels. In this paper, we develop a method that automatically extracts the 3D skeleton of the coronary artery from 3D CT angiography images. First, a mesh model of the 3D vessel surface is generated from the slice images acquired by CT angiography, for use in 3D manipulation and surgery simulation. The mesh model is then voxelized so that a skeleton can still be extracted automatically after the mesh is arbitrarily deformed. From the resulting voxel model, the surface voxels are determined, and a Euclidean distance map (EDM) is computed by calculating the Euclidean distance from each surface voxel to the object voxels. From the EDM, the maximal inscribed sphere of each object voxel is computed to generate a discrete medial surface, which serves as the skeleton candidate. Finally, the skeleton is extracted automatically by applying Dijkstra's shortest-path algorithm to the candidate voxel set. The extracted 3D skeleton can be used for various kinds of shape analysis, such as coronary artery surgery simulation.

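
The pipeline above — voxelize the mesh, build a Euclidean distance map, take medial-surface voxels as skeleton candidates, then run Dijkstra — can be sketched on a toy 2D voxel grid. The grid, the brute-force EDM, and the centering cost function below are illustrative assumptions, not the authors' implementation:

```python
import heapq
import math

# Toy 2D "voxel" grid: 1 = vessel (object), 0 = background.
GRID = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 1, 1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0],
]
H, W = len(GRID), len(GRID[0])

def euclidean_distance_map(grid):
    """Brute-force EDM: for each object voxel, distance to the nearest background voxel."""
    bg = [(y, x) for y in range(H) for x in range(W) if grid[y][x] == 0]
    return {(y, x): min(math.hypot(y - by, x - bx) for by, bx in bg)
            for y in range(H) for x in range(W) if grid[y][x] == 1}

def dijkstra_skeleton(edm, start, goal):
    """Shortest path through object voxels; the step cost favors voxels far
    from the surface, so the path stays near the medial axis (skeleton)."""
    dmax = max(edm.values())
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        y, x = node
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if (ny, nx) in edm and (ny, nx) not in seen:
                step = 1.0 + (dmax - edm[(ny, nx)])  # penalize near-surface voxels
                heapq.heappush(pq, (cost + step, (ny, nx), path + [(ny, nx)]))
    return None

edm = euclidean_distance_map(GRID)
path = dijkstra_skeleton(edm, (2, 1), (2, 5))
print(path)  # runs along the centerline row y == 2
```

On a real vessel the same idea applies in 3D with 6- or 26-connected neighbors; the distance map would come from a proper distance transform rather than this brute-force loop.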

Robust 2D human upper-body pose estimation with fully convolutional network

  • Lee, Seunghee;Koo, Jungmo;Kim, Jinki;Myung, Hyun
    • Advances in robotics research / v.2 no.2 / pp.129-140 / 2018
  • With the increasing demand for human pose estimation in applications such as human-computer interaction and human activity recognition, there have been numerous approaches to detecting the 2D poses of people in images more efficiently. Despite many years of research, estimating human poses from images still struggles to produce satisfactory results. In this study, we propose a robust 2D human body pose estimation method using an RGB camera sensor. Our method is efficient and cost-effective, since an RGB camera is economical compared to the high-priced sensors more commonly used. To estimate the upper-body joint positions, semantic segmentation with a fully convolutional network is exploited. From the acquired RGB images, joint heatmaps are used to accurately estimate the coordinates of each joint. The network architecture is designed to learn and detect joint locations via sequential prediction processing. The proposed method was tested and validated for efficient estimation of the human upper-body pose. The results reveal the potential of a simple RGB camera sensor for human pose estimation applications.
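
The heatmap-to-coordinate step the abstract describes can be illustrated with a minimal sketch: a synthetic Gaussian stands in for one FCN output channel, and the joint location is recovered as its peak. The heatmap size and sigma are assumptions, not the paper's network output:

```python
import math

def gaussian_heatmap(h, w, cy, cx, sigma=1.5):
    """Synthetic joint heatmap: a 2D Gaussian centered on the joint location.
    Stands in for one per-joint output channel of the FCN."""
    return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def joint_from_heatmap(heatmap):
    """Recover the joint coordinate as the heatmap's peak (argmax)."""
    best, coord = -1.0, None
    for y, row in enumerate(heatmap):
        for x, v in enumerate(row):
            if v > best:
                best, coord = v, (y, x)
    return coord

hm = gaussian_heatmap(16, 16, cy=5, cx=9)
print(joint_from_heatmap(hm))  # (5, 9)
```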

Extraction and Transfer of Gesture Information using ToF Camera (ToF 카메라를 이용한 제스처 정보의 추출 및 전송)

  • Park, Won-Chang;Ryu, Dae-Hyun;Choi, Tae-Wan
    • The Journal of the Korea institute of electronic communication sciences / v.9 no.10 / pp.1103-1109 / 2014
  • Most recent CCTV cameras are network cameras. When high-quality video is transmitted over the Internet, the large amount of image data can impose a heavy load on the network. In this study, we propose a method that reduces video traffic in this case and evaluate its performance. In certain circumstances, we extract and transmit gesture information using a ToF camera such as the Kinect instead of transmitting the video itself. Application of the proposed method may be restricted because it depends on the performance of the ToF camera; however, it can be applied efficiently to the security or safety management of a small interior space such as a home or office.
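
A rough back-of-envelope comparison shows why sending joint coordinates instead of raw frames cuts traffic so sharply; the frame size, joint count, and encoding below are illustrative assumptions, not measurements from the paper:

```python
# Per-second traffic: raw uncompressed video frames vs. transmitting only the
# skeleton joint coordinates extracted by a ToF camera. All figures here are
# illustrative assumptions, not measurements from the paper.

FPS = 30
FRAME_BYTES = 640 * 480 * 3   # one uncompressed VGA RGB frame
JOINTS = 20                   # e.g. a Kinect v1 skeleton
JOINT_BYTES = 3 * 4           # x, y, z as 32-bit floats

video_bps = FPS * FRAME_BYTES
gesture_bps = FPS * JOINTS * JOINT_BYTES

print(f"video:   {video_bps:,} B/s")
print(f"gesture: {gesture_bps:,} B/s")
print(f"reduction factor: {video_bps / gesture_bps:.0f}x")
```

Even against compressed video the gap stays large, which is why the approach suits a small monitored space where gesture information alone suffices.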

Detecting Complex 3D Human Motions with Body Model Low-Rank Representation for Real-Time Smart Activity Monitoring System

  • Jalal, Ahmad;Kamal, Shaharyar;Kim, Dong-Seong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.3 / pp.1189-1204 / 2018
  • Detecting and capturing 3D human structures from intensity-based image sequences is an inherently challenging problem that has attracted the attention of several researchers, especially in real-time activity recognition (Real-AR). Real-AR systems have been significantly enhanced by using depth sensors, which provide richer information than the RGB video sensors used by conventional systems. This study proposes a depth-based routine-logging Real-AR system that identifies daily human activity routines and turns the surroundings into an intelligent living space. Our real-time routine-logging Real-AR system consists of three stages: data collection with a depth camera, feature extraction based on joint information, and training/recognition of each activity. In addition, the recognition mechanism locates and pinpoints the learned activities and produces routine logs. Evaluation on depth datasets (a self-annotated dataset and MSRAction3D) demonstrated that the proposed system achieves better recognition rates and greater robustness than state-of-the-art methods. Our Real-AR system should be feasibly accessible and permanently usable in behavior monitoring applications, humanoid-robot systems, and e-medical therapy systems.
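
One plausible form of the joint-based feature extraction stage is a pairwise joint-distance vector computed per skeleton frame; the joint names and coordinates below are hypothetical, not the paper's feature set:

```python
import math

# One hypothetical skeleton frame: joint name -> (x, y, z) in meters.
# Joint names and coordinates are illustrative, not the paper's data.
FRAME = {
    "head":       (0.0, 1.7, 0.0),
    "torso":      (0.0, 1.1, 0.0),
    "left_hand":  (-0.5, 1.0, 0.2),
    "right_hand": (0.5, 1.0, 0.2),
}

def pairwise_distance_features(frame):
    """Joint-based features: Euclidean distances between all joint pairs,
    in a fixed (sorted) order so the vector is comparable across frames."""
    names = sorted(frame)
    feats = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            feats.append(math.dist(frame[names[i]], frame[names[j]]))
    return feats

features = pairwise_distance_features(FRAME)
print(len(features))  # 6 pairs for 4 joints
```

A per-activity classifier would then be trained on sequences of such vectors.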

Fitness Measurement system using deep learning-based pose recognition (딥러닝 기반 포즈인식을 이용한 체력측정 시스템)

  • Kim, Hyeong-gyun;Hong, Ho-Pyo;Kim, Yong-ho
    • Journal of Digital Convergence / v.18 no.12 / pp.97-103 / 2020
  • The proposed system is composed of two parts: an AI physical fitness measurement part and an AI physical fitness management part. In the measurement part, guidance through the fitness measurement and accurate calculation of the measured values are performed through deep learning-based pose recognition. Based on these measurements, the management part designs personalized exercise programs and provides them through dedicated smart applications. To guide the measurement posture, the subject's posture is captured with a webcam and the skeleton lines are extracted. The extracted skeleton lines are then compared with the learned skeleton lines of the reference preparation posture to determine whether the posture is normal, and voice guidance is provided to help the subject maintain the correct posture.
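
The posture check the abstract describes — compare extracted skeleton lines against a learned reference and decide normal/abnormal — can be sketched with joint angles and a tolerance; the joints, coordinates, and 10-degree tolerance are assumptions for illustration:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))

def posture_ok(measured, reference, tol_deg=10.0):
    """Compare each measured joint angle with the learned reference posture;
    the posture counts as 'normal' when every deviation is within tolerance."""
    return all(abs(m - r) <= tol_deg for m, r in zip(measured, reference))

# Hypothetical 2D joints: shoulder, elbow, wrist of reference vs. subject.
ref_elbow = joint_angle((0, 0), (1, 0), (2, 0))    # straight arm: 180 deg
sub_elbow = joint_angle((0, 0), (1, 0), (2, 0.1))  # slightly bent

print(posture_ok([sub_elbow], [ref_elbow]))  # True: within 10 degrees
```

A real system would check several such angles per pose and trigger the voice guidance whenever one falls outside tolerance.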

Efficient Collecting Scheme the Crack Data via Vector based Data Augmentation and Style Transfer with Artificial Neural Networks (벡터 기반 데이터 증강과 인공신경망 기반 특징 전달을 이용한 효율적인 균열 데이터 수집 기법)

  • Yun, Ju-Young;Kim, Donghui;Kim, Jong-Hyun
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.667-669 / 2021
  • In this paper, we propose a vector-based data augmentation technique for building a training dataset, together with a framework that uses convolutional neural networks (CNNs) to express patterns close to real cracks. Cracks in buildings are a cause of major accidents, including building collapses and falling-object accidents that result in casualties. Addressing this with artificial intelligence requires a large amount of data, but real crack images not only have complex patterns, they must also be captured in hazardous situations, which makes large-scale collection difficult. This database-construction problem can be mitigated by elastic distortion, which increases the amount of data by artificially deforming specific regions; in this paper, however, we show improved crack-pattern results using a CNN. Compared with elastic distortion, the CNN produced results more similar to real crack patterns, and by designing the augmentation on vector data rather than the commonly used pixel data, the method showed superior crack variability. Even with a small number of input crack samples, we could easily build a crack database by generating diverse crack directions and patterns. In the long term, this is expected to contribute to structural safety assessment and thus to safer, more comfortable living environments.

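
The vector-based augmentation idea — storing a crack as a polyline and deforming its control points to generate new samples — can be sketched as follows; the jitter model is an assumption, and the CNN that transfers realistic crack texture is not shown:

```python
import random

def augment_crack(polyline, n_variants=5, jitter=2.0, seed=42):
    """Vector-based augmentation: a crack stored as a polyline of control
    points is perturbed point-by-point to create new plausible crack shapes.
    (The CNN stage that adds realistic crack texture is not sketched here.)"""
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        variants.append([(x + rng.uniform(-jitter, jitter),
                          y + rng.uniform(-jitter, jitter))
                         for x, y in polyline])
    return variants

crack = [(0, 0), (10, 3), (22, 1), (35, 6)]  # hypothetical crack centerline
variants = augment_crack(crack)
print(len(variants), len(variants[0]))  # 5 variants, 4 points each
```

Because the deformation acts on the vector representation, each variant stays a valid connected crack path, which pixel-space warps such as elastic distortion do not guarantee.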

STAGCN-based Human Action Recognition System for Immersive Large-Scale Signage Content (몰입형 대형 사이니지 콘텐츠를 위한 STAGCN 기반 인간 행동 인식 시스템)

  • Jeongho Kim;Byungsun Hwang;Jinwook Kim;Joonho Seon;Young Ghyu Sun;Jin Young Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.6 / pp.89-95 / 2023
  • In recent decades, human action recognition (HAR) has demonstrated potential applications in sports analysis, human-robot interaction, and large-scale signage content. In this paper, a spatial-temporal attention graph convolutional network (STAGCN)-based HAR system is proposed. STAGCN assigns different weights to the spatio-temporal features of skeleton sequences, enabling key joints and viewpoints to be taken into account. Simulation results show that the proposed model improves classification accuracy on the NTU RGB+D dataset.
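
The core STAGCN operation — aggregating skeleton features over the joint graph while an attention weight re-scales each joint's contribution — can be sketched on a toy three-joint chain; the adjacency, features, and attention values are hand-set for illustration, not learned parameters from the paper:

```python
# Toy spatial graph convolution with joint attention. Features are aggregated
# over the skeleton graph and a per-joint attention weight re-scales each
# joint's contribution. All values below are illustrative, not learned.

ADJ = [  # 3-joint chain 0-1-2, with self-loops (A + I)
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
]
FEAT = [2.0, 4.0, 6.0]       # one scalar feature per joint
ATTENTION = [0.2, 1.0, 0.5]  # assumed importance per joint

def attn_graph_conv(adj, feat, attn):
    """out[i] = sum_j adj[i][j] * attn[j] * feat[j], degree-normalized."""
    out = []
    for row in adj:
        deg = sum(row)
        out.append(sum(a * w * f for a, w, f in zip(row, attn, feat)) / deg)
    return out

print(attn_graph_conv(ADJ, FEAT, ATTENTION))
```

In the full model this operation runs per frame over 25 NTU RGB+D joints, with temporal convolutions and learned attention stacked on top.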

A Study on Stroke Extraction for Handwritten Korean Character Recognition (필기체 한글 문자 인식을 위한 획 추출에 관한 연구)

  • Choi, Young-Kyoo;Rhee, Sang-Burm
    • The KIPS Transactions:PartB / v.9B no.3 / pp.375-382 / 2002
  • Handwritten character recognition is classified into on-line and off-line handwritten character recognition. On-line recognition has achieved remarkable results compared to off-line recognition because it can acquire dynamic writing information, such as stroke order and stroke position, through pen-based electronic input devices such as tablet boards. In contrast, no dynamic information can be acquired in off-line recognition: consonants and vowels overlap heavily and the images between strokes are noisy, so recognition performance depends on the preprocessing results. This paper proposes a method that effectively extracts strokes, including the dynamic information of characters, for off-line handwritten Korean character recognition. First, the method enhances and binarizes the input handwritten character image in a preprocessing step using a watershed algorithm. Next, a skeleton is extracted using a modified Lu and Wang thinning algorithm, and segment pixel arrays are extracted by detecting the feature points of the characters. Vectorization is then performed using a maximum-permissible-error method. When several strokes are bound into one segment, the segment pixel array is divided into two or more segment vectors. To reconstruct the extracted segment vectors into complete strokes, the directional components of the vectors are modified using a right-handed writing coordinate system. By combining adjacent segment vectors that can be merged, complete strokes suitable for character recognition are reconstructed. Experiments verify that the proposed method is suitable for handwritten Korean character recognition.
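
The maximum-permissible-error vectorization step reads like recursive polyline simplification; the Douglas-Peucker-style sketch below (an interpretation, not the authors' exact algorithm) turns a run of skeleton pixels into segment vectors:

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(by - ay, bx - ax)
    return num / den if den else math.dist(p, a)

def vectorize(pixels, max_error=1.0):
    """Turn a skeleton pixel run into segment vectors: keep the endpoints,
    recursively split at the pixel farthest from the chord while that
    distance exceeds the maximum permissible error."""
    if len(pixels) < 3:
        return list(pixels)
    dists = [point_line_dist(p, pixels[0], pixels[-1]) for p in pixels[1:-1]]
    k = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[k - 1] <= max_error:
        return [pixels[0], pixels[-1]]
    left = vectorize(pixels[:k + 1], max_error)
    return left[:-1] + vectorize(pixels[k:], max_error)

# An L-shaped run of skeleton pixels collapses to three vertices.
run = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
print(vectorize(run))  # [(0, 0), (3, 0), (3, 3)]
```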

The influence of mandibular skeletal characteristics on inferior alveolar nerve block anesthesia

  • You, Tae Min;Kim, Kee-Deog;Huh, Jisun;Woo, Eun-Jung;Park, Wonse
    • Journal of Dental Anesthesia and Pain Medicine / v.15 no.3 / pp.113-119 / 2015
  • Background: The inferior alveolar nerve block (IANB) is the most common anesthetic technique in dentistry; however, its success rate is low. The purpose of this study was to determine the correlation between IANB failure and mandibular skeletal characteristics. Methods: In total, 693 cases of lower third molar extraction (n = 575 patients) were examined. The ratio of the condylar and coronoid distances from the mandibular foramen (condyle-coronoid ratio [CC ratio]) was calculated, and the mandibular skeleton was classified as normal, retrognathic, or prognathic. The correlation between IANB failure and sex, treatment side, and the CC ratio was assessed. Results: The IANB failure rates for normal, retrognathic, and prognathic mandibles were 7.3%, 14.5%, and 9.5%, respectively, and the failure rate was highest among patients with a CC ratio < 0.8 (severely retrognathic mandible). The failure rate was significantly higher in the retrognathic group than in the normal group (P = 0.019); there was no statistically significant difference between the other groups. Conclusions: IANB failure can be attributed, in part, to the skeletal characteristics of the mandible, and the failure rate was significantly higher in the retrognathic group.

Method for Classification of Age and Gender Using Gait Recognition (걸음걸이 인식을 통한 연령 및 성별 분류 방법)

  • Yoo, Hyun Woo;Kwon, Ki Youn
    • Transactions of the Korean Society of Mechanical Engineers A / v.41 no.11 / pp.1035-1045 / 2017
  • Classification of age and gender has been attempted through different approaches, such as facial-based and audio-based classification. Limitations of facial-based methods include a reduced recognition rate over large distances and the prerequisite that faces be located in front of the camera. Similarly, in audio-based methods, the recognition rate drops in noisy environments. In contrast, gait-based methods only require that the target person be within the camera's view. In previous works, only a side-view camera was considered, and gait datasets consisted of standardized gaits, which differ from the ordinary gaits seen in real environments. We propose a feature extraction method using skeleton models from an RGB-D sensor that considers the age- and gender-related characteristics of ordinary gait. Experimental results show that the proposed method can efficiently classify age and gender within a target group of individuals in real-life environments.
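
As a sketch of what gait features from an RGB-D skeleton sequence might look like, the snippet below derives step length and a rough cadence from ankle separation; the trajectory values and frame rate are invented for illustration, not the paper's feature set:

```python
# Gait-feature sketch for age/gender classification from a skeleton sequence.
# Horizontal ankle positions (m) per frame; peaks of |left - right| separation
# mark moments of full stride extension. All values below are illustrative.

left_ankle  = [0.0, 0.1, 0.3, 0.5, 0.5, 0.5, 0.5, 0.7, 0.9, 1.0]
right_ankle = [0.5, 0.5, 0.5, 0.5, 0.7, 0.9, 1.0, 1.0, 1.0, 1.5]

def gait_features(left, right, fps=30):
    """Return (step length, rough cadence) from per-frame ankle positions."""
    sep = [abs(l - r) for l, r in zip(left, right)]
    # interior local maxima of ankle separation ~ full stride extension
    peaks = [i for i in range(1, len(sep) - 1)
             if sep[i] >= sep[i - 1] and sep[i] > sep[i + 1]]
    step_length = max(sep)                 # widest ankle separation (m)
    cadence = len(peaks) * fps / len(sep)  # strides per second (rough)
    return step_length, cadence

print(gait_features(left_ankle, right_ankle))
```

Features of this kind, computed over many strides, would feed the age/gender classifier the abstract describes.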