• Title/Summary/Keyword: Hand Model


Hand Segmentation Using Depth Information and Adaptive Threshold by Histogram Analysis with Color Clustering

  • Fayya, Rabia;Rhee, Eun Joo
    • Journal of Korea Multimedia Society
    • /
    • v.17 no.5
    • /
    • pp.547-555
    • /
    • 2014
  • This paper presents a method for hand segmentation that uses depth information and an adaptive threshold obtained by histogram analysis and color clustering in the HSV color model. Based on the depth information, the hand is treated as the object nearest to the camera relative to the background. The hand-color threshold is then determined adaptively by clustering, matching the color values of the input image against the regions of the hue histogram. Experimental results demonstrate an accuracy rate of 95%. We therefore confirm that the proposed method is effective for hand segmentation under variations in hand color, scale, rotation, and pose, under different lighting conditions, and against arbitrarily colored backgrounds.
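
As an illustration only (not the authors' code), the following Python sketch, assuming OpenCV, NumPy, and a registered color/depth pair, shows the general idea: gate the image by depth to keep the object nearest to the camera, then refine the mask with an adaptive hue threshold around the dominant histogram peak (a simple stand-in for the paper's color clustering). The margin and hue-band widths are illustrative assumptions.

```python
import cv2
import numpy as np

def segment_hand(bgr, depth_mm, near_margin_mm=150, hue_band=12):
    """Rough hand segmentation: nearest-object depth gate + adaptive hue threshold.

    bgr      : H x W x 3 color image (uint8)
    depth_mm : H x W depth map in millimeters (0 = invalid)
    """
    valid = depth_mm > 0
    nearest = depth_mm[valid].min()                      # hand assumed nearest to camera
    depth_mask = valid & (depth_mm < nearest + near_margin_mm)

    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[..., 0]
    # Build a hue histogram over the depth-gated pixels and pick its dominant peak.
    hist = cv2.calcHist([hsv], [0], depth_mask.astype(np.uint8), [180], [0, 180]).ravel()
    peak = int(hist.argmax())
    # Adaptive color threshold: keep hues within a band around the dominant peak.
    color_mask = cv2.inRange(hue, max(peak - hue_band, 0), min(peak + hue_band, 179))

    hand_mask = depth_mask & (color_mask > 0)
    return hand_mask.astype(np.uint8) * 255
```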

A Study on the Fraud Detection in an Online Second-hand Market by Using Topic Modeling and Machine Learning (토픽 모델링과 머신 러닝 방법을 이용한 온라인 C2C 중고거래 시장에서의 사기 탐지 연구)

  • Dongwoo Lee;Jinyoung Min
    • Information Systems Review
    • /
    • v.23 no.4
    • /
    • pp.45-67
    • /
    • 2021
  • As the transaction volume of the C2C second-hand market grows, the number of frauds, in which sellers seek unfair gains by sending products different from those specified or by not sending them at all, is also increasing. This study explores a model that can identify frauds in the online C2C second-hand market by examining transaction postings. For this goal, we collected 145,536 field data points from an actual C2C second-hand market. A model was then built from characteristics of the postings, such as the topic and linguistic characteristics of the product description, together with characteristics of the products, postings, sellers, and transactions, and trained with the XGBoost machine learning algorithm. The final analysis shows that fraudulent postings contain less, and less specific, information, fewer nouns and images, a higher ratio of numerals and white space, and shorter text than genuine postings. Also, while genuine postings focus on product information in their nouns, delivery information in their verbs, and actions in their adjectives, fraudulent postings did not show these characteristics. This study shows that a variety of features can be extracted from postings written for C2C second-hand transactions and used to construct an effective fraud detection model, and the proposed model can also be applied to other C2C platforms. Overall, the model proposed in this study can be expected to help suppress and prevent fraudulent behavior in online C2C markets.
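
The abstract names XGBoost as the learning algorithm; the feature columns and placeholder data below are assumptions for illustration only, not the study's feature set. A minimal sketch of training such a posting-level fraud classifier in Python:

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Illustrative per-posting feature vector (columns are assumptions, not the paper's exact set):
# [description_length, noun_count, image_count, digit_ratio, whitespace_ratio, price, seller_history]
X = np.random.rand(1000, 7)                     # placeholder features
y = np.random.randint(0, 2, size=1000)          # 1 = fraudulent posting, 0 = genuine

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

clf = xgb.XGBClassifier(n_estimators=300, max_depth=6,
                        learning_rate=0.1, eval_metric="logloss")
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```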

Robust 3D Hand Tracking based on a Coupled Particle Filter (결합된 파티클 필터에 기반한 강인한 3차원 손 추적)

  • Ahn, Woo-Seok;Suk, Heung-Il;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.1
    • /
    • pp.80-84
    • /
    • 2010
  • Hand tracking is an essential technique for hand gesture recognition, which is an efficient means of Human-Computer Interaction (HCI). Recently, many researchers have focused on hand tracking with 3D hand models and have shown more robust tracking results than with 2D hand models. In this paper, we propose a novel 3D hand tracking method based on a coupled particle filter. It provides robust and fast tracking by estimating the global hand pose and the local finger motions separately and then using each estimate as a prior for the other. Furthermore, to improve robustness, we apply a multi-cue method that integrates color-based area matching and edge-based distance matching. In our experiments, the proposed method showed robust tracking results for complex hand motions against a cluttered background.
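
This is not the authors' implementation, but a minimal NumPy sketch of one generic particle-filter cycle (predict, weight, resample), which a coupled scheme like the one described would run separately for the global pose and the finger motions, each conditioning the other's likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observe_likelihood, motion_noise=0.05):
    """One particle-filter cycle: predict -> weight -> resample.

    particles : (N, D) state hypotheses (e.g., global hand pose or finger angles)
    weights   : (N,) normalized particle weights
    observe_likelihood : callable mapping a state to p(observation | state)
    """
    # Predict: diffuse particles with a simple random-walk motion model (an assumption).
    particles = particles + rng.normal(0.0, motion_noise, size=particles.shape)

    # Update: re-weight particles by how well they explain the current observation.
    weights = np.array([observe_likelihood(p) for p in particles])
    weights = weights / (weights.sum() + 1e-12)

    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# In a coupled scheme, the estimate from the global-pose filter would condition the
# likelihood used by the finger-motion filter at the next frame, and vice versa.
```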

Dynamic Hand Gesture Recognition Using CNN Model and FMM Neural Networks (CNN 모델과 FMM 신경망을 이용한 동적 수신호 인식 기법)

  • Kim, Ho-Joon
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.95-108
    • /
    • 2010
  • In this paper, we present a hybrid neural network model for dynamic hand gesture recognition. The model consists of two modules: a feature extraction module and a pattern classification module. We first propose a modified CNN (Convolutional Neural Network) pattern recognition model for the feature extraction module, and then introduce a weighted fuzzy min-max (WFMM) neural network for the pattern classification module. The data representation proposed in this research is a spatiotemporal template based on the motion information of the target object. To minimize the influence of spatial and temporal variation of the feature points, we extend the receptive field of the CNN model to a three-dimensional structure. We discuss the learning capability of the WFMM neural network, in which a weight concept is added to represent the frequency factor of the training pattern set. The model can overcome the performance degradation that may be caused by the hyperbox contraction process of conventional FMM neural networks. The validity of the proposed models is discussed based on experimental results on human action recognition and on dynamic hand gesture recognition for remote control of electric home appliances.
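
To make the fuzzy min-max idea concrete, here is a simplified Python sketch (full membership inside a hyperbox, linear decay outside), not the exact formula from the paper; the frequency-based weighting of each hyperbox is likewise only an assumed, hypothetical rendering of the "weight concept" mentioned in the abstract:

```python
import numpy as np

def hyperbox_membership(x, v_min, w_max, gamma=4.0):
    """Simplified fuzzy min-max membership of pattern x in hyperbox [v_min, w_max]."""
    below = np.maximum(0.0, v_min - x)        # how far x falls below the box minimum
    above = np.maximum(0.0, x - w_max)        # how far x exceeds the box maximum
    per_dim = np.maximum(0.0, 1.0 - gamma * np.maximum(below, above))
    return per_dim.min()                      # membership limited by the worst dimension

def weighted_class_score(x, boxes):
    """boxes: list of (v_min, w_max, weight, class_label); weight is a hypothetical
    frequency factor scaling the membership of each hyperbox."""
    scores = {}
    for v, w, wt, label in boxes:
        m = wt * hyperbox_membership(x, v, w)
        scores[label] = max(scores.get(label, 0.0), m)
    return max(scores, key=scores.get)
```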

A Personalized Hand Gesture Recognition System using Soft Computing Techniques (소프트 컴퓨팅 기법을 이용한 개인화된 손동작 인식 시스템)

  • Jeon, Moon-Jin;Do, Jun-Hyeong;Lee, Sang-Wan;Park, Kwang-Hyun;Bien, Zeung-Nam
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.1
    • /
    • pp.53-59
    • /
    • 2008
  • Recently, vision-based hand gesture recognition techniques have been developed to help elderly and disabled people control home appliances. The problems that most frequently lower the hand gesture recognition rate are inter-person variation and intra-person variation. The recognition difficulty caused by inter-person variation can be handled with user-dependent models and a model selection technique, while the difficulty caused by intra-person variation can be handled with fuzzy logic. In this paper, we propose a multivariate fuzzy decision tree learning and classification method for a hand motion recognition system serving multiple users. When a user starts to use the system, the most appropriate recognition model is selected and used for that user.
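
The following Python sketch illustrates only the personalization idea of selecting the best-fitting user-dependent model from a few calibration gestures; the triangular membership, the selection criterion, and the stored models are all assumptions, not the paper's method:

```python
import numpy as np

def triangular_membership(x, center, width):
    """Simple triangular fuzzy membership, standing in for soft split tests."""
    return max(0.0, 1.0 - abs(x - center) / width)

def select_user_model(calibration_samples, user_models):
    """Pick the stored user-dependent model whose gesture prototypes best match
    a few calibration samples from the new user (an assumed criterion)."""
    def fit_score(model):
        return np.mean([
            max(triangular_membership(np.linalg.norm(s - proto), 0.0, model["tolerance"])
                for proto in model["prototypes"])
            for s in calibration_samples
        ])
    return max(user_models, key=fit_score)

# Example with two hypothetical stored user models.
models = [
    {"name": "user_A", "prototypes": [np.array([0.1, 0.9]), np.array([0.8, 0.2])], "tolerance": 0.5},
    {"name": "user_B", "prototypes": [np.array([0.4, 0.4]), np.array([0.6, 0.7])], "tolerance": 0.5},
]
new_user_samples = [np.array([0.15, 0.85]), np.array([0.75, 0.25])]
print(select_user_model(new_user_samples, models)["name"])   # likely "user_A"
```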

Model Postures at Fashion Shows According to Their Clothing Fashion Images: Focusing on Elegance Image and Neutral-gender Image (패션이미지에 따른 패션쇼 모델의 신체연출에 관한연구 - (제1보) 우아미와 중성미를 중심으로 -)

  • Heo, Min-Jung;Chung, Sung-Jee
    • Journal of the Korea Fashion and Costume Design Association
    • /
    • v.16 no.2
    • /
    • pp.31-40
    • /
    • 2014
  • The purpose of the study was to examine model postures at fashion shows with respect to expressing fashion images, including elegance and neutral-gender images. Data were gathered from fashion shows held from 2000 S/S through 2009 F/W, when elegance and neutral-gender fashion images were prominent in fashion collections. Three designer brands representing elegance and neutral-gender fashion images were selected by the researcher and by fashion specialists, including graduate students majoring in fashion. The fashion collection photos representing each image were selected from style.com, a website covering the world's four biggest fashion collections. The results showed different hand positions according to fashion image: in the neutral-gender image, 16 photos (47%) showed hands placed in pockets, while in the elegance image, 24 photos (82.3%) showed hands laid down by the sides. Walking poses also differed between the two fashion images: in the neutral-gender image, 16 photos (52.9%) showed a 'natural walk', while in the elegance image, 29 photos (100%) showed a 'walk in a straight line'. In conclusion, the neutral-gender image photos showed hands in pockets and the 'natural walk' pose more often than the elegance image photos, and the elegance image photos showed hands laid down by the sides and the 'walk in a straight line' pose more often than the neutral-gender image photos.


Comparison of accuracy between free-hand and surgical guide implant placement among experienced and non-experienced dental implant practitioners: an in vitro study

  • Dler Raouf Hama;Bayad Jaza Mahmood
    • Journal of Periodontal and Implant Science
    • /
    • v.53 no.5
    • /
    • pp.388-401
    • /
    • 2023
  • Purpose: This study investigated the accuracy of free-hand implant surgery performed by an experienced operator compared to static guided implant surgery performed by an inexperienced operator on an anterior maxillary dental model arch. Methods: A maxillary dental model with missing teeth (No. 11, 22, and 23) was used for this in vitro study. An intraoral scan was performed on the model, with the resulting digital impression exported as a stereolithography file. Next, a cone-beam computed tomography (CBCT) scan was performed, with the resulting image exported as a Digital Imaging and Communications in Medicine file. Both files were imported into the RealGUIDE 5.0 dental implant planning software. Active Bio implants were selected to place into the model. A single stereolithographic 3-dimensional surgical guide was printed for all cases. Ten clinicians, divided into 2 groups, placed a total of 60 implants in 20 acrylic resin maxillary models. Due to the small sample size, the Mann-Whitney test was used to analyze mean values in the 2 groups. Statistical analyses were performed using SAS version 9.4. Results: The accuracy of implant placement using a surgical guide was significantly higher than that of free-hand implantation. The mean difference between the planned and actual implant positions at the apex was 0.68 mm for the experienced group using the free-hand technique and 0.14 mm for the non-experienced group using the surgical guide technique (P=0.019). At the top of the implant, the mean difference was 1.04 mm for the experienced group using the free-hand technique and 0.52 mm for the non-experienced group using the surgical guide technique (P=0.044). Conclusions: The data from this study will provide valuable insights for future studies, since in vitro studies should be conducted extensively in advance of retrospective or prospective studies to avoid burdening patients unnecessarily.
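
For readers unfamiliar with the statistic used in this study, here is a minimal Python sketch of comparing two small groups with the Mann-Whitney U test; the deviation values are made up for illustration and are not the study's measurements:

```python
from scipy.stats import mannwhitneyu

# Hypothetical apex deviations (mm) between planned and placed implant positions.
free_hand_experienced = [0.55, 0.72, 0.80, 0.61, 0.66, 0.74]
guided_non_experienced = [0.10, 0.18, 0.12, 0.15, 0.11, 0.17]

stat, p_value = mannwhitneyu(free_hand_experienced, guided_non_experienced,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")   # p < 0.05 suggests a significant group difference
```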

A Framework for Human Body Parts Detection in RGB-D Image (RGB-D 이미지에서 인체 영역 검출을 위한 프레임워크)

  • Hong, Sungjin;Kim, Myounggyu
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.12
    • /
    • pp.1927-1935
    • /
    • 2016
  • This paper proposes a framework for detecting human body parts in RGB-D images. We perform three tasks, person-area extraction, candidate-area search, and local detection, to detect the hands, feet, and head, which are characterized by long accumulative geodesic distances. The person area is obtained by background subtraction and noise removal on the depth image, which is robust to illumination changes. The candidate search constructs a graph model that allows us to measure the accumulative geodesic distance to the candidates. Instead of the raw depth map, our approach builds the graph from regions segmented by a quadtree structure to reduce the candidate search time. Local detection uses a HOG-based SVM for each part, and the head is detected first. To minimize false detections of the hands and feet, the candidates are classified as upper or lower body using the head position and geodesic-distance properties, and the hands and feet are then detected with the local detectors. We evaluate our algorithm on datasets collected with a Kinect v2 sensor, and our approach shows good performance for head, hand, and foot detection.
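
A minimal sketch, assuming SciPy and a pixel-level graph rather than the paper's quadtree regions, of the accumulative-geodesic-distance idea: connect neighbouring foreground depth pixels only when their depth difference is small, then take shortest-path distances from a source such as the torso centre, so that extremities (hands, feet, head) receive the largest values. The depth-jump threshold is an illustrative assumption.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_extremities(depth, mask, source_idx, depth_jump_mm=50):
    """Accumulative geodesic distance over a foreground depth mask."""
    h, w = depth.shape
    idx = -np.ones((h, w), dtype=int)
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))

    graph = lil_matrix((len(ys), len(ys)))
    for y, x in zip(ys, xs):
        for dy, dx in ((0, 1), (1, 0)):                      # 4-neighbourhood edges
            ny, nx = y + dy, x + dx
            if ny < h and nx < w and idx[ny, nx] >= 0:
                if abs(int(depth[y, x]) - int(depth[ny, nx])) < depth_jump_mm:
                    graph[idx[y, x], idx[ny, nx]] = 1.0
                    graph[idx[ny, nx], idx[y, x]] = 1.0

    dist = dijkstra(graph.tocsr(), indices=source_idx)
    return dist, (ys, xs)      # largest entries of dist mark hand/foot/head candidates
```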

Analysis of Face Direction and Hand Gestures for Recognition of Human Motion (인간의 행동 인식을 위한 얼굴 방향과 손 동작 해석)

  • Kim, Seong-Eun;Jo, Gang-Hyeon;Jeon, Hui-Seong;Choe, Won-Ho;Park, Gyeong-Seop
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.7 no.4
    • /
    • pp.309-318
    • /
    • 2001
  • In this paper, we describe methods for analyzing human gestures. A human interface (HI) system for gesture analysis extracts the head and hand regions from image sequences of an operator's continuous behavior captured by CCD cameras. Since gestures are performed with the operator's head and hand motions, we extract the head and hand regions and calculate geometrical information from the extracted skin regions. Head motion is analyzed by obtaining the face direction: we model the head as an ellipsoid in 3D coordinates and locate facial features such as the eyes, nose, and mouth on its surface. Given the center of these feature points, the angle of that center within the ellipsoid gives the direction of the face. The region obtained from preprocessing may include the arms as well as the hands, so we find the wrist line to separate the hand and arm regions. After isolating the hand region by the wrist line, we model it as an ellipse for the analysis of hand data; the fingers are represented as long, narrow shapes. We extract hand information such as size, position, and shape.
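
As an illustration of the ellipse-modelling step only (not the authors' code), a short OpenCV sketch that fits an ellipse to the largest blob in a binary hand mask and reads off its position, size, and orientation:

```python
import cv2

def hand_ellipse(hand_mask):
    """Fit an ellipse to the largest contour of a binary hand mask (uint8, 0/255)."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:                      # fitEllipse needs at least 5 points
        return None
    (cx, cy), (major, minor), angle = cv2.fitEllipse(largest)
    return {"center": (cx, cy), "axes": (major, minor), "angle": angle}
```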
