• Title/Summary/Keyword: posture recognition

Search Result 135, Processing Time 0.026 seconds

Development of Turtle Neck Posture Correction Chair Through Posture Recognition (자세인지를 통한 거북목자세 교정의자 개발)

  • Lee, Jeong-Weon
    • Journal of Korean Society of Neurocognitive Rehabilitation, v.10 no.2, pp.19-26, 2018
  • Many people do not realize that they have poor neck posture. Incorrect forward head posture can lead to turtle neck syndrome. This study aims to develop a specific chair that reduces tension and other symptoms of turtle neck posture. The turtle neck syndrome adjusting chair supports the hip and shin of a person to help them correct their posture. It consists of a shin support that holds the shin at an angle, a hip support that supports the hip while the shin is held at that angle, a main frame connecting the two, and a fluid seat joined to the top of the hip support that reacts to the shape of the hip. This posture correction chair uses the fluid seat to provide unstable hip support, so that constant stimulation makes the seated person aware of their posture. When one sits on the chair, the hip and shin are supported at an angle that straightens the back; as the back straightens, the shoulders and chest open and the neck moves toward a neutral middle position, helping correct the posture. An unbalanced posture causes discomfort to the seated person, so the person continuously adjusts his or her posture to balance the hips and keep the correct posture. Through this process, the person adjusts his or her left-right posture, ultimately increasing the effectiveness of posture correction. A future collective study on continuous posture correction in people with turtle neck syndrome using this chair is required.

A Study on Sitting Posture Recognition using Machine Learning (머신러닝을 이용한 앉은 자세 분류 연구)

  • Ma, Sangyong;Hong, Sangpyo;Shim, Hyeon-min;Kwon, Jang-Woo;Lee, Sangmin
    • The Transactions of The Korean Institute of Electrical Engineers, v.65 no.9, pp.1557-1563, 2016
  • According to recent studies, poor sitting posture of the spine can lead to a variety of spinal disorders. For this reason, it is important to measure sitting posture. We propose a strategy for classifying sitting posture using machine learning. We retrieved acceleration data from a single tri-axial accelerometer attached to the back of the subject's neck in five types of sitting posture. Six subjects without any spinal disorder participated in this experiment. The acceleration data were transformed into feature vectors by principal component analysis. A support vector machine (SVM) and K-means clustering were used to classify sitting posture with the transformed feature vectors. To evaluate performance, we calculated the correct rate for each classification strategy. Although the correct rate of the SVM for sitting back arch was lower than that of K-means clustering by 2.0%, the SVM's correct rate was higher by 1.3%, 5.2%, 16.6%, and 7.1% for the normal posture, sitting front arch, sitting cross-legged, and sitting leaning right, respectively. In conclusion, the overall correct rates were 94.5% for the SVM and 88.84% for K-means clustering, which means that the SVM has more advantage than the K-means method for classification of sitting posture.
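The pipeline this abstract describes can be sketched end to end on synthetic data: PCA features derived from tri-axial accelerometer readings, classified by a supervised SVM and by K-means clusters mapped to their majority label. The five posture means and all numbers below are illustrative assumptions, not the paper's dataset.

```python
# Sketch of the abstract's pipeline on synthetic accelerometer data.
# Posture means, sample counts, and hyperparameters are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
postures = 5                      # normal, front arch, back arch, cross-legged, leaning right
samples_per_posture = 60

# Fake tri-axial acceleration windows: each posture gets a distinct mean tilt.
X = np.vstack([rng.normal(loc=k, scale=0.3, size=(samples_per_posture, 3))
               for k in range(postures)])
y = np.repeat(np.arange(postures), samples_per_posture)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)

# Project onto principal components, as in the paper.
pca = PCA(n_components=2).fit(X_train)
Z_train, Z_test = pca.transform(X_train), pca.transform(X_test)

# Supervised classification with an SVM.
svm = SVC(kernel="rbf").fit(Z_train, y_train)
svm_acc = svm.score(Z_test, y_test)

# Unsupervised K-means: map each cluster to its majority training label.
km = KMeans(n_clusters=postures, n_init=10, random_state=0).fit(Z_train)
cluster_to_label = {c: np.bincount(y_train[km.labels_ == c]).argmax()
                    for c in range(postures)}
km_pred = np.array([cluster_to_label[c] for c in km.predict(Z_test)])
km_acc = (km_pred == y_test).mean()

print(f"SVM accuracy: {svm_acc:.2f}, K-means accuracy: {km_acc:.2f}")
```

On cleanly separated synthetic clusters both methods do well; the paper's reported gap (94.5% vs. 88.84%) reflects the harder, overlapping real postures, where a supervised boundary helps.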

Study on Intelligent Autonomous Navigation of Avatar using Hand Gesture Recognition (손 제스처 인식을 통한 인체 아바타의 지능적 자율 이동에 관한 연구)

  • 김종성;박광현;김정배;도준형;송경준;민병의;변증남
    • Proceedings of the IEEK Conference, 1999.11a, pp.483-486, 1999
  • In this paper, we present a real-time hand gesture recognition system that controls the motion of a human avatar based on pre-defined dynamic hand gesture commands in a virtual environment. Each motion of the avatar consists of elementary motions produced by solving inverse kinematics to a target posture and interpolating joint angles for human-like motion. To reduce the processing time of the recognition system for learning, we use a Fuzzy Min-Max Neural Network (FMMNN) for the classification of hand postures.


A Study on Hand Gesture Recognition using Computer Vision (컴퓨터비전을 이용한 손동작 인식에 관한 연구)

  • Park Chang-Min
    • Management & Information Systems Review, v.4, pp.395-407, 2000
  • It is necessary to develop a method by which humans and computers can interact through hand gestures without any special device. In this thesis, a real-time hand gesture recognition system was developed. The system segments the hand region, recognizes the hand posture, and tracks the movement of the hand using computer vision. It does not require a blue screen background, a data glove, or special markers for recognizing hand gestures.


Incremental Circle Transform Theory and Its Application for Orientation Detection of Two-Dimensional Objects (증분원변환 이론 및 이차원 물체의 자세인식에의 응용)

  • ;;Zeung Nam Bien
    • Journal of the Korean Institute of Telematics and Electronics B, v.28B no.7, pp.578-589, 1991
  • In this paper, a novel concept called the Incremental Circle Transform is proposed, which can describe the boundary contour of a two-dimensional object without occlusions. A pattern recognition algorithm to determine the posture of an object is then developed with the aid of a line integral and a similarity transform. It is also confirmed via experiments that the algorithm can find the posture of an object very quickly, independent of the starting point for boundary coding and the position of the object.


The Development of a Real-Time Hand Gestures Recognition System Using Infrared Images (적외선 영상을 이용한 실시간 손동작 인식 장치 개발)

  • Ji, Seong Cheol;Kang, Sun Woo;Kim, Joon Seek;Joo, Hyonam
    • Journal of Institute of Control, Robotics and Systems, v.21 no.12, pp.1100-1108, 2015
  • A camera-based real-time hand posture and gesture recognition system is proposed for controlling various devices inside automobiles. It uses an imaging system composed of a camera with a proper filter and an infrared lighting device to acquire images of hand-motion sequences. Several steps of pre-processing algorithms are applied, followed by a background normalization process before segmenting the hand from the background. The hand posture is determined by first separating the fingers from the main body of the hand and then by finding the relative position of the fingers from the center of the hand. The beginning and ending of the hand motion from the sequence of the acquired images are detected using pre-defined motion rules to start the hand gesture recognition. A set of carefully designed features is computed and extracted from the raw sequence and is fed into a decision tree-like decision rule for determining the hand gesture. Many experiments are performed to verify the system. In this paper, we show the performance results from tests on the 550 sequences of hand motion images collected from five different individuals to cover the variations among many users of the system in a real-time environment. Among them, 539 sequences are correctly recognized, showing a recognition rate of 98%.
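The posture step described above, determining the hand posture from the fingers' positions relative to the hand center, can be sketched with a simple rule. The distance threshold, the fingertip coordinates, and the posture labels below are illustrative assumptions, not the paper's actual features or rules.

```python
# Hypothetical sketch: count fingertips that lie well outside the palm
# radius and map the count of extended fingers to a posture label.
import math

def classify_hand_posture(center, fingertips, extend_ratio=1.5, palm_radius=1.0):
    """Label a hand posture from how many fingertips lie far from the hand center."""
    extended = sum(
        1 for (x, y) in fingertips
        if math.hypot(x - center[0], y - center[1]) > extend_ratio * palm_radius
    )
    return {0: "fist", 1: "point", 2: "victory", 5: "open"}.get(extended, "other")

# A pointing hand: one fingertip well outside the palm, four curled in.
tips = [(0.0, 3.0), (0.2, 0.8), (0.4, 0.7), (0.5, 0.6), (0.6, 0.5)]
print(classify_hand_posture((0.0, 0.0), tips))   # prints "point"
```

A real system would first segment the hand from the infrared image and locate the fingertips, as the paper describes; only the final relative-position rule is sketched here.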

Vision-Based Activity Recognition Monitoring Based on Human-Object Interaction at Construction Sites

  • Chae, Yeon;Lee, Hoonyong;Ahn, Changbum R.;Jung, Minhyuk;Park, Moonseo
    • International conference on construction engineering and project management, 2022.06a, pp.877-885, 2022
  • Vision-based activity recognition has been widely attempted at construction sites to estimate productivity and enhance workers' health and safety. Previous studies have focused on extracting an individual worker's postural information from sequential image frames for activity recognition. However, various trades of workers perform different tasks with similar postural patterns, which degrades the performance of activity recognition based on postural information. To this end, this research exploited a concept of human-object interaction, the interaction between a worker and their surrounding objects, considering the fact that trade workers interact with a specific object (e.g., working tools or construction materials) relevant to their trades. This research developed an approach to understand the context from sequential image frames based on four features: posture, object, spatial features, and temporal feature. Both posture and object features were used to analyze the interaction between the worker and the target object, and the other two features were used to detect movements from the entire region of image frames in both temporal and spatial domains. The developed approach used convolutional neural networks (CNN) for feature extractors and activity classifiers and long short-term memory (LSTM) was also used as an activity classifier. The developed approach provided an average accuracy of 85.96% for classifying 12 target construction tasks performed by two trades of workers, which was higher than two benchmark models. This experimental result indicated that integrating a concept of the human-object interaction offers great benefits in activity recognition when various trade workers coexist in a scene.


Posture Recognition for a Bi-directional Participatory TV Program based on Face Color Region and Motion Map (시청자 참여형 양방향 TV 방송을 위한 얼굴색 영역 및 모션맵 기반 포스처 인식)

  • Hwang, Sunhee;Lim, Kwangyong;Lee, Suwoong;Yoo, Hoyoung;Byun, Hyeran
    • KIISE Transactions on Computing Practices, v.21 no.8, pp.549-554, 2015
  • As intuitive hardware interfaces continue to be developed, it has become more important to recognize the posture of the user. An efficient alternative to adding expensive sensors is to implement computer vision systems. This paper proposes a method to recognize a user's posture in a live broadcast bi-directional participatory TV program. The proposed method first estimates the position of the user's hands by generating a facial color map for the user and a motion map. The posture is then recognized by computing the relative position of the face and the hands. The method exhibited 90% accuracy in an experiment recognizing three defined postures during the live broadcast, even when the input images contained a complex background.
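The final recognition step, deciding the posture from the hands' positions relative to the face, can be sketched once the face box and hand positions are given (here directly, standing in for the color-map and motion-map estimation). The three labels are illustrative stand-ins for the paper's three defined postures.

```python
# Minimal sketch of posture recognition from face-relative hand positions.
# Coordinates use image convention: y increases downward.
def recognize_posture(face_box, left_hand, right_hand):
    """face_box = (x1, y1, x2, y2); hands are (x, y) points."""
    face_top = face_box[1]
    raised = [hand[1] < face_top for hand in (left_hand, right_hand)]
    if all(raised):
        return "both hands raised"
    if any(raised):
        return "one hand raised"
    return "hands down"

face = (100, 50, 160, 120)
print(recognize_posture(face, (80, 30), (180, 40)))   # prints "both hands raised"
```

Rules like this are cheap enough to run per frame of a live broadcast, which is presumably why the paper favors relative positions over a learned full-body model.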

Artificial Neural Network for Quantitative Posture Classification in Thai Sign Language Translation System

  • Wasanapongpan, Kumphol;Chotikakamthorn, Nopporn
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference, 2004.08a, pp.1319-1323, 2004
  • In this paper, the problem of Thai sign language recognition using a neural network is considered. The paper addresses the problem of classifying signs that convey quantitative meaning, e.g., large or small. When signs corresponding to different quantities are treated as belonging to different classes, the recognition error rate of a standard multi-layer Perceptron increases as the precision in recognizing different quantities is increased. This is due to the fact that, to increase the quantitative recognition precision of those signs, the number of (increasingly similar) classes must also be increased, which leads to more false classifications. The problem stems from misinterpreting the amount of quantity the quantitative signs convey. In this paper, instead of treating signs conveying a quantitative attribute of the same quantity type (such as 'size' or 'amount') as belonging to different classes, they are considered instances of the same class. Signs of the same quantity type are then further divided into subclasses according to the level of quantity each sign is associated with. With this two-level classification, false classification among main gesture classes is made independent of the level of precision needed in recognizing different quantitative levels. Moreover, the precision of quantitative level classification can be made higher during the recognition phase than that used in the training phase. A standard multi-layer Perceptron with a back-propagation learning algorithm was adapted to implement this two-level classification of quantitative gesture signs. Experimental results obtained using electronic glove measurements of hand postures are included.
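The two-level scheme the abstract describes can be sketched directly: a first stage picks the main gesture class, and a second stage discretizes a continuous quantity feature into a level within that class. Tightening the level boundaries at recognition time refines the quantity estimate without adding first-stage classes. The classes, the feature vectors, and the boundaries below are illustrative assumptions, not the paper's glove features or trained MLP.

```python
# Sketch of two-level classification: main class first, quantity level second.
# A nearest-centroid rule stands in for the paper's multi-layer Perceptron.
import numpy as np

def nearest_centroid(x, centroids):
    """Level 1: pick the main gesture class by nearest class centroid."""
    names = list(centroids)
    dists = [np.linalg.norm(x - centroids[n]) for n in names]
    return names[int(np.argmin(dists))]

def quantity_level(value, boundaries):
    """Level 2: discretize a continuous quantity feature into a level index."""
    return int(np.searchsorted(boundaries, value))

# Two illustrative main classes with hand-shape feature centroids.
centroids = {"size": np.array([0.0, 0.0]), "amount": np.array([5.0, 5.0])}

x = np.array([0.4, 0.2])          # hand-shape feature vector (illustrative)
openness = 0.72                   # e.g. normalized finger spread

main = nearest_centroid(x, centroids)
level = quantity_level(openness, boundaries=[0.25, 0.5, 0.75])  # 4 levels
print(main, level)                # prints "size 2"
```

Because the level boundaries live entirely in the second stage, misclassification among main classes stays fixed while the quantity resolution is raised or lowered, which is the point of the two-level design.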
