• Title/Summary/Keyword: Motion Training Video

A Study on Taekwondo Training System using Hybrid Sensing Technique

  • Kwon, Doo Young
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.12
    • /
    • pp.1439-1445
    • /
    • 2013
  • We present a Taekwondo training system that uses a hybrid sensing technique combining a body sensor and a visual sensor. The body sensor (an accelerometer) captures the rotational and inertial motion data that are important for detecting and evaluating Taekwondo motions, while the visual sensor (a camera) captures and records sequential images of the performance. A motion chunk is proposed to structure Taekwondo motions and to design an HMM (Hidden Markov Model) for motion recognition. Trainees can evaluate their trial motions numerically by computing the distance to the standard motion performed by a trainer. For the motion training video, the real-time video images captured by the camera are overlaid with visualized body sensor data so that users can see how the rotational and inertial motion data flow.
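As a rough illustration of the numeric evaluation step described in this abstract, the sketch below resamples a trainee's accelerometer sequence and the trainer's standard motion to a common length and reports their mean Euclidean distance. The (n, 3) sensor layout, the linear resampling, and all names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: score a trainee's accelerometer sequence against a
# trainer's standard motion by resampling both to a common length and taking
# the mean per-sample Euclidean distance.
import numpy as np

def resample(seq: np.ndarray, length: int) -> np.ndarray:
    """Linearly interpolate an (n, 3) accelerometer sequence to (length, 3)."""
    t_old = np.linspace(0.0, 1.0, num=len(seq))
    t_new = np.linspace(0.0, 1.0, num=length)
    return np.stack([np.interp(t_new, t_old, seq[:, k]) for k in range(seq.shape[1])], axis=1)

def motion_distance(trial: np.ndarray, standard: np.ndarray, length: int = 100) -> float:
    """Mean Euclidean distance between two resampled motion chunks (lower = closer)."""
    a, b = resample(trial, length), resample(standard, length)
    return float(np.linalg.norm(a - b, axis=1).mean())

# Example: a noisy, shorter copy of the standard motion scores close to zero.
rng = np.random.default_rng(0)
standard = rng.normal(size=(120, 3))
trial = standard[::2] + 0.05 * rng.normal(size=(60, 3))
print(f"distance to standard: {motion_distance(trial, standard):.3f}")
```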

The Effects of Rehabilitation Training Using Video Game on Improvement Range of Motion for Upper-Extremity, Shoulder Pain and Stress in Stroke Patients with Hemiplegia

  • Buyn, Pil-Suck;Chon, Mi-Young
    • Journal of muscle and joint health
    • /
    • v.19 no.1
    • /
    • pp.46-56
    • /
    • 2012
  • Purpose: This study evaluated the effects of rehabilitation training using a video game on the improvement of upper-extremity range of motion, shoulder pain, and stress in stroke patients with hemiplegia. Methods: The study used a nonequivalent control group non-synchronized design. Participants were sampled from patients hospitalized in the rehabilitation medicine ward at 'K' university hospital in 'S' city from January 1st to October 31st, 2011. A total of 56 subjects participated, 28 each in the control and experimental groups. Each task lasted 10 minutes, and the video game was played for 30 minutes in total, 5 times a week, for 3 weeks. Data were analyzed with SPSS WIN 17.0. Results: The upper-extremity range of motion in the experimental group differed significantly from that in the control group (shoulder flexion t=7.70, p<.001; extension t=7.80, p<.001; abduction t=6.95, p<.001; elbow flexion t=6.47, p<.001). The shoulder pain score in the experimental group differed significantly from that in the control group (t=-14.58, p<.001), as did the level of stress (t=-4.89, p<.001). Conclusion: The results show that rehabilitation training using a video game was effective for stroke patients in increasing upper-extremity range of motion and decreasing shoulder pain and stress.
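For readers unfamiliar with the statistics reported above, a minimal sketch of an independent two-sample t-test of this kind is shown below (the study itself used SPSS WIN 17.0); the data are fabricated purely to demonstrate the mechanics, not the study's measurements.

```python
# Illustrative only: independent two-sample t-test on made-up ROM gains.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
experimental_gain = rng.normal(loc=25.0, scale=6.0, size=28)  # degrees of shoulder flexion gained
control_gain = rng.normal(loc=8.0, scale=6.0, size=28)

t_stat, p_value = stats.ttest_ind(experimental_gain, control_gain)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < .05
```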

An Interactive Aerobic Training System Using Vision and Multimedia Technologies

  • Chalidabhongse, Thanarat H.;Noichaiboon, Alongkot
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1191-1194
    • /
    • 2004
  • We describe the development of an interactive aerobic training system using vision-based motion capture and multimedia technology. Unlike traditional one-way aerobic training on TV, the proposed system allows the virtual trainer to observe and interact with the user in real time. The system is composed of a web camera connected to a PC that watches the user move. First, the animated character on the screen makes a move and instructs the user to follow it. The system applies a robust statistical background subtraction method to extract a silhouette of the moving user from the captured video. The principal body parts of the extracted silhouette are then located using a model-based approach, and their motion is analyzed and compared with the motion of the animated character. The system gives the user audio feedback according to the result of this comparison. All the animation and video processing run in real time on a PC with a consumer-type camera. The proposed system is a good example of applying vision algorithms and multimedia technology to intelligent interactive home entertainment systems.
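The silhouette-extraction stage described above can be approximated with off-the-shelf tools; the sketch below uses OpenCV's MOG2 background subtractor as a stand-in for the paper's own robust statistical background model, which it does not publish.

```python
# Webcam frame -> foreground mask -> binary silhouette, as a stand-in pipeline.
import cv2

cap = cv2.VideoCapture(0)                       # consumer-type webcam
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)              # 0 = background, 127 = shadow, 255 = foreground
    _, silhouette = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    cv2.imshow("silhouette", silhouette)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```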

Two-Stream Convolutional Neural Network for Video Action Recognition

  • Qiao, Han;Liu, Shuang;Xu, Qingzhen;Liu, Shouqiang;Yang, Wanggan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.10
    • /
    • pp.3668-3684
    • /
    • 2021
  • Video action recognition is widely used in video surveillance, behavior detection, human-computer interaction, medically assisted diagnosis, and motion analysis. However, video action recognition can be disturbed by many factors, such as background and illumination. A two-stream convolutional neural network trains spatial and temporal models of the video separately and fuses them at the output end. The multi-segment two-stream convolutional neural network model trains on temporal and spatial information from the video to extract their features, fuses them, and then determines the category of the video action. This paper adopts the Google Xception model and transfer learning, using the Xception model trained on ImageNet for the initial weights. This greatly alleviates the model underfitting caused by an insufficient video behavior dataset, effectively reduces the influence of various disturbing factors in the video, improves accuracy, and reduces training time. Furthermore, to make up for the shortage of data, the Kinetics-400 dataset was used for pre-training, which greatly improved the accuracy of the model. Through this applied research, the expected goal was achieved and the design of the original two-stream model was improved.
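A minimal sketch of the transfer-learning setup described above, for the spatial stream only: Xception with ImageNet weights as a frozen backbone and a new classification head. The head layers and hyperparameters are assumptions; the temporal stream would be built analogously on stacked optical-flow inputs.

```python
# Spatial stream: ImageNet-pretrained Xception backbone + new action head.
import tensorflow as tf

NUM_CLASSES = 400  # e.g. the Kinetics-400 pre-training classes

backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3)
)
backbone.trainable = False  # freeze during the first fine-tuning phase

spatial_stream = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
spatial_stream.compile(optimizer="adam", loss="categorical_crossentropy",
                       metrics=["accuracy"])
spatial_stream.summary()
```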

A Robust Approach for Human Activity Recognition Using 3-D Body Joint Motion Features with Deep Belief Network

  • Uddin, Md. Zia;Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.11 no.2
    • /
    • pp.1118-1133
    • /
    • 2017
  • Computer vision-based human activity recognition (HAR) has attracted much attention recently due to its applications in various fields, such as smart home healthcare for elderly people. A video-based activity recognition system has many goals, such as reacting to people's behavior so that the system can proactively assist them with their tasks. A novel approach is proposed in this work for depth video-based human activity recognition using joint-based motion features of depth body shapes and a Deep Belief Network (DBN). From depth video, the different body parts in human activities are first segmented by means of a trained random forest. Motion features representing the magnitude and direction of each joint in the next frame are then extracted. Finally, the features are used to train a DBN that is later used for recognition. The proposed HAR approach showed superior performance over conventional approaches on private and public datasets, indicating a promising approach for practical applications in smartly controlled environments.
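The joint motion features described above (the magnitude and direction of each joint's movement into the next frame) can be sketched as below. The (frames, joints, 3) array layout is an assumption, and the DBN training stage that follows in the paper is not reproduced.

```python
# Per-joint frame-to-frame displacement -> [magnitude, unit direction] features.
import numpy as np

def joint_motion_features(joints: np.ndarray) -> np.ndarray:
    """joints: (T, J, 3) 3-D positions -> (T-1, J, 4) as [magnitude, dx, dy, dz]."""
    disp = np.diff(joints, axis=0)                         # (T-1, J, 3) motion vectors
    mag = np.linalg.norm(disp, axis=-1, keepdims=True)     # (T-1, J, 1)
    direction = np.divide(disp, mag, out=np.zeros_like(disp), where=mag > 0)
    return np.concatenate([mag, direction], axis=-1)

# Example: 30 frames of 20 joints.
rng = np.random.default_rng(2)
feats = joint_motion_features(rng.normal(size=(30, 20, 3)))
print(feats.shape)  # (29, 20, 4); flattened per frame, these would feed the DBN
```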

Video augmentation technique for human action recognition using genetic algorithm

  • Nida, Nudrat;Yousaf, Muhammad Haroon;Irtaza, Aun;Velastin, Sergio A.
    • ETRI Journal
    • /
    • v.44 no.2
    • /
    • pp.327-338
    • /
    • 2022
  • Classification models for human action recognition require robust features and large training sets for good generalization. Data augmentation methods are commonly employed for imbalanced training sets to achieve higher accuracy, but the samples they generate only reflect existing samples within the training set; their feature representations are less diverse and hence contribute to less precise classification. This paper presents new data augmentation and action representation approaches to grow training sets. The proposed approach is based on two fundamental concepts: virtual video generation for augmentation and representation of the action videos through robust features. Virtual videos are generated from the motion history templates of action videos and convolved by a convolutional neural network to generate deep features. Furthermore, guided by the objective function of a genetic algorithm, the spatiotemporal features of different samples are combined to generate representations of the virtual videos, which are then classified by an extreme learning machine classifier on the MuHAVi-Uncut, IXMAS, and IAVID-1 datasets.
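A hedged sketch of the motion history templates that the virtual-video generation starts from: each frame's detected motion refreshes the template while older motion decays. The threshold and decay values are assumptions; the genetic-algorithm feature mixing and ELM classification are not shown.

```python
# Collapse a grayscale clip into a single motion history template.
import numpy as np

def motion_history_image(frames: np.ndarray, tau: float = 15.0, thresh: float = 30.0) -> np.ndarray:
    """frames: (T, H, W) grayscale video -> (H, W) motion history template in [0, 1]."""
    mhi = np.zeros(frames.shape[1:], dtype=np.float32)
    for prev, cur in zip(frames[:-1], frames[1:]):
        moving = np.abs(cur.astype(np.float32) - prev.astype(np.float32)) > thresh
        mhi = np.where(moving, tau, np.maximum(mhi - 1.0, 0.0))  # refresh or decay
    return mhi / tau

rng = np.random.default_rng(3)
video = rng.integers(0, 256, size=(16, 64, 64)).astype(np.uint8)
print(motion_history_image(video).shape)  # (64, 64)
```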

Video Object Segmentation with Weakly Temporal Information

  • Zhang, Yikun;Yao, Rui;Jiang, Qingnan;Zhang, Changbin;Wang, Shi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.3
    • /
    • pp.1434-1449
    • /
    • 2019
  • Video object segmentation is a significant task in computer vision, but its performance is not yet satisfactory. This paper presents a method of video object segmentation using weakly temporal information. Motivated by the observation that an object's motion is continuous and smooth and that its appearance changes little between adjacent frames of a video sequence, we use a feed-forward architecture with motion estimation to predict the mask of the current frame. We add a mask channel for the previous frame's segmentation result: after processing, the previous frame's mask is fed into the expanded channel, and the temporal feature of the object extracted from it is fused with the other feature maps to generate the final mask. In addition, we introduce multi-mask guidance to improve the stability of the model, and we enhance segmentation performance by further training with the masks already obtained. Experiments show that our method achieves competitive results on single-object segmentation on DAVIS-2016 compared with some state-of-the-art algorithms.
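The extra mask channel described above can be illustrated with a toy PyTorch module whose first convolution takes four input channels (RGB plus the previous frame's mask). The tiny network below is a placeholder for the paper's full architecture, not a reconstruction of it.

```python
# Previous-frame mask concatenated to RGB as a fourth input channel.
import torch
import torch.nn as nn

class MaskGuidedSegHead(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1),  # 3 RGB + 1 previous mask
            nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, kernel_size=1),             # per-pixel mask logit
        )

    def forward(self, rgb: torch.Tensor, prev_mask: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, prev_mask], dim=1)           # (B, 4, H, W)
        return torch.sigmoid(self.net(x))

model = MaskGuidedSegHead()
rgb = torch.rand(1, 3, 64, 64)
prev_mask = torch.zeros(1, 1, 64, 64)                    # first frame: empty guidance
print(model(rgb, prev_mask).shape)                       # torch.Size([1, 1, 64, 64])
```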

Motion Adaptive Temporal Noise Reduction Filtering Based on Iterative Least-Square Training

  • Kim, Sung-Deuk;Lim, Kyoung-Won
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.5
    • /
    • pp.127-135
    • /
    • 2010
  • In motion adaptive temporal noise reduction filtering for video, the strength of the temporal filter must be carefully controlled according to temporal movement. This paper presents a motion adaptive temporal filtering scheme based on least-square training. Each pixel is classified into a specific class code according to its temporal movement, and an iterative least-square training method is then applied to each class code to find optimal filter coefficients. The iterative least-square training is an off-line procedure, and the trained filter coefficients are stored in a lookup table (LUT). In the actual noise reduction operation, each pixel is classified by temporal movement and a simple filtering operation is applied with the coefficients read from the LUT according to the class code. Experimental results show that the proposed method efficiently reduces video noise without introducing blurring.
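A compact sketch of the trained-LUT idea on synthetic data: offline, per-class filter weights mapping the noisy current pixel and the previous output to the clean pixel are fit by least squares; online, each pixel would be filtered with the weights looked up by its motion class. The two-tap filter and the class boundaries are assumptions.

```python
# Offline least-squares training of per-motion-class temporal filter weights.
import numpy as np

rng = np.random.default_rng(4)

def train_lut(noisy_cur, prev_out, clean, diffs, edges):
    """Return one least-squares weight pair per motion class."""
    lut = []
    classes = np.digitize(diffs, edges)
    for c in range(len(edges) + 1):
        sel = classes == c
        if sel.sum() < 2:                                  # skip empty classes
            lut.append(np.zeros(2))
            continue
        A = np.stack([noisy_cur[sel], prev_out[sel]], axis=1)  # (n, 2) design matrix
        w, *_ = np.linalg.lstsq(A, clean[sel], rcond=None)
        lut.append(w)
    return np.array(lut)                                   # (num_classes, 2)

# Synthetic training data: static pixels should lean on the cleaner previous output.
clean = rng.uniform(0, 255, size=10000)
prev_out = clean + rng.normal(0, 3, size=clean.size)
noisy_cur = clean + rng.normal(0, 10, size=clean.size)
diffs = np.abs(noisy_cur - prev_out)
edges = np.array([8.0, 24.0])                              # 3 motion classes

lut = train_lut(noisy_cur, prev_out, clean, diffs, edges)
print(lut)  # low-motion classes weight prev_out heavily, high-motion the current frame
```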

Effect of posture correction training in dental scaling using rapid upper limb assessment and 3D motion analysis

  • Yoon, Tae-Lim;Min, Ji-Hyun;Kim, Han-Na
    • Journal of Korean society of Dental Hygiene
    • /
    • v.18 no.3
    • /
    • pp.269-280
    • /
    • 2018
  • Objectives: The purpose of this study was to investigate changes in the posture of dental hygiene students and clinical dental hygienists performing dental scaling, before and after posture correction training, using the rapid upper limb assessment (RULA) method and 3D motion analysis. Methods: Thirty-two healthy volunteers performed dental scaling to remove artificial calculus from a dental manikin. The movement and angle of the joints during the procedure were assessed by RULA and 3D motion analysis, and each subject was filmed for 1 minute during the 10-minute calculus removal procedure. Afterwards, the subject and the instructor reviewed the video together, and the instructor provided posture correction training so that the subject could perform calculus removal in the correct posture. Artificial calculus on the adjacent teeth was then removed for the same period of time, and the change in posture was assessed. Results: The total RULA score was 5.72±0.58 before training and 4.31±0.10 after training, a significant decrease (p<0.001); the upper arm, lower arm, wrist, neck, and waist positions also showed significant decreases after training. The 3D motion analysis showed significant differences on the measured criteria at all measurement sites except the left shoulder (p<0.05). Conclusions: RULA and 3D motion analysis confirmed that posture correction training using calculus removal videos was effective, and that correct posture education is essential to prevent musculoskeletal disorders caused by calculus removal.
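Illustrative only: the before/after RULA comparison above is a paired design; the sketch below runs scipy's paired t-test on fabricated scores merely shaped like the reported means, not on the study's measurements.

```python
# Paired t-test on made-up pre/post RULA total scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
before = rng.normal(loc=5.72, scale=0.58, size=32)   # RULA total score pre-training
after = rng.normal(loc=4.31, scale=0.10, size=32)    # post-training (lower = safer posture)

t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.5f}")        # expect p < .001, as reported
```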

Temporal matching prior network for vehicle license plate detection and recognition in videos

  • Yoo, Seok Bong;Han, Mikyong
    • ETRI Journal
    • /
    • v.42 no.3
    • /
    • pp.411-419
    • /
    • 2020
  • In real-world intelligent transportation systems, accuracy in vehicle license plate detection and recognition is critical. Many algorithms have been proposed for still images, but their accuracy on actual videos is not satisfactory. This stems from several problematic conditions in videos, such as vehicle motion blur, variety in viewpoints, outliers, and the lack of publicly available video datasets. In this study, we focus on these challenges and propose a license plate detection and recognition scheme for videos based on a temporal matching prior network. Specifically, to keep detection and recognition accurate in the presence of motion blur and outliers, forward and bidirectional matching priors between consecutive frames are combined with layer structures specifically designed for plate detection. We also built our own video dataset for the deep training of the proposed network. During training, we perform data augmentation based on image rotation to increase robustness to the various viewpoints in videos.
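The rotation-based augmentation mentioned above could look like the following OpenCV sketch; the ±10-degree range and the function name are assumptions for illustration.

```python
# Rotate each training frame by a small random angle to mimic viewpoint variation.
import cv2
import numpy as np

def rotate_augment(image: np.ndarray, max_angle: float = 10.0) -> np.ndarray:
    """Rotate an image about its center by a random angle in [-max_angle, max_angle]."""
    h, w = image.shape[:2]
    angle = np.random.uniform(-max_angle, max_angle)
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(image, M, (w, h), borderMode=cv2.BORDER_REPLICATE)

frame = np.zeros((120, 240, 3), dtype=np.uint8)  # placeholder video frame
augmented = rotate_augment(frame)
print(augmented.shape)  # (120, 240, 3)
```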