• Title/Summary/Keyword: User's movements

137 search results

The Effects of Therapeutic Climbing on Shoulder Muscle Activity according to the Inclination of the Climbing Wall

  • Kim, Eun-Jeong; Kim, Se-Hun
    • The Journal of Korean Physical Therapy / v.30 no.3 / pp.84-89 / 2018
  • Purpose: Therapeutic climbing has become very popular and has been reported as a new method for preventing and treating orthopedic trauma of the shoulder joint. However, objective studies on its effects on the musculoskeletal system are still lacking. The objective of the present study was to investigate the effect of wall inclination during therapeutic climbing on muscle activity around the shoulder joint. Methods: The participants performed movements at three wall inclinations of 0°, +15°, and -15°. Surface electromyography (sEMG) was used to measure the activity of five muscles around the shoulder joint (biceps brachii, serratus anterior, upper trapezius, middle trapezius, and lower trapezius). Results: The biceps brachii showed a significant increase in activity at -15° compared with 0° (p<0.01), and the serratus anterior also showed a significant increase at -15° compared with 0° (p<0.05). The middle and lower trapezius likewise showed significant increases at -15° compared with 0° (p<0.001). All muscles showed decreased values at +15° compared with 0°, but the differences were not statistically significant (p>0.05). Conclusion: Therapeutic climbing may be a new therapeutic approach for increasing muscle strength and sensorimotor coordination, since altering the wall inclination promotes active movement and requires the user to generate movements appropriate to the situation.
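
A minimal sketch of one common way such sEMG comparisons are analyzed (RMS amplitude normalized to a maximal voluntary contraction, then a paired comparison between two inclinations). The signals, subject counts, and %MVC values below are illustrative placeholders, not the authors' data or processing pipeline.

```python
import numpy as np
from scipy import stats

def rms_amplitude(emg: np.ndarray) -> float:
    """Root-mean-square amplitude of an (already band-pass filtered) sEMG burst."""
    return float(np.sqrt(np.mean(np.square(emg))))

def percent_mvc(emg: np.ndarray, mvc_rms: float) -> float:
    """Express trial amplitude as a percentage of the maximal voluntary contraction."""
    return 100.0 * rms_amplitude(emg) / mvc_rms

# Placeholder %MVC values per subject for two inclinations (0° and -15°).
rng = np.random.default_rng(0)
n_subjects = 10
activity_0deg = rng.normal(30, 5, n_subjects)
activity_minus15 = rng.normal(38, 5, n_subjects)

res = stats.ttest_rel(activity_minus15, activity_0deg)   # paired comparison
print(f"paired t = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```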

Development of Adaptive Eye Tracking System Using Auto-Focusing Technology of Camera (눈동자 자동 추적 카메라 시스템 설계와 구현)

  • Wei, Zukuan; Liu, Xiaolong; Oh, Young-Hwan; Yook, Ju-Hye
    • Journal of Digital Contents Society / v.13 no.2 / pp.159-167 / 2012
  • Eye tracking technology tracks human eye movements to understand the user's intention. The technology has been improving and can now be used in a variety of situations; for example, it enables persons with disabilities to operate a computer with their eyes. This article presents a typical implementation of an eye tracking system for persons with disabilities, after introducing the design principles and implementation details of such a system. The article discusses the realization of the self-adapting regulation algorithm in detail. The self-adapting algorithm controls the lens movement based on a feedback signal to realize automatic focusing and obtain a clear image of the eyes. This CCD camera auto-focusing method adapts to changes in the light intensity of the external environment, avoids the trouble of manual adjustment, and improves the accuracy of the adjustment.
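
A minimal sketch of contrast-based auto-focus feedback of the kind described above, assuming hypothetical `grab_frame()` and `set_lens_position()` interfaces to the camera; the paper's actual CCD control loop and focus metric may differ.

```python
import cv2
import numpy as np

def focus_measure(gray: np.ndarray) -> float:
    """Variance of the Laplacian: higher means a sharper (better focused) image."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def autofocus(grab_frame, set_lens_position, positions):
    """Scan candidate lens positions and keep the one giving the sharpest image."""
    best_pos, best_score = None, -1.0
    for pos in positions:
        set_lens_position(pos)                          # feedback: move the lens
        gray = cv2.cvtColor(grab_frame(), cv2.COLOR_BGR2GRAY)
        score = focus_measure(gray)                     # evaluate sharpness
        if score > best_score:
            best_pos, best_score = pos, score
    set_lens_position(best_pos)                         # settle on the best focus
    return best_pos, best_score
```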

Performance Comparison for Exercise Motion Classification using Deep Learning-based OpenPose (OpenPose기반 딥러닝을 이용한 운동동작분류 성능 비교)

  • Nam Rye Son; Min A Jung
    • Smart Media Journal / v.12 no.7 / pp.59-67 / 2023
  • Recently, research on behavior analysis that tracks human posture and movement has been actively conducted. In particular, OpenPose, open-source software developed by CMU in 2017, is a representative method for estimating human pose and behavior. OpenPose can detect and estimate various body parts of a person, such as the body, face, and hands, in real time, making it applicable to fields such as smart healthcare, exercise training, security systems, and medicine. In this paper, we propose a method for classifying four exercise movements - Squat, Walk, Wave, and Fall-down - which are most commonly performed by users in the gym, using OpenPose-based deep learning models (DNN and CNN). The training data are collected by capturing the user's movements through recorded videos and real-time camera capture. The collected dataset undergoes preprocessing using OpenPose, and the preprocessed dataset is then used to train the proposed DNN and CNN models for exercise movement classification. The prediction errors of the proposed models are evaluated using MSE, RMSE, and MAE. The evaluation results showed that the proposed DNN model outperformed the proposed CNN model.
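
A minimal PyTorch sketch of the DNN branch described above: OpenPose body keypoints flattened into a feature vector and classified into the four movements. The layer sizes and the 25-keypoint (BODY_25) input format are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

CLASSES = ["squat", "walk", "wave", "fall_down"]
N_KEYPOINTS = 25             # OpenPose BODY_25 keypoint set (assumed)
FEATURES = N_KEYPOINTS * 3   # (x, y, confidence) per keypoint

class PoseDNN(nn.Module):
    """Small fully connected classifier over flattened OpenPose keypoints."""
    def __init__(self, n_classes: int = len(CLASSES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEATURES, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = PoseDNN()
dummy_batch = torch.randn(8, FEATURES)   # 8 preprocessed keypoint vectors
logits = model(dummy_batch)
print(logits.argmax(dim=1))              # predicted class indices
```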

Goal-oriented Movement Reality-based Skeleton Animation Using Machine Learning

  • Yu-Won JEONG
    • International Journal of Internet, Broadcasting and Communication / v.16 no.2 / pp.267-277 / 2024
  • This paper explores the use of machine learning in game production to create goal-oriented, realistic animations for skeleton monsters. The purpose of this research is to enhance realism by implementing intelligent movement in monsters within game development. To achieve this, we designed and implemented a learning model for skeleton monsters using reinforcement learning algorithms. During the machine learning process, various reward conditions were established, including the monster's speed, direction, leg movements, and goal contact, and configurable joints were used to introduce physical constraints. The experimental evaluation validated performance through seven statistical graphs generated by the machine learning process. The results demonstrated that the developed model allows skeleton monsters to move to their target points efficiently and with natural animation. This paper has implemented a method for creating game monster animations using machine learning that can be applied in various gaming environments in the future. The year 2024 is expected to bring expanded innovation in the gaming industry. Advancements in technologies such as virtual reality, AI, and cloud computing are redefining the sector, providing new experiences and opportunities, and innovative content optimized for this period is needed to offer new gaming experiences. A high level of interaction and realism, along with the immersion and enjoyment they induce, must be established as the foundation of the environments in which these can be implemented. Recent advancements in AI technology are significantly impacting the gaming industry: by applying many elements necessary for game development, AI can efficiently optimize the game production environment. Through this research, we demonstrate that applying machine learning within Unity and other game engines can contribute to creating more dynamic and realistic game environments. To ensure that VR gaming does not end as a mere craze, we propose new methods in this study to enhance realism and immersion, thereby increasing enjoyment for continuous user engagement.
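
An illustrative reward-shaping sketch for the kind of goal-oriented movement described above (speed toward the goal, heading alignment, and goal contact). The weights, terms, and function names are assumptions for illustration, not the paper's actual Unity reward function.

```python
import numpy as np

def step_reward(position, velocity, goal, reached_goal,
                w_speed=0.1, w_heading=0.05, goal_bonus=1.0):
    """Per-step reward combining progress toward the goal and a contact bonus."""
    position, velocity, goal = map(np.asarray, (position, velocity, goal))
    to_goal = goal - position
    dist = np.linalg.norm(to_goal)
    speed_toward_goal = float(np.dot(velocity, to_goal)) / max(dist, 1e-6)
    if dist > 1e-6 and np.linalg.norm(velocity) > 1e-6:
        heading = speed_toward_goal / np.linalg.norm(velocity)   # cosine of heading error
    else:
        heading = 0.0
    reward = w_speed * speed_toward_goal + w_heading * heading
    if reached_goal:
        reward += goal_bonus                                     # contact with the target
    return reward

print(step_reward((0, 0), (1, 0.2), (5, 0), reached_goal=False))
```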

Implementation of a human-computer interface system with motion tracking using OpenCV and FPGA (FPGA와 OpenCV를 이용한 눈동자 모션인식을 통한 의사소통 시스템)

  • Lee, Hee Bin; Heo, Seung Won; Lee, Seung Jun; Yu, Yun Seop
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.05a / pp.696-699 / 2018
  • This paper introduces a system that enables pupil tracking and communication for patients with amyotrophic lateral sclerosis (ALS) who cannot move freely. The face and pupil are tracked using OpenCV, and eye movements are detected using a DE1-SoC board. Using a webcam, the system tracks the pupil, identifies the pupil's movement from its coordinate values, and selects characters according to the user's intention. We propose a system with relatively low development cost and a reusable FPGA design, which can easily send the selected text to a mobile phone via Bluetooth.
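
A minimal OpenCV sketch of the pupil-tracking idea described above: detect the eye region with a Haar cascade, threshold the dark pupil, and take its centroid. The threshold value and cascade choice are assumptions; the paper's DE1-SoC/FPGA pipeline is not reproduced here.

```python
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def pupil_center(frame_bgr):
    """Return (x, y) of the pupil in frame coordinates, or None if no eye is found."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in eyes:
        roi = gray[y:y + h, x:x + w]
        # The pupil is the darkest blob in the eye region.
        _, thresh = cv2.threshold(roi, 40, 255, cv2.THRESH_BINARY_INV)
        m = cv2.moments(thresh)
        if m["m00"] > 0:
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            return x + cx, y + cy
    return None
```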


Design and Implementation of the Alternative Reality Game using NFC Tags (NFC 태그를 활용한 ARG 게임의 설계 및 구현)

  • Lee, Yeongha; Khang, Seungwoo; Baek, Seunghyuk; Ha, Minho; Lee, Sangjun
    • Journal of Korea Game Society / v.15 no.6 / pp.141-148 / 2015
  • Most current games consist of simple operations without the user's actual movements, because they take place only in virtual space. For that reason, many side effects arise for the user's body and mind. The Alternative Reality Game (ARG) is a new game genre that can mitigate the side effects caused by activity in a virtual space. This kind of game can play an instrumental role in creating greater social intimacy and a higher tendency to cooperate that potentially reaches beyond the game context. In this paper, we present a real implementation of an ARG using mobile devices and NFC tags. In addition, a novel editor for creating an ARG and a game launcher based on mobile devices are presented.

Development of an intuitive motion-based drone controller (직관적 제어가 가능한 드론과 컨트롤러 개발)

  • Seok, Jung-Hwan; Han, Jung-Hee; Baek, Jun-Hyuk; Chang, Won-Joo; Kim, Huhn
    • Design & Manufacturing / v.11 no.3 / pp.41-45 / 2017
  • Drones can be controlled in a variety of ways, and one of the most common is the joystick controller. However, a joystick controller requires both hands and takes a long time to learn. In particular, figure-eight flight requires combining forward and backward flight (pitch), left and right flight (roll), and body rotation (yaw), so a joystick controller is difficult to operate intuitively. Moreover, when the body rotates, the forward-facing viewpoints of the drone and the user diverge, causing a mental rotation problem in which the user must mentally account for the drone's rotated state. Therefore, we developed a motion-matching controller in which the motion of the drone matches the motion of the controller; that is, the movement of the drone and the movement of the controller are the same. In this study, we used a gyro sensor and an acceleration sensor to map the controller's forward/backward, left/right, and rotational movements to the drone's forward/backward, left/right, and rotational flight motions. The motor output is controlled by a throttle dial at the center of the controller. Because the motions coincide, a first-time drone operator is expected to control the drone more intuitively and with less learning than with a joystick controller.
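
A sketch of the motion-matching idea: estimate the controller's tilt from its accelerometer and its rotation rate from its gyro, then send the same pitch, roll, and yaw to the drone. The axis conventions, scaling, and function name are assumptions, not the authors' firmware.

```python
import math

def controller_to_drone(accel, gyro_z, throttle):
    """accel = (ax, ay, az) in g; gyro_z = yaw rate in deg/s; throttle in [0, 1]."""
    ax, ay, az = accel
    pitch = math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))  # forward/backward tilt
    roll = math.degrees(math.atan2(ay, az))                              # left/right tilt
    yaw_rate = gyro_z                                                    # body rotation
    return {"pitch": pitch, "roll": roll, "yaw_rate": yaw_rate, "throttle": throttle}

# Tilting the controller slightly forward with a small clockwise rotation:
print(controller_to_drone((0.17, 0.0, 0.98), gyro_z=12.0, throttle=0.55))
```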

A Study on the Establishment of Exercise Training System (운동 트레이닝 시스템 구축 방안에 관한 연구)

  • Oh, Eun-Yeol
    • Journal of Digital Convergence / v.19 no.8 / pp.195-203 / 2021
  • This study concerns building an exercise training system that analyzes images of a person's whole body and displays, as images, the normal operating range for user-specific movements. The system analyzes the front and side of the user's body from a standing position, sets nodes at the joint positions of the human body, places the nodes in a spatial coordinate system, and calibrates the normal node operating range against the set standard node coordinates. The method of the study identifies points of differentiation from prior work through a review of existing techniques and literature, and the purpose of the study is to establish an exercise training system accordingly.
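
A sketch of one way the "normal operating range" check described above could work: compute a joint angle from three body-node coordinates and compare it with a reference range. The node layout and range values are illustrative assumptions, not the study's calibration procedure.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at node b (degrees), formed by nodes a-b-c given as (x, y) coordinates."""
    ba, bc = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosine = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return float(np.degrees(np.arccos(np.clip(cosine, -1.0, 1.0))))

NORMAL_KNEE_RANGE = (70.0, 100.0)   # placeholder range for a squat, in degrees

hip, knee, ankle = (0.50, 0.40), (0.52, 0.60), (0.51, 0.85)
angle = joint_angle(hip, knee, ankle)
print(f"knee angle = {angle:.1f}°, within normal range: "
      f"{NORMAL_KNEE_RANGE[0] <= angle <= NORMAL_KNEE_RANGE[1]}")
```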

A Unit Touch Gesture Model of Performance Time Prediction for Mobile Devices

  • Kim, Damee; Myung, Rohae
    • Journal of the Ergonomics Society of Korea / v.35 no.4 / pp.277-291 / 2016
  • Objective: The aim of this study is to propose a unit touch gesture model that is useful for predicting performance time on mobile devices. Background: When estimating usability based on Model-based Evaluation (MBE) of interfaces, the GOMS model measured 'operators' to predict execution time in the desktop environment. This study therefore applied the concept of operators from GOMS to touch gestures. Since touch gestures are composed of unit touch gestures, these unit gestures can be used to predict performance time on mobile devices. Method: To extract unit touch gestures, the manual movements of subjects were recorded at 120 fps with pixel coordinates. Touch gestures were classified by 'out of range', 'registration', 'continuation', and 'termination' of the gesture. Results: Six unit touch gestures were extracted: Hold down (H), Release (R), Slip (S), Curved-stroke (Cs), Path-stroke (Ps), and Out of range (Or). The movement time predicted by the unit touch gesture model was not significantly different from the participants' execution time, and the six measured unit touch gestures can predict the movement time of undefined touch gestures, such as user-defined gestures. Conclusion: Touch gestures can be subdivided into six unit touch gestures, which can describe almost all current touch gestures including user-defined gestures, so the model provided in this study has high predictive power and can be used to predict the performance time of touch gestures. Application: The unit touch gestures can simply be added up to predict the performance time of a new gesture without measuring it.
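
A sketch of how the additive prediction described above could be applied: sum per-unit times for the unit touch gestures that make up a compound gesture, in the spirit of GOMS/KLM operators. The millisecond values below are hypothetical placeholders, not the paper's measured operator times.

```python
# Hypothetical per-unit execution times (ms) for the six unit touch gestures.
UNIT_TIME_MS = {
    "H": 120,   # Hold down
    "R": 80,    # Release
    "S": 150,   # Slip
    "Cs": 350,  # Curved-stroke
    "Ps": 400,  # Path-stroke
    "Or": 200,  # Out of range
}

def predict_time(units):
    """Predict performance time (ms) of a gesture as the sum of its unit gestures."""
    return sum(UNIT_TIME_MS[u] for u in units)

# e.g., a drag gesture modeled as Hold down -> Path-stroke -> Release
print(predict_time(["H", "Ps", "R"]))   # 600 ms under these placeholder values
```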

Remote Control through Tracking of Pupil on Mobile Device (모바일 기기에서 눈동자 추적을 통한 원격 제어)

  • Kim, Su-Sun; Kang, Seok-Hoon; Kim, Seon-Woon
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.4 / pp.1849-1856 / 2012
  • This paper proposes a method to track the center of the pupil and perform remote control of an interface by substituting commands for pupil movements in a smartphone environment. The proposed method, remote control through eye movements, may be helpful for people with disabilities or for users who want a more convenient input method. The webcam-based approach, which is representative of previous methods for tracking the user's pupil, has limitations on the distance and angle between the user and the webcam. This paper instead uses a smartphone, which is convenient to carry, so the proposed method can perform remote control through pupil tracking over a wireless network without restricting the user's location. Thus, the method can be effectively applied to remote control of a PC as well as a smart TV, which must be controlled from a distance.
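
A sketch of the command-substitution idea described above: map the pupil's offset within the eye region to a directional command and send it over the network to the controlled device. The thresholds, command names, target address, and plain TCP transport are illustrative assumptions, not the paper's protocol.

```python
import socket

def offset_to_command(dx, dy, threshold=0.25):
    """dx, dy: pupil offset from the eye-region center, normalized to [-1, 1]."""
    if dx > threshold:
        return "RIGHT"
    if dx < -threshold:
        return "LEFT"
    if dy > threshold:
        return "DOWN"
    if dy < -threshold:
        return "UP"
    return "NONE"

def send_command(command, host="192.168.0.10", port=9000):
    """Send the command string to the controlled device over a TCP connection."""
    with socket.create_connection((host, port), timeout=1.0) as sock:
        sock.sendall(command.encode("utf-8"))

cmd = offset_to_command(0.4, -0.1)
print(cmd)                 # "RIGHT"
# send_command(cmd)        # uncomment when a receiver is listening at the host above
```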