• Title/Summary/Keyword: Plane motion error


Technical-note : Real-time Evaluation System for Quantitative Dynamic Fitting during Pedaling (단신 : 페달링 시 정량적인 동적 피팅을 위한 실시간 평가 시스템)

  • Lee, Joo-Hack;Kang, Dong-Won;Bae, Jae-Hyuk;Shin, Yoon-Ho;Choi, Jin-Seung;Tack, Gye-Rae
    • Korean Journal of Applied Biomechanics
    • /
    • v.24 no.2
    • /
    • pp.181-187
    • /
    • 2014
  • In this study, a real-time evaluation system for quantitative dynamic fitting during pedaling was developed. The system consists of LED markers, a digital camera connected to a computer, and a marker-detecting program. LED markers are attached to the hip, knee, and ankle joints and the fifth metatarsal in the sagittal plane. The PlayStation 3 Eye, selected as the main digital camera in this paper, has many merits for motion capture, such as a high frame rate of about 180 FPS, 320×240 resolution, and low cost with ease of use. The marker-detecting program was built with LabVIEW 2010 and Vision Builder and is made up of three parts: image acquisition and processing, marker detection and joint-angle calculation, and an output section. The camera's images were acquired at 95 FPS, and the program was set up to measure the lower-limb joint angles in real time, present them to the user as a graph, and save them to a text file. The system was verified by pedaling at three saddle heights (knee angle: 25, 35, 45°) and three cadences (30, 60, 90 rpm) at each saddle height, using the Holmes method of measuring lower-limb angles to determine the saddle height. The results showed a low average error and a strong correlation, 1.18±0.44° and 0.99±0.01, respectively. There was little error due to changes in saddle height, but absolute error occurred with cadence. Considering that the average error is approximately 1°, the system is suitable for quantitative dynamic fitting evaluation. In future work, the error could be decreased by using two digital cameras covering the frontal and sagittal planes.
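The lower-limb joint-angle computation at the heart of such a marker-based system can be sketched as follows. The original program was written in LabVIEW 2010 with Vision Builder, so this is only a minimal numpy sketch with hypothetical marker coordinates, not the authors' implementation:

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` formed by three 2-D marker positions."""
    u = np.asarray(proximal, float) - np.asarray(joint, float)
    v = np.asarray(distal, float) - np.asarray(joint, float)
    cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clip to guard against rounding just outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Hypothetical sagittal-plane pixel coordinates of hip, knee and ankle markers
knee_angle = joint_angle((100, 50), (120, 150), (110, 260))
```

The same routine applies to any adjacent marker triplet, e.g. knee-ankle-fifth metatarsal for an ankle angle.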

Underwater Hybrid Navigation System Based on an Inertial Sensor and a Doppler Velocity Log Using Indirect Feedback Kalman Filter (간접 되먹임 필터를 이용한 관성센서 및 초음파 속도센서 기반의 수중 복합항법 시스템)

  • Lee, Chong-Moo;Lee, Pan-Mook;Seong, Woo-Jae
    • Proceedings of the Korea Committee for Ocean Resources and Engineering Conference
    • /
    • 2003.05a
    • /
    • pp.149-156
    • /
    • 2003
  • This paper presents an underwater hybrid navigation system for a semi-autonomous underwater vehicle (SAUV). The navigation system consists of an inertial measurement unit (IMU), an ultra-short baseline (USBL) acoustic navigation sensor, and a Doppler velocity log (DVL) accompanied by a magnetic compass. The errors of inertial measurement units grow with time due to the bias errors of the gyros and accelerometers. A navigational system model is derived that includes the error model of the USBL acoustic navigation sensor and the scale-factor and bias errors of the DVL; the state equation, composed of the navigation states and sensor parameters, is of order 25. A conventional extended Kalman filter is used to propagate the error covariance, update the measurement errors, and correct the state equation whenever measurements are available. Simulation was performed with the 6-DOF equations of motion of the SAUV in a lawn-mowing survey mode. The hybrid underwater navigation system shows good tracking performance by updating the error covariance and correcting the system states with the measurement errors from the DVL, the magnetic compass, and a depth sensor. The estimated position still drifts slowly in the horizontal plane, by about 3.5 m over 500 seconds, which could be eliminated with the help of additional USBL information.
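The predict/correct cycle of such an error-state Kalman filter can be illustrated with a toy model. The paper's filter has a 25th-order state equation; the two-state sketch below (a position error and a DVL velocity bias, with made-up noise values) shows only the covariance-propagation and measurement-correction pattern:

```python
import numpy as np

# Toy 2-state error model: [position error, DVL velocity bias].
# The paper's actual indirect-feedback filter carries 25 states.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])   # error-state transition
Q = np.diag([1e-4, 1e-6])               # process noise (assumed values)
H = np.array([[1.0, 0.0]])              # position-error measurement
R = np.array([[0.01]])                  # measurement noise (assumed value)

x = np.zeros((2, 1))                    # estimated error state
P = np.eye(2)                           # error covariance

def kf_step(x, P, z):
    # Predict: propagate the error state and its covariance
    x = F @ x
    P = F @ P @ F.T + Q
    # Correct: fold in the measured error z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = kf_step(x, P, np.array([[0.5]]))
```

In an indirect (feedback) formulation, the corrected error state is fed back to reset the navigation solution after each update.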


The Effect of Vision and Proprioception on Lumbar Movement Accuracy (시각과 고유수용성 감각이 요부 운동의 정확도에 미치는 영향)

  • Sim, Hyun-Po;Yoon, Hong-Il;Youn, I-Na
    • The Journal of Korean Academy of Orthopedic Manual Physical Therapy
    • /
    • v.13 no.2
    • /
    • pp.31-44
    • /
    • 2007
  • The purposes of this study were to examine normal lumbar proprioception, to identify the effect of vision and proprioception on lumbar movement accuracy by measuring reposition error under visual and non-visual conditions, and to provide basic data on the use of vision in rehabilitation programs. The subjects were 39 healthy university students with average physical activity levels. Their ability to reproduce a target position (50% of maximal range of motion) in flexion, extension, and dominant- and non-dominant-side flexion was measured under visual and non-visual conditions. Movement accuracy was assessed by the reposition error (the difference between the intended and actual positions), calculated as the average of the absolute values of three repeated measures in each direction. The data were analyzed with paired-samples t-tests, independent-samples t-tests, and repeated-measures ANOVA. The results were as follows: 1. Movement accuracy in flexion, extension, dominant-side flexion, and non-dominant-side flexion increased under the visual condition. 2. There were no differences in lumbar movement accuracy between the sexes under the visual and non-visual conditions. 3. Under the non-visual condition, movement in the coronal plane (dominant- and non-dominant-side flexion) was more accurate than movement in the sagittal plane (flexion and extension). 4. Under the non-visual condition, there was no difference in lumbar movement accuracy between dominant- and non-dominant-side flexion. In conclusion, this study demonstrates that movement is more accurate when visual information is available than when only proprioception is available. When proprioception is impaired by injury or disease, it disturbs the control of posture and movement, and posture and movement are then controlled through visual compensation. However, visual compensation cannot prevent injury or trauma, because most injuries occur in unexpected situations. For this reason, it is important to improve proprioception. Therefore, proprioceptive training or exercise that improves the ability to control posture and movement should be performed with appropriate permission or blocking of visual input, to prevent excessive visual compensation.
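The reposition-error measure described above (average absolute deviation of three reproduction attempts from the target position, per direction) reduces to a few lines; the trial values below are hypothetical:

```python
# Reposition error: mean absolute difference between the target position and
# each repeated reproduction attempt, computed per movement direction.
def reposition_error(target_deg, trials_deg):
    return sum(abs(t - target_deg) for t in trials_deg) / len(trials_deg)

# Hypothetical trial: target = 50% of a 60-degree flexion range of motion
err = reposition_error(30.0, [28.5, 31.2, 29.1])
```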


Laser Scanning Technology for Ultrasonic Horn Location Compensation to Modify Nano-size Grain (나노계면 형성을 위한 초음파 진동자 위치보정을 위한 레이저 스캐닝 기술)

  • Kim, Kyunghan;Lee, Jaehoon;Kim, Hyunse;Park, Jongkweon;Yoon, Kwangho
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.31 no.12
    • /
    • pp.1121-1126
    • /
    • 2014
  • To compensate for location errors of the ultrasonic horn, a laser scanning system based on a galvanometer scanner was developed. It consists of a 3-axis linear stage and a 2-axis galvanometer scanner. To measure the surface shape of three-dimensional free-form surfaces, a dynamic focusing unit is adopted, which maintains a consistent focal plane. By combining the linear stage and the galvanometer scanner, the scanning area is enlarged. A scanning CAD system was developed based on stage motion teaching and the NURBS method. The laser scanning system was tested in marking experiments on a semi-cylindrical sample, and the scanning accuracy was investigated by measuring the laser-marked line width at various scanning speeds.

Parameter Estimation of a Small-Scale Unmanned Helicopter by Automated Flight Test Method (자동화 비행시험기법에 의한 소형 무인헬리콥터의 파라메터 추정)

  • Bang, Keuk-Hee;Kim, Nak-Wan;Hong, Chang-Ho;Suk, Jin-Young
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.9
    • /
    • pp.916-924
    • /
    • 2008
  • In this paper, dynamic modeling parameters were estimated using a frequency-domain estimation method. A systematic flight-test method was employed, using preprogrammed multistep excitation of the swashplate control input. In addition, when one axis is excited, the autopilot is engaged on the other axes, thereby obtaining high-quality flight data. A dynamic model was derived for a small-scale unmanned helicopter (CNUHELI-020, developed by Chungnam National University) equipped with a Bell-Hiller stabilizer bar. Six-degree-of-freedom equations of motion were derived using the total forces and moments acting on the small-scale helicopter. The dynamics of the main rotor are simplified by a first-order tip-path-plane model, and the aerodynamic effects of the fuselage, tail rotor, engine, and horizontal/vertical stabilizers were considered. A trim analysis and a linearized model served as the basis for the parameter estimation. Doublet and multistep inputs were used to excite the dynamic motions of the helicopter. The system and input matrices were estimated in the frequency domain using the equation-error method in order to match the flight-test data with those of the dynamic model. The dynamic model and the flight test show similar time responses, which validates the analytic modeling and the parameter-estimation procedure.

On low cost model-based monitoring of industrial robotic arms using standard machine vision

  • Karagiannidis, Aris;Vosniakos, George C.
    • Advances in robotics research
    • /
    • v.1 no.1
    • /
    • pp.81-99
    • /
    • 2014
  • This paper contributes towards the development of a computer vision system for telemonitoring of industrial articulated robotic arms. The system aims to provide precise real-time measurements of the joint angles by employing low-cost cameras and visual markers on the body of the robot. To achieve this, a mathematical model connecting image features and joint angles was developed, covering rotation of a single joint whose axis is parallel to the visual projection plane. The feature examined during image processing is the varying area of a circular target placed on the body of the robot, as registered by the camera during rotation of the arm. In order to distinguish between rotation directions, four targets were used, placed every 90° and observed by two cameras at suitable angular distances. The results were deemed acceptable considering the camera cost and the lighting conditions of the workspace. A computational error analysis explored how deviations from the ideal camera positions affect the measurements and led to appropriate corrections. The method is deemed extensible to multiple joint motion of a known kinematic chain.
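The area-based angle recovery can be sketched under an orthographic approximation, in which a circular target of area A0 viewed at angle θ to the camera axis projects to an ellipse of area A0·cos θ. This sketch ignores the perspective effects and camera-placement corrections the paper analyzes:

```python
import math

def rotation_angle_deg(observed_area, full_area):
    """Recover the target's tilt from its projected area (orthographic model)."""
    # Clamp the area ratio to [0, 1] against measurement noise
    ratio = min(max(observed_area / full_area, 0.0), 1.0)
    return math.degrees(math.acos(ratio))

# Hypothetical areas in pixels^2: target appears at ~70.7% of its full area
angle = rotation_angle_deg(observed_area=707.0, full_area=1000.0)  # about 45 degrees
```

Because cos θ is even, a single target cannot distinguish rotation directions, which is why the authors place four targets every 90° and observe them with two cameras.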

Registration System of 3D Footwear data by Foot Movements (발의 움직임 추적에 의한 3차원 신발모델 정합 시스템)

  • Jung, Da-Un;Seo, Yung-Ho;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.6
    • /
    • pp.24-34
    • /
    • 2007
  • Application systems that make information easy to access have been developed alongside the growth of IT and changes in everyday life. In this paper, we propose an application system that registers a 3D footwear model using a monocular camera. In general, human motion analysis has focused on body movement; this system instead investigates a new method based on foot movement. This paper presents the system's processing pipeline and experimental results. To project the 3D shoe model data onto the 2D foot plane, we construct a pipeline of foot tracking, a projection expression, and pose estimation, divided into 2D image analysis and 3D pose estimation. First, for foot tracking, we propose a method that finds fixed points from the characteristics of the foot, and we propose a geometric expression relating 2D and 3D coordinates so that a monocular camera can be used without camera calibration. We built the application system and measured the distance error, confirming that the registration performs well.

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.23-35
    • /
    • 2002
  • Gaze detection is locating the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system that places a single camera above the monitor while the user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we locate the facial region and facial features (both eyes, nostrils, and lip corners) automatically in 2D camera images. From the movement of the feature points detected in the starting images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at a position on the monitor, the moved 3D positions of those features are computed from 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D feature positions. As experimental results, we obtain the gaze position on a 19-inch monitor, and the accuracy between the computed positions and the real ones is an RMS error of about 2.01 inches.
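The final step, taking the normal of the plane through tracked 3D feature positions, can be sketched as follows; the coordinates are hypothetical and the rest of the pipeline (calibration, rotation/translation estimation) is omitted:

```python
import numpy as np

def plane_normal(p1, p2, p3):
    """Unit normal of the plane spanned by three 3-D points."""
    p1, p2, p3 = (np.asarray(p, float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)   # perpendicular to both in-plane edges
    return n / np.linalg.norm(n)

# Hypothetical 3-D feature positions (cm): left eye, right eye, lip midpoint
n = plane_normal((0, 0, 50), (6, 0, 50), (3, -4, 52))
```

Intersecting a ray along this normal with the monitor plane then yields the on-screen gaze point.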

Camera Tracking Method based on Model with Multiple Planes (다수의 평면을 가지는 모델기반 카메라 추적방법)

  • Lee, In-Pyo;Nam, Bo-Dam;Hong, Hyun-Ki
    • Journal of Korea Game Society
    • /
    • v.11 no.4
    • /
    • pp.143-149
    • /
    • 2011
  • This paper presents a novel camera tracking method based on a model with multiple planes. The proposed algorithm detects a QR code, one of the most popular types of two-dimensional barcodes, and a 3D model is imported from the detected QR code for the augmented reality application. Based on the geometric properties of the model, the vertices are detected and tracked using optical flow. A clipping algorithm is applied to identify each plane among the model surfaces. The proposed method estimates the homography from coplanar feature correspondences, which is used to obtain the initial camera motion parameters. After deriving a linear equation from many feature points on the model and their 3D information, we employ the DLT (Direct Linear Transform) to compute the camera information. In the final step, the errors of the camera poses in every frame are minimized with a local Bundle Adjustment algorithm in real time.
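Homography estimation from coplanar correspondences via the DLT can be sketched as below. This is a minimal, unnormalized version (no Hartley normalization, no robust fitting), not the paper's tracker:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (>= 4 correspondences)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two rows of the DLT system A h = 0
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector for the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical correspondences related by a pure translation of (+1, +2)
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(1, 2), (2, 2), (2, 3), (1, 3)]
H = homography_dlt(src, dst)
```

From such a homography of a known plane and the camera intrinsics, the initial rotation and translation can then be decomposed before bundle adjustment refines the poses.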

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • Park, Gang-Ryeong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.4C
    • /
    • pp.494-504
    • /
    • 2004
  • Research on gaze detection has developed considerably, with many applications. Most previous work relies only on image-processing algorithms, which take much processing time and impose many constraints. In our work, we implement gaze detection with a computer vision system using a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position determined by facial movement is then computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, a trained neural network detects the gaze position from eye movement. As experimental results, we obtain the facial and eye gaze position on a monitor, and the accuracy between the computed positions and the real ones is an RMS error of about 4.2 cm.