• Title/Summary/Keyword: Motion Correction


Realtime Implementation Method for Perspective Distortion Correction (원근 왜곡 보정의 실시간 구현 방법)

  • Lee, Dong-Seok; Kim, Nam-Gyu; Kwon, Soon-Kak
    • Journal of Korea Multimedia Society / v.20 no.4 / pp.606-613 / 2017
  • When a planar area is captured by a depth camera, the shape of the plane in the captured image suffers perspective projection distortion that depends on the camera position. The distorted image can be corrected using the depth information of the plane in the captured area. Previous depth-based perspective distortion correction methods fail to satisfy the real-time requirement because of their large amount of computation. In this paper, we propose a method that applies the conversion table selectively, by measuring the motion of the plane, and performs the correction by parallel processing. With the proposed method, the system corrects a distorted image of 640x480 resolution in 22.52 ms per frame, thereby satisfying the real-time requirement.
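
The selective conversion-table scheme described above can be sketched as follows: the per-pixel source coordinates are precomputed once from an (assumed known) inverse homography, so each frame is corrected by a single gather. The names `build_table` and `correct`, and the nearest-neighbour sampling, are illustrative choices, not the paper's implementation.

```python
import numpy as np

def build_table(H_inv, shape):
    # Precompute, for every output pixel, the source pixel under the
    # inverse homography H_inv: the "conversion table" of the paper.
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = H_inv @ pts
    src = src / src[2]                      # dehomogenize
    sx = np.clip(np.round(src[0]).astype(int), 0, w - 1).reshape(h, w)
    sy = np.clip(np.round(src[1]).astype(int), 0, h - 1).reshape(h, w)
    return sy, sx

def correct(frame, table):
    # Per frame the correction is one gather, cheap enough for real time;
    # the table is rebuilt only when the measured plane motion exceeds a
    # threshold (the selective-update idea of the abstract).
    sy, sx = table
    return frame[sy, sx]
```

With an identity homography the table maps every pixel to itself, which is a convenient sanity check before plugging in a real camera pose.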

Procedural Geometry Calibration and Color Correction ToolKit for Multiple Cameras (절차적 멀티카메라 기하 및 색상 정보 보정 툴킷)

  • Kang, Hoonjong; Jo, Dongsik
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.4 / pp.615-618 / 2021
  • Recently, 3D reconstruction of real objects with multiple cameras has been widely used for services such as VR/AR, motion capture, and plenoptic video generation. Accurate 3D reconstruction requires geometry and color matching between the cameras. However, previous calibration and correction methods for geometry (internal and external parameters) and color (intensity) are difficult for non-experts to perform manually. In this paper, we propose a toolkit that performs procedural geometry calibration and color correction among cameras of different positions and types. The toolkit provides an easy user interface and proved effective in setting up multiple cameras for reconstruction.

A Study on MTL Device Design and Motion Tracking in Virtual Reality Environments

  • Oh, Am-Suk
    • Journal of information and communication convergence engineering / v.17 no.3 / pp.205-212 / 2019
  • Motion tracking and localization devices are an important building block of motion tracking systems in a virtual reality (VR) environment. This study aims to improve the accuracy of motion and position tracking so as to enhance user immersion in experiential VR environments. We propose and test a design of such a device. A module data test of the attitude and heading reference system shows that the implementation with the MPU-9250 sensor is successful and adequate for use with short operation times. We consider various sensor hardware dependencies of VR and compare correction and filtering methods to lower the motion-to-photon (MTP) time, i.e., the delay until user movement is fully reflected on the display. A Kalman filter is used to combine the accelerometer with the gyroscope in the sensing unit.
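
As an illustration of the sensing-unit fusion mentioned above, here is a minimal one-state Kalman filter that predicts with the integrated gyro rate and corrects with the accelerometer-derived tilt angle. The class name and the noise parameters `q` and `r` are hypothetical; the paper's actual filter structure and tuning are not given in the abstract.

```python
class TiltKalman:
    """One-state Kalman filter fusing a gyro rate (predict step) with an
    accelerometer tilt angle (update step)."""

    def __init__(self, q=0.01, r=0.1):
        self.angle = 0.0   # filtered tilt estimate, radians
        self.p = 1.0       # estimate variance
        self.q = q         # process noise: gyro drift
        self.r = r         # measurement noise: accelerometer jitter

    def step(self, gyro_rate, accel_angle, dt):
        # Predict: integrate the gyro rate; uncertainty grows.
        self.angle += gyro_rate * dt
        self.p += self.q * dt
        # Correct: blend in the accelerometer angle by the Kalman gain.
        k = self.p / (self.p + self.r)
        self.angle += k * (accel_angle - self.angle)
        self.p *= (1.0 - k)
        return self.angle
```

The gyro keeps the estimate responsive between accelerometer corrections, while the accelerometer prevents the integrated gyro drift from accumulating, which is exactly the trade-off behind lowering MTP latency.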

Aiming Point Correction Technique for Ship-launched Anti-air Missiles Considering Ship Weaving Motion (함정거동을 고려한 대공방어용 함정 탑재 요격탄 조준점 보정 기법)

  • Hong, Ju-Hyeon; Park, Sanghyuk; Park, Sang-Sup; Ryoo, Chang-Kyung
    • Journal of Institute of Control, Robotics and Systems / v.20 no.1 / pp.94-100 / 2014
  • In order to intercept anti-ship missiles, it is important to accurately predict the aiming point. The major factor degrading the accuracy of the aiming point is the motion of the warship due to waves, so a correction stage is required to compensate for this motion. The proposed aiming point correction technique accounts for the changes in the position and velocity of the naval gun together with the changes in the position and velocity of the anti-ship missile. In this paper, a ship motion estimation filter is also constructed to predict the motion of the warship at the firing time of the gun. Finally, in the simulation part, the distance errors before and after aiming point correction were compared through 6-DOF simulations.
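
A much-simplified sketch of the idea: extrapolate both the incoming missile and the gun mount (displaced by ship weaving) to the intercept time, then aim along the predicted line of sight. The constant-velocity kinematics and the function name are assumptions for illustration; the paper uses a motion estimation filter rather than raw velocities.

```python
import numpy as np

def corrected_aim_point(missile_pos, missile_vel, gun_pos, gun_vel, t_int):
    # Predict where the missile will be at intercept time t_int.
    missile_at_t = missile_pos + missile_vel * t_int
    # Predict where the gun mount will be, displaced by ship motion.
    gun_at_t = gun_pos + gun_vel * t_int
    # Aim vector from the predicted gun position to the predicted missile.
    return missile_at_t - gun_at_t
```

Ignoring the gun's own displacement (setting `gun_vel` to zero) reproduces the uncorrected aiming point, so the difference between the two calls is the correction the paper evaluates.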

Effects of Motion Correction for Dynamic [¹¹C]Raclopride Brain PET Data on the Evaluation of Endogenous Dopamine Release in Striatum (동적 [¹¹C]Raclopride 뇌 PET의 움직임 보정이 선조체 내인성 도파민 유리 정량화에 미치는 영향)

  • Lee, Jae-Sung; Kim, Yu-Kyeong; Cho, Sang-Soo; Choe, Yearn-Seong; Kang, Eun-Joo; Lee, Dong-Soo; Chung, June-Key; Lee, Myung-Chul; Kim, Sang-Eun
    • The Korean Journal of Nuclear Medicine / v.39 no.6 / pp.413-420 / 2005
  • Purpose: Neuroreceptor PET studies require 60-120 minutes to complete, and head motion of the subject during the scan increases the uncertainty of the measured activity. In this study, we investigated the effects of data-driven head motion correction on the evaluation of endogenous dopamine release (DAR) in the striatum during a motor task that might have caused significant head motion artifacts. Materials and Methods: [¹¹C]raclopride PET scans of 4 normal volunteers, acquired with a bolus-plus-constant-infusion protocol, were retrospectively analyzed. Following a 50-min resting period, the participants played a video game with a monetary reward for 40 min. Dynamic frames acquired during the equilibrium condition (pre-task: 30-50 min, task: 70-90 min, post-task: 110-120 min) were realigned to the first frame of the pre-task condition. Intra-condition registrations between the frames were performed, and the average image for each condition was created and registered to the pre-task image (inter-condition registration). The pre-task PET image was then co-registered to each participant's own MRI, and the transformation parameters were reapplied to the other images. Volumes of interest (VOI) for the dorsal putamen (PU) and caudate (CA), ventral striatum (VS), and cerebellum were defined on the MRI. Binding potential (BP) was measured, and DAR was calculated as the percent change of BP during and after the task. SPM analyses of the BP parametric images were also performed to explore regional differences in the effects of head motion on BP and DAR estimation. Results: Changes in the position and orientation of the striatum during the PET scans were observed before head motion correction. BP values in the pre-task condition were not changed significantly by the intra-condition registration; however, the BP values during and after the task, and the DAR, were significantly changed after the correction. SPM analysis also showed that the extent and significance of the BP differences were significantly changed by the head motion correction, and such changes were prominent in the periphery of the striatum. Conclusion: The results suggest that misalignment between the MRI-based VOI and the striatum in the PET images, and the resulting incorrect DAR estimation due to head motion during the PET activation study, were significant but could be remedied by data-driven head motion correction.
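
The DAR quantity used above, the percent change of binding potential between conditions, reduces to a one-line computation (the function name is illustrative):

```python
def dopamine_release(bp_rest, bp_task):
    # Endogenous dopamine release (DAR) as the percent decrease of the
    # binding potential (BP) from the resting to the task condition:
    # increased dopamine displaces the tracer, lowering BP.
    return 100.0 * (bp_rest - bp_task) / bp_rest
```

A head-motion-induced error of a few percent in either BP value propagates directly into this ratio, which is why the abstract reports significant DAR changes after motion correction.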

Depth Image Distortion Correction Method according to the Position and Angle of Depth Sensor and Its Hardware Implementation (거리 측정 센서의 위치와 각도에 따른 깊이 영상 왜곡 보정 방법 및 하드웨어 구현)

  • Jang, Kyounghoon; Cho, Hosang; Kim, Geun-Jun; Kang, Bongsoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.5 / pp.1103-1109 / 2014
  • Motion recognition systems have been broadly studied in the digital image and video processing fields. Recently, methods using depth images have proven very useful. However, the recognition accuracy of depth-image-based methods degrades because the size and shape of objects are distorted depending on the angle of the depth sensor. Therefore, distortion correction for the depth sensor is essential for good recognition performance. In this paper, we propose a pre-processing algorithm to improve motion recognition. Depth data from the sensor are converted to real-world coordinates, corrected for the sensor angle, and then inverse-converted back to projective coordinates. The proposed method was implemented as a Windows program using OpenCV and tested in real time with a Kinect sensor. In addition, it was designed in Verilog-HDL and verified on a Xilinx Zynq-7000 FPGA board.
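
The correction pipeline in the abstract (back-project to real-world coordinates, rotate by the sensor angle, re-project) can be sketched as below. The pinhole intrinsics `fx, fy, cx, cy` and the choice of the x-axis as the tilt axis are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

def depth_to_world(u, v, z, fx, fy, cx, cy):
    # Back-project a depth pixel (u, v) with depth z into real-world
    # camera coordinates using a pinhole model.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def correct_point(p, tilt_rad):
    # Rotate about the x-axis to undo the sensor's mounting tilt.
    c, s = np.cos(tilt_rad), np.sin(tilt_rad)
    R = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    return R @ p

def world_to_depth(p, fx, fy, cx, cy):
    # Re-project the corrected real-world point back into the depth
    # image (projective coordinates).
    x, y, z = p
    return fx * x / z + cx, fy * y / z + cy, z
```

With a zero tilt the three steps round-trip exactly, which makes the chain easy to unit-test before a real sensor angle is plugged in.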

Non-Prior Training Active Feature Model-Based Object Tracking for Real-Time Surveillance Systems (실시간 감시 시스템을 위한 사전 무학습 능동 특징점 모델 기반 객체 추적)

  • Kim, Sang-Jin; Shin, Jeong-Ho; Lee, Seong-Won; Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.5 / pp.23-34 / 2004
  • In this paper we propose a feature point tracking algorithm using optical flow under a non-prior training active feature model (NPT-AFM). The proposed algorithm mainly focuses on the analysis of non-rigid objects [1], and provides real-time, robust tracking. The NPT-AFM algorithm can be divided into two steps: (i) localization of an object of interest and (ii) prediction and correction of the object position using inter-frame information. The localization step is realized with a modified Shi-Tomasi feature tracking algorithm [2] after motion-based segmentation. In the prediction-correction step, feature points are continuously tracked with an optical flow method [3]; if a feature point cannot be properly tracked, temporal and spatial prediction schemes are employed for that point until it becomes uncovered again. Feature points inside the object are estimated instead of its shape boundary, and are updated as elements of the training set for the AFM. Experimental results show that the proposed NPT-AFM-based algorithm can robustly track non-rigid objects in real time.
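
As one concrete piece of the prediction-correction step, the temporal prediction for a feature point that optical flow has lost can be as simple as constant-velocity extrapolation from its last two reliably tracked positions. This is a stand-in sketch under that assumption, not the paper's exact scheme.

```python
def predict_lost_point(track):
    # track: list of (x, y) positions from previous frames.
    # Extrapolate one frame ahead with a constant-velocity model; the
    # prediction substitutes for the point until it is uncovered again.
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return (2 * x1 - x0, 2 * y1 - y0)
```

In a full tracker this prediction would be re-validated against the optical flow result each frame, and the point re-enters the training set once tracking succeeds again.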

Positional uncertainties of cervical and upper thoracic spine in stereotactic body radiotherapy with thermoplastic mask immobilization

  • Jeon, Seung Hyuck; Kim, Jin Ho
    • Radiation Oncology Journal / v.36 no.2 / pp.122-128 / 2018
  • Purpose: To investigate positional uncertainty and its correlation with clinical parameters in spine stereotactic body radiotherapy (SBRT) using thermoplastic mask (TM) immobilization. Materials and Methods: A total of 21 patients who underwent spine SBRT for cervical or upper thoracic spinal lesions were retrospectively analyzed. All patients were treated with image guidance using cone beam computed tomography (CBCT) and 4 degrees-of-freedom (DoF) positional correction. Initial, pre-treatment, and post-treatment CBCTs were analyzed. Setup error (SE), pre-treatment residual error (preRE), post-treatment residual error (postRE), intrafraction motion before treatment (IM1), and intrafraction motion during treatment (IM2) were determined from 6 DoF manual rigid registration. Results: The three-dimensional (3D) magnitudes of translational uncertainties (mean ± 2 standard deviations) were 3.7 ± 3.5 mm (SE), 0.9 ± 0.9 mm (preRE), 1.2 ± 1.5 mm (postRE), 1.4 ± 2.4 mm (IM1), and 0.9 ± 1.0 mm (IM2), and the average angular differences were 1.1° ± 1.2° (SE), 0.9° ± 1.1° (preRE), 0.9° ± 1.1° (postRE), 0.6° ± 0.9° (IM1), and 0.5° ± 0.5° (IM2). The 3D magnitudes of SE, preRE, postRE, IM1, and IM2 exceeded 2 mm in 18, 0, 3, 3, and 1 patients, respectively. No associations were found between the positional uncertainties and body mass index, pain score, or treatment location (p > 0.05, Mann-Whitney test). Intrafraction motion tended to increase with overall treatment time, but the correlation was not statistically significant (p > 0.05, Spearman rank correlation test). Conclusion: In spine SBRT using TM immobilization, CBCT, and 4 DoF alignment correction, a minimum residual translational uncertainty of 2 mm remained. Shortening the overall treatment time and 6 DoF positional correction may further reduce positional uncertainties.
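
The summary statistic reported in the Results, mean ± 2 SD of the 3-D translational magnitude, can be reproduced as follows; the function name and the use of the sample standard deviation (`ddof=1`) are assumptions, since the abstract does not specify the estimator.

```python
import numpy as np

def summarize_uncertainty(errors_mm):
    # errors_mm: (N, 3) array of translational errors (x, y, z) in mm,
    # one row per patient or fraction.
    mag = np.linalg.norm(errors_mm, axis=1)   # 3-D magnitude per row
    return mag.mean(), 2.0 * mag.std(ddof=1)  # mean, 2 * sample SD
```

Reporting the magnitude rather than per-axis errors matches how margins are usually derived from such data, since the planning target volume expansion must cover the worst-case 3-D displacement.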

Correction of Rotated Frames in Video Sequences Using Modified Mojette Transform (변형된 모젯 변환을 이용한 동영상에서의 회전 프레임 보정)

  • Kim, Ji-Hong
    • Journal of Korea Multimedia Society / v.16 no.1 / pp.42-49 / 2013
  • Camera motion is accompanied by translation and/or rotation of objects in the frames of a video sequence. Unnecessary rotation of objects degrades the quality of moving pictures and is a primary cause of viewer fatigue. In this paper, a novel method for correcting rotated frames in video sequences is presented, in which a modified Mojette transform is applied to the motion-compensated area of each frame. The Mojette transform is one of the discrete Radon transforms, and it is modified for correcting rotated frames as follows. First, the bin values of the Mojette transform are determined using the pixels on the projection line together with the interpolation of pixels adjacent to the line. Second, the bin values are calculated only over an area determined by motion estimation between the current and reference frames. Finally, only one bin per projection is computed to reduce the amount of calculation in the Mojette transform. Simulations on various test video sequences show that the proposed scheme performs well in correcting frame rotation in moving pictures.
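
For readers unfamiliar with the Mojette transform, here is a minimal sketch of one projection, assuming the common convention that bin b = -q·i + p·j sums the pixels on that discrete line for a coprime direction (p, q). The paper's modified version additionally interpolates neighbouring pixels, restricts the computation to the motion-compensated area, and evaluates only one bin per projection.

```python
import numpy as np

def mojette_projection(image, p, q):
    # Discrete Mojette (Radon) projection in direction (p, q):
    # each bin b = -q*i + p*j accumulates the pixels (i, j) lying
    # on that discrete line.
    h, w = image.shape
    bins = {}
    for i in range(h):
        for j in range(w):
            b = -q * i + p * j
            bins[b] = bins.get(b, 0) + image[i, j]
    return bins
```

For direction (1, 0) the bins are simply column sums, which makes the convention easy to verify by hand before trying oblique directions.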