• Title/Summary/Keyword: 3-D position


Evaluation of Target Position's Accuracy in 2D-3D Matching using Rando Phantom (인체팬톰을 이용한 2D-3D 정합시 타켓위치의 정확성 평가)

  • Jang, Eun-Sung;Kang, Soo-Man;Lee, Chul-Soo
    • The Journal of Korean Society for Radiation Therapy / v.21 no.1 / pp.33-39 / 2009
  • Purpose: The aim of this study is to compare the patient's body posture and position at the time of simulation with those in the treatment room, using On-Board Imaging (OBI) and cone-beam CT (CBCT). The detected offsets are compared with the position errors actually applied to the Rando Phantom. The phantom's position is then corrected by moving the couch according to the detected deviations, and the differences between the measured and theoretical phantom positions are compared. On this basis, we evaluate the target-positioning accuracy of 2D kV X-ray imaging and 3D CBCT. Materials and Methods: Using the Rando Phantom (Alderson Research Laboratories Inc., Stanford, CT, USA), which simulates the internal structure of the human body, simulation and treatment planning (RTP) were carried out in the same way as for actual radiation treatment, and the phantom was then set up on the treatment couch. The phantom, assumed to be accurately positioned, was tested with three different methods. Setup errors along the X, Y, and Z axes were measured, and mean errors and standard deviations were obtained by repeating each test 10 times. Results: The mean detection errors and standard deviations were 0.4±0.3 mm laterally, 0.6±0.5 mm longitudinally, and 0.4±0.2 mm vertically, all within 0~10 mm. The couch-shift values after positioning, comparable to residual errors, were 0.3±0.1, 0.5±0.1, and 0.3±0.1 mm. For longitudinal shifts of 20~40 mm, the mean detection errors were 0.4±0.3 mm laterally, 0.6±0.5 mm longitudinally, and 0.5±0.3 mm vertically. The detection errors were all within 0.3~0.5 mm, and the residual errors were within 0.2~0.5 mm. Each value is the mean of three tests. Conclusion: By shifting the treatment couch according to the offsets detected by image registration with OBI and CBCT, the phantom could be positioned with an average error within 5 mm. Therefore, target positioning based on OBI and CBCT can be considered useful.

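As an illustration of the kind of analysis described in the abstract above (not the authors' actual code or data), the sketch below computes a per-axis mean detection error and standard deviation, reported as mean ± SD, from repeated setup-offset measurements; the numbers are placeholders.

```python
import numpy as np

# Hypothetical repeated setup-offset measurements (mm) along each couch axis.
# Ten repetitions per axis, as in the abstract; the values are made up.
offsets_mm = {
    "lateral":      np.array([0.2, 0.5, 0.7, 0.1, 0.4, 0.6, 0.3, 0.5, 0.2, 0.4]),
    "longitudinal": np.array([0.9, 0.4, 1.1, 0.2, 0.6, 0.8, 0.1, 0.7, 0.5, 0.6]),
    "vertical":     np.array([0.3, 0.5, 0.6, 0.2, 0.4, 0.5, 0.3, 0.4, 0.2, 0.5]),
}

# Mean detection error and sample standard deviation per axis, as mean ± SD.
for axis, values in offsets_mm.items():
    print(f"{axis:12s}: {values.mean():.1f} ± {values.std(ddof=1):.1f} mm")
```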

Robust Position Tracking for Position-Based Visual Servoing and Its Application to Dual-Arm Task (위치기반 비주얼 서보잉을 위한 견실한 위치 추적 및 양팔 로봇의 조작작업에의 응용)

  • Kim, Chan-O;Choi, Sung;Cheong, Joo-No;Yang, Gwang-Woong;Kim, Hong-Seo
    • The Journal of Korea Robotics Society / v.2 no.2 / pp.129-136 / 2007
  • This paper introduces a position-based robust visual servoing method developed for the operation of a human-like robot with two arms. The proposed visual servoing method uses the SIFT algorithm for object detection and the CAMSHIFT algorithm for object tracking. While conventional CAMSHIFT has mainly been used for object tracking in the 2D image plane, we extend it to object tracking in 3D space by combining the CAMSHIFT results from the two image planes of a stereo camera. This approach yields robust and dependable results. Once the robot's task is defined based on the extracted 3D information, the robot is commanded to carry out the task. We conduct several position-based visual servoing tasks and compare performance under different conditions. The results show that the proposed visual tracking algorithm is simple but very effective for position-based visual servoing.

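The abstract above combines per-camera tracking results into a 3D position. A minimal sketch of that idea, assuming a rectified stereo pair with known focal length and baseline (all numeric values are illustrative, not from the paper): the tracked window centers from the left and right images (for example, CAMSHIFT centroids) are triangulated via the disparity.

```python
import numpy as np

def stereo_point_from_tracks(center_left, center_right, f_px, baseline_m, cx, cy):
    """Triangulate a 3D point (left-camera frame, metres) from the tracked window
    centers of a rectified stereo pair. center_* are (u, v) pixel coordinates,
    e.g. the centroids returned by a tracker on the left and right images."""
    uL, vL = center_left
    uR, _ = center_right
    disparity = uL - uR                      # rectified pair: same row, shifted column
    if disparity <= 0:
        raise ValueError("non-positive disparity; tracking result is unusable")
    Z = f_px * baseline_m / disparity        # depth from disparity
    X = (uL - cx) * Z / f_px                 # back-project the left-image pixel
    Y = (vL - cy) * Z / f_px
    return np.array([X, Y, Z])

# Illustrative numbers only: 640x480 camera, 7 cm baseline.
p = stereo_point_from_tracks((352.0, 251.0), (331.0, 251.0),
                             f_px=525.0, baseline_m=0.07, cx=320.0, cy=240.0)
print(p)   # 3D position of the tracked object in the left-camera frame
```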

3-dimensional Mesh Model Coding Using Predictive Residual Vector Quantization (예측 잉여신호 벡터 양자화를 이용한 3차원 메시 모델 부호화)

  • 최진수;이명호;안치득
    • Journal of Broadcast Engineering / v.2 no.2 / pp.136-145 / 1997
  • Since a 3D mesh model consists of a large number of vertices and polygons, and each vertex position is represented by three 32-bit floating-point numbers in a 3D coordinate system, the amount of data needed to represent the model is very large. Thus, to store and/or transmit a 3D model efficiently, 3D model compression is required. In this paper, a 3D model compression method using PRVQ (predictive residual vector quantization) is proposed. The underlying idea exploits the high correlation between neighboring vertex positions and the vectorial nature of a vertex position. Experimental results show that the proposed method achieves a higher compression ratio than existing methods and has the advantage of being able to transmit vertex position data progressively.

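A compact sketch of the predictive residual vector quantization idea described above, assuming (purely for illustration) that each vertex is predicted as the mean of its already-encoded neighbours and the 3D residual is mapped to the nearest codeword of a small codebook; the paper's actual predictor and codebook design may differ.

```python
import numpy as np

def prvq_encode(vertices, neighbors, codebook):
    """Encode vertex positions by predicting each vertex from already-decoded
    neighbours and vector-quantizing the 3D residual.
    vertices : (N, 3) float array of vertex positions
    neighbors: list where neighbors[i] holds indices j < i used for prediction
    codebook : (K, 3) array of residual codewords
    Returns codeword indices (one per vertex) and the decoder's reconstruction."""
    decoded = np.zeros_like(vertices, dtype=np.float64)
    indices = []
    for i, v in enumerate(vertices):
        if neighbors[i]:
            pred = decoded[neighbors[i]].mean(axis=0)   # prediction from coded neighbours
        else:
            pred = np.zeros(3)                          # first vertex: no context
        residual = v - pred
        k = int(np.argmin(np.linalg.norm(codebook - residual, axis=1)))
        indices.append(k)
        decoded[i] = pred + codebook[k]                 # keep decoder state in sync
    return indices, decoded

# Toy example: 4 vertices and a hypothetical 8-word residual codebook.
verts = np.array([[0, 0, 0], [1.0, 0.1, 0], [1.1, 1.0, 0.2], [0.1, 1.0, 0.1]])
nbrs = [[], [0], [0, 1], [1, 2]]
cb = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
               [-1, 0, 0], [0, -1, 0], [1, 1, 0], [0.5, 0.5, 0]])
idx, rec = prvq_encode(verts, nbrs, cb)
print(idx)
```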

ILOCAT: an Interactive GUI Toolkit to Acquire 3D Positions for Indoor Location Based Services (ILOCAT: 실내 위치 기반 서비스를 위한 3차원 위치 획득 인터랙티브 GUI Toolkit)

  • Kim, Seokhwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.7 / pp.866-872 / 2020
  • Indoor location-based services provide services based on the distance between an object and a person. Recently, such services are often implemented using inexpensive depth sensors such as Kinect. The depth sensor can measure the position of a person, but the position of an object must be acquired manually with a tool. Acquiring the 3D position of an object requires 3D interaction, which is difficult for a general user. A graphical user interface (GUI) is relatively easy for a general user, but obtaining a 3D position through it is hard. This study proposes the Interactive LOcation Context Authoring Toolkit (ILOCAT), which enables a general user to easily acquire the 3D position of an object in real space using a GUI. This paper describes the interaction design and implementation of ILOCAT.
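To give a concrete flavour of what acquiring an object's 3D position from a depth sensor through a GUI can involve (an assumption-laden sketch, not ILOCAT's actual API), a clicked depth-image pixel and its depth value can be back-projected with the sensor intrinsics:

```python
def pixel_to_3d(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project a clicked depth-image pixel (u, v) with depth in millimetres
    into sensor-frame 3D coordinates (metres). fx, fy, cx, cy are the depth
    camera intrinsics; the values below are Kinect-like placeholders."""
    z = depth_mm / 1000.0
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# Example: a user clicks pixel (400, 220) in the GUI and the sensor reports 2.3 m.
print(pixel_to_3d(400, 220, 2300, fx=365.0, fy=365.0, cx=256.0, cy=212.0))
```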

Analysis on mandibular movement using the JT-3D system (JT-3D system을 이용한 하악의 운동 분석)

  • Song, Joo-Hun;Kim, Ryeo-Woon;Byun, Jae-Joon;Kim, Hee-Jung;Heo, Yu-ri;Lee, Gyeong-Je
    • Journal of Dental Rehabilitation and Applied Science / v.36 no.2 / pp.80-87 / 2020
  • Purpose: This study aims to measure mandibular movement using the JT-3D system and to provide a range of mandibular movement that can serve as a reference for diagnosing temporomandibular disorders. Materials and Methods: The study was conducted on 60 young male and female adults. Maximum opening and closing movements were recorded with the JT-3D system; five repetitive movements were regarded as one cycle, and a total of three cycles were recorded. At maximum opening, the vertical position of the lower jaw, the antero-posterior position, the lateral deflection, and the maximum opening distance were recorded. Statistical analyses were conducted to evaluate the reproducibility of the JT-3D system (α = 0.05). Results: At maximum opening, the average values were 31.56 mm vertically and 24.42 mm posteriorly, with a lateral deflection of 0.72 mm and a maximum opening distance of 40.32 mm. There was no statistically significant difference among the three recorded cycles in any of the measured values (P > 0.05). Conclusion: At maximum opening, the average lateral deflection was 0.72 mm and the maximum opening distance was 40.32 mm, and the analysis of maximum opening of the lower jaw using the JT-3D system showed sufficiently reproducible results.
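The reproducibility check described above (no significant difference among the three recorded cycles at α = 0.05) could in principle be run as a simple one-way ANOVA across cycles; the sketch below is only illustrative, with made-up numbers, and is not the study's actual statistical procedure or data.

```python
from scipy.stats import f_oneway

# Hypothetical maximum-opening distances (mm) measured over three cycles.
cycle1 = [40.1, 41.0, 39.8, 40.5, 40.7]
cycle2 = [40.3, 40.8, 39.9, 40.6, 40.4]
cycle3 = [40.0, 41.1, 39.7, 40.4, 40.6]

stat, p = f_oneway(cycle1, cycle2, cycle3)
print(f"F = {stat:.3f}, p = {p:.3f}")   # p > 0.05 would indicate no cycle effect
```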

3-D position estimation for eye-in-hand robot vision

  • Jang, Won;Kim, Kyung-Jin;Chung, Myung-Jin;Bien, Zeungnam
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1988.10b / pp.832-836 / 1988
  • "Motion Stereo" is quite useful for visual guidance of the robot, but most range finding algorithms of motion stereo have suffered from poor accuracy due to the quantization noise and measurement error. In this paper, 3-D position estimation and refinement scheme is proposed, and its performance is discussed. The main concept of the approach is to consider the entire frame sequence at the same time rather than to consider the sequence as a pair of images. The experiments using real images have been performed under following conditions : hand-held camera, static object. The result demonstrate that the proposed nonlinear least-square estimation scheme provides reliable and fairly accurate 3-D position information for vision-based position control of robot. of robot.

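A minimal sketch of the kind of nonlinear least-squares refinement described above: estimate one static 3D point from its projections in an entire sequence of frames with known camera poses by minimizing the total reprojection error. The use of scipy and the synthetic pinhole setup are assumptions for illustration; the paper's own formulation may differ.

```python
import numpy as np
from scipy.optimize import least_squares

def project(point, R, t, f_px):
    """Pinhole projection of a world point into a camera with pose (R, t)."""
    pc = R @ point + t
    return f_px * pc[:2] / pc[2]

def residuals(point, poses, observations, f_px):
    """Stack reprojection errors over the whole frame sequence."""
    return np.concatenate([project(point, R, t, f_px) - uv
                           for (R, t), uv in zip(poses, observations)])

# Synthetic example: a hand-held camera translating past a static object.
f_px = 500.0
true_point = np.array([0.2, -0.1, 2.0])
poses = [(np.eye(3), np.array([0.05 * k, 0.0, 0.0])) for k in range(6)]
observations = [project(true_point, R, t, f_px) + np.random.normal(0, 0.5, 2)
                for R, t in poses]          # noisy pixel measurements

est = least_squares(residuals, x0=np.array([0.0, 0.0, 1.0]),
                    args=(poses, observations, f_px))
print(est.x)   # refined 3-D position estimate using all frames at once
```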

A 3D GUI System for Controlling Agent Robots

  • Ahn, Hyunsik;Kang, Chang-Hoon
    • Proceedings of the IEEK Conference / 2002.07b / pp.848-851 / 2002
  • Recently, there has been much interest in integrating robots and virtual reality, in line with research trends in intelligent robots and mixed reality. In this paper, an Internet-based 3D GUI system is proposed for the remote control and monitoring of an agent robot that works by itself. The proposed system consists of a manager that issues a new position and displays the motion of the robot, an agent robot that moves to the destination according to the command, a positioning module that detects the current position of the robot, and a geographical information module. A user can order the robot agent to move to a new position in a virtual space and watch real images captured at the robot's actual location. The agent robot then moves to the position automatically while avoiding collisions, using range finding and a path detection algorithm. We demonstrate that the proposed 3D GUI system provides a more convenient means of remote control for the robots.

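The manager-to-agent interaction described above amounts to sending a goal position and reading back status. The sketch below shows one hypothetical way to encode such a command as JSON over TCP; the message format, host, and port are assumptions for illustration, not taken from the paper.

```python
import json
import socket

def send_goal(host, port, x, y, theta):
    """Send a hypothetical 'move to (x, y, theta)' command to the agent robot
    and return its one-line JSON reply (e.g. an acknowledgement with status)."""
    msg = json.dumps({"cmd": "move_to", "x": x, "y": y, "theta": theta}) + "\n"
    with socket.create_connection((host, port), timeout=5.0) as sock:
        sock.sendall(msg.encode("utf-8"))
        return json.loads(sock.makefile().readline())

# Example usage (assumed address of the agent robot's command server):
# reply = send_goal("192.168.0.42", 9000, x=3.5, y=1.2, theta=1.57)
```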

Gaze Detection System by IR-LED based Camera (적외선 조명 카메라를 이용한 시선 위치 추적 시스템)

  • 박강령
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.4C / pp.494-504 / 2004
  • Research on gaze detection has advanced considerably, with many applications. Most previous studies rely only on image processing algorithms, so they take much processing time and have many constraints. In our work, we implement gaze detection with a computer vision system using a single IR-LED based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. Finally, the gaze position determined by facial movement is computed from the normal vector of the plane defined by the computed 3D feature positions. In addition, we use a trained neural network to detect the gaze position from eye movement. Experimental results show that we can obtain the facial and eye gaze position on a monitor, and the accuracy between the computed positions and the real ones is about 4.2 cm RMS error.
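A small sketch of the geometric step described above: take three computed 3D facial-feature positions, form the facial-plane normal via a cross product, and intersect the resulting gaze ray with the monitor plane. Assuming, for illustration, that the monitor lies in the plane Z = 0 of the same coordinate frame (the paper's actual calibration is not reproduced here):

```python
import numpy as np

def gaze_point_on_monitor(p1, p2, p3):
    """Given three 3D facial feature positions, compute the facial-plane normal
    and intersect a ray from the feature centroid along that normal with the
    monitor plane Z = 0 (an assumed monitor model for illustration)."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    normal = np.cross(p2 - p1, p3 - p1)
    normal /= np.linalg.norm(normal)
    origin = (p1 + p2 + p3) / 3.0                 # gaze ray starts at the face centroid
    if abs(normal[2]) < 1e-9:
        raise ValueError("gaze ray is parallel to the monitor plane")
    s = -origin[2] / normal[2]                    # solve origin_z + s * n_z = 0
    return origin + s * normal                    # intersection point (Z component is 0)

# Illustrative feature positions (metres) in a monitor-centred coordinate frame.
print(gaze_point_on_monitor([0.03, 0.02, 0.62], [-0.03, 0.02, 0.60], [0.0, -0.04, 0.61]))
```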

Position Detection of a Scattering 3D Object by Use of the Axially Distributed Image Sensing Technique

  • Cho, Myungjin;Shin, Donghak;Lee, Joon-Jae
    • Journal of the Optical Society of Korea / v.18 no.4 / pp.414-418 / 2014
  • In this paper, we present a method to detect the position of a 3D object in scattering media by using the axially distributed sensing (ADS) method. Because of the scattering noise in the elemental images recorded by ADS, we apply a statistical image processing algorithm that converts the scattered elemental images into scatter-reduced ones. With the scatter-reduced elemental images, we reconstruct 3D images using a digital reconstruction algorithm based on ray back-projection. The reconstructed images are used to detect the position of a 3D object in the scattering medium. We perform preliminary experiments and present the experimental results.
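To sketch the ray back-projection step mentioned above: in axially distributed sensing, elemental images captured at different positions along the optical axis see the scene at different magnifications, so a reconstruction at a chosen depth can be formed by rescaling each elemental image accordingly and averaging. The simple relative-magnification model below is an assumption for illustration; the paper's exact reconstruction geometry may differ.

```python
import numpy as np
from scipy.ndimage import zoom

def ads_backproject(elemental_images, cam_distances, recon_depth):
    """Very simplified ray back-projection for axially distributed sensing:
    each elemental image, captured with the camera at axial distance
    cam_distances[k] from the scene, is rescaled by the relative magnification
    of the chosen reconstruction depth, and the rescaled images are averaged."""
    h, w = elemental_images[0].shape
    accum = np.zeros((h, w), dtype=np.float64)
    for img, z_k in zip(elemental_images, cam_distances):
        m = z_k / recon_depth                      # relative magnification (assumed model)
        scaled = zoom(img.astype(np.float64), m, order=1)
        sh, sw = scaled.shape
        # centre-crop or centre-pad the rescaled image back to the common size
        canvas = np.zeros((h, w))
        y0, x0 = max((sh - h) // 2, 0), max((sw - w) // 2, 0)
        cy0, cx0 = max((h - sh) // 2, 0), max((w - sw) // 2, 0)
        hh, ww = min(h, sh), min(w, sw)
        canvas[cy0:cy0 + hh, cx0:cx0 + ww] = scaled[y0:y0 + hh, x0:x0 + ww]
        accum += canvas
    return accum / len(elemental_images)           # the in-focus depth adds up coherently

# Toy usage: random "elemental images" captured at 40, 45 and 50 cm.
imgs = [np.random.rand(60, 80) for _ in range(3)]
recon = ads_backproject(imgs, cam_distances=[0.40, 0.45, 0.50], recon_depth=0.45)
```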

Widerange Microphone System Using 3D Range Sensor (3D 거리 센서를 이용한 강의용 광역 마이크 시스템)

  • Oh, Woojin
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.10 / pp.1448-1451 / 2021
  • In this paper, a 3D range sensor is applied to a sensor-based wide-range microphone system for lectures. Since a 2D range sensor measures only the shortest distance to the speaker, errors occur and performance is degraded. The 3D sensor provides a 160×60 distance image, so the position of the speaker can be obtained accurately. We propose a method for obtaining the distance per pixel required to determine the absolute position of the speaker from the distance image. The proposed array microphone system using the 3D sensor shows an improvement of 0.8~1.5 dB compared to previous works using a 2D sensor.
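To illustrate the distance-per-pixel idea above, a speaker's absolute position can be recovered from the pixel column of the detected speaker in the distance image and its range value. The 60-degree horizontal field of view below is an assumption for illustration, since the abstract does not state the sensor's FOV.

```python
import math

def speaker_position(col, distance_m, image_width=160, hfov_deg=60.0):
    """Convert a detected speaker pixel column and its measured range into
    (x, z) coordinates in front of the sensor. The horizontal FOV is an
    assumed value; the real sensor's FOV should be used instead."""
    deg_per_pixel = hfov_deg / image_width
    angle = math.radians((col - image_width / 2) * deg_per_pixel)
    x = distance_m * math.sin(angle)      # lateral offset from the sensor axis
    z = distance_m * math.cos(angle)      # forward distance along the axis
    return x, z

# Example: speaker detected at column 120, 3.2 m away.
print(speaker_position(120, 3.2))
```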