• Title/Summary/Keyword: Camera Location Simulation

Search Result 44

Comparison of knife-edge and multi-slit camera for proton beam range verification by Monte Carlo simulation

  • Park, Jong Hoon;Kim, Sung Hun;Ku, Youngmo;Lee, Hyun Su;Kim, Chan Hyeong;Shin, Dong Ho;Jeong, Jong Hwi
    • Nuclear Engineering and Technology
    • /
    • v.51 no.2
    • /
    • pp.533-538
    • /
    • 2019
  • Mechanical-collimation imaging is the most mature technology in prompt gamma (PG) imaging, which is considered the most promising technology for beam range verification in proton therapy. The purpose of the present study is to compare the performances of two mechanical-collimation PG cameras: the knife-edge (KE) camera and the multi-slit (MS) camera. For this, the PG cameras were modeled with the Geant4 Monte Carlo code, and their performances were compared for imaginary point and line sources and for proton beams incident on a cylindrical PMMA phantom. From the simulation results, the KE camera was found to show higher counting efficiency than the MS camera, being able to estimate the beam range even for $10^7$ protons. Our results, however, confirmed that in order to estimate the beam range correctly, the KE camera should be aligned, at least approximately, with the location of the proton beam range. The MS camera was found to show lower efficiency, estimating the beam range correctly only when the number of protons is at least $10^8$. Given a sufficient number of protons, however, the MS camera estimated the beam range correctly, with errors of less than 1.2 mm, regardless of the location of the camera.

A Switched Visual Servoing Technique Robust to Camera Calibration Errors for Reaching the Desired Location Following a Straight Line in 3-D Space (카메라 교정 오차에 강인한 3차원 직선 경로 추종을 위한 전환 비주얼 서보잉 기법)

  • Kim, Do-Hyoung;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.1 no.2
    • /
    • pp.125-134
    • /
    • 2006
  • The problem of establishing a servo system that reaches the desired location while keeping all features in the field of view and following a straight line is considered. Robustness to camera calibration errors is also considered in this paper. The proposed approach is based on switching from position-based visual servoing (PBVS) to image-based visual servoing (IBVS) and allows the camera path to follow a straight line. To achieve this objective, a pose estimation method is required; the camera's target pose is estimated from the obtained images without knowledge of the object. A switched control law moves the camera, mounted on a robot end-effector, near the desired location following a straight line in Cartesian space and then positions it at the desired pose with robustness to camera calibration errors. Finally, simulation results show the feasibility of the proposed visual servoing technique.
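
The image-based half of such a switched scheme can be sketched with the classic point-feature interaction matrix. This is a textbook IBVS velocity law, not the paper's exact switched controller; the function and parameter names are illustrative:

```python
import numpy as np

def ibvs_velocity(points, depths, targets, lam=0.5):
    """One step of classic image-based visual servoing: stack each point's
    2x6 interaction matrix L_i and command camera velocity v = -lam * L^+ e,
    where e is the image-space feature error."""
    L_rows, e = [], []
    for (x, y), Z, (xs, ys) in zip(points, depths, targets):
        # Interaction matrix of a normalized image point (x, y) at depth Z.
        L_rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1 + x * x), y])
        L_rows.append([0.0, -1.0 / Z, y / Z, 1 + y * y, -x * y, -x])
        e.extend([x - xs, y - ys])
    L = np.array(L_rows)
    # Pseudo-inverse control law: drives the feature error toward zero.
    return -lam * np.linalg.pinv(L) @ np.array(e)
```

When the current and desired features coincide, the commanded velocity is zero; otherwise the camera is driven so the features converge in the image.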


Compressed Sensing-based Multiple-target Tracking Algorithm for Ad Hoc Camera Sensor Networks

  • Lu, Xu;Cheng, Lianglun;Liu, Jun;Chen, Rongjun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.3
    • /
    • pp.1287-1300
    • /
    • 2018
  • A target-tracking algorithm based on ad hoc camera sensor networks (ACSNs) utilizes the distributed observation capability of nodes to achieve accurate target tracking. A compressed sensing-based multiple-target tracking algorithm (CSMTTA) for ACSNs is proposed in this work based on a study of the camera node observation projection model and the compressed sensing model. The proposed algorithm includes reconstruction of the observed signals and estimation of target locations. It reconstructs the observed signals by solving an L1-norm minimization convex optimization problem and forecasts the node group used to estimate a target's location from the target's motion features. Simulation results show that CSMTTA can recover the compressed observation information accurately under sparse sampling, achieve high target-tracking accuracy, and accomplish the distributed tracking task of multiple mobile targets.
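
The L1-norm reconstruction step mentioned above can be sketched generically as a basis-pursuit linear program. This is a standard compressed-sensing formulation, not the authors' CSMTTA code; `scipy` is assumed available:

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, y):
    """Recover a sparse signal x from measurements y = A @ x by solving
    min ||x||_1 s.t. A x = y, cast as a linear program with x = u - v,
    u, v >= 0, so that ||x||_1 = sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])          # equality constraint A (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y,
                  bounds=[(0, None)] * (2 * n), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v
```

With enough random measurements relative to the sparsity level, the minimizer of this program coincides with the true sparse signal.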

A Study on Estimating Skill of Smartphone Camera Position using Essential Matrix (필수 행렬을 이용한 카메라 이동 위치 추정 기술 연구)

  • Oh, Jongtaek;Kim, Hogyeom
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.6
    • /
    • pp.143-148
    • /
    • 2022
  • Estimating a camera's location by analyzing images taken continuously with the monocular camera of a mobile smartphone or robot is very important for metaverse, mobile robot, and user location services. So far, PnP-related techniques have been applied to calculate the position. In this paper, the camera's direction of motion is obtained using the essential matrix from the epipolar geometry of successive images, and the camera's successive positions are calculated through geometric equations. The accuracy of this new estimation method was verified through simulation. The method is completely different from existing methods and can be applied even when as few as one matching feature point is present in two or more images.
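
The essential-matrix step can be sketched as follows: decomposing E = [t]×R via SVD yields two candidate rotations and a translation direction whose scale is unobservable. This is the standard epipolar-geometry result, not the paper's full position-chaining method:

```python
import numpy as np

def decompose_essential(E):
    """Recover the rotation and the translation *direction* from an
    essential matrix E = [t]_x R (translation scale is unobservable).
    Returns two candidate rotations and a unit translation direction;
    the physically valid combination is normally picked by a cheirality
    (points-in-front-of-camera) check."""
    U, _, Vt = np.linalg.svd(E)
    # Enforce proper rotations (det = +1) for the orthogonal factors.
    if np.linalg.det(U) < 0:
        U[:, -1] *= -1
    if np.linalg.det(Vt) < 0:
        Vt[-1, :] *= -1
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    t = U[:, 2]    # left null vector of E: the translation direction
    return (R1, R2), t
```

Because only the direction of t is recovered, consecutive image pairs give a chain of bearing directions; absolute distances must come from additional geometric constraints, as the paper's method sets up.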

Virtual portraits from rotating selfies

  • Yongsik Lee;Jinhyuk Jang;Seungjoon Yang
    • ETRI Journal
    • /
    • v.45 no.2
    • /
    • pp.291-303
    • /
    • 2023
  • Selfies are a popular form of photography. However, due to physical constraints, the compositions of selfies are limited. We present algorithms for creating virtual portraits with interesting compositions from a set of selfies. The selfies are taken at the same location while the user spins around. The scene is analyzed using multiple selfies to determine the locations of the camera, subject, and background. Then, a view from a virtual camera is synthesized. We present two use cases. After rearranging the distances between the camera, subject, and background, we render a virtual view from a camera with a longer focal length. Following that, changes in perspective and lens characteristics caused by new compositions and focal lengths are simulated. Second, a virtual panoramic view with a larger field of view is rendered, with the user's image placed in a preferred location. In our experiments, virtual portraits with a wide range of focal lengths were obtained using a device equipped with a lens that has only one focal length. The rendered portraits included compositions that would be photographed with actual lenses. Our proposed algorithms can provide new use cases in which selfie compositions are not limited by a camera's focal length or distance from the camera.
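
The focal-length and perspective trade-off the paper simulates follows from the pinhole model, where image size is f·h/d. A minimal illustration with made-up numbers, not the paper's rendering pipeline:

```python
def projected_size(height, distance, focal):
    """Pinhole-model image size of an object of height `height` at
    `distance` from a camera with focal length `focal`: h_img = f * h / d.
    Doubling both the focal length and the subject distance keeps the
    subject's image size constant, while background objects (whose
    distance grows by a smaller factor) appear larger in the frame."""
    return focal * height / distance
```

For example, a subject at 1 m with a 28 mm lens and the same subject at 2 m with a 56 mm lens have identical image sizes, but a background 4 m behind the subject is rendered noticeably larger in the second composition, which is the perspective change the virtual long-lens portrait reproduces.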

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib;Jang, Youjin;Jeong, Inbae
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.328-335
    • /
    • 2022
  • With the advance of robot capabilities and functionalities, construction robots that assist construction workers have been increasingly deployed on construction sites to improve safety, efficiency, and productivity. For close-proximity human-robot collaboration on construction sites, robots need to be aware of the context, especially construction workers' behavior, in real time to avoid collisions with workers. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection has limitations such as occlusion, detection failure, and sensor malfunction, and an RGB-D camera may suffer from interference from lighting conditions and surface materials. To address these issues, this study proposes a novel method of 3D human pose estimation that extracts the 2D location of each joint from multiple images captured at the same time from different viewpoints, fuses each joint's 2D locations, and estimates the 3D joint location. For higher accuracy, a probabilistic representation is used to extract the 2D locations of the joints, considering each joint location extracted from the images as a noisy partial observation. The 3D human pose is then estimated by fusing the probabilistic 2D joint locations to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated its estimation accuracy and feasibility in practice. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation.
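
The fusion step can be sketched with a weighted linear (DLT) triangulation, where per-view weights stand in for the paper's probabilistic confidences. This is a generic multi-view sketch, not the authors' maximum-likelihood implementation:

```python
import numpy as np

def triangulate(Ps, uvs, weights=None):
    """Fuse 2D joint detections from several calibrated views into one 3D
    point. Each view with 3x4 projection matrix P and observation (u, v)
    contributes two rows of the homogeneous system A X = 0; `weights`
    down-weight noisy detections. The solution is the right null vector
    of A, found by SVD."""
    if weights is None:
        weights = np.ones(len(Ps))
    rows = []
    for P, (u, v), w in zip(Ps, uvs, weights):
        rows.append(w * (u * P[2] - P[0]))
        rows.append(w * (v * P[2] - P[1]))
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize
```

With three or more well-separated views, a joint occluded or mis-detected in one camera can still be localized from the remaining views, which is the robustness motivation stated in the abstract.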


Development of camera calibration technique using neural-network (신경회로망을 이용한 카메라 보정기법 개발)

  • 한성현;왕한홍;장영희
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1997.10a
    • /
    • pp.1617-1620
    • /
    • 1997
  • This paper describes a neural-network-based camera calibration method with a camera model that accounts for major sources of camera distortion, namely radial, decentering, and thin prism distortion. Radial distortion causes an inward or outward displacement of a given image point from its ideal location. Actual optical systems are subject to various degrees of decentering; that is, the optical centers of the lens elements are not strictly collinear. Thin prism distortion arises from imperfections in lens design and manufacturing as well as camera assembly. Our purpose is to develop a vision system for pattern recognition and the automatic testing of parts and to apply it to the manufacturing line. The performance of the proposed camera calibration is illustrated by simulation and experiment.
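
The three distortion terms named in the abstract are commonly parameterized as follows. This is a first-order sketch after the classical photogrammetric model; the coefficient names k1, p1, p2, s1, s2 are illustrative, not taken from the paper:

```python
import numpy as np

def distort(x, y, k1=0.0, p1=0.0, p2=0.0, s1=0.0, s2=0.0):
    """Apply radial, decentering, and thin prism distortion to ideal
    normalized image coordinates (x, y)."""
    r2 = x * x + y * y
    # Radial: inward/outward displacement along the ray from the center.
    dx_r, dy_r = k1 * x * r2, k1 * y * r2
    # Decentering: lens element centers not collinear with the optical axis.
    dx_d = p1 * (3 * x * x + y * y) + 2 * p2 * x * y
    dy_d = 2 * p1 * x * y + p2 * (x * x + 3 * y * y)
    # Thin prism: tilt-like term from imperfect lens design and assembly.
    dx_t, dy_t = s1 * r2, s2 * r2
    return x + dx_r + dx_d + dx_t, y + dy_r + dy_d + dy_t
```

A neural-network calibration, as described here, would learn the mapping between ideal and distorted coordinates directly instead of fitting these coefficients explicitly.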


Development of Camera Calibration Technique Using Neural-Network (뉴럴네트워크를 이용한 카메라 보정기법 개발)

  • 장영희
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 1997.10a
    • /
    • pp.225-229
    • /
    • 1997
  • This paper describes a neural-network-based camera calibration method with a camera model that accounts for major sources of camera distortion, namely radial, decentering, and thin prism distortion. Radial distortion causes an inward or outward displacement of a given image point from its ideal location. Actual optical systems are subject to various degrees of decentering; that is, the optical centers of the lens elements are not strictly collinear. Thin prism distortion arises from imperfections in lens design and manufacturing as well as camera assembly. Our purpose is to develop a vision system for pattern recognition and the automatic testing of parts and to apply it to the manufacturing line. The performance of the proposed camera calibration is illustrated by simulation and experiment.


A Study on the Camera Calibration Algorithm of Robot Vision Using Cartesian Coordinates

  • Lee, Yong-Joong
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.11 no.6
    • /
    • pp.98-104
    • /
    • 2002
  • In this study, we developed an algorithm that determines the position and orientation of a camera system in Cartesian coordinates by attaching the camera to the end-effector of an industrial six-axis robot. Using Cartesian coordinates as the starting point for evaluating the suggested algorithm, the orientation vector along the straight line connecting two points in coordinate space was updated by a recursive least-squares method that combines the previous estimate with each new measurement as image points are added. The simulation therefore showed that when the camera attached to the end-effector is deployed at a production location, with a calibration mask on which more than eight points are arranged, the position and orientation of the camera system in Cartesian coordinates can be determined even without special measuring equipment.
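
The recursive least-squares update described above can be sketched generically: each new measurement refines the estimate without re-solving the whole batch. This is a generic RLS sketch, not the paper's exact formulation:

```python
import numpy as np

class RecursiveLeastSquares:
    """Generic RLS estimator for b ~ a @ theta: each call to update()
    folds one new measurement (a, b) into the running estimate, combining
    the previous result with the new data as the abstract describes."""
    def __init__(self, n, p0=1e6):
        self.theta = np.zeros(n)    # current parameter estimate
        self.P = np.eye(n) * p0     # inverse information (covariance)

    def update(self, a, b):
        a = np.asarray(a, dtype=float)
        Pa = self.P @ a
        k = Pa / (1.0 + a @ Pa)                       # gain vector
        self.theta = self.theta + k * (b - a @ self.theta)
        self.P = self.P - np.outer(k, Pa)             # rank-1 downdate
        return self.theta
```

In the calibration setting, `a` would encode one detected mask point's geometry and `theta` the pose parameters being refined as more image points arrive.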

A Study on Machine Vision System and Camera Modeling with Geometric Distortion (기하학적 왜곡을 고려한 카메라 모델링 및 머신비젼 시스템에 관한 연구)

  • 계중읍
    • Journal of the Korean Society of Manufacturing Technology Engineers
    • /
    • v.7 no.4
    • /
    • pp.64-72
    • /
    • 1998
  • This paper presents a new approach to the design of a machine vision technique with a camera model that accounts for major sources of geometric distortion, namely radial, decentering, and thin prism distortion. Radial distortion causes an inward or outward displacement of a given image point from its ideal location. Actual optical systems are subject to various degrees of decentering; that is, the optical centers of the lens elements are not strictly collinear. Thin prism distortion arises from imperfections in lens design and manufacturing as well as camera assembly. Our purpose is to develop a vision system for pattern recognition and the automatic testing of parts and to apply it to the manufacturing line. The performance of the proposed vision system is illustrated by simulation and experiment.