Title/Summary/Keyword: Camera estimation method

Search results: 451

Range and Velocity Estimation of the Object using a Moving Camera (움직이는 카메라를 이용한 목표물의 거리 및 속도 추정)

  • Byun, Sang-Hoon;Chwa, Dongkyoung
    • The Transactions of The Korean Institute of Electrical Engineers / v.62 no.12 / pp.1737-1743 / 2013
  • This paper proposes a method for estimating the range and velocity of an object using a moving camera. Structure and motion (SaM) estimation recovers the Euclidean geometry of the object as well as the relative motion between the camera and the object. Unlike previous works, the proposed estimation method relaxes the constraints on camera and object motion. To this end, we arrange the dynamics of the moving-camera/moving-object relative motion model in a form suitable for a nonlinear observer to perform the SaM estimation. Through both simulations and experiments we have confirmed the validity of the proposed estimation algorithm.
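
As a rough, hedged illustration of the kind of model such observers are built on (not the paper's actual design), the sketch below simulates pinhole relative-motion dynamics between a moving camera and a moving object; all values are hypothetical.

```python
# Illustrative only: a pinhole relative-motion model of the kind range/velocity
# observers are typically built on. All numbers below are hypothetical.
import numpy as np

def project(p_rel, f=1.0):
    """Pinhole projection of the relative object position p_rel = (x, y, z)."""
    x, y, z = p_rel
    return np.array([f * x / z, f * y / z])

dt = 0.01
p_rel = np.array([1.0, 0.5, 5.0])         # unknown true relative position (m)
v_obj = np.array([0.2, 0.0, -0.1])        # unknown object velocity (m/s)
v_cam = np.array([0.0, 0.3, 0.0])         # known (measured) camera velocity (m/s)

for _ in range(100):
    p_rel = p_rel + dt * (v_obj - v_cam)  # relative dynamics (rotation omitted for brevity)
    m = project(p_rel)                    # the only quantity an observer can measure

print("image measurement:", m, "true range:", p_rel[2])
```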

Advanced surface spectral-reflectance estimation using a population with similar colors (유사색 모집단을 이용한 개선된 분광 반사율 추정)

  • 이철희;김태호;류명춘;오주환
    • Proceedings of the Korea Society for Industrial Systems Conference / 2001.05a / pp.280-287 / 2001
  • Studies estimating the surface spectral reflectance of an object using multi-spectral camera systems have received widespread attention. However, a multi-spectral camera system requires additional color filters as the number of channels increases, and system complexity grows with multiple captures. Thus, this paper proposes an algorithm that reduces the estimation error of surface spectral reflectance with a conventional 3-band RGB camera. In the proposed method, adaptive principal components for each pixel are calculated by renewing the population of surface reflectances, and these adaptive principal components can reduce the estimation error of the surface spectral reflectance of the current pixel. To evaluate the performance of the proposed estimation method, 3-band principal component analysis, 5-band Wiener estimation, and the proposed method were compared in an estimation experiment with the Macbeth ColorChecker. As a result, the proposed method showed a lower mean square error between the estimated and the measured spectra than the conventional 3-band principal component analysis method, and showed similar or better estimation performance compared to the 5-band Wiener method.
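
For orientation only, the sketch below reconstructs a spectral reflectance from a 3-band RGB response using the first three principal components of a reflectance population; the paper's adaptive per-pixel population renewal is not reproduced, and all data (population, camera sensitivities) are synthetic.

```python
# Illustrative sketch only (not the authors' adaptive-population algorithm):
# PCA-based reflectance reconstruction from a 3-band camera response.
import numpy as np

rng = np.random.default_rng(0)
n_wave = 31                                  # e.g. 400-700 nm in 10 nm steps

# Synthetic "population" of smooth reflectances standing in for similar-color samples.
grid = np.linspace(0.0, 1.0, n_wave)
population = np.clip(rng.random((200, 4)) @ np.vstack([grid**k for k in range(4)]), 0.0, 1.0)

# Principal components of the (mean-removed) population.
mean_r = population.mean(axis=0)
_, _, vt = np.linalg.svd(population - mean_r, full_matrices=False)
basis = vt[:3].T                             # n_wave x 3 basis

# Hypothetical camera sensitivities (3 bands) and one true reflectance.
sensitivities = rng.random((n_wave, 3))
true_r = population[0]
rgb = sensitivities.T @ true_r               # simulated 3-band camera response

# Solve for the 3 basis weights that reproduce the RGB response, then reconstruct.
A = sensitivities.T @ basis                  # 3 x 3 system
w = np.linalg.solve(A, rgb - sensitivities.T @ mean_r)
estimate = mean_r + basis @ w

print("RMSE:", np.sqrt(np.mean((estimate - true_r) ** 2)))
```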


Sum of Squares-Based Range Estimation of an Object Using a Single Camera via Scale Factor

  • Kim, Won-Hee;Kim, Cheol-Joong;Eom, Myunghwan;Chwa, Dongkyoung
    • Journal of Electrical Engineering and Technology / v.12 no.6 / pp.2359-2364 / 2017
  • This paper proposes a scale-factor-based range estimation method using sum of squares (SOS). Many previous studies measured distance using cameras, which usually required two cameras and a long computation time for image processing. To overcome these disadvantages, we propose a range estimation method for an object using a single moving camera. An SOS-based Luenberger observer is proposed to estimate the range on the basis of the Euclidean geometry of the object. By using a scale factor, the proposed method achieves a faster operation speed than the previous methods. The validity of the proposed method is verified through simulation results.
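
As a hedged illustration of the observer structure only (the paper's sum-of-squares gain design is not shown), the sketch below runs a plain Luenberger observer on a toy constant-velocity range model with a hand-tuned gain.

```python
# A plain Luenberger observer on a toy constant-velocity range model.
# Illustrative only: the SOS-optimized gain design from the paper is not reproduced.
import numpy as np

dt = 0.01
A = np.array([[1.0, dt], [0.0, 1.0]])     # state: [range, range rate]
C = np.array([[1.0, 0.0]])                # measured: a scale-factor-derived range proxy
L = np.array([[0.4], [0.8]])              # observer gain (hand-tuned here, not SOS-optimized)

x_true = np.array([5.0, -0.3])            # hypothetical true state
x_hat = np.array([2.0, 0.0])              # poor initial estimate

for _ in range(500):
    y = C @ x_true                            # noiseless measurement for brevity
    x_hat = A @ x_hat + L @ (y - C @ x_hat)   # predict + Luenberger correction
    x_true = A @ x_true

print("true:", x_true, "estimate:", x_hat)
```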

Reliable Camera Pose Estimation from a Single Frame with Applications for Virtual Object Insertion (가상 객체 합성을 위한 단일 프레임에서의 안정된 카메라 자세 추정)

  • Park, Jong-Seung;Lee, Bum-Jong
    • The KIPS Transactions: Part B / v.13B no.5 s.108 / pp.499-506 / 2006
  • This paper describes a fast and stable camera pose estimation method for real-time augmented reality systems. From the feature tracking results of a marker on a single frame, we estimate the camera rotation matrix and the translation vector. For the camera pose estimation, we use the shape factorization method based on the scaled orthographic projection model. In the scaled orthographic factorization method, all feature points of an object are assumed to be roughly at the same distance from the camera, which means the selected reference point and the object shape affect the accuracy of the estimation. This paper proposes a flexible and stable selection method for the reference point. Based on the proposed method, we implemented a video augmentation system that inserts virtual 3D objects into the input video frames. Experimental results showed that the proposed camera pose estimation method is fast and robust relative to previous methods and applicable to various augmented reality applications.
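
The sketch below is a minimal weak-perspective (scaled orthographic) pose recovery on synthetic, non-coplanar points; the paper's factorization details, reference-point selection, and marker tracking are assumed away, and the ground-truth pose exists only to synthesize measurements.

```python
# Minimal weak-perspective (scaled orthographic) pose recovery on synthetic data.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical non-coplanar model points of a tracked object (object frame).
X = rng.uniform(-1.0, 1.0, size=(8, 3))

# Hypothetical ground-truth pose used only to synthesize image measurements.
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
s_true, t_true = 0.8, np.array([0.1, -0.2])
uv = s_true * (X @ R_true[:2].T) + t_true      # scaled orthographic projection

# Pose recovery: center the data, solve for the scaled rotation rows by least squares.
Xc = X - X.mean(axis=0)
uvc = uv - uv.mean(axis=0)
AB, *_ = np.linalg.lstsq(Xc, uvc, rcond=None)  # columns are s*r1 and s*r2
a, b = AB[:, 0], AB[:, 1]
s = 0.5 * (np.linalg.norm(a) + np.linalg.norm(b))
r1, r2 = a / np.linalg.norm(a), b / np.linalg.norm(b)
R_est = np.vstack([r1, r2, np.cross(r1, r2)])
t_est = uv.mean(axis=0) - s * (X.mean(axis=0) @ R_est[:2].T)

print("scale error:", abs(s - s_true))
print("rotation error:\n", R_est - R_true)
```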

Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung;Ryu, Minsoo;Park, Jong-Il
    • ETRI Journal / v.37 no.4 / pp.766-771 / 2015
  • Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion due to the ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that is specifically capable of extracting human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of the point of interest. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, any ambiguity between rotation and translation is eliminated and the exact motion of a target point (that is, the ego-camera) can then be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).
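
As a small illustration of the exo-camera step, the sketch below triangulates a single point from two hypothetical calibrated views using linear (DLT) triangulation; calibration and synchronization of the exo-cameras are assumed, and the projection matrices are made up.

```python
# Minimal two-view linear (DLT) triangulation on synthetic data.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one point from two projection matrices and its two pixel positions."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Hypothetical exo-camera projection matrices (identity intrinsics, 1 m baseline).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

point = np.array([0.3, -0.2, 4.0, 1.0])        # hypothetical point of interest (homogeneous)
x1 = (P1 @ point)[:2] / (P1 @ point)[2]
x2 = (P2 @ point)[:2] / (P2 @ point)[2]

print("triangulated position:", triangulate(P1, P2, x1, x2))
```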

Motion Estimation of a Moving Object in Three-Dimensional Space using a Camera (카메라를 이용한 3차원 공간상의 이동 목표물의 거리정보기반 모션추정)

  • Chwa, Dongkyoung
    • The Transactions of The Korean Institute of Electrical Engineers / v.65 no.12 / pp.2057-2060 / 2016
  • Range-based motion estimation of a moving object using a camera is proposed. Whereas existing results constrain the motion of the object in order to estimate it, the proposed method relaxes these constraints so that a more generally moving object can be handled. To this end, a nonlinear observer is designed based on the relative dynamics between the object and the camera so that the object velocity and the unknown camera velocity can be estimated. Stability analysis and simulation results for the moving object are provided to show the effectiveness of the proposed method.

MEASUREMENT OF THREE-DIMENSIONAL TRAJECTORIES OF BUBBLES AROUND A SWIMMER USING STEREO HIGH-SPEED CAMERA

  • Nomura, Tsuyoshi;Ikeda, Sei;Imura, Masataka;Manabe, Yoshitsugu;Chihara, Kunihiro
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.768-772 / 2009
  • This paper proposes a method for measuring three-dimensional trajectories of bubbles generated around a swimmer's arms from stereo high-speed camera videos. The method is based on two techniques: two-dimensional trajectory estimation in single-camera images and trajectory pair matching in stereo-camera images. The two-dimensional trajectory is estimated by block matching using similarity of bubble shape and probability of bubble displacement. The trajectory matching is achieved by a consistency test using the epipolar constraint in multiple frames. Experimental results for two-dimensional trajectory estimation showed an accuracy of 47% using general optical flow estimation alone, versus 71% when bubble displacement was taken into consideration, indicating that bubble displacement is an effective cue for this estimation. In three-dimensional trajectory estimation, bubbles were observed moving along the flow generated by an arm, which can provide useful information for swimmers to swim faster.
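
The trajectory pair matching step can be illustrated with an epipolar consistency check; the sketch below uses a hypothetical fundamental matrix for a horizontal-baseline stereo rig and made-up trajectories, not the paper's data.

```python
# Illustrative epipolar consistency test (not the paper's full matcher): two candidate
# trajectories are accepted as a stereo pair only if every frame satisfies x2^T F x1 ~ 0.
import numpy as np

def epipolar_residual(F, x1, x2):
    """Algebraic epipolar residual |x2^T F x1| for homogeneous image points x1, x2."""
    return abs(x2 @ F @ x1)

def trajectories_match(F, traj1, traj2, tol=1e-3):
    """traj1, traj2: lists of homogeneous image points, one per frame."""
    return all(epipolar_residual(F, p1, p2) < tol for p1, p2 in zip(traj1, traj2))

# Hypothetical fundamental matrix for a pure horizontal-baseline stereo rig:
# matching points must then lie on the same image row.
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])

traj_left = [np.array([0.10, 0.20, 1.0]), np.array([0.12, 0.25, 1.0])]
traj_right = [np.array([0.05, 0.20, 1.0]), np.array([0.07, 0.25, 1.0])]
traj_wrong = [np.array([0.05, 0.40, 1.0]), np.array([0.07, 0.60, 1.0])]

print(trajectories_match(F, traj_left, traj_right))  # True: consistent pair
print(trajectories_match(F, traj_left, traj_wrong))  # False: violates the constraint
```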


Robust 2D human upper-body pose estimation with fully convolutional network

  • Lee, Seunghee;Koo, Jungmo;Kim, Jinki;Myung, Hyun
    • Advances in Robotics Research / v.2 no.2 / pp.129-140 / 2018
  • With the increasing demand for human pose estimation in applications such as human-computer interaction and human activity recognition, there have been numerous approaches to detecting the 2D poses of people in images more efficiently. Despite many years of research, human pose estimation from images still struggles to produce satisfactory results. In this study, we propose a robust 2D human body pose estimation method using an RGB camera sensor. Our pose estimation method is efficient and cost-effective, since an RGB camera sensor is economical compared to the more commonly used high-priced sensors. For the estimation of upper-body joint positions, semantic segmentation with a fully convolutional network is exploited. From acquired RGB images, joint heatmaps are used to estimate the coordinates of each joint. The network architecture is designed to learn and detect the locations of joints via a sequential prediction process. The proposed method was tested and validated for efficient estimation of the human upper-body pose. The obtained results reveal the potential of a simple RGB camera sensor for human pose estimation applications.
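
As a minimal illustration of the heatmap-to-joint step (the fully convolutional network itself is assumed and replaced by synthetic heatmaps here), the sketch below reads joint coordinates as per-joint heatmap peaks.

```python
# Reading joint coordinates from per-joint heatmaps; the heatmaps are synthetic stand-ins
# for the output of a fully convolutional network.
import numpy as np

def joints_from_heatmaps(heatmaps):
    """heatmaps: (num_joints, H, W). Returns a (num_joints, 2) array of (x, y) peaks."""
    num_joints, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(num_joints, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return np.stack([xs, ys], axis=1)

# Synthetic heatmaps: a Gaussian blob per joint at a hypothetical location.
h, w = 64, 48
locations = [(20, 10), (32, 24), (50, 30)]        # (y, x) of three made-up joints
yy, xx = np.mgrid[0:h, 0:w]
heatmaps = np.stack([np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / 8.0) for y, x in locations])

print(joints_from_heatmaps(heatmaps))              # [[10 20] [24 32] [30 50]]
```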

Camera Calibration Method for an Automotive Safety Driving System (자동차 안전운전 보조 시스템에 응용할 수 있는 카메라 캘리브레이션 방법)

  • Park, Jong-Seop;Kim, Gi-Seok;Roh, Soo-Jang;Cho, Jae-Soo
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.621-626 / 2015
  • This paper presents a camera calibration method for a lane detection and inter-vehicle distance estimation system in an automotive safety driving system. To implement lane detection and vision-based inter-vehicle distance estimation on embedded navigation or black-box systems, computation time and algorithm complexity must be considered. The camera calibration process estimates the horizon, the position of the car's hood, and the lane width for extracting the region of interest (ROI) from input image sequences. The precision of the calibration method is very important to the lane detection and inter-vehicle distance estimation. The proposed calibration method consists of three main steps: 1) horizon area determination; 2) estimation of the car's hood area; and 3) estimation of the initial lane width. Various experimental results show the effectiveness of the proposed method.
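
A minimal sketch of how the three calibration outputs could be turned into an ROI crop is shown below; the row and width values are hypothetical, not taken from the paper.

```python
# ROI cropping from calibration outputs (horizon row, hood row, lane width).
# Illustrative only; all pixel values are made up.
import numpy as np

def road_roi(frame, horizon_row, hood_row, lane_width_px, margin=1.5):
    """Crop the frame to the road region between horizon and hood, widened around the lane."""
    h, w = frame.shape[:2]
    half = int(min(w, margin * lane_width_px) // 2)
    center = w // 2
    left, right = max(0, center - half), min(w, center + half)
    return frame[horizon_row:hood_row, left:right]

frame = np.zeros((480, 640, 3), dtype=np.uint8)     # stand-in for a captured frame
roi = road_roi(frame, horizon_row=200, hood_row=430, lane_width_px=350)
print(roi.shape)                                     # (230, 524, 3)
```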

Zoom Motion Estimation Method by Using Depth Information (깊이 정보를 이용한 줌 움직임 추정 방법)

  • Kwon, Soon-Kak;Park, Yoo-Hyun;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.16 no.2 / pp.131-137 / 2013
  • Zoom motion estimation for video sequences is very complicated to implement. In this paper, we propose a method that implements zoom motion estimation using a depth camera together with a color camera. The depth camera provides the distance information of the current block and the reference block, and the zoom ratio between the two blocks is calculated from this distance information. By appropriately zooming the reference block by this ratio, the motion-compensated difference signal can be reduced. Therefore, the proposed method can increase the accuracy of motion estimation without significantly increasing the complexity of zoom motion estimation. Simulations measuring the motion estimation accuracy of the proposed method show that the motion estimation error decreased significantly compared to the conventional block matching method.
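
The zoom-compensation idea can be sketched as follows: the depth readings give a zoom ratio, the reference block is rescaled by that ratio, and the residual is computed against the rescaled block. Everything below (block sizes, depths, nearest-neighbour resize) is an illustrative assumption, not the paper's implementation.

```python
# Zoom-compensated block residual: rescale the reference block by the depth-derived ratio.
import numpy as np

def zoom_compensated_residual(cur_block, ref_block, depth_cur, depth_ref):
    """Scale ref_block by zoom ratio = depth_ref / depth_cur, center-crop/pad, and subtract.
    Assumes square blocks of equal size."""
    ratio = depth_ref / depth_cur                      # object closer now -> ratio > 1 -> enlarge
    n = cur_block.shape[0]
    size = max(1, int(round(n * ratio)))
    # Nearest-neighbour resize of the reference block to the zoomed size.
    idx = np.clip((np.arange(size) / ratio).astype(int), 0, n - 1)
    zoomed = ref_block[np.ix_(idx, idx)]
    # Center-crop (or pad) the zoomed reference back to the block size.
    if size >= n:
        off = (size - n) // 2
        zoomed = zoomed[off:off + n, off:off + n]
    else:
        pad = ((0, n - size), (0, n - size))
        zoomed = np.pad(zoomed, pad, mode="edge")
    return cur_block.astype(float) - zoomed.astype(float)

# Hypothetical 8x8 blocks and depths: the object moved from 4.0 m to 2.0 m (2x zoom-in).
ref = np.arange(64).reshape(8, 8)
cur = ref[2:6, 2:6].repeat(2, axis=0).repeat(2, axis=1)   # crude 2x zoom of the block center
residual = zoom_compensated_residual(cur, ref, depth_cur=2.0, depth_ref=4.0)
print(np.abs(residual).mean())                             # 0.0: zoom fully compensated
```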