• Title/Summary/Keyword: pose estimation


Camera calibration parameters estimation using perspective variation ratio of grid type line widths (격자형 선폭들의 투영변화비를 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik; Choi, Seong-Gu; Rho, Do-Hwan
    • Proceedings of the KIEE Conference, 2004.11c, pp.30-32, 2004
  • In 3-D vision measurement, camera calibration is necessary to calculate parameters accurately. Camera calibration methods have developed broadly in two categories: the first establishes reference points in space, and the second uses a grid-type frame and statistical methods. However, the former makes it difficult to set up the reference points, and the latter has low accuracy. In this paper we present an algorithm for camera calibration using the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as lens distortion, focal length, scale factor, pose, orientation, and distance. The advantage of this algorithm is that it can estimate the distance of the object, and the proposed calibration method can also estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to +60 degrees. Both computer simulations and real data were used to test the proposed method, and very good results were obtained. We investigated the distance error as affected by the scale factor and the different line widths, and experimentally found an average scale factor that yields the smallest distance error for each image; this average scale factor fluctuates only slightly and reduces the distance error. Compared with classical methods that use a stereo camera or two or three orthogonal planes, the proposed method is easy to use and flexible. It advances camera calibration one step further, from static environments toward real-world settings such as autonomous land vehicles.
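
As a rough illustration of the projective relation that underlies distance estimation from grid-line widths, the Python sketch below applies the basic pinhole model (projected width ≈ focal length × real width / distance). It is not the paper's perspective-variation-ratio algorithm; the focal length, line width, and measured pixel width are assumed values.

```python
# Minimal pinhole-camera sketch of distance estimation from a projected
# line width; all names and numbers are placeholders for illustration.

def distance_from_projected_width(focal_length_px: float,
                                  real_width_m: float,
                                  projected_width_px: float) -> float:
    """Pinhole model: projected_width ~= f * real_width / Z,
    hence Z ~= f * real_width / projected_width."""
    return focal_length_px * real_width_m / projected_width_px

if __name__ == "__main__":
    f_px = 800.0         # assumed focal length in pixels
    line_width_m = 0.02  # assumed physical grid-line width (2 cm)
    measured_px = 6.4    # assumed measured line width in the image
    z = distance_from_projected_width(f_px, line_width_m, measured_px)
    print(f"estimated distance: {z:.2f} m")  # 800 * 0.02 / 6.4 = 2.50 m
```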


Uncertainty Characteristics in Future Prediction of Agrometeorological Indicators using a Climatic Water Budget Approach (기후학적 물수지를 적용한 기후변화에 따른 농업기상지표 변동예측의 불확실성)

  • Nam, Won-Ho; Hong, Eun-Mi; Choi, Jin-Yong; Cho, Jaepil; Hayes, Michael J.
    • Journal of The Korean Society of Agricultural Engineers, v.57 no.2, pp.1-13, 2015
  • The Coupled Model Intercomparison Project Phase 5 (CMIP5), coordinated by the World Climate Research Programme in support of the Intergovernmental Panel on Climate Change (IPCC) AR5, provides the most recent projections of future climate change using various global climate models under four major greenhouse gas emission scenarios. A wide selection of climate models is available for projecting future climate change, and they yield a wide range of possible outcomes when used to inform managers about potential climate changes. Hence, estimates of future agrometeorological indicators depend strongly on which global climate model and climate change scenario are used. Decision makers are increasingly expected to use climate information, but the uncertainties associated with global climate models pose substantial hurdles for agricultural resources planning. Although quantifying future uncertainty across climate change scenarios is the most reasonable approach, a preliminary analysis with sound criteria for selecting a subset of models for decision making is needed. To narrow the projections to a handful of models that could be used in a climate change impact study, we provide effective information for selecting climate models and scenarios for impact assessment, using the maximum/minimum temperature, precipitation, reference evapotranspiration, and moisture index of nine Representative Concentration Pathway (RCP) scenarios.
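
For readers unfamiliar with climatic-water-budget indicators, the sketch below computes one common simplified moisture index from precipitation and reference evapotranspiration, Im = 100 × (P − PET) / PET. The exact index formulation and aggregation used in the paper may differ; the input values are assumptions for illustration.

```python
# Hedged sketch: a simplified climatic-water-budget moisture index from
# precipitation (P) and reference evapotranspiration (PET / ET0).

def moisture_index(precip_mm: float, pet_mm: float) -> float:
    """Simplified moisture index; positive = water surplus, negative = deficit."""
    if pet_mm <= 0:
        raise ValueError("PET must be positive")
    return 100.0 * (precip_mm - pet_mm) / pet_mm

if __name__ == "__main__":
    # assumed monthly values, for illustration only
    print(moisture_index(precip_mm=120.0, pet_mm=90.0))   # ~ +33.3 (wet month)
    print(moisture_index(precip_mm=30.0, pet_mm=100.0))   # -70.0 (dry month)
```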

Zoom Lens Distortion Correction Of Video Sequence Using Nonlinear Zoom Lens Distortion Model (비선형 줌-렌즈 왜곡 모델을 이용한 비디오 영상에서의 줌-렌즈 왜곡 보정)

  • Kim, Dae-Hyun; Shin, Hyoung-Chul; Oh, Ju-Hyun; Nam, Seung-Jin; Sohn, Kwang-Hoon
    • Journal of Broadcast Engineering, v.14 no.3, pp.299-310, 2009
  • In this paper, we propose a new method to correct zoom lens distortion in video sequences captured with a zoom lens. First, we define a nonlinear zoom lens distortion model, expressed in terms of focal length and lens distortion, which exploits the property that the lens distortion parameters change nonlinearly and monotonically as the focal length increases. Then, we choose sample images from the video sequence and estimate a focal length and a lens distortion parameter for each sample image. Using these estimated parameters, we optimize the zoom lens distortion model. Once the zoom lens distortion model is obtained, the lens distortion parameters of the remaining images can be computed from their focal lengths. The proposed method was tested on many real images and videos. As a result, accurate distortion parameters were estimated from the zoom lens distortion model, and distorted images were well corrected without any visual artifacts.
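
The core idea of mapping focal length to a distortion coefficient can be illustrated with a simple curve fit. The sketch below fits an assumed monotonic form k1(f) = a / f² + b to a few per-frame (focal length, k1) estimates and predicts k1 at an intermediate zoom position; the functional form and all sample values are assumptions, not the model defined in the paper.

```python
# Hedged sketch: fit a monotonic focal-length-to-distortion curve from a few
# sampled frames, then predict the coefficient at an arbitrary zoom position.
import numpy as np
from scipy.optimize import curve_fit

def k1_model(f, a, b):
    """Assumed monotonic model for the first radial distortion coefficient."""
    return a / f**2 + b

# assumed per-frame estimates (focal length in pixels, k1) from sample images
focal_samples = np.array([400.0, 800.0, 1600.0, 3200.0])
k1_samples = np.array([-0.30, -0.09, -0.03, -0.012])

params, _ = curve_fit(k1_model, focal_samples, k1_samples)

# once fitted, any frame's coefficient follows from its focal length and can
# be handed to a standard undistortion routine
f_query = 1200.0
print("predicted k1 at f =", f_query, ":", k1_model(f_query, *params))
```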

Motion Plane Estimation for Real-Time Hand Motion Recognition (실시간 손동작 인식을 위한 동작 평면 추정)

  • Jeong, Seung-Dae; Jang, Kyung-Ho; Jung, Soon-Ki
    • The KIPS Transactions: Part B, v.16B no.5, pp.347-358, 2009
  • In this thesis, we develop a vision-based hand motion recognition system using a camera with two rotational motors. Existing systems were implemented using a range camera or multiple cameras and have a limited working area. In contrast, we use an uncalibrated camera and obtain a wider working area through pan-tilt motion. Given an image sequence provided by the pan-tilt camera, color and pattern information are integrated into a tracking system to find the 2D position and direction of the hand. Using this pose information, we estimate the 3D motion plane on which the gesture trajectory approximately lies. The 3D trajectory of the moving fingertip is projected onto the motion plane, so that the resolving power for linear gesture patterns is enhanced. We have tested the proposed approach in terms of the accuracy of the trace angle and the size of the working volume.
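
The general "motion plane" step can be illustrated with a standard least-squares plane fit. The sketch below fits a plane to 3D trajectory points via SVD and projects the trajectory onto it; the paper's own estimation from pan-tilt image measurements is not reproduced, and the trajectory data are synthetic.

```python
# Hedged sketch: SVD plane fit to a 3D trajectory and orthogonal projection
# of the points onto that plane.
import numpy as np

def fit_plane(points: np.ndarray):
    """points: (N, 3). Returns (centroid, unit normal) of the best-fit plane."""
    centroid = points.mean(axis=0)
    # the right singular vector with the smallest singular value is the normal
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

def project_onto_plane(points, centroid, normal):
    """Remove each point's out-of-plane component."""
    offsets = (points - centroid) @ normal
    return points - np.outer(offsets, normal)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # assumed noisy fingertip trajectory lying roughly on the plane z = 0.5
    xy = rng.uniform(-1, 1, size=(50, 2))
    z = 0.5 + 0.01 * rng.standard_normal(50)
    traj = np.column_stack([xy, z])
    c, n = fit_plane(traj)
    flat = project_onto_plane(traj, c, n)
    print("plane normal ~", np.round(n, 3))
```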

Error Correction Scheme in Location-based AR System Using Smartphone (스마트폰을 이용한 위치정보기반 AR 시스템에서의 부정합 현상 최소화를 위한 기법)

  • Lee, Ju-Yong; Kwon, Jun-Sik
    • Journal of Digital Contents Society, v.16 no.2, pp.179-187, 2015
  • The spread of smartphones has created a wide variety of content, and among it, AR applications using Location Based Services (LBS) are in broad demand. In this paper, we propose an error correction algorithm for a location-based Augmented Reality (AR) system that uses computer vision technology in the Android environment. The method detects initial features with the SURF (Speeded Up Robust Features) algorithm to minimize mismatches and reduce computation, tracks the detected features, and applies them in the mobile environment. We use GPS data to retrieve location information, and use the gyro sensor and G-sensor to obtain pose and direction information. However, cumulative errors in the location information cause a mismatch in which the augmented object does not stay fixed, which cannot be accepted as complete AR technology. Because AR requires many operations, implementation in a mobile environment presents many difficulties. The proposed approach minimizes performance degradation in mobile environments, is relatively simple to implement, and can be useful for a variety of existing systems in a mobile environment.
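
The feature-based correction step can be sketched with OpenCV's SURF detector and a ratio-test matcher, as below. SURF lives in the opencv-contrib package (cv2.xfeatures2d) and may require a build with non-free modules enabled; the image file names and threshold values are assumptions, and this is a generic desktop Python sketch rather than the paper's Android implementation.

```python
# Hedged sketch: SURF detection and matching as a stand-in for the
# vision-based correction of a drifting location/sensor pose estimate.
import cv2

img_ref = cv2.imread("reference_view.png", cv2.IMREAD_GRAYSCALE)
img_cur = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_ref, des_ref = surf.detectAndCompute(img_ref, None)
kp_cur, des_cur = surf.detectAndCompute(img_cur, None)

# brute-force matching with Lowe's ratio test to discard ambiguous matches
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_ref, des_cur, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

print(f"{len(good)} reliable matches; such matches could correct the "
      "GPS/gyro pose so the overlaid object stays fixed on the scene.")
```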

Training Network Design Based on Convolution Neural Network for Object Classification in few class problem (소 부류 객체 분류를 위한 CNN기반 학습망 설계)

  • Lim, Su-chang; Kim, Seung-Hyun; Kim, Yeon-Ho; Kim, Do-yeon
    • Journal of the Korea Institute of Information and Communication Engineering, v.21 no.1, pp.144-150, 2017
  • Recently, deep learning has been used for intelligent data processing and for improving accuracy. It forms a computational model composed of multiple data-processing layers that learn data representations through abstraction at various levels. Convolutional neural networks (CNNs), a category of deep learning, are utilized in various research fields such as human pose estimation, face recognition, image classification, and speech recognition. CNNs with deep layers and many classes show good performance on image classification and achieve high classification rates, but they suffer from overfitting when only a small amount of data is available. Therefore, we design a CNN-based training network and train it on our image data set for object classification in a few-class problem. The experiments show a classification rate that is 7.06% higher on average than previous networks designed to classify objects in a 1000-class problem.
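
As a hedged illustration of the idea of using a smaller network when classes and data are few, the PyTorch sketch below defines a compact CNN with global average pooling. It is not the architecture proposed in the paper (the abstract does not specify it); layer sizes, input resolution, and class count are assumptions.

```python
# Hedged sketch: a compact CNN for a few-class image classification task.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64 -> 32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32 -> 16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # global pooling keeps the head small
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    model = SmallCNN(num_classes=5)
    dummy = torch.randn(2, 3, 64, 64)   # assumed input size
    print(model(dummy).shape)           # torch.Size([2, 5])
```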

Real-time 3D Volumetric Model Generation using Multiview RGB-D Camera (다시점 RGB-D 카메라를 이용한 실시간 3차원 체적 모델의 생성)

  • Kim, Kyung-Jin; Park, Byung-Seo; Kim, Dong-Wook; Kwon, Soon-Chul; Seo, Young-Ho
    • Journal of Broadcast Engineering, v.25 no.3, pp.439-448, 2020
  • In this paper, we propose a modified optimization algorithm for point cloud matching of multi-view RGB-D cameras. In the computer vision field, it is generally very important to estimate the position of the camera accurately. The 3D model generation methods proposed in previous research require a large number of cameras or expensive 3D cameras, and methods that obtain the camera's extrinsic parameters from 2D images have large errors. We propose a matching technique for generating 3D point cloud and mesh models that can provide an omnidirectional free viewpoint using eight low-cost RGB-D cameras. The method uses depth-map-based function optimization together with RGB images and obtains coordinate transformation parameters that can generate a high-quality 3D model without requiring initial parameters.
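
For context, pairwise alignment of two RGB-D point clouds is commonly done with ICP. The Open3D sketch below registers two clouds with point-to-plane ICP as a generic stand-in for the paper's modified depth-map-based optimization; the file names, voxel size, and correspondence distance are assumptions.

```python
# Hedged sketch: pairwise point-to-plane ICP between two RGB-D point clouds.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("camera_0.ply")
target = o3d.io.read_point_cloud("camera_1.ply")

voxel = 0.01  # 1 cm downsampling, assumed
source_d = source.voxel_down_sample(voxel)
target_d = target.voxel_down_sample(voxel)
target_d.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=30))

result = o3d.pipelines.registration.registration_icp(
    source_d, target_d,
    max_correspondence_distance=3 * voxel,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("fitness:", result.fitness)
print("camera_0 -> camera_1 transform:\n", result.transformation)
```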

Efficient Kinect Sensor-Based Reactive Path Planning Method for Autonomous Mobile Robots in Dynamic Environments (키넥트 센서를 이용한 동적 환경에서의 효율적인 이동로봇 반응경로계획 기법)

  • Tuvshinjargal, Doopalam; Lee, Deok Jin
    • Transactions of the Korean Society of Mechanical Engineers A, v.39 no.6, pp.549-559, 2015
  • In this paper, an efficient dynamic reactive motion planning method for an autonomous vehicle in a dynamic environment is proposed. The purpose of the proposed method is to improve the robustness of autonomous robot motion planning within dynamic, uncertain environments by integrating a virtual plane-based reactive motion planning technique with a sensor fusion-based obstacle detection approach. The dynamic reactive motion planning method assumes a local observer in the virtual plane, which allows the effective transformation of complex dynamic planning problems into simple stationary ones by providing the relative speed and orientation information between the robot and the obstacles. In addition, the sensor fusion-based obstacle detection technique allows the pose estimation of moving obstacles using a Kinect sensor and sonar sensors, thus improving the accuracy and robustness of the reactive motion planning approach. The performance of the proposed method was demonstrated not only through simulation studies but also through field experiments using multiple moving obstacles in hostile dynamic environments.
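
In the same spirit of turning a moving-obstacle problem into a locally stationary one, the sketch below expresses an obstacle's state relative to the robot and checks the time and distance of closest approach under constant velocities. This is a generic relative-motion check, not the paper's virtual-plane formulation or its Kinect/sonar fusion; all states and the safety radius are assumptions.

```python
# Hedged sketch: relative-motion closest-approach check in the robot's frame.
import numpy as np

def closest_approach(p_robot, v_robot, p_obs, v_obs):
    """Return (t*, d*) for a constant-velocity robot and obstacle."""
    p = np.asarray(p_obs, float) - np.asarray(p_robot, float)  # relative position
    v = np.asarray(v_obs, float) - np.asarray(v_robot, float)  # relative velocity
    if np.allclose(v, 0.0):
        return 0.0, float(np.linalg.norm(p))
    t_star = max(0.0, -float(p @ v) / float(v @ v))
    d_star = float(np.linalg.norm(p + t_star * v))
    return t_star, d_star

if __name__ == "__main__":
    # assumed states: robot moving +x at 0.5 m/s, obstacle crossing its path
    t, d = closest_approach(p_robot=[0.0, 0.0], v_robot=[0.5, 0.0],
                            p_obs=[2.0, 1.0], v_obs=[0.0, -0.3])
    safe_radius = 0.6  # assumed combined robot + obstacle radius
    print(f"closest approach {d:.2f} m in {t:.1f} s ->",
          "replan" if d < safe_radius else "keep current path")
```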

Contamination and Risk Assessment of Lead and Cadmium in Commonly Consumed Fishes as Affected by Habitat (서식지에 따른 다소비 어류의 납과 카드뮴의 오염 및 위해 평가)

  • Kim, Ki Hyun; Kim, Yong Jung; Heu, Min Soo; Kim, Jin-Soo
    • Korean Journal of Fisheries and Aquatic Sciences, v.49 no.5, pp.541-555, 2016
  • This study determined the concentrations of lead and cadmium in 18 species of commonly consumed fish and assessed the risk, based on provisional tolerable weekly (monthly) intake [PTW(M)I] percentages, as affected by behavioral characteristics such as migration and settlement. Among the 18 species, the mean concentrations of lead and cadmium were higher in the 11 migratory species (largehead hairtail Trichiurus lepturus, chub mackerel Scomber japonicus, Pacific saury Cololabis saira, skipjack tuna Katsuwonus pelamis, Pacific cod Gadus macrocephalus, anchovy Engraulis japonicus, Alaska pollack Theragra chalcogramma, brown croaker Miichthys miiuy, Japanese Spanish mackerel Scomberomorus niphonius, yellow croaker Larimichthys polyactis, and Pacific herring Clupea pallasii) than in the seven demersal species (red stingray Dasyatis akajei, brown sole Pleuronectes herzensteini, bastard halibut Paralichthys olivaceus, conger eel Conger myriaster, blackmouth angler Lophiomus setigerus, rockfish Sebastes schlegelii, and filefish Stephanolepis cirrhifer). Based on the mean concentrations, the PTWI percentages of lead and cadmium in the commonly consumed migratory fish were 1.900% and 2.986%, respectively, which were higher than the corresponding values for the commonly consumed demersal fishes (0.257% and 0.318%). The estimation of weekly (monthly) intakes and target hazard quotients for the toxic elements lead and cadmium revealed that the commonly consumed migratory and demersal fish do not pose any health risks for consumers.
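
For readers unfamiliar with "% of PTWI" figures, the sketch below shows the basic intake arithmetic: a weekly metal intake per kilogram of body weight expressed as a percentage of a tolerable reference value. Every number in the example (metal concentration, fish consumption, body weight, and the reference value) is a placeholder for illustration, not data from the paper.

```python
# Hedged sketch of the arithmetic behind a "% of PTWI" figure.

def percent_of_ptwi(conc_mg_per_kg: float,
                    weekly_fish_kg: float,
                    body_weight_kg: float,
                    ptwi_ug_per_kg_bw: float) -> float:
    """Weekly intake (ug/kg bw/week) as a percentage of the tolerable value."""
    weekly_intake_ug_per_kg_bw = (conc_mg_per_kg * 1000.0  # mg/kg food -> ug/kg food
                                  * weekly_fish_kg / body_weight_kg)
    return 100.0 * weekly_intake_ug_per_kg_bw / ptwi_ug_per_kg_bw

if __name__ == "__main__":
    print(percent_of_ptwi(conc_mg_per_kg=0.05,     # assumed Pb level in fish
                          weekly_fish_kg=0.35,      # assumed weekly consumption
                          body_weight_kg=60.0,      # assumed body weight
                          ptwi_ug_per_kg_bw=25.0))  # placeholder reference value
```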

Fast Structure Recovery and Integration using Improved Scaled Orthographic Factorization (개선된 직교분해기법을 사용한 빠른 구조 복원 및 융합)

  • Park, Jong-Seung; Yoon, Jong-Hyun
    • Journal of Korea Multimedia Society, v.10 no.3, pp.303-315, 2007
  • This paper proposes a 3D structure recovery and registration method that uses four or more common points. For each frame of a given video, a partial structure is recovered using tracked points. The 3D coordinates, camera positions, and camera directions are computed at once by our improved scaled orthographic factorization method. The partially recovered point sets are parts of a whole model, and registering them produces the complete shape. The recovered subsets are integrated by transforming the coordinate system of each local point subset into a common basis coordinate system. The process of shape recovery and integration is performed uniformly and linearly, without any nonlinear iterative process and without loss of accuracy. The execution time for the integration is significantly reduced relative to the conventional ICP method. Owing to this fast recovery and registration framework, our shape recovery scheme is applicable to various interactive video applications. The processing time per frame is under 0.01 seconds in most cases, and the integration error is under 0.1 mm on average.
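
The rank-3 factorization step at the core of orthographic and scaled-orthographic structure from motion (Tomasi-Kanade style) can be sketched as below. The metric-upgrade constraints, the paper's specific improvements, and the registration step are omitted; tracked 2D points are assumed to be given as a (2F x P) measurement matrix, and the data here are random placeholders.

```python
# Hedged sketch: rank-3 SVD factorization of a measurement matrix into
# motion and shape, up to an affine ambiguity.
import numpy as np

def factorize(W: np.ndarray):
    """W: (2F, P) stacked x/y tracks. Returns (motion (2F,3), shape (3,P))."""
    W_centered = W - W.mean(axis=1, keepdims=True)  # remove per-view centroid
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    rank3 = np.sqrt(np.diag(s[:3]))
    motion = U[:, :3] @ rank3          # camera rows, up to an affine ambiguity
    shape = rank3 @ Vt[:3, :]          # 3D points, up to the same ambiguity
    return motion, shape

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    F, P = 6, 20                       # assumed: 6 views, 20 tracked points
    W = rng.standard_normal((2 * F, P))
    M, S = factorize(W)
    print(M.shape, S.shape)            # (12, 3) (3, 20)
```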
