• Title/Summary/Keyword: 카메라 위치 추정 (camera position estimation)


An Energy-Efficient Operating Scheme of Surveillance System by Predicting the Location of Targets (감시 대상의 위치 추정을 통한 감시 시스템의 에너지 효율적 운영 방법)

  • Lee, Kangwook;Lee, Soobin;Lee, Howon;Cho, Dong-Ho
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.2 / pp.172-180 / 2013
  • In this paper, we propose an energy-efficient camera operating scheme that saves energy in large-scale surveillance camera deployments. The scheme determines how many cameras should be turned on by considering the velocity vector of the monitored targets, acquired through DSRC object tracking, together with a model of the installed cameras' specifications and a road model of the installation sites. We also discuss other techniques for saving energy in surveillance systems. Through performance evaluation, we demonstrate that the proposed scheme outperforms previous approaches.
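As a rough illustration of the activation decision described above, the sketch below models the road as a one-dimensional axis, gives each camera a fixed coverage interval, and predicts the target's span of positions from its DSRC-derived velocity. The interval values, prediction horizon, and constant-velocity assumption are illustrative simplifications, not the authors' model.

```python
# Sketch: choose which cameras to power on, given a target's position and
# velocity estimated from DSRC tracking. The road is modeled as a 1-D axis and
# each camera covers a fixed interval of it (hypothetical simplification).

def cameras_to_activate(position_m, velocity_mps, horizon_s, cameras):
    """cameras: list of (cam_id, cover_start_m, cover_end_m)."""
    # Predict the span the target may occupy within the horizon (constant velocity).
    lo = min(position_m, position_m + velocity_mps * horizon_s)
    hi = max(position_m, position_m + velocity_mps * horizon_s)
    return [cam_id for cam_id, start, end in cameras
            if end >= lo and start <= hi]   # coverage overlaps predicted span

# Example: target at 120 m moving at 15 m/s, 4 s prediction horizon.
cameras = [("cam0", 0, 100), ("cam1", 90, 200), ("cam2", 190, 300)]
print(cameras_to_activate(120.0, 15.0, 4.0, cameras))  # ['cam1']
```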

Video Augmentation of Virtual Object by Uncalibrated 3D Reconstruction from Video Frames (비디오 영상에서의 비보정 3차원 좌표 복원을 통한 가상 객체의 비디오 합성)

  • Park Jong-Seung;Sung Mee-Young
    • Journal of Korea Multimedia Society / v.9 no.4 / pp.421-433 / 2006
  • This paper proposes a method for inserting virtual objects into a real video stream based on feature tracking and camera pose estimation from a set of single-camera video frames. To insert or modify 3D shapes in target video frames, the transformation from the 3D objects to their projections onto the video frames must be recovered. We show that, without a camera calibration process, 3D reconstruction is possible from multiple images taken by a single camera with fixed internal parameters. The proposed approach is based on a simplification of the intrinsic camera matrix and the use of projective geometry, which makes it particularly useful for augmented reality applications that insert or modify models in a real video stream. Because the auto-calibration step uses a linear parameter estimation approach, the method improves stability and reduces execution time. Several experimental results on real-world video streams demonstrate the usefulness of the method for augmented reality applications.

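A minimal sketch of this kind of pipeline is shown below using OpenCV: features are tracked between two frames, the relative camera pose is recovered, and virtual 3D points are projected into the new frame. Unlike the paper, which avoids explicit calibration via auto-calibration of the intrinsics, the sketch assumes a known intrinsic matrix K for brevity; all numeric values are placeholders.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

def augment_pair(frame0_gray, frame1_gray, virtual_points_3d):
    """virtual_points_3d: (N, 3) float array in the first camera's frame."""
    # 1. Track corner features between the two frames.
    p0 = cv2.goodFeaturesToTrack(frame0_gray, maxCorners=500,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(frame0_gray, frame1_gray, p0, None)
    good0 = p0[status.ravel() == 1]
    good1 = p1[status.ravel() == 1]

    # 2. Relative camera pose from the essential matrix.
    E, inliers = cv2.findEssentialMat(good0, good1, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, good0, good1, K, mask=inliers)

    # 3. Project the virtual 3D points into the second frame.
    rvec, _ = cv2.Rodrigues(R)
    img_pts, _ = cv2.projectPoints(virtual_points_3d, rvec, t, K, None)
    return img_pts.reshape(-1, 2)   # pixel locations at which to draw the object
```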

Motion Plane Estimation for Real-Time Hand Motion Recognition (실시간 손동작 인식을 위한 동작 평면 추정)

  • Jeong, Seung-Dae;Jang, Kyung-Ho;Jung, Soon-Ki
    • The KIPS Transactions:PartB / v.16B no.5 / pp.347-358 / 2009
  • In this thesis, we develop a vision-based hand motion recognition system using a camera mounted on two rotational motors. Existing systems were implemented with a range camera or multiple cameras and have a limited working area; in contrast, we use an uncalibrated camera and obtain a wider working area through pan-tilt motion. Given the image sequence provided by the pan-tilt camera, color and pattern information are integrated into a tracking system to find the 2D position and direction of the hand. From this pose information, we estimate the 3D motion plane on which the gesture trajectory approximately lies. The 3D trajectory of the moving fingertip is projected onto the motion plane, which enhances the resolving power for linear gesture patterns. We evaluate the proposed approach in terms of the accuracy of the trace angle and the size of the working volume.
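The motion-plane step can be illustrated with a short least-squares sketch: fit a plane to the 3D fingertip trajectory with an SVD and project the trajectory onto it. The random trajectory below is a placeholder, and the fitting method is a generic choice rather than the estimator used in the thesis.

```python
import numpy as np

def fit_motion_plane(points):              # points: (N, 3) array
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                        # direction of least variance
    return centroid, normal / np.linalg.norm(normal)

def project_to_plane(points, centroid, normal):
    offsets = points - centroid
    # Remove each point's out-of-plane component along the normal.
    return points - np.outer(offsets @ normal, normal)

trajectory = np.random.rand(50, 3)         # placeholder fingertip path
c, n = fit_motion_plane(trajectory)
flat = project_to_plane(trajectory, c, n)  # planar gesture trace for recognition
```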

A Study on AI-based Autonomous Traffic Cone Tracking Algorithm for 1/5 scale Car Platform (인공지능기반 1/5 스케일 콘 추종 자율 주행 기법에 관한 연구)

  • Tae Min KIM;Seong Bin MA;Ui Jun SONG;Yu Bin WON;Jae Hyeok LEE;Kuk Won KO
    • Annual Conference of KIPS / 2023.11a / pp.283-284 / 2023
  • In student autonomous-driving competitions, the traffic-cone-following event, which tests the ability to generate a path around obstacles, is one of the important categories. Determining the positions of the cones normally requires a LiDAR sensor: indoors, a low-cost 2D LiDAR is sufficient to detect cone positions, but outdoors an expensive 3D LiDAR or an expensive 3D camera is needed. Because such costly equipment is an obstacle to making hands-on practice widely accessible, we developed a method that detects traffic cones using a single camera and artificial intelligence, and used it for path generation and vehicle control. As a result, cone position estimation and driving were performed successfully with an accuracy within 0.4 m.
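A plausible single-camera geometry for this kind of cone localization is the pinhole relation between the cone's known real height and its detected bounding-box height. The sketch below assumes a hypothetical detector output and placeholder focal length, principal point, and cone height; it is not the trained model or calibration from the paper.

```python
FX, CX = 700.0, 640.0     # assumed focal length and principal point (pixels)
CONE_HEIGHT_M = 0.45      # assumed real height of a traffic cone

def cone_position(bbox):
    """bbox = (u_min, v_min, u_max, v_max) in pixels -> (lateral_m, forward_m)."""
    pixel_height = bbox[3] - bbox[1]
    z = FX * CONE_HEIGHT_M / pixel_height   # pinhole model: Z = f * H / h
    u_center = 0.5 * (bbox[0] + bbox[2])
    x = (u_center - CX) * z / FX            # lateral offset from the optical axis
    return x, z

print(cone_position((600, 300, 680, 420)))  # e.g. (0.0 m lateral, ~2.6 m ahead)
```

The estimated (x, z) positions of consecutive cones could then feed a path generator, which is the role they play in the paper's pipeline.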

Estimating Location in Real-world of a Observer for Adaptive Parallax Barrier (적응적 패럴랙스 베리어를 위한 사용자 위치 추적 방법)

  • Kang, Seok-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.12 / pp.1492-1499 / 2019
  • This paper proposes a method for tracking the position of an observer in order to control the viewing zone of an adaptive parallax barrier. The face pose is estimated with a Constrained Local Model based on a shape model and facial landmarks, which allows the eye distance to be measured robustly across face poses. Using the camera's imaging geometry, the distance and horizontal location are converted to centimeters. The pixel pitch of the adaptive parallax barrier is adjusted according to the position of the observer's eyes, and the barrier is shifted to adjust the viewing zone. The observer is tracked in the range of 60 cm to 490 cm, and the error, measurable range, and frame rate are measured for several camera resolutions. As a result, the observer's position is measured with an average absolute error of 3.1642 cm, and the measurable range is about 278 cm at 320×240, about 488 cm at 640×480, and about 493 cm at 1280×960.
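The distance conversion can be sketched with the pinhole model: the observer's depth follows from the ratio of the real interpupillary distance to its pixel measurement, and the horizontal offset from the eye midpoint. The focal length, principal point, and interpupillary distance below are assumed values, not the paper's calibration.

```python
FX, CX = 900.0, 320.0   # assumed focal length / principal point (pixels)
IPD_CM = 6.3            # assumed average interpupillary distance (cm)

def observer_position_cm(left_eye, right_eye):
    """left_eye, right_eye: (u, v) pixel coordinates of the two pupils."""
    pixel_ipd = ((left_eye[0] - right_eye[0]) ** 2 +
                 (left_eye[1] - right_eye[1]) ** 2) ** 0.5
    distance = FX * IPD_CM / pixel_ipd          # depth in cm (pinhole model)
    u_mid = 0.5 * (left_eye[0] + right_eye[0])
    horizontal = (u_mid - CX) * distance / FX   # horizontal offset in cm
    return distance, horizontal
```

The barrier's pixel pitch and shift would then be updated from the returned distance and horizontal offset.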

The Crowd Density Estimation Using Pedestrian Depth Information (보행자 깊이 정보를 이용한 군중 밀집도 추정)

  • Yu-Jin Roh;Sang-Min Lee
    • Annual Conference of KIPS / 2023.11a / pp.705-708 / 2023
  • Accurately estimating crowd density is important for preventing crowd-crush accidents in advance. Some existing methods estimate crowd density from crowd counting or train directly on data containing perspective distortion; such approaches are strongly affected by perspective distortion, in which object size changes with distance. This study proposes a crowd density estimation algorithm that uses pedestrian depth information. To compute each pedestrian's depth, the head size, which varies little between individuals, is used. OC-Sort is used as the learning model for head detection. The pedestrian's depth is estimated from the bounding-box coordinates of the detected head, the actual head size, and the camera parameters, and a density map is then estimated from the depth information. Because the proposed algorithm accurately analyzes object positions and density in crowded environments, it can serve as a base technology for intelligent CCTV systems that prevent crowd-crush accidents, and it is also expected to play an important role in improving the efficiency of security and traffic management systems.
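The depth step can be sketched with the same pinhole relation (depth proportional to real head size over pixel head size), followed by a simple Gaussian-accumulation density map. The focal length, head width, and kernel width below are assumptions, and the OC-Sort detection stage itself is not reproduced.

```python
import numpy as np

FX, HEAD_WIDTH_M = 1000.0, 0.18        # assumed focal length and real head width

def head_depth_m(box):                 # box = (u_min, v_min, u_max, v_max)
    return FX * HEAD_WIDTH_M / (box[2] - box[0])

def density_map(boxes, shape=(480, 640), sigma=15.0):
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    dmap = np.zeros(shape)
    for u0, v0, u1, v1 in boxes:
        cu, cv = 0.5 * (u0 + u1), 0.5 * (v0 + v1)
        # Accumulate a Gaussian bump at each detected head location.
        dmap += np.exp(-((xs - cu) ** 2 + (ys - cv) ** 2) / (2 * sigma ** 2))
    return dmap                        # higher values indicate a denser crowd
```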

Structure and Motion Estimation with Expectation Maximization and Extended Kalman Smoother for Continuous Image Sequences (부드러운 카메라 움직임을 위한 EM 알고리듬을 이용한 삼차원 보정)

  • Seo, Yong-Duek;Hong, Ki-Sang
    • Journal of KIISE:Software and Applications / v.31 no.2 / pp.245-254 / 2004
  • This paper deals with the problem of estimating structure and motion from long continuous image sequences, applying an Expectation Maximization algorithm based on an extended Kalman smoother to impose time-continuity on the motion parameters. By repeatedly estimating the state transition matrix of the dynamic equation and the noise parameters of the dynamic and measurement equations, the optimization yields maximum likelihood estimates of the motion and structure parameters. In practice, this is essential for dealing with long video-rate image sequences in which the system equation and noise are partially unknown. The algorithm is implemented and tested on a real image sequence.
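The E-step of such an EM scheme is, in essence, a Kalman filter followed by a Rauch-Tung-Striebel smoothing pass over a state-space model. The generic linear-Gaussian sketch below shows that forward-backward pass; the state, transition, and measurement matrices are placeholders rather than the paper's camera-motion and structure parameterization, and the M-step (re-estimating A, Q, R from the smoothed moments) is only indicated in a comment.

```python
import numpy as np

def kalman_rts_smoother(ys, A, C, Q, R, x0, P0):
    """E-step sketch for x_t = A x_{t-1} + w,  y_t = C x_t + v."""
    n, T = x0.shape[0], len(ys)
    xf, Pf, xp, Pp = [], [], [], []
    x, P = x0, P0
    for y in ys:                                    # forward filtering pass
        x_pred, P_pred = A @ x, A @ P @ A.T + Q
        S = C @ P_pred @ C.T + R
        K = P_pred @ C.T @ np.linalg.inv(S)
        x = x_pred + K @ (y - C @ x_pred)
        P = (np.eye(n) - K @ C) @ P_pred
        xp.append(x_pred); Pp.append(P_pred); xf.append(x); Pf.append(P)
    xs, Ps = [None] * T, [None] * T
    xs[-1], Ps[-1] = xf[-1], Pf[-1]
    for t in range(T - 2, -1, -1):                  # backward RTS smoothing pass
        J = Pf[t] @ A.T @ np.linalg.inv(Pp[t + 1])
        xs[t] = xf[t] + J @ (xs[t + 1] - xp[t + 1])
        Ps[t] = Pf[t] + J @ (Ps[t + 1] - Pp[t + 1]) @ J.T
    return xs, Ps   # the M-step would re-estimate A, Q, R from these moments
```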

2D Spatial-Map Construction for Workers Identification and Avoidance of AGV (AGV의 작업자 식별 및 회피를 위한 2D 공간 지도 구성)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.347-352 / 2012
  • In this paper, a 2D spatial-map construction method for worker identification and avoidance by an AGV is proposed, using a spatial-coordinate detection scheme based on a stereo camera. In the proposed system, the face region of a moving person is detected in the left image of the stereo pair using the YCbCr color model, and its center coordinates are computed with the centroid method; using these data, the stereo camera mounted on the mobile robot is controlled to track the moving target in real time. Moreover, a depth map is obtained from the disparity map computed from the left and right images captured by the tracking-controlled stereo camera system, together with the perspective transformation between the 3D scene and the image plane. Experiments on AGV driving with 240 frames of stereo images show that the error between the calculated and measured values of the worker's width is as low as 2.19% and 1.52% on average.
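Two of the measurements described above can be sketched with OpenCV: a face centroid from YCbCr skin-color thresholding on the left image, and metric depth from disparity via Z = fB/d. The skin-color thresholds, focal length, and baseline below are assumed values, not the paper's settings.

```python
import cv2
import numpy as np

def face_centroid(bgr_left):
    ycrcb = cv2.cvtColor(bgr_left, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # rough skin range
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                                           # no skin pixels found
    return m["m10"] / m["m00"], m["m01"] / m["m00"]           # centroid (u, v)

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    return focal_px * baseline_m / disparity_px               # Z = f * B / d
```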

Multi-Depth Map Fusion Technique from Depth Camera and Multi-View Images (깊이정보 카메라 및 다시점 영상으로부터의 다중깊이맵 융합기법)

  • 엄기문;안충현;이수인;김강연;이관행
    • Journal of Broadcast Engineering / v.9 no.3 / pp.185-195 / 2004
  • This paper presents a multi-depth-map fusion method for 3D scene reconstruction that fuses depth maps obtained from stereo matching and from a depth camera. Traditional stereo matching techniques, which estimate disparities between two images, often produce inaccurate depth maps because of occlusions and homogeneous regions. The depth map obtained from the depth camera is globally accurate but noisy and covers a limited depth range. To obtain better depth estimates than either conventional technique alone, we propose a fusion method that combines the multiple depth maps from stereo matching and the depth camera. We first obtain two depth maps from stereo matching of 3-view images, and an additional depth map from the depth camera for the center-view image. After preprocessing each depth map, a depth value is selected for each pixel among the candidates. Simulation results show improvements in some background regions with the proposed fusion technique.
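The per-pixel selection can be sketched as a simple rule: trust the depth camera inside its valid range and fall back to stereo where the two stereo-derived depths agree. The valid range and agreement tolerance below are assumptions for illustration, not the selection criterion used in the paper.

```python
import numpy as np

def fuse_depth_maps(d_stereo_a, d_stereo_b, d_cam, cam_range=(0.5, 7.0), tol=0.05):
    # Trust the depth camera only inside its valid working range.
    fused = np.where((d_cam > cam_range[0]) & (d_cam < cam_range[1]), d_cam, np.nan)
    # Elsewhere, use stereo where the two stereo depth maps agree.
    stereo_mean = 0.5 * (d_stereo_a + d_stereo_b)
    stereo_ok = np.abs(d_stereo_a - d_stereo_b) < tol * stereo_mean
    fused = np.where(np.isnan(fused) & stereo_ok, stereo_mean, fused)
    return fused    # NaN marks pixels where no candidate is considered reliable
```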

An Accurate Extrinsic Calibration of Laser Range Finder and Vision Camera Using 3D Edges of Multiple Planes (다중 평면의 3차원 모서리를 이용한 레이저 거리센서 및 카메라의 정밀 보정)

  • Choi, Sung-In;Park, Soon-Yong
    • KIPS Transactions on Software and Data Engineering / v.4 no.4 / pp.177-186 / 2015
  • For data fusion of a laser range finder (LRF) and a vision camera, accurate calibration of the external parameters describing the relative pose between the two sensors is necessary. This paper proposes a new calibration method that acquires more accurate external parameters between an LRF and a vision camera than other existing methods. The main motivation of the proposed method is that corner data of a known 3D structure acquired by the LRF should project onto a straight line in the camera image. To satisfy this constraint, we propose a 3D geometric model and a numerical solution that minimizes the model's energy function. In addition, we describe the implementation steps for acquiring the LRF data and camera images needed for accurate calibration. The experimental results show that the proposed method is more accurate than other conventional methods.
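The core constraint can be sketched as a point-to-line residual: LRF corner points transformed by the extrinsic pose (R, t) and projected with the intrinsics K should fall on the corresponding image line, and a nonlinear least-squares solver minimizes the summed distances. The parameterization and setup below are illustrative, not the paper's exact energy function.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, lrf_points, image_lines, K):
    """params = [rx, ry, rz, tx, ty, tz]; image_lines: (a, b, c) with a^2 + b^2 = 1."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:].reshape(3, 1)
    cam_pts = R @ lrf_points.T + t            # LRF frame -> camera frame
    uv = K @ cam_pts                          # project with intrinsics
    uv = uv[:2] / uv[2]
    res = []
    for (u, v), (a, b, c) in zip(uv.T, image_lines):
        res.append(a * u + b * v + c)         # signed point-to-line distance
    return res

# Hypothetical usage, given measured lrf_points, image_lines, and K:
# solution = least_squares(residuals, x0=np.zeros(6),
#                          args=(lrf_points, image_lines, K))
```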