• Title/Summary/Keyword: Camera Matrix (카메라 행렬)


Image Mosaicing using Voronoi Distance Matching (보로노이 거리(Voronoi Distance)정합을 이용한 영상 모자익)

  • 이칠우;정민영;배기태;이동휘
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.7
    • /
    • pp.1178-1188
    • /
    • 2003
  • In this paper, we describe image mosaicing techniques for constructing a large high-resolution image from images taken with a hand-held video camera. We propose a method that automatically retrieves the exact matching area using color and shape information. The proposed method first extracts candidate areas of similar shape using a Voronoi distance matching method, which rapidly estimates corresponding points between adjacent images and calculates their initial transformations, and then finds the final matching area using color information. The method builds a Voronoi surface that assigns, with respect to each feature point of an image, a distance value between that feature point and the other points, and extracts the corresponding points that minimize the Voronoi distance within the matching area between an input image and a base image using binary search. Using the Levenberg-Marquardt method, we refine the initial transformation matrix into an optimal transformation matrix and use this matrix to combine the base image with the input image.

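The final step described in the entry above, refining an initial transformation with the Levenberg-Marquardt method, can be sketched as follows. This is a minimal illustration, not the authors' code: the correspondences, the initial homography `H0`, and the use of `scipy.optimize.least_squares` are assumptions.

```python
# Minimal sketch (not the authors' code): Levenberg-Marquardt refinement of an
# initial 3x3 homography from matched feature points, as in the final step of
# the mosaicing pipeline described above. Point arrays are assumed inputs.
import numpy as np
from scipy.optimize import least_squares

def project(H, pts):
    """Apply a 3x3 homography to Nx2 points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def refine_homography(H_init, src_pts, dst_pts):
    """Refine H_init (with H[2,2] fixed to 1) so project(H, src) ~ dst, via LM."""
    def residuals(h8):
        H = np.append(h8, 1.0).reshape(3, 3)
        return (project(H, src_pts) - dst_pts).ravel()
    h0 = (H_init / H_init[2, 2]).ravel()[:8]
    res = least_squares(residuals, h0, method="lm")
    return np.append(res.x, 1.0).reshape(3, 3)

# Usage with hypothetical correspondences found by the Voronoi-distance matching:
src = np.array([[10., 20.], [200., 30.], [190., 180.], [15., 170.]])
dst = np.array([[12., 25.], [205., 28.], [188., 185.], [18., 172.]])
H0 = np.eye(3)               # initial transformation (e.g., from the candidate matching area)
print(refine_homography(H0, src, dst))
```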

Analysis of Geometrical Relations of 2D Affine-Projection Images and Its 3D Shape Reconstruction (정사투영된 2차원 영상과 복원된 3차원 형상의 기하학적 관계 분석)

  • Koh, Sung-Shik;Zin, Thi Thi;Hama, Hiromitsu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.4 s.316
    • /
    • pp.1-7
    • /
    • 2007
  • In this paper, we analyze the geometrical relations involved in reconstructing 3D shape from 2D images taken under affine projection. The purpose of this research is to contribute to more accurate 3D reconstruction under noise by geometrically analyzing the 2D-to-3D relationship. When no feature points (FPs) are missing and there is no noise in the 2D image plane, an accurate 3D shape reconstruction is known to be provided by Singular Value Decomposition (SVD) factorization. However, if several FPs are not observed because of object occlusion, low image resolution, and so on, there is no simple solution. Moreover, a 3D shape reconstructed from noise-distributed FPs is perturbed by the influence of the noise. This paper focuses on the analysis of geometrical properties that can interpret the missing FPs even when noise is distributed over the other FPs.
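
The SVD factorization cited above as the standard solution for the no-noise, no-missing-FP case can be illustrated with a small sketch. This is the textbook rank-3 affine factorization on synthetic data, not the paper's analysis, and it deliberately ignores the missing-FP case the paper studies.

```python
# Minimal sketch (assumed setup): rank-3 SVD factorization of a centered 2F x P
# measurement matrix of feature-point tracks under affine projection, recovering
# motion M (2F x 3) and shape S (3 x P) up to an affine transformation.
import numpy as np

def affine_factorization(W):
    """W: 2F x P matrix stacking x- and y-coordinates of P points in F frames."""
    W_centered = W - W.mean(axis=1, keepdims=True)   # subtract per-row centroid
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    # Keep the three dominant singular values (rank-3 structure of affine projection).
    M = U[:, :3] * np.sqrt(s[:3])                    # motion (camera) part
    S = np.sqrt(s[:3])[:, None] * Vt[:3, :]          # 3D shape, up to affine ambiguity
    return M, S

# Synthetic check: project a random 3D shape with random affine cameras.
rng = np.random.default_rng(0)
S_true = rng.normal(size=(3, 20))                    # 20 points
A = rng.normal(size=(2 * 5, 3))                      # 5 affine cameras
W = A @ S_true
M, S = affine_factorization(W)
print(np.allclose(M @ S, W - W.mean(axis=1, keepdims=True)))  # True (exact rank-3 fit)
```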

A New Calibration of 3D Point Cloud using 3D Skeleton (3D 스켈레톤을 이용한 3D 포인트 클라우드의 캘리브레이션)

  • Park, Byung-Seo;Kang, Ji-Won;Lee, Sol;Park, Jung-Tak;Choi, Jang-Hwan;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.26 no.3
    • /
    • pp.247-257
    • /
    • 2021
  • This paper proposes a new technique for calibrating a multi-view RGB-D camera using a 3D (three-dimensional) skeleton. Calibrating a multi-view camera requires consistent feature points, and accurate feature points are needed to obtain a high-accuracy calibration result. We use the human skeleton, which can be easily obtained with state-of-the-art pose estimation algorithms, as the set of feature points for calibrating the multi-view cameras. We propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D skeleton obtained through the pose estimation algorithm as feature points. Since the human body information captured by each camera may be incomplete, the skeleton predicted from the acquired image information may also be incomplete. After efficiently integrating a large number of incomplete skeletons into one skeleton, the multi-view cameras can be calibrated by using the integrated skeleton to obtain a camera transformation matrix. To increase the accuracy of the calibration, multiple skeletons are used for optimization through temporal iterations. We demonstrate through experiments that a multi-view camera setup can be calibrated using a large number of incomplete skeletons.
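
The core geometric step implied in the entry above, aligning the joints seen by one camera to those seen by a reference camera to obtain a camera transformation matrix, can be sketched with a standard Kabsch/Procrustes fit. The joint data and the SVD-based solver below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed formulation): given corresponding 3D skeleton joints
# seen by a reference camera and another camera, estimate the rigid transform
# (R, t) aligning the second camera to the first with a Kabsch/Procrustes fit.
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t with dst ~ R @ src + t; src, dst are Nx3 joint sets."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical joint positions (e.g., from a pose-estimation network) in two views.
rng = np.random.default_rng(1)
joints_cam2 = rng.normal(size=(15, 3))
R_true = np.linalg.qr(rng.normal(size=(3, 3)))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1
joints_cam1 = joints_cam2 @ R_true.T + np.array([0.3, -0.1, 1.2])
R, t = rigid_transform(joints_cam2, joints_cam1)
print(np.allclose(joints_cam2 @ R.T + t, joints_cam1))   # True
```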

Stereo Vision Based 3D Input Device (스테레오 비전을 기반으로 한 3차원 입력 장치)

  • Yoon, Sang-Min;Kim, Ig-Jae;Ahn, Sang-Chul;Ko, Han-Seok;Kim, Hyoung-Gon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.4
    • /
    • pp.429-441
    • /
    • 2002
  • This paper concerns extracting 3D motion information from a 3D input device in real time, focused on enabling effective human-computer interaction. In particular, we develop a novel algorithm for extracting 6-degrees-of-freedom motion information from a 3D input device by employing the epipolar geometry of a stereo camera together with color, motion, and structure information, without requiring a camera calibration object. To extract 3D motion, we first determine the epipolar geometry of the stereo camera by computing the perspective projection matrix and the perspective distortion matrix. We then apply the proposed Motion Adaptive Weighted Unmatched Pixel Count algorithm, which performs color transformation, unmatched pixel counting, discrete Kalman filtering, and principal component analysis. The extracted 3D motion information can be applied to controlling virtual objects or aiding a navigation device that controls the user's viewpoint in a virtual reality setting. Since the stereo vision-based 3D input device is wireless, it provides users with a more natural and efficient interface, effectively realizing a feeling of immersion.
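
As a rough illustration of the stereo geometry involved, the sketch below triangulates a single tracked point from two perspective projection matrices using linear (DLT) triangulation. The intrinsics, baseline, and point values are made up, and this is not the paper's Motion Adaptive Weighted Unmatched Pixel Count algorithm.

```python
# Minimal sketch (illustrative): linear (DLT) triangulation of a tracked device
# point from a calibrated stereo pair. P1 and P2 are assumed 3x4 perspective
# projection matrices.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from pixel coordinates x1, x2 (each length-2)."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                          # dehomogenize

# Hypothetical calibrated stereo rig: reference camera plus one offset 0.1 m along x.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.], [0.]])])
X_true = np.array([0.2, -0.05, 1.5, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))               # ~ [0.2, -0.05, 1.5]
```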

A Study on Object Tracking for Autonomous Mobile Robot using Vision Information (비젼 정보를 이용한 이동 자율로봇의 물체 추적에 관한 연구)

  • Kang, Jin-Gu;Lee, Jang-Myung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.2
    • /
    • pp.235-242
    • /
    • 2008
  • An autonomous mobile robot is a very useful system for achieving various tasks in dangerous environments, because it outperforms a fixed-base manipulator in terms of both operational workspace size and efficiency. A method is proposed for estimating the position of an object in the Cartesian coordinate system based on the geometrical relationship between the real object and the image captured by a 2-DOF active camera mounted on the mobile robot. Based on this position estimate, a method is presented for determining an optimal path for the autonomous mobile robot from its current position to the estimated object position using homogeneous matrices. Finally, the joint parameters corresponding to the desired displacement are calculated so that the object can be captured through the control of the mobile robot. The effectiveness of the proposed method is demonstrated by simulation and real experiments using the autonomous mobile robot.

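The use of homogeneous matrices to map an object position from the active camera frame into the robot's world frame, as described in the entry above, can be sketched as follows. The frame layout, angles, and offsets are hypothetical; the point is only how the transformations compose.

```python
# Minimal sketch (hypothetical frames and values): chaining homogeneous
# transformation matrices to express an object position, measured in a 2-DOF
# pan/tilt camera frame, in the mobile robot's world frame.
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def rot_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[0, 0], T[0, 2], T[2, 0], T[2, 2] = c, s, -s, c
    return T

def trans(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# world <- robot base <- camera mount <- pan <- tilt
T_world_robot = trans(1.0, 2.0, 0.0) @ rot_z(np.deg2rad(30))   # robot pose in the world
T_robot_cam   = trans(0.1, 0.0, 0.5)                           # camera mount on the robot
T_pan_tilt    = rot_z(np.deg2rad(15)) @ rot_y(np.deg2rad(-10)) # current pan/tilt angles

p_cam = np.array([0.0, 0.0, 2.0, 1.0])      # object 2 m in front of the camera (homogeneous)
p_world = T_world_robot @ T_robot_cam @ T_pan_tilt @ p_cam
print(p_world[:3])                          # Cartesian goal used for path planning
```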

Illumination Estimation Based on Nonnegative Matrix Factorization with Dominant Chromaticity Analysis (주색도 분석을 적용한 비음수 행렬 분해 기반의 광원 추정)

  • Lee, Ji-Heon;Kim, Dae-Chul;Ha, Yeong-Ho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.8
    • /
    • pp.89-96
    • /
    • 2015
  • The human visual system has chromatic adaptation and can determine the color of an object regardless of the illumination, whereas a digital camera records illumination and reflectance together, so the color appearance of a scene varies under different illumination. NMFsc (nonnegative matrix factorization with sparseness constraints) was recently introduced to estimate original object color by using sparseness constraints: a low sparseness constraint is used to estimate the illumination and a high sparseness constraint is used to estimate the reflectance. However, NMFsc suffers from illumination estimation errors for images with a large uniform area, which dominates the chromaticity. To overcome this defect of NMFsc, illumination estimation via nonnegative matrix factorization with dominant chromaticity analysis is proposed. First, the image is converted to a chromaticity color space and analyzed with a chromaticity histogram, which segments the original image into regions of similar chromaticity. The segmented region with the lowest standard deviation is determined to be the dominant chromaticity region, and the dominant chromaticity is then removed from the original image. Illumination estimation using nonnegative matrix factorization is then performed on the image without the dominant chromaticity. To evaluate the proposed method, experimental results are analyzed by average angular error on a real-world dataset; the proposed method achieves better illuminant estimation, with an average angular error of 5.5, than the previous method, with an average angular error of 5.7.
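
A rough sketch of the overall flow described above (remove a dominant-chromaticity region, then factorize the remaining pixels to estimate the illuminant) is given below. It uses plain scikit-learn NMF as a stand-in, since it does not implement Hoyer-style sparseness constraints, and the placeholder image, mask threshold, and basis-selection heuristic are all assumptions.

```python
# Minimal sketch (a stand-in, not the paper's NMFsc): mask out a crude
# "dominant chromaticity" region, then factorize the remaining pixels with
# plain NMF and take one basis vector as a rough illuminant estimate.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
image = rng.uniform(0.0, 1.0, size=(120, 160, 3))           # placeholder RGB image
chroma = image / image.sum(axis=2, keepdims=True)           # r, g, b chromaticity

# Crude dominant-chromaticity mask: drop pixels closest to the mean chromaticity.
mean_chroma = chroma.reshape(-1, 3).mean(axis=0)
dist = np.linalg.norm(chroma - mean_chroma, axis=2)
mask = dist > np.percentile(dist, 30)                       # keep the 70% least uniform pixels

V = image[mask].T                                           # 3 x N nonnegative data matrix
model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
W = model.fit_transform(V)                                  # 3 x 2 basis (columns)
# Heuristic: the denser (less sparse) basis column is taken as the illuminant estimate.
sparseness = np.abs(W).max(axis=0) / (np.abs(W).sum(axis=0) + 1e-12)
illuminant = W[:, np.argmin(sparseness)]
illuminant /= illuminant.sum()
print(illuminant)                                           # estimated illuminant chromaticity
```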

Traffic Information Extraction Using Image Processing Techniques (영상처리 기술을 이용한 교통 정보 추출)

  • Kim Joon-Cheol;Lee Joon-Whan
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.2 no.1 s.2
    • /
    • pp.75-84
    • /
    • 2003
  • Current techniques for road-traffic monitoring rely on sensors which have limited capabilities and are costly and disruptive to install. The use of video cameras coupled with computer vision techniques offers an attractive alternative to current sensors, and video-based traffic monitoring systems are now considered key components of advanced traffic management systems. In this paper, we propose a new method for extracting traffic information using a video camera. The proposed method uses an adaptive background-updating scheme to reduce the false alarm rate caused by various kinds of image noise. The proposed extraction method also calculates the occupancy ratio of vehicles passing through a predefined detection area, defined as the length of the profile occupied by cars relative to that of the overall detection area. This ratio is then used to define eight different traffic states and to interpret the state of the vehicle flow. The proposed method is verified by an experiment using CCTV traffic data from an urban area.

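The two mechanisms described in the entry above, adaptive background updating and an occupancy ratio over a detection area quantized into eight traffic states, can be sketched as follows. The learning rate, threshold, and toy frames are assumptions, not values from the paper.

```python
# Minimal sketch (assumed parameters): running-average background update and an
# occupancy ratio over a predefined detection profile, quantized into traffic states.
import numpy as np

ALPHA = 0.02                 # background learning rate (assumed)
DIFF_THRESH = 25.0           # foreground threshold on gray-level difference (assumed)

def update_background(background, frame, alpha=ALPHA):
    """Adaptive background: B <- (1 - alpha) * B + alpha * F."""
    return (1.0 - alpha) * background + alpha * frame

def occupancy_ratio(frame, background, detection_rows):
    """Fraction of the detection profile covered by foreground (vehicle) pixels."""
    diff = np.abs(frame[detection_rows] - background[detection_rows])
    profile = (diff > DIFF_THRESH).any(axis=0)     # 1D profile along the detection area
    return profile.mean()

def traffic_state(ratio, n_states=8):
    """Quantize the occupancy ratio into one of n_states traffic states."""
    return min(int(ratio * n_states), n_states - 1)

# Toy frames: empty road, then a bright "vehicle" entering the detection area.
background = np.full((240, 320), 90.0)
frame = background.copy()
frame[100:140, 50:150] = 200.0                     # vehicle blob
rows = slice(100, 140)                             # predefined detection area
r = occupancy_ratio(frame, background, rows)
print(r, traffic_state(r))                         # occupancy ratio and traffic state
background = update_background(background, frame)
```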

New Method for Vehicle Detection Using Hough Transform (HOUGH 변환을 이용한 차량 검지 기술 개발을 위한 모형)

  • Kim, Dae-Hyon
    • Journal of Korean Society of Transportation
    • /
    • v.17 no.1
    • /
    • pp.105-112
    • /
    • 1999
  • Image processing techniques have been used as an efficient means to collect traffic information on the road, such as vehicle counts, speed, queues, congestion and incidents. Most of the current image-processing methods used to detect vehicles are based on point processing, dealing with the local gray level of each pixel in a small window. However, these methods have some drawbacks: first, detection is restricted by image quality, and second, they cannot deal with occlusion and perspective projection problems. In this research, a new method is proposed that can deal with occlusion and perspective problems. It extracts spatial information, such as the positions and relationships of vehicles in 3-dimensional space, in addition to detecting vehicles in the image. The main algorithm used in this research is based on an extension of the Hough Transform. The Hough Transform, extended here to estimate the parameters of vertices and directed edges analytically in Hough space, is a valuable method for the 3-dimensional analysis of static scenes, motion detection, and the estimation of viewing parameters.

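For reference, the sketch below shows the standard rho-theta Hough transform for straight lines on a toy edge image; the paper's contribution is an extension of this idea to vertices and directed edges, which is not reproduced here.

```python
# Minimal sketch (textbook version, not the paper's extension): a rho-theta
# Hough accumulator for straight edges in a binary edge image.
import numpy as np

def hough_lines(edge_img, n_theta=180, n_rho=200):
    """Accumulate votes for lines rho = x*cos(theta) + y*sin(theta)."""
    h, w = edge_img.shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        r = x * np.cos(thetas) + y * np.sin(thetas)
        r_idx = np.round((r + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[r_idx, np.arange(n_theta)] += 1        # one vote per (rho, theta) cell
    return acc, rhos, thetas

# Toy edge image containing one horizontal edge (e.g., a vehicle's lower boundary).
edges = np.zeros((100, 100), dtype=np.uint8)
edges[60, 10:90] = 1
acc, rhos, thetas = hough_lines(edges)
r_i, t_i = np.unravel_index(acc.argmax(), acc.shape)
print(rhos[r_i], np.degrees(thetas[t_i]))          # ~60 and 90 degrees: the line y = 60
```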

Real-Virtual Fusion Hologram Generation System using RGB-Depth Camera (RGB-Depth 카메라를 이용한 현실-가상 융합 홀로그램 생성 시스템)

  • Song, Joongseok;Park, Jungsik;Park, Hanhoon;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.19 no.6
    • /
    • pp.866-876
    • /
    • 2014
  • Generating a digital hologram of video content that includes computer graphics (CG) requires natural fusion of real and virtual 3D information. In this paper, we propose a system that can fuse real and virtual 3D information naturally and rapidly generate a digital hologram of the fused result using a multiple-GPU-based computer-generated hologram (CGH) computing part. The system calculates the camera projection matrix of an RGB-Depth camera and estimates the 3D information of the virtual object. The 3D information of the virtual object obtained from the projection matrix and that of the real space are transmitted to a Z-buffer, which fuses the 3D information naturally. The fused result in the Z-buffer is transmitted to the multiple-GPU-based CGH computing part, where the digital hologram of the fused result is calculated quickly. In experiments, the 3D information of the virtual object produced by the proposed system has a mean relative error (MRE) of about 0.5138% with respect to the real 3D information, i.e., an accuracy of about 99%. In addition, we verify that the proposed system can quickly generate the digital hologram of the fused result by using the multiple-GPU-based CGH calculation.
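
The projection and Z-buffer fusion steps described above can be sketched on toy data as follows. The intrinsics, the planar virtual object, and the constant real depth map are assumptions; the point is only the per-pixel depth test that keeps the nearer of the real and virtual surfaces.

```python
# Minimal sketch (toy data, not the paper's pipeline): project virtual 3D points
# with an assumed camera matrix and fuse them with a real depth map in a Z-buffer.
import numpy as np

K = np.array([[500.0,   0.0, 160.0],
              [  0.0, 500.0, 120.0],
              [  0.0,   0.0,   1.0]])        # assumed RGB-D intrinsics
H, W = 240, 320

real_depth = np.full((H, W), 2.0)            # real scene: a wall 2 m away
zbuffer = real_depth.copy()
label = np.zeros((H, W), dtype=np.uint8)     # 0 = real surface, 1 = virtual surface

# Virtual object: a small planar patch 1.5 m in front of the camera.
xs, ys = np.meshgrid(np.linspace(-0.2, 0.2, 80), np.linspace(-0.2, 0.2, 80))
virtual_pts = np.stack([xs.ravel(), ys.ravel(), np.full(xs.size, 1.5)], axis=1)

proj = virtual_pts @ K.T                     # pinhole projection
u = np.round(proj[:, 0] / proj[:, 2]).astype(int)
v = np.round(proj[:, 1] / proj[:, 2]).astype(int)
z = virtual_pts[:, 2]

inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
for ui, vi, zi in zip(u[inside], v[inside], z[inside]):
    if zi < zbuffer[vi, ui]:                 # Z-test: keep the nearer surface
        zbuffer[vi, ui] = zi
        label[vi, ui] = 1

print(zbuffer.min(), (label == 1).sum())     # fused depth now contains the virtual patch
```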

Video see-through HMD based Hand Interface for Augmented Reality (Video see-through HMD 기반 증강현실을 위한 손 인터페이스)

  • Ha, Tae-Jin;Woo, Woon-Tack
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회:학술대회논문집)
    • /
    • 2006.02a
    • /
    • pp.169-174
    • /
    • 2006
  • In this paper, we propose a hand interface for augmented reality based on a video see-through HMD. As the input device of a wearable computer, images are captured from a USB camera attached to the video see-through HMD; in the HSV color space, the object inside a search window is separated, using double thresholding, into an object region containing the hand and arm. The hand and the arm are then separated using a distance transform matrix, and the fingertip coordinates are detected by extracting the outer vertices of a convex polygon (convex hull). The application "AR-Memo" built on this interface augments a virtual pen at the fingertip so that memos can be written in the real world, and the memos can be viewed on the palm while the user is moving. By using the augmented-reality-based hand interface, users can provide input intuitively even while on the move. In addition, because no physical device or marker is attached to the hand, it is a natural interface. Combined with a wearable computer, this system is expected to provide users with a convenient interface.

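The processing chain described in the entry above (HSV double thresholding, distance transform, convex hull, fingertip extraction) can be sketched with OpenCV as follows. The skin-color thresholds, the synthetic test frame, and the farthest-hull-vertex heuristic for the fingertip are assumptions, not the authors' parameters.

```python
# Minimal sketch (assumed thresholds, not the authors' implementation): HSV skin
# segmentation, a distance transform to find the palm center, and a convex hull
# whose farthest vertex from the palm is taken as the fingertip.
import cv2
import numpy as np

def fingertip(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Double threshold in HSV (assumed skin-color range).
    mask = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)            # largest blob = hand/arm region
    # Palm center: maximum of the distance transform inside the mask.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)
    _, _, _, palm = cv2.minMaxLoc(dist)                   # (x, y) of the palm center
    # Fingertip: convex-hull vertex farthest from the palm center.
    hull = cv2.convexHull(hand).reshape(-1, 2)
    d = np.linalg.norm(hull - np.array(palm), axis=1)
    return tuple(hull[np.argmax(d)])

# Toy frame: a skin-colored rectangle with a protruding "finger".
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[120:200, 100:200] = (80, 120, 180)                  # palm (BGR, roughly skin-like)
frame[60:120, 140:160] = (80, 120, 180)                   # finger
print(fingertip(frame))                                   # near the top of the finger
```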