• Title/Summary/Keyword: Camera Matrix


Video Augmentation of Virtual Object by Uncalibrated 3D Reconstruction from Video Frames (비디오 영상에서의 비보정 3차원 좌표 복원을 통한 가상 객체의 비디오 합성)

  • Park Jong-Seung; Sung Mee-Young
    • Journal of Korea Multimedia Society, v.9 no.4, pp.421-433, 2006
  • This paper proposes a method to insert virtual objects into a real video stream based on feature tracking and camera pose estimation from a set of single-camera video frames. To insert or modify 3D shapes in target video frames, the transformation from the 3D objects to their projections onto the video frames must be determined. It is shown that, without a camera calibration process, 3D reconstruction is possible using multiple images from a single camera under fixed internal camera parameters. The proposed approach is based on a simplification of the intrinsic camera matrix and the use of projective geometry. The method is particularly useful for augmented reality applications that insert or modify models in a real video stream. The proposed method uses a linear parameter estimation approach for the auto-calibration step, which enhances stability and reduces execution time. Several experimental results are presented on real-world video streams, demonstrating the usefulness of the method for augmented reality applications. (A projection sketch follows this entry.)

  • PDF
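
The simplification of the intrinsic camera matrix that the paper relies on is commonly taken to mean zero skew, square pixels, and a principal point at the image centre, leaving a single unknown focal length. A minimal sketch of the resulting pinhole projection under those assumptions (the function names and sample values are illustrative, not taken from the paper):

```python
import numpy as np

def simplified_intrinsics(f, width, height):
    """Intrinsic matrix K with zero skew, square pixels, and the principal
    point at the image centre (the assumed simplification)."""
    return np.array([[f,   0.0, width / 2.0],
                     [0.0, f,   height / 2.0],
                     [0.0, 0.0, 1.0]])

def project(K, R, t, X_world):
    """Project 3D points (N x 3) to pixels with the camera matrix P = K [R | t]."""
    P = K @ np.hstack([R, t.reshape(3, 1)])                  # 3 x 4 camera matrix
    X_h = np.hstack([X_world, np.ones((len(X_world), 1))])   # homogeneous coords
    x = (P @ X_h.T).T
    return x[:, :2] / x[:, 2:3]                              # perspective divide

# Example: a 640 x 480 frame, focal length 800 px, camera frame = world frame
K = simplified_intrinsics(800.0, 640, 480)
print(project(K, np.eye(3), np.zeros(3), np.array([[0.1, 0.0, 2.0]])))
```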

Camera Motion and Structure Recovery Using Two-step Sampling (2단계 샘플링을 이용한 카메라 움직임 및 장면 구조 복원)

  • 서정국; 조청운; 홍현기
    • Journal of the Institute of Electronics Engineers of Korea SP, v.40 no.5, pp.347-356, 2003
  • Camera pose and scene geometry estimation from video sequences is widely used in various areas such as image composition. Structure and motion recovery based on auto-calibration makes it possible to insert synthetic 3D objects into real but un-modeled scenes and to create their views from the camera positions. However, most previous methods require a bundle adjustment or non-linear minimization process for more precise results. This paper presents a new auto-calibration algorithm for video sequences based on two steps: the first selects key frames, and the second removes key frames with inaccurate camera matrices based on absolute quadric estimation by LMedS (Least Median of Squares). The experimental results demonstrate that the proposed method can achieve precise camera pose estimation and scene geometry recovery without bundle adjustment. In addition, virtual objects have been inserted into the real images using the recovered camera trajectories.
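
The LMedS criterion used in the second step scores a candidate estimate by the median of its squared residuals rather than their sum, so a minority of badly estimated key frames cannot dominate the fit. A small illustrative sketch of that selection rule (the residual function and the outlier threshold are assumptions, not taken from the paper):

```python
import numpy as np

def lmeds_cost(residuals):
    """LMedS cost: the median of the squared residuals, robust to outliers."""
    return float(np.median(np.asarray(residuals, dtype=float) ** 2))

def pick_best_candidate(candidates, residual_fn):
    """Among candidate estimates (e.g. absolute-quadric fits computed from
    random key-frame subsets), keep the one with the smallest LMedS cost."""
    costs = [lmeds_cost(residual_fn(c)) for c in candidates]
    best = int(np.argmin(costs))
    return candidates[best], costs[best]

def flag_bad_key_frames(residuals, k=2.5):
    """Mark key frames whose residual exceeds k times a robust scale estimate
    (1.4826 * median absolute residual approximates a standard deviation)."""
    r = np.abs(np.asarray(residuals, dtype=float))
    sigma = 1.4826 * np.median(r)
    return r > k * sigma
```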

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee; Yoo, Sae-Woung; Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea, v.31 no.4, pp.207-213, 2018
  • This paper investigates the applicability of the Microsoft Kinect, an RGB-depth camera, to building 3D images and spatial information of a sensed target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters are calculated through a checkerboard experiment, yielding the focal length, principal point, and distortion coefficients. The extrinsic parameters, describing the relationship between the two Kinect cameras, consist of a rotation matrix and a translation vector. The 2D projected images are converted into 3D images, yielding spatial information on the basis of the depth and RGB data. The measurement is verified through comparison with the length and location of the target structure in the 2D images.
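
The conversion from the 2D projection to 3D points described above is essentially a back-projection of every pixel through the calibrated intrinsics using the measured depth, followed by the extrinsic transform between the two cameras. A minimal sketch under assumed parameter names (the actual values come from the checkerboard calibration in the paper):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, metres) to 3D camera-frame points:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)          # H x W x 3 point map

def to_second_camera(points, R, t):
    """Apply the extrinsic rotation matrix R (3 x 3) and translation vector t
    (3,) relating the two Kinect cameras; both are placeholders here."""
    return points @ R.T + t
```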

2D Adjacency Matrix Generation using DCT for UWV Contents (DCT를 통한 UWV 콘텐츠의 2D 인접도 행렬 생성)

  • Xiaorui, Li; Kim, Kyuheon
    • Journal of Broadcast Engineering, v.22 no.3, pp.366-374, 2017
  • As display devices such as TVs and digital signage grow larger, media formats are shifting toward wider views such as UHD, panoramic, and jigsaw-like media. In particular, panoramic and jigsaw-like media are produced by stitching video clips captured by different cameras or devices. However, the stitching process takes a long time and is difficult to apply in real time. This paper therefore proposes deriving a 2D adjacency matrix, which describes the spatial relationships among the video clips, in order to reduce the stitching time. Using the Discrete Cosine Transform (DCT), each frame of the video source is converted from the spatial domain into the frequency domain. Based on these frequency-domain features, the 2D adjacency matrix of the images can be found, so that a spatial map of the images is built efficiently. The paper thus presents a new method of generating a 2D adjacency matrix using the DCT for producing panoramic and jigsaw-like media from various individual video clips.
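
One plausible reading of the DCT-based comparison is to keep the low-frequency DCT coefficients of each clip's frames as a compact signature and to mark the best-matching pairs as adjacent. A rough sketch along those lines, using SciPy's DCT and a simple correlation score (the exact features and matching rule of the paper are not reproduced here):

```python
import numpy as np
from scipy.fft import dctn

def dct_signature(frame, keep=16):
    """2D DCT of a grayscale frame, keeping only the low-frequency block."""
    coeffs = dctn(frame.astype(float), norm='ortho')
    return coeffs[:keep, :keep].ravel()

def adjacency_matrix(frames):
    """Pairwise similarity of DCT signatures; larger entries suggest clips
    that are spatially adjacent in the panoramic / jigsaw layout."""
    sigs = [dct_signature(f) for f in frames]
    n = len(sigs)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = np.corrcoef(sigs[i], sigs[j])[0, 1]
    return A
```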

Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object- (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.; Choi, D. Y.; Hwang, H.
    • Journal of Biosystems Engineering, v.26 no.1, pp.21-28, 2001
  • Major deficiencies of current automation schemes, including various robots for bioproduction, are the lack of task adaptability and real-time processing, low performance on diverse tasks, lack of robustness of task results, high system cost, and loss of operator confidence. This paper proposes a scheme that addresses these limitations of conventional computer-controlled automatic systems. The proposed scheme is man-machine hybrid automation via tele-operation, which can handle various bioproduction processes, and it comprises two categories: efficient task sharing between the operator and the CCM (computer-controlled machine), and an efficient interface between the operator and the CCM. To realize the proposed concept, the task of object identification and extraction of the 3D coordinates of an object was selected. 3D coordinate information was obtained from camera calibration, using the camera as a measurement device. Two stereo images were obtained by moving a camera a fixed distance horizontally, normal to the focal axis, and acquiring an image at each location. The transformation matrix for camera calibration was obtained via a least-squares approach using six known pairs of corresponding points in the 2D image and 3D world space. 3D world coordinates were then obtained from the image pixel coordinates in both camera images using the calibrated transformation matrix. As the interface between the operator and the CCM, a touch-pad screen mounted on the monitor and a remote image-capture system were used. The operator indicates an object by touching its captured image on the touch-pad screen; a local image-processing area of fixed size is then specified around the touch, and image processing is performed within that area to extract the desired features of the object. MS Windows-based interface software was developed using Visual C++ 6.0, with four modules: remote image acquisition, task command, local image processing, and 3D coordinate extraction. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments through the selected sample tasks. (A calibration sketch follows this entry.)

  • PDF
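
The calibration step above, a least-squares fit of a transformation matrix from six known 2D-3D point pairs, is essentially the classical Direct Linear Transform, and the two-view 3D recovery is a linear triangulation. A compact sketch of both steps (variable names are illustrative; the paper's exact formulation may differ):

```python
import numpy as np

def estimate_projection_matrix(world_pts, image_pts):
    """Least-squares (DLT) estimate of the 3 x 4 matrix P that maps homogeneous
    world points to image points; needs at least 6 correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)                # null-space vector, up to scale

def triangulate(P1, P2, x1, x2):
    """Recover a 3D world point from its pixel coordinates (u, v) in the two
    calibrated views -- the 'two stereo images' of the abstract."""
    A = np.vstack([x1[0]*P1[2] - P1[0], x1[1]*P1[2] - P1[1],
                   x2[0]*P2[2] - P2[0], x2[1]*P2[2] - P2[1]])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```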

Fundamental Matrix Estimation and Key Frame Selection for Full 3D Reconstruction Under Circular Motion (회전 영상에서 기본 행렬 추정 및 키 프레임 선택을 이용한 전방향 3차원 영상 재구성)

  • Kim, Sang-Hoon; Seo, Yung-Ho; Kim, Tae-Eun; Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.46 no.2, pp.10-23, 2009
  • Fundamental matrix estimation and key-frame selection are among the most important techniques for full 3D reconstruction of objects from turntable sequences. This paper proposes a new algorithm that estimates a robust fundamental matrix for camera calibration from uncalibrated images taken under turntable motion. Single-axis turntable motion can be described in terms of its fixed entities, and this yields new algorithms for computing the fundamental matrix. From the projective properties of the conics and the fundamental matrix, the Euclidean 3D coordinates of a point are obtained from the geometric locus of the image-point trajectories. Experimental results on real and virtual image sequences demonstrate good object reconstructions.
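
For reference, the fundamental matrix F relates corresponding points in two views by the epipolar constraint x2^T F x1 = 0, and the standard linear way to estimate it is the eight-point algorithm. The sketch below shows only this generic linear step, without the normalization and the single-axis constraints the paper exploits, so it is a baseline rather than the proposed method:

```python
import numpy as np

def fundamental_matrix_8pt(x1, x2):
    """Linear estimate of F from N >= 8 correspondences (pixel coordinates),
    followed by rank-2 enforcement; satisfies x2_h^T F x1_h ~ 0."""
    ones = np.ones(len(x1))
    A = np.column_stack([
        x2[:, 0]*x1[:, 0], x2[:, 0]*x1[:, 1], x2[:, 0],
        x2[:, 1]*x1[:, 0], x2[:, 1]*x1[:, 1], x2[:, 1],
        x1[:, 0],          x1[:, 1],          ones,
    ])
    _, _, vt = np.linalg.svd(A)
    F = vt[-1].reshape(3, 3)
    u, s, v = np.linalg.svd(F)
    s[2] = 0.0                                 # a fundamental matrix has rank 2
    return u @ np.diag(s) @ v
```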

Control of an Omni-directional Mobile Robot Based on Camera Image (카메라 영상기반 전방향 이동 로봇의 제어)

  • Kim, Bong Kyu; Ryoo, Jung Rae
    • Journal of the Korean Institute of Intelligent Systems, v.24 no.1, pp.84-89, 2014
  • In this paper, an image-based visual servo control strategy for tracking a target object is applied to a camera-mounted omni-directional mobile robot. In general, to obtain the target angular velocity of each wheel from the image coordinates of the target object, an image Jacobian matrix is built from a camera model and the mobile robot kinematics. Unlike the conventional image Jacobian approach, a simple rule-based control strategy is proposed that generates the target angular velocities of the wheels from the size of the target object captured in the camera image. The camera image is divided into several regions, and a pre-defined rule corresponding to the region in which the target is located is applied to generate the target angular velocities of the wheels. The proposed algorithm is easy to implement in that no mathematical description of the image Jacobian is required and a small number of rules are sufficient for target tracking. Experimental results are presented along with a description of the overall experimental system.
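
The rule-based strategy can be pictured as a lookup from the image region containing the target, together with its apparent size, to a fixed set of wheel velocities. The following toy sketch illustrates that idea with made-up region labels and velocity values rather than the rules actually used in the paper:

```python
# Hypothetical rule table: (image region, target-size class) -> wheel angular
# velocities (rad/s) for a three-wheeled omni-directional robot.
RULES = {
    ('left',   'small'): ( 2.0, -1.0, -1.0),    # rotate toward the target
    ('center', 'small'): ( 1.5,  1.5, -3.0),    # drive straight forward
    ('right',  'small'): (-1.0,  2.0, -1.0),
    ('center', 'large'): ( 0.0,  0.0,  0.0),    # close enough: stop
}

def classify(cx, area, image_width=640):
    """Map the target centroid x-coordinate and pixel area to a rule key."""
    region = ('left' if cx < image_width / 3
              else 'right' if cx > 2 * image_width / 3 else 'center')
    size = 'large' if area > 20000 else 'small'
    return region, size

def wheel_velocities(cx, area):
    return RULES.get(classify(cx, area), (0.0, 0.0, 0.0))   # default: stop
```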

Camera Model Identification Based on Deep Learning (딥러닝 기반 카메라 모델 판별)

  • Lee, Soo Hyeon; Kim, Dong Hyun; Lee, Hae-Yeoun
    • KIPS Transactions on Software and Data Engineering, v.8 no.10, pp.411-420, 2019
  • Camera model identification has been a subject of steady study in the field of digital forensics. Among increasingly sophisticated crimes, offenses such as illegal filming account for a growing share because they are hard to detect as cameras become smaller. Technology that can identify the camera with which a particular image was taken could therefore serve as evidence when a suspect denies the criminal behavior. This paper proposes a deep learning model to identify the camera model used to acquire an image. The proposed model consists of four convolutional layers and two fully connected layers, and a high-pass filter is used for data pre-processing. To verify the performance of the proposed model, the Dresden Image Database was used, and the dataset was generated by the sequential partition method. The proposed model is compared with existing studies that use a 3-layer model or a model with GLCM features, and it achieves 98% accuracy, comparable to the state of the art.
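
A network of the described shape (four convolutional layers, two fully connected layers, high-pass-filtered input) could be sketched in PyTorch as below; the channel counts, kernel sizes, patch size, and high-pass kernel are assumptions, since the abstract does not specify them:

```python
import torch
import torch.nn as nn

# Assumed 3x3 high-pass kernel used as pre-processing (shape: 1 x 1 x 3 x 3)
HIGH_PASS = torch.tensor([[[[-1., -1., -1.],
                            [-1.,  8., -1.],
                            [-1., -1., -1.]]]]) / 8.0

class CameraModelCNN(nn.Module):
    """Four conv layers + two fully connected layers (sizes are guesses)."""
    def __init__(self, num_models: int, patch: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * (patch // 16) ** 2, 256), nn.ReLU(),
            nn.Linear(256, num_models),
        )

    def forward(self, x):                       # x: (B, 1, patch, patch)
        x = nn.functional.conv2d(x, HIGH_PASS, padding=1)   # high-pass step
        return self.classifier(self.features(x))
```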

Development of the Computer Vision based Continuous 3-D Feature Extraction System via Laser Structured Lighting (레이저 구조광을 이용한 3차원 컴퓨터 시각 형상정보 연속 측정 시스템 개발)

  • Im, D. H.; Hwang, H.
    • Journal of Biosystems Engineering, v.24 no.2, pp.159-166, 1999
  • A system has been developed to continuously extract real 3-D geometric feature information from 2-D images of objects fed randomly on a conveyor. Two sets of structured laser lighting were utilized, and the laser structured-light projection image was acquired by the camera, triggered by the signal of a photo-sensor mounted on the conveyor. A camera calibration matrix was obtained that transforms 2-D image coordinates into 3-D world-space coordinates using six known points. The maximum error after calibration was 1.5 mm within a height range of 103 mm. A correlation equation between the shift of the laser line and the height was generated; height estimated from this correlation showed a maximum error of 0.4 mm within the same height range. Interactive 3-D geometric feature extraction software was developed using Microsoft Visual C++ 4.0 under the Windows environment, and the extracted 3-D geometric feature information was reconstructed into a 3-D surface using MATLAB. (A calibration-curve sketch follows this entry.)

  • PDF
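
The correlation equation between laser-line shift and height amounts to fitting a calibration curve from measured pixel shifts to known heights. A minimal sketch assuming a simple least-squares polynomial fit (the functional form used in the paper is not given in the abstract, so a linear relation is assumed here):

```python
import numpy as np

def fit_shift_to_height(shifts_px, heights_mm, degree=1):
    """Fit the calibration curve height = f(laser-line shift) by least squares
    over samples of known height (degree 1 assumes a linear relation)."""
    return np.polyfit(shifts_px, heights_mm, degree)

def estimate_height(coeffs, shift_px):
    """Evaluate the fitted correlation equation for a measured shift."""
    return np.polyval(coeffs, shift_px)

# Hypothetical calibration samples: (pixel shift, true height in mm)
coeffs = fit_shift_to_height([0, 25, 50, 75, 100], [0, 26, 51, 77, 103])
print(estimate_height(coeffs, 60))   # estimated height for a 60-px shift
```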

Guidance of Mobile Robot for Inspection of Pipe (파이프 내부검사를 위한 이동로봇의 유도방법)

  • 정규원
    • Proceedings of the Korean Society of Machine Tool Engineers Conference, 2002.04a, pp.480-485, 2002
  • The purpose of this paper is the development of a guidance algorithm for a mobile robot used to acquire the position and state of pipe defects such as cracks, damage, and through-holes. The data used by the algorithm are the range data obtained by a range sensor based on optical triangulation. The sensor, which consists of a laser slit beam (LSB) and a CCD camera, measures the 3D profile of the pipe's inner surface. After the range sensor is mounted on the robot, the robot is put into a pipe. While the camera and LSB sensor head are rotated about the robot axis, the laser slit beam is projected onto the inner surface of the pipe and the CCD camera captures the image. From the images, range data are obtained with respect to the sensor coordinate frame through a series of image-processing steps and application of the sensor matrix. After the data are transformed into the robot coordinate frame, the position and orientation of the robot are obtained in order to guide it. In addition, by analyzing the data, the 3D shape of the pipe is constructed and numerical data on the pipe defects can be found. These data will be used for pipe maintenance and service. (A coordinate-transformation sketch follows this entry.)

  • PDF
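
Because the camera/LSB head rotates about the robot's longitudinal axis, each range profile measured in the sensor frame has to be rotated by the current scan angle and offset by the sensor mounting position before it lands in the robot frame. A small sketch of that transformation, with an assumed mounting offset and a rotation about the robot's x axis:

```python
import numpy as np

def rotation_about_axis(theta):
    """Rotation of the sensor head about the robot's longitudinal (x) axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0,  0],
                     [0, c, -s],
                     [0, s,  c]])

def sensor_to_robot(points_sensor, scan_angle, mount_offset=(0.05, 0.0, 0.0)):
    """Transform triangulated range points (N x 3, sensor frame) into the
    robot frame for one scan angle; mount_offset is an assumed placement."""
    R = rotation_about_axis(scan_angle)
    return points_sensor @ R.T + np.asarray(mount_offset)

# Stacking the transformed profiles over a full revolution yields the 3D
# shape of the pipe's inner surface in the robot frame.
```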