• Title/Summary/Keyword: Camera Matrix

A Study on Assembling of Sub Pictures using Approximate Junctions

  • Kurosu, Kenji;Morita, Kiyoshi;Furuya, Tadayoshi;Soeda, Mitsuru
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.06a / pp.284-287 / 1998
  • It is important to develop a method of automatically assembling a set of sub pictures into a mosaic picture, because the view through fiberscopes or microscopes with higher magnifying power is much larger than the field of view captured by a camera. This paper presents a method of assembling sub pictures in which roughly estimated junctions, called approximate junctions, are employed for matching triangles formed by selected junctions in the sub pictures. To overcome difficulties with processing speed and noise corruption, fuzzy rules are applied to obtain fuzzy values for the existence of approximate junctions and a fuzzy similarity for congruent triangle matching. Demonstrations, exemplified by assembling microscopic metal matrix photographs, are given to show the feasibility of this method.

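The congruent-triangle matching step can be sketched with a simple fuzzy membership over side lengths. This is a minimal illustration only: the paper's actual fuzzy rules for junction existence and triangle similarity are not given in the abstract, so the triangular membership function and the `tol` parameter below are assumptions.

```python
import math

def side_lengths(tri):
    """Sorted side lengths of a triangle given as three (x, y) points."""
    a, b, c = tri
    d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    return sorted([d(a, b), d(b, c), d(c, a)])

def fuzzy_similarity(tri1, tri2, tol=5.0):
    """Fuzzy congruence of two triangles: each pair of corresponding
    side lengths contributes a triangular membership value in [0, 1];
    the overall similarity is the minimum (fuzzy AND) over the sides."""
    memberships = [max(0.0, 1.0 - abs(s1 - s2) / tol)
                   for s1, s2 in zip(side_lengths(tri1), side_lengths(tri2))]
    return min(memberships)
```

Triangles that are congruent up to translation and rotation score 1.0; mismatched triangles fall off linearly with the side-length differences.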

Moving Object Detection in Pan-Tilt Camera using Image Alignment (영상 정렬 알고리듬을 이용한 팬틸트 카메라에서 움직이는 물체 탐지 기법)

  • Baek, Young-Min;Choi, Jin-Young
    • Proceedings of the KIEE Conference / 2008.10b / pp.260-261 / 2008
  • Moving object detection is the first stage of most surveillance systems; its output feeds later intelligent stages such as object tracking and object classification. Producing a moving-object region map as precisely as possible, without distorting object contours, is therefore the most important element of object detection. With a fixed camera, a probabilistic background model of the incoming frames can be built, but in settings where the image coordinates change, such as a pan-tilt camera, the background model keeps changing, so a conventional background model cannot be used as-is. For moving object detection on such a dynamic camera, this paper proposes a method that judges camera motion from local features, computes the transformation matrix between consecutive frames, and then detects moving objects through probabilistic background modeling. Experiments on self-recorded moving-camera sequences verify that the algorithm operates very robustly even against dynamic backgrounds.

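The transformation matrix between consecutive frames, computed from matched local features, can be sketched as a homography fit. The Direct Linear Transform below is a standard stand-in (feature detection and matching are omitted, and the abstract does not state the paper's exact estimator):

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H mapping
    src -> dst from >= 4 point correspondences (Nx2 arrays), as the
    null vector of the stacked constraint matrix."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, p):
    """Apply homography H to a 2D point (homogeneous normalization)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

With the homography in hand, the previous frame's background model can be warped into the current frame's coordinates before the probabilistic update.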

Application of Computer Vision System for the Point Position Determination in the Plane (평면상에 있는 점위치 결정을 위한 컴퓨터장 비젼의 응용)

  • 장완식;장종근;유창규
    • Proceedings of the Korean Society of Precision Engineering Conference / 1995.10a / pp.1124-1128 / 1995
  • This paper presents an application of computer vision for determining the position of an unknown point in the plane. The presented control method estimates the six view parameters representing the relationship between the image plane coordinates and the real physical coordinates. The estimation of the six parameters is indispensable for transforming the 2-dimensional camera coordinates into 3-dimensional spatial coordinates. The position of the unknown point is then estimated based on the parameters estimated for each camera. The suitability of this control scheme is demonstrated experimentally by determining the position of an unknown point in the plane.


A study on the rigid body placement task of a robot system based on the computer vision system (컴퓨터 비젼시스템을 이용한 로봇시스템의 강체 배치 실험에 대한 연구)

  • 장완식;유창규;신광수;김호윤
    • Proceedings of the Korean Society of Precision Engineering Conference / 1995.10a / pp.1114-1119 / 1995
  • This paper presents the development of an estimation model and a control method based on computer vision. The proposed control method is accomplished using a sequential estimation scheme that permits placement of the rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed, based on a model that generalizes known 4-axis SCARA robot kinematics to accommodate unknown relative camera position, orientation, etc. Based on the parameters estimated for each camera, the joint angles of the robot are estimated by an iteration method. The method is tested experimentally in two ways: an estimation model test and a three-dimensional rigid body placement task. The results show that the control scheme is precise and robust. This feature can open the door to a range of applications of multi-axis robots, such as assembly and welding.

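The joint-angle iteration mentioned above can be illustrated with a Newton-Raphson solve against the arm's forward kinematics. The two-link planar arm below is a hypothetical stand-in: the paper's 4-axis SCARA kinematics and camera-space objective are not detailed in the abstract.

```python
import math

def fk_two_link(q1, q2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link arm."""
    return (l1 * math.cos(q1) + l2 * math.cos(q1 + q2),
            l1 * math.sin(q1) + l2 * math.sin(q1 + q2))

def ik_two_link(target, l1=1.0, l2=1.0, iters=50):
    """Newton-Raphson iteration for the joint angles (q1, q2) reaching
    `target` = (x, y); a simplified stand-in for the camera-space
    joint-angle iteration described in the abstract."""
    q1, q2 = 0.5, 0.5  # initial guess
    for _ in range(iters):
        x, y = fk_two_link(q1, q2, l1, l2)
        ex, ey = target[0] - x, target[1] - y
        # Jacobian of (x, y) with respect to (q1, q2)
        j11 = -l1 * math.sin(q1) - l2 * math.sin(q1 + q2)
        j12 = -l2 * math.sin(q1 + q2)
        j21 = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
        j22 = l2 * math.cos(q1 + q2)
        det = j11 * j22 - j12 * j21
        if abs(det) < 1e-12:
            break  # singular configuration; stop iterating
        # Solve J [dq1, dq2]^T = [ex, ey]^T by Cramer's rule
        q1 += (ex * j22 - ey * j12) / det
        q2 += (j11 * ey - j21 * ex) / det
    return q1, q2
```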

3D Spatial Info Development using Layered Depth Images (계층적 깊이 영상을 활용한 3차원 공간정보 구현)

  • Song, Sang-Hun;Jo, Myung-Hee
    • Proceedings of the KSRS Conference / 2007.03a / pp.97-102 / 2007
  • Because 3D spatial information offers far greater spatial realism than 2D, interest in it has recently grown in landscape analysis, urban planning, and web-based map services. However, owing to its geometric nature, 3D spatial data is far larger than conventional 2D spatial data, which causes many problems for fast, efficient processing and for producing further content from it. To address these problems, this paper takes 3D terrain generated from a DEM (Digital Elevation Model) acquired by satellite and aerial sensing, together with information obtained through city modeling and texture mapping, places a camera at each required position, and computes a Camera Matrix from each camera position. The camera information obtained this way includes depth information; based on the depth, layered depth images (LDI) are generated through 3D warping, and the layered depth images are then used to implement the 3D spatial information.

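Computing a Camera Matrix from a chosen camera position, and reading off the depth used for the 3D warping, can be sketched as follows. The intrinsics `K` and the look-at construction are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def look_at(cam_pos, target, up=(0.0, 0.0, 1.0)):
    """Rotation whose rows are the camera's right, down, and forward
    axes, for a camera at cam_pos looking toward target."""
    f = np.asarray(target, float) - np.asarray(cam_pos, float)
    f /= np.linalg.norm(f)
    r = np.cross(f, np.asarray(up, float))
    r /= np.linalg.norm(r)
    d = np.cross(f, r)
    return np.vstack([r, d, f])

def camera_matrix(K, cam_pos, target):
    """3x4 projection matrix P = K [R | t] with t = -R @ cam_pos."""
    R = look_at(cam_pos, target)
    t = -R @ np.asarray(cam_pos, float)
    return K @ np.hstack([R, t[:, None]])

def project_with_depth(P, X):
    """Project 3D point X; return the pixel (u, v) and the depth w
    (the quantity a layered depth image stores per pixel)."""
    u, v, w = P @ np.append(np.asarray(X, float), 1.0)
    return (u / w, v / w), w
```

Projecting every visible DEM vertex this way yields per-pixel depths, which is exactly the input the LDI warping step consumes.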

Visual Tracking Control of Aerial Robotic Systems with Adaptive Depth Estimation

  • Metni, Najib;Hamel, Tarek
    • International Journal of Control, Automation, and Systems / v.5 no.1 / pp.51-60 / 2007
  • This paper describes a visual tracking control law of an Unmanned Aerial Vehicle (UAV) for monitoring of structures and maintenance of bridges. It presents a control law based on computer vision for quasi-stationary flights above a planar target. The first part of the UAV's mission is navigation from an initial position to a final position along a desired trajectory in an unknown 3D environment. The proposed method uses the homography matrix computed from the visual information and derives, using backstepping techniques, an adaptive nonlinear tracking control law allowing effective tracking and depth estimation. The depth represents the desired distance separating the camera from the target.
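The homography the control law builds on relates the camera pose to the planar target through the standard plane-induced decomposition, in which the camera-to-plane distance d is exactly the depth the adaptive law estimates:

```latex
% Homography induced by a plane with unit normal n at distance d,
% between two views with relative pose (R, t) and intrinsics K
H \simeq K \left( R + \frac{t\, n^{\top}}{d} \right) K^{-1}
```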

Optical Implementation of Real-Time Two-Dimensional Hopfield Neural Network Model Using Multifocus Hololens (Multifocus Hololens를 이용한 실시간 2차원 Hopfield 신경회로망 모델의 광학적 실험)

  • 박인호;서춘원;이승현;이우상;김은수;양인응
    • Journal of the Korean Institute of Telematics and Electronics / v.26 no.10 / pp.1576-1583 / 1989
  • In this paper, we describe a real-time optical implementation of the Hopfield neural network model for two-dimensional associative memory using a commercial LCTV and a multifocus hololens. For real-time processing capability, we use the LCTV as a memory mask and an input spatial light modulator. The inner product between the input pattern and the memory matrix is computed by the multifocus holographic lens. The output signal is then electrically thresholded and fed back to the system input by a 2-D CCD camera. The good experimental results suggest that the proposed system can be applied to pattern recognition and machine vision in the future.

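The inner-product-plus-threshold recall loop that the hololens and CCD feedback implement optically corresponds to the standard Hopfield update. A minimal digital sketch follows; the pattern sizes and the Hebbian storage rule here are generic illustrations, not the paper's 2-D optical layout:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian memory matrix: sum of outer products of +/-1 patterns
    (rows of `patterns`), with the diagonal zeroed."""
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=10):
    """Synchronous recall: inner product with the memory matrix
    followed by hard thresholding (the roles played optically by the
    multifocus hololens and the electronic threshold stage)."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x
```

Starting from a corrupted input, the iteration settles on the nearest stored pattern, which is the associative-memory behavior demonstrated optically in the paper.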

The Alignment of Measuring Data using the Pattern Matching Method (패턴매칭을 이용한 형상측정 데이터의 결합)

  • 조택동;이호영
    • Proceedings of the Korean Society of Precision Engineering Conference / 2000.11a / pp.307-310 / 2000
  • A method of measuring large objects using pattern matching is discussed in this paper. It is hard and expensive to obtain complete 3D data when the object is large or exceeds the limits of the measuring devices. The large object is therefore divided into several smaller areas, each of which is scanned to obtain data for all the pieces. These data sets are then aligned into complete 3D data using the pattern matching method. The point pattern matching method and a transform matrix algorithm are used for the alignment. A laser slit beam and a CCD camera are used for the experimental measurements. The algorithm is implemented in Visual C++ on Windows 98.

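The rigid alignment of overlapping scans from matched point patterns can be sketched with the SVD-based Kabsch method, a standard choice for this step; the abstract does not name the exact transform-matrix algorithm the authors used.

```python
import numpy as np

def align_rigid(src, dst):
    """Kabsch/SVD estimate of the rotation R and translation t that
    minimize ||R @ src_i + t - dst_i|| over matched point pairs
    (src, dst: Nx2 or Nx3 arrays)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)  # cross-covariance SVD
    D = np.eye(U.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Applying the recovered (R, t) to each partial scan expresses all the pieces in one coordinate frame, producing the merged 3D data set described above.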

Producing a Virtual Object with Realistic Motion for a Mixed Reality Space

  • Daisuke Hirohashi;Tan, Joo-Kooi;Kim, Hyoung-Seop;Seiji Ishikawa
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2001.10a / pp.153.2-153 / 2001
  • A technique is described for producing a virtual object with realistic motion. A 3-D human motion model is obtained by applying a developed motion capturing technique to a real human in motion. The factorization method is a technique for recovering the 3-D shape of a rigid object from a single video image stream without using camera parameters. The technique is extended here to recovering 3-D human motions. The proposed system is composed of three fixed cameras that take video images of a human motion. The three obtained image sequences are analyzed to yield measurement matrices at individual sampling times, and they are merged into a single measurement matrix to which the factorization is applied and the 3-D human motion is recovered ...

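The core of the factorization method can be sketched as a rank-3 SVD of the registered measurement matrix. This follows the affine Tomasi-Kanade formulation; the metric upgrade and the paper's multi-camera merging step are omitted:

```python
import numpy as np

def factorize(W):
    """Rank-3 factorization of a measurement matrix W (2F x P: stacked
    x and y tracks of P feature points over F frames) into a motion
    matrix M (2F x 3) and a shape matrix S (3 x P), in the spirit of
    affine (Tomasi-Kanade) factorization."""
    W = W - W.mean(axis=1, keepdims=True)  # register to the centroid
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Split the top-3 singular values evenly between motion and shape
    M = U[:, :3] * np.sqrt(s[:3])
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S
```

For noise-free affine projections the product M @ S reproduces the centered tracks exactly; with noise, the rank-3 truncation is the least-squares best fit.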

3D Reconstruction using three vanishing points from a single image

  • Yoon, Yong-In;Im, Jang-Hwan;Kim, Dae-Hyun;Park, Jong-Soo
    • Proceedings of the IEEK Conference / 2002.07b / pp.1145-1148 / 2002
  • This paper presents a new method that uses only three vanishing points to compute the dimensions and pose of an object from a single perspective image taken by a camera, addressing the problem of recovering 3D models from the three vanishing points of a box scene. Our approach computes the three vanishing points without information such as the focal length, rotation matrix, and translation, in the case of perspective projection. We assume that the object can be modeled as a linear function of a dimension vector ν. The input to the reconstruction is a set of correspondences between features in the model and features in the image. To minimize over the dimensions of the parameterized models, the reconstruction is posed as an optimization solved by standard nonlinear optimization techniques with a multi-start method, which generates multiple starting points for the optimizer by sampling the parameter space uniformly.

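When the three vanishing points come from the mutually orthogonal edge directions of a box scene, the unknown focal length follows from any orthogonal pair via the constraint (v1 - c)·(v2 - c) + f² = 0. The sketch below assumes square pixels, zero skew, and a known principal point c, which the abstract does not state explicitly:

```python
import math

def focal_from_vps(v1, v2, c):
    """Focal length from two vanishing points v1, v2 of orthogonal
    scene directions, given the principal point c, assuming square
    pixels and zero skew: (v1 - c).(v2 - c) + f^2 = 0."""
    dot = (v1[0] - c[0]) * (v2[0] - c[0]) + (v1[1] - c[1]) * (v2[1] - c[1])
    if dot >= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(-dot)
```

With f recovered, the rotation matrix follows by normalizing the back-projected vanishing directions, which is what makes a calibration-free box reconstruction possible.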