• Title/Summary/Keyword: Cameras

Search Results: 2,250 (processing time: 0.034 seconds)

Measurement of the position and pose of arbitrarily placed polyhedrons (임의로 놓여진 다면체의 위치와 자세측정에 관한 연구)

  • 이상용;한민홍
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1990.10a
    • /
    • pp.613-617
    • /
    • 1990
  • This paper presents a method of calculating the position and orientation of a polyhedron arbitrarily placed in 3-D space using two cameras. We use key features of the object and CAD data to solve the correspondence problem between the two cameras' images.

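
The pose computation above relies on standard two-view geometry. As a minimal sketch of the underlying principle (not the paper's method), the following triangulates a 3-D point from its projections in two calibrated cameras via linear DLT; the projection matrices and the point are illustrative values:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3-D point from its projections x1, x2 in two cameras (DLT)."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two example cameras: identity pose and a 1-unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # → True
```

With features matched in both images, the pose of the polyhedron can then be estimated from several such reconstructed points.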

Locally Initiating Line-Based Object Association in Large Scale Multiple Cameras Environment

  • Cho, Shung-Han;Nam, Yun-Young;Hong, Sang-Jin;Cho, We-Duke
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.4 no.3
    • /
    • pp.358-379
    • /
    • 2010
  • Multiple object association is an important capability in visual surveillance systems with multiple cameras. In this paper, we introduce locally initiating line-based object association with the parallel projection camera model, which is applicable even when no common (ground) plane exists. The parallel projection camera model supports camera movement (i.e., panning, tilting and zooming) through a simple table-based compensation for non-ideal camera parameters. We propose a threshold-distance-based homographic line generation algorithm that accounts for uncertain parameters such as transformation error, object height uncertainty, and synchronization issues between cameras. Thus, the proposed algorithm associates multiple objects on demand in surveillance systems where the camera movement changes dynamically. We verify the proposed method with actual image frames. Finally, we discuss a strategy to improve association performance by exploiting temporal and spatial redundancy.
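
Line-based association builds on transferring image primitives between views. As a generic illustration (the paper's parallel-projection model is not reproduced here), the snippet below transfers a point between two camera views with a planar homography; the matrix H is an arbitrary example:

```python
import numpy as np

def transfer(H, pt):
    """Map a 2-D point to the other view via homography H (homogeneous coords)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])   # a pure-translation homography, for illustration
print(transfer(H, (2.0, 4.0)))  # → [7. 1.]
```

A homographic line can then be formed by transferring two points of an image line and connecting their images in the second view.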

Measurement of Hot Wire-Rod Cross-Section by Vision System (비전시스템에 의한 열간 선재 단면 측정)

  • Park, Joong-Jo;Tak, Young-Bong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.6 no.12
    • /
    • pp.1106-1112
    • /
    • 2000
  • In this paper, we present a vision system that measures the cross-section of a hot wire-rod in a steel plant. We developed a mobile vision system capable of accurate measurement that is robust to vibration and jolts during movement. Our system uses green laser light sources and CCD cameras as sensors: laser sheet beams form a cross-section contour on the surface of the hot wire-rod, and the light reflected from the wire-rod is imaged on the CCD cameras. We use four lasers and four cameras to obtain an image of the complete cross-section contour without occlusion regions. We also perform camera calibration to obtain each camera's physical parameters using a single calibration pattern sheet. In our measuring algorithm, the distorted images from the four cameras are corrected using the camera calibration information and combined to generate an image with the complete cross-section contour of the wire-rod. From this image, the cross-section contour of the wire-rod is extracted by preprocessing and segmentation, and its height, width and area are measured.

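
The final measurement step, extracting height, width, and area from the merged contour, can be sketched as follows. This is an illustrative reduction, not the paper's implementation, and the square contour stands in for a real wire-rod cross-section:

```python
import numpy as np

def measure_contour(pts):
    """Width, height, and enclosed area (shoelace formula) of an ordered contour."""
    w = pts[:, 0].max() - pts[:, 0].min()
    h = pts[:, 1].max() - pts[:, 1].min()
    x, y = pts[:, 0], pts[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return float(w), float(h), float(area)

# A 10 mm x 10 mm square as a stand-in for a measured contour.
square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
print(measure_contour(square))  # → (10.0, 10.0, 100.0)
```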

A Novel Character Segmentation Method for Text Images Captured by Cameras

  • Lue, Hsin-Te;Wen, Ming-Gang;Cheng, Hsu-Yung;Fan, Kuo-Chin;Lin, Chih-Wei;Yu, Chih-Chang
    • ETRI Journal
    • /
    • v.32 no.5
    • /
    • pp.729-739
    • /
    • 2010
  • Due to the rapid development of mobile devices equipped with cameras, instant translation of any text seen in any context is possible: mobile devices can serve as translation tools by recognizing the text in captured scenes. Images captured by cameras contain external or unwanted effects that need not be considered in traditional optical character recognition (OCR). In this paper, we segment a text image captured by a mobile device into individual characters to facilitate OCR kernel processing. Before character segmentation, text detection and text line construction are performed. A novel character segmentation method that integrates touched-character filters is applied to text images captured by cameras. In addition, periphery features are extracted from the segmented images of touched characters and fed as inputs to support vector machines to calculate confidence values. In our experiments, the accuracy rate of the proposed character segmentation system is 94.90%, which demonstrates the effectiveness of the proposed method.
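
A common baseline for the segmentation step, shown here only as a hedged sketch (the paper's full method adds touched-character filters and SVM confidence values), is to split a binary text-line image at empty columns of its vertical projection profile; the tiny three-character image is synthetic:

```python
import numpy as np

def segment_by_projection(img):
    """Return (start, end) column ranges of runs of non-empty columns."""
    profile = img.sum(axis=0) > 0
    segments, start = [], None
    for c, filled in enumerate(profile):
        if filled and start is None:
            start = c                      # a character run begins
        elif not filled and start is not None:
            segments.append((start, c))    # run ended at an empty column
            start = None
    if start is not None:
        segments.append((start, len(profile)))
    return segments

# Synthetic binary text line with three separated "characters".
img = np.zeros((5, 11), dtype=int)
img[:, 1:3] = 1
img[:, 4:7] = 1
img[:, 8:10] = 1
print(segment_by_projection(img))  # → [(1, 3), (4, 7), (8, 10)]
```

Touched characters defeat this baseline, which is exactly the case the paper's filters and periphery features are designed to handle.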

Development of Data Logging Platform of Multiple Commercial Radars for Sensor Fusion With AVM Cameras (AVM 카메라와 융합을 위한 다중 상용 레이더 데이터 획득 플랫폼 개발)

  • Jin, Youngseok;Jeon, Hyeongcheol;Shin, Young-Nam;Hyun, Eugin
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.13 no.4
    • /
    • pp.169-178
    • /
    • 2018
  • Currently, various sensors are used in advanced driver assistance systems. To overcome the limitations of individual sensors, sensor fusion has recently attracted attention in the field of intelligent vehicles, and vision- and radar-based sensor fusion has become a popular approach. The typical method involves a vision sensor that recognizes targets based on ROIs (Regions Of Interest) generated by radar sensors. In particular, because the wide-angle lenses of AVM (Around View Monitor) cameras limit detection performance at near distances and around the edges of the field of view, exact ROI extraction from the radar sensor is essential for high-performance sensor fusion of AVM cameras and radar sensors. To address this problem, we propose a sensor fusion scheme based on commercial radar modules from the vendor Delphi. First, we configured a multiple-radar data logging system together with AVM cameras. We also designed radar post-processing algorithms to extract exact ROIs. Finally, using the developed hardware and software platforms, we verified the post-processing algorithm in indoor and outdoor environments.
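
ROI extraction from a radar detection can be sketched as projecting the detected target into the camera image. The snippet below is a simplified illustration with invented parameters; a real AVM camera needs a fisheye model rather than this pinhole one:

```python
import math

def radar_to_roi(rng_m, azimuth_rad, fx=800.0, cx=640.0,
                 fy=800.0, cy=360.0, cam_height=1.0):
    """Map a radar (range, azimuth) target to a pixel ROI center (pinhole sketch)."""
    # Radar target in vehicle coordinates (x forward, y left, target on ground).
    x = rng_m * math.cos(azimuth_rad)
    y = rng_m * math.sin(azimuth_rad)
    # Pinhole projection: camera at the origin, looking along +x.
    u = cx - fx * y / x
    v = cy + fy * cam_height / x
    return u, v

u, v = radar_to_roi(10.0, 0.0)
print(round(u), round(v))  # → 640 440
```

A box of fixed physical size around this center, scaled by 1/range, then serves as the ROI handed to the vision sensor.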

A 3D Modeling System Using Multiple Stereo Cameras (다중 스테레오 카메라를 이용한 3차원 모델링 시스템)

  • Kim, Han-Sung;Sohn, Kwang-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.1
    • /
    • pp.1-9
    • /
    • 2007
  • In this paper, we propose a new 3D modeling and rendering system using multiple stereo cameras. When target objects are captured, each capturing PC segments the objects and estimates disparity fields, then transmits the segmented masks, disparity fields, and color textures of the objects to a 3D modeling server. The modeling server generates 3D models of the objects from the gathered masks and disparity fields. Finally, the server renders a video at the designated viewpoint using the 3D models and the texture information from the cameras.
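
The disparity fields mentioned above map to depth through the standard stereo relation Z = fB/d. A minimal sketch with illustrative focal length and baseline values:

```python
def disparity_to_depth(d, focal_px=700.0, baseline_m=0.1):
    """Depth Z = f * B / d for focal length f (pixels) and baseline B (meters)."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / d

print(disparity_to_depth(35.0))  # → 2.0
```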

Measurement of 3D Spreader Position Information using the CCD Cameras and a Laser Distance Measuring Unit

  • Lee, Jung-Jae;Nam, Gi-Gun;Lee, Bong-Ki;Lee, Jang-Myung
    • Journal of Navigation and Port Research
    • /
    • v.28 no.4
    • /
    • pp.323-331
    • /
    • 2004
  • This paper introduces a novel approach that provides three-dimensional information about the movement of a spreader by using two CCD cameras and a laser distance measuring unit, in order to realize an ALS (Automatic Landing System) for harbor cranes. So far, 2-D laser scanner sensors or laser distance measuring units have been used as corner detectors for the geometrical matching between the spreader and a container. Such systems provide only two-dimensional information, which is not enough for an accurate and fast ALS. In addition to this deficiency in performance, the price of such a system is too high for adoption in the ALS. To overcome these defects, we propose a novel method to acquire three-dimensional spreader information using two CCD cameras and a laser distance measuring unit. To show the efficiency of the proposed method, real experiments were performed, demonstrating the improvement in distance measurement accuracy obtained by fusing the sensory information of the CCD cameras and the laser distance measuring unit.
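
One simple way to fuse a camera-derived distance with a laser distance reading is inverse-variance weighting. This generic sketch is not necessarily the authors' scheme, and the measurement values are made up:

```python
def fuse(z_cam, var_cam, z_laser, var_laser):
    """Combine two distance estimates, weighting each by 1/variance."""
    w_cam = 1.0 / var_cam
    w_laser = 1.0 / var_laser
    z = (w_cam * z_cam + w_laser * z_laser) / (w_cam + w_laser)
    var = 1.0 / (w_cam + w_laser)   # fused variance is always the smaller
    return z, var

# The more precise laser reading dominates the noisier camera estimate.
z, var = fuse(10.2, 0.04, 10.0, 0.01)
print(round(z, 2), round(var, 3))  # → 10.04 0.008
```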

Repeatability Test for the Asymmetry Measurement of Human Appearance using General-purpose Depth Cameras (범용 깊이 카메라를 이용한 인체 외형 비대칭 측정의 반복성 평가)

  • Jang, Jun-Su
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.30 no.3
    • /
    • pp.184-189
    • /
    • 2016
  • Human appearance analysis is an important part of both Eastern and Western medicine, including Sasang constitutional medicine, rehabilitation medicine, and dental medicine. With the rapid growth of depth camera technology, 3D measurement has become popular in many applications, including the medical area. In this study, the possibility of using depth cameras for asymmetry analysis of human appearance is examined. We describe the development of a 3D measurement system using two Microsoft Kinect depth cameras and fully automated asymmetry analysis algorithms based on computer vision technology. We compare the proposed automated method to the manual method usually used in asymmetry analysis. As a measure of repeatability, the standard deviations of the asymmetry indices are examined over 10 repeated experiments. The results show that the standard deviation of the automated method (1.00 mm for the face, 1.22 mm for the body) is better than that of the manual method (2.06 mm for the face, 3.44 mm for the body) on the same 3D measurements. We conclude that the automated method using depth cameras is applicable to practical asymmetry analysis and contributes to reliable human appearance analysis.
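
An asymmetry index of this kind can be sketched as mirroring left-side landmarks across the body midline and averaging their distance to the matching right-side landmarks. The landmark coordinates below are made up, and the paper's actual algorithm is not reproduced:

```python
import numpy as np

def asymmetry_index(left_pts, right_pts):
    """Mean distance between mirrored left landmarks and right landmarks (mm)."""
    mirrored = left_pts * np.array([-1.0, 1.0, 1.0])  # reflect across midline x = 0
    return float(np.mean(np.linalg.norm(mirrored - right_pts, axis=1)))

# Two illustrative landmark pairs: one perfectly symmetric, one off by 1 mm.
left = np.array([[-5.0, 10.0, 2.0], [-3.0, 8.0, 1.0]])
right = np.array([[5.0, 10.0, 2.0], [4.0, 8.0, 1.0]])
print(asymmetry_index(left, right))  # → 0.5
```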

Development of 3D Stereoscopic Image Generation System Using Real-time Preview Function in 3D Modeling Tools

  • Yun, Chang-Ok;Yun, Tae-Soo;Lee, Dong-Hoon
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.6
    • /
    • pp.746-754
    • /
    • 2008
  • A 3D stereoscopic image is generated by interleaving scenes rendered from two cameras' views in 3D modeling tools, such as Autodesk MAX(R) and Autodesk MAYA(R), using video editing tools. However, the depth of objects in a static scene and the continuous stereo effect under view transformations are not represented naturally. This is because, after choosing an arbitrary convergence angle and the distance between the model and the two cameras, the user must render the views from both cameras. The user therefore has to repeatedly adjust the camera interval and re-render, which takes too much time. In this paper, we propose a 3D stereoscopic image editing system that solves these problems. The system generates the views of the two cameras and confirms the stereo effect in real time within the 3D modeling tool, so the user can intuitively judge the immersion of the 3D stereoscopic image using the real-time preview function.

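
The stereo effect being previewed depends on the interaxial distance between the two virtual cameras. As an illustrative sketch with invented numbers: for a parallel camera pair, a point at depth Z appears with horizontal screen disparity fB/Z:

```python
def screen_disparity(depth, interaxial=0.065, focal_px=1000.0):
    """Horizontal disparity (pixels) of a point at `depth` for a parallel rig."""
    return focal_px * interaxial / depth

print(screen_disparity(6.5))  # → 10.0
```

Previewing this quantity in real time is what lets the user tune the camera interval without repeated re-rendering.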

Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects (체적형 객체 촬영을 위한 RGB-D 카메라 기반의 포인트 클라우드 정합 알고리즘)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.24 no.5
    • /
    • pp.765-774
    • /
    • 2019
  • In this paper, we propose a point cloud registration algorithm for multiple RGB-D cameras. In general, computer vision is concerned with the problem of precisely estimating camera position. Existing 3D model generation methods require a large number of cameras or expensive 3D cameras, and the conventional method of obtaining camera extrinsic parameters from two-dimensional images has a large estimation error. In this paper, we propose a method that uses depth images and function optimization to obtain coordinate transformation parameters with error within a valid range, generating an omnidirectional three-dimensional model from eight low-cost RGB-D cameras.
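
Estimating the coordinate transformation between two RGB-D views from matched 3-D points is classically done with the Kabsch/SVD rigid alignment. The sketch below shows this generic building block on toy data, not the paper's depth-image optimization:

```python
import numpy as np

def rigid_transform(src, dst):
    """Find rotation R and translation t minimizing ||R @ src + t - dst||."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy data: a random cloud transformed by a known rotation and translation.
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [0.5, -0.2, 1.0]))  # → True True
```

In practice the correspondences come from depth-image features, and a refinement such as ICP or the paper's function optimization reduces the residual error.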