• Title/Summary/Keyword: Camera View


Design and Verification of 3D Digital Image Correlation Systems for Measurement of Large Object Displacement Using Stereo Camera (대면적 대상물 변위계측을 위한 스테레오 카메라 3차원 DIC 시스템 기초설계 및 검증에 관한 연구)

  • Ko, Younghun;Seo, Seunghwan;Lim, Hyunsung;Jin, Tailie;Chung, Moonkyung
    • Explosives and Blasting
    • /
    • v.38 no.2
    • /
    • pp.1-12
    • /
    • 2020
  • Digital Image Correlation (DIC) is a well-established method for measuring displacements, strains, and shapes of engineering objects. Stereo-camera 3D Digital Image Correlation (3D-DIC) systems have been developed to meet the specific measurement requirements of the materials and mechanical industries. Although the DIC method allows the field of view (FOV) to be scaled, the dimensions of geotechnical structures are in many cases too large to be measured with DIC based on a single camera pair; this is the main obstacle to applying 3D DIC to the measurement of geotechnical structures. In this paper, we present stereo vision conditions for a 3D-DIC system that can measure ground objects over a large FOV (30×20 m) with high precision (0.5 mm in z-displacement).
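
The claimed precision can be sanity-checked against the standard stereo triangulation error model. Below is a minimal back-of-envelope sketch in Python; the stand-off distance, baseline, focal length, and matching precision are illustrative assumptions, not values from the paper.

```python
# Back-of-envelope check of stereo depth resolution:
#   delta_z ≈ z^2 * delta_d / (f * B)
# where f is the focal length in pixels, B the baseline in meters, and
# delta_d the disparity matching precision in pixels.
def depth_resolution(z_m, baseline_m, focal_px, disp_precision_px):
    """Approximate depth uncertainty of stereo triangulation at range z."""
    return (z_m ** 2) * disp_precision_px / (focal_px * baseline_m)

# Illustrative numbers (assumptions, not the paper's configuration):
# 30 m stand-off, 3 m baseline, 10000 px focal length, 0.02 px DIC
# sub-pixel matching precision -> about 0.6 mm depth resolution.
print(depth_resolution(30.0, 3.0, 10000.0, 0.02))  # 0.0006 (meters)
```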

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.6
    • /
    • pp.28-34
    • /
    • 2020
  • This paper proposes an approach to fusing two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the LIDAR and the RGB camera provides the fusion result, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire depth and image data, respectively. The LIDAR sensor provides the distance between the sensor and objects in the nearby scene, while the RGB camera provides a 2-dimensional color image. Fusing the 2D image with depth information enables better performance in applications such as object detection and tracking; driver assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, a depthmap corresponding to the RGB image must first be generated. Experimental results are provided to validate the proposed approach.
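
For readers who want the mechanics of such a registration, here is a minimal sketch of projecting a 2D LIDAR scan into a camera image to splat a sparse depthmap. The extrinsics R, t and intrinsics K are assumed to come from an offline calibration; none of this code is from the paper.

```python
import numpy as np

def lidar_to_depthmap(angles, ranges, R, t, K, img_shape):
    """Splat a 2D LIDAR scan (polar form) into the camera image plane.
    R (3x3), t (3,) map LIDAR coordinates into the camera frame; K is
    the 3x3 camera intrinsic matrix."""
    # Polar scan -> 3D points in the LIDAR frame (scan plane z = 0).
    pts = np.stack([ranges * np.cos(angles),
                    ranges * np.sin(angles),
                    np.zeros_like(ranges)], axis=1)
    cam = pts @ R.T + t                      # LIDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]                 # keep points in front of camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]              # pinhole projection
    h, w = img_shape
    depth = np.full((h, w), np.inf)          # inf marks "no LIDAR return"
    for (u, v), z in zip(uv, cam[:, 2]):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < h and 0 <= ui < w:
            depth[vi, ui] = min(depth[vi, ui], z)   # keep the nearest hit
    return depth
```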

Displacement Measurement of Structure using Multi-View Camera & Photogrammetry (사진측량법과 다시점 카메라를 이용한 구조물의 변위계측)

  • Yeo, Jeong-Hyeon;Yoon, In-Mo;Jeong, Young-Kee
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • v.9 no.1
    • /
    • pp.1141-1144
    • /
    • 2005
  • In this paper, we propose an automatic displacement-measurement system for testing the stability of structures. Photogrammetry is a method that recovers accurate 3D data from 2D images taken at different locations, and it is well suited to analyzing and measuring structural displacement. The system consists of camera calibration, feature extraction using coded targets and retro-reflective circles, 3D reconstruction, and accuracy analysis. The multi-view cameras used to measure the displacement are placed at different locations. Camera calibration computes the trifocal tensor from corresponding points in the images, from which a Euclidean camera is recovered. In the feature-extraction step in particular, we use a sub-pixel method and pattern recognition to measure 3D locations accurately. A scale bar is used as the reference for recovering accurate world-coordinate values.
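
As an illustration of the sub-pixel feature-extraction step, the sketch below locates bright retro-reflective targets by intensity-weighted centroiding, one common way to obtain sub-pixel centers; the threshold value is an assumption, and the paper's actual detector may differ.

```python
import cv2
import numpy as np

def target_centers(gray, thresh=200):
    """Sub-pixel centers of bright retro-reflective circular targets:
    threshold, label connected components, then take intensity-weighted
    centroids so the grayscale profile refines each center estimate."""
    _, bw = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    n, labels, _, _ = cv2.connectedComponentsWithStats(bw)
    centers = []
    for i in range(1, n):                       # label 0 is the background
        ys, xs = np.nonzero(labels == i)
        w = gray[ys, xs].astype(np.float64)     # intensity weights
        centers.append((float((xs * w).sum() / w.sum()),
                        float((ys * w).sum() / w.sum())))
    return centers
```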


A Study on Estimating Skill of Smartphone Camera Position using Essential Matrix (필수 행렬을 이용한 카메라 이동 위치 추정 기술 연구)

  • Oh, Jongtaek;Kim, Hogyeom
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.22 no.6
    • /
    • pp.143-148
    • /
    • 2022
  • Estimating a camera's position by analyzing images taken continuously with the monocular camera of a smartphone or mobile robot is very important for metaverse, mobile-robot, and user-location services. So far, PnP-related techniques have been applied to compute the position. In this paper, the camera's direction of motion is obtained from the essential matrix of the epipolar geometry between successive images, and the camera's successive positions are then computed through geometric equations. The accuracy of this new estimation method was verified through simulation. The method is completely different from existing approaches and can be applied even when only a single matching feature point is shared across two or more images.
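
The essential-matrix step the abstract describes maps directly onto standard OpenCV calls. The sketch below recovers the relative rotation and the translation direction between two frames; note that the essential matrix fixes translation only up to scale, which is why the paper needs additional geometric equations to chain positions (not reproduced here).

```python
import cv2
import numpy as np

def relative_motion(pts1, pts2, K):
    """Relative rotation and translation *direction* between two frames.
    pts1, pts2: Nx2 float arrays of matched feature points; K: 3x3
    intrinsics. t comes back as a unit direction, not a metric offset."""
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```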

A Multi-view Super-Resolution Method with Joint-optimization of Image Fusion and Blind Deblurring

  • Fan, Jun;Wu, Yue;Zeng, Xiangrong;Huangpeng, Qizi;Liu, Yan;Long, Xin;Zhou, Jinglun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.5
    • /
    • pp.2366-2395
    • /
    • 2018
  • Multi-view super-resolution (MVSR) refers to the process of reconstructing a high-resolution (HR) image from a set of low-resolution (LR) images captured from different viewpoints, typically by different cameras; these multi-view images are usually obtained with a camera array. In our previous work [1], we super-resolved multi-view LR images via image fusion (IF) and blind deblurring (BD). In this paper, we present a new MVSR method that realizes IF and BD jointly through an integrated energy-function optimization. First, we reformulate the MVSR problem as a multi-channel blind deblurring (MCBD) problem, which is easier to solve. Then the depth map of the desired HR image is calculated. Finally, we solve the MCBD problem, in which the optimization subproblems with respect to the desired HR image and the unknown blur are efficiently addressed by the alternating direction method of multipliers (ADMM). Experiments on the Multi-view Image Database of the University of Tsukuba and on images captured by our own camera-array system demonstrate the effectiveness of the proposed method.
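
To make the ADMM step concrete, here is a hedged sketch of ADMM for the non-blind multi-channel deblurring subproblem under a circular-convolution model, with a simple L1 sparsity term standing in for the paper's actual regularizer; the paper's joint energy and its blur-kernel update are more involved and are not reproduced.

```python
import numpy as np

# Sketch: min_x (1/2) * sum_k ||h_k * x - y_k||^2 + lam * ||z||_1 with
# the split z = x. Circular convolution makes the x-update a pointwise
# division in the Fourier domain.
def admm_mc_deblur(ys, kernels, lam=0.01, rho=1.0, iters=30):
    shape = ys[0].shape
    Hs = [np.fft.fft2(h, s=shape) for h in kernels]      # kernel spectra
    num = sum(np.conj(H) * np.fft.fft2(y) for H, y in zip(Hs, ys))
    den = sum(np.abs(H) ** 2 for H in Hs) + rho
    z = np.zeros(shape)
    u = np.zeros(shape)
    for _ in range(iters):
        # x-step: quadratic solve, diagonal in the Fourier domain
        x = np.real(np.fft.ifft2((num + rho * np.fft.fft2(z - u)) / den))
        # z-step: soft-thresholding (proximal operator of the L1 term)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update
        u = u + x - z
    return x
```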

Making of View Finder for Drone Photography (드론 촬영을 위한 뷰파인더 제작)

  • Park, Sung-Dae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.22 no.12
    • /
    • pp.1645-1652
    • /
    • 2018
  • The drone, first developed for military purposes, has expanded into various civil areas as the technology has matured. Among drones developed for these diverse purposes, a photography drone carries a camera and is actively used in producing a variety of image content, beyond filming and broadcasting: it can capture immediate, dynamic images that were difficult to obtain with conventional photography. This study built a viewfinder that helps a drone operator control the drone while directly viewing the subject through the drone camera. The viewfinder is a glasses-type device, produced by 3D-printing a housing modeled in 3D MAX and fitting an ultra-small LCD monitor. It makes it possible to fly the drone safely and to frame the subject accurately.

Design Android-based image processing system using the Around-View (후방 카메라와 USB 장치 기반의 영상처리를 이용한 Around-View 시스템 개발)

  • Kim, Gyu-Hyun;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.10a
    • /
    • pp.465-468
    • /
    • 2014
  • Around-View monitoring is an image processing product sold on the market that increases driver comfort: its cameras help prevent accidents caused by driver error or limited visibility while driving or parking. However, such systems have not spread widely among drivers because of three problems: first, the equipment is expensive; second, the development environment is difficult; and third, the installation process is inconvenient. Solving even one of these problems would make the system accessible to users at a more affordable price. In this paper, an AVM (Around-View Monitoring) system is proposed that addresses two of the three problems above: expensive equipment and inconvenient installation. The cost problem is solved by using a low-cost USB device and a rear camera, and the system was designed so that installation is straightforward, reducing the price paid by consumers.
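
For a flavor of what a minimal rear-camera AVM pipeline looks like, the sketch below warps a USB rear-camera feed to a top-down view with a homography. The four ground-plane correspondences are placeholders; a real installation measures them once during setup.

```python
import cv2
import numpy as np

# Four ground-plane points in the rear-camera image and their targets
# in the bird's-eye view (placeholder values).
src = np.float32([[220, 300], [420, 300], [640, 470], [0, 470]])
dst = np.float32([[160, 0], [480, 0], [480, 480], [160, 480]])
H = cv2.getPerspectiveTransform(src, dst)

cap = cv2.VideoCapture(0)                    # low-cost USB rear camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    topdown = cv2.warpPerspective(frame, H, (640, 480))
    cv2.imshow("around-view (rear)", topdown)
    if cv2.waitKey(1) == 27:                 # Esc quits
        break
cap.release()
cv2.destroyAllWindows()
```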


Virtual Control of Optical Axis of the 3DTV Camera for Reducing Visual Fatigue in Stereoscopic 3DTV

  • Park, Jong-Il;Um, Gi-Mun;Ahn, Chung-Hyun;Ahn, Chie-Teuk
    • ETRI Journal
    • /
    • v.26 no.6
    • /
    • pp.597-604
    • /
    • 2004
  • In stereoscopic television, there is a trade-off between visual comfort and 3-dimensional (3D) impact with respect to the baseline-stretch of a 3DTV camera. The baseline-stretch must be adjusted to an appropriate distance, depending on the contents of a scene, to obtain subjectively optimal image quality. However, it is very hard to obtain a small baseline-stretch with commercially available broadcast-quality cameras, whose lens and CCD modules are large. To overcome this limitation, we freely control the baseline-stretch of a stereoscopic camera by synthesizing virtual views at the desired interval between the two cameras. The proposed technique is based on stereo matching and view synthesis: we first obtain a dense disparity map using hierarchical stereo matching with edge-adaptive multiple shifted windows, and we then synthesize the virtual views from the disparity map. Simulation results with various stereoscopic images demonstrate the effectiveness of the proposed technique.
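
The view-synthesis step can be sketched as a forward warp driven by the disparity map: alpha = 0 reproduces the left view, alpha = 1 approximates the right view, and intermediate values emulate a shorter baseline. Hole filling and occlusion ordering, which a production system needs, are deliberately omitted here.

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Forward-warp the left view by a fraction of the disparity.
    left: HxW (or HxWx3) image; disparity: HxW per-pixel disparities
    in pixels; 0 <= alpha <= 1 selects the virtual camera position."""
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    xs = np.arange(w)
    for y in range(h):
        x_new = np.round(xs - alpha * disparity[y]).astype(int)
        valid = (x_new >= 0) & (x_new < w)
        out[y, x_new[valid]] = left[y, xs[valid]]
    return out
```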


Depth Generation Method Using Multiple Color and Depth Cameras (다시점 카메라와 깊이 카메라를 이용한 3차원 장면의 깊이 정보 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.3
    • /
    • pp.13-18
    • /
    • 2011
  • In this paper, we describe capturing, postprocessing, and depth-generation methods using multiple color cameras and depth cameras. Although a time-of-flight (TOF) depth camera measures the scene's depth in real time, its output depth images suffer from noise and lens distortion, and their correlation with the multi-view color images is low. It is therefore essential to correct the depth images before using them to generate the depth information of the scene. Stereo matching based on the disparity information from the depth cameras showed better performance than the previous method; moreover, we obtained accurate depth information even in occluded or textureless regions, which are weaknesses of stereo matching.
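
The postprocessing the abstract calls essential might look like the sketch below: undistort the TOF depth image with calibrated lens parameters, then median-filter it to suppress shot noise while keeping depth edges. The calibration values are placeholders, and the authors' actual correction pipeline may differ.

```python
import cv2
import numpy as np

# Placeholder calibration for the TOF camera lens (assumed values).
K = np.array([[570.0, 0.0, 320.0],
              [0.0, 570.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.10, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

def clean_depth(depth_mm):
    """Undistort a raw TOF depth image, then median-filter it: the
    median suppresses TOF shot noise without smearing depth edges
    the way a Gaussian blur would."""
    d = depth_mm.astype(np.float32)
    und = cv2.undistort(d, K, dist)
    return cv2.medianBlur(und, 5)   # ksize 5 is valid for float32 input
```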

Video Augmentation by Image-based Rendering

  • Seo, Yong-Duek;Kim, Seung-Jin;Sang, Hong-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1998.06b
    • /
    • pp.147-153
    • /
    • 1998
  • This paper presents a method for video augmentation using image interpolation. In computer graphics or augmented reality, 3D information about a model object is normally needed to generate 2D views of the model, which are then inserted into or overlaid on environment views or real video frames. Our method requires no three-dimensional model: instead, images of the model object at a few locations are used to render views according to the motion of the video camera, which is computed by an SFM algorithm from point matches under a weak-perspective (scaled-orthographic) projection model. A linear view-interpolation algorithm, rather than 3D ray tracing, therefore generates views of the model at viewpoints different from the model views. So that the novel views agree with the camera motion, the camera coordinate system is embedded into the model coordinate system at initialization time, on the basis of 3D information recovered from the video images and model views, respectively. During the sequence, motion parameters from the video frames are used to compute interpolation parameters, and rendered model views are overlaid on the corresponding video frames. Experimental results for real video frames and model views are given, and the limitations of the method and subjects for future research are discussed.
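
Under a weak-perspective camera, linear view interpolation is literal: intermediate image positions of matched points are convex combinations of their positions in the two model views, after which a piecewise-affine warp can render the in-between image. A minimal sketch of the point step:

```python
import numpy as np

def interpolate_points(pts_a, pts_b, alpha):
    """Matched Nx2 point sets from two model views; 0 <= alpha <= 1.
    Under scaled-orthographic projection, the in-between view's point
    positions are exactly this convex combination."""
    return (1.0 - alpha) * np.asarray(pts_a) + alpha * np.asarray(pts_b)

# Usage: pts_mid = interpolate_points(pts_view1, pts_view2, 0.5)
```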
