• Title/Summary/Keyword: Aerial CCD Image

Automatic Building Extraction Using LIDAR and Aerial Image (LIDAR 데이터와 수치항공사진을 이용한 건물 자동추출)

  • Jeong, Jae-Wook;Jang, Hwi-Jeong;Kim, Yu-Seok;Cho, Woo-Sug
    • Journal of Korean Society for Geospatial Information Science, v.13 no.3 s.33, pp.59-67, 2005
  • Building information is a primary source for many applications such as mapping, telecommunication, car navigation, and virtual city modeling. While aerial CCD images captured by a passive sensor (digital camera) provide highly accurate horizontal positioning, they are difficult to process automatically because of inherent properties such as perspective projection and occlusion. The LIDAR system, on the other hand, rapidly and accurately provides 3D information about each surface in the form of irregularly distributed point clouds, but, in contrast to optical images, semantic information such as building boundaries and object segmentation is much harder to obtain from it. Photogrammetry and LIDAR thus each have major advantages and drawbacks for reconstructing the earth's surface. The purpose of this investigation is to obtain spatial information on 3D buildings automatically by fusing LIDAR data with aerial CCD images. The experimental results show that most complex buildings are extracted efficiently by the proposed method and indicate that fusing LIDAR data and aerial CCD images improves the feasibility of automatic building detection and extraction.
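
The fusion step above hinges on registering the irregularly distributed LIDAR points to the aerial CCD image. A minimal sketch of that registration under a standard collinearity (pinhole) model follows; the function names, frame conventions, and parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation from the ground frame to the camera frame (angles in radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_lidar_to_image(points_xyz, cam_xyz, angles, focal_mm):
    """Project ground points (N x 3) into image coordinates (mm) via the collinearity equations."""
    R = rotation_matrix(*angles)
    d = (points_xyz - cam_xyz) @ R.T     # point coordinates in the camera frame
    x = -focal_mm * d[:, 0] / d[:, 2]
    y = -focal_mm * d[:, 1] / d[:, 2]
    return np.column_stack([x, y])
```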

Analysis of sideward footprint of Multi-view imagery by sidelap changing (횡중복도 변화에 따른 다각사진 Sideward Footprint 분석)

  • Seo, Sang-Il;Park, Seon-Dong;Kim, Jong-In;Yoon, Jong-Seong
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference, 2010.04a, pp.53-56, 2010
  • An aerial multi-looking camera system is equipped with five separate cameras, which enables it to acquire one vertical image and four oblique images at the same time. This provides more diverse information about a site than conventional vertical aerial photographs. However, a multi-looking aerial camera used for building 3D spatial information employs a medium-format rather than a large-format CCD camera, so when acquiring forward, backward, left, and right imagery of particular objects, the overlap and sidelap of the flight plan must be considered. In particular, the sidelap setting determines whether a sideward-looking camera can capture a given object. In this research we analyzed the sideward footprint and the aerial photographing efficiency of multi-view imagery as the sidelap changes.
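
As a rough illustration of the geometry analyzed above, the sketch below computes the ground footprint of a nadir-looking camera and the flight-line spacing implied by a given sidelap. The sensor size, focal length, and flying height are assumed values for illustration, not figures from the paper.

```python
def ground_footprint(sensor_width_mm, focal_mm, altitude_m):
    """Ground width (m) covered across track by one nadir image."""
    return sensor_width_mm / focal_mm * altitude_m

def flight_line_spacing(footprint_m, sidelap):
    """Spacing (m) between adjacent flight lines for a given sidelap ratio."""
    return footprint_m * (1.0 - sidelap)

# Illustrative medium-format parameters (assumed)
footprint = ground_footprint(sensor_width_mm=49.0, focal_mm=80.0, altitude_m=1500.0)
for sidelap in (0.3, 0.5, 0.7):
    print(f"sidelap {sidelap:.0%}: line spacing {flight_line_spacing(footprint, sidelap):.1f} m")
```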

Land Cover Classification Using Lidar and Optical Image (라이다와 광학영상을 이용한 토지피복분류)

  • Cho Woo-Sug;Chang Hwi-Jung;Kim Yu-Seok
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.24 no.1, pp.139-145, 2006
  • The advantages of lidar data are fast acquisition and processing as well as high accuracy and high point density. However, lidar data alone is difficult to use for classifying the earth's surface because it comes in the form of irregularly distributed point clouds. In this study, we investigated land cover classification using both lidar data and an optical image through a supervised classification method. First, we generated 1 m grid DSM and DEM images and produced an nDSM from the DSM and DEM. In addition, we made an intensity image from the intensity values of the lidar data. For the optical data, the red, green, and blue bands of the CCD image were used, and an NDVI image was generated from the red band of the CCD image and the infrared band of an IKONOS image. The experimental results showed that land cover classification using lidar data and the optical image together reached an accuracy of 74.0%. To improve classification accuracy, we further re-classified shadow areas and water bodies as well as forest and building areas. The final classification accuracy was 81.8%.
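
A minimal sketch of the feature preparation described above, assuming the lidar and CCD rasters have already been interpolated to a common 1 m grid; the file names and the choice of classifier are stand-ins for illustration, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier  # stand-in for the supervised classifier

# Assumed co-registered 1 m rasters of identical shape
dsm = np.load("dsm.npy")                  # digital surface model from lidar
dem = np.load("dem.npy")                  # bare-earth digital elevation model
intensity = np.load("intensity.npy")      # lidar return intensity
red, green, blue = np.load("red.npy"), np.load("green.npy"), np.load("blue.npy")

ndsm = dsm - dem                          # normalized DSM: object heights above ground

# Stack per-pixel features for supervised classification
features = np.stack([ndsm, intensity, red, green, blue], axis=-1).reshape(-1, 5)

labels = np.load("training_labels.npy").reshape(-1)  # per-pixel training classes, 0 = unlabeled
mask = labels > 0
clf = RandomForestClassifier(n_estimators=100).fit(features[mask], labels[mask])
classified = clf.predict(features).reshape(dsm.shape)
```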

3D Reconstruction of Urban Building using Laser range finder and CCD camera

  • Kim B. S.;Park Y. M.;Lee K. H.
    • Proceedings of the KSRS Conference, 2004.10a, pp.128-131, 2004
  • In this paper, we describe 3D urban modeling techniques for a laser scanner and CCD camera system mounted on a vehicle. We use two laser scanners: a horizontal scanner and a vertical scanner. The horizontal scanner acquires the horizontal outline of buildings for localization, while the vertical scan data are the main source of information for constructing each building. We compared edges extracted from an aerial image with the laser scan data; this comparison is able to correct the cumulative error of self-localization. We then remove obstacles from the 3D-reconstructed buildings, and real texture information acquired with the CCD camera is mapped onto the 3D depth information. In this way the urban 3D buildings are reconstructed in a 3D virtual world. These techniques can be applied to city planning, 3D environment games, movie backgrounds, unmanned patrol, etc.

A Study on Visual Servoing Image Information for Stabilization of Line-of-Sight of Unmanned Helicopter (무인헬기의 시선안정화를 위한 시각제어용 영상정보에 관한 연구)

  • 신준영;이현정;이민철
    • Proceedings of the Korean Society of Precision Engineering Conference, 2004.10a, pp.600-603, 2004
  • A UAV (Unmanned Aerial Vehicle) is an aerial vehicle that can accomplish its mission without a pilot. UAVs were initially developed for military purposes such as reconnaissance, but nowadays their usage has expanded into various civil fields such as map drawing, broadcasting, and environmental observation. These UAVs need a vision system to offer accurate information to the operator on the ground and to control the UAV itself. In particular, the LOS (Line-of-Sight) system must precisely control the pointing direction of a system that tracks an object using a vision sensor such as a CCD camera, so it is a key component of the vision system. In this paper, we propose a method to recognize an object in the image acquired from a camera mounted on gimbals and to provide the displacement between the center of the monitor and the center of the object.
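
A minimal sketch of the image information described above: segment the target in each frame and report the displacement of its centroid from the image center, which the gimbal (LOS) controller would drive toward zero. The color-threshold segmentation is an assumed stand-in for the paper's object recognition.

```python
import cv2
import numpy as np

def los_error(frame_bgr, lower_hsv, upper_hsv):
    """Return (dx, dy) pixel offset of the tracked object's centroid from the image center."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)   # segment the target by color (assumed rule)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                                 # target not found in this frame
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    h, w = mask.shape
    return cx - w / 2.0, cy - h / 2.0
```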

Detecting and Restoring the Occlusion Area for Generating the True Orthoimage Using IKONOS Image (IKONOS 정사영상제작을 위한 폐색 영역의 탐지와 복원)

  • Seo Min-Ho;Lee Byoung-Kil;Kim Yong-Il;Han Dong-Yeob
    • Korean Journal of Remote Sensing, v.22 no.2, pp.131-139, 2006
  • IKONOS images have perspective geometry along the CCD sensor line, much like aerial images with central perspective geometry, so occlusions by buildings, terrain, or other objects exist in the image. It is difficult to detect these occlusions with the RPCs (rational polynomial coefficients) used for ortho-rectification. Therefore, in this study, we detected the occlusion areas in IKONOS images using the nominal collection elevation/azimuth angles and restored the hidden areas using other stereo images, from which the true orthoimage could be produced. The algorithm's validity was evaluated through the geometric accuracy of the generated orthoimage.
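
The occlusion geometry used above can be sketched as follows: for an object of height h imaged under the nominal collection elevation angle, the hidden ground distance is roughly h / tan(elevation), displaced along the collection azimuth. The function below is an illustrative sketch; the azimuth sign convention is an assumption, not the paper's definition.

```python
import math

def occlusion_offset(height_m, elevation_deg, azimuth_deg):
    """Ground offset (dE, dN) of the area hidden behind an object of the given height.

    Assumes the azimuth is the direction from the object toward the sensor,
    measured clockwise from north; the occlusion extends the opposite way.
    """
    length = height_m / math.tan(math.radians(elevation_deg))
    az = math.radians(azimuth_deg)
    return -length * math.sin(az), -length * math.cos(az)
```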

Accuracy Analysis of Image Orientation Technique and Direct Georeferencing Technique

  • Bae Sang-Keun;Kim Byung-Guk
    • Spatial Information Research, v.13 no.4 s.35, pp.373-380, 2005
  • Mobile Mapping Systems acquire position and image data effectively using a vehicle equipped with GPS (Global Positioning System), an IMU (Inertial Measurement Unit), and a CCD camera. They are used in various fields such as road facility management and map updating. In conventional photogrammetry, such as aerial photogrammetry, GCPs (Ground Control Points) are needed to compute the exterior orientation elements of an image (the position and attitude of the camera). These points are measured by field survey at the time of data acquisition, which costs considerable time and money, and it is not always possible to establish as many GCPs as desired. Mobile Mapping Systems are more efficient in both time and money because they can obtain the position and attitude of the camera at the time of photographing. That is, the image orientation technique must use GCPs to compute the exterior orientation elements, whereas direct georeferencing can compute them directly from GPS/INS. In this paper, we compare the positional accuracy of ground points determined by the image orientation technique and by the direct georeferencing technique.
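
A rough sketch of the direct georeferencing computation: with the camera position from GPS and the attitude from the IMU/INS, an image ray is rotated into the mapping frame and intersected with the ground. The flat-ground assumption and frame conventions are simplifications for illustration, not the paper's processing chain.

```python
import numpy as np

def direct_georeference(pixel_xy_mm, focal_mm, cam_pos, R_cam_to_world, ground_z=0.0):
    """Intersect the image ray of one pixel with a horizontal ground plane."""
    ray_cam = np.array([pixel_xy_mm[0], pixel_xy_mm[1], -focal_mm])  # ray in the camera frame
    ray_world = R_cam_to_world @ ray_cam                             # attitude from the IMU/INS
    scale = (ground_z - cam_pos[2]) / ray_world[2]                   # scale factor to reach the ground
    return cam_pos + scale * ray_world
```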

Performance Enhancement of the Attitude Estimation using Small Quadrotor by Vision-based Marker Tracking (영상기반 물체추적에 의한 소형 쿼드로터의 자세추정 성능향상)

  • Kang, Seokyong;Choi, Jongwhan;Jin, Taeseok
    • Journal of the Korean Institute of Intelligent Systems, v.25 no.5, pp.444-450, 2015
  • The accuracy of a small, low-cost CCD camera is insufficient on its own to provide data for precisely tracking unmanned aerial vehicles (UAVs). This study shows how a UAV can hover over a human-designated tracking target by using a CCD camera rather than imprecise GPS data. To realize this, a UAV needs to recognize its attitude and position in both known and unknown environments, and its localization should occur naturally. Estimating the attitude through environment recognition is therefore one of the most important problems for UAV hovering. In this paper, we describe a method for estimating the attitude of a UAV using image information of a marker on the floor. The method combines the position observed from GPS sensors with the attitude estimated from images captured by a fixed camera. Using the a priori known path of the UAV in world coordinates and a perspective camera model, we derive geometric constraint equations that relate the image frame coordinates of the floor marker to the UAV's estimated attitude. Since the equations are based on the estimated position, measurement error is always present; the proposed method uses the error between the observed and estimated image coordinates to localize the UAV, within a Kalman filter scheme. Its performance is verified by the image processing results and by experiment.
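
A generic sketch of the Kalman filter cycle of the kind described above, fusing a predicted state with the marker measurement extracted from the image; the state layout, motion model, and noise levels are illustrative assumptions rather than the paper's values.

```python
import numpy as np

# Assumed state: [x, y, vx, vy]; assumed measurement: marker-derived [x, y]
dt = 0.05
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]])
Q = np.eye(4) * 1e-3          # process noise (assumed)
R = np.eye(2) * 1e-2          # measurement noise of the vision system (assumed)

def kalman_step(x, P, z):
    """One predict/update cycle with the vision measurement z = [x, y]."""
    x = F @ x                          # predict state
    P = F @ P @ F.T + Q                # predict covariance
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ (z - H @ x)            # correct with the marker measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P
```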

Development of PKNU3: A small-format, multi-spectral, aerial photographic system

  • Lee Eun-Khung;Choi Chul-Uong;Suh Yong-Cheol
    • Korean Journal of Remote Sensing, v.20 no.5, pp.337-351, 2004
  • Our laboratory developed the compact, multi-spectral, automatic aerial photographic system PKNU3 to allow greater flexibility in geological and environmental data collection. The PKNU3 system consists of a color-infrared spectral camera capable of simultaneous photography in the visible and near-infrared bands; a thermal infrared camera; two computers, each with an 80-gigabyte storage capacity for images; an MPEG board that can compress and transfer data to the computers in real time; and the capability of using a helicopter platform. Before actual aerial photographic testing of the PKNU3, we experimented with each sensor, analyzing the lens distortion, the sensitivity of the CCD in each band, and the thermal response of the thermal infrared sensor. As of September 2004, the PKNU3 development schedule had reached the second phase of testing. In two aerial photographic tests, R, G, B, and IR images were taken simultaneously, and images with an overlap rate of 70% could be obtained by PKNU3 using the automatic 1 s interval recording time. Further study is warranted to enhance the system with the addition of gyroscopic and IMU units. We evaluated the PKNU3 system as a method of environmental remote sensing by comparing the chlorophyll images derived from PKNU3 photographs. This appraisal was supported by an existing study that found a modest improvement in the linear fit between the chlorophyll measurements and the RVI, NDVI, and SAVI images derived from photographs taken by the Duncantech MS 3100, which has the same spectral configuration as the MS 4000 used in the PKNU3 system.
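
The chlorophyll comparison mentioned above rests on standard band-ratio vegetation indices. A short sketch of the RVI, NDVI, and SAVI computations from co-registered red and near-infrared bands follows; the array names and the SAVI soil-adjustment factor are assumptions.

```python
import numpy as np

def vegetation_indices(red, nir, soil_factor=0.5):
    """RVI, NDVI and SAVI from co-registered red and near-infrared arrays."""
    red = red.astype(float)
    nir = nir.astype(float)
    rvi = nir / (red + 1e-6)                                        # ratio vegetation index
    ndvi = (nir - red) / (nir + red + 1e-6)                         # normalized difference
    savi = (1 + soil_factor) * (nir - red) / (nir + red + soil_factor)  # soil-adjusted
    return rvi, ndvi, savi
```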

Accuracy Comparison of Direct Georeferencing and Indirect Georeferencing in the Mobile Mapping System

  • Bae Sang-Keun;Kim Byung-Guk;Sung Jung-Gon
    • Proceedings of the KSRS Conference, 2004.10a, pp.656-660, 2004
  • The Mobile Mapping System is an effective method to acquire position and image data using a vehicle equipped with GPS (Global Positioning System), an IMU (Inertial Measurement Unit), and a CCD camera. It is used in various fields such as road facility management and map updating. In conventional photogrammetry, such as aerial photogrammetry, GCPs (Ground Control Points) are needed to compute the exterior orientation elements of an image (the position and attitude of the camera). These points are measured by field survey at the time of data acquisition, which costs considerable time and money, and it is not always possible to establish as many GCPs as desired. The Mobile Mapping System is more efficient in both time and money because it can obtain the position and attitude of the camera at the time of photographing. That is, indirect georeferencing must use GCPs to compute the exterior orientation elements, whereas direct georeferencing can compute them directly from GPS/INS. In this paper, we compare the positional accuracy of ground points determined by direct georeferencing and by indirect georeferencing.
