• Title/Summary/Keyword: fusion of sensor information

Economical and Industrial Effects of Fusion Technologies of multi-sensor Spatial Imagery (멀티센서 공간영상정보 통합처리기술의 경제적·산업적 효과분석)

  • Chang, Eun-Mi; Yoon, Min-Kyung
    • 한국공간정보시스템학회:학술대회논문집 / 2007.06a / pp.147-155 / 2007
  • This study is a follow-up evaluation of the value of multi-sensor technology development. It derives the marketability of the developed technologies, their potential for wider dissemination, and their relationship to private-sector technology roadmaps, combining qualitative judgments from in-depth interviews with practitioners who have extensive industry networks and project experience with quantitative judgments from market surveys. The direct industrial ripple effects, compiled around cases applied at a pilot level in 2006, can be summarized as follows. First, specialized companies view the market for multi-sensor applications from the standpoint of their own strengths. Unlike vector-based GIS, where the full software line from server to web and mobile versions is produced, software for satellite imagery and multi-sensor data is developed mainly for server and web platforms because of the limits imposed by large data volumes; it has not expanded to mobile, and expansion is sought mainly through linkage with car navigation systems. Second, as for the market analysis and strategy of large companies that are not specialized firms, few companies are directly related to multi-sensor technology, but most expect that multi-sensor fusion can contribute as an element technology when developing U-city projects, with an anticipated market exceeding 100 billion KRW. Third, a multi-dimensional strategy was presented that identifies factors to eliminate, reduce, increase, or newly create for the commercialization and industrialization of the developed multi-sensor technologies; however, because the organizations that would execute the strategy are scattered, the conclusion is that institutional support must proceed alongside technology development. Fourth, a modified version of the corporate valuation method produced by KVA was applied to the four developed technologies: the satellite imagery and DEM technology scored above 87% and was rated the most marketable and applicable, while the hyperspectral imagery technology scored just over 70%. Because Korea possesses multi-purpose practical satellites, has accumulated substantial outputs from the national NGIS program, and has prepared the environment for introducing LiDAR technology, the application and industrialization of multi-sensor spatial imagery integration technology is expected to become visible earlier than in other countries. However, limitations remain: multi-sensor data are not easy to procure, there are legal and institutional constraints, and market maturity is below expectations.

Orthophoto and DEM Generation Using Low Specification UAV Images from Different Altitudes (고도가 다른 저사양 UAV 영상을 이용한 정사영상 및 DEM 제작)

  • Lee, Ki Rim; Lee, Won Hee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.5 / pp.535-544 / 2016
  • Existing methods of orthophoto production using expensive manned aircraft are effective over large areas, but they are poorly suited to rapid updates that follow changes in geographic features. UAV (Unmanned Aerial Vehicle) technology has advanced rapidly, and UAVs carrying sensors such as GPS and IMU are now considered a viable substitute for expensive traditional aerial photogrammetry. Orthophoto production with UAVs has the advantage that spatial information for a small area can be updated quickly. However, existing studies generate orthophotos from images taken at a single altitude, which leads to redundant data and difficulties in data renewal. In this study, we targeted a small sloped area and, using a low-end UAV, generated an orthophoto and DEM (Digital Elevation Model) from images acquired at different altitudes. The RMSE at the check points was σh = 0.023 m in the horizontal and σv = 0.049 m in the vertical. The maximum and mean RMSE satisfy the working rule agreement for aerial photogrammetry of the National Geographic Information Institute (NGII) for a 1/500-scale digital map. This paper shows that a high-accuracy orthophoto can be generated from images of different altitudes, reducing data redundancy and providing spatial information quickly.
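
The accuracy assessment described in this entry reduces to computing horizontal and vertical RMSE between surveyed check-point coordinates and the coordinates read from the generated orthophoto and DEM. A minimal sketch of that computation is given below; the array names and sample values are hypothetical and do not come from the paper.

```python
import numpy as np

# Hypothetical surveyed check points (E, N, H) and the coordinates
# measured on the generated orthophoto/DEM for the same points (metres).
surveyed = np.array([[352100.012, 4156200.455, 78.310],
                     [352155.870, 4156240.118, 79.025],
                     [352210.334, 4156190.662, 77.840]])
measured = np.array([[352100.031, 4156200.441, 78.355],
                     [352155.852, 4156240.137, 78.981],
                     [352210.351, 4156190.648, 77.896]])

diff = measured - surveyed
# Horizontal RMSE combines the E and N residuals; vertical RMSE uses H only.
rmse_h = np.sqrt(np.mean(np.sum(diff[:, :2] ** 2, axis=1)))
rmse_v = np.sqrt(np.mean(diff[:, 2] ** 2))
print(f"RMSE horizontal: {rmse_h:.3f} m, vertical: {rmse_v:.3f} m")
```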

Object-based Building Change Detection Using Azimuth and Elevation Angles of Sun and Platform in the Multi-sensor Images (태양과 플랫폼의 방위각 및 고도각을 이용한 이종 센서 영상에서의 객체기반 건물 변화탐지)

  • Jung, Sejung; Park, Jueon; Lee, Won Hee; Han, Youkyung
    • Korean Journal of Remote Sensing / v.36 no.5_2 / pp.989-1006 / 2020
  • Building change monitoring based on building detection is one of the most important applications of high-resolution multi-temporal images, such as those from CAS500-1 and 2, which are scheduled to be launched, for monitoring artificial structures. However, the varied shapes and sizes of buildings on the Earth's surface, as well as the shadows and trees around them, make accurate building detection difficult, and relief displacement caused by the azimuth and elevation angles of the platform produces many false detections. In this study, object-based building detection was performed using the azimuth angle of the Sun and the corresponding main direction of shadows to improve building change detection; the platform's azimuth and elevation angles were then used to detect changed buildings. Object-based segmentation was applied to the high-resolution imagery, shadow objects were classified through shadow intensity, and feature information such as rectangular fit, Gray-Level Co-occurrence Matrix (GLCM) homogeneity, and the area of each object was computed to detect building candidates. The final buildings were then detected using the direction and distance relationship between the center of each candidate building object and its shadow according to the azimuth angle of the Sun. Three methods were proposed for detecting changes between the building objects detected in each image: simple overlay between objects, comparison of object sizes according to the elevation angle of the platform, and consideration of the direction between objects according to the azimuth angle of the platform. A residential area was selected as the study area, using high-resolution imagery acquired from KOMPSAT-3 and an Unmanned Aerial Vehicle (UAV). The F1-scores of building detection using feature information alone were 0.488 for the KOMPSAT-3 image and 0.696 for the UAV image, whereas the F1-scores considering shadows were 0.876 and 0.867, respectively, indicating higher accuracy when shadows are considered. Among the three proposed change detection methods, the one considering the direction between objects according to the platform azimuth angle achieved the highest F1-score, 0.891.
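
The shadow-based verification step in this entry compares the direction from a candidate building object to its shadow object with the direction implied by the Sun azimuth (shadows fall on the side opposite the Sun). A simplified sketch of that check follows; the centroid values, tolerance, and function name are illustrative assumptions, not the authors' implementation.

```python
import math

def shadow_direction_consistent(building_centroid, shadow_centroid,
                                sun_azimuth_deg, tolerance_deg=30.0):
    """Check whether a shadow object lies roughly opposite the Sun azimuth
    from a candidate building object (azimuths clockwise from north)."""
    dx = shadow_centroid[0] - building_centroid[0]   # easting difference
    dy = shadow_centroid[1] - building_centroid[1]   # northing difference
    # Azimuth of the building-to-shadow vector, clockwise from north.
    observed = math.degrees(math.atan2(dx, dy)) % 360.0
    expected = (sun_azimuth_deg + 180.0) % 360.0     # shadows point away from the Sun
    diff = abs(observed - expected) % 360.0
    diff = min(diff, 360.0 - diff)                   # wrap to [0, 180]
    return diff <= tolerance_deg

# Illustrative use: Sun azimuth 135° (south-east), so shadows fall to the north-west.
print(shadow_direction_consistent((100.0, 200.0), (93.0, 207.0), 135.0))  # True
```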

Image Restoration and Segmentation for PAN-sharpened High Multispectral Imagery (PAN-SHARPENED 고해상도 다중 분광 자료의 영상 복원과 분할)

  • Lee, Sanghoon
    • Korean Journal of Remote Sensing / v.33 no.6_1 / pp.1003-1017 / 2017
  • Multispectral image data of high spatial resolution is required to obtain accurate information on the ground surface. Multispectral data have lower resolution than panchromatic data, and PAN-sharpening fusion produces multispectral data at the higher resolution of the panchromatic image. Recently, object-based approaches have been applied to high-spatial-resolution data more often than the conventional pixel-based approach. Object-based image analysis requires image segmentation, which produces objects as groups of pixels. Image segmentation can be achieved effectively by merging neighboring regions step by step in a Region Adjacency Graph (RAG). In satellite remote sensing, the operational environment of the satellite sensor degrades the image during acquisition. This degradation increases the variation of pixel values within homogeneous areas and deteriorates the accuracy of image segmentation. An iterative restoration approach that reduces the difference between the values of neighboring pixels in the same area is employed to alleviate this variation. The size of the segmented regions is associated with the quality of the segmentation and is decided by a stopping rule in the merging process. In this study, the image restoration and segmentation were evaluated quantitatively using simulated data and were also applied to three PAN-sharpened high-resolution multispectral images: DubaiSat-2 data with 1 m panchromatic resolution over LA, USA, and KOMPSAT-3 data with 0.7 m panchromatic resolution over Daejeon and Chungcheongnam-do in the Korean peninsula. The experimental results imply that the proposed method can improve analytical accuracy in applications of high-resolution PAN-sharpened multispectral imagery.
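
The segmentation procedure summarized in this entry merges the most similar pair of adjacent regions in a Region Adjacency Graph until a stopping rule halts the process. The following is a minimal sketch of that merging loop over mean region values; the data structures, dissimilarity measure, and threshold are simplifying assumptions rather than the paper's actual algorithm.

```python
def merge_regions(means, sizes, adjacency, max_dissimilarity=5.0):
    """Greedy RAG merging: repeatedly fuse the most similar adjacent pair
    of regions until the smallest dissimilarity exceeds the stopping threshold.

    means     : {region_id: mean pixel value}
    sizes     : {region_id: pixel count}
    adjacency : set of frozensets {a, b} for adjacent regions
    """
    while True:
        # Dissimilarity of adjacent pairs: absolute difference of region means.
        candidates = [(abs(means[a] - means[b]), a, b)
                      for a, b in (tuple(p) for p in adjacency)]
        if not candidates:
            break
        d, a, b = min(candidates)
        if d > max_dissimilarity:          # stopping rule
            break
        # Merge b into a: size-weighted mean, union of neighbours.
        total = sizes[a] + sizes[b]
        means[a] = (means[a] * sizes[a] + means[b] * sizes[b]) / total
        sizes[a] = total
        new_adjacency = set()
        for pair in adjacency:
            if pair == frozenset((a, b)):
                continue
            pair = frozenset(a if r == b else r for r in pair)
            if len(pair) == 2:
                new_adjacency.add(pair)
        adjacency = new_adjacency
        del means[b], sizes[b]
    return means, sizes

# Illustrative use with four regions in a row: 1-2-3-4.
means = {1: 10.0, 2: 11.0, 3: 30.0, 4: 31.5}
sizes = {1: 50, 2: 40, 3: 60, 4: 55}
adjacency = {frozenset((1, 2)), frozenset((2, 3)), frozenset((3, 4))}
print(merge_regions(means, sizes, adjacency))   # regions 1+2 and 3+4 merge
```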

A Study on the Use of Drones for Disaster Damage Investigation in Mountainous Terrain (산악지형에서의 재난피해조사를 위한 드론 맵핑 활용방안 연구)

  • Shin, Dongyoon; Kim, Dajinsol; Kim, Seongsam; Han, Youkyung; Nho, Hyunju
    • Korean Journal of Remote Sensing / v.36 no.5_4 / pp.1209-1220 / 2020
  • In forest areas, installing ground control points (GCPs) and selecting terrain features, which are part of the standard unmanned aerial photogrammetry workflow, is more limited than in urban areas, and beyond-visual-line-of-sight flight over tall forest raises safety problems. To compensate, drones equipped with a real-time kinematic (RTK) sensor that corrects the drone's position in real time, and 3D flight methods that fly based on terrain information, are being developed. This study presents a method for investigating disaster damage with drones in forest areas. Positional accuracy was evaluated for three methods: 1) drone mapping through GCP measurement (normal mapping), 2) drone mapping based on topographic data (3D flight mapping), and 3) drone mapping using an RTK drone (RTK mapping); all showed an accuracy within 2 cm horizontally and within 13 cm vertically. After the accuracy evaluation, the volume of the landslide area was calculated for each method and the values were compared; all were similar. This study confirms the feasibility of 3D flight mapping and RTK mapping in forest areas. In the future, more effective damage investigations can be expected if the three methods are used appropriately according to the conditions of the disaster area.
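
The volume comparison at the end of this entry amounts to measuring how much material a landslide displaced from the drone-derived DEM. One common way to do this is to difference a pre-event and post-event surface and multiply by the cell area; the sketch below follows that assumption, and the function, grid values, and cell size are hypothetical.

```python
import numpy as np

def landslide_volume(dem_before, dem_after, cell_size):
    """Estimate displaced volume as the sum of elevation differences
    times the cell area, split into loss (erosion) and gain (deposition)."""
    diff = dem_after - dem_before                  # metres per cell
    cell_area = cell_size ** 2                     # m^2 per cell
    loss = -diff[diff < 0].sum() * cell_area       # material removed (m^3)
    gain = diff[diff > 0].sum() * cell_area        # material deposited (m^3)
    return loss, gain

# Illustrative 3x3 DEMs with 0.5 m cells; values are made up.
before = np.array([[10.0, 10.2, 10.4],
                   [10.1, 10.3, 10.5],
                   [10.2, 10.4, 10.6]])
after  = np.array([[ 9.6, 10.0, 10.4],
                   [ 9.8, 10.1, 10.5],
                   [10.2, 10.5, 10.8]])
loss, gain = landslide_volume(before, after, cell_size=0.5)
print(f"erosion: {loss:.2f} m^3, deposition: {gain:.2f} m^3")
```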

Classification of Multi-temporal SAR Data by Using Data Transform Based Features and Multiple Classifiers (자료변환 기반 특징과 다중 분류자를 이용한 다중시기 SAR자료의 분류)

  • Yoo, Hee Young; Park, No-Wook; Hong, Sukyoung; Lee, Kyungdo; Kim, Yeseul
    • Korean Journal of Remote Sensing / v.31 no.3 / pp.205-214 / 2015
  • In this study, a novel land-cover classification framework for multi-temporal SAR data is presented that combines multiple features extracted through data transforms with multiple classifiers. First, data transforms using principal component analysis (PCA) and the 3D wavelet transform are applied to the multi-temporal SAR dataset to extract new features different from the original dataset. Three classifiers, the maximum likelihood classifier (MLC), a neural network (NN), and a support vector machine (SVM), are then applied to three datasets (the two sets of transform-based features and the original backscattering coefficients), generating diverse preliminary classification results. These results are combined via a majority voting rule to produce the final classification. In an experiment with a multi-temporal ENVISAT ASAR dataset, each preliminary result showed very different classification accuracy depending on the feature and classifier used. The final result, combining the nine preliminary classifications, showed the best accuracy because each preliminary result provided complementary information on land cover. The improvement in accuracy was attributed mainly to the diversity obtained by combining not only different transform-based features but also different classifiers. The presented framework can therefore be applied effectively to the classification of multi-temporal SAR data and extended to multi-sensor remote sensing data fusion.
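
The combination step in this entry is a per-pixel majority vote over the nine preliminary classification maps (three classifiers times three feature sets). A minimal sketch of that vote is given below; the label arrays are hypothetical stand-ins for the preliminary maps.

```python
import numpy as np

# Hypothetical preliminary classification maps (integer class labels per pixel)
# from 3 classifiers x 3 feature sets, stacked along the first axis.
preliminary = np.stack([
    np.array([[1, 2], [3, 1]]),
    np.array([[1, 2], [2, 1]]),
    np.array([[2, 2], [3, 3]]),
    np.array([[1, 3], [3, 1]]),
    np.array([[1, 2], [3, 1]]),
    np.array([[1, 2], [3, 2]]),
    np.array([[2, 2], [3, 1]]),
    np.array([[1, 1], [3, 1]]),
    np.array([[1, 2], [3, 1]]),
])

def majority_vote(maps):
    """Per-pixel majority vote over a stack of label maps (ties resolve to
    the smallest label via np.argmax)."""
    n_maps, rows, cols = maps.shape
    flat = maps.reshape(n_maps, -1)
    voted = np.array([np.bincount(flat[:, p]).argmax() for p in range(flat.shape[1])])
    return voted.reshape(rows, cols)

print(majority_vote(preliminary))   # [[1 2] [3 1]]
```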