• Title/Summary/Keyword: Image Mapping (영상매핑)


Land Cover Classification of Coastal Area by SAM from Airborne Hyperspectral Images (항공 초분광 영상으로부터 연안지역의 SAM 토지피복분류)

  • LEE, Jin-Duk;BANG, Kon-Joon;KIM, Hyun-Ho
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.1
    • /
    • pp.35-45
    • /
    • 2018
  • Image data collected by an airborne hyperspectral camera system have great usability for coastline mapping, detection of facilities composed of specific materials, detailed land use analysis, change monitoring, and so forth in a complex coastal area, because the system provides almost complete spectral and spatial information for each image pixel across tens to hundreds of spectral bands. Several approaches based on SAM (Spectral Angle Mapper) supervised classification were applied to extract optimal land cover information from hyperspectral images acquired by a CASI-1500 airborne hyperspectral camera over a coastal area that includes both land and sea water. We applied three approaches: first, classification of the combined land and sea areas; second, reclassification after decomposition of the land and sea areas from the combined classification result; and third, land-only classification using atmospherically corrected images. We then compared the classification results and accuracies. Land cover classification was also conducted by selecting from the 48 hyperspectral bands not only four band images with the same wavelength ranges as IKONOS, QuickBird, KOMPSAT, and GeoEye satellite images but also eight band images with the same wavelength ranges as WorldView-2, and the results were compared with the classification obtained from all 48 bands. As a result, reclassification after decomposition of the land and sea areas proved more effective than classification of the combined land and sea areas. In this reclassification approach, the larger the number of bands, the higher the accuracy and reliability. The higher-spectral-resolution results also showed that asphalt and concrete roads could be classified more accurately.
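The SAM decision rule described above treats each pixel and each class reference as a spectrum vector and assigns the class with the smallest angle between them. A minimal pure-Python sketch, with hypothetical 4-band reference spectra (the class names and reflectance values below are illustrative assumptions, not the paper's training data):

```python
import math

def spectral_angle(pixel, reference):
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm_p = math.sqrt(sum(p * p for p in pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # Clamp against floating-point drift outside [-1, 1] before acos.
    cos_theta = max(-1.0, min(1.0, dot / (norm_p * norm_r)))
    return math.acos(cos_theta)

def sam_classify(pixel, references):
    """Assign the class whose reference spectrum makes the smallest angle."""
    return min(references, key=lambda name: spectral_angle(pixel, references[name]))

# Hypothetical 4-band reference spectra (an IKONOS-like band subset).
refs = {
    "water":    [0.02, 0.05, 0.03, 0.01],
    "asphalt":  [0.10, 0.11, 0.12, 0.13],
    "concrete": [0.25, 0.27, 0.30, 0.32],
}
print(sam_classify([0.04, 0.10, 0.06, 0.02], refs))  # prints "water"
```

Because SAM compares angles rather than magnitudes, a pixel that is a brighter or darker multiple of a reference spectrum still maps to the same class, which is one reason it is popular for supervised classification of hyperspectral imagery.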

Mapping Precise Two-dimensional Surface Deformation on Kilauea Volcano, Hawaii using ALOS2 PALSAR2 Spotlight SAR Interferometry (ALOS-2 PALSAR-2 Spotlight 영상의 위성레이더 간섭기법을 활용한 킬라우에아 화산의 정밀 2차원 지표변위 매핑)

  • Hong, Seong-Jae;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.6_3
    • /
    • pp.1235-1249
    • /
    • 2019
  • Kilauea Volcano is one of the most active volcanoes in the world. In this study, we used ALOS-2 PALSAR-2 satellite imagery to measure the surface deformation that occurred near the summit of Kilauea Volcano from 2015 to 2017. To measure two-dimensional surface deformation, interferometric synthetic aperture radar (InSAR) and multiple aperture SAR interferometry (MAI) methods were applied to two interferometric pairs. To improve the precision of the 2D measurement, we compared the root-mean-squared deviation (RMSD) of the differences of the measurement values while varying the effective antenna length and the normalized squint value, two factors that affect the measurement performance of the MAI method, and selected the values that measure deformation most precisely. After selecting the optimal factor values, the RMSD of the difference of the MAI measurements decreased from 4.07 cm to 2.05 cm. In the two interferograms, the maximum line-of-sight deformation is -28.6 cm and -27.3 cm, respectively; the maximum along-track deformation is 20.2 cm and 20.8 cm, and in the opposite direction -24.9 cm and -24.3 cm, respectively. After stacking the two interferograms, two-dimensional surface deformation mapping was performed, and a maximum surface deformation of approximately 30.4 cm was measured in the northwest direction. In addition, large deformations of more than 20 cm were measured in all directions. The measurement results show that the risk of eruption activity is increasing at Kilauea Volcano. These measurements of the surface deformation of Kilauea Volcano from 2015 to 2017 are expected to aid future studies of its eruption activity.
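The LOS measurements above come from converting unwrapped interferometric phase to displacement: one full 2π phase cycle corresponds to half a radar wavelength of motion along the line of sight. A minimal sketch, assuming PALSAR-2's L-band wavelength of roughly 22.9 cm and a sign convention (motion away from the sensor negative) that in practice varies by processor:

```python
import math

# L-band wavelength of ALOS-2 PALSAR-2, approximately 22.9 cm (assumption).
WAVELENGTH_M = 0.229

def phase_to_los_displacement(unwrapped_phase_rad, wavelength_m=WAVELENGTH_M):
    """Convert unwrapped interferometric phase to LOS displacement (meters).

    A 2*pi phase cycle equals half a wavelength of LOS motion; the minus
    sign is one common convention, not a universal one.
    """
    return -wavelength_m * unwrapped_phase_rad / (4.0 * math.pi)

# One full fringe (2*pi) maps to half a wavelength of LOS motion.
d = phase_to_los_displacement(2.0 * math.pi)
print(f"{abs(d) * 100:.2f} cm")  # prints "11.45 cm"
```

The MAI along-track component is derived differently (from forward- and backward-looking sub-aperture interferograms), which is why the effective antenna length and normalized squint enter as tuning factors there.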

Real-Time Shadow Generation using Image Warping (이미지 와핑을 이용한 실시간 그림자 생성 기법)

  • Kang, Byung-Kwon;Ihm, In-Sung
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.29 no.5
    • /
    • pp.245-256
    • /
    • 2002
  • Shadows are important elements in producing a realistic image. Generating exact shapes and positions of shadows is essential in rendering because shadows provide users with visual cues about the scene. It is also very important to be able to create soft shadows resulting from area light sources, since they increase visual realism drastically. In spite of their importance, the existing shadow generation algorithms still have problems producing realistic shadows in real time. While image-based rendering techniques can often be applied effectively to real-time shadow generation, such techniques usually demand a large amount of memory for storing preprocessed shadow maps. An effective compression method can reduce the memory requirement, but only at additional decoding cost. In this paper, we propose a new image-based shadow generation method built on image warping. With this method, it is possible to generate realistic shadows using only small pre-generated shadow maps, and the method is easy to extend to soft shadow generation. Our method can be used efficiently for generating realistic scenes in many real-time applications such as 3D games and virtual reality systems.
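The shadow maps this paper warps are, at their core, depth buffers rendered from the light's point of view. A minimal sketch of that underlying depth-test idea (not the paper's warping algorithm), assuming an orthographic directional light looking down the z-axis at scene points on an integer grid:

```python
# Minimal orthographic shadow-map sketch (assumption: a directional light
# looking straight down the z-axis, scene points on an integer (x, y) grid).

def build_shadow_map(points, width, height):
    """Store, per grid cell, the depth of the point closest to the light."""
    INF = float("inf")
    depth = [[INF] * width for _ in range(height)]
    for x, y, z in points:
        if z < depth[y][x]:
            depth[y][x] = z
    return depth

def in_shadow(shadow_map, x, y, z, bias=1e-4):
    """A point is shadowed if something nearer to the light covers its cell."""
    return z > shadow_map[y][x] + bias

occluder = (1, 1, 0.2)   # closer to the light
receiver = (1, 1, 0.8)   # surface point behind the occluder
lit      = (0, 0, 0.8)   # nothing above it

smap = build_shadow_map([occluder, receiver, lit], width=2, height=2)
print(in_shadow(smap, *receiver))  # prints "True"
print(in_shadow(smap, *lit))       # prints "False"
```

The memory-versus-quality trade-off the abstract mentions follows directly: each stored map is a full depth image, so precomputing many light positions multiplies storage, which is what warping a few small maps is meant to avoid.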

Registration Technique of Partial 3D Point Clouds Acquired from a Multi-view Camera for Indoor Scene Reconstruction (실내환경 복원을 위한 다시점 카메라로 획득된 부분적 3차원 점군의 정합 기법)

  • Kim Sehwan;Woo Woontack
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.3 s.303
    • /
    • pp.39-52
    • /
    • 2005
  • In this paper, a registration method is presented for registering partial 3D point clouds, acquired from a multi-view camera, for 3D reconstruction of an indoor environment. In general, conventional registration methods require high computational complexity and much time for registration. Moreover, these methods are not robust for 3D point clouds of comparatively low precision. To overcome these drawbacks, a projection-based registration method is proposed. First, depth images are refined based on a temporal property, by excluding 3D points with large variation, and a spatial property, by filling holes with reference to neighboring 3D points. Second, 3D point clouds acquired from two views are projected onto the same image plane, and two-step integer mapping is applied to enable a modified KLT (Kanade-Lucas-Tomasi) tracker to find correspondences. Then, fine registration is carried out by minimizing distance errors based on an adaptive search range. Finally, a final color is calculated with reference to the colors of corresponding points, and an indoor environment is reconstructed by applying the above procedure to consecutive scenes. The proposed method not only reduces computational complexity by searching for correspondences on a 2D image plane, but also enables effective registration even for 3D points of low precision. Furthermore, only a few color and depth images are needed to reconstruct an indoor environment.
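The projection step can be sketched as a pinhole projection followed by integer pixel mapping, so that a 2D tracker such as KLT can search for correspondences on the image plane instead of in 3D. The intrinsics below are illustrative assumptions, and this single rounding pass simplifies the paper's two-step integer mapping:

```python
# Sketch: project 3D points from two views onto a shared image plane and
# snap them to integer pixel coordinates (focal length and principal point
# are illustrative assumptions).

def project_to_pixel(point, fx=500.0, fy=500.0, cx=160.0, cy=120.0):
    """Pinhole projection followed by integer pixel mapping."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point must lie in front of the camera (z > 0)")
    u = fx * x / z + cx
    v = fy * y / z + cy
    return int(round(u)), int(round(v))   # integer mapping

def project_cloud(points):
    """Map each 3D point to its pixel, keeping the nearest point per pixel."""
    pixel_map = {}
    for p in points:
        uv = project_to_pixel(p)
        if uv not in pixel_map or p[2] < pixel_map[uv][2]:
            pixel_map[uv] = p
    return pixel_map

print(project_to_pixel((0.1, -0.05, 1.0)))  # prints "(210, 95)"
```

Keeping only the nearest point per pixel resolves occlusions, which matters when two partial clouds of the same room overlap on the shared image plane.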

Implementation of virtual reality for interactive disaster evacuation training using close-range image information (근거리 영상정보를 활용한 실감형 재난재해 대피 훈련 가상 현실 구현)

  • KIM, Du-Young;HUH, Jung-Rim;LEE, Jin-Duk;BHANG, Kon-Joon
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.1
    • /
    • pp.140-153
    • /
    • 2019
  • Close-range image information from drones and ground-based cameras has frequently been used in the field of disaster mitigation for 3D modeling and mapping. In addition, the use of virtual reality (VR) is increasing, as realistic 3D models built with VR technology are used to simulate disaster circumstances at large scale. In this paper, we created a VR training program by extracting realistic 3D models from close-range images taken by unmanned aircraft and a handheld digital camera, and we observed several issues arising during implementation as well as the effectiveness of applying VR to disaster mitigation training. First, we built a disaster scenario and created 3D models after image processing of the close-range imagery. The 3D models were imported as a background into Unity, a software package for creating augmented/virtual reality, targeting Android-based mobile phones, and the VR environment was created with C#-based scripts. The generated virtual reality includes a scenario in which the trainee moves to a safe place along the evacuation route in the event of a disaster, and we found that successful training can be achieved with virtual reality. In addition, training through virtual reality has advantages over actual evacuation training in terms of cost, space, and time efficiency.

Flood Mapping Using Modified U-NET from TerraSAR-X Images (TerraSAR-X 영상으로부터 Modified U-NET을 이용한 홍수 매핑)

  • Yu, Jin-Woo;Yoon, Young-Woong;Lee, Eu-Ru;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_2
    • /
    • pp.1709-1722
    • /
    • 2022
  • The rise in temperature induced by global warming has contributed to El Niño and La Niña events and abnormally changed seawater temperatures. Rainfall concentrates in some locations due to these abnormal variations in seawater temperature, causing frequent abnormal floods. It is important to detect flooded regions rapidly in order to recover from and prevent the human and property damage caused by floods, and this is possible with synthetic aperture radar. This study aims to build a model that directly derives flood-damaged areas from TerraSAR-X images taken before and after flooding, using a modified U-NET based on multi-kernel blocks to reduce the effect of speckle noise through the extraction of diverse feature maps. To that end, the two synthetic aperture radar (SAR) images were preprocessed to generate the model's input data, which was then applied to the modified U-NET structure to train the flood detection deep learning model. With this method, flooded areas could be detected at a high level, with an average F1 score of 0.966. This result is expected to contribute to the rapid recovery of flood-stricken areas and the derivation of flood-prevention measures.
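The quality figure reported above is the F1 score over flood / non-flood pixels, the harmonic mean of precision and recall. A minimal pure-Python computation over two flattened binary masks (the example masks are illustrative, not the paper's data):

```python
def f1_score(truth, pred):
    """F1 = 2PR / (P + R) for binary masks given as 0/1 sequences."""
    tp = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)   # of pixels predicted flooded, how many are
    recall = tp / (tp + fn)      # of truly flooded pixels, how many we found
    return 2 * precision * recall / (precision + recall)

truth = [1, 1, 1, 0, 0, 0, 1, 0]
pred  = [1, 1, 0, 0, 1, 0, 1, 0]
print(round(f1_score(truth, pred), 3))  # prints "0.75"
```

F1 is preferred over plain accuracy here because flooded pixels are usually a small minority of the scene, so a model that predicts "no flood" everywhere would still score high accuracy but near-zero F1.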

Object VR-based 2.5D Virtual Textile Wearing System : Viewpoint Vector Estimation and Textile Texture Mapping (오브젝트 VR 기반 2.5D 가상 직물 착의 시스템 : 시점 벡터 추정 및 직물 텍스쳐 매핑)

  • Lee, Eun-Hwan;Kwak, No-Yoon
    • Proceedings of the HCI Society of Korea Conference (한국HCI학회 학술대회논문집)
    • /
    • 2008.02a
    • /
    • pp.19-26
    • /
    • 2008
  • This paper concerns a new technology that allows a user to view a virtual wearing object from a 360-degree viewpoint: an object VR (Virtual Reality)-based 2.5D virtual textile wearing system using viewpoint vector estimation and textile texture mapping. The proposed system can virtually apply a new textile pattern, selected by the user, to the clothing region segmented from multiview 2D images of a clothes model for object VR, and display the virtual wearing appearance three-dimensionally from a 360-degree viewpoint of the object. Regardless of the color or intensity of the model clothes, the proposed system can virtually change the textile pattern while preserving the illumination and shading properties of the selected clothing region, and can quickly and easily simulate, compare, and select multiple textile pattern combinations for individual styles or entire outfits. The proposed system provides high practicality and an easy-to-use interface, as it allows real-time processing in various digital environments, creates comparatively natural and realistic virtual wearing styles, and supports semi-automatic processing to reduce manual work to a minimum. The system can motivate the creative activity of designers by simulating the effect of textile pattern design on the appearance of clothes without manufacturing physical garments and, by helping purchasers make decisions, can promote B2B and B2C e-commerce.
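One common way to re-texture a region while keeping its shading, sketched below, is to modulate the new textile color by each original pixel's luminance relative to the region mean. This is an assumption about how such a system could work, not the paper's actual algorithm:

```python
def luminance(rgb):
    """Relative luminance with ITU-R BT.601 weights."""
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def retexture_pixel(original_rgb, textile_rgb, region_mean_luma):
    """Scale the textile color by the original pixel's shading factor."""
    shade = luminance(original_rgb) / region_mean_luma
    return tuple(min(255, int(round(c * shade))) for c in textile_rgb)

# A darker fold (luma below the region mean) darkens the textile too,
# so wrinkles and shadows survive the pattern swap.
mean_luma = 128.0
dark_fold = (64, 64, 64)   # luma 64 -> shading factor 0.5
print(retexture_pixel(dark_fold, (200, 40, 120), mean_luma))  # prints "(100, 20, 60)"
```

Because only the luminance ratio of the original clothing is reused, the result is independent of the model garment's original hue, matching the abstract's claim that the system works regardless of the color or intensity of the model clothes.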


Realtime Video Visualization based on 3D GIS (3차원 GIS 기반 실시간 비디오 시각화 기술)

  • Yoon, Chang-Rak;Kim, Hak-Cheol;Kim, Kyung-Ok;Hwang, Chi-Jung
    • Journal of Korea Spatial Information System Society
    • /
    • v.11 no.1
    • /
    • pp.63-70
    • /
    • 2009
  • 3D GIS (Geographic Information System) processes, analyzes, and presents various real-world 3D phenomena by building 3D spatial information on real-world terrain, facilities, etc., and combining it with visualization techniques such as VR (Virtual Reality). It can be applied to areas such as urban management, traffic information, environment management, disaster management, and ocean management systems. In this paper, we propose a video visualization technology based on 3D geographic information to provide real-time information effectively in a 3D geographic information system, and we also present methods for building 3D building information data. The proposed video visualization system can provide real-time video information based on 3D geographic information by projecting a real-time video stream from a network video camera onto 3D geographic objects and texture-mapping the video frames onto terrain, facilities, etc. We developed a semi-automatic DBM (Digital Building Model) construction technique using both aerial images and LiDAR data for 3D projective texture mapping. 3D geographic information systems currently provide static visualization information, and the proposed method can replace that static information with real video information. The proposed method can be used in location-based decision-making systems by providing real-time visualization information and, moreover, to provide intelligent context-aware services based on geographic information.
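Projective texture mapping, as used above, amounts to projecting each world-space surface point through the video camera's model to a normalized (u, v) coordinate in the current video frame. A simplified sketch, assuming a camera looking down its +z axis with illustrative intrinsics (a full implementation would also apply the camera's rotation):

```python
# Sketch of projective texture mapping: a world point on terrain or a
# building is projected through the video camera's pinhole model to a
# normalized texture coordinate into the current video frame.

def projective_uv(point, cam_pos, fx, fy, cx, cy, width, height):
    """Return (u, v) in [0, 1) for a point visible to the camera, else None."""
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    if z <= 0:
        return None                       # behind the camera
    px = fx * x / z + cx
    py = fy * y / z + cy
    if not (0 <= px < width and 0 <= py < height):
        return None                       # outside the video frame
    return px / width, py / height

uv = projective_uv((2.0, 1.0, 10.0), (0.0, 0.0, 0.0),
                   fx=400.0, fy=400.0, cx=320.0, cy=240.0,
                   width=640, height=480)
print(uv)  # u = 0.625
```

Surfaces for which this function returns None simply keep their static texture, which is how live video can be blended over only the parts of the 3D scene the network camera actually sees.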


Remote Sensing based Algae Monitoring in Dams using High-resolution Satellite Image and Machine Learning (고해상도 위성영상과 머신러닝을 활용한 녹조 모니터링 기법 연구)

  • Jung, Jiyoung;Jang, Hyeon June;Kim, Sung Hoon;Choi, Young Don;Yi, Hye-Suk;Choi, Sunghwa
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2022.05a
    • /
    • pp.42-42
    • /
    • 2022
  • To date, algae monitoring in watersheds has depended heavily on point-based monitoring through on-site water sampling, which makes it difficult to efficiently monitor and respond to algal blooms that spread widely through a water body depending on climate, flow velocity, and water temperature conditions. In addition, because of the limited observation data available, previous studies have mostly mapped remote sensing data not to field measurements but to derived indices closely related to algae, such as NDVI, FGAI, and SEI. Aiming to improve the accuracy and efficiency of algae monitoring, this study first secured more than 7,000 algae observations using algae measurement instruments, and then examined the effectiveness of various machine learning techniques for mapping concurrent high-resolution satellite data to the field measurements. The study area is the Yeongju Dam basin, located on the upper Naeseongcheon stream in the Nakdong River system. In the data collection stage, 7,291 algae measurements were taken over four campaigns from February to September 2020 for areal in-situ observation, and 13 spectral datasets covering Sentinel-2 Bands 1-12 (Band 8 counted twice, as Bands 8 and 8A) were extracted for the same times and locations. Next, an algae_monitoring Python library was built for applying machine learning analysis. The library is organized into five stages so that it can be applied not only to the Yeongju Dam but to various basins: 1) a data preparation stage that splits the data into training and test sets; 2) a model application stage with a choice among Random Forest, Gradient Boosting Regression, and XGBoost algorithms; 3) a performance test stage that evaluates the model results (R2, MSE, MAE, RMSE, NSE, KGE, etc.); 4) a visualization stage for the model results; and 5) an application stage that converts satellite data into algae values using the selected model. In this study's case, using 14 inputs in total (the 12 Sentinel-2 bands plus two meteorological variables, air temperature and cloud fraction), Random Forest showed the highest overall goodness of fit among the machine learning techniques, achieving an NSE (Nash-Sutcliffe Efficiency) of 0.96 on the test set (0.99 on the training set). This confirms that learning from wide-area satellite data together with sufficiently abundant field measurements can dramatically increase the efficiency of algae monitoring analysis.
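Among the performance metrics the library reports, NSE (Nash-Sutcliffe Efficiency) is the headline figure above. It compares model error against the variance of the observations, so 1.0 is a perfect fit and 0.0 means the model is no better than predicting the observed mean. A minimal pure-Python version with illustrative values:

```python
def nse(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

# Illustrative observed vs. simulated algae values (not the study's data).
obs = [10.0, 20.0, 30.0, 40.0]
sim = [12.0, 18.0, 33.0, 39.0]
print(round(nse(obs, sim), 3))  # prints "0.964"
```

The gap the study reports between training NSE (0.99) and test NSE (0.96) is the usual sign of mild, acceptable overfitting, which is why evaluating on a held-out test set in stage 3 of the library matters.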


Mapping Topography Change via Multi-Temporal Sentinel-1 Pixel-Frequency Approach on Incheon River Estuary Wetland, Gochang, Korea (다중시기 Sentinel-1 픽셀-빈도 기법을 통한 고창 인천강 하구 습지의 지형 변화 매핑)

  • Won-Kyung Baek;Moung-Jin Lee;Ha-Eun Yu;Jeong-Cheol Kim;Joo-Hyung Ryu
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_3
    • /
    • pp.1747-1761
    • /
    • 2023
  • Wetlands, defined as lands periodically inundated or exposed during the year, are crucial for sustaining biodiversity and filtering environmental pollutants. The importance of mapping and monitoring their topographical changes is therefore paramount. This study focuses on the topographical variations at the Incheon River estuary wetland post-restoration, noting a lack of adequate prior measurements. Using a multi-temporal Sentinel-1 dataset from October 2014 to March 2023, we mapped long-term variations in water bodies and detected topographical change anomalies using a pixel-frequency approach. Our analysis, based on 196 Sentinel-1 acquisitions from an ascending orbit, revealed significant topography changes. Since 2020, employing the pixel-frequency technique, we observed area increases of +0.0195, 0.0016, 0.0075, and 0.0163 km2 in water level sections at depths of 2-3 m, 1-2 m, 0-1 m, and less than 0 m, respectively. These findings underscore the effectiveness of the wetland restoration efforts in the area.
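The pixel-frequency idea above can be sketched as follows: over a stack of binary water masks (one per SAR acquisition), the per-pixel inundation frequency separates permanent water (near 1.0), periodically inundated flats, and dry land (near 0.0). The thresholds and the tiny mask stack below are illustrative assumptions, not the study's parameters:

```python
def water_frequency(mask_stack):
    """Fraction of acquisitions in which each pixel is classified as water."""
    n = len(mask_stack)
    rows, cols = len(mask_stack[0]), len(mask_stack[0][0])
    return [[sum(m[r][c] for m in mask_stack) / n for c in range(cols)]
            for r in range(rows)]

def label_pixel(freq, wet=0.9, dry=0.1):
    """Assumed thresholds: >=0.9 permanent water, <=0.1 land."""
    if freq >= wet:
        return "permanent water"
    if freq <= dry:
        return "land"
    return "intermittently inundated"

# Four acquisitions of a 1x3 scene: always wet / tidal / always dry.
stack = [
    [[1, 1, 0]],
    [[1, 0, 0]],
    [[1, 1, 0]],
    [[1, 0, 0]],
]
freqs = water_frequency(stack)
print([label_pixel(f) for f in freqs[0]])
```

Comparing such frequency maps between periods (here, before and after 2020) is what lets area changes in each water-level section be attributed to topographical change rather than to a single acquisition's tide state.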