• Title/Abstract/Keywords: multi-temporal images

Search results: 214

도심지역의 그림자 영향을 고려한 다시기 고해상도 위성영상의 선택적 히스토그램 매칭 (Selective Histogram Matching of Multi-temporal High Resolution Satellite Images Considering Shadow Effects in Urban Area)

  • 염준호;김용일
    • 대한공간정보학회지 / Vol. 20 No. 2 / pp.47-54 / 2012
  • Effective modeling and analysis of urban areas requires additional high-resolution satellite images acquired at different times or over different areas. However, even the same ground object shows radiometric inconsistencies between different images, which degrades the accuracy of image processing and analysis. Moreover, in urban areas, objects with height, such as buildings, trees, bridges, and other structures, cast shadows across the entire image, degrading the quality of relative radiometric normalization. In this study, a single-image shadow extraction technique that requires neither the geometric positions of the sun and satellite nor an auxiliary digital elevation model was applied, and a selective histogram matching method that excludes the influence of shadows is proposed. Shadows were extracted using adjacency information on building edge buffer zones together with the spatial and spectral attributes of objects generated by segmentation, and anomalous objects falsely extracted as shadows, such as asphalt roads, were removed. Finally, selective histogram matching was performed on multi-temporal Quickbird-2 images with the shadow areas masked out, using only the non-shadow regions.
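
As a rough illustration of the selective matching step, the sketch below matches the histogram of a subject band to a reference band using only pixels outside the shadow masks. The NumPy interpolation approach and all array names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def selective_histogram_match(subject, reference, subject_shadow, reference_shadow):
    """Match the histogram of `subject` to `reference` using only non-shadow pixels.

    subject, reference             : 2-D arrays of one spectral band
    subject_shadow, reference_shadow : boolean masks, True where a pixel is shadow
    Returns the radiometrically adjusted subject band (same shape as the input).
    """
    src = subject[~subject_shadow].ravel()
    ref = reference[~reference_shadow].ravel()

    # Empirical CDFs computed from the non-shadow pixels only
    src_vals, src_counts = np.unique(src, return_counts=True)
    ref_vals, ref_counts = np.unique(ref, return_counts=True)
    src_cdf = np.cumsum(src_counts) / src.size
    ref_cdf = np.cumsum(ref_counts) / ref.size

    # Map every subject grey level to the reference level with the same CDF value
    lut = np.interp(src_cdf, ref_cdf, ref_vals)
    matched = np.interp(subject.ravel(), src_vals, lut).reshape(subject.shape)
    return matched
```

Applied band by band, the adjustment is derived from non-shadow pixels but can then be applied to the whole image.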

Estimation of the Flood Area Using Multi-temporal RADARSAT SAR Imagery

  • Sohn, Hong-Gyoo;Song, Yeong-Sun;Yoo, Hwan-Hee;Jung, Won-Jo
    • Korean Journal of Geomatics / Vol. 2 No. 1 / pp.37-46 / 2002
  • Accurate classification of the water area is a preliminary step toward accurately analyzing the flooded area and the damage caused by a flood, and it is especially useful for monitoring regions where flooding recurs every year. The accurate estimation of the flooded area can ultimately serve as a primary source of information for policy decisions. Although SAR (Synthetic Aperture Radar) imagery, with its own energy source, is sensitive to water areas, its shadow effect, which resembles the reflectance signature of water, should be carefully checked before accurate classification. Especially when identifying small flooded areas in mountainous terrain, removing the shadow effect turns out to be essential for accurately classifying water in SAR imagery. In this paper, the flooded area was classified and monitored using multi-temporal RADARSAT SAR images of Ok-Chun and Bo-Eun, located in Chung-Book Province, taken on 12 August 1998 (during the flood) and 19 August 1998 (after the flood). Several geometric and radiometric processing steps were applied to the SAR imagery. First, the speckle noise of the two SAR images was reduced and the radar backscattering coefficient $(\sigma^0)$ was calculated. Ortho-rectification was then performed via a satellite orbit model developed in this study using the ephemeris information of the satellite images and ground control points, and the radiometric distortion caused by terrain relief was corrected. Finally, the water area was identified in the two images and the flooded area was calculated accordingly. The identified flood area was analyzed by overlaying it with the existing land use map.

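A schematic Python sketch of the final water-extraction step only: speckle smoothing followed by a backscatter threshold on the calibrated $(\sigma^0)$ image. The median filter, the -15 dB threshold, and the 8 m pixel size are placeholder assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import median_filter

def water_area_km2(sigma0_db, water_threshold_db=-15.0, pixel_size_m=8.0):
    """Classify water from a calibrated SAR backscatter image (sigma^0 in dB)
    and return the total water area in square kilometres.

    sigma0_db          : 2-D array of backscatter coefficients in dB
    water_threshold_db : pixels darker than this are labelled water
    pixel_size_m       : ground sampling distance of the ortho-rectified image
    """
    # Simple speckle suppression before thresholding
    smoothed = median_filter(sigma0_db, size=5)

    # Smooth water surfaces scatter the radar pulse away -> very low backscatter
    water_mask = smoothed < water_threshold_db
    return water_mask.sum() * (pixel_size_m ** 2) / 1e6

# Flooded area = water during the flood that is not water afterwards, e.g.
# flood_mask = water_during & ~water_after   (for two co-registered water masks)
```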

고해상도 다중시기 위성영상을 이용한 밭작물 분류: 마늘/양파 재배지 사례연구 (Field Crop Classification Using Multi-Temporal High-Resolution Satellite Imagery: A Case Study on Garlic/Onion Field)

  • 유희영;이경도;나상일;박찬원;박노욱
    • 대한원격탐사학회지 / Vol. 33 No. 5_2 / pp.621-630 / 2017
  • In this paper, classification was performed over major garlic and onion production areas to assess the feasibility of classifying field crop cultivation areas using high-resolution multi-temporal satellite imagery. Images were collected according to the growth cycle of garlic and onion, and classification was attempted with single-date data and with various combinations of multi-temporal data. Among the single-date data, the December image, acquired after sowing was complete, and the March image, acquired when the crops begin to grow actively, gave high classification accuracy. Multi-temporal data yielded higher classification accuracy than single-date data, but a larger number of images did not automatically translate into higher accuracy. Rather, images acquired at or immediately after sowing lowered the classification accuracy, and the highest accuracy was obtained when the March, April, and May images, covering the growing season of garlic and onion, were used together. Therefore, securing imagery from the main growing season of the crops is confirmed to be very important for classifying garlic and onion with multi-temporal satellite imagery.
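
A minimal sketch of the multi-temporal classification idea: per-date band values are stacked into one feature vector per pixel and passed to a supervised classifier. The random forest classifier and the data layout are illustrative assumptions; the paper does not prescribe this implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def stack_dates(images):
    """images: list of (bands, rows, cols) arrays, one per acquisition date.
    Returns a (rows*cols, n_dates*bands) feature matrix."""
    per_date = [img.reshape(img.shape[0], -1).T for img in images]  # (pixels, bands)
    return np.hstack(per_date)

def classify_fields(images, train_idx, train_labels):
    """train_idx: flat pixel indices with known classes (e.g. garlic / onion / other)."""
    X = stack_dates(images)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[train_idx], train_labels)
    rows, cols = images[0].shape[1:]
    return clf.predict(X).reshape(rows, cols)   # per-pixel class map
```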

Multi-Frame Face Classification with Decision-Level Fusion based on Photon-Counting Linear Discriminant Analysis

  • Yeom, Seokwon
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol. 14 No. 4 / pp.332-339 / 2014
  • Face classification has wide applications in security and surveillance. However, this technique presents various challenges caused by pose, illumination, and expression changes. Face recognition with long-distance images involves additional challenges owing to focusing problems and motion blurring. Multiple frames acquired under varying spatial or temporal settings can provide additional information, which can be used to achieve improved classification performance. This study investigates the effectiveness of multi-frame decision-level fusion with photon-counting linear discriminant analysis. Multiple frames generate multiple scores for each class. The fusion process comprises three stages: score normalization, score validation, and score combination. After the scores are normalized, candidate scores are selected during the score validation process, which removes bad scores that can degrade the final output. The selected candidate scores are combined using one of the following fusion rules: maximum, averaging, or majority voting. Degraded facial images are employed to demonstrate the robustness of multi-frame decision-level fusion in harsh environments. Out-of-focus and motion-blurring point-spread functions are applied to the test images to simulate long-distance acquisition. Experimental results with three facial data sets indicate the efficiency of the proposed decision-level fusion scheme.
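
A compact sketch of the three fusion stages described above (score normalization, validation, combination) for per-frame class scores; the min-max normalization and the keep-top-k validation rule are stand-in assumptions rather than the paper's exact criteria.

```python
import numpy as np

def fuse_scores(scores, rule="average", keep=5):
    """Decision-level fusion of multi-frame classification scores.

    scores : (n_frames, n_classes) array, one score vector per frame
    rule   : 'max', 'average', or 'vote'
    keep   : number of best frames retained by the validation step
    Returns the index of the winning class.
    """
    # 1) Score normalization: min-max scale each frame's scores to [0, 1]
    lo = scores.min(axis=1, keepdims=True)
    hi = scores.max(axis=1, keepdims=True)
    norm = (scores - lo) / np.maximum(hi - lo, 1e-12)

    # 2) Score validation: drop weak frames (e.g. blurred or out-of-focus)
    confidence = norm.max(axis=1)
    selected = norm[np.argsort(confidence)[-keep:]]

    # 3) Score combination
    if rule == "max":
        return int(selected.max(axis=0).argmax())
    if rule == "average":
        return int(selected.mean(axis=0).argmax())
    votes = selected.argmax(axis=1)              # majority voting
    return int(np.bincount(votes).argmax())
```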

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems / Vol. 17 No. 3 / pp.556-570 / 2021
  • Existing video expression recognition methods focus mainly on extracting spatial features from video expression images and tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolutional neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolutional neural networks are used to extract spatiotemporal expression features: a spatial network extracts the spatial features of each static expression image, and a temporal network extracts dynamic features from the optical flow of multiple expression images. The spatiotemporal features learned by the two networks are then fused by multiplication, and the fused features are fed into a support vector machine for facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, outperforming the compared methods.
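
A toy sketch of the multiplicative fusion step: the spatial and temporal (optical-flow) feature vectors are combined by element-wise multiplication before a support vector machine. The feature shapes and the scikit-learn SVM are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def multiplicative_fusion(spatial_feats, temporal_feats):
    """Element-wise product of the two CNN feature streams.
    Both inputs are (n_samples, n_features) arrays of equal shape."""
    return spatial_feats * temporal_feats

def train_expression_svm(spatial_feats, temporal_feats, labels):
    """Train the final expression classifier on the fused features."""
    fused = multiplicative_fusion(spatial_feats, temporal_feats)
    clf = SVC(kernel="rbf", C=1.0)
    clf.fit(fused, labels)
    return clf
```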

Multi-Temporal Spectral Analysis of Rice Fields in South Korea Using MODIS and RapidEye Satellite Imagery

  • Kim, Hyun Ok;Yeom, Jong Min
    • Journal of Astronomy and Space Sciences / Vol. 29 No. 4 / pp.407-411 / 2012
  • Space-borne remote sensing is an effective and inexpensive way to identify crop fields and detect the crop condition. We examined the multi-temporal spectral characteristics of rice fields in South Korea to detect their phenological development and condition. These rice fields are compact, small-scale parcels of land. For the analysis, moderate resolution imaging spectroradiometer (MODIS) and RapidEye images acquired in 2011 were used. The annual spectral tendencies of different crop types could be detected using MODIS data because of its high temporal resolution, despite its relatively low spatial resolution. A comparison between MODIS and RapidEye showed that the spectral characteristics changed with the spatial resolution. The vegetation index (VI) derived from MODIS revealed more moderate values among different land-cover types than the index derived from RapidEye. Additionally, an analysis of various VIs using RapidEye satellite data showed that the VI adopting the red edge band reflected crop conditions better than the traditionally used normalized difference VI.
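
For reference, the index families compared in the study can be written as NDVI = (NIR - Red) / (NIR + Red) and a red-edge analogue that substitutes RapidEye's red-edge band for the red band; a short sketch with assumed band array names follows.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / np.maximum(nir + red, 1e-12)

def red_edge_index(nir, red_edge):
    """Red-edge variant usable with RapidEye: the red band is replaced by the
    red-edge band, which in this study tracked crop condition more closely."""
    return (nir - red_edge) / np.maximum(nir + red_edge, 1e-12)
```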

고해상도 광학영상과 SAR 영상 간 자동 변위량 추정 (Automatic Estimation of Geometric Translations Between High-resolution Optical and SAR Images)

  • 한유경;변영기;김용일
    • 대한공간정보학회지 / Vol. 20 No. 3 / pp.41-48 / 2012
  • To use high-resolution satellite imagery effectively in the geospatial information field, it is important to combine multi-sensor and multi-temporal image data in spatial analysis so that the strengths of these datasets are fully exploited. In this study, a new image matching technique was developed that automatically estimates the translation between images and performs geometric correction between multi-sensor images, so that high-resolution multi-sensor data can be used together. Geometric and radiometric preprocessing was carried out to effectively compute the similarity between optical and SAR images, which differ in acquisition mode and radiometric characteristics, and the translation between the two images was measured using mutual information. In addition, to improve the computational efficiency and accuracy of the translation measurement, an image pyramid scheme was applied, and the translations in the x and y directions were estimated with an optimization technique, starting from the top pyramid level. By repeating this process down to the original image at the bottom of the pyramid, a precise translation between the two images was estimated. An accuracy assessment of the proposed method using manually extracted check points showed that a geometric correction accuracy of about 5 m (RMSE) or better could be achieved by considering translation alone.
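
A condensed sketch of the similarity measure and coarse-to-fine search described above: mutual information between the optical and SAR images is evaluated over candidate x/y shifts at each pyramid level. The histogram bin count, search radius, and the exhaustive search used here in place of a numerical optimizer are simplifying assumptions.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information of two equally sized grey-level images."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())

def shift_image(img, dx, dy):
    """Integer-pixel shift; wrap-around edges are ignored for brevity."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def estimate_translation(optical, sar, levels=4, radius=2):
    """Coarse-to-fine search for the (dx, dy) that maximizes mutual information."""
    dx = dy = 0
    for lvl in range(levels - 1, -1, -1):          # top of the pyramid first
        step = 2 ** lvl
        opt, ref = optical[::step, ::step], sar[::step, ::step]
        cx, cy = dx // step, dy // step            # current estimate at this level
        best = (-np.inf, cx, cy)
        for ddx in range(cx - radius, cx + radius + 1):
            for ddy in range(cy - radius, cy + radius + 1):
                mi = mutual_information(shift_image(opt, ddx, ddy), ref)
                if mi > best[0]:
                    best = (mi, ddx, ddy)
        dx, dy = best[1] * step, best[2] * step    # refine at the next finer level
    return dx, dy
```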

Comparing LAI Estimates of Corn and Soybean from Vegetation Indices of Multi-resolution Satellite Images

  • Kim, Sun-Hwa;Hong, Suk Young;Sudduth, Kenneth A.;Kim, Yihyun;Lee, Kyungdo
    • 대한원격탐사학회지 / Vol. 28 No. 6 / pp.597-609 / 2012
  • Leaf area index (LAI) is important in explaining the ability of the crop to intercept solar energy for biomass production and in understanding the impact of crop management practices. This paper describes a procedure for estimating LAI as a function of image-derived vegetation indices from temporal series of IKONOS, Landsat TM, and MODIS satellite images using empirical models and demonstrates its use with data collected at Missouri field sites. LAI data were obtained several times during the 2002 growing season at monitoring sites established in two central Missouri experimental fields, one planted to soybean (Glycine max L.) and the other planted to corn (Zea mays L.). Satellite images at varying spatial and spectral resolutions were acquired and the data were extracted to calculate normalized difference vegetation index (NDVI) after geometric and atmospheric correction. Linear, exponential, and expolinear models were developed to relate temporal NDVI to measured LAI data. Models using IKONOS NDVI estimated LAI of both soybean and corn better than those using Landsat TM or MODIS NDVI. Expolinear models provided more accurate results than linear or exponential models.
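
A brief sketch of fitting the three model forms mentioned above (linear, exponential, expolinear) to paired NDVI/LAI observations with SciPy; the expolinear parameterization shown is one common form and an assumption, not necessarily the one used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate LAI ~ NDVI relationships
def linear(x, a, b):
    return a * x + b

def exponential(x, a, b):
    return a * np.exp(b * x)

def expolinear(x, a, b, c):
    # Expolinear form: exponential for small x, linear for large x
    return (a / b) * np.log1p(np.exp(b * (x - c)))

def fit_and_score(model, ndvi, lai, p0):
    """Fit one model to the observations and report its RMSE."""
    params, _ = curve_fit(model, ndvi, lai, p0=p0, maxfev=10000)
    pred = model(ndvi, *params)
    rmse = float(np.sqrt(np.mean((pred - lai) ** 2)))
    return params, rmse

# Example comparison on field measurements (ndvi, lai are 1-D arrays):
# fit_and_score(linear, ndvi, lai, p0=(5.0, 0.0))
# fit_and_score(exponential, ndvi, lai, p0=(0.1, 4.0))
# fit_and_score(expolinear, ndvi, lai, p0=(5.0, 10.0, 0.5))
```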

Integrated Water Resources Management in the Era of Great Transition

  • Ashkan Noori;Seyed Hossein Mohajeri;Milad Niroumand Jadidi;Amir Samadi
    • 한국수자원학회:학술대회논문집 / 한국수자원학회 2023년도 학술발표회 / pp.34-34 / 2023
  • The Chah-Nimeh reservoirs, a group of natural lakes on the border of Iran and Afghanistan, are the main drinking and agricultural water resources of the arid Sistan region. Considering the occurrence of an intense seasonal wind, locally known as the Levar wind, this study explores the possibility of building a TSM (Total Suspended Matter) monitoring model for the Chah-Nimeh reservoirs from multi-temporal satellite images and in-situ wind speed data. The results show a strong correlation between TSM concentration and wind speed. The developed empirical model showed high performance in retrieving the spatiotemporal distribution of TSM concentration, with R2=0.98 and RMSE=0.92 g/m3. Following this observation, we also consider a machine learning-based model that predicts the average TSM using only wind speed. The in-situ wind speed data are connected to the TSM data generated from the inversion of multi-temporal satellite imagery to train a neural network-based model (Wind2TSM-Net). Examining the Wind2TSM-Net model indicates that it can retrieve TSM accurately using wind speed alone (R2=0.88 and RMSE=1.97 g/m3). Moreover, the results of this study show that TSM concentration can be estimated using only in-situ wind speed data, independent of satellite images. Such a model can provide a temporally persistent means of monitoring TSM that is not limited by the temporal resolution of the imagery or by cloud cover in optical remote sensing.

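A minimal sketch of the idea behind a wind-to-TSM regressor: a small neural network maps in-situ wind speed to the reservoir-average TSM derived from the satellite inversion. The scikit-learn MLP, layer sizes, and train/test split are placeholder assumptions, not the authors' Wind2TSM-Net architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

def train_wind_to_tsm(wind_speed, tsm):
    """wind_speed : (n,) in-situ wind speed measurements
       tsm        : (n,) reservoir-average TSM from the satellite inversion (g/m3)"""
    X = np.asarray(wind_speed).reshape(-1, 1)
    y = np.asarray(tsm)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
    model.fit(X_tr, y_tr)

    pred = model.predict(X_te)
    print("R2  :", r2_score(y_te, pred))
    print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
    return model
```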

Application of the 3D Discrete Wavelet Transformation Scheme to Remotely Sensed Image Classification

  • Yoo, Hee-Young;Lee, Ki-Won;Kwon, Byung-Doo
    • 대한원격탐사학회지 / Vol. 23 No. 5 / pp.355-363 / 2007
  • The 3D DWT (three-dimensional discrete wavelet transform) scheme is potentially useful for analyzing both spatial and spectral information. Nevertheless, few researchers have attempted to process or classify remotely sensed images using the 3D DWT. This study applies the 3D DWT to the land cover classification of optical and SAR (Synthetic Aperture Radar) images, and the results are evaluated quantitatively and compared with those of a traditional classification technique. The experimental results show that the 3D DWT yields superior classification results to conventional techniques, especially when dealing with high-resolution imagery and SAR imagery. The 3D DWT scheme is expected to extend to multi-temporal or multi-sensor image classification.
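
A short sketch of a single-level 3-D discrete wavelet transform of an image cube (rows x cols x bands or dates) with PyWavelets, whose subband coefficients can serve as classification features; the Haar wavelet and the feature choice are illustrative assumptions.

```python
import numpy as np
import pywt

def dwt3d_subbands(cube, wavelet="haar"):
    """Single-level 3-D discrete wavelet transform of an image cube.

    cube : (rows, cols, bands) array -- a spectral or multi-temporal stack
    Returns a dict of 8 subband cubes ('aaa', 'aad', ..., 'ddd'), each roughly
    half the input size along every axis; their coefficients (or simple
    statistics of them) can be fed to a classifier.
    """
    return pywt.dwtn(cube, wavelet=wavelet, axes=(0, 1, 2))

# Example: use the low-pass ('aaa') subband energy as a compact feature
# cube = np.random.rand(64, 64, 8)
# subbands = dwt3d_subbands(cube)
# low_pass_energy = subbands["aaa"] ** 2
```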