• Title/Summary/Keyword: Image Preprocessing (영상 전처리)


Modification of Hydro-BEAM Model for Flood Discharge Analysis (홍수유출해석을 위한 Hydro-BEAM모형의 개선)

  • Park, Jin-Hyeog;Yun, Ji-Heun;Chong, Koo-Yol;Sung, Young-Du
• Proceedings of the Korea Water Resources Association Conference / 2008.05a / pp.2179-2183 / 2008
  • Despite the considerable effort devoted to developing distributed models, various constraints have so far limited them to merely demonstrating their potential. Recently, however, rapidly advancing computing power, the accumulation of digital data such as DEMs, and progress in GIS and satellite imaging techniques have increased the use of physically based distributed runoff models that account for spatial heterogeneity and hydraulically trace the flow of water through the runoff process based on kinematic theory. The theoretical basis of this model development is Hydro-BEAM, a physically based, grid-structured, distributed long-term runoff model under development since 1998 at the Kojiri Laboratory of the Disaster Prevention Research Institute, Kyoto University, Japan; it was developed to assess the soundness of the basin water cycle by characterizing long-term discharge and water quality within a basin in both time and space. For runoff computation, the model uses a planar grid as the horizontal runoff module and, in the vertical, a multi-layer model in which the horizontal runoff from layers A and B flows into the stream while layer C is assumed to be a groundwater layer that does not affect streamflow; runoff in layer A and the surface and channel flows are computed by the kinematic wave method, while runoff from layers B and C is computed by a linear storage method. In this study, the grid flow direction was improved from four to eight directions; the model's hydrologic parameters can now be entered directly in conjunction with GIS; a Green & Ampt module was added to simulate a physically based infiltration process; and gridded rainfall can be used, with future flood forecasting from radar rainfall and numerically forecast rainfall in mind. The model was thereby improved into a distributed rainfall-runoff model for flood runoff analysis, and its applicability was examined by applying it to the Namgang Dam basin. The temporal variation and spatial distribution of surface and subsurface flow during the flood season could be simulated. As a preprocessing step, GIS programs such as ArcGIS or ArcView were used to prepare the ASCII-format input parameter data required by the model; as a postprocessing step, the model outputs, such as the distribution of runoff within the basin, are written in ASCII format so they can be displayed in GIS. The Namgang Dam basin was divided into 500 m square grids and flow was routed through the channel network to the basin outlet by kinematic wave theory; comparison of hydrographs showed highly reproducible results. For a more accurate assessment of the model's accuracy and practicality, the reproducibility of runoff and the parameters should be verified in the future for various rainfall events and basins of various sizes.
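To make the added infiltration module concrete, the following is a minimal Python sketch of an explicit Green & Ampt computation of the kind the abstract describes; the function name, parameter values, and rainfall series are illustrative assumptions, not taken from the Hydro-BEAM code.

```python
# Minimal sketch: explicit Green & Ampt infiltration under constant rainfall.
# Parameter values are illustrative, not those used in the Hydro-BEAM study.
import numpy as np

def green_ampt(rain, dt, K=1.0, psi=11.0, dtheta=0.3):
    """rain: rainfall intensity per step (cm/h); dt: step length (h);
    K: saturated hydraulic conductivity (cm/h); psi: wetting-front
    suction head (cm); dtheta: soil moisture deficit (-)."""
    F = 1e-6                                  # cumulative infiltration (cm)
    rates = []
    for r in rain:
        fc = K * (1.0 + psi * dtheta / F)     # infiltration capacity (cm/h)
        f = min(r, fc)                        # actual infiltration rate
        F += f * dt
        rates.append(f)                       # excess r - f is surface runoff
    return np.array(rates)

print(green_ampt(rain=[2.0] * 12, dt=0.25))   # rates decay toward K as F grows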


The Study of Land Surface Change Detection Using Long-Term SPOT/VEGETATION (장기간 SPOT/VEGETATION 정규화 식생지수를 이용한 지면 변화 탐지 개선에 관한 연구)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, In-Hwan
• Journal of the Korean Association of Geographic Information Studies / v.13 no.4 / pp.111-124 / 2010
  • Monitoring land surface change is an important research field because the relevant parameters relate to land use, climate change, meteorology, agriculture, the surface energy balance, and the surface environmental system. Many change-detection methods have been presented to deliver more detailed information, with tools ranging from ground-based measurement to satellite multi-spectral sensors. Recently, high-resolution satellite data have come to be considered the most efficient means of monitoring extensive land environmental systems, particularly at high spatial and temporal resolution. In this study we use satellites with two different spatial resolutions: SPOT/VEGETATION, with 1 km resolution, to detect coarse-scale change and determine an objective threshold, and Landsat, with high resolution, to resolve detailed land environmental change. Owing to their different spatial resolutions, the two systems have different observation characteristics, such as repeat cycle and global coverage; by correlating them, land surface change can be detected from medium to high resolution. A K-means clustering algorithm is applied to detect changed areas between two images acquired at different times. When solar spectral bands are used, complicated surface-reflectance scattering characteristics make change detection difficult and can cause serious problems in interpreting surface characteristics: even when the intrinsic surface reflectance is constant, the measured value can change with the solar and sensor viewing geometry. To reduce these effects, this study uses a long-term Normalized Difference Vegetation Index (NDVI), computed from SPOT/VEGETATION solar channels that have undergone atmospheric and bidirectional correction, to provide an objective threshold for detecting land surface change, since NDVI is less sensitive to solar geometry than the individual solar channels. Surface change detection based on the long-term NDVI shows improved results over using Landsat alone.
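The two computational steps named above, the NDVI and the K-means change clustering, can be sketched as follows; the band arrays, image size, and cluster count are illustrative assumptions rather than the study's settings.

```python
# Schematic sketch: NDVI from red/NIR bands, then K-means on a two-date
# NDVI difference image. Random arrays stand in for real satellite bands.
import numpy as np
from sklearn.cluster import KMeans

def ndvi(red, nir):
    return (nir - red) / (nir + red + 1e-9)   # guard against division by zero

rng = np.random.default_rng(0)
ndvi_t1 = ndvi(rng.random((100, 100)), rng.random((100, 100)))
ndvi_t2 = ndvi(rng.random((100, 100)), rng.random((100, 100)))

diff = (ndvi_t2 - ndvi_t1).reshape(-1, 1)     # per-pixel change feature
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(diff)
change_map = labels.reshape(ndvi_t1.shape)    # 0/1 map: changed vs. unchanged
```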

Quantitative Analysis of Digital Radiography Pixel Values to absorbed Energy of Detector based on the X-Ray Energy Spectrum Model (X선 스펙트럼 모델을 이용한 DR 화소값과 디텍터 흡수에너지의 관계에 대한 정량적 분석)

  • Kim, Do-Il;Kim, Sung-Hyun;Ho, Dong-Su;Choe, Bo-Young;Suh, Tae-Suk;Lee, Jae-Mun;Lee, Hyoung-Koo
• Progress in Medical Physics / v.15 no.4 / pp.202-209 / 2004
  • Flat-panel digital radiography (DR) systems have recently become useful and important in diagnostic radiology. In DRs with amorphous silicon photosensors, CsI(Tl) is normally used as the scintillator, producing visible light in proportion to the absorbed radiation energy. The visible photons are converted into electric signals in the amorphous silicon photodiodes, which constitute a two-dimensional array. To produce good-quality images, the detailed response of DR detectors to radiation must be studied. The relationship between air exposure and DR output has been investigated in many studies, but only under fixed tube voltage. In this study, we investigated the relationship between DR output and X-rays in terms of the energy absorbed in the detector, rather than air exposure, using SPEC-l8, an X-ray energy spectrum model. Measured exposure was compared with calculated exposure to obtain the inherent filtration, an important input variable of SPEC-l8. The energy absorbed in the detector was calculated with an algorithm for computing the energy absorbed in a material, and pixel values of real images were obtained under various conditions. A characteristic curve was obtained from the relationship between the two parameters, and the results were verified using phantoms made of water and aluminum: the pixel values of the phantom images were estimated and compared with the characteristic curve under various conditions. The relationship between DR output and the energy absorbed in the detector was found to be almost linear. In the phantom experiments, the estimated pixel values agreed with the characteristic curve, although scattered photons introduced some errors; the effect of scattered X-rays must be studied further because it was not included in the calculation algorithm. The results of this study can provide useful information for the pre-processing of digital radiography.
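The relationship examined above lends itself to a short numerical sketch: the absorbed energy is an integral of the X-ray spectrum weighted by the scintillator's absorption, and the pixel value is modeled as linear in that energy. The spectrum, attenuation law, and calibration points below are placeholders, not SPEC-l8 output or the paper's measurements.

```python
# Sketch: detector-absorbed energy as a spectrum-weighted integral, then a
# linear characteristic curve pixel_value = a * E_abs + b. All numbers are
# placeholders for illustration only.
import numpy as np

E = np.linspace(10, 120, 111)                 # photon energy grid (keV)
phi = np.exp(-((E - 50.0) / 25.0) ** 2)       # placeholder fluence spectrum
mu = 5.0 * (E / 30.0) ** -2.8                 # placeholder attenuation (1/mm)
absorbed_frac = 1.0 - np.exp(-mu * 0.6)       # 0.6 mm scintillator, assumed

E_abs = np.trapz(phi * absorbed_frac * E, E)  # energy absorbed per unit area

# fit the linear characteristic curve from placeholder calibration points
E_cal = np.array([0.5, 1.0, 2.0, 4.0]) * E_abs
pv_cal = np.array([410.0, 805.0, 1598.0, 3180.0])
a, b = np.polyfit(E_cal, pv_cal, 1)
print(a * 3.0 * E_abs + b)                    # predicted pixel value
```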


A Case Study on the Data Processing to Enhance the Resolution of Chirp SBP Data (Chirp SBP 자료 해상도 향상을 위한 전산처리연구)

  • Kim, Young-Jun;Kim, Won-Sik;Shin, Sung-Ryul;Kim, Jin-Ho
• Geophysics and Geophysical Exploration / v.14 no.4 / pp.289-297 / 2011
  • Chirp sub-bottom profiler (SBP) data have comparatively higher resolution than other seismic data, and the raw signal can be used as a final section after basic filtering. However, Chirp SBP signals may contain various high-frequency noise and can give a distorted image of complex geological structures in the time domain. This study aims to establish a Chirp SBP data-processing workflow for enhanced images and to analyze the parameters appropriate for the domestic continental shelf. After pre-processing, the workflow includes dynamic S/N filtering to eliminate high-frequency noise, a dip scan stack to enhance the continuity of reflection events, and finally post-stack depth migration to correct structures distorted in the time-domain sections. We demonstrated the workflow on data acquired with equipment widely used in Korea and obtained improved depth-domain seismic sections. Applied to Chirp SBP data, which are widely acquired domestically, this workflow appears to provide seismic sections suitable for interpretation.
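One step of the workflow, suppression of high-frequency noise on a single trace, can be sketched as follows; a plain zero-phase band-pass stands in here for the paper's dynamic S/N filter, and the sampling rate and corner frequencies are assumed for illustration.

```python
# Sketch: zero-phase Butterworth band-pass on a synthetic Chirp SBP trace.
# Corner frequencies and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 24000.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 0.05, 1.0 / fs)
trace = np.sin(2 * np.pi * 3500 * t) + 0.3 * np.random.randn(t.size)

b, a = butter(4, [2000.0, 7000.0], btype="band", fs=fs)
filtered = filtfilt(b, a, trace)               # zero-phase: no event time shift
```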

Oceanic Application of Satellite Synthetic Aperture Radar - Focused on Sea Surface Wind Retrieval - (인공위성 합성개구레이더 영상 자료의 해양 활용 - 해상풍 산출을 중심으로 -)

  • Jang, Jae-Cheol;Park, Kyung-Ae
• Journal of the Korean Earth Science Society / v.40 no.5 / pp.447-463 / 2019
  • Sea surface wind is a fundamental element for understanding oceanic phenomena and for analyzing changes in the Earth's environment caused by global warming. Research institutes worldwide have developed and operated scatterometers to observe sea surface wind accurately and continuously, with accuracies of approximately ±20° in wind direction and ±2 m s⁻¹ in wind speed. Because the spatial resolution of a scatterometer is 12.5-25.0 km, the applicability of the data to coastal areas is limited by the complicated coastlines and many islands around the Korean Peninsula. In contrast, Synthetic Aperture Radar (SAR), a microwave sensor, is an all-weather instrument that enables sea surface wind retrieval at high resolution (<1 km) and compensates for the coarse resolution of the scatterometer. In this study, we investigated the Geophysical Model Functions (GMFs), the algorithms for retrieving sea surface wind speed from SAR data, for each band (C-, L-, and X-band radar). We reviewed simulations of the backscattering coefficient as a function of relative wind direction, incidence angle, and wind speed using the LMOD, CMOD, and XMOD model functions, and analyzed the characteristics of each GMF. We also surveyed previous studies that validated wind speeds retrieved from SAR data with these GMFs. The accuracy of SAR-derived sea surface wind varied with observation mode, GMF type, reference data used for validation, preprocessing method, and the method used to calculate the relative wind direction. This study is expected to help potential users of SAR imagery retrieve wind speeds from SAR data in the coastal regions around the Korean Peninsula.
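The general structure of a C-band GMF and its scalar inversion for wind speed can be sketched as follows; the coefficient functions are placeholders, not the published CMOD coefficients, so the sketch shows only the form of the retrieval.

```python
# Schematic sketch of a GMF: sigma0 = B0(v, theta) * (1 + B1 cos(phi)
# + B2 cos(2 phi)), inverted for wind speed v by a scalar root search.
# B0, B1, B2 below are placeholders, NOT the published CMOD coefficients.
import numpy as np
from scipy.optimize import brentq

def gmf(v, phi_deg, theta_deg):
    phi = np.radians(phi_deg)
    b0 = 1e-3 * v**1.6 * np.cos(np.radians(theta_deg))   # placeholder
    b1, b2 = 0.2, 0.3                                     # placeholders
    return b0 * (1 + b1 * np.cos(phi) + b2 * np.cos(2 * phi))

def invert_wind(sigma0, phi_deg, theta_deg):
    # find v in [0.2, 40] m/s matching the observed backscatter
    return brentq(lambda v: gmf(v, phi_deg, theta_deg) - sigma0, 0.2, 40.0)

print(invert_wind(gmf(10.0, 45.0, 30.0), 45.0, 30.0))     # recovers ~10 m/s
```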

An Implementation of OTB Extension to Produce TOA and TOC Reflectance of LANDSAT-8 OLI Images and Its Product Verification Using RadCalNet RVUS Data (Landsat-8 OLI 영상정보의 대기 및 지표반사도 산출을 위한 OTB Extension 구현과 RadCalNet RVUS 자료를 이용한 성과검증)

  • Kim, Kwangseob;Lee, Kiwon
• Korean Journal of Remote Sensing / v.37 no.3 / pp.449-461 / 2021
  • Analysis Ready Data (ARD) for optical satellite images is a pre-processed product generated by applying the spectral characteristics and viewing parameters of each sensor. Atmospheric correction is one of the fundamental and complicated topics; it produces Top-of-Atmosphere (TOA) and Top-of-Canopy (TOC) reflectance from multi-spectral image sets. Most remote sensing software provides algorithms or processing schemes dedicated to these corrections for the Landsat-8 OLI sensor, and Google Earth Engine (GEE) provides direct cloud access to Landsat reflectance products, the USGS-based ARD (USGS-ARD). We implemented an atmospheric correction extension for the Orfeo ToolBox (OTB), an open-source remote sensing software package for manipulating and analyzing high-resolution satellite images. This is the first such tool, as OTB has not provided calibration modules for any Landsat sensor. Using this extension, we performed absolute atmospheric correction on Landsat-8 OLI images of Railroad Valley, United States (RVUS) and validated the reflectance products against the RVUS reflectance data sets in the RadCalNet portal. The results showed that the reflectance products from the OTB extension differed by less than 5% from the RadCalNet RVUS data. In addition, we performed a comparative analysis with reflectance products obtained from other open-source tools, the QGIS semi-automatic classification plugin and SAGA, as well as with the USGS-ARD products. Compared with the other two open-source tools, the reflectance products from the OTB extension showed high consistency with USGS-ARD, within the acceptable level of the RadCalNet RVUS measurement range. This study verified the atmospheric correction processor in the OTB extension and demonstrated its applicability to other satellite sensors, such as the Compact Advanced Satellite (CAS)-500 or new optical satellites.
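For reference, the TOA reflectance rescaling that such a tool applies to Landsat-8 OLI digital numbers follows the USGS formula rho = (M_p * DN + A_p) / sin(sun elevation), with M_p and A_p read from the scene's MTL metadata; a minimal sketch using the usual OLI metadata constants is given below (the DN array and sun elevation are illustrative).

```python
# Minimal sketch of Landsat-8 OLI TOA reflectance rescaling.
# mult/add are the customary OLI MTL values; DNs and sun angle are samples.
import numpy as np

def toa_reflectance(dn, sun_elev_deg, mult=2.0e-5, add=-0.1):
    rho = mult * dn.astype(np.float64) + add        # band-specific rescaling
    return rho / np.sin(np.radians(sun_elev_deg))   # solar-angle correction

dn = np.array([[7800, 8100], [7950, 8320]], dtype=np.uint16)
print(toa_reflectance(dn, sun_elev_deg=54.7))
```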

Comparative study of flood detection methodologies using Sentinel-1 satellite imagery (Sentinel-1 위성 영상을 활용한 침수 탐지 기법 방법론 비교 연구)

  • Lee, Sungwoo;Kim, Wanyub;Lee, Seulchan;Jeong, Hagyu;Park, Jongsoo;Choi, Minha
• Journal of Korea Water Resources Association / v.57 no.3 / pp.181-193 / 2024
  • The increasing atmospheric imbalance caused by climate change raises precipitation and, with it, the frequency of flooding, so the need for technologies to detect and monitor flood events is growing. Continuous monitoring is essential to minimize flood damage, and flood areas can be detected in Synthetic Aperture Radar (SAR) imagery, which is not affected by weather conditions. The observed data undergo a preprocessing step in which a median filter reduces noise. Classification techniques were then employed to separate water bodies from non-water bodies, with the aim of evaluating the effectiveness of each method for flood detection. In this study, the Otsu method and the Support Vector Machine (SVM) were used for this classification, and overall model performance was assessed with a confusion matrix. Suitability for flood detection was evaluated by comparing the Otsu method, an optimal threshold-based classifier, with SVM, a machine learning technique that minimizes misclassification through training. The Otsu method delineated the boundaries between water and non-water bodies well but produced more misclassifications under the influence of mixed substances; SVM yielded a lower false positive rate and was less sensitive to mixed substances. Consequently, SVM showed higher accuracy under non-flood conditions. The Otsu method was slightly more accurate than SVM under flood conditions, but the difference was less than 5% (Otsu: 0.93, SVM: 0.90), whereas in pre- and post-flood conditions the accuracy difference exceeded 15%, indicating that SVM is more suitable for water body and flood detection (Otsu: 0.77, SVM: 0.92). Based on these findings, more accurate detection of water bodies and floods is expected to help minimize flood-related damage and losses.
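The two classifiers compared above can be sketched on synthetic backscatter data as follows; the scene statistics and sparse training labels are illustrative assumptions, and the study's Sentinel-1 preprocessing chain is not reproduced.

```python
# Sketch: Otsu thresholding vs. SVM on a median-filtered synthetic SAR scene.
import numpy as np
from scipy.ndimage import median_filter
from skimage.filters import threshold_otsu
from sklearn.svm import SVC

rng = np.random.default_rng(1)
water = rng.normal(-20.0, 1.5, (64, 64))        # dB, low backscatter
land = rng.normal(-8.0, 2.5, (64, 64))          # dB, higher backscatter
img = np.where(rng.random((64, 64)) < 0.3, water, land)
img = median_filter(img, size=3)                # speckle-noise reduction

# Otsu: one global threshold separating the bimodal histogram
otsu_mask = img < threshold_otsu(img)

# SVM: trained on sparse labeled samples (labels from the simulation truth)
X = img.reshape(-1, 1)
y_train = (X[::10, 0] < -14.0).astype(int)
svm_mask = SVC(kernel="rbf").fit(X[::10], y_train).predict(X).reshape(img.shape)
```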

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob;Rim, BeanBonyka;Sung, Nak-Jun;Hong, Min
• Journal of Internet Computing and Services / v.21 no.4 / pp.17-23 / 2020
  • Biometric information, which measures human characteristics, has attracted great attention as a highly reliable security technology because there is no fear of theft or loss. Among biometric traits, fingerprints are mainly used in fields such as identity verification and identification. If a problem such as a wound, wrinkle, or moisture makes a fingerprint image difficult to authenticate, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image processing algorithm appropriate to the problem to resolve it. In such cases, artificial intelligence software that distinguishes fingerprint images with cuts and wrinkles makes it easy to check whether cuts or wrinkles are present, and the image can then be improved by selecting an appropriate algorithm. In this study, we built a database of 17,080 fingerprints by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 images from the Sokoto open data set, and the fingerprints of 98 Korean students. Criteria were established to determine whether images in the database show injuries or wrinkles, and the data were validated by experts. The training and test data sets consisted of the Cambodian and Sokoto data at a ratio of 8:2, and the data from the 98 Korean students served as the validation set. Using this data set, five CNN-based architectures (a classic CNN, AlexNet, VGG-16, ResNet50, and YOLO v3) were implemented, and a study was conducted to find the best-performing model. Among the five architectures, ResNet50 showed the best performance, with 81.51% accuracy.
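A minimal sketch of the kind of binary ResNet50 classifier evaluated above follows; the weights, dummy batch, and hyperparameters are placeholders, not the study's training setup.

```python
# Sketch: ResNet50 adapted to a two-class problem (damaged vs. clean
# fingerprint). Data and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=None)           # ImageNet weights optional
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(8, 3, 224, 224)                 # dummy fingerprint batch
y = torch.randint(0, 2, (8,))
loss = criterion(model(x), y)                   # one illustrative train step
loss.backward()
optimizer.step()
print(float(loss))
```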

Studies on Solvent Extraction and Analytical Application of Metal-dithizone Complexes(I). Separation and Determination of Trace Heavy Metals in Urine (Dithizone 금속착물의 용매추출 및 분석적 응용(제1보). 뇨중 흔적량 중금속 원소의 분리 정량)

  • Jeon, Moon-Kyo;Choi, Jong-Moon;Kim, Young-Sang
• Analytical Science and Technology / v.9 no.4 / pp.336-344 / 1996
  • The extraction into chloroform of trace cobalt, copper, nickel, cadmium, lead, and zinc in urine samples with an organic and alkali metal matrix, as dithizone complexes, was studied for determination by graphite furnace AAS. Experimental conditions such as the pretreatment of the urine, the pH of the sample solution, and the dithizone concentration in the solvent were optimized for effective extraction, and the essential conditions for back-extraction and digestion were studied as well. All organic materials in 100 mL of urine were destroyed by digestion with 30 mL of conc. HNO3 and 50 mL of 30% H2O2, the H2O2 being added dropwise in 5.0 mL portions. The analytes were extracted from the digested urine at pH 8.0 into 15.0 mL of 0.1% dithizone in chloroform by shaking for 90 minutes; the pH was adjusted with a commercial buffer solution. Of the analytes, cadmium, lead, and zinc were back-extracted from the solvent into 10.00 mL of 0.2 M HNO3 for determination; the others were dissolved with HNO3-H2O2 after evaporation of the organic solvent and diluted to 10.00 mL with deionized water. Synthetic digested urine was used to establish the optimum conditions and to plot calibration curves. Average recoveries of 77 to 109% were obtained for each element in spiked sample solutions, and the detection limits were Cd 0.09, Pb 0.59, Zn 0.18, Co 0.24, Cu 1.3, and Ni 1.7 ng/mL. It was concluded that this method can be applied to the determination of heavy metals in urine samples without interference from organic materials or major alkaline elements.


A Study on the Field Data Applicability of Seismic Data Processing using Open-source Software (Madagascar) (오픈-소스 자료처리 기술개발 소프트웨어(Madagascar)를 이용한 탄성파 현장자료 전산처리 적용성 연구)

  • Son, Woohyun;Kim, Byoung-yeop
• Geophysics and Geophysical Exploration / v.21 no.3 / pp.171-182 / 2018
  • We processed seismic field data using open-source software (Madagascar) to verify whether it is applicable to field data, which have a low signal-to-noise ratio and high uncertainty in velocities. Madagascar, which is based on Python, is generally considered well suited to the development of processing technologies because of its multidimensional data analysis capabilities and reproducibility. However, this open-source software has so far not been widely used for field data processing because of its complicated interfaces and data structure system. To verify its effectiveness on field data, we applied Madagascar to a typical seismic processing flow including data loading, geometry build-up, F-K filtering, predictive deconvolution, velocity analysis, normal moveout correction, stacking, and migration. The test data were acquired in the Gunsan Basin, Yellow Sea, using a 480-channel streamer and four air-gun arrays. The results of every processing step were compared with those from Landmark's ProMAX (SeisSpace R5000), a commercial processing package. Madagascar shows relatively high efficiency in data I/O and management, as well as reproducibility, and it performs quick and exact calculations in automated procedures such as stacking velocity analysis. There were no remarkable differences between the two packages after the signal-enhancement flows. For the deeper part of the subsurface image, however, the commercial software produced better results, simply because it offers various de-multiple flows and an interactive environment for delicate processing work that Madagascar lacks. Considering that many researchers around the world are developing data processing algorithms for Madagascar, we expect that open-source software such as Madagascar can be widely used for commercial-level processing, with the strengths of expandability, cost effectiveness, and reproducibility.
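For readers unfamiliar with Madagascar's reproducible scripting, the following is a minimal SConstruct sketch of a velocity-analysis-and-stack flow of the kind described above, assuming a CMP-sorted RSF dataset named cmps.rsf already exists; the program parameters are illustrative, not those used for the Gunsan Basin data.

```python
# Minimal Madagascar SConstruct sketch (rsf.proj). Assumes 'cmps.rsf'
# (CMP-sorted gathers) is present; all parameters are illustrative.
from rsf.proj import *

# band-pass filtering to suppress out-of-band noise
Flow('filtered', 'cmps', 'bandpass flo=5 fhi=60')

# semblance-based stacking velocity analysis and automatic picking
Flow('semblance', 'filtered', 'vscan semblance=y half=y v0=1400 dv=25 nv=100')
Flow('velocity', 'semblance', 'pick rect1=50 rect2=10')

# NMO correction with the picked velocity, then stack
Flow('nmo', 'filtered velocity', 'nmo velocity=${SOURCES[1]} half=y')
Flow('stack', 'nmo', 'stack')

Result('stack', 'grey title="Stacked section"')
End()
```

Running `scons` in the directory containing this script rebuilds every target whose inputs changed, which is the reproducibility the abstract highlights.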