• Title/Summary/Keyword: Model calibration


Evaluation of stream flow prediction performance of hydrological model with MODIS LAI-based calibration (MODIS LAI 자료 기반의 수문 모형 보정을 통한 하천유량 예측 성능 평가)

  • Choi, Jeonghyeon;Kim, Sangdan
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.288-288
    • /
    • 2021
  • To predict runoff in ungauged basins through hydrologic modeling, and further to understand hydrologic processes there, new model calibration and evaluation strategies that differ from conventional approaches are needed. The growing availability of satellite observations offers an opportunity to secure the predictive performance of hydrologic models in ungauged basins. Evapotranspiration is one of the main components of the basin water cycle, and because vegetation information is closely linked to the evapotranspiration process, it is important information for understanding that process indirectly. This study investigates the potential of an eco-hydrological model calibrated only with satellite-based vegetation information to predict streamflow in ungauged basins. Because this calibration method does not require observed streamflow data, it should be particularly useful for streamflow prediction in ungauged basins. Modeling experiments were performed for five dam basins with observed streamflow records (Namgang, Andong, Hapcheon, and Imha dams). A lumped hydrologic model coupled with vegetation dynamics was used, and the model was calibrated with MODIS Leaf Area Index (LAI) data. Daily streamflow produced by the calibrated model was compared with observed streamflow, and also with results from the conventional calibration method based on observed streamflow. The goodness-of-fit of streamflow obtained by calibrating the model with the LAI time series was satisfactory in the Namgang, Andong, and Hapcheon dam basins, where KGE exceeded the threshold, while KGE fell below the threshold in the Imha dam basin. However, since calibration against observed streamflow also yielded a poor fit for that basin, this appears to stem not from the LAI-based approach but from errors in the input data or the model itself that make it difficult to represent the basin's characteristics. These results show that constraining the model only with vegetation information, which is central to the evapotranspiration process, can reproduce the basin water cycle fairly satisfactorily.
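
The KGE threshold test mentioned above can be made concrete with a short sketch. This is not the authors' code; it is a minimal implementation of the standard Kling-Gupta efficiency decomposition (correlation, variability ratio, bias ratio) commonly used to judge streamflow fit.

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency; 1.0 indicates a perfect fit."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r = np.corrcoef(sim, obs)[0, 1]   # linear correlation with observations
    alpha = sim.std() / obs.std()     # variability ratio
    beta = sim.mean() / obs.mean()    # bias ratio
    return 1.0 - np.sqrt((r - 1.0)**2 + (alpha - 1.0)**2 + (beta - 1.0)**2)
```

The abstract does not state which threshold was used; values such as KGE > 0.5, or the mean-flow benchmark of about -0.41, appear in the literature.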


Airborne Hyperspectral Imagery availability to estimate inland water quality parameter (수질 매개변수 추정에 있어서 항공 초분광영상의 가용성 고찰)

  • Kim, Tae-Woo;Shin, Han-Sup;Suh, Yong-Cheol
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.1
    • /
    • pp.61-73
    • /
    • 2014
  • This study reviewed the application of Airborne Hyperspectral Imagery (A-HSI) to water quality estimation and tested the estimation of suspended solids in a section of the Han River using available in-situ data. Water quality was estimated by two methods. One uses observation data such as the downwelling radiance at the water surface and the scattering and reflectance within the water body. The other is linear regression between in-situ water quality measurements and upwelling data such as at-sensor radiance (or reflectance). Both methods produce meaningful remote sensing estimates, but the results depend strongly on the auxiliary datasets, i.e., the in-situ water quality and water body scattering measurements. The test covered a section of the Han River downstream of Paldang Dam. We applied linear regression between AISA Eagle hyperspectral sensor data and in-situ water quality measurements. The regression for a meaningful band combination yielded $-24.847+0.013L_{560}$, where $L_{560}$ is the radiance at 560 nm, with an R-square of 0.985. For comparison with the Multispectral Imagery (MSI) case, we simulated Landsat TM by spectral resampling. The MSI regression yielded -55.932 + 33.881 (TM1/TM3) in radiance with an R-square of 0.968. The Suspended Solid (SS) concentration was about 3.75 mg/l in the in-situ data; the SS concentration estimated by A-HSI was about 3.65 mg/l, and by MSI about 5.85 mg/l at the same location, showing an overestimation trend for the MSI-based estimate. For more precise estimation and practical use, it is necessary to minimize sun glint across the whole image, to construct an elaborate flight plan considering the solar altitude angle, and to establish good pre-processing and calibration systems. Through the literature review and the test with general methods, we found limitations and restrictions such as precise atmospheric correction, the sample count of water quality measurements, the retrieval of spectral bands from A-HSI, adequate linear regression model selection, and quantitative calibration/validation methods.
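
The single-band regression reported above (SS estimated from the radiance at 560 nm) can be sketched as an ordinary least-squares fit. The numbers below are hypothetical stand-ins, not the paper's measurements; only the model form, SS = a + b·L560, follows the abstract.

```python
import numpy as np

# Hypothetical at-sensor radiance at 560 nm and in-situ SS (mg/l);
# real values would come from the A-HSI image and field sampling.
L560 = np.array([2100.0, 2250.0, 2400.0, 2550.0, 2700.0])
ss = np.array([2.5, 4.4, 6.3, 8.4, 10.2])

slope, intercept = np.polyfit(L560, ss, 1)   # SS = intercept + slope * L560
pred = intercept + slope * L560
r2 = 1.0 - np.sum((ss - pred)**2) / np.sum((ss - ss.mean())**2)
```

The same fit applied to a band ratio (e.g., TM1/TM3) instead of a single band gives the MSI-style model quoted in the abstract.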

Validation of Surface Reflectance Product of KOMPSAT-3A Image Data: Application of RadCalNet Baotou (BTCN) Data (다목적실용위성 3A 영상 자료의 지표 반사도 성과 검증: RadCalNet Baotou(BTCN) 자료 적용 사례)

  • Kim, Kwangseob;Lee, Kiwon
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.6_2
    • /
    • pp.1509-1521
    • /
    • 2020
  • Experiments for validation of the surface reflectance produced by the Korea Multi-Purpose Satellite (KOMPSAT-3A) were conducted using Chinese Baotou (BTCN) data, one of the four sites of the Radiometric Calibration Network (RadCalNet), a portal that provides spectrophotometric reflectance measurements. The top-of-atmosphere reflectance and surface reflectance products were generated using an extension program of the open-source Orfeo ToolBox (OTB), which was redesigned and implemented to extract those reflectance products in batches. Three image data sets from 2016, 2017, and 2018 were processed with two versions of the sensor model, ver. 1.4 released in 2017 and ver. 1.5 released in 2019, whose variables such as gain and offset are applied in the absolute atmospheric correction. The results of applying these sensor model variables showed that the reflectance products from ver. 1.4 matched the RadCalNet BTCN data relatively well compared to those from ver. 1.5. In addition, surface reflectance products obtained from Landsat-8 images by the USGS LaSRC algorithm and from Sentinel-2B images by the SNAP Sen2Cor program were used to quantitatively verify the differences from those of KOMPSAT-3A. Relative to the RadCalNet BTCN data, the differences in KOMPSAT-3A surface reflectance were highly consistent: -0.031 to 0.034 for the B band, -0.001 to 0.055 for the G band, -0.072 to 0.037 for the R band, and -0.060 to 0.022 for the NIR band. The KOMPSAT-3A surface reflectance also reached an accuracy level suitable for further applications compared to that of the Landsat-8 and Sentinel-2B images. These results are meaningful in confirming the applicability of Analysis Ready Data (ARD) for surface reflectance from high-resolution satellites.
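
The gain/offset sensor-model variables mentioned above enter the absolute radiometric correction roughly as follows: digital numbers are scaled to at-sensor radiance, then converted to top-of-atmosphere reflectance. This is a generic sketch of that standard conversion; the coefficients, solar irradiance, and geometry below are placeholders, not actual KOMPSAT-3A sensor-model values.

```python
import numpy as np

def toa_reflectance(dn, gain, offset, esun, d_au, sun_elev_deg):
    """DN -> at-sensor radiance via gain/offset, then TOA reflectance."""
    radiance = gain * dn + offset                      # absolute calibration
    cos_sza = np.cos(np.radians(90.0 - sun_elev_deg))  # solar zenith cosine
    return np.pi * radiance * d_au**2 / (esun * cos_sza)
```

A change of sensor-model version changes only `gain` and `offset`, which is why the ver. 1.4 and ver. 1.5 products above can differ band by band.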

Analysis of the Effect of Objective Functions on Hydrologic Model Calibration and Simulation (목적함수에 따른 매개변수 추정 및 수문모형 정확도 비교·분석)

  • Lee, Gi Ha;Yeon, Min Ho;Kim, Young Hun;Jung, Sung Ho
    • Journal of Korean Society of Disaster and Security
    • /
    • v.15 no.1
    • /
    • pp.1-12
    • /
    • 2022
  • An automatic optimization technique is used to estimate the optimal parameters of a hydrologic model, and different hydrologic responses can result depending on the objective function. In this study, the parameters of an event-based rainfall-runoff model were estimated using various objective functions, the reproducibility of the hydrograph according to each objective function was evaluated, and appropriate objective functions were proposed. As the rainfall-runoff model, the storage function model (SFM), a lumped hydrologic model used for runoff simulation in the current Korean flood forecasting system, was selected. To evaluate the hydrograph reproducibility for each objective function, 9 rainfall events were selected for the Cheoncheon basin, upstream of Yongdam Dam, and 7 widely used objective functions were selected for parameter estimation of the SFM for each rainfall event. The reproducibility of the hydrographs simulated with the optimal parameter sets from the different objective functions was then analyzed. As a result, RMSE, NSE, and RSR, which include an error square term in the objective function, showed the highest accuracy for all rainfall events except Event 7. In addition, PBIAS and VE, which include an error term relative to the observed flow, also showed relatively stable hydrograph reproducibility. However, MIA, which adjusts parameters sensitive to high flow and low flow simultaneously, showed very low hydrograph reproducibility.
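
The error-squared objective functions singled out above (RMSE, NSE, RSR) and the bias-type PBIAS have compact standard definitions; the sketch below uses those textbook forms, not the authors' code, and omits VE and MIA.

```python
import numpy as np

def rmse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return np.sqrt(np.mean((sim - obs)**2))

def nse(sim, obs):  # Nash-Sutcliffe efficiency; 1 is a perfect fit
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

def rsr(sim, obs):  # RMSE normalized by the observed standard deviation
    return rmse(sim, obs) / np.asarray(obs, float).std()

def pbias(sim, obs):  # percent bias relative to observed volume
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)
```

An optimizer minimizes RMSE, RSR, or |PBIAS| and maximizes NSE; all three squared-error metrics weight flood peaks heavily, which is consistent with their strong event reproducibility reported above.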

A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. However, there are some exceptions. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than those among new cars and used cars of different models. Specifically, I apply the nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage.
The data for this study are drawn from Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas in the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009-June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both calibration and holdout samples. The other comparison model, which assumes choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified since the dissimilarity parameter (i.e., inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczo's iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, a used car of the same model keeps decreasing its price until it regains the lost market share to maintain the status quo. The new car settles down to a lowered market share due to the used car's reaction.
The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reactive cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, suggesting a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model. In the second simulation, I used Elantra to reconfirm the result for Jetta and came to the same conclusion. In the third simulation, I had Corolla offer a $1,000 rebate to see what the best response for Elantra's new and used cars could be. Interestingly, Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). In future research, we might want to explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or to the data structure transmitted from a typical car dealership. In a typical car dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there are dealerships that carry both new and used cars of various models, then the NUB model might fit the data as well as the BNU model.
Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
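
As a rough illustration of the BNU structure discussed above (model choice first, then new-vs-used), the two-stage nested logit probabilities with a dissimilarity parameter can be sketched as follows. The utilities are hypothetical placeholders; the paper's estimated parameters are not reproduced.

```python
import numpy as np

def bnu_probabilities(v, lam):
    """Two-stage nested logit: car model choice first, then new-vs-used.
    v maps model name -> {'new': utility, 'used': utility};
    lam is the dissimilarity (inclusive value) parameter, 0 < lam <= 1."""
    iv = {m: np.log(sum(np.exp(u / lam) for u in alts.values()))
          for m, alts in v.items()}                    # inclusive values
    denom = sum(np.exp(lam * s) for s in iv.values())
    probs = {}
    for m, alts in v.items():
        p_m = np.exp(lam * iv[m]) / denom              # first stage: model
        within = sum(np.exp(u / lam) for u in alts.values())
        for cond, u in alts.items():                   # second stage: new/used
            probs[(m, cond)] = p_m * np.exp(u / lam) / within
    return probs
```

With lam = 1 this collapses to the IIA model; an estimate of lam greater than 1 is the mis-specification signal mentioned in the abstract.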


Location Accuracy of Unmanned Aerial Photogrammetry Results According to Change of Number of Ground Control Points (지상기준점 개수 변화에 따른 무인항공 사진측량 성과물의 위치 정확도 분석)

  • YUN, Bu-Yeol;SUNG, Sang-Min
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.2
    • /
    • pp.24-33
    • /
    • 2018
  • DSM and orthoimages, the representative products of UAV photogrammetry, are high-quality spatial information data widely used in various fields of the spatial information industry in recent years. However, UAV photogrammetry has the problem that the quality of its outputs deteriorates depending on the flight altitude, camera calibration, weather conditions at the time of shooting, GPS/IMU performance, and the number of ground control points. The purpose of this study is to analyze the positional accuracy of unmanned aerial photogrammetry according to the change in the number of ground control points. Experiments were made with a fixed-wing UAV, with shooting altitudes of 130 m and 260 m. The number of ground control points used was 9, 8, 5, and 4, respectively, and ten checkpoints were used. The XY RMSE of the orthoimage and the Z RMSE of the DSM were compared and analyzed. In addition, since the resolution of the orthoimage affects the operator's judgment in verifying planimetric positional accuracy, the visual resolution was analyzed using a Siemens star target. The analysis showed that the variation in vertical positional accuracy is larger than the variation in planimetric positional accuracy as the number of ground control points changes. Also, the higher the flying height, the greater the effect of the change in ground control points on positional accuracy.

Evaluation of Soil Parameters Using Adaptive Management Technique (적응형 관리 기법을 이용한 지반 물성 값의 평가)

  • Koo, Bonwhee;Kim, Taesik
    • Journal of the Korean GEO-environmental Society
    • /
    • v.18 no.2
    • /
    • pp.47-51
    • /
    • 2017
  • In this study, the optimization algorithm by inverse analysis, the core of the adaptive management technique, was adopted to update soil engineering properties based on the ground response during construction. The adaptive management technique is a framework wherein construction and design procedures are adjusted based on observations and measurements made as construction proceeds. To evaluate the performance of the adaptive management technique, numerical simulations of triaxial tests and a synthetic deep excavation were conducted with the Hardening Soil model. To conduct the analysis effectively, the effective parameters among those employed in the model were selected based on composite scaled sensitivity analysis. The results of undrained triaxial tests performed on soft Chicago clays were used for parameter calibration. The simulation of the synthetic deep excavation was conducted assuming that the soil engineering parameters obtained from the triaxial simulation represent the actual field condition; these values were used as the reference values. The observation for the synthetic deep excavation simulations was the horizontal displacement of the support wall, which has the highest composite scaled sensitivity among the possible observations. It was found that the horizontal displacements of the support wall computed with various initial soil properties converged to the reference displacement by using the adaptive management technique.
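
Composite scaled sensitivity, used above to select effective parameters and the most informative observation, can be sketched with a finite-difference approximation. This follows the common definition (root mean square of parameter-scaled sensitivities over the observations); the `model` callable and step size are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def composite_scaled_sensitivity(model, params, j, rel_step=0.01):
    """CSS of parameter j: sqrt(mean over observations of
    ((dy_i/db_j) * b_j)^2), derivative by forward finite difference."""
    params = np.asarray(params, float)
    base = np.asarray(model(params), float)
    step = rel_step * params[j]
    bumped = params.copy()
    bumped[j] += step
    dy_db = (np.asarray(model(bumped), float) - base) / step
    return np.sqrt(np.mean((dy_db * params[j])**2))
```

Parameters with low CSS barely move the predictions and are poor candidates for calibration; observations that maximize CSS, like the wall displacement above, constrain the inversion best.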

NEAR-INFRARED STUDIES ON STRUCTURE-PROPERTIES RELATIONSHIP IN HIGH DENSITY AND LOW DENSITY POLYETHYLENE

  • Sato, Harumi;Simoyama, Masahiko;Kamiya, Taeko;Amari, Trou;Sasic, Slobodan;Ninomiya, Toshio;Siesler, Heinz-W.;Ozaki, Yukihiro
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1281-1281
    • /
    • 2001
  • Near-infrared (NIR) spectra have been measured for high-density (HDPE), linear low-density (LLDPE), and low-density (LDPE) polyethylene in pellets or thin films. The obtained spectra have been analyzed by conventional spectroscopic analysis methods and chemometrics. By using the second derivative, principal component analysis (PCA), and two-dimensional (2D) correlation analysis, we could separate many overlapping bands in the NIR region. It was found that the intensities of some bands are sensitive to the density and crystallinity of PE. This may be the first time that such bands in the NIR region have been discussed. Correlations of such marker bands among the NIR spectra have also been investigated. This sort of investigation is very important not only for further understanding the vibrational spectra of various kinds of PE but also for quality control of PE by vibrational spectroscopy. Figures 1(a) and (b) show a NIR reflectance spectrum of one of the LLDPE samples and that of PE, respectively. Figure 2 shows a PC weight loadings plot of factor 1 for a PCA score plot of the 16 kinds of LLDPE and PE based upon their 51 NIR spectra in the 1100-1900 nm region. The PC loadings plot separates the bands due to the $CH_3$ groups from those arising from the $CH_2$ groups, allowing one to make band assignments. The 2D correlation analysis is also powerful for band enhancement, and the band assignments based upon PCA are in good agreement with those by the 2D correlation analysis (figure omitted). We have built a calibration model that predicts the density of LLDPE by use of partial least squares (PLS) regression. From the loadings plot of regression coefficients for the model, we suggest that the bands at 1542, 1728, and 1764 nm are very sensitive to changes in density and crystallinity.
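
The PCA step described above (scores separating samples, loadings isolating density-sensitive bands) can be sketched via an SVD of mean-centered spectra. The spectra below are synthetic, with an artificial density-sensitive band planted at 1728 nm purely to show how a loading peak marks such a band; they are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(1100, 1900, 200)          # nm, as in the abstract
density = rng.uniform(0.91, 0.96, size=16)          # synthetic PE densities
band = np.exp(-((wavelengths - 1728.0) / 20.0)**2)  # planted density-sensitive band
spectra = np.outer(density - density.mean(), band)
spectra += 0.001 * rng.standard_normal(spectra.shape)  # instrument noise

# PCA via SVD of the mean-centered spectra: scores = U*S, loadings = rows of Vt
X = spectra - spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores, loadings = U * S, Vt
peak_nm = wavelengths[np.argmax(np.abs(loadings[0]))]  # where PC1 loading peaks
```

In real spectra, several overlapping bands load on each component, which is why the second derivative and 2D correlation analysis are used alongside PCA for band assignment.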


Integrating UAV Remote Sensing with GIS for Predicting Rice Grain Protein

  • Sarkar, Tapash Kumar;Ryu, Chan-Seok;Kang, Ye-Seong;Kim, Seong-Heon;Jeon, Sae-Rom;Jang, Si-Hyeong;Park, Jun-Woo;Kim, Suk-Gu;Kim, Hyun-Jin
    • Journal of Biosystems Engineering
    • /
    • v.43 no.2
    • /
    • pp.148-159
    • /
    • 2018
  • Purpose: Unmanned aerial vehicle (UAV) remote sensing was applied to test various vegetation indices and build prediction models of the protein content of rice for monitoring grain quality and proper management practice. Methods: Image acquisition was carried out using NIR (Green, Red, NIR), RGB, and RE (Blue, Green, Red-edge) cameras mounted on a UAV. Sampling was done synchronously at the geo-referenced points, and GPS locations were recorded. Paddy samples were air-dried to 15% moisture content, then dehulled and milled to 92% milling yield, and the protein content was measured by near-infrared spectroscopy. Results: An artificial neural network showed the best performance, with an $R^2$ (coefficient of determination) of 0.740, an NSE (Nash-Sutcliffe model efficiency coefficient) of 0.733, and an RMSE (root mean square error) of 0.187% over all 54 samples, compared with the models developed by PR (polynomial regression), SLR (simple linear regression), and PLSR (partial least squares regression). The PLSR calibration models showed results almost similar to PR, with 0.663 ($R^2$) and 0.169% (RMSE) for cloud-free samples and 0.491 ($R^2$) and 0.217% (RMSE) for cloud-shadowed samples; however, the validation models performed poorly. This study revealed a highly significant correlation between NDVI (normalized difference vegetation index) and protein content in rice. For the cloud-free samples, the SLR models showed $R^2=0.553$ and RMSE = 0.210%, and for the cloud-shadowed samples, $R^2=0.479$ and RMSE = 0.225%. Conclusion: There is a significant correlation between the spectral bands and grain protein content. Artificial neural networks have a strong advantage in fitting nonlinear problems when a sigmoid activation function is used in the hidden layer. Quantitatively, the neural network model obtained a higher-precision result, with a mean absolute relative error (MARE) of 2.18% and an RMSE of 0.187%.
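
The NDVI-based simple linear regression reported in the Results can be sketched as follows. The band values and protein contents are hypothetical placeholders; only the model form (protein regressed on NDVI) follows the abstract.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# Hypothetical plot-level band reflectances and measured protein (%);
# the paper's actual samples and coefficients are not reproduced here.
nir_band = np.array([0.42, 0.45, 0.48, 0.51, 0.54])
red_band = np.array([0.10, 0.09, 0.08, 0.07, 0.06])
protein = np.array([6.1, 6.4, 6.8, 7.1, 7.4])

x = ndvi(nir_band, red_band)
slope, intercept = np.polyfit(x, protein, 1)  # simple linear regression on NDVI
pred = intercept + slope * x
r2 = 1.0 - np.sum((protein - pred)**2) / np.sum((protein - protein.mean())**2)
```

The neural network variants mentioned above replace this linear map with a sigmoid hidden layer, which is what lets them capture nonlinear band-protein relationships.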

Study on Enhancement of TRANSGUIDE Outlier Filter Method under Unstable Traffic Flow for Reliable Travel Time Estimation -Focus on Dedicated Short Range Communications Probes- (불안정한 교통류상태에서 TRANSGUIDE 이상치 제거 기법 개선을 통한 교통 통행시간 예측 향상 연구 -DSRC 수집정보를 중심으로-)

  • Khedher, Moataz Bellah Ben;Yun, Duk Geun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.3
    • /
    • pp.249-257
    • /
    • 2017
  • Filtering the travel time records obtained from DSRC probes is essential for better estimation of link travel time. This study addresses a major deficiency of TRANSGUIDE in removing anomalous data: the algorithm cannot handle unstable traffic flow conditions in time intervals where fluctuations are observed. This study therefore proposes an algorithm that overcomes the weaknesses of TRANSGUIDE. If TRANSGUIDE fails to validate a sufficient number of observations within a time interval, another process specifies a new validity range based on the median absolute deviation (MAD), a common statistical approach. The proposed algorithm introduces parameters ${\alpha}$ and ${\beta}$ to bound the maximum allowed outliers within a time interval in response to particular traffic flow conditions. Parameter estimation relies on historical data because the parameters need to be updated frequently. To test the proposed algorithm, DSRC probe travel time data were collected from a multilane highway section. The model was calibrated by statistical analysis of the cumulative relative frequency. A qualitative evaluation shows satisfactory performance: the proposed model overcomes the deficiency associated with rapid changes in travel time.
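
A minimal sketch of the MAD-based validity range described above, assuming the usual 1.4826 Gaussian consistency factor; the single `n_sigma` cutoff stands in for the paper's ${\alpha}$ and ${\beta}$ parameters, whose exact roles the abstract does not specify.

```python
import numpy as np

def mad_valid_range(travel_times, n_sigma=3.0):
    """Validity range from the median absolute deviation; 1.4826 scales
    the MAD to a standard-deviation equivalent for Gaussian data."""
    t = np.asarray(travel_times, float)
    med = np.median(t)
    mad = 1.4826 * np.median(np.abs(t - med))
    return med - n_sigma * mad, med + n_sigma * mad

def mad_filter(travel_times, n_sigma=3.0):
    """Keep only travel times inside the MAD validity range."""
    lo, hi = mad_valid_range(travel_times, n_sigma)
    return [t for t in travel_times if lo <= t <= hi]
```

Because the median and MAD are themselves robust to outliers, this range stays stable even when a few probe records are wildly long, which is the failure mode of mean/standard-deviation filters under fluctuating flow.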