• Title/Summary/Keyword: modeling errors

Search Results: 874

The NCAM Land-Atmosphere Modeling Package (LAMP) Version 1: Implementation and Evaluation (국가농림기상센터 지면대기모델링패키지(NCAM-LAMP) 버전 1: 구축 및 평가)

  • Lee, Seung-Jae;Song, Jiae;Kim, Yu-Jung
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.18 no.4
    • /
    • pp.307-319
    • /
    • 2016
  • A Land-Atmosphere Modeling Package (LAMP) for supporting agricultural and forest management was developed at the National Center for AgroMeteorology (NCAM). The package comprises two components: one is the Weather Research and Forecasting (WRF) modeling system coupled with the Noah-Multiparameterization (Noah-MP) Land Surface Model (LSM), and the other is an offline one-dimensional LSM. The objective of this paper is to briefly describe the two components of NCAM-LAMP and to evaluate their initial performance. The coupled WRF/Noah-MP system is configured with a parent domain over East Asia and three nested domains with a finest horizontal grid size of 810 m. The innermost domain covers two Gwangneung KoFlux sites, deciduous (GDK) and coniferous (GCK). The model is integrated for about 8 days with initial and boundary conditions taken from the National Centers for Environmental Prediction (NCEP) Final Analysis (FNL) data. The verification variables for the WRF/Noah-MP coupled system are 2-m air temperature, 10-m wind, 2-m humidity, and surface precipitation. Skill scores are calculated for each domain and for two dynamic vegetation options using the difference between observations from the Korea Meteorological Administration (KMA) and the simulated data. The accuracy of the precipitation simulation is examined using a contingency table, from which the Probability of Detection (POD) and the Equitable Threat Score (ETS) are computed. The standalone LSM simulation is conducted for one year with the original settings and is compared with KoFlux site observations of net radiation, sensible heat flux, latent heat flux, and soil moisture. According to the results, the innermost domain (810 m resolution) showed the minimum root mean square error among all domains for 2-m air temperature, 10-m wind, and 2-m humidity. Turning on the dynamic vegetation tended to reduce 10-m wind simulation errors in all domains. The first nested domain (7,290 m resolution) showed the highest precipitation score but gained little from the dynamic vegetation option. On the other hand, the offline one-dimensional Noah-MP LSM simulation captured the observed pattern and magnitude of radiative fluxes and soil moisture at the sites, leaving room for further improvement by supplementing the model input of leaf area index and finding a proper combination of model physics.
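The precipitation verification above relies on POD and ETS derived from a 2x2 contingency table. A minimal sketch of these standard categorical scores; the function name and example counts are illustrative, not taken from the paper:

```python
def pod_ets(hits, misses, false_alarms, correct_negatives):
    """Probability of Detection and Equitable Threat Score
    from a 2x2 forecast/observation contingency table."""
    total = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)
    # Hits expected from a random forecast with the same marginal totals.
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return pod, ets
```

ETS discounts the hits a random forecast would score, so it is less flattered by frequent events than POD alone.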

A Study on the Geophysical Characteristics and Geological Structure of the Northeastern Part of the Ulleung Basin in the East Sea (동해 울릉분지 북동부지역의 지구물리학적 특성 및 지구조 연구)

  • Kim, Chang-Hwan;Park, Chan-Hong
    • Economic and Environmental Geology
    • /
    • v.43 no.6
    • /
    • pp.625-636
    • /
    • 2010
  • The geophysical characteristics and geological structure of the northeastern part of the Ulleung Basin were investigated through interpretation of geophysical data, including gravity, magnetic, bathymetry, and seismic data. A relative correction was applied to reduce errors between sets of gravity and magnetic data obtained at different times and with different equipment. The northeastern margin of the Ulleung Basin is characterized by complicated morphology consisting of volcanic islands (Ulleungdo and Dokdo), the Dokdo seamounts, and a deep pathway (Korea Gap) with a maximum depth of -2500 m. Free-air anomalies generally reflect the topographic effect, with high anomalies over the volcanic islands and the Dokdo seamounts. Except for local anomalous zones over volcanic edifices, the gradual increase of the Bouguer anomalies from the Oki Bank toward the Ulleung Basin and the Korea Gap is related to a higher mantle level and denser crust in the central part of the Ulleung Basin. Complicated magnetic anomalies in the study area occur over the volcanic islands and seamounts. Power spectrum analysis of the Bouguer anomalies indicates that the depth to the averaged Moho discontinuity is -16.1 km. The inversion of the Bouguer anomaly shows that the Moho depth under the Korea Gap is about -16 to -17 km and that the Moho becomes gradually deeper toward the Oki Bank and the northwestern part of Ulleung Island. The inversion result suggests that the crust of the Ulleung Basin is thicker than normal oceanic crust. The result of the 2D gravity modeling is in good agreement with the results of the power spectrum analysis and the inversion of the Bouguer anomaly. Except for the volcanic edifices, the main pattern of the magnetization distribution shows a NE-SW lineation. The inversion results, the 2D gravity modeling, and the magnetization distribution support the possible NE-SW spreading of the Ulleung Basin proposed in other papers.
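The Moho depth estimate above comes from the power spectrum of the Bouguer anomalies. A minimal sketch of the classical spectral-depth idea, in which the log radial power spectrum decays linearly with wavenumber at a rate set by the average source depth; the function and synthetic data are illustrative, not the authors' code:

```python
import numpy as np

def spectral_depth(k, power):
    """Estimate the average source depth h from the slope of
    ln(power) versus angular wavenumber k, using the relation
    ln P(k) ~ c - 2 h k (Spector-Grant style depth estimation)."""
    slope, _intercept = np.polyfit(k, np.log(power), 1)
    return -slope / 2.0
```

In practice the fit is restricted to the low-wavenumber segment of the radially averaged spectrum, where the deep (Moho) sources dominate.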

Object Tracking Based on Exactly Reweighted Online Total-Error-Rate Minimization (정확히 재가중되는 온라인 전체 에러율 최소화 기반의 객체 추적)

  • JANG, Se-In;PARK, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.53-65
    • /
    • 2019
  • Object tracking is one of the important steps in achieving video-based surveillance systems and is considered an essential task alongside object detection and recognition. To perform object tracking, various machine learning methods (e.g., least squares, perceptron, and support vector machine) can be applied in different designs of tracking systems. Generative methods (e.g., principal component analysis) have generally been utilized for their simplicity and effectiveness. However, generative methods focus only on modeling the target object. Because of this limitation, discriminative methods (e.g., binary classification) were adopted to distinguish the target object from the background. Among machine learning methods for binary classification, total error rate minimization is one of the more successful. Total error rate minimization can achieve a global minimum thanks to a quadratic approximation to a step function, whereas other methods (e.g., support vector machine) seek local minima using nonlinear functions (e.g., the hinge loss). This quadratic approximation gives total error rate minimization appropriate properties for solving optimization problems in binary classification. However, total error rate minimization was originally formulated in a batch-mode setting. The batch-mode setting restricts it to offline learning, and with limited computing resources offline learning cannot handle large-scale data sets. Compared to offline learning, online learning can update its solution without storing all training samples during the learning process. With the growth of large-scale data sets, online learning has become an essential property for various applications. Since object tracking must handle data samples in real time, online learning based total error rate minimization methods are necessary to address object tracking problems efficiently. To meet this need, an online learning based total error rate minimization method was previously developed, but it relied on an approximately reweighted technique. Despite the approximation, this online version of total error rate minimization achieved good performance in biometric applications. However, the method assumes that total error rate minimization is achieved only asymptotically, as the number of training samples goes to infinity. Under this assumption, the approximation can continuously accumulate learning errors as training samples arrive, so the approximated online solution can drift toward a wrong solution, which can cause significant errors when applied to surveillance systems. In this paper, we propose an exactly reweighted technique that recursively updates the solution of total error rate minimization in an online manner. In contrast to the approximately reweighted version, an exactly reweighted online total error rate minimization is achieved. The proposed exact online learning method is then applied to object tracking. Our object tracking system adopts particle filtering, and its observation model combines generative and discriminative methods to leverage the advantages of both. In our experiments, the proposed object tracking system achieves promising performance on 8 public video sequences compared with competing object tracking systems, and paired t-tests are reported to evaluate the quality of the results. The proposed online learning method can be extended to deep learning architectures covering both shallow and deep networks. Moreover, other online learning methods that require an exact reweighting process can use our proposed reweighting technique. Beyond object tracking, the proposed method can be easily applied to object detection and recognition. Therefore, our methods can contribute to the online learning, object tracking, and object detection and recognition communities.
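The paper's exactly reweighted update is not reproduced here, but the key property it exploits, that a quadratic objective admits an exact closed-form recursive solution, can be illustrated with a generic recursive least-squares update via the Sherman-Morrison identity; all names and parameters below are illustrative:

```python
import numpy as np

class RecursiveLeastSquares:
    """Exact online minimizer of a ridge-regularized quadratic loss,
    illustrating how quadratic objectives allow exact recursive updates
    (unlike hinge-type losses, which generally do not)."""

    def __init__(self, dim, reg=1.0):
        self.P = np.eye(dim) / reg   # inverse of the accumulated Gram matrix
        self.w = np.zeros(dim)       # current exact solution

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (1.0 + x @ Px)             # Sherman-Morrison rank-1 update
        self.w = self.w + gain * (y - x @ self.w)
        self.P = self.P - np.outer(gain, Px)
        return self.w
```

After each sample, `w` equals the batch solution over all samples seen so far, which is the "exact" flavor of recursion the abstract contrasts with approximate reweighting.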

Analysis of the Statistical Methods used in Scientific Research published in The Korean Journal of Culinary Research (한국조리학회지에 게재된 학술적 연구의 통계적 기법 분석)

  • Rha, Young-Ah;Na, Tae-Kyun
    • Culinary science and hospitality research
    • /
    • v.21 no.6
    • /
    • pp.49-62
    • /
    • 2015
  • Given that statistical analysis is an essential component of foodservice-related research, the purpose of this review is to analyse the trends of statistical methods applied in foodservice-related research. To achieve this objective, this study carried out a content analysis of 251 out of 415 research articles published in The Korean Journal of Culinary Research (TKJCR) from January 2010 to December 2013; a total of 164 articles, namely those focusing on natural science research, qualitative research, and articles written in English, were excluded from the scope of this study. The results are as follows. First, 269 research articles applied quantitative research methods and only 10 applied qualitative research methods among the 279 research articles based on social science research methods. Second, 20 articles (8.0%) among the 251 did not specify the statistical methods or computer programs used for statistical analysis. Third, 228 articles (90.8%) used the SPSS program for data analysis. Fourth, in terms of frequency of use, frequency analysis was used most often, followed in order by reliability analysis, exploratory factor analysis, correlation analysis, regression analysis, structural equation modeling, confirmatory factor analysis, t-test, analysis of variance, and cross-tabulation analysis. However, 3 out of 56 research articles that used a t-test did not report a t-value, and 10 out of 64 articles that used ANOVA and found a significant difference in between-group means did not conduct a post-hoc test. Researchers in foodservice fields should therefore keep in mind that choosing and applying the correct statistical technique determines both the value and the success or failure of a study. To enhance the value and success of a study, it is necessary to use the proper statistical technique efficiently in order to prevent statistical errors.
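The review's point about reporting t-values can be illustrated with a minimal pooled two-sample t-test built from the standard library; the data and function name below are hypothetical:

```python
import math
from statistics import mean, variance

def pooled_t(g1, g2):
    """Two-sample t statistic with pooled variance
    (equal-variance assumption); returns (t, degrees of freedom)."""
    n1, n2 = len(g1), len(g2)
    sp2 = ((n1 - 1) * variance(g1) + (n2 - 1) * variance(g2)) / (n1 + n2 - 2)
    t = (mean(g1) - mean(g2)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

# Hypothetical scores from two groups.
g1 = [3.1, 3.5, 2.9, 3.8, 3.3]
g2 = [2.4, 2.8, 2.6, 3.0, 2.5]
t, df = pooled_t(g1, g2)
print(f"t({df}) = {t:.2f}")  # report the t-value itself, not only the p-value
```

A significant ANOVA F should likewise be followed by a post-hoc test (e.g., Tukey HSD) before claiming which group pairs differ.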

Development of Rule-Set Definition for Architectural Design Code Checking based on BIM - for Act on the Promotion and Guarantee of Access for the Disabled, the Aged, and Pregnant Women to Facilities and Information - (BIM 기반의 건축법규검토를 위한 룰셋 정의서 개발 - 장애인,노인,임산부 등의 편의증진 보장에 관한 법률 대상으로 -)

  • Kim, Yuri;Lee, Sang-Hya;Park, Sang-Hyuk
    • Korean Journal of Construction Engineering and Management
    • /
    • v.13 no.6
    • /
    • pp.143-152
    • /
    • 2012
  • As the Public Procurement Service announced that BIM adoption would be compulsory for all public construction projects from 2016, the importance of BIM is increasing. In addition, automatic code checking is significant for the quality control of BIM-based design. In this study, rule-sets were defined for the Act on the Promotion and Guarantee of Access for the Disabled, the Aged, and Pregnant Women to Facilities and Information. Three analytic steps were suggested to shortlist the objective clauses from the entire code: a frequency analysis using project reviews for architectural code compliance, a clause analysis of quantifiability, and an analysis of model-checking possibilities. The shortlisted clauses were transformed into a machine-readable rule-set definition. A case study was conducted to verify the adaptability and consistency of the rule-set definitions. Future studies should specify the methodologies for selecting objective clauses and quantify their indicators. Case studies should also be performed to determine pre-conditions in modeling and to check interoperability issues and other possible errors in models.
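A machine-readable rule-set of the kind described can be sketched as data plus a generic checker; the property names and thresholds below are illustrative placeholders, not the actual clauses of the Act:

```python
# Hypothetical machine-readable rules: each names a model property,
# a comparison operator, and a limit. Values are illustrative only.
RULES = [
    {"property": "ramp_slope", "op": "<=", "limit": 1 / 12},
    {"property": "door_clear_width_m", "op": ">=", "limit": 0.8},
]

OPS = {"<=": lambda v, lim: v <= lim, ">=": lambda v, lim: v >= lim}

def check(element):
    """Return the list of rule violations for one building element,
    skipping rules whose property is absent from the model data."""
    return [r["property"] for r in RULES
            if r["property"] in element
            and not OPS[r["op"]](element[r["property"]], r["limit"])]

print(check({"ramp_slope": 0.10, "door_clear_width_m": 0.75}))
# → ['ramp_slope', 'door_clear_width_m']
```

Keeping rules as data rather than code is what makes the definition portable to commercial model checkers.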

Efficient Correlation Channel Modeling for Transform Domain Wyner-Ziv Video Coding (Transform Domain Wyner-Ziv 비디오 부호를 위한 효과적인 상관 채널 모델링)

  • Oh, Ji-Eun;Jung, Chun-Sung;Kim, Dong-Yoon;Park, Hyun-Wook;Ha, Jeong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.3
    • /
    • pp.23-31
    • /
    • 2010
  • The increasing demand for low-power, low-complexity video encoders has motivated extensive research on distributed video coding (DVC), in which the encoder compresses frames without utilizing inter-frame statistical correlation. In the DVC encoder, contrary to a conventional video encoder, an error-control code compresses the video frames by representing them as syndrome bits. Meanwhile, the DVC decoder generates side information, modeled as a noisy version of the original video frames, and a decoder of the error-control code corrects the errors in the side information using the syndrome bits. The noisy observation, i.e., the side information, can be understood as the output of a virtual channel whose input is the original video frames, and the conditional probability of the virtual channel is assumed to follow a Laplacian distribution. Thus, the performance of a DVC system depends on the performance of the error-control code and of the optimal reconstruction step in the DVC decoder, and the performance of these two constituent blocks is directly related to a better estimate of the correlation channel parameter. In this paper, we propose an algorithm to estimate the parameter of the correlation channel, together with a low-complexity version of the algorithm. In particular, the proposed algorithm minimizes the squared error between the Laplacian probability distribution and the empirical observations. Finally, we show that the conventional algorithm can be improved by adopting a confidence window. The proposed algorithm yields PSNR gains of up to 1.8 dB and 1.1 dB on the Mother and Foreman video sequences, respectively.
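The estimation principle described, minimizing the squared error between the Laplacian distribution and the empirical observations, can be sketched as a simple grid search over the Laplacian rate; the function and settings are illustrative, not the paper's low-complexity algorithm:

```python
import numpy as np

def fit_laplacian_alpha(residuals, bins=51, alphas=np.linspace(0.05, 5.0, 500)):
    """Grid-search the Laplacian rate alpha minimizing the squared error
    between f(x) = (alpha/2) exp(-alpha|x|) and the empirical histogram
    of the side-information residuals."""
    hist, edges = np.histogram(residuals, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    errs = [np.sum((0.5 * a * np.exp(-a * np.abs(centers)) - hist) ** 2)
            for a in alphas]
    return alphas[int(np.argmin(errs))]
```

In a DVC decoder the `residuals` would be the difference between the side information and (an estimate of) the original frame; here any sample array works.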

A joint modeling of longitudinal zero-inflated count data and time to event data (경시적 영과잉 가산자료와 생존자료의 결합모형)

  • Kim, Donguk;Chun, Jihun
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.7
    • /
    • pp.1459-1473
    • /
    • 2016
  • In longitudinal studies, longitudinal data and survival data are often collected simultaneously over the passage of time. In this case, if the missingness in the longitudinal data is non-ignorable because it is correlated with the survival data, the estimated effect of the independent variable becomes biased when longitudinal data analysis alone is used without considering the relation between the two kinds of data. A joint model of longitudinal data and survival data has been studied as a solution to this problem, obtaining unbiased results by modeling the survival process as the cause of missingness. In this paper, a joint model of longitudinal zero-inflated count data and survival data is studied by replacing the longitudinal part with zero-inflated count data. A hurdle model and a proportional hazards model were used for the longitudinal zero-inflated count data and the survival data, respectively, and the two sub-models were linked under the assumption that their random effects follow a multivariate normal distribution. We used the EM algorithm for maximum likelihood estimation of the parameters, and the standard errors of the parameters were calculated using the profile likelihood method. In simulation, we observed better performance of the joint model in bias and coverage probability compared to the separate model.
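The hurdle model used for the longitudinal part handles zeros and positive counts separately. A minimal sketch of its per-observation log-likelihood with a zero-truncated Poisson for the positive part; the full joint model with random effects and EM is not reproduced, and the parameterization here is a common textbook form assumed for illustration:

```python
import math

def hurdle_poisson_loglik(y, pi, lam):
    """Log-likelihood of one count y under a Poisson hurdle model:
    P(Y = 0) = pi; for y > 0, a zero-truncated Poisson with rate lam,
    i.e. P(Y = y) = (1 - pi) * Poisson(y; lam) / (1 - exp(-lam))."""
    if y == 0:
        return math.log(pi)
    log_pois = -lam + y * math.log(lam) - math.lgamma(y + 1)
    return math.log(1 - pi) + log_pois - math.log1p(-math.exp(-lam))
```

Unlike a zero-inflated mixture, the hurdle form attributes all zeros to one process, which is what makes the two parts separable in estimation.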

Development of Prediction Models for Traffic Noise Considering Traffic Environment and Road Geometry (교통환경 및 도로기하구조를 고려한 도로교통소음 예측모형 개발에 관한 연구)

  • Oh, Seok Jin;Park, Je Jin;Choi, Gun Soo;Ha, Tae Jun
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.38 no.4
    • /
    • pp.587-593
    • /
    • 2018
  • The road traffic noise prediction programs currently in wide use in Korea are based on foreign prediction models. It is therefore necessary to verify whether those foreign models are suitable for the domestic road traffic environment. In addition, because the factors influencing the occurrence of traffic noise differ across prediction models, an in-depth analysis of the main factors should be conducted in advance. Therefore, this study examined the influencing factors and the existing prediction models used to forecast road traffic noise, and analyzed their relationship with the factors influencing the noise generated by driving vehicles through multiple regression analysis, developing a prediction model that takes into consideration the traffic environment and road geometric structure. The study then applied measured values to the existing road traffic noise prediction models (NIER, RLS-90) and to the derived model. As a result, in terms of the sum of absolute errors, the models ranked in the order NIER, RLS-90, and the developed model. Through this comparison and verification, the developed model is analyzed to provide basic results for future research on road traffic noise prediction modeling.
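The multiple regression step can be sketched as an ordinary least-squares fit of noise level on log traffic volume and mean speed; all data values below are hypothetical, not the study's measurements:

```python
import numpy as np

# Hypothetical observations: traffic volume (veh/h), mean speed (km/h),
# measured noise level (dB(A)). Illustrative numbers only.
volume = np.array([500, 1000, 2000, 3000, 4000, 6000], dtype=float)
speed = np.array([40, 50, 60, 60, 70, 80], dtype=float)
noise = np.array([62.1, 65.2, 68.4, 70.1, 71.9, 74.3])

# Road traffic noise grows roughly with log10(volume), so regress on it.
X = np.column_stack([np.ones_like(volume), np.log10(volume), speed])
coef, *_ = np.linalg.lstsq(X, noise, rcond=None)
pred = X @ coef
print("coefficients (intercept, log10 volume, speed):", np.round(coef, 2))
```

Comparing such a fitted model against NIER or RLS-90 then reduces to summing absolute prediction errors on a held-out measurement set.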

Analysis of Repeated Measurement Problem in SP data (SP 데이터의 Repeated Measurement Problem 분석)

  • CHO, Hye-Jin
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.1
    • /
    • pp.111-119
    • /
    • 2002
  • One of the advantages of SP (stated preference) methods is the possibility of obtaining a number of responses from each respondent. However, when the repeated observations from each respondent are analysed with a simple modeling method, a potential problem arises: significance is biased upward because of the repeated observations from each respondent. This study uses a variety of approaches to explore this issue and to test the robustness of the simple model estimates. Among several different approaches, the Jackknife method and Kocur's method were applied; the Jackknife method was implemented using the program JACKKNIFE. The model estimates from the Jackknife method and Kocur's method were compared with the uncorrected estimates in order to test whether a repeated measurement problem existed and the extent to which it affected the model estimates. The standard errors of the uncorrected model estimates and the Jackknife estimates were also compared. The results reveal that the t-ratios from Kocur's method are much lower than those of the uncorrected method and the Jackknife estimates, indicating that Kocur's method underestimates the significance of the coefficients. The Jackknife method produced almost the same coefficients as the uncorrected model but lower t-ratios. These results indicate that the coefficients of the uncorrected method are accurate but that their significance is somewhat overestimated. In this study, I concluded that the repeated measurement problem did exist in our data, but that it did not affect the model estimation results significantly. It is recommended that such a test become a standard procedure, and if an analysis based on the simple uncorrected method turns out to be influenced by the repeated measurement problem, it should be corrected.

Weibull Diameter Distribution Yield Prediction System for Loblolly Pine Plantations (테다소나무 조림지(造林地)에 대한 Weibull 직경분포(直經分布) 수확예측(收穫豫測) 시스템에 관(關)한 연구(硏究))

  • Lee, Young-Jin;Hong, Sung-Cheon
    • Journal of Korean Society of Forest Science
    • /
    • v.90 no.2
    • /
    • pp.176-183
    • /
    • 2001
  • Loblolly pine (Pinus taeda L.) is the most economically important timber-producing species in the southern United States, and much attention has been given to predicting diameter distributions for multiple-product yield estimation. Three-parameter Weibull diameter distribution yield prediction systems were developed for loblolly pine plantations. A parameter recovery procedure for the Weibull distribution function based on four percentile equations was applied to develop the diameter distribution yield prediction models. Four percentiles (0th, 25th, 50th, 95th) of the cumulative diameter distribution were predicted as a function of quadratic mean diameter. Individual tree height prediction equations were developed for the calculation of yields by diameter class, and with individual tree content prediction equations, the expected yield by diameter class can be computed. To reduce rounding-off errors, the Weibull cumulative upper-bound limit difference procedure applied in this study shows slightly better results than the upper- and lower-bound procedure applied in past studies. To evaluate the system, the predicted diameter distributions were tested against the observed diameter distributions using the Kolmogorov-Smirnov two-sample test at the α = 0.05 level to check whether any significant differences existed. Statistically, no significant differences were detected based on the 516 evaluation data sets. This diameter distribution yield prediction system will be useful for loblolly pine stand structure modeling, for updating forest inventories, and for evaluating investment opportunities.
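The parameter recovery procedure solves for the Weibull parameters from predicted percentiles. A minimal sketch using the 0th, 25th, and 95th percentiles, where taking the 0th percentile as the location parameter is an assumption made here for illustration, not necessarily the paper's exact recovery equations:

```python
import math

def weibull_from_percentiles(x0, x25, x95):
    """Recover three-parameter Weibull (location a, scale b, shape c)
    from the 0th, 25th, and 95th percentiles of a diameter distribution,
    using F(x) = 1 - exp(-((x - a)/b)^c) with a taken as the 0th percentile."""
    a = x0
    # Ratio of the quantile arguments -ln(1 - p) for p = 0.95 and 0.25.
    r = (-math.log(1 - 0.95)) / (-math.log(1 - 0.25))
    c = math.log(r) / math.log((x95 - a) / (x25 - a))
    b = (x25 - a) / (-math.log(1 - 0.25)) ** (1 / c)
    return a, b, c
```

Because the percentiles themselves are predicted from quadratic mean diameter, the whole diameter distribution can be generated from stand-level variables alone.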
