• Title/Summary/Keyword: Kernel Density Analysis

118 search results

A Study on the Trade Area Analysis Model based on GIS - A Case of Huff probability model - (GIS 기반의 상권분석 모형 연구 - Huff 확률모형을 중심으로 -)

  • Son, Young-Gi; An, Sang-Hyun; Shin, Young-Chul
    • Journal of the Korean Association of Geographic Information Studies, v.10 no.2, pp.164-171, 2007
  • This research combined a GIS spatial analysis model with the Huff probability model to analyze the trade area of an area center. Base maps were constructed from a land registration map in the LMIS (Land Management Information System) for Bokdae-dong, Cheongju-si, with surveyed attributes such as type of business and number of households. A kernel density function and the NNI (Nearest Neighbor Index) were used to estimate store-distribution center areas within neighborhood life zones, and the center point and scale of each area were estimated from them. The Huff probability model was then applied to delineate trade areas for the estimated center areas, and the results were mapped. This study thus describes a method for applying the Huff probability model together with the kernel density function and NNI among GIS spatial analysis techniques. Trade areas can be delineated more exactly with this method, which can aid merchants in founding small-sized enterprises.
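The Huff model applied in this abstract has a standard closed form: the probability that a consumer patronizes store j is its distance-discounted attractiveness divided by the sum over all candidate stores. A minimal sketch (store sizes, distances, and exponents below are illustrative, not from the paper):

```python
def huff_probabilities(attractiveness, distances, alpha=1.0, beta=2.0):
    """Huff probability that a consumer at one origin patronizes each store j:
    P_j = (S_j**alpha / d_j**beta) / sum_k (S_k**alpha / d_k**beta)."""
    utilities = [s ** alpha / d ** beta for s, d in zip(attractiveness, distances)]
    total = sum(utilities)
    return [u / total for u in utilities]

# Three candidate stores: floor areas and distances from one origin point.
probs = huff_probabilities([1000, 500, 2000], [1.0, 0.5, 2.0])
print(probs)  # probabilities sum to 1
```

In a GIS workflow this is evaluated per origin cell, and cells are assigned to the store with the highest probability to delineate the trade area.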


Probabilistic Forecasting of Seasonal Inflow to Reservoir (계절별 저수지 유입량의 확률예측)

  • Kang, Jaewon
    • Journal of Environmental Science International, v.22 no.8, pp.965-977, 2013
  • Reliable long-term streamflow forecasting is invaluable for water resources planning and management, which allocate water supply according to users' demand. Probabilistic forecasts are needed to establish risk-based reservoir operation policies, and they can be useful for users who assess and manage risk when making decisions in response to forecasts. Probabilistic forecasting of seasonal inflow to Andong dam is performed and assessed using predictors selected from sea surface temperature and 500 hPa geopotential height data. Categorical probability forecasts by Piechota's method and by logistic regression analysis, and probability forecasts by a conditional probability density function, are used to forecast seasonal inflow. A kernel density function is used both in Piechota's method and in the conditional-density forecasts. The categorical probability forecasts are assessed by the Brier skill score and found to be better than the reference forecasts. The forecasts using the conditional probability density function are assessed qualitatively and by transforming them into categorical probability forecasts; this assessment shows that they are much better than the forecasts by Piechota's method and by logistic regression analysis, except for the winter season data.
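The Brier skill score used to assess the categorical forecasts compares the forecast's Brier score against that of a reference forecast such as climatology. A small sketch with made-up tercile forecasts (the numbers are illustrative only):

```python
def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast category probabilities
    and the observed outcome indicator (1 for the observed category, else 0)."""
    n = len(forecast_probs)
    return sum(
        sum((p - o) ** 2 for p, o in zip(probs, obs))
        for probs, obs in zip(forecast_probs, outcomes)
    ) / n

def brier_skill_score(bs_forecast, bs_reference):
    """BSS = 1 - BS_f / BS_ref; positive values beat the reference."""
    return 1.0 - bs_forecast / bs_reference

# Tercile forecasts (below/near/above normal) vs. observed categories.
fcst = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]
obs = [[1, 0, 0], [0, 0, 1]]
ref = [[1.0 / 3.0] * 3] * 2          # climatological reference forecast
print(brier_skill_score(brier_score(fcst, obs), brier_score(ref, obs)))
```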

Smoothing Parameter Selection in Nonparametric Spectral Density Estimation

  • Kang, Kee-Hoon; Park, Byeong-U; Cho, Sin-Sup; Kim, Woo-Chul
    • Communications for Statistical Applications and Methods, v.2 no.2, pp.231-242, 1995
  • In this paper we consider a kernel-type estimator of the spectral density at a point in the analysis of stationary time series data. The kernel estimator entails the choice of a smoothing parameter called the bandwidth. A data-based bandwidth choice is proposed; it is obtained by solving an equation similar to that of Sheather (1986) for probability density estimation. A Monte Carlo study reveals that spectral density estimates using the data-based bandwidths show comparatively good performance.
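A kernel-type spectral estimator smooths the raw periodogram across neighboring Fourier frequencies. The sketch below uses a hand-picked Gaussian smoothing bandwidth on a synthetic AR(1)-like series, not the paper's data-based selection rule:

```python
import cmath
import math
import random

def periodogram(x):
    """Raw periodogram I(f_j) at Fourier frequencies j/n, j = 1..n//2."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]
    out = []
    for j in range(1, n // 2 + 1):
        d = sum(xc[t] * cmath.exp(-2j * math.pi * j * t / n) for t in range(n))
        out.append(abs(d) ** 2 / (2 * math.pi * n))
    return out

def kernel_spectral_estimate(pgram, bandwidth):
    """Smooth the periodogram with Gaussian kernel weights over the frequency index."""
    m = len(pgram)
    est = []
    for j in range(m):
        w = [math.exp(-0.5 * ((j - k) / bandwidth) ** 2) for k in range(m)]
        est.append(sum(wi * p for wi, p in zip(w, pgram)) / sum(w))
    return est

# Synthetic AR(1)-like series; a larger bandwidth gives a smoother estimate.
random.seed(0)
x = [0.0]
for _ in range(127):
    x.append(0.7 * x[-1] + random.gauss(0, 1))
smooth = kernel_spectral_estimate(periodogram(x), bandwidth=3.0)
```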


The Study on Application of Regional Frequency Analysis using Kernel Density Function (핵밀도 함수를 이용한 지역빈도해석의 적용에 관한 연구)

  • Oh, Tae-Suk; Kim, Jong-Suk; Moon, Young-Il; Yoo, Seung-Yeon
    • Journal of Korea Water Resources Association, v.39 no.10 s.171, pp.891-904, 2006
  • Estimation of probability precipitation is essential for the design of hydrologic projects. Probability precipitation can be calculated by point frequency analysis or by regional frequency analysis, the latter including the index-flood and L-moment techniques. In regional frequency analysis, even when rainfall data pass a homogeneity test, the best-fitting distribution can differ from point to point; on the other hand, regional analysis can compensate for short precipitation records. Regional frequency analysis therefore has a weakness relative to parametric point frequency analysis in its assumptions about probability distributions. This paper applies a kernel density function to precipitation data so that homogeneity can be assessed without such assumptions. Data from 16 rainfall observatories operated by the Korea Meteorological Administration were collected for both the point and the regional frequency analyses. The point frequency analysis applies parametric and nonparametric techniques, and the regional frequency analysis applies the index-flood and L-moment techniques. Probability precipitation was also calculated by regional frequency analysis using a variable kernel density function.
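The variable kernel density function mentioned at the end gives each observation its own bandwidth. One common construction is Abramson-style adaptive KDE, sketched below; the pilot bandwidth and sensitivity exponent are illustrative choices, not necessarily the paper's:

```python
import math

def gaussian_kde(data, h, x):
    """Fixed-bandwidth Gaussian kernel density estimate at x."""
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / (
        len(data) * h * math.sqrt(2 * math.pi))

def variable_kde(data, h, x, alpha=0.5):
    """Variable-kernel estimate: observation i gets bandwidth h*lam[i],
    where lam[i] shrinks in dense regions (pilot-density based)."""
    pilot = [gaussian_kde(data, h, d) for d in data]
    g = math.exp(sum(math.log(p) for p in pilot) / len(pilot))  # geometric mean
    lam = [(p / g) ** (-alpha) for p in pilot]
    n = len(data)
    return sum(
        math.exp(-0.5 * ((x - d) / (h * l)) ** 2) / (h * l)
        for d, l in zip(data, lam)
    ) / (n * math.sqrt(2 * math.pi))
```

Quantiles of the resulting density (e.g. by numerically inverting its CDF) then play the role of probability precipitation estimates without assuming a parametric distribution.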

A novel reliability analysis method based on Gaussian process classification for structures with discontinuous response

  • Zhang, Yibo; Sun, Zhili; Yan, Yutao; Yu, Zhenliang; Wang, Jian
    • Structural Engineering and Mechanics, v.75 no.6, pp.771-784, 2020
  • Reliability analysis techniques combined with various surrogate models have attracted increasing attention because of their accuracy and great efficiency. However, they primarily focus on structures with continuous response, while research on reliability analysis for structures with discontinuous response is very rare. Furthermore, existing adaptive reliability analysis methods based on importance sampling (IS) still have some intractable defects when dealing with small failure probabilities, and there is no related research on reliability analysis for structures involving both discontinuous response and small failure probability. Therefore, this paper proposes a novel reliability analysis method for such structures, called AGPC-IS, which combines adaptive Gaussian process classification (GPC) with adaptive-kernel-density-estimation-based IS. In AGPC-IS, an efficient adaptive strategy for design of experiments (DoE), taking into account classification uncertainty, sampling uniformity, and regional classification accuracy, is developed to improve the accuracy of the Gaussian process classifier. Adaptive kernel density estimation is introduced to construct the quasi-optimal density function for IS. In addition, a novel and more precise stopping criterion is developed from the perspective of the stability of the failure probability estimate. The efficiency, superiority, and practicability of AGPC-IS are verified by three examples.
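The importance-sampling side of such methods estimates a small failure probability by sampling from a proposal density concentrated near the failure region and reweighting by the density ratio. A generic one-dimensional sketch; the shifted-normal proposal is a textbook illustration, not the paper's adaptive-KDE density:

```python
import math
import random

def norm_pdf(x, mu=0.0):
    """Standard-deviation-1 normal density."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def failure_prob_is(limit_state, sample_q, q_pdf, p_pdf, n=20000, seed=1):
    """Importance-sampling estimate of P[g(x) <= 0]:
    average of indicator{g(x) <= 0} * p(x)/q(x) over draws from proposal q."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = sample_q(rng)
        if limit_state(x) <= 0:
            total += p_pdf(x) / q_pdf(x)
    return total / n

# Toy problem: x ~ N(0,1), failure when x > 3 (true prob ~1.35e-3);
# the proposal N(3,1) concentrates samples near the failure region.
est = failure_prob_is(
    limit_state=lambda x: 3.0 - x,
    sample_q=lambda rng: rng.gauss(3.0, 1.0),
    q_pdf=lambda x: norm_pdf(x, 3.0),
    p_pdf=norm_pdf,
)
```

Crude Monte Carlo would need millions of samples for the same accuracy at this probability level, which is why IS variants dominate small-failure-probability work.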

Radar Pulse Clustering using Kernel Density Window (커널 밀도 윈도우를 이용한 레이더 펄스 클러스터링)

  • Lee, Dong-Weon; Han, Jin-Woo; Lee, Won-Don
    • Proceedings of the IEEK Conference, 2008.06a, pp.973-974, 2008
  • As radar signal environments become denser and more complex, ES (Electronic warfare Support) systems require high-speed, accurate signal analysis to identify individual radar signals in real time. In this paper, we propose a novel clustering algorithm for radar pulses that alleviates the load of the signal analysis process and supports reliable analysis. The proposed algorithm uses KDE (Kernel Density Estimation) and its CDF (Cumulative Distribution Function) to compose clusters according to the distribution characteristics of the pulses. Simulation results show the good performance of the proposed algorithm in clustering pulses and classifying the emitters.
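The abstract does not give the windowing details, but the general KDE idea for one pulse parameter is to cut clusters at valleys of the estimated density. A hypothetical sketch on synthetic pulse frequencies (bandwidth, grid step, and data are illustrative):

```python
import math

def kde(data, h, x):
    """Gaussian kernel density estimate at x."""
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / (
        len(data) * h * math.sqrt(2 * math.pi))

def cluster_by_density_valleys(data, h, step):
    """Assign points to clusters separated by local minima (valleys)
    of the estimated density along a scan grid."""
    lo, hi = min(data) - 3 * h, max(data) + 3 * h
    grid = [lo + i * step for i in range(int((hi - lo) / step) + 1)]
    dens = [kde(data, h, g) for g in grid]
    # Valleys: interior grid points not higher than the left neighbor
    # and strictly lower than the right neighbor (handles flat zero runs).
    cuts = [grid[i] for i in range(1, len(grid) - 1)
            if dens[i] <= dens[i - 1] and dens[i] < dens[i + 1]]
    return [sum(1 for c in cuts if d > c) for d in data]

# Two well-separated pulse-frequency groups (MHz, synthetic).
pulses = [100.0, 100.5, 101.0, 200.0, 200.4, 201.1]
print(cluster_by_density_valleys(pulses, h=1.0, step=0.5))
```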


Comparison Study of Kernel Density Estimation according to Various Bandwidth Selectors (다양한 대역폭 선택법에 따른 커널밀도추정의 비교 연구)

  • Kang, Young-Jin; Noh, Yoojeong
    • Journal of the Computational Structural Engineering Institute of Korea, v.32 no.3, pp.173-181, 2019
  • To estimate a probability distribution function from experimental data, kernel density estimation (KDE) is widely used when data are insufficient. The distribution estimated by KDE depends on the bandwidth selector, which can oversmooth or overfit the kernel estimator to the experimental data. In this study, various bandwidth selectors, such as Silverman's rule of thumb, the rule using adaptive estimates, and the oversmoothing rule, were compared for accuracy and conservativeness. Statistical simulations were carried out using assumed true models, including unimodal and multimodal distributions, and the accuracy and conservativeness of the estimated distribution functions were compared across various data sets. In addition, it was verified how the distributions estimated by KDE with different bandwidth selectors affect reliability analysis results through simple reliability examples.
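Silverman's rule of thumb, one of the compared selectors, sets the bandwidth from the sample scale and size. A minimal sketch of the rule together with the resulting KDE:

```python
import math

def silverman_bandwidth(data):
    """Silverman's rule of thumb: h = 0.9 * min(std, IQR/1.34) * n**(-1/5)."""
    n = len(data)
    mean = sum(data) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))
    s = sorted(data)

    def quantile(q):
        # Linear-interpolation sample quantile.
        pos = q * (n - 1)
        i = int(pos)
        return s[i] + (pos - i) * (s[min(i + 1, n - 1)] - s[i])

    iqr = quantile(0.75) - quantile(0.25)
    scale = min(std, iqr / 1.34) if iqr > 0 else std
    return 0.9 * scale * n ** (-0.2)

def gaussian_kde(data, h, x):
    """Gaussian kernel density estimate at x with bandwidth h."""
    return sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data) / (
        len(data) * h * math.sqrt(2 * math.pi))

data = [float(i) for i in range(10)]
h = silverman_bandwidth(data)
```

The rule is exact for near-normal data; for the multimodal true models studied in the paper it tends to oversmooth, which is precisely why selector choice matters for reliability results.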

Color cast detection based on color by correlation and color constancy algorithm using kernel density estimation (색 상관 관계 기반의 색조 검출 및 핵밀도 추정을 이용한 색 항상성 알고리즘)

  • Jung, Jun-Woo; Kim, Gyeong-Hwan
    • Journal of Korea Multimedia Society, v.13 no.4, pp.535-546, 2010
  • Digital images can have undesired color casts due to various illumination conditions and the intrinsic characteristics of cameras. Since color casts deteriorate the performance of color representations, color correction is required before further analysis of the images. In this paper, an algorithm for the detection and removal of color casts is presented. The proposed algorithm consists of four steps: retrieving a similar image using color by correlation, extraction of near-neutral color regions, kernel density estimation, and removal of color casts. Ambiguities in the near-neutral color regions identified by the color by correlation algorithm are excluded based on kernel density estimation. The method determines whether color casts are present from the chromaticity distributions in the near-neutral color regions, and removes them to achieve color constancy. Experimental results suggest that the proposed method outperforms the gray world algorithm and the color by correlation algorithm.
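The gray world algorithm used here as a baseline scales each channel so that the image's average color becomes achromatic. A small sketch on a toy pixel patch (the patch values are illustrative):

```python
def gray_world_correct(pixels):
    """Gray-world color constancy: scale each RGB channel so the channel
    means become equal (the scene is assumed achromatic on average)."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    gains = [gray / m for m in means]
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3)) for p in pixels]

# A bluish-cast patch: the blue channel mean is inflated.
patch = [(100, 100, 140), (120, 120, 160), (80, 80, 120)]
corrected = gray_world_correct(patch)
```

Its known failure mode, scenes dominated by one true color, is what the correlation-based and KDE-based steps in the paper are designed to avoid.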

An Automatic Spectral Density Estimate

  • Park, Byeong U.; Cho, Sin-Sup; Kang, Kee H.
    • Journal of the Korean Statistical Society, v.23 no.1, pp.79-88, 1994
  • This paper concerns the problem of estimating the spectral density function in the analysis of stationary time series data. A kernel-type estimator is considered, which entails a choice of bandwidth. A data-driven bandwidth choice is proposed, obtained by plugging suitable estimates into the unknown parts of a theoretically optimal choice. A theoretical justification is given for this choice in terms of how far it is from the theoretical optimum. Furthermore, an empirical investigation shows that the data-driven choice yields a reliable spectrum estimate.


User Identification Using Real Environmental Human Computer Interaction Behavior

  • Wu, Tong; Zheng, Kangfeng; Wu, Chunhua; Wang, Xiujuan
    • KSII Transactions on Internet and Information Systems (TIIS), v.13 no.6, pp.3055-3073, 2019
  • In this paper, a new user identification method is presented that uses human-computer interaction (HCI) behavior data collected in a real environment to improve usability. The user behavior data were collected continuously, without fixing experimental conditions such as text length or number of actions. To illustrate the characteristics of real environmental HCI data, the probability density distributions and performance of keyboard and mouse data are analyzed through random sampling and the Support Vector Machine (SVM) algorithm. Based on this analysis, the Multiple Kernel Learning (MKL) method is first used for user HCI behavior identification, owing to the heterogeneity of keyboard and mouse data. All candidate kernels are compared to determine the MKL algorithm's parameters and to ensure the robustness of the algorithm. The analysis shows that keyboard data have a narrower probability density distribution than mouse data; keyboard data perform best with a 1-min time window, while mouse data perform best with a 10-min window. Finally, experiments using the MKL algorithm with three global polynomial kernels and ten local Gaussian kernels achieve a user identification accuracy of 83.03% on a real environmental HCI dataset, demonstrating an encouraging performance.
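MKL combines several base kernels into one weighted kernel matrix. Real MKL learns the weights jointly with the classifier; the fixed-weight sketch below (one polynomial and one Gaussian kernel, with illustrative parameters, far fewer than the paper's thirteen kernels) only shows the construction:

```python
import math

def poly_kernel(x, y, degree=2):
    """Polynomial kernel (1 + <x, y>)**degree."""
    return (1.0 + sum(a * b for a, b in zip(x, y))) ** degree

def gauss_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel exp(-gamma * ||x - y||**2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def combined_kernel_matrix(X, weights=(0.5, 0.5)):
    """MKL-style combined kernel: a weighted sum of base kernel matrices.
    (Real MKL optimizes these weights; here they are fixed for illustration.)"""
    wp, wg = weights
    return [[wp * poly_kernel(a, b) + wg * gauss_kernel(a, b) for b in X]
            for a in X]

# Tiny feature vectors standing in for keyboard/mouse features.
X = [(0.0, 1.0), (1.0, 0.0), (0.5, 0.5)]
K = combined_kernel_matrix(X)
```

Because a nonnegative weighted sum of positive semi-definite kernels is itself positive semi-definite, the combined matrix can be fed directly to a kernel classifier such as an SVM.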