• Title/Abstract/Keyword: kernel density function


An Overview of Unsupervised and Semi-Supervised Fuzzy Kernel Clustering

  • Frigui, Hichem; Bchir, Ouiem; Baili, Naouel
    • International Journal of Fuzzy Logic and Intelligent Systems / v.13 no.4 / pp.254-268 / 2013
  • For real-world clustering tasks, the input data is typically not easily separable because the data structure is highly complex or because clusters vary in size, density and shape. Kernel-based clustering has proven to be an effective approach for partitioning such data. In this paper, we provide an overview of several fuzzy kernel clustering algorithms. We focus on methods that optimize a fuzzy C-means-type objective function and highlight the advantages and disadvantages of each method. In addition to the completely unsupervised algorithms, we also provide an overview of some semi-supervised fuzzy kernel clustering algorithms, which use partial supervision information to guide the optimization process and avoid local minima. We also review the different approaches that have been used to extend kernel clustering to very large data sets.
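
As a concrete illustration of the kind of method the survey covers, the sketch below implements one common kernelized fuzzy C-means variant (Gaussian kernel, prototypes kept in the input space). The fuzzifier `m`, bandwidth `sigma`, and the specific update rules are generic textbook choices, not taken from the paper itself.

```python
import numpy as np

def kernel_fcm(X, c, m=2.0, sigma=1.0, iters=100, seed=0):
    """Kernelized fuzzy C-means with a Gaussian kernel (one common variant).
    X: (n, d) data, c: number of clusters, m: fuzzifier."""
    rng = np.random.default_rng(seed)
    n = len(X)
    V = X[rng.choice(n, c, replace=False)]          # initial prototypes
    for _ in range(iters):
        # Gaussian kernel between each point and each prototype
        K = np.exp(-np.sum((X[:, None, :] - V[None, :, :]) ** 2, axis=2) / (2 * sigma ** 2))
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)     # kernel-induced squared distance
        # fuzzy membership update (FCM-type)
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
        # prototype update weighted by u^m * K
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]
    return U, V

# toy usage: two Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
U, V = kernel_fcm(X, c=2)
print(np.round(V, 2))
```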

A Study on the Trade Area Analysis Model based on GIS - A Case of Huff probability model - (GIS 기반의 상권분석 모형 연구 - Huff 확률모형을 중심으로 -)

  • Son, Young-Gi; An, Sang-Hyun; Shin, Young-Chul
    • Journal of the Korean Association of Geographic Information Studies / v.10 no.2 / pp.164-171 / 2007
  • This research used a GIS spatial analysis model together with the Huff probability model to perform a trade area analysis of a local center. We constructed base maps of business types, number of households, and related attributes using a cadastral map from the LMIS (Land Management Information System) for Bokdae-dong, Cheongju-si. A kernel density function and the NNI (Nearest Neighbor Index) were used to estimate the center of store distribution in neighborhood living zones, from which the center point and scale of the area were estimated. The Huff probability model was then applied to delineate trade areas for the estimated centers, and the results were mapped. This study therefore describes a method for applying the Huff probability model through the kernel density function and NNI of GIS spatial analysis techniques. Trade areas delineated with this method are more accurate, which can help merchants founding small-sized enterprises.
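
For reference, the core of the Huff probability model mentioned above can be sketched in a few lines: the probability that a consumer at origin i patronizes center j is proportional to the center's attractiveness divided by a power of the distance to it. The attractiveness values and the exponents `alpha` and `beta` below are illustrative assumptions, not the calibrated values used in the study.

```python
import numpy as np

def huff_probabilities(origins, centers, attractiveness, alpha=1.0, beta=2.0):
    """Huff model: probability that a consumer at each origin patronizes each center.
    origins: (n, 2) consumer locations, centers: (m, 2) center locations,
    attractiveness: (m,) attractiveness (e.g. size) of each center."""
    d = np.linalg.norm(origins[:, None, :] - centers[None, :, :], axis=2)  # (n, m) distances
    d = np.maximum(d, 1e-9)                                # avoid division by zero
    utility = (attractiveness[None, :] ** alpha) / (d ** beta)
    return utility / utility.sum(axis=1, keepdims=True)

# toy usage: 3 consumer points, 2 candidate centers
origins = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0]])
centers = np.array([[0.0, 1.0], [2.0, 0.0]])
attract = np.array([100.0, 300.0])                          # e.g. floor area or store counts
print(np.round(huff_probabilities(origins, centers, attract), 3))
```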


Simulation of Hourly Precipitation using Nonhomogeneous Markov Chain Model and Derivation of Rainfall Mass Curve using Transition Probability (비동질성 Markov 모형에 의한 시간강수량 모의 발생과 천이확률을 이용한 강우의 시간분포 유도)

  • Choi, Byung-Kyu; Oh, Tae-Suk; Park, Rae-Gun; Moon, Young-Il
    • Journal of Korea Water Resources Association / v.41 no.3 / pp.265-276 / 2008
  • Observed data covering a sufficiently long period are needed for the design of hydrological works, but most hydrological records are too short. In this paper, hourly precipitation is therefore generated by a nonhomogeneous Markov chain model using a variable kernel density function. First, a kernel estimator is used to estimate the transition probabilities. Second, wet hours are decided from the transition probabilities and random numbers. Third, the precipitation amount of each wet hour is drawn from the kernel density function estimated from the observed data. The generated precipitation series reproduces the statistics of the observed data. A rainfall mass curve is also derived from the calculated transition probabilities for the temporal distribution of hourly precipitation.
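
A much simplified sketch of the simulation idea is given below, assuming a two-state (dry/wet) chain whose transition probabilities depend on the hour of day, with wet-hour amounts drawn by a smoothed bootstrap (a Gaussian-kernel perturbation of resampled observed amounts). The paper's variable-kernel estimator and the rainfall mass curve derivation are not reproduced; the bandwidth `h` and the synthetic "observed" record are assumptions for illustration.

```python
import numpy as np

def simulate_hourly_precip(obs, n_hours, h=0.5, seed=0):
    """Simplified sketch: 2-state (dry/wet) Markov chain whose transition
    probabilities depend on the hour of day, with wet-hour amounts drawn by a
    smoothed bootstrap (Gaussian kernel, bandwidth h) from observed wet amounts.
    obs: 1-D array of observed hourly precipitation, assumed to start at hour 0."""
    rng = np.random.default_rng(seed)
    hours = np.arange(len(obs)) % 24
    wet = obs > 0.0

    # hour-of-day transition probabilities P(wet at t | state at t-1, hour of t)
    p_wet = np.zeros((2, 24))
    for s in (0, 1):
        for hr in range(24):
            mask = (wet[:-1] == s) & (hours[1:] == hr)
            p_wet[s, hr] = wet[1:][mask].mean() if mask.any() else 0.0

    wet_amounts = obs[wet]
    sim, state = [], 0
    for t in range(n_hours):
        state = int(rng.random() < p_wet[state, t % 24])
        if state:
            # smoothed bootstrap: resample an observed wet amount and jitter it
            amt = rng.choice(wet_amounts) + rng.normal(0.0, h)
            sim.append(max(amt, 0.01))
        else:
            sim.append(0.0)
    return np.array(sim)

# toy usage with a synthetic "observed" record
rng = np.random.default_rng(1)
obs = np.where(rng.random(24 * 365) < 0.1, rng.gamma(2.0, 2.0, 24 * 365), 0.0)
sim = simulate_hourly_precip(obs, n_hours=24 * 30)
print(round(sim[sim > 0].mean(), 2), round(obs[obs > 0].mean(), 2))  # mean wet-hour depths
```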

Data Clustering Method Using a Modified Gaussian Kernel Metric and Kernel PCA

  • Lee, Hansung; Yoo, Jang-Hee; Park, Daihee
    • ETRI Journal / v.36 no.3 / pp.333-342 / 2014
  • Most hyper-ellipsoidal clustering (HEC) approaches use the Mahalanobis distance as a distance metric. It has been proven that HEC, under this condition, cannot be realized since the cost function of partitional clustering is a constant. We demonstrate that HEC with a modified Gaussian kernel metric can be interpreted as a problem of finding condensed ellipsoidal clusters (with respect to the volumes and densities of the clusters) and propose a practical HEC algorithm that is able to efficiently handle clusters that are ellipsoidal in shape and of different size and density. We then refine the HEC algorithm by utilizing ellipsoids defined on the kernel feature space to deal with more complex-shaped clusters. The proposed methods lead to a significant improvement in the clustering results over the K-means, fuzzy C-means, and GMM-EM algorithms and over an HEC algorithm based on minimum-volume ellipsoids using the Mahalanobis distance.
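
The sketch below illustrates the general idea of hyper-ellipsoidal clustering with a bounded, Gaussian-kernel-induced distance, d² = 1 − exp(−½·Mahalanobis²), in place of the raw Mahalanobis distance. This metric is a plausible stand-in chosen for illustration and is not claimed to be the paper's exact modified kernel metric or update scheme.

```python
import numpy as np

def gk_hec(X, k, iters=50, reg=1e-6, seed=0):
    """Illustrative hyper-ellipsoidal clustering with a Gaussian-kernel-induced
    distance d^2 = 1 - exp(-0.5 * Mahalanobis^2); a stand-in for a 'modified
    Gaussian kernel metric', not the paper's exact definition."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    centers = X[rng.choice(n, k, replace=False)]
    covs = np.array([np.eye(d)] * k)
    for _ in range(iters):
        dist = np.empty((n, k))
        for j in range(k):
            diff = X - centers[j]
            inv = np.linalg.inv(covs[j] + reg * np.eye(d))
            maha2 = np.einsum('ni,ij,nj->n', diff, inv, diff)
            dist[:, j] = 1.0 - np.exp(-0.5 * maha2)      # bounded, kernel-induced distance
        labels = dist.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts) > d:                              # keep covariance well defined
                centers[j] = pts.mean(axis=0)
                covs[j] = np.cov(pts, rowvar=False)
    return labels, centers, covs

# toy usage: two elongated (ellipsoidal) clusters
rng = np.random.default_rng(2)
A = rng.normal(0, 1, (100, 2)) * [3.0, 0.3]
B = rng.normal(0, 1, (100, 2)) * [0.3, 3.0] + [6.0, 0.0]
labels, centers, _ = gk_hec(np.vstack([A, B]), k=2)
print(np.round(centers, 2))
```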

Video Object Segmentation using Kernel Density Estimation and Spatio-temporal Coherence (커널 밀도 추정과 시공간 일치성을 이용한 동영상 객체 분할)

  • Ahn, Jae-Kyun; Kim, Chang-Su
    • Journal of IKEEE / v.13 no.4 / pp.1-7 / 2009
  • A video segmentation algorithm, which can extract objects even against non-stationary backgrounds, is proposed in this work. The proposed algorithm is composed of three steps. First, we perform an initial segmentation interactively to build the probability density functions of colors for each macroblock via kernel density estimation. Then, for each subsequent frame, we construct a coherence strip, which is likely to contain the object contour, by exploiting spatio-temporal correlations. Finally, we perform the segmentation by minimizing an energy function composed of color, coherence, and smoothness terms. Experimental results on various test sequences show that the proposed algorithm provides accurate segmentation results.
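
The per-block color model in the first step is essentially a kernel density estimate over labelled color samples. A minimal sketch, assuming Gaussian kernels and a single macroblock whose object and background samples come from the initial interactive segmentation, is shown below; the bandwidth and colors are made up for illustration, and the coherence and smoothness terms of the energy function are not modeled.

```python
import numpy as np

def kde_gauss(samples, x, h):
    """Gaussian kernel density estimate at query points x from N-D samples."""
    diff = x[:, None, :] - samples[None, :, :]
    d = samples.shape[1]
    norm = (2 * np.pi * h ** 2) ** (d / 2)
    return np.exp(-np.sum(diff ** 2, axis=2) / (2 * h ** 2)).mean(axis=1) / norm

# toy usage: classify pixel colors as object vs. background for one macroblock,
# given a few labelled color samples from an initial interactive segmentation
rng = np.random.default_rng(0)
obj_colors = rng.normal([200, 50, 50], 10, (80, 3))    # reddish object samples
bg_colors  = rng.normal([40, 40, 120], 15, (80, 3))    # bluish background samples
pixels = np.array([[195.0, 60.0, 55.0], [35.0, 45.0, 110.0]])
p_obj = kde_gauss(obj_colors, pixels, h=12.0)
p_bg  = kde_gauss(bg_colors,  pixels, h=12.0)
print(p_obj > p_bg)   # expected: [ True False ]
```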


Probabilistic Prediction of Estimated Ultimate Recovery in Shale Reservoir using Kernel Density Function (셰일 저류층에서의 핵밀도 함수를 이용한 확률론적 궁극가채량 예측)

  • Shin, Hyo-Jin; Hwang, Ji-Yu; Lim, Jong-Se
    • Journal of the Korean Institute of Gas / v.21 no.3 / pp.61-69 / 2017
  • Commercial development of unconventional gas is being pursued in North America, where technologies for improving productivity have made it economically feasible. Shale reservoirs have low permeability, so gas is produced through fractures created by hydraulic fracturing. The decline rate is high during the initial production period and very low later on, and production behavior varies significantly across wells. Consequently, predicting the production rate with deterministic decline curve analysis (DCA) cannot account for the uncertainty in the production behavior. In this study, the production rate of the Eagle Ford shale is predicted with the Arps hyperbolic and modified SEPD models. To minimize the uncertainty in the Estimated Ultimate Recovery (EUR), Monte Carlo simulation is applied to a multi-well analysis. A kernel density function is also used to determine the probability distributions of the decline-curve parameters without any distributional assumption.
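
A minimal sketch of the probabilistic workflow is shown below: Arps hyperbolic cumulative production is evaluated for decline parameters drawn by a smoothed bootstrap (equivalent to sampling from a Gaussian kernel density of per-well fits), and P10/P50/P90 of the EUR are reported. The per-well parameter values, cutoff time, and units are hypothetical, and the modified SEPD model used in the paper is not included.

```python
import numpy as np

def arps_cum(qi, Di, b, t):
    """Cumulative production of the Arps hyperbolic decline at time t (b != 0, 1)."""
    return qi / ((1.0 - b) * Di) * (1.0 - (1.0 + b * Di * t) ** ((b - 1.0) / b))

rng = np.random.default_rng(0)
# hypothetical per-well fitted parameters (qi [Mscf/d], Di [1/yr], b); not field data
qi_s = np.array([900.0, 1200.0, 1500.0, 1100.0, 1300.0])
Di_s = np.array([1.8, 2.2, 2.0, 2.5, 1.9])
b_s  = np.array([1.3, 1.5, 1.4, 1.6, 1.2])

def smoothed_draw(samples, n, h_frac=0.1):
    """Smoothed bootstrap = sampling from a Gaussian kernel density of the samples."""
    h = h_frac * samples.std()
    return rng.choice(samples, n) + rng.normal(0.0, h, n)

n = 10_000
eur = arps_cum(smoothed_draw(qi_s, n), smoothed_draw(Di_s, n),
               np.clip(smoothed_draw(b_s, n), 1.05, 1.9), t=30.0) * 365.0
p10, p50, p90 = np.percentile(eur, [90, 50, 10])   # P10 = optimistic, P90 = conservative
print(f"EUR P10={p10:,.0f}  P50={p50:,.0f}  P90={p90:,.0f} Mscf")
```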

Robustness of Minimum Disparity Estimators in Linear Regression Models

  • Pak, Ro-Jin
    • Journal of the Korean Statistical Society / v.24 no.2 / pp.349-360 / 1995
  • This paper deals with the robustness properties of minimum disparity estimation in linear regression models. The estimators are defined as the quantities that minimize the blended weight Hellinger distance between a weighted kernel density estimate of the residuals and a smoothed model density of the residuals. It is shown that, if the weights of the density estimator are chosen appropriately, the estimates of the regression parameters are robust.
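
A rough sketch of the estimation idea follows: choose the regression coefficients (and scale) so that a kernel density estimate of the residuals is as close as possible, in a blended weight Hellinger distance, to the normal model density. The BWHD form used here, the unweighted (rather than weighted) kernel estimator, the fixed bandwidth, and the optimizer settings are all simplifying assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def bwhd(f, g, dx, alpha=0.5):
    """Blended weight Hellinger distance between two densities on a grid
    (alpha = 0.5 gives a Hellinger-type disparity)."""
    denom = alpha * np.sqrt(f) + (1.0 - alpha) * np.sqrt(g) + 1e-12
    return np.sum(((f - g) / denom) ** 2) * dx

def min_disparity_fit(x, y, h=0.5, alpha=0.5):
    """Fit y = b0 + b1*x + e by choosing (b0, b1, s) so that the kernel density
    of the residuals is closest, in BWHD, to the N(0, s^2) model density."""
    grid = np.linspace(-8, 8, 321)
    dx = grid[1] - grid[0]

    def objective(theta):
        b0, b1, log_s = theta
        r = y - b0 - b1 * x
        s = np.exp(log_s)
        fhat = np.exp(-0.5 * ((grid[:, None] - r[None, :]) / h) ** 2).mean(axis=1) \
               / (h * np.sqrt(2 * np.pi))                      # kernel density of residuals
        fmod = np.exp(-0.5 * (grid / s) ** 2) / (s * np.sqrt(2 * np.pi))
        return bwhd(fhat, fmod, dx, alpha)

    b1_ls, b0_ls = np.polyfit(x, y, 1)                         # least squares start
    res = minimize(objective, x0=[b0_ls, b1_ls, 0.0], method="Nelder-Mead")
    return res.x[0], res.x[1], np.exp(res.x[2])

# toy usage: regression with a few gross outliers in the response
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 100)
y[:5] += 25.0                                                  # contaminate 5 observations
print(np.round(min_disparity_fit(x, y)[:2], 2))                # intercept and slope estimates
```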


The Study on Application of Regional Frequency Analysis using Kernel Density Function (핵밀도 함수를 이용한 지역빈도해석의 적용에 관한 연구)

  • Oh, Tae-Suk; Kim, Jong-Suk; Moon, Young-Il; Yoo, Seung-Yeon
    • Journal of Korea Water Resources Association / v.39 no.10 s.171 / pp.891-904 / 2006
  • Estimating probability precipitation is essential for the design of hydrological projects. Probability precipitation can be calculated by point frequency analysis or by regional frequency analysis; the latter includes the index-flood and L-moment techniques. In regional frequency analysis, even when the rainfall data pass a homogeneity test, the most suitable distribution can differ from site to site, although the regional approach can compensate for short precipitation records. Regional frequency analysis therefore has weaknesses, compared with parametric point frequency analysis, arising from its assumptions about the probability distribution. This paper accordingly applies a kernel density function to the precipitation data so that homogeneity can be assessed without such assumptions. Data from 16 rainfall stations collected and managed by the Korea Meteorological Administration were used for both the point and the regional frequency analyses. The point frequency analysis applies parametric and nonparametric techniques, and the regional frequency analysis applies the index-flood and L-moment techniques. The probability precipitation was also calculated by regional frequency analysis using a variable kernel density function.
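
To make the index-flood idea concrete, the sketch below scales each station's annual maxima by its mean (the "index flood"), pools the scaled values into a regional sample, smooths the pooled sample with a Gaussian kernel via a smoothed bootstrap, and rescales the regional quantile by each station's mean. The bandwidth rule, the synthetic Gumbel-distributed records, and the use of a plain (rather than variable) kernel are assumptions for illustration only.

```python
import numpy as np

def regional_quantile(site_maxima, p, h_frac=0.3, n_mc=200_000, seed=0):
    """Index-flood sketch with a kernel-smoothed regional growth curve:
    scale each site's annual maxima by the site mean, pool, smooth with a
    Gaussian kernel, and rescale the regional p-quantile by each site mean."""
    rng = np.random.default_rng(seed)
    means = np.array([np.mean(s) for s in site_maxima])
    pooled = np.concatenate([np.asarray(s) / m for s, m in zip(site_maxima, means)])
    h = h_frac * pooled.std() * len(pooled) ** (-0.2)        # rule-of-thumb-style bandwidth
    # quantile of the kernel density = quantile of the smoothed bootstrap sample
    smooth = rng.choice(pooled, n_mc) + rng.normal(0.0, h, n_mc)
    growth = np.quantile(smooth, p)
    return growth * means                                     # site-wise probability rainfall

# toy usage: three synthetic "stations" with different record lengths
rng = np.random.default_rng(1)
sites = [rng.gumbel(100, 30, 40), rng.gumbel(150, 45, 25), rng.gumbel(120, 35, 30)]
print(np.round(regional_quantile(sites, p=0.99), 1))          # ~100-year values per site
```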

Bandwidth selections based on cross-validation for estimation of a discontinuity point in density (교차타당성을 이용한 확률밀도함수의 불연속점 추정의 띠폭 선택)

  • Huh, Jib
    • Journal of the Korean Data and Information Science Society / v.23 no.4 / pp.765-775 / 2012
  • Cross-validation is a popular method for selecting the bandwidth in all types of kernel estimation. Maximum likelihood cross-validation, least squares cross-validation, and biased cross-validation have been proposed for bandwidth selection in kernel density estimation. For the case in which the probability density function has a discontinuity point, Huh (2012) proposed a bandwidth selection method using maximum likelihood cross-validation. In this paper, two forms of cross-validation with one-sided kernel functions are proposed for selecting the bandwidth used to estimate the location and jump size of the discontinuity point of a density. These methods are motivated by least squares cross-validation and biased cross-validation. Simulated examples compare the finite-sample performance of the two proposed methods with that of Huh (2012).
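
The paper's one-sided-kernel criteria are variations on standard cross-validation for bandwidth choice. The sketch below shows only the ordinary least squares cross-validation score for a Gaussian kernel density estimate, minimized over a grid of bandwidths; the one-sided kernels and the discontinuity-point estimation itself are not implemented.

```python
import numpy as np

def lscv(h, x):
    """Least squares cross-validation score
    LSCV(h) = int fhat^2 - (2/n) * sum_i fhat_{-i}(x_i)
    for a Gaussian kernel density estimate with bandwidth h."""
    n = len(x)
    diff = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * diff ** 2) / np.sqrt(2 * np.pi)
    # int fhat^2 dx has a closed form for the Gaussian kernel (N(0, 2) convolution)
    K2 = np.exp(-0.25 * diff ** 2) / np.sqrt(4 * np.pi)
    int_f2 = K2.sum() / (n ** 2 * h)
    loo = (K.sum() - np.trace(K)) / (n * (n - 1) * h)   # average leave-one-out density
    return int_f2 - 2.0 * loo

# toy usage: pick the bandwidth minimizing LSCV on a grid
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 300)
hs = np.linspace(0.05, 1.0, 60)
scores = np.array([lscv(h, x) for h in hs])
print(round(hs[scores.argmin()], 3))
```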

An Automatic Spectral Density Estimate

  • Park, Byeong U.; Cho, Sin-Sup; Kee H. Kang
    • Journal of the Korean Statistical Society / v.23 no.1 / pp.79-88 / 1994
  • This paper concerns the problem of estimating the spectral density function in the analysis of stationary time series data. A kernel-type estimate is considered, which entails a choice of bandwidth. A data-driven bandwidth choice is proposed, obtained by plugging suitable estimates into the unknown parts of a theoretically optimal choice. A theoretical justification is given for this choice in terms of how far it is from the theoretical optimum. Furthermore, an empirical investigation shows that the data-driven choice yields a reliable spectrum estimate.
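
The estimator in question is a kernel (lag-window) smoother of the sample autocovariances, and the paper's contribution is a data-driven choice of its bandwidth. The sketch below shows only the estimator itself, with a Bartlett lag window and a truncation point passed in by hand rather than chosen by the paper's plug-in rule; the window choice and the AR(1) test series are assumptions for illustration.

```python
import numpy as np

def kernel_spectrum(x, m):
    """Lag-window (kernel-type) spectral density estimate with a Bartlett window
    of truncation point m; choosing m in a data-driven way is the paper's topic,
    here it is simply passed in."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # sample autocovariances up to lag m
    acov = np.array([np.sum(x[:n - k] * x[k:]) / n for k in range(m + 1)])
    lags = np.arange(1, m + 1)
    w = 1.0 - lags / m                                  # Bartlett lag window
    freqs = np.linspace(0.0, np.pi, 200)
    spec = (acov[0] + 2.0 * np.sum(w[None, :] * acov[None, 1:] *
                                   np.cos(freqs[:, None] * lags[None, :]),
                                   axis=1)) / (2 * np.pi)
    return freqs, spec

# toy usage: AR(1) series, whose true spectrum peaks at frequency 0
rng = np.random.default_rng(0)
e = rng.normal(0, 1, 2000)
x = np.zeros(2000)
for t in range(1, 2000):
    x[t] = 0.7 * x[t - 1] + e[t]
freqs, spec = kernel_spectrum(x, m=30)
print(round(spec[0], 2), round(spec[-1], 2))            # low-frequency power >> high-frequency
```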
