• Title/Summary/Keyword: kernel estimator (커널추정량)

Improved Multiplication-free One-bit Transform-based Motion Estimation (향상된 곱셈이 없는 1비트 변환 알고리듬)

  • Jun, Jee-Hyun;Yoo, Ho-Sun;Jeong, Je-Chang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2011.11a
    • /
    • pp.211-214
    • /
    • 2011
  • Motion estimation plays a central role in video compression, since it directly affects both picture quality and encoding time. The most basic motion estimation method is the full search algorithm (FSA), which yields the best picture quality but requires a very large amount of computation. Many fast search algorithms have therefore been proposed to reduce the computational load while preserving picture quality. This paper reviews one-bit transform-based motion estimation (1BT), a fast search algorithm with significant advantages for hardware implementation, and proposes a kernel and an algorithm that encode faster than the conventional 1BT while maintaining its PSNR. Experimental results show that the proposed algorithm improves speed while keeping a PSNR similar to that of the conventional one-bit transform algorithm.
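
  • A minimal sketch of the conventional one-bit transform that such work builds on, assuming a simple box-mean filter in place of the multiplication-free band-pass kernel of the 1BT literature; the matching cost is the number of non-matching points (NNMP), computed with XOR in place of the absolute differences of full search:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def one_bit_transform(frame):
            # 1 where the pixel exceeds its local mean, else 0; the box mean
            # stands in for the 17x17 multiplication-free kernel of classic 1BT.
            mean = uniform_filter(frame.astype(np.float64), size=17)
            return (frame > mean).astype(np.uint8)

        def nnmp(cur_block, ref_block):
            # Number of non-matching points: XOR + popcount replaces SAD.
            return int(np.count_nonzero(cur_block ^ ref_block))

        def motion_vector(cur_bits, ref_bits, y, x, bs=16, rng=8):
            # Full search over a +/-rng window, but on 1-bit planes.
            cur = cur_bits[y:y + bs, x:x + bs]
            best, mv = None, (0, 0)
            for dy in range(-rng, rng + 1):
                for dx in range(-rng, rng + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy and 0 <= xx and \
                       yy + bs <= ref_bits.shape[0] and xx + bs <= ref_bits.shape[1]:
                        cost = nnmp(cur, ref_bits[yy:yy + bs, xx:xx + bs])
                        if best is None or cost < best:
                            best, mv = cost, (dy, dx)
            return mv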

A Second Order Smoother (이차 평활스플라인)

  • Kim, Jong-Tae
    • The Korean Journal of Applied Statistics
    • /
    • v.11 no.2
    • /
    • pp.363-376
    • /
    • 1998
  • The linear smoothing spline estimator is modified to remove boundary bias effects. The resulting estimator can be calculated efficiently using an O(n) algorithm that is developed for the computation of fitted values and associated smoothing parameter selection criteria. The asymptotic properties of the estimator are studied for the case of a uniform design. In this case the mean squared error properties of boundary corrected linear smoothing splines are seen to be asymptotically competitive with those for standard second order kernel smoothers.
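
  • For reference, a standard second-order kernel smoother of the kind used as the benchmark here can be sketched as follows (an illustrative Nadaraya-Watson estimator with an Epanechnikov kernel, not the paper's boundary-corrected spline); near the boundary its weights become one-sided, which is the source of the boundary bias the corrected spline removes:

        import numpy as np

        def epanechnikov(u):
            # A second-order kernel: symmetric, nonnegative, integrates to one.
            return 0.75 * np.clip(1.0 - u**2, 0.0, None)

        def kernel_smoother(x, y, grid, h):
            # Nadaraya-Watson fit: kernel-weighted average of y near each
            # grid point (h must be large enough that every point has neighbors).
            w = epanechnikov((grid[:, None] - x[None, :]) / h)
            return (w @ y) / w.sum(axis=1)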

Analysis of Roadkill Hotspot According to the Spatial Clustering Methods (공간 군집지역 탐색방법에 따른 로드킬 다발구간 분석)

  • Song, Euigeun;Seo, Hyunjin;Kim, Kyungmin;Woo, Donggul;Park, Taejin;Choi, Taeyoung
    • Journal of Environmental Impact Assessment
    • /
    • v.28 no.6
    • /
    • pp.580-591
    • /
    • 2019
  • This study analyzed roadkill hotspots in Yeongju-si, Mungyeong-si, Andong-si, and Cheongsong-gun to compare spatial clustering methods for selecting roadkill hotspots. The local spatial autocorrelation index, the Getis-Ord Gi* statistic, was calculated for different units of analysis, yielding hotspot areas covering 9% of the total road length with a 300 m unit and 14% with a 1 km unit. Among the 1 km units, the highest Z-score appeared in the National Road 28 section on the border between Yecheon-gun and Yeongju-si. For the kernel density approach, both ordinary kernel density estimation and network kernel density estimation were performed; both visualized roadkill hotspots more readily than the unit-based analysis, but neither could establish a statistically significant ranking of priorities. Overall, the local hotspot areas differed according to the clustering method, while the section of National Road 28 through Yeongju-si and Yecheon-gun was identified as a hotspot by all methods and is in common need of reduction measures. The results of this study can serve as basic data for identifying roadkill hotspots and establishing measures to reduce roadkill.
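
  • A minimal sketch of the Getis-Ord Gi* z-score over road segments, assuming a precomputed spatial weight matrix W that includes each segment's self-weight (the "star" variant); segment lengths and the study's exact neighborhood definitions are not modeled here:

        import numpy as np

        def getis_ord_gi_star(x, W):
            # x: roadkill counts per road segment; W: (n, n) spatial weights
            # with nonzero diagonal. Returns a z-score per segment; large
            # positive values indicate hotspots.
            x = np.asarray(x, dtype=float)
            n = x.size
            xbar = x.mean()
            s = np.sqrt((x**2).mean() - xbar**2)
            w_sum = W.sum(axis=1)
            w2_sum = (W**2).sum(axis=1)
            num = W @ x - xbar * w_sum
            den = s * np.sqrt((n * w2_sum - w_sum**2) / (n - 1))
            return num / den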

Multi-focus Image Fusion Technique Based on Parzen-windows Estimates (Parzen 윈도우 추정에 기반한 다중 초점 이미지 융합 기법)

  • Atole, Ronnel R.;Park, Daechul
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.8 no.4
    • /
    • pp.75-88
    • /
    • 2008
  • This paper presents a spatial-domain nonparametric multi-focus image fusion technique based on kernel estimates of the class-conditional probability density functions underlying input image blocks. Image fusion is approached as a classification task whose posterior class probabilities, $P(w_i \mid B_{ikl})$, are calculated with likelihood density functions estimated from training patterns. For each of the $C$ input images $I_i$, the proposed method defines a class $w_i$ and forms the fused image $Z(k,l)$ from a decision map represented by a set of $P \times Q$ blocks $B_{ikl}$ whose features maximize a discriminant function based on the Bayesian decision principle. Performance of the proposed technique is evaluated in terms of RMSE and Mutual Information (MI) as output quality measures. The width of the kernel functions, $\sigma$, was varied, and different kernels and block sizes were applied in the performance evaluation. The proposed scheme was tested with $C=2$ and $C=3$ input images, and the results exhibited good performance.
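
  • A sketch of the core Parzen-window step: the class-conditional likelihood of a block feature is estimated from training samples with a Gaussian kernel of width $\sigma$, and each block is assigned by the Bayes rule $\arg\max_i P(w_i)\,p(x \mid w_i)$. The scalar feature and equal priors below are illustrative assumptions, not the paper's choices:

        import numpy as np

        def parzen_pdf(x, samples, sigma):
            # Gaussian Parzen-window estimate of p(x) from 1-D training samples.
            u = (x - samples) / sigma
            return np.exp(-0.5 * u**2).mean() / (sigma * np.sqrt(2.0 * np.pi))

        def classify_block(feature, train_w1, train_w2, sigma, prior1=0.5):
            # Bayes decision between two focus classes via Parzen likelihoods.
            p1 = prior1 * parzen_pdf(feature, train_w1, sigma)
            p2 = (1.0 - prior1) * parzen_pdf(feature, train_w2, sigma)
            return 1 if p1 >= p2 else 2

    The decision map over all $P \times Q$ block positions then selects, per position, the source image whose class wins.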

A Note on Complete Convergence in $C_0(R)$ and $L^1(R)$ with Application to Kernel Density Function Estimators ($C_0(R)$와 $L^1(R)$의 완전수렴(完全收斂)과 커널밀도함수(密度函數) 추정량(推定量)의 응용(應用)에 대(對)한 연구(硏究))

  • Lee, Sung-Ho
    • Journal of the Korean Data and Information Science Society
    • /
    • v.3 no.1
    • /
    • pp.25-31
    • /
    • 1992
  • Some results relating to the $C_0(R)$ and $L^1(R)$ spaces, with application to kernel density estimators, are introduced. First, random elements in $C_0(R)$ and $L^1(R)$ are discussed. Then, complete convergence limit theorems are given to show that these results can be used in establishing uniform consistency and $L^1$ consistency.
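
  • For reference, the kernel density function estimator to which such complete convergence results apply has the standard textbook form (notation mine, not taken from the paper):

        $$\hat{f}_n(x) = \frac{1}{n h_n} \sum_{i=1}^{n} K\!\left(\frac{x - X_i}{h_n}\right), \qquad x \in R,$$

    so that uniform consistency, $\sup_x |\hat{f}_n(x) - f(x)| \to 0$ completely, is a statement about convergence in $C_0(R)$, while $L^1$ consistency, $\int |\hat{f}_n - f|\,dx \to 0$, is one about convergence in $L^1(R)$.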

Selection of bandwidth for local linear composite quantile regression smoothing (국소 선형 복합 분위수 회귀에서의 평활계수 선택)

  • Jhun, Myoungshic;Kang, Jongkyeong;Bang, Sungwan
    • The Korean Journal of Applied Statistics
    • /
    • v.30 no.5
    • /
    • pp.733-745
    • /
    • 2017
  • Local composite quantile regression is a useful nonparametric regression method that is widely used for its high efficiency. Kernel-based data smoothing methods are typically used in the estimation process, and their performance depends largely on the smoothing parameter rather than on the kernel. The $L_2$-norm is generally used as the criterion for assessing the performance of the regression function, and many studies have addressed the selection of smoothing parameters that minimize mean squared error (MSE) or mean integrated squared error (MISE). In this paper, we explore the optimal selection of the smoothing parameter, which determines the performance of nonparametric regression models based on local linear composite quantile regression. As evaluation criteria for this choice, we use the mean absolute error (MAE) and the mean integrated absolute error (MIAE), which have not been researched extensively due to mathematical difficulties. We prove the uniqueness of the optimal smoothing parameter under MAE and MIAE, and compare the optimal smoothing parameter based on the proposed criteria (MAE and MIAE) with that based on the existing criteria (MSE and MISE). The properties of the proposed method are investigated through simulation studies in various situations.
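
  • The shape of such a smoothing-parameter search can be sketched as below; to keep the example short it substitutes a plain local linear smoother for local linear composite quantile regression (the substitution is mine), with a leave-one-out MAE criterion over a bandwidth grid:

        import numpy as np

        def local_linear(x0, x, y, h):
            # Local linear fit at x0 with a Gaussian kernel; returns the
            # intercept, i.e. the fitted value at x0.
            w = np.exp(-0.5 * ((x - x0) / h) ** 2)
            X = np.column_stack([np.ones_like(x), x - x0])
            WX = X * w[:, None]
            beta = np.linalg.solve(X.T @ WX, WX.T @ y)
            return beta[0]

        def select_bandwidth_mae(x, y, grid):
            # Pick h minimizing the leave-one-out mean absolute error
            # (h on the grid should be large enough for a stable local fit).
            best_h, best_mae = None, np.inf
            for h in grid:
                errs = [abs(y[i] - local_linear(x[i], np.delete(x, i),
                                                np.delete(y, i), h))
                        for i in range(len(x))]
                mae = np.mean(errs)
                if mae < best_mae:
                    best_h, best_mae = h, mae
            return best_h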

Testing of a discontinuity point in the log-variance function based on likelihood (가능도함수를 이용한 로그분산함수의 불연속점 검정)

  • Huh, Jib
    • Journal of the Korean Data and Information Science Society
    • /
    • v.20 no.1
    • /
    • pp.1-9
    • /
    • 2009
  • Consider a regression model in which the variance function has a discontinuity (change) point at an unknown location. Yu and Jones (2004) proposed a local polynomial fit of the log-variance function, which avoids the violations of positivity that can arise when the variance itself is estimated. Using this local polynomial fit, Huh (2008) estimated the location of the discontinuity point of the log-variance function. We propose a test for the existence of a discontinuity point in the log-variance function based on the jump size estimated in Huh (2008). The proposed method rests on the asymptotic distribution of the estimated jump size. Numerical studies demonstrate the performance of the method.
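
  • The structure of such a test can be illustrated with one-sided kernel averages of $z_i = \log(\hat\varepsilon_i^2)$ on each side of a candidate point, comparing the standardized difference of the two averages with a normal quantile. The sketch below is my construction under simplifying assumptions (known variance of $z$, Gaussian one-sided weights), not the estimator of Huh (2008):

        import numpy as np

        def one_sided_avg(x0, x, z, h, right=True):
            # Kernel-weighted average of z using only points on one side of x0.
            u = (x - x0) / h
            w = np.exp(-0.5 * u**2) * ((u > 0) if right else (u < 0))
            m = np.sum(w * z) / np.sum(w)
            v = np.sum(w**2) / np.sum(w)**2   # variance factor of the average
            return m, v

        def jump_z_statistic(x0, x, z, h, var_z):
            # H0: the log-variance is continuous at x0 (jump size zero);
            # reject for |statistic| beyond the normal quantile.
            m_r, v_r = one_sided_avg(x0, x, z, h, right=True)
            m_l, v_l = one_sided_avg(x0, x, z, h, right=False)
            return (m_r - m_l) / np.sqrt(var_z * (v_r + v_l))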

Power Comparison between Methods of Empirical Process and a Kernel Density Estimator for the Test of Distribution Change (분포변화 검정에서 경험확률과정과 커널밀도함수추정량의 검정력 비교)

  • Na, Seong-Ryong;Park, Hyeon-Ah
    • Communications for Statistical Applications and Methods
    • /
    • v.18 no.2
    • /
    • pp.245-255
    • /
    • 2011
  • Two nonparametric methods, one based on empirical distribution functions and the other on probability density estimators, can be used to test for a change in the distribution of data. In this paper we examine the two methods in detail and summarize the results of previous research. We assume several probability models, carry out a simulation study of change-point analysis, and examine the finite-sample behavior of the two methods. Empirical powers are compared to determine which method is better for each model.
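
  • One way to run such a comparison in simulation: estimate the empirical power of a CUSUM-type statistic built from empirical distribution functions against a simulated null critical value. This is a generic construction for illustration, not the exact statistics compared in the paper:

        import numpy as np

        def cusum_ks(x):
            # Max over split points of the weighted two-sample KS distance.
            n = len(x)
            grid = np.sort(x)
            best = 0.0
            for k in range(5, n - 5):
                F1 = np.searchsorted(np.sort(x[:k]), grid, side='right') / k
                F2 = np.searchsorted(np.sort(x[k:]), grid, side='right') / (n - k)
                d = np.max(np.abs(F1 - F2)) * np.sqrt(k * (n - k) / n)
                best = max(best, d)
            return best

        def empirical_power(n=100, reps=200, shift=1.0, alpha=0.05, seed=0):
            rng = np.random.default_rng(seed)
            # Critical value from the no-change null, then power under a
            # mean shift at the midpoint.
            null = [cusum_ks(rng.normal(size=n)) for _ in range(reps)]
            crit = np.quantile(null, 1 - alpha)
            alt = [cusum_ks(np.r_[rng.normal(size=n // 2),
                                  rng.normal(shift, 1.0, n - n // 2)])
                   for _ in range(reps)]
            return np.mean(np.array(alt) > crit)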

A Study of the Feature Classification and the Predictive Model of Main Feed-Water Flow for Turbine Cycle (주급수 유량의 형상 분류 및 추정 모델에 대한 연구)

  • Yang, Hac Jin;Kim, Seong Kun;Choi, Kwang Hee
    • Journal of Energy Engineering
    • /
    • v.23 no.4
    • /
    • pp.263-271
    • /
    • 2014
  • Corrective thermal performance analysis is required for thermal power plants to determine the performance status of the turbine cycle. We developed a classification method for main feed-water flow to enable precise corrections in performance analysis based on the ASME (American Society of Mechanical Engineers) PTC (Performance Test Code). The classification is based on identifying features of the state of the main feed-water flow. We also developed predictive algorithms for the corrected main feed-water flow using a Support Vector Machine (SVM) model for each classified feature area, and compared the results with estimates from a Neural Network (NN) and Kernel Regression (KR). The feature classification and predictive model of main feed-water flow provide a more practical method for corrective thermal performance analysis of the turbine cycle.
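
  • A hedged sketch of such a three-way model comparison using scikit-learn, with synthetic stand-in data; the operating features, target signal, and hyperparameters here are illustrative assumptions, not the study's:

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.neural_network import MLPRegressor
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_percentage_error

        rng = np.random.default_rng(0)
        X = rng.uniform(size=(500, 3))   # stand-in operating parameters
        y = 2 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=500)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        models = {
            'SVM': SVR(C=10.0, epsilon=0.01),
            'NN': MLPRegressor(hidden_layer_sizes=(32, 32),
                               max_iter=2000, random_state=0),
            'KR': KernelRidge(kernel='rbf', alpha=1e-2),
        }
        for name, m in models.items():
            m.fit(X_tr, y_tr)
            print(name, mean_absolute_percentage_error(y_te, m.predict(X_te)))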

Practical Approach for Blind Algorithms Using Random-Order Symbol Sequence and Cross-Correntropy (랜덤오더 심볼열과 상호 코렌트로피를 이용한 블라인드 알고리듬의 현실적 접근)

  • Kim, Namyong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.3
    • /
    • pp.149-154
    • /
    • 2014
  • The cross-correntropy concept can be expressed through inner products of two probability density functions constructed by Gaussian-kernel density estimation. Blind algorithms based on maximization of the cross-correntropy (MCC) with a symbol set of N randomly generated samples yield superior learning performance, but the weight-update process based on the MCC carries a huge computational burden. In this paper, a method is proposed that reduces the computational complexity of the MCC algorithm by computing the gradient of the cross-correntropy recursively. The proposed method requires only O(N) operations per iteration, whereas conventional MCC algorithms that compute the gradient by block processing require $O(N^2)$. Simulation results show that the proposed method achieves the same learning performance while significantly reducing the heavy computational load.
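
  • The complexity reduction can be illustrated on the cross-correntropy estimate itself: with a Gaussian kernel, the block (batch) estimate costs $O(N^2)$, while a sliding update that replaces one output sample per iteration touches only one column of the kernel matrix and so costs $O(N)$. This is my illustration of that idea, not the paper's algorithm:

        import numpy as np

        def gkernel(u, sigma):
            return np.exp(-u**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

        def cross_correntropy_block(d, y, sigma):
            # O(N^2) block estimate: V = (1/N^2) sum_i sum_j G_sigma(d_i - y_j).
            return gkernel(d[:, None] - y[None, :], sigma).mean()

        class SlidingCorrentropy:
            # O(N) per-iteration update: when y_new replaces the oldest output
            # sample, the running double sum is corrected instead of rebuilt.
            def __init__(self, d, y, sigma):
                self.d, self.y, self.sigma = d, y.copy(), sigma
                self.total = gkernel(d[:, None] - y[None, :], sigma).sum()
                self.oldest = 0

            def update(self, y_new):
                old = self.y[self.oldest]
                self.total += (gkernel(self.d - y_new, self.sigma).sum()
                               - gkernel(self.d - old, self.sigma).sum())
                self.y[self.oldest] = y_new
                self.oldest = (self.oldest + 1) % len(self.y)
                return self.total / (len(self.d) * len(self.y))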