• Title/Summary/Keyword: Point-kernel method


Multiple change-point estimation in spectral representation

  • Kim, Jaehee
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.1
    • /
    • pp.127-150
    • /
    • 2022
  • We discuss multiple change-point estimation as edge detection in piecewise smooth functions with finitely many jump discontinuities. In this paper we propose change-point estimators based on concentration kernels applied to Fourier coefficients. The change-points are located from the signal's Fourier representation, and the method yields both the locations and the amplitudes of the change-points, refined via concentration kernels. We prove that, in an appropriate asymptotic framework, the method provides consistent change-point estimators with an almost optimal rate. A simulation study compares and discusses the proposed estimators. Applications are given to Nile River flow data and daily won-dollar exchange rate data.
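The concentration-kernel estimator itself is specialized, but the underlying idea of flagging jump discontinuities from local contrasts can be sketched with a simple one-sided difference statistic (an illustrative stand-in, not the paper's Fourier-based estimator; the window size and threshold below are made-up parameters):

```python
import numpy as np

def jump_statistic(y, h):
    """One-sided difference statistic: mean of the h points to the right
    minus mean of the h points to the left of each index. Its absolute
    value peaks at jump discontinuities, with height ~ jump amplitude."""
    n = len(y)
    stat = np.zeros(n)
    for i in range(h, n - h):
        stat[i] = y[i:i + h].mean() - y[i - h:i].mean()
    return stat

def detect_changes(y, h, thresh):
    """Candidate change-points: local maxima of |stat| above a threshold."""
    stat = np.abs(jump_statistic(y, h))
    cands = [i for i in range(1, len(y) - 1)
             if stat[i] >= thresh
             and stat[i] >= stat[i - 1] and stat[i] >= stat[i + 1]]
    return cands, stat

# piecewise-constant signal with jumps at indices 100 and 250
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(100), 2 * np.ones(150), -np.ones(100)])
y = y + 0.1 * rng.standard_normal(len(y))
cps, stat = detect_changes(y, h=20, thresh=1.0)
```

Here `cps` should flag indices near the true jumps at 100 and 250; in the paper, concentration kernels play the role of this crude difference window while achieving an almost optimal rate.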

A Protection Technique for Kernel Functions under the Windows Operating System (윈도우즈 운영체제 기반 커널 함수 보호 기법)

  • Back, Dusung;Pyun, Kihyun
    • Journal of Internet Computing and Services
    • /
    • v.15 no.5
    • /
    • pp.133-139
    • /
    • 2014
  • Recently, the Microsoft Windows OS (operating system) has been widely used for internet banking, games, etc. The kernel functions provided by the Windows OS can perform memory accesses, inspect keyboard input/output, and produce graphics output for any process. Thus, many hacking programs exploit them for memory hacking, keyboard hacking, and building illegal automation tools for game programs. Existing protection mechanisms decide whether hacking programs are present by inspecting certain kernel data structures and the initial parts of kernel functions. In this paper, we point out the drawbacks of existing methods and propose a new solution that remedies them by modifying the system service dispatcher code. If the dispatcher code is exploited by a hacking program, existing protection methods cannot detect the illegal operations. Thus, we suggest that protection methods should also inspect modifications of the dispatcher code, in addition to kernel data structures and the initial parts of kernel functions.
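The core detection idea, comparing a known-clean snapshot of a code region (such as the system service dispatcher) against its current bytes, can be sketched in user space; Python is used here purely for illustration, whereas a real implementation would run in kernel mode and read the actual dispatcher region:

```python
import hashlib

def snapshot(code_bytes):
    """Record a baseline digest of a code region taken at a
    known-clean moment (e.g. right after boot)."""
    return hashlib.sha256(code_bytes).hexdigest()

def is_tampered(code_bytes, baseline):
    """Re-hash the region and compare against the baseline; any
    in-place patch (inline hook, detour JMP) changes the digest."""
    return hashlib.sha256(code_bytes).hexdigest() != baseline

clean = bytes.fromhex("8bff558bec")   # stand-in for dispatcher bytes
baseline = snapshot(clean)
hooked = b"\xe9" + clean[1:]          # first byte patched to a JMP opcode
```

With this scheme, `is_tampered(clean, baseline)` is false while `is_tampered(hooked, baseline)` is true, which is the check the paper argues should be added alongside the existing kernel-data-structure inspections.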

Efficient Kernel Based 3-D Source Localization via Tensor Completion

  • Lu, Shan;Zhang, Jun;Ma, Xianmin;Kan, Changju
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.1
    • /
    • pp.206-221
    • /
    • 2019
  • Source localization in three-dimensional (3-D) wireless sensor networks (WSNs) is becoming a major research focus. Due to the complicated air-ground environments involved in 3-D positioning, many traditional localization methods, such as those based on received signal strength (RSS), may have relatively poor accuracy. Benefiting from prior learning mechanisms, fingerprinting-based localization methods are less sensitive to complex conditions and can provide relatively accurate localization. However, fingerprinting-based methods require training data at each grid point to construct the fingerprint database, the overhead of which is very high, particularly for 3-D localization. In addition, some of the measured data may be unavailable due to interference from a complicated environment. In this paper, we propose an efficient kernel-based 3-D localization algorithm via tensor completion. We first exploit the spatial correlation of the RSS data and demonstrate the low-rank property of the RSS data matrix. Based on this, a new training scheme is proposed that uses tensor completion to recover the missing data of the fingerprint database. Finally, we propose a kernel-based learning technique for the matching phase to improve the sensitivity and accuracy of the final source position estimate. Simulation results show that the new method effectively eliminates the impairment caused by incomplete sensing data and improves localization performance.
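As a toy illustration of the recovery step, a matrix (rather than tensor) analogue can be sketched with iterative SVD truncation that fills missing fingerprint entries; the rank, observation ratio, and matrix sizes below are arbitrary assumptions, not the paper's settings:

```python
import numpy as np

def lowrank_complete(M, mask, rank, iters=200):
    """Fill missing entries of M (mask==True where observed) by
    alternating a rank-r SVD truncation with re-imposing the
    observed entries -- a simplified matrix analogue of the
    tensor-completion step used to rebuild the fingerprint database."""
    X = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # project to rank r
        X = np.where(mask, M, X)                   # keep observed RSS fixed
    return X

rng = np.random.default_rng(1)
# synthetic rank-2 "RSS fingerprint" matrix (grid points x anchors)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 8))
mask = rng.random(A.shape) > 0.3          # roughly 70% of entries observed
R = lowrank_complete(A, mask, rank=2)
err = np.linalg.norm(R - A) / np.linalg.norm(A)
```

Because the synthetic matrix is exactly rank 2, the low-rank projection can reconstruct the hidden 30% of entries from the observed ones, which is the spatial-correlation argument the paper exploits.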

Uncertainty analysis of containment dose rate for core damage assessment in nuclear power plants

  • Wu, Guohua;Tong, Jiejuan;Gao, Yan;Zhang, Liguo;Zhao, Yunfei
    • Nuclear Engineering and Technology
    • /
    • v.50 no.5
    • /
    • pp.673-682
    • /
    • 2018
  • One of the most widely used methods to estimate core damage during a nuclear power plant accident is containment radiation measurement. The evolution of severe accidents is extremely complex, leading to uncertainty in the containment dose rate (CDR). Therefore, it is difficult to accurately determine core damage. This study proposes to conduct uncertainty analysis of CDR for core damage assessment. First, based on source term estimation, the Monte Carlo (MC) and point-kernel integration methods were used to estimate the probability density function of the CDR under different extents of core damage in accident scenarios with late containment failure. Second, the results were verified by comparing the results of both methods. The point-kernel integration method results were more dispersed than the MC results, and the MC method was used for both quantitative and qualitative analyses. Quantitative analysis indicated a linear relationship, rather than the expected proportional relationship, between the CDR and core damage fraction. The CDR distribution obeyed a logarithmic normal distribution in accidents with a small break in containment, but not in accidents with a large break in containment. A possible application of our analysis is a real-time core damage estimation program based on the CDR.
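For reference, the point-kernel method evaluates the uncollided contribution of each source point as S·B(μr)·e^(−μr)/(4πr²) and superposes the contributions over all sources. A minimal sketch follows; the source strengths, attenuation coefficient, and simple linear buildup factor are illustrative assumptions, not the paper's containment model:

```python
import math

def point_kernel_dose(S, mu, r, buildup=None):
    """Flux from an isotropic point source of strength S (photons/s)
    with attenuation coefficient mu (1/cm) at distance r (cm),
    optionally multiplied by a buildup factor B(mu*r)."""
    B = buildup(mu * r) if buildup else 1.0
    return S * B * math.exp(-mu * r) / (4.0 * math.pi * r * r)

def dose_from_sources(sources, mu, point, buildup=None):
    """Point-kernel integration: superpose contributions from
    discretized sources, each given as ((x, y, z), strength)."""
    total = 0.0
    for (x, y, z), S in sources:
        r = math.dist((x, y, z), point)
        total += point_kernel_dose(S, mu, r, buildup)
    return total

# a simple linear buildup factor in mean free paths (illustrative)
linear_buildup = lambda mfp: 1.0 + mfp

sources = [((0, 0, 0), 1e10), ((100, 0, 0), 5e9)]
flux = dose_from_sources(sources, mu=0.006, point=(50.0, 0.0, 0.0),
                         buildup=linear_buildup)
```

The Monte Carlo method instead transports individual particle histories, which is why the paper can cross-check the two estimates of the containment dose rate.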

Estimation of Probability Precipitation by Regional Frequency Analysis using Cluster analysis and Variable Kernel Density Function (군집분석과 변동핵밀도함수를 이용한 지역빈도해석의 확률강우량 산정)

  • Oh, Tae Suk;Moon, Young-Il;Oh, Keun-Taek
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.28 no.2B
    • /
    • pp.225-236
    • /
    • 2008
  • Probability precipitation for the design of hydrological projects can be determined by point frequency analysis or by regional frequency analysis. Probability precipitation is usually calculated by point frequency analysis using rainfall data observed at gauges within the basin, so it requires observed rainfall records of sufficient length; with insufficient data, the estimated parameters can be unreliable. Regional frequency analysis can compensate for a lack of precipitation data, although it has its own weaknesses compared to point frequency analysis because of the assumptions it makes about the underlying probability distributions. In this paper, rainfall stations in Korea were grouped by cluster analysis using the station locations and the characteristics of their hourly rainfall. Discordancy and heterogeneity measures were used to verify the station groups obtained from the cluster analysis, dividing the rainfall stations in Korea into six regions. Regional frequency analysis was then applied with index-flood and L-moment techniques, and probability precipitation was also calculated by regional frequency analysis using a variable kernel density function. The results show that regional frequency analysis with the variable kernel function is useful when it is difficult to choose a suitable probability distribution for the other methods.
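A sample-point variable-kernel density estimate, where the local bandwidth grows in regions where a pilot density is low, can be sketched as follows. This is one common recipe (inverse-square-root bandwidth factors from a fixed-bandwidth Gaussian pilot); the paper's exact scheme and bandwidth choices may differ:

```python
import numpy as np

def variable_kde(x, data, pilot_h):
    """Variable-kernel density estimate: each data point gets its own
    bandwidth pilot_h * lambda_i, with lambda_i large where the pilot
    density is small, so sparse tails are smoothed more."""
    def gauss(u):
        return np.exp(-0.5 * u * u) / np.sqrt(2 * np.pi)
    # fixed-bandwidth Gaussian pilot density at each data point
    pilot = np.array([gauss((d - data) / pilot_h).mean() / pilot_h
                      for d in data])
    lam = np.sqrt(pilot.mean() / pilot)        # local bandwidth factors
    h_i = pilot_h * lam
    x = np.atleast_1d(x)
    return np.array([(gauss((xi - data) / h_i) / h_i).mean() for xi in x])

rng = np.random.default_rng(2)
data = rng.gamma(2.0, 10.0, 300)               # skewed "rainfall" sample
grid = np.linspace(-50, 250, 1501)
dens = variable_kde(grid, data, pilot_h=5.0)
```

Quantiles of this density then give nonparametric probability precipitation estimates without committing to a fixed parametric distribution, which is the advantage the paper highlights.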

An Algorithm of Score Function Generation using Convolution-FFT in Independent Component Analysis (독립성분분석에서 Convolution-FFT을 이용한 효율적인 점수함수의 생성 알고리즘)

  • Kim Woong-Myung;Lee Hyon-Soo
    • The KIPS Transactions:PartB
    • /
    • v.13B no.1 s.104
    • /
    • pp.27-34
    • /
    • 2006
  • In this study, we propose a new algorithm that generates the score function in ICA (Independent Component Analysis) using entropy theory. To generate the score function, the probability density function of the original signals must be estimated, and the density function must be differentiable. We therefore used the kernel density estimation method to derive the differential form of the score function from the original signal. After rewriting the formula in convolution form to speed up the density estimation, we used the FFT algorithm, which computes convolutions quickly. The proposed score function generation method reduces the error, i.e., the density difference between the recovered signals and the original signals. In computer simulations of the blind source separation problem, the estimated density function was closer to that of the original signals than with Extended Infomax and Fixed-Point ICA, and improved performance was obtained in the SNR (signal-to-noise ratio) between the recovered and original signals.
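The pipeline described, kernel density estimation rewritten as a convolution and evaluated with the FFT, then differentiated to obtain the score φ(x) = −p′(x)/p(x), can be sketched as follows. The grid size, bandwidth, and numerical differentiation step are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def score_function(samples, h, n_grid=512):
    """Estimate phi(x) = -p'(x)/p(x) on a grid: bin the samples,
    smooth with a Gaussian kernel via FFT-based circular convolution
    (the range is padded so wrap-around is negligible), then
    differentiate the density numerically."""
    lo, hi = samples.min() - 5 * h, samples.max() + 5 * h
    edges = np.linspace(lo, hi, n_grid + 1)
    grid = 0.5 * (edges[:-1] + edges[1:])
    dx = edges[1] - edges[0]
    hist, _ = np.histogram(samples, bins=edges)
    hist = hist / (len(samples) * dx)            # empirical density
    # Gaussian kernel on the same spacing, normalized to unit integral
    k = np.exp(-0.5 * ((np.arange(n_grid) - n_grid // 2) * dx / h) ** 2)
    k = k / (k.sum() * dx)
    # circular convolution via FFT, kernel rolled so its center sits at 0
    p = np.fft.irfft(np.fft.rfft(hist) * np.fft.rfft(np.roll(k, n_grid // 2)),
                     n=n_grid) * dx
    p = np.maximum(p, 1e-12)
    dp = np.gradient(p, dx)
    return grid, -dp / p

rng = np.random.default_rng(3)
samples = rng.standard_normal(5000)
grid, phi = score_function(samples, h=0.3)
```

For a standard normal source the true score is φ(x) = x, so the estimate should track the identity near the origin; the FFT turns the O(n²) direct convolution into O(n log n), which is the speedup the paper relies on.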

Unsteady Aerodynamic Analysis of an Aircraft Using a Frequency Domain 3-D Panel Method (주파수영역 3차원 패널법을 이용한 항공기의 비정상 공력해석)

  • 김창희;조진수;염찬홍
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.18 no.7
    • /
    • pp.1808-1817
    • /
    • 1994
  • Unsteady aerodynamic analysis of an aircraft is performed using a frequency domain 3-D panel method. The method is based on an unsteady linear compressible lifting-surface theory. The lifting surface is placed in the flight path, and angle-of-attack and camber effects are implemented through the upwash. Fuselage effects are not considered. The unsteady solutions of the code are validated by comparison with the solutions of a hybrid doublet lattice-doublet point method and a doublet point method for various wing configurations at subsonic and supersonic flow conditions. Calculated dynamic stability derivatives for the aircraft are shown without comparison due to the lack of available measured data or published calculations.

A Study on the Trade Area Analysis Model based on GIS - A Case of Huff probability model - (GIS 기반의 상권분석 모형 연구 - Huff 확률모형을 중심으로 -)

  • Son, Young-Gi;An, Sang-Hyun;Shin, Young-Chul
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.10 no.2
    • /
    • pp.164-171
    • /
    • 2007
  • This research used GIS spatial analysis model and Huff probability model and achieved trade area analysis of area center. we constructed basic maps that were surveyed according to types of business, number of households etc. using a land registration map of LMIS(Land Management Information System) in Bokdae-dong, Cheongju-si. Kernel density function and NNI(Nearest Neighbor Index) was used to estimate store distribution center area in neighborhood life zones. The center point of area and scale were estimated by means of the center area. Huff probability model was used in abstracting trade areas according to estimated center areas, those was drew map. Therefore, this study describes method that can apply in Huff probability model through kernel density function and NNI of GIS spatial analysis techniques. A trade area was abstracted more exactly by taking advantage of this method, which will can aid merchant for the foundation of small sized enterprises.


An efficient reliability analysis strategy for low failure probability problems

  • Cao, Runan;Sun, Zhili;Wang, Jian;Guo, Fanyi
    • Structural Engineering and Mechanics
    • /
    • v.78 no.2
    • /
    • pp.209-218
    • /
    • 2021
  • In engineering, there are two major challenges in reliability analysis. First, to ensure the accuracy of simulation results, mechanical products are usually defined implicitly by complex numerical models that are time-consuming to evaluate. Second, mechanical products are designed with a large safety margin, which leads to a low failure probability. This paper proposes an efficient, high-precision adaptive active learning algorithm based on the Kriging surrogate model to deal with problems involving low failure probabilities and time-consuming numerical models. To handle problems with multiple failure regions, adaptive kernel density estimation is introduced and improved. Meanwhile, a new criterion for selecting points based on the current Kriging model is proposed to improve computational efficiency. The criterion for choosing the best sampling points considers not only the probability that the Kriging model misjudges the sign of the response at a point but also the distribution information at that point. To prevent the selected training points from being too close together, the correlation between training points is limited, avoiding information redundancy and improving the computational efficiency of the algorithm. Finally, the efficiency and accuracy of the proposed method are verified against other algorithms on two academic examples and one engineering application.
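The paper's selection criterion is new, but its starting point, the classic U learning function of Kriging-based reliability methods combined with a minimum-distance constraint between training points, can be sketched as follows (a simplification, not the authors' exact criterion):

```python
import numpy as np

def u_criterion(mean, std):
    """Classic U learning function: U = |mu| / sigma. A small U means
    the Kriging prediction of the limit-state sign (failure if g < 0)
    is most likely to be wrong at that candidate point."""
    return np.abs(mean) / np.maximum(std, 1e-12)

def next_training_point(cand, mean, std, chosen, min_dist):
    """Pick the candidate with the smallest U, skipping any point
    closer than min_dist to an already-chosen training point
    (a simple stand-in for the paper's correlation limit)."""
    order = np.argsort(u_criterion(mean, std))
    for i in order:
        if all(np.linalg.norm(cand[i] - c) >= min_dist for c in chosen):
            return int(i)
    return int(order[0])   # fall back if every candidate is too close

cand = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
mean = np.array([0.1, 0.05, 3.0])     # Kriging predictions of g
std = np.ones(3)                      # Kriging prediction uncertainties
idx = next_training_point(cand, mean, std,
                          chosen=[np.array([1.0, 0.0])], min_dist=0.5)
```

Candidate 1 has the smallest U but is rejected for sitting on an existing training point, so candidate 0 is evaluated next; this distance screening is what prevents redundant expensive model calls.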

Nonlinear Chemical Plant Modeling using Support Vector Machines: pH Neutralization Process is Targeted (SVM을 이용한 비선형 화학공정 모델링: pH 중화공정에의 적용 예)

  • Kim, Dong-Won;Yoo, Ah-Rim;Yang, Dae-Ryook;Park, Gwi-Tae
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.12
    • /
    • pp.1178-1183
    • /
    • 2006
  • This paper is concerned with the modeling and identification of the pH neutralization process as a nonlinear chemical system. pH control is applied in various chemical processes such as wastewater treatment and the chemical and biochemical industries, but controlling pH is very difficult due to its highly nonlinear nature: the titration curve has its steepest slope at the neutralization point. We apply SVMs, which have become an increasingly popular tool for machine learning tasks such as classification, regression, and detection, to model the strongly nonlinear pH process. Linear and radial basis function kernels are employed, and the results are compared. The SVM based on the kernel method was found to work well: simulations show that the SVM with kernel substitution, using linear and radial basis function kernels, provides a promising alternative not only for modeling the strong nonlinearities of pH neutralization but also for controlling the system.