• Title/Summary/Keyword: Gaussian kernel

INSTABILITY OF THE BETTI SEQUENCE FOR PERSISTENT HOMOLOGY AND A STABILIZED VERSION OF THE BETTI SEQUENCE

  • JOHNSON, MEGAN;JUNG, JAE-HUN
    • Journal of the Korean Society for Industrial and Applied Mathematics
    • /
    • v.25 no.4
    • /
    • pp.296-311
    • /
    • 2021
  • Topological Data Analysis (TDA), a relatively new field of data analysis, has proved very useful in a variety of applications. The main tool from TDA is persistent homology, in which the structure of data is examined at many scales. Representations of persistent homology include persistence barcodes and persistence diagrams, both of which are not straightforward to reconcile with traditional machine learning algorithms, as they are sets of intervals or multisets. The problem of faithfully representing barcodes and persistence diagrams has been pursued along two main avenues: kernel methods and vectorizations. One vectorization is the Betti sequence, or Betti curve, derived from the persistence barcode. While the Betti sequence has been used in classification problems in various applications, to our knowledge the stability of the sequence has never before been discussed. In this paper we show that the Betti sequence is unstable under the 1-Wasserstein metric with regard to small perturbations in the barcode from which it is calculated. In addition, we propose a novel stabilized version of the Betti sequence based on the Gaussian smoothing seen in the Stable Persistence Bag of Words for persistent homology. We then introduce the normalized cumulative Betti sequence and provide numerical examples that support the main statement of the paper.
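As a rough illustration of the two objects the abstract contrasts, the sketch below computes a plain Betti curve from a persistence barcode and a Gaussian-smoothed variant in which each interval's indicator is convolved with a Gaussian kernel. It is a minimal sketch under assumed data (the barcode and grid are made up) and is not the paper's exact stabilized Betti sequence or its normalized cumulative form.

```python
import numpy as np
from scipy.special import erf

def betti_curve(barcode, grid):
    """Betti curve: number of barcode intervals alive at each grid point."""
    curve = np.zeros(len(grid))
    for birth, death in barcode:
        curve += (grid >= birth) & (grid < death)
    return curve

def smoothed_betti_curve(barcode, grid, sigma=0.05):
    """Gaussian-smoothed curve: each interval's hard indicator is convolved
    with a Gaussian kernel of width sigma (an illustrative stabilization,
    not the paper's exact construction)."""
    curve = np.zeros(len(grid))
    for birth, death in barcode:
        curve += 0.5 * (erf((grid - birth) / (sigma * np.sqrt(2)))
                        - erf((grid - death) / (sigma * np.sqrt(2))))
    return curve

# Hypothetical barcode: three persistence intervals on a filtration scale [0, 1]
barcode = [(0.0, 0.8), (0.1, 0.5), (0.3, 0.9)]
grid = np.linspace(0.0, 1.0, 101)
hard = betti_curve(barcode, grid)
soft = smoothed_betti_curve(barcode, grid)
```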

Study on the Characteristics of Cavitation Bubble Generation According to Venturi Nozzle Outlet Shape and Operating Conditions (벤츄리 노즐 출구 형상과 작동 조건에 따른 캐비테이션 기포 발생 특성 연구)

  • Changhoon Oh;Joon Hyun Kim;Jaeyong Sung
    • Journal of the Korean Society of Visualization
    • /
    • v.21 no.1
    • /
    • pp.94-102
    • /
    • 2023
  • Three design parameters were considered in this study: outlet nozzle angle (30°, 60°, 80°), neck length (1 mm, 3 mm), and flow rate (0.5, 0.6, 0.7, 0.8 lpm). A venturi nozzle with a neck diameter of 0.5 mm was used to induce cavitating flow. A secondary transparent chamber was connected downstream of the nozzle exit to increase bubble duration and the visibility of bubble shapes. Bubble sizes were estimated by identifying bubbles in the acquired images with a Gaussian kernel function. The bubble-size data were used to obtain the Sauter mean diameter and the probability density function for specific bubble-state conditions. The degree of bubble generation according to bubble size was compared for each design variable. The bubble diameter increased as the flow rate increased, and the frequency of bubble generation was highest around 20 μm. For the same neck length, the smaller the CV number, the larger the average bubble diameter. The generation frequency of smaller bubbles can be increased in this cavitation method by changing the outlet expansion angle and the neck length. However, if the flow rate is too large, the average bubble diameter tends to increase, so an appropriate flow rate should be selected.
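For readers unfamiliar with the two statistics named in the abstract, the sketch below shows a Gaussian-kernel density estimate of a bubble-size distribution and the Sauter mean diameter d32 = Σd³/Σd². The diameters, bandwidth and grid are hypothetical, and the image-based bubble-identification step used in the paper is not reproduced.

```python
import numpy as np

def gaussian_kde_pdf(diameters, grid, bandwidth):
    """Gaussian-kernel density estimate of the bubble-size distribution."""
    d = np.asarray(diameters, dtype=float)
    u = (grid[:, None] - d[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(d) * bandwidth * np.sqrt(2 * np.pi))

def sauter_mean_diameter(diameters):
    """Sauter mean diameter d32 = sum(d^3) / sum(d^2)."""
    d = np.asarray(diameters, dtype=float)
    return (d**3).sum() / (d**2).sum()

# Hypothetical diameters in micrometres (the abstract reports a peak near 20 um)
diameters = np.random.default_rng(0).lognormal(mean=np.log(20), sigma=0.4, size=500)
grid = np.linspace(1, 100, 400)
pdf = gaussian_kde_pdf(diameters, grid, bandwidth=2.0)
d32 = sauter_mean_diameter(diameters)
```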

A New Bias Scheduling Method for Improving Both Classification Performance and Precision on the Classification and Regression Problems (분류 및 회귀문제에서의 분류 성능과 정확도를 동시에 향상시키기 위한 새로운 바이어스 스케줄링 방법)

  • Kim Eun-Mi;Park Seong-Mi;Kim Kwang-Hee;Lee Bae-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.11
    • /
    • pp.1021-1028
    • /
    • 2005
  • General solutions to classification and regression problems can be found by constructing matrices that encode real-world information and then learning them with neural networks. This paper treats the primary space as the real world, and a dual space to which the primary space is mapped by a kernel. In practice there are two kinds of problems: complete systems, whose answer can be obtained with the inverse of the given matrix, and ill-posed or singular systems, whose answer cannot be obtained directly from the inverse of the given matrix. Since problems are often of the latter kind, it is necessary to find a regularization parameter that turns an ill-posed or singular problem into a complete system. This paper compares the performance, on both classification and regression problems, of GCV and L-Curve, which are well known methods for obtaining the regularization parameter, and of kernel methods. Both GCV and L-Curve yield excellent regularization parameters, and their performances are similar, although the results differ slightly depending on the conditions of the problem. However, these methods are two-step solutions, because the regularization parameter must first be computed and then the problem can be passed to another solving method. Compared with GCV and L-Curve, kernel methods are one-step solutions that learn the regularization parameter simultaneously with the pattern weights during training. This paper also suggests a dynamic momentum, adjusted in limited proportion to the learning epoch and the performance on the given problem, to increase the performance and precision of the regularization. Finally, experiments on the Iris data, a standard classification benchmark, Gaussian data, which are typical of singular systems, and the Shaw data, a one-dimensional image restoration problem, show that the suggested solution obtains better or equivalent results compared with GCV and L-Curve.
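As a concrete reminder of how one of the compared two-step baselines picks the regularization parameter, the sketch below evaluates the GCV score of a Tikhonov-regularized least-squares problem via the SVD and selects the minimizing parameter. The test matrix and noise level are hypothetical stand-ins, not the Iris, Gaussian or Shaw data used in the paper.

```python
import numpy as np

def gcv_score(A, b, lam):
    """Generalized cross-validation score for Tikhonov regularization
    x_lam = argmin ||Ax - b||^2 + lam^2 ||x||^2, computed via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)                      # filter factors
    beta = U.T @ b
    resid_in = np.linalg.norm((1 - f) * beta)**2    # residual inside range(A)
    resid_out = np.linalg.norm(b - U @ beta)**2     # component outside range(A)
    trace = A.shape[0] - f.sum()                    # trace(I - influence matrix)
    return (resid_in + resid_out) / trace**2

def choose_lambda(A, b, lambdas):
    """Pick the regularization parameter that minimizes the GCV score."""
    scores = [gcv_score(A, b, lam) for lam in lambdas]
    return lambdas[int(np.argmin(scores))]

# Hypothetical ill-conditioned system standing in for a singular problem
rng = np.random.default_rng(1)
A = np.vander(np.linspace(0, 1, 40), 20, increasing=True)
x_true = rng.normal(size=20)
b = A @ x_true + 1e-3 * rng.normal(size=40)
lam = choose_lambda(A, b, np.logspace(-8, 1, 50))
```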

Palatability Grading Analysis of Hanwoo Beef using Sensory Properties and Discriminant Analysis (관능특성 및 판별함수를 이용한 한우고기 맛 등급 분석)

  • Cho, Soo-Hyun;Seo, Gu-Reo-Un-Dal-Nim;Kim, Dong-Hun;Kim, Jae-Hee
    • Food Science of Animal Resources
    • /
    • v.29 no.1
    • /
    • pp.132-139
    • /
    • 2009
  • The objective of this study was to investigate the most effective analysis method for palatability grading of Hanwoo beef by comparing the results of discriminant analyses with sensory data. The sensory data were obtained from sensory tests in which 1,300 consumers evaluated the tenderness, juiciness, flavor-likeness and overall acceptability of Hanwoo beef samples prepared by boiling, roasting and grilling. For the discriminant analysis with one factor, overall acceptability, linear discriminant functions and a non-parametric discriminant function with a Gaussian kernel were estimated. The linear discriminant functions were simple and easy to understand, while the non-parametric discriminant functions were not explicit and posed the problem of selecting the kernel function and bandwidth. With the three palatability factors, tenderness, juiciness and flavor-likeness, canonical discriminant analysis was used, and the classification ability was measured with the correct classification rate and the error rate. The canonical discriminant analysis did not require specific distributional assumptions and used only principal components and canonical correlations. It also incorporated all three factors (tenderness, juiciness and flavor-likeness), and its correct classification rate was similar to that of the other discriminant methods. Therefore, canonical discriminant analysis was the most appropriate method for analyzing the palatability grading of Hanwoo beef.
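A minimal sketch of a Gaussian-kernel non-parametric discriminant rule of the kind the abstract compares is given below: each class density is estimated with a Gaussian kernel and a sample is assigned to the class with the largest prior-weighted density. The sensory scores, priors and bandwidth are hypothetical; the paper's linear and canonical discriminant analyses are not reproduced.

```python
import numpy as np

def gaussian_kernel_density(x, samples, bandwidth):
    """Gaussian kernel density estimate at point x from one class's samples."""
    d = samples.shape[1]
    u = (samples - x) / bandwidth
    k = np.exp(-0.5 * (u**2).sum(axis=1)) / ((2 * np.pi)**(d / 2) * bandwidth**d)
    return k.mean()

def kernel_discriminant(x, class_samples, priors, bandwidth=0.5):
    """Assign x to the class with the largest prior * kernel density."""
    scores = [p * gaussian_kernel_density(x, s, bandwidth)
              for s, p in zip(class_samples, priors)]
    return int(np.argmax(scores))

# Hypothetical sensory scores (tenderness, juiciness, flavor-likeness) per grade
rng = np.random.default_rng(2)
grade1 = rng.normal([7, 7, 7], 0.8, size=(50, 3))
grade2 = rng.normal([5, 5, 5], 0.8, size=(50, 3))
grade3 = rng.normal([3, 3, 3], 0.8, size=(50, 3))
label = kernel_discriminant(np.array([6.0, 6.5, 6.2]),
                            [grade1, grade2, grade3],
                            priors=[1/3, 1/3, 1/3])
```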

Future Korean Water Resources Projection Considering Uncertainty of GCMs and Hydrological Models (GCM과 수문모형의 불확실성을 고려한 기후변화에 따른 한반도 미래 수자원 전망)

  • Bae, Deg-Hyo;Jung, Il-Won;Lee, Byung-Ju;Lee, Moon-Hwan
    • Journal of Korea Water Resources Association
    • /
    • v.44 no.5
    • /
    • pp.389-406
    • /
    • 2011
  • The objective of this study is to assess the impact of climate change on Korean water resources while considering the uncertainties of Global Climate Models (GCMs) and hydrological models. Three emission scenarios (A2, A1B, B1) and the results of 13 GCMs are used to account for the uncertainties of the emission scenarios and GCMs, while the PRMS, SWAT, and SLURP models are employed to account for the effects of hydrological model structures and potential evapotranspiration (PET) computation methods. The 312 ensemble results are produced for 109 mid-size sub-basins over South Korea, and Gaussian kernel density functions obtained from the ensemble results are presented together with the ensemble means and their variabilities. The results show that summer and winter runoffs are expected to increase and spring runoff to decrease in the three future periods relative to the past 30-year reference period. They also show that annual average runoff increases over all sub-basins, but the increases in the northern basins, including the Han River basin, are greater than those in the southern basins. Because the increase in annual average runoff is mainly caused by the increase in summer runoff, and the seasonal runoff variations under climate change would therefore become more severe, the impact of climate change on Korean water resources could intensify the difficulties of water resources conservation and management. With regard to the uncertainties, the highest and lowest ones occur in the winter and summer seasons, respectively.
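The sketch below illustrates, under assumed numbers, how a Gaussian kernel density function can summarize an ensemble of runoff projections for one sub-basin, together with the ensemble mean and spread. The 312-member ensemble here is synthetic and only mirrors the ensemble size mentioned in the abstract.

```python
import numpy as np

def gaussian_kde(values, grid, bandwidth=None):
    """Gaussian-kernel density of an ensemble of projections (e.g. % runoff change)."""
    v = np.asarray(values, dtype=float)
    if bandwidth is None:                       # Silverman's rule of thumb
        bandwidth = 1.06 * v.std(ddof=1) * len(v) ** (-1 / 5)
    u = (grid[:, None] - v[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(v) * bandwidth * np.sqrt(2 * np.pi))

# Hypothetical 312-member ensemble for one sub-basin (% change in annual runoff)
rng = np.random.default_rng(3)
ensemble = rng.normal(loc=8.0, scale=12.0, size=312)
grid = np.linspace(-40, 60, 500)
density = gaussian_kde(ensemble, grid)
mean_change, spread = ensemble.mean(), ensemble.std(ddof=1)
```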

No-reference objective quality assessment of image using blur and blocking metric (블러링과 블록킹 수치를 이용한 영상의 무기준법 객관적 화질 평가)

  • Jeong, Tae-Uk;Kim, Young-Hie;Lee, Chul-Hee
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.3
    • /
    • pp.96-104
    • /
    • 2009
  • In this paper, we propose no-reference objective quality assessment metrics for images. The blockiness and blurring of edge areas, which are sensitive to the human visual system, are modeled as step functions. Blocking and blur metrics are obtained by estimating the local visibility of blockiness and the edge width. For the blocking metric, horizontal and vertical blocking lines are first determined by accumulating weighted differences of adjacent pixels, and then the local visibility of blockiness at the intersection of blocking lines is obtained from the total difference of amplitudes of the 2-D step function with which a blocking region is modeled. For the blur metric, the blurred input image is first re-blurred by a Gaussian blur kernel and an edge mask image is generated. In edge blocks, the local edge width is calculated from four directional projections (horizontal, vertical and two diagonal directions) using local extrema positions. In addition, the kurtosis and SSIM are used to compute the blur metric. The final no-reference objective metric is computed after those values are combined using an appropriate function. Experimental results show that the proposed objective metrics are highly correlated with the subjective data.
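As a simplified illustration of the re-blur idea mentioned for the blur metric, the sketch below re-blurs an image with a Gaussian blur kernel and compares gradient energy before and after; the edge-width projections, kurtosis and SSIM terms of the paper are omitted. The function name, sigma and test images are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def reblur_edge_ratio(image, sigma=1.5):
    """Re-blur the (possibly already blurred) image with a Gaussian kernel and
    compare gradient energy before and after; a ratio near 1 suggests the input
    was already strongly blurred. Illustrative only, not the paper's metric."""
    img = image.astype(float)
    reblurred = gaussian_filter(img, sigma=sigma)
    grad = np.hypot(sobel(img, axis=0), sobel(img, axis=1))
    grad_rb = np.hypot(sobel(reblurred, axis=0), sobel(reblurred, axis=1))
    return grad_rb.sum() / (grad.sum() + 1e-12)

# Hypothetical usage on a synthetic sharp image and a blurred copy
rng = np.random.default_rng(4)
sharp = rng.random((128, 128))
blurred = gaussian_filter(sharp, sigma=2.0)
print(reblur_edge_ratio(sharp), reblur_edge_ratio(blurred))
```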

Graph Cut-based Automatic Color Image Segmentation using Mean Shift Analysis (Mean Shift 분석을 이용한 그래프 컷 기반의 자동 칼라 영상 분할)

  • Park, An-Jin;Kim, Jung-Whan;Jung, Kee-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.11
    • /
    • pp.936-946
    • /
    • 2009
  • The graph cuts method has recently attracted a lot of attention for image segmentation, as it can globally minimize energy functions composed of a data term, which reflects how well each pixel fits the prior information for each class, and a smoothness term, which penalizes discontinuities between neighboring pixels. In previous approaches to graph cuts-based automatic image segmentation, GMMs (Gaussian mixture models) are generally used, and the means and covariance matrices calculated by the EM algorithm serve as the prior information for each cluster. However, this is practicable only for clusters with a hyper-spherical or hyper-ellipsoidal shape, as each cluster is represented by a covariance matrix centered on its mean. For arbitrarily shaped clusters, this paper proposes graph cuts-based image segmentation using mean shift analysis. As the prior information used to estimate the data term, we use the set of mean trajectories toward each mode, starting from initial means randomly selected in the $L^*u^*v^*$ color space. Since the mean shift procedure is computationally expensive, we transform the features from the continuous feature space into a 3D discrete grid and use a 3D kernel based on the first moment in the grid to move the means to the modes. In the experiments, we investigate the problems of the mean shift-based and normalized cuts-based image segmentation methods, which are recently popular, and the proposed method shows better performance than these two methods and than graph cuts-based automatic image segmentation using GMMs on the Berkeley segmentation dataset.
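A bare-bones version of the mean shift step the abstract builds on is sketched below: a mean is moved toward the nearest density mode with a Gaussian kernel, and its trajectory is recorded, loosely corresponding to the mean trajectories used as prior information. The 3D grid acceleration and the graph cuts stage are not included, and the sample pixels and bandwidth are hypothetical.

```python
import numpy as np

def mean_shift(points, init_mean, bandwidth=0.1, tol=1e-5, max_iter=200):
    """Move a mean toward the nearest mode of the point density using a
    Gaussian kernel, recording the trajectory. Simplified, no grid acceleration."""
    mean = np.asarray(init_mean, dtype=float)
    trajectory = [mean.copy()]
    for _ in range(max_iter):
        w = np.exp(-0.5 * np.sum((points - mean) ** 2, axis=1) / bandwidth**2)
        new_mean = (w[:, None] * points).sum(axis=0) / w.sum()
        trajectory.append(new_mean.copy())
        if np.linalg.norm(new_mean - mean) < tol:
            break
        mean = new_mean
    return np.array(trajectory)

# Hypothetical pixels in a 3-D color space (e.g. L*u*v*), two clusters
rng = np.random.default_rng(5)
pixels = np.vstack([rng.normal([0.2, 0.3, 0.4], 0.05, (300, 3)),
                    rng.normal([0.7, 0.6, 0.5], 0.05, (300, 3))])
traj = mean_shift(pixels, init_mean=pixels[rng.integers(len(pixels))])
```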

Monitoring of Chemical Processes Using Modified Scale Space Filtering and Functional-Link-Associative Neural Network (개선된 스케일 스페이스 필터링과 함수연결연상 신경망을 이용한 화학공정 감시)

  • Park, Jung-Hwan;Kim, Yoon-Sik;Chang, Tae-Suk;Yoon, En-Sup
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.6 no.12
    • /
    • pp.1113-1119
    • /
    • 2000
  • To operate a process plant safely and economically, process monitoring is very important. Process monitoring is the task of identifying the state of the system from sensor data, and it includes data acquisition, regulatory control, data reconciliation, fault detection, etc. This research focuses on data reconciliation using scale-space filtering and on fault detection using functional-link associative neural networks. Scale-space filtering is a multi-resolution signal analysis method that can effectively extract the highest-frequency components (noise), but it has a high computational cost and suffers from end effects. This research reduces the computational cost of scale-space filtering by applying a minimum limit to the Gaussian kernel, and the end effect that occurs at the end of the signal is overcome by using extrapolation combined with a clustering-based change detection method. Nonlinear principal component analysis methods using neural networks are reviewed, and the separately expanded functional-link associative neural network is proposed for chemical process monitoring. The separately expanded functional-link associative neural network has better learning capability, better generalization ability and shorter learning time than existing neural networks, and it can express a statistical model close to the real process by expanding the input data separately. By combining the proposed methods, namely the modified scale-space filtering and the fault detection method using the separately expanded functional-link associative neural network, a process monitoring system is proposed. The usefulness of the proposed method is demonstrated by its application to a boiler water-supply unit.
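The sketch below shows, under assumed parameters, the kind of Gaussian-kernel scale-space filtering the abstract describes: a signal is smoothed at several scales, a minimum kernel width bounds the cost, and simple edge padding stands in for the paper's extrapolation-based end-effect handling. It is not the paper's modified filter, and the functional-link network is not sketched.

```python
import numpy as np

def gaussian_kernel(sigma, truncate=4.0):
    """Discrete 1-D Gaussian kernel truncated at +/- truncate * sigma."""
    radius = int(truncate * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def scale_space(signal, sigmas, min_sigma=0.5):
    """Smooth the signal at several scales; scales below min_sigma are skipped,
    which bounds the kernel length and the filtering cost (illustrative only)."""
    out = {}
    for sigma in sigmas:
        if sigma < min_sigma:
            continue
        k = gaussian_kernel(sigma)
        pad = len(k) // 2
        padded = np.pad(signal, pad, mode="edge")   # crude end-effect handling
        out[sigma] = np.convolve(padded, k, mode="valid")
    return out

# Hypothetical noisy sensor trace from a boiler water-supply unit
rng = np.random.default_rng(6)
t = np.linspace(0, 10, 1000)
trace = np.sin(t) + 0.2 * rng.normal(size=t.size)
smoothed = scale_space(trace, sigmas=[0.3, 1.0, 3.0, 9.0])
```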

PDF-Distance Minimizing Blind Algorithm based on Delta Functions for Compensation for Complex-Channel Phase Distortions (복소 채널의 위상 왜곡 보상을 위한 델타함수 기반의 확률분포거리 최소화 블라인드 알고리듬)

  • Kim, Nam-Yong;Kang, Sung-Jin
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.12
    • /
    • pp.5036-5041
    • /
    • 2010
  • This paper introduces a complex version of a Euclidean distance minimization algorithm based on a set of delta functions. Analysis shows that the algorithm can inherently compensate for the channel phase distortion caused by inferior complex channels. The algorithm also uses a relatively small Gaussian kernel size compared with the conventional method of using a randomly generated symbol set. This characteristic implies that the information potential between the desired symbols and the outputs is higher, so the algorithm forces the outputs more strongly to gather close to the desired symbols. Based on a 16-QAM system and phase-distorted complex-channel models, the mean squared error (MSE) performance and the concentration of the output symbol points are evaluated. Simulation results show that the algorithm compensates the channel phase distortion effectively in the constellation and achieves about 5 dB improvement in steady-state MSE performance.
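To make the cost being minimized more concrete, the sketch below evaluates a Euclidean distance between a Gaussian-kernel estimate of the output PDF and a target PDF made of delta functions at the 16-QAM constellation points, dropping the output-independent self-term of the target. It is an illustrative cost under assumed signals, not the authors' exact criterion or weight-update rule.

```python
import numpy as np

def gaussian(z, sigma):
    """2-D Gaussian kernel evaluated on the magnitude of a complex difference."""
    return np.exp(-np.abs(z) ** 2 / (2 * sigma**2)) / (2 * np.pi * sigma**2)

def pdf_distance(outputs, symbols, sigma=0.3):
    """Euclidean distance between the Gaussian-kernel estimate of the output PDF
    and a PDF of delta functions at the constellation points (the term that does
    not depend on the outputs is dropped). Illustrative cost only."""
    y = np.asarray(outputs)
    s = np.asarray(symbols)
    self_term = gaussian(y[:, None] - y[None, :], sigma * np.sqrt(2)).mean()
    cross_term = gaussian(y[:, None] - s[None, :], sigma).mean()
    return self_term - 2 * cross_term

# Hypothetical 16-QAM constellation and phase-rotated, noisy equalizer outputs
levels = np.array([-3, -1, 1, 3], dtype=float)
qam16 = np.array([a + 1j * b for a in levels for b in levels]) / np.sqrt(10)
rng = np.random.default_rng(7)
tx = rng.choice(qam16, size=200)
rx = tx * np.exp(1j * 0.3) + 0.05 * (rng.normal(size=200) + 1j * rng.normal(size=200))
print(pdf_distance(rx, qam16), pdf_distance(tx, qam16))
```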

Practical Approach for Blind Algorithms Using Random-Order Symbol Sequence and Cross-Correntropy (랜덤오더 심볼열과 상호 코렌트로피를 이용한 블라인드 알고리듬의 현실적 접근)

  • Kim, Namyong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.39A no.3
    • /
    • pp.149-154
    • /
    • 2014
  • The cross-correntropy concept can be expressed as the inner product of two different probability density functions constructed by Gaussian-kernel density estimation. Blind algorithms based on the maximization of the cross-correntropy (MCC) and a symbol set of N randomly generated samples yield superior learning performance, but the weight-update process based on the MCC has a huge computational complexity. In this paper, a method is proposed that reduces the computational complexity of the MCC algorithm by computing the gradient of the cross-correntropy recursively. The proposed method requires only O(N) operations per iteration, while the conventional MCC algorithm, which calculates its gradient by block processing, requires $O(N^2)$. Simulation results show that the proposed method attains the same learning performance while significantly reducing the heavy computational burden.
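The sketch below shows the blockwise evaluation of the cross-correntropy as an inner product of two Gaussian-kernel density estimates, one built from outputs and one from a random-order symbol block; the double sum is the O(N²)-per-update computation that the paper's recursive gradient avoids. The symbol set, noise level and kernel width are hypothetical, and the recursive update itself is not sketched.

```python
import numpy as np

def gaussian(x, sigma):
    """1-D Gaussian kernel."""
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def cross_correntropy(outputs, symbols, sigma=0.5):
    """Inner product of two Gaussian-kernel density estimates: one from the
    outputs, one from the random-order symbol set. The double sum over all
    pairs costs O(N^2) per evaluation when done blockwise, as here."""
    y = np.asarray(outputs, dtype=float)
    s = np.asarray(symbols, dtype=float)
    return gaussian(y[:, None] - s[None, :], sigma * np.sqrt(2)).mean()

# Hypothetical random-order symbol block and noisy equalizer outputs
rng = np.random.default_rng(8)
symbols = rng.choice([-3.0, -1.0, 1.0, 3.0], size=64)
outputs = symbols + 0.2 * rng.normal(size=64)
print(cross_correntropy(outputs, symbols))
```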