• Title/Summary/Keyword: Smoothing algorithm

Neural network based tool path planning for complex pocket machining (신경회로망 방식에 의한 복잡한 포켓형상의 황삭경로 생성)

  • Shin, Yang-Soo; Suh, Suk-Hwan
    • Journal of the Korean Society for Precision Engineering / v.12 no.7 / pp.32-45 / 1995
  • In this paper, we present a new approach to the tool path planning problem for rough-cut pocket milling operations. The key idea is to formulate tool path planning as a TSP (Travelling Salesman Problem) so that the powerful neural network approach can be applied effectively. Specifically, our method is composed of three procedures: a) discretization of the pocket area into a finite number of tool points, b) a neural network approach (the SOM, Self-Organizing Map) for path finding, and c) postprocessing for path smoothing and feedrate adjustment. Through the neural network procedure, an efficient tool path (in the sense of path length and tool retraction) can be obtained robustly for arbitrarily shaped pockets with many islands. In the postprocessing, a) the detailed shape of the path is fine-tuned by eliminating sharp corners of the path segments, and b) any cross-overs between the path segments and islands are removed. With the tool path determined, feedrate adjustment is finally performed so that the motion does not require excessive cutting forces. The validity and power of the algorithm are demonstrated through various computer simulations and real machining.
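The abstract gives no implementation details, so the following is only a minimal sketch of the SOM-for-TSP idea it describes: a ring of neurons relaxes onto a set of 2-D tool points, yielding a short visiting order. Every name and parameter here (som_tour, the neuron count, the learning schedule) is our assumption, not the authors' code.

```python
import numpy as np

def som_tour(points, n_epochs=2000, lr=0.8, seed=0):
    """Order 2-D points into a short closed tour with a 1-D SOM ring.

    A ring of neurons (about 2x as many as points) is pulled toward
    randomly chosen points; a shrinking Gaussian neighborhood keeps the
    ring smooth, so it relaxes into a TSP-like tour.
    """
    rng = np.random.default_rng(seed)
    n = len(points) * 2                                  # neurons on the ring
    ring = points.mean(0) + rng.normal(scale=points.std(0), size=(n, 2))
    for t in range(n_epochs):
        radius = max(n / 8 * (1 - t / n_epochs), 1.0)    # shrinking neighborhood
        p = points[rng.integers(len(points))]
        winner = np.argmin(((ring - p) ** 2).sum(1))
        steps = np.abs(np.arange(n) - winner)
        d = np.minimum(steps, n - steps)                 # circular distance
        h = np.exp(-(d / radius) ** 2)
        ring += lr * (1 - t / n_epochs) * h[:, None] * (p - ring)
    # visit points in the order of their nearest ring neuron
    return np.argsort([np.argmin(((ring - p) ** 2).sum(1)) for p in points])

pts = np.random.default_rng(1).random((40, 2))           # mock "tool points"
print(som_tour(pts))                                     # visiting order
```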

Optimal Replacement Scheduling of Water Pipelines

  • Ghobadi, Fatemeh; Kang, Doosun
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.145-145 / 2021
  • Water distribution networks (WDNs) are designed to satisfy the water requirements of an urban community. Providing water of sufficient quality and quantity through WDNs has long been a central issue for human settlements. A WDN consists of a great number of pipelines of different ages, lengths, materials, and sizes, in varying degrees of deterioration. The available annual budget for rehabilitating these infrastructures covers only part of the network; it is therefore important to manage the limited budget in the most cost-effective manner. In this study, a novel pipe replacement scheduling approach is proposed to smooth the annual investment time series based on a life cycle cost (LCC) assessment. The proposed approach is applied to a real WDN currently operating in South Korea. The scheduling plan considers both the annual budget limitation and the optimal investment over the pipes' useful life. A non-dominated sorting genetic algorithm is used to solve the resulting multi-objective optimization problem. Three decision-making objectives are considered to find an optimal pipe replacement plan over a long-term horizon: the minimum imposed LCC of the network, the minimum standard deviation of annual cost, and the minimum average age of the network. The results indicate that the proposed scheduling structure provides efficient and cost-effective rehabilitation management of the water network with a consistent annual budget.
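As a toy illustration of the three objectives the abstract names, the sketch below evaluates one candidate replacement schedule; an NSGA-II run would minimize these jointly over many such candidates. All names, sizes, and cost figures are hypothetical, not from the paper.

```python
import numpy as np

def schedule_objectives(replace_year, install_year, length, unit_cost,
                        horizon=50, base_year=2025):
    """Toy version of the paper's three objectives for one candidate plan.

    replace_year[i] : planned replacement year of pipe i
    install_year[i] : original installation year of pipe i
    Returns (total cost, std of annual spending, mean pipe age at horizon end).
    """
    years = np.arange(base_year, base_year + horizon)
    cost = length * unit_cost
    annual = np.array([cost[replace_year == y].sum() for y in years])
    end = base_year + horizon - 1
    age_at_end = np.where(replace_year <= end,
                          end - replace_year,       # replaced pipes restart at age 0
                          end - install_year)       # untouched pipes keep aging
    return annual.sum(), annual.std(), age_at_end.mean()

rng = np.random.default_rng(0)
n = 200                                             # mock network of 200 pipes
plan = rng.integers(2025, 2075, n)                  # one candidate "chromosome"
print(schedule_objectives(plan, rng.integers(1970, 2020, n),
                          rng.uniform(50, 500, n), rng.uniform(1, 3, n)))
```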

Development of the Modified Preprocessing Method for Pipe Wall Thinning Data in Nuclear Power Plants (원자력 발전소 배관 감육 측정데이터의 개선된 전처리 방법 개발)

  • Seong-Bin Mun; Sang-Hoon Lee; Young-Jin Oh; Sung-Ryul Kim
    • Transactions of the Korean Society of Pressure Vessels and Piping / v.19 no.2 / pp.146-154 / 2023
  • In nuclear power plants, ultrasonic testing of pipe wall thickness is performed during periodic inspections to prevent pipe rupture due to pipe wall thinning. However, when pipe wall thickness is measured ultrasonically, a significant amount of measurement error arises from the on-site conditions of the plant. If the maximum pipe wall thinning rate is determined from measured thicknesses containing significant error, the resulting thinning-rate data carry significant uncertainty and systematic overestimation. This study proposes preprocessing the pipe wall thinning measurement data with a support vector machine regression algorithm. Using support vector machine regression, the measurement data can be smoothed, and accordingly the uncertainty and systematic overestimation of the estimated pipe wall thinning rate can be reduced.
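A minimal sketch of SVR-based smoothing of thickness readings, in the spirit of the abstract; the data, kernel choice, and hyperparameters below are our assumptions, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic wall-thickness trend (mm) with measurement noise, standing in
# for the ultrasonic inspection data the paper describes.
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 80)[:, None]           # inspection time, years
true = 8.0 - 0.05 * t.ravel()                 # slow thinning of an 8 mm wall
measured = true + rng.normal(0, 0.15, t.shape[0])

# Epsilon-insensitive SVR ignores deviations below epsilon, which is what
# gives the smoothing effect on noisy thickness readings.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1)
smoothed = svr.fit(t, measured).predict(t)

# Estimate the thinning rate from the smoothed series rather than raw
# extremes, reducing the systematic overestimation the paper describes.
rate = -(smoothed[-1] - smoothed[0]) / (t[-1, 0] - t[0, 0])
print(f"estimated thinning rate: {rate:.4f} mm/yr")
```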

Adaptive Sea Level Prediction Method Based on Harmonic Analysis (조화분석에 기반한 적응적 조위 예측 방법)

  • Park, Sanghyun
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.2 / pp.276-283 / 2018
  • Climate change consistently causes coastal accidents such as coastal flooding, so studies on monitoring the marine environment are progressing in order to prevent and reduce the resulting damage. In this paper, we propose a new method for predicting the sea level that can be applied in coastal monitoring systems to observe sea-level variation and warn of dangers. Existing sea level models are very complicated and need a lot of tidal data, so they are not suitable for real-time prediction systems. The proposed algorithm, on the other hand, is very simple yet precise over short horizons such as one or two hours, since it uses data measured directly from the sensor. The proposed method uses a Kalman filter for the harmonic analysis and double exponential smoothing for additional error correction. Experimental results show that the proposed method is simple but predicts the sea level accurately.
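Double exponential (Holt's linear) smoothing, which the abstract names for error correction, is simple enough to sketch in full. The smoothing constants and the mock residual series below are illustrative only.

```python
def double_exponential_smoothing(x, alpha=0.5, beta=0.3):
    """Holt's linear (double exponential) smoothing.

    Tracks a level and a trend; the one-step forecast (level + trend)
    is the kind of short-horizon correction term the paper applies on
    top of its Kalman-filter harmonic fit.
    """
    level, trend = x[0], x[1] - x[0]
    forecasts = [level]
    for obs in x[1:]:
        forecasts.append(level + trend)       # predict before updating
        new_level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return forecasts

residuals = [0.12, 0.10, 0.15, 0.13, 0.18, 0.20]   # mock tide-model errors (m)
print(double_exponential_smoothing(residuals))
```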

Comparison of Thresholding Techniques for SVD Coefficients in CT Perfusion Image Analysis (CT 관류 영상 해석에서의 SVD 계수 임계화 기법의 성능 비교)

  • Kim, Nak Hyun
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.6 / pp.276-286 / 2013
  • The SVD-based deconvolution algorithm is known as the most effective technique for CT perfusion image analysis. In this algorithm, SVD coefficients smaller than a certain threshold are removed in order to reduce noise effects. As the truncation threshold, either a fixed value or a variable threshold yielding a predetermined oscillation index (OI) is frequently employed. Each of these two thresholding methods has an advantage over the other in either accuracy or efficiency. In this paper, we propose a Monte Carlo simulation method to evaluate the accuracy of the two methods. An extension of the proposed method is also presented to measure the effect of image smoothing on the accuracy of the thresholding methods. After the simulation method is described, experimental results are presented using both simulated data and real CT images.
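The fixed-threshold variant the abstract mentions can be sketched directly: zero the singular values below a fraction of the largest one, then invert. The convolution matrix and curves below are synthetic stand-ins for the AIF and tissue data, and the threshold fraction is our assumption.

```python
import numpy as np

def svd_deconvolve(A, c, threshold_frac=0.2):
    """Truncated-SVD deconvolution, fixed-threshold variant.

    Solves A x ~= c but discards singular values below threshold_frac
    of the largest one to suppress noise amplification.
    """
    U, s, Vt = np.linalg.svd(A)
    s_inv = np.zeros_like(s)
    keep = s >= threshold_frac * s[0]          # fixed truncation threshold
    s_inv[keep] = 1.0 / s[keep]
    return Vt.T @ (s_inv * (U.T @ c))

rng = np.random.default_rng(0)
n = 40
k = np.arange(n)
aif = (k + 1) * np.exp(-0.3 * k)               # mock arterial input function
A = np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])              # lower-triangular convolution
x_true = np.exp(-0.1 * k)                      # mock residue function
c = A @ x_true + rng.normal(0, 0.05, n)        # noisy tissue curve
print(np.abs(svd_deconvolve(A, c) - x_true).mean())
```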

Adaptive Extended Bilateral Motion Estimation Considering Block Type and Frame Motion Activity (블록의 성질과 프레임 움직임을 고려한 적응적 확장 블록을 사용하는 프레임율 증강 기법)

  • Park, Daejun; Jeong, Jechang
    • Journal of Broadcast Engineering / v.18 no.3 / pp.342-348 / 2013
  • In this paper, a novel frame rate up-conversion (FRUC) algorithm using adaptive extended bilateral motion estimation (AEBME) is proposed. Conventional extended bilateral motion estimation (EBME) conducts two motion estimation (ME) passes on the same region and therefore involves high complexity. In the proposed scheme, a novel block-type matching procedure is introduced to accelerate the ME procedure: edge information is computed with a Sobel mask and used to classify blocks, and this classification decides whether EBME is applied. Motion vector smoothing (MVS) is adopted to detect and correct outliers in the motion vector field. Finally, overlapped block motion compensation (OBMC) and motion-compensated frame interpolation (MCFI) are used to interpolate the intermediate frame, with OBMC applied adaptively based on the frame motion activity. Experimental results show that the proposed algorithm has outstanding performance and fast computation compared with EBME.
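The MVS step can be illustrated with a simple median-based outlier correction over a block motion-vector field. This is only a generic stand-in for the paper's MVS; the 3x3 window and the outlier threshold are our choices.

```python
import numpy as np
from scipy.ndimage import median_filter

def smooth_motion_field(mv):
    """Median-based smoothing of a block motion-vector field.

    mv has shape (rows, cols, 2). Vectors that disagree strongly with
    the local median are treated as outliers and replaced by it.
    """
    med = np.stack([median_filter(mv[..., k], size=3) for k in (0, 1)], -1)
    outliers = np.linalg.norm(mv - med, axis=-1) > 4.0   # threshold is ours
    out = mv.copy()
    out[outliers] = med[outliers]
    return out

field = np.zeros((6, 8, 2))
field[..., 0] = 2.0                        # uniform rightward motion
field[3, 4] = (-15.0, 9.0)                 # one outlier vector
print(smooth_motion_field(field)[3, 4])    # pulled back toward its neighbors
```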

Wavelength selection by loading vector analysis in determining total protein in human serum using near-infrared spectroscopy and Partial Least Squares Regression

  • Kim, Yoen-Joo; Yoon, Gil-Won
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.4102-4102 / 2001
  • In multivariate analysis, an absorbance spectrum is measured over a band of wavelengths, and one does not often pay attention to the size of this band. It is nevertheless desirable to measure the spectrum at only the necessary wavelengths, as long as acceptable prediction accuracy can be maintained. In this paper, a method of selecting an optimal band of wavelengths based on loading vector analysis is proposed and applied to determining total protein in human serum using near-infrared transmission spectroscopy and PLSR. The loading vectors of the full-spectrum PLSR were used as the reference for selecting wavelengths; only the first loading vector was used, since it explains the spectrum best. Absorbance spectra of sera from 97 outpatients were measured from 1530 to 1850 nm at 2 nm intervals, with total protein concentrations ranging from 5.1 to 7.7 g/㎗. Spectra were measured with a Cary 5E spectrophotometer (Varian, Australia); serum in a 5 mm path-length cuvette was placed in the sample beam and air in the reference beam. Full-spectrum PLSR was first applied to determine total protein from the sera; then the wavelength region of 1672∼1754 nm was selected based on the first loading vector. The Standard Error of Cross Validation (SECV) of the full-spectrum (1530∼1850 nm) PLSR and the selected-wavelength (1672∼1754 nm) PLSR was 0.28 and 0.27 g/㎗, respectively, so the prediction accuracy of the two bands was equal. Wavelength selection based on the loading vector in PLSR appears simple and robust in comparison with methods based on the correlation plot, the regression vector, or a genetic algorithm. As a reference for wavelength selection in PLSR, the loading vector has an advantage over the correlation plot, since the former is based on a multivariate model whereas the latter is based on a univariate model. Wavelength selection by first-loading-vector analysis also requires shorter computation time than a genetic algorithm and does not require smoothing.
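A rough sketch of first-loading-vector wavelength selection with scikit-learn's PLSRegression. The spectra below are synthetic stand-ins for the serum data (only the 1530-1850 nm grid matches the abstract), and the selection cutoff is our assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Synthetic NIR-like data on the paper's wavelength grid; all values mock.
rng = np.random.default_rng(0)
wavelengths = np.arange(1530, 1852, 2)                 # 1530-1850 nm, 2 nm steps
n_samples = 97
conc = rng.uniform(5.1, 7.7, n_samples)                # total protein, g/dL
peak = np.exp(-((wavelengths - 1710) / 30.0) ** 2)     # mock analyte band
X = conc[:, None] * peak + rng.normal(0, 0.02, (n_samples, len(wavelengths)))

pls = PLSRegression(n_components=5).fit(X, conc)
loading1 = np.abs(pls.x_loadings_[:, 0])               # first loading vector

# Keep wavelengths where the first loading carries most of the signal.
keep = loading1 > 0.5 * loading1.max()                 # cutoff is ours
print(wavelengths[keep].min(), "-", wavelengths[keep].max(), "nm selected")
```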

Performance Improvement of Radial Basis Function Neural Networks Using Adaptive Feature Extraction (적응적 특징추출을 이용한 Radial Basis Function 신경망의 성능개선)

  • 조용현
    • Journal of Korea Multimedia Society / v.3 no.3 / pp.253-262 / 2000
  • This paper proposes a new RBF neural network that determines the number and centers of its hidden neurons through adaptive feature extraction from the input data. Principal component analysis is applied to extract the features adaptively by reducing the dimension of the input data. The network thus combines the strengths of principal component analysis, which maps the input data into a set of statistically independent features, with those of RBF neural networks. The proposed network was applied to the 2-class classification of 200 breast cancer data samples. The simulation results show that the proposed network achieves better learning time and better classification of the test data than networks based on the k-means clustering algorithm, and that it is less sensitive to the initial weight setting and to the range of the smoothing factor.
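A loose sketch of the PCA-then-RBF pipeline the abstract outlines: PCA features feed an RBF layer with a least-squares linear readout. The center selection here (a random subset) and all sizes are our simplifications, not the paper's adaptive scheme.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA

X, y = load_breast_cancer(return_X_y=True)           # stand-in breast cancer data
Z = PCA(n_components=5).fit_transform(X)             # adaptive feature extraction
Z = (Z - Z.mean(0)) / Z.std(0)

rng = np.random.default_rng(0)
centers = Z[rng.choice(len(Z), 20, replace=False)]   # hidden-neuron centers (ours)
sigma = 1.0                                          # smoothing factor
H = np.exp(-((Z[:, None, :] - centers) ** 2).sum(-1) / (2 * sigma ** 2))

w, *_ = np.linalg.lstsq(H, y, rcond=None)            # linear output weights
acc = ((H @ w > 0.5) == y).mean()
print(f"training accuracy: {acc:.3f}")
```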

Adaptive Noise Removal Based on Nonstationary Correlation (영상의 비정적 상관관계에 근거한 적응적 잡음제거 알고리듬)

  • 박성철; 김창원; 강문기
    • Journal of Broadcast Engineering / v.8 no.3 / pp.278-287 / 2003
  • Noise in an image degrades image quality and deteriorates coding efficiency. Recently, various edge-preserving noise filtering methods based on nonstationary image models have been proposed to overcome this problem. In most conventional nonstationary image models, however, pixels are assumed to be uncorrelated with each other so as not to increase the computational burden too much; as a result, some detailed information is lost in the filtered results. In this paper, we propose a computationally feasible adaptive noise smoothing algorithm that considers the nonstationary correlation characteristics of images. We assume that an image has a nonstationary mean and can be segmented into subimages, each with a different stationary correlation. Taking advantage of the special structure of the covariance matrix that results from this image model, we derive a computationally efficient FFT-based adaptive linear minimum mean-square-error filter. Justification for the proposed image model is presented, and the effectiveness of the proposed algorithm is demonstrated experimentally.
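For intuition only, here is a stationary simplification of an FFT-based LMMSE (Wiener-style) filter: it estimates the signal spectrum from the noisy image and attenuates each frequency by S / (S + noise). The paper's filter adapts this per subimage under its nonstationary model; nothing below is the authors' code.

```python
import numpy as np

def fft_lmmse_denoise(img, noise_var):
    """Frequency-domain LMMSE smoothing under a stationary assumption.

    Subtracting the mean first crudely mimics the nonstationary-mean
    assumption; the signal spectrum is estimated from the noisy image.
    """
    mean = img.mean()
    F = np.fft.fft2(img - mean)
    power = np.abs(F) ** 2 / img.size              # periodogram estimate
    signal = np.maximum(power - noise_var, 0.0)
    gain = signal / (signal + noise_var)           # Wiener gain per frequency
    return mean + np.real(np.fft.ifft2(gain * F))

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0    # toy image with edges
noisy = clean + rng.normal(0, 0.3, clean.shape)
print(np.abs(fft_lmmse_denoise(noisy, 0.3 ** 2) - clean).mean(),
      np.abs(noisy - clean).mean())                      # error after vs. before
```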

Density Measurement for Continuous Flow Segment Using Two Point Detectors (두 개의 지점 검지기를 이용한 연속류 구간의 밀도측정 방안)

  • Kim, Min-Sung; Eom, Ki-Jong; Lee, Chung-Won
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.8 no.1 / pp.37-44 / 2009
  • Density is the most important congestion indicator among the three fundamental flow variables: flow, speed, and density. Density can be measured in the field either directly or indirectly. Taking photographs with a wide field of view is one direct method, but it is not widely used because of its cost and the lack of suitable camera positions. This paper introduces another direct density measurement method using two point detectors, built on the concepts of instantaneous density, average density, and the measurement interval. The relationship between accuracy and measurement interval is investigated using simulation data produced with the Paramics API. Finally, a density measurement algorithm incorporating exponential smoothing is suggested for device development.
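A toy version of the two-detector idea: vehicles inside the segment equal cumulative count in minus cumulative count out, density is vehicles per segment length, and the series is then exponentially smoothed. All counts, the segment length, and the smoothing constant are mock values, not the paper's.

```python
SEGMENT_KM = 0.5
ALPHA = 0.3                                  # smoothing constant (ours)

counts_in  = [12, 15, 14, 20, 22, 18]        # vehicles per interval, upstream
counts_out = [11, 14, 15, 18, 21, 19]        # vehicles per interval, downstream

inside, smoothed = 5, None                   # 5 vehicles present at start
for cin, cout in zip(counts_in, counts_out):
    inside += cin - cout                     # conservation of vehicles
    density = inside / SEGMENT_KM            # veh/km, instantaneous
    smoothed = density if smoothed is None \
        else ALPHA * density + (1 - ALPHA) * smoothed
    print(f"raw {density:5.1f}  smoothed {smoothed:5.1f} veh/km")
```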
