• Title/Summary/Keyword: Smoothing algorithm


AWGN Removal using Laplace Distribution and Weighted Mask (라플라스 분포와 가중치 마스크를 이용한 AWGN 제거)

  • Park, Hwa-Jung;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.12 / pp.1846-1852 / 2021
  • In modern society, digital devices are spreading across a wide range of fields owing to the Fourth Industrial Revolution and the development of IoT technology. However, noise generated while acquiring or transmitting an image not only damages the information but also propagates into the system, causing errors and malfunctions. AWGN is the most representative type of image noise. Prior research on noise removal includes the AF, A-TMF, and MF as representative methods. Existing filters share the disadvantage that smoothing occurs in regions with strong high-frequency components, because they cannot easily account for the characteristics of the image. The proposed algorithm therefore computes the local standard-deviation distribution to remove noise effectively even in high-frequency regions, and then obtains the final output by applying probability-density-function weights of the Laplace distribution estimated with a curve-fitting method.
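
The core idea of weighting neighborhood pixels by a Laplace probability density can be sketched as follows; the window size, the scale parameter `b`, and the use of each pixel's deviation from the local window mean are illustrative assumptions, not the paper's exact weighting scheme:

```python
import numpy as np

def laplace_pdf(x, mu=0.0, b=1.0):
    # Probability density of the Laplace distribution.
    return np.exp(-np.abs(x - mu) / b) / (2.0 * b)

def weighted_mask_filter(img, ksize=3, b=1.0):
    # Denoise by weighting each neighborhood pixel with the Laplace PDF
    # of its deviation from the local mean, so outliers get tiny weights.
    pad = ksize // 2
    padded = np.pad(img.astype(float), pad, mode='reflect')
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = padded[i:i + ksize, j:j + ksize]
            weights = laplace_pdf(win - win.mean(), b=b)
            out[i, j] = (weights * win).sum() / weights.sum()
    return out
```

A pixel corrupted by a large AWGN deviation receives an exponentially small weight, so the output stays close to its uncorrupted neighbors.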

Comparison between Old and New Versions of Electron Monte Carlo (eMC) Dose Calculation

  • Seongmoon Jung;Jaeman Son;Hyeongmin Jin;Seonghee Kang;Jong Min Park;Jung-in Kim;Chang Heon Choi
    • Progress in Medical Physics / v.34 no.2 / pp.15-22 / 2023
  • This study compared the dose calculated using the electron Monte Carlo (eMC) dose calculation algorithm in the old version (eMC V13.7) of the Varian Eclipse treatment-planning system (TPS) with that in its newer version (eMC V16.1). The eMC V16.1 was configured using the same beam data as the eMC V13.7; beam data measured with the VitalBeam linear accelerator were implemented. A box-shaped water phantom (30×30×30 cm3) was generated in the TPS, and the TPS with eMC V13.7 and eMC V16.1 calculated the dose delivered to the water phantom by electron beams of various energies with a field size of 10×10 cm2. The calculations were repeated while changing the dose-smoothing levels and the normalization method. Subsequently, the percentage depth dose and lateral profiles of the dose distributions obtained with eMC V13.7 and eMC V16.1 were analyzed. In addition, the dose-volume histogram (DVH) differences between the two versions were compared for a heterogeneous phantom with bone and lung inserts. The doses calculated using eMC V16.1 were similar to those calculated using eMC V13.7 for the homogeneous phantom, but a DVH difference was observed in the heterogeneous phantom, particularly in the bone material. The version change thus produced different DVHs for heterogeneous geometry; further investigation of the DVH differences in patients and experimental validation of eMC V16.1, particularly for heterogeneous geometry, are required.

Speech Enhancement Based on Minima Controlled Recursive Averaging Technique Incorporating Conditional MAP (조건 사후 최대 확률 기반 최소값 제어 재귀평균기법을 이용한 음성향상)

  • Kum, Jong-Mo;Park, Yun-Sik;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.27 no.5 / pp.256-261 / 2008
  • In this paper, we propose a novel approach to improve the performance of minima controlled recursive averaging (MCRA) based on the conditional maximum a posteriori criterion. A crucial component of a practical speech enhancement system is the estimation of the noise power spectrum, and the MCRA technique is one state-of-the-art approach. The noise estimate in MCRA is obtained by averaging past spectral power values with a smoothing parameter that is adjusted by the signal presence probability in frequency subbands. We improve MCRA by using a speech presence probability that is the a posteriori probability conditioned on both the current observation and the speech presence or absence of the previous frame. Under the performance criteria of the ITU-T P.862 perceptual evaluation of speech quality (PESQ) and a subjective evaluation of speech quality, we show that the proposed algorithm yields better results than the conventional MCRA-based scheme.
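
The recursive-averaging noise update that MCRA-style estimators use, with the smoothing parameter driven by the speech presence probability, can be sketched as follows; the parameter name `alpha_d` and its default value are illustrative assumptions:

```python
def mcra_noise_update(noise_prev, power, speech_prob, alpha_d=0.95):
    # Time-varying smoothing factor: when speech is likely present
    # (speech_prob -> 1) the noise estimate is held fixed; when speech
    # is likely absent (speech_prob -> 0) it tracks the observed
    # spectral power with base smoothing alpha_d.
    alpha_s = alpha_d + (1.0 - alpha_d) * speech_prob
    return alpha_s * noise_prev + (1.0 - alpha_s) * power
```

The update is applied per frequency subband and per frame; the paper's contribution is a better `speech_prob`, conditioned on the previous frame's speech presence or absence.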

Revision of ART with Iterative Partitioning for Performance Improvement (입력 도메인 반복 분할 기법 성능 향상을 위한 고려 사항 분석)

  • Shin, Seung-Hun;Park, Seung-Kyu;Jung, Ki-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.3 / pp.64-76 / 2009
  • Adaptive Random Testing through Iterative Partitioning (IP-ART) is an Adaptive Random Testing (ART) technique that iteratively partitions the input domain to overcome the significant computation-time drawbacks of early ART versions. Another variant, EIP-ART (IP-ART with Enlarged Input Domain), virtually enlarges the input domain to remove the unevenly distributed parts near its boundary. EIP-ART mitigates the non-uniform test case distribution of IP-ART and achieves relatively high performance in a variety of input domain environments. The EIP-ART algorithm, however, has the drawback of higher computation time for test case generation, mainly due to the additional workload from the enlarged input domain. A revised version of IP-ART without domain enlargement therefore needs to improve the distribution of test cases to remove this additional time cost. We explore three smoothing algorithms that influence the distribution of test cases and analyze whether they yield any performance improvement. The simulation results show that the restriction-area management algorithm achieves better performance than the others.
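
Since the IP-ART internals are not given here, a minimal sketch of the general ART idea may help illustrate what an even test-case distribution buys: the fixed-size-candidate-set variant below (a simpler relative of IP-ART, not the paper's method) picks each new test case to be as far as possible from all executed ones:

```python
import random

def art_next_test(executed, candidates_n=10, dim=2, rng=None):
    # Fixed-size-candidate-set ART: among candidates_n random points,
    # choose the one whose minimum distance to all previously executed
    # test cases is largest, spreading tests evenly over the domain.
    rng = rng or random.Random()
    if not executed:
        return tuple(rng.random() for _ in range(dim))
    def min_sq_dist(c):
        return min(sum((a - b) ** 2 for a, b in zip(c, e)) for e in executed)
    cands = [tuple(rng.random() for _ in range(dim)) for _ in range(candidates_n)]
    return max(cands, key=min_sq_dist)
```

IP-ART achieves a similar spreading effect by partitioning instead of distance computation, which is what makes it cheaper; the smoothing algorithms the paper studies adjust the resulting distribution.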

A Study on Real-time Tracking Method of Horizontal Face Position for Optimal 3D T-DMB Content Service (지상파 DMB 단말에서의 3D 컨텐츠 최적 서비스를 위한 경계 정보 기반 실시간 얼굴 수평 위치 추적 방법에 관한 연구)

  • Kang, Seong-Goo;Lee, Sang-Seop;Yi, June-Ho;Kim, Jung-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.88-95 / 2011
  • An embedded mobile device usually has lower computation power than a general-purpose computer because of its relatively low system specification. Consequently, conventional face detection and tracking methods, which require complex algorithms for higher recognition rates, are unsuitable in a mobile environment aiming at real-time detection. On the other hand, a real-time tracking and detection algorithm enables two-way interactive multimedia service between a user and a mobile device, providing far better quality of service than a one-way service. It is therefore necessary to develop a real-time face and eye tracking technique optimized for mobile environments. For this reason, this paper proposes a method of tracking the horizontal face position of a user on a T-DMB device to enhance the quality of 3D DMB content. The proposed method uses the orientation of edges to estimate the left and right boundaries of the face, and the horizontal position and size of the face are finally determined from the color edge information. The Sobel gradient vector is projected vertically to select face-boundary candidates, and we propose a smoothing method and a peak-detection method for the precise decision. Because general face detection algorithms use multi-scale feature vectors, their detection time is too long in a mobile environment; the proposed single-scale detection method can detect the face faster than conventional face detection methods.
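
The projection-smoothing-peak pipeline described above can be sketched roughly as follows; a simple horizontal difference stands in for the Sobel operator, and the smoothing window size is an illustrative assumption:

```python
import numpy as np

def face_boundary_candidates(gray, win=5):
    # 1) Horizontal gradient magnitudes (a cheap stand-in for Sobel),
    # 2) project them vertically into a 1-D column profile,
    # 3) smooth the profile with a moving average,
    # 4) return local maxima as left/right face-boundary candidates.
    gx = np.abs(np.diff(gray.astype(float), axis=1))
    profile = gx.sum(axis=0)
    kernel = np.ones(win) / win
    smooth = np.convolve(profile, kernel, mode='same')
    peaks = [i for i in range(1, len(smooth) - 1)
             if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
    return smooth, peaks
```

Strong vertical edges (such as the sides of a face) show up as peaks in the smoothed column profile, which is why the peak-detection step can run in real time on a single scale.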

A Baseline Correction for Effective Analysis of Alzheimer’s Disease based on Raman Spectra from Platelet (혈소판 라만 스펙트럼의 효율적인 분석을 위한 기준선 보정 방법)

  • Park, Aa-Ron;Baek, Sung-June
    • Journal of the Institute of Electronics Engineers of Korea CI / v.49 no.1 / pp.16-22 / 2012
  • In this paper, we propose a baseline correction method for analyzing Raman spectra of platelets from Alzheimer's disease (AD) transgenic mice. Measured Raman spectra contain meaningful information together with unnecessary components composed of a baseline and additive noise. Each Raman spectrum is divided into local regions containing several peaks, and each region is modeled by curve fitting with a Gaussian model; replacing the original spectrum with the fitted model clearly removes the additive noise. The baseline is then corrected by interpolating the local minima of the fitted model with linear, piecewise cubic Hermite, and cubic spline algorithms. Features are extracted from the baseline-corrected spectra by principal component analysis (PCA). Classification with a support vector machine (SVM) and maximum a posteriori probability (MAP) using linear interpolation showed good performance across the number of principal components; in particular, SVM gave the best performance, an average correct classification rate of about 97.3%, with the piecewise cubic Hermite algorithm and 5 principal components. These results confirm that, compared with previous work, the proposed baseline correction method can be effectively applied to the analysis of platelet Raman spectra.
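
The linear-interpolation variant of the baseline correction can be sketched as follows; as a simplifying assumption, this minimal version interpolates the local minima of the spectrum itself rather than of a Gaussian-fitted model:

```python
import numpy as np

def baseline_correct(spectrum):
    # Estimate the baseline by linearly interpolating through the
    # local minima of the spectrum (plus both endpoints), then
    # subtract it. The paper also uses piecewise cubic Hermite and
    # cubic spline interpolation in place of the linear variant.
    y = np.asarray(spectrum, dtype=float)
    idx = [0] + [i for i in range(1, len(y) - 1)
                 if y[i] <= y[i - 1] and y[i] <= y[i + 1]] + [len(y) - 1]
    baseline = np.interp(np.arange(len(y)), idx, y[idx])
    return y - baseline, baseline
```

After correction, the Raman peaks sit on a flat zero baseline, which is what makes the subsequent PCA features comparable across spectra.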

Wheel tread defect detection for high-speed trains using FBG-based online monitoring techniques

  • Liu, Xiao-Zhou;Ni, Yi-Qing
    • Smart Structures and Systems / v.21 no.5 / pp.687-694 / 2018
  • The problem of wheel tread defects has become a major challenge for the health management of high-speed rail as a wheel defect with small radius deviation may suffice to give rise to severe damage on both the train bogie components and the track structure when a train runs at high speeds. It is thus highly desirable to detect the defects soon after their occurrences and then conduct wheel turning for the defective wheelsets. Online wheel condition monitoring using wheel impact load detector (WILD) can be an effective solution, since it can assess the wheel condition and detect potential defects during train passage. This study aims to develop an FBG-based track-side wheel condition monitoring method for the detection of wheel tread defects. The track-side sensing system uses two FBG strain gauge arrays mounted on the rail foot, measuring the dynamic strains of the paired rails excited by passing wheelsets. Each FBG array has a length of about 3 m, slightly longer than the wheel circumference to ensure a full coverage for the detection of any potential defect on the tread. A defect detection algorithm is developed for using the online-monitored rail responses to identify the potential wheel tread defects. This algorithm consists of three steps: 1) strain data pre-processing by using a data smoothing technique to remove the trends; 2) diagnosis of novel responses by outlier analysis for the normalized data; and 3) local defect identification by a refined analysis on the novel responses extracted in Step 2. To verify the proposed method, a field test was conducted using a test train incorporating defective wheels. The train ran at different speeds on an instrumented track with the purpose of wheel condition monitoring. By using the proposed method to process the monitoring data, all the defects were identified and the results agreed well with those from the static inspection of the wheelsets in the depot. 
A comparison is also drawn for the detection accuracy under different running speeds of the test train; the results show that the proposed method achieves satisfactory accuracy in wheel defect detection when the train runs faster than 30 km/h. Minor defects with a depth of 0.05 mm to 0.06 mm are also successfully detected.
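
Steps 1 and 2 of the detection algorithm, trend removal by data smoothing followed by outlier flagging on the normalized residual, can be sketched as follows; the window size and the outlier threshold are illustrative assumptions:

```python
import numpy as np

def flag_novel_responses(strain, win=11, z_thresh=3.0):
    # Step 1: remove the slow trend (quasi-static wheel load) with a
    # moving average. Step 2: flag samples whose normalized residual
    # exceeds the outlier threshold; these novel responses are the
    # candidate wheel-defect signatures passed on to refined analysis.
    x = np.asarray(strain, dtype=float)
    kernel = np.ones(win) / win
    trend = np.convolve(x, kernel, mode='same')
    resid = x - trend
    z = (resid - resid.mean()) / resid.std()
    return np.where(np.abs(z) > z_thresh)[0]
```

In the paper's setting, the flagged indices map back to positions along the FBG array, and hence to locations on the wheel circumference.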

The Positional Accuracy Quality Assessment of Digital Map Generalization (수치지도 일반화 위치정확도 품질평가)

  • 박경식;임인섭;최석근
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.19 no.2 / pp.173-181 / 2001
  • It is very important to assess the spatial data quality of a digital map produced through map generalization. In this study, from the viewpoint of spatial data quality maintenance, we examined the tolerated range of the theoretically expected accuracy and established a quality assessment standard so that the transformed digital map data do not violate the digital map specifications or the accuracy required at the target scale. When a large-scale digital map is transformed into a small scale and its complexity is reduced through processes such as simplification, smoothing, and refinement, changes in spatial position are unavoidable. Because it is very difficult to analyze the spatial accuracy of the transformed positions directly, we used buffering as the assessment method for spatial accuracy in the generalization procedure. Although the tolerated range of positional error for the 1/1,000 and 1/5,000 scales is determined by the related regulations, the algorithms applied to each processing element have different properties, so unsuitable parameters and tolerances will yield results outside the tolerated range of positional error. Testing the parameters of each algorithm against the tolerated range showed that the simplification-algorithm parameter and the positional accuracy were 0.2617 m and 0.4617 m, respectively.
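
The interplay between a simplification parameter and a buffer-style positional check can be sketched with the classic Douglas-Peucker algorithm, used here as an illustrative stand-in since the paper does not specify its simplification algorithm:

```python
def point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dy * px - dx * py + bx * ay - by * ax) / (dx * dx + dy * dy) ** 0.5

def simplify(points, tol):
    # Douglas-Peucker: keep the farthest interior point if it deviates
    # more than tol from the end-to-end line, and recurse on both halves;
    # otherwise drop all interior points.
    if len(points) < 3:
        return list(points)
    dmax, idx = max((point_line_dist(points[i], points[0], points[-1]), i)
                    for i in range(1, len(points) - 1))
    if dmax <= tol:
        return [points[0], points[-1]]
    return simplify(points[:idx + 1], tol)[:-1] + simplify(points[idx:], tol)

def within_buffer(original, simplified, buf):
    # Buffer-style quality check: every original vertex must lie within
    # buf of the supporting line of some simplified segment (a sketch of
    # the buffering assessment, not a full polygon-buffer test).
    def dist_to_result(p):
        return min(point_line_dist(p, simplified[i], simplified[i + 1])
                   for i in range(len(simplified) - 1))
    return all(dist_to_result(p) <= buf for p in original)
```

Tightening `tol` keeps more vertices and shrinks the positional error; the paper's test is essentially a search for the largest `tol` whose result still passes the buffer check.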


Direct Reconstruction of Displaced Subdivision Mesh from Unorganized 3D Points (연결정보가 없는 3차원 점으로부터 차이분할메쉬 직접 복원)

  • Jung, Won-Ki;Kim, Chang-Heon
    • Journal of KIISE: Computer Systems and Theory / v.29 no.6 / pp.307-317 / 2002
  • In this paper we propose a new mesh reconstruction scheme that produces a displaced subdivision surface directly from unorganized points. The displaced subdivision surface is a mesh representation that defines a detailed mesh with a displacement map over a smooth domain surface, but the original displaced subdivision surface algorithm requires an explicit polygonal mesh because it is a mesh conversion (remeshing) algorithm rather than a mesh reconstruction algorithm. The main idea of our approach is to sample surface detail from unorganized points without any topological information: for each sampling ray from a parametric domain surface, we predict a virtual triangular face from the unorganized points. Direct reconstruction of a displaced subdivision surface from unorganized points is important because the output has several useful properties: the mesh representation is compact, since most vertices can be represented by a single scalar value; the underlying structure is piecewise regular, so it can easily be transformed into a multiresolution mesh; and smoothness after mesh deformation is automatically preserved. We avoid time-consuming global energy optimization by employing input-data-dependent mesh smoothing, so a good-quality displaced subdivision surface is obtained quickly.

Theoretical Investigations on Compatibility of Feedback-Based Cellular Models for Dune Dynamics : Sand Fluxes, Avalanches, and Wind Shadow ('되먹임 기반' 사구 역학 모형의 호환 가능성에 대한 이론적 고찰 - 플럭스, 사면조정, 바람그늘 문제를 중심으로 -)

  • RHEW, Hosahng
    • Journal of the Korean association of regional geographers / v.22 no.3 / pp.681-702 / 2016
  • Two different modelling approaches to dune dynamics have been established thus far: continuous models that emphasize precise representation of the wind field, and feedback-based models that focus on the interactions between dunes rather than on aerodynamics. Though feedback-based models have proven their capability to capture the essence of dune dynamics, the compatibility issues among these models have received less attention. This research investigated, mostly from a theoretical point of view, the algorithmic compatibility of three feedback-based dune models: sand slab models, the Nishimori model, and the de Castro model. The major findings are as follows. First, sand slab models and the de Castro model are compatible in terms of flux, whereas the Nishimori model needs a tuning factor. Second, the algorithm of avalanching can easily be implemented via repetitive spatial smoothing, showing high compatibility between models. Finally, the wind shadow rule might not be a necessary component for reproducing dune patterns, unlike the interpretation or assumption of previous studies; rather, it might be more important for understanding bedform-level interactions. Overall, the three models show high compatibility or seem to require relatively small modification, though more thorough investigation is needed.
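
Avalanching via repeated local relaxation, the kind of repetitive spatial smoothing the abstract refers to, can be sketched in one dimension as follows; the critical height difference (angle of repose in slab units) and iteration cap are illustrative parameters:

```python
import numpy as np

def avalanche(heights, max_diff=2, iters=100):
    # Relax slopes steeper than the angle of repose by repeatedly
    # moving one sand slab from a cell to its lower neighbor. Each
    # sweep is a local smoothing pass; iterating until no slab moves
    # reproduces the slope-adjustment step of cellular dune models.
    h = np.array(heights, dtype=int)
    for _ in range(iters):
        moved = False
        for i in range(len(h) - 1):
            if h[i] - h[i + 1] > max_diff:
                h[i] -= 1; h[i + 1] += 1; moved = True
            elif h[i + 1] - h[i] > max_diff:
                h[i + 1] -= 1; h[i] += 1; moved = True
        if not moved:
            break
    return h
```

Sand volume is conserved by construction, and the fixed point is a profile whose every local slope respects the repose threshold, which is why the step ports so easily between the three models.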
