• Title/Summary/Keyword: Smoothing Algorithm


Image Exposure Compensation Based on Conditional Expectation (Conditional Expectation을 이용한 영상의 노출 보정)

  • Kim, Dong-Sik;Lee, Su-Yeon
    • Journal of the Institute of Electronics Engineers of Korea SP, v.42 no.6, pp.121-132, 2005
  • In the formation of images in a camera, the exposure time is adjusted to obtain a good image. Hence, to successfully align a sequence of images of the same scene, the differing exposure times must be compensated for. If no knowledge of the exposure time is available, an algorithm is needed that can compensate an image with respect to a reference image without using any camera-formation model. In this paper, exposure compensation is performed by designing predictors based on the conditional expectation between the reference and input images. Further, an adaptive predictor design is conducted to manage the irregular-exposure or histogram problem. To alleviate the blocking-artifact and overfitting problems in the adaptive scheme, a smoothing technique that uses the pixels of the adjacent blocks is proposed. We successfully conducted exposure compensation on real images obtained from digital cameras and a transmission electron microscope.
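
The core idea of a conditional-expectation predictor, estimating the expected reference intensity for each input gray level, can be sketched as follows. This is a minimal one-dimensional sketch; the function names are ours, and the paper's block-adaptive design and inter-block smoothing are not shown.

```python
def conditional_expectation_predictor(input_img, ref_img):
    # For each gray level g in the input image, estimate
    # E[reference pixel | input pixel == g] from co-located pixels.
    sums, counts = {}, {}
    for x, r in zip(input_img, ref_img):
        sums[x] = sums.get(x, 0) + r
        counts[x] = counts.get(x, 0) + 1
    return {g: sums[g] / counts[g] for g in sums}

def compensate(input_img, predictor):
    # Map each input pixel through the learned predictor.
    return [predictor[p] for p in input_img]
```

For example, with input levels [0, 0, 1, 1, 2] and co-located reference values [10, 12, 20, 22, 30], the learned predictor maps 0 to 11, 1 to 21, and 2 to 30.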

A Study on Spotlight SAR Image Formation by using Motion Measurement Results of CDGPS (CDGPS의 요동 측정 결과를 이용한 Spotlight SAR 영상 형성에 관한 연구)

  • Hwang, Jeonghun;Ko, Young-Chang;Kim, So-Yeon;Kwon, Kyoung-Il;Yoon, Sang-Ho;Kim, Hyung-Suk;Shin, Hyun-Ik
    • Journal of the Korea Institute of Military Science and Technology, v.21 no.2, pp.166-172, 2018
  • To develop and evaluate a real-time SAR (Synthetic Aperture Radar) motion measurement system, the true antenna phase center (APC) positions during the SAT (Synthetic Aperture Time) are needed. In this paper, a CDGPS (Carrier-phase Differential Global Positioning System) post-processing method is proposed to obtain the true APC positions for spotlight SAR image formation. The CDGPS position is smoothed to remove the high-frequency noise inherent in the carrier-phase measurement. Through motion measurement results, phase-error estimation, and IRF (Impulse Response Function) analysis, this paper shows that the smoothed CDGPS data are sufficient to provide the true APC positions for high-quality SAR image formation.
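
The abstract does not specify which smoother is applied to the CDGPS positions; a centered moving average is one common way to suppress high-frequency noise in a stream of position samples. The sketch below is an illustration under that assumption, not the authors' implementation.

```python
def moving_average(samples, window=5):
    # Centered moving average; the window shrinks near the ends
    # so every output sample averages only real data.
    half = window // 2
    out = []
    for i in range(len(samples)):
        lo, hi = max(0, i - half), min(len(samples), i + half + 1)
        out.append(sum(samples[lo:hi]) / (hi - lo))
    return out
```

In practice each coordinate of the APC position (x, y, z) would be smoothed independently with the same filter.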

A Study on Target Acquisition and Tracking to Develop ARPA Radar (ARPA 레이더 개발을 위한 물표 획득 및 추적 기술 연구)

  • Lee, Hee-Yong;Shin, Il-Sik;Lee, Kwang-Il
    • Journal of Navigation and Port Research, v.39 no.4, pp.307-312, 2015
  • ARPA (Automatic Radar Plotting Aid) is a device that calculates the CPA (closest point of approach)/TCPA (time of CPA) and the true course and speed of targets by vector operations on relative courses and speeds. The purpose of this study is to develop target acquisition and tracking technology for an ARPA radar implementation. After examining previous studies, applicable algorithms and technologies were developed and combined, and basic ARPA functions were implemented as a result. As the main research contents, a sequential image-processing pipeline combining grayscale conversion, Gaussian smoothing, binary image conversion, and labeling was devised to achieve proper target acquisition; the NNS (Nearest Neighbor Search) algorithm was applied to identify which target came from the previous image; and finally a Kalman filter was used to calculate the true course and speed of targets as an analysis of target behavior. All of the above technologies were implemented as a software program, installed onboard, and the basic ARPA functions were verified to be operable in practical use through onboard tests.
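
The NNS association step, linking each detection in the current radar frame to the closest target from the previous frame, can be sketched as follows. This is a generic nearest-neighbor sketch with our own names; the paper's gating thresholds and the Kalman update are omitted.

```python
def nearest_neighbor_associate(prev_targets, curr_targets):
    # Match each current detection (x, y) to the index of the
    # nearest previous target, using squared Euclidean distance.
    matches = {}
    for j, (cx, cy) in enumerate(curr_targets):
        best, best_d2 = None, float("inf")
        for i, (px, py) in enumerate(prev_targets):
            d2 = (cx - px) ** 2 + (cy - py) ** 2
            if d2 < best_d2:
                best, best_d2 = i, d2
        matches[j] = best
    return matches
```

The resulting matches would then feed a per-target Kalman filter to estimate true course and speed.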

Classifying Indian Medicinal Leaf Species Using LCFN-BRNN Model

  • Kiruba, Raji I;Thyagharajan, K.K;Vignesh, T;Kalaiarasi, G
    • KSII Transactions on Internet and Information Systems (TIIS), v.15 no.10, pp.3708-3728, 2021
  • Indian herbal plants are used in agriculture and in the food, cosmetics, and pharmaceutical industries. Laboratory-based tests are routinely used to identify and classify similar herb species by analyzing their internal cell structures. In this paper, we have applied computer vision techniques to do the same. The original leaf image was preprocessed using the Chan-Vese active contour segmentation algorithm to remove the background from the image, setting the contraction bias (v) to -1 and the smoothing factor (µ) to 0.5, and bringing the initial contour close to the image boundary. Thereafter, the segmented grayscale image was fed to a leaky capacitance fired neuron model (LCFN), which differentiates between similar herbs by combining different groups of pixels in the leaf image. The LCFN's decay constants (f and g) and threshold (h) were empirically assigned as 0.7, 0.6, and 18, respectively, to generate the 1D feature vector. The LCFN time sequence identified the internal leaf structure at different iterations. Our proposed framework was tested on newly collected natural images of herbal species, including images that vary geometrically in size, orientation, and position. The 1D sequence and shape features of aloe, betel, Indian borage, bittergourd, grape, insulin herb, guava, mango, nilavembu, nithiyakalyani, sweet basil, and pomegranate were fed into 5-fold Bayesian regularization neural network (BRNN), K-nearest neighbors (KNN), support vector machine (SVM), and ensemble classifiers, with the highest classification accuracy obtained being 91.19%.

AWGN Removal using Laplace Distribution and Weighted Mask (라플라스 분포와 가중치 마스크를 이용한 AWGN 제거)

  • Park, Hwa-Jung;Kim, Nam-Ho
    • Journal of the Korea Institute of Information and Communication Engineering, v.25 no.12, pp.1846-1852, 2021
  • In modern society, various digital devices are being deployed in a wide range of fields owing to the fourth industrial revolution and the development of IoT technology. However, noise generated in the process of acquiring or transmitting an image not only corrupts the information but also affects the system, causing errors and incorrect operation. AWGN is a representative type of image noise. Prior research on noise removal includes representative methods such as AF, A-TMF, and MF. Existing filters have the disadvantage that smoothing occurs in areas with high-frequency components, because it is difficult for them to take the characteristics of the image into account. Therefore, to eliminate noise effectively even in the high-frequency domain, the proposed algorithm calculates the standard-deviation distribution and then computes the final output by applying probability-density-function weights from a Laplace distribution obtained by curve fitting.
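
The general idea of a Laplace-weighted mask, giving neighbors whose intensity is far from the center pixel exponentially smaller weight so that edges are not over-smoothed, can be sketched as below. This illustrates only the weighting principle: the scale parameter `b` is hypothetical, and the paper's weights come from a curve-fitted standard-deviation distribution, which is not reproduced here.

```python
import math

def laplace_weighted_filter(window, b=10.0):
    # Weighted mean of a flattened pixel window: each neighbor is
    # weighted by a Laplace pdf of its difference from the center
    # pixel, so dissimilar pixels (edges) contribute little.
    center = window[len(window) // 2]
    weights = [math.exp(-abs(p - center) / b) for p in window]
    return sum(w * p for w, p in zip(weights, window)) / sum(weights)
```

On a flat region the filter reduces to a plain mean, while an outlier neighbor is strongly down-weighted.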

Comparison between Old and New Versions of Electron Monte Carlo (eMC) Dose Calculation

  • Seongmoon Jung;Jaeman Son;Hyeongmin Jin;Seonghee Kang;Jong Min Park;Jung-in Kim;Chang Heon Choi
    • Progress in Medical Physics, v.34 no.2, pp.15-22, 2023
  • This study compared the dose calculated using the electron Monte Carlo (eMC) dose calculation algorithm of the old version (eMC V13.7) of the Varian Eclipse treatment-planning system (TPS) with that of its newer version (eMC V16.1). The eMC V16.1 was configured using the same beam data as the eMC V13.7. Beam data measured using the VitalBeam linear accelerator were implemented. A box-shaped water phantom (30×30×30 cm³) was generated in the TPS. The TPS with eMC V13.7 and eMC V16.1 then calculated the dose delivered to the water phantom by electron beams of various energies with a field size of 10×10 cm². The calculations were repeated while changing the dose-smoothing levels and the normalization method. Subsequently, the percentage depth dose and lateral profile of the dose distributions acquired by eMC V13.7 and eMC V16.1 were analyzed. In addition, the dose-volume histogram (DVH) differences between the two versions were compared for a heterogeneous phantom with bone and lung inserts. The doses calculated using eMC V16.1 were similar to those calculated using eMC V13.7 for the homogeneous phantoms. However, a DVH difference was observed in the heterogeneous phantom, particularly in the bone material. The dose distribution calculated using eMC V16.1 was comparable to that of eMC V13.7 for homogeneous phantoms, whereas the version change resulted in a different DVH for the heterogeneous phantom. Further investigations to assess the DVH differences in patients, and experimental validation of eMC V16.1, particularly for heterogeneous geometries, are required.

Speech Enhancement Based on Minima Controlled Recursive Averaging Technique Incorporating Conditional MAP (조건 사후 최대 확률 기반 최소값 제어 재귀평균기법을 이용한 음성향상)

  • Kum, Jong-Mo;Park, Yun-Sik;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea, v.27 no.5, pp.256-261, 2008
  • In this paper, we propose a novel approach to improve the performance of minima controlled recursive averaging (MCRA), based on the conditional maximum a posteriori criterion. A crucial component of a practical speech enhancement system is the estimation of the noise power spectrum, and the MCRA technique is one state-of-the-art approach. The noise estimate in the MCRA technique is obtained by averaging past spectral power values with a smoothing parameter that is adjusted by the signal presence probability in frequency subbands. We improve MCRA by using a speech presence probability that is the a posteriori probability conditioned on both the current observation and the speech presence or absence of the previous frame. Using the ITU-T P.862 perceptual evaluation of speech quality (PESQ) and subjective evaluation of speech quality as performance criteria, we show that the proposed algorithm yields better results than the conventional MCRA-based scheme.
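
The recursive-averaging update described above, in which the smoothing parameter is driven by the speech presence probability, follows the standard MCRA form for a single frequency bin; the variable names below are ours, and the estimation of the probability itself (the paper's contribution) is not shown.

```python
def mcra_noise_update(noise_prev, power, p_speech, alpha=0.95):
    # Time-varying smoothing parameter: as the speech-presence
    # probability approaches 1, alpha_s approaches 1 and the noise
    # estimate is held; when speech is absent, the estimate tracks
    # the observed spectral power.
    alpha_s = alpha + (1.0 - alpha) * p_speech
    return alpha_s * noise_prev + (1.0 - alpha_s) * power
```

With `p_speech = 1` the previous noise estimate is kept unchanged, while with `p_speech = 0` the update reduces to plain recursive averaging with parameter `alpha`.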

Revision of ART with Iterative Partitioning for Performance Improvement (입력 도메인 반복 분할 기법 성능 향상을 위한 고려 사항 분석)

  • Shin, Seung-Hun;Park, Seung-Kyu;Jung, Ki-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI, v.46 no.3, pp.64-76, 2009
  • Adaptive Random Testing through Iterative Partitioning (IP-ART) is one of the Adaptive Random Testing (ART) techniques. IP-ART iteratively partitions the input domain to improve on early versions of ART, which have significant drawbacks in computation time. Another version of IP-ART, named EIP-ART (IP-ART with Enlarged Input Domain), uses a virtually enlarged input domain to remove the unevenly distributed parts near the boundary of the domain. EIP-ART mitigates the non-uniform test-case distribution of IP-ART and achieves relatively high performance in a variety of input-domain environments. The EIP-ART algorithm, however, has the drawback of higher computation time for generating test cases, mainly due to the additional workload from the enlarged input domain. For this reason, a revised version of IP-ART without input-domain enlargement needs an improved test-case distribution to remove the additional time cost. We explore three smoothing algorithms that influence the distribution of test cases and analyze whether they yield any performance improvements. The simulation results show that the restriction-area management algorithm achieves better performance than the others.

A Study on Real-time Tracking Method of Horizontal Face Position for Optimal 3D T-DMB Content Service (지상파 DMB 단말에서의 3D 컨텐츠 최적 서비스를 위한 경계 정보 기반 실시간 얼굴 수평 위치 추적 방법에 관한 연구)

  • Kang, Seong-Goo;Lee, Sang-Seop;Yi, June-Ho;Kim, Jung-Kyu
    • Journal of the Institute of Electronics Engineers of Korea SP, v.48 no.6, pp.88-95, 2011
  • An embedded mobile device mostly has lower computation power than a general-purpose computer because of its relatively low system specifications. Consequently, conventional face tracking and detection methods, which require complex algorithms for higher recognition rates, are unsuitable in a mobile environment aiming for real-time detection. On the other hand, applying a real-time tracking and detection algorithm enables a two-way interactive multimedia service between a user and a mobile device, providing a far better quality of service than a one-way service. It is therefore necessary to develop a real-time face and eye tracking technique optimized for a mobile environment. For this reason, this paper proposes a method of tracking the horizontal face position of a user on a T-DMB device to enhance the quality of 3D DMB content. The proposed method uses the orientation of edges to estimate the left and right boundaries of the face, and the horizontal position and size of the face are finally determined from the color edge information. The Sobel gradient vector is projected vertically and candidate face boundaries are selected, and we propose a smoothing method and a peak-detection method for the precise decision. Because general face detection algorithms use multi-scale feature vectors, their detection time is too long in a mobile environment; the proposed algorithm, which uses a single-scale detection method, can detect the face faster than conventional face detection methods.
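
The projection-smoothing-peak-detection chain described above can be sketched as follows. This is a simplified illustration with our own names: gradient magnitudes are summed down each column, the 1-D profile is smoothed with a 3-tap average, and local maxima are taken as candidate face boundaries; the paper's exact smoothing and peak-decision rules are not specified in the abstract.

```python
def vertical_projection_peaks(grad_mag):
    # grad_mag: 2-D list of gradient magnitudes (rows x columns).
    # 1) Project: sum magnitudes down each column.
    cols = [sum(col) for col in zip(*grad_mag)]
    # 2) Smooth the 1-D profile with a 3-tap moving average
    #    (edges are clamped).
    n = len(cols)
    smooth = [(cols[max(i - 1, 0)] + cols[i] + cols[min(i + 1, n - 1)]) / 3.0
              for i in range(n)]
    # 3) Peak detection: interior local maxima of the smoothed profile.
    return [i for i in range(1, n - 1)
            if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
```

The two strongest peaks of such a profile would correspond to the left and right face boundaries.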

A Baseline Correction for Effective Analysis of Alzheimer’s Disease based on Raman Spectra from Platelet (혈소판 라만 스펙트럼의 효율적인 분석을 위한 기준선 보정 방법)

  • Park, Aa-Ron;Baek, Sung-June
    • Journal of the Institute of Electronics Engineers of Korea CI, v.49 no.1, pp.16-22, 2012
  • In this paper, we propose a method of baseline correction for the analysis of Raman spectra of platelets from Alzheimer's disease (AD) transgenic mice. Measured Raman spectra contain meaningful information along with unwanted components consisting of a baseline and additive noise. The Raman spectrum is divided into local regions containing several peaks, and each region is modeled by curve fitting with a Gaussian model. The additive noise is effectively removed by replacing the original spectrum with the fitted model. The baseline is then corrected by interpolating the local minima of the fitted model with linear, piecewise cubic Hermite, and cubic spline algorithms. Features are extracted from the baseline-corrected spectra with principal component analysis (PCA). The classification results of a support vector machine (SVM) and a maximum a posteriori (MAP) classifier showed good performance over the range of numbers of principal components; in particular, SVM gave the best performance, an average correct-classification rate of about 97.3%, in the case of the piecewise cubic Hermite algorithm with 5 principal components. In addition, comparison with previous results confirmed that the proposed baseline correction method can be effectively applied in the analysis of Raman spectra of platelets.
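
The baseline-correction step, interpolating through the local minima of the (already noise-free) spectrum and subtracting the result, can be sketched with the linear variant only; the piecewise cubic Hermite and cubic spline variants from the paper are omitted, and the Gaussian curve-fitting stage is assumed to have run already.

```python
def baseline_correct(spectrum):
    # 1) Anchor points: endpoints plus interior local minima.
    n = len(spectrum)
    minima = [0] + [i for i in range(1, n - 1)
                    if spectrum[i] <= spectrum[i - 1]
                    and spectrum[i] <= spectrum[i + 1]] + [n - 1]
    # 2) Linearly interpolate the baseline between anchor points.
    baseline = [0.0] * n
    for a, b in zip(minima, minima[1:]):
        for i in range(a, b + 1):
            t = (i - a) / (b - a) if b > a else 0.0
            baseline[i] = spectrum[a] + t * (spectrum[b] - spectrum[a])
    # 3) Subtract the baseline from the spectrum.
    return [s - bl for s, bl in zip(spectrum, baseline)]
```

For example, the spectrum [1, 6, 2, 7, 3] has minima at indices 0, 2, and 4; after subtracting the interpolated baseline, only the two peaks remain.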