• Title/Summary/Keyword: Noise Robust

SAVITZKY-GOLAY DERIVATIVES: A SYSTEMATIC APPROACH TO REMOVING VARIABILITY BEFORE APPLYING CHEMOMETRICS

  • Hopkins, David W.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference / 2001.06a / pp.1041-1041 / 2001
  • Removal of variability in spectral data before the application of chemometric modeling will generally result in simpler (and presumably more robust) models. Particularly for sparsely sampled data, such as typically encountered in diode array instruments, the use of Savitzky-Golay (S-G) derivatives offers an effective method to remove the effects of shifting baselines and sloping or curving apparent baselines often observed with scattering samples. The application of these convolution functions is equivalent to fitting a selected polynomial to a number of points in the spectrum, usually 5 to 25 points. The value of the polynomial evaluated at its mid-point, or its derivative, is taken as the (smoothed) spectrum or its derivative at the mid-point of the wavelength window. The process is continued for successive windows along the spectrum. The original paper, published in 1964 [1], presented these convolution functions as integers to be used as multipliers for the spectral values at equal intervals in the window, with a normalization integer to divide the sum of the products, to determine the result for each point. Steinier et al. [2] published corrections to errors in the original presentation [1], and a vector formulation for obtaining the coefficients. The selection of the degree of the polynomial and the number of points in the window determines whether closely spaced bands and shoulders are resolved in the derivatives. Furthermore, the noise reduction achieved in the derivatives may be estimated from the square root of the sum of the squared coefficients, divided by the NORM value. A simple technique to evaluate the actual convolution factors employed in the calculation by the software will be presented. It has been found that some software packages do not properly account for the sampling interval of the spectral data (Equation VII in [1]). While this is not a problem in the construction and implementation of chemometric models, it may be noticed when comparing models at differing spectral resolutions. The effects of choosing various polynomials and numbers of points in the window on the parameters of PLS models will also be presented.
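
As a quick illustration of the convolution described above, the following sketch (assuming Python with NumPy/SciPy, which are not mentioned in the abstract) computes Savitzky-Golay first-derivative coefficients, applies the sampling-interval scaling of Equation VII in [1], and estimates the filter's noise factor; the window length, polynomial degree, and interval are arbitrary example values.

```python
# Illustration only: Savitzky-Golay smoothing/derivative via SciPy, not the
# integer coefficient tables of [1]. Window, degree, and interval are examples.
import numpy as np
from scipy.signal import savgol_coeffs, savgol_filter

window, polyorder, deriv = 11, 2, 1    # 11-point quadratic fit, first derivative
delta = 2.0                            # wavelength sampling interval (assumed, e.g. nm)

# Convolution coefficients; `delta` applies the sampling-interval scaling
# (Equation VII in [1]) that some packages reportedly omit.
coeffs = savgol_coeffs(window, polyorder, deriv=deriv, delta=delta)

# Noise amplification of the filter: square root of the sum of squared
# coefficients (for the integer tables, sqrt(sum(c_i^2)) divided by NORM).
noise_factor = np.sqrt(np.sum(coeffs ** 2))

spectrum = np.random.rand(256)         # placeholder spectrum at equal intervals
d1 = savgol_filter(spectrum, window, polyorder, deriv=deriv, delta=delta)
```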

Study on Robust Differential Privacy Using Secret Sharing Scheme (비밀 분산 기법을 이용한 강건한 디퍼렌셜 프라이버시 개선 방안에 관한 연구)

  • Kim, Cheoljung; Yeo, Kwangsoo; Kim, Soonseok
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.2 / pp.311-319 / 2017
  • Recently, invasion of privacy in medical information has become an issue, following growing interest in the secondary use of large-scale medical data. Such large-scale medical data are very useful and can be applied in various fields such as disease research and prevention. However, due to privacy laws such as the Privacy Act and the Medical Law, these data, which include the personal information of patients and health professionals, are difficult to use for secondary purposes. Accordingly, various methods that allow data to be utilized while protecting privacy, such as k-anonymity, l-diversity, and differential privacy, have been developed and applied in this field. In this paper, we study the differential privacy processing procedure, one of these methods, and examine a problem of differential privacy when Laplace noise is used. Finally, we propose a new method for this problem that uses Shamir's secret sharing scheme and a symmetric-key encryption algorithm such as AES.
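
For readers unfamiliar with the mechanisms named above, here is a minimal sketch (in Python, with all function names and parameters chosen for illustration rather than taken from the paper) of Laplace-noise differential privacy and a toy Shamir secret-sharing split; the paper's actual combination with AES encryption is not reproduced.

```python
# Illustration only: Laplace-mechanism noise plus a toy Shamir (t, n) split.
# All names and parameters are assumptions; the AES step is omitted.
import random
import numpy as np

def laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5):
    """Return a differentially private count with Laplace(sensitivity/epsilon) noise."""
    return true_count + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

PRIME = 2**31 - 1  # small prime field for the toy example

def shamir_split(secret, n=3, t=2):
    """Split `secret` into n shares; any t of them reconstruct it (toy version)."""
    coeffs = [secret % PRIME] + [random.randrange(PRIME) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

noisy_count = round(laplace_mechanism(true_count=42, epsilon=0.5))
shares = shamir_split(noisy_count)
```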

Threshold-based Pre-impact Fall Detection and its Validation Using the Real-world Elderly Dataset (임계값 기반 충격 전 낙상검출 및 실제 노인 데이터셋을 사용한 검증)

  • Dongkwon Kim; Seunghee Lee; Bummo Koo; Sumin Yang; Youngho Kim
    • Journal of Biomedical Engineering Research / v.44 no.6 / pp.384-391 / 2023
  • Among the elderly, a large share of fatal injuries and deaths is attributed to falls. Therefore, a pre-impact fall detection system is necessary for injury prevention. In this study, a robust threshold-based algorithm was proposed for pre-impact fall detection, reducing false positives in highly dynamic daily-living movements. The algorithm was validated using public datasets (KFall and FARSEEING) that include real-world elderly falls. A 6-axis IMU sensor (Movella Dot, Movella, Netherlands) was attached to S2 of 20 healthy adults (aged 22.0±1.9 years, height 164.9±5.9 cm, weight 61.4±17.1 kg) to measure 14 activities of daily living and 11 fall movements at a sampling frequency of 60 Hz. A 5 Hz low-pass filter was applied to the IMU data to remove high-frequency noise. The sum vector magnitudes of acceleration and angular velocity, roll, pitch, and vertical velocity were extracted as the feature vector. The proposed algorithm showed an accuracy of 98.3%, a sensitivity of 100%, a specificity of 97.0%, and an average lead time of 311±99 ms on our experimental data. When evaluated using the KFall public dataset, accuracy on the adult data improved to 99.5% compared with recent studies, and a specificity of 100% was achieved on the elderly data. When evaluated using the FARSEEING real-world elderly fall data without separate segmentation, the algorithm showed a sensitivity of 71.4% (5/7).
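
A minimal sketch of the kind of threshold logic described above is given below; it assumes Python with SciPy, and the threshold values are placeholders rather than the paper's tuned parameters (only the 5 Hz low-pass filter and 60 Hz sampling rate come from the abstract).

```python
# Illustration only: threshold-based pre-impact fall detection on IMU features.
# The 5 Hz low-pass filter and 60 Hz sampling rate follow the abstract; the
# threshold values are placeholders, not the paper's tuned parameters.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 60.0                                      # sampling frequency (Hz)
b, a = butter(4, 5.0 / (FS / 2), btype="low")  # 5 Hz low-pass filter

def detect_fall(acc, gyr, vertical_vel, acc_th=1.6, gyr_th=120.0, vel_th=-1.2):
    """acc, gyr: (N, 3) arrays (assumed g and deg/s); vertical_vel: (N,) in m/s."""
    acc_f = filtfilt(b, a, acc, axis=0)
    gyr_f = filtfilt(b, a, gyr, axis=0)
    acc_svm = np.linalg.norm(acc_f, axis=1)    # sum vector magnitude of acceleration
    gyr_svm = np.linalg.norm(gyr_f, axis=1)    # sum vector magnitude of angular velocity
    # Flag a pre-impact fall wherever all features cross their (placeholder) thresholds.
    return (acc_svm > acc_th) & (gyr_svm > gyr_th) & (vertical_vel < vel_th)
```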

Robust Radiometric and Geometric Correction Methods for Drone-Based Hyperspectral Imaging in Agricultural Applications

  • Hyoung-Sub Shin; Seung-Hwan Go; Jong-Hwa Park
    • Korean Journal of Remote Sensing / v.40 no.3 / pp.257-268 / 2024
  • Drone-mounted hyperspectral sensors (DHSs) have revolutionized remote sensing in agriculture by offering a cost-effective and flexible platform for high-resolution spectral data acquisition. Their ability to capture data at low altitudes minimizes atmospheric interference, enhancing their utility in agricultural monitoring and management. This study focused on addressing the challenges of radiometric and geometric distortions in preprocessing drone-acquired hyperspectral data. Radiometric correction, using the empirical line method (ELM) and spectral reference panels, effectively removed sensor noise and variations in solar irradiance, resulting in accurate surface reflectance values. Notably, the ELM correction improved reflectance for measured reference panels by 5-55%, resulting in a more uniform spectral profile across wavelengths, further validated by high correlations (0.97-0.99), despite minor deviations observed at specific wavelengths for some reflectors. Geometric correction, utilizing a rubber sheet transformation with ground control points, successfully rectified distortions caused by sensor orientation and flight path variations, ensuring accurate spatial representation within the image. The effectiveness of geometric correction was assessed using root mean square error (RMSE) analysis, revealing minimal errors in both the east-west (0.00 to 0.081 m) and north-south (0.00 to 0.076 m) directions. The overall position RMSE of 0.031 m across 100 points demonstrates high geometric accuracy, exceeding industry standards. Additionally, image mosaicking was performed to create a comprehensive representation of the study area. These results demonstrate the effectiveness of the applied preprocessing techniques and highlight the potential of DHSs for precise crop health monitoring and management in smart agriculture. However, further research is needed to address challenges related to data dimensionality, sensor calibration, and reference data availability, as well as exploring alternative correction methods and evaluating their performance in diverse environmental conditions to enhance the robustness and applicability of hyperspectral data processing in agriculture.
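
The empirical line method mentioned above amounts to a per-band linear regression between measured digital numbers and the known reflectance of the reference panels. A minimal sketch, assuming Python/NumPy and hypothetical array shapes (not the authors' code), is:

```python
# Illustration only: per-band empirical line method (ELM) using reference panels.
# Array shapes and panel reflectances are assumptions for this example.
import numpy as np

def empirical_line_correction(cube, panel_dn, panel_reflectance):
    """
    cube:              (rows, cols, bands) raw digital numbers
    panel_dn:          (n_panels, bands) mean DN over each reference panel
    panel_reflectance: (n_panels, bands) known panel reflectance
    Returns surface reflectance with the same shape as `cube`.
    """
    reflectance = np.empty(cube.shape, dtype=float)
    for band in range(cube.shape[-1]):
        # Least-squares line DN -> reflectance for this band: R = gain * DN + offset
        gain, offset = np.polyfit(panel_dn[:, band], panel_reflectance[:, band], deg=1)
        reflectance[..., band] = gain * cube[..., band] + offset
    return reflectance
```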

TCN-USAD for Anomaly Power Detection (이상 전력 탐지를 위한 TCN-USAD)

  • Hyeonseok Jin; Kyungbaek Kim
    • Smart Media Journal / v.13 no.7 / pp.9-17 / 2024
  • Due to increasing energy consumption and eco-friendly policies, there is a need for efficient energy use in buildings. Anomaly power detection methods based on deep learning are being used for this purpose. Because anomaly data are difficult to collect, anomaly detection is typically performed using the reconstruction error of a Recurrent Neural Network (RNN)-based autoencoder. However, this approach has limitations, such as the long time required to fully learn temporal features and its sensitivity to noise in the training data. To overcome these limitations, this paper proposes TCN-USAD, which combines a Temporal Convolutional Network (TCN) with UnSupervised Anomaly Detection for multivariate data (USAD). The proposed model uses a TCN-based autoencoder and the USAD structure, which employs two decoders and adversarial training, to quickly learn temporal features and enable robust anomaly detection. To validate the performance of TCN-USAD, comparative experiments were performed using two building energy datasets. The results showed that the TCN-based autoencoder achieves faster and better reconstruction than an RNN-based autoencoder. Furthermore, TCN-USAD achieved an F1-score about 20% higher than other anomaly detection models, demonstrating excellent anomaly detection performance.
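
To make the USAD-style scoring concrete, the sketch below (assuming PyTorch; layer sizes, kernel width, and the score weights are illustrative assumptions, not the paper's configuration) shows a dilated causal convolution block of the kind used in a TCN and the two-decoder anomaly score.

```python
# Illustration only: a dilated causal convolution block (TCN building block) and
# the USAD-style two-decoder anomaly score. Sizes and weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCNBlock(nn.Module):
    """One dilated causal 1-D convolution block with a residual connection."""
    def __init__(self, channels, dilation, kernel_size=3):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation          # causal left-padding
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):                                # x: (batch, channels, time)
        out = self.conv(F.pad(x, (self.pad, 0)))
        return self.act(out) + x                         # residual connection

def usad_score(x, encoder, decoder1, decoder2, alpha=0.5, beta=0.5):
    """Anomaly score: alpha*||x - D1(E(x))||^2 + beta*||x - D2(E(D1(E(x))))||^2."""
    w1 = decoder1(encoder(x))
    w2 = decoder2(encoder(w1))
    return alpha * torch.mean((x - w1) ** 2) + beta * torch.mean((x - w2) ** 2)
```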

Front-End Processing for Speech Recognition in the Telephone Network (전화망에서의 음성인식을 위한 전처리 연구)

  • Jun, Won-Suk; Shin, Won-Ho; Yang, Tae-Young; Kim, Weon-Goo; Youn, Dae-Hee
    • The Journal of the Acoustical Society of Korea / v.16 no.4 / pp.57-63 / 1997
  • In this paper, we study efficient feature vector extraction methods and front-end processing to improve the performance of a speech recognition system using the KT (Korea Telecommunication) database collected through various telephone channels. First, we compare the recognition performance of feature vectors known to be robust to noise and environmental variation, and verify the performance enhancement of the recognition system using weighted cepstral distance measures. The experimental results show that the recognition rate is increased by using both PLP (Perceptual Linear Prediction) and MFCC (Mel Frequency Cepstral Coefficient) features in comparison with the LPC cepstrum used in the KT recognition system. Among cepstral distance measures, weighted functions such as RPS (Root Power Sums) and BPL (Band-Pass Lifter) improve recognition. The application of spectral subtraction decreases the recognition rate because of distortion effects, whereas RASTA (RelAtive SpecTrAl) processing, CMS (Cepstral Mean Subtraction), and SBR (Signal Bias Removal) enhance recognition performance. In particular, the CMS method is simple but yields a large improvement. Finally, modified methods for the real-time implementation of CMS are compared, and an improved method is suggested to prevent performance degradation.
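
Since CMS is highlighted as simple but effective, the following sketch (Python/NumPy; the running-mean variant is one common real-time approximation and not necessarily the paper's modified method) shows both a batch and a streaming form of cepstral mean subtraction.

```python
# Illustration only: cepstral mean subtraction (CMS). The running-mean variant is
# one common real-time approximation, not necessarily the paper's modified method.
import numpy as np

def cms(cepstra):
    """Batch CMS: subtract the utterance-level mean of each cepstral coefficient.
    cepstra: (frames, coeffs) array of MFCC/PLP cepstra."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

def cms_running(cepstra, alpha=0.995):
    """Streaming CMS: subtract an exponentially weighted running mean per frame."""
    out = np.empty_like(cepstra, dtype=float)
    mean = np.zeros(cepstra.shape[1])
    for t, frame in enumerate(cepstra):
        mean = alpha * mean + (1.0 - alpha) * frame
        out[t] = frame - mean
    return out
```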

Outlier Detection from High Sensitive Geiger Mode Imaging LIDAR Data retaining a High Outlier Ratio (높은 이상점 비율을 갖는 고감도 가이거모드 영상 라이다 데이터로부터 이상점 검출)

  • Kim, Seongjoon; Lee, Impyeong; Lee, Youngcheol; Jo, Minsik
    • Korean Journal of Remote Sensing / v.28 no.5 / pp.573-586 / 2012
  • Point clouds acquired by a LIDAR (Light Detection And Ranging, also LADAR) system often contain erroneous points, called outliers, that do not appear to lie on physical surfaces; these should be carefully detected and eliminated before further processing for applications. Particularly for LIDAR systems employing a highly sensitive Geiger-mode focal plane array (GmFPA) detector, the outlier ratio is significantly high, which often causes existing algorithms to fail to detect the outliers in such data sets. In this paper, we propose a method to discriminate outliers in a point cloud with a high outlier ratio acquired by a GmFPA LIDAR system. The underlying assumption of this method is that a meaningful target surface occupies at least two adjacent pixels and that the ranges measured from these pixels are similar. We applied the proposed method to simulated LIDAR data of different point densities and outlier ratios and analyzed the performance for different thresholds and data properties. Consequently, we found that the outlier detection probability is about 99% in most cases. We also confirmed that the proposed method is robust to data properties and less sensitive to the thresholds. The method will be effectively utilized for on-line real-time processing and post-processing of GmFPA LIDAR data.
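
The adjacency assumption stated above can be turned into a simple test: a pixel with a return is kept only if at least one neighbouring pixel reports a similar range. The sketch below (Python/NumPy; the range-image layout and tolerance are assumptions, not the paper's parameters) illustrates this idea.

```python
# Illustration only: flag a pixel as an outlier when no 8-neighbour reports a
# similar range. The range-image layout and tolerance are assumptions.
import numpy as np

def detect_outliers(range_img, range_tol=0.5):
    """range_img: (rows, cols) range per pixel, NaN where there is no return."""
    rows, cols = range_img.shape
    has_close_neighbour = np.zeros((rows, cols), dtype=bool)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            shifted = np.full((rows, cols), np.nan)
            shifted[max(-dr, 0):rows - max(dr, 0), max(-dc, 0):cols - max(dc, 0)] = \
                range_img[max(dr, 0):rows - max(-dr, 0), max(dc, 0):cols - max(-dc, 0)]
            # NaN comparisons evaluate to False, so missing neighbours never count.
            has_close_neighbour |= np.abs(range_img - shifted) < range_tol
    return ~has_close_neighbour & ~np.isnan(range_img)
```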

A Depth-based Disocclusion Filling Method for Virtual Viewpoint Image Synthesis (가상 시점 영상 합성을 위한 깊이 기반 가려짐 영역 메움법)

  • Ahn, Il-Koo; Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.6 / pp.48-60 / 2011
  • Nowadays, the 3D community is actively researching 3D imaging and free-viewpoint video (FVV). Free-viewpoint rendering in multi-view video, which virtually moves through the scene to create different viewpoints, has become a popular topic in 3D research and can lead to various applications. However, it faces restrictions in cost-effectiveness and the large bandwidth occupied in video transmission. An alternative that solves this problem is to generate virtual views using a single texture image and a corresponding depth image. A critical issue in generating virtual views is that regions occluded by foreground (FG) objects in the original views may become visible in the synthesized views. Filling these disocclusions (holes) in a visually plausible manner determines the quality of the synthesis results. In this paper, a new approach for handling disocclusions in synthesized views using a depth-based inpainting algorithm is presented. Patch-based non-parametric texture synthesis, which shows excellent performance, has two critical elements: determining where to fill first and determining which patch to copy. In this work, a noise-robust filling priority using the structure tensor of the Hessian matrix is proposed. Moreover, a patch matching algorithm that excludes foreground regions using the depth map and that considers the epipolar line is proposed. The superiority of the proposed method over existing methods is demonstrated by comparing experimental results.
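
As an illustration of a structure-tensor-based data term of the kind used for the filling priority above, the following sketch (Python with SciPy; the coherence measure and smoothing scale are assumptions, and the paper's full priority definition and depth/epipolar patch matching are not reproduced) computes a per-pixel coherence map.

```python
# Illustration only: a per-pixel coherence map from the smoothed structure tensor,
# usable as a noise-robust data term. Scales and the exact measure are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_coherence(gray, sigma=1.5):
    """gray: (rows, cols) float image. Returns coherence in [0, 1] per pixel."""
    gx = sobel(gray, axis=1)
    gy = sobel(gray, axis=0)
    # Gaussian-smoothed outer products of the gradient (structure tensor entries).
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Eigenvalues of the 2x2 tensor; their contrast measures how strongly oriented
    # (edge-like) the local structure is, which is robust to isolated noise.
    tmp = np.sqrt((jxx - jyy) ** 2 + 4.0 * jxy ** 2)
    lam1 = (jxx + jyy + tmp) / 2.0
    lam2 = (jxx + jyy - tmp) / 2.0
    return ((lam1 - lam2) / (lam1 + lam2 + 1e-8)) ** 2
```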

Dilution and redundancy effects on Stroop interference (스트룹 간섭의 희석 및 중복 효과)

  • Lee, Ji-Young; Min, Soo-Jung; Yi, Do-Joon
    • Korean Journal of Cognitive Science / v.22 no.4 / pp.469-494 / 2011
  • It is well known that visual objects belonging to the same perceptual category compete for category-specific, limited-capacity attentional resources. However, it remains to be seen how perceptually identical objects interact with each other during visual analysis. Perceptually identical objects might suppress each other as much as categorically identical objects do. Alternatively, they might cooperate to generate a perceptual representation that is long lasting and robust to noise. These possibilities were tested in the current research with three behavioral experiments using the Stroop task. The results showed that, relative to a single distractor, Stroop interference was diluted by two different distractors of the same category, while it was enhanced by two perceptually identical distractors (Experiment 1). This redundancy effect disappeared when two different distractors associated with the same response were presented (Experiment 2), and it was not affected by between- vs. within-hemisphere distractor presentation (Experiment 3). These findings indicate that the redundancy effect of distractors may be mediated by perceptual representations based on hemisphere-independent attentional resources. Overall, the current study supports the hypothesis that Stroop interference is constrained by category-specific attentional resources and further suggests that redundant presentations of a stimulus overcome such attentional constraints by facilitating perceptual processing.

Cable Fault Detection Improvement of STDR Using Reference Signal Elimination (인가신호 제거를 이용한 STDR의 케이블 고장 검출 성능 향상)

  • Jeon, Jeong-Chay; Kim, Taek-Hee
    • Journal of the Korea Academia-Industrial cooperation Society / v.17 no.3 / pp.450-456 / 2016
  • STDR (sequence time domain reflectometry), which detects cable faults by using a pseudo-noise sequence as a reference signal and performing time-correlation analysis between the reference signal and the reflected signal, is robust to noisy environments and can detect intermittent faults, including open faults and short circuits. On the other hand, if the fault is located far away or is a soft fault, the reflected signal is more strongly attenuated; hence the correlation coefficient in STDR becomes smaller, which makes fault detection difficult and increases the measurement error. In addition, automating fault localization through detection of the phase and peak value becomes difficult. Therefore, to improve the cable fault detection of conventional STDR, this paper proposes an algorithm in which the peak value of the correlation coefficient of the reference signal is detected first, and the peak value of the correlation coefficient of the reflected signal is then detected after removing the reference signal. The performance of the proposed method was validated experimentally on low-voltage power cables. The evaluation showed that the proposed method can identify whether a fault has occurred more accurately and can locate faults better than conventional STDR despite signal attenuation. In addition, automatic identification of the fault type and its location by detection of the phase and peak value, through elimination of the reference signal and normalization of the correlation coefficient, showed no errors.
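
To illustrate the correlation and reference-removal steps described above, the sketch below (Python/NumPy; the sampling rate, propagation velocity, and the simple normalization are assumptions rather than the paper's settings) correlates a pseudo-noise reference with a measured trace, subtracts an aligned copy of the reference, and estimates the fault distance from the delay between the incident and reflected peaks.

```python
# Illustration only: STDR-style correlation and reference removal. Sampling rate,
# propagation velocity, and the simple normalization are assumed values.
import numpy as np

def xcorr(reference, measured):
    """Cross-correlation for non-negative lags of `reference` within `measured`."""
    corr = np.correlate(measured, reference, mode="full")[len(reference) - 1:]
    return corr / (np.linalg.norm(reference) * np.linalg.norm(measured) + 1e-12)

def locate_fault(reference, measured, fs=100e6, v_prop=0.66 * 3e8):
    """Estimate fault distance from the delay between incident and reflected peaks."""
    corr = xcorr(reference, measured)
    ref_peak = int(np.argmax(np.abs(corr)))          # incident (reference) peak
    # Remove the incident component: subtract an aligned, amplitude-matched reference.
    seg = measured[ref_peak:ref_peak + len(reference)]
    ref_cut = reference[:len(seg)]
    scale = seg.dot(ref_cut) / (ref_cut.dot(ref_cut) + 1e-12)
    residual = measured.astype(float)
    residual[ref_peak:ref_peak + len(seg)] -= scale * ref_cut
    refl_peak = int(np.argmax(np.abs(xcorr(reference, residual))))  # reflection peak
    delay = (refl_peak - ref_peak) / fs              # round-trip delay in seconds
    return delay * v_prop / 2.0                      # one-way fault distance (m)
```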