• Title/Summary/Keyword: Signal Discretization


Fault Pattern Extraction Via Adjustable Time Segmentation Considering Inflection Points of Sensor Signals for Aircraft Engine Monitoring (센서 데이터 변곡점에 따른 Time Segmentation 기반 항공기 엔진의 고장 패턴 추출)

  • Baek, Sujeong
    • Journal of Korean Society of Industrial and Systems Engineering / v.44 no.3 / pp.86-97 / 2021
  • As mechatronic systems perform various complex functions and require high performance, automatic fault detection is necessary for secure operation in manufacturing processes. For automatic, real-time fault detection in modern mechatronic systems, multiple sensor signals are collected via Internet of Things technologies. Traditional statistical control charts and machine learning approaches perform well when normal operating states follow a unified, solid density model, but they have limitations when normal-state signals are scattered; hence, pattern extraction and matching approaches have attracted much attention. Signal discretization-based pattern extraction is among the most popular signal analyses, reducing the size of the given datasets as much as possible while highlighting significant, inherent signal behaviors. Since general pattern extraction methods are usually conducted with a fixed time segmentation size, they can easily cut off significant behaviors, and the quality of the extracted fault patterns is consequently reduced. In this regard, adjustable time segmentation is proposed to extract more meaningful fault patterns from multiple sensor signals. By considering the inflection points of the signals, we determine the optimal cut-points of the time segments in each sensor signal. In addition, to clarify the inflection points, we apply a Savitzky-Golay filter to the original datasets. To validate and verify the performance of the proposed segmentation, a dataset collected from an aircraft engine (provided by the NASA prognostics center) is used for fault pattern extraction. As a result, the proposed adjustable time segmentation shows better performance in fault pattern extraction.
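The segmentation idea in this abstract can be sketched in a few lines: smooth each signal with a Savitzky-Golay filter, locate inflection points as sign changes of the discrete second derivative, and cut the signal there instead of at fixed intervals. This is only an illustrative reading of the abstract, not the authors' implementation; the window length, polynomial order, and test signal are assumed.

```python
import numpy as np
from scipy.signal import savgol_filter

def inflection_cut_points(signal, window=11, polyorder=3):
    """Indices where the signal's curvature changes sign.

    The signal is smoothed with a Savitzky-Golay filter first, so that
    noise does not create spurious sign changes in the second difference.
    """
    smooth = savgol_filter(signal, window_length=window, polyorder=polyorder)
    curvature = np.diff(smooth, n=2)          # discrete second derivative
    signs = np.sign(curvature)
    return np.where(signs[:-1] * signs[1:] < 0)[0] + 1

def adjustable_segments(signal, **kwargs):
    """Split the signal at its inflection points instead of fixed windows."""
    cuts = inflection_cut_points(signal, **kwargs)
    bounds = [0, *cuts.tolist(), len(signal)]
    return [signal[a:b] for a, b in zip(bounds[:-1], bounds[1:])]

t = np.linspace(0, 4 * np.pi, 400)            # assumed test signal: a sine
x = np.sin(t)
cuts = inflection_cut_points(x)
segments = adjustable_segments(x)
```

For a pure sine over [0, 4π] the cuts land near the true inflection points at π, 2π, and 3π, so the segment boundaries follow the signal's own behavior rather than a fixed window size.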

Distance Estimation Using Discretized Frequency Synthesis of Ultrasound Signals (초음파의 이산 주파수 합성을 이용한 거리 측정)

  • Park, Sang-Wook;Kim, Dae-Eun
    • Journal of Institute of Control, Robotics and Systems / v.17 no.5 / pp.499-504 / 2011
  • In this paper, we suggest a method for discretized frequency modulation of ultrasonic signals. A continuous sweep of frequency-modulated signals can be modelled with fine levels of discretization. If the ultrasound signals are modulated with monotonically decreasing frequencies, then the cross-correlation between an emitted signal and a received signal can be used to identify the distances of multiple target objects. For the discretized frequency synthesis, CF ultrasounds with different frequencies are serially ordered. An auto-correlation test with the signal shows effective results for distance estimation. The discretized frequency syntheses have better distance resolution than CF ultrasound signals, and the resolution depends on the number of combined ultrasound frequencies.
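The distance estimation described here reduces to a cross-correlation peak search. The sketch below, with assumed sampling rate, tone frequencies, and burst length (none are given in the abstract), concatenates CF tones with monotonically decreasing frequencies and locates a simulated echo:

```python
import numpy as np

fs = 200_000                        # sampling rate [Hz] (assumed)
c = 343.0                           # speed of sound in air [m/s]
freqs = [48_000, 44_000, 40_000]    # monotonically decreasing CF tones (assumed)
tone_len = 256                      # samples per CF burst (assumed)

# Discretized frequency synthesis: CF bursts serially ordered by frequency.
t = np.arange(tone_len) / fs
emitted = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

# Simulated echo: the emitted signal returns after a round-trip delay.
delay = 300                         # round-trip delay in samples
received = np.zeros(len(emitted) + 1000)
received[delay:delay + len(emitted)] = emitted

# The cross-correlation peak between received and emitted signals
# identifies the round-trip delay, hence the target distance.
corr = np.correlate(received, emitted, mode="valid")
est_delay = int(np.argmax(corr))
distance = est_delay * c / (2 * fs)   # divide by 2: out-and-back path
```

Because the stepped-frequency burst has a sharp autocorrelation peak, the estimated delay matches the simulated one exactly in this noiseless sketch.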

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess returns from financial markets. In general, detecting market timing means determining when to buy and sell to get excess returns from trading. In many market timing systems, trading rules have been used as an engine to generate trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, by using the control function, it does not generate a trade signal when the market pattern is uncertain. Numeric data for rough set analysis must be discretized, because rough sets only accept categorical data. Discretization searches for proper "cuts" of numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews with experts. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first finds categorical values by naïve scaling of the data, then finds the optimized discretization thresholds through Boolean reasoning.
Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance under rough set analysis. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data used in this study are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. It is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable method for the validation sample; moreover, expert's knowledge-based discretization produced robust performance on both the training and validation samples. We also compared rough set analysis and decision trees, experimenting with C4.5 for comparison purposes. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
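Of the four discretization methods listed, equal frequency scaling is the simplest to state precisely: pick cuts at quantiles so that each interval receives roughly the same number of samples. A minimal sketch (the 660-sample size mirrors the study's trading days; the data itself is synthetic):

```python
import numpy as np

def equal_frequency_cuts(values, n_intervals):
    """Equal frequency scaling: cuts at interior quantiles, so that
    approximately the same number of samples falls into each interval."""
    qs = np.linspace(0, 1, n_intervals + 1)[1:-1]
    return np.quantile(values, qs)

def discretize(values, cuts):
    """Map each numeric value to the index of its interval."""
    return np.digitize(values, cuts)

rng = np.random.default_rng(0)
x = rng.normal(size=660)            # e.g. 660 trading days of one indicator
cuts = equal_frequency_cuts(x, 4)
labels = discretize(x, cuts)
counts = np.bincount(labels)
```

With 660 continuous-valued samples and 4 intervals, each interval ends up with exactly 165 samples, which is the defining property of this scaling.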

Nonlinear Deblurring Algorithm on Convex-Mirror Image for Reducing Occlusion (볼록거울 영상에서 일어나는 영상 겹침 극복을 위한 비선형적 디블러링 알고리즘)

  • Lee, In-Jung
    • The KIPS Transactions:PartA / v.13A no.5 s.102 / pp.429-434 / 2006
  • A CCTV system can reduce the number of cameras if convex mirrors are used. In this case, the convex-mirror image is distorted, so a transformation to a flat image is needed. At the center of the mirror image the transformed image has no distortion, but near the boundary it has plentiful distortion, caused by occlusion of angled rays and by diffraction. A linear filtering approach cannot separate noise from signal where their Fourier spectra overlap, but a nonlinear discretization method can reduce the blurring noise. In this paper, we introduce the backward solution of a nonlinear wave equation for reducing blurring noise and the biased expansion of equilibrium contours, and we compute the introduced method with a discretization scheme. Analyzing the experimental results with PSNR, the proposed method performs about 4 dB better than the current method.
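The 4 dB improvement quoted above is measured with PSNR; for reference, a minimal implementation of that metric (assuming 8-bit images with peak value 255; the example arrays are synthetic):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB (8-bit peak assumed by default)."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

clean = np.zeros((8, 8))
blurred = clean + 16.0              # a uniform error of 16 gray levels
quality = psnr(clean, blurred)
```

A uniform 16-level error gives an MSE of 256 and hence a PSNR of about 24 dB; a 4 dB gain corresponds to cutting the MSE by roughly a factor of 2.5.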

Derivation of the z-Transfer Function of an Optimal Digital Controller Using an Integral-Square-Error Criterion with the Continuous-Data Model in Linear Control Systems (선형연속데이터형 제어계통의 플랜트와 디지털모델의 오차자승적분지표에 의한 최적디지탈제어기의 전달함수유도)

  • Park, Kyung-Sam
    • The Transactions of the Korean Institute of Electrical Engineers / v.32 no.6 / pp.211-218 / 1983
  • In this paper, an attempt is made to match the continuous state trajectory of the digital control system with that of its continuous-data model. Matching the state trajectories instead of the output responses ensures that the behavior of the internal variables of the plant, as well as of the output variables, is preserved in the discretization. The mathematical tool used in this research is an extended maximum principle of the Pontryagin type, which enables one to synthesize a staircase type of optimal control signal, such as the output signal of a zero-order hold associated with a digital controller. A general mathematical expression is derived for the digital controller which may replace the analog controller of a general system while preserving as much as possible the performance characteristics of the original continuous-data control system.
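The staircase control signal of a zero-order hold is exactly what standard ZOH discretization models: the discrete states then coincide with the continuous state trajectory at every sampling instant. A minimal sketch with an assumed first-order plant (not the paper's system):

```python
import numpy as np
from scipy.signal import cont2discrete

# Assumed first-order plant x' = -x + u, y = x (illustrative only).
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])
T = 0.1                              # sampling period [s]

# Zero-order-hold discretization: the ZOH holds the control constant over
# each period (a staircase signal), so the resulting discrete states match
# the continuous state trajectory exactly at the sampling instants.
Ad, Bd, Cd, Dd, dt = cont2discrete((A, B, C, D), T, method="zoh")
```

For this plant the closed forms are Ad = e^(-T) and Bd = 1 - e^(-T), which is what the routine returns.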


Detection of tonal frequency of underwater radiated noise via atomic norm minimization (Atomic norm minimization을 통한 수중 방사 소음 신호의 토널 주파수 탐지)

  • Kim, Junhan;Kim, Jinhong;Shim, Byonghyo;Hong, Jungpyo;Kim, Seongil;Hong, Wooyoung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.38 no.5
    • /
    • pp.543-548
    • /
    • 2019
  • The tonal signal caused by the machinery components of a vessel, such as the engine, gearbox, and support elements, can be modeled as a sparse signal in the frequency domain. Recently, compressive sensing based techniques, which recover an original signal from a small number of measurements in a short period of time, have been applied to tonal frequency detection. These techniques, however, cannot avoid the basis mismatch error caused by the discretization of the frequency domain. In this paper, we propose a method to detect the tonal frequency from a small number of measurements in the continuous domain by using the atomic norm minimization technique. From the simulation results, we demonstrate that the proposed technique outperforms conventional methods in terms of the exact recovery ratio and mean square error.
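The basis mismatch problem that motivates atomic norm minimization is easy to demonstrate: a tone whose frequency falls between DFT grid points is no longer sparse on that grid. The sketch below shows only the mismatch, not the ANM recovery itself (which requires a semidefinite solver); the signal length and energy threshold are assumed:

```python
import numpy as np

n = 64                                  # signal length (assumed)
t = np.arange(n)

def occupied_bins(freq, thresh=0.1):
    """Number of DFT bins holding a non-negligible share of a tone's energy."""
    x = np.exp(2j * np.pi * freq * t)
    mag = np.abs(np.fft.fft(x)) / n
    return int(np.sum(mag > thresh))

on_grid = occupied_bins(10 / n)         # frequency aligned with the DFT grid
off_grid = occupied_bins(10.5 / n)      # frequency midway between two bins
```

The on-grid tone occupies a single bin, while the off-grid tone leaks into several neighboring bins; that leakage is exactly the error a continuous-domain method such as ANM avoids.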

Geometric Detail Suppression for the Generation of Efficient Finite Elements (효율적 유한요소 생성을 위한 미소 기하 특징 소거)

  • Lee, Yong-Gu;Lee, Kun-Woo
    • Korean Journal of Computational Design and Engineering / v.2 no.3 / pp.175-185 / 1997
  • Given the widespread use of the finite element method in strength analysis, automatic mesh generation is an important component in the computer-aided design of parts and assemblies. For a given resolution of geometric accuracy, the purpose of mesh generators is to discretize the continuous model of a part within this error limit. Sticking to this condition often produces many small elements around small features, even though these regions are usually of little interest, so computer resources are wasted. Therefore, it is desirable to selectively suppress small features from the model before discretization. This can be achieved by low-pass filtering a CAD model. A spatial function of one dimension higher than the model of interest is represented using Fourier basis functions, and the region where the function yields a value greater than a prescribed value is considered the extent of the shape. Subsequently, the spatial function is low-pass filtered, yielding a shape without the small features. As an undesirable side effect of this operation, all sharp corners are rounded. Preservation of sharp corners is important since stress concentrations might occur there, which is why the LPF (low-pass filtered) model cannot be used directly. Instead, the distances of the boundary elements of the original shape from the LPF model are calculated, and those that are far from the LPF model are identified and removed. It is shown that the number of mesh elements generated on the simplified model is much smaller than that of the original model.
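The corner-rounding side effect of Fourier low-pass filtering can be seen in one dimension: truncating the spectrum of a shape with sharp jumps preserves its bulk (the DC term) but spreads each jump over several samples. A 1D sketch, not the paper's higher-dimensional spatial function; the length and harmonic count are assumed:

```python
import numpy as np

n = 256
x = np.zeros(n)
x[n // 4: 3 * n // 4] = 1.0             # a 1D "shape" with sharp corners

def low_pass(signal, keep):
    """Zero out all but the lowest `keep` harmonics (plus DC)."""
    spec = np.fft.fft(signal)
    out = np.zeros_like(spec)
    out[:keep + 1] = spec[:keep + 1]    # DC and positive harmonics
    out[-keep:] = spec[-keep:]          # matching negative harmonics
    return np.fft.ifft(out).real

smooth = low_pass(x, 8)
```

The filtered shape keeps the same mean (the overall extent survives) but its maximum slope drops sharply: the sharp corners have been rounded, which is why the paper restores corner features by comparing boundary distances against the LPF model afterwards.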


Design of FIR Filters With Sparse Signed Digit Coefficients (희소한 부호 자리수 계수를 갖는 FIR 필터 설계)

  • Kim, Seehyun
    • Journal of IKEEE / v.19 no.3 / pp.342-348 / 2015
  • High-speed implementation of digital filters is required in high data rate applications such as hard-wired wideband modems and high resolution video codecs. Since the critical path of a digital filter is the MAC (multiplication and accumulation) circuit, filter coefficients with sparse non-zero bits enable high-speed implementation with adders of low hardware cost. Compressive sensing has been reported to be very successful in sparse representation and sparse signal recovery. In this paper, a design method for digital FIR filters with CSD (canonic signed digit) coefficients using the compressive sensing technique is proposed. The sparse non-zero signed digits are selected in a greedy fashion while pruning mistakenly selected digits. A few design examples show that the proposed method can be used to design sparse CSD coefficient digital FIR filters approximating the desired frequency response.
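CSD itself is well defined independently of the paper's compressive sensing step: every integer has a unique signed digit form with digits in {-1, 0, +1} and no two adjacent non-zero digits, which typically needs fewer non-zero digits (hence fewer adders) than plain binary. A minimal conversion routine:

```python
def to_csd(n):
    """Canonic signed digit digits of an integer, LSB first.

    Digits are in {-1, 0, +1} and no two adjacent digits are non-zero,
    which minimizes the number of non-zero digits (adders in hardware).
    """
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)     # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def from_csd(digits):
    """Reassemble the integer from LSB-first signed digits."""
    return sum(d * (1 << i) for i, d in enumerate(digits))
```

For example, 7 = 8 - 1 uses two non-zero digits in CSD instead of the three in binary 111, so a multiply by 7 costs one subtraction instead of two additions.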

Development of A Recovery Algorithm for Sparse Signals based on Probabilistic Decoding (확률적 희소 신호 복원 알고리즘 개발)

  • Seong, Jin-Taek
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.5 / pp.409-416 / 2017
  • In this paper, we consider a framework of compressed sensing over finite fields. Each measurement sample is obtained as the inner product of a row of a sensing matrix and a sparse signal vector. The recovery algorithm proposed in this study, based on probabilistic decoding, is used to find a solution to the compressed sensing problem. Until now, compressed sensing theory has dealt with real-valued or complex-valued systems, but when the original real or complex signals are processed, information is lost in the discretization. The motivation of this work lies in efforts to solve inverse problems for discrete signals. The framework proposed in this paper uses a parity-check matrix of low-density parity-check (LDPC) codes, developed in coding theory, as the sensing matrix. We develop a stochastic algorithm to reconstruct sparse signals over a finite field. Unlike LDPC decoding in existing coding theory, we design an iterative algorithm using the probability distribution of the sparse signals. The proposed recovery algorithm achieves better reconstruction performance as the size of the finite field increases. Since compressed sensing performs well even with a low-density sensing matrix such as a parity-check matrix, it is expected to be actively used in applications involving discrete signals.
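The measurement model over GF(2) is concrete enough to sketch: each sample is an inner product of a parity-check-style row with a sparse 0/1 vector, reduced mod 2. The recovery below is a brute-force search for the sparsest consistent solution, standing in for the paper's probabilistic decoder; the matrix is random rather than a true LDPC parity-check matrix, and all sizes are assumed:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, m, k = 12, 8, 2                     # signal length, measurements, sparsity (assumed)

H = rng.integers(0, 2, size=(m, n))    # stand-in for an LDPC parity-check matrix
x = np.zeros(n, dtype=int)
x[rng.choice(n, size=k, replace=False)] = 1   # k-sparse signal over GF(2)

y = H @ x % 2                          # each sample: an inner product over GF(2)

def recover(H, y, k):
    """Brute-force sparsest solution over GF(2) (tiny n only); the paper
    replaces this search with iterative probabilistic decoding."""
    n = H.shape[1]
    for weight in range(k + 1):
        for idx in combinations(range(n), weight):
            cand = np.zeros(n, dtype=int)
            cand[list(idx)] = 1
            if np.array_equal(H @ cand % 2, y):
                return cand
    return None

x_hat = recover(H, y, k)
```

The exhaustive search scales exponentially in the sparsity, which is precisely why an iterative message-passing decoder is needed at realistic sizes.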