• Title/Summary/Keyword: mean filtering (평균필터링)


Design of a 60 Hz Band Rejection Filter Insensitive to Component Tolerances (부품 허용 오차에 둔감한 60Hz 대역 억제 필터 설계)

  • Cheon, Jimin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.2
    • /
    • pp.109-116
    • /
    • 2022
  • In this paper, we propose a band rejection filter (BRF) with a state variable filter (SVF) structure to effectively remove 60 Hz line-frequency noise introduced into a sensor system. A conventional BRF built on the SVF structure uses an additional operational amplifier (OPAMP) to sum the low pass filter (LPF) and high pass filter (HPF) outputs, or the input signal and the band pass filter (BPF) output. The notch frequency and notch depth that determine the signal attenuation of such a BRF therefore depend heavily on the tolerance of the resistors used to form the sum or difference of the signals. In the proposed BRF, by contrast, the BRF output is formed naturally within the SVF structure, so no combination of ports is needed. The notch frequency of the proposed BRF is 59.99 Hz, and Monte Carlo simulation results confirm that it is unaffected by resistor tolerance. The notch depth has an average of -42.54 dB with a standard deviation of 0.63 dB, confirming normal operation as a BRF. Finally, the proposed BRF was applied to an electrocardiogram (ECG) signal contaminated with 60 Hz noise, and the 60 Hz noise was appropriately suppressed.
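The notch behavior described above can be illustrated with a minimal digital sketch. Note this is not the paper's analog SVF circuit: it is a standard digital notch biquad (RBJ audio-EQ cookbook coefficients), with an assumed 1 kHz sampling rate, used only to show the deep attenuation at 60 Hz and near-unity gain elsewhere.

```python
import numpy as np

fs = 1000.0         # sampling rate in Hz (illustrative; not from the paper)
f0, Q = 60.0, 30.0  # notch frequency and quality factor

# Standard digital notch biquad (RBJ audio-EQ cookbook coefficients)
w0 = 2 * np.pi * f0 / fs
alpha = np.sin(w0) / (2 * Q)
b = np.array([1.0, -2 * np.cos(w0), 1.0]) / (1.0 + alpha)
a = np.array([1.0, -2 * np.cos(w0) / (1.0 + alpha), (1.0 - alpha) / (1.0 + alpha)])

def gain(f):
    """|H(z)| evaluated on the unit circle at frequency f (Hz)."""
    z1 = np.exp(-1j * 2 * np.pi * f / fs)  # z^-1
    num = b[0] + b[1] * z1 + b[2] * z1**2
    den = a[0] + a[1] * z1 + a[2] * z1**2
    return abs(num / den)

print(f"gain at  5 Hz: {gain(5.0):.3f}")   # passband, close to 1
print(f"gain at 60 Hz: {gain(60.0):.2e}")  # deep notch
```

The biquad places a zero pair exactly on the unit circle at 60 Hz, which is why the gain at the notch frequency is numerically zero regardless of Q.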

A Convergence Study of Surface Electromyography in Swallowing Stages for Swallowing Function Evaluation in Older Adults: Systematic Review (노인의 삼킴 단계별 삼킴 기능 평가를 위한 표면 근전도 검사의 융합적 연구 : 체계적 문헌고찰)

  • Park, Sun-Ha;Bae, Suyeong;Kim, Jung-eun;Park, Hae-Yean
    • Journal of the Korea Convergence Society
    • /
    • v.13 no.5
    • /
    • pp.9-19
    • /
    • 2022
  • In this study, a systematic review was conducted to analyze how sEMG is applied to evaluate the swallowing function of older adults at each stage of swallowing, and to help clinicians objectively measure the swallowing stages of older adults in practice. From 2011 to 2021, seven studies that met the selection criteria were identified using Pubmed, Scopus, and Web of Science (WoS). In the reviewed studies, older adults and younger adults were divided into experimental and control groups, and the swallowing phases of the older adults were analyzed using sEMG. sEMG was used to evaluate swallowing in the oral and pharyngeal stages, with electrodes attached over the swallowing muscles involved in each stage. The collected sEMG data were filtered using a band-pass filter and a notch filter, and were analyzed using RMS, amplitude, and maximum spontaneous contraction. This study found that sEMG can serve as a tool to objectively and quantitatively evaluate swallowing function by stage. It is therefore expected to encourage further studies that incorporate sEMG for stage-specific evaluation of swallowing function.
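Of the processing chain mentioned above (band-pass and notch filtering followed by RMS analysis), the RMS step can be sketched as a moving-window envelope. The sampling rate, window length, and synthetic burst below are assumptions for illustration, not parameters taken from the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000.0  # sampling rate in Hz (typical for sEMG; an assumption here)

def moving_rms(x, fs, window_ms=100):
    """Moving-window RMS envelope of a (pre-filtered) sEMG signal."""
    n = max(1, int(fs * window_ms / 1000))
    kernel = np.ones(n) / n  # mean of squared samples over a sliding window
    return np.sqrt(np.convolve(x**2, kernel, mode="same"))

t = np.arange(0, 1, 1 / fs)
emg = 0.05 * rng.normal(size=t.size)        # baseline activity
emg[400:600] += 0.5 * rng.normal(size=200)  # simulated swallowing burst
env = moving_rms(emg, fs)
print(f"rest RMS ~ {env[:300].mean():.3f}, burst RMS ~ {env[450:550].mean():.3f}")
```

The envelope rises sharply during the burst, which is the property that makes RMS useful for marking muscle activation onsets during swallowing.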

A Study on the Selection of Parameter Values of FUSION Software for Improving Airborne LiDAR DEM Accuracy in Forest Area (산림지역에서의 LiDAR DEM 정확도 향상을 위한 FUSION 패러미터 선정에 관한 연구)

  • Cho, Seungwan;Park, Joowon
    • Journal of Korean Society of Forest Science
    • /
    • v.106 no.3
    • /
    • pp.320-329
    • /
    • 2017
  • This study evaluates whether the accuracy of a LiDAR DEM is affected by changes across five input levels ('1', '3', '5', '7', and '9') of the median parameter ($F_{md}$) and mean parameter ($F_{mn}$) of the Filtering Algorithm (FA) in the GroundFilter module, and of the median parameter ($I_{md}$) and mean parameter ($I_{mn}$) of the Interpolation Algorithm (IA) in the GridSurfaceCreate module of FUSION, in order to identify the combination of parameter levels producing the most accurate LiDAR DEM. Accuracy is measured by residuals, calculated as the difference between field-surveyed elevation values and the corresponding DEM elevation values. A multi-way ANOVA is used to statistically examine whether parameter level changes affect the means of the residuals, with the Tukey HSD as a post-hoc test. The multi-way ANOVA results show that changes in the levels of $F_{md}$, $F_{mn}$, and $I_{mn}$ have significant effects on DEM accuracy, with a significant interaction between $F_{md}$ and $F_{mn}$. Therefore, the levels of $F_{md}$ and $F_{mn}$ and their interaction, as well as the level of $I_{mn}$, are considered factors affecting the accuracy of the LiDAR DEM. According to the Tukey HSD test on the combined levels of $F_{md}{\ast}F_{mn}$, the '$9{\ast}3$' combination yields the highest accuracy while the '$1{\ast}1$' combination yields the lowest. Regarding $I_{mn}$, levels '1' and '3' both yield the highest accuracy. This study can contribute to improving the accuracy of forest attributes as well as topographic information extracted from LiDAR data.
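The accuracy metric used above (residual = field-surveyed elevation minus DEM elevation) can be sketched as follows. The check-point values are hypothetical, invented only to show the computation; the study's actual survey data are not reproduced here.

```python
import numpy as np

def dem_residual_stats(field_z, dem_z):
    """Residuals between field-surveyed elevations and DEM elevations."""
    r = np.asarray(field_z) - np.asarray(dem_z)
    return {"mean": r.mean(), "rmse": np.sqrt((r**2).mean())}

# hypothetical check points (m): field-surveyed vs. DEM-sampled elevations
field = [312.4, 298.1, 305.7, 320.2]
dem = [312.9, 297.6, 306.3, 319.8]
stats = dem_residual_stats(field, dem)
print(f"mean residual: {stats['mean']:+.2f} m, RMSE: {stats['rmse']:.2f} m")
```

A mean residual near zero with a small RMSE is what the '$9{\ast}3$' parameter combination achieved relative to the other combinations tested.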

Evaluation to Obtain the Image According to the Spatial Domain Filtering of Various Convolution Kernels in the Multi-Detector Row Computed Tomography (MDCT에서의 Convolution Kernel 종류에 따른 공간 영역 필터링의 영상 평가)

  • Lee, Hoo-Min;Yoo, Beong-Gyu;Kweon, Dae-Cheol
    • Journal of radiological science and technology
    • /
    • v.31 no.1
    • /
    • pp.71-81
    • /
    • 2008
  • Our objective was to evaluate spatial-domain filtered images as an alternative to additional image reconstruction with different kernels in MDCT. Images derived from thin collimated source data were generated for a water phantom and the abdomen using the B10 (very smooth), B20 (smooth), B30 (medium smooth), B40 (medium), B50 (medium sharp), B60 (sharp), B70 (very sharp), and B80 (ultra sharp) kernels. MTF and spatial resolution were measured for the various convolution kernels, and quantitative CT attenuation coefficient and noise measurements provided comparable HU (Hounsfield unit) values. CT attenuation coefficients (mean HU) were $1.1{\sim}1.8\;HU$ in water and $-998{\sim}-1000\;HU$ in air, with noise of $5.4{\sim}44.8\;HU$ in water and $3.6{\sim}31.4\;HU$ in air. In abdominal fat, the CT attenuation coefficient was $-2.2{\sim}0.8\;HU$ with noise of $10.1{\sim}82.4\;HU$; in abdominal muscle, $53.3{\sim}54.3\;HU$ with noise of $10.4{\sim}70.7\;HU$; and in the liver parenchyma, $60.4{\sim}62.2\;HU$ with noise of $7.6{\sim}63.8\;HU$. Images scanned with a sharp convolution kernel (B80) showed increased noise, whereas the CT attenuation coefficients remained comparable. Modifying image sharpness and noise in the spatial domain could eliminate the need for reconstruction with different kernels in the future. Adjusting the various CT kernels, chosen to suit the examination being performed, may control CT image quality and increase diagnostic accuracy.
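The two quantities reported throughout the abstract, mean CT number and noise, are simply the mean and standard deviation of HU values inside a region of interest. A minimal sketch on a synthetic water-phantom slice (the HU offset and noise level are assumptions, not the study's measurements):

```python
import numpy as np

def roi_stats(image_hu, row, col, size=10):
    """Mean CT number and noise (SD) inside a square ROI of a HU image."""
    roi = image_hu[row:row + size, col:col + size]
    return roi.mean(), roi.std()

# synthetic water-phantom slice: ~1.5 HU offset with Gaussian noise
rng = np.random.default_rng(42)
water = rng.normal(loc=1.5, scale=5.0, size=(256, 256))
mean_hu, noise = roi_stats(water, 100, 100, size=20)
print(f"mean HU: {mean_hu:.1f}, noise (SD): {noise:.1f} HU")
```

Sharper kernels raise the SD (noise) in such an ROI while leaving the mean HU nearly unchanged, which matches the pattern reported for the B10 through B80 kernels.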


Quantitative Conductivity Estimation Error due to Statistical Noise in Complex $B_1{^+}$ Map (정량적 도전율측정의 오차와 $B_1{^+}$ map의 노이즈에 관한 분석)

  • Shin, Jaewook;Lee, Joonsung;Kim, Min-Oh;Choi, Narae;Seo, Jin Keun;Kim, Dong-Hyun
    • Investigative Magnetic Resonance Imaging
    • /
    • v.18 no.4
    • /
    • pp.303-313
    • /
    • 2014
  • Purpose: In-vivo conductivity reconstruction using the transmit field ($B_1{^+}$) information of MRI has been proposed. We assessed the accuracy of conductivity reconstruction in the presence of statistical noise in the complex $B_1{^+}$ map and provide a parametric model for the conductivity-to-noise ratio. Materials and Methods: The $B_1{^+}$ distribution was simulated for a cylindrical phantom model. By adding complex Gaussian noise to the simulated $B_1{^+}$ map, the quantitative conductivity estimation error was evaluated. The evaluation was repeated over several parameters such as Larmor frequency, object radius, and SNR of the $B_1{^+}$ map, and a parametric model for the conductivity-to-noise ratio was developed from these parameters. Results: According to the simulation results, conductivity estimation is more sensitive to statistical noise in the $B_1{^+}$ phase than to noise in the $B_1{^+}$ magnitude. The conductivity estimate of the object of interest does not depend on the external object surrounding it. The conductivity-to-noise ratio is proportional to the signal-to-noise ratio of the $B_1{^+}$ map, the Larmor frequency, the conductivity value itself, and the number of averaged pixels. To estimate the conductivity of a target tissue accurately, the SNR of the $B_1{^+}$ map and an adequate filtering size have to be taken into account in the reconstruction process. The simulation results were also verified on a conventional 3T MRI scanner. Conclusion: Through these relationships, the quantitative conductivity estimation error due to statistical noise in the $B_1{^+}$ map is modeled. Using this model, further issues regarding filtering and reconstruction algorithms can be investigated for MREPT.
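The proportionality stated in the results can be written down as a toy function. The functional form below takes the abstract's statement at face value (a simple product of the four factors); the true model in the paper may include exponents or constants not given here, so only ratios of outputs are meaningful.

```python
def conductivity_cnr(snr_b1, f_larmor_hz, sigma, n_pixels, k=1.0):
    """Relative conductivity-to-noise ratio following the proportionality
    reported above: CNR ~ SNR(B1+) * f_Larmor * sigma * N_pixels.
    k is an unknown scale factor, so only ratios of outputs are meaningful."""
    return k * snr_b1 * f_larmor_hz * sigma * n_pixels

# doubling the B1+ SNR doubles the predicted CNR, all else being equal
base = conductivity_cnr(50, 128e6, 0.5, 25)
doubled = conductivity_cnr(100, 128e6, 0.5, 25)
print(doubled / base)  # → 2.0
```

Such a model makes the trade-off explicit: a larger averaging (filtering) window raises the CNR but lowers spatial resolution, which is why the abstract stresses choosing an adequate filtering size.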

The Flow-rate Measurements in a Multi-phase Flow Pipeline by Using a Clamp-on Sealed Radioisotope Cross Correlation Flowmeter (투과 감마선 계측신호의 Cross correlation 기법 적용에 의한 다중상 유체의 유량측정)

  • Kim, Jin-Seop;Kim, Jong-Bum;Kim, Jae-Ho;Lee, Na-Young;Jung, Sung-Hee
    • Journal of Radiation Protection and Research
    • /
    • v.33 no.1
    • /
    • pp.13-20
    • /
    • 2008
  • The flow rates in a multi-phase flow pipeline were evaluated quantitatively by means of clamp-on sealed radioisotope sources and a cross-correlation signal processing technique. The flow rates were calculated by determining the transit time between two sealed gamma sources using a cross-correlation function after FFT filtering, then corrected for the vapor fraction in the pipeline, which was measured by the ${\gamma}$-ray attenuation method. The pipeline model was manufactured from acrylic resin (ID 8 cm, L = 3.5 m, t = 10 mm), and multi-phase flow patterns were produced by injecting compressed $N_2$ gas. Two sealed gamma sources of $^{137}Cs$ (E = 0.662 MeV, ${\Gamma}$ $factor=0.326\;R{\cdot}h^{-1}{\cdot}m^2{\cdot}Ci^{-1}$), of 20 mCi and 17 mCi, and $2"{\times}2"$ NaI(Tl) scintillation counters (Eberline, SP-3) were used for this study. Under the given conditions (distance between the two sources: 4D, where D is the inner diameter; N/S ratio: $0.12{\sim}0.15$; sampling time ${\Delta}t$: 4 msec), the measured flow rates showed a maximum relative error of 1.7 % compared to the real ones after vapor content corrections ($6.1\;%{\sim}9.2\;%$). A subsequent experiment showed that the closer the two sealed sources are to each other, the more precise the measured flow rates. Provided additional studies on the selection of radioisotopes and their activity, and on the optimization of the experimental geometry, are carried out, it is anticipated that this radioisotope application for flow rate measurement can serve as an important tool for monitoring multi-phase facilities in the petrochemical and refinery industries and contribute economically to their maintenance and control.
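The core of the transit-time method can be sketched with synthetic detector signals. The geometry below reuses the abstract's Δt = 4 ms and 4D source spacing (0.32 m for D = 8 cm), but the signals and the resulting velocity are fabricated for illustration only.

```python
import numpy as np

fs = 250.0       # sampling rate in Hz, matching the abstract's dt = 4 ms
dt = 1.0 / fs
spacing = 0.32   # detector spacing in m (4D with D = 8 cm, from the abstract)

# synthetic detector signals: downstream is a delayed, noisier copy
rng = np.random.default_rng(0)
n = 4096
upstream = rng.normal(size=n)
true_lag = 40                                        # samples (0.16 s transit)
downstream = np.roll(upstream, true_lag) + 0.1 * rng.normal(size=n)

# cross-correlation: the lag of the peak estimates the transit time
xcorr = np.correlate(downstream, upstream, mode="full")
lag = int(np.argmax(xcorr)) - (n - 1)
transit = lag * dt
velocity = spacing / transit
print(f"transit time: {transit * 1000:.0f} ms, velocity: {velocity:.2f} m/s")
```

Multiplying the recovered velocity by the effective flow cross-section (corrected for vapor fraction, as in the study) yields the volumetric flow rate.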

A Study on the Effect of Using Sentiment Lexicon in Opinion Classification (오피니언 분류의 감성사전 활용효과에 대한 연구)

  • Kim, Seungwoo;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.133-148
    • /
    • 2014
  • Recently, with the advent of various information channels, the amount of available information has continued to grow. The main cause of this phenomenon is the significant increase in unstructured data, as smart devices enable users to create data in the form of text, audio, images, and video. Among the various types of unstructured data, users' opinions and a variety of other information are clearly expressed in text data such as news, reports, papers, and articles. Thus, active attempts have been made to create new value by analyzing these texts. The representative techniques used in text analysis are text mining and opinion mining. These share certain important characteristics: for example, they not only use text documents as input data, but also apply many natural language processing techniques such as filtering and parsing. Opinion mining is therefore usually recognized as a sub-concept of text mining, and in many cases the two terms are used interchangeably in the literature. Suppose that the purpose of a certain classification analysis is to predict the positive or negative opinion contained in some documents. If we focus on the classification process, the analysis can be regarded as a traditional text mining case. However, if we observe that the target of the analysis is a positive or negative opinion, the analysis can be regarded as a typical example of opinion mining. In other words, two methods (i.e., text mining and opinion mining) are available for opinion classification, and in order to distinguish between them, a precise definition of each is needed. In this paper, we found it very difficult to distinguish clearly between the two methods with respect to the purpose of analysis and the type of results. We conclude that the most definitive criterion for distinguishing text mining from opinion mining is whether the analysis utilizes any kind of sentiment lexicon.
We first established two prediction models, one based on opinion mining and the other on text mining. Next, we compared the main processes used by the two prediction models, and finally we compared their prediction accuracy on 2,000 movie reviews. The results revealed that the prediction model based on opinion mining showed higher average prediction accuracy than the text mining model. Moreover, in the lift chart generated by the opinion-mining-based model, prediction accuracy for documents with strong certainty was higher than for documents with weak certainty. Above all, opinion mining has a meaningful advantage in that it can reduce learning time dramatically, because a sentiment lexicon, once generated, can be reused in a similar application domain; in addition, the classification results can be clearly explained by referring to the sentiment lexicon. This study has two limitations. First, the experimental results cannot be generalized, mainly because the experiment is limited to a small number of movie reviews. Second, various parameters in the parsing and filtering steps of the text mining may have affected the accuracy of the prediction models. Nevertheless, this research contributes a performance comparison of text mining and opinion mining for opinion classification. In future research, a more precise evaluation of the two methods should be made through intensive experiments.
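The lexicon-based approach described above can be sketched in a few lines. The tiny English lexicon and whitespace tokenizer below are stand-ins for the study's actual Korean sentiment lexicon and its parsing/filtering pipeline, which are not reproduced here.

```python
# Minimal lexicon-based opinion scorer (illustrative only; the study's
# actual sentiment lexicon and parser are not reproduced here)
LEXICON = {"great": 1, "excellent": 1, "enjoyable": 1,
           "boring": -1, "terrible": -1, "dull": -1}

def classify_review(text):
    """Sum lexicon polarities of the tokens and map the score to a label."""
    tokens = text.lower().split()  # stand-in for real parsing/filtering
    score = sum(LEXICON.get(tok.strip(".,!?"), 0) for tok in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(classify_review("An excellent, enjoyable film."))    # positive
print(classify_review("Boring plot and terrible acting.")) # negative
```

Because the lexicon is built once and reused, no per-domain training pass is needed, which is the learning-time advantage the study attributes to opinion mining, and each prediction is explainable by listing the matched lexicon entries.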