• Title/Summary/Keyword: 입력신호 (input signal)

Search Results: 2,707 (processing time 0.03 seconds)

Artifact Reduction in Sparse-view Computed Tomography Image using Residual Learning Combined with Wavelet Transformation (Wavelet 변환과 결합한 잔차 학습을 이용한 희박뷰 전산화단층영상의 인공물 감소)

  • Lee, Seungwan
    • Journal of the Korean Society of Radiology / v.16 no.3 / pp.295-302 / 2022
  • Sparse-view computed tomography (CT) imaging is able to reduce radiation dose, ensure uniform image characteristics among projections and suppress noise. However, images reconstructed by the sparse-view CT imaging technique suffer from severe artifacts, which distort image quality and internal structures. In this study, we proposed a convolutional neural network (CNN) with wavelet transformation and residual learning for reducing artifacts in sparse-view CT images, and the performance of the trained model was quantitatively analyzed. The CNN consisted of wavelet transformation, convolutional and inverse wavelet transformation layers; the input and output images were configured as sparse-view CT images and residual images, respectively. For training the CNN, the loss function was calculated using the mean squared error (MSE), and Adam was used as the optimizer. Result images were obtained by subtracting the residual images predicted by the trained model from the sparse-view CT images. The quantitative accuracy of the result images was measured in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The results showed that the trained model effectively reduces artifacts in sparse-view CT images while also improving the spatial resolution of the result images. In addition, the trained model increased the PSNR and SSIM by 8.18% and 19.71%, respectively, in comparison to a model trained without wavelet transformation and residual learning. Therefore, the imaging model proposed in this study can restore the image quality of sparse-view CT images by reducing artifacts and improving spatial resolution and quantitative accuracy.
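The restoration step described in the abstract, subtracting the network-predicted residual from the sparse-view input and scoring the result with PSNR, can be sketched in a few lines of NumPy (a toy illustration with synthetic arrays, not the paper's CNN):

```python
import numpy as np

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

def restore(sparse_view, predicted_residual):
    """Result image = sparse-view input minus the network-predicted residual."""
    return sparse_view - predicted_residual

# Toy data: a clean image plus a constant artifact pattern.
clean = np.linspace(0.0, 1.0, 16).reshape(4, 4)
artifact = 0.1 * np.ones_like(clean)
sparse = clean + artifact
result = restore(sparse, artifact)   # a perfect residual prediction
```

With a perfect residual prediction the result matches the clean image; in practice the CNN's prediction error sets the achievable PSNR/SSIM gain.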

Time Series Data Analysis and Prediction System Using PCA (주성분 분석 기법을 활용한 시계열 데이터 분석 및 예측 시스템)

  • Jin, Young-Hoon; Ji, Se-Hyun; Han, Kun-Hee
    • Journal of the Korea Convergence Society / v.12 no.11 / pp.99-107 / 2021
  • We live in a myriad of data. Data are created in every situation in which we work, and we discover their meaning through big data technology. Many efforts are underway to find meaningful data. This paper introduces an analysis technique, based on principal component analysis, that enables better choices through the trend and prediction of time series data. Principal component analysis constructs a covariance matrix from the input data and yields the eigenvectors and eigenvalues from which the direction of the data can be inferred. The proposed method computes a reference axis for a time series data set with similar directionality. It predicts the directionality of the data in the next section through the angle between the reference axis and the directionality of each time series constituting the data set. In this paper, we compare and verify the accuracy of the proposed algorithm against LSTM (Long Short-Term Memory) on cryptocurrency trends. In this comparison, the proposed method recorded relatively few transactions and high returns (112%) compared to LSTM on data with high volatility. This suggests that the signal was analyzed and predicted relatively accurately, and better results are expected through more accurate threshold settings.
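The core PCA step the abstract describes, building a covariance matrix, taking its leading eigenvector as the data's direction, and measuring the angle to a reference axis, might be sketched like this (a minimal NumPy illustration; the variable names are mine, not the paper's):

```python
import numpy as np

def principal_direction(window):
    """Leading eigenvector of the covariance of a (samples x features) window."""
    cov = np.cov(window, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh sorts eigenvalues ascending
    return eigvecs[:, -1]                    # direction of maximum variance

def angle_to_reference(direction, reference):
    """Angle (radians) between a direction and the reference axis, sign-agnostic."""
    cosine = abs(np.dot(direction, reference) /
                 (np.linalg.norm(direction) * np.linalg.norm(reference)))
    return np.arccos(np.clip(cosine, 0.0, 1.0))

# Toy window: points lying exactly along the direction (1, 2).
t = np.arange(50.0)
direction = principal_direction(np.column_stack([t, 2.0 * t]))
angle = angle_to_reference(direction, np.array([1.0, 0.0]))   # arctan(2)
```

The absolute value in the cosine makes the measure insensitive to the sign ambiguity of eigenvectors.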

Statistical Techniques to Detect Sensor Drifts (센서드리프트 판별을 위한 통계적 탐지기술 고찰)

  • Seo, In-Yong; Shin, Ho-Cheol; Park, Moon-Ghu; Kim, Seong-Jun
    • Journal of the Korea Society for Simulation / v.18 no.3 / pp.103-112 / 2009
  • In a nuclear power plant (NPP), periodic sensor calibrations are required to assure that sensors are operating correctly. However, only a few of the calibrated sensors are actually found to be faulty. For the safe operation of an NPP and the reduction of unnecessary calibration, on-line calibration monitoring is needed. In this paper, principal component-based auto-associative support vector regression (PCSVR) was proposed for sensor signal validation in the NPP. It combines the attractive merits of principal component analysis (PCA) for extracting predominant feature vectors with AASVR, which easily represents complicated processes that are difficult to model analytically or mechanistically. Using real plant startup data from the Kori Nuclear Power Plant Unit 3, the SVR hyperparameters were optimized by response surface methodology (RSM). Moreover, statistical techniques were integrated with PCSVR for failure detection. The residuals between the estimated and measured signals are tested by the Shewhart control chart, the exponentially weighted moving average (EWMA), the cumulative sum (CUSUM) and the generalized likelihood ratio test (GLRT) to detect whether the sensors have failed. This study shows that the GLRT is a candidate for the detection of sensor drift.
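Of the control charts listed, the EWMA test on residuals is the easiest to sketch. The following is a minimal illustration; the smoothing weight `lam` and limit width `L` are common textbook defaults, not values from the paper:

```python
import numpy as np

def ewma_alarm(residuals, lam=0.2, L=3.0):
    """EWMA control chart on a residual sequence; returns the index of the
    first out-of-limit sample, or -1 if none. lam and L are illustrative
    defaults, not values from the paper."""
    sigma = np.std(residuals[:20])                  # baseline noise estimate
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))  # steady-state control limit
    z = 0.0
    for i, r in enumerate(residuals):
        z = lam * r + (1.0 - lam) * z               # exponentially weighted mean
        if abs(z) > limit:
            return i
    return -1

# Baseline of alternating +/-0.1 noise, then a sustained 0.5 drift at sample 20.
residuals = np.array([0.1 * (-1) ** n for n in range(20)] + [0.5] * 20)
alarm_at = ewma_alarm(residuals)   # flags the drift shortly after it begins
```

The exponential weighting makes the chart sensitive to small sustained drifts while filtering out single-sample spikes, which is exactly the failure mode of slowly drifting sensors.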

Automated Analysis for PDC-R Technique by Multiple Filtering (다중필터링에 의한 PDC-R 기법의 자동화 해석)

  • Joh, Sung-Ho; Rahman, Norinah Abd; Hassanul, Raja
    • KSCE Journal of Civil and Environmental Engineering Research / v.30 no.3C / pp.141-148 / 2010
  • Electrical noises such as self potential, burst noise and 60-Hz power-line noise are among the causes that reduce the reliability of electrical resistivity surveys. Even the recently developed PDC-R (Pseudo DC Resistivity) technique suffers from low reliability due to electrical noises. That is, both DC-based and AC-based resistivity techniques are subject to reliability problems due to the electrical noises embedded in urban geotechnical sites. In this research, a new technique was proposed to enhance the reliability of the PDC-R technique by minimizing the influence of electrical noises. In addition, an automated procedure was proposed to facilitate the data analysis and interpretation of PDC-R measurements. The proposed technique is composed of two steps: first, extracting only the information related to the input current by means of a multiple-filter technique, and second, sorting out only the signal information that shows stable and reliable characteristics. The automated procedure was verified with a synthetic harmonic wave including DC shift, burst random noises and 60-Hz electrical noises. The procedure was also applied to site investigations in urban areas to prove its feasibility and accuracy.
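The first step, extracting only the component related to the input current while rejecting DC shift and 60-Hz noise, can be illustrated with a simple FFT-based narrowband filter. This is an assumption about the flavor of the filtering; the paper's actual multiple-filter bank may differ:

```python
import numpy as np

def component_at(signal, fs, freq):
    """One-sided amplitude of the FFT bin nearest `freq` (Hz)."""
    n = len(signal)
    spectrum = np.fft.rfft(signal) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - freq))
    return 2.0 * np.abs(spectrum[k])

fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)                 # 1 s window -> 1 Hz bins
measured = (0.8 * np.sin(2 * np.pi * 5 * t)       # response to input current (5 Hz)
            + 0.5 * np.sin(2 * np.pi * 60 * t)    # 60-Hz power-line noise
            + 0.3)                                # DC shift (self potential)
amp = component_at(measured, fs, 5.0)             # ~0.8: noise and DC rejected
```

Because the window length is an integer number of cycles of both tones, each component falls into a single bin and the input-current amplitude is recovered cleanly.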

Implementation of Parallel Processor for Sound Synthesis of Guitar (기타의 음 합성을 위한 병렬 프로세서 구현)

  • Choi, Ji-Won; Kim, Yong-Min; Cho, Sang-Jin; Kim, Jong-Myon; Chong, Ui-Pil
    • The Journal of the Acoustical Society of Korea / v.29 no.3 / pp.191-199 / 2010
  • Physical modeling is a synthesis method that produces high-quality sound similar to that of real musical instruments. However, since physical modeling requires a lot of parameters to synthesize the sound of a musical instrument, it prevents real-time processing for instruments that support a large number of simultaneous sounds. To solve this problem, this paper proposes a single instruction multiple data (SIMD) parallel processor that supports real-time sound synthesis for the guitar, a representative plucked-string instrument. To control the six strings of the guitar, we used a SIMD parallel processor consisting of six processing elements (PEs), each of which models the corresponding string. The proposed SIMD processor generates the synthesized sounds of all six strings simultaneously when the parallel synthesis algorithm receives the excitation signals and parameters of each string as input. Experimental results using a 44.1 kHz sampling rate and 16-bit quantization indicate that the sounds synthesized by the proposed parallel processor were very similar to the original sounds. In addition, the proposed parallel processor outperforms TI's commercial TMS320C6416 in terms of execution time (8.9x better) and energy efficiency (39.8x better).
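The per-string synthesis each PE performs is a form of plucked-string physical modeling. A classic single-string sketch is the Karplus-Strong algorithm below; this is an illustrative stand-in, since the abstract does not name the exact string model, and the string frequency `f0` and decay constant are assumptions:

```python
import numpy as np

def karplus_strong(excitation, num_samples, decay=0.996):
    """Single plucked-string model: a delay line whose output is fed back
    through an averaging lowpass filter with slight decay."""
    buf = list(excitation)                     # delay line seeded by excitation
    out = np.empty(num_samples)
    for i in range(num_samples):
        out[i] = buf[0]
        buf.append(decay * 0.5 * (buf[0] + buf[1]))   # lowpass + decay feedback
        buf.pop(0)
    return out

fs = 44100                                     # sampling rate used in the paper
f0 = 329.63                                    # high-E string frequency (assumption)
rng = np.random.default_rng(0)
excitation = rng.uniform(-1.0, 1.0, int(fs / f0))   # noise-burst excitation
tone = karplus_strong(excitation, fs)          # one second of string tone
```

The delay-line length (about fs/f0 samples) sets the pitch; running six such loops, one per PE with its own excitation and parameters, mirrors the paper's six-string parallelization.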

Analysis of Color Distortion in Hazy Images (안개가 포함된 영상에서의 색 왜곡 특성 분석)

  • JeongYeop Kim
    • Journal of Platform Technology / v.11 no.6 / pp.68-78 / 2023
  • In this paper, the color distortion in images with haze is analyzed. When haze is included in the scene, the color signal reflected from the scene is distorted by the transmittance associated with the haze component. When the influence of haze is removed by a conventional de-hazing method, the color distortion tends not to be sufficiently resolved. Khoury et al. used the dark channel prior, a haze model mentioned in many studies, to determine the degree of color distortion. However, only the tendency of distortion, such as color error values, was confirmed, and no specific analysis of the color distortion was performed. This paper analyzes the characteristics of the color distortion and proposes a restoration method that can reduce it. The input images of the databases used by Khoury et al. include the Macbeth color checker, a standard color tool. Using the Macbeth color checker's color values, the color distortion according to changes in haze concentration was analyzed, and a new color distortion model was proposed through modeling. The proposed method obtains a mapping function using the step-by-step change in chromaticity according to the haze concentration and the ground-truth colors. Since the form of the color distortion varies from step to step in proportion to the haze concentration, it is necessary to obtain an integrated mapping function that operates stably at all stages. In this paper, the improvement in color distortion achieved by the proposed method was estimated in terms of angular error, and an improvement of about 15% compared to the conventional method was verified.
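The haze model underlying the dark channel prior is I = J*t + A*(1 - t), where J is the scene radiance, t the transmittance and A the atmospheric light. Inverting it for J can be sketched as follows (toy values, not the paper's data):

```python
import numpy as np

def dehaze(hazy, A, t, t0=0.1):
    """Invert the haze model I = J*t + A*(1 - t) for the scene radiance J.
    t is floored at t0 to avoid amplifying noise where haze is dense."""
    t = np.maximum(t, t0)
    return (hazy - A) / t + A

# Toy example: synthesize a hazy observation, then invert it exactly.
J = np.array([0.2, 0.5, 0.8])   # true scene color
A = 1.0                          # atmospheric light
t = 0.6                          # transmittance
I = J * t + A * (1.0 - t)        # hazy observation
recovered = dehaze(I, A, t)      # equals J up to rounding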


A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik; Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.147-168 / 2017
  • Accurate stock market forecasting has long been studied in academia, and various forecasting models using diverse techniques now exist. Recently, many attempts have been made to predict stock indices using machine learning methods, including deep learning. While both fundamental analysis and technical analysis are used in traditional stock investment, technical analysis is more suitable for short-term prediction and for statistical and mathematical techniques. Most studies using technical indicators have modeled stock price prediction as a binary classification (rising or falling) of future market movements, usually for the next trading day. However, such binary classification has many unfavorable aspects for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we predict the stock index by expanding the existing binary scheme into a multi-class system of stock index trends (upward, sideways, downward). To solve this multi-classification problem, rather than relying on techniques such as multinomial logistic regression (MLOGIT), multiple discriminant analysis (MDA) or artificial neural networks (ANN), we propose an optimization model that uses a genetic algorithm as a wrapper to improve the performance of multi-class support vector machines (MSVM), which have proved superior in prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize performance by optimizing not only the kernel function parameters of the MSVM, but also the selection of input variables (feature selection) and of training instances (instance selection).
To verify the performance of the proposed model, we applied it to real data: the trend of Korea's KOSPI200 stock index. The results show that the proposed method is more effective than the conventional multi-class SVM, which had been known to show the best prediction performance, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT and CBR. In particular, it was confirmed that instance selection plays a very important role in predicting the stock index trend, contributing more to the improvement of the model than other factors. Our research primarily aims at predicting trend segments to capture signal-acquisition or short-term trend-transition points. The experimental data set includes technical indicators such as the price and volatility of the KOSPI200 stock index in Korea (2004-2017) and macroeconomic data (interest rates, exchange rates, S&P 500, etc.). Using statistical methods including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, took three states: 1 (upward), 0 (sideways) and -1 (downward). For each class, 70% of the data was used for training and the remaining 30% for verification. For comparison, experiments with MDA, MLOGIT, CBR, ANN and MSVM models were conducted. The MSVM adopted the one-against-one (OAO) approach, known as the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed GA-MSVM model performs at a significantly higher level than all comparative models.
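The GA-as-wrapper idea, a chromosome that jointly encodes a feature mask and an instance mask and is evolved against a model's accuracy, can be sketched as follows. The fitness function here is a toy stand-in for the MSVM hit ratio, and all GA parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
N_FEATURES, N_INSTANCES = 15, 100   # 15 candidate indicators, toy instance count

def fitness(chrom):
    """Toy stand-in for the MSVM hit ratio: rewards keeping features 0-4 and
    at least half of the training instances (purely illustrative)."""
    feats, insts = chrom[:N_FEATURES], chrom[N_FEATURES:]
    return feats[:5].sum() - 0.2 * feats[5:].sum() + 0.5 * (insts.mean() >= 0.5)

def evolve(pop, generations=30, p_mut=0.02):
    """One GA wrapper loop: truncation selection, one-point crossover, mutation."""
    for _ in range(generations):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)][len(pop) // 2:]   # keep the best half
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, a.size)                    # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(child.size) < p_mut          # bit-flip mutation
            children.append(child)
        pop = np.concatenate([parents, np.array(children)])
    return pop[np.argmax([fitness(c) for c in pop])]

# Each chromosome is a boolean mask over 15 features plus 100 instances.
pop = rng.random((40, N_FEATURES + N_INSTANCES)) < 0.5
initial_best = max(fitness(c) for c in pop)
best = evolve(pop)   # elitist: the best chromosome is never discarded
```

In GA-MSVM the fitness evaluation would train and score an MSVM on the masked features and instances; everything else in the loop stays the same.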

An Area-Efficient Time-Shared 10b DAC for AMOLED Column Driver IC Applications (AMOLED 컬럼 구동회로 응용을 위한 시분할 기법 기반의 면적 효율적인 10b DAC)

  • Kim, Won-Kang; An, Tai-Ji; Lee, Seung-Hoon
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.5 / pp.87-97 / 2016
  • This work proposes a time-shared 10b DAC based on a two-step resistor string to minimize the effective area of a DAC channel for driving each AMOLED display column. The proposed DAC shows a smaller effective DAC area per unit column driver and a faster conversion speed than conventional DACs by employing a time-shared DEMUX and a ROM-based two-step decoder of 6b and 4b in the first and second resistor strings. In the second-stage 4b floating resistor string, a simple current source rather than a unity-gain buffer decreases the loading effect and chip area of a DAC channel and eliminates the offset mismatch between channels caused by buffer amplifiers. The proposed 1-to-24 DEMUX enables a single DAC channel to drive 24 columns sequentially with a single-phase clock and a 5b binary counter. A 0.9 pF sampling capacitor and a small source follower in the input stage of each column-driving buffer amplifier decrease the effect of channel charge injection and improve the output settling accuracy of the buffer amplifier while using the top-plate sampling scheme in the proposed DAC. The proposed DAC in a 0.18 µm CMOS shows a signal settling time of 62.5 ns during code transitions from '000' to '3FF' (hexadecimal). The prototype DAC occupies a unit channel area of 0.058 mm² and an effective unit channel area of 0.002 mm², while consuming 6.08 mW from analog and digital power supplies of 3.3 V and 1.8 V, respectively.
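A behavioral model of the two-step decode, a 6b coarse string selecting one of 64 segments plus a 4b fine string interpolating within it, can be sketched in a few lines. This is a digital idealization that ignores the analog effects (loading, offset, charge injection) the paper addresses:

```python
def two_step_dac(code, vref=1.0):
    """Behavioral two-step resistor-string 10b DAC: a 6b coarse string selects
    one of 64 segments and a 4b fine string interpolates within it."""
    assert 0 <= code < 1024
    coarse, fine = code >> 4, code & 0xF        # 6b MSBs, 4b LSBs
    segment = vref / 64.0                       # coarse step size
    return coarse * segment + fine * segment / 16.0

# Monotonic by construction: each fine step is exactly 1/16 of a coarse step,
# so 1 LSB = vref / 1024 everywhere, including across segment boundaries.
lsb = two_step_dac(1) - two_step_dac(0)
```

Splitting 10 bits into 6 + 4 keeps the resistor count near 64 + 16 rather than 1024, which is the area saving the two-step architecture trades for the extra decode step.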

Local Shape Analysis of the Hippocampus using Hierarchical Level-of-Detail Representations (계층적 Level-of-Detail 표현을 이용한 해마의 국부적인 형상 분석)

  • Kim Jeong-Sik; Choi Soo-Mi; Choi Yoo-Ju; Kim Myoung-Hee
    • The KIPS Transactions:PartA / v.11A no.7 s.91 / pp.555-562 / 2004
  • Both global volume reduction and local shape changes of the hippocampus within the brain indicate abnormal neurological states. Hippocampal shape analysis consists of two main steps: first, constructing a representation model of the hippocampal shape; second, computing a shape similarity from this representation. This paper proposes a novel method for analyzing hippocampal shape using an integrated octree-based representation containing meshes, voxels and skeletons. First of all, we create multi-level meshes by applying the Marching Cubes algorithm to the hippocampal region segmented from MR images. This model is converted to an intermediate binary voxel representation, and we extract the 3D skeleton from these voxels using a slice-based skeletonization method. Then, to acquire a multiresolution shape representation, we hierarchically store the meshes, voxels and skeletons in the nodes of the octree, and we extract sample meshes using a ray-tracing-based mesh sampling technique. Finally, as similarity measures between shapes, we compute the L2 norm and the Hausdorff distance for each sampled mesh pair by shooting rays fired from the extracted skeleton. As we use a mouse-picking interface for analyzing local shapes interactively, we provide interaction- and multiresolution-based analysis of local shape changes. Our experiments show that the approach is robust to rotation and scale, is especially effective in discriminating changes between local shapes of the hippocampus, and moreover increases the speed of analysis without degrading accuracy thanks to the hierarchical level-of-detail approach.
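The Hausdorff distance used as one of the similarity measures can be computed directly for two point sets. The sketch below is a brute-force NumPy version over sampled points, not the paper's ray-shooting procedure:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets (n x d arrays)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise dists
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy 3D point sets: the farthest "unmatched" point dominates the distance.
A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
dist = hausdorff(A, B)   # 2.0: B's point (3,0,0) is 2 away from its nearest in A
```

Unlike the mean (L2-type) error, the Hausdorff distance is driven by the single worst-matched point, which is why it is sensitive to localized shape changes.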

Evaluation of Magnetization Transfer Ratio Imaging by Phase Sensitive Method in Knee Joint (슬관절 부위에서 자화전이 위상감도법에 의한 자화전이율 영상 평가)

  • Yoon, Moon-Hyun; Seung, Mi-Sook; Choe, Bo-Young
    • Progress in Medical Physics / v.19 no.4 / pp.269-275 / 2008
  • Although MR imaging is generally applicable to depicting knee joint deterioration, common knee joint diseases are sometimes misread and misdiagnosed. In this study, we employed the magnetization transfer ratio (MTR) method to improve the diagnosis of various knee joint diseases. Spin-echo (SE) T2-weighted images (TR/TE 3,400-3,500/90-100 ms) were obtained in seven cases of knee joint deterioration; FSE T2-weighted images (TR/TE 4,500-5,000/100-108 ms) were obtained in seven cases; and gradient-echo (GRE) T2-weighted images (TR/TE 9/4.56 ms, 50° flip angle, NEX 1) were obtained in three cases. In six cases, fat suppression was performed using a T2-weighted short-TI inversion recovery (STIR) sequence (TR/TE 2,894-3,215 ms/70 ms, NEX 3, ETL 9). The MTR of each pixel was calculated after registration of the unsaturated and saturated images. After processing, the MTR images were displayed in gray scale. To improve diagnosis, three-dimensional isotropic volume images, MR tristimulus color mapping and the MTR map were employed. The MTR images showed diagnostic image quality for assessing the patients' pathologies. The intensity difference between the MTR images and conventional MRI was seen on the color bar. The profile graph of the MT imaging effect provided a quantitative measure of the relative decrease in signal intensity due to the MT pulse; to diagnose knee joint pathologies, the profile graph data were shown on the image as a small cross. The present study indicated that MTR imaging of the knee joint is feasible. Investigating the physical changes in MTR imaging gives more insight into the physical and technical basis of MTR imaging. MTR images could be useful for the rapid assessment of diseases that show unambiguous contrast in MT images of knee-disorder patients.
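The per-pixel MTR computation follows the standard definition MTR = (M0 - Msat) / M0, expressed as a percentage, where M0 and Msat are the registered unsaturated and MT-saturated intensities. A minimal sketch with toy intensities (zero-signal pixels masked to 0 is my choice, not necessarily the paper's handling):

```python
import numpy as np

def mtr_map(unsaturated, saturated):
    """Per-pixel magnetization transfer ratio: MTR = (M0 - Msat) / M0 * 100 (%).
    Pixels with no signal in the unsaturated image are set to 0."""
    m0 = np.asarray(unsaturated, dtype=float)
    msat = np.asarray(saturated, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        mtr = np.where(m0 > 0, (m0 - msat) / m0 * 100.0, 0.0)
    return mtr

# Toy 2x2 "images": registered unsaturated (M0) and MT-saturated intensities.
m0 = np.array([[200.0, 100.0], [0.0, 50.0]])
msat = np.array([[120.0, 80.0], [0.0, 50.0]])
mtr = mtr_map(m0, msat)   # 40% and 20% MTR in the top row; 0 elsewhere
```

The resulting array is what gets windowed and displayed in gray scale as the MTR image.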