• Title/Summary/Keyword: Signal Processing

Experimental study on structural integrity assessment of utility tunnels using coupled pulse-impact echo method (결합된 초음파-충격 반향 기법 기반의 일반 지하구 구조체의 건전도 평가에 관한 실험적 연구)

  • Jin Kim;Jeong-Uk Bang;Seungbo Shim;Gye-Chun Cho
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.25 no.6
    • /
    • pp.479-493
    • /
    • 2023
  • The need for safety management has arisen due to the increasing age of underground structures in operation, such as tunnels and utility tunnels, and accidents caused by these aging infrastructures. However, in the case of privately managed underground utility ducts, there is a lack of detailed guidelines for facility safety and maintenance, resulting in inadequate safety management. Furthermore, the absence of basic design information and the limited space available for safety assessments make applying currently used non-destructive testing methods challenging. Therefore, this study proposes non-destructive inspection methods using ultrasonic and impact-echo techniques to assess the quality of underground structures. Thickness, the presence of rebar, rebar depth, and the presence and depth of internal defects are assessed to provide fundamental data for the safety assessment of box-type general underground structures. To validate the proposed methodology, concrete specimens with different conditions are designed and cured to simulate actual field conditions. Applying ultrasonic and impact signals and collecting data through multi-channel accelerometers determines the thickness of the simulated specimens, the depth of embedded rebar, and the extent of defects. The predicted results agree well with the actual measurements. The proposed methodology is expected to contribute to the development of safety diagnostic methods applicable to general underground structures in practical field conditions.
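
As background for the thickness assessment described above: in the impact-echo method, slab thickness is commonly recovered from the dominant resonance frequency of the recorded response as T = β·Vp/(2·f_peak). The sketch below is a minimal, hypothetical illustration, not the authors' multi-channel implementation; the shape factor β ≈ 0.96 and the naive DFT peak search are assumptions.

```python
import math

def dominant_frequency(signal, fs):
    """Return the peak frequency (Hz) of a sampled signal via a naive DFT."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):  # skip DC, stop at Nyquist
        re = sum(signal[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(signal[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

def impact_echo_thickness(signal, fs, v_p, beta=0.96):
    """Impact-echo relation: thickness = beta * Vp / (2 * f_peak)."""
    return beta * v_p / (2.0 * dominant_frequency(signal, fs))
```

For a 0.3 m slab with Vp = 4000 m/s, the expected resonance is 0.96·4000/(2·0.3) = 6400 Hz, and the helper recovers 0.3 m from a signal dominated by that frequency.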

Comparative analysis of wavelet transform and machine learning approaches for noise reduction in water level data (웨이블릿 변환과 기계 학습 접근법을 이용한 수위 데이터의 노이즈 제거 비교 분석)

  • Hwang, Yukwan;Lim, Kyoung Jae;Kim, Jonggun;Shin, Minhwan;Park, Youn Shik;Shin, Yongchul;Ji, Bongjun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.209-223
    • /
    • 2024
  • In the context of the fourth industrial revolution, data-driven decision-making has increasingly become pivotal. However, the integrity of data analysis is compromised if data quality is not adequately ensured, potentially leading to biased interpretations. This is particularly critical for water level data, essential for water resource management, which often encounters quality issues such as missing values, spikes, and noise. This study addresses the challenge of noise-induced data quality deterioration, which complicates trend analysis and may produce anomalous outliers. To mitigate this issue, we propose a noise removal strategy employing the Wavelet Transform, a technique renowned for its efficacy in signal processing and noise elimination. The advantage of the Wavelet Transform lies in its operational efficiency: it reduces both time and costs, as it obviates the need for acquiring the true values of collected data. This study conducted a comparative performance evaluation between our Wavelet Transform-based approach and the Denoising Autoencoder, a prominent machine learning method for noise reduction. The findings demonstrate that the Coiflets wavelet function outperforms the Denoising Autoencoder across various metrics, including Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and Mean Squared Error (MSE). The superiority of the Coiflets function suggests that selecting an appropriate wavelet function tailored to the specific application environment can effectively address data quality issues caused by noise. This study underscores the potential of the Wavelet Transform as a robust tool for enhancing the quality of water level data, thereby contributing to the reliability of water resource management decisions.
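
The wavelet-denoising idea above can be sketched compactly. The paper uses Coiflets wavelet functions (in practice via a library such as PyWavelets); to keep this illustration self-contained, the stdlib-only sketch below substitutes a one-level Haar transform with soft thresholding, so the wavelet choice and the threshold value are assumptions, not the authors' settings.

```python
import math

def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    s = math.sqrt(2.0)
    approx = [(x[2 * i] + x[2 * i + 1]) / s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / s for i in range(len(x) // 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar DWT (perfect reconstruction)."""
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append((a + d) / s)
        x.append((a - d) / s)
    return x

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero by t (soft thresholding)."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def denoise(x, t):
    """Threshold the detail band, keep the approximation band."""
    a, d = haar_dwt(x)
    return haar_idwt(a, soft_threshold(d, t))
```

Small detail coefficients (treated as noise) are zeroed while the smooth approximation survives, which is the same mechanism the Coiflets-based approach exploits at multiple scales.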

Atomic Layer Deposition Method for Polymeric Optical Waveguide Fabrication (원자층 증착 방법을 이용한 폴리머 광도파로 제작)

  • Eun-Su Lee;Kwon-Wook Chun;Jinung Jin;Ye-Jun Jung;Min-Cheol Oh
    • Korean Journal of Optics and Photonics
    • /
    • v.35 no.4
    • /
    • pp.175-183
    • /
    • 2024
  • Research into optical signal processing using photonic integrated circuits (PICs) has been actively pursued in various fields, including optical communication, optical sensors, and quantum optics. Among the materials used in PIC fabrication, polymers have attracted significant interest due to their unique characteristics. To fabricate polymer-based PICs, establishing an accurate manufacturing process for the cross-sectional structure of an optical waveguide is crucial. For stable device performance and high yield in mass production, a process with high reproducibility and a wide tolerance for variation is necessary. This study proposes an efficient method for fabricating polymer optical-waveguide devices by introducing the atomic layer deposition (ALD) process. Compared to conventional photoresist or metal-film deposition methods, the ALD process enables more precise fabrication of the optical waveguide's core structure. Polyimide optical waveguides with a core size of 1.8 × 1.6 μm² are fabricated using the ALD process, and their propagation losses are measured. Additionally, a multimode interference (MMI) optical-waveguide power-splitter device is fabricated and characterized. Throughout the fabrication, no cracking issues are observed in the etching-mask layer, the vertical profiles of the waveguide patterns are excellent, and the propagation loss is below 1.5 dB/cm. These results confirm that the ALD process is a suitable method for the mass production of high-quality polymer photonic devices.
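
Propagation-loss figures like the < 1.5 dB/cm quoted above are typically obtained with the cut-back method: measure the output power for two waveguide lengths and divide the dB difference by the length difference. The helper below is a generic sketch of that relation, not the authors' measurement procedure.

```python
import math

def cutback_loss_db_per_cm(p_long, p_short, len_long_cm, len_short_cm):
    """Cut-back propagation loss:
    alpha = 10*log10(P_short / P_long) / (L_long - L_short)  [dB/cm]."""
    return 10.0 * math.log10(p_short / p_long) / (len_long_cm - len_short_cm)
```

For example, if cutting a waveguide from 3 cm to 1 cm doubles the transmitted power, the loss is 10·log10(2)/2 ≈ 1.5 dB/cm, right at the threshold reported in the abstract.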

A COMPARATIVE STUDY UPON EVENT-RELATED POTENTIALS OF THE PATIENTS WITH ADHD AND NORMAL CHILDREN USING FOURIER TRANSFORMATION AND WAVELET ANALYSIS (푸리에 변환과 웨이브렛 분석을 통한 주의력결핍·과잉운동장애 아동과 정상 아동의 사건관련전위 비교 연구)

  • Park, Jin-Hyoung;Kim, Hee-Chan;Cho, Soo-Churl;Shin, Sung-Woong
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.12 no.1
    • /
    • pp.25-50
    • /
    • 2001
  • Using Fourier transformation and wavelet analysis, we compared the auditory event-related potentials of patients with attention deficit-hyperactivity disorder (ADHD; 13 boys) and normal control children (8 boys). Amplitudes of the event-related potentials, calculated via Fourier transformation, were compared between the groups and between conditions (non-target versus target) within each group. To the non-target stimuli, the patients with ADHD showed significantly greater amplitudes across almost all electrode sites and frequencies. To the target stimuli, the number of instances in which ADHD patients showed higher amplitudes than normal controls decreased significantly, while instances of the reverse increased significantly. These results were consistent with the comparison of the negative difference wave (Nd wave) using Fourier transformation. In summary, the non-target stimulus, which should be ignored, elicited a more robust electrical response from the patients with ADHD than from normal children, while the target stimulus, which required active processing, elicited much less electrical activity from the patients. The patients showed a much more inhibited electrical response to the target stimuli at some electrodes and frequency ranges. Normal children were more strongly stimulated by the target stimuli than the patients at almost all electrodes and frequency ranges, but less so in the prefrontal and frontal leads. Wavelet analysis showed that early responses (0-300 msec) to the non-target stimuli were significantly greater in the patients than in the normal controls in the prefrontal, anterior frontal, parts of the temporal, and occipital lobes, and that late responses (300-370 msec) were significantly smaller than in normal children at the parietal and central electrodes. Target stimuli elicited significantly higher electrical activity in both groups than non-target stimuli did. Prefrontal and frontal lobes showed stronger responses in the patients than in normal children irrespective of stimulus condition, but parietal and temporal lobes showed higher activity in normal children than in the patients only for the target stimuli. In conclusion, the patients with ADHD showed much greater responses to the stimuli that should be ignored, but failed to activate the necessary processes for the target stimuli. We also found that frequency-domain analysis and wavelet analysis are useful for processing signals such as event-related potentials.
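
The per-frequency amplitude comparisons described above come from Fourier transformation of ERP epochs. A minimal sketch of computing the mean amplitude in a chosen frequency band from a single epoch (naive DFT, stdlib only; the band edges in the example are arbitrary illustrations, not the study's bands):

```python
import cmath, math

def dft_amplitudes(x):
    """Normalized amplitude spectrum |X_k|/n of one epoch, up to Nyquist."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n // 2)]

def band_amplitude(x, fs, f_lo, f_hi):
    """Mean DFT amplitude of the bins falling within [f_lo, f_hi] Hz."""
    n = len(x)
    amps = dft_amplitudes(x)
    picked = [a for k, a in enumerate(amps) if f_lo <= k * fs / n <= f_hi]
    return sum(picked) / len(picked)
```

Comparing such band amplitudes electrode-by-electrode between target and non-target epochs is the kind of contrast the study reports.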

Two-Dimensional Interpretation of Far-Remote Reference Magnetotelluric Data for Geothermal Application (심부 지열자원 개발을 위한 원거리 기준점 MT 탐사자료의 2차원 역산 해석)

  • Lee, Tae-Jong;Song, Yoon-Ho;Uchida, Toshihiro
    • Geophysics and Geophysical Exploration
    • /
    • v.8 no.2
    • /
    • pp.145-155
    • /
    • 2005
  • A two-dimensional (2-D) interpretation of MT data has been performed for the purpose of fracture detection for geothermal development. Remote stations have been operated in Kyushu, Japan (480 km apart) as well as in Korea (60 km and 165 km apart for the 2002 and 2003 data sets, respectively). Apparent resistivity and phase curves calculated by remote processing with the Japan remote data showed sufficient quality for 2-D inversion over the whole frequency range. Remote reference processing with the Korea remote reference data also showed quite good continuity in the apparent resistivity and phase curves except in some noisy frequency bands: around the power frequency, 60 Hz, and around the dead band $10^{-1}\;Hz\sim1\;Hz$, where the natural EM signal is known to be very weak. Even though the subsurface showed severe three-dimensional (3-D) characteristics in the survey area, so that 2-D inversion by itself could not give enough information on deep geological structures, the 2-D inversions for the 5 survey lines showed several common features. The conductive semi-consolidated mudstone layer is dipping from north to south (about 500 m depth in the southernmost and 200 m in the northernmost part of the survey area). The boundary between the low (L-2) and high (H-2) resistivity anomalies can be interpreted as a major fault with strike $N15^{\circ}E$, passing through sites 206, 112 and 414. The shallow (< 1 km) conductive anomalies (L-4) seem to be fracture zones with strikes E-W (at site 105) and $N60^{\circ}W$ (at site 434). And there exists a conductive layer in the western and southwestern part of the survey area at depths below $2\sim3\;km$, which needs further investigation.
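
For reference, the apparent resistivity and phase curves mentioned above are derived from the measured impedance Z via ρ_a = |Z|²/(μ₀ω) and φ = arg Z. A minimal sketch (SI units, scalar impedance; the remote-reference estimation of Z itself is not shown):

```python
import cmath, math

MU0 = 4.0e-7 * math.pi  # vacuum permeability (H/m)

def apparent_resistivity_and_phase(z_ohm, freq_hz):
    """MT sounding curves from a scalar impedance Z = E/H:
    rho_a = |Z|^2 / (mu0 * omega) in ohm-m, and the phase of Z in degrees."""
    omega = 2.0 * math.pi * freq_hz
    return abs(z_ohm) ** 2 / (MU0 * omega), math.degrees(cmath.phase(z_ohm))
```

Over a uniform half-space of resistivity ρ the impedance is Z = sqrt(iωμ₀ρ), so ρ_a recovers ρ and the phase is 45°; departures from that flag structure (or, in the noisy dead band, poor signal).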

A Localized Secular Variation Model of the Geomagnetic Field Over Northeast Asia Region between 1997 to 2011 (지역화된 동북아시아지역의 지구자기장 영년변화 모델: 1997-2011)

  • Kim, Hyung Rae
    • Economic and Environmental Geology
    • /
    • v.48 no.1
    • /
    • pp.51-63
    • /
    • 2015
  • I produced a secular variation model of the geomagnetic field by using the magnetic component data from four geomagnetic observatories located in Northeast Asia during the years between 1997 and 2011. The Earth's magnetic field varies with time and location due to the dynamics of the fluid outer core, and the magnetic observatories on the surface measure it in time series. To adequately represent the magnetic field or secular variations of the Earth, a spatio-temporal model is required. In making a global model, satellite observations as well as limited observatory data are necessary to cover the regions and time intervals. However, considerable work and time are needed to process such a huge dataset with complicated signal separation procedures, and the same effort is demanded whenever the model is updated. Besides, the global model might be affected by the biased measurement errors of each observatory and by processing errors in the satellite data, so that the accuracy of the model would be degraded. In this study, considering these problems, I introduce a localized method for modeling the secular variation of the Earth's magnetic field over the Northeast Asia region. Secular variation data from three Japanese observatories and one Chinese observatory, all in the INTERMAGNET network, are implemented in the model, which is valid between 1997 and 2011 at 6-month intervals. I compared the resulting model with the global model CHAOS-4, which includes main field, secular variation and secular acceleration models between 1997 and 2013 based on three satellites' databases and INTERMAGNET observatory data. Also, the geomagnetic 'jerk', known as a sudden change in the time derivatives of the Earth's main field, is discussed using the localized secular acceleration coefficients derived from the spline models.
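
Secular variation and secular acceleration (whose abrupt changes mark a 'jerk') are, at their simplest, first and second time-derivatives of an observatory's field series. Before any spline or spherical-harmonic modeling, they can be approximated by finite differences of successive means; the sketch below uses the abstract's 6-month interval, but the data values are hypothetical.

```python
def secular_variation(times, values):
    """First-derivative estimates (e.g. nT/yr) from successive field means."""
    return [(v1 - v0) / (t1 - t0)
            for (t0, v0), (t1, v1) in zip(zip(times, values),
                                          zip(times[1:], values[1:]))]

def secular_acceleration(times, values):
    """Second derivative: difference the SV series at interval midpoints."""
    sv = secular_variation(times, values)
    mids = [(t0 + t1) / 2.0 for t0, t1 in zip(times, times[1:])]
    return secular_variation(mids, sv)
```

A jerk would show up as a step between consecutive secular_acceleration values; spline models estimate the same derivatives smoothly and with error control.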

A Variable Latency Newton-Raphson's Floating Point Number Reciprocal Square Root Computation (가변 시간 뉴톤-랍손 부동소수점 역수 제곱근 계산기)

  • Kim Sung-Gi;Cho Gyeong-Yeon
    • The KIPS Transactions:PartA
    • /
    • v.12A no.5 s.95
    • /
    • pp.413-420
    • /
    • 2005
  • The Newton-Raphson iterative algorithm for finding a floating point reciprocal square root calculates it by performing a fixed number of multiplications. In this paper, a variable latency Newton-Raphson reciprocal square root algorithm is proposed that performs multiplications a variable number of times until the error becomes smaller than a given value. To find the reciprocal square root of a floating point number F, the algorithm repeats the following operation: $X_{i+1}=\frac{X_i(3-e_r-{FX_i}^2)}{2}$, $i\in\{0,1,2,{\ldots},n-1\}$, with the initial value $X_0=\frac{1}{\sqrt{F}}{\pm}e_0$. The bits to the right of p fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of p is 28 for the single precision floating point format, and 58 for the double precision floating point format. Let $X_i=\frac{1}{\sqrt{F}}{\pm}e_i$; then $X_{i+1}=\frac{1}{\sqrt{F}}-e_{i+1}$, where $e_{i+1}<\frac{3{\sqrt{F}}{e_i}^2}{2}{\mp}\frac{F{e_i}^3}{2}+2e_r$. If $|\frac{3-e_r-{FX_i}^2}{2}-1|<2^{-\frac{p}{2}}$ is true, then $e_{i+1}<8e_r$, which is less than the smallest number representable in the floating point format, so $X_{i+1}$ approximates $\frac{1}{\sqrt{F}}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal square root tables ($X_0=\frac{1}{\sqrt{F}}{\pm}e_0$) of varying sizes. The superiority of this algorithm is proved by comparing this average number with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a reciprocal square root unit. Also, it can be used to construct optimized approximate reciprocal square root tables. The results of this paper can be applied to many areas that utilize floating point numbers, such as digital signal processing, computer graphics, multimedia, scientific computing, etc.
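
A software sketch of the variable-latency idea: iterate $X_{i+1}=X_i(3-FX_i^2)/2$ and stop as soon as the correction factor is within a tolerance of 1, so a poor seed costs more iterations and a good seed fewer. This is a double-precision Python illustration, not the paper's truncated-multiplier hardware; the perturbed seed and the tolerance are assumptions.

```python
def rsqrt_variable_latency(f, seed_err=0.01, tol=2.0 ** -28, max_iter=10):
    """Newton-Raphson reciprocal square root with a variable iteration count:
    stop once the correction factor (3 - f*x^2)/2 is within tol of 1."""
    x = (1.0 / f ** 0.5) * (1.0 + seed_err)  # X0 = 1/sqrt(f) +- e0
    iters = 0
    while iters < max_iter:
        h = (3.0 - f * x * x) / 2.0          # correction factor
        x *= h
        iters += 1
        if abs(h - 1.0) < tol:               # error already below target
            break
    return x, iters
```

A more accurate seed (larger lookup table) terminates in fewer multiplications, which is exactly the average-latency trade-off the paper quantifies.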

A Variable Latency Newton-Raphson's Floating Point Number Reciprocal Computation (가변 시간 뉴톤-랍손 부동소수점 역수 계산기)

  • Kim Sung-Gi;Cho Gyeong-Yeon
    • The KIPS Transactions:PartA
    • /
    • v.12A no.2 s.92
    • /
    • pp.95-102
    • /
    • 2005
  • The Newton-Raphson iterative algorithm for finding a floating point reciprocal, which is widely used for floating point division, calculates the reciprocal by performing a fixed number of multiplications. In this paper, a variable latency Newton-Raphson reciprocal algorithm is proposed that performs multiplications a variable number of times until the error becomes smaller than a given value. To find the reciprocal of a floating point number F, the algorithm repeats the following operation: $X_{i+1}=X_i(2-e_r-FX_i)$, $i\in\{0,1,2,{\ldots},n-1\}$, with the initial value $X_0=\frac{1}{F}{\pm}e_0$. The bits to the right of p fractional bits in intermediate multiplication results are truncated, and this truncation error is less than $e_r=2^{-p}$. The value of p is 27 for the single precision floating point format, and 57 for the double precision floating point format. Let $X_i=\frac{1}{F}{\pm}e_i$; then $X_{i+1}=\frac{1}{F}-e_{i+1}$, where $e_{i+1}<F{e_i}^2+2e_r$. If $|2-e_r-FX_i-1|<2^{-\frac{p}{2}}$ is true, then $e_{i+1}<8e_r$, which is less than the smallest number representable in the floating point format, so $X_{i+1}$ approximates $\frac{1}{F}$. Since the number of multiplications performed by the proposed algorithm depends on the input values, the average number of multiplications per operation is derived from many reciprocal tables ($X_0=\frac{1}{F}{\pm}e_0$) of varying sizes. The superiority of this algorithm is proved by comparing this average number with the fixed number of multiplications of the conventional algorithm. Since the proposed algorithm performs multiplications only until the error becomes smaller than a given value, it can be used to improve the performance of a reciprocal unit. Also, it can be used to construct optimized approximate reciprocal tables. The results of this paper can be applied to many areas that utilize floating point numbers, such as digital signal processing, computer graphics, multimedia, scientific computing, etc.
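
The same early-exit idea for the plain reciprocal: iterate $X_{i+1}=X_i(2-FX_i)$ and stop when the correction factor $2-FX_i$ is within a tolerance of 1. Again a double-precision sketch with an assumed seed error, not the paper's truncated-multiplier design.

```python
def reciprocal_variable_latency(f, seed_err=0.01, tol=2.0 ** -27, max_iter=10):
    """Newton-Raphson reciprocal with early exit: each iteration squares the
    relative error, and the loop ends once the correction factor is ~1."""
    x = (1.0 / f) * (1.0 - seed_err)  # X0 = 1/f +- e0 (seed from a table)
    iters = 0
    while iters < max_iter:
        h = 2.0 - f * x               # correction factor
        x *= h
        iters += 1
        if abs(h - 1.0) < tol:
            break
    return x, iters
```

Because the error squares each step, the iteration count depends only on the seed accuracy, which is why the paper derives the average multiplication count from tables of varying sizes.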

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we suggest an application system architecture which provides an accurate, fast and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract character information for the device ID and gas usage amount by selective optical character recognition based on deep learning technology. In general, there are many types of characters in an image, and optical character recognition technology extracts all character information in an image. But some applications need to ignore character types that are not of interest and only have to focus on specific types of characters. For example, an automatic gasometer reading system only needs to extract device ID and gas usage amount character information from gasometer images to send bills to users. Character strings that are not of interest, such as device type, manufacturer, manufacturing date and specification, are not valuable information to the application. Thus, the application has to analyze the point-of-interest region and specific types of characters to extract valuable information only. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the point-of-interest region for selective character information extraction. We built up 3 neural networks for the application system.
The first is a convolutional neural network which detects the point-of-interest regions of the gas usage amount and device ID character strings, the second is another convolutional neural network which transforms spatial information of the point-of-interest region into spatial sequential feature vectors, and the third is a bi-directional long short-term memory network which converts spatial sequential information into character strings using time-series analysis mapping from feature vectors to character strings. In this research, the point-of-interest character strings are the device ID and gas usage amount. The device ID consists of 12 arabic characters and the gas usage amount consists of 4-5 arabic characters. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and NVIDIA TESLA V100 GPUs. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes reading requests from mobile devices to an input queue with a FIFO (First In First Out) structure. The slave process consists of 3 types of deep neural networks which conduct the character recognition process and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests. If there are requests from the master process in the input queue, the slave process converts the image in the input queue into a device ID character string, a gas usage amount character string and position information of the strings, returns the information to the output queue, and switches to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers the information to the mobile device. We used a total of 27,120 gasometer images for training, validation and testing of the 3 types of deep neural networks.
22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images with an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant). Normal data are clean images; noise means images with noise signals; reflex means images with light reflection in the gasometer region; scale means images with a small object size due to long-distance capturing; and slant means images that are not horizontally flat. The final character string recognition accuracies for device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
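
The master/slave queueing structure described above can be sketched with Python's stdlib queues and a worker thread. The recognition step is replaced by a placeholder, and all names here are illustrative, not the paper's implementation.

```python
import queue
import threading

def run_pipeline(requests):
    """Master pushes reading requests into a FIFO input queue; a slave worker
    polls the queue, 'recognizes' each image, and posts results to an output
    queue, which the master drains after the work is done."""
    in_q, out_q = queue.Queue(), queue.Queue()

    def slave():
        while True:
            img = in_q.get()          # blocking poll of the input queue
            if img is None:           # sentinel: no more work
                break
            # stand-in for the CNN detection + CRNN recognition stage
            out_q.put((img, "device-id/usage for " + img))
            in_q.task_done()

    worker = threading.Thread(target=slave)
    worker.start()
    for r in requests:                # master: FIFO enqueue
        in_q.put(r)
    in_q.put(None)
    worker.join()
    return [out_q.get() for _ in range(out_q.qsize())]
```

In the real system the master and GPU slaves are separate processes on separate machines, but the FIFO hand-off and the idle-poll loop follow the same shape.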

GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.80-81
    • /
    • 2012
  • Recently, one of the critical issues in the etching processes of nanoscale devices is to achieve an ultra-high aspect ratio contact (UHARC) profile without anomalous behaviors such as sidewall bowing and twisting profiles. To achieve this goal, fluorocarbon plasmas, with their major advantage of sidewall passivation, have been used commonly with numerous additives to obtain ideal etch profiles. However, they still suffer from formidable challenges such as tight limits on sidewall bowing and controlling the randomly distorted features in nanoscale etch profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies to overcome these process limitations, including novel plasma chemistries and plasma sources. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for the silicon dioxide etching process under inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe and a Quadrupole Mass Spectrometer (QMS). The surface chemistries of the etched samples were measured by X-ray Photoelectron Spectroscopy. To measure plasma parameters, a self-cleaned RF Langmuir probe was used to cope with the polymer deposition environment on the probe tip, and the results were double-checked by the cutoff probe, which is known to be a precise plasma diagnostic tool for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we proposed a phenomenological and realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer.
The predicted surface reaction modeling results showed good agreement with the experimental data. With the above studies of the plasma surface reaction, we have developed a 3D topography simulator using a multi-layer level set algorithm and a new memory-saving technique, which is suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was considered by deterministic and Monte Carlo methods, respectively. In the case of ultra-high aspect ratio contact hole etching, it is already well known that a huge computational burden is required for realistic consideration of this ballistic transport. To address this issue, the related computational codes were efficiently parallelized for GPU (Graphics Processing Unit) computing, so that the total computation time was improved by more than a few hundred times compared to the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and etch reaction model. Realistic etch-profile simulations with consideration of the sidewall polymer passivation layer were demonstrated.
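
The Monte Carlo ballistic-transport step can be illustrated in miniature: sample cosine-law arrival angles at the opening of a high-aspect-ratio feature and count the particles with a direct line of sight to the bottom. This 2D trench toy model (re-emission, sticking, and ion anisotropy omitted) only shows why deep contacts starve of neutral flux; it is not the simulator described above.

```python
import math
import random

def direct_flux_fraction(aspect_ratio, n=20000, seed=1):
    """Monte Carlo estimate of the fraction of cosine-distributed neutrals
    that reach the bottom of a 2D trench (width 1, depth = aspect_ratio)
    without striking a sidewall (pure line-of-sight transport)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x0 = rng.random()                            # entry point across the opening
        theta = math.asin(2.0 * rng.random() - 1.0)  # cosine-law emission angle
        x_bottom = x0 + aspect_ratio * math.tan(theta)
        if 0.0 <= x_bottom <= 1.0:
            hits += 1
    return hits / n
```

The direct-flux fraction drops sharply with aspect ratio, which is why each added wall segment in a real UHARC simulation multiplies the sampling cost and motivates the GPU parallelization described in the abstract.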
