• Title/Summary/Keyword: Noise Removal


Abnormal Behavior Detection Based on Adaptive Background Generation for Intelligent Video Analysis (지능형 비디오 분석을 위한 적응적 배경 생성 기반의 이상행위 검출)

  • Lee, Seoung-Won;Kim, Tae-Kyung;Yoo, Jang-Hee;Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.1
    • /
    • pp.111-121
    • /
    • 2011
  • Intelligent video analysis systems require techniques that can predict accidents and alert monitoring personnel. In this paper, we present an abnormal behavior analysis technique based on adaptive background generation. More specifically, the targeted abnormal behaviors include fence climbing, abandoned objects, fainting persons, and loitering persons. The proposed video analysis system consists of (i) a background generation module and (ii) an abnormal behavior analysis module. For robust background generation, the proposed system updates static regions by detecting motion changes at each frame. In addition, noise and shadow removal steps were added to improve the accuracy of object detection. The abnormal behavior analysis module extracts object information such as centroid, silhouette, size, and trajectory. Based on this information, object behavior is configured and analyzed against a priori specified scenarios such as fence climbing, abandoning objects, fainting, and loitering. In the experiments, the proposed system was able to detect moving objects and analyze abnormal behavior in complex environments.
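The core of the background generation step described above, updating only static regions detected by per-frame motion changes, can be sketched as a selective running average. All names, threshold values, and the learning rate below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def update_background(background, frame, motion_thresh=25, alpha=0.05):
    """Update the background only in static regions, as a running average.

    Pixels whose frame-to-background change exceeds motion_thresh are
    treated as moving objects and excluded from the update.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    static = diff < motion_thresh                       # static-region mask
    updated = background.astype(np.float64)
    updated[static] = (1 - alpha) * updated[static] + alpha * frame[static]
    foreground = ~static                                # candidate object pixels
    return updated.astype(np.uint8), foreground

# Toy 4x4 grayscale frames: a bright "object" enters the lower-right corner.
bg = np.full((4, 4), 50, dtype=np.uint8)
frame = bg.copy()
frame[2:, 2:] = 200
new_bg, fg = update_background(bg, frame)
```

The object pixels are flagged as foreground and left out of the average, so the background is not contaminated by the moving object.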

TREATMENT OF DENTAL CARIES BY ER:YAG LASER IN CHILDREN (소아 환자에서 Er:YAG Laser를 이용한 우식 병소의 처치)

  • Jang, Eun-Young;Lee, Sang-Ho;Lee, Chang-Seop
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.27 no.4
    • /
    • pp.558-563
    • /
    • 2000
  • Lasers have been used in dentistry for more than 30 years, and their application to drilling dental hard tissue has been investigated since the early development of lasers. Recently, the Er:YAG laser was introduced for hard tissue ablation. The Er:YAG laser, with a wavelength of 2.94 μm, is highly absorbed by both water and hydroxyapatite, making it very effective for hard tissue removal: enamel and dentin are removed by water vaporization and microexplosion, without any melting of the inorganic tissue. As a result, the Er:YAG laser produces round craters with well-defined margins, and the surrounding tissue shows no cracks and no charring. When used for cavity preparation, pulpal damage should not occur if heat buildup is minimized by careful selection of exposure parameters and by use of a water spray. The present study demonstrated that the Er:YAG laser cut tooth substance adequately for composite resin restoration without undesirable side effects such as harmful effects on the pulp, discoloration, or cracking. In addition, the child patients were cooperative during laser treatment, mainly because of the low noise, lesser vibration, and minimal pain compared with conventional means of cavity preparation.


A Study on the Method of High-Speed Reading of Postal 4-state Bar Code for Supporting Automatic Processing (우편용 4-state 바코드 고속판독 방법에 관한 연구)

  • Park, Moon-Sung;Kim, Hye-Kyu;Jung, Hoe-Kyung
    • The KIPS Transactions:PartD
    • /
    • v.8D no.3
    • /
    • pp.285-294
    • /
    • 2001
  • Recently, many efforts toward the development of an automatic processing system for delivery sequence sorting have been made at ETRI; such a system requires the postal 4-state bar code to encode delivery points. This paper addresses the extension of the read range and the improvement of the image processing method. In the improved procedure, bar information is acquired by applying two basic thresholds along the horizontal axis of the gray image, based on the reference information of the 4-state bar code symbology. Symbol values are then computed after creating two threshold values from the information obtained by searching along the horizontal axis. The implemented 4-state bar code reader obtained the symbol values within 30~60 msec (58,000~116,000 mail items/hour) without noise removal or image rotation, despite inclines of up to $\pm 45^{\circ}$.

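The decoding step for a 4-state symbology can be illustrated by the final classification: each bar is one of full, ascender, descender, or tracker, depending on whether it extends above and/or below the central tracker band. The two-threshold representation below is an assumption for illustration; the paper's method works on the gray-image profile directly:

```python
def classify_bar(top_extent, bottom_extent, t_top=0.5, t_bottom=0.5):
    """Map a bar's normalized extents above/below the tracker band to a
    4-state symbol: F (full), A (ascender), D (descender), T (tracker).
    Threshold values are illustrative, not from the paper."""
    has_ascender = top_extent > t_top
    has_descender = bottom_extent > t_bottom
    if has_ascender and has_descender:
        return "F"
    if has_ascender:
        return "A"
    if has_descender:
        return "D"
    return "T"

# Hypothetical measured extents for four bars, one of each state.
bars = [(0.9, 0.9), (0.9, 0.1), (0.1, 0.9), (0.1, 0.1)]
symbols = [classify_bar(t, b) for t, b in bars]
```

Because the decision reduces to two comparisons per bar, this stage is cheap enough to fit the reported 30~60 msec per mail item budget.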

Formant-broadened CMS Using the Log-spectrum Transformed from the Cepstrum (켑스트럼으로부터 변환된 로그 스펙트럼을 이용한 포먼트 평활화 켑스트럴 평균 차감법)

  • 김유진;정혜경;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.21 no.4
    • /
    • pp.361-373
    • /
    • 2002
  • In this paper, we propose a channel normalization method to improve the performance of CMS (cepstral mean subtraction), which is widely adopted to normalize channel variation for speech and speaker recognition. CMS, which estimates channel effects by averaging the long-term cepstrum, has the weakness that the estimated channel is biased by the formants of voiced speech, which carry useful speech information. The proposed Formant-Broadened Cepstral Mean Subtraction (FBCMS) is based on two facts: the formants can be found easily in the log spectrum, which is obtained from the cepstrum by a Fourier transform, and the formants correspond to the dominant poles of the all-pole model that usually models the vocal tract. FBCMS evaluates only the poles to be broadened from the log spectrum, without polynomial factorization, and produces a formant-broadened cepstrum by broadening the bandwidths of the formant poles. The channel cepstrum can then be estimated effectively by averaging the formant-broadened cepstral coefficients. We performed experiments comparing FBCMS with CMS and pole-filtered CMS (PFCMS) using four simulated telephone channels. In the channel estimation experiment, we evaluated the distance between the cepstrum of the real channel and the cepstrum of the estimated channel, and found that the proposed method yields a mean cepstrum closer to the channel cepstrum by softening the bias of the mean cepstrum toward speech. In the text-independent speaker identification experiment, the proposed method was superior to conventional CMS and comparable to pole-filtered CMS. Consequently, the proposed method is able to normalize channel variation efficiently on the basis of conventional CMS.
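The conventional CMS baseline that FBCMS improves upon is simple: the long-term mean cepstrum serves as the channel estimate and is subtracted from every frame. A minimal sketch (the formant bias that FBCMS corrects is not modeled here; all data below is synthetic):

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Conventional CMS: subtract the long-term mean cepstrum,
    which acts as the estimate of the (cepstrally additive) channel."""
    channel_estimate = cepstra.mean(axis=0)
    return cepstra - channel_estimate, channel_estimate

rng = np.random.default_rng(0)
frames = rng.normal(size=(100, 13))      # fake 13-dim cepstral frames
channel = np.linspace(0.5, -0.5, 13)     # fake convolutive channel (additive in cepstrum)
normalized, est = cepstral_mean_subtraction(frames + channel)
```

Because a convolutive channel becomes additive in the cepstral domain, subtracting the mean removes it exactly when the speech cepstrum averages to zero; the bias arises precisely when it does not, which is the case FBCMS targets.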

Automatic Selection of Optimal Parameter for Baseline Correction using Asymmetrically Reweighted Penalized Least Squares (Asymmetrically Reweighted Penalized Least Squares을 이용한 기준선 보정에서 최적 매개변수 자동 선택 방법)

  • Park, Aaron;Baek, Sung-June;Park, Jun-Qyu;Seo, Yu-Gyung;Won, Yonggwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.124-131
    • /
    • 2016
  • Baseline correction is very important in spectroscopic applications because of its influence on the performance of spectral analysis. The baseline is often estimated through parameter selection by visual inspection of the analyte spectrum. This is a highly subjective procedure and can be tedious, especially with a large amount of data. For these reasons, an objective and automatic procedure for selecting the optimal parameter value for baseline correction is necessary. Asymmetrically reweighted penalized least squares (arPLS), based on penalized least squares, was proposed for baseline correction in our previous study; the method uses a new weighting scheme based on the generalized logistic function. In this study, we present an automatic selection of the optimal parameter for baseline correction using arPLS. The method computes fitness and smoothness values of the fitted baseline over the available range of parameters and then selects the parameter at which the sum of the normalized fitness and smoothness reaches its minimum. Experimental results on simulated data with varying baselines (sloping, curved, and doubly curved) and on real Raman spectra confirm that the proposed method can be effectively applied to optimal parameter selection for baseline correction using arPLS.
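The penalized-least-squares core that arPLS builds on can be sketched as an iterative Whittaker-style smoother: solve (W + λDᵀD)z = Wy and down-weight points above the current baseline so that peaks do not pull it up. The simple binary weight rule below stands in for the paper's generalized-logistic scheme, and the sloping-baseline test signal is fabricated for illustration:

```python
import numpy as np

def arpls_like_baseline(y, lam=1e4, n_iter=10):
    """Minimal arPLS-style baseline: iteratively solve
    (W + lam * D'D) z = W y with asymmetric weights.
    lam is the smoothness parameter whose automatic selection
    the paper addresses."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)     # second-difference operator
    P = lam * D.T @ D
    w = np.ones(n)
    for _ in range(n_iter):
        W = np.diag(w)
        z = np.linalg.solve(W + P, w * y)
        r = y - z
        w = np.where(r > 0, 0.01, 1.0)    # peaks (above baseline) get tiny weight
    return z

x = np.linspace(0, 1, 200)
baseline_true = 1.0 + 0.5 * x                     # sloping baseline
peak = np.exp(-0.5 * ((x - 0.5) / 0.02) ** 2)     # one narrow Raman-like peak
y = baseline_true + peak
z = arpls_like_baseline(y)
```

The paper's contribution sits on top of this: sweep `lam` over its range, compute normalized fitness and smoothness for each fit, and pick the `lam` minimizing their sum instead of choosing it by eye.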

Comparative Study of GDPA and Hough Transformation for Linear Feature Extraction using Space-borne Imagery (위성 영상정보를 이용한 선형 지형지물 추출에서의 GDPA와 Hough 변환 처리결과 비교연구)

  • Lee Kiwon;Ryu Hee-Young;Kwon Byung-Doo
    • Korean Journal of Remote Sensing
    • /
    • v.20 no.4
    • /
    • pp.261-274
    • /
    • 2004
  • Feature extraction from remotely sensed imagery has been recognized as one of the important tasks in remote sensing applications. As high-resolution imagery becomes widely used for engineering purposes, the need for more accurate feature information is also increasing. In particular, several techniques for the automatic extraction of linear features such as roads from mid- or low-resolution imagery have been developed and applied, but quantitative comparative analyses of these techniques, and case studies on high-resolution imagery, are rare. In this study, we implemented a computer program to run and compare the GDPA (Gradient Direction Profile Analysis) algorithm and the Hough transform. The results of applying the two techniques to several images were compared against the road centerline and boundary layers of a digital map. For quantitative comparison, a ranking method using commission and omission errors was used. As a result, the Hough transform showed higher accuracy, by over 20% on average. As for execution speed, GDPA has a clear advantage over the Hough transform, and the difference in accuracy was not remarkable once noise removal was applied to the GDPA results. In conclusion, GDPA is expected to be more advantageous than the Hough transform on the application side.
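The Hough transform side of the comparison works by letting every edge pixel vote for all lines through it in (ρ, θ) space; a straight road shows up as a peak in the accumulator. A minimal sketch on a synthetic edge map (array shapes and the toy line are illustrative):

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    """Minimal Hough accumulator for lines in rho-theta form:
    rho = x*cos(theta) + y*sin(theta), one vote per (point, theta)."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))           # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for r, c in edge_points:                      # (row, col) edge pixels
        rhos = np.round(c * np.cos(thetas) + r * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # offset so rho can be negative
    return acc, thetas, diag

# A perfect horizontal "road" of edge pixels along row 5 of a 10x20 image.
pts = [(5, c) for c in range(20)]
acc, thetas, diag = hough_lines(pts, (10, 20))
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
```

The accumulator loop over every (pixel, θ) pair is also why GDPA, which only profiles gradients locally, wins on execution speed.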

Clustering Performance Analysis of Autoencoder with Skip Connection (스킵연결이 적용된 오토인코더 모델의 클러스터링 성능 분석)

  • Jo, In-su;Kang, Yunhee;Choi, Dong-bin;Park, Young B.
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.12
    • /
    • pp.403-410
    • /
    • 2020
  • In addition to research on noise removal and super-resolution using the data restoration (output) function of autoencoders, research on improving clustering performance using the dimension reduction function of autoencoders is being actively conducted. The clustering function and the data restoration function share the property that both improve through the same training. Based on this, this study conducted an experiment to see whether an autoencoder model designed for excellent data restoration performance is also superior in clustering performance. A skip connection was used to design an autoencoder with excellent data restoration performance. The restoration performance and clustering performance of the models with and without the skip connection were presented as graphs and visualizations. The restoration performance increased, but the clustering performance decreased. This result indicates that for neural network models such as autoencoders, a good output does not guarantee that each layer has learned the characteristics of the data well. Finally, the degradation in clustering performance was compensated for by using both the latent code and the skip connection. This study is a preliminary study toward solving the Hanja Unicode problem by clustering.
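The structural point, that a skip connection routes encoder features around the bottleneck so the decoder can restore well without the latent code having to carry everything, can be seen in a forward pass alone. The NumPy sketch below uses random untrained weights and made-up layer sizes purely to show the wiring, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    return np.maximum(0.0, x @ w + b)   # ReLU dense layer

# Toy sizes: 64-dim input, 16-dim latent code (the clustering features).
d_in, d_hid, d_lat = 64, 32, 16
w_e1, b_e1 = rng.normal(size=(d_in, d_hid)) * 0.1, np.zeros(d_hid)
w_e2, b_e2 = rng.normal(size=(d_hid, d_lat)) * 0.1, np.zeros(d_lat)
w_d1, b_d1 = rng.normal(size=(d_lat, d_hid)) * 0.1, np.zeros(d_hid)
# The decoder output layer sees the skip-concatenated features (2 * d_hid).
w_d2, b_d2 = rng.normal(size=(d_hid * 2, d_in)) * 0.1, np.zeros(d_in)

x = rng.normal(size=(8, d_in))           # batch of 8 samples
h = layer(x, w_e1, b_e1)                 # encoder hidden (kept for the skip)
z = layer(h, w_e2, b_e2)                 # latent code used for clustering
d = layer(z, w_d1, b_d1)
d_skip = np.concatenate([d, h], axis=1)  # skip connection bypasses the bottleneck
x_hat = d_skip @ w_d2 + b_d2             # reconstruction
```

Because `x_hat` can draw on `h` directly, reconstruction loss no longer forces all discriminative information through `z`, which is consistent with the paper's finding that restoration improved while clustering on `z` degraded.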

An Experimental Study of Demountable Bolted Shear Connectors for the Easy Dismantling and Reconstruction of Concrete Slabs of Steel-Concrete Composite Bridges (강합성 교량의 콘크리트 바닥판 해체 및 재시공이 용이한 분리식 볼트접합 전단연결재에 관한 실험적 연구)

  • Jung, Dae Sung;Park, Se-Hyun;Kim, Tae Hyeong;Kim, Chul Young
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.42 no.6
    • /
    • pp.751-762
    • /
    • 2022
  • Welded head studs are mainly used as shear connectors to bond steel girders and concrete slabs in steel-concrete composite bridges. For welded shear connectors, environmental problems include noise and scattering dust which are generated during the removal of damaged or aged slabs. Therefore, it is necessary to develop demountable shear connectors that can easily replace aged concrete slabs for efficient maintenance and thus for better management of environmental problems and life cycle costs. The buried nut method is commonly studied in relation to bolted shear connectors, but this method is not used in civil structures such as bridges due to low rigidity, low shear resistance, and increased initial slip. In this study, in order to mitigate these problems, a demountable bolted shear connector is proposed in which the buried nut is integrated into the stud column and has a tapered shape at the bottom of an enlarged column shank. To verify the performance of the proposed demountable stud bolts in terms of static shear strength and slip displacement, a horizontal shear test was conducted, with the performance outcomes compared to those of conventional welded studs. It was confirmed that the proposed demountable bolted shear connector is capable of excellent shear performance and that it satisfies the slip displacement and ductility design criteria, meaning that it is feasible as a replacement for existing welding studs.
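The slip and ductility criterion mentioned above can be illustrated with a small check in the spirit of the EN 1994-1-1 Annex B push-test evaluation, where the characteristic slip capacity is taken as the smallest measured value reduced by 10 % and a connector counts as ductile if it reaches 6 mm. Whether the paper uses exactly this rule is an assumption, and the test slips below are fabricated:

```python
def is_ductile(slip_capacities_mm, min_slip_mm=6.0, reduction=0.9):
    """Ductility check sketch (EN 1994-1-1 Annex B style):
    characteristic slip = 0.9 * min measured slip; ductile if it
    reaches min_slip_mm. Input values are hypothetical, not the
    paper's test data."""
    delta_uk = reduction * min(slip_capacities_mm)
    return delta_uk >= min_slip_mm, delta_uk

# Hypothetical slip capacities from three push-out specimens (mm).
ok, delta_uk = is_ductile([7.8, 8.4, 9.1])
```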

Development of Video-Detection Integration Algorithm on Vehicle Tracking (트래킹 기반 영상검지 통합 알고리즘 개발)

  • Oh, Jutaek;Min, Junyoung;Hu, Byungdo;Hwang, Bohee
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.29 no.5D
    • /
    • pp.635-644
    • /
    • 2009
  • Image processing in outdoor environments is very sensitive and tends to lose a great deal of accuracy when the outdoor environment changes rapidly. Therefore, to calculate accurate traffic information with a traffic monitoring system, we must handle shadow removal during transition times, distortion from vehicle headlights at night, the noise of rain, snow, and fog, and occlusion. In this research, we developed a system that calculates traffic volume, speed, and time occupancy using image processing under a variety of changing outdoor conditions. The system was tested outdoors at the Gonjiam test site, managed by the Korea Institute of Construction Technology (www.kict.re.kr), to evaluate its performance. We evaluated the traffic information performance (volume counts, speed, and occupancy time) on 4 lanes (2 upstream and 2 downstream) from the 16th to the 18th of December, 2008. The evaluation compared the data calculated by image processing against radar detection as the reference standard. The results showed that traffic volume, speed, and time occupancy across the periods (day, night, sunrise, sunset) reached approximately 92-97% accuracy compared with the reference data.
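The 92-97% figures imply comparing each video-derived measure against the radar reference. A plausible form of that comparison is sketched below; the exact accuracy formula and the counts are assumptions for illustration, not taken from the paper:

```python
def detection_accuracy(measured, reference):
    """Percent accuracy of a video-derived traffic measure against a
    radar reference value (formula assumed:
    100 * (1 - |measured - reference| / reference))."""
    return 100.0 * (1.0 - abs(measured - reference) / reference)

# Hypothetical hourly volume counts: video detector vs. radar baseline.
acc = detection_accuracy(measured=1424, reference=1500)
```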

Development of smart car intelligent wheel hub bearing embedded system using predictive diagnosis algorithm

  • Sam-Taek Kim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.1-8
    • /
    • 2023
  • A defect in a wheel bearing, a major automotive part, can cause problems such as traffic accidents. To address this, a system is needed that collects big data and performs monitoring to provide early information on the presence and type of wheel bearing failure through predictive diagnosis and management technology. In this paper, to implement such an intelligent wheel hub bearing maintenance system, we develop an embedded system equipped with sensors for monitoring reliability and soundness, together with algorithms for predictive diagnosis. The algorithm acquires vibration signals from acceleration sensors installed in the wheel bearings and can predict and diagnose failures through big data technology, using signal processing techniques, fault frequency analysis, and the definition of health characteristic parameters. The implemented algorithm applies stable signal extraction that suppresses extraneous vibration frequency components and maximizes the vibration components arising in the wheel bearings. For noise removal with a filter, an artificial-intelligence-based soundness extraction algorithm is applied; the fault frequencies are then analyzed by FFT, and faults are diagnosed by extracting fault characteristic factors. The performance target of this system was over 12,800 ODR, and the test results show that the target was met.
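The FFT-based fault frequency analysis step can be sketched as follows: sample the accelerometer, take the spectrum, and locate the dominant peak, which is then matched against the bearing's characteristic fault frequencies. The sampling rate echoes the 12,800 ODR figure, but the fault frequency and signal amplitudes below are fabricated for illustration:

```python
import numpy as np

fs = 12_800                                  # sampling rate (Hz), per the ODR target
t = np.arange(fs) / fs                       # 1 s of signal, 1 Hz bin resolution
fault_hz = 87.0                              # hypothetical bearing fault frequency
signal = 0.2 * np.sin(2 * np.pi * fault_hz * t)
noise = np.random.default_rng(1).normal(scale=0.05, size=fs)
x = signal + noise                           # simulated accelerometer channel

spectrum = np.abs(np.fft.rfft(x)) / len(x)   # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
peak_hz = freqs[spectrum.argmax()]           # dominant vibration component
```

In a real diagnosis pipeline, `peak_hz` (and its harmonics) would be compared against the computed ball-pass and cage frequencies of the specific bearing to name the fault type.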