• Title/Summary/Keyword: Noise Removing


Particle-motion-tracking Algorithm for the Evaluation of the Multi-physical Properties of Single Nanoparticles (단일 나노입자의 다중 물리량의 평가를 위한 입자 모션 트랙킹 알고리즘)

  • Park, Yeeun;Kang, Geeyoon;Park, Minsu;Noh, Hyowoong;Park, Hongsik
    • Journal of Sensor Science and Technology
    • /
    • v.31 no.3
    • /
    • pp.175-179
    • /
    • 2022
  • The physical properties of biomaterials are important for their isolation and separation from body fluids. In particular, the precise evaluation of the multi-physical properties of single biomolecules is essential because the physical and biological properties of a specific biomolecule are correlated. However, most scientific instruments can determine only specific physical properties of single nanoparticles, making the evaluation of multi-physical properties difficult. Improved analytical techniques for evaluating multi-physical properties are therefore required in various research fields. In this study, we developed a motion-tracking algorithm to evaluate the multi-physical properties of single nanoparticles by analyzing their behavior. We observed the Brownian motion and electric-field-induced drift of fluorescent nanoparticles injected into a microfluidic chip with two electrodes using confocal microscopy. The proposed algorithm determines the size of the nanoparticles by i) removing the background noise from the images, ii) tracking the motion of the nanoparticles using the circular Hough transform, iii) extracting the mean squared displacement (MSD) of the tracked nanoparticles, and iv) applying the MSD to the Stokes-Einstein equation. We compared the evaluated nanoparticle size with the size measured by SEM. We also determined the zeta potential and surface-charge density of the nanoparticles using the extracted electrophoretic velocity and the Helmholtz-Smoluchowski equation. The proposed motion-tracking algorithm could be employed in various fields related to biomaterial analysis, such as exosome analysis.
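Steps iii) and iv) of the pipeline above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes a 2-D trajectory in metres, water-like viscosity, and the linear relation MSD = 4Dτ for two-dimensional Brownian motion.

```python
import numpy as np

KB = 1.380649e-23  # Boltzmann constant [J/K]


def msd(track, dt):
    """Mean squared displacement of a 2-D trajectory (N x 2, metres)
    for lag times of 1..N-1 frames sampled every dt seconds."""
    lags = np.arange(1, len(track))
    vals = np.array([np.mean(np.sum((track[k:] - track[:-k]) ** 2, axis=1))
                     for k in lags])
    return lags * dt, vals


def stokes_einstein_diameter(tau, msd_vals, temp=298.0, viscosity=1.0e-3):
    """Fit MSD = 4*D*tau (2-D Brownian motion) and convert the diffusion
    coefficient D to a hydrodynamic diameter via Stokes-Einstein."""
    D = np.polyfit(tau, msd_vals, 1)[0] / 4.0          # [m^2/s]
    return KB * temp / (3.0 * np.pi * viscosity * D)   # diameter [m]
```

In practice only the short-lag portion of the MSD curve is fitted, since long-lag estimates are statistically noisy.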

Optimal Design Space Exploration of Multi-core Architecture for Real-time Lane Detection Algorithm (실시간 차선인식 알고리즘을 위한 최적의 멀티코어 아키텍처 디자인 공간 탐색)

  • Jeong, Inkyu;Kim, Jongmyon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.3
    • /
    • pp.339-349
    • /
    • 2017
  • This paper proposes a four-stage algorithm for detecting lanes from a driving car. In the first stage, it extracts a region of interest from the image. In the second stage, it employs a median filter to remove noise. In the third stage, a binarization algorithm is used to separate the input image into background and foreground. Finally, an image-erosion algorithm is applied to obtain clear lanes by removing the noise and edges that remain after binarization. However, the proposed lane detection algorithm requires a high computational time. To address this issue, this paper presents a parallel implementation of the real-time lane detection algorithm on a multi-core architecture. In addition, we implement and simulate eight different processing element (PE) architectures to select an optimal PE architecture for the target application. Experimental results indicate that the 40×40 PE architecture shows the best performance, energy efficiency, and area efficiency.
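The four stages described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the lower-half ROI, 5×5 median kernel, mean-plus-one-sigma threshold, and 3×3 erosion structuring element are all assumptions.

```python
import numpy as np
from scipy import ndimage


def detect_lanes(gray):
    """Four-stage lane extraction sketch. Returns a binary lane mask
    (uint8, 0/1) for the region of interest."""
    h, w = gray.shape
    roi = gray[h // 2:, :]                          # 1) ROI: lower half (road region)
    denoised = ndimage.median_filter(roi, size=5)   # 2) median filter removes impulse noise
    thresh = denoised.mean() + denoised.std()       # 3) simple global binarization
    binary = denoised > thresh
    # 4) erosion removes residual specks and thin edges left after binarization
    return ndimage.binary_erosion(binary, structure=np.ones((3, 3))).astype(np.uint8)
```

Each stage is embarrassingly parallel over pixels, which is what makes the PE-array mapping in the paper attractive.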

Secure Self-Driving Car System Resistant to the Adversarial Evasion Attacks (적대적 회피 공격에 대응하는 안전한 자율주행 자동차 시스템)

  • Seungyeol Lee;Hyunro Lee;Jaecheol Ha
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.907-917
    • /
    • 2023
  • Recently, self-driving cars have applied deep learning technology to advanced driver assistance systems, providing convenience to drivers; however, deep learning technology has been shown to be vulnerable to adversarial evasion attacks. In this paper, we performed five adversarial evasion attacks, including MI-FGSM (Momentum Iterative-Fast Gradient Sign Method), targeting the object detection algorithm YOLOv5 (You Only Look Once), and measured the object detection performance in terms of mAP (mean Average Precision). In particular, we present a method that applies morphology operations so that YOLO can detect objects normally, by removing noise and extracting boundaries. Our experimental analysis shows that when an adversarial attack was performed, YOLO's mAP dropped by at least 7.9%, whereas YOLO with our proposed method maintained object detection performance of up to 87.3% mAP.
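A hedged sketch of the kind of morphology-based cleanup described above: opening suppresses small bright perturbations and closing suppresses small dark ones, applied per colour channel before detection. The 3×3 structuring element and the opening-then-closing order are assumptions, not parameters from the paper.

```python
import numpy as np
from scipy import ndimage


def morphological_cleanup(img, size=3):
    """Remove pixel-level (adversarial) perturbations from an H x W x C
    image: grey opening kills isolated bright specks, grey closing kills
    isolated dark specks, while large structures are preserved."""
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        opened = ndimage.grey_opening(img[..., c], size=(size, size))
        out[..., c] = ndimage.grey_closing(opened, size=(size, size))
    return out
```

The cleaned image would then be fed to the detector in place of the raw (possibly attacked) frame.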

Seismic interval velocity analysis on prestack depth domain for detecting the bottom simulating reflector of gas-hydrate (가스 하이드레이트 부존층의 하부 경계면을 규명하기 위한 심도영역 탄성파 구간속도 분석)

  • Ko Seung-Won;Chung Bu-Heung
    • Proceedings of the Korean Society for New and Renewable Energy Conference
    • /
    • 2005.06a
    • /
    • pp.638-642
    • /
    • 2005
  • For gas hydrate exploration, long-offset multichannel seismic data were acquired using a 4-km streamer in the Ulleung Basin of the East Sea. The dataset was processed to define the BSRs (Bottom Simulating Reflectors) and to estimate the amount of gas hydrates. Confirming the presence of BSRs and investigating their physical properties from a seismic section are important for gas hydrate detection. In particular, a faster interval velocity overlying a slower interval velocity indicates the likely presence of gas hydrate above the BSR and free gas underneath it. Consequently, estimating correct interval velocities and analyzing their spatial variations are critical processes for gas hydrate detection using seismic reflection data. Using Dix's equation, Root Mean Square (RMS) velocities can be converted into interval velocities. However, this is not a proper way to investigate interval velocities above and below the BSR, considering that RMS velocities have poor resolution and accuracy and that the conversion assumes interval velocities increase with depth. Therefore, we used the Migration Velocity Analysis (MVA) software produced by Landmark Co. to estimate correct interval velocities in detail. MVA is a process that yields the layer velocities of sediments from Common Mid Point (CMP) gathered seismic data. The CMP gathers for MVA should be produced after basic processing steps that enhance the signal-to-noise ratio of the primary reflections. A prestack depth-migrated section is produced using interval velocities, and the interval velocities are the key parameters governing the quality of the prestack depth migration section. The correctness of the interval velocities can be examined from the Residual Move Out (RMO) on the CMP gathers. If there is no RMO, the peaks of primary reflection events are flat in the horizontal direction for all offsets of the Common Reflection Point (CRP) gathers, which proves that the prestack depth migration was done with a correct velocity field. The tomographic inversion method used in this study needs two initial inputs. One is the dataset obtained from preprocessing, in which multiples and noise were removed and the data partially stacked. The other is the depth-domain velocity model built by smoothing and editing the interval velocities converted from RMS velocities. After three iterations of the tomographic inversion, an optimum interval velocity field was obtained. In conclusion, the final interval velocity around the BSR drops abruptly from 2500 m/s to 1400 m/s, and the BSR appears at a depth of about 200 m below the seabottom.
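The Dix conversion mentioned above can be written compactly. This is the standard textbook form of Dix's equation, not the authors' code; it converts RMS velocities picked at two-way times into interval velocities, layer by layer.

```python
import numpy as np


def dix_interval_velocities(v_rms, t):
    """Dix's equation: interval velocity of layer n from RMS velocities
    v_rms picked at two-way travel times t (both measured from t = 0).

        V_int,n = sqrt( (V_rms,n^2 t_n - V_rms,n-1^2 t_n-1) / (t_n - t_n-1) )
    """
    v2t = np.concatenate(([0.0], v_rms ** 2 * t))
    tt = np.concatenate(([0.0], t))
    return np.sqrt(np.diff(v2t) / np.diff(tt))
```

As the abstract notes, this conversion degrades where RMS picks are poorly resolved, which is why the study refines the result with migration velocity analysis and tomographic inversion.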


The Effect of the Telephone Channel to the Performance of the Speaker Verification System (전화선 채널이 화자확인 시스템의 성능에 미치는 영향)

  • 조태현;김유진;이재영;정재호
    • The Journal of the Acoustical Society of Korea
    • /
    • v.18 no.5
    • /
    • pp.12-20
    • /
    • 1999
  • In this paper, we compared the speaker verification performance of speech data collected in a clean environment and in a telephone-channel environment. To improve the verification performance for speech gathered over a channel, we studied feature parameters that are effective in a channel environment, as well as preprocessing. The speech DB for the experiments consists of Korean number pairs, considering a text-prompted system. Speech features including LPCC (Linear Predictive Cepstral Coefficient), MFCC (Mel Frequency Cepstral Coefficient), PLP (Perceptually Linear Prediction), and LSP (Line Spectrum Pair) are analyzed. The preprocessing step of filtering to remove channel noise is also studied. To remove or compensate for the channel effect in the extracted features, cepstral weighting, CMS (Cepstral Mean Subtraction), and RASTA (RelAtive SpecTrAl) processing are applied. By presenting the speech recognition performance for each feature and processing method, we compared speech recognition performance and speaker verification performance. For the evaluation of the applied speech features and processing methods, HTK (HMM Tool Kit) 2.0 is used. Applying different thresholds for male and female speakers, we compare the EER (Equal Error Rate) on the clean speech data and the channel data. Our simulation results show that removing low-band and high-band channel noise by applying a band-pass filter (150~3800 Hz) in the preprocessing procedure, and extracting MFCCs from the filtered speech, achieved the best speaker verification performance in terms of EER.
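Two of the steps named above, the 150~3800 Hz band-pass preprocessing and CMS, can be sketched as follows. This is a minimal illustration, not the paper's setup: the Butterworth design and 4th-order choice are assumptions, and CMS is shown on an arbitrary frames-by-coefficients cepstral matrix.

```python
import numpy as np
from scipy.signal import butter, sosfilt


def channel_bandpass(x, fs, lo=150.0, hi=3800.0, order=4):
    """Band-pass filter that discards low-band and high-band channel noise
    (Butterworth here; the paper only specifies the 150~3800 Hz band)."""
    sos = butter(order, [lo, hi], btype='bandpass', fs=fs, output='sos')
    return sosfilt(sos, x)


def cms(cepstra):
    """Cepstral Mean Subtraction: subtracting the per-utterance mean of
    each cepstral coefficient cancels a stationary convolutive channel,
    which appears as an additive constant in the cepstral domain."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```

Because a fixed telephone channel adds the same offset to every frame's cepstrum, CMS makes features from clean and channel-distorted recordings directly comparable.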


Study on Improvement for selecting the optimum voice channels in the radio voice communication (무전기 음성통신에서 최적음성채널 선택을 위한 개선방안에 관한 연구)

  • Lew, Chang-Guk;Lee, Bae-Ho
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.11 no.2
    • /
    • pp.171-178
    • /
    • 2016
  • An aircraft in flight and ATC (Air Traffic Controllers) working in a ground control center carry out voice communication using radios. A voice signal transmitted from the aircraft is received simultaneously by multiple terrestrial sites around the country. The ATC therefore receives voice signals of varying quality from the aircraft, depending on the distance, speed, weather conditions, and the condition of the antennas and radios. The ATC communicates with the aircraft under optimal conditions by finding the best voice signal. However, the present system chooses as the optimal channel the one whose CD (Carrier Detect) value, based on the input voice level, is judged superior. This cannot be considered selection of the optimal channel, because it does not account for the noise that influences communication quality. In this paper, after removing the noise in the voice signal, we provide digitized quality information and an improved voice signal so that users can select the optimal channel. Using this approach, when operating a training eavesdropping system or controlling aircraft, we can expect accident prevention and improved training performance by selecting the channel with the best quality.
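The core idea, ranking channels by a noise-aware quality measure rather than raw input level, can be illustrated with a simple SNR estimate. This is purely illustrative: the abstract does not specify the paper's noise-removal or digitization method, and the assumption that a leading noise-only segment exists before speech onset is ours.

```python
import numpy as np


def estimate_snr_db(x, noise_samples=1000):
    """Rough SNR estimate: treat the first samples (before speech onset)
    as noise, the remainder as speech plus noise."""
    noise_p = np.mean(x[:noise_samples] ** 2)
    signal_p = max(np.mean(x[noise_samples:] ** 2) - noise_p, 1e-12)
    return 10.0 * np.log10(signal_p / noise_p)


def best_channel(channels, noise_samples=1000):
    """Pick the receive site whose signal has the highest estimated SNR,
    instead of merely the highest input level (the CD-based criterion)."""
    snrs = [estimate_snr_db(c, noise_samples) for c in channels]
    return int(np.argmax(snrs)), snrs
```

A louder but noisier site loses to a quieter, cleaner one, which is exactly the failure mode of the level-based CD criterion described above.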

Performance of VLC-CDMA Communication System Using LED (LED를 이용한 VLC-CDMA 통신 시스템 성능 분석)

  • Bae, Su-Jin;Hong, Yeong-Jo;Lee, Kye-San
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.8 no.2
    • /
    • pp.83-90
    • /
    • 2009
  • White LEDs (Light Emitting Diodes) offer advantageous properties such as high brightness, improved reliability, low power consumption, and long lifetime. An LED is an electronic device that converts an electrical signal into a light signal, and it is used not only for indoor illumination but also for wireless optical communication systems. Studies on white LEDs are currently in progress, and in this paper we discuss multiplexing and multiple-access methods for VLC (Visible Light Communication) systems using white LEDs. In the proposed system, CDMA (Code Division Multiple Access) is applied to the VLC system to reduce interference and improve capacity. The superiority of OOK modulation is presented by comparing the VLC-CDMA communication system using OOK (On-Off Keying) modulation and BPSK modulation in an AWGN (Additive White Gaussian Noise) channel and a diffuse channel. We also investigate a solution to multipath interference by comparing the BER in a multipath channel and an AWGN channel. In the proposed system, we assume a directed LOS (Line Of Sight) link and a diffuse link, and adopt VLC-CDMA using OOC (Optical Orthogonal Codes) as the optical spreading code to increase system efficiency by removing the ISI (Inter-Symbol Interference) caused by multiple access, and we present an analysis of its performance.
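The OOC-based spreading mentioned at the end can be sketched for a single user. This is illustrative only: the length-13, weight-3 codeword and the correlation threshold equal to the code weight are our assumptions, not the paper's parameters. Optical orthogonal codes are designed so that different users' codewords overlap in at most one chip, which is what makes the threshold decision robust.

```python
import numpy as np


def spread(bits, code):
    """Unipolar spreading for optical CDMA with OOK chips: a '1' bit
    transmits the OOC codeword, a '0' bit transmits nothing."""
    return np.concatenate([code if b else np.zeros_like(code) for b in bits])


def despread(chips, code):
    """Correlate each codeword-length block with the user's OOC and
    decide '1' when the correlation reaches the code weight."""
    w = code.sum()
    blocks = chips.reshape(-1, len(code))
    return (blocks @ code >= w).astype(int)
```

With cross-correlation at most one chip, an interfering user can raise a correlator output by at most 1, which neither pushes a '0' above the weight-w threshold nor pulls a '1' below it.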


Multi-spectral Flash Imaging using Region-based Weight Map (영역기반 가중치 맵을 이용한 멀티스팩트럼 플래시 영상 획득)

  • Choi, Bong-Seok;Kim, Dae-Chul;Lee, Cheol-Hee;Ha, Yeong-Ho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.9
    • /
    • pp.127-135
    • /
    • 2013
  • In order to acquire images in low-light environments, it is usually necessary to adopt long exposure times or resort to flash lights. However, flashes often induce color distortion, cause the red-eye effect, and can be disturbing to subjects. On the other hand, long-exposure shots are susceptible to subject motion, as well as motion blur due to camera shake when performed hand-held. A recently introduced technique to overcome the limitations of traditional low-light photography is multi-spectral flash. Multi-spectral flash images combine UV/IR and visible-spectrum information: the general idea is to retrieve detail from the UV/IR spectrum and color from the visible spectrum. However, multi-spectral flash images are themselves subject to color distortion and noise. This work presents a method to compute multi-spectral flash images so that noise is reduced and color accuracy improved. The proposed approach is an existing optimization method, improved by introducing a weight map that discriminates uniform regions from detail regions. The weight map is generated by applying the Canny edge operator and is used in the optimization process to discriminate the weights of uniform regions and edges. Accordingly, the weight of the color information is increased in uniform regions and decreased in detail regions. The proposed method therefore enhances color reproduction and removes artifacts. Its performance has been objectively evaluated using long-exposure shots as references.
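The region-based weight map can be sketched as below. Note this stand-in uses a Sobel gradient magnitude where the paper uses the Canny operator, and the threshold, dilation, and smoothing parameters are assumptions; the point is only the shape of the map, near 1 in uniform regions (favouring colour) and near 0 around edges (favouring detail).

```python
import numpy as np
from scipy import ndimage


def region_weight_map(gray, thresh=50.0, blur=2.0):
    """Weight map separating uniform regions (weight near 1) from detail
    regions (weight near 0). Gradient magnitude stands in for Canny."""
    g = gray.astype(float)
    gx = ndimage.sobel(g, axis=1)
    gy = ndimage.sobel(g, axis=0)
    edges = np.hypot(gx, gy) > thresh              # binary edge/detail mask
    detail = ndimage.binary_dilation(edges, iterations=2)
    w = np.where(detail, 0.0, 1.0)
    return ndimage.gaussian_filter(w, blur)        # smooth region transitions
```

In the optimization, such a map would scale the colour-fidelity term up in flat areas and down near edges, where the UV/IR detail term should dominate.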

Comparative Analysis among Radar Image Filters for Flood Mapping (홍수매핑을 위한 레이더 영상 필터의 비교분석)

  • Kim, Daeseong;Jung, Hyung-Sup;Baek, Wonkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.34 no.1
    • /
    • pp.43-52
    • /
    • 2016
  • Owing to the characteristics of microwave signals, radar satellite images have been used for flood detection regardless of weather and time of day. As more flood detection methods have been developed, the detection rate of flooded areas has increased. Since floods cause extensive damage, flooded areas must be distinguished from non-flooded areas, and the detection must be accurate. Therefore, not only the image resolution but also the filtering process is critical for minimizing resolution degradation. Although the resolution of radar images has improved as technology develops, limited attention has been paid to filtering methods that are well suited to flood detection. The purpose of this study is thus to find the most appropriate filtering method for flood detection by comparing three filters: the Lee filter, the Frost filter, and the NL-means filter. Each filter was applied to the radar image, the filtered images were compared, and the resulting flood maps were then compared in turn. As a result, the Frost and NL-means filters were more effective in removing speckle noise than the Lee filter. In the case of the Frost filter, severe resolution degradation occurred during noise removal. In the case of the NL-means filter, the shadow effect, which can be one of the main causes of false detection, was not eliminated as well as with the other filters. Nevertheless, the NL-means result shows the best detection rate, because the number of shadow pixels is relatively low in the entire image. The Kappa coefficient is 0.81 for the NL-means-filtered image, versus 0.55, 0.64, and 0.74 for the unfiltered, Lee-filtered, and Frost-filtered images, respectively. Moreover, the NL-means filter removed speckle noise without resolution degradation, so flooded areas could be distinguished effectively from other areas in the NL-means-filtered image.
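Of the three filters compared, the Lee filter is the simplest to sketch: each pixel is blended with its local mean according to the ratio of local variance to an assumed noise variance, so flat (speckle-dominated) areas are smoothed while strong edges pass through nearly unchanged. The 7×7 window and the noise-variance estimate below are assumptions, not the study's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def lee_filter(img, size=7, noise_var=None):
    """Classic Lee speckle filter: out = mean + k * (x - mean), with
    k = local_var / (local_var + noise_var). k -> 0 in flat regions
    (strong smoothing), k -> 1 on edges (little smoothing)."""
    x = img.astype(float)
    mean = uniform_filter(x, size)
    sq_mean = uniform_filter(x ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    if noise_var is None:
        noise_var = np.mean(var)  # crude global noise estimate
    k = var / (var + noise_var)
    return mean + k * (x - mean)
```

This edge-adaptive behaviour is also why, per the study, the Lee filter removes less speckle overall than Frost or NL-means in heavily speckled scenes.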

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require many computations, which can lead to high computational cost and overfitting in the model. The dimension-reduction process is thus necessary to improve the performance of the model. Diverse methods have been proposed, ranging from simply lessening the noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once a feature selection algorithm identifies unimportant words, we assume that words similar to the selected words also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: selective word elimination under specific rules, and construction of word embeddings based on Word2Vec.
To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally select words that are similar to the words with low information gain values and build word embeddings. Finally, the filtered text and word embeddings are fed into the deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets, and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes, with a ratio of helpful votes over 70%, were classified as helpful reviews. Yelp shows only the number of helpful votes, so we extracted 100,000 reviews that received more than five helpful votes, using random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text data, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that use all the words. We showed that one of the proposed methods outperforms the embeddings using all the words: by removing unimportant words, we can obtain better performance. However, removing too many words lowered the performance. Future research should consider diverse preprocessing methods and an in-depth analysis of word co-occurrence for measuring similarity values among words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, and the possible combinations of word embedding and elimination methods could be explored.
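The two-step elimination (lowest information gain, plus embedding neighbours of those words) can be sketched as follows for binary word-presence features and binary labels. This is a minimal illustration, not the paper's code: the similarity threshold of 0.8 and the use of binary presence features are our assumptions.

```python
import numpy as np


def information_gain(X, y):
    """IG of each binary word feature w.r.t. binary labels y:
    IG(w) = H(y) - P(w) H(y|w) - P(~w) H(y|~w)."""
    def H(p):
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    p_w = X.mean(axis=0)
    p_y_given_w = (X * y[:, None]).sum(axis=0) / np.maximum(X.sum(axis=0), 1e-12)
    p_y_given_not = ((1 - X) * y[:, None]).sum(axis=0) / np.maximum((1 - X).sum(axis=0), 1e-12)
    return H(y.mean()) - p_w * H(p_y_given_w) - (1 - p_w) * H(p_y_given_not)


def words_to_drop(ig, emb, k, sim_thresh=0.8):
    """Drop the k lowest-IG words, plus any word whose embedding cosine
    similarity to a dropped word exceeds sim_thresh."""
    low = set(np.argsort(ig)[:k])
    unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = unit @ unit.T
    extra = {j for i in low for j in range(len(emb))
             if j not in low and sims[i, j] > sim_thresh}
    return low | extra
```

The surviving vocabulary would then define both the filtered text and the reduced Word2Vec embedding matrix fed to the classifiers.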