• Title/Summary/Keyword: selection of representative noise level

Comparison of Evaluation Methods for Receiver Setting and Representative Noise Level Selection in Calculating Population Exposed to Noise

  • Yun, Hee-Kyung; Lee, Jae-Won; Kwon, Myung-Hee
    • Journal of Environmental Impact Assessment / v.27 no.2 / pp.105-113 / 2018
  • Noise mapping and the evaluation of noise pollution based on the exposed population are frequently used as indicators in environmental noise assessment to overcome the limitations of field surveys and the Tele-Monitoring System. Results from these methods are highly influenced by how the noise sources and receiver points are set, by the input data for the prediction factors, and by how the predicted values are analyzed. In both study areas, the population exposed to noise was estimated in the order M1-1 > M2-1 > Base > M2-2 > M1-2; the highest-noise-level setting methods (M1-1, M1-2) overestimated exposure compared with the Base method.
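
The ranking above turns on how one representative level is taken from the receiver points set on each building and compared against a threshold to count exposed residents. A minimal Python sketch of that counting step follows; the "highest" versus "energy-averaged" rules, the 55 dB threshold, and the toy building data are illustrative assumptions, not the paper's model or settings.

```python
import numpy as np

def representative_level(facade_levels_db, method="highest"):
    """Reduce the predicted levels (dB) at a building's receiver
    points to one representative noise level."""
    levels = np.asarray(facade_levels_db, dtype=float)
    if method == "highest":        # worst-case facade level
        return levels.max()
    if method == "energy_mean":    # energetic average over facades
        return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))
    raise ValueError(f"unknown method: {method}")

def exposed_population(buildings, threshold_db=55.0, method="highest"):
    """Sum the population of buildings whose representative level
    reaches the threshold (55 dB is a common reporting band edge)."""
    return sum(pop for levels, pop in buildings
               if representative_level(levels, method) >= threshold_db)

# Toy data: (facade receiver levels in dB, building population).
buildings = [([58.2, 54.7, 51.3], 120), ([56.0, 50.0, 48.0], 80)]
print(exposed_population(buildings, method="highest"))      # 200
print(exposed_population(buildings, method="energy_mean"))  # 120
```

With the toy data the worst-case rule counts both buildings as exposed, while the averaged rule drops the second one, mirroring how the highest-level methods (M1-1, M1-2) overestimate relative to the Base method.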

An Effective Selection of White Gaussian Noise Sub-band using Singular Value Decomposition

  • Shin, Seung-Min; Kim, Young-Soo; Kim, Sang-Tae; Suk, Mi-Kyung
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.3A / pp.272-280 / 2009
  • Measurement of background radio noise is an important process used in surveying the radio noise environment, in calculating threshold levels for frequency occupancy measurement, and so forth. The first step of background radio noise measurement is to select, within the target band, the sample sub-band that is most dominated by background white Gaussian noise (WGN). The second step is to carry out the main measurement of radio noise on this selected sample sub-band to obtain a representative value of the noise power. This paper proposes an SVD-based method for selecting the sample sub-band for effective background radio noise measurement, under the assumption that the background radio noise is WGN. The performance of the proposed method is compared with that of the APD (amplitude probability distribution) method, which is widely used for the same purpose, and simulation results demonstrate that the proposed method outperforms the existing APD method.
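
The abstract does not spell out the SVD statistic, so the sketch below is a plausible reconstruction rather than the paper's method: form a Hankel (trajectory) matrix from the samples of each candidate sub-band, take its singular values, and score their flatness, which is close to 1 for WGN and drops when a few components dominate (a structured signal). Splitting one time record into segments stands in for true frequency channelization.

```python
import numpy as np

def hankel_matrix(x, rows):
    """Trajectory (Hankel) matrix: rows x (len(x) - rows + 1)."""
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

def whiteness_score(x, rows=32):
    """Flatness of the singular-value spectrum (geometric over
    arithmetic mean): near 1 for WGN, smaller for structured signals."""
    s = np.linalg.svd(hankel_matrix(x, rows), compute_uv=False)
    return np.exp(np.mean(np.log(s))) / np.mean(s)

def select_wgn_subband(samples, n_subbands):
    """Pick the candidate sub-band that looks most like WGN."""
    segments = np.array_split(samples, n_subbands)
    scores = [whiteness_score(seg) for seg in segments]
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(0)
noise = rng.normal(size=4096)                       # WGN-dominated band
tone = noise + 0.8 * np.sin(0.2 * np.arange(4096))  # band with a carrier
band = np.concatenate([tone[:2048], noise[2048:]])
idx, scores = select_wgn_subband(band, n_subbands=2)
print(idx, [round(s, 3) for s in scores])           # expects idx == 1
```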

Wyner-Ziv Video Compression using Noise Model Selection

  • Park, Chun-Ho; Shim, Hiuk-Jae; Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.4 / pp.58-66 / 2009
  • The recent demand for light-weight video encoders has motivated much research on DVC (Distributed Video Coding), and Wyner-Ziv (WZ) video compression is one of its representative structures. The WZ encoder splits the sequence into two kinds of frames: key frames, compressed by conventional intra coding, and WZ frames, encoded by WZ coding. The WZ decoder decodes the key frames first and estimates each WZ frame from the temporal correlation between key frames. The estimated WZ frame (side information) cannot match the original WZ frame, because the decoder has no information about it; the difference between the estimated and original WZ frames is therefore regarded as virtual channel noise, and the WZ frame is reconstructed by removing this noise from the side information. Precise noise estimation thus yields a performance gain in WZ video compression by improving the error-correcting capability of the channel code. However, the noise cannot be estimated precisely at the WZ decoder without good information about the WZ frame, so it is generally estimated from the difference between corresponding key frames, and the estimate is further bounded by frame-level noise to reduce its uncertainty. Such methods still cannot provide a good noise estimate for every frame or every bit plane. In this paper, we propose a noise model selection method that generates candidate noise models and then chooses the better model for each bit plane. Experimental results show a PSNR gain of up to 0.8 dB.
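
The abstract does not specify the candidate models or the selection criterion. Virtual channel noise in WZ coding is commonly modeled as zero-mean Laplacian, so one plausible reading, sketched below, generates candidate Laplacian scale parameters and keeps, per bit plane, the one that best explains a residual proxy derived from the key-frame difference. The candidate alphas and the per-plane conditioning are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def laplacian_loglik(residual, alpha):
    """Log-likelihood of residuals under a zero-mean Laplacian
    p(x) = (alpha / 2) * exp(-alpha * |x|)."""
    r = np.abs(residual).astype(float)
    return r.size * np.log(alpha / 2.0) - alpha * r.sum()

def select_noise_model(residual, candidate_alphas):
    """Keep the candidate scale parameter that best explains the residual."""
    scores = [laplacian_loglik(residual, a) for a in candidate_alphas]
    return candidate_alphas[int(np.argmax(scores))]

def per_bitplane_models(side_info, key_diff, n_planes=8,
                        candidates=(0.05, 0.1, 0.2, 0.4)):
    """Choose one noise model per bit plane, MSB first. Conditioning the
    residual proxy on each bit plane of the side information is a
    hypothetical stand-in for the paper's per-plane statistics."""
    chosen = []
    for plane in range(n_planes - 1, -1, -1):
        mask = ((side_info.astype(np.uint8) >> plane) & 1).astype(bool)
        residual = key_diff[mask] if mask.any() else key_diff
        chosen.append(select_noise_model(residual, candidates))
    return chosen

# Sanity check: residuals drawn from a Laplacian with alpha = 0.2
# (scale b = 1 / alpha) should make the selector pick 0.2.
rng = np.random.default_rng(1)
residual = rng.laplace(scale=1.0 / 0.2, size=10_000)
print(select_noise_model(residual, (0.05, 0.1, 0.2, 0.4)))  # -> 0.2
```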

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity

  • Lee, Min Seok; Yang, Seok Woo; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one way to handle big data in text mining. When reducing dimensionality, we must consider the density of the data, which strongly influences the performance of sentence classification: data of higher dimension demands heavy computation and can cause overfitting, so a dimension-reduction step is necessary to improve model performance. Diverse methods have been proposed, from merely reducing noise in the data, such as misspellings and informal text, to incorporating semantic and syntactic information. Moreover, how text features are represented and selected affects classifier performance in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods employ various algorithms for dimensionality reduction, such as feature extraction and feature selection; in addition, word embeddings, which learn low-dimensional vector representations of words, can capture semantic and syntactic information from data. To improve performance, recent studies have suggested modifying the word dictionary according to positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm marks certain words as unimportant, we assume that words similar to them also have little impact on sentence classification. This study proposes two ways to achieve more accurate classification, both of which perform selective word elimination under specific rules and construct word embeddings based on Word2Vec. To find words of low importance in the text, we use information gain to measure importance and cosine similarity to search for similar words. In the first method, we eliminate words with comparatively low information gain from the raw text and build the word embedding. In the second, we additionally eliminate words similar to those with low information gain and build the word embedding. The filtered text and word embeddings are then fed to two deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each with the deep learning models. Reviews that received more than five helpful votes, with a helpful-vote ratio over 70%, were classified as helpful reviews; since Yelp shows only the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text, was applied to each dataset. To evaluate the proposed methods, we compared them against Word2Vec and GloVe embeddings built on all the words, and showed that one of the proposed methods outperforms the all-word embeddings: removing unimportant words improves performance, but removing too many words lowers it.
For future research, diverse preprocessing strategies and an in-depth analysis of word co-occurrence should be considered for measuring similarity between words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, making it possible to explore the combinations of word embedding methods and elimination methods.
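
As a concrete reading of the two proposed steps, here is a minimal Python sketch built on gensim's Word2Vec: it scores each word's information gain against binary helpfulness labels, marks low-IG words for elimination (first method), and expands that set with words whose cosine similarity to a low-IG word is high (second method). The toy corpus, the 0.1 IG cutoff, the topn=2 neighborhood, and the 0.5 similarity threshold are illustrative assumptions, not the study's settings.

```python
import numpy as np
from gensim.models import Word2Vec

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def information_gain(presence, labels):
    """IG of the class label from a binary word-presence feature."""
    labels = np.asarray(labels)
    ig = entropy(np.bincount(labels) / labels.size)
    for v in (0, 1):
        sub = labels[presence == v]
        if sub.size:
            ig -= sub.size / labels.size * entropy(np.bincount(sub) / sub.size)
    return ig

# Toy corpus: tokenized reviews with binary helpfulness labels.
docs = [["great", "battery", "life"], ["poor", "battery"],
        ["great", "screen"], ["poor", "screen", "life"]]
labels = [1, 0, 1, 0]

vocab = sorted({w for d in docs for w in d})
X = np.array([[int(w in d) for w in vocab] for d in docs])
ig = {w: information_gain(X[:, j], labels) for j, w in enumerate(vocab)}

# Method 1: eliminate words with comparatively low information gain.
low_ig = {w for w, g in ig.items() if g < 0.1}

# Method 2: also eliminate words similar to the low-IG words.
w2v = Word2Vec(docs, vector_size=16, min_count=1, seed=0)
expanded = set(low_ig)
for w in low_ig:
    expanded |= {s for s, sim in w2v.wv.most_similar(w, topn=2) if sim > 0.5}

filtered = [[w for w in d if w not in expanded] for d in docs]
print(sorted(low_ig))   # ['battery', 'life', 'screen']
print(filtered)
```

The filtered documents would then serve as input to the CNN or attention-based BiLSTM classifiers, with the Word2Vec vectors initializing the embedding layer.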