• Title/Summary/Keyword: Noisy

Search Results: 1,570

A Survey on the Utilization University Food Service by Student in Daejeon City (대전지역 대학생들의 대학급식소 이용실태 조사)

  • 박상욱;장영상
    • Korean journal of food and cookery science
    • /
    • v.14 no.4
    • /
    • pp.400-406
    • /
    • 1998
  • A questionnaire survey was conducted on students' use of college food service facilities. Three hundred and nine students from three universities in Daejeon, Korea participated, and the results are summarized as follows. 1. Two hundred and ninety-six students had experience using the college food services; among them, 87.50% used them for lunch, and the lunch utilization ratio of female students was 17.94% higher than that of male students. College B students showed an especially low utilization ratio. 2. The most common utilization frequency was once per day (30.72%), and College C showed the highest utilization ratio. 3. Students used the campus food services most frequently (47.97%), followed by restaurants outside campus and then the snack corner on campus. 4. Reasons given for utilization were low price, time saving, proximity, and having no other place to eat. 5. Reasons for non-utilization were poor taste, a limited menu, and noisy, crowded conditions. 6. Recommended improvements for the college food services were better taste, menu variety, sanitation, lower prices, and a more comfortable atmosphere.


Development of Acquisition and Analysis System of Radar Information for Small Inshore and Coastal Fishing Vessels - Suppression of Radar Clutter by CFAR - (연근해 소형 어선의 레이더 정보 수록 및 해석 시스템 개발 - CFAR에 의한 레이더 잡음 억제 -)

  • 이대재;김광식;신형일;변덕수
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.39 no.4
    • /
    • pp.347-357
    • /
    • 2003
  • This paper describes the suppression of sea clutter on a marine radar display using a cell-averaging CFAR (constant false alarm rate) technique, and the analysis of radar echo signal data in relation to the estimation of ARPA functions and the detection of the shadow effect in clutter returns. The echo signal was measured using an X-band radar located at Pukyong National University, with a horizontal beamwidth of $3.9^{\circ}$, a vertical beamwidth of $20^{\circ}$, a pulsewidth of $0.8\,{\mu}s$ and a transmitted peak power of 4 kW. The suppression performance for sea clutter was investigated for probabilities of false alarm between $10^{-0.25}$ and $10^{-1.0}$, and the performance of cell-averaging CFAR was compared with that of an ideal fixed threshold. The motion vectors and trajectories of ships were extracted, and the shadow effect in clutter returns was analyzed. The results obtained are summarized as follows: 1. The ARPA plotting results and motion vectors for acquired targets, extracted by analyzing the echo signal data, were displayed on the PC-based radar system, and the continuous trajectories of ships were tracked in real time. 2. To suppress sea clutter in a noisy environment, a cell-averaging CFAR processor with a total CFAR window of 47 samples (20+20 reference cells, 3+3 guard cells and the cell under test) was designed. On a particular data set acquired at Suyong Man, Busan, Korea, with a probability of false alarm of $10^{-0.75}$ applied to the designed processor, the suppression of radar clutter was significantly improved. These results suggest that the designed cell-averaging CFAR processor is very effective in uniform clutter environments. 3. The cell-averaging CFAR can give a considerable improvement in suppression of uniform sea clutter compared to the ideal fixed threshold. 4. The effective height of a target, estimated by analyzing the shadow effect in clutter returns for a number of range bins behind the target as seen from the radar antenna, was approximately 1.2 m, and this height information can be used to extract the shape parameter of the tracked target.
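The cell-averaging CFAR window described in the abstract (20+20 reference cells, 3+3 guard cells, and the cell under test) can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; it assumes exponentially distributed clutter power, for which the threshold scaling factor has a closed form.

```python
import numpy as np

def ca_cfar(power, n_ref=20, n_guard=3, pfa=10**-0.75):
    """Cell-averaging CFAR detector (sketch).

    power   : 1-D array of received echo power per range bin
    n_ref   : reference cells on EACH side (20+20 in the paper)
    n_guard : guard cells on each side (3+3 in the paper)
    pfa     : desired probability of false alarm
    """
    n = 2 * n_ref  # total reference cells
    # Threshold multiplier for exponentially distributed clutter power
    alpha = n * (pfa ** (-1.0 / n) - 1.0)
    half = n_ref + n_guard
    detections = np.zeros_like(power, dtype=bool)
    for i in range(half, len(power) - half):
        lead = power[i - half : i - n_guard]          # leading reference cells
        lag = power[i + n_guard + 1 : i + half + 1]   # lagging reference cells
        noise_est = (lead.sum() + lag.sum()) / n      # local clutter estimate
        detections[i] = power[i] > alpha * noise_est  # adaptive threshold test
    return detections
```

Because the threshold adapts to the local clutter estimate, the false alarm rate stays near `pfa` even when the clutter level varies across range, which is the advantage over a fixed threshold that the abstract reports.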

Effect of Noise in Human Body (소음이 인체에 미치는 영향)

  • 이영노
    • Proceedings of the KOR-BRONCHOESO Conference
    • /
    • 1972.03a
    • /
    • pp.7-8
    • /
    • 1972
  • The effects of noise exposure are of two types: nonauditory and auditory. Nonauditory effects include interference with speech communication, sleep and emotional behavior. Noise causes high blood pressure and a rapid pulse, and decreases salivation and gastric juice secretion; experiments have shown that corticoid and gonadotropic hormones decrease while thyrotropic hormone increases. As for auditory effects, when the normal ear is exposed to noise at harmful intensities (above 90 dB) for sufficiently long periods of time, a temporary depression of hearing results, disappearing after minutes or hours of rest. When the exposure is longer or the intensity greater, a permanent threshold shift, called noise-induced hearing loss, is reached. Hearing loss resulting from noise exposure presents legal as well as medical problems. The otologist who examines and evaluates industrial hearing loss cases must be properly informed, not only about the otologic aspects but also about the physical and legal aspects of the problem. The measurement of hearing ability is the most important part of hearing conservation, with both preplacement and periodic follow-up tests of hearing. The ideal hearing conservation program would reduce or eliminate hazardous noise at its source or by acoustic isolation of noisy working areas; two types of ear protection (plug and muff) were developed for personal protection.


A Clinical Study on Binaural Hearing Aid (양이 보청효과에 관한 연구)

  • 김기령;김영명;심윤주
    • Proceedings of the KOR-BRONCHOESO Conference
    • /
    • 1978.06a
    • /
    • pp.9.2-9
    • /
    • 1978
  • Monaural and binaural hearing aid performance under quiet and noisy conditions was compared with respect to (1) the degree of hearing impairment, (2) the symmetry of the pure tone audiogram, (3) the automatic gain control (AGC) of the hearing aid, and (4) hearing impairment with recruitment and word discrimination ability. Performance using binaural hearing aids was consistently superior to that using monaural hearing aids. The results were as follows. 1. Speech detection thresholds were enhanced by a mean of 4.25 dB when tested with a Danavox 747 PP stereo-type hearing aid and by a mean of 4.12 dB when tested with hearing aids connected separately to the right and left ears. 2. Binaurally tested speech reception thresholds were superior to monaurally tested thresholds by a mean of 3.56 dB in quiet and by a mean of 5.56 dB in noise. 3. Binaurally tested word discrimination scores were also superior, by a mean of 17.09% in quiet and 19.63% in noise. 4. Both SRT and word discrimination were performed best by subjects with moderately severe impairment, while the performance of one mildly impaired subject was the poorest of all. The order of performance levels was: moderately severe loss, severe loss, moderate loss and mild loss. 5. Compared with linear amplification, the results with AGC aids worn in both ears were very poor, but when one AGC aid was worn in one ear and linear amplification in the other, the results were good. 6. The advantages of binaural hearing aids were obvious even in cases 1) with great differences in hearing thresholds between the right and left ears, 2) when the subject was unable to discriminate words without vision, and 3) when the subject had an extreme recruitment phenomenon.


Noise Exposure Level Measurements for Different Job Categories on Ships (선박의 담당업무에 따른 소음노출레벨 측정에 관한 연구)

  • Im, Myeong-Hwan;Choe, Sang-Bom
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.27 no.6
    • /
    • pp.875-882
    • /
    • 2021
  • To minimize occupational noise-induced hearing loss, it is recommended that workers not be exposed to noise levels exceeding 85 dBA over an 8 h period. In the present study, noise exposure levels were measured for seven workers based on their tasks on a training ship. The A-weighted noise exposure level (Lex,24h) was measured by taking into account the A-weighted equivalent continuous sound level (LAeq,i), the duration (h) and the noise contribution (Lex,24h,i) at the workers' locations. Results for the different job categories were as follows: officer group Lex,24h=56.1 dB, navigation crew Lex,24h=58.9 dB, navigation cadet Lex,24h=62.0 dB, ship's cook Lex,24h=64.3 dB, engine cadet Lex,24h=91.1 dB, engineer Lex,24h=91.1 dB, and engine crew Lex,24h=95.1 dB. It was determined that the engineers, engine crews, and engine cadets in charge of machinery must wear hearing protection devices. By wearing hearing protection devices when working in highly noisy engine rooms, the noise exposure levels could be reduced by the following amounts: engineer Lex,24h=23.1 dB, engine crew Lex,24h=24.4 dB, and engine cadet Lex,24h=21.5 dB. Moreover, if the bottom plates of the no. 2 lecture room and mess room in the cadets' accommodation were upgraded to 64 mm A-60-class floating plates, further reductions would be possible: navigation cadet Lex,24h=4.3 dB and engine cadet Lex,24h=1.8 dB.
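The Lex,24h figure the study reports is an energy average of the per-location equivalent levels LAeq,i over the full 24 h reference period. A minimal sketch of that combination rule follows; the function name and the example values in the usage note are ours, not the paper's measurements.

```python
import math

def lex_24h(levels_and_hours):
    """24-hour noise exposure level from (LAeq_i in dB, duration_i in hours)
    pairs: convert each level to relative sound energy, time-weight it,
    and energy-average over the 24 h reference period (sketch)."""
    energy = sum(t * 10 ** (L / 10.0) for L, t in levels_and_hours)
    return 10.0 * math.log10(energy / 24.0)
```

For example, a constant 85 dB over all 24 h gives Lex,24h = 85 dB, while 85 dB for only 8 h (and silence otherwise) gives about 80.2 dB, since the exposure energy is spread over the 24 h reference period.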

Improved Method of License Plate Detection and Recognition using Synthetic Number Plate (인조 번호판을 이용한 자동차 번호인식 성능 향상 기법)

  • Chang, Il-Sik;Park, Gooman
    • Journal of Broadcast Engineering
    • /
    • v.26 no.4
    • /
    • pp.453-462
    • /
    • 2021
  • A large amount of license plate data is required for car number recognition, and the data need to be balanced from past license plate designs to the latest ones. However, it is difficult to obtain real data spanning past to current plates. To solve this problem, license plate recognition studies based on deep learning use synthetically generated license plates. Since synthetic data differ from real data, various data augmentation techniques are used to close the gap. Existing augmentation simply uses methods such as brightness change, rotation, affine transformation, blur, and noise. In this paper, we additionally apply a style-transfer method that converts synthetic data to the style of real-world data. Furthermore, real license plate images are noisy when captured from a distance or in dark environments, and if characters are recognized directly from such input, the chance of misrecognition is high. To improve character recognition, we applied the DeblurGANv2 method as an image quality enhancement step, increasing the accuracy of license plate recognition. YOLO-V5 was used as the deep learning method for license plate detection and license plate number recognition. To measure the performance of the synthetic license plate data, we constructed a test set from license plates we collected ourselves. License plate detection without style conversion recorded 0.614 mAP; after applying the style transformation, detection performance improved to 0.679 mAP. In addition, the successful detection rate without image enhancement was 0.872, rising to 0.915 after image enhancement, confirming the improvement.
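The classical augmentations the paper lists before its style-transfer step can be sketched as below. This is an illustrative NumPy-only sketch (brightness shift, additive noise, and a simple 3×3 box blur standing in for a proper blur kernel); a real pipeline would typically use a library such as albumentations, and the parameter ranges here are our assumptions.

```python
import numpy as np

def augment(img, rng):
    """Basic plate-image augmentation sketch: brightness shift,
    additive Gaussian noise, and a 3x3 box blur, on a grayscale
    uint8 image. Parameter ranges are illustrative assumptions."""
    out = img.astype(np.float32)
    out += rng.uniform(-30, 30)                   # global brightness shift
    out += rng.normal(0.0, 5.0, size=out.shape)   # sensor-style noise
    k = np.ones((3, 3), dtype=np.float32) / 9.0   # 3x3 box blur kernel
    pad = np.pad(out, 1, mode="edge")
    blurred = sum(
        pad[i : i + out.shape[0], j : j + out.shape[1]] * k[i, j]
        for i in range(3)
        for j in range(3)
    )
    return np.clip(blurred, 0, 255).astype(np.uint8)
```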

A COVID-19 Diagnosis Model based on Various Transformations of Cough Sounds (기침 소리의 다양한 변환을 통한 코로나19 진단 모델)

  • Minkyung Kim;Gunwoo Kim;Keunho Choi
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.57-78
    • /
    • 2023
  • COVID-19, which emerged in Wuhan, China in November 2019, spread beyond China in early 2020 and worldwide by March 2020. For a highly contagious virus like COVID-19, advance prevention and active treatment of confirmed cases are important, but identifying infections quickly to prevent spread is even more important. However, PCR testing is costly and time-consuming, and while self-test kits are easy to access, their cost makes repeated testing burdensome. Therefore, if COVID-19 infection could be determined from the sound of a cough, anyone could easily check their status anytime and anywhere, with great economic advantages. In this study, we experimented with identifying COVID-19 infection from cough sounds. Cough sound features were extracted using MFCC, Mel-spectrogram, and spectral contrast. To ensure sound quality, noisy recordings were removed using an SNR criterion, and only the cough segments were extracted from each audio file through chunking. Since the objective is binary classification of COVID-19 positive and negative cases, models were trained with the XGBoost, LightGBM, and FCNN algorithms, which are commonly used for classification, and the results were compared. Additionally, we compared model performance using multidimensional vectors obtained by converting cough sounds into both images and vectors. The experimental results showed that the LightGBM model, using basic health-status information together with cough sounds converted into multidimensional vectors through MFCC, Mel-spectrogram, spectral contrast, and spectrogram features, achieved the highest accuracy of 0.74.
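The SNR-based filtering step the abstract mentions (discarding noisy recordings before feature extraction) could look something like the sketch below. This is a crude percentile-based SNR estimate of our own devising, not the paper's exact criterion; the threshold value is an assumption.

```python
import numpy as np

def estimate_snr_db(signal, frame=512):
    """Crude SNR estimate (sketch): split the waveform into frames,
    treat the quietest frames as the noise floor and the loudest as
    signal, and return their power ratio in dB."""
    n_frames = len(signal) // frame
    frames = signal[: n_frames * frame].reshape(n_frames, frame)
    power = (frames ** 2).mean(axis=1)
    noise = np.percentile(power, 10) + 1e-12   # noise-floor estimate
    peak = np.percentile(power, 90) + 1e-12    # signal-level estimate
    return 10.0 * np.log10(peak / noise)

def keep_clean(recordings, threshold_db=10.0):
    """Keep only recordings whose estimated SNR exceeds the threshold."""
    return [r for r in recordings if estimate_snr_db(r) > threshold_db]
```

A recording with distinct cough bursts over near-silence scores a high SNR and is kept, while a recording of uniform background noise scores near 0 dB and is discarded.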

A Study of Cultural Migration of Pungmul-gut - Focusing on a Pungmul-pae's Activity in Toronto, Canada - (풍물굿의 해외 문화이주 현상에 관한 연구 - 캐나다 토론토의 풍물패 활동을 중심으로 -)

  • Lee, Yon-Shik
    • (The) Research of the performance art and culture
    • /
    • no.41
    • /
    • pp.353-380
    • /
    • 2020
  • Samul nori/pungmul-gut is a symbol of ethnic identity for Koreans abroad. It is the representative diaspora musical genre, performed at many cultural events held by Koreans, and at the same time a global music appreciated not only by Koreans but also by foreigners. Musical communities in various countries exhibit cultural migration through the discourse of 'tradition/variation' and 'authenticity/hybridity' in the course of the acculturation and enculturation of samul nori/pungmul-gut. The pungmul-pae 'Bichoe June', active in Toronto, Canada, was organized by a foreign performer. For foreigners, pungmul-gut is easy to approach as a genre of world music and, as a percussion ensemble, easy to learn. 'Bichoe June' is a music community consisting of Koreans and foreigners. The band tries to preserve the traditionality and authenticity of Korean music; there is no variation or hybridity in its music, since the members still learn the authentic music through various available textbooks and internet sites. Through the participation of Koreans and foreigners, the band stimulates the globalization of pungmul-gut. The enculturation of pungmul-gut was exhibited in two performances held by the band: one hosted by a Canadian progressive group and the other by a conservative Korean community. The former understood the nature of pungmul-gut as the music of the common people; the latter accepted it as representative traditional music but did not find it easy to enjoy the 'noisy' music. In other words, the positive or negative acceptance of pungmul-gut depends on the ideological orientation of the listeners rather than on their ethnicity.

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and exhibit a great deal of noise. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting; the polynomial kernel, however, shows markedly lower forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher than today's, buy volatility today; if lower, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values cannot themselves be traded, but our simulation results are still meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based ones in the testing period: the profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for the SVR-based version; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3% for the SVR-based version. The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in trading, including brokerage commissions and slippage. The IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine learning-based GARCH models can give better information for stock market investors.
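The IVTS entry rules stated in the abstract reduce to a simple position-generation loop. The sketch below implements those three rules directly; the function name and input format are ours.

```python
def ivts_positions(vol_forecast):
    """IVTS entry rules (sketch): go long volatility when tomorrow's
    forecast exceeds today's, go short when it is below, and hold the
    existing position when the forecast is unchanged."""
    pos, cur = [], 0
    for today, tomorrow in zip(vol_forecast, vol_forecast[1:]):
        if tomorrow > today:
            cur = +1   # buy volatility today
        elif tomorrow < today:
            cur = -1   # sell volatility today
        # unchanged forecast: keep the existing position
        pos.append(cur)
    return pos
```

For a forecast path of [10, 12, 12, 9], the rules produce positions [+1, +1, -1]: long on the rise, held through the flat day, then flipped short on the fall.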

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼루션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful deep neural network that can analyze and learn hierarchies of visual features. The first such network (the Neocognitron) was introduced in the 1980s, but at that time neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires a great deal of effort; moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only.
However, directly applying the high-dimensional features extracted from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers address different characteristics of the image, which means a better representation can be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of the three fully connected layers are extracted. Second, the activation features of these three layers are concatenated to gain more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for multiple ConvNet layer representations.
Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our approach achieved superior performance, with accuracy improvements of 2.8%, 2.1% and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
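The concatenate-then-reduce pipeline described above can be sketched as follows. This is an illustrative SVD-based PCA in plain NumPy rather than the paper's exact procedure; the feature dimensions in the usage note below are toy values standing in for the 4096+4096+1000 AlexNet layers.

```python
import numpy as np

def concat_and_pca(feat_lists, n_components=64):
    """Concatenate per-layer activation features (e.g. FC6+FC7+FC8,
    4096+4096+1000 = 9192 dims in the paper) and reduce the redundant,
    noisy combined representation with PCA (sketch)."""
    X = np.concatenate(feat_lists, axis=1)     # (n_samples, total_dims)
    X = X - X.mean(axis=0, keepdims=True)      # center before PCA
    # Principal directions are the right singular vectors of the
    # centered data matrix; project onto the leading ones.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:n_components].T             # salient components
```

With, say, three layers of 40, 30 and 10 dimensions over 20 samples, the concatenated 80-dimensional representation is projected down to `n_components` salient dimensions before classifier training.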