• Title/Summary/Keyword: noise correlation


Accuracy Assessment of Reservoir Depth Measurement Data by Unmanned Boat using GIS (GIS를 이용한 무인보트의 저수지 수심측정자료 정확도 평가)

  • Kim, Dae-Sik
    • Journal of Korean Society of Rural Planning
    • /
    • v.30 no.3
    • /
    • pp.75-84
    • /
    • 2024
  • This study developed a procedure and method for assessing the accuracy of unmanned-boat survey data, based on water depth data for Misan Reservoir measured by manned and unmanned boats in 2009 by the Korea Rural Community Corporation. First, a method was devised to extract the contour map from NGIS data in AutoCAD so that the reservoir boundary map, used to set the survey range of reservoir water depth and to test survey accuracy, could be generated easily. The coordinate systems of the manned- and unmanned-boat survey data were also unified in ArcGIS to provide a common basis for the accuracy assessment. In the assessment, the spatial correlation coefficient between the grid maps of the two measurement sets was 0.95, showing high pattern similarity, although the average error was high at 78 cm. For a more detailed analysis, a 3,250 m transverse profile route (PR) was generated at random and grid values of water depth were extracted along it. For the extracted depth data on the PR, the unmanned-boat measurements showed a mean error of 73.18 cm and an error standard deviation of 55 cm relative to the manned boat. These values were set as the standards for correcting the unmanned-boat data by average shift and noise removal. After correcting the unmanned-boat measurements with these values, the results showed high accuracy: R2 = 0.97 for the water depth and surface area curve and R2 = 0.999 for the water depth and storage volume curve.
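The correction pipeline described above (average shift, then noise removal) can be sketched as follows. The data here are synthetic stand-ins, not the Misan Reservoir measurements, and the moving-average filter is just one simple choice of noise removal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: a smooth depth profile along a transect serves as
# the "manned boat" reference; the "unmanned boat" copy carries a systematic
# bias and random noise (values are illustrative, not the paper's data).
x = np.linspace(0, 3 * np.pi, 200)
manned = 5.0 + 3.0 * np.sin(x)                       # reference depths [m]
unmanned = manned + 0.73 + rng.normal(0, 0.55, 200)  # biased, noisy copy

# Step 1: average-shift correction -- subtract the mean error.
bias = np.mean(unmanned - manned)
shifted = unmanned - bias

# Step 2: simple noise removal -- a 5-point moving average along the profile.
smoothed = np.convolve(shifted, np.ones(5) / 5, mode="same")

# Compare RMSE on the profile interior (edges are distorted by the window).
sl = slice(2, -2)
rmse_before = np.sqrt(np.mean((unmanned[sl] - manned[sl]) ** 2))
rmse_shift = np.sqrt(np.mean((shifted[sl] - manned[sl]) ** 2))
rmse_smooth = np.sqrt(np.mean((smoothed[sl] - manned[sl]) ** 2))
```

Each step should reduce the RMSE against the reference profile; the correlation coefficient, by contrast, is unaffected by a constant shift, which is why the paper reports both pattern similarity and average error.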

Estimation of Soil Moisture Using Sentinel-1 SAR Images and Multiple Linear Regression Model Considering Antecedent Precipitations (선행 강우를 고려한 Sentinel-1 SAR 위성영상과 다중선형회귀모형을 활용한 토양수분 산정)

  • Chung, Jeehun;Son, Moobeen;Lee, Yonggwan;Kim, Seongjoon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.3
    • /
    • pp.515-530
    • /
    • 2021
  • This study estimates soil moisture (SM) in the Yongdam-Dam watershed of South Korea using Sentinel-1A/B C-band SAR (synthetic aperture radar) images and a multiple linear regression model (MLRM). Sentinel-1A and -1B images (6-day interval, 10 m resolution) were collected over 5 years, from 2015 to 2019. Geometric, radiometric, and noise corrections were performed with the SNAP (SentiNel Application Platform) software, and the images were converted to VV- and VH-polarization backscattering coefficients. In-situ SM data measured by TDR at 6 locations were used to validate the estimated SM. Five-day antecedent precipitation data were also collected to compensate for vegetated areas where the radar signal does not reach the ground. MLRM modeling was performed with yearly and seasonal data sets, and correlation analysis was performed for varying numbers of independent variables. The estimated SM was verified against observed SM using the coefficient of determination (R2) and the root mean square error (RMSE). Using only the backscattering coefficient (BSC) in the grass area, R2 was 0.13 and RMSE was 4.83%. Adding 5 days of antecedent precipitation data raised R2 to 0.37 and lowered RMSE to 4.11%. With dry-day information and seasonal regression equations to reflect the drying pattern and seasonal variability of SM, the correlation increased substantially, to an R2 of 0.69 and an RMSE of 2.88%.
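A minimal version of the regression step can be sketched as below, using synthetic stand-ins for the backscattering coefficients and antecedent precipitation; the variable names and values are illustrative, not the paper's Yongdam-Dam dataset.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 120

# Hypothetical predictors: VV/VH backscattering coefficients [dB] and
# 5-day antecedent precipitation [mm] (synthetic, illustrative ranges).
vv = rng.normal(-11, 1.5, n)
vh = rng.normal(-17, 1.5, n)
p5 = rng.gamma(2.0, 5.0, n)

# Synthetic "true" soil moisture [%] generated from a known linear rule.
sm = 30 + 1.2 * vv + 0.5 * vh + 0.35 * p5 + rng.normal(0, 1.0, n)

# Fit the multiple linear regression SM = b0 + b1*VV + b2*VH + b3*P5.
X = np.column_stack([np.ones(n), vv, vh, p5])
beta, *_ = np.linalg.lstsq(X, sm, rcond=None)
pred = X @ beta

# Evaluate with R^2 and RMSE, the two metrics used in the abstract.
ss_res = np.sum((sm - pred) ** 2)
ss_tot = np.sum((sm - np.mean(sm)) ** 2)
r2 = 1 - ss_res / ss_tot
rmse = np.sqrt(np.mean((sm - pred) ** 2))
```

On data generated this way the fitted coefficients recover the known rule closely; the paper's gains come instead from adding antecedent-precipitation and seasonal terms to the predictor set.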

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure is becoming important. System monitoring data are multidimensional time series, which raises the difficulty of handling both the multidimensional character of the data and its time-series character at once. For multidimensional data, correlations between variables must be considered, yet existing probability-based, linear, and distance-based methods degrade because of the curse of dimensionality. Time series data, in turn, are typically preprocessed with sliding windows and time-series decomposition for autocorrelation analysis; these techniques further increase the dimensionality of the data and therefore need to be supplemented. Anomaly detection is a long-standing research field: statistical methods and regression analysis were used early on, and studies now actively apply machine learning and artificial neural networks. Statistical methods are difficult to apply to non-homogeneous data and do not detect local outliers well. Regression-based detection learns a regression equation under parametric statistics and flags abnormality by comparing predicted and actual values; its performance drops when the model is not solid or when the data contain noise or outliers, and it is restricted by the need for clean training data. The autoencoder, an artificial neural network trained to reproduce its input at its output, has many advantages over probability and linear models, cluster analysis, and supervised learning: it can be applied to data that satisfy neither probability-distribution nor linearity assumptions,
and it can be trained without labeled data. However, it still has limitations in identifying local outliers in multidimensional data, and time-series preprocessing greatly increases the dimensionality of the input. This study proposes a CMAE (Conditional Multimodal Autoencoder) that improves anomaly detection performance by considering local outliers and time-series characteristics. First, a Multimodal Autoencoder (MAE) was applied to improve local-outlier identification in multidimensional data. Multimodal architectures are commonly used to learn different types of input, such as voice and images; the different modals share the autoencoder's bottleneck and thereby learn cross-modal correlation. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time-series data effectively without increasing the dimensionality of the data. Conditional inputs are usually categorical variables, but in this study time was used as the condition so that periodicity could be learned. The proposed CMAE was verified against a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The reconstruction performance over 41 variables was examined for the proposed and comparison models. Reconstruction quality differed by variable: the loss was small for the Memory, Disk, and Network modals in all three autoencoders, so reconstruction worked well there; the Process modal showed no significant difference across models; and the CPU modal showed clearly better performance in CMAE. ROC curves were prepared to evaluate anomaly detection performance, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked CMAE first, then MAE, then UAE.
In particular, recall was 0.9828 for CMAE, confirming that it detects nearly all abnormalities; accuracy improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. In practical terms, the proposed model has an additional advantage beyond performance: techniques such as time-series decomposition and sliding windows add procedures that must be managed, and their dimensional increase can slow inference. The proposed model is therefore easy to apply in practice with respect to inference speed and model management.
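The core idea, scoring anomalies by reconstruction error while feeding time-of-day in as a conditional input, can be illustrated with a toy sketch. PCA is used here as a closed-form stand-in for a linear autoencoder's bottleneck, and the metrics are synthetic, not the paper's 41-variable infrastructure data.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 24 * 60  # one "day" of minute-level samples

# Synthetic monitoring metrics with daily periodicity (stand-ins for
# modalities such as CPU and Memory; not real infrastructure data).
t = np.arange(T)
phase = 2 * np.pi * t / T
cpu = 50 + 20 * np.sin(phase) + rng.normal(0, 2, T)
mem = 60 + 10 * np.cos(phase) + rng.normal(0, 2, T)

# Conditional input: encode time of day as sin/cos and append it, mimicking
# CMAE's idea of feeding time as a condition instead of widening windows.
X = np.column_stack([cpu, mem, np.sin(phase), np.cos(phase)])
Z = (X - X.mean(0)) / X.std(0)

# PCA as a closed-form stand-in for a *linear* autoencoder: project to a
# k-dimensional "bottleneck" and reconstruct.
k = 2
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
W = Vt[:k].T                 # shared encoder/decoder weights
recon = Z @ W @ W.T

# Anomaly score = reconstruction error on the metric dimensions only.
score = np.sum((Z[:, :2] - recon[:, :2]) ** 2, axis=1)
threshold = np.quantile(score, 0.999)

# Inject an anomaly: an 8-sigma CPU spike that violates the daily pattern.
Z_anom = Z.copy()
Z_anom[100, 0] += 8.0
recon_a = Z_anom @ W @ W.T
score_a = np.sum((Z_anom[:, :2] - recon_a[:, :2]) ** 2, axis=1)
```

Because the time condition lets the bottleneck capture the daily cycle, a point that breaks that cycle reconstructs poorly and its score clears the threshold; the real CMAE replaces this linear projection with a trained multimodal network.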

Factors Related with Job Satisfaction in Workers - Through the Application of NIOSH Job Stress Model - (직장인의 직무만족도 관련요인 분석 - NIOSH의 직무스트레스 모형을 적용하여 -)

  • Kim, Soon-Lae;Lee, Bok-Im;Lee, Jong-Eun;Rhee, Kyung-Yong;Jung, Hye-Sun
    • Research in Community and Public Health Nursing
    • /
    • v.14 no.2
    • /
    • pp.190-199
    • /
    • 2003
  • This study was conducted to determine the factors affecting job satisfaction in workers, using the Job Stress Model proposed by the National Institute for Occupational Safety and Health (NIOSH). Data were collected from December 1 to December 30, 1999. The subjects were 2,133 workers employed at 155 work sites, examined with the NIOSH Job Stress questionnaire as translated by the Korea Occupational Safety & Health Academy and the Occupational Safety & Health Research Institute. The SAS/PC program was used for statistical analysis: descriptive statistics, Pearson's correlation coefficients, ANOVA, and stepwise multiple regression. The results were as follows. 1. Among general characteristics, job satisfaction was higher in subjects with fewer children. 2. By work condition, job satisfaction was higher in those holding a permanent position, working regular hours rather than shifts, working fixed rather than rotating shift hours, having worked for a short period, and working fewer hours and less overtime per week. 3. In terms of the physical work environment, job satisfaction was significantly related to 10 environmental factors: it was higher for workers in environments with no noise, bright light, temperature adjusted appropriately in summer and winter, appropriate humidity, good ventilation, clean air, no exposure to hazardous substances during work hours, an overall pleasant work environment, and uncrowded work space. 4. By work-related factors, job satisfaction was higher with less ambiguity about job future and role, high job control/autonomy, and lighter workload; it was lower with little utilization of competencies and with greater role conflict and workload. 5.
As for non-work-related factors, job satisfaction was higher in workers who volunteered at organizations or were active in religious activities for 5-10 hours per week. 6. Among buffering factors, significant positive correlations were found between job satisfaction and support from direct superiors, from peers, and from spouse, friends, and family. 7. The factors that affected job satisfaction were age, number of children, weekly work hours, noise, summer temperature at the work site, uncomfortable physical environment, role ambiguity, role conflict, ambiguity in job future, workload, non-utilization of competencies, and social support from the direct supervisor; together these accounted for 26% of the total variance in the multiple regression analysis. In conclusion, the following are proposed based on the results of this study. 1. The most important factors affecting job satisfaction were noise, role ambiguity, and workload, suggesting a need to develop strategies or programs to manage these factors at work sites. 2. A support system that promotes job satisfaction is needed, emphasizing the role of occupational health nurses stationed at work sites in managing the factors that generate job stress. 3. Job satisfaction is one of the three acute stress responses in the NIOSH job stress model (job satisfaction, physical discomfort, and industrial accidents); further studies are therefore needed on the other two.
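The stepwise multiple regression used above can be sketched with a forward-selection toy example. The data are synthetic, and the column roles are hypothetical stand-ins for stressors such as noise or workload, not the survey variables.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 300, 6

# Synthetic predictors; only the first three actually drive the outcome,
# which plays the role of a "job satisfaction" score.
X = rng.normal(size=(n, p))
y = 1.0 * X[:, 0] + 0.8 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 2.0, n)

def r2_of(cols):
    """R^2 of an OLS fit of y on an intercept plus the given columns."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    res = y - A @ beta
    return 1 - res @ res / ((y - y.mean()) @ (y - y.mean()))

# Forward stepwise selection: repeatedly add the predictor with the largest
# R^2 gain until the improvement falls below a small threshold.
selected, current = [], 0.0
while len(selected) < p:
    gains = {c: r2_of(selected + [c]) - current
             for c in range(p) if c not in selected}
    best = max(gains, key=gains.get)
    if gains[best] < 0.02:
        break
    selected.append(best)
    current += gains[best]
```

The procedure picks the strongest predictor first and stops once the remaining columns add only chance-level explained variance, mirroring how the study arrived at a subset of stressors explaining 26% of the variance.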


The Study of Affecting Image Quality according to forward Scattering Dose used Additional Filter in Diagnostic Imaging System (부가필터 사용 시 전방 산란선량에 따른 화질 영향에 대한 연구)

  • Choi, Il-Hong;Kim, Kyo-Tae;Heo, Ye-Ji;Park, Hyong-Hu;Kang, Sang-Sik;Noh, Si-Cheol;Park, Ji-Koon
    • Journal of the Korean Society of Radiology
    • /
    • v.10 no.8
    • /
    • pp.597-602
    • /
    • 2016
  • Recent clinical practice uses an aluminium filter to reduce low-energy photons. However, the filter can adversely affect image quality because of the scattered radiation generated by beam-hardening of the X-ray spectrum, and a filter of improper thickness can cause dose creep, in which the patient receives unnecessary exposure. In this study, the authors performed RMS and RSD analyses to evaluate quantitatively the effect of the filter's forward scattering dose on the image. As a result, both the FSR and the RSD increased with filter thickness. Here the RSD denotes the size of the standard deviation relative to the mean: taking the mean value as signal and the standard deviation as noise, a rising RSD corresponds to a falling signal-to-noise ratio, which can be understood as an index of image quality. Based on these findings, the correlation between image quality and the FSR introduced by an additional filter was verified quantitatively. A 2.5 mmAl filter, as recommended by the NCRP for tube voltages of 70 kVp or more, showed a 14.6% difference in RSD relative to the case with no filter. These results are considered useful as basic data for studies on filters to improve image quality.
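The RSD-as-inverse-SNR reading can be made concrete with a toy uniform-exposure region; the numbers are illustrative, not the paper's detector data.

```python
import numpy as np

rng = np.random.default_rng(4)

# A uniformly exposed image region modeled as a flat signal plus noise
# (mean pixel value 1000, noise sigma 50 -- hypothetical values).
roi = 1000 + rng.normal(0, 50, 10_000)

mean, std = roi.mean(), roi.std()
rsd = std / mean   # relative standard deviation
snr = mean / std   # signal-to-noise ratio = 1 / RSD
```

A thicker filter that raises the scatter fraction inflates `std` relative to `mean`, so RSD rises and SNR falls in lockstep, which is exactly the relation the abstract uses to judge image quality.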

A Study on the Temporomandibular Joint Disorder and School Life Stress of High School Student by Department (계열별 남자고등학생의 학교생활스트레스와 측두하악장애에 관한 연구)

  • Lee, Jung-Hwa;Choi, Jung-Mi
    • Journal of dental hygiene science
    • /
    • v.7 no.3
    • /
    • pp.179-185
    • /
    • 2007
  • The purpose of this study, which targeted high school students in liberal arts and industrial departments in Daegu Metropolitan City, was to obtain basic data for developing dental education programs and to inform prevention and treatment of temporomandibular joint disorder, by examining the prevalence of the disorder, its contributing factors, and its relationship to school life stress. The results are as follows. 1. The rates of temporomandibular joint disorder symptoms among the students were joint noise 61.8%, joint dislocation 6.9%, sharp pain when chewing 47.5% and when not chewing 29.8%, lockjaw 11.3%, and headache 40.4%. 2. Among contributing factors, joint noise was associated with tooth clenching and lip and cheek clenching. Pain when chewing differed significantly with tooth clenching, one-sided chewing, over-chewing, lip clenching, and sleeping on one's side (P < 0.01). Pain when not chewing was similarly associated with tooth clenching, bruxism, one-sided chewing, and lip and cheek clenching, and lockjaw differed significantly with tooth clenching, bruxism, and sleeping on one's side. For headache, the contributing factors were all of the above habits except one-sided sleeping (P < 0.01, P < 0.05). 3. The rate of oral and maxillofacial injury experience related to the disorder was 13.4% in the industrial department and 19.6% in liberal arts. The causes of injury in liberal arts were exercise 26.8%, other 24.4%, and falls 19.5%; in the industrial department, exercise 44.4%, falls 22.2%, and other 14.9%. Treatment experience was reported by 5.0% in the industrial department and 2.9% in liberal arts.
The medical institutions visited were, for liberal arts, dental clinics 50% and orthopedics 50%; for the industrial department, orthopedics 40%, oriental medicine clinics 30%, and dental clinics 30%. 4. Temporomandibular joint disorder did not differ by grade or educational background, while pain when chewing and when not chewing showed significant differences (P < 0.01). 5. School stress was generally higher in liberal arts than in the industrial department owing to school records: $3.75{\pm}1.14$ in liberal arts versus $3.01{\pm}1.23$ in the industrial department. 6. School record, school life, stress from teachers, chewing/non-chewing pain of temporomandibular joint disorder, and joint noise were significantly correlated (P < 0.01, P < 0.05).


Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it builds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which need only be moderately accurate. Ensemble learning has received considerable attention in machine learning and artificial intelligence because of its remarkable performance improvements and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In that research, DT ensemble studies have consistently demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown comparably remarkable performance. Recently, several works have reported that ensemble performance can degrade when the member classifiers are highly correlated, producing a multicollinearity problem, and have proposed differentiated learning strategies to cope with the resulting degradation. Hansen and Salamon (1990) argued that it is necessary and sufficient for the performance enhancement of an ensemble that it contain diverse classifiers. Breiman (1996) found that ensemble learning increases the performance of unstable learning algorithms but shows no remarkable improvement for stable ones. Unstable algorithms such as decision tree learners are sensitive to changes in the training data, so small changes can yield large changes in the generated classifiers; ensembles of unstable learners therefore retain some diversity among their members.
By contrast, stable algorithms such as NN and SVM generate similar classifiers despite small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation produces multicollinearity, which degrades ensemble performance. Kim's work (2009) compared bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM: the stable NN and SVM were more predictive than the unstable DT, yet in ensemble learning the DT ensemble improved more than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically showed that the ensemble's performance degradation is due to multicollinearity, and proposed that ensemble optimization is needed to cope with the problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) to improve NN ensemble performance. Coverage optimization chooses a sub-ensemble from the original ensemble so as to guarantee classifier diversity. CO-NN uses a genetic algorithm (GA), widely applied to optimization problems, to handle the coverage optimization. The GA chromosomes are encoded as binary strings, each bit indicating an individual classifier. The fitness function maximizes error reduction, and a constraint on the variance inflation factor (VIF), one of the standard measures of multicollinearity, is added to ensure diversity by removing highly correlated classifiers. Microsoft Excel and the GA software package Evolver were used.
Experiments on company-failure prediction showed that CO-NN stably enhances NN ensembles by choosing classifiers with the ensemble's correlations in mind. Classifiers with potential multicollinearity problems are removed by the coverage optimization process, and CO-NN outperformed a single NN classifier and the NN ensemble at the 1% significance level, and the DT ensemble at the 5% level. Issues remain for further research: first, a decision-optimization process to find the optimal combination function should be considered; second, various learning strategies for dealing with data noise should be introduced in more advanced future work.
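The VIF screen at the core of the CO-NN constraint can be sketched as follows. The classifier output scores are simulated (three highly correlated "stable" members plus one diverse member), so the numbers illustrate the screen itself, not the paper's Evolver-based GA run.

```python
import numpy as np

def vif(outputs):
    """Variance inflation factor of each ensemble member's output score,
    computed by regressing it on the other members: VIF_i = 1/(1 - R_i^2)."""
    n, m = outputs.shape
    vifs = []
    for i in range(m):
        y = outputs[:, i]
        A = np.column_stack([np.ones(n), np.delete(outputs, i, axis=1)])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        res = y - A @ beta
        r2 = 1 - res @ res / ((y - y.mean()) @ (y - y.mean()))
        vifs.append(1.0 / (1.0 - r2))
    return np.array(vifs)

rng = np.random.default_rng(5)
n = 500
base = rng.normal(size=n)

# Three "stable" classifiers that mostly agree (highly correlated scores)
# and one diverse member (synthetic scores, not real NN/SVM outputs).
scores = np.column_stack([
    base + rng.normal(0, 0.1, n),
    base + rng.normal(0, 0.1, n),
    base + rng.normal(0, 0.1, n),
    rng.normal(size=n),
])
v = vif(scores)
```

Under a typical VIF < 10 constraint, the first three members would be flagged as redundant candidates for removal, which is exactly the diversity pressure the GA's fitness constraint applies during coverage optimization.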

Consideration on the Relation between Vibration Level and Peak Particle Velocity in Regulation of Ground Vibration (지반진동 규제기준에서 진동레벨과 진동속도의 상호관계에 대한 고찰)

  • Choi, Byung-Hee;Ryu, Chang-Ha
    • Explosives and Blasting
    • /
    • v.30 no.2
    • /
    • pp.1-8
    • /
    • 2012
  • The only law related to airblast and ground vibration control in Korea is the Noise and Vibration Control Act enforced by the Ministry of Environment. This law, however, mainly addresses the annoyance aspects of noise and vibration in ordinary life, and accordingly defines its safety criterion for ground vibration as a vibration level (VL) in dB(V). Blast-induced ground vibrations, by contrast, have the character of shock vibrations, with durations very short compared to vibrations from machinery, tools, or facilities. Vibration regulations for blasting operations therefore usually define the safety criterion as a peak particle velocity (PPV), reflecting the effect of ground vibration on structural damage. Nevertheless, there are attempts to predict VL from PPV, or to estimate VL from scaled distances (SD; in units of $m/kg^{1/2}$ or $m/kg^{1/3}$), without considering the frequency spectra involved; these appear to be made mainly to satisfy the law in blasting contracts. In principle, however, there can be no general correlation between peak velocity and peak acceleration over the entire frequency spectrum, so such correlations or estimations should be made only for waves with the same or very similar frequency spectra.
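Why no frequency-blind VL-PPV mapping can exist is easy to see for sinusoidal motion, where peak velocity equals peak acceleration divided by 2*pi*f. The VL convention below, VL = 20*log10(a_rms/a_ref) with reference acceleration 1e-5 m/s^2, is a common acceleration-level definition assumed for illustration, not taken from the paper.

```python
import numpy as np

# Assumed convention: vibration acceleration level in dB relative to
# a reference acceleration of 1e-5 m/s^2 (assumption for this sketch).
A_REF = 1e-5

def vl_db(a_rms):
    return 20 * np.log10(a_rms / A_REF)

def ppv_sine(a_peak, f):
    # For sinusoidal motion, peak velocity = peak acceleration / (2*pi*f).
    return a_peak / (2 * np.pi * f)

a_peak = 0.5                    # m/s^2, same amplitude for both waves
a_rms = a_peak / np.sqrt(2)
vl = vl_db(a_rms)               # identical VL for both waves

low = ppv_sine(a_peak, f=5.0)   # low-frequency, blast-like component
high = ppv_sine(a_peak, f=50.0) # higher-frequency, machinery-like component
```

Two waves with the same acceleration amplitude, hence the same VL, differ in PPV by exactly the ratio of their frequencies (a factor of 10 here), which is the abstract's point: a VL-PPV conversion is meaningful only between waves with the same or very similar spectra.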

Improvement of Residual Delay Compensation Algorithm of KJJVC (한일상관기의 잔차 지연 보정 알고리즘의 개선)

  • Oh, Se-Jin;Yeom, Jae-Hwan;Roh, Duk-Gyoo;Oh, Chung-Sik;Jung, Jin-Seung;Chung, Dong-Kyu;Oyama, Tomoaki;Kawaguchi, Noriyuki;Kobayashi, Hideyuki;Kawakami, Kazuyuki;Ozeki, Kensuke;Onuki, Hirohumi
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.14 no.2
    • /
    • pp.136-146
    • /
    • 2013
  • In this paper, a residual delay compensation algorithm is proposed for the FX-type KJJVC. In the initial design of the KJJVC algorithm, integer arithmetic and a cos/sin lookup table for the phase-compensation coefficient were introduced to speed up the calculation. Mismatches between the data timing and the residual delay phase, and between bit-jumps and the residual delay phase, were found and fixed. In the final design, an initialization problem in the rotation memory of the residual delay compensation was found when the compensated value was applied to an FFT segment; this problem was also fixed by modifying the FPGA code. With the proposed algorithm, the band shape of the cross-power spectrum becomes flat, meaning there is no significant loss over the whole bandwidth. To verify the effectiveness of the proposed algorithm, correlation experiments on real observation data were conducted using the simulator and KJJVC. The results confirm that the designed residual delay compensation is correctly applied in KJJVC, and the signal-to-noise ratio increases by about 8%.
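The phase-rotation idea behind residual delay compensation can be sketched in a few lines: a fractional-sample delay applied in the frequency domain is undone by the opposite phase ramp. This is a numpy illustration of the principle, not the KJJVC FPGA implementation.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1023  # odd length keeps every +f/-f bin paired, so signals stay real

# A noise-like signal and a copy delayed by a fractional-sample residual
# delay (synthetic stand-in for one FFT segment of correlator data).
x = rng.normal(size=n)
tau = 0.3                                   # residual delay in samples
f = np.fft.fftfreq(n)
delayed = np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * tau)).real

# Compensation: rotate the phase of the FFT segment by +2*pi*f*tau, the
# counterpart of the phase-rotation step applied per segment in the
# correlator; with the correct tau the phase slope across the band is
# removed and the cross-power spectrum stays flat.
compensated = np.fft.ifft(np.fft.fft(delayed)
                          * np.exp(2j * np.pi * f * tau)).real
```

With the right rotation the original segment is recovered to machine precision; an uncompensated (or wrongly initialized) rotation leaves a residual phase ramp that decorrelates the band edges, which is the loss the paper's fix eliminates.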

Adaptive Hard Decision Aided Fast Decoding Method in Distributed Video Coding (적응적 경판정 출력을 이용한 고속 분산 비디오 복호화 기술)

  • Oh, Ryang-Geun;Shim, Hiuk-Jae;Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.6
    • /
    • pp.66-74
    • /
    • 2010
  • Distributed video coding (DVC) has recently drawn attention for environments with limited computing resources at the encoder. Wyner-Ziv (WZ) coding is a representative DVC scheme: the WZ encoder encodes key frames and WZ frames independently, by conventional intra coding and by a channel code respectively. The WZ decoder generates side information from the two reconstructed key frames (t-1, t+1) based on temporal correlation; this side information is regarded as a noisy version of the original WZ frame, and the virtual channel noise is removed by the channel decoding process. The performance of WZ coding therefore depends greatly on the channel code. Among existing channel codes, Turbo codes and LDPC codes have the most powerful error-correction capability, but both use stochastic iterative decoding, which is quite time-consuming and considerably increases the complexity of the WZ decoder. Analysis of LDPCA complexity on real video data shows that LDPCA decoding accounts for more than 60% of total WZ decoding complexity. The HDA (Hard Decision Aided) method proposed in the channel coding literature can greatly reduce channel decoding complexity, but considerable rate-distortion (RD) loss is possible depending on the threshold, whose proper value differs for each sequence. In this paper, we propose an adaptive HDA method that sets a proper threshold according to the sequence. The proposed method saves about 62% and 32% of the time in the LDPCA and WZ decoding processes respectively, with little decrease in RD performance.
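The hard-decision shortcut, stopping the iterative decoder as soon as the current hard decision already satisfies every parity check, can be sketched with a toy bit-flipping decoder. The tiny parity-check matrix is illustrative only; the paper's LDPCA codes and its adaptive LLR threshold are not reproduced here.

```python
import numpy as np

# Tiny example parity-check matrix (illustrative, not the LDPCA code
# actually used in the WZ decoder).
H = np.array([
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
])

def syndrome_ok(H, bits):
    # Hard-decision check: are all parity equations satisfied?
    return not np.any(H @ bits % 2)

def bit_flip_decode(H, received, max_iter=50):
    """Gallager-style bit flipping with a hard-decision-aided early exit:
    stop as soon as the current hard decision passes every check."""
    bits = received.copy()
    for it in range(max_iter):
        if syndrome_ok(H, bits):
            return bits, it            # early termination saves iterations
        unsat = H.T @ (H @ bits % 2)   # unsatisfied checks touching each bit
        bits[np.argmax(unsat)] ^= 1    # flip the most-suspect bit
    return bits, max_iter

codeword = np.zeros(6, dtype=int)      # the all-zero word is a valid codeword
received = codeword.copy()
received[2] ^= 1                       # single bit error from the channel
decoded, iters = bit_flip_decode(H, received)
```

In a soft-decision LDPCA decoder the same exit test is applied to the hard decision of the LLRs each iteration; the paper's contribution is choosing the confidence threshold for that test adaptively per sequence, trading decoding time against RD loss.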