• Title/Summary/Keyword: Noise estimation


Prediction of Disk Cutter Wear Considering Ground Conditions and TBM Operation Parameters (지반 조건과 TBM 운영 파라미터를 고려한 디스크 커터 마모 예측)

  • Yunseong Kang;Tae Young Ko
    • Tunnel and Underground Space / v.34 no.2 / pp.143-153 / 2024
  • The Tunnel Boring Machine (TBM) method is a tunnel excavation method that produces lower levels of noise and vibration during excavation than drilling and blasting, and offers higher stability. It is increasingly being applied to tunnel projects worldwide. The disc cutter is an excavation tool mounted on the cutterhead of a TBM; it constantly interacts with the ground at the tunnel face, inevitably leading to wear. This study quantitatively predicted disc cutter wear using geological conditions, TBM operational parameters, and machine learning algorithms. Among the input variables for predicting disc cutter wear, Uniaxial Compressive Strength (UCS) measurements are considerably more limited than the machine and wear data, so UCS was first estimated over the entire section from TBM machine data, and the Coefficient of Wearing rate (CW) was then predicted with the completed data set. Comparing the performance of the CW prediction models, the XGBoost model performed best, and SHapley Additive exPlanations (SHAP) analysis was conducted to interpret the complex prediction model.
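The two-stage idea described in the abstract, estimating UCS from machine data first and then predicting the wear coefficient from the completed data set, can be sketched as follows. This is a minimal numpy illustration on synthetic, hypothetical data, with plain least squares standing in for the paper's XGBoost model; all variable names and coefficients are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: TBM machine parameters, sparse UCS
# measurements, and wear-rate coefficients (CW).
n = 200
machine = rng.normal(size=(n, 3))                 # e.g. thrust, torque, RPM
ucs_true = machine @ np.array([40.0, 15.0, 5.0]) + 120.0
cw_true = 0.02 * ucs_true + machine @ np.array([0.1, 0.3, -0.2])
cw_true = cw_true + rng.normal(0.0, 0.05, n)      # measurement noise

# Stage 1: UCS is only measured on a small subset of the alignment,
# so fit a model on that subset and fill in the whole section.
measured = rng.choice(n, size=30, replace=False)
X1 = np.column_stack([machine[measured], np.ones(len(measured))])
coef1, *_ = np.linalg.lstsq(X1, ucs_true[measured], rcond=None)
ucs_est = np.column_stack([machine, np.ones(n)]) @ coef1

# Stage 2: predict CW from machine data plus the completed UCS column.
X2 = np.column_stack([machine, ucs_est, np.ones(n)])
coef2, *_ = np.linalg.lstsq(X2, cw_true, rcond=None)
cw_pred = X2 @ coef2
```

The same two-stage structure applies when each least-squares stage is replaced by a gradient-boosted model, as in the paper.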

Comparison of Algorithms for Generating Parametric Image of Cerebral Blood Flow Using ${H_2}^{15}O$ Positron Emission Tomography (${H_2}^{15}O$ PET을 이용한 뇌혈류 파라메트릭 영상 구성을 위한 알고리즘 비교)

  • Lee, Jae-Sung;Lee, Dong-Soo;Park, Kwang-Suk;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.37 no.5 / pp.288-300 / 2003
  • Purpose: To obtain regional blood flow and the tissue-blood partition coefficient from ${H_2}^{15}O$ PET time-activity curves, the parameters of the Kety model are conventionally fitted by nonlinear least squares (NLS) analysis. However, NLS requires considerable computation time and is therefore impractical for the pixel-by-pixel analysis needed to generate parametric images of these parameters. In this study, we investigated several fast parameter estimation methods for parametric image generation and compared their statistical reliability and computational efficiency. Materials and Methods: These methods included linear least squares (LLS), linear weighted least squares (LWLS), linear generalized least squares (GLS), linear generalized weighted least squares (GWLS), weighted integration (WI), and a model-based clustering method (CAKS). ${H_2}^{15}O$ dynamic brain PET with a Poisson noise component was simulated using the numerical Zubal brain phantom. Error and bias in the estimation of rCBF and the partition coefficient, and computation time, were estimated and compared in various noise environments. In addition, parametric images from ${H_2}^{15}O$ dynamic brain PET data acquired from 16 healthy volunteers under various physiological conditions were compared to examine the utility of these methods on real human data. Results: These fast algorithms produced parametric images with similar image quality and statistical reliability. When the CAKS and LLS methods were used in combination, computation time was significantly reduced, to less than 30 seconds for $128{\times}128{\times}46$ images on a Pentium III processor. Conclusion: Parametric images of rCBF and the partition coefficient with good statistical properties can be generated within a computation time acceptable in clinical situations.
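The speed advantage of LLS over NLS comes from linearizing the one-tissue Kety model: integrating $dC/dt = K_1 C_a - k_2 C$ gives $C(t) = K_1 \int_0^t C_a\,ds - k_2 \int_0^t C\,ds$, which is linear in the two parameters, so each pixel needs a single matrix solve instead of an iterative fit. A minimal sketch with assumed, illustrative values (not the paper's data):

```python
import numpy as np

# Time grid and a hypothetical arterial input function (minutes).
t = np.linspace(0.0, 10.0, 401)
dt = t[1] - t[0]
ca = t * np.exp(-t / 1.5)
K1_true = 0.5                       # flow
k2_true = K1_true / 0.9             # flow / partition coefficient

# Forward-simulate the tissue curve with Euler steps of
# dC/dt = K1*Ca - k2*C.
ct = np.zeros_like(t)
for i in range(1, len(t)):
    ct[i] = ct[i - 1] + dt * (K1_true * ca[i - 1] - k2_true * ct[i - 1])

# LLS fit from cumulative trapezoid integrals of both curves.
int_ca = np.concatenate(([0.0], np.cumsum((ca[1:] + ca[:-1]) * dt / 2)))
int_ct = np.concatenate(([0.0], np.cumsum((ct[1:] + ct[:-1]) * dt / 2)))
A = np.column_stack([int_ca, -int_ct])
K1_hat, k2_hat = np.linalg.lstsq(A, ct, rcond=None)[0]
```

Repeating the single `lstsq` solve per pixel is what makes whole-volume parametric imaging feasible in seconds.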

Estimation of Soil Moisture Using Sentinel-1 SAR Images and Multiple Linear Regression Model Considering Antecedent Precipitations (선행 강우를 고려한 Sentinel-1 SAR 위성영상과 다중선형회귀모형을 활용한 토양수분 산정)

  • Chung, Jeehun;Son, Moobeen;Lee, Yonggwan;Kim, Seongjoon
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.515-530 / 2021
  • This study estimates soil moisture (SM) using Sentinel-1A/B C-band SAR (synthetic aperture radar) images and a multiple linear regression model (MLRM) in the Yongdam-Dam watershed of South Korea. Sentinel-1A and -1B images (6-day interval, 10 m resolution) were collected over 5 years, from 2015 to 2019. Geometric, radiometric, and noise corrections were performed using the SNAP (SentiNel Application Platform) software, and the images were converted to VV- and VH-polarization backscattering coefficients (BSC). In-situ SM data measured with TDR sensors at 6 locations were used to validate the estimated SM. Antecedent precipitation data for the preceding 5 days were also collected to compensate for vegetated areas where the radar signal does not reach the ground. MLRM modeling was performed with yearly and seasonal data sets, and correlation analysis was performed according to the number of independent variables. The estimated SM was verified against observed SM using the coefficient of determination (R2) and the root mean square error (RMSE). Using only the BSC in the grass area, R2 was 0.13 and RMSE was 4.83%. When 5 days of antecedent precipitation data were added, R2 rose to 0.37 and RMSE fell to 4.11%. With the use of dry days and seasonal regression equations to reflect the decreasing pattern and seasonal variability of SM, the correlation increased substantially, to an R2 of 0.69 and an RMSE of 2.88%.
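A multiple linear regression of SM on backscatter plus antecedent precipitation, as described above, can be sketched with synthetic data. All predictor distributions and coefficients below are hypothetical stand-ins, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily samples: VV/VH backscatter (dB) and 5-day
# antecedent precipitation (mm) as predictors of soil moisture (%).
n = 120
vv = rng.normal(-11.0, 1.5, n)
vh = rng.normal(-17.0, 1.5, n)
p5 = rng.gamma(2.0, 5.0, n)
sm = 25.0 + 1.2 * vv + 0.8 * vh + 0.15 * p5 + rng.normal(0.0, 1.0, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([vv, vh, p5, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, sm, rcond=None)
pred = X @ beta

# The two verification metrics used in the abstract: R2 and RMSE.
ss_res = np.sum((sm - pred) ** 2)
ss_tot = np.sum((sm - sm.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
rmse = float(np.sqrt(np.mean((sm - pred) ** 2)))
```

Dropping the `p5` column from `X` and refitting reproduces, in miniature, the abstract's comparison of BSC-only against BSC-plus-precipitation models.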

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH model. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun applying artificial intelligence approaches to estimate the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1487 observations; 1187 days were used to train the suggested GARCH models, and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric favored the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility, although the polynomial kernel shows markedly lower forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. Its entry rules are as follows: if tomorrow's forecasted volatility rises, buy volatility today; if it falls, sell volatility today; if the forecasted direction does not change, hold the existing position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic, because historical volatility values themselves cannot be traded, but the simulation results remain meaningful because the Korea Exchange introduced a tradable volatility futures contract in November 2014. In the test period, the trading systems with SVR-based GARCH models show higher returns than those with MLE-based GARCH: the winning-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows -72% versus +245.6% for the SVR-based version; and MLE-based asymmetric GJR-GARCH shows -98.7% versus +126.3% for the SVR-based version. The linear kernel yields higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage.
The IVTS trading performance is also unrealistic in that historical volatility values are used as the trading objects. Accurate forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine learning-based GARCH models can give better information to stock market investors.
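For reference, the GARCH(1,1) conditional variance recursion that both estimation methods target, together with the IVTS direction signal, can be sketched as follows. The parameter values are illustrative only, not fitted by MLE or SVR, and the returns are simulated stand-ins:

```python
import numpy as np

# GARCH(1,1): h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}.
def garch_variance(returns, omega, alpha, beta):
    h = np.empty_like(returns)
    h[0] = returns.var()            # initialize at the sample variance
    for t in range(1, len(returns)):
        h[t] = omega + alpha * returns[t - 1] ** 2 + beta * h[t - 1]
    return h

rng = np.random.default_rng(2)
r = rng.normal(0.0, 0.01, 500)      # stand-in daily returns
h = garch_variance(r, omega=1e-6, alpha=0.08, beta=0.90)

# IVTS entry rule from the abstract: +1 = buy volatility (forecast up),
# -1 = sell volatility (forecast down), 0 = hold the current position.
signal = np.sign(np.diff(h))
```

Estimation (by MLE or SVR) amounts to choosing `omega`, `alpha`, and `beta` so that `h` best explains the observed squared returns.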

Efficient Correlation Channel Modeling for Transform Domain Wyner-Ziv Video Coding (Transform Domain Wyner-Ziv 비디오 부호를 위한 효과적인 상관 채널 모델링)

  • Oh, Ji-Eun;Jung, Chun-Sung;Kim, Dong-Yoon;Park, Hyun-Wook;Ha, Jeong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.3 / pp.23-31 / 2010
  • The increasing demand for low-power, low-complexity video encoders has motivated extensive research on distributed video coding (DVC), in which the encoder compresses frames without exploiting inter-frame statistical correlation. In a DVC encoder, unlike a conventional video encoder, an error-control code compresses the video frames by representing them as syndrome bits. The DVC decoder, in turn, generates side information that is modeled as a noisy version of the original video frames, and a decoder of the error-control code corrects the errors in the side information using the syndrome bits. This noisy observation, i.e., the side information, can be understood as the output of a virtual channel whose input is the original video frames, and the conditional probability of the virtual channel is assumed to follow a Laplacian distribution. Performance improvement of DVC systems thus depends on the performance of the error-control code and of the optimal reconstruction step in the DVC decoder, and the performance of both constituent blocks is directly tied to a better estimate of the correlation channel parameter. In this paper, we propose an algorithm to estimate the parameter of the correlation channel, together with a low-complexity version of it. In particular, the proposed algorithm minimizes the squared error between the Laplacian probability distribution and the empirical observations. Finally, we show that the conventional algorithm can be improved by adopting a confidence window. The proposed algorithm yields PSNR gains of up to 1.8 dB and 1.1 dB on the Mother and Foreman video sequences, respectively.
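The least-squares fit of the Laplacian parameter to empirical residuals, as the abstract describes, can be sketched as follows. A simple grid search stands in for the paper's minimization, and the residuals are simulated rather than taken from real side information:

```python
import numpy as np

rng = np.random.default_rng(3)

# Residuals between side information and the original frame, simulated
# here as Laplacian noise with true scale b = 4.0.
resid = rng.laplace(0.0, 4.0, 20000)

# Empirical density over a symmetric range of bins.
hist, edges = np.histogram(resid, bins=np.linspace(-30, 30, 61), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def laplace_pdf(x, b):
    return np.exp(-np.abs(x) / b) / (2.0 * b)

# Pick the scale parameter minimizing the squared error between the
# Laplacian density and the empirical histogram.
candidates = np.linspace(1.0, 10.0, 181)
errors = [np.sum((laplace_pdf(centers, b) - hist) ** 2) for b in candidates]
b_hat = float(candidates[int(np.argmin(errors))])
```

In a real decoder the residuals are not observable directly, which is why estimating this parameter well, per frame or per band, is the crux of the problem.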

CR Technology and Activation Plan for White Space Utilization (화이트 스페이스 활용을 위한 무선환경 인지 기술 및 활성화 방안)

  • Yoo, Sung-Jin;Kang, Kyu-Min;Jung, Hoiyoon;Park, SeungKeun
    • The Journal of Korean Institute of Communications and Information Sciences / v.39B no.11 / pp.779-789 / 2014
  • Cognitive radio (CR) technology based on geo-location database access and/or wideband spectrum sensing is vital for recognizing available frequency bands in white spaces (WSs) and efficiently utilizing shared spectrum. This paper presents a new structure for implementing the TVWS database access protocol based on the Internet Engineering Task Force (IETF) Protocol to Access WS database (PAWS). A wideband compressive spectrum sensing (WCSS) scheme using a modulated wideband converter is also proposed for TVWS utilization. The developed database access protocol, adopted in both the TV band device (TVBD) and the TVWS database, operates well in the TV frequency bands. The proposed WCSS shows stable false-alarm performance irrespective of noise variance estimation error, and provides signal detection probabilities greater than 95%. This paper also reviews the Federal Communications Commission (FCC) regulatory requirements for TVWS databases and the related European Telecommunications Standards Institute (ETSI) policy. A standardized protocol for interoperability among multiple TVBDs and TVWS databases, currently being prepared in the IETF, is discussed.
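The reported trade-off between detection probability and false-alarm rate can be illustrated with a plain energy detector, a much simpler stand-in for the proposed WCSS scheme; the threshold rule and signal model below are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n, trials = 128, 2000
noise_var = 1.0

# Threshold set roughly 3 standard deviations above the mean noise
# energy (a chi-square statistic with n degrees of freedom).
threshold = noise_var * (n + 3.0 * np.sqrt(2.0 * n))

def energy(x):
    return np.sum(np.abs(x) ** 2)

# False-alarm rate: noise-only windows exceeding the threshold.
fa = sum(energy(rng.normal(0, 1, n)) > threshold for _ in range(trials)) / trials

# Detection rate: signal-plus-noise windows (unit-power signal).
det = sum(
    energy(rng.normal(0, 1, n) + rng.normal(0, 1, n)) > threshold
    for _ in range(trials)
) / trials
```

If the assumed `noise_var` is wrong, the threshold shifts and the false-alarm rate drifts, which is exactly the sensitivity to noise variance estimation error that the proposed WCSS is reported to avoid.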

A Study on the Resistance Against Environmental Loading of the Fine-Size Exposed Aggregate Portland Cement Concrete Pavements (소입경 골재노출콘크리트포장의 환경하중 저항성에 대한 연구)

  • Chon, Beom-Jun;Lee, Seung-Woo;Chae, Sung-Wook;Bae, Jae-Min
    • International Journal of Highway Engineering / v.11 no.2 / pp.99-109 / 2009
  • Fine-size exposed aggregate portland cement concrete pavements (FEACP) obtain their exposed-aggregate surface texture by removing the upper 2$\sim$3 mm of surface mortar, whose curing is delayed using a set-retarding agent. Compared with conventional portland cement concrete pavements, FEACPs maintain low noise and an adequate skid-resistance level throughout the performance period. It is necessary to ensure durability against environmental loading to prevent unexpected distress during the service life of an FEACP. During curing, volume changes accompanying variations in moisture and temperature can be an important cause of cracking in concrete, and the use of chloride-containing deicers may accelerate defects of concrete pavement such as cracking and scaling. This study aims to evaluate the environmental loading resistance of FEACP, based on estimating its shrinkage-crack-control capability under moisture evaporation and its scaling resistance to deicers under freeze-thaw cycles.


Improvement of the PFCM(Possibilistic Fuzzy C-Means) Clustering Method (PFCM 클러스터링 기법의 개선)

  • Heo, Gyeong-Yong;Choe, Se-Woon;Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.1 / pp.177-185 / 2009
  • Cluster analysis, or clustering, is an unsupervised learning method in which a set of data points is divided into a given number of homogeneous groups. Fuzzy clustering, one of the most popular clustering methods, allows a point to belong to all clusters with different degrees, and thus produces more intuitive and natural clusters than hard clustering methods do. Moreover, some fuzzy clustering variants are noise-immune. In this paper, we improve Possibilistic Fuzzy C-Means (PFCM), which generates a typicality matrix in addition to a membership matrix, using the Gath-Geva (GG) method. The proposed method focuses on the boundaries of clusters, unlike most other methods, which focus on cluster centers. The generated membership values are suitable for classification-type applications. As the typicality values generated by the algorithm have a distribution similar to a Gaussian density function, they are useful for Gaussian-type density estimation. Moreover, the GG method can handle clusters with different numbers of data points, which the well-known Gustafson-Kessel method cannot. All of these points are evident in the experimental results.
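For contrast with PFCM, the standard fuzzy c-means iteration that underlies it (membership update plus weighted center update, without the typicality matrix) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two well-separated Gaussian blobs; fuzzy c-means assigns graded
# memberships instead of hard labels (PFCM would add a typicality
# matrix on top of this).
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(3.0, 0.5, (50, 2))])
c, m = 2, 2.0
centers = X[[0, 50]].copy()         # one seed point from each blob

for _ in range(50):
    # Distances to each center, floored to avoid division by zero.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
    # Membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
    u = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
    # Center update: weighted mean of the points with weights u^m.
    w = u.T ** m
    centers = (w @ X) / w.sum(axis=1, keepdims=True)
```

Each row of `u` sums to 1 across the clusters, which is exactly the probabilistic constraint that the possibilistic (typicality) side of PFCM relaxes.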

Feasibility of Ultrasonic Inspection for Nuclear Grade Graphite (원자력급 흑연의 산화 정도에 따른 초음파특성 변화 및 초음파탐상의 타당성 연구)

  • Park, Jae-Seok;Yoon, Byung-Sik;Jang, Chang-Heui;Lee, Jong-Po
    • Journal of the Korean Society for Nondestructive Testing / v.28 no.5 / pp.436-442 / 2008
  • Graphite has been recognized as a very competitive candidate for the reflector, moderator, and structural material of the very high temperature reactor (VHTR). Since a VHTR operates at up to $900-950^{\circ}C$, a small amount of impurity may accelerate the oxidation and degradation of carbon graphite, resulting in increased porosity and lowered fracture toughness. In this study, ultrasonic wave propagation properties were investigated for both as-received and degraded material, and the feasibility of ultrasonic testing (UT) was assessed based on the measured ultrasonic properties. The ultrasonic velocity, attenuation, and signal-to-noise (S/N) ratio of carbon graphite were half, more than 5 times, and one third, respectively. Degradation reduces the ultrasonic velocity only slightly, by about 100 m/s, but the attenuation is roughly twice that of the as-received state. Probability of detection (POD) estimates based on the S/N ratio for side-drilled holes (SDHs) at depths of less than 100 mm were barely affected by oxidation and degradation. This suggests that UT would be a reliable method for nondestructive testing of carbon graphite components no thicker than 100 mm. According to results produced by a commercial automated ultrasonic testing (AUT) system, human error in ultrasonic testing is barely expected for material no thicker than 80 mm.

Study on the Development of Program for Measuring Preference of Portrait based on Sensibility (감성기반 인물사진 선호도 측정 프로그램 개발 연구)

  • Lee, Chang-Seop;Har, Dong-Hwan
    • The Journal of the Korea Contents Association / v.18 no.2 / pp.178-187 / 2018
  • This study aimed to develop a program model that automatically measures the preference of portrait photographs, based on the relationship between image quality factors and preference, for manufacturers seeking high usability. To proceed with the evaluation, image quality measurement was divided into objective and subjective items, which were evaluated through image processing and statistical methods. RSC Contrast, Dynamic Range, and Noise were selected as the objective evaluation items, and their numerical values were statistically analyzed and evaluated by the program. Exposure, Color Tone, composition of the person, position of the person, and out-of-focus rendering were selected as the subjective evaluation items and evaluated by image processing. Applying both objective and subjective assessment items, the results were highly accurate: the results obtained by the developed program closely matched those of actual visual inspection. However, since the current program can evaluate a photograph only after recognizing the subject's face, future research will need to develop a program that can evaluate all kinds of portraits.
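As one concrete example of an objective item, a noise figure can be extracted from a flat image region by differencing adjacent pixels; this particular estimator is our own assumption, not necessarily the one used in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical flat gray patch with additive Gaussian noise (sigma = 5).
flat_patch = 128.0 + rng.normal(0.0, 5.0, (64, 64))

# Differencing adjacent pixels cancels the constant signal, leaving a
# quantity whose standard deviation is sqrt(2) times the noise sigma.
diffs = np.diff(flat_patch, axis=1)
sigma_hat = float(diffs.std() / np.sqrt(2.0))
```

Scoring such a number against a preference scale is then a statistical step, matching the split between image processing and statistical methods described above.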