• Title/Summary/Keyword: gaussian process


Performance Evaluation of Machine Learning Algorithms for Cloud Removal of Optical Imagery: A Case Study in Cropland (광학 영상의 구름 제거를 위한 기계학습 알고리즘의 예측 성능 평가: 농경지 사례 연구)

  • Soyeon Park;Geun-Ho Kwak;Ho-Yong Ahn;No-Wook Park
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.507-519
    • /
    • 2023
  • Multi-temporal optical images have been utilized for time-series monitoring of croplands. However, the presence of clouds imposes limitations on image availability, often requiring a cloud removal procedure. This study assesses the applicability of various machine learning algorithms for effective cloud removal in optical imagery. We conducted comparative experiments by focusing on two key variables that significantly influence the predictive performance of machine learning algorithms: (1) land-cover types of training data and (2) temporal variability of land-cover types. Three machine learning algorithms, including Gaussian process regression (GPR), support vector machine (SVM), and random forest (RF), were employed for the experiments using simulated cloudy images in paddy fields of Gunsan. GPR and SVM exhibited superior prediction accuracy when the training data had the same land-cover types as the cloud region, and GPR showed the best stability with respect to sampling fluctuations. In addition, RF was the least affected by the land-cover types and temporal variations of training data. These results indicate that GPR is recommended when the land-cover type and spectral characteristics of the training data are the same as those of the cloud region. On the other hand, RF should be applied when it is difficult to obtain training data with the same land-cover types as the cloud region. Therefore, the land-cover types in cloud areas should be taken into account for extracting informative training data along with selecting the optimal machine learning algorithm.
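Conceptually, the GPR prediction used in such gap-filling reduces to the Gaussian process posterior mean under an assumed covariance kernel. A minimal NumPy sketch for intuition (the toy data and RBF hyperparameters are invented for illustration, not the authors' settings):

```python
import numpy as np

def rbf_kernel(a, b, length_scale):
    """Squared-exponential covariance between two 1-D input sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale**2)

def gpr_predict(x_tr, y_tr, x_te, length_scale=0.2, noise=1e-4):
    """GP posterior mean: k_* (K + noise*I)^{-1} y."""
    K = rbf_kernel(x_tr, x_tr, length_scale) + noise * np.eye(len(x_tr))
    return rbf_kernel(x_te, x_tr, length_scale) @ np.linalg.solve(K, y_tr)

# Toy stand-in: predict cloud-free reflectance from a co-registered predictor band
x_tr = np.linspace(0.0, 1.0, 20)
y_tr = np.sin(2 * np.pi * x_tr)
pred = gpr_predict(x_tr, y_tr, np.array([0.25, 0.5]))
```

The closed-form posterior mean also yields a predictive variance, which is one reason GPR can report stability with respect to sampling fluctuations.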

Assessment of compressive strength of high-performance concrete using soft computing approaches

  • Chukwuemeka Daniel;Jitendra Khatti;Kamaldeep Singh Grover
    • Computers and Concrete
    • /
    • v.33 no.1
    • /
    • pp.55-75
    • /
    • 2024
  • The present study introduces an optimum-performance soft computing model for predicting the compressive strength of high-performance concrete (HPC) by comparing, for the first time on a common database, models based on conventional (kernel-based, covariance function-based, and tree-based), advanced machine (least squares support vector machine (LSSVM) and minimax probability machine regressor (MPMR)), and deep (artificial neural network (ANN)) learning approaches. A compressive strength database containing the results of 1030 concrete samples has been compiled from the literature and preprocessed. For training, testing, and validation of the soft computing models, 803, 101, and 101 data points were selected arbitrarily from the 1005 preprocessed data points. Thirteen performance metrics, including three new metrics (the a20-index, index of agreement, and index of scatter), have been implemented for each model. The performance comparison reveals that the SVM (kernel-based), ET (tree-based), MPMR (advanced), and ANN (deep) models achieved higher performance in predicting the compressive strength of HPC. From the overall analysis of performance, accuracy, Taylor plots, the accuracy metric, regression error characteristic curves, Anderson-Darling and Wilcoxon tests, uncertainty, and reliability, model CS4, based on the ensemble tree, was recognized as the optimum-performance model, with a correlation coefficient of 0.9352, a root mean square error of 5.76 MPa, and a mean absolute error of 4.1069 MPa. The present study also reveals that multicollinearity affects the prediction accuracy of the Gaussian process regression, decision tree, multilinear regression, and adaptive boosting regressor models, a novel finding in compressive strength prediction of HPC. The cosine sensitivity analysis reveals that the prediction of the compressive strength of HPC is highly affected by cement content, fine aggregate, coarse aggregate, and water content.
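Among the thirteen metrics, the a20-index, RMSE, and MAE are straightforward to compute; a small NumPy sketch with made-up strength values (MPa) purely to exercise the formulas:

```python
import numpy as np

def a20_index(y_true, y_pred):
    """Fraction of samples whose predicted/measured ratio lies in [0.8, 1.2]."""
    ratio = y_pred / y_true
    return np.mean((ratio >= 0.8) & (ratio <= 1.2))

def rmse(y_true, y_pred):
    """Root mean square error."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(y_true - y_pred))

# Made-up compressive strengths (MPa), not data from the study
y_true = np.array([30.0, 45.0, 60.0, 75.0])
y_pred = np.array([33.0, 40.0, 58.0, 95.0])
```

Here the last prediction overshoots by more than 20%, so the a20-index is 0.75 while MAE is dominated by that single error.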

Lip-Synch System Optimization Using Class Dependent SCHMM (클래스 종속 반연속 HMM을 이용한 립싱크 시스템 최적화)

  • Lee, Sung-Hee;Park, Jun-Ho;Ko, Han-Seok
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.7
    • /
    • pp.312-318
    • /
    • 2006
  • The conventional lip-synch system has a two-step process: speech segmentation and recognition. However, the difficulty of the speech segmentation procedure and the inaccuracy of the training data set due to segmentation lead to significant performance degradation in the system. To cope with this, a connected vowel recognition method using the Head-Body-Tail (HBT) model is proposed. The HBT model, which is appropriate for handling relatively small vocabulary tasks, reflects the co-articulation effect efficiently. Moreover, the seven vowels are merged into three classes having similar lip shapes, and the system is optimized by employing a class-dependent SCHMM structure. Additionally, at both ends of each word, where variation is large, an 8-component Gaussian mixture model is used directly to improve representational ability. Although the proposed method shows performance similar to that of the CHMM based on the HBT structure, the number of parameters is reduced by 33.92%. This reduction makes it a computationally efficient method enabling real-time operation.

Safety Evaluation of Subway Tunnel Structures According to Adjacent Excavation (인접굴착공사에 따른 지하철 터널 구조물 안전성 평가)

  • Jung-Youl Choi;Dae-Hui Ahn;Jee-Seung Chung
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.559-563
    • /
    • 2024
  • Currently, in Korea, large-scale deep excavations are being carried out adjacent to structures due to overcrowding in urban areas. For adjacent excavations in urban areas, it is very important to ensure the safety of earth-retaining structures and underground structures. Accordingly, an automated measurement system is being introduced to manage the safety of subway tunnel structures. However, the utilization of automated measurement results is very low: existing evaluation techniques rely only on the maximum value of the measured data, which can overestimate abnormal behavior. Accordingly, in this study, a vast amount of automated measurement data was analyzed using the Gaussian probability density function, a technique that allows quantitative evaluation. Highly reliable results were derived by applying probabilistic statistical analysis to this large body of data. Therefore, in this study, the safety of subway tunnel structures subject to adjacent excavation work was evaluated using a technique capable of processing a large amount of data.
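As an illustration of the Gaussian probability density approach, abnormal readings can be flagged by standardizing each measurement against the fitted mean and standard deviation rather than relying on the raw maximum. A minimal sketch with synthetic data (the 3σ threshold is an assumed choice, not necessarily the study's):

```python
import numpy as np

def flag_abnormal(measurements, z_threshold=3.0):
    """Fit a Gaussian to the series and flag readings whose standardized
    deviation |x - mu| / sigma exceeds the threshold."""
    mu = measurements.mean()
    sigma = measurements.std(ddof=1)
    return np.abs(measurements - mu) / sigma > z_threshold

# Synthetic displacement-style series (mm) with one injected abnormal reading
rng = np.random.default_rng(0)
data = rng.normal(0.0, 0.1, 1000)
data[500] = 2.0
mask = flag_abnormal(data)
```

Because the whole distribution is used, a single extreme value is identified as abnormal without declaring every local maximum an alarm.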

Magnetic Tunnel Junctions with AlN and AlO Barriers

  • Yoon, Tae-Sick;Yoshimura, Satoru;Tsunoda, Masakiyo;Takahashi, Migaku;Park, Bum-Chan;Lee, Young-Woo;Li, Ying;Kim, Chong-Oh
    • Journal of Magnetics
    • /
    • v.9 no.1
    • /
    • pp.17-22
    • /
    • 2004
  • We studied the magnetotransport properties of tunnel junctions with AlO and AlN barriers fabricated using microwave-excited plasma. The plasma nitridation process provided wider controllability than plasma oxidization for forming MTJs with an ultra-thin insulating layer, because the nitriding rate of metallic Al layers is slow compared with their oxidizing rate. High tunnel magnetoresistance (TMR) ratios of 49 and 44%, with respective resistance-area products $R{\times}A$ of $3{\times}10^4$ and $6{\times}10^3\,{\Omega}{\mu}m^2$, were obtained in the Co-Fe/Al-N/Co-Fe MTJs. We conclude that AlN is a promising barrier material for realizing MTJs with a high TMR ratio and low $R{\times}A$ for high-performance MRAM cells. In addition, in order to clarify the annealing temperature dependence of TMR, the local transport properties were measured for a Ta $50\,{\AA}$/Cu $200\,{\AA}$/Ta $50\,{\AA}$/$Ni_{76}Fe_{24}$ $20\,{\AA}$/Cu $50\,{\AA}$/$Mn_{75}Ir_{25}$ $100\,{\AA}$/$Co_{71}Fe_{29}$ $40\,{\AA}$/Al-O junction with $d_{Al}=8\,{\AA}$ and $P_{O_2}{\times}t_{ox}=8.4{\times}10^4$ at various temperatures. The current histogram statistically calculated from the electrical current image agreed well with the fitting result considering the Gaussian distribution and the Fowler-Nordheim equation. After annealing at $340^{\circ}C$, where the TMR ratio of the corresponding MTJ had its maximum value of 44%, the average barrier height increased to 1.12 eV and its standard deviation decreased to 0.1 eV. The increase of the TMR ratio after annealing is well explained by the enhancement of the average barrier height and the reduction of its fluctuation.

Railway Track Extraction from Mobile Laser Scanning Data (모바일 레이저 스캐닝 데이터로부터 철도 선로 추출에 관한 연구)

  • Yoonseok, Jwa;Gunho, Sohn;Jong Un, Won;Wonchoon, Lee;Nakhyeon, Song
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.33 no.2
    • /
    • pp.111-122
    • /
    • 2015
  • This study introduces a new automated solution for detecting railway tracks and reconstructing track models from mobile laser scanning data. The proposed solution proceeds as follows: it begins by detecting a potential railway region, called the Region Of Interest (ROI), and approximating the orientation of the railway track trajectory from the raw data. Next, knowledge-based detection of railway tracks is performed to localize track candidates in the first strip; here, a strip, referring to the local track search region, is generated orthogonal to the orientation of the track trajectory. Lastly, an initial track model generated over the candidate points, which were detected by GMM-EM (Gaussian Mixture Model-Expectation Maximization)-based clustering, grows strip-wise to capture all track points of interest and is thus converted into a geometric track model in a tracking-by-detection framework. The proposed railway track tracking process has the following key features: it reduces the complexity of detecting track points by using a hypothetical track model, and it enhances the efficiency of the track modeling process by simultaneously capturing track points and modeling tracks, minimizing data processing time and cost. The proposed method was developed in the C++ programming language and was evaluated on LiDAR data acquired by an MMS over an urban railway track area with a complex railway scene.
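The GMM-EM clustering step can be illustrated in one dimension, e.g. separating cross-track point offsets into two rail clusters. A minimal NumPy EM sketch with synthetic data (the cluster positions, including the 1.435 m standard gauge, are invented for the example, not taken from the paper):

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Two-component 1-D Gaussian mixture fitted with plain EM."""
    mu = np.array([x.min(), x.max()])          # crude initialization
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        pdf = w / np.sqrt(2 * np.pi * var) * np.exp(
            -0.5 * (x[:, None] - mu) ** 2 / var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        n_k = r.sum(axis=0)
        w = n_k / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
    return w, mu, var

# Synthetic cross-track offsets of laser points around two rails
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.05, 300),
                    rng.normal(1.435, 0.05, 300)])
w, mu, var = em_gmm_1d(x)
```

The recovered component means then serve as candidate rail positions that a strip-wise tracker could grow along the trajectory.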

Analytical Methods of Levoglucosan, a Tracer for Cellulose in Biomass Burning, by Four Different Techniques

  • Bae, Min-Suk;Lee, Ji-Yi;Kim, Yong-Pyo;Oak, Min-Ho;Shin, Ju-Seon;Lee, Kwang-Yul;Lee, Hyun-Hee;Lee, Sun-Young;Kim, Young-Joon
    • Asian Journal of Atmospheric Environment
    • /
    • v.6 no.1
    • /
    • pp.53-66
    • /
    • 2012
  • A comparison of analytical approaches for Levoglucosan ($C_6H_{10}O_5$, commonly formed from the pyrolysis of carbohydrates such as cellulose), used as a molecular marker for biomass burning, is made between four different analytical systems: 1) a spectrothermography technique evaluating carbon thermograms with an Elemental Carbon & Organic Carbon Analyzer, 2) a mass spectrometry technique using a Gas Chromatography/mass spectrometer (GC/MS), 3) an Aerosol Mass Spectrometer (AMS) for identifying the particle size distribution and chemical composition, and 4) two-dimensional Gas Chromatography with Time-of-Flight mass spectrometry (GC${\times}$GC-TOFMS) for defining the signature of Levoglucosan in the chemical analytical process. First, a spectrothermogram, defined as the graphical representation of carbon measured as a function of temperature, is obtained during the thermal separation process. GC/MS can detect mass fragment ions of Levoglucosan characterized by base peaks at m/z 60 and 73 in mass fragmentograms after methylation, and at m/z 217 and 204 for trimethylsilyl derivatives (TMS derivatives). AMS can be used to analyze the base peaks at m/z 60.021 and 73.029 in mass fragmentograms with a multiple-peak Gaussian curve-fit algorithm. In the analysis of TMS derivatives by GC${\times}$GC-TOFMS, m/z 73 is detected as the base ion for the identification of Levoglucosan; m/z 217 and 204 are also observed, together with m/z 333. Although the ratios of m/z 217 and m/z 204 to the base ion (m/z 73) in the GC${\times}$GC-TOFMS mass spectrum are lower than those of GC/MS, Levoglucosan can be separated and characterized from D-(-)-Ribose in a mixture of sugar compounds. Finally, the environmental significance of Levoglucosan is discussed with respect to health effects, offering important opportunities for clinical and potential epidemiological research into reducing the incidence of cardiovascular and respiratory diseases.
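The Gaussian curve fit used to resolve exact masses such as m/z 60.021 can be illustrated, in simplified single-peak form, by Caruana's log-parabola method: taking the logarithm of a Gaussian peak gives a parabola whose coefficients yield the amplitude, centre, and width. A sketch on a noise-free synthetic peak (not the AMS multiple-peak algorithm itself):

```python
import numpy as np

def fit_gaussian_peak(x, y):
    """Estimate (amplitude, centre, sigma) of one Gaussian peak by fitting
    a parabola to log(y) (Caruana's method); x is centred for stability."""
    x0 = x.mean()
    c2, c1, c0 = np.polyfit(x - x0, np.log(y), 2)
    sigma = np.sqrt(-1.0 / (2.0 * c2))
    mu_c = c1 * sigma**2                      # centre in shifted coordinates
    amp = np.exp(c0 + mu_c**2 / (2.0 * sigma**2))
    return amp, x0 + mu_c, sigma

# Noise-free synthetic peak placed at the m/z 60.021 fragment mentioned above
x = np.linspace(60.00, 60.04, 41)
y = 5.0 * np.exp(-0.5 * ((x - 60.021) / 0.004) ** 2)
amp, mu, sigma = fit_gaussian_peak(x, y)
```

A multiple-peak version would fit a sum of such Gaussians so that neighbouring fragments at the same nominal mass can be separated.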

The Algorithm of Protein Spots Segmentation using Watersheds-based Hierarchical Threshold (Watersheds 기반 계층적 이진화를 이용한 단백질 반점 분할 알고리즘)

  • Kim Youngho;Kim JungJa;Kim Daehyun;Won Yonggwan
    • The KIPS Transactions:PartB
    • /
    • v.12B no.3 s.99
    • /
    • pp.239-246
    • /
    • 2005
  • For protein search and analysis, biologists must perform 2DGE (two-dimensional gel electrophoresis) experiments, which produce two-dimensional images. The 2DGE image is the most widely used means of isolating a target protein by comparative analysis of the protein spot pattern in the gel plane. In protein spot analysis, the protein spots spread across the 2D gel plane are first segmented by image processing, and important protein spots can then be found through comparative analysis against the protein pattern of a control group. Among spot-detection algorithms, earlier 2DGE image analysis applied Gaussian fitting, whereas more recently the Watersheds region-based segmentation algorithm, based on morphological segmentation, has been applied. Watersheds has the benefit of quickly segmenting the region of interest in a large image, but suffers from under-segmentation and over-segmentation of spot areas where the gray level is continuous. This drawback was partly solved by introducing marker points, but still requires a split-and-merge process. This paper introduces a novel marker search for protein spots using a watersheds-based hierarchical threshold, which can resolve the problems of marker-driven watersheds.
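A hierarchical-threshold marker search can be sketched as follows: threshold the image at descending levels, and let each connected region that contains no previously found marker contribute a new marker at its brightest pixel. This is a simplified illustration, not the paper's algorithm; a full method would feed these markers to a watershed transform:

```python
import numpy as np
from collections import deque

def components(mask):
    """4-connected components of a boolean mask (BFS labelling)."""
    labels = np.full(mask.shape, -1, dtype=int)
    comps = []
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j] != -1:
            continue
        comp, q = [], deque([(i, j)])
        labels[i, j] = len(comps)
        while q:
            a, b = q.popleft()
            comp.append((a, b))
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                na, nb = a + da, b + db
                if (0 <= na < mask.shape[0] and 0 <= nb < mask.shape[1]
                        and mask[na, nb] and labels[na, nb] == -1):
                    labels[na, nb] = len(comps)
                    q.append((na, nb))
        comps.append(comp)
    return comps

def hierarchical_markers(img, levels):
    """Descend through thresholds; a connected region containing no existing
    marker spawns a new marker at its brightest pixel."""
    markers = []
    for t in sorted(levels, reverse=True):
        for comp in components(img >= t):
            if not any(m in comp for m in markers):
                markers.append(max(comp, key=lambda p: img[p]))
    return markers

# Two synthetic "spots" of different intensity on one gel image
yy, xx = np.mgrid[0:40, 0:40]
img = (np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 20.0)
       + 0.8 * np.exp(-((yy - 28) ** 2 + (xx - 30) ** 2) / 20.0))
marks = hierarchical_markers(img, levels=[0.9, 0.5, 0.2])
```

Because a weaker spot only appears at a lower threshold level, the hierarchy finds one marker per spot without over-segmenting the brighter one.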

Gaussian Noise Reduction Method using Adaptive Total Variation : Application to Cone-Beam Computed Tomography Dental Image (적응형 총변이 기법을 이용한 가우시안 잡음 제거 방법: CBCT 치과 영상에 적용)

  • Kim, Joong-Hyuk;Kim, Jung-Chae;Kim, Kee-Deog;Yoo, Sun-K.
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.49 no.1
    • /
    • pp.29-38
    • /
    • 2012
  • Noise generated in the acquisition of medical images obstructs image interpretation and diagnosis. To restore the true image from an image polluted by noise, the total variation optimization algorithm was proposed by Rudin, Osher, and Fatemi (ROF). This method removes noise by balancing regularity and fidelity. However, the blurring of boundary areas that arises during the iterative operation cannot be avoided. In this paper, we propose an adaptive total variation method that maps the control parameter through a proposed transfer function to minimize boundary error. The proposed transfer function is determined by the noise variance and the local properties of the image. The proposed method was applied to 464 tooth images. To evaluate the proposed method's performance, PSNR, an indicator of the ratio of signal power to noise power, was used. The experimental results show that the proposed method outperforms the other methods.

Extraction of Renal Glomeruli Region using Genetic Algorithm (유전적 알고리듬을 이용한 신장 사구체 영역의 추출)

  • Kim, Eung-Kyeu
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.2
    • /
    • pp.30-39
    • /
    • 2009
  • Extraction of the glomeruli region plays a very important role in diagnosing nephritis automatically. However, it is not easy to extract the glomeruli region correctly, because the difference between the glomeruli region and other regions is not obvious, and unevenness is introduced during the sampling and imaging processes. In this study, a new method for extracting the renal glomeruli region using a genetic algorithm is proposed. First, low- and high-resolution images are obtained using a Laplacian-of-Gaussian filter with ${\sigma}=2.1$ and ${\sigma}=1.8$, and binary images are then obtained by setting the threshold value to zero. Border edges are detected from the low-resolution images, and the border of a glomerulus is expressed as a closed B-spline curve. The parameters that determine this closed curve are searched with a genetic algorithm, which prevents noise and the border lines from breaking off midway. Next, to obtain more precise glomerular border edges, the number of node points is increased and corrected in order from eight to sixteen to thirty-two using the high-resolution images. Finally, the validity of the proposed method is demonstrated by applying it to real images.
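The Laplacian-of-Gaussian filtering step can be sketched directly: build a discrete LoG kernel for a given σ and convolve, then threshold at zero as in the abstract. The σ value follows the abstract; everything else (naive zero-padded convolution, toy image) is illustrative:

```python
import numpy as np

def log_kernel(sigma):
    """Discrete Laplacian-of-Gaussian kernel, zero-sum so flat areas map to 0."""
    size = int(2 * np.ceil(3 * sigma) + 1)
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    k = (xx**2 + yy**2 - 2 * sigma**2) / sigma**4 * g
    return k - k.mean()

def convolve_same(img, k):
    """Naive zero-padded 'same' convolution (kernel here is symmetric)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * k)
    return out

# Bright blob: the LoG response at its centre is strongly negative,
# and the zero crossings trace its border
img = np.zeros((21, 21)); img[8:13, 8:13] = 1.0
resp = convolve_same(img, log_kernel(2.1))
```

Thresholding `resp` at zero yields the binary image from which the closed B-spline border is then fitted.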