• Title/Summary/Keyword: Square Root


Developing GPS Code Multipath Grid Map (CMGM) of Domestic Reference Station (국내 기준국의 GPS 코드 다중경로오차 격자지도 생성)

  • Gyu Min Kim;Gimin Kim;Chandeok Park
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.13 no.1
    • /
    • pp.85-92
    • /
    • 2024
  • This study develops a Global Positioning System (GPS) Code Multipath Grid Map (CMGM) for each domestic reference station from the code multipath extracted from its measurement data. Multipath arises from signal reflection/refraction caused by obstacles around the receiver antenna and is a major error source that cannot be eliminated by differencing. From two days of receiver-independent exchange format (RINEX) data, the code multipath of each satellite tracking arc is extracted. These code multipath data undergo bias correction and interpolation to yield a CMGM indexed by azimuth and elevation angle. The multipath-mitigation effect of the CMGM is then quantified by the improvement in the Root Mean Square (RMS) of the averaged pseudo multipath, and the single point positioning (SPP) accuracy is analyzed in terms of the RMS of the horizontal and vertical errors. Over two weeks in February 2023, the RMSs of the averaged pseudo multipath for five reference stations decreased by about 40% on average after CMGM application, while the SPP accuracies improved by about 7% for horizontal errors and about 10% for vertical errors on average. The overall quantitative analysis indicates that, by using measurement data whose code multipath has been corrected and mitigated by the CMGM, the proposed approach can reduce the convergence time of Differential Global Navigation Satellite System (DGNSS), Real-Time Kinematic (RTK), and Precise Point Positioning (PPP)-RTK correction information in real time.
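
As an illustration of how such a grid map can be built and applied, the hedged Python sketch below bins code-multipath residuals by azimuth and elevation and looks up the resulting correction. The 1-degree bin size, simple averaging, and array names are assumptions made only for illustration; the authors' actual procedure additionally involves bias correction and interpolation.

```python
# Minimal sketch (not the authors' implementation): build an azimuth/elevation
# grid map from code-multipath residuals and look up corrections from it.
import numpy as np

def build_cmgm(az_deg, el_deg, multipath_m, az_step=1.0, el_step=1.0):
    """Average code-multipath residuals (m) into an azimuth/elevation grid."""
    n_az, n_el = int(360 / az_step), int(90 / el_step)
    grid_sum = np.zeros((n_az, n_el))
    grid_cnt = np.zeros((n_az, n_el))
    i = (np.asarray(az_deg) % 360 / az_step).astype(int)
    j = np.clip((np.asarray(el_deg) / el_step).astype(int), 0, n_el - 1)
    np.add.at(grid_sum, (i, j), multipath_m)
    np.add.at(grid_cnt, (i, j), 1)
    with np.errstate(invalid="ignore", divide="ignore"):
        return np.where(grid_cnt > 0, grid_sum / grid_cnt, np.nan)

def apply_cmgm(cmgm, az_deg, el_deg, pseudorange_m, az_step=1.0, el_step=1.0):
    """Subtract the gridded multipath correction from one pseudorange."""
    i = int(az_deg % 360 / az_step)
    j = min(int(el_deg / el_step), cmgm.shape[1] - 1)
    corr = cmgm[i, j]
    return pseudorange_m - (corr if np.isfinite(corr) else 0.0)
```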

Forecasting Fish Import Using Deep Learning: A Comprehensive Analysis of Two Different Fish Varieties in South Korea

  • Abhishek Chaudhary;Sunoh Choi
    • Smart Media Journal
    • /
    • v.12 no.11
    • /
    • pp.134-144
    • /
    • 2023
  • Deep Learning (DL) technology is now being used in several government departments. South Korea imports a large amount of seafood, and if the demand for fishery products is not predicted accurately, shortages can occur and prices may rise sharply. South Korea's Ministry of Oceans and Fisheries is therefore attempting to predict seafood imports accurately using deep learning. This paper presents a solution for fish import prediction in South Korea using the Long Short-Term Memory (LSTM) method. A large gap was found between the sum of consumption and exports and the sum of production, especially for two species, hairtail and pollock. Import prediction with advanced deep learning methods is proposed in this research to fill that gap. The research focuses on predicting the import amount more precisely using Machine Learning (ML) and Deep Learning methods. Two Deep Learning methods were chosen for the prediction, an Artificial Neural Network (ANN) and LSTM, and a Machine Learning method was also selected for comparison between DL and ML. Root Mean Square Error (RMSE), which measures the difference between predicted and actual values, was selected as the error metric, and the results were compared using average RMSE scores and percentage differences. The LSTM had the lowest RMSE score, indicating the most accurate predictions, whereas the ML model's RMSE was higher, indicating lower prediction accuracy. In addition, Google Trends search data was used as a new feature to assess its impact on the prediction outcomes; it had a positive effect, lowering the RMSE values and increasing the prediction accuracy.
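
The following minimal sketch, with assumed hyperparameters and synthetic data in place of the real import statistics, shows the kind of sliding-window LSTM regression and RMSE evaluation the abstract describes (using tf.keras; the paper's exact network, features, and Google Trends input are not reproduced).

```python
# Minimal sketch: LSTM regression on a monthly import series with a sliding
# window, evaluated by RMSE. Data and hyperparameters are placeholders.
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=12):
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y            # shape (samples, timesteps, 1 feature)

rng = np.random.default_rng(0)
imports = rng.normal(1000, 100, size=120).astype("float32")  # stand-in for real data

X, y = make_windows(imports)
split = int(0.8 * len(X))
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=50, verbose=0)

pred = model.predict(X[split:], verbose=0).ravel()
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"test RMSE: {rmse:.2f}")
```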

Service life evaluation of HPC with increasing surface chlorides from field data in different sea conditions

  • Jong-Suk Lee;Keun-Hyeok Yang;Yong-Sik Yoon;Jin-Won Nam;Seug-Jun Kwon
    • Advances in Concrete Construction
    • /
    • v.16 no.3
    • /
    • pp.155-167
    • /
    • 2023
  • Chloride penetration into concrete behaves differently depending on the mix proportions and local exposure conditions, even in nominally the same environment, so quantifying the surface chloride content is very important for durability design. The surface chloride content, a key input parameter analogous to external loading in structural safety design, increases with exposure period. In this study, concrete samples containing Ordinary Portland Cement (OPC), Ground Granulated Blast Furnace Slag (GGBFS), and Fly Ash (FA) were exposed to submerged, tidal, and splash zones for 5 years, and the change in surface chloride content with exposure period was evaluated. The surface chloride contents were obtained from chloride profiles based on Fick's 2nd law, and regression analyses were performed with exponential and square-root functions. After 5 years of exposure in the submerged and tidal zones, the surface chloride content of the OPC concrete increased to 6.4-7.3 kg/m3, while that of the GGBFS concrete was evaluated as 7.3-11.5 kg/m3; the higher the GGBFS replacement ratio, the higher the evaluated surface chloride content. The surface chloride content of the FA concrete ranged from 6.7 to 9.9 kg/m3, intermediate between the OPC and GGBFS concretes. In the splash zone, the surface chloride contents of all specimens were 0.59-0.75 kg/m3, the lowest of all exposure conditions. Experimental constants for the durability design against chloride ingress were derived through regression analysis over the exposure period. For the concrete with a GGBFS replacement ratio of 50%, the rate of increase in surface chloride content decreased rapidly as the water-to-binder ratio increased.
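
The regression step can be sketched as follows; the square-root and exponential build-up forms and the data points below are assumptions chosen only to illustrate fitting surface chloride content against exposure period with scipy.

```python
# Minimal sketch (illustrative data): regress surface chloride content C_s(t)
# against exposure time with square-root and exponential build-up forms.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.5, 1.0, 2.0, 3.0, 5.0])    # exposure period (years)
cs = np.array([2.1, 3.0, 4.4, 5.5, 6.8])    # surface chloride (kg/m3), illustrative

sqrt_model = lambda t, a: a * np.sqrt(t)
exp_model = lambda t, a, b: a * (1.0 - np.exp(-b * t))

(a_sqrt,), _ = curve_fit(sqrt_model, t, cs)
(a_exp, b_exp), _ = curve_fit(exp_model, t, cs, p0=[7.0, 0.5])

for name, pred in [("sqrt", sqrt_model(t, a_sqrt)), ("exp", exp_model(t, a_exp, b_exp))]:
    rmse = np.sqrt(np.mean((pred - cs) ** 2))
    print(f"{name} fit RMSE = {rmse:.3f} kg/m3")
```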

Robust Radiometric and Geometric Correction Methods for Drone-Based Hyperspectral Imaging in Agricultural Applications

  • Hyoung-Sub Shin;Seung-Hwan Go;Jong-Hwa Park
    • Korean Journal of Remote Sensing
    • /
    • v.40 no.3
    • /
    • pp.257-268
    • /
    • 2024
  • Drone-mounted hyperspectral sensors (DHSs) have revolutionized remote sensing in agriculture by offering a cost-effective and flexible platform for high-resolution spectral data acquisition. Their ability to capture data at low altitudes minimizes atmospheric interference, enhancing their utility in agricultural monitoring and management. This study focused on addressing the challenges of radiometric and geometric distortions in preprocessing drone-acquired hyperspectral data. Radiometric correction, using the empirical line method (ELM) and spectral reference panels, effectively removed sensor noise and variations in solar irradiance, resulting in accurate surface reflectance values. Notably, the ELM correction improved reflectance for measured reference panels by 5-55%, resulting in a more uniform spectral profile across wavelengths, further validated by high correlations (0.97-0.99), despite minor deviations observed at specific wavelengths for some reflectors. Geometric correction, utilizing a rubber sheet transformation with ground control points, successfully rectified distortions caused by sensor orientation and flight path variations, ensuring accurate spatial representation within the image. The effectiveness of geometric correction was assessed using root mean square error (RMSE) analysis, revealing minimal errors in both the east-west (0.00 to 0.081 m) and north-south (0.00 to 0.076 m) directions. The overall position RMSE of 0.031 m across 100 points demonstrates high geometric accuracy, exceeding industry standards. Additionally, image mosaicking was performed to create a comprehensive representation of the study area. These results demonstrate the effectiveness of the applied preprocessing techniques and highlight the potential of DHSs for precise crop health monitoring and management in smart agriculture. However, further research is needed to address challenges related to data dimensionality, sensor calibration, and reference data availability, as well as exploring alternative correction methods and evaluating their performance in diverse environmental conditions to enhance the robustness and applicability of hyperspectral data processing in agriculture.
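
A hedged sketch of the two computations named above: a per-band empirical line method fit from reference panels and the positional RMSE of geometric residuals at check points. The panel reflectances, digital numbers, and residuals are placeholders, not values from the study.

```python
# Minimal sketch: per-band empirical line method (ELM) gain/offset from
# reference panels, plus positional RMSE from check-point residuals.
import numpy as np

# Known panel reflectances and measured at-sensor digital numbers for one band
panel_reflectance = np.array([0.05, 0.20, 0.50])   # assumed panel set
panel_dn = np.array([310.0, 1180.0, 2950.0])        # illustrative DNs

gain, offset = np.polyfit(panel_dn, panel_reflectance, 1)        # reflectance = gain*DN + offset
surface_reflectance = gain * np.array([800.0, 1500.0]) + offset  # apply to image DNs

# Geometric accuracy: RMSE of east-west / north-south residuals at check points
dx = np.array([0.02, -0.03, 0.01, 0.04])    # m, illustrative residuals
dy = np.array([-0.01, 0.02, 0.03, -0.02])   # m
rmse_x = np.sqrt(np.mean(dx ** 2))
rmse_y = np.sqrt(np.mean(dy ** 2))
rmse_xy = np.sqrt(np.mean(dx ** 2 + dy ** 2))
print(surface_reflectance, rmse_x, rmse_y, rmse_xy)
```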

Optimization of forensic identification through 3-dimensional imaging analysis of labial tooth surface using open-source software

  • Arofi Kurniawan;Aspalilah Alias;Mohd Yusmiaidil Putera Mohd Yusof;Anand Marya
    • Imaging Science in Dentistry
    • /
    • v.54 no.1
    • /
    • pp.63-69
    • /
    • 2024
  • Purpose: The objective of this study was to determine the minimum number of teeth in the anterior dental arch that would yield accurate results for individual identification in forensic contexts. Materials and Methods: The study involved the analysis of 28 sets of 3-dimensional (3D) point cloud data, focused on the labial surface of the anterior teeth. These datasets were superimposed within each group in both genuine and imposter pairs. Group A incorporated data from the right to the left central incisor, group B from the right to the left lateral incisor, and group C from the right to the left canine. A comprehensive analysis was conducted, including the evaluation of root mean square error (RMSE) values and the distances resulting from the superimposition of dental arch segments. All analyses were conducted using CloudCompare version 2.12.4 (Telecom ParisTech and R&D, Kyiv, Ukraine). Results: The distances between genuine pairs in groups A, B, and C displayed an average range of 0.153 to 0.184 mm. In contrast, distances for imposter pairs ranged from 0.338 to 0.522 mm. RMSE values for genuine pairs showed an average range of 0.166 to 0.177, whereas those for imposter pairs ranged from 0.424 to 0.638. A statistically significant difference was observed between the distances of genuine and imposter pairs (P<0.05). Conclusion: The exceptional performance observed for the labial surfaces of anterior teeth underscores their potential as a dependable criterion for accurate 3D dental identification. This was achieved by assessing a minimum of 4 teeth.
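
The distance and RMSE metrics can be illustrated as below; the registration itself (performed in CloudCompare in the study) is assumed to have been done already, and the two point clouds are random placeholders.

```python
# Minimal sketch: after two labial-surface point clouds have been superimposed,
# compute nearest-neighbour cloud-to-cloud distances and their RMSE.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
reference = rng.random((5000, 3))                          # registered reference cloud
compared = reference + rng.normal(0, 0.0002, (5000, 3))    # genuine-pair-like copy

dist, _ = cKDTree(reference).query(compared)   # point-to-nearest-point distances
mean_distance = dist.mean()
rmse = np.sqrt(np.mean(dist ** 2))
print(f"mean C2C distance = {mean_distance:.4f}, RMSE = {rmse:.4f}")
```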

A Study on Estimating Earthquake Magnitudes Based on the Observed S-Wave Seismograms at the Near-Source Region (근거리 지진관측자료의 S파를 이용한 지진규모 평가 연구)

  • Yun, Kwan-Hee;Choi, Shin-Kyu;Lee, Kang-Ryel
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.28 no.3
    • /
    • pp.121-128
    • /
    • 2024
  • There are growing concerns that the recently implemented Earthquake Early Warning service overestimates the rapidly provided earthquake magnitudes (M). As a result, the predicted damage unnecessarily activates earthquake protection systems for critical facilities and lifeline infrastructure located far away. This study aims to improve the estimation accuracy of M by incorporating the S-wave seismograms observed in the near-source region after removing their site effects in real time by filtering in the time domain. The ensemble of horizontal S-wave spectra from at least five site-effect-free seismograms is calculated and normalized to a hypocentric target distance (21.54 km) using the distance attenuation model Q(f) = 348f^0.52 and a cross-over distance of 50 km. The natural logarithmic mean of the ensemble S-wave spectra is then fitted to Brune's source spectrum to obtain the best estimates of M and stress drop (SD), with a fitting weight of 1/standard deviation. The proposed methodology was tested on 18 recent inland earthquakes in South Korea, and the condition of at least five near-source records is sufficiently fulfilled within an epicentral distance of 30 km. The natural logarithmic standard deviation of the observed S-wave ensemble spectra was calculated to be 0.53 for 1-10 Hz using only near-source records, compared to 0.42 using all records. The results show that the root-mean-square errors of M and ln(SD) are approximately 0.17 and 0.6, respectively. This accuracy corresponds to a confidence interval of 0.4-2.3 times the Peak Ground Acceleration values in the distant range.
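
A minimal sketch of the spectral fitting step, assuming a synthetic ensemble-mean spectrum: Brune's omega-squared model is fitted in log space with 1/standard-deviation weights. Converting the fitted (Omega0, fc) pair to magnitude and stress drop requires additional constants (density, shear-wave velocity, radiation pattern) that are omitted here.

```python
# Minimal sketch (synthetic data): weighted fit of Brune's source model,
# Omega(f) = Omega0 / (1 + (f/fc)^2), to an ensemble-mean S-wave spectrum.
import numpy as np
from scipy.optimize import curve_fit

def brune(f, omega0, fc):
    return omega0 / (1.0 + (f / fc) ** 2)

f = np.logspace(0, 1, 40)                 # 1-10 Hz band used in the paper
rng = np.random.default_rng(2)
sigma_ln = np.full_like(f, 0.5)           # ln-standard deviation of the ensemble spectra
obs = brune(f, 1.0e-3, 3.0) * np.exp(rng.normal(0, sigma_ln))

# Fit in log space; the 1/standard-deviation weighting enters via curve_fit's sigma
log_model = lambda f, lw0, fc: np.log(brune(f, np.exp(lw0), fc))
popt, _ = curve_fit(log_model, f, np.log(obs), p0=[np.log(1e-3), 2.0], sigma=sigma_ln)
omega0_hat, fc_hat = np.exp(popt[0]), popt[1]
print(f"Omega0 = {omega0_hat:.2e}, corner frequency fc = {fc_hat:.2f} Hz")
```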

Determination of High-pass Filter Frequency with Deep Learning for Ground Motion (딥러닝 기반 지반운동을 위한 하이패스 필터 주파수 결정 기법)

  • Lee, Jin Koo;Seo, JeongBeom;Jeon, SeungJin
    • Journal of the Earthquake Engineering Society of Korea
    • /
    • v.28 no.4
    • /
    • pp.183-191
    • /
    • 2024
  • Accurate seismic vulnerability assessment requires high-quality ground motion data in large quantities. Recorded ground motion time series contain not only the seismic waves but also background noise, so it is crucial to determine the high-pass cut-off frequency that reduces the background noise. Traditional methods for determining the high-pass filter frequency rely on human inspection, such as comparing the noise and signal Fourier Amplitude Spectra (FAS), fitting an f^2 trend line, and inspecting the displacement curve after filtering. However, these methods are subject to human error and unsuitable for automating the process. This study used a deep learning approach to determine the high-pass filter frequency. We used the Mel-spectrogram for feature extraction and the mixup technique to overcome the lack of data. We selected convolutional neural network (CNN) models such as ResNet, DenseNet, and EfficientNet for transfer learning, and ViT and DeiT as transformer-based models. ResNet showed the highest performance, with a coefficient of determination (R^2) of 0.977 and the lowest mean absolute error (MAE) and root mean square error (RMSE) of 0.006 and 0.074, respectively. When applied to a seismic event and compared to the traditional methods, the high-pass filter frequency determined by the deep learning method differed by only 0.1 Hz, demonstrating that it can be used as a replacement for the traditional methods. We anticipate that this study will pave the way for automating ground motion processing, enabling systems that handle large amounts of data efficiently.
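
A hedged tf.keras sketch of the regression setup described above: a ResNet backbone mapping a Mel-spectrogram image to a cut-off frequency, evaluated with MAE and RMSE. The input shape, data, and hyperparameters are assumptions; pretrained weights, mixup, and the spectrogram extraction are omitted.

```python
# Minimal sketch: ResNet-based regression of the high-pass cut-off frequency
# from Mel-spectrogram images. Real training would use labelled spectrograms
# and pretrained ("imagenet") weights for transfer learning.
import numpy as np
import tensorflow as tf

IMG = (128, 128, 3)   # assumed Mel-spectrogram image size
base = tf.keras.applications.ResNet50(include_top=False, weights=None, input_shape=IMG)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),   # regress the cut-off frequency (Hz)
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.MeanAbsoluteError(),
                       tf.keras.metrics.RootMeanSquaredError()])

# Placeholder data standing in for labelled ground-motion spectrograms
x = np.random.rand(8, *IMG).astype("float32")
y = np.random.uniform(0.05, 0.5, size=(8, 1)).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
print(model.predict(x[:1], verbose=0))
```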

Sensitivity of Data Assimilation Configuration in WAVEWATCH III applying Ensemble Optimal Interpolation

  • Hye Min Lim;Kyeong Ok Kim;Hanna Kim;Sang Myeong Oh;Young Ho Kim
    • Journal of the Korean Earth Science Society
    • /
    • v.45 no.4
    • /
    • pp.349-362
    • /
    • 2024
  • We aimed to evaluate the effectiveness of ensemble optimal interpolation (EnOI) in improving the analysis of significant wave height (SWH) within wave models using satellite-derived SWH data. Satellite observations revealed higher SWH in mid-latitude regions (30° to 60° in both hemispheres) due to stronger winds, whereas equatorial and coastal areas exhibited lower wave heights, attributed to calmer winds and land interactions. Root mean square error (RMSE) analysis of the control experiment without data assimilation revealed significant discrepancies in high-latitude areas, underscoring the need for enhanced analysis techniques. The data assimilation experiments showed substantial RMSE reductions, particularly in high-latitude regions, confirming the effectiveness of the technique in enhancing the quality of the analysis fields. Sensitivity experiments with varying ensemble sizes showed modest global improvements in analysis fields with larger ensembles. Sensitivity experiments based on different decorrelation length scales demonstrated significant RMSE improvements at larger scales, particularly in the Southern Ocean and Northwest Pacific, although some areas exhibited slight RMSE increases, suggesting the need for region-specific tuning of assimilation parameters. Reducing the observation error covariance improved analysis quality in certain regions, including the equator, but generally degraded it in others. Rescaling the background error covariance (BEC) resulted in overall improvements in analysis fields, though sensitivity to regional variability persisted. These findings underscore the importance of data assimilation, parameter tuning, and BEC rescaling in enhancing the quality and reliability of wave analysis fields, and emphasize the necessity of region-specific adjustments to optimize assimilation performance. These insights are valuable for understanding ocean dynamics, improving navigation, and supporting coastal management practices.
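
The core EnOI update can be sketched in a few lines of numpy, assuming a toy one-dimensional state and a static ensemble; localization and the decorrelation length scales tuned in the paper are omitted here.

```python
# Minimal sketch: one ensemble optimal interpolation (EnOI) update of an SWH
# field, x_a = x_b + alpha*P*H^T (H P H^T + R)^(-1) (y - H x_b), where the
# background error covariance P comes from a static ensemble of anomalies.
import numpy as np

rng = np.random.default_rng(3)
n_grid, n_ens, n_obs = 200, 30, 15

xb = rng.normal(2.0, 0.5, n_grid)                    # background SWH (m)
ens = xb + rng.normal(0, 0.4, (n_ens, n_grid))       # static ensemble
A = (ens - ens.mean(axis=0)).T / np.sqrt(n_ens - 1)  # anomaly matrix (n_grid x n_ens)

obs_idx = rng.choice(n_grid, n_obs, replace=False)
H = np.zeros((n_obs, n_grid)); H[np.arange(n_obs), obs_idx] = 1.0
y = xb[obs_idx] + rng.normal(0, 0.2, n_obs)          # satellite SWH observations
R = 0.2 ** 2 * np.eye(n_obs)                         # observation error covariance
alpha = 1.0                                          # ensemble-spread scaling factor

PHt = alpha * A @ (H @ A).T
K = PHt @ np.linalg.inv(H @ PHt + R)                 # Kalman-type gain
xa = xb + K @ (y - H @ xb)                           # analysis field
print("innovation RMS before/after:",
      np.sqrt(np.mean((y - H @ xb) ** 2)), np.sqrt(np.mean((y - H @ xa) ** 2)))
```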

Hybrid machine learning with HHO method for estimating ultimate shear strength of both rectangular and circular RC columns

  • Quang-Viet Vu;Van-Thanh Pham;Dai-Nhan Le;Zhengyi Kong;George Papazafeiropoulos;Viet-Ngoc Pham
    • Steel and Composite Structures
    • /
    • v.52 no.2
    • /
    • pp.145-163
    • /
    • 2024
  • This paper presents six novel hybrid machine learning (ML) models that combine support vector machines (SVM), Decision Tree (DT), Random Forest (RF), Gradient Boosting (GB), extreme gradient boosting (XGB), and categorical gradient boosting (CGB) with the Harris Hawks Optimization (HHO) algorithm. These models, namely HHO-SVM, HHO-DT, HHO-RF, HHO-GB, HHO-XGB, and HHO-CGB, are designed to predict the ultimate shear strength of both rectangular and circular reinforced concrete (RC) columns. The prediction models are established using a comprehensive database consisting of 325 experimental data for rectangular columns and 172 experimental data for circular columns. The ML model hyperparameters are optimized through a combination of cross-validation and the HHO. The performance of the hybrid ML models is evaluated and compared using various metrics, ultimately identifying the HHO-CGB model as the top-performing model for predicting the ultimate shear strength of both rectangular and circular RC columns. The mean R-value and mean a20-index are relatively high, reaching 0.991 and 0.959, respectively, while the mean absolute error and root mean square error are low (10.302 kN and 27.954 kN, respectively). Another comparison is conducted with four existing formulas to further validate the efficiency of the proposed HHO-CGB model. The Shapley Additive Explanations method is applied to analyze the contribution of each variable to the output within the HHO-CGB model, providing insights into the local and global influence of variables. The analysis reveals that the depth of the column, length of the column, and axial loading exert the most significant influence on the ultimate shear strength of RC columns. A user-friendly graphical interface tool is then developed based on the HHO-CGB model to facilitate practical and cost-effective usage.
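
The reported metrics can be computed with the short sketch below; the HHO search and the boosting models themselves are not reproduced, and the a20-index is taken here, as commonly defined, as the fraction of samples whose measured-to-predicted ratio lies between 0.8 and 1.2.

```python
# Minimal sketch: evaluation metrics of the kind the paper reports
# (R, a20-index, MAE, RMSE) for shear-strength predictions in kN.
import numpy as np

def evaluate(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    r = np.corrcoef(y_true, y_pred)[0, 1]             # correlation coefficient R
    ratio = y_true / y_pred
    a20 = np.mean((ratio >= 0.8) & (ratio <= 1.2))    # a20-index
    mae = np.mean(np.abs(y_true - y_pred))            # mean absolute error (kN)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))   # root mean square error (kN)
    return {"R": r, "a20": a20, "MAE": mae, "RMSE": rmse}

# Illustrative shear strengths (kN), not data from the paper
print(evaluate([250, 410, 620, 880], [240, 430, 600, 910]))
```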

Dielectric properties of KTN(80/20) thin films with PZT buffer layer for tunable microwave devices

  • Kyeong-Min Kim;Sam-Haeng Lee;Byeong-Jun Park;Joo-Seok Park;Sung-Gap Lee
    • Journal of Ceramic Processing Research
    • /
    • v.23 no.1
    • /
    • pp.29-32
    • /
    • 2022
  • K(Ta0.80Nb0.20)O3 films with a Pb(Zr0.52Ti0.48)O3 (PZT) buffer layer were fabricated on Pt/Ti/SiO2/Si substrates by the sol-gel and spin-coating methods. Their structural and electrical properties were measured as a function of sintering temperature, and their applicability as microwave materials was investigated. All K(Ta0.80Nb0.20)O3 films showed a cubic crystal structure. The average grain size was about 123-193 nm, and the average thickness of the K(Ta0.80Nb0.20)O3 films was approximately 366 nm. According to the AFM results, the root mean square roughness (Rrms) of all K(Ta0.80Nb0.20)O3 films was around 6 nm. All K(Ta0.80Nb0.20)O3 films showed increasing dielectric loss with increasing frequency, and the tunability under an applied DC voltage tended to decrease as the sintering temperature increased. The tunability and temperature coefficient of the K(Ta0.80Nb0.20)O3 film sintered at 700 ℃ showed good values of 22.1% at 10 V and -0.594/℃, respectively.
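
Two of the quantities reported above are simple root-mean-square-style formulas; the sketch below computes RMS roughness from AFM height deviations and tunability from capacitance with and without DC bias, using illustrative numbers only.

```python
# Minimal sketch (illustrative data): RMS surface roughness from AFM height
# samples and dielectric tunability from capacitance under zero and applied bias.
import numpy as np

heights_nm = np.array([2.0, -3.5, 6.1, -5.8, 4.2, -1.9])   # AFM height deviations (nm)
r_rms = np.sqrt(np.mean((heights_nm - heights_nm.mean()) ** 2))

c_zero_bias, c_bias = 120.0, 93.5                           # capacitance (pF), illustrative
tunability_pct = (c_zero_bias - c_bias) / c_zero_bias * 100  # % change under DC bias

print(f"Rrms = {r_rms:.2f} nm, tunability = {tunability_pct:.1f} %")
```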