• Title/Summary/Keyword: Robustness performance

Search Result 1,706

Frequency Domain Double-Talk Detector Based on Gaussian Mixture Model (주파수 영역에서의 Gaussian Mixture Model 기반의 동시통화 검출 연구)

  • Lee, Kyu-Ho;Chang, Joon-Hyuk
    • The Journal of the Acoustical Society of Korea / v.28 no.4 / pp.401-407 / 2009
  • In this paper, we propose a novel method for cross-correlation based double-talk detection (DTD) that employs a Gaussian Mixture Model (GMM) in the frequency domain. The proposed algorithm transforms the cross-correlation coefficient used in the time domain into 16 channels in the frequency domain using the discrete Fourier transform (DFT). Seven of these channels are then selected as feature vectors for the GMM, and three regions, far-end, double-talk, and near-end speech, are identified through likelihood comparison based on those feature vectors. The presented DTD algorithm detects double-talk regions efficiently without the voice activity detector required by conventional cross-correlation based double-talk detection. The performance of the proposed algorithm is evaluated under various conditions and yields better results than the conventional schemes. In particular, it shows robustness against detection errors resulting from background noise or echo path changes, which are among the key issues in practical DTD.
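
As a rough illustration of the pipeline this abstract describes, here is a minimal Python sketch: the frequency-domain cross-correlation of each far-end/microphone frame is banded into 16 channels, and a frame is assigned to the region whose GMM yields the highest likelihood. The frame length, number of mixture components, and the omitted 7-channel feature-selection step are assumptions, not the paper's values.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

N_FFT, N_CHANNELS = 256, 16

def frame_features(far, mic):
    """Banded magnitude of the normalized cross-spectrum of one frame."""
    F, M = np.fft.rfft(far, N_FFT), np.fft.rfft(mic, N_FFT)
    coh = (F * np.conj(M)) / (np.abs(F) * np.abs(M) + 1e-12)
    bands = np.array_split(np.abs(coh), N_CHANNELS)
    return np.array([b.mean() for b in bands])

def train(labeled):  # {"far_end": X0, "double_talk": X1, "near_end": X2}
    """One GMM per region, fit on labeled training frames."""
    return {k: GaussianMixture(n_components=4).fit(X)
            for k, X in labeled.items()}

def detect(models, far, mic):
    """Classify one frame by maximum GMM log-likelihood."""
    x = frame_features(far, mic).reshape(1, -1)
    return max(models, key=lambda k: models[k].score(x))
```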

Estimating the tensile strength of geopolymer concrete using various machine learning algorithms

  • Danial Fakhri;Hamid Reza Nejati;Arsalan Mahmoodzadeh;Hamid Soltanian;Ehsan Taheri
    • Computers and Concrete / v.33 no.2 / pp.175-193 / 2024
  • Researchers have embarked on an active investigation into the feasibility of adopting alternative materials as a solution to the mounting environmental and economic challenges associated with traditional concrete-based construction materials, such as reinforced concrete. The examination of concrete's mechanical properties using laboratory methods is a complex, time-consuming, and costly endeavor. Consequently, the need for models that can overcome these drawbacks is urgent. Fortunately, the ever-increasing availability of data has paved the way for the utilization of machine learning methods, which can provide powerful, efficient, and cost-effective models. This study aims to explore the potential of twelve machine learning algorithms in predicting the tensile strength of geopolymer concrete (GPC) under various curing conditions. To fulfill this objective, 221 datasets, comprising tensile strength test results of GPC with diverse mix ratios and curing conditions, were employed. Additionally, a number of unseen datasets were used to assess the overall performance of the machine learning models. Through a comprehensive analysis of statistical indices and a comparison of the models' behavior with laboratory tests, it was determined that nearly all the models exhibited satisfactory potential in estimating the tensile strength of GPC. Nevertheless, the artificial neural network and support vector regression models demonstrated the highest robustness. Both the laboratory tests and the machine learning outcomes revealed that GPC composed of 30% fly ash and 70% ground granulated blast furnace slag, mixed with 14 M NaOH, and cured in an oven at 300°F for 28 days exhibited superior tensile strength.
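
A hedged sketch of the model-comparison setup described above, using scikit-learn stand-ins for the ANN and SVR models the study found most robust. The file name gpc_tensile.csv and the column layout are hypothetical placeholders, not the study's data.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_absolute_error

df = pd.read_csv("gpc_tensile.csv")          # 221 mix/curing records (assumed layout)
X = df.drop(columns=["tensile_strength"])    # mix ratios, NaOH molarity, curing temp/time
y = df["tensile_strength"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 32),
                                      max_iter=2000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: R2={r2_score(y_te, pred):.3f}  "
          f"MAE={mean_absolute_error(y_te, pred):.3f}")
```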

Robust Radiometric and Geometric Correction Methods for Drone-Based Hyperspectral Imaging in Agricultural Applications

  • Hyoung-Sub Shin;Seung-Hwan Go;Jong-Hwa Park
    • Korean Journal of Remote Sensing / v.40 no.3 / pp.257-268 / 2024
  • Drone-mounted hyperspectral sensors (DHSs) have revolutionized remote sensing in agriculture by offering a cost-effective and flexible platform for high-resolution spectral data acquisition. Their ability to capture data at low altitudes minimizes atmospheric interference, enhancing their utility in agricultural monitoring and management. This study focused on addressing the challenges of radiometric and geometric distortions in preprocessing drone-acquired hyperspectral data. Radiometric correction, using the empirical line method (ELM) and spectral reference panels, effectively removed sensor noise and variations in solar irradiance, resulting in accurate surface reflectance values. Notably, the ELM correction improved reflectance for measured reference panels by 5-55%, resulting in a more uniform spectral profile across wavelengths, further validated by high correlations (0.97-0.99), despite minor deviations observed at specific wavelengths for some reflectors. Geometric correction, utilizing a rubber sheet transformation with ground control points, successfully rectified distortions caused by sensor orientation and flight path variations, ensuring accurate spatial representation within the image. The effectiveness of geometric correction was assessed using root mean square error (RMSE) analysis, revealing minimal errors in both the east-west (0.00 to 0.081 m) and north-south (0.00 to 0.076 m) directions. The overall position RMSE of 0.031 m across 100 points demonstrates high geometric accuracy, exceeding industry standards. Additionally, image mosaicking was performed to create a comprehensive representation of the study area. These results demonstrate the effectiveness of the applied preprocessing techniques and highlight the potential of DHSs for precise crop health monitoring and management in smart agriculture. However, further research is needed to address challenges related to data dimensionality, sensor calibration, and reference data availability, as well as exploring alternative correction methods and evaluating their performance in diverse environmental conditions to enhance the robustness and applicability of hyperspectral data processing in agriculture.
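
The empirical line method mentioned above reduces, per band, to fitting a line that maps raw sensor digital numbers to the known reflectance of the reference panels; a minimal numpy sketch follows, with the two-panel calibration values purely illustrative.

```python
import numpy as np

def elm_fit(panel_dn, panel_reflectance):
    """panel_dn: (n_panels, n_bands) raw DNs averaged over each panel;
    panel_reflectance: (n_panels, n_bands) lab-measured reflectance."""
    n_bands = panel_dn.shape[1]
    gains, offsets = np.empty(n_bands), np.empty(n_bands)
    for b in range(n_bands):
        gains[b], offsets[b] = np.polyfit(panel_dn[:, b],
                                          panel_reflectance[:, b], 1)
    return gains, offsets

def elm_apply(cube, gains, offsets):
    """cube: (rows, cols, n_bands) raw image -> surface reflectance."""
    return cube * gains + offsets   # broadcasts over the band axis

# Illustrative two-panel calibration (e.g., 5% and 55% grey panels, 2 bands):
dn   = np.array([[120.0, 130.0], [900.0, 950.0]])
refl = np.array([[0.05, 0.05], [0.55, 0.55]])
g, o = elm_fit(dn, refl)          # per-band gain/offset for elm_apply()
```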

Effects of a new stirrup hook on the behavior of reinforced concrete beams

  • Zehra Sule Garip;Furkan Erdema
    • Structural Engineering and Mechanics / v.91 no.3 / pp.263-277 / 2024
  • The primary aim of this study is to introduce an innovative configuration for stirrup hooks in reinforced concrete beams and analyze the impact of factors such as stirrup spacing, placement, and hook length on the structural performance of reinforced concrete beam elements. A total of 18 specimens were produced and subjected to reversed cyclic loading, with two specimens serving as references and the remaining 16 specimens utilizing the specifically developed stirrup hook configuration. The experiment used reinforced concrete beams scaled down to half their original size. These beams were built with a shear span-to-depth ratio of 3 (a/d=3). The experimental samples were divided into two distinct groups. The first group comprises nine test specimens that consider the contribution of concrete to shear strength, while the second group consists of nine test specimens that do not consider this contribution. The reference beam specimens for both groups were prepared with standard hooks. The stirrup hooks in the test specimens are configured with a 90-degree angle positioned at the midpoint of the bottom section of the beam. The criteria considered in this study included the distance between hooks, hook angle, stirrup spacing, hook orientation, and hook length. In the experimental group accounting for the contribution of concrete to shear strength, the stirrup hooks of both the R1 reference specimen and certain test specimens displayed indications of opening. However, when the contribution of concrete to shear strength was not considered, none of the stirrup hooks in the R0 reference specimen or the test specimens showed any indications of opening. Neglecting the contribution of concrete in the assessment of shear strength yielded more favorable outcomes regarding structural robustness. The study found that the strength values obtained using the suggested alternative stirrup hook were similar to those of the reference specimens. Furthermore, all the test specimens successfully achieved the desired strengths.

Detection of Phantom Transaction using Data Mining: The Case of Agricultural Product Wholesale Market (데이터마이닝을 이용한 허위거래 예측 모형: 농산물 도매시장 사례)

  • Lee, Seon Ah;Chang, Namsik
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.161-177 / 2015
  • With the rapid evolution of technology, the size, number, and type of databases have increased concomitantly, so data mining approaches face many challenging applications. One such application is the discovery of fraud patterns from agricultural product wholesale transaction records. The agricultural product wholesale market in Korea is huge, and vast numbers of transactions are made every day. The demand for agricultural products continues to grow, and the use of electronic auction systems raises the efficiency of wholesale market operations. Certainly, the number of unusual transactions can be assumed to increase in proportion to the trading amount, and an unusual transaction is often the first sign of fraud. However, it is very difficult to identify and detect these transactions and the corresponding fraud in the agricultural product wholesale market because the types of fraud are more intelligent than ever before. Fraud can be detected by verifying the overall transaction records manually, but this requires a significant amount of human resources and is ultimately not a practical approach. Fraud can also be revealed by a victim's report or complaint, but there are usually no victims in agricultural product wholesale frauds because they are committed through collusion between an auction company and an intermediary wholesaler. Nevertheless, transaction records must be monitored continuously to prevent fraud, because fraud not only disturbs the fair trade order of the market but also rapidly reduces the market's credibility. Applying data mining to such an environment is very useful since it can properly discover unknown fraud patterns or features from a large volume of transaction data. The objective of this research is to empirically investigate the factors necessary to detect fraudulent transactions in an agricultural product wholesale market by developing a data mining based fraud detection model. One major fraud is the phantom transaction, a colluding transaction in which the seller (auction company or forwarder) and the buyer (intermediary wholesaler) pretend to fulfill a transaction by recording false data in the online transaction processing system without actually selling products, after which the seller receives money from the buyer. This leads to overstatement of sales performance and illegal money transfers, which reduce the credibility of the market. This paper reviews the environment of the wholesale market, including the types of transactions, the roles of market participants, and the various types and characteristics of fraud, and introduces the whole process of developing the phantom transaction detection model. The process consists of the following four modules: (1) data cleaning and standardization, (2) statistical data analysis such as distribution and correlation analysis, (3) construction of a classification model using a decision-tree induction approach, and (4) verification of the model in terms of hit ratio. We collected real data from six associations of agricultural producers in metropolitan markets. The final model, built with a decision-tree induction approach, revealed that the monthly average trading price of items offered by forwarders is a key variable in detecting phantom transactions. The verification procedure also confirmed the suitability of the results.
However, even though the performance of these results is satisfactory, sensitive issues still remain for improving classification accuracy and the conciseness of the rules. One such issue is the robustness of the data mining model. Data mining is very much data-oriented, so data mining models tend to be very sensitive to changes in data or situations. This non-robustness evidently requires continuous remodeling as data or situations change. We hope that this paper suggests valuable guidelines to organizations and companies that consider introducing or constructing a fraud detection model in the future.
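
A hedged sketch of module (3), the decision-tree induction step, together with module (4)'s hit-ratio check. The file name and feature columns, including the monthly average trading price the final model singled out, are illustrative stand-ins for the study's real variables.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.read_csv("wholesale_transactions.csv")   # assumed cleaned/standardized data
# Hypothetical features; categorical fields are assumed numerically encoded.
features = ["monthly_avg_price", "quantity", "unit_price", "forwarder_id"]
X, y = df[features], df["is_phantom"]            # 1 = phantom transaction

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=30).fit(X_tr, y_tr)

print(export_text(tree, feature_names=features)) # inspect the induced rules
print("hit ratio:", tree.score(X_te, y_te))      # module (4): verification
```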

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.43-62 / 2019
  • At one time, the anomaly detection field relied on determining whether an abnormality existed based on statistics derived from specific data. This methodology was possible because data were low-dimensional in the past, so classical statistical methods worked effectively. However, as the characteristics of data have grown complex in the era of big data, it has become more difficult to accurately analyze and predict the data generated throughout industry in the conventional way. Supervised learning algorithms such as SVM and decision trees were therefore adopted. However, a supervised model can only predict test data accurately when its class distribution matches that of the training data, and most data generated in industry have imbalanced classes, so the predicted results are not always valid when a supervised learning model is applied. To overcome these drawbacks, many studies now use unsupervised learning-based models that are not influenced by class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Schlegl et al. (2017), is a model built on convolutional neural networks that performs anomaly detection on medical images. In contrast, anomaly detection on sequence data using generative adversarial networks has received little research attention compared to image data. Li et al. (2018) proposed a model using LSTM, a type of recurrent neural network, to classify anomalies in numerical sequence data, but it was applied neither to categorical sequence data nor with the feature matching method of Salimans et al. (2016). This suggests ample room for studies on anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator consists of two stacked LSTM layers with 32-dim and 64-dim hidden units, and the discriminator uses an LSTM with a 64-dim hidden unit layer. Existing work on anomaly detection for sequence data derives anomaly scores from the entropy of the probability of the actual data; in this paper, as mentioned earlier, anomaly scores are instead derived using the feature matching technique. In addition, the process of optimizing the latent variables was designed with an LSTM to improve model performance. The modified generative adversarial model was more accurate than the autoencoder in all experiments in terms of precision and was approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it learns the data distribution from real categorical sequence data, it is not dominated by a single normal pattern, whereas the autoencoder is. In the robustness test, the accuracy of the autoencoder was 92% and that of the generative adversarial network was 96%; in terms of sensitivity, the autoencoder reached 40% and the generative adversarial network 51%.
Experiments were also conducted to show how much performance changes with differences in the structure used to optimize the latent variables. As a result, sensitivity improved by about 1%. These results offer a new perspective on latent variable optimization, which had previously received relatively little attention.
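
A simplified PyTorch sketch of the setup the abstract describes: an LSTM generator and discriminator, with the anomaly score taken as the feature-matching distance between discriminator features of a real sequence and of its best reconstruction under latent-variable optimization. The layer sizes follow the abstract; the sequence length, feature width, optimizer settings, and the plain (non-LSTM) latent optimizer are assumptions.

```python
import torch
import torch.nn as nn

SEQ_LEN, FEAT, Z_DIM = 20, 8, 16   # assumed dimensions

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.LSTM(Z_DIM, 32, batch_first=True)   # 32-dim hidden layer
        self.l2 = nn.LSTM(32, 64, batch_first=True)      # 64-dim hidden layer
        self.out = nn.Linear(64, FEAT)
    def forward(self, z):
        h, _ = self.l1(z)
        h, _ = self.l2(h)
        return self.out(h)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(FEAT, 64, batch_first=True)  # 64-dim hidden layer
        self.out = nn.Linear(64, 1)
    def features(self, x):          # intermediate features for matching
        h, _ = self.lstm(x)
        return h[:, -1]             # last hidden state
    def forward(self, x):
        return torch.sigmoid(self.out(self.features(x)))

def anomaly_score(G, D, x, steps=50, lr=0.05):
    """Optimize z so G(z) matches x in discriminator feature space,
    then use the residual feature distance as the anomaly score."""
    z = torch.randn(x.size(0), SEQ_LEN, Z_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (D.features(G(z)) - D.features(x)).pow(2).mean()
        loss.backward()
        opt.step()
    return loss.item()
```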

Gross Profitability Premium in the Korean Stock Market and Its Implication for the Fund Distribution Industry (한국 주식시장에서 총수익성 프리미엄에 관한 분석 및 펀드 유통산업에 주는 시사점)

  • Yoon, Bo-Hyun;Liu, Won-Suk
    • Journal of Distribution Science / v.13 no.9 / pp.37-45 / 2015
  • Purpose - This paper's aim is to investigate whether or not gross profitability explains the cross-sectional variation of stock returns in the Korean stock market. Gross profitability is an alternative profitability measure proposed by Novy-Marx in 2013 to predict the cross-sectional variation of stock returns in the US. He shows that gross profitability adds explanatory power to the Fama-French 3 factor model. Interestingly, gross profitability is negatively correlated with the book-to-market ratio. By confirming the gross profitability premium in the Korean stock market, we may provide some implications regarding the well-known value premium. In addition, our empirical results may provide opportunities for the fund distribution industry to promote brand new styles of funds. Research design, data, and methodology - For our empirical analysis, we collect monthly market prices of all the companies listed on the Korea Composite Stock Price Index (KOSPI) of the Korea Exchange (KRX). Our sample period covers July 1994 to December 2014. The data from the company financial statements are provided by the financial information company WISEfn. First, using Fama-MacBeth cross-sectional regression, we investigate the relation between gross profitability and stock return performance. For robustness in analyzing the performance of the gross profitability strategy, we consider value weighted portfolio returns as well as equally weighted portfolio returns. Next, using Fama-French 3 factor models, we examine whether or not the gross profitability strategy generates excess returns when firm size and the book-to-market ratio are controlled. Finally, we analyze the effect of firm size and the book-to-market ratio on the gross profitability strategy. Results - First, through the Fama-MacBeth cross-sectional regression, we show that gross profitability has almost the same explanatory power as the book-to-market ratio in explaining the cross-sectional variation of the Korean stock market. Second, we find evidence that gross profitability is a statistically significant variable for explaining cross-sectional stock returns when the size and the value effect are controlled. Third, we show that gross profitability, which is positively correlated with stock returns and firm size, is negatively correlated with the book-to-market ratio. From the perspective of portfolio management, our results imply that since the gross profitability strategy is a distinctive growth strategy, value strategies can be improved by hedging with the gross profitability strategy. Conclusions - Our empirical results confirm the existence of a gross profitability premium in the Korean stock market. From the perspective of the fund distribution industry, the gross profitability portfolio is worthy of attention. Since the value strategy portfolio returns are negatively correlated with the gross profitability strategy portfolio returns, by mixing both portfolios, investors could be better off without additional risk. However, the profitable firms are dissimilar from the value firms (high book-to-market ratio firms); therefore, an alternative factor model including gross profitability may help us understand the economic implications of well-known anomalies such as the value premium, momentum, and low volatility. We reserve these topics for future research.
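
For readers unfamiliar with the Fama-MacBeth procedure used above, a schematic Python sketch follows: each month, returns are regressed cross-sectionally on gross profitability, size, and book-to-market, and the time series of monthly slope estimates is then t-tested for a nonzero mean. The file and column names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

panel = pd.read_csv("kospi_panel.csv")   # month, stock, ret, gp, log_size, bm (assumed)

slopes = []
for month, g in panel.groupby("month"):
    X = sm.add_constant(g[["gp", "log_size", "bm"]])   # cross-sectional regressors
    slopes.append(sm.OLS(g["ret"], X).fit().params)    # monthly slope estimates

slopes = pd.DataFrame(slopes)
t_stats = slopes.mean() / (slopes.std(ddof=1) / np.sqrt(len(slopes)))
print(pd.DataFrame({"mean": slopes.mean(), "t": t_stats}))
```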

Seabed Sediment Feature Extraction Algorithm using Attenuation Coefficient Variation According to Frequency (주파수에 따른 감쇠계수 변화량을 이용한 해저 퇴적물 특징 추출 알고리즘)

  • Lee, Kibae;Kim, Juho;Lee, Chong Hyun;Bae, Jinho;Lee, Jaeil;Cho, Jung Hong
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.111-120 / 2017
  • In this paper, we propose a novel feature extraction algorithm for the classification of seabed sediments. In previous research, the acoustic reflection coefficient, which is constant with respect to frequency, has been used to classify seabed sediments. However, the attenuation of seabed sediment is a function of frequency and is in general highly influenced by sediment type. Hence, we developed a feature vector using the attenuation variation with respect to frequency. The attenuation variation is obtained using the signal reflected from the second sediment layer, generated by a broadband chirp. The proposed feature vector has an advantage in the number of dimensions for classifying seabed sediment over the classical scalar feature (the reflection coefficient). To compare the proposed feature with the classical scalar feature, the dimension of the proposed feature vector is reduced using linear discriminant analysis (LDA). Synthesized acoustic amplitudes reflected by seabed sediments are generated using the Biot model, and the performance of the proposed feature is evaluated using Fisher scoring and the classification accuracy computed by maximum likelihood decision (MLD). As a result, the proposed feature shows higher discrimination performance and more robustness against measurement errors than the classical feature.
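
A hedged numpy/scikit-learn sketch of the proposed feature as described: a per-frequency attenuation curve is estimated from the spectral ratio of the first- and second-layer chirp echoes and then compressed with LDA for comparison against the scalar reflection coefficient. The spectral-ratio formulation and geometry terms are my assumptions, not the paper's exact derivation.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def attenuation_feature(echo1, echo2, layer_thickness, n_fft=512):
    """Attenuation (dB per unit length) across the chirp band, estimated
    from the magnitude ratio of the two layer echoes."""
    E1 = np.abs(np.fft.rfft(echo1, n_fft)) + 1e-12
    E2 = np.abs(np.fft.rfft(echo2, n_fft)) + 1e-12
    # Two-way travel through the first sediment layer:
    return -20.0 * np.log10(E2 / E1) / (2.0 * layer_thickness)

def reduce_and_fit(X, y):
    """X: (n_pings, n_freq_bins) attenuation curves, y: sediment labels.
    LDA projects each curve onto a low-dimensional discriminant axis."""
    lda = LinearDiscriminantAnalysis(n_components=1)
    return lda.fit(X, y)
```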

Precise Rectification of Misaligned Stereo Images for 3D Image Generation (입체영상 제작을 위한 비정렬 스테레오 영상의 정밀편위수정)

  • Kim, Jae-In;Kim, Tae-Jung
    • Journal of Broadcast Engineering / v.17 no.2 / pp.411-421 / 2012
  • The stagnant growth of the 3D market due to a shortage of 3D movie content is encouraging the development of techniques for production cost reduction. Eliminating the vertical disparity generated during image acquisition demands the most time and effort in the whole stereoscopic film-making process. This matter is directly related to competitiveness in the market and is treated as a very important task. The removal of vertical disparity, i.e., image rectification, has long been studied in the photogrammetry field. While computer vision methods focus on fast processing and automation, photogrammetry methods focus on accuracy and precision. However, photogrammetric approaches have not been tried for 3D film-making. In this paper, we propose a photogrammetry-based rectification algorithm that eliminates vertical disparity precisely by reconstructing the geometric relationship at the time of shooting. The proposed algorithm was evaluated by comparing its performance with two existing computer vision algorithms, testing epipolar constraint satisfaction, epipolar line accuracy, and the vertical disparity of the resulting images. As a result, the proposed algorithm outperformed the other algorithms in terms of accuracy and precision, and also proved robust to position errors of the tie-points.
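
The paper's photogrammetric algorithm is not spelled out in the abstract, so the sketch below instead shows an uncalibrated computer-vision rectification of the kind it was compared against, plus the tie-point vertical-disparity measure used for evaluation, using OpenCV.

```python
import cv2
import numpy as np

def rectify_uncalibrated(img_l, img_r, pts_l, pts_r):
    """pts_l, pts_r: (N, 2) float32 tie-point coordinates."""
    F, _ = cv2.findFundamentalMat(pts_l, pts_r, cv2.FM_RANSAC)
    h, w = img_l.shape[:2]
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts_l, pts_r, F, (w, h))
    rect_l = cv2.warpPerspective(img_l, H1, (w, h))
    rect_r = cv2.warpPerspective(img_r, H2, (w, h))
    return rect_l, rect_r, H1, H2

def vertical_disparity(pts_l, pts_r, H1, H2):
    """Mean |dy| of tie points after applying the rectifying homographies."""
    p1 = cv2.perspectiveTransform(pts_l.reshape(-1, 1, 2), H1).reshape(-1, 2)
    p2 = cv2.perspectiveTransform(pts_r.reshape(-1, 1, 2), H2).reshape(-1, 2)
    return np.mean(np.abs(p1[:, 1] - p2[:, 1]))
```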

Design of spectrum spreading technique applied to DVB-S2

  • Kim, Pan-Soo;Chang, Dae-Ig;Lee, Ho-Jin
    • Journal of Satellite, Information and Communications / v.2 no.2 / pp.22-28 / 2007
  • Spectrum spreading, in its general form, can be conceived as an artificial expansion of the signal bandwidth with respect to the minimum Nyquist band required to transmit the desired information. Spreading can serve several objectives, including resilience to interference and jammers and reduction of power spectral density levels. In this paper, signal spreading is mainly used to increase the received energy, thus satisfying link budget constraints for terminals with low-aperture antennas without increasing the transmitted EIRP. As a matter of fact, in many mobile scenarios the link budget cannot be closed even when MODCOD configurations with very low spectral efficiency (e.g., QPSK 1/4) in the DVB-S2 standard are used. Spectrum spreading has recently been proposed as a technique to improve system performance without introducing additional MODCOD configurations, under the constraint of a fixed power spectral density level at the transmitter side. To this aim, the design of spectrum spreading techniques must take into consideration requirements such as the spectrum mask, physical layer performance, link budget, hardware reuse, robustness, complexity, and backward compatibility with existing commercial equipment. The proposed implementation allows full reuse of the standard DVB-S2 circuitry and is inserted as an 'inner layer' in the standard DVB-S2 chain.
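
As a toy illustration of the "inner layer" idea, the numpy sketch below direct-sequence spreads complex symbols over SF chips and despreads them at the receiver, widening the occupied band while letting the receiver recover processing gain. The spreading factor and PN generation are illustrative assumptions, not the paper's design.

```python
import numpy as np

SF = 4                                         # spreading factor (assumed)
rng = np.random.default_rng(0)
pn = rng.choice([-1.0, 1.0], size=SF)          # chip sequence, known to both ends

def spread(symbols):
    """(N,) complex symbols -> (N*SF,) chips; for fixed total power the PSD
    drops ~10*log10(SF) dB because it spreads over SF times the bandwidth."""
    return (symbols[:, None] * pn).ravel() / np.sqrt(SF)

def despread(chips):
    blocks = chips.reshape(-1, SF)
    return blocks @ pn / np.sqrt(SF)           # coherent chip combining

qpsk = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
tx = spread(qpsk)
assert np.allclose(despread(tx), qpsk)         # despreading recovers the symbols
```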
